I'm trying to code a high performance reverse proxy server using Netty 4.1.
I based my code on a Java adaptation of Feng-Zihao/protox and the Netty Proxy Example.
I first had some trouble handling 100-CONTINUE but adding the HttpObjectAggregator into my pipeline kinda solved that.
serverBootstrap
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.DEBUG))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new LoggingHandler(LogLevel.DEBUG));
ch.pipeline().addLast(new HttpRequestDecoder());
ch.pipeline().addLast(new HttpResponseEncoder());
ch.pipeline().addLast(new HttpObjectAggregator(1048576));
ch.pipeline().addLast(new FrontendHandler());
}
})
// .option(ChannelOption.SO_REUSEADDR, true)
// .option(ChannelOption.SO_BACKLOG, 128)
// .childOption(ChannelOption.SO_KEEPALIVE, true)
.childOption(ChannelOption.AUTO_READ, false)
.bind(port).sync();
On the client side, the request hangs indefinitely.
The thing is, AUTO_READ being set to false seems to prevent the HttpObjectAggregator from doing its work, and my FrontendHandler only ever receives the channelActive event but never channelRead.
It seems, though, that I need it to make sure I don't get into some race condition between the reads and the remote peer connection.
FYI, my goal in the end is to decide whether or not to forward the request based on a filter (probably a new handler right before my FrontendHandler) that will need to read the full HTTP content.
Am I missing something here?
Turn on auto read when your outbound channel becomes active, and have your FrontendHandler turn it off while processing each message. Then turn it on again when you are ready to handle another message.
This will let HttpObjectAggregator keep reading as many messages as it needs to in order to create a FullHttpMessage, and then stop sending it messages while your FrontendHandler is processing or waiting on some client write to invoke a listener.
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    // stop reading further messages while this one is being processed
    ctx.channel().config().setAutoRead(false);
    ...
    // re-enable reads once processing is done
    // (this call should probably be in some event listener)
    ctx.channel().config().setAutoRead(true);
}
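On the outbound side, a minimal sketch of the "turn it back on when the backend is up" part (here inboundChannel stands for the frontend channel that was handed to the backend handler when it was created):

@Override
public void channelActive(ChannelHandlerContext ctx) {
    // the connection to the remote peer is ready, so let the frontend start reading
    inboundChannel.config().setAutoRead(true);
}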
I have an issue using NettyJaxrsServer with SSL. The first request succeeds and subsequent ones time out. Typically, if I use curl to send an HTTPS request, only the first call passes. I checked with Wireshark to see what is happening: the server performs the handshake only for the first request. For the second request the client sends a "Hello" message, and the server does not continue the handshake. It seems like the server is keeping an SSL session even after the client disconnects.
I checked the code: for each client connection, a new SslHandler is inserted into the client pipeline, but it uses the same SSLEngine.
final SSLEngine engine = sslContext.createSSLEngine();
engine.setUseClientMode(false);
bootstrap.group(eventLoopGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addFirst(new SslHandler(engine));
ch.pipeline().addLast(channelHandlers.toArray(new ChannelHandler[channelHandlers.size()]));
ch.pipeline().addLast(new HttpRequestDecoder());
ch.pipeline().addLast(new HttpObjectAggregator(maxRequestSize));
ch.pipeline().addLast(new HttpResponseEncoder());
ch.pipeline().addLast(new RestEasyHttpRequestDecoder(dispatcher.getDispatcher(), root, RestEasyHttpRequestDecoder.Protocol.HTTPS));
ch.pipeline().addLast(new RestEasyHttpResponseEncoder());
ch.pipeline().addLast(eventExecutor, new RequestHandler(dispatcher));
}
})
.option(ChannelOption.SO_BACKLOG, backlog)
.childOption(ChannelOption.SO_KEEPALIVE, true);
From Netty documentation, I understand that a new SSLEngine should be used for each new client connection:
Restarting the session
To restart the SSL session, you must remove the existing closed SslHandler from the ChannelPipeline, insert a new SslHandler with a new SSLEngine into the pipeline, and start the handshake process as described in the first section.
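If I read that correctly, the engine creation should move inside initChannel so that every accepted connection gets its own engine; a sketch of what I would expect (not tested):

.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        // create a fresh SSLEngine per accepted connection instead of reusing one
        SSLEngine engine = sslContext.createSSLEngine();
        engine.setUseClientMode(false);
        ch.pipeline().addFirst(new SslHandler(engine));
        // ... rest of the pipeline as above
    }
})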
Can someone explain this behavior, or is this a bug in the Netty RestEasy server?
Used version: resteasy-netty4 3.0.11.Final
Good morning,
in the java.sun (com.sun.net.httpserver) version of HTTP servers we used to do this to create contexts and different handlers:
server = HttpServer.create(new InetSocketAddress(PORT), 0);
server.createContext("/##getPersonalImage", new PersonalImageHandler());
server.createContext("/##getProfile", new ProfileGetter());
and then you could reach it by typing
127.0.0.1:15000/##getProfile
But in Netty, I think I have searched everything in the examples etc., and I haven't seen contexts created like this. Is this some sort of deprecated method, or what?
Could you please help me achieve this sort of context in Netty too? Thanks in advance.
Netty works in this fashion.
You set up the server and/or client, and when you set the server up you add handlers through a ChannelInitializer. You can also add or remove handlers on the fly, but this is not always recommended as it can be costly.
When you need to pass data in or out that is not related to the network data you read, you can take several approaches, such as extending the handlers and adding a field where you can access or put data, or using ChannelAttributes.
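For the per-path contexts from the question, the usual pattern is a single inbound handler that dispatches on the request URI. A rough sketch (handler name and branches are illustrative; it assumes an HttpServerCodec and HttpObjectAggregator sit earlier in the pipeline):

public class ContextRouterHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        String uri = request.uri();
        if (uri.startsWith("/##getProfile")) {
            // handle the profile request (or delegate to another object)
        } else if (uri.startsWith("/##getPersonalImage")) {
            // handle the image request
        } else {
            // nothing registered for this path
            ctx.writeAndFlush(new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_FOUND))
               .addListener(ChannelFutureListener.CLOSE);
        }
    }
}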
Their tutorials and examples are definitely helpful when building things out. I will comment on their example below and explain; I hope that is helpful.
From their User Guide
Channels
Client Code
package io.netty.example.time;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
public class TimeClient {
public static void main(String[] args) throws Exception {
String host = args[0];
int port = Integer.parseInt(args[1]);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(workerGroup);
b.channel(NioSocketChannel.class);
b.option(ChannelOption.SO_KEEPALIVE, true);
b.handler(new ChannelInitializer<SocketChannel>() { //** This is the ChannelInitializer - The Channel is the nexus basically to communications, you add handlers to the channel in the order of how data is handled
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeClientHandler()); //** Here we are adding TimeClient Handler to the SocketChannel seen below, there are many ways to add handlers
}
});
// Start the client.
ChannelFuture f = b.connect(host, port).sync();
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
}
}
}
Handler Code
package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.Date;
public class TimeClientHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf m = (ByteBuf) msg; // Here we are getting the message
try {
long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L; //** We will always have to write the logic unless there is already a netty handler for it, but even then you may or probably will have to implement business logic handler specific to your application(s)
System.out.println(new Date(currentTimeMillis));
ctx.close(); //** you can close a connection on a channel or a ChannelHandlerContext
} finally {
m.release(); //** Here you have to release the buffer
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
So if you want to be able to reach out, when you construct the handler you can add your own fields. For the attributes method, see the ChannelHandler Javadoc for examples.
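A minimal sketch of the attribute approach (the Session type and the key name are just placeholders):

// define the key once, e.g. as a static field shared by your handlers
static final AttributeKey<Session> SESSION_KEY = AttributeKey.valueOf("session");

// in one handler, attach state to the channel...
ctx.channel().attr(SESSION_KEY).set(new Session());

// ...and read it back later from the same or another handler on that channel
Session session = ctx.channel().attr(SESSION_KEY).get();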
EDIT: Well, yes, there are some Netty handlers for IP-specific filtering, but I am not sure about them specifically. I am also not sure what you're trying to do, as I do not know the other library you mentioned. An idea of how I use Netty may help you: I have an MMO-style game. When a client connects, it is over TCP w/SSL, and in the handler I create a Session class that tracks all their information. I then prompt the client, through my own network protocol, to open another connection to the server using TCP w/o SSL and add that to their Session. Then I negotiate whether they can receive UDP, and if so I build a specific UDP handler for them and attach it to the Session. Each Session has its own instance of the handlers, which allows me to read and write from one channel to another and to handle that person, and each Session also references its handlers, channels, and connection data. I also have a file server built on HTTP and a POST server built in Netty; the client uses only native Java, hence I used a web server to avoid initial dependencies.
I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and then a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxyHandler (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            ...
Meaning that the incoming message is only written if the outbound client connection is already ready. This is obviously not ideal in an HTTP proxy case, so I am thinking about what the best way to handle it would be.
I am wondering whether disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then enable autoRead over the child socket again in the "channelActive" event of the backend handler. However, I am not sure how many messages I would get in the handler for each "read()" invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger the read() again and enable autoRead over the channel).
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
pool.acquire(request, backendPromise);
backendPromise.addListener((FutureListener<Channel>) future -> {
Channel c = future.get();
if (!currentBackend.compareAndSet(null, c)) {
pool.release(c);
throw new IllegalStateException();
}
});
}
and then do the copying from input to output through that promise, e.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
doInBackend(c -> {
c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
future.channel().read();
} else {
pool.release(c);
frontCtx.close();
}
});
});
}
private void doInBackend(Consumer<Channel> action) {
Channel c = currentBackend.get();
if (c == null) {
backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
} else {
action.accept(c);
}
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed in the right thread... right now I'm using:
backendPromise = group.next().<Channel> newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?
(where group is the same eventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to have the "else { }" optimization in the "doInBackend" method, which avoids using the Promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
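Simplified, the idea looks roughly like this (a sketch, not my actual code): keep a CompletableFuture<Channel> that completes when the backend channel is acquired, and chain every forwarded message onto it so the writes stay ordered even while the connection is still being established.

private CompletableFuture<Channel> backendFuture;

private void acquireBackend(ChannelHandlerContext frontCtx, HttpRequest request) {
    backendFuture = new CompletableFuture<>();
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        if (future.isSuccess()) {
            backendFuture.complete(future.getNow());
        } else {
            backendFuture.completeExceptionally(future.cause());
            frontCtx.close();
        }
    });
}

private void forward(HttpObject msg) {
    // thenApply runs right away if the channel is already there, or later when
    // the acquisition completes; either way the chained writes keep their order
    backendFuture = backendFuture.thenApply(ch -> {
        ch.writeAndFlush(msg);
        return ch;
    });
}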
I have worked on a similar proxy application based on the MQTT protocol; it was basically used to create a real-time chat application. The application I had to design, however, was asynchronous in nature, so I naturally did not face any such problem, because when
outboundChannel.isActive() == false
I can simply keep the messages in a queue or a persistent DB and process them once the outboundChannel is up. However, since you are talking about an HTTP application, the application is synchronous in nature, meaning that the client cannot keep on sending packets until the outboundChannel is up and running. So the option you suggest is that the packet will only be read once the channel is active, and you can manually handle the message reads by disabling auto read in the ChannelConfig.
However, what I would like to suggest is that you check whether the outboundChannel is active or not. In case the channel is active, send the packet forward for processing. In case the channel is not active, you should reject the packet by sending back an error response, similar to a 404.
Along with this, you should configure your client to keep retrying the packets at certain intervals, and decide what needs to be done if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern; you should let Netty handle that for you in the most efficient way.
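A rough sketch of that active-check (names are illustrative, and I am using a 503 here rather than a 404):

private void forwardOrReject(ChannelHandlerContext frontCtx, FullHttpRequest request) {
    if (outboundChannel != null && outboundChannel.isActive()) {
        // backend is up, pass the packet on for processing
        outboundChannel.writeAndFlush(request);
    } else {
        // backend not ready: reject with an error response and close the connection
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE);
        response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
        frontCtx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}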
I'm building a POC with Netty 4, just a basic client/server setup; my question is how to effectively share state across threads within the server process itself, let me explain...
Server
The server is nothing fancy; standard logic for accepting remote connections using boss and workers event loop groups (code not shown).
Where it gets interesting
The server supports pluggable modules that provide metrics and group management services. The modules run autonomously as child threads, periodically generating information important for the server to function properly; this is where I'm unsure what to do; how to get info produced from the thread modules to the server process in a "netty" way.
Naive Approach
Local VM Channel
Looking at the LocalEcho example, it appears that LocalChannel and LocalServerChannel provide VM (in-memory) communication. I was expecting such channels to be easier to set up, but here goes:
//in-memory server
EventLoopGroup serverGroup = new DefaultEventLoopGroup();
ServerBootstrap svm = new ServerBootstrap();
svm.group(serverGroup)
.channel(LocalServerChannel.class)
.handler(new ChannelInitializer<LocalServerChannel>() {
@Override
public void initChannel(LocalServerChannel ch) throws Exception {
ch.pipeline().addLast(new LoggingHandler(LogLevel.INFO));
}
})
.childHandler(new ChannelInitializer<LocalChannel>() {
@Override
public void initChannel(LocalChannel ch) throws Exception {
ch.pipeline().addLast(new EventsFromModule_Handler());
}
});
//start server vm
svm.bind(addr).sync();
//in-memory client
EventLoopGroup clientGroup = new NioEventLoopGroup();
Bootstrap cvm = new Bootstrap();
cvm.group(clientGroup)
.channel(LocalChannel.class)
.handler(new ChannelInitializer<LocalChannel>() {
@Override
public void initChannel(LocalChannel ch) throws Exception {
ch.pipeline().addLast(
new DefaultModuleHandler());
}
});
Connect Modules
Here's where I create and fork the modules; first I get a channel from the client VM and pass it to the module...
Channel mch = cvm.connect(addr).sync().channel();
ModuleFactory.fork(
new DiscoveryService(mch)
);
...now the module has a dedicated channel for generating events; such events will be handled by EventsFromModule_Handler(), bridging the gap between modules and the (server) process.
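(The idea being that a module can then publish from its own thread by simply writing to that channel; as far as I understand, writeAndFlush can be called from outside the event loop. ModuleEvent below is just a placeholder for whatever message type the pipeline's handlers expect.)

// from the module's own worker thread
mch.writeAndFlush(new ModuleEvent("discovery-update"));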
Question
I'm very new to netty, but is using LocalChannels in this context a valid approach?
Am I doing something entirely wrong?
I have a ServerBootstrap configured with a fairly standard Http-Codec ChannelInitializer.
On shutdown my server waits for a grace period where it can still handle incoming requests. My server supports keep-alive, but on shutdown I want to make sure every HttpResponse sent closes the connection with HTTP header "Connection: close" and that the channel is closed after the write. This is only necessary on server shutdown.
I have a ChannelHandler to support that:
@ChannelHandler.Sharable
public class CloseConnectionHandler extends ChannelOutboundHandlerAdapter {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
// only touch actual responses; any HttpContent chunks pass through untouched
if (msg instanceof HttpResponse) {
HttpResponse response = (HttpResponse) msg;
if (isKeepAlive(response)) {
setKeepAlive(response, false);
promise.addListener(ChannelFutureListener.CLOSE);
}
}
ctx.write(msg, promise);
}
}
I keep track of all connected clients using a ChannelGroup, so I can dynamically modify the pipeline of each client at the point of shutdown to include my CloseConnectionHandler; this works no problem.
However, new connections during the grace period have their pipeline configured by the original ServerBootstrap ChannelInitializer, and I can't see a way of dynamically re-configuring that.
As a work-around I can have the CloseConnectionHandler configured in the standard pipeline and turned off with a boolean, only activating it on shutdown. But I'd rather avoid that if possible; it seems a bit unnecessary.
There is currently no way to "replace" the initializer at run-time, so using a flag etc. would be the best bet.
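For example, a sketch of that flag variant (the shutdown flag and its setter are made up here); since the handler is @Sharable and installed once by the ChannelInitializer, flipping the flag affects existing connections and any new ones accepted during the grace period:

@ChannelHandler.Sharable
public class CloseConnectionHandler extends ChannelOutboundHandlerAdapter {

    // flipped once at the start of the shutdown grace period
    private volatile boolean shuttingDown;

    public void startShutdown() {
        shuttingDown = true;
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (shuttingDown && msg instanceof HttpResponse) {
            // force "Connection: close" and close the channel after the write completes
            HttpResponse response = (HttpResponse) msg;
            HttpUtil.setKeepAlive(response, false);
            promise.addListener(ChannelFutureListener.CLOSE);
        }
        ctx.write(msg, promise);
    }
}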