Decoder, Encoder, ServerHandler pipeline in netty - java

Looking at the docs, I see this:
https://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
A user is supposed to have one or more ChannelHandlers in a pipeline
to receive I/O events (e.g. read) and to request I/O operations (e.g.
write and close). For example, a typical server will have the
following handlers in each channel's pipeline, but your mileage may
vary depending on the complexity and characteristics of the protocol
and business logic:
Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java
object.
Protocol Encoder - translates a Java object into binary data.
Business Logic Handler - performs the actual business logic (e.g.
database access).

And it could be represented as shown in the following example:

static final EventExecutorGroup group = new DefaultEventExecutorGroup(16);
...
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("decoder", new MyProtocolDecoder());
pipeline.addLast("encoder", new MyProtocolEncoder());

// Tell the pipeline to run MyBusinessLogicHandler's event handler methods
// in a different thread than an I/O thread so that the I/O thread
// is not blocked by a time-consuming task.
// If your business logic is fully asynchronous or finished very quickly,
// you don't need to specify a group.
pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
In a lot of the examples on GitHub I see this same pattern. I was wondering if someone can explain why the businessHandler is not between the Decoder and the Encoder. I would have thought that you get your POJO, then do work on it in the business handler, then encode it.

The decoder and encoder are usually at the beginning of the pipeline because of the order in which handlers are called: for incoming data the handlers run in the order they were added (bottom-up from the socket, in the javadoc diagram), and for outgoing data in the reverse order (top-down toward the socket).
E.g.
pipeline.addLast(new MyEncoder());
pipeline.addLast(new MyDecoder());
pipeline.addLast(new MyBusiness());
In this case, the call order for incoming data is: MyDecoder (transforming the data into a POJO) -> MyBusiness (the encoder is not called for the incoming stream), and for outgoing data: MyBusiness -> MyEncoder (the decoder is not called for the outgoing stream).
If you receive something in the business handler (actually the POJOs produced by the decoder), work on it, and write it back, it looks as though MyBusiness sits between the decoder and the encoder, because the data turns around and flows back out through the encoder.
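As a minimal sketch of that last point (MyPojo and the process() step are hypothetical placeholders, not from the original answer), the business handler just writes its result back through the context, and the outbound message flows to MyEncoder:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// MyPojo stands in for whatever type MyDecoder produces (hypothetical)
class MyPojo {}

public class MyBusiness extends SimpleChannelInboundHandler<MyPojo> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, MyPojo msg) {
        // msg was produced by MyDecoder from the incoming bytes
        MyPojo response = process(msg);
        // the write travels back through the pipeline and is encoded by MyEncoder
        ctx.writeAndFlush(response);
    }

    private MyPojo process(MyPojo msg) {
        return msg; // placeholder business logic
    }
}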

Of course the business handler is between the decoder and the encoder. Take the Factorial example:
public void initChannel(SocketChannel ch) {
    ChannelPipeline pipeline = ch.pipeline();
    if (sslCtx != null) {
        pipeline.addLast(sslCtx.newHandler(ch.alloc()));
    }

    // Enable stream compression (you can remove these two if unnecessary)
    pipeline.addLast(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
    pipeline.addLast(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));

    // Add the number codec first,
    pipeline.addLast(new BigIntegerDecoder());
    pipeline.addLast(new NumberEncoder());

    // and then business logic.
    // Please note we create a handler for every new channel
    // because it has stateful properties.
    pipeline.addLast(new FactorialServerHandler());
}
Though in initChannel the pipeline adds the encoder and the decoder first and the handler last, the execution flow for a request is actually decoder, handler, encoder.
Each handler (decoder, business handler, encoder) is wrapped in an AbstractChannelHandlerContext, and Netty keeps these contexts in a linked list arranged decoder context -> handler context -> encoder context. Execution follows that list.

In fact, if you add 1. decoder, 2. businessHandler, 3. encoder in your server and you call ctx.channel().writeAndFlush() or ctx.pipeline().writeAndFlush(), the encoder will still be called. That is because in this case the search starts from the tail of the pipeline and walks backward looking for the previous outbound handler. However, if you call ctx.writeAndFlush(), the search for the previous outbound handler starts from the businessHandler's position instead.
Add a breakpoint on the first line of findContextOutbound() in AbstractChannelHandlerContext and you will see it:
private AbstractChannelHandlerContext findContextOutbound(int mask) {
    AbstractChannelHandlerContext ctx = this;
    EventExecutor currentExecutor = executor();
    do {
        // walk toward the head of the pipeline...
        ctx = ctx.prev;
        // ...until a context that handles outbound events is found
    } while (skipContext(ctx, currentExecutor, mask, MASK_ONLY_OUTBOUND));
    return ctx;
}
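As a minimal illustration of the difference (the handler and message types are hypothetical), assuming the pipeline order decoder -> BusinessHandler -> encoder:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class BusinessHandler extends SimpleChannelInboundHandler<String> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        // Starts the outbound search at the tail of the pipeline, so the
        // encoder added after this handler IS found and invoked.
        ctx.channel().writeAndFlush("response");

        // Would start the outbound search at this handler's position and
        // walk toward the head, skipping an encoder added after it:
        // ctx.writeAndFlush("response");
    }
}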

Related

Constructing HTTP Chunks from ByteBuf in Netty 4

I have a Netty 3 codebase with an HTTP message decoder which extends ReplayingDecoder, and I need to migrate code that analyzes the chunks of the message, which means I need to get at the chunks.
protected Object decode(ChannelHandlerContext ctx, Channel channel,
                        ByteBuf buffer, State state) {
    //...
    HttpChunk chunk = new DefaultHttpChunk(buffer.readBytes(toRead));
    //...
}
From what I've gathered, I need to use HttpChunkedInput instead, but creating one is surprisingly difficult.

//requires an InputStream...
HttpChunkedInput hc = new HttpChunkedInput(new ChunkedStream(...));

//but this seems really clunky/too awkward to be right.
HttpChunkedInput hc = new HttpChunkedInput(new ChunkedStream(new ByteArrayInputStream(buffer.array())));

ByteBuf doesn't seem to have a way to dump out a stream directly. Am I missing a util class API that would be better? I did find there's a new EmbeddedChannel class which can simply readInbound() like here, but I'm not sure I should be changing the types just for that, or casting the bare Channel to an EmbeddedChannel to get out of the problem.
Netty 4.x onwards comes with an out-of-the-box HTTP codec which, unless you use an HTTP aggregation handler, will give you the HTTP chunks as HttpContent objects.
This example shows how to write a handler that receives such chunks:
https://github.com/netty/netty/blob/a329857ec20cc1b93ceead6307c6849f93b3f101/example/src/main/java/io/netty/example/http/snoop/HttpSnoopServerHandler.java#L60
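For instance, a minimal chunk-consuming handler might look like this (a sketch, assuming an HttpRequestDecoder, and no HttpObjectAggregator, sits earlier in the pipeline):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.LastHttpContent;

public class ChunkHandler extends SimpleChannelInboundHandler<HttpObject> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        if (msg instanceof HttpRequest) {
            // the headers arrived; the body follows as HttpContent chunks
        } else if (msg instanceof HttpContent) {
            HttpContent chunk = (HttpContent) msg;
            // analyze chunk.content() (a ByteBuf) here
            if (msg instanceof LastHttpContent) {
                // end of the message body
            }
        }
    }
}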

How to send a message inside a netty decoder to the next handler/decoder in the pipeline?

My channel pipeline contains several decoders, all of them operating on TextWebSocketFrame messages. Now my problem is that I have to choose the right decoder based on some content of the message.
Essentially, I have to parse a certain field in the message and then decide whether I want to proceed handling the message or pass it to the next decoder/handler.
Most people suggest using a single decoder to decode all messages in such a case, but my problem is that some decoders are added dynamically, and it would be a mess to put all the logic in a single decoder.
Currently the code looks like this:
@Override
protected void decode(ChannelHandlerContext ctx, TextWebSocketFrame msg, List<Object> out) throws Exception {
    String messageAsJson = msg.text();
    JsonObject jsonObject = JSON_PARSER.fromJson(messageAsJson, JsonObject.class);
    JsonObject messageHeader = jsonObject.getAsJsonObject(MESSAGE_HEADER_FIELD);
    String serviceAsString = messageHeader.get(MESSAGE_SERVICE_FIELD).getAsString();
    String inboundTypeAsString = messageHeader.get(MESSAGE_TYPE_FIELD).getAsString();
    Service service = JSON_PARSER.fromJson(serviceAsString, Service.class);
    InboundType inboundType = JSON_PARSER.fromJson(inboundTypeAsString, InboundType.class);
    if (service == Service.STREAMING) {
        out.add(decodeQuotesMessage(inboundType, messageAsJson));
    } else {
        // here the message should go to the next handler in the pipeline
    }
}
So basically I'd need some logic in the else branch to pass the message to the next handler in the pipeline.
I am aware that this approach is not the most efficient one, but the architecture of my service has a slow path (running on a different thread pool), which includes this logic, and a fast path. I can accept some slow code in this place.
In general, you need something like this:
if (service == Service.STREAMING) {
    ctx.pipeline().addLast(new StreamingHandler());
} else {
    ctx.pipeline().addLast(new OtherHandler());
}
out.add(decodeQuotesMessage(inboundType, messageAsJson));
ctx.pipeline().remove(this);
The logic behind this is:
1. You decode the header, so you now know which flow you need to follow;
2. You add the specific handler to the pipeline according to your header;
3. You add the decoded message to the 'out' list, which says "send this decoded message to the next handler in the pipeline", i.e. the handler defined in step 2, in case the current handler is the last one in the pipeline;
4. You remove the current handler from the pipeline to avoid handler duplication in case your protocol sends the same header again and again. However, this step is specific to your protocol and may not be necessary.
This is just the general approach; it really depends on your protocol flow.
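If, instead of swapping handlers, you want the original frame itself to continue to the next handler unchanged, one option (a sketch; RoutingDecoder, isMine and decodeQuotesMessage are illustrative placeholders) is to retain the frame and add it to 'out' as-is, since MessageToMessageDecoder releases the input after decode() returns:

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import java.util.List;

public class RoutingDecoder extends MessageToMessageDecoder<TextWebSocketFrame> {
    @Override
    protected void decode(ChannelHandlerContext ctx, TextWebSocketFrame msg, List<Object> out) {
        if (isMine(msg)) {
            out.add(decodeQuotesMessage(msg));   // consumed and transformed here
        } else {
            // retain() compensates for the release performed by
            // MessageToMessageDecoder, so the next handler receives
            // the original, still-valid frame
            out.add(msg.retain());
        }
    }

    private boolean isMine(TextWebSocketFrame msg) { return false; }              // placeholder
    private Object decodeQuotesMessage(TextWebSocketFrame msg) { return msg.text(); } // placeholder
}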

Netty: several handlers, byte counter

I am new to Netty and am trying to count received and sent bytes for each channel. I found that I should use ChannelTrafficShapingHandler for that, but I have no idea how to reach my goal. As I understand it, I should write my own handler which extends ChannelTrafficShapingHandler and adds some methods? It would be great to see real code examples.
@Override
public void initChannel(SocketChannel ch) {
    ChannelPipeline p = ch.pipeline();
    if (sslCtx != null) {
        p.addLast(sslCtx.newHandler(ch.alloc()));
    }
    p.addLast(new HttpRequestDecoder());
    // Uncomment the following line if you don't want to handle HttpChunks.
    //p.addLast(new HttpObjectAggregator(1048576));
    p.addLast(new HttpResponseEncoder());
    // Remove the following line if you don't want automatic content compression.
    //p.addLast(new HttpContentCompressor());
    p.addLast(new HttpSnoopServerHandler());
    p.addLast(new ChannelTrafficShapingHandler(0, 0));
}
like adding a class to the handlers, but how can I use that?
I wrote a post (Netty http count requests) and was advised to write my own handler to count my requests, opened connections, etc. However, I can't understand how to manage different URLs with different handlers. For example, I have the URL /info; that one is fine and I want to handle it in Handler #1. Then I have the URL /statistic, where I want to count my values and print them. This URL should be managed by Handler #2. How can I do that?
I also have a lot of data which I should output on the page (like several tables); what should I use for that? I think a plain string is not suitable for huge output.
Thank you for your patience and answers.
First, add an HttpObjectAggregator handler to the pipeline. It will handle all HTTP chunks for you and put a FullHttpRequest on the pipeline.
Then implement a handler extending SimpleChannelInboundHandler<FullHttpRequest> and add it at the end of the pipeline. On the FullHttpRequest caught by this handler you have methods to get the headers, the requested URI, the content, etc. You can store content data sizes however you want, e.g. in a map keyed by requested path. Implement whatever you need.
For the response, if you use HTTP/HTML, you use text (so strings). Instantiate a DefaultFullHttpResponse and put your HTML text as the content of this response. Then you just have to push it onto the pipeline by calling ctx.writeAndFlush() (the HttpResponseEncoder will convert the message into a valid HTTP response for you).
Moreover, you can optionally add an HttpContentCompressor to the pipeline to activate compression when possible.
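To make this concrete, here is a minimal sketch of such a handler (the class name, counter fields, and the /statistic page layout are illustrative, not from the original post), assuming an HttpObjectAggregator sits earlier in the pipeline:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.*;
import io.netty.util.CharsetUtil;

public class StatsHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    private long receivedBytes;
    private long requestCount;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        requestCount++;
        receivedBytes += req.content().readableBytes();

        // route by requested URI, as described above
        String body;
        if ("/statistic".equals(req.uri())) {
            body = "<html><body>requests: " + requestCount
                 + ", bytes: " + receivedBytes + "</body></html>";
        } else {
            body = "<html><body>info page</body></html>";
        }

        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
        response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/html; charset=UTF-8");
        response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}

Because a new handler instance is created for each channel in initChannel, the counters above are effectively per-channel.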

Non-blocking reverse proxy with netty

I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and then a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxyHandler (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            // ...
This means the incoming message is only written if the outbound client connection is already ready. This is obviously not ideal in an HTTP proxy case, so I am wondering what the best way to handle it would be.
I am wondering if disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then enable autoRead over the child socket again in the "channelActive" event of the backend handler. However, I am not sure how many messages I would get in the handler for each "read()" invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger the read() again and enable autoRead over the channel).
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        Channel c = future.get();
        if (!currentBackend.compareAndSet(null, c)) {
            pool.release(c);
            throw new IllegalStateException();
        }
    });
}
and then do the copying from input to output through that promise, e.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
    doInBackend(c -> {
        c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                future.channel().read();
            } else {
                pool.release(c);
                frontCtx.close();
            }
        });
    });
}

private void doInBackend(Consumer<Channel> action) {
    Channel c = currentBackend.get();
    if (c == null) {
        backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
    } else {
        action.accept(c);
    }
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed on the right thread... right now I'm using:

backendPromise = group.next().<Channel>newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?

(where group is the same EventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to have the "else { }" optimization in the "doInBackend" method, which avoids using the promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
I have worked on a similar proxy application based on the MQTT protocol, basically used to build a real-time chat application. The application I had to design, however, was asynchronous in nature, so I naturally did not face this problem: when

outboundChannel.isActive() == false

I can simply keep the messages in a queue or a persistent DB and then process them once the outboundChannel is up. However, since you are talking about an HTTP application, the application is synchronous in nature, meaning that the client cannot keep sending packets until the outboundChannel is up and running. So the option you suggest is that a packet is only read once the channel is active, and you handle the message reads manually by disabling auto-read in the ChannelConfig.
However, what I would like to suggest is that you check whether the outboundChannel is active or not. If the channel is active, send the packet forward for processing. If the channel is not active, reject the packet by sending back a response similar to an HTTP 404 error.
Along with this, you should configure your client to keep retrying to send packets at certain intervals, and decide what needs to be done if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern. You should let Netty handle that for you in the most efficient way.
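For reference, the auto-read toggle the question describes would look roughly like this (a sketch only: the class and field names, the backend address, and the placeholder backend handler are illustrative; error handling and the actual forwarding are omitted):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;

public class FrontendHandler extends ChannelInboundHandlerAdapter {
    private Channel outboundChannel;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        final Channel inbound = ctx.channel();
        inbound.config().setAutoRead(false);   // hold off reads until the backend is ready

        Bootstrap b = new Bootstrap()
                .group(inbound.eventLoop())    // share the event loop: no cross-thread hops
                .channel(inbound.getClass())
                .handler(new ChannelInboundHandlerAdapter()); // placeholder backend handler
        b.connect("backend.example.com", 8080).addListener((ChannelFutureListener) f -> {
            if (f.isSuccess()) {
                outboundChannel = f.channel();
                inbound.config().setAutoRead(true);  // backend ready: resume reads
            } else {
                inbound.close();
            }
        });
    }
}

Note the caveat from the first answer, though: with an HttpRequestDecoder in the pipeline, a single read() may still produce several messages.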

How to keep data with each channel on NIO Server

I have a Java NIO server which receives data from clients.
When a channel is ready for read, i.e. key.isReadable() returns true, read(key) is called to read data.
Currently I am using a single read buffer for all channels; in the read() method I clear the buffer, read into it, and then finally put the result into a byte array, assuming that I will get all the data in one shot.
But let's say I do not get the complete data in one shot (I have special characters at the end of the data to detect it).
Problem:
So how do I keep this partial data with the channel, and how do I deal with the partial-read problem in general?
I read somewhere that attachments are not good.
Take a look at the Reactor pattern. Here is a link to a basic implementation by professor Doug Lea:
http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
The idea is to have a single reactor thread which blocks on the Selector call. Once there are IO events ready, the reactor thread dispatches the events to the appropriate handlers.
In the pdf above, there is an inner class Acceptor within Reactor which accepts new connections.
The author uses a single handler for read and write events and maintains the state of this handler. I prefer to have separate handlers for reads and writes, but this is not as easy to work with as the 'state machine' approach. There can be only one attachment per event, so some kind of injection is needed to switch between read/write handlers.
To maintain state between subsequent reads/writes you will have to do a couple of things:
1. Introduce a custom protocol which tells you when a message has been fully read;
2. Have a timeout or cleanup mechanism for stale connections;
3. Maintain client-specific sessions.
So, you can do something like this:
// A simple handler abstraction: the Acceptor and the read/write handlers
// implement this interface (it was implied but not shown in the original).
interface EventHandler {
    void processEvent();
}

public class Reactor implements Runnable {
    final Selector selector;
    final ServerSocketChannel serverSocketChannel;

    public Reactor(int port) throws IOException {
        // Selector.open() and ServerSocketChannel.open() throw IOException,
        // so they belong in the constructor rather than in field initializers
        selector = Selector.open();
        serverSocketChannel = ServerSocketChannel.open();
        serverSocketChannel.socket().bind(new InetSocketAddress(port));
        serverSocketChannel.configureBlocking(false);
        // let Reactor handle new connection events
        registerAcceptor();
    }

    /**
     * Registers Acceptor as handler for new client connections.
     *
     * @throws ClosedChannelException
     */
    private void registerAcceptor() throws ClosedChannelException {
        SelectionKey selectionKey0 = serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
        selectionKey0.attach(new Acceptor());
    }

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            startReactorLoop();
        }
    }

    private void startReactorLoop() {
        try {
            // wait for new events for each registered or new client
            selector.select();
            // get selection keys for pending events
            Set<SelectionKey> selectedKeys = selector.selectedKeys();
            Iterator<SelectionKey> selectedKeysIterator = selectedKeys.iterator();
            while (selectedKeysIterator.hasNext()) {
                // dispatch event to the handler registered for the given key
                dispatch(selectedKeysIterator.next());
                // remove dispatched key from the collection
                selectedKeysIterator.remove();
            }
        } catch (IOException e) {
            // TODO add handling of this exception
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey interestedEvent) {
        if (interestedEvent.attachment() != null) {
            EventHandler handler = (EventHandler) interestedEvent.attachment();
            handler.processEvent();
        }
    }

    private class Acceptor implements EventHandler {
        @Override
        public void processEvent() {
            try {
                SocketChannel clientConnection = serverSocketChannel.accept();
                if (clientConnection != null) {
                    registerChannel(clientConnection);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /**
         * Save the Channel - key association - in a Map perhaps.
         * This is required for subsequent/partial reads/writes.
         */
        private void registerChannel(SocketChannel clientChannel) {
            // notify injection mechanism of new connection (so it can activate Read Handler)
        }
    }
}
Once a read event has been handled, notify the injection mechanism that the write handler can be injected.
New instances of the read and write handlers are created by the injection mechanism once, when a new connection is available, and the mechanism switches the handlers as needed. Lookup of the handlers for each Channel is done from the Map that is filled at connection acceptance by the method registerChannel().
Read and write handlers have ByteBuffer instances, and since each SocketChannel has its own pair of handlers, you can now maintain state between partial reads and writes.
Two tips to improve performance:
1. Try to do the first read immediately when the connection is accepted. Only if you don't read enough data, as defined by the header in your custom protocol, register the Channel's interest in read events.
2. Try to do the write first without registering interest in write events, and only if you don't write all the data, register interest in write.
This will reduce the number of Selector wakeups.
Something like this:

SocketChannel socketChannel;
byte[] outData;
final static int MAX_OUTPUT = 1024;
ByteBuffer output = ByteBuffer.allocate(MAX_OUTPUT);

// if the message was not written fully
if (socketChannel.write(output) < messageSize()) {
    // register interest in write events
    SelectionKey selectionKey = socketChannel.register(selector, SelectionKey.OP_WRITE);
    selectionKey.attach(writeHandler);
    selector.wakeup();
}
Finally, there should be a timed task which checks whether connections are still alive and whether SelectionKeys have been cancelled. If a client breaks the TCP connection, the server will usually not know about it. As a result, a number of event handlers will sit in memory, bound as attachments to stale connections, which results in a memory leak.
This is the reason why you might say attachments are not good, but the issue can be dealt with. Two simple ways to deal with it:
1. TCP keep-alive could be enabled;
2. A periodic task could check the timestamp of the last activity on the given Channel. If it has been idle for too long, the server should terminate the connection.
There's an ancient and very inaccurate NIO blog from someone at Amazon where it is wrongly asserted that key attachments are memory leaks. Complete and utter BS, not even logical. It's also the one where he asserts you need all kinds of supplementary queues. I've never had to do any of that in about 13 years of NIO.
What you need is a ByteBuffer per channel, or possibly two: one for read and one for write. You can store a single buffer as the attachment itself; if you want two, or have other data to store, you need to define yourself a Session class that contains both buffers and whatever else you want to associate with the channel (for example client credentials), and use the Session object as the attachment.
You really can't get very far in NIO with a single buffer for all channels.
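A minimal sketch of such a Session attachment (the field names and buffer sizes are illustrative):

import java.nio.ByteBuffer;

final class Session {
    final ByteBuffer readBuffer = ByteBuffer.allocate(8192);
    final ByteBuffer writeBuffer = ByteBuffer.allocate(8192);
    long lastActivity = System.currentTimeMillis();  // for stale-connection cleanup
    Object credentials;                              // whatever else belongs to the channel
}

// At accept time, attach one Session per channel:
//   SocketChannel client = serverChannel.accept();
//   client.configureBlocking(false);
//   SelectionKey key = client.register(selector, SelectionKey.OP_READ, new Session());
//
// Later, in the read handler, partial data accumulates across reads:
//   Session session = (Session) key.attachment();
//   client.read(session.readBuffer);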
