I am new to Netty and trying to count received and sent bytes for each channel. I found that I should use ChannelTrafficShapingHandler for that, but I have no idea how to reach my goal. As I understand it, I should write my own handler which extends ChannelTrafficShapingHandler and adds some methods? It would be great if you could provide real code examples.
@Override
public void initChannel(SocketChannel ch) {
    ChannelPipeline p = ch.pipeline();
    if (sslCtx != null) {
        p.addLast(sslCtx.newHandler(ch.alloc()));
    }
    p.addLast(new HttpRequestDecoder());
    // Uncomment the following line if you don't want to handle HttpChunks.
    //p.addLast(new HttpObjectAggregator(1048576));
    p.addLast(new HttpResponseEncoder());
    // Remove the following line if you don't want automatic content compression.
    //p.addLast(new HttpContentCompressor());
    p.addLast(new HttpSnoopServerHandler());
    p.addLast(new ChannelTrafficShapingHandler(0, 0));
}
I can add the handler to the pipeline like this, but how do I actually use it?
I wrote a post (Netty http count requests) and was advised to write my own handler to count my requests, opened connections, etc. However, I can't understand how to manage different URLs with different handlers. For example, I have the URL /info - that's fine, and I want to handle it in Handler #1. Then I have the URL /statistic, where I want to count my values and print them. This URL should be managed by Handler #2. How could I do that?
I have a lot of data which I should output on the page (like several tables); what should I use for that? I think a plain string is not suitable for huge output.
Thank you for your patience and answers
First, add an HttpObjectAggregator handler to the pipeline. It will handle all HTTP chunks for you and put a FullHttpRequest on the pipeline.
Then, implement a handler extending SimpleChannelInboundHandler<FullHttpRequest> and add it at the end of the pipeline. On the FullHttpRequest caught in this handler you have methods to get the headers, the requested URI, the content, etc. You can store the content data size however you want, for example in a map keyed by requested path. Implement whatever you need.
For the response, if you use HTTP/HTML, you use text (so strings). Instantiate a DefaultFullHttpResponse and put your HTML text as the content of this response. Then you just have to push it onto the pipeline by calling ctx.writeAndFlush() (the HttpResponseEncoder will convert the message into a valid HTTP response for you).
Moreover, you can optionally add an HttpContentCompressor to the pipeline to activate compression when possible.
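To make this concrete, here is a minimal sketch of the per-path byte bookkeeping such a handler could delegate to. This is plain Java, not Netty API: ByteCountRegistry, record and totalFor are invented names. Inside a SimpleChannelInboundHandler<FullHttpRequest> you would call something like registry.record(request.uri(), request.content().readableBytes()).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical per-path byte accounting that a Netty handler could delegate to.
// Thread-safe, since Netty may run handlers on different event-loop threads.
class ByteCountRegistry {
    private final Map<String, LongAdder> bytesByPath = new ConcurrentHashMap<>();

    // Record that `size` bytes of content arrived for `path`.
    public void record(String path, int size) {
        bytesByPath.computeIfAbsent(path, p -> new LongAdder()).add(size);
    }

    // Total bytes seen for a given path (0 if never seen).
    public long totalFor(String path) {
        LongAdder adder = bytesByPath.get(path);
        return adder == null ? 0L : adder.sum();
    }
}
```

A /statistic handler could then render the map's contents as the HTML content of a DefaultFullHttpResponse.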
Related
My channel pipeline contains several decoders, all of them operating on TextWebSocketFrame messages. Now my problem is that I have to choose the right decoder based on some content of the message.
Essentially, I have to parse a certain field in the message and then decide if I want to proceed handling the message or pass the message to the next encoder/handler.
Most people suggest to use a single decoder to decode all messages in such a case, but my problem is that some decoders are added dynamically and it would be a mess to put all logic in a single decoder.
Currently the code looks like this:
@Override
protected void decode(ChannelHandlerContext ctx, TextWebSocketFrame msg, List<Object> out) throws Exception {
    String messageAsJson = msg.text();
    JsonObject jsonObject = JSON_PARSER.fromJson(messageAsJson, JsonObject.class);
    JsonObject messageHeader = jsonObject.getAsJsonObject(MESSAGE_HEADER_FIELD);
    String serviceAsString = messageHeader.get(MESSAGE_SERVICE_FIELD).getAsString();
    String inboundTypeAsString = messageHeader.get(MESSAGE_TYPE_FIELD).getAsString();
    Service service = JSON_PARSER.fromJson(serviceAsString, Service.class);
    InboundType inboundType = JSON_PARSER.fromJson(inboundTypeAsString, InboundType.class);
    if (service == Service.STREAMING) {
        out.add(decodeQuotesMessage(inboundType, messageAsJson));
    } else {
        // TODO: pass the message to the next handler in the pipeline
    }
}
So basically I'd need some logic in the else branch to pass the message to the next handler in the pipeline.
I am aware that this approach is not the most efficient one, but the architecture of my service has a slow path (running on a different thread pool), which includes this logic, and a fast path. I can accept some slow code at this place.
In general, you need something like this:
if (service == Service.STREAMING) {
    ctx.pipeline().addLast(new StreamingHandler());
} else {
    ctx.pipeline().addLast(new OtherHandler());
}
out.add(decodeQuotesMessage(inboundType, messageAsJson));
ctx.pipeline().remove(this);
The logic behind this is:
1. You decode the header and now know which flow you need to follow;
2. You add the specific handler to the pipeline according to your header;
3. You add the decoded message to the 'out' list, which says "send this decoded message to the next handler in the pipeline" - that next handler is the one added in step 2, since the current handler is last in the pipeline;
4. You remove the current handler from the pipeline to avoid handler duplication in case your protocol sends the same header again and again. However, this step is specific to your protocol and may not be necessary.
This is just the general approach, however, it really depends on your protocol flow.
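As a toy illustration of that flow (plain Java, invented names, no Netty types), a routing decoder can install the matching handler behind itself, forward the message, and remove itself:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the add-then-remove pattern from the answer above.
// ToyPipeline, RoutingDecoder etc. are invented names, not Netty API.
class ToyPipeline {
    interface Handler { void handle(ToyPipeline p, String service, String msg); }

    final List<Handler> handlers = new ArrayList<>();
    final List<String> log = new ArrayList<>();

    // Dispatch to the first handler; it may mutate the pipeline.
    void fire(String service, String msg) {
        handlers.get(0).handle(this, service, msg);
    }
}

class RoutingDecoder implements ToyPipeline.Handler {
    public void handle(ToyPipeline p, String service, String msg) {
        // Choose the concrete handler based on the decoded header.
        ToyPipeline.Handler next = service.equals("STREAMING")
                ? new StreamingHandler() : new OtherHandler();
        p.handlers.add(next);          // step 2: add the specific handler
        p.handlers.remove(this);       // step 4: remove self
        next.handle(p, service, msg);  // step 3: pass the decoded message on
    }
}

class StreamingHandler implements ToyPipeline.Handler {
    public void handle(ToyPipeline p, String service, String msg) {
        p.log.add("streaming:" + msg);
    }
}

class OtherHandler implements ToyPipeline.Handler {
    public void handle(ToyPipeline p, String service, String msg) {
        p.log.add("other:" + msg);
    }
}
```

After the first message the decoder is gone, so subsequent messages go straight to the installed handler, which is exactly what ctx.pipeline().remove(this) achieves in Netty.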
Looking at the docs, it says this:
https://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
A user is supposed to have one or more ChannelHandlers in a pipeline
to receive I/O events (e.g. read) and to request I/O operations (e.g.
write and close). For example, a typical server will have the
following handlers in each channel's pipeline, but your mileage may
vary depending on the complexity and characteristics of the protocol
and business logic:
Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
Protocol Encoder - translates a Java object into binary data.
Business Logic Handler - performs the actual business logic (e.g. database access).
and it could be represented as shown in the following example:
static final EventExecutorGroup group = new DefaultEventExecutorGroup(16);
...
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("decoder", new MyProtocolDecoder());
pipeline.addLast("encoder", new MyProtocolEncoder());
// Tell the pipeline to run MyBusinessLogicHandler's event handler methods
// in a different thread than an I/O thread so that the I/O thread is not
// blocked by a time-consuming task.
// If your business logic is fully asynchronous or finished very quickly, you
// don't need to specify a group.
pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
In a lot of the examples on GitHub I see this same pattern. I was wondering if someone can explain why the businessHandler is not in between the decoder and the encoder. I would think that you would get your POJO, then do work on it in the business handler, then encode it.
The decoder and encoder are usually at the beginning of the pipeline because of the order in which handlers are called. For incoming data it's bottom-up, and for outgoing top-down.
E.g.
pipeline.addLast(new MyEncoder());
pipeline.addLast(new MyDecoder());
pipeline.addLast(new MyBusiness());
In this case, for incoming data call order is: MyDecoder (transforming data to POJO) -> MyBusiness (the encoder is not called for incoming stream) and for outgoing data: MyBusiness -> MyEncoder (the decoder is not called for outgoing stream).
If you receive an incoming stream in the business handler (actually, the POJOs after the decoder), work on it, and write it back, it looks as if MyBusiness is located between the encoder and decoder, because the data turns back toward the encoder.
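A small plain-Java model of that call order may help (invented names, not Netty's implementation): inbound events visit only inbound handlers head-to-tail, while outbound operations visit only outbound handlers tail-to-head.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of pipeline traversal order (not real Netty code):
// inbound events run head-to-tail over inbound handlers only,
// outbound operations run tail-to-head over outbound handlers only.
class PipelineOrder {
    static class Entry {
        final String name;
        final boolean inbound; // false = outbound
        Entry(String name, boolean inbound) { this.name = name; this.inbound = inbound; }
    }

    final List<Entry> entries = new ArrayList<>();

    void addLast(String name, boolean inbound) { entries.add(new Entry(name, inbound)); }

    // Handler names visited for an inbound event (e.g. a read).
    List<String> inboundOrder() {
        List<String> order = new ArrayList<>();
        for (Entry e : entries) if (e.inbound) order.add(e.name);
        return order;
    }

    // Handler names visited for an outbound operation (a write from the tail).
    List<String> outboundOrder() {
        List<String> order = new ArrayList<>();
        for (int i = entries.size() - 1; i >= 0; i--)
            if (!entries.get(i).inbound) order.add(entries.get(i).name);
        return order;
    }
}
```

With MyEncoder (outbound), MyDecoder (inbound) and MyBusiness (inbound) added in that order, an inbound read visits MyDecoder then MyBusiness, and a write from MyBusiness reaches MyEncoder on the way back out.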
Of course the business handler is between the decoder and the encoder. Take the factorial example:
public void initChannel(SocketChannel ch) {
    ChannelPipeline pipeline = ch.pipeline();
    if (sslCtx != null) {
        pipeline.addLast(sslCtx.newHandler(ch.alloc()));
    }
    // Enable stream compression (you can remove these two if unnecessary)
    pipeline.addLast(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
    pipeline.addLast(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
    // Add the number codec first,
    pipeline.addLast(new BigIntegerDecoder());
    pipeline.addLast(new NumberEncoder());
    // and then business logic.
    // Please note we create a handler for every new channel
    // because it has stateful properties.
    pipeline.addLast(new FactorialServerHandler());
}
I thought that in the initChannel function the pipeline first adds the encoder and the decoder, and finally adds the handler, yet the execution flow is actually ordered: decoder, handler, encoder.
The handlers - decoder, business handler and encoder - are actually stored in the AbstractChannelHandlerContext class. Netty keeps a linked list of AbstractChannelHandlerContext instances, arranged as decoder context --> handler context --> encoder context, and execution follows the same order!
In fact, if you add 1. decoder, 2. businessHandler, 3. encoder in your server and you call ctx.channel().writeAndFlush() or ctx.pipeline().writeAndFlush(), the encoder will be called, because in that case the search walks from the tail looking for the previous outbound handler. However, if you call ctx.writeAndFlush(), it looks for the previous outbound handler starting from the businessHandler's position.
Add a breakpoint on the first line of findContextOutbound() in AbstractChannelHandlerContext and you will see it.
private AbstractChannelHandlerContext findContextOutbound(int mask) {
    AbstractChannelHandlerContext ctx = this;
    EventExecutor currentExecutor = executor();
    do {
        ctx = ctx.prev;
    } while (skipContext(ctx, currentExecutor, mask, MASK_ONLY_OUTBOUND));
    return ctx;
}
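A toy model of that lookup (invented names; only the prev-pointer walk mirrors Netty) shows why ctx.writeAndFlush() from the business handler skips an encoder that sits after it, while ctx.channel().writeAndFlush() starts from the tail and finds it. Note that in real Netty the head context is itself outbound, since it performs the actual I/O.

```java
// Toy model of findContextOutbound(): walk prev pointers until an
// outbound context is found. Ctx and findOutboundFrom are invented names.
class Ctx {
    final String name;
    final boolean outbound;
    Ctx prev;
    Ctx(String name, boolean outbound) { this.name = name; this.outbound = outbound; }

    // Find the nearest outbound context before `start` (exclusive).
    static Ctx findOutboundFrom(Ctx start) {
        Ctx c = start;
        do { c = c.prev; } while (c != null && !c.outbound);
        return c;
    }
}
```

Linking head <- decoder <- business <- encoder <- tail, a search started at business lands on head (the encoder is behind it, so it is skipped), while a search started at tail lands on the encoder.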
I am trying to implement a process consisting of several webservice-calls, initiated by a JMS-message read by Spring-integration. Since there are no transactions across these WS-calls, I would like to keep track of how far my process has gone, so that steps that are already carried out are skipped when retrying message processing.
Example steps:
Retrieve A (get A.id)
Create new B for A (using A.id, getting B.id)
Create new C for B (using B.id, getting C.id)
Now, if the first attempt fails in step 3, I have already created a B and know its id. So if I retry the message, it will skip the second step and not leave me with an incomplete B.
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
The way it works at the moment:
Message is read
Some exception is thrown
Message processing halts, and ActiveMQ places the message on DLQ
How I would like it to work:
Message is read
Some exception is thrown
The exception is handled, with the result of this handling being an extra header property added to the original message
ActiveMQ places the message on DLQ
One thing that might achieve this is the following:
Read the message
Start processing, wrapped in try-catch
On exception, get the extra information from the exception, create a new message based on the original one, add extra info to header and send it directly to the DLQ
Swallow the exception so the original message disappears
This feels kinda hackish though, hopefully there is a more elegant solution.
It's hard to generalize without more information about your flow(s) but you could consider adding a custom request handler advice to decorate and/or re-route failed messages. See Adding Behavior to Endpoints.
As the other answer says, you can't modify the message but you can build a new one from it.
EDIT:
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
Ahhh... now I think I know what you are asking; no, you can't "decorate" the existing message; you can republish it with additional headers instead of throwing an exception.
You can republish in the advice, or in the error flow.
It might seem like a "hack" to you, but the JMS API provides no mechanism to do what you want.
From the spring forum:
To place a new header into the MessageHeaders you should use
MessageBuilder, because not only the headers but the entire Message is
immutable.
return MessageBuilder.fromMessage(message).setHeader(updateflag, "ACK".equals(message.getHeaders().get("Lgg_Rid")) ? "CONF" : "FAIL").build();
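The immutability point can be shown with a plain-Java toy (ToyMessage and its nested Builder are invented names mirroring the MessageBuilder pattern, not Spring's API): you never mutate a message, you build a new one that copies the original's headers plus the extra one.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Toy immutable message + builder, mirroring the MessageBuilder pattern:
// the original message is never touched, a decorated copy is built from it.
final class ToyMessage {
    final Object payload;
    final Map<String, Object> headers;

    ToyMessage(Object payload, Map<String, Object> headers) {
        this.payload = payload;
        this.headers = Collections.unmodifiableMap(new HashMap<>(headers));
    }

    static Builder fromMessage(ToyMessage m) { return new Builder(m); }

    static final class Builder {
        private final Object payload;
        private final Map<String, Object> headers;

        Builder(ToyMessage m) {
            this.payload = m.payload;
            this.headers = new HashMap<>(m.headers); // copy, don't share
        }

        Builder setHeader(String key, Object value) { headers.put(key, value); return this; }

        ToyMessage build() { return new ToyMessage(payload, headers); }
    }
}
```

This is why republishing with extra headers works while "decorating" in place does not: the decorated message is a new object, and the original (which may already be on its way to the DLQ) is unchanged.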
In an asynchronous context, errors go to an error channel - either one you configure yourself and indicate in the message headers with errorChannel, or a global error channel if none is specified. See here for more details.
Let's say I have these handler flow in netty pipeline:
UpHandler1 -> UpHandler2 -> UpHandler3 -> ... -> DownHandler1 -> DownHandler2 -> DownHandler3
Based on a certain condition (i.e. I already found the response to the request without doing further processing), is there any way, in my UpHandler2, I can go straight to DownHandler2 (skipping certain upstream as well as downstream handlers in between)? Is this recommended?
You can use UpHandler2's ChannelHandlerContext to retrieve the ChannelPipeline. From there you can retrieve the channel handler context of any channel handler using one of the context(...) methods. Then sendDownstream for Netty 3, or write for Netty 4, will forward to the next downstream handler after the handler to which the context corresponds. In effect I think you'll need to get the ChannelHandlerContext for DownHandler1 and use that to write your message.
Alternatively you can build the netty pipeline such that DownHandler2 is the next down stream handler from UpHandler2. If I've understood your pipeline correctly then something like
pipeline.addLast("down3", downhandler3);
pipeline.addLast("up1", uphandler1);
pipeline.addLast("down2", downhandler2);
pipeline.addLast("up2", uphandler2);
pipeline.addLast("down1", downhandler1);
pipeline.addLast("up3", uphandler3);
might work. However this could be quite brittle and also depends on whether your processing logic allows it.
I have a batch route that consumes XML files from a folder. It filters, transforms and finally saves a grouped document to disk. As this is a batch route, I require it to be shut down after a single polling of the source folder, which is what the RouteTerminator is for in the code below. (It calls stopRoute() and removeRoute() on the CamelContext with the routeID.)
from("file:" + sourcePath)
    .filter().xquery("//DateTime > xs:dateTime('2013-05-07T23:59:59')")
    .filter().xquery("//DateTime < xs:dateTime('2013-05-09T00:00:00')")
    .aggregate(constant(true))
    .completionFromBatchConsumer()
    .groupExchanges()
    .to("xquery:" + xqueryPath)
    .to("file:" + targetPath)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            new RouteTerminator(routeID, exchange.getContext()).start();
        }
    })
    .end();
This correctly closes the route after a single file collection and after repeating the process in onException it also gracefully closes the route when an exception is thrown. Unfortunately, if the route filters out every Exchange, it never reaches the processor. Exchanges are instead dropped during the filter and the route remains open.
I thought to move the filter inside the aggregate call as this might keep the route going until the end, but this method won't accept XQuery filters. XPath is not an option as it does not support dateTime comparisons.
How can I force the entire route to stop in this case?
I tried again and now have a solution where I call setHeader to set a Filtered header.
Unfortunately I can't seem to break out of the choice in order to use it as a simple switch/case, so I have to route both the .when() and .otherwise() branches to the same second direct route.
In that route I then aggregate and call a basic merge bean that builds a Document out of every Exchange and adds it to a GenericFile if the header matches. It seems like there should be an easier way to simply set a header based on an xquery though...