Assembling a Netty Message in the Handler - java

I am in the process of prototyping Netty for my project. I am trying to implement a simple Text/String oriented protocol on top of Netty. In my pipeline I am using the following:
public class TextProtocolPipelineFactory implements ChannelPipelineFactory
{
    @Override
    public ChannelPipeline getPipeline() throws Exception
    {
        // Create a default pipeline implementation.
        ChannelPipeline pipeline = pipeline();

        // Add the text line codec combination first,
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(2000000, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());

        // and then business logic.
        pipeline.addLast("handler", new TextProtocolHandler());

        return pipeline;
    }
}
I have a DelimiterBasedFrameDecoder, a String Decoder, and a String Encoder in the pipeline.
As a result of this setup my incoming message is split into multiple Strings, which results in multiple invocations of the "messageReceived" method of my handler. This is fine. However, it requires me to accumulate these messages in memory and re-construct the message when the last string packet of the message is received.
My question is, what is the most memory efficient way to "accumulate the strings" and then "re-construct them into the final message". I have 3 options so far. They are:
Use a StringBuilder to accumulate and toString to construct. (This gives the worst memory performance. In fact, for large payloads with lots of concurrent users this gives unacceptable performance.)
Accumulate into a byte array via a ByteArrayOutputStream and then construct using the byte array (this gives much better performance than option 1, but it still hogs quite a bit of memory).
Accumulate into a dynamic ChannelBuffer and use toString(charset) to construct. I have not profiled this setup yet, but I am curious how it compares to the above two options. Has anyone solved this issue using a dynamic ChannelBuffer?
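To make option 2 concrete, here is a minimal framework-free sketch of the idea: append raw chunks as they arrive and decode the charset exactly once when the message is complete, so no intermediate Strings are ever built. The class name and UTF-8 charset are assumptions for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch of option 2: accumulate raw bytes and decode once at the end,
// so each chunk is copied once and only a single String is constructed.
class ByteAccumulator {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    public void append(byte[] chunk) {
        buf.write(chunk, 0, chunk.length);
    }

    // Single charset decode, performed only when the final packet arrives.
    public String assemble() {
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

The StringBuilder variant pays for a char[] copy per chunk plus a final copy in toString, which is why it degrades fastest under load; the byte-array variant defers all decoding to one pass.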
I am new to Netty and its possible I may be doing something wrong architecturally. Your input will be greatly appreciated.
Thanks in advance
Sohil
Adding my implementation of a custom FrameDecoder for Norman to review
public final class TextProtocolFrameDecoder extends FrameDecoder
{
    public static ChannelBuffer messageDelimiter()
    {
        return ChannelBuffers.wrappedBuffer(new byte[] {'E', 'O', 'F'});
    }

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer)
            throws Exception
    {
        int eofIndex = find(buffer, messageDelimiter());
        if (eofIndex != -1)
        {
            ChannelBuffer frame = buffer.readBytes(buffer.readableBytes());
            return frame;
        }
        return null;
    }

    private static int find(ChannelBuffer haystack, ChannelBuffer needle) {
        for (int i = haystack.readerIndex(); i < haystack.writerIndex(); i++) {
            int haystackIndex = i;
            int needleIndex;
            for (needleIndex = 0; needleIndex < needle.capacity(); needleIndex++) {
                if (haystack.getByte(haystackIndex) != needle.getByte(needleIndex)) {
                    break;
                } else {
                    haystackIndex++;
                    if (haystackIndex == haystack.writerIndex() &&
                        needleIndex != needle.capacity() - 1) {
                        return -1;
                    }
                }
            }
            if (needleIndex == needle.capacity()) {
                // Found the needle in the haystack!
                return i - haystack.readerIndex();
            }
        }
        return -1;
    }
}

I think you would get the best performance if you would implement your own FrameDecoder. This would allow you to buffer all the data till you really need to dispatch it to the next Handler in the chain. Please refer to the FrameDecoder apidocs.
If you don't want to handle the detection of CRLF yourself, it would also be possible to keep the DelimiterBasedFrameDecoder and just add a custom FrameDecoder behind it to assemble the ChannelBuffers that represent a line of text.
In both cases FrameDecoder will take care to minimize memory copies as much as possible by trying to just "wrap" buffers rather than copy them each time.
That said if you want to have the best performance go with the first approach, if you want it easy go with the second ;)
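The second suggestion, assembling already-decoded lines until a terminator, can be sketched without any Netty types. This is not a FrameDecoder, just the accumulation logic such a stage would contain; the "EOF" terminator line is an assumed convention taken from the question's delimiter.

```java
import java.util.ArrayList;
import java.util.List;

// Framework-free sketch of a stage sitting behind a line-based decoder:
// buffer lines until a terminator line arrives, then emit the whole
// message at once. In Netty this logic would live inside a FrameDecoder
// or handler; here it is plain Java for clarity.
class LineAssembler {
    private final List<String> lines = new ArrayList<>();
    private final List<String> completed = new ArrayList<>();

    void onLine(String line) {
        if ("EOF".equals(line)) {                 // assumed end-of-message marker
            completed.add(String.join("\n", lines)); // one join, one String
            lines.clear();                        // start buffering the next message
        } else {
            lines.add(line);
        }
    }

    List<String> messages() { return completed; }
}
```

Storing the lines in a list and joining once per message avoids re-copying the accumulated text on every incoming line.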

Related

Can ChronicleQueue tailers for two different queues be interleaved?

I have two separate ChronicleQueues that were created by independent threads that monitor web socket streams in a Java application. When I read each queue independently in a separate single-thread program, I can traverse each entire queue as expected - using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();

while (true)
{
    try ( final DocumentContext context = queue1Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter1++;
        queue1Data = context.wire()
                            .bytes()
                            .readObject(Queue1Data.class);
        queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
    }
}

while (true)
{
    try ( final DocumentContext context = queue2Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter2++;
        queue2Data = context.wire()
                            .bytes()
                            .readObject(Queue2Data.class);
        queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
    }
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues, a DecoratedBufferUnderflowException: readCheckOffset0 failed is thrown after only one object from the queue2Tailer is read. The interleaving I attempt is: read an object from one queue, then, based on a property of the Queue1Data object (a time stamp), read Queue2Data objects until the first object after the time stamp (the limit variable below) of the active Queue1Data object is found, then do something with it. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try ( final DocumentContext queue1Context = queue1Tailer.readingDocument() )
{
    final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
    while (true)
    {
        try ( final DocumentContext queue2Context = queue2Tailer.readingDocument() )
        {
            if ( isNull(queue2Context.wire()) )
            {
                terminate = true;
                break;
            }
            queue2Data = queue2Context.wire()
                                      .bytes()
                                      .readObject(Queue2Data.class);
            while (true)
            {
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class); // first read succeeds
                if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
                {                                       // but the second read fails
                    // cache a value
                    break;
                }
            }
            // continue working with queue2Data object and cached values
        } // end try block for queue2 tailer
    } // end outer while loop
} // end outer try block for queue1 tailer
I have tried it as above, and also with both tailers created at the beginning of the function which does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically, I took the loop which worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I got from one of the examples, though I am not sure it is the proper way to determine this when processing a queue sequentially.
Any suggestions would be appreciated.
You're not writing it correctly in the first instance.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
dc.wire().writeEventName("first").text("Hello first");
}
// write second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
    try (DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isPresent()) {
            continue;
        }
        dc.wire().readEventName(reusable);
        if ("first".contentEquals(reusable)) {
            // handle first
        } else if ("second".contentEquals(reusable)) {
            // handle second
        }
        // optionally handle other events
    }
}
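The dispatch-by-event-name pattern in the loop above can be modeled without Chronicle at all, which may help when reasoning about it. This is a plain-Java analogue, not Chronicle's API; the class and record shapes are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Framework-free model of "distinguish messages by event name": each record
// carries a name that selects the handler, just as readEventName() does in
// the loop above. Unregistered event names are silently skipped.
class EventDispatcher {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    void on(String eventName, Consumer<String> handler) {
        handlers.put(eventName, handler);
    }

    void dispatch(String eventName, String payload) {
        Consumer<String> h = handlers.get(eventName);
        if (h != null) {
            h.accept(payload); // unknown events fall through, like the loop above
        }
    }
}
```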
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types and CharSequence subclasses (i.e. Strings), and Bytes. For more details have a read of MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}

public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
    void first(FirstDataType first);
    void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section - it writes event name (which is taken from the method name in method writer, i.e. "first" or "second" correspondingly), and then the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface, that will handle corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
public void first(FirstDataType first) {
// handle first type of events
}
public void second(SecondDataType second) {
// handle second type of events
}
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
    // readOne returns a boolean which can be used to determine
    // if there's no more data, and pause if appropriate
    reader.readOne();
}
Misc
You don't have to use the same interface for reading and writing. In case you want to only read events of second type, you can define another interface:
interface OnlySecond {
    void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and give it to tailer#methodReader() call, the readOne() calls will only process events of second type while skipping all others.
This also works for MethodWriters, i.e. if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing data and then single interface extending all others for reading, e.g.:
interface FirstOut {
    void first(String first);
}

interface SecondOut {
    void second(long second);
}

interface ThirdOut {
    void third(ThirdDataType third);
}

interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
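The interface side of this pattern is plain Java and can be demonstrated on its own: each producer codes against its narrow interface, while a single consumer implements the combined one. This sketch mirrors FirstOut/SecondOut from above (ThirdOut omitted for brevity); AllInHandler and its log field are illustrative names, not Chronicle API.

```java
// Writers see only their own narrow view of the handler.
interface FirstOut { void first(String first); }
interface SecondOut { void second(long second); }
interface AllIn extends FirstOut, SecondOut {}

// The single consumer implements the combined interface and therefore
// receives everything either writer-facing view can produce.
class AllInHandler implements AllIn {
    final StringBuilder log = new StringBuilder();
    public void first(String first) { log.append("first:").append(first).append(';'); }
    public void second(long second) { log.append("second:").append(second).append(';'); }
}
```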
With further testing, I have found that nested loops to read multiple queues which contain data in different POJO classes is possible. The problem with the code in the above question is that queue1Context is obtained once, OUTSIDE the loop that I expected to read queue1Data objects. My fundamental misconception was that DocumentContext objects managed stepping through objects in a queue, whereas actually ExcerptTailer objects manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while (true)
{
    try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
    {
        queue1Data = queue1Context.wire()
                                  .bytes()
                                  .readObject(Queue1Data.class); // first read succeeds
        if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
        {                                       // and second and subsequent reads now succeed
            // cache a value
            break;
        }
    }
}
And of course the outer-most try block containing queue1Context (in the original code) should be removed.

Flink Filter Stream based on another Stream Deterministically

I have 2 DataStreams in Flink (with common timestamps and from Kafka) with one of them containing some signal values and the other containing the activeness (simple active-inactive) information. I have tried RichCoProcessFunction with a simple state private ValueState<Boolean> seen; and the results are non-deterministic. If I run on the same set of data (with same timestamps) by using startFromEarliest I sometimes get different values filtered. How can I make it deterministic? I'm sharing my KeyedCoProcessFunction skeleton below.
private ValueState<Boolean> seen;

@Override
public void open(Configuration parameters) throws Exception {
    ValueStateDescriptor<Boolean> descriptor = new ValueStateDescriptor<>(
        // state name
        "have-seen-key",
        // type information of state
        TypeInformation.of(new TypeHint<Boolean>() {}));
    seen = getRuntimeContext().getState(descriptor);
}

@Override
public void processElement1(SomeEvent<Double> value, Context ctx, Collector<SomeEvent<Double>> out) throws Exception {
    if (seen.value() == Boolean.TRUE) {
        out.collect(value);
    }
}

@Override
public void processElement2(SomeEvent<Double> value, Context ctx, Collector<SomeEvent<Double>> out) throws Exception {
    if (value.value == 1) {
        seen.update(Boolean.TRUE);
    } else {
        seen.update(Boolean.FALSE);
    }
}
Implementing an event time join of the sort you want can be done as a RichCoProcessFunction, but it can be a bit complex. You might prefer to implement this as a join with a temporal table function.
The reason it's not deterministic is that the two sources produce elements at a different pace. The simplest way to make it more deterministic is to use event time. This means You would need to have timestamps assigned both to the control records and the data records; Flink will then emit watermarks for Your elements.
Then You could simply buffer the elements and wait with emitting or discarding them until You receive a watermark for the control stream, meaning that nothing is going to change in the control stream up to that point.
Without timestamps, it's virtually impossible to introduce deterministic behavior in such case, because You won't be ever able to exactly tell when the given record has arrived and which records should be dropped and which should be emitted.
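The buffering idea can be sketched without Flink: hold data records until the control stream's watermark has passed their timestamp, then decide each one against the last control record at or before it. All class names, the event shapes, and the explicit watermark callback are simplifying assumptions, not Flink API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Framework-free model of the suggested approach: data records are released
// only once the control watermark covers their timestamp, so the decision
// no longer depends on arrival order of the two streams.
class DeterministicFilter {
    private final TreeMap<Long, Boolean> controlByTime = new TreeMap<>();
    private final TreeMap<Long, Double> pendingData = new TreeMap<>();
    private final List<Double> emitted = new ArrayList<>();

    void onControl(long ts, boolean active) { controlByTime.put(ts, active); }

    void onData(long ts, double value) { pendingData.put(ts, value); }

    // Called when the control watermark advances: everything with
    // timestamp <= watermark can now be decided deterministically.
    void onControlWatermark(long watermark) {
        while (!pendingData.isEmpty() && pendingData.firstKey() <= watermark) {
            Map.Entry<Long, Double> e = pendingData.pollFirstEntry();
            // last control state at or before this data record's timestamp
            Map.Entry<Long, Boolean> ctl = controlByTime.floorEntry(e.getKey());
            if (ctl != null && ctl.getValue()) {
                emitted.add(e.getValue());
            }
        }
    }

    List<Double> emitted() { return emitted; }
}
```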

Akka stream - limiting Flow rate without introducing delay

I'm working with Akka (version 2.4.17) to build an observation Flow in Java (let's say of elements of type <T> to stay generic).
My requirement is that this Flow should be customizable to deliver a maximum number of observations per unit of time as soon as they arrive. For instance, it should be able to deliver at most 2 observations per minute (the first that arrive, the rest can be dropped).
I looked very closely to the Akka documentation, and in particular this page which details the built-in stages and their semantics.
So far, I tried the following approaches.
With throttle and shaping() mode (to not close the stream when the limit is exceeded):
Flow.of(T.class)
    .throttle(2,
              new FiniteDuration(1, TimeUnit.MINUTES),
              0,
              ThrottleMode.shaping())
With groupedWithin and an intermediary custom method:
final int nbObsMax = 2;
Flow.of(T.class)
    .groupedWithin(Integer.MAX_VALUE, new FiniteDuration(1, TimeUnit.MINUTES))
    .map(list -> {
        List<T> listToTransfer = new ArrayList<>();
        for (int i = list.size() - nbObsMax; i > 0 && i < list.size(); i++) {
            listToTransfer.add(new T(list.get(i)));
        }
        return listToTransfer;
    })
    .mapConcat(elem -> elem) // Splitting List<T> in a Flow of T objects
The previous approaches give me the correct number of observations per unit of time, but the observations are retained and only delivered at the end of the time window (and therefore there is an additional delay).
To give a more concrete example, if the following observations arrives into my Flow:
[Obs1 t=0s] [Obs2 t=45s] [Obs3 t=47s] [Obs4 t=121s] [Obs5 t=122s]
It should only output the following ones as soon as they arrive (processing time can be neglected here):
Window 1: [Obs1 t~0s] [Obs2 t~45s]
Window 2: [Obs4 t~121s] [Obs5 t~122s]
Any help will be appreciated, thanks for reading my first StackOverflow post ;)
I cannot think of a solution out of the box that does what you want. Throttle will emit in a steady stream because of how it is implemented with the bucket model, rather than having a permitted lease at the start of every time period.
To get the exact behavior you are after you would have to create your own custom rate-limit stage (which might not be that hard). You can find the docs on how to create custom stages here: http://doc.akka.io/docs/akka/2.5.0/java/stream/stream-customize.html#custom-linear-processing-stages-using-graphstage
One design that could work is an allowance counter saying how many elements can be emitted, which you reset every interval; for every incoming element you subtract one from the counter and emit, and when the allowance is used up you keep pulling upstream but discard the elements rather than emit them. Using TimerGraphStageLogic for the GraphStageLogic allows you to set a timed callback that can reset the allowance.
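The allowance-counter design can be modeled with an explicit logical clock instead of TimerGraphStageLogic, which makes the semantics easy to verify. One simplification to note: here the allowance resets at the first element arriving after the interval, whereas a timer-based stage would reset exactly one interval after the window opened. Class and method names are illustrative.

```java
// Minimal model of the allowance-counter design: up to `max` elements pass
// per interval, extras are dropped immediately (no delay), and the counter
// resets when the interval rolls over.
class AllowanceLimiter {
    private final int max;
    private final long intervalMillis;
    private long windowStart = 0;
    private boolean started = false;
    private int used = 0;

    AllowanceLimiter(int max, long intervalMillis) {
        this.max = max;
        this.intervalMillis = intervalMillis;
    }

    // Returns true if the element arriving at time `now` may be emitted.
    boolean tryEmit(long now) {
        if (!started || now - windowStart >= intervalMillis) {
            started = true;       // equivalent of the timer firing:
            windowStart = now;    // open a new window and
            used = 0;             // reset the allowance
        }
        if (used < max) {
            used++;
            return true;          // emit immediately
        }
        return false;             // pull upstream but discard
    }
}
```

Replaying the question's example (2 per minute, observations at 0s, 45s, 47s, 121s, 122s) passes Obs1, Obs2, Obs4, and Obs5 and drops Obs3, matching the desired output.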
I think this is exactly what you need: http://doc.akka.io/docs/akka/2.5.0/java/stream/stream-cookbook.html#Globally_limiting_the_rate_of_a_set_of_streams
Thanks to the answer of @johanandren, I've successfully implemented a custom time-based GraphStage that meets my requirements.
I post the code below, if anyone is interested:
import akka.stream.Attributes;
import akka.stream.FlowShape;
import akka.stream.Inlet;
import akka.stream.Outlet;
import akka.stream.stage.*;
import scala.concurrent.duration.FiniteDuration;

public class CustomThrottleGraphStage<A> extends GraphStage<FlowShape<A, A>> {

    private final FiniteDuration silencePeriod;
    private int nbElemsMax;

    public CustomThrottleGraphStage(int nbElemsMax, FiniteDuration silencePeriod) {
        this.silencePeriod = silencePeriod;
        this.nbElemsMax = nbElemsMax;
    }

    public final Inlet<A> in = Inlet.create("TimedGate.in");
    public final Outlet<A> out = Outlet.create("TimedGate.out");

    private final FlowShape<A, A> shape = FlowShape.of(in, out);

    @Override
    public FlowShape<A, A> shape() {
        return shape;
    }

    @Override
    public GraphStageLogic createLogic(Attributes inheritedAttributes) {
        return new TimerGraphStageLogic(shape) {

            private boolean open = false;
            private int countElements = 0;

            {
                setHandler(in, new AbstractInHandler() {
                    @Override
                    public void onPush() throws Exception {
                        A elem = grab(in);
                        if (open || countElements >= nbElemsMax) {
                            pull(in); // we drop all incoming observations since the rate limit has been reached
                        }
                        else {
                            if (countElements == 0) { // we schedule the next instant to reset the observation counter
                                scheduleOnce("resetCounter", silencePeriod);
                            }
                            push(out, elem); // we forward the incoming observation
                            countElements += 1; // we increment the counter
                        }
                    }
                });
                setHandler(out, new AbstractOutHandler() {
                    @Override
                    public void onPull() throws Exception {
                        pull(in);
                    }
                });
            }

            @Override
            public void onTimer(Object key) {
                if (key.equals("resetCounter")) {
                    open = false;
                    countElements = 0;
                }
            }
        };
    }
}

most efficient way to get byte[] to Queue (ListBlockingQueue)

I am reading an array of bytes passed in to me (not my choice, but I have to use it this way). I need to get the data into a LinkedBlockingQueue and ultimately step through the bytes to build one or more XML messages (the array may contain partial messages). So my questions are:
What generic type should I use for the LBQ?
What is the most efficient way to get the byte[] into that generic type?
Here is my example code:
void parsebytes(byte[] bytes, int length)
{
    // assume that I am doing other checks on the data
    if (length > 0)
    {
        myThread.putBytes(bytes, length);
    }
}
in my thread:
void putBytes(byte[] bytes, int length)
{
    for (int i = 0; i < length; i++)
    {
        blockingQueue.put(bytes[i]);
    }
}
I also do not want to have to pull off the blocking queue byte-by-byte either. I would rather grab everything that is in the queue and process it.
There is no such thing as a ListBlockingQueue. However, any BlockingQueue<Object> will accept byte[] since Java arrays are objects.
In the absence of other design considerations, the simplest option might be to just put the arrays into the queue as they arrive and let the consumer stitch them together.
Consider this:
BlockingQueue<byte[]> q = new LinkedBlockingQueue<>();
q.put(new byte[] {1,2,3});
byte[] bytes = q.take();
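Since the asker also wants to grab everything currently queued rather than take elements one by one, drainTo covers that: it moves all available elements in a single call. A sketch of the consumer side, assuming the stitched bytes are then handed to the XML parsing step:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Drain every array currently in the queue in one call (no byte-by-byte
// take) and stitch them into a single byte[] for downstream parsing.
class ByteChunkConsumer {
    static byte[] drainAndStitch(BlockingQueue<byte[]> q) {
        List<byte[]> chunks = new ArrayList<>();
        q.drainTo(chunks); // moves all available elements at once
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }
}
```

Note this keeps the producer side O(1) per array, versus O(n) queue operations per array when putting individual bytes as in the question's putBytes loop.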

Netty: Add and Remove FrameDecoder dynamically from a Pipeline, Protocol encapsulation

I've been working with Netty 3.3.1-Final for 3 weeks now.
My Protocol has 3 steps and each step needs a different FrameDecoder:
Read arguments
Transfer some data
Mutual close of the data pipe
I've been through a lot of "blocking" issues that I could not understand. It finally became apparent to me, reading the org.jboss.netty.example.portunification example, that I had a buffering issue when trying to dynamically change my FrameDecoder: the buffer of one FrameDecoder was (probably) not empty when switching to the next one...
Is there a way to do that easily in Netty? Do I have to change my Protocol? Do I need to write one big FrameDecoder and manage a state?
If so, how to avoid code duplication between different protocols with common sub parts (for instance "reading arguments")?
Today I came to the idea of a FrameDecoderUnifier (code below) with the purpose of a way to hot add and remove some FrameDecoder, what do you think?
Thanks for your help!
Renaud
----------- FrameDecoderUnifier class --------------
/**
 * This FrameDecoder is able to forward the unused bytes from one decoder to the next one. It provides
 * a safe way to replace a FrameDecoder inside a Pipeline.
 * It is not safe to just add and remove FrameDecoders dynamically from a Pipeline, because there is a risk
 * of unread bytes remaining inside the buffer of the FrameDecoder you want to remove.
 */
public class FrameDecoderUnifier extends FrameDecoder {

    private final Method frameDecoderDecodeMethod;
    volatile boolean skip = false;
    LastFrameEventHandler eventHandler;
    LinkedList<Entry> entries;
    Entry entry = null;

    public FrameDecoderUnifier(LastFrameEventHandler eventHandler) {
        this.eventHandler = eventHandler;
        this.entries = new LinkedList<Entry>();
        try {
            // decode() is protected, so it must be looked up with getDeclaredMethod
            // and made accessible before it can be invoked reflectively
            this.frameDecoderDecodeMethod = FrameDecoder.class.getDeclaredMethod("decode", ChannelHandlerContext.class, Channel.class, ChannelBuffer.class);
            this.frameDecoderDecodeMethod.setAccessible(true);
        } catch (NoSuchMethodException ex) {
            throw new RuntimeException(ex);
        } catch (SecurityException ex) {
            throw new RuntimeException(ex);
        }
    }

    public void addLast(FrameDecoder decoder, LastFrameIdentifier identifier) {
        entries.addLast(new Entry(decoder, identifier));
    }

    private Object callDecode(FrameDecoder decoder, ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception {
        return frameDecoderDecodeMethod.invoke(decoder, ctx, channel, buffer);
    }

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception {
        if (entry == null && !entries.isEmpty()) {
            entry = entries.removeFirst(); // take the next decoder so it is not reused after its last frame
        }
        if (entry == null) {
            return buffer; // No framing, no decoding
        }
        // Perform the decode operation
        Object obj = callDecode(entry.getDecoder(), ctx, channel, buffer);
        if (obj != null && entry.getIdentifier().isLastFrame(obj)) {
            // Fire event
            eventHandler.lastObjectDecoded(entry.getDecoder(), obj);
            entry = null;
        }
        return obj;
    }

    /**
     * You can use this interface to take some action when the current decoder is changed for the next one.
     * This can be useful to change some upper Handler in the pipeline.
     */
    public interface LastFrameEventHandler {
        public void lastObjectDecoded(FrameDecoder decoder, Object obj);
    }

    public interface LastFrameIdentifier {
        /**
         * True if, after this frame, we should disable this decoder.
         * @param decodedObj
         * @return
         */
        public abstract boolean isLastFrame(Object decodedObj);
    }

    private class Entry {
        FrameDecoder decoder;
        LastFrameIdentifier identifier;

        public Entry(FrameDecoder decoder, LastFrameIdentifier identifier) {
            this.decoder = decoder;
            this.identifier = identifier;
        }

        public FrameDecoder getDecoder() {
            return decoder;
        }

        public LastFrameIdentifier getIdentifier() {
            return identifier;
        }
    }
}
I have had similar problems: removing a frame decoder from a pipeline does not seem to prevent it from being called, and there isn't an obvious way to make the decoder behave as if it weren't in the chain. Netty insists that decode() reads at least one byte, so you can't simply return the incoming ChannelBuffer, whereas returning null stops the processing of incoming data until the next packet arrives, stalling the protocol decoding process.
Firstly: the Netty 3.7 docs for FrameDecoder do in fact have a section "Replacing a decoder with another decoder in a pipeline". It says:
It is not possible to achieve this simply by calling
ChannelPipeline#replace()
Instead, it suggests passing the data on by returning an array wrapping the decoded first packet and the rest of the data received.
return new Object[] { firstMessage, buf.readBytes(buf.readableBytes()) };
Importantly, "unfolding" must have been enabled prior to this, but this part is easy to miss and isn't explained. The best clue I could find was Netty issue 132, which evidently gave rise to the "unfold" flag on FrameDecoders. If true, the decoder will unpack such arrays into objects in a way which is transparent to downstream handlers. A peep at the source code seems to confirm this is what "unfolding" means.
Secondly: there seems to be an even simpler way, since the example also shows how to pass data on down the pipeline unchanged. For example, after doing its job, my sync packet FrameDecoder sets an internal flag and removes itself from the pipeline, returning the decoded object as normal. Any subsequent invocations when the flag is set then simply pass the data on like so:
protected Object decode(ChannelHandlerContext ctx,
        Channel channel, ChannelBuffer cbuf) throws Exception {

    // Close the door on more than one sync packet being decoded
    if (m_received) {
        // Pass on the data to the next handler in the pipeline.
        // Note we can't just return cbuf as-is, we must drain it
        // and return a new one. Otherwise Netty will detect that
        // no bytes were read and throw an IllegalStateException.
        return cbuf.readBytes(cbuf.readableBytes());
    }

    // Handle the framing
    ChannelBuffer decoded = (ChannelBuffer) super.decode(ctx, channel, cbuf);
    if (decoded == null) {
        return null;
    }

    // Remove ourselves from the pipeline now
    ctx.getPipeline().remove(this);
    m_received = true;

    // Can we assume an array backed ChannelBuffer?
    // I have only hints that we can't, so let's copy the bytes out.
    byte[] sequence = new byte[magicSequence.length];
    decoded.readBytes(sequence);

    // We got the magic sequence? Return the appropriate SyncMsg
    return new SyncMsg(Arrays.equals(sequence, magicSequence));
}
A decoder derived from LengthFieldBasedFrameDecoder remains downstream and handles all subsequent data framing. Works for me, so far.
I think a frame decoder which switches internal decoders based on some state, and dynamically adding/removing upper-layer handlers, should be avoided because:
It is difficult to understand/debug the code.
Handlers do not have well-defined responsibilities (that's why you are removing/adding handlers, right? One handler should handle one or more (related) types of protocol messages, not many handlers the same type of messages).
Ideally a frame decoder should only extract the protocol frame, not decode the frame based on state (here the frame decoder can have an internal chain of decoders to decode the frame and fire a MessageEvent with the decoded message; upper handlers can react to decoded messages).
UPDATE: Here I have considered a protocol where each message can have a unique tag/identifier and the end of the message is clearly marked (for example a Tag-Length-Value frame format).
