In Netty, events that flow through a channel pipeline occur in order, as each channel is effectively assigned to only one thread and the handlers call each other in turn. This makes sense and alleviates many synchronisation issues.
However, if you use an IdleStateHandler, from my reading of the source it appears that the channelIdle event will be 'processed' in the context of the timer's thread (for example, the thread the HashedWheelTimer is using).
Is this the case, or have I missed something? If it is the case, does this mean it is possible for an idle-state event and an I/O event (for example a messageReceived event) to be executing on the same channel at the same time?
Also, since I could save the ChannelHandlerContext and use it in a different thread to 'write' to a channel, could I also have an event going downstream in one thread and upstream in another thread at the same time on the same channel?
Finally which thread do ChannelFutures execute in?
All of these use cases are perfectly acceptable and not a criticism of Netty at all; I am actually very fond of the library and use it everywhere. It's just that as I try to do more and more complex things with it, I want to understand more about how it works under the hood, so that I can ensure I am using the proper and just the right amount (no more, no less) of synchronisation (locking).
Yes, the channelIdle event and downstream/upstream events could be fired at the same time. So if your handler implements at least two of the three, you need to add proper synchronization. I guess we should make this clearer in the Javadocs.
Now the other questions..
You can call Channel.write(...) from any thread you want, so it's possible to just store it somewhere and call write whenever you want. This also brings the "limitation" that you need proper synchronization in your "downstream" handlers.
ChannelFutures are executed from within the worker thread.
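To make the synchronization point concrete, here is a plain-Java sketch (not Netty API; the class and method names are illustrative) of a handler whose state can be touched by both the I/O worker thread and the timer thread. Because two threads can fire events into the same handler concurrently, its mutable state must be guarded:

```java
// Plain-Java illustration of why a handler touched by both the I/O worker
// thread (messageReceived) and the timer thread (channelIdle) needs
// synchronization around its mutable state.
public class SharedHandlerDemo {
    static class IdleAwareHandler {
        private int events; // touched by two threads, so guard every access

        synchronized void onEvent() { events++; }

        synchronized int eventCount() { return events; }
    }

    public static int run() throws InterruptedException {
        IdleAwareHandler handler = new IdleAwareHandler();
        int perThread = 100_000;
        // One thread plays the role of the I/O worker, the other the role
        // of the HashedWheelTimer thread firing channelIdle.
        Thread io = new Thread(() -> {
            for (int i = 0; i < perThread; i++) handler.onEvent();
        });
        Thread timer = new Thread(() -> {
            for (int i = 0; i < perThread; i++) handler.onEvent();
        });
        io.start(); timer.start();
        io.join(); timer.join();
        return handler.eventCount(); // 200000 only because onEvent is synchronized
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Without the synchronized keyword, the increment would race and the final count would usually come up short.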
However, if you use an IdleStateHandler, from my reading of the source it appears that the channelIdle event will be 'processed' in the context of the timer's thread (for example, the thread the HashedWheelTimer is using).
Is this the case, or have I missed something? If it is the case, does this mean it is possible for an idle-state event and an I/O event (for example a messageReceived event) to be executing on the same channel at the same time?
Yes, you have missed the concept of ordered event execution available in Netty. If you do not have an ExecutionHandler with an OrderedMemoryAwareThreadPoolExecutor in the pipeline, you cannot have ordered event execution, especially for channel state events from IdleStateHandler.
Also, since I could save the ChannelHandlerContext and use it in a different thread to 'write' to a channel, could I also have an event going downstream in one thread and upstream in another thread at the same time on the same channel?
Yes, that's correct.
Moreover, if you want ordered event execution downstream, you have to add a downstream execution handler implementation with an OrderedMemoryAwareThreadPoolExecutor.
Finally which thread do ChannelFutures execute in?
They are executed by the OIO/NIO worker threads.
But if your downstream handler consumes one type of event and fires another type of event further downstream, then your downstream handler can optionally handle the future's completion itself. This can be done by getting the future from the downstream event and calling
future.setSuccess()
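The ordering guarantee that OrderedMemoryAwareThreadPoolExecutor provides can be sketched in plain Java (this is the concept, not Netty's implementation): tasks belonging to the same channel run one at a time, in submission order, even though the backing pool has many threads. The per-key serial executor below follows the pattern from the java.util.concurrent.Executor Javadoc:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.concurrent.*;

// Sketch of the idea behind OrderedMemoryAwareThreadPoolExecutor: events for
// the same channel are serialized, while different channels may run in parallel.
public class PerChannelSerialExecutor {
    private final Executor pool;
    private final ConcurrentHashMap<Object, SerialExecutor> perKey = new ConcurrentHashMap<>();

    public PerChannelSerialExecutor(Executor pool) { this.pool = pool; }

    public void execute(Object channelKey, Runnable task) {
        perKey.computeIfAbsent(channelKey, k -> new SerialExecutor(pool)).execute(task);
    }

    // Serial executor pattern from the java.util.concurrent.Executor Javadoc.
    static final class SerialExecutor implements Executor {
        private final ArrayDeque<Runnable> tasks = new ArrayDeque<>();
        private final Executor pool;
        private Runnable active;

        SerialExecutor(Executor pool) { this.pool = pool; }

        public synchronized void execute(Runnable r) {
            // Wrap each task so the next one is scheduled only after it finishes.
            tasks.add(() -> { try { r.run(); } finally { scheduleNext(); } });
            if (active == null) scheduleNext();
        }

        private synchronized void scheduleNext() {
            if ((active = tasks.poll()) != null) pool.execute(active);
        }
    }

    public static List<Integer> demo() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        PerChannelSerialExecutor ordered = new PerChannelSerialExecutor(pool);
        List<Integer> seen = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            final int n = i;
            // All tasks share one key, so they run in submission order
            // despite the 4-thread pool.
            ordered.execute("channel-1", () -> { seen.add(n); done.countDown(); });
        }
        done.await();
        pool.shutdown();
        return seen;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Tasks submitted under different keys would interleave freely; only same-key tasks are serialized.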
Related
I'm attempting to implement a Spring Integration flow that requires a multithreaded call if an input variable is true. If this variable is true, then the flow will execute a multithreaded call while the main thread continues its flow.
Then at the end it will be required to wait for both flows to finish before returning a response.
I've been successful at implementing a multithreaded Spring Integration flow using a splitter, but the splitter results in all of the messages going to the same channel. This case is different, since the multithreaded call requires calling a different channel than the main thread of execution.
Is there a way to set up a splitter to send to different channels based on whether the parameter is true or not? Or how would I set up an executor channel to spawn a new thread if that value is true, while continuing the main flow at the same time?
As for waiting for both of the flows to finish execution, would a Spring Integration barrier or an aggregator be a better approach for this use case?
Consider using a PublishSubscribeChannel with an Executor configuration to let the same message be sent to different parallel flows. This way you really can continue your main flow with one subscriber and do something else with the other subscribers. With an Executor, all of them are going to be executed in parallel.
Docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
If you still insist that the main flow must be executed only on the same thread, then consider using a RecipientListRouter, where one of the recipients could be an unconditional next direct channel in the main flow. The other recipient could be conditional on your boolean variable, and it can be an ExecutorChannel to let its subscriber be invoked in parallel.
Docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router-implementations-recipientlistrouter
For waiting for both flows, it is up to you to decide: a barrier, an aggregator, or the more sophisticated scatter-gather. All of them will work for your "wait-for-all" requirement. Or you may implement a custom solution based on a CountDownLatch in some header: every time a parallel flow is done, you count it down. This way you can even determine the count according to your boolean value: if there is no parallel flow, the count is just 1, only the main flow is executed, and only it counts down the latch you are waiting for on the original request.
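A minimal XML sketch of the PublishSubscribeChannel option, with an aggregator waiting for both branches. All channel and bean names here are hypothetical, and the configuration is simplified (error handling, correlation details, and bean definitions omitted):

```xml
<!-- Fan out the same message to both subscribers on executor threads;
     apply-sequence gives the aggregator its correlation headers. -->
<int:publish-subscribe-channel id="forkChannel" task-executor="exec" apply-sequence="true"/>

<!-- subscriber 1: the main flow -->
<int:service-activator input-channel="forkChannel" ref="mainService" output-channel="joinChannel"/>

<!-- subscriber 2: the conditional parallel flow -->
<int:service-activator input-channel="forkChannel" ref="parallelService" output-channel="joinChannel"/>

<!-- wait for both branches before replying -->
<int:aggregator input-channel="joinChannel" output-channel="replyChannel"
                release-strategy-expression="size() == 2"/>

<task:executor id="exec" pool-size="4"/>
```

With the RecipientListRouter variant, the "parallel" recipient would simply be an ExecutorChannel and the other a direct channel, so the main flow stays on the calling thread.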
Maybe there is some "integration-pattern" here I miss...
I have a process (a thread from a TaskExecutor) that in some cases needs to stop and wait for additional data to continue.
I was thinking about blocking in a receive method, but I can't find how to send, from a different thread, a message to that channel (a temporary one, isn't it?) to unblock this thread, and only this one.
The component responsible for unblocking should receive a message from some kind of messaging platform (Redis, Rabbit, ...) and then "notify" the blocked execution.
An ugly implementation could use wait/notify, but of course I don't want that when I have a fully "message-oriented" solution.
Is there any component/solution for this problem?
Maybe there is a subscriber with some topic I can use to be sure only that thread is running again, but I cannot block on a publish-subscribe channel, can I?
Thanks a lot.
that in some cases needs to stop and wait for additional data to continue.
Looks like this is indeed the use-case for the Thread Barrier component.
Another way to do something similar is an Aggregator with a releaseStrategy of 2 messages by size.
Either way, the correlationKey is the key entity in both use cases.
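The correlation-keyed release can be sketched in plain Java (this illustrates the barrier concept, not Spring Integration's actual implementation; all names here are made up): the suspended flow blocks on a queue registered under its correlation key, and the thread that receives the trigger message offers it under the same key, releasing exactly that one waiter.

```java
import java.util.concurrent.*;

// Plain-Java sketch of the thread-barrier idea: block one flow per
// correlation key until a matching trigger arrives from another thread.
public class CorrelationBarrier {
    private final ConcurrentHashMap<String, BlockingQueue<Object>> waiting = new ConcurrentHashMap<>();

    // Called by the suspended flow: block until the trigger arrives or time out.
    public Object await(String correlationKey, long timeout, TimeUnit unit) throws InterruptedException {
        BlockingQueue<Object> q = waiting.computeIfAbsent(correlationKey, k -> new SynchronousQueue<>());
        try {
            return q.poll(timeout, unit);
        } finally {
            waiting.remove(correlationKey); // clean up whether released or timed out
        }
    }

    // Called by the listener that receives the message from Redis/Rabbit/etc.
    public boolean trigger(String correlationKey, Object payload) throws InterruptedException {
        BlockingQueue<Object> q = waiting.get(correlationKey);
        // Offer with a timeout so an unmatched trigger does not block forever.
        return q != null && q.offer(payload, 1, TimeUnit.SECONDS);
    }

    public static Object demo() throws Exception {
        CorrelationBarrier barrier = new CorrelationBarrier();
        ExecutorService exec = Executors.newSingleThreadExecutor();
        exec.submit(() -> {                 // the "unblocking" component
            try {
                Thread.sleep(100);          // simulate the external message arriving later
                barrier.trigger("order-42", "extra-data");
            } catch (InterruptedException ignored) { }
            return null;
        });
        Object released = barrier.await("order-42", 5, TimeUnit.SECONDS);
        exec.shutdown();
        return released;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The barrier and aggregator components do essentially this for you, with the correlation key extracted from the message.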
Do instances of Netty's channel handlers (SimpleChannelInboundHandler, ChannelInboundHandlerAdapter, etc.) share the same thread and stack, or does each have its own thread and stack? I ask because I instantiate numerous channel handlers and I need them to communicate with each other, and I must decide between using threaded or non-threaded communication.
Thank you for your answer
As a general rule, if your handler has state, then it's one handler per channel (pipeline). Otherwise, annotate your handler with @ChannelHandler.Sharable and use the same instance for each channel.
The answer: it depends
I assume that you must be building a server, some of what I say might not apply.
According to https://netty.io/4.0/api/io/netty/channel/ChannelHandler.html, one of the factors that determines which thread your channel handler runs on is how you add it to the pipeline. Netty allows you to use a single instance of your handler for all pipelines (i.e. all connections to your server), in which case you must account for different threads.
In contrast, if you use a channel initializer to add handlers to the pipeline, then no, you do not need to communicate between threads because each connection uses a different instance of the handler.
This also assumes you are using more than one worker thread to run your channel handlers. You must also account for which channel your handlers are communicating with: if you don't store state in a per-channel handler added by a channel initializer, then you must accommodate inter-thread communication.
Your best bet is to debug the current thread in each handler before making a decision; Netty's threading behavior and how it interacts with your program depend heavily on your implementation and where you are moving your data.
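The two wiring styles can be contrasted in plain Java (not Netty API; the class names are illustrative): a single shared instance must keep thread-safe, cross-connection state, while a per-connection instance created by an initializer can keep plain fields that only its own thread touches.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration of shared vs. per-connection handler instances.
public class HandlerSharingDemo {
    // Style 1: one instance for all pipelines ("@Sharable" style).
    static class SharedCounterHandler {
        private final AtomicInteger totalMessages = new AtomicInteger(); // must be thread-safe
        void messageReceived() { totalMessages.incrementAndGet(); }
        int total() { return totalMessages.get(); }
    }

    // Style 2: one fresh instance per connection (ChannelInitializer style).
    static class PerConnectionHandler {
        private int messages; // only ever touched by this connection's thread
        void messageReceived() { messages++; }
        int total() { return messages; }
    }

    public static int[] demo() throws InterruptedException {
        SharedCounterHandler shared = new SharedCounterHandler();
        PerConnectionHandler connA = new PerConnectionHandler();
        PerConnectionHandler connB = new PerConnectionHandler();
        // Two "connections", each served by its own thread.
        Thread a = new Thread(() -> {
            for (int i = 0; i < 1000; i++) { shared.messageReceived(); connA.messageReceived(); }
        });
        Thread b = new Thread(() -> {
            for (int i = 0; i < 1000; i++) { shared.messageReceived(); connB.messageReceived(); }
        });
        a.start(); b.start();
        a.join(); b.join();
        return new int[] { shared.total(), connA.total(), connB.total() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]);
    }
}
```

The shared counter sees traffic from every connection and therefore needs AtomicInteger; each per-connection counter is confined to one thread and can stay a plain int.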
Even after reading http://krondo.com/?p=1209 or Does an asynchronous call always create/call a new thread? I am still confused about how to provide asynchronous calls on an inherently single-threaded system. I will explain my understanding so far and point out my doubts.
One of the examples I read described a TCP server providing asynchronous processing of requests: a user would call a method, e.g. get(Callback c), and the callback would be invoked some time later. Now, my first issue here: we already have two systems, one server and one client. This is not what I mean, because in fact we have at least two threads, one on the server and one on the client side.
The other example I read was JavaScript, as this is the most prominent example of a single-threaded asynchronous system, with Node.js. What I cannot get through my head, maybe thinking in Java terms, is this. If I execute the code below (apologies for the incorrect, probably atrocious syntax):
function foo(){
    read_file(location, callback) //asynchronous call, does not block
    //do many things more here, potentially for hours
}
the call to read the file executes (something) and returns, allowing the rest of my function to execute. Since there is only one thread, i.e. the one that is executing my function, how on earth will the same thread (the one and only one executing my stuff) ever get to read in the bytes from disk?
Basically, it seems to me I am missing some underlying mechanism acting like a round-robin scheduler of some sort, which is inherently single-threaded and might split the tasks into smaller ones, or call into a multithreaded component that would spawn a thread and read the file in.
Thanks in advance for all comments and pointing out my mistakes on the way.
Update: Thanks for all responses. Further good sources that helped me out with this are here:
http://www.html5rocks.com/en/tutorials/async/deferred/
http://lostechies.com/johnteague/2012/11/30/node-js-must-know-concepts-asynchrounous/
http://www.interact-sw.co.uk/iangblog/2004/09/23/threadless (.NET)
http://ejohn.org/blog/how-javascript-timers-work/ (intrinsics of timers)
http://www.mobl-lang.org/283/reducing-the-pain-synchronous-asynchronous-programming/
The real answer is that it depends on what you mean by "single thread".
There are two approaches to multitasking: cooperative and interrupt-driven. Cooperative, which is what the other StackOverflow item you cited describes, requires that routines explicitly relinquish ownership of the processor so it can do other things. Event-driven systems are often designed this way. The advantage is that it's a lot easier to administer and avoids most of the risks of conflicting access to data, since only one chunk of your code is ever executing at any one time. The disadvantage is that, because only one thing is done at a time, everything has to either be designed to execute fairly quickly or be broken up into chunks that do so (via explicit pauses like a yield() call), or the system will appear to freeze until that event has been fully processed.
The other approach -- threads or processes -- actively takes the processor away from running chunks of code, pausing them while something else is done. This is much more complicated to implement, and requires more care in coding since you now have the risk of simultaneous access to shared data structures, but is much more powerful and -- done right -- much more robust and responsive.
Yes, there is indeed a scheduler involved in either case. In the former version the scheduler is just spinning until an event arrives (delivered from the operating system and/or runtime environment, which is implicitly another thread or process) and dispatches that event before handling the next to arrive.
The way I think of it in JavaScript is that there is a queue which holds events. In the old Java producer/consumer parlance, there is a single consumer thread pulling items off this queue and executing every function registered to receive the current event. Events such as asynchronous calls completing (AJAX requests), timeouts, or mouse events get pushed onto the queue as soon as they happen. The single "consumer" thread pulls them off the queue, locates the interested functions, and executes them; it cannot get to the next event until it has finished invoking all the functions registered on the current one. Thus if you have a handler that never completes, the queue just fills up and the page is said to be "blocked".
The system has more than one thread (at least one producer and one consumer), since something generates the events that go on the queue; but as the author of the event handlers you need to be aware that events are processed on a single thread: if you go into a tight loop, you will lock up the only consumer thread and make the system unresponsive.
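The queue-and-single-consumer model described above can be sketched in Java (a toy model, not how any real JavaScript engine is implemented): producers push events from any thread, and one consumer thread runs the registered handlers strictly one at a time, in order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy event loop: many producers, one consumer, handlers run in order.
public class MiniEventLoop {
    private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread consumer;
    private volatile boolean running = true;

    public MiniEventLoop() {
        consumer = new Thread(() -> {
            // Keep draining until shutdown is requested AND the queue is empty.
            while (running || !queue.isEmpty()) {
                try {
                    Runnable handler = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (handler != null) handler.run(); // next event waits until this returns
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "event-loop");
        consumer.start();
    }

    public void post(Runnable handler) { queue.add(handler); } // callable from any producer thread

    public void shutdown() throws InterruptedException {
        running = false;
        consumer.join();
    }

    public static List<String> demo() throws InterruptedException {
        MiniEventLoop loop = new MiniEventLoop();
        List<String> log = new ArrayList<>(); // only the loop thread touches it
        loop.post(() -> log.add("file-read-completed"));
        loop.post(() -> log.add("timer-fired"));
        loop.post(() -> log.add("mouse-clicked"));
        loop.shutdown();
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

If one posted handler spun forever, every later event would sit in the queue unprocessed, which is exactly the "blocked" behaviour described above.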
So in your example :
function foo(){
read_file(location, function(fileContents) {
    // called with the fileContents when the file is read
});
//do many things more here, potentially for hours
}
If you do as your comment says and execute for potentially hours, the callback which handles fileContents will not fire for hours, even though the file has been read. As soon as you hit the last } of foo(), the consumer thread is done with this event and can process the next one, where it will execute the registered callback with the file contents.
HTH
I'm writing a server using Java NIO, and I have a few questions that I can't find answers to.
First, regarding SSLEngine: how do I handle NEED_TASK properly in a separate thread? When I invoke the tasks in a separate thread they complete, but I have no idea how to go back and perform the next handshake operation. One option would be to call that operation from the thread that performed the delegated task, but I guess that's not the way to do it.
Another question is about calling interestOps() from a different thread than the selector thread. I need to change the key's interests after an attempt to write to the channel didn't write all the data.
I thought about using some sort of queue of changes, like in the ROX NIO tutorial, but I have read in another thread here that it is not the best way.
First, regarding SSLEngine: how do I handle NEED_TASK properly in a separate thread? When I invoke the tasks in a separate thread they complete, but I have no idea how to go back and perform the next handshake operations.
While the engine is in the NEED_TASK state it can't do anything else. When the task completes you should repeat the operation that originally returned NEED_TASK and let the engine tell you what to do next. You need to block or disable use of that engine by other threads until the task completes, i.e. don't select on that channel.
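One way this loop can be sketched (a minimal example using only the JDK's SSLEngine API; the method names and executor choice are my own, and a real server would re-register channel interest afterwards rather than block):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Run every delegated task, then re-read the handshake status so the caller
// knows which wrap/unwrap operation to repeat next. Until this completes,
// nothing else should use the engine (e.g. don't select on that channel).
public class DelegatedTaskRunner {

    public static HandshakeStatus runDelegatedTasks(SSLEngine engine, ExecutorService executor)
            throws InterruptedException, ExecutionException {
        Runnable task;
        while ((task = engine.getDelegatedTask()) != null) {
            executor.submit(task).get(); // wait for completion before asking the engine again
        }
        // The engine now reports the next step: NEED_WRAP, NEED_UNWRAP, ...
        return engine.getHandshakeStatus();
    }

    public static HandshakeStatus demo() throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // A freshly created engine has no pending tasks, so this returns at once.
            return runDelegatedTasks(engine, executor);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

In a real handshake loop you would call this whenever wrap()/unwrap() returns NEED_TASK, then dispatch on the returned status to repeat the right operation.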
Another question is about calling interestOps() from a different thread than the selector thread. I need to change the key's interests after an attempt to write to the channel didn't write all the data. I thought about using some sort of queue of changes, like in the ROX NIO tutorial, but I have read in another thread here that it is not the best way.
That would have been me. I hate those queues. I just wakeup() the selector and change the interestOps; I've never seen a problem with that. The selector thread has to cope correctly with zero keys being ready, but it already needs to do that anyway.
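A minimal sketch of this approach, using a Pipe so it is self-contained (the sleep simulates a partial write noticed on another thread; on modern JDKs interestOps() can be changed from any thread and takes effect at the next select):

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Another thread changes the key's interest ops directly and wakes the
// selector; the selector loop simply tolerates select() returning with
// zero ready keys.
public class WakeupDemo {

    public static int demo() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        // Register with no interests; a sink channel is always writable
        // once we express interest in OP_WRITE.
        SelectionKey key = pipe.sink().register(selector, 0);

        Thread other = new Thread(() -> {
            try {
                Thread.sleep(100);                      // pretend a partial write just happened
                key.interestOps(SelectionKey.OP_WRITE); // change ops from the other thread...
                selector.wakeup();                      // ...then kick the selector
            } catch (InterruptedException ignored) { }
        });
        other.start();

        int ready = 0;
        while (ready == 0) {          // cope with wakeups that select zero keys
            ready = selector.select();
        }
        other.join();
        int readyOps = key.readyOps(); // OP_WRITE once the change is picked up
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return readyOps;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo() == SelectionKey.OP_WRITE);
    }
}
```

The while loop is the "cope with zero keys" part: a wakeup may return from select() before the new interest is reflected, and the loop just selects again.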