Understanding of Netty internals - java

As I understand Netty, an incoming message is passed to an EventLoop (only one EventLoop, one thread). The EventLoop doesn't process it itself, but passes it to an ExecutorService (which holds multiple threads in a pool) for execution.
All of this happens with the help of NIO: the EventLoop waits for incoming messages and dispatches them via selectors, keys, channels, etc.
Am I right?
Netty 4 is used

As far as I know, Netty uses EventLoopGroups for handling incoming and outgoing data as well as incoming connections.
When you start using Netty, the threading shouldn't be as interesting as the way the data flows through the different classes. When a message is inbound, the first place you can intercept it is the decoder (ByteToMessageDecoder), where the raw, still-encoded ByteBuf is available. From there it makes its way through the handlers (ChannelInboundHandler).
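That flow can be sketched like this (a minimal Netty 4 example assuming netty-all on the classpath; the class names LineDecoder and EchoHandler are invented for illustration). Note that by default both handlers run on the channel's event-loop thread; Netty only runs a handler on a separate executor if you register it with an EventExecutorGroup via pipeline.addLast(group, handler).

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.*;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

// First stop for inbound bytes: turn raw ByteBuf data into messages.
class LineDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int nl = in.indexOf(in.readerIndex(), in.writerIndex(), (byte) '\n');
        if (nl < 0) return;                  // not a full line yet: wait for more bytes
        out.add(in.readSlice(nl - in.readerIndex()).toString(StandardCharsets.UTF_8));
        in.skipBytes(1);                     // drop the '\n'
    }
}

// Next stop: the inbound handler sees the decoded messages, not raw bytes.
class EchoHandler extends SimpleChannelInboundHandler<String> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        ctx.writeAndFlush(msg + "\n");       // assumes a StringEncoder later in the pipeline
    }
}

// Wiring (inside a ChannelInitializer):
//   ch.pipeline().addLast(new LineDecoder(), new StringEncoder(), new EchoHandler());
```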

Related

How many thread should I use to send and receive data with java.nio?

I am creating a distributed system with many machines for learning. I need to send and receive data between the machines, and I am using java.nio to build the network. On each machine, I use one thread with a ServerSocketChannel to receive data from the other machines, and for each package of data to send, I create a new thread. That means one thread for receiving and multiple threads for sending per machine.
But I face a problem: since one thread handles receiving, many clients are left pending when connecting.
Should I change it to one thread handling receiving and one thread handling sending?
Thank you
P.S.: I don't want to use any third-party framework.
If there are more senders than receivers in your network, then obviously some of the senders will end up waiting. If you have more receivers than senders, then obviously some of the receivers will be idle, since nearly all of the time a sender will probably be connected to a receiver, one to one.
I cannot judge on what you "should do" as I don't know what you're trying to accomplish.
Anyway, the two common patterns used on the receiver side are:
One thread handles all
One thread handles just accepting the connection and opening the stream, then delegates the actual work with the stream to another thread (usually from a thread pool to prevent resource exhaustion that could happen if a new thread was created for every connection)
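The second pattern can be sketched in plain Java (names like AcceptDelegateServer are invented for illustration; the fixed pool caps resource use so a flood of connections can't exhaust threads):

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// One thread accepts connections; a fixed pool handles each stream,
// so a slow client never blocks new clients from connecting.
public class AcceptDelegateServer {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final ServerSocket server;

    public AcceptDelegateServer(int port) throws IOException {
        server = new ServerSocket(port);
    }

    public int port() { return server.getLocalPort(); }

    // Accept loop: does nothing but accept and hand off.
    public void acceptLoop() {
        try {
            while (!server.isClosed()) {
                Socket client = server.accept();
                pool.submit(() -> handle(client));   // delegate the real work
            }
        } catch (IOException ignored) { /* socket closed on shutdown */ }
    }

    // Worker: echo one line back, then close the connection.
    private void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        } catch (IOException ignored) { }
    }

    public void shutdown() throws IOException {
        server.close();
        pool.shutdown();
    }
}
```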

Event Driven server using Java NIO

I'm trying to wrap my head around building an asynchronous (non-blocking) HTTP server using Java NIO. I presently have a thread-pool implementation and would like to turn it into an event-driven one with a single thread.
How exactly does an Event Driven server work?
Do we still need threads?
I've been reading on Java channels, buffers and selectors.
So after I create a ServerSocketChannel and the selector and listen for requests, do I need to hand the requests over to other threads so that they can process and serve them? If so, how is that any different from a thread-pool implementation?
And if I don't create more threads to process the requests, how can the same thread keep listening for requests and process them too? I'm talking SCALABLE, say 1 million requests in total and 1000 coming in concurrently.
I've been reading on Java channels, buffers and selectors. So after I create a ServerSocketChannel and the selector and listen for requests, do I need to hand the requests over to other threads so that they can process and serve them?
No, the idea is that you process data as it is available, not necessarily using threads.
The complication comes out of the need to handle data as it comes. For instance, you might not get a full request at once. In that case, you need to buffer it somewhere until you have the full request, or process it piecemeal.
Once you have got the request, you need to send the response. Again, the whole response cannot normally be sent at once. You send as much as you can without blocking, then use the selector to wait until you can send more (or another event happens, such as another request coming in).
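A minimal sketch of that single-threaded loop, assuming an echo-style workload (the class name and buffer size are invented for illustration). The per-connection buffer attached to each key holds partial data between events, and the key stays interested in OP_WRITE until the buffer drains:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

// One thread, one Selector: accept, read, and write are all handled
// as readiness events, never by blocking.
public class SelectorEchoServer implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    public SelectorEchoServer() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (!key.isValid()) continue;
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        // attach a per-connection buffer for partial data
                        ch.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(1024));
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = (ByteBuffer) key.attachment();
                        if (ch.read(buf) == -1) { ch.close(); continue; }
                        buf.flip();
                        key.interestOps(SelectionKey.OP_WRITE); // echo what we have
                    } else if (key.isWritable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = (ByteBuffer) key.attachment();
                        ch.write(buf);                 // may write only part of buf
                        if (!buf.hasRemaining()) {     // fully sent: read again
                            buf.clear();
                            key.interestOps(SelectionKey.OP_READ);
                        }
                    }
                }
            }
        } catch (IOException | ClosedSelectorException ignored) { }
    }

    public void close() throws IOException { selector.close(); server.close(); }
}
```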

WebSocket async send can result in blocked send once queue filled

I have a pretty simple Jetty-based WebSocket server, responsible for streaming small binary messages to connected clients.
To avoid any blocking on the server side I was using the sendBytesByFuture method.
After increasing the load from 2 clients to 20, they stopped receiving any data. During troubleshooting I decided to switch to the synchronous send method, and finally got a potential reason:
java.lang.IllegalStateException: Blocking message pending 10000 for BLOCKING
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.lockMsg(WebSocketRemoteEndpoint.java:130)
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.sendBytes(WebSocketRemoteEndpoint.java:244)
The clients do no computation upon receiving the data, so they shouldn't be slow consumers.
So I'm wondering: what can I do to solve this problem?
(using Jetty 9.2.3)
If the error message occurs from a synchronous send, then you have multiple threads attempting to send messages on the same RemoteEndpoint - something that isn't allowed per the protocol. Only 1 message at a time may be sent. (There is essentially no queue for synchronous sends)
If the error message occurs from an asynchronous send, then that means you have messages sitting in a queue waiting to be sent, yet you are still attempting to write more async messages.
Try not to mix synchronous and asynchronous sends on the same endpoint at the same time (it would be very easy to accidentally produce output that becomes an invalid protocol stream).
Using Java Futures:
You'll want to use the Future objects returned by the sendBytesByFuture() and sendStringByFuture() methods to verify whether each message was actually sent (it could have failed with an error), and if enough unsent messages start to queue up, back off on sending more until the remote endpoint can catch up.
Standard Future behavior and techniques apply here.
Using Jetty Callbacks:
There is also the WriteCallback behavior available in the sendBytes(ByteBuffer,WriteCallback) and sendString(String,WriteCallback) methods that would call your own code on success/error, at which you can put some logic around what you send (limit it, send it slower, queue it, filter it, drop some messages, prioritize messages, etc. whatever you need)
Using Blocking:
Or you can just use blocking sends to never have too many messages queue up.
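The Future-based back-off described above can be sketched with a generic wrapper (all names hypothetical) around any Future-returning send, Jetty's sendBytesByFuture() being one instance. CompletableFuture is used here to get a completion callback; with Jetty's plain java.util.concurrent.Future you would poll isDone() instead. This variant drops messages once too many are in flight:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Track in-flight async sends; refuse (or queue/filter/prioritize,
// whatever your app needs) once the remote endpoint falls behind.
public class BackpressuredSender {
    private final Function<byte[], CompletableFuture<Void>> send; // underlying async send
    private final AtomicInteger inFlight = new AtomicInteger();
    private final int maxInFlight;

    public BackpressuredSender(Function<byte[], CompletableFuture<Void>> send, int maxInFlight) {
        this.send = send;
        this.maxInFlight = maxInFlight;
    }

    /** @return false if the message was dropped due to backpressure */
    public boolean trySend(byte[] msg) {
        if (inFlight.incrementAndGet() > maxInFlight) {
            inFlight.decrementAndGet();
            return false;                       // back off: too many queued
        }
        send.apply(msg).whenComplete((v, err) -> inFlight.decrementAndGet());
        return true;
    }
}
```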

Multiple Threads Sharing One Socket

I have a scenario where multiple threads need to communicate with external system on one socket. Each thread's message can be identified by a unique id.
In this scenario where all threads share the same socket, can I use BlockingQueues? Since the threads both produce requests and consume responses, can I have a singleton component, say "Socketer", that holds the socket and two BlockingQueues (incoming and outgoing)? Any message on the outgoing queue is written to the socket, and any message read from the socket is put on the incoming queue. The Socketer also maintains a hashtable of all the producer threads, and as it reads a response, it identifies the corresponding producer and hands the response over to it.
Please suggest whether this is the right design approach, or advise improvements. My threads are actually web services, and I am in a Spring environment.
Thanks
I don't see why you need the hash table, but you do need a response queue per thread. You could embed the correct response queue into the request message.
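The per-request response queue idea can be sketched like this (class names invented; the remote side is simulated in-process, where in reality the service loop would write to the socket and match replies as they come back):

```java
import java.util.concurrent.*;

// One shared connection: each request carries its own response queue,
// so the single reader thread hands the reply straight back to the
// thread that asked, with no shared hashtable needed.
public class SharedConnection {
    // A request: the payload plus the queue the caller waits on.
    public static final class Request {
        final String payload;
        final BlockingQueue<String> reply = new ArrayBlockingQueue<>(1);
        public Request(String payload) { this.payload = payload; }
    }

    private final BlockingQueue<Request> outgoing = new LinkedBlockingQueue<>();

    // Caller side: enqueue the request, then block on our own reply queue.
    public String call(String payload) throws InterruptedException {
        Request r = new Request(payload);
        outgoing.put(r);
        return r.reply.take();
    }

    // Service side: drain the outgoing queue; here we fake the remote
    // system's answer instead of doing real socket I/O.
    public void serviceLoop() throws InterruptedException {
        while (true) {
            Request r = outgoing.take();
            r.reply.put("ack:" + r.payload);   // pretend the remote answered
        }
    }
}
```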
But are you sure you can't open multiple connections to the external system? It would make your life a lot simpler.

Java multithreaded stateful server - networking design

I'm trying to implement a stateful, multi-client server application and have some questions about the networking/threading design. The problem I'm currently facing is how to exchange messages between the communication layer and the logic layer.
The server handles multiple clients, where each of them can be active in multiple "channels", where each channel has multiple stages and may have multiple clients acting in it. Think of it to something similar as a chat program with multiple rooms.
I have already implemented the receiving of messages on the server side. Each client has its own thread that does a blocking read of the data and decodes it into a message. Now how to proceed? In my opinion, each channel should also have its own thread, to easily maintain its state. I could use a BlockingQueue to exchange the received messages with the channel thread, which waits (blocking) for new messages on that queue.
But then how to send messages to the clients? The logic in the channel will handle the message, and produce some messages to be sent to one/some/all of the clients. Is it safe to use the channel thread to directly write to the socket? Or should I use another BlockingQueue to transmit the messages to the client handler thread? But how to wake it then, since it's waiting on the socket to read? Or should I use a separate send-thread per client, or even a separate send-socket?
BTW: I know I could use existing libraries for the networking layer, but I want to do it from scratch on plain sockets.
Put a send message method on the communication object that wraps the socket. Synchronize this method so that only one thread can be calling it at once. Then, it doesn't make any difference how many threads call this method. Each message will only be sent one at a time. You also don't have to disturb the thread that's blocking to read. This send method will be a quick enough operation that you don't have to worry about other threads blocking while a thread sends.
As long as the channel has a reference to the communication objects for each connected client, it can send messages and not worry about it.
If it ever caused problems, you could always modify that send message to enqueue the object to be sent. Then you could have a specific send thread to block on the queue and write the contents to the socket. But from my experience, this won't be necessary.
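A minimal sketch of such a synchronized send method (names invented; writeUTF is used here just to get simple per-message framing):

```java
import java.io.*;

// Wraps the socket's output stream; send is synchronized, so any number
// of threads can call it and messages never interleave on the wire.
public class Connection {
    private final DataOutputStream out;

    public Connection(OutputStream raw) {
        this.out = new DataOutputStream(new BufferedOutputStream(raw));
    }

    // One message at a time; writeUTF prefixes each message with its length.
    public synchronized void send(String msg) throws IOException {
        out.writeUTF(msg);
        out.flush();
    }
}
```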
What about an event mechanism? When you are done processing the request and there is data available for the client, simply deliver it as an event to the client's socket-handler thread. Since the transmission from the client has ended by then, you can send the reply normally, if I understand correctly.
