I want to fork off a subprocess, feed it incoming data from a Channel, and stream the result back to the client. So far the best solution I've come up with is to put an OrderedMemoryAwareThreadPoolExecutor in front of my upstream handler, which forks off another thread to read the output from the process and write it back on the channel. Is there a better way to do this?
You can create your own thread pool and submit tasks to it from your handler. OrderedMemoryAwareThreadPoolExecutor is a wrapper around a standard ExecutorService; its advantage is that it can account for the memory cost of queued events. By using your own thread pool you get better control over your own logic. As the Netty tutorial says, you need to fork a new thread for job handling. The I/O is bound to a socket, which is hidden inside a Netty Channel, so you need to pass the Channel (or ChannelHandlerContext) into that new thread. The socket is always there, and the asynchronous I/O is performed by reading from and writing to it through the Channel in the new thread.
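A minimal sketch of that idea, assuming Netty 3 (to match OrderedMemoryAwareThreadPoolExecutor): the handler starts the subprocess, hands the Channel to a worker thread, and the worker streams the subprocess output back to the client. The command name and the thread-pool choice are placeholders, not taken from the question.

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.*;

import java.io.InputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubprocessRelayHandler extends SimpleChannelUpstreamHandler {
    private final ExecutorService workers = Executors.newCachedThreadPool();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        final Channel channel = e.getChannel();                        // pass the channel to the worker
        final Process proc = new ProcessBuilder("my-command").start(); // hypothetical command
        // ... write e.getMessage() to proc.getOutputStream() here ...

        workers.submit(new Runnable() {
            public void run() {
                try {
                    InputStream out = proc.getInputStream();
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = out.read(buf)) != -1) {
                        // Channel.write may be called from a non-I/O thread.
                        channel.write(ChannelBuffers.copiedBuffer(buf, 0, n));
                    }
                } catch (Exception ex) {
                    Channels.fireExceptionCaught(channel, ex);
                }
            }
        });
    }
}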
As I understand Netty, an incoming message is passed to the EventLoop (only one EventLoop, one thread). The EventLoop itself doesn't process it, but passes it to an ExecutorService (which holds multiple threads in a pool) for execution.
All this happens with the help of NIO: the EventLoop waits for incoming messages and dispatches them via selectors, keys, channels, etc.
Am I right?
Netty 4 is used
As far as I know, Netty uses EventLoopGroups for handling incoming and outgoing data as well as incoming connections.
When you start using Netty, that shouldn't be as interesting as the way the data flows through the different classes. When a message is inbound, the first interface where you can intercept it is the decoder (ByteToMessageDecoder), where your encrypted ByteBuf is available. Then it makes its way through the handler (ChannelInboundHandler).
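To make that ordering concrete, here is a rough Netty 4 pipeline sketch; the decoder and handler bodies are illustrative, not taken from the question:

import io.netty.buffer.ByteBuf;
import io.netty.channel.*;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.nio.charset.StandardCharsets;
import java.util.List;

public class PipelineSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new ByteToMessageDecoder() {                // first stop for inbound bytes
              @Override
              protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
                  if (in.readableBytes() > 0) {
                      out.add(in.toString(StandardCharsets.UTF_8)); // decode the raw bytes
                      in.skipBytes(in.readableBytes());             // mark them as consumed
                  }
              }
          })
          .addLast(new SimpleChannelInboundHandler<String>() { // then the business handler
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                  System.out.println("decoded: " + msg);
              }
          });
    }
}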
I have a simple client-server application using sockets for the communication. One possibility is to close the socket every time the client has sent something to the server.
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
How can I check in the server if there is new data available in a socket in the queue? The only thing I can imagine is to constantly iterate over the whole queue and check every socket if it has new data. But this would be very inefficient because if I have several threads working on the queue, the queue gets blocked when one thread is scanning over it.
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
Keeping connections open will improve performance, though there are scaling issues: an open socket uses kernel resources. (I wouldn't use a queue though ...)
How can I check in the server if there is new data available in a socket in the queue?
If you have a number of sockets to different clients, and you want to process data in (roughly) the order that it arrives, there are two common techniques:
Create a thread per socket, and have each thread simply do a read. This will (naturally) block the thread until data becomes available.
Use the NIO channel selector mechanism (see Selector) which allows you to find out which of a group of I/O channels is ready for a read or write.
Thread per socket tends to be resource hungry (thread stacks), and does not scale well at all if you have multiple threads that are active simultaneously. (Too many context switches, too much load on the thread scheduler.)
By contrast, selectors map onto native syscalls provided by the host operating system, and thus they are efficient and responsive ... if used intelligently.
(You could also obtain non-blocking channels for the sockets, and poll them round-robin fashion. But that isn't going to be either efficient or responsive.)
As you can see, none of these ideas work with a queue. Either you have a number of threads each dealing with one socket, or you have one thread dealing with an array or (array) list of sockets. The queue abstraction is not designed for indexing or iterating.
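For concreteness, a bare-bones selector loop along the lines of the second option might look like this (the port and buffer size are arbitrary):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                          // blocks until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    buf.clear();
                    int n = ((SocketChannel) key.channel()).read(buf);
                    if (n == -1) {                      // client closed the connection
                        key.cancel();
                        key.channel().close();
                    }
                    // otherwise: buf.flip() and process the data that arrived
                }
            }
        }
    }
}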
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
See @Lolo's answer.
A practical solution would be to use NIO2 AsynchronousSocketChannels to perform asynchronous read operations with a callback that you can specify as a CompletionHandler.
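A minimal sketch of that approach; the host, port, and buffer size are made up for illustration:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AsyncReadSketch {
    public static void main(String[] args) throws Exception {
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        channel.connect(new InetSocketAddress("localhost", 9000)).get(); // wait for the connection

        ByteBuffer buf = ByteBuffer.allocate(4096);
        channel.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer attachment) {
                attachment.flip();
                // ... process the data, then issue the next read ...
            }

            @Override
            public void failed(Throwable exc, ByteBuffer attachment) {
                exc.printStackTrace();
            }
        });

        Thread.sleep(10_000); // keep the demo JVM alive while the callback runs
    }
}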
For this school assignment, I need to simulate a client-server type application using Java threads (no need for sockets, etc.). How might I go about doing it?
I need a way for the server to start and wait for clients to call it; then it should return a response. The "API" I have in mind is something like:
server.start()
client1.connect(server)
client2.connect(server)
x = client1.getData()
y = client2.getData()
success1 = client1.sendData(1)
success2 = client2.sendData(2)
What might the server/client run() method look like? Assume I can hardcode the method calls for now.
I suggest the following approach:
1. Have "server" code that works with a blocking queue (a sketch follows below).
A blocking queue is a synchronized data structure that lets the thread reading from it (the "consumer" thread) wait until there is data in the queue to be read.
The "producer" thread is a thread that "pushes" data onto the queue.
I would recommend you use one of the existing blocking queue implementations.
I would also suggest you read more about the "producer-consumer" pattern.
A blocking queue also eliminates the need for "busy waiting", which is not recommended in multi-threaded programming.
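A minimal producer/consumer sketch along those lines; the class and message names are just for illustration:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueServerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();

        // "Server": the consumer thread blocks on take() until a request arrives.
        Thread server = new Thread(() -> {
            try {
                while (true) {
                    String request = requests.take();   // waits without busy-looping
                    System.out.println("server handled: " + request);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.setDaemon(true);
        server.start();

        // "Clients": producer threads simply put requests on the queue.
        requests.offer("data from client1");
        requests.offer("data from client2");

        Thread.sleep(500);                               // give the demo consumer time to run
    }
}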
From the description you have provided, what I can suggest is that you write something like this:
1) Have one queue where all the clients can put messages.
2) The server, running in an infinite loop like while(true), waits for new messages to appear in the queue; when it finds one, it processes it and marks it as processed.
3) The job of the client threads is to create messages and put them in the queue, notifying the server that a new message has been added so the server knows there is a new message to process.
To make this program work, I think you need to learn about Thread's wait(), notify(), and notifyAll() methods. Basically, without sockets, what you are looking for is "inter-thread communication". This link can help.
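To illustrate, here is a hedged wait()/notifyAll() sketch of the hand-off described above; it is not a complete assignment solution and the names are illustrative:

import java.util.LinkedList;
import java.util.Queue;

public class NotifySketch {
    private static final Queue<String> messages = new LinkedList<>();

    public static void main(String[] args) throws InterruptedException {
        Thread server = new Thread(() -> {
            while (true) {
                String msg;
                synchronized (messages) {
                    while (messages.isEmpty()) {
                        try {
                            messages.wait();            // server sleeps until notified
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    msg = messages.poll();
                }
                System.out.println("processed: " + msg); // "mark it as processed"
            }
        });
        server.setDaemon(true);
        server.start();

        // A client adds a message and notifies the waiting server.
        synchronized (messages) {
            messages.add("hello from client");
            messages.notifyAll();
        }

        Thread.sleep(200); // give the demo server time to process before the JVM exits
    }
}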
Hope this helps.
Is there any way to communicate with clients via events? I mean:
I have a connected client, an InputStreamReader, and a PrintWriter:
in = new BufferedReader(new InputStreamReader(client.getInputStream()));
out = new PrintWriter(client.getOutputStream(), true);
When I use in.readLine(), the server waits for incoming data. But I have this situation:
Client didn't send any data
Connection is still alive
I need to send some data to the client (but in.readLine() is still blocking) and wait for a response.
The questions are:
What is the best way to handle incoming data asynchronously? I mean something like "events". Should I create one thread for reading and another thread for writing? If I can do it in one thread, could you please give a code example?
Is it possible to abort the wait in in.readLine()?
Java provides non-blocking i/o through the java.nio package (see here). But Java's "nio" channels do not inter-operate with streams from java.io. So, if you want to use nio, you'll have to build your server with nio from the listener on down.
If you're stuck with the existing java.io streams, then you'll either have to use a thread-per-client model, or you'll need to devise a system in which a single thread (or a pool of threads) manages a bunch of clients by looping over them repeatedly, polling instream.available() to figure out which ones have data ready to be handled. Of course, in the latter case you'd want to avoid busy-looping, so some appropriate use of Thread.sleep is probably also in order.
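As a sketch of the thread-per-client variant with plain java.io streams (method and message names are illustrative): one dedicated thread blocks in readLine(), while the rest of the program is free to write whenever it needs to.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class PerClientReader {
    public static void handle(Socket client) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);

        // The reader thread blocks in readLine() without holding up anything else.
        Thread reader = new Thread(() -> {
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("received: " + line);
                }
            } catch (IOException e) {
                // connection closed or broken
            }
        });
        reader.start();

        // Meanwhile, this thread can push data to the client at any time.
        out.println("server-initiated message");
    }
}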
In my opinion, having a separate thread perform the socket I/O is best if you want your program to behave asynchronously. Have a look at http://en.wikipedia.org/wiki/Observer_pattern.
For a simple application, what I'd do is create a separate thread to listen for incoming data and register 'observers' or 'event listeners' with that thread. When data comes in, notify your observers so they can perform the necessary actions.
While the listener thread is idle waiting for data, your main thread can still progress normally.
Make sure you're also familiar with Java concurrency programming.
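A small observer-style sketch of that idea; the class name and the Consumer-based listener interface are my own choices, not a fixed API:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class SocketListener {
    private final List<Consumer<String>> observers = new CopyOnWriteArrayList<>();

    public void addObserver(Consumer<String> observer) {
        observers.add(observer);
    }

    public void start(Socket socket) {
        Thread listener = new Thread(() -> {
            try (BufferedReader in =
                     new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    for (Consumer<String> o : observers) {
                        o.accept(line);                 // notify each observer of the new data
                    }
                }
            } catch (IOException e) {
                // socket closed
            }
        });
        listener.setDaemon(true);
        listener.start();                               // main thread keeps running normally
    }
}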
I'm trying to implement a stateful, multi-client server application and have some questions about the networking/threading design. The problem I'm currently facing is how to exchange messages between the communication layer and the logic layer.
The server handles multiple clients, where each of them can be active in multiple "channels", where each channel has multiple stages and may have multiple clients acting in it. Think of it to something similar as a chat program with multiple rooms.
I have already implemented the receiving of messages on the server side. Each client has its own thread that blockingly reads the data and decodes it into a message. Now how should I proceed? In my opinion, each channel should also have its own thread to easily maintain its state. I could use a BlockingQueue to exchange the received messages with the channel thread, which blocks waiting for new messages on that queue.
But then how to send messages to the clients? The logic in the channel will handle the message, and produce some messages to be sent to one/some/all of the clients. Is it safe to use the channel thread to directly write to the socket? Or should I use another BlockingQueue to transmit the messages to the client handler thread? But how to wake it then, since it's waiting on the socket to read? Or should I use a separate send-thread per client, or even a separate send-socket?
BTW: I know I could use existing libraries for the networking layer, but I want to do it from scratch on plain sockets.
Put a send-message method on the communication object that wraps the socket. Synchronize this method so that only one thread can call it at a time. Then it doesn't matter how many threads call it: each message will be sent one at a time. You also don't have to disturb the thread that's blocking on the read. The send method is a quick enough operation that you don't have to worry about other threads blocking while one thread sends.
As long as the channel has a reference to the communication objects for each connected client, it can send messages and not worry about it.
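A hedged sketch of such a communication wrapper (class and method names are illustrative):

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

public class ClientConnection {
    private final PrintWriter out;

    public ClientConnection(Socket socket) throws IOException {
        this.out = new PrintWriter(socket.getOutputStream(), true);
    }

    // Synchronized so any number of channel threads can call it; messages go out one at a time.
    public synchronized void send(String message) {
        out.println(message);
    }
}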
If it ever caused problems, you could always modify that send method to enqueue the object to be sent. Then you could have a dedicated send thread that blocks on the queue and writes the contents to the socket. But in my experience, this won't be necessary.
What about an event mechanism? When you have finished processing the request and there is data available for the client, simply send it with an event to the client's socket-handler thread. Since the transmission from the client has ended, you can send the reply normally, if I'm thinking about this correctly.