I have a socketChannel (java.nio.channels.SocketChannel) listening for reading requests (from multiple clients). It stores each request in a Request Queue.
Also socketChannel.configureBlocking(false)
Then I want the multiple threads to take one request at a time from the Request Queue and write to the socketChannel
I have read the following in the documentation.
Socket channels are safe for use by multiple concurrent threads. They
support concurrent reading and writing, though at most one thread may
be reading and at most one thread may be writing at any given time.
Since only one thread can write at a time, what can I do in the case of multiple writes?
You can use your own lock, either synchronized or a ReentrantLock, or queue the messages and have one thread do the actual writes.
The problem with writes is that you can only atomically write one byte at a time. If you write more than one byte, you might send some but not all of the data, in which case another thread could attempt to write its message and you would get a corrupted message.
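A minimal sketch of the queue approach, assuming a dedicated writer thread and a blocking channel for simplicity (with a non-blocking channel you would register for OP_WRITE when write() returns 0 instead of looping):

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Single writer thread: other threads only enqueue, so writes never interleave.
class ChannelWriter implements Runnable {
    private final SocketChannel channel;   // assumed already connected
    private final BlockingQueue<ByteBuffer> outgoing = new LinkedBlockingQueue<>();

    ChannelWriter(SocketChannel channel) { this.channel = channel; }

    // Called from any producer thread.
    void enqueue(ByteBuffer fullMessage) { outgoing.add(fullMessage); }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ByteBuffer msg = outgoing.take();
                while (msg.hasRemaining()) {   // write() may send only part of the buffer
                    channel.write(msg);
                }
            }
        } catch (Exception e) {
            // log and close the channel in real code
        }
    }
}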
I have a java.nio.channels.SocketChannel listening for reading requests (from multiple clients).
No, you don't. You might have a ServerSocketChannel that listens for connections from multiple clients, but once you have an accepted SocketChannel, it is only connected to one client. All you can get from it is sequential requests from that client.
It stores each request in a Request Queue.
I don't see any need for that.
Also socketChannel.configureBlocking(false)
Then I want the multiple threads to take one request at a time from the Request Queue and write to the socketChannel
Why not just compute the reply as soon as you read it and write it directly back?
I have read the following in the documentation.
Socket channels are safe for use by multiple concurrent threads. They support concurrent reading and writing, though at most one thread may be reading and at most one thread may be writing at any given time.
Since only one thread can write at a time, what can I do in the case of multiple writes?
What multiple writes? You only have one client request per channel. You only need to write one response per request. You should not read, let alone process, a new request until you've written the prior response.
I'm trying to wrap my head around building an asynchronous (non-blocking) HTTP server using Java NIO. I presently have a threadpool implementation and would like to turn it into an event-driven design with a single thread.
How exactly does an Event Driven server work?
Do we still need threads?
I've been reading on Java channels, buffers and selectors.
So after I create a ServerSocketChannel and the selector and listen for requests, do I need to hand over the requests to other threads so that they can process and serve them? If so, how is it any different from a threadpool implementation?
And if I don't create more threads to process the requests, how can the same thread still keep listening for requests and process them? I'm talking SCALABLE, say 1 million requests in total and 1000 coming in concurrently.
I've been reading on Java channels, buffers and selectors. So after I create a ServerSocketChannel and the selector and listen for requests, do I need to hand over the requests to other threads so that they can process and serve them?
No, the idea is that you process data as it is available, not necessarily using threads.
The complication comes out of the need to handle data as it comes. For instance, you might not get a full request at once. In that case, you need to buffer it somewhere until you have the full request, or process it piecemeal.
Once you have got the request, you need to send the response. Again, the whole response cannot normally be sent at once. You send as much as you can without blocking, then use the selector to wait until you can send more (or another event happens, such as another request coming in).
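A rough sketch of that single-threaded selector loop; the port and buffer size are arbitrary, and the request parsing and response writing are only indicated in comments:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class EventLoopServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();   // blocks until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    // attach a per-connection buffer to accumulate a partial request
                    client.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(8192));
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    if (client.read(buf) == -1) {      // client closed the connection
                        key.cancel();
                        client.close();
                    }
                    // else: check buf for a complete request; once complete, build the
                    // response and switch interest to OP_WRITE until it is fully sent
                }
            }
            selector.selectedKeys().clear();
        }
    }
}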
I have a simple client-server application using sockets for the communication. One possibility is to close the socket every time the client has sent something to the server.
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
How can I check in the server if there is new data available in a socket in the queue? The only thing I can imagine is to constantly iterate over the whole queue and check every socket to see if it has new data. But this would be very inefficient, because if I have several threads working on the queue, the queue gets blocked while one thread is scanning over it.
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
Keeping connections open will improve performance, though there are scaling issues: an open socket uses kernel resources. (I wouldn't use a queue though ...)
How can I check in the server if there is new data available in a socket in the queue?
If you have a number of sockets to different clients, and you want to process data in (roughly) the order that it arrives, there are two common techniques:
Create a thread per socket, and have each thread simply do a read. This will (naturally) block the thread until data becomes available.
Use the NIO channel selector mechanism (see Selector), which allows you to find out which of a group of I/O channels is ready for a read or write; a minimal sketch follows below.
Thread per socket tends to be resource hungry (thread stacks), and does not scale well at all if you have multiple threads that are active simultaneously. (Too many context switches, too much load on the thread scheduler.)
By contrast, selectors map onto native syscalls provided by the host operating system, and thus they are efficient and responsive ... if used intelligently.
(You could also obtain non-blocking channels for the sockets, and poll them round-robin fashion. But that isn't going to be either efficient or responsive.)
As you can see, none of these ideas work with a queue. Either you have a number of threads each dealing with one socket, or you have one thread dealing with an array or (array) list of sockets. The queue abstraction is not designed for indexing or iterating.
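A minimal sketch of the selector technique, replacing the queue with a registration step per accepted socket (the processing of the read data is left as a comment):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Instead of queueing open sockets, register each one with a shared Selector.
class SocketWatcher {
    private final Selector selector;

    SocketWatcher() throws IOException {
        this.selector = Selector.open();
    }

    // Call once for each accepted client connection.
    void addClient(SocketChannel socket) throws IOException {
        socket.configureBlocking(false);
        socket.register(selector, SelectionKey.OP_READ);
    }

    // One pass: blocks until at least one registered socket has data, then reads it.
    void pollOnce() throws IOException {
        selector.select();
        for (SelectionKey key : selector.selectedKeys()) {
            SocketChannel socket = (SocketChannel) key.channel();
            ByteBuffer buf = ByteBuffer.allocate(4096);
            int n = socket.read(buf);
            if (n == -1) {              // client closed the connection
                key.cancel();
                socket.close();
            } else if (n > 0) {
                buf.flip();
                // hand buf off to whatever processes this client's data
            }
        }
        selector.selectedKeys().clear();
    }
}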
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
See @Lolo's answer.
A practical solution would be to use NIO2 AsynchronousSocketChannels to perform asynchronous read operations with a callback that you can specify as a CompletionHandler.
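A minimal sketch of that callback style; the processing of the received bytes is only indicated by a comment:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

class AsyncReader {
    // Kick off an asynchronous read; the handler runs when data (or end-of-stream) arrives.
    void readAsync(AsynchronousSocketChannel channel) {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        channel.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer attachment) {
                if (bytesRead == -1) {     // peer closed the connection
                    return;
                }
                attachment.flip();
                // process the received bytes here, then re-arm for the next chunk
                readAsync(channel);
            }

            @Override
            public void failed(Throwable exc, ByteBuffer attachment) {
                // log the error and close the channel in real code
            }
        });
    }
}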
I have a scenario where multiple threads need to communicate with external system on one socket. Each thread's message can be identified by a unique id.
In this scenario where all threads share the same socket, can I use BlockingQueues? Since the threads can produce requests and consume responses, can I have a singleton component, say "Socketer", which holds the socket and has two BlockingQueues (incoming & outgoing)? Any message on the outgoing queue is written to the socket, and any message from the socket is put on the incoming queue. The Socketer also maintains a hashtable of all the producer threads, and as it reads a response, it identifies the corresponding producer and hands the response over to it.
Please suggest whether this is the right design approach or advise on improvements. My threads are actually web services and I am in a Spring environment.
Thanks
I don't see why you need the hash table, but you do need a response queue per thread. You could embed the correct response queue into the request message.
But are you sure you can't open multiple connections to the external system? It would make your life a lot simpler.
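A sketch of embedding the reply queue in the request, as suggested above; the Request fields and the correlation-id scheme are made up for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical request that carries its own reply queue, so no producer hashtable is needed.
class Request {
    final String correlationId;   // unique id written on the wire with the payload
    final String payload;
    final BlockingQueue<String> reply = new ArrayBlockingQueue<>(1);

    Request(String correlationId, String payload) {
        this.correlationId = correlationId;
        this.payload = payload;
    }
}

// Producer (web-service thread): put the Request on the outgoing queue, then block on
// its own reply:  String answer = request.reply.take();
//
// Socket-owning thread: after writing request.payload it keeps the Request (e.g. keyed by
// correlationId) until the matching response arrives, then calls request.reply.put(answer).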
I'm the main developer of an online game.
Players use a specific client software that connects to the game server with TCP/IP (TCP, not UDP)
At the moment, the architecture of the server is a classic multithreaded server with one thread per connection.
But in peak hours, when there are often 300 or 400 connected people, the server is getting more and more laggy.
I was wondering whether, by switching to a java.nio.* asynchronous I/O model with a few threads managing many connections, the performance would be better.
Finding example code on the web that covers the basics of such a server architecture is very easy. However, after hours of googling, I didn't find the answers to some more advanced questions:
1 - The protocol is text-based, not binary. The clients and the server exchange lines of text encoded in UTF-8. A single line of text represents a single command, and each line is properly terminated by \n or \r\n.
For the classic multithreaded server, I have this kind of code:
public Connection(Socket sock) throws IOException {
    this.in = new BufferedReader(new InputStreamReader(sock.getInputStream(), "UTF-8"));
    this.out = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream(), "UTF-8"));
    new Thread(this).start();
}
And then in run(), data is read line by line with readLine().
In the docs, I found a utility class, Channels, that can create a Reader out of a SocketChannel. But it is said that the produced Reader won't work if the channel is in non-blocking mode, which contradicts the fact that non-blocking mode is mandatory to use the highly performant channel-selection API I want to use. So I suspect that it isn't the right solution for what I would like to do.
The first question is therefore: if I can't use that, how do I efficiently and properly handle line breaks and convert native Java strings from/to UTF-8 encoded data in the NIO API, with buffers and channels?
Do I have to play with get/put, or manipulate the wrapped byte array by hand? How do I go from a ByteBuffer to strings encoded in UTF-8? I admit I don't understand very well how to use the classes in the charset package to do that.
2 - In the asynchronous/non-blocking I/O world, what about handling consecutive reads/writes that by nature have to be executed sequentially, one after the other?
For example, the login procedure, which is typically challenge-response based: the server sends a question (a particular computation), the client sends the response, and then the server checks the response given by the client.
The answer, I think, is certainly not to make the whole login process a single task to send to the worker threads, as it is quite long, with the risk of freezing worker threads for too long (imagine this scenario: 10 pool threads, 10 players trying to connect at the same time; tasks related to players already online are delayed until one thread is free again).
3 - What happens if two different threads simultaneously call Channel.write(ByteBuffer) on the same Channel?
Might the client receive mixed-up lines? For example, if one thread sends "aaaaa" and another sends "bbbbb", could the client receive "aaabbbbbaa", or am I assured that everything is sent in a consistent order? Am I allowed to modify the buffer right after the call returns?
Or asked differently, do I need additional synchronization to avoid this sort of situation?
If I need additional synchronization, how do I know when to release locks and so on, once a write finishes?
I'm afraid that the answer isn't as simple as registering for OP_WRITE in the selector. When trying that, I noticed that I get the write-ready event all the time, and always for all clients, causing Selector.select to exit early mostly for nothing, since there are only 3 or 4 messages to send per second per client, while the selection loop runs hundreds of times per second. So, potentially, a busy wait, which is very bad.
4 - Can multiple threads call Selector.select on the same selector simultaneously without any concurrency problems such as missing an event, scheduling it twice, etc?
5 - In fact, is NIO as good as it is said to be? Would it be interesting to stay with the classic multithreaded model, but instead of creating a thread per connection, use fewer threads and loop over the connections to look for data availability using InputStream.available()? Is that idea stupid and/or inefficient?
1) Yes. I think that you need to write your own nonblocking readLine method. Note also that a nonblocking read may be signaled when there are several lines in the buffer, or when there is an incomplete line:
Example: (first read)
USER foo
PASS
(second read)
bar
You will need to store (see 2) the data that was not consumed, until enough information is ready to process it.
//channel was selected for OP_READ
read data from channel
prepend data from previous read
split complete lines
save incomplete line
execute commands
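A sketch of that per-client line assembly using a CharsetDecoder; handleCommand is a placeholder for your own dispatching:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

// One instance per client: decodes UTF-8 incrementally and emits complete lines.
class LineReader {
    private final CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPLACE)
            .onUnmappableCharacter(CodingErrorAction.REPLACE);
    private final CharBuffer chars = CharBuffer.allocate(8192);
    private final StringBuilder pending = new StringBuilder();   // incomplete line so far

    // Call after the selector reported OP_READ and channel.read(buf) returned some bytes.
    void onRead(SocketChannel channel, ByteBuffer buf) {
        buf.flip();
        decoder.decode(buf, chars, false);   // false: more input follows, so a multi-byte
        buf.compact();                       // character split across reads is carried over
        chars.flip();
        pending.append(chars);
        chars.clear();

        int eol;
        while ((eol = pending.indexOf("\n")) >= 0) {
            String line = pending.substring(0, eol).replace("\r", "");
            pending.delete(0, eol + 1);
            // handleCommand(channel, line);   // placeholder: execute the command
        }
    }
}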
2) You will need to keep the state of each client.
Map<SocketChannel,State> clients = new HashMap<SocketChannel,State>();
when a channel is connected, put a fresh state into the map
clients.put(channel,new State());
Or store the current state as the attached object of the SelectionKey.
Then, when executing each command, update the state. You may write it as a monolithic method, or do something more fancy such as polymorphic implementations of State, where each state knows how to deal with some commands (e.g. LoginState expects USER and PASS, then you change the state into a new AuthorizedState).
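For instance, attached to the key (the State classes and command names here are just illustrative):

// Hypothetical per-client protocol state attached to the SelectionKey.
interface State {
    State handle(String line);   // process one command line and return the next state
}

class LoginState implements State {
    private String user;

    @Override
    public State handle(String line) {
        if (line.startsWith("USER ")) {
            user = line.substring(5);
            return this;                        // still logging in, wait for PASS
        }
        if (line.startsWith("PASS ") && user != null) {
            return new AuthorizedState(user);   // check the credentials for real here
        }
        return this;
    }
}

class AuthorizedState implements State {
    private final String user;
    AuthorizedState(String user) { this.user = user; }

    @Override
    public State handle(String line) {
        // dispatch the normal commands available to an authenticated user
        return this;
    }
}

// In the selector loop, once a complete line has been assembled:
//   State state = (State) key.attachment();
//   key.attach(state.handle(line));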
3) I don't recall using NIO with many asynchronous writers per channel, but the documentation says it is thread safe (I won't elaborate, since I have no proof of this). About OP_WRITE, note that it signals when the write buffer is not full. In other words, as said here: OP_WRITE is almost always ready, i.e. except when the socket send buffer is full, so you will just cause your Selector.select() method to spin mindlessly.
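The usual way around that is to register OP_WRITE only while there is unsent data for a connection, roughly like this (one such object per connection, called from the selector thread):

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

// Outgoing data for one connection; OP_WRITE is registered only while it is non-empty.
class ConnectionOutput {
    private final Queue<ByteBuffer> pendingWrites = new ArrayDeque<>();

    void send(SelectionKey key, ByteBuffer data) {
        pendingWrites.add(data);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);   // wake us when writable
    }

    void onWritable(SelectionKey key) throws Exception {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!pendingWrites.isEmpty()) {
            ByteBuffer buf = pendingWrites.peek();
            channel.write(buf);
            if (buf.hasRemaining()) {
                return;                   // send buffer full again; keep OP_WRITE and wait
            }
            pendingWrites.remove();
        }
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);  // nothing left: stop spinning
    }
}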
4) Yes. Selector.select() performs a blocking selection operation.
5) I think that the most difficult part is switching from a thread-per-client architecture to a different design where reads and writes are decoupled from processing. Once you have done that, it is easier to work with channels than to work out your own way with blocking streams.
I'm developing a small client-server program in Java.
The client and the server are connected over one tcp-connection. Most parts of the communication are asynchronous (can happen at any time) but some parts I want to be synchronous (like ACKs for a sent command).
I use a Thread that reads commands from the socket's InputStream and raises an onCommand() event. The command itself is processed using the Command design pattern.
What would be a best-practice approach (in Java) to enable waiting for an ACK without missing other commands that could arrive at the same time?
con.sendPacket(new Packet("ABC"));
// wait for ABC_ACK
edit1
Think of it like an FTP connection, except that both data and control commands are on the same connection. I want to catch the response to a control command while the data flow keeps running in the background.
edit2
Everything is sent in blocks to enable multiple (different) transmissions over the same TCP connection (multiplexing).
Block:
1 byte - block's type
2 bytes - block's payload length
n bytes - block's payload
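For reference, framing one such block with a ByteBuffer could look like this (the type value 0x01 and the payload are just examples):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class Blocks {
    // Frame one block: 1 byte type, 2 bytes payload length, then the payload itself.
    static ByteBuffer encodeBlock(byte type, byte[] payload) {
        ByteBuffer block = ByteBuffer.allocate(1 + 2 + payload.length);
        block.put(type);
        block.putShort((short) payload.length);   // 2-byte length field, so payload <= 65535 bytes
        block.put(payload);
        block.flip();                             // ready for channel.write(block)
        return block;
    }
}

// Example: a control block carrying a UTF-8 command.
// ByteBuffer b = Blocks.encodeBlock((byte) 0x01, "ABC".getBytes(StandardCharsets.UTF_8));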
In principle, you need a registry of blocked threads (or better, the locks on which they are waiting), keyed with some identifier which will be sent by the remote side.
For asynchronous operation, you simply send the message and proceed.
For synchronous operation, after sending the message, your sending thread (or the thread which initiated this) creates a lock object, adds it with some key to the registry, and then waits on the lock until notified.
The reading thread, when it receives some answer, looks in the registry for the lock object, adds the answer to it, and calls notify(). Then it goes on to read the next input.
The hard work here is the proper synchronization to avoid deadlocks, as well as not missing a notification (because the answer comes back before we added ourselves to the registry).
I did something like this when I implemented the remote method calling protocol for our Fencing-applet. In principle RMI works the same way, just without the asynchronous messages.
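A rough sketch of such a registry, using a CountDownLatch per pending request instead of raw wait()/notify(); the id scheme is whatever your protocol echoes back:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Registry of requests waiting for their ACK, keyed by the id echoed by the remote side.
class PendingReplies {
    private static final class Waiter {
        final CountDownLatch done = new CountDownLatch(1);
        volatile String reply;
    }

    private final ConcurrentHashMap<String, Waiter> waiting = new ConcurrentHashMap<>();

    // Sending thread: register BEFORE writing the request, so the reply cannot be missed.
    void register(String id) {
        waiting.put(id, new Waiter());
    }

    // Sending thread: block until the reading thread delivers the reply (or we time out).
    String await(String id, long timeoutMs) throws InterruptedException {
        Waiter w = waiting.get(id);
        try {
            if (w == null || !w.done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                return null;   // not registered, or timed out
            }
            return w.reply;
        } finally {
            waiting.remove(id);
        }
    }

    // Reading thread: called when a block whose id matches a pending request arrives.
    void onReply(String id, String reply) {
        Waiter w = waiting.get(id);
        if (w != null) {
            w.reply = reply;
            w.done.countDown();
        }
        // else: an asynchronous message, dispatched through onCommand() as usual
    }
}

// Usage from a sending thread:
//   pending.register("ABC");
//   con.sendPacket(new Packet("ABC"));
//   String ack = pending.await("ABC", 5000);   // waits for ABC_ACK without blocking the reader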
@Paulo's solution is one I have used before. However, there may be a simpler solution.
Say you don't have a background thread reading results from the connection. What you can do instead is use the current thread to read any results.
// Asynchronous call
conn.sendMessage("Async-request");
// server sends no reply.
// Synchronous call.
conn.sendMessage("Sync-request");
String reply = conn.readMessage();