Check if ObjectInputStream has anything to read without blocking? - java

I am building a server in Java that communicates with several clients at the same time. The initial approach we had is that the server listens for connections from the clients; once a connection is received and a socket is created, a new thread is spawned to handle the communication with that client: read the request with an ObjectInputStream, do the desired operation (fetch data from the DB, update it, etc.), and send back a response to the client (if needed), while the server itself goes back to listening for more connections.
This works fine for the time being; however, this approach is not really scalable. It works great for a small number of clients connected at the same time, but since every client spawns another thread, what will happen when there are too many clients connected at once?
So my next idea was to maintain a list of sorts that holds all connected clients (the socket object and some extra info), use a ThreadPool to iterate through them and read anything they sent; if a message was received, put it in a queue for execution by another ThreadPool of worker threads, and once the worker has finished with its task, send back a response if one is required.
The latter two steps are pretty trivial to implement. The problem is that with the original thread-per-client implementation, I use ObjectInputStream.readObject() to read the message, and this method blocks until there is something to read, which is fine for that approach. But I can't use the same thing for the new approach, since if I block on every socket, I will never get to the ones that are further down the list.
So I need a way to check whether I have anything to read before I call readObject(). So far I have tried the following solutions:
Solution 1:
Use ObjectInputStream.available() to check if there is anything available to read. This approach failed since the method seems to always return 0, regardless of whether there is an object in the stream or not, so it does not help at all.
Solution 2:
Use PushbackInputStream to check for the existence of the first unread byte in the stream; if it exists, push it back and read the object using the ObjectInputStream, and if it doesn't, move on:
boolean available;
int b = pushbackinput.read();
if (b == -1) {
    available = false;
} else {
    pushbackinput.unread(b);
    available = true;
}
if (available) {
    Object message = objectinput.readObject();
    // continue with what you need to do with that object
}
This turned out to be useless too, since read() also blocks if there is no input to read. It seems to return -1 only if the stream was closed; if the stream is still open but empty, it just blocks, so this is no different from simply using ObjectInputStream.readObject().
Can anyone suggest an approach that will actually work?

This is a good question, and you've done some homework... but it involves going through some history to get things right. Note that your issue is actually more to do with the socket-level communication than with the ObjectInputStream:
The easiest way to do things in the past was to have a separate thread per socket. This was scalable to a point but threads were expensive and slow to create.
In response, for large systems, people created thread pools and would service the sockets on threads when there was work to do. This was complicated.
The Java platform was then extended with the java.nio package, which introduced the Selector together with non-blocking IO. This created a reliable (although sometimes confusing) way to service multiple sockets with fewer threads. In your case, though, it would not help much, because you want to know when a full Object is ready to be read, not merely when there is 'some' data available.
In the interim the 'landscape' changed, and Java is now able to create and manage threads much more efficiently. 'Current' thinking is that it is better/faster and easier to allocate a single thread per socket again; see Java thread per connection model vs NIO.
In your case, I would suggest that you stick with the thread-per-socket model, and you'll be fine. Java can scale to and handle far more threads than you are likely to have sockets.
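For reference, a minimal sketch of that thread-per-socket model with ObjectInputStream, essentially what the question already describes (the class name, port and process() placeholder are illustrative, not taken from the original code):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4444)) {
            while (true) {
                Socket client = server.accept();          // blocks until a client connects
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        // create the output stream first so its header goes out before we wait on input
        try (ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            while (true) {
                Object request = in.readObject();  // blocks only this client's thread
                Object response = process(request);
                if (response != null) {
                    out.writeObject(response);
                    out.flush();
                }
            }
        } catch (IOException | ClassNotFoundException e) {
            // client disconnected or sent something unreadable; drop the connection
        }
    }

    private static Object process(Object request) {
        return request; // placeholder: do the DB work here
    }
}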

Related

Java: Managing more connections than there are threads, using a queue

For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: A thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, all not satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Both methods don't detect a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
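A minimal sketch of that selector idea (the port and the buffer handling are illustrative; a real server would also keep per-connection state, as discussed below):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(4444));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer); // never blocks in non-blocking mode
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        // hand the buffer contents to a worker / request queue here
                    }
                }
            }
        }
    }
}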
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away, since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old; in that case we perform an actual read that will return -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100 ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException.
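A rough sketch of that strategy as one worker-side helper (the ClientConnection class and the handle() placeholder are mine, not from the discussion above):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class ClientConnection {
    final Socket socket;
    long lastBlockingRead = System.currentTimeMillis();

    ClientConnection(Socket socket) throws IOException {
        this.socket = socket;
        socket.setSoTimeout(100); // a blocking read gives up after 100 ms
    }

    /** Returns true if the connection should go back into the queue, false if it is dead. */
    boolean poll() throws IOException {
        InputStream in = socket.getInputStream();
        long quietFor = System.currentTimeMillis() - lastBlockingRead;

        if (quietFor < 1000) {
            // Recently probed: only read if we know it won't block.
            if (in.available() > 0) {
                handle(in.read()); // in reality: read and process a full request here
            }
            return true;
        }

        // Quiet for too long: do a real (bounded) blocking read to probe the socket.
        try {
            int b = in.read();      // blocks for at most the SO_TIMEOUT set above
            if (b == -1) {
                return false;       // remote end closed the connection
            }
            handle(b);
        } catch (SocketTimeoutException e) {
            // still connected, just nothing to say
        }
        lastBlockingRead = System.currentTimeMillis();
        return true;
    }

    private void handle(int firstByte) {
        // process the request that starts with firstByte
    }
}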
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise you might have to imagine the specifics).
A mix of both; actually this is probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they are still alive.

Avoid waiting on Servlet streams

My Servlet spends quite some time in reading request.getInputStream() and writing to response.getOutputStream(). In the long run, this can be a problem, as it's blocking a thread for nothing but reading/writing literally a few bytes per second. (*)
I'm never interested in partial request data; the processing should not start before the request is completely available. Similarly for the response.
I guess, asynchronous IO would solve it, but I wonder what's the proper way. Maybe a servlet Filter replacing the ServletInputStream by a wrapped ByteArrayInputStream, using request.startAsync and calling the chained servlet after having collected the whole input?
Is there already such a filter?
Should I write one or should I use a different approach?
Note that what I mean is to avoid wasting threads on slow servlet streams. This isn't the same as startAsync which avoids wasting threads just waiting for some event.
And yes, at the moment it'd be a premature optimization.
My read loop as requested
There's nothing interesting in my current input stream reading method, but here you are:
private byte[] getInputBytes() throws IOException {
    ServletInputStream inputStream = request.getInputStream();
    final int len = request.getContentLength();
    if (len >= 0) {
        final byte[] result = new byte[len];
        ByteStreams.readFully(inputStream, result);
        return result;
    } else {
        return ByteStreams.toByteArray(inputStream);
    }
}
That's all and it blocks when data aren't available; ByteStreams come from Guava.
Summary of my understanding so far
As the answers clearly state, it's impossible to work with servlet streams without wasting a thread on them. Neither the servlet architecture nor the common implementations expose anything allowing you to say "buffer the whole data and call me only when you have collected everything", although they use NIO and could do it.
The reason may be that usually a reverse proxy like nginx gets used, which can do it. nginx does this buffering by default, and it couldn't even be switched off until two years ago.
Actually a supported case???
Given the many negative answers, I'm not sure, but it looks like my goal
to avoid wasting threads on slow servlet streams
is actually fully supported: since Servlet 3.1, there's ServletInputStream#setReadListener, which seems to be meant exactly for this. The thread allocated for processing Servlet#service initially calls request.startAsync(), attaches the listener and gets returned to the pool by simply returning from service. The listener implements onDataAvailable(), which gets called when it's possible to read without blocking, adds a piece of data and returns. In onAllDataRead(), I can do the whole processing of the collected data.
There's an example of how it can be done with Jetty. It seems to cover non-blocking output as well.
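A hedged sketch of what such a listener could look like (javax.servlet, Servlet 3.1; the servlet name, URL pattern and buffering details are illustrative, not taken from the Jetty example):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class NonBlockingReadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        AsyncContext ctx = req.startAsync();
        ServletInputStream in = req.getInputStream();
        ByteArrayOutputStream body = new ByteArrayOutputStream();

        in.setReadListener(new ReadListener() {
            final byte[] chunk = new byte[4096];

            @Override
            public void onDataAvailable() throws IOException {
                // drain whatever can be read without blocking, then give the thread back
                while (in.isReady()) {
                    int n = in.read(chunk);
                    if (n == -1) {
                        return;
                    }
                    body.write(chunk, 0, n);
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                // the whole request body is now in 'body'; process it and complete
                ctx.getResponse().getOutputStream().print("got " + body.size() + " bytes");
                ctx.complete();
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
        // returning from doPost gives the request thread back to the pool
    }
}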
(*) In the logfiles, I can see requests taking up to eight seconds which get spent on reading the input (100 bytes header + 100 bytes data). Such cases are rare, but they do happen, although the server is mostly idle. So I guess it's a mobile client on a very bad connection (some users of ours connect from places having such bad connectivity).
HttpServletRequest#startAsync() isn't useful for this. That's only useful for push things like web sockets and good ol' SSE. Moreover, the JSR 356 Web Socket API is built on top of it.
Your concrete problem is understood, but this definitely can't be solved from within the servlet. You'd only end up wasting yet more threads, for the very simple reason that the container has already dedicated the current thread to the servlet request until the request body is read fully, up to the last bit, even if it's ultimately read by a newly spawned async thread.
To save threads, you actually need a servlet container which supports NIO and, if necessary, turn on that feature. With NIO, a single thread can handle as many TCP connections as the available heap memory allows, instead of allocating a single thread per TCP connection. Then, in your servlet, you don't need to worry about this delicate I/O task at all.
Almost all modern servlet containers support it: Undertow (WildFly), Grizzly (GlassFish/Payara), Tomcat, Jetty, etc. Some have it enabled by default, others require extra configuration. Just refer to their documentation using the keyword "NIO".
If you'd actually also want to save the servlet request thread itself, then you'd basically need to go a step back, drop servlets and implement a custom NIO based service on top of an existing NIO connector (Undertow, Grizzly, Jetty, etc).
You can't. The Servlet container allocates the thread to the request, and that's the end of it, it's allocated. That's the model. If you don't like that, you will have to stop using Servlets.
Even if you could solve (1), you can't start async I/O on an input stream.
The way to handle slow requests is to time them out, by setting the appropriate setting for whatever container you're using ... if you actually have a problem, and it's far from clear that you really do, with a mostly idle server and this only happening rarely.
Your read loop makes a distinction without a difference. Just read the request input stream to its end. The servlet container already ensures that end of stream happens at the content-length if provided.
There's a class called org.apache.catalina.connector.CoyoteAdapter, which is the class that receives the marshaled request from TCP worker thread. It has a method called "service" which does the bulk of the heavy lifting. This method is called by another class: org.apache.coyote.http11.Http11Processor which also has a method of the same name.
I find it interesting that I see so many hooks in the code to handle async IO, which makes me wonder if this is not a built-in feature of the container already. Anyway, with my limited knowledge, the best way that I can think of to implement the feature you are talking about would be to create a class:
public class MyAsyncReqHandlingAdapter extends CoyoteAdapter, @Override the service() method and roll your own... I don't have the time to devote to doing this now, but I may revisit it in the future.
In this method you would need a way to identify slow requests and handle them by handing them off to a single-threaded NIO processor and "completing" the request at that level, which, given the source code:
https://github.com/apache/tomcat/blob/075920d486ca37e0286586a9f017b4159ac63d65/java/org/apache/coyote/http11/Http11Processor.java
https://github.com/apache/tomcat/blob/3361b1321201431e65d59d168254cff4f8f8dc55/java/org/apache/catalina/connector/CoyoteAdapter.java
you should be able to figure out how to do it. Interesting question, and yes, it can be done. Nothing I see in the spec says that it cannot...

From classic multithreaded to java.nio asynchronous/non-blocking server

I'm the main developer of an online game.
Players use a specific client software that connects to the game server with TCP/IP (TCP, not UDP)
At the moment, the architecture of the server is a classic multithreaded server with one thread per connection.
But in peak hours, when there are often 300 or 400 connected people, the server is getting more and more laggy.
I was wondering whether, by switching to a java.nio.* asynchronous I/O model with a few threads managing many connections, the performance would be better.
Finding example code on the web that covers the basics of such a server architecture is very easy. However, after hours of googling, I didn't find the answers to some more advanced questions:
1 - The protocol is text-based, not binary-based. The clients and the server exchange lines of text encoded in UTF-8. A single line of text represents a single command, and each line is properly terminated by \n or \r\n.
For the classic multithreaded server, I have this kind of code:
public Connection(Socket sock) {
    this.in = new BufferedReader(new InputStreamReader(sock.getInputStream(), "UTF-8"));
    this.out = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream(), "UTF-8"));
    new Thread(this).start();
}
And then in run(), data are read line by line with readLine.
In the doc, I found a utility class, Channels, that can create a Reader out of a SocketChannel. But it is said that the produced Reader won't work if the Channel is in non-blocking mode, which contradicts the fact that non-blocking mode is mandatory for the highly performant channel-selection API I want to use. So, I suspect that it isn't the right solution for what I would like to do.
The first question is therefore the following: if I can't use that, how do I efficiently and properly take care of breaking lines and converting native Java strings from/to UTF-8 encoded data in the nio API, with buffers and channels?
Do I have to play with get/put or work inside the wrapped byte array by hand? How do I go from a ByteBuffer to strings encoded in UTF-8? I admit I don't understand very well how to use the classes in the charset package and how they work for this.
2 - In the asynchronous/non-blocking I/O world, what about the handling of consecutive reads/writes that by nature have to be executed sequentially, one after the other?
For example, the login procedure, which is typically challenge-response based: the server sends a question (a particular computation), the client sends the response, and then the server checks the response given by the client.
The answer is, I think, certainly not to make a single task for the whole login process and send it to the worker threads, as it is quite long, with the risk of freezing worker threads for too much time (imagine this scenario: 10 pool threads, 10 players try to connect at the same time; tasks related to players already online are delayed until one thread is ready again).
3 - What happens if two different threads simultaneously call Channel.write(ByteBuffer) on the same Channel?
Might the client receive mixed-up lines? For example, if one thread sends "aaaaa" and another sends "bbbbb", could the client receive "aaabbbbbaa", or am I ensured that everything is sent in a consistent order? Am I allowed to modify the buffer used right after the call returns?
Or asked differently, do I need additional synchronization to avoid this sort of situation?
If I need additional synchronization, how do I know when to release locks and so on when a write finishes?
I'm afraid that the answer isn't as simple as registering for OP_WRITE in the selector. By trying that, I noticed that I get the write-ready event all the time and always for all clients, exiting Selector.select early, mostly for nothing, since there are only 3 or 4 messages to send per second per client, while the selection loop is performed hundreds of times per second. So, potentially, a busy wait in perspective, which is very bad.
4 - Can multiple threads call Selector.select on the same selector simultaneously without any concurrency problems such as missing an event, scheduling it twice, etc?
5 - In fact, is nio as good as it is said to be? Would it be interesting to stay with the classic multithreaded model, but instead of creating a thread per connection, use fewer threads and loop over the connections to look for data availability using InputStream.available()? Is that idea stupid and/or inefficient?
1) Yes. I think that you need to write your own nonblocking readLine method. Note also that a nonblocking read may be signaled when there are several lines in the buffer, or when there is an incomplete line:
Example: (first read)
USER foo
PASS
(second read)
bar
You will need to store (see 2) the data that was not consumed, until enough information is ready to process it.
// channel was selected for OP_READ
read data from channel
prepend data from previous read
split complete lines
save incomplete line
execute commands
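A rough sketch of the read / prepend / split / save steps above (the LineReader class and its buffering are my own illustration, assuming one instance per client; see the comment about multi-byte characters split across reads):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class LineReader {
    private final StringBuilder pending = new StringBuilder(); // saved incomplete line
    private final ByteBuffer buf = ByteBuffer.allocate(4096);

    /** Called when the channel is selected for OP_READ; returns the complete lines received so far. */
    List<String> readLines(SocketChannel channel) throws IOException {
        List<String> lines = new ArrayList<>();
        buf.clear();
        int n = channel.read(buf);
        if (n == -1) {
            throw new IOException("peer closed the connection");
        }
        buf.flip();
        // NOTE: decoding each chunk independently is only safe if a multi-byte UTF-8
        // character never straddles two reads; keeping a CharsetDecoder per client
        // handles that case properly.
        pending.append(StandardCharsets.UTF_8.decode(buf));

        int newline;
        while ((newline = pending.indexOf("\n")) >= 0) {
            String line = pending.substring(0, newline);
            pending.delete(0, newline + 1);
            if (line.endsWith("\r")) {
                line = line.substring(0, line.length() - 1);
            }
            lines.add(line); // a complete command, ready to execute
        }
        return lines;
    }
}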
2) You will need to keep the state of each client.
Map<SocketChannel,State> clients = new HashMap<SocketChannel,State>();
when a channel is connected, put a fresh state into the map
clients.put(channel,new State());
Or store the current state as the attached object of the SelectionKey.
Then, when executing each command, update the state. You may write it as a monolithic method, or do something more fancy such as polymorphic implementations of State, where each state knows how to deal with some commands (e.g. LoginState expects USER and PASS, then you change the state into a new AuthorizedState).
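A tiny sketch of the SelectionKey-attachment variant (the State interface and method names here are just stand-ins):

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

interface State {
    State handle(SelectionKey key, String line); // returns the next state
}

class StateDemo {
    static void register(Selector selector, SocketChannel channel, State initial) throws Exception {
        channel.configureBlocking(false);
        channel.register(selector, SelectionKey.OP_READ, initial); // state rides along as the attachment
    }

    static void onLine(SelectionKey key, String line) {
        State current = (State) key.attachment();
        key.attach(current.handle(key, line)); // e.g. LoginState returns an AuthorizedState after USER/PASS
    }
}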
3) I don't recall using NIO with many asynchronous writers per channel, but the documentation says it is thread safe (I won't elaborate, since I have no proof of this). About OP_WRITE, note that it signals when the write buffer is not full. In other words, as said here: OP_WRITE is almost always ready, i.e. except when the socket send buffer is full, so you will just cause your Selector.select() method to spin mindlessly.
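A common way around that spinning is to register interest in OP_WRITE only while data is actually queued for the channel, and drop it again once the queue drains; a sketch (not from the answer above, and left unsynchronized for brevity, so a real server would coordinate it with the selector thread):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Deque;

class OutboundQueue {
    private final Deque<ByteBuffer> pending = new ArrayDeque<>();

    /** Queue data and ask the selector to tell us when the socket can take it. */
    void enqueue(SelectionKey key, ByteBuffer data) {
        pending.add(data);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        key.selector().wakeup();
    }

    /** Called when the key is selected for OP_WRITE. */
    void flush(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!pending.isEmpty()) {
            ByteBuffer head = pending.peek();
            channel.write(head);
            if (head.hasRemaining()) {
                return; // socket send buffer is full again; keep OP_WRITE and try later
            }
            pending.remove();
        }
        // nothing left to send: stop asking for OP_WRITE so select() doesn't spin
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}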
4) Yes. Selector.select() performs a blocking selection operation.
5) I think that the most difficult part is switching from a thread-per-client architecture, to a different design where reads and writes are decoupled from processing. Once you have done that, it is easier to work with channels than working your own way with blocking streams.

Chat system in Java

Is there a way to immediately print the message received from the client without using an infinite loop to check whether the input stream is empty or not?
Because I found that using an infinite loop consumes a lot of system resources, which makes the program run very slowly. And we also have to do the same (infinite loop) on the client side to print the message on the screen in real time.
I'm using Java.
You should be dealing with the input stream in a separate Thread - and let it block waiting for input. It will not use any resources while it blocks. If you're seeing excessive resource usage while doing this sort of thing, you're doing it wrong.
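A minimal sketch of that (the class name and message handling are placeholders):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

class IncomingMessagePrinter implements Runnable {
    private final Socket socket;

    IncomingMessagePrinter(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) { // blocks here, using no CPU
                System.out.println(line);            // print the message as soon as it arrives
            }
        } catch (IOException e) {
            // connection closed or broken; let the thread end
        }
    }
}

// usage: new Thread(new IncomingMessagePrinter(socket)).start();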
I think you can just put your loop in a different thread and have it sleep a bit (maybe for half a second?) between iterations. It would still be an infinite loop, but it would not consume nearly as many resources.
Why don't you change your architecture a little bit to accommodate WebSockets? Check out Socket.IO; it is a cross-browser WebSockets enabler.
You will have to write controllers (servlets for example in java) that push data to the client. This does not follow the request-response architecture.
You can also architect it so that a "push servlet" triggers a "request" from the client to obtain the "response".
Since your question talks about Java, and if you are interested in WebSockets, check this link out.
If you're using Sockets (which you should be for any networking), you can wrap the stream from socket.getInputStream() in a DataInputStream and do the following:
public DataInputStream streamIn;
public Socket soc;

// initialize socket, etc...
streamIn = new DataInputStream(soc.getInputStream());

public String getInput() throws IOException {
    return streamIn.readUTF(); // readUTF() already returns a String, no cast needed
}
streamIn.readUTF() blocks until data is available, meaning you don't have to loop, and threading will let you do other processing while you wait for data.
Look here for more information on DataInputStream and what you can do with it: http://docs.oracle.com/javase/6/docs/api/java/io/DataInputStream.html
A method that does not require threads would involve subclassing the input stream and adding a notify type method. When called this method would alert any interested objects (i.e. objects that would have to change state due to the additions to the stream) that changes have been made. These interested objects could then respond in anyway that is desired.
Objects writing to the buffer would do their normal writing, and afterward would call the notify() method on the input stream, informing all interested objects of the change.
Edit: This might require subclassing more than a couple of classes and so could involve a lot of code changes. Without knowing more about your design you would have to decide if the implementation is worth the effort.
There are two approaches that avoid busy loops / sleeps.
Use a thread for each client connection, and simply have each thread call read. This blocks the thread until the client sends some data, but that's no problem because it doesn't block the threads handling other clients.
Use Java NIO channel selectors. These allow a thread to wait until one of set of channels (in this case sockets) has data to be read. There is a section of the Oracle Java Tutorials on this.
Of these two approaches, the second one is most efficient in terms of overall resource usage. (The thread-per-client approach uses a lot of memory on thread stacks, and CPU on thread switching overheads.)
Busy loops that repeatedly call (say) InputStream.available() to see if there is any input are horribly inefficient. You can make them less inefficient by slowing down the polling with Thread.sleep(...) calls, but this has the side effect of making the service less responsive. For instance, if you add a 1 second sleep between each set of polls, the effect that each client will see is that the server typically delays 1 second before processing each request. Assuming that those requests are keystrokes and the responses echo them, the net result is a horribly laggy service.

Stateless Blocking Server Design

A little help please.
I am designing a stateless server that will have the following functionality:
Client submits a job to the server.
Client is blocked while the server tries to perform the job.
The server will spawn one or multiple threads to perform the job.
The job either finishes, times out or fails.
The appropriate response (based on the outcome) is created, the client is unblocked and the response is handed off to the client.
Here is what I have thought of so far.
Client submits a job to the server.
The server assigns an ID to the job, places the job on a Queue and then places the Client on another queue (where it will be blocked).
Have a thread pool that will execute the job, fetch the result and appropriately create the response.
Based on ID, pick the client out of the queue (thereby unblocking it), give it the response and send it off.
Steps 1, 3 and 4 seem quite straightforward; however, any ideas about how to put the client in a queue and then block it? Also, any pointers that would help me design this puppy would be appreciated.
Cheers
Why do you need to block the client? Seems like it would be easier to return (almost) immediately (after performing initial validation, if any) and give client a unique ID for a given job. Client would then be able to either poll using said ID or, perhaps, provide a callback.
Blocking means you're holding on to a socket, which obviously limits the upper number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over client code and can't make them poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by the common thread pool. The workflow would basically be:
Create a thread pool (such as ThreadPoolExecutor)
For each client request:
If you have any parts of the job that you can execute in parallel, delegate them to the pool.
And / or do them in the current thread.
Wait until pooled job parts complete (if applicable).
Return results to client.
Shutdown the thread pool.
No IDs are needed per se; though you may need to use some sort of latch for 2.1 / 2.3 above.
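A rough sketch of steps 1 and 2.1-2.4, using invokeAll, which already gives the latch-like wait mentioned above (the pool size, job-splitting and response format are placeholders):

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class BlockingJobServer {
    private final ExecutorService pool = Executors.newFixedThreadPool(8); // step 1

    /** Handles one client request on the thread that accepted it (steps 2.1 - 2.4). */
    String handle(List<Callable<String>> parallelParts, long timeoutMillis) throws InterruptedException {
        // 2.1 delegate the parallelisable parts to the pool, 2.3 wait for them (with a timeout)
        List<Future<String>> results = pool.invokeAll(parallelParts, timeoutMillis, TimeUnit.MILLISECONDS);

        StringBuilder response = new StringBuilder();
        for (Future<String> f : results) {
            try {
                response.append(f.get()); // already done (or cancelled) once invokeAll returns
            } catch (CancellationException e) {
                return "TIMEOUT";          // this part did not finish in time
            } catch (ExecutionException e) {
                return "FAILED: " + e.getCause();
            }
        }
        return response.toString();        // 2.4 returned to the still-blocked client
    }

    void shutdown() {
        pool.shutdown();                    // step 3
    }
}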
Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal the submitted job parts (by flipping a flag) when the timeout is reached, and return immediately. You'll have to check said flag periodically and terminate your execution once it's flipped; the pool will then reclaim the thread.
How are you communicating to the client?
I recommend you create an object to represent each job which holds job parameters and the socket (or other communication mechanism) to reach the client. The thread pool will then send the response to unblock the client at the end of job processing.
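For example, something along these lines (the Job fields, the one-shot text response and the protocol are illustrative only):

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

class Job implements Runnable {
    private final String parameters; // whatever the client submitted
    private final Socket client;     // kept so we can unblock the client later

    Job(String parameters, Socket client) {
        this.parameters = parameters;
        this.client = client;
    }

    @Override
    public void run() {
        String result = doWork(parameters); // runs on a pool thread
        try (PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println(result);            // the client, blocked on its read, now gets the response
        } catch (IOException e) {
            // client went away before we could answer
        }
        // closing the writer also closes the socket, which is fine for a one-shot job
    }

    private String doWork(String parameters) {
        return "done: " + parameters; // placeholder for the real job
    }
}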
The timeouts will be somewhat tricky and will have hidden gotchas, but the basic design seems straightforward: write a class that takes a Socket in the constructor. On socket.accept() you simply instantiate a new socket-processing object (with great foresight and planning on scalability, or if this is a bench-test experiment). The socket-processing class then goes off to do the data processing, and when it returns you have some sort of boolean or numeric status (a handy place for null, by the way), and it either writes the success to the socket's OutputStream, or informs the client of a timeout, or whatever your business needs are.
If you have to have a scalable, effective design for long-running heavy haulers, go directly to NIO... hand-coded one-off solutions like the one I describe probably won't scale well, but they provide a fundamental conceptual basis for a code-correct NIO design.
(Sorry folks, I think directly in code; design patterns are then applied to the code after it is working. What does not hold up gets reworked then, not before.)
