Server disconnects in Java

Related posts didn't answer my question.
I have a server which does something like:
// every two seconds:
if (inputStream != null) {
    if (inputStream.available() == 0) {
        return;
    }
    System.out.println("handling input stream");
    handleTheInputStream();
}
Even after my client disconnects, the server doesn't recognize it through an IOException. The other post said that I would see an end-of-stream indication. However, that is not the case: after my client disconnects I never see "handling input stream", which means no data ever appears to be available.
Perhaps something is wrong with the way I currently understand how this works.
Please help.

Don't use available() - that says whether or not there's currently data available, not whether there will be data available in the future. In other words, it's the wrong tool to use to detect disconnection.
Basically you should call read() (and process the data) until it returns -1, at which point it means the client has disconnected.
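For example, a minimal sketch of that loop (handleData is a placeholder for whatever processing you do, not a real API):

import java.io.IOException;
import java.io.InputStream;

// Read until end-of-stream; read() blocks until data arrives or the peer disconnects.
static void pump(InputStream in) throws IOException {
    byte[] buf = new byte[4096];
    int n;
    while ((n = in.read(buf)) != -1) {
        handleData(buf, n); // placeholder: process the n bytes just read
    }
    // read() returned -1: the client has disconnected
}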

If this is done using sockets, you may want to check the Socket class's various instance methods, such as isClosed() or isInputShutdown().
Of course, this assumes that the method operating on this stream has access to the Socket object and not just the InputStream.
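For completeness, a tiny sketch of those checks; note that both methods report this side's state only, so a disconnect by the peer still has to be detected by read() returning -1:

import java.net.Socket;

// isClosed()/isInputShutdown() reflect the local socket state, not the peer's.
static boolean locallyUsable(Socket socket) {
    return !socket.isClosed() && !socket.isInputShutdown();
}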

Related

Java InputStream.available() returning > 0 after reading

I'm building a chat program where hosts are connected via sockets and talk to each other using ObjectInput and ObjectOutput streams. A host builds a string from keyboard input and sends it to the other hosts along with an array of ints.
After a host has successfully read a message via readObject(), the while (true) loop continues and that host hangs on the very next call to readObject(). I can only surmise that this is because indata.available() is returning a value greater than zero even after reading whatever was in the stream, so when it tries to read again before anything else has been sent, it blocks (waits).
A snippet of the relevant code is below. I've done some research and found that I can't flush or empty an input stream. I also can't close it - because of the nature of the constantly-running chat program, it needs to stay open to continue reading.
Also, I realize that I'm checking indata.available() but then reading using inputs.readObject(). I thought this was the proper way to do it, but correct me if I'm wrong.
I'm not sure what to do about it! I need indata.available() to return 0 if I haven't written an object to the stream.
private InputStream[] indata;
private ObjectInputStream[] inputs;
private ObjectOutputStream[] outputs;
private int[] stamps;

// Establish connections via sockets between 3 hosts, serverless

while (true) {
    // Build a message

    // Send it to all hosts that aren't myself
    for (int i = 0; i < outputs.length; i++) {
        if (i != rank) {
            outputs[i].writeObject(message);
            outputs[i].writeObject(stamps);
            outputs[i].flush();
            outputs[i].reset();
        }
    }

    // Read a message in from a host that sent one
    for (int j = 0; j < inputs.length; j++) {
        if (j != rank && indata[j].available() > 0) {
            String message = (String) inputs[j].readObject();
            int[] senderStamps = (int[]) inputs[j].readObject();
        }
    }
}
Some additional information, for clarification:
I'm using available() because the instructor used it in his code and I'm not allowed to change it. Further, the call to available() worked as intended when only one object (the string) was being sent; the only code on the sending side then was writeObject and flush. It was my job to add the code to send the array as well, and when I did, I also had to add a call to reset() on the ObjectOutputStream. Without it I got other problems: when the array was sent, modified, and then sent again without calling reset() between sends, the original, unmodified version was sent instead of the newly modified one.
I can't just block on read, as a process that's blocked on read cannot write to the other hosts, and I need to be able to write even when a host has nothing to read.
Also, we aren't allowed to use multiple threads.
I figured out what was happening, but I'm not sure I understand why.
When I called reset() after sending, and then read the objects in at the receiving end, there was still 1 byte left in the stream. This is what was causing me to fall into the if-clause and then block when reading again (because there was really no object to read).
I had to call reset() because I was sending a persistent object (the array). I noticed that I didn't have to call reset() when I was only sending the string, and the difference between the string and the array was that the array was a data member of the class while the string was created anew each time the loop ran.
So, I created a non-persistent copy of the array that I wanted to send, and sent that copy. When I did this, I didn't have to call reset(). Further, after reading from the input stream, there were 0 bytes left inside, so the program never reached a state where readObject() would block.
I think I roughly understand why I don't have to reset when sending non-persistent objects: ObjectOutputStream caches every object it writes, and re-sending the same array object only emits a back-reference to the stale cached copy, while a fresh copy is serialized in full each time. As for why reset() was leaving data in the corresponding input stream, it appears reset() writes a one-byte marker into the stream; that byte counts toward available(), but it is only consumed by the next readObject() call.
Either way, it works as intended now.
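For reference, a minimal sketch of the copy-before-send fix described above (variable names follow the snippet in the question):

import java.util.Arrays;

// Send a fresh copy each time, so ObjectOutputStream serializes the current
// contents instead of emitting a back-reference to the cached original array.
int[] stampsCopy = Arrays.copyOf(stamps, stamps.length);
outputs[i].writeObject(message);
outputs[i].writeObject(stampsCopy);
outputs[i].flush(); // no reset() needed: the copy is a new object every iteration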

Java close FileInputStream before reading anything

I have a very peculiar problem. In Android, I am using a FileInputStream to read from the serial (ttySx/COM) port. I am using this to decide which of the known devices is connected (if any at all). What I basically do is:
Are you device 1? No...
Are you device 2? No...
Are you device 3? Yes...
Great lets do some stuff...
And this works great. If there is any incoming data to read (a response from the device), everything is fine. However, if there is no device connected to ttySx, there is nothing to respond to my write, which means there is nothing to read.
Now, FileInputStream.read() is a blocking call. When I call it in a thread, the thread is effectively frozen. I cannot interrupt the thread, because for that I would have to read something first. So far everything makes perfect sense.
As there is no response from the port for quite some time, I decide that there is nothing connected and want to stop reading and dispose of the thread (actually, I do not want to bother with the port anymore; with nothing connected, it is useless to me at this moment). As mentioned earlier, interrupt by itself is no good. What should work is to close() the FileInputStream: read() will throw an exception, and hooray! And close() does work, as long as read() has ever read anything (for example, when I had an answering device connected and then disconnected it: read() is stuck because there is no data to read, but close() works).
However, if there was nothing connected to the port when the read() started (meaning I have never read a single byte), the close() method does nothing. It does not close the stream. Nor does closing the FileInputStream's channel work.
I could create a workaround: store the FileInputStream somewhere, and when I want to read from the port again later, use the same instance. That would work for me. Unfortunately, I would quite unnecessarily block the port itself: no other process (for example, another application) could read from the port, because it is stuck in an "uninterruptible" read...
Any ideas why this is happening and how to make it right? Or some other way to detect if there is anything connected to the ttySx port?
Thanks.
EDIT1: The library used for communication with serial port is https://github.com/cepr/android-serialport-api
In the end we used FileInputStream::available().
The first time we tried it, it went like this:
Check if anything is available.
Read (regardless of availability).
Of course, when we checked available(), there was nothing to read yet. Then the read call blocked and waited for input. When we checked again, there was nothing available anymore, because the read had already drained the port.
Therefore the suggestion from M. Prokhorov was the correct one for my situation.
If anyone wonders about the behavior in question:
From researching it, it seems that stream reading was not designed for ports/sockets in the first place; it was designed for regular files. You read, reach the end of the file, and close the stream. The exceptions are designed for wrong sequential usage of a stream (you open it, close it, and then try to read).
If you enter blocking mode, it will block until it reads at least one byte; there is no way around it. close() only initializes a "closing state", similar to setting the interrupt state of a thread.
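A rough sketch of the polling approach we ended up with, assuming a FileInputStream already opened on the port (the timeout and sleep values are arbitrary):

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Poll available() so we never enter a blocking, uninterruptible read().
static byte[] readWithTimeout(FileInputStream in, long timeoutMs) throws IOException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    ByteArrayOutputStream response = new ByteArrayOutputStream();
    byte[] buf = new byte[256];
    while (System.currentTimeMillis() < deadline) {
        if (in.available() > 0) {
            int n = in.read(buf, 0, Math.min(buf.length, in.available()));
            if (n > 0) response.write(buf, 0, n);
        } else if (response.size() > 0) {
            break; // the device answered and has gone quiet
        } else {
            try { Thread.sleep(20); } catch (InterruptedException e) { break; }
        }
    }
    return response.toByteArray(); // empty array: nothing connected, or no reply
}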

Cannot read from TCP socket

I have a C++ client using Qt and a Java server, and I have successfully written from the client to the server, but I cannot write from the server to the client. My code:
QString
Client::readTCP()
{
    socketTCP->waitForReadyRead();
    QTextStream in(socketTCP);
    return in.readAll();
}

// Later on
qDebug() << Client::readTCP();
But no matter what method I choose I can't get a response from the server. The server code is as follows:
DataOutputStream output = new DataOutputStream(SOCKET.getOutputStream());
output.writeBytes("myString");
ANSWER:
It works now, either because I changed in.readAll() to in.readLine(), or because I waited a couple of seconds after the server started before sending a message.
The QTextStream::readAll function attempts to read the entire contents of the stream. Either this message is or isn't the entire contents of the stream.
If this message isn't the entire contents of the stream, then it should not return. It would be a serious error if readAll returned only a part of the contents of the stream despite the fact that it's specified to return all the contents.
If this is the entire contents of the stream, then the server is broken. If it doesn't close the socket, how can the client know it's received the entire contents? Unless there's some other way to indicate end of message, it has to be indicated by closing the stream, and you don't show the server closing the stream.
I'll repeat the advice I always give when I see problems like this -- do not ever implement a network protocol until you specify that protocol in a protocol specification. Otherwise, it's not possible to fix problems where the server and client disagree because there's no way to know which end is right. Here, the server and client disagree over how the end of a message is to be marked, and without a protocol specification to refer to, there's no way to know which end to fix.
If you had a protocol specification, you could just look at the section that explains how the ends of messages are marked and detected. Then you could fix whichever end doesn't follow the specification. (Or, if the specification doesn't say how, then fix the specification! Clearly, this has to happen somehow and it's the specification's job to explain how.)
In Java, after writing data to the stream, flush it. Output streams have a flush() method which forces any buffered data to be written/sent. Try that if you are using readAll() on the client side.
Also, readLine() is advised if you know how much data will be sent. You can loop through in.readLine() until it returns null. readLine() will also remove any \n or \r\n.
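For example, a minimal sketch of the server side with an explicit line terminator plus a flush (assuming the client reads line by line):

import java.io.DataOutputStream;

DataOutputStream output = new DataOutputStream(SOCKET.getOutputStream());
output.writeBytes("myString\n"); // the newline gives the client's readLine() an end-of-message marker
output.flush();                  // force any buffered bytes onto the wire now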
I have no experience with Qt, so I cannot say whether you are reading correctly from the socket, but with Java I use PrintStream's println method when sending text, and a VS C++ client receives it just fine using recv.
Also, you may want to check if the packet is actually sent over the socket using Wireshark.

Java: Handling socket disconnection

Two computers are connected by a socket connection. If the server/client closes the connection from its end (i.e. closes the InputStream, OutputStream, and Socket), how can I inform the other end about the disconnection? There is one way I know of: trying to read from the InputStream, which throws an IOException if the connection is closed. But is there any other way to detect this?
Another question: I looked the problem up on the internet and saw that inputStream.available() does not solve this problem. Why is that?
Additional information: I'm asking for another way because my project becomes hard to manage if I have to try to read from the InputStream to detect a disconnection.
trying to read from the InputStream, which throws an IOException
That is not correct. If the peer closes the socket:
read() returns -1
readLine() returns null
readXXX() throws EOFException, for any other XXX.
As InputStream only has read() methods, it only returns -1: it doesn't throw an IOException at EOS.
Contrary to other answers here, there is no TCP API or Socket method that will tell you whether the peer has closed the connection. You have to try a read or a write.
You should use a read timeout.
InputStream.available() doesn't solve the problem because it doesn't return an EOS indication of any kind. There are few correct uses of it, and this isn't one of them.
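A minimal sketch of the read-timeout approach (assuming a connected java.net.Socket named socket):

import java.net.SocketTimeoutException;

socket.setSoTimeout(5000); // a blocked read() throws SocketTimeoutException after 5s of silence
try {
    int b = socket.getInputStream().read();
    if (b == -1) {
        // EOS: the peer closed the connection cleanly
    }
} catch (SocketTimeoutException e) {
    // no data within the timeout; the connection may still be alive
}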
There is no out-of-the-box way to get a callback/exception the moment the connection is broken. You only get to know about the broken connection when you do an explicit read/write on the socket stream.
There are two ways to read from a socket: synchronously read bytes as they arrive, or wait until a desired number of bytes is available on the stream and then do a bulk read. You make that check by calling available() on the socket stream, which gives you the number of bytes currently available to read. In the second case, if the socket connection is broken for some reason, there is no way you can be notified of it; you need to employ a timeout mechanism for your wait. In the first case, where you do an explicit read/write, you get an exception.
The problem is not "if the server/client closes the connection". The problem is "what if they do not close the connection and yet the connection is broken?"
There is no way to detect that without a heartbeat protocol of your own.
Another option is to set SO_KEEPALIVE to true.
"When the keepalive option is set for a TCP socket and no data has been exchanged across the socket in either direction for 2 hours (NOTE: the actual value is implementation dependent)"
In my experience, it kicks in much sooner than every 2 hours; more like every ~5 minutes. Other than using SO_KEEPALIVE, you are royally screwed :P
In my communications protocols, I use a reserved 'heartbeat' byte that is sent every 2 seconds. My own FilterInputStream and FilterOutputStream subclasses send and digest the heartbeat byte.
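A compressed sketch of the receiving half of that idea; the reserved value 0x00 and the class name are mine, not from a real library:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Swallows the reserved heartbeat byte so callers only ever see payload bytes.
// A timer on the writing side sends the same byte every 2 seconds.
class HeartbeatInputStream extends FilterInputStream {
    static final int HEARTBEAT = 0x00; // assumed reserved value

    HeartbeatInputStream(InputStream in) { super(in); }

    @Override
    public int read() throws IOException {
        int b;
        do {
            b = in.read(); // -1 (EOS) still propagates to the caller
        } while (b == HEARTBEAT);
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (len == 0) return 0;
        int b = read(); // reuse the filtering single-byte read
        if (b == -1) return -1;
        buf[off] = (byte) b;
        return 1; // returning fewer bytes than requested is allowed by the contract
    }
}

The write side is the mirror image: a FilterOutputStream plus a timer task that writes the heartbeat byte and flushes every 2 seconds.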
Q1: If you close the socket connection on the server, the client should throw an exception, if not immediately, then certainly on the next read attempt, and vice versa.
Q2: From the Javadocs:
Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
This is not an indication of the number of bytes currently in the stream, but an estimate of the number of bytes the implementation can supply without blocking the current thread.

Java NIO: How to know when SocketChannel read() is complete with non-blocking I/O

I am currently using a non-blocking SocketChannel (Java 1.6) to act as a client to a Redis server. Redis accepts plain-text commands directly over a socket, terminated by CRLF, and responds in kind; a quick example:
SEND: 'PING\r\n'
RECV: '+PONG\r\n'
Redis can also return huge replies (depending on what you are asking for) with many sections of \r\n-terminated data all as part of a single response.
I am using a standard while (socket.read() > 0) { /* append bytes */ } loop to read bytes from the socket and re-assemble them client-side into a reply.
NOTE: I am not using a Selector, just multiple, client-side SocketChannels connected to the server, waiting to service send/receive commands.
What I'm confused about is the contract of the SocketChannel.read() method in non-blocking mode, specifically, how to know when the server is done sending and I have the entire message.
I have a few methods to protect against returning too fast and giving the server a chance to reply, but the one thing I'm stuck on is:
Is it ever possible for read() to return bytes, then on a subsequent call return no bytes, but on another subsequent call again return some bytes?
Basically, can I trust that the server is done responding to me if I have received at least 1 byte and eventually read() returns 0 then I know I'm done, or is it possible the server was just busy and might sputter back some more bytes if I wait and keep trying?
If it can keep sending bytes even after a read() has returned 0 bytes (after previous successful reads), then I have no idea how to tell when the server is done talking to me, and in fact I am confused about how java.io.*-style communications would even know when the server is "done" either.
As you guys know, read() never returns -1 unless the connection is dead, and these are standard long-lived DB connections, so I won't be closing and opening them on each request.
I know a popular response (at least for these NIO questions) has been to look at Grizzly, MINA or Netty; if possible, I'd really like to learn how this all works in its raw state before adopting some 3rd-party dependencies.
Thank you.
Bonus Question:
I originally thought a blocking SocketChannel would be the way to go with this as I don't really want a caller to do anything until I process their command and give them back a reply anyway.
If that ends up being a better way to go, I was a bit confused to see that SocketChannel.read() blocks as long as there aren't enough bytes to fill the given buffer... short of reading everything byte-by-byte, I can't figure out how this default behavior is actually meant to be used... I never know the exact size of the reply coming back from the server, so my calls to SocketChannel.read() always block until a timeout (at which point I finally see that the content was sitting in the buffer).
I'm not really clear on the right way to use the blocking method, since it always hangs up on a read.
Look to your Redis specifications for this answer.
It's not against the rules for a call to .read() to return 0 bytes on one call and 1 or more bytes on a subsequent call. This is perfectly legal. If anything were to cause a delay in delivery, either because of network lag or slowness in the Redis server, this could happen.
The answer you seek is the same answer to the question: "If I connected manually to the Redis server and sent a command, how could I know when it was done sending the response to me so that I can send another command?"
The answer must be found in the Redis specification. If there's not a global token that the server sends when it is done executing your command, then this may be implemented on a command-by-command basis. If the Redis specifications do not allow for this, then this is a fault in the Redis specifications. They should tell you how to tell when they have sent all their data. This is why shells have command prompts. Redis should have an equivalent.
In the case that Redis does not have this in their specifications, then I would suggest putting in some sort of timer functionality. Code your thread handling the socket to signal that a command is completed after no data has been received for a designated period of time, like five seconds. Choose a period of time that is significantly longer than the longest command takes to execute on the server.
If it can keep sending bytes even after a read() has returned 0 bytes (after previous successful reads), then I have no idea how to tell when the server is done talking to me, and in fact I am confused about how java.io.*-style communications would even know when the server is "done" either.
Read and follow the protocol:
http://redis.io/topics/protocol
The spec describes the possible types of replies and how to recognize them. Some are line terminated, while multi-line responses include a prefix count.
Replies
Redis will reply to commands with different kinds of replies. It is possible to check the kind of reply from the first byte sent by the server:
With a single line reply the first byte of the reply will be "+"
With an error message the first byte of the reply will be "-"
With an integer number the first byte of the reply will be ":"
With bulk reply the first byte of the reply will be "$"
With multi-bulk reply the first byte of the reply will be "*"
Single line reply
A single line reply is in the form of a single line string starting with "+" terminated by "\r\n". ...
...
Multi-bulk replies
Commands like LRANGE need to return multiple values (every element of the list is a value, and LRANGE needs to return more than a single element). This is accomplished using multiple bulk writes, prefixed by an initial line indicating how many bulk writes will follow.
Is it ever possible for read() to return bytes, then on a subsequent call return no bytes, but on another subsequent call again return some bytes? Basically, can I trust that the server is done responding to me if I have received at least 1 byte and eventually read() returns 0 then I know I'm done, or is it possible the server was just busy and might sputter back some more bytes if I wait and keep trying?
Yes, that's possible. It's not just due to the server being busy: network congestion and downed routes can also cause data to "pause". The data is a stream that can pause anywhere in the stream, without relation to the application protocol.
Keep reading the stream into a buffer. Peek at the first character to determine what type of response to expect. Examine the buffer after each successful read until the buffer contains the full message according to the specification.
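A rough sketch of that completeness check, assuming the bytes read so far are accumulated into a String, and with bulkReplyComplete as a hypothetical helper that parses the $/* length prefixes:

// Decide whether the accumulated reply is a complete message per the protocol.
static boolean isComplete(String reply) {
    if (reply.isEmpty()) return false;
    switch (reply.charAt(0)) {
        case '+': // single line reply
        case '-': // error message
        case ':': // integer number
            return reply.endsWith("\r\n");
        case '$': // bulk reply: $<length>\r\n<data>\r\n
        case '*': // multi-bulk reply: *<count>\r\n followed by <count> bulk replies
            return bulkReplyComplete(reply); // hypothetical: parse the prefix to find the full extent
        default:
            return false; // not a valid reply start
    }
}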
I originally thought a blocking SocketChannel would be the way to go with this as I don't really want a caller to do anything until I process their command and give them back a reply anyway.
I think you're right. Based on my quick look at the spec, blocking reads wouldn't work well for this protocol. Since it looks line-based, BufferedReader may help, but you still need to know how to recognize when the response is complete.
I am using a standard while (socket.read() > 0) { /* append bytes */ } loop
That is not a standard technique in NIO. You must store the result of the read in a variable, and test it for:
-1, indicating EOS, meaning you should close the channel
zero, meaning there was no data to read, meaning you should return to the select() loop, and
a positive value, meaning you have read that many bytes, which you should then extract and remove from the ByteBuffer (get()/compact()) before continuing.
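Putting that into code, a minimal sketch (buf is a ByteBuffer dedicated to this channel):

int n = channel.read(buf);
if (n == -1) {
    channel.close(); // EOS: the peer has closed the connection
} else if (n > 0) {
    buf.flip();
    // get() the received bytes and append them to the partial reply here,
    // then compact() so the buffer is ready for the next read
    buf.compact();
}
// n == 0: nothing to read right now; go back to the select() loop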
It's been a long time, but . . .
I am currently using a non-blocking SocketChannel
Just to be clear, SocketChannels are blocking by default; to make them non-blocking, one must explicitly invoke SocketChannel#configureBlocking(false).
I'll assume you did that.
I am not using a Selector
Whoa; that's the problem. If you are going to use non-blocking Channels, then you should always use a Selector (at least for reads); otherwise you run into the confusion you described, viz. read(ByteBuffer) == 0 doesn't mean anything (well, it means that there are no bytes in the TCP receive buffer at this moment).
It's analogous to checking your mailbox and finding it empty; does that mean the letter will never arrive? Was never sent?
What I'm confused about is the contract of the SocketChannel.read() method in non-blocking mode, specifically, how to know when the server is done sending and I have the entire message.
There is a contract: if a Selector has selected a Channel for a read operation, then the next invocation of SocketChannel#read(ByteBuffer) is guaranteed to return > 0 (assuming there's room in the ByteBuffer argument).
Which is why you use a Selector, and because one select() call can "select" thousands of SocketChannels that have bytes ready to read.
Now, there's nothing wrong with using SocketChannels in their default blocking mode; and given your description (a client or two), there's probably no reason not to, as it's simpler. But if you want to use non-blocking Channels, use a Selector.
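For reference, the skeleton of such a select() loop (a sketch, not a complete client):

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

Selector selector = Selector.open();
channel.configureBlocking(false);                 // register() requires non-blocking mode
channel.register(selector, SelectionKey.OP_READ);
ByteBuffer buf = ByteBuffer.allocate(8192);

while (true) {
    selector.select();                            // blocks until at least one channel is ready
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();                              // selected keys must be removed by hand
        if (key.isReadable()) {
            int n = ((SocketChannel) key.channel()).read(buf);
            // n > 0: append the bytes and check whether the reply is complete
            // n == -1: the peer closed; cancel the key and close the channel
        }
    }
}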
