HttpURLConnections ignore timeouts and never return - Java

We are getting unexpected results, at random, from some servers when trying to open an InputStream from an HttpURLConnection. It seems those servers accept the connection and reply with a "keep-alive" header, which keeps the Socket open but never sends any data back on the stream.
That scenario makes an attempt at a multi-threaded crawler a little "complicated": if a connection gets stuck, the thread running it never returns, preventing the completion of its pool, which leaves the controller thinking that some threads are still working.
Is there some way to read the connection's response headers to identify that "keep-alive" answer and avoid trying to open the stream?

I'm not sure what I'm missing here, but it seems to me you simply need getHeaderField()?
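For example, a minimal sketch (which header the problematic servers actually send is not known from the question; the standard Connection header is an assumption here):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeaderCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        // Inspect the response headers before committing to the body.
        // "Connection" is a standard HTTP header; whether these servers
        // announce themselves through it is an assumption.
        String connectionHeader = conn.getHeaderField("Connection");
        if ("keep-alive".equalsIgnoreCase(connectionHeader)) {
            System.out.println("Server wants to keep the connection alive");
        }
        conn.disconnect();
    }
}
```

Note that getHeaderField() itself triggers the request, so without a read timeout it can still block if the server never sends its headers (see the next answer).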

Did you try setting a read timeout, in addition to a connect timeout?
See http://java.sun.com/j2se/1.5.0/docs/api/java/net/URLConnection.html#setReadTimeout%28int%29
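A minimal sketch (the URL and the timeout values are placeholders):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimeoutExample {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        conn.setConnectTimeout(5_000);  // give up if the TCP connect takes > 5 s
        conn.setReadTimeout(10_000);    // give up if no data arrives for 10 s
        try (InputStream in = conn.getInputStream()) {
            // ...read the response...
        } catch (SocketTimeoutException e) {
            // The crawler thread is released instead of hanging forever.
            System.err.println("Timed out: " + e.getMessage());
        } finally {
            conn.disconnect();
        }
    }
}
```

With a read timeout set, a server that accepts the connection but never sends data causes a SocketTimeoutException rather than a permanently blocked thread.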

Related

How to handle socket exceptions to make a continuous flow of program in Java?

I am creating a skribbl clone in Java using JavaFX and socket programming.
I am sending data over a TCP connection using an ObjectInputStream/ObjectOutputStream.
About two out of five times the application runs without any exception, but the other times it shows numerous socket-related exceptions, and I am not able to figure out why.
java.net.SocketException: Connection reset
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2854)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3181)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3191)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1621)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:488)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:446)
at com.nttd.wtdoodle.Client.Game.Server.PlayerHandler$1.run(PlayerHandler.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Error sending Message to client
java.io.StreamCorruptedException: invalid type code: 00
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1701)
at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2479)
at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2373)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2211)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1670)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:488)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:446)
at com.nttd.wtdoodle.Client.Game.Player.PtoSBridge$1.run(PtoSBridge.java:55)
at java.base/java.lang.Thread.run(Thread.java:829)
Error sending Message to client
There is one more exception, a ClassCastException. I don't know why these exceptions occur sometimes but not all the time.
I want to know whether ObjectOutputStream/ObjectInputStream are suitable for this work.
Can you suggest any other stream that would be useful for this purpose?
Thank you for all the replies and answers. I have changed the object streams to a BufferedReader and BufferedWriter and it solved my problem.
One thing we must always keep in mind when programming network code: sooner or later something will go wrong: partial data delivery, disconnections, duplicates, etc. Even with TCP (a transport-layer protocol with some important guarantees), your application layer must assume that a very wide range of network problems will eventually happen.
Although you didn't show your code, we can deduce from the stack trace that you're exchanging structured data, probably large structured data in the form of serialized Java objects.
The problems with this strategy are:
If the objects always have the same shape, there is considerable overhead in sending and reading the same metadata over and over, which can be avoided. Also, the bigger the payload, the more prone it is to network errors. Consider encoding only the required data, to avoid continuously sending the same metadata.
If even a small part of the serialized form of your object is lost or corrupted, the receiver cannot restore the object. That is better than the alternative, in which the receiver tries to restore the object from incomplete or inconsistent data; Java throws exceptions to signal the failure to you. Once you get an exception, you must decide what to do: ask the remote peer to resend the data, reconnect the socket, etc.
We cannot say exactly what is happening in your code without seeing it. But from what I can see, the solution is to properly implement the "unhappy paths" inside the catch blocks.
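As a sketch of both points (all names here are illustrative, not from the question): explicit, length-prefixed messages over DataInputStream/DataOutputStream avoid the serialization metadata, and the catch block is where the "unhappy path" gets decided:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: send only the data you need, explicitly framed,
// instead of whole serialized objects.
class MessageIO {
    static void send(DataOutputStream out, String text) throws IOException {
        byte[] payload = text.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length); // length prefix frames the message
        out.write(payload);
        out.flush();
    }

    static String receive(DataInputStream in) throws IOException {
        int length = in.readInt();    // throws EOFException on disconnect
        byte[] payload = new byte[length];
        in.readFully(payload);        // throws if the stream ends mid-message
        return new String(payload, StandardCharsets.UTF_8);
    }

    static String receiveOrHandle(DataInputStream in) {
        try {
            return receive(in);
        } catch (IOException e) {
            // The "unhappy path": decide here whether to reconnect,
            // ask the peer to resend, or drop the session.
            return null;
        }
    }
}
```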

Java: Managing more connections than there are threads, using a queue

For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the InputStream and put the connection back into the queue if it returns 0. The problem: it's impossible to detect whether the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out whether the client is still connected. The problem: neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe whether the client is still there by reading from or writing to it. The problem: reading blocks (i.e., back to the original situation where an inactive client blocks a thread), and writing actually sends something to the client, so it is not a usable test.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
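A minimal sketch of the selector idea (the port number is a placeholder; this is not a complete server):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf); // never blocks here
                    if (n == -1) {
                        client.close();       // peer closed the connection
                    } else {
                        buf.flip();
                        // ...hand the request bytes to a worker...
                    }
                }
            }
        }
    }
}
```

Passive clients simply never show up as readable, so no thread is ever parked on them, while a disconnected client shows up as readable with read() returning -1.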
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for the exception to this principle), call available() on the connection before reading. If it returns 0, put the connection back into the queue, since there is nothing to read.
Exception to the above principle, in which available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If read() returns -1, throw the connection away, since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end has closed its connection, we will detect this within one second, because the timestamp of the last successful read will be too old. In that case we perform an actual read that returns -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100 ms to find out that the connection is still there but not ready. (See the sketch after this list.)
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and to also update the timestamp when getting a SocketTimeoutException.
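Sketched out, a worker's handling of one queued socket might look roughly like this (the class and field names are illustrative; the thresholds are the ones from the list above, with the EDIT's enhancement included):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical holder for a queued connection and its read timestamp.
class QueuedConnection {
    final Socket socket;
    long lastBlockingRead = System.currentTimeMillis();

    QueuedConnection(Socket socket) throws IOException {
        this.socket = socket;
        socket.setSoTimeout(100); // blocking reads give up after 100 ms
    }

    /** @return true if the connection should go back into the queue. */
    boolean tryHandle() throws IOException {
        InputStream in = socket.getInputStream();
        long quiet = System.currentTimeMillis() - lastBlockingRead;
        if (quiet < 1_000 && in.available() == 0) {
            return true; // nothing to read; requeue without blocking
        }
        try {
            int first = in.read(); // blocks for at most 100 ms
            lastBlockingRead = System.currentTimeMillis();
            if (first == -1) {
                socket.close();    // closed by the remote end; discard
                return false;
            }
            // ...read the rest of the request and write the response...
            return true;
        } catch (SocketTimeoutException e) {
            lastBlockingRead = System.currentTimeMillis(); // the EDIT above
            return true; // still open, just quiet; requeue
        }
    }
}
```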
No, the only way to discern an inactive client from a client that didn't shut down its socket properly is to send a ping or something to check whether it's still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if it's still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both, which is actually probably better: keep track of how long they've been quiet, and after a while send a ping to see if they are still alive (a rough sketch follows).
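A rough sketch of the combined approach (the thresholds and the ping byte are assumptions; the protocol would have to reserve a value that clients answer):

```java
import java.io.IOException;
import java.net.Socket;

// Hypothetical liveness tracking: stay quiet, get pinged; stay quieter, get kicked.
class ClientLiveness {
    static final long PING_AFTER_MS = 30_000; // ping after 30 s of silence
    static final long KICK_AFTER_MS = 60_000; // kick after 60 s of silence

    final Socket socket;
    long lastHeardFrom = System.currentTimeMillis();
    boolean pinged = false;

    ClientLiveness(Socket socket) { this.socket = socket; }

    void onDataReceived() {           // call whenever the client sends anything
        lastHeardFrom = System.currentTimeMillis();
        pinged = false;
    }

    void check() throws IOException { // call periodically from a housekeeping thread
        long quiet = System.currentTimeMillis() - lastHeardFrom;
        if (quiet > KICK_AFTER_MS) {
            socket.close();                    // assume the client is gone
        } else if (quiet > PING_AFTER_MS && !pinged) {
            socket.getOutputStream().write(0); // hypothetical ping byte
            socket.getOutputStream().flush();
            pinged = true;                     // a reply will reset the clock
        }
    }
}
```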

Java Check if HttpResponse is Still Alive

Is there a way, from a Java servlet, to check whether the HttpResponse is still "alive"? For instance, in my situation I send an AJAX request from the browser over to a servlet. In this case it's a polling request, so it may poll for up to 5 minutes; when the servlet is ready to respond with data, I'd like to check whether the user has closed the browser window, moved to another page, etc. In other words, check whether sending the data to the response will actually do anything.
Generally, this problem can be solved by sending a dummy payload before the actual message.
If the socket was severed, an IOException or a SocketException or something similar is thrown (depending on the library). Technically, browsers are supposed to sever a connection whenever you navigate away from a page or close the browser (or anything similar), but I've found that the implementation details vary. Older versions of Firefox, for example, appropriately close a connection when navigating away from a page, but newer versions (especially when using AJAX) tend to leave connections open.
That's the main reason you may want to use a dummy packet before the actual message. Another important consideration is the timeout. I've done polling before, and you either need to implement some sort of heartbeat to keep a connection alive or increase the server timeout (keeping in mind that some browsers may have timeouts as well, timeouts that you have no control over).
Instead of polling or pushing over AJAX, I strongly suggest trying to support (at least in part) a WebSocket solution.
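Roughly, the dummy-payload probe inside the servlet could look like this (a sketch; whether the IOException surfaces on the first flush depends on the container and on how the connection died):

```java
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

class DisconnectProbe {
    /**
     * Tries to detect a gone client by flushing a dummy payload first.
     * Returns false if the connection appears to be severed.
     */
    static boolean clientStillThere(HttpServletResponse response) {
        try {
            response.getWriter().write(" "); // harmless dummy payload
            response.flushBuffer();          // force it onto the wire
            return true;
        } catch (IOException e) {
            // The browser window was closed / the page was left, and the
            // container noticed the broken connection.
            return false;
        }
    }
}
```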
The Java Servlet Response doesn't have any such method, as it is based on request/response behavior. If you wish to check the status, you probably need to work at a lower level, e.g. TCP/IP sockets, which have several status-check methods, listed below (a small usage example follows the list):
boolean isBound()
Returns the binding state of the socket.
boolean isClosed()
Returns the closed state of the socket.
boolean isConnected()
Returns the connection state of the socket.
boolean isInputShutdown()
Returns whether the read-half of the socket connection is closed.
boolean isOutputShutdown()
Returns whether the write-half of the socket connection is closed.
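For example (keeping in mind, as noted elsewhere on this page, that these report only local state; isConnected() stays true even after the peer hangs up):

```java
import java.net.Socket;

class SocketStatus {
    static void report(Socket socket) {
        // All of these reflect what this side has done; none of them can
        // tell you that the remote side has silently disappeared.
        System.out.println("bound:     " + socket.isBound());
        System.out.println("connected: " + socket.isConnected());
        System.out.println("closed:    " + socket.isClosed());
        System.out.println("inShut:    " + socket.isInputShutdown());
        System.out.println("outShut:   " + socket.isOutputShutdown());
    }
}
```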

Is it possible to close Java sockets on both client and server sides?

I have a TCP socket connection between two Java applications. When one side closes the socket, the other side remains open, but I want it to be closed too. Also, I can't wait on it to see whether it is available or not and close it after that. I want some way to close it completely from one side.
What can I do?
TCP doesn't work like this. The OS won't release the resources, namely the file descriptor and thus the port, until the application explicitly closes the socket or dies, even if the TCP stack knows that the other side closed it. There is no callback from the kernel to the user application on receipt of the FIN from the peer. The OS acknowledges the FIN to the other side but waits for the application to call close() before sending its own FIN packet. Take a look at the TCP state transition diagram: you are in the passive close box.
One way to detect a situation like this without dedicating a thread to each socket is to use the select/poll/epoll/kqueue family of functions. The socket being passively closed will be signaled as readable, and a read attempt will return EOF.
Hope this helps.
Both sides have to read from the connection so they can detect when the peer has closed it. When read() returns -1, the other end has closed the connection, and that's your cue to close your end.
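In code, the pattern is simply (a minimal sketch):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class ReadUntilClosed {
    static void drain(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            // ...process n bytes...
        }
        // read() returned -1: the peer sent its FIN, so close our end
        // too, releasing the file descriptor and sending our own FIN.
        socket.close();
    }
}
```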
If you are still reading from your socket, then you will detect the -1 when it closes.
If you are no longer reading from your socket, go ahead and close it.
If it's neither of these, you are probably having a thread wait on an event. This is NOT the way you want to handle thousands of ports! Java starts to choke at around 3,000 threads on Windows, and at far fewer on Linux (I don't know why).
Make sure you are using NIO. Use a single thread to manage all your ports (a connection pool). It should just grab the data from each socket and forward it to a queue. From there, I'd have a thread pool take the data out of the queues and process it, because actually processing the data from a port will take some time.
Attaching a thread to each port will NOT work, and is the biggest reason NIO was needed.
Also, having some kind of "close" message as part of your stream to trigger closing the port may make things work faster, but you'll still need to handle the -1 to cover the case of broken streams.
The usual solution is to let the other side know you are going to close the connection, before actually closing it. For instance, in the case of the SMTP protocol, the server will send '221 Bye' before it closes the connection.
You probably want to have a connection pool.

NIO: Send message and then disconnect immediately

In some circumstances I wish to send an error message from a server to a client using non-blocking I/O (SocketChannel.write(ByteBuffer)) and then disconnect the client. Assuming I write the full contents of the message and then immediately disconnect, I presume the client may not receive this message, as I'm guessing the OS hasn't actually sent the data at that point.
Is this correct, and if so is there a recommended approach to dealing with this situation?
I was thinking of using a timer whereby if I wish to disconnect a client I send a message and then close their connection after 1-2 seconds.
SocketChannel.write in non-blocking mode returns the number of bytes that could be sent to the network immediately without blocking. Your question makes me think you expect the write method to consume the entire buffer and asynchronously push the remaining data to the network, but that is not how it works.
If you really need to make sure that the error message is sent to the client before disconnecting the socket, I would simply enable blocking mode before calling write. In non-blocking mode you would have to call write in a loop, counting the bytes sent by each invocation, and exit the loop only once the entire message has been passed to the socket (a bad solution, I know: unnecessary code, busy-waiting, and so on).
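A sketch of the blocking variant (one wrinkle worth noting: a channel still registered with a selector cannot be switched back to blocking mode, so its key has to be cancelled and flushed out first):

```java
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class SendAndClose {
    // Sketch: flush an error message, then close. 'key' is assumed to be
    // this channel's registration with the selector.
    static void sendErrorAndClose(SocketChannel channel, SelectionKey key,
                                  ByteBuffer message) throws Exception {
        key.cancel();                    // deregister from the selector...
        key.selector().selectNow();      // ...and flush the cancelled-key set
        channel.configureBlocking(true); // writes now block until accepted
        while (message.hasRemaining()) {
            channel.write(message);      // defensive loop against short writes
        }
        channel.close();                 // queued data is still delivered by TCP
    }
}
```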
You may be better off launching a thread and writing the data to the channel synchronously. The async API is geared more toward "one thread dispatching multiple channels" and not really intended for fire-and-forget communications.
The close() method of sockets makes sure that everything written beforehand is actually sent before the socket is really closed. However, this assumes that your write() was able to copy all the data into the TCP stack's output window, which will not always be the case. For solutions to this, see the other answers.
