Is there a way from a Java servlet to check if the HTTP response is still "alive"? For instance, in my situation I send an AJAX request from the browser over to a servlet. In this case it's a polling request, so it may poll for up to 5 minutes. When the servlet is ready to respond with data, I'd like to check whether the user has closed the browser window, moved to another page, etc. In other words, check whether sending the data to the response will actually do anything.
Generally, this problem can be solved by sending a dummy payload before the actual message.
If the socket was severed, an IOException or a SocketException or something similar is thrown (depending on the library). Technically, browsers are supposed to sever a connection whenever you navigate away from a page or close the browser (or anything similar), but I've found out that the implementation details can vary. Older versions of FF, for example, appropriately close a connection when navigating away from a page, but newer versions (especially when using AJAX) tend to leave connections open.
That's the main reason to send a dummy packet before the actual message. Another important consideration is the timeout. I've done polling before, and you either need to implement some sort of heartbeat to keep a connection alive or increase the server timeout (keep in mind that some browsers may have timeouts as well, timeouts that you have no control over).
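As a minimal sketch of that probe, assuming a long-polling servlet (sendWhenReady is a made-up name, and note that TCP buffering means the failure can sometimes only surface on a later write or flush):

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;

void sendWhenReady(HttpServletResponse response, String data) {
    try {
        ServletOutputStream out = response.getOutputStream();
        out.print(" ");   // dummy payload: forces a write on the underlying socket
        out.flush();      // a broken connection typically throws IOException here
        out.print(data);  // the real message
        out.flush();
    } catch (IOException e) {
        // Client closed the window or navigated away; nothing useful to send.
    }
}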
Instead of polling or pushing over AJAX, I strongly suggest trying to support (at least in part) a WebSocket solution.
The Java Servlet response doesn't have any such method, since the Servlet API is built around a strict request/response model. If you wish to check the connection status, you would probably need to work at a lower level, e.g. with TCP/IP sockets; java.net.Socket has several status-check methods:
boolean isBound() - returns the binding state of the socket.
boolean isClosed() - returns the closed state of the socket.
boolean isConnected() - returns the connection state of the socket.
boolean isInputShutdown() - returns whether the read-half of the socket connection is closed.
boolean isOutputShutdown() - returns whether the write-half of the socket connection is closed.
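For what it's worth, a small sketch of what these report; note that they describe only the local endpoint's state:

import java.net.Socket;

void logSocketState(Socket socket) {
    System.out.println("bound=" + socket.isBound()
            + " closed=" + socket.isClosed()
            + " connected=" + socket.isConnected()
            + " inputShutdown=" + socket.isInputShutdown()
            + " outputShutdown=" + socket.isOutputShutdown());
    // Caveat: these reflect what this side has done with the socket.
    // A peer that silently vanished is normally detected only when a
    // read returns -1 or a write throws an IOException.
}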
I have a gRPC client that uses two bidi streams. For reasons unknown at present, when we send a keepAlive ping every hour, onError is called with a StatusRuntimeException on both streams.
To handle the reconnection, I've implemented the following retry mechanism, in Java pseudocode. I will clarify anything as necessary in the comments.
The mechanism looks like so:
@Override
public void onError(Throwable t) {
    retrySyncStream();
}

void retrySyncStream() {
    // Capture the current (now dead) request observer; SyncRequest is a placeholder type.
    StreamObserver<SyncRequest> previousStream = this.requestObserver;

    // Open a new stream on the existing stub.
    this.requestObserver = bidiStub.startStream(responseObserver);

    // Simplified version; we actually use the gRPC channel-state notification listener.
    waitForChannelReady();

    // Called on notification that the channel is READY.
    previousStream.onCompleted();
}
Although we attempt to close the old stream, on the server side we see two connections open on two HA nodes. I don't have control over anything server side; I just need to handle reconnection logic on the client.
First question: is it common practice to ditch the old StreamObserver after getting a StatusRuntimeException? The reason I am doing this is that we have a mock-server Spring Boot application that we use to test our client against. When I force-shutdown (Ctrl-C) the Spring Boot server app and start it back up again, the client can't use the original StreamObserver; it has to create a new one by calling the gRPC bidi stream API call.
From what I've read online, people say not to ditch the managed channel, but what about stream observers, and making sure that multiple streams aren't being opened by mistake?
Thanks.
When the StreamObserver gets an error, the RPC is dead. It is appropriate to ditch it.
When you re-create the stream, consider what would happen if the server is having trouble. Generally you'd have exponential backoff in place somewhere. For bidi streaming cases, in several cases gRPC tends to reset the backoff once the client receives a response from the server.
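To illustrate, a minimal sketch of such a backoff around the retrySyncStream() from the question; the scheduler and everything else here are illustrative assumptions, not gRPC API:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
private long backoffMillis = 1_000;                  // initial retry delay
private static final long MAX_BACKOFF_MILLIS = 60_000;

void scheduleRetry() {
    scheduler.schedule(this::retrySyncStream, backoffMillis, TimeUnit.MILLISECONDS);
    backoffMillis = Math.min(backoffMillis * 2, MAX_BACKOFF_MILLIS);
}

void onResponseFromServer() {
    backoffMillis = 1_000;  // reset the backoff once the server answers again
}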
Since both streams die together, it sounds like the TCP connection was dead. Unfortunately, with TCP you have to send on a connection to learn that it is dead. The client discovers this when it can no longer send to the HA proxy over that connection; the HA proxy, in turn, has to discover the dead connection separately. Server-side keepalive could help with this, although TCP keepalive at the HA proxy is probably also warranted.
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Both methods don't detect a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
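A minimal sketch of that approach, assuming the sockets arrive on a BlockingQueue and a hypothetical handle() method processes each request:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.concurrent.BlockingQueue;

void serveOneClient(BlockingQueue<Socket> queue) throws InterruptedException {
    try (Socket socket = queue.take()) {
        socket.setSoTimeout(30_000);  // any single read blocks at most 30 s
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        String request;
        while ((request = in.readLine()) != null) {
            handle(request, socket);  // hypothetical per-request handler
        }
    } catch (SocketTimeoutException e) {
        // Client stayed silent past the timeout; drop it (socket auto-closed).
    } catch (IOException e) {
        // Connection broken; nothing more to do.
    }
}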
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
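A minimal sketch of the selector idea (single-threaded, with the port and buffer size as placeholders):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(8080));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

while (true) {
    selector.select();  // blocks until at least one channel is ready
    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
    while (keys.hasNext()) {
        SelectionKey key = keys.next();
        keys.remove();
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            SocketChannel client = (SocketChannel) key.channel();
            ByteBuffer buf = ByteBuffer.allocate(1024);
            if (client.read(buf) == -1) {  // remote end closed the connection
                key.cancel();
                client.close();
            }
            // else: buf.flip() and hand the bytes to the request handler
        }
    }
}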
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If read() returns -1, throw the connection away, since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old. In this case, we perform an actual read that returns -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException.
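Put together as a sketch (ConnectionState, handleRequest() and the queue are hypothetical names for illustration):

import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.util.concurrent.BlockingQueue;

void pollConnection(ConnectionState conn, BlockingQueue<ConnectionState> queue)
        throws IOException, InterruptedException {
    InputStream in = conn.socket.getInputStream();
    long idle = System.currentTimeMillis() - conn.lastBlockingRead;

    if (in.available() > 0) {
        handleRequest(in, conn.socket);        // data waiting: serve it
        queue.put(conn);
    } else if (idle > 1_000) {                 // timestamp too old: actually block
        try {
            int b = in.read();                 // blocks at most 100 ms (SoTimeout)
            if (b == -1) {                     // remote end closed: discard
                conn.socket.close();
                return;
            }
            handleRequest(b, in, conn.socket); // pass the already-read byte along
            conn.lastBlockingRead = System.currentTimeMillis();
            queue.put(conn);
        } catch (SocketTimeoutException e) {
            conn.lastBlockingRead = System.currentTimeMillis(); // per the EDIT above
            queue.put(conn);
        }
    } else {
        queue.put(conn);                       // nothing to read yet: requeue
    }
}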
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; actually, this is probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive.
I'm not sure if this is more a Java question or TCP question or both:
I have some server code using Apache Mina that runs a server listening for TCP on a given socket. The handler I have extends IoHandlerAdapter. I'm using this to connect a camera to the Java server over HTTP/1.1
The problem is: if I make the connection and then completely disconnect the camera (either pull power or pull network), I have to wait until the sessionIdle method is called to detect that the session is now dead (and then explicitly close the session from the server side).
My question is: Isn't there any way to know that the TCP session was instantly broken by the client? Or is this just the way that TCP sessions / Mina works?
Ideally, sessionIdle would only be for cases where the TCP socket hasn't died but the client (camera) has stopped talking over the socket... and some other mechanism would catch when the socket is actually killed (by the client/network).
NOTE: I am overriding exceptionCaught(), but I don't see it being called when I unplug either the power or the network. The session just sits there until the idle timeout, then sessionIdle() is called.
Thanks!
Unfortunately, the abrupt removal of one end of a connection is not immediately detectable since the end being removed does not have a chance to notify the other end of the connection.
There is no string (so to speak) connecting two ends of a TCP connection, instead there is just an agreement on connection state. The result being that until a timeout is hit, there is no way to know that the other end of the connection has disappeared unless that end of the connection sends a notification of its intent to disconnect.
As an analogy, imagine you are carrying on a conversation with someone in another room, whom you cannot see. When do you consider them to be no longer there? If they tell you they are leaving, and then you hear the door slam shut, it would be reasonable to assume that they have left. However, if they just fail to answer a question you would probably repeat the question, maybe a little louder this time. Then you might wait a little bit, call their name, maybe wait a little bit more for a response, and then perhaps finally walk over to the other room to see if they are still there or not.
This is what TCP basically does (especially if you are sending keep-alives), except it has no way to walk over to the other room; it can only rely on timeout thresholds being hit to indicate that the other party has left. So when a timeout is hit with no response from the other side, the assumption is made that they have left without telling you.
This is why when the other side of your connection suddenly disappears you have to rely on the timeout to tell you so, since there is simply no other way to know.
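For example, a minimal Mina 2.x sketch that both shortens the detection window and enables TCP keep-alive (the port and idle interval are placeholders):

import java.net.InetSocketAddress;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

NioSocketAcceptor acceptor = new NioSocketAcceptor();
acceptor.getSessionConfig().setKeepAlive(true);                      // SO_KEEPALIVE
acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 10); // seconds
acceptor.setHandler(new IoHandlerAdapter() {
    @Override
    public void sessionIdle(IoSession session, IdleStatus status) {
        session.closeNow();  // treat a silent camera as gone (close(true) on older Mina)
    }
});
acceptor.bind(new InetSocketAddress(8080));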
I've been trying to figure out how to go about calling a method when the connection has been forcefully terminated by the client, or if the client just loses connection in general. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documents, and searching google wording my question in dozens of different ways, but I can't find the answer that I'm looking for.
Basically, I need a way to figure out which channel was disconnected, and pass it as a parameter to a method, is this possible? It almost has to be.
I guess this can be done using a thread on both the client and server side.
Add a Date field lastActive to the client class, which the client updates every 5 minutes (let's say). Another thread runs on the server side every 10 minutes to check this flag; if lastActive is more than 10 minutes old, remove the player from the list. You can change these intervals according to your needs.
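A minimal sketch of that server-side check (Player and onlinePlayers are hypothetical names; use a concurrent collection so the sweep is thread-safe):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();
reaper.scheduleAtFixedRate(() -> {
    long cutoff = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
    onlinePlayers.removeIf(p -> p.getLastActive() < cutoff);  // drop stale players
}, 10, 10, TimeUnit.MINUTES);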
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty specific point of view you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you setup the ReadTimeoutHandler as previously described then you will be notified of close events after your timeout interval passes and no activity is detected or the channel is closed normally.
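A minimal Netty 4.x sketch combining both ideas (GameHandler, onlineAccounts, and the event-loop groups are hypothetical):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)          // assumed NioEventLoopGroups
 .channel(NioServerSocketChannel.class)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new ReadTimeoutHandler(60));  // close after 60 s of silence
         ch.pipeline().addLast(new GameHandler());           // hypothetical business handler
         // Fires exactly once, whatever the reason the channel closed.
         ch.closeFuture().addListener((ChannelFutureListener) f ->
                 onlineAccounts.removeIf(a -> a.channel() == f.channel()));
     }
 });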
We are getting some unexpected results, randomly, from some servers when trying to open an InputStream from an HttpURLConnection. It seems those servers accept the connection and reply with a "keep-alive" header that keeps the Socket open, but never send any data back on the stream.
That scenario makes an attempt at a multi-threaded crawler a little "complicated", because if some connection gets stuck, the thread running it never returns, denying the completion of its pool, which leaves the controller thinking that some threads are still working.
Is there some way to read the connection response headers to identify that "keep-alive" answer and avoid trying to open the stream?
I'm not sure what I'm missing here but it seems to me you simply need getHeaderField()?
Did you try setting "read time out", in addition to "connect time out"?
See http://java.sun.com/j2se/1.5.0/docs/api/java/net/URLConnection.html#setReadTimeout%28int%29
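A minimal sketch of both timeouts plus the header check suggested above (url and the timeout values are placeholders):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
conn.setConnectTimeout(5_000);   // fail fast if the server won't accept
conn.setReadTimeout(10_000);     // fail if the server accepts but never sends data

// getHeaderField() forces the request; with the read timeout set, a stalled
// server throws SocketTimeoutException here instead of hanging the thread.
String connectionHeader = conn.getHeaderField("Connection");

try (InputStream in = conn.getInputStream()) {
    // ... read the body, also bounded by the read timeout ...
}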