Java TCP Socket wait for idle?

I'm not sure if this is more a Java question or a TCP question, or both:
I have some server code using Apache Mina that listens for TCP connections on a given port. My handler extends IoHandlerAdapter. I'm using this to connect a camera to the Java server over HTTP/1.1.
The problem is: if I make the connection and then completely disconnect the camera (by pulling either power or network), I have to wait until the sessionIdle method is called to detect that the session is now dead (and then explicitly close the session from the server side).
My question is: Isn't there any way to know that the TCP session was instantly broken by the client? Or is this just the way that TCP sessions / Mina works?
Ideally, sessionIdle would only be for cases where the TCP socket hasn't died but the client (camera) has stopped talking over the socket... and some other mechanism would catch when the socket is actually killed (by the client/network).
NOTE: I am overriding exceptionCaught() but I don't see that being called in the case where I either unplug power or network. It just sits until the idle time then calls sessionIdle().
Thanks!

Unfortunately, the abrupt removal of one end of a connection is not immediately detectable since the end being removed does not have a chance to notify the other end of the connection.
There is no string (so to speak) connecting two ends of a TCP connection, instead there is just an agreement on connection state. The result being that until a timeout is hit, there is no way to know that the other end of the connection has disappeared unless that end of the connection sends a notification of its intent to disconnect.
As an analogy, imagine you are carrying on a conversation with someone in another room, whom you cannot see. When do you consider them to be no longer there? If they tell you they are leaving, and then you hear the door slam shut, it would be reasonable to assume that they have left. However, if they just fail to answer a question you would probably repeat the question, maybe a little louder this time. Then you might wait a little bit, call their name, maybe wait a little bit more for a response, and then perhaps finally walk over to the other room to see if they are still there or not.
This is essentially what TCP does (especially if you are sending keep-alives), except it has no way to walk over to the other room and can only rely on timeout thresholds being hit to indicate that the other party has left. So when a timeout is hit with no response from the other side, the assumption is made that they have left without telling you.
This is why when the other side of your connection suddenly disappears you have to rely on the timeout to tell you so, since there is simply no other way to know.
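As a practical mitigation you can shorten the detection window by tightening Mina's idle time and closing the session yourself in sessionIdle. A minimal sketch, assuming MINA 2.x (closeNow() is from 2.0.9+, older versions use close(true); the 30-second interval and port are arbitrary):

import java.net.InetSocketAddress;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class CameraServer {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        // Treat 30 seconds of silence in either direction as "idle".
        acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 30);
        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void sessionIdle(IoSession session, IdleStatus status) {
                // No traffic for the whole interval: assume the camera is gone.
                session.closeNow();
            }
        });
        acceptor.bind(new InetSocketAddress(8080));
    }
}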

Related

Java: Managing more connections than there are threads, using a queue

For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the InputStream and put the connection back into the queue if it returns 0. The problem: it's impossible to detect whether the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out whether the client is still connected. The problem: neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
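To make the pattern concrete, here is a minimal sketch of a selector-based server in java.nio (the port is arbitrary, and real code would hand completed reads to your worker pool instead of discarding them):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) { // -1 means the peer closed
                        key.cancel();
                        client.close();
                    }
                    // otherwise: hand buf to a worker thread for processing
                }
            }
            selector.selectedKeys().clear();
        }
    }
}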
It turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for the exception to this principle), run available() on the connection before reading. If it returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or returning nothing because available() reported nothing to read. If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read will be too old. In that case we perform an actual read that returns -1, so we know the remote end has closed and can discard the connection. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it costs us an additional 100ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException.
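A sketch of the whole strategy in Java (the Conn and Worker names and the two thresholds are my own, for illustration; the key detail is that a blocking read with SO_TIMEOUT set gives up with a SocketTimeoutException):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Wrapper pairing a socket with the time of its last blocking read.
class Conn {
    final Socket socket;
    volatile long lastBlockingRead = System.currentTimeMillis();
    Conn(Socket socket) throws IOException {
        this.socket = socket;
        socket.setSoTimeout(100); // a blocking read gives up after 100 ms
    }
}

class Worker {
    // Services one connection; returns true if it belongs back in the queue.
    boolean serviceOnce(Conn conn) throws IOException {
        InputStream in = conn.socket.getInputStream();
        if (System.currentTimeMillis() - conn.lastBlockingRead < 1000) {
            if (in.available() == 0) return true; // nothing to read: requeue
            // data is waiting: read and process it here, then requeue
            return true;
        }
        try {
            int b = in.read(); // blocks for at most 100 ms
            conn.lastBlockingRead = System.currentTimeMillis();
            if (b == -1) { conn.socket.close(); return false; } // peer closed
            // process b and whatever follows, then requeue
            return true;
        } catch (SocketTimeoutException e) {
            conn.lastBlockingRead = System.currentTimeMillis(); // per the EDIT
            return true; // still open, just quiet
        }
    }
}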
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
The possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; actually this is probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive.
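A minimal sketch of the tracking half (IdleReaper, touch, and the 30-second limit are my own naming and numbers; call touch on every successful read and run reap periodically from a housekeeping thread):

import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class IdleReaper {
    private static final long LIMIT_MS = 30_000; // assumed idle limit
    private final Map<Socket, Long> lastSeen = new ConcurrentHashMap<>();

    // Call this on every successful read from a client.
    void touch(Socket s) { lastSeen.put(s, System.currentTimeMillis()); }

    // Run periodically: close and forget clients quiet past the limit.
    void reap() throws IOException {
        long now = System.currentTimeMillis();
        for (Map.Entry<Socket, Long> e : lastSeen.entrySet()) {
            if (now - e.getValue() > LIMIT_MS) {
                e.getKey().close();
                lastSeen.remove(e.getKey());
            }
        }
    }
}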

Netty - Call a method on connection termination

I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client just loses connection in general. Currently I have a List<> of all of my online accounts, but if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documentation, and searching Google, wording my question in dozens of different ways, but I can't find the answer I'm looking for.
Basically, I need a way to figure out which channel was disconnected, and pass it as a parameter to a method, is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Make a Date variable, lastActive, in the client class, which the client sets every 5 minutes (let's say). Another thread runs on the server side every 10 minutes to check this flag; if lastActive is more than 10 minutes old, remove the player from the list. You can change these intervals according to your needs.
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty specific point of view you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you setup the ReadTimeoutHandler as previously described then you will be notified of close events after your timeout interval passes and no activity is detected or the channel is closed normally.
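A minimal sketch of both pieces together, assuming Netty 4.x (the AccountTracker class and its online set are hypothetical stand-ins for your account list, and the 60-second timeout is arbitrary):

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class AccountTracker extends ChannelInitializer<SocketChannel> {
    // Hypothetical stand-in for the asker's online-account list.
    private final Set<SocketChannel> online = ConcurrentHashMap.newKeySet();

    @Override
    protected void initChannel(SocketChannel ch) {
        online.add(ch);
        // Fires a ReadTimeoutException and closes the channel after
        // 60 seconds with no inbound data (dead or silent peer).
        ch.pipeline().addLast(new ReadTimeoutHandler(60));
        // ... your protocol handlers go here ...
        // The close future fires for graceful closes, resets, and timeouts
        // alike, so it is the one place to do per-channel cleanup.
        ch.closeFuture().addListener((ChannelFutureListener) f -> online.remove(ch));
    }
}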

"java.net.BindException: Address already in use" when trying to do rapid Socket creation and destruction for load testing

I'm trying to load test a Java server by opening a large number of socket connections to the server, authenticating, closing the connection, then repeating. My app runs great for a while but eventually I get:
java.net.BindException: Address already in use: connect
According to documentation I read, the reason for this is that closed sockets still occupy the local address assigned to them for a period of time after close() has been called. This is OS dependent but can be on the order of minutes. I tried calling setReuseAddress(true) on the socket in the hope that its address would be reusable immediately after close() was called. Unfortunately this doesn't seem to be the case.
My code for socket creation is:
Socket socket = new Socket();
socket.setReuseAddress(true);
socket.connect(new InetSocketAddress(m_host, m_port));
But after a while I still get this error:
java.net.BindException: Address already in use: connect
Is there any other way to accomplish what I'm trying to do? I would like to for instance: open 100 sockets, close them all, open 200 sockets, close them all, open 300, etc. up to a max of 2000 or so sockets.
Any help would be greatly appreciated!
You are exhausting the space of outbound ports by opening that many outbound sockets within the TIME_WAIT period of two minutes. The first question you should ask yourself is: does this represent a realistic load test at all? Is a real client really going to do that? If not, you just need to revise your testing methodology.
BTW, SO_LINGER is the number of seconds the application will wait during close() for data to be flushed. It is normally zero. The port will hang around for the TIME_WAIT interval anyway if this is the end that issued the close, so it is not the same thing. It is possible to abuse the SO_LINGER option to patch the problem, but that will also cause exceptional behaviour at the peer, and again this is not the purpose of a test.
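For completeness, the abuse in question looks like this; a linger time of zero makes close() send an RST instead of a FIN, so no TIME_WAIT state is created (and the peer sees a connection reset -- again, not recommended for a realistic test):

Socket socket = new Socket(m_host, m_port);
socket.setSoLinger(true, 0); // hard close: RST on close(), no TIME_WAIT
// ... authenticate ...
socket.close();              // the server sees "connection reset by peer"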
Not using bind() but calling setReuseAddress(true) is just weird; I hope you understand the implications of setReuseAddress (and the point of it). 100-2000 is not a great number of sockets to open; however, the server you are attempting to connect to (since it looks like the same addr/port pair) may just drop them, with a normal backlog of 50.
Edit:
If you need to open multiple sockets quickly (ermm, port scan?), I'd very strongly recommend using NIO and connect()/finishConnect() + Selector. Opening 1000 sockets in the same thread is just plain slow.
I forgot: you may need finishConnect() either way in your code.
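A sketch of that approach (the batch size of 100 and the localhost:9000 target are placeholders; error handling is elided):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class BurstConnect {
    public static void main(String[] args) throws Exception {
        InetSocketAddress addr = new InetSocketAddress("localhost", 9000);
        Selector sel = Selector.open();
        int pending = 0;
        for (int i = 0; i < 100; i++) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            if (ch.connect(addr)) { // rare: connected immediately
                ch.close();
            } else {
                ch.register(sel, SelectionKey.OP_CONNECT);
                pending++;
            }
        }
        while (pending > 0) {
            sel.select();
            for (SelectionKey key : sel.selectedKeys()) {
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    ch.close(); // handshake done; a load test closes right away
                    pending--;
                }
            }
            sel.selectedKeys().clear();
        }
    }
}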
I think you should plan on the port you want to use being in use already. By that I mean: try to connect using the given port. If the connect fails (or, in your case, throws an exception), try to open the connection using the next port number.
Try wrapping the connect statement in a try/catch.
Here's a sketch of what I think will work, binding the local port explicitly since that is the address that is "already in use" (the starting port of 50000 is arbitrary):
int portNumber = 50000;   // the first local port you will try
int numConnections = 200; // or however many connections you want to open
while (numConnections > 0) {
    try (Socket socket = new Socket()) {
        socket.setReuseAddress(true);
        socket.bind(new InetSocketAddress(portNumber)); // claim the local port
        socket.connect(new InetSocketAddress(m_host, m_port));
        numConnections--;
    } catch (IOException e) {
        // port still in TIME_WAIT (or otherwise taken): try the next one
    }
    portNumber++;
}
This code doesn't cover corner cases such as "what happens when all ports are in use?"

Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server, and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing what is essentially a more financial version of Monopoly, and thus it is required that packets are sent and confirmed as sent/received. Do I need to implement my own packet-loss watcher, or is that already taken care of with (Server)Sockets?
I'm primarily asking about losing packets during network blips or whatnot, not a full connection error. E.g., siblings moving a lead plate between my router and my computer's wi-fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically: yes. There is no way to give a 100% theoretical guarantee that what is sent on the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol) this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
Just looking briefly at your code, it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well be the case that it looks like a packet has been dropped although it simply wasn't handled, due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to solve the synchronization, I think, is to put the network loop in a very small read/put-on-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene.
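A minimal sketch of that shape (the class and method names are mine; the blocking queue is the only thing the network thread and the game thread share):

import java.io.ObjectInputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NetworkReader {
    // Thread-safe handoff point between the network thread and the game thread.
    private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();

    public void start(Socket socket) {
        new Thread(() -> {
            try (ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
                while (true) {
                    inbox.put(in.readObject()); // read, enqueue, repeat; nothing else
                }
            } catch (Exception e) {
                // socket closed or stream broken: the loop simply ends
            }
        }).start();
    }

    // Called from the game/GUI thread whenever it is ready for the next message.
    public Object nextMessage() throws InterruptedException {
        return inbox.take();
    }
}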

Is it possible to close Java sockets on both client and server sides?

I have a TCP socket connection between two Java applications. When one side closes the socket, the other side remains open, but I want it to be closed too. Also, I can't wait on it to see whether it is available or not and then close it. I want some way to close it completely from one side.
What can I do?
TCP doesn't work like this. The OS won't release the resources, namely the file descriptor and thus the port, until the application explicitly closes the socket or dies, even if the TCP stack knows that the other side closed it. There's no callback from kernel to user application on receipt of the FIN from the peer. The OS acknowledges it to the other side but waits for the application to call close() before sending its FIN packet. Take a look at the TCP state transition diagram - you are in the passive close box.
One way to detect a situation like this without dedicating a thread to each socket is to use the select/poll/epoll/kqueue family of functions. The socket being passively closed will be signaled as readable, and a read attempt will return EOF.
Hope this helps.
Both sides have to read from the connection, so they can detect when the peer has closed. When read returns -1 it will mean the other end closed the connection and that's your clue to close your end.
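In plain blocking IO that looks something like this (a sketch; the buffer size is arbitrary):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class Pump {
    // Reads until the peer closes, then reciprocates the close.
    static void pump(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            // process n bytes from buf here
        }
        socket.close(); // read returned -1: the peer sent its FIN; close our end
    }
}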
If you are still reading from your socket, then you will detect the -1 when it closes.
If you are no longer reading from your socket, go ahead and close it.
If it's neither of these, you are probably having a thread wait on an event. This is NOT the way you want to handle thousands of ports! Java will start to get pukey at around 3000 threads on Windows--much less on Linux (I don't know why).
Make sure you are using NIO. Use a single thread to manage all your ports (connection pool). It should just grab the data from a port and forward it to a queue. At that point I think I'd have a thread pool take the data off the queues and process it, because actually processing the data from a port will take some time.
Attaching a thread to each port will NOT work, and is the biggest reason NIO was needed.
Also, having some kind of a "Close" message as part of your stream to trigger closing the port may make things work faster--but you'll still need to handle the -1 to cover the case of broken streams.
The usual solution is to let the other side know you are going to close the connection, before actually closing it. For instance, in the case of the SMTP protocol, the server will send '221 Bye' before it closes the connection.
You probably want to have a connection pool.
