I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client simply loses its connection. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documentation and searching Google, wording my question in dozens of different ways, but I can't find the answer I'm looking for.
Basically, I need a way to figure out which channel was disconnected and pass it as a parameter to a method. Is this possible? It almost has to be.
I guess this can be done using a thread on both the client and server side.
Make a Date variable lastActive in the client class, which the client sets every 5 minutes (let's say). Another thread runs on the server side every 10 minutes to check this flag; if lastActive is more than 10 minutes old, remove the player from the list. You can adjust these intervals to your needs.
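A minimal sketch of that idea (Player, getLastActive() and the interval constant are illustrative names, not from any particular library; use a thread-safe list such as CopyOnWriteArrayList for the shared collection):

    import java.util.List;

    public class ReaperThread extends Thread {
        private static final long TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes
        private final List<Player> onlinePlayers; // each Player updates lastActive on every heartbeat

        ReaperThread(List<Player> onlinePlayers) {
            this.onlinePlayers = onlinePlayers;
        }

        @Override
        public void run() {
            while (!isInterrupted()) {
                long now = System.currentTimeMillis();
                // Drop players whose last heartbeat is older than the timeout
                onlinePlayers.removeIf(p -> now - p.getLastActive() > TIMEOUT_MS);
                try {
                    Thread.sleep(TIMEOUT_MS); // re-check every 10 minutes
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }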
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue, as you described, is that your peer may not reliably terminate their end of the connection. For example: the peer loses power, the peer application crashes, the peer machine crashes, etc. One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process, such as the ReadTimeoutHandler. Setting the time interval is application-specific and will depend on your protocol. If your desired interval is sufficiently small, you may have to add additional messages to your protocol to serve as a heartbeat (a simple request/response to indicate each side is still talking to the other).
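Installing it in the pipeline might look like this (a sketch assuming Netty 4; the 60-second interval and MyServerHandler are placeholders):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.timeout.ReadTimeoutHandler;

    ChannelInitializer<SocketChannel> initializer = new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Fires a ReadTimeoutException and closes the channel if no data
            // is read for 60 seconds; pick an interval suited to your protocol.
            ch.pipeline().addLast(new ReadTimeoutHandler(60));
            ch.pipeline().addLast(new MyServerHandler()); // your own handler
        }
    };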
From a Netty-specific point of view, you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you set up the ReadTimeoutHandler as described above, then you will be notified of close events either after your timeout interval passes with no activity, or when the channel is closed normally.
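The registration itself is short (again Netty 4; onlineAccounts and accountFor are placeholders for your own bookkeeping):

    import io.netty.channel.ChannelFutureListener;

    // Runs exactly once, whenever the channel closes for any reason
    // (read timeout, remote hangup, or a normal local close).
    channel.closeFuture().addListener((ChannelFutureListener) future ->
            onlineAccounts.remove(accountFor(channel)));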
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if that client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Both methods don't detect a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
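For example (the 5-second value is arbitrary, and in is assumed to be the socket's InputStream):

    socket.setSoTimeout(5000); // reads now block for at most 5 seconds
    try {
        int b = in.read();
        // ... handle the request ...
    } catch (SocketTimeoutException e) {
        socket.close(); // client took too long: drop it
    }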
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
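The skeleton of such a server looks roughly like this (port 8080 is arbitrary):

    import java.net.InetSocketAddress;
    import java.nio.channels.*;
    import java.util.Iterator;

    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(8080));
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);

    while (true) {
        selector.select(); // blocks until at least one channel is ready
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                // read() here never blocks; it returns -1 if the
                // client closed the connection cleanly
            }
        }
    }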
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for the exception to this principle), call available() on the connection before reading. If it returns 0, put the connection back into the queue, since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If read() returns -1, throw the connection away, since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or doing nothing because the read was skipped when available() returned 0. If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old. In this case, we perform an actual read that returns -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and to also update the timestamp when getting a SocketTimeoutException.
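Putting the pieces together, a worker's per-connection step might look like this (a sketch only; Connection, its accessors, pushBack() and queue are hypothetical names, and the actual request handling is elided):

    void serviceOnce(Connection c) throws IOException, InterruptedException {
        InputStream in = c.socket().getInputStream(); // SO_TIMEOUT already set to 100 ms
        long now = System.currentTimeMillis();
        if (now - c.lastBlockingRead() < 1000) {
            if (in.available() == 0) {
                queue.put(c); // nothing to read yet: requeue and move on
                return;
            }
        } else {
            try {
                int b = in.read(); // blocks for at most 100 ms
                c.setLastBlockingRead(now);
                if (b == -1) {      // remote end closed the connection
                    c.socket().close();
                    return;
                }
                c.pushBack(b);      // hypothetical: keep the byte for the request parser
            } catch (SocketTimeoutException e) {
                c.setLastBlockingRead(now); // per the EDIT above: reset on timeout too
                queue.put(c);
                return;
            }
        }
        // ... read and answer exactly one request here ...
        queue.put(c); // still open: back into the queue for the next request
    }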
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if that's really not an option, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; this is probably better, actually. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive (see the sketch below).
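A few illustrative lines for the combined approach (client, the constants and the methods are all hypothetical):

    long quietFor = System.currentTimeMillis() - client.lastHeardFrom();
    if (quietFor > KICK_LIMIT_MS) {
        client.disconnect();   // quiet too long, even after a ping: assume gone
    } else if (quietFor > PING_AFTER_MS && !client.pingPending()) {
        client.sendPing();     // any reply refreshes lastHeardFrom()
    }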
I'm not sure if this is more a Java question or TCP question or both:
I have some server code using Apache Mina that runs a server listening for TCP on a given socket. The handler I have extends IoHandlerAdapter. I'm using this to connect a camera to the Java server over HTTP/1.1
The problem is: if I make the connection and then completely disconnect the camera (either pull power or pull network), I have to wait until the sessionIdle method is called to detect that the session is now dead (and then explicitly close the session from the server side).
My question is: Isn't there any way to know that the TCP session was instantly broken by the client? Or is this just the way that TCP sessions / Mina works?
Ideally, sessionIdle would only be for cases where the TCP socket hasn't died but the client (camera) has stopped talking over the socket... and some other mechanism would catch when the socket is actually killed (by the client/network).
NOTE: I am overriding exceptionCaught() but I don't see that being called in the case where I either unplug power or network. It just sits until the idle time then calls sessionIdle().
Thanks!
Unfortunately, the abrupt removal of one end of a connection is not immediately detectable since the end being removed does not have a chance to notify the other end of the connection.
There is no string (so to speak) connecting two ends of a TCP connection, instead there is just an agreement on connection state. The result being that until a timeout is hit, there is no way to know that the other end of the connection has disappeared unless that end of the connection sends a notification of its intent to disconnect.
As an analogy, imagine you are carrying on a conversation with someone in another room, whom you cannot see. When do you consider them to be no longer there? If they tell you they are leaving, and then you hear the door slam shut, it would be reasonable to assume that they have left. However, if they just fail to answer a question you would probably repeat the question, maybe a little louder this time. Then you might wait a little bit, call their name, maybe wait a little bit more for a response, and then perhaps finally walk over to the other room to see if they are still there or not.
This is basically what TCP does (especially if you are sending keep-alives), except it has no way to walk over to the other room; it can only rely on timeout thresholds being hit to indicate that the other party has left. So when a timeout is hit with no response from the other side, the assumption is made that they have left without telling you.
This is why when the other side of your connection suddenly disappears you have to rely on the timeout to tell you so, since there is simply no other way to know.
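In practice, all you can control is how long that timeout is. With Mina 2, for example, you can make sessionIdle() fire much sooner, and ask the OS for TCP keep-alive probes as well (a sketch; the 10-second value is arbitrary):

    import org.apache.mina.core.session.IdleStatus;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    NioSocketAcceptor acceptor = new NioSocketAcceptor();
    // Fire sessionIdle() after 10 seconds without reads
    acceptor.getSessionConfig().setIdleTime(IdleStatus.READER_IDLE, 10);
    // Let the OS probe for dead peers too (probe intervals are OS-controlled
    // and typically very long by default)
    acceptor.getSessionConfig().setKeepAlive(true);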
I'm developing a websocket application by using Netty. I'd like to know if a message is really delivered from a source to a destination. In particular, let's assume that a client and a server have an open channel and exchange some messages for a while. At a certain point, the client goes down, but the channel is still active in Netty. I tried to use isReachable() before sending the message, but this method seems to be buggy in some scenarios (e.g. a machine with Win7 is up, but isReachable() returns false). Now, my idea is to implement a mechanism using ACKs, namely the server sends the message and the client sends back an ack. To do that, I need a timeout to see if, after a certain interval, the corresponding ack does not arrive. Is there something similar in Netty?
Regarding isReachable() - it's only a best effort API. The documentation points out that it tries to send an ICMP echo request or create a TCP connection to port 7 on the destination host, both of which are highly likely to be blocked by a firewall. Is this happening in your case?
As for the acknowledgement, there's nothing in Netty that provides this as standard, but it shouldn't be too difficult to implement. Firstly, each message needs to be uniquely identifiable by some sort of identifier, possibly a sequence number, though a globally unique identifier means you can potentially recover across disconnections. Then you want to create a combined handler that implements both ChannelInboundHandler and ChannelOutboundHandler (assuming Netty 4). When a message is sent:
add the message to a map indexed by its id
create a timer associated with the message id. Add it to another map indexed by message id
forward the message
When the ACK is received, cancel the timer and remove both the timer and the message from their respective maps. If the timer fires, use the associated id to decide what to do with the timer and message (possibly retransmit and reset the timer).
Netty provides a HashedWheelTimer for efficiently managing lots of timers with a resolution suitable for this kind of activity.
You may also want to consider putting a limit on the number of retries, so you can stop and raise an error rather than retrying indefinitely.
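A sketch of such a combined handler (Netty 4; Msg, Ack, their id() accessors and the 5-second delay are placeholders for your own protocol, not real Netty types):

    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelPromise;
    import io.netty.util.HashedWheelTimer;
    import io.netty.util.Timeout;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    public class AckHandler extends ChannelDuplexHandler {
        private final HashedWheelTimer timer = new HashedWheelTimer();
        private final Map<Long, Object>  pending  = new ConcurrentHashMap<>();
        private final Map<Long, Timeout> timeouts = new ConcurrentHashMap<>();

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            long id = ((Msg) msg).id();
            pending.put(id, msg);
            timeouts.put(id, timer.newTimeout(t -> {
                Object unacked = pending.get(id);
                // No ACK in time: retransmit, reset the timer, or give up
            }, 5, TimeUnit.SECONDS));
            ctx.write(msg, promise); // forward the message down the pipeline
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            if (msg instanceof Ack) {
                long id = ((Ack) msg).id();
                Timeout t = timeouts.remove(id);
                if (t != null) t.cancel();   // ACK arrived in time
                pending.remove(id);
            } else {
                ctx.fireChannelRead(msg);    // not an ACK: pass it along
            }
        }
    }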
I am trying to use a Java library to communicate with a car via the serial port using the OBD2 protocol. The protocol is simple: you send an ASCII string (e.g. "01 0d"), and the car answers with an ASCII value. I've found many libraries on the web, but there is one concept I don't understand in the examples. After every send command, the programmer puts a call to sleep. Why is that? For example:
send(pid)
sleep(200)
receive(response)
I don't understand this, because read is a blocking call, so I should be able to just wait on read. Why the additional call to sleep?
I did a bunch of work with the (Mitsubishi/Subaru) MUT-II protocol a few years ago, which uses ISO9141, and it was the same way: a 200ms pause after every single request. It was later confirmed by the community/forums that the only pause that was actually necessary was the one after the initial 5-baud init; once the speed changed to 10400 baud, no more pauses were necessary.
If you are going via a hardware interface (like OBDKey or a similar ELM327 based device) then the protocol timings are taken care of for you, so that is unlikely to be the cause of the sleep delay.
You are right, read does block. But note that there can be a timeout set up in the read mechanism when establishing the COM / serial port parameters. In this case a call to read returns with some or no data when the timeout expires.
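With the RXTX library, for instance, that timeout is set roughly like this (a sketch; the 500 ms value is arbitrary):

    import gnu.io.SerialPort;

    SerialPort port = /* ... open and configure the port ... */;
    // Reads on the port's InputStream now return after at most 500 ms even if
    // no data has arrived (throws UnsupportedCommOperationException if the
    // driver cannot support it)
    port.enableReceiveTimeout(500);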
I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (a maximum of 4 clients). The server handles each client on a separate thread, data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms on every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering if UDP could be suitable for my program, as I am more interested in sending updates fast than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
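Sending a position update over UDP is only a few lines (a sketch; the host name, port and encodeSpritePosition() are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    DatagramSocket socket = new DatagramSocket();
    byte[] update = encodeSpritePosition(sprite); // your own 1-10 byte encoding
    DatagramPacket packet = new DatagramPacket(
            update, update.length, InetAddress.getByName("server.example"), 5000);
    socket.send(packet); // fire and forget: no retransmission, no ordering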
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to make calls to libraries that you don't have control over and that can block (such as because they touch a file or try to perform additional network communication). But they are expensive. They consume a lot of resources and as such they actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands, each of which has a state, and then make sure that you never perform an operation that blocks. Each command is processed in order, and its state is evaluated and updated (like a laser blast intersecting with another object). If a command requires blocking (like reading from a file), then you need to perform a non-blocking read and set the state of that command accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you would have to call Thread.sleep(x) in an appropriate way so as not to waste CPU.
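The core loop might look like this (a sketch; Command, evaluate() and the running flag are placeholder names):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    BlockingQueue<Command> commands = new LinkedBlockingQueue<>();

    void runLoop() throws InterruptedException {
        while (running) {
            Command cmd = commands.poll(); // returns null immediately if empty
            if (cmd == null) {
                Thread.sleep(1); // nothing to do: yield briefly instead of spinning
                continue;
            }
            cmd.evaluate(); // update game state; must never block
        }
    }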
As for the client side, when a client submits a command (like firing a laser), the client generates a response object and inserts it into a Map with a sequence id as the key. Then it sends the request with the sequence id, and when the server responds with that id, you just look up the response object in the Map and decode the response into that object. This allows you to perform concurrent operations.
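A sketch of that client-side bookkeeping (Request, Response and send() are placeholders):

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, CompletableFuture<Response>> pending = new ConcurrentHashMap<>();

    CompletableFuture<Response> submit(Request req) {
        long id = nextId.incrementAndGet();
        CompletableFuture<Response> future = new CompletableFuture<>();
        pending.put(id, future);
        send(id, req);   // fire the request; don't wait here
        return future;   // several requests can be in flight at once
    }

    void onServerResponse(long id, Response resp) {
        CompletableFuture<Response> f = pending.remove(id);
        if (f != null) f.complete(resp); // decode into the waiting object
    }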