PDA loses TCP connection to ServerSocket in Suspend Mode - java

I'm implementing a Java TCP/IP server using ServerSocket to accept messages from clients via network sockets.
It works fine, except for clients on PDAs (WiFi barcode scanners).
If I have a connection between the server and the PDA, and the PDA goes into suspend (standby) after some idle time, then there are problems with the connection.
When the PDA wakes up again, I can observe in a TCP monitor that a second connection with a different port is established, but the old one remains established too:
localhost:2000 remotehost:4899 ESTABLISHED (first connection)
localhost:2000 remotehost:4890 ESTABLISHED (connection after wakeup)
And now communication doesn't work: the client uses the new connection, but the server still listens on the old one, so the server doesn't receive the messages. But when the server sends a message to the client, it notices the problem (it receives a SocketException: Connection reset). The server then switches to the new connection, and all the messages the client sent in the meantime arrive in one go!
So I only notice the network problem when the server tries to send a message; until then there are no exceptions or anything. How can I properly react to this, so that the new connection is used (and the old one closed) as soon as it is established?

From your description I guess that the server is structured like this:
server_loop
{
    client_socket = server_socket.accept()
    TalkToClientUntilConnectionCloses(client_socket)
}
I'd change it to process incoming connections and established connections in parallel. The simplest approach (from an implementation point of view) is to start a new thread for each client. It is not a good approach in general (it scales poorly), but if you don't expect a lot of clients and can afford a thread per client, just change the server like this:
server_loop
{
    client_socket = server_socket.accept()
    StartClientThread(client_socket)
}
As a bonus, you get the ability to handle multiple clients simultaneously (with all the troubles that brings, too).
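A minimal Java sketch of that thread-per-client loop, where the port number and the handler body are placeholders rather than your actual protocol:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(2000)) {
            while (true) {
                // accept() hands back each new connection as soon as it arrives,
                // so a reconnecting PDA gets a fresh handler immediately
                Socket clientSocket = serverSocket.accept();
                new Thread(() -> handleClient(clientSocket)).start();
            }
        }
    }

    private static void handleClient(Socket socket) {
        try (Socket s = socket) {
            // talk to this client until the connection closes
            // (protocol-specific reading and writing goes here)
        } catch (IOException e) {
            // stale or reset connection: log it and let this thread end
        }
    }
}
For more than a handful of clients, a bounded ExecutorService or NIO is usually a better fit than one raw thread per connection.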

It sounds like the major issue is that you want the server to notice and drop the old connections as they become stale.
Have you considered setting a timeout on the connection on the server-side socket (the connection Socket, not the ServerSocket) so you can close/drop it after a certain period? Perhaps after the SO_TIMEOUT expires on the Socket, you could test it with an echo/keepalive command to verify that the connection is still good.
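A rough sketch of that idea; the 60-second timeout and the method name are arbitrary choices, and the protocol is assumed to be plain byte messages:
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class StaleConnectionGuard {
    // Call this with the Socket returned by serverSocket.accept().
    static void serveClient(Socket clientSocket) throws IOException {
        clientSocket.setSoTimeout(60_000);   // blocking reads give up after 60 s of silence
        try (Socket s = clientSocket; InputStream in = s.getInputStream()) {
            byte[] buffer = new byte[1024];
            int n;
            while ((n = in.read(buffer)) != -1) {
                // handle n bytes of client data
            }
        } catch (SocketTimeoutException idle) {
            // nothing arrived within the timeout: treat the connection as stale and drop it
            // (optionally try an application-level echo/keepalive here before giving up)
        }
        // try-with-resources closes the socket either way, so the suspended PDA's old
        // connection is released and its post-wakeup connection takes over
    }
}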

Related

How to find out if a client connection is terminated within a TCP socket connection in a java server?

I'm implementing a multithreaded server using blocking read/write on TCP sockets, with the InputStream and OutputStream primitives wrapped by the appropriate Reader/Writer.
The InputStreamReader's read() method returns -1 if the client is disconnected but it keeps waiting indefinitely if the client connection is intact. How do I overcome this?
The most important question here is what "is lost" really means for the application. There are two roles in socket communication, a writer and a reader, and our code may play both roles at the same time. So, for a socket communication:
From the writer's point of view, the connection "is lost" means the writer gets an error while writing/sending to the socket (OutputStream.write(...)). For TCP this happens only due to receiving a FIN or RST, due to a retransmission timeout (which cannot be managed from Java), or because the OS has detected the lost connection via TCP keepalive (https://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html).
From the reader's point of view, the connection "is lost" means the reader gets an error while reading from the socket (InputStream.read(...)). For TCP this happens only due to receiving a FIN or RST, because the OS has detected the lost connection via TCP keepalive, or due to a blocking read timeout (SO_TIMEOUT).
Neither the TCP retransmission timeout (RTO) nor TCP keepalive works particularly well:
RTO: https://pracucci.com/linux-tcp-rto-min-max-and-tcp-retries2.html
Keepalive is even worse: https://learn.microsoft.com/en-us/windows/win32/winsock/so-keepalive. By default Windows sends the first TCP keepalive packet after 2 hours(!). The same issue exists on Linux: https://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
So, if you are a server, I'd strongly recommend either forcibly setting SO_TIMEOUT (or your own equivalent for unblocking a read), or introducing health/availability checking at the application level. There are two typical patterns: ping-pong (after the timeout is reached, the server sends a ping packet to the client and waits for a pong in return) and heartbeat (the client must send a heartbeat packet within a specified period of time), as sketched below. Note that the client often experiences the same problem, and you may decide to support ping-pong/heartbeat on both sides.
Just an interesting post on the topic: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
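As an illustration only, a server-side ping-pong on top of SO_TIMEOUT might look roughly like the sketch below; the PING/PONG strings, the 30-second timeout and the method name are invented for the example, and the client is assumed to reply PONG to every PING:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

class PingPong {
    // in/out are created once per connection by the caller from the socket's streams
    // (out is assumed to be an auto-flushing PrintWriter).
    // Returns the next application line, or null once the peer is considered gone.
    static String readLineOrPing(Socket socket, BufferedReader in, PrintWriter out) throws IOException {
        socket.setSoTimeout(30_000);           // a blocked readLine() gives up after 30 s of silence
        boolean pingSent = false;
        while (true) {
            try {
                String line = in.readLine();   // null here means the peer closed the connection
                if ("PONG".equals(line)) {     // keepalive reply, not application data
                    pingSent = false;
                    continue;
                }
                return line;
            } catch (SocketTimeoutException idle) {
                if (pingSent) {
                    return null;               // no reply to our ping: treat the peer as dead
                }
                out.println("PING");           // probe the peer
                pingSent = true;
                if (out.checkError()) {
                    return null;               // the write itself already failed
                }
            }
        }
    }
}
A heartbeat works the same way with the roles reversed: the client writes a small packet every N seconds, and the server drops the connection when the read timeout fires without one.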
Are you creating the input stream over a Socket object?
ServerSocket server = new ServerSocket(port);
Socket socket = server.accept();
DataInputStream in = new DataInputStream(new BufferedInputStream(socket.getInputStream()));
Then you can check whether the socket has been closed, using
socket.isClosed()
between your loop iterations.
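For reference, such a loop might look like the sketch below; note that isClosed() only reports whether this side has already closed the socket, so a vanished peer still has to be detected by the read call itself (end of stream or an exception):
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class ReadLoop {
    static void run(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket socket = server.accept();
             DataInputStream in = new DataInputStream(new BufferedInputStream(socket.getInputStream()))) {
            while (!socket.isClosed()) {       // true only once *this* side has closed the socket
                try {
                    int value = in.readInt();  // blocks until data arrives or the stream ends
                    // handle value
                } catch (EOFException eof) {
                    break;                     // the client closed the connection cleanly
                }
            }
        }
    }
}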

FIN & RST set in socket communication

There is an existing socket communication with TLS 1.2 enabled, for which I have added one-way/two-way support; after doing so I have observed frequent resets on the socket.
While analyzing the packets with Wireshark, I observed FIN, ACK & RST flags being sent, which I believe is the reason the connection is being reset or aborted.
My queries:
During the socket conversation, on many occasions I observe an EOFException while attempting to readObject(). Can this lead to a socket reset or disconnect?
If I want the socket connection to stay permanently connected, how can I ignore the FIN & RST flags and keep the connection up permanently?
Is it efficient to disconnect whenever a socket is found to be idle? Is that when the RST or FIN flag is sent?
...how can I ignore the FIN & RST flags...
The simple answer is that you cannot.
The protocol specifies that once you receive FIN the connection is in the process of being dismantled. You can attempt to do whatever you want, but the sender of the FIN packet is going away regardless of what you do.
The RST flag is sent back to you when you send data to an endpoint that was not expecting a packet from you, i.e. when you tried to ignore FIN.
Keeping a connection open "permanently" requires cooperation from both sides of the connection, and the connection may still fail due to timeouts if the network goes down.
Is it efficient to disconnect whenever a socket is found to be idle?
Yes, it is efficient to disconnect and dismantle idle socket connections. If idle connections were not dismantled, then as new connections are initiated, the old ones would continue to linger and consume system resources/memory. Additionally, each connection ties up an ephemeral port on the client side and a socket/file descriptor on the server, so if your server is busy (like a web server, for example), you continue to use up resources as new connections keep coming in!
There are also two further states involved in connection teardown, FIN_WAIT_1 and FIN_WAIT_2 (refer to RFC 793 for the TCP specification).
So, bottom line: it may not be a good idea to keep the socket connection permanently open, and it is certainly not a good idea for a busy server accepting lots of client traffic; the accepted connections would linger and use up local resources as newer connections keep coming in...

Is it a good practice to keep a Socket connection open for reading?

I am trying to build an Android IM. Since users may receive new messages from others, should I keep the TCP connection open and keep reading data from it? e.g.
while (!shutdown) {
    int count = socketChannel.read(buffer);
    // do something with buffer
}
This depends on your implementation. If you're using blocking sockets then you wouldn't want to do this: with more than one client connecting to the server, a single client handled this way would block all the other clients from being served by that server socket.
What you could do is have the server socket running constantly (as you normally would) and then connect to it with a client socket to check for and receive any new messages that have arrived. Once you've received your messages, you can close the socket. This could be performed every n seconds.
The other option is to use non-blocking socket connections and always keep them open, but this could lead to issues if you have many clients.
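A rough sketch of the non-blocking variant using a Selector; the port and buffer size are arbitrary, and real code would keep per-client state instead of one shared buffer:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8889));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buffer = ByteBuffer.allocate(4096);

        while (true) {
            selector.select();                      // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {           // new client: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {      // data (or EOF) from an existing client
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) {
                        client.close();             // peer closed: drop the channel
                    } else {
                        buffer.flip();
                        // handle n bytes of message data
                    }
                }
            }
        }
    }
}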

java.net.SocketException Connection timed out error

I am getting the error below when I try to connect to a TCP server. My program tries to open around 300-400 connections using different threads, and this happens around the 250th thread. Each thread uses its own connection to send and receive data.
java.net.SocketException: Connection timed out:could be due to invalid address
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:372)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:233)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:220)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:385)
Here is the code I have that a thread uses to get socket:
socket = new Socket(my_hostName, my_port);
Is there any default limit on the number of connections that a TCP server can have at one time? If not, how do I solve this type of problem?
You could be getting a connection timeout if the server has a ServerSocket bound to the port you are connecting to, but is not accepting the connection.
If it always happens with the 250th connection, maybe the server is set up to only accept 250 connections, and someone has to disconnect so you can connect. Or you can increase the timeout: instead of creating the socket like that, create it with the no-argument constructor and then use the connect() method:
Socket s = new Socket();
s.connect(new InetSocketAddress(my_hostName, my_port), 90000);
The default connection timeout is platform-dependent; the code above waits up to 90 seconds to connect and then throws the exception if the connection cannot be established.
You could also set a lower connection timeout and do something else when you catch that exception...
Why all the connections? Is this a test program? If so, be aware that opening large numbers of connections from a single client stresses the client in ways that aren't exercised by real systems with large numbers of different client hosts, so test results from that kind of client aren't all that valid. You could be running out of client ports, or some other client resource.
If it isn't a test program, same question: why all the connections? You'd be better off running a connection pool and reusing a much smaller number of connections serially. The network only has so much bandwidth, after all; dividing it by 400 isn't very useful.
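If concurrent connections really are needed, a very small pool caps how many exist at once; here is a sketch where the class name, pool size and blocking strategy are just one possible arrangement:
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SimpleSocketPool {
    private final BlockingQueue<Socket> pool;

    SimpleSocketPool(String host, int port, int size) throws IOException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new Socket(host, port));  // open a bounded number of connections up front
        }
    }

    Socket borrow() throws InterruptedException {
        return pool.take();                    // blocks instead of opening connection #251
    }

    void release(Socket socket) {
        pool.add(socket);                      // hand the connection back for reuse
    }
}
Threads then borrow and release, say, 20 shared connections instead of opening 300-400 of their own.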

java/groovy socket write timeout

I have a simple, badly behaved server (written in Groovy):
ServerSocket ss = new ServerSocket(8889);
Socket s = ss.accept()
Thread.sleep(1000000)
And a client that I want to time out (since the server is not consuming its input):
Socket s = new Socket("192.168.0.106", 8889)
s.setSoTimeout(100);
s.getOutputStream().write( new byte[1000000] );
However, this client blocks forever. How do I get the client to time out?
Thanks!
You could spawn the client write in its own thread and wait (with a timeout) for it to return, possibly using a Future object to get the return value if the socket write succeeds.
I do believe that the SO_TIMEOUT setting on a Socket only affects the read(..) calls from the socket, not the writes.
You might try using a SocketChannel (rather than a Stream) and spawn another thread that also has a handle to that channel. The other thread can asynchronously close the channel after a certain timeout if it is still blocked.
The socket timeout is at the TCP level, not at the application level. The source machine's TCP is buffering the data to be sent and the target machine's network stack is acknowledging the data received, so there's no timeout. Also, different TCP/IP implementations handle these timeouts differently. Take a look at what's going on on the wire with tcpdump (or Wireshark if you are so unfortunate :)). What you need is an application-level ACK, i.e. you need to define the protocol between the client and the server. I can't comment on Java packages (you probably want to look at nio), but a receive timeout on that ACK would usually be handled with poll/select.
There is no way to get a write timeout directly, but you can always spawn a thread that closes the connection if the write hasn't finished.
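One way to approximate that in Java is to run the write on a worker thread and close the socket if it has not finished in time; a sketch along those lines, where the helper name and timeout handling are just one possible arrangement:
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimedWrite {
    static void writeWithTimeout(Socket socket, byte[] data, long timeoutMillis)
            throws IOException, InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> pending = executor.submit(() -> {
                OutputStream out = socket.getOutputStream();
                out.write(data);   // may block indefinitely once the TCP send buffer is full
                out.flush();
                return null;
            });
            pending.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            socket.close();        // closing the socket typically unblocks the stuck write
            throw new IOException("write did not finish within " + timeoutMillis + " ms", e);
        } catch (ExecutionException e) {
            throw new IOException("write failed", e.getCause());
        } finally {
            executor.shutdownNow();
        }
    }
}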
