I have a simple badly behaved server (written in Groovy)
ServerSocket ss = new ServerSocket(8889)
Socket s = ss.accept()      // accept one connection...
Thread.sleep(1000000)       // ...then never read from it
And a client that I want to time out (since the server is not consuming its input):
Socket s = new Socket("192.168.0.106", 8889)
s.setSoTimeout(100)
s.getOutputStream().write( new byte[1000000] )
However, this client blocks forever. How do I get the client to timeout?
THANKS!!
You could spawn the client write in its own thread and wait (with a timeout) for it to return, possibly using a Future object to get the return value if the socket write is successful.
I do believe that the SO_TIMEOUT setting for a Socket only affects the read(..) calls on the socket, not the writes.
You might try using a SocketChannel (rather than a stream) and spawn another thread that also has a handle to that channel. The other thread can asynchronously close the channel if the write stays blocked longer than a certain timeout.
The socket timeout is at the TCP level, not at the application level. The source machine's TCP stack buffers the data to be sent and the target machine's network stack acknowledges the data as received, so there's no timeout. Also, different TCP/IP implementations handle these timeouts differently. Take a look at what's going on on the wire with tcpdump (or Wireshark if you are so unfortunate :) What you need is an application-level ACK, i.e. you need to define the protocol between the client and the server. I can't comment on Java packages (you probably want to look at nio), but a receive timeout on that ACK would usually be handled with poll/select.
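A sketch of what that application-level ACK could look like on the client side, assuming a made-up protocol of "write a chunk, then read a one-byte ACK". Since SO_TIMEOUT does apply to reads, the ACK read is where the timeout fires:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical protocol: send a chunk, then wait for a one-byte ACK.
// setSoTimeout() only affects reads, so the read of the ACK is what times out.
static void sendWithAck(Socket socket, byte[] chunk, int ackTimeoutMs) throws IOException {
    socket.setSoTimeout(ackTimeoutMs);
    OutputStream out = socket.getOutputStream();
    InputStream in = socket.getInputStream();
    out.write(chunk);
    out.flush();
    try {
        int ack = in.read();                   // blocks for at most ackTimeoutMs
        if (ack == -1) {
            throw new IOException("server closed the connection before ACKing");
        }
    } catch (SocketTimeoutException e) {
        throw new IOException("server did not ACK within " + ackTimeoutMs + " ms", e);
    }
}

Note that a single very large write can still block before you ever reach the ACK read, so this works best with chunks small enough to fit in the send buffer.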
There is no way to get a timeout on the write itself, but you can always spawn a thread that closes the connection if the write hasn't finished in time.
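A rough sketch of that watchdog approach (the method name and timeout are made up): run the blocking write on a worker thread and close the socket if it hasn't finished in time, which makes the stuck write fail with an exception.

import java.net.Socket;
import java.util.concurrent.*;

public class TimedWrite {
    // Runs the blocking write on a worker thread; if it does not finish in
    // time, the socket is closed, which makes the stuck write fail on the
    // worker thread instead of blocking forever.
    static void writeWithTimeout(final Socket socket, final byte[] data,
                                 long timeout, TimeUnit unit) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            Future<?> result = worker.submit(new Callable<Void>() {
                public Void call() throws Exception {
                    socket.getOutputStream().write(data);
                    socket.getOutputStream().flush();
                    return null;
                }
            });
            try {
                result.get(timeout, unit);   // wait for the write to complete
            } catch (TimeoutException e) {
                socket.close();              // unblocks the hung write
                throw e;
            }
        } finally {
            worker.shutdown();
        }
    }
}

With the badly behaved server above, a call like writeWithTimeout(s, new byte[1000000], 100, TimeUnit.MILLISECONDS) should throw a TimeoutException instead of blocking forever once the send buffers fill.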
I am trying to build an Android IM. Since users may have new messages from others, should I keep the TCP connection open and keep reading data from it? e.g.
while (!shutdown) {
    int count = socketChannel.read(buffer);
    // do something with buffer
}
This depends on your implementation. If you're using blocking sockets then you wouldn't want to do this: with more than one client connected to the server, one client would block all the others from being served on that server socket.
What you could do is keep the server socket running constantly (as you normally would) and then connect to it with a client socket to check for and receive any new messages that have arrived. Once you've received your messages you can close the socket. This could be performed every n seconds.
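If you go with that periodic-poll option, the client loop could look roughly like this. It's only a sketch: the host, port, poll interval, the shutdown flag and handleIncomingMessage() are placeholders, and it assumes the server closes the connection once it has sent everything pending.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical poll loop: connect, drain whatever the server has queued for us,
// disconnect, then sleep until the next poll.
void pollForMessages() throws InterruptedException {
    while (!shutdown) {                            // 'shutdown' is your own flag
        try (Socket socket = new Socket("im.example.com", 5222)) {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                handleIncomingMessage(line);       // your own message handling
            }
        } catch (IOException e) {
            // network hiccup: ignore and retry on the next poll
        }
        Thread.sleep(30_000);                      // poll every 30 seconds
    }
}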
The other option is to use non-blocking socket connections and keep them open permanently, but this could lead to issues if you have many clients.
I am writing an FTP server in Java using non-blocking NIO.
I want to prevent a user from connecting to my server and then doing nothing.
Here is my code snippet:
ServerSocketChannel serverChannel = (ServerSocketChannel) key.channel();
SocketChannel socketChannel = serverChannel.accept();
socketChannel.socket().setSoTimeout(3000);
socketChannel.configureBlocking(false);
....................
It does not work. Um... is it possible to throw an exception when the user does nothing (e.g. for, say, 15 minutes)?
thank you very much
socketChannel.socket().setSoTimeout(3000);
You've done it. But then you also put the channel into non-blocking mode, which prevents you from getting timeout exceptions. If you're using non-blocking mode, and therefore presumably also select(), you have to manage the timeout yourself.
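One way to manage that yourself in a select() loop, sketched here as an illustration: remember when each client channel was last active (I'm assuming the key attachment is used for that timestamp, and the 15-minute limit is just the value from the question) and close anything idle for too long.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

static final long IDLE_LIMIT_MS = 15 * 60 * 1000;    // e.g. 15 minutes

// Call this on every pass through the select() loop.
static void closeIdleClients(Selector selector) throws IOException {
    long now = System.currentTimeMillis();
    for (SelectionKey key : selector.keys()) {
        if (!(key.channel() instanceof SocketChannel)) {
            continue;                                 // skip the listening channel
        }
        Long lastActivity = (Long) key.attachment();  // timestamp stored on the key
        if (lastActivity != null && now - lastActivity > IDLE_LIMIT_MS) {
            key.channel().close();                    // also cancels the key
        }
    }
}

Use selector.select(someInterval) rather than the blocking select() so the loop wakes up periodically, call key.attach(System.currentTimeMillis()) when a client connects and whenever you read data from it, and run closeIdleClients on each pass.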
As the topic suggests I have a server and some clients.
The server accepts I/O connections concurrently (no queueing of socket connections), but I have this troubling issue and I do not know how to get around it!
If I force a client to throw an I/O exception, the server detects it and terminates the client thread correctly (verified in Task Manager (Windows) and System Monitor (Ubuntu)). But if I emulate an I/O operation that is "hanging", e.g. Thread.sleep(60*1000); or
private static Object lock = new Object();

synchronized (lock) {
    while (true) {
        try {
            lock.wait();
        } catch (InterruptedException e) {
            /* Foo */
        }
    }
}
then all subsequent I/O operations (connection & data transfer) seem to block or wait until the "hanging" client is terminated. The application makes use of an ExecutorService, so if the "hanging" client does not complete its operations within the suggested time limit, the task times out and the client is forced to exit. The subsequent "blocked" I/Os then resume, but I wonder why the server doesn't accept any connections or perform any I/O operations while a client "hangs"?
NOTE: The client threading takes place in the server's main loop like this:
while (true) {
    accept client connection;
    submit client task;
    // ExecutorService here, in the form
    // spService.submit(new Callable<Tuple<String[], BigDecimal[]>>() {
    //     ... code ... }}).get(taskTimeout, taskTimeUnit);
    check task result & perform cleanup if result is null;
    otherwise continue;
}
The Problem :
This may very well indicate that your server ACCEPTS client connections concurrently; however, it only handles those connections synchronously. That means that even if a million clients connect successfully, if any one of them takes a long time (or hangs), it will hold up the others.
The TEST:
To verify this: I would vary the amount of time a client takes to connect by adding Thread.sleep(1000) statements in your clients.
Expected result :
I believe you will see that even adding a single Thread.sleep(1000) statement in your client delays all other connecting clients by 1000 ms.
I think I have found the source of my problems!
I do use a one-thread-per-client model, but I run my tests locally, i.e. on the same machine, which means all of them have the same IP! So each client is assigned the same IP as the server! I guess this leaves server and clients to differ only in port number, but since each client is mapped to a different local port for each server connection, the server shouldn't block. I have confirmed that each client and the server use different I/O objects (compared references) and I wrap their sockets' <Input/Output>Streams in BufferedReaders & PrintWriters, but still, when a client hangs, all other clients hang too (so maybe the I/O channels are indeed the same???)! I will test this on another machine and check the results back with you! :)
EDIT: Confirmed the erratic behaviour. It seems that even with remote clients, if one hangs, the other clients hang too! :/
Don't know why, but I am determined to fix this. It's just pretty weird, since I am pretty sure I use one thread per client (the I/Os differ, the client sockets differ, IPs seem not to be the problem, I even map each client in the server to a local port of my choice ...)
Maybe I'll switch to NIO if I don't find a solution soon enough.
SOLUTION: Solved the problem! It turned out that the ExecutorService had to be run in a separate thread, otherwise if an I/O operation in one client blocked, all I/Os would block! That's strange given that I've tried both Executors.newFixedThreadPool(<nThreads>); and Executors.newCachedThreadPool(); and the client actions (a.k.a. I/Os) should take place in a new thread for each client.
In any case, I wrapped the calls in a method so each client instance would use a final ExecutorService baseWorker = Executors.newSingleThreadExecutor(); and I created a new Thread explicitly each time using <Thread instance>.start(); so each one would run in the background :)
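In other words, the accept loop itself was blocking in Future.get(taskTimeout, taskTimeUnit), so no new connections were accepted while one client's task was running or hanging. A rough sketch of that fix, under that assumption (ClientHandler, the port and the 30-second timeout are hypothetical):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class NonBlockingAcceptLoop {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(4444);
        final ExecutorService workers = Executors.newCachedThreadPool();

        // The main loop only accepts; the per-client work *and* the timed wait
        // on its result happen on other threads, so a hung client cannot stall accept().
        while (true) {
            final Socket client = serverSocket.accept();
            new Thread(new Runnable() {
                public void run() {
                    Future<?> task = workers.submit(new ClientHandler(client)); // hypothetical handler
                    try {
                        task.get(30, TimeUnit.SECONDS);   // wait here, not in the accept loop
                    } catch (TimeoutException e) {
                        task.cancel(true);                // give up on the hung client
                        closeQuietly(client);
                    } catch (Exception e) {
                        closeQuietly(client);             // ExecutionException / InterruptedException
                    }
                }
            }).start();
        }
    }

    static void closeQuietly(Socket s) {
        try { s.close(); } catch (IOException ignored) {}
    }
}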
I need simple client-server communication in order to implement a unit test.
My steps:
Create server thread
Wait for the server thread to put the server socket into listen mode (serverSocket.accept())
Create client
Make some request, verify responses
Basically, I have a problem with step #2. I can't find a way to be notified when the server socket is put into the "listen" state. An asynchronous call to "accept" would do in this case, but Java doesn't seem to support this (it appears to support only asynchronous channels, and those are incompatible with the "accept()" method according to the documentation).
Of course I can put in a simple "sleep", but that is not really a solution for production code.
So, to summarize, I need to detect when ServerSocket has been put into listen mode without using sleeps and/or polling.
The socket is put into listening state as soon as you construct the ServerSocket object, not when you call accept. As long as you create the client after the ServerSocket constructor has completed, you won't have a problem. Connections will be accepted and internally queued until accept gets called.
Here is some code to demonstrate:
ServerSocket serverSocket = new ServerSocket(12345);
Thread.sleep(10000);
Socket socket = serverSocket.accept();
During that 10 second gap before accept is called, the OS netstat command will show the server socket in "LISTENING" state, and clients can connect to it. If a client connects during that 10 seconds, the connection is queued, and when the accept method is finally called it immediately returns the queued Socket object.
Why not send a signal just before calling accept()?
synchronized (lock) {
    connectionAccepted = true;
    lock.notify();
}
serverSocket.accept();
To be sure that the socket is ready, add a tiny sleep in your "client" code:
synchronized (lock) {
    while (!connectionAccepted) {
        lock.wait();
    }
}
// we are here once notify() has been called
Thread.sleep(10); // 10 ms
startTest();
You can even do better: create a loop that tries to "ping" the socket with a tiny sleep between attempts. In that case you will start the test as quickly as possible.
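A sketch of that ping loop (the host, port, connect timeout and retry budget are all placeholders): keep trying to connect until the server socket is listening, then run the test.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Retry until the server socket is accepting connections (or we give up).
static void waitForServer(String host, int port, int maxAttempts) throws Exception {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try (Socket probe = new Socket()) {
            probe.connect(new InetSocketAddress(host, port), 200);  // 200 ms connect timeout
            return;                                                 // server is listening
        } catch (IOException notReadyYet) {
            Thread.sleep(10);                                       // tiny sleep, then retry
        }
    }
    throw new IllegalStateException("server never came up on " + host + ":" + port);
}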
I'm implementing a Java TCP/IP server using ServerSocket to accept messages from clients via network sockets.
It works fine, except for clients on PDAs (a Wi-Fi barcode scanner).
If I have a connection between the server and the PDA, and the PDA goes into suspend (standby) after some idle time, then there are problems with the connection.
When the PDA wakes up again, I can observe in a TCP monitor that a second connection with a different port is established, but the old one remains established too:
localhost:2000 remotehost:4899 ESTABLISHED (first connection)
localhost:2000 remotehost:4890 ESTABLISHED (connection after wakeup)
And now communication doesn't work, as the client uses the new connection but the server still listens on the old one, so the server doesn't receive the messages. But when the server sends a message to the client it realizes the problem (it receives a SocketException: Connection reset). The server then uses the new connection, and all the messages that the client sent in the meantime are received in a single burst!
So I only notice the network problem when the server tries to send a message; until then there are no exceptions or anything. How can I properly react to this problem, so that the new connection is used as soon as it is established (and the old one is closed)?
From your description I guess that the server is structured like this:
server_loop
{
    client_socket = server_socket.accept()
    TalkToClientUntilConnectionCloses(client_socket)
}
I'd change it to process incoming connections and established connections in parallel. The simplest approach (from the implementation point of view) is to start a new thread for each client. It is not a good approach in general (it has poor scalability), but if you don't expect a lot of clients and can afford it, just change the server like this:
server_loop
{
    client_socket = server_socket.accept()
    StartClientThread(client_socket)
}
As a bonus, you get the ability to handle multiple clients simultaneously (and all the troubles that come with it too).
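A minimal thread-per-client sketch of that structure, just as an illustration (the port and the handler body are placeholders for your existing logic):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(2000);
        while (true) {
            final Socket client = serverSocket.accept();          // e.g. a new PDA connection
            new Thread(new Runnable() {
                public void run() {
                    try {
                        talkToClientUntilConnectionCloses(client); // your existing per-client logic
                    } catch (IOException e) {
                        // stale or reset connection: just drop it
                    } finally {
                        try { client.close(); } catch (IOException ignored) {}
                    }
                }
            }).start();
        }
    }

    static void talkToClientUntilConnectionCloses(Socket client) throws IOException {
        // placeholder for the existing protocol handling
    }
}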
It sounds like the major issue is that you want the server to realize and drop the old connections as they become stale.
Have you considered setting a timeout on the connection on the server-side socket (the connection Socket, not the ServerSocket) so you can close/drop it after a certain period? Perhaps after the SO_TIMEOUT expires on the Socket, you could test it with an echo/keepalive command to verify that the connection is still good.
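A sketch of that idea, assuming SO_TIMEOUT on the accepted socket and a hypothetical one-byte keepalive probe when the read times out; the timeout value and probe byte are illustrative, and as noted in the question, a dead connection shows up as a reset once the server tries to send:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Read loop for one accepted connection: if nothing arrives within the
// timeout, probe the client; a connection left behind by a suspended PDA
// eventually fails the probe and gets closed.
static void serveClient(Socket client) throws IOException {
    client.setSoTimeout(60000);                // 60 s read timeout (illustrative)
    InputStream in = client.getInputStream();
    OutputStream out = client.getOutputStream();
    byte[] buffer = new byte[1024];
    try {
        while (true) {
            try {
                int n = in.read(buffer);
                if (n < 0) break;              // client closed the connection cleanly
                // ... handle the n bytes received ...
            } catch (SocketTimeoutException idle) {
                // nothing received within the timeout: probe the client
                out.write('\n');               // hypothetical keepalive/echo byte
                out.flush();                   // throws IOException once the peer is really gone
            }
        }
    } finally {
        client.close();                        // drop the stale connection
    }
}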