I have two beans creating a client socket connection to a server: AbstractClientConnectionFactory and TcpOutboundGateway.
The server offers a timeout of 1 minute.
Question: which timeouts do I have to set on the beans so that Spring/Java does not terminate the connection before the server's timeout is reached?
The following properties are available:
factory.setSoTimeout();
gateway.setRequestTimeout();
gateway.setRemoteTimeout();
Which of those timeouts is the correct one to set from a client's perspective? Or should I just set them all to 60000L?
I'm asking because right now I'm only setting factory.setSoTimeout(60000L), and I'm getting socket timeouts after 10 seconds. So maybe I have to set the gateway timeouts as well?
I also discovered that the timeout only goes away once gateway.setRemoteTimeout(60000L) is set too. So it's probably correct to also set this value (though I don't understand why the timeout has to be configured twice).
Still, the question remains what .setRequestTimeout() is for.
factory.setSoTimeout();
The SO timeout is set on the socket itself; if no reply is received within that time, the reader thread gets an exception. If we haven't sent a message recently (meaning we are not expecting a reply), the socket is closed. If we did send a message recently, we'll wait for one more socket timeout, after which the socket is closed.
gateway.setRequestTimeout();
This only applies if the factory singleUse is false (meaning a shared single connection). It is the time we wait to get access to the socket if another request is in process. Since TCP has no natural mechanism for request/reply correlation, we can't have 2 (or more) requests outstanding so the second request has to wait until the first one completes. If singleUse is true a new socket is used for each request so this is not needed. The CachingClientConnectionFactory provides a mechanism to use a pool of shared sockets. Again, this timeout does not apply (but the pool has a timeout if all the sockets are in use).
gateway.setRemoteTimeout();
This is how long the gateway itself will wait for a reply; if this expires, the socket is closed.
SO timeout and remoteTimeout effectively do the same thing; just with different implementations.
You can set both to at least the time you expect a request to take, or leave the SO timeout at its default (infinity).
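For illustration, here is a minimal wiring sketch that sets all three values, assuming a TcpNetClientConnectionFactory as the concrete AbstractClientConnectionFactory; the host, port and the 60-second values are placeholders, and the method would live in your own configuration class:
import org.springframework.integration.ip.tcp.TcpOutboundGateway;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;

public TcpOutboundGateway clientGateway() {
    // Placeholder host/port; all timeout values are in milliseconds.
    TcpNetClientConnectionFactory factory =
            new TcpNetClientConnectionFactory("server.example.com", 1234);
    factory.setSoTimeout(60000);       // socket-level read timeout (SO_TIMEOUT)
    factory.setSingleUse(false);       // one shared connection

    TcpOutboundGateway gateway = new TcpOutboundGateway();
    gateway.setConnectionFactory(factory);
    gateway.setRequestTimeout(60000);  // wait for access to the shared socket
    gateway.setRemoteTimeout(60000);   // wait for the server's reply
    return gateway;
}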
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerThread implements Runnable
{
    private static final int port = 10000;

    @Override
    public void run() {
        try (ServerSocket serverSocket = new ServerSocket(port)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                // handle the client request in a separate thread
                ClientThread clientThread = new ClientThread(clientSocket);
                new Thread(clientThread).start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Will this work if I have, let's say, 10 different threads running ServerThread.run()? Or should I use the same ServerSocket object for all threads?
The docs say:
The constructor for ServerSocket throws an exception if it can't listen on the specified port (for example, the port is already being used)
You may be wondering why I want to do this in the first place and not simply have a single thread running ServerSocket.accept(). Well, my assumption is (correct me if I'm wrong) that the accept() method may take some time to finish establishing a connection, especially if the ServerSocket is SSL (because of handshaking). So if two clients want to connect at the same time, one has to wait for the other. This would be very bad for a high traffic server.
Update: It seems that the accept() method will return as soon as an established connection is available in the queue. This means that if there's a queue of clients waiting to connect, the server thread can handle the requests as fast as possible, and only one thread is needed (apart from the time it takes to create and start a new thread for each request, but that time is negligible when using a thread pool).
The ServerSocket also has a parameter called "backlog", where you can set the maximum number of connections in the queue. According to the book "Fundamental Networking in Java", section 3.3.3:
TCP itself can get ahead of a TCP server application in accepting connections. It maintains a ‘backlog queue’ of connections to a listening socket which TCP itself has completed but which have not yet been accepted by the application. This queue exists between the underlying TCP implementation and the server process which created the listening socket. The purpose of pre-completing connections is to speed up the connection phase, but the queue is limited in length so as not to pre-form too many connections to servers which are not accepting them at the same rate for any reason. When an incoming connection request is received and the backlog queue is not full, TCP completes the connection protocol and adds the connection to the backlog queue. At this point, the client application is fully connected, but the server application has not yet received the connection as a result value of ServerSocket.accept. When it does so, the entry is removed from the queue.
I'm still not sure, though, whether in the case of SSL the handshaking is also done in parallel by ServerSocket.accept() for simultaneous connections.
Update 2 The ServerSocket.accept() method itself doesn't do any real networking at all. It will return as soon as the operating system has established a new TCP connection. The operating system itself holds a queue of waiting TCP connections, which can be controlled by the "backlog" parameter in the ServerSocket constructor:
ServerSocket serverSocket = new ServerSocket(port, 50);
// this will create a server socket with a maximum of 50 connections in the queue
SSL handshaking is done after the client calls Socket.connect(). So one thread for ServerSocket.accept() is always enough.
Here are a few thoughts regarding your problem:
You can't listen() on the same IP+port with several ServerSockets. If you could, to which one of the sockets would the OS deliver the SYN packet?*
TCP indeed maintains a backlog of pre-accepted connections so a call to accept() will return (almost) immediately the first (oldest) socket in the backlog queue. It does so by sending the SYN-ACK packet automatically in reply to a SYN sent by the client, and waits for the reply-ACK (the 3-way handshake).
But, as @zero298 suggests, accepting connections as fast as possible isn't usually the problem. The problem is the load incurred by processing communication with all the sockets you'll have accepted, which may very well bring your server to its knees (it's effectively a DoS attack). In fact, the backlog parameter is usually there so that connections waiting too long in the backlog queue to be accept()ed will be dropped by TCP before they ever reach your application.
Instead of creating one thread per client socket, I would suggest you use an ExecutorService thread pool running some maximum number of threads, each handling the communication with one client. That allows for a graceful degradation of system resources, instead of creating millions of threads, which would in turn cause thread starvation, memory issues, file descriptor limits, ... Coupled with a carefully chosen backlog value, you'll be able to get the maximum throughput your server can offer without crashing it. And if you're worried about DoS on SSL, the very first thing the run() method of your client thread should do is call startHandshake() on the newly connected socket.
Regarding the SSL part, TCP itself cannot do any SSL pre-accept, as it would need to perform encryption/decryption, talk to a keystore, etc., which is well beyond its specification. Note that you should also use an SSLServerSocket in that case.
To address the use case you gave (clients willingly delaying the handshake to DoS your server), you may find it worth reading an Oracle forum post about it where EJP (again) answers:
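A rough sketch of that accept loop under these suggestions; the port, backlog and pool size are arbitrary, and the default SSL context is assumed to be configured already (e.g. via the javax.net.ssl.keyStore system properties):
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

public class SslAcceptLoop {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(100); // cap on worker threads
        SSLServerSocket serverSocket = (SSLServerSocket)
                SSLServerSocketFactory.getDefault().createServerSocket(10000, 50); // port, backlog
        while (true) {
            SSLSocket clientSocket = (SSLSocket) serverSocket.accept(); // returns quickly, no SSL work yet
            pool.submit(() -> {
                try (SSLSocket s = clientSocket) {
                    s.startHandshake(); // handshake runs in the worker thread, not the accept thread
                    // ... read the request and write the response here ...
                } catch (IOException e) {
                    // a misbehaving client only costs one pooled thread for a bounded time
                }
            });
        }
    }
}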
The backlog queue is for connections that have been completed by the TCP stack but not yet accepted by the application. Nothing to do with SSL yet. JSSE doesn't do any negotiation until you do some I/O on the accepted socket, or call startHandshake() on it, both of which you would do in the thread dealing with the connection. I don't see how you can make a DOS vulnerability out of that, at least not an SSL-specific one. If you are experiencing DOS conditions, most probably you are doing I/O in the accept() thread that should be done in the connection-handling thread.
*: Though Linux >=3.9 does some kind of load-balancing, but for UDP only (so not SSLServerSocket) and with option SO_REUSEPORT, which is not available on all platforms anyway.
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if that client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Both methods don't detect a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
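For instance (the 30-second value is only an illustration):
socket.setSoTimeout(30000);                         // a blocking read now waits at most 30 s
try {
    int firstByte = socket.getInputStream().read(); // blocks until data arrives or the timeout fires
    // ... read the rest of the request and handle it ...
} catch (java.net.SocketTimeoutException e) {
    socket.close();                                 // too slow: drop the client and free the worker
}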
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
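To give an idea of what that looks like, here is a minimal selector loop sketch (port and buffer size are arbitrary, and error handling is stripped down):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServerSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(10000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                               // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                    // new client connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // only channels with data ready get here
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n == -1) {
                        client.close();                      // remote end closed the connection
                    } else {
                        buf.flip();
                        // ... process the bytes in buf ...
                    }
                }
            }
        }
    }
}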
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SO timeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If read() returns -1, throw the connection away, since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old. In this case, we perform an actual read that will return -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100 ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and to also update the timestamp when getting a SocketTimeoutException.
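Put together, one iteration of a worker thread might look roughly like this; Connection is a hypothetical wrapper holding the socket and a lastBlockingRead timestamp, queue is the shared BlockingQueue<Connection>, and the checked exceptions are assumed to be handled by the surrounding loop:
Connection conn = queue.take();
java.net.Socket socket = conn.socket;          // socket.setSoTimeout(100) was called after accept()
java.io.InputStream in = socket.getInputStream();

if (System.currentTimeMillis() - conn.lastBlockingRead < 1000 && in.available() == 0) {
    queue.put(conn);                           // checked recently and nothing to read: requeue
} else {
    try {
        int b = in.read();                     // blocks for at most 100 ms (the SO timeout)
        conn.lastBlockingRead = System.currentTimeMillis();
        if (b == -1) {
            socket.close();                    // remote end closed: discard the connection
        } else {
            // ... read the rest of the request, handle it, send the reply ...
            queue.put(conn);
        }
    } catch (java.net.SocketTimeoutException e) {
        conn.lastBlockingRead = System.currentTimeMillis();  // per the edit above
        queue.put(conn);                       // still connected, just quiet
    }
}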
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise you might have to imagine the specifics).
A mix of both; actually, this is probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive.
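A sketch of the mixed approach, assuming the protocol reserves a PING/PONG byte pair; the byte values and the 30 s / 5 s limits are made up:
private static final int PING = 0x01;   // hypothetical protocol bytes
private static final int PONG = 0x02;

boolean stillAlive(Socket socket, long lastActivityMillis) throws IOException {
    if (System.currentTimeMillis() - lastActivityMillis < 30000) {
        return true;                             // quiet, but not long enough to worry yet
    }
    socket.setSoTimeout(5000);                   // give the client 5 s to answer the ping
    socket.getOutputStream().write(PING);
    socket.getOutputStream().flush();
    try {
        return socket.getInputStream().read() == PONG;
    } catch (java.net.SocketTimeoutException e) {
        return false;                            // no answer: treat the client as gone
    }
}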
I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client simply loses its connection. Currently I have a List<> of all of my online accounts; however, if a player doesn't log out of the server cleanly, the account stays in the list.
I've been looking through the documents, and searching google wording my question in dozens of different ways, but I can't find the answer that I'm looking for.
Basically, I need a way to figure out which channel was disconnected, and pass it as a parameter to a method, is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Add a Date field lastActive to the client class, which the client updates every 5 minutes (say). Another thread runs on the server side every 10 minutes to check this field; if lastActive is older than 10 minutes, remove the player from the list. You can adjust these intervals to your needs.
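A rough server-side sketch of that check; onlinePlayers, getLastActive() and the 10-minute interval are just illustrative names and values:
// Periodically removes players whose client hasn't updated lastActive recently.
// onlinePlayers should be a thread-safe collection (e.g. a CopyOnWriteArrayList).
ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();
reaper.scheduleAtFixedRate(() -> {
    long now = System.currentTimeMillis();
    onlinePlayers.removeIf(p -> now - p.getLastActive() > 10 * 60 * 1000);
}, 10, 10, TimeUnit.MINUTES);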
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty specific point of view you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you setup the ReadTimeoutHandler as previously described then you will be notified of close events after your timeout interval passes and no activity is detected or the channel is closed normally.
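A sketch of both pieces; MyProtocolHandler and onlineAccounts are placeholders for your own handler and bookkeeping, and the 60-second timeout is arbitrary:
// In your ChannelInitializer: close the channel if nothing is read for 60 s.
ch.pipeline().addLast(new ReadTimeoutHandler(60));   // io.netty.handler.timeout.ReadTimeoutHandler
ch.pipeline().addLast(new MyProtocolHandler());      // your own business-logic handler

// When the player logs in: get notified whenever the channel closes, for any reason.
channel.closeFuture().addListener((ChannelFutureListener) future -> {
    onlineAccounts.remove(channel);                  // hypothetical bookkeeping for this example
});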
I'm having a problem with a library that I am using. It might be the library or it might be me using it wrong!
Basically, when I do this (Timeout in milliseconds)
_ignitedHttp.setConnectionTimeout(1); // v short
_ignitedHttp.setSocketTimeout(60000); // 60 seconds
No timeout exception is generated and it works OK. However, when I do the following,
_ignitedHttp.setConnectionTimeout(60000); // 60 seconds
_ignitedHttp.setSocketTimeout(1); // v short
I get a Socket Exception.
So, my question is why can I not simulate a Connection Exception? Am I misunderstanding the difference between a socket and a connection time-out? The library is here (not officially released yet).
A connection timeout occurs only while establishing the TCP connection. It usually happens when the remote machine does not answer: the server has been shut down, you used the wrong IP/DNS name or port, or the network route to the server is down.
A socket timeout monitors the continuous incoming data flow. If the flow is interrupted for longer than the specified timeout, the connection is regarded as stalled/broken. Of course this only works with connections where data is received all the time.
Setting the socket timeout to 1 means that new data must be received every millisecond (assuming you read the data block-wise and the blocks are large enough)!
If the incoming stream stalls for more than a millisecond, you run into a timeout.
A connection timeout is the maximum amount of time that the program is willing to wait to setup a connection to another process. You aren't getting or posting any application data at this point, just establishing the connection, itself.
A socket timeout is the timeout when waiting for individual packets. It's a common misconception that a socket timeout is the timeout to receive the full response. So if you have a socket timeout of 1 second, and a response comprised of 3 IP packets, where each response packet takes 0.9 seconds to arrive, for a total response time of 2.7 seconds, then there will be no timeout.
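In plain java.net terms, the two timeouts are configured in different places (host and values are examples only):
Socket socket = new Socket();
// Connection timeout: how long to wait for the TCP connection to be established.
socket.connect(new InetSocketAddress("example.com", 80), 60000);
// Socket (read) timeout: how long a single read() may block waiting for more data.
socket.setSoTimeout(60000);
int b = socket.getInputStream().read();  // throws SocketTimeoutException after 60 s of silence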
We are making an FTP connection from our application, which is a Java application.
We have set a connection timeout using the Socket.connect(address, timeout) method before calling FTPClient.connect().
We haven't set any timeout for retrieving files from the FTP site over the same connection. Is it mandatory to call FTPClient.setSoTimeout(timeout) to set an individual timeout for each such interaction over the same connection, or does Socket.connect(address, timeout) set the timeout for every interaction with the FTP site over one connection?
I would also like to know: what is the difference between these two methods?
The timeout in Socket.connect() is the connect timeout, which is how long to wait for the TCP handshake to finish. This timeout applies only once per connection.
setSoTimeout() sets the socket read timeout, which is how long you wait to read pending bytes from the socket. It applies to every socket read throughout the TCP session.
It's good practice to set both timeout values so you don't rely on system defaults, which may vary. However, the timeouts may not always work when the call is stuck in native code; for example, the connect timeout is not honored if a firewall silently drops packets.
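Assuming Apache Commons Net's FTPClient, the two timeouts map roughly onto the following calls (values are examples):
FTPClient ftp = new FTPClient();
ftp.setConnectTimeout(10000);   // connect timeout: applies once, while the control connection is set up
ftp.connect("ftp.example.com", 21);
ftp.setSoTimeout(30000);        // read timeout on the control socket; must be set after connect()
ftp.setDataTimeout(30000);      // read timeout used for the data connections during file transfers
// ... login, transfer files, disconnect ...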