I have a server that times out after 45 seconds if it hasn't received a full request and closes the connection. I connect to this server through a Socket and write my request to the socket's OutputStream.
Socket socket = new Socket("myhost", myPort);
PrintWriter out = new PrintWriter(socket.getOutputStream());
out.write(properRequestMessage);
out.flush();
I'm assuming here that my request is good (follows my protocol). The server is supposed to respond with a file. I try to read from the socket inputstream:
BufferedReader response = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String in;
while((in = response.readLine()) != null) {
System.out.println(in);
}
The readLine() blocks here and I think it is because my server thinks my request isn't properly terminated and is therefore waiting for more.
Now, if 45 seconds pass and my server times out, will the readLine() unblock or wait for some Socket default timeout time?
That depends on what the server does when it times out. If it closes the connection you will see that. If it just logs a message, you might not see anything.
There is no default read timeout. Your readLine() can wait forever.
If the server closes its end of the socket on that timeout, then readLine() will return null.
The readLine() method will block until it receives input or until the underlying socket read times out.
You don't set the timeout on the read call but rather on the socket itself:
Socket.setSoTimeout(int ms).
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
What actually happens also depends on what the server does: if it closes the socket gracefully, readLine() will return null (end of stream); if the connection is reset, an IOException is thrown. If the connection isn't closed at all, readLine() will wait until the socket read timeout (if one is set) expires.
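For the original question, here is a minimal sketch of that approach (the host, port, and request string are placeholders from the question), setting a read timeout a little longer than the server's 45-second window and distinguishing the possible outcomes:
import java.io.*;
import java.net.*;

public class RequestWithReadTimeout {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("myhost", 1234)) {      // "myhost"/1234 are placeholders
            socket.setSoTimeout(50000);                          // read timeout > the server's 45 s window

            PrintWriter out = new PrintWriter(socket.getOutputStream());
            out.write("properRequestMessage");                   // placeholder protocol request
            out.flush();

            BufferedReader response = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            try {
                String in;
                while ((in = response.readLine()) != null) {     // null means the server closed its end
                    System.out.println(in);
                }
                System.out.println("Server closed the connection.");
            } catch (SocketTimeoutException e) {
                System.out.println("No data for 50 s; the socket is still open.");
            }
        }
    }
}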
The server looks like this:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void main(String[] args) {
        Server server = new Server();
        server.start(5006);
    }

    private void start(int port) {
        try (ServerSocket serverSocket = new ServerSocket(port);
             Socket clientSocket = serverSocket.accept();
             BufferedReader bufferedReader = new BufferedReader(
                     new InputStreamReader(clientSocket.getInputStream(), "UTF-8"))) {
            String line;
            while (true) {                     // intentional busy loop (see context below)
                while ((line = bufferedReader.readLine()) != null) {
                    System.out.println("line = " + line);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
The client looks like this:
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class Client {

    public static void main(String[] args) {
        Client client = new Client();
        client.start("localhost", 5006);
    }

    private void start(String localhost, int port) {
        Random random = new Random();
        try (Socket socket = new Socket(localhost, port);
             BufferedWriter bufferedWriter = new BufferedWriter(
                     new OutputStreamWriter(socket.getOutputStream(), "UTF-8"))) {
            while (true) {
                int i = random.nextInt();
                bufferedWriter.write(String.valueOf(i));
                bufferedWriter.newLine();
                System.out.println("i = " + i);
                //sleep(bufferedWriter);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void sleep(BufferedWriter bufferedWriter) throws Exception {
        //bufferedWriter.flush(); // Has to be enabled if the wait below is to work
        TimeUnit.SECONDS.sleep(3);
    }
}
Context:
As is evident from the code, the server is single threaded. It accepts a connection from the client and goes into a busy loop processing data from the client socket. The server is intentionally handicapped.
Running one instance of the client program works as intended. The client sends random integers and they are printed on the server's console.
Questions:
1) While one instance of the client is still running, spin up another instance of the client program.
How is this instance of the client able to connect to the server when the server is still in the busy loop (while(true))?
The client goes so far as to fill up the buffered writer and then just hangs; waiting for the server to consume the stream.
2) In the client program, uncomment the 'sleep' method and re-run.
The client program hangs. The server does not receive anything. Why? I just want to write to the buffer every 3 seconds. Nonsensical, but let's suppose it is a sane thing to do for argument's sake. I also put the sleep after we send the newline to the server, just to make sure the server prints its input.
If you uncomment the flush in the sleep method, it starts working again. Why?
How is this instance of the client able to connect to the server when the server is still in the busy loop (while(true))? The client goes so far as to fill up the buffered writer and then just hangs; waiting for the server to consume the stream.
Because of the listen backlog queue. The operating system completes the inbound connection and places it on the backlog queue. accept() blocks while the backlog queue is empty and then removes the first connection from it. The client's connect operation is complete before the corresponding accept() is called, and because the client now has a connection it can now write a certain amount of data to it.
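As an illustrative aside (not from the question's code), the backlog can even be set explicitly with the two-argument ServerSocket constructor; a client's connect completes as soon as the OS queues the connection, whether or not accept() has been called yet:
ServerSocket serverSocket = new ServerSocket(5006, 1);  // port 5006, backlog of 1 pending connection
// A second client's `new Socket("localhost", 5006)` still succeeds while the first
// connection is being serviced: the OS queues it, and accept() only dequeues it later.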
2) In the client program, uncomment the 'sleep' method and re-run. The client program hangs.
It sleeps for three seconds. It doesn't 'hang'. Because of the much slower output, it takes much (much) longer for the BufferedWriter's buffer to fill and output to be flushed to the server.
The server does not receive anything. Why?
Because you haven't flushed the buffer.
I just want to write to the buffer every 3 seconds. Nonsensical but let's suppose it is a sane thing to do for arguments sake. I also put in the sleep after we send the new line to the server just to make sure the server prints its input.
If you uncomment the flush in the sleep method, it starts working again. Why?
See above.
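To make that concrete, here is the sleep method from the question with the flush enabled, which is the one-line change that makes the server see each value as it is written:
private void sleep(BufferedWriter bufferedWriter) throws Exception {
    bufferedWriter.flush();            // push the buffered line out to the server now
    TimeUnit.SECONDS.sleep(3);         // then pause before the next write
}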
1) Two clients:
A. Client 1 steps:
(As EJP said), the connection is established at the OS level and placed in the backlog queue.
It then waits for the ServerSocket to pick it up and start reading from the client socket's InputStream.
Client 1 writes data to its socket's OutputStream; the server's reader reads that data from the client socket's InputStream.
The server is now busy, because of the infinite loop.
B. Client 2 steps:
(As EJP said), the connection is established at the OS level and placed in the backlog queue.
With this code it will never be picked up by the ServerSocket: the server is indefinitely busy with the indefinitely open socket to Client 1.
All data written by Client 2 stays in the buffer on Client 2's side.
Note: Client 2 will never get beyond that point with the current code. Even if the Client 1 process is killed and its connection to the server is broken, the server will simply end; there is no second accept() call to process the request from Client 2.
2) flush
When the client writes bytes to its socket's OutputStream, the bytes go into the buffer first; they do not immediately go over the network to the server.
flush() forces the OutputStream to send whatever is in the buffer at that moment.
Without a flush, data is sent to the server only when the buffer is full.
If there is no flush anywhere in the code flow, the server never receives anything; with the current code it keeps waiting for data indefinitely, while all the data stays in the buffer on the client side waiting for a flush or for the buffer to fill.
3)
The client program hangs.
That assumption is not correct. The client program does not hang; it is running and filling the buffer, but nothing is sent to the server until the buffer is full.
Whether there is a sleep or not does not change this; what matters is the flush.
With no sleep, the buffer fills very quickly; with the sleep, it gains one write every 3 seconds. It simply takes time to fill the buffer.
If you uncomment the flush in the sleep method, it starts working again. Why?
Only in that the server starts receiving data earlier, because the client no longer waits for the buffer to fill before sending; it flushes the buffer after each write.
I'm trying to read an HTTP request from a BufferedReader that gets Socket.getInputStream() as input. However, when I use BufferedReader.lines().forEach(), it never terminates; it just gets stuck.
My code (simplified):
ServerSocket serverSocket = new ServerSocket(9090);
Socket newConnection = serverSocket.accept();
BufferedReader reader = new BufferedReader(new InputStreamReader(newConnection.getInputStream()));
reader.lines().forEach(s -> System.out.println(s));
You need to read more about the HTTP/1.1 protocol. Requests aren't terminated by end of stream. They are terminated by exhausting the byte count in the Content-Length header, or the cumulative chunks if chunked transfer encoding is in use. If they were terminated by end of stream, you could never send a response.
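A minimal sketch of reading one HTTP/1.1 request by its own framing, using the reader from the question inside a method that throws IOException (this assumes a Content-Length body, no chunked encoding, and a single-byte charset, since a Reader counts characters rather than bytes):
int contentLength = 0;
String line;
// Header lines (including the request line) end at the first empty line.
while ((line = reader.readLine()) != null && !line.isEmpty()) {
    System.out.println(line);
    if (line.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
    }
}
// Read exactly contentLength characters of body instead of waiting for end of stream.
char[] body = new char[contentLength];
int read = 0;
while (read < contentLength) {
    int n = reader.read(body, read, contentLength - read);
    if (n == -1) break;
    read += n;
}
System.out.println(new String(body, 0, read));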
Try creating your socket with the parameter-less constructor and use the connect() method with an address and a timeout parameter. This will prevent an endless freeze while connecting.
I have a scenario in which there is a server listening on a specified IP and port and a client which connects to that server.
Now I am reading the response from the server using the readLine() method:
String readme = bs.readLine();
Here bs is a BufferedReader object. I want to know: if, before reading the response, I write this line
socket.setSoTimeout(1000);
and no response comes within 1000 ms,
will the socket time out and get disconnected, or will it stay connected and give an empty string in readme?
Actually neither. A SocketTimeoutException is thrown.
From the docs:
setSoTimeout
public void setSoTimeout(int timeout)
throws SocketException
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds.
With this option set to a non-zero timeout, a read() call on the
InputStream associated with this Socket will block for only this
amount of time. If the timeout expires, a
java.net.SocketTimeoutException is raised, though the Socket is still
valid. The option must be enabled prior to entering the blocking
operation to have effect. The timeout must be > 0. A timeout of zero
is interpreted as an infinite timeout.
Parameters: timeout - the specified timeout, in milliseconds.
Throws: SocketException - if there is an error in the underlying protocol, such as a TCP error.
The socket will not disconnect. Instead, any reading method will throw a SocketTimeoutException that you may wish to catch in your program. The socket can still be used, but readme in such a case will not be defined:
String readme;
try
{
    readme = bs.readLine();
    // TODO do stuff with readme
}
catch (SocketTimeoutException e)
{
    // did not receive the line; readme is undefined, but the socket can still be used
    socket.close(); // disconnect, for example
}
It is assumed in the example that IOExceptions are caught elsewhere or thrown.
The docs explain this behaviour quite well: Socket.setSoTimeout(int)
I have the following socket server code that reads a stream from a connected Socket.
try
{
    ObjectInputStream in = new ObjectInputStream(client.getInputStream());
    int count = 10;
    while (count > 0)
    {
        String msg = in.readObject().toString(); // Gets stuck here if the client is lost.
        System.out.println("Client Says : " + msg);
        count--;
    }
    in.close();
    client.close();
}
catch (Exception ex)
{
    ex.printStackTrace();
}
And I have a client program that connects to this server and sends a string every second, 10 times; the server reads from the socket 10 times and prints each message. But if I kill the client program partway through, the server freezes instead of throwing an exception or anything.
How can I detect this condition, make the loop iterate indefinitely, and print whatever the client sends for as long as the connection is active and stable?
The problem is that the server side of the socket has no way of knowing that the client connection closed because the client code terminates without calling .close() on the client side of the socket, and therefore never sends the TCP FIN signal.
One possible way of fixing this would be to create a new Watcher thread that just periodically inspects the socket to see if it is still active. The problem with that approach is that the isConnected() on the Socket will not work for the same reason stated above so the only real way to inspect the connection is to attempt to write to it. However, this may cause random garbage to be sent to a potentially listening client.
Other options would be to implement some type of keep-alive protocol that the client should agree to (i.e., send keep-alive bits every so often so the Watcher has something to look for). You could also just move to the java.nio approach, which I believe does a better job at dealing with these conditions.
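As one hedged sketch of that keep-alive idea (reusing the client socket and ObjectInputStream from the code above; the 5-second value is arbitrary): have the client agree to send something at least every few seconds, and put a read timeout on the accepted socket so a dead or silent connection turns into a catchable exception instead of a permanent block:
client.setSoTimeout(5000);  // expect at least one object from the client every 5 seconds
try {
    while (true) {          // loop forever instead of 10 times, as the question asks
        String msg = in.readObject().toString();
        System.out.println("Client Says : " + msg);
    }
} catch (SocketTimeoutException e) {
    System.out.println("No data for 5 seconds - treating the client as gone.");
} catch (EOFException e) {
    System.out.println("Client closed the connection.");
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    try { client.close(); } catch (IOException ignored) {}
}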
This thread is old, but provides more detail: http://www.velocityreviews.com/forums/t541628-sockets-checking-for-dropped-connections-and-close.html.
I am trying to set the timeout of a connection on the client socket in Java. I have set the connection timeout to 2000 milliseconds, i.e.:
this.socket.connect(this.socketAdd, timeOut);
This I am trying on a web application. When a user makes a request, I pass values to the socket server, but if I don't receive any response in 5 seconds the socket should disconnect.
But in my case the whole request gets submitted once again. Can anyone please tell me where I am going wrong?
I want to cut the socket connection if I don't get any response in 5 seconds. How can I set that? Any sample code would help.
You can try the following:
Socket client = new Socket();
client.connect(new InetSocketAddress(hostip, port_num), connection_time_out);
To put the whole thing together:
Socket socket = new Socket();
// This limits the time allowed to establish a connection in the case
// that the connection is refused or server doesn't exist.
socket.connect(new InetSocketAddress(host, port), timeout);
// This stops the request from dragging on after connection succeeds.
socket.setSoTimeout(timeout);
What you show is a timeout for the connection: it will time out if the client cannot connect within a certain time.
Your question implies you want a timeout for when you are already connected and have sent a request: you want to time out if there is no response within a certain amount of time.
Presuming you mean the latter, then you need to timeout the socket.read() which can be done by setting SO_TIMEOUT with the Socket.setSoTimeout(int timeout) method. This will throw an exception if the read takes longer than the number of milliseconds specified. For example:
this.socket.setSoTimeout(timeOut);
An alternative method is to do the read in a thread, then wait on the thread with a timeout and close the socket if it times out.
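A rough sketch of that alternative (reusing the socket and a BufferedReader over its input stream; the 5-second value and names are illustrative, and java.util.concurrent imports are assumed):
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> pendingRead = executor.submit(reader::readLine);   // blocking read happens on another thread
try {
    String response = pendingRead.get(5, TimeUnit.SECONDS);       // wait at most 5 seconds for it
    System.out.println(response);
} catch (TimeoutException e) {
    try { socket.close(); } catch (IOException ignored) {}        // closing the socket unblocks the reader thread
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
} finally {
    executor.shutdown();
}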