I have a simple server that looks like this:
public static void main(String[] args) throws IOException {
    ServerSocket ss = new ServerSocket(4999);
    Socket s = ss.accept();
    InputStream is = s.getInputStream();
    while (true) {
        System.out.println(is.read());
    }
}
It accepts a single client socket, reads from it forever and prints out the number that was sent from the client socket.
I have a client like this:
public static void main(String[] args) throws IOException, InterruptedException {
    int id = Integer.valueOf(args[0]);
    Socket s = new Socket("localhost", 4999);
    OutputStream os = s.getOutputStream();
    while (true) {
        os.write(id);
        Thread.sleep(1000L);
        System.out.println("Sent");
    }
}
It connects to the server and sends the number it received as a command-line argument, forever.
I start the server.
I start a client like java -jar client.jar 123.
Then I start another client like java -jar client.jar 234.
No errors occur on either the server side or the client side.
Each client prints the Sent message every second; neither gets blocked.
The server only prints 123 until the end of time.
My questions:
What happens with the bytes written by the second client?
I would expect the second client to receive an error or get blocked or something, but nothing happens. Why?
Note: I know that this code is bad and I should handle clients in threads and call ServerSocket.accept() and all that jazz.
Update:
Based on the accepted answer, the solution is to create the server like new ServerSocket(4999, 1); where 1 is the size of the backlog. A value of 0 or less means whatever default is configured in the implementation is used.
By using 1 there can be only one connection in a "non-accepted" state. Any further client trying to connect gets connection refused!
The bytes written by the second client go into the socket buffers: the connection is established at the TCP level but hasn't been accepted yet, so the data sits in the server-side receive buffer for that pending connection and, once that fills, in the client Socket's send buffer. Eventually the send buffer will fill up and further writes will block. You could try playing with Socket.setSendBufferSize() to see what happens when it fills up.
A ServerSocket has a listen backlog for connections that the OS has completed but that haven't been accepted yet. The second client's connection is in that backlog, and if the server ever got around to accepting it (which it won't, with your code, but there is no way for the client to know that), the buffered data would be sent merrily along. You could try calling the constructor ServerSocket(int port, int backlog) with a backlog of 1 to see what happens to the client when the listen backlog fills up - it should get connection refused.
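A minimal sketch of that experiment (the class name and the shrunken buffer size are my own choices, the port is the one from the question, and setSendBufferSize() is only a hint the OS may adjust):

import java.io.OutputStream;
import java.net.Socket;

public class SendBufferProbe {
    public static void main(String[] args) throws Exception {
        // Connect to the question's server on port 4999 (or be the second,
        // never-accepted client sitting in the listen backlog).
        Socket s = new Socket("localhost", 4999);
        s.setSendBufferSize(8 * 1024); // a hint only; the OS may round the size
        OutputStream os = s.getOutputStream();
        int written = 0;
        while (true) {
            os.write(1); // blocks once receive buffer and send buffer are both full
            written++;
            System.out.println("written = " + written);
        }
    }
}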
Related
The server looks like this:
public class Server {
    public static void main(String[] args) {
        Server server = new Server();
        server.start(5006);
    }

    private void start(int port) {
        try (ServerSocket serverSocket = new ServerSocket(port);
             Socket clientSocket = serverSocket.accept();
             BufferedReader bufferedReader = new BufferedReader(
                     new InputStreamReader(clientSocket.getInputStream(), "UTF-8"))) {
            String line;
            while (true) {
                while ((line = bufferedReader.readLine()) != null) {
                    System.out.println("line = " + line);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
The client looks like this:
public class Client {
    public static void main(String[] args) {
        Client client = new Client();
        client.start("localhost", 5006);
    }

    private void start(String localhost, int port) {
        Random random = new Random();
        try (Socket socket = new Socket(localhost, port);
             BufferedWriter bufferedWriter = new BufferedWriter(
                     new OutputStreamWriter(socket.getOutputStream(), "UTF-8"))) {
            while (true) {
                int i = random.nextInt();
                bufferedWriter.write(String.valueOf(i));
                bufferedWriter.newLine();
                System.out.println("i = " + i);
                //sleep(bufferedWriter);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void sleep(BufferedWriter bufferedWriter) throws Exception {
        //bufferedWriter.flush(); // Has to be enabled if the wait below is to work
        TimeUnit.SECONDS.sleep(3);
    }
}
Context:
As is evident from the code, the server is single threaded. It accepts a connection from the client and goes into a busy loop processing data from the client socket. The server is intentionally handicapped.
Running one instance of the client program works as intended. The client sends random integers and they are printed on the server's console.
Questions:
1) While one instance of the client is still running, spin up another instance of the client program.
How is this instance of the client able to connect to the server when the server is still in the busy loop (while(true))?
The client goes so far as to fill up the buffered writer and then just hangs, waiting for the server to consume the stream.
2) In the client program, uncomment the 'sleep' method and re-run.
The client program hangs. The server does not receive anything. Why? I just want to write to the buffer every 3 seconds. It's nonsensical, but let's suppose it is a sane thing to do for argument's sake. I also put the sleep after we send the newline to the server, just to make sure the server prints its input.
If you uncomment the flush in the sleep method, it starts working again. Why?
How is this instance of the client able to connect to the server when the server is still in the busy loop (while(true))? The client goes so far as to fill up the buffered writer and then just hangs, waiting for the server to consume the stream.
Because of the listen backlog queue. The operating system completes the inbound connection and places it on the backlog queue. accept() blocks while the backlog queue is empty and then removes the first connection from it. The client's connect operation is complete before the corresponding accept() is called, and because the client now has a connection it can now write a certain amount of data to it.
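A self-contained sketch of that ordering (the class name and the ephemeral port are illustrative, not from the question):

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectBeforeAccept {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // no accept() called yet
        Socket client = new Socket();
        // The OS completes the handshake and parks the connection on the
        // listen backlog, so connect() returns before any accept() happens.
        client.connect(new InetSocketAddress("localhost", server.getLocalPort()));
        client.getOutputStream().write(42);        // buffered by the OS, not lost
        Socket accepted = server.accept();         // dequeues the backlog entry
        System.out.println(accepted.getInputStream().read()); // prints 42
    }
}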
2) In the client program, uncomment the 'sleep' method and re-run. The client program hangs.
It sleeps for three seconds. It doesn't 'hang'. Because of the much slower output, it takes much (much) longer for the BufferedWriter's buffer to fill and output to be flushed to the server.
The server does not receive anything. Why?
Because you haven't flushed the buffer.
I just want to write to the buffer every 3 seconds. It's nonsensical, but let's suppose it is a sane thing to do for argument's sake. I also put the sleep after we send the newline to the server, just to make sure the server prints its input.
If you uncomment the flush in the sleep method, it starts working again. Why?
See above.
1) Two clients:
A. Client 1 steps:
(As EJP said) the connection is established at the OS level and placed in the backlog queue.
It then waits for the ServerSocket to pick it up and start reading from the clientSocket InputStream.
Client 1 writes data to its socket OutputStream; the server's reader reads the data from the clientSocket InputStream.
The server stays busy because of its infinite loop.
B. Client 2 steps:
(As EJP said) the connection is established at the OS level and placed in the backlog queue.
With this code it will never be picked up by the ServerSocket; the server is indefinitely busy with the indefinitely open Socket to Client 1.
All data written by Client 2 stays in the buffer on Client 2's side.
Note: Client 2 will never get beyond that point with the current code. Even if the Client 1 process is killed and its connection to the server broken, the server will simply end; there is no second accept() call to process the request from Client 2.
2) flush
When the client writes bytes to its socket OutputStream, the bytes go into the buffer first, not over the network to the server.
flush forces the OutputStream to send whatever is in the buffer at that moment.
Without flush, data is sent to the server only when the buffer is full.
If you do not have a flush somewhere in the code flow, the server never receives anything; with the current code it keeps waiting for data indefinitely, while all the data stays in the buffer on the client side, waiting for a flush or for the buffer to fill.
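A runnable sketch of the client with the missing flush added (same host and port as the question; a plain counter stands in for Random just to keep it short):

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.util.concurrent.TimeUnit;

public class FlushingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 5006);
             BufferedWriter bufferedWriter = new BufferedWriter(
                     new OutputStreamWriter(socket.getOutputStream(), "UTF-8"))) {
            int i = 0;
            while (true) {
                bufferedWriter.write(String.valueOf(i++));
                bufferedWriter.newLine();
                bufferedWriter.flush();    // push the line to the socket now,
                                           // instead of waiting for the buffer to fill
                TimeUnit.SECONDS.sleep(3); // server now prints one line every 3 seconds
            }
        }
    }
}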
3)
The client program hangs.
That assumption is not correct. The client program does not hang. It is running and filling the buffer, but nothing is sent to the server until the buffer is full.
Sleep or no sleep, this does not change; only how long it takes does.
With no sleep the buffer fills very quickly; with the sleep it fills at one write per 3 seconds. It just needs time to fill the buffer.
If you uncomment the flush in the sleep method, it starts working again. Why?
Only that the server starts receiving data earlier, because the client no longer waits for the buffer to fill before sending; it flushes the buffer after each write.
I have a java.nio.channels.ServerSocketChannel which I initialised as follows:
while (true)
{
    ServerSocketChannel channel = ServerSocketChannel.open();
    InetSocketAddress serverSocket = new InetSocketAddress(host, port);
    channel.bind(serverSocket);
    SocketChannel ch = channel.accept();
    // Later on, when I have read off data from a client, I want to shut this
    // connection down and restart listening.
    channel.socket().close(); // Just trying to close the associated socket too,
                              // because earlier approaches failed
    channel.close();
}
When I send the first message from the client it is successfully delivered to the server and the client program exits. Then the trouble begins. When I start the client again and try to connect to the same server address and port as the first time, I get a
java.net.BindException: Address already in use: connect
exception, even though I closed the associated channel/socket.
I have been renewing the ServerSocketChannel and InetSocketAddress objects because my client instance has to shut down after a write, so I have to disengage that channel, and since I cannot reuse a channel after it has been closed, I have to make a new object every time. My theory is that since the channel reference is reassigned each time, the orphaned object becomes GC meat, but since the close() method apparently is not working properly, the channel is still alive, and until GC collects it my port will be hogged.
Nevertheless, I tried keeping the initialisation of the ServerSocketChannel and InetSocketAddress objects before the while loop, but this did not help, and the same exception occurred after the first write, as before:
ServerSocketChannel channel = ServerSocketChannel.open();
InetSocketAddress serverSocket = new InetSocketAddress(host, port);
channel.bind(serverSocket);
while (true)
{
    SocketChannel ch = channel.accept();
    // read from a client
}
For clarity, here is how I connect from the client:
SocketChannel ch = SocketChannel.open();
ch.bind(new InetSocketAddress("localhost", 8077));
InetSocketAddress address = new InetSocketAddress("localhost", 8079);
// the address and port of the server
System.out.print(ch.connect(address));
ByteBuffer buf = ByteBuffer.allocate(48);
buf.clear();
buf.put("Hellooooooooooooooooooooooooo".getBytes());
buf.flip();
while (buf.hasRemaining()) {
    ch.write(buf);
}
ch.close();
It looks like you're confusing client and server. Normally, the server starts only once and binds to a port. Usually there's no need to close anything there, as the port is freed when the program exits. Obviously, you must close the Sockets obtained by ServerSocket.accept(), but that's another story.
I guess you got confused by your variable names (just as happened to me when I started with this). Try to name all things according to their type; Hungarian notation was really helpful for me here.
The code I wrote for testing this is long, stupid, and boring. But it seems to work.
It may also be helpful to do:
channel.setOption(StandardSocketOptions.SO_REUSEADDR, true);
Search for information about this option to learn more.
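For example, applied to the server from the question (the option has to be set before bind(); the class name is illustrative):

import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ReusableServer {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel channel = ServerSocketChannel.open();
        // Set before bind(): lets the server rebind the port even while
        // old connections linger in TIME_WAIT.
        channel.setOption(StandardSocketOptions.SO_REUSEADDR, true);
        channel.bind(new InetSocketAddress("localhost", 8079)); // bind once
        while (true) {
            SocketChannel ch = channel.accept();
            // ... read from the client ...
            ch.close(); // close the accepted channel, not the listener
        }
    }
}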
Do ch.close() as well, so the accepted client socket is released too.
I have the following socket server code that reads a stream from a connected Socket.
try
{
    ObjectInputStream in = new ObjectInputStream(client.getInputStream());
    int count = 10;
    while (count > 0)
    {
        String msg = in.readObject().toString(); // Gets stuck here if the client is lost.
        System.out.println("Client Says : " + msg);
        count--;
    }
    in.close();
    client.close();
}
catch (Exception ex)
{
    ex.printStackTrace();
}
And I have a client program that connects to this server and sends some string every second, 10 times; the server reads from the socket 10 times and prints each message. But if I kill the client program partway through, the server freezes instead of throwing an exception or anything.
How can I detect this freeze condition, make the loop iterate indefinitely, and print whatever the client sends for as long as the connection is active and stable?
The problem is that the server side of the socket has no way of knowing that the client connection closed because the client code terminates without calling .close() on the client side of the socket, and therefore never sends the TCP FIN signal.
One possible way of fixing this would be to create a new Watcher thread that just periodically inspects the socket to see if it is still active. The problem with that approach is that isConnected() on the Socket will not work, for the same reason stated above, so the only real way to inspect the connection is to attempt to write to it. However, this may cause random garbage to be sent to a potentially listening client.
Other options would be to implement some type of keep-alive protocol that the client should agree to (i.e., send keep-alive bits every so often so the Watcher has something to look for). You could also just move to the java.nio approach, which I believe does a better job at dealing with these conditions.
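One simple variant of that keep-alive idea, sketched under the assumption that the client agrees to send something at least every few seconds: give the accepted socket a read timeout with Socket.setSoTimeout(), so a blocked read fails instead of freezing (the port and class name here are hypothetical):

import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class WatchfulServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(6000); // hypothetical port
             Socket client = server.accept()) {
            client.setSoTimeout(5_000); // reads now fail after 5 s of silence
            ObjectInputStream in = new ObjectInputStream(client.getInputStream());
            while (true) {
                try {
                    System.out.println("Client Says : " + in.readObject().toString());
                } catch (SocketTimeoutException e) {
                    System.out.println("No data for 5 s - assuming the client is gone");
                    break; // close the socket and go back to accepting
                }
            }
        }
    }
}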
This thread is old, but provides more detail: http://www.velocityreviews.com/forums/t541628-sockets-checking-for-dropped-connections-and-close.html.
I create a new thread that runs the following code:
public static void startServer() throws IOException {
    serverSocket = new ServerSocket(55000);
    Socket clientSocket = serverSocket.accept();
}
The above code is run in a thread. Now, in my main class, I successfully create a socket connection to the server, and I have checked its integrity, which is fine. Here is the code:
Socket testServerSocket = new Socket("127.0.0.1", 55000);
assertEquals("/127.0.0.1", testServerSocket.getInetAddress().toString());
assertEquals(55000, testServerSocket.getPort());
This runs perfectly. Then, again from my main, I kill the server connection, which closes the connection on the server side. However, the following assertion keeps failing:
assertEquals(false, testServerSocket.isBound());
It keeps returning true. Likewise, if I check the remote IP address for the connection, it doesn't return null, but rather '/127.0.0.1'. Any ideas why this might be happening? Many thanks for your help.
I'm not an expert on sockets (I know what they are, but have only ever used sockets with C on Linux, not in Java), but as the JavaDoc for java.net.Socket states, 'A socket is an endpoint for communication between two machines'. So while closing the server-side socket does destroy the connection between the two sockets (server- and client-side), your client-side socket is still bound, hence isBound() returns true. Maybe you meant to call isConnected() or isClosed()?
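A small sketch of those semantics, reusing the question's port (note that isConnected() also stays true after the remote side closes; the reliable peer-close signal is end-of-stream on a read):

import java.net.ServerSocket;
import java.net.Socket;

public class BoundButDead {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(55000);
        Socket testServerSocket = new Socket("127.0.0.1", 55000);
        serverSocket.accept().close(); // "kill" the server side of the connection
        System.out.println(testServerSocket.isBound());     // true: binding is local state
        System.out.println(testServerSocket.isClosed());    // false: we never closed our end
        System.out.println(testServerSocket.isConnected()); // true: it connected once
        // End-of-stream is how the peer's close actually shows up:
        System.out.println(testServerSocket.getInputStream().read()); // -1
    }
}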
I'm trying to play with sockets a bit. For that I wrote very simple "client" and "server" applications.
Client:
import java.net.*;

public class client {
    public static void main(String[] args) throws Exception {
        InetAddress localhost = InetAddress.getLocalHost();
        System.out.println("before");
        Socket clientSideSocket = null;
        try {
            clientSideSocket = new Socket(localhost, 12345, localhost, 54321);
        } catch (ConnectException e) {
            System.out.println("Connection Refused");
        }
        System.out.println("after");
        if (clientSideSocket != null) {
            clientSideSocket.close();
        }
    }
}
Server:
import java.net.*;

public class server {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(12345);
        while (true) {
            Socket serverSideSocket = listener.accept();
            System.out.println("A client-request is accepted.");
        }
    }
}
And I found a behavior that I cannot explain:
I start the server, then I start a client. The connection is successfully established (the client stops running and the server keeps running). Then I close the server and start it again within a second. After that I start a client and it prints "Connection Refused". It seems to me that the server "remembers" the old connection and does not want to open the second connection twice. But I do not understand how that is possible, because I killed the previous server and started a new one!
If I do not start the server immediately after the previous one was killed (I wait about 20 seconds), the server "forgets" the socket from the previous run and accepts the request from the client.
I start the server and then I start the client. The connection is established (the server writes: "A client-request is accepted"). Then I wait a minute and start the client again. And the server (which was running the whole time) accepts the request again! Why? The server should not accept a request from the same client IP and client port, but it does!
When you close the server, the OS will keep the socket alive for a while so it can tell the client the connection has been closed. This involves timeouts and retransmissions which can take some time. If you want your server to be able to immediately rebind the same socket, call setReuseAddress(true) on it, though it might be the client socket that's in a TIME_WAIT state.
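For example, the question's server with that call added (the ServerSocket is created unbound, because the option must be set before binding):

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class server {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(); // created unbound on purpose
        listener.setReuseAddress(true);             // must be set before bind()
        listener.bind(new InetSocketAddress(12345)); // TIME_WAIT no longer blocks restart
        while (true) {
            Socket serverSideSocket = listener.accept();
            System.out.println("A client-request is accepted.");
        }
    }
}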
After that wait, the socket is no longer in the TIME_WAIT state and can be reused again by any program.
Your client code just connects, closes the socket and then exits. As far as the server/OS TCP stack is concerned, these are different connections - it's fine to reuse the source port as long as any prior connection has been torn down. (Note that the OS might not tear down all of the housekeeping of the connection immediately after you call .close() or your program exits; there's some time delay involved so it can be sure all packets have been sent/received.)
It is likely the operating system has not yet shut down the sockets. Try the netstat command (it should work on Windows or Unix/Linux). If you run it immediately after you close the client or server, you should still see the socket in "TIME_WAIT", "CLOSE_WAIT" or a similar state. You won't be able to reuse those ports until they are fully closed.
Per Question #3: many clients can connect to a server attached to a single port. Apache runs on port 80, but that doesn't mean only one person can view your website at a time. Also, you are closing your client socket before you're opening a new one.