How does a ServerSocket with a backlog greater than one work? - java

I'm new to socket programming and I have a problem understanding ServerSocket.
Assume we create a ServerSocket like this:
loadbalancerSocket = new ServerSocket(port, 20);
connection = loadbalancerSocket.accept();
and then, after some work, write something to its output stream:
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(connection.getOutputStream()));
writer.write("Hello!");
writer.flush();
writer.close();
My question is: how does the connection know which client should get the server's response? Our backlog is 20, and 20 clients can connect to the server socket at the same time (as I understood it).

In your example the first client to connect gets the response. The backlog parameter does not mean the number of clients that can be connected in parallel; it is the maximum number of pending connections waiting to be accepted.
The ServerSocket itself is not connected to any particular client. The connected socket is the Socket returned from accept(). If you want to handle multiple clients in parallel you have to call accept() multiple times and handle each connection separately, for example by creating a dedicated thread per connection.
accept() is typically called in a loop, and the newly connected Socket returned from accept() is typically passed to a handler responsible for that particular client, as in the sketch below.
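A minimal sketch of that pattern (the port number, the 20-slot backlog taken from the question, and the inline thread handler are illustrative, not a fixed recipe):
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class LoadBalancerServer {
    public static void main(String[] args) throws IOException {
        ServerSocket loadbalancerSocket = new ServerSocket(8080, 20); // up to 20 pending connections

        while (true) {
            // accept() blocks until a client connects and returns a Socket
            // tied to exactly that client.
            Socket connection = loadbalancerSocket.accept();

            // Handle each client on its own thread so accept() can keep running.
            new Thread(() -> {
                try (BufferedWriter writer = new BufferedWriter(
                        new OutputStreamWriter(connection.getOutputStream()))) {
                    writer.write("Hello!");
                    writer.flush();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try { connection.close(); } catch (IOException ignored) {}
                }
            }).start();
        }
    }
}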

Related

java.nio.channels.ServerSocketChannel not closing properly

I have a java.nio.channels.ServerSocketChannel which I initialised as follows:
while (true)
{
    ServerSocketChannel channel = ServerSocketChannel.open();
    InetSocketAddress serverSocket = new InetSocketAddress(host, port);
    channel.bind(serverSocket);
    SocketChannel ch = channel.accept();
    // Later on, when I have read off data from a client, I want to shut this
    // connection down and restart listening.
    channel.socket().close(); // Just trying to close the associated socket too,
                              // because earlier approaches failed
    channel.close();
}
When I send the first message from the client, it is successfully delivered to the server and the client program exits. Then the trouble begins. When I start the client again and try to connect to the same server address and port as the first time, I get a
java.net.BindException: Address already in use: connect
even though I closed the associated channel/socket.
I have been recreating the ServerSocketChannel and InetSocketAddress objects because my client instance has to shut down after a write, so I have to release that channel, and since a channel cannot be reused after it has been closed, I have to make a new object every time. My theory is that since the channel reference is reassigned each time, the orphaned object becomes GC meat, but since close() apparently is not working properly, the channel stays alive and hogs the port until the GC collects it.
Nevertheless, I tried keeping the initialisation of the ServerSocketChannel and InetSocketAddress objects before the while loop, but this did not help; the same exception occurred after the first write, as before.
ServerSocketChannel channel = ServerSocketChannel.open();
InetSocketAddress serverSocket = new InetSocketAddress(host, port);
channel.bind(serverSocket);
while (true)
{
    SocketChannel ch = channel.accept();
    // read from a client
}
For clarity, here is how I connect from the client:
SocketChannel ch = SocketChannel.open();
ch.bind(new InetSocketAddress("localhost", 8077));
InetSocketAddress address = new InetSocketAddress("localhost", 8079);
// the address and port of the server
System.out.print(ch.connect(address));
ByteBuffer buf = ByteBuffer.allocate(48);
buf.clear();
buf.put("Hellooooooooooooooooooooooooo".getBytes());
buf.flip();
while (buf.hasRemaining()) {
    ch.write(buf);
}
ch.close();
It looks like you're confusing client and server. Normally, the server starts only once and binds to a port; usually there's no need to close anything there, as the port gets freed when the program exits. Obviously, you must close the sockets obtained from ServerSocket.accept(), but that's another story.
I guess you've got confused by your variable names (just like it happened to me when I started with this). Try to name things according to their type; Hungarian notation was really helpful for me here.
The code I wrote for testing this is long, stupid, and boring. But it seems to work.
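For reference, a minimal sketch of the structure described above, assuming host and port are defined elsewhere (open and bind the ServerSocketChannel once, accept in a loop, and close only the per-client channel):
ServerSocketChannel listener = ServerSocketChannel.open();
listener.bind(new InetSocketAddress(host, port)); // bind exactly once

while (true)
{
    SocketChannel ch = listener.accept();   // one channel per connected client
    try
    {
        // ... read the client's data from ch ...
    }
    finally
    {
        ch.close();   // close only the per-client channel, never the listener
    }
}
// listener.close() only when the whole server shuts down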
It may also be helpful to do:
channel.setOption(StandardSocketOptions.SO_REUSEADDR, true);
Search for information about this option to learn more.
do ch.close() as well to GC the client socket.
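If you do experiment with SO_REUSEADDR, note that, as far as I know, the option only has an effect when it is set before the channel is bound, roughly like this:
ServerSocketChannel channel = ServerSocketChannel.open();
channel.setOption(StandardSocketOptions.SO_REUSEADDR, true); // must be set before bind()
channel.bind(new InetSocketAddress(host, port));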

ObjectInputStream's readObject() freezes after Client Socket connection is killed

I have the following socket server code that reads a stream from a connected Socket.
try
{
    ObjectInputStream in = new ObjectInputStream(client.getInputStream());
    int count = 10;
    while (count > 0)
    {
        String msg = in.readObject().toString(); // Gets stuck here if this client is lost.
        System.out.println("Client Says : " + msg);
        count--;
    }
    in.close();
    client.close();
}
catch (Exception ex)
{
    ex.printStackTrace();
}
I also have a client program that connects to this server and sends a string every second, 10 times; the server reads from the socket 10 times and prints each message. But if I kill the client program partway through, the server freezes instead of throwing an exception or anything.
How can I detect this freeze condition, and make this loop iterate indefinitely, printing whatever the client sends for as long as the connection is active and stable?
The problem is that the server side of the socket has no way of knowing that the client connection has gone away: the client code terminates without calling .close() on its side of the socket, and therefore never sends the TCP FIN signal.
One possible way of fixing this would be to create a watcher thread that periodically inspects the socket to see if it is still active. The problem with that approach is that isConnected() on the Socket will not work, for the same reason stated above, so the only real way to inspect the connection is to attempt to write to it. However, that may send random garbage to a client that is potentially still listening.
Other options are to implement some kind of keep-alive protocol that the client agrees to (i.e., the client sends keep-alive bits every so often so the watcher has something to look for), or to move to the java.nio approach, which I believe does a better job of dealing with these conditions.
This thread is old, but provides more detail: http://www.velocityreviews.com/forums/t541628-sockets-checking-for-dropped-connections-and-close.html.
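A rough sketch of the keep-alive idea combined with a read timeout; the 15-second value and the assumption that a live client sends something at least every few seconds are mine, not from the answer above (uses java.net.SocketTimeoutException):
client.setSoTimeout(15000); // reads now throw SocketTimeoutException after 15 s of silence

ObjectInputStream in = new ObjectInputStream(client.getInputStream());
try
{
    while (true)
    {
        String msg = in.readObject().toString();
        System.out.println("Client Says : " + msg);
    }
}
catch (SocketTimeoutException ex)
{
    System.out.println("No data for 15 s - assuming the client is gone.");
}
catch (Exception ex)
{
    ex.printStackTrace(); // EOFException, ClassNotFoundException, etc.
}
finally
{
    in.close();
    client.close();
}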

What will happen to a TCP/UDP serversocket when I switch wifi network?

What will happen to the ServerSocket in my app when I suddenly change the Wi-Fi network? I guess it will shut down, since my device will get a new IP, at least for TCP. Is a UDP MulticastSocket prone to this as well? And how do I end the previous server socket thread and start a new one when the network changes? One solution is using timeouts; another is using a flag that indicates whether the infinite loop should end, but since listening on a socket is a blocking call it will produce an exception/error anyway.
Any thoughts will be appreciated! :)
EDIT: sample of my server thread.
ServerSocket ss = new ServerSocket(4445);
while (true) {
    Socket socket = ss.accept();
    ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
    Object obj = in.readObject();
    Log.i("TAG", "Received: " + obj.toString());
    in.close();
    socket.close();
}
The TCP connection will break, so the client will have to connect again.
UDP will be OK provided your IP does not change after reconnection. Of course, if you are the one transmitting UDP, it is not going to make a difference for that machine.
You should get an exception in the TCP case, which you can handle.
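A rough sketch of handling that exception and re-opening the listener; the shouldRun flag, the handle() method, and the port are placeholders, not part of the answer above:
while (shouldRun) // flag you can clear when the app shuts down
{
    try (ServerSocket ss = new ServerSocket(4445))
    {
        while (true)
        {
            Socket socket = ss.accept(); // throws if the socket breaks or is closed elsewhere
            handle(socket);              // hypothetical per-connection handler
        }
    }
    catch (IOException e)
    {
        // Network changed or the socket was closed from another thread:
        // loop around and create a fresh ServerSocket.
        Log.i("TAG", "Listener died, restarting: " + e);
    }
}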
UDP sockets that are not bound to a specific address will remain open, as they are stateless. TCP listening sockets not bound to a specific address will remain open as well.
Connected TCP sockets may be severed (RST) or may just linger until a timeout hits.
It is a little-known fact that IP mandates that a device will, by default, accept packets directed to any address it has configured on any interface, no matter which interface the packet arrives on. If this were not so, routing would be broken. Packet filters can be used to drop packets whose addresses do not match the arriving interface.

Communication with multiple clients via Sockets

I have a server socket and 3-4 Android devices as clients. I'm using TCP/IP for communication. What is the best approach? Should I use a separate port for each client, or should I use the same port? If I use the same port, how should I identify which communication is addressed to which device?
No, you do not need several ports.
ServerSocket server = new ServerSocket(port);
while (true)
{
    Socket socket = server.accept();
    // do something with this socket - aka 1 client
    new SomeClientClass(socket);
    InputStream in = socket.getInputStream();
    byte[] buffer = new byte[1024];
    in.read(buffer);
    OutputStream out = socket.getOutputStream();
    // out will only write responses to its own client.
    // When this new SomeClientClass is created, control returns to this point
    // in the while loop and waits for the next client.
}
You can use one port. The client can send you its ID; if it can't, you can look at the client's IP address to work out which one it is.
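A rough illustration of the "client sends its ID" idea; the first-line-is-the-ID protocol and the clientsById map are made up for this sketch (imports omitted):
Map<String, Socket> clientsById = new ConcurrentHashMap<>();

ServerSocket server = new ServerSocket(port);
while (true)
{
    Socket socket = server.accept();
    new Thread(() -> {
        try
        {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String clientId = in.readLine();   // hypothetical protocol: first line is the ID
            clientsById.put(clientId, socket);
            // ... read and handle further messages for this client ...
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }).start();
}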
There are thousands of TCP client/server code examples on the web, but I would start with the sample code that comes with the JDK.

Waiting for ServerSocket accept() to put socket into "listen" mode

I need a simple client-server communication in order to implement unit-test.
My steps:
Create server thread
Wait for server thread to put server socket into listen mode ( serverSocket.accept() )
Create client
Make some request, verify responses
Basically, I have a problem with step #2: I can't find a way to be notified when the server socket enters the "listen" state. An asynchronous call to accept() would do in this case, but Java doesn't seem to support that (it appears to support only asynchronous channels, and those are incompatible with the accept() method according to the documentation).
Of course I could put in a simple sleep, but that is not really a solution for production code.
So, to summarize, I need to detect when the ServerSocket has been put into listen mode without using sleeps and/or polling.
The socket is put into listening state as soon as you construct the ServerSocket object, not when you call accept. As long as you create the client after the ServerSocket constructor has completed, you won't have a problem. Connections will be accepted and internally queued until accept gets called.
Here is some code to demonstrate:
ServerSocket serverSocket = new ServerSocket(12345); // already listening from this point on
Thread.sleep(10000);                                 // clients can connect during this gap
Socket socket = serverSocket.accept();               // returns a queued connection immediately
During that 10 second gap before accept is called, the OS netstat command will show the server socket in "LISTENING" state, and clients can connect to it. If a client connects during that 10 seconds, the connection is queued, and when the accept method is finally called it immediately returns the queued Socket object.
Why not send a signal just before calling accept()?
synchronized (lock) {
    connectionAccepted = true;
    lock.notify();          // wake up the waiting test/client thread
}
socket.accept();
To be sure that the socket is ready, wait for the signal and add a tiny sleep in your "client" code:
synchronized (lock) {
    while (!connectionAccepted) {
        lock.wait();        // we get past this point once notify() is called
    }
}
Thread.sleep(10);           // 10 ms, to give accept() a moment to start blocking
startTest();
You can do even better: create a loop that tries to "ping" the socket, with a tiny sleep between attempts. That way you start the test as soon as possible.
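A minimal sketch of that retry loop, assuming the server listens on localhost:12345 and startTest() is the test entry point from the question:
Socket probe = null;
while (probe == null)
{
    try
    {
        probe = new Socket("localhost", 12345); // succeeds once the server is listening
    }
    catch (ConnectException e)
    {
        Thread.sleep(10); // server not ready yet, try again shortly
    }
}
probe.close();   // note: the server will still see this probe as a queued connection
startTest();     // the server is definitely accepting connections now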
