Multithread server program in Java

I am trying to write a multithreaded program in Java where a server listens for connections from clients and spawns a thread to accommodate each client. I have:
while (true) {
    Socket s = server.accept();
    ClientHandler ch = new ClientHandler(s);
    Thread t = new Thread(ch);
    t.start();
}
My question is: whenever it accepts a connection in
Socket s = server.accept();
and starts executing the following lines of code to create the thread and so on, what happens to a connection request from another client during that time? Is it queued somehow, to be served on the next iteration of while(true), or will it be rejected?
thanks,
Nikos

After accept() returns, the TCP handshake is complete and you have a connected client socket (s in your code). Until the next call to accept(), the OS queues pending connection requests.
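If it helps, here is a minimal sketch making that queue explicit (the port, the backlog value and the ClientHandler from the question are illustrative, not prescriptive); while the loop body runs, the OS keeps up to backlog completed connections waiting for the next accept():

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        int port = 9000;   // illustrative port
        int backlog = 50;  // 50 is also the JDK default when the argument is omitted
        try (ServerSocket server = new ServerSocket(port, backlog)) {
            while (true) {
                // Blocks until a queued, already-handshaken connection is available.
                Socket s = server.accept();
                // While this thread is busy below, further connections pile up
                // in the OS backlog queue until the next accept() call.
                new Thread(new ClientHandler(s)).start();
            }
        }
    }
}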
You might want to check out a tutorial like this one, for example.

I wrote a tiny HTTP server in Java, which you can find on GitHub. It might be a good real-world example of Sockets and multithreading for you to look at.

As was answered already: yes, it is queued, and no, it is not rejected.
I think that anyone reading this question should first know that when you instantiate a socket:
server = new ServerSocket(mPort, mNusers);
Java is already implementing a network socket, which has clearly defined behavior. Connections will be rejected, however, after the configured limit is reached.
Also, the code posted in the question accepts multiple connections but loses the reference to the previous one. This may be a "duh", but just in case someone is copy-pasting, you should do something to store all the created sockets or handlers. Perhaps:
ClientHandler[] ch = new ClientHandler[mNusers];
int chIndex = 0;
while (true) {
    Socket s = server.accept();
    ch[chIndex] = new ClientHandler(s);
    Thread t = new Thread(ch[chIndex]); // pass the handler, not the whole array
    t.start();
    chIndex++;
}
An array may not be the best option, but I want to point out that with sockets you should know the limit on the number of connections you will allocate.
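If it helps, a sketch of the same idea with a growable, thread-safe collection instead of a fixed array (names are illustrative; CopyOnWriteArrayList is just one safe choice):

import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

List<ClientHandler> handlers = new CopyOnWriteArrayList<>();
while (true) {
    Socket s = server.accept();
    ClientHandler handler = new ClientHandler(s);
    handlers.add(handler); // keep the reference for later shutdown/broadcast
    new Thread(handler).start();
}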

ugh :)
There is a so-called SYN queue, which holds up to the requested number of not-yet-established connections (there is a per-call and a system-wide limit; I am not sure whether it is also limited per user).
When "listening" on a ServerSocket, you specify this as the size of the "backlog" the socket should keep (one of the answers calls it "Nusers", but that is what it is). Pending connections are stored in this backlog, and once it fills up, all further ones (should) get a ConnectionRefused.
So:
a) increase the backlog if you are too slow accepting (server.accept()) connections, or
b) accept the connections faster, either by
b.1) using a ThreadPool (and be happy moving the problem to the OS's context switches), or
b.2) using NIO, and with it the ability to handle socket states within single threads/CPUs (as long as the internal data throughput is better than that of the network, this is the more performant option); a minimal sketch follows below.
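To make b.2 concrete, here is a minimal single-threaded NIO sketch (the port and the echo behaviour are my own choices, not part of the question); one thread multiplexes accept and read readiness over a Selector:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000)); // illustrative port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n < 0) { client.close(); continue; } // peer closed
                    buf.flip();
                    client.write(buf); // echo back (may be a partial write; fine for a sketch)
                }
            }
        }
    }
}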
have fun


Is it possible to create multiple (SSL) ServerSocket on the same port?

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerThread implements Runnable {
    private static final int port = 10000;

    @Override
    public void run() {
        try (ServerSocket serverSocket = new ServerSocket(port)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                // handle the client request in a separate thread
                ClientThread clientThread = new ClientThread(clientSocket);
            }
        } catch (IOException e) {
            e.printStackTrace(); // run() cannot rethrow the checked IOException
        }
    }
}
Will this work If I have let's say 10 different threads running ServerThread.run()? Or should I use the same ServerSocket object for all threads?
The docs say:
The constructor for ServerSocket throws an exception if it can't listen on the specified port (for example, the port is already being used)
You may be wondering why I want to do this in the first place and not simply have a single thread running ServerSocket.accept(). Well, my assumption is (correct me if I'm wrong) that the accept() method may take some time to finish establishing a connection, especially if the ServerSocket is SSL (because of handshaking). So if two clients want to connect at the same time, one has to wait for the other. This would be very bad for a high-traffic server.
Update: It seems that accept() will return as soon as a connection from the queue is established. This means that if there's a queue of clients waiting to connect, the server thread can handle the requests as fast as possible and only one thread is needed (apart from the time it takes to create and start a new thread for each request, which is negligible when using a thread pool).
The ServerSocket also has a parameter called "backlog", where you can set the maximum number of connections in the queue. According to the book "Fundamental Networking in Java", §3.3.3:
TCP itself can get ahead of a TCP server application in accepting connections. It maintains a ‘backlog queue’ of connections to a listening socket which TCP itself has completed but which have not yet been accepted by the application. This queue exists between the underlying TCP implementation and the server process which created the listening socket. The purpose of pre-completing connections is to speed up the connection phase, but the queue is limited in length so as not to pre-form too many connections to servers which are not accepting them at the same rate for any reason. When an incoming connection request is received and the backlog queue is not full, TCP completes the connection protocol and adds the connection to the backlog queue. At this point, the client application is fully connected, but the server application has not yet received the connection as a result value of ServerSocket.accept. When it does so, the entry is removed from the queue.
I'm still not sure though in the case of SSL, if the handshaking is also done in parallel by ServerSocket.accept() for simultaneous connections.
Update 2: The ServerSocket.accept() method itself doesn't do any real networking at all. It returns as soon as the operating system has established a new TCP connection. The operating system itself holds a queue of waiting TCP connections, which can be controlled by the "backlog" parameter in the ServerSocket constructor:
ServerSocket serverSocket = new ServerSocket(port, 50);
//this will create a server socket with a maximum of 50 connections in the queue
SSL handshaking is done after the client calls Socket.connect(). So one thread for ServerSocket.accept() is always enough.
Here are a few thoughts regarding your problem:
You can't listen() on the same IP+port with several ServerSockets. If you could, to which one of the sockets would the OS deliver the SYN packet?*
TCP indeed maintains a backlog of pre-accepted connections so a call to accept() will return (almost) immediately the first (oldest) socket in the backlog queue. It does so by sending the SYN-ACK packet automatically in reply to a SYN sent by the client, and waits for the reply-ACK (the 3-way handshake).
But, as #zero298 suggests, accepting connections as fast as possible isn't usually the problem. The problem is the load incurred by processing communication with all the sockets you'll have accepted, which may very well bring your server to its knees (it's effectively a DoS attack). In fact, the backlog parameter is usually there so that connections that wait too long in the backlog queue to be accept()ed are dropped by TCP before ever reaching your application.
Instead of creating one thread per client socket, I would suggest you use an ExecutorService thread pool running some maximum number of threads, each handling communication with one client. That allows for graceful degradation of system resources, instead of creating millions of threads, which would in turn cause thread starvation, memory issues, file descriptor limits, ... Coupled with a carefully chosen backlog value, you'll be able to get the maximum throughput your server can offer without crashing it. And if you're worried about DoS on SSL, the very first thing the run() method of your client thread should do is call startHandshake() on the newly connected socket.
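A minimal sketch of that pattern (pool size, port and backlog are illustrative values, not recommendations):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(100); // caps concurrent clients
        try (ServerSocket server = new ServerSocket(9000, 50)) {
            while (true) {
                Socket client = server.accept(); // the only work done on this thread
                pool.execute(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // per-client I/O goes here; for SSL, startHandshake() would come first
        } catch (IOException e) {
            // log and drop this client
        }
    }
}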
Regarding the SSL part, TCP itself cannot do any SSL pre-accept, as it would need to perform encryption/decryption, talk to a keystore, etc., which are well beyond its specification. Note that you should also use an SSLServerSocket in that case.
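A hedged sketch of that arrangement (it assumes a keystore has been configured through the usual javax.net.ssl system properties; the port is illustrative): accept on one thread, and let each handler thread pay the handshake cost:

import java.io.IOException;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

public class SslAcceptSketch {
    public static void main(String[] args) throws IOException {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(9443)) {
            while (true) {
                SSLSocket client = (SSLSocket) server.accept(); // TCP complete, no TLS yet
                new Thread(() -> {
                    try {
                        client.startHandshake(); // TLS handshake off the accept thread
                        // ... application I/O ...
                    } catch (IOException e) {
                        try { client.close(); } catch (IOException ignored) {}
                    }
                }).start();
            }
        }
    }
}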
To get around the use-case you gave (clients willingly delaying the handshake to DoS your server), you'll be interested in reading an Oracle forum post about it, where EJP (again) answers:
The backlog queue is for connections that have been completed by the TCP stack but not yet accepted by the application. Nothing to do with SSL yet. JSSE doesn't do any negotiation until you do some I/O on the accepted socket, or call startHandshake() on it, both of which you would do in the thread dealing with the connection. I don't see how you can make a DOS vulnerability out of that, at least not an SSL-specific one. If you are experiencing DOS conditions, most probably you are doing I/O in the accept() thread that should be done in the connection-handling thread.
*: Though Linux >= 3.9 does some kind of load-balancing, it is for UDP only (so not SSLServerSocket) and requires the option SO_REUSEPORT, which is not available on all platforms anyway.

Checking if there is an incoming socket

I'm creating a server for my game, but this question is more Java related. I want to check whether there is an incoming socket, but I still want to run the game at the same time, because the server is hosted by a user rather than run separately by an external program. So I want to check whether someone is connecting using a socket. What I have now is:
public void updateConnection() throws IOException {
    Socket connection = server.accept();
    System.out.println("Ape is connecting");
    ServerClient client = new ServerClient(connection);
    clientsWaiting.add(client);
}
I want this method to be called every frame rather than checking continuously, if that's possible. If it isn't possible, what else should I use to create my server and check each frame whether someone is connecting?
Your best bet would be to have your game check for incoming socket connections in a separate thread. You could create a Runnable that just listens for connections continuously.
When you check for an incoming connection with Socket connection = server.accept();, what actually happens is that the call blocks that particular thread until it receives a connection. This will stop your code from executing any further. The only way around this is parallelization: you can handle all of your networking tasks on one thread whilst handling your game logic and rendering on another.
Be aware, though, that writing code to be run on multiple threads has many pitfalls. Java provides some tools to minimize the potential problems, but it is up to you, the programmer, to ensure that your code is thread safe. Going into detail about the many concerns around parallel programming is beyond the scope of this question. I suggest that you do a bit of research on it, because bugs that arise from this type of programming are sometimes hard to reproduce and track down.
Now that I have given you this disclaimer, to use Runnable to accomplish what you are trying to do, you could do something similar to this:
Runnable networkListener = () -> {
    try (ServerSocket server = new ServerSocket(9000)) { // declare and instantiate your server here
        while (true) {
            Socket connection = server.accept();
            // whatever you would like to do with the connection goes here
        }
    } catch (IOException e) {
        e.printStackTrace(); // accept() can throw, and a lambda cannot rethrow a checked exception
    }
};
Thread networkThread = new Thread(networkListener);
networkThread.start();
You would place that before your game loop, and it would spawn a thread that listens for connections without interrupting your game. There are a lot of good idioms out there on how to handle Sockets, such as using ThreadPools to track them or spawning a new Thread each time a new connection is made, so I suggest you do some research on that as well.
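One way to get the per-frame behaviour asked about here (class and method names are my own): let the listener thread do nothing but enqueue accepted sockets, and have the game loop drain the queue once per frame without ever blocking:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConnectionGate {
    // Thread-safe hand-off between the network thread and the game loop.
    private final Queue<Socket> pending = new ConcurrentLinkedQueue<>();

    // Call once at startup: blocks on accept() in its own thread, only enqueues.
    public void startListening(int port) {
        Thread t = new Thread(() -> {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    pending.add(server.accept());
                }
            } catch (IOException e) {
                e.printStackTrace(); // listener died; a real game would surface this
            }
        });
        t.setDaemon(true); // don't keep the game alive just for the listener
        t.start();
    }

    // Call once per frame from the game loop: never blocks.
    public List<Socket> drainNewConnections() {
        List<Socket> fresh = new ArrayList<>();
        Socket s;
        while ((s = pending.poll()) != null) {
            fresh.add(s);
        }
        return fresh;
    }
}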
Good luck to you, this isn't an easy road you are about to venture down.
One more addition: when you establish a TCP connection you are not dealing with frames (UDP is a datagram-based protocol); you are dealing with a stream of bytes.
A lower-level example reading the stream into a ByteArrayOutputStream:
InputStream inputStream = socket.getInputStream();
// read from the stream
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] content = new byte[2048];
int bytesRead;
while ((bytesRead = inputStream.read(content)) != -1) {
    baos.write(content, 0, bytesRead);
}
So when the client finishes writing but the stream is still open, your read method blocks. If you expect certain data from the client, you read it and then call your print method, or notify however else you want, etc.

What is happening when we create a new Thread after the socket.accept() method?

I am trying to write a multi-client socket system which communicates via strings sent by the client, which will trigger an event according to its content.
There is a lot of material on how to do it, but I cannot grasp the logic behind it.
In this example, and in this one, there is a while(true) piece of code which has two main instructions:
socket.accept();
Thread t = new Thread(runnable);
I cannot understand how this works:
Does the while(true) loop continuously pass over those instructions, but create a Thread only when the accept() method fires?
Does the new thread have a dedicated port? Isn't the socket communication one on one?
How does the software keep track of the spawned socket threads, and does it actually matter?
How do I send a reply to the thread that just wrote me?
Maybe it's my lack of google skills, but I cannot find a good tutorial to do this stuff: help?
Does the while(true) loop continuously pass over those instructions, but create a Thread only when the accept() method fires?
Execution stops on the accept() method until someone tries to connect. You can see this using the debug mode of your IDE.
Does the new thread have a dedicated port? Isn't the socket communication one on one?
No; you can have many connections on the same port.
How does the software keep track of the spawned socket threads, and does it actually matter?
Don't bother about this for now.
How do I send a reply to the thread that just wrote me?
When someone tries to connect, you receive an object with which to respond to this user; check the documentation.
•Does the while(true) loop continuously pass over those instructions, but create a Thread only when the accept() method fires?
A new Thread is created to listen for data coming in through the Socket (see Socket.getInputStream())
•Does the new thread have a dedicated port? Isn't the socket communication one on one?
Threads do not have ports. But the Socket has a dedicated address for communicating with this client
•How does the software keep track of the spawned socket threads, and does it actually matter?
That depends on the software. But most of the time, you would keep a record of connected Sockets in some sort of Collection: a List perhaps, or a Map from userID to Socket if clients are logging in (a sketch follows at the end of this answer).
•How do I send a reply to the thread that just wrote me?
In a simple sense, it's as simple as
ServerSocket ss = ...;
Socket s = ss.accept();
PrintStream ps = new PrintStream(s.getOutputStream());
ps.println("Hello World");
You need to make sure that your PrintStream doesn't then get garbage collected, as this will close the Stream / Socket.
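For the record-keeping part, a small sketch (the class, the userId scheme and the names are all hypothetical, just to illustrate the Map idea from above):

import java.io.IOException;
import java.io.PrintStream;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientRegistry {
    // userId -> socket; fill this in after each accept() once the client identifies itself
    private final Map<String, Socket> clients = new ConcurrentHashMap<>();

    public void register(String userId, Socket s) {
        clients.put(userId, s);
    }

    // Reply to a specific client over its existing socket.
    public void send(String userId, String message) throws IOException {
        Socket s = clients.get(userId);
        if (s != null) {
            PrintStream ps = new PrintStream(s.getOutputStream(), true); // autoflush
            ps.println(message); // don't close ps here: that would close the socket
        }
    }
}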

Hot-swap ServerSocket (Java)

I am trying to write code to hot-swap sockets in Java.
Here is the gist of the code I am currently using:
// initial server initialization
ServerSocket server1 = new ServerSocket(port);
// ... code to accept and process requests
// new server initialization
ServerSocket server2 = new ServerSocket();
// attempt at hotswap
server1.close();
server2.bind(port);
// .. more code
The code works as above but I am wondering about the possibility of dropped messages between the time the first socket is closed and the second one is opened.
Two questions:
Is there a way to ensure that no connections are dropped?
If there is a way to ensure that no connections are dropped does it still work if the instances of the ServerSocket class are in different virtual machines?
Thanks in advance.
The closing of a ServerSocket means that server1's handler no longer handles new incoming connections; these are taken care of by server2. So far so good. You can garbage-collect server1 when it no longer has any connected Sockets left.
There will be a (shorter or longer) period of time where the port is marked as "not open" in the OS networking driver after the first ServerSocket is closed and the second one is opened (since the OS cannot know our intention to start a new socket directly after closing the first one).
An incoming TCP request during this time will get a message back from the OS that the port is not open, and will likely not retry, since it got a confirmation that the port was not open.
A possible workaround
Use the Java NIO constructs; see ServerSocketChannel, and be sure to check out the library http://netty.io/, which has several constructs for this.
Make sure that you can set the handler for incoming requests dynamically (and thread-safely :) ); this will make it possible to seamlessly change the handling of incoming requests, though you will not be able to exchange the ServerSocket itself (but that's likely not exactly what you want, either).
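A hedged sketch of that idea with plain blocking sockets (the Consumer-based handler type is my own choice): keep one long-lived ServerSocket, so no connection falls into a close/bind gap, and swap the handler atomically:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class HotSwappableServer {
    private final AtomicReference<Consumer<Socket>> handler;

    public HotSwappableServer(Consumer<Socket> initial) {
        this.handler = new AtomicReference<>(initial);
    }

    public void setHandler(Consumer<Socket> next) {
        handler.set(next); // takes effect for the next accepted connection
    }

    public void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket s = server.accept();
                Consumer<Socket> h = handler.get(); // read the current handler once
                new Thread(() -> h.accept(s)).start();
            }
        }
    }
}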

Why is it impossible, without attempting I/O, to detect that TCP socket was gracefully closed by peer?

As a follow up to a recent question, I wonder why it is impossible in Java, without attempting reading/writing on a TCP socket, to detect that the socket has been gracefully closed by the peer? This seems to be the case regardless of whether one uses the pre-NIO Socket or the NIO SocketChannel.
When a peer gracefully closes a TCP connection, the TCP stacks on both sides of the connection know about the fact. The server-side (the one that initiates the shutdown) ends up in state FIN_WAIT2, whereas the client-side (the one that does not explicitly respond to the shutdown) ends up in state CLOSE_WAIT. Why isn't there a method in Socket or SocketChannel that can query the TCP stack to see whether the underlying TCP connection has been terminated? Is it that the TCP stack doesn't provide such status information? Or is it a design decision to avoid a costly call into the kernel?
With the help of the users who have already posted some answers to this question, I think I see where the issue might be coming from. The side that doesn't explicitly close the connection ends up in TCP state CLOSE_WAIT, meaning that the connection is in the process of shutting down and is waiting for this side to issue its own CLOSE operation. I suppose it's fair enough that isConnected returns true and isClosed returns false, but why isn't there something like isClosing?
Below are the test classes that use pre-NIO sockets. But identical results are obtained using NIO.
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        final ServerSocket ss = new ServerSocket(12345);
        final Socket cs = ss.accept();
        System.out.println("Accepted connection");
        Thread.sleep(5000);
        cs.close();
        System.out.println("Closed connection");
        ss.close();
        Thread.sleep(100000);
    }
}
import java.net.Socket;

public class MyClient {
    public static void main(String[] args) throws Exception {
        final Socket s = new Socket("localhost", 12345);
        for (int i = 0; i < 10; i++) {
            System.out.println("connected: " + s.isConnected() +
                               ", closed: " + s.isClosed());
            Thread.sleep(1000);
        }
        Thread.sleep(100000);
    }
}
When the test client connects to the test server the output remains unchanged even after the server initiates the shutdown of the connection:
connected: true, closed: false
connected: true, closed: false
...
I have been using Sockets often, mostly with Selectors, and though I am not a network/OSI expert, from my understanding, calling shutdownOutput() on a Socket actually sends something on the network (a FIN) that wakes up my Selector on the other side (the same behaviour as in the C language). There you have detection: actually detecting a read operation that will fail when you try it.
In the code you give, closing the socket will shut down both input and output streams, without the possibility of reading the data that might still be available, therefore losing it. The Java Socket.close() method performs a "graceful" disconnection (the opposite of what I initially thought) in that the data left in the output stream will be sent, followed by a FIN to signal its close. The FIN will be ACK'd by the other side, as any regular packet would be [1].
If you need to wait for the other side to close its socket, you need to wait for its FIN. And to achieve that, you have to detect Socket.getInputStream().read() < 0, which means you should not close your socket, as doing so would close its InputStream.
From what I did in C, and now in Java, achieving such a synchronized close should be done like this:
1. Shut down the socket output (this sends a FIN to the other end; it is the last thing this socket will ever send). The input is still open, so you can read() and detect the remote close().
2. Read the socket's InputStream until we receive the reply-FIN from the other end (as it will detect the FIN, it will go through the same graceful disconnection process). This is important on some OSes, as they don't actually close the socket as long as one of its buffers still contains data. These are called "ghost" sockets and use up descriptor numbers in the OS (that might not be an issue anymore with modern OSes).
3. Close the socket (by either calling Socket.close() or closing its InputStream or OutputStream).
As shown in the following Java snippet:
public void synchronizedClose(Socket sok) throws IOException {
    InputStream is = sok.getInputStream();
    sok.shutdownOutput();        // sends the FIN on the network
    while (is.read() != -1) ;    // read() returns -1 when the FIN is reached (not on a zero byte)
    sok.close();                 // or is.close(); now we can close the Socket
}
Of course both sides have to use the same way of closing, or the sending part might keep sending enough data to keep the while loop busy (e.g. if the sending part only ever sends data and never reads in order to detect connection termination; clumsy, but you might not have control over that).
As #WarrenDew pointed out in his comment, discarding the data in the program (application layer) induces a non-graceful disconnection at application layer: though all data were received at TCP layer (the while loop), they are discarded.
1: From "Fundamental Networking in Java": see fig. 3.3 p.45, and the whole §3.7, pp 43-48
I think this is more of a socket programming question. Java is just following the socket programming tradition.
From Wikipedia:
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
Once the handshake is done, TCP does not make any distinction between two end points (client and server). The term "client" and "server" is mostly for convenience. So, the "server" could be sending data and "client" could be sending some other data simultaneously to each other.
The term "Close" is also misleading. There's only FIN declaration, which means "I am not going to send you any more stuff." But this does not mean that there are no packets in flight, or the other has no more to say. If you implement snail mail as the data link layer, or if your packet traveled different routes, it's possible that the receiver receives packets in wrong order. TCP knows how to fix this for you.
Also, you, as a program, may not have time to keep checking what's in the buffer. So, at your convenience, you can check what's in the buffer. All in all, the current socket implementation is not so bad. If there actually were an isPeerClosed(), that's an extra call you would have to make every time you want to call read.
The underlying sockets API doesn't have such a notification.
The sending TCP stack won't send the FIN bit until the last packet anyway, so there could be a lot of data buffered from when the sending application logically closed its socket before that data is even sent. Likewise, data that's buffered because the network is quicker than the receiving application (I don't know, maybe you're relaying it over a slower connection) could be significant to the receiver and you wouldn't want the receiving application to discard it just because the FIN bit has been received by the stack.
Since none of the answers so far fully answer the question, I'm summarizing my current understanding of the issue.
When a TCP connection is established and one peer calls close() or shutdownOutput() on its socket, the socket on the other side of the connection transitions into CLOSE_WAIT state. In principle, it's possible to find out from the TCP stack whether a socket is in CLOSE_WAIT state without calling read/recv (e.g., getsockopt() on Linux: http://www.developerweb.net/forum/showthread.php?t=4395), but that's not portable.
Java's Socket class seems designed to provide an abstraction comparable to a BSD TCP socket, probably because this is the level of abstraction to which people are accustomed when programming TCP/IP applications. BSD sockets are a generalization supporting sockets other than just INET (e.g., TCP) ones, so they don't provide a portable way of finding out the TCP state of a socket.
There's no method like isCloseWait() because people used to programming TCP applications at the level of abstraction offered by BSD sockets don't expect Java to provide any extra methods.
Detecting whether the remote side of a (TCP) socket connection has closed can be done with the java.net.Socket.sendUrgentData(int) method, and catching the IOException it throws if the remote side is down. This has been tested between Java-Java, and Java-C.
This avoids the problem of designing the communication protocol to use some sort of pinging mechanism. By disabling OOBInline on a socket (setOOBInline(false)), any OOB data received is silently discarded, but OOB data can still be sent. If the remote side is closed, a connection reset is attempted, fails, and causes some IOException to be thrown.
If you actually use OOB data in your protocol, then your mileage may vary.
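A minimal sketch of that probe (the method name is mine; it assumes the protocol tolerates, or the peer discards, the OOB byte):

import java.io.IOException;
import java.net.Socket;

public final class PeerProbe {
    // Returns false if sending one OOB byte fails, i.e. the connection is gone.
    public static boolean isPeerAlive(Socket s) {
        try {
            s.sendUrgentData(0); // discarded by peers that have OOBInline disabled
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}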
The Java IO stack definitely sends a FIN when it gets torn down abruptly. It just makes no sense that you can't detect this, because most clients only send the FIN when they are shutting down the connection.
...another reason I am really beginning to hate the NIO Java classes. It seems like everything is a little half-assed.
It's an interesting topic. I've dug through the Java code just now to check. From my findings, there are two distinct problems: the first is the TCP RFC itself, which allows a remotely closed socket to transmit data in half-duplex, so a remotely closed socket is still half open. As per the RFC, an RST doesn't close the connection; you need to send an explicit ABORT command. So Java allows sending data through a half-closed socket.
(There are two methods for reading the close status at both of the endpoints.)
The other problem is that the implementations say this behavior is optional. As Java strives to be portable, they implemented the best common feature. Maintaining a map of (OS, implementation of half-duplex) would have been a problem, I guess.
This is a flaw of Java's (and all the others' I've looked at) OO socket classes: no access to the select() system call.
Correct answer in C:
#include <sys/select.h>

struct timeval tp;
fd_set in;
fd_set out;
fd_set err;

FD_ZERO(&in);
FD_ZERO(&out);
FD_ZERO(&err);
FD_SET(socket_handle, &err);

tp.tv_sec = 0;  /* or however long you want to wait */
tp.tv_usec = 0;

select(socket_handle + 1, &in, &out, &err, &tp);
if (FD_ISSET(socket_handle, &err)) {
    /* handle closed socket */
}
Here is a lame workaround: use SSL ;) SSL does a close handshake on teardown, so you are notified of the socket being closed (most implementations seem to do a proper handshake teardown, that is).
The reason for this behaviour (which is not Java-specific) is that you don't get any status information from the TCP stack. After all, a socket is just another file handle, and you can't find out whether there's actual data to read from it without actually trying (select(2) won't help there; it only signals that you can try without blocking).
For more information see the Unix socket FAQ.
Only writes require that packets be exchanged, which is what allows the loss of the connection to be determined. A common workaround is to use the TCP keep-alive option.
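For what it's worth, enabling it from Java is a one-liner (assuming an already connected Socket s; probe timing is controlled by the OS, not by Java):

s.setKeepAlive(true); // throws SocketException if the socket is already closed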
When it comes to dealing with half-open Java sockets, one might want to have a look at
isInputShutdown() and isOutputShutdown().
