I am trying to write code to hot-swap sockets in Java.
Here is the gist of the code I am currently using:
// initial server initialization
ServerSocket server1 = new ServerSocket(port);
// ... code to accept and process requests
// new server initialization
ServerSocket server2 = new ServerSocket();
// attempt at hotswap
server1.close();
server2.bind(port);
// .. more code
The code works as above but I am wondering about the possibility of dropped messages between the time the first socket is closed and the second one is opened.
Two questions:
Is there a way to ensure that no connections are dropped?
If there is a way to ensure that no connections are dropped does it still work if the instances of the ServerSocket class are in different virtual machines?
Thanks in advance.
Closing a ServerSocket means that server1 no longer handles new incoming connections; those are taken care of by server2. So far so good. You can let server1 be garbage collected once it no longer has any connected Sockets left.
There will be a (shorter or longer) window during which the port is marked as "not open" by the OS networking stack, between the moment the first ServerSocket is closed and the moment the second one is bound (the OS cannot know our intention to open a new socket right after closing the first one).
An incoming TCP connection attempt during this window will be refused by the OS (the client typically sees a connection-refused error), and the client will likely not retry, since it got an explicit answer that the port was not open.
A Possible work-around
Use the Java NIO constructs (see ServerSocketChannel), which let you multiplex many connections without dedicating a thread to each incoming request, and be sure to check out the Netty library (http://netty.io/), which has several constructs for this.
Make sure that you can set the handler for incoming requests dynamically (and thread-safely :). This makes it possible to seamlessly change the handling of incoming requests, though you will not be able to exchange the ServerSocket itself (but that's likely not exactly what you want, either).
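For illustration, here is a minimal sketch of that handler-swapping idea using a plain ServerSocket rather than NIO; HotSwapServer and RequestHandler are made-up names, not library classes. The listening socket is opened once and never closed, so there is no window in which connections get refused; only the handler reference is swapped:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicReference;

public class HotSwapServer {
    // Hypothetical handler interface, not part of any library.
    public interface RequestHandler {
        void handle(Socket client) throws IOException;
    }

    private final AtomicReference<RequestHandler> handler = new AtomicReference<>();

    public HotSwapServer(RequestHandler initial) {
        handler.set(initial);
    }

    // The "hot swap": replace the handler, the listening socket stays open.
    public void setHandler(RequestHandler newHandler) {
        handler.set(newHandler);
    }

    public void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket client = server.accept();
                RequestHandler current = handler.get(); // snapshot the handler in use right now
                new Thread(() -> {
                    try {
                        current.handle(client);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }).start();
            }
        }
    }
}
Note that this only helps within a single JVM; it does not address the cross-VM case from the question.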
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerThread implements Runnable
{
    private static final int port = 10000;

    @Override
    public void run() {
        try (ServerSocket serverSocket = new ServerSocket(port)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                // handle the client request in a separate thread
                ClientThread clientThread = new ClientThread(clientSocket);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Will this work if I have, let's say, 10 different threads running ServerThread.run()? Or should I use the same ServerSocket object for all threads?
The docs say:
The constructor for ServerSocket throws an exception if it can't listen on the specified port (for example, the port is already being used)
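A quick way to see that behaviour (just a sketch; the port number is arbitrary and the exact exception message is platform-dependent):
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortClashDemo {
    public static void main(String[] args) throws IOException {
        ServerSocket first = new ServerSocket(10000);
        try {
            ServerSocket second = new ServerSocket(10000); // same port while 'first' is still open
            second.close(); // not reached
        } catch (BindException e) {
            System.out.println("Second ServerSocket failed: " + e.getMessage()); // typically "Address already in use"
        }
        first.close();
    }
}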
You may be wondering why I want to do this in the first place and not simply have a single thread running ServerSocket.accept(). Well, my assumption is (correct me if I'm wrong) that the accept() method may take some time to finish establishing a connection, especially if the ServerSocket is SSL (because of handshaking). So if two clients want to connect at the same time, one has to wait for the other. This would be very bad for a high traffic server.
Update: It seems that accept() returns as soon as a connection is available in the backlog queue. This means that if there is a queue of clients waiting to connect, a single accepting thread can hand off the requests about as fast as possible, and only one such thread is needed (apart from the time it takes to create and start a new thread for each request, which is negligible when using a thread pool).
The ServerSocket also has a parameter called "backlog", with which you can set the maximum number of connections in the queue. According to the book "Fundamental Networking in Java", §3.3.3:
TCP itself can get ahead of a TCP server application in accepting connections. It maintains a 'backlog queue' of connections to a listening socket which TCP itself has completed but which have not yet been accepted by the application. This queue exists between the underlying TCP implementation and the server process which created the listening socket. The purpose of pre-completing connections is to speed up the connection phase, but the queue is limited in length so as not to pre-form too many connections to servers which are not accepting them at the same rate for any reason. When an incoming connection request is received and the backlog queue is not full, TCP completes the connection protocol and adds the connection to the backlog queue. At this point, the client application is fully connected, but the server application has not yet received the connection as a result value of ServerSocket.accept. When it does so, the entry is removed from the queue.
I'm still not sure though in the case of SSL, if the handshaking is also done in parallel by ServerSocket.accept() for simultaneous connections.
Update 2: The ServerSocket.accept() method itself doesn't do any real networking at all. It will return as soon as the operating system has established a new TCP connection. The operating system itself holds a queue of waiting TCP connections, which can be controlled by the "backlog" parameter in the ServerSocket constructor:
ServerSocket serverSocket = new ServerSocket(port, 50);
// this will create a server socket that allows at most 50 pending connections in the queue
SSL handshaking is done after the TCP connection has been established, when the first I/O happens on the accepted socket (or when startHandshake() is called); it is not part of accept(). So one thread calling ServerSocket.accept() is always enough.
Here are a few thoughts regarding your problem:
You can't listen() on the same IP+port with several ServerSockets. If you could, to which one of the sockets would the OS deliver the SYN packet?*
TCP indeed maintains a backlog of pre-accepted connections, so a call to accept() returns (almost) immediately with the first (oldest) connection in the backlog queue. TCP does this by sending the SYN-ACK packet automatically in reply to the SYN sent by the client and waiting for the reply ACK (the 3-way handshake).
But, as #zero298 suggests, accepting connections as fast as possible isn't usually the problem. The problem is the load incurred by processing communication with all the sockets you will have accepted, which may very well bring your server to its knees (it's effectively a DoS attack). In fact the backlog parameter is usually there so that connections waiting too long in the backlog queue to be accept()ed are dropped by TCP before they ever reach your application.
Instead of creating one thread per client socket, I would suggest you use an ExecutorService thread pool running some maximum number of threads, each handling communication with one client. That allows for graceful degradation of system resources, instead of creating millions of threads which would in turn cause thread starvation, memory issues, file descriptor exhaustion, ... Coupled with a carefully-chosen backlog value, you'll be able to get the maximum throughput your server can offer without crashing it. And if you're worried about DoS on SSL, the very first thing the run() method of your client thread should do is call startHandshake() on the newly-connected socket.
Regarding the SSL part, TCP itself cannot do any SSL pre-accept, as it would need to perform encryption/decryption, talk to a keystore, etc., which are well beyond its specification. Note that you should also use an SSLServerSocket in that case.
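Putting those pieces together, here is a hedged sketch of a thread-pooled SSL server; the pool size, port, backlog and keystore setup are placeholders you would adapt, and it assumes the default SSL context is configured via the usual javax.net.ssl system properties:
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

public class TlsServerSketch {
    public static void main(String[] args) throws IOException {
        SSLServerSocketFactory factory = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(8443, 50); // port 8443, backlog 50
        ExecutorService pool = Executors.newFixedThreadPool(100); // bounded pool instead of one thread per client
        while (true) {
            SSLSocket client = (SSLSocket) server.accept(); // cheap: only the TCP connection is done here
            pool.execute(() -> {
                try {
                    client.startHandshake(); // the TLS handshake happens here, in the worker thread
                    // ... read from / write to the client streams ...
                } catch (IOException e) {
                    // handshake failed or the client went away
                } finally {
                    try { client.close(); } catch (IOException ignored) { }
                }
            });
        }
    }
}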
To get around the use-case you gave (clients willingly delaying the handshake to DoS your server), you'll be interested in reading an Oracle forum post about it where EJP (again) answers:
The backlog queue is for connections that have been completed by the TCP stack but not yet accepted by the application. Nothing to do with SSL yet. JSSE doesn't do any negotiation until you do some I/O on the accepted socket, or call startHandshake() on it, both of which you would do in the thread dealing with the connection. I don't see how you can make a DOS vulnerability out of that, at least not an SSL-specific one. If you are experiencing DOS conditions, most probably you are doing I/O in the accept() thread that should be done in the connection-handling thread.
*: Linux >= 3.9 can actually load-balance incoming connections across multiple sockets bound to the same port via the SO_REUSEPORT option, but that option is not available on all platforms, and it is not exposed by java.net.ServerSocket (it only appeared in the standard API with Java 9's StandardSocketOptions.SO_REUSEPORT).
My SocketServer first waits for at least 4 Socket connections before creating a WorkerThread in which all four connections are served. In that same thread, all 4 sockets will be opened to perform communication with the connected clients.
Now, consider a situation where the server has already accepted two socket connections but is still listening for the remaining 2 clients before it can proceed with creating the thread.
During that listening phase, the already-connected clients are shown a "Waiting..." message (since the server has not yet opened the sockets to send any response back, and socket.readObject() is blocking at the client end) until the server has all 4 clients to work with. In the meantime, one of the already-connected clients gets tired of the "Waiting..." and closes the client app. In such a case, my WorkerThread will throw an exception when it attempts to open the dead socket it was handed.
How can I know whether a socket is pointing to nothing (because the client is gone) without having to open the socket? (If I open it from the main thread, I won't be able to open it again from the WorkerThread, where it is actually supposed to be used.)
If I can tell whether a Socket is dead, I can put the server back into listening mode and collect 4 live connections before it proceeds to create the thread.
I know my SocketServer will be stuck at accept(), so even if what I asked above is possible, I'll have to create another thread that monitors the liveness of the already accepted socket connections.
Update
By "not opening the socket" I mean something like below.
Socket s = ss.accept();
/* I will NOT be doing the following here, since once I close the InputStream and
   OutputStream in the main thread, I can't reopen them in the WorkerThread.
   But I still want to know whether Socket s is connected to a client before I start the WorkerThread.

   ObjectInputStream in = new ObjectInputStream(s.getInputStream());
   ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
   String msg = in.readObject().toString();
   System.out.println("Client says: " + msg);
   out.writeObject("success");
   in.close();
   out.close();
*/
new WorkerThread(s).start();
And note that my server accepts 4 such connections, and when 4 sockets have been accept()ed, it passes all 4 to WorkerThread's constructor and goes back to accept() another 4 clients.
I think you just need to handle your exceptions better. You should handle the IOException correctly whenever you try to read or write to the socket.
One option is to have the accepting code send a "still waiting" message to the client and get an acknowledgement every so often while you are waiting for the other connections. The socket and associated streams have already been created by the accept(), so you can do this, call flush() on the OutputStream, and then hand off to the handler.
As long as you don't call close() on the streams, you should be able to re-use them without a problem. You just can't have two different threads using the streams at the same time.
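A rough sketch of that probing idea is below; the class/method names and the "WAITING" line are made up, and it assumes a simple line-based protocol where the client just reads and ignores the extra message (with the ObjectOutputStream protocol from the question you would write a small status object instead). Also note that a write can appear to succeed on a connection that has already died, so treat it as a best-effort check:
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class WaitProbe {
    // Returns false if writing the probe fails, i.e. the client is most likely gone.
    public static boolean stillThere(Socket s) {
        try {
            OutputStream out = s.getOutputStream();
            out.write("WAITING\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}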
I am trying to write a multithreaded program in Java where a server listens for connections from clients and spawns a thread to accommodate each client. I have:
while (true)
{
    Socket s = server.accept();
    ClientHandler ch = new ClientHandler(s);
    Thread t = new Thread(ch);
    t.start();
}
My question is: whenever it accepts a connection in
Socket s = server.accept();
and starts executing the following lines of code to create the thread etc., what happens to a connection request from another client during that time? Is it queued somehow, so that it gets served on the next iteration of the while(true) loop, or will it be rejected?
thanks,
Nikos
After accept() returns, the TCP handshake is complete and you have a connected client socket (s in your code). Until the next call to accept(), the OS queues pending connection requests.
You might want to check out some tutorial like this one for example.
I wrote a tiny http server in Java, which you can find on github. That might be a good example for you to take a look at real-world usage of Sockets and multithreading.
As was already answered: yes, it is queued, and no, it is not rejected.
I think that anyone reading this question should first know that when you instantiate a socket:
server = new ServerSocket(mPort, mNusers);
Java is already implementing a network socket, which has clearly defined behavior. Connections will be rejected, however, once the limit set there (the backlog) is reached.
Also, the code posted in the question accepts multiple connections but loses the reference to the previous one. This may be a "duh", but just in case someone is copy-pasting, you should do something to store all the created sockets or handlers. Perhaps:
ClientHandler[] ch = new ClientHandler[mNusers];
int chIndex = 0;
while (true)
{
    Socket s = server.accept();
    ch[chIndex] = new ClientHandler(s);
    Thread t = new Thread(ch[chIndex]);
    t.start();
    chIndex++;
}
An array may not be the best option, but I want to point out that with sockets you should know the limit of connections you intend to allocate.
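As a sketch, a growable collection avoids having to guess that limit up front (ClientHandler and server are the questioner's objects):
List<ClientHandler> handlers = new ArrayList<>();
while (true)
{
    Socket s = server.accept();
    ClientHandler handler = new ClientHandler(s);
    handlers.add(handler);
    new Thread(handler).start();
}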
ugh :)
There is a so-called SYN queue, which holds up to the requested number of not-yet-established connections (there is a per-call and a system-wide limit; I'm not sure whether it is also limited per user).
When "listening" on a ServerSocket, you specify that size with the second constructor argument (one of the answers calls it "mNusers", but it is really the size of the "backlog" the socket should keep). The pending connections are stored in this backlog, and if it fills up, all further ones (should) get a connection refused.
So:
a) increase that backlog when you are too slow at accepting (server.accept()) connections, or
b) accept the connections faster, by
b.1) using a ThreadPool (and be happy moving the problem to context switches of the OS), or
b.2) using NIO, and with it the ability to handle socket states within single threads/CPUs (as long as the internal data throughput is better than the one of the network, this is the more performant option); see the sketch below.
have fun
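A bare-bones sketch of option b.2 (a single thread doing both accept and read on one Selector; error handling and partial-write handling are omitted, and the port/backlog are arbitrary):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(10000), 50); // port 10000, backlog 50
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) {            // the peer closed its side
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer); // naive echo back to the client
                    }
                }
            }
        }
    }
}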
As a follow up to a recent question, I wonder why it is impossible in Java, without attempting reading/writing on a TCP socket, to detect that the socket has been gracefully closed by the peer? This seems to be the case regardless of whether one uses the pre-NIO Socket or the NIO SocketChannel.
When a peer gracefully closes a TCP connection, the TCP stacks on both sides of the connection know about the fact. The server-side (the one that initiates the shutdown) ends up in state FIN_WAIT2, whereas the client-side (the one that does not explicitly respond to the shutdown) ends up in state CLOSE_WAIT. Why isn't there a method in Socket or SocketChannel that can query the TCP stack to see whether the underlying TCP connection has been terminated? Is it that the TCP stack doesn't provide such status information? Or is it a design decision to avoid a costly call into the kernel?
With the help of the users who have already posted some answers to this question, I think I see where the issue might be coming from. The side that doesn't explicitly close the connection ends up in TCP state CLOSE_WAIT meaning that the connection is in the process of shutting down and waits for the side to issue its own CLOSE operation. I suppose it's fair enough that isConnected returns true and isClosed returns false, but why isn't there something like isClosing?
Below are the test classes that use pre-NIO sockets. But identical results are obtained using NIO.
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        final ServerSocket ss = new ServerSocket(12345);
        final Socket cs = ss.accept();
        System.out.println("Accepted connection");
        Thread.sleep(5000);
        cs.close();
        System.out.println("Closed connection");
        ss.close();
        Thread.sleep(100000);
    }
}
import java.net.Socket;

public class MyClient {
    public static void main(String[] args) throws Exception {
        final Socket s = new Socket("localhost", 12345);
        for (int i = 0; i < 10; i++) {
            System.out.println("connected: " + s.isConnected() +
                    ", closed: " + s.isClosed());
            Thread.sleep(1000);
        }
        Thread.sleep(100000);
    }
}
When the test client connects to the test server the output remains unchanged even after the server initiates the shutdown of the connection:
connected: true, closed: false
connected: true, closed: false
...
I have been using Sockets often, mostly with Selectors, and though not a network/OSI expert, from my understanding, calling shutdownOutput() on a Socket actually sends something on the network (a FIN) that wakes up my Selector on the other side (same behaviour in C). So there is detection of a sort: you detect it as a read operation that fails when you try it.
In the code you give, closing the socket shuts down both the input and output streams, with no possibility of reading the data that might still be available, therefore losing it. The Java Socket.close() method performs a "graceful" disconnection (the opposite of what I initially thought) in that the data left in the output stream is sent, followed by a FIN to signal the close. The FIN is ACK'd by the other side, as any regular packet would be1.
If you need to wait for the other side to close its socket, you need to wait for its FIN. And to achieve that, you have to detect Socket.getInputStream().read() returning -1, which means you should not close your own socket yet, as doing so would close its InputStream.
From what I did in C, and now in Java, achieving such a synchronized close should be done like this:
Shut down the socket output (this sends a FIN to the other end; it is the last thing this socket will ever send). Input is still open, so you can read() and detect the remote close().
Read the socket's InputStream until you receive the reply FIN from the other end (as it detects your FIN, it goes through the same graceful disconnection process). This is important on some OSes, as they don't actually close the socket as long as one of its buffers still contains data. Such sockets are called "ghost" sockets and use up descriptor numbers in the OS (that might not be an issue anymore with modern OSes).
Close the socket (by calling Socket.close() or closing its InputStream or OutputStream).
As shown in the following Java snippet:
public void synchronizedClose(Socket sok) throws IOException {
    InputStream is = sok.getInputStream();
    sok.shutdownOutput(); // sends the FIN on the network
    while (is.read() != -1) ; // read() returns -1 once the peer's FIN is reached
    sok.close(); // or is.close(); now we can close the Socket
}
Of course both sides have to use the same way of closing, or the sending part might always be sending enough data to keep the while loop busy (e.g. if the sending side only ever sends data and never reads, so it never detects the connection termination; which is clumsy, but you might not have control over that).
As #WarrenDew pointed out in his comment, discarding the data in the program (application layer) induces a non-graceful disconnection at application layer: though all data were received at TCP layer (the while loop), they are discarded.
1: From "Fundamental Networking in Java": see fig. 3.3 p.45, and the whole §3.7, pp 43-48
I think this is more of a socket programming question. Java is just following the socket programming tradition.
From Wikipedia:
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
Once the handshake is done, TCP does not make any distinction between two end points (client and server). The term "client" and "server" is mostly for convenience. So, the "server" could be sending data and "client" could be sending some other data simultaneously to each other.
The term "Close" is also misleading. There's only FIN declaration, which means "I am not going to send you any more stuff." But this does not mean that there are no packets in flight, or the other has no more to say. If you implement snail mail as the data link layer, or if your packet traveled different routes, it's possible that the receiver receives packets in wrong order. TCP knows how to fix this for you.
Also you, as a program, may not have time to keep checking what's in the buffer; you can check it at your convenience. All in all, the current socket implementation is not so bad. If there actually were an isPeerClosed(), that's an extra call you would have to make every time you want to call read.
The underlying sockets API doesn't have such a notification.
The sending TCP stack won't send the FIN bit until the last packet anyway, so there could be a lot of data buffered from when the sending application logically closed its socket before that data is even sent. Likewise, data that's buffered because the network is quicker than the receiving application (I don't know, maybe you're relaying it over a slower connection) could be significant to the receiver and you wouldn't want the receiving application to discard it just because the FIN bit has been received by the stack.
Since none of the answers so far fully answer the question, I'm summarizing my current understanding of the issue.
When a TCP connection is established and one peer calls close() or shutdownOutput() on its socket, the socket on the other side of the connection transitions into CLOSE_WAIT state. In principle, it's possible to find out from the TCP stack whether a socket is in CLOSE_WAIT state without calling read/recv (e.g., getsockopt() on Linux: http://www.developerweb.net/forum/showthread.php?t=4395), but that's not portable.
Java's Socket class seems to be designed to provide an abstraction comparable to a BSD TCP socket, probably because this is the level of abstraction people are used to when programming TCP/IP applications. BSD sockets are a generalization supporting sockets other than just INET (e.g., TCP) ones, so they don't provide a portable way of finding out the TCP state of a socket.
There's no method like isCloseWait() because people used to programming TCP applications at the level of abstraction offered by BSD sockets don't expect Java to provide any extra methods.
Detecting whether the remote side of a (TCP) socket connection has closed can be done by calling the java.net.Socket.sendUrgentData(int) method and catching the IOException it throws if the remote side is down. This has been tested between Java and Java, and between Java and C.
This avoids the problem of designing the communication protocol to use some sort of pinging mechanism. By disabling OOBInline on a socket (setOOBInline(false)), any OOB data received is silently discarded, but OOB data can still be sent. If the remote side is closed, a connection reset is attempted, fails, and causes an IOException to be thrown.
If you actually use OOB data in your protocol, then your mileage may vary.
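As a sketch, such a check could look like the following; treat it as best-effort, since the failure may only show up on a later probe once the first one has triggered the reset:
import java.io.IOException;
import java.net.Socket;

public class UrgentProbe {
    // Probe the connection with one byte of OOB data. With OOBInline disabled on the
    // receiving side (the default), the byte is silently discarded there.
    public static boolean peerSeemsAlive(Socket socket) {
        try {
            socket.sendUrgentData(0xFF);
            return true;
        } catch (IOException e) {
            return false; // the connection has been reset or closed
        }
    }
}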
The Java I/O stack definitely sends a FIN when it gets destructed on an abrupt teardown. It just makes no sense that you can't detect this, because most clients only send the FIN when they are shutting down the connection.
...another reason I am really beginning to hate the NIO Java classes. It seems like everything is a little half-assed.
It's an interesting topic. I've dug through the Java code just now to check. From my findings, there are two distinct problems: the first is the TCP RFC itself, which allows a remotely closed socket to keep transmitting data in half-duplex, so a remotely closed socket is still half open. As per the RFC, RST doesn't close the connection; you need to send an explicit ABORT command. So Java allows sending data through a half-closed socket.
(There are two methods for reading the close status at each of the endpoints.)
The other problem is that the implementations say this behavior is optional. As Java strives to be portable, they implemented the best common feature. Maintaining a map of (OS, implementation of half-duplex) would have been a problem, I guess.
This is a flaw of Java's (and all others' that I've looked at) OO socket classes -- no access to the select system call.
Correct answer in C:
struct timeval tp;
fd_set in;
fd_set out;
fd_set err;
FD_ZERO(&in);
FD_ZERO(&out);
FD_ZERO(&err);
FD_SET(socket_handle, &err);
tp.tv_sec = 0; /* or however long you want to wait */
tp.tv_usec = 0;
select(socket_handle + 1, &in, &out, &err, &tp);
if (FD_ISSET(socket_handle, &err)) {
    /* handle closed socket */
}
Here is a lame workaround: use SSL ;) SSL does a close handshake on teardown, so you are notified of the socket being closed (most implementations seem to do a proper handshake teardown, that is).
The reason for this behaviour (which is not Java specific) is the fact that you don't get any status information from the TCP stack. After all, a socket is just another file handle and you can't find out if there's actual data to read from it without actually trying to (select(2) won't help there, it only signals that you can try without blocking).
For more information see the Unix socket FAQ.
Only writes require that packets be exchanged, which allows the loss of the connection to be detected. A common work-around is to use the TCP keep-alive option.
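For the keep-alive route, a minimal sketch (the endpoint is made up, and the probe interval/timeout are controlled by the operating system, not by Java):
Socket socket = new Socket("example.com", 10000);
socket.setKeepAlive(true); // ask TCP to send keep-alive probes on an otherwise idle connection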
When it comes to dealing with half-open Java sockets, one might want to have a look at isInputShutdown() and isOutputShutdown() (note that these report whether you have shut down your own halves of the socket; they do not detect the remote side's shutdown).
Hey all. I have a server written in Java using the ServerSocket and Socket classes.
I want to be able to detect and handle disconnects, and then reconnect a new client if necessary.
What is the proper procedure to detect client disconnections, close the socket, and then accept new clients?
Presumably, you're reading from the socket, perhaps using a wrapper over the input stream, such as a BufferedReader. In this case, you can detect the end-of-stream when the corresponding read operation returns -1 (for raw read() calls), or null (for readLine() calls).
Certain operations will cause a SocketException when performed on a closed socket, which you will also need to deal with appropriately.
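For example, a worker's read loop that covers both cases might look roughly like this (a sketch; the class name is made up):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketException;

public class ClientReader implements Runnable {
    private final Socket socket;

    public ClientReader(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) { // null means the client closed its end gracefully
                // ... process the line ...
            }
            System.out.println("Client disconnected (end of stream)");
        } catch (SocketException e) {
            System.out.println("Connection reset/aborted: " + e.getMessage());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}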
The only safe way to detect that the other end has gone is to send heartbeats periodically and have the other end time out based on the lack of a heartbeat.
Is it just me, or has nobody noticed that the JavaDoc lists a method in the ServerSocket API that lets us obtain a boolean for the closed state of the ServerSocket?
You can just loop every few seconds to check its state:
if (!serverSocket.isClosed()) {
    // whatever you want to do while the serverSocket is still open
} else {
    // handle a closed serverSocket
}
EDIT: Just reading your question again, it seems that you want the server to continually listen for connections, and if a client disconnects, it should be able to detect when that client attempts to reconnect. Shouldn't that just be your solution in the first place?
Have a server that is listening; once it picks up a client connection, it should pass the connection to a worker thread object and launch it to operate asynchronously. Then the server can loop back to listening for new connections. If the client disconnects, the launched thread should die, and when the client reconnects, a new thread is launched to handle the new connection.
Jenkov provides a great example of this implementation.
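In sketch form, that pattern looks roughly like this; ClientHandler is assumed to be a Runnable whose run() returns when its client disconnects (for instance the read loop shown earlier), and the port is arbitrary:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class AcceptLoop {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(10000)) {
            while (true) {
                Socket client = server.accept();               // blocks until a client connects
                new Thread(new ClientHandler(client)).start(); // worker dies when the client disconnects
                // The loop goes straight back to accept(), so a client that
                // reconnects later is simply picked up as a new connection.
            }
        }
    }
}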