I'm working on a recreational project to learn more about Java networking, and so far every tutorial or piece of documentation I've come across creates a new thread for each client connection to wait for input. I'm wondering: is it possible to handle the whole list of client connections with a single thread? I tried something like the following code, but it didn't work.
while (true) {
    for (Client c : list) {
        DataInputStream dis = new DataInputStream(c.getSocket().getInputStream());
        if (dis.readLine() != null) {
            //Code
        }
        dis.close();
    }
}
Yes, it is possible with a single thread, using the NIO package. This lets you set up non-blocking I/O and multiplex across channels within your single thread. It's not exactly trivial, but there's a decent example here.
Your example above will block on the readLine() call until data is available on the Socket. If one of your clients is waiting on data, the while loop will never proceed and you'll never service the other clients.
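For reference, here is a minimal sketch of that Selector-based approach (the port number and the buffer handling are placeholders, not a full protocol):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(12345));
        server.configureBlocking(false);                  // non-blocking accept
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                            // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                 // a new client is connecting
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {            // an existing client sent data
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();                   // peer closed the connection
                    } else {
                        // handle buffer contents here
                    }
                }
            }
        }
    }
}

Each channel is only touched when the selector says it is ready, so no single slow client can block the loop.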
I am trying to write a multithreaded program in Java in which a server listens for connections from clients and spawns a thread to accommodate each client. I have:
while (true)
{
    Socket s = server.accept();
    ClientHandler ch = new ClientHandler(s);
    Thread t = new Thread(ch);
    t.start();
}
My question is: whenever it accepts a connection in
Socket s = server.accept();
and starts executing the following lines of code to create the thread, etc., what happens to a connection request from a client that arrives during that time? Is it queued somehow, to be served on the next iteration of the while(true) loop, or is it rejected?
thanks,
Nikos
After accept() returns, the TCP handshake is complete and you have a connected client socket (s in your code). Until the next call to accept(), the OS queues pending connection requests.
You might want to check out some tutorial like this one for example.
I wrote a tiny HTTP server in Java, which you can find on GitHub. It might be a good example of real-world usage of sockets and multithreading for you to look at.
As was answered already: yes, it is queued, and no, it is not rejected.
I think that anyone reading this question should first know that when you instantiate a socket:
server = new ServerSocket(mPort, mNusers);
Java is already creating a real network socket, which has clearly defined behavior; the second constructor argument is the backlog, and connection requests will be refused once that backlog is full.
Also, the code posted in the question accepts multiple connections but loses the reference to the previous one. This may be a "duh", but just in case someone is copy-pasting, you should do something to store all the created sockets or handlers. Perhaps:
ClientHandler[] ch = new ClientHandler[mNusers];
int chIndex = 0;
while (true)
{
    Socket s = server.accept();
    ch[chIndex] = new ClientHandler(s);
    Thread t = new Thread(ch[chIndex]);
    t.start();
    chIndex++;
}
An array may not be the best option, but I want to point out that with sockets you should know the limit on the number of connections you will allocate; a list-based sketch follows.
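If you don't want a hard upper limit, a growable list is one alternative; a minimal sketch using the same hypothetical ClientHandler (needs java.util.List and java.util.ArrayList):

List<ClientHandler> handlers = new ArrayList<ClientHandler>();
while (true)
{
    Socket s = server.accept();
    ClientHandler handler = new ClientHandler(s);
    handlers.add(handler);          // keep a reference to every handler
    new Thread(handler).start();
}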
ugh :)
There is a so-called SYN queue, which holds up to the requested number of not-yet-established connections (there is a per-socket and a system-wide limit; I'm not sure whether it is also limited per user).
When listening on a ServerSocket, you specify that size yourself; one of the answers calls the parameter "Nusers", but it is actually the size of the "backlog" the socket should keep. Pending connections are stored in this backlog, and once it fills up, all further ones (should) get a connection-refused error.
So:
a) increase the backlog if you are accepting connections (server.accept()) too slowly, or
b) accept the connections faster, by
b.1) using a thread pool (and be happy moving the problem to OS context switches); see the sketch after this list, or
b.2) using NIO, and with it the ability to handle socket states within a single thread/CPU (as long as the internal data throughput is better than the network's, this is the more performant option).
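A minimal sketch of option b.1, using the ClientHandler Runnable from the question (the pool size and backlog are arbitrary numbers, not recommendations):

import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50); // caps concurrent handlers
        ServerSocket server = new ServerSocket(12345, 200);      // port, backlog
        while (true) {
            Socket s = server.accept();
            pool.execute(new ClientHandler(s)); // hand off quickly so accept() is called again soon
        }
    }
}

The point is that the accept loop does almost nothing between accept() calls, so the backlog drains as fast as connections arrive.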
have fun
I have this weird problem with my (multithreaded) server: when I get more than 500 players connected simultaneously, the PrintWriter sometimes takes 100 seconds or more (up to 2 minutes) to finish flush() or print().
Here is the code:
public static void send(Player p, String packet)
{
    PrintWriter out = p.get_out();
    if (out != null && !packet.equals("") && !packet.equals("" + (char) 0x00))
    {
        packet = Crypter.toUtf(packet);
        out.print(packet + (char) 0x00);
        out.flush();
    }
}
The PrintWriter is created something like this:
_in = new BufferedReader(new InputStreamReader(_socket.getInputStream()));
_out = new PrintWriter(_socket.getOutputStream());
If I add the synchronized keyword to the send() method, the whole server starts to lag every 2 seconds; if I don't, some random player starts to lag for no reason.
Does anyone have any idea where this is coming from? What should I do?
The print writer is wrapped around a socket's output stream, so I'm going to guess and say that the socket's output buffer is full, and so the write/flush call will block until the buffer has enough room to accommodate the message being sent.
The socket send buffer may become full if data is being written to it faster than it can be transmitted to the client (or faster than the client can receive it).
Edit:
P.S. If you're having scalability problems, it may be due to using java.io (which requires one thread per socket) instead of java.nio (in which case a single thread can detect and perform operations on those sockets which have pending data). nio is intended to support applications which must scale to a large number of connections, but the programming model is more difficult.
The reason is that your send() method is static, so when you mark it synchronized, all threads that write to any socket are synchronized on the containing class object. Make it non-static (or lock on something per-connection), and then only threads that are writing to the same socket will be synchronized.
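To make that concrete: one hedged option, reusing the names from the question, is to keep send() static but synchronize on the per-player PrintWriter, so only writers to the same connection contend for the lock:

public static void send(Player p, String packet) {
    PrintWriter out = p.get_out();
    if (out == null || packet.equals("") || packet.equals("" + (char) 0x00)) {
        return;
    }
    String encoded = Crypter.toUtf(packet);
    synchronized (out) {               // lock is per connection, not per class
        out.print(encoded + (char) 0x00);
        out.flush();
    }
}

Note that flush() can still block if that particular client's send buffer is full, as described above; the narrower lock only stops one slow client from stalling writes to everyone else.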
I am working on a mobile communicator, and after establishing a connection to the server using the following method (included to show that I am using StreamConnection, InputStream and OutputStream), I distribute the inputStream and outputStream between two separate threads, let's call them Sender and Receiver.
Connection method:
private InputStream inputStream;
private OutputStream outputStream;
private StreamConnection connection;
public void connect(String host, String port) throws IOException {
String connectionString = "socket://" + host + ":" + port;
connection = (StreamConnection) Connector.open(connectionString);
inputStream = connection.openDataInputStream();
outputStream = connection.openOutputStream();
}
The Sender thread waits for anything to appear in the out buffer. When the output buffer is empty, Sender waits (by calling the wait() method on the sender thread). Anything put into the output buffer calls the notify() method on the sending thread.
The Receiver thread polls the InputStream using the int available() method, and when there is something to receive, it calls the blocking int read() method. Everything works like a charm in various emulators and the few devices I have handy.
However, there is one phone that seems to misbehave. Whenever one thread calls available() or read() on the InputStream object while the other thread calls write() on the OutputStream object, the input stream finishes. All subsequent reads return -1, which means the InputStream got closed.
After massive google-fu, I came across this post on the Nokia forums, where the simplex/duplex properties of a device are discussed, and that seems to be the case with the device I am having trouble with. Normally (99% of the time) calls to read() and write() can be simultaneous without any problems.
My question is then: has anybody come across similar problems? How did/would you sort out the issue of one thread independently reading while another is independently writing to the established connection, so that they do not call read() or available() while calling write()?
Any pointers in any directions greatly appreciated!
Normally, very little is guaranteed when we talk about multi-threaded applications. I would recommend using a single thread in your application and managing the connection with a single worker. Some devices have behavioral quirks, so what works on one device may not work on another.
I have worked a lot on mobile games where a lot of animation has to be rendered without compromising the speed of the game. I have found that you can do more with a single thread, and it makes your application very portable (with almost no changes).
If you are waiting for the threads to complete either the READ or the WRITE operation, you are effectively doing things sequentially anyway, so more than one thread only complicates matters. Instead, build a wait-notify mechanism based on some predetermined factor and let a SINGLE thread either read from or write to the socket stream, as sketched below. Switching between threads is a much more costly operation than this scheme.
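A rough sketch of that single-thread idea, assuming a hypothetical outgoing Vector that the rest of the application fills, plus the inputStream/outputStream fields from the question (handleByte() is also hypothetical):

// Single worker: drains pending writes, then polls for input, so read and write never overlap.
private final Vector outgoing = new Vector();   // filled elsewhere in the application

public void run() {
    try {
        while (true) {
            while (!outgoing.isEmpty()) {         // 1) send everything that is queued
                byte[] msg;
                synchronized (outgoing) {
                    msg = (byte[]) outgoing.elementAt(0);
                    outgoing.removeElementAt(0);
                }
                outputStream.write(msg);
                outputStream.flush();
            }
            while (inputStream.available() > 0) { // 2) read whatever has arrived, without blocking
                int b = inputStream.read();
                if (b == -1) {
                    return;                       // stream closed
                }
                handleByte(b);
            }
            Thread.sleep(50);                     // avoid a busy loop
        }
    } catch (Exception e) {
        // connection lost; clean up here
    }
}

Because the same thread handles both directions, read()/available() can never run at the same moment as write(), which is exactly the overlap the problematic phone seems to choke on.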
Hope this answers your question.
Between that kind of problem and mobile network operators filtering data, most mobile Java developers use the "http://" protocol instead of "socket://".
Of course, that means not using duplex connection anymore and making many GET and POST requests instead.
Far from ideal, I know.
As a follow-up to a recent question, I wonder why it is impossible in Java, without attempting to read from or write to a TCP socket, to detect that the socket has been gracefully closed by the peer. This seems to be the case regardless of whether one uses the pre-NIO Socket or the NIO SocketChannel.
When a peer gracefully closes a TCP connection, the TCP stacks on both sides of the connection know about the fact. The server-side (the one that initiates the shutdown) ends up in state FIN_WAIT2, whereas the client-side (the one that does not explicitly respond to the shutdown) ends up in state CLOSE_WAIT. Why isn't there a method in Socket or SocketChannel that can query the TCP stack to see whether the underlying TCP connection has been terminated? Is it that the TCP stack doesn't provide such status information? Or is it a design decision to avoid a costly call into the kernel?
With the help of the users who have already posted some answers to this question, I think I see where the issue might be coming from. The side that doesn't explicitly close the connection ends up in TCP state CLOSE_WAIT, meaning the connection is in the process of shutting down and is waiting for this side to issue its own CLOSE operation. I suppose it's fair enough that isConnected() returns true and isClosed() returns false, but why isn't there something like isClosing()?
Below are the test classes that use pre-NIO sockets. But identical results are obtained using NIO.
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        final ServerSocket ss = new ServerSocket(12345);
        final Socket cs = ss.accept();
        System.out.println("Accepted connection");
        Thread.sleep(5000);
        cs.close();
        System.out.println("Closed connection");
        ss.close();
        Thread.sleep(100000);
    }
}

import java.net.Socket;

public class MyClient {
    public static void main(String[] args) throws Exception {
        final Socket s = new Socket("localhost", 12345);
        for (int i = 0; i < 10; i++) {
            System.out.println("connected: " + s.isConnected() +
                               ", closed: " + s.isClosed());
            Thread.sleep(1000);
        }
        Thread.sleep(100000);
    }
}
When the test client connects to the test server the output remains unchanged even after the server initiates the shutdown of the connection:
connected: true, closed: false
connected: true, closed: false
...
I have been using sockets often, mostly with Selectors, and though I am not a network/OSI expert, from my understanding, calling shutdownOutput() on a Socket actually sends something on the network (a FIN) that wakes up my Selector on the other side (same behaviour in the C language). So there is a kind of detection: actually detecting a read operation that will fail when you try it.
In the code you give, closing the socket shuts down both its input and output streams, without any possibility of reading the data that might still be available, therefore losing it. The Java Socket.close() method performs a "graceful" disconnection (the opposite of what I initially thought), in that the data left in the output stream will be sent, followed by a FIN to signal its close. The FIN will be ACK'd by the other side, as any regular packet would be [1].
If you need to wait for the other side to close its socket, you need to wait for its FIN. And to achieve that, you have to detect Socket.getInputStream().read() < 0, which means you should not close your socket, as that would close its InputStream.
From what I did in C, and now in Java, achieving such a synchronized close should be done like this:
Shut down the socket's output (this sends a FIN to the other end; it is the last thing this socket will ever send). Input is still open, so you can read() and detect the remote close().
Read the socket's InputStream until we receive the reply FIN from the other end (as it detects our FIN, it will go through the same graceful disconnection process). This is important on some operating systems, as they don't actually close the socket as long as one of its buffers still contains data. These are called "ghost" sockets and use up descriptor numbers in the OS (that may not be an issue anymore with modern OSes).
Close the socket (by either calling Socket.close() or closing its InputStream or OutputStream)
As shown in the following Java snippet:
public void synchronizedClose(Socket sok) throws IOException {
    InputStream is = sok.getInputStream();
    sok.shutdownOutput();      // Sends the 'FIN' on the network
    while (is.read() >= 0) ;   // read() returns -1 once the other end's 'FIN' is reached
    sok.close();               // or is.close(); now we can close the Socket
}
Of course both sides have to use the same closing procedure, or the sending part might keep sending enough data to keep the while loop busy forever (e.g. if the sending side only ever sends data and never reads to detect connection termination; which is clumsy, but you might not have control over that).
As @WarrenDew pointed out in his comment, discarding the data in the program (application layer) means the disconnection is not graceful at the application layer: although all the data was received at the TCP layer (the while loop), it is discarded.
1: From "Fundamental Networking in Java": see fig. 3.3 p.45, and the whole ยง3.7, pp 43-48
I think this is more of a socket programming question. Java is just following the socket programming tradition.
From Wikipedia:
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
Once the handshake is done, TCP does not make any distinction between the two endpoints (client and server). The terms "client" and "server" are mostly for convenience. So, the "server" could be sending data and the "client" could be sending some other data to each other simultaneously.
The term "close" is also misleading. There is only the FIN declaration, which means "I am not going to send you any more stuff." But that does not mean there are no packets still in flight, or that the other side has no more to say. If you implemented snail mail as the data link layer, or if your packets travelled different routes, it is possible for the receiver to receive packets in the wrong order. TCP knows how to fix this for you.
Also you, as a program, may not have time to keep checking what's in the buffer; you can check it at your convenience. All in all, the current socket implementation is not so bad. If there actually were an isPeerClosed(), that would be an extra call you'd have to make every time you wanted to call read.
The underlying sockets API doesn't have such a notification.
The sending TCP stack won't send the FIN bit until the last packet anyway, so there could be a lot of data buffered from when the sending application logically closed its socket before that data is even sent. Likewise, data that's buffered because the network is quicker than the receiving application (I don't know, maybe you're relaying it over a slower connection) could be significant to the receiver and you wouldn't want the receiving application to discard it just because the FIN bit has been received by the stack.
Since none of the answers so far fully answer the question, I'm summarizing my current understanding of the issue.
When a TCP connection is established and one peer calls close() or shutdownOutput() on its socket, the socket on the other side of the connection transitions into CLOSE_WAIT state. In principle, it's possible to find out from the TCP stack whether a socket is in CLOSE_WAIT state without calling read/recv (e.g., getsockopt() on Linux: http://www.developerweb.net/forum/showthread.php?t=4395), but that's not portable.
Java's Socket class seems to be designed to provide an abstraction comparable to a BSD TCP socket, probably because this is the level of abstraction people are used to when programming TCP/IP applications. BSD sockets are a generalization supporting sockets other than just INET (e.g., TCP) ones, so they don't provide a portable way of finding out the TCP state of a socket.
There's no method like isCloseWait() because people used to programming TCP applications at the level of abstraction offered by BSD sockets don't expect Java to provide any extra methods.
Detecting whether the remote side of a (TCP) socket connection has closed can be done with the java.net.Socket.sendUrgentData(int) method, and catching the IOException it throws if the remote side is down. This has been tested between Java-Java, and Java-C.
This avoids the problem of designing the communication protocol to use some sort of pinging mechanism. By disabling OOBInline on a socket (setOOBInline(false)), any OOB data received is silently discarded, but OOB data can still be sent. If the remote side is closed, a connection reset is attempted, fails, and causes an IOException to be thrown.
If you actually use OOB data in your protocol, then your mileage may vary.
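A hedged sketch of that probe (the class and method names here are mine, not a standard API):

import java.io.IOException;
import java.net.Socket;

public class PeerCheck {
    // Returns false if the urgent byte cannot be delivered, which usually means the peer is gone.
    public static boolean isPeerReachable(Socket socket) {
        try {
            socket.sendUrgentData(0);   // one out-of-band byte, discarded by the peer when OOBInline is off
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}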
The Java IO stack definitely sends a FIN when it gets destructed on an abrupt teardown. It just makes no sense that you can't detect this, because most clients only send the FIN if they are shutting down the connection.
...another reason I am really beginning to hate the NIO Java classes. It seems like everything is a little half-assed.
It's an interesting topic. I've just dug through the Java code to check. From what I found, there are two distinct problems: the first is the TCP RFC itself, which allows a remotely closed socket to keep transmitting data in half-duplex, so a remotely closed socket is still half open. As per the RFC, an RST doesn't close the connection; you need to send an explicit ABORT command. So Java allows sending data through a half-closed socket.
(There are two methods for reading the close status at each of the endpoints.)
The other problem is that the implementations say this behavior is optional. As Java strives to be portable, they implemented the best common feature set. Maintaining a map of (OS, half-duplex implementation) would have been a problem, I guess.
This is a flaw of Java's (and all the others' I've looked at) OO socket classes: no access to the select() system call.
Correct answer in C:
struct timeval tp;
fd_set in;
fd_set out;
fd_set err;

FD_ZERO(&in);
FD_ZERO(&out);
FD_ZERO(&err);
FD_SET(socket_handle, &err);

tp.tv_sec = 0; /* or however long you want to wait */
tp.tv_usec = 0;

select(socket_handle + 1, &in, &out, &err, &tp);

if (FD_ISSET(socket_handle, &err)) {
    /* handle closed socket */
}
Here is a lame workaround: use SSL ;) SSL does a close handshake on teardown, so you are notified of the socket being closed (most implementations seem to do a proper handshake teardown, that is).
The reason for this behaviour (which is not Java specific) is that you don't get any status information from the TCP stack. After all, a socket is just another file handle, and you can't find out whether there's actual data to read from it without actually trying to (select(2) won't help there; it only signals that you can try without blocking).
For more information see the Unix socket FAQ.
Only writes require that packets be exchanged, which is what allows the loss of the connection to be determined. A common workaround is to use the TCP keep-alive option (SO_KEEPALIVE, i.e. Socket.setKeepAlive(true) in Java).
When it comes to dealing with half-open Java sockets, one might want to have a look at
isInputShutdown() and isOutputShutdown().
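Keep in mind, though, that these two methods report whether this side has shut down its own input or output half (via shutdownInput()/shutdownOutput()); they do not signal the peer's close, so they complement rather than replace the read() == -1 check. A small sketch (assumes something is listening on localhost:12345, like the MyServer example above):

import java.net.Socket;

public class HalfOpenDemo {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 12345);
        s.shutdownOutput();                         // we will not write any more (sends a FIN)
        System.out.println("output shut down: " + s.isOutputShutdown()); // true
        System.out.println("input shut down:  " + s.isInputShutdown());  // false: we can still read
        int b = s.getInputStream().read();          // the remote close is still detected this way: -1
        System.out.println("read returned: " + b);
        s.close();
    }
}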