So I am quite new to sockets, and I have to create a server-client app for school.
I expect a client to request something from the server multiple times during its runtime.
I am uncertain whether I should make a new java.net.Socket for every request I get (the client opens a new socket every time and the Java ServerSocket accepts),
or use a single socket and keep it for the client's whole runtime.
That strongly depends on how frequently the socket is used.
E.g. if you know that the client will send a request to the server every 50 milliseconds, it would be easier to just keep the socket open.
But if you know the client will only request information from the server every 5 minutes, it's probably better to close the connection and create a new one when needed. The same applies if you don't know when the next request will come.
Creating a new socket on the server side is not very expensive, so it's probably better to just close the connection if it's not used very frequently.
An exception could be a special socket that needs authentication or other expensive setup on creation, but that's probably not the case in a school project.
So in general: it depends on the usage of the socket, but if you're unsure whether it's used very frequently or not, better to close it and open it again when needed.
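To illustrate the close-and-reopen approach, here is a minimal sketch; the host, port, and line-based protocol are placeholder assumptions, not part of your assignment:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class PerRequestClient {
    // Placeholder address; use your real server host and port.
    private static final String HOST = "localhost";
    private static final int PORT = 12345;

    // Opens a socket, sends one request, reads one reply, and closes again.
    static String request(String message) throws IOException {
        try (Socket socket = new Socket(HOST, PORT);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine(); // everything is closed automatically afterwards
        }
    }
}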
Related to this question:
Maximum number of socket in java
If you're worried about exceeding the maximum number of sockets that can be opened (I imagine this is highly unlikely in your case), you can create a workaround: use TCP to initially establish the connection, send the client a UID (unique identifier; a 64-bit unsigned long long type should be sufficient), and then close the TCP connection. Fill and maintain a structure (or class object, in your case) detailing the connection details (IP address, UID), and then wait for arriving packets sent via UDP (User Datagram Protocol, an alternative to TCP). If you decide to use UDP, be aware that you'll need to implement a means of reordering packets in order to reconstruct the byte stream (serialisation), and a mechanism for resending data packets if somehow they do not arrive (packet-loss recovery).
Sounds worse than it is. I'll repeat though, don't bother yourself with these intricacies if you're not worried about exceeding any limits.
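If you ever did want to try it, a very rough sketch of the server side of that handoff might look like the following. The port numbers, the UID scheme, and the single-client flow are all invented for illustration, and the reordering/loss-recovery logic is deliberately left out:

import java.io.DataOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UdpHandoffServer {
    public static void main(String[] args) throws Exception {
        Map<Long, String> clients = new ConcurrentHashMap<>(); // UID -> address

        // Phase 1: hand out a UID over TCP, then close the connection.
        try (ServerSocket ss = new ServerSocket(12345);
             Socket s = ss.accept()) {
            long uid = System.nanoTime(); // stand-in for a real UID generator
            clients.put(uid, s.getInetAddress().getHostAddress());
            new DataOutputStream(s.getOutputStream()).writeLong(uid);
        }

        // Phase 2: all further traffic arrives via UDP.
        try (DatagramSocket udp = new DatagramSocket(12346)) {
            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            udp.receive(packet); // look up the sender's UID, reorder, ack, ...
        }
    }
}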
I have TCP communication via a socket, with code like:
public void openConnection() throws Exception
{
    socket = new Socket();
    InetAddress iNet = InetAddress.getByName("server");
    InetSocketAddress sock = new InetSocketAddress(iNet, Integer.parseInt(port));
    socket.connect(sock, 0);
    out = new PrintWriter(socket.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
}
and a send method like:
synchronized void send(String message)
{
    try
    {
        out.println(message);
    }
    catch (Exception e)
    {
        throw new RuntimeException(this.getClass() + ": Error Sending Message: "
                + message, e);
    }
}
And I read from webpages that TCP/IP does not guarantee delivery of packets; it retries, but if the network is too busy, packets may be dropped ([link]):
Packets can be dropped when transferring data between systems for two key reasons:
Heavy network utilization and resulting congestion
Faulty network hardware or connectors
TCP is designed to be able to react when packets are dropped on a network. When a packet is successfully delivered to its destination, the destination system sends an acknowledgement message back to the source system. If this acknowledgement is not received within a certain interval, that may be either because the destination system never received the packet or because the packet containing the acknowledgement was itself lost. In either case, if the acknowledgement is not received by the source system in the given time, the source system assumes that the destination system never received the message and retransmits it. It is easy to see that if the performance of the network is poor, packets are lost in the first place, and the increased load from these retransmit messages is only increasing the load on the network further, meaning that more packets will be lost. This behaviour can result in very quickly creating a critical situation on the network.
Is there any way that I can detect whether a packet was received successfully by the destination or not? I am not sure that out.println(message); will throw any exception, as this is a non-blocking call. It will put the message in a buffer and return, letting TCP/IP do its work.
Any help?
TCP is designed to be able to react when packets are dropped on a network.
As your quote says, TCP is designed to react automatically to the events you mention in this text. As such, you don't have anything to do at this level, since this will be handled by the TCP implementation you're using (e.g. in the OS).
TCP has some features that will do some of the work for you, but you are right to wonder about their limitations (many people think of TCP as a guaranteed delivery protocol, without context).
There is an interesting discussion on the Linux Kernel Mailing List ("Re: Client receives TCP packets but does not ACK") about this.
In your use case, practically, this means that you should treat your TCP connection as a stream of data in each direction (the classic mistake is to assume that if you send n bytes from one end, you'll read n bytes in a single buffer read on the other end), and handle possible exceptions.
Handling java.io.IOExceptions properly (in particular their subclasses in java.net) will cover error cases at the level you describe: if you get one, have a retry strategy (depending on what the application and its user are meant to do). Rely on timeouts too (don't set a socket to block forever).
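A sketch of that idea; the 5-second timeouts and 3 attempts are arbitrary choices, not requirements:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Connect with explicit timeouts and a simple bounded retry, instead of
// blocking forever and hoping the network behaves.
Socket connectWithRetry(String host, int port) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            Socket socket = new Socket();
            socket.connect(new InetSocketAddress(host, port), 5000); // connect timeout
            socket.setSoTimeout(5000); // reads now fail with SocketTimeoutException
            return socket;
        } catch (IOException e) {
            last = e; // includes SocketTimeoutException; retry
        }
    }
    throw last;
}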
Application protocols may also be designed to send their own acknowledgement when receiving commands or requests.
This is a matter of assigning responsibilities to different layers. The TCP stack implementation will handle the packet loss problems you mention, and throw an error/exception if it can't fix it by itself. Its responsibility is the communication with the remote TCP stack. Since in most cases you want your application to talk to a remote application, there needs to be an extra acknowledgement on top of that. In general, the application protocol needs to be designed to handle these cases. (You can go a number of layers up in some cases, depending on which entity is meant to take responsibility to handle the requests/commands.)
TCP/IP does not drop the packet. The congestion control algorithms inside the TCP implementation take care of retransmission. Assuming that there is a steady stream of data being sent, the receiver will acknowledge which sequence numbers it received back to the sender. The sender can use the acknowledgements to infer which packets need to be resent. The sender holds packets until they have been acknowledged.
As an application, unless the TCP implementation provides mechanisms to receive notification of congestion, the best it can do is establish a timeout for when the transaction can complete. If the timeout occurs before the transaction completes, the application can declare the network to be too congested for the application to succeed.
Code what you need. If you need acknowledgements, implement them. If you want the sender to know the recipient got the information, then have the recipient send some kind of acknowledgement.
From an application standpoint, TCP provides a bi-directional byte stream. You can communicate whatever information you want over that by simply specifying streams of bytes that convey the information you need to communicate.
Don't try to make TCP do anything else. Don't try to "teach TCP" your protocol.
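For example, a minimal application-level acknowledgement on top of the byte stream could look like this; the "ACK" line is an invented convention, and any token both sides agree on works just as well:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Sends a message and blocks until the peer echoes an "ACK" line back,
// so the sender knows the application (not just the TCP stack) got it.
void sendWithAck(Socket socket, String message) throws IOException {
    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
    BufferedReader in = new BufferedReader(
            new InputStreamReader(socket.getInputStream()));
    out.println(message);
    String reply = in.readLine();
    if (!"ACK".equals(reply)) {
        throw new IOException("No acknowledgement for: " + message);
    }
}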
I'm trying to load test a Java server by opening a large number of socket connections to the server, authenticating, closing the connection, then repeating. My app runs great for a while, but eventually I get:
java.net.BindException: Address already in use: connect
According to documentation I read, the reason for this is that closed sockets still occupy the local address assigned to them for a period of time after close() was called. This is OS dependent but can be on the order of minutes. I tried calling setReuseAddress(true) on the socket with the hopes that its address would be reusable immediately after close() was called. Unfortunately this doesn't seem to be the case.
My code for socket creation is:
Socket socket = new Socket();
socket.setReuseAddress(true);
socket.connect(new InetSocketAddress(m_host, m_port));
But after a while I still get this error:
java.net.BindException: Address already in use: connect
Is there any other way to accomplish what I'm trying to do? I would like, for instance, to open 100 sockets, close them all, open 200 sockets, close them all, open 300, etc., up to a max of 2000 or so sockets.
Any help would be greatly appreciated!
You are exhausting the space of outbound ports by opening that many outbound sockets within the TIME_WAIT period of two minutes. The first question you should ask yourself is: does this represent a realistic load test at all? Is a real client really going to do that? If not, you just need to revise your testing methodology.
BTW, SO_LINGER is the number of seconds the application will wait during close() for data to be flushed. It is normally zero. The port will hang around for the TIME_WAIT interval anyway if this is the end that issued the close. This is not the same thing. It is possible to abuse the SO_LINGER option to patch the problem. However, that will also cause exceptional behaviour at the peer, and again, this is not the purpose of a test.
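For reference, the abuse looks like this (again, not recommended): a zero linger makes close() abort the connection with an RST instead of a normal FIN, so the local port never enters TIME_WAIT. The address is a placeholder:

import java.net.InetSocketAddress;
import java.net.Socket;

Socket socket = new Socket();
socket.setSoLinger(true, 0); // zero linger: close() sends RST, no TIME_WAIT
socket.connect(new InetSocketAddress("localhost", 12345));
// ... use the socket ...
socket.close(); // the peer sees a connection reset, not a graceful close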
Not using bind() but setReuseAddress(true) is just weird; I hope you understand the implications of setReuseAddress (and the point of it). 100-2000 is not a great number of sockets to open; however, the server you are attempting to connect to (since it looks like the same addr/port pair) may just drop them, with a normal backlog of 50.
Edit:
If you need to open multiple sockets quickly (ermm, port scan?), I'd very strongly recommend using NIO and connect()/finishConnect() + Selector. Opening 1000 sockets in the same thread is just plain slow.
I forgot: you may need finishConnect() either way in your code.
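A minimal sketch of that pattern; the target address and the count of 100 are placeholders:

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class MultiConnect {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        // Start many connects without blocking on any of them.
        for (int i = 0; i < 100; i++) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            ch.connect(new InetSocketAddress("localhost", 12345)); // returns immediately
            ch.register(selector, SelectionKey.OP_CONNECT);
        }
        // Complete each connection as the OS reports it ready.
        while (selector.select() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    key.interestOps(SelectionKey.OP_READ); // connected; now usable
                }
            }
            selector.selectedKeys().clear();
        }
    }
}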
I think you should plan on the port you want to use for the connection already being in use. By that I mean: try to connect using the given port. If the connect fails (or in your case throws an exception), try to open the connection using the next port number.
Try wrapping the connect statement in a try/catch.
Here's some pseudo-code that conveys what I think will work:
portNumber = x        // where x is the first port number you will try
numConnections = 200  // or however many connections you want to open
while (numConnections > 0) {
    try {
        connect(host, portNumber)
        numConnections--
    } catch () {}
    portNumber++
}
This code doesn't cover corner cases such as "what happens when all ports are in use?"
I'm using a ServerSocket on my server, and Sockets that use ObjectInput/ObjectOutputStreams to send serializable objects over the network connection. I'm developing an essentially more financial version of Monopoly, and thus packets being sent and confirmed as sent/received is required. Do I need to implement my own packet-loss watcher, or is that already taken care of with (Server)Sockets?
I'm primarily asking about losing packets during network blips or whatnot, not a full connection error. E.g. a sibling moves a lead plate between my router and my computer's Wi-Fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically: yes. There is no way of giving a 100% theoretical guarantee that what is sent on the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol), this has already been taken care of; you won't lose any packets. (If you're using UDP on the other hand (User Datagram Protocol), it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
Just looked briefly at your code, and it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well be the case that it looks like a packet has been dropped although it is simply not handled, due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to solve the synchronization, I think, is to keep the network loop very small: read, put on a synchronized queue, repeat; then pick up the received packets from the queue whenever you're sure no other thread will intervene.
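A sketch of that pattern; the stream and the use of plain Object are assumptions based on your ObjectStream setup:

import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class NetworkReader implements Runnable {
    private final ObjectInputStream in;   // stream from the accepted socket
    final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();

    NetworkReader(ObjectInputStream in) { this.in = in; }

    // The network thread does nothing but read and enqueue; the game thread
    // drains the queue, so no packet is "lost" to a race between threads.
    public void run() {
        try {
            while (true) {
                inbox.put(in.readObject()); // blocks until an object arrives
            }
        } catch (Exception e) {
            // stream closed or thread interrupted; the game thread can detect this
        }
    }
}

The game thread then drains inbox with take() or poll() at a point where it knows no other thread will touch the game state.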
As a follow-up to a recent question, I wonder why it is impossible in Java, without attempting a read/write on a TCP socket, to detect that the socket has been gracefully closed by the peer. This seems to be the case regardless of whether one uses the pre-NIO Socket or the NIO SocketChannel.
When a peer gracefully closes a TCP connection, the TCP stacks on both sides of the connection know about the fact. The server-side (the one that initiates the shutdown) ends up in state FIN_WAIT2, whereas the client-side (the one that does not explicitly respond to the shutdown) ends up in state CLOSE_WAIT. Why isn't there a method in Socket or SocketChannel that can query the TCP stack to see whether the underlying TCP connection has been terminated? Is it that the TCP stack doesn't provide such status information? Or is it a design decision to avoid a costly call into the kernel?
With the help of the users who have already posted some answers to this question, I think I see where the issue might be coming from. The side that doesn't explicitly close the connection ends up in TCP state CLOSE_WAIT, meaning that the connection is in the process of shutting down and waits for that side to issue its own CLOSE operation. I suppose it's fair enough that isConnected returns true and isClosed returns false, but why isn't there something like isClosing?
Below are the test classes that use pre-NIO sockets. But identical results are obtained using NIO.
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        final ServerSocket ss = new ServerSocket(12345);
        final Socket cs = ss.accept();
        System.out.println("Accepted connection");
        Thread.sleep(5000);
        cs.close();
        System.out.println("Closed connection");
        ss.close();
        Thread.sleep(100000);
    }
}
import java.net.Socket;

public class MyClient {
    public static void main(String[] args) throws Exception {
        final Socket s = new Socket("localhost", 12345);
        for (int i = 0; i < 10; i++) {
            System.out.println("connected: " + s.isConnected() +
                    ", closed: " + s.isClosed());
            Thread.sleep(1000);
        }
        Thread.sleep(100000);
    }
}
When the test client connects to the test server the output remains unchanged even after the server initiates the shutdown of the connection:
connected: true, closed: false
connected: true, closed: false
...
I have been using Sockets often, mostly with Selectors, and though I'm not a network OSI expert, from my understanding, calling shutdownOutput() on a Socket actually sends something on the network (a FIN) that wakes up my Selector on the other side (same behaviour in C). So there is a form of detection here: actually detecting a read operation that will fail when you try it.
In the code you give, closing the socket will shut down both input and output streams, without the possibility of reading the data that might still be available, therefore losing it. The Java Socket.close() method performs a "graceful" disconnection (the opposite of what I initially thought) in that the data left in the output stream will be sent, followed by a FIN to signal its close. The FIN will be ACK'd by the other side, as any regular packet would be [1].
If you need to wait for the other side to close its socket, you need to wait for its FIN. And to achieve that, you have to detect Socket.getInputStream().read() returning -1, which means you should not close your own socket, as that would close its InputStream.
From what I did in C, and now in Java, achieving such a synchronized close should be done like this:
Shut down the socket output (this sends a FIN to the other end; it is the last thing this socket will ever send). Input is still open, so you can read() and detect the remote close().
Read the socket InputStream until we receive the reply-FIN from the other end (as it will detect the FIN, it will go through the same graceful disconnection process). This is important on some OSes, as they don't actually close the socket as long as one of its buffers still contains data. They're called "ghost" sockets and use up descriptor numbers in the OS (that might not be an issue anymore with modern OSes).
Close the socket (by either calling Socket.close() or closing its InputStream or OutputStream).
As shown in the following Java snippet:
public void synchronizedClose(Socket sok) throws IOException {
    InputStream is = sok.getInputStream();
    sok.shutdownOutput(); // Sends the 'FIN' on the network
    while (is.read() != -1) ; // read() returns -1 once the peer's 'FIN' arrives
    sok.close(); // or is.close(); now we can close the Socket
}
Of course both sides have to use the same way of closing, or the sending part might always be sending enough data to keep the while loop busy (e.g. if the sending part only ever sends data and never reads to detect connection termination; that is clumsy, but you might not have control over it).
As @WarrenDew pointed out in his comment, discarding the data in the program (application layer) induces a non-graceful disconnection at the application layer: though all data were received at the TCP layer (the while loop), they are discarded.
[1]: From "Fundamental Networking in Java": see fig. 3.3, p. 45, and the whole §3.7, pp. 43-48.
I think this is more of a socket programming question. Java is just following the socket programming tradition.
From Wikipedia:
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
Once the handshake is done, TCP does not make any distinction between the two endpoints (client and server). The terms "client" and "server" are mostly for convenience. So, the "server" could be sending data and the "client" could be sending some other data to it simultaneously.
The term "close" is also misleading. There's only the FIN declaration, which means "I am not going to send you any more stuff." But this does not mean that there are no packets in flight, or that the other side has no more to say. If you implemented snail mail as the data link layer, or if your packets traveled different routes, it's possible that the receiver receives packets in the wrong order. TCP knows how to fix this for you.
Also, you as a program may not have time to keep checking what's in the buffer. So, at your convenience, you can check what's in the buffer. All in all, the current socket implementation is not so bad. If there actually were an isPeerClosed(), that's an extra call you'd have to make every time you want to call read.
The underlying sockets API doesn't have such a notification.
The sending TCP stack won't send the FIN bit until the last packet anyway, so a lot of data could be buffered between the moment the sending application logically closes its socket and the moment that data is even sent. Likewise, data that's buffered because the network is quicker than the receiving application (I don't know, maybe you're relaying it over a slower connection) could be significant to the receiver, and you wouldn't want the receiving application to discard it just because the FIN bit has been received by the stack.
Since none of the answers so far fully answer the question, I'm summarizing my current understanding of the issue.
When a TCP connection is established and one peer calls close() or shutdownOutput() on its socket, the socket on the other side of the connection transitions into CLOSE_WAIT state. In principle, it's possible to find out from the TCP stack whether a socket is in CLOSE_WAIT state without calling read/recv (e.g., getsockopt() on Linux: http://www.developerweb.net/forum/showthread.php?t=4395), but that's not portable.
Java's Socket class seems to be designed to provide an abstraction comparable to a BSD TCP socket, probably because this is the level of abstraction people are used to when programming TCP/IP applications. BSD sockets are a generalization supporting sockets other than just INET (e.g., TCP) ones, so they don't provide a portable way of finding out the TCP state of a socket.
There's no method like isCloseWait() because people used to programming TCP applications at the level of abstraction offered by BSD sockets don't expect Java to provide any extra methods.
Detecting whether the remote side of a (TCP) socket connection has closed can be done by calling java.net.Socket.sendUrgentData(int) and catching the IOException it throws if the remote side is down. This has been tested between Java-Java and Java-C.
This avoids the problem of designing the communication protocol to use some sort of pinging mechanism. By disabling OOBInline on a socket (setOOBInline(false)), any OOB data received is silently discarded, but OOB data can still be sent. If the remote side is closed, a connection reset is attempted, fails, and causes some IOException to be thrown.
If you actually use OOB data in your protocol, then your mileage may vary.
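A sketch of the probe; the method name is mine, and it relies on OOBInline being disabled on the receiver (which is the default):

import java.io.IOException;
import java.net.Socket;

// Probes the connection by sending one out-of-band byte. The receiver
// silently discards it unless it has enabled setOOBInline(true).
boolean isPeerReachable(Socket socket) {
    try {
        socket.sendUrgentData(0xFF); // throws IOException if the peer is gone
        return true;
    } catch (IOException e) {
        return false;
    }
}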
The Java IO stack definitely sends a FIN when it gets destructed on an abrupt teardown. It just makes no sense that you can't detect this, because most clients only send the FIN if they are shutting down the connection.
...another reason I am really beginning to hate the NIO Java classes. It seems like everything is a little half-assed.
It's an interesting topic. I've dug through the Java code just now to check. From my findings, there are two distinct problems: the first is the TCP RFC itself, which allows a remotely closed socket to keep transmitting data in half-duplex, so a remotely closed socket is still half open. As per the RFC, RST doesn't close the connection; you need to send an explicit ABORT command. So Java allows sending data through a half-closed socket.
(There are two methods for reading the close status at each endpoint.)
The other problem is that the implementations say this behavior is optional. As Java strives to be portable, they implemented the best common feature. Maintaining a map of (OS, half-duplex implementation) would have been a problem, I guess.
This is a flaw of Java's (and all others' that I've looked at) OO socket classes -- no access to the select system call.
Correct answer in C:
#include <sys/select.h>

struct timeval tp;
fd_set in;
fd_set out;
fd_set err;

FD_ZERO(&in);
FD_ZERO(&out);
FD_ZERO(&err);
FD_SET(socket_handle, &err);

tp.tv_sec = 0; /* or however long you want to wait */
tp.tv_usec = 0;

select(socket_handle + 1, &in, &out, &err, &tp);
if (FD_ISSET(socket_handle, &err)) {
    /* handle closed socket */
}
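In Java, NIO's Selector is the closest analogue to select(): it wakes up when the peer's FIN arrives, though you still learn about the close by attempting the read. A sketch, with the surrounding connection setup assumed:

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// The Selector fires OP_READ when the FIN arrives; the read() then
// returns -1, which is how the peer's close becomes visible.
void watchForClose(SocketChannel channel) throws Exception {
    channel.configureBlocking(false);
    Selector selector = Selector.open();
    channel.register(selector, SelectionKey.OP_READ);
    ByteBuffer buf = ByteBuffer.allocate(1024);
    while (selector.select() > 0) {
        selector.selectedKeys().clear();
        if (channel.read(buf) == -1) { // -1 means the peer closed gracefully
            channel.close();
            return;
        }
        buf.clear(); // process or discard the data that was read
    }
}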
Here is a lame workaround: use SSL ;) SSL does a close handshake on teardown, so you are notified of the socket being closed (most implementations seem to do a proper handshake teardown, that is).
The reason for this behaviour (which is not Java-specific) is that you don't get any status information from the TCP stack. After all, a socket is just another file handle, and you can't find out if there's actual data to read from it without actually trying (select(2) won't help there; it only signals that you can try without blocking).
For more information see the Unix socket FAQ.
Only writes require that packets be exchanged, which allows the loss of the connection to be determined. A common workaround is to use the SO_KEEPALIVE option.
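In Java that is a per-socket flag (the probe interval itself is controlled by the OS, not by Java); the address below is a placeholder:

import java.net.Socket;

Socket socket = new Socket("localhost", 12345);
socket.setKeepAlive(true); // OS-level TCP keep-alive probes on an idle connection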
When it comes to dealing with half-open Java sockets, one might want to have a look at
isInputShutdown() and isOutputShutdown().