I have TCP communication via socket code like:
public void openConnection() throws Exception
{
    socket = new Socket();
    InetAddress iNet = InetAddress.getByName("server");
    InetSocketAddress sock = new InetSocketAddress(iNet, Integer.parseInt(port));
    socket.connect(sock, 0);
    out = new PrintWriter(socket.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
}
and a send method like:
synchronized void send(String message)
{
    try
    {
        out.println(message);
    }
    catch (Exception e)
    {
        throw new RuntimeException(this.getClass() + ": Error Sending Message: "
            + message, e);
    }
}
And I read on web pages that TCP/IP does not guarantee delivery of a packet; it retries, but if the network is too busy, the packet may be dropped ([link]).
Packets can be dropped when transferring data between systems for two key reasons:
Heavy network utilization and resulting congestion
Faulty network hardware or connectors
TCP is designed to be able to react when packets are dropped on a network. When a packet is successfully delivered to its destination, the destination system sends an acknowledgement message back to the source system. If this acknowledgement is not received within a certain interval, that may be either because the destination system never received the packet or because the packet containing the acknowledgement was itself lost. In either case, if the acknowledgement is not received by the source system in the given time, the source system assumes that the destination system never received the message and retransmits it. It is easy to see that if the performance of the network is poor, packets are lost in the first place, and the increased load from these retransmit messages is only increasing the load on the network further, meaning that more packets will be lost. This behaviour can result in very quickly creating a critical situation on the network.
Is there any way that I can detect whether the packet was received successfully by the destination? I am not sure that out.println(message); will throw any exception, as this is a non-blocking call: it will put the message in a buffer and return, letting TCP/IP do its work.
Any help?
TCP is designed to be able to react when packets are dropped on a network.
As your quote says, TCP is designed to react automatically to the events you mention in this text. As such, you don't have anything to do at this level, since this will be handled by the TCP implementation you're using (e.g. in the OS).
TCP has some features that will do some of the work for you, but you are right to wonder about their limitations (many people think of TCP as a guaranteed delivery protocol, without context).
There is an interesting discussion on the Linux Kernel Mailing List ("Re: Client receives TCP packets but does not ACK") about this.
In your use case, practically, this means that you should treat your TCP connection as a stream of data, in each direction (the classic mistake is to assume that if you send n bytes from one end, you'll read n bytes in a single buffer read on the other end), and handle possible exceptions.
Handling java.io.IOExceptions properly (in particular subclasses in java.net) will cover error cases at the level you describe: if you get one, have a retry strategy (depending on what the application and its user are meant to do). Rely on timeouts too (don't set a socket as blocking forever).
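For illustration only, a rough sketch of that combination (the host, port, timeout values and retry count are made-up placeholders, and the usual java.io/java.net imports are assumed):

// Sketch: connect with timeouts, send one line, and retry on IOException with backoff.
void sendWithRetry(String host, int port, String message) throws IOException, InterruptedException {
    final int maxRetries = 3;
    IOException last = null;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 5000);  // bounded connect, not 0 (= wait forever)
            s.setSoTimeout(10000);                               // bounded reads, never block forever
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            out.println(message);
            if (out.checkError()) {                              // PrintWriter swallows IOExceptions
                throw new IOException("write failed");
            }
            return;                                              // accepted by the local TCP stack
        } catch (IOException e) {
            last = e;
            Thread.sleep(1000L * attempt);                       // simple backoff before retrying
        }
    }
    throw last;
}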
Application protocols may also be designed to send their own acknowledgement when receiving commands or requests.
This is a matter of assigning responsibilities to different layers. The TCP stack implementation will handle the packet loss problems you mention, and throw an error/exception if it can't fix it by itself. Its responsibility is the communication with the remote TCP stack. Since in most cases you want your application to talk to a remote application, there needs to be an extra acknowledgement on top of that. In general, the application protocol needs to be designed to handle these cases. (You can go a number of layers up in some cases, depending on which entity is meant to take responsibility to handle the requests/commands.)
TCP/IP does not drop the packet. The congestion control algorithms inside the TCP implementation take care of retransmission. Assuming that there is a steady stream of data being sent, the receiver will acknowledge which sequence numbers it received back to the sender. The sender can use the acknowledgements to infer which packets need to be resent. The sender holds packets until they have been acknowledged.
As an application, unless the TCP implementation provides mechanisms to receive notification of congestion, the best it can do is establish a timeout for when the transaction can complete. If the timeout occurs before the transaction completes, the application can declare the network to be too congested for the application to succeed.
Code what you need. If you need acknowledgements, implement them. If you want the sender to know the recipient got the information, then have the recipient send some kind of acknowledgement.
From an application standpoint, TCP provides a bi-directional byte stream. You can communicate whatever information you want over that by simply specifying streams of bytes that convey the information you need to communicate.
Don't try to make TCP do anything else. Don't try to "teach TCP" your protocol.
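For example, a minimal sketch of such an application-level acknowledgement over a line-based protocol (the literal "ACK" reply and the 5-second timeout are assumptions for illustration; the recipient side would have to be written to send it):

// Sketch: require an application-level "ACK" line for every message we send.
boolean sendWithAck(Socket socket, String message) throws IOException {
    socket.setSoTimeout(5000);   // bound the wait for the acknowledgement
    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    out.println(message);
    String reply = in.readLine();        // null means the peer closed the connection
    return "ACK".equals(reply);
}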
Related
So I am quite new to sockets and I have to create a server-client app for school.
I expect a client to request something from the server multiple times during his runtime.
I am uncertain whether I should make a new java.net.Socket for every request I get (the client opens a new socket every time and the Java ServerSocket accepts)
or use a single socket and keep that during the client's runtime.
That strongly depends on how frequently the socket is used.
E.g. if you know that the client will send a request to the server every 50 milliseconds it would be easier to just keep the socket opened.
But if you know the client will only request information from the socket every 5 minutes it's probably better to close the connection and create a new one when needed. Same if you don't know when the next request will be created.
Creating a new Socket on the server side is not very expensive, so it's probably better to just close the connection if it's not used very frequently.
An exception could be a special socket that needs authentication or other expensive stuff on creation, but that's probably not the case in a school project.
So in general: it depends on the usage of the socket, but if you're unsure whether it will be used very frequently or not, better to close it and open it again when needed.
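For the infrequent case, a per-request connection can be as small as this sketch (host, port and the one-line-request/one-line-response protocol are assumptions):

// Sketch: open a fresh socket per request and let try-with-resources close it afterwards.
String request(String host, int port, String requestLine) throws IOException {
    try (Socket socket = new Socket(host, port);
         PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
         BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
        out.println(requestLine);
        return in.readLine();   // one response line per request in this toy protocol
    }
}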
Related to this question:
Maximum number of socket in java
If you're worried about exceeding the maximum number of sockets that can be opened (I imagine this is highly unlikely in your case) you can create a work-around where you use TCP to initially establish the connection, send the client a UID (Unique Identifier, a 64-bit unsigned long long type should be sufficient) and then close the TCP connection. Fill and maintain a structure (or Class object in your case) detailing the connection details (IP address, Unique Identifier code) and then wait on arriving packets sent via UDP (User Datagram Protocol, an alternative to TCP). If you decide to use UDP, be aware that you'll need to implement a means of reordering packets in order to reconstruct a byte-stream (serialisation) and a mechanism for resending data packets if somehow they do not arrive (packet loss recovery).
Sounds worse than it is. I'll repeat though, don't bother yourself with these intricacies if you're not worried about exceeding any limits.
I understand that a server socket channel is registered to listen for accept; when a connection is accepted, the channel is registered for read, and once read, it is registered for write. This is done by adding the relevant ops to the SelectionKey's interest set using the interestOps method.
However, when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here? Does this mean that the server will just not listen for any incoming requests to the channel belonging to this socket, and the source channel will be oblivious of this decision by the server and might keep on sending data to the server? Or will it somehow inform the channel source of this decision?
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping them if the interest ops for the channel they belong to have been "unset"?
However, when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here?
What literally happens is something like:
public void interestOps(int interestOps)
{
    this.interestOps = interestOps;
}
Does this mean that the server will just not listen for any incoming requests to the channel belonging to this socket
It means that the Selector won't trigger any OP_READ events if data arrives via the socket. It doesn't mean that data won't be received.
and the source channel will be oblivious of this decision by the server and might keep on sending data to the server?
If by 'source channel' you mean the peer, it is not advised in any way, unless the receive buffer fills up at the receiver.
Or will it somehow inform the channel source of this decision.
No.
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping them if the interest ops for the channel they belong to have been "unset"?
No.
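For completeness, a hedged sketch of how that interest toggling is typically used, e.g. to pause reads while a handler catches up and resume them later (the pause/resume reason is an assumption, not something from the question):

// Sketch: temporarily stop the Selector from reporting OP_READ for this key.
// Data still arrives and sits in the kernel receive buffer until it fills up.
void pauseReads(SelectionKey key) {
    key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
}

// Re-enable OP_READ and wake the selector so the change takes effect promptly.
void resumeReads(SelectionKey key) {
    key.interestOps(key.interestOps() | SelectionKey.OP_READ);
    key.selector().wakeup();
}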
I am working on a server/client application that allows multiple clients to be connected to the server at any given time.
For each client, the server sets up a ClientHandler object that has an input and output stream to the client connected at this socket. Through this connection, the client is free to send a number of messages to the server at any point throughout the running of the program, and the server will respond according to the message.
What I need to implement is a mechanism that sends, at certain times, messages to all currently-connected clients. I have done this by storing all the output streams to clients in an ArrayList<PrintWriter> that will allow the same message to be sent to all clients.
What I am struggling with is this:
When a message is received that is individual to the client, the client GUI is updated accordingly (only a select number of messages can be sent, so there are only a select number of possible responses from the server, dealt with by client-side if statements). However, when a message is received by the client that was sent to all clients, I would like the client to update the GUI quite differently.
Considering that both forms of input come from the same input stream, I can see this being difficult, and I anticipate that any methods that cause output using the PrintWriter will have to be made synchronized. However, is there a way to process the different inputs while using the same PrintWriter at all? Would this have to be done using further if statements, or could it be done using a separate thread on the client side that handles messages sent to all clients?
Thanks for any advice, if you think you can help then feel free to ask for parts of my existing code!
Mark
You are first of all lacking a protocol between your server and your clients!
Obviously the server can send two types of messages: "response" and "broadcast".
A rather simple approach is tagging your messages: e.g. prefix your messages with "R" if it is a response to a request and with "B" if it is an unattended broadcast message. (This all depends how communication between server and clients is intended to be performed.)
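As a hedged sketch of that tagging idea on the client side (handleResponse and handleBroadcast are hypothetical handlers, and the one-character "R"/"B" prefix is just the convention suggested above):

// Sketch: read lines from the server and dispatch on the one-character tag.
void readLoop(BufferedReader in) throws IOException {
    String line;
    while ((line = in.readLine()) != null) {
        char tag = line.charAt(0);
        String payload = line.substring(1);
        if (tag == 'R') {
            handleResponse(payload);    // reply to something this client asked for
        } else if (tag == 'B') {
            handleBroadcast(payload);   // unsolicited message sent to every client
        }
    }
}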
Whether your client needs different threads for coping with the messages is a completely different story. Having different threads is useful if the processing activity within your client would prevent timely reads of the socket. Then you might consider having an I/O thread that is doing communications and is dispatching the messages to different "handlers" (which could be other threads) for processing. (This I/O thread also can remove the tag, so that existing processing code need not learn about the lower protocol with the server.)
An analogous reasoning might apply to your server side. Depending on the dynamics of interactions and the processing of requests, you might use several threads. Then you should have one that is doing I/O with the clients and others that are doing the work (generating responses or broadcast messages).
When your clients connect to the server, your server creates a Socket for each of them; here it is Socket socket = ss.accept();. That socket variable represents that client.
Now if you just keep adding each client's output stream to an ArrayList in your accept loop, you will have a list of clients actively connected to your server, like this (the list is created once, before the loop):
clients = new ArrayList<DataOutputStream>();   // once, before the accept loop

Socket socket = ss.accept();                   // inside the loop, one per connecting client
os = new DataOutputStream(socket.getOutputStream());
clients.add(os);
Now that you have all the clients in that clients ArrayList, you can loop through it, or define with some protocol which client the data should be sent to after reading.
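With that list in place, a broadcast is then just a loop over it; here is a hedged sketch (the drop-on-failure policy and the java.util imports are assumptions):

// Sketch: send the same bytes to every connected client, dropping dead connections.
void broadcast(List<DataOutputStream> clients, byte[] data) {
    Iterator<DataOutputStream> it = clients.iterator();
    while (it.hasNext()) {
        DataOutputStream os = it.next();
        try {
            os.write(data);
            os.flush();
        } catch (IOException e) {
            it.remove();   // forget clients whose connection has gone away
        }
    }
}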
Background
My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
It's mission critical that every single piece of data is sent successfully, I'd rather send the data twice than not at all.
Problem
As a test I set up the app to send data with a timestamp every 5 seconds; this means that every 5 seconds a new line appears on the server.
If I kill the server I expect the lines to stop; they should now be written to memory instead.
When I enable the server again I should be able to confirm that no events are missing.
The problem, however, is that when I kill the server it takes about 20 seconds for IO operations to start failing, meaning that during those 20 seconds the app happily sends the events and removes them from memory, but they never reach the server and are lost forever.
I need a way to make certain that the data actually reaches the server.
This is possibly one of the more basic TCP questions, but nonetheless, I haven't found any solution to it.
Stuff I've tried
Setting Socket.setTcpNoDelay(true)
Removing all buffered writers and just using OutputStream directly
Flushing the stream after every send
Additional info
I cannot change how the server responds, meaning I can't tell the server to acknowledge the data (beyond the mechanics of TCP, that is); the server will just silently accept the data without sending anything back.
Snippet of code
Initialization of the class:
socket = new Socket(host, port);
socket.setTcpNoDelay(true);
Where data is sent:
while(!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        os.flush();
    }
    catch(IOException e) {
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}
return true;
Update 1
I'll say this again, I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact, and sure, in a perfect world the server would acknowledge the data, but that will simply not happen.
Update 2
The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
This however comes with a new set of problems.
Since I'm on a phone I have to assume that the user cannot send an infinite amount of bytes and I would like to keep all data traffic to a minimum if possible.
I'm not worried by the overhead of opening a new socket, those few bytes will not make a difference. What I am worried about however is that every time I connect to the server I have to send a short string identifying who I am.
The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.
One solution is to "flush" the data only every X bytes; the problem is that I have to choose X wisely: if it's too big, there will be too much duplicate data sent if the socket goes down, and if it's too small, the overhead is too big.
Final update
My final solution is to "flush" the socket by closing it every X bytes, and if anything didn't go well, those X bytes will be sent again.
This will possibly create some duplicate events on the server but that can be filtered there.
Dan's solution is the one I'd have suggested right after reading your question; he's got my up-vote.
Now can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it, or in fancy-design-pattern-talk provide a facade, or in plain talk put a proxy in front of your pain-in-the-behind server. Design a meaningful ack-based protocol, have the proxy keep enough data samples in memory to be able to detect and tolerate broken connections, etc. In short, have the phone app connect to a proxy residing somewhere on a "server-grade" machine using the "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data. The proxy is responsible for dealing with the server.
Just another idea.
Edit 0:
You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.
The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.
The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.
(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
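A hedged sketch of that buffering idea (the buffer size and the replay hook are assumptions; duplicates are possible by design and must be filtered on the server, and the usual java.util/java.io imports are assumed):

// Sketch: keep the last N messages and replay them after every reconnect.
private final Deque<String> recent = new ArrayDeque<>();
private static final int MAX_RECENT = 12;   // e.g. ~60 seconds of 5-second samples

synchronized void remember(String message) {
    recent.addLast(message);
    if (recent.size() > MAX_RECENT) {
        recent.removeFirst();                // forget the oldest entry
    }
}

synchronized void replayAfterReconnect(OutputStream os) throws IOException {
    for (String message : recent) {
        os.write(message.getBytes());
        os.flush();
    }
}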
See the accepted answer here: Java Sockets and Dropped Connections
socket.shutdownOutput();
wait for inputStream.read() to return -1, indicating the peer has also shut down its socket
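In code, that pattern looks roughly like this sketch (the read timeout value is an assumption; a peer that reads until end-of-stream before closing is also assumed):

// Sketch: half-close our side, then wait for the peer's EOF as an implicit acknowledgement.
socket.setSoTimeout(10000);           // don't wait forever for the peer to close
socket.shutdownOutput();              // sends FIN; the peer sees end-of-stream
int eof = socket.getInputStream().read();
if (eof == -1) {
    // peer has closed too; for a peer that reads to EOF before closing,
    // this implies it received everything we wrote
}
socket.close();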
Won't work: server cannot be modified
Can't your server acknowledge every message it receives with another packet? The client then won't remove the messages that the server has not yet acknowledged.
This will have performance implications. To avoid slowing down you can keep on sending messages before an acknowledgement is received, and acknowledge several messages in one return message.
If you send a message every 5 seconds, and disconnection is not detected by the network stack for 30 seconds, you'll have to store just 6 messages. If 6 sent messages are not acknowledged, you can consider the connection to be down. (I suppose that logic of reconnection and backlog sending is already implemented in your app.)
What about sending UDP datagrams on a separate UDP socket while making the remote host respond to each, and then when the remote host doesn't respond, you kill the TCP connection? It detects a link breakage quickly enough :)
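A hedged sketch of such a probe (it assumes something on the remote host actually echoes datagrams on an agreed port, which the poster's unmodifiable server would not do, so this is purely illustrative):

// Sketch: send a small UDP probe and treat a missing reply as a dead link.
boolean linkAlive(InetAddress host, int probePort) throws IOException {
    try (DatagramSocket probe = new DatagramSocket()) {
        probe.setSoTimeout(2000);                                 // wait at most 2 s for a reply
        byte[] ping = "ping".getBytes();
        probe.send(new DatagramPacket(ping, ping.length, host, probePort));
        byte[] buf = new byte[16];
        probe.receive(new DatagramPacket(buf, buf.length));       // throws SocketTimeoutException on silence
        return true;
    } catch (SocketTimeoutException e) {
        return false;                                             // no reply: assume the link is down
    }
}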
Use HTTP POST instead of a raw socket connection; then you can send a response to each POST. On the client side you only remove the data from memory if the response indicates success.
Sure, it's more overhead, but it gives you what you want 100% of the time.
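A hedged sketch with HttpURLConnection (the URL and the "200 means delivered" criterion are assumptions):

// Sketch: POST the data and only treat it as delivered when the server answers 200.
boolean postData(String urlString, byte[] data) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
    conn.setConnectTimeout(5000);
    conn.setReadTimeout(10000);
    conn.setDoOutput(true);
    conn.setRequestMethod("POST");
    try (OutputStream os = conn.getOutputStream()) {
        os.write(data);
    }
    int code = conn.getResponseCode();   // forces the request and reads the status line
    conn.disconnect();
    return code == 200;                  // only now is it safe to drop the data from memory
}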
Do those packets simply disappear? Or do they wait for the destination? Or do the packets come back and throw an exception?
And in Java, what is the difference between the byte[] buffer and the length in the DatagramPacket constructor?
DatagramPacket dp = new DatagramPacket(new byte[...], length);
From Wikipedia:
UDP is... Unreliable – When a message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission or timeout.
Even if the destination is online, there is no guarantee the UDP packet will arrive, arrive in the order sent, or not be fragmented. (I believe packets smaller than 532 bytes will not be fragmented.) It is possible to have all three: fragmented, out of order and incomplete, for the same packet.
The simplicity and stability of your network will determine how robust UDP packet delivery is, but you have to assume it is unreliable at least some of the time. All you can do is minimise the loss.
It is up to you to decide what to do if a packet is lost and how to detect it.
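One common approach is to number the datagrams yourself so the receiver can spot gaps; a hedged sketch (the 4-byte sequence header is an assumption, not part of UDP, and java.nio/java.net imports are assumed):

// Sketch: prefix each datagram with a sequence number so the receiver can detect losses.
int sequence = 0;

void sendNumbered(DatagramSocket socket, InetAddress host, int port, byte[] payload) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
    buf.putInt(sequence++);          // the receiver compares against the last number it saw
    buf.put(payload);
    socket.send(new DatagramPacket(buf.array(), buf.position(), host, port));
}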
If you want broadcast, reliable delivery of messages I suggest you look at JMS Topics or Queues, like ActiveMQ.
If using the UDP protocol, you can't guarantee that your packet is going to be received.
So the answer is, it will be sent, even if its destination is not online.
With the TCP protocol, it's guaranteed that the consumer will receive the packet. Even if he is offline, once he gets online, that packet will be received.
Do those packets simply disappear? Or do they wait for the destination? Or do the packets come back and throw an exception?
What happens depends on the nature of the "offline" status.
If the UDP message reaches the host, but the application is not listening, it will typically be silently discarded. It definitely won't be queued waiting for the application to listen. (That would be pointless, and potentially dangerous.)
If the UDP message cannot get to the host because the host itself is offline, the message will be silently discarded. (If the packets can reach the destination host's local network, then there is nothing apart from the host itself that can tell if the host actually received the packets.)
If the network doesn't know how to route the IP packets to the UDP server (and a few other scenarios), an ICMP "Destination Unreachable" packet may be sent to the sender, and that typically gets reported as a Java exception. However, this is not guaranteed. So the possible outcomes are:
the UDP packet is black-holed and the sender gets no indication, or
the UDP packet is black-holed and the sender gets a Java exception.
If the UDP packet is blocked by a firewall, then the behaviour is hard to predict. (Firewalls often "lie" in their responses to unwanted traffic.)
The only situation where you would expect there to be queuing of UDP traffic is when the network is working, the host is working and the application is listening. Limited queuing is then possible if the application is slow in accepting the packets; i.e. it takes too long between successive calls to receive on the datagram socket. But even there, the queueing / buffering is strictly limited, and beyond that the messages will be dropped.
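To see the exception outcome described above in practice, here is a hedged sketch: connecting the DatagramSocket lets an ICMP error surface as a PortUnreachableException on a later send or receive (the destination host and port are placeholders, and whether the error is delivered at all depends on the network):

// Sketch: with a connected DatagramSocket, an ICMP "destination/port unreachable"
// can be reported as PortUnreachableException; a black hole just times out instead.
void probe(String host, int port) throws IOException {
    try (DatagramSocket socket = new DatagramSocket()) {
        socket.connect(InetAddress.getByName(host), port);       // placeholder destination
        socket.setSoTimeout(2000);
        byte[] data = "hello".getBytes();
        socket.send(new DatagramPacket(data, data.length));
        socket.receive(new DatagramPacket(new byte[512], 512));  // may throw either exception below
    } catch (PortUnreachableException e) {
        // the "Java exception" outcome: an ICMP error made it back to us
    } catch (SocketTimeoutException e) {
        // the black-hole outcome: no reply and no error
    }
}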