Looking at the code:
private static void send(final Socket socket, final String data) throws IOException {
    final OutputStream os = socket.getOutputStream();
    final DataOutputStream dos = new DataOutputStream(os);
    dos.writeUTF(data);
    dos.flush();
}
can I be sure that calling this method either throws an IOException (which means I'd better close the socket), or, if no exception is thrown, that the data I send is guaranteed to be fully sent? Are there any cases where, when I read the data at the other endpoint, the string I get is incomplete and yet no exception is thrown?
There is a big difference between sent and received. You can send data from the application successfully; however, it then passes to:
the OS on your machine
the network adapter
the switch(es) on the network
the network adapter on the remote machine
the OS on the remote machine
the application buffer on the remote machine
whatever the application does with it.
Any of these stages can fail and your sender will be none the wiser.
If you want to know the application has received and processed the data successfully, it must send you back a message saying this has happened. When you receive this, then you know it was received.
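As a minimal sketch of what such an application-level acknowledgement could look like, assuming (hypothetically) that the peer replies with a single "OK" line once it has processed the data; TCP itself gives you nothing like this for free:

private static void sendAndConfirm(final Socket socket, final String data) throws IOException {
    final DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    final BufferedReader in = new BufferedReader(
            new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
    out.writeUTF(data);
    out.flush();
    final String reply = in.readLine();   // blocks until the peer answers or the connection dies
    if (!"OK".equals(reply)) {
        throw new IOException("Peer did not confirm receipt: " + reply);
    }
}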
Yes, several things may happen. First of all, keep in mind that write returns really quickly, so don't assume much error checking (has all my data been ACKed?) is performed.
Door number 1
You write and flush your data. TCP tries as hard as it can to deliver it, which means it might perform retransmits and such. Of course, your send doesn't get stuck for such a long period (in some cases TCP tries for 5-10 minutes before it nukes the connection). Thus, you will never know if the other side actually got your message; you will only get an error on a subsequent operation on the socket.
Door number 2
You write and flush your data. Because of MTU nastiness, and because the string is long, it is sent in multiple packets. So your peer may read part of it and present it to the user before getting it all.
So imagine you send: "Hello darkness my old friend, I've come to talk with you again". The other side might get "Hello darkness m". However, if it performs subsequent reads, it will get the whole string. The far side's TCP has actually received everything and has ACKed everything, but the user application has failed to read the data in order to take it out of TCP's hands.
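For completeness, a minimal sketch of the matching read side for the writeUTF call in the question: readUTF blocks until the whole length-prefixed string has arrived (or throws), so a short read never shows up as a truncated string at this level:

final DataInputStream in = new DataInputStream(socket.getInputStream());
final String message = in.readUTF();   // either the complete string or an IOException/EOFException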
Related
We have a system where there are 2 applications. One of these is a legacy application, for which we can't do any code changes. This application is sending messages to second application which is written in java. In our java code, we have set input stream buffer size equal to 1 MB as follows:
Socket eventSocket = new Socket();
eventSocket.setSendBufferSize(1024 * 1024);
Now the legacy application is sending messages of variable size. Most of the messages are smaller than 1 MB, but sometimes it sends messages as large as 8 MB. Many times these messages are read successfully by the java application, but in some cases the following read operation returns -1:
read = stream.read(b, off, len - off); // here stream is an InputStream object
As per the Java API documentation, the InputStream read method returns -1 if there is no more data because the end of the stream has been reached.
But this is erroneous behavior. We did a snoop test using Wireshark to verify the exact messages exchanged between these two applications and found that the java application sent a zero window message a few seconds before the input stream read method returned -1. At the time this java api method returned -1, the java application was sending a ZeroWindowProbeAck message to the legacy application.
How should we handle this issue?
As per https://wiki.wireshark.org/TCP%20ZeroWindow, zero window has the following definition:
What does TCP Zero Window mean?
Zero Window is something to investigate.
TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.
This means that a client is not able to receive further information at the moment, and the TCP transmission is halted until it can process the information in its receive buffer.
TCP Window size is the amount of information that a machine can receive during a TCP session and still be able to process the data. Think of it like a TCP receive buffer. When a machine initiates a TCP connection to a server, it will let the server know how much data it can receive by the Window Size.
In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins sending data, the client will decrement it's Window Size as this buffer fills. At the same time, the client is processing the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames, the client informs the server of how much room is in this buffer. If the TCP Window Size goes down to 0, the client will not be able to receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero Window" in Expert View.
Troubleshooting a Zero Window
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host. It could be that the machine is running too many processes at that moment, and its processor is maxed. Or it could be that there is an error in the TCP receiver, like a Windows registry misconfiguration. Try to determine what the client was doing when the TCP Zero Window happened.
Source: flukenetworks.com
Handling input-stream overflow (zero window) in Java
There is no such thing as 'input-stream overflow' in Java, and you can't handle zero window in Java either, except by reading from the network more quickly. Your title already doesn't make sense.
We have done snoop test using wireshark to verify the exact messages that are exchanged between these two applications and found that java application has sent zero window message few seconds before the time when input stream read method has returned -1 value.
Neither Java nor the application send those messages. The operating system does.
The input stream of a socket returns -1 if and only if a FIN has been received from the peer, and that in turn occurs if and only if the peer has closed the connection or exited (Unix). It doesn't have anything to do with TCP windowing.
At the time when this java api method has returned -1, java application was sending ZeroWindowProbeAck message to the legacy application.
No it wasn't. The operating system was, and it wasn't 'at the time', it was 'a few seconds before', according to your own words. At the time when this Java method returned -1, it had just received a FIN from the peer. Have a look at your sniff log. There is no problem here to explain.
As per [whatever], zero window has the following definition
Wireshark does not get to define TCP. TCP is defined in IETF RFCs. You can't cite non-normative sources as definitions.
TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.
For any amount of time.
This means that a client is not able to receive further information at the moment, and the TCP transmission is halted until it can process the information in its receive buffer.
It means that the peer is not able to receive. It has nothing to do with the client or the server specifically.
TCP Window size is the amount of information that a machine can receive during a TCP session
No it isn't. It is the amount of data the receiver is currently able to receive. It is therefore also the amount of data the sender is presently allowed to send. It has nothing to do with the session whatsoever.
and still be able to process the data.
Irrelevant.
Think of it like a TCP receive buffer.
It is a TCP receive buffer.
When a machine initiates a TCP connection to a server, it will let the server know how much data it can receive by the Window Size.
Correct. And vice versa. Continuously, not just at the start of the session.
In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins sending data, the client will decrement it's Window Size as this buffer fills.
It has nothing to do with clients and servers. It operates in both directions.
At the same time, the client is processing the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames,
Segments
the client informs the server of how much room is in this buffer.
The receiver informs the sender.
If the TCP Window Size goes down to 0, the client
The peer
will not be able to receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero Window" in Expert View.
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host.
For one reason only. Its socket receive buffer is full. Period.
It could be that the machine is running too many processes at that moment
Rubbish.
Or it could be that there is an error in the TCP receiver, like a Windows registry misconfiguration.
Rubbish. The receiver is reading more slowly than the sender is sending. Period. It is a normal condition that arises frequently during any TCP session.
Try to determine what the client was doing when the TCP Zero Window happened.
That's easy. Not reading from the network.
Your source is drivel, and your problem is imaginary.
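For what it's worth, a minimal sketch of how -1 is normally handled, assuming (as in your own snippet) that the expected length len is known in advance; -1 means the peer closed the connection, so retrying the read is pointless:

int off = 0;
while (off < len) {
    final int read = stream.read(b, off, len - off);
    if (read == -1) {
        // The peer has closed the connection (FIN received): this is not a buffer overflow.
        throw new EOFException("Peer closed connection after " + off + " of " + len + " bytes");
    }
    off += read;
}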
We have created a solution where we wait for some time for the input stream to clear after this overflow problem occurs. We made the following code changes:
int execRetries = 0;
while (true)
{
    read = stream.read(b, off, len - off);
    if (read == -1)
    {
        if (execRetries++ < MAX_EXEC_RETRIES_AFTER_IS_OVERFLOW) {
            try {
                Log.error("Inputstream buffer overflow occurred. Retry no: " + execRetries);
                Thread.sleep(WAIT_TIME_AFTER_IS_OVERFLOW);
            } catch (InterruptedException e) {
                Log.error(e.getMessage(), e);
            }
        }
        else {
            throw new Exception("End of file on input stream");
        }
    }
    else if (execRetries != 0) {
        Log.info("Inputstream buffer overflow problem resolved after retry no: " + execRetries);
        execRetries = 0;
    }
    .....
}
The solution has been deployed to the test server. We are waiting to verify whether it works.
I have a Netty app that accepts HTTP connections and streams intermittent data back to the client while keeping the connection open until the client closes it. I can get the app to work, except that the send buffer isn't pushed to the client frequently enough (writes are often merged, which causes incomplete data receipt on the other end until the next buffer is pushed, which may be a long time coming). I'd like to know if there's a way for me to write into the send buffer and force a flush to push a complete chunk of data to the client without having to close the socket.
I have looked at the bootstrap properties tcpNoDelay, writeBufferHighWaterMark, and writeBufferLowWaterMark (all with and without "child.") to no effect.
Any suggestions? Thanks!
Just in case it's not clear: Netty does not have a flush() operation. It just writes as soon as possible.
I think that you can add a BufferedWriteHandler to the ChannelPipeline in order to emulate buffered write operation.
BufferedWriteHandler web doc
So long as you can maintain or obtain a reference to the pipelined instance of the BufferedWriteHandler,
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    BufferedWriteHandler bufferedWriter =
            (BufferedWriteHandler) e.getChannel().getPipeline().get("buffer");
    // ...
}
then you can programmatically flush as you like:
bufferedWriter.flush();
I think I answered my own question and it wasn't due to the socket buffer not sending the data. I was using "curl" to test the response and curl has an internal buffer that was preventing the data from being printed to the screen. This can be disabled with "-N." Entirely my own fault, but still took a while to dig through the code and trace it back to my client.
Background
My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
It's mission critical that every single piece of data is sent successfully, I'd rather send the data twice than not at all.
Problem
As a test I set up the app to send data with a timestamp every 5 seconds; this means that every 5 seconds a new line appears on the server.
If I kill the server I expect the lines to stop, they should now be written to memory instead.
When I enable the server again I should be able to confirm that no events are missing.
The problem however is that when I kill the server it takes about 20 seconds for IO operations to start failing meaning that during those 20 seconds the app happily sends the events and removes them from memory but they never reach the server and are lost forever.
I need a way to make certain that the data actually reaches the server.
This is possibly one of the more basic TCP questions, but nonetheless I haven't found any solution to it.
Stuff I've tried
Setting Socket.setTcpNoDelay(true)
Removing all buffered writers and just using OutputStream directly
Flushing the stream after every send
Additional info
I cannot change how the server responds, meaning I can't make the server acknowledge the data (beyond the mechanics of TCP, that is); the server will just silently accept the data without sending anything back.
Snippet of code
Initialization of the class:
socket = new Socket(host, port);
socket.setTcpNoDelay(true);
Where data is sent:
while (!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        os.flush();
    }
    catch (IOException e) {
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}
return true;
Update 1
I'll say this again, I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact, and sure, in a perfect world the server would acknowledge the data, but that simply will not happen.
Update 2
The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
This however comes with a new set of problems.
Since I'm on a phone, I have to assume that the user cannot send an infinite number of bytes, and I would like to keep data traffic to a minimum if possible.
I'm not worried by the overhead of opening a new socket, those few bytes will not make a difference. What I am worried about however is that every time I connect to the server I have to send a short string identifying who I am.
The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.
One solution is to only "flush" the data every X bytes; the problem is that I have to choose X wisely: if it's too big, too much duplicate data will be sent when the socket goes down, and if it's too small, the overhead is too big.
Final update
My final solution is to "flush" the socket by closing it every X bytes, and if all didn't go well, those X bytes will be sent again.
This will possibly create some duplicate events on the server but that can be filtered there.
Dan's solution is the one I'd suggest right after reading your question; he's got my up-vote.
Now can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it, or in fancy-design-pattern-talk provide a facade, or in plain-talk put a proxy in front of your pain-in-the-behind server. Design meaningful ack-based protocol, have the proxy keep enough data samples in memory to be able to detect and tolerate broken connections, etc. etc. In short, have the phone app connect to a proxy residing somewhere on a "server-grade" machine using "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data. The proxy is responsible for dealing with the server.
Just another idea.
Edit 0:
You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.
The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.
The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.
(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
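A minimal sketch of that circular-buffer idea, assuming duplicates really are acceptable on the server (names and the window size are made up for illustration):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Deque;

// Keep roughly the last minute of samples (12 samples at one every 5 seconds).
private final Deque<String> recent = new ArrayDeque<>();
private static final int WINDOW = 12;

void remember(final String sample) {
    recent.addLast(sample);
    if (recent.size() > WINDOW) {
        recent.removeFirst();        // the oldest sample falls out of the window
    }
}

void replayOnReconnect(final OutputStream out) throws IOException {
    for (final String sample : recent) {
        out.write(sample.getBytes(StandardCharsets.UTF_8));   // the server filters duplicates
    }
    out.flush();
}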
See the accepted answer here: Java Sockets and Dropped Connections
socket.shutdownOutput();
wait for inputStream.read() to return -1, indicating the peer has also shut down its socket
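A minimal sketch of that sequence, assuming the server closes its side once it has read everything (which is what makes the final read() == -1 meaningful):

socket.shutdownOutput();                     // sends a FIN; the socket can still read
final InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // discard anything the peer still sends; -1 means the peer has closed its side too
}
socket.close();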
Won't work: server cannot be modified
Can't your server acknowledge every message it receives with another packet? Then the client wouldn't remove the messages that the server has not yet acknowledged.
This will have performance implications. To avoid slowing down you can keep on sending messages before an acknowledgement is received, and acknowledge several messages in one return message.
If you send a message every 5 seconds, and disconnection is not detected by the network stack for 30 seconds, you'll have to store just 6 messages. If 6 sent messages are not acknowledged, you can consider the connection to be down. (I suppose that logic of reconnection and backlog sending is already implemented in your app.)
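If the server could be changed, a minimal sketch of the client side of that scheme might look like this (the sequence-number protocol is entirely hypothetical; uses java.util.ArrayDeque and java.nio.charset.StandardCharsets):

private final Deque<String> unacked = new ArrayDeque<>();
private long sendSeq = 0;    // sequence number given to the next outgoing message
private long ackSeq = 0;     // sequence number of the oldest unacknowledged message

void send(final OutputStream out, final String msg) throws IOException {
    unacked.addLast(msg);                     // keep it until the server confirms it
    out.write((sendSeq++ + " " + msg + "\n").getBytes(StandardCharsets.UTF_8));
    out.flush();
}

void onAck(final long lastProcessed) {        // server says it has processed up to lastProcessed
    while (!unacked.isEmpty() && ackSeq <= lastProcessed) {
        unacked.removeFirst();                // safe to forget; the server has it
        ackSeq++;
    }
}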
What about sending UDP datagrams on a separate UDP socket while making the remote host respond to each, and then when the remote host doesn't respond, you kill the TCP connection? It detects a link breakage quickly enough :)
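A minimal sketch of such a UDP probe, under the big assumption that the remote host actually echoes the datagrams back (names and timeouts are made up; a missed reply is treated as a dead link):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

boolean linkAlive(final InetAddress host, final int port) {
    DatagramSocket probe = null;
    try {
        probe = new DatagramSocket();
        probe.setSoTimeout(2000);                                  // wait at most 2 s for the echo
        final byte[] ping = "ping".getBytes();
        probe.send(new DatagramPacket(ping, ping.length, host, port));
        final DatagramPacket reply = new DatagramPacket(new byte[16], 16);
        probe.receive(reply);                                      // throws SocketTimeoutException if nothing comes back
        return true;
    } catch (IOException e) {
        return false;                                              // no echo: assume the link is down, kill the TCP connection
    } finally {
        if (probe != null) {
            probe.close();
        }
    }
}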
Use HTTP POST instead of a raw socket connection; then you can send a response to each post. On the client side, you only remove the data from memory if the response indicates success.
Sure, it's more overhead, but it gives you what you want 100% of the time.
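A minimal sketch of that approach using java.net.HttpURLConnection (the URL and the "2xx means delivered" rule are assumptions for illustration):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

boolean post(final String data) {
    try {
        final HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/events").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        final OutputStream out = conn.getOutputStream();
        out.write(data.getBytes(StandardCharsets.UTF_8));
        out.close();
        final int code = conn.getResponseCode();    // forces the request and reads the status line
        return code >= 200 && code < 300;           // only now remove the data from client memory
    } catch (Exception e) {
        return false;                               // keep the data and retry later
    }
}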
I have a client connecting to my server. The client sends some messages to the server which I do not care about, and I do not want to waste time parsing them if I'm not going to use them. All the I/O I'm using is plain Java I/O, not NIO.
If I create the input stream and just never read from it, can that buffer fill up and cause problems? If so, is there something I can do or a property I can set to have it just throw away data that it sees?
Now what if the server doesn't create the input stream at all? Will that cause any problems on the client/sending side?
Please let me know.
Thanks,
jbu
When you accept a connection from a client, you get an InputStream. If you don't read from that stream, the client's data will buffer up. Eventually, the buffer will fill up and the client will block when it tries to write more data. If the client writes all of its data before reading a response from the server, you will end up with a pretty classic deadlock situation. If you really don't care about the data from the client, just read (or call skip) until EOF and drop the data. Alternatively, if it's not a standard request/response (like HTTP) protocol, fire up a new thread that continually reads the stream to keep it from getting backed up.
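A minimal sketch of the drain-in-a-thread option, assuming the client's data really can be thrown away (clientSocket here stands for whatever accept() returned):

final InputStream in = clientSocket.getInputStream();
new Thread(new Runnable() {
    public void run() {
        final byte[] scratch = new byte[8192];
        try {
            while (in.read(scratch) != -1) {
                // discard; we only read so the peer never blocks on a full buffer
            }
        } catch (IOException ignored) {
            // connection gone; nothing to clean up for data we never wanted
        }
    }
}).start();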
If you get no useful data from the client, what's the point of allowing it to connect?
I'm not sure of the implications of never reading from a buffer in Java -- I'd guess that eventually the OS would stop accepting data on that socket, but I'm not sure there.
Why don't you just call the skip method of your InputStream occasionally with a large number, to ensure that you discard the data?
InputStream in = ....
byte[] buffer = new byte[4096]; // or whatever
while (in.read(buffer) != -1) {
    // keep reading (and discarding) until end of stream
}
If you accept the connection, you should read the data. To tell you the truth, I have never seen (nor could I foresee) a situation where this (a server that ignores all data) would be useful.
I think you get the InputStream once you accept the request, so if you don't acknowledge that request the underlying framework (e.g. Tomcat) will drop that request (after some elapsed time).
Regards.
In some circumstances I wish to send an error message from a server to a client using non-blocking I/O (SocketChannel.write(ByteBuffer)) and then disconnect the client. Assuming I write the full contents of the message and then immediately disconnect, I presume the client may not receive this message, as I'm guessing the OS hasn't actually sent the data at that point.
Is this correct, and if so is there a recommended approach to dealing with this situation?
I was thinking of using a timer whereby if I wish to disconnect a client I send a message and then close their connection after 1-2 seconds.
SocketChannel.write in non-blocking mode will return the number of bytes that could be sent to the network immediately without blocking. Your question makes me think that you expect the write method to consume the entire buffer and asynchronously try to send the remaining data to the network, but that is not how it works.
If you really need to make sure that the error message is sent to the client before disconnecting the socket, I would simply enable blocking mode before calling the write method. In non-blocking mode, you would have to call write in a loop, counting the number of bytes sent by each invocation and exiting the loop once you've succeeded in passing the entire message to the socket (a bad solution, I know: unnecessary code, busy waiting and so on).
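A minimal sketch of the blocking-mode variant, assuming the channel can first be deregistered from its Selector (configureBlocking(true) throws IllegalBlockingModeException while a registration is still valid):

// key.cancel() and a subsequent select/wakeup are assumed to have happened already,
// otherwise configureBlocking(true) will throw IllegalBlockingModeException.
channel.configureBlocking(true);
final ByteBuffer msg = ByteBuffer.wrap(errorBytes);   // errorBytes: the encoded error message (hypothetical)
while (msg.hasRemaining()) {
    channel.write(msg);                               // the loop guards against partial writes either way
}
channel.close();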
You may be better off launching a thread and synchronously writing data to the channel. The async API is more geared toward "one thread dispatching multiple channels" and is not really intended for fire-and-forget communications.
The close() method of sockets makes sure that everything written beforehand is actually sent before the socket is really closed. However, this assumes that your write() was able to copy all the data into the TCP stack's output window, which will not always be the case. For solutions to this, see the other answers.