I have created a simple Diameter client and server (Link to sources). The client should send 10000 CCR messages, but in Wireshark I see only ~300 CCR messages actually being sent; the rest time out on the client. I run the server and the client on different computers, both running Windows 7. I found the line in the JDiameter sources where JDiameter sends the CCR (line 280), and I think that when the socket's send buffer is full the CCR is not sent. I added the following code before line 280:
while(bytes.hasRemaining())
With this change the client sends ~9900 CCRs, but very slowly. I also tested the client against another Diameter server written in C++: the client (JDiameter without my changes) sent ~7000 CCRs, but that server is hosted on Debian.
I don't see a way to solve this problem; thanks for any help.
If the sender's write() returns zero, it means the sender's socket send buffer is full, which in turn means the receiver's socket receive buffer is full, which in turn means that the receiver is reading more slowly than the sender is sending.
So speed up the receiver.
NB In non-blocking mode, merely looping around the write() call while it returns zero is not adequate. If write() returns zero you must:
Deregister the channel for OP_READ and register it for OP_WRITE
Return to the select loop.
When OP_WRITE fires, do the write again. This time, if it doesn't return zero, deregister OP_WRITE, and (probably, according to your requirements) register OP_READ.
Note that keeping the channel registered for OP_WRITE all the time isn't correct either. A socket channel is almost always writable, meaning there is almost always space in the socket send buffer. What you're interested in is the transition between not-writable and writable.
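A minimal sketch of that pattern, assuming a plain java.nio select loop that attaches the outgoing ByteBuffer to each key (the class and method names here are illustrative, not taken from JDiameter):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Sketch only: the select loop calls flush() whenever the key is writable,
// or after a read has produced output to send.
final class WriteHandler {

    void flush(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer pending = (ByteBuffer) key.attachment();   // assumed: the unsent data is attached to the key

        channel.write(pending);   // non-blocking: may write 0 bytes if the send buffer is full

        if (pending.hasRemaining()) {
            // Send buffer full: stop selecting for reads, wait to be told the socket is writable.
            key.interestOps(SelectionKey.OP_WRITE);
        } else {
            // Fully flushed: deregister OP_WRITE and go back to reading.
            key.interestOps(SelectionKey.OP_READ);
        }
    }
}

When OP_WRITE fires, the loop simply calls flush() again; once the buffer drains, interest reverts to OP_READ, which is exactly the not-writable/writable transition described above.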
I understand that a server socket channel is registered to listen for accepts; when a connection is accepted, the resulting channel is registered for reads, and once read it is registered for writes, all of which is done by adding the relevant operations to the SelectionKey's interest set using the interestOps method.
However, when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here? Does this mean that the server will just not listen for any incoming requests on the channel belonging to this socket, and the source channel will be oblivious of this decision by the server and might keep on sending data to the server? Or will it somehow inform the channel's source of this decision?
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping them if the interest ops for the channel they belong to have been "unset"?
However, when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here?
What literally happens is something like:
public void interestOps(int interestOps)
{
    this.interestOps = interestOps;
}
Does this mean that the server will just not listen for any incoming requests on the channel belonging to this socket
It means that the Selector won't trigger any OP_READ events if data arrives via the socket. It doesn't mean that data won't be received.
and the source channel will be oblivious of this decision by the server and might keep on sending data to the server?
If by 'source channel' you mean the peer, it is not notified in any way, unless the receive buffer fills up at the receiver.
Or will it somehow inform the channel's source of this decision?
No.
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping them if the interest ops for the channel they belong to have been "unset"?
No.
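To make those answers concrete, here is a small hedged sketch (the helper names are illustrative) of removing and later restoring read interest. Only the Selector's behaviour changes; nothing is signalled to the peer, and data keeps accumulating in the socket receive buffer until it fills.

import java.nio.channels.SelectionKey;

// Sketch only: 'key' is a SelectionKey for an already-registered SocketChannel.
final class InterestToggle {

    // Stop the Selector from reporting OP_READ. The peer is not told anything;
    // the kernel keeps accepting data into the socket receive buffer until it is full.
    static void pauseReads(SelectionKey key) {
        key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
    }

    // Resume OP_READ notifications. Any data that arrived in the meantime is still
    // in the receive buffer and will immediately be reported as readable.
    static void resumeReads(SelectionKey key) {
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
    }
}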
We have a system with two applications. One of these is a legacy application for which we can't make any code changes. This application sends messages to the second application, which is written in Java. In our Java code, we have set the input stream buffer size to 1 MB as follows:
Socket eventSocket = new Socket();
eventSocket.setSendBufferSize(1024 * 1024);
Now, the legacy application sends messages of variable size. Most of the messages are smaller than 1 MB, but sometimes it sends messages as large as 8 MB. Usually these messages are read successfully by the Java application, but in some cases the following read operation returns -1:
read = stream.read(b, off, len - off); // here, stream is an InputStream
As per the Java API documentation, the InputStream read method returns -1 if there is no more data because the end of the stream has been reached.
But this is erroneous behavior. We did a snoop test using Wireshark to verify the exact messages exchanged between the two applications, and found that the Java application sent a zero window message a few seconds before the input stream read method returned -1. At the time this Java API method returned -1, the Java application was sending a ZeroWindowProbeAck message to the legacy application.
How should we handle this issue?
As per https://wiki.wireshark.org/TCP%20ZeroWindow, zero window has the following definition:
What does TCP Zero Window mean?
Zero Window is something to investigate.
TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.
This means that a client is not able to receive further information at the moment, and the TCP transmission is halted until it can process the information in its receive buffer.
TCP Window size is the amount of information that a machine can receive during a TCP session and still be able to process the data. Think of it like a TCP receive buffer. When a machine initiates a TCP connection to a server, it will let the server know how much data it can receive by the Window Size.
In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins sending data, the client will decrement its Window Size as this buffer fills. At the same time, the client is processing the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames, the client informs the server of how much room is in this buffer. If the TCP Window Size goes down to 0, the client will not be able to receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero Window" in Expert View.
Troubleshooting a Zero Window
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host. It could be that the machine is running too many processes at that moment, and its processor is maxed. Or it could be that there is an error in the TCP receiver, like a Windows registry misconfiguration. Try to determine what the client was doing when the TCP Zero Window happened.
Source: flukenetworks.com
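To relate the advertised window back to Java: it is backed by the socket receive buffer, which can be inspected and, ideally before connecting, enlarged. A minimal sketch, using a hypothetical host and port:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch only: "legacy-host.example" and port 9000 are placeholders.
public class ReceiveBufferDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // Request a larger receive buffer *before* connecting, so that a large
            // window (with window scaling) can be negotiated on the SYN.
            socket.setReceiveBufferSize(1024 * 1024);
            socket.connect(new InetSocketAddress("legacy-host.example", 9000));

            // The OS may round or cap the request; check what was actually granted.
            System.out.println("Receive buffer: " + socket.getReceiveBufferSize() + " bytes");
        }
    }
}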
Handling input-stream overflow (zero window) in Java
There is no such thing as 'input-stream overflow' in Java, and you can't handle zero window in Java either, except by reading from the network more quickly. Your title already doesn't make sense.
We did a snoop test using Wireshark to verify the exact messages exchanged between the two applications, and found that the Java application sent a zero window message a few seconds before the input stream read method returned -1.
Neither Java nor the application sends those messages. The operating system does.
The input stream of a socket returns -1 if and only if a FIN has been received from the peer, and that in turn occurs only if the peer has closed the connection or its process has exited (Unix). It doesn't have anything to do with TCP windowing.
At the time this Java API method returned -1, the Java application was sending a ZeroWindowProbeAck message to the legacy application.
No it wasn't. The operating system was, and it wasn't 'at the time', it was 'a few seconds before', according to your own words. At the time this Java method returned -1, it had just received a FIN from the peer. Have a look at your sniff log. There is no problem here to explain.
As per [whatever], zero window has the following definition
Wireshark does not get to define TCP. TCP is defined in IETF RFCs. You can't cite non-normative sources as definitions.
TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.
For any amount of time.
This means that a client is not able to receive further information at the moment, and the TCP transmission is halted until it can process the information in its receive buffer.
It means that the peer is not able to receive. It has nothing to do with the client or the server specifically.
TCP Window size is the amount of information that a machine can receive during a TCP session
No it isn't. It is the amount of data the receiver is currently able to receive. It is therefore also the amount of data the sender is presently allowed to send. It has nothing to do with the session whatsoever.
and still be able to process the data.
Irrelevant.
Think of it like a TCP receive buffer.
It is a TCP receive buffer.
When a machine initiates a TCP connection to a server, it will let the server know how much data it can receive by the Window Size.
Correct. And vice versa. Continuously, not just at the start of the session.
In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins sending data, the client will decrement its Window Size as this buffer fills.
It has nothing to do with clients and servers. It operates in both directions.
At the same time, the client is processing the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames,
Segments
the client informs the server of how much room is in this buffer.
The receiver informs the sender.
If the TCP Window Size goes down to 0, the client
The peer
will not be able to receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero Window" in Expert View.
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host.
For one reason only. Its socket receive buffer is full. Period.
It could be that the machine is running too many processes at that moment
Rubbish.
Or it could be that there is an error in the TCP receiver, like a Windows registry misconfiguration.
Rubbish. The receiver is reading more slowly than the sender is sending. Period. It is a normal condition that arises frequently during any TCP session.
Try to determine what the client was doing when the TCP Zero Window happened.
That's easy. Not reading from the network.
Your source is drivel, and your problem is imaginary.
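If anything needs changing, it is that the read loop should treat -1 as end of stream and stop, rather than retry. A hedged sketch of the conventional pattern, with names shaped like the code in the question:

import java.io.IOException;
import java.io.InputStream;

// Sketch only: reads exactly expectedLength bytes, or fails when the peer closes first.
final class StreamReader {

    static byte[] readFully(InputStream stream, int expectedLength) throws IOException {
        byte[] b = new byte[expectedLength];
        int off = 0;
        while (off < expectedLength) {
            int read = stream.read(b, off, expectedLength - off);
            if (read == -1) {
                // The peer closed the connection: end of stream, nothing to wait for.
                throw new IOException("Peer closed the connection after " + off + " bytes");
            }
            off += read;
        }
        return b;
    }
}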
We have created a solution where we wait some time for the input stream to clear after this overflow problem occurs. We have made the following code changes:
int execRetries = 0;
while (true)
{
    read = stream.read(b, off, len - off);
    if (read == -1)
    {
        if (execRetries++ < MAX_EXEC_RETRIES_AFTER_IS_OVERFLOW) {
            try {
                Log.error("Inputstream buffer overflow occurred. Retry no: " + execRetries);
                Thread.sleep(WAIT_TIME_AFTER_IS_OVERFLOW);
            } catch (InterruptedException e) {
                Log.error(e.getMessage(), e);
            }
        } else {
            throw new Exception("End of file on input stream");
        }
    }
    else if (execRetries != 0) {
        Log.info("Inputstream buffer overflow problem resolved after retry no: " + execRetries);
        execRetries = 0;
    }
    .....
}
The solution has been deployed to a test server; we are waiting to verify whether it works.
In Java NIO, does Selector.select() guarantee that at least one entire UDP datagram's content is available at the socket channel, or could the Selector in theory wake when less than a datagram, say a couple of bytes, is available?
What happens if the transport protocol is TCP? With regard to Selector.select(), is there a difference from UDP?
From the API:
Selects a set of keys whose corresponding channels are ready for I/O operations.
It doesn't, however, specify what 'ready' means.
So my questions:
How do incoming datagrams/streams go from the hardware to the Java application's socket (channels)?
When using a UDP or TCP client, should one assume that at least one datagram has been received, or could the Selector wake when only part of a datagram is available?
It doesn't, however, specify what 'ready' means.
So my questions:
How do incoming packages/streams go from the hardware to the Java application's socket (channels)?
They arrive at the NIC where they are buffered and then passed to the network protocol stack and from there to the socket receive buffer. From there they are retrieved when you call read().
When using a UDP or TCP client, should one assume that at least one package has been received
You mean packet. Actually in the case of UDP you mean datagram. You can assume that an entire datagram has been received in the case of UDP.
or could the Selector wake when only a part of a [packet] is available?
In the case of TCP you can assume that either at least one byte or end of stream is available. There is no such thing as a 'package' or 'packet' or 'message' at the TCP level.
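A hedged sketch of the difference, assuming illustrative port numbers and a TCP peer already listening on localhost:6000: when the UDP channel selects as readable, one receive() hands back a whole datagram; when the TCP channel selects as readable, one read() may return anything from a single byte upwards, or -1 at end of stream.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Sketch only: one UDP channel and one connected TCP channel on a single Selector.
public class ReadinessDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        DatagramChannel udp = DatagramChannel.open();
        udp.bind(new InetSocketAddress(5000));
        udp.configureBlocking(false);
        udp.register(selector, SelectionKey.OP_READ);

        SocketChannel tcp = SocketChannel.open(new InetSocketAddress("localhost", 6000));
        tcp.configureBlocking(false);
        tcp.register(selector, SelectionKey.OP_READ);

        ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
        while (selector.select() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                buffer.clear();
                if (key.channel() == udp) {
                    udp.receive(buffer);          // delivers exactly one whole datagram
                } else {
                    int n = tcp.read(buffer);     // may deliver any number of bytes >= 1, or -1 at EOF
                    if (n == -1) {
                        key.cancel();
                        tcp.close();
                    }
                }
            }
            selector.selectedKeys().clear();      // done with this selection round
        }
    }
}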
I have a question concerning a UDP packet's life/route. I have a simple client-server UDP scheme with a send call on the client side and a receive call on the server side. Let's say the send method gets called and the packet actually arrives at the other side, BUT the server's code execution hasn't yet reached the receive method call. What happens to the packet in that time? I tried stopping execution before the receive call with a simple command-line input prompt, waited a little, then let it continue, and noticed that the packet still got received. Can you explain WHY that happens? Is it buffered at a different OSI level?
Thanks in advance.
Every TCP or UDP socket has a send buffer and a receive buffer. Your datagram got queued into the send buffer at the sender, then it got sent, then it got queued into the receive buffer at the receiver, then you read it from there.
NB OSI has nothing to do with it. TCP/IP doesn't obey the OSI model; it has its own, prior model.
The "receive" method call doesn't receive the packet. If there's a UDP socket "open" for that port, it means that there is buffer space allocated, and that's where the NIC+OS put the data. When you call "receive", it just looks there, and if there's anything there, then it pretends to have just received it.
I should add that if the buffer is empty, then the receive call does go into a blocking state, waiting to get notified by the OS that something has arrived.
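A tiny self-contained sketch of that buffering (loopback, hypothetical port 5005): the datagram is sent well before receive() is called, yet it is still delivered, because the OS has already queued it in the receiving socket's buffer.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch only: demonstrates that a datagram arriving before receive() is called
// sits in the socket receive buffer until the application asks for it.
public class UdpBufferDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(5005);   // buffer space exists from this point on
             DatagramSocket sender = new DatagramSocket()) {

            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 5005));

            Thread.sleep(2000);   // simulate the server not having reached receive() yet

            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet);              // the queued datagram is delivered immediately
            System.out.println(new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8));
        }
    }
}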
I have a simple Java program which acts as a server, listening for UDP packets. I then have a client which sends UDP packets over 3G.
Something I've noticed is that occasionally the following appears to happen: I send one packet and, seconds later, it still has not been received. I then send another packet and suddenly they both arrive.
I was wondering whether some sort of mechanism is in place that waits for a certain amount of data instead of sending an undersized packet. In my application I only send around 2-3 bytes of data per packet, although the UDP header and whatnot will bulk the message up a bit.
The aim of my application is to get these few bytes of data from A to B as fast as possible; there is a huge emphasis on speed. Is it all just coincidence? I suppose I could increase the packet size, but it seems like the transfer time would just increase, and 3G isn't exactly perfect.
Since the comments are getting rather lengthy, it might be better to turn them into an answer altogether.
If your app is not receiving data until a certain quantity is retrieved, then chances are there is some sort of buffering going on behind the scenes. A good example (not saying this applies to you directly) is that if you or the underlying libraries are using something like BufferedReader.readLine() or DataInputStream.readFully(bytes), the call will block until it receives a newline or fills bytes before returning. Judging by the fact that your program seems to retrieve all of the data when a certain threshold is reached, it sounds like this is the case.
A good way to debug this is to use Wireshark. Wireshark doesn't care about your program; it's analyzing the raw packets sent to and from your computer, and it can tell you whether the issue is on the sender or the receiver.
If you use Wireshark and see that the data from the first send arrives on your physical machine well before the second, then the issue lies with your receiving end. If you see that the first packet arrives at the same time as the second packet, then the issue lies with the sender. Without seeing the code, it's hard to say what you're doing and what, specifically, is causing the data to only show up after receiving more than 2-3 bytes, but until then, this behavior describes exactly what you're seeing.
There are several probable causes of this:
Cellular data networks are not "always-on". Depending on the underlying technology, there can be a substantial delay between when a first packet is sent and when IP connectivity is actually established. This will be most noticeable after IP networking has been idle for some time.
Your receiver may not be correctly checking the socket for readability. Regardless of what high-level APIs you may be using, underneath there needs to be a call to select() to check whether the socket is readable. When a datagram arrives, select() should unblock and signal that the socket descriptor is readable. Alternatively, but less efficiently, you could set the socket to non-blocking and poll it with a read. Polling wastes CPU time when there is no data and delays detection of arrival for up to the polling interval, but can be useful if for some reason you can't spare a thread to wait on select().
I said above that select() should signal readability on a watched socket when data arrives, but this behavior can be modified by the socket's "receive low-water mark". The default value is usually 1, meaning that any data will signal readability. But if SO_RCVLOWAT is set higher (via setsockopt() or a higher-level equivalent), then readability will not be signaled until at least the specified amount of data has arrived. You can check the value with getsockopt() or whatever API is equivalent in your environment.
Item 1 would cause the first datagram to actually be delayed, but only when the IP network has been idle for a while and not once it comes up active. Items 2 and 3 would only make it appear to your program that the first datagram was delayed: a packet sniffer at the receiver would show the first datagram arriving on time.