TCP round trip time calculation using Java

When I run traceroute from one Solaris machine to another over a 10 Gig network interface, it reports 0.073 ms for 40-byte packets.
When I do the same round trip in Java, the time is far longer, even after 10K iterations. What could be the reason?
Java: Sender (Snippet)
Socket sendingSocket = new Socket(address, RECEIVER_PORT);
sendingSocket.setTcpNoDelay(true);
OutputStream outputStream = sendingSocket.getOutputStream();
InputStream inputStream = sendingSocket.getInputStream();
byte[] msg = new byte[64]; // assume that it is populated

for (int i = 0; i < 10000; i++) {
    long start = System.nanoTime();
    outputStream.write(msg, 0, 64);
    outputStream.flush();
    inputStream.read(msg, 0, 64); // read the 64-byte echo (read() may return fewer bytes)
    long end = System.nanoTime(); // (end - start) is the measured round-trip time
}
It takes far longer, around 69 ms, and it does not even depend on the byte size. Even if I reduce it to, say, a 1-byte array, it still takes 69 ms. Any comments/suggestions?
Other observations:
1. OutputStream.write and flush take only about 6 microseconds.
2. Similarly, on the other end, the TCPReceiver side that receives and writes back takes only about 6 microseconds.
Solution:
Thank you to everyone who responded to this query.
I found that it was due to the socket buffer size:
Default buffer sizes set on the Solaris machine:
Receive buffer size: 49152 bytes
Send buffer size: 7552 bytes
I increased the socket buffer sizes and the performance almost matches traceroute.
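For reference, a minimal sketch of how the buffers might be raised on the sender (the 256 KB value is illustrative, not tuned; address and RECEIVER_PORT are the names from the snippet above):
Socket sendingSocket = new Socket();             // create unconnected so options apply before connect
sendingSocket.setTcpNoDelay(true);
sendingSocket.setReceiveBufferSize(256 * 1024);  // setting this before connect lets TCP advertise a larger window
sendingSocket.setSendBufferSize(256 * 1024);     // the OS may round or cap the requested values
sendingSocket.connect(new InetSocketAddress(address, RECEIVER_PORT));
System.out.println("send buffer = " + sendingSocket.getSendBufferSize()
        + ", receive buffer = " + sendingSocket.getReceiveBufferSize());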

You are not comparing like with like: ICMP and TCP are very different protocols.
If you want to determine whether the latency is in your code, the JVM, the Solaris TCP stack, or the network, you'll have to start with tcpdump/Wireshark etc.

This is probably due to a number of factors. For starters, establishing a TCP channel takes time: several packets have to be exchanged between the two endpoints to set up the reliable medium. That's not the case with ICMP messages, which are simply single packets. In fact, because you see no difference in the time it takes regardless of size, you can likely assume that the time required to actually transmit the data (a very small amount in either case on a 10 Gig connection) is negligible compared to the time it takes to establish the channel. There may also be some overhead associated with using Java (a bytecode language) rather than something like C or C++ that runs natively on the hardware.

The time it takes to connect can be about 20 ms, so you need to test using an existing connection.
The TCP stack is quite slow going through the kernel; it takes about 50-100 us on many machines. You can reduce this substantially using kernel-bypass drivers/support.
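A minimal sketch of measuring on an already-established connection, assuming the outputStream/inputStream/msg from the question; a warm-up pass plus a median keeps connection setup and JIT compilation out of the numbers:
long[] samples = new long[10000];
for (int pass = 0; pass < 2; pass++) {           // pass 0 warms up the JIT and TCP path, pass 1 is measured
    for (int i = 0; i < samples.length; i++) {
        long start = System.nanoTime();
        outputStream.write(msg, 0, msg.length);
        outputStream.flush();
        int off = 0;
        while (off < msg.length) {               // read() may return fewer bytes than requested
            int n = inputStream.read(msg, off, msg.length - off);
            if (n < 0) throw new java.io.EOFException("peer closed connection");
            off += n;
        }
        samples[i] = System.nanoTime() - start;
    }
}
java.util.Arrays.sort(samples);
System.out.println("median RTT: " + samples[samples.length / 2] + " ns");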

Related

Fragment UDP/TCP segments in Java

I have to measure the speed of UDP and TCP between a client and a server for a university project, and for that we have to choose the amount of data we will send, for example 500 bytes, 750 bytes, 1500 bytes...
On Linux I know how to reduce or increase the MTU of the segment, but I do not know how to do it in Java for my application. Is there any function to do it, or a way to make the socket buffer bigger or smaller and force it to send the amount of data that I want?
Thank you in advance!
The Java socket API is fairly high level and doesn't give you much fine-grained control.
That said, you can write x bytes to the socket and then flush it.
You should also enable TCP_NODELAY, otherwise the writes may end up being buffered and coalesced.
As long as the number of bytes is less than the underlying OS MTU, the messages should be sent as separate packets.
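A minimal sketch of that, with a hypothetical host and port; each sub-MTU write followed by a flush should normally go out as its own segment once Nagle's algorithm is off:
Socket socket = new Socket(host, port);          // host/port are placeholders
socket.setTcpNoDelay(true);                      // disable Nagle's algorithm
OutputStream out = socket.getOutputStream();
byte[] chunk = new byte[750];                    // one of the sizes you want to test
out.write(chunk);
out.flush();                                     // hand it to the network stack immediately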

Send bytes via TCP/IP from an Android phone faster

I am taking pictures with the camera2 API on Android on a Nexus 6. It takes virtually no time at all from when my code reaches the end of onImageAvailable() to when it is called again.
However, it takes ~700 ms to send each picture over TCP/IP.
private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader)
    {
        Image image = mImageReader.acquireLatestImage();
        if (image.getPlanes().length == 0)
            return;
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        if (buffer == null)
            return;
        byte[] pictureBytes = new byte[buffer.remaining()];
        buffer.get(pictureBytes);
        try {
            // length prefix followed by the image bytes
            mOutputStream.write((String.valueOf(pictureBytes.length) + "....").getBytes());
            // FROM HERE
            mOutputStream.write(pictureBytes);
            // TO HERE TAKES ~700ms
            mOutputStream.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};
My connection is created in another thread, as Android requires, with the following code:
ServerSocket serverSocket = new ServerSocket(#);
Socket clientSocket = serverSocket.accept();
OutputStream outputStream = clientSocket.getOutputStream();
Note: mOutputStream in the main thread is the same stream as outputStream; I pass outputStream to the main thread.
I've tried using BufferedOutputStream and that was actually slower.
The link speed between my phone and the device it's connected to is 130 Mbps. With images that are less than 2 MB each, I should be able to send at least 8 pictures a second.
How do I reduce the 700 ms? Thank you!
Even if you do have a 130 Mbps connection to your other device, if you're using TCP you won't be able to use all of that bandwidth immediately. This is because of a TCP congestion control mechanism called slow start.
At the beginning of each connection, TCP starts by sending a small amount of data, because the capacity of the link is unknown. The amount of traffic sent before an ACK is received is bounded by the congestion window, and depending on the configured MSS (Maximum Segment Size) it can be 2-4 times the MSS. The exact amount is defined in RFC 5681:
If SMSS > 2190 bytes:
IW = 2 * SMSS bytes and MUST NOT be more than 2 segments
If (SMSS > 1095 bytes) and (SMSS <= 2190 bytes):
IW = 3 * SMSS bytes and MUST NOT be more than 3 segments
If SMSS <= 1095 bytes:
IW = 4 * SMSS bytes and MUST NOT be more than 4 segments
After each ACK is received, the congestion window is increased by the size of the MSS, which means the window size grows exponentially, roughly doubling every round trip. Even though this exponential growth lets TCP reach the maximum bandwidth quickly, for relatively small transfers it is not fast enough. Even if your connection is 100 Mbps, you won't be able to send 8 pictures per second, because the transfer time is short and for a considerable part of it the window is too small to let you use the full bandwidth available.
Depending on the round trip time (to the server and back), it may take as much as 500 ms to send the first 1 MB of data, so there is only a small chance that you will be able to send more than 2, or maybe 3, images per second.
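As a back-of-the-envelope sketch (ignoring the receiver's window, delayed ACKs and any loss), the numbers above work out roughly like this:
int smss = 1460;                                 // typical Ethernet segment size
long window = 3L * smss;                         // initial window per RFC 5681: 3 * SMSS = 4380 bytes
long sent = 0;
int rtts = 0;
while (sent < 2_000_000) {                       // a ~2 MB picture
    sent += window;
    window *= 2;                                 // window roughly doubles every round trip in slow start
    rtts++;
}
System.out.println("~" + rtts + " round trips spent in slow start"); // prints ~9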
I switched from Java to C# to read in the bytes. That was the main bottleneck. I'm up to 3.33 fps; this includes the time writing to disk.
But yes, I verified the pictures were less than 2MB.
Thank you everyone!

Socket communication between Java and C: Good buffer size

I have to implement socket communication between a server written in Java and a client written in C.
The maximum amount of data I will have to transmit is 64 KB.
Most socket communication tutorials work with buffer sizes of about 1024 bytes or less.
Is it a (performance) problem to set the buffer to 64 KB?
The two programs will run on the same machine, or at least in the same local area network.
And if it is a problem: how do you handle messages that are bigger than the buffer in general?
The buffer can be smaller than the messages without any problem, as long as the receiver consumes the data as fast as the sender generates it. A bigger buffer gives your receiver more time to process each message, but you usually don't need a giant buffer: for example, when you download software the file can be more than 1 GB, yet your browser/FTP client just reads the buffer repeatedly and stores the data in a file on your local hard disk.
And in general, you can ignore the languages used to write the client and the server; only the network protocol matters. Every language has its own libraries to handle sockets with ease.
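A minimal sketch of that pattern, assuming a hypothetical socket and an expectedLength taken from some length prefix; the 8 KB buffer is deliberately much smaller than the 64 KB message:
InputStream in = socket.getInputStream();
ByteArrayOutputStream message = new ByteArrayOutputStream();
byte[] buffer = new byte[8 * 1024];
int remaining = expectedLength;                  // e.g. parsed from a length prefix
while (remaining > 0) {
    int n = in.read(buffer, 0, Math.min(buffer.length, remaining));
    if (n < 0) break;                            // peer closed the connection early
    message.write(buffer, 0, n);
    remaining -= n;
}
byte[] payload = message.toByteArray();          // the full message, regardless of buffer size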
I suggest a larger buffer, but I suspect you'll see less than a 5% difference whether you use 1 KB or 64 KB.
Note: b = bit, B = byte, k = 1000 and K = 1024, and it is best not to get them confused (not that it is likely to matter here).

Java DatagramPacket (UDP) maximum send/recv buffer size

In Java, when using DatagramPacket, suppose you have a byte[1024*1024] buffer. If you just pass that to the DatagramPacket when sending/receiving, will a Java receive call for the DatagramPacket block until it has read the entire megabyte?
I'm asking whether Java will split it up, or just try to send the entire thing, which then gets dropped.
Normally the size limit is around 64 KB for a UDP packet, but I wondered, since Java's API allows for byte arrays, whether that is a limit, and whether something huge is dropped or split up and reassembled for you.
If it is dropped, what API call would tell me the maximum data payload I can use in the Java call? I've heard that IPv6 also has jumbo frames, but does DatagramPacket (or DatagramSocket) support that, since UDP defines the header spec?
DatagramPacket is just a wrapper over a UDP socket, so the usual UDP rules apply.
64 kilobytes is the theoretical maximum size of a complete IP datagram, but only 576 bytes are guaranteed to be routed. On any given network path, the link with the smallest Maximum Transmission Unit will determine the actual limit. (1500 bytes, less headers, is the common maximum, but it is impossible to predict how many headers there will be, so it's safest to limit messages to around 1400 bytes.)
If you go over the MTU limit, IPv4 will automatically break the datagram up into fragments and reassemble them at the far end, but only up to 64 kilobytes and only if all fragments make it through. If any fragment is lost, or if any device decides it doesn't like fragments, then the entire packet is lost.
As noted above, it is impossible to know in advance what the MTU of a path will be. There are various algorithms for probing to find out, but many devices do not properly implement (or deliberately ignore) the necessary standards, so it all comes down to trial and error. Or you can just assume 1400 bytes per message.
As for errors, if you try to send more bytes than the OS is configured to allow, you should get an EMSGSIZE error or its equivalent. If you send less than that but more than the network allows, the packet will just disappear.
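A minimal sketch following that advice, with a hypothetical host and port; in Java an oversized datagram surfaces as an IOException rather than a raw EMSGSIZE:
DatagramSocket socket = new DatagramSocket();
byte[] payload = new byte[1400];                 // conservative, below the usual 1472-byte Ethernet limit
DatagramPacket packet =
        new DatagramPacket(payload, payload.length, InetAddress.getByName(host), port);
socket.send(packet);                             // a payload over the OS limit would typically throw IOException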
The maximum java.net.DatagramPacket buffer size is 65507 bytes.
See https://en.wikipedia.org/wiki/User_Datagram_Protocol#UDP_datagram_structure
The Maximum Transmission Unit (MTU) size varies by implementation, but it is arguably irrelevant to the basic question "Java DatagramPacket (UDP) maximum send/recv buffer size", as the MTU is transparent to the java.net.DatagramPacket layer.
@Mihai Danila: because I couldn't add a comment to the answer above, I'm writing this as a reply.
Continuing your point on MTU size: in practice, I try to use NetworkInterface.getMTU() - 40 as the buffer size for DatagramSocket.setSendBufferSize(), rather than relying on getSendBufferSize(). This is to make sure it matches the different window sizes on different platforms and is universally acceptable on Ethernet (ignoring dial-up for a moment). I haven't hardcoded it to 1460 bytes (1500-20-20) because on Windows the MTU size is universally 1500. The Windows platform's own window size is 8192 bytes, but I believe that by setting SO_SNDBUF to less than the MTU I am burdening the network/IP layer less, and saving some overhead for all the hops, routers and receivers, thus reducing some latency over the network.
Similarly, for the receive buffer, I am using a maximum of 64K, or 65535 bytes. This way my program is portable across platforms that use different window sizes.
Do you think this sounds OK? I have not implemented any tools to measure the differences, but I'm assuming that's the case based on what's out there.
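For what it's worth, a sketch of the MTU lookup described here (localAddress is a placeholder for one of your interface addresses):
NetworkInterface nif = NetworkInterface.getByInetAddress(localAddress);
int mtu = nif.getMTU();                          // e.g. 1500 on plain Ethernet, -1 if unknown
System.out.println("interface MTU = " + mtu);
// The comment above then uses (mtu - 40) for setSendBufferSize(); note that the
// per-datagram header overhead for UDP over IPv4 is 28 bytes (20 IP + 8 UDP).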

How to determine ideal Datagram settings

I'm writing a Java client application that will consume high-rate UDP data, and I want to minimize packet loss at the host/application layer (I understand there may be unavoidable loss in the network layer).
What is a reasonably high buffer size (MulticastSocket.setReceiveBufferSize())?
What is the ideal DatagramPacket buffer size? Is there a downside to using 64K?
I have very limited insight into the network topology and the sender application. This is running on Linux. TCP is not an option.
What is a reasonably high buffer size (MulticastSocket.setReceiveBufferSize())?
Figure out how much your application might jitter and the rate of data you need to receive. For example, if your application pauses to do something for 0.5 seconds (like garbage collection) and you're receiving data at 10 MB/sec, you'd need a 5 MB buffer to cover the data that arrives during those 0.5 seconds.
Note that you might need to tune the net.core.rmem_max sysctl on Linux to be allowed to set the buffers to the desired size (IIRC you actually only get half of what you specify in the sysctl); the default net.core.rmem_max may be rather low.
What is the ideal DatagramPacket buffer size? Is there a downside to using 64K?
The ideal is the MTU of your network; for normal Ethernet that means a UDP payload of 1472 bytes. Anything bigger is a bad idea, as it causes fragmented IP packets; IP fragmentation is generally considered a bad thing, since it adds overhead and can lead to more lost data.
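A minimal sketch combining both points, with a hypothetical port; the 8 MB request is illustrative and is capped by net.core.rmem_max on Linux:
MulticastSocket socket = new MulticastSocket(port);
socket.setReceiveBufferSize(8 * 1024 * 1024);    // ask for 8 MB...
System.out.println("granted: " + socket.getReceiveBufferSize()); // ...and check what the OS actually gave
// socket.joinGroup(...) omitted for brevity
byte[] buf = new byte[1500];                     // comfortably holds a 1472-byte payload
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);                          // packet.getLength() is the size actually received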
Socket send and receive buffers can be as large as you like, a megabyte or two if you want.
The maximum practical datagram size via a router is 534 bytes.
