Does anyone have any tips on how to calculate the bandwidth usage of a socket?
For example, as I send data over a socket to the server I am connected to, I want to show the Kb/s that is being sent.
Google search didn't reveal anything useful. Maybe I'm searching the wrong terms.
The best you're probably going to be able to do easily is to record when you start writing and then count the bytes you successfully pass to Socket.getOutputStream().write(). For a small amount of data that will be very inaccurate, as you're mostly just filling up the OS's transmission buffer, which initially accepts bytes much faster than it actually sends them.
It should amortize to essentially the correct rate over a fairly large amount of data, however.
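As a rough sketch of that approach (the class and method names here are just illustrative, not an existing API), a small decorator around the socket's output stream can count bytes and report an average rate:

    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Counts the bytes handed to the underlying socket stream so a rough
    // send rate can be computed from elapsed wall-clock time.
    public class CountingOutputStream extends FilterOutputStream {
        private long bytesWritten = 0;
        private final long startNanos = System.nanoTime();

        public CountingOutputStream(OutputStream out) {
            super(out);
        }

        @Override
        public void write(int b) throws IOException {
            out.write(b);
            bytesWritten++;
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            out.write(b, off, len);
            bytesWritten += len;
        }

        // Average rate since the stream was created, in KB/s. Short samples
        // mostly measure the OS send buffer filling up, as noted above.
        public double averageKBps() {
            double seconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
            return seconds > 0 ? (bytesWritten / 1024.0) / seconds : 0;
        }
    }

You would wrap the stream once, e.g. OutputStream out = new CountingOutputStream(socket.getOutputStream());, and poll averageKBps() from whatever updates your display. As noted above, the first measurements mostly reflect the OS send buffer filling, so short samples will overshoot.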
So, this is basically my situation.
I have the server and client both connected and running. The way it works is the server sends information to the client, then WAITS until it receives a response from the client, then the server sends information once again to the client, and this process repeats again and again.
This means they are basically just communicating back and forth. The data they send varies; sometimes it's large, sometimes small. Needless to say, even if it is very small I still flush it out, because the other side cannot proceed until it receives it.
I was wondering whether I should use smaller or larger buffer sizes for sending/receiving. I feel like it may be a waste to have a huge buffer size like 65536 and then flush out something as small as "Hello", but then again sometimes the data can be as large as 100 megabytes.
Anybody have any clue on what I should do?
Is it a waste to have a large buffer size if I only flush out a small amount of data?
Is there a way to adjust buffer size depending on the data being sent?
Thanks! I appreciate all/any responses.
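Not an authoritative answer, but as an illustration of the pattern being discussed: a BufferedOutputStream allocates its buffer once per stream, not per message, so flushing a tiny message through a large buffer wastes nothing beyond that one-time memory cost per connection. A minimal sketch (the 8 KB size is just an arbitrary middle ground for illustration):

    import java.io.BufferedOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class MessageSender {
        private final BufferedOutputStream out;

        public MessageSender(Socket socket) throws IOException {
            // A moderate buffer (8 KB here, purely illustrative): small messages
            // only use the bytes they need, and large payloads are still written
            // to the socket in reasonably sized chunks.
            this.out = new BufferedOutputStream(socket.getOutputStream(), 8 * 1024);
        }

        public void send(byte[] message) throws IOException {
            out.write(message);
            out.flush(); // the peer is waiting on this message, so push it out now
        }
    }

Large payloads simply pass through the buffer in chunks, so there is usually no need to resize it per message.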
I have to measure the speed of UDP and TCP between a Client and a Server, for a university project, and for that we have to choose the data we will send. For example 500 bytes, 750 bytes, 1500 bytes...
In Linux I know how to increase or decrease the MTU, but I do not know how to do the equivalent in Java for my application. Is there any function to do it, or a way to make the socket's packets bigger or smaller and force it to send exactly the amount of data that I want?
Thank you in advance!
The Java socket API is pretty high level and doesn't give you much fine grained control.
That said, you can write x bytes to the socket and then flush it.
You should also enable TCP_NODELAY, otherwise small writes may end up being buffered and coalesced by Nagle's algorithm.
So long as the number of bytes is less than the underlying MTU, each flushed write should then be sent as a separate packet, as in the sketch below.
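A minimal sketch of that idea, assuming a plain TCP test client (the host, port, payload size, and iteration count are placeholders for the experiment):

    import java.io.OutputStream;
    import java.net.Socket;

    public class FixedSizeSender {
        public static void main(String[] args) throws Exception {
            // Host, port, payload size and count are placeholders for the experiment.
            try (Socket socket = new Socket("server.example.com", 5000)) {
                socket.setTcpNoDelay(true); // disable Nagle so each write goes out promptly

                byte[] payload = new byte[750]; // e.g. 750 bytes per send
                OutputStream out = socket.getOutputStream();
                for (int i = 0; i < 1000; i++) {
                    out.write(payload);
                    out.flush();
                }
            }
        }
    }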
Hi everyone. I am a beginner in network programming. Currently I want to do an experiment which sends a file (~2 MB) from an Android phone to an Ubuntu server. How can I send it at the highest speed? I have tried something like using a BufferedReader in Java, reading each byte out of the file and sending that single byte to the server through the socket OutputStream's write() function. This way seems to cost too much time. I notice that, under the same network conditions, sending the same file using an instant-messenger tool like Skype is much quicker than my code. Does anybody know the API or protocol used beneath that kind of instant-messenger software?
Do I need to call other, more efficient APIs rather than a plain socket? I am also trying to read the whole file into a byte array and then call the socket's write() function just once to send the huge byte array to the server. Although I find a lot of "padding zeros" scattered through my original data when I receive it at the server side, the whole transfer does seem to cost less time than the single-byte method. Does anybody have any advice on this? Thanks a lot!
Thanks for everybody's answers. I think I just made a silly mistake. The real reason for my slow transmission using a TCP socket is that each time I read only a single byte out of the file and called write(int b) to send that single byte to the server, which is very time consuming. Now I read 256 bytes out of the file at a time and send them through write(byte[] b, int off, int len) (see the sketch below); this way, the transmission is pretty fast. Therefore, it is not a problem with TCP itself; it was my mistake of calling the wrong API. I have not tried UDP yet, but I think it is also a good choice. Thanks again, everybody.
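For reference, a chunked copy loop along the lines described above might look like this (host, port, and file path are placeholders; the 8 KB buffer size is arbitrary):

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class FileSender {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("192.168.1.10", 6000);
                 InputStream in = new BufferedInputStream(new FileInputStream("/sdcard/test.bin"));
                 OutputStream out = socket.getOutputStream()) {

                byte[] buffer = new byte[8 * 1024]; // read and send in 8 KB chunks
                int read;
                while ((read = in.read(buffer)) != -1) {
                    // Send only the bytes actually read; writing the whole array
                    // regardless of 'read' is what produces padding zeros.
                    out.write(buffer, 0, read);
                }
                out.flush();
            }
        }
    }

Writing only the bytes actually read from each chunk also avoids the padding zeros mentioned above, which come from sending a fixed-size array that was only partially filled.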
Short of reinventing TCP/IP, the fastest way to send will be via UDP. If your connection is good enough (few losses), then you only need to implement packet sequencing: the sender prepends a sequence number to each data packet, and the receiver keeps track of missed packets and requests them again after all the data has been sent. Once all the data is received, the receiver can reassemble the complete file.
This is a simplistic implementation of TCP over UDP.
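A sketch of just the sender half of that scheme, assuming a 4-byte sequence-number prefix (host, port, and chunk size are placeholders; tracking and re-requesting missing sequence numbers on the receiver is left out):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    public class SequencedUdpSender {
        public static void main(String[] args) throws Exception {
            InetAddress target = InetAddress.getByName("192.168.1.10");
            int port = 7000;
            byte[] fileData = new byte[2 * 1024 * 1024]; // stand-in for the file contents
            int chunkSize = 1024;

            try (DatagramSocket socket = new DatagramSocket()) {
                int seq = 0;
                for (int off = 0; off < fileData.length; off += chunkSize, seq++) {
                    int len = Math.min(chunkSize, fileData.length - off);
                    // 4-byte sequence number followed by the chunk payload.
                    ByteBuffer buf = ByteBuffer.allocate(4 + len);
                    buf.putInt(seq);
                    buf.put(fileData, off, len);
                    socket.send(new DatagramPacket(buf.array(), buf.position(), target, port));
                }
            }
        }
    }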
I am receiving ~3000 UDP packets per second, each of them ~200 bytes in size. I wrote a Java application which listens for those UDP packets and just writes the data to a file. The server then sends 15000 messages at the previously specified rate. After writing to the file, it contains only ~3500 messages. Using Wireshark I confirmed that all 15000 messages were received by my network interface. After that I tried changing the receive buffer size of the socket (which was initially 8496 bytes):

    ((java.net.MulticastSocket) socket).setReceiveBufferSize(32 * 1024);

That change increased the number of messages saved to ~8000. I kept increasing the buffer size up to 1 MB, at which point the number of messages saved reached ~14400. Increasing the buffer size to larger values didn't increase the number of messages saved, and I think I have reached the maximum allowed buffer size. Still, I need to capture all 15000 messages that were received by my network interface.
Any help would be appreciated. Thanks in advance.
Smells like a bug, most likely in your code. If the UDP packets are delivered over the network, they will be queued for delivery locally, as you've seen in Wireshark. Perhaps your program just isn't making timely progress on reading from its socket - is there a dedicated thread for this task?
You might be able to make some headway by detecting which packets are being lost by your program. If all the lost packets are early ones, perhaps the data is being sent before the program is waiting to receive it. If they're all later ones, perhaps it exits too soon. If they occur at regular intervals, there may be some trouble in the code that loops receiving packets, and so on.
In any case you seem exceptionally anxious about lost packets. By design UDP is not a reliable transport. If the loss of these multicast packets is a problem for your system (rather than just a mystery that you'd like to solve for performance reasons) then the system design is wrong.
The problem you appear to be having is a delay when writing to the file. I would read all the data into memory before writing it to the file (or write to the file in another thread).
However, there is no way to ensure 100% of packets are received with UDP without the ability to ask for packets to be sent again (something TCP does for you).
I see that you are using UDP to send the file contents. In UDP the order of packets is not assured. If you are not worried about the order, put all the packets in a queue and have another thread process the queue and write the contents to the file. That way the socket reader thread is not blocked by file operations; see the sketch below.
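A minimal sketch of that split, assuming a hand-off queue between the reader and the writer (the port, file name, and buffer sizes are placeholders):

    import java.io.FileOutputStream;
    import java.net.DatagramPacket;
    import java.net.MulticastSocket;
    import java.util.Arrays;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueuedReceiver {
        public static void main(String[] args) throws Exception {
            BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

            // Writer thread: drains the queue and does the slow file I/O.
            Thread writer = new Thread(() -> {
                try (FileOutputStream out = new FileOutputStream("packets.bin")) {
                    while (true) {
                        out.write(queue.take());
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            writer.setDaemon(true);
            writer.start();

            // Reader (main thread): does nothing but pull packets off the socket.
            try (MulticastSocket socket = new MulticastSocket(9000)) {
                // join the multicast group here if the traffic is multicast
                socket.setReceiveBufferSize(1024 * 1024);
                byte[] buf = new byte[2048];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                while (true) {
                    socket.receive(packet);
                    queue.offer(Arrays.copyOfRange(buf, 0, packet.getLength()));
                }
            }
        }
    }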
The receive buffer size is configured at OS level.
For example, on a Linux system: sysctl -w net.core.rmem_max=26214400, as in this article:
https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
This is a Windows only answer, but the following changes in the Network Controller Card properties made a DRAMATIC difference in packet loss for our use-case.
We are consuming around 200 Mbps of UDP data and were experiencing substantial packet loss under moderate server load.
The network card in use is an Asus ROG Aerion 10G card, but I would expect most high-end network controller cards to expose similar properties. You can access them via Device Manager->Network card->Right-Click->Properties->Advanced Options.
1. Increase number of Receive Buffers:
The default value was 512; we could increase it up to 1024. In our case, higher settings were accepted, but the network card became disabled once we exceeded 1024. Having a larger number of available buffers at the network-card level gives the system more tolerance to latency in transferring data from the network card buffers to the socket buffers where our apps can finally read the data.
2. Set Interrupt Moderation Rate to 'Off':
If I understood correctly, interrupt moderation coalesces multiple "buffer fill" notifications (via interrupts) into a single notification. So, the CPU will be interrupted less-often and fetch multiple buffers during each interrupt. This reduces CPU usage, but increases the chance a ready buffer is overwritten before being fetched, in case the interrupt is serviced late.
Additionally, we increased the socket buffer size (as the OP already did) and also enabled circular buffering at the socket level, as suggested by Len Holgate in a comment; this should also increase tolerance to latency in processing the socket buffers.
I'm writing a Java server (java.net.Socket, java.net.ServerSocket, java.io.ObjectOutputStream, java.io.ObjectInputStream) and I know I'm going to have limited bandwidth allocated for it.
I've written decorator objects for my output and input streams so I can count how many bytes go through them, for profiling purposes. But this won't give me any indication of the amount of overhead the connection itself adds.
I don't anticipate it will be much, but I'd like to prepare for it. I'm not going to try to optimize it; I just want to know how much it will be, for logistical reasons (how much bandwidth must I request, etc.).
I can't be the first person to try to get this information, but I can't seem to find good resources on the overhead of Java Sockets and TCP/IP in general. (Perhaps that's because there's nothing noteworthy to find... If we're on the order of kb per minute, it's really not much of a concern, but I'd still like to know!)
Thanks!
This question is challenging to answer with the information we have right now... for instance, what are you calling 'overhead'? Is it only TCP ACK packets, or all packet overhead (for instance Ethernet, IP, and TCP headers) for anything other than your data payload?
How many connections per minute? What is the average data transfer, per connection? If there are many very short-lived connections, your overhead requirements go up (due to 3-way handshake, and connection close requirements)... you could also have high overhead if the clients don't read much data, but many clients keep the connections open for days at a time.
Honestly, you're 50x better off modeling this in a lab and making some assumptions about hit rate per minute and concurrent clients... that will give you some ballpark numbers. Play around with limiting the bandwidth afforded to the application to the maximum your budget would allow... then start backing off... you can throttle bandwidth by using WANem on a dual-port Linux machine.
Getting lab results like this is far better than theoretical calculations.
HTH,
\mike (who spends all day testing network gear)
TCP overhead varies based on a number of factors, but is typically around 5% at full capacity.
Basically each "packet" has 20 bytes of IP header (and 20 more if IPv6) plus 20-32 bytes of TCP header. Packet sizes vary based on the network devices and conditions, but are often in the neighborhood of 1500 bytes.
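As a rough worked example of those numbers: a full-sized IPv4 packet on a typical 1500-byte MTU carries 1460 bytes of payload and 40 bytes of IP + TCP headers (assuming no TCP options), i.e. about 40 / 1500 ≈ 2.7% header overhead, before counting Ethernet framing (roughly another 18 bytes plus preamble and inter-frame gap) and the ACK traffic flowing in the other direction. That is how you end up in the few-percent range mentioned above.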
This page has some detail: http://sd.wareonearth.com/~phil/net/overhead/
In my opinion you can completely ignore keep-alives, as they are only used when the connection is idle anyway.