What is the best approach to send large UDP packets in sequence - Java

I have an Android application that needs to send data over UDP every 100 milliseconds. Each UDP packet is about 15,000 bytes on average, and the packets are sent as broadcast.
Every 100 milliseconds, the lines below are executed in a loop.
DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, broadcast, 9876);
clientSocket.send(sendPacket);
The application starts out working fine, but after about one minute the frequency of received packets decreases, until packets no longer arrive at the destination.
The theoretical limit (on Windows) for the maximum size of a UDP packet is 65,507 bytes.
I know the typical MTU of a network is 1500 bytes, and when I send a larger packet it is broken into several fragments; if any fragment does not reach the destination, the whole packet is lost.
I do not understand why the packets arrive correctly for the first minute and then stop arriving. So I wonder: what would be the best approach to solve this problem?

It's exactly the problem you described. Each datagram you broadcast is split into 44 packets (a 65,507-byte datagram fragments into roughly 44 IP packets at a 1500-byte MTU). If any one of those is lost, the whole datagram is lost. As soon as you have enough traffic to cause, say, 1% packet loss, you have about 35% datagram loss (1 - 0.99^44 ≈ 0.36); 2% packet loss means roughly 60% datagram loss (1 - 0.98^44 ≈ 0.59).
You need to keep your broadcast datagrams small enough not to fragment. If you have a stream of 65,507-byte chunks and you cannot change the fact that you need the whole chunk for the data to be useful, then naive UDP broadcast was a bad choice.
I'd have to know a lot more about the specifics of your application to make a sensible recommendation. But if you have chunks of data around 64 KB that are only useful in their entirety, and you can't change that, then you should use an approach that divides the data into pieces with enough redundancy that some pieces can be lost. With erasure coding, you can divide 65,507 bytes of data into 46 chunks of 1,490 bytes each, such that the original data can be reconstructed from any 44 of the chunks. This tolerates moderate datagram loss at the cost of only about a 4% increase in data size.
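As a minimal illustration of the erasure-coding idea (this is not the 44-of-46 Reed-Solomon scheme above, which needs a proper coding library; the class and method names here are invented): the simplest possible erasure code appends a single XOR parity chunk, which lets the receiver rebuild any one lost chunk.

    public class XorParity {
        // Split 'data' into n equal-size chunks (zero-padded) and append one
        // XOR parity chunk; any single lost chunk can then be reconstructed.
        static byte[][] encode(byte[] data, int n) {
            int chunkSize = (data.length + n - 1) / n;
            byte[][] chunks = new byte[n + 1][chunkSize];
            for (int i = 0; i < n; i++) {
                int from = Math.min(i * chunkSize, data.length);
                int len = Math.min(chunkSize, data.length - from);
                System.arraycopy(data, from, chunks[i], 0, len);
            }
            for (int i = 0; i < n; i++)          // parity = XOR of all data chunks
                for (int b = 0; b < chunkSize; b++)
                    chunks[n][b] ^= chunks[i][b];
            return chunks;
        }

        // Rebuild the one missing chunk by XOR-ing all the chunks that arrived.
        static byte[] reconstruct(byte[][] chunks, int lostIndex) {
            byte[] out = new byte[chunks[0].length];
            for (int i = 0; i < chunks.length; i++)
                if (i != lostIndex)
                    for (int b = 0; b < out.length; b++)
                        out[b] ^= chunks[i][b];
            return out;
        }
    }

A real deployment would use a Reed-Solomon library instead, so that more than one chunk per datagram's worth of data can be lost, as in the 44-of-46 scheme above.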

TCP is used specifically instead of UDP when you need reliable and correctly ordered delivery. But assuming you really need UDP for broadcasting, you could:
- debug the network to see how and where packets are lost, or whether it is the receiver that is clogged or lagging. Often you don't have control over these things, though. Is a WiFi network involved? If so, it is hard to get good QoS.
- do something at the application layer to ensure ordering and reliable delivery. For example, SIP normally runs over UDP, but the protocol uses transactions and sequence numbers so that clients and servers retransmit messages as needed (see the sketch after this list).
- implement packet loss concealment. Using some maths, the receiver can recreate a lost packet, analogous to how a RAID array can lose a drive and still function.
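As a rough sketch of the second option (this is not SIP itself; the 4-byte header and all names are invented for illustration), prefix every datagram with a sequence number so the receiver can at least detect gaps, and retransmit or conceal from there:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    public class SequencedUdp {
        // Sender side: prepend a 4-byte sequence number to each payload.
        static DatagramPacket frame(int seq, byte[] payload, InetAddress addr, int port) {
            ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
            buf.putInt(seq).put(payload);
            return new DatagramPacket(buf.array(), buf.capacity(), addr, port);
        }

        // Receiver side: detect lost or reordered datagrams. A real protocol
        // would acknowledge and retransmit here, as SIP's transaction layer does.
        static void receiveLoop(DatagramSocket socket) throws Exception {
            byte[] buf = new byte[1472];   // 1500-byte MTU minus IP/UDP headers
            int expected = 0;
            while (true) {
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                socket.receive(p);
                int seq = ByteBuffer.wrap(p.getData(), 0, 4).getInt();
                if (seq != expected)
                    System.out.println("gap: expected " + expected + ", got " + seq);
                expected = seq + 1;
                // ... hand the payload (offset 4, length p.getLength() - 4) upward ...
            }
        }
    }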
That your setup works fine for a minute and then degrades hints at either network congestion or software congestion on the sending or receiving side.
Can you do some packet captures with Wireshark and share the results?

Related

Fragment UDP/TCP segments in Java

I have to measure the speed of UDP and TCP between a client and a server for a university project, and for that we have to choose how much data to send, for example 500 bytes, 750 bytes, 1500 bytes...
On Linux I know how to decrease or increase the MTU, but I do not know how to do it in Java for my application. Is there any function to do this, or a way to force the socket to send exactly the amount of data that I want?
Thank you in advance!
The Java socket API is fairly high level and doesn't give you much fine-grained control.
That said, you can write x bytes to the socket and then flush it.
You should also enable TCP_NODELAY, otherwise the packets may end up being buffered.
As long as the number of bytes is less than the underlying OS MTU, the messages should be sent as separate packets.
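A minimal sketch of that (host, port, and message size are placeholders):

    import java.io.OutputStream;
    import java.net.Socket;

    public class FixedSizeSender {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9876)) {  // placeholder endpoint
                socket.setTcpNoDelay(true);   // TCP_NODELAY: don't coalesce small writes
                OutputStream out = socket.getOutputStream();
                byte[] chunk = new byte[750]; // one of the sizes to measure
                out.write(chunk);
                out.flush();                  // push the bytes out immediately
            }
        }
    }

Keep in mind that TCP is a byte stream even with TCP_NODELAY, so segment boundaries are never guaranteed; if you need exact on-the-wire sizes, UDP with a DatagramPacket of the desired length gives you direct control over the payload size.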

I chose UDP for my peer-to-peer service; how can I prove it's reliable in my situation?

I have two Debian servers located on the same subnet, connected by a switch. I am aware that UDP is unreliable.
Question 1: I assume the link layer is Ethernet, and the MTU of standard Ethernet is 1500 bytes. However, when I did a ping from one server to the other, I found that the maximum packet size that can be sent is 65507 bytes. Shouldn't it be 1500 bytes? Can I say that, because there is no router between these two servers, the IP datagram will not be fragmented?
Question 2: Because the two servers are directly connected by a switch, can I assume that all datagrams arrive in order and that there is no loss on the path?
Question 3: How can I determine the chance of a datagram being dropped at the server because of buffer overflow? What size should I set the receive buffer to so that datagrams do not overflow it?
No. UDP is not even reliable between processes on the same machine. If packets are sent to a socket without giving the receiving process time to read them, the buffer will overflow and packets will be lost.
You did your ping test with fragmentation enabled. Besides that, ping doesn't use UDP but ICMP, so the results mean nothing. UDP packets smaller than the MTU will not be fragmented, but the MTU depends on further factors, such as IP options and VLAN headers, so it may be smaller than 1500.
No. Switches perform buffering, and it is possible for their internal buffers to overflow. Consider a 24-port switch where 23 nodes are all transmitting as fast as possible to the last node. Clearly the link to the last node cannot carry the aggregate traffic of the 23 other links; the switch will try to buffer packets but will eventually end up dropping them.
Besides that, electrical noise can corrupt packets in transit, causing them to be discarded when the checksum fails.
To analyze the chance of buffer overflow, you can apply queueing theory to find the probability that a packet arrives when the buffer is full. You'll need some assumptions about the probability distributions of the packet arrival rate and the processing time. The number of packets in the buffer then forms a finite chain, hopefully a Markov one, which you can solve for the steady-state probability of each state. Good search keywords to find out more are "queueing theory", "Markov chain", "call capacity", "circuit capacity", and "load factor".
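For instance, if you assume Poisson arrivals and exponential service times (an M/M/1/K queue, which is only one of many possible models), the probability that an arriving packet finds a buffer of size K full is P(K) = (1 - rho) * rho^K / (1 - rho^(K+1)), where rho is the ratio of arrival rate to service rate. A small sketch of the arithmetic:

    public class MM1K {
        // Steady-state blocking probability of an M/M/1/K queue.
        static double blockingProbability(double arrivalRate, double serviceRate, int K) {
            double rho = arrivalRate / serviceRate;
            if (rho == 1.0) return 1.0 / (K + 1);  // limit case: all states equally likely
            return (1 - rho) * Math.pow(rho, K) / (1 - Math.pow(rho, K + 1));
        }

        public static void main(String[] args) {
            // e.g. packets arrive at 90% of the rate they are processed,
            // and the receive buffer holds 100 packets
            System.out.println(blockingProbability(0.9, 1.0, 100));  // ~2.7e-6
        }
    }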
EDIT: You changed the title of the question. The answer to your new question is: "You can't prove something that isn't true." If you want to make a reliable application using UDP, you should add your own acknowledgement and loss handling logic.
The 64 KB maximum packet size is an absolute limit of the protocol, as opposed to the 1500-byte MTU you may have configured (the MTU can easily be changed; the 64 KB limit cannot).
In practice you will probably never see reordered datagrams in your scenario. And you'll probably only lose them if the receiving side is not processing them fast enough (or is shut off completely).
The "chances" of a datagram being dropped by the receiver is not something we can really quantify without knowing a whole lot more about your situation. If the receiver processes datagrams faster than the sender sends them, you're fine, otherwise you may lose some--know how many and exactly when is a considerably finer point.
The IP stack will fragment and reassemble the packet for you. You can test this by setting the don't-fragment flag: the packet will then be dropped.
No. They will most likely arrive in order and probably won't be dropped, but the network stacks in your sender, the router, and the receiver are all free to drop a packet they can't handle when it arrives. Also remember that when a large packet is fragmented, one lost fragment means the whole packet will be dropped by the stack.
I guess you could probe by sending 1000 packets and measuring the loss, but historical values do not predict the future...
Question 1: You are confusing the MTU with the maximum UDP packet size; see here.
Question 2: Two servers connected via a switch do not guarantee that datagrams arrive in order. Other network transmissions will be occurring that can interfere with the UDP stream, potentially causing out-of-sequence frames.
Question 3: Answered by Ben Voigt above.

How to minimize UDP packet loss

I am receiving ~3000 UDP packets per second, each of them about 200 bytes in size. I wrote a Java application which listens to those UDP packets and just writes the data to a file. The server then sends 15,000 messages at the rate specified above. After the run, the file contains only ~3500 messages. Using Wireshark I confirmed that all 15,000 messages were received by my network interface. After that I tried changing the receive buffer size of the socket (which was initially 8496 bytes):
((java.net.MulticastSocket) socket).setReceiveBufferSize(32 * 1024);
That change increased the number of messages saved to ~8000. I kept increasing the buffer size up to 1 MB, at which point the number of messages saved reached ~14,400. Increasing the buffer size to larger values would not increase the number of messages saved. I think I have reached the maximum allowed buffer size. Still, I need to capture all 15,000 messages that were received by my network interface.
Any help would be appreciated. Thanks in advance.
Smells like a bug, most likely in your code. If the UDP packets are delivered over the network, they will be queued for delivery locally, as you've seen in Wireshark. Perhaps your program just isn't making timely progress reading from its socket; is there a dedicated thread for this task?
You might be able to make some headway by detecting which packets your program loses. If all the lost packets are early ones, perhaps the data is being sent before your program is ready to receive it. If they are all later ones, perhaps it exits too soon. If they occur at regular intervals, there may be a problem in the loop that receives the packets. Etc.
In any case, you seem exceptionally anxious about lost packets. By design, UDP is not a reliable transport. If the loss of these multicast packets is a problem for your system (rather than just a mystery that you'd like to solve for performance reasons), then the system design is wrong.
The problem you appear to be having is delay when writing to the file. I would read all the data into memory before writing to the file (or write to the file in another thread).
However, there is no way to ensure 100% of packets are received with UDP without the ability to ask for packets to be sent again (something TCP does for you).
I see that you are using UDP to send the file contents. In UDP the order of packets is not assured. If you are not worried about the order, put all the packets in a queue and have another thread process the queue and write the contents to the file. This way the socket-reader thread is not blocked by file operations.
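A minimal sketch of that structure (port, file name, and sizes are placeholders): one thread does nothing but drain the socket into a queue, while a second thread drains the queue to disk.

    import java.io.FileOutputStream;
    import java.net.DatagramPacket;
    import java.net.MulticastSocket;
    import java.util.Arrays;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueuedReceiver {
        public static void main(String[] args) throws Exception {
            BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
            MulticastSocket socket = new MulticastSocket(9876);   // placeholder port
            socket.setReceiveBufferSize(1024 * 1024);

            Thread writer = new Thread(() -> {      // file I/O happens only here
                try (FileOutputStream out = new FileOutputStream("packets.bin")) {
                    while (true)
                        out.write(queue.take());    // blocks until a packet is queued
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            writer.start();

            byte[] buf = new byte[512];             // > the ~200-byte messages
            while (true) {
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                socket.receive(p);                  // the reader never touches the disk
                queue.add(Arrays.copyOf(p.getData(), p.getLength()));
            }
        }
    }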
The maximum receive buffer size is configured at the OS level.
For example, on a Linux system: sysctl -w net.core.rmem_max=26214400, as in this article:
https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
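Since the OS may silently clamp whatever the application requests, it is worth reading the value back after setting it (a sketch; the port is a placeholder):

    import java.net.DatagramSocket;

    public class BufferCheck {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(9876);  // placeholder port
            socket.setReceiveBufferSize(26214400);             // request 25 MB
            // The OS may cap this at net.core.rmem_max; check what was granted:
            System.out.println("granted: " + socket.getReceiveBufferSize());
        }
    }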
This is a Windows-only answer, but the following changes in the network controller card's properties made a DRAMATIC difference in packet loss for our use case.
We are consuming around 200 Mbps of UDP data and were experiencing substantial packet loss under moderate server load.
The network card in use is an Asus ROG Aerion 10G card, but I would expect most high-end network controller cards to expose similar properties. You can access them via Device Manager->Network card->Right-Click->Properties->Advanced Options.
1. Increase number of Receive Buffers:
The default value was 512; we could increase it up to 1024. In our case, higher settings were accepted, but the network card became disabled once we exceeded 1024. Having a larger number of available buffers at the network-card level gives the system more tolerance to latency in transferring data from the network card's buffers to the socket buffers, where our applications can finally read the data.
2. Set Interrupt Moderation Rate to 'Off':
If I understood correctly, interrupt moderation coalesces multiple "buffer fill" notifications (via interrupts) into a single notification. So the CPU is interrupted less often and fetches multiple buffers on each interrupt. This reduces CPU usage, but increases the chance that a ready buffer is overwritten before being fetched if the interrupt is serviced late.
Additionally, we increased the socket buffer size (as the OP already did) and also enabled circular buffering at the socket level, as suggested by Len Holgate in a comment; this should also increase tolerance to latency in processing the socket buffers.

Java/Android: UDP - can't receive larger packets (but still < 64k)

I am working on a client-server Android application (two applications: one client and one server). The server is expected to send a video to the client over UDP. I am dividing the video into individual frames, each of which ends up being about 50,000 bytes, which is theoretically still less than the maximum for UDP.
I am currently testing on two Android emulators running on the same machine, and using UDP port forwarding in between to connect them.
I have set up the UDP transfer such that if I send a byte array of ~5000 bytes or less, it works fine. If I attempt to send my frame byte arrays (50,000 bytes), the application freezes in the DatagramSocket.receive() method on the client.
Is there any way to set up the UDP transmission to receive a larger byte size?
Thanks for your help.
But it isn't less than the practical maximum for UDP, which is 534 or 576 bytes (I can't remember which at the moment, sorry). Whichever it is, that is the largest packet size that will avoid fragmentation. Once a UDP packet is fragmented into N fragments, it is roughly N times as likely to be lost.
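Separately from the fragmentation issue, one thing worth checking on the client (a sketch; the port is a placeholder): the buffer handed to DatagramSocket.receive() must itself be large enough, because a datagram longer than the buffer is silently truncated to the buffer's length.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class LargeReceive {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(9876);  // placeholder port
            byte[] buf = new byte[65507];                      // maximum UDP payload
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);      // anything longer than buf would be truncated
            System.out.println("received " + packet.getLength() + " bytes");
        }
    }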

In Java, how do I deal with UDP messages that are greater than the maximum UDP data payload?

I read this question about the error that I'm getting, and I learned that UDP data payloads can't be more than 64 KB. The suggestions that I've read are to use TCP, but that is not an option in this particular case. I am interfacing with an external system that transmits data over UDP, but I don't have access to that external system at this time, so I'm simulating it.
I have data messages that are upwards of 1,400,000 bytes in some instances, and it is a requirement that the UDP protocol is used. I am not able to change protocols (I would much rather use TCP or a reliable protocol built on UDP). Instead, I have to find a way to transmit large payloads over UDP from a test application into the system that I am building, and to read those large payloads in that system for processing. I don't have to worry about dropped packets, either: if I don't get the datagram, I don't care; just wait for the next payload to arrive. If it's incomplete or missing, throw it all away and continue waiting. I also don't know the size of a datagram in advance (they range from a few hundred bytes to 1,400,000+ bytes).
I've already set my send and receive buffer sizes large enough, but that's not sufficient. What else can I do?
UDP packets have a 16-bit length field. This has nothing to do with Java: they cannot be bigger, period. If the server you are talking to is immutable, you are stuck with what you can fit into one packet.
If you can change the server, and thus the protocol, you can more or less reimplement TCP yourself. Since UDP is defined to be unreliable, you need the full retransmission mechanism to cope with packets that are dropped somewhere in the network. So you have to split the 'message' into chunks, send the chunks, and have a protocol for requesting retransmission of lost chunks.
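A sketch of the sending-side chunking (the 12-byte header layout and all names are invented; since the question says incomplete messages can simply be discarded, no retransmission is shown): each datagram carries a message id, a chunk index, and the total chunk count.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class Chunker {
        static final int CHUNK = 1400;   // stay under a typical 1500-byte MTU

        // Split one large message into datagram payloads laid out as
        // [msgId:int][index:int][count:int][data...].
        static List<byte[]> split(int msgId, byte[] message) {
            int count = (message.length + CHUNK - 1) / CHUNK;
            List<byte[]> payloads = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                int len = Math.min(CHUNK, message.length - i * CHUNK);
                ByteBuffer buf = ByteBuffer.allocate(12 + len);
                buf.putInt(msgId).putInt(i).putInt(count)
                   .put(message, i * CHUNK, len);
                payloads.add(buf.array());
            }
            return payloads;
        }
    }

On the receiving side you would keep the chunks for the current msgId in a map keyed by index, deliver the concatenation once all count chunks are present, and throw away any partially assembled message as soon as a newer msgId starts arriving, which matches the "throw it all away and continue waiting" requirement.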
It's a requirement ...
The requirement should also therefore dictate the packetization technique. You need more information about the external system and its protocol. Note that the maximum IPv4 UDP payload is 65535 - 28 bytes, and the maximum practical payload is under 1500 bytes once a router gets involved.
