Multicast data overhead? - java

My application uses the multicast capability of UDP.
In short, I am using Java and wish to transmit all data using a single multicast address and port, although the multicast listeners will be logically divided into subgroups, which can change at runtime, and may not wish to process data that comes from outside of their group.
To make this happen I have written the code so that all running instances of the application join the same multicast group and port, but carefully observe each packet's sender to determine whether it belongs to their subgroup.
Warning: the minimum packet size for my application is 30000-60000 bytes!
Will reading every packet using MulticastSocket.receive(DatagramPacket) and determining whether it is the required packet cause too much overhead (even buffer overflow)?
Would it generate massive traffic leading to congestion in the network, because every packet is sent to everyone?

Every packet is not sent to everyone, since multicast (e.g. PIM) builds a multicast tree that places receivers and senders optimally; the network copies the packet only as and when needed. Multicast packets are broadcast (more accurately, flooded at Layer 2) at the final hop. IGMP assists multicast at that last hop and ensures that if no receiver has joined on the last hop, no such flooding is done.
"and may not wish to process data that comes from outside of their group." The receive call returns the next received datagram, so there is little one can do to avoid processing packets that are not meant for the subgroup. Can't your application use multiple different multicast groups?

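For what it's worth, a minimal receive-and-filter sketch along the lines described above might look like the following. The group address, port, and the subgroupSenders set are made-up placeholders; real code would need some way to keep that set in sync as subgroups change at runtime.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.Collections;
import java.util.Set;

public class SubgroupReceiver
{
    public static void main(String[] args) throws Exception
    {
        InetAddress group = InetAddress.getByName("230.0.0.1");   // placeholder group address
        int port = 4446;                                          // placeholder port
        // Senders that belong to this instance's subgroup (placeholder membership).
        Set<InetAddress> subgroupSenders =
                Collections.singleton(InetAddress.getByName("192.168.1.10"));

        try (MulticastSocket socket = new MulticastSocket(port))
        {
            socket.joinGroup(group);
            byte[] buf = new byte[65507];                         // big enough for any UDP payload
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true)
            {
                socket.receive(packet);                           // blocks until the next datagram
                if (subgroupSenders.contains(packet.getAddress()))
                {
                    // process packet.getData(), bytes 0 .. packet.getLength()
                }
                // otherwise the datagram is simply discarded in user space
            }
        }
    }
}
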
Every packet may be sent to everyone, but each one will only appear on the network once.
However, unless this application is running entirely in a LAN that is under your control, including all routers, it is already wildly infeasible. The generally accepted maximum practical UDP datagram size is 534 bytes once you go through a router you don't control.
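If you do have to move 30-60 KB messages while keeping each datagram that small, the usual approach is to fragment at the application layer. A rough sender-side sketch follows; the 12-byte header of message id, chunk index, and chunk count is an invented convention that a matching receiver would have to reassemble.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class ChunkedSender
{
    private static final int CHUNK_PAYLOAD = 534 - 12;   // stay under 534 bytes including our header

    // Splits one large message into datagrams of the form [msgId][index][count][data].
    static void sendChunked(DatagramSocket socket, InetAddress group, int port,
                            int msgId, byte[] message) throws Exception
    {
        int count = (message.length + CHUNK_PAYLOAD - 1) / CHUNK_PAYLOAD;
        for (int i = 0; i < count; i++)
        {
            int off = i * CHUNK_PAYLOAD;
            int len = Math.min(CHUNK_PAYLOAD, message.length - off);
            ByteBuffer chunk = ByteBuffer.allocate(12 + len);
            chunk.putInt(msgId).putInt(i).putInt(count).put(message, off, len);
            socket.send(new DatagramPacket(chunk.array(), chunk.position(), group, port));
        }
    }
}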

Dealing with UDP unreliability

I'm using the DatagramPacket and DatagramSocket classes in Java to send messages with UDP. I have incomplete knowledge about networking. I know that:
when a datagram is sent, it may in fact be split into multiple pieces of data travelling independently on the network (for example, if my datagram length is greater than the MTU).
UDP does not guarantee the order of messages at receiving (and does not guarantee the receiving of messages at all).
Putting this information together, I "understand" that if I send one (large) DatagramPacket, I may receive the bytes of my datagram in any order (and some parts may even be missing)! But I think I misunderstood something because if it was the case, nobody would use such a protocol.
How can I ensure that the datagram I receive (if I receive it) is equal to the datagram I have sent?
Your understanding is incorrect. If your datagram is broken into fragments by IP (below the UDP layer) at the sending side, then IP at the receiver will reassemble those fragments in the correct order before passing the entire reassembled datagram up to the receiver's UDP layer. If any fragments of the datagram are lost then the reassembly will fail, the partially-reconstructed datagram will be discarded, and nothing will be passed up to the receiver's UDP layer. So the receiving UDP -- and therefore the receiving application -- gets either a complete datagram or nothing. It will never get a partial datagram, and it will never get a datagram whose content has been scrambled because of fragmentation.
The receiving application can be given a partial (truncated) datagram if the incoming datagram is larger than the application's receive buffer, but that has nothing to do with fragmentation.
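On the Java side, the way to avoid that truncation is simply to hand receive() a buffer at least as large as the biggest datagram you expect. A tiny sketch, assuming an already-bound DatagramSocket named socket:

byte[] buf = new byte[65507];                        // maximum UDP payload over IPv4
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);                              // 'socket' is an existing DatagramSocket
int received = packet.getLength();                   // the actual size of this datagram
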
Putting this information together, I "understand" that if I send one (large) DatagramPacket, I may receive the bytes of my datagram in any order
No.
(and some parts may even be missing)!
No.
You will receive a UDP datagram intact and entire or not at all.
But I think I misunderstood something because if it was the case, nobody would use such a protocol.
Correct. It isn't the case.
How can I ensure that the datagram I receive (if I receive it) is equal to the datagram I have sent?
It always is. If it arrives. However it may arrive zero, one, or more times, and it may arrive out of order.
The generally accepted maximum practical UDP datagram size is 534 bytes of payload. You are guaranteed that IP will not fragment that, either at the sender or at any intermediate host, and non-fragmentation decreases your chance of packet loss. (If any fragment is lost, the whole datagram is lost, as ottomeister's answer above states.)
If sequence is important to you, you need sequence numbers in your datagrams. This can also help to protect you against duplicates, as you know what sequence number you're up to so you can spot a duplicate.
If arrival is important to you, you need an ACK- or NACK-based application protocol.
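As a sketch of the sequence-number idea (the 8-byte header and the method names here are just illustrative, not a standard format):

import java.nio.ByteBuffer;

public class SequencedCodec
{
    // Sender side: prefix the payload with an 8-byte sequence number.
    static byte[] encode(long seq, byte[] payload)
    {
        return ByteBuffer.allocate(8 + payload.length).putLong(seq).put(payload).array();
    }

    // Receiver side: read the sequence number back and classify the datagram.
    static void decode(byte[] data, int length, long expectedSeq)
    {
        ByteBuffer buf = ByteBuffer.wrap(data, 0, length);
        long seq = buf.getLong();
        if (seq < expectedSeq)
        {
            // duplicate or very late datagram: ignore or log it
        }
        else if (seq > expectedSeq)
        {
            // gap: datagrams expectedSeq .. seq-1 were lost or are still in flight
        }
        // buf.remaining() bytes of application payload follow the header
    }
}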

Java - Detect a ping

Is it possible to detect a ping? I.e. device 1 pings device 2 and I want code that can run on device 2 that would detect whenever it is pinged by device 1.
The literal message used by the ping utility ("ICMP Echo Request") can be difficult to detect because it is customarily handled, and "eaten," by the network protocol-stack.
But if you simply want one computer or process to "broadcast" the fact that it's present (i.e. you don't need to receive a "reply," and you don't strictly care whether the message actually arrives at a particular place), the "UDP" network protocol might be just what you're looking for. Programming examples abound on the Internet. (And, right here, awaiting your "search.")
("UDP" is a datagram protocol, which so-to-speak "tosses a paper airplane out the window", whereas "TCP/IP" is concerned with bidirectional connections that are established, used, then torn-down.)
You can open a TCP or UDP socket on device 2 on some specific port and then try to connect to the same port from device 1.
https://docs.oracle.com/javase/tutorial/networking/sockets/readingWriting.html
You can decide what you want to use by reading about TCP and UDP.
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
https://en.wikipedia.org/wiki/User_Datagram_Protocol
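A minimal UDP sketch of the idea described in the answers above (the port number and message text are arbitrary choices): device 2 listens for small datagrams, and device 1 announces itself by sending one.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class PresenceListener              // runs on device 2
{
    public static void main(String[] args) throws Exception
    {
        try (DatagramSocket socket = new DatagramSocket(9876))
        {
            byte[] buf = new byte[256];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true)
            {
                socket.receive(packet);    // blocks until a datagram arrives
                System.out.println("Ping-like message from " + packet.getAddress());
            }
        }
    }
}

// On device 1, something like this announces its presence:
// try (DatagramSocket s = new DatagramSocket())
// {
//     byte[] msg = "hello".getBytes();
//     s.send(new DatagramPacket(msg, msg.length, InetAddress.getByName("device2-host"), 9876));
// }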

detecting TCP/IP packet loss

I have TCP communication via a socket, with code like:
// socket, out, in and port are instance fields of the enclosing class (not shown)
public void openConnection() throws Exception
{
    socket = new Socket();
    InetAddress iNet = InetAddress.getByName("server");
    InetSocketAddress sock = new InetSocketAddress(iNet, Integer.parseInt(port));
    socket.connect(sock, 0);
    out = new PrintWriter(socket.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
}
and a send method like:
synchronized void send(String message)
{
    try
    {
        out.println(message);
    }
    catch (Exception e)
    {
        throw new RuntimeException(this.getClass() + ": Error Sending Message: "
                + message, e);
    }
}
And I read on web pages that TCP/IP does not guarantee delivery of a packet; it retries, but if the network is too busy, the packet may be dropped ([link]).
Packets can be dropped when transferring data between systems for two key reasons:
Heavy network utilization and resulting congestion
Faulty network hardware or connectors
TCP is designed to be able to react when packets are dropped on a network. When a packet is successfully delivered to its destination, the destination system sends an acknowledgement message back to the source system. If this acknowledgement is not received within a certain interval, that may be either because the destination system never received the packet or because the packet containing the acknowledgement was itself lost. In either case, if the acknowledgement is not received by the source system in the given time, the source system assumes that the destination system never received the message and retransmits it. It is easy to see that if the performance of the network is poor, packets are lost in the first place, and the increased load from these retransmit messages is only increasing the load on the network further, meaning that more packets will be lost. This behaviour can result in very quickly creating a critical situation on the network.
Is there any way I can detect whether the packet was received successfully by the destination or not? I am not sure that out.println(message); will throw any exception, as this is a non-blocking call; it will put the message in a buffer and return, letting TCP/IP do its work.
Any help?
TCP is designed to be able to react when packets are dropped on a network.
As your quote says, TCP is designed to react automatically to the events you mention in this text. As such, you don't have anything to do at this level, since this is handled by the TCP implementation you're using (e.g. in the OS).
TCP has some features that will do some of the work for you, but you are right to wonder about their limitations (many people think of TCP as a guaranteed delivery protocol, without context).
There is an interesting discussion on the Linux Kernel Mailing List ("Re: Client receives TCP packets but does not ACK") about this.
In your use case, practically, this means that you should treat your TCP connection as a stream of data in each direction (the classic mistake is to assume that if you send n bytes from one end, you'll read n bytes in a single buffer read on the other end), and handle possible exceptions.
Handling java.io.IOExceptions properly (in particular subclasses in java.net) will cover error cases at the level you describe: if you get one, have a retry strategy (depending on what the application and its user is meant to do). Rely on timeouts too (don't set a socket as blocking forever).
Application protocols may also be designed to send their own acknowledgement when receiving commands or requests.
This is a matter of assigning responsibilities to different layers. The TCP stack implementation will handle the packet loss problems you mention, and throw an error/exception if it can't fix it by itself. Its responsibility is the communication with the remote TCP stack. Since in most cases you want your application to talk to a remote application, there needs to be an extra acknowledgement on top of that. In general, the application protocol needs to be designed to handle these cases. (You can go a number of layers up in some cases, depending on which entity is meant to take responsibility to handle the requests/commands.)
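As a rough illustration of such an extra acknowledgement, here is a method that could sit next to the question's send(...), building on its socket/out/in fields (it additionally needs java.io.IOException and java.net.SocketTimeoutException imports). The "ACK" reply string is an invented convention that the server side would also have to implement.

// Sends a message and waits for the peer's application-level acknowledgement.
synchronized boolean sendWithAck(String message) throws IOException
{
    socket.setSoTimeout(5000);          // don't wait forever for the reply
    out.println(message);
    if (out.checkError())               // PrintWriter swallows IOExceptions; check explicitly
    {
        return false;
    }
    try
    {
        String reply = in.readLine();   // blocks until the peer answers or the timeout expires
        return "ACK".equals(reply);
    }
    catch (SocketTimeoutException e)
    {
        return false;                   // no acknowledgement within the timeout
    }
}
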
TCP does not simply give up on a dropped packet: the retransmission and congestion-control algorithms inside the TCP implementation take care of resending it. Assuming that there is a steady stream of data being sent, the receiver will acknowledge back to the sender which sequence numbers it has received. The sender can use the acknowledgements to infer which packets need to be resent; it holds packets until they have been acknowledged.
As an application, unless the TCP implementation provides mechanisms to receive notification of congestion, the best it can do is establish a timeout for when the transaction can complete. If the timeout occurs before the transaction completes, the application can declare the network to be too congested for the application to succeed.
Code what you need. If you need acknowledgements, implement them. If you want the sender to know the recipient got the information, then have the recipient send some kind of acknowledgement.
From an application standpoint, TCP provides a bi-directional byte stream. You can communicate whatever information you want over that by simply specifying streams of bytes that convey the information you need to communicate.
Don't try to make TCP do anything else. Don't try to "teach TCP" your protocol.

UDP packets waiting and then arriving together

I have a simple Java program which acts as a server, listening for UDP packets. I then have a client which sends UDP packets over 3g.
Something I've noticed is occasionally the following appears to occur: I send one packet and seconds later it is still not received. I then send another packet and suddenly they both arrive.
I was wondering if it was possible that some sort of system is in place to wait for a certain amount of data instead of sending an undersized packet. In my application, I only send around 2-3 bytes of data per packet - although the UDP header and what not will bulk the message up a bit.
The aim of my application is to get these few bytes of data from A to B as fast as possible. Huge emphasis on speed. Is it all just coincidence? I suppose I could increase the packet size, but it just seems like the transfer time will increase, and 3g isn't exactly perfect.
Since the comments are getting rather lengthy, it might be better to turn them into an answer altogether.
If your app is not receiving data until a certain quantity has arrived, then chances are there is some sort of buffering going on behind the scenes. A good example (not saying this applies to you directly) is that if you or the underlying libraries are using BufferedReader.readLine() or DataInputStream.readFully(bytes), the call will block until it receives a newline or the full buffer's worth of bytes before returning. Judging by the fact that your program seems to retrieve all of the data once a certain threshold is reached, it sounds like this is the case.
A good way to debug this is to use Wireshark. Wireshark doesn't care about your program; it analyzes the raw packets that are sent to and from your computer, and can tell you whether the issue is on the sender or the receiver.
If you use Wireshark and see that the data from the first send arrives on your physical machine well before the second, then the issue lies with your receiving end. If you see that the first packet arrives at the same time as the second packet, then the issue lies with the sender. Without seeing the code, it's hard to say what you're doing and what, specifically, is causing the data to only show up after receiving more than 2-3 bytes, but until then, this behavior describes exactly what you're seeing.
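To make the buffering point concrete, here is the kind of read that exhibits this behaviour; the stream source is hypothetical and only serves to illustrate why several small sends can appear to arrive at once:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlockingReadExample
{
    // Does not return until all n bytes have arrived, no matter how many separate
    // packets carried them; to the caller it looks as if the data "waited and then
    // arrived together".
    static byte[] readExactly(InputStream source, int n) throws IOException
    {
        DataInputStream in = new DataInputStream(source);
        byte[] chunk = new byte[n];
        in.readFully(chunk);
        return chunk;
    }
}
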
There are several probable causes of this:
Cellular data networks are not "always-on". Depending on the underlying technology, there can be a substantial delay between when a first packet is sent and when IP connectivity is actually established. This will be most noticeable after IP networking has been idle for some time.
Your receiver may not be correctly checking the socket for readability. Regardless of what high-level APIs you may be using, underneath there needs to be a call to select() to check whether the socket is readable. When a datagram arrives, select() should unblock and signal that the socket descriptor is readable. Alternatively, but less efficiently, you could set the socket to non-blocking and poll it with a read. Polling wastes CPU time when there is no data and delays detection of arrival for up to the polling interval, but can be useful if for some reason you can't spare a thread to wait on select().
I said above that select() should signal readability on a watched socket when data arrives, but this behavior can be modified by the socket's "receive low-water mark". The default value is usually 1, meaning any data will signal readability. But if SO_RCVLOWAT is set higher (via setsockopt() or a higher-level equivalent), then readability will not be signaled until more than the specified amount of data has arrived. You can check the value with getsockopt() or whatever API is equivalent in your environment.
Item 1 would cause the first datagram to actually be delayed, but only when the IP network has been idle for a while and not once it comes up active. Items 2 and 3 would only make it appear to your program that the first datagram was delayed: a packet sniffer at the receiver would show the first datagram arriving on time.
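In Java, the select()-style readability check described in item 2 maps onto NIO's Selector and DatagramChannel. A sketch of a receiver that is woken as soon as a single datagram is queued, however small (the port number is an arbitrary example):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectableReceiver
{
    public static void main(String[] args) throws Exception
    {
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(5000));
        channel.configureBlocking(false);
        Selector selector = Selector.open();
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocate(1500);
        while (true)
        {
            selector.select();                       // blocks until the channel is readable
            for (SelectionKey key : selector.selectedKeys())
            {
                if (key.isReadable())
                {
                    buf.clear();
                    channel.receive(buf);            // returns after one datagram, however small
                    buf.flip();
                    // handle buf.remaining() bytes here
                }
            }
            selector.selectedKeys().clear();         // reset for the next select() call
        }
    }
}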

Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing an essentially more financial version of Monopoly, and thus it is required that packets be sent and confirmed as sent/received. Do I need to implement my own packet-loss watcher, or is that already taken care of by (Server)Sockets?
I'm primarily asking about losing packets during network blips or whatnot, not full connection error. E.g. siblings move a lead plate between my router and computer's wi-fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically, yes. There is no way to give a 100% theoretical guarantee that what is sent at the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol) this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
Just looked briefly at your code, and it seems you're using multiple threads. If so, you must be utterly careful with synchronization. It could very well be the case that it looks like a packet has been dropped when it is simply not handled, due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to solve the synchronization, I think, is to put the network loop in a very small read/put-on-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene.
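A sketch of that read/put-on-queue loop, using a java.util.concurrent.BlockingQueue so the hand-off itself needs no explicit locking (the class and method names here are illustrative, not from the linked project):

import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PacketPump implements Runnable
{
    private final ObjectInputStream in;
    private final BlockingQueue<Object> received = new LinkedBlockingQueue<>();

    public PacketPump(ObjectInputStream in)
    {
        this.in = in;
    }

    @Override
    public void run()                        // the dedicated network thread
    {
        try
        {
            while (true)
            {
                received.put(in.readObject());   // tiny loop: read one object, enqueue, repeat
            }
        }
        catch (Exception e)
        {
            // stream closed or corrupted; let the consumer notice the shutdown
        }
    }

    public Object nextPacket() throws InterruptedException
    {
        return received.take();              // called from the game/GUI thread when it's ready
    }
}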
