UDP packets waiting and then arriving together - java

I have a simple Java program which acts as a server, listening for UDP packets. I then have a client which sends UDP packets over 3g.
Something I've noticed is occasionally the following appears to occur: I send one packet and seconds later it is still not received. I then send another packet and suddenly they both arrive.
I was wondering whether some mechanism might be in place that waits for a certain amount of data instead of sending an undersized packet. In my application I only send around 2-3 bytes of data per packet, although the UDP header and whatnot will bulk the message up a bit.
The aim of my application is to get these few bytes of data from A to B as fast as possible. Huge emphasis on speed. Is it all just coincidence? I suppose I could increase the packet size, but it just seems like the transfer time will increase, and 3g isn't exactly perfect.

Since the comments are getting rather lengthy, it might be better to turn them into an answer altogether.
If your app is not receiving data until a certain quantity has arrived, then chances are there is some sort of buffering going on behind the scenes. A good example (not saying this applies to you directly): if you or an underlying library are using BufferedReader.readLine() or DataInputStream.readFully(bytes), the call will block until a newline, or bytes.length bytes, has been received before returning. Judging by the fact that your program seems to retrieve all of the data once a certain threshold is reached, it sounds like this is the case.
A good way to debug this is to use Wireshark. Wireshark doesn't care about your program--it's analyzing the raw packets sent to and from your computer, and can tell you whether the issue is on the sending or the receiving side.
If you use Wireshark and see that the data from the first send arrives on your physical machine well before the second, then the issue lies with your receiving end. If you see that the first packet arrives at the same time as the second packet, then the issue lies with the sender. Without seeing the code it's hard to say what, specifically, is causing the data to only show up after more than 2-3 bytes have been received--but buffering like this matches exactly the behavior you're seeing.
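To illustrate the difference, here is a minimal receive loop using DatagramSocket; the port and buffer size are placeholders. DatagramSocket.receive() returns as soon as a single datagram arrives, however small, so no line- or size-based buffering can delay it on the Java side.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpReceiveSketch {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(5000);   // hypothetical port
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            // receive() returns as soon as ONE datagram arrives,
            // no matter how few bytes it carries
            socket.receive(packet);
            System.out.println("got " + packet.getLength() + " bytes from "
                    + packet.getSocketAddress());
        }
    }
}

If your receiving code instead wraps the incoming data in a line- or block-oriented reader, that wrapper, not UDP itself, is what introduces the "wait for more data" behavior.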

There are several probable causes of this:
1. Cellular data networks are not "always-on". Depending on the underlying technology, there can be a substantial delay between when a first packet is sent and when IP connectivity is actually established. This will be most noticeable after IP networking has been idle for some time.
2. Your receiver may not be correctly checking the socket for readability. Regardless of what high-level APIs you may be using, underneath there needs to be a call to select() to check whether the socket is readable. When a datagram arrives, select() should unblock and signal that the socket descriptor is readable. Alternatively, but less efficiently, you could set the socket to non-blocking and poll it with a read. Polling wastes CPU time when there is no data and delays detection of arrival by up to the polling interval, but can be useful if for some reason you can't spare a thread to wait on select().
3. I said above that select() should signal readability on a watched socket when data arrives, but this behavior can be modified by the socket's "receive low-water mark". The default value is usually 1, meaning any data will signal readability. But if SO_RCVLOWAT is set higher (via setsockopt() or a higher-level equivalent), readability will not be signaled until at least the specified amount of data has arrived. You can check the value with getsockopt() or whatever API is equivalent in your environment.
Item 1 would cause the first datagram to actually be delayed, but only when the IP network has been idle for a while, not once the connection is up and active. Items 2 and 3 would only make it appear to your program that the first datagram was delayed: a packet sniffer at the receiver would show the first datagram arriving on time.
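As a concrete illustration of item 2, here is a minimal sketch of waiting for readability with a Selector on a DatagramChannel; the port is a placeholder. select() returns as soon as one datagram is available, so a correctly written receiver sees each small datagram immediately.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class UdpSelectorSketch {
    public static void main(String[] args) throws Exception {
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(5000));       // hypothetical port
        channel.configureBlocking(false);
        Selector selector = Selector.open();
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocate(1500);
        while (true) {
            selector.select();                           // blocks until a datagram is readable
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    buf.clear();
                    System.out.println("datagram from " + channel.receive(buf));
                }
            }
            selector.selectedKeys().clear();             // clear the set between select() calls
        }
    }
}

As far as I know, Java's standard socket options do not expose SO_RCVLOWAT (item 3), so if it is being changed at all it would be happening in native code or at another layer.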

Related

Multicast data overhead?

My application uses the multicast capability of UDP.
In short, I am using Java and wish to transmit all data using a single multicast address and port. However, the multicast listeners will be logically divided into subgroups, which can change at runtime, and may not wish to process data that comes from outside their group.
To make this happen I have written the code so that all running instances of the application join the same multicast group and port, but carefully check each packet's sender to determine whether it belongs to their sub-group.
Warning: the minimum packet size for my application is 30,000-60,000 bytes!
Will reading every packet using MulticastSocket.receive(DatagramPacket) and determining whether it's the required packet cause too much overhead (or even buffer overflow)?
Would it generate massive traffic leading to congestion in the network, because every packet is sent to everyone?
Every packet is not sent to everyone, since multicast routing (e.g. PIM) builds a multicast tree that places receivers and senders optimally, so the network copies the packet only as and when needed. Multicast packets are broadcast (more accurately, flooded at Layer 2) on the final hop. IGMP assists multicast on the last hop and ensures that if no receiver has joined there, no such flooding is done.
"and may not wish to process data that comes from outside of their group." The receive call returns the next received datagram, so there is little you can do to avoid processing packets that are not meant for your subgroup. Can't your application use multiple different groups?
Every packet may be sent to everyone, but each one will only appear on the network once.
However, unless this application is running entirely in a LAN that is under your control, including all routers, it is already wildly infeasible. The generally accepted maximum UDP datagram size is 534 bytes once you go through a router you don't control.

Java TCP latency

I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (maximum 4 clients). The server handles each client on a separate thread; data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am more interested in sending updates fast than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to call libraries you don't control that can block (for example because they touch a file or perform additional network communication). But they are expensive: they consume a lot of resources, and as such they can actually make things slower than they need to be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands, each carrying its own state, and make sure you never perform an operation that blocks. Each command is processed in order, and its state is evaluated and updated (for example, a laser blast intersected with another object). If a command requires something that blocks (like reading from a file), perform a non-blocking read and set the state of that command accordingly, so the command processor itself never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you have to call Thread.sleep(x) (or block on the queue with a timeout) appropriately so as not to waste CPU.
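A minimal sketch of that single-threaded command processor, assuming hypothetical Command and GameState types; blocking on the queue with a timeout takes the place of Thread.sleep(), so the loop neither spins when idle nor adds latency when commands arrive.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical types for this sketch.
interface Command { void apply(GameState state); }
class GameState { /* positions, scores, ... */ }

class CommandProcessor implements Runnable {
    private final BlockingQueue<Command> queue = new LinkedBlockingQueue<Command>();
    private final GameState state = new GameState();

    void submit(Command c) { queue.offer(c); }   // called from the network reader

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Wakes up as soon as work arrives, but never busy-spins when idle.
                Command c = queue.poll(10, TimeUnit.MILLISECONDS);
                if (c != null) {
                    c.apply(state);              // never perform blocking I/O in here
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}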
As for the client side: when a client submits a command (say, it fired a laser), the client generates a response object and inserts it into a Map with a sequence id as the key. It then sends the request with that sequence id, and when the server responds with the same id, you just look up the response object in the Map and decode the response into it. This allows you to have several operations in flight concurrently.
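And a rough sketch of that client-side bookkeeping, with hypothetical Response and RequestTracker types keyed by sequence id:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical response holder: registered before sending, completed by the
// network reader when a reply with the matching id arrives.
class Response { volatile byte[] payload; }

class RequestTracker {
    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, Response> pending = new ConcurrentHashMap<Long, Response>();

    long register(Response r) {
        long id = nextId.incrementAndGet();
        pending.put(id, r);          // remember the response slot before sending the request
        return id;
    }

    void complete(long id, byte[] payload) {
        Response r = pending.remove(id);
        if (r != null) {
            r.payload = payload;     // decode the reply into the waiting object
        }
    }
}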

Faster detection of a broken socket in Java/Android

Background
My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
It's mission-critical that every single piece of data is sent successfully; I'd rather send the data twice than not at all.
Problem
As a test I set up the app to send data with a timestamp every 5 seconds, meaning that every 5 seconds a new line appears on the server.
If I kill the server, I expect the lines to stop; the data should now be written to memory instead.
When I enable the server again I should be able to confirm that no events are missing.
The problem, however, is that when I kill the server it takes about 20 seconds for IO operations to start failing, meaning that during those 20 seconds the app happily sends the events and removes them from memory, but they never reach the server and are lost forever.
I need a way to make certain that the data actually reaches the server.
This is possibly one of the more basic TCP questions, but nonetheless I haven't found any solution to it.
Stuff I've tried
Setting Socket.setTcpNoDelay(true)
Removing all buffered writers and just using OutputStream directly
Flushing the stream after every send
Additional info
I cannot change how the server responds, meaning I can't tell the server to acknowledge the data (beyond the mechanics of TCP, that is); the server just silently accepts the data without sending anything back.
Snippet of code
Initialization of the class:
socket = new Socket(host, port);
socket.setTcpNoDelay(true);
Where data is sent:
while (!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        os.flush();   // pushes the data into the OS send buffer, but does NOT
                      // guarantee the server has received it
    }
    catch (IOException e) {
        // sending failed: put the data back so it can be retried later
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}
return true;
Update 1
I'll say this again, I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact, and sure, in a perfect world the server would acknowledge the data, but that simply will not happen.
Update 2
The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
This however comes with a new set of problems.
Since I'm on a phone I have to assume that the user cannot send an infinite amount of bytes and I would like to keep all data traffic to a minimum if possible.
I'm not worried by the overhead of opening a new socket, those few bytes will not make a difference. What I am worried about however is that every time I connect to the server I have to send a short string identifying who I am.
The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.
One solution is to "flush" the data only every X bytes; the problem is choosing X wisely. If it's too big, too much duplicate data will be sent when the socket goes down; if it's too small, the overhead is too big.
Final update
My final solution is to "flush" the socket by closing it every X bytes; if anything went wrong, those X bytes will be sent again.
This will possibly create some duplicate events on the server but that can be filtered there.
Dan's solution is the one I'd have suggested right after reading your question; he's got my up-vote.
Now, can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it--or, in fancy design-pattern talk, provide a facade, or in plain talk, put a proxy in front of your pain-in-the-behind server. Design a meaningful ack-based protocol, have the proxy keep enough data samples in memory to be able to detect and tolerate broken connections, and so on. In short, have the phone app connect to a proxy residing on a "server-grade" machine using the "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data; the proxy is responsible for dealing with the server.
Just another idea.
Edit 0:
You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.
The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.
The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.
(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
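A small sketch of that replay idea; the capacity and message type are placeholders, and the server is assumed to tolerate and filter duplicates.

import java.util.ArrayDeque;
import java.util.Deque;

// Keeps roughly the last `capacity` messages so they can be replayed after a
// reconnect; duplicates are acceptable and filtered on the server side.
class ReplayBuffer {
    private final Deque<String> recent = new ArrayDeque<String>();
    private final int capacity;

    ReplayBuffer(int capacity) { this.capacity = capacity; }

    synchronized void remember(String message) {
        recent.addLast(message);
        while (recent.size() > capacity) {
            recent.removeFirst();            // drop the oldest entry
        }
    }

    synchronized Iterable<String> snapshotForResend() {
        return new ArrayDeque<String>(recent);   // copy so resending doesn't race with writers
    }
}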
See the accepted answer here: Java Sockets and Dropped Connections
socket.shutdownOutput();
wait for inputStream.read() to return -1, indicating that the peer has also shut down its socket
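A rough sketch of that shutdown-and-drain step, under the assumption that the server closes its end when it sees EOF (which is what this approach relies on); the class and method names are made up for illustration.

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class SocketFlusher {
    // Half-close after the last write, then wait for the peer to end the stream.
    // If the connection was already broken, the read fails instead of hanging.
    static boolean flushAndConfirm(Socket socket) {
        try {
            socket.getOutputStream().flush();
            socket.shutdownOutput();            // sends FIN: no more data from us
            InputStream in = socket.getInputStream();
            while (in.read() != -1) {
                // discard anything the server might still send
            }
            return true;                        // orderly shutdown: our data reached the server's TCP stack
        } catch (IOException e) {
            return false;                       // connection was broken; resend this batch
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}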
Won't work: server cannot be modified
Can't your server acknowledge every message it receives with another packet? The client won't remove the messages that the server did not acknowledge yet.
This will have performance implications. To avoid slowing things down, you can keep sending messages before an acknowledgement is received, and acknowledge several messages in one return message.
If you send a message every 5 seconds and disconnection is not detected by the network stack for 30 seconds, you'll have to store just 6 messages. If 6 sent messages go unacknowledged, you can consider the connection to be down. (I suppose the logic of reconnection and backlog sending is already implemented in your app.)
What about sending UDP datagrams on a separate UDP socket, making the remote host respond to each, and then killing the TCP connection when the remote host stops responding? It detects a link breakage quickly enough :)
Use an HTTP POST instead of a raw socket connection; then you can send a response to each post. On the client side you only remove the data from memory if the response indicates success.
Sure, it's more overhead, but it gives you what you want 100% of the time.
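If the server side could ever be fronted by an HTTP endpoint (a big assumption given the constraints above), the client side would look roughly like this; the URL is a placeholder.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

class HttpSender {
    // The data is only dropped from memory once the response code confirms
    // the server actually received it.
    static boolean post(byte[] data) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/ingest").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(data);
            out.close();
            return conn.getResponseCode() == HttpURLConnection.HTTP_OK;   // keep the data on failure
        } catch (IOException e) {
            return false;
        }
    }
}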

Java NIO: How to know when SocketChannel read() is complete with non-blocking I/O

I am currently using a non-blocking SocketChannel (Java 1.6) to act as a client to a Redis server. Redis accepts plain-text commands directly over a socket, terminated by CRLF, and responds in kind; a quick example:
SEND: 'PING\r\n'
RECV: '+PONG\r\n'
Redis can also return huge replies (depending on what you are asking for) with many sections of \r\n-terminated data all as part of a single response.
I am using a standard while(socket.read() > 0) {//append bytes} loop to read bytes from the socket and re-assemble them client side into a reply.
NOTE: I am not using a Selector, just multiple, client-side SocketChannels connected to the server, waiting to service send/receive commands.
What I'm confused about is the contract of the SocketChannel.read() method in non-blocking mode, specifically, how to know when the server is done sending and I have the entire message.
I have a few methods to protect against returning too fast and giving the server a chance to reply, but the one thing I'm stuck on is:
Is it ever possible for read() to return bytes, then on a subsequent call return no bytes, but on another subsequent call again return some bytes?
Basically, if I have received at least 1 byte and read() eventually returns 0, can I trust that the server is done responding, or is it possible the server was just busy and might sputter back some more bytes if I wait and keep trying?
If it can keep sending bytes even after a read() has returned 0 bytes (after previous successful reads), then I have no idea how to tell when the server is done talking to me, and in fact I'm confused about how java.io.*-style communications would even know when the server is "done" either.
As you guys know, read never returns -1 unless the connection is dead, and these are standard long-lived DB connections, so I won't be closing and opening them on each request.
I know a popular response (at least for these NIO questions) has been to look at Grizzly, MINA or Netty -- if possible I'd really like to learn how this all works in its raw state before adopting some 3rd-party dependencies.
Thank you.
Bonus Question:
I originally thought a blocking SocketChannel would be the way to go with this as I don't really want a caller to do anything until I process their command and give them back a reply anyway.
If that ends up being a better way to go, I was a bit confused to see that SocketChannel.read() blocks as long as there aren't enough bytes to fill the given buffer... short of reading everything byte-by-byte, I can't figure out how this default behavior is actually meant to be used. I never know the exact size of the reply coming back from the server, so my calls to SocketChannel.read() always block until a timeout (at which point I finally see that the content was sitting in the buffer).
I'm not really clear on the right way to use the blocking method, since it always hangs on a read.
Look to your Redis specifications for this answer.
It's not against the rules for a call to .read() to return 0 bytes on one call and 1 or more bytes on a subsequent call. This is perfectly legal. If anything were to cause a delay in delivery, either because of network lag or slowness in the Redis server, this could happen.
The answer you seek is the same answer to the question: "If I connected manually to the Redis server and sent a command, how could I know when it was done sending the response to me so that I can send another command?"
The answer must be found in the Redis specification. If there's not a global token that the server sends when it is done executing your command, then this may be implemented on a command-by-command basis. If the Redis specifications do not allow for this, then this is a fault in the Redis specifications. They should tell you how to tell when they have sent all their data. This is why shells have command prompts. Redis should have an equivalent.
In the case that Redis does not have this in their specifications, then I would suggest putting in some sort of timer functionality. Code your thread handling the socket to signal that a command is completed after no data has been received for a designated period of time, like five seconds. Choose a period of time that is significantly longer than the longest command takes to execute on the server.
If it can keep sending bytes even after a read() has returned 0 bytes (after previous successful reads), then I have no idea how to tell when the server is done talking to me, and in fact I'm confused about how java.io.*-style communications would even know when the server is "done" either.
Read and follow the protocol:
http://redis.io/topics/protocol
The spec describes the possible types of replies and how to recognize them. Some are line terminated, while multi-line responses include a prefix count.
Replies
Redis will reply to commands with different kinds of replies. It is possible to check the kind of reply from the first byte sent by the server:
With a single line reply the first byte of the reply will be "+"
With an error message the first byte of the reply will be "-"
With an integer number the first byte of the reply will be ":"
With bulk reply the first byte of the reply will be "$"
With multi-bulk reply the first byte of the reply will be "*"
Single line reply
A single line reply is in the form of a single line string starting with "+" terminated by "\r\n". ...
...
Multi-bulk replies
Commands like LRANGE need to return multiple values (every element of the list is a value, and LRANGE needs to return more than a single element). This is accomplished using multiple bulk writes, prefixed by an initial line indicating how many bulk writes will follow.
Is it ever possible for read() to return bytes, then on a subsequent call return no bytes, but on another subsequent call again return some bytes? Basically, if I have received at least 1 byte and read() eventually returns 0, can I trust that the server is done responding, or is it possible the server was just busy and might sputter back some more bytes if I wait and keep trying?
Yes, that's possible. It's not just the server being busy: network congestion and downed routes can also cause data to "pause". The data is a stream that can "pause" anywhere in the stream, without any relation to the application protocol.
Keep reading the stream into a buffer. Peek at the first character to determine what type of response to expect. Examine the buffer after each successful read until the buffer contains the full message according to the specification.
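A minimal sketch of that dispatch on the first byte, for the line-terminated reply types only; bulk ($) and multi-bulk (*) replies additionally need the announced length or count parsed before completeness can be decided. The class and method names are made up for illustration.

class RedisReplySketch {
    // Returns true once a simple line-terminated reply is complete; for bulk and
    // multi-bulk replies, further parsing of the length/count prefix is required.
    static boolean looksComplete(StringBuilder buffer) {
        if (buffer.length() == 0) {
            return false;
        }
        switch (buffer.charAt(0)) {
            case '+':   // status reply
            case '-':   // error reply
            case ':':   // integer reply
                return buffer.indexOf("\r\n") >= 0;   // complete once the terminator has arrived
            case '$':   // bulk reply: "$<length>\r\n<data>\r\n"
            case '*':   // multi-bulk reply: "*<count>\r\n" followed by <count> bulk replies
            default:
                return false;                         // keep reading and parse the prefix
        }
    }
}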
I originally thought a blocking SocketChannel would be the way to go with this as I don't really want a caller to do anything until I process their command and give them back a reply anyway.
I think you're right. Based on my quick-look at the spec, blocking reads wouldn't work for this protocol. Since it looks line-based, BufferedReader may help, but you still need to know how to recognize when the response is complete.
I am using a standard while(socket.read() > 0) { //append bytes } loop
That is not a standard technique in NIO. You must store the result of the read in a variable and test it for the following (a short sketch follows this list):
-1, indicating EOS, meaning you should close the channel
zero, meaning there was no data to read, meaning you should return to the select() loop, and
a positive value, meaning you have read that many bytes, which you should then extract and remove from the ByteBuffer (get()/compact()) before continuing.
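A sketch of that three-way test on a non-blocking SocketChannel; the buffer accumulates bytes until the application-level parser decides a reply is complete.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class ReadHandlerSketch {
    static void handleRead(SocketChannel channel, ByteBuffer buffer) throws IOException {
        int n = channel.read(buffer);
        if (n == -1) {
            channel.close();              // EOS: the peer closed the connection
        } else if (n == 0) {
            // nothing to read right now; go back to the select() loop
        } else {
            buffer.flip();                // prepare the accumulated bytes for get()
            // ... extract as much as the protocol allows ...
            buffer.compact();             // keep any partial message for the next read
        }
    }
}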
It's been a long time, but . . .
I am currently using a non-blocking SocketChannel
Just to be clear, SocketChannels are blocking by default; to make them non-blocking, one must explicitly invoke SocketChannel#configureBlocking(false)
I'll assume you did that
I am not using a Selector
Whoa; that's the problem. If you are going to use non-blocking Channels, then you should always use a Selector (at least for reads); otherwise you run into the confusion you described, viz. read(ByteBuffer) == 0 doesn't mean anything (well, it means that there are no bytes in the TCP receive buffer at this moment).
It's analogous to checking your mailbox and finding it empty; does that mean the letter will never arrive? That it was never sent?
What I'm confused about is the contract of the SocketChannel.read() method in non-blocking mode, specifically, how to know when the server is done sending and I have the entire message.
There is a contract: if a Selector has selected a Channel for a read operation, then the next invocation of SocketChannel#read(ByteBuffer) is guaranteed to return > 0 (assuming there's room in the ByteBuffer argument) or -1 at end of stream.
Which is why you use a Selector: one select() call can "select" thousands of SocketChannels that have bytes ready to read.
Now, there's nothing wrong with using SocketChannels in their default blocking mode; and given your description (a client or two), there's probably no reason to go non-blocking, as blocking is simpler. But if you do want to use non-blocking Channels, use a Selector.
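Putting the pieces together, a minimal non-blocking client loop might look like this; it assumes a Redis instance on localhost:6379 and, for brevity, that the short PING command is written in a single write() call.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectorClientSketch {
    public static void main(String[] args) throws Exception {
        // Blocking connect first, then switch to non-blocking for reads.
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 6379));
        channel.configureBlocking(false);
        Selector selector = Selector.open();
        channel.register(selector, SelectionKey.OP_READ);

        // For this sketch we assume the 6-byte command is written in one call.
        channel.write(ByteBuffer.wrap("PING\r\n".getBytes(StandardCharsets.US_ASCII)));

        ByteBuffer buffer = ByteBuffer.allocate(8192);
        while (true) {
            selector.select();                       // blocks until something is readable
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isReadable()) {
                    int n = channel.read(buffer);
                    if (n == -1) {                   // peer closed the connection
                        channel.close();
                        return;
                    }
                    if (n > 0) {
                        buffer.flip();
                        // ... parse as much of the reply as has arrived, per the Redis protocol ...
                        buffer.compact();
                    }
                }
            }
        }
    }
}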

Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server, and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing an essentially more financial version of Monopoly, and thus packets being sent and confirmed as sent/received is required. Do I need to implement my own packet-loss watcher, or is that already taken care of by (Server)Sockets?
I'm primarily asking about losing packets during network blips or whatnot, not a full connection error. E.g. siblings moving a lead plate between my router and my computer's Wi-Fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically, yes. There is no way of giving a 100% theoretical guarantee that what is sent on the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol), this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol) on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
Just looked briefly at your code, and it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well be that it looks like a packet has been dropped when in fact it simply hasn't been handled yet, due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to handle the synchronization, I think, is to keep the network loop as a very small read/put-on-a-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene.
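A sketch of that read/put-on-a-queue loop using a BlockingQueue; the ObjectInputStream is assumed to be created elsewhere from the accepted socket.

import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The network thread does nothing but read objects and enqueue them; the game
// (or GUI) thread drains the queue when it is ready to handle them.
class NetworkReader implements Runnable {
    private final ObjectInputStream in;
    private final BlockingQueue<Object> received = new LinkedBlockingQueue<Object>();

    NetworkReader(ObjectInputStream in) { this.in = in; }

    BlockingQueue<Object> queue() { return received; }

    @Override
    public void run() {
        try {
            while (true) {
                received.put(in.readObject());   // small read/put loop, nothing else
            }
        } catch (Exception e) {
            // connection closed or stream corrupted; let the owner decide what to do
        }
    }
}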
