Multiple Server Discovery - Java/Android

I have an application where multiple servers could exist. There are heaps of examples of how to use UDP to discover servers, but it seems this only works with a single server.
What happens if multiple responses exist? Are they queued, corrupted (given UDP's pitfalls), or something else?
I would like to find out how to receive multiple responses to a UDP broadcast sent from an Android device. If this isn't viable, is there any other recommended approach for multiple-server discovery for Android clients?
Thank you

I would first send a packet to all servers you want to ask if they are there, then let all servers respond. Since you want to find out how to receive the responses, here is how I would do that:
long responseTimeout = 4000;
long start = System.currentTimeMillis();
while (true) {
    long now = System.currentTimeMillis();
    if (now - start < responseTimeout) {
        // only wait for as long as is left of the overall response window
        datagramSocket.setSoTimeout((int) (responseTimeout - (now - start)));
    } else {
        break;
    }
    try {
        datagramSocket.receive(packet);
        addOnlineServer(packet.getAddress());
    } catch (SocketTimeoutException e) {
        break;
    }
}
For a certain amount of time your Android client should wait for responses, and add the IP of each received packet to a list of online servers.
Sure, some of the packets could get lost since you are using UDP, but that's what you get. If you want to be sure that no packets get lost, use TCP instead.
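For reference, here is a minimal sketch of the "send a packet to all servers first" step; the probe payload, port parameter, and helper name are placeholders of mine, not something from the question:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: sends one discovery probe and returns a socket that
// can then be used for the timed receive loop shown above.
DatagramSocket sendDiscoveryProbe(int discoveryPort) throws Exception {
    DatagramSocket socket = new DatagramSocket();
    socket.setBroadcast(true);                        // allow sending to a broadcast address
    byte[] probe = "DISCOVER".getBytes(StandardCharsets.UTF_8);
    DatagramPacket request = new DatagramPacket(
            probe, probe.length,
            InetAddress.getByName("255.255.255.255"), discoveryPort);
    socket.send(request);                             // every listening server replies to this socket
    return socket;
}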

If you broadcast the message and the servers all return you should see all the responses as they come back.
However be aware that UDP is a potentially lossy protocol and makes no guarantees at all. Over a non-wireless LAN with decent switches it is pretty safe, but as soon as it goes further than that (wireless, over multiple networks, etc.) you can expect to lose at least some packets, and any packet loss is a message loss on UDP.
The usual solution to this is to send each message a few times. So for example when you first start up you might broadcast at 1 second, 10 second, 30 seconds, and then every 10 minutes thereafter. This will find servers immediately, then sweep up any it missed fairly fast, and then finally detect any new ones that appear on the network.
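As a rough illustration of that schedule (assuming a void sendDiscoveryBroadcast() method exists somewhere; the name is made up), a ScheduledExecutorService can drive the re-broadcasts:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
// one-off broadcasts shortly after startup...
scheduler.schedule(this::sendDiscoveryBroadcast, 1, TimeUnit.SECONDS);
scheduler.schedule(this::sendDiscoveryBroadcast, 10, TimeUnit.SECONDS);
scheduler.schedule(this::sendDiscoveryBroadcast, 30, TimeUnit.SECONDS);
// ...then a periodic sweep to catch servers that appear later
scheduler.scheduleAtFixedRate(this::sendDiscoveryBroadcast, 10, 10, TimeUnit.MINUTES);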
I've not worked with this sort of system for quite a few years, but last time we did, there was a single server that acted as the central point. Everything broadcasted out on startup to find the central server (retrying at increasing intervals until it found it), and when the central server started up it broadcasted out to find everything, retrying 3 times.
All communication after that was done by registering with that central server and getting the list of apps etc from there. The server essentially acted as a network directory so anything could get a list of anything else on the network by querying it.

You should be doing the following to receive and probably also send broadcast packets (which is what you are asking for):
Make sure that network mask is correct
When you bind the socket, be sure to bind it to INADDR_ANY
Set the socket's option to BROADCAST via setsockopt
Call the function sendto(..) with sendaddr.sin_addr.s_addr = inet_addr("your_interface_broadcast_address"), or call sendto(..) several times, once for each interface with its broadcast IP address (see the Java sketch after this list)
Call the function recvfrom(..), inside a while(..) loop, until you are certain "enough time has passed"; usually 1 second should suffice on a LAN
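For the "once per interface" variant, a rough Java equivalent looks like the following; probe, socket, and discoveryPort are assumed to exist already, as in the earlier sketch:
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InterfaceAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Send the probe once per interface, to that interface's broadcast address.
for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
    if (nif.isLoopback() || !nif.isUp()) continue;    // skip loopback and inactive interfaces
    for (InterfaceAddress addr : nif.getInterfaceAddresses()) {
        InetAddress broadcast = addr.getBroadcast();  // null for IPv6 or point-to-point links
        if (broadcast == null) continue;
        socket.send(new DatagramPacket(probe, probe.length, broadcast, discoveryPort));
    }
}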


How to Send and Receive Multiple XBee's Packet in Java Properly

I have a Swing Timer set to 15 seconds, which sends packets through my Coordinator API (XBee S2) to up to 20 Router AT modules. I am receiving all my packets through a SerialPortEvent (I'm using the RXTXcomm.jar library), and every packet I receive is stored in a buffer ArrayList.
Are they two different threads? I still have the main GUI going on.
So, my question is: what is the best way to send and receive packets to/from multiple XBee modules? I am thinking about two alternatives. One is a for-loop that sends a unicast packet to every module (up to 20), then checks my receive buffer for any responses. I am currently using this approach, but sometimes I lose packets, maybe because the XBee is half-duplex and I am receiving at the same time as sending?
The other alternative is to send a unicast packet and wait for a response to each one. In that case, what response timeout should I use so as not to delay my 15-second Swing Timer? Should I increase these 15 seconds?
EDIT:
My current timeout is 300 ms for the ACK and 450 ms for the response
Make use of the Transmit Status packet from the XBee, informing you of whether a packet was received at the radio layer on each router. If the router ACK at the application layer doesn't contain any additional information, you can potentially eliminate it to reduce network traffic.
You should be OK sending each frame to the routers and keeping track of which ones ACK, then resending the frames that weren't acknowledged in the first pass, as long as you're using hardware flow control on the coordinator to monitor the status of the XBee module's serial buffer. If the XBee drops CTS, you need to stop sending bytes immediately.
The XBee communicates with the host with full duplex, but only one radio on the network can send at a time (similar to Ethernet). The XBee module will manage inbound/outbound packets.
Finally, make sure your hosts communicate with the XBee modules at 115,200 bps (or faster). Using the default of 9600 bps is inefficient and increases the time it will take for a host to acknowledge packets.
SwingWorker is the best way to handle this. Perform all serial port access in the worker's doInBackground() implementation, publish() interim results and process() the results on the EDT. A related example is examined here. If need be, you can manage multiple workers, as shown here.
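A bare-bones sketch of that SwingWorker layout; readNextFrame() and handleFrame() are made-up stand-ins for the RXTX/XBee-specific code:
import java.util.List;
import javax.swing.SwingWorker;

SwingWorker<Void, byte[]> worker = new SwingWorker<Void, byte[]>() {
    @Override
    protected Void doInBackground() throws Exception {
        while (!isCancelled()) {
            byte[] frame = readNextFrame();     // blocking serial read, kept off the EDT
            if (frame != null) {
                publish(frame);                 // hand the frame to process()
            }
        }
        return null;
    }

    @Override
    protected void process(List<byte[]> frames) {
        for (byte[] frame : frames) {
            handleFrame(frame);                 // runs on the EDT, safe for GUI updates
        }
    }
};
worker.execute();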
Synchronize your send method and avoid sending packets too close in time.
This worked for me using this library:
<dependency>
    <groupId>com.rapplogic</groupId>
    <artifactId>xbee-api</artifactId>
    <version>0.9</version>
</dependency>

Multicast data overhead?

My application uses the multicast capability of UDP.
In short, I am using Java and wish to transmit all data using a single multicast address and port. The multicast listeners will be logically divided into subgroups, which can change at runtime, and they may not wish to process data that comes from outside of their group.
To make this happen, I have made the code so that all running instances of the application join the same multicast group and port, but carefully observe the packet's sender to determine whether it belongs to their sub-group.
Warning: the minimum packet size for my application is 30000-60000 bytes!
Will reading every packet using MulticastSocket.receive(DatagramPacket) and determining if it's the required packet cause too much overhead (even buffer overflow)?
Would it generate massive traffic leading to congestion in the network, because every packet is sent to everyone?
Every packet is not sent to everyone, since multicast (e.g. PIM) builds a multicast tree that places receivers and senders optimally, so the network copies the packet only as and when needed. Multicast packets are broadcast (more accurately, flooded at Layer 2) at the final hop. IGMP assists multicast at the last hop and makes sure that if no receiver has joined on the last hop, no such flooding is done.
"and may not wish to process data that comes from outside of their group." The receive call returns the next received datagram, so there is little one can do to avoid processing packets that are not meant for the subgroup. Can't your application use multiple different groups?
Every packet may be sent to everyone, but each one will only appear on the network once.
However, unless this application is running entirely in a LAN that is entirely under your control, including all routers, it is already wildly infeasible. The generally accepted maximum UDP datagram size is 534 bytes once you go through a router you don't control.

UDP packets waiting and then arriving together

I have a simple Java program which acts as a server, listening for UDP packets. I then have a client which sends UDP packets over 3G.
Something I've noticed is occasionally the following appears to occur: I send one packet and seconds later it is still not received. I then send another packet and suddenly they both arrive.
I was wondering if it was possible that some sort of system is in place to wait for a certain amount of data instead of sending an undersized packet. In my application, I only send around 2-3 bytes of data per packet - although the UDP header and what not will bulk the message up a bit.
The aim of my application is to get these few bytes of data from A to B as fast as possible. Huge emphasis on speed. Is it all just coincidence? I suppose I could increase the packet size, but it just seems like the transfer time will increase, and 3G isn't exactly perfect.
Since the comments are getting rather lengthy, it might be better to turn them into an answer altogether.
If your app is not receiving data until a certain quantity is retrieved, then chances are there is some sort of buffering going on behind the scenes. A good example (not saying this applies to you directly) is that if you or the underlying libraries are using something like BufferedReader.readLine() or DataInputStream.readFully(bytes), the call will block until it receives a newline or fills the byte array before returning. Judging by the fact that your program seems to retrieve all of the data when a certain threshold is reached, it sounds like this is the case.
A good way to debug this is to use Wireshark. Wireshark doesn't care about your program; it's analyzing the raw packets that are sent to and from your computer, and can tell you whether the issue is on the sender or the receiver.
If you use Wireshark and see that the data from the first send is arriving on your physical machine well before the second, then the issue lies with your receiving end. If you're seeing that the first packet arrives at the same time as the second packet, then the issue lies with the sender. Without seeing the code, it's hard to say what you're doing and what, specifically, is causing the data to only show up after receiving more than 2-3 bytes, but until then, this behavior describes exactly what you're seeing.
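As a complement to Wireshark, a bare receive loop that logs the arrival time of every datagram can show whether the delay happens before or after the data reaches your code; the port below is just a placeholder:
import java.net.DatagramPacket;
import java.net.DatagramSocket;

DatagramSocket socket = new DatagramSocket(9876);
byte[] buffer = new byte[64];
while (true) {
    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
    socket.receive(packet);        // blocks until a single datagram arrives
    System.out.println(System.currentTimeMillis() + ": " + packet.getLength()
            + " bytes from " + packet.getAddress());
}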
There are several probable causes of this:
Cellular data networks are not "always-on". Depending on the underlying technology, there can be a substantial delay between when a first packet is sent and when IP connectivity is actually established. This will be most noticeable after IP networking has been idle for some time.
Your receiver may not be correctly checking the socket for readability. Regardless of what high-level APIs you may be using, underneath there needs to be a call to select() to check whether the socket is readable (a Java NIO equivalent is sketched after this list). When a datagram arrives, select() should unblock and signal that the socket descriptor is readable. Alternatively, but less efficiently, you could set the socket to non-blocking and poll it with a read. Polling wastes CPU time when there is no data and delays detection of arrival for up to the polling interval, but can be useful if for some reason you can't spare a thread to wait on select().
I said above that select() should signal readability on a watched socket when data arrives, but this behavior can be modified by the socket's "receive low-water mark". The default value is usually 1, meaning any data will signal readability. But if SO_RCVLOWAT is set higher (via setsockopt() or a higher-level equivalent), then readability will not be signaled until more than the specified amount of data has arrived. You can check the value with getsockopt() or whatever API is equivalent in your environment.
Item 1 would cause the first datagram to actually be delayed, but only when the IP network has been idle for a while and not once it comes up active. Items 2 and 3 would only make it appear to your program that the first datagram was delayed: a packet sniffer at the receiver would show the first datagram arriving on time.
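For item 2, the closest Java equivalent of a select()-based readability check is NIO's Selector with a DatagramChannel. A minimal sketch, with a placeholder port:
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

Selector selector = Selector.open();
DatagramChannel channel = DatagramChannel.open();
channel.bind(new InetSocketAddress(9876));
channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_READ);

ByteBuffer buffer = ByteBuffer.allocate(64);
while (true) {
    selector.select();                             // blocks until a datagram is readable
    for (SelectionKey key : selector.selectedKeys()) {
        if (key.isReadable()) {
            buffer.clear();
            SocketAddress sender = channel.receive(buffer);
            System.out.println("datagram from " + sender
                    + " at " + System.currentTimeMillis());
        }
    }
    selector.selectedKeys().clear();               // reset for the next select()
}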

A captivating and riveting java socket multithreading fandango

Greetings and salutations,
I have two Android apps: one is the server and one is the client. The server receives strings from two or more clients. The server tests for collisions of input (e.g. inputs received within 1 ms of each other). Once a client establishes a connection with the server, the server instantiates a client object that runs a while-true thread which handles input reception.
I believe that with OS time-sharing of the CPU, threads alternate their execution. This is a problem since I need to know if two InputStreams send messages within a ms or two of each other. How can I achieve this?
public void run() {
    while (true) {
        try {
            if (input.ready()) {
                String s = input.readLine();
                // do stuff, like comparing socket inputs
            }
        } catch (Exception e) {
            // ignored
        }
    }
}
I think you will find the bigger problem is that the network layer alone will introduce variability in timing on the order of many milliseconds (just look at the range of times when running ping), such that even if the clients all send at the exact same moment, you will find they often arrive with a delta larger than a few milliseconds. You will also likely run into other timing issues related to the handling of incoming network data, depending on the radio hardware and kernel configuration that sit between the VM and the physical network, on both the sender and receiver. My point is that detecting collisions with a precision of a few ms is probably not possible.
I'd suggest you send a timestamp with the message from the client so that it doesn't matter when it is processed. To make it even more accurate you can add some simple time synchronization to the startup of your protocol to find the delta between the client and server device clocks.
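A tiny sketch of the timestamp idea; the "timestamp|payload" wire format and the clockOffsetMillis variable are illustrative choices, assuming out is a PrintWriter on the client's socket and input is the server-side reader from the snippet above:
// Client side: prepend the client's clock to every message it sends.
out.println(System.currentTimeMillis() + "|" + payload);

// Server side: recover the client's send time, corrected by a clock offset
// measured during a startup handshake (clockOffsetMillis is hypothetical).
String[] parts = input.readLine().split("\\|", 2);
long clientSendTime = Long.parseLong(parts[0]) + clockOffsetMillis;
String body = parts[1];
// compare clientSendTime across clients to detect near-simultaneous inputs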
If the goal is just to know which client sent data first, then you can assume the blocked input.readLine() that returns first is the one that sent data first, given data of similar length and similar ping latency for each client. Under normal conditions this will be correct. To deal with variable-length data and a number of other issues related to the physical network, a good tweak would be to do a byte read rather than a whole line, which gives you a better approximation of who arrived first, as opposed to which client is able to send the whole line faster.

Faster detection of a broken socket in Java/Android

Background
My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
It's mission critical that every single piece of data is sent successfully, I'd rather send the data twice than not at all.
Problem
As a test I set up the app to send data with a timestamp every 5 seconds, this means that every 5 seconds a new line appear on the server.
If I kill the server I expect the lines to stop, they should now be written to memory instead.
When I enable the server again I should be able to confirm that no events are missing.
The problem however is that when I kill the server it takes about 20 seconds for IO operations to start failing meaning that during those 20 seconds the app happily sends the events and removes them from memory but they never reach the server and are lost forever.
I need a way to make certain that the data actually reaches the server.
This is possibly one of the more basic TCP questions, but nonetheless I haven't found any solution to it.
Stuff I've tried
Setting Socket.setTcpNoDelay(true)
Removing all buffered writers and just using OutputStream directly
Flushing the stream after every send
Additional info
I cannot change how the server responds, meaning I can't tell the server to acknowledge the data (beyond the mechanics of TCP, that is); the server will just silently accept the data without sending anything back.
Snippet of code
Initialization of the class:
socket = new Socket(host, port);
socket.setTcpNoDelay(true);
Where data is sent:
while (!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        os.flush();
    }
    catch (IOException e) {
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}
return true;
Update 1
I'll say this again, I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact, and sure, in a perfect world the server would acknowledge the data, but that will simply not happen.
Update 2
The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
This however comes with a new set of problems.
Since I'm on a phone I have to assume that the user cannot send an infinite amount of bytes and I would like to keep all data traffic to a minimum if possible.
I'm not worried by the overhead of opening a new socket, those few bytes will not make a difference. What I am worried about however is that every time I connect to the server I have to send a short string identifying who I am.
The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.
One solution is to only "flush" the data every X bytes; the problem is I have to choose X wisely: if it's too big, too much duplicate data will be sent when the socket goes down, and if it's too small, the overhead is too big.
Final update
My final solution is to "flush" the socket by closing it every X bytes, and if all didn't go well, those X bytes will be sent again.
This will possibly create some duplicate events on the server but that can be filtered there.
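For reference, a hedged sketch of that flush-by-close cycle; batch stands for the bytes accumulated since the last flush, socket is the field from the snippets above, and the server is assumed to close its side after seeing the client's FIN, as described in Update 2:
OutputStream os = socket.getOutputStream();
os.write(batch);
os.flush();
socket.shutdownOutput();                       // send FIN: no more data from this side
int eof = socket.getInputStream().read();      // wait for the server to close its side
boolean delivered = (eof == -1);               // -1 means the close handshake completed
socket.close();
if (!delivered) {
    // keep the batch and resend it over a new connection
}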
Dan's solution is the one I'd suggest right after reading your question, he's got my up-vote.
Now can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it, or in fancy-design-pattern-talk provide a facade, or in plain-talk put a proxy in front of your pain-in-the-behind server. Design a meaningful ack-based protocol, have the proxy keep enough data samples in memory to be able to detect and tolerate broken connections, etc. In short, have the phone app connect to a proxy residing somewhere on a "server-grade" machine using the "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data. The proxy is responsible for dealing with the server.
Just another idea.
Edit 0:
You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.
The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.
The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.
(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
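A minimal sketch of that circular buffer, sized in entries rather than seconds for simplicity; MAX_ENTRIES and the helper names are made up:
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayDeque;
import java.util.Deque;

private final Deque<String> recent = new ArrayDeque<>();
private static final int MAX_ENTRIES = 60;     // cap on how many recent entries to keep

void remember(String data) {
    if (recent.size() == MAX_ENTRIES) {
        recent.removeFirst();                  // drop the oldest entry
    }
    recent.addLast(data);
}

void onReconnect(OutputStream os) throws IOException {
    for (String data : recent) {               // replay the recent window; the server
        os.write(data.getBytes());             // filters duplicates (see the final update)
    }
    os.flush();
}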
See the accepted answer here: Java Sockets and Dropped Connections
socket.shutdownOutput();
wait for inputStream.read() to return -1, indicating the peer has also shutdown its socket
Won't work: server cannot be modified
Can't your server acknowledge every message it receives with another packet? The client wouldn't remove the messages that the server has not yet acknowledged.
This will have performance implications. To avoid slowing down you can keep on sending messages before an acknowledgement is received, and acknowledge several messages in one return message.
If you send a message every 5 seconds, and disconnection is not detected by the network stack for 30 seconds, you'll have to store just 6 messages. If 6 sent messages are not acknowledged, you can consider the connection to be down. (I suppose that logic of reconnection and backlog sending is already implemented in your app.)
What about sending UDP datagrams on a separate UDP socket while making the remote host respond to each, and then when the remote host doesn't respond, you kill the TCP connection? It detects a link breakage quickly enough :)
Use HTTP POST instead of a socket connection; then you can send a response to each post. On the client side you only remove the data from memory if the response indicates success.
Sure, it's more overhead, but it gives you what you want 100% of the time.
