Scalable and high-performance message channel - Java

I am developing agents to collect data from different sources; the data should be posted to a channel at high frequency (say, every 15 seconds). REST is definitely not a solution. The requirement is clearly fire-and-forget, since a status reply is not needed.
Throughput is more important; message drops of up to 5% are acceptable.
Possible solutions I have come across are:
Message Bus
Multicast
UDP
Please suggest any alternatives.

IMHO, high frequency means too fast to see, and 15 seconds you can easily see. It takes about 0.5 seconds to send a message around the world and back again. You can just about see 15 milliseconds. And if you are talking about 15 microseconds, that is definitely high frequency. I have a persisted messaging solution with a latency of around 0.1 microseconds, which is 0.0000001 seconds, but I don't suggest you need that.
If all you need is a message every 15 seconds, I would use the simplest solution that comes to mind. I would try ActiveMQ, which I found to be one of the simplest to get working. You should be able to achieve message rates of up to 20,000 per second and decent latencies of about 0.01 seconds, and you shouldn't lose any messages.
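For illustration, a minimal JMS producer along those lines might look like the sketch below. The broker URL tcp://localhost:61616 and the queue name agent.data are placeholders, and non-persistent delivery is assumed since drops are acceptable.
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AgentPublisher {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name - adjust for your environment.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("agent.data"));
        // Fire-and-forget: non-persistent delivery trades delivery guarantees for throughput.
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        producer.send(session.createTextMessage("sample reading collected by the agent"));

        session.close();
        connection.close();
    }
}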

Related

How to measure one-way latency?

I want to measure the one-way transit time from my client to my server. Currently I can only measure the full round trip (from client to server and back to the client); I do this by recording the time before we send a packet and again after we receive it back from the server. Technically speaking, if I were to divide the full round-trip time by two, I would get an average of each one-way time.
But what if one direction actually takes longer than the other, like this:
In the image I created, the transit time from client to server is 30 ms and from server to client is 90 ms. With such arrival times, measuring the full round trip and dividing it by 2 would not give an accurate one-way arrival time. How can I accurately measure one-way arrival times?
How can I accurately measure one-way arrival times
TL;DR - YOU CAN'T
This is actually a very deep philosophical question that is unanswered at the very core of physics (the rock bottom "metal" of the universe). Physics does not unequivocally know the one-way speed of light, only the two-way speed. No experiment we have so far devised can answer that question. See The One-Way Speed of Light.
Although the average speed over a two-way path can be measured, the one-way speed in one direction or the other is undefined (and not simply unknown), unless one can define what is "the same time" in two different locations. To measure the time that the light has taken to travel from one place to another it is necessary to know the start and finish times as measured on the same time scale. This requires either two synchronized clocks, one at the start and one at the finish, or some means of sending a signal instantaneously from the start to the finish. No instantaneous means of transmitting information is known. Thus the measured value of the average one-way speed is dependent on the method used to synchronize the start and finish clocks. This is a matter of convention.
You can get arbitrarily close for non-relativistic situations by synchronizing clocks, but how do you know the clocks stay synchronized? For your case you'd have to agree to synchronize on the same time signal, but propagation delays can introduce tens to hundreds of milliseconds of delay and jitter.
So if you want to pin down one-way times to an accuracy smaller than the clock jitter, you're out of luck. Here's the output of ntpq -p on one of my Linux systems.
remote refid st t when poll reach delay offset jitter
==============================================================================
*unifi.versadns. 71.66.197.233 2 u 289 1024 377 2.298 -0.897 0.747
+eterna.binary.n 68.97.68.79 2 u 615 1024 377 42.258 -3.640 0.430
+homemail.org 139.78.97.128 2 u 160 1024 377 45.257 -0.209 0.391
-time.skylineser 130.207.244.240 2 u 418 1024 103 24.829 2.066 1.376
You might be able to pin down one-way time to within 5ms if both systems use the same master clock for synchronization and have had enough time for delay and jitter to stabilize.
OWAMP - owping
You need a client and a server running,
and NTP or chrony to synchronize time.
There is also expensive hardware for synchronizing time between machines.

InetAddress.getByName(ip).isReachable(timeout);

InetAddress.getByName(ip).isReachable(2000) is used to find the reachability of a system within 2 seconds. But when I try to find the reachability of multiple systems (say n systems) on my network serially, it takes 2n seconds. Is there any other way to find their reachability in less time, say 3 to 4 seconds?
You can use jnetpcap to craft the ping packets yourself and listen for the responses.
You can throw all ping requests onto the network (almost) at once, and be done in (a little longer than) 2 seconds.
You will need to know the MAC addresses, though.
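If crafting raw packets is more than you want to take on, a simpler alternative (not the jnetpcap approach above) is to issue all the isReachable calls in parallel from a thread pool, so the probes overlap instead of running serially. A rough sketch, with the host list purely illustrative:
import java.net.InetAddress;
import java.util.*;
import java.util.concurrent.*;

public class ParallelReachability {
    public static void main(String[] args) throws Exception {
        List<String> hosts = Arrays.asList("192.168.1.10", "192.168.1.11", "192.168.1.12");
        ExecutorService pool = Executors.newFixedThreadPool(hosts.size());
        Map<String, Future<Boolean>> results = new LinkedHashMap<>();
        // Fire off every probe at once; each one still gets the full 2-second timeout.
        for (String host : hosts) {
            Callable<Boolean> probe = () -> InetAddress.getByName(host).isReachable(2000);
            results.put(host, pool.submit(probe));
        }
        // Collect the answers; total time is roughly one timeout, not n timeouts.
        for (Map.Entry<String, Future<Boolean>> entry : results.entrySet()) {
            System.out.println(entry.getKey() + " reachable: " + entry.getValue().get());
        }
        pool.shutdown();
    }
}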

RabbitMQ: Improve queue flushing speed

I have a durable queue which holds persistent messages. The messages arrive in the queue at a rate of about 10 messages per second.
The client is unable to fetch those messages at that rate, so the queue on the server keeps growing.
Each message is less than 1 KB, and I have a healthy 2 Mbps line between the server and my machine. Using a network monitoring utility, I found that it is hardly using any of that bandwidth.
The client is doing nothing with the messages for now, just printing them to the console, so processing time on the client is almost zero.
Some other details:
I am using a java client.
I have set the client to prefetch 10000 messages. (also tried with default values)
The round trip time is about 350 ms.
Messages are individually acknowledged.
The available resources are being underutilized, and 10 messages per second is hardly any load in my opinion. How do I speed things up so that messages held up in the queue are transferred to the client faster, possibly using some sort of batching?
If you are individually acknowledging messages every 350 ms, I would expect the consumer to achieve about 1/0.35, or roughly 2.9 messages per second. However, the protocol might not be that efficient, and it may need two round trips to the server to acknowledge a message and get the next one, i.e. 1.4 messages per second may be more realistic.
A round trip of 350 ms is very high; you can go around the world and back again in that time (e.g. London -> New York -> Tokyo -> London), so a simple solution may not work best for you.
I would try having a broker local to your client instead. This way the round trip is between your client and your local broker.
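For illustration, one way to act on the batching idea with the RabbitMQ Java client is to keep a prefetch window open and acknowledge deliveries in batches using the multiple flag, rather than one ack per message. A rough sketch, with the broker host, queue name, prefetch count, and batch size all placeholders:
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class BatchAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");              // placeholder broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1000);                    // allow up to 1000 unacked messages in flight
        channel.basicConsume("myQueue", false, new DefaultConsumer(channel) {
            private int unacked = 0;

            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws java.io.IOException {
                System.out.println(new String(body, StandardCharsets.UTF_8));
                // Ack this delivery and everything before it in a single frame.
                if (++unacked >= 100) {
                    channel.basicAck(envelope.getDeliveryTag(), true);
                    unacked = 0;
                }
            }
        });
    }
}
A real consumer would also need to acknowledge any outstanding remainder on shutdown; this sketch omits that for brevity.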

Messaging latency in java (with zeromq)

I just ran the ZeroMQ hello world example and timed the request-response latency. It averaged about 0.1 ms using the IPC transport. This sounds quite slow to me... Does this sound about right?
long start = System.nanoTime();                 // start timing one request/reply round trip
socket.send(request, 0);
// Get the reply.
byte[] reply = socket.recv(0);
System.out.println((System.nanoTime() - start) / 1000000.0);  // elapsed time in milliseconds
I assume your average had a sample size of more than one? I would run the test for at least 2-10 seconds before taking an average. The average latency within the same process/thread can be misleading.
I would create a second process which echoes everything it gets, if you are not doing this already. (And divide the latency in two unless you want the RTT latency.)
Plain sockets can get an RTT latency of 20 microseconds on a typical multi-core box, and I would expect IPC to be faster. On a fast PC you can get a typical RTT latency of 9 microseconds using sockets.
If you want latency much lower than this, I would consider doing everything in one process, or one thread if you can, in which case the cost of a method call is around 10 ns (if it's not inlined ;)
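As a sketch of measuring over a longer run rather than a single exchange: the ipc endpoint below is a placeholder, and a REP echo server is assumed to already be bound there.
import org.zeromq.ZMQ;

public class ReqRepLatencyBench {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.REQ);
        socket.connect("ipc:///tmp/latency-test");   // placeholder endpoint; an echo (REP) server must be bound here

        byte[] request = "hello".getBytes();
        int warmup = 10_000, measured = 100_000;

        // Warm-up pass so JIT compilation and connection setup don't skew the numbers.
        for (int i = 0; i < warmup; i++) {
            socket.send(request, 0);
            socket.recv(0);
        }

        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) {
            socket.send(request, 0);
            socket.recv(0);
        }
        long elapsed = System.nanoTime() - start;

        // Halve the round trip for an approximate one-way figure, as suggested above.
        System.out.printf("avg RTT: %.2f us, approx one-way: %.2f us%n",
                elapsed / 1_000.0 / measured, elapsed / 2_000.0 / measured);

        socket.close();
        context.term();
    }
}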

Benchmarking Apache Mina Total Bandwidth

I am developing a relatively fast-paced game (Flash / Apache MINA server back end) and I am having some difficulty getting an accurate benchmark of the bandwidth my current setup uses.
My question is: how do I get an accurate benchmark of the bandwidth required for my tests? I suspect what I am doing now doesn't take any overhead into account.
In the message sent/received methods I am doing:
[out/in]Bandwidth += message.toString().getBytes().length;
I then print out the current values every 250 milliseconds (since that is how frequently "world" updates are done currently).
With 10 "monsters" all randomly moving around and 1 player randomly moving around I am getting this output.. (1 second window here)
In bandwidth: 1647, Outgoing: 35378
In bandwidth: 1658, Outgoing: 35585
In bandwidth: 1669, Outgoing: 35792
In bandwidth: 1680, Outgoing: 35999
So, going strictly by the size of the (outgoing) messages being passed, that works out to about 621 bytes/second, or (621/10) 62.1 bytes per second per constantly moving item on screen per person. This seems a little low; a good high-speed connection could handle 1000+ object updates per second at this "rate" with no problem.
Something definitely smells fishy here. According to the performance testing they provide here, MINA is capable of 20K+ 405-byte requests per second on ~10 connections - way more than what you're seeing.
My guess is that there is some kind of threading/timing issue going on that is causing the delay. I would enlist the help of a packet-tracing application such as Wireshark and see if your observations in code mesh with the raw network data. I would also try "flooding" the server side with more data if possible - this might provide some insight into where the issue lies.
I hope this helps, good luck.
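As a middle ground between the hand-rolled counters and full packet tracing, MINA's IoSession also keeps its own cumulative read/written byte counters (assuming MINA 2.x), which include whatever the codec and framing add on top of message.toString().getBytes().length. A sketch:
import org.apache.mina.core.session.IoSession;

// Sketch of a cross-check for the hand-rolled byte counting (assumes MINA 2.x).
// getReadBytes()/getWrittenBytes() are cumulative since the session was opened,
// so the delta between calls gives the bytes moved in each interval.
public class SessionBandwidthLogger {
    private long lastRead;
    private long lastWritten;

    // Call this every 250 ms from wherever the "world" update tick runs.
    public void logDelta(IoSession session) {
        long read = session.getReadBytes();
        long written = session.getWrittenBytes();
        System.out.println("In bandwidth: " + (read - lastRead)
                + ", Outgoing: " + (written - lastWritten));
        lastRead = read;
        lastWritten = written;
    }
}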
