InetAddress.getByName(ip).isReachable(2000) is used to check whether a system is reachable within 2 seconds. But when I try to check the reachability of multiple systems (say n systems) in my network serially, it takes 2n seconds. Is there any other way to check their reachability in less time, say 3 to 4 seconds?
You can use jnetpcap to craft the ping packets yourself and listen for the responses.
You can throw all the ping requests onto the network (almost) at once and be done in (a little more than) 2 seconds.
You will need to know the MAC addresses, though.
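If crafting raw packets is more than you need, a simpler alternative is to fire all the isReachable() calls concurrently, so the total wait is roughly one timeout rather than 2n seconds. A minimal sketch (not from the answer above; the host addresses are placeholders):

import java.net.InetAddress;
import java.util.*;
import java.util.concurrent.*;

public class ParallelPing {
    public static void main(String[] args) throws Exception {
        // Placeholder addresses -- substitute the hosts you want to check.
        List<String> hosts = Arrays.asList("192.168.1.1", "192.168.1.2", "192.168.1.3");
        ExecutorService pool = Executors.newFixedThreadPool(hosts.size());
        Map<String, Future<Boolean>> results = new LinkedHashMap<>();
        // All checks start at (almost) the same time...
        for (String host : hosts)
            results.put(host, pool.submit(() -> InetAddress.getByName(host).isReachable(2000)));
        // ...so collecting the results takes about one timeout in total.
        for (Map.Entry<String, Future<Boolean>> e : results.entrySet())
            System.out.println(e.getKey() + " reachable: " + e.getValue().get());
        pool.shutdown();
    }
}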
I am developing agents to collect data from different sources; the data should be posted to a channel at high frequency (say every 15 seconds). REST is definitely not a solution. The requirement is clearly fire-and-forget, as no status reply is needed.
Throughput is more important; message drops are acceptable up to 5%.
Possible solutions I have come across are:
Message Bus
Multicast
UDP
Are there any alternatives? Please suggest.
IMHO, high frequency means too fast to see, and 15 seconds you can easily see. It takes about 0.5 seconds to send a message around the world and back again. You can just about see 15 milliseconds. And if you are talking about 15 microseconds, that is definitely high frequency. I have a persisted messaging solution with a latency of around 0.1 microseconds, which is 0.0000001 seconds, but I don't suggest you need that.
If all you need is a message every 15 seconds, I would use the simplest solution that comes to mind. I would try ActiveMQ, which I found to be one of the simplest to get working. You should be able to achieve message rates of up to 20,000 per second and decent latencies of about 0.01 seconds, and you shouldn't lose any messages.
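To illustrate, a minimal fire-and-forget JMS producer against a local ActiveMQ broker might look like the sketch below. The broker URL and queue name are placeholders, and NON_PERSISTENT delivery trades durability for throughput, which fits the "drops acceptable" requirement:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AgentPublisher {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("agent.data"));
        // Fire and forget: don't pay for disk persistence on the broker.
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.send(session.createTextMessage("sample payload"));
        connection.close();
    }
}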
I'm currently writing a crawler in java, and I'm stuck by something.
In my crawler, I have threads downloading a static remote page using HttpURLConnection.
I tried to download one small file (2kb) with different parameters. The connection has a timeout set to 1s.
I noticed that if I use 100 threads for the download, I succeed in making 3 times more requests per second (~10k requests per second), whereas when using 500 threads I succeed in making "only" 4k requests per second.
I would have expected to be able to do at least as many requests per second as with 100 threads.
Could you explain why it behaves this way, and whether there is some parameter to activate somewhere to increase the maximum number of parallel connections?
Thanks :)
I think it's just a matter of your CPU; at a certain point, switching threads is more expensive than the time gained by not waiting for a single connection.
I would try to maximize parallel connections by setting an upper limit.
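A minimal sketch of that idea, using a fixed-size thread pool as the upper limit (the pool size and URLs are placeholders to tune against your own measurements):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;

public class BoundedCrawler {
    public static void main(String[] args) {
        // The pool size is the tunable upper limit on parallel connections.
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (String page : new String[]{"http://example.com/a", "http://example.com/b"}) {
            pool.submit(() -> {
                HttpURLConnection conn = (HttpURLConnection) new URL(page).openConnection();
                conn.setConnectTimeout(1000); // 1s timeout, as in the question
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) { /* discard the body */ }
                }
                return null;
            });
        }
        pool.shutdown();
    }
}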
I work at a retailer and we are considering introducing CQ5 as a CMS.
However, after doing some research and talking to consultants, it turns out there may be aspects that are "complicated". Perhaps one of you can shed a little light on this.
The first thing is, we were told that when you use the Multi Site Manager to create multi-language pages (about 80 languages), the update process can take as long as half an hour until a change is ultimately published. Has anyone experienced something similar?
The other thing is that the TarOptimizer has pretty long running times. I was told that runs taking up to 24 hours are not uncommon. Again my question: has anyone had such a problem, or an explanation for this?
I am really looking forward to your response.
These are really 2 separate questions, but I'll address them based on my experience.
The update process for creating new multi-language pages will vary based on the number of languages, and also on the number of publish instances and web servers (assuming you're using the dispatcher to cache) you are running. This is because the replication process is where the bottleneck is (at least in my experience). If you're trying to push out a large amount of content across a large number of publishers, with a large number of front-end web servers whose cache needs to be cleared, there will be some delay, since replication is an asynchronous process. The longest delay I've seen has been in the 10-15 minute range, with 12 publishers and 12 front-end web servers, but this comes with the obvious caveat that your mileage may vary.
For the Tar optimization job, I'd encourage you to take a look at this page, as it has a lot of good info about the Tar Optimizer job and how to tune it. The job can take a long time to run when you have a large repository, especially on an instance with a large number of write operations, but the run times can be configured so that it only runs during a given time window, and it will pick up where it left off the night before if the total run time is longer than the allowed window. By default, it runs from 2-5 am each night, so if it takes more than that 3-hour period, it will continue where it left off the next night, allowing it to optimize the entire repository over a period of a few days if needed.
I just ran the ZeroMQ hello world example and timed the request-response latency. It averaged about 0.1 ms using the IPC transport. This sounds quite slow to me... does that sound about right?
// Time a single request-reply round trip; 'socket' is a connected
// ZeroMQ REQ socket and 'request' holds the payload bytes.
long start = System.nanoTime();
socket.send(request, 0);
// Get the reply.
byte[] reply = socket.recv(0);
System.out.println((System.nanoTime() - start) / 1000000.0 + " ms");
I assume your average was taken over more than one sample? I would run the test for at least 2-10 seconds before taking an average. An average latency measured in the same process/thread may be misleading.
I would create a second process which echoes everything it gets, if you are not doing this already. (And divide the latency in two, unless you want the RTT latency.)
Plain sockets can get an RTT latency of 20 microseconds on a typical multi-core box, and I would expect IPC to be faster. On a fast PC you can get a typical RTT latency of 9 microseconds using sockets.
If you want latency much lower than this, I would consider doing everything in one process, or one thread if you can, in which case the cost of a method call is around 10 ns (if it's not inlined ;)
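A minimal sketch of a fairer measurement, assuming the same 'socket' and 'request' setup as in the question: warm up first, then average the round trip over many iterations rather than timing a single call:

int warmup = 10_000, runs = 100_000;
// Warm-up: let the JIT and the transport settle before measuring.
for (int i = 0; i < warmup; i++) { socket.send(request, 0); socket.recv(0); }
long start = System.nanoTime();
for (int i = 0; i < runs; i++) { socket.send(request, 0); socket.recv(0); }
long avgNanos = (System.nanoTime() - start) / runs;
System.out.println("average RTT: " + avgNanos / 1000.0 + " microseconds");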
I need to be able to monitor the speed of my internal network using Java. I was thinking I could use a two-part system with a server and a client. I do not need the response time, such as what is generated by ping, but the actual speed in Mbps for upload and download.
My idea would be to have the server send a packet or series of packets to the client, which then replies, and then the server would calculate the speed of the network between those two points. Does anyone have any idea how I could implement this?
Thank you ahead of time.
Hmm, an interesting problem. I hope you like reading... :-)
I'd be interested to know how the monitoring tool would be used. At work, the sysadmins just have a couple of large screens in the room, showing a webpage containing loads of network stats, constantly updating.
The rest of my description assumes the network monitoring tool would be used as described above. If you just want to be able to do an ad-hoc test between two random hosts on your network, I'd just use rsync to transfer a reasonably large file (about 1-2 MB). I'm sure there are other file transfer tools that calculate the transfer speed too.
When implementing this (especially within a large network), you must minimise the risk that the test floods the network, hampering the people (or programs) actually using it. You don't want to be blamed for a massive slowdown (or worse, an outage) just because you were conducting a test. Your sysadmins won't thank you...
I'd architect the tool in the following way:
Bob is a server which participates in an individual 'test' by doing the following:
Bob receives a request from a client. The request states how much data the client is about to send.
If the amount of data proposed to be sent is not too large, wait for the data. Otherwise Bob rejects the request immediately and ends the communication.
Once the required number of bytes has been received, reply with the amount of time it took to receive it all. Bob terminates the communication.
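A minimal sketch of Bob under these rules, assuming a simple binary protocol (a long for the proposed size, a long back as accept/reject, then a long with the elapsed nanoseconds); the port and the size cap are placeholder choices:

import java.io.*;
import java.net.*;

public class Bob {
    static final long MAX_BYTES = 10_000_000L; // reject anything larger
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                try (Socket client = server.accept()) {
                    DataInputStream in = new DataInputStream(client.getInputStream());
                    DataOutputStream out = new DataOutputStream(client.getOutputStream());
                    long expected = in.readLong();        // step 1: proposed size
                    if (expected > MAX_BYTES) {           // step 2: reject if too large
                        out.writeLong(-1);
                        continue;
                    }
                    out.writeLong(0);                     // accept
                    byte[] buf = new byte[8192];
                    long received = 0, start = System.nanoTime();
                    while (received < expected) {
                        int n = in.read(buf, 0, (int) Math.min(buf.length, expected - received));
                        if (n < 0) break;                 // client terminated prematurely
                        received += n;
                    }
                    out.writeLong(System.nanoTime() - start); // step 3: time taken
                }
            }
        }
    }
}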
Alice is the component that displays the result of the measurements taken (via a webpage or otherwise). Alice is a long-lived process (maybe a web server), configured to periodically connect to a list of Bob servers. For each configured Bob:
Send Bob a request with the amount of data Alice is about to send.
Send Bob the specified amount of data, as fast as possible.
Await the reply from Bob, and compute the network speed.
'Display' the result for this instance of Bob. You may choose to display an aggregate result, for example the average result for each of the last 20 tests, to iron out any anomalies...
When conducting a given test, Alice should report any failures, e.g. 'a TCP connection could not be established with Bob', or 'Bob prematurely terminated the transfer', or whatever else...
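Pairing with the Bob sketch above, Alice's measurement step might look like this (host, port, and payload size are again placeholders):

import java.io.*;
import java.net.*;

public class Alice {
    public static void main(String[] args) throws IOException {
        long bytes = 500_000;                           // 500 KB per test
        try (Socket s = new Socket("bob.example.com", 9000)) {
            DataOutputStream out = new DataOutputStream(s.getOutputStream());
            DataInputStream in = new DataInputStream(s.getInputStream());
            out.writeLong(bytes);                       // propose the transfer size
            if (in.readLong() < 0) throw new IOException("Bob rejected the request");
            out.write(new byte[(int) bytes]);           // send the data, as fast as possible
            out.flush();
            long nanos = in.readLong();                 // Bob's measured receive time
            double mbps = (bytes * 8.0 / 1_000_000) / (nanos / 1_000_000_000.0);
            System.out.printf("throughput: %.2f Mbps%n", mbps);
        }
    }
}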
Scatter Bob servers to strategic locations in your (possibly large) network, and configure Alice to go to them. For each instance of Bob, you should configure:
The time interval in between tests.
The 'leeway' (I'll explain this in a bit).
The amount of data to send to Bob for each test.
Bob's address (duh).
You want to 'stagger' the tests that a given Alice will attempt. You don't want Alice to trigger the test to all Bob servers at once, thereby flooding your network, possibly giving skewed results and so forth. Allow the test to occur at a randomised time in the future. For example, if the test interval is every 10 minutes, configure a 'leeway' of 1 minute, meaning the next test might occur anywhere between 9 and 11 minutes' time.
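The leeway idea is only a few lines (a sketch; the interval and leeway values are the configurables mentioned above):

long intervalMs = 10 * 60_000;                             // 10-minute test interval
long leewayMs = 60_000;                                    // 1 minute of leeway
long jitter = (long) ((Math.random() * 2 - 1) * leewayMs); // uniform in [-leeway, +leeway]
long nextTestInMs = intervalMs + jitter;                   // 9 to 11 minutes from now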
If there is to be more than one Alice running at a time, the total number of instances should be small. The more Alices you have, the more you interfere with the network. Again, you don't want to be responsible for an outage.
The amount of data Alice should send in an individual test should be small. 500KB? You probably want a given test to run for no more than 10 seconds. Maybe get Bob to time out if the test takes too long.
I've deliberately omitted the transport to use (TCP, UDP, whatever) because you'll get issues depending on the transport, and I don't know how you want to handle those issues. For example, you'd have to consider how to handle dropped datagrams with UDP. What result would you compute? You don't get this issue with TCP, because it automatically retransmits dropped packets. With TCP, though, your throughput will be artificially low if the two endpoints are far away from each other, because the window size limits how much unacknowledged data can be in flight, so a high round-trip time caps the achievable throughput. Here's some info on it.
If you had the patience to read this far, I hope it helped!
Rather than writing a server, you might want to just use Tomcat or Apache as the server; then you just have the client upload a file of a specific size and measure the time, then turn around and download the file to measure the download speed.
You could write your own server to do this, but you would basically be doing what has been done many times before, and then you would need to ensure your server isn't skewing the numbers.
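The download half of that approach is only a few lines; a sketch, with a placeholder URL pointing at a known-size file on whatever server you already run:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadSpeed {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://server.example.com/testfile.bin").openConnection();
        long start = System.nanoTime(), total = 0;
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) total += n; // count the bytes received
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.2f Mbps%n", total * 8 / 1e6 / seconds);
    }
}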