I need to be able to monitor the speed of my internal network using Java. I was thinking I could use a two-part system with a server and a client. I do not need response time such as what is generated with ping, but an actual speed in Mbps for upload and download.
My idea would be to have the Server send a packet or series of packets to the client which then replies and then the Server would calculate the speed of the network between those two points. Does anyone have any idea how I could implement this?
Thank You ahead of time.
Hmm, an interesting problem. I hope you like reading... :-)
I'd be interested to know how the monitoring tool would be used. At
work, the sysadmins just have a couple of large screens in the room,
showing a webpage containing loads of network stats, with it constantly
updating.
The rest of my description assumes the network monitoring tool would be
used as described above. If you just want to be able to do an ad-hoc
test between two random hosts on your network, I'd just use rsync to
transfer a reasonably large file (about 1 - 2MB). I'm sure there are
other file transfer tools that calculate the transfer speed too.
When implementing this, (especially within a large network) you must
minimise the risk that the test floods the network, hampering the people
(or programs) actually using it. You don't want to be blamed for a
massive slowdown (or worse, an outage) just because you were conducting
a test. Your sysadmins won't thank you...
I'd architect the tool in the following way:
Bob is a server which participates in an individual 'test' by doing
the following:
Bob receives a request from a client. The request states how much data the client is about to send.
If the amount of data proposed to be sent is not too large, wait for the data. Otherwise Bob rejects the request immediately and ends the communication.
Once the required number of bytes has been received, reply with the amount of time it took to receive it all. Bob terminates the communication.
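To make this concrete, here is a minimal sketch of what Bob could look like over plain TCP, assuming a simple framing where the size announcement, the accept/reject flag, and the elapsed time are exchanged as longs via DataInput/DataOutput streams; the port number and size cap are arbitrary assumptions, not part of any prescribed protocol.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class Bob {
        private static final long MAX_BYTES = 512 * 1024; // reject anything larger than 512KB

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    try (Socket client = server.accept();
                         DataInputStream in = new DataInputStream(client.getInputStream());
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {

                        long proposed = in.readLong();           // client announces how much it will send
                        if (proposed <= 0 || proposed > MAX_BYTES) {
                            out.writeLong(-1L);                  // reject and end the communication
                            continue;
                        }
                        out.writeLong(0L);                       // accept

                        byte[] buf = new byte[8192];
                        long received = 0;
                        long start = System.nanoTime();
                        while (received < proposed) {
                            int n = in.read(buf, 0, (int) Math.min(buf.length, proposed - received));
                            if (n < 0) {
                                break;                           // client terminated prematurely
                            }
                            received += n;
                        }
                        out.writeLong(System.nanoTime() - start); // reply with the receive time, in nanoseconds
                    }
                }
            }
        }
    }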
Alice is the component that displays the result of the measurements
taken (via a webpage or otherwise). Alice is a long lived process
(maybe a web server), configured to periodically connect to a list of
Bob servers. For each configured Bob:
Send Bob a request with the amount of data Alice is about to
send.
Send Bob the specified amount of data, as fast as possible.
Await the reply from Bob, and compute the network speed.
'Display' the result for this instance of Bob. You may choose
to display an aggregate result. For example, the average result for
each of the last 20 tests, to iron out any anomalies...
When conducting a given test, Alice should report any failures. E.g.
'a TCP connection could not be established with Bob', or 'Bob
prematurely terminated the transfer' or whatever else...
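And here is a hedged sketch of the client side of a single test against one Bob; the framing, port, and payload size mirror the Bob sketch above and are assumptions rather than a fixed wire format.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    public class AliceTest {
        // Returns the measured speed in megabits per second for one test against one Bob.
        public static double measureMbps(String host, int port, int payloadBytes) throws Exception {
            try (Socket socket = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {

                out.writeLong(payloadBytes);                     // announce the amount of data
                if (in.readLong() < 0) {
                    throw new IllegalStateException("Bob rejected the request");
                }

                byte[] payload = new byte[payloadBytes];         // contents are irrelevant
                out.write(payload);                              // send as fast as possible
                out.flush();

                long elapsedNanos = in.readLong();               // Bob's measured receive time
                double seconds = elapsedNanos / 1_000_000_000.0;
                return (payloadBytes * 8.0) / (seconds * 1_000_000.0);
            }
        }
    }

Having Bob report the receive time (rather than Alice timing the round trip) keeps Alice's own send buffering and the reply latency out of the measurement.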
Scatter Bob servers to strategic locations in your (possibly large)
network, and configure Alice to go to them. For each instance of Bob, you
should configure:
The time interval in between tests.
The 'leeway' (I'll explain this in a bit).
The amount of data to send to Bob for each test.
Bob's address (duh).
You want to 'stagger' the tests that a given Alice will attempt. You
don't want Alice to trigger the test to all Bob servers at once, thereby
flooding your network, possibly giving skewed results and so forth.
Allow the test to occur at a randomised time in the future. For
example, if the test interval is every 10 minutes, configure a 'leeway'
of 1 minute, meaning the next test might occur anywhere between 9 and 11
minutes' time.
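As a rough illustration of the staggering, the next test time could be computed with a random offset inside the configured leeway; the interval and leeway values here are just the ones from the example above.

    // Fragment: compute the next test time with a random offset inside the configured leeway.
    long intervalMs = 10 * 60_000L;    // test every 10 minutes
    long leewayMs   = 60_000L;         // +/- 1 minute of jitter
    long jitterMs   = java.util.concurrent.ThreadLocalRandom.current().nextLong(-leewayMs, leewayMs + 1);
    long nextTestAt = System.currentTimeMillis() + intervalMs + jitterMs;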
If there is to be more than one Alice running at a time, the total
number of instances should be small. The more Alices you have, the more
you interfere with the network. Again, you don't want to be responsible
for an outage.
The amount of data Alice should send in an individual test should be
small. 500KB? You probably want a given test to run for no more than
10 seconds. Maybe get Bob to timeout if the test takes too long.
I've deliberately omitted the transport to use (TCP, UDP, whatever)
because you'll get issues depending on the transport, and I don't know
how you want to handle those issues. For example, you'd have to
consider how to handle dropped datagrams with UDP. What result would
you compute? You don't get this issue with TCP, because it
automatically retransmits dropped packets. With TCP, though, your measured
throughput can be artificially low when the two endpoints are far apart,
because the window size caps how much unacknowledged data can be in flight
over a high-latency path. Here's some info on it.
If you had the patience to read this far, I hope it helped!
Rather than writing a server, you might want to just use Tomcat or Apache as the server. Then you just have the client upload a file of a specific size and measure the time, then turn around and download the file to measure the download speed.
You could write your own server to do this, but you would basically be doing what has been done many times before, and then you would need to ensure your server isn't skewing the numbers.
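As a sketch of the client side of this approach, assuming the server already exposes a reasonably large file at some URL (the URL below is a made-up placeholder), the download measurement can be as simple as timing a full GET; the upload direction would be measured analogously by timing a POST or PUT of a known payload.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpSpeedTest {
        // Downloads the whole response body and returns the observed speed in Mbps.
        public static double downloadMbps(String fileUrl) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
            long start = System.nanoTime();
            long total = 0;
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    total += n;
                }
            }
            double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
            return (total * 8.0) / (seconds * 1_000_000.0);
        }

        public static void main(String[] args) throws Exception {
            // Placeholder URL for a file served by Tomcat/Apache.
            System.out.println(downloadMbps("http://server.local/speedtest/sample-2mb.bin") + " Mbps");
        }
    }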
I have a web application and have to fetch 1000 records using a REST API. Each record is around 500 bytes.
What is the best way to do it from the following and why? Is there another better way to do it?
1. Fetch one record at a time. Trigger 1000 calls in parallel.
2. Fetch in groups of 20. Trigger 50 calls in parallel.
3. Fetch in groups of 100. Trigger 10 calls in parallel.
4. Fetch all 1000 records together.
As @Dima said in the comments, it really depends on what you are trying to do.
How are the records being consumed?
Is it back-end process-to-process or program-to-program communication? If so, then it depends on the difficulty of processing once the client receives the records. Is it going to take a long time to process each record: 1 ms per record, or 100 ms per record? This option depends entirely on the likely processing time per record.
Is there a front end consuming this for human users? If so, batch requesting would be good for reasons like paginating results. In such cases, I would go with option 2 or 3 personally.
In general though, depending upon the sheer volume of records, I would recommend considering batching requests (by triggering fewer calls). Heuristically speaking, you are likely to get better overall network throughput that way.
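For example, option 3 (10 parallel calls of 100 records each) might look roughly like the following; Record and RecordClient.fetchBatch are hypothetical stand-ins for your actual model class and REST call.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class BatchedFetch {
        // Hypothetical placeholders for the real record type and REST client.
        static class Record { }
        interface RecordClient { List<Record> fetchBatch(int offset, int limit); }

        public static List<Record> fetchAll(RecordClient client) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(10);          // 10 calls in flight at once
            List<Future<List<Record>>> futures = new ArrayList<>();
            for (int offset = 0; offset < 1000; offset += 100) {
                final int start = offset;
                futures.add(pool.submit(() -> client.fetchBatch(start, 100))); // one REST call per batch
            }
            List<Record> all = new ArrayList<>();
            for (Future<List<Record>> f : futures) {
                all.addAll(f.get());                                           // wait for and collect each batch
            }
            pool.shutdown();
            return all;
        }
    }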
If you add more specifics, I'll happily update my answer, but until then, general will have to do!
Best for what case? What are you trying to optimize?
I did some tests a while back on a similar situation, with slightly larger payloads (images), where my goal was to utilize network efficiently on a high-latency setup (across continents).
My results were that after a minimal amount of parallelism (like 3-4 threads), the network was almost perfectly saturated. We compared it to specific (proprietary) UDP-based transfer protocols, and there was no measurable difference.
Anyway, it may not be what you are looking for, but sometimes having a "dumb" HTTP endpoint is good enough.
I have a web service with a load balancer that maps requests to multiple machines. Each of these requests end up sending a http call to an external API, and for that reason I would like to rate limit the number of requests I send to the external API.
My current design:
Service has a queue in memory that stores all received requests
I rate limit how often we can grab a request from the queue and process it.
This doesn't work when I'm using multiple machines, because each machine has its own queue and rate limiter. For example: when I set my rate limiter to 10,000 requests/day, and I use 10 machines, I will end up processing 100,000 requests/day at full load because each machine processes 10,000 requests/day. I would like to rate limit so that only 10,000 requests get processed/day, while still load balancing those 10,000 requests.
I'm using Java and MySQL.
Use memcached or Redis to keep an API request counter per client, and check on every request whether it is over the rate limit.
If you think checking on every request is too expensive, you can try Storm to process the request log and calculate the request counter asynchronously.
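A minimal sketch of that counter check with the Jedis client, assuming one shared counter per day keyed on the date and the 10,000/day limit from the question; the key naming is made up and this is not a complete design.

    import java.time.LocalDate;
    import redis.clients.jedis.Jedis;

    public class RedisRateLimiter {
        private static final long DAILY_LIMIT = 10_000;

        // Returns true if this request is still within today's shared limit.
        public static boolean allowRequest(Jedis jedis) {
            String key = "api-requests:" + LocalDate.now();  // one counter shared by all machines
            long count = jedis.incr(key);                     // atomic increment across the cluster
            if (count == 1) {
                jedis.expire(key, 172_800);                   // let old day counters expire after two days
            }
            return count <= DAILY_LIMIT;                      // reject once the shared limit is hit
        }
    }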
The two things you stated were:
1)"I would like to rate limit so that only 10,000 requests get processed/day"
2)"while still load balancing those 10,000 requests."
First off, it seems like you are using a divide and conquer approach where each request from your end user gets mapped to one of the n machines. So, for ensuring that only the 10,000 requests get processed within the given time span, there are two options:
1) Implement a combiner which will route the results from all n machines to
another endpoint which the external API is then able to access. This endpoint is able
to keep a count of the amount of jobs being processed, and if it's over your threshold,
then reject the job.
2) Another approach is to store the number of jobs you've processed for the day as a value
inside of your database. Then, upon the initial request of a job (before you even pass it
off to one of your machines), check whether that value has reached your threshold. If the
threshold has been reached, reject the job at the start. This, coupled with an appropriate
message, has the advantage of a better experience for the end user.
In order to ensure that all these 10,000 requests are still being load balanced so that no one CPU is processing more jobs than any other CPU, you should use a simple round-robin approach to distribute your jobs over the n CPUs. With round robin, as opposed to a bin/categorization approach, you'll ensure that the job requests are distributed as uniformly as possible over your n CPUs. A downside to round robin is that, depending on the type of job you're processing, you might be replicating a lot of data as you start to scale up. If this is a concern for you, you should think about implementing a form of locality-sensitive hash (LSH) function. While a good hash function distributes the data as uniformly as possible, LSH exposes you to having one CPU process more jobs than the others if a skew in the attribute you choose to hash against has a high probability of occurring. As always, there are trade-offs associated with both, so you'll know best for your use cases.
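A rough sketch of option 2 above, assuming a MySQL table daily_counter(day DATE PRIMARY KEY, cnt INT NOT NULL); the table name and the 10,000 limit come from the question and are otherwise made up.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DbRateLimiter {
        private static final int DAILY_LIMIT = 10_000;

        // Returns true if the job may proceed; because the counter lives in the shared
        // database, every machine behind the load balancer sees the same count.
        public static boolean tryAcquire(Connection conn) throws Exception {
            try (PreparedStatement check = conn.prepareStatement(
                    "SELECT cnt FROM daily_counter WHERE day = CURRENT_DATE");
                 ResultSet rs = check.executeQuery()) {
                if (rs.next() && rs.getInt(1) >= DAILY_LIMIT) {
                    return false;                              // threshold already reached: reject
                }
            }
            try (PreparedStatement upsert = conn.prepareStatement(
                    "INSERT INTO daily_counter (day, cnt) VALUES (CURRENT_DATE, 1) " +
                    "ON DUPLICATE KEY UPDATE cnt = cnt + 1")) {
                upsert.executeUpdate();
            }
            return true;
        }
    }

Under heavy concurrency you would want the check and the increment to happen in a single atomic statement, but this shows the shape of the approach.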
Why not implement a simple counter in your database and make the API client implement the throttling?
User Agent -> LB -> Your Service -> Q -> Your Q consumer(s) -> API Client -> External API
API client checks the number (for today) and you can implement whatever rate limiting algorithm you like. eg if the number is > 10k the client could simply blow up, have the exception put the message back on the queue and continue processing until today is now tomorrow and all the queued up requests can get processed.
Alternatively you could implement a tiered throttling system, eg flat out til 8k, then 1 message every 5 seconds per node up til you hit the limit at which point you can send 503 errors back to the User Agent.
Otherwise you could go the complex route and implement a distributed queue (e.g. an AMQP server); however, this may not solve the issue entirely, since your only control mechanism would be throttling such that you never process any faster than something below the max limit per day. E.g. if your max limit is 10k, you never go any faster than 1 message every 8 seconds.
If you're not averse to using a library/service, https://github.com/jdwyah/ratelimit-java is an easy way to get distributed rate limits.
If performance is of utmost concern you can acquire more than 1 token in a request so that you don't need to make an API request to the limiter 100k times. See https://www.ratelim.it/documentation/batches for details of that
In my application I have only one SQL request (one SELECT, or one UPDATE, ...). This request may be executed once and goes over a network. Are there any ways or techniques to optimize it or make it run faster?
There are two issues with applications using networks. One, maybe the only important one, is network turns; the other is the size of the network data. The first deals with the end-to-end latency of a round trip of a request and a response, with a client that does not go to the next line of code until it gets an answer from the server. The other deals with the number of bytes sent and the throughput.
The big issue is network turns. It takes about as long to send a 30 byte packet as it does to send a 1400 byte packet. A lot of the time there is an exchange of a lot of small (80 to 300 byte) packets.
So the technique is to reduce the number of network turns. Use the asynchronous API if there is one and you can. Try to combine queries into one complex query instead of lots of simple queries. In particular, avoid running a query inside a loop if you can combine the results into one query.
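As an illustration of collapsing a per-row loop into a single query (the table and column names here are invented for the example):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.List;
    import java.util.StringJoiner;

    public class CombinedQuery {
        // N network turns: one round trip to the database per id.
        static void oneQueryPerId(Connection conn, List<Integer> ids) throws Exception {
            for (int id : ids) {
                try (PreparedStatement ps =
                             conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) { /* consume one row */ }
                }
            }
        }

        // 1 network turn: a single round trip for all ids.
        static void singleQuery(Connection conn, List<Integer> ids) throws Exception {
            StringJoiner placeholders = new StringJoiner(",", "(", ")");
            ids.forEach(id -> placeholders.add("?"));
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM users WHERE id IN " + placeholders)) {
                for (int i = 0; i < ids.size(); i++) {
                    ps.setInt(i + 1, ids.get(i));
                }
                try (ResultSet rs = ps.executeQuery()) { /* consume all rows at once */ }
            }
        }
    }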
Also, use indexes on your table, and make sure that your query has an appropriate index to use.
I'm writing a Java server (java.net.Socket, java.net.ServerSocket, java.io.ObjectOutputStream, java.io.ObjectInputStream) and I know I'm going to have limited bandwidth allocated for it.
I've written a decorator object for my output and input streams so I can count how many bytes go through it for profiling purposes. But this won't give me any indication of the amount of overhead I'm using for the connection.
I don't anticipate it will be much, but I'd like to prepare for it. I'm not going to try to optimize it, I just want to know how much it will be for logistical reasons (how much bandwidth must I request, etc.)
I can't be the first person to try to get this information, but I can't seem to find good resources on the overhead of Java Sockets and TCP/IP in general. (Perhaps that's because there's nothing noteworthy to find... If we're on the order of kb per minute, it's really not much of a concern, but I'd still like to know!)
Thanks!
This question is challenging to answer with the information we have right now... for instance, what are you calling 'overhead'? Is it only TCP ACK packets, or all packet overhead (for instance Ethernet, IP, and TCP headers) for anything other than your data payload?
How many connections per minute? What is the average data transfer, per connection? If there are many very short-lived connections, your overhead requirements go up (due to 3-way handshake, and connection close requirements)... you could also have high overhead if the clients don't read much data, but many clients keep the connections open for days at a time.
Honestly, you're 50x better off modeling this in a lab and making some assumptions about hit rate per minute and concurrent clients... that will give you some ballpark numbers. Play around with limiting the bandwidth afforded to the application to the maximum your budget would allow... then start backing off... you can throttle bandwidth by using WANem on a dual-port Linux machine.
Getting lab results like this is far better than theoretical calculations.
HTH,
\mike (who spends all day testing network gear)
TCP overhead varies based on a number of factors, but is typically around 5% at full capacity.
Basically each "packet" has 20 bytes of IP header (and 20 more if IPv6) plus 20-32 bytes of TCP header. Packet sizes vary based on the network devices and conditions, but are often in the neighborhood of 1500 bytes.
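As a rough worked example using those numbers: a full-sized 1500-byte IPv4 packet carries 1500 − 20 (IP) − 20 (TCP) = 1460 bytes of payload, so the header overhead on the data-carrying direction is about 40/1500 ≈ 2.7%; Ethernet framing and the ACK traffic flowing back the other way are what push the total toward the ~5% figure above.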
This page has some detail: http://sd.wareonearth.com/~phil/net/overhead/
In my opinion you can completely ignore keep-alives, as they are only used when the connection is idle anyway.
I am developing a relatively fast paced game (Flash/Apache Mina Server back end) and I am having some difficulty getting an accurate benchmark of the type of bandwidth my current setup would use.
My question is: how do I get an accurate benchmark of the bandwidth required for my tests? What I am doing now doesn't take into account any overhead.
On the message sent/received methods I am doing
[out/in]Bandwidth+= message.toString().getBytes().length;
I then print out the current values every 250 milliseconds (since that is how frequently "world" updates are currently done).
With 10 "monsters" all randomly moving around and 1 player randomly moving around I am getting this output.. (1 second window here)
In bandwidth: 1647, Outgoing: 35378
In bandwidth: 1658, Outgoing: 35585
In bandwidth: 1669, Outgoing: 35792
In bandwidth: 1680, Outgoing: 35999
So, acting strictly on the size of the messages (outgoing) being passed, that works out to about 621 bytes/second, or (621/10) 62.1 bytes per second per constantly moving item on screen per person. This seems a little low; a good high-speed connection could handle 1000+ object updates per second at this "rate" no problem.
Something definitely smells fishy here. According to the performance testing provided by them (here), MINA is capable of 20K+ 405-byte requests per second on ~10 connections, way more than what you're seeing.
My guess is that there is some kind of threading/timing issue going on here that is causing the delay. I would enlist the help of a packet-tracing application such as Wireshark and see whether your observations in code mesh with the raw network data. I would also try "flooding" the server side with more data if possible; this might provide some insight into where the issue lies.
I hope this helps, good luck.