I have to build an application in Java that will handle load testing of a particular application. We can set parameters such as TPS (Transactions Per Second), Time (in seconds), and Number of Requests. For example:
TPS = 5, Time = 100, No. of Requests = 500
or
TPS = 10, Time = 100, No. of Requests = 1000
These requests are sent using multiple threads so that the process gives the feel of concurrent transactions. My question is: how do I create the logic for this? I am developing my program in Java.
Suppose you want to run 50 TPS for 100 seconds. You can have 5 threads, each sending 1 transaction every 100 ms for 100 seconds. You will, however, want to randomize the process a little to prevent the threads from sending transactions at the same time. So the process for each thread would be:
Send a transaction
Wait a random time between 1 and 199 ms inclusive (to average 100 ms)
Repeat as long as required
That will give you an average of 50 TPS, reasonably distributed in time. You can play around with the thread count and the other numbers to achieve your specific goal.
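A minimal sketch of that loop in Java (the class and helper names are mine, and the transaction itself is a placeholder passed in as a Runnable):

```java
import java.util.concurrent.ThreadLocalRandom;

// One worker thread: send a transaction, then sleep a random 1-199 ms
// (averaging ~100 ms), so each thread contributes roughly 10 TPS.
public class TpsWorker implements Runnable {
    private final Runnable transaction; // placeholder for the real request
    private final long durationMillis;

    public TpsWorker(Runnable transaction, long durationMillis) {
        this.transaction = transaction;
        this.durationMillis = durationMillis;
    }

    @Override
    public void run() {
        long end = System.currentTimeMillis() + durationMillis;
        while (System.currentTimeMillis() < end) {
            transaction.run();
            try {
                // 1..199 ms inclusive; nextLong's upper bound is exclusive
                Thread.sleep(ThreadLocalRandom.current().nextLong(1, 200));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    // Start N workers; 5 threads for 100 seconds give roughly 50 TPS in total.
    public static Thread[] start(Runnable transaction, int threads, long durationMillis) {
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new TpsWorker(transaction, durationMillis));
            workers[i].start();
        }
        return workers;
    }
}
```

For the 50 TPS / 100 s scenario you would call `TpsWorker.start(tx, 5, 100_000)` and join the returned threads.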
I want to hit my application with 200,000 requests in an hour. I am using JMeter to perform this testing.
I am doing this test on a local spring boot application.
The result I am looking for is to configure JMeter thread group for a throughput of 55 hits per second.
How should I configure the thread group to get 200,000 in an hour?
So far I've tried -
First Approach
i) Reduced my requirement to 10,000 requests in 180 seconds, a smaller proportion of my original target.
ii) Number of threads - 10,000
iii) Ramp-up period - 180 seconds
iv) This works fine and I get the desired throughput of 55/sec, but it fails as I increase the proportion to 50,000 threads in 900 seconds (15 minutes): I get the desired throughput, but JMeter and my system become extremely slow and I also notice a 6% error rate in the responses. I believe this is not the correct approach when I want to achieve 200k and more.
Second Approach
i) I found this solution to put in a Constant throughput timer, which I did.
ii) Number of threads - 10
iii) Ramp-up period - 900 seconds
iv) Infinite loop
v) Target throughput in minutes - 3300
vi) Calculate throughput based on - all active threads.
Although I had configured the ramp-up to be 15 minutes, it seems to run for more than 40 minutes before it achieves a throughput of 55/sec, due to the constant throughput timer. I found no errors in the responses with this approach.
The easiest option is the Precise Throughput Timer.
However you need to understand 2 things:
Timers can only pause JMeter samplers in order to limit JMeter's throughput to the desired value, so you need to supply a sufficient number of threads in the Thread Group. For instance, 200k requests per hour is more or less 55 requests per second; if your application's response time is 1 second you will need 55 threads, if it's 2 seconds you will need 110 threads, and so on. The most convenient option is the combination of the Throughput Shaping Timer and the Concurrency Thread Group; they can be connected together via the Feedback Function so JMeter can kick off more threads if the current number is not sufficient to produce the required load.
The system under test must be capable of processing that many requests per hour; if it can't, you won't be able to achieve this whatever you do on the JMeter side.
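The thread arithmetic above is Little's Law (threads needed ≈ target RPS × response time). A quick helper to reproduce it (the class and method names are mine; note that rounding up gives 56 rather than the rounded 55 used above):

```java
// Estimate the number of Thread Group threads needed to sustain a target
// hourly request rate at a given average response time (Little's Law).
public class ThreadSizing {
    public static int threadsNeeded(long requestsPerHour, double responseTimeSeconds) {
        double rps = requestsPerHour / 3600.0;              // 200k/h is ~55.5 rps
        return (int) Math.ceil(rps * responseTimeSeconds);  // round up to be safe
    }
}
```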
I've been looking around, but it's still not clear to me how to set up JMeter to simulate 1000 concurrent users for one minute.
How should the threads, Ramp-up and Duration be configured?
I hope I was clear in the request.
Thanks
Basically, Number of Threads determines the concurrency. If you add 100 threads, JMeter will spin up 100 concurrent threads. Then you can tell JMeter how this maximum should be reached. If you add a Ramp-Up Period, the threads will be created gradually within the configured time. For example, if you configure 100 threads and a 10-second ramp-up time, by the end of the 10th second JMeter will make sure all 100 threads are running; we can say JMeter creates 10 threads each second. This option is useful if you don't want to generate sudden spikes of load: say you want to warm up the server and then do your load/perf test, then you can use the ramp-up time.
I would suggest going for the following Thread Group setup:
Explanation:
Number of threads - 1000, as this is how many users you need to have
Ramp-up period. It's better to increase the load gradually; this way you will be able to correlate the increasing load with other metrics and KPIs. As per the JMeter documentation:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
In your case, given the short test duration, it makes sense to increase the load for the first 30 seconds; the last 30 seconds will be a "plateau".
Loop Count: you need to give JMeter a sufficient number of iterations, because if a thread finishes executing its samplers and there are no more loops to iterate, the thread will be shut down and the actual load may be lower than expected.
Duration is 60 seconds
I'm load testing an API. We have a problem: our response times are too high, sometimes close to a minute. We want to be in the range of under a second. But that is beside the point.
When I use a load testing tool such as Gatling, the RPS sent seems to grind to a halt. As you can see in the attached image, there is an initial 15 seconds of 20 RPS, and then suddenly almost no RPS at all. How can I maintain a constant RPS? It probably has to do with the poor response times, but what if I don't care about the response times? I just want the RPS to stay constant.
My initial tests with JMeter also show similar behaviour.
What injection strategy are you using? What does your scenario look like? Is every user making one request, a chain of requests, or either of the above in a loop?
Assuming that you want to test a single endpoint, the best approach to get constant requests per second (not constant responses, as you already know) is to use a scenario that executes a single request with a strategy that injects a constant number of users per second, e.g.:
setUp(
  scn.inject(constantUsersPerSec(25) during (15 minutes))
)
If your user performs more than one request, there is an option to throttle requests, but you need to remember that it will only throttle down, not up, so you need to make sure that the active users will make enough requests per second to reach that limit, e.g.:
setUp(scn.inject(
  constantUsersPerSec(10) during (15 minutes)
).throttle(
  jumpToRps(25), holdFor(15 minutes)
))
So here, if for example a single user makes 5 requests, you can reach even 50 req/s, but it will be throttled down to 25. You must remember, though, that new users are added every second, so if it takes more time to finish one user, the number of active users will increase. Also, if the response time is high, the active users may not produce enough req/s, since most of their time is spent waiting for a response.
In JMeter, you can achieve that by using a Constant Throughput Timer at the test plan level.
The Constant Throughput Timer allows you to maintain the throughput of your server (requests/sec). It is only capable of pausing JMeter threads in order to slow them down to the target throughput. Also, it works at the minute level, so you need to calculate the ramp-up period properly and let your test run long enough.
Let's think this through briefly:
To achieve the target throughput, you need enough threads in your test plan.
To calculate the number of threads you need for this test, you can use the formula:
Threads = RPS * max response time in seconds
In your case, if you want 20 RPS and your max response time is 60 seconds, you need at least 1200 (20 * 60 = 1200) threads in your test plan.
As the Constant Throughput Timer works at the minute level, to achieve 20 RPS you have to set the "Target Throughput" value to 1200/min and the "Calculate Throughput based on" value to "All active threads".
Constant Throughput Timer Config:
Now, if you have more than a single request in your test plan (e.g. 4 requests), then the 1200 requests/min will be distributed among the 4 samplers. That means you will get 5 RPS for each sampler.
Now, for the Thread Group configuration: as you have set "Calculate Throughput based on" to "All active threads" in the Constant Throughput Timer, all 1200 of your threads need to be started for you to achieve that 20 RPS. Use the Ramp-Up Period to control how these threads start.
The Ramp-Up Period is the time in which all the threads arrive at your tested application server. So if you use 60 seconds, it will take 60 seconds to start all 1200 of your threads; all 1200 will be active after 60 seconds.
You also need to set your test duration accordingly. Say you want to keep that 20 RPS for 5 minutes; in this case you have to set the test duration to 7 minutes (the 2 extra minutes are 1 minute of ramp-up for the 1200 threads to start and 1 minute of ramp-down for them at the end). Don't forget to set the loop count to Forever if you are using a Thread Group.
Thread Group Config for the above-mentioned scenario:
You can also use another handy JMeter plugin which is Ultimate Thread Group if you are confused with the default Thread Group configurations. You can download JMeter Plugins by using JMeter Plugins Manager.
Here is the Ultimate Thread Group Config for the above-mentioned scenario:
Now, after the test finishes, you can check the results for those 5 minutes during which all 1200 threads were active by using the Hits Per Second listener as well as the Active Threads Over Time listener.
Do not use the JMeter GUI for load testing; use the non-GUI mode. Also, remove any assertions from your test plan while you're trying to achieve a target RPS.
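A non-GUI run looks like this (the file names are examples):

```shell
# -n = non-GUI, -t = test plan, -l = results log,
# -e -o = generate an HTML dashboard report afterwards
jmeter -n -t load-test.jmx -l results.jtl -e -o report
```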
My service is calling another service, and this other service throttles me based on the number of requests sent during a whole minute (it doesn't matter how many per second, as long as there are < x requests in the last minute).
I would like to display a very rough estimate to my users of how many requests have been made during the last minute.
It doesn't need to be accurate in any way; it is just a way for the user to see roughly what the numbers are.
What would be the best, least memory-demanding way of implementing such a counter?
You could do something like:
maintain an int[] requestCount = new int[60]
for each request: requestCount[(System.currentTimeMillis() / 1000) % 60]++;
run a scheduled job every 1 second to reset the "stale" array position (61 seconds ago) back to 0
to get the number of requests over the past 60 seconds: IntStream.of(requestCount).sum();
Note:
this would not be thread-safe. If you need thread safety you could use an AtomicIntegerArray (or a final AtomicInteger[] array).
this is not robust to clock changes etc.
The footprint should be fairly small.
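A compact sketch along those lines (the class name is mine; instead of a scheduled reset job, this variant stamps each slot with the second it was last written, which also makes it testable with an injected clock):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.function.LongSupplier;

// Rough rolling one-minute request counter: one slot per second of the minute.
// The clock (in seconds) is injected so the class can be unit-tested;
// in production pass () -> System.currentTimeMillis() / 1000.
public class RollingCounter {
    private final AtomicIntegerArray slots = new AtomicIntegerArray(60);
    private final long[] stamp = new long[60];  // the second each slot was last written
    private final LongSupplier clockSeconds;

    public RollingCounter(LongSupplier clockSeconds) {
        this.clockSeconds = clockSeconds;
    }

    public void record() {
        long now = clockSeconds.getAsLong();
        int idx = (int) (now % 60);
        synchronized (stamp) {
            if (stamp[idx] != now) {   // slot holds data from >= 60 s ago: reset it
                slots.set(idx, 0);
                stamp[idx] = now;
            }
        }
        slots.incrementAndGet(idx);
    }

    public int lastMinute() {
        long now = clockSeconds.getAsLong();
        int sum = 0;
        synchronized (stamp) {
            for (int i = 0; i < 60; i++) {
                if (now - stamp[i] < 60) sum += slots.get(i);  // skip stale slots
            }
        }
        return sum;
    }
}
```

The footprint is still just the 60-int array plus 60 longs, and the answer is only approximate at second boundaries, which fits the "really rough" requirement.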
Suppose 50K Runnables are to be scheduled to execute indefinitely every 30 mins.
Each Runnable will take 1-5 secs, and perform one Socket operation.
The ThreadPool is of size 200.
Now, how do I determine the initial delay of each of the 50K Runnables for the scheduleWithFixedDelay calls, or how do I schedule these Runnables in a processor-efficient way?
Is there any standard algorithm for distributing this kind of scheduling?
Thanks.
If you have 50K Runnables which take up to 5 seconds each, that is up to 250,000 seconds of work. If you want to run this every 30 * 60 = 1800 seconds, you need a minimum of 139 threads. With 200 threads it could take about 20 minutes to execute them all. You may need more threads if you want these tasks to complete in, say, 5 minutes.
A simple read or write shouldn't take 1-5 seconds. By one socket operation, do you mean a read or a write, or do you mean opening a socket, sending some data, and getting a reply? The latter can involve a lot of overhead.
While 50K is a lot, I would just have this many scheduled tasks, unless you need the tasks to run as close to the 30-minute interval as possible. If you have 50K independent tasks, they will each run approximately every 30 minutes but at different times from each other. This is unavoidable to some degree, as you don't have 50K cores, but how concerned are you about running them as close together as possible?
Distributing the scheduling based on your time limit is probably the way to go.
If you have 30 minutes, that's 1800 seconds in which to schedule and complete all 50,000 jobs.
So, accounting for the time to complete the last round of jobs, you have 50000 / (1800 - 5) jobs to start per second.
That equates to starting about 28 (rounded up) jobs a second. So a simple approach could be to just schedule at least 28 jobs every second. That minimizes simultaneous consumption of resources while completing all jobs within the designated time period. We don't need to worry about the thread pool size for the socket operations, because if each operation completes within a maximum of 5 seconds, then the maximum number of simultaneous socket operations happening this way is 28 * 5 = 140.
Implementing such a schedule is then a simple loop over delays from 0 to 1794 seconds, scheduling a fixed number of jobs (28 in this case) at each delay, followed by a wait to round off to the 30-minute mark before starting again.
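A sketch of that staggered schedule with a ScheduledExecutorService (the constants and the job body are placeholders; scheduleAtFixedRate is used rather than scheduleWithFixedDelay so the 30-minute cadence doesn't drift by each job's run time):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Stagger 50,000 repeating jobs across the 30-minute window:
// a batch of 28 jobs starts each second, as computed above.
public class StaggeredScheduler {
    static final int TOTAL_JOBS = 50_000;
    static final int JOBS_PER_SECOND = 28;       // ceil(50000 / (1800 - 5))
    static final long PERIOD_SECONDS = 30 * 60;  // repeat every 30 minutes

    // Job i starts in second i / 28: jobs 0..27 at second 0, 28..55 at second 1, ...
    static long initialDelaySeconds(int jobIndex) {
        return jobIndex / JOBS_PER_SECOND;
    }

    // Call with e.g. Executors.newScheduledThreadPool(200) and the socket job.
    static void scheduleAll(ScheduledExecutorService pool, Runnable job) {
        for (int i = 0; i < TOTAL_JOBS; i++) {
            pool.scheduleAtFixedRate(job, initialDelaySeconds(i),
                    PERIOD_SECONDS, TimeUnit.SECONDS);
        }
    }
}
```

The last batch starts at second 1785, so even a 5-second job finishes inside the 1800-second window.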