I have 1 Thread Group in which I have defined 100 threads and 1 iteration, with a single HTTP Sampler. Basically I am testing a single GET API.
Now, JMeter should start 100 threads, and each of them should fire a request at my server, which hosts the API. The server can respond to 100 requests concurrently, so at any point in time I should have a concurrency of 100.
But that is not what happens when I check through BlazeMeter. I get max users of 37 and total users of 100, which means the maximum concurrency during the test was 37.
This is only possible if JMeter did not execute the threads in parallel. So where am I wrong?
I want all threads to execute in parallel once they are all created and to fire their requests at once, so that the maximum concurrency is 100 for 1 iteration.
If you need more control and accuracy, use the Ultimate Thread Group JMeter plugin instead of the regular Thread Group.
Set Start Threads Count to 100, with 0 Initial Delay and 0 Startup Time and a positive Hold Load For time; the thread group will then hold a maximum of 100 concurrent users.
General example:
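The screenshot is not reproduced here, but the schedule row would look roughly like this (the Hold Load For value of 60 seconds is just an illustration; use whatever duration you need):

    Start Threads Count : 100
    Initial Delay, sec  : 0
    Startup Time, sec   : 0
    Hold Load For, sec  : 60
    Shutdown Time, sec  : 0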
If your computer can't handle generating the load, you may need a distributed testing setup.
It is not recommended to use a Ramp-Up period of 0.
I think you are confusing concurrency (related to virtual users) with simultaneity (related to requests or samplers).
To fire the requests simultaneously, use a Synchronizing Timer as a child of your request. It will pause X number of threads and then release them all at once. Before that, to bring the concurrency up to 100 users, set the ramp-up time accordingly (e.g. 10 seconds). It will then take 10 seconds for all 100 users to be alive on the server, after which the requests are fired for 100 users simultaneously.
It doesn't matter which thread group you use, but if you want to maintain that concurrency for a longer period of time (hold the load), use the Ultimate Thread Group or set the loop count accordingly.
If you want to perform spike testing, the regular Thread Group is fine. But remember that some of your threads might already have finished their work and been shut down, so you may not see the expected number of concurrent users.
Here are example screenshots for a 1-minute test duration (100 users: 30 sec ramp-up time + 20 sec hold-load time + 10 sec ramp-down time):
Ultimate Thread Group Config:
Test Results (100 requests at once):
Test Results (100 Concurrent users):
Hope it helps you to understand.
To achieve this, you can use the Synchronizing Timer. Add a Synchronizing Timer as a child of your GET request.
The purpose of the SyncTimer is to block threads until X number of
threads have been blocked, and then they are all released at once. A
SyncTimer can thus create large instant loads at various points of the
test plan.
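Conceptually, this behaves like a CyclicBarrier in plain Java: each thread waits at the barrier until the configured number of threads has arrived, and then they are all released together. A minimal sketch of the idea (the thread count mirrors the 100 users in the question; the printed message is just a placeholder for the real request):

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class SyncTimerAnalogy {
        public static void main(String[] args) {
            int users = 100; // like "Number of Simulated Users to Group by" in the Synchronizing Timer
            CyclicBarrier barrier = new CyclicBarrier(users);

            for (int i = 0; i < users; i++) {
                int userId = i;
                new Thread(() -> {
                    try {
                        barrier.await(); // block until all 100 threads have arrived
                        System.out.println("user " + userId + " fires its request now");
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }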
Secondly, to keep a constant load of 100 requests (hits) per second for a given duration, you can use the Throughput Shaping Timer. Make sure you set the Loop Count to Forever and set the Duration accordingly in the Thread Group.
JMeter acts as follows:
The number of threads specified in the Thread Group is kicked off during the ramp-up period
Each thread starts executing Samplers from top to bottom (or according to the Logic Controllers)
When a thread doesn't have any more Samplers to execute or loops to iterate, it is shut down
Given all of the above, you may run into a situation where some threads have already finished their work and been shut down while others haven't yet been started. Check out the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more comprehensive explanation if needed.
Therefore the solutions are:
Provide more "iterations" at the Thread Group level so your users have something to loop over; this way you will have 100 concurrent users
If you need to perform some form of spike testing and don't want to (or cannot) increase the number of loops, just use a Synchronizing Timer; this way JMeter will pause the threads until the desired number is reached and then release them all at exactly the same moment
Related
Good day!
I'm using JMeter to do load testing. It's my first time using this tool.
I'm confused about some aspects of JMeter.
I will be using bzm - Concurrency Thread Group to simulate traffic to the server. Based on the documentation, it seems it is required to be used along with jp@gc - Throughput Shaping Timer.
However, I'm thinking of not using it. Will there be any problem during my test?
bzm - Concurrency Thread Group
Not necessarily.
The Concurrency Thread Group is responsible for starting/stopping threads (you can think of them as virtual users), like "I want to have 100 concurrent users for 10 minutes"
Throughput shaping timer is responsible for producing throughput, the load in terms of requests per second, like "I want to have 100 requests per second for 10 minutes"
So:
When you operate with "users" you cannot guarantee the number of requests per second which will be generated (see What is the Relationship Between Users and Hits Per Second? for more details if needed)
When you operate with "throughput" you cannot guarantee that the number of users will be sufficient for conducting the required load.
So you don't have to use the Throughput Shaping Timer. You can use it if you want to reach/maintain a load of a certain number of requests per second and want to make sure that the number of threads is sufficient; the two can be connected via the Feedback Function, so JMeter will be able to kick off new threads if the current number is not sufficient for conducting the required load.
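For illustration only, assuming the plugin's feedback function takes the Throughput Shaping Timer's element name followed by the minimum concurrency, maximum concurrency and number of spare threads, the Target Concurrency field of the Concurrency Thread Group would be set to something like:

    ${__tstFeedback(shaper,1,100,10)}

where "shaper" is a placeholder for whatever your Throughput Shaping Timer element is named.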
From the 'Thread Properties' configuration, I think the QPS is 750/50 = 15. But when I run the test, the elapsed time is '00:01:01'. Does that mean the real QPS is 750/61 ≈ 12?
JMeter acts as follows:
Given your setup, JMeter starts with 1 virtual user and adds 15 users each second (750 threads over a 50-second ramp-up)
Each virtual user starts executing samplers from top to bottom (or according to the Logic Controllers) as fast as it can
When there are no more samplers to execute or loops to iterate, the thread is shut down
When there are no more threads left the test finishes.
QPS depends on many factors, the main ones being:
Number of samplers in your Test Plan
Sample Result response time
So QPS is not something you can efficiently or precisely control; the options are:
If QPS is too low
Add more "Loops" to your Thread Group so threads could re-execute samplers maintaining 750 users concurrency as looking into your Test Plan and execution time it appears you have around 13 simultaneous users only
Add more virtual users
If QPS is too high, you can slow JMeter threads down to the desired value using the Constant Throughput Timer
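To connect this back to the numbers in the question (assuming 1 sampler and 1 loop per thread, so 750 requests in total): if the whole run takes 61 seconds, the average throughput is 750 / 61 ≈ 12.3 requests per second, whereas 750 / 50 = 15 is only the rate at which threads are started during the ramp-up, not the QPS.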
50 seconds is the ramp-up period, which means thread number 750 will be started after 50 seconds. The elapsed time is influenced by the ramp-up period, but the ramp-up only affects when each thread starts.
Notice that the test can even be executed "forever" by checking Forever for the loop count.
In your case the loop count is 1, but a test can still last for hours if there is a long list of slow HTTP requests, a complex Loop Controller, or other Test Fragments inside the test.
I have a few Quartz (2.2) jobs running. Let's say one runs every 5 seconds and another runs every 10 minutes.
I don't want 2 jobs to be executed at the same time. I've seen this:
DisallowConcurrentExecution
but it only applies to jobs of the same instance, whereas I generally don't want two jobs (of any instance) to overlap.
Edit:
All the jobs work with one database, which is why it's important that they don't run at the same time. Each job has different things to do.
The simplest way is to configure the underlying thread pool to use a single thread; this will achieve your goal. Set the following property in your quartz.properties configuration file:
org.quartz.threadPool.threadCount
The number of threads available for
concurrent execution of jobs. You can specify any positive integer,
although only numbers between 1 and 100 are practical. If you only
have a few jobs that fire a few times a day, then one thread is
plenty. If you have tens of thousands of jobs, with many firing every
minute, then you want a thread count more like 50 or 100 (this highly
depends on the nature of the work that your jobs perform, and your
systems resources).
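If you prefer to configure the scheduler programmatically rather than via quartz.properties, a minimal sketch could look like this (the instance name is just a placeholder; the key setting is the thread count of 1):

    import java.util.Properties;

    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.impl.StdSchedulerFactory;

    public class SingleThreadSchedulerFactory {
        public static Scheduler create() throws SchedulerException {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "SingleThreadScheduler"); // placeholder name
            props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.setProperty("org.quartz.threadPool.threadCount", "1"); // only one job can execute at a time
            props.setProperty("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            return scheduler;
        }
    }

With a single worker thread, a job that is still running simply delays the next trigger's execution instead of overlapping with it.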
I have an SOA application which consists of many servlets. When a client submits a request, my application connects to 4 external applications, exchanges data with them, and provides the result.
Now, due to these 4 connections, the response to a request is delayed considerably. Hence, we are planning to move these 4 calls into separate threads so that the main thread can respond quickly, saying 'we are processing your data'.
The question is, how many threads should I start for these tasks? I could do all of the tasks in a single thread or in 4 different threads. What is the optimal solution?
Also, what affects the CPU most: the number of threads, or the length of time a particular thread executes?
My application receives 5 to 7 requests per second. So, which would be better: 1 separate (and longer-running) thread, or 4 separate (but shorter-running) threads per request?
Thanks in advance.
The number of threads you should start depends on the number of independent tasks you have. The more tasks/modules/functions (whatever you call them) you have, the more threads you can start, one per task/module. Based on the independent work that has to be done concurrently, you need to decide how many threads to use and how to utilize them effectively.
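For the four external calls described in the question, a minimal sketch could look like the following (the class, method and application names are placeholders, not from the original post). The 4 calls run concurrently on a shared pool while the servlet thread returns immediately:

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ExternalCallDispatcher {
        // One pool shared by all requests; size it for 4 tasks per request
        // at roughly 7 requests per second, rather than one pool per request.
        private final ExecutorService pool = Executors.newFixedThreadPool(4 * 7);

        public void dispatch(String requestId) {
            // Placeholder tasks standing in for the 4 external application calls.
            List<Runnable> externalCalls = List.of(
                    () -> callExternalApp("app1", requestId),
                    () -> callExternalApp("app2", requestId),
                    () -> callExternalApp("app3", requestId),
                    () -> callExternalApp("app4", requestId));

            // Fire all 4 calls concurrently; this method returns immediately,
            // so the servlet can reply "we are processing your data".
            externalCalls.forEach(call -> CompletableFuture.runAsync(call, pool));
        }

        private void callExternalApp(String app, String requestId) {
            // Hypothetical stand-in for the real integration code.
            System.out.println("Calling " + app + " for request " + requestId);
        }
    }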
what affects CPU most? Number of threads OR length of the duration of execution of a particular thread?
That seems like a trivial question: both will affect it. Or maybe not; it depends on the application/code you have. But that should not be a problem.
I have 4 separate processes which need to go one after another.
1st process
2nd process
3rd process
4th process
Since the processes are connected to one another, each process should run only after the one before it finishes.
Each process has its own variable duration, which will vary as the program's input data grows.
But a rough sketch would look like this:
Program Runs
1st process - lasts 10 seconds
2nd process - has 300 HTTP GET requests, lasts 3 minutes
3rd process - has 600 HTTP GET requests, lasts 6 minutes
4th process - lasts 1 minute
The program is written in Java.
Thanks for any answer!
There is no concurrency support in the Java API for your use case because what you're asking for is the opposite of concurrency. You have a set of four mutually dependent operations that need to run in a specific order. You only need, and should probably only use, one thread to handle this case correctly.
It would be reasonable and prudent to put each operation in its own method or class, based on how complex the operations are.
If you insist on using multiple threads, your main thread should maintain a list of runnables. Iterate through the list. Pop the first runnable from the list, create a new thread for that runnable, start the thread, and then invoke join() on the thread. The main thread will block until the runnable is complete. The loop will take you through all the runnables in order. Again, there is no good reason to do this. There may or may not be a bad reason.
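A minimal sketch of the thread-and-join loop described above (the runnable bodies are placeholders for the four processes):

    import java.util.List;

    public class SequentialRunner {
        public static void main(String[] args) throws InterruptedException {
            // Placeholder runnables standing in for the four processes.
            List<Runnable> processes = List.of(
                    () -> System.out.println("1st process"),
                    () -> System.out.println("2nd process: 300 HTTP GET requests"),
                    () -> System.out.println("3rd process: 600 HTTP GET requests"),
                    () -> System.out.println("4th process"));

            for (Runnable process : processes) {
                Thread worker = new Thread(process);
                worker.start();
                worker.join(); // block until this process finishes before starting the next one
            }
        }
    }

Calling process.run() directly in the loop, instead of creating and joining a thread, gives the same ordering on a single thread, which is the approach recommended above.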