I have a few Quartz (2.2) jobs running. Let's say one runs every 5 seconds and another one runs every 10 minutes.
I don't want two jobs to be executed at the same time. I've seen
DisallowConcurrentExecution
but that only applies to instances of the same job, and I generally don't want any two jobs (of any type) to overlap.
Edit:
All the jobs work with one database, which is why it's important that they don't run at the same time. Each job has different things to do.
The simplest way is to configure the underlying thread pool to use a single thread; this will achieve your goal. Add the following property to your quartz.properties configuration file:
org.quartz.threadPool.threadCount
The number of threads available for
concurrent execution of jobs. You can specify any positive integer,
although only numbers between 1 and 100 are practical. If you only
have a few jobs that fire a few times a day, then one thread is
plenty. If you have tens of thousands of jobs, with many firing every
minute, then you want a thread count more like 50 or 100 (this highly
depends on the nature of the work that your jobs perform, and your
system's resources).
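For reference, a minimal quartz.properties sketch with a single worker thread (the SimpleThreadPool class shown below is the Quartz default, so the thread count is the only real change):

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 1

With such a pool, even if the 5-second job and the 10-minute job fire at the same moment, one of them simply waits until the single worker thread is free.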
Related
I have 1 thread group in which I have defined 100 threads and 1 iteration, with a single HttpSampler. Basically I am testing a single GET API.
Now, JMeter should start 100 threads, and they should then fire requests at my server, which hosts the API. The server can respond to 100 requests concurrently. So at any point in time I should have 100 concurrency.
But that is not what is happening when I check through BlazeMeter. I get a max users of 37 and total users of 100, which means the maximum concurrency during the test was 37.
This can only be possible if JMeter did not execute the threads in parallel. So where am I wrong?
I want all threads to execute in parallel after they are all created and to fire requests at once, so that the maximum concurrency is 100 for 1 iteration.
If you need more control and accuracy, use the Ultimate Thread Group JMeter plugin (instead of the regular Thread Group).
Set Start Threads Count to 100, with 0 initial delay and 0 startup time, and a positive hold time; your plan will then hold 100 concurrent users.
General example:
If your computer can't handle generating the load, you may need a distributed testing setup.
It is not advisable to use a Ramp-Up period of 0.
I think you are confusing concurrency (related to virtual users) with simultaneity (related to requests or samplers).
To fire requests simultaneously, use the Synchronizing Timer as a child of your requests. It will pause X number of threads and then release them at once. Before that, to build the concurrency up to 100 users, set the ramp-up time accordingly (e.g. 10 seconds). It will then take 10 seconds for 100 users to become alive on the server, after which the requests are fired for all 100 users simultaneously.
It doesn't matter which thread group you use, but if you want to maintain the concurrency for a longer period of time (hold that concurrency), then use the Ultimate Thread Group, or adjust the loop count accordingly.
If you want to perform spike testing, the regular Thread Group is fine. But remember that some of your threads might already have finished their work and been shut down, so you won't see the expected number of concurrent users.
Here are example screenshots for a 1-minute test duration (100 users: 30 sec ramp-up + 20 sec hold load + 10 sec ramp-down):
Ultimate Thread Group Config:
Test Results (100 requests at once):
Test Results (100 Concurrent users):
Hope it helps you to understand.
To achieve this, you can use the Synchronizing Timer. Add a Synchronizing Timer as a child of your GET request.
The purpose of the SyncTimer is to block threads until X number of
threads have been blocked, and then they are all released at once. A
SyncTimer can thus create large instant loads at various points of the
test plan.
Secondly, to keep a constant load of 100 requests (hits) per second for a given duration, you can use the Throughput Shaping Timer. Make sure you set the Loop Count to Forever and the Duration accordingly in the Thread Group.
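For completeness, the Synchronizing Timer has only two settings; a sketch of the values for this scenario (field names as they appear in recent JMeter versions, so treat them as approximate):

Number of Simultaneous Users to Group by: 100
Timeout in milliseconds: 0   (0 means wait indefinitely until 100 threads are queued)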
JMeter acts as follows:
The number of threads specified in the Thread Group is being kicked off during the ramp-up period
Each thread starts executing samplers from top to bottom (or according to the Logic Controllers)
When a thread doesn't have any more samplers to execute or loops to iterate, it is shut down
Given all of the above, you may run into the situation where some threads have already finished their work and been shut down while others have not yet been started. Check out the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more comprehensive explanation if needed
Therefore the solutions are:
Provide more "iterations" at the Thread Group level so your users have something to loop over; this way you will have 100 concurrent users
If you need to perform some form of spike testing and don't want to (or cannot) increase the number of loops, just use the Synchronizing Timer; JMeter will pause the threads until the desired number is reached and release them all at exactly the same moment
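As a rough worked illustration of why the observed concurrency stays well below 100 (the numbers here are assumed, since the question doesn't state the ramp-up or response time): with 100 threads, a 10-second ramp-up and a GET that completes in about 2 seconds, roughly 10 threads start each second and each one finishes its single iteration about 2 seconds later, so only around 20 threads are ever alive at the same moment even though 100 were started in total.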
I have around 1,000 entries in my datastore, and this is likely to increase over time to around 10,000 entries. My task is to update certain properties of each row and save it back, and this task has to be performed every 24 hours.
So, what should I use?
First, you create a cron job that runs every 24 hours.
Second, you need to decide what this cron job will do. The simplest option is to update all 1,000 records. You can retrieve and save entities in large batches (e.g. 500 per call). If this is a simple update of values, it will take just a few seconds.
Since cron jobs are not retried if they fail, a better option is to create a task and add it to the queue. All updates will happen within that task.
NB: Make sure that if your task is retried, it won't mess up the data. If this is not possible, you will have to use some kind of flag (e.g. a timestamp of the last update) to separate updated entities from those that still need updating.
As your data set grows, your cron job can start multiple tasks to update, for example, 1,000 records in each task.
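A minimal sketch of this pattern, assuming the App Engine Java runtime with its standard Task Queue and Datastore APIs; the handler URL, the "Record" kind and the "lastUpdated" property are hypothetical placeholders:

import java.util.Date;
import java.util.List;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Handler hit by the cron entry: it only enqueues a task, because tasks
// (unlike cron requests) are retried automatically if they fail.
public class DailyCronServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder.withUrl("/tasks/update-records"));
    }
}

// Handler hit by the task: reads and writes entities in large batches.
class UpdateRecordsServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        // Paging with cursors is omitted here; with ~1,000 entities a
        // 500-entity batch keeps this to a couple of round trips.
        List<Entity> batch = ds.prepare(new Query("Record"))
                .asList(FetchOptions.Builder.withLimit(500));
        for (Entity e : batch) {
            e.setProperty("lastUpdated", new Date()); // also serves as the "already updated" flag
        }
        ds.put(batch); // one batched write instead of 500 individual puts
    }
}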
With a task queue, the tasks have to be added to the queue manually through code. If you want to run this task automatically every X amount of time, what you need is a cron job.
You need both:
Cron job to start your batch update job every 24 hours
Task queues to process your records.
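To tie the two together on the Java runtime, the cron side is just an entry in WEB-INF/cron.xml pointing at the servlet that enqueues the task (the URL below is a hypothetical handler, mapped to a servlet like the one sketched in the previous answer):

<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/cron/update-records</url>
    <description>Kick off the daily batch update</description>
    <schedule>every 24 hours</schedule>
  </cron>
</cronentries>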
I have an SOA application which consists of many servlets. When a client submits a request, my application connects to 4 external applications, exchanges data between them and provides the result.
Now, due to these 4 connections, the response to a request gets delayed considerably. Hence, we are planning to move these 4 calls into separate threads so that the main thread can respond quickly, saying 'we are processing your data'.
The question is, how many threads should I start for these tasks? I can do all of the tasks in a single thread vs. 4 different threads. What is the optimal solution?
Also, what affects the CPU most: the number of threads, or the length of the duration of execution of a particular thread?
My application receives 5 to 7 requests per second. So, what would be better: 1 separate (and longer-running) thread or 4 separate (but shorter-running) threads per request?
Thanks in advance.
The number of threads you should start depends on the number of independent tasks you have. The more tasks/modules/functions (whatever you call them) you have, the more threads you can start, one per task/module. Based on the independent work to be done concurrently, you need to decide how many threads to use and how to utilize them effectively; a sketch of that approach for your four external calls follows below.
what affects CPU most? Number of threads OR length of the duration of execution of a particular thread?
That seems like a trivial question, but maybe it isn't. Both will have an effect; how much depends on the application and the code you have. But that should not be a problem.
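A minimal sketch of the threaded variant, assuming the four external calls are independent; the call* methods and the pool size are hypothetical placeholders, and a shared pool is assumed rather than creating new threads per request:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExternalCalls {
    // One pool shared by the whole application; the size is a tuning knob that
    // depends on how long the four external calls take at 5-7 requests per second.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(16);

    public List<Future<String>> dispatch() {
        List<Callable<String>> calls = Arrays.asList(
                this::callServiceA, this::callServiceB,
                this::callServiceC, this::callServiceD);
        List<Future<String>> results = new ArrayList<>();
        for (Callable<String> call : calls) {
            // submit() returns immediately, so the servlet can reply
            // "we are processing your data" and collect the futures later.
            results.add(POOL.submit(call));
        }
        return results;
    }

    // Placeholder stubs for the four external applications
    private String callServiceA() { return "..."; }
    private String callServiceB() { return "..."; }
    private String callServiceC() { return "..."; }
    private String callServiceD() { return "..."; }
}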
I have a Quartz trigger that is set to execute a process every minute.
Since concurrent execution is disallowed, if the process takes longer than a minute, further triggers are prevented from firing. However, does this mean that the triggers that attempted to run are now "standing in line", waiting to be executed?
For example, if the process takes 10 minutes to execute, will there be 10 triggers sitting there waiting to execute? I am trying to prevent a buildup on the server.
No. If concurrent execution is disallowed (with the annotation @DisallowConcurrentExecution), further jobs won't start while the current one is still running.
See Job State and Concurrency in the documentation.
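For reference, a minimal sketch of a job carrying the annotation, assuming Quartz 2.x; the class name and its work are placeholders:

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

@DisallowConcurrentExecution
public class MinuteJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Long-running work goes here. While this is still running, new firings
        // for the same JobDetail are held back instead of executed concurrently.
    }
}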
The accepted answer is not correct.
I just did the test with Quartz 2.2.3, and the jobs are indeed put on the back burner.
Meaning that if the time needed to process the job is longer than the time between triggers, you will pile up jobs.
The documentation referenced by @1ac0 does not mention anything beyond the fact that the job will not run 'immediately'.
Good Day,
I am required to write a java server that performs an action every X minutes. The action is to check a database to see if the current/system time matches any of the times in a database, and to pull out those items, and send a TCP message to them.
The database call is local on the machine, so that is no problem. However, at least 10 TCP calls need to be sent out simultaneously, so the tick may actually need to occur on its own thread. Can I have some suggestions?
Do I need a thread pool?
One thing you can do is create a scheduler job and run that job every X minutes; the job will then be performed every X minutes, and you define your task inside the job. For more info, click here.
I would use a Timer or else I would use the Quartz Scheduler - the former is more lightweight, while the latter is (optionally) durable (meaning that scheduled tasks will be saved to a database and reloaded when your program restarts).
Either a TimerTask or a ScheduledExecutorService implementation would be the best option for this task. And yes, I think a thread pool would be the best option, because you don't need to create 10 new threads every X minutes.
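A minimal sketch of that combination using plain JDK classes; findDueTargets and sendTcpMessage are hypothetical placeholders for the database query and the TCP client:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TickDispatcher {
    // One thread drives the periodic tick...
    private final ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();
    // ...and a reusable pool of 10 threads handles the simultaneous TCP sends,
    // so no threads are created and torn down on every tick.
    private final ExecutorService senders = Executors.newFixedThreadPool(10);

    public void start(long periodMinutes) {
        ticker.scheduleAtFixedRate(this::tick, 0, periodMinutes, TimeUnit.MINUTES);
    }

    private void tick() {
        for (String target : findDueTargets()) {          // local database lookup
            senders.submit(() -> sendTcpMessage(target)); // the ~10 sends run in parallel
        }
    }

    private List<String> findDueTargets() { return Collections.emptyList(); }
    private void sendTcpMessage(String target) { /* open socket, write, close */ }
}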