I have started using the Quartz scheduler in my application. I want to know whether we can find the actual start time of a scheduled job and the actual time at which it ends.
You can use a JobListener for that.
The jobToBeExecuted method runs before the job starts, and jobWasExecuted runs after it ends.
EDIT:
To further expand on the issue:
1. You can use the JobDetail object to store data about the job (look at getJobDataMap).
2. The jobToBeExecuted method is called automatically each time a job is about to execute; it receives a JobExecutionContext, which has a getJobDetail method (see point 1).
3. The same goes for jobWasExecuted, only this one is called after the job has finished.
Look here for further information.
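For illustration, a minimal sketch of such a listener, assuming the Quartz 2.x API (the class name and listener name here are made up):

import java.util.Date;

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.listeners.JobListenerSupport;

public class TimingJobListener extends JobListenerSupport {

    @Override
    public String getName() {
        return "timing-listener"; // any unique name
    }

    @Override
    public void jobToBeExecuted(JobExecutionContext context) {
        // Invoked just before execute(); the context carries the actual fire time.
        getLog().info("Job {} started at {}",
                context.getJobDetail().getKey(), context.getFireTime());
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        // Invoked right after execute() returns (or throws).
        getLog().info("Job {} finished at {} ({} ms)",
                context.getJobDetail().getKey(), new Date(), context.getJobRunTime());
    }
}

Register it with scheduler.getListenerManager().addJobListener(new TimingJobListener()); and it will fire for every job.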
Is there any way to persist a JobDataMap without triggering/scheduling any jobs? Can I afterwards (on a callback) start a job with the stored JobDataMap?
I have many jobs scheduled with Quartz, and I pass a JobDataMap to those jobs:
scheduler.triggerJob(new JobKey("job-name", "job-group"), myJobDataMap);
Now I need to implement a job queue, since there are jobs which can't be launched in parallel. The problem is that the state (JobDataMap) for some jobs is passed from the client and should be persisted for queueing purposes. On the other hand, I can't schedule the job on the user's request, since I don't know when it should be executed! (It should be executed right after the previous job.)
Yes.
You can use the method addJob(JobDetail, boolean) to add a new job to the Scheduler without actually scheduling it. Quoting the docs:
The Job will be 'dormant' until it is scheduled with a Trigger, or Scheduler.triggerJob() is called for it.
Your data map would be part of the JobDetail parameter.
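A minimal sketch of that approach, assuming the Quartz 2.x fluent API (MyJob and the map contents are made up). Note that the job must be stored durably, since no trigger references it yet:

import static org.quartz.JobBuilder.newJob;

import org.quartz.JobDataMap;
import org.quartz.JobDetail;
import org.quartz.JobKey;

JobDataMap dataMap = new JobDataMap();
dataMap.put("payload", "client state");  // the state passed from the client

JobDetail job = newJob(MyJob.class)
        .withIdentity("job-name", "job-group")
        .storeDurably()                  // required: the job has no trigger yet
        .usingJobData(dataMap)
        .build();

scheduler.addJob(job, true);             // true = replace an existing definition

// Later, on the callback, fire it with the stored map:
scheduler.triggerJob(new JobKey("job-name", "job-group"));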
I have two jobs running. The first job polls data and updates a queue; the other one reads the queue and processes it. I need to pass fresh data to the second job every time it runs. The second job runs every 10 seconds, so I need to pass it a fresh JobDataMap on every run.
I read the Quartz documentation on this, but it's not clear. Should I use one JobDataMap and pass it back and forth, filling it up? Or is there a way to create a new JobDataMap and pass it to the job every time it runs?
Any ideas?
I have a Quartz job which starts every 10 minutes.
If a job doesn't finish within 10 minutes, another job starts at the next 10-minute mark.
What I want is that the next job (every 10 minutes) should start only if the previous job has finished running. Is there any way to do this?
From the Quartz documentation:
@DisallowConcurrentExecution is an annotation that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently. Notice the wording there, as it was chosen very carefully. In the example from the previous section, if "SalesReportJob" has this annotation, then only one instance of "SalesReportForJoe" can execute at a given time, but it can execute concurrently with an instance of "SalesReportForMike". The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the annotation carried on the class itself, because it does often make a difference to how the class is coded.
If you don't want SalesReportForMike and SalesReportForJoe to run concurrently, you can set the scheduler's ThreadPool size to 1, so that at any given time only one job will run.
Also take a look at StatefulJob.
Take a look at the DisallowConcurrentExecution annotation, which prevents multiple instances of the same job from running at the same time.
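A minimal sketch, assuming Quartz 2.x (the class name is made up):

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Quartz will not start a second instance of this JobDetail while one is
// still running; a trigger firing in the meantime is delayed, not run in parallel.
@DisallowConcurrentExecution
public class PollingJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // long-running work; it may safely exceed the 10-minute interval
    }
}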
I am using Quartz's JDBCJobStore. I added the job details and trigger information to the tables using CronScheduleBuilder.cronSchedule(). If any scheduled job fails, I need it to be retried, given a number of retries and a retry interval. So how can I add those parameters to the table for a job?
As far as I am aware, Quartz has no built-in way of doing this. You will have to manage it yourself.
if any job scheduler fails
I presume that the above line refers to the failure of a scheduled job.
As soon as the trigger fires, the associated job starts running, so there are two possibilities of failure here.
The first is that the scheduler is hard shut down while the job is being executed. Quartz can handle this at its best: there is a requestRecovery attribute for every job. If it is set to true, we are telling Quartz to recover/re-run this job at the scheduler's next startup if the scheduler was hard shut down during its execution. More information on this attribute here.
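A minimal sketch of setting that attribute with the 2.x JobBuilder API (the job class and identity are made up):

import static org.quartz.JobBuilder.newJob;

import org.quartz.JobDetail;

// Re-run this job at the next startup if the scheduler is hard shut
// down while the job is executing.
JobDetail job = newJob(MyJob.class)
        .withIdentity("recoverable-job", "jobs")
        .requestRecovery(true)
        .build();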
The second is that the job throws an exception during its execution. In our business logic this might mean that it has failed. (Note: Quartz does not consider the job failed; you have to decide, in the lifecycle of your job, that it failed because of this exception.) You can handle this by wrapping all the code in the job's execute() method in a try/catch block. If a critical exception occurs, the catch block handles it in a way that reschedules the job, i.e. makes the job retry. For this you can always create a new JobDetail and trigger (using some of the parameters from the JobExecutionContext of the failed job) to recreate/reschedule the job.
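A sketch of that catch-and-reschedule idea, assuming Quartz 2.x; the "attempts" key, the retry limit, and the one-minute delay are arbitrary choices for illustration:

import static org.quartz.TriggerBuilder.newTrigger;

import java.util.Date;

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

public class RetryableJob implements Job {

    private static final int MAX_RETRIES = 3;          // arbitrary
    private static final long RETRY_DELAY_MS = 60_000; // arbitrary

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            doWork(); // your business logic
        } catch (Exception e) {
            JobDataMap map = context.getMergedJobDataMap();
            int attempts = map.containsKey("attempts") ? map.getInt("attempts") : 0;
            if (attempts >= MAX_RETRIES) {
                throw new JobExecutionException("Giving up after " + attempts + " retries", e);
            }
            // Fire the same job again in a minute, carrying the attempt
            // count in the new trigger's data map.
            JobDataMap retryData = new JobDataMap(map);
            retryData.put("attempts", attempts + 1);
            Trigger retry = newTrigger()
                    .forJob(context.getJobDetail().getKey())
                    .usingJobData(retryData)
                    .startAt(new Date(System.currentTimeMillis() + RETRY_DELAY_MS))
                    .build();
            try {
                context.getScheduler().scheduleJob(retry);
            } catch (SchedulerException se) {
                throw new JobExecutionException(se);
            }
        }
    }

    private void doWork() throws Exception {
        // ... the work that may fail ...
    }
}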
Would you please explain to me the exact meaning of StatefulJob in Quartz, and its difference from a non-stateful job?
The StatefulJob interface provides two things.
First, only one instance of the job will run at any time.
Second, with a SimpleTriggerBean you do not have to worry about your job's running duration: the next run starts only after the delay time, counted from the end of the previous run.
StatefulJob guarantees that only one instance of the job will be running at any one time. For example, if you schedule your job to run every minute but it takes 5 minutes to complete, the job will not run again until the previous execution has completed.
This is useful to make sure there is only one job running at any given time.
The next job will be run on the next schedule, not immediately after the previous job completed.
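For reference, in Quartz 2.x the StatefulJob interface is deprecated in favour of two annotations that together give the same behaviour. A minimal sketch (the class name and "count" key are made up):

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.PersistJobDataAfterExecution;

// Equivalent of implementing StatefulJob: JobDataMap changes made during
// execute() are saved back, and two instances never run concurrently.
@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class CounterJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap map = context.getJobDetail().getJobDataMap();
        int count = map.containsKey("count") ? map.getInt("count") : 0;
        map.put("count", count + 1); // persisted and visible on the next run
    }
}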
jobDetail.getJobDataMap().put("type","FULL");
This line determines whether the job behaves statefully or not. With a stateful job, values put into the JobDataMap this way are persisted between runs; without one, changes made to the map inside the execute() method are not carried over to the next run. While a stateful execution is in progress the job will not be re-triggered: only one instance executes at a time, and the second sleeps until the first one has completed. When the same job is scheduled several times, the arguments written by one execution are shared with the next one at run time; this is one disadvantage of stateful jobs in a multi-scheduling process.