Is there any way to persist JobDataMap without triggering/scheduling any jobs? Can I afterwards (on callback) start a Job with the stored JobDataMap?
I have many jobs scheduled with Quartz, and I pass a JobDataMap to those jobs:
scheduler.triggerJob(new JobKey("job-name", "job-group"), myJobDataMap);
Now I need to implement a job queue, since there are jobs which can't be launched in parallel. The problem is that the state (JobDataMap) for some jobs is passed from the client and should be persisted for queueing purposes. On the other hand, I can't schedule the job on the user's request, since I don't know when it should be executed! (It should be executed right after the previous job.)
Yes.
You can use the method addJob(JobDetail, boolean) to add a new job to the Scheduler without actually scheduling it. Quoting the docs:
The Job will be 'dormant' until it is scheduled with a Trigger, or Scheduler.triggerJob() is called for it.
Your data map would be part of the JobDetail parameter.
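A minimal sketch of that approach (the job class, keys, and data values here are illustrative, not from the question):

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class DurableJobExample {
    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        // Store the job and its JobDataMap without any trigger. A job with no
        // trigger must be durable; the boolean argument replaces any existing
        // job with the same key.
        JobDetail detail = JobBuilder.newJob(MyJob.class)
                .withIdentity("job-name", "job-group")
                .usingJobData("clientState", "queued-payload")
                .storeDurably()
                .build();
        scheduler.addJob(detail, true);

        // Later, on the callback, fire it once with the stored data map:
        scheduler.triggerJob(new JobKey("job-name", "job-group"));
    }
}
```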
Related
I have two jobs running. The first job polls data and updates the queue. The other one reads the queue and processes it. I need to pass fresh data to the second job every time it runs. The second job runs every 10 seconds, so I need to pass it a fresh JobDataMap on every run.
I read the Quartz documentation on this, but it's not clear. Should I use one JobDataMap and pass it back and forth, filling it up? Or is there a way to create a new JobDataMap and pass it to the job every time it runs?
Any idea?
I have run into a case where I have to use a persistent Scheduler, since I have a web application that can crash or close due to some problems and might lose its job details if this happens. I have tried the following:
Use Quartz scheduler:
I used RAMJobStore first, but since it isn't persistent, it wasn't of much help. I can't set up JDBCJobStore because this would require huge changes to my existing code base.
In light of such a scenario,
I have the following queries:
If I use Spring's built-in @Scheduled annotation, will my jobs be persistent? I don't mind if the jobs get scheduled after the application starts. All I want is for the jobs not to lose their details and triggers.
If not, are there any other alternatives, keeping in mind that I need to schedule multiple jobs with my scheduler?
If yes, how can I achieve this? My triggers are different for each job. For example, I might have a job scheduled at 9 AM and another at 8.30 AM, and so on.
If not a scheduler, then is there another mechanism to handle this?
One thing I found is that the documentation for Quartz isn't very descriptive. I mean, it's fine for top-level config, but configuring it in your own application is a pain. This is just a side note; nothing to do with the question.
Appreciate the help. :)
No, Spring's @Scheduled annotation will typically only instruct Spring at what times a certain task should be scheduled to run within the current VM. As far as I know there is no execution context either. The schedule is static.
I had a similar requirement and created db-scheduler (https://github.com/kagkarlsson/db-scheduler), a simple, persistent and cluster-friendly scheduler. It stores the next execution-time in the database, and triggers execution once it is reached.
A very simple example for a RecurringTask without context could look like this:
final RecurringTask myDailyTask = ComposableTask.recurringTask(
        "my-daily-task",
        Schedules.daily(LocalTime.of(8, 0)),
        () -> System.out.println("Executed!"));

final Scheduler scheduler = Scheduler
        .create(dataSource)
        .startTasks(myDailyTask)
        .threads(5)
        .build();

scheduler.start();
It will execute the task named my-daily-task at 08:00 every day. It will be scheduled in the database when the scheduler is first started, unless it already exists in the database.
If you want to schedule an ad-hoc task some time in the future with context, you can use the OneTimeTask:
final OneTimeTask oneTimeTask = ComposableTask.onetimeTask(
        "my-onetime-task",
        (taskInstance, context) -> System.out.println(
                "One-time task with identifier " + taskInstance.getId() + " executed!"));

scheduler.scheduleForExecution(LocalDateTime.now().plusDays(1), oneTimeTask.instance("1001"));
See the example above. Any number of tasks can be scheduled, as long as the combination of task name and instance identifier is unique.
@Scheduled has nothing to do with the actual executor. The default Java executors aren't persistent (maybe there are some app-server-specific ones that are); if you want persistence, you have to use Quartz for job execution.
I am using Quartz's JDBCJobStore. I added job details and trigger information into the tables using CronScheduleBuilder.cronSchedule(). If any scheduled job fails, I need it to be retried, given a number of retries and a retry interval. So, how can I add those parameters into the table for a job?
As far as I am aware, Quartz has no built-in way of doing this. You will have to manage it yourself.
if any job scheduler fails
I presume the line above refers to the failure of any scheduled job.
As soon as the trigger fires, the associated job starts running. So there are two possibilities of failure here:
The scheduler is hard shutdown while the job is being executed.
Quartz can handle this well: there is a request-recovery attribute for every job.
If set to true, we are telling Quartz to "recover/re-run this job at the scheduler's next startup if the scheduler was hard shutdown during its execution". More information on this attribute here.
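A sketch of setting that attribute with JobBuilder (the job class and identity here are illustrative):

```java
JobDetail job = JobBuilder.newJob(MyJob.class)
        .withIdentity("recoverable-job", "my-group")
        .requestRecovery(true)  // re-run at next startup after a hard shutdown
        .build();
```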
The job has thrown an exception during its execution.
This might mean, in terms of our business logic, that the job has failed. (Note: Quartz does not consider the job failed. You will have to decide in your job's lifecycle that it failed because of this exception.)
You can handle this by wrapping all the code in the job's execute() method in a try/catch block.
If a critical exception occurs, the catch block can handle it by re-scheduling the job (i.e. making the job retry).
For this, you can create a new JobDetail and Trigger (reusing some of the parameters from the failed job's JobExecutionContext) to recreate/reschedule the job.
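A hedged sketch of that retry-from-within-execute() pattern. The "retryCount" key, the maximum of 3 retries, and the 30-second delay are illustrative choices, not Quartz built-ins:

```java
import org.quartz.*;

public class RetryingJob implements Job {
    private static final int MAX_RETRIES = 3;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        int retries = context.getMergedJobDataMap().containsKey("retryCount")
                ? context.getMergedJobDataMap().getInt("retryCount") : 0;
        try {
            doWork();  // your business logic
        } catch (Exception e) {
            if (retries >= MAX_RETRIES) {
                throw new JobExecutionException(e, false);  // give up
            }
            // Fire a one-shot trigger for the same job after a 30-second delay,
            // carrying the incremented retry count in the trigger's data map.
            Trigger retry = TriggerBuilder.newTrigger()
                    .forJob(context.getJobDetail())
                    .usingJobData("retryCount", retries + 1)
                    .startAt(DateBuilder.futureDate(30, DateBuilder.IntervalUnit.SECOND))
                    .build();
            try {
                context.getScheduler().scheduleJob(retry);
            } catch (SchedulerException se) {
                throw new JobExecutionException(se, false);
            }
        }
    }

    private void doWork() throws Exception {
        // ...
    }
}
```

Reading the retry count from getMergedJobDataMap() and writing it into the trigger's data map avoids relying on the job's own map being re-stored between runs.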
I have started using the Quartz scheduler in my application. I want to know whether we can find the actual start time of a scheduled job and the actual time at which it ends.
You can use a JobListener for that.
The method jobToBeExecuted runs before the job starts, and jobWasExecuted runs after it ends.
EDIT:
To further expand on the issue:
You can use the JobDetail object to store data about the job (look at getJobDataMap).
The jobToBeExecuted method is called automatically each time a job is about to execute, and it accepts a JobExecutionContext which has a getJobDetail method (see point 1).
The same goes for jobWasExecuted, only this one is called after the job finishes.
Look here for further information.
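A minimal sketch of such a listener. JobListenerSupport provides no-op defaults, so only the two callbacks mentioned above need overriding; the listener name is arbitrary:

```java
import java.util.Date;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.listeners.JobListenerSupport;

public class TimingJobListener extends JobListenerSupport {
    @Override
    public String getName() {
        return "timing-listener";
    }

    @Override
    public void jobToBeExecuted(JobExecutionContext context) {
        System.out.println(context.getJobDetail().getKey() + " starting at " + new Date());
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        // getJobRunTime() is the elapsed run time in milliseconds
        System.out.println(context.getJobDetail().getKey() + " finished after "
                + context.getJobRunTime() + " ms");
    }
}

// Register it for all jobs:
// scheduler.getListenerManager().addJobListener(new TimingJobListener());
```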
I have a Quartz job that has already been scheduled. I want to update the JobDataMap associated with it. If I get a JobDataMap with JobDataMap jobDataMap = scheduler.getJobDetail(....).getJobDataMap(), is that map "live"? ie. if I change it, will it be persisted in the scheduler? If not, how do I persist it?
In Quartz 2.0, StatefulJob is deprecated. In order to persist the job data map, use @PersistJobDataAfterExecution on the job class. It usually goes together with @DisallowConcurrentExecution.
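A sketch of that Quartz 2.x replacement for StatefulJob. The annotated class's JobDataMap is re-stored after each execution; the "count" key is illustrative:

```java
import org.quartz.*;

@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class CounterJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap map = context.getJobDetail().getJobDataMap();
        int count = map.containsKey("count") ? map.getInt("count") : 0;
        map.put("count", count + 1);  // persisted back because of the annotation
    }
}
```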
I had a similar problem: I have a secondly trigger which fires a stateful job that works on a queue in the job's data map. Every time the job fires, it polls from the queue and performs some work on the polled element. With each job execution, the queue has one less element (the queue is updated correctly from within the job). When the queue is empty, the job unschedules itself.
I wanted to be able to externally update the list of arguments of an ongoing job/trigger to provide more arguments to the queue. However, just retrieving the data map and updating the queue was not enough (the following execution shows the queue is not updated). The problem is that Quartz only updates the job data map of a job instance after execution.
Here's the solution I found:
JobDetail jobDetail = scheduler.getJobDetail("myJob", "myGroup");
jobDetail.getJobDataMap().put("jobQueue", updatedQueue);
scheduler.addJob(jobDetail, true);
The last line instructs Quartz to replace the stored job with the one you are providing. The next time the job is fired it will see the updated queue.
See http://www.quartz-scheduler.org/docs/tutorial/TutorialLesson03.html:
A Job instance can be defined as "stateful" or "non-stateful". Non-stateful jobs only have their JobDataMap stored at the time they are added to the scheduler. This means that any changes made to the contents of the job data map during execution of the job will be lost, and will not be seen by the job the next time it executes.
...a stateful job is just the opposite - its JobDataMap is re-stored after every execution of the job.
You 'mark' a Job as stateful by having it implement the StatefulJob interface, rather than the Job interface.