My aim is to create a queue system where I can specify a maximum number of concurrent jobs for each group, i.e. for group A a maximum of 3 jobs should run at the same time, for group B a maximum of Y jobs, etc. The jobs can be executed both on a cron schedule and only once with a SimpleTrigger, therefore I can't check the queue when scheduling the job; I have to check it before or during execution. I'm implementing a JobListener and trying to prevent execution in the jobToBeExecuted() method. I've tried scheduler.interrupt(), but it doesn't work when the job hasn't started yet. scheduler.deleteJob() and scheduler.unscheduleJob() didn't stop it from executing either.
Any ideas?
public class JobQueueListener implements JobListener {

    @Override
    public void jobToBeExecuted(JobExecutionContext context) {
        JobKey currentJobKey = context.getJobDetail().getKey();
        JobDetail jobDetail = context.getJobDetail();
        Scheduler scheduler = context.getScheduler();
        if (shouldBePutInQueue(currentJobKey)) {
            // Prevent execution and put in queue here, but how?
        }
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        // Check queue and execute next in queue
    }
}
Can you look at TriggerListener?
You should implement TriggerListener and put your abort logic in its vetoJobExecution method:
boolean vetoJobExecution(Trigger trigger, JobExecutionContext context)
It is called by the Scheduler when a Trigger has fired and its associated JobDetail is about to be executed. If the implementation vetoes the execution (by returning true), the job's execute method will not be called.
Related
I have the problem that I want to create a scheduled task at runtime. The scheduled task should be triggered at a fixed rate. But now I'm having the problem that the manually set up schedules are not triggered in an async way.
The main problem is that we do not have any fixed point where we can start the scheduler. It should get created when I read a specific value (1) and get destroyed when the value changes back (0). Otherwise we could use the annotation configuration described in test 1 below.
What I have tried so far:
1. Schedule with @Scheduled(fixedRate = 500L) and @Async
Code
@Async
@Scheduled(fixedRate = 500L)
public void annotationTest() {
    UUID id = UUID.randomUUID();
    log.warn("Hello from Thread {} going to sleep", id);
    try {
        Thread.sleep(1000L);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    log.warn("Finished Thread {}", id);
}
I also have the @EnableAsync and @EnableScheduling annotations at class level.
Result
09:56:24.855 [task-5] : Hello from Thread 3b5514b2-3b80-4641-bf12-2cd320c4b6e5 going to sleep
09:56:25.355 [task-6] : Hello from Thread e98514a7-e193-422b-9569-f7635deb33f8 going to sleep
09:56:25.356 [task-4] : Finished Thread d86f5f24-bffb-4ddd-93fe-2334ed48cf91
09:56:25.854 [task-7] : Hello from Thread cfc2ab03-4e7e-4a4a-aa08-41d696cb6df7 going to sleep
09:56:25.855 [task-5] : Finished Thread 3b5514b2-3b80-4641-bf12-2cd320c4b6e5
09:56:26.355 [task-6] : Finished Thread e98514a7-e193-422b-9569-f7635deb33f8
Comment
This works as expected, but we are not able to use it because we have to create the scheduler at runtime and destroy it after a specific time/input.
2. Setting up a ScheduledTaskRegistrar
Code
// @Configuration
@Bean
public ScheduledTaskRegistrar scheduledTaskRegistrar() {
    ScheduledTaskRegistrar scheduledTaskRegistrar = new ScheduledTaskRegistrar();
    scheduledTaskRegistrar.setScheduler(threadPoolTaskScheduler());
    return scheduledTaskRegistrar;
}

@Bean
public TaskScheduler threadPoolTaskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(20);
    return scheduler;
}

// @Component
public void printMessages() {
    scheduledTaskRegistrar.scheduleFixedRateTask(new FixedRateTask(new OwnRunnable(), 500L, 0L));
}
The OwnRunnable will also sleep 1 second and print the finish text afterwards.
Result
10:13:56.983 [TaskScheduler-1] : Finished Thread 73f70de9-35d9-47f0-801b-fb2857ab1c34
10:13:56.984 [TaskScheduler-3] : Hello from Thread 7ab16380-8dba-49e1-bf0d-de8235f81195 going to sleep
10:13:57.984 [TaskScheduler-3] : Finished Thread 7ab16380-8dba-49e1-bf0d-de8235f81195
10:13:57.984 [TaskScheduler-2] : Hello from Thread cc152d2e-f93b-4770-ac55-853a4dd6be97 going to sleep
10:13:58.985 [TaskScheduler-2] : Finished Thread cc152d2e-f93b-4770-ac55-853a4dd6be97
10:13:58.985 [TaskScheduler-4] : Hello from Thread 8d4510a4-773d-49f3-b51b-e58e425b0b68 going to sleep
Comment
As we can see, the tasks run synchronously, which does not fit our requirement.
3. Other tests
All other tests are similar to test 2 but use other configurations of the ScheduledTaskRegistrar. The results are the same as in test 2.
ConcurrentTaskScheduler instead of ThreadPoolTaskScheduler
ConcurrentTaskScheduler with SimpleAsyncTaskExecutor as ConcurrentExecutor
ConcurrentTaskScheduler with ThreadPoolTaskExecutor as ConcurrentExecutor
Question(s)
How can I use the configuration described in test 2 but get the result of test 1? Is there a way to use the @Async annotation with the solution described in test 2? Or does anyone have a better/another solution for my problem?
Yes, it is possible. Assume that your class implementing SchedulingConfigurer has a method doMyJob(). You can annotate that method with @Async and use the method reference in FixedRateTask. Also note the class-level annotations:
@Configuration
@EnableAsync
public class MyJobConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.scheduleFixedRateTask(new FixedRateTask(this::doMyJob, 500L, 0L));
    }

    @Async
    public void doMyJob() {
        try {
            Thread.sleep(1000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Hope it helps
EDIT
I provided the code without testing. Recently, when I tried to recreate this scenario, I noticed that if doMyJob is within the SchedulingConfigurer, it will not be truly async (if the delay is 5 seconds and the job takes 10 seconds, the next job runs only after 10 seconds). But moving the method to a service class helped.
I want to use the Camel Quartz component to schedule some job to be done at a specified time interval.
But I want that in a synchronized manner, meaning the next execution of the scheduled job should only start after the current execution completes.
I created a Route and Scheduler Service for ServiceMix.
QuartzRoute.java
public class QuartzRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("quartz://myGroup/myTimerName?cron=0/1+*+*+*+*+?").process(new SchedulerService());
    }
}
SchedulerService.java
public class SchedulerService implements Processor {

    public void process(Exchange exchange) throws Exception {
        System.out.println("I'm running every 5 sec...");
        Thread.sleep(5000);
        System.out.println("Exiting iteration ");
    }
}
Here, I want "I'm running every 5 sec..." and "Exiting iteration " to be printed in the same order every time.
In short, I want this SchedulerService to be executed again only after the current execution completes.
Use the stateful=true option of the Quartz component. See Scheduled with fixed delay in quartz scheduler?
"stateful jobs are not allowed to execute concurrently, which means new triggers that occur before the completion of the execute(xx) method will be delayed."
I have an application with a few different long-running Quartz jobs. Every job is triggered by a kind of event (for example, a user action) and is intended to run only once per such event. In the environment where the application works, the following scenario happens:
Application is running,
Long running job is triggered,
During the execution of the job application shutdown occurs,
Application is started again.
Is it possible to make Quartz automatically refire a job that was started but not finished previously (in the previous session of the application)? I mean using the JDBC job store, which works well for misfired jobs; but is it possible to refire a job that was not finished?
This is the best approach I've found:
Configure the Quartz scheduler with:
org.quartz.scheduler.interruptJobsOnShutdownWithWait=true
Make your recoverable jobs implement InterruptableJob, and manually re-trigger the current job as part of the interrupt logic (example below).
Write your own shutdown hook to call Scheduler.shutdown(true), or use the Quartz ShutdownHookPlugin.
This way, when an orderly shutdown is detected by the VM (hard shutdowns must be handled by request recovery: quartz jobDetail requestRecovery), jobs implementing InterruptableJob will be interrupted and re-triggered. The re-trigger will not fire until the next start.
Here is a quick example of how to implement it:
public static class TriggerOnInterruptJob implements InterruptableJob {

    // Test helpers: in the original example these came from the enclosing test class.
    private static final Object mutex = new Object();
    private static final AtomicInteger executionCount = new AtomicInteger();

    private volatile boolean interrupt = false;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        LOGGER.debug("START");
        synchronized (mutex) {
            mutex.notifyAll();
        }
        executionCount.incrementAndGet();
        try {
            while (!interrupt)
                Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        try {
            // Re-trigger this job so it runs again after the next start.
            context.getScheduler().triggerJob(context.getJobDetail().getKey());
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        interrupt = true;
    }
}
I am running a simple Quartz job from a main class; it runs every 30 secs.
public class Start {

    public static void main(String[] args) throws Exception {
        SchedulerFactory sf = new StdSchedulerFactory();
        Scheduler sched = sf.getScheduler();
        JobDetail job = newJob(MyJob.class).withIdentity("myJob", "XXX").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(
                        SimpleScheduleBuilder.simpleSchedule()
                                .withIntervalInSeconds(30)
                                .repeatForever())
                .build();
        sched.scheduleJob(job, trigger);
        sched.start();
    }
}
Here I am implementing InterruptableJob like this:
public class MyJob implements InterruptableJob {

    private volatile boolean isJobInterrupted = false;
    private JobKey jobKey = null;
    private volatile Thread thisThread;

    public MyJob() {
    }

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        System.err.println("calling interrupt:" + thisThread + "==>" + jobKey);
        isJobInterrupted = true;
        if (thisThread != null) {
            // this call causes the ClosedByInterruptException to happen
            thisThread.interrupt();
        }
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        thisThread = Thread.currentThread();
        jobKey = context.getJobDetail().getKey();
        System.err.println("calling execute:" + thisThread + "==>" + jobKey);
    }
}
Now I have tried to stop the job from another main class in every possible way, with no luck:
public class Stop {

    public static void main(String[] args) throws Exception {
        SchedulerFactory sf = new StdSchedulerFactory();
        Scheduler sched = sf.getScheduler();
        // get a "nice round" time a few seconds in the future...
        Date startTime = nextGivenSecondDate(null, 1);
        JobDetail job = newJob(MyJob.class).withIdentity("myJob", "XXX").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(
                        SimpleScheduleBuilder.simpleSchedule()
                                .withIntervalInSeconds(30)
                                .repeatForever())
                .build();
        sched.scheduleJob(job, trigger);
        sched.start();
        try {
            // if you want to see the job finish successfully, sleep for about 40 seconds
            Thread.sleep(60000);
            // tell the scheduler to interrupt our job
            sched.interrupt(job.getKey());
            Thread.sleep(3 * 1000L);
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.err.println("------- Shutting Down --------");
        TriggerKey tk = TriggerKey.triggerKey("myJob", "group1");
        System.err.println("tk" + tk + ":" + job.getKey());
        sched.unscheduleJob(tk);
        sched.interrupt(job.getKey());
        sched.interrupt("myJob");
        sched.deleteJob(job.getKey());
        sched.shutdown();
        System.err.println("------- Shutting Down ");
        sched.shutdown(false);
        System.err.println("------- Shutdown Complete ");
        System.err.println("------- Shutdown Complete ");
    }
}
Can anyone please tell me the correct way to stop the job? Thanks a lot.
This question seems to answer the exact problem you're describing:
You need to write your job as an implementation of InterruptableJob. To interrupt this job, you need a handle to the Scheduler, and call interrupt(jobKey<<job name & job group>>).
As per the InterruptableJob documentation:
The interface to be implemented by Jobs that provide a mechanism for having their execution interrupted. It is NOT a requirement for jobs to implement this interface - in fact, for most people, none of their jobs will.
Interrupting a Job is very analogous in concept and challenge to normal interruption of a Thread in Java.
The means of actually interrupting the Job must be implemented within the Job itself (the interrupt() method of this interface is simply a means for the scheduler to inform the Job that a request has been made for it to be interrupted). The mechanism that your jobs use to interrupt themselves might vary between implementations. However the principle idea in any implementation should be to have the body of the job's execute(..) periodically check some flag to see if an interruption has been requested, and if the flag is set, somehow abort the performance of the rest of the job's work.
Emphasis mine. It is analogous but not the same. You're not expected to use Threads (but indeed you could if that's what your Job does...).
An example of interrupting a job can be found in the java source for the class org.quartz.examples.DumbInterruptableJob. It is legal to use some combination of wait() and notify() synchronization within interrupt() and execute(..) in order to have the interrupt() method block until the execute(..) signals that it has noticed the set flag.
So I recommend reading the documentation and inspecting examples in the full download.
I'm using quartz-scheduler 1.8.5. I've created a Job implementing StatefulJob. I schedule the job using a SimpleTrigger and StdSchedulerFactory.
It seems that I have to update the Trigger's JobDataMap in addition to the JobDetail's JobDataMap in order to change the JobDataMap from inside the Job. I'm trying to understand why it's necessary to update both. I noticed that the JobDataMap is set to dirty. Maybe I have to explicitly save it or something?
I'm thinking I'll have to dig into the source code of Quartz to really understand what is going on here, but I figured I'd be lazy and ask first. Thanks for any insight into the inner workings of JobDataMap!
Here's my job:
public class HelloJob implements StatefulJob {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        int count = context.getMergedJobDataMap().getInt("count");
        int count2 = context.getJobDetail().getJobDataMap().getInt("count");
        //int count3 = context.getTrigger().getJobDataMap().getInt("count");
        System.err.println("HelloJob is executing. Count: '" + count + "', " + count2 + "'");

        // The count only gets updated if I update both the Trigger and
        // JobDetail data maps. If I only update the JobDetail, it doesn't persist.
        context.getTrigger().getJobDataMap().put("count", count++);
        context.getJobDetail().getJobDataMap().put("count", count++);

        // This has no effect inside the job, but it works outside the job
        try {
            context.getScheduler().addJob(context.getJobDetail(), true);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }

        // These don't seem to persist between jobs
        //context.put("count", count++);
        //context.getMergedJobDataMap().put("count", count++);
    }
}
Here's how I'm scheduling the job:
try {
    // define the job and tie it to our HelloJob class
    JobDetail job = new JobDetail(JOB_NAME, JOB_GROUP_NAME, HelloJob.class);
    job.getJobDataMap().put("count", 1);
    // Trigger the job to run now, and every so often
    Trigger trigger = new SimpleTrigger("myTrigger", "group1",
            SimpleTrigger.REPEAT_INDEFINITELY, howOften);
    // Tell quartz to schedule the job using our trigger
    sched.scheduleJob(job, trigger);
    return job;
} catch (SchedulerException e) {
    throw e;
}
Update:
It seems that I have to put the value into the JobDetail's JobDataMap twice to get it to persist; this works:
public class HelloJob implements StatefulJob {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        int count = (Integer) context.getMergedJobDataMap().get("count");
        System.err.println("HelloJob is executing. Count: '" + count + "', and is the job stateful? "
                + context.getJobDetail().isStateful());
        context.getJobDetail().getJobDataMap().put("count", count++);
        context.getJobDetail().getJobDataMap().put("count", count++);
    }
}
This seems like a bug, maybe? Or maybe there's a step I'm missing to tell the JobDetail to flush the contents of its JobDataMap to the JobStore?
I think your problem is your use of the postfix ++ operator. When you do:
context.getJobDetail().getJobDataMap().put("count", count++);
you're setting the value in the map to count and THEN incrementing count.
To me it looks like you wanted:
context.getJobDetail().getJobDataMap().put("count", ++count);
which would only need to be done once.
As you know, in Quartz the trigger and the job are separate, rather than combined as in some schedulers. This allows you to add values to the data map that are specific to the trigger level rather than the job level, etc.
I think it allows you to execute the same end job with a different set of data, but still have some common data at the job level.
As scpritch76 answered, the job and trigger are separate so that there can be many triggers (schedules) for a given job.
The job can have some base set of properties in the JobDataMap, and then the triggers can provide additional properties (or override base properties) for particular executions of the job in their JobDataMaps.
@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class DynamicTestJob implements Job
{
    private static final Logger log = LoggerFactory.getLogger(DynamicTestJob.class);

    @Override
    public void execute(final JobExecutionContext context)
    {
        final JobDataMap jobDataMap = context.getJobDetail().getJobDataMap();
        Integer counter = (Integer) jobDataMap.get("counter");
        if (counter == null)
        {
            counter = 1;
        }
        else
        {
            counter++;
        }
        jobDataMap.put("counter", counter);
        System.out.println(counter);
    }
}
}