Spring Batch: restrict job to a single running instance only - java

I have a Spring Batch job which can be kicked off by a REST URL. I want to make sure only one instance of the job is allowed to run, and if another instance is already running, not to start a new one, even if the parameters are different.
I searched and found no out-of-the-box solution. I am thinking of extending SimpleJobLauncher to check whether any instance of the job is running or not.
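As a rough illustration of that idea, here is a minimal sketch of a wrapper around JobLauncher (rather than a subclass of SimpleJobLauncher) that consults JobExplorer before delegating; the class name and wiring are hypothetical:
import java.util.Set;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobLauncher;

// Hypothetical wrapper: refuses to launch if any execution of the job is running.
public class SingleInstanceJobLauncher {

    private final JobLauncher delegate;
    private final JobExplorer jobExplorer;

    public SingleInstanceJobLauncher(JobLauncher delegate, JobExplorer jobExplorer) {
        this.delegate = delegate;
        this.jobExplorer = jobExplorer;
    }

    public JobExecution launch(Job job, JobParameters params) throws Exception {
        // findRunningJobExecutions covers all running executions of the job
        // name, regardless of their parameters.
        Set<JobExecution> running = jobExplorer.findRunningJobExecutions(job.getName());
        if (!running.isEmpty()) {
            throw new IllegalStateException("An instance of " + job.getName() + " is already running");
        }
        return delegate.run(job, params);
    }
}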

You could try to intercept the job execution by implementing the JobExecutionListener interface:
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;

public class MyJobExecutionListener implements JobExecutionListener {

    // Active JobExecution, used as a lock.
    private JobExecution _active;

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // Take the lock; synchronize on the listener itself, since every
        // launch receives its own JobExecution instance.
        synchronized (this) {
            if (_active != null && _active.isRunning()) {
                jobExecution.stop();
            } else {
                _active = jobExecution;
            }
        }
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // Release the lock.
        synchronized (this) {
            if (jobExecution == _active) {
                _active = null;
            }
        }
    }
}
And then inject it into the job definition:
<job id="myJobConfig">
    <listeners>
        <listener ref="myListener"/>
    </listeners>
</job>
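If the job is defined with Java config instead of XML, the equivalent wiring would look something like the following sketch, assuming Spring Batch 5's JobBuilder API (with older versions, JobBuilderFactory would be used instead); bean and step names are illustrative:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyJobConfig {

    @Bean
    public Job myJob(JobRepository jobRepository, Step myStep,
                     MyJobExecutionListener myListener) {
        // Registering the listener here makes beforeJob/afterJob fire
        // around every execution of this job.
        return new JobBuilder("myJob", jobRepository)
                .listener(myListener)
                .start(myStep)
                .build();
    }
}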

I solved this by creating a JobExecutionListener: with the help of JobExplorer it checks whether any other instance is running, and if one is, it stops the current job. I created a listener so that it can be plugged into any job that requires this kind of behaviour.
@Override
public void beforeJob(JobExecution jobExecution) {
    // jobExplorer is a JobExplorerFactoryBean field on the listener, hence getObject();
    // lockOverrideTime is a configurable field after which a stale lock is ignored.
    Set<JobExecution> jobExecutions = ((SimpleJobExplorer) jobExplorer.getObject())
            .findRunningJobExecutions(jobExecution.getJobInstance().getJobName());
    if (jobExecutions.size() > 1) {
        long currentTime = new Date().getTime();
        for (JobExecution execution : jobExecutions) {
            // Ignore this execution itself, and treat locks older than
            // lockOverrideTime as expired.
            if (execution.getJobInstance().getId().compareTo(jobExecution.getJobInstance().getId()) != 0
                    && (currentTime - execution.getStartTime().getTime()) < lockOverrideTime) {
                jobExecution.stop();
                throw new IllegalStateException("Another instance of the job is running, job name: "
                        + jobExecution.getJobInstance().getJobName());
            }
        }
    }
}

Alternatively, in response to the REST URL, check with JobExplorer whether your job is running, applying your job's specific business rules.
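A minimal sketch of that approach, assuming a Spring MVC controller; the endpoint path, parameter choice, and bean names are illustrative:
import java.util.Set;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JobController {

    private final JobLauncher jobLauncher;
    private final JobExplorer jobExplorer;
    private final Job myJob; // the job to guard

    public JobController(JobLauncher jobLauncher, JobExplorer jobExplorer, Job myJob) {
        this.jobLauncher = jobLauncher;
        this.jobExplorer = jobExplorer;
        this.myJob = myJob;
    }

    @PostMapping("/jobs/myJob")
    public ResponseEntity<String> start() throws Exception {
        // Reject the request up front if an execution is already running.
        Set<JobExecution> running = jobExplorer.findRunningJobExecutions(myJob.getName());
        if (!running.isEmpty()) {
            return ResponseEntity.status(HttpStatus.CONFLICT).body("Job already running");
        }
        JobParameters params = new JobParametersBuilder()
                .addLong("requestTime", System.currentTimeMillis())
                .toJobParameters();
        jobLauncher.run(myJob, params);
        return ResponseEntity.accepted().body("Job started");
    }
}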

I think a simple method like the following might do the trick:
@Autowired
private JobExplorer jobExplorer;

private boolean isJobRunning(Job job) {
    Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(job.getName());
    return !jobExecutions.isEmpty();
}
Then, prior to executing your job, make the check:
private void executeJob(Job job, @Nonnull JobParameters params) {
    if (isJobRunning(job)) {
        return;
    }
    try {
        jobLauncher.run(job, params);
    } catch (JobExecutionAlreadyRunningException | JobRestartException
            | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
        log.error("could not run job " + job.getName(), e);
    }
}

Related

how to avoid starting duplicate spring batch jobs?

I have a Spring Batch process which reads data from the database and writes it to a file. Basically, the scenario is that the user can send a request and the job will start and execute the process. But the issue is that if the user sends the request 5 times, there will be 5 different Spring jobs started and running, and those are duplicates. So is there a way to avoid or block creating duplicate Spring jobs?
You can create a JobExecutionListener that stops the current job execution if another one is already running, and configure your job with that listener:
import java.util.Collection;
import java.util.Set;

import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.configuration.JobRegistry;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.beans.factory.annotation.Autowired;

public class SingleExecutionJobListener implements JobExecutionListener {

    private static final String MATCH_ALL_PATTERN = ".*";

    @Autowired
    private JobExplorer jobExplorer;

    @Autowired
    private JobRegistry jobRegistry;

    private String jobNamePattern = MATCH_ALL_PATTERN;

    @Override
    public void beforeJob(JobExecution jobExecution) {
        Collection<String> jobNames = jobRegistry.getJobNames();
        for (String jobName : jobNames) {
            if (jobName.matches(StringUtils.defaultIfBlank(jobNamePattern, MATCH_ALL_PATTERN))) {
                Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(jobName);
                if (CollectionUtils.isNotEmpty(jobExecutions)) {
                    for (JobExecution execution : jobExecutions) {
                        // Skip the execution this listener is running in.
                        if (execution.getJobInstance().getId().compareTo(jobExecution.getJobInstance().getId()) != 0) {
                            jobExecution.stop();
                            throw new IllegalStateException(jobName + " instance " + execution.getJobInstance().getId()
                                    + " is currently running. Please restart this job when " + jobName + " has finished.");
                        }
                    }
                }
            }
        }
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
    }

    public String getJobNamePattern() {
        return jobNamePattern;
    }

    public void setJobNamePattern(String jobNamePattern) {
        this.jobNamePattern = jobNamePattern;
    }
}

getJobDataMap in Quartz gives NullPointerException

I am trying to put some data into the Quartz job data map and access it in the class that implements the Job interface. But it gives me a NullPointerException. When the application is run without the code that accesses the job data map, it runs fine.
I use a cron trigger to execute a scheduled job. In this example case, I configured it to run every 20 seconds.
@Bean
public Trigger simpleJobTrigger(@Qualifier("simpleJobDetail") JobDetail jobDetail) {
    CronTriggerFactoryBean factoryBean = new CronTriggerFactoryBean();
    factoryBean.setJobDetail(jobDetail);
    factoryBean.setStartDelay(0L);
    factoryBean.setName("test-trigger");
    factoryBean.setStartTime(LocalDateTime.now().toDate());
    factoryBean.setCronExpression("0/20 * * * * ?");
    factoryBean.setMisfireInstruction(SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW);
    try {
        factoryBean.afterPropertiesSet();
    } catch (ParseException e) {
        e.printStackTrace();
    }
    return factoryBean.getObject();
}
Following is my simpleJobDetail bean.
@Bean
public JobDetailFactoryBean simpleJobDetail() {
    JobDetailFactoryBean factoryBean = new JobDetailFactoryBean();
    factoryBean.setJobClass(Executor.class);
    factoryBean.setDurability(true);
    factoryBean.setName("test-job");
    factoryBean.getJobDataMap().put("caller", "James");
    return factoryBean;
}
This is my execute method.
public class Executor implements Job {

    @Autowired
    ScheduledTaskService scheduledTaskService;

    @Override
    public void execute(JobExecutionContext jobExecutionContext) {
        JobDataMap jobDataMap = null;
        try {
            jobDataMap = jobExecutionContext.getTrigger().getJobDataMap();
            String caller = jobDataMap.get("caller").toString();
            System.out.println("This is called by the user " + caller);
        } catch (Exception e) {
            //e.printStackTrace();
            System.out.println("UNABLE TO ACCESS THE JOB DATA MAP " + e);
        }
        scheduledTaskService.doThePayment();
    }
}
When I run the application, it prints the log given in the catch clause:
UNABLE TO ACCESS THE JOB DATA MAP java.lang.NullPointerException
Why does the execute method fail to access the JobDataMap? Is there any configuration or property I should set? What is the reason for the job data map not being available at this point?
How can I get this resolved?
Found the issue in my code.
I was accessing the JobDataMap in the following way:
jobDataMap = jobExecutionContext.getTrigger().getJobDataMap();
Instead, in my case, I should have accessed it from the JobDetail, not from the trigger:
jobDataMap = jobExecutionContext.getJobDetail().getJobDataMap();
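If the same key may live on either the trigger or the job detail, Quartz also offers a merged view through getMergedJobDataMap(); a minimal sketch of that variant:
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class Executor implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // getMergedJobDataMap() combines the JobDetail's map with the
        // trigger's map; trigger entries override JobDetail entries
        // with the same key.
        JobDataMap merged = context.getMergedJobDataMap();
        System.out.println("This is called by the user " + merged.getString("caller"));
    }
}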

Quartz triggering same instance twice

We are using the @DisallowConcurrentExecution annotation on the Java job class to prevent concurrent execution of multiple instances; however, it looks like Quartz has triggered the same instance twice, concurrently. Please provide more information, and fix this issue if it is a bug.
@Override
@Transactional(propagation = Propagation.REQUIRED, readOnly = false)
public void execute(final JobExecutionContext jobExecutionContext) throws JobExecutionException {
    logger.log(Log.DEBUG, "++++ Quartz JOB BatchJobDetector started");
    try {
        this.setJobExecutionContext(jobExecutionContext);
        boolean triggerNextJob = true;
        while (triggerNextJob) {
            TriggeredBatchProcessDTO triggeredBatchProcessDTO = getNextJob(jobExecutionContext, 0);
            if (triggeredBatchProcessDTO != null) {
                triggerJobImmediatly(triggeredBatchProcessDTO.getId(), jobExecutionContext);
                triggeredBatchProcessDTO.setState(StatusType.RUNNING);
                triggeredBatchProcessDTO.setProcessDtTm(triggeredBatchProcessDTO.getProcessDtTm()); // CRGRO022
                updateTriggeredBatchProcessDTO(triggeredBatchProcessDTO);
            } else {
                triggerNextJob = false;
            }
        }
    } catch (final UnexpectedRuntimeException e) {
        logger.log(Log.ERROR, "Error during execution of TriggeredBatchProcessDetectorJob: " + e.getMessage(), e);
        throw e;
    } catch (final Throwable t) {
        throw new UnexpectedRuntimeException(CoreExceptionId.RUN_0001_UNEXPECTED_EXCEPTION,
                new Object[] { "TriggeredBatchProcessDetectorJob error" }, t);
    }
    logger.log(Log.DEBUG, "++++ Quartz JOB BatchDetector finished");
}
You need to set up Quartz correctly through its properties to run it in cluster mode, and (I'm not sure, but in my opinion) you should also use the @PersistJobDataAfterExecution annotation. I was using clustered Quartz without any problems, even with the deprecated StatefulJob implementations. You need to show us your config - here is a sample - and give your Quartz library versions.
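For reference, a minimal sketch of starting a clustered scheduler from Java, assuming a JDBC job store is already set up; the scheduler and data source names are illustrative:
import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {

    public static Scheduler newClusteredScheduler() throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "myClusteredScheduler");
        // AUTO lets each node generate a unique instance id, required for clustering.
        props.put("org.quartz.scheduler.instanceId", "AUTO");
        // Clustering requires the JDBC job store; RAMJobStore cannot cluster.
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.dataSource", "myDS"); // illustrative data source name
        props.put("org.quartz.jobStore.isClustered", "true");
        props.put("org.quartz.jobStore.clusterCheckinInterval", "20000");
        return new StdSchedulerFactory(props).getScheduler();
    }
}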

Jersey 2.0: Create repeating job

In our REST service we want to implement a job that checks something every 10 seconds. So we thought we could use Quartz to make a job that covers this. But the problem is that we need to inject a singleton, because it is used in the job, and the job does not seem to be in the context of our service, so the injected class is always null (NullPointerException).
So is there another possible solution to achieve such a job without using Quartz? We already tried to write our own JobFactory that connects the job with the BeanManager, but it didn't work at all (a sketch of that idea follows the question code below).
This is the code for the job that is not working:
@Stateless
public class GCEStatusJob implements Job, Serializable {

    private Logger log = LoggerFactory.getLogger(GCEStatusJob.class);

    @Inject
    SharedMemory sharedMemory;

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        GoogleComputeEngineFactory googleComputeEngineFactory = new GoogleComputeEngineFactory();
        List<HeartbeatModel> heartbeatList = new ArrayList<>(sharedMemory.getAllHeartbeats());
        List<GCE> gceList = googleComputeEngineFactory.listGCEs();
        List<String> ipAddressList = gceList.stream().map(GCE::getIp).collect(Collectors.toList());
        for (HeartbeatModel heartbeat : heartbeatList) {
            if (ipAddressList.contains(heartbeat.getIpAddress())) {
                long systemTime = System.currentTimeMillis();
                if (systemTime - heartbeat.getSystemTime() > 10000) {
                    // German: "Compute Engine with IP ... is no longer responding. Restarting!"
                    log.info("Compute Engine mit IP " + heartbeat.getIpAddress() + " antwortet nicht mehr. Wird neu gestartet!");
                    String name = gceList.stream().filter((i) -> i.getIp().equals(heartbeat.getIpAddress())).findFirst().get().getName();
                    googleComputeEngineFactory.resetGCE(name);
                }
            }
        }
    }
}
SharedMemory is always null.
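Regarding the JobFactory idea mentioned in the question: with CDI 1.1 such a factory can be sketched via the CDI.current() entry point, so that the container creates job instances and @Inject works. This is an untested outline under those assumptions, not the poster's code:
import javax.enterprise.inject.spi.CDI;

import org.quartz.Job;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.spi.JobFactory;
import org.quartz.spi.TriggerFiredBundle;

// Sketch: let the CDI container create job instances so @Inject works.
public class CdiJobFactory implements JobFactory {

    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        Class<? extends Job> jobClass = bundle.getJobDetail().getJobClass();
        // Resolve the job as a CDI bean, so its dependencies are injected.
        return CDI.current().select(jobClass).get();
    }
}
It would then be registered with scheduler.setJobFactory(new CdiJobFactory()) before scheduling any jobs.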
I have used the scheduler context map to achieve this. You can try the following.
In the REST API, when we create a scheduler, we can use the context map to pass parameters to the job:
#Path("job")
public class RESTApi {
private String _userID;
public String get_userID() {
return _userID;
}
public void set_userID(String _userID) {
this._userID = _userID;
}
#GET
#Path("/start/{userId}")
public void startJob(#PathParam("userId") String userID) {
_userID = userID;
try {
SimpleTrigger trigger = new SimpleTrigger();
trigger.setName("updateTrigger");
trigger.setStartTime(new Date(System.currentTimeMillis() + 1000));
trigger.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY);
trigger.setRepeatInterval(1000);
JobDetail job = new JobDetail();
job.setName("updateJob");
job.setJobClass(GCEStatusJob.class);
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
scheduler.getContext().put("apiClass", this);
scheduler.start();
scheduler.scheduleJob(job, trigger);
} catch (Exception e) {
e.printStackTrace();
}
}
}
Job implementation:
public class GCEStatusJob implements Job {

    @Override
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        RESTApi apiClass;
        try {
            apiClass = ((RESTApi) arg0.getScheduler().getContext().get("apiClass"));
            System.out.println("User name is " + apiClass.get_userID());
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }
}
Correct me if my understanding is wrong.

How to check whether Quartz cron job is running?

How can I check whether a scheduled Quartz cron job is running or not? Is there any API to do the checking?
scheduler.getCurrentlyExecutingJobs() should work in most cases. But remember not to use it inside the Job class itself: Quartz uses ExecutingJobsManager (a JobListener) to put the running job into a HashMap before the job class runs, so using this method inside the job to check whether the job is running will always return true. One simple approach is to check that the fire times are different:
public static boolean isJobRunning(JobExecutionContext ctx, String jobName, String groupName)
        throws SchedulerException {
    List<JobExecutionContext> currentJobs = ctx.getScheduler().getCurrentlyExecutingJobs();
    for (JobExecutionContext jobCtx : currentJobs) {
        String thisJobName = jobCtx.getJobDetail().getKey().getName();
        String thisGroupName = jobCtx.getJobDetail().getKey().getGroup();
        if (jobName.equalsIgnoreCase(thisJobName) && groupName.equalsIgnoreCase(thisGroupName)
                && !jobCtx.getFireTime().equals(ctx.getFireTime())) {
            return true;
        }
    }
    return false;
}
Also note that this method is not cluster aware: it will only return jobs currently executing in this scheduler instance, not across the entire cluster. If you run Quartz in a cluster, it will not work properly.
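One pragmatic workaround sometimes used with the JDBC job store is to query the fired-triggers table directly. This is a sketch only, assuming Quartz 2.x with the default QRTZ_ table prefix and a javax.sql.DataSource; adjust names to your schema:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class ClusterWideJobCheck {

    // Sketch only: QRTZ_FIRED_TRIGGERS holds one row per fired trigger
    // across all nodes sharing the job store.
    public static boolean isJobFiredAnywhere(DataSource dataSource, String jobName, String jobGroup)
            throws SQLException {
        String sql = "SELECT COUNT(*) FROM QRTZ_FIRED_TRIGGERS WHERE JOB_NAME = ? AND JOB_GROUP = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, jobName);
            ps.setString(2, jobGroup);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1) > 0;
            }
        }
    }
}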
If you look at the QRTZ_TRIGGERS table, there is a TRIGGER_STATE column. This tells you the state of the trigger (TriggerState) for a particular job. In all likelihood your app doesn't have a direct interface to this table, but the Quartz scheduler does, and you can check the state this way:
private Boolean isJobPaused(String jobName) throws SchedulerException {
    JobKey jobKey = new JobKey(jobName);
    JobDetail jobDetail = scheduler.getJobDetail(jobKey);
    List<? extends Trigger> triggers = scheduler.getTriggersOfJob(jobDetail.getKey());
    for (Trigger trigger : triggers) {
        TriggerState triggerState = scheduler.getTriggerState(trigger.getKey());
        if (TriggerState.PAUSED.equals(triggerState)) {
            return true;
        }
    }
    return false;
}
Have you looked at this answer? Try with:
scheduler.getCurrentlyExecutingJobs()
