I have this problem: my app has a Quartz scheduler that runs a task every X minutes. The app is deployed on two server instances, so each instance executes the task at the same time. I want only one instance to execute the task at any given time.
We have configured Quartz with Spring and our application server is WAS.
Which options do you suggest?
You could set up a Quartz cluster with a JDBC job store; then every job firing will be executed by only one cluster node. You can find more information on that topic in the Quartz documentation.
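For example, a minimal Spring sketch of a clustered `SchedulerFactoryBean` could look like the following (the class and bean names are assumptions; the property names are the standard Quartz JDBC-JobStore settings, and the Quartz tables are assumed to exist in a database shared by both instances):

```java
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzClusterConfig {

    // Assumes a DataSource bean pointing at a database shared by both
    // instances, with the Quartz tables (QRTZ_*) already created.
    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        // Giving Spring a DataSource makes it use a JDBC-backed job store.
        factory.setDataSource(dataSource);

        Properties props = new Properties();
        // Nodes coordinate through the DB, so each trigger firing is
        // executed by exactly one node.
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");
        // Every node in the cluster needs a unique instance id.
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        factory.setQuartzProperties(props);
        return factory;
    }
}
```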
Related
I have configured one Quartz job in my Spring Boot application. For this job, I have set requestRecovery to 1 (i.e. true). So, in case of an application restart, Quartz will re-execute the job if it was in execution at the time. But I want this job to resume its execution from where it was when the application restarted.
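Quartz recovery (requestRecovery) only re-fires a recovered job from the beginning; resuming mid-work is something you have to build yourself. A sketch of the usual pattern follows, where progress is assumed to be expressible as a numeric checkpoint and `CheckpointStore` is a hypothetical helper you would implement over your own DB table (and wire in via a Spring-aware JobFactory):

```java
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Hypothetical store persisting progress in your own DB table;
// it must commit per item/batch so it survives a JVM crash.
interface CheckpointStore {
    int loadCheckpoint(String jobName);            // 0 if none saved yet
    void saveCheckpoint(String jobName, int next);
    void clearCheckpoint(String jobName);
}

@DisallowConcurrentExecution
public class ResumableJob implements Job {

    private final CheckpointStore store;

    public ResumableJob(CheckpointStore store) {
        this.store = store;
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        String name = context.getJobDetail().getKey().getName();
        // On a recovery re-fire (context.isRecovering() == true) this picks
        // up wherever the crashed run last committed a checkpoint.
        int start = store.loadCheckpoint(name);
        for (int i = start; i < 1000; i++) {
            processItem(i);                  // your actual work (assumed)
            store.saveCheckpoint(name, i + 1);
        }
        store.clearCheckpoint(name);         // finished cleanly
    }

    private void processItem(int i) { /* ... */ }
}
```

Note that the JobDataMap is not a substitute for the external store here: even with @PersistJobDataAfterExecution it is only written back after execute() returns, so a hard crash loses any in-run updates.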
We have a Spring Batch application scheduled to run every 30 minutes, which creates workers on the cloud as separate pods.
In the Configuration class, one of the beans connects to a database and reads some properties. If this DB connection fails for some reason, then the worker does not start and the Master job does not get triggered again after 30 minutes.
This is happening because if the worker fails on startup itself, it does not update the final status in the DB or communicate it to the master as Failed. Hence, the Master assumes it is still running and does not trigger the Batch again.
Does anyone have any suggestions on how to handle this and how to ensure the Master triggers the workers again at the scheduled interval?
The problem is about high availability.
First, you could add Redis in front of the DB: if the config cannot be read from Redis, fall back to connecting to the DB.
Second, add a retry library such as Resilience4j to your bean so that reading the config is retried multiple times (see the sketch after this list).
Third, for alerting, you could use your cloud's monitoring service to tell you which pod failed to start. Then you are able to restart that pod manually or automatically.
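As a rough illustration of the second point, a Resilience4j retry around the config read could look like this (names such as `loadConfigFromDb` and the retry settings are assumptions, not your actual code):

```java
import java.time.Duration;
import java.util.Map;
import java.util.function.Supplier;

import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import io.github.resilience4j.retry.RetryRegistry;

public class ConfigLoader {

    // Retry up to 5 times, waiting 2 seconds between attempts.
    private final Retry retry = RetryRegistry.of(
            RetryConfig.custom()
                    .maxAttempts(5)
                    .waitDuration(Duration.ofSeconds(2))
                    .build())
            .retry("configRead");

    public Map<String, String> loadConfig() {
        // Wrap the DB read so a transient failure doesn't kill the pod on startup.
        Supplier<Map<String, String>> decorated =
                Retry.decorateSupplier(retry, this::loadConfigFromDb);
        return decorated.get();
    }

    private Map<String, String> loadConfigFromDb() {
        // hypothetical: read the properties from the database
        throw new UnsupportedOperationException("replace with the real DB read");
    }
}
```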
I developed an application with Spring Boot 2.2.4.RELEASE and Quartz (v2.2.3) in a cluster.
I have a Master Job that finds records in a table and schedules them via the `org.springframework.scheduling.quartz.SchedulerFactoryBean` scheduler.
Every scheduled job has logic that interacts with the DB via HikariCP (a connection pool).
The rule must be that, in case of application shutdown, the application waits until every running job has ended. I can set this rule on `org.springframework.scheduling.quartz.SchedulerFactoryBean` via the property `setWaitForJobsToCompleteOnShutdown(true);`.
The solution works fine, but I saw that the connection pool (HikariCP) is closed without waiting for the running jobs to end, which breaks the jobs' DB interaction logic. I'd like to avoid this.
During Spring Boot shutdown, is it possible to prioritize the order in which objects in the context are closed, so that every single job can finish normally?
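Spring destroys singleton beans in reverse dependency order, so one approach (a sketch, assuming the Hikari pool's bean name is `dataSource`) is to make the scheduler bean explicitly depend on the pool; the scheduler is then destroyed first and can wait for its jobs while the pool is still open:

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class SchedulerShutdownConfig {

    // "dataSource" is the assumed bean name of the HikariCP pool.
    // @DependsOn makes Spring destroy this bean BEFORE the DataSource,
    // so running jobs can still reach the DB during shutdown.
    @Bean
    @DependsOn("dataSource")
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);
        // Block shutdown until every running job has finished.
        factory.setWaitForJobsToCompleteOnShutdown(true);
        return factory;
    }
}
```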
My context is as follows:
My environment consists of two machines, both running Spring Boot with Quartz configured in cluster mode.
I have a Master job, annotated with @DisallowConcurrentExecution, that runs every 30 seconds, takes records with fixed IDs from the DB, and schedules slave jobs (also annotated with @DisallowConcurrentExecution) that perform a defined logic. In anomalous cases (for example, a sudden machine shutdown), some jobs seem unable to complete their flow and remain in the ERROR state.
How can I resume or unblock, via the Quartz API in Java, those triggers that are in the ERROR state? Currently, those job IDs can no longer be scheduled by Quartz.
Application log says:
org.quartz.ObjectAlreadyExistsException: Unable to store Job : 'sre.153', because one already exists with this identification.
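If your Quartz version is 2.3.2 or newer, the Scheduler API can reset such triggers directly; a minimal sketch that scans all trigger groups could look like this (the class name is an assumption):

```java
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerKey;
import org.quartz.impl.matchers.GroupMatcher;

public class ErrorTriggerResetter {

    // Scan every trigger and reset the ones stuck in the ERROR state.
    public static void resetErrorTriggers(Scheduler scheduler) throws SchedulerException {
        for (TriggerKey key : scheduler.getTriggerKeys(GroupMatcher.anyTriggerGroup())) {
            if (scheduler.getTriggerState(key) == Trigger.TriggerState.ERROR) {
                // Available since Quartz 2.3.2; on older versions you would
                // have to delete and re-register the trigger instead.
                scheduler.resetTriggerFromErrorState(key);
            }
        }
    }
}
```

The ObjectAlreadyExistsException itself can be avoided when re-scheduling by checking `scheduler.checkExists(jobKey)` first, or by storing the job with the replace flag, `scheduler.addJob(jobDetail, true)`.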
Once I enable clustering in Quartz, it will distribute the cron jobs to the various servers in the cluster. That's fine normally, but there's actually one job that I'd like to have executed on every server in the cluster every time it's scheduled to run.
Is there a way to mark a quartz cron job to indicate that it should run on all servers in a cluster?
I solved this by having two quartz schedulers running. One was configured as a clustered scheduler. The other was configured as a local scheduler. Obviously, jobs that are supposed to run on all machines would get added to the local scheduler.
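A sketch of that dual-scheduler setup in Spring (the bean names, and the in-memory vs. JDBC split, are assumptions) might be:

```java
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class DualSchedulerConfig {

    // Clustered scheduler: backed by the shared DB, so each trigger
    // firing runs on exactly one node.
    @Bean
    public SchedulerFactoryBean clusteredScheduler(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setSchedulerName("clusteredScheduler");
        factory.setDataSource(dataSource);
        Properties props = new Properties();
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        factory.setQuartzProperties(props);
        return factory;
    }

    // Local scheduler: default RAM job store, invisible to the other nodes,
    // so jobs added here fire independently on every machine.
    @Bean
    public SchedulerFactoryBean localScheduler() {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setSchedulerName("localScheduler");
        return factory;
    }
}
```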