Spring Boot Shutdown with Scheduler - java

I'm using Spring Boot + the Spring Task Scheduler to create a polling application. I plan to use the Actuator to shut down the application during maintenance windows (see the shutdown endpoint for the Actuator: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints).
My question is: will Spring be smart enough to finish an in-flight task before gracefully shutting down the application? And will Spring be smart enough not to start another task if the in-flight task completes before shutdown?
Thank you.

Since Spring Boot 2.1 you can set the following property (note that it applies to the Quartz scheduler and defaults to false):
# Whether to wait for running jobs to complete on shutdown.
spring.quartz.wait-for-jobs-to-complete-on-shutdown=true

You can control this by setting
ThreadPoolTaskExecutor#waitForTasksToCompleteOnShutdown
to true.
There is a really good explanation in the javadoc of the inherited setter
org.springframework.scheduling.concurrent.ExecutorConfigurationSupport#setWaitForTasksToCompleteOnShutdown
Set whether to wait for scheduled tasks to complete on shutdown,
not interrupting running tasks and executing all tasks in the queue.
Default is "false", shutting down immediately through interrupting
ongoing tasks and clearing the queue. Switch this flag to "true" if you
prefer fully completed tasks at the expense of a longer shutdown phase...
By default the Spring scheduler uses a
java.util.concurrent.ThreadPoolExecutor
which will not start new tasks during shutdown, as described in the Javadoc of
java.util.concurrent.ThreadPoolExecutor#shutdown
Initiates an orderly shutdown in which previously submitted
tasks are executed, but no new tasks will be accepted...
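That JDK behavior is easy to verify without Spring at all; here is a minimal sketch using plain java.util.concurrent that shows an in-flight task completing after shutdown() while a newly submitted task is rejected:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {

    // Returns true if the executor rejects new work after shutdown()
    // while still letting the in-flight task run to completion.
    public static boolean orderlyShutdown() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch finished = new CountDownLatch(1);

        pool.execute(() -> {                  // the "in-flight" task
            started.countDown();
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            finished.countDown();
        });

        started.await();
        pool.shutdown();                      // orderly: no interrupt, no new tasks

        boolean rejected = false;
        try {
            pool.execute(() -> {});           // submitted after shutdown()
        } catch (RejectedExecutionException e) {
            rejected = true;                  // new work is refused
        }

        boolean completed = finished.await(5, TimeUnit.SECONDS);
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected && completed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("orderly: " + orderlyShutdown()); // orderly: true
    }
}
```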

Related

How to prevent the Spring context from closing until clustered Quartz jobs have finished? [Spring + Quartz]

I developed an application with Spring Boot 2.2.4.RELEASE and Quartz (v 2.2.3) in a cluster.
I have a master job that finds records in a table and schedules them via the scheduler
org.springframework.scheduling.quartz.SchedulerFactoryBean
Every scheduled job contains logic that interacts with the DB via HikariCP (a connection pool).
The rule must be that on application shut-down, the application has to wait until every running job has finished. I am able to set this rule on
org.springframework.scheduling.quartz.SchedulerFactoryBean
via the property
setWaitForJobsToCompleteOnShutdown(true);
The solution works fine, but I saw that the connection pool (HikariCP) is closed without waiting for the running jobs to end. This leads to the loss of the job's interaction logic with the DB.
I'd like to avoid this.
During shut-down of Spring Boot, is it possible to prioritize the order in which objects in the context are closed, so that every single job process can finish regularly?
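One way to address this, as a sketch under assumptions (the bean wiring below is illustrative, not the asker's actual configuration): Spring destroys singletons in reverse dependency order, so if the SchedulerFactoryBean declares a dependency on the DataSource, the scheduler (which waits for its jobs) is destroyed before HikariCP is closed:

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfig {

    // Constructor-injecting the DataSource makes the scheduler depend on it,
    // so Spring destroys the scheduler first and the pool stays open while
    // running jobs drain.
    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);
        // Block context shutdown until every running job has finished.
        factory.setWaitForJobsToCompleteOnShutdown(true);
        return factory;
    }
}
```

The key point is the dependency edge, not the specific setter: any bean that the scheduler depends on (directly or via @DependsOn) will outlive it during context shutdown.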

How to unblock jobs whose trigger state is ERROR in a Quartz cluster in Java

My context is as follows:
My environment is composed of two machines, with Spring Boot and Quartz configured in a cluster on both.
I have a master job with the @DisallowConcurrentExecution annotation that runs every 30 seconds, takes records with a fixed id from the DB, and schedules slave jobs (also annotated with @DisallowConcurrentExecution) that execute a defined logic. In anomalous cases (for example, a sudden machine shutdown), it seems that some of the jobs are not able to terminate their flow and remain in the ERROR state.
How can I resume or unblock the jobs whose trigger state is ERROR using the Quartz API in Java?
Currently, those job ids can no longer be scheduled by Quartz.
Application log says:
org.quartz.ObjectAlreadyExistsException: Unable to store Job : 'sre.153', because one already exists with this identification.
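One possible approach, assuming Quartz 2.2 or later (which added Scheduler#resetTriggerFromErrorState): scan the trigger states and reset any that are stuck in ERROR. A sketch, not verified against the asker's cluster setup:

```java
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger.TriggerState;
import org.quartz.TriggerKey;
import org.quartz.impl.matchers.GroupMatcher;

public class ErrorTriggerRecovery {

    // Scan all triggers and reset the ones stuck in the ERROR state
    // so Quartz will fire them again.
    public static void resetErrorTriggers(Scheduler scheduler) throws SchedulerException {
        for (TriggerKey key : scheduler.getTriggerKeys(GroupMatcher.anyTriggerGroup())) {
            if (scheduler.getTriggerState(key) == TriggerState.ERROR) {
                // Puts the trigger back into the WAITING state.
                scheduler.resetTriggerFromErrorState(key);
            }
        }
    }
}
```

This could run from a small recovery endpoint or a periodic housekeeping job; in a cluster it only needs to run on one node, since the trigger state lives in the shared job store.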

ThreadPoolTaskExecutor is not closing even after closing context and job is finished

I have a Spring Batch job with two partitioned steps. I configured a separate ThreadPoolTaskExecutor for each slave step since I need different pool settings.
Both Master Steps - Configured SimpleAsyncTaskExecutor as TaskExecutor
Both Slave Steps - Configured Separate ThreadPoolTaskExecutor as TaskExecutor
I am doing this to achieve parallel slave steps in addition to parallel chunks for a slave step.
Once my job is finished, I see that the thread pools configured in the last slave step keep hanging, so the job doesn't terminate.
I am using Spring Boot.
I am closing the context (ConfigurableApplicationContext) as suggested here.
Doing System.exit(0) as suggested here solves my problem, but I am wondering if there is a clean way to shut down the thread pools explicitly, or whether I have made a configuration mistake.
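When executors are registered as Spring beans, closing the context should call their destroy() hooks for you; if they are created manually (a sketch of that scenario, not the asker's exact configuration), the standard JDK shutdown pattern is to stop intake, wait for the queue to drain, and only then interrupt stragglers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolCleanup {

    // Standard JDK pattern: shut a pool down and wait for it to drain.
    // Returns true once the pool has fully terminated.
    public static boolean closePool(ExecutorService pool) throws InterruptedException {
        pool.shutdown();                       // stop accepting new tasks
        if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
            pool.shutdownNow();                // interrupt tasks that are still running
            return pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(() -> System.out.println("work"));
        System.out.println("terminated: " + closePool(pool)); // terminated: true
    }
}
```

Non-daemon worker threads keep the JVM alive until their pool is shut down, which is exactly the hang described above; System.exit(0) only masks that by killing the threads outright.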
Please suggest.

How can I monitor (listen to) a job I scheduled with @Scheduled using the Spring Framework?

I needed to schedule a job that should run indefinitely. I found this nice article (although it uses a Spring 3 example) and followed through:
http://krams915.blogspot.com/2011/01/spring-3-task-scheduling-via.html
While this is a good starting point, I wasn't able to find out how I can monitor the status of the running job. I'd like to know if there's a way to monitor scheduled jobs so I can be notified if something goes wrong and a job just dies (for example, the job throws an exception and exits).
@Scheduled is a shortcut that uses a bunch of defaults, including the queue. I believe it's backed by an ExecutorService, so if you just @Autowired that, you can interrogate it for status.
For more control (like deciding how to handle jobs that throw exceptions) you can create a bean that's your own ExecutorService and put it in the context.
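To illustrate why exception handling matters here, a plain java.util.concurrent sketch (Spring's own scheduler adds error handling on top of this, but the raw JDK baseline is what the answer above refers to): with a bare ScheduledExecutorService, an uncaught exception silently cancels all further runs of a periodic task, and the ScheduledFuture is one way to detect that death:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ScheduledJobMonitor {

    // Blocks until the periodic job dies, then returns the cause message.
    public static String watchUntilFailure() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();

        // A periodic job that fails on its third run; after the throw,
        // scheduleAtFixedRate suppresses all further executions.
        ScheduledFuture<?> handle = scheduler.scheduleAtFixedRate(() -> {
            if (runs.incrementAndGet() == 3) {
                throw new IllegalStateException("job died");
            }
        }, 0, 20, TimeUnit.MILLISECONDS);

        try {
            handle.get();                     // completes only when the job fails
            return "unreachable for a failing periodic job";
        } catch (ExecutionException e) {
            return e.getCause().getMessage(); // the job's own exception
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(watchUntilFailure()); // prints "job died"
    }
}
```

A monitoring thread (or a wrapper Runnable that catches, logs, and re-submits) built on this idea is enough to get notified when a scheduled job exits abnormally.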

How to correctly stop timer threads in Hibernate's LocalSessionFactoryBean?

I have a Spring web application and found that
org.springframework.orm.hibernate3.LocalSessionFactoryBean
creates 2 timer threads that don't stop after Tomcat shutdown.
Is it possible to configure these threads to stop after Tomcat receives the shutdown command, or do I need to use some kind of aspect?
Thanks.
