Scheduled method in a standalone application in spring - java

I have a method which needs to be executed every day at 07:00.
For that, I created a bean with the method and annotated it with @Scheduled(cron="0 0 7 * * ?").
In this bean I created a main method, which initializes the Spring context, gets the bean and invokes the method (at least for the first time), like this:
public static void main(String[] args) {
    ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(args[0]);
    SchedulerService schedulerService = context.getBean(SchedulerService.class);
    schedulerService.myMethod();
}
This works just fine - but only once.
I think I understand why: the main thread ends, and the Spring context with it, so even though myMethod is annotated with @Scheduled it won't run again.
I thought of a way around this - don't let the main thread die, perhaps like this:
while (true) {
    Thread.sleep(500);
}
That way, I think, the application context will stay alive, and so will my bean.
Am I right?
Is there a better way to solve this?
I'm using spring 3.1.2.
Thanks.

The main thread will stay active as long as any non-daemon threads are alive. If you have a <task:annotation-driven/> tag in your configuration, Spring starts an executor with a small pool of non-daemon threads for you, and the main application will not terminate.
The only thing you need to do is also register a shutdown hook, to ensure cleanup when the VM exits:
context.registerShutdownHook();
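Putting that together with the question's main method, a sketch like the one below should be all that is needed, assuming <task:annotation-driven/> is present in the XML configuration (SchedulerService and myMethod are taken from the question; this is illustrative, not a drop-in):

```java
public static void main(String[] args) {
    ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(args[0]);
    // clean up the scheduler's thread pool when the JVM exits
    context.registerShutdownHook();
    SchedulerService schedulerService = context.getBean(SchedulerService.class);
    schedulerService.myMethod(); // first run; subsequent runs come from @Scheduled
    // no busy-wait needed: the scheduler's non-daemon threads keep the JVM alive
}
```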

The join method is ideal for this:
try {
    Thread.currentThread().join();
} catch (InterruptedException e) {
    logger.warn("Interrupted", e);
}
Alternatively, here's the old-school wait approach:
final Object sync = new Object();
synchronized (sync) {
    try {
        sync.wait();
    } catch (InterruptedException e) {
        logger.warn("Interrupted", e);
    }
}
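A third option that avoids both the busy-wait loop and a bare wait() is a CountDownLatch released from a shutdown hook. This is a framework-free sketch; the class and method names are made up for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class KeepAlive {

    // released by the shutdown hook when the JVM begins to exit
    static final CountDownLatch shutdownLatch = new CountDownLatch(1);

    static void requestShutdown() {
        shutdownLatch.countDown();
    }

    static boolean awaitShutdown(long timeout, TimeUnit unit) throws InterruptedException {
        return shutdownLatch.await(timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(KeepAlive::requestShutdown));
        // ... build the application context here ...
        shutdownLatch.await(); // parks the main thread without spinning
    }
}
```

Unlike the sleep loop, the latch consumes no CPU while waiting and wakes up promptly when shutdown is requested.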

Related

ScheduledExecutorService inside a Spring Bean, not working after a couple of executions

I am trying to schedule a task inside a Spring @Bean method which updates a property of the instance returned from the bean.
I am able to run this code, and the executor works fine a couple of times, but after that it suddenly stops loading.
What exactly is the problem here? Is there a better way to do this?
@Bean(name = "service")
public Service getService() {
    Service service = new Service();
    ScheduledExecutorService serviceLoader = Executors.newScheduledThreadPool(1);
    serviceLoader.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            service.loadAllLiveEvents();
        }
    }, 0, 1, TimeUnit.HOURS);
    return service;
}
The lifecycle of serviceLoader looks suspect: it gets initialized inside the factory method, schedules some work, and then service is returned. What happens to the reference to the pool? When can shutdown ever be called?
In addition, the first iteration runs immediately, while the application context is not ready yet; this can lead to unpredictable results depending on the actual code that runs during the iteration.
I can't say for sure what happens based on this code snippet, but here are some possible solutions:
Use the @Scheduled annotation - running scheduled tasks is a built-in Spring feature. There are a lot of tutorials; here is one of them.
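A minimal sketch of that built-in approach, reusing the Service class and the hourly refresh from the question (the configuration and component class names here are made up for illustration):

```java
@Configuration
@EnableScheduling
public class SchedulingConfig {

    @Bean
    public Service service() {
        return new Service();
    }
}

@Component
public class ServiceRefresher {

    @Autowired
    private Service service;

    // fixedRate is in milliseconds; this matches the original hourly schedule
    @Scheduled(fixedRate = 60 * 60 * 1000)
    public void refresh() {
        service.loadAllLiveEvents();
    }
}
```

Spring then owns the scheduler thread and its shutdown, so none of the lifecycle questions above arise.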
If you absolutely have to use the thread pool, I suggest the following configuration:
@Configuration
public class MyConfiguration {

    @Bean
    public Service service() {
        return new Service();
    }

    @Bean(destroyMethod = "shutdownNow") // or "shutdown" - Spring will close the pool when the context is closed
    @Qualifier("serviceLoaderPool")
    public ScheduledExecutorService serviceLoader() {
        return Executors.newScheduledThreadPool(1);
    }

    @EventListener
    public void onAppContextStarted(ApplicationReadyEvent evt) {
        ScheduledExecutorService loader =
                (ScheduledExecutorService) evt.getApplicationContext().getBean("serviceLoaderPool");
        Service service = evt.getApplicationContext().getBean(Service.class);
        loader.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                service.loadAllLiveEvents();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}
With this approach you can be sure that the service will only start refreshing once the application context is ready.
The lifecycle of the executor service is well defined, too: Spring manages it as a regular singleton bean, so it won't be GC'ed as long as the application context is up and running.
The destroy method guarantees a graceful shutdown (again, Spring calls it for you).

Run code on Spring startup, ignoring failures

I need to perform some work when the Spring application is ready - something similar to @Scheduled, but I want it to run only once.
I found several ways to do it, such as using @PostConstruct on a bean, @EventListener, or InitializingBean; however, none of these matches my need. If something goes wrong during this logic, I want to ignore it so that the application starts anyway, but with these methods the application crashes.
Of course, I can surround the logic with try-catch and it will work. But is there a more elegant way?
We faced a similar issue with our microservices. In order to run code just after startup, we added a component implementing ApplicationListener<ApplicationReadyEvent>, which makes a call to the services just after application startup; this worked for us.
@Component
public class ApplicationStartup implements ApplicationListener<ApplicationReadyEvent> {

    @Autowired
    YourService yourService;

    @Override
    public void onApplicationEvent(final ApplicationReadyEvent event) {
        System.out.println("ApplicationReadyEvent: application is up");
        try {
            // some code to call yourService with property-driven or constant inputs
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
When you use @PostConstruct to implement the logic, the application is not ready yet, so this somewhat contradicts your requirement: Spring initializes the beans one by one (respecting the dependencies between them), and only after all that does it finish building the application context.
When the application context is fully initialized, Spring does allow listeners to run. So listeners are the way to go - when the listener is invoked, the application is ready.
In both cases (@PostConstruct, @EventListener), as long as you're not using a try/catch block the application context will fail to start, because Spring waits until all the listeners are done.
You can use @Async if you don't want the application context to wait for listener execution. In this case, exception handling is done by the task executor. See here.
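A sketch of that combination (class and method names are made up; assumes @EnableAsync is present on a configuration class):

```java
@Component
public class StartupWork {

    // runs on the async executor once the application is ready;
    // an exception thrown here is routed to the executor's
    // AsyncUncaughtExceptionHandler instead of failing application startup
    @Async
    @EventListener(ApplicationReadyEvent.class)
    public void runOnce() {
        // the potentially failing startup logic goes here
    }
}
```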
Personally, I don't see any issue with the try/catch approach.
You can use @PostConstruct (as you said), but you must wrap your logic in try/catch and ignore the exception when it is thrown.
Sample code:
@PostConstruct
void init() {
    try {
        // your business logic
    } catch (Exception e) {
        // do nothing, or just log
    }
}

Stop ConcurrentTaskScheduler when spring context closed

I am writing a simple Spring application with AnnotationConfigApplicationContext. I have a ConcurrentTaskScheduler in my application. What is the best practice for stopping the ConcurrentTaskScheduler when the Spring context is closed?
Update: The main problem is that when JUnit closes the context in an @After method, all threads are terminated, but when I manually close the context at the end of the application, some threads run by the ConcurrentTaskScheduler continue running.
Let Spring handle the shutdown itself.
Pass a ScheduledExecutorService to your ConcurrentTaskScheduler.
Then add a method annotated with @PreDestroy in which you shut down the ScheduledExecutorService:
@PreDestroy
public void cleanUp() throws InterruptedException {
    scheduleExecutorService.shutdown();
    try {
        scheduleExecutorService.awaitTermination(10000, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        scheduleExecutorService.shutdownNow();
        throw e;
    }
}
What I do in my project is save the jobs in the database so I have control over them; later I can just grab that information and stop them.
How to manage/stop spring 4 ConcurrentTaskScheduler

Delegate processing during Tomcat startup with Spring

I have defined a bean which needs to do some heavy processing during the @PostConstruct lifecycle phase (during start-up).
As it stands, I submit a new Callable to an executor service with each iteration of the processing loop. I keep a list of the Future objects returned from these submissions in a member variable.
@Component
@Scope("singleton")
public class StartupManager implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ExecutorService executorService;

    private final Map<Class<?>, Optional<Action>> actionMappings = new ConcurrentHashMap<>();
    private final List<Future> processingTasks = Collections.synchronizedList(new ArrayList<>());

    @PostConstruct
    public void init() throws ExecutionException, InterruptedException {
        this.controllers.getHandlerMethods().entrySet().stream().forEach(handlerItem -> {
            processingTasks.add(executorService.submit(() -> {
                // processing
            }));
        });
    }
}
This same bean implements the ApplicationListener interface, so it can listen for a ContextRefreshedEvent which allows me to detect when the application has finished starting up. I use this handler to loop through the list of Futures and invoke the blocking get method which ensures that all of the processing has occurred before the application continues.
@Override
public void onApplicationEvent(ContextRefreshedEvent applicationEvent) {
    for (Future task : this.processingTasks) {
        try {
            task.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e.getMessage());
        }
    }
}
My first question: is changing the actionMappings stream to a parallelStream going to achieve the same thing as submitting each task to the executor service? Is there a way to pass an existing executor service into a parallel stream so that it uses the thread pool size I've defined for the bean?
Secondly, as part of the processing, the actionMappings map is read and entries are put into it. Is making this map a ConcurrentHashMap sufficient to make it thread-safe in this scenario?
Finally, is implementing the ApplicationListener interface and listening for the ContextRefreshedEvent the best way to detect when the application has started up, and therefore to force completion of the unprocessed tasks by blocking? Or can this be done another way?
Thanks.
About using parallelStream(): no, and this is precisely that method's main drawback. It should be used only when the thread pool size doesn't matter, so I think your ExecutorService-based approach is fine.
Since you are working with Java 8, you could as well use the CompletableFuture.supplyAsync() method, which has an overload that takes an Executor. Since ExecutorService extends Executor, you can pass it your ExecutorService and you're done!
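A minimal, framework-free sketch of that idea; the doubling "work" and the pool are placeholders standing in for the real processing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

public class StartupTasks {

    // submit one CompletableFuture per input on the given pool, then block for all results
    public static List<Integer> runAll(ExecutorService pool, List<Integer> inputs) {
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (Integer in : inputs) {
            // the supplyAsync overload taking an Executor runs on our pool,
            // not on the ForkJoinPool.commonPool() that parallelStream() would use
            futures.add(CompletableFuture.supplyAsync(() -> in * 2, pool));
        }
        List<Integer> results = new ArrayList<>();
        for (CompletableFuture<Integer> f : futures) {
            results.add(f.join()); // blocks until this task completes
        }
        return results;
    }
}
```

The key point is the second argument to supplyAsync(): it keeps the pool size under your control, which is exactly what parallelStream() cannot easily offer.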
I think a ConcurrentHashMap is fine. It ensures thread safety in all its operations, especially when comes the time to add or modify entries.
When is a ContextRefreshedEvent fired? According to the Javadoc:
Event raised when an ApplicationContext gets initialized or refreshed.
which doesn't guarantee that your onApplicationEvent() method will be called once and only once, that is, only after your bean is properly initialized, which includes execution of the @PostConstruct-annotated method.
I suggest you implement the BeanPostProcessor interface and put your Future-checking logic in the postProcessAfterInitialization() method. The two BeanPostProcessor methods are called before and after the InitializingBean.afterPropertiesSet() method (if present), respectively.
I hope this will be helpful...
Cheers,
Jeff

Ejb Timer throwing javax.ejb.ConcurrentAccessTimeoutException: Unable to get write lock on

My application is running on TomEE, and I have an EJB timer that triggers the timeout method every two minutes. The timer triggered the timeout method the first time, and that invocation was still running when the timer tried to trigger the same method a second time. It then threw the following exception:
javax.ejb.ConcurrentAccessTimeoutException: Unable to get write lock on 'timeout' method for: com.abc.xyz
at org.apache.openejb.core.singleton.SingletonContainer.aquireLock(SingletonContainer.java:298)
at org.apache.openejb.core.singleton.SingletonContainer._invoke(SingletonContainer.java:217)
at org.apache.openejb.core.singleton.SingletonContainer.invoke(SingletonContainer.java:197)
at org.apache.openejb.core.timer.EjbTimerServiceImpl.ejbTimeout(EjbTimerServiceImpl.java:769)
at org.apache.openejb.core.timer.EjbTimeoutJob.execute(EjbTimeoutJob.java:39)
at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:560)
My log is filled with the same stack trace, and it continues to occur until I stop the server.
Can we make the timer service not trigger the method if it is already running?
Or is there a way to time out the first call before the method is triggered again?
Thanks,
Is your timed EJB a singleton bean?
By default, singletons use container-managed concurrency with write locks that guarantee exclusive access for all methods.
The openejb.xml file configures the AccessTimeout for a singleton EJB; after that timeout, the exception you have seen is thrown. Please see here as well: http://tomee.apache.org/singleton-beans.html
Solutions might be:
Use a stateless session bean as the timer bean
Define a read lock on the timer method
Don't use a repeating timer but schedule the next execution of your timer at the end of the current execution.
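The last option above can be sketched without any EJB machinery, using a plain ScheduledExecutorService that schedules the next run only after the current one finishes (all names here are made up; the pattern is the point):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfRescheduler {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable task;
    private final long delayMillis;
    final AtomicInteger runs = new AtomicInteger();

    SelfRescheduler(Runnable task, long delayMillis) {
        this.task = task;
        this.delayMillis = delayMillis;
    }

    void start() {
        scheduler.schedule(this::runOnce, delayMillis, TimeUnit.MILLISECONDS);
    }

    private void runOnce() {
        try {
            task.run();
            runs.incrementAndGet();
        } finally {
            // the next execution is only scheduled once this one has finished,
            // so runs can never overlap or queue up, however long task.run() takes
            scheduler.schedule(this::runOnce, delayMillis, TimeUnit.MILLISECONDS);
        }
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```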
If you want to avoid running multiple times in parallel, but also want to avoid that the scheduled runs queue up, then I have another proposal.
This way I let schedules "skip", if the previous one is still running:
@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class Example {

    private final AtomicBoolean alreadyRunning = new AtomicBoolean(false);

    @Schedule(minute = "*", hour = "*", persistent = false)
    public void doWork() {
        if (alreadyRunning.getAndSet(true)) return;
        try {
            // ... your code
        } finally {
            alreadyRunning.set(false);
        }
    }
}
