I had been using the following code snippet for a long time to trigger an export right after the application started, as well as on the provided cron schedule.
@PostConstruct
@Scheduled(cron = "${" + EXPORT_SCHEDULE + "}")
public void exportJob()
{
    exportService().exportTask();
}
After updating to Spring Boot 2.6.x, the policy on circular bean definitions got stricter and I could no longer use both annotations on this method. Some answers recommended merging the @PostConstruct into the @Scheduled as initialDelay = 0, but initialDelay is not compatible with cron and the other properties of @Scheduled, resulting in the following exception:
Caused by: java.lang.IllegalStateException: Encountered invalid @Scheduled method 'exportJob': 'initialDelay' not supported for cron triggers
Another solution may be to implement a CommandLineRunner bean. This interface contains a run() method that is executed after application startup:
@Bean
CommandLineRunner cronJobRunner(CronJobService cronJobService) {
    return args -> cronJobService.cronJob();
}
A working solution I found was to keep just the @Scheduled annotation, but also annotate a second method with @Scheduled and initialDelay = 0 that simply delegates to the first one.
@Scheduled(cron = "${" + EXPORT_SCHEDULE + "}")
public void cronJob()
{
    exportService().exportTask();
}

@Scheduled(fixedRate = Long.MAX_VALUE, initialDelay = 0) // Long.MAX_VALUE so it effectively never fires a second time
public void exportJob()
{
    cronJob();
}
In theory, the cron job is triggered more often, hence I chose it to make the service call. One could structure the call chain in the opposite direction as well.
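An alternative sketch, assuming a Spring Boot application: keep a single cron-driven method and trigger the same call once at startup from an ApplicationReadyEvent listener, which fires only after the context is fully initialized (the injected ExportService field is an assumption; the snippets above call an exportService() method instead):

```java
@Component
public class ExportJob {

    @Autowired
    private ExportService exportService; // hypothetical injectable service

    @Scheduled(cron = "${" + EXPORT_SCHEDULE + "}")
    public void exportJob() {
        exportService.exportTask();
    }

    // Runs exactly once, after the application context has fully started
    @EventListener(ApplicationReadyEvent.class)
    public void onStartup() {
        exportJob();
    }
}
```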
I have an application that uses CompletableFuture to process data from a stream asynchronously. A demo showcasing my async implementation is as follows:
@Async
@Transactional(readOnly = true)
public void beginProcessing() {
    try (Stream<String> st = myJpa.getMyStream()) { // use Spring Data JPA to get a stream of data from the db
        CompletableFuture.allOf(st.map(i -> CompletableFuture.supplyAsync(() -> myMethod(i)))
                .toArray(CompletableFuture[]::new)).join();
    }
}

@Async
private CompletableFuture<Void> myMethod(String i) {
    // logic goes here
}
And it works fine. However, at the moment the CompletableFuture uses the default thread pool to do its job. I would like it to use a custom-defined taskExecutor instead. This can be achieved by supplying an executor as the second argument of the supplyAsync(...) method. I have done it with a ForkJoin thread pool before, so I am positive that it works.
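The mechanism is easy to demonstrate outside Spring: plain CompletableFuture.supplyAsync accepts any java.util.concurrent.Executor as its second argument (the class and pool names below are illustrative, not from the question):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SupplyAsyncDemo {

    // Runs a supplier on a custom, named pool and returns the worker thread's name.
    static String runOnCustomPool() {
        ExecutorService pool = Executors.newFixedThreadPool(2, r -> {
            Thread t = new Thread(r);
            t.setName("myNewExecutor-" + t.getId());
            return t;
        });
        try {
            // The second argument of supplyAsync selects the executor the task runs on.
            return CompletableFuture
                    .supplyAsync(() -> Thread.currentThread().getName(), pool)
                    .join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnCustomPool());
    }
}
```

Without the second argument the task would run on the common ForkJoinPool instead.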
Now I want to make it work with a TaskExecutor, so I have added a new bean to my config class as below (as well as another one that I will use elsewhere):
@Configuration
public class MyConfigClass {

    @Bean
    public TaskExecutor myNewExecutor() {
        RejectedExecutionHandler re = new ThreadPoolExecutor.CallerRunsPolicy();
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(0);
        executor.setThreadNamePrefix("myNewExecutor-");
        executor.setRejectedExecutionHandler(re);
        return executor;
    }

    @Bean
    public TaskExecutor someOtherExecutor() { // would be used elsewhere
        RejectedExecutionHandler re = new ThreadPoolExecutor.CallerRunsPolicy();
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(0);
        executor.setThreadNamePrefix("someOtherExecutor-");
        executor.setRejectedExecutionHandler(re);
        return executor;
    }
}
I then autowired the TaskExecutor into my class and added the second argument, so the code looks like this:
@Autowired
private TaskExecutor myNewExecutor;

@Async
@Transactional(readOnly = true)
public void beginProcessing() {
    try (Stream<String> st = myJpa.getMyStream()) { // use Spring Data JPA to get a stream of data from the db
        CompletableFuture.allOf(st.map(i -> CompletableFuture.supplyAsync(() -> myMethod(i), myNewExecutor))
                .toArray(CompletableFuture[]::new)).join();
    }
}

@Async
private CompletableFuture<Void> myMethod(String i) {
    // logic goes here
}
However, that does not work, since there are now multiple beans of type TaskExecutor, so I get the following exception:
Caused by: org.springframework.beans.factory.NoUniqueBeanDefinitionException:
No qualifying bean of type 'org.springframework.core.task.TaskExecutor' available: expected single matching bean but found 3: myNewExecutor, someOtherExecutor, taskScheduler
OK, so I thought I would be able to solve it by using the @Qualifier annotation. However, it didn't work for me; the exception is still being thrown (I used said annotation under the @Bean annotation in the config class and under the @Autowired annotation in the class where the logic is).
I know that you can make a method use a custom thread pool by giving each bean a name and passing the corresponding name as an argument to the @Async annotation. So I guess I could rework my code into a simple for loop that calls myMethod(...), with the method's @Async annotation replaced by @Async("myNewExecutor"). That would probably work, but I would like to keep the CompletableFuture approach if I can, so I wonder if anyone can identify what I am missing that results in the error mentioned above.
I feel like I am missing something trivial to make it work but I just cannot see it.
So ... I figured it out. It was indeed a trivial thing that I somehow missed (embarrassingly so).
By specifying which thread pool to use in my logic, I effectively told the @Async above myMethod(...) which of the 3 available beans to use. But I forgot that there is one more @Async, above the beginProcessing() method. Naturally it was confused by these changes and threw the exception I mentioned. After specifying which thread pool that one should use as well (via the bean-name trick I mentioned above), the code works like a charm.
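To make that resolution concrete, a sketch under the assumption that the outer @Async was the one causing the ambiguity (bean names taken from the question; body abbreviated):

```java
@Autowired
@Qualifier("myNewExecutor")
private TaskExecutor myNewExecutor;

// Naming the pool here resolves the NoUniqueBeanDefinitionException for this @Async
@Async("myNewExecutor")
@Transactional(readOnly = true)
public void beginProcessing() {
    try (Stream<String> st = myJpa.getMyStream()) {
        CompletableFuture.allOf(st.map(i -> CompletableFuture.supplyAsync(() -> myMethod(i), myNewExecutor))
                .toArray(CompletableFuture[]::new)).join();
    }
}
```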
I have some methods annotated with @KafkaListener, but I want to start only some of them manually (depending on some conditions).
@KafkaListener(id = "consumer1", topics = "topic-name", clientIdPrefix = "client-prefix", autoStartup = "false")
public void consumer1(String message) {
    // consume
}

@PostConstruct
private void startConsumers() {
    if (true) { // some condition
        kafkaListenerEndpointRegistry.getListenerContainer("consumer1").start();
    }
}
But at this moment kafkaListenerEndpointRegistry.getListenerContainers() returns an empty list and kafkaListenerEndpointRegistry.getListenerContainer("consumer1") returns null. So maybe the moment the @PostConstruct method is called is too early, and the listeners are not registered yet.
I tried annotating the startConsumers() method with @Scheduled(fixedDelay = 100), and the listeners were indeed available by then. But using @Scheduled is not a good choice for something I want to call once after starting the application.
You can't do it in a @PostConstruct method; it's too early in the application context lifecycle.
Implement SmartLifecycle, set the phase to Integer.MAX_VALUE, and start the container in the start() method.
Or use an @EventListener and listen for the ApplicationStartedEvent (if using Spring Boot) or the ContextRefreshedEvent for a non-Boot Spring application.
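A sketch of the SmartLifecycle approach, using the listener id "consumer1" from the question (shouldStartConsumer1() is a hypothetical stand-in for the actual condition):

```java
@Component
public class ConditionalConsumerStarter implements SmartLifecycle {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    private volatile boolean running;

    @Override
    public void start() {
        if (shouldStartConsumer1()) { // hypothetical condition
            registry.getListenerContainer("consumer1").start();
        }
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // Run late, after the registry has registered all listener containers
        return Integer.MAX_VALUE;
    }

    private boolean shouldStartConsumer1() {
        return true; // replace with the real condition
    }
}
```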
I'm trying to define my event handler interceptor. I followed the instructions in the official documentation here, but I get the following error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'configureEventProcessing' defined in class path resource [com/prog/boot/config/EventProcessorConfiguration.class]: Invalid factory method 'configureEventProcessing': needs to have a non-void return type!
My current configuration class:
@Configuration
public class EventProcessorConfiguration {

    @Bean
    public void configureEventProcessing(Configurer configurer) {
        configurer.eventProcessing()
                  .registerTrackingEventProcessor("my-tracking-processor")
                  .registerHandlerInterceptor("my-tracking-processor",
                          configuration -> new MyEventHandlerInterceptor());
    }
}
My event MessageHandlerInterceptor implementation:
public class MyEventHandlerInterceptor implements MessageHandlerInterceptor<EventMessage<?>> {

    @Override
    public Object handle(UnitOfWork<? extends EventMessage<?>> unitOfWork, InterceptorChain interceptorChain)
            throws Exception {
        EventMessage<?> event = unitOfWork.getMessage();
        String userId = Optional.ofNullable(event.getMetaData().get("userId"))
                .map(uId -> (String) uId)
                .orElseThrow(Exception::new);
        if ("axonUser".equals(userId)) {
            return interceptorChain.proceed();
        }
        return null;
    }
}
What am I doing wrong?
Thanks!
Luckily, the problem is rather straightforward (and is not directly related to Axon).
The problem is that you should have used @Autowired instead of @Bean on the configureEventProcessing(Configurer) method.
The @Bean annotation on a method makes it a bean creation method, whilst you only want to tie into the auto-configuration to further configure the event processors.
As a final note of fine-tuning, you can take EventProcessingConfigurer as the parameter instead of calling Configurer#eventProcessing. That shortens your code just a little.
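Putting both corrections together, the configuration would look roughly like this (same processor and interceptor names as in the question):

```java
@Configuration
public class EventProcessorConfiguration {

    @Autowired // not @Bean: this method only configures, it does not produce a bean
    public void configureEventProcessing(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessor("my-tracking-processor")
                  .registerHandlerInterceptor("my-tracking-processor",
                          configuration -> new MyEventHandlerInterceptor());
    }
}
```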
Update
The provided configuration would, given the auto-wiring adjustment, work as expected. Granted, it does expect an event handling component to be present which is part of the "my-tracking-processor" processing group.
If there are no event handling components present in that processing group, no events will be passed to it, and thus none will be pushed through the MessageHandlerInterceptor.
A quick and easy way to specify a processing group for an event handling component is by adding the @ProcessingGroup annotation at class level.
I want to have some basic preprocessing code that needs to run only once, before the scheduler starts. How can I achieve this in Spring Boot?
Are you looking for this? There are other options as well, but please elaborate on the question.
@Component
public class Cache {
    ...

    @PostConstruct
    public void initializeCache() {
        ...
    }

    @Scheduled(fixedRate = 60L * 1000L)
    public void refreshCache() {
        ...
    }
}
Credits: Will a method annotated with @PostConstruct be guaranteed to execute prior to a method with @Scheduled within the same bean?
If you want to run code only once, you could wait until Spring is ready and then run it. To achieve this, listen for the event like this:
@EventListener(ApplicationReadyEvent.class)
public void doSomethingAfterStartup() {
    System.out.println("run your code here");
}
You can put that code in the application class to see the results.
Is it possible to schedule Spring cache eviction for every day at midnight?
I've read Spring's cache docs and found nothing about scheduled cache eviction.
I need to evict the cache daily and re-cache it in case there were changes outside my application.
Try using @Scheduled.
Example:
@Scheduled(fixedRate = ONE_DAY)
@CacheEvict(value = { CACHE_NAME })
public void clearCache() {
    log.debug("Cache '{}' cleared.", CACHE_NAME);
}
You can also use a cron expression with @Scheduled.
If you use @Cacheable on methods with parameters, you should NEVER forget the allEntries = true property on the @CacheEvict. Otherwise the call will only evict the key matching the (empty) parameters of the clearCache() method, which matches nothing, so you will not evict anything from the cache.
Maybe not the most elegant solution, but @CacheEvict was not working for me, so I went directly to the CacheManager.
This code clears a cache called foo via a scheduler:
class MyClass {

    @Autowired CacheManager cacheManager;

    @Cacheable(value = "foo")
    public int expensiveCalculation(String bar) {
        ...
    }

    @Scheduled(fixedRate = 60 * 1000)
    public void clearCache() {
        cacheManager.getCache("foo").clear();
    }
}
I know this question is old, but I found a better solution that worked for me. Maybe it will help others.
So, it is indeed possible to make a scheduled cache eviction. Here is what I did in my case.
The two annotations @Scheduled and @CacheEvict do not seem to work together on the same method.
You must therefore split the scheduling method and the cache eviction method apart.
But since the whole mechanism is based on proxies, only external calls to public methods of your class will trigger the cache eviction. This is because internal calls between two methods of the same class do not go through the Spring proxy.
I managed to fix it the same way as Celebes (see comments), but with an improvement to avoid needing two components.
@Component
class MyClass
{
    @Autowired
    MyClass proxiedThis; // store your component inside its own Spring proxy

    // A cron expression for every day at midnight
    // (Spring cron has six fields: second minute hour day month weekday)
    @Scheduled(cron = "0 0 0 * * *")
    public void cacheEvictionScheduler()
    {
        proxiedThis.clearCache();
    }

    @CacheEvict(value = { CACHE_NAME }, allEntries = true)
    public void clearCache()
    {
        // intentionally left blank, or add some trace info
    }
}
Please follow the code below; change the cron expression accordingly. I have set it to every 3 minutes.
Create a class and use the method below inside it.
class A
{
    @Autowired CacheManager cacheManager;

    @Scheduled(cron = "0 */3 * ? * *")
    public void cacheEvictionScheduler()
    {
        logger.info("inside scheduler start");
        evictAllCaches();
        logger.info("inside scheduler end");
    }

    public void evictAllCaches() {
        logger.info("inside clearcache");
        cacheManager.getCacheNames().stream()
                .forEach(cacheName -> cacheManager.getCache(cacheName).clear());
    }
}
The Spring cache framework is event driven, i.e. @Cacheable and @CacheEvict are triggered only when the respective methods are invoked.
However, you can leverage the underlying cache provider (remember, the Spring cache framework is just an abstraction and does not provide a cache solution by itself) to invalidate the cache on its own. For instance, EhCache has a timeToLiveSeconds property which dictates how long a cache entry stays alive. But this won't re-populate the cache for you unless the @Cacheable-annotated method is invoked again.
So for cache eviction and re-population at a particular time (say midnight, as mentioned), consider implementing a scheduled background service in Spring which triggers the cache eviction and re-population as desired. The expected behavior is not provided out of the box.
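A minimal sketch of such an evict-and-repopulate service, assuming a hypothetical ProductService whose @Cacheable methods back a cache named "products" and a hypothetical warmUpCache() method that calls them:

```java
@Component
public class CacheRefreshJob {

    @Autowired
    private CacheManager cacheManager;

    @Autowired
    private ProductService productService; // hypothetical @Cacheable-backed service

    // Every day at midnight (Spring cron: second minute hour day month weekday)
    @Scheduled(cron = "0 0 0 * * *")
    public void evictAndRepopulate() {
        cacheManager.getCache("products").clear(); // evict
        productService.warmUpCache();              // re-populate via the @Cacheable methods
    }
}
```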
Hope this helps.