In my application I have a single Java Executor. Can it create issues if I create multiple Schedulers from the same Executor instance, with something like:
public class MyServiceController {
    @Autowired
    private Executor mainExecutor;

    public Object something() {
        return Flux.from(somethingElse())
            .publishOn(Schedulers.fromExecutor(mainExecutor))
            .toFuture();
    }
}
(Or having multiple classes implementing this pattern, all of them having the same instance of mainExecutor)
It should be fine in both cases: the same Executor will back all the Scheduler.Worker instances spawned by each Scheduler you create from it, and the same is true if it is an ExecutorService (albeit with a different Scheduler implementation for the wrapper).
For clarity, I'd still consider making the Scheduler a singleton next to the Executor.
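A minimal sketch of that, assuming a Spring @Configuration class (the SchedulerConfig and mainScheduler names are illustrative):
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

@Configuration
public class SchedulerConfig {

    // Wrap the shared Executor once; inject this Scheduler bean instead of
    // calling Schedulers.fromExecutor(...) at every call site.
    @Bean
    public Scheduler mainScheduler(Executor mainExecutor) {
        return Schedulers.fromExecutor(mainExecutor);
    }
}
The controller can then inject the Scheduler and use .publishOn(mainScheduler) directly.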
Related
I currently have a scheduled task within my Spring application.
However, two parts of this logic are severely time-consuming, and I am wondering whether there is a way to make those two parts asynchronous so that they do not interfere with the timing of the rest of the logic.
The logic that I need to execute is as follows.
@Scheduled(fixedDelay = 10000)
public void startAuction() throws Exception {
    List<SchGoodsAuctionStartListRes> list = schedulerService.schGoodsAuctionStartList();
    for (SchGoodsAuctionStartListRes item : list) {
        schedulerService.schGoodsAuctionStart(item);

        // 1st time-consuming block that needs async
        PushInfo pushInfo = pushMapper.pushGoodsSeller(item.getGoodsIdx());
        pushInfo.setTitle("Start");
        pushInfo.setBody("[" + pushInfo.getBrand() + "] started.");
        pushInfo.setPushGrp("001");
        pushInfo.setPushCode("003");
        fcmPushUtil.sendPush(pushInfo);

        // 2nd time-consuming block that needs async
        List<PushInfo> pushInfos = pushMapper.pushGoodsAuctionAll(item.getIdx());
        for (PushInfo pushInfoItem : pushInfos) {
            pushInfoItem.setTitle("\uD83D\uDD14 open");
            pushInfoItem.setBody("[" + pushInfo.getBrand() + "] started. \uD83D\uDC5C");
            pushInfoItem.setPushGrp("002");
            pushInfoItem.setPushCode("008");
            fcmPushUtil.sendPush(pushInfoItem);
        }
    }
}
From my understanding, a scheduler already executes logic asynchronously, and I wonder whether there is any way of making those two blocks asynchronous so that they do not cause delays when executing this logic.
Any sort of advice or feedback would be deeply appreciated!
Thank you in advance!
There are several approaches that you could take here.
Configuring a thread pool executor for Spring's scheduled tasks
By default Spring uses a single-threaded executor to run scheduled tasks, which means that if you have multiple @Scheduled tasks, or if the next execution of a task triggers before the previous one has completed, they will all have to wait in the queue.
You can configure your own executor to be used by Spring Scheduling. Take a look at the documentation of @EnableScheduling; it is pretty exhaustive on the subject.
To configure the TaskScheduler used for scheduled tasks, it is enough to define a bean:
@Bean
public TaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
    threadPoolTaskScheduler.setPoolSize(8);
    threadPoolTaskScheduler.setThreadNamePrefix("task-scheduler");
    return threadPoolTaskScheduler;
}
Additionally, if you use Spring Boot, you can set it in the properties file:
spring.task.scheduling.pool.size=8
Executing scheduled tasks asynchronously
To execute scheduled tasks asynchronously you can use Spring's @Async annotation (and make sure to add @EnableAsync somewhere in your configuration). That will make your tasks execute on a background thread, freeing the scheduling thread.
@EnableAsync
public class ScheduledAsyncTask {

    @Async
    @Scheduled(fixedRate = 10000)
    public void scheduleFixedRateTaskAsync() throws InterruptedException {
        // your task logic ...
    }
}
Offload expensive parts of your tasks to a different executor
Finally, you could use a separate ExecutorService and run the expensive parts of your tasks on that executor instead of the one used for task scheduling. This keeps the time spent on the thread Spring uses to schedule tasks to a minimum, allowing it to start the next executions on time.
public class ScheduledAsyncTask implements DisposableBean {

    private final ExecutorService executorService = Executors.newFixedThreadPool(4);

    @Scheduled(fixedRate = 10000)
    public void scheduleFixedRateTaskAsync() throws InterruptedException {
        executorService.submit(() -> {
            // Expensive calculations ...
        });
    }

    @Override
    public void destroy() {
        executorService.shutdown();
    }
}
I thought it was always recommended to name your threads to make it easier to debug later on.
In my Spring Boot project I used the @Async annotation and later a TaskExecutor, but I could not find a way to name my threads.
Is there a way to do that, or is this not really done in the Spring abstractions?
You can use the thread name prefix property in the task executor configuration, or you can supply a ThreadFactory if a prefix is not enough:
@Bean
public TaskExecutor threadPoolTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setThreadNamePrefix("my_thread_prefix");
    // Alternatively, a custom factory (here Guava's ThreadFactoryBuilder)
    // takes precedence over the prefix:
    executor.setThreadFactory(new ThreadFactoryBuilder().setNameFormat("my-thread-%d").build());
    executor.initialize();
    return executor;
}
TaskExecutor in Spring is a functional interface that extends directly from Java's Executor. According to the documentation:
An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc.
What this means is that it is not possible (and should not be required) to name the thread yourself, as you are not responsible for starting and managing it. That said, for debugging purposes, if you want to provide some name, you should do so on the thread pool itself by setting the threadNamePrefix and/or threadGroupName properties.
Suppose I am writing a service which needs an executor service/separate thread. I provide a factory method so callers don't need to worry about the executor service, but I still want to allow passing an existing executor service (dependency injection).
How can I manage executorService.shutdown()?
Example code:
public class ThingsScheduler {

    private final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor());
    }

    public void scheduleThing() {
        executorService.submit(new SomeTask());
    }

    // implement Closeable?
    // @PreDestroy?
    // .shutdown() + JavaDoc?
}
There are several problems:
We should have the ability to shut down an internally created executor, or better, handle it automatically (with Spring's @PreDestroy, or in the worst case finalize()).
We should rather not shut down the executor if it is externally managed (injected).
We could add an attribute stating whether the executor was created by our class or injected, and then shut it down on finalize/@PreDestroy/a shutdown hook, but that does not feel elegant to me.
Maybe we should drop the factory method entirely and always require injection, pushing executor lifecycle management to the client?
You may create an instance of an anonymous subclass from your default factory, as shown below. The subclass defines a close/@PreDestroy method which will be called by your DI container.
e.g.
public class ThingsScheduler {

    final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    /**
     * Assuming you are using this method as a factory method so that the
     * returned bean is managed by your DI container.
     */
    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor()) {
            @PreDestroy
            public void close() {
                System.out.println("closing the bean");
                executorService.shutdown();
            }
        };
    }
}
I would say that the solution is fully up to you. Third-party libraries like Spring widely use a dedicated attribute to track who should release a particular resource, depending on who created it: mongoInstanceCreated in SimpleMongoDbFactory, localServer in SimpleHttpServerJaxWsServiceExporter, etc. But they do it because those classes are created only for external usage. If your class is only used in your application code, then you can either inject the executorService and not care about releasing it, or create and release it inside the class which uses it. The choice depends on your class/application design (does your class work with any executorService, is the executorService shared and used by other classes, etc.). Otherwise I don't see an option other than the dedicated flag.
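If you do go with the flag, a minimal sketch (assuming javax.annotation.PreDestroy is available; the ownsExecutor name is illustrative) could look like:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.annotation.PreDestroy;

public class ThingsScheduler {

    private final ExecutorService executorService;
    // true only when this class created the executor itself
    private final boolean ownsExecutor;

    public ThingsScheduler(ExecutorService executorService) {
        this(executorService, false);
    }

    private ThingsScheduler(ExecutorService executorService, boolean ownsExecutor) {
        this.executorService = executorService;
        this.ownsExecutor = ownsExecutor;
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor(), true);
    }

    @PreDestroy
    public void close() {
        if (ownsExecutor) { // leave injected executors to their owner
            executorService.shutdown();
        }
    }
}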
More "elegant" solution would be to extend your ExecutorService and in it override shutdown method (whichever you choose). In case of injection, you would return that extended type and it would have it's own shutdown logic. In case of factory - you still have original logic.
After some more thinking I came up with some conclusions:
Do not think about shutting the executor down if it is injected: someone else created it, so someone else will manage its lifecycle.
An executor factory could be injected instead of the Executor itself; then we create the instance using the factory and manage closing it ourselves, since we own its lifecycle (in that case the responses from other users apply). See the sketch below.
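A minimal sketch of the factory-injection idea, using a plain Supplier as the factory (the class shape mirrors the question's code; AutoCloseable is one way to surface the shutdown):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class ThingsScheduler implements AutoCloseable {

    private final ExecutorService executorService;

    // The caller hands over a recipe, not a live executor, so this class
    // unambiguously owns the instance it creates and must close it.
    public ThingsScheduler(Supplier<ExecutorService> executorFactory) {
        this.executorService = executorFactory.get();
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors::newSingleThreadExecutor);
    }

    @Override
    public void close() {
        executorService.shutdown();
    }
}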
I'd like to create an executor service that I can use as follows:
@Async(value = "asyncService")
public void task() {
    // ...
}
When should the @Bean be created using ThreadPoolTaskExecutor or ThreadPoolExecutorFactoryBean?
@Bean
public ExecutorService getAsyncService() {
    // when to favor ThreadPoolTaskExecutor over ThreadPoolExecutorFactoryBean?
}
Are there any cases where one should be favored over the other?
Favor direct injection of the TaskExecutor unless you are running under an app server, mainframe, or other environment where you need special handling of threads. As the docs say, it's easy to get confused about which class you're using otherwise.
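For the common case outside such environments, a minimal sketch of a ThreadPoolTaskExecutor bean matching the @Async qualifier above (pool sizes and class name are illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Bean name matches @Async(value = "asyncService") above.
    @Bean(name = "asyncService")
    public ThreadPoolTaskExecutor asyncService() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setThreadNamePrefix("async-service-");
        return executor;
    }
}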
I have defined a bean which needs to do some heavy processing during the @PostConstruct lifecycle phase (during startup).
As it stands, I submit a new Callable to an executor service with each iteration of the processing loop. I keep a list of the Future objects returned from these submissions in a member variable.
@Component
@Scope("singleton")
public class StartupManager implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ExecutorService executorService;

    private final Map<Class<?>, Optional<Action>> actionMappings = new ConcurrentHashMap<>();
    private final List<Future> processingTasks = Collections.synchronizedList(new ArrayList<>());

    @PostConstruct
    public void init() throws ExecutionException, InterruptedException {
        this.controllers.getHandlerMethods().entrySet().stream().forEach(handlerItem -> {
            processingTasks.add(executorService.submit(() -> {
                // processing
            }));
        });
    }
}
This same bean implements the ApplicationListener interface, so it can listen for a ContextRefreshedEvent, which allows me to detect when the application has finished starting up. I use this handler to loop through the list of Futures and invoke the blocking get() method, which ensures that all of the processing has occurred before the application continues.
@Override
public void onApplicationEvent(ContextRefreshedEvent applicationEvent) {
    for (Future task : this.processingTasks) {
        try {
            task.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e.getMessage());
        }
    }
}
My first question: is changing the actionMappings stream to a parallelStream going to achieve the same thing as submitting tasks to the executor service? Is there a way to pass an existing executor service into a parallel stream so that it uses the thread pool size I've defined for the bean?
Secondly, as part of the processing, the actionMappings map is read and entries are put in it. Is making this map a ConcurrentHashMap sufficient to make it thread safe in this scenario?
And thirdly, is implementing the ApplicationListener interface and listening for the ContextRefreshedEvent the best way to detect when the application has started up, and therefore to force completion of the unprocessed tasks by blocking? Or can this be done another way?
Thanks.
About using parallelStream(): no, and this is precisely the main drawback of that method. It should be used only when the thread pool size doesn't matter (parallel streams run on the common ForkJoinPool), so I think your ExecutorService-based approach is fine.
Since you are working with Java 8, you could as well use the CompletableFuture.supplyAsync() method, which has an overload that takes an Executor. Since ExecutorService extends Executor, you can pass it your ExecutorService and you're done!
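A sketch of how that could look inside init(), using runAsync() (the Runnable counterpart of supplyAsync(), since these tasks return nothing; handlerMethods stands in for this.controllers.getHandlerMethods()):
// executorService is the injected ExecutorService (which is an Executor).
List<CompletableFuture<Void>> tasks = handlerMethods.entrySet().stream()
        .map(entry -> CompletableFuture.runAsync(() -> {
            // processing ...
        }, executorService))
        .collect(Collectors.toList());

// Later, wait for everything at once instead of looping over individual Futures.
CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();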
I think a ConcurrentHashMap is fine. It ensures thread safety in all its operations, especially when the time comes to add or modify entries.
When is a ContextRefreshedEvent fired? According to the Javadoc:
Event raised when an ApplicationContext gets initialized or refreshed.
which doesn't guarantee that your onApplicationEvent() method will be called once and only once, after your bean has been properly initialized, which includes execution of the @PostConstruct-annotated method.
I suggest you implement the BeanPostProcessor interface and put your Future-checkup logic in the postProcessAfterInitialization() method. The two BeanPostProcessor methods are called before and after the InitializingBean.afterPropertiesSet() method (if present), respectively.
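A rough sketch of that suggestion (assuming StartupManager exposes its Future list via a hypothetical getProcessingTasks() accessor; the class name is illustrative):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class StartupTaskBarrier implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean; // nothing to do before init
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        // Runs right after each bean's @PostConstruct / afterPropertiesSet().
        if (bean instanceof StartupManager) {
            for (Future<?> task : ((StartupManager) bean).getProcessingTasks()) {
                try {
                    task.get(); // block until startup processing completes
                } catch (InterruptedException | ExecutionException e) {
                    throw new IllegalStateException(e.getMessage(), e);
                }
            }
        }
        return bean;
    }
}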
I hope this will be helpful...
Cheers,
Jeff