Naming threads in Spring - java

I thought it was always recommended to name your threads to make debugging easier later on.
In my Spring Boot project I have used the @Async annotation and, later on, a TaskExecutor, but I could not find a way to name my threads.
Is there a way to do that, or is this not really done in the Spring abstractions?

You can set the thread-name prefix property in the task executor configuration, or use a ThreadFactory if a prefix is not enough:
@Bean
public TaskExecutor threadPoolTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setThreadNamePrefix("my_thread_prefix");
    // or supply a full ThreadFactory (Guava's ThreadFactoryBuilder here);
    // note that a custom ThreadFactory takes precedence over the prefix
    executor.setThreadFactory(new ThreadFactoryBuilder().setNameFormat("my-thread-%d").build());
    executor.initialize();
    return executor;
}

TaskExecutor in Spring is a functional interface that extends directly from Java's Executor. According to the documentation:
An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc.
What this means is that it is not possible (and should not be required) to name the thread yourself, as you are not responsible for starting and managing it. That said, for debugging purposes, if you want to provide a name, you should do so on the thread pool itself by setting the threadNamePrefix and/or threadGroupName properties.
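If you would rather not pull in Guava's ThreadFactoryBuilder, a plain ThreadFactory gives the same naming control. A minimal JDK-only sketch (the class name is my own, not from the original answer):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger();

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        // every thread created by this factory gets a numbered, prefixed name
        return new Thread(r, prefix + "-" + counter.incrementAndGet());
    }

    public static void main(String[] args) {
        ThreadFactory factory = new NamedThreadFactory("my-thread");
        Thread t = factory.newThread(() -> {});
        System.out.println(t.getName()); // my-thread-1
    }
}
```

Pass an instance to executor.setThreadFactory(...) in place of the Guava builder; once a custom ThreadFactory is set, ThreadPoolTaskExecutor's own threadNamePrefix is no longer applied.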

Related

What is the impact on my Spring Boot application if I have a task executor?

I already have the configuration for the minimum and maximum number of threads for my Spring Boot application:
server.tomcat.threads.min=20
server.tomcat.threads.max=50
What is the impact on my Spring Boot application if I also have a task executor in my application?
@Configuration
public class AsyncConfiguration {

    @Bean("myExecutor")
    public TaskExecutor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(20);
        executor.setMaxPoolSize(1000);
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setThreadNamePrefix("Async-");
        return executor;
    }
}
Those are two different thread pools:
server.tomcat.threads.* defines the request thread pool (knowing that the Servlet API uses one thread per request)
myExecutor is just another pool that you could use for async operations, for instance via @EnableAsync/@Async:
By default, Spring will be searching for an associated thread pool definition: either a unique TaskExecutor bean in the context, or an Executor bean named "taskExecutor" otherwise. If neither of the two is resolvable, a SimpleAsyncTaskExecutor will be used to process async method invocations.
See also https://stackoverflow.com/a/65185737/1225328 for more details about using thread pools with Spring MVC/WebFlux.
So, to answer your actual question: both configurations don't interfere with each other :)
From my understanding:
server.tomcat.threads.*: the Tomcat thread pool used for requests. That means if 2000 requests come in, only 50 threads are created, per my configuration.
myExecutor: used for asynchronous work.
Suppose I handle error logging in the background with the @Async annotation and all 2000 requests fail: at most 1000 threads are created. So Tomcat manages 50 threads as the main threads and the Spring container manages up to 1000 threads. But if I don't use a custom task executor, Spring uses its default and creates 2000 new threads, in which case my application will be slower.
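The pool-sizing mechanics discussed above can be observed directly with the JDK's ThreadPoolExecutor, which ThreadPoolTaskExecutor wraps: threads grow to the core size first, then tasks queue, and only when the queue is full are extra threads created up to the max. A sketch with small numbers to make the effect visible:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizingDemo {
    public static void main(String[] args) throws InterruptedException {
        // 2 core threads, 4 max, queue capacity 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));

        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        // tasks 1-2 occupy the core threads, 3-4 sit in the queue,
        // 5-6 overflow the full queue and force extra threads
        for (int i = 0; i < 6; i++) pool.execute(blocker);

        System.out.println(pool.getPoolSize());     // 4
        System.out.println(pool.getQueue().size()); // 2

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Worth noting: ThreadPoolTaskExecutor's default queueCapacity is unbounded (Integer.MAX_VALUE), so with the configuration in the question (core 20, max 1000, no queue capacity set) the pool would in practice never grow past 20 threads.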

Project Reactor create multiple scheduler from the same Executor

In my application I have a single Java Executor. Can it create issues if I create multiple Schedulers from the same instance of the Executor, with something like:
public class MyServiceController {
    @Autowired
    private Executor mainExecutor;

    public Object something() {
        return Flux.from(somethingElse())
                .publishOn(Schedulers.fromExecutor(mainExecutor))
                .toFuture();
    }
}
(Or having multiple classes implementing this pattern, all of them having the same instance of mainExecutor)
It should be fine in both cases: the same Executor will back all the Scheduler.Worker instances spawned by each Scheduler you create from it. The same is true if it is an ExecutorService (albeit with a different Scheduler implementation as the wrapper).
For clarity, I'd still consider making the Scheduler a singleton next to the Executor.
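The point that one Executor backs every wrapper built on it can be illustrated with plain JDK types, no Reactor dependency needed (the Wrapper record is a hypothetical stand-in for a Scheduler):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedExecutorDemo {
    // trivial stand-in for a Scheduler: just a view over the shared Executor
    record Wrapper(Executor delegate) {
        void schedule(Runnable task) { delegate.execute(task); }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService shared = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "shared-worker"));
        Wrapper a = new Wrapper(shared);
        Wrapper b = new Wrapper(shared);

        Set<String> threadNames = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(2);
        a.schedule(() -> { threadNames.add(Thread.currentThread().getName()); done.countDown(); });
        b.schedule(() -> { threadNames.add(Thread.currentThread().getName()); done.countDown(); });
        done.await();

        // both wrappers ran their work on the single shared thread
        System.out.println(threadNames); // [shared-worker]
        shared.shutdown();
    }
}
```

Schedulers.fromExecutor behaves analogously: each Scheduler is a view, and the underlying Executor decides where the work actually runs.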

How to use multiple threadPoolExecutor for Async Spring

I am using Spring @Async on two classes. Both ultimately implement an interface. I am creating two separate ThreadPoolTaskExecutors so that each class has its own thread pool to work off of. However, due to (I think) proxying and how Spring implements @Async classes, I have to put the @Async annotation on the base interface. Because of this, both classes end up using the same ThreadPoolTaskExecutor.
Is it possible to tell Spring that for a given bean (in this case I am calling the classes that implement the interface services) it should use a specific ThreadPoolTaskExecutor?
By default when specifying #Async on a method, the executor that will be used is the one supplied to the 'annotation-driven' element as described here.
However, the value attribute of the #Async annotation can be used when needing to indicate that an executor other than the default should be used when executing a given method.
@Async("otherExecutor")
void doSomething(String s) {
    // this will be executed asynchronously by "otherExecutor"
}
In this case, "otherExecutor" may be the name of any Executor bean in the Spring container, or may be the name of a qualifier associated with any Executor, e.g. as specified with the <qualifier> element or Spring's @Qualifier annotation.
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html
You probably also need to define the otherExecutor bean in your app with the pool settings you want:
@Bean
public TaskExecutor otherExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(25);
    return executor;
}

Sidekiq-like queue using java tools?

I want a work queue that behaves almost exactly like Ruby's Sidekiq (it doesn't need to use Redis, but it can; I just can't use Ruby, not even JRuby). Basically, I want to be able to create jobs that run with some parameters, with a worker pool executing the jobs. The workers are going to use Hibernate to do some work, so I think Spring integration could make things easier.
Spring Integration has Redis Queue inbound and outbound channel adapters.
The inbound message-driven adapter doesn't currently support concurrency; we worked around that in Spring XD with a composite adapter that wraps a collection of RedisQueueMessageDrivenEndpoint.
Or you could use RabbitMQ; the Spring Integration adapter for it does support concurrency.
EDIT
The bus was extracted to a sub project within that repo.
Spring Framework has ThreadPoolTaskExecutor. You could use it in your class as follows.
@Autowired
ThreadPoolTaskExecutor executor;
ThreadPoolTaskExecutor has properties that need to be set before it is put to use. A @PostConstruct method runs after dependency injection, so we can set the properties of the ThreadPoolTaskExecutor there.
@PostConstruct
public void init() {
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(25);
}
Then you can start using the executor as follows:
executor.execute(new EmailtoCustomerTask("zhw@gmail.com"));
The only requirement for a task is that it implements the Runnable interface.
private class EmailtoCustomerTask implements Runnable
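A self-contained sketch of such a task, using a plain JDK ThreadPoolExecutor with the same sizing as the answer's configuration (the email-sending body and addresses are placeholders, not a real mail integration):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EmailQueueDemo {
    // in a real app the run() body would call a mail service
    static class EmailToCustomerTask implements Runnable {
        private final String address;

        EmailToCustomerTask(String address) { this.address = address; }

        @Override
        public void run() {
            System.out.println("Sending email to " + address);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // same shape as the ThreadPoolTaskExecutor above: 5 core, 10 max, queue of 25
        ExecutorService executor = new ThreadPoolExecutor(
                5, 10, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(25));
        for (String addr : List.of("a@example.com", "b@example.com")) {
            executor.execute(new EmailToCustomerTask(addr));
        }
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```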

looking for persistent timers for a spring application

I'm looking for a lib that allows me to:
1. define a worker that will be invoked once at a specific time in the future (no re-schedule / cron-like feature needed), i.e. a Timer
2. have the worker accept a context with some parameters / inputs
3. have all of it persisted in the DB (or a file)
4. have the worker managed by Spring: Spring should instantiate the worker so it can be injected with dependencies
5. be able to create timers dynamically via an API, not just statically via Spring XML beans
nice to have:
6. support a cluster, i.e. several nodes that can host a worker; each job stored in the DB should cause invocation of ONE worker on one of the nodes
I've examined several alternatives; none meets the requirements:
Quartz
when using org.springframework.scheduling.quartz.JobDetailBean, Quartz creates your worker instance (not Spring), so you can't get dependency injection (which would lead me to use a Service Locator, which I want to avoid)
when using org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean, you can't get a context: your worker exposes one public method that accepts no arguments. In addition, when using MethodInvokingJobDetailFactoryBean you can't use persistence (from the Javadoc):
Note: JobDetails created via this FactoryBean are not serializable and thus not suitable for persistent job stores. You need to implement your own Quartz Job as a thin wrapper for each case where you want a persistent job to delegate to a specific service method.
Spring's Timer and plain JDK timers do not support the persistence / cluster features.
I know I could implement this myself using a DB and Spring (or even JDK) timers, but I'd prefer to use a 3rd-party lib for that.
Any suggestions?
If you want to generate triggers/job details at runtime and still be able to use Spring DI on your beans, you can refer to this blog post; it shows how to use SpringBeanJobFactory in conjunction with ObjectFactoryCreatingFactoryBean to create Quartz triggering objects at runtime with Spring-injected beans.
For those interested in an alternative to Quartz, have a look at db-scheduler (https://github.com/kagkarlsson/db-scheduler). A persistent task/execution-schedule is kept in a single database table. It is guaranteed to be executed only once by a scheduler in the cluster.
1. Yes, see the code example below.
2. Currently limited to a single string identifier, with no format restriction. The scheduler will likely be extended in the future with better support for job details/parameters.
3. The execution time and context are persisted in the database. Binding a task name to a worker is done when the Scheduler starts. The worker may be instantiated by Spring as long as it implements the ExecutionHandler interface.
4. See 3).
5. Yes, see the code example below.
Code example:
private static void springWorkerExample(DataSource dataSource, MySpringWorker mySpringWorker) {
    // instantiate and start the scheduler somewhere in your application
    final Scheduler scheduler = Scheduler
            .create(dataSource)
            .threads(2)
            .build();
    scheduler.start();

    // define a task and a handler for that named task;
    // MySpringWorker implements the ExecutionHandler interface
    final OneTimeTask oneTimeTask = ComposableTask.onetimeTask("my-onetime-task", mySpringWorker);

    // schedule a future execution of the task with a custom id
    // (currently the only form of context supported)
    scheduler.scheduleForExecution(LocalDateTime.now().plusDays(1), oneTimeTask.instance("1001"));
}

public static class MySpringWorker implements ExecutionHandler {

    public MySpringWorker() {
        // could be instantiated by Spring
    }

    @Override
    public void execute(TaskInstance taskInstance, ExecutionContext executionContext) {
        // called when the execution time is reached
        System.out.println("Executed task with id=" + taskInstance.getId());
    }
}
Your requirements 3 and 4 do not really make sense to me: how can you have the whole package (worker + work) serialized and have it wake up magically and do its work? Shouldn't something in your running system do this at the proper time? Shouldn't this be the worker in the first place?
My approach would be this: create a Timer that Spring can instantiate and inject dependencies to. This Timer would then load its work / tasks from persistent storage, schedule them for execution and execute them. Your class can be a wrapper around java.util.Timer and not deal with the scheduling stuff at all. You must implement the clustering-related logic yourself, so that only one Timer / Worker gets to execute the work / task.
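That approach can be sketched with plain JDK types. The task store here is an in-memory stand-in for a real DB table, and all class names are hypothetical; a Spring-managed bean would receive the store via injection:

```java
import java.time.Instant;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class PersistentTimerSketch {
    // hypothetical persisted work item; in a real app this row lives in the DB
    record StoredTask(String id, Instant runAt, Runnable work) {}

    // wrapper around java.util.Timer that Spring could instantiate and inject into
    static class PersistentTimer {
        private final Timer timer = new Timer("persistent-timer", false);

        // on startup, load all pending tasks from storage and schedule them
        void start(List<StoredTask> store) {
            for (StoredTask t : store) {
                long delay = Math.max(0, t.runAt().toEpochMilli() - System.currentTimeMillis());
                timer.schedule(new TimerTask() {
                    @Override public void run() { t.work().run(); }
                }, delay);
            }
        }

        void stop() { timer.cancel(); }
    }

    public static void main(String[] args) throws InterruptedException {
        PersistentTimer pt = new PersistentTimer();
        pt.start(List.of(new StoredTask("job-1", Instant.now(),
                () -> System.out.println("executed job-1"))));
        Thread.sleep(200); // give the timer thread time to fire
        pt.stop();
    }
}
```

The clustering part is the piece this sketch omits: before running a loaded task, each node would have to claim it atomically in the DB (e.g. a conditional UPDATE) so that only one node executes it.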
