Using a ScheduledThreadPool with parallel execution - java

Currently, I have an application that collects data every second and sends it to an API endpoint. To run every second, I am using a ScheduledThreadPoolExecutor that runs the thread which sends the data. The issue is that sending the data sometimes takes more than one second, and this results in the next batch of data being collected more than a second later. Is there any way this can be changed (or another library can be used) so that even if a thread is not finished sending the data, another thread can start running in parallel?

The usual way to deal with the desire for overlapping executions of the same scheduled task is to execute the (time-consuming) business logic of the task asynchronously.
In other words, when the once-per-second task is triggered, submit the real work to an ExecutorService (either the one you are using for the scheduled tasks, or another one). That way, the scheduled task finishes its own work (queuing the actual work) long before it is time for it to run again.
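For example, a minimal sketch of that hand-off, assuming hypothetical collectData() and sendToApi() methods stand in for the real logic:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CollectorSketch {

    // Hypothetical placeholders for the real collection and upload logic.
    static Object collectData() { return new Object(); }
    static void sendToApi(Object data) { /* network call that may exceed one second */ }

    public static void main(String[] args) {
        // One thread is enough for the scheduler: its task returns almost immediately.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // The senders run in a separate pool, so slow uploads overlap instead of delaying the schedule.
        ExecutorService senders = Executors.newFixedThreadPool(4);

        scheduler.scheduleAtFixedRate(() -> {
            Object data = collectData();           // fast, stays on the scheduler thread
            senders.submit(() -> sendToApi(data)); // slow, handed off to the sender pool
        }, 0, 1, TimeUnit.SECONDS);
    }
}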

Separate out the data-collection and send tasks.
Run data collection on a separate thread pool (or a scheduled single thread) and submit the collected data to another pool whose job is to publish it.

Assuming you are not concerned about out-of-order invocations of the "API endpoint", you can create the ScheduledThreadPoolExecutor with a corePoolSize > 1. That way, every time the scheduler kicks in, it will use the first available thread in the pool, and with a corePoolSize > 1 you would need several invocations to take more than 1s before you'd run out of threads.
For additional context: a ScheduledThreadPoolExecutor has a scheduling thread which checks for tasks and, on finding one, delegates the task to a worker thread from its internal pool. If the internal pool has a single thread (i.e. corePoolSize=1) then all tasks are executed serially and you cannot guarantee that the tasks will be executed every _wait_period_ (though you can be certain about ordering). If you want to insist on the tasks running on schedule, and you are not concerned about ordering, then you can configure the pool with a corePoolSize which ensures that there is always an available thread in the 'worker' pool every time the scheduler finds a task.
Edit 1: if you are using scheduleAtFixedRate then the other answer, which refers to delegating the scheduled invocation to a separate thread pool, is an option. If you adopt this approach then corePoolSize=1 will be sufficient, since the 'worker' thread is then only responsible for delegating the task to a separate pool.
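A minimal sketch of the corePoolSize approach, using independently scheduled one-shot tasks (the task bodies are placeholders, not a recommendation):

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CorePoolSizeSketch {
    public static void main(String[] args) {
        // corePoolSize > 1: several worker threads are available when the scheduler fires.
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(4);

        // Independently scheduled tasks can run in parallel across the four workers.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            scheduler.schedule(
                    () -> System.out.println("task " + id + " on " + Thread.currentThread().getName()),
                    1, TimeUnit.SECONDS);
        }

        // Note: a single scheduleAtFixedRate task never overlaps with itself, regardless of pool size,
        // which is why the hand-off to a separate pool (see Edit 1) is needed in that case.
    }
}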

Related

Single ScheduledExecutorService instance vs Multiple ScheduledExecutorService instances

I have a service which schedules async tasks using ScheduledExecutorService for the user. Each user will trigger the service to schedule two tasks. (The 1st task schedules the 2nd task with a fixed delay, such as a 10-second interval.)
Pseudocode illustration:
task1Future = threadPoolTaskScheduler.schedule(task1);
for (int i = 0; i < 10000; ++i) {
    task2Future = threadPoolTaskScheduler.schedule(task2);
    task2Future.get(); // takes a long time
    Thread.sleep(10);
}
task1Future.get();
Suppose up to 10000 users might be using the service at the same time. We can have two kinds of ScheduledExecutorService configuration for my service:
A single ScheduledExecutorService for all the users.
Create a ScheduledExecutorService for each user.
What I can think about the first method:
Pros:
Easy to control the number of threads in the thread pool.
Avoid creating new threads for scheduled tasks.
Cons:
Always keeping a number of threads available could waste computer resources.
May cause the service to hang for lack of available threads. (For example, set the thread pool size to 10 and have 100 people use the service at the same time; after entering the 1st task, each tries to schedule its 2nd task, only to find there is no thread available to schedule it on.)
What I can think about the second method
Pros:
Avoids always keeping many threads alive when the number of users is small.
Can always provide threads for a large number of simultaneous users.
Cons:
Creating new threads adds overhead.
Hard to control the maximum number of threads for the service; may cause the RAM to run out.
Any ideas about which way is better?
Single ScheduledExecutorService drives many tasks
The entire point of a ScheduledExecutorService is to maintain a collection of tasks to be executed after a certain amount of time elapses.
So given the scenario you describe, you need only a single ScheduledExecutorService object. Submit your 10,000 tasks to that one object. Each task will be executed approximately when its designated delay elapses. Simple, and easy.
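A minimal sketch of that single-scheduler approach; the pool size of 10 and the printed task body are placeholders, not a recommendation:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SingleSchedulerSketch {
    public static void main(String[] args) {
        // One scheduler shared by all users; the pool size is the real tuning knob (see below).
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(10);

        for (int user = 0; user < 10_000; user++) {
            final int id = user;
            // Each user's task is queued internally and runs when its delay elapses.
            ses.schedule(() -> System.out.println("task for user " + id), 10, TimeUnit.SECONDS);
        }
    }
}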
Thread pool size
The real issue is deciding how many threads to assign to the ScheduledExecutorService.
Threads, as currently implemented in the OpenJDK project, are mapped directly to host OS threads. This makes them relatively heavyweight in terms of CPU and memory usage. In other words, currently Java threads are “expensive”.
There is no simple easy answer to calculating thread pool size. The optimal number is the least amount of threads that can keep up with the workload without over-burdening the host machine’s limited number of cores and limited memory. If you search Stack Overflow, you’ll find many discussions on the topic of deciding how many threads to use in a pool.
Project Loom
And keep tabs with the progress of Project Loom and its promise to bring virtual threads to Java. That technology has the potential to radically alter the calculus of deciding thread pool size. Virtual threads will be more efficient with CPU and with memory. In other words, virtual threads will be quite “cheap”, “inexpensive”.
How executor service works
You said:
entering the 1st task and it tries to schedule the 2nd task, then finding out there is no thread available for scheduling the 2nd task
That is not how the scheduled executor service (SES) works.
If a task currently being executed by a SES needs to schedule itself or some other task for later execution, that submitted task is added to the queue maintained internally by the SES. There is no need to have a thread immediately available. Nothing happens immediately except that queue addition. Later, when the added task’s specified delay has elapsed, the SES looks for an available thread in its thread-pool to execute that task that was queued a while back in time.
You seem to feel a need to manage the time of each task’s execution on certain threads. But that is the job of the scheduled executor service. The SES tracks the tasks submitted for execution, notices when their specified delay elapses, and schedules their execution on a thread from its managed pool of threads. You don’t need to manage any of that. Your only challenge is to assign an appropriate number of threads to the pool.
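For illustration, a minimal sketch (with hypothetical task bodies) of a task that schedules a second task from within its own execution; the inner schedule call only enqueues the work:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfSchedulingSketch {
    static final ScheduledExecutorService ses = Executors.newScheduledThreadPool(2);

    public static void main(String[] args) {
        ses.schedule(SelfSchedulingSketch::taskOne, 1, TimeUnit.SECONDS);
    }

    static void taskOne() {
        System.out.println("task 1 running");
        // Scheduling task 2 only adds it to the internal queue; no free thread is needed right now.
        ses.schedule(() -> System.out.println("task 2 running"), 10, TimeUnit.SECONDS);
    }
}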
Multiple executor services
You commented:
why don't use multiple ScheduledExecutorService instances
Because in your scenario, there is no benefit. Your Question implies that you have many tasks all similar with none being prioritized. In such a case, just use one executor service. One scheduled executor service with 12 threads will get the same amount of work accomplished as 3 services with 4 threads each.
As for excess threads, they are not a burden. Any thread without a task to execute uses virtually no CPU time. A pool may or may not choose to close some unused threads after a while. But such a policy is up to the implementation of the thread pool of the executor service, and is transparent to us as calling programmers.
If the scenario were different, where some of the tasks block for long periods of time, or where you need to prioritize certain tasks, then you may want to segregate those into a separate executor service.
In today's Java (before Project Loom with virtual threads), when code in a thread blocks, that thread sits there doing nothing but waiting to unblock. Blocking means your code is performing an operation that awaits a response. For example, making network calls to a socket or web service blocks, writing to storage blocks, and accessing an external database blocks. Ideally, you would not write code that blocks for long periods of time. But sometimes you must.
In such a case where some tasks run long, or conversely you have some tasks that must be prioritized for fast execution, then yes, use multiple executor services.
For example, say you have a 16-core machine with not much else running except your Java app. You might have one executor service with a thread pool size of 4 maximum for long-running tasks, one executor service with a thread pool with a size of 7 maximum for many run-of-the-mill tasks, and a third executor service with a thread pool maximum size of 2 for very few tasks that run short but must run quickly. (The numbers here are arbitrary examples, not a recommendation.)
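For illustration only, a sketch of the three segregated pools described above (the sizes mirror the arbitrary example numbers, and the submitted tasks are placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SegregatedPoolsSketch {
    public static void main(String[] args) {
        ExecutorService longRunning  = Executors.newFixedThreadPool(4); // long, blocking tasks
        ExecutorService ordinary     = Executors.newFixedThreadPool(7); // run-of-the-mill tasks
        ExecutorService highPriority = Executors.newFixedThreadPool(2); // short tasks that must start promptly

        longRunning.submit(() -> System.out.println("slow report"));
        highPriority.submit(() -> System.out.println("urgent ping"));

        longRunning.shutdown();
        ordinary.shutdown();
        highPriority.shutdown();
    }
}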
Other approaches
As commented, there are other frameworks for managing concurrency. The ScheduledExecutorService discussed here is general purpose.
For example, Swing, JavaFX, Spring, and Jakarta EE each have their own concurrency management. Consider using those where appropriate to your particular project.

Short but frequent jobs: HandlerThread or ThreadPoolExecutor?

First of all, I could not determine what the title should be, so if it's not specific enough, the question itself will be.
We have an application that uses a foreground service and stays alive forever, and in this service there are frequent database access jobs, network access jobs, and some more that need to run on background threads. One job itself consumes a small amount of time, but the jobs themselves are frequent. Obviously, they need to run on worker threads, so I'm here to ask which design we should follow.
HandlerThread is a structure that creates a single thread and uses a queue to execute tasks, but it always loops and waits for messages, which consumes power, while ThreadPoolExecutor creates multiple threads for the jobs and removes threads when the jobs are done, but with too many threads there could be leaks, or even out-of-memory errors. The job count may be 5, or it may be 20, depending on how the user acts in a certain way. And between 2 jobs there can be a 5-second gap, or a day-long gap, depending entirely on the user. But, remember, the application stays alive forever and waits for these jobs to execute.
So, for this specific occasion, which one is better to use? A thread pool executor or a handler thread? Any advice is appreciated, thanks.
Caveat: I do not do Android work, so I am no expert there. My opinions here are based on a quick reading of Android documentation.
tl;dr
➥ Use Executors rather than HandlerThread.
The Executors framework is more modern, flexible, and powerful than the legacy Thread facility used by HandlerThread. Everything you can do in HandlerThread you can do better with executors.
Differences
One big difference between HandlerThread and ThreadPoolExecutor is that the first comes from Android while the second comes from Java. So if you'll be doing other work with Java, you might not want to get in the habit of using HandlerThread.
Another big difference is age. The android.os.HandlerThread class inherits from java.lang.Thread, and dates back to the original Android API level 1. While nice for its time, the Thread facility in Java is limited in its design. That facility was supplanted by the more modern, flexible, and powerful Executors framework in later Java.
Executors
Your Question is not clear about whether these are recurring jobs or sporadically scheduled. Either can be handled with Executors.
For jobs that run once at a specific time, and for recurring scheduled jobs, use a ScheduledExecutorService. You can schedule a job to run once at a certain time by specifying a delay, a span of time to wait until execution. For repeated jobs, you can specify an amount of time to wait, then run, then wait, then run, and so on. I'll not address this further, as you seem to be talking about sporadic immediate jobs rather than scheduled or repeating jobs. If interested, search Stack Overflow, as ScheduledExecutorService has been covered many times already.
Single thread pool
HandlerThread is a structure that creates a singular thread
If you want to recreate that single thread behavior, use a thread pool consisting of only a single thread.
ExecutorService es = Executors.newSingleThreadExecutor() ;
Make your tasks by implementing either Runnable or Callable, using (a) a class that implements the interface, or (b) no separate class at all, via lambda syntax or conventional anonymous-class syntax.
Conventional syntax.
Runnable sayHelloJob = new Runnable()
{
    @Override
    public void run ( )
    {
        System.out.println( "Hello. " + Instant.now() );
    }
};
Lambda syntax.
Runnable sayBonjourJob = ( ) -> System.out.println( "Bonjour. " + Instant.now() );
Submit as many of these jobs to the executor service as you wish.
es.submit( sayHelloJob ) ;
es.submit( sayBonjourJob ) ;
Notice that the submit method returns a Future. With that Future object, you can check if the computation is complete, wait for its completion, or retrieve the result of the computation. Or you may choose to ignore the Future object as seen in the code above.
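For example, a small sketch of retrieving a result through the returned Future (the Callable here just reports the current instant):

import java.time.Instant;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();

        // A Callable returns a value; the Future hands it back once the task completes.
        Callable<Instant> reportTime = () -> Instant.now();
        Future<Instant> future = es.submit(reportTime);

        System.out.println("done yet? " + future.isDone());
        System.out.println("result: " + future.get()); // blocks until the task finishes

        es.shutdown();
    }
}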
Fixed thread pool
If you want multiple thread behavior, just create your executor with a different kind of thread pool.
A fixed thread pool has a maximum number of threads servicing a single queue of submitted jobs (Runnable or Callable objects). The threads continue to live, and are replaced as needed in case of failure.
ExecutorService es = Executors.newFixedThreadPool( 3 ) ; // Specify number of threads.
The rest of the code remains the same. That is the beauty of using the ExecutorService interface: You can change the implementation of the executor service to get different behavior without breaking your code that calls upon that executor service.
Cached thread pool
Your needs may be better served by a cached thread pool. Rather than immediately creating and maintaining a certain number of threads as the fixed thread pool does, this pool creates threads only as needed. When a thread is done, and has been resting for over a minute, the thread is terminated. As the Javadoc notes, this is ideal for “many short-lived asynchronous tasks” such as yours. But notice that there is no upper limit on the number of threads that may be running simultaneously. If the nature of your app is such that you may often see spikes of many jobs arriving simultaneously, you may want to use an implementation other than the cached thread pool.
ExecutorService es = Executors.newCachedThreadPool() ;
Managing executors and threads
but because of too many threads there could be leaks, or out-of-memory even
It is the job of you the programmer and your sysadmin not to overburden the production server. You need to monitor performance in production. The management is easy enough to perform, as you control the number of threads available in the thread pool backing your executor service.
We have an application that uses a foreground service and stays alive forever
Of course your app does eventually come to an end and shut down. When that happens, be sure to shut down your executor and its backing thread pool. Otherwise the threads may survive and continue indefinitely. Be sure to use the life cycle hooks of your app’s execution environment to detect and react to the app shutting down.
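For example, an orderly-shutdown sketch you might call from such a life-cycle hook (the 10-second grace period is an arbitrary choice):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    // Intended to be called from the app's shutdown hook / life-cycle callback.
    static void stop(ExecutorService es) throws InterruptedException {
        es.shutdown();                                    // stop accepting new jobs
        if (!es.awaitTermination(10, TimeUnit.SECONDS)) { // give running jobs a chance to finish
            es.shutdownNow();                             // then interrupt whatever is still running
        }
    }
}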
The job count may be 5, or it may be 20, depending on how the user acts in a certain way.
Jobs submitted to an executor service are buffered up until they can be scheduled on a thread for execution. So you may have a thread pool of, for example, 3 threads and 20 waiting jobs. No problem. The waiting jobs will be eventually executed when their time comes.
You may want to prioritize certain jobs, to be done ahead of lower priority jobs. One easy way to do this is to have two executor services. Each executor has its own backing thread pool. One executor is for the fewer but higher-priority jobs, while the other executor is for the many lower-priority jobs.
Remember that threads in a thread pool doing no work, on stand-by, have virtually no overhead in Java for either CPU or memory. So there is no downside to having a special higher-priority executor service sitting around and waiting for eventual jobs to arrive. The only concern is that your total number of all background threads and their workload not overwhelm your machine. Also, the implementation of the thread pool may well shut down unused threads after a period of disuse.
I don't really think it's a question of the number of threads you are running, more how you want them run. If you want them run one at a time (i.e. you only want to execute one database query at a time) then use a HandlerThread. If you want multi-threading / a pool of threads, then use an Executor.
In my experience, leaks are really more down to how you have coded your threads, not really the chosen implementation.
Personally, I'd use a HandlerThread, here's a nice article on implementing them and how to avoid memory leaks ... Using HandlerThread in Android

Examples of when it is convenient to use Executors.newSingleThreadExecutor()

Could please somebody tell me a real life example where it's convenient to use this factory method rather than others?
newSingleThreadExecutor
public static ExecutorService newSingleThreadExecutor()
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
Thanks in advance.
Could please somebody tell me a real life example where it's convenient to use [the newSingleThreadExecutor() factory method] rather than others?
I assume you are asking about when you use a single-threaded thread-pool as opposed to a fixed or cached thread pool.
I use a single threaded executor when I have many tasks to run but I only want one thread to do it. This is the same as using a fixed thread pool of 1 of course. Often this is because we don't need them to run in parallel, they are background tasks, and we don't want to take too many system resources (CPU, memory, IO). I want to deal with the various tasks as Callable or Runnable objects so an ExecutorService is optimal but all I need is a single thread to run them.
For example, I have a number of timer tasks that I spring inject. I have two kinds of tasks and my "short-run" tasks run in a single thread pool. There is only one thread that executes them all even though there are a couple of hundred in my system. They do routine tasks such as checking for disk space, cleaning up logs, dumping statistics, etc.. For the tasks that are time critical, I run in a cached thread pool.
Another example is that we have a series of partner integration tasks. They don't take very long and they run rather infrequently and we don't want them to compete with other system threads so they run in a single threaded executor.
A third example is that we have a finite state machine where each of the state mutators takes the job from one state to another and is registered as a Runnable in a single thread-pool. Even though we have hundreds of mutators, only one task is valid at any one point in time so it makes no sense to allocate more than one thread for the task.
Apart from the reasons already mentioned, you would want to use a single threaded executor when you want ordering guarantees, i.e you need to make sure that whatever tasks are being submitted will always happen in the order they were submitted.
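A tiny sketch of that ordering guarantee; with a single worker thread the numbered tasks run strictly in submission order:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderingSketch {
    public static void main(String[] args) {
        ExecutorService es = Executors.newSingleThreadExecutor();
        // Tasks 0 through 4 execute one after another, in exactly the order they were submitted.
        for (int i = 0; i < 5; i++) {
            final int n = i;
            es.submit(() -> System.out.println("task " + n));
        }
        es.shutdown();
    }
}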
The difference between Executors.newSingleThreadExecutor() and Executors.newFixedThreadPool(1) is small but can be helpful when designing a library API. If you expose the returned ExecutorService to users of your library and the library works correctly only when the executor uses a single thread (tasks are not thread safe), it is preferable to use Executors.newSingleThreadExecutor(). Otherwise the user of your library could break it by doing this:
ExecutorService e = myLibrary.getBackgroundTaskExecutor();
((ThreadPoolExecutor)e).setCorePoolSize(10);
This is not possible with Executors.newSingleThreadExecutor(), whose returned executor cannot be cast to ThreadPoolExecutor.
It is helpful when you need a lightweight service which only makes it convenient to defer task execution, and you want to ensure only one thread is used for the job.

Waiting for another thread in executorservice scenario

Suppose there are three threads created using an executor service, and now I want t2 to start running after t1, and t3 to start running after t2. How can I achieve this kind of scenario with a thread pool?
If these were normal threads created with thread.start(), I could have waited using the join() method. But how do I handle the above scenario?
Threads t1, t2 and t3 can implement the Callable interface, and from the call method you can return some value.
Based on the return value, after t1 returns you can initiate t2, and similarly for t3.
"Callable" is the answer for it.
You are confusing the notion of threads and what is executed on a thread. It doesn't matter when a thread "starts" in a thread pool, but when execution of your processing begins or continues. So the better statement is that you have 3 Callables or Runnables and you need one of them to wait for the other two before continuing. This is done using a CountDownLatch. Create a shared latch with a count of 2. Two of the Callables will call countDown() on the latch; the one that should wait will call await() (possibly with a timeout).
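A minimal sketch of that arrangement; the worker bodies are placeholders for the real prerequisite work:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LatchSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CountDownLatch latch = new CountDownLatch(2);

        Runnable worker = () -> {
            // ... do the work the dependent task must wait for ...
            latch.countDown();
        };

        Runnable dependent = () -> {
            try {
                // Wait (with a timeout) until both workers have counted down.
                if (latch.await(30, TimeUnit.SECONDS)) {
                    System.out.println("both prerequisites finished, continuing");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        pool.submit(worker);
        pool.submit(worker);
        pool.submit(dependent);
        pool.shutdown();
    }
}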
Jobs submitted to an ExecutorService must be mutually independent. If you try to establish dependencies by waiting on Semaphores, CountDownLatches or similar, you run the risk of blocking the whole service when all available worker threads are executing jobs that wait for a job that has been submitted but is still behind the current jobs in the queue. You want to make sure you have more workers than possible blocking jobs. In most cases, it is better to use more than one ExecutorService and submit each job of a dependent group to a different service.
A few options:
If this is the only scenario you have to deal with (t1->t2->t3), don't use a thread pool. Run the three tasks sequentially.
Use some inter-thread notification mechanism (e.g. BlockingQueue, CountDownLatch). This requires your tasks to hold a shared reference to the synchronization instrument you choose.
Wrap any dependent sequence in a new runnable/callable to be submitted as a single task (see the sketch after this list). This approach is simple, but won't deal correctly with non-linear dependency topologies.
Every task that depends on another task should submit the other task for execution, and wait for its completion. This is a generic approach for thread pools with dependencies, but it requires careful tuning to avoid possible deadlocks (running tasks may wait for tasks which don't have an available thread to run on; see my response here for a simple solution).
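A sketch of option 3, wrapping a linear dependent sequence into a single task (t1, t2, t3 here are hypothetical placeholders for the real work):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SequentialWrapperSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Hypothetical tasks; the real t1/t2/t3 would go here.
        Runnable t1 = () -> System.out.println("t1");
        Runnable t2 = () -> System.out.println("t2");
        Runnable t3 = () -> System.out.println("t3");

        // The pool sees a single task; t2 starts only after t1 finishes, and t3 only after t2.
        pool.submit(() -> { t1.run(); t2.run(); t3.run(); });
        pool.shutdown();
    }
}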

ForkJoinPool seems to waste a thread

I'm comparing two variations on a test program. Both are operating with a 4-thread ForkJoinPool on a machine with four cores.
In 'mode 1', I use the pool very much like an executor service. I toss a pile of tasks into ExecutorService.invokeAll. I get better performance than from an ordinary fixed thread executor service (even though there are calls to Lucene, that do some I/O, in there).
There is no divide-and-conquer here. Literally, I do
ExecutorService es = new ForkJoinPool(4);
es.invokeAll(collection_of_Callables);
In 'mode 2', I submit a single task to the pool, and in that task call ForkJoinTask.invokeAll to submit the subtasks. So, I have an object that inherits from RecursiveAction, and it is submitted to the pool. In the compute method of that class, I call invokeAll on a collection of objects from a different class that also inherits from RecursiveAction. For testing purposes, I submit the first-level objects only one at a time. What I naively expected to see was all four threads busy, as the thread calling invokeAll would grab one of the subtasks for itself instead of just sitting and blocking. I can think of some reasons why it might not work that way.
Watching in VisualVM, in mode 2, one thread is pretty nearly always waiting. What I expect to see is the thread calling invokeAll immediately going to work on one of the invoked tasks rather than just sitting still. This is certainly better than the deadlocks that would result from trying this scheme with an ordinary thread pool, but still, what's up? Is it holding one thread back in case something else gets submitted? And, if so, why not the same problem in mode 1?
So far I've been running this using the jsr166 jar added to java 1.6's boot class path.
ForkJoinTask.invokeAll forks all of the tasks except the first in the list. The first task it runs itself, then it joins the other tasks. Its thread is not released in any way back to the pool. So what you see is that thread blocking until the other tasks are complete.
The classic use of invokeAll for a fork-join pool is to fork one task and compute another (in the executing thread). The thread that does not fork will join after it computes. The work stealing comes in with both tasks computing. When each task computes, it is expected to fork its own subtasks (until some threshold is met).
I am not sure which invokeAll is being called in your RecursiveAction.compute(), but if it is the invokeAll that takes two RecursiveActions, it will fork one, compute the other, and wait for the forked task to finish.
This is different from a plain executor service, because each task of an ExecutorService is simply a Runnable on a queue. There is no need for one task of an ExecutorService to know the outcome of another. That, however, is the primary use case of an FJ pool.
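To illustrate the classic fork-one, compute-the-other pattern described above, here is a minimal recursive-sum sketch (the array contents, threshold, and pool size are arbitrary choices):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// A minimal recursive-sum task showing fork on one half, compute on the other, then join.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {            // small enough: just do the work
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                             // hand one half to the pool
        long rightResult = right.compute();      // compute the other half on this thread
        return rightResult + left.join();        // join waits for (or helps with) the forked half
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = new ForkJoinPool(4).invoke(new SumTask(data, 0, data.length));
        System.out.println(total); // 1000000
    }
}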
