I have a long-running task that consists of 2 parts. The first part is an intensive I/O operation (and almost no CPU); the second part is an intensive CPU operation. I would like to have 2 threads running this task so that the CPU part of the task in one thread overlaps with the I/O part of the task running in the other thread. In other words, I would like to run the CPU-intensive part in thread #1 while thread #2 runs the I/O operation, and vice versa, so I utilize maximum CPU and I/O.
Is there some generic solution in Java for more than 2 threads?
Make a class which extends Thread. Then make two objects of that class and handle your I/O logic and CPU logic in two separate functions.
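For illustration, a minimal sketch of that idea; doIo() and doCpu() are hypothetical placeholders for your own I/O-bound and CPU-bound logic, and the loop count is arbitrary:

public class TwoWorkers {
    static class Worker extends Thread {
        @Override
        public void run() {
            for (int i = 0; i < 100; i++) {
                byte[] data = doIo(); // I/O-heavy part (mostly blocking, little CPU)
                doCpu(data);          // CPU-heavy part
            }
        }
        private byte[] doIo() { return new byte[0]; } // stand-in for real I/O
        private void doCpu(byte[] data) { }           // stand-in for real computation
    }

    public static void main(String[] args) throws InterruptedException {
        // Two instances: while one thread blocks on I/O, the other can use the CPU.
        Worker w1 = new Worker();
        Worker w2 = new Worker();
        w1.start();
        w2.start();
        w1.join();
        w2.join();
    }
}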
Take a look at the Executors class:
// Create a thread pool
ExecutorService executorService = Executors.newFixedThreadPool(10);

// Submit a task to be run on one of the pool's threads
executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});

// Initiate an orderly shutdown: previously submitted tasks still run, but no new tasks are accepted
executorService.shutdown();
This may help you:
Task Execution and Scheduling
I do not think that is possible. The scheduling of threads is fundamentally handled by the operating system. The OS decides which thread is run on which logical CPU. On the level of the application you can only give some hints to the OS scheduler, like priority, but you cannot force a certain scheduling.
It might be possible with languages like C or C++ by invoking OS-specific APIs, but at the abstraction level of Java you cannot force that behaviour.
Splitting the work into 2 threads is an artificial constraint, and like any artificial constraint, it can only limit the level of parallelism. If the two parts are logically sequential (e.g. the I/O work must precede the CPU-intensive work in order to provide data), then they should be executed sequentially on the same thread. If you have several independent tasks, they should be executed on different threads. Problems may arise if you have thousands of threads and they eat too much memory. Then you have to split your work into tasks and run those tasks on a thread pool (executor service). This is a more complicated approach, as you may need to coordinate the starting of your tasks, and there are no standard means to do so. One way to coordinate small tasks is the actor execution model, but it is impossible to say beforehand whether the actor model fits your needs.
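For illustration, here is one hedged way to express that hand-off with two executor services; readFile() and crunch() are hypothetical stand-ins for the I/O and CPU parts, and the pool sizes are assumptions:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IoCpuPipeline {
    public static void main(String[] args) {
        // Many threads for blocking I/O, roughly one thread per core for CPU work.
        ExecutorService ioPool  = Executors.newFixedThreadPool(16);
        ExecutorService cpuPool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        for (String file : List.of("a.dat", "b.dat", "c.dat")) {
            ioPool.submit(() -> {
                byte[] data = readFile(file);       // I/O stage, mostly blocking
                cpuPool.submit(() -> crunch(data)); // hand the result off to the CPU stage
            });
        }

        ioPool.shutdown(); // previously submitted I/O tasks still run to completion
        // cpuPool would be shut down once all CPU stages have been submitted and finished.
    }

    private static byte[] readFile(String name) { return new byte[0]; } // placeholder
    private static void crunch(byte[] data) { }                         // placeholder
}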
I have this "ugly" Java code I need to convert to Kotlin idiomatic coroutines and I cant quite figure out how.
Thread[] pool = new Thread[2 * Runtime.getRuntime().availableProcessors()];
for (int i = 0; i < pool.length; i++)
    pool[i] = new Thread() {
        public void run() {
            int y;
            while ((y = yCt.getAndIncrement()) < out.length) putLine(y, out[y]);
        }
    };
for (Thread t : pool) t.start();
for (Thread t : pool) t.join();
I think it is possible to implement using runBlocking but how do I deal with availableProcessors count?
I'll make some assumptions here:
putLine() is CPU-intensive and not an I/O operation. I assume this because it is executed using a number of threads equal to 2 * CPU cores, which is typical for CPU-intensive tasks.
We just need to execute putLine() for each item in out. From the above code it is not clear whether e.g. yCt is initially 0.
out isn't huge, e.g. millions of items.
You aren't looking for 1:1 the same code in Kotlin, but rather for its equivalent.
Then the solution is really very easy:
coroutineScope {
out.forEachIndexed { index, item ->
launch(Dispatchers.Default) { putLine(index, item) }
}
}
A few words of explanation:
Dispatchers.Default coroutine dispatcher is used specifically for CPU calculations and its number of threads depends on the number of CPU cores. We don't need to create our own threads, because coroutines provide a suitable thread pool.
We don't handle a queue of tasks manually, because coroutines are lightweight and we can instead just schedule a separate coroutine per each item - they will be queued automatically.
coroutineScope() waits for its children, so we don't need to also manually wait for all asynchronous tasks. Any code put below coroutineScope() will be executed when all tasks finish.
There are some differences in behavior between the Java/threads and Kotlin/coroutines code:
Dispatchers.Default by default has a number of threads equal to the number of CPU cores, not 2 * CPU cores.
In the coroutines solution, if any task fails, the whole operation throws an exception. In the original code, an error in one thread is ignored and the application continues, possibly in an inconsistent state.
In the coroutines solution the thread pool is shared with other components of the application. This may or may not be the desired behavior.
I have a service which schedules async tasks using a ScheduledExecutorService for each user. Each user triggers the service to schedule two tasks (the 1st task schedules the 2nd task with a fixed delay, such as a 10 second interval).
Pseudocode illustration:
task1Future = threadPoolTaskScheduler.schedule(task1);
for (int i = 0; i < 10000; ++i) {
    task2Future = threadPoolTaskScheduler.schedule(task2);
    task2Future.get(); // Takes a long time
    Thread.sleep(10);
}
task1Future.get();
Suppose I potentially have 10,000 users using the service at the same time. There are two possible kinds of ScheduledExecutorService configuration for my service:
A single ScheduledExecutorService for all the users.
Create a ScheduledExecutorService for each user.
What I can think of for the first method:
Pros:
Easy to control the number of threads in the thread pool.
Avoid creating new threads for scheduled tasks.
Cons:
Always keeping a number of threads available could waste computing resources.
May cause the service to hang for lack of available threads. (For example, with the thread pool size set to 10 and 100 people using the service at the same time, each 1st task tries to schedule its 2nd task and finds no thread available for scheduling it.)
What I can think of for the second method:
Pros:
Avoids always keeping many threads available when the number of users is small.
Can always provide threads for a large number of simultaneous users.
Cons:
Creating new threads adds overhead.
No obvious way to control the maximum number of threads for the service; it may run out of memory.
Any ideas about which way is better?
Single ScheduledExecutorService drives many tasks
The entire point of a ScheduledExecutorService is to maintain a collection of tasks to be executed after a certain amount of time elapses.
So given the scenario you describe, you need only a single ScheduledExecutorService object. Submit your 10,000 tasks to that one object. Each task will be executed approximately when its designated delay elapses. Simple, and easy.
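A minimal sketch of that arrangement; the pool size of 12, the 10-second delay, and the work done per task are arbitrary assumptions:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OneSchedulerForAll {
    public static void main(String[] args) {
        // One scheduler shared by all users; pool size chosen only for illustration.
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(12);

        for (int user = 0; user < 10_000; user++) {
            final int id = user;
            // Each task waits in the scheduler's internal queue until its delay elapses.
            ses.schedule(() -> System.out.println("task for user " + id), 10, TimeUnit.SECONDS);
        }
        // Call ses.shutdown() when the service itself shuts down.
    }
}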
Thread pool size
The real issue is deciding how many threads to assign to the ScheduledExecutorService.
Threads, as currently implemented in the OpenJDK project, are mapped directly to host OS threads. This makes them relatively heavyweight in terms of CPU and memory usage. In other words, currently Java threads are “expensive”.
There is no simple, easy answer for calculating thread pool size. The optimal number is the smallest number of threads that can keep up with the workload without over-burdening the host machine’s limited number of cores and limited memory. If you search Stack Overflow, you’ll find many discussions on the topic of deciding how many threads to use in a pool.
Project Loom
And keep tabs with the progress of Project Loom and its promise to bring virtual threads to Java. That technology has the potential to radically alter the calculus of deciding thread pool size. Virtual threads will be more efficient with CPU and with memory. In other words, virtual threads will be quite “cheap”, “inexpensive”.
How executor service works
You said:
entering the 1st task and it tries to schedule the 2nd task, then finding out there is no thread available for scheduling the 2nd task
That is not how the scheduled executor service (SES) works.
If a task being currently executed by a SES needs to schedule itself or some other task to later execution, that submitted task is added to the queue maintained internally by the SES. There is no need to have a thread immediately available. Nothing happens immediately except that queue addition. Later, when the added task’s specified delay has elapsed, the SES looks for an available thread in its thread-pool to execute that task that was queued a while back in time.
You seem to feel a need to manage the time of each task’s execution on certain threads. But that is the job of the scheduled executor service. The SES tracks the tasks submitted for execution, notices when their specified delay elapses, and schedules their execution on a thread from its managed pool of threads. You don’t need to manage any of that. Your only challenge is to assign an appropriate number of threads to the pool.
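To make that concrete, here is a hedged sketch of a task scheduling its follow-up on the same scheduler; no thread has to be free at the moment schedule() is called, only later when the delay elapses (the pool size and delays are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfScheduling {
    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(4);

        Runnable task1 = () -> {
            System.out.println("task 1 running");
            // This call only adds task 2 to the scheduler's queue; it runs about
            // 10 seconds later on whichever pool thread is free at that time.
            ses.schedule(() -> System.out.println("task 2 running"), 10, TimeUnit.SECONDS);
        };

        ses.schedule(task1, 1, TimeUnit.SECONDS);
        // The pool's threads keep the JVM alive until ses.shutdown() is eventually called.
    }
}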
Multiple executor services
You commented:
why don't use multiple ScheduledExecutorService instances
Because in your scenario, there is no benefit. Your Question implies that you have many tasks all similar with none being prioritized. In such a case, just use one executor service. One scheduled executor service with 12 threads will get the same amount of work accomplished as 3 services with 4 threads each.
As for excess threads, they are not a burden. Any thread without a task to execute uses virtually no CPU time. A pool may or may not choose to close some unused threads after a while. But such a policy is up to the implementation of the thread pool of the executor service, and is transparent to us as calling programmers.
If the scenario were different, where some of the tasks block for long periods of time, or where you need to prioritize certain tasks, then you may want to segregate those into a separate executor service.
In today's Java (before Project Loom with virtual threads), when code in a thread blocks, that thread sits there doing nothing but waiting to unblock. Blocking means your code is performing an operation that awaits a response. For example, making network calls to a socket or web service blocks, writing to storage blocks, and accessing an external database blocks. Ideally, you would not write code that blocks for long periods of time. But sometimes you must.
In such a case where some tasks run long, or conversely you have some tasks that must be prioritized for fast execution, then yes, use multiple executor services.
For example, say you have a 16-core machine with not much else running except your Java app. You might have one executor service with a thread pool size of 4 maximum for long-running tasks, one executor service with a thread pool with a size of 7 maximum for many run-of-the-mill tasks, and a third executor service with a thread pool maximum size of 2 for very few tasks that run short but must run quickly. (The numbers here are arbitrary examples, not a recommendation.)
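A sketch of that split, using the same arbitrary pool sizes as the example above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SegregatedPools {
    public static void main(String[] args) {
        // Sizes mirror the arbitrary example above; they are not a recommendation.
        ExecutorService longRunningPool = Executors.newFixedThreadPool(4); // long or blocking tasks
        ExecutorService generalPool     = Executors.newFixedThreadPool(7); // run-of-the-mill tasks
        ExecutorService urgentPool      = Executors.newFixedThreadPool(2); // short tasks that must start quickly

        longRunningPool.submit(() -> System.out.println("slow, blocking work"));
        generalPool.submit(() -> System.out.println("ordinary work"));
        urgentPool.submit(() -> System.out.println("short but urgent work"));

        longRunningPool.shutdown();
        generalPool.shutdown();
        urgentPool.shutdown();
    }
}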
Other approaches
As commented, there are other frameworks for managing concurrency. The ScheduledExecutorService discussed here is general purpose.
For example, Swing, JavaFX, Spring, and Jakarta EE each have their own concurrency management. Consider using those where appropriate to your particular project.
First of all, I could not determine what the title should be, so if it's not specific enough, the question itself will be.
We have an application that uses a foreground service and stays alive forever, and in this service there are frequent database access jobs, network access jobs, and more that need to run on background threads. Each job takes only a small amount of time, but the jobs are frequent. Obviously they need to run on worker threads, so I'm here to ask which design we should follow.
HandlerThread is a structure that creates a single thread and uses a queue to execute tasks, but it always loops and waits for messages, which consumes power. ThreadPoolExecutor creates multiple threads for the jobs and deletes threads when the jobs are done, but with too many threads there could be leaks, or even out-of-memory errors. The job count may be 5, or it may be 20, depending on how the user acts. And between 2 jobs there can be a 5 second gap or a day-long gap, totally depending on the user. But remember, the application stays alive forever and waits for these jobs to execute.
So, for this specific occasion, which one is better to use? A thread pool executor or a handler thread? Any advice is appreciated, thanks.
Caveat: I do not do Android work, so I am no expert there. My opinions here are based a quick reading of Android documentation.
tl;dr
➥ Use Executors rather than HandlerThread.
The Executors framework is more modern, flexible, and powerful than the legacy Thread facility used by HandlerThread. Everything you can do in HandlerThread you can do better with executors.
Differences
One big difference between HandlerThread and ThreadPoolExecutor is that the first comes from Android while the second comes from Java. So if you'll be doing other work with Java, you might not want to get in the habit of using HandlerThread.
Another big difference is age. The android.os.HandlerThread class inherits from java.lang.Thread, and dates back to the original Android API level 1. While nice for its time, the Thread facility in Java is limited in its design. That facility was supplanted by the more modern, flexible, and powerful Executors framework in later Java.
Executors
Your Question is not clear about whether these are recurring jobs or sporadically scheduled. Either can be handled with Executors.
For jobs that run once at a specific time, and for recurring scheduled jobs, use a ScheduledExecutorService. You can schedule a job to run once at a certain time by specifying a delay, a span of time to wait until execution. For repeated jobs, you can specify an amount to wait, then run, then wait, then run, and so on. I'll not address this further, as you seem to be talking about sporadic immediate jobs rather than scheduled or repeating jobs. If interested, search Stack Overflow as ScheduledExecutorService has been covered many times already on Stack Overflow.
Single thread pool
HandlerThread is a structure that creates a singular thread
If you want to recreate that single thread behavior, use a thread pool consisting of only a single thread.
ExecutorService es = Executors.newSingleThreadExecutor() ;
Make your tasks by implementing either Runnable or Callable, either (a) with a class implementing the interface, or (b) without defining a named class, via lambda syntax or conventional (anonymous class) syntax.
Conventional syntax.
Runnable sayHelloJob = new Runnable()
{
    @Override
    public void run ( )
    {
        System.out.println( "Hello. " + Instant.now() );
    }
};
Lambda syntax.
Runnable sayBonjourJob = ( ) -> System.out.println( "Bonjour. " + Instant.now() );
Submit as many of these jobs to the executor service as you wish.
es.submit( sayHelloJob ) ;
es.submit( sayBonjourJob ) ;
Notice that the submit method returns a Future. With that Future object, you can check if the computation is complete, wait for its completion, or retrieve the result of the computation. Or you may choose to ignore the Future object as seen in the code above.
Fixed thread pool
If you want multiple thread behavior, just create your executor with a different kind of thread pool.
A fixed thread pool has a maximum number of threads servicing a single queue of submitted jobs (Runnable or Callable objects). The threads continue to live, and are replaced as needed in case of failure.
ExecutorService es = Executors.newFixedThreadPool( 3 ) ; // Specify number of threads.
The rest of the code remains the same. That is the beauty of using the ExecutorService interface: you can change the implementation of the executor service to get different behavior while not breaking your code that calls upon that executor service.
Cached thread pool
Your needs may be better served by a cached thread pool. Rather than immediately creating and maintaining a certain number of threads as the fixed thread pool does, this pool creates threads only as needed. When a thread has been idle for over a minute, it is terminated. As the Javadoc notes, this is ideal for “many short-lived asynchronous tasks” such as yours. But notice that there is no upper limit on the number of threads that may be running simultaneously. If the nature of your app is such that you often see spikes of many jobs arriving simultaneously, you may want to use a different implementation than the cached thread pool.
ExecutorService es = Executors.newCachedThreadPool() ;
Managing executors and threads
but because of too many threads there could be leaks, or out-of-memory even
It is the job of you the programmer and your sysadmin not to overburden the production server. You need to monitor performance in production. The management is easy enough to perform, as you control the number of threads available in the thread pool backing your executor service.
We have an application that uses a foreground service and stays alive forever
Of course your app does eventually come to end, being shutdown. When that happens, be sure to shutdown your executor and its backing thread pool. Otherwise the threads may survive, and continue indefinitely. Be sure to use the life cycle hooks of your app’s execution environment to detect and react to the app shutting down.
The job count may be 5, or it may be 20, depending on how the user acts in a certain way.
Jobs submitted to an executor service are buffered up until they can be scheduled on a thread for execution. So you may have a thread pool of, for example, 3 threads and 20 waiting jobs. No problem. The waiting jobs will be eventually executed when their time comes.
You may want to prioritize certain jobs, to be done ahead of lower priority jobs. One easy way to do this is to have two executor services. Each executor has its own backing thread pool. One executor is for the fewer but higher-priority jobs, while the other executor is for the many lower-priority jobs.
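A hedged sketch of that two-executor arrangement; the pool sizes are assumptions, not a recommendation:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PriorityPools {
    public static void main(String[] args) {
        // Few but urgent jobs get their own small pool; bulk jobs share a larger one.
        ExecutorService highPriorityPool = Executors.newSingleThreadExecutor();
        ExecutorService lowPriorityPool  = Executors.newFixedThreadPool(3);

        highPriorityPool.submit(() -> System.out.println("urgent job"));
        lowPriorityPool.submit(() -> System.out.println("routine database or network job"));

        highPriorityPool.shutdown();
        lowPriorityPool.shutdown();
    }
}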
Remember that threads in a thread pool doing no work, on stand-by, have virtually no overhead in Java for either CPU or memory. So there is no downside to having a special higher-priority executor service sitting around and waiting for eventual jobs to arrive. The only concern is that your total number of all background threads and their workload not overwhelm your machine. Also, the implementation of the thread pool may well shut down unused threads after a period of disuse.
I don't really think it's a question of the number of threads you are running, more how you want them run. If you want them run one at a time (i.e. you only want to execute one database query at a time) then use a HandlerThread. If you want multi-threading / a pool of threads, then use an Executor.
In my experience, leaks really come down to how you have coded your threads, not to the chosen implementation.
Personally, I'd use a HandlerThread, here's a nice article on implementing them and how to avoid memory leaks ... Using HandlerThread in Android
I have only two short-lived tasks to run in the background upon the start of the application. Would it make sense to use a thread for each task, or an Executor, for instance a single-thread executor, to submit these two tasks?
Does it make sense to create two threads that die quickly, as opposed to having a single-threaded executor waiting for tasks throughout the lifecycle of the application when there are none?
One big benefit of using a thread pool is that you avoid the scenario where you have some task that you perform repeatedly and, if something goes wrong with it that causes the thread to hang, you risk losing a thread every time the task runs, eventually running the application out of threads. If your threads only run once on startup, that risk likely doesn't apply to your case.
You could still use an Executor, but shut it down once your tasks have both run. It might be preferable to use Futures or a CompletionService over raw threads.
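For instance, a hedged sketch of that approach, submitting the two startup tasks, waiting on their Futures, and then shutting the executor down:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StartupTasks {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(2);

        Future<?> first  = es.submit(() -> System.out.println("first startup task"));
        Future<?> second = es.submit(() -> System.out.println("second startup task"));

        first.get();   // wait for completion and surface any exception
        second.get();

        es.shutdown(); // no more tasks; let the pool's threads exit
    }
}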
If you do this more than once in your application, ThreadPoolExecutor is definitely worth a look.
One benefit is the pooling of threads. This relieves the runtime of creating and destroying OS thread objects every time you need a thread. Additionally, you get control over the number of threads spawned (though this does not seem to be the big issue for you) and over which threads are running or done.
But if you really only spawn two threads over the runtime of your application, executors may be oversized; nevertheless they are very comfortable to work with.
Since Nathan mentioned Futures: there are also Timer and TimerTask, also very convenient for "fire and forget" background actions :-).
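A small sketch of the Timer/TimerTask style, for comparison; the 5-second delay is arbitrary:

import java.util.Timer;
import java.util.TimerTask;

public class FireAndForget {
    public static void main(String[] args) {
        Timer timer = new Timer();         // one background thread
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("background action");
                timer.cancel();            // stop the timer thread once done
            }
        }, 5_000);                         // run once, after 5 seconds
    }
}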
This question already has answers here:
When should we use Java's Thread over Executor?
In Java, both of the following code snippets can be used to quickly spawn a new thread for running some task.
This one uses Thread:
new Thread(new Runnable() {
    @Override
    public void run() {
        // TODO: Code goes here
    }
}).start();
And this one uses an Executor:
Executors.newSingleThreadExecutor().execute(new Runnable() {
    @Override
    public void run() {
        // TODO: Code goes here
    }
});
Internally, what is the difference between these two pieces of code, and which one is the better approach?
Just in case, I'm developing for Android.
Now I think I was actually looking for use cases of newSingleThreadExecutor(). Exactly this was asked and answered in this question:
Examples of when it is convenient to use Executors.newSingleThreadExecutor()
Your second example is strange; creating an executor just to run one task is not good usage. The point of having the executor is that you can keep it around for the duration of your application and submit tasks to it. It will work, but you're not getting the benefits of having the executor.
The executor can keep a pool of threads handy that it can reuse for incoming tasks, so that each task doesn't have to spin up a new thread, or if you pick the singleThread one it can enforce that the tasks are done in sequence and not overlap. With the executor you can better separate the individual tasks being performed from the technical implementation of how the work is done.
With the first approach, where you create a thread yourself, if something goes wrong with your task the thread can in some cases get leaked: it gets hung up on something, never finishes its task, and the thread is lost to the application and anything else using that JVM. Using an executor can put an upper bound on the number of threads you lose to this kind of error, so at least your application degrades gracefully and doesn't impair other applications using the same JVM.
Also, with the thread approach each thread you create has to be tracked separately (so that, for instance, you can interrupt them once it's time to shut down the application); with the executor you can shut the executor down once and let it handle its threads itself.
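A hedged sketch of that pattern: one executor created up front, reused for tasks as they arrive, and shut down once at the end (the pool size of 4 is an arbitrary assumption):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReusedExecutor {
    // One pool kept for the life of the application.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            POOL.submit(() -> System.out.println("task " + taskId)); // pool threads are reused
        }
        POOL.shutdown(); // one shutdown call instead of tracking individual threads
    }
}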
The second, using an ExecutorService, is definitely the better approach.
ExecutorService determines how you want your tasks to run concurrently. It decouples the Runnables (or Callables) from their execution.
When using Thread, you couple the tasks with how you want them to be executed, giving you much less flexibility.
Also, ExecutorService gives you a better way of tracking your tasks and getting a return value via Future, while the start method of Thread just runs without giving any information back. Thread therefore encourages you to code side effects in the Runnable, which may make the overall execution harder to understand and debug.
Also, a Thread is a costly resource, and ExecutorService can handle thread lifecycles, reusing threads to run new tasks or creating new ones depending on the strategy you defined. For instance, Executors.newSingleThreadExecutor() creates a ThreadPoolExecutor with only one thread that sequentially executes the tasks passed to it, while Executors.newFixedThreadPool(8) creates a ThreadPoolExecutor with 8 threads, allowing a maximum of 8 tasks to run in parallel.
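For example, a hedged sketch using a Callable and its Future to get a result back, something Thread.start() does not offer directly:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureResult {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(8); // up to 8 tasks in parallel

        Callable<Integer> compute = () -> 6 * 7;  // a task that returns a value
        Future<Integer> answer = es.submit(compute);

        System.out.println(answer.get());         // blocks until the result is ready: 42
        es.shutdown();
    }
}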
You already have three answers, but I think this question deserves one more because none of the others talk about thread pools and the problem that they are meant to solve.
A thread pool (e.g., java.util.concurrent.ThreadPoolExecutor) is meant to reduce the number of threads that are created and destroyed by a program.
Some programs need to continually create and destroy new tasks that will run in separate threads. One example is a server that accepts connections from many clients, and spawns a new task to serve each one.
Creating a new thread for each new task is expensive; in many programs, the cost of creating the thread can be significantly higher than the cost of performing the task. Instead of letting a thread die after it has finished one task, wouldn't it be better to use the same thread over again to perform the next one?
That's what a thread pool does: It manages and re-uses a controlled number of worker threads, to perform your program's tasks.
Your two examples show two different ways of creating a single thread that will perform a single task, but there's no context. How much work will that task perform? How long will it take?
The first example is a perfectly acceptable way to create a thread that will run for a long time---a thread that must exist for the entire lifetime of the program, or a thread that performs a task so big that the cost of creating and destroying the thread is not significant.
Your second example makes no sense, though, because it creates a thread pool just to execute one Runnable. Creating a thread pool for one Runnable (or worse, for each new task) completely defeats the purpose of the thread pool, which is to re-use threads.
P.S.: If you are writing code that will become part of some larger system, and you are worried about the "right way" to create threads, then you probably should also learn what problem the java.util.concurrent.ThreadFactory interface was meant to solve.
Google is your friend.
According to the documentation of ThreadPoolExecutor:
Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks. Each ThreadPoolExecutor also maintains some basic statistics, such as the number of completed tasks.
The first approach is suitable if I want to spawn a single background task and for small applications.
I would prefer the second approach for a controlled thread execution environment. If I use a single-thread ThreadPoolExecutor, I am sure that only 1 thread will be running at a time, even if I submit more tasks to the executor. Such cases tend to arise in large enterprise applications, where the threading logic is not exposed to other modules. In a large enterprise application, you want to control the number of concurrently running threads. So the second approach is preferable if you are designing enterprise or large-scale applications.
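A short sketch of that single-thread guarantee: tasks submitted to newSingleThreadExecutor() run one after another, in submission order, no matter how many are queued:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SequentialExecution {
    public static void main(String[] args) {
        ExecutorService es = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            // All five tasks share the single worker thread and never overlap.
            es.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }
        es.shutdown();
    }
}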