In my app I need to execute different future tasks.
My call would be something like:
public Item getTaskResult() {
    // creating the task object named task
    Executors.newCachedThreadPool().execute(task);
    ....
}
Is it wrong to just call Executors.newCachedThreadPool() like this?
Should I keep a reference to it? Am I wasting resources by doing it my way?
You should probably have only one CachedThreadPool in your whole application. That way you share the resources associated with the pool and get better thread re-use.
Creating a thread pool every time is a costly operation. Therefore create it once and use it as much as you want.
Think of it this way: in your house, would you build a new swimming pool every time you need to swim? Create just one CachedThreadPool and use it.
The worst problem with the code you show is that it has a resource leak. The thread pool will not be automatically closed, and its threads killed, just because it has become unreachable. You may observe your thread count growing without bound, until you finally get an OutOfMemoryError: unable to create new native thread.
You can legally submit a task to a new thread pool and immediately call shutdown on it. This will work correctly, even though it is not the most performant option.
On a different level of approaching this issue, thread pools are not designed to be used in such an ephemeral fashion. You are degrading the pool to what a raw Thread instance would do, whereas the main point of using a thread pool is... well, pooling threads, which are expensive system resources. This is why a global singleton is the preferred approach to using an ExecutorService.
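For illustration, a minimal sketch of such a singleton holder (the class name TaskService and the choice of a cached pool are assumptions for the example, not something from the original post):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class TaskService {
    // one shared pool for the whole application
    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    private TaskService() { }

    public static ExecutorService pool() {
        return POOL;
    }

    // call exactly once at application shutdown so the worker threads are released
    public static void shutdown() {
        POOL.shutdown();
    }
}

Your getTaskResult() would then call TaskService.pool().execute(task) (or submit(task) if it needs a Future) instead of creating a new pool on every call.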
Related
I know the thread pool is a good thing because it can reuse threads and thus save the cost of creating new threads. But my question is, are there any disadvantages of using a thread pool? In which situation is using a thread pool not as good as using just individual threads?
In which situation is using a thread pool not as good as using just individual threads?
The only time I can think of is when you have a single thread that only needs to do a single task for the life of your program. Something like a background thread attached to a permanent cache or something. That's about the only time I fork a thread directly as opposed to using an ExecutorService. Even then, using Executors.newSingleThreadExecutor() would be fine. The overhead of the thread pool itself is maybe a bit more logic and some memory, but it's very hard to see a pressing downside.
Certainly any time you need multiple threads to perform tasks, a thread pool is warranted. What the ExecutorService code does is reduce the amount of code you need to write to manage the threads. The improvements in readability and code maintainability are a big win.
A thread pool is suitable only for operations that take little time to complete. Thread-pool threads are not suitable for long-running operations, as that can easily lead to thread starvation.
If you require your thread to have a specific priority, a thread-pool thread is not suitable either.
The same goes for tasks that block the thread for long periods of time: the pool has a maximum number of threads, so a large number of blocked pool threads might prevent other tasks from starting.
You've got a bunch of different answers here. I think one reason for that is the question is incomplete. You are asking for "disadvantages of using a thread pool," but you didn't say, disadvantages compared to what?
A thread pool solves a particular problem. There are other problems where "thread" or "threads" is part of the solution, but "thread pool" is not. "Thread pool" is usually the answer when the question is how to achieve parallel execution of many short-lived, CPU-intensive tasks on a multi-processor system.
Threads are useful, even on a uni-processor, for other purposes. The first question I ask about any long-running thread, for example, is "what does it wait for?" Threads are an excellent tool for organizing a program that has to wait for different kinds of events. You would not use a thread pool for that, though.
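For instance, a sketch of such a dedicated, long-lived thread that just waits for events (the class and queue names are made up for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class EventLoop {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

    void start() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    events.take().run();   // blocks until an event arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // stop when interrupted
            }
        }, "event-loop");
        t.setDaemon(true);
        t.start();
    }

    void post(Runnable event) {
        events.add(event);
    }
}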
In addition to Gray's answer:
Another use case is when you use thread-locals, use the thread as a key in some kind of hash table, or have a stateful custom thread implementation. In that case you have to take care of cleaning up the state once a particular task has finished with the thread, even if it failed. Otherwise surprises are possible: the next task to run on a thread carrying leftover state can start behaving incorrectly.
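A minimal sketch of that cleanup, assuming a hypothetical per-task ThreadLocal called CURRENT_USER (not from the original answer):

import java.util.concurrent.ExecutorService;

class ThreadLocalCleanupExample {
    static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void submit(ExecutorService pool, String user, Runnable work) {
        pool.execute(() -> {
            CURRENT_USER.set(user);
            try {
                work.run();
            } finally {
                CURRENT_USER.remove();  // clear even on failure, so the next task on this thread starts clean
            }
        });
    }
}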
Thread pools of limited size are dangerous if the tasks running on them exchange information via blocking queues; this may cause thread starvation (see "What is starvation?"). A good rule is to never use blocking operations in tasks running on a thread pool.
Threads are better when you don't plan to stop using the thread, for instance in an infinite loop. Thread pools are best when you run many tasks that don't all happen at the same time; especially when the tasks are short, the overhead savings and the clarity of reusing the same threads are bigger.
It depends on the situation in which you are going to use the thread pool. For example, if your system does not need to perform tasks in parallel, a thread pool would be of no use; it would keep unnecessary threads ready for work that will never come. In such cases you can use a SingleThreadExecutor anyway. Check this link if you haven't already; it may clarify things: Thread Pool Pattern
Is it possible to have one thread pool for my whole program so that the threads are reused, or do I need to make the ExecutorService global / pass it to all objects using it?
To be more precise I have multiple tasks that run in my program but they do not run extremely often.
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);
I believe it would be unnecessary to have a full thread running all the time for every single task, but it might also be costly to restart a thread every single time a task is executed.
Is there a better alternative to making the Thread pool global?
How do I reuse Threads with different ExecutorService objects?
It is not possible to re-use threads across different ExecutorService thread-pools. You can certainly submit vastly different types of Runnable classes to a common thread-pool however.
Is there a better alternative to making the Thread pool global?
I don't see a problem with a "global" thread-pool in your application. Someone needs to know when to call shutdown() on it of course but that's the only problem I see with it. If you have a lot of disparate classes which are submitting tasks, they all could access this set (or 1) of common background threads.
You may find however that different tasks may want to use a cached thread pool while others need a fixed sized pool so that multiple pools are still necessary.
I believe it would be unnecessary to have a full thread running all the time for every single task, but it might also be costly to restart a thread every single time a task is executed.
In general, unless you are forking tons and tons of threads, the relative cost of starting one up every so often is relatively small. Unless you have evidence from a profiler or some other source, this may be premature optimization.
With Java 8 there is a new option.
The global fork/join common pool:
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html#commonPool--
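For example (a small sketch; the computation is just a placeholder):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

class CommonPoolExample {
    public static void main(String[] args) {
        // Runs on the shared, JVM-wide pool; nothing to create or shut down yourself.
        CompletableFuture<Integer> result =
                CompletableFuture.supplyAsync(() -> 6 * 7, ForkJoinPool.commonPool());

        System.out.println(result.join());  // prints 42
    }
}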
I am trying to use both InheritableThreadLocal and a ThreadPoolExecutor.
This breaks down because ThreadPoolExecutor reuses threads for each pool (it is a pool, after all), meaning the InheritableThreadLocal doesn't work as expected. Now the problem seems obvious to me, but it was particularly snarly to track down.
I use InheritableThreadLocal so that each of several top-level processes has its own database connection for itself and any sub-processes it spawns. I don't just use one shared connection pool because each top-level process will do a lot of multi-step work with its connection before committing to the database and/or preparing a lot of PreparedStatements that are used over and over.
I use a shared ThreadPoolExecutor between these top-level processes because there are certain behaviors that need to be gated. e.g. Even though I might have 4 top-level processes running, I can only have any one process writing to the database at a time (or the system needs to gate on some other shared resource). So I'll have the top-level process create a Runnable and send it to the shared ThreadPoolExecutor to make sure that no more than one (or two or three as the case may be) are running at the same time across the entire system.
The problem is that because the ThreadPoolExecutor reuses its threads for the pools, the InheritableThreadLocal is picking up the original value that was run in that pool rather than the value that was in the top-level process which sent the Runnable to the ThreadPoolExecutor.
Is there any way to force the worker pool in the ThreadPoolExecutor to use the InheritableThreadLocal value that was in the context of the process which created the Runnable rather than in the context of the reused thread pool?
Alternatively, is there any implementation of ThreadPoolExecutor that creates a new thread each time it starts a new Runnable? For my purposes I only care about gating the number of simultaneously running threads to a fixed size.
Is there any other solution or suggestion people have for me to accomplish what I've described above?
(While I realize I could solve the problem by passing around the database connection from class to class to subthread to subthread like some kind of community bicycle, I'd like to avoid this.)
There is a previous question on StackOverflow, InheritableThreadLocal and thread pools, that addresses this issue as well. However, the solution to that problem seems to be that it's a poor use case for InheritableThreadLocal, which I do not think applies to my situation.
Thanks for any ideas.
Using InheritableThreadLocal is almost surely wrong. You probably wouldn't have asked the question if that bizarre tool actually fit your use case.
First and foremost, it's horribly leak-prone, and the values often escape into totally unrelated threads.
As for associating the Runnable with a context:
Override public void execute(Runnable command) of the ThreadPoolExecutor and wrap the Runnable in something that carries the value you want from the InheritableThreadLocal in the first place.
The wrapping class would look something like this:
class WrappedRunnable implements Runnable {
    static final ThreadLocal<Ctx> CTX = new ThreadLocal<>();

    final Ctx context;
    final Runnable target;

    WrappedRunnable(Ctx context, Runnable target) {
        this.context = context;
        this.target = target;
    }

    public void run() {
        CTX.set(context);   // install the submitter's context on the pool thread
        try {
            target.run();
        } finally {
            CTX.remove();   // always clear it so the next task doesn't inherit stale state
        }
    }
}
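A rough sketch of the matching execute override (assuming a ThreadPoolExecutor subclass and a hypothetical Ctx.current() accessor for reading the submitter's context; neither is from the original post):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ContextPropagatingExecutor extends ThreadPoolExecutor {
    ContextPropagatingExecutor(int nThreads) {
        super(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    public void execute(Runnable command) {
        // Capture the context on the submitting thread, so whichever pool thread
        // eventually runs the task sees the submitter's value.
        Ctx callerContext = Ctx.current();   // hypothetical accessor
        super.execute(new WrappedRunnable(callerContext, command));
    }
}

Since submit() in AbstractExecutorService delegates to execute(), wrapping here also covers tasks submitted for a Future.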
Alternatively, is there any implementation of ThreadPoolExecutor that creates a new thread each time it starts a new Runnable? For my purposes I only care about gating the number of simultaneously running threads to a fixed size.
While truly bad from a performance point of view, you can implement your own: basically you only need an execute(Runnable task) method on an Executor that spawns a new thread and starts it.
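Something as small as this would do (a sketch; it does no pooling at all, every task gets a fresh thread):

import java.util.concurrent.Executor;

class ThreadPerTaskExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        new Thread(command).start();   // no pooling: every submitted task runs on a brand-new thread
    }
}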
Instead of using a ThreadPoolExecutor to protect shared resources, why not use a java.util.concurrent.Semaphore? The sub tasks you create would run to completion in their own threads, but only after having acquired a permit from the semaphore, and of course releasing the permit when done.
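A rough sketch of that idea (the permit count and the GatedTask wrapper are assumptions for illustration):

import java.util.concurrent.Semaphore;

class GatedTask implements Runnable {
    // e.g. at most one concurrent writer to the shared resource
    private static final Semaphore DB_WRITE_PERMITS = new Semaphore(1);

    private final Runnable work;

    GatedTask(Runnable work) {
        this.work = work;
    }

    @Override
    public void run() {
        try {
            DB_WRITE_PERMITS.acquire();     // block until a permit is free
            try {
                work.run();
            } finally {
                DB_WRITE_PERMITS.release(); // always give the permit back
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}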
We had the same issue earlier and solved it by writing a ThreadLocalContextMigrator, which basically copies the thread-local context to the task that will be executed on a pool thread. While executing, the task collects more context info, and upon completion we copy it back.
Why not just pass the current connection on to any sub-tasks spawned by the main task? Maybe some sort of shared Context object?
Using Executors.newFixedThreadPool(int nThreads) is a nice way to minimize the overhead of creating too many threads, but it may lead to a deadlock when all pooled threads are waiting for another job which is itself waiting for a free thread from the pool. Sometimes the problem can be solved by using multiple thread pools, but sometimes it can't. I'm looking for something that behaves similarly to newFixedThreadPool, except that when all pooled threads are blocked, the pool should grow despite its predefined bound. Is there something like this?
Actually, the deadlock is not that important here. The real problem is "how to manage the number of running threads" rather than their total number. This can also be interesting when trying to keep the CPU fully utilized without creating needlessly many threads.
If you have a contention issue, this is a design problem. If you want a quick fix as you described, you will only be curing the symptoms, not the underlying sickness.
You should instead refactor your design to eliminate deadlock using some other means.
It's generally a bad idea to have threads in a pool blocked waiting for other threads in the same thread pool.
I would try to change the design to a non-blocking one. If a thread needs the result of another operation that is being processed by the same executor I would have it submit a task back to the executor to run after the second operation completes. Or place an object into a queue to be picked up later when the other job finishes.
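One way to express that without blocking, sketched with CompletableFuture (the step names are made up for illustration):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NonBlockingChain {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Instead of blocking a pool thread while waiting for stepOne,
        // schedule stepTwo to run on the same pool once stepOne completes.
        CompletableFuture
                .supplyAsync(NonBlockingChain::stepOne, pool)
                .thenApplyAsync(NonBlockingChain::stepTwo, pool)
                .thenAccept(System.out::println)
                .join();

        pool.shutdown();
    }

    static int stepOne() { return 21; }
    static int stepTwo(int x) { return x * 2; }
}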
Alternatively you can do what Swing does with modal dialogs and have the thread that is about to block start up a child thread to keep processing requests until the parent thread unblocks. This is tricky to get right, though, and would require you to manage the threads manually, which is a lot less safe than using an Executor.
Executors.newCachedThreadPool(): a cached thread pool will check whether there are any available threads. If there are, it re-uses one; if there aren't, it creates a new thread. The threads' keep-alive time is 60 seconds, so after 60 seconds of idleness the extra threads are terminated.
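That behaviour follows from the parameters the factory method uses; per the javadoc, Executors.newCachedThreadPool() is equivalent to roughly this ThreadPoolExecutor configuration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class CachedPoolEquivalent {
    // What Executors.newCachedThreadPool() creates under the hood:
    static ExecutorService newCachedPool() {
        return new ThreadPoolExecutor(
                0,                       // no core threads kept when idle
                Integer.MAX_VALUE,       // grow as demand requires
                60L, TimeUnit.SECONDS,   // idle threads are terminated after 60 seconds
                new SynchronousQueue<>());
    }
}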
I need to make a program with a limited number of threads (currently using newFixedThreadPool), but I have the problem that all threads get created from the start, filling up memory at an alarming rate.
I wish to prevent this. Threads should only be created shortly before they are executed.
e.g.: I call the program and instruct it to use 2 threads in the pool. The program should create & launch the first 2 threads immediately (obviously), create the next 2 to wait for the previous 2, and at that point wait until one or both of the first 2 have finished executing.
I thought about extending Executor or FixedThreadPool or such, but I have no clue how to start there and doubt it is the best solution. The easiest would be to have my main thread sleep at intervals, which is not really good either...
Thanks in advance!
Have you tried taking a look at ThreadPoolExecutor? Using the right constructor parameters, you could easily tweak the number and keep-alive time of the created threads.
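For example, a sketch of a small pool whose threads are created on demand and die when idle (the sizes and timeout are placeholders, not recommendations):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class LazyPoolExample {
    static ThreadPoolExecutor newLazyPool() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                          // at most 2 worker threads, created only as tasks arrive
                30L, TimeUnit.SECONDS,         // idle workers die after 30 seconds
                new LinkedBlockingQueue<>());  // extra tasks wait here
        pool.allowCoreThreadTimeOut(true);     // even the core threads time out when idle
        return pool;
    }
}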
Looking at the details in your post...
I call the program and instruct it to use 2 threads in the pool. The program should create & launch the first 2 threads immediately (obviously), create the next 2 to wait for the previous 2, and at that point wait until one or both of the first 2 have finished executing.
Your problem is much more about synchronizing task execution than about pooling threads. From what you say here, you want to have 2 threads executing any number of tasks; if you don't want 100 jobs running at the same time, don't create a 100-thread pool...
I would suggest either using a BlockingQueue to control your Runnables, or creating a 2-thread pool using a ThreadPoolExecutor and feeding it all your tasks. It will execute them when threads are available.
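Roughly, the second option might look like this (a sketch; the task body is a placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class TwoWorkerExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Submit as many tasks as you like; only 2 run at a time,
        // the rest wait in the pool's internal queue.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() ->
                    System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();   // accept no new tasks; queued ones still run
    }
}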
Does that make sense with what you try to achieve here?
I don't think you should manipulate the thread pool implementation. If you create the threads shortly before execution, you lose the main benefit of the pool, which is that it recycles your threads.
Maybe you should reduce the maximum number of threads in the pool. If you instruct the pool to create too many of them, the total out-of-heap memory used for their stack spaces will consume all available memory. I assume that this is the kind of OutOfMemoryError you have (?).
If you're looking at this from a performance perspective, then it's best to take the hit in memory when you first start up the application than constantly get bombarded with allocating and deallocating memory while the program is running.
If it's using too much memory when you start the application, then it will still be too much memory later. You should throttle down the size of the thread pool.
There are additional benefits to using a thread pool, such as: if you lose a thread along the way, the pool will automatically create a new one to replace it, keeping your thread pool at a constant size.
If this isn't the type of benefit that you're looking for, then you may wish to handle the threads in memory manually, and avoid the thread pool.