I have a list of work-units and I want to process them in parallel. Each unit takes 8-15 seconds of pure computation, with no I/O blocking. What I want to achieve is an ExecutorService that:
has zero threads instantiated when there is no work to do
can dynamically scale up to 20 threads if needed
allows me to add all work-units at once (without blocking the submission)
Something like:
Queue<WorkResult> queue = new ConcurrentLinkedDeque<>();
ExecutorService service = ....
for (WorkUnit unit : list) {
    service.submit(() -> {
        // .. do some work ..
        queue.offer(result);
    });
}
while (queue.peek() != null) {
    // ... process results while they arrive ...
}
What I tried with no success is:
Using a newCachedThreadPool() creates too many threads
Then I used its internal call directly, new ThreadPoolExecutor(0, 20, 60L, SECONDS, new SynchronousQueue<>()), but then I noticed that submissions start failing (rejected) once all threads are busy, because of the synchronous queue
So I've used new LinkedBlockingQueue(), just to find out that the ThreadPoolExecutor spawns only one thread
I'm sure there is an official implementation that handles this very basic concurrency use-case.
Can someone advise?
Create the ThreadPoolExecutor using a LinkedBlockingQueue and 20 as corePoolSize (first argument in the constructor):
new ThreadPoolExecutor(20, 20, 60L, SECONDS, new LinkedBlockingQueue<>());
If you use the LinkedBlockingQueue without a predefined capacity, the Pool:
Won't ever check maxPoolSize.
Won't create more threads than corePoolSize's specified number.
In your case, only one thread will ever be used. And you're lucky to get even one: you set corePoolSize to 0, and older versions of Java (< 1.6) wouldn't create any thread at all when corePoolSize was 0 (how dare they?).
Later versions do create a new thread even if the corePoolSize is 0, which seems like... a fix that is... a bug that... changes... a logical behaviour?
Thread Pool Executor
Using an unbounded queue (for example a LinkedBlockingQueue without a
predefined capacity) will cause new tasks to wait in the queue when
all corePoolSize threads are busy. Thus, no more than corePoolSize
threads will ever be created. (And the value of the maximumPoolSize
therefore doesn't have any effect.)
About scaling down
In order to have all threads removed when there is no work to do, you have to let the core threads time out (by default, they never terminate). To achieve this, call allowCoreThreadTimeOut(true) before starting the pool.
Be careful to set a sensible keep-alive timeout: for example, if a new task arrives on average every 6 seconds, a keep-alive time of 5 seconds would lead to pointless destroy+create cycles (oh dear thread, you just had to wait one more second!). Base this timeout on the task arrival rate.
allowCoreThreadTimeOut
Sets the policy governing whether core threads may time out and
terminate if no tasks arrive within the keep-alive time, being
replaced if needed when new tasks arrive. When false, core threads are
never terminated due to lack of incoming tasks. When true, the same
keep-alive policy applying to non-core threads applies also to core
threads. To avoid continual thread replacement, the keep-alive time
must be greater than zero when setting true. This method should in
general be called before the pool is actively used.
TL/DR
Unbounded LinkedBlockingQueue as task queue.
corePoolSize replacing maxPoolSize's meaning.
allowCoreThreadTimeOut(true) in order to allow the Pool to scale down using a timeout based mechanism that also affects coreThreads.
keep-alive value set to something sensible based on the task arrival rate.
This fresh mix will give you an ExecutorService that 99.99999% of the time won't block the submitter (for that to happen, the number of queued tasks would have to reach 2,147,483,647), and that efficiently scales the number of threads with the workload, fluctuating (in both directions) between 0 and corePoolSize concurrent threads.
As a suggestion, the queue's size should be monitored, as the non-blocking behaviour has a price: the risk of an OutOfMemoryError if it keeps growing unchecked toward Integer.MAX_VALUE (e.g. if the threads are deadlocked for an entire day while submitters keep inserting tasks). Even if each task is small in memory, 2,147,483,647 objects with their corresponding link wrappers etc. is a lot of extra load.
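Putting the whole recipe together, here is a minimal sketch of the configuration described above, sized for the asker's 0-20 thread requirement (the factory class and method names are just illustrative):
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class ScalingPools {

    /** 0..20 threads, non-blocking submission, idle threads removed after 60s. */
    public static ThreadPoolExecutor newScalingPool() {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                20, 20,                        // corePoolSize == maximumPoolSize
                60L, TimeUnit.SECONDS,         // keep-alive, applied to core threads below
                new LinkedBlockingQueue<>());  // unbounded: submit() never blocks
        executor.allowCoreThreadTimeOut(true); // lets the pool shrink back to zero
        return executor;
    }
}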
The simplest way would be to use the method
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize)
of the Executors class. This gives you a simple out-of-the-box solution. The pool you get will expand and shrink as needed, and you can further configure it with the methods dealing with core-thread timeouts, etc. ScheduledExecutorService is an extension of the ExecutorService interface and is the only one that, out of the box, may dynamically expand and shrink.
Related
newCachedThreadPool() versus newFixedThreadPool()
When should I use one or the other? Which strategy is better in terms of resource utilization?
I think the docs explain the difference and usage of these two functions pretty well:
newFixedThreadPool
Creates a thread pool that reuses a
fixed number of threads operating off
a shared unbounded queue. At any
point, at most nThreads threads will
be active processing tasks. If
additional tasks are submitted when
all threads are active, they will wait
in the queue until a thread is
available. If any thread terminates
due to a failure during execution
prior to shutdown, a new one will take
its place if needed to execute
subsequent tasks. The threads in the
pool will exist until it is explicitly
shutdown.
newCachedThreadPool
Creates a thread pool that creates new
threads as needed, but will reuse
previously constructed threads when
they are available. These pools will
typically improve the performance of
programs that execute many short-lived
asynchronous tasks. Calls to execute
will reuse previously constructed
threads if available. If no existing
thread is available, a new thread will
be created and added to the pool.
Threads that have not been used for
sixty seconds are terminated and
removed from the cache. Thus, a pool
that remains idle for long enough will
not consume any resources. Note that
pools with similar properties but
different details (for example,
timeout parameters) may be created
using ThreadPoolExecutor constructors.
In terms of resources, the newFixedThreadPool will keep all its threads running until they are explicitly terminated. In the newCachedThreadPool, threads that have not been used for sixty seconds are terminated and removed from the cache.
Given this, resource consumption depends very much on the situation. For instance, if you have a huge number of long-running tasks I would suggest the FixedThreadPool. As for the CachedThreadPool, the docs say that "These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks".
Just to complete the other answers, I would like to quote Effective Java, 2nd Edition, by Joshua Bloch, chapter 10, Item 68:
"Choosing the executor service for a particular application can be tricky. If you're writing a small program, or a lightly loaded server, using Executors.newCachedThreadPool is generally a good choice, as it demands no configuration and generally "does the right thing." But a cached thread pool is not a good choice for a heavily loaded production server!
In a cached thread pool, submitted tasks are not queued but immediately handed off to a thread for execution. If no threads are available, a new one is created. If a server is so heavily loaded that all of its CPUs are fully utilized, and more tasks arrive, more threads will be created, which will only make matters worse.
Therefore, in a heavily loaded production server, you are much better off using Executors.newFixedThreadPool, which gives you a pool with a fixed number of threads, or using the ThreadPoolExecutor class directly, for maximum control."
If you look at the source (for example on grepcode), you will see that both factory methods call the ThreadPoolExecutor constructor internally and just set its properties. You can create your own to have better control over your requirements.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
The ThreadPoolExecutor class is the base implementation for the executors that are returned from many of the Executors factory methods. So let's approach Fixed and Cached thread pools from ThreadPoolExecutor's perspective.
ThreadPoolExecutor
The main constructor of this class looks like this:
public ThreadPoolExecutor(
int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler
)
Core Pool Size
The corePoolSize determines the minimum size of the target thread pool. The implementation would maintain a pool of that size even if there are no tasks to execute.
Maximum Pool Size
The maximumPoolSize is the maximum number of threads that can be active at once.
After the thread pool grows beyond the corePoolSize threshold, the executor can terminate idle threads and shrink back to corePoolSize.
If allowCoreThreadTimeOut is true, then the executor can even terminate core pool threads if they were idle more than keepAliveTime threshold.
So the bottom line is if threads remain idle more than keepAliveTime threshold, they may get terminated since there is no demand for them.
Queuing
What happens when a new task comes in and all core threads are occupied? The new tasks will be queued inside that BlockingQueue<Runnable> instance. When a thread becomes free, one of those queued tasks can be processed.
There are different implementations of the BlockingQueue interface in Java, so we can implement different queuing approaches (a short sketch follows this list):
Bounded Queue: New tasks would be queued inside a bounded task queue.
Unbounded Queue: New tasks would be queued inside an unbounded task queue. So this queue can grow as much as the heap size allows.
Synchronous Handoff: We can also use the SynchronousQueue to queue the new tasks. In that case, when queuing a new task, another thread must already be waiting for that task.
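As a quick illustration of those three choices (the capacity of 100 is arbitrary):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

class QueueChoices {
    // Bounded: at most 100 waiting tasks; further offer() calls return false.
    BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<>(100);

    // Unbounded: grows as long as the heap allows.
    BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();

    // Synchronous handoff: offer() succeeds only if a worker is already waiting.
    BlockingQueue<Runnable> handoff = new SynchronousQueue<>();
}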
Work Submission
Here's how the ThreadPoolExecutor executes a new task (a small demo follows these steps):
If fewer than corePoolSize threads are running, tries to start a
new thread with the given task as its first job.
Otherwise, it tries to enqueue the new task using the BlockingQueue#offer method. offer doesn't block: if the queue is full, it immediately returns false.
If it fails to queue the new task (i.e. offer returns false), then it tries to add a new thread to the thread pool with this task as its first job.
If it fails to add the new thread, then the executor is either shut down or saturated. Either way, the new task would be rejected using the provided RejectedExecutionHandler.
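A tiny experiment can make those four steps visible; the sizes below (1 core thread, 2 max, a one-slot queue) are chosen purely for the demonstration:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SubmissionFlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2,                          // 1 core thread, 2 max
                30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1));  // room for exactly one queued task
        Runnable sleepy = () -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) {}
        };
        pool.execute(sleepy); // step 1: fewer than corePoolSize -> new core thread
        pool.execute(sleepy); // step 2: core busy -> offer() succeeds, task queued
        pool.execute(sleepy); // step 3: offer() fails -> second (non-core) thread
        try {
            pool.execute(sleepy); // step 4: queue full, pool at max -> rejected
        } catch (RejectedExecutionException expected) {
            System.out.println("Fourth task rejected, as the steps predict");
        }
        pool.shutdown();
    }
}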
The main difference between the fixed and cached thread pools boils down to these three factors:
Core Pool Size
Maximum Pool Size
Queuing
+-----------+-----------+-------------------+---------------------------------+
| Pool Type | Core Size | Maximum Size | Queuing Strategy |
+-----------+-----------+-------------------+---------------------------------+
| Fixed | n (fixed) | n (fixed) | Unbounded `LinkedBlockingQueue` |
+-----------+-----------+-------------------+---------------------------------+
| Cached | 0 | Integer.MAX_VALUE | `SynchronousQueue` |
+-----------+-----------+-------------------+---------------------------------+
Fixed Thread Pool
Here's how the Executors.newFixedThreadPool(n) works:
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
As you can see:
The thread pool size is fixed.
If there is high demand, it won't grow.
If threads are idle for quite some time, it won't shrink.
Suppose all those threads are occupied with long-running tasks while the arrival rate is still pretty high. Since the executor is using an unbounded queue, it may consume a huge part of the heap; if we're unfortunate enough, we may experience an OutOfMemoryError.
When should I use one or the other? Which strategy is better in terms of resource utilization?
A fixed-size thread pool seems to be a good candidate when we're going to limit the number of concurrent tasks for resource management purposes.
For example, if we're going to use an executor to handle web server requests, a fixed executor can handle the request bursts more reasonably.
For even better resource management, it's highly recommended to create a custom ThreadPoolExecutor with a bounded BlockingQueue<T> implementation coupled with reasonable RejectedExecutionHandler.
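A minimal sketch of that recommendation (the pool size, queue capacity, and choice of CallerRunsPolicy are all illustrative):
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        10, 10,                                    // fixed pool size
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(500),             // bounded queue caps memory usage
        new ThreadPoolExecutor.CallerRunsPolicy()  // overflow: the submitter runs the task
);
CallerRunsPolicy doubles as a simple back-pressure mechanism here: when the queue is full, submitters execute tasks themselves, which naturally slows down submission.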
Cached Thread Pool
Here's how the Executors.newCachedThreadPool() works:
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
As you can see:
The thread pool can grow from zero threads to Integer.MAX_VALUE. Practically, the thread pool is unbounded.
If a thread is idle for more than one minute, it may get terminated, so the pool can shrink when threads remain idle for too long.
If all allocated threads are occupied while a new task comes in, then it creates a new thread, as offering a new task to a SynchronousQueue always fails when there is no one on the other end to accept it!
When should I use one or the other? Which strategy is better in terms of resource utilization?
Use it when you have a lot of predictable short-running tasks.
If you are not worried about an unbounded queue of Callable/Runnable tasks, you can use one of them. As suggested by bruno, of these two I too prefer newFixedThreadPool over newCachedThreadPool.
But ThreadPoolExecutor provides more flexible features compared to either newFixedThreadPool or newCachedThreadPool:
ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory,
RejectedExecutionHandler handler)
Advantages:
You have full control of the BlockingQueue's size. It's not unbounded, unlike the earlier two options, so you won't get an out-of-memory error due to a huge pile-up of pending Callable/Runnable tasks when there is unexpected turbulence in the system.
You can implement a custom rejection handling policy, or use one of the built-in policies:
In the default ThreadPoolExecutor.AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection.
In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.
In ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply dropped.
In ThreadPoolExecutor.DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.)
You can implement a custom thread factory for the use cases below (a sketch follows this list):
To set a more descriptive thread name
To set thread daemon status
To set thread priority
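For illustration, a small factory covering all three points (the naming pattern and daemon/priority choices are arbitrary):
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class WorkerThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setName("worker-" + counter.incrementAndGet()); // descriptive thread name
        t.setDaemon(true);                                // don't block JVM exit
        t.setPriority(Thread.NORM_PRIORITY);              // explicit priority
        return t;
    }
}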
That’s right, Executors.newCachedThreadPool() isn't a great choice for server code that's servicing multiple clients and concurrent requests.
Why? There are basically two (related) problems with it:
It's unbounded, which means that you're opening the door for anyone to cripple your JVM by simply injecting more work into the service (DoS attack). Threads consume a non-negligible amount of memory and also increase memory consumption based on their work-in-progress, so it's quite easy to topple a server this way (unless you have other circuit-breakers in place).
The unbounded problem is exacerbated by the fact that the Executor is fronted by a SynchronousQueue which means there's a direct handoff between the task-giver and the thread pool. Each new task will create a new thread if all existing threads are busy. This is generally a bad strategy for server code. When the CPU gets saturated, existing tasks take longer to finish. Yet more tasks are being submitted and more threads created, so tasks take longer and longer to complete. When the CPU is saturated, more threads is definitely not what the server needs.
Here are my recommendations:
Use a fixed-size thread pool (Executors.newFixedThreadPool) or a ThreadPoolExecutor with a set maximum number of threads;
Use newCachedThreadPool only for short-lived asynchronous tasks, as stated in the Javadoc. If you submit tasks that take longer to process, you will end up creating too many threads, and you may hit 100% CPU if you submit long-running tasks at a fast rate (http://rashcoder.com/be-careful-while-using-executors-newcachedthreadpool/).
I ran some quick tests and found the following:
1) If using a SynchronousQueue:
Once the pool reaches its maximum size, any new work is rejected with an exception like the one below.
Exception in thread "main" java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3fee733d rejected from java.util.concurrent.ThreadPoolExecutor@5acf9800[Running, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
2) If using a LinkedBlockingQueue:
The thread count never grows beyond the minimum size, meaning the thread pool is effectively a fixed-size pool at the minimum size.
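If you want to reproduce finding 1, a sketch along these lines triggers the rejection (the sizes match the "pool size = 3" in the stack trace above; swapping in a LinkedBlockingQueue reproduces finding 2 instead):
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<>()); // use new LinkedBlockingQueue<>() for finding 2
        try {
            for (int i = 0; i < 4; i++) { // the 4th submit exceeds maximumPoolSize
                pool.submit(() -> {
                    try { Thread.sleep(5_000); } catch (InterruptedException ignored) {}
                });
            }
        } catch (RejectedExecutionException e) {
            e.printStackTrace();          // finding 1: rejected at pool size = 3
        }
        pool.shutdown();
    }
}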
I have used ThreadPoolExecutors for years, and one of the main reasons is that it is designed to process many requests faster, thanks to parallelism and 'ready-to-go' threads (there are other reasons, though).
Now I'm stuck on a detail of its inner design that I thought I knew well.
Here is a snippet from the Java 8 ThreadPoolExecutor:
public void execute(Runnable command) {
...
/*
* Proceed in 3 steps:
*
* 1. If fewer than corePoolSize threads are running, try to
* start a new thread with the given command as its first
* task. The call to addWorker atomically checks runState and
* workerCount, and so prevents false alarms that would add
* threads when it shouldn't, by returning false.
*/
...
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
...
I'm interested in this very first step, since in most cases you do not want the thread pool executor to store 'unprocessed requests' in its internal queue; it is better to leave them in an external input Kafka topic / JMS queue, etc. So I usually design my performance/parallelism-oriented executors with zero internal capacity and a 'caller runs' rejection policy. You choose some sanely big number of parallel threads and a core pool timeout (so as not to scare others with how big the value is ;)). I don't use the internal queue, and I want tasks to start being processed as early as possible, so this effectively becomes a 'fixed thread pool executor'. Hence, in most cases I'm inside this 'first step' of the method logic.
Here is the question: is it really the case that it will not 'reuse' existing threads, but will create a new one each time it is 'under core size' (most cases)? Wouldn't it be better to add a new core thread only if all the others are busy, rather than paying for another thread creation while idle threads are available? Am I missing anything?
The doc describes the relationship between the corePoolSize, maxPoolSize, and the task queue, and what happens when a task is submitted.
...but will create a new one [thread] each time it is 'under core size'...
Yes. From the doc:
When a new task is submitted in method execute(Runnable), and fewer
than corePoolSize threads are running, a new thread is created to
handle the request, even if other worker threads are idle.
Wouldn't it be better to add a new core thread only if all others are busy...
Since you don't want to use the internal queue this seems reasonable. So set the corePoolSize and maxPoolSize to be the same. Once the ramp up of creating the threads is complete there won't be any more creation.
However, using CallerRunsPolicy would seem to hurt performance if the external queue grows faster than it can be processed.
Here is the question: is this really the case that it will not 'reuse' existing threads but will create new one each time it is 'under core size' (most cases)?
Yes, that is how the code is documented and written.
Am I missing anything?
Yes, I think you are missing the whole point of "core" threads. Core threads are defined in the Executors docs as:
... threads to keep in the pool, even if they are idle.
That's the definition. Thread startup is a non trivial process and so if you have 10 core threads in a pool, the first 10 requests to the pool each start a thread until all of the core threads are live. This spreads the startup load across the first X requests. This is not about getting the tasks done, this is about initializing the TPE and spreading the thread creation load out. You could call prestartAllCoreThreads() if you don't want this behavior.
The whole purpose of the core threads is to have threads already started and running available to work on tasks immediately. If we had to start a thread each time we needed one, there would be unnecessary resource allocation time and thread start/stop overhead taking compute and OS resources. If you don't want the core threads then you can let them timeout and pay for the startup time.
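For instance, a two-line sketch of the prestartAllCoreThreads() alternative mentioned above, which moves all of the thread-creation cost to initialization time instead of the first N submissions:
ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(10);
int started = pool.prestartAllCoreThreads(); // starts all 10 core threads right away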
I have used ThreadPoolExecutors for years, and one of the main reasons is that it is designed to process many requests faster, thanks to parallelism and 'ready-to-go' threads (there are other reasons, though).
The TPE is not necessarily "faster". We use it because manually managing and communicating with a number of threads is hard and easy to get wrong. That's why the TPE code is so powerful. It is the OS threads that give us parallelism.
I don't use the internal queue, and I want tasks to start being processed as early as possible,
The entire point of a threaded program is to maximize throughput. If you run 100 threads on a 4-core system and the tasks are CPU-intensive, you are going to pay for the increased context switching, and the overall time to process a large number of requests is going to increase. Your application is also most likely competing for resources on the server with other programs, and you don't want it to slow to a crawl when hundreds of jobs try to run in a thread pool at the same time.
The whole point of limiting your core threads (i.e. not making them a "sane big amount") is that there is an optimal number of concurrent threads that will maximize the overall throughput of your application. It can be really hard to find the optimal core thread size but experimentation, if possible, would help.
It depends highly on the degree of CPU versus IO in a task. If the tasks are making remote RPC calls to a slow service then it might make sense to have a large number of core threads in your pool. If they are predominantly CPU tasks, however, you are going to want to be closer to the number of CPU/cores and then queue the rest of the tasks. Again it is all about overall throughput.
To reuse threads, one somehow needs to hand the task over to an existing thread.
This pushed me towards a synchronous queue and a zero core pool size.
// BasicThreadFactory is from Apache Commons Lang (org.apache.commons.lang3.concurrent)
return new ThreadPoolExecutor(0, maxThreadsCount,
        10L, SECONDS,
        new SynchronousQueue<Runnable>(),
        new BasicThreadFactory.Builder().namingPattern("processor-%d").build());
This really reduced the 500-1500 ms 'peaks' on my 'main flow'.
But this only works for a zero-sized queue. For a non-zero-sized queue, the question is still open.
I have a long-running process that listens to events and does some intense processing.
Currently I use Executors.newFixedThreadPool(x) to throttle the number of jobs that run concurrently, but depending on the time of day and various other factors, I would like to be able to dynamically increase or decrease the number of concurrent threads.
If I decrease the number of concurrent threads, I want the current running jobs to finish nicely.
Is there a Java library that lets me control and dynamically increase or decrease the number of concurrent threads running in a thread pool? (The class must implement ExecutorService.)
Do I have to implement it myself ?
Have a look at the following API in ThreadPoolExecutor:
public void setCorePoolSize(int corePoolSize)
Sets the core number of threads. This overrides any value set in the constructor.
If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle.
If larger, new threads will, if needed, be started to execute any queued tasks.
Initialization:
ExecutorService service = Executors.newFixedThreadPool(5);
Then, on an as-needed basis, resize the thread pool with:
((ThreadPoolExecutor) service).setCorePoolSize(newLimit); // newLimit is the new size of the pool
Important note:
If the queue is full and the new number of threads is greater than or equal to the maxPoolSize defined earlier, new tasks will be rejected.
So set the values of maxPoolSize and corePoolSize properly (a resize sketch follows).
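A sketch of that resize pattern; the ordering matters on recent JDKs, where setCorePoolSize rejects values above the current maximum:
ExecutorService service = Executors.newFixedThreadPool(5);
ThreadPoolExecutor pool = (ThreadPoolExecutor) service;

// Grow: raise the maximum first so the new core size stays legal.
pool.setMaximumPoolSize(10);
pool.setCorePoolSize(10);

// Shrink: lower the core size first; excess threads terminate when next idle.
pool.setCorePoolSize(3);
pool.setMaximumPoolSize(3);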
I have read many similar questions, but I was not quite satisfied with the answers.
I would like to build an algorithm that would adjust the number of threads depending on the average speed.
Let's say that as I introduce a new thread, the average speed of task execution increases; that means the new thread is good. Then the algorithm should try to add another thread... until the optimal number of threads is reached.
The algorithm should also keep track of the average speed. If at some point the average speed drops significantly, say by 10% (for any reason, e.g. I open a different application or whatever), then the algorithm should terminate one thread and see if the speed goes up...
Maybe such an API exists. Please give me directions or a code example showing how I could implement such an algorithm.
Thank you!
I don't know of a self-tuning system like the one you are describing, but it doesn't sound like a complicated task once you use a ready-made thread pool. Take a thread pool from the concurrency package and implement a class TimeConsumptionCallable implements Callable that wraps any other callable and measures the execution time.
Now you just have to change (increase or decrease) the number of worker threads when the average execution time increases or decreases.
Just do not forget that you need enough statistics before you decide to change the number of worker threads (see the sketch below). Otherwise, random effects that have nothing to do with your application could cause your thread pool to grow and shrink all the time, which can itself kill overall performance.
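A sketch of that wrapper, assuming the measurement is handed to some collector you provide (the LongConsumer callback is a placeholder for your statistics code):
import java.util.concurrent.Callable;
import java.util.function.LongConsumer;

class TimeConsumptionCallable<V> implements Callable<V> {
    private final Callable<V> delegate;
    private final LongConsumer recorder; // e.g. feeds a moving average of execution times

    TimeConsumptionCallable(Callable<V> delegate, LongConsumer recorder) {
        this.delegate = delegate;
        this.recorder = recorder;
    }

    @Override
    public V call() throws Exception {
        long start = System.nanoTime();
        try {
            return delegate.call();
        } finally {
            recorder.accept(System.nanoTime() - start); // record even if the task fails
        }
    }
}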
newCachedThreadPool() vs newFixedThreadPool suggests that perhaps you should be looking at Executors.newCachedThreadPool().
Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to execute will reuse previously constructed threads if available. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources. Note that pools with similar properties but different details (for example, timeout parameters) may be created using ThreadPoolExecutor constructors.
If your threads do not block at any time, then the maximum execution speed is reached when you have as many threads as cores, as simply more than 100% CPU usage is not possible.
In other situations it is very difficult to measure how much a new thread will increase or decrease the execution speed, as you are just watching a moment in time and making assumptions based on something that could be entirely different the next second.
One idea would be to use an Executor in combination with a queue that you supply yourself. Then you can measure the size of the queue and make decisions based on it: if the queue is empty, threads are idle and you can remove one; if the queue fills up, the threads cannot handle the load and you need to add more; if the queue is stable, you are about right.
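A rough sketch of that heuristic, assuming a ThreadPoolExecutor whose queue you can inspect (the backlog threshold of 10 is made up):
import java.util.concurrent.ThreadPoolExecutor;

class QueueBasedTuner {
    // Call this periodically, e.g. from a ScheduledExecutorService.
    void adjust(ThreadPoolExecutor pool) {
        int queued = pool.getQueue().size();
        int core = pool.getCorePoolSize();
        if (queued == 0 && core > 1) {
            pool.setCorePoolSize(core - 1);      // idle: drop a thread
        } else if (queued > 10 && core < pool.getMaximumPoolSize()) {
            pool.setCorePoolSize(core + 1);      // backlog growing: add a thread
        }                                        // otherwise stable: leave it alone
    }
}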
You can come up with your own algorithm by using the existing Java API:
public void setCorePoolSize(int corePoolSize) in ThreadPoolExecutor
Sets the core number of threads. This overrides any value set in the constructor.
If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle.
If larger, new threads will, if needed, be started to execute any queued tasks.
Initialization:
ExecutorService service = Executors.newFixedThreadPool(5); // initialization
On an as-needed basis, resize the pool using the API below:
((ThreadPoolExecutor) service).setCorePoolSize(newLimit); // newLimit is the new size of the pool
And one important point: if the queue is full and the new number of threads is greater than or equal to the previously defined maxPoolSize, new tasks will be rejected.
Be careful when setting maxPoolSize so that setCorePoolSize works properly.
I'm trying to use a ThreadPoolExecutor to schedule tasks, but running into some problems with its policies. Here's its stated behavior:
If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.
The behavior I want is this:
same as above
If more than corePoolSize but less than maximumPoolSize threads are running, prefers adding a new thread over queuing, and using an idle thread over adding a new thread.
same as above
Basically I don't want any tasks to be rejected; I want them to be queued in an unbounded queue. But I do want to have up to maximumPoolSize threads. If I use an unbounded queue, the pool never creates threads after it hits corePoolSize. If I use a bounded queue, it rejects tasks. Is there any way around this?
What I'm thinking about now is running the ThreadPoolExecutor on a SynchronousQueue, but not feeding tasks directly to it - instead feeding them to a separate unbounded LinkedBlockingQueue. Then another thread feeds from the LinkedBlockingQueue into the Executor, and if one gets rejected, it simply tries again until it is not rejected. This seems like a pain and a bit of a hack, though - is there a cleaner way to do this?
It probably isn't necessary to micro-manage the thread pool as requested.
A cached thread pool will re-use idle threads while also allowing potentially unlimited concurrent threads. This of course could lead to runaway performance degrading from context switching overhead during bursty periods.
Executors.newCachedThreadPool();
A better option is to place a limit on the total number of threads while discarding the notion of ensuring idle threads are used first. The configuration changes would be:
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        N, N,                                      // corePoolSize == maximumPoolSize
        aReasonableTimeDuration, TimeUnit.SECONDS, // keep-alive time
        new LinkedBlockingQueue<Runnable>());
executor.allowCoreThreadTimeOut(true);
Reasoning over this scenario: if the executor has fewer than corePoolSize threads, then it must not be very busy. If the system is not very busy, there is little harm in spinning up a new thread. Doing this will cause your ThreadPoolExecutor to always create a new worker while it is under the maximum number of workers allowed. Only when the maximum number of workers are "running" will workers waiting idly for tasks be given tasks. If a worker waits aReasonableTimeDuration without a task, it is allowed to terminate. Using reasonable limits for the pool size (after all, there are only so many CPUs) and a reasonably large timeout (to keep threads from needlessly terminating), the desired benefits will likely be seen.
The final option is hackish. Basically, the ThreadPoolExecutor internally uses BlockingQueue.offer to determine if the queue has capacity. A custom implementation of BlockingQueue could always reject the offer attempt. When the ThreadPoolExecutor fails to offer a task to the queue, it will try to make a new worker. If a new worker cannot be created, a RejectedExecutionHandler will be called. At that point, a custom RejectedExecutionHandler can force a put into the custom BlockingQueue.
/** Hackish BlockingQueue implementation, tightly coupled to ThreadPoolExecutor implementation details. */
class ThreadPoolHackyBlockingQueue implements BlockingQueue<Runnable>, RejectedExecutionHandler {
    final BlockingQueue<Runnable> delegate = new LinkedBlockingQueue<>();

    @Override
    public boolean offer(Runnable item) {
        // Always claim to be "full", so the executor keeps adding workers up to maximumPoolSize.
        return false;
    }

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Pool is saturated (or shut down): actually enqueue the task.
        try {
            delegate.put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    //.... remaining BlockingQueue methods forward to `delegate`
}
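Wiring it up would look roughly like this (pool sizes are placeholders, and the remaining delegate methods must be filled in):
ThreadPoolHackyBlockingQueue queue = new ThreadPoolHackyBlockingQueue();
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        corePoolSize, maximumPoolSize,
        keepAlive, TimeUnit.SECONDS,
        queue,   // offer() always fails, so the pool grows up to maximumPoolSize
        queue);  // once saturated, rejectedExecution() puts the task in the delegate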
Your use case is common, completely legit and unfortunately more difficult than one would expect. For background info you can read this discussion and find a pointer to a solution (also mentioned in the thread) here. Shay's solution works fine.
Generally I'd be a bit wary of unbounded queues; it's usually better to have explicit incoming flow control that degrades gracefully and regulates the ratio of current/remaining work to not overwhelm either producer or consumer.
Just set corePoolSize = maximumPoolSize and use an unbounded queue?
In your list of points, 1 excludes 2, since corePoolSize will always be less than or equal to maximumPoolSize.
Edit
There is still something incompatible between what you want and what TPE will offer you.
If you have an unbounded queue, maximumPoolSize is ignored so, as you observed, no more than corePoolSize threads will ever be created and used.
So, again, if you take corePoolSize = maximumPoolSize with an unbounded queue, you have what you want, no?
Would you be looking for something more like a cached thread pool?
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Executors.html#newCachedThreadPool()