Java scheduled threadpool executor internal queue

I want to write a program that runs every 30 minutes. I am using Java's scheduled thread pool executor to process the tasks that I submit to the executor.
I have been looking at what the official docs say https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
and I have run into a dilemma.
Let's say I have submitted 5 tasks to the queue and I have defined 10 threads in the thread pool.
Is there a likelihood that one of the tasks will be performed twice?
Does the thread pool executor make sure that a task is removed when it has been processed by one of the threads, or must I remove the task myself once it has been processed?
Having the task removed is desirable, since I wouldn't want old tasks to still be in the queue 30 minutes later.

Executors.newFixedThreadPool() creates a new ThreadPoolExecutor using a LinkedBlockingQueue.
From Executors.newFixedThreadPool():
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
When tasks are submitted for execution, the ThreadPoolExecutor adds them to this queue. The worker threads then take tasks from this queue and execute them.
From ThreadPoolExecutor.getTask():
private Runnable getTask() {
    // ...
    for (;;) {
        // ...
        try {
            Runnable r = timed ?
                workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
                workQueue.take();
            if (r != null)
                return r;
            timedOut = true;
        } catch (InterruptedException retry) {
            timedOut = false;
        }
    }
}
As per the BlockingQueue.take() contract, taking an element from the queue removes it as well.
/**
 * Retrieves and removes the head of this queue, waiting if necessary
 * until an element becomes available.
 *
 * @return the head of this queue
 * @throws InterruptedException if interrupted while waiting
 */
E take() throws InterruptedException;

It will be executed only once; the executor removes it automatically.
This is not explicitly documented, but the doc implies it:
Executes the given task sometime in the future.
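For the original every-30-minutes requirement, here is a minimal sketch (myTask is a placeholder Runnable, and the pool size of 10 matches the question):
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(10);
// each run is removed from the internal queue when a worker takes it,
// then re-queued by the executor for the next period
scheduler.scheduleAtFixedRate(myTask, 0, 30, TimeUnit.MINUTES);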

Related

ThreadPoolTaskExecutor should Finish Task Queue before Main thread ends

Problem statement:
I have 1,000 tasks and need to process them via ThreadPoolTaskExecutor. ThreadPoolTaskExecutor has corePoolSize = 5, maxPoolSize = 10 and queueCapacity = 1000.
Now from the main method, I am executing the following code
CountDownLatch latch = new CountDownLatch(5);
Collection<Future<?>> futures = new LinkedList<Future<?>>();
for (Map.Entry<String, Boolean> entry : map.entrySet()) {
    FutureTask task = new FutureTask(new CustomTask(entry));
    executor.execute(task);
}
log.info("ACTIVE COUNT : " + executor.getActiveCount());
log.info("SIZE of the QUEUE : " + executor.getThreadPoolExecutor().getQueue().size());
log.info("LATCH WAIT : " + latch.getCount());
latch.wait();
.....
@Override
public Object call() throws Exception {
    latch.countDown();
    // some logic
    return entry;
}
Now, the map has 1,000 entries in it, and I want to process all 1,000 tasks in the queue and then print these log lines. What's happening here is that the corePoolSize (which is equal to the CountDownLatch count) creates this number of threads and executes them 'right away'. When this number is hit, it starts filling up the queue (which is totally fine and desired). However, these queued tasks are processed ONLY AFTER the main thread reaches its end; only then do these tasks start executing. This is something that I don't want. I want the executor to start picking up items from the queue as soon as threads get free from processing batch 1.
But in my case, once batch 1 is processed, the next task is picked up only when the main thread ends (which I do not want).
Anyone with a solution on how this can be achieved? (The processing of the queue as soon as a thread is available for processing.)
P.S.: I do understand that latch.await() waits for the threads to complete their execution, but I am looking for a behavior in which it waits for all the threads to finish (which is happening) and for the whole queue to be empty (my expectation).
Thank You
If you are going to do it this way, you need to initialize the latch with the number of tasks that you are going to submit, i.e. 1,000. Also, you should decrement the latch at the end of each task, not at its start (as your code currently seems to be doing).
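A minimal sketch of that corrected latch usage (assuming a hypothetical CustomTask constructor that accepts the latch):
CountDownLatch latch = new CountDownLatch(map.size()); // one count per task, i.e. 1,000
for (Map.Entry<String, Boolean> entry : map.entrySet()) {
    executor.execute(new FutureTask<>(new CustomTask(entry, latch)));
}
latch.await(); // await(), not wait(); blocks until every task has counted down

// and inside CustomTask:
@Override
public Object call() throws Exception {
    try {
        // some logic
        return entry;
    } finally {
        latch.countDown(); // decrement at the END of the task
    }
}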
But you don't need a latch or a counter or anything to implement this. Instead, if you are using a Java SE ExecutorService directly, just do this:
public static void main(String[] args) {
    // Submit lots of tasks
    executorService.shutdown();
    try {
        // Waits until all tasks in the queue have completed
        executorService.awaitTermination(1_000_000, TimeUnit.SECONDS);
    } catch (InterruptedException ex) {
        // OK ... will end now
    }
}
And if you are using the Spring Framework-specific ThreadPoolTaskExecutor class:
public static void main(String[] args) {
    // Submit lots of tasks
    executor.setAwaitTerminationSeconds(1_000_000);
    executor.setWaitForTasksToCompleteOnShutdown(true);
    executor.shutdown();
}

Executor framework - Producer Consumer pattern

It is mentioned by Java_author in section 5.3.1,
... many producer-consumer designs can be expressed using the Executor task execution framework, which itself uses the producer-consumer pattern.
... The producer-consumer pattern offers a thread-friendly means of decomposing the problem into simpler components(if possible).
Does Executor framework implementation internally follow producer-consumer pattern?
If yes, How the idea of producer-consumer pattern helps in implementation of Executor framework?
Check the implementation of ThreadPoolExecutor:
public void execute(Runnable command) {
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
Now check addWorker():
private boolean addWorker(Runnable firstTask, boolean core) {
    // After some checks, it creates a Worker and starts the thread
    Worker w = new Worker(firstTask);
    Thread t = w.thread;
    // After some checks, the thread is started
    t.start();
}
Implementation of Worker:
/**
 * Class Worker mainly maintains interrupt control state for
 * threads running tasks, along with other minor bookkeeping.
 * This class opportunistically extends AbstractQueuedSynchronizer
 * to simplify acquiring and releasing a lock surrounding each
 * task execution. This protects against interrupts that are
 * intended to wake up a worker thread waiting for a task from
 * instead interrupting a task being run. We implement a simple
 * non-reentrant mutual exclusion lock rather than use ReentrantLock
 * because we do not want worker tasks to be able to reacquire the
 * lock when they invoke pool control methods like setCorePoolSize.
 */
private final class Worker
        extends AbstractQueuedSynchronizer
        implements Runnable
{
    /** Delegates main run loop to outer runWorker */
    public void run() {
        runWorker(this);
    }
final void runWorker(Worker w) {
    Runnable task = w.firstTask;
    w.firstTask = null;
    boolean completedAbruptly = true;
    try {
        while (task != null || (task = getTask()) != null) {
            w.lock();
            clearInterruptsForTaskRun();
            try {
                beforeExecute(w.thread, task);
                Throwable thrown = null;
                try {
                    task.run();
                } catch (RuntimeException x) {
                    thrown = x; throw x;
                } catch (Error x) {
                    thrown = x; throw x;
                } catch (Throwable x) {
                    thrown = x; throw new Error(x);
                } finally {
                    afterExecute(task, thrown);
                }
            } finally {
                task = null;
                w.completedTasks++;
                w.unlock();
            }
        }
        completedAbruptly = false;
    } finally {
        processWorkerExit(w, completedAbruptly);
    }
}
Which Runnable to execute depends on the logic below.
/**
 * Performs blocking or timed wait for a task, depending on
 * current configuration settings, or returns null if this worker
 * must exit because of any of:
 * 1. There are more than maximumPoolSize workers (due to
 *    a call to setMaximumPoolSize).
 * 2. The pool is stopped.
 * 3. The pool is shutdown and the queue is empty.
 * 4. This worker timed out waiting for a task, and timed-out
 *    workers are subject to termination (that is,
 *    {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
 *    both before and after the timed wait.
 *
 * @return task, or null if the worker must exit, in which case
 *         workerCount is decremented
 */
private Runnable getTask() {
    // After some checks, the code below returns a Runnable
    try {
        Runnable r = timed ?
            workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
            workQueue.take();
        if (r != null)
            return r;
        timedOut = true;
    } catch (InterruptedException retry) {
        timedOut = false;
    }
}
In summary:
The producer adds a Runnable or Callable via the execute() API with workQueue.offer(command).
The execute() method creates a Worker thread if needed.
This Worker thread runs in an infinite loop. It gets a task (e.g. a Runnable) from getTask().
getTask() polls on the BlockingQueue<Runnable> workQueue and takes a Runnable. It is the consumer of the BlockingQueue.
Does the Executor framework implementation internally follow the producer-consumer pattern?
Yes, as explained above.
If yes, how does the idea of the producer-consumer pattern help in the implementation of the Executor framework?
BlockingQueue implementations like ArrayBlockingQueue and the ExecutorService implementation ThreadPoolExecutor are thread-safe. The burden on the programmer of explicitly writing synchronized, wait, and notify calls to implement the same has been removed.
The Executor framework uses the producer-consumer pattern.
From Wikipedia,
In computing, the producer–consumer problem (also known as the
bounded-buffer problem) is a classic example of a multi-process
synchronization problem. The problem describes two processes, the
producer and the consumer, who share a common, fixed-size buffer used
as a queue. The producer's job is to generate data, put it into the
buffer, and start again. At the same time, the consumer is consuming
the data (i.e., removing it from the buffer), one piece at a time. The
problem is to make sure that the producer won't try to add data into
the buffer if it's full and that the consumer won't try to remove data
from an empty buffer.
If we look at the different ExecutorService framework implementations, more specifically the ThreadPoolExecutor class, it basically has the following:
A queue, where the jobs are submitted and held.
A number of threads that consume the tasks submitted to the queue.
Based on the type of executor service, these parameters change.
For example:
A fixed thread pool uses a LinkedBlockingQueue and a user-configured number of threads.
A cached thread pool uses a SynchronousQueue and a number of threads between 0 and Integer.MAX_VALUE, based on the number of submitted tasks.
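To make the analogy concrete, here is a minimal hand-rolled sketch of the same pattern the executor uses internally (an illustration, not JDK code):
BlockingQueue<Runnable> buffer = new ArrayBlockingQueue<>(10);

// producer: plays the role of execute() calling workQueue.offer(command)
new Thread(() -> buffer.offer(() -> System.out.println("task ran"))).start();

// consumer: plays the role of a Worker looping on getTask() -> workQueue.take()
new Thread(() -> {
    try {
        buffer.take().run(); // take() blocks until a task is available, then removes it
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();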

Unexpected deadlock in Executor

I found an unexpected deadlock while running tasks in a ThreadPoolExecutor.
The idea is a main task that launches a secondary task that changes a flag.
The main task halts until the secondary task updates the flag.
If corePoolSize >= 2, the main task completes as expected.
If corePoolSize < 2, it seems that the secondary task is enqueued but never launched.
Using a SynchronousQueue instead, the main task completes even for corePoolSize = 0.
I'd like to know:
What is the cause of the deadlock? It seems non-obvious from the documentation.
Why does using a SynchronousQueue instead of a LinkedBlockingQueue prevent the deadlock?
Is corePoolSize = 2 a safe value to prevent this kind of deadlock?
import java.util.concurrent.*;

class ExecutorDeadlock {
    /*------ FIELDS -------------*/
    boolean halted = true;
    ExecutorService executor;

    Runnable secondaryTask = new Runnable() {
        public void run() {
            System.out.println("secondaryTask started");
            halted = false;
            System.out.println("secondaryTask completed");
        }
    };

    Runnable primaryTask = new Runnable() {
        public void run() {
            System.out.println("primaryTask started");
            executor.execute(secondaryTask);
            while (halted) {
                try {
                    Thread.sleep(500);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
            }
            System.out.println("primaryTask completed");
        }
    };

    /*-------- EXECUTE -----------*/
    void execute() {
        executor.execute(primaryTask);
    }

    /*-------- CTOR -----------*/
    ExecutorDeadlock(int corePoolSize, BlockingQueue<Runnable> workQueue) {
        this.executor = new ThreadPoolExecutor(corePoolSize, 4, 0L, TimeUnit.MILLISECONDS, workQueue);
    }

    /*-------- TEST -----------*/
    public static void main(String[] args) {
        new ExecutorDeadlock(2, new LinkedBlockingQueue<>()).execute();
        //new ExecutorDeadlock(1, new LinkedBlockingQueue<>()).execute();
        //new ExecutorDeadlock(0, new SynchronousQueue<>()).execute();
    }
}
How do you expect this to work with a thread count < 2, when:
You have only 1 executor thread.
The first task adds the secondary task to the executor queue and WAITS for it to start.
Tasks are fetched from the queue by the executor service when there are free threads in the pool. In your case (< 2), the executor thread is never released by the first task.
There is no deadlock issue here.
EDIT:
OK, I've dug up some info, and this is what I have found. First of all, some info from ThreadPoolExecutor:
Any BlockingQueue may be used to transfer and hold submitted tasks.
The use of this queue interacts with pool sizing:
If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.
And now for the queues' offer methods:
SynchronousQueue:
Inserts the specified element into this queue, if another thread is waiting to receive it.
LinkedBlockingQueue:
Inserts the specified element into this queue, waiting if necessary for space to become available.
The return value of the offer method determines whether the new task will be queued or run in a new thread.
The LinkedBlockingQueue enqueues the new task because it can, as there is enough capacity, so the task is enqueued and no new threads are spawned. A SynchronousQueue, however, will not enqueue the task, as there is no other thread waiting for something to be enqueued (offer returns false because the task is not enqueued), and that's why a new executor thread is spawned.
If you read the javadocs for ThreadPoolExecutor, LinkedBlockingQueue, and SynchronousQueue, and check the implementation of the execute method, you will come to the same conclusion.
So you were wrong; there is an explanation in the documentation :)
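A tiny sketch illustrating that difference in offer() (illustrative only):
BlockingQueue<Integer> sync = new SynchronousQueue<>();
System.out.println(sync.offer(1));   // false: no consumer thread is waiting to receive it

BlockingQueue<Integer> linked = new LinkedBlockingQueue<>();
System.out.println(linked.offer(1)); // true: capacity is available, so the element is enqueued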

Executor Service and scheduleWithFixedDelay()

Here is my task: I have a static queue of jobs in a class and a static method that adds jobs to the queue. I have n threads that poll the queue and perform the pulled job. I need the n threads to poll simultaneously at an interval; i.e., all 3 should poll every 5 seconds and look for jobs.
I have this:
public class Handler {
    private static final Queue<Job> queue = new LinkedList<>();

    public static void initialize(int maxThreads) { // maxThreads == 3
        ScheduledExecutorService executorService =
            Executors.newScheduledThreadPool(maxThreads);
        executorService.scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                Job job = null;
                synchronized (queue) {
                    if (queue.size() > 0) {
                        job = queue.poll();
                    }
                }
                if (job != null) {
                    Log.log("start job");
                    doJob(job);
                    Log.log("end job");
                }
            }
        }, 15, 5, TimeUnit.SECONDS);
    }
}
I get this output when I add 4 tasks:
startjob
endjob
startjob
endjob
startjob
endjob
startjob
endjob
It is obvious that these threads perform the jobs serially, whereas I need them to be done 3 at a time. What am I doing wrong? Thanks!
From the documentation:
If any execution of this task takes longer than its period, then subsequent executions may start late, but will not concurrently execute.
So you must schedule three independent tasks to have them run concurrently. Also note that the scheduled executor service is a fixed thread pool, which is not flexible enough for many use cases. A good idiom is to use the scheduled service just to submit tasks to a regular executor service, which may be configured as a resizable thread pool.
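A hedged sketch of that idiom, reusing the names from the question (Handler's static queue and doJob(...) are assumed to be accessible):
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ExecutorService workers = Executors.newFixedThreadPool(3); // a resizable pool would also work

scheduler.scheduleWithFixedDelay(new Runnable() {
    @Override
    public void run() {
        // drain whatever is currently queued and hand each job to the worker
        // pool, so up to 3 jobs run concurrently instead of serially
        while (true) {
            final Job job;
            synchronized (queue) {
                job = queue.poll();
            }
            if (job == null) break;
            workers.submit(new Runnable() {
                @Override
                public void run() {
                    doJob(job);
                }
            });
        }
    }
}, 15, 5, TimeUnit.SECONDS);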
You are running the ScheduledExecutorService with a fixed delay, which means that your jobs will run one after another. Use a fixed thread pool and submit 3 tasks at a time. Here is an explanation with examples
If you declare Job implements Runnable, then your code simplifies dramatically:
First declare the Executor somewhere globally accessible:
public static final ExecutorService executor = Executors.newFixedThreadPool(MAX_THREADS);
Then add a job like this:
executor.submit(new Job());
You are done.

How to get the ThreadPoolExecutor to increase threads to max before queueing?

I've been frustrated for some time with the default behavior of ThreadPoolExecutor which backs the ExecutorService thread-pools that so many of us use. To quote from the Javadocs:
If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full.
What this means is that if you define a thread pool with the following code, it will never start the 2nd thread because the LinkedBlockingQueue is unbounded.
ExecutorService threadPool =
    new ThreadPoolExecutor(1 /*core*/, 50 /*max*/, 60 /*timeout*/,
        TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(/* unlimited queue */));
Only if you have a bounded queue and the queue is full are any threads above the core number started. I suspect a large number of junior Java multithreaded programmers are unaware of this behavior of the ThreadPoolExecutor.
Now I have a specific use case where this is not optimal. I'm looking for ways, without writing my own TPE class, to work around it.
My requirements are for a web service that is making call-backs to a possibly unreliable 3rd party.
I don't want to make the call-back synchronously with the web-request, so I want to use a thread-pool.
I typically get a couple of these a minute so I don't want to have a newFixedThreadPool(...) with a large number of threads that mostly are dormant.
Every so often I get a burst of this traffic and I want to scale up the number of threads to some max value (let's say 50).
I need to make a best attempt to do all callbacks so I want to queue up any additional ones above 50. I don't want to overwhelm the rest of my web-server by using a newCachedThreadPool().
How can I work around this limitation in ThreadPoolExecutor where the queue needs to be bounded and full before more threads will be started? How can I get it to start more threads before queuing tasks?
Edit:
@Flavio makes a good point about using ThreadPoolExecutor.allowCoreThreadTimeOut(true) to have the core threads time out and exit. I considered that, but I still wanted the core-threads feature. I did not want the number of threads in the pool to drop below the core size if possible.
How can I work around this limitation in ThreadPoolExecutor where the queue needs to be bounded and full before more threads will be started?
I believe I have finally found a somewhat elegant (maybe a little hacky) solution to this limitation with ThreadPoolExecutor. It involves extending LinkedBlockingQueue to have it return false for queue.offer(...) when there are already some tasks queued. If the current threads are not keeping up with the queued tasks, the TPE will add additional threads. If the pool is already at max threads, then the RejectedExecutionHandler will be called which does the put(...) into the queue.
It certainly is strange to write a queue where offer(...) can return false and put() never blocks so that's the hack part. But this works well with TPE's usage of the queue so I don't see any problem with doing this.
Here's the code:
// extend LinkedBlockingQueue to force offer() to return false conditionally
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
    private static final long serialVersionUID = -6903933921423432194L;

    @Override
    public boolean offer(Runnable e) {
        // Offer it to the queue if there are 0 items already queued, else
        // return false so the TPE will add another thread. If we return false
        // and max threads have been reached then the RejectedExecutionHandler
        // will be called which will do the put into the queue.
        if (size() == 0) {
            return super.offer(e);
        } else {
            return false;
        }
    }
};
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1 /*core*/, 50 /*max*/,
        60 /*secs*/, TimeUnit.SECONDS, queue);
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // This does the actual put into the queue. Once the max threads
            // have been reached, the tasks will then queue up.
            executor.getQueue().put(r);
            // we do this after the put() to stop race conditions
            if (executor.isShutdown()) {
                throw new RejectedExecutionException(
                        "Task " + r + " rejected from " + executor);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});
With this mechanism, when I submit tasks to the queue, the ThreadPoolExecutor will:
Scale the number of threads up to the core size initially (here 1).
Offer it to the queue. If the queue is empty it will be queued to be handled by the existing threads.
If the queue has 1 or more elements already, the offer(...) will return false.
If false is returned, scale up the number of threads in the pool until they reach the max number (here 50).
If the pool is at the max, it calls the RejectedExecutionHandler.
The RejectedExecutionHandler then puts the task into the queue to be processed by the first available thread in FIFO order.
Although in my example code above, the queue is unbounded, you could also define it as a bounded queue. For example, if you add a capacity of 1000 to the LinkedBlockingQueue then it will:
scale the threads up to max
then queue up until it is full with 1000 tasks
then block the caller until space becomes available in the queue.
Also, if you needed to use offer(...) in the RejectedExecutionHandler, you could use the offer(E, long, TimeUnit) method instead with Long.MAX_VALUE as the timeout, as sketched below.
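A sketch of that variant (same handler as above, with the timed offer standing in for put):
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // effectively blocks "forever", like put(), but goes through the
            // timed offer(E, long, TimeUnit) method instead
            executor.getQueue().offer(r, Long.MAX_VALUE, TimeUnit.SECONDS);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
});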
Warning:
If you expect tasks to be added to the executor after it has been shut down, then you may want to be smarter about throwing RejectedExecutionException out of our custom RejectedExecutionHandler when the executor service has been shut down. Thanks to @RaduToader for pointing this out.
Edit:
Another tweak to this answer could be to ask the TPE whether there are idle threads and only enqueue the item if there are. You would have to make a true class for this and add an ourQueue.setThreadPoolExecutor(tpe) method to it.
Then your offer(...) method might look something like:
Check to see if the tpe.getPoolSize() == tpe.getMaximumPoolSize() in which case just call super.offer(...).
Else if tpe.getPoolSize() > tpe.getActiveCount() then call super.offer(...) since there seem to be idle threads.
Otherwise return false to fork another thread.
Maybe this:
int poolSize = tpe.getPoolSize();
int maximumPoolSize = tpe.getMaximumPoolSize();
if (poolSize >= maximumPoolSize || poolSize > tpe.getActiveCount()) {
    return super.offer(e);
} else {
    return false;
}
Note that the get methods on TPE are expensive since they access volatile fields or (in the case of getActiveCount()) lock the TPE and walk the thread-list. Also, there are race conditions here that may cause a task to be enqueued improperly or another thread forked when there was an idle thread.
Set core size and max size to the same value, and allow core threads to be removed from the pool with allowCoreThreadTimeOut(true).
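In code, that suggestion amounts to something like this (a sketch; 50 matches the max from the question):
// core == max, so new threads are created for each task until the pool
// reaches 50, and only then are tasks queued; allowCoreThreadTimeOut
// lets the threads retire when they sit idle
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(50, 50,
        60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
threadPool.allowCoreThreadTimeOut(true);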
I've already got two other answers on this question, but I suspect this one is the best.
It's based on the technique of the currently accepted answer, namely:
Override the queue's offer() method to (sometimes) return false,
which causes the ThreadPoolExecutor to either spawn a new thread or reject the task, and
set the RejectedExecutionHandler to actually queue the task on rejection.
The problem is when offer() should return false. The currently accepted answer returns false when the queue has a couple of tasks on it, but as I've pointed out in my comment there, this causes undesirable effects. Alternately, if you always return false, you'll keep spawning new threads even when you have threads waiting on the queue.
The solution is to use Java 7 LinkedTransferQueue and have offer() call tryTransfer(). When there is a waiting consumer thread the task will just get passed to that thread. Otherwise, offer() will return false and the ThreadPoolExecutor will spawn a new thread.
BlockingQueue<Runnable> queue = new LinkedTransferQueue<Runnable>() {
    @Override
    public boolean offer(Runnable e) {
        return tryTransfer(e);
    }
};
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue);
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});
Note: I now prefer and recommend my other answer.
Here's a version which feels to me much more straightforward: Increase the corePoolSize (up to the limit of maximumPoolSize) whenever a new task is executed, then decrease the corePoolSize (down to the limit of the user specified "core pool size") whenever a task completes.
To put it another way, keep track of the number of running or enqueued tasks, and ensure that the corePoolSize is equal to the number of tasks as long as it is between the user specified "core pool size" and the maximumPoolSize.
public class GrowBeforeQueueThreadPoolExecutor extends ThreadPoolExecutor {
    private int userSpecifiedCorePoolSize;
    private int taskCount;

    public GrowBeforeQueueThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        userSpecifiedCorePoolSize = corePoolSize;
    }

    @Override
    public void execute(Runnable runnable) {
        synchronized (this) {
            taskCount++;
            setCorePoolSizeToTaskCountWithinBounds();
        }
        super.execute(runnable);
    }

    @Override
    protected void afterExecute(Runnable runnable, Throwable throwable) {
        super.afterExecute(runnable, throwable);
        synchronized (this) {
            taskCount--;
            setCorePoolSizeToTaskCountWithinBounds();
        }
    }

    private void setCorePoolSizeToTaskCountWithinBounds() {
        int threads = taskCount;
        if (threads < userSpecifiedCorePoolSize) threads = userSpecifiedCorePoolSize;
        if (threads > getMaximumPoolSize()) threads = getMaximumPoolSize();
        setCorePoolSize(threads);
    }
}
As written the class doesn't support changing the user specified corePoolSize or maximumPoolSize after construction, and doesn't support manipulating the work queue directly or via remove() or purge().
We have a subclass of ThreadPoolExecutor that takes an additional creationThreshold and overrides execute.
public void execute(Runnable command) {
    super.execute(command);
    final int poolSize = getPoolSize();
    if (poolSize < getMaximumPoolSize()) {
        if (getQueue().size() > creationThreshold) {
            synchronized (this) {
                setCorePoolSize(poolSize + 1);
                setCorePoolSize(poolSize);
            }
        }
    }
}
maybe that helps too, but yours looks more artsy of course…
The recommended answer resolves only one (1) of the issues with the JDK thread pool:
JDK thread pools are biased towards queuing. So instead of spawning a new thread, they will queue the task. Only if the queue reaches its limit will the thread pool spawn a new thread.
Thread retirement does not happen when load lightens. For example, if we have a burst of jobs hitting the pool that causes the pool to go to max, followed by a light load of at most 2 tasks at a time, the pool will use all threads to service the light load, preventing thread retirement. (Only 2 threads would be needed…)
Unhappy with the behavior above, I went ahead and implemented a pool to overcome these deficiencies.
To resolve 2): using LIFO scheduling resolves the issue. This idea was presented by Ben Maurer at the ACM Applicative 2015 conference:
Systems @ Facebook scale
So a new implementation was born:
LifoThreadPoolExecutorSQP
So far this implementation improves async execution performance for ZEL.
The implementation is spin-capable to reduce context-switch overhead, yielding superior performance for certain use cases.
Hope it helps...
PS: The JDK Fork Join Pool implements ExecutorService and works as a "normal" thread pool. The implementation is performant; it uses LIFO thread scheduling. However, there is no control over the internal queue size or the retirement timeout..., and most importantly, tasks cannot be interrupted when canceling them.
Note: I now prefer and recommend my other answer.
I have another proposal, following the original idea of changing the queue to return false. In this one, all tasks can enter the queue, but whenever a task is enqueued after execute(), we follow it with a sentinel no-op task which the queue rejects, causing a new thread to spawn, which will execute the no-op immediately followed by something from the queue.
Because worker threads may be polling the LinkedBlockingQueue for a new task, it's possible for a task to get enqueued even when there's an available thread. To avoid spawning new threads even when there are threads available, we need to keep track of how many threads are waiting for new tasks on the queue, and only spawn a new thread when there are more tasks on the queue than waiting threads.
final Runnable SENTINEL_NO_OP = new Runnable() { public void run() { } };
final AtomicInteger waitingThreads = new AtomicInteger(0);

BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
    @Override
    public boolean offer(Runnable e) {
        // offer returning false will cause the executor to spawn a new thread
        if (e == SENTINEL_NO_OP) return size() <= waitingThreads.get();
        else return super.offer(e);
    }

    @Override
    public Runnable poll(long timeout, TimeUnit unit) throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.poll(timeout, unit);
        } finally {
            waitingThreads.decrementAndGet();
        }
    }

    @Override
    public Runnable take() throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.take();
        } finally {
            waitingThreads.decrementAndGet();
        }
    }
};

ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue) {
    @Override
    public void execute(Runnable command) {
        super.execute(command);
        if (getQueue().size() > waitingThreads.get()) super.execute(SENTINEL_NO_OP);
    }
};

threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (r == SENTINEL_NO_OP) return;
        else throw new RejectedExecutionException();
    }
});
The best solution that I can think of is to extend ThreadPoolExecutor.
It offers a few hook methods: beforeExecute and afterExecute. In your extension you could use a bounded queue to feed in tasks and a second unbounded queue to handle overflow. When someone calls submit, you could attempt to place the request into the bounded queue. If you're met with an exception, you just stick the task in your overflow queue. You could then utilize the afterExecute hook to see if there is anything in the overflow queue after finishing a task. This way, the executor will take care of the stuff in its bounded queue first, and automatically pull from this unbounded queue as time permits (a rough sketch follows below).
It seems like more work than your solution, but at least it doesn't involve giving queues unexpected behaviors. I also imagine that there's a better way to check the status of the queue and threads than relying on exceptions, which are fairly slow to throw.
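A rough sketch of that idea (the class name and constructor are hypothetical; shutdown handling and the race where the pool goes idle while the overflow is non-empty are left out):
import java.util.Queue;
import java.util.concurrent.*;

public class OverflowThreadPoolExecutor extends ThreadPoolExecutor {
    // unbounded side queue holding tasks the bounded work queue rejected
    private final Queue<Runnable> overflow = new ConcurrentLinkedQueue<>();

    public OverflowThreadPoolExecutor(int core, int max, long keepAlive,
                                      TimeUnit unit, int queueCapacity) {
        super(core, max, keepAlive, unit, new ArrayBlockingQueue<Runnable>(queueCapacity),
              new ThreadPoolExecutor.AbortPolicy()); // reject when the bounded queue is full
    }

    @Override
    public void execute(Runnable task) {
        try {
            super.execute(task);
        } catch (RejectedExecutionException e) {
            overflow.add(task); // bounded queue full and max threads busy: park the task
        }
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        Runnable next = overflow.poll();
        if (next != null) {
            execute(next); // re-feed the overflow as capacity frees up
        }
    }
}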
Note: For the JDK ThreadPoolExecutor, when you have a bounded queue, you only create new threads when offer returns false. You might obtain something useful with CallerRunsPolicy, which creates a bit of back-pressure and directly calls run() in the caller thread.
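For example (a sketch, with an arbitrary capacity of 100):
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50,
        60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(100),     // bounded, so offer() can return false
        new ThreadPoolExecutor.CallerRunsPolicy()); // saturated: run in the submitting thread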
I need tasks to be executed from threads created by the pool, and to have an unbounded queue for scheduling, while the number of threads within the pool may grow or shrink between corePoolSize and maximumPoolSize, so...
I ended up doing a full copy-paste from ThreadPoolExecutor and changing the execute method a bit, because unfortunately this could not be done by extension (it calls private methods).
I didn't want to spawn new threads immediately when a new request arrives and all threads are busy (because I have, in general, short-lived tasks). I've added a threshold, but feel free to change it to your needs (maybe for mostly-IO workloads it is better to remove this threshold).
private final AtomicInteger activeWorkers = new AtomicInteger(0);
private volatile double threshold = 0.7d;

protected void beforeExecute(Thread t, Runnable r) {
    activeWorkers.incrementAndGet();
}

protected void afterExecute(Runnable r, Throwable t) {
    activeWorkers.decrementAndGet();
}

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && this.workQueue.offer(command)) {
        int recheck = this.ctl.get();
        if (!isRunning(recheck) && this.remove(command)) {
            this.reject(command);
        } else if (workerCountOf(recheck) == 0) {
            this.addWorker((Runnable) null, false);
        }
        //>>change start
        else if (workerCountOf(recheck) < maximumPoolSize //
                && (activeWorkers.get() > workerCountOf(recheck) * threshold
                    || workQueue.size() > workerCountOf(recheck) * threshold)) {
            this.addWorker((Runnable) null, false);
        }
        //<<change end
    } else if (!this.addWorker(command, false)) {
        this.reject(command);
    }
}
Below is a solution using two thread pools, both with the same core and max pool size. The second pool is used when the first pool is busy.
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyExecutor {
    ThreadPoolExecutor tex1, tex2;

    public MyExecutor() {
        tex1 = new ThreadPoolExecutor(15, 15, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        tex1.allowCoreThreadTimeOut(true);
        tex2 = new ThreadPoolExecutor(45, 45, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        tex2.allowCoreThreadTimeOut(true);
    }

    public Future<?> submit(Runnable task) {
        ThreadPoolExecutor ex = tex1;
        int excessTasks1 = tex1.getQueue().size() + tex1.getActiveCount() - tex1.getCorePoolSize();
        if (excessTasks1 >= 0) {
            int excessTasks2 = tex2.getQueue().size() + tex2.getActiveCount() - tex2.getCorePoolSize();
            if (excessTasks2 <= 0 || excessTasks2 / (double) tex2.getCorePoolSize() < excessTasks1 / (double) tex1.getCorePoolSize()) {
                ex = tex2;
            }
        }
        return ex.submit(task);
    }
}
