ThreadPoolExecutor mysteriously rejecting runnables - java

Given the following unit test, can somebody explain to me why, at some point, the ThreadPoolExecutor rejects a task?
@Test
public void testRejectionBehavior() throws Exception {
    final AtomicLong count = new AtomicLong(0);
    final AtomicInteger activeThreads = new AtomicInteger(0);
    for (;;) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(20, 20,
                0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>(), new ThreadPoolExecutor.CallerRunsPolicy());
        int prestarted = pool.prestartAllCoreThreads();
        pool.allowCoreThreadTimeOut(false);
        System.out.println("Prestarted #" + prestarted);
        for (int i = 0; i < 100; i++) {
            final int thisTasksActive = activeThreads.incrementAndGet();
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    long value = count.incrementAndGet();
                    if (value % 50 == 0) {
                        System.out.println("Execution #" + value + " / active: " + thisTasksActive);
                    }
                    if (Thread.currentThread().getName().equals("main")) {
                        throw new IllegalStateException("Execution #" + value + " / active: " + thisTasksActive);
                    }
                    activeThreads.decrementAndGet();
                }
            });
            Thread.sleep(5);
        }
    }
}
The output for me looks like this:
....
Execution #200 / active: 1
Prestarted #20
java.lang.IllegalStateException: Execution #201 / active: 1 / pool stats: java.util.concurrent.ThreadPoolExecutor@156643d4[Running, pool size = 20, active threads = 20, queued tasks = 0, completed tasks = 0]
As you can see, it does some 200 executions and then suddenly rejects the first task of a new iteration.

OK, after a lot of digging into the ThreadPoolExecutor it turns out that, with the given construction parameters, the ThreadPoolExecutor is not immediately able to execute tasks.
There is actually a race condition even if you invoke pool.prestartAllCoreThreads(). You see, prestartAllCoreThreads() creates new ThreadPoolExecutor.Worker instances, which implement the Runnable interface. When instantiated, they set their internal state to -1, which makes them appear as "active threads" in the toString() output of the ThreadPoolExecutor. Also, in their constructor, the Worker instances create a new Thread and set themselves as the Runnable for that Thread. It is not until their run() method is actually called by the newly started thread that they mark themselves as available for taking on tasks and subsequently call workQueue.take().
In short, when you have a ThreadPoolExecutor with a synchronous queue and prestart all core threads, it might take a while for these threads to really start up and block in queue.take(). It is not until then that you can submit tasks without getting a rejected execution.
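If you want to keep the SynchronousQueue, one crude mitigation (my own sketch, not from the original post, and relying on the implementation detail described above that freshly constructed workers are reported as "active") is to wait after prestarting until the active count drops back to zero, i.e. until the workers have started and are about to block in take(). This narrows the startup race but does not eliminate it completely; the robust fixes are a queue with capacity or simply accepting the CallerRunsPolicy fallback.
ThreadPoolExecutor pool = new ThreadPoolExecutor(20, 20,
        0L, TimeUnit.MILLISECONDS,
        new SynchronousQueue<Runnable>(),
        new ThreadPoolExecutor.CallerRunsPolicy());
pool.prestartAllCoreThreads();
// Prestarted workers show up as "active" until their run() method has started,
// so wait for the count to fall back to zero before submitting real work.
while (pool.getActiveCount() > 0) {
    Thread.sleep(1);
}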

You haven't provided a proper queue for the executor to store tasks in. A SynchronousQueue has no capacity, not even 1. You fill up the thread pool, and then your next task has to run on the main thread, which is exactly what CallerRunsPolicy does in this case.
SynchronousQueue is a weird beast, and about the only time I've seen it used in code on SO is with executors, in questions like "why does my code act weird?". How did you come up with using a SynchronousQueue here?
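For comparison, here is a minimal sketch (my own, not from the original post) of the same pool backed by a bounded LinkedBlockingQueue; tasks that don't find a free thread immediately are parked in the queue instead of being handed to the rejection handler, as long as the queue has spare capacity:
ThreadPoolExecutor pool = new ThreadPoolExecutor(20, 20,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(100),          // room for 100 waiting tasks
        new ThreadPoolExecutor.CallerRunsPolicy());      // only triggers once the queue is full
// With all 20 threads busy, this submission is queued rather than rejected.
pool.execute(() -> System.out.println("queued or executed, never rejected"));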

Related

How to submit List<LinkedBlockingQueue<Long>> to ThreadPoolExecutor and each thread will pick one LinkedBlockingQueue and execute it parallel

I am submitting a List of LinkedBlockingQueue<Long> to a ThreadPoolExecutor, and the requirement is that each thread picks one LinkedBlockingQueue<Long> and processes it in parallel.
This is my method logic:
public void doParallelProcess() {
    List<LinkedBlockingQueue<Long>> linkedBlockingQueueList = splitListtoBlockingQueues();
    ThreadPoolExecutor executor = new ThreadPoolExecutor(1, linkedBlockingQueueList.size(), 0L,
            TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(), Executors.defaultThreadFactory());
    Long initial = System.currentTimeMillis();
    try {
        System.out.println("linkedBlockingQueueList begin size is " + linkedBlockingQueueList.size() + "is empty"
                + linkedBlockingQueueList.isEmpty());
        while (true) {
            linkedBlockingQueueList.parallelStream().parallel().filter(q -> !q.isEmpty()).forEach(queue -> {
                Long id = queue.poll();
                MyTestRunnable runnab = new MyTestRunnable(id);
                executor.execute(runnab);
                System.out.println("Task Count: " + executor.getTaskCount() + ", Completed Task Count: "
                        + executor.getCompletedTaskCount() + ", Active Task Count: " + executor.getActiveCount());
            });
            System.out.println("linkedBlockingQueueList end size is " + linkedBlockingQueueList.size() + "is empty"
                    + linkedBlockingQueueList.isEmpty());
            System.out.println("executor service " + executor);
            if (executor.getCompletedTaskCount() == (long) mainList.size()) {
                break;
            }
            while (executor.getActiveCount() != 0) {
                System.out.println("Task Count: " + executor.getTaskCount() + ", Completed Task Count: "
                        + executor.getCompletedTaskCount() + ", Active Task Count: " + executor.getActiveCount());
                Thread.sleep(1000L);
            }
        }
    } catch (Exception e) {
    } finally {
        executor.shutdown();
        while (!executor.isTerminated()) {
        }
    }
}
How do I submit a list of LinkedBlockingQueue so that each queue goes to an individual thread? For example: the List<LinkedBlockingQueue<Long>> has size 50, each LinkedBlockingQueue contains 50 entries, and each thread should pick one LinkedBlockingQueue<Long> and execute its 50 queued tasks.
The input to an ExecutorService is either Runnable or Callable. Any task you submit needs to implement one of those two interfaces. If you want to submit a bunch of tasks to a thread pool and wait until they are all complete, then you can use the invokeAll method and loop over the resulting Futures, calling get on each: see this informative answer to a similar question.
You do not need to batch your input tasks into groups, though. You never want an executor service to have idle threads while there is still work left to do! You want it to be able to grab the next task as soon as resources free up, and batching in this fashion runs contrary to that. Your code is doing this:
while non-empty input lists exist {
    for each non-empty input list L {
        t = new Runnable(L.pop())
        executor.submit(t)
    }
    while (executor.hasTasks()) {
        wait
    }
}
Once one of those tasks completes, that thread should be free to move on to other work. But it won't because you wait until all N tasks complete before you submit any more. Submit them all at once with invokeAll and let the executor service do what it was built to do.
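As a rough sketch of what that could look like for the queues in this question (exception handling omitted; MyTestRunnable and linkedBlockingQueueList are the names from the post above, while the Callable wrapper is my own):
List<Callable<Void>> tasks = new ArrayList<>();
for (LinkedBlockingQueue<Long> queue : linkedBlockingQueueList) {
    for (Long id : queue) {
        tasks.add(() -> {
            new MyTestRunnable(id).run();   // reuse the existing task logic
            return null;
        });
    }
}
ExecutorService executor = Executors.newFixedThreadPool(linkedBlockingQueueList.size());
List<Future<Void>> futures = executor.invokeAll(tasks);   // blocks until every task has finished
executor.shutdown();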
The Executors class is your main entry to thread pools:
ExecutorService executor = Executors.newCachedThreadPool();
linkedBlockingQueueList.forEach(queue -> executor.submit(() -> { /* process queue */ }));
If you do want to create a ThreadPoolExecutor yourself — it does give you more control over the configuration — there are at least two ways you may specify a default thread factory:
Leave out the thread factory argument:
ThreadPoolExecutor executor = new ThreadPoolExecutor(1, linkedBlockingQueueList.size(),
        0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
Use the Executors class again for getting the default thread factory:
ThreadPoolExecutor executor = new ThreadPoolExecutor(1, linkedBlockingQueueList.size(),
        0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(),
        Executors.defaultThreadFactory());

ThreadPoolExecutor not shrinking at low load

In my program, tasks are rarely submitted to the executor most of the time, yet they never cease completely. There are periodic bursts when many tasks are submitted at once.
Even though allowCoreThreadTimeOut is set and one thread would be enough most of the time, the redundant executor threads don't stop.
This is because of the fairness of the executor's blocking queue: when multiple threads wait on it, all have an equal chance to get a task, so no single thread's idle time grows significantly.
Is there a workaround? For example, a queue that, in case of multiple waiting threads, hands the task to the thread with the lowest id?
public class ShrinkTPE {
    public static void main(final String[] args) throws Exception {
        final ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors
                .newFixedThreadPool(NTHREADS);
        executor.setKeepAliveTime(ALIVE_TIME, TimeUnit.SECONDS);
        executor.allowCoreThreadTimeOut(true);
        // thread alive time is 10s
        // load all threads with tasks at start and every 12s
        // also submit one task each second
        for (int i = 0;; i++) {
            int j = 0;
            do {
                if (false && !mostThreadsUnused(i))
                    break;
                final int i2 = i, j2 = j;
                executor.submit(new Callable<Void>() {
                    @Override
                    public Void call() throws Exception {
                        System.out.println(""
                                + Thread.currentThread().getName() + " " + i2
                                + " " + j2);
                        Thread.sleep(300);
                        return null;
                    }
                });
            } while (mostThreadsUnused(i) && ++j < NTHREADS);
            Thread.sleep(1000);
            System.out.println();
        }
    }

    private static boolean mostThreadsUnused(final int i) {
        return i % (ALIVE_TIME + 2) == 0;
    }

    private static final int NTHREADS = 5;
    private static final int ALIVE_TIME = 10;
}
final ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(N_THREAD);
You are using a fixed thread pool, which means the pool keeps N_THREAD threads around constantly; allowCoreThreadTimeOut ends up being defeated here.
Use a different kind of thread pool, perhaps a cached thread pool? It reuses existing threads, but spins up additional threads if you submit a new task and there is no idle thread.
Idle threads die after a configurable amount of time (60 seconds of idleness by default).
The official JDK implementation of newCachedThreadPool is as follows. You can simply call that constructor directly if you want to set a maximum thread pool size, customize the keepAliveTime, or use a different queue.
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
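If you want to cap the thread count rather than allow Integer.MAX_VALUE threads, a hedged sketch of such a customization could look like this (the numbers are placeholders, not recommendations; note that with a SynchronousQueue a saturated pool rejects submissions, hence the explicit handler):
ExecutorService executor = new ThreadPoolExecutor(
        0, 8,                                       // at most 8 threads, all allowed to time out
        30L, TimeUnit.SECONDS,                      // idle threads retired after 30 seconds
        new SynchronousQueue<Runnable>(),
        new ThreadPoolExecutor.CallerRunsPolicy()); // the queue has no capacity, so handle saturation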

Learning about Threads

I have written a simple program that is intended to start a few threads. The threads should pick an integer n from an integer array, wait n seconds, and write the time t the thread actually waited into an array of results.
If one thread finishes its task, it should pick the next one that has not yet been assigned to another thread.
Of course, the order in the arrays has to be maintained, so that integers and results match.
My code runs smoothly as far as I can see.
However, there is one line of code I find particularly unsatisfying and I hope there is a good way to fix this without changing too much:
while(Thread.activeCount() != 1); // first evil line
I kind of abuse this line to make sure all my threads have finished all the tasks before I access my array with the results. I want to do that to prevent ill values, like 0.0, a NullPointerException, etc. (in short, anything that would make an application with an actual use crash).
I am also not sure whether my code still runs correctly for very long task arrays, for example whether the results could stop matching the order of the integers.
Any constructive help is appreciated.
First class:
public class ThreadArrayWriterTest {

    int[] repitions;
    int len = 0;
    double[] timeConsumed;

    public boolean finished() {
        synchronized (repitions) {
            return len <= 0;
        }
    }

    public ThreadArrayWriterTest(int[] repitions) {
        this.repitions = repitions;
        this.len = repitions.length;
        timeConsumed = new double[this.len];
    }

    public double[] returnTimes(int[] repititions, int numOfThreads, TimeConsumer timeConsumer) {
        for (int i = 0; i < numOfThreads; i++) {
            new Thread() {
                public void run() {
                    while (!finished()) {
                        len--;
                        timeConsumed[len] = timeConsumer.returnTimeConsumed(repititions[len]);
                    }
                }
            }.start();
        }
        while (Thread.activeCount() != 1) // first evil line
            ;
        return timeConsumed;
    }

    public static void main(String[] args) {
        long begin = System.currentTimeMillis();
        int[] repitions = { 3, 1, 3, 1, 2, 1, 3, 3, 3 };
        int numberOfThreads = 10;
        ThreadArrayWriterTest t = new ThreadArrayWriterTest(repitions);
        double[] times = t.returnTimes(repitions, numberOfThreads, new TimeConsumer());
        for (double d : times) {
            System.out.println(d);
        }
        long end = System.currentTimeMillis();
        System.out.println("Total time of execution: " + (end - begin));
    }
}
Second class:
public class TimeConsumer {
    double returnTimeConsumed(int repitions) {
        long before = System.currentTimeMillis();
        for (int i = 0; i < repitions; i++) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
        long after = System.currentTimeMillis();
        double ret = after - before;
        System.out.println("It takes: " + ret + "ms" + " for " + repitions + " runs through the for-loop");
        return ret;
    }
}
The easiest way to wait for all threads to complete is to keep a Collection of them and then call Thread.join() on each one in turn.
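A minimal sketch of that join pattern (the task body is a stand-in, and InterruptedException handling is omitted):
List<Thread> threads = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    Thread t = new Thread(() -> {
        // do the work that fills the result array
    });
    t.start();
    threads.add(t);
}
for (Thread t : threads) {
    t.join();   // blocks until this thread has finished
}
// join() also establishes a happens-before edge, so the results written
// by the worker threads are now safely visible to the main thread.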
In addition to .join(), you can use an ExecutorService to manage pools of threads:
An Executor that provides methods to manage termination and methods
that can produce a Future for tracking progress of one or more
asynchronous tasks.
An ExecutorService can be shut down, which will cause it to reject new
tasks. Two different methods are provided for shutting down an
ExecutorService. The shutdown() method will allow previously submitted
tasks to execute before terminating, while the shutdownNow() method
prevents waiting tasks from starting and attempts to stop currently
executing tasks. Upon termination, an executor has no tasks actively
executing, no tasks awaiting execution, and no new tasks can be
submitted. An unused ExecutorService should be shut down to allow
reclamation of its resources.
Method submit extends base method Executor.execute(Runnable) by
creating and returning a Future that can be used to cancel execution
and/or wait for completion. Methods invokeAny and invokeAll perform
the most commonly useful forms of bulk execution, executing a
collection of tasks and then waiting for at least one, or all, to
complete.
ExecutorService executorService = Executors.newFixedThreadPool(maximumNumberOfThreads);
CompletionService<Void> completionService = new ExecutorCompletionService<>(executorService);
// Tasks must be submitted before they can be taken; "tasks" stands for your List<Callable<Void>>.
for (int i = 0; i < numberOfTasks; ++i) {
    completionService.submit(tasks.get(i));
}
for (int i = 0; i < numberOfTasks; ++i) {
    completionService.take();   // blocks until the next submitted task completes
}
executorService.shutdown();
Plus take a look at ThreadPoolExecutor
Since Java provides a more advanced threading API in the java.util.concurrent package, you should look into ExecutorService, which simplifies the thread management mechanism.
A simple solution to your problem:
Use the Executors API to create a thread pool.
static ExecutorService newFixedThreadPool(int nThreads)
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue.
Use invokeAll to wait for all tasks to complete.
Sample code:
ExecutorService service = Executors.newFixedThreadPool(10);
List<MyCallable> futureList = new ArrayList<MyCallable>();
for (int i = 0; i < 12; i++) {
    MyCallable myCallable = new MyCallable((long) i);
    futureList.add(myCallable);
}
System.out.println("Start");
try {
    List<Future<Long>> futures = service.invokeAll(futureList);
    for (Future<Long> future : futures) {
        try {
            System.out.println("future.isDone = " + future.isDone());
            System.out.println("future: call =" + future.get());
        } catch (Exception err1) {
            err1.printStackTrace();
        }
    }
} catch (Exception err) {
    err.printStackTrace();
}
service.shutdown();
Refer to this related SE question for more details on achieving the same:
wait until all threads finish their work in java

Java concurrency counter not properly cleaned up

This is a Java concurrency question. 10 jobs need to be done, and each of them will have 32 worker threads. Each worker thread increases a counter. Once the counter reaches 32, the job is done and the counter map is cleaned up. From the console output, I expect 10 "done" lines, a pool size of 0 and a countThreadMap size of 0.
The issues are:
Most of the time, "pool size: 0 and countThreadMap size:3" is printed out; even though all threads are gone, 3 jobs are not finished yet.
Sometimes I see a NullPointerException on line 27. I have used ConcurrentHashMap and AtomicLong, so why do I still get a concurrency exception?
Thanks
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicLong;

public class Test {
    final ConcurrentHashMap<Long, AtomicLong[]> countThreadMap = new ConcurrentHashMap<Long, AtomicLong[]>();
    final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
    final ThreadPoolExecutor tPoolExecutor = ((ThreadPoolExecutor) cachedThreadPool);

    public void doJob(final Long batchIterationTime) {
        for (int i = 0; i < 32; i++) {
            Thread workerThread = new Thread(new Runnable() {
                @Override
                public void run() {
                    if (countThreadMap.get(batchIterationTime) == null) {
                        AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
                        atomicThreadCountArr[0] = new AtomicLong(1);
                        atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis()); // start-up time
                        countThreadMap.put(batchIterationTime, atomicThreadCountArr);
                    } else {
                        AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
                        atomicThreadCountArr[0].getAndAdd(1);
                        countThreadMap.put(batchIterationTime, atomicThreadCountArr);
                    }
                    if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
                        System.out.println("done");
                        countThreadMap.remove(batchIterationTime);
                    }
                }
            });
            tPoolExecutor.execute(workerThread);
        }
    }

    public void report() {
        while (tPoolExecutor.getActiveCount() != 0) {
            //
        }
        System.out.println("pool size: " + tPoolExecutor.getActiveCount() + " and countThreadMap size:" + countThreadMap.size());
    }

    public static void main(String[] args) throws Exception {
        Test test = new Test();
        for (int i = 0; i < 10; i++) {
            Long batchIterationTime = System.currentTimeMillis();
            test.doJob(batchIterationTime);
        }
        test.report();
        System.out.println("All Jobs are done");
    }
}
Let's dig through all the mistakes of thread-related programming one can make:
Thread workerThread = new Thread(new Runnable() {
…
tPoolExecutor.execute(workerThread);
You create a Thread but don't start it; instead you submit it to an executor. It's a historical mistake of the Java API to let Thread implement Runnable for no good reason. Now, every developer should be aware that there is no reason to treat a Thread as a Runnable. If you don't want to start a thread manually, don't create a Thread. Just create the Runnable and pass it to execute or submit.
I want to emphasize the latter as it returns a Future which gives you for free what you are attempting to implement: the information when a task has been finished. It’s even easier when using invokeAll which will submit a bunch of Callables and return when all are done. Since you didn’t tell us anything about your actual task, it’s not clear whether you can let your tasks simply implement Callable (may return null) instead of Runnable.
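A small self-contained sketch of the invokeAll variant, assuming the per-worker logic can be wrapped in a Callable (the job body is a placeholder and InterruptedException handling is omitted):
ExecutorService pool = Executors.newCachedThreadPool();
List<Callable<Void>> jobs = new ArrayList<>();
for (int i = 0; i < 32; i++) {
    jobs.add(() -> {
        // the actual work of one worker goes here
        return null;
    });
}
pool.invokeAll(jobs);   // returns only after every job has completed
System.out.println("done");
pool.shutdown();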
If you can’t use Callables or don’t want to wait immediately on submission, you have to remember the returned Futures and query them at a later time:
static final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();

public static List<Future<?>> doJob(final Long batchIterationTime) {
    final Random r = new Random();
    List<Future<?>> list = new ArrayList<>(32);
    for (int i = 0; i < 32; i++) {
        Runnable job = new Runnable() {
            public void run() {
                // pretend to do something
                LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(r.nextInt(10)));
            }
        };
        list.add(cachedThreadPool.submit(job));
    }
    return list;
}
public static void main(String[] args) throws Exception {
    Test test = new Test();
    Map<Long, List<Future<?>>> map = new HashMap<>();
    for (int i = 0; i < 10; i++) {
        Long batchIterationTime = System.currentTimeMillis();
        while (map.containsKey(batchIterationTime))
            batchIterationTime++;
        map.put(batchIterationTime, doJob(batchIterationTime));
    }
    // print some statistics, if you really need
    int overAllDone = 0, overallPending = 0;
    for (Map.Entry<Long, List<Future<?>>> e : map.entrySet()) {
        int done = 0, pending = 0;
        for (Future<?> f : e.getValue()) {
            if (f.isDone()) done++;
            else pending++;
        }
        System.out.println(e.getKey() + "\t" + done + " done, " + pending + " pending");
        overAllDone += done;
        overallPending += pending;
    }
    System.out.println("Total\t" + overAllDone + " done, " + overallPending + " pending");
    // wait for the completion of all jobs
    for (List<Future<?>> l : map.values())
        for (Future<?> f : l)
            f.get();
    System.out.println("All Jobs are done");
}
But note that if you don’t need the ExecutorService for subsequent tasks, it’s much easier to wait for all jobs to complete:
cachedThreadPool.shutdown();
cachedThreadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
System.out.println("All Jobs are done");
But regardless of how unnecessary the manual tracking of the job status is, let’s delve into your attempt, so you may avoid the mistakes in the future:
if (countThreadMap.get(batchIterationTime) == null) {
The ConcurrentMap is thread safe, but this does not turn your concurrent code into sequential code (that would render multi-threading useless). The above line might be processed by up to all 32 threads at the same time, all finding that the key does not exist yet, so possibly more than one thread will then put the initial value into the map.
AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
atomicThreadCountArr[0] = new AtomicLong(1);
atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis());
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
That's why this is called the "check-then-act" anti-pattern. If more than one thread processes that code, they all will put their new value, confident that this was the right thing because they checked the initial condition before acting; but for all but one thread the condition has changed by the time they act, and they overwrite the value of a previous put operation.
} else {
AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
atomicThreadCountArr[0].getAndAdd(1);
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
Since you are modifying the AtomicLong which is already stored in the map, the put operation is useless; it puts back the very array that was retrieved just before. If there wasn't the mistake that there can be multiple initial values, as described above, the put operation would have no effect at all.
}
if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
Again, the use of a ConcurrentMap doesn't turn the multi-threaded code into sequential code. While it is clear that only the last thread will update the atomic counter to 32 (when the initial race condition doesn't materialize), it is not guaranteed that all other threads have already passed this if statement. Therefore more than one, up to all threads, can still be at this point of execution and see the value of 32. Or…
System.out.println("done");
countThreadMap.remove(batchIterationTime);
One of the threads which has seen the value 32 might execute this remove operation. At that point there might still be threads that have not executed the above if statement yet; they will not see the value 32 but instead produce a NullPointerException, as the array supposed to contain the counter is no longer in the map. This is what happens, occasionally…
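For completeness, a hedged sketch of how the counting could be written without the check-then-act race, using computeIfAbsent (Java 8) and the return value of the atomic increment; the start-time bookkeeping from the original is omitted, and onWorkerFinished is a name I made up for illustration:
final ConcurrentHashMap<Long, AtomicLong> counters = new ConcurrentHashMap<>();

void onWorkerFinished(Long batchIterationTime) {
    // computeIfAbsent creates the counter atomically, exactly once per key.
    AtomicLong counter = counters.computeIfAbsent(batchIterationTime, key -> new AtomicLong());
    // incrementAndGet returns a distinct value to each caller, so exactly one
    // thread observes 32 and is responsible for the cleanup.
    if (counter.incrementAndGet() == 32) {
        System.out.println("done");
        counters.remove(batchIterationTime);
    }
}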
After creating your 10 jobs, your main thread is still running - it doesn't wait for your jobs to complete before it calls report on the test. You try to overcome this with the while loop, but tPoolExecutor.getActiveCount() can come out as 0 before any workerThread has started executing, so the loop exits early and countThreadMap.size() is read while the worker threads are still adding entries to the map.
There are a number of ways to fix this - but I will let another answer-er do that because I have to leave at the moment.

Changing ThreadPoolExecutor

As mentioned in the link below:-
How to get the ThreadPoolExecutor to increase threads to max before queueing?
I changed the queue implementation to return false after entering an element.
As a result, whenever a new task is inserted into the queue, a new thread is created for it.
But when I ran the implementation below on a large scale (Bis System Testing) with loggers in place, a new problem appeared.
When a task comes in for execution, it gets inserted into the queue, and as the queue returns false a new thread is created for its execution. Idle threads already in the pool are not picked up, because tasks are handed to idle threads from the getTask() method, which picks tasks from the queue. So my question is: how do I change this behaviour so that, if threads are idle, those idle threads are assigned tasks for execution rather than new threads being created?
The output below will make it clearer:
Task 46 ends
Active Count: 0 Pool Size : 3 Idle Count: 3 Queue Size: 0
Task 47 ends
Active Count: 0 Pool Size : 3 Idle Count: 3 Queue Size: 0
Task 48 ends
Active Count: 0 Pool Size : 3 Idle Count: 3 Queue Size: 0
Active Count: 1 Pool Size : 4 Idle Count: 3 Queue Size: 0
Task 49 ends
Active Count: 2 Pool Size : 5 Idle Count: 3 Queue Size: 0
Task 50 ends
Active Count: 2 Pool Size : 5 Idle Count: 3 Queue Size: 0
The code files are as follows:-
The ThreadPoolExecutor is the Java 1.5 version, as we are using 1.5 on the server machine and cannot upgrade it.
ThreadPoolExecutor:-
public void execute(Runnable command) {
    System.out.println("Active Count: " + getActiveCount()
            + " Pool Size : " + getPoolSize() + " Idle Count: "
            + (getPoolSize() - getActiveCount()) + " Queue Size: " + getQueue().size());
    if (command == null)
        throw new NullPointerException();
    for (;;) {
        if (runState != RUNNING) {
            reject(command);
            return;
        }
        if (poolSize < corePoolSize && addIfUnderCorePoolSize(command))
            return;
        if (workQueue.offer(command))
            return;
        int status = addIfUnderMaximumPoolSize(command);
        if (status > 0) // created new thread
            return;
        if (status == 0) { // failed to create thread
            reject(command);
            return;
        }
        // Retry if created a new thread but it is busy with another task
    }
}
LinkedBlockingQueue:-
public class CustomBlockingQueue<E> extends LinkedBlockingQueue<E> {

    private static final long serialVersionUID = 1L;

    public CustomBlockingQueue() {
        super(Integer.MAX_VALUE);
    }

    public boolean offer(E e) {
        return false;
    }
}
In the rejection handler we are calling the put method of the queue, which we haven't overridden.
Calling the executor:
final CustomThreadPoolExecutor tpe = new CustomThreadPoolExecutor(3, 8, 0L, TimeUnit.MILLISECONDS,
        new MediationBlockingQueue<Runnable>(), new MediationRejectionHandler());

private static final int TASK_COUNT = 100;

for (int i = 0; i < TASK_COUNT; i++) {
    ......
    tpe.execute(new Task(i));
    .....
}
We are calling the executor with a core pool size of 3, a max pool size of 8, and an unbounded LinkedBlockingQueue for the tasks.
The easiest way to achieve the "start before queuing but prefer existing threads" behavior is to use a SynchronousQueue. It will accept offered items if and only if there's already a waiting receiver. So idle threads will get items, and once there are no idle threads the ThreadPoolExecutor will start new threads.
The only disadvantage is that once all threads are started, you can’t simply put the pending item into the queue as it has no capacity. So you either have to accept that the submitter gets blocked or you need another queue for putting pending tasks to it and another background thread which tries to put these pending items to the synchronous queue. This additional thread won’t hurt the performance as it is blocked in either of these two queues most of the time.
class QueuingRejectionHandler implements RejectedExecutionHandler {

    final ExecutorService processPending = Executors.newSingleThreadExecutor();

    public void rejectedExecution(
            final Runnable r, final ThreadPoolExecutor executor) {
        processPending.execute(new Runnable() {
            public void run() {
                executor.execute(r);
            }
        });
    }
}
…
ThreadPoolExecutor e = new ThreadPoolExecutor(
        corePoolSize, maximumPoolSize, keepAliveTime, unit,
        new SynchronousQueue<Runnable>(), new QueuingRejectionHandler());
I believe that your problem is in the following:
public boolean offer(E e) {
return false;
}
This will always return false to the TPE, which will cause it to start another thread, regardless of how many threads are currently idle. This is not what the code sample in my answer recommends; I had to correct an early problem with it after feedback.
My answer says to make your offer(...) method look something like:
public boolean offer(Runnable e) {
    /*
     * Offer it to the queue if there is 1 or 0 items already queued, else
     * return false so the TPE will add another thread.
     */
    if (size() <= 1) {
        return super.offer(e);
    } else {
        return false;
    }
}
So if there are 2 or more items already in the queue, it will fork another thread; otherwise it will enqueue the task, and it should then be picked up by the idle threads. You might also play with the 1 value: trying 0 or more than 1 may be more appropriate for your application. Injecting that value into your CustomBlockingQueue might be in order.
The solution given by Gray here is awesome, but I faced the same problem as yours, i.e. idle threads were not used to pick up new incoming tasks; instead a new thread was created whenever poolSize was less than maxPoolSize.
So I tried to tweak the functionality of ThreadPoolExecutor itself, by copying the complete class (not a good idea, but I couldn't find any other solution), making it extend ThreadPoolExecutor, and overriding the execute method.
Below is the method :
public void execute(Runnable command) {
    System.out.println("ActiveCount : " + this.getActiveCount()
            + " PoolSize : " + this.getPoolSize() + " QueueSize : "
            + this.getQueue().size());
    if (command == null)
        throw new NullPointerException();
    for (;;) {
        if (runState != RUNNING) {
            reject(command);
            return;
        }
        if (poolSize < corePoolSize && addIfUnderCorePoolSize(command))
            return;
        // Now it will never offer to the queue here but will go on to thread creation.
        // if (workQueue.offer(command))
        //     return;

        // This check is introduced to utilize idle threads instead of creating a new thread
        // for incoming tasks.
        // Example: coreSize = 3, maxPoolSize = 8.
        // activeCount = 4 and poolSize = 5, so 1 thread is idle; the queue is currently empty.
        // When a new task comes in, it is offered to the queue, and getTask() picks it up and executes it.
        // But if another task comes in before the idle thread has taken the task from the queue,
        // activeCount = 4 and poolSize = 5, so 1 thread is idle and the queue size is 1;
        // this check fails and a new thread is created if the pool size is under the max size,
        // or the task is added to the queue through the rejection handler.
        if ((this.getPoolSize() - this.getActiveCount()) > 0 &&
                (this.getPoolSize() - this.getActiveCount() - workQueue.size()) > 0) {
            workQueue.offer(command);
            return;
        }
        int status = addIfUnderMaximumPoolSize(command);
        if (status > 0) // created new thread
            return;
        if (status == 0) { // failed to create thread
            reject(command);
            return;
        }
        // Retry if created a new thread but it is busy with another task
    }
}
In the rejection handler I am using the put method to put the task into the (unbounded) queue, as suggested by Gray. :)
Note: I am not overriding the behaviour of the queue in my code.
So my question is how to change this behavior so that if threads are idle how to make sure that idle threads are assigned tasks for execution rather than creating new threads ??
Things have improved a lot in the last couple of years. Your problem has a simple solution with the Java 8 Executors newWorkStealingPool API:
newWorkStealingPool
public static ExecutorService newWorkStealingPool()
Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService executorService = Executors.newWorkStealingPool();
will do the required magic for you. newWorkStealingPool returns an ExecutorService that is actually a ForkJoinPool. In a ForkJoinPool, idle threads steal tasks from busy threads' queues, which is exactly what you are looking for.
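A small usage sketch (checked-exception handling omitted; note that ForkJoinPool threads are daemon threads, so you still need to wait on the futures before the JVM exits):
ExecutorService executorService = Executors.newWorkStealingPool();
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    final int taskId = i;
    futures.add(executorService.submit(() ->
            System.out.println("Task " + taskId + " on " + Thread.currentThread().getName())));
}
for (Future<?> f : futures) {
    f.get();   // wait for completion
}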
