I want to ask for a little more detail on the same question posted by Zeller over a year ago...
The javadoc says that the service returned by Executors.newCachedThreadPool reuses threads. How is this possible?
I get how the queue structure is set up internally; what I don't see is how the pool reuses the threads.
All examples I've seen have the developer create an instance of their thread and pass it in through the "execute" method.
For example...
ExecutorService executor = Executors.newCachedThreadPool();
for (int i = 0; i < 10; i++) {
Runnable worker = new WorkerThread(i); //will create 10 instances
executor.execute(worker);
}
I understand that a thread pool can easily manage the life cycle of each thread, but again, I see no methods nor the ability to access or restart any of the threads in the pool.
In the above example, I would then expect that each thread would be started, run, terminated and disposed of by the thread pool, but never reused.
A messaging system would be an example of where you'd need this. Say you have an onMessage handler and you'd like to reuse one of the threads in the pool to handle it, so I'd expect methods like...
worker = executor.getIdleThread();
worker.setData(message);
executor.resubmit(worker);
or maybe have the ExecutorService acting as a factory class and have it return an instance of your threads, where internally it decides to create a new one or reuse an old one.
ExecutorService executor = Executors.newCachedThreadPool(WorkerThread);
Runnable worker = executor.getThread();
worker.setData(message);
So I'm missing something. It's probably something simple but I've spent the afternoon reading tutorials and examples and still haven't figured it out. Can someone shed some light on the subject?
I was curious too how this was possible, since threads can't be restarted, so I analyzed the code of ThreadPoolExecutor, which is the implementation behind all the thread-pool ExecutorServices you get through the static factory methods of Executors.
First of all, as stated in the other answer, you don't hand Threads to a thread pool but Runnables, because handing it threads would defeat the purpose. So here is a detailed explanation of how an ExecutorService reuses threads.
You usually add a Runnable through submit(), which internally calls the execute() method. Basically this adds the Runnable to a queue and adds a Worker if none is available at the moment:
public void execute(Runnable command) {
...
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
}
The executor maintains a set of Workers (an inner class of ThreadPoolExecutor). A Worker holds your submitted Runnable and a Thread that is created through the ThreadFactory you may have set, or a default one otherwise; the Worker itself is also a Runnable, and it is what the factory creates the Thread from:
private final class Worker
extends AbstractQueuedSynchronizer
implements Runnable
{
...
Worker(Runnable firstTask) {
this.firstTask = firstTask;
this.thread = getThreadFactory().newThread(this);
}
public void run() {
runWorker(this);
}
...
}
When a Worker is added, its thread gets started right away:
private boolean addWorker(Runnable firstTask, boolean core) {
...
Worker w = new Worker(firstTask);
Thread t = w.thread;
...
t.start();
...
return true;
}
The runWorker() method runs in a loop and uses getTask() to fetch the Runnables you submitted, which are queued in the workQueue; it will wait inside getTask() until a task arrives or a timeout happens.
final void runWorker(Worker w) {
Runnable task = w.firstTask;
w.firstTask = null;
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
clearInterruptsForTaskRun();
try {
beforeExecute(w.thread, task);
Throwable thrown = null;
try {
task.run();
} catch (RuntimeException x) {
...
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
} finally {
processWorkerExit(w, completedAbruptly);
}
}
Here is the getTask() method
private Runnable getTask() {
...
try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
...
} catch (InterruptedException retry) {
...
}
}
}
tl;dr So basically the thread pool maintains worker threads that run in loops and execute the Runnables handed to them by a blocking queue. Workers are created and destroyed on demand (a worker that finds no more tasks ends; if there is no free worker and the count is below the maximum pool size, a new worker is created). Also, I wouldn't call it "reuse" so much as the thread being used as a looper that executes all the Runnables.
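To observe this from the outside, a small demo (hypothetical class and task names) can print the name of the thread each task runs on; with a cached pool and short, spaced-out tasks you will typically see the same worker thread name repeated:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReuseDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newCachedThreadPool();
        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            executor.execute(() -> System.out.println(
                    "task " + taskId + " ran on " + Thread.currentThread().getName()));
            Thread.sleep(50); // let the previous task finish so its worker goes idle
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}

Because idle cached-pool workers stay alive for sixty seconds, every task here typically lands on the same looping worker rather than on a fresh thread.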
I understand that a thread pool can easily manage the life cycle of
each thread, but again, I see no methods nor the ability to access or
restart any of the threads in the pool.
The management of threads is done internally. The ExecutorService interface only provides the externally visible methods.
The javadoc of newCachedThreadPool simply states
Creates a thread pool that creates new threads as needed, but will
reuse previously constructed threads when they are available. [...] Calls to execute will reuse
previously constructed threads if available. If no existing thread is
available, a new thread will be created and added to the pool. Threads
that have not been used for sixty seconds are terminated and removed
from the cache. Thus, a pool that remains idle for long enough will
not consume any resources. [...]
So that is the guarantee you get. If you want to know how it is implemented, you can look at the source code, in particular the code of ThreadPoolExecutor. Basically, an idle thread terminates after it has gone some time (sixty seconds here) without executing a task.
Related
The author of Java Concurrency in Practice mentions in section 5.3.1,
... many producer-consumer designs can be expressed using the Executor task execution framework, which itself uses the producer-consumer pattern.
... The producer-consumer pattern offers a thread-friendly means of decomposing the problem into simpler components (if possible).
Does Executor framework implementation internally follow producer-consumer pattern?
If yes, How the idea of producer-consumer pattern helps in implementation of Executor framework?
Check the implementation of ThreadPoolExecutor:
public void execute(Runnable command) {
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
}
Now check addWorker():
private boolean addWorker(Runnable firstTask, boolean core) {
// After some checks, it creates Worker and start the thread
Worker w = new Worker(firstTask);
Thread t = w.thread;
// After some checks, thread has been started
t.start();
}
Implementation of Worker:
/**
* Class Worker mainly maintains interrupt control state for
* threads running tasks, along with other minor bookkeeping.
* This class opportunistically extends AbstractQueuedSynchronizer
* to simplify acquiring and releasing a lock surrounding each
* task execution. This protects against interrupts that are
* intended to wake up a worker thread waiting for a task from
* instead interrupting a task being run. We implement a simple
* non-reentrant mutual exclusion lock rather than use ReentrantLock
* because we do not want worker tasks to be able to reacquire the
* lock when they invoke pool control methods like setCorePoolSize.
*/
private final class Worker
extends AbstractQueuedSynchronizer
implements Runnable
{
/** Delegates main run loop to outer runWorker */
public void run() {
runWorker(this);
}
final void runWorker(Worker w) {
Runnable task = w.firstTask;
w.firstTask = null;
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
clearInterruptsForTaskRun();
try {
beforeExecute(w.thread, task);
Throwable thrown = null;
try {
task.run();
} catch (RuntimeException x) {
thrown = x; throw x;
} catch (Error x) {
thrown = x; throw x;
} catch (Throwable x) {
thrown = x; throw new Error(x);
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
        } finally {
            processWorkerExit(w, completedAbruptly);
        }
    }
}
Which Runnable to execute depends on the logic below, in getTask().
/**
* Performs blocking or timed wait for a task, depending on
* current configuration settings, or returns null if this worker
* must exit because of any of:
* 1. There are more than maximumPoolSize workers (due to
* a call to setMaximumPoolSize).
* 2. The pool is stopped.
* 3. The pool is shutdown and the queue is empty.
* 4. This worker timed out waiting for a task, and timed-out
* workers are subject to termination (that is,
* {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
* both before and after the timed wait.
*
* @return task, or null if the worker must exit, in which case
* workerCount is decremented
*/
private Runnable getTask() {
// After some checks, below code returns Runnable
try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
}
In Summary:
The producer adds a Runnable (or a Callable wrapped by submit()) via the execute() API, which calls workQueue.offer(command)
The execute() method creates a Worker thread if needed
Each Worker thread runs in an (almost) infinite loop, fetching tasks (Runnables) from getTask()
getTask() polls or takes from the BlockingQueue<Runnable> workQueue; it is the consumer of the BlockingQueue
Does Executor framework implementation internally follow producer-consumer pattern?
Yes, as explained above.
If yes, How the idea of producer-consumer pattern helps in implementation of Executor framework?
BlockingQueue implementations like ArrayBlockingQueue and ExecutorService implementations like ThreadPoolExecutor are thread safe. The burden on the programmer of explicitly implementing the same behaviour with synchronized, wait and notify calls is removed.
The Executor framework uses the producer-consumer pattern.
From Wikipedia,
In computing, the producer–consumer problem (also known as the
bounded-buffer problem) is a classic example of a multi-process
synchronization problem. The problem describes two processes, the
producer and the consumer, who share a common, fixed-size buffer used
as a queue. The producer's job is to generate data, put it into the
buffer, and start again. At the same time, the consumer is consuming
the data (i.e., removing it from the buffer), one piece at a time. The
problem is to make sure that the producer won't try to add data into
the buffer if it's full and that the consumer won't try to remove data
from an empty buffer.
If we look at the different ExecutorService implementations, more specifically the ThreadPoolExecutor class, it basically has the following:
A queue where the jobs are submitted and held
A number of threads that consume the tasks submitted to the queue
Based on the type of executor service, these parameters change.
For example,
a fixed thread pool uses a LinkedBlockingQueue and a user-configured number of threads;
a cached thread pool uses a SynchronousQueue and anywhere from 0 to Integer.MAX_VALUE threads, based on the number of submitted tasks (see the sketch below).
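For reference, the Executors javadoc describes roughly which ThreadPoolExecutor configurations back these two factories; the sketch below spells them out so the queue and thread-count differences are explicit (treat it as an approximation, not the exact JDK source):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFactories {
    // Roughly what Executors.newFixedThreadPool(nThreads) returns
    static ExecutorService fixed(int nThreads) {
        return new ThreadPoolExecutor(
                nThreads, nThreads,                   // core == max: a fixed set of workers
                0L, TimeUnit.MILLISECONDS,            // idle workers never time out
                new LinkedBlockingQueue<Runnable>()); // unbounded queue holds waiting tasks
    }

    // Roughly what Executors.newCachedThreadPool() returns
    static ExecutorService cached() {
        return new ThreadPoolExecutor(
                0, Integer.MAX_VALUE,                 // grows on demand
                60L, TimeUnit.SECONDS,                // idle workers die after 60 seconds
                new SynchronousQueue<Runnable>());    // hands each task straight to a free worker
    }
}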
I have the code sample:
public class ThreadPoolTest {
public static void main(String[] args) throws InterruptedException {
for (int i = 0; i < 100; i++) {
if (test() != 5 * 100) {
throw new RuntimeException("main");
}
}
test();
}
private static long test() throws InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(100);
CountDownLatch countDownLatch = new CountDownLatch(100 * 5);
Set<Thread> threads = Collections.synchronizedSet(new HashSet<>());
AtomicLong atomicLong = new AtomicLong();
for (int i = 0; i < 5 * 100; i++) {
Thread.sleep(100);
executorService.submit(new Runnable() {
@Override
public void run() {
try {
threads.add(Thread.currentThread());
atomicLong.incrementAndGet();
countDownLatch.countDown();
Thread.sleep(1000);
} catch (Exception e) {
System.out.println(e);
}
}
});
}
executorService.shutdown();
countDownLatch.await();
if (threads.size() != 100) {
throw new RuntimeException("test");
}
return atomicLong.get();
}
}
I deliberately made the application run for a long time, and watched it in JVisualVM.
In each time gap the thread pool was recreated.
After several minutes I see this: (screenshot)
but if I use newCachedThreadPool instead of newFixedThreadPool I see a constant picture: (screenshot)
Can you explain this behaviour?
P.S.
The problem was that an exception occurred in the code, so the second iteration was never started.
To answer your question; just look here:
private static long test() throws InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(100);
The JVM creates a new ThreadPool during each run of test(), because you tell it to do so.
In other words: if you intend to re-use the same threadpool, then avoid creating/shutting down your instances all the time.
In that sense, the simple fix is: move the creation of that ExecutorService into your main() method; and pass the service as argument to your test() method.
Edit: regarding your last comment on cached vs. fixed threadpool; you probably want to look into this question.
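A minimal sketch of that fix, keeping the original structure (the body of test() is elided here and replaced with a placeholder): the pool is created once in main(), passed into test(), and shut down only after the last iteration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolTest {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(100);
        try {
            for (int i = 0; i < 100; i++) {
                if (test(executorService) != 5 * 100) {
                    throw new RuntimeException("main");
                }
            }
        } finally {
            executorService.shutdown(); // shut the single shared pool down once, at the end
        }
    }

    private static long test(ExecutorService executorService) throws InterruptedException {
        // ... same body as the original test(), minus the
        // Executors.newFixedThreadPool(100) and executorService.shutdown() calls
        return 5 * 100; // placeholder so this sketch compiles
    }
}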
Because you asked it to, in your code? :) Try moving the pool creation code outside of test().
From docs:
newFixedThreadPool
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly shutdown.
newCachedThreadPool
Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to execute will reuse previously constructed threads if available. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources. Note that pools with similar properties but different details (for example, timeout parameters) may be created using ThreadPoolExecutor constructors.
I need to ask how thread pooling is implemented so as to have a constant number of threads executing each time a task submission happens (in an Executor, to avoid the overhead of creating and deleting threads each time).
executor.submit(Runnable)
Let's say we create some threads at the start, and when a task comes in we assign it to one of them using some Queue implementation. But after completing its task, how can a thread return to the pool again, when the thread lifecycle says that
"After execution of its run method it goes into TERMINATED state and can't be used again"
I do not understand how a thread pool keeps a constant number of threads available for executing whatever tasks land in its queue.
It would be great if anyone could provide an example of a thread being reused after it completes its task.
Thanks in advance!
"After execution of its run method it goes into TERMINATED state and can't be used again"
It doesn't finish its run(). Instead, it has a loop which runs the run() of the tasks you provide it.
Simplifying the thread pool pattern dramatically, you have code which looks like this:
final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<Runnable>();
public void submit(Runnable runs) {
tasks.add(runs);
}
volatile boolean running = true;
// running in each thread in the pool
class RunsRunnable implements Runnable {
    public void run() {
        while (running) {
            try {
                Runnable runs = tasks.take(); // blocks until a task is available
                runs.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop waiting when asked to shut down
            } catch (Throwable t) {
                // handles t
            }
        }
    }
}
In this example, you can see that while the run() of each task completes, the run() of the thread itself does not until the pool is shut down.
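A hypothetical wrapper around that sketch makes the point visible: each worker thread below calls run() on many submitted tasks, but its own run() only returns once running becomes false.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyPool {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<Runnable>();
    private volatile boolean running = true;

    public TinyPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            new Thread(new Runnable() {
                public void run() {
                    // the worker's own run() loops; each iteration runs one task's run()
                    while (running) {
                        try {
                            tasks.take().run();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // asked to stop while waiting
                        } catch (Throwable t) {
                            // a failed task must not kill the worker thread
                        }
                    }
                }
            }, "tiny-pool-" + i).start();
        }
    }

    public void submit(Runnable task) {
        tasks.add(task);
    }

    public void shutdown() {
        running = false; // workers exit after their current wait or task
    }
}

A real pool such as ThreadPoolExecutor also interrupts workers that are blocked waiting on the queue when it shuts down, so they do not sit in take() forever.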
Usually, when we use a thread pool, inside its run() method it is forced to run iteratively as long as there are tasks available in the queue.
In the example below, pool.removeFromQueue() will be called iteratively.
public class MyThread<V> extends Thread {
private MyThreadPool<V> pool;
private boolean active = true;
public boolean isActive() {
return active;
}
public void setPool(MyThreadPool<V> p) {
pool = p;
}
/**
* Checks if there are any unfinished tasks left. if there are , then runs
* the task and call back with output on resultListner Waits if there are no
* tasks available to run If shutDown is called on MyThreadPool, all waiting
* threads will exit and all running threads will exit after finishing the
* task
*/
@Override
public void run() {
ResultListener<V> result = pool.getResultListener();
Callable<V> task;
while (true) {
task = pool.removeFromQueue();
if (task != null) {
try {
V output = task.call();
result.finish(output);
} catch (Exception e) {
result.error(e);
}
} else {
if (!isActive())
break;
else {
synchronized (pool.getWaitLock()) {
try {
pool.getWaitLock().wait();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
}
}
void shutdown() {
active = false;
    }
}
You need to design your thread pool:
public MyThreadPool(int size, ResultListener<V> myResultListener) {
tasks = new LinkedList<Callable<V>>();
threads = new LinkedList<MyThread<V>>();
shutDown = false;
resultListener = myResultListener;
for (int i = 0; i < size; i++) {
MyThread<V> myThread = new MyThread<V>();
myThread.setPool(this);
threads.add(myThread);
myThread.start();
}
}
You can take a look here: http://www.ibm.com/developerworks/library/j-jtp0730/index.html for more details and an implementation example. The threads in the pool will wait if the queue is empty and will each start to consume messages once they are notified that the queue has some elements.
ExecutorService executor = Executors.newFixedThreadPool(2);
- The above statement creates a thread pool with a fixed size of 2.
executor.execute(new Worker());
- The above statement takes an instance of the class Worker, which implements the Runnable interface.
- Here the executor is an intermediate object that executes the task and manages the Thread objects.
- Executing the above statement causes the run() method to be executed, and once run() completes the thread does not go into the dead state; it moves back into the pool, waiting for another piece of work to be assigned to it, so it can once again move into the runnable state and then to running. All of this is handled by the executor.
executor.shutdown();
- The above statement shuts down the executor itself, gracefully handling the shutdown of all the threads managed by it; calling shutdown() on that central object in turn terminates each of the threads it manages.
////////// Edited Part//////////////////////
- First of all, a Runnable has a run() method which cannot return anything, and run() cannot throw a checked exception. So Callable was introduced in Java 5; it is a parameterized type, has a method called call(), and is capable of returning a value and throwing checked exceptions (a short example follows the start()/run() note below).
Now see this Example:
Thread t = new Thread(new Worker());
t.run();
t.start();
- t.run() is just a plain call to the run() method; this won't spawn a new thread of execution.
- t.start(), on the other hand, does the preparation needed to initialize a new thread of execution, assigns the task to that newly formed thread (which then calls the Runnable's run() method), and returns quickly.
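Going back to the Runnable vs Callable point above, here is a small illustrative example (the numbers and messages are made up): execute() takes a Runnable with no result, while submit() with a Callable returns a Future whose get() delivers the computed value.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Runnable: no result, and run() cannot throw a checked exception
        executor.execute(() -> System.out.println("runnable done"));

        // Callable: returns a value and may throw a checked exception
        Future<Integer> future = executor.submit(() -> {
            Thread.sleep(100);   // the checked InterruptedException is allowed here
            return 6 * 7;
        });

        System.out.println("callable returned " + future.get()); // blocks for the result
        executor.shutdown();
    }
}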
Threads in Java become a necessity when using Swing and AWT, mainly for GUI components.
I totally agree with Peter but want to add the steps of the ExecutorService execution flow, for a clearer understanding.
If you create a pool (a fixed-size pool) of threads, it does not mean the threads are all created right away.
If you submit and/or execute a new task (Runnable or Callable), a new thread will be created only if the count of created threads is less than the size of the pool.
Created threads do not literally return to the pool; a thread simply waits for a new value in the blocking queue, and that is the point we can call "returning to the pool".
All threads in the pool run as Peter described above.
I would like to ask a basic question about Java threads. Let's consider a producer-consumer scenario. Say there is one producer and n consumers. Consumers arrive at random times, and once they are served they go away, meaning each consumer runs on its own thread. Should I still use a run-forever condition for the consumer?
public class Consumer extends Thread {
public void run() {
while (true) {
}
}
}
Won't this keep the thread running forever?
I wouldn't extend Thread, instead I would implement Runnable.
If you want the thread to run forever, I would have it loop forever.
A common alternative is to use
while(!Thread.currentThread().isInterrupted()) {
or
while(!Thread.interrupted()) {
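For example, a consumer whose loop keys off the interrupt flag (a generic sketch, not tied to the asker's classes) can be stopped cleanly by whoever owns its thread, or by ExecutorService.shutdownNow():

public class InterruptibleConsumer implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // serve one customer / take one message here
                Thread.sleep(100); // placeholder for blocking work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag so the loop exits
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new InterruptibleConsumer());
        t.start();
        Thread.sleep(500);
        t.interrupt(); // the loop notices the flag and run() returns
        t.join();
    }
}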
It will, so you might want to do something like
while(beingServed)
{
//check if the customer is done being served (set beingServed to false)
}
This way you'll escape the loop when it's meant to die.
Why not use a boolean that represents the presence of the Consumer?
public class Consumer extends Thread {
private volatile boolean present;
public Consumer() {
present = true;
}
public void run() {
while (present) {
// Do Stuff
}
}
public void consumerLeft() {
present = false;
}
}
First, you can create a thread for each consumer, and after the consumer finishes its job it will leave the run function and the thread will die, so there is no need for an infinite loop. However, creating a thread for each consumer is not a good idea, since thread creation is quite expensive from a performance point of view; threads are very expensive resources. In addition, I agree with the answers above that it is better to implement Runnable and not to extend Thread; extend Thread only when you wish to customize your thread.
I strongly suggest you use a thread pool, with the consumer as the Runnable object that is run by a thread from the pool.
The code should look like this:
public class ConsumerMgr{
int poolSize = 2;
int maxPoolSize = 2;
long keepAliveTime = 10;
ThreadPoolExecutor threadPool = null;
final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(
5);
public ConsumerMgr()
{
threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize,
keepAliveTime, TimeUnit.SECONDS, queue);
}
public void runTask(Runnable task)
{
    // System.out.println("Task count.." + threadPool.getTaskCount());
    // System.out.println("Queue Size before assigning the task.." + queue.size());
    threadPool.execute(task);
    // System.out.println("Queue Size after assigning the task.." + queue.size());
    // System.out.println("Pool Size after assigning the task.." + threadPool.getActiveCount());
    // System.out.println("Task count.." + threadPool.getTaskCount());
    System.out.println("Task count.." + queue.size());
}
}
It is not a good idea to extend Thread (unless you are coding a new kind of thread, i.e. never).
The best approach is to pass a Runnable to the Thread's constructor, like this:
public class Consumer implements Runnable {
public void run() {
while (true) {
// Do something
}
}
}
new Thread(new Consumer()).start();
In general, while(true) is OK, but you have to handle being interrupted, either by normal wake or by spurious wakeup. There are many examples out there on the web.
I recommend reading Java Concurrency in Practice.
For the producer-consumer pattern you are better off using wait() and notify(). See this tutorial. This is far more efficient than a busy while(true) loop.
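A minimal wait()/notify() sketch of that idea (the class name and capacity are made up): the consumer waits while the buffer is empty instead of spinning, and the producer notifies after adding an item.

import java.util.ArrayDeque;
import java.util.Queue;

public class Buffer<T> {
    private final Queue<T> items = new ArrayDeque<T>();
    private final int capacity;

    public Buffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();          // producer sleeps until a slot frees up
        }
        items.add(item);
        notifyAll();         // wake any consumer waiting for data
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();          // consumer sleeps instead of burning CPU in while(true)
        }
        T item = items.remove();
        notifyAll();         // wake any producer waiting for space
        return item;
    }
}

In practice a BlockingQueue implementation such as ArrayBlockingQueue gives you exactly this behaviour without hand-written synchronization.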
If you want your threads to process messages until you kill them (or they are killed in some way), then inside while (true) there should be some synchronized call to your producer (or a blocking queue, or queuing system) which blocks until a message becomes available. Once a message is consumed, the loop restarts and waits again.
If you want to manually instantiate a bunch of threads which each pull a message from a producer just once and then die, don't use while (true).
I'm trying to find a less clunky solution to a Java concurrency problem.
The gist of the problem is that I need a shutdown call to block while there are still worker threads active, but the crucial aspect is that the worker tasks are each spawned and completed asynchronously so the hold and release must be done by different threads. I need them to somehow send a signal to the shutdown thread once their work has completed. Just to make things more interesting, the worker threads cannot block each other so I'm unsure about the application of a Semaphore in this particular instance.
I have a solution which I think safely does the job, but my unfamiliarity with the Java concurrency utils leads me to think that there might be a much easier or more elegant pattern. Any help in this regard would be greatly appreciated.
Here's what I have so far, fairly sparse except for the comments:
final private ReentrantReadWriteLock shutdownLock = new ReentrantReadWriteLock();
volatile private int activeWorkerThreads;
private boolean isShutdown;
private void workerTask()
{
try
{
// Point A: Worker tasks mustn't block each other.
shutdownLock.readLock().lock();
// Point B: I only want worker tasks to continue if the shutdown signal
// hasn't already been received.
if (isShutdown)
return;
activeWorkerThreads ++;
// Point C: This async method call returns immediately, soon after which
// we release our lock. The shutdown thread may then acquire the write lock
// but we want it to continue blocking until all of the asynchronous tasks
// have completed.
executeAsynchronously(new Runnable()
{
@Override
final public void run()
{
try
{
// Do stuff.
}
finally
{
// Point D: Release of shutdown thread loop, if there are no other
// active worker tasks.
activeWorkerThreads --;
}
}
});
}
finally
{
shutdownLock.readLock().unlock();
}
}
final public void shutdown()
{
try
{
// Point E: Shutdown thread must block while any worker threads
// have breached Point A.
shutdownLock.writeLock().lock();
isShutdown = true;
// Point F: Is there a better way to wait for this signal?
while (activeWorkerThreads > 0)
;
// Do shutdown operation.
}
finally
{
shutdownLock.writeLock().unlock();
}
}
Thanks in advance for any help!
Russ
Declaring activeWorkerThreads as volatile doesn't make activeWorkerThreads++ safe, as ++ is just shorthand for
activeWorkerThreads = activeWorkerThreads + 1;
which isn't atomic. Use AtomicInteger instead.
Does executeAsynchronously() send jobs to an ExecutorService? If so, you can just use the awaitTermination method, so your shutdown hook will be:
executor.shutdown();
executor.awaitTermination(1, TimeUnit.MINUTES);
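Applied to the posted shutdown() method, that could look roughly like the sketch below; it assumes executeAsynchronously() hands tasks to an ExecutorService stored in a field named executor, which the original code does not show.

public void shutdown() throws InterruptedException {
    isShutdown = true;            // workerTask() will refuse new work
    executor.shutdown();          // the executor stops accepting tasks
    if (!executor.awaitTermination(1, TimeUnit.MINUTES)) {
        executor.shutdownNow();   // give up waiting and interrupt any stragglers
    }
    // Do shutdown operation.
}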
You can use a semaphore in this scenario and not require a busy wait for the shutdown() call. The way to think of it is as a set of tickets that are handed out to workers to indicate that they are in flight. If the shutdown() method can acquire all of the tickets then it knows that it has drained all workers and there is no activity. Because acquire() is a blocking call, shutdown() won't spin. I've used this approach for a distributed master-worker library and it's easy to extend it to handle timeouts and retries.
Executor executor = // ...
final int permits = // ...
final Semaphore semaphore = new Semaphore(permits);
void schedule(final Runnable task) throws InterruptedException {
semaphore.acquire();
try {
executor.execute(new Runnable() {
@Override public void run() {
try {
task.run();
} finally {
semaphore.release();
}
}
});
} catch (RejectedExecutionException e) {
semaphore.release();
throw e;
}
}
void shutDown() {
semaphore.acquireUninterruptibly(permits);
// do stuff
}
An ExecutorService should be the preferred solution here, as sbridges mentioned.
As an alternative, if the number of worker threads is fixed, then you can use CountDownLatch:
final CountDownLatch latch = new CountDownLatch(numberOfWorkers);
Pass the latch to every worker thread and call latch.countDown() when task is done.
Call latch.await() from the main thread to wait for all tasks to complete.
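A hedged sketch of that approach, assuming the number of worker tasks is known up front:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        final int numberOfWorkers = 4;
        final CountDownLatch latch = new CountDownLatch(numberOfWorkers);
        ExecutorService executor = Executors.newCachedThreadPool();

        for (int i = 0; i < numberOfWorkers; i++) {
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        // do the worker's job here
                    } finally {
                        latch.countDown(); // signal completion even if the task failed
                    }
                }
            });
        }

        latch.await();       // blocks until every worker has counted down
        executor.shutdown();
        // safe to run the shutdown operation now
    }
}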
Whoa nelly. Never do this:
// Point F: Is there a better way to wait for this signal?
while (activeWorkerThreads > 0)
;
You're spinning and consuming CPU. Use a proper notification:
First: synchronize on an object, then check activeWorkerThreads, and wait() on the object if it's still > 0:
synchronized (mutexObject) {
while (activeWorkerThreads > 0) {
mutexObject.wait();
}
}
Second: Have the workers notify() the object after they decrement the activeWorkerThreads count. You must synchronize on the object before calling notify.
synchronized (mutexObject) {
activeWorkerThreads--;
mutexObject.notify();
}
Third: Seeing as you are (after implementing 1 & 2) synchronizing on an object whenever you touch activeWorkerThreads, use it as protection; there is no need for the variable to be volatile.
Then: the same object you use as a mutex for controlling access to activeWorkerThreads could also be used to control access to isShutdown. Example:
synchronized (mutexObject) {
if (isShutdown) {
return;
}
}
This won't cause workers to block each other except for immeasurably small amounts of time (which you likely do not avoid by using a read-write lock anyway).
This is more like a comment on sbridges' answer, but it was a bit too long to submit as a comment. Anyway, just one remark.
When you shut down the executor, submitting a new task to it will result in an unchecked RejectedExecutionException if you use the default implementations (like Executors.newSingleThreadExecutor()). So in your case you probably want to use the following code.
code:
new ThreadPoolExecutor(1,
1,
1,
TimeUnit.HOURS,
new LinkedBlockingQueue<Runnable>(),
new ThreadPoolExecutor.DiscardPolicy());
This way, the tasks that were submitted to the executor after shutdown() was called are simply ignored. The parameters above (1, 1, ... etc.) produce an executor that is basically a single-thread executor, but one that doesn't throw the runtime exception when a late task is rejected.
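A quick illustrative check of that behaviour (the class name is made up): with DiscardPolicy, a task submitted after shutdown() is dropped silently, whereas the default AbortPolicy would throw RejectedExecutionException.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DiscardAfterShutdown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = new ThreadPoolExecutor(
                1, 1, 1, TimeUnit.HOURS,
                new LinkedBlockingQueue<Runnable>(),
                new ThreadPoolExecutor.DiscardPolicy());

        executor.execute(() -> System.out.println("accepted before shutdown"));
        executor.shutdown();

        // With the default AbortPolicy this call would throw RejectedExecutionException;
        // with DiscardPolicy it is silently ignored.
        executor.execute(() -> System.out.println("never runs"));

        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}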