What happens to the remaining threads of invokeAny ExecutorService - java

When invokeAny successfully returns, what happens to the remaining threads? Do they get killed automatically? If not, how can I make sure that those threads are stopped and returned to the thread pool?
ExecutorService executorService = Executors.newFixedThreadPool(10);
executorService.invokeAny(callables);

Just elaborating more on the topic.
What happens to remaining threads
If the threads are executing methods which throw InterruptedException, then they receive the exception. Otherwise, their interrupted flag is set to true.
Do they get killed automatically?
Not really.
- If they are running an infinite loop, you need to make sure you do not swallow the InterruptedException, and exit the loop (or return) in the catch block.
- If you are not expecting the exception, you need to keep checking the flag using Thread.interrupted() or Thread.currentThread().isInterrupted() and exit when it is true.
- If you are not running an infinite loop, the threads will complete their tasks and stop, but their results will not be considered.
In the following code, both task1 and task2 keep running even after the service is shut down and the main method exits:
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test {
    public static void main(String[] args) throws Exception {
        Callable<String> task1 = () -> {
            for (;;) {
                try {
                    Thread.sleep(9000);
                    System.out.println(Thread.currentThread().getName()
                            + " is still running..");
                } catch (InterruptedException e) {
                    System.out.println(Thread.currentThread().getName()
                            + " has swallowed the exception.");
                    // it is a good practice to break the loop here or return
                }
            }
        };
        Callable<String> task2 = () -> {
            for (;;) {
                if (Thread.interrupted()) {
                    // it is a good practice to break the loop here or return
                    System.out.println(Thread.currentThread().getName()
                            + " is interrupted but it is still running..");
                }
            }
        };
        List<Callable<String>> tasks = List.of(task1, task2, () -> "small task done!");
        ExecutorService service = Executors.newFixedThreadPool(4);
        String result = service.invokeAny(tasks);
        System.out.println(result);
        service.shutdownNow();
        System.out.println("main thread done");
    }
}
Output:
small task done!
pool-1-thread-2 is interrupted but it is still running..
pool-1-thread-1 has swallowed the exception.
pool-1-thread-1 has swallowed the exception.
main thread done
pool-1-thread-1 is still running..
pool-1-thread-1 is still running..
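For contrast, here is a minimal sketch (my own variation on task1, not part of the original example) of a task that honours the interrupt, so it exits as soon as invokeAny cancels it and its worker thread goes back to the pool:
Callable<String> wellBehavedTask = () -> {
    for (;;) {
        try {
            Thread.sleep(9000);
        } catch (InterruptedException e) {
            // restore the flag and leave the loop instead of swallowing the interrupt
            Thread.currentThread().interrupt();
            return "cancelled cleanly";
        }
    }
};
With tasks written this way the workers become idle again right after invokeAny returns; you still need shutdown()/shutdownNow() for the JVM to exit, but nothing keeps running in the background.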

When invokeAny returns, the remaining tasks that have not yet completed are cancelled.
Here is the documentation of it:
Upon normal or exceptional return, tasks that have not completed are cancelled.

Related

Future.cancel() followed by Future.get() kills my thread

I want to use the Executor interface (using Callable) in order to start a Thread (let's call it callable Thread) which will do work that uses blocking methods.
That means the callable Thread can throw an InterruptedException when the main Thread calls the Future.cancel(true) (which calls a Thread.interrupt()).
I also want my callable Thread to properly terminate when interrupted, using other blocking methods in a cancellation section of the code.
While implementing this, I experienced the following behavior: When I call Future.cancel(true) method, the callable Thread is correctly notified of the interruption BUT if the main Thread immediately waits for its termination using Future.get(), the callable Thread is kind of killed when calling any blocking method.
The following JUnit 5 snippet illustrates the problem.
We can easily reproduce it if the main Thread does not sleep between the cancel() and the get() calls.
If we sleep a while but not enough, we can see the callable Thread doing half of its cancellation work.
If we sleep enough, the callable Thread properly completes its cancellation work.
Note 1: I checked the interrupted status of the callable Thread: it is correctly set once and only once, as expected.
Note 2: When debugging my callable Thread step by step after interruption (when stepping into the cancellation code), I "lose" it after several steps when entering a blocking method (no InterruptedException seems to be thrown).
@Test
public void testCallable() {
ExecutorService executorService = Executors.newSingleThreadExecutor();
System.out.println("Main thread: Submitting callable...");
final Future<Void> future = executorService.submit(() -> {
boolean interrupted = Thread.interrupted();
while (!interrupted) {
System.out.println("Callable thread: working...");
try {
Thread.sleep(500);
} catch (InterruptedException e) {
System.out.println("Callable thread: Interrupted while sleeping, starting cancellation...");
Thread.currentThread().interrupt();
}
interrupted = Thread.interrupted();
}
final int steps = 5;
for (int i=0; i<steps; ++i) {
System.out.println(String.format("Callable thread: Cancelling (step %d/%d)...", i+1, steps));
try {
Thread.sleep(200);
} catch (InterruptedException e) {
Assertions.fail("Callable thread: Should not be interrupted!");
}
}
return null;
});
final int mainThreadSleepBeforeCancelMs = 2000;
System.out.println(String.format("Main thread: Callable submitted, sleeping %d ms...", mainThreadSleepBeforeCancelMs));
try {
Thread.sleep(mainThreadSleepBeforeCancelMs);
} catch (InterruptedException e) {
Assertions.fail("Main thread: interrupted while sleeping.");
}
System.out.println("Main thread: Cancelling callable...");
future.cancel(true);
System.out.println("Main thread: Cancelable just cancelled.");
// Waiting "manually" helps to test error cases:
// - Setting to 0 (no wait) will prevent the callable thread to correctly terminate;
// - Setting to 500 will prevent the callable thread to correctly terminate (but some cancel process is done);
// - Setting to 1500 will let the callable thread to correctly terminate.
final int mainThreadSleepBeforeGetMs = 0;
try {
Thread.sleep(mainThreadSleepBeforeGetMs);
} catch (InterruptedException e) {
Assertions.fail("Main thread: interrupted while sleeping.");
}
System.out.println("Main thread: calling future.get()...");
try {
future.get();
} catch (InterruptedException e) {
System.out.println("Main thread: Future.get() interrupted: Error.");
} catch (ExecutionException e) {
System.out.println("Main thread: Future.get() threw an ExecutionException: Error.");
} catch (CancellationException e) {
System.out.println("Main thread: Future.get() threw an CancellationException: OK.");
}
executorService.shutdown();
}
When you call get() on a cancelled Future, you will get a CancellationException, hence you will not wait for the Callable's code to perform its cleanup. Then, you are just returning, and the observed behavior of threads being killed seems to be part of JUnit's cleanup once it has determined that the test has completed.
In order to wait for the full cleanup, change the last line from
executorService.shutdown();
to
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.DAYS);
Note that it is simpler to declare unexpected exceptions in the method’s throws clause rather than cluttering your test code with catch clauses calling Assertions.fail. JUnit will report such exceptions as failure anyway.
Then, you can remove the entire sleep code.
It might be worth putting the ExecutorService management into @Before/@After or even @BeforeClass/@AfterClass methods, to keep the testing methods free of that and focused on the actual tests.¹
¹ These are the JUnit 4 names. IIRC, the JUnit 5 names are @BeforeEach/@AfterEach and @BeforeAll/@AfterAll respectively.
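A rough JUnit 5 skeleton of that layout (the class name and the trivial callable are illustrative, not taken from the question) might look like:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CallableCancellationTest {
    private ExecutorService executorService;

    @BeforeEach
    void setUp() {
        executorService = Executors.newSingleThreadExecutor();
    }

    @AfterEach
    void tearDown() throws InterruptedException {
        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.DAYS); // wait for the callable's cleanup
    }

    @Test
    void testCallable() throws Exception {
        // unexpected InterruptedException/ExecutionException now simply propagate
        // and are reported by JUnit as test failures
        Future<Void> future = executorService.submit(() -> {
            Thread.sleep(10_000);
            return null;
        });
        future.cancel(true);
    }
}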

Behaviour of ForkJoinPool in CompletableFuture.supplyAsync()

I'm comparing the behaviour of CompletableFuture.supplyAsync() in two cases: passing a custom ExecutorService, or letting my Supplier be executed by the default executor (used when none is specified), which is ForkJoinPool.commonPool().
Let's see the difference:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;
import java.util.function.Supplier;

public class MainApplication {
    public static void main(final String[] args) throws ExecutionException, InterruptedException {
        Supplier<String> action1 = () -> {
            try {
                Thread.sleep(3000);
            } finally {
                return "Done";
            }
        };
        Function<String, String> action2 = (input) -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                return input + "!!";
            }
        };
        final ExecutorService executorService = Executors.newFixedThreadPool(4);
        CompletableFuture.supplyAsync(action1, executorService)
                .thenApply(action2)
                .thenAccept(res -> System.out.println(res));
        System.out.println("This is the end of the execution");
    }
}
In this case I'm passing executorService to my supplyAsync() and it prints:
This is the end of the execution
Done!!
So "Done" gets printed after the end of the main execution.
BUT if I use instead:
CompletableFuture.supplyAsync(action1)
so that I don't pass my custom executorService and the CompletableFuture class uses ForkJoinPool.commonPool() under the hood, then "Done" is not printed at all:
This is the end of the execution
Process finished with exit code 0
Why?
In both cases, when you do
CompletableFuture.supplyAsync(action1, executorService)
        .thenApply(action2)
        .thenAccept(res -> System.out.println(res));
you don't wait for the task to complete. But then your program is going to exit, and there is a difference in how the common fork-join pool:
ForkJoinPool.commonPool()
and a regular executor service:
final ExecutorService executorService = Executors.newFixedThreadPool(4);
react when the program reaches the equivalent of System.exit(...).
This is what the documentation says about the fork-join common pool; you should pay attention to it:
However this pool and any ongoing processing are automatically
terminated upon program System.exit(int). Any program that relies on
asynchronous task processing to complete before program termination
should invoke commonPool().awaitQuiescence, before exit.
And here is the relevant part of the ExecutorService docs:
The shutdown() method will allow previously submitted tasks to execute
before terminating
I think that may be the difference you are asking about.
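Following the quoted advice, a minimal sketch (using action1 and action2 from the question; the 5-second timeout is arbitrary) that lets the common-pool chain finish before the program exits could be:
CompletableFuture.supplyAsync(action1)
        .thenApply(action2)
        .thenAccept(res -> System.out.println(res));
// Wait until the common pool has no pending work (or until the timeout elapses)
// before letting main return, so "Done!!" gets a chance to print.
ForkJoinPool.commonPool().awaitQuiescence(5, TimeUnit.SECONDS);
System.out.println("This is the end of the execution");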
ForkJoinPool uses daemon threads, which do not prevent the JVM from exiting. The threads in an ExecutorService created by Executors, on the other hand, are non-daemon threads, so they keep the JVM from exiting until you explicitly shut down the thread pool.
Also notice that in your example you need to shut down the pool at the end in order to let the JVM terminate:
executorService.shutdown();
So, one solution would be to keep the main thread waiting for a few seconds until your computation is completed, like so:
Thread.sleep(4000);
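A more deterministic alternative (my own suggestion, not from the original answer) is to keep a reference to the end of the chain and join it, so the main thread waits exactly as long as the computation needs:
CompletableFuture<Void> pipeline = CompletableFuture.supplyAsync(action1)
        .thenApply(action2)
        .thenAccept(res -> System.out.println(res));
System.out.println("This is the end of the execution");
pipeline.join(); // blocks main until "Done!!" has been printed, no guessed sleep needed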

Why does ThreadPool interrupts its workers, when thread interrupted?

Please, look at this example. I took it from my production project. The webserver receives a command and starts a new Thread, which starts calculations via the ThreadPool. When the user wants to end the calculations, they send another command which interrupts this new Thread, and the ThreadPool's workers shut down. It's working fine, but I don't understand why.
public static void main(String[] args) throws Throwable {
final ExecutorService p = Executors.newFixedThreadPool(2);
System.out.println("main say: Hello, I'm Main!");
Thread t = new Thread(new Runnable() {
@Override
public void run() {
System.out.println(Thread.currentThread().getName() + " say: Starting monitor");
Thread monitor = new Thread(new Runnable() {
@Override
public void run() {
try {
while(true) {
Thread.sleep(1500);
System.out.println(Thread.currentThread().getName() + " say: I'm still here...hahahahah");
}
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName() + " say: Bye for now!");
}
}
},"monitor");
monitor.setDaemon(true);
monitor.start();
List<Callable<Integer>> threads = new ArrayList<>();
for (int i = 0; i < 5; i++) {
threads.add(new Callable<Integer>() {
@Override
public Integer call() throws Exception {
System.out.println(Thread.currentThread().getName() + " say: Hello!");
try {
for (int c = 0; c < 5; c++) {
System.out.println(Thread.currentThread().getName() + " say: " + c);
Thread.sleep(500);
}
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName() + " say: I'm interrupted :(");
}
System.out.println(Thread.currentThread().getName() + " say: Bye!");
return 0;
}
});
}
System.out.println(Thread.currentThread().getName() + " say: Starting workers");
try {
p.invokeAll(threads);
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName() + " say: I'm interrupted :(");
}
System.out.println(Thread.currentThread().getName() + " say: Bye!");
}
}, "new thread");
System.out.println("main say: Starting new thread");
t.start();
System.out.println("main say: Waiting a little...");
Thread.sleep(1250);
System.out.println("main say: Interrupting new thread");
t.interrupt();
// p.shutdown();
System.out.println(String.format("main say: Executor state: isShutdown: %s, isTerminated: %s",
p.isShutdown(),
p.isTerminated()));
System.out.println("main say: Bye...");
}
Main question: why does the ThreadPool interrupt its workers when the current thread is interrupted? Where can I learn about this behavior?
And why in this example does the main thread not exit, but do nothing? The ThreadPool is inactive but neither isShutdown nor isTerminated, and it doesn't process the rest of the tasks.
Main question: why does the ThreadPool interrupt its workers when the current thread is interrupted? Where can I learn about this behavior?
You are overgeneralizing. The invokeAll() methods of an ExecutorService cancel all unfinished tasks when they are interrupted. This is documented in the API docs.
If you're asking "how would I know it will do that" then the docs are your answer. If you're asking why the interface is designed that way, then it makes sense because when it is interrupted, the method throws InterruptedException instead of returning a value, and therefore it is reasonable to suppose that any further work that those unfinished tasks might perform would be wasted.
And why in this example does the main thread not exit, but do nothing?
The "main thread" is the one that started at the beginning of main(). This thread does exit, and before it does so it does several other things, including creating, starting, and interrupting a Thread, and outputting several messages. It exits when control reaches the end of main().
But perhaps you mean thread "new thread" started directly by the main thread. This thread also does several things, including starting the monitor thread and submitting a job to the executor service. Or maybe you're asking why this thread does not exit while the ExecutorService is working on its job, but why would it exit while it's waiting for the invokeAll() method to return? Even though that method returns a list of Futures, its documentation is clear that it blocks until all the tasks submitted to it are complete, or an exception occurs.
Why the interrupts?
The interrupts to your tasks are mentioned in the API of ExecutorService.invokeAll():
Throws:
InterruptedException - if interrupted while waiting, in which case unfinished tasks are cancelled
So when the interrupt is received during your call to p.invokeAll(threads), all the tasks in threads are cancelled.
The API doesn't specify if Future.cancel() is called with mayInterruptIfRunning or not, but if you look in the code for AbstractExecutorService, from which ThreadPoolExecutor inherits its implementation of invokeAll(), you can see that the tasks are cancelled with interrupts enabled:
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
throws InterruptedException {
/* ... */
try {
/* ... */
} finally {
if (!done)
for (int i = 0, size = futures.size(); i < size; i++)
futures.get(i).cancel(true);
}
}
I suppose this makes slightly more sense than cancelling them without interrupts, because there's already been an interrupt; this is "just propagating it".
Why doesn't the thread pool finish?
The program doesn't exit, and the thread pool is not shut down or terminated, because you simply never told it to shut down.
So this is no different from the following reduced program:
public static void main(String[] args) throws Throwable {
final ExecutorService p = Executors.newFixedThreadPool(2);
p.execute(new Runnable() { public void run() { } });
Thread.sleep(1000);
System.out.println(String.format("main say: Executor state: isShutdown: %s, isTerminated: %s",
p.isShutdown(),
p.isTerminated()));
}
Thread pools don't have any special magic to guess when you meant to shut them down; they wait until you actually tell them to. The documentation for Executors.newFixedThreadPool() states:
The threads in the pool will exist until it is explicitly shutdown.
When you create thread pools, you need to ensure that they're eventually cleaned up. Usually this is by calling shutdown() or shutdownNow(). Why is this necessary? Because running threads are special in the context of Java garbage collection. Running threads are the starting points for determining what objects will not be garbage collected, and will never be garbage collected while they are still running. And a Java program never exits while there are still running threads (unless you call System.exit(), of course.)
There are some special situations where a thread pool might have no running threads, and thus be garbage collected. The API docs for ThreadPoolExecutor explains this:
Finalization
A pool that is no longer referenced in a program AND has no remaining threads will be shutdown automatically. If you would like to ensure that unreferenced pools are reclaimed even if users forget to call shutdown(), then you must arrange that unused threads eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).
So we can modify my example above to eventually exit like this:
final ThreadPoolExecutor p = new ThreadPoolExecutor(
0, 2, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
or this:
final ThreadPoolExecutor p = new ThreadPoolExecutor(
2, 2, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
p.allowCoreThreadTimeOut(true);
But it's often cleaner to call shutdown or shutdownNow when you're finished with your thread pool, instead of relying on a timeout.
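When you do shut a pool down explicitly, the ExecutorService javadoc suggests a two-phase pattern roughly like the following sketch (the 60-second timeouts are illustrative):
void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown();                               // stop accepting new tasks
    try {
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
            pool.shutdownNow();                    // interrupt anything still running
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                System.err.println("Pool did not terminate");
            }
        }
    } catch (InterruptedException ie) {
        pool.shutdownNow();                        // re-cancel if we were interrupted while waiting
        Thread.currentThread().interrupt();        // preserve the interrupt status
    }
}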

How to wait for (fixed rate) ScheduledFuture to complete on cancellation

Is there a built-in way to cancel a Runnable task that has been scheduled at a fixed rate via ScheduledExecutorService.scheduleAtFixedRate and await its completion if it happens to be running when cancel is called?
Consider the following example:
public static void main(String[] args) throws InterruptedException, ExecutionException {
Runnable fiveSecondTask = new Runnable() {
@Override
public void run() {
System.out.println("5 second task started");
long finishTime = System.currentTimeMillis() + 5_000;
while (System.currentTimeMillis() < finishTime);
System.out.println("5 second task finished");
}
};
ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<?> fut = exec.scheduleAtFixedRate(fiveSecondTask, 0, 1, TimeUnit.SECONDS);
Thread.sleep(1_000);
System.out.print("Cancelling task..");
fut.cancel(true);
System.out.println("done");
System.out.println("isCancelled : " + fut.isCancelled());
System.out.println("isDone : " + fut.isDone());
try {
fut.get();
System.out.println("get : didn't throw exception");
}
catch (CancellationException e) {
System.out.println("get : threw exception");
}
}
The output of this program is:
5 second task started
Cancelling task..done
isCancelled : true
isDone : true
get : threw exception
5 second task finished
Setting a shared volatile flag seems the simplest option, but I'd prefer to avoid it if possible.
Does the java.util.concurrent framework have this capability built in?
I am not entirely sure what you are trying to achieve, but since I got here from a Google search I thought it might be worth responding to your question.
1) If you want to forcibly stop a heavy workload, unfortunately it seems there is no solution when the thread does not respond to interrupts. The only way of dealing with it is to insert an interruptible call such as Thread.sleep(1) between the time-consuming operations in your loop (http://docs.oracle.com/javase/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html). Maybe a daemon thread would help here, but I really discourage using them.
2) If you want to block the current thread until the child task finishes, then instead of calling cancel you can use get (http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Future.html#get()) or even get with a timeout.
3) If you want a clean cancel of the subtask, you can call:
fut.cancel(false);
This will not interrupt the current execution, but the task will not be scheduled to run again.
4) If your workload is not heavy and you only need to wait for 5 seconds, use Thread.sleep or TimeUnit sleep. In that case an interrupt/cancel will take effect immediately.
Also, your example is missing a shutdown call on the Executor, which is why the application does not stop.
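Putting point 3) and the missing shutdown together, one way to stop further runs and still wait for an execution that is already in flight (a sketch meant to replace the cancel/get part of the question's main; the 10-second timeout is arbitrary) is:
fut.cancel(false);                 // no further runs will be scheduled
exec.shutdown();                   // reject new tasks; a run already in progress is not interrupted
if (!exec.awaitTermination(10, TimeUnit.SECONDS)) {
    exec.shutdownNow();            // give up and interrupt if the in-flight run overruns
}
System.out.println("Cancelled and waited for the in-flight run");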

Under what conditions will BlockingQueue.take throw interrupted exception?

Let us suppose that I have a thread that consumes items produced by another thread. Its run method is as follows, with inQueue being a BlockingQueue
boolean shutdown = false;
while (!shutdown) {
try {
WorkItem w = inQueue.take();
w.consume();
} catch (InterruptedException e) {
shutdown = true;
}
}
Furthermore, a different thread will signal that there are no more work items by interrupting this running thread. Will take() throw an InterruptedException if it does not need to block to retrieve the next work item? I.e., if the producer signals that it is done filling the work queue, is it possible to accidentally leave some items in inQueue or miss the interrupt?
A good way to signal termination of a blocking queue is to submit a 'poison' value into the queue that indicates a shutdown has occurred. This ensures that the expected behavior of the queue is honored. Calling Thread.interrupt() is probably not a good idea if you care about clearing the queue.
To provide some code:
boolean shutdown = false;
while (!shutdown) {
try {
WorkItem w = inQueue.take();
if (w == QUEUE_IS_DEAD)
shutdown = true;
else
w.consume();
} catch (InterruptedException e) {
// possibly submit QUEUE_IS_DEAD to the queue
}
}
I wondered about the same thing and reading the javadoc for take() I believed that it would throw an interrupted exception only after having taken all the items in the queue, since if the queue had items, it would not have to "wait".
But I made a small test:
package se.fkykko.slask;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;
public class BlockingQueueTakeTest {
public static void main(String[] args) throws Exception {
Runner t = new Runner();
Thread t1 = new Thread(t);
for (int i = 0; i < 50; i++) {
t.queue.add(i);
}
System.out.println(("Number of items in queue: " + t.queue.size()));
t1.start();
Thread.sleep(1000);
t1.interrupt();
t1.join();
System.out.println(("Number of items in queue: " + t.queue.size()));
System.out.println(("Joined t1. Finished"));
}
private static final class Runner implements Runnable {
BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(100);
AtomicLong m_count = new AtomicLong(0);
@Override
public void run() {
try {
while (true) {
queue.take();
System.out.println("Took item " + m_count.incrementAndGet());
final long start = System.currentTimeMillis();
while ((System.currentTimeMillis() - start) < 100) {
Thread.yield(); //Spin wait
}
}
}
catch (InterruptedException ex) {
System.out.println("Interrupted. Count: " + m_count.get());
}
}
}
}
The runner will take 10-11 items and then finish, i.e. take() will throw InterruptedException even if there are still items in the queue.
Summary: Use the Poison pill approach instead, then you have full control over how much is left in the queue.
According to javadoc, the take() method will throw InterruptedException if interrupted while waiting.
You can't in general interrupt the threads of an ExecutorService from external code if you used ExecutorService::execute(Runnable) to start the threads, because external code does not have a reference to the Thread objects of each of the running threads (see the end of this answer for a solution though, if you need ExecutorService::execute). However, if you instead use ExecutorService::submit(Callable<T>) to submit the jobs, you get back a Future<T>, which internally keeps a reference to the running thread once Callable::call() begins execution. This thread can be interrupted by calling Future::cancel(true). Any code within (or called by) the Callable that checks the current thread's interrupt status can therefore be interrupted via the Future reference. This includes BlockingQueue::take(), which, even when blocked, will respond to thread interruption. (JRE blocking methods will typically wake up if interrupted while blocked, realize they have been interrupted, and throw an InterruptedException.)
To summarize: Future::cancel() and Future::cancel(true) both cancel future work, while Future::cancel(true) also interrupts ongoing work (as long as the ongoing work responds to thread interrupt). Neither of the two cancel invocations affects work that has already successfully completed.
Note that once a thread is interrupted by cancellation, an InterruptedException will be thrown within the thread (e.g. by BlockingQueue::take() in this case). However, a CancellationException will be thrown back in the main thread the next time you call Future::get() on a successfully cancelled Future (i.e. a Future that was cancelled before it completed). This is different from what you would normally expect: if a non-cancelled Callable throws InterruptedException, the next call to Future::get() will throw InterruptedException, but if a cancelled Callable throws InterruptedException, the next call to Future::get() will throw CancellationException.
Here's an example that illustrates this:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
public class Test {
public static void main(String[] args) throws Exception {
// Start Executor with 4 threads
int numThreads = 4;
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(numThreads);
try {
// Set up BlockingQueue for inputs, and List<Future> for outputs
BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>();
List<Future<String>> futures = new ArrayList<>(numThreads);
for (int i = 0; i < numThreads; i++) {
int threadIdx = i;
futures.add(executor.submit(new Callable<String>() {
@Override
public String call() throws Exception {
try {
// Get an input from the queue (blocking)
int val = queue.take();
return "Thread " + threadIdx + " got value " + val;
} catch (InterruptedException e) {
// Thrown once Future::cancel(true) is called
System.out.println("Thread " + threadIdx + " got interrupted");
// This value is returned to the Future, but can never
// be read, since the caller will get a CancellationException
return "Thread " + threadIdx + " got no value";
}
}
}));
}
// Enqueue (numThreads - 1) values into the queue, so that one thread blocks
for (int i = 0; i < numThreads - 1; i++) {
queue.add(100 + i);
}
// Cancel all futures
for (int i = 0; i < futures.size(); i++) {
Future<String> future = futures.get(i);
// Cancel the Future -- this doesn't throw an exception until
// the get() method is called
future.cancel(/* mayInterruptIfRunning = */ true);
try {
System.out.println(future.get());
} catch (CancellationException e) {
System.out.println("Future " + i + " was cancelled");
}
}
} finally {
// Terminate main after all threads have shut down (this call does not block,
// so main will exit before the threads stop running)
executor.shutdown();
}
}
}
Each time you run this, the output will be different, but here's one run:
Future 1 was cancelled
Future 0 was cancelled
Thread 2 got value 100
Thread 3 got value 101
Thread 1 got interrupted
This shows that Thread 2 and Thread 3 completed before Future::cancel() was called. Thread 1 was cancelled, so internally InterruptedException was thrown, and externally CancellationException was thrown. Thread 0 was cancelled before it started running. (Note that the thread indices won't in general correlate with the Future indices, so Future 0 was cancelled could correspond to either thread 0 or thread 1 being cancelled, and the same for Future 1 was cancelled.)
Advanced: one way to achieve the same effect with Executor::execute (which does not return a Future reference) rather than Executor::submit would be to create a ThreadPoolExecutor with a custom ThreadFactory, and have your ThreadFactory record a reference in a concurrent collection (e.g. a concurrent queue) for every thread created. Then to cancel all threads, you can simply call Thread::interrupt() on all previously-created threads. However, you will need to deal with the race condition that new threads may be created while you are interrupting existing threads. To handle this, set an AtomicBoolean flag, visible to the ThreadFactory, that tells it not to create any more threads, then once that is set, cancel the existing threads.
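A compressed sketch of that idea (the class and method names are mine, purely illustrative, not a library API):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicBoolean;

class InterruptingThreadFactory implements ThreadFactory {
    private final Queue<Thread> created = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        created.add(t);        // record before checking the flag so interruptAll() cannot miss it
        if (cancelled.get()) {
            t.interrupt();     // cancellation raced with creation: interrupt the new thread ourselves
        }
        return t;
    }

    void interruptAll() {
        cancelled.set(true);            // mark cancellation first, so later-created threads see it
        for (Thread t : created) {      // ...then interrupt every thread recorded so far
            t.interrupt();
        }
    }
}
Pass an instance to the ThreadPoolExecutor constructor, keep a reference to it, and call interruptAll() when you want to cancel everything that was started via execute().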
The java.util.concurrent package was designed and implemented by some of the finest minds in concurrent programming. Also, interrupting threads as a means to terminate them is explicitly endorsed by their book "Java Concurrency in Practice". Therefore, I would be extremely surprised if any items were left in the queue due to an interrupt.
