In my code I have to run a task that makes heavy use of recursion and parallel stream processing in order to go deep into a tree of possible game moves and decide which move is best. This takes a lot of time, so to prevent the user from waiting too long for the computer to "think" I want to set a timeout of, say, 1000 milliseconds. If the best move is not found within 1000 ms, the computer will play a random move.
My problem is that although I call cancel on the Future (with mayInterruptIfRunning set to true), the task is not interrupted and the busy threads keep running in the background.
I tried to periodically check isInterrupted() on the current thread and then bail out, but this didn't help.
Any ideas?
Below is my code:
public Move bestMove() {
ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<Move> callable = () -> bestEntry(bestMoves()).getKey();
Future<Move> future = executor.submit(callable);
try {
return future.get(1000, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
System.exit(0);
} catch (ExecutionException e) {
throw new RuntimeException(e);
} catch (TimeoutException e) {
future.cancel(true);
return randomMove();
}
return null;
}
private Move randomMove() {
Random random = new Random();
List<Move> moves = state.possibleMoves();
return moves.get(random.nextInt(moves.size()));
}
private <K> Map.Entry<K, Double> bestEntry(Map<K, Double> map) {
List<Map.Entry<K, Double>> list = new ArrayList<>(map.entrySet());
Collections.sort(list, (e1, e2) -> Double.compare(e2.getValue(), e1.getValue()));
return list.get(0);
}
private <K> Map.Entry<K, Double> worstEntry(Map<K, Double> map) {
List<Map.Entry<K, Double>> list = new ArrayList<>(map.entrySet());
Collections.sort(list, (e1, e2) -> Double.compare(e1.getValue(), e2.getValue()));
return list.get(0);
}
private Map<Move, Double> bestMoves() {
Map<Move, Double> moves = new HashMap<>();
state.possibleMoves().stream().parallel().forEach(move -> {
if (!Thread.currentThread().isInterrupted()) {
Game newState = state.playMove(move);
Double score = newState.isTerminal() ? newState.utility()
: worstEntry(new (newState).worstMoves()).getValue();
moves.put(move, score);
}
});
return moves;
}
private Map<Move, Double> worstMoves() {
Map<Move, Double> moves = new HashMap<>();
state.possibleMoves().stream().parallel().forEach(move -> {
if (!Thread.currentThread().isInterrupted()) {
Game newState = state.playMove(move);
Double score = newState.isTerminal() ? -newState.utility()
: bestEntry(new (newState).bestMoves()).getValue();
moves.put(move, score);
}
});
return moves;
}
PS: I also tried without "parallel()", but again there is still a single thread left running.
Thanks in advance.
Thank you all for your answers. I think I found a simpler solution.
First of all, I think the reason future.cancel(true) didn't work is that it probably only sets the interrupted flag on the thread that started the task (that is, the thread associated with the future).
However, because the task itself uses parallel stream processing, it spawns workers on other threads which never get interrupted, so periodically checking the isInterrupted() flag on them is useless.
The "solution" (or rather work-around) I found is to keep my own cancellation flag in my algorithm's objects and manually set it to true when the task is cancelled. Because all worker threads operate on the same instances, they all see the flag and can bail out.
Future.cancel(true) just marks the task's thread as interrupted; your code must then handle the interruption itself, for example:
public static void main(String[] args) throws InterruptedException {
final ExecutorService executor = Executors.newSingleThreadExecutor();
final Future<Integer> future = executor.submit(() -> count());
try {
System.out.println(future.get(1, TimeUnit.SECONDS));
} catch (Exception e){
future.cancel(true);
e.printStackTrace();
}finally {
System.out.printf("status=finally, cancelled=%s, done=%s%n", future.isCancelled(), future.isDone());
executor.shutdown();
}
}
static int count() throws InterruptedException {
while (!Thread.interrupted());
throw new InterruptedException();
}
As you can see, count() keeps checking whether the thread is still allowed to run. You have to understand that there is actually no guarantee that a running thread can be stopped if it doesn't want to be.
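For contrast, here is a small sketch of a task that does cooperate: because it blocks in an interruptible call (Thread.sleep), future.cancel(true) makes it stop promptly.
import java.util.concurrent.*;
public class CooperativeTaskDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Integer> future = executor.submit(() -> {
            // Thread.sleep() throws InterruptedException as soon as the worker is interrupted,
            // so future.cancel(true) actually stops this task.
            Thread.sleep(60_000);
            return 42;
        });
        try {
            System.out.println(future.get(1, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the sleeping worker; the task ends with InterruptedException
        } finally {
            executor.shutdown();
        }
    }
}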
Reference:
Why set the interrupt bit in a Callable
how to suspend thread using thread's id?
UPDATE 2017-11-18 23:22
I wrote a FutureTask extension that tries to stop the thread even if the code doesn't respect the interrupt signal. Keep in mind that it is unsafe because the Thread.stop method is deprecated; still, it works, and if you really need it you can use it (please read the Thread.stop deprecation notes first; for example, if you are using locks, calling stop() can cause deadlocks).
Test code
public static void main(String[] args) throws InterruptedException {
final ExecutorService executor = newFixedSizeExecutor(1);
final Future<Integer> future = executor.submit(() -> count());
try {
System.out.println(future.get(1, TimeUnit.SECONDS));
} catch (Exception e){
future.cancel(true);
e.printStackTrace();
}
System.out.printf("status=finally, cancelled=%s, done=%s%n", future.isCancelled(), future.isDone());
executor.shutdown();
}
static int count() throws InterruptedException {
while (true);
}
Custom Executor
static ThreadPoolExecutor newFixedSizeExecutor(final int threads) {
return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>()){
protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
return new StoppableFutureTask<>(new FutureTask<>(callable));
}
};
}
static class StoppableFutureTask<T> implements RunnableFuture<T> {
private final FutureTask<T> future;
private Field runnerField;
public StoppableFutureTask(FutureTask<T> future) {
this.future = future;
try {
final Class clazz = future.getClass();
runnerField = clazz.getDeclaredField("runner");
runnerField.setAccessible(true);
} catch (Exception e) {
throw new Error(e);
}
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
final boolean cancelled = future.cancel(mayInterruptIfRunning);
if(cancelled){
try {
((Thread) runnerField.get(future)).stop();
} catch (Exception e) {
throw new Error(e);
}
}
return cancelled;
}
@Override
public boolean isCancelled() {
return future.isCancelled();
}
@Override
public boolean isDone() {
return future.isDone();
}
@Override
public T get() throws InterruptedException, ExecutionException {
return future.get();
}
@Override
public T get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
return future.get(timeout, unit);
}
@Override
public void run() {
future.run();
}
}
output
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at com.mageddo.spark.sparkstream_1.Main$StoppableFutureTask.get(Main.java:91)
at com.mageddo.spark.sparkstream_1.Main.main(Main.java:20)
status=finally, cancelled=true, done=true
Process finished with exit code 0
I've been trying to set up a simple atomic completable-future cache that takes a String key and a Callable and caches the executions and the results. I know I could use Caffeine, but I still want to understand how this can be done without race conditions while inserting and clearing.
In this simple class I have two simplified caches: an executions cache that keeps track of the callables that haven't finished running, and a results cache that keeps track of the results (this will eventually be an Ehcache that overflows to disk).
public class AsyncCache {
ExecutorService executor = Executors.newFixedThreadPool(10);
ConcurrentHashMap<String, CompletableFuture<Object>> executions = new ConcurrentHashMap<>();
ConcurrentHashMap<String, Object> results = new ConcurrentHashMap<>();
public CompletableFuture<Object> get(String key, Callable<Object> callable) {
Object result = results.get(key);
if (result != null) {
return CompletableFuture.completedFuture(result);
}
return executions.computeIfAbsent(key, k -> {
CompletableFuture<Object> future = CompletableFuture.supplyAsync(() -> {
try {
return callable.call();
} catch (Exception e) {
throw new CompletionException(e);
}
}, executor);
return future.whenComplete((Object r, Throwable t) -> {
if (executions.remove(k) != null) {
results.put(k, r);
}
});
});
}
public void clear() {
results.clear();
executions.clear();
}
}
I believe that this code has two problems. First, there is a synchronization problem in the lines:
if (executions.remove(k) != null) {
results.put(k, result);
}
Between remove(k) and results.put(k, r) there can be a new get call for the same key: the key is not yet in results and has already been removed from executions, so a new callable execution is triggered even though the result was about to be placed in the results cache.
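One way to narrow that particular window (a sketch only; note it changes the interplay with clear(), which the read/write-lock version further down handles more completely) is to publish the result before removing the execution, so a concurrent get always finds the value in one of the two maps:
// Sketch: drop-in replacement for the whenComplete block above.
// Publish to 'results' first, then remove from 'executions': a concurrent get(key)
// now either sees the finished result or joins the still-registered future.
return future.whenComplete((Object r, Throwable t) -> {
    if (t == null && r != null) {
        results.put(k, r);
    }
    executions.remove(k);
});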
Second, there is another synchronization problem in the lines:
results.clear();
executions.clear();
Between the two clear() calls there can be a new get call that would not find a result in the results map but would get a stale entry from the executions map.
Any ideas on how to fix this without naively synchronizing everything?
Edit.
What if I introduce a lock per key to guard against read and writes? Something like this:
ConcurrentMap<String, ReadWriteLock> locks = new ConcurrentHashMap<String, ReadWriteLock>();
public CompletableFuture<Object> get1(String key, Callable<Object> callable) {
ReadWriteLock reading = locks.computeIfAbsent(key, r -> new ReentrantReadWriteLock());
reading.readLock().lock();
try {
Object result = results.get(key);
if (result != null) {
return CompletableFuture.completedFuture(result);
}
return executions.computeIfAbsent(key, k -> {
CompletableFuture<Object> future = CompletableFuture.supplyAsync(() -> {
try {
return callable.call();
} catch (Exception e) {
throw new CompletionException(e);
}
}, executor);
return future.whenComplete((Object r, Throwable t) -> {
ReadWriteLock writing = locks.computeIfAbsent(k, w -> new ReentrantReadWriteLock());
writing.writeLock().lock();
try {
if (executions.remove(k) != null) {
results.put(k, r);
}
} finally {
writing.writeLock().unlock();
locks.remove(k);
}
});
});
} finally {
reading.readLock().unlock();
locks.remove(key);
}
}
This would still leave open questions regarding how to write the clear() method.
This is the code I ended up using. I have gone through many revisions but can't find any obvious deadlocks or race conditions. I'm going to leave it here just in case it becomes useful for somebody else.
public class AsyncCache {
ExecutorService executor = Executors.newFixedThreadPool(10);
ConcurrentHashMap<String, CompletableFuture<Object>> executions = new ConcurrentHashMap<>();
ReentrantReadWriteLock guard = new ReentrantReadWriteLock();
MutableConfiguration<String, Object> configuration = new MutableConfiguration<>();
Cache<String, Object> results = Caching.getCachingProvider().getCacheManager().createCache("results", configuration);
public CompletableFuture<Object> get(String key, Callable<Object> callable) {
guard.readLock().lock();
try {
// Attempt to get from the cache first. Only pay attention to
// the cache eviction policies.
Object result = results.get(key);
if (result != null) {
return CompletableFuture.completedFuture(result);
}
// Attempt to get from the current executions second. Make sure
// that the cache and executions is guarded by a read write lock.
return executions.computeIfAbsent(key, k -> {
CompletableFuture<Object> future = CompletableFuture.supplyAsync(() -> {
try {
return callable.call();
} catch (Exception e) {
throw new CompletionException(e);
}
}, executor);
return future.whenComplete((Object r, Throwable t) -> {
guard.writeLock().lock();
try {
if (executions.remove(k) != null) {
if (r != null) {
results.put(k, r);
}
}
} finally {
guard.writeLock().unlock();
}
});
});
} finally {
guard.readLock().unlock();
}
}
public void clear() {
// Guard the cache and executions with a write lock.
guard.writeLock().lock();
try {
executions.clear();
results.clear();
} finally {
guard.writeLock().unlock();
}
}
}
I'm still not sure how to check if this is overly restrictive (locking more than needed). For my use case, the callables can take from 100 to 3000 milliseconds, and the clear method can be called every 15 minutes or so.
Perhaps I don't even need the executions to be a ConcurrentHashMap since the read and writes are protected.
Java 8 introduces CompletableFuture, a new implementation of Future that is composable (includes a bunch of thenXxx methods). I'd like to use this exclusively, but many of the libraries I want to use return only non-composable Future instances.
Is there a way to wrap a returned Future instance inside a CompletableFuture so that I can compose it?
If the library you want to use also offers a callback style method in addition to the Future style, you can provide it a handler that completes the CompletableFuture without any extra thread blocking. Like so:
AsynchronousFileChannel open = AsynchronousFileChannel.open(Paths.get("/some/file"));
// ...
CompletableFuture<ByteBuffer> completableFuture = new CompletableFuture<ByteBuffer>();
open.read(buffer, position, null, new CompletionHandler<Integer, Void>() {
@Override
public void completed(Integer result, Void attachment) {
completableFuture.complete(buffer);
}
@Override
public void failed(Throwable exc, Void attachment) {
completableFuture.completeExceptionally(exc);
}
});
completableFuture.thenApply(...)
Without the callback, the only other way I see to solve this is a polling loop that puts all your Future.isDone() checks on a single thread and then invokes complete whenever a Future is gettable.
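A minimal sketch of that polling approach (names are illustrative): one shared scheduled thread re-checks isDone() and completes the CompletableFuture once the Future is gettable.
import java.util.concurrent.*;
public final class FuturePoller {
    private static final ScheduledExecutorService POLLER =
            Executors.newSingleThreadScheduledExecutor();
    // Wraps a plain Future into a CompletableFuture by re-checking isDone() on one shared thread.
    public static <T> CompletableFuture<T> wrap(Future<T> future) {
        CompletableFuture<T> cf = new CompletableFuture<>();
        poll(future, cf);
        return cf;
    }
    private static <T> void poll(Future<T> future, CompletableFuture<T> cf) {
        if (future.isDone()) {
            try {
                cf.complete(future.get());       // get() won't block: the future is already done
            } catch (ExecutionException e) {
                cf.completeExceptionally(e.getCause());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                cf.completeExceptionally(e);
            }
        } else {
            POLLER.schedule(() -> poll(future, cf), 10, TimeUnit.MILLISECONDS);
        }
    }
}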
There is a way, but you won't like it. The following method transforms a Future<T> into a CompletableFuture<T>:
public static <T> CompletableFuture<T> makeCompletableFuture(Future<T> future) {
if (future.isDone())
return transformDoneFuture(future);
return CompletableFuture.supplyAsync(() -> {
try {
if (!future.isDone())
awaitFutureIsDoneInForkJoinPool(future);
return future.get();
} catch (ExecutionException e) {
throw new RuntimeException(e);
} catch (InterruptedException e) {
// Normally, this should never happen inside ForkJoinPool
Thread.currentThread().interrupt();
// Add the following statement if the future doesn't have side effects
// future.cancel(true);
throw new RuntimeException(e);
}
});
}
private static <T> CompletableFuture<T> transformDoneFuture(Future<T> future) {
CompletableFuture<T> cf = new CompletableFuture<>();
T result;
try {
result = future.get();
} catch (Throwable ex) {
cf.completeExceptionally(ex);
return cf;
}
cf.complete(result);
return cf;
}
private static void awaitFutureIsDoneInForkJoinPool(Future<?> future)
throws InterruptedException {
ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
@Override public boolean block() throws InterruptedException {
try {
future.get();
} catch (ExecutionException e) {
throw new RuntimeException(e);
}
return true;
}
@Override public boolean isReleasable() {
return future.isDone();
}
});
}
Obviously, the problem with this approach is that for each Future, a thread will be blocked waiting for the result of that Future, which contradicts the idea of futures. In some cases it might be possible to do better. However, in general, there is no solution without actively waiting for the result of the Future.
If your Future is the result of a call to an ExecutorService method (e.g. submit()), the easiest would be to use the CompletableFuture.runAsync(Runnable, Executor) method instead.
From
Runnable myTask = ... ;
Future<?> future = myExecutor.submit(myTask);
to
Runnable myTask = ... ;
CompletableFuture<?> future = CompletableFuture.runAsync(myTask, myExecutor);
The CompletableFuture is then created "natively".
EDIT: Following comments by @SamMefford, corrected by @MartinAndersson: if you want to pass a Callable, you need to call supplyAsync(), converting the Callable<T> into a Supplier<T>, e.g. with:
CompletableFuture.supplyAsync(() -> {
try { return myCallable.call(); }
catch (Exception ex) { throw new CompletionException(ex); } // Or return default value
}, myExecutor);
Because T Callable.call() is declared to throw Exception while T Supplier.get() is not, you have to catch the exception so that the signatures are compatible.
A note on exception handling
The get() method doesn't declare a throws clause, which means it should not throw a checked exception. However, unchecked exceptions can be used. The code in CompletableFuture shows that CompletionException is used and is unchecked (i.e. a RuntimeException), hence the catch/throw wrapping any exception into a CompletionException.
Also, as @WeGa indicated, you can use the handle() method to deal with exceptions potentially thrown while producing the result:
CompletableFuture<T> future = CompletableFuture.supplyAsync(...);
future.handle((res, ex) -> {
if (ex != null) {
// An exception occurred ...
} else {
// No exception was thrown, 'res' is valid and can be handled here
}
return res;
});
I published a little futurity project that tries to do better than the straightforward way in the answer.
The main idea is to use only one thread (and not just a spin loop, of course) to check the states of all the Futures inside, which avoids blocking a pool thread for each Future -> CompletableFuture transformation.
Usage example:
Future oldFuture = ...;
CompletableFuture profit = Futurity.shift(oldFuture);
Suggestion:
http://www.thedevpiece.com/converting-old-java-future-to-completablefuture/
But, basically:
public class CompletablePromiseContext {
private static final ScheduledExecutorService SERVICE = Executors.newSingleThreadScheduledExecutor();
public static void schedule(Runnable r) {
SERVICE.schedule(r, 1, TimeUnit.MILLISECONDS);
}
}
And, the CompletablePromise:
public class CompletablePromise<V> extends CompletableFuture<V> {
private Future<V> future;
public CompletablePromise(Future<V> future) {
this.future = future;
CompletablePromiseContext.schedule(this::tryToComplete);
}
private void tryToComplete() {
if (future.isDone()) {
try {
complete(future.get());
} catch (InterruptedException e) {
completeExceptionally(e);
} catch (ExecutionException e) {
completeExceptionally(e.getCause());
}
return;
}
if (future.isCancelled()) {
cancel(true);
return;
}
CompletablePromiseContext.schedule(this::tryToComplete);
}
}
Example:
public class Main {
public static void main(String[] args) {
final ExecutorService service = Executors.newSingleThreadExecutor();
final Future<String> stringFuture = service.submit(() -> "success");
final CompletableFuture<String> completableFuture = new CompletablePromise<>(stringFuture);
completableFuture.whenComplete((result, failure) -> {
System.out.println(result);
});
}
}
Let me suggest another (hopefully, better) option:
https://github.com/vsilaev/java-async-await/tree/master/com.farata.lang.async.examples/src/main/java/com/farata/concurrent
Briefly, the idea is the following:
Introduce a CompletableTask<V> interface, the union of
CompletionStage<V> + RunnableFuture<V>
Wrap ExecutorService to return CompletableTask from its submit(...) methods (instead of Future<V>)
Done: we have runnable AND composable Futures.
Implementation uses an alternative CompletionStage implementation (pay attention, CompletionStage rather than CompletableFuture):
Usage:
J8ExecutorService exec = J8Executors.newCachedThreadPool();
CompletionStage<String> result = exec
.submit( someCallableA )
.thenCombineAsync( exec.submit(someCallableB), (a, b) -> a + " " + b)
.thenCombine( exec.submit(someCallableC), (ab, c) -> ab + " " + c);
public static <T> CompletableFuture<T> fromFuture(Future<T> f) {
return CompletableFuture.completedFuture(null).thenCompose(avoid -> {
try {
return CompletableFuture.completedFuture(f.get());
} catch (InterruptedException e) {
return CompletableFuture.failedFuture(e);
} catch (ExecutionException e) {
return CompletableFuture.failedFuture(e.getCause());
}
});
}
The main idea goes like this:
Future<?> future = null;
return CompletableFuture.supplyAsync(future::get);
However, this will not compile as-is, because Future.get() throws checked exceptions that Supplier.get() cannot declare.
So, here is the first option.
Future<?> future = null;
return CompletableFuture.supplyAsync(
()->{
try {
return future.get();
} catch (Exception e) {
throw new RuntimeException(e);
}
});
Second Option, hide the try...catch via casting the functional interface.
@FunctionalInterface
public interface MySupplier<T> extends Supplier<T> {
@Override
default T get() {
try {
return getInternal();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
T getInternal() throws Exception;
}
public static void main(String[] args) {
Future<?> future = null;
CompletableFuture<Object> cf = CompletableFuture.supplyAsync((MySupplier<Object>) future::get);
}
Third option: find a third-party library that provides such a functional interface.
See Also: Java 8 Lambda function that throws exception?
I'm trying to use Java's ThreadPoolExecutor class to run a large number of heavy weight tasks with a fixed number of threads. Each of the tasks has many places during which it may fail due to exceptions.
I've subclassed ThreadPoolExecutor and I've overridden the afterExecute method which is supposed to provide any uncaught exceptions encountered while running a task. However, I can't seem to make it work.
For example:
public class ThreadPoolErrors extends ThreadPoolExecutor {
public ThreadPoolErrors() {
super( 1, // core threads
1, // max threads
1, // timeout
TimeUnit.MINUTES, // timeout units
new LinkedBlockingQueue<Runnable>() // work queue
);
}
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
if(t != null) {
System.out.println("Got an error: " + t);
} else {
System.out.println("Everything's fine--situation normal!");
}
}
public static void main( String [] args) {
ThreadPoolErrors threadPool = new ThreadPoolErrors();
threadPool.submit(
new Runnable() {
public void run() {
throw new RuntimeException("Ouch! Got an error.");
}
}
);
threadPool.shutdown();
}
}
The output from this program is "Everything's fine--situation normal!" even though the only Runnable submitted to the thread pool throws an exception. Any clue to what's going on here?
Thanks!
WARNING: It should be noted that this solution will block the calling thread in future.get().
If you want to process exceptions thrown by the task, then it is generally better to use Callable rather than Runnable.
Callable.call() is permitted to throw checked exceptions, and these get propagated back to the calling thread:
Callable task = ...
Future future = executor.submit(task);
// do something else in the meantime, and then...
try {
future.get();
} catch (ExecutionException ex) {
ex.getCause().printStackTrace();
}
If Callable.call() throws an exception, this will be wrapped in an ExecutionException and thrown by Future.get().
This is likely to be much preferable to subclassing ThreadPoolExecutor. It also gives you the opportunity to re-submit the task if the exception is a recoverable one.
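As an illustration, here is a hedged sketch of re-submitting when the failure looks recoverable; RecoverableException is a hypothetical application-specific type, not a JDK class.
import java.util.concurrent.*;
class RetrySubmit {
    // Hypothetical marker for failures worth retrying; not a JDK type.
    static class RecoverableException extends RuntimeException {}
    static <T> T submitWithRetry(ExecutorService executor, Callable<T> task, int maxAttempts)
            throws InterruptedException, ExecutionException {
        ExecutionException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Future<T> future = executor.submit(task);
            try {
                return future.get();     // rethrows the task's exception wrapped in ExecutionException
            } catch (ExecutionException ex) {
                last = ex;
                if (!(ex.getCause() instanceof RecoverableException)) {
                    throw ex;             // not recoverable: propagate immediately
                }
            }
        }
        throw last;                       // out of attempts
    }
}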
From the docs:
Note: When actions are enclosed in tasks (such as FutureTask) either explicitly or via methods such as submit, these task objects catch and maintain computational exceptions, and so they do not cause abrupt termination, and the internal exceptions are not passed to this method.
When you submit a Runnable, it'll get wrapped in a Future.
Your afterExecute should be something like this:
public final class ExtendedExecutor extends ThreadPoolExecutor {
// ...
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
if (t == null && r instanceof Future<?>) {
try {
Future<?> future = (Future<?>) r;
if (future.isDone()) {
future.get();
}
} catch (CancellationException ce) {
t = ce;
} catch (ExecutionException ee) {
t = ee.getCause();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
if (t != null) {
System.out.println(t);
}
}
}
The explanation for this behavior is right in the javadoc for afterExecute:
Note: When actions are enclosed in tasks (such as FutureTask) either explicitly or via methods such as submit, these task objects catch and maintain computational exceptions, and so they do not cause abrupt termination, and the internal exceptions are not passed to this method.
I got around it by wrapping the supplied runnable submitted to the executor.
CompletableFuture.runAsync(() -> {
try {
runnable.run();
} catch (Throwable e) {
Log.info(Concurrency.class, "runAsync", e);
}
}, executorService);
I'm using VerboseRunnable class from jcabi-log, which swallows all exceptions and logs them. Very convenient, for example:
import com.jcabi.log.VerboseRunnable;
scheduler.scheduleWithFixedDelay(
new VerboseRunnable(
new Runnable() {
public void run() {
// the code, which may throw
}
},
true // it means that all exceptions will be swallowed and logged
),
1, 1, TimeUnit.MILLISECONDS
);
Another solution would be to use the ManagedTask and ManagedTaskListener.
You need a Callable or Runnable which implements the interface ManagedTask.
The method getManagedTaskListener returns the instance you want.
public ManagedTaskListener getManagedTaskListener() {
And you implement in ManagedTaskListener the taskDone method:
@Override
public void taskDone(Future<?> future, ManagedExecutorService executor, Object task, Throwable exception) {
if (exception != null) {
LOGGER.log(Level.SEVERE, exception.getMessage());
}
}
More details about managed task lifecycle and listener.
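A sketch of how the pieces fit together, assuming the javax.enterprise.concurrent API (Java EE 7 Concurrency Utilities; newer Jakarta EE releases use the jakarta.* package) is on the classpath; the class and logger names are illustrative.
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.concurrent.ManagedTask;
import javax.enterprise.concurrent.ManagedTaskListener;
// Sketch: a Callable that is also a ManagedTask, so the container reports its outcome to the listener.
public class MonitoredTask implements Callable<String>, ManagedTask {
    private static final Logger LOGGER = Logger.getLogger(MonitoredTask.class.getName());
    @Override
    public String call() throws Exception {
        // the work that may throw
        return "done";
    }
    @Override
    public Map<String, String> getExecutionProperties() {
        return null; // no special properties
    }
    @Override
    public ManagedTaskListener getManagedTaskListener() {
        return new ManagedTaskListener() {
            @Override public void taskSubmitted(Future<?> future, ManagedExecutorService executor, Object task) {}
            @Override public void taskStarting(Future<?> future, ManagedExecutorService executor, Object task) {}
            @Override public void taskAborted(Future<?> future, ManagedExecutorService executor, Object task, Throwable exception) {}
            @Override
            public void taskDone(Future<?> future, ManagedExecutorService executor, Object task, Throwable exception) {
                if (exception != null) {
                    LOGGER.log(Level.SEVERE, exception.getMessage(), exception);
                }
            }
        };
    }
}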
This works.
It is derived from SingleThreadExecutor, but you can adapt it easily.
It uses Java 8 lambdas, but that is easy to change.
It will create an Executor with a single thread that can accept many tasks; it will wait for the current one to finish executing before starting the next.
In case of an uncaught error or exception, the uncaughtExceptionHandler will catch it.
public final class SingleThreadExecutorWithExceptions {
public static ExecutorService newSingleThreadExecutorWithExceptions(final Thread.UncaughtExceptionHandler uncaughtExceptionHandler) {
ThreadFactory factory = (Runnable runnable) -> {
final Thread newThread = new Thread(runnable, "SingleThreadExecutorWithExceptions");
newThread.setUncaughtExceptionHandler( (final Thread caugthThread,final Throwable throwable) -> {
uncaughtExceptionHandler.uncaughtException(caugthThread, throwable);
});
return newThread;
};
return new FinalizableDelegatedExecutorService
(new ThreadPoolExecutor(1, 1,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue(),
factory){
protected void afterExecute(Runnable runnable, Throwable throwable) {
super.afterExecute(runnable, throwable);
if (throwable == null && runnable instanceof Future) {
try {
Future future = (Future) runnable;
if (future.isDone()) {
future.get();
}
} catch (CancellationException ce) {
throwable = ce;
} catch (ExecutionException ee) {
throwable = ee.getCause();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt(); // ignore/reset
}
}
if (throwable != null) {
uncaughtExceptionHandler.uncaughtException(Thread.currentThread(),throwable);
}
}
});
}
private static class FinalizableDelegatedExecutorService
extends DelegatedExecutorService {
FinalizableDelegatedExecutorService(ExecutorService executor) {
super(executor);
}
protected void finalize() {
super.shutdown();
}
}
/**
* A wrapper class that exposes only the ExecutorService methods
* of an ExecutorService implementation.
*/
private static class DelegatedExecutorService extends AbstractExecutorService {
private final ExecutorService e;
DelegatedExecutorService(ExecutorService executor) { e = executor; }
public void execute(Runnable command) { e.execute(command); }
public void shutdown() { e.shutdown(); }
public List<Runnable> shutdownNow() { return e.shutdownNow(); }
public boolean isShutdown() { return e.isShutdown(); }
public boolean isTerminated() { return e.isTerminated(); }
public boolean awaitTermination(long timeout, TimeUnit unit)
throws InterruptedException {
return e.awaitTermination(timeout, unit);
}
public Future<?> submit(Runnable task) {
return e.submit(task);
}
public <T> Future<T> submit(Callable<T> task) {
return e.submit(task);
}
public <T> Future<T> submit(Runnable task, T result) {
return e.submit(task, result);
}
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
throws InterruptedException {
return e.invokeAll(tasks);
}
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
long timeout, TimeUnit unit)
throws InterruptedException {
return e.invokeAll(tasks, timeout, unit);
}
public <T> T invokeAny(Collection<? extends Callable<T>> tasks)
throws InterruptedException, ExecutionException {
return e.invokeAny(tasks);
}
public <T> T invokeAny(Collection<? extends Callable<T>> tasks,
long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException, TimeoutException {
return e.invokeAny(tasks, timeout, unit);
}
}
private SingleThreadExecutorWithExceptions() {}
}
This is because AbstractExecutorService::submit wraps your Runnable in a RunnableFuture (which is nothing but a FutureTask), as below.
AbstractExecutorService.java
public Future<?> submit(Runnable task) {
if (task == null) throw new NullPointerException();
RunnableFuture<Void> ftask = newTaskFor(task, null); /////////HERE////////
execute(ftask);
return ftask;
}
Then execute() passes it to a Worker, and Worker.run() calls the following.
ThreadPoolExecutor.java
final void runWorker(Worker w) {
Thread wt = Thread.currentThread();
Runnable task = w.firstTask;
w.firstTask = null;
w.unlock(); // allow interrupts
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
if ((runStateAtLeast(ctl.get(), STOP) ||
(Thread.interrupted() &&
runStateAtLeast(ctl.get(), STOP))) &&
!wt.isInterrupted())
wt.interrupt();
try {
beforeExecute(wt, task);
Throwable thrown = null;
try {
task.run(); /////////HERE////////
} catch (RuntimeException x) {
thrown = x; throw x;
} catch (Error x) {
thrown = x; throw x;
} catch (Throwable x) {
thrown = x; throw new Error(x);
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
} finally {
processWorkerExit(w, completedAbruptly);
}
}
Finally, task.run() in the above code calls FutureTask.run(). Here is the exception-handling code; because of it, you are NOT getting the expected exception.
class FutureTask<V> implements RunnableFuture<V>
public void run() {
if (state != NEW ||
!UNSAFE.compareAndSwapObject(this, runnerOffset,
null, Thread.currentThread()))
return;
try {
Callable<V> c = callable;
if (c != null && state == NEW) {
V result;
boolean ran;
try {
result = c.call();
ran = true;
} catch (Throwable ex) { /////////HERE////////
result = null;
ran = false;
setException(ex);
}
if (ran)
set(result);
}
} finally {
// runner must be non-null until state is settled to
// prevent concurrent calls to run()
runner = null;
// state must be re-read after nulling runner to prevent
// leaked interrupts
int s = state;
if (s >= INTERRUPTING)
handlePossibleCancellationInterrupt(s);
}
}
If you want to monitor the execution of the tasks, you could spin up 1 or 2 threads (maybe more depending on the load) and use them to take completed tasks from an ExecutorCompletionService wrapper.
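A sketch of that monitoring pattern: tasks are submitted through an ExecutorCompletionService, and a single monitoring thread drains completed futures in completion order and inspects their outcome.
import java.util.concurrent.*;
public class CompletionMonitorDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<String> completion = new ExecutorCompletionService<>(pool);
        int submitted = 10;
        for (int i = 0; i < submitted; i++) {
            final int id = i;
            completion.submit(() -> {
                if (id == 3) throw new IllegalStateException("task " + id + " failed");
                return "task " + id + " ok";
            });
        }
        // One monitoring thread: take() blocks until the next task finishes, in completion order.
        for (int i = 0; i < submitted; i++) {
            Future<String> done = completion.take();
            try {
                System.out.println(done.get());
            } catch (ExecutionException e) {
                System.err.println("failed: " + e.getCause());
            }
        }
        pool.shutdown();
    }
}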
The doc's example wasn't giving me the results I wanted.
When a thread was abandoned (with explicit interrupt() calls), exceptions kept appearing.
I also wanted to keep the "System.exit" behaviour that a normal main thread has with a typical throw, so that the programmer is not forced to worry about the code's context (... a thread). If any error appears, it must either be a programming error, or the case must be handled in place with a manual catch; no need for overcomplexity, really.
So I changed the code to match my needs.
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
if (t == null && r instanceof Future<?>) {
Future<?> future = (Future<?>) r;
boolean terminate = false;
try {
future.get();
} catch (ExecutionException e) {
terminate = true;
e.printStackTrace();
} catch (InterruptedException | CancellationException ie) {// ignore/reset
Thread.currentThread().interrupt();
} finally {
if (terminate) System.exit(0);
}
}
}
Be cautious though: this code basically turns your threads into a main thread, exception-wise, while keeping all its parallel properties. But let's be real, designing architectures around the system's parallel mechanism (extends Thread) is the wrong approach IMHO, unless an event-driven design is strictly required... but then, if that is the requirement, the question becomes: is the ExecutorService even needed in this case? Maybe not.
If your ExecutorService comes from an external source (i. e. it's not possible to subclass ThreadPoolExecutor and override afterExecute()), you can use a dynamic proxy to achieve the desired behavior:
public static ExecutorService errorAware(final ExecutorService executor) {
return (ExecutorService) Proxy.newProxyInstance(Thread.currentThread().getContextClassLoader(),
new Class[] {ExecutorService.class},
(proxy, method, args) -> {
if (method.getName().equals("submit")) {
final Object arg0 = args[0];
if (arg0 instanceof Runnable) {
args[0] = new Runnable() {
@Override
public void run() {
final Runnable task = (Runnable) arg0;
try {
task.run();
if (task instanceof Future<?>) {
final Future<?> future = (Future<?>) task;
if (future.isDone()) {
try {
future.get();
} catch (final CancellationException ce) {
// Your error-handling code here
ce.printStackTrace();
} catch (final ExecutionException ee) {
// Your error-handling code here
ee.getCause().printStackTrace();
} catch (final InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
}
} catch (final RuntimeException re) {
// Your error-handling code here
re.printStackTrace();
throw re;
} catch (final Error e) {
// Your error-handling code here
e.printStackTrace();
throw e;
}
}
};
} else if (arg0 instanceof Callable<?>) {
args[0] = new Callable<Object>() {
@Override
public Object call() throws Exception {
final Callable<?> task = (Callable<?>) arg0;
try {
return task.call();
} catch (final Exception e) {
// Your error-handling code here
e.printStackTrace();
throw e;
} catch (final Error e) {
// Your error-handling code here
e.printStackTrace();
throw e;
}
}
};
}
}
return method.invoke(executor, args);
});
}
This is similar to mmm's solution, but a bit more understandable. Have your tasks extend an abstract class that wraps the run() method.
public abstract class Task implements Runnable {
public abstract void execute();
public void run() {
try {
execute();
} catch (Throwable t) {
// handle it
}
}
}
public class MySampleTask extends Task {
public void execute() {
// heavy, error-prone code here
}
}
Instead of subclassing ThreadPoolExecutor, I would provide it with a ThreadFactory instance that creates new Threads and provides them with an UncaughtExceptionHandler
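A sketch of that approach; note that (as the other answers explain) the handler only sees exceptions from tasks started with execute(), because submit() wraps the task in a FutureTask that captures the exception for Future.get() instead.
import java.util.concurrent.*;
public class UncaughtHandlerDemo {
    public static void main(String[] args) {
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, "worker");
            // Every pool thread reports exceptions that escape its Runnable here.
            t.setUncaughtExceptionHandler((thread, throwable) ->
                    System.err.println("Uncaught in " + thread.getName() + ": " + throwable));
            return t;
        };
        ExecutorService pool = Executors.newSingleThreadExecutor(factory);
        // Reaches the handler: execute() runs the Runnable directly.
        pool.execute(() -> { throw new RuntimeException("boom from execute()"); });
        // Does NOT reach the handler: submit() wraps the task in a FutureTask,
        // which stores the exception for Future.get() instead.
        pool.submit((Runnable) () -> { throw new RuntimeException("boom from submit()"); });
        pool.shutdown();
    }
}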
I have the following code snippet that basically scans through the list of task that needs to be executed and each task is then given to the executor for execution.
The JobExecutor in turn creates another executor (for doing db stuff...reading and writing data to queue) and completes the task.
JobExecutor returns a Future<Boolean> for the tasks submitted. When one of the tasks fails, I want to gracefully interrupt all the threads and shut down the executor, catching all the exceptions. What changes do I need to make?
public class DataMovingClass {
private static final AtomicInteger uniqueId = new AtomicInteger(0);
private static final ThreadLocal<Integer> uniqueNumber = new IDGenerator();
ThreadPoolExecutor threadPoolExecutor = null ;
private List<Source> sources = new ArrayList<Source>();
private static class IDGenerator extends ThreadLocal<Integer> {
@Override
public Integer get() {
return uniqueId.incrementAndGet();
}
}
public void init(){
// load sources list
}
public boolean execute() {
boolean success = true;
threadPoolExecutor = new ThreadPoolExecutor(10,10,
10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024),
new ThreadFactory() {
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setName("DataMigration-" + uniqueNumber.get());
return t;
}// End method
}, new ThreadPoolExecutor.CallerRunsPolicy());
List<Future<Boolean>> result = new ArrayList<Future<Boolean>>();
for (Source source : sources) {
result.add(threadPoolExecutor.submit(new JobExecutor(source)));
}
for (Future<Boolean> jobDone : result) {
try {
if (!jobDone.get(100000, TimeUnit.SECONDS) && success) {
// in case of successful DbWriterClass, we don't need to change
// it.
success = false;
}
} catch (Exception ex) {
// handle exceptions
}
}
return success;
}
public class JobExecutor implements Callable<Boolean> {
private ThreadPoolExecutor threadPoolExecutor ;
Source jobSource ;
public JobExecutor(Source source) {
this.jobSource = source;
threadPoolExecutor = new ThreadPoolExecutor(10,10,10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024),
new ThreadFactory() {
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setName("Job Executor-" + uniqueNumber.get());
return t;
}// End method
}, new ThreadPoolExecutor.CallerRunsPolicy());
}
public Boolean call() throws Exception {
boolean status = true ;
System.out.println("Starting Job = " + jobSource.getName());
try {
// do the specified task ;
}catch (InterruptedException intrEx) {
logger.warn("InterruptedException", intrEx);
status = false ;
} catch(Exception e) {
logger.fatal("Exception occurred while executing task "+jobSource.getName(),e);
status = false ;
}
System.out.println("Ending Job = " + jobSource.getName());
return status ;
}
}
}
When you submit a task to the executor, it returns you a Future (a FutureTask instance).
Future.get() will re-throw any exception thrown by the task, wrapped in an ExecutionException.
So when you iterate through the List<Future> and call get on each, catch ExecutionException and invoke an orderly shutdown.
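A sketch of that loop (method and class names are illustrative): wait on each Future, and on the first ExecutionException cancel the remaining futures and shut the pool down.
import java.util.*;
import java.util.concurrent.*;
class FailFastDemo {
    // Waits on each future; on the first failure, cancels the rest and shuts the pool down.
    static boolean awaitAllOrAbort(List<Future<Boolean>> results, ExecutorService pool) {
        boolean success = true;
        for (Future<Boolean> f : results) {
            try {
                success &= f.get();
            } catch (ExecutionException ex) {
                success = false;
                ex.getCause().printStackTrace();
                for (Future<Boolean> other : results) {
                    other.cancel(true);          // interrupt still-running tasks
                }
                pool.shutdownNow();              // stop accepting work and interrupt workers
                break;
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                pool.shutdownNow();
                return false;
            }
        }
        return success;
    }
}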
Since you are submitting tasks to ThreadPoolExecutor, the exceptions are getting swallowed by FutureTask.
Have a look at this code
Inside FutureTask$Sync
void innerRun() {
if (!compareAndSetState(READY, RUNNING))
return;
runner = Thread.currentThread();
if (getState() == RUNNING) { // recheck after setting thread
V result;
try {
result = callable.call();
} catch (Throwable ex) {
setException(ex);
return;
}
set(result);
} else {
releaseShared(0); // cancel
}
}
protected void setException(Throwable t) {
sync.innerSetException(t);
}
From the above code, it is clear that the setException method catches Throwable. For this reason, FutureTask swallows all exceptions if you use the submit() method on ThreadPoolExecutor.
As per the Java documentation, you can override the afterExecute() method in ThreadPoolExecutor:
protected void afterExecute(Runnable r,
Throwable t)
Sample code as per documentation:
class ExtendedExecutor extends ThreadPoolExecutor {
// ...
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
if (t == null && r instanceof Future<?>) {
try {
Object result = ((Future<?>) r).get();
} catch (CancellationException ce) {
t = ce;
} catch (ExecutionException ee) {
t = ee.getCause();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt(); // ignore/reset
}
}
if (t != null)
System.out.println(t);
}
}
You can catch exceptions in three ways:
Future.get(), as suggested in the accepted answer
wrap the entire run() or call() method in a try/catch Exception block
override the afterExecute method of ThreadPoolExecutor, as shown above
To gracefully interrupt other threads, have a look at the SE questions below:
How to stop next thread from running in a ScheduledThreadPoolExecutor
How to forcefully shutdown java ExecutorService
Subclass ThreadPoolExecutor and override its protected afterExecute (Runnable r, Throwable t) method.
If you're creating a thread pool via the java.util.concurrent.Executors convenience class (which you're not), take a look at its source to see how it invokes ThreadPoolExecutor.