Calling ExecutorService.shutdownNow from CompletableFuture - java

I need to cancel all scheduled but not yet running CompletableFuture tasks when one of the already running tasks throws an exception.
I tried the following example, but most of the time the main method does not exit (probably due to some kind of deadlock).
public static void main(String[] args) {
    ExecutorService executionService = Executors.newFixedThreadPool(5);
    Set<CompletableFuture<?>> tasks = new HashSet<>();
    for (int i = 0; i < 1000; i++) {
        final int id = i;
        CompletableFuture<?> c = CompletableFuture
            .runAsync( () -> {
                System.out.println("Running: " + id);
                if ( id == 400 ) throw new RuntimeException("Exception from: " + id);
            }, executionService )
            .whenComplete( (v, ex) -> {
                if ( ex != null ) {
                    System.out.println("Shutting down.");
                    executionService.shutdownNow();
                    System.out.println("shutdown.");
                }
            } );
        tasks.add(c);
    }
    try {
        CompletableFuture.allOf( tasks.stream().toArray(CompletableFuture[]::new) ).join();
    } catch (Exception e) {
        System.out.println("Got async exception: " + e);
    } finally {
        System.out.println("DONE");
    }
}
The last printout looks something like this:
Running: 402
Running: 400
Running: 408
Running: 407
Running: 406
Running: 405
Running: 411
Shutting down.
Running: 410
Running: 409
Running: 413
Running: 412
shutdown.
I tried running the shutdownNow method on a separate thread, but most of the time it still results in the same deadlock.
Any idea what might cause this deadlock?
And what do you think is the best way to cancel all scheduled but not yet running CompletableFutures when an exception is thrown?
I was thinking of iterating over the tasks and calling cancel on each CompletableFuture, but what I don't like about that approach is that join then throws a CancellationException.

You should keep in mind that
CompletableFuture<?> f = CompletableFuture.runAsync(runnable, executionService);
is basically equivalent to
CompletableFuture<?> f = new CompletableFuture<>();
executionService.execute(() -> {
    if (!f.isDone()) {
        try {
            runnable.run();
            f.complete(null);
        }
        catch (Throwable t) {
            f.completeExceptionally(t);
        }
    }
});
So the ExecutorService doesn’t know anything about the CompletableFuture and therefore can’t cancel it in general. All it has is some job, expressed as an implementation of Runnable.
In other words, shutdownNow() will prevent the execution of the pending jobs, thus, the remaining futures won’t get completed normally, but it will not cancel them. Then, you call join() on the future returned by allOf which will never return due to the never-completed futures.
But note that the scheduled job does check whether the future is already completed before doing anything expensive.
So, if you change your code to
ExecutorService executionService = Executors.newFixedThreadPool(5);
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
AtomicBoolean canceled = new AtomicBoolean();
for (int i = 0; i < 1000; i++) {
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            System.out.println("Running: " + id);
            if (id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if (ex != null && canceled.compareAndSet(false, true)) {
            System.out.println("Canceling.");
            for (CompletableFuture<?> f : tasks) f.cancel(false);
            System.out.println("Canceled.");
        }
    });
    tasks.add(c);
    if (canceled.get()) {
        c.cancel(false);
        break;
    }
}
try {
    CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
} catch (Exception e) {
    System.out.println("Got async exception: " + e);
} finally {
    System.out.println("DONE");
}
executionService.shutdown();
The runnables won’t get executed once their associated future has been canceled. Since there is a race between the cancelation and the ordinary execution, it might be helpful to change the action to
.runAsync(() -> {
    System.out.println("Running: " + id);
    if (id == 400) throw new RuntimeException("Exception from: " + id);
    LockSupport.parkNanos(1000);
}, executionService);
to simulate some actual workload. Then, you will see that fewer actions get executed after the exception has been encountered.
Since the asynchronous exception may even happen while the submitting loop is still running, the code uses an AtomicBoolean to detect this situation and stop the loop.
Note that for a CompletableFuture, there is no difference between cancelation and any other exceptional completion. Calling f.cancel(…) is equivalent to f.completeExceptionally(new CancellationException()). Therefore, since CompletableFuture.allOf reports any exception in the exceptional case, it will very likely be a CancellationException instead of the triggering exception.
If you replace the two cancel(false) calls with complete(null), you get a similar effect: the runnables won’t get executed for already completed futures, but allOf will report the original exception, as it is the only exception then. And it has another positive effect: completing with a null value is much cheaper than constructing a CancellationException (for every pending future), so the forced completion via complete(null) runs much faster, preventing more futures from executing.
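For illustration, here is what the whenComplete handler from the example above could look like with complete(null) instead of cancel(false); this is only a sketch, reusing the same tasks set and canceled flag:
c.whenComplete((v, ex) -> {
    if (ex != null && canceled.compareAndSet(false, true)) {
        System.out.println("Completing remaining futures.");
        // complete(null) is a no-op for futures that are already done,
        // and avoids constructing a CancellationException per pending future
        for (CompletableFuture<?> f : tasks) f.complete(null);
        System.out.println("Completed.");
    }
});
With this change, the exception reported by allOf (and thus by join) is the original RuntimeException rather than a CancellationException.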

Another solution that relies only on CompletableFuture is to use a “canceller” future that will cause all non-done tasks to be cancelled when completed:
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
CompletableFuture<Void> canceller = new CompletableFuture<>();
for (int i = 0; i < 1000; i++) {
    if (canceller.isDone()) {
        System.out.println("Canceller invoked, not creating other futures.");
        break;
    }
    //LockSupport.parkNanos(10);
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            //LockSupport.parkNanos(1000);
            System.out.println("Running: " + id);
            if (id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if (ex != null) {
            canceller.complete(null);
        }
    });
    tasks.add(c);
}
canceller.thenRun(() -> {
    System.out.println("Cancelling all tasks.");
    tasks.forEach(t -> t.cancel(false));
    System.out.println("Finished cancelling tasks.");
});
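As in the first variant, the caller would still wait for the overall outcome with allOf; a minimal sketch, assuming the same executionService and the tasks set declared above (cancelled futures surface here as a CancellationException):
try {
    CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
} catch (Exception e) {
    System.out.println("Got async exception: " + e);
} finally {
    System.out.println("DONE");
}
executionService.shutdown();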

Related

service.shutdownNow() not killing the thread

In my application I'm spawning a single-thread executor, and in that thread I'm running the logic that loads ML models and makes predictions. If any of this logic exceeds the time limit (4 minutes), I shut the thread down.
When the application is up and a timeout happens, I can see logs of the threads being shut down, but the process (the prediction logic) continues to execute.
Code snippet for creating the thread
ExecutorService service = Executors.newSingleThreadExecutor();
Future<Object> prediction = null;
try {
    prediction = service.submit(() -> {
        // Execute the requested Document Prediction Engine against the XML Document.
        return executeEngine(tenantId, appId, engine, document, predictionParams == null ? new HashMap<>() : new HashMap<>(predictionParams)).get();
    });
    predictionResult = prediction != null ? (PredictionResult) prediction.get(Long.parseLong(System.getProperty(IDR_INTERNAL_TIMEOUT, "30000")),
            TimeUnit.MILLISECONDS) : null;
} catch (TimeoutException e) {
    if (prediction != null) {
        LOGGER.debug("Task was cancelled with a {} status", (prediction.cancel(true) ? " successful" : " failure"));
    }
    ExpenseMetrics.internalPredictionTimeout.inc();
    String message = "Prediction took more than allowed milliseconds: " + Long.parseLong(System.getProperty(IDR_INTERNAL_TIMEOUT, "30000")) +
            " fileName: " + documentFile.getFileName();
    if (service != null && !service.isShutdown()) {
        service.shutdownNow();
    }
    service = null;
    throw new IDRExmClientException(message, requestId, ErrorCode.INTERNAL_TIMEOUT);
}
if (service != null && !service.isShutdown()) {
    service.shutdownNow();
}
service = null;
Code snippet for the prediction logic and timeout
List<Callable<Void>> taskList = new ArrayList<Callable<Void>>();
taskList.add(callable1);
taskList.add(callable2);
ExecutorService executor = null;
List<Future<Void>> futures = null;
long s = System.currentTimeMillis();
try {
    executor = Executors.newFixedThreadPool(THREAD_POOL);
    futures = executor.invokeAll(taskList);
    executor.shutdown();
    if (!executor.awaitTermination(TOLERANCE_MINUTES, TimeUnit.MINUTES)) {
        LOGGER.warn("Document predict thread took more than {} minutes to shutdown", TOLERANCE_MINUTES);
        executor.shutdownNow();
    }
} catch (InterruptedException iex) {
    LOGGER.error("Document predict thread was interrupted", iex);
} finally {
    cancelFutures("Predict", futures);
    LOGGER.debug("Document predict thread took: {}", (System.currentTimeMillis() - s));
    if (executor != null && !executor.isShutdown()) {
        executor.shutdownNow();
    }
}
executor = null;
From the Oracle documentation of the shutdownNow() method:
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt, so any task that fails to respond to interrupts may never terminate.
You can see this answer; it may help you:
ExecutorService is not shutting down
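If the prediction logic itself can be changed, a complementary option is to make it cooperate with interruption, since shutdownNow() and cancel(true) only interrupt the worker thread. This is just a sketch; the loop below stands in for the real prediction steps and is not part of the original code:
int totalSteps = 100; // hypothetical number of work units in the prediction
Callable<String> interruptiblePrediction = () -> {
    for (int step = 0; step < totalSteps; step++) {
        // shutdownNow()/cancel(true) set the interrupt flag; check it between steps
        if (Thread.currentThread().isInterrupted()) {
            throw new InterruptedException("Prediction cancelled at step " + step);
        }
        // ... one unit of prediction work here ...
    }
    return "prediction finished";
};
Without such checkpoints (or interruptible blocking calls inside the task), the running prediction simply keeps going after the executor is shut down, which matches the behaviour described above.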

How to get the execution results of ExecutorService without blocking the current code path?

I have a service which adds a bunch of requests as Callables and then prints the results of the executions. Currently the service request is blocked until I print all the Future results from the execution. However, I want to return 200 to the requester and run these requests in parallel without blocking the request. How can I achieve this?
Below is my code that runs the tasks in parallel.
public void runParallelFunctions(Callable<Map<String, String>> invokerTask) {
    List<Callable<Map<String, String>>> myTasks = new ArrayList<>();
    for (int i = 0; i < invocationCount; i++) {
        myTasks.add(invokerTask);
    }
    List<Future<Map<String, String>>> results = null;
    try {
        results = executorService.invokeAll(myTasks);
    } catch (InterruptedException e) {
    }
    this.printResultsFromParallelInvocations(results);
}
Below is how I print the results from the Futures.
private void printResultsFromParallelInvocations(List<Future<Map<String, String>>> results) {
    results.forEach(executionResults -> {
        try {
            executionResults.get().entrySet().forEach(entry -> {
                LOGGER.info(entry.getKey() + ": " + entry.getValue());
            });
        } catch (InterruptedException e) {
        } catch (ExecutionException e) {
        }
    });
}
Below is how I'm invoking the above methods when someone places a request to the service.
String documentToBeIndexed = GSON.toJson(indexDocument);
int documentId = indexMyDocument(documentToBeIndexed);
createAdditionalCandidatesForFuture(someInput);
return true;
In the above code, I call createAdditionalCandidatesForFuture and then return true, but the code still waits for the printResultsFromParallelInvocations method to complete. How can I make the code return after invoking createAdditionalCandidatesForFuture without waiting for the results to print? Do I have to print the results using another executor thread, or is there another way? Any help would be much appreciated.
The answer is CompletableFuture.
Updated runParallelFunctions:
public void runParallelFunctions(Callable<Map<String, String>> invokerTask) {
    // write a wrapper to handle the exception outside the CompletableFuture
    Supplier<Map<String, String>> taskSupplier = () -> {
        try {
            // some task that takes a long time
            Thread.sleep(4000);
            return invokerTask.call();
        } catch (Exception e) {
            System.out.println(e);
        }
        // return a default value on error
        return new HashMap<>();
    };
    for (int i = 0; i < 5; i++) {
        CompletableFuture.supplyAsync(taskSupplier, executorService)
            .thenAccept(this::printResultsFromParallelInvocations);
    }
    // the main thread immediately comes here after running through the loop
    System.out.println("Doing other work....");
}
And, printResultsFromParallelInvocations may look like:
private void printResultsFromParallelInvocations(Map<String, String> result) {
    result.forEach((key, value) -> System.out.println(key + ": " + value));
}
Output:
Doing other work....
// 4 secs wait
key:value
Calling get on a Future will block the thread until the task is completed, so yes, you will have to move the printing of the results to another thread/Executor service.
Another option is that each task prints its own results upon completion, provided it is supplied with the necessary tools to do so (access to the logger, etc.). Or, to put it another way, each task is divided into two consecutive steps: execution and printing, as in the sketch below.
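A rough sketch of that second option, reusing the executorService, invokerTask, invocationCount and LOGGER from the question: each submitted task logs its own result as its final step, so the request path never has to touch the futures:
for (int i = 0; i < invocationCount; i++) {
    executorService.submit(() -> {
        Map<String, String> result = invokerTask.call(); // may throw; submit() wraps failures in the Future
        result.forEach((key, value) -> LOGGER.info(key + ": " + value));
        return result;
    });
}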

CompletableFuture error handling in task chain

I'm completely lost on how to do error handling with CompletableFutures. What I need is to have multiple tasks running asynchronously. These tasks consist of multiple steps, as in this example:
Receive data from DB -> Use this data for request -> Do another request -> Update DB record
Now every step could cause an exception, e.g. DB record not found or incorrect data, request failed, bad response, DB update failed, etc. I want to handle these exceptions to log the error, stop the task, and maybe even revert it.
So I built a new project to play with CompletableFutures and simulate this process. I used the following code:
public static Integer randomError() {
    Random rd = new Random();
    if (rd.nextBoolean()) {
        try {
            throw new Exception("RANDOM ERROR");
        } catch (Exception e) {
            e.printStackTrace();
        }
    } else {
        return rd.nextInt();
    }
    return 0;
}
ExecutorService ex = Executors.newFixedThreadPool(64);
System.out.println("Main thread: " + Thread.currentThread());
//Starting tasks
List<CompletableFuture> listTasks = new ArrayList<CompletableFuture>();
List<String> listErrors = new ArrayList<String>();
System.out.println("Starting threads...");
for (int i = 0; i < 10; i++) {
    int counter = i;
    //Add tasks to TaskQueue (taskList)
    listTasks.add(
        CompletableFuture.supplyAsync(() -> {
            //Simulate step 1
            return 0;
        }, ex).thenApplyAsync(x -> {
            //Simulate step 2
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return x + 1;
        }, ex).thenApplyAsync(x -> {
            //Simulate step 3 with a potential error
            randomError();
            return x + 1;
        }, ex).thenApplyAsync(x -> {
            //On error this shouldn't be executed?
            //Simulate step 4
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return x + 1;
        }, ex).thenAcceptAsync(x -> {
            //Simulate COMPLETION step 5
            // listTasks.remove(counter);
        }, ex).exceptionally(e -> {
            listErrors.add("ERROR: " + counter);
            System.out.println(e);
            return null;
        })
    );
}
System.out.println("Done");
Now this piece of code creates 10 tasks, where every task consists of 5 steps. When step 3 produces an exception, step 4 still executes. Why? In my console I see the error thrown, but the CompletableFuture still completes OK. When I do 1 / 0;, that produces an error which does get caught by .exceptionally(). How is that caught and not the custom thrown exception?
What I want is, on error, to stop the chain and go to .exceptionally() to handle the error.

Enforce executorService.awaitTermination while using CompletionService

I am trying to submit multiple tasks and obtain the results as and when they become available. However, after the end of the loop, I have to enforce that all the tasks complete within a specified amount of time; if not, throw an error. Initially, all I had were the executorService's invokeAll, shutdown and awaitTermination calls, which were used to ensure that all tasks complete (with or without errors). I migrated the code to use CompletionService to display the results. Where can I enforce the awaitTermination clause in the CompletionService calls?
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
logger.info("Submitting all tasks");
for (Callable<String> task : tasks)
    completionService.submit(task);
executor.shutdown();
logger.info("Tasks submitted. Now checking the status.");
while (!executor.isTerminated())
{
    final Future<String> future = completionService.take();
    String itemValue;
    try
    {
        itemValue = future.get();
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    }
    catch (InterruptedException | ExecutionException e)
    {
        String message = e.getCause().getMessage();
        String objName = "Bulk";
        if (message.contains("(") && message.contains(")"))
            objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
        logger.error("Failed retrieving the task status for " + objName, e);
    }
}
executor.awaitTermination(24, TimeUnit.HOURS);
In other words, how can I use a timeout with CompletionService?
EDIT:
The initial code I had is shown below. The problem is that I iterate through the future list and print the results only once everything is done. However, my requirement is to display each result as it completes, on a first-come, first-served (FCFS) basis.
List<Future<String>> results = executor.invokeAll(tasks);
executor.shutdown();
executor.awaitTermination(24, TimeUnit.HOURS);
while (results.size() > 0)
{
    for (Iterator<Future<String>> iterator = results.iterator(); iterator.hasNext();)
    {
        Future<String> item = iterator.next();
        if (item.isDone())
        {
            String itemValue;
            try
            {
                itemValue = item.get();
                if (!itemValue.equals("Bulk"))
                    logger.info("Backup completed for " + itemValue);
            }
            catch (InterruptedException | ExecutionException e)
            {
                String message = e.getCause().getMessage();
                String objName = "Bulk";
                if (message.contains("(") && message.contains(")"))
                    objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
                logger.error("Failed retrieving the task status for " + objName, e);
            }
            finally
            {
                iterator.remove();
            }
        }
    }
}
I'd suggest you wait for the executor to terminate on another thread.
That way you can serve results FCFS and also enforce the timeout.
It can easily be achieved with something like the following:
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
// place all the work in a function (an anonymous Callable in this case)
// completionService.submit(() -> { work });
// as soon as the work is submitted it is handled by another thread
completionService.submit(() -> {
    logger.info("Submitting all tasks");
    for (Callable<String> task : tasks)
        completionService.submit(task);
    logger.info("Tasks submitted. Now checking the status.");
    int counter = tasks.size();
    for (int i = counter; counter >= 1; counter--) // replaced the while loop
    {
        final Future<String> future = completionService.take();
        String itemValue;
        try
        {
            itemValue = future.get();
            if (!itemValue.equals("Bulk"))
                logger.info("Backup completed for " + itemValue);
        }
        catch (InterruptedException | ExecutionException e)
        {
            String message = e.getCause().getMessage();
            String objName = "Bulk";
            if (message.contains("(") && message.contains(")"))
                objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
            logger.error("Failed retrieving the task status for " + objName, e);
        }
    }
    return null; // the monitoring lambda must be a Callable<String>, so it has to return a value
});
// After submitting the work to another Thread
// Wait in your Main Thread, and enforce termination if needed
shutdownAndAwaitTermination(executor);
You handle the executor's termination and waiting using this (taken from the ExecutorService documentation):
void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // Disable new tasks from being submitted
    try {
        // Wait a while for existing tasks to terminate
        if (!pool.awaitTermination(24, TimeUnit.HOURS)) {
            pool.shutdownNow(); // Cancel currently executing tasks
            // Wait a while for tasks to respond to being cancelled
            if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                System.err.println("Pool did not terminate");
        }
    } catch (InterruptedException ie) {
        // (Re-)Cancel if current thread also interrupted
        pool.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}
OK then, you need to monitor completion. So why not use ExecutorCompletionService as shown in the documentation? https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorCompletionService.html The example there submits n tasks to a new instance of ExecutorCompletionService and waits for those n tasks to complete. No termination is needed either; you can just reuse the same executor (usually a thread pool; creating a new thread is more expensive than reusing one from a pool). If I adapt the code from the documentation to your scenario, it would look something like this:
CompletionService<String> ecs
    = new ExecutorCompletionService<String>(executor);
for (Callable<String> task : tasks)
    ecs.submit(task);
logger.info("Tasks submitted. Now checking the status.");
int n = tasks.size();
for (int i = 0; i < n; ++i) {
    try {
        String r = ecs.take().get();
        logger.info("Backup completed for " + r);
    }
    catch (InterruptedException | ExecutionException e) {
        ...
    }
}
Also, it is a bad idea to parse the exception message; it is better to create your own custom exception class and check it with instanceof, as in the sketch below.
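A sketch of that idea (the exception class name is made up for illustration): the task throws an exception that carries the object name explicitly, and the catch block inspects the cause type instead of parsing the message:
// Hypothetical custom exception carrying the object name explicitly
class BackupFailedException extends Exception {
    final String objectName;
    BackupFailedException(String objectName, Throwable cause) {
        super("Backup failed for " + objectName, cause);
        this.objectName = objectName;
    }
}

// In the catch block, replace the substring/indexOf logic with:
Throwable cause = e.getCause();
String objName = (cause instanceof BackupFailedException)
        ? ((BackupFailedException) cause).objectName
        : "Bulk";
logger.error("Failed retrieving the task status for " + objName, e);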
If you need a timeout for the completion, use poll with time parameters instead of take.
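For example, the waiting loop above could be rewritten with poll; a sketch, using the same 24-hour figure as the awaitTermination call in the question (note this is a per-wait timeout, not an overall deadline):
int n = tasks.size();
for (int i = 0; i < n; ++i) {
    try {
        // poll waits up to the given time for the next completed task and returns null on timeout
        Future<String> future = ecs.poll(24, TimeUnit.HOURS);
        if (future == null)
            throw new IllegalStateException("Tasks did not complete within 24 hours");
        String itemValue = future.get();
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    } catch (InterruptedException | ExecutionException e) {
        logger.error("Failed retrieving the task status", e);
    }
}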

Future waiting for FixedThreadPool is returning before all Threads finish

I'm trying to wait for all my threads to finish before executing another task, using a Future, but something is wrong: my future only waits for the last thread of my for loop.
My executor method:
public static Future<?> downloadImages(Executor e, MainViewController controller, String filePath, String dns, int port, int numImg,
        String offlineUuid, Map<String, String> cookies, String type, String outputFolder) throws SystemException, IOException, InterruptedException {
    String urlImages;
    String filePath2;
    Future future = null;
    if (numImg == 1) {
        //Some Code
    } else {
        type = "multimages";
        ExecutorService es = Executors.newFixedThreadPool(numImg);
        for (int i = 0; i < numImg; i++) {
            filePath2 = "";
            filePath2 = filePath + File.separator + "TargetApp" + File.separator + "TempImage" + i + "Download.zip";
            urlImages = "http://" + dns + ":" + port + Constants.TARGET_SERVICE_DOWNLOADIMAGES_PATH + offlineUuid + "/?pos=" + (i);
            future = es.submit(new DownloaderAndUnzipTask(controller, urlImages, filePath2, outputFolder, cookies, type));
        }
        return future;
    }
    return null;
}
My waiting method:
Future future = fullDownloadSelected(tableViewFull.getSelectionModel().getSelectedIndex());
if (future != null) {
    try {
        future.get();
        if (future.isDone());
        System.out.println("Processamento de Imagens Acabou");
    } catch (ExecutionException ex) {
        Logger.getLogger(MainViewController.class.getName()).log(Level.SEVERE, null, ex);
    }
}
My message is shown when the last thread created in the first method finishes, but it should only appear when all threads in the pool have finished. I think something is wrong with the way I submit tasks to my executor inside the for loop, but how can I fix it?
You need to capture every Future returned and then wait for each one to complete (using get on each).
You can, alternatively, do something like:
ExecutorService es = Executors.newFixedThreadPool(numImg);
List<Callable<Object>> tasks = ...
for (int i = 0; i < numImg; i++) {
    tasks.add(your task);
}
List<Future<Object>> futures = es.invokeAll(tasks);
which will only return once all the tasks within are complete.
You are reassigning the future in each iteration.
You can use invokeAll, which returns when all submitted tasks are done.
You are just waiting for the last Future to finish.
future = es.submit(...);
...
return future;
...
// in waiting method, wait for the last job to finish
future.get();
This only waits for the last of the jobs submitted to the executor service to finish; other jobs can still be running. You should instead return the ExecutorService from downloadImages(). Then, in your waiting method, you do:
// you must always shut the service down, no more jobs can be submitted
es.shutdown();
// waits (effectively forever) for all submitted tasks to complete
es.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
It may make more sense for you to create your ExecutorService in the calling method and pass it into downloadImages().
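A hedged sketch of the first suggestion (collect every Future instead of keeping only the last one); here the return type of downloadImages changes to List<Future<?>>, and the exception handling in the caller stays as terse as in the question:
// inside downloadImages, instead of overwriting a single future:
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < numImg; i++) {
    // build filePath2 and urlImages as before
    futures.add(es.submit(new DownloaderAndUnzipTask(controller, urlImages, filePath2, outputFolder, cookies, type)));
}
return futures;

// in the waiting method:
for (Future<?> f : futures) {
    f.get(); // blocks until this particular download finishes
}
System.out.println("Processamento de Imagens Acabou");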
