CompletableFuture error handling in task chain - java

I'm completely lost on how to do error handling with CompletableFutures. What I need is to have multiple tasks running async. Each task consists of multiple steps, like this example:
Receive data from DB -> Use this data for request -> Do another request -> Update DB record
Now every step could cause an Exception, e.g. DB record not found or incorrect data, request failed, bad response, DB update failed, etc. I want to handle these Exceptions to log the error, stop the task, and maybe even revert it.
Now I built a new project to play with CompletableFutures and simulate this process. I used the following code:
public static Integer randomError() {
    Random rd = new Random();
    if (rd.nextBoolean()) {
        try {
            throw new Exception("RANDOM ERROR");
        } catch (Exception e) {
            e.printStackTrace();
        }
    } else {
        return rd.nextInt();
    }
    return 0;
}
ExecutorService ex = Executors.newFixedThreadPool(64);
System.out.println("Main thread: " + Thread.currentThread());
//Starting tasks
List<CompletableFuture> listTasks = new ArrayList<CompletableFuture>();
List<String> listErrors = new ArrayList<String>();
System.out.println("Starting threads...");
for (int i = 0; i < 10; i++) {
    int counter = i;
    //Add tasks to TaskQueue (taskList)
    listTasks.add(
        CompletableFuture.supplyAsync(() -> {
            //Simulate step 1
            return 0;
        }, ex).thenApplyAsync(x -> {
            //Simulate step 2
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return x + 1;
        }, ex).thenApplyAsync(x -> {
            //Simulate step 3 with a potential error
            randomError();
            return x + 1;
        }, ex).thenApplyAsync(x -> {
            //On error this shouldn't be executed?
            //Simulate step 4
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return x + 1;
        }, ex).thenAcceptAsync(x -> {
            //Simulate COMPLETION step 5
            // listTasks.remove(counter);
        }, ex).exceptionally(e -> {
            listErrors.add("ERROR: " + counter);
            System.out.println(e);
            return null;
        })
    );
}
System.out.println("Done");
Now this piece of code creates 10 tasks, where every task consists of 5 steps. When step 3 produces an Exception, step 4 still executes. Why? In my serial monitor I see the error being thrown, but the CompletableFuture still completes OK. When I do 1 / 0; instead, that produces an error which does get caught by .exceptionally(). How is that caught and not the custom thrown Exception?
What I want is: on error, stop the chain and go to .exceptionally() to handle the error.
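For reference, a minimal sketch of the direction a fix would likely take (an illustration, not part of the original question): the chain only routes to .exceptionally() if the exception actually escapes the step. randomError() catches its own exception, so nothing propagates, whereas 1 / 0 throws an uncaught ArithmeticException. Rethrowing as an unchecked exception makes the stage fail:
public static Integer randomError() {
    Random rd = new Random();
    if (rd.nextBoolean()) {
        // Do not swallow the exception: let it escape the lambda/step.
        // CompletionException (java.util.concurrent) is unchecked, so it can wrap the
        // checked Exception; the stage then completes exceptionally, the remaining
        // thenApplyAsync steps are skipped, and .exceptionally() is invoked.
        throw new CompletionException(new Exception("RANDOM ERROR"));
    }
    return rd.nextInt();
}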

Related

How to get the execution results of ExecutorService without blocking the current code path?

I have a service which wraps a bunch of requests in Callables and then prints the results of the executions. Currently the service request is blocked until I have printed all the Future results from the execution. However, I want to return 200 to the requestor and run these requests in parallel without blocking the request. How can I achieve this?
Below is the code that runs the tasks in parallel.
public void runParallelFunctions(Callable<Map<String, String>> invokerTask) {
    List<Callable<Map<String, String>>> myTasks = new ArrayList<>();
    for (int i = 0; i < invocationCount; i++) {
        myTasks.add(invokerTask);
    }
    List<Future<Map<String, String>>> results = null;
    try {
        results = executorService.invokeAll(myTasks);
    } catch (InterruptedException e) {
    }
    this.printResultsFromParallelInvocations(results);
}
Below is how I print the results from the Futures.
private void printResultsFromParallelInvocations(List<Future<Map<String, String>>> results) {
    results.forEach(executionResults -> {
        try {
            executionResults.get().entrySet().forEach(entry -> {
                LOGGER.info(entry.getKey() + ": " + entry.getValue());
            });
        } catch (InterruptedException e) {
        } catch (ExecutionException e) {
        }
    });
}
Below is how I'm invoking the above methods when someone places a request to the service.
String documentToBeIndexed = GSON.toJson(indexDocument);
int documentId = indexMyDocument(documentToBeIndexed);
createAdditionalCandidatesForFuture(someInput);
return true;
In the above code, I call createAdditionalCandidatesForFuture and then return true. But the code still waits for the printResultsFromParallelInvocations method to complete. How can I make the code return after invoking createAdditionalCandidatesForFuture without waiting for the results to print? Do I have to print the results using another executor thread, or is there another way? Any help would be much appreciated.
The answer is CompletableFuture.
Updated runParallelFunctions:
public void runParallelFunctions(Callable<Map<String, String>> invokerTask) {
    // write a wrapper to handle exceptions outside the CompletableFuture
    Supplier<Map<String, String>> taskSupplier = () -> {
        try {
            // some task that takes a long time
            Thread.sleep(4000);
            return invokerTask.call();
        } catch (Exception e) {
            System.out.println(e);
        }
        // return default value on error
        return new HashMap<>();
    };
    for (int i = 0; i < 5; i++) {
        CompletableFuture.supplyAsync(taskSupplier, executorService)
                .thenAccept(this::printResultsFromParallelInvocations);
    }
    // main thread immediately comes here after running through the loop
    System.out.println("Doing other work....");
}
And, printResultsFromParallelInvocations may look like:
private void printResultsFromParallelInvocations(Map<String, String> result) {
    result.forEach((key, value) -> System.out.println(key + ": " + value));
}
Output:
Doing other work....
// 4 secs wait
key:value
Calling get on a Future will block the thread until the task is completed, so yes, you will have to move the printing of the results to another thread/Executor service.
Another option is for each task to print its results upon completion, provided the tasks are supplied with the necessary tools to do so (access to the logger, etc.). Or, to put it another way, each task is divided into two consecutive steps: execution and printing.
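As a rough sketch of the first option (an illustration under assumptions, not the original answer's code): keep invokeAll as it is, but move the whole blocking submit-and-print sequence onto a separate, hypothetical reportingExecutor so the request thread can return 200 immediately:
public void runParallelFunctions(Callable<Map<String, String>> invokerTask) {
    List<Callable<Map<String, String>>> myTasks = new ArrayList<>();
    for (int i = 0; i < invocationCount; i++) {
        myTasks.add(invokerTask);
    }
    // 'reportingExecutor' is an assumed, separately configured executor; invokeAll blocks
    // until every task is done, so that wait now happens off the request thread
    reportingExecutor.submit(() -> {
        try {
            List<Future<Map<String, String>>> results = executorService.invokeAll(myTasks);
            printResultsFromParallelInvocations(results);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    // returns immediately; the 200 response is no longer held up by the printing
}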

Flux of strings emitted from time to time

My problem: I want to create a stream of strings that will be sent from the controller from time to time.
Processing started!
Step 1 completed. (This might be sent after 5 seconds or 10 minutes.)
Process completed. (This might be sent after 15 minutes.)
Here is code snippet in controller:
@GetMapping(value = "/stream1", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> streamData() {
    return Flux.create(emitter -> {
        emitter.next("Processing started!");
        try {
            TimeUnit.SECONDS.sleep(5);
            emitter.next("Step 1 completed.");
            TimeUnit.SECONDS.sleep(5);
            emitter.next("Process completed.");
            emitter.complete();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }, FluxSink.OverflowStrategy.LATEST);
    //create.publish().connect();
    //return create;
}
But it emits data only when it has completed all the processing. That means it emits the data after 10 seconds, and the whole stream arrives at once.
How can I achieve a stream that starts sending data as soon as a single item is ready?
You are using a less than ideal method for your task. You can use Flux.generate(...). In contrast to Flux.create(...), it generates a single item per invocation, and it is only invoked when the subscriber requests something, so there is no problem with backpressure.
Sample:
@GetMapping(value = "/feapi/automation/approach1", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> streamData() {
    final AtomicInteger counter = new AtomicInteger();
    return Flux.generate(generator -> {
        try {
            TimeUnit.SECONDS.sleep(new Random().nextInt(1, 10));
            generator.next("Next step (" + counter.incrementAndGet() + ") done. Going further.");
            // NOTE: only a SINGLE item can be emitted per generator call. You can also call complete or error.
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
}
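As a rough way to verify the timing locally (a sketch, not part of the original answer): with Flux.generate and a plain subscribe, generation runs on the calling thread, so the call below blocks while printing each item as it is produced, one at a time:
// Take a few items and log their arrival times to confirm they arrive one by one
// rather than all at once (assumes the streamData() method defined above).
streamData()
    .take(3)
    .subscribe(item -> System.out.println(java.time.LocalTime.now() + " " + item));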

Calling ExecutorService.shutdownNow from CompletableFuture

I need to cancel all scheduled but not yet running CompletableFuture tasks when one of the already running tasks throws an exception.
I tried the following example, but most of the time the main method does not exit (probably due to some kind of deadlock).
public static void main(String[] args) {
    ExecutorService executionService = Executors.newFixedThreadPool(5);
    Set<CompletableFuture<?>> tasks = new HashSet<>();
    for (int i = 0; i < 1000; i++) {
        final int id = i;
        CompletableFuture<?> c = CompletableFuture
            .runAsync( () -> {
                System.out.println("Running: " + id);
                if ( id == 400 ) throw new RuntimeException("Exception from: " + id);
            }, executionService )
            .whenComplete( (v, ex) -> {
                if ( ex != null ) {
                    System.out.println("Shutting down.");
                    executionService.shutdownNow();
                    System.out.println("shutdown.");
                }
            } );
        tasks.add(c);
    }
    try {
        CompletableFuture.allOf( tasks.stream().toArray(CompletableFuture[]::new) ).join();
    } catch(Exception e) {
        System.out.println("Got async exception: " + e);
    } finally {
        System.out.println("DONE");
    }
}
Last printout is something like this:
Running: 402
Running: 400
Running: 408
Running: 407
Running: 406
Running: 405
Running: 411
Shutting down.
Running: 410
Running: 409
Running: 413
Running: 412
shutdown.
I tried running the shutdownNow method on a separate thread, but it still, most of the time, gives the same deadlock.
Any idea what might cause this deadlock?
And what do you think is the best way to cancel all scheduled but not yet running CompletableFutures when an exception is thrown?
I was thinking of iterating over the tasks and calling cancel on each CompletableFuture, but what I don't like about this is that it throws a CancellationException from join.
You should keep in mind that
CompletableFuture<?> f = CompletableFuture.runAsync(runnable, executionService);
is basically equivalent to
CompletableFuture<?> f = new CompletableFuture<>();
executionService.execute(() -> {
    if(!f.isDone()) {
        try {
            runnable.run();
            f.complete(null);
        }
        catch(Throwable t) {
            f.completeExceptionally(t);
        }
    }
});
So the ExecutorService doesn’t know anything about the CompletableFuture, therefore, it can’t cancel it in general. All it has, is some job, expressed as an implementation of Runnable.
In other words, shutdownNow() will prevent the execution of the pending jobs, thus, the remaining futures won’t get completed normally, but it will not cancel them. Then, you call join() on the future returned by allOf which will never return due to the never-completed futures.
But note that the scheduled job does check whether the future is already completed before doing anything expensive.
So, if you change your code to
ExecutorService executionService = Executors.newFixedThreadPool(5);
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
AtomicBoolean canceled = new AtomicBoolean();
for(int i = 0; i < 1000; i++) {
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            System.out.println("Running: " + id);
            if(id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if(ex != null && canceled.compareAndSet(false, true)) {
            System.out.println("Canceling.");
            for(CompletableFuture<?> f: tasks) f.cancel(false);
            System.out.println("Canceled.");
        }
    });
    tasks.add(c);
    if(canceled.get()) {
        c.cancel(false);
        break;
    }
}
try {
    CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
} catch(Exception e) {
    System.out.println("Got async exception: " + e);
} finally {
    System.out.println("DONE");
}
executionService.shutdown();
The runnables won’t get executed once their associated future has been canceled. Since there is a race between the cancelation and the ordinary execution, it might be helpful to change the action to
.runAsync(() -> {
    System.out.println("Running: " + id);
    if(id == 400) throw new RuntimeException("Exception from: " + id);
    LockSupport.parkNanos(1000);
}, executionService);
to simulate some actual workload. Then, you will see that fewer actions get executed after encountering the exception.
Since the asynchronous exception may even happen while the submitting loop is still running, the code uses an AtomicBoolean to detect this situation and stop the loop in that case.
Note that for a CompletableFuture, there is no difference between cancelation and any other exceptional completion. Calling f.cancel(…) is equivalent to f.completeExceptionally(new CancellationException()). Therefore, since CompletableFuture.allOf reports any exception in the exceptional case, it will be very likely a CancellationException instead of the triggering exception.
If you replace the two cancel(false) calls with complete(null), you get a similar effect, the runnables won’t get executed for already completed futures, but allOf will report the original exception, as it is the only exception then. And it has another positive effect: completing with a null value is much cheaper than constructing a CancellationException (for every pending future), so the forced completion via complete(null) runs much faster, preventing more futures from executing.
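To make that variant concrete, a sketch (an illustration, not from the original answer) of how the whenComplete handler from the example above would look with complete(null) instead of cancel(false):
c.whenComplete((v, ex) -> {
    if(ex != null && canceled.compareAndSet(false, true)) {
        System.out.println("Completing pending futures.");
        // complete(null) is a no-op for futures that are already done, and it is cheaper
        // than cancel(false) because no CancellationException has to be constructed
        for(CompletableFuture<?> f: tasks) f.complete(null);
        System.out.println("Done completing.");
    }
});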
Another solution that relies only on CompletableFuture is to use a “canceller” future that will cause all non-done tasks to be cancelled when completed:
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
CompletableFuture<Void> canceller = new CompletableFuture<>();
for(int i = 0; i < 1000; i++) {
    if (canceller.isDone()) {
        System.out.println("Canceller invoked, not creating other futures.");
        break;
    }
    //LockSupport.parkNanos(10);
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            //LockSupport.parkNanos(1000);
            System.out.println("Running: " + id);
            if(id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if(ex != null) {
            canceller.complete(null);
        }
    });
    tasks.add(c);
}
canceller.thenRun(() -> {
    System.out.println("Cancelling all tasks.");
    tasks.forEach(t -> t.cancel(false));
    System.out.println("Finished cancelling tasks.");
});

Using Future with ExecutorService

I need to execute two tasks in parallel and wait for them to complete. Also, I need the result from the second task; for that I am using a Future.
My question is: do I need executor.awaitTermination to join the tasks, or will Future.get() take care of it? Also, is there a better way to achieve this with Java 8?
public class Test {
    public static void main(String[] args) {
        test();
        System.out.println("Exiting Main");
    }

    public static void test() {
        System.out.println("In Test");
        ExecutorService executor = Executors.newFixedThreadPool(2);
        executor.submit(() -> {
            for (int i = 0; i < 5; i++) {
                System.out.print("[" + i + "]");
                try {
                    Thread.sleep(1000);
                } catch (Exception e) { e.printStackTrace(); }
            }
        });
        Future<String> result = executor.submit(() -> {
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10; i++) {
                System.out.print("(" + i + ")");
                try {
                    Thread.sleep(1000);
                } catch (Exception e) { e.printStackTrace(); }
                builder.append(i);
            }
            return builder.toString();
        });
        System.out.println("shutdown");
        executor.shutdown();

        // DO I need this code : START
        System.out.println("awaitTermination");
        try {
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        } catch (InterruptedException e) {
            System.out.println("Error");
        }
        // DO I need this code : END

        System.out.println("Getting result");
        try {
            System.out.println(result.get());
        }
        catch (InterruptedException e) { e.printStackTrace(); }
        catch (ExecutionException e) { e.printStackTrace(); }
        System.out.println("Exiting Test");
    }
}
OUTPUT with awaitTermination:
In Test
[0]shutdown
(0)awaitTermination
[1](1)[2](2)[3](3)[4](4)(5)(6)(7)(8)(9)Getting result
0123456789
Exiting Test
Exiting Main
OUTPUT without awaitTermination:
In Test
[0]shutdown
Getting result
(0)[1](1)[2](2)[3](3)[4](4)(5)(6)(7)(8)(9)0123456789
Exiting Test
Exiting Main
From the get javadoc:
Waits if necessary for the computation to complete, and then retrieves its result.
get will wait for the second task only.
From the awaitTermination javadoc:
Blocks until all tasks have completed execution after a shutdown request, or the timeout occurs, or the current thread is interrupted, whichever happens first.
awaitTermination will wait for all tasks.
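In other words, if the first task's Future is also captured, calling get() on both futures is already enough to wait for both tasks, and awaitTermination is not needed for correctness (shutdown() is still useful so the pool threads do not keep the application alive). A condensed sketch of that variant of test() (an illustration, assuming the surrounding method declares throws Exception):
ExecutorService executor = Executors.newFixedThreadPool(2);

Future<?> first = executor.submit(() -> {
    for (int i = 0; i < 5; i++) {
        System.out.print("[" + i + "]");
        try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); }
    }
});

Future<String> result = executor.submit(() -> {
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < 10; i++) {
        System.out.print("(" + i + ")");
        Thread.sleep(1000);
        builder.append(i);
    }
    return builder.toString();
});

executor.shutdown();                // no new tasks; already-submitted ones keep running
first.get();                        // waits for the first task to finish
System.out.println(result.get());   // waits for the second task and prints its result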
You should use the CompletableFuture API.
You can run a process asynchronously as follows:
CompletableFuture.supplyAsync( () -> { ... } );
It returns a future, and you can add a callback which will be called when the process is finished and the result is available.
For example:
CompletableFuture.supplyAsync( () -> {
    // Here compute your string
    return "something";
} ).thenAccept( result -> {
    // Here do something with result (i.e. the computed string)
} );
Note that this statement internally uses ForkJoinPool#commonPool() to execute the process asynchronously, but you can also call it with your own ExecutorService if you want. In both cases, in order to be sure not to exit before the tasks are completed, you need to either call get() (which is blocking) on each future of the submitted tasks, or wait for the executor to shut down.
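For example, a small sketch (with assumed names, not part of the original answer) of the same idea with an explicit executor, where join() blocks until the whole chain has finished so the program does not exit early:
ExecutorService executor = Executors.newFixedThreadPool(2);

CompletableFuture<Void> pipeline = CompletableFuture
        .supplyAsync(() -> "something", executor)              // compute the string on the pool
        .thenAccept(result -> System.out.println(result));     // consume it once it is available

pipeline.join();        // wait for the chain to complete (unchecked, unlike get())
executor.shutdown();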

Enforce executorService.awaitTermination while using CompletionService

I am trying to submit multiple tasks and obtain the results as and when they are available. However, after the end of the loop, I have to enforce that all the tasks complete within a specified amount of time; if not, throw an error. Initially, all I had were executorService's invokeAll, shutdown and awaitTermination calls, which were used to ensure that all tasks complete (whether they error or not). I migrated the code to use CompletionService to display the results. Where can I enforce the awaitTermination clause in the CompletionService calls?
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
logger.info("Submitting all tasks");
for (Callable<String> task : tasks)
    completionService.submit(task);
executor.shutdown();
logger.info("Tasks submitted. Now checking the status.");
while (!executor.isTerminated())
{
    final Future<String> future = completionService.take();
    String itemValue;
    try
    {
        itemValue = future.get();
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    }
    catch (InterruptedException | ExecutionException e)
    {
        String message = e.getCause().getMessage();
        String objName = "Bulk";
        if (message.contains("(") && message.contains(")"))
            objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
        logger.error("Failed retrieving the task status for " + objName, e);
    }
}
executor.awaitTermination(24, TimeUnit.HOURS);
In other words, how can I utilize a timeout with CompletionService?
EDIT:
The initial code I had is displayed below. The problem is that I am iterating through the future list and printing the ones that are done. However, my requirement is to display the results on a first-come, first-served (FCFS) basis.
List<Future<String>> results = executor.invokeAll(tasks);
executor.shutdown();
executor.awaitTermination(24, TimeUnit.HOURS);
while (results.size() > 0)
{
    for (Iterator<Future<String>> iterator = results.iterator(); iterator.hasNext();)
    {
        Future<String> item = iterator.next();
        if (item.isDone())
        {
            String itemValue;
            try
            {
                itemValue = item.get();
                if (!itemValue.equals("Bulk"))
                    logger.info("Backup completed for " + itemValue);
            }
            catch (InterruptedException | ExecutionException e)
            {
                String message = e.getCause().getMessage();
                String objName = "Bulk";
                if (message.contains("(") && message.contains(")"))
                    objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
                logger.error("Failed retrieving the task status for " + objName, e);
            }
            finally
            {
                iterator.remove();
            }
        }
    }
}
I'd suggest you wait for the executor to terminate on another thread.
That way you can serve results FCFS and also enforce the timeout.
It can easily be achieved with something like the following:
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);

// place all the work in a function (an anonymous task in this case)
// completionService.submit(() -> {work});
// as soon as the work is submitted it is handled by another Thread
completionService.submit(() -> {
    logger.info("Submitting all tasks");
    for (Callable<String> task : tasks)
        completionService.submit(task);
    logger.info("Tasks submitted. Now checking the status.");
    int counter = tasks.size();
    for (int i = counter; counter >= 1; counter--) // replaced the while loop
    {
        final Future<String> future = completionService.take();
        String itemValue;
        try
        {
            itemValue = future.get();
            if (!itemValue.equals("Bulk"))
                logger.info("Backup completed for " + itemValue);
        }
        catch (InterruptedException | ExecutionException e)
        {
            String message = e.getCause().getMessage();
            String objName = "Bulk";
            if (message.contains("(") && message.contains(")"))
                objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
            logger.error("Failed retrieving the task status for " + objName, e);
        }
    }
    return null; // makes this lambda a Callable<String>, so submit(...) accepts it
});

// After submitting the work to another Thread
// Wait in your Main Thread, and enforce termination if needed
shutdownAndAwaitTermination(executor);
You handle the executor's termination and waiting using this (taken from the ExecutorService javadoc):
void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // Disable new tasks from being submitted
    try {
        // Wait a while for existing tasks to terminate
        if (!pool.awaitTermination(24, TimeUnit.HOURS)) {
            pool.shutdownNow(); // Cancel currently executing tasks
            // Wait a while for tasks to respond to being cancelled
            if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                System.err.println("Pool did not terminate");
        }
    } catch (InterruptedException ie) {
        // (Re-)Cancel if current thread also interrupted
        pool.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}
OK then, you need to monitor completion. So why not use it as shown in the documentation? https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorCompletionService.html There, n tasks are submitted to a new instance of ExecutorCompletionService and the code waits for all n of them to complete. Again, no termination is needed; you can just reuse the same executor (usually a thread pool, since creating a new thread is more expensive than reusing one from a pool). So, if I adapt the code from the documentation to your scenario, it would be something like:
CompletionService<String> ecs
    = new ExecutorCompletionService<String>(executor);
for (Callable<String> task : tasks)
    ecs.submit(task);
logger.info("Tasks submitted. Now checking the status.");
int n = tasks.size();
for (int i = 0; i < n; ++i) {
    try {
        String r = ecs.take().get();
        logger.info("Backup completed for " + r);
    }
    catch (InterruptedException | ExecutionException e) {
        ...
    }
}
Also, it is a bad idea to parse the exception message; it is better to create your own custom exception class and use instanceof.
If you need a timeout for the completion, use poll with time parameters instead of take.
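For illustration, a sketch of that poll-based variant (an illustration, not from the answers above), using a deadline so that all tasks together must finish within 24 hours rather than 24 hours each; it assumes the surrounding method declares throws InterruptedException:
long deadline = System.nanoTime() + TimeUnit.HOURS.toNanos(24);
int n = tasks.size();
for (int i = 0; i < n; ++i) {
    // poll returns null if nothing completes before the remaining time runs out
    Future<String> future = completionService.poll(deadline - System.nanoTime(), TimeUnit.NANOSECONDS);
    if (future == null)
        throw new IllegalStateException("Tasks did not complete within 24 hours");
    try {
        String itemValue = future.get();
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    } catch (ExecutionException e) {
        logger.error("Failed retrieving the task status", e);
    }
}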
