service.shutdownNow() not killing the thread - java

In my application I'm spawning a single-thread executor, and in that thread I load ML models and run predictions. If any of this logic exceeds the time limit (4 minutes), I shut the thread down.
But when the application is up and a timeout happens, I can see logs of the thread being shut down, yet the process (the prediction logic) continues to execute.
Code snippet for creating the thread
ExecutorService service = Executors.newSingleThreadExecutor();
Future<Object> prediction = null;
try {
    prediction = service.submit(() -> {
        // Execute the requested Document Prediction Engine against the XML Document.
        return executeEngine(tenantId, appId, engine, document,
                predictionParams == null ? new HashMap<>() : new HashMap<>(predictionParams)).get();
    });
    predictionResult = prediction != null
            ? (PredictionResult) prediction.get(Long.parseLong(System.getProperty(IDR_INTERNAL_TIMEOUT, "30000")),
                    TimeUnit.MILLISECONDS)
            : null;
} catch (TimeoutException e) {
    if (prediction != null) {
        LOGGER.debug("Task was cancelled with a {} status", (prediction.cancel(true) ? "successful" : "failure"));
    }
    ExpenseMetrics.internalPredictionTimeout.inc();
    String message = "Prediction took more than allowed milliseconds: "
            + Long.parseLong(System.getProperty(IDR_INTERNAL_TIMEOUT, "30000"))
            + " fileName: " + documentFile.getFileName();
    if (service != null && !service.isShutdown()) {
        service.shutdownNow();
    }
    service = null;
    throw new IDRExmClientException(message, requestId, ErrorCode.INTERNAL_TIMEOUT);
}
if (service != null && !service.isShutdown()) {
    service.shutdownNow();
}
service = null;
Code snippet for Prediction logic and timeout
List<Callable<Void>> taskList = new ArrayList<Callable<Void>>();
taskList.add(callable1);
taskList.add(callable2);
ExecutorService executor = null;
List<Future<Void>> futures = null;
long s = System.currentTimeMillis();
try {
    executor = Executors.newFixedThreadPool(THREAD_POOL);
    futures = executor.invokeAll(taskList);
    executor.shutdown();
    if (!executor.awaitTermination(TOLERANCE_MINUTES, TimeUnit.MINUTES)) {
        LOGGER.warn("Document predict thread took more than {} minutes to shutdown", TOLERANCE_MINUTES);
        executor.shutdownNow();
    }
} catch (InterruptedException iex) {
    LOGGER.error("Document predict thread was interrupted", iex);
} finally {
    cancelFutures("Predict", futures);
    LOGGER.debug("Document predict thread took: {}", (System.currentTimeMillis() - s));
    if (executor != null && !executor.isShutdown()) {
        executor.shutdownNow();
    }
}
executor = null;

From the Oracle documentation of the method shutdownNow():
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt, so any task that fails to respond to interrupts may never terminate.
You can also see this answer, it may help you:
ExecutorService is not shutting down
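In other words, the prediction logic itself has to cooperate with interruption, otherwise neither prediction.cancel(true) nor service.shutdownNow() can stop it. A minimal sketch, assuming the long-running work can be split into chunks; the loop and sleep below are placeholders for the real model-loading/prediction steps:

import java.util.concurrent.*;

public class InterruptibleTaskDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newSingleThreadExecutor();
        Future<?> task = service.submit(() -> {
            // Long-running work split into chunks that check the interrupt flag.
            for (int i = 0; i < 1_000; i++) {
                if (Thread.currentThread().isInterrupted()) {
                    return; // stop promptly when cancel(true)/shutdownNow() interrupts us
                }
                try {
                    Thread.sleep(100); // blocking calls throw InterruptedException when interrupted
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore the flag and bail out
                    return;
                }
            }
        });

        try {
            task.get(2, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            task.cancel(true);     // interrupts the worker thread
            service.shutdownNow(); // interrupts anything else still running
        }
        service.shutdown();
    }
}

If the work is a single uninterruptible call into a third-party library, interruption will not help; in that case the task has to be run in a way that can be abandoned (for example a separate process) rather than cancelled.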

Related

How to make executerService.shutdown wait for nested threads invokation

So I have a function which looks like this:
ExecutorService executorService = Executors.newFixedThreadPool(2000);
Boolean getMore = true;
try {
    while (getMore) {
        JSONObject response = getPaginatedResponse();
        int[] ar = response.get("something");
        if (ar.length > 0) {
            // loop through the array and invoke executorService.submit() for each
        } else {
            getMore = false;
        }
    }
    executorService.shutdown();
    try {
        System.out.println("waiting for tasks to complete, termination starting at: " + java.time.LocalDateTime.now());
        executorService.awaitTermination(15, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        throw new Exception("loading was interrupted... thread pool timed out!");
    }
} catch (Exception e) {
    System.out.println("Fatal error");
}
My issue is that each of these tasks spawns x further tasks, each of which calls an API and processes its response. The implementation stops after all the "first-level" tasks have been fired, but not necessarily all the second-level ones, which is crucial for my program. How or where should I invoke executorService.shutdown() to make sure all of the tasks have completed?
You can put executorService.shutdown(); inside a finally block.
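A hedged sketch of that idea, combined with tracking the nested work explicitly. It assumes the second-level calls are submitted to their own pool and that each first-level task waits for its children before returning, so calling get on every first-level Future (plus the shutdowns in the finally block) covers everything; getPaginatedResponse is replaced here by placeholder loops, and the pool sizes and task bodies are assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class NestedTasksDemo {
    public static void main(String[] args) {
        ExecutorService firstLevel = Executors.newFixedThreadPool(8);
        ExecutorService secondLevel = Executors.newFixedThreadPool(32);
        List<Future<?>> firstLevelFutures = new ArrayList<>();
        try {
            for (int page = 0; page < 10; page++) {           // stands in for the pagination loop
                firstLevelFutures.add(firstLevel.submit(() -> {
                    // Submit the nested ("second-level") calls and wait for them here,
                    // so this first-level task only completes when its children have.
                    List<Future<?>> nested = new ArrayList<>();
                    for (int i = 0; i < 5; i++) {
                        nested.add(secondLevel.submit(() -> { /* call API, process response */ }));
                    }
                    for (Future<?> f : nested) {
                        try {
                            f.get();
                        } catch (InterruptedException | ExecutionException e) {
                            throw new RuntimeException(e);
                        }
                    }
                }));
            }
            for (Future<?> f : firstLevelFutures) {
                f.get();                                      // propagates failures, waits for everything
            }
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        } finally {
            firstLevel.shutdown();                            // shutdown in a finally block, as suggested
            secondLevel.shutdown();
        }
    }
}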

Calling ExecutorService.shutdownNow from CompletableFuture

I need to cancel all scheduled but not yet running CompletableFuture tasks when one of the already running tasks throws an exception.
I tried the following example, but most of the time the main method does not exit (probably due to some kind of deadlock).
public static void main(String[] args) {
    ExecutorService executionService = Executors.newFixedThreadPool(5);
    Set<CompletableFuture<?>> tasks = new HashSet<>();
    for (int i = 0; i < 1000; i++) {
        final int id = i;
        CompletableFuture<?> c = CompletableFuture
            .runAsync(() -> {
                System.out.println("Running: " + id);
                if (id == 400) throw new RuntimeException("Exception from: " + id);
            }, executionService)
            .whenComplete((v, ex) -> {
                if (ex != null) {
                    System.out.println("Shutting down.");
                    executionService.shutdownNow();
                    System.out.println("shutdown.");
                }
            });
        tasks.add(c);
    }
    try {
        CompletableFuture.allOf(tasks.stream().toArray(CompletableFuture[]::new)).join();
    } catch (Exception e) {
        System.out.println("Got async exception: " + e);
    } finally {
        System.out.println("DONE");
    }
}
Last printout is something like this:
Running: 402
Running: 400
Running: 408
Running: 407
Running: 406
Running: 405
Running: 411
Shutting down.
Running: 410
Running: 409
Running: 413
Running: 412
shutdown.
I tried running the shutdownNow method on a separate thread, but most of the time it still gives the same deadlock.
Any idea what might cause this deadlock?
And what do you think is the best way to cancel all scheduled but not yet running CompletableFutures when an exception is thrown?
I was thinking of iterating over the tasks and calling cancel on each CompletableFuture, but what I don't like about that is that it throws a CancellationException from join.
You should keep in mind that
CompletableFuture<?> f = CompletableFuture.runAsync(runnable, executionService);
is basically equivalent to
CompletableFuture<?> f = new CompletableFuture<>();
executionService.execute(() -> {
    if (!f.isDone()) {
        try {
            runnable.run();
            f.complete(null);
        } catch (Throwable t) {
            f.completeExceptionally(t);
        }
    }
});
So the ExecutorService doesn’t know anything about the CompletableFuture, therefore, it can’t cancel it in general. All it has, is some job, expressed as an implementation of Runnable.
In other words, shutdownNow() will prevent the execution of the pending jobs, thus, the remaining futures won’t get completed normally, but it will not cancel them. Then, you call join() on the future returned by allOf which will never return due to the never-completed futures.
But note that the scheduled job does check whether the future is already completed before doing anything expensive.
So, if you change your code to
ExecutorService executionService = Executors.newFixedThreadPool(5);
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
AtomicBoolean canceled = new AtomicBoolean();
for (int i = 0; i < 1000; i++) {
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            System.out.println("Running: " + id);
            if (id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if (ex != null && canceled.compareAndSet(false, true)) {
            System.out.println("Canceling.");
            for (CompletableFuture<?> f : tasks) f.cancel(false);
            System.out.println("Canceled.");
        }
    });
    tasks.add(c);
    if (canceled.get()) {
        c.cancel(false);
        break;
    }
}
try {
    CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
} catch (Exception e) {
    System.out.println("Got async exception: " + e);
} finally {
    System.out.println("DONE");
}
executionService.shutdown();
The runnables won’t get executed once their associated future has been canceled. Since there is a race between the cancelation and the ordinary execution, it might be helpful to change the action to
.runAsync(() -> {
    System.out.println("Running: " + id);
    if (id == 400) throw new RuntimeException("Exception from: " + id);
    LockSupport.parkNanos(1000);
}, executionService);
to simulate some actual workload. Then, you will see that fewer actions get executed after encountering the exception.
Since the asynchronous exception may even happen while the submitting loop is still running, the code uses an AtomicBoolean to detect this situation and stop the loop.
Note that for a CompletableFuture, there is no difference between cancelation and any other exceptional completion. Calling f.cancel(…) is equivalent to f.completeExceptionally(new CancellationException()). Therefore, since CompletableFuture.allOf reports any exception in the exceptional case, it will be very likely a CancellationException instead of the triggering exception.
If you replace the two cancel(false) calls with complete(null), you get a similar effect, the runnables won’t get executed for already completed futures, but allOf will report the original exception, as it is the only exception then. And it has another positive effect: completing with a null value is much cheaper than constructing a CancellationException (for every pending future), so the forced completion via complete(null) runs much faster, preventing more futures from executing.
Another solution that relies only on CompletableFuture is to use a “canceller” future that will cause all non-done tasks to be cancelled when completed:
Set<CompletableFuture<?>> tasks = ConcurrentHashMap.newKeySet();
CompletableFuture<Void> canceller = new CompletableFuture<>();
for (int i = 0; i < 1000; i++) {
    if (canceller.isDone()) {
        System.out.println("Canceller invoked, not creating other futures.");
        break;
    }
    //LockSupport.parkNanos(10);
    final int id = i;
    CompletableFuture<?> c = CompletableFuture
        .runAsync(() -> {
            //LockSupport.parkNanos(1000);
            System.out.println("Running: " + id);
            if (id == 400) throw new RuntimeException("Exception from: " + id);
        }, executionService);
    c.whenComplete((v, ex) -> {
        if (ex != null) {
            canceller.complete(null);
        }
    });
    tasks.add(c);
}
canceller.thenRun(() -> {
    System.out.println("Cancelling all tasks.");
    tasks.forEach(t -> t.cancel(false));
    System.out.println("Finished cancelling tasks.");
});

Using Future with ExecutorService

I need to execute two tasks in parallel and wait for them to complete. I also need the result from the second task; for that I am using a Future.
My question is: DO I need executor.awaitTermination to join the tasks, or will Future.get() take care of it? Also, is there a better way to achieve this with Java 8?
public class Test {
    public static void main(String[] args) {
        test();
        System.out.println("Exiting Main");
    }

    public static void test() {
        System.out.println("In Test");
        ExecutorService executor = Executors.newFixedThreadPool(2);
        executor.submit(() -> {
            for (int i = 0; i < 5; i++) {
                System.out.print("[" + i + "]");
                try {
                    Thread.sleep(1000);
                } catch (Exception e) { e.printStackTrace(); }
            }
        });
        Future<String> result = executor.submit(() -> {
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10; i++) {
                System.out.print("(" + i + ")");
                try {
                    Thread.sleep(1000);
                } catch (Exception e) { e.printStackTrace(); }
                builder.append(i);
            }
            return builder.toString();
        });
        System.out.println("shutdown");
        executor.shutdown();
        // DO I need this code : START
        System.out.println("awaitTermination");
        try {
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        } catch (InterruptedException e) {
            System.out.println("Error");
        }
        // DO I need this code : END
        System.out.println("Getting result");
        try {
            System.out.println(result.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        System.out.println("Exiting Test");
    }
}
OUTPUT with awaitTermination:
In Test
[0]shutdown
(0)awaitTermination
[1](1)[2](2)[3](3)[4](4)(5)(6)(7)(8)(9)Getting result
0123456789
Exiting Test
Exiting Main
OUTPUT without awaitTermination:
In Test
[0]shutdown
Getting result
(0)[1](1)[2](2)[3](3)[4](4)(5)(6)(7)(8)(9)0123456789
Exiting Test
Exiting Main
From the get javadoc:
Waits if necessary for the computation to complete, and then retrieves its result.
get will wait for the second task only.
From the awaitTermination javadoc:
Blocks until all tasks have completed execution after a shutdown request, or the timeout occurs, or the current thread is interrupted, whichever happens first.
awaitTermination will wait for all tasks.
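A minimal sketch of that distinction, assuming you also keep the Future of the first task: calling get() on both futures already guarantees that both tasks have finished, so awaitTermination is only needed if you additionally want to wait for the pool's worker threads to exit.

import java.util.concurrent.*;

public class TwoTasksDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        Future<?> first = executor.submit(() -> {
            System.out.println("first task done");   // side effects only
        });
        Future<String> second = executor.submit(() -> {
            return "0123456789";                      // the result we actually need
        });

        executor.shutdown();                          // no new tasks; running ones continue

        first.get();                                  // waits for task 1
        System.out.println(second.get());             // waits for task 2 and prints its result
        // Both tasks are now done, so awaitTermination would add nothing
        // beyond waiting for the worker threads themselves to terminate.
    }
}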
You should use the CompletableFuture API.
You can run a process asynchronously as follows:
CompletableFuture.supplyAsync( () -> { ... } );
It returns a future, and you can add a callback which will be called when the process is finished and the result is available.
For example:
CompletableFuture.supplyAsync(() -> {
    // Here compute your string
    return "something";
}).thenAccept(result -> {
    // Here do something with result (i.e. the computed string)
});
Note that this statement internally uses ForkJoinPool#commonPool() to execute the process asynchronously, but you can also call it with your own ExecutorService if you want. In both cases, in order to be sure not to exit before the tasks are complete, you need to either call get() (which is blocking) on each future of the submitted tasks, or wait for the executor to shut down.
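Applied to the question's two tasks, a hedged sketch with your own pool (the loop bodies are trimmed down): allOf(...).join() waits for both tasks, and second.join() then yields the computed string without any awaitTermination.

import java.util.concurrent.*;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        CompletableFuture<Void> first = CompletableFuture.runAsync(() -> {
            for (int i = 0; i < 5; i++) System.out.print("[" + i + "]");
        }, executor);

        CompletableFuture<String> second = CompletableFuture.supplyAsync(() -> {
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10; i++) builder.append(i);
            return builder.toString();
        }, executor);

        CompletableFuture.allOf(first, second).join(); // wait for both tasks
        System.out.println(second.join());             // result of the second task

        executor.shutdown();
    }
}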

Enforce executorService.awaitTermination while using CompletionService

I am trying to submit multiple tasks and obtain the results as and when they become available. However, after the end of the loop, I have to enforce that all the tasks complete within a specified amount of time, and if not, throw an error. Initially, all I had was the executorService's invokeAll, shutdown and awaitTermination calls, which were used to ensure that all tasks complete (regardless of errors). I migrated the code to use CompletionService to display the results. Where can I enforce the awaitTermination clause in the CompletionService calls?
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
logger.info("Submitting all tasks");
for (Callable<String> task : tasks)
    completionService.submit(task);
executor.shutdown();
logger.info("Tasks submitted. Now checking the status.");
while (!executor.isTerminated()) {
    final Future<String> future = completionService.take();
    String itemValue;
    try {
        itemValue = future.get();
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    } catch (InterruptedException | ExecutionException e) {
        String message = e.getCause().getMessage();
        String objName = "Bulk";
        if (message.contains("(") && message.contains(")"))
            objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
        logger.error("Failed retrieving the task status for " + objName, e);
    }
}
executor.awaitTermination(24, TimeUnit.HOURS);
In other words, how can I utilize a timeout with CompletionService?
EDIT:
The initial code I had is displayed below. The problem is that I am iterating through the future list and then printing the results as they complete. However, my requirement is to display the ones that have completed on an FCFS basis.
List<Future<String>> results = executor.invokeAll(tasks);
executor.shutdown();
executor.awaitTermination(24, TimeUnit.HOURS);
while (results.size() > 0) {
    for (Iterator<Future<String>> iterator = results.iterator(); iterator.hasNext();) {
        Future<String> item = iterator.next();
        if (item.isDone()) {
            String itemValue;
            try {
                itemValue = item.get();
                if (!itemValue.equals("Bulk"))
                    logger.info("Backup completed for " + itemValue);
            } catch (InterruptedException | ExecutionException e) {
                String message = e.getCause().getMessage();
                String objName = "Bulk";
                if (message.contains("(") && message.contains(")"))
                    objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
                logger.error("Failed retrieving the task status for " + objName, e);
            } finally {
                iterator.remove();
            }
        }
    }
}
I'd suggest you wait for the executor to terminate on another thread. That way you can serve results FCFS and also enforce the timeout.
It can be achieved with something like the following:
CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
// place all the work in a function (an anonymous Callable in this case)
// completionService.submit(() -> { work });
// as soon as the work is submitted it is handled by another Thread
completionService.submit(() -> {
    logger.info("Submitting all tasks");
    for (Callable<String> task : tasks)
        completionService.submit(task);
    logger.info("Tasks submitted. Now checking the status.");
    int counter = tasks.size();
    for (int i = counter; counter >= 1; counter--) { // replaced the while loop
        final Future<String> future = completionService.take();
        String itemValue;
        try {
            itemValue = future.get();
            if (!itemValue.equals("Bulk"))
                logger.info("Backup completed for " + itemValue);
        } catch (InterruptedException | ExecutionException e) {
            String message = e.getCause().getMessage();
            String objName = "Bulk";
            if (message.contains("(") && message.contains(")"))
                objName = message.substring(message.indexOf("(") + 1, message.indexOf(")"));
            logger.error("Failed retrieving the task status for " + objName, e);
        }
    }
    return null; // returning a value makes this lambda a Callable<String>
});
// After submitting the work to another Thread
// Wait in your Main Thread, and enforce termination if needed
shutdownAndAwaitTermination(executor);
You handle the executor's termination and waiting using this (adapted from the ExecutorService javadoc):
void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // Disable new tasks from being submitted
    try {
        // Wait a while for existing tasks to terminate
        if (!pool.awaitTermination(24, TimeUnit.HOURS)) {
            pool.shutdownNow(); // Cancel currently executing tasks
            // Wait a while for tasks to respond to being cancelled
            if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                System.err.println("Pool did not terminate");
        }
    } catch (InterruptedException ie) {
        // (Re-)Cancel if current thread also interrupted
        pool.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}
OK then, you need to monitor completion. So why not use ExecutorCompletionService as shown in the documentation? https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorCompletionService.html The example there submits n tasks to a new instance of ExecutorCompletionService and waits for n of them to complete. No termination is needed either; you can just reuse the same executor (usually a thread pool, since creating a new thread is more expensive than reusing one from a pool). If I adapt the code from the documentation to your scenario, it would be something like:
CompletionService<String> ecs = new ExecutorCompletionService<String>(executor);
for (Callable<String> task : tasks)
    ecs.submit(task);
logger.info("Tasks submitted. Now checking the status.");
int n = tasks.size();
for (int i = 0; i < n; ++i) {
    try {
        String r = ecs.take().get();
        logger.info("Backup completed for " + r);
    } catch (InterruptedException | ExecutionException e) {
        ...
    }
}
Also, it is a bad idea to parse the exception message; it is better to create your own exception class and use instanceof.
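For illustration, a hypothetical exception type that carries the object name, so the consumer can check the cause with instanceof instead of substringing the message:

// Hypothetical exception type; adapt the name and fields to your code base.
class BackupTaskException extends Exception {
    private final String objectName;

    BackupTaskException(String objectName, Throwable cause) {
        super("Backup failed for " + objectName, cause);
        this.objectName = objectName;
    }

    String getObjectName() {
        return objectName;
    }
}

The callables would throw it, and the catch block would test if (e.getCause() instanceof BackupTaskException) and read getObjectName() directly.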
If you need a timeout for the completion, use poll with time parameters instead of take.
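A hedged sketch of that poll-based variant with an overall deadline, reusing the variable names from the question (completionService, executor, tasks, logger) and wrapped in a method so the checked exceptions can propagate:

void drainWithDeadline(CompletionService<String> completionService, ExecutorService executor, int taskCount)
        throws InterruptedException, ExecutionException, TimeoutException {
    long deadline = System.nanoTime() + TimeUnit.HOURS.toNanos(24);
    for (int i = 0; i < taskCount; i++) {
        long remaining = deadline - System.nanoTime();
        Future<String> future = completionService.poll(remaining, TimeUnit.NANOSECONDS);
        if (future == null) {
            executor.shutdownNow(); // deadline reached: interrupt whatever is still running
            throw new TimeoutException("Backup tasks did not finish within 24 hours");
        }
        String itemValue = future.get(); // rethrows task failures as ExecutionException
        if (!itemValue.equals("Bulk"))
            logger.info("Backup completed for " + itemValue);
    }
}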

Future waiting for FixedThreadPool is returning before all Threads finish

I'm trying to wait for all my threads to finish before executing another task, using Future, but something is wrong because my future is only waiting for the last thread of my for loop.
My executor method:
public static Future<?> downloadImages(Executor e, MainViewController controller, String filePath, String dns, int port, int numImg,
        String offlineUuid, Map<String, String> cookies, String type, String outputFolder) throws SystemException, IOException, InterruptedException {
    String urlImages;
    String filePath2;
    Future future = null;
    if (numImg == 1) {
        // Some code
    } else {
        type = "multimages";
        ExecutorService es = Executors.newFixedThreadPool(numImg);
        for (int i = 0; i < numImg; i++) {
            filePath2 = "";
            filePath2 = filePath + File.separator + "TargetApp" + File.separator + "TempImage" + i + "Download.zip";
            urlImages = "http://" + dns + ":" + port + Constants.TARGET_SERVICE_DOWNLOADIMAGES_PATH + offlineUuid + "/?pos=" + (i);
            future = es.submit(new DownloaderAndUnzipTask(controller, urlImages, filePath2, outputFolder, cookies, type));
        }
        return future;
    }
    return null;
}
My waiting method:
Future future = fullDownloadSelected(tableViewFull.getSelectionModel().getSelectedIndex());
if (future != null) {
    try {
        future.get();
        if (future.isDone());
            System.out.println("Processamento de Imagens Acabou");
    } catch (ExecutionException ex) {
        Logger.getLogger(MainViewController.class.getName()).log(Level.SEVERE, null, ex);
    }
}
My message is shown when the last thread created in the first method finishes, but it should only be shown when all the threads in the pool have finished. I think something is wrong with how I submit tasks to my executor inside the for loop, but how can I fix it?
You need to capture every Future returned and then wait for each one to complete (using get on each).
You can, alternatively, do something like:
ExecutorService es = Executors.newFixedThreadPool(numImg);
List<Callable<Object>> tasks = ...
for (int i = 0; i < numImg; i++) {
    tasks.add(your task);
}
List<Future<Object>> futures = es.invokeAll(tasks);
which will only return once all the tasks within are complete.
You are reassigning the future in each iteration.
You can use invokeAll which returns when all submitted tasks are done.
You are just waiting for the last Future to finish.
future = es.submit(...);
...
return future;
...
// in waiting method, wait for the last job to finish
future.get();
This only waits for the last of the jobs submitted to the executor-service to finish -- other jobs can still be running. You should instead return the ExecutorService from the downloadImages(). Then in your waiting method you do:
// you must always shut the service down, no more jobs can be submitted
es.shutdown();
// waits for the service to complete forever
es.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
It may make more sense for you to create your ExecutorService in the calling method and pass it into the downloadImages().
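A hedged sketch of the first suggestion (keep every Future rather than only the last one); the parameters, Constants and DownloaderAndUnzipTask are the ones from the question, and the method's return type would change to List<Future<?>> so the waiting method iterates instead of calling get() once:

// In downloadImages(...): collect every Future instead of overwriting one variable.
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < numImg; i++) {
    String filePath2 = filePath + File.separator + "TargetApp" + File.separator + "TempImage" + i + "Download.zip";
    String urlImages = "http://" + dns + ":" + port + Constants.TARGET_SERVICE_DOWNLOADIMAGES_PATH + offlineUuid + "/?pos=" + i;
    futures.add(es.submit(new DownloaderAndUnzipTask(controller, urlImages, filePath2, outputFolder, cookies, type)));
}
es.shutdown();
return futures;

// In the waiting method: wait for all of them, not just the last.
for (Future<?> f : futures) {
    f.get(); // blocks until this download finishes and rethrows its failure, if any
}
System.out.println("Processamento de Imagens Acabou");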
