I have an async chain in my Java code that I want to stop after a certain timeout,
so I created a thread pool with some threads and called the CompletableFuture like this:
ExecutorService pool = Executors.newFixedThreadPool(10);
Then I have a cyclic method that loads data from the DB and executes some tasks on it; once all the CompletableFutures are completed, it does it again:
CompletableFuture<MyObject> futureTask =
CompletableFuture.supplyAsync(() -> candidate, pool)
.thenApply(Task1::doWork).thenApply(Task2::doWork).thenApply(Task3::doWork)
.thenApply(Task4::doWork).thenApply(Task5::doWork).orTimeout(30,TimeUnit.SECONDS)
.thenApply(Task6::doWork).orTimeout(30,TimeUnit.SECONDS)
.exceptionally(ExceptionHandlerService::handle);
My problem is in Task6, which is very intensive (it's a network connection task that sometimes hangs forever).
I noticed that my orTimeout is fired correctly after 30 seconds, but the thread running Task6 keeps running.
After a few cycles like this, all my threads are drained and my app dies.
How can I cancel the running threads in the pool after the timeout has been reached?
(without calling pool.shutdown())
UPDATE:
Inside the main thread I did a simple check, as shown here:
for (int i = TIME_OUT_SECONDS; i >= 0; i--) {
unfinishedTasks = handleFutureTasks(unfinishedTasks, totalBatchSize);
if(unfinishedTasks.isEmpty()) {
break;
}
if(i==0) {
//handle cancelation of the tasks
for(CompletableFuture<ComplianceCandidate> task: unfinishedTasks) {
task.cancel(true);
log.error("Reached timeout on task, is canceled: {}", task.isCancelled());
}
break;
}
try {
TimeUnit.SECONDS.sleep(1);
} catch (Exception ex) {
}
}
What I see is that after a few cycles, all the tasks complain about timeouts...
In the first 1-2 cycles, I still get the expected responses (while there are threads to process them).
I still feel that the thread pool is exhausted.
I know you said without calling pool.shutdown(), but there is simply no other way. When you look at your stages, though, they will run either in the thread that "appends" them (adding those thenApply calls) or in a thread from the pool that you define. Maybe an example will make this clearer.
public class SO64743332 {
static ExecutorService pool = Executors.newFixedThreadPool(10);
public static void main(String[] args) {
CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> dbCall(), pool);
//simulateWork(4);
CompletableFuture<String> f2 = f1.thenApply(x -> {
System.out.println(Thread.currentThread().getName());
return transformationOne(x);
});
CompletableFuture<String> f3 = f2.thenApply(x -> {
System.out.println(Thread.currentThread().getName());
return transformationTwo(x);
});
f3.join();
}
private static String dbCall() {
simulateWork(2);
return "a";
}
private static String transformationOne(String input) {
return input + "b";
}
private static String transformationTwo(String input) {
return input + "b";
}
private static void simulateWork(int seconds) {
try {
Thread.sleep(TimeUnit.SECONDS.toMillis(seconds));
} catch (InterruptedException e) {
System.out.println("Interrupted!");
e.printStackTrace();
}
}
}
The key point of the above code is the commented-out simulateWork(4);. Run the code with it commented out and then uncomment it. See which thread actually executes all those thenApply stages. It is either main or the same thread from the pool, meaning that although you have a pool defined, only a single thread from that pool will execute all those stages.
In this context, you could define a single-thread executor (inside a method, let's say) that will run all those stages. This way you can control when to call shutdownNow() and potentially interrupt the running task (if your code responds to interrupts). Here is a made-up example that simulates that:
public class SO64743332 {
public static void main(String[] args) {
execute();
}
public static void execute() {
ExecutorService pool = Executors.newSingleThreadExecutor();
CompletableFuture<String> cf1 = CompletableFuture.supplyAsync(() -> dbCall(), pool);
CompletableFuture<String> cf2 = cf1.thenApply(x -> transformationOne(x));
// give enough time for transformationOne to start, but not finish
simulateWork(2);
try {
CompletableFuture<String> cf3 = cf2.thenApply(x -> transformationTwo(x))
.orTimeout(4, TimeUnit.SECONDS);
cf3.get(10, TimeUnit.SECONDS);
} catch (ExecutionException | InterruptedException | TimeoutException e) {
pool.shutdownNow();
}
}
private static String dbCall() {
System.out.println("Started DB call");
simulateWork(1);
System.out.println("Done with DB call");
return "a";
}
private static String transformationOne(String input) {
System.out.println("Started work");
simulateWork(10);
System.out.println("Done work");
return input + "b";
}
private static String transformationTwo(String input) {
System.out.println("Started transformation two");
return input + "b";
}
private static void simulateWork(int seconds) {
try {
Thread.sleep(TimeUnit.SECONDS.toMillis(seconds));
} catch (InterruptedException e) {
System.out.println("Interrupted!");
e.printStackTrace();
}
}
}
Running this, you should notice that transformationOne starts but is interrupted because of the shutdownNow().
The drawback of this should be obvious: every invocation of execute() will create a new thread pool...
This is a continuation of an earlier post. As part of my task, I'm trying to download files from URLs using callables, and whenever an exception occurs I resubmit the same callable again, up to a maximum number of times.
The problem is that with the current approach my program doesn't terminate after finishing all of the callables in the happy-day scenario; it keeps running forever (maybe because I'm using non-daemon threads? Wouldn't it terminate after a given amount of time?).
Also, I believe that the current design prevents resubmitting failed callables, since I'm calling executor.shutdown(); whenever a callable fails, the executor will reject adding a new callable to the execution queue.
Any ideas how to get over this?
public class DownloadManager {
int allocatedMemory;
private final int MAX_FAILURES = 5;
private ExecutorService executor;
private CompletionService<Status> completionService;
private HashMap<String, Integer> failuresPerDownload;
private HashMap<Future<Status>, DownloadWorker> URLDownloadFuturevsDownloadWorker;
public DownloadManager() {
allocatedMemory = 0;
executor = Executors.newWorkStealingPool();
completionService = new ExecutorCompletionService<Status>(executor);
URLDownloadFuturevsDownloadWorker = new HashMap<Future<Status>, DownloadWorker>();
failuresPerDownload = new HashMap<String, Integer>();
}
public ArrayList<Status> downloadURLs(String[] urls, int memorySize) throws Exception {
validateURLs(urls);
for (String url : urls) {
failuresPerDownload.put(url, 0);
}
ArrayList<Status> allDownloadsStatus = new ArrayList<Status>();
allocatedMemory = memorySize / urls.length;
for (String url : urls) {
DownloadWorker URLDownloader = new DownloadWorker(url, allocatedMemory);
Future<Status> downloadStatusFuture = completionService.submit(URLDownloader);
URLDownloadFuturevsDownloadWorker.put(downloadStatusFuture, URLDownloader);
}
executor.shutdown();
Future<Status> downloadQueueHead = null;
while (!executor.isTerminated()) {
downloadQueueHead = completionService.take();
try {
Status downloadStatus = downloadQueueHead.get();
if (downloadStatus.downloadSucceeded()) {
allDownloadsStatus.add(downloadStatus);
System.out.println(downloadStatus);
} else {
handleDownloadFailure(allDownloadsStatus, downloadStatus.getUrl());
}
} catch (Exception e) {
String URL = URLDownloadFuturevsDownloadWorker.get(downloadQueueHead).getAssignedURL();
handleDownloadFailure(allDownloadsStatus, URL);
}
}
return allDownloadsStatus;
}
private void handleDownloadFailure(ArrayList<Status> allDownloadsStatus, String URL) {
int failuresPerURL = failuresPerDownload.get(URL);
failuresPerURL++;
if (failuresPerURL < MAX_FAILURES) {
failuresPerDownload.put(URL, failuresPerURL);
// resubmit the same job
DownloadWorker downloadJob = URLDownloadFuturevsDownloadWorker.get(URL);
completionService.submit(downloadJob);
} else {
Status failedDownloadStatus = new Status(URL, false);
allDownloadsStatus.add(failedDownloadStatus);
System.out.println(failedDownloadStatus);
}
}
}
Update: After I changed the while loop's condition to a counter instead of !executor.isTerminated(), it worked.
Why doesn't the executor terminate?
You need to call ExecutorService.shutdown() and awaitTermination() to terminate the threads after all your work is done.
Alternatively, you could provide your own ThreadFactory when constructing your ExecutorService and mark all your threads as daemon so that they won't keep your process alive once the main thread exits.
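For the daemon-thread alternative, here is a minimal sketch of such a ThreadFactory (made-up names; note that Executors.newWorkStealingPool() does not accept a ThreadFactory, so this assumes switching to, say, a fixed pool):
ExecutorService daemonPool = Executors.newFixedThreadPool(4, runnable -> {
    Thread t = new Thread(runnable);
    t.setDaemon(true); // daemon threads won't keep the JVM alive once main exits
    return t;
});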
In ExecutorCompletionService javadoc, we see examples
CompletionService<Result> ecs
= new ExecutorCompletionService<Result>(e);
List<Future<Result>> futures
= new ArrayList<Future<Result>>(n);
try {
...
} finally {
for (Future<Result> f : futures)
f.cancel(true);
}
So try to call cancel(true) on all your Futures when you need to stop the ExecutorCompletionService.
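Applied to the code in the question, that might look roughly like this (a sketch; it assumes you collect the Futures returned by completionService.submit, which the question already does as map keys):
List<Future<Status>> submitted = new ArrayList<>();
for (String url : urls) {
    submitted.add(completionService.submit(new DownloadWorker(url, allocatedMemory)));
}
try {
    // take()/get() the results as before
} finally {
    for (Future<Status> f : submitted) {
        f.cancel(true); // interrupts anything still running when you stop
    }
}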
I have a method that makes a connection to a server and, when the server fails, waits until it receives a message that the server is up again. However, this entire method should have a timeout, and if the time is exceeded the method should be interrupted and return an error log instead.
private Semaphore sem = new Semaphore(0);
private TimeUnit unit = TimeUnit.MILLISECONDS;
public String some_method(Object params, long timeout, TimeUnit unit) {
long time = 0;
while(time < timeout) { // not sure about timeout method
try {
//some task that is prone to ServerConnectException
return; // returns value and exits
} catch(ServerConnectException ex) {
sem.acquire();
} catch(InterruptedException uhoh) {
System.out.println("uhoh, thread interrupted");
}
// increment time somehow
}
sem.release();
return null; // a message of task incompletion
}
I was thinking about running a thread containing a semaphore that blocks the thread if there's a server failure, but I cannot seem to organize the thread so that it contains the semaphore while being contained by the method itself.
QUESTION:
- However, the method is already in a gigantic class, and making a separate Thread for just that method would mess up the entire call hierarchy as well as the whole API, so I don't want to do that. I need some process that runs alongside some_method and acquires and releases locks on its processes as needed, with a timeout. What should I be thinking of? Some other concurrency wrapper, like an executor?
Thanks!
Semaphore doesn't seem to be the right concurrency primitive to use here, as you don't really need a utility for locking, but rather a utility to help you coordinate inter-thread communication.
If you need to communicate a stream of values, you would typically use a blocking queue, but if you need to communicate a single value, a CountDownLatch and a variable do the trick. For example (untested):
public String requestWithRetry(final Object params, long timeout, TimeUnit unit) throws InterruptedException {
String[] result = new String[1];
CountDownLatch latch = new CountDownLatch(1);
Thread t = new Thread(new Runnable() {
public void run() {
while (true) {
try {
result[0] = request(params);
latch.countDown();
return;
}
catch(OtherException oe) {
// ignore and retry
}
catch(InterruptedException ie) {
// task was cancelled; terminate thread
return;
}
}
}
});
t.start();
try {
if (!latch.await(timeout, unit)) {
t.interrupt(); // cancel the background task if timed out
}
// note that this returns null if timed out
return result[0];
}
catch(InterruptedException ie) {
t.interrupt(); // cancel the background task
throw ie;
}
}
private String request(Object params) throws OtherException, InterruptedException {
// should handle interruption to cancel this operation
return null;
}
While I was exploring ExecutorService, I encountered a method Future.get() which accepts the timeout.
The Java doc of this method says
Waits if necessary for at most the given time for the computation to complete, and then retrieves its result, if available.
Parameters:
timeout the maximum time to wait
unit the time unit of the timeout argument
As per my understanding, we are imposing a timeout on the callable we submit to the ExecutorService, so that my callable will be interrupted after the specified time (the timeout) has passed.
But as per the code below, longMethod() seems to keep running beyond the timeout (2 seconds), and I am really confused by this. Can anyone please point me to the right path?
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class Timeout implements Callable<String> {
public void longMethod() {
for(int i=0; i< Integer.MAX_VALUE; i++) {
System.out.println("a");
}
}
@Override
public String call() throws Exception {
longMethod();
return "done";
}
/**
* @param args
*/
public static void main(String[] args) {
ExecutorService service = Executors.newSingleThreadExecutor();
try {
service.submit(new Timeout()).get(2, TimeUnit.SECONDS);
} catch (Exception e) {
e.printStackTrace();
}
}
}
my callable will interrupt after the specified time(timeout) has passed
Not true. The task will continue to execute; instead, get() throws a TimeoutException after the timeout (leaving your result null if you catch it, as in the code below).
If you want to cancel it:
future.cancel(true); // where Future<String> future = service.submit(new Timeout());
P.S. As you have it right now, this interrupt will have no effect whatsoever: you are not checking the interrupt flag in any way.
For example this code takes into account interrupts:
private static final class MyCallable implements Callable<String>{
@Override
public String call() throws Exception {
StringBuilder builder = new StringBuilder();
try{
for(int i=0;i<Integer.MAX_VALUE;++i){
builder.append("a");
Thread.sleep(100);
}
}catch(InterruptedException e){
System.out.println("Thread was interrupted");
}
return builder.toString();
}
}
And then:
ExecutorService service = Executors.newFixedThreadPool(1);
MyCallable myCallable = new MyCallable();
Future<String> futureResult = service.submit(myCallable);
String result = null;
try{
result = futureResult.get(1000, TimeUnit.MILLISECONDS);
}catch(TimeoutException e){
System.out.println("No response after one second");
futureResult.cancel(true);
}
service.shutdown();
The timeout on get() is for how long the 'client' will wait for the Future to complete. It does not have an impact on the future's execution.
Object result = null;
int seconds = 0;
// get(timeout, unit) throws TimeoutException when the result is not ready in time
// (InterruptedException/ExecutionException handling omitted for brevity)
while (result == null) {
    try {
        result = fut.get(1, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        seconds++;
        System.out.println("Waited " + seconds + " seconds for future");
    }
}
my callable will interrupt after the specified time(timeout) has passed
The above statement is wrong. Future.get() normally blocks until the result is available; specifying the timeout only bounds how long you wait, it does not stop the task.
This is useful, for instance, in time-critical applications: if you need a result within, say, 2 seconds, receiving it later means you can't do anything useful with it.
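For example, a bounded wait with a fallback might look like this (a sketch; it assumes a Future<String> future returned by submit(), and omits InterruptedException/ExecutionException handling for brevity):
String result;
try {
    result = future.get(2, TimeUnit.SECONDS); // wait at most 2 seconds
} catch (TimeoutException e) {
    result = "fallback";   // the real answer arrived too late to be useful
    future.cancel(true);   // optionally try to stop the task as well
}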
I need to execute some amount of tasks 4 at a time, something like this:
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
//...wait for completion somehow
How can I get notified once all of them are complete? For now I can't think of anything better than setting some global task counter and decrementing it at the end of every task, then monitoring this counter in an infinite loop until it becomes 0; or getting a list of Futures and monitoring isDone for all of them in an infinite loop. What are better solutions that don't involve infinite loops?
Thanks.
Basically on an ExecutorService you call shutdown() and then awaitTermination():
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
taskExecutor.shutdown();
try {
taskExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
...
}
Use a CountDownLatch:
CountDownLatch latch = new CountDownLatch(totalNumberOfTasks);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
try {
latch.await();
} catch (InterruptedException E) {
// handle
}
and within your task (enclose in try / finally)
latch.countDown();
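For illustration, a minimal sketch of such a task (made-up constructor), with countDown() in a finally block so the latch is decremented even if the task throws:
class MyTask implements Runnable {
    private final CountDownLatch latch;
    MyTask(CountDownLatch latch) { this.latch = latch; }
    public void run() {
        try {
            // do the actual work here
        } finally {
            latch.countDown(); // always decrement, even on failure
        }
    }
}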
ExecutorService.invokeAll() does it for you.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
List<Callable<Object>> tasks; // your tasks
// invokeAll() returns when all tasks are complete
List<Future<Object>> futures = taskExecutor.invokeAll(tasks);
You can use Lists of Futures, as well:
List<Future> futures = new ArrayList<Future>();
// now add to it:
futures.add(executorInstance.submit(new Callable<Void>() {
public Void call() throws IOException {
// do something
return null;
}
}));
Then when you want to join on all of them, it's essentially the equivalent of joining on each (with the added benefit that it re-raises exceptions from child threads in the main thread):
for (Future f : futures) { f.get(); }
Basically the trick is to call .get() on each Future one at a time, instead of infinitely looping and calling isDone() on all (or each) of them. So you're guaranteed to "move on" through and past this block as soon as the last thread finishes. One caveat is that, since the .get() call re-raises exceptions, if one of the threads dies you may raise from this block before the other threads have finished (to avoid this, you could add a catch for ExecutionException around the get call). The other caveat is that it keeps a reference to all threads, so if they have thread-local variables those won't get collected until after you get past this block (though you might be able to get around this, if it became a problem, by removing Futures from the ArrayList). If you wanted to know which Future "finishes first" you could use something like https://stackoverflow.com/a/31885029/32453
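A sketch of the loop with that suggested catch added (not from the original answer):
for (Future f : futures) {
    try {
        f.get();                              // blocks until this particular future completes
    } catch (ExecutionException e) {
        // one task failed; log it and keep waiting for the rest
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();   // restore interrupt status and stop waiting
        break;
    }
}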
In Java8 you can do it with CompletableFuture:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Just my two cents.
To overcome CountDownLatch's requirement of knowing the number of tasks beforehand, you could do it the old-fashioned way by using a simple Semaphore.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
int numberOfTasks=0;
Semaphore s=new Semaphore(0);
while(...) {
taskExecutor.execute(new MyTask());
numberOfTasks++;
}
try {
s.acquire(numberOfTasks);
...
In your task, just call s.release() as you would latch.countDown().
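A minimal sketch of what a task might look like under that scheme, releasing the permit in a finally block:
taskExecutor.execute(() -> {
    try {
        // do the actual work here
    } finally {
        s.release(); // the counterpart of latch.countDown()
    }
});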
A bit late to the game but for the sake of completion...
Instead of 'waiting' for all tasks to finish, you can think in terms of the Hollywood principle, "don't call me, I'll call you" - when I'm finished.
I think the resulting code is more elegant...
Guava offers some interesting tools to accomplish this.
An example:
Wrap an ExecutorService into a ListeningExecutorService:
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));
Submit a collection of callables for execution:
for (Callable<Integer> callable : callables) {
ListenableFuture<Integer> lf = service.submit(callable);
// listenableFutures is a collection
listenableFutures.add(lf);
}
Now the essential part:
ListenableFuture<List<Integer>> lf = Futures.successfulAsList(listenableFutures);
Attach a callback to the ListenableFuture, that you can use to be notified when all futures complete:
Futures.addCallback(lf, new FutureCallback<List<Integer>> () {
@Override
public void onSuccess(List<Integer> result) {
// do something with all the results
}
@Override
public void onFailure(Throwable t) {
// log failure
}
});
This also offers the advantage that you can collect all the results in one place once the processing is finished...
The CyclicBarrier class in Java 5 and later is designed for this sort of thing.
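A minimal sketch of how that could look (made-up names; the main thread registers as an extra party so it can also wait at the barrier; the final await() throws InterruptedException/BrokenBarrierException):
int taskCount = 4;
ExecutorService pool = Executors.newFixedThreadPool(taskCount);
CyclicBarrier barrier = new CyclicBarrier(taskCount + 1); // +1 for the main thread
for (int i = 0; i < taskCount; i++) {
    pool.execute(() -> {
        System.out.println("working in " + Thread.currentThread().getName());
        try {
            barrier.await(); // signal that this task has finished
        } catch (InterruptedException | BrokenBarrierException e) {
            Thread.currentThread().interrupt();
        }
    });
}
barrier.await(); // blocks until all tasks have reached the barrier
pool.shutdown();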
Here are two options; I'm just a bit confused about which one is the best to go with.
Option 1:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Option 2:
ExecutorService es = Executors.newFixedThreadPool(4);
List< Future<?>> futures = new ArrayList<>();
for(Runnable task : taskList) {
futures.add(es.submit(task));
}
for(Future<?> future : futures) {
try {
future.get();
}catch(Exception e){
// do logging and nothing else
}
}
es.shutdown();
Here, putting future.get() in a try/catch is a good idea, right?
Follow one of the below approaches.
Iterate through all the Future tasks returned from submit on ExecutorService and check the status with a blocking call to get() on the Future object, as suggested by Kiran
Use invokeAll() on ExecutorService
CountDownLatch
ForkJoinPool or Executors.newWorkStealingPool()
Use the shutdown, awaitTermination, and shutdownNow APIs of ThreadPoolExecutor in the proper sequence
Related SE questions:
How is CountDownLatch used in Java Multithreading?
How to properly shutdown java ExecutorService
You could wrap your tasks in another runnable that will send notifications:
taskExecutor.execute(new Runnable() {
public void run() {
taskStartedNotification();
new MyTask().run();
taskFinishedNotification();
}
});
Clean way with ExecutorService
List<Future<Void>> results = null;
try {
List<Callable<Void>> tasks = new ArrayList<>();
ExecutorService executorService = Executors.newFixedThreadPool(4);
results = executorService.invokeAll(tasks);
} catch (InterruptedException ex) {
...
} catch (Exception ex) {
...
}
I've just written a sample program that solves your problem. There was no concise implementation given, so I'll add one. While you can use executor.shutdown() and executor.awaitTermination(), it is not the best practice as the time taken by different threads would be unpredictable.
ExecutorService es = Executors.newCachedThreadPool();
List<Callable<Integer>> tasks = new ArrayList<>();
for (int j = 1; j <= 10; j++) {
tasks.add(new Callable<Integer>() {
@Override
public Integer call() throws Exception {
int sum = 0;
System.out.println("Starting Thread "
+ Thread.currentThread().getId());
for (int i = 0; i < 1000000; i++) {
sum += i;
}
System.out.println("Stopping Thread "
+ Thread.currentThread().getId());
return sum;
}
});
}
try {
List<Future<Integer>> futures = es.invokeAll(tasks);
int flag = 0;
for (Future<Integer> f : futures) {
Integer res = f.get();
System.out.println("Sum: " + res);
if (!f.isDone())
flag = 1;
}
if (flag == 0)
System.out.println("SUCCESS");
else
System.out.println("FAILED");
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
Just to provide more alternatives here, different from using latches/barriers:
you can also get partial results as they finish by using a CompletionService.
From Java Concurrency in practice:
"If you have a batch of computations to submit to an Executor and you want to retrieve their results as they become
available, you could retain the Future associated with each task and repeatedly poll for completion by calling get with a
timeout of zero. This is possible, but tedious. Fortunately there is a better way: a completion service."
Here is the implementation:
public class TaskSubmiter {
private final ExecutorService executor;
TaskSubmiter(ExecutorService executor) { this.executor = executor; }
void doSomethingLarge(AnySourceClass source) {
final List<InterestedResult> info = doPartialAsyncProcess(source);
CompletionService<PartialResult> completionService = new ExecutorCompletionService<PartialResult>(executor);
for (final InterestedResult interestedResultItem : info)
completionService.submit(new Callable<PartialResult>() {
public PartialResult call() {
return interestedResultItem.doAnOperationToGetPartialResult();
}
});
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<PartialResult> f = completionService.take();
PartialResult partialResult = f.get();
processThisSegment(partialResult);
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (ExecutionException e) {
throw somethingThrowable(e.getCause()); // stands in for JCiP's launderThrowable(e.getCause())
}
}
}
This is my solution, based on AdamSkywalker's tip, and it works:
package frss.main;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class TestHilos {
void procesar() {
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream().map(task -> CompletableFuture.runAsync(task, es)).toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
System.out.println("FIN DEL PROCESO DE HILOS");
}
private List<Runnable> getTasks() {
List<Runnable> tasks = new ArrayList<Runnable>();
Hilo01 task1 = new Hilo01();
tasks.add(task1);
Hilo02 task2 = new Hilo02();
tasks.add(task2);
return tasks;
}
private class Hilo01 extends Thread {
@Override
public void run() {
System.out.println("HILO 1");
}
}
private class Hilo02 extends Thread {
@Override
public void run() {
try {
sleep(2000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("HILO 2");
}
}
public static void main(String[] args) {
TestHilos test = new TestHilos();
test.procesar();
}
}
You could use this code:
public class MyTask implements Runnable {
private CountDownLatch countDownLatch;
public MyTask(CountDownLatch countDownLatch) {
this.countDownLatch = countDownLatch;
}
@Override
public void run() {
try {
// do something that may throw InterruptedException
this.countDownLatch.countDown();//important
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
}
CountDownLatch countDownLatch = new CountDownLatch(NUMBER_OF_TASKS);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
for (int i = 0; i < NUMBER_OF_TASKS; i++){
taskExecutor.execute(new MyTask(countDownLatch));
}
countDownLatch.await();
System.out.println("Finish tasks");
So I'll post my answer from the linked question here, in case someone wants a simpler way to do this:
ExecutorService executor = Executors.newFixedThreadPool(10);
CompletableFuture[] futures = new CompletableFuture[10];
int i = 0;
while (...) {
futures[i++] = CompletableFuture.runAsync(runner, executor);
}
CompletableFuture.allOf(futures).join(); // This will wait until all futures are ready.
I created the following working example. The idea is to have a way to process a pool of tasks (I am using a queue as an example) with many Threads (determined programmatically by numberOfTasks/threshold), and wait until all Threads have completed to continue with some other processing.
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/** Testing CountDownLatch and ExecutorService to manage scenario where
* multiple Threads work together to complete tasks from a single
* resource provider, so the processing can be faster. */
public class ThreadCountDown {
private CountDownLatch threadsCountdown = null;
private static Queue<Integer> tasks = new PriorityQueue<>();
public static void main(String[] args) {
// Create a queue with "Tasks"
int numberOfTasks = 2000;
while(numberOfTasks-- > 0) {
tasks.add(numberOfTasks);
}
// Initiate Processing of Tasks
ThreadCountDown main = new ThreadCountDown();
main.process(tasks);
}
/* Receiving the Tasks to process, and creating multiple Threads
* to process in parallel. */
private void process(Queue<Integer> tasks) {
int numberOfThreads = getNumberOfThreadsRequired(tasks.size());
threadsCountdown = new CountDownLatch(numberOfThreads);
ExecutorService threadExecutor = Executors.newFixedThreadPool(numberOfThreads);
//Initialize each Thread
while(numberOfThreads-- > 0) {
System.out.println("Initializing Thread: "+numberOfThreads);
threadExecutor.execute(new MyThread("Thread "+numberOfThreads));
}
try {
//Shutdown the Executor, so it cannot receive more Threads.
threadExecutor.shutdown();
threadsCountdown.await();
System.out.println("ALL THREADS COMPLETED!");
//continue With Some Other Process Here
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
/* Determine the number of Threads to create */
private int getNumberOfThreadsRequired(int size) {
int threshold = 100;
int threads = size / threshold;
if( size > (threads*threshold) ){
threads++;
}
return threads;
}
/* Task Provider. All Threads will get their task from here */
private synchronized static Integer getTask(){
return tasks.poll();
}
/* The Threads will get Tasks and process them, while still available.
* When no more tasks available, the thread will complete and reduce the threadsCountdown */
private class MyThread implements Runnable {
private String threadName;
protected MyThread(String threadName) {
super();
this.threadName = threadName;
}
@Override
public void run() {
Integer task;
try{
//Check in the Task pool if anything pending to process
while( (task = getTask()) != null ){
processTask(task);
}
}catch (Exception ex){
ex.printStackTrace();
}finally {
/*Reduce count when no more tasks to process. Eventually all
Threads will end-up here, reducing the count to 0, allowing
the flow to continue after threadsCountdown.await(); */
threadsCountdown.countDown();
}
}
private void processTask(Integer task){
try{
System.out.println(this.threadName+" is Working on Task: "+ task);
}catch (Exception ex){
ex.printStackTrace();
}
}
}
}
Hope it helps!
You could use your own subclass of ExecutorCompletionService to wrap taskExecutor, and your own implementation of BlockingQueue to get informed when each task completes and perform whatever callback or other action you desire when the number of completed tasks reaches your desired goal.
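A rough sketch of that idea (hypothetical names; it assumes the completion service hands each finished Future to the completion queue you pass to its constructor, which is what ExecutorCompletionService does):
// a completion queue that trips a latch once the expected number of tasks has finished
class NotifyingCompletionQueue<V> extends LinkedBlockingQueue<Future<V>> {
    private final CountDownLatch allDone;
    NotifyingCompletionQueue(int expectedTasks) {
        this.allDone = new CountDownLatch(expectedTasks);
    }
    @Override
    public boolean add(Future<V> completedFuture) {
        boolean added = super.add(completedFuture); // the completion service adds finished futures here
        allDone.countDown();                        // per-task completion callback point
        return added;
    }
    void awaitAll() throws InterruptedException {
        allDone.await(); // blocks until the expected number of tasks has completed
    }
}
Usage sketch:
NotifyingCompletionQueue<Status> queue = new NotifyingCompletionQueue<>(numberOfTasks);
CompletionService<Status> service = new ExecutorCompletionService<>(taskExecutor, queue);
// submit numberOfTasks callables via service.submit(...), then:
queue.awaitAll();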
You should use the executorService.shutdown() and executorService.awaitTermination() methods.
An example follows:
public class ScheduledThreadPoolExample {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
executorService.scheduleAtFixedRate(() -> System.out.println("process task."),
0, 1, TimeUnit.SECONDS);
TimeUnit.SECONDS.sleep(10);
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.DAYS);
}
}
If you use multiple ExecutorServices SEQUENTIALLY and want to wait for EACH ExecutorService to finish, the best way is like below:
ExecutorService executer1 = Executors.newFixedThreadPool(THREAD_SIZE1);
for (<loop>) {
executer1.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer1.shutdown();
try{
executer1.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
ExecutorService executer2 = Executors.newFixedThreadPool(THREAD_SIZE2);
for (<loop>) {
executer2.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer2.shutdown();
} catch (Exception e){
...
}
Try-with-Resources syntax on AutoCloseable executor service with Project Loom
Project Loom seeks to add new features to the concurrency abilities in Java.
One of those features is making the ExecutorService AutoCloseable. This means every ExecutorService implementation will offer a close method. And it means we can use try-with-resources syntax to automatically close an ExecutorService object.
The ExecutorService#close method blocks until all submitted tasks are completed. Using close takes the place of calling shutdown & awaitTermination.
Being AutoCloseable contributes to Project Loom’s attempt to bring “structured concurrency” to Java.
try (
ExecutorService executorService = Executors.… ;
) {
// Submit your `Runnable`/`Callable` tasks to the executor service.
…
}
// At this point, flow-of-control blocks until all submitted tasks are done/canceled/failed.
// After this point, the executor service will have been automatically shut down, via the `close` method called by try-with-resources syntax.
For more information on Project Loom, search for talks and interviews given by Ron Pressler and others on the Project Loom team. Focus on the more recent, as Project Loom has evolved.
Experimental builds of Project Loom technology are available now, based on early-access Java 18.
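A more concrete sketch (made-up tasks; it assumes a build where ExecutorService implements AutoCloseable, per Project Loom):
try (ExecutorService executorService = Executors.newFixedThreadPool(4)) {
    for (int i = 0; i < 10; i++) {
        int taskId = i;
        executorService.submit(() -> System.out.println("task " + taskId + " done"));
    }
} // close() blocks here until the submitted tasks finish, then shuts the pool down
System.out.println("all tasks completed");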
Java 8 - we can use the Stream API to process the tasks. Please see the snippet below:
final List<Runnable> tasks = ...; //or any other functional interface
tasks.stream().parallel().forEach(Runnable::run); // uses the default (common) pool
//alternatively to specify parallelism
new ForkJoinPool(15).submit(
() -> tasks.stream().parallel().forEach(Runnable::run)
).get();
ExecutorService WORKER_THREAD_POOL
= Executors.newFixedThreadPool(10);
CountDownLatch latch = new CountDownLatch(2);
for (int i = 0; i < 2; i++) {
WORKER_THREAD_POOL.submit(() -> {
try {
// doSomething();
latch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
// wait for the latch to be decremented by the two remaining threads
latch.await();
If doSomething() throws some other exception, latch.countDown() will seemingly never execute, so what should I do?
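One common fix (a sketch, not from the original post) is to move latch.countDown() into a finally block so it runs no matter what doSomething() throws:
WORKER_THREAD_POOL.submit(() -> {
    try {
        doSomething();
    } catch (Exception e) {
        // log/handle the failure
    } finally {
        latch.countDown(); // always runs, so await() cannot hang on a failed task
    }
});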
This might help
Log.i(LOG_TAG, "shutting down executor...");
executor.shutdown();
while (true) {
try {
Log.i(LOG_TAG, "Waiting for executor to terminate...");
if (executor.isTerminated())
break;
if (executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
break;
}
} catch (InterruptedException ignored) {}
}
You could call waitTillDone() on this Runner class:
Runner runner = Runner.runner(4); // create pool with 4 threads in thread pool
while(...) {
runner.run(new MyTask()); // here you submit your task
}
runner.waitTillDone(); // and this blocks until all tasks are finished (or failed)
runner.shutdown(); // once you done you can shutdown the runner
You can reuse this class and call waitTillDone() as many times as you want before calling shutdown(), plus your code is extremely simple. Also, you don't have to know the number of tasks upfront.
To use it, just add this Gradle/Maven dependency to your project: compile 'com.github.matejtymes:javafixes:1.3.1'
More details can be found here:
https://github.com/MatejTymes/JavaFixes
There is a method on ThreadPoolExecutor, getActiveCount(), that gives the count of actively executing threads.
After spawning the threads, we can check whether the getActiveCount() value is 0. Once the value is zero, there are no active threads currently running, which means the task is finished:
while (true) {
if (executor.getActiveCount() == 0) {
// your own piece of code
break;
}
}