I have a problem with concurrent programming in Java. I am working on my bachelor thesis and I have to write several methods that return a String value. Inside the Futures TriggerMessageFuture and getMeterValueFuture a process is running which takes between 1 and 5 seconds and returns a String value when it's finished.
The problem is that future.get() blocks my main thread. I want to call the TriggerMessage and getMeterValue methods in my main without blocking the main thread, and get their answers as return values when they are finished. I wasn't able to find a way to solve this, because every solution I found either had no return value or blocked the thread.
private String TriggerMessage(String Messagetyp) throws InterruptedException, ExecutionException{
Future<String> future = new communicator().TriggerMessageFuture(queue,centralSystem,Messagetyp);
while(!future.isDone()) {
System.out.println("[TriggerMessage]: Calculating... ");
Thread.sleep(500);
}
String result = future.get(); //blocking
return result;
}
private String getMeterValue(String key) throws Exception{
Future<String> future = new communicator().getMeterValueFuture(queue,centralSystem,key);
while(!future.isDone()) {
System.out.println("[getMeterValue]: Calculating...");
Thread.sleep(500);
}
String result = future.get(); //blocking
return result;
}
It depends on which main thread you are referring to, plus whether you can use CompletableFutures instead of plain old Java Futures.
Using the main(String[] args) thread
It's not possible to do it without any form of blocking. If you are not blocking on get, you'll have to block on a BlockingQueue implementation, otherwise the main thread just ends.
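For example, a minimal sketch of handing the result back to the waiting main thread through a queue (the executor and someCalculation() here are placeholders, not names from your code):

BlockingQueue<String> results = new ArrayBlockingQueue<>(1);
executor.submit(() -> {
    String value = someCalculation(); // the 1-5 second work
    results.put(value);               // hand the result over
    return null;
});
// main blocks here instead of on future.get()
String result = results.take();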
Using the Swing Event Dispatch thread
You'd need to submit a continuation task, which is not possible from the outside with a plain Future (all you can do is block on get). So either you include this submission inside the task the Future has been created for, or you switch to CompletableFuture:
ExecutorService exec = ...
Future<?> future = exec.submit(() -> {
var value = someCalculation();
SwingUtilities.invokeLater(() -> {
useValueOnEDT(value);
});
});
or
CompletableFuture<ValueType> cf = ...
cf.whenComplete((value, error) -> {
SwingUtilities.invokeLater(() -> {
if (error != null) {
handleErrorOnEdt(error);
} else {
useValueOnEDT(value);
}
});
});
Android Main Thread
The idea is the same as with Swing, but you'll have to use a Handler
// given value
new Handler(Looper.getMainLooper()).post(() -> {
useValueOnMainLooper(value);
});
You can wrap the Future into a CompletableFuture like so
static <T> CompletableFuture<T> from(Future<T> future) {
var delegate = new CompletableFuture<T>();
CompletableFuture.runAsync(() -> {
try {
delegate.complete(future.get());
} catch (Throwable e) {
delegate.completeExceptionally(e);
}
});
return delegate;
}
And then use that CompletableFuture to asynchronously handle the completion via its various then... and when... methods.
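For example, reusing the names from the question, the adapted future can then be consumed without blocking the calling thread (a sketch, not a drop-in replacement for your methods):

Future<String> future = new communicator().TriggerMessageFuture(queue, centralSystem, Messagetyp);
from(future)
    .thenAccept(result -> System.out.println("[TriggerMessage]: " + result))
    .exceptionally(error -> { error.printStackTrace(); return null; });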
This is a continuation of an earlier post. As part of my task I'm trying to download files from URLs using callables, and whenever an exception occurs I'm trying to resubmit the same callable again, up to a maximum number of times.
The problem is that with the current approach my program doesn't terminate after finishing all of the callables in a happy-day scenario; it keeps running forever (maybe because I'm using non-daemon threads? Wouldn't it terminate after a given amount of time?).
Also, I believe the current design will prevent resubmitting the failed callables, since I'm calling executor.shutdown(); whenever a callable fails, the executor will reject adding a new callable to the execution queue.
Any ideas how to get over this?
public class DownloadManager {
int allocatedMemory;
private final int MAX_FAILURES = 5;
private ExecutorService executor;
private CompletionService<Status> completionService;
private HashMap<String, Integer> failuresPerDownload;
private HashMap<Future<Status>, DownloadWorker> URLDownloadFuturevsDownloadWorker;
public DownloadManager() {
allocatedMemory = 0;
executor = Executors.newWorkStealingPool();
completionService = new ExecutorCompletionService<Status>(executor);
URLDownloadFuturevsDownloadWorker = new HashMap<Future<Status>, DownloadWorker>();
failuresPerDownload = new HashMap<String, Integer>();
}
public ArrayList<Status> downloadURLs(String[] urls, int memorySize) throws Exception {
validateURLs(urls);
for (String url : urls) {
failuresPerDownload.put(url, 0);
}
ArrayList<Status> allDownloadsStatus = new ArrayList<Status>();
allocatedMemory = memorySize / urls.length;
for (String url : urls) {
DownloadWorker URLDownloader = new DownloadWorker(url, allocatedMemory);
Future<Status> downloadStatusFuture = completionService.submit(URLDownloader);
URLDownloadFuturevsDownloadWorker.put(downloadStatusFuture, URLDownloader);
}
executor.shutdown();
Future<Status> downloadQueueHead = null;
while (!executor.isTerminated()) {
downloadQueueHead = completionService.take();
try {
Status downloadStatus = downloadQueueHead.get();
if (downloadStatus.downloadSucceeded()) {
allDownloadsStatus.add(downloadStatus);
System.out.println(downloadStatus);
} else {
handleDownloadFailure(allDownloadsStatus, downloadStatus.getUrl());
}
} catch (Exception e) {
String URL = URLDownloadFuturevsDownloadWorker.get(downloadQueueHead).getAssignedURL();
handleDownloadFailure(allDownloadsStatus, URL);
}
}
return allDownloadsStatus;
}
private void handleDownloadFailure(ArrayList<Status> allDownloadsStatus, String URL) {
int failuresPerURL = failuresPerDownload.get(URL);
failuresPerURL++;
if (failuresPerURL < MAX_FAILURES) {
failuresPerDownload.put(URL, failuresPerURL);
// resubmit the same job
DownloadWorker downloadJob = URLDownloadFuturevsDownloadWorker.get(URL);
completionService.submit(downloadJob);
} else {
Status failedDownloadStatus = new Status(URL, false);
allDownloadsStatus.add(failedDownloadStatus);
System.out.println(failedDownloadStatus);
}
}
}
Update: after I changed the while loop's condition to a counter instead of !executor.isTerminated(), it worked.
Why doesn't the executor terminate?
You need to call ExecutorService.shutdown() and awaitTermination() to terminate the threads after all your work is done.
Alternatively, you could provide your own ThreadFactory when constructing your ExecutorService and mark all your threads as daemon so that they won't keep your process alive once the main thread exits.
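For example, a minimal sketch of the daemon-thread approach (note this assumes a pool factory that accepts a ThreadFactory, e.g. newFixedThreadPool; newWorkStealingPool does not take one):

ThreadFactory daemonFactory = runnable -> {
    Thread t = new Thread(runnable);
    t.setDaemon(true); // daemon threads won't keep the JVM alive
    return t;
};
ExecutorService executor = Executors.newFixedThreadPool(4, daemonFactory);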
In the ExecutorCompletionService javadoc, we see examples like this:
CompletionService<Result> ecs
= new ExecutorCompletionService<Result>(e);
List<Future<Result>> futures
= new ArrayList<Future<Result>>(n);
try {
...
} finally {
for (Future<Result> f : futures)
f.cancel(true);
}
So try to call cancel(true) on all your Futures when you need to stop the ExecutorCompletionService.
I have a method in which I create some files using a ThreadPoolExecutor, and later zip the files created.
private void createAndZip(){
// Some Code
ThreadPoolExecutor executer = (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
for(String str : someStringList){
// This piece of code creates files and drops to certain location.
executer.execute(new MyRunnable());
}
executer.shutdown();
// Code to Zip the files created above.
}
Now my piece of code to zip the files runs even before all the files are created, so not all files are zipped.
Please help. I tried sleep, but I can't guarantee how much time the file creation will take.
You need to invoke awaitTermination on the executor object, in order to wait for the executor to finish shutting down.
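For example, a sketch using the names from the question (the timeout is an arbitrary choice, and awaitTermination throws InterruptedException, which createAndZip would have to declare or handle):

executer.shutdown(); // stop accepting new tasks
if (!executer.awaitTermination(30, TimeUnit.MINUTES)) {
    // files are still being written; decide how to handle the timeout
}
// Code to Zip the files created above.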
I used a CountDownLatch to solve the problem. Here is sample code.
private void createAndZip() throws Exception{
CountDownLatch latch = new CountDownLatch(someStringList.size());
// Some Code
ThreadPoolExecutor executer = (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
for(String str : someStringList){
// This piece of code creates files and drops to certain location.
executer.execute(new MyRunnable(latch));
}
executer.shutdown();
try {
latch.await();
} catch (InterruptedException exception) {
throw new GIException(exception);
}
// Code to Zip the files created above.
}
public class MyRunnable implements Runnable{
CountDownLatch latch = null;
MyRunnable(CountDownLatch latch){
this.latch = latch;
}
@Override
public void run() {
try {
// Some Logic
} catch (Exception e) {
e.printStackTrace();
} finally {
// count down in finally so the waiting thread is released even if the task fails
latch.countDown();
}
}
}
I think you can use Future objects here. Instead of calling execute() on the executor, use the submit() method. This should give you a Future object for each task you submit to the executor. Once you have submitted all the tasks, just loop over the list of futures and call get() on each. get() is a blocking call and waits until the corresponding task finishes.
Here the advantage is that you can retrieve any exception thrown from your task and then decide whether to zip the files or not.
Please refer this code -
private void createAndZip() throws Exception {
// Some Code
ThreadPoolExecutor executer = (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
// collect all futures
List<Future<?>> futures = new ArrayList<>();
for(String str : someStringList){
// This piece of code creates files and drops to certain location.
futures.add(executer.submit(new MyRunnable()));
}
// wait for all tasks to finish
try {
for (Future<?> future : futures) {
future.get();
}
} catch (Exception e) {
e.printStackTrace();
if (e instanceof ExecutionException) {
throw e;
}
} finally {
executer.shutdown();
}
// Code to Zip the files created above.
}
In your code block you are down-casting the return value of Executors.newFixedThreadPool(5). One option you have is to just use the ExecutorService it returns. That interface already has facilities that avoid having to re-implement synchronization code such as latches. For example:
Using Futures
private void createAndZip(ExecutorService executor) throws ExecutionException, InterruptedException {
// Some Code
List<String> list = new ArrayList<>();
// For a number of reasons ExecutorService should be constructed outside
// ExecutorService executer = Executors.newFixedThreadPool(5);
List<Future<?>> futures = new ArrayList<>();
for(String str : list){
// This piece of code creates files and drops to certain location.
futures.add(executor.submit(new MyRunnable()));
}
// wait for the async work to finish
for (Future<?> future : futures) {
future.get(); // blocks until that task is done
}
// Code to Zip the files created above.
}
There are some advantages here:
Error management: when executing in the background, if you use another technique you have to arrange for errors to be delivered from the background thread to your master thread. Here the future takes care of this: if your worker throws, the exception will be passed back to your controlling thread.
Keeping few threadpools in your code. The reason to pool threads in the first place is to make the startup costs smaller. If you have any significant sized program you wouldn't want to create and destroy threadpools whenever you wanted to perform an operation in parallel.
With Java 8 lambdas the loops can be written in a more compact way.
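A sketch of that, using the executor and list variables from the method above (an import for java.util.stream.Collectors is assumed):

List<Future<?>> futures = list.stream()
        .map(str -> executor.submit(new MyRunnable()))
        .collect(Collectors.toList());
for (Future<?> future : futures) {
    future.get(); // still blocks until each task is done
}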
Fork/Join
Perhaps better suited to your task, particularly if you are going to process a tree of files, is the Fork/Join framework. Here you could roll the processing and the zipping into a collection of tasks that are submitted to the fork-join pool. That's neat because you can get a Future for the whole zip file, allowing you to produce the entire zip off your main thread. Something similar to your design using fork/join might be:
static class PrepareFile extends RecursiveTask<Void> {
private String filePath;
PrepareFile(String filePath) {
this.filePath = filePath;
}
@Override
protected Void compute() {
try {
System.out.println(filePath);
Thread.sleep(1009L);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
return null; // void
}
}
static class ZipTask extends RecursiveTask<String>
{
private List<String> files;
ZipTask(List<String> files) {
this.files = files;
}
@Override
protected String compute() {
List<PrepareFile> prepareTasks = new ArrayList<>();
for(String file : files) {
PrepareFile fileTask = new PrepareFile(file);
prepareTasks.add(fileTask);
fileTask.fork();
}
for(PrepareFile task : prepareTasks) {
task.join(); // can collect results here
}
System.out.println("Zipping");
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Done task");
return "filename.zip";
}
}
public static void main(String[] args) {
ForkJoinPool pool = new ForkJoinPool();
List<String> toProcess = Arrays.asList("a","b");
String filename = pool.invoke(new ZipTask(toProcess));
System.out.println("Zipped " + filename);
}
This is just an illustration; you'd want to change a few things, like the return types of the tasks and perhaps how the tasks are invoked.
On awaitTermination
It is possible to use the awaitTermination method after calling shutdown to wait for all processes to terminate. However this may not be so desirable in longer running services or programs where thread-pools may be shared between operations.
private void createAndZip() throws Exception{
// Some Code
ThreadPoolExecutor executer = (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
for(String str : someStringList){
// This piece of code creates files and drops to certain location.
executer.execute(new MyRunnable());
}
executer.shutdown();
while (true) {
boolean finished = executer.awaitTermination(1, TimeUnit.DAYS);
if (finished)
break;
}
// Code to Zip the files created above.
//Code here.
}
I have a simple application in which I create 3 threads inside a class to ping 3 different websites and note the time taken to do so.
I wish to enhance it by seeing which thread out of the 3 executes successfully first and terminating the other two.
Which class of the JDK would be helpful in doing so, and how?
Sample code to ping websites:
public static boolean pingUrl(final String address) {
try {
final URL url = new URL("http://" + address);
final HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
urlConn.setConnectTimeout(1000 * 10); // 10-second timeout (the argument is in milliseconds)
final long startTime = System.currentTimeMillis();
urlConn.connect();
final long endTime = System.currentTimeMillis();
if (urlConn.getResponseCode() == HttpURLConnection.HTTP_OK) {
System.out.println("Time (ms) : " + (endTime - startTime));
System.out.println("Ping to "+address +" was success");
return true;
}
} catch (final MalformedURLException e1) {
e1.printStackTrace();
} catch (final IOException e) {
e.printStackTrace();
}
return false;
}
I wish to enhance it by seeing which thread out of the 3 executes successfully first and terminating the other two.
I would use an ExecutorService combined with an ExecutorCompletionService. Then, when the first Future is returned from the completion service (i.e. when the first task completes), you would call shutdownNow() on the ExecutorService.
The javadocs for ExecutorCompletionService are pretty good and show how to use it.
// maybe you want 10 threads working on your tasks
ExecutorService threadPool = Executors.newFixedThreadPool(10);
CompletionService<Result> ecs
= new ExecutorCompletionService<Result>(threadPool);
for (Callable<Result> task : tasks) {
// submit your tasks to the completion service, they run in the thread-pool
ecs.submit(task);
}
// once you get one result
Future<Result> future = ecs.take();
// kill the rest of the tasks
threadPool.shutdownNow();
Result result = future.get();
// probably will need to close the thread connections, see below
// maybe call threadPool.awaitTermination(...) here to wait for the others to die
The only problem with this mechanism is that this will only interrupt the threads. In your case they are going to be stuck in urlConn.connect(); which is not interruptible. Once the ecs.take() returns, you are going to have to run back over your tasks and call disconnect() on the HttpURLConnections that are still in progress. Even then I'm not sure if it will stop a connection that is currently underway. If that doesn't work then you may need to switch to using Apache HttpClient or some other class that you can close to stop the threads from waiting longer.
for (Callable<Result> task : tasks) {
// you'll need to do something like this
task.closeConnection();
}
In your case, your task might look something like:
public class MyPingTask implements Callable<Boolean> {
private String address;
public MyPingTask(String address) {
this.address = address;
}
public Boolean call() throws Exception {
// obviously the pingUrl code could go right here
return pingUrl(address);
}
}
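If you go the disconnect() route, here is a hedged sketch of how the task could expose the closeConnection() used in the loop above, by keeping a reference to its HttpURLConnection (whether disconnect() actually aborts an in-flight connect is not guaranteed):

public class MyPingTask implements Callable<Boolean> {
    private final String address;
    private volatile HttpURLConnection urlConn; // kept so it can be disconnected from outside

    public MyPingTask(String address) {
        this.address = address;
    }

    public Boolean call() throws Exception {
        final URL url = new URL("http://" + address);
        urlConn = (HttpURLConnection) url.openConnection();
        urlConn.setConnectTimeout(1000 * 10);
        urlConn.connect();
        return urlConn.getResponseCode() == HttpURLConnection.HTTP_OK;
    }

    public void closeConnection() {
        HttpURLConnection conn = urlConn;
        if (conn != null) {
            conn.disconnect(); // best effort
        }
    }
}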
Here is the Java tutorial on ExecutorService and related classes.
I suppose BlockingQueue may be useful. The main idea is that each spawned thread writes some value to the BlockingQueue when finished and closes gracefully on InterruptedException.
For example:
public void runPing(List<String> urls) {
Collection<Thread> runningThreads = new ArrayList<>(urls.size());
final BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(urls.size());
for (final String url : urls) {
Thread t = new Thread(new Runnable() {
public void run() {
pingUrl(url);
queue.add(1);
}
});
runningThreads.add(t);
t.start(); // don't forget to actually start the thread
}
try {
queue.poll(1, TimeUnit.HOURS);
interruptChilds(runningThreads);
} catch (Exception e) {
interruptChilds(runningThreads);
}
}
private void interruptChilds(Collection<Thread> runningThreads) {
for (Thread t : runningThreads) {
t.interrupt();
}
}
Please note that there is no handling of InterruptedException here; it should be added in your method.
I need to execute a number of tasks, 4 at a time, something like this:
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
//...wait for completion somehow
How can I get notified once all of them are complete? For now I can't think of anything better than setting some global task counter and decreasing it at the end of every task, then monitoring this counter in an infinite loop until it becomes 0; or getting a list of Futures and monitoring isDone for all of them in an infinite loop. What are better solutions that don't involve infinite loops?
Thanks.
Basically on an ExecutorService you call shutdown() and then awaitTermination():
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
taskExecutor.shutdown();
try {
taskExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
...
}
Use a CountDownLatch:
CountDownLatch latch = new CountDownLatch(totalNumberOfTasks);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
try {
latch.await();
} catch (InterruptedException E) {
// handle
}
and within your task (enclose in try / finally)
latch.countDown();
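For example, a sketch of a task with the countDown() enclosed in try/finally (this assumes the latch is passed into the task or captured by a lambda, which the loop above doesn't show):

Runnable task = () -> {
    try {
        // ... the actual work ...
    } finally {
        latch.countDown(); // released even if the work throws
    }
};
taskExecutor.execute(task);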
ExecutorService.invokeAll() does it for you.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
List<Callable<Void>> tasks; // your tasks
// invokeAll() returns when all tasks are complete
List<Future<Void>> futures = taskExecutor.invokeAll(tasks);
You can use Lists of Futures, as well:
List<Future<Void>> futures = new ArrayList<>();
// now add to it:
futures.add(executorInstance.submit(new Callable<Void>() {
public Void call() throws IOException {
// do something
return null;
}
}));
then when you want to join on all of them, its essentially the equivalent of joining on each, (with the added benefit that it re-raises exceptions from child threads to the main):
for (Future<Void> f : futures) { f.get(); }
Basically the trick is to call .get() on each Future one at a time, instead of infinite looping calling isDone() on (all or each). So you're guaranteed to "move on" through and past this block as soon as the last thread finishes. The caveat is that since the .get() call re-raises exceptions, if one of the threads dies, you would raise from this possibly before the other threads have finished to completion [to avoid this, you could add a catch ExecutionException around the get call, as sketched below]. The other caveat is it keeps a reference to all threads, so if they have thread-local variables they won't get collected till after you get past this block (though you might be able to get around this, if it became a problem, by removing Futures off the ArrayList). If you wanted to know which Future "finishes first" you could use something like https://stackoverflow.com/a/31885029/32453
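For the ExecutionException caveat mentioned above, a minimal sketch of wrapping each get() call so one failed task doesn't abort the wait for the rest:

for (Future<Void> f : futures) {
    try {
        f.get();
    } catch (ExecutionException e) {
        // this task failed; log it and keep waiting for the others
        e.getCause().printStackTrace();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // we were interrupted; stop waiting
        break;
    }
}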
In Java 8 you can do it with CompletableFuture:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Just my two cents.
To overcome CountDownLatch's requirement of knowing the number of tasks beforehand, you could do it the old-fashioned way by using a simple Semaphore.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
int numberOfTasks=0;
Semaphore s=new Semaphore(0);
while(...) {
taskExecutor.execute(new MyTask());
numberOfTasks++;
}
try {
s.acquire(numberOfTasks);
...
In your task just call s.release() as you would latch.countDown();
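A sketch of the task side, with the release in a finally block so a failing task doesn't leave acquire() waiting forever (passing the semaphore in via the constructor is an assumption; the original MyTask doesn't take one):

class MyTask implements Runnable {
    private final Semaphore s;
    MyTask(Semaphore s) { this.s = s; }

    @Override
    public void run() {
        try {
            // ... the actual work ...
        } finally {
            s.release(); // as you would latch.countDown()
        }
    }
}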
A bit late to the game but for the sake of completeness...
Instead of 'waiting' for all tasks to finish, you can think in terms of the Hollywood principle, "don't call me, I'll call you" - when I'm finished.
I think the resulting code is more elegant...
Guava offers some interesting tools to accomplish this.
An example:
Wrap an ExecutorService into a ListeningExecutorService:
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));
Submit a collection of callables for execution:
for (Callable<Integer> callable : callables) {
ListenableFuture<Integer> lf = service.submit(callable);
// listenableFutures is a collection
listenableFutures.add(lf);
}
Now the essential part:
ListenableFuture<List<Integer>> lf = Futures.successfulAsList(listenableFutures);
Attach a callback to the ListenableFuture, that you can use to be notified when all futures complete:
Futures.addCallback(lf, new FutureCallback<List<Integer>> () {
@Override
public void onSuccess(List<Integer> result) {
// do something with all the results
}
@Override
public void onFailure(Throwable t) {
// log failure
}
});
This also offers the advantage that you can collect all the results in one place once the processing is finished...
More information here
The CyclicBarrier class in Java 5 and later is designed for this sort of thing.
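A minimal sketch of that idea, assuming the number of tasks is known up front: the barrier gets one extra party for the submitting thread, and every task calls await() when it finishes:

int numberOfTasks = 10; // assumed known up front
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
CyclicBarrier barrier = new CyclicBarrier(numberOfTasks + 1); // +1 for the waiting thread
for (int i = 0; i < numberOfTasks; i++) {
    taskExecutor.execute(() -> {
        try {
            new MyTask().run(); // the actual work
        } finally {
            try {
                barrier.await(); // signal this task is done
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt(); // give up if interrupted or the barrier breaks
            }
        }
    });
}
try {
    barrier.await(); // blocks until all tasks have reached the barrier
} catch (InterruptedException | BrokenBarrierException e) {
    Thread.currentThread().interrupt();
}
taskExecutor.shutdown();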
Here are two options; I'm just a bit confused about which one is the best to go with.
Option 1:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Option 2:
ExecutorService es = Executors.newFixedThreadPool(4);
List< Future<?>> futures = new ArrayList<>();
for(Runnable task : taskList) {
futures.add(es.submit(task));
}
for(Future<?> future : futures) {
try {
future.get();
}catch(Exception e){
// do logging and nothing else
}
}
es.shutdown();
Here, putting future.get() in a try/catch is a good idea, right?
Follow one of the below approaches.
Iterate through all the Future tasks returned from submit on the ExecutorService and check the status with the blocking call get() on the Future object, as suggested by Kiran
Use invokeAll() on ExecutorService
CountDownLatch
ForkJoinPool or Executors.newWorkStealingPool()
Use the shutdown, awaitTermination, and shutdownNow APIs of ThreadPoolExecutor in the proper sequence
Related SE questions:
How is CountDownLatch used in Java Multithreading?
How to properly shutdown java ExecutorService
You could wrap your tasks in another Runnable that will send notifications:
taskExecutor.execute(new Runnable() {
public void run() {
taskStartedNotification();
new MyTask().run();
taskFinishedNotification();
}
});
Clean way with ExecutorService
List<Future<Void>> results = null;
try {
List<Callable<Void>> tasks = new ArrayList<>();
ExecutorService executorService = Executors.newFixedThreadPool(4);
results = executorService.invokeAll(tasks);
} catch (InterruptedException ex) {
...
} catch (Exception ex) {
...
}
I've just written a sample program that solves your problem. There was no concise implementation given, so I'll add one. While you can use executor.shutdown() and executor.awaitTermination(), it is not the best practice as the time taken by different threads would be unpredictable.
ExecutorService es = Executors.newCachedThreadPool();
List<Callable<Integer>> tasks = new ArrayList<>();
for (int j = 1; j <= 10; j++) {
tasks.add(new Callable<Integer>() {
@Override
public Integer call() throws Exception {
int sum = 0;
System.out.println("Starting Thread "
+ Thread.currentThread().getId());
for (int i = 0; i < 1000000; i++) {
sum += i;
}
System.out.println("Stopping Thread "
+ Thread.currentThread().getId());
return sum;
}
});
}
try {
List<Future<Integer>> futures = es.invokeAll(tasks);
int flag = 0;
for (Future<Integer> f : futures) {
Integer res = f.get();
System.out.println("Sum: " + res);
if (!f.isDone())
flag = 1;
}
if (flag == 0)
System.out.println("SUCCESS");
else
System.out.println("FAILED");
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
Just to provide more alternatives here, different from using latches/barriers.
You can also get the partial results as they become available, before all of them finish, using CompletionService.
From Java Concurrency in Practice:
"If you have a batch of computations to submit to an Executor and you want to retrieve their results as they become available, you could retain the Future associated with each task and repeatedly poll for completion by calling get with a timeout of zero. This is possible, but tedious. Fortunately there is a better way: a completion service."
Here is the implementation:
public class TaskSubmiter {
private final ExecutorService executor;
TaskSubmiter(ExecutorService executor) { this.executor = executor; }
void doSomethingLarge(AnySourceClass source) {
final List<InterestedResult> info = doPartialAsyncProcess(source);
CompletionService<PartialResult> completionService = new ExecutorCompletionService<PartialResult>(executor);
for (final InterestedResult interestedResultItem : info)
completionService.submit(new Callable<PartialResult>() {
public PartialResult call() {
return interestedResultItem.doAnOperationToGetPartialResult();
}
});
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<PartialResult> f = completionService.take();
PartialResult partialResult = f.get();
processThisSegment(partialResult);
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (ExecutionException e) {
throw launderThrowable(e.getCause()); // rethrow via a helper such as JCiP's launderThrowable
}
}
}
This is my solution, based on AdamSkywalker's tip, and it works:
package frss.main;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class TestHilos {
void procesar() {
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream().map(task -> CompletableFuture.runAsync(task, es)).toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
System.out.println("FIN DEL PROCESO DE HILOS");
}
private List<Runnable> getTasks() {
List<Runnable> tasks = new ArrayList<Runnable>();
Hilo01 task1 = new Hilo01();
tasks.add(task1);
Hilo02 task2 = new Hilo02();
tasks.add(task2);
return tasks;
}
private class Hilo01 extends Thread {
@Override
public void run() {
System.out.println("HILO 1");
}
}
private class Hilo02 extends Thread {
@Override
public void run() {
try {
sleep(2000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("HILO 2");
}
}
public static void main(String[] args) {
TestHilos test = new TestHilos();
test.procesar();
}
}
You could use this code:
public class MyTask implements Runnable {
private CountDownLatch countDownLatch;
public MyTask(CountDownLatch countDownLatch) {
this.countDownLatch = countDownLatch;
}
@Override
public void run() {
try {
//Do somethings
//
this.countDownLatch.countDown();//important
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
}
CountDownLatch countDownLatch = new CountDownLatch(NUMBER_OF_TASKS);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
for (int i = 0; i < NUMBER_OF_TASKS; i++){
taskExecutor.execute(new MyTask(countDownLatch));
}
countDownLatch.await();
System.out.println("Finish tasks");
So I'm posting my answer from the linked question here, in case someone wants a simpler way to do this:
ExecutorService executor = Executors.newFixedThreadPool(10);
CompletableFuture[] futures = new CompletableFuture[10];
int i = 0;
while (...) {
futures[i++] = CompletableFuture.runAsync(runner, executor);
}
CompletableFuture.allOf(futures).join(); // This will wait until all futures are ready.
I created the following working example. The idea is to have a way to process a pool of tasks (I am using a queue as an example) with many threads (the number determined programmatically by numberOfTasks/threshold), and wait until all threads are completed before continuing with some other processing.
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/** Testing CountDownLatch and ExecutorService to manage scenario where
* multiple Threads work together to complete tasks from a single
* resource provider, so the processing can be faster. */
public class ThreadCountDown {
private CountDownLatch threadsCountdown = null;
private static Queue<Integer> tasks = new PriorityQueue<>();
public static void main(String[] args) {
// Create a queue with "Tasks"
int numberOfTasks = 2000;
while(numberOfTasks-- > 0) {
tasks.add(numberOfTasks);
}
// Initiate Processing of Tasks
ThreadCountDown main = new ThreadCountDown();
main.process(tasks);
}
/* Receiving the Tasks to process, and creating multiple Threads
* to process in parallel. */
private void process(Queue<Integer> tasks) {
int numberOfThreads = getNumberOfThreadsRequired(tasks.size());
threadsCountdown = new CountDownLatch(numberOfThreads);
ExecutorService threadExecutor = Executors.newFixedThreadPool(numberOfThreads);
//Initialize each Thread
while(numberOfThreads-- > 0) {
System.out.println("Initializing Thread: "+numberOfThreads);
threadExecutor.execute(new MyThread("Thread "+numberOfThreads));
}
try {
//Shutdown the Executor, so it cannot receive more Threads.
threadExecutor.shutdown();
threadsCountdown.await();
System.out.println("ALL THREADS COMPLETED!");
//continue With Some Other Process Here
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
/* Determine the number of Threads to create */
private int getNumberOfThreadsRequired(int size) {
int threshold = 100;
int threads = size / threshold;
if( size > (threads*threshold) ){
threads++;
}
return threads;
}
/* Task Provider. All Threads will get their task from here */
private synchronized static Integer getTask(){
return tasks.poll();
}
/* The Threads will get Tasks and process them, while still available.
* When no more tasks available, the thread will complete and reduce the threadsCountdown */
private class MyThread implements Runnable {
private String threadName;
protected MyThread(String threadName) {
super();
this.threadName = threadName;
}
@Override
public void run() {
Integer task;
try{
//Check in the Task pool if anything pending to process
while( (task = getTask()) != null ){
processTask(task);
}
}catch (Exception ex){
ex.printStackTrace();
}finally {
/*Reduce count when no more tasks to process. Eventually all
Threads will end-up here, reducing the count to 0, allowing
the flow to continue after threadsCountdown.await(); */
threadsCountdown.countDown();
}
}
private void processTask(Integer task){
try{
System.out.println(this.threadName+" is Working on Task: "+ task);
}catch (Exception ex){
ex.printStackTrace();
}
}
}
}
Hope it helps!
You could use your own subclass of ExecutorCompletionService to wrap taskExecutor, and your own implementation of BlockingQueue to get informed when each task completes and perform whatever callback or other action you desire when the number of completed tasks reaches your desired goal.
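A hedged sketch of the BlockingQueue half of that idea: ExecutorCompletionService has a constructor that takes the completion queue to use, and the current JDK implementation add()s each finished task's Future to it, so a queue subclass can count completions (the taskCount and the callback below are made-up placeholders):

class NotifyingQueue<V> extends LinkedBlockingQueue<Future<V>> {
    private final AtomicInteger remaining;
    private final Runnable onAllDone; // hypothetical callback

    NotifyingQueue(int expectedTasks, Runnable onAllDone) {
        this.remaining = new AtomicInteger(expectedTasks);
        this.onAllDone = onAllDone;
    }

    @Override
    public boolean add(Future<V> completed) {
        boolean added = super.add(completed);
        if (remaining.decrementAndGet() == 0) {
            onAllDone.run(); // every expected task has completed
        }
        return added;
    }
}

// usage: the completion service feeds each finished Future into our queue
NotifyingQueue<Void> completionQueue = new NotifyingQueue<>(taskCount, () -> System.out.println("all tasks done"));
CompletionService<Void> completionService = new ExecutorCompletionService<>(taskExecutor, completionQueue);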
You should use the executorService.shutdown() and executorService.awaitTermination() methods.
An example follows:
public class ScheduledThreadPoolExample {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
executorService.scheduleAtFixedRate(() -> System.out.println("process task."),
0, 1, TimeUnit.SECONDS);
TimeUnit.SECONDS.sleep(10);
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.DAYS);
}
}
If you use multiple ExecutorServices SEQUENTIALLY and want to wait for EACH ExecutorService to finish, the best way is like below:
ExecutorService executer1 = Executors.newFixedThreadPool(THREAD_SIZE1);
for (<loop>) {
executer1.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer1.shutdown();
try{
executer1.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
ExecutorService executer2 = Executors.newFixedThreadPool(THREAD_SIZE2);
for (<loop>) {
executer2.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer2.shutdown();
} catch (Exception e){
...
}
Try-with-Resources syntax on AutoCloseable executor service with Project Loom
Project Loom seeks to add new features to the concurrency abilities in Java.
One of those features is making the ExecutorService AutoCloseable. This means every ExecutorService implementation will offer a close method. And it means we can use try-with-resources syntax to automatically close an ExecutorService object.
The ExecutorService#close method blocks until all submitted tasks are completed. Using close takes the place of calling shutdown & awaitTermination.
Being AutoCloseable contributes to Project Loom’s attempt to bring “structured concurrency” to Java.
try (
ExecutorService executorService = Executors.… ;
) {
// Submit your `Runnable`/`Callable` tasks to the executor service.
…
}
// At this point, flow-of-control blocks until all submitted tasks are done/canceled/failed.
// After this point, the executor service will have been automatically shutdown, via the `close` method called by try-with-resources syntax.
For more information on Project Loom, search for talks and interviews given by Ron Pressler and others on the Project Loom team. Focus on the more recent, as Project Loom has evolved.
Experimental builds of Project Loom technology are available now, based on early-access Java 18.
Java 8 - we can use the Stream API to process the tasks. Please see the snippet below:
final List<Runnable> tasks = ...; //or any other functional interface
tasks.stream().parallel().forEach(Runnable::run); // Uses default pool
//alternatively to specify parallelism
new ForkJoinPool(15).submit(
() -> tasks.stream().parallel().forEach(Runnable::run)
).get();
ExecutorService WORKER_THREAD_POOL
= Executors.newFixedThreadPool(10);
CountDownLatch latch = new CountDownLatch(2);
for (int i = 0; i < 2; i++) {
WORKER_THREAD_POOL.submit(() -> {
try {
// doSomething();
latch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
// wait for the latch to be decremented by the two remaining threads
latch.await();
If doSomething() throws some other exception, it seems latch.countDown() will not execute, so what should I do?
This might help
Log.i(LOG_TAG, "shutting down executor...");
executor.shutdown();
while (true) {
try {
Log.i(LOG_TAG, "Waiting for executor to terminate...");
if (executor.isTerminated())
break;
if (executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
break;
}
} catch (InterruptedException ignored) {}
}
You could call waitTillDone() on this Runner class:
Runner runner = Runner.runner(4); // create pool with 4 threads in thread pool
while(...) {
runner.run(new MyTask()); // here you submit your task
}
runner.waitTillDone(); // and this blocks until all tasks are finished (or failed)
runner.shutdown(); // once you done you can shutdown the runner
You can reuse this class and call waitTillDone() as many times as you want before calling shutdown(), plus your code is extremely simple. Also, you don't have to know the number of tasks upfront.
To use it, just add this Gradle/Maven dependency to your project: compile 'com.github.matejtymes:javafixes:1.3.1'.
More details can be found here:
https://github.com/MatejTymes/JavaFixes
There is a method on the executor, getActiveCount(), that gives the count of active threads.
After spawning the threads, we can check whether the getActiveCount() value is 0. Once the value is zero, there are no active threads currently running, which means the tasks are finished:
while (true) {
if (executor.getActiveCount() == 0) {
// your own piece of code
break;
}
}