I have a problem with concurrent programming in Java. I am working on my bachelor thesis and need several methods that return a String value. The futures returned by TriggerMessageFuture and getMeterValueFuture each run a process that takes between 1 and 5 seconds and returns a String value when it finishes.
The problem is that future.get() blocks my main thread. I want to call the TriggerMessage and getMeterValue methods from my main without blocking it, and get their answers as return values when they are finished. I wasn't able to find a way to solve this, because every solution I found either had no return value or blocked the thread.
private String TriggerMessage(String Messagetyp) throws InterruptedException, ExecutionException{
Future<String> future = new communicator().TriggerMessageFuture(queue,centralSystem,Messagetyp);
while(!future.isDone()) {
System.out.println("[TriggerMessage]: Calculating... ");
Thread.sleep(500);
}
String result = future.get(); //blocking
return result;
}
private String getMeterValue(String key) throws Exception{
Future<String> future = new communicator().getMeterValueFuture(queue,centralSystem,key);
while(!future.isDone()) {
System.out.println("[getMeterValue]: Calculating...");
Thread.sleep(500);
}
String result = future.get(); //blocking
return result;
}
It depends on which main thread you are referring to, and on whether you can use CompletableFuture instead of a plain old Java Future.
Using the main(String[] args) thread
It's not possible to do it without any form of blocking. If you are not blocking on get, you'll have to block on a BlockingQueue implementation, otherwise the main thread just ends.
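For the main(String[] args) case, a minimal sketch of that hand-off (my own illustration; the names are placeholders, not from the question): a worker pushes its result onto a BlockingQueue and main blocks on take() instead of on get().
import java.util.concurrent.*;

public class QueueHandOff {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        BlockingQueue<String> results = new LinkedBlockingQueue<>();

        pool.submit(() -> {
            Thread.sleep(2000);              // stands in for the 1-5 second work
            results.put("MeterValue=42");    // hand the result to the main thread
            return null;
        });

        String answer = results.take();      // main still blocks, just on the queue
        System.out.println("Got: " + answer);
        pool.shutdown();
    }
}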
Using the Swing Event Dispatch thread
You'd need to submit a continuation task, which plain Future doesn't support from the outside. So either you include this submission inside the task the Future was created for, or you switch to CompletableFuture:
ExecutorService exec = ...
Future<?> future = exec.submit(() -> {
var value = someCalculation();
SwingUtilities.invokeLater(() -> {
useValueOnEDT(value);
});
});
or
CompletableFuture<ValueType> cf = ...
cf.whenComplete((value, error) -> {
SwingUtilities.invokeLater(() -> {
if (error != null) {
handleErrorOnEdt(error);
} else {
useValueOnEDT(value);
}
});
});
Android Main Thread
The idea is the same as with Swing, but you'll have to use a Handler
// given value
new Handler(Looper.getMainLooper()).post(() -> {
useValueOnMainLooper(value);
});
You can wrap the Future into a CompletableFuture like so
static <T> CompletableFuture<T> from(Future<T> future) {
var delegate = new CompletableFuture<T>();
CompletableFuture.runAsync(() -> {
try {
delegate.complete(future.get());
} catch (Throwable e) {
delegate.completeExceptionally(e);
}
});
return delegate;
}
And then use that CompletableFuture to asynchronously handle the completion via its various then... and when... methods.
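For example, applied to the TriggerMessageFuture from the question (usage sketch; useValue, handleError and messageType are placeholder names):
Future<String> raw = new communicator().TriggerMessageFuture(queue, centralSystem, messageType);
from(raw)
    .thenAccept(result -> useValue(result))                        // runs when the value is ready
    .exceptionally(error -> { handleError(error); return null; }); // runs if the task failed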
I need to achieve the following functionality using multithreading/parallel processing in Java 7:
List<String> permissionsList = new ArrayList<String>();
for (Integer site : getSites()) {
String perm = authenticationService.getUserPermissionsForSite(site);
permissionsList.add(perm);
}
The call getUserPermissionsForSite is a web service call. After the end of the for loop, the permissionsList should be fully populated with all the values.
What would be the best way to achieve it?
This is how it would typically be done in Java 7. A CountDownLatch is used for waiting till all the sites are called.
int numOfThreads = 5;
List<String> permissionsList = Collections.synchronizedList(new ArrayList<>());
List<Integer> sites = getSites();
CountDownLatch countDownLatch = new CountDownLatch(sites.size());
ExecutorService es = Executors.newFixedThreadPool(numOfThreads);
for (final Integer site : sites) {
es.submit(new Runnable() {
@Override
public void run() {
try {
String perm = authenticationService.getUserPermissionsForSite(site);
permissionsList.add(perm);
} finally {
countDownLatch.countDown();
}
}
});
}
countDownLatch.await();
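One small addition (my own note, not part of the original snippet): once the latch has been released, shut the pool down so its threads don't keep the JVM alive.
countDownLatch.await();
es.shutdown(); // no new tasks will be accepted; idle worker threads exit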
You can use ExecutorService.invokeAll() to schedule all your requests at once. You can specify a timeout to avoid waiting indefinitely. Upon return, all requests will have completed, faulted, or timed out. Iterate over the tasks and fetch the results, and handle timeouts and partial failures however you like. Note that if you add a timeout, you may want to scale it by the number of requests.
You'll need to decide what degree of parallelism you want. My example will create up to as many threads as there are processors available to run them, but the thread count won't exceed the number of requests. That's pretty aggressive; you could probably get away with fewer threads.
List<String> getPermissions(final List<Integer> sites) throws InterruptedException {
if (sites.isEmpty()) {
return Collections.emptyList();
}
final List<String> permissions = new ArrayList<>();
final List<Callable<String>> tasks = new ArrayList<>();
final int threadCount = Math.min(
Runtime.getRuntime().availableProcessors(),
sites.size()
);
final ExecutorService executor = Executors.newFixedThreadPool(threadCount);
try {
for (final Integer site : sites) {
tasks.add(
new Callable<String>() {
@Override
public String call() throws Exception {
return authenticationService.getUserPermissionsForSite(site);
}
}
);
}
for (final Future<String> f : executor.invokeAll(tasks/*, timeout? */)) {
try {
// If you added a timeout, you can check f.isCancelled().
// If timed out, get() will throw a CancellationException.
permissions.add(f.get());
}
catch (final ExecutionException e) {
// This request failed. Handle it however you like.
}
}
}
finally {
executor.shutdown();
}
return permissions;
}
Like most Java operations that perform a wait, invokeAll may throw an InterruptedException. It's up to you whether you want to catch it or let it propagate.
I have a Java application that watches a directory and when XML files are added to that directory, it parses out data from the XML file.
It works fine as a single-threaded application, but I'm thinking about making it multi-threaded so multiple files can be processed simultaneously.
The question is: when thread #1 finds a file and starts processing it, how can I mark the file as 'in progress' so thread #2 doesn't try to process it too?
I was thinking that the thread could simply rename the file once it starts working on it, to myfile.xml.inprogress, and then myfile.xml.finished when done.
But if I do that, is it possible that two threads will see the file at the same time and both will try to rename it simultaneously?
I might also want to run two instances of this application reading files in the same directory, so whatever path I take supports multiple processes.
Thanks!
You should use a Producer-Consumer pattern. Have one thread to listen for changes in the files and pass off that work to other threads.
You can use a BlockingQueue for this to make the code very simple.
First you need two classes, a producer:
class Producer implements Callable<Void> {
private final BlockingQueue<Path> changedFiles;
Producer(BlockingQueue<Path> changedFiles) {
this.changedFiles = changedFiles;
}
@Override
public Void call() throws Exception {
while (true) {
if (something) {
changedFiles.add(changedFile);
}
//to make the thread "interruptable"
try {
Thread.sleep(TimeUnit.SECONDS.toMillis(1));
} catch (InterruptedException ex) {
break;
}
}
return null;
}
}
And a Consumer:
class Consumer implements Callable<Void> {
private final BlockingQueue<Path> changedFiles;
Consumer(BlockingQueue<Path> changedFiles) {
this.changedFiles = changedFiles;
}
@Override
public Void call() throws Exception {
while (true) {
try {
final Path changedFile = changedFiles.take();
//process your file
//to make the thread "interruptable"
} catch (InterruptedException ex) {
break;
}
}
return null;
}
}
So, now create an ExecutorService, submit one Producer and as many consumers as you need:
final BlockingQueue<Path> queue = new LinkedBlockingDeque<>();
final ExecutorService executorService = Executors.newCachedThreadPool();
final Collection<Future<?>> consumerHandles = new LinkedList<>();
for (int i = 0; i < numConsumers; ++i) {
consumerHandles.add(executorService.submit(new Consumer(queue)));
}
final Future<?> producerHandle = executorService.submit(new Producer(queue));
This guarantees that each file is only worked on by one consumer at a time, because you control the hand-off yourself. You also do so with minimal synchronisation.
It might be worthwhile having the Consumer also read the file, to remove the shared disc IO that will happen otherwise - this will likely slow the system down. You could also add another Consumer that writes changed files at the other end to completely eliminate shared IO.
To shutdown the system simply call:
executorService.shutdownNow();
executorService.awaitTermination(1, TimeUnit.DAYS);
Because your workers are interruptible, this will bring down the ExecutorService once the tasks currently in progress have finished.
Producer Consumer is definitely the idea here.
With Java 7 there is a WatchService provided which can take care of the Producer problem even though it is a pain to work with.
Have an ExecutorService with the desired pool size to take care of Consumers.
Here is how it all wires up.
public class FolderWatchService {
private ExecutorService executorService = Executors.newFixedThreadPool(5);
public void watch() throws Exception {
Path folder = Paths.get("/home/user/temp");
try (WatchService watchService = FileSystems.getDefault().newWatchService()) {
folder.register(watchService,
StandardWatchEventKinds.ENTRY_CREATE,
StandardWatchEventKinds.ENTRY_MODIFY,
StandardWatchEventKinds.ENTRY_DELETE);
while(true) {
final WatchKey key = watchService.take();
if (key != null) {
for (WatchEvent<?> watchEvent : key.pollEvents()) {
WatchEvent<Path> event = (WatchEvent<Path>) watchEvent;
Path dir = (Path) key.watchable();
Path absolutePath = dir.resolve(event.context());
executorService.submit(new WatchTask(absolutePath));
}
key.reset();
}
}
}
}
public static void main(String[] args) throws Exception {
FolderWatchService folderWatchService = new FolderWatchService();
folderWatchService.watch();
}
}
class WatchTask implements Runnable {
private Path absolutePath;
WatchTask(Path absolutePath) {
this.absolutePath = absolutePath;
}
@Override
public void run() {
System.out.println(Thread.currentThread().getName() + absolutePath.toAbsolutePath());
try (BufferedReader reader = Files.newBufferedReader(absolutePath , StandardCharsets.UTF_8)) {
//Do read
reader.lines().forEach(line -> System.out.println(line));
} catch (IOException e) {
e.printStackTrace();
}
}
}
You can use java.nio.channels.FileLock: http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileLock.html to synchronize access between different processes.
For synchronization between different threads running inside the same process, you can use java.util.concurrent.locks.Lock: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Lock.html
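A minimal sketch of the FileLock approach for the multi-process case (my own illustration, not from the answer): each process tries an exclusive lock on the file before parsing it and skips the file if another process already holds the lock. Note that within a single JVM an overlapping tryLock() throws OverlappingFileLockException rather than returning null, so in-process coordination still needs the java.util.concurrent locks mentioned above.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class XmlFileLocker {
    /** Returns true if this process acquired the lock and processed the file. */
    static boolean tryProcess(Path xmlFile) throws IOException {
        try (FileChannel channel = FileChannel.open(xmlFile,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();   // null if another process holds the lock
            if (lock == null) {
                return false;                    // someone else is already working on this file
            }
            try {
                // parse the XML here
                return true;
            } finally {
                lock.release();
            }
        }
    }
}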
I need to execute a number of tasks, 4 at a time, something like this:
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
//...wait for completion somehow
How can I get notified once all of them are complete? For now I can't think of anything better than setting a global task counter and decreasing it at the end of every task, then monitoring that counter in an infinite loop until it becomes 0; or getting a list of Futures and monitoring isDone for all of them in an infinite loop. What are better solutions that don't involve infinite loops?
Thanks.
Basically on an ExecutorService you call shutdown() and then awaitTermination():
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
taskExecutor.shutdown();
try {
taskExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
...
}
Use a CountDownLatch:
CountDownLatch latch = new CountDownLatch(totalNumberOfTasks);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
try {
latch.await();
} catch (InterruptedException E) {
// handle
}
and within your task (enclose in try / finally)
latch.countDown();
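For example, the task could look like this (sketch, wiring the latch from the snippet above into MyTask):
class MyTask implements Runnable {
    private final CountDownLatch latch;

    MyTask(CountDownLatch latch) { this.latch = latch; }

    @Override
    public void run() {
        try {
            // do the actual work
        } finally {
            latch.countDown();   // always counts down, even if the work throws
        }
    }
}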
ExecutorService.invokeAll() does it for you.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
List<Callable<?>> tasks; // your tasks
// invokeAll() returns when all tasks are complete
List<Future<?>> futures = taskExecutor.invokeAll(tasks);
You can use Lists of Futures, as well:
List<Future> futures = new ArrayList<Future>();
// now add to it:
futures.add(executorInstance.submit(new Callable<Void>() {
public Void call() throws IOException {
// do something
return null;
}
}));
then when you want to join on all of them, it's essentially the equivalent of joining on each (with the added benefit that it re-raises exceptions from child threads to the main thread):
for(Future f: this.futures) { f.get(); }
Basically the trick is to call .get() on each Future one at a time, instead of calling isDone() on all (or each) of them in an infinite loop. So you're guaranteed to "move on" through and past this block as soon as the last thread finishes. One caveat: since the .get() call re-raises exceptions, if one of the threads dies you might raise out of this block before the other threads have finished (to avoid this, you could add a catch for ExecutionException around the get call). The other caveat is that it keeps a reference to all the threads, so any thread-local variables they have won't be collected until you get past this block (though you might be able to work around this, if it becomes a problem, by removing Futures from the ArrayList). If you wanted to know which Future "finishes first", you could use something like https://stackoverflow.com/a/31885029/32453
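For the first caveat, the bracketed suggestion would look roughly like this (sketch):
for (Future<?> f : futures) {
    try {
        f.get();                         // blocks until this particular task is done
    } catch (ExecutionException e) {
        // this task failed; log it and keep waiting for the remaining tasks
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}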
In Java8 you can do it with CompletableFuture:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Just my two cents.
To overcome CountDownLatch's requirement of knowing the number of tasks beforehand, you could do it the old-fashioned way by using a simple Semaphore.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
int numberOfTasks=0;
Semaphore s=new Semaphore(0);
while(...) {
taskExecutor.execute(new MyTask());
numberOfTasks++;
}
try {
s.acquire(numberOfTasks);
...
In your task just call s.release() as you would latch.countDown();
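And in the task itself, release in a finally block so a failing task cannot hang the acquire (sketch, inlining the task as a lambda instead of MyTask):
taskExecutor.execute(() -> {
    try {
        // do the work
    } finally {
        s.release();   // the Semaphore counterpart of latch.countDown()
    }
});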
A bit late to the game, but for the sake of completeness...
Instead of 'waiting' for all tasks to finish, you can think in terms of the Hollywood principle, "don't call me, I'll call you" - when I'm finished.
I think the resulting code is more elegant...
Guava offers some interesting tools to accomplish this.
An example:
Wrap an ExecutorService into a ListeningExecutorService:
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));
Submit a collection of callables for execution:
for (Callable<Integer> callable : callables) {
ListenableFuture<Integer> lf = service.submit(callable);
// listenableFutures is a collection
listenableFutures.add(lf);
}
Now the essential part:
ListenableFuture<List<Integer>> lf = Futures.successfulAsList(listenableFutures);
Attach a callback to the ListenableFuture, that you can use to be notified when all futures complete:
Futures.addCallback(lf, new FutureCallback<List<Integer>> () {
@Override
public void onSuccess(List<Integer> result) {
// do something with all the results
}
@Override
public void onFailure(Throwable t) {
// log failure
}
});
This also offers the advantage that you can collect all the results in one place once the processing is finished...
More information here
The CyclicBarrier class in Java 5 and later is designed for this sort of thing.
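A sketch of that approach (my own illustration): size the barrier for the tasks plus the waiting thread, and have every task hit the barrier when it finishes. A CountDownLatch is usually the simpler fit, but CyclicBarrier works when the task count is fixed and the barrier may be reused.
import java.util.concurrent.*;

public class BarrierDemo {
    public static void main(String[] args) throws Exception {
        int tasks = 4;
        CyclicBarrier barrier = new CyclicBarrier(tasks + 1);   // +1 for the waiting thread
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try {
                    // do the work
                } finally {
                    try { barrier.await(); } catch (Exception ignored) { }
                }
            });
        }
        barrier.await();   // returns once every task has arrived at the barrier
        pool.shutdown();
        System.out.println("all tasks done");
    }
}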
Here are two options; I'm just a bit confused about which one is best to go with.
Option 1:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Option 2:
ExecutorService es = Executors.newFixedThreadPool(4);
List< Future<?>> futures = new ArrayList<>();
for(Runnable task : taskList) {
futures.add(es.submit(task));
}
for(Future<?> future : futures) {
try {
future.get();
}catch(Exception e){
// do logging and nothing else
}
}
es.shutdown();
Here, putting future.get() in a try/catch is a good idea, right?
Follow one of the approaches below.
Iterate through all the Future tasks returned from submit on ExecutorService and check their status with a blocking call to get() on the Future object, as suggested by Kiran
Use invokeAll() on ExecutorService
CountDownLatch
ForkJoinPool or Executors.newWorkStealingPool() (see the sketch after this list)
Use the shutdown, awaitTermination, shutdownNow APIs of ThreadPoolExecutor in the proper sequence
Related SE questions:
How is CountDownLatch used in Java Multithreading?
How to properly shutdown java ExecutorService
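Since the ForkJoinPool/work-stealing option above has no snippet elsewhere in the thread, here is a rough sketch (my own illustration; the enclosing method has to handle or declare InterruptedException):
ExecutorService pool = Executors.newWorkStealingPool();   // parallelism defaults to the number of cores
List<Callable<Void>> tasks = new ArrayList<>();           // fill with your tasks
List<Future<Void>> done = pool.invokeAll(tasks);          // blocks until every task has completed or failed
pool.shutdown();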
You could wrap your tasks in another runnable, that will send notifications:
taskExecutor.execute(new Runnable() {
public void run() {
taskStartedNotification();
new MyTask().run();
taskFinishedNotification();
}
});
Clean way with ExecutorService
List<Future<Void>> results = null;
try {
List<Callable<Void>> tasks = new ArrayList<>(); // populate with your Callable tasks
ExecutorService executorService = Executors.newFixedThreadPool(4);
results = executorService.invokeAll(tasks);
} catch (InterruptedException ex) {
...
} catch (Exception ex) {
...
}
I've just written a sample program that solves your problem. There was no concise implementation given, so I'll add one. While you can use executor.shutdown() and executor.awaitTermination(), it is not the best practice as the time taken by different threads would be unpredictable.
ExecutorService es = Executors.newCachedThreadPool();
List<Callable<Integer>> tasks = new ArrayList<>();
for (int j = 1; j <= 10; j++) {
tasks.add(new Callable<Integer>() {
@Override
public Integer call() throws Exception {
int sum = 0;
System.out.println("Starting Thread "
+ Thread.currentThread().getId());
for (int i = 0; i < 1000000; i++) {
sum += i;
}
System.out.println("Stopping Thread "
+ Thread.currentThread().getId());
return sum;
}
});
}
try {
List<Future<Integer>> futures = es.invokeAll(tasks);
int flag = 0;
for (Future<Integer> f : futures) {
Integer res = f.get();
System.out.println("Sum: " + res);
if (!f.isDone())
flag = 1;
}
if (flag == 0)
System.out.println("SUCCESS");
else
System.out.println("FAILED");
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
Just to provide more alternatives here, different from using latches/barriers.
You can also get partial results as they become available, before all of them finish, using a CompletionService.
From Java Concurrency in Practice:
"If you have a batch of computations to submit to an Executor and you want to retrieve their results as they become
available, you could retain the Future associated with each task and repeatedly poll for completion by calling get with a
timeout of zero. This is possible, but tedious. Fortunately there is a better way: a completion service."
Here is the implementation:
public class TaskSubmiter {
private final ExecutorService executor;
TaskSubmiter(ExecutorService executor) { this.executor = executor; }
void doSomethingLarge(AnySourceClass source) {
final List<InterestedResult> info = doPartialAsyncProcess(source);
CompletionService<PartialResult> completionService = new ExecutorCompletionService<PartialResult>(executor);
for (final InterestedResult interestedResultItem : info)
completionService.submit(new Callable<PartialResult>() {
public PartialResult call() {
return interestedResultItem.doAnOperationToGetPartialResult();
}
});
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<PartialResult> f = completionService.take();
PartialResult PartialResult = f.get();
processThisSegment(PartialResult);
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (ExecutionException e) {
throw launderThrowable(e.getCause()); // launderThrowable is the JCiP rethrow helper
}
}
}
This is my solution, based on AdamSkywalker's tip, and it works:
package frss.main;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class TestHilos {
void procesar() {
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream().map(task -> CompletableFuture.runAsync(task, es)).toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
System.out.println("FIN DEL PROCESO DE HILOS");
}
private List<Runnable> getTasks() {
List<Runnable> tasks = new ArrayList<Runnable>();
Hilo01 task1 = new Hilo01();
tasks.add(task1);
Hilo02 task2 = new Hilo02();
tasks.add(task2);
return tasks;
}
private class Hilo01 extends Thread {
@Override
public void run() {
System.out.println("HILO 1");
}
}
private class Hilo02 extends Thread {
@Override
public void run() {
try {
sleep(2000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("HILO 2");
}
}
public static void main(String[] args) {
TestHilos test = new TestHilos();
test.procesar();
}
}
You could use this code:
public class MyTask implements Runnable {
private CountDownLatch countDownLatch;
public MyTask(CountDownLatch countDownLatch) {
this.countDownLatch = countDownLatch;
}
@Override
public void run() {
try {
//Do something
//
this.countDownLatch.countDown();//important
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
}
CountDownLatch countDownLatch = new CountDownLatch(NUMBER_OF_TASKS);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
for (int i = 0; i < NUMBER_OF_TASKS; i++){
taskExecutor.execute(new MyTask(countDownLatch));
}
countDownLatch.await();
System.out.println("Finish tasks");
I'm posting my answer from the linked question here, in case someone wants a simpler way to do this:
ExecutorService executor = Executors.newFixedThreadPool(10);
CompletableFuture[] futures = new CompletableFuture[10];
int i = 0;
while (...) {
futures[i++] = CompletableFuture.runAsync(runner, executor);
}
CompletableFuture.allOf(futures).join(); // This will wait until all futures are ready.
I created the following working example. The idea is to have a way to process a pool of tasks (I am using a queue as an example) with many Threads (the count is determined programmatically by numberOfTasks/threshold), and wait until all Threads have completed to continue with some other processing.
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/** Testing CountDownLatch and ExecutorService to manage scenario where
* multiple Threads work together to complete tasks from a single
* resource provider, so the processing can be faster. */
public class ThreadCountDown {
private CountDownLatch threadsCountdown = null;
private static Queue<Integer> tasks = new PriorityQueue<>();
public static void main(String[] args) {
// Create a queue with "Tasks"
int numberOfTasks = 2000;
while(numberOfTasks-- > 0) {
tasks.add(numberOfTasks);
}
// Initiate Processing of Tasks
ThreadCountDown main = new ThreadCountDown();
main.process(tasks);
}
/* Receiving the Tasks to process, and creating multiple Threads
* to process in parallel. */
private void process(Queue<Integer> tasks) {
int numberOfThreads = getNumberOfThreadsRequired(tasks.size());
threadsCountdown = new CountDownLatch(numberOfThreads);
ExecutorService threadExecutor = Executors.newFixedThreadPool(numberOfThreads);
//Initialize each Thread
while(numberOfThreads-- > 0) {
System.out.println("Initializing Thread: "+numberOfThreads);
threadExecutor.execute(new MyThread("Thread "+numberOfThreads));
}
try {
//Shutdown the Executor, so it cannot receive more Threads.
threadExecutor.shutdown();
threadsCountdown.await();
System.out.println("ALL THREADS COMPLETED!");
//continue With Some Other Process Here
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
/* Determine the number of Threads to create */
private int getNumberOfThreadsRequired(int size) {
int threshold = 100;
int threads = size / threshold;
if( size > (threads*threshold) ){
threads++;
}
return threads;
}
/* Task Provider. All Threads will get their task from here */
private synchronized static Integer getTask(){
return tasks.poll();
}
/* The Threads will get Tasks and process them, while still available.
* When no more tasks available, the thread will complete and reduce the threadsCountdown */
private class MyThread implements Runnable {
private String threadName;
protected MyThread(String threadName) {
super();
this.threadName = threadName;
}
@Override
public void run() {
Integer task;
try{
//Check in the Task pool if anything pending to process
while( (task = getTask()) != null ){
processTask(task);
}
}catch (Exception ex){
ex.printStackTrace();
}finally {
/*Reduce count when no more tasks to process. Eventually all
Threads will end-up here, reducing the count to 0, allowing
the flow to continue after threadsCountdown.await(); */
threadsCountdown.countDown();
}
}
private void processTask(Integer task){
try{
System.out.println(this.threadName+" is Working on Task: "+ task);
}catch (Exception ex){
ex.printStackTrace();
}
}
}
}
Hope it helps!
You could use your own subclass of ExecutorCompletionService to wrap taskExecutor, and your own implementation of BlockingQueue to get informed when each task completes and perform whatever callback or other action you desire when the number of completed tasks reaches your desired goal.
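A plain (non-subclassed) variant of that idea, as a sketch (moreTasks() and MyTask are placeholders, and take() means the enclosing method has to handle InterruptedException): wrap the executor in an ExecutorCompletionService and take() once per submitted task; when the loop ends, everything has completed.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
CompletionService<Void> completionService = new ExecutorCompletionService<>(taskExecutor);
int submitted = 0;
while (moreTasks()) {
    completionService.submit(new MyTask(), null);   // Runnable + dummy result
    submitted++;
}
for (int i = 0; i < submitted; i++) {
    completionService.take();                        // blocks until the next task finishes
    // a per-task callback or progress update could go here
}
taskExecutor.shutdown();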
You should use the executorService.shutdown() and executorService.awaitTermination() methods.
An example as follows :
public class ScheduledThreadPoolExample {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
executorService.scheduleAtFixedRate(() -> System.out.println("process task."),
0, 1, TimeUnit.SECONDS);
TimeUnit.SECONDS.sleep(10);
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.DAYS);
}
}
If you use multiple ExecutorServices SEQUENTIALLY and want to wait for EACH ExecutorService to finish, the best way is like below:
ExecutorService executer1 = Executors.newFixedThreadPool(THREAD_SIZE1);
for (<loop>) {
executer1.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer1.shutdown();
try{
executer1.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
ExecutorService executer2 = Executors.newFixedThreadPool(THREAD_SIZE2);
for (<loop>) {
executer2.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer2.shutdown();
executer2.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (Exception e){
...
}
Try-with-Resources syntax on AutoCloseable executor service with Project Loom
Project Loom seeks to add new features to the concurrency abilities in Java.
One of those features is making the ExecutorService AutoCloseable. This means every ExecutorService implementation will offer a close method. And it means we can use try-with-resources syntax to automatically close an ExecutorService object.
The ExecutorService#close method blocks until all submitted tasks are completed. Using close takes the place of calling shutdown & awaitTermination.
Being AutoCloseable contributes to Project Loom’s attempt to bring “structured concurrency” to Java.
try (
ExecutorService executorService = Executors.… ;
) {
// Submit your `Runnable`/`Callable` tasks to the executor service.
…
}
// At this point, flow-of-control blocks until all submitted tasks are done/canceled/failed.
// After this point, the executor service will have been automatically shut down, via the `close` method called by try-with-resources syntax.
For more information on Project Loom, search for talks and interviews given by Ron Pressler and others on the Project Loom team. Focus on the more recent, as Project Loom has evolved.
Experimental builds of Project Loom technology are available now, based on early-access Java 18.
Java 8 - We can use the Stream API to process the tasks. Please see the snippet below:
final List<Runnable> tasks = ...; //or any other functional interface
tasks.stream().parallel().forEach(Runnable::run); // Uses the common pool
//alternatively to specify parallelism
new ForkJoinPool(15).submit(
() -> tasks.stream().parallel().forEach(Runnable::run)
).get();
ExecutorService WORKER_THREAD_POOL
= Executors.newFixedThreadPool(10);
CountDownLatch latch = new CountDownLatch(2);
for (int i = 0; i < 2; i++) {
WORKER_THREAD_POOL.submit(() -> {
try {
// doSomething();
latch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
// wait for the latch to be decremented by the two remaining threads
latch.await();
If doSomething() throws some other exception, latch.countDown() will not execute, so what should I do?
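One common fix (my own suggestion, reusing the names from the snippet above; doSomething() is still a placeholder): move latch.countDown() into a finally block so it runs no matter what doSomething() throws.
WORKER_THREAD_POOL.submit(() -> {
    try {
        doSomething();                 // may throw anything
    } catch (Exception e) {
        // log the failure here
    } finally {
        latch.countDown();             // runs even if doSomething() throws
    }
});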
This might help
Log.i(LOG_TAG, "shutting down executor...");
executor.shutdown();
while (true) {
try {
Log.i(LOG_TAG, "Waiting for executor to terminate...");
if (executor.isTerminated())
break;
if (executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
break;
}
} catch (InterruptedException ignored) {}
}
You could call waitTillDone() on this Runner class:
Runner runner = Runner.runner(4); // create pool with 4 threads in thread pool
while(...) {
runner.run(new MyTask()); // here you submit your task
}
runner.waitTillDone(); // and this blocks until all tasks are finished (or failed)
runner.shutdown(); // once you done you can shutdown the runner
You can reuse this class and call waitTillDone() as many times as you want before calling shutdown(), plus your code is extremely simple. Also, you don't have to know the number of tasks upfront.
To use it just add this gradle/maven compile 'com.github.matejtymes:javafixes:1.3.1' dependency to your project.
More details can be found here:
https://github.com/MatejTymes/JavaFixes
There is a method on the executor (ThreadPoolExecutor), getActiveCount(), that gives the approximate count of threads actively executing tasks.
After spawning the threads, we can check whether the getActiveCount() value is 0. Once the value is zero, there are no active threads currently running, which means the tasks are finished:
while (true) {
if (executor.getActiveCount() == 0) {
//ur own piece of code
break;
}
}
I have a few asynchronous tasks running and I need to wait until at least one of them is finished (in the future I'll probably need to wait until M out of N tasks are finished).
Currently they are presented as Future, so I need something like
/**
* Blocks current thread until one of specified futures is done and returns it.
*/
public static <T> Future<T> waitForAny(Collection<Future<T>> futures)
throws AllFuturesFailedException
Is there anything like this? Or anything similar, not necessarily for Future? Currently I loop through the collection of futures, check if one is finished, then sleep for some time and check again. This doesn't look like the best solution: if I sleep for a long period then an unwanted delay is added, and if I sleep for a short period then it can affect performance.
I could try using new CountDownLatch(1), decreasing the countdown when a task is complete, and calling countdown.await(), but I found that possible only if I control Future creation. It is possible, but requires a system redesign, because currently the logic of task creation (sending a Callable to an ExecutorService) is separated from the decision about which Future to wait for. I could also override
<T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable)
and create a custom implementation of RunnableFuture with the ability to attach a listener to be notified when the task is finished, then attach such a listener to the needed tasks and use a CountDownLatch. But that means I have to override newTaskFor for every ExecutorService I use - and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping the given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures.
All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like
WaitHandle.WaitAny(WaitHandle[] waitHandles)
in c#. Are there any well known solutions for such kind of problem?
UPDATE:
Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countDown() to the execution process; then I can call countDownLatch.await() and everything works fine.
Thanks for the other answers. I did not know about ExecutorCompletionService and it can indeed be helpful in similar tasks, but in this particular case it could not be used, because some Futures are created without any executor - the actual task is sent to another server over the network, completes remotely, and a completion notification is received.
Simple: check out ExecutorCompletionService.
ExecutorService.invokeAny
Why not just create a results queue and wait on the queue? Or more simply, use a CompletionService since that's what it is: an ExecutorService + result queue.
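A sketch of the CompletionService version of "wait for any" (illustrative names; take() and get() mean the enclosing method has to handle InterruptedException/ExecutionException):
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletionService<String> ecs = new ExecutorCompletionService<>(executor);
for (Callable<String> task : tasks) {        // tasks is your collection of callables
    ecs.submit(task);
}
Future<String> firstDone = ecs.take();       // blocks until *any* task completes
String result = firstDone.get();
// call take() again M times if you later need "M out of N finished"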
This is actually pretty easy with wait() and notifyAll().
First, define a lock object. (You can use any class for this, but I like to be explicit):
package com.javadude.sample;
public class Lock {}
Next, define your worker thread. It must notify the lock object when it's finished with its processing. Note that the notify must be in a synchronized block locking on the lock object.
package com.javadude.sample;
public class Worker extends Thread {
private Lock lock_;
private long timeToSleep_;
private String name_;
public Worker(Lock lock, String name, long timeToSleep) {
lock_ = lock;
timeToSleep_ = timeToSleep;
name_ = name;
}
@Override
public void run() {
// do real work -- using a sleep here to simulate work
try {
sleep(timeToSleep_);
} catch (InterruptedException e) {
interrupt();
}
System.out.println(name_ + " is done... notifying");
// notify whoever is waiting, in this case, the client
synchronized (lock_) {
lock_.notify();
}
}
}
Finally, you can write your client:
package com.javadude.sample;
public class Client {
public static void main(String[] args) {
Lock lock = new Lock();
Worker worker1 = new Worker(lock, "worker1", 15000);
Worker worker2 = new Worker(lock, "worker2", 10000);
Worker worker3 = new Worker(lock, "worker3", 5000);
Worker worker4 = new Worker(lock, "worker4", 20000);
boolean started = false;
int numNotifies = 0;
while (true) {
synchronized (lock) {
try {
if (!started) {
// need to do the start here so we grab the lock, just
// in case one of the threads is fast -- if we had done the
// starts outside the synchronized block, a fast thread could
// get to its notification *before* the client is waiting for it
worker1.start();
worker2.start();
worker3.start();
worker4.start();
started = true;
}
lock.wait();
} catch (InterruptedException e) {
break;
}
numNotifies++;
if (numNotifies == 4) {
break;
}
System.out.println("Notified!");
}
}
System.out.println("Everyone has notified me... I'm done");
}
}
As far as I know, Java has no analogous structure to the WaitHandle.WaitAny method.
It seems to me that this could be achieved through a "WaitableFuture" decorator:
public class WaitableFuture<T>
extends Future<T> // pseudocode sketch: a decorator around the real Future
{
private CountDownLatch countDownLatch;
WaitableFuture(CountDownLatch countDownLatch)
{
super();
this.countDownLatch = countDownLatch;
}
void doTask()
{
super.doTask();
this.countDownLatch.countDown();
}
}
Though this would only work if it can be inserted before the execution code, since otherwise the execution code would not have the new doTask() method. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution.
Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources.
/**
* Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join().
*/
public static void joinAny(Collection<Thread> threads, int numberToWaitFor) throws InterruptedException
{
CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor);
for (Thread thread : threads)
{
(new Thread(new JoinThreadHelper(thread, countDownLatch))).start();
}
countDownLatch.await();
}
class JoinThreadHelper
implements Runnable
{
Thread thread;
CountDownLatch countDownLatch;
JoinThreadHelper(Thread thread, CountDownLatch countDownLatch)
{
this.thread = thread;
this.countDownLatch = countDownLatch;
}
@Override
public void run()
{
try {
this.thread.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
this.countDownLatch.countDown();
}
}
If you can use CompletableFutures instead then there is CompletableFuture.anyOf that does what you want, just call join on the result:
CompletableFuture.anyOf(futures).join()
You can use CompletableFutures with executors by calling the CompletableFuture.supplyAsync or runAsync methods.
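For example (sketch; work1 and work2 are placeholder methods):
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> work1(), executor);
CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> work2(), executor);
Object firstResult = CompletableFuture.anyOf(f1, f2).join();   // value of whichever finishes first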
Since you don't care which one finishes, why not just have a single WaitHandle for all threads and wait on that? Whichever one finishes first can set the handle.
See this option:
public class WaitForAnyRedux {
private static final int POOL_SIZE = 10;
public static <T> T waitForAny(Collection<T> collection) throws InterruptedException, ExecutionException {
List<Callable<T>> callables = new ArrayList<Callable<T>>();
for (final T t : collection) {
Callable<T> callable = Executors.callable(new Thread() {
@Override
public void run() {
synchronized (t) {
try {
t.wait();
} catch (InterruptedException e) {
}
}
}
}, t);
callables.add(callable);
}
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(POOL_SIZE);
ExecutorService executorService = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 0, TimeUnit.SECONDS, queue);
return executorService.invokeAny(callables);
}
static public void main(String[] args) throws InterruptedException, ExecutionException {
final List<Integer> integers = new ArrayList<Integer>();
for (int i = 0; i < POOL_SIZE; i++) {
integers.add(i);
}
(new Thread() {
public void run() {
Integer notified = null;
try {
notified = waitForAny(integers);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
System.out.println("notified=" + notified);
}
}).start();
synchronized (integers) {
integers.wait(3000);
}
Integer randomInt = integers.get((new Random()).nextInt(POOL_SIZE));
System.out.println("Waking up " + randomInt);
synchronized (randomInt) {
randomInt.notify();
}
}
}