Which threading mechanism to use for tasks that enqueue other tasks? - java

I'm using a task that creates other tasks. Those tasks in turn may or may not create subsequent tasks. I don't know beforehand how many tasks will be created in total. At some point, no more tasks will be created, and all the tasks will finish.
When the last task is done, I must do some extra stuff.
Which threading mechanism should be used? I've read about CountDownLatch, CyclicBarrier and Phaser, but none seem to fit.
I've also tried using ExecutorService, but I've run into issues, such as not being able to execute something at the end; you can see my attempt below:
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
public class Issue {
public static void main(String[] args) throws InterruptedException {
var count = new AtomicInteger(1);
var executor = Executors.newFixedThreadPool(3);
class Task implements Runnable {
final int id = count.getAndIncrement();
@Override
public void run() {
try {
MILLISECONDS.sleep((long)(Math.random() * 1000L + 1000L));
} catch (InterruptedException e) {
// Do nothing
}
if (id < 5) {
executor.submit(new Task());
executor.submit(new Task());
}
System.out.println(id);
}
}
executor.execute(new Task());
executor.shutdown();
// executor.awaitTermination(20, TimeUnit.SECONDS);
System.out.println("Hello");
}
}
This throws a RejectedExecutionException, because tasks are submitted after shutdown() has been called, but the expected output would be akin to:
1
2
3
4
5
6
7
8
9
Hello
Which threading mechanism can help me do that?

It seems pretty tricky. If there is even a single task that's either in the queue or currently executing, then since you can't say whether or not it will spawn another task, you have no way to know how long it may run for. It may be the start of a chain of tasks that takes another 2 hours.
I think all the information you'd need to achieve this is encapsulated by the executor implementations. You need to know what's running and what's in the queue.
I think you're unfortunately looking at having to write your own executor. It needn't be complicated and it doesn't have to conform to the JDK's interfaces if you don't want it to. Just something that maintains a thread pool and a queue of tasks. Add the ability to attach listeners to the executor. When the queue is empty and there are no actively executing tasks then you can notify the listeners.
Here's a quick code sketch.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

class MyExecutor
{
private final AtomicLong taskId = new AtomicLong();
private final Map<Long, Runnable> idToQueuedTask = new ConcurrentHashMap<>();
private final AtomicLong runningTasks = new AtomicLong();
private final ExecutorService delegate = Executors.newFixedThreadPool(3);
public void submit(Runnable task) {
long id = taskId.incrementAndGet();
final Runnable wrapped = () -> {
taskStarted(id);
try {
task.run();
}
finally {
taskEnded();
}
};
idToQueuedTask.put(id, wrapped);
delegate.submit(wrapped);
}
private void taskStarted(long id) {
idToQueuedTask.remove(id);
runningTasks.incrementAndGet();
}
private void taskEnded() {
final long numRunning = runningTasks.decrementAndGet();
if (numRunning == 0 && idToQueuedTask.isEmpty()) {
System.out.println("Done, time to notify listeners");
}
}
public static void main(String[] args) {
MyExecutor executor = new MyExecutor();
executor.submit(() -> {
System.out.println("Parent task");
try {
Thread.sleep(1000);
}
catch (Exception e) {}
executor.submit(() -> {
System.out.println("Child task");
});
});
}
}

If you change your ExecutorService to this:
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(3);
You could then use the count functions to wait:
while(executor.getTaskCount() > executor.getCompletedTaskCount())
{
TimeUnit.SECONDS.sleep(10L);
}
executor.shutdown();
System.out.println("Hello");

Related

Editable queue of tasks running in background thread

I know this question was answered many times, but I'm struggling to understand how it works.
So in my application the user must be able to select items which will be added to a queue (displayed in a ListView using an ObservableList<Task>) and each item needs to be processed sequentially by an ExecutorService.
Also that queue should be editable (change the order and remove items from the list).
private void handleItemClicked(MouseEvent event) {
if (event.getClickCount() == 2) {
File item = listView.getSelectionModel().getSelectedItem();
Task<Void> task = createTask(item);
facade.getTaskQueueList().add(task); // this list is bound to a ListView, where it can be edited
Future result = executor.submit(task);
// where executor is an ExecutorService of which type?
try {
result.get();
} catch (Exception e) {
// ...
}
}
}
I tried it with executor = Executors.newFixedThreadPool(1), but I don't have control over the queue.
I read about ThreadPoolExecutor and queues, but I'm struggling to understand it as I'm quite new to concurrency.
I need to run the handleItemClicked method in a background thread so that the UI does not freeze. What is the best way to do that?
Summed up: How can I implement a queue of tasks, which is editable and sequentially processed by a background thread?
Please help me figure it out
EDIT
Using the SerialTaskQueue class from vanOekel helped me; now I want to bind the list of tasks to my ListView.
ListProperty<Runnable> listProperty = new SimpleListProperty<>();
listProperty.set(taskQueue.getTaskList()); // getTaskList() returns the LinkedList from SerialTaskQueue
queueListView.itemsProperty().bind(listProperty);
Obviously this doesn't work, as it's expecting an ObservableList. Is there an elegant way to do it?
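One possible sketch, assuming the question's taskQueue and queueListView are reused: wrap the backing list with FXCollections.observableList. Note that changes made directly to the wrapped LinkedList will not fire change events, so mutations should go through the observable wrapper.
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;

// Wrap the queue's backing list (from the question's SerialTaskQueue) so the ListView can observe it.
ObservableList<Runnable> observableTasks =
        FXCollections.observableList(taskQueue.getTaskList());
queueListView.setItems(observableTasks);
// Add/remove tasks through observableTasks (not through the wrapped LinkedList directly),
// otherwise the ListView will not refresh.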
The simplest solution I can think of is to maintain the task-list outside of the executor and use a callback to feed the executor the next task if it is available. Unfortunately, it involves synchronization on the task-list and an AtomicBoolean to indicate that a task is executing.
The callback is simply a Runnable that wraps the original task to run and then "calls back" to see if there is another task to execute, and if so, executes it using the (background) executor.
The synchronization is needed to keep the task-list in order and at a known state. The task-list can be modified by two threads at the same time: via the callback running in the executor's (background) thread and via handleItemClicked method executed via the UI foreground thread. This in turn means that it is never exactly known when the task-list is empty for example. To keep the task-list in order and at a known fixed state, synchronization of the task-list is needed.
This still leaves an ambiguous moment to decide when a task is ready for execution. This is where the AtomicBoolean comes in: a value that is set is immediately visible to and readable by any other thread, and the compareAndSet method will always ensure only one thread gets an "OK".
Combining the synchronization and the use of the AtomicBoolean allows the creation of one method with a "critical section" that can be called by both foreground- and background-threads at the same time to trigger the execution of a new task if possible. The code below is designed and setup in such a way that one such method (runNextTask) can exist. It is good practice to make the "critical section" in concurrent code as simple and explicit as possible (which, in turn, generally leads to an efficient "critical section").
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class SerialTaskQueue {
public static void main(String[] args) {
ExecutorService executor = Executors.newSingleThreadExecutor();
// all operations on this list must be synchronized on the list itself.
SerialTaskQueue tq = new SerialTaskQueue(executor);
try {
// test running the tasks one by one
tq.add(new SleepSome(10L));
Thread.sleep(5L);
tq.add(new SleepSome(20L));
tq.add(new SleepSome(30L));
Thread.sleep(100L);
System.out.println("Queue size: " + tq.size()); // should be empty
tq.add(new SleepSome(10L));
Thread.sleep(100L);
} catch (Exception e) {
e.printStackTrace();
} finally {
executor.shutdownNow();
}
}
// all lookups and modifications to the list must be synchronized on the list.
private final List<Runnable> tasks = new LinkedList<Runnable>();
// atomic boolean used to ensure only 1 task is executed at any given time
private final AtomicBoolean executeNextTask = new AtomicBoolean(true);
private final Executor executor;
public SerialTaskQueue(Executor executor) {
this.executor = executor;
}
public void add(Runnable task) {
synchronized(tasks) { tasks.add(task); }
runNextTask();
}
private void runNextTask() {
// critical section that ensures one task is executed.
synchronized(tasks) {
if (!tasks.isEmpty()
&& executeNextTask.compareAndSet(true, false)) {
executor.execute(wrapTask(tasks.remove(0)));
}
}
}
private CallbackTask wrapTask(Runnable task) {
return new CallbackTask(task, new Runnable() {
@Override public void run() {
if (!executeNextTask.compareAndSet(false, true)) {
System.out.println("ERROR: programming error, the callback should always run in execute state.");
}
runNextTask();
}
});
}
public int size() {
synchronized(tasks) { return tasks.size(); }
}
public Runnable get(int index) {
synchronized(tasks) { return tasks.get(index); }
}
public Runnable remove(int index) {
synchronized(tasks) { return tasks.remove(index); }
}
// general callback-task, see https://stackoverflow.com/a/826283/3080094
static class CallbackTask implements Runnable {
private final Runnable task, callback;
public CallbackTask(Runnable task, Runnable callback) {
this.task = task;
this.callback = callback;
}
@Override public void run() {
try {
task.run();
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
callback.run();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
// task that just sleeps for a while
static class SleepSome implements Runnable {
static long startTime = System.currentTimeMillis();
private final long sleepTimeMs;
public SleepSome(long sleepTimeMs) {
this.sleepTimeMs = sleepTimeMs;
}
@Override public void run() {
try {
System.out.println(tdelta() + "Sleeping for " + sleepTimeMs + " ms.");
Thread.sleep(sleepTimeMs);
System.out.println(tdelta() + "Slept for " + sleepTimeMs + " ms.");
} catch (Exception e) {
e.printStackTrace();
}
}
private String tdelta() { return String.format("% 4d ", (System.currentTimeMillis() - startTime)); }
}
}
Update: if groups of tasks need to be executed serially, have a look at the adapted implementation here.

Simple queue and multi-threading

I have to process a lot of files. I wrote a simple Java program that does the job, but it is too slow.
I need more than 1 working thread.
I'm totally new to Java and Java multithreading.
Here is my code (simplified):
public static void main(String[] args)
{
// some queue here?
for (int i = 1; i < 8000000; i++)
{
processId(i);
}
}
public static void processId(int id)
{
try
{
// do work
System.out.println("Im working on: " + Integer.toString(id));
}
catch (Exception e)
{
// do something with errors
System.out.println("Error while working on: " + Integer.toString(id));
}
}
How can I add a simple queue with 8 threads?
You should look into Executors.
You can create a thread pool of 8 threads using:
ExecutorService executor = Executors.newFixedThreadPool(8);
Then submit your tasks inside your loop the following way:
final int finalId = i; // must be (effectively) final to be captured by the lambda
executor.submit(() -> processId(finalId));
Or prior to Java 8:
final int finalId = i; // must be final to be captured by the anonymous class
executor.submit(new Runnable() {
public void run() {
processId(finalId);
}
});
Don't forget to shut down the thread pool when it is not needed anymore, as mentioned in the documentation. Here is an example from the doc:
private void shutdownAndAwaitTermination(ExecutorService pool) {
pool.shutdown(); // Disable new tasks from being submitted
try {
// Wait a while for existing tasks to terminate
if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
pool.shutdownNow(); // Cancel currently executing tasks
// Wait a while for tasks to respond to being cancelled
if (!pool.awaitTermination(60, TimeUnit.SECONDS))
System.err.println("Pool did not terminate");
}
} catch (InterruptedException ie) {
// (Re-)Cancel if current thread also interrupted
pool.shutdownNow();
// Preserve interrupt status
Thread.currentThread().interrupt();
}
}
You should look into ExecutorService. This will make multithreading easy. An example:
Main code:
ExecutorService pool = Executors.newFixedThreadPool(8);
for (int i = 1; i < 8000000; i++) {
pool.submit(new IntProcessingTask(i));
}
pool.shutdown();
pool.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
// all tasks have now finished (unless an exception is thrown above)
IntProcessingTask code:
private static class IntProcessingTask implements Runnable {
private final int id;
public IntProcessingTask(int id) {
this.id = id;
}
@Override
public void run() {
System.out.println("Im working on: " + Integer.toString(id));
}
}
This is slightly longer than the other answer, but does pretty much the same thing, and works on Java 7 and earlier.
There are many ways to do multithreading in Java. Based on your question, since you need a queue, I think the simplest option is to use Java's ExecutorService. You can see this code:
public static void main(String[] args) {
// creating a thread pool with maximum thread will be 8
ExecutorService executorService = Executors.newFixedThreadPool(8);
for (int i = 0; i < 8000000; i++) {
final int threadId = i;
executorService.execute(new Runnable() {
public void run() {
processId(threadId);
}
});
}
}
ExecutorService has some methods (see the sketch after this list):
execute(Runnable)
submit(Runnable)
submit(Callable)
invokeAny(...)
invokeAll(...)
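For instance, a minimal sketch contrasting execute (fire-and-forget) with submit (returns a Future) and invokeAll (blocks until every task completes); the class and task values here are just placeholders:
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newFixedThreadPool(2);

        // execute: fire-and-forget, no handle to the result
        executorService.execute(() -> System.out.println("ran via execute"));

        // submit: returns a Future; get() blocks until this task completes
        Future<Integer> future = executorService.submit(() -> 21 + 21);
        System.out.println("submit result: " + future.get());

        // invokeAll: blocks until every task in the collection is done
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
        for (Future<Integer> f : executorService.invokeAll(tasks)) {
            System.out.println("invokeAll result: " + f.get());
        }

        executorService.shutdown();
    }
}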
I recommend you view this link: ExecutorService tutorial, for a clear explanation.
Hope this helps :)

Executor/Queue process last known task only

I'm looking to write some concurrent code which will process an event. This processing can take a long time.
While that event is processing, it should record incoming events and then process the last incoming event when it is free to run again. (The other events can be thrown away.) This is a little bit like a FILO queue, but I only need to store one element in the queue.
Ideally I would like to plug in my new Executor into my event processing architecture shown below.
public class AsyncNode<I, O> extends AbstractNode<I, O> {
private static final Logger log = LoggerFactory.getLogger(AsyncNode.class);
private Executor executor;
public AsyncNode(EventHandler<I, O> handler, Executor executor) {
super(handler);
this.executor = executor;
}
@Override
public void emit(O output) {
if (output != null) {
for (EventListener<O> node : children) {
node.handle(output);
}
}
}
@Override
public void handle(final I input) {
executor.execute(new Runnable() {
@Override
public void run() {
try{
emit(handler.process(input));
}catch (Exception e){
log.error("Exception occured whilst processing input." ,e);
throw e;
}
}
});
}
}
I wouldn't do either. I would have an AtomicReference to the event you want to process and add a task to process it in a destructive way.
final AtomicReference<Event> eventRef = new AtomicReference<>();

public void processEvent(Event event) {
    eventRef.set(event);
    executor.submit(new Runnable() {
        public void run() {
            Event e = eventRef.getAndSet(null);
            if (e == null) return;
            // process event
        }
    });
}
This will only ever process the next event when the executor is free, without customising the executor or queue (which can be used for other things)
This also scales to having keyed events i.e. you want to process the last event for a key.
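A rough sketch of that keyed variant, with hypothetical key/event types and handler; only the latest event stored for a key gets processed, and stale ones are silently dropped:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Hypothetical sketch: keep only the latest event per key; stale events are silently dropped.
class KeyedLatestProcessor<K, E> {
    private final ConcurrentMap<K, AtomicReference<E>> latest = new ConcurrentHashMap<>();
    private final ExecutorService executor;
    private final Consumer<E> handler;

    KeyedLatestProcessor(ExecutorService executor, Consumer<E> handler) {
        this.executor = executor;
        this.handler = handler;
    }

    void onEvent(K key, E event) {
        AtomicReference<E> ref = latest.computeIfAbsent(key, k -> new AtomicReference<>());
        ref.set(event); // overwrite anything not yet processed for this key
        executor.submit(() -> {
            E e = ref.getAndSet(null); // claim the latest event for this key, if any remains
            if (e != null) {
                handler.accept(e);
            }
        });
    }
}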
I think the key to this is the "discard policy" you need to apply to your Executor. If you only want to handle the latest task then you need a queue size of one and a "discarding policy" of throw away the oldest. Here is an example of an Executor that will do this
Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
Then when your tasks come in, just submit them to this executor; if there is already a queued job, it will be replaced with the new one:
latestTaskExecutor.execute(() -> doUpdate());
Here is an example app showing this working:
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class LatestUpdate {
private static final Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
private static final AtomicInteger counter = new AtomicInteger(0);
private static final Random random = new Random();
public static void main(String[] args) {
LatestUpdate latestUpdate = new LatestUpdate();
latestUpdate.run();
}
private void doUpdate(int number) {
System.out.println("Latest number updated is: " + number);
try { // Wait a random amount of time up to 5 seconds. Processing the update takes time...
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void run() {
// Updates a counter every second and schedules an update event
Thread counterUpdater = new Thread(() -> {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000L); // Wait one second
} catch (InterruptedException e) {
e.printStackTrace();
}
counter.incrementAndGet();
// Schedule this update will replace any existing update waiting
latestTaskExecutor.execute(() -> doUpdate(counter.get()));
System.out.println("New number is: " + counter.get());
}
});
counterUpdater.start(); // Run the thread
}
}
This also covers the case for GUIs where once updates stop arriving you want the GUI to become eventually consistent with the last event received.
public class LatestTaskExecutor implements Executor {
private final AtomicReference<Runnable> lastTask =new AtomicReference<>();
private final Executor executor;
public LatestTaskExecutor(Executor executor) {
super();
this.executor = executor;
}
@Override
public void execute(Runnable command) {
lastTask.set(command);
executor.execute(new Runnable() {
@Override
public void run() {
Runnable task=lastTask.getAndSet(null);
if(task!=null){
task.run();
}
}
});
}
}
@RunWith(MockitoJUnitRunner.class)
public class LatestTaskExecutorTest {
@Mock private Executor executor;
private LatestTaskExecutor latestExecutor;
@Before
public void setup(){
latestExecutor=new LatestTaskExecutor(executor);
}
@Test
public void testRunSingleTask() {
Runnable run=mock(Runnable.class);
latestExecutor.execute(run);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor).execute(captor.capture());
captor.getValue().run();
verify(run).run();
}
@Test
public void discardsIntermediateUpdates(){
Runnable run=mock(Runnable.class);
Runnable run2=mock(Runnable.class);
latestExecutor.execute(run);
latestExecutor.execute(run2);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor,times(2)).execute(captor.capture());
for (Runnable runnable:captor.getAllValues()){
runnable.run();
}
verify(run2).run();
verifyNoMoreInteractions(run);
}
}
This answer is a modified version of the one from DD which minimizes submission of superfluous tasks.
An atomic reference is used to keep track of the latest event. A custom task is submitted to the queue for potentially processing an event; only the task that gets to read the latest event actually goes ahead and does useful work before clearing out the atomic reference to null. When other tasks get a chance to run and find no event available to process, they just do nothing and pass away silently. Submitting superfluous tasks is avoided by tracking the number of available tasks in the queue. If there is at least one task pending in the queue, we can avoid submitting a new task, as the event will be handled when an already queued task is dequeued.
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
public class EventExecutorService implements Executor {
private final Executor executor;
// the field which keeps track of the latest available event to process
private final AtomicReference<Runnable> latestEventReference = new AtomicReference<>();
private final AtomicInteger activeTaskCount = new AtomicInteger(0);
public EventExecutorService(final Executor executor) {
this.executor = executor;
}
@Override
public void execute(final Runnable eventTask) {
// update the latest event
latestEventReference.set(eventTask);
// read count _after_ updating event
final int activeTasks = activeTaskCount.get();
if (activeTasks == 0) {
// there is definitely no other task to process this event, create a new task
final Runnable customTask = new Runnable() {
@Override
public void run() {
// decrement the count for available tasks _before_ reading event
activeTaskCount.decrementAndGet();
// find the latest available event to process
final Runnable currentTask = latestEventReference.getAndSet(null);
if (currentTask != null) {
// if such an event exists, process it
currentTask.run();
} else {
// somebody stole away the latest event. Do nothing.
}
}
};
// increment tasks count _before_ submitting task
activeTaskCount.incrementAndGet();
// submit the new task to the queue for processing
executor.execute(customTask);
}
}
}
Though I like James Mudd's solution, it still enqueues a second task while the previous one is running, which might be undesirable. If you want to always ignore/discard an arriving task when the previous one has not completed, you can make a wrapper like this:
public class DiscardingSubmitter {
private final ExecutorService es = Executors.newSingleThreadExecutor();
private Future<?> future = CompletableFuture.completedFuture(null); //to avoid null check
public void submit(Runnable r){
if (future.isDone()) {
future = es.submit(r);
}else {
//Task skipped, log if you want
}
}
}

How to wait for all threads to finish, using ExecutorService?

I need to execute some amount of tasks 4 at a time, something like this:
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
//...wait for completion somehow
How can I get notified once all of them are complete? For now I can't think of anything better than setting some global task counter and decreasing it at the end of every task, then monitoring this counter in an infinite loop until it becomes 0; or getting a list of Futures and monitoring isDone for all of them in an infinite loop. What are better solutions that don't involve infinite loops?
Thanks.
Basically on an ExecutorService you call shutdown() and then awaitTermination():
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
taskExecutor.shutdown();
try {
taskExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
...
}
Use a CountDownLatch:
CountDownLatch latch = new CountDownLatch(totalNumberOfTasks);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while(...) {
taskExecutor.execute(new MyTask());
}
try {
latch.await();
} catch (InterruptedException E) {
// handle
}
and within your task (enclose in try / finally)
latch.countDown();
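For instance, a minimal sketch of such a task, assuming the latch is passed in through the constructor; the finally block guarantees the latch is counted down even if the work throws:
import java.util.concurrent.CountDownLatch;

class MyTask implements Runnable {
    private final CountDownLatch latch;

    MyTask(CountDownLatch latch) {
        this.latch = latch;
    }

    @Override
    public void run() {
        try {
            // ... do the actual work ...
        } finally {
            latch.countDown(); // always counts down, even if the work throws
        }
    }
}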
ExecutorService.invokeAll() does it for you.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
List<Callable<Void>> tasks; // your tasks
// invokeAll() returns when all tasks are complete
List<Future<Void>> futures = taskExecutor.invokeAll(tasks);
You can use Lists of Futures, as well:
List<Future> futures = new ArrayList<Future>();
// now add to it:
futures.add(executorInstance.submit(new Callable<Void>() {
public Void call() throws IOException {
// do something
return null;
}
}));
then when you want to join on all of them, it's essentially the equivalent of joining on each (with the added benefit that it re-raises exceptions from child threads to the main thread):
for(Future f: futures) { f.get(); }
Basically the trick is to call .get() on each Future one at a time, instead of infinitely looping calling isDone() on (all or each). So you're guaranteed to "move on" through and past this block as soon as the last thread finishes. The caveat is that since the .get() call re-raises exceptions, if one of the threads dies, you would possibly raise from this before the other threads have finished to completion [to avoid this, you could add a catch ExecutionException around the get call]. The other caveat is that it keeps a reference to all threads, so if they have thread-local variables they won't get collected till after you get past this block (though you might be able to get around this, if it became a problem, by removing Futures off the ArrayList). If you wanted to know which Future "finishes first" you could use something like https://stackoverflow.com/a/31885029/32453
In Java 8 you can do it with CompletableFuture:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Just my two cents.
To overcome the requirement of CountDownLatch to know the number of tasks beforehand, you could do it the old-fashioned way by using a simple Semaphore.
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
int numberOfTasks=0;
Semaphore s=new Semaphore(0);
while(...) {
taskExecutor.execute(new MyTask());
numberOfTasks++;
}
try {
s.acquire(numberOfTasks);
...
In your task just call s.release() as you would latch.countDown();
A bit late to the game but for the sake of completeness...
Instead of 'waiting' for all tasks to finish, you can think in terms of the Hollywood principle, "don't call me, I'll call you" - when I'm finished.
I think the resulting code is more elegant...
Guava offers some interesting tools to accomplish this.
An example:
Wrap an ExecutorService into a ListeningExecutorService:
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));
Submit a collection of callables for execution:
for (Callable<Integer> callable : callables) {
ListenableFuture<Integer> lf = service.submit(callable);
// listenableFutures is a collection
listenableFutures.add(lf);
}
Now the essential part:
ListenableFuture<List<Integer>> lf = Futures.successfulAsList(listenableFutures);
Attach a callback to the ListenableFuture, that you can use to be notified when all futures complete:
Futures.addCallback(lf, new FutureCallback<List<Integer>> () {
@Override
public void onSuccess(List<Integer> result) {
// do something with all the results
}
@Override
public void onFailure(Throwable t) {
// log failure
}
});
This also offers the advantage that you can collect all the results in one place once the processing is finished...
More information here
The CyclicBarrier class in Java 5 and later is designed for this sort of thing.
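A rough sketch of that approach; the caveat is that every task must be able to reach the barrier at the same time, so the pool needs at least as many threads as there are tasks (otherwise queued tasks never run and the barrier never trips):
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CyclicBarrierExample {
    public static void main(String[] args) throws Exception {
        int numberOfTasks = 4;
        // pool size must be >= numberOfTasks, or tasks stuck in the queue can never reach the barrier
        ExecutorService taskExecutor = Executors.newFixedThreadPool(numberOfTasks);
        // one extra party for the main thread
        CyclicBarrier barrier = new CyclicBarrier(numberOfTasks + 1);

        for (int i = 0; i < numberOfTasks; i++) {
            taskExecutor.execute(() -> {
                try {
                    // ... do the actual work ...
                    barrier.await(); // signal that this task is done
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        barrier.await(); // blocks until all tasks (plus this thread) have arrived
        System.out.println("All tasks finished");
        taskExecutor.shutdown();
    }
}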
Here are two options; I'm just a bit confused about which one is best to go with.
Option 1:
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream()
.map(task -> CompletableFuture.runAsync(task, es))
.toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
Option 2:
ExecutorService es = Executors.newFixedThreadPool(4);
List< Future<?>> futures = new ArrayList<>();
for(Runnable task : taskList) {
futures.add(es.submit(task));
}
for(Future<?> future : futures) {
try {
future.get();
}catch(Exception e){
// do logging and nothing else
}
}
es.shutdown();
Here, is putting future.get() in a try/catch a good idea?
Follow one of the below approaches.
Iterate through all the Future tasks returned from submit on ExecutorService, and check their status with the blocking call get() on the Future object, as suggested by Kiran
Use invokeAll() on ExecutorService
CountDownLatch
ForkJoinPool or Executors.newWorkStealingPool() (see the sketch after this list)
Use shutdown, awaitTermination, shutdownNow APIs of ThreadPoolExecutor in proper sequence
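A small sketch of the work-stealing-pool option mentioned above, assuming the work can be expressed as Callables; invokeAll blocks until every task has completed:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkStealingExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newWorkStealingPool();

        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int id = i;
            tasks.add(() -> {
                System.out.println("task " + id);
                return null;
            });
        }

        // invokeAll does not return until every task has completed
        pool.invokeAll(tasks);
        System.out.println("all tasks finished");
        pool.shutdown();
    }
}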
Related SE questions:
How is CountDownLatch used in Java Multithreading?
How to properly shutdown java ExecutorService
You could wrap your tasks in another Runnable that will send notifications:
taskExecutor.execute(new Runnable() {
public void run() {
taskStartedNotification();
new MyTask().run();
taskFinishedNotification();
}
});
Clean way with ExecutorService
List<Future<Void>> results = null;
try {
List<Callable<Void>> tasks = new ArrayList<>();
ExecutorService executorService = Executors.newFixedThreadPool(4);
results = executorService.invokeAll(tasks);
} catch (InterruptedException ex) {
...
} catch (Exception ex) {
...
}
I've just written a sample program that solves your problem. There was no concise implementation given, so I'll add one. While you can use executor.shutdown() and executor.awaitTermination(), it is not the best practice as the time taken by different threads would be unpredictable.
ExecutorService es = Executors.newCachedThreadPool();
List<Callable<Integer>> tasks = new ArrayList<>();
for (int j = 1; j <= 10; j++) {
tasks.add(new Callable<Integer>() {
@Override
public Integer call() throws Exception {
int sum = 0;
System.out.println("Starting Thread "
+ Thread.currentThread().getId());
for (int i = 0; i < 1000000; i++) {
sum += i;
}
System.out.println("Stopping Thread "
+ Thread.currentThread().getId());
return sum;
}
});
}
try {
List<Future<Integer>> futures = es.invokeAll(tasks);
int flag = 0;
for (Future<Integer> f : futures) {
Integer res = f.get();
System.out.println("Sum: " + res);
if (!f.isDone())
flag = 1;
}
if (flag == 0)
System.out.println("SUCCESS");
else
System.out.println("FAILED");
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
Just to provide more alternatives here, different from using latches/barriers.
You can also get the partial results as they become available, until all of them finish, using CompletionService.
From Java Concurrency in practice:
"If you have a batch of computations to submit to an Executor and you want to retrieve their results as they become
available, you could retain the Future associated with each task and repeatedly poll for completion by calling get with a
timeout of zero. This is possible, but tedious. Fortunately there is a better way: a completion service."
Here is the implementation:
public class TaskSubmiter {
private final ExecutorService executor;
TaskSubmiter(ExecutorService executor) { this.executor = executor; }
void doSomethingLarge(AnySourceClass source) {
final List<InterestedResult> info = doPartialAsyncProcess(source);
CompletionService<PartialResult> completionService = new ExecutorCompletionService<PartialResult>(executor);
for (final InterestedResult interestedResultItem : info)
completionService.submit(new Callable<PartialResult>() {
public PartialResult call() {
return interestedResultItem.doAnOperationToGetPartialResult();
}
});
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<PartialResult> f = completionService.take();
PartialResult partialResult = f.get();
processThisSegment(partialResult);
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
This is my solution, based on AdamSkywalker's tip, and it works:
package frss.main;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class TestHilos {
void procesar() {
ExecutorService es = Executors.newFixedThreadPool(4);
List<Runnable> tasks = getTasks();
CompletableFuture<?>[] futures = tasks.stream().map(task -> CompletableFuture.runAsync(task, es)).toArray(CompletableFuture[]::new);
CompletableFuture.allOf(futures).join();
es.shutdown();
System.out.println("FIN DEL PROCESO DE HILOS");
}
private List<Runnable> getTasks() {
List<Runnable> tasks = new ArrayList<Runnable>();
Hilo01 task1 = new Hilo01();
tasks.add(task1);
Hilo02 task2 = new Hilo02();
tasks.add(task2);
return tasks;
}
private class Hilo01 extends Thread {
@Override
public void run() {
System.out.println("HILO 1");
}
}
private class Hilo02 extends Thread {
@Override
public void run() {
try {
sleep(2000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("HILO 2");
}
}
public static void main(String[] args) {
TestHilos test = new TestHilos();
test.procesar();
}
}
You could use this code:
public class MyTask implements Runnable {
private CountDownLatch countDownLatch;
public MyTask(CountDownLatch countDownLatch) {
this.countDownLatch = countDownLatch;
}
@Override
public void run() {
try {
// Do something that may throw InterruptedException
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
} finally {
this.countDownLatch.countDown(); // important: always count down, even if the work fails
}
}
}
CountDownLatch countDownLatch = new CountDownLatch(NUMBER_OF_TASKS);
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
for (int i = 0; i < NUMBER_OF_TASKS; i++){
taskExecutor.execute(new MyTask(countDownLatch));
}
countDownLatch.await();
System.out.println("Finish tasks");
So I'm posting my answer from the linked question here, in case someone wants a simpler way to do this:
ExecutorService executor = Executors.newFixedThreadPool(10);
CompletableFuture[] futures = new CompletableFuture[10];
int i = 0;
while (...) {
futures[i++] = CompletableFuture.runAsync(runner, executor);
}
CompletableFuture.allOf(futures).join(); // This will wait until all futures are ready.
I created the following working example. The idea is to have a way to process a pool of tasks (I am using a queue as an example) with many Threads (the number determined programmatically by numberOfTasks/threshold), and wait until all Threads are completed to continue with some other processing.
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/** Testing CountDownLatch and ExecutorService to manage scenario where
* multiple Threads work together to complete tasks from a single
* resource provider, so the processing can be faster. */
public class ThreadCountDown {
private CountDownLatch threadsCountdown = null;
private static Queue<Integer> tasks = new PriorityQueue<>();
public static void main(String[] args) {
// Create a queue with "Tasks"
int numberOfTasks = 2000;
while(numberOfTasks-- > 0) {
tasks.add(numberOfTasks);
}
// Initiate Processing of Tasks
ThreadCountDown main = new ThreadCountDown();
main.process(tasks);
}
/* Receiving the Tasks to process, and creating multiple Threads
* to process in parallel. */
private void process(Queue<Integer> tasks) {
int numberOfThreads = getNumberOfThreadsRequired(tasks.size());
threadsCountdown = new CountDownLatch(numberOfThreads);
ExecutorService threadExecutor = Executors.newFixedThreadPool(numberOfThreads);
//Initialize each Thread
while(numberOfThreads-- > 0) {
System.out.println("Initializing Thread: "+numberOfThreads);
threadExecutor.execute(new MyThread("Thread "+numberOfThreads));
}
try {
//Shutdown the Executor, so it cannot receive more Threads.
threadExecutor.shutdown();
threadsCountdown.await();
System.out.println("ALL THREADS COMPLETED!");
//continue With Some Other Process Here
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
/* Determine the number of Threads to create */
private int getNumberOfThreadsRequired(int size) {
int threshold = 100;
int threads = size / threshold;
if( size > (threads*threshold) ){
threads++;
}
return threads;
}
/* Task Provider. All Threads will get their task from here */
private synchronized static Integer getTask(){
return tasks.poll();
}
/* The Threads will get Tasks and process them, while still available.
* When no more tasks available, the thread will complete and reduce the threadsCountdown */
private class MyThread implements Runnable {
private String threadName;
protected MyThread(String threadName) {
super();
this.threadName = threadName;
}
@Override
public void run() {
Integer task;
try{
//Check in the Task pool if anything pending to process
while( (task = getTask()) != null ){
processTask(task);
}
}catch (Exception ex){
ex.printStackTrace();
}finally {
/*Reduce count when no more tasks to process. Eventually all
Threads will end-up here, reducing the count to 0, allowing
the flow to continue after threadsCountdown.await(); */
threadsCountdown.countDown();
}
}
private void processTask(Integer task){
try{
System.out.println(this.threadName+" is Working on Task: "+ task);
}catch (Exception ex){
ex.printStackTrace();
}
}
}
}
Hope it helps!
You could use your own subclass of ExecutorCompletionService to wrap taskExecutor, and your own implementation of BlockingQueue to get informed when each task completes and perform whatever callback or other action you desire when the number of completed tasks reaches your desired goal.
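A simplified sketch of that idea, using an ExecutorCompletionService directly and counting completions rather than subclassing; the callback here is just a hypothetical Runnable:
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionCallbackExample {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
        CompletionService<Integer> completionService = new ExecutorCompletionService<>(taskExecutor);

        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3, () -> 4);
        for (Callable<Integer> task : tasks) {
            completionService.submit(task);
        }

        Runnable allDoneCallback = () -> System.out.println("all tasks completed");

        // take() blocks until the next task completes; count completions until the goal is reached
        for (int completed = 0; completed < tasks.size(); completed++) {
            Future<Integer> done = completionService.take();
            System.out.println("task finished with result " + done.get());
        }
        allDoneCallback.run();

        taskExecutor.shutdown();
    }
}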
You should use the executorService.shutdown() and executorService.awaitTermination() methods.
An example follows:
public class ScheduledThreadPoolExample {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
executorService.scheduleAtFixedRate(() -> System.out.println("process task."),
0, 1, TimeUnit.SECONDS);
TimeUnit.SECONDS.sleep(10);
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.DAYS);
}
}
If you use multiple ExecutorServices SEQUENTIALLY and want to wait for EACH ExecutorService to finish, the best way is like below:
ExecutorService executer1 = Executors.newFixedThreadPool(THREAD_SIZE1);
for (<loop>) {
executer1.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer1.shutdown();
try{
executer1.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
ExecutorService executer2 = Executors.newFixedThreadPool(THREAD_SIZE2);
for (<loop>) {
executer2.execute(new Runnable() {
@Override
public void run() {
...
}
});
}
executer2.shutdown();
} catch (Exception e){
...
}
Try-with-Resources syntax on AutoCloseable executor service with Project Loom
Project Loom seeks to add new features to the concurrency abilities in Java.
One of those features is making the ExecutorService AutoCloseable. This means every ExecutorService implementation will offer a close method. And it means we can use try-with-resources syntax to automatically close an ExecutorService object.
The ExecutorService#close method blocks until all submitted tasks are completed. Using close takes the place of calling shutdown & awaitTermination.
Being AutoCloseable contributes to Project Loom’s attempt to bring “structured concurrency” to Java.
try (
ExecutorService executorService = Executors.… ;
) {
// Submit your `Runnable`/`Callable` tasks to the executor service.
…
}
// At this point, flow-of-control blocks until all submitted tasks are done/canceled/failed.
// After this point, the executor service will have been automatically shut down, via the `close` method called by the try-with-resources syntax.
For more information on Project Loom, search for talks and interviews given by Ron Pressler and others on the Project Loom team. Focus on the more recent, as Project Loom has evolved.
Experimental builds of Project Loom technology are available now, based on early-access Java 18.
Java 8 - we can use the Stream API to process the tasks. Please see the snippet below:
final List<Runnable> tasks = ...; //or any other functional interface
tasks.stream().parallel().forEach(Runnable::run); // Uses default pool
//alternatively to specify parallelism
new ForkJoinPool(15).submit(
() -> tasks.stream().parallel().forEach(Runnable::run)
).get();
ExecutorService WORKER_THREAD_POOL
= Executors.newFixedThreadPool(10);
CountDownLatch latch = new CountDownLatch(2);
for (int i = 0; i < 2; i++) {
WORKER_THREAD_POOL.submit(() -> {
try {
// doSomething();
latch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
// wait for the latch to be decremented by the two remaining threads
latch.await();
If doSomething() throws some other exception, it seems latch.countDown() will not execute, so what should I do?
This might help
Log.i(LOG_TAG, "shutting down executor...");
executor.shutdown();
while (true) {
try {
Log.i(LOG_TAG, "Waiting for executor to terminate...");
if (executor.isTerminated())
break;
if (executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
break;
}
} catch (InterruptedException ignored) {}
}
You could call waitTillDone() on this Runner class:
Runner runner = Runner.runner(4); // create pool with 4 threads in thread pool
while(...) {
runner.run(new MyTask()); // here you submit your task
}
runner.waitTillDone(); // and this blocks until all tasks are finished (or failed)
runner.shutdown(); // once you done you can shutdown the runner
You can reuse this class and call waitTillDone() as many times as you want before calling shutdown(), plus your code is extremely simple. Also you don't have to know the number of tasks upfront.
To use it, just add this Gradle/Maven dependency to your project: compile 'com.github.matejtymes:javafixes:1.3.1'
More details can be found here:
https://github.com/MatejTymes/JavaFixes
There is a method on the executor, getActiveCount(), that gives the count of active threads.
After spawning the threads, we can check whether the getActiveCount() value is 0. Once the value is zero, there are no active threads currently running, which means the task is finished:
while (true) {
if (executor.getActiveCount() == 0) {
//ur own piece of code
break;
}
}

Wait until any of Future<T> is done

I have a few asynchronous tasks running, and I need to wait until at least one of them is finished (in the future I'll probably need to wait until M out of N tasks are finished).
Currently they are presented as Future, so I need something like
/**
* Blocks current thread until one of specified futures is done and returns it.
*/
public static <T> Future<T> waitForAny(Collection<Future<T>> futures)
throws AllFuturesFailedException
Is there anything like this? Or anything similar, not necessarily for Future? Currently I loop through the collection of futures, check if one is finished, then sleep for some time and check again. This doesn't look like the best solution, because if I sleep for a long period then an unwanted delay is added, and if I sleep for a short period then it can affect performance.
I could try using
new CountDownLatch(1)
and decrease countdown when task is complete and do
countdown.await()
, but I found it possible only if I control Future creation. It is possible, but requires a system redesign, because currently the logic of task creation (sending a Callable to an ExecutorService) is separated from the decision about which Future to wait for. I could also override
<T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable)
and create a custom implementation of RunnableFuture with the ability to attach a listener to be notified when the task is finished, then attach such a listener to the needed tasks and use a CountDownLatch, but that means I have to override newTaskFor for every ExecutorService I use - and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping the given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures.
All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like
WaitHandle.WaitAny(WaitHandle[] waitHandles)
in c#. Are there any well known solutions for such kind of problem?
UPDATE:
Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countDown() to the execution process; then I can call countDownLatch.await() and everything works fine.
Thanks for the other answers; I did not know about ExecutorCompletionService, and it can indeed be helpful in similar tasks, but in this particular case it could not be used, because some Futures are created without any executor - the actual task is sent to another server over the network, completes remotely, and a completion notification is received.
simple, check out ExecutorCompletionService.
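For example, a small sketch of waiting for whichever task finishes first via an ExecutorCompletionService (this only helps for tasks submitted through the completion service, which is why it did not fit the asker's final situation):
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WaitForAnyExample {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        CompletionService<String> completionService = new ExecutorCompletionService<>(executor);

        List<Callable<String>> tasks = List.of(
                () -> { Thread.sleep(300); return "slow"; },
                () -> { Thread.sleep(100); return "fast"; },
                () -> { Thread.sleep(200); return "medium"; });
        tasks.forEach(completionService::submit);

        // take() blocks until whichever submitted task completes first
        Future<String> first = completionService.take();
        System.out.println("First finished: " + first.get());

        executor.shutdownNow();
    }
}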
ExecutorService.invokeAny
Why not just create a results queue and wait on the queue? Or more simply, use a CompletionService since that's what it is: an ExecutorService + result queue.
This is actually pretty easy with wait() and notifyAll().
First, define a lock object. (You can use any class for this, but I like to be explicit):
package com.javadude.sample;
public class Lock {}
Next, define your worker thread. He must notify that lock object when he's finished with his processing. Note that the notify must be in a synchronized block locking on the lock object.
package com.javadude.sample;
public class Worker extends Thread {
private Lock lock_;
private long timeToSleep_;
private String name_;
public Worker(Lock lock, String name, long timeToSleep) {
lock_ = lock;
timeToSleep_ = timeToSleep;
name_ = name;
}
@Override
public void run() {
// do real work -- using a sleep here to simulate work
try {
sleep(timeToSleep_);
} catch (InterruptedException e) {
interrupt();
}
System.out.println(name_ + " is done... notifying");
// notify whoever is waiting, in this case, the client
synchronized (lock_) {
lock_.notify();
}
}
}
Finally, you can write your client:
package com.javadude.sample;
public class Client {
public static void main(String[] args) {
Lock lock = new Lock();
Worker worker1 = new Worker(lock, "worker1", 15000);
Worker worker2 = new Worker(lock, "worker2", 10000);
Worker worker3 = new Worker(lock, "worker3", 5000);
Worker worker4 = new Worker(lock, "worker4", 20000);
boolean started = false;
int numNotifies = 0;
while (true) {
synchronized (lock) {
try {
if (!started) {
// need to do the start here so we grab the lock, just
// in case one of the threads is fast -- if we had done the
// starts outside the synchronized block, a fast thread could
// get to its notification *before* the client is waiting for it
worker1.start();
worker2.start();
worker3.start();
worker4.start();
started = true;
}
lock.wait();
} catch (InterruptedException e) {
break;
}
numNotifies++;
if (numNotifies == 4) {
break;
}
System.out.println("Notified!");
}
}
System.out.println("Everyone has notified me... I'm done");
}
}
As far as I know, Java has no analogous structure to the WaitHandle.WaitAny method.
It seems to me that this could be achieved through a "WaitableFuture" decorator:
public class WaitableFuture<T>
extends Future<T>
{
private CountDownLatch countDownLatch;
WaitableFuture(CountDownLatch countDownLatch)
{
super();
this.countDownLatch = countDownLatch;
}
void doTask()
{
super.doTask();
this.countDownLatch.countDown();
}
}
Though this would only work if it can be inserted before the execution code, since otherwise the execution code would not have the new doTask() method. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution.
Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources.
/**
* Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join().
*/
public static void joinAny(Collection<Thread> threads, int numberToWaitFor) throws InterruptedException
{
CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor);
for (Thread thread : threads)
{
(new Thread(new JoinThreadHelper(thread, countDownLatch))).start();
}
countDownLatch.await();
}
class JoinThreadHelper
implements Runnable
{
Thread thread;
CountDownLatch countDownLatch;
JoinThreadHelper(Thread thread, CountDownLatch countDownLatch)
{
this.thread = thread;
this.countDownLatch = countDownLatch;
}
public void run()
{
try {
this.thread.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
this.countDownLatch.countDown();
}
}
If you can use CompletableFutures instead then there is CompletableFuture.anyOf that does what you want, just call join on the result:
CompletableFuture.anyOf(futures).join()
You can use CompletableFutures with executors by calling the CompletableFuture.supplyAsync or runAsync methods.
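For example, a small sketch combining supplyAsync with an executor and anyOf; join() returns the result of whichever future completes first (the helper method here is just for illustration):
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AnyOfExample {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        CompletableFuture<String> slow =
                CompletableFuture.supplyAsync(() -> sleepAndReturn(300, "slow"), executor);
        CompletableFuture<String> fast =
                CompletableFuture.supplyAsync(() -> sleepAndReturn(100, "fast"), executor);

        // join() blocks until the first of the given futures completes
        Object first = CompletableFuture.anyOf(slow, fast).join();
        System.out.println("First finished: " + first);

        executor.shutdownNow();
    }

    private static String sleepAndReturn(long millis, String value) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return value;
    }
}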
Since you don't care which one finishes, why not just have a single WaitHandle for all threads and wait on that? Whichever one finishes first can set the handle.
See this option:
public class WaitForAnyRedux {
private static final int POOL_SIZE = 10;
public static <T> T waitForAny(Collection<T> collection) throws InterruptedException, ExecutionException {
List<Callable<T>> callables = new ArrayList<Callable<T>>();
for (final T t : collection) {
Callable<T> callable = Executors.callable(new Thread() {
@Override
public void run() {
synchronized (t) {
try {
t.wait();
} catch (InterruptedException e) {
}
}
}
}, t);
callables.add(callable);
}
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(POOL_SIZE);
ExecutorService executorService = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 0, TimeUnit.SECONDS, queue);
return executorService.invokeAny(callables);
}
static public void main(String[] args) throws InterruptedException, ExecutionException {
final List<Integer> integers = new ArrayList<Integer>();
for (int i = 0; i < POOL_SIZE; i++) {
integers.add(i);
}
(new Thread() {
public void run() {
Integer notified = null;
try {
notified = waitForAny(integers);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
System.out.println("notified=" + notified);
}
}).start();
synchronized (integers) {
integers.wait(3000);
}
Integer randomInt = integers.get((new Random()).nextInt(POOL_SIZE));
System.out.println("Waking up " + randomInt);
synchronized (randomInt) {
randomInt.notify();
}
}
}
