I know this question was answered many times, but I'm struggling to understand how it works.
So in my application the user must be able to select items which will be added to a queue (displayed in a ListView using an ObservableList<Task>) and each item needs to be processed sequentially by an ExecutorService.
Also that queue should be editable (change the order and remove items from the list).
private void handleItemClicked(MouseEvent event) {
if (event.getClickCount() == 2) {
File item = listView.getSelectionModel().getSelectedItem();
Task<Void> task = createTask(item);
facade.getTaskQueueList().add(task); // this list is bound to a ListView, where it can be edited
Future result = executor.submit(task);
// where executor is an ExecutorService of which type?
try {
result.get();
} catch (Exception e) {
// ...
}
}
}
I tried it with executor = Executors.newFixedThreadPool(1), but I don't have control over the queue.
I read about ThreadPoolExecutor and queues, but I'm struggling to understand it as I'm quite new to concurrency.
I need to run the handleItemClicked method in a background thread so that the UI does not freeze; what is the best way to do that?
Summed up: how can I implement a queue of tasks that is editable and processed sequentially by a background thread?
Please help me figure it out.
EDIT
Using the SerialTaskQueue class from vanOekel helped me; now I want to bind the list of tasks to my ListView.
ListProperty<Runnable> listProperty = new SimpleListProperty<>();
listProperty.set(taskQueue.getTaskList()); // getTaskList() returns the LinkedList from SerialTaskQueue
queueListView.itemsProperty().bind(listProperty);
Obviously this doesn't work, as the binding expects an ObservableList. Is there an elegant way to do it?
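One possible sketch for the EDIT (an assumption, not part of vanOekel's class): let the queue's backing list be an ObservableList so the ListView can observe it directly, with no ListProperty needed. Any mutation made from the executor's background thread would then have to be forwarded to the JavaFX Application Thread, e.g. via Platform.runLater.
import java.util.LinkedList;

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;

public class TaskListBinding { // hypothetical holder class, not from the question
    // An ObservableList wrapping a LinkedList, intended as the queue's backing list.
    // The same instance would be handed to the ListView via queueListView.setItems(...)
    // and to a SerialTaskQueue variant that accepts an externally supplied list.
    private final ObservableList<Runnable> observableTasks =
            FXCollections.observableList(new LinkedList<>());

    public ObservableList<Runnable> getObservableTasks() {
        return observableTasks;
    }
}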
The simplest solution I can think of is to maintain the task-list outside of the executor and use a callback to feed the executor the next task if one is available. Unfortunately, it involves synchronization on the task-list and an AtomicBoolean to indicate whether a task is executing.
The callback is simply a Runnable that wraps the original task to run and then "calls back" to see if there is another task to execute, and if so, executes it using the (background) executor.
The synchronization is needed to keep the task-list in order and at a known state. The task-list can be modified by two threads at the same time: via the callback running in the executor's (background) thread and via handleItemClicked method executed via the UI foreground thread. This in turn means that it is never exactly known when the task-list is empty for example. To keep the task-list in order and at a known fixed state, synchronization of the task-list is needed.
This still leaves an ambiguous moment when deciding whether a task is ready for execution. This is where the AtomicBoolean comes in: a value that is set is immediately visible to any other thread, and the compareAndSet method ensures that only one thread gets an "OK".
Combining the synchronization and the use of the AtomicBoolean allows the creation of one method with a "critical section" that can be called by both foreground and background threads at the same time to trigger the execution of a new task if possible. The code below is designed and set up in such a way that one such method (runNextTask) can exist. It is good practice to make the "critical section" in concurrent code as simple and explicit as possible (which, in turn, generally leads to an efficient "critical section").
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class SerialTaskQueue {
public static void main(String[] args) {
ExecutorService executor = Executors.newSingleThreadExecutor();
// all operations on this list must be synchronized on the list itself.
SerialTaskQueue tq = new SerialTaskQueue(executor);
try {
// test running the tasks one by one
tq.add(new SleepSome(10L));
Thread.sleep(5L);
tq.add(new SleepSome(20L));
tq.add(new SleepSome(30L));
Thread.sleep(100L);
System.out.println("Queue size: " + tq.size()); // should be empty
tq.add(new SleepSome(10L));
Thread.sleep(100L);
} catch (Exception e) {
e.printStackTrace();
} finally {
executor.shutdownNow();
}
}
// all lookups and modifications to the list must be synchronized on the list.
private final List<Runnable> tasks = new LinkedList<Runnable>();
// atomic boolean used to ensure only 1 task is executed at any given time
private final AtomicBoolean executeNextTask = new AtomicBoolean(true);
private final Executor executor;
public SerialTaskQueue(Executor executor) {
this.executor = executor;
}
public void add(Runnable task) {
synchronized(tasks) { tasks.add(task); }
runNextTask();
}
private void runNextTask() {
// critical section that ensures one task is executed.
synchronized(tasks) {
if (!tasks.isEmpty()
&& executeNextTask.compareAndSet(true, false)) {
executor.execute(wrapTask(tasks.remove(0)));
}
}
}
private CallbackTask wrapTask(Runnable task) {
return new CallbackTask(task, new Runnable() {
@Override public void run() {
if (!executeNextTask.compareAndSet(false, true)) {
System.out.println("ERROR: programming error, the callback should always run in execute state.");
}
runNextTask();
}
});
}
public int size() {
synchronized(tasks) { return tasks.size(); }
}
public Runnable get(int index) {
synchronized(tasks) { return tasks.get(index); }
}
public Runnable remove(int index) {
synchronized(tasks) { return tasks.remove(index); }
}
// general callback-task, see https://stackoverflow.com/a/826283/3080094
static class CallbackTask implements Runnable {
private final Runnable task, callback;
public CallbackTask(Runnable task, Runnable callback) {
this.task = task;
this.callback = callback;
}
@Override public void run() {
try {
task.run();
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
callback.run();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
// task that just sleeps for a while
static class SleepSome implements Runnable {
static long startTime = System.currentTimeMillis();
private final long sleepTimeMs;
public SleepSome(long sleepTimeMs) {
this.sleepTimeMs = sleepTimeMs;
}
@Override public void run() {
try {
System.out.println(tdelta() + "Sleeping for " + sleepTimeMs + " ms.");
Thread.sleep(sleepTimeMs);
System.out.println(tdelta() + "Slept for " + sleepTimeMs + " ms.");
} catch (Exception e) {
e.printStackTrace();
}
}
private String tdelta() { return String.format("% 4d ", (System.currentTimeMillis() - startTime)); }
}
}
Update: if groups of tasks need to be executed serially, have a look at the adapted implementation here.
Related
I have a Thread which always runs with a while(true) loop, and basically all it does is add Runnable objects to an executor.
OrderExecutionThread:
public class OrderExecutionThread extends Thread implements Runnable {
final private static int ORDER_EXEC_THREADS_NUMBER = 10;
private boolean running = true;
private boolean flag = true;
private List<Order> firstSellsList = new ArrayList<>();
private List<Order> secondSellsList = new ArrayList<>();
private ManagedDataSource managedDataSource;
private ExecutorService executorService;
public OrderExecutionThread(ManagedDataSource managedDataSource) {
this.managedDataSource = managedDataSource;
this.executorService = Executors.newFixedThreadPool(ORDER_EXEC_THREADS_NUMBER);
}
@Override
public void run() {
while (running) {
if (!firstSellsList.isEmpty() && !firstBuysList.isEmpty()) {
initAndRunExecution(firstBuysList.get(0), firstSellsList.get(0));
}
}
}
private void initAndRunExecution(Order buy, Order sell) {
executorService.submit(new OrderExecution(buy, sell, managedDataSource));
}
}
I'm running this thread by doing this in my main class:
new Thread(orderExecutionThread).start();
The executor is supposed to execute the OrderExecution runnable object, which does this:
@Override
public void run() {
try {
connection = managedDataSource.getConnection();
makeExecution(sell, buy);
} catch (SQLException e) {
e.printStackTrace();
} finally {
try {
if (!connection.isClosed())
connection.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
I know for sure that both lists are not empty and the initAndRunExecution is being called, however the order execution run method is not being called....
I suspect that this is a problem because your firstSellsList and firstBuysList are not synchronized collections. I suspect that other threads are adding to those lists but your OrderExecutionThread never sees the memory updates so just spins forever seeing empty lists. Whenever you share data between threads you need to worry about how the updates will be published and how the thread cache memory will be updated.
As @Fildor mentions in the comments, one solution would be to use BlockingQueues instead of your Lists. A BlockingQueue (for example LinkedBlockingQueue) is a synchronized class, so this takes care of the memory sharing. An additional benefit is that you don't have to run a spin-loop to watch for entries.
For example, your OrderExecutionThread might do something like:
private final BlockingQueue<Order> firstBuys = new LinkedBlockingQueue<>();
private final BlockingQueue<Order> firstSells = new LinkedBlockingQueue<>();
while (!Thread.currentThread().isInterrupted()) {
// wait until we get a buy
Order buy = firstBuys.take();
// wait until we get a sell
Order sell = firstSells.take();
initAndRunExecution(buy, sell);
}
This will wait until the lists get entries before running the orders.
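As a minimal, self-contained sketch of that pattern (class and field names are illustrative, not taken from the question's code):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> orders = new LinkedBlockingQueue<>();

        // Consumer: blocks in take() until an element is available,
        // so there is no spin-loop and no stale-memory problem.
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String order = orders.take(); // waits here
                    System.out.println("Executing " + order);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: the consumer sees these immediately because
        // LinkedBlockingQueue takes care of memory publication.
        orders.put("buy#1");
        orders.put("sell#1");

        Thread.sleep(200L);
        consumer.interrupt();
    }
}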
I was wondering what the best way is to create a Java Thread that does not terminate.
Currently, I have a "Runner" that basically looks like:
ExecutorService pool = Executors.newFixedThreadPool(3);
for (int i = 0; i < numThreads; ++i) {
pool.submit(new Task());
}
pool.shutdown();
and Task looks something like this
public class Task implements Runnable {
...
public void run() {
while(true) { }
}
}
There are two concerns I have with my approach:
Should I be creating a task that just returns after doing work and continue spawning threads that do minimal amounts of work? I'm concerned about the overhead, but am not sure how to measure it.
If I have a Thread that just loops infinitely, when I force quit the executable, will those Threads be shut down and cleaned up? After some testing, it doesn't appear that an InterruptedException is being thrown when the code containing the ExecutorService is forcefully shut down.
EDIT:
To elaborate, the Task looks like
public void run() {
while(true) {
// Let queue be a synchronized, global queue
if (queue has an element) {
// Pop from queue and do a very minimal amount of work on it
// Involves a small amount of network IO (maybe 10-100 ms)
} else {
sleep(2000);
}
}
}
I agree with @D Levant: a blocking queue is the key here. With a blocking queue, you don't need to handle the queue-empty or queue-full scenarios yourself.
In your Task class,
while(true) {
// Let queue be a synchronized, global queue
if (queue has an element) {
// Pop from queue and do a very minimal amount of work on it
// Involves a small amount of network IO (maybe 10-100 ms)
} else {
sleep(2000);
}
}
It's really not a good approach; it's inefficient because your while loop is continuously polling. Even though you have put in a sleep(), there is still the overhead of unnecessary context switches every time the thread wakes up and sleeps.
In my opinion, your approach of using Executors looks good for your case. Thread creation is a costly process, and Executors give us the flexibility of reusing the same thread for different tasks.
You can just pass your task to execute(Runnable) or submit(Runnable/Callable), and the rest will be taken care of internally by the Executors framework, which itself uses a blocking queue under the hood.
You can even create your own thread pool by using the ThreadPoolExecutor class and passing the required parameters to its constructor; here you can pass your own blocking queue to hold the tasks. The rest of the thread management is handled based on the configuration passed to the constructor, so if you are confident about the configuration parameters then you can go for it.
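For illustration, a sketch of constructing such a pool with your own blocking queue (the pool and queue sizes are arbitrary):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        // A bounded queue you control; illustrative capacity of 5.
        BlockingQueue<Runnable> taskQueue = new ArrayBlockingQueue<>(5);

        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                3,                    // core pool size
                3,                    // maximum pool size
                0L, TimeUnit.SECONDS, // keep-alive for excess threads
                taskQueue);           // your own blocking queue holding the tasks

        executor.execute(() -> System.out.println("task ran"));
        executor.shutdown();
    }
}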
Now the last point: if you don't want to use Java's built-in Executors framework, you can design your solution by using a BlockingQueue to hold tasks and starting threads which take tasks from this blocking queue and execute them. Below is a high-level implementation:
class TaskRunner {
private int noOfThreads; //The no of threads which you want to run always
private boolean started;
private int taskQueueSize; // No. of tasks that can be in the queue at a time; when you try to add more, put() blocks.
private BlockingQueue<Runnable> taskQueue;
private List<Worker> workerThreads;
public TaskRunner(int noOfThreads, int taskQueueSize) {
this.noOfThreads = noOfThreads;
this.taskQueueSize = taskQueueSize;
}
//You can pass any type of task(provided they are implementing Runnable)
public void submitTask(Runnable task) {
if(!started) {
init();
}
try {
taskQueue.put(task);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public void shutdown() {
for(Worker worker : workerThreads){
worker.stopped = true;
}
}
private void init() {
this.taskQueue = new LinkedBlockingDeque<>(taskQueueSize);
this.workerThreads = new ArrayList<>(noOfThreads);
for(int i=0; i< noOfThreads; i++) {
Worker worker = new Worker();
workerThreads.add(worker);
worker.start();
}
// mark as started so that init() runs only once
this.started = true;
}
private class Worker extends Thread {
private volatile boolean stopped;
public void run() {
// keep taking tasks from the queue until this worker is stopped
while(!stopped) {
try {
taskQueue.take().run();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
}
class Task1 implements Runnable {
@Override
public void run() {
//Your implementation for the task of type 1
}
}
class Task2 implements Runnable {
@Override
public void run() {
//Your implementation for the task of type 2
}
}
class Main {
public static void main(String[] args) {
TaskRunner runner = new TaskRunner(3,5);
runner.submitTask(new Task1());
runner.submitTask(new Task2());
runner.shutdown();
}
}
I'm looking to write some concurrent code which will process an event. This processing can take a long time.
Whilst that event is processing, it should record incoming events and then process the last incoming event when it is free to run again (the other events can be thrown away). This is a little bit like a FILO queue, but I only need to store one element in the queue.
Ideally I would like to plug in my new Executor into my event processing architecture shown below.
public class AsyncNode<I, O> extends AbstractNode<I, O> {
private static final Logger log = LoggerFactory.getLogger(AsyncNode.class);
private Executor executor;
public AsyncNode(EventHandler<I, O> handler, Executor executor) {
super(handler);
this.executor = executor;
}
@Override
public void emit(O output) {
if (output != null) {
for (EventListener<O> node : children) {
node.handle(output);
}
}
}
@Override
public void handle(final I input) {
executor.execute(new Runnable() {
@Override
public void run() {
try{
emit(handler.process(input));
}catch (Exception e){
log.error("Exception occured whilst processing input." ,e);
throw e;
}
}
});
}
}
I wouldn't do either. I would have an AtomicReference to the event you want to process and add a task to process it in a destructive way.
final AtomicReference<Event> eventRef = new AtomicReference<>();
public void processEvent(Event event) {
eventRef.set(event);
executor.submit(new Runnable() {
public void run() {
Event e = eventRef.getAndSet(null);
if (e == null) return;
// process event
}
});
}
This will only ever process the next event when the executor is free, without customising the executor or queue (which can be used for other things).
This also scales to having keyed events, i.e. you want to process only the last event for each key.
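A hedged sketch of such a keyed variant (the map-based bookkeeping is an assumption, not part of the answer above): keep one "latest event" slot per key and let each submitted task drain the slot for its key.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class KeyedLatestEventProcessor<K, E> {
    // One "latest event" slot per key; overwritten by newer events.
    private final ConcurrentMap<K, E> latestByKey = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void processEvent(K key, E event) {
        latestByKey.put(key, event);
        executor.submit(() -> {
            E e = latestByKey.remove(key);
            if (e == null) return; // an earlier task already handled the latest event for this key
            System.out.println("processing " + e + " for key " + key); // placeholder for real processing
        });
    }
}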
I think the key to this is the "discard policy" you need to apply to your Executor. If you only want to handle the latest task, then you need a queue size of one and a "discard policy" of throwing away the oldest. Here is an example of an Executor that will do this:
Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
Then when your tasks come in, just submit them to this executor; if there is already a queued job, it will be replaced with the new one:
latestTaskExecutor.execute(() -> doUpdate());
Here is an example app showing this working:
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class LatestUpdate {
private static final Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
private static final AtomicInteger counter = new AtomicInteger(0);
private static final Random random = new Random();
public static void main(String[] args) {
LatestUpdate latestUpdate = new LatestUpdate();
latestUpdate.run();
}
private void doUpdate(int number) {
System.out.println("Latest number updated is: " + number);
try { // Wait a random amount of time up to 5 seconds. Processing the update takes time...
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void run() {
// Updates a counter every second and schedules an update event
Thread counterUpdater = new Thread(() -> {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000L); // Wait one second
} catch (InterruptedException e) {
e.printStackTrace();
}
counter.incrementAndGet();
// Schedule this update will replace any existing update waiting
latestTaskExecutor.execute(() -> doUpdate(counter.get()));
System.out.println("New number is: " + counter.get());
}
});
counterUpdater.start(); // Run the thread
}
}
This also covers the case for GUIs where once updates stop arriving you want the GUI to become eventually consistent with the last event received.
public class LatestTaskExecutor implements Executor {
private final AtomicReference<Runnable> lastTask =new AtomicReference<>();
private final Executor executor;
public LatestTaskExecutor(Executor executor) {
super();
this.executor = executor;
}
@Override
public void execute(Runnable command) {
lastTask.set(command);
executor.execute(new Runnable() {
@Override
public void run() {
Runnable task=lastTask.getAndSet(null);
if(task!=null){
task.run();
}
}
});
}
}
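A possible usage sketch for this class (wrapping a single-thread executor is an assumption):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatestTaskExecutorDemo {
    public static void main(String[] args) {
        ExecutorService backing = Executors.newSingleThreadExecutor();
        LatestTaskExecutor latest = new LatestTaskExecutor(backing);

        // Only the most recently submitted task is guaranteed to run;
        // earlier ones may be skipped if they have not started yet.
        latest.execute(() -> System.out.println("update 1"));
        latest.execute(() -> System.out.println("update 2"));

        backing.shutdown();
    }
}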
@RunWith( MockitoJUnitRunner.class )
public class LatestTaskExecutorTest {
@Mock private Executor executor;
private LatestTaskExecutor latestExecutor;
@Before
public void setup(){
latestExecutor=new LatestTaskExecutor(executor);
}
@Test
public void testRunSingleTask() {
Runnable run=mock(Runnable.class);
latestExecutor.execute(run);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor).execute(captor.capture());
captor.getValue().run();
verify(run).run();
}
@Test
public void discardsIntermediateUpdates(){
Runnable run=mock(Runnable.class);
Runnable run2=mock(Runnable.class);
latestExecutor.execute(run);
latestExecutor.execute(run2);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor,times(2)).execute(captor.capture());
for (Runnable runnable:captor.getAllValues()){
runnable.run();
}
verify(run2).run();
verifyNoMoreInteractions(run);
}
}
This answer is a modified version of the one from DD, which minimizes submission of superfluous tasks.
An atomic reference is used to keep track of the latest event. A custom task is submitted to the queue for potentially processing an event; only the task that gets to read the latest event actually goes ahead and does useful work before clearing out the atomic reference to null. When other tasks get a chance to run and find no event available to process, they simply do nothing and end silently. Submitting superfluous tasks is avoided by tracking the number of available tasks in the queue: if there is at least one task pending in the queue, we can avoid submitting a new task, as the event will be handled when an already queued task is dequeued.
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
public class EventExecutorService implements Executor {
private final Executor executor;
// the field which keeps track of the latest available event to process
private final AtomicReference<Runnable> latestEventReference = new AtomicReference<>();
private final AtomicInteger activeTaskCount = new AtomicInteger(0);
public EventExecutorService(final Executor executor) {
this.executor = executor;
}
@Override
public void execute(final Runnable eventTask) {
// update the latest event
latestEventReference.set(eventTask);
// read count _after_ updating event
final int activeTasks = activeTaskCount.get();
if (activeTasks == 0) {
// there is definitely no other task to process this event, create a new task
final Runnable customTask = new Runnable() {
@Override
public void run() {
// decrement the count for available tasks _before_ reading event
activeTaskCount.decrementAndGet();
// find the latest available event to process
final Runnable currentTask = latestEventReference.getAndSet(null);
if (currentTask != null) {
// if such an event exists, process it
currentTask.run();
} else {
// somebody stole away the latest event. Do nothing.
}
}
};
// increment tasks count _before_ submitting task
activeTaskCount.incrementAndGet();
// submit the new task to the queue for processing
executor.execute(customTask);
}
}
}
Though I like James Mudd's solution, it still enqueues a second task while the previous one is running, which might be undesirable. If you want to always ignore/discard an arriving task when the previous one is not completed, you can make a wrapper like this:
public class DiscardingSubmitter {
private final ExecutorService es = Executors.newSingleThreadExecutor();
private Future<?> future = CompletableFuture.completedFuture(null); //to avoid null check
public void submit(Runnable r){
if (future.isDone()) {
future = es.submit(r);
}else {
//Task skipped, log if you want
}
}
}
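Usage might look like this (the sleep is only there so that the second submit arrives while the first task is still running):
public class DiscardingSubmitterDemo {
    public static void main(String[] args) throws InterruptedException {
        DiscardingSubmitter submitter = new DiscardingSubmitter();

        submitter.submit(() -> {
            try { Thread.sleep(500L); } catch (InterruptedException e) { }
            System.out.println("first task done");
        });

        // Arrives while the first task is still running, so it is discarded.
        submitter.submit(() -> System.out.println("second task (likely skipped)"));

        Thread.sleep(1000L);
        // Note: the executor inside DiscardingSubmitter is never shut down in this
        // sketch, so the JVM keeps running; a real version might expose shutdown().
    }
}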
I'm trying to find more information on how to bound the running time of a task created using ThreadPoolExecutor.
I want to create a self-destructing task: when a set time has passed (1 minute, for example), the thread will terminate itself automatically and return a null value. The key point here is that waiting for the thread to finish should not block the main thread (the UI thread in our example).
I know I can use the get method; however, it will block my application.
I was thinking about running an additional internal thread that will sleep for one minute and then call interrupt on the main thread.
I attached example code below; it looks like a good idea, but I need another pair of eyes to tell me if it makes sense.
public abstract class AbstractTask<T> implements Callable<T> {
private final class StopRunningThread implements Runnable {
/**
* Holds the main thread to interrupt. Cannot be null.
*/
private final Thread mMain;
public StopRunningThread(final Thread main) {
mMain = main;
}
@Override
public void run() {
try {
Thread.sleep(60 * 1000);
// Stop it.
mMain.interrupt();
} catch (final InterruptedException exception) {
// Ignore.
}
}
}
call() is called via a thread pool:
public T call() {
try {
// Before running any task initialize the result so that the user
// won't
// think he/she has something.
mResult = null;
mException = null;
// Stop running thread.
mStopThread = new Thread(new StopRunningThread(
Thread.currentThread()));
mStopThread.start();
mResult = execute(); // <-- A subclass implements this one
} catch (final Exception e) {
// An error occurred, ignore any result.
mResult = null;
mException = e;
// Log it.
Ln.e(e);
}
// In case it's out of memory do a special catch.
catch (final OutOfMemoryError e) {
// An error occurred, ignore any result.
mResult = null;
mException = new UncheckedException(e);
// Log it.
Ln.e(e);
} finally {
// Stop counting.
mStopThread.interrupt();
}
return mResult;
}
There are a couple of points I'm concerned about:
What will happen if execute() throws an exception and immediately afterwards my external thread interrupts? Then I'll never catch the exception.
Memory/CPU consumption: I am using a thread pool to avoid the creation of new threads.
Do you see a better way of reaching the same functionality?
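For comparison, a minimal sketch of the timed-cancellation idea using a small scheduler instead of a dedicated sleeping thread (an assumption, not the asker's code and not the answer below): the caller never blocks on get(), and the task is interrupted via Future.cancel(true) once the timeout elapses.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimeoutCancelDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();

        Callable<String> task = () -> {
            Thread.sleep(5_000L); // simulated long-running work
            return "finished";
        };

        Future<String> future = pool.submit(task);
        // After the timeout (1 second here; 1 minute in the question), interrupt
        // the task. The submitting thread never blocks on future.get().
        watchdog.schedule(() -> future.cancel(true), 1, TimeUnit.SECONDS);

        pool.shutdown();
        watchdog.shutdown();
    }
}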
Doing this would be somewhat involved. First, you'd need to extend the ThreadPoolExecutor class. You'll need to override the "beforeExecute" and "afterExecute" methods. They would keep track of thread start times, and do cleanup after. Then you'd need a reaper to periodically check to see which threads need cleaning up.
This example uses a Map to record when each thread is started. The beforeExecute method populates this, and the afterExecute method cleans it up. There is a TimerTask which periodically executes and looks at all the current entries (ie. all the running threads), and calls Thread.interrupt() on all of them that have exceeded the given time limit.
Notice that I have given two extra constructor parameters: maxExecutionTime and reaperInterval, to control how long tasks are given and how often to check for tasks to kill. I've omitted some constructors here for the sake of brevity.
Keep in mind the tasks you submit have to play nice and allow themselves to be killed. This means you have to:
Check Thread.currentThread().isInterrupted() at regular intervals
during execution.
Try to avoid any blocking operation that does not declare InterruptedException in its throws clause. A prime example of this would be InputStream/OutputStream usage, and you would use NIO Channels instead. If you have to use these methods, check the interrupted flag immediately after returning from such an operation.
public class TimedThreadPoolExecutor extends ThreadPoolExecutor {
private Map<Thread, Long> threads = new HashMap<Thread, Long>();
private Timer timer;
public TimedThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue,
long maxExecutionTime,
long reaperInterval) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
startReaper(maxExecutionTime, reaperInterval);
}
@Override
protected void afterExecute(Runnable r, Throwable t) {
threads.remove(Thread.currentThread());
System.out.println("after: " + Thread.currentThread().getName());
super.afterExecute(r, t);
}
@Override
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
System.out.println("before: " + t.getName());
threads.put(t, System.currentTimeMillis());
}
@Override
protected void terminated() {
if (timer != null) {
timer.cancel();
}
super.terminated();
}
private void startReaper(final long maxExecutionTime, long reaperInterval) {
timer = new Timer();
TimerTask timerTask = new TimerTask() {
@Override
public void run() {
// make a copy to avoid concurrency issues.
List<Map.Entry<Thread, Long>> entries =
new ArrayList<Map.Entry<Thread, Long>>(threads.entrySet());
for (Map.Entry<Thread, Long> entry : entries) {
Thread thread = entry.getKey();
long start = entry.getValue();
if (System.currentTimeMillis() - start > maxExecutionTime) {
System.out.println("interrupting thread : " + thread.getName());
thread.interrupt();
}
}
}
};
timer.schedule(timerTask, reaperInterval, reaperInterval);
}
public static void main(String args[]) throws Exception {
TimedThreadPoolExecutor executor = new TimedThreadPoolExecutor(5,5, 1000L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(20),
1000L,
200L);
for (int i=0;i<10;i++) {
executor.execute(new Runnable() {
public void run() {
try {
Thread.sleep(5000L);
}
catch (InterruptedException e) {
}
}
});
}
executor.shutdown();
while (! executor.isTerminated()) {
executor.awaitTermination(1000L, TimeUnit.MILLISECONDS);
}
}
}
I have a few asynchronous tasks running and I need to wait until at least one of them is finished (in the future I'll probably need to wait until M out of N tasks are finished).
Currently they are presented as Future, so I need something like
/**
* Blocks current thread until one of specified futures is done and returns it.
*/
public static <T> Future<T> waitForAny(Collection<Future<T>> futures)
throws AllFuturesFailedException
Is there anything like this? Or anything similar, not necessarily for Future? Currently I loop through the collection of futures, check if one is finished, then sleep for some time and check again. This does not look like the best solution, because if I sleep for a long period then an unwanted delay is added, and if I sleep for a short period then it can affect performance.
I could try using new CountDownLatch(1), decreasing the countdown when a task is complete and calling countdown.await(), but I found that possible only if I control Future creation. It is possible, but it requires a system redesign, because currently the logic of task creation (sending a Callable to an ExecutorService) is separated from the decision about which Future to wait for. I could also override
<T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable)
and create a custom implementation of RunnableFuture with the ability to attach a listener to be notified when the task is finished, then attach such a listener to the needed tasks and use a CountDownLatch, but that means I have to override newTaskFor for every ExecutorService I use - and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping a given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures.
All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like WaitHandle.WaitAny(WaitHandle[] waitHandles) in C#. Are there any well-known solutions for this kind of problem?
UPDATE:
Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countDown() to the execution process; then I can call countDownLatch.await() and everything works fine.
Thanks for the other answers; I did not know about ExecutorCompletionService, and it can indeed be helpful for similar tasks, but in this particular case it could not be used because some Futures are created without any executor - the actual task is sent to another server via the network, completes remotely, and a completion notification is received.
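For readers who do control task execution, a minimal sketch of that latch wrapping (names and timings are illustrative):
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LatchWrappingDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        CountDownLatch firstDone = new CountDownLatch(1);

        for (int i = 0; i < 3; i++) {
            final int id = i;
            Callable<Integer> task = () -> {
                Thread.sleep(100L * (id + 1)); // simulated work
                return id;
            };
            // Wrap the original task so that its completion counts the latch down.
            Callable<Integer> wrapped = () -> {
                try {
                    return task.call();
                } finally {
                    firstDone.countDown();
                }
            };
            executor.submit(wrapped);
        }

        firstDone.await(); // returns as soon as any one task has finished
        System.out.println("at least one task is done");
        executor.shutdownNow();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}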
Simple: check out ExecutorCompletionService.
ExecutorService.invokeAny
Why not just create a results queue and wait on the queue? Or more simply, use a CompletionService since that's what it is: an ExecutorService + result queue.
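For example, a sketch using ExecutorCompletionService.take() to obtain whichever task finishes first (the task bodies are placeholders):
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionServiceDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        CompletionService<Integer> completionService =
                new ExecutorCompletionService<>(executor);

        for (int i = 0; i < 3; i++) {
            final int id = i;
            completionService.submit(() -> {
                Thread.sleep(100L * (3 - id)); // simulated work of varying length
                return id;
            });
        }

        // take() blocks until the first task completes, whichever one it is.
        Future<Integer> firstDone = completionService.take();
        System.out.println("first finished task returned " + firstDone.get());

        executor.shutdownNow();
    }
}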
This is actually pretty easy with wait() and notifyAll().
First, define a lock object. (You can use any class for this, but I like to be explicit):
package com.javadude.sample;
public class Lock {}
Next, define your worker thread. He must notify that lock object when he's finished with his processing. Note that the notify must be in a synchronized block locking on the lock object.
package com.javadude.sample;
public class Worker extends Thread {
private Lock lock_;
private long timeToSleep_;
private String name_;
public Worker(Lock lock, String name, long timeToSleep) {
lock_ = lock;
timeToSleep_ = timeToSleep;
name_ = name;
}
@Override
public void run() {
// do real work -- using a sleep here to simulate work
try {
sleep(timeToSleep_);
} catch (InterruptedException e) {
interrupt();
}
System.out.println(name_ + " is done... notifying");
// notify whoever is waiting, in this case, the client
synchronized (lock_) {
lock_.notify();
}
}
}
Finally, you can write your client:
package com.javadude.sample;
public class Client {
public static void main(String[] args) {
Lock lock = new Lock();
Worker worker1 = new Worker(lock, "worker1", 15000);
Worker worker2 = new Worker(lock, "worker2", 10000);
Worker worker3 = new Worker(lock, "worker3", 5000);
Worker worker4 = new Worker(lock, "worker4", 20000);
boolean started = false;
int numNotifies = 0;
while (true) {
synchronized (lock) {
try {
if (!started) {
// need to do the start here so we grab the lock, just
// in case one of the threads is fast -- if we had done the
// starts outside the synchronized block, a fast thread could
// get to its notification *before* the client is waiting for it
worker1.start();
worker2.start();
worker3.start();
worker4.start();
started = true;
}
lock.wait();
} catch (InterruptedException e) {
break;
}
numNotifies++;
if (numNotifies == 4) {
break;
}
System.out.println("Notified!");
}
}
System.out.println("Everyone has notified me... I'm done");
}
}
As far as I know, Java has no analogous structure to the WaitHandle.WaitAny method.
It seems to me that this could be achieved through a "WaitableFuture" decorator:
public class WaitableFuture<T>
extends Future<T>
{
private CountDownLatch countDownLatch;
WaitableFuture(CountDownLatch countDownLatch)
{
super();
this.countDownLatch = countDownLatch;
}
void doTask()
{
super.doTask();
this.countDownLatch.countDown();
}
}
Though this would only work if it can be inserted before the execution code, since otherwise the execution code would not have the new doTask() method. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution.
Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources.
/**
* Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join().
*/
public static void joinAny(Collection<Thread> threads, int numberToWaitFor) throws InterruptedException
{
CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor);
for (Thread thread : threads)
{
(new Thread(new JoinThreadHelper(thread, countDownLatch))).start();
}
countDownLatch.await();
}
class JoinThreadHelper
implements Runnable
{
Thread thread;
CountDownLatch countDownLatch;
JoinThreadHelper(Thread thread, CountDownLatch countDownLatch)
{
this.thread = thread;
this.countDownLatch = countDownLatch;
}
@Override
public void run()
{
try {
this.thread.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
this.countDownLatch.countDown();
}
}
If you can use CompletableFutures instead then there is CompletableFuture.anyOf that does what you want, just call join on the result:
CompletableFuture.anyOf(futures).join()
You can use CompletableFutures with executors by calling the CompletableFuture.supplyAsync or runAsync methods.
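A short sketch of that combination (the supplier bodies and timings are placeholders):
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AnyOfDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        CompletableFuture<String> fast = CompletableFuture.supplyAsync(() -> {
            sleep(100L);
            return "fast";
        }, executor);
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> {
            sleep(1_000L);
            return "slow";
        }, executor);

        // anyOf completes as soon as the first of the given futures completes;
        // join() then returns that future's result.
        Object first = CompletableFuture.anyOf(fast, slow).join();
        System.out.println("first result: " + first);

        executor.shutdownNow();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}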
Since you don't care which one finishes, why not just have a single WaitHandle for all threads and wait on that? Whichever one finishes first can set the handle.
See this option:
public class WaitForAnyRedux {
private static final int POOL_SIZE = 10;
public static <T> T waitForAny(Collection<T> collection) throws InterruptedException, ExecutionException {
List<Callable<T>> callables = new ArrayList<Callable<T>>();
for (final T t : collection) {
Callable<T> callable = Executors.callable(new Thread() {
@Override
public void run() {
synchronized (t) {
try {
t.wait();
} catch (InterruptedException e) {
}
}
}
}, t);
callables.add(callable);
}
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(POOL_SIZE);
ExecutorService executorService = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 0, TimeUnit.SECONDS, queue);
return executorService.invokeAny(callables);
}
static public void main(String[] args) throws InterruptedException, ExecutionException {
final List<Integer> integers = new ArrayList<Integer>();
for (int i = 0; i < POOL_SIZE; i++) {
integers.add(i);
}
(new Thread() {
public void run() {
Integer notified = null;
try {
notified = waitForAny(integers);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
System.out.println("notified=" + notified);
}
}).start();
synchronized (integers) {
integers.wait(3000);
}
Integer randomInt = integers.get((new Random()).nextInt(POOL_SIZE));
System.out.println("Waking up " + randomInt);
synchronized (randomInt) {
randomInt.notify();
}
}
}