I'm looking to write some concurrent code which will process an event. This processing can take a long time.
Whilst that event is being processed, it should record incoming events and then process the last incoming event when it is free to run again (the other events can be thrown away). This is a little bit like a FILO queue, but I only need to store one element in it.
Ideally I would like to plug in my new Executor into my event processing architecture shown below.
public class AsyncNode<I, O> extends AbstractNode<I, O> {

    private static final Logger log = LoggerFactory.getLogger(AsyncNode.class);

    private Executor executor;

    public AsyncNode(EventHandler<I, O> handler, Executor executor) {
        super(handler);
        this.executor = executor;
    }

    @Override
    public void emit(O output) {
        if (output != null) {
            for (EventListener<O> node : children) {
                node.handle(output);
            }
        }
    }

    @Override
    public void handle(final I input) {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    emit(handler.process(input));
                } catch (Exception e) {
                    log.error("Exception occurred whilst processing input.", e);
                    throw e;
                }
            }
        });
    }
}
I wouldn't do either. I would have an AtomicReference to the event you want to process and add a task to process it in a destructive way.
final AtomicReference<Event> eventRef = new AtomicReference<>();

public void processEvent(Event event) {
    eventRef.set(event);
    executor.submit(new Runnable() {
        public void run() {
            Event e = eventRef.getAndSet(null);
            if (e == null) return;
            // process event
        }
    });
}
This will only ever process the next event when the executor is free, without customising the executor or queue (which can then be used for other things).
This also scales to keyed events, i.e. where you want to process only the last event for each key.
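A minimal sketch of that keyed variant (my own code; the class name, the ConcurrentHashMap of references and the BiConsumer callback are assumptions, not from the answer) keeps one AtomicReference per key:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiConsumer;

public class KeyedLatestProcessor<K, E> {

    // One slot per key; setting it overwrites any event that has not been picked up yet
    private final ConcurrentMap<K, AtomicReference<E>> latest = new ConcurrentHashMap<>();
    private final ExecutorService executor;
    private final BiConsumer<K, E> processor;

    public KeyedLatestProcessor(ExecutorService executor, BiConsumer<K, E> processor) {
        this.executor = executor;
        this.processor = processor;
    }

    public void submit(K key, E event) {
        AtomicReference<E> ref = latest.computeIfAbsent(key, k -> new AtomicReference<>());
        ref.set(event); // record the newest event for this key
        executor.submit(() -> {
            E e = ref.getAndSet(null); // destructive read, as in the answer above
            if (e != null) {
                processor.accept(key, e); // only the latest event for this key gets processed
            }
        });
    }
}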
I think the key to this is the discard policy you need to apply to your Executor. If you only want to handle the latest task, you need a queue size of one and a policy of discarding the oldest entry. Here is an example of an Executor that will do this:
Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
Then, when your tasks come in, just submit them to this executor; if there is already a queued job, it will be replaced with the new one:
latestTaskExecutor.execute(() -> doUpdate());
Here is an example app showing this working:
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class LatestUpdate {
private static final Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
private static final AtomicInteger counter = new AtomicInteger(0);
private static final Random random = new Random();
public static void main(String[] args) {
LatestUpdate latestUpdate = new LatestUpdate();
latestUpdate.run();
}
private void doUpdate(int number) {
System.out.println("Latest number updated is: " + number);
try { // Wait a random amount of time up to 5 seconds. Processing the update takes time...
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void run() {
// Updates a counter every second and schedules an update event
Thread counterUpdater = new Thread(() -> {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000L); // Wait one second
} catch (InterruptedException e) {
e.printStackTrace();
}
counter.incrementAndGet();
// Scheduling this update will replace any existing update that is waiting
latestTaskExecutor.execute(() -> doUpdate(counter.get()));
System.out.println("New number is: " + counter.get());
}
});
counterUpdater.start(); // Run the thread
}
}
This also covers the case for GUIs where once updates stop arriving you want the GUI to become eventually consistent with the last event received.
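As a rough GUI sketch of that idea (my own wiring, not from the answer; the Swing setup and the timings are assumptions), rapid-fire updates are coalesced by the single-slot executor and the label eventually settles on the last value submitted:
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LatestGuiUpdate {

    // Same single-slot executor as above: at most one update waits, the newest wins
    private static final Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1,
            30L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1),
            new ThreadPoolExecutor.DiscardOldestPolicy());

    public static void main(String[] args) {
        JFrame frame = new JFrame("Latest update");
        JLabel label = new JLabel("waiting...");
        frame.add(label);
        frame.setSize(300, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        for (int i = 1; i <= 100; i++) {
            final int value = i;
            latestTaskExecutor.execute(() -> {
                try {
                    Thread.sleep(200L); // pretend the update is expensive
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                SwingUtilities.invokeLater(() -> label.setText("Value: " + value)); // touch the GUI on the EDT
            });
        }
        // Intermediate values are discarded; once submissions stop, the label settles on the last one.
    }
}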
public class LatestTaskExecutor implements Executor {

    private final AtomicReference<Runnable> lastTask = new AtomicReference<>();
    private final Executor executor;

    public LatestTaskExecutor(Executor executor) {
        super();
        this.executor = executor;
    }

    @Override
    public void execute(Runnable command) {
        lastTask.set(command);
        executor.execute(new Runnable() {
            @Override
            public void run() {
                Runnable task = lastTask.getAndSet(null);
                if (task != null) {
                    task.run();
                }
            }
        });
    }
}
@RunWith(MockitoJUnitRunner.class)
public class LatestTaskExecutorTest {

    @Mock private Executor executor;
    private LatestTaskExecutor latestExecutor;

    @Before
    public void setup() {
        latestExecutor = new LatestTaskExecutor(executor);
    }

    @Test
    public void testRunSingleTask() {
        Runnable run = mock(Runnable.class);
        latestExecutor.execute(run);
        ArgumentCaptor<Runnable> captor = ArgumentCaptor.forClass(Runnable.class);
        verify(executor).execute(captor.capture());
        captor.getValue().run();
        verify(run).run();
    }

    @Test
    public void discardsIntermediateUpdates() {
        Runnable run = mock(Runnable.class);
        Runnable run2 = mock(Runnable.class);
        latestExecutor.execute(run);
        latestExecutor.execute(run2);
        ArgumentCaptor<Runnable> captor = ArgumentCaptor.forClass(Runnable.class);
        verify(executor, times(2)).execute(captor.capture());
        for (Runnable runnable : captor.getAllValues()) {
            runnable.run();
        }
        verify(run2).run();
        verifyNoMoreInteractions(run);
    }
}
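A small usage sketch (mine, with made-up task bodies and timings) that wraps a plain single-threaded executor and shows intermediate tasks being dropped:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LatestTaskExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService delegate = Executors.newSingleThreadExecutor();
        LatestTaskExecutor latest = new LatestTaskExecutor(delegate);

        for (int i = 1; i <= 5; i++) {
            final int n = i;
            latest.execute(() -> {
                try {
                    Thread.sleep(300L); // simulate a slow event handler
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("Processed " + n);
            });
        }
        // Typically only the last task (and possibly the first one that slipped in) is processed;
        // the ones in between were replaced before they could run.
        delegate.shutdown();
        delegate.awaitTermination(5L, TimeUnit.SECONDS);
    }
}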
This answer is a modified version of the one from DD which minimizes submission of superfluous tasks.
An atomic reference is used to keep track of the latest event. A custom task is submitted to the queue for potentially processing an event; only the task that gets to read the latest event actually goes ahead and does useful work, clearing the atomic reference back to null. When other tasks get a chance to run and find no event available to process, they do nothing and finish silently. Submitting superfluous tasks is avoided by tracking the number of pending tasks in the queue: if there is at least one task pending, we can skip submitting a new one, as the event will be handled when an already queued task is dequeued.
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
public class EventExecutorService implements Executor {
private final Executor executor;
// the field which keeps track of the latest available event to process
private final AtomicReference<Runnable> latestEventReference = new AtomicReference<>();
private final AtomicInteger activeTaskCount = new AtomicInteger(0);
public EventExecutorService(final Executor executor) {
this.executor = executor;
}
@Override
public void execute(final Runnable eventTask) {
// update the latest event
latestEventReference.set(eventTask);
// read count _after_ updating event
final int activeTasks = activeTaskCount.get();
if (activeTasks == 0) {
// there is definitely no other task to process this event, create a new task
final Runnable customTask = new Runnable() {
@Override
public void run() {
// decrement the count for available tasks _before_ reading event
activeTaskCount.decrementAndGet();
// find the latest available event to process
final Runnable currentTask = latestEventReference.getAndSet(null);
if (currentTask != null) {
// if such an event exists, process it
currentTask.run();
} else {
// somebody stole away the latest event. Do nothing.
}
}
};
// increment tasks count _before_ submitting task
activeTaskCount.incrementAndGet();
// submit the new task to the queue for processing
executor.execute(customTask);
}
}
}
I like James Mudd's solution, but it still enqueues a second task while the previous one is running, which might be undesirable. If you want to always ignore/discard an arriving task while the previous one has not completed, you can make a wrapper like this:
public class DiscardingSubmitter {

    private final ExecutorService es = Executors.newSingleThreadExecutor();
    private Future<?> future = CompletableFuture.completedFuture(null); // to avoid null check

    public void submit(Runnable r) {
        if (future.isDone()) {
            future = es.submit(r);
        } else {
            // Task skipped, log if you want
        }
    }
}
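A quick usage sketch (mine; the demo class and timings are made up) showing that tasks arriving while the previous one is still running are skipped:
import java.util.concurrent.TimeUnit;

public class DiscardingSubmitterDemo {
    public static void main(String[] args) throws InterruptedException {
        DiscardingSubmitter submitter = new DiscardingSubmitter();
        for (int i = 0; i < 5; i++) {
            final int n = i;
            submitter.submit(() -> {
                try {
                    TimeUnit.MILLISECONDS.sleep(500L); // simulate slow processing
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("Processed " + n);
            });
            TimeUnit.MILLISECONDS.sleep(100L); // new tasks arrive while the previous one is still running
        }
        // Typically only "Processed 0" is printed; the later submissions were discarded.
        // Note: DiscardingSubmitter does not expose its internal executor, so the worker
        // thread keeps the JVM alive; real code would want a shutdown method as well.
    }
}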
Related
I'm using a task that creates other tasks. Those tasks in turn may or may not create subsequent tasks. I don't know beforehand how many tasks will be created in total. At some point, no more tasks will be created, and all the tasks will finish.
When the last task is done, I must do some extra stuff.
Which threading mechanism should be used? I've read about CountDownLatch, Cyclic Barrier and Phaser but none seem to fit.
I've also tried using an ExecutorService, but I've encountered some issues, such as the inability to execute something at the end; you can see my attempt below:
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
public class Issue {
public static void main(String[] args) throws InterruptedException {
var count = new AtomicInteger(1);
var executor = Executors.newFixedThreadPool(3);
class Task implements Runnable {
final int id = count.getAndIncrement();
@Override
public void run() {
try {
MILLISECONDS.sleep((long)(Math.random() * 1000L + 1000L));
} catch (InterruptedException e) {
// Do nothing
}
if (id < 5) {
executor.submit(new Task());
executor.submit(new Task());
}
System.out.println(id);
}
}
executor.execute(new Task());
executor.shutdown();
// executor.awaitTermination(20, TimeUnit.SECONDS);
System.out.println("Hello");
}
}
This outputs an exception because tasks are added after shutdown() is called, but the expected output would be akin to:
1
2
3
4
5
6
7
8
9
Hello
Which threading mechanism can help me do that?
It seems pretty tricky. If there is even a single task that's either in the queue or currently executing, then since you can't say whether or not it will spawn another task, you have no way to know how long it may run for. It may be the start of a chain of tasks that takes another 2 hours.
I think all the information you'd need to achieve this is encapsulated by the executor implementations. You need to know what's running and what's in the queue.
I think you're unfortunately looking at having to write your own executor. It needn't be complicated and it doesn't have to conform to the JDK's interfaces if you don't want it to. Just something that maintains a thread pool and a queue of tasks. Add the ability to attach listeners to the executor. When the queue is empty and there are no actively executing tasks then you can notify the listeners.
Here's a quick code sketch.
class MyExecutor
{
private final AtomicLong taskId = new AtomicLong();
private final Map<Long, Runnable> idToQueuedTask = new ConcurrentHashMap<>();
private final AtomicLong runningTasks = new AtomicLong();
private final ExecutorService delegate = Executors.newFixedThreadPool(3);
public void submit(Runnable task) {
long id = taskId.incrementAndGet();
final Runnable wrapped = () -> {
taskStarted(id);
try {
task.run();
}
finally {
taskEnded();
}
};
idToQueuedTask.put(id, wrapped);
delegate.submit(wrapped);
}
private void taskStarted(long id) {
idToQueuedTask.remove(id);
runningTasks.incrementAndGet();
}
private void taskEnded() {
final long numRunning = runningTasks.decrementAndGet();
if (numRunning == 0 && idToQueuedTask.isEmpty()) {
System.out.println("Done, time to notify listeners");
}
}
public static void main(String[] args) {
MyExecutor executor = new MyExecutor();
executor.submit(() -> {
System.out.println("Parent task");
try {
Thread.sleep(1000);
}
catch (Exception e) {}
executor.submit(() -> {
System.out.println("Child task");
});
});
}
}
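The sketch just prints a message where listeners would be notified; the listener attachment mentioned above isn't shown. One possible shape for it, as a fragment you could drop into MyExecutor (my own addition, not from the answer), might be:
// Requires java.util.List and java.util.concurrent.CopyOnWriteArrayList.
private final List<Runnable> completionListeners = new CopyOnWriteArrayList<>();

public void addCompletionListener(Runnable listener) {
    completionListeners.add(listener);
}

private void taskEnded() {
    final long numRunning = runningTasks.decrementAndGet();
    if (numRunning == 0 && idToQueuedTask.isEmpty()) {
        completionListeners.forEach(Runnable::run); // notify once nothing is queued or running
    }
}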
If you change your ExecutorService to this:
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(3);
You could then use the count functions to wait:
while(executor.getTaskCount() > executor.getCompletedTaskCount())
{
TimeUnit.SECONDS.sleep(10L);
}
executor.shutdown();
System.out.println("Hello");
I have the code below:
@Override
public boolean start() {
boolean b = false;
if (status != RUNNING) {
LOGGER.info("Starting Auto Rescheduler Process...");
try {
b = super.start();
final ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("Rescheduler-Pool-%d").build();
ExecutorService exServ = Executors.newSingleThreadExecutor(threadFactory);
service = MoreExecutors.listeningDecorator(exServ);
} catch (Exception e) {
LOGGER.error("Error starting Auto Rescheduler Process! {}", e.getMessage());
LOGGER.debug("{}", e);
b = false;
}
} else {
LOGGER.info("Asked to start Auto Rescheduler Process but it had already started. Ignoring...");
}
return b;
}
The AutoRescheduler is the runnable below:
private class AutoScheduler implements Runnable {
    private static final String DEFAULT_CONFIGURABLE_MINUTES_VALUE = "other";
    private static final long DEFAULT_DELAY_MINUTES = 60L;

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // BLOCKS HERE UNTIL A FINISHED EVENT IS PUT IN QUEUE
                final FinishedEvent fEvent = finishedEventsQueue.take();
                LOGGER.info("Received a finished Event for {} and I am going to reschedule it", fEvent);
                final MyTask task = fEvent.getSource();
                final LocalDateTime nextRunTime = caclulcateNextRightTime(task);
                boolean b = scheduleEventService.scheduleEventANew(task, nextRunTime);
                if (b) {
                    cronController.loadSchedule();
                    LOGGER.info("Rescheduled event {} for {}", task, nextRunTime);
                }
            }
        } catch (InterruptedException e) {
            LOGGER.error("Interrupted while waiting for a new finishedEventQueue");
            Thread.currentThread().interrupt();
        }
    }
}
I see events being caught and put in the queue. Normally I then see them being rescheduled by the AutoRescheduler.
However, from time to time I stop seeing them being rescheduled, which leads me to believe that the rescheduling thread dies silently. After this happens, no more events are taken from the queue until I restart the process (I have a GUI that allows me to call the stop() and start() methods of the public class). After I restart it, the blocked events are rescheduled normally, which means that they are indeed in the queue.
Does anyone have an idea?
EDIT
I have reproduced the error in Eclipse. The thread does not die (I have tested with the ExecutorService as well). However, take() still does not take the item from the queue although it is placed there.
I know this question was answered many times, but I'm struggling to understand how it works.
So in my application the user must be able to select items which will be added to a queue (displayed in a ListView using an ObservableList<Task>) and each item needs to be processed sequentially by an ExecutorService.
Also that queue should be editable (change the order and remove items from the list).
private void handleItemClicked(MouseEvent event) {
if (event.getClickCount() == 2) {
File item = listView.getSelectionModel().getSelectedItem();
Task<Void> task = createTask(item);
facade.getTaskQueueList().add(task); // this list is bound to a ListView, where it can be edited
Future result = executor.submit(task);
// where executor is an ExecutorService of which type?
try {
result.get();
} catch (Exception e) {
// ...
}
}
}
I tried it with executor = Executors.newFixedThreadPool(1), but I don't have control over the queue.
I read about ThreadPoolExecutor and queues, but I'm struggling to understand it as I'm quite new to Concurrency.
I need to run that handleItemClicked method in a background thread so that the UI does not freeze. What is the best way to do that?
Summed up: How can I implement a queue of tasks, which is editable and sequentially processed by a background thread?
Please help me figure it out
EDIT
Using the SerialTaskQueue class from vanOekel helped me; now I want to bind the list of tasks to my ListView.
ListProperty<Runnable> listProperty = new SimpleListProperty<>();
listProperty.set(taskQueue.getTaskList()); // getTaskList() returns the LinkedList from SerialTaskQueue
queueListView.itemsProperty().bind(listProperty);
Obviously this doesn't work as it's expecting an ObservableList. Is there an elegant way to do it?
The simplest solution I can think of is to maintain the task-list outside of the executor and use a callback to feed the executor the next task when one is available. Unfortunately, it involves synchronization on the task-list and an AtomicBoolean to indicate that a task is executing.
The callback is simply a Runnable that wraps the original task to run and then "calls back" to see if there is another task to execute, and if so, executes it using the (background) executor.
The synchronization is needed to keep the task-list in order and at a known state. The task-list can be modified by two threads at the same time: via the callback running in the executor's (background) thread and via handleItemClicked method executed via the UI foreground thread. This in turn means that it is never exactly known when the task-list is empty for example. To keep the task-list in order and at a known fixed state, synchronization of the task-list is needed.
This still leaves an ambiguous moment to decide when a task is ready for execution. This is where the AtomicBoolean comes in: a value that is set is immediately visible to any other thread, and the compareAndSet method ensures that only one thread gets an "OK".
Combining the synchronization and the use of the AtomicBoolean allows the creation of one method with a "critical section" that can be called by both foreground- and background-threads at the same time to trigger the execution of a new task if possible. The code below is designed and set up in such a way that one such method (runNextTask) can exist. It is good practice to make the "critical section" in concurrent code as simple and explicit as possible (which, in turn, generally leads to an efficient "critical section").
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class SerialTaskQueue {
public static void main(String[] args) {
ExecutorService executor = Executors.newSingleThreadExecutor();
// all operations on this list must be synchronized on the list itself.
SerialTaskQueue tq = new SerialTaskQueue(executor);
try {
// test running the tasks one by one
tq.add(new SleepSome(10L));
Thread.sleep(5L);
tq.add(new SleepSome(20L));
tq.add(new SleepSome(30L));
Thread.sleep(100L);
System.out.println("Queue size: " + tq.size()); // should be empty
tq.add(new SleepSome(10L));
Thread.sleep(100L);
} catch (Exception e) {
e.printStackTrace();
} finally {
executor.shutdownNow();
}
}
// all lookups and modifications to the list must be synchronized on the list.
private final List<Runnable> tasks = new LinkedList<Runnable>();
// atomic boolean used to ensure only 1 task is executed at any given time
private final AtomicBoolean executeNextTask = new AtomicBoolean(true);
private final Executor executor;
public SerialTaskQueue(Executor executor) {
this.executor = executor;
}
public void add(Runnable task) {
synchronized(tasks) { tasks.add(task); }
runNextTask();
}
private void runNextTask() {
// critical section that ensures one task is executed.
synchronized(tasks) {
if (!tasks.isEmpty()
&& executeNextTask.compareAndSet(true, false)) {
executor.execute(wrapTask(tasks.remove(0)));
}
}
}
private CallbackTask wrapTask(Runnable task) {
return new CallbackTask(task, new Runnable() {
@Override public void run() {
if (!executeNextTask.compareAndSet(false, true)) {
System.out.println("ERROR: programming error, the callback should always run in execute state.");
}
runNextTask();
}
});
}
public int size() {
synchronized(tasks) { return tasks.size(); }
}
public Runnable get(int index) {
synchronized(tasks) { return tasks.get(index); }
}
public Runnable remove(int index) {
synchronized(tasks) { return tasks.remove(index); }
}
// general callback-task, see https://stackoverflow.com/a/826283/3080094
static class CallbackTask implements Runnable {
private final Runnable task, callback;
public CallbackTask(Runnable task, Runnable callback) {
this.task = task;
this.callback = callback;
}
@Override public void run() {
try {
task.run();
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
callback.run();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
// task that just sleeps for a while
static class SleepSome implements Runnable {
static long startTime = System.currentTimeMillis();
private final long sleepTimeMs;
public SleepSome(long sleepTimeMs) {
this.sleepTimeMs = sleepTimeMs;
}
#Override public void run() {
try {
System.out.println(tdelta() + "Sleeping for " + sleepTimeMs + " ms.");
Thread.sleep(sleepTimeMs);
System.out.println(tdelta() + "Slept for " + sleepTimeMs + " ms.");
} catch (Exception e) {
e.printStackTrace();
}
}
private String tdelta() { return String.format("% 4d ", (System.currentTimeMillis() - startTime)); }
}
}
Update: if groups of tasks need to be executed serially, have a look at the adapted implementation here.
We need to implement a feature that allows us to cancel a future job. This job is doing DB calls, and we need to roll back/clean up any updates made before cancel was fired.
This is what I have tried, but "Thread.currentThread().isInterrupted()" always return false:
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
final Future future = executor.submit(new Callable() {
@Override
public Boolean call() throws Exception {
// Do Some DB calls
if (Thread.currentThread().isInterrupted()) {
// Will need to roll back
throw new InterruptedException();
}
return true;
}
});
executor.schedule(new Runnable() {
public void run() {
future.cancel(true);
}
}, 1, TimeUnit.SECONDS);
Is this the right approach to achieve our target? And how do we know whether the job was cancelled, so that we can cancel/roll back the changes?
I believe that you complete the database calls before the second task gets a chance to run. When the executor has only a single thread, it is possible that the second scheduled task is not given time to run before the first completes. The following snippet does get interrupted:
import java.util.*;
import java.util.concurrent.*;
public class Main {
public static void main(String[] arg) {
ScheduledExecutorService runner = Executors.newScheduledThreadPool(2);
// If this is 1 then this will never be interrupted.
final Future f = runner.submit(new Callable<Boolean>() {
public Boolean call() throws Exception {
System.out.println("Calling");
while (! Thread.currentThread().isInterrupted()) {
;
}
System.out.println("Interrupted");
return true;
}
});
runner.schedule(new Runnable() {
public void run() {
System.out.println("Interrupting");
f.cancel(true);
}
}, 1, TimeUnit.SECONDS);
}
}
First, it seems the thread pool is not creating a new thread for you, so your cancel task will only get called after the DB task finishes. So I changed the pool size in your example to 2 and it worked.
I'm trying to find more information on how to bound the running time of a task created using ThreadPoolExecutor.
I want to create a self-destructing task, e.g. when a set time has passed (1 minute for example) the thread will terminate itself automatically and return a null value. The key point here is that waiting for the thread to finish should not block the main thread (the UI thread in our example).
I know I can use the get method, however it will block my application.
I was thinking about running an additional internal thread that will sleep for 1m and then will call interrupt on the main thread.
I attached an example code, it looks like a good idea, but I need another pair of eyes telling me if it makes sense.
public abstract class AbstractTask<T> implements Callable<T> {
private final class StopRunningThread implements Runnable {
/**
* Holds the main thread to interrupt. Cannot be null.
*/
private final Thread mMain;
public StopRunningThread(final Thread main) {
mMain = main;
}
@Override
public void run() {
try {
Thread.sleep(60 * 1000);
// Stop it.
mMain.interrupt();
} catch (final InterruptedException exception) {
// Ignore.
}
}
}
call() is called via a ThreadPool:
public T call() {
try {
// Before running any task initialize the result so that the user
// won't
// think he/she has something.
mResult = null;
mException = null;
// Stop running thread.
mStopThread = new Thread(new StopRunningThread(
Thread.currentThread()));
mStopThread.start();
mResult = execute(); // <-- a subclass implements this one
} catch (final Exception e) {
// An error occurred, ignore any result.
mResult = null;
mException = e;
// Log it.
Ln.e(e);
}
// In case it's out of memory do a special catch.
catch (final OutOfMemoryError e) {
// An error occurred, ignore any result.
mResult = null;
mException = new UncheckedException(e);
// Log it.
Ln.e(e);
} finally {
// Stop counting.
mStopThread.interrupt();
}
return mResult;
}
There are a couple of points I'm worried about:
What will happen if execute() throws an exception and immediately afterwards my external thread interrupts? Then I'll never catch the exception.
Memory/CPU consumption: I am using a thread pool to avoid the creation of new threads.
Do you see a better way of achieving the same functionality?
Doing this would be somewhat involved. First, you'd need to extend the ThreadPoolExecutor class. You'll need to override the "beforeExecute" and "afterExecute" methods. They would keep track of thread start times, and do cleanup after. Then you'd need a reaper to periodically check to see which threads need cleaning up.
This example uses a Map to record when each thread is started. The beforeExecute method populates this, and the afterExecute method cleans it up. There is a TimerTask which periodically executes and looks at all the current entries (ie. all the running threads), and calls Thread.interrupt() on all of them that have exceeded the given time limit.
Notice that I have given two extra constructor parameters: maxExecutionTime and reaperInterval, to control how long tasks are given and how often to check for tasks to kill. I've omitted some constructors here for the sake of brevity.
Keep in mind the tasks you submit have to play nice and allow themselves to be killed. This means you have to:
Check Thread.currentThread().isInterrupted() at regular intervals during execution.
Try to avoid any blocking operation that does not declare InterruptedException in its throws clause. A prime example of this would be InputStream/OutputStream usage, and you would use NIO Channels instead. If you have to use these methods, check the interrupted flag immediately after returning from such an operation.
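As a minimal sketch of what "playing nice" might look like (my own example, not part of the answer; the class name and timings are made up):
import java.util.concurrent.TimeUnit;

public class CooperativeTask implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) { // check the flag at regular intervals
            // ... do a small unit of work here ...
            try {
                TimeUnit.MILLISECONDS.sleep(50L); // a blocking call that declares InterruptedException
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag and stop promptly
                return;
            }
        }
        System.out.println("Interrupted, stopping early");
    }
}
Tasks written along these lines give the reaper in the executor below something it can actually stop.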
public class TimedThreadPoolExecutor extends ThreadPoolExecutor {
private Map<Thread, Long> threads = new HashMap<Thread, Long>();
private Timer timer;
public TimedThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue,
long maxExecutionTime,
long reaperInterval) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
startReaper(maxExecutionTime, reaperInterval);
}
@Override
protected void afterExecute(Runnable r, Throwable t) {
threads.remove(Thread.currentThread());
System.out.println("after: " + Thread.currentThread().getName());
super.afterExecute(r, t);
}
@Override
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
System.out.println("before: " + t.getName());
threads.put(t, System.currentTimeMillis());
}
@Override
protected void terminated() {
if (timer != null) {
timer.cancel();
}
super.terminated();
}
private void startReaper(final long maxExecutionTime, long reaperInterval) {
timer = new Timer();
TimerTask timerTask = new TimerTask() {
@Override
public void run() {
// make a copy to avoid concurrency issues.
List<Map.Entry<Thread, Long>> entries =
new ArrayList<Map.Entry<Thread, Long>>(threads.entrySet());
for (Map.Entry<Thread, Long> entry : entries) {
Thread thread = entry.getKey();
long start = entry.getValue();
if (System.currentTimeMillis() - start > maxExecutionTime) {
System.out.println("interrupting thread : " + thread.getName());
thread.interrupt();
}
}
}
};
timer.schedule(timerTask, reaperInterval, reaperInterval);
}
public static void main(String args[]) throws Exception {
TimedThreadPoolExecutor executor = new TimedThreadPoolExecutor(5,5, 1000L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(20),
1000L,
200L);
for (int i=0;i<10;i++) {
executor.execute(new Runnable() {
public void run() {
try {
Thread.sleep(5000L);
}
catch (InterruptedException e) {
}
}
});
}
executor.shutdown();
while (! executor.isTerminated()) {
executor.awaitTermination(1000L, TimeUnit.MILLISECONDS);
}
}
}