I have a problem understanding a bug. I have code that runs tasks in a thread pool, but the execution time of a single task is limited. Here is the code:
public static abstract class CustomRunnable implements Runnable {
private CountDownLatch latch;
public void setLatch(CountDownLatch latch) {
this.latch = latch;
}
protected void threadFinished() {
latch.countDown();
}
}
And executions:
CountDownLatch latch = new CountDownLatch(threads.size());
ExecutorService executor = Executors.newFixedThreadPool(numberOfThreads);
ScheduledExecutorService canceller = Executors.newSingleThreadScheduledExecutor();
try {
for(CustomRunnable thread : threads) {
thread.setLatch(latch);
Future<?> future = executor.submit(thread);
canceller.schedule(new Runnable() {
@Override
public void run() {
future.cancel(true);
thread.threadFinished();
}
}, 1, TimeUnit.MINUTES);
}
latch.await();
executor.shutdown();
canceller.shutdown();
}
catch(InterruptedException e) {
e.printStackTrace();
}
But latch.await(); returns much earlier than expected (this code is used in a few places). Are there any ideas how to fix that? Thanks!
Related
I need to stop all the scheduled runnables when shutting down this executor. I save the scheduled futures in a list when using execute(Runnable runnable); however, this does not work: the runnables are still being run after calling shutdown, even though I am cancelling them with scheduledFutures.forEach(scheduledFuture -> scheduledFuture.cancel(false)) in that method. Why does this happen?
public class ThreadPool {
private final ScheduledThreadPoolExecutor scheduledThreadPoolExecutor;
private final List<ScheduledFuture<?>> scheduledFutures = new ArrayList<>();
public ThreadPool(ScheduledThreadPoolExecutor scheduledThreadPoolExecutor) {
this.scheduledThreadPoolExecutor = scheduledThreadPoolExecutor;
}
public void execute(Runnable runnable) {
scheduledFutures.add(
scheduledThreadPoolExecutor.scheduleWithFixedDelay(runnable, 0, 20, TimeUnit.SECONDS));
}
public void shutdown() {
scheduledFutures.forEach(scheduledFuture -> scheduledFuture.cancel(false));
scheduledThreadPoolExecutor.shutdown();
try {
if (scheduledThreadPoolExecutor.awaitTermination(1, TimeUnit.SECONDS)) {
return;
}
scheduledThreadPoolExecutor.shutdownNow();
} catch (Exception exception) {
Thread.currentThread().interrupt();
}
}
I have a pausable thread pool executor implementation, just like the one in the documentation of the ThreadPoolExecutor class. I have a simple test that does the following:
class PausableThreadPoolExecutor extends ThreadPoolExecutor {
public static PausableThreadPoolExecutor newSingleThreadExecutor() {
return new PausableThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
/** isPaused */
private boolean isPaused;
/** pauseLock */
private ReentrantLock pauseLock = new ReentrantLock();
/** unpaused */
private Condition unpaused = this.pauseLock.newCondition();
public PausableThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
TimeUnit unit, BlockingQueue<Runnable> workQueue) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
}
@Override
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
this.pauseLock.lock();
try {
while (this.isPaused) {
this.unpaused.await();
}
} catch (InterruptedException ie) {
t.interrupt();
} finally {
this.pauseLock.unlock();
}
}
public void pause() {
this.pauseLock.lock();
try {
this.isPaused = true;
} finally {
this.pauseLock.unlock();
}
}
public void resume() {
this.pauseLock.lock();
try {
this.isPaused = false;
this.unpaused.signalAll();
} finally {
this.pauseLock.unlock();
}
}
public static void main(String[] args) {
PausableThreadPoolExecutor p = PausableThreadPoolExecutor.newSingleThreadExecutor();
p.pause();
p.execute(new Runnable() {
public void run() {
for (StackTraceElement ste : Thread.currentThread().getStackTrace()) {
System.out.println(ste);
}
}
});
p.shutdownNow();
}
}
Interestingly, the call to shutdownNow() causes the Runnable to run. Is this normal? As I understand it, shutdownNow() should try to stop the actively executing tasks by interrupting them, but the interrupt seems to wake up the task and execute it. Can someone explain this?
Interestingly, the call to shutdownNow() causes the Runnable to run. Is this normal?
Not sure it is "normal" but it is certainly expected given your code. In your beforeExecute(...) method I see the following:
this.pauseLock.lock();
try {
while (this.isPaused) {
this.unpaused.await();
}
} catch (InterruptedException ie) {
t.interrupt();
} finally {
this.pauseLock.unlock();
}
The job loops waiting for the isPaused boolean to be set to false. However, if the job is interrupted, this.unpaused.await() will throw InterruptedException, which breaks out of the while loop; the thread is re-interrupted (which is always a good pattern), beforeExecute() returns, and the job is allowed to execute. Interrupting a thread doesn't kill it unless you have specific code to handle the interruption.
If you want to stop the job when it is interrupted, you could throw a RuntimeException in the beforeExecute() handler when you see that the job has been interrupted:
} catch (InterruptedException ie) {
t.interrupt();
throw new RuntimeException("Thread was interrupted so don't run");
A cleaner approach might be to check to see if you are interrupted in the run() method and then exit:
public void run() {
if (Thread.currentThread().isInterrupted()) {
return;
}
...
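For illustration, here is a minimal, self-contained sketch of that check-then-return pattern. The class name and the plain single-thread executor are my own (this is not the PausableThreadPoolExecutor above), and depending on timing the task may be skipped from the queue, interrupted, or complete normally:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InterruptAwareTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(() -> {
            // Bail out instead of doing the work if this thread was
            // interrupted (e.g. by shutdownNow()) before reaching this point.
            if (Thread.currentThread().isInterrupted()) {
                return;
            }
            System.out.println("doing the work");
        });
        pool.shutdownNow();                         // interrupts the worker thread
        pool.awaitTermination(1, TimeUnit.SECONDS); // wait briefly for it to wind down
    }
}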
I'm using Spring 4.3.8.RELEASE with Java 7. I want to create a thread factory to help manage certain workers in my application. I declare my thread factory like so
<bean id="myprojectThreadFactory" class="org.springframework.scheduling.concurrent.CustomizableThreadFactory">
<constructor-arg value="prefix-"/>
</bean>
<bean id="myprojectTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="threadFactory" ref="myprojectThreadFactory"/>
<property name="corePoolSize" value="${myproject.core.thread.pool.size}" />
<property name="maxPoolSize" value="${myproject.max.thread.pool.size}" />
</bean>
However, I'm having trouble "join"ing on the threads. That is, I want to wait for all work to be completed before continuing with a certain task, so I have:
m_importEventsWorker.work();
m_threadExecutor.shutdown();
System.out.println("done.");
in which my thread pool is executed like so
public void work(final MyWorkUnit pmyprojectOrg)
{
final List<MyWorkUnit> allOrgs = new ArrayList<MyWorkUnit>();
if (pmyprojectOrg != null)
{
processData(pmyprojectOrg.getmyprojectOrgId());
} else {
allOrgs.addAll(m_myprojectSvc.findAllWithNonEmptyTokens());
// Cue up threads to execute
for (final MyWorkUnit myprojectOrg : allOrgs)
{
m_threadExecutor.execute(new Thread(new Runnable(){
@Override
public void run()
{
System.out.println("started.");
processData(myprojectOrg.getmyprojectOrgId());
}
}));
} // for
Yet what gets printed out is
done.
started.
started.
So clearly I'm not waiting. What's the right way to wait for my threads to finish working?
You can create a fixed thread pool using ExecutorService and check whether it has any active tasks:
ExecutorService executor = Executors.newFixedThreadPool(50);
If you run your tasks with this executor and check the active task count periodically using @Scheduled (fixedRate or fixedDelay), you can see whether they are finished.
ThreadPoolExecutor poolInfo = (ThreadPoolExecutor) executor;
Integer activeTaskCount = poolInfo.getActiveCount();
if (activeTaskCount == 0) {
//If it is 0, the threads are idle and have no assigned tasks.
//Do whatever you want here!
}
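As a hedged sketch of what that periodic check could look like with Spring's @Scheduled (the bean and method names are mine; it assumes scheduling is enabled with @EnableScheduling and the pool is available as a ThreadPoolExecutor bean):
import java.util.concurrent.ThreadPoolExecutor;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PoolMonitor {

    private final ThreadPoolExecutor pool;

    public PoolMonitor(ThreadPoolExecutor pool) {
        this.pool = pool;
    }

    // Poll every 5 seconds; act once nothing is running and nothing is queued.
    @Scheduled(fixedDelay = 5000)
    public void checkIdle() {
        if (pool.getActiveCount() == 0 && pool.getQueue().isEmpty()) {
            // All submitted tasks have finished -- do whatever you want here.
        }
    }
}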
A CountDownLatch is initialized with a given count. This count is decremented by calls to the countDown() method. Threads waiting for this count to reach zero can call one of the await() methods. Calling await() blocks the thread until the count reaches zero.
You can use a CountDownLatch in the main thread to wait for completion of all the tasks. Declare the CountDownLatch with the number of tasks as its count, e.g. CountDownLatch latch = new CountDownLatch(3);, call await() in the main thread to wait, and call countDown() on each task's completion.
public void work(final MyWorkUnit pmyprojectOrg)
{
final List<MyWorkUnit> allOrgs = new ArrayList<MyWorkUnit>();
if (pmyprojectOrg != null)
{
processData(pmyprojectOrg.getmyprojectOrgId());
} else {
allOrgs.addAll(m_myprojectSvc.findAllWithNonEmptyTokens());
final CountDownLatch latch = new CountDownLatch(allOrgs.size());
// Cue up threads to execute
for (final MyWorkUnit myprojectOrg : allOrgs)
{
m_threadExecutor.execute(new Thread(new Runnable(){
@Override
public void run()
{
System.out.println("started.");
processData(myprojectOrg.getmyprojectOrgId());
latch.countDown();
}
}));
}
// After the for loop, wait for every task to call countDown()
try {
latch.await();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Example:
CountDownLatch latch = new CountDownLatch(3);
Waiter waiter = new Waiter(latch);
Decrementer decrementer = new Decrementer(latch);
new Thread(waiter) .start();
new Thread(decrementer).start();
public class Waiter implements Runnable{
CountDownLatch latch = null;
public Waiter(CountDownLatch latch) {
this.latch = latch;
}
public void run() {
try {
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Waiter Released");
}
}
public class Decrementer implements Runnable {
CountDownLatch latch = null;
public Decrementer(CountDownLatch latch) {
this.latch = latch;
}
public void run() {
try {
Thread.sleep(1000);
this.latch.countDown();
Thread.sleep(1000);
this.latch.countDown();
Thread.sleep(1000);
this.latch.countDown();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Since I'm using Spring's ThreadPoolTaskExecutor, I found the below which suited my needs ...
protected void waitForThreadPool(final ThreadPoolTaskExecutor threadPoolExecutor)
{
threadPoolExecutor.setWaitForTasksToCompleteOnShutdown(true);
threadPoolExecutor.shutdown();
try {
threadPoolExecutor.getThreadPoolExecutor().awaitTermination(30, TimeUnit.SECONDS);
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
} // waitForThreadPool
What is the best practice approach to launch a pool of 1000's of tasks (where up to 4 should be able to execute in parallel) and automatically timeout them if they take more than 3 seconds (individually)?
While I found that ExecutorService seems to be helpful (see the SSCCE from another post below), I don't see how to make this work for multiple tasks running in parallel (since future.get(3, TimeUnit.SECONDS) executes on the same thread as the one launching the tasks, there is no opportunity to launch multiple tasks in parallel):
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class Test {
public static void main(String[] args) throws Exception {
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(new Task());
try {
System.out.println("Started..");
System.out.println(future.get(3, TimeUnit.SECONDS));
System.out.println("Finished!");
} catch (TimeoutException e) {
future.cancel(true);
System.out.println("Terminated!");
}
executor.shutdownNow();
}
}
class Task implements Callable<String> {
@Override
public String call() throws Exception {
Thread.sleep(4000); // Just to demo a long running task of 4 seconds.
return "Ready!";
}
}
Thanks!
If you have to monitor each task to kill it when it exceeds the timeout period, either
the task itself has to keep track of time and quit appropriately, OR
you have to create a second watchdog thread for every task. The watchdog thread sets a timer and sleeps, waking up after the timeout interval expires and then terminating the task if it's still running (a sketch of this idea follows below).
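A minimal sketch of that watchdog idea, assuming a ScheduledExecutorService stands in for the watchdog thread and that the tasks respond to interruption (all names and timings here are illustrative, not from the question):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WatchdogExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(4);  // up to 4 tasks in parallel
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();

        for (int i = 0; i < 10; i++) {
            final int id = i;
            final Future<?> future = workers.submit(() -> {
                try {
                    Thread.sleep(5000);                              // pretend to work for 5 s
                    System.out.println("task " + id + " finished");
                } catch (InterruptedException e) {
                    System.out.println("task " + id + " timed out");
                }
            });
            // If the task is still running after 3 s, cancel(true) interrupts it.
            watchdog.schedule(() -> future.cancel(true), 3, TimeUnit.SECONDS);
        }

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
        watchdog.shutdownNow();
    }
}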
This is a tricky one. Here’s what I came up with:
public class TaskQueue<T> {
private static final Logger logger =
Logger.getLogger(TaskQueue.class.getName());
private final Collection<Callable<T>> tasks;
private final int maxTasks;
private int addsPending;
private final Collection<T> results = new ArrayList<T>();
private final ScheduledExecutorService executor;
public TaskQueue() {
this(4);
}
public TaskQueue(int maxSimultaneousTasks) {
maxTasks = maxSimultaneousTasks;
tasks = new ArrayDeque<>(maxTasks);
executor = Executors.newScheduledThreadPool(maxTasks * 3);
}
private void addWhenAllowed(Callable<T> task)
throws InterruptedException,
ExecutionException {
synchronized (tasks) {
while (tasks.size() >= maxTasks) {
tasks.wait();
}
tasks.add(task);
if (--addsPending <= 0) {
tasks.notifyAll();
}
}
Future<T> future = executor.submit(task);
executor.schedule(() -> future.cancel(true), 3, TimeUnit.SECONDS);
try {
T result = future.get();
synchronized (tasks) {
results.add(result);
}
} catch (CancellationException e) {
logger.log(Level.FINE, "Canceled", e);
} finally {
synchronized (tasks) {
tasks.remove(task);
if (tasks.isEmpty()) {
tasks.notifyAll();
}
}
}
}
public void add(Callable<T> task) {
synchronized (tasks) {
addsPending++;
}
executor.submit(new Callable<Void>() {
@Override
public Void call()
throws InterruptedException,
ExecutionException {
addWhenAllowed(task);
return null;
}
});
}
public Collection<T> getAllResults()
throws InterruptedException {
synchronized (tasks) {
while (addsPending > 0 || !tasks.isEmpty()) {
tasks.wait();
}
return new ArrayList<T>(results);
}
}
public void shutdown() {
executor.shutdown();
}
}
I suspect it could be done more cleanly using Locks and Conditions instead of synchronization.
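For what it's worth, here is a hedged sketch of the same kind of gate expressed with an explicit Lock and Condition; the class and field names are mine, and it only models the slot-limiting part, not the whole TaskQueue:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class SlotGate {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition slotFree = lock.newCondition();
    private final int maxTasks;
    private int running;

    SlotGate(int maxTasks) { this.maxTasks = maxTasks; }

    void acquire() throws InterruptedException {
        lock.lock();
        try {
            while (running >= maxTasks) {
                slotFree.await();          // wait for a slot, like tasks.wait()
            }
            running++;
        } finally {
            lock.unlock();
        }
    }

    void release() {
        lock.lock();
        try {
            running--;
            slotFree.signalAll();          // like tasks.notifyAll()
        } finally {
            lock.unlock();
        }
    }
}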
According to Javadoc:
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero.
That means that in the code below, since I initialized the CountDownLatch to 1, all threads should get unblocked from await() as soon as countDown() is called on the latch.
But the main thread is waiting for all the threads to complete, and I didn't even join the main thread to the other threads. Why is the main thread waiting?
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;
public class Sample implements Runnable {
private CountDownLatch latch;
public Sample(CountDownLatch latch)
{
this.latch = latch;
}
private static AtomicLong number = new AtomicLong(0);
public long next() {
return number.getAndIncrement();
}
public static void main(String[] args) {
CountDownLatch latch = new CountDownLatch(1);
for (int threadNo = 0; threadNo < 4000; threadNo++) {
Runnable t = new Sample(latch);
new Thread(t).start();
}
try {
latch.countDown();
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void run() {
try {
latch.await();
Thread.sleep(100);
System.out.println("Count:"+next());
} catch (Exception e) {
e.printStackTrace();
}
}
}
Try running the following modified version of your code:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;
public class Test implements Runnable {
private CountDownLatch latch;
public Test(CountDownLatch latch)
{
this.latch = latch;
}
private static AtomicLong number = new AtomicLong(0);
public long next() {
return number.getAndIncrement();
}
public static void main(String[] args) {
CountDownLatch latch = new CountDownLatch(1);
for (int threadNo = 0; threadNo < 1000; threadNo++) {
Runnable t = new Test(latch);
new Thread(t).start();
}
try {
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println( "done" );
}
@Override
public void run() {
try {
Thread.sleep(1000 + (int) ( Math.random() * 3000 ));
System.out.println(next());
} catch (Exception e) {
e.printStackTrace();
} finally {
latch.countDown();
}
}
}
You should see something like:
0 done 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
This indicates that the main thread did, in fact, unblock from the latch.await() call after the first thread called latch.countDown().
You are starting 4000 threads and they are only waiting 100 milliseconds, so you are most likely overwhelming the box (and all of the threads will be ending at roughly the same time). Add a sleep in your thread-start loop and try increasing the sleep duration to see it work as you expect.
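For example, a hedged tweak of the question's main() along those lines; it reuses the Sample class above and assumes Sample.run()'s Thread.sleep(100) is also raised to something like 1000 ms:
public static void main(String[] args) throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);
    for (int threadNo = 0; threadNo < 4000; threadNo++) {
        new Thread(new Sample(latch)).start();
        Thread.sleep(5);       // stagger thread creation instead of bursting
    }
    latch.countDown();         // releases every thread blocked in await()
}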