Thread pool not accepting new tasks - java

I feel like my Java concurrency knowledge is getting rusty. I am trying to figure out why the thread pool doesn't accept more tasks in the following code:
ExecutorService e = Executors.newFixedThreadPool(aNumber);
// Task 1
for (int i = 0; i < n; i++)
    e.submit(new aRunnable());
while (!e.isTerminated());
System.out.println("Task 1 done");
// Task 2
for (int i = 0; i < n; i++)
    e.submit(new anotherRunnable());
while (!e.isTerminated());
System.out.println("Task 2 done");
It never gets to start Task 2; the thread "freezes" when the last task from Task 1 is run, as if it were waiting for something else to finish.
What's wrong?

It never gets to start Task 2; the thread "freezes" when the last task from Task 1 is run, as if it were waiting for something else to finish.
It is waiting. ExecutorService.isTerminated() only returns true once the thread pool's tasks have finished after the pool has been shut down. Since you've never called e.shutdown(), your loop will spin forever. To quote from the ExecutorService javadocs:
Returns true if all tasks have completed following shut down. Note that isTerminated is never true unless either shutdown or shutdownNow was called first.
You've not shut the service down so that will never be true. In general, anything that spins in a while loop like that is an antipattern – at the very least put a Thread.sleep(10); in the loop. Typically we use e.awaitTermination(...) but again, that's only after you've called e.shutdown();. And you don't want to shut the ExecutorService down because you are going to be submitting more tasks to it.
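For reference, once you really are done submitting work, the usual pattern looks something like this sketch (the one-minute timeout is arbitrary):
e.shutdown(); // stop accepting new tasks
// block until everything already queued has run, or give up after a minute
// (awaitTermination throws InterruptedException; handling omitted here)
if (!e.awaitTermination(1, TimeUnit.MINUTES)) {
    e.shutdownNow(); // interrupt whatever is still running
}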
If you want to wait for all of your tasks to finish and then submit more tasks, I'd call get() on the Futures returned from the first batch of submitted tasks. Something like:
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < n; i++) {
    futures.add(e.submit(new aRunnable()));
}
// now go back and wait for all of those tasks to finish
// (get() throws InterruptedException and ExecutionException; handling omitted)
for (Future<?> future : futures) {
    future.get();
}
// now you can go forward and submit other tasks to the thread-pool

If you want to know when a specific task finishes, use ExecutorService.submit(), which will return a Future<> (a handle that you can use to get the status of a specific job) -- the executor itself doesn't terminate until you shut it down. Think of an executor like a 'batch queue' or a 'coprocessor' waiting around for you to throw some work in the hopper.
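For example, a minimal sketch (the Callable and its result are made up for illustration):
ExecutorService e = Executors.newFixedThreadPool(4);
// submit() hands back a Future you can poll or block on
Future<Integer> handle = e.submit(() -> 42); // pretend this is an expensive computation
System.out.println("done yet? " + handle.isDone());
// get() blocks until the task has finished; it throws InterruptedException/ExecutionException
Integer result = handle.get();
e.shutdown();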
Update: Gray answered this much better than I - see his post. -- (how do people type that fast??)

Related

ThreadPoolExecutor shutdown API doc verbiage "does not wait"

In the documentation for ThreadPoolExecutor#shutdown it says:
This method does not wait for previously submitted tasks to complete execution
What does that mean?
Because I would take it to mean that queued tasks that have been submitted may not finish, but that's not what happens; see this example code, which calls shutdown before it's done starting all submitted tasks:
package example;

import java.util.concurrent.*;

public class ExecutorTest {
    public static void main(String... args) {
        ExecutorService executorService = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 10; i++) {
            final int count = i;
            executorService.execute(() -> {
                System.out.println("starting " + count);
                try {
                    Thread.sleep(10000L);
                } catch (InterruptedException e) {
                    System.out.println("interrupted " + count);
                }
                System.out.println("ended " + count);
            });
        }
        executorService.shutdown();
    }
}
Which prints:
C:\>java -cp . example.ExecutorTest
starting 0
starting 2
starting 1
ended 2
ended 0
starting 3
starting 4
ended 1
starting 5
ended 3
ended 5
ended 4
starting 7
starting 6
starting 8
ended 7
ended 6
ended 8
starting 9
ended 9
C:\>
In this example it seems pretty clear that submitted tasks do complete execution. I've run this on JDK 8 with both the Oracle and IBM JDKs and get the same result.
So what is that line in the documentation trying to say? Or did somebody write this for shutdownNow and cut-n-paste it into the documentation for shutdown inadvertently?
In the doc of ThreadPoolExecutor#shutdown, there is one more sentence:
This method does not wait for previously submitted tasks to complete
execution. Use awaitTermination to do that.
In this context, it means the caller thread does not wait for previously submitted tasks to complete execution. In other words, shutdown() does not block the caller thread.
And if you do need to block the caller thread, use ThreadPoolExecutor#awaitTermination(long timeout, TimeUnit unit):
Blocks until all tasks have completed execution after a shutdown
request, or the timeout occurs, or the current thread is interrupted,
whichever happens first.
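Applied to the example above, a sketch might look like this (the one-hour timeout is arbitrary, and awaitTermination throws InterruptedException, which main would have to declare or catch):
executorService.shutdown(); // returns immediately; the queued tasks keep running
// block the calling thread until everything already submitted has finished, or the timeout expires
if (executorService.awaitTermination(1, TimeUnit.HOURS)) {
    System.out.println("all submitted tasks completed");
} else {
    System.out.println("timed out before the tasks finished");
}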
Full quote of the javadoc of shutdown():
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that.
Shutting down the executor prevents new tasks from being submitted.
Already submitted tasks, whether started or still waiting in the queue, will complete execution.
If you don't want queued tasks to execute, call shutdownNow():
Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution. These tasks are drained (removed) from the task queue upon return from this method.
This method does not wait for actively executing tasks to terminate. Use awaitTermination to do that.
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. This implementation cancels tasks via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
Whether already started tasks are stopped depends on the task, as described in the last paragraph.
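A small sketch of that difference (the sleeping task is made up for illustration):
ExecutorService pool = Executors.newFixedThreadPool(1);
for (int i = 0; i < 5; i++) {
    pool.execute(() -> {
        try {
            Thread.sleep(1000L);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // a cooperative task restores the flag and exits
        }
    });
}
// the running task (if any) is interrupted; tasks that never started are handed back
List<Runnable> neverStarted = pool.shutdownNow();
System.out.println(neverStarted.size() + " tasks were still queued");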

Executors newFixedThreadPool not giving the expected result

I am trying to execute multiple threads in scala and for a simple test I run this code:
Executors.newFixedThreadPool(20).execute(new Runnable {
  override def run(): Unit = {
    println("Thread Started!")
  }
})
As far as I could understand, it would create 20 threads and call the print function, but this is not what's happening. It creates only one thread, executes the print and hangs.
Can someone explain this phenomenon to me?
The reason it hangs is that you don't shut down the ExecutorService. In Java (sorry, not familiar with Scala):
ExecutorService executor = Executors.newFixedThreadPool(20); // or 1.
executor.execute(() -> System.out.println("..."));
executor.shutdown();
As to why you only see the message once: you create a pool that can grow to 20 threads, but you give it just one task to run. Threads won't do anything if you don't give them anything to do.
I think you assumed that this code would execute the runnable on each thread in the pool. That's simply not the case.
If you want to actually do this 20 times on different threads, you need to a) submit 20 runnables and b) synchronise the runnables so that they actually have to run on separate threads:
CountDownLatch latch = new CountDownLatch(1);
ExecutorService executor = Executors.newFixedThreadPool(20);
for (int i = 0; i < 20; ++i) {
    executor.execute(() -> {
        try {
            latch.await(); // wait until every task has been submitted
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("...");
    });
}
latch.countDown();
executor.shutdown();
The latch here ensures that the threads wait for each other before proceeding. Without it, the trivial work could easily be done on one thread before submitting another, so you wouldn't use all of the threads in the pool.

Java multithreading - schedule a task every time the task finishes its job

I want to run a task that will contain a Timer in it that does another task. I need to wait till that sub-task is done executing before I can run another "parent task".
So how can I make the main task wait till its sub-tasks are finished executing before firing off another task?
I thought of notifying it with a boolean isDone in each task, but I'm not sure if that's proper.
You can use a CountDownLatch in the parent thread, which will wait until the child finishes its work and calls the countDown() method so that the parent thread may continue its work. You can have multiple children, and you can adjust the CountDownLatch's count value to match their number.
I wouldn't recommend using a volatile variable, since you would have to continuously put the parent thread to sleep and check whether the variable has changed after it wakes up.
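A minimal sketch of that idea (the number of children and the work itself are placeholders):
int children = 3;
CountDownLatch done = new CountDownLatch(children);
ExecutorService pool = Executors.newFixedThreadPool(children);
for (int i = 0; i < children; i++) {
    pool.execute(() -> {
        // ... the sub-task's work would go here ...
        done.countDown(); // tell the parent this child has finished
    });
}
done.await(); // parent blocks until every child has counted down; throws InterruptedException
// now it is safe to schedule the next "parent task"
pool.shutdown();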
To wait for a bunch of tasks to finish: invokeAll
// Assume we have an ExecutorService "pool" and tl is a list of Callable tasks
List<Future<SomeType>> results = pool.invokeAll(tl); // blocks until all tasks in tl are completed
Or
// Assume we have an ExecutorService "pool" and N is the count of tasks
List<Future<SomeType>> batch = new ArrayList<>(N);
for (int i = 0; i < N; i++) {
    batch.add(pool.submit(new Task(i)));
}
for (Future<SomeType> fut : batch) fut.get();
/* get() will block until the task is done.
 * If it is already done it will return immediately.
 * So if all futures in the list return from get(), all tasks are done.
 */

Shutting down ExecutorService

According to the documentation, when shutdown() is invoked, any tasks that were already submitted (I assume via submit() or execute()) will be executed. When shutdownNow() is invoked, the executor will halt all tasks waiting to be processed, as well as attempt to stop actively executing tasks.
What I would like to clarify is the exact meaning of "waiting to be processed." For example, say I have an executor, and I call execute() on some number of Runnable objects (assume all of these objects effectively ignore interruptions). I know that if I now call shutdown, all of these objects will finish executing, regardless.
However, if I call shutdownNow at this point, will it have the same effect as calling shutdown? Or are some of the objects not executed? In other words, if I want an executor to exit as fast as possible, is my best option always to call shutdownNow(), even when the Runnables passed to the executor all effectively ignore interruptions?
Let's say you have this fabulous Runnable that is not interruptible for 10 seconds once it's started:
Runnable r = new Runnable() {
    @Override
    public void run() {
        long endAt = System.currentTimeMillis() + 10000;
        while (System.currentTimeMillis() < endAt);
    }
};
And you have an executor with just 1 thread and you schedule the runnable 10 times:
ExecutorService executor = Executors.newFixedThreadPool(1);
for (int i = 0; i < 10; i++)
executor.execute(r);
And now you decide to call shutdown:
The executor continues for the full 10 x 10 seconds and everything scheduled will be executed. The tasks don't see that you're shutting down their executor. shutdown can be used if you want a "short lived" executor just for a few tasks. You can immediately call shutdown and it will get cleaned up later.
Alternatively shutdownNow():
Takes 10 seconds. The already running task is attempted to be interrupted, but that obviously has no effect so it continues to run. The other 9 tasks that were still waiting in the queue are "cancelled" and returned to you as List so you could do something with them, like schedule them later. Could also take 0 seconds if the first task is not yet started. You'd get all tasks back. The method is used whenever you want to abort an entire executor.
What I would like to clarify is the exact meaning of "waiting to be processed".
It means all tasks whose run() method has not yet been called (by the executor).
If I call shutdownNow at this point, will it have the same effect as calling shutdown?
No.
Or is it possible that some of the objects will not be executed?
That is correct.
In other words, if I want an executor to exit as fast as possible, is my best option always to call shutdownNow(), even when the Runnables passed to the executor all effectively ignore interruptions?
That is correct.
Better still, recode the Runnables to pay attention to interrupts ... or put a timeout on the shutdown ...
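An interrupt-aware version of the Runnable above might look something like this (just a sketch):
Runnable cooperative = new Runnable() {
    @Override
    public void run() {
        long endAt = System.currentTimeMillis() + 10000;
        while (System.currentTimeMillis() < endAt) {
            if (Thread.currentThread().isInterrupted()) {
                return; // bail out promptly when shutdownNow() interrupts us
            }
        }
    }
};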
The API for the shutdownNow method says:
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
source

Program does not terminate immediately when all ExecutorService tasks are done

I put a bunch of runnable objects into an ExecutorService:
// simplified content of main method
ExecutorService threadPool = Executors.newCachedThreadPool();
for (int i = 0; i < workerCount; i++) {
    threadPool.execute(new Worker());
}
I would expect my program/process to stop immediately after all workers are done. But according to my log, it takes another 20-30 seconds until that happens. The workers do not allocate any resources, in fact, they do nothing at the moment.
Don't get me wrong, this is not a crucial problem for me, I'm just trying to understand what is happening and I'm wondering if this is normal behavior.
Executors.newCachedThreadPool() uses Executors.defaultThreadFactory() for its ThreadFactory. defaultThreadFactory's javadocs say that "each new thread is created as a non-daemon thread" (emphasis added). So, the threads created for the newCachedThreadPool are non-daemon. That means that they'll prevent the JVM from exiting naturally (by "naturally" I mean that you can still call System.exit(1) or kill the program to cause the JVM to halt).
The reason the app finishes at all is that each thread created within the newCachedThreadPool times out and closes itself after some time of inactivity. When the last one of them closes itself, if your application doesn't have any non-daemon threads left, it'll quit.
You can (and should) close the ExecutorService down manually via shutdown or shutdownNow.
See also the JavaDoc for Thread, which talks about daemon-ness.
I would expect my program/process to stop immediately after all workers are done. But according to my log, it takes another 20-30 seconds until that happens. The workers do not allocate any resources, in fact, they do nothing at the moment.
The problem is that you are not shutting down your ExecutorService. After you submit all of the jobs to the service, you should shut the service down, or the JVM will not terminate unless all of the threads in it are daemon threads. If you do not shut down the thread pool, then any threads associated with the ExecutorService, again if not daemon, will stop the JVM from finishing. If you've submitted any tasks to a cached thread pool, then you will have to wait for the threads to time out and get reaped before the JVM will finish.
ExecutorService threadPool = Executors.newCachedThreadPool();
for (int i = 0; i < workerCount; i++) {
    threadPool.execute(new Worker());
}
// you _must_ do this after submitting all of your workers
threadPool.shutdown();
Starting the threads as daemon is most likely not what you want to do because your application may stop before the tasks have completed and all of the tasks will be terminated immediately at that time. I just did a quick audit and of the 178 times we use ExecutorService classes in our production code, only 2 of them were started as daemon threads. The rest are properly shutdown.
If you need to force an ExecutorService to stop when the application is exiting then using shutdownNow() with proper handling of the thread interrupt flags is in order.
Basically on an ExecutorService you call shutdown() and then awaitTermination():
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
while (...) {
    taskExecutor.execute(new MyTask());
}
taskExecutor.shutdown();
try {
    taskExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
    ...
}
From the javadoc for Executors.newCachedThreadPool():
Threads that have not been used for sixty seconds are terminated and removed from the cache.
It is usually a good idea to call shutdown() on an ExecutorService if you know that no new tasks will be submitted to it. Then all tasks in the queue will complete, and the service will shut down as soon as they have.
(Alternately, if you don't care if all the tasks complete - for example, if they are handling background calculations that are irrelevant once your main UI is gone - then you can create a ThreadFactory that sets all the threads in that pool to be daemon.)
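A sketch of that ThreadFactory idea (the pool name is arbitrary):
ExecutorService background = Executors.newCachedThreadPool(runnable -> {
    Thread t = new Thread(runnable);
    t.setDaemon(true); // daemon threads will not keep the JVM alive
    return t;
});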
For multithreading with an ExecutorService, the solution is
threadPool.shutdown();
It is due to the combination of keepAliveTime=60L, timeUnit=TimeUnit.SECONDS and corePoolSize=0*: when a thread completes a task, it does not terminate immediately; it may** wait up to keepAliveTime for a new task.
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
*if corePoolSize != 0, see the allowCoreThreadTimeOut() method of ThreadPoolExecutor
**whether a thread waits depends on the combination of the current number of running threads in the pool, corePoolSize and maximumPoolSize
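If the 60-second idle timeout itself is the issue, you can construct the pool yourself with a shorter keepAliveTime, roughly like this sketch (the 1-second value is arbitrary):
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        0, Integer.MAX_VALUE,
        1L, TimeUnit.SECONDS, // idle threads are reaped after 1 second instead of 60
        new SynchronousQueue<Runnable>());
// or, with a non-zero corePoolSize, let the core threads time out as well:
// pool.allowCoreThreadTimeOut(true);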
