I am learning locks and conditions in Java and have implemented producer and consumer code. The idea is to have 1 producer and N consumers, but when I run the code, only 1 consumer thread ever reads.
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class ProdConsumer {
int MAX_ELEMENTS = 10;
Queue<Integer> q ;
Lock lock = new ReentrantLock(false);
Condition read = lock.newCondition();
Condition write = lock.newCondition();
ProdConsumer(){
q = new LinkedList<>();
}
public static void main(String[] args) throws InterruptedException {
ProdConsumer pq = new ProdConsumer();
new Thread(() -> {
try {
new Producer(pq.q).produce(pq.lock,pq.read,pq.write,pq.MAX_ELEMENTS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}).start();
new Thread(() -> {
try {
System.out.println("Started " + Thread.currentThread().getName());
new Consumer(pq.q).consume(pq.lock,pq.read,pq.write);
} catch (InterruptedException e) {
e.printStackTrace();
}
}).start();
new Thread(() -> {
try {
System.out.println("Started " + Thread.currentThread().getName());
new Consumer(pq.q).consume(pq.lock,pq.read,pq.write);
} catch (InterruptedException e) {
e.printStackTrace();
}
}).start();
}
}
class Producer {
Queue q;
Producer(Queue q) {
this.q = q;
}
void produce(Lock lock, Condition read, Condition write, int MAX_ELEMENTS) throws InterruptedException {
lock.lock();
try {
int count = 0;
while (count < 1000) {
while (q.size() == MAX_ELEMENTS) {
write.await();
}
q.add(count);
System.out.println(Thread.currentThread().getName() + "PRODUCED " + count);
count++;
read.signalAll();
}
} finally {
lock.unlock();
}
}
}
class Consumer {
Queue q;
Consumer(Queue q ){
this.q = q;
}
void consume(Lock lock, Condition read, Condition write) throws InterruptedException {
lock.lock();
try {
while (true) {
while (q.size() == 0) {
read.await();
}
System.out.println(Thread.currentThread().getName() + "CONSUMED " + q.poll());
Thread.sleep((long)(Math.random() * 1000));
write.signalAll();
}
} finally {
lock.unlock();
}
}
}
Result:
Started Thread-1
Started Thread-2
Thread-0PRODUCED 0
Thread-0PRODUCED 1
Thread-0PRODUCED 2
Thread-0PRODUCED 3
Thread-0PRODUCED 4
Thread-0PRODUCED 5
Thread-0PRODUCED 6
Thread-0PRODUCED 7
Thread-0PRODUCED 8
Thread-0PRODUCED 9
Thread-1CONSUMED 0
Thread-1CONSUMED 1
Thread-1CONSUMED 2
Thread-1CONSUMED 3
Thread-1CONSUMED 4
Thread-1CONSUMED 5
Thread-1CONSUMED 6
Thread-1CONSUMED 7
Thread-1CONSUMED 8
Thread-1CONSUMED 9
Here we can see that even though both consumer threads start, Thread-1 is always the one consuming.
My thought process is: when the producer signals all, either consumer thread should wake up. What am I doing wrong?
Unfortunately you have implemented the producer-consumer with a dependency on both the queue size and the locks, and the LinkedList you use as the queue is not itself thread-safe, so it is only safe while every access happens under the lock. The implementation is unlikely to work as intended, because the producer can call signalAll() several times while neither consumer is yet waiting on read.
Because your producer starts first, it may either fill the queue so that both consumers see q.size() > 0 when they start up, or both consumers may see q.size() == 0 and miss the producer's read.signalAll() before they ever reach read.await(). So this while loop is not doing what you expect in both consumers:
while (q.size() == 0) {
read.await();
}
To demonstrate the above, swap the new Thread blocks around and start both Consumers BEFORE the Producer, and change the consumers so they rely only on the read condition, not the queue size, to decide when to read the queue:
// while (q.size() == 0) {
read.await();
// }
That removes the reliance on q.size() for coordination, but it can still fail if both consumers have not reached read.await() before the producer signals, so you might need to sleep before starting the producer.
A better (and more reliable) solution is to replace the locks with a thread-safe queue such as new ArrayBlockingQueue<>(MAX_ELEMENTS) and use q.put() / q.take(), so that the consumers between them handle exactly what the producer produces, and you no longer need to worry about the producer/consumer start-up order (nor add a sleep before starting the producer).
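For example, a minimal sketch of the same 1-producer / 2-consumer setup built on an ArrayBlockingQueue might look like this (the class name ProdConsumerBlocking and the loop bounds are just illustrative):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class ProdConsumerBlocking {
    public static void main(String[] args) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(10);
        // one producer
        new Thread(() -> {
            try {
                for (int count = 0; count < 1000; count++) {
                    q.put(count); // blocks while the queue is full
                    System.out.println(Thread.currentThread().getName() + " PRODUCED " + count);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        // two consumers
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                try {
                    while (true) {
                        Integer v = q.take(); // blocks while the queue is empty
                        System.out.println(Thread.currentThread().getName() + " CONSUMED " + v);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}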
Also note that System.out introduces some inter-thread synchronisation of its own (System.out is a PrintStream, and its println methods are synchronized), so the results may not be the same without your logging.
Related
I found myself puzzled with this behaviour for wait() and notifyAll():
import java.util.LinkedList;
import java.util.Queue;
class Producer implements Runnable {
Queue<Integer> sharedMessages;
Integer i = 0;
Producer(Queue<Integer> sharedMessages) {
this.sharedMessages = sharedMessages;
}
public void produce(Integer i) {
synchronized (sharedMessages){
System.out.println("Producing message " + i);
this.sharedMessages.add(i);
}
}
public void run(){
synchronized (sharedMessages) {
while (i < 100) {
produce(i++);
}
}
}
}
Consumer:
class Consumer implements Runnable{
Queue<Integer> sharedMessages;
Consumer(Queue<Integer> sharedMessages) {
this.sharedMessages = sharedMessages;
}
public void consume() {
synchronized (sharedMessages) {
System.out.println(sharedMessages.remove() + " consumed by " + Thread.currentThread().getName().toString());
}
}
@Override
public void run(){
synchronized (sharedMessages){
while(sharedMessages.size() > 0){
System.out.println(Thread.currentThread().getName() + " going to consume");
consume();
try {
sharedMessages.wait();
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName() + ": I was interrupted from my sleep!");
}
sharedMessages.notifyAll();
}
}
}
}
Here's how I'm creating the threads:
public class Main {
public static void main(String[] args) {
Queue<Integer> sharedMessages = new LinkedList<>();
new Thread(new Producer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
}
}
The output looks something like this:
Producing message 0
Producing message 1
...
Producing message 98
Producing message 99
Thread-6 going to consume
0 consumed by Thread-6
Thread-5 going to consume
1 consumed by Thread-5
Thread-4 going to consume
2 consumed by Thread-4
Thread-3 going to consume
3 consumed by Thread-3
Thread-2 going to consume
4 consumed by Thread-2
Thread-1 going to consume
5 consumed by Thread-1
And then the application keeps running, without consumers consuming any messages after 5.
Since wait() and notifyAll() are called on the same monitor, sharedMessages, and the while loop keeps running, shouldn't the consumer threads keep running, alternately consuming messages?
NOTE: This question is NOT about a bounded blocking queue / typical producer-consumer. I'm trying to gain a better understanding of wait() and notifyAll(), and this behaviour caught my attention. I am probably missing something here, and I am looking for answers pointing out what I am missing, NOT just another way of doing it.
Your Producer thread locks the queue, then adds 100 messages without ever releasing the lock, and finally releases the lock before terminating, without ever notifying anyone.
Your 6 Consumer threads will each consume a message, then call wait().
At this point, the Producer thread has ended, and the 6 Consumer threads are waiting.
Who did you envision would be notifying them to wake them up?
Your current code produces this output because each of your consumers will consume at most one item, depending on whether it gets to run before or after your producer, and then wait unconditionally. There is nothing to wake them up, so they won't run again.
So the behaviour you see is expected for the code you have written, as the only notifyAll() calls happen in the consumers after their unconditional wait, so in essence never (except after a spurious wake-up). In addition, your overly large synchronized blocks hinder the threads from running concurrently.
The primary changes you need to make are the following (a sketch applying them comes after this list):
Reduce the size of your synchronized blocks (ideally it should only cover producing or consuming a single item)
Do not wait unconditionally, only wait when there are no items in the queue and the producer is still active (you will need a way to signal the producer is done)
Have the producer call notifyAll() after each item (alternatively, call notify() after each item, and notifyAll() after all items have been produced).
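Here is a rough sketch of those three changes applied to your classes (the volatile done flag and the isDone() accessor are my additions for signalling that production has finished; they are not part of your original code):
import java.util.Queue;
class Producer implements Runnable {
    private final Queue<Integer> sharedMessages;
    private volatile boolean done = false;

    Producer(Queue<Integer> sharedMessages) { this.sharedMessages = sharedMessages; }

    boolean isDone() { return done; }

    public void run() {
        for (int i = 0; i < 100; i++) {
            synchronized (sharedMessages) {         // small block: one item per lock acquisition
                System.out.println("Producing message " + i);
                sharedMessages.add(i);
                sharedMessages.notifyAll();         // wake any waiting consumer
            }
        }
        done = true;
        synchronized (sharedMessages) { sharedMessages.notifyAll(); } // let waiters re-check
    }
}

class Consumer implements Runnable {
    private final Queue<Integer> sharedMessages;
    private final Producer producer;

    Consumer(Queue<Integer> sharedMessages, Producer producer) {
        this.sharedMessages = sharedMessages;
        this.producer = producer;
    }

    public void run() {
        while (true) {
            synchronized (sharedMessages) {
                while (sharedMessages.isEmpty()) {
                    if (producer.isDone()) return;  // nothing left and nothing coming
                    try {
                        sharedMessages.wait();      // wait only while the queue is empty
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println(sharedMessages.remove() + " consumed by "
                        + Thread.currentThread().getName());
            }
        }
    }
}
Note that with this sketch the Consumer constructor also needs the Producer reference when the threads are created in main.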
You've already got two good answers, but if you want to compare your next solution to something, here's some code I wrote a while ago as a simple example. It's lightly tested but seems to work OK.
Notice that instead of a Queue I implement my own circular buffer. It does the same thing, but its implementation is a little closer to what you might see in a low-level (and optimized) implementation.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ProducerConsumer {
public static void main(String[] args) throws InterruptedException {
CircularBuffer buffer = new CircularBuffer();
Counter producer1 = new Counter( buffer, 1000 );
Counter producer2 = new Counter( buffer, 2000 );
Counter producer3 = new Counter( buffer, 3000 );
Counter producer4 = new Counter( buffer, 4000 );
ExecutorService exe = Executors.newCachedThreadPool();
exe.execute( producer1 );
exe.execute( producer2 );
exe.execute( producer3 );
exe.execute( producer4 );
Printer consumer = new Printer( buffer );
exe.execute( consumer );
Thread.sleep( 100 );// wait a bit
exe.shutdownNow();
exe.awaitTermination( 10, TimeUnit.SECONDS );
}
}
// Producer
class Counter implements Runnable {
private final CircularBuffer output;
private final int startingValue;
public Counter(CircularBuffer output, int startingValue) {
this.output = output;
this.startingValue = startingValue;
}
@Override
public void run() {
try {
for( int i = startingValue; ; i++ )
output.put(i);
} catch (InterruptedException ex) {
// exit...
}
}
}
class CircularBuffer {
private final int[] buffer = new int[20];
private int head;
private int count;
public synchronized void put( int i ) throws InterruptedException {
while( count == buffer.length ) wait();// full
buffer[head++] = i;
head %= buffer.length;
count++;
notifyAll();
}
public synchronized int get() throws InterruptedException {
while( count == 0 ) wait(); // empty
int tail = (head - count) % buffer.length;
tail = (tail < 0) ? tail + buffer.length : tail;
int retval = buffer[tail];
count--;
notifyAll();
return retval;
}
}
// Consumer
class Printer implements Runnable {
private final CircularBuffer input;
public Printer(CircularBuffer input) {
this.input = input;
}
@Override
public void run() {
try {
for( ;; )
System.out.println( input.get() );
} catch (InterruptedException ex) {
// exit...
}
}
}
I would like to run threads one after another, alternating between them.
Is there an alternative way to achieve this Marathon behaviour with Java 8?
Here is the version without using ExecutorService:
public class Marathon {
public static void main(String[] args) throws InterruptedException {
Runnable task = () -> {
for (int i = 0; i < 10; i++) {
System.out.println(Thread.currentThread().getName()+ " is running... " + i);
try {
Thread.sleep(200);
} catch (InterruptedException e) {
}
}
};
Thread t1 = new Thread(task, "Mary");
Thread t2 = new Thread(task, "David");
t1.start();
t1.join(100);
t2.start();
}
}
Output:
Mary is running... 0
David is running... 0
Mary is running... 1
David is running... 1
...
The following code doesn't behave like Marathon:
public class Marathon2 {
public static void main(String[] args)
throws InterruptedException, ExecutionException, TimeoutException {
ExecutorService service = null;
Runnable task = () -> {
try {
for (int i = 0; i < 10; i++) {
System.out.println(Thread.currentThread().getName()
+ " is running... " + i);
}
TimeUnit.MILLISECONDS.sleep(100);
} catch (InterruptedException e) {
}
};
try {
service = Executors.newFixedThreadPool(4);
Future<?> job1 = service.submit(task);
job1.get(500, TimeUnit.MILLISECONDS);
Future<?> job2 = service.submit(task);
} finally {
if (service != null)
service.shutdown();
}
}
}
Output:
pool-1-thread-1 is running... 0
...
pool-1-thread-1 is running... 9
pool-1-thread-2 is running... 0
...
pool-1-thread-2 is running... 9
Is it possible to do this with ExecutorService?
Expected:
pool-1-thread-1 is running... 0
pool-1-thread-2 is running... 0
...
pool-1-thread-1 is running... 9
pool-1-thread-2 is running... 9
Without dealing with any threads or with Executors directly, you can do it with a CompletableFuture:
Runnable runnable = () -> System.out.println("hi");
Runnable runnable1 = () -> System.out.println("there");
CompletableFuture<Void> all = CompletableFuture.runAsync(runnable).thenRun(runnable1);
all.whenComplete((x,th) -> {
System.out.println("both done");
});
Note that this runs on the common ForkJoinPool by default, but you can still provide your own executor.
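For instance, a small sketch of supplying your own pool (reusing runnable and runnable1 from above; the fixed-size pool is just an assumption):
ExecutorService pool = Executors.newFixedThreadPool(2);
CompletableFuture<Void> all = CompletableFuture
        .runAsync(runnable, pool)       // first task runs on our pool
        .thenRunAsync(runnable1, pool); // second task also runs on our pool, after the first
all.whenComplete((x, th) -> System.out.println("both done"));
all.join();      // wait here so the pool isn't shut down while a stage is still pending
pool.shutdown();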
The two classes are not doing the same thing. You can probably reach the solution yourself by comparing them closely. First, do you know exactly how your first class (Marathon) works? In particular, what do you think the following line does?
t1.join(100);
The thread t1, which has just started running, has just gone into a loop which counts up once every 200 milliseconds. The join(100) call simply causes the current (main) thread to wait 100 milliseconds. You will achieve exactly the same results by replacing that line with this one:
Thread.sleep(100);
Now that the main thread has slept for 100 milliseconds, it starts thread t2. Now the two threads are running in parallel, and every 200 milliseconds both threads output a line, the second thread delayed by 100 milliseconds so that they appear evenly interleaved.
Now let's look at your second method, Marathon2. A few differences from the first class are immediately obvious:
The sleep in the Runnable is outside the loop, instead of inside.
The sleep in the Runnable is only 100 milliseconds, instead of 200.
The maximum wait in the main thread is 500 milliseconds, instead of 100.
The Future.get method causes a TimeoutException instead of just continuing. We can simply replace this call with a sleep anyway, since that's all that the first class does.
So, ironing out the differences, we get the following Marathon2 class which behaves in a similar manner to the other class (Marathon), with interleaved threads:
public class Marathon2 {
public static void main(String[] args)
throws InterruptedException, ExecutionException, TimeoutException {
ExecutorService service = null;
Runnable task = () -> {
try {
for (int i = 0; i < 10; i++) {
System.out.println(Thread.currentThread().getName()
+ " is running... " + i);
TimeUnit.MILLISECONDS.sleep(200);
}
} catch (InterruptedException e) {
}
};
try {
service = Executors.newFixedThreadPool(4);
Future<?> job1 = service.submit(task);
TimeUnit.MILLISECONDS.sleep(100);
Future<?> job2 = service.submit(task);
} finally {
if (service != null)
service.shutdown();
}
}
}
I was looking at the ThreadPoolExecutor class and I found that it allows to specify the maximum pool size and the core pool size.
I understand, a little, about when to change the core and maximum pool sizes based on the answer here: When is specifying separate core and maximum pool sizes in ThreadPoolExecutor a good idea?
However, I would like to know what are these 'core threads'. I always get 0 when I use the getCorePoolSize() method of a ThreadPoolExecutor
SSCCE here:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
public class PoolSize {
public static void main(String[] args) {
// Create a cached thread pool
ExecutorService cachedPool = Executors.newCachedThreadPool();
// Cast the object to its class type
ThreadPoolExecutor pool = (ThreadPoolExecutor) cachedPool;
// Create a Callable object of anonymous class
Callable<String> aCallable = new Callable<String>(){
String result = "Callable done !";
@Override
public String call() throws Exception {
// Print a value
System.out.println("Callable at work !");
// Sleep (0 ms here, i.e. no real delay)
Thread.sleep(0);
return result;
}
};
// Create a Runnable object of anonymous class
Runnable aRunnable = new Runnable(){
@Override
public void run() {
try {
// Print a value
System.out.println("Runnable at work !");
// Sleep (0 ms here, i.e. no real delay)
Thread.sleep(0);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
// Submit the two tasks for execution
Future<String> callableFuture = cachedPool.submit(aCallable);
Future<?> runnableFuture = cachedPool.submit(aRunnable);
System.out.println("Core threads: " + pool.getCorePoolSize());
System.out.println("Largest number of simultaneous executions: "
+ pool.getLargestPoolSize());
System.out.println("Maximum number of allowed threads: "
+ pool.getMaximumPoolSize());
System.out.println("Current threads in the pool: "
+ pool.getPoolSize());
System.out.println("Currently executing threads: "
+ pool.getTaskCount());
pool.shutdown(); // shut down
}
}
The core threads are the minimum number of threads that are kept running, just in case you want to pass the pool a task. The cached pool by default has a core size of 0, as you might expect.
For a fixed thread pool, the core and the maximum are the same, i.e. whatever you set the fixed size to.
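You can see the difference directly with a quick sketch (the cached pool's maximum is Integer.MAX_VALUE):
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
public class CorePoolSizeDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        ThreadPoolExecutor fixed = (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
        System.out.println(cached.getCorePoolSize());    // 0
        System.out.println(cached.getMaximumPoolSize()); // 2147483647
        System.out.println(fixed.getCorePoolSize());     // 4
        System.out.println(fixed.getMaximumPoolSize());  // 4
        cached.shutdown();
        fixed.shutdown();
    }
}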
The core threads are just standard threads, but they are always kept alive in the pool, whereas the other, non-core threads end their lives once they have been idle for longer than the keep-alive time.
But how can these core threads always stay alive? Because they are always waiting to take a task from the workQueue shared within the pool. The workQueue is a BlockingQueue, and its take() method blocks the current thread indefinitely until a task becomes available.
Here comes the key point: which threads become the core threads? They are not necessarily the first ones started or the last ones, but the ones (up to corePoolSize of them) that last the longest. This is easier to understand from the code:
private Runnable getTask() {
boolean timedOut = false; // Did the last poll() time out?
for (;;) {
int c = ctl.get();
int rs = runStateOf(c);
// Check if queue empty only if necessary.
if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
decrementWorkerCount();
return null;
}
int wc = workerCountOf(c);
//------------- key code ------------------
// Are workers subject to culling?
boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
if ((wc > maximumPoolSize || (timed && timedOut))
&& (wc > 1 || workQueue.isEmpty())) {
if (compareAndDecrementWorkerCount(c))
return null;
continue;
}
//------------- key code ------------------
try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
}
}
What I just said above assumes allowCoreThreadTimeOut is set to false.
Actually, I prefer to call core threads "core workers".
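If you do want even the core workers to time out, here is a sketch of how that can be switched on (note that a zero keep-alive time is not allowed when enabling this, so a pool from newFixedThreadPool needs its keep-alive raised first):
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class CoreTimeoutDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
        pool.setKeepAliveTime(30, TimeUnit.SECONDS); // must be > 0 before enabling the timeout
        pool.allowCoreThreadTimeOut(true);           // now even core workers use the timed poll()
        // submit tasks as usual; any worker idle for more than 30 seconds will terminate
        pool.shutdown();
    }
}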
I was researching the producer-consumer design pattern with threads in Java. With the introduction of the BlockingQueue data structure in Java 5 this is now much simpler, because BlockingQueue provides the flow control implicitly through its blocking put() and take() methods: you no longer need wait() and notify() to communicate between producer and consumer. put() blocks if the queue is full (in the case of a bounded queue) and take() blocks if the queue is empty. I have developed the program below using BlockingQueue, but I would also like to learn the old-style approach with wait() and notify().
Could you please advise how this can be implemented in the classical way, using wait() and notify() to communicate between the producer and consumer threads and blocking each of them on its own condition (full queue and empty queue)?
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ProducerConsumerPattern {
public static void main(String args[]){
//Creating shared object
BlockingQueue sharedQueue = new LinkedBlockingQueue();
//Creating Producer and Consumer Thread
Thread prodThread = new Thread(new Producer(sharedQueue));
Thread consThread = new Thread(new Consumer(sharedQueue));
//Starting producer and Consumer thread
prodThread.start();
consThread.start();
}
}
//Producer Class in java
class Producer implements Runnable {
private final BlockingQueue sharedQueue;
public Producer(BlockingQueue sharedQueue) {
this.sharedQueue = sharedQueue;
}
@Override
public void run() {
for(int i=0; i<10; i++){
try {
System.out.println("Produced: " + i);
sharedQueue.put(i);
} catch (InterruptedException ex) {
Logger.getLogger(Producer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
//Consumer Class in Java
class Consumer implements Runnable{
private final BlockingQueue sharedQueue;
public Consumer (BlockingQueue sharedQueue) {
this.sharedQueue = sharedQueue;
}
@Override
public void run() {
while(true){
try {
System.out.println("Consumed: "+ sharedQueue.take());
} catch (InterruptedException ex) {
Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
Output:
Produced: 0
Produced: 1
Consumed: 0
Produced: 2
Consumed: 1
Produced: 3
Consumed: 2
Produced: 4
Consumed: 3
Produced: 5
Consumed: 4
Produced: 6
Consumed: 5
Produced: 7
Consumed: 6
Produced: 8
Consumed: 7
Produced: 9
Consumed: 8
Consumed: 9
If you want to know another way to do this, try using an ExecutorService:
public static void main(String... args) {
ExecutorService service = Executors.newSingleThreadExecutor();
for (int i = 0; i < 100; i++) {
System.out.println("Produced: " + i);
final int finalI = i;
service.submit(new Runnable() {
@Override
public void run() {
System.out.println("Consumed: " + finalI);
}
});
}
service.shutdown();
}
With just 10 tasks the producer can be finished before the consumer starts. If you try 100 tasks you may find them interleaved.
If you want to understand how a BlockingQueue works, for educational purposes, you can always have a look at its source code.
The simplest way could be to synchronize the offer() and take() methods: once the queue is full and someone is trying to offer() an element, invoke wait(); when someone takes an element, notify() the sleeping thread. (The same idea applies when trying to take() from an empty queue.)
Remember to make sure all your wait() calls are nested in loops that check whether the condition is met each time the thread is awakened.
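For educational purposes, a minimal bounded queue along those lines might look like the sketch below (the class and method names are made up for illustration, not the real java.util.concurrent API):
import java.util.LinkedList;
class SimpleBlockingQueue<E> {
    private final LinkedList<E> items = new LinkedList<>();
    private final int capacity;

    SimpleBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void offer(E e) throws InterruptedException {
        while (items.size() == capacity) { // re-check the condition after every wake-up
            wait();                        // queue full: wait until a consumer takes something
        }
        items.add(e);
        notifyAll();                       // wake anyone waiting on "queue empty"
    }

    public synchronized E take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                        // queue empty: wait until a producer offers something
        }
        E e = items.removeFirst();
        notifyAll();                       // wake anyone waiting on "queue full"
        return e;
    }
}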
If you are planning to implement it from scratch for production purposes, I'd strongly argue against it. You should use existing, tested libraries and components as much as possible.
I can do this wait-notify stuff in my sleep (or at least I think I can). Java 1.4 source provided beautiful examples of all this, but they've switched to doing everything with atomics and it's a lot more complicated now. The wait-notify does provide flexibility and power, though the other methods can shield you from the dangers of concurrency and make for simpler code.
To do this, you want some fields, like so:
private final ConcurrentLinkedQueue<Integer> sharedQueue =
new ConcurrentLinkedQueue<>();
private volatile boolean waitFlag = true;
Your Producer.run would look like this:
public void run() {
for (int i = 0; i < 100000; i++) {
System.out.println( "Produced: " + i );
sharedQueue.add( new Integer( i ) );
if (waitFlag) // volatile access is cheaper than synch.
synchronized (sharedQueue) { sharedQueue.notifyAll(); }
}
}
And Consumer.run:
public void run() {
waitFlag = false;
for (;;) {
Integer ic = sharedQueue.poll();
if (ic == null) {
synchronized (sharedQueue) {
waitFlag = true;
// An add might have come through before waitFlag was set.
ic = sharedQueue.poll();
if (ic == null) {
try { sharedQueue.wait(); }
catch (InterruptedException ex) {}
waitFlag = false;
continue;
}
waitFlag = true;
}
}
System.out.println( "Consumed: " + ic );
}
}
This keeps synchronizing to a minimum. If all goes well, there's only one look at a volatile field per add. You should be able to run any number of producers simultaneously. (Consumers would be trickier; you'd have to give up waitFlag.) You could use a different object for wait/notifyAll.
What is a way to simply wait for all threaded process to finish? For example, let's say I have:
public class DoSomethingInAThread implements Runnable{
public static void main(String[] args) {
for (int n=0; n<1000; n++) {
Thread t = new Thread(new DoSomethingInAThread());
t.start();
}
// wait for all threads' run() methods to complete before continuing
}
public void run() {
// do something here
}
}
How do I alter this so the main() method pauses at the comment until all threads' run() methods exit? Thanks!
You put all threads in an array, start them all, and then have a loop
for(i = 0; i < threads.length; i++)
threads[i].join();
Each join will block until the respective thread has completed. Threads may complete in a different order than the one in which you join them, but that's not a problem: when the loop exits, all threads are completed.
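Putting it together, a sketch of the array-and-join approach applied to the code from the question:
public class DoSomethingInAThread implements Runnable {
    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[1000];
        for (int n = 0; n < threads.length; n++) {
            threads[n] = new Thread(new DoSomethingInAThread());
            threads[n].start();
        }
        // wait for all threads' run() methods to complete before continuing
        for (Thread t : threads) {
            t.join(); // blocks until this particular thread has finished
        }
        System.out.println("All threads have finished");
    }

    public void run() {
        // do something here
    }
}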
One way would be to make a List of Threads, create and launch each thread, while adding it to the list. Once everything is launched, loop back through the list and call join() on each one. It doesn't matter what order the threads finish executing in, all you need to know is that by the time that second loop finishes executing, every thread will have completed.
A better approach is to use an ExecutorService and its associated methods:
List<Callable<Object>> callables = ... // assemble list of Callables here
// (Callable is like Runnable but can return a value)
ExecutorService execSvc = Executors.newCachedThreadPool();
List<Future<Object>> results = execSvc.invokeAll(callables);
// Note: you may not care about the return values, in which case don't bother saving them
Using an ExecutorService (and all of the new stuff from Java 5's concurrency utilities) is incredibly flexible, and the above example barely even scratches the surface.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class DoSomethingInAThread implements Runnable
{
public static void main(String[] args) throws ExecutionException, InterruptedException
{
//limit the number of actual threads
int poolSize = 10;
ExecutorService service = Executors.newFixedThreadPool(poolSize);
List<Future<?>> futures = new ArrayList<Future<?>>();
for (int n = 0; n < 1000; n++)
{
Future<?> f = service.submit(new DoSomethingInAThread());
futures.add(f);
}
// wait for all tasks to complete before continuing
for (Future<?> f : futures)
{
f.get();
}
//shut down the executor service so that this thread can exit
service.shutdownNow();
}
public void run()
{
// do something here
}
}
Instead of join(), which is an older API, you can use CountDownLatch. I have modified your code as below to fulfil your requirement.
import java.util.concurrent.*;
class DoSomethingInAThread implements Runnable{
CountDownLatch latch;
public DoSomethingInAThread(CountDownLatch latch){
this.latch = latch;
}
public void run() {
try{
System.out.println("Do some thing");
latch.countDown();
}catch(Exception err){
err.printStackTrace();
}
}
}
public class CountDownLatchDemo {
public static void main(String[] args) {
try{
CountDownLatch latch = new CountDownLatch(1000);
for (int n=0; n<1000; n++) {
Thread t = new Thread(new DoSomethingInAThread(latch));
t.start();
}
latch.await();
System.out.println("In Main thread after completion of 1000 threads");
}catch(Exception err){
err.printStackTrace();
}
}
}
Explanation:
The CountDownLatch is initialized with a count of 1000, as per your requirement.
Each worker thread DoSomethingInAThread decrements the CountDownLatch, which is passed in its constructor.
The main thread in CountDownLatchDemo calls await() and blocks until the count becomes zero. Once the count reaches zero, you will see the line below in the output.
In Main thread after completion of 1000 threads
More info from the Oracle documentation page:
public void await()
throws InterruptedException
Causes the current thread to wait until the latch has counted down to zero, unless the thread is interrupted.
Refer to related SE question for other options:
wait until all threads finish their work in java
Avoid the Thread class altogether and instead use the higher-level abstractions provided in java.util.concurrent.
The ExecutorService interface provides the method invokeAll, which seems to do just what you want.
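A small sketch of what that could look like (the no-op Callable bodies are placeholders):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class InvokeAllDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int n = 0; n < 1000; n++) {
            tasks.add(() -> {
                // do something here
                return null;
            });
        }
        // invokeAll only returns once every task has completed (or been cancelled)
        List<Future<Void>> results = pool.invokeAll(tasks);
        pool.shutdown();
    }
}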
Consider using java.util.concurrent.CountDownLatch. There are examples in its javadocs.
Depending on your needs, you may also want to check out the classes CountDownLatch and CyclicBarrier in the java.util.concurrent package. They can be useful if you want your threads to wait for each other, or if you want more fine-grained control over the way your threads execute (e.g., waiting in their internal execution for another thread to set some state). You could also use a CountDownLatch to signal all of your threads to start at the same time, instead of starting them one by one as you iterate through your loop. The standard API docs have an example of this, plus using another CountDownLatch to wait for all threads to complete their execution.
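A sketch of that two-latch pattern (a start gate so every worker begins at the same time, plus a done gate to wait for all of them), loosely following the example in the CountDownLatch javadocs:
import java.util.concurrent.CountDownLatch;
public class StartAndWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 10;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    startSignal.await();   // every worker blocks here...
                    // do the real work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    doneSignal.countDown();
                }
            }).start();
        }
        startSignal.countDown();           // ...until the main thread releases them all at once
        doneSignal.await();                // then wait for every worker to finish
        System.out.println("All workers finished");
    }
}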
As Martin K suggested, java.util.concurrent.CountDownLatch seems to be a better solution for this. Just adding an example for it:
public class CountDownLatchDemo
{
public static void main (String[] args)
{
int noOfThreads = 5;
// Declare the count down latch based on the number of threads you need
// to wait on
final CountDownLatch executionCompleted = new CountDownLatch(noOfThreads);
for (int i = 0; i < noOfThreads; i++)
{
new Thread()
{
@Override
public void run ()
{
System.out.println("I am executed by :" + Thread.currentThread().getName());
try
{
// Dummy sleep
Thread.sleep(3000);
// One thread has completed its job
executionCompleted.countDown();
}
catch (InterruptedException e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}.start();
}
try
{
// Wait till the count down latch opens.In the given case till five
// times countDown method is invoked
executionCompleted.await();
System.out.println("All over");
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
If you make a list of the threads, you can loop through them and .join() against each, and your loop will finish when all the threads have. I haven't tried it though.
http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#join()
Create the thread object inside the first for loop.
for (int i = 0; i < threads.length; i++) {
threads[i] = new Thread(new Runnable() {
public void run() {
// some code to run in parallel
}
});
threads[i].start();
}
And then do what everyone here is saying:
for(i = 0; i < threads.length; i++)
threads[i].join();
You can do it with a ThreadGroup object and its activeCount() method:
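A rough sketch of that approach (note that activeCount() is only an estimate, so polling it is less reliable than join() or a latch):
public class ThreadGroupWait {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("workers");
        for (int n = 0; n < 1000; n++) {
            new Thread(group, () -> {
                // do something here
            }).start();
        }
        // poll until every thread in the group has terminated
        while (group.activeCount() > 0) {
            Thread.sleep(10);
        }
        System.out.println("All threads in the group have finished");
    }
}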
As an alternative to CountDownLatch you can also use CyclicBarrier e.g.
public class ThreadWaitEx {
static CyclicBarrier barrier = new CyclicBarrier(100, new Runnable(){
public void run(){
System.out.println("clean up job after all tasks are done.");
}
});
public static void main(String[] args) {
for (int i = 0; i < 100; i++) {
Thread t = new Thread(new MyCallable(barrier));
t.start();
}
}
}
class MyCallable implements Runnable{
private CyclicBarrier b = null;
public MyCallable(CyclicBarrier b){
this.b = b;
}
@Override
public void run(){
try {
//do something
System.out.println(Thread.currentThread().getName()+" is waiting for barrier after completing his job.");
b.await();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (BrokenBarrierException e) {
e.printStackTrace();
}
}
}
To use CyclicBarrier in this case, barrier.await() should be the last statement, i.e. executed when your thread is done with its job. A CyclicBarrier can be used again after calling its reset() method. To quote the javadocs:
A CyclicBarrier supports an optional Runnable command that is run once per barrier point, after the last thread in the party arrives, but before any threads are released. This barrier action is useful for updating shared-state before any of the parties continue.
join() was not helpful to me. See this sample in Kotlin:
val timeInMillis = System.currentTimeMillis()
ThreadUtils.startNewThread(Runnable {
for (i in 1..5) {
val t = Thread(Runnable {
Thread.sleep(50)
var a = i
kotlin.io.println(Thread.currentThread().name + "|" + "a=$a")
Thread.sleep(200)
for (j in 1..5) {
a *= j
Thread.sleep(100)
kotlin.io.println(Thread.currentThread().name + "|" + "$a*$j=$a")
}
kotlin.io.println(Thread.currentThread().name + "|TaskDurationInMillis = " + (System.currentTimeMillis() - timeInMillis))
})
t.start()
}
})
The result:
Thread-5|a=5
Thread-1|a=1
Thread-3|a=3
Thread-2|a=2
Thread-4|a=4
Thread-2|2*1=2
Thread-3|3*1=3
Thread-1|1*1=1
Thread-5|5*1=5
Thread-4|4*1=4
Thread-1|2*2=2
Thread-5|10*2=10
Thread-3|6*2=6
Thread-4|8*2=8
Thread-2|4*2=4
Thread-3|18*3=18
Thread-1|6*3=6
Thread-5|30*3=30
Thread-2|12*3=12
Thread-4|24*3=24
Thread-4|96*4=96
Thread-2|48*4=48
Thread-5|120*4=120
Thread-1|24*4=24
Thread-3|72*4=72
Thread-5|600*5=600
Thread-4|480*5=480
Thread-3|360*5=360
Thread-1|120*5=120
Thread-2|240*5=240
Thread-1|TaskDurationInMillis = 765
Thread-3|TaskDurationInMillis = 765
Thread-4|TaskDurationInMillis = 765
Thread-5|TaskDurationInMillis = 765
Thread-2|TaskDurationInMillis = 765
Now let me use the join() for threads:
val timeInMillis = System.currentTimeMillis()
ThreadUtils.startNewThread(Runnable {
for (i in 1..5) {
val t = Thread(Runnable {
Thread.sleep(50)
var a = i
kotlin.io.println(Thread.currentThread().name + "|" + "a=$a")
Thread.sleep(200)
for (j in 1..5) {
a *= j
Thread.sleep(100)
kotlin.io.println(Thread.currentThread().name + "|" + "$a*$j=$a")
}
kotlin.io.println(Thread.currentThread().name + "|TaskDurationInMillis = " + (System.currentTimeMillis() - timeInMillis))
})
t.start()
t.join()
}
})
And the result:
Thread-1|a=1
Thread-1|1*1=1
Thread-1|2*2=2
Thread-1|6*3=6
Thread-1|24*4=24
Thread-1|120*5=120
Thread-1|TaskDurationInMillis = 815
Thread-2|a=2
Thread-2|2*1=2
Thread-2|4*2=4
Thread-2|12*3=12
Thread-2|48*4=48
Thread-2|240*5=240
Thread-2|TaskDurationInMillis = 1568
Thread-3|a=3
Thread-3|3*1=3
Thread-3|6*2=6
Thread-3|18*3=18
Thread-3|72*4=72
Thread-3|360*5=360
Thread-3|TaskDurationInMillis = 2323
Thread-4|a=4
Thread-4|4*1=4
Thread-4|8*2=8
Thread-4|24*3=24
Thread-4|96*4=96
Thread-4|480*5=480
Thread-4|TaskDurationInMillis = 3078
Thread-5|a=5
Thread-5|5*1=5
Thread-5|10*2=10
Thread-5|30*3=30
Thread-5|120*4=120
Thread-5|600*5=600
Thread-5|TaskDurationInMillis = 3833
As is clear, when we use join():
The threads are running sequentially.
The first sample takes 765 Milliseconds while the second sample takes 3833 Milliseconds.
Our solution to prevent blocking other threads was creating an ArrayList:
val threads = ArrayList<Thread>()
Now when we want to start a new thread, we must add it to the ArrayList:
addThreadToArray(
ThreadUtils.startNewThread(Runnable {
...
})
)
The addThreadToArray function:
@Synchronized
fun addThreadToArray(th: Thread) {
threads.add(th)
}
The startNewThread function:
fun startNewThread(runnable: Runnable) : Thread {
val th = Thread(runnable)
th.isDaemon = false
th.priority = Thread.MAX_PRIORITY
th.start()
return th
}
Check the completion of the threads as below everywhere it's needed:
val notAliveThreads = ArrayList<Thread>()
for (t in threads)
if (!t.isAlive)
notAliveThreads.add(t)
threads.removeAll(notAliveThreads)
if (threads.size == 0){
// The size is 0 -> there is no alive threads.
}
The problem with:
for(i = 0; i < threads.length; i++)
threads[i].join();
...is that threads[i + 1] can never be joined before threads[i].
Apart from the latch-based solutions, all of the approaches here share this limitation.
No one here has (yet) mentioned ExecutorCompletionService; it lets you handle threads/tasks in the order they complete:
public class ExecutorCompletionService<V>
extends Object
implements CompletionService<V>
A CompletionService that uses a supplied Executor to execute tasks. This class arranges that submitted tasks are, upon completion, placed on a queue accessible using take. The class is lightweight enough to be suitable for transient use when processing groups of tasks.
Usage Examples.
Suppose you have a set of solvers for a certain problem, each returning a value of some type Result, and would like to run them concurrently, processing the results of each of them that return a non-null value, in some method use(Result r). You could write this as:
void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException, ExecutionException {
CompletionService<Result> cs = new ExecutorCompletionService<>(e);
solvers.forEach(cs::submit);
for (int i = solvers.size(); i > 0; i--) {
Result r = cs.take().get();
if (r != null)
use(r);
}
}
Suppose instead that you would like to use the first non-null result of the set of tasks, ignoring any that encounter exceptions, and cancelling all other tasks when the first one is ready:
void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException {
CompletionService<Result> cs = new ExecutorCompletionService<>(e);
int n = solvers.size();
List<Future<Result>> futures = new ArrayList<>(n);
Result result = null;
try {
solvers.forEach(solver -> futures.add(cs.submit(solver)));
for (int i = n; i > 0; i--) {
try {
Result r = cs.take().get();
if (r != null) {
result = r;
break;
}
} catch (ExecutionException ignore) {}
}
} finally {
futures.forEach(future -> future.cancel(true));
}
if (result != null)
use(result);
}
Since: 1.5 (!)
Assuming use(r) (of Example 1) is also asynchronous, this gives us a big advantage.