I was researching producer-consumer design patterns with regard to threads in Java. With the introduction of the BlockingQueue data structure in Java 5, this is now much simpler, because BlockingQueue provides the control implicitly through its blocking methods put() and take(): you no longer need wait() and notify() to communicate between the Producer and the Consumer. put() will block if the queue is full (in the case of a bounded queue), and take() will block if the queue is empty. Below is the program I have developed, but please also show me the old-style approach with wait() and notify(); I want to implement the same logic in that style as well.
Folks, please advise how this can be implemented in the classical way, using wait() and notify() to communicate between the Producer and Consumer threads and blocking each of them on its own condition (full queue and empty queue).
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ProducerConsumerPattern {
public static void main(String args[]){
//Creating shared object
BlockingQueue<Integer> sharedQueue = new LinkedBlockingQueue<>();
//Creating Producer and Consumer Thread
Thread prodThread = new Thread(new Producer(sharedQueue));
Thread consThread = new Thread(new Consumer(sharedQueue));
//Starting producer and Consumer thread
prodThread.start();
consThread.start();
}
}
//Producer Class in java
class Producer implements Runnable {
private final BlockingQueue<Integer> sharedQueue;
public Producer(BlockingQueue<Integer> sharedQueue) {
this.sharedQueue = sharedQueue;
}
@Override
public void run() {
for(int i=0; i<10; i++){
try {
System.out.println("Produced: " + i);
sharedQueue.put(i);
} catch (InterruptedException ex) {
Logger.getLogger(Producer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
//Consumer Class in Java
class Consumer implements Runnable{
private final BlockingQueue<Integer> sharedQueue;
public Consumer (BlockingQueue<Integer> sharedQueue) {
this.sharedQueue = sharedQueue;
}
@Override
public void run() {
while(true){
try {
System.out.println("Consumed: "+ sharedQueue.take());
} catch (InterruptedException ex) {
Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
Output:
Produced: 0
Produced: 1
Consumed: 0
Produced: 2
Consumed: 1
Produced: 3
Consumed: 2
Produced: 4
Consumed: 3
Produced: 5
Consumed: 4
Produced: 6
Consumed: 5
Produced: 7
Consumed: 6
Produced: 8
Consumed: 7
Produced: 9
Consumed: 8
Consumed: 9
If you want to know another way to do this try using an ExecutorService
public static void main(String... args) {
ExecutorService service = Executors.newSingleThreadExecutor();
for (int i = 0; i < 100; i++) {
System.out.println("Produced: " + i);
final int finalI = i;
service.submit(new Runnable() {
@Override
public void run() {
System.out.println("Consumed: " + finalI);
}
});
}
service.shutdown();
}
With just 10 tasks the producer can be finished before the consumer starts. If you try 100 tasks you may find them interleaved.
If you want to understand how a BlockingQueue works, for educational purposes, you can always have a look at its source code.
The simplest way could be to synchronize the offer() and take() methods and, once the queue is full and someone tries to offer() an element, invoke wait(). When someone takes an element, notify() the sleeping thread. (The same idea applies when trying to take() from an empty queue.)
Remember to make sure all your wait() calls are nested in loops that check whether the condition is met each time the thread is awakened.
If you are planning to implement it from scratch for production purposes, I'd strongly argue against it. Use existing, tested libraries and components as much as possible.
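For educational purposes only, here is a minimal sketch of that idea. The BoundedBuffer name and its capacity parameter are made up for the example, and this is not a replacement for the tested java.util.concurrent classes:

import java.util.LinkedList;
import java.util.Queue;

// Minimal wait/notify bounded queue sketch; a single intrinsic lock on 'this'
// guards the queue, and both conditions (full, empty) share its wait-set.
class BoundedBuffer<T> {
    private final Queue<T> items = new LinkedList<>();
    private final int capacity;

    BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void offer(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                 // full: block the producer
        }
        items.add(item);
        notifyAll();                // wake any waiting consumer
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                 // empty: block the consumer
        }
        T item = items.remove();
        notifyAll();                // wake any waiting producer
        return item;
    }
}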
I can do this wait-notify stuff in my sleep (or at least I think I can). The Java 1.4 source provided beautiful examples of all this, but it has since switched to doing everything with atomics and is a lot more complicated now. Wait-notify does provide flexibility and power, though the other approaches can shield you from the dangers of concurrency and make for simpler code.
To do this, you want some fields, like so:
private final ConcurrentLinkedQueue<Integer> sharedQueue =
new ConcurrentLinkedQueue<>();
private volatile boolean waitFlag = true;
Your Producer.run would look like this:
public void run() {
for (int i = 0; i < 100000; i++) {
System.out.println( "Produced: " + i );
sharedQueue.add( new Integer( i ) );
if (waitFlag) // volatile access is cheaper than synch.
synchronized (sharedQueue) { sharedQueue.notifyAll(); }
}
}
And Consumer.run:
public void run() {
waitFlag = false;
for (;;) {
Integer ic = sharedQueue.poll();
if (ic == null) {
synchronized (sharedQueue) {
waitFlag = true;
// An add might have come through before waitFlag was set.
ic = sharedQueue.poll();
if (ic == null) {
try { sharedQueue.wait(); }
catch (InterruptedException ex) {}
waitFlag = false;
continue;
}
waitFlag = false; // got an item without waiting; producer need not notify
}
}
System.out.println( "Consumed: " + ic );
}
}
This keeps synchronization to a minimum. If all goes well, there's only one read of a volatile field per add. You should be able to run any number of producers simultaneously. (Multiple consumers would be trickier; you'd have to give up waitFlag.) You could also use a different object for wait/notifyAll.
Related
I found myself puzzled by this behaviour of wait() and notifyAll():
import java.util.LinkedList;
import java.util.Queue;
class Producer implements Runnable {
Queue<Integer> sharedMessages;
Integer i = 0;
Producer(Queue<Integer> sharedMessages) {
this.sharedMessages = sharedMessages;
}
public void produce(Integer i) {
synchronized (sharedMessages){
System.out.println("Producing message " + i);
this.sharedMessages.add(i);
}
}
public void run(){
synchronized (sharedMessages) {
while (i < 100) {
produce(i++);
}
}
}
}
Consumer:
class Consumer implements Runnable{
Queue<Integer> sharedMessages;
Consumer(Queue<Integer> sharedMessages) {
this.sharedMessages = sharedMessages;
}
public void consume() {
synchronized (sharedMessages) {
System.out.println(sharedMessages.remove() + " consumed by " + Thread.currentThread().getName().toString());
}
}
@Override
public void run(){
synchronized (sharedMessages){
while(sharedMessages.size() > 0){
System.out.println(Thread.currentThread().getName() + " going to consume");
consume();
try {
sharedMessages.wait();
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName() + ": I was interrupted from my sleep!");
}
sharedMessages.notifyAll();
}
}
}
}
Here's how I'm creating the threads:
public class Main {
public static void main(String[] args) {
Queue<Integer> sharedMessages = new LinkedList<>();
new Thread(new Producer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
new Thread(new Consumer(sharedMessages)).start();
}
}
The output looks something like this:
Producing message 0
Producing message 1
...
Producing message 98
Producing message 99
Thread-6 going to consume
0 consumed by Thread-6
Thread-5 going to consume
1 consumed by Thread-5
Thread-4 going to consume
2 consumed by Thread-4
Thread-3 going to consume
3 consumed by Thread-3
Thread-2 going to consume
4 consumed by Thread-2
Thread-1 going to consume
5 consumed by Thread-1
And then the application keeps running, without consumers consuming any messages after 5.
Since wait() and notifyAll() are called on the same monitor, sharedMessages, and the while loop keeps on running, shouldn't the consumer threads keep on running, alternately consuming messages?
NOTE: This question is NOT about a Bounded Blocking Queue / typical Producer Consumer. I'm trying to gain a better understanding of wait() and notifyAll(), and this behaviour caught my attention. I am probably missing something here, and I am looking for answers pointing out what I am missing, NOT simply another way of doing it.
Your Producer thread locks the queue, then adds 100 messages without ever releasing the lock, and finally releases the lock before terminating, without ever notifying anyone.
Your 6 Consumer threads will each consume a message, then call wait().
At this point, the Producer thread has ended, and the 6 Consumer threads are waiting.
Who did you envision would be notifying them to wake them up?
Your current code produces this output because each of your consumers will consume at most one item, depending on whether it runs before or after your producer, and then wait unconditionally. There is nothing to wake them up, so they won't run again.
So the behaviour you see is expected for the code you have written: the only notifyAll() calls happen in the consumers after their unconditional wait, which in essence means never (except under spurious wake-ups). In addition, your overly large synchronized blocks prevent the threads from running concurrently.
The primary changes you need to make are (a sketch follows the list):
Reduce the size of your synchronized blocks (ideally each should cover only producing or consuming a single item)
Do not wait unconditionally; only wait when there are no items in the queue and the producer is still active (you will need a way to signal that the producer is done)
Have the producer call notifyAll() after each item (alternatively, call notify() after each item, and notifyAll() after all items have been produced).
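For illustration, a rough sketch of what those changes could look like, reusing the question's sharedMessages queue and adding a hypothetical shared done flag (for example a volatile boolean on a common object) to signal that production has finished:

// Producer.run(): lock only around producing one item, notify after each add.
public void run() {
    for (int i = 0; i < 100; i++) {
        synchronized (sharedMessages) {
            sharedMessages.add(i);
            sharedMessages.notifyAll();   // wake any waiting consumers
        }
    }
    synchronized (sharedMessages) {
        done = true;                      // hypothetical shared flag
        sharedMessages.notifyAll();       // let consumers observe the flag
    }
}

// Consumer.run(): wait only while the queue is empty and the producer is active.
public void run() {
    while (true) {
        synchronized (sharedMessages) {
            while (sharedMessages.isEmpty() && !done) {
                try {
                    sharedMessages.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            if (sharedMessages.isEmpty()) {
                return;                   // production finished and nothing left
            }
            System.out.println(sharedMessages.remove()
                    + " consumed by " + Thread.currentThread().getName());
        }
    }
}

With this shape, a consumer only sleeps when there is genuinely nothing to do, and every produced item is followed by a notifyAll() that can wake it.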
You've already got two good answers, but if you want to compare your next solution to something, here's some code I wrote a while ago as a simple example. It's only lightly tested but seems to work OK.
Notice that instead of a Queue I implement my own circular buffer. It does the same thing, but its implementation is a little closer to what you might see in a low-level (and optimized) class.
public class ProducerConsumer {
public static void main(String[] args) throws InterruptedException {
CircularBuffer buffer = new CircularBuffer();
Counter producer1 = new Counter( buffer, 1000 );
Counter producer2 = new Counter( buffer, 2000 );
Counter producer3 = new Counter( buffer, 3000 );
Counter producer4 = new Counter( buffer, 4000 );
ExecutorService exe = Executors.newCachedThreadPool();
exe.execute( producer1 );
exe.execute( producer2 );
exe.execute( producer3 );
exe.execute( producer4 );
Printer consumer = new Printer( buffer );
exe.execute( consumer );
Thread.sleep( 100 );// wait a bit
exe.shutdownNow();
exe.awaitTermination( 10, TimeUnit.SECONDS );
}
}
// Producer
class Counter implements Runnable {
private final CircularBuffer output;
private final int startingValue;
public Counter(CircularBuffer output, int startingValue) {
this.output = output;
this.startingValue = startingValue;
}
@Override
public void run() {
try {
for( int i = startingValue; ; i++ )
output.put(i);
} catch (InterruptedException ex) {
// exit...
}
}
}
class CircularBuffer {
private final int[] buffer = new int[20];
private int head;
private int count;
public synchronized void put( int i ) throws InterruptedException {
while( count == buffer.length ) wait();// full
buffer[head++] = i;
head %= buffer.length;
count++;
notifyAll();
}
public synchronized int get() throws InterruptedException {
while( count == 0 ) wait(); // empty
int tail = (head - count) % buffer.length;
tail = (tail < 0) ? tail + buffer.length : tail;
int retval = buffer[tail];
count--;
notifyAll();
return retval;
}
}
// Consumer
class Printer implements Runnable {
private final CircularBuffer input;
public Printer(CircularBuffer input) {
this.input = input;
}
@Override
public void run() {
try {
for( ;; )
System.out.println( input.get() );
} catch (InterruptedException ex) {
// exit...
}
}
}
When I first read about the BlockingQueue interface, I read that the producer blocks any further put() calls if the queue has no more space, and the opposite: take() blocks if there are no items to take. I thought that internally it works the same way as wait() and notify(). For example, when there are no more elements to read, wait() is called internally until the producer adds one more and calls notify(), or at least that's what we would do in the old producer/consumer pattern. BUT IT DOESN'T WORK LIKE THAT IN A BLOCKING QUEUE. How? What is the point? I am honestly surprised!
I will demonstrate:
public class Testing {
BlockingQueue<Integer> blockingQueue = new ArrayBlockingQueue<>(3);
synchronized void write() throws InterruptedException {
for (int i = 0; i < 6; i++) {
blockingQueue.put(i);
System.out.println("Added " + i);
Thread.sleep(1000);
}
}
synchronized void read() throws InterruptedException {
for (int i = 0; i < 6; i++) {
System.out.println("Took: " + blockingQueue.take());
Thread.sleep(3000);
}
}
}
class Test1 {
public static void main(String[] args) {
Testing testing = new Testing();
new Thread(new Runnable() {
@Override
public void run() {
try {
testing.write();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
try {
testing.read();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}).start();
}
}
OUTPUT:
Added 0
Added 1
Added 2
'program hangs'.
My question is: how do take() and put() BLOCK if they don't use wait() or notify() internally? Do they have some while loops that burn CPU cycles? I am frankly confused.
Here's the current implementation of ArrayBlockingQueue#put:
/**
* Inserts the specified element at the tail of this queue, waiting
* for space to become available if the queue is full.
*
* @throws InterruptedException {@inheritDoc}
* @throws NullPointerException {@inheritDoc}
*/
public void put(E e) throws InterruptedException {
Objects.requireNonNull(e);
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
try {
while (count == items.length)
notFull.await();
enqueue(e);
} finally {
lock.unlock();
}
}
You'll see that, instead of using wait() and notify(), it invokes notFull.await(); where notFull is a Condition.
The documentation of Condition states the following:
Condition factors out the Object monitor methods (wait, notify and notifyAll) into distinct objects to give the effect of having multiple wait-sets per object, by combining them with the use of arbitrary Lock implementations. Where a Lock replaces the use of synchronized methods and statements, a Condition replaces the use of the Object monitor methods.
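To make that concrete, here is a minimal bounded-buffer sketch in the same spirit. It is not the actual ArrayBlockingQueue source, just an illustration of a Lock with two Conditions; the class and field names are made up:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustration of Lock + two Conditions replacing synchronized/wait/notify.
class SimpleBoundedQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    SimpleBoundedQueue(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (items.size() == capacity) {
                notFull.await();          // wait on the "not full" wait-set
            }
            items.addLast(item);
            notEmpty.signal();            // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (items.isEmpty()) {
                notEmpty.await();         // wait on the "not empty" wait-set
            }
            T item = items.removeFirst();
            notFull.signal();             // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}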
If you go through the code below, you will get an idea of how the producer/consumer problem can be solved using the BlockingQueue interface.
Here you can see that the same queue is shared by the Producer and the Consumer.
And from the main class you start both the Producer and Consumer threads.
class Producer implements Runnable {
protected BlockingQueue blockingQueue = null;
public Producer(BlockingQueue blockingQueue) {
this.blockingQueue = blockingQueue;
}
@Override
public void run() {
for (int i = 0; i < 6; i++) {
try {
blockingQueue.put(i);
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Added " + i);
}
}
}
class Consumer implements Runnable {
protected BlockingQueue blockingQueue = null;
public Consumer(BlockingQueue blockingQueue) {
this.blockingQueue = blockingQueue;
}
@Override
public void run() {
for (int i = 0; i < 6; i++) {
try {
System.out.println("Took: " + blockingQueue.take());
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
class Test1 {
public static void main(String[] args) throws InterruptedException {
BlockingQueue queue = new ArrayBlockingQueue(3);
Producer producer = new Producer(queue);
Consumer consumer = new Consumer(queue);
new Thread(producer).start();
new Thread(consumer).start();
Thread.sleep(4000);
}
}
This code will print output like
Took: 0
Added 0
Added 1
Added 2
Took: 1
Added 3
Added 4
Took: 2
Added 5
Took: 3
Took: 4
Took: 5
(I'm sure some or all parts of my answer may be things you have already understood; in that case, please just consider them a clarification :)).
1. Why did your code example using BlockingQueue get to ‘program hangs’?
1.1 Conceptually
First of all, if we leave out implementation-level details such as wait(), notify(), etc. for a second, then conceptually all BlockingQueue implementations in Java do work to the specification, i.e. as you said:
‘Producer blocks any more put() calls in a queue if it has no more
space. And the opposite, it blocks method take(), if there are no
items to take.’
So, conceptually, the reason your code example hangs is that:
1.1.1.
the thread calling the (synchronized) write() runs first and alone, and until testing.write() returns in this thread, the second thread calling the (synchronized) read() will never have a chance to run — this is the essence of synchronized methods on the same object.
1.1.2.
Now, in your example, conceptually, testing.write() will never return: in that for loop, it will put the first 3 elements onto the queue and then kind of 'spin wait' for the second thread to consume/take some of these elements so it can put more, but that will never happen, due to the aforementioned reason in 1.1.1.
1.2 Programmatically
1.2.1.
(For the producer) In ArrayBlockingQueue#put, the 'spin wait' I mentioned in 1.1.2 takes the form of:
while (count == items.length) notFull.await();
1.2.2.
(For the consumer) ArrayBlockingQueue#take calls dequeue(), which in turn calls notFull.signal(), ending the 'spin wait' in 1.2.1.
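For comparison with the put() quoted earlier, the take() side has the same shape (paraphrased from the JDK source rather than quoted verbatim):

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == 0)
            notEmpty.await();   // the consumer-side 'spin wait' while the queue is empty
        return dequeue();       // dequeue() signals notFull, waking a blocked put()
    } finally {
        lock.unlock();
    }
}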
2. Now, back to your original post's title: 'What is the point of BlockingQueue not being able to work in synchronized Producer/Consumer methods?'
2.1.
If I take the literal meaning of this question, then an answer could be that there are reasons for a convenient BlockingQueue facility to exist in Java other than using it in synchronized methods/blocks, i.e. it can certainly live outside of any synchronized structure and facilitate a vanilla producer/consumer implementation.
2.2.
However, if you meant to inquire one step further - why can't Java BlockingQueue implementations work easily/nicely/smoothly inside synchronized methods/blocks?
That is a different question, a valid and interesting one that I also happen to be puzzled about.
Specifically, see this post for further information (note that in that post, the consumer thread 'hangs' because of an EMPTY queue and its possession of the exclusive lock, as opposed to your case, where the producer thread 'hangs' because of a FULL queue and its possession of the exclusive lock; but the core of the problem should be the same).
As I am learning the multi-threading part of Java programming, I have run into the following issue with one-producer, multiple-consumer code.
What I'm trying to achieve is multiple consumer threads taking items out of the queue in the order they were put in; in other words, to make the consumer threads maintain FIFO order overall.
final BlockingDeque<String> deque = new LinkedBlockingDeque<String>();
Runnable rb = new Runnable() {
public void run() {
try {
System.out.println(deque.takeLast());
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
deque.putFirst("a");
deque.putFirst("b");
deque.putFirst("c");
deque.putFirst("d");
ExecutorService pool = Executors.newFixedThreadPool(4);
pool.submit(rb);
pool.submit(rb);
pool.submit(rb);
pool.submit(rb);
WHAT I AM LOOKING FOR:
a
b
c
d
WHAT IT ACTUALLY OUTPUTS:
b
c
a
d
or in some other random order.
Any simple solution to this? Thank you!
In your case the problem is that
System.out.println(deque.takeLast());
is actually two operations which together are not atomic. Imagine this scenario:
Thread 1 takes string from queue.
Thread 2 takes string from queue.
Thread 2 prints value.
Thread 1 prints value.
So it all depends on how the operating system schedules the threads.
In your case, one possible solution would be to add the synchronized keyword to the run method:
Runnable rb = new Runnable() {
public synchronized void run() {
try {
String s = deque.takeLast();
System.out.println(s);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
This will synchronize on the instance of the anonymous class you created here. Since you are passing the same Runnable to the ExecutorService, it should work.
Or you can synchronize on your queue object, since your Runnable, which has access to the queue, will be executed on many threads after you pass it to the ExecutorService:
Runnable rb = new Runnable() {
public void run() {
synchronized (deque) {
try {
String s = deque.takeLast();
System.out.println(s);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
Also remember to shut down your thread pool; otherwise your application will never exit.
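A minimal sketch of such a shutdown, assuming the pool variable from the code above (the helper method name is made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: call this once all tasks have been submitted.
static void closePool(ExecutorService pool) throws InterruptedException {
    pool.shutdown();                                    // stop accepting new tasks
    if (!pool.awaitTermination(10, TimeUnit.SECONDS)) { // wait for running tasks
        pool.shutdownNow();                             // interrupt anything still running
    }
}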
I have two threads, both of which access a Vector. t1 adds a random number, while t2 removes and prints the first number. Below are the code and the output. t2 seems to execute only once (before t1 starts) and then terminates for good. Am I missing something here? (PS: Tested with ArrayList as well.)
import java.util.Random;
import java.util.Vector;
public class Main {
public static Vector<Integer> list1 = new Vector<Integer>();
public static void main(String[] args) throws InterruptedException {
System.out.println("Main started!");
Thread t1 = new Thread(new Runnable() {
@Override
public void run() {
System.out.println("writer started! ");
Random rand = new Random();
for(int i=0; i<10; i++) {
int x = rand.nextInt(100);
list1.add(x);
System.out.println("writer: " + x);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
});
Thread t2 = new Thread(new Runnable() {
@Override
public void run() {
System.out.println("reader started! ");
while(!list1.isEmpty()) {
int x = list1.remove(0);
System.out.println("reader: "+x);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
});
t2.start();
t1.start();
t1.join();
t2.join();
}
}
Output:
Main started!
reader started!
writer started!
writer: 40
writer: 9
writer: 23
writer: 5
writer: 41
writer: 29
writer: 72
writer: 73
writer: 95
writer: 46
This sounds like a toy to understand concurrency, so I didn't mention it before, but I will now (at the top because it is important).
If this is meant to be production code, don't roll your own. There are plenty of well-implemented (and debugged) concurrent data structures in java.util.concurrent. Use them.
When consuming, you should not shut down your consumer based on "all items consumed". This is due to a race condition where the consumer might "race ahead" of the producer and detect an empty list only because the producer hasn't yet written the items for consumption.
There are a number of ways to accomplish a shutdown of the consumer, but none of them can be done by looking at the data to be consumed in isolation.
My recommendation is that the producer "signals" the consumer when the producer is done producing. Then the consumer stops only when it has both the "signal" that no more data is being produced AND an empty list.
Alternative techniques include creating a "shutdown" item. The producer adds the shutdown item, and the consumer only shuts down when the shutdown item is seen. If you have a group of consumers, keep in mind that you shouldn't remove the shutdown item (or only one consumer would shut down).
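A minimal sketch of that "shutdown item" technique, assuming a BlockingQueue<String> rather than the question's Vector, with made-up class and sentinel names:

import java.util.concurrent.BlockingQueue;

class PoisonPillConsumer {
    // Sentinel with a distinct identity, so '==' identifies it unambiguously.
    static final String POISON_PILL = new String("POISON_PILL");

    // Producer side: after the last real item, enqueue the sentinel once:
    //   queue.put(POISON_PILL);

    // Consumer side: stop when the sentinel is seen; with several consumers,
    // re-insert it so the others also see it (the "don't remove it" advice above).
    void consume(BlockingQueue<String> queue) throws InterruptedException {
        while (true) {
            String item = queue.take();
            if (item == POISON_PILL) {
                queue.put(POISON_PILL);   // leave it for the next consumer
                return;
            }
            System.out.println("Consumed: " + item);
        }
    }
}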
Also, the consumer could "monitor" the producer, such that if the producer is "alive / existent" and the list is empty, the consumer assumes that more data will become available. Shutdown occurs when the producer is dead / non-existent AND no data is available.
Which technique you use will depend on the approach you prefer and the problem you're trying to solve.
I know that people like elegant solutions, but if your single producer is aware of the single consumer, the first option looks like this:
public class Producer {
public void shutdown() {
addRemainingItems();
consumer.shutdown();
}
}
where the Consumer looks like this:
public class Consumer {
private volatile boolean shuttingDown = false; // volatile so the consumer thread sees the update
public void shutdown() {
shuttingDown = true;
}
public void run() {
// Stop only when shutdown has been signaled AND the list has been drained.
while (!shuttingDown || !list.isEmpty()) {
if (!list.isEmpty()) {
// pull item and process
}
}
}
}
Note that such lack of locking around items on the list is inherently dangerous, but you stated only a single consumer, so there's no contention for reading from the list.
Now if you have multiple consumers, you need to provide protections to assure that a single item isn't pulled by two threads at the same time (and need to communicate in such a manner that all threads shutdown).
I think this is a typical producer–consumer problem. Try having a look at Semaphore.
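If you want to experiment with that suggestion, a rough Semaphore-based bounded buffer could look like the sketch below (class and field names are made up): one semaphore counts free slots, one counts filled slots, and a third acts as a mutex around the shared deque.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

class SemaphoreBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final Semaphore freeSlots;                    // counts remaining capacity
    private final Semaphore filledSlots = new Semaphore(0); // counts available items
    private final Semaphore mutex = new Semaphore(1);     // guards the deque

    SemaphoreBuffer(int capacity) {
        this.freeSlots = new Semaphore(capacity);
    }

    public void put(T item) throws InterruptedException {
        freeSlots.acquire();      // blocks while the buffer is full
        mutex.acquire();
        try {
            items.addLast(item);
        } finally {
            mutex.release();
        }
        filledSlots.release();    // one more item available to take
    }

    public T take() throws InterruptedException {
        filledSlots.acquire();    // blocks while the buffer is empty
        T item;
        mutex.acquire();
        try {
            item = items.removeFirst();
        } finally {
            mutex.release();
        }
        freeSlots.release();      // one more free slot for producers
        return item;
    }
}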
Update: The issue's gone after changing the while loop in the consumer (reader). Instead of exiting the thread when the list is empty, it now stays in the loop but does nothing (note that this busy-waits while the list is empty). Below is the updated reader thread. Of course, a decent shutdown mechanism can also be added to the code, such as Edwin suggested.
public void run() {
System.out.println("reader started! ");
while(true) {
if(!list1.isEmpty()) {
int x = list1.remove(0);
System.out.println("reader: "+x);
try {
Thread.sleep(100);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
Please note, this is not a code snippet taken from a real product, nor will it go into one!
What is a simple way to wait for all threads to finish? For example, let's say I have:
public class DoSomethingInAThread implements Runnable{
public static void main(String[] args) {
for (int n=0; n<1000; n++) {
Thread t = new Thread(new DoSomethingInAThread());
t.start();
}
// wait for all threads' run() methods to complete before continuing
}
public void run() {
// do something here
}
}
How do I alter this so the main() method pauses at the comment until all threads' run() methods exit? Thanks!
You put all threads in an array, start them all, and then have a loop
for(i = 0; i < threads.length; i++)
threads[i].join();
Each join will block until the respective thread has completed. Threads may complete in a different order than you join them, but that's not a problem: when the loop exits, all threads are completed.
One way would be to make a List of Threads, create and launch each thread, while adding it to the list. Once everything is launched, loop back through the list and call join() on each one. It doesn't matter what order the threads finish executing in, all you need to know is that by the time that second loop finishes executing, every thread will have completed.
A better approach is to use an ExecutorService and its associated methods:
List<Callable<Object>> callables = ... // assemble list of Callables here
// (a Callable is like a Runnable, but it can return a value)
ExecutorService execSvc = Executors.newCachedThreadPool();
List<Future<Object>> results = execSvc.invokeAll(callables);
// Note: you may not care about the return values, in which case don't
// bother saving them
Using an ExecutorService (and all of the new stuff from Java 5's concurrency utilities) is incredibly flexible, and the above example barely even scratches the surface.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class DoSomethingInAThread implements Runnable
{
public static void main(String[] args) throws ExecutionException, InterruptedException
{
//limit the number of actual threads
int poolSize = 10;
ExecutorService service = Executors.newFixedThreadPool(poolSize);
List<Future<?>> futures = new ArrayList<Future<?>>();
for (int n = 0; n < 1000; n++)
{
Future<?> f = service.submit(new DoSomethingInAThread());
futures.add(f);
}
// wait for all tasks to complete before continuing
for (Future<?> f : futures)
{
f.get();
}
//shut down the executor service so that this thread can exit
service.shutdownNow();
}
public void run()
{
// do something here
}
}
Instead of join(), which is an older API, you can use CountDownLatch. I have modified your code as below to fulfil your requirement.
import java.util.concurrent.*;
class DoSomethingInAThread implements Runnable{
CountDownLatch latch;
public DoSomethingInAThread(CountDownLatch latch){
this.latch = latch;
}
public void run() {
try{
System.out.println("Do some thing");
latch.countDown();
}catch(Exception err){
err.printStackTrace();
}
}
}
public class CountDownLatchDemo {
public static void main(String[] args) {
try{
CountDownLatch latch = new CountDownLatch(1000);
for (int n=0; n<1000; n++) {
Thread t = new Thread(new DoSomethingInAThread(latch));
t.start();
}
latch.await();
System.out.println("In Main thread after completion of 1000 threads");
}catch(Exception err){
err.printStackTrace();
}
}
}
Explanation:
CountDownLatch is initialized with a count of 1000, as per your requirement.
Each worker thread DoSomethingInAThread decrements the CountDownLatch that was passed to its constructor.
The main thread in CountDownLatchDemo calls await() until the count reaches zero. Once the count has reached zero, you will get the line below in the output.
In Main thread after completion of 1000 threads
More info from oracle documentation page
public void await()
throws InterruptedException
Causes the current thread to wait until the latch has counted down to zero, unless the thread is interrupted.
Refer to related SE question for other options:
wait until all threads finish their work in java
Avoid the Thread class altogether and instead use the higher-level abstractions provided in java.util.concurrent.
The ExecutorService interface provides the method invokeAll, which seems to do just what you want.
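A minimal sketch of that approach (class name, pool size, and task bodies are placeholders):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int id = i;
            tasks.add(() -> {
                System.out.println("task " + id + " running");
                return null;                  // Callable<Void> must return something
            });
        }
        // invokeAll blocks until every task has completed (or failed).
        List<Future<Void>> results = pool.invokeAll(tasks);
        pool.shutdown();
        System.out.println("all " + results.size() + " tasks finished");
    }
}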
Consider using java.util.concurrent.CountDownLatch. Examples in javadocs
Depending on your needs, you may also want to check out the classes CountDownLatch and CyclicBarrier in the java.util.concurrent package. They can be useful if you want your threads to wait for each other, or if you want more fine-grained control over the way your threads execute (e.g., waiting in their internal execution for another thread to set some state). You could also use a CountDownLatch to signal all of your threads to start at the same time, instead of starting them one by one as you iterate through your loop. The standard API docs have an example of this, plus using another CountDownLatch to wait for all threads to complete their execution.
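That start-together pattern looks roughly like the sketch below (essentially the shape of the example in the CountDownLatch javadocs, with made-up names):

import java.util.concurrent.CountDownLatch;

public class StartTogetherDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 5;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    startSignal.await();            // every worker blocks here...
                    System.out.println("worker " + id + " running");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    doneSignal.countDown();         // report completion
                }
            }).start();
        }

        startSignal.countDown();                    // ...and they all start together
        doneSignal.await();                         // wait for all to finish
        System.out.println("all workers done");
    }
}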
As Martin K suggested, java.util.concurrent.CountDownLatch seems to be a better solution for this. Just adding an example of the same:
public class CountDownLatchDemo
{
public static void main (String[] args)
{
int noOfThreads = 5;
// Declare the count down latch based on the number of threads you need
// to wait on
final CountDownLatch executionCompleted = new CountDownLatch(noOfThreads);
for (int i = 0; i < noOfThreads; i++)
{
new Thread()
{
@Override
public void run ()
{
System.out.println("I am executed by :" + Thread.currentThread().getName());
try
{
// Dummy sleep
Thread.sleep(3000);
// One thread has completed its job
executionCompleted.countDown();
}
catch (InterruptedException e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}.start();
}
try
{
// Wait till the count down latch opens. In the given case, that is until the
// countDown method has been invoked five times.
executionCompleted.await();
System.out.println("All over");
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
If you make a list of the threads, you can loop through them and .join() against each, and your loop will finish when all the threads have. I haven't tried it though.
http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#join()
Create the thread object inside the first for loop.
for (int i = 0; i < threads.length; i++) {
threads[i] = new Thread(new Runnable() {
public void run() {
// some code to run in parallel
}
});
threads[i].start();
}
And then do what everyone here is saying:
for(i = 0; i < threads.length; i++)
threads[i].join();
You can do it with the ThreadGroup object and its activeCount value:
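Presumably that means something along these lines; note that activeCount() is only an estimate, so this is a polling sketch (class name and sleep interval are made up), reusing the question's DoSomethingInAThread:

public class ThreadGroupWait {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("workers");
        for (int n = 0; n < 1000; n++) {
            new Thread(group, new DoSomethingInAThread()).start();
        }
        // activeCount() is only an estimate, so poll rather than trust it exactly.
        while (group.activeCount() > 0) {
            Thread.sleep(10);
        }
        System.out.println("all threads in the group have finished");
    }
}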
As an alternative to CountDownLatch you can also use CyclicBarrier e.g.
public class ThreadWaitEx {
static CyclicBarrier barrier = new CyclicBarrier(100, new Runnable(){
public void run(){
System.out.println("clean up job after all tasks are done.");
}
});
public static void main(String[] args) {
for (int i = 0; i < 100; i++) {
Thread t = new Thread(new MyCallable(barrier));
t.start();
}
}
}
class MyCallable implements Runnable{
private CyclicBarrier b = null;
public MyCallable(CyclicBarrier b){
this.b = b;
}
@Override
public void run(){
try {
//do something
System.out.println(Thread.currentThread().getName()+" is waiting for barrier after completing his job.");
b.await();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (BrokenBarrierException e) {
e.printStackTrace();
}
}
}
To use CyclicBarrier in this case, barrier.await() should be the last statement, i.e. called when your thread is done with its job. A CyclicBarrier can be used again via its reset() method. To quote the javadocs:
A CyclicBarrier supports an optional Runnable command that is run once per barrier point, after the last thread in the party arrives, but before any threads are released. This barrier action is useful for updating shared-state before any of the parties continue.
The join() was not helpful to me. See this sample in Kotlin:
val timeInMillis = System.currentTimeMillis()
ThreadUtils.startNewThread(Runnable {
for (i in 1..5) {
val t = Thread(Runnable {
Thread.sleep(50)
var a = i
kotlin.io.println(Thread.currentThread().name + "|" + "a=$a")
Thread.sleep(200)
for (j in 1..5) {
a *= j
Thread.sleep(100)
kotlin.io.println(Thread.currentThread().name + "|" + "$a*$j=$a")
}
kotlin.io.println(Thread.currentThread().name + "|TaskDurationInMillis = " + (System.currentTimeMillis() - timeInMillis))
})
t.start()
}
})
The result:
Thread-5|a=5
Thread-1|a=1
Thread-3|a=3
Thread-2|a=2
Thread-4|a=4
Thread-2|2*1=2
Thread-3|3*1=3
Thread-1|1*1=1
Thread-5|5*1=5
Thread-4|4*1=4
Thread-1|2*2=2
Thread-5|10*2=10
Thread-3|6*2=6
Thread-4|8*2=8
Thread-2|4*2=4
Thread-3|18*3=18
Thread-1|6*3=6
Thread-5|30*3=30
Thread-2|12*3=12
Thread-4|24*3=24
Thread-4|96*4=96
Thread-2|48*4=48
Thread-5|120*4=120
Thread-1|24*4=24
Thread-3|72*4=72
Thread-5|600*5=600
Thread-4|480*5=480
Thread-3|360*5=360
Thread-1|120*5=120
Thread-2|240*5=240
Thread-1|TaskDurationInMillis = 765
Thread-3|TaskDurationInMillis = 765
Thread-4|TaskDurationInMillis = 765
Thread-5|TaskDurationInMillis = 765
Thread-2|TaskDurationInMillis = 765
Now let me use the join() for threads:
val timeInMillis = System.currentTimeMillis()
ThreadUtils.startNewThread(Runnable {
for (i in 1..5) {
val t = Thread(Runnable {
Thread.sleep(50)
var a = i
kotlin.io.println(Thread.currentThread().name + "|" + "a=$a")
Thread.sleep(200)
for (j in 1..5) {
a *= j
Thread.sleep(100)
kotlin.io.println(Thread.currentThread().name + "|" + "$a*$j=$a")
}
kotlin.io.println(Thread.currentThread().name + "|TaskDurationInMillis = " + (System.currentTimeMillis() - timeInMillis))
})
t.start()
t.join()
}
})
And the result:
Thread-1|a=1
Thread-1|1*1=1
Thread-1|2*2=2
Thread-1|6*3=6
Thread-1|24*4=24
Thread-1|120*5=120
Thread-1|TaskDurationInMillis = 815
Thread-2|a=2
Thread-2|2*1=2
Thread-2|4*2=4
Thread-2|12*3=12
Thread-2|48*4=48
Thread-2|240*5=240
Thread-2|TaskDurationInMillis = 1568
Thread-3|a=3
Thread-3|3*1=3
Thread-3|6*2=6
Thread-3|18*3=18
Thread-3|72*4=72
Thread-3|360*5=360
Thread-3|TaskDurationInMillis = 2323
Thread-4|a=4
Thread-4|4*1=4
Thread-4|8*2=8
Thread-4|24*3=24
Thread-4|96*4=96
Thread-4|480*5=480
Thread-4|TaskDurationInMillis = 3078
Thread-5|a=5
Thread-5|5*1=5
Thread-5|10*2=10
Thread-5|30*3=30
Thread-5|120*4=120
Thread-5|600*5=600
Thread-5|TaskDurationInMillis = 3833
As is clear, when we use join():
The threads are running sequentially.
The first sample takes 765 Milliseconds while the second sample takes 3833 Milliseconds.
Our solution to avoid blocking other threads was to create an ArrayList:
val threads = ArrayList<Thread>()
Now, when we want to start a new thread, we must add it to the ArrayList:
addThreadToArray(
ThreadUtils.startNewThread(Runnable {
...
})
)
The addThreadToArray function:
@Synchronized
fun addThreadToArray(th: Thread) {
threads.add(th)
}
The startNewThread function:
fun startNewThread(runnable: Runnable) : Thread {
val th = Thread(runnable)
th.isDaemon = false
th.priority = Thread.MAX_PRIORITY
th.start()
return th
}
Check for the completion of the threads as below, wherever it's needed:
val notAliveThreads = ArrayList<Thread>()
for (t in threads)
if (!t.isAlive)
notAliveThreads.add(t)
threads.removeAll(notAliveThreads)
if (threads.size == 0){
// The size is 0 -> there is no alive threads.
}
The problem with:
for(i = 0; i < threads.length; i++)
threads[i].join();
...is that threads[i + 1] can never be joined before threads[i].
Except for the latch-based ones, all the solutions have this shortcoming.
No one here has (yet) mentioned ExecutorCompletionService; it lets you collect threads/tasks in the order in which they complete:
public class ExecutorCompletionService<V>
extends Object
implements CompletionService<V>
A CompletionService that uses a supplied Executor to execute tasks. This class arranges that submitted tasks are, upon completion, placed on a queue accessible using take. The class is lightweight enough to be suitable for transient use when processing groups of tasks.
Usage Examples.
Suppose you have a set of solvers for a certain problem, each returning a value of some type Result, and would like to run them concurrently, processing the results of each of them that return a non-null value, in some method use(Result r). You could write this as:
void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException, ExecutionException {
CompletionService<Result> cs = new ExecutorCompletionService<>(e);
solvers.forEach(cs::submit);
for (int i = solvers.size(); i > 0; i--) {
Result r = cs.take().get();
if (r != null)
use(r);
}
}
Suppose instead that you would like to use the first non-null result of the set of tasks, ignoring any that encounter exceptions, and cancelling all other tasks when the first one is ready:
void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException {
CompletionService<Result> cs = new ExecutorCompletionService<>(e);
int n = solvers.size();
List<Future<Result>> futures = new ArrayList<>(n);
Result result = null;
try {
solvers.forEach(solver -> futures.add(cs.submit(solver)));
for (int i = n; i > 0; i--) {
try {
Result r = cs.take().get();
if (r != null) {
result = r;
break;
}
} catch (ExecutionException ignore) {}
}
} finally {
futures.forEach(future -> future.cancel(true));
}
if (result != null)
use(result);
}
Since: 1.5 (!)
Assuming use(r) (in the first example) is also asynchronous, this gives a big advantage.