Java wait/notify implementation without synchronized

I have a scenario with dozens of producers and a single consumer. Timing is critical: for performance reasons I want to avoid any locking of producers, and I want the consumer to wait as little as possible when no messages are ready.
I've started using a ConcurrentLinkedQueue, but I don't like calling sleep on the consumer when queue.poll() == null, because I could waste precious milliseconds, and I don't want to use yield because I end up wasting CPU.
So I came to implement a sort of ConcurrentBlockingQueue so that the consumer can run something like:
T item = queue.poll();
if (item == null) {
    wait();
    item = queue.poll();
}
return item;
And the producer something like:
queue.offer(item);
notify();
Unfortunately wait/notify only works inside synchronized blocks, which in turn would drastically reduce producer performance. Is there any other implementation of the wait/notify mechanism that does not require synchronization?
I am aware of the risks of calling wait and notify without synchronization, and I managed to work around them by having an external thread run the following:
while (true) {
    notify();
    sleep(100);
}

I've started using a ConcurrentLinkedQueue, but I don't like to call sleep on the consumer when queue.poll() == null
You should check the BlockingQueue interface, which has a take method that blocks until an item becomes available.
It has several implementations as detailed in the javadoc, but ConcurrentLinkedQueue is not one of them:
All Known Implementing Classes:
ArrayBlockingQueue, DelayQueue, LinkedBlockingDeque, LinkedBlockingQueue, LinkedTransferQueue, PriorityBlockingQueue, SynchronousQueue
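A minimal sketch of the take() approach using LinkedBlockingQueue (the class and method names here are illustrative, not from the question):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class TakeDemo {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void producer(String message) {
        queue.offer(message);  // never blocks on an unbounded queue
    }

    String consumer() throws InterruptedException {
        return queue.take();   // blocks until an element becomes available
    }
}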

I came up with the following implementation:
private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
private final Semaphore semaphore = new Semaphore(0);
private int size;

public void offer(T item) {
    size += 1;
    queue.offer(item);
    semaphore.release();
}

public T poll(long timeout, TimeUnit unit) {
    semaphore.drainPermits();
    T item = queue.poll();
    if (item == null) {
        try {
            semaphore.tryAcquire(timeout, unit);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt(); // restore the interrupt status
        }
        item = queue.poll();
    }
    if (item == null) {
        size = 0;
    } else {
        size = Math.max(0, size - 1);
    }
    return item;
}

/** An inaccurate, O(1)-access representation of the queue size. */
public int size() {
    return size;
}
With the following properties:
producers never go to SLEEP state (which I think can happen with BlockingQueue implementations that use a Lock in offer(), or with synchronized blocks using wait/notify)
the consumer only goes to SLEEP state when the queue is empty, but is soon woken up whenever a producer offers an item (no fixed-time sleep, no yield)
the consumer can sometimes be woken up even with an empty queue, but it's OK here to waste a few CPU cycles
Is there any equivalent implementation in jdk that I'm not aware of? Open for criticism.
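As far as I know, the closest JDK equivalent is LinkedTransferQueue (JDK 7+): its offer() is CAS-based and never blocks, and take()/poll(timeout) park the consumer only while the queue is empty. A rough sketch (the class and method names are illustrative):
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TimeUnit;

class TransferDemo<T> {
    private final LinkedTransferQueue<T> queue = new LinkedTransferQueue<>();

    void produce(T item) {
        queue.offer(item);                 // lock-free, producers never sleep
    }

    T consume(long timeout, TimeUnit unit) throws InterruptedException {
        return queue.poll(timeout, unit);  // parks only while the queue is empty
    }
}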


How to block a thread until a queue has a certain object in Java? [duplicate]

Can I get a complete, simple scenario, i.e. a tutorial, that suggests how this should be used, specifically with a Queue?
The wait() and notify() methods are designed to provide a mechanism to allow a thread to block until a specific condition is met. For this I assume you're wanting to write a blocking queue implementation, where you have some fixed-size backing store of elements.
The first thing you have to do is to identify the conditions that you want the methods to wait for. In this case, you will want the put() method to block until there is free space in the store, and you will want the take() method to block until there is some element to return.
public class BlockingQueue<T> {

    private Queue<T> queue = new LinkedList<T>();
    private int capacity;

    public BlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T element) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();
        }
        queue.add(element);
        notify(); // notifyAll() for multiple producer/consumer threads
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        T item = queue.remove();
        notify(); // notifyAll() for multiple producer/consumer threads
        return item;
    }
}
There are a few things to note about the way in which you must use the wait and notify mechanisms.
Firstly, you need to ensure that any calls to wait() or notify() are within a synchronized region of code (with the wait() and notify() calls being synchronized on the same object). The reason for this (other than the standard thread safety concerns) is due to something known as a missed signal.
An example of this: a thread may call put() when the queue happens to be full; it checks the condition and sees that the queue is full, but before it can block, another thread is scheduled. This second thread then take()s an element from the queue, and notifies the waiting threads that the queue is no longer full. Because the first thread has already checked the condition, it will simply call wait() after being re-scheduled, even though it could make progress.
By synchronizing on a shared object, you can ensure that this problem does not occur, as the second thread's take() call will not be able to make progress until the first thread has actually blocked.
Secondly, you need to put the condition you are checking in a while loop, rather than an if statement, due to a problem known as spurious wake-ups. This is where a waiting thread can sometimes be re-activated without notify() being called. Putting this check in a while loop will ensure that if a spurious wake-up occurs, the condition will be re-checked, and the thread will call wait() again.
As some of the other answers have mentioned, Java 1.5 introduced a new concurrency library (in the java.util.concurrent package) which was designed to provide a higher level abstraction over the wait/notify mechanism. Using these new features, you could rewrite the original example like so:
public class BlockingQueue<T> {

    private Queue<T> queue = new LinkedList<T>();
    private int capacity;
    private Lock lock = new ReentrantLock();
    private Condition notFull = lock.newCondition();
    private Condition notEmpty = lock.newCondition();

    public BlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public void put(T element) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();
            }
            queue.add(element);
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();
            }
            T item = queue.remove();
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}
Of course if you actually need a blocking queue, then you should use an implementation of the BlockingQueue interface.
Also, for stuff like this I'd highly recommend Java Concurrency in Practice, as it covers everything you could want to know about concurrency related problems and solutions.
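A quick usage sketch of the class above (the thread bodies and values are made up for illustration):
public static void main(String[] args) {
    BlockingQueue<Integer> q = new BlockingQueue<>(10);
    Thread producer = new Thread(() -> {
        try {
            for (int i = 0; i < 100; i++) {
                q.put(i); // blocks while the queue is at capacity
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    Thread consumer = new Thread(() -> {
        try {
            while (true) {
                System.out.println(q.take()); // blocks while the queue is empty
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    producer.start();
    consumer.start();
}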
Not a queue example, but extremely simple :)
class MyHouse {
    private boolean pizzaArrived = false;

    public void eatPizza() throws InterruptedException {
        synchronized (this) {
            while (!pizzaArrived) {
                wait();
            }
        }
        System.out.println("yumyum..");
    }

    public void pizzaGuy() {
        synchronized (this) {
            this.pizzaArrived = true;
            notifyAll();
        }
    }
}
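A possible driver for the example (the timing is arbitrary):
public static void main(String[] args) throws InterruptedException {
    MyHouse house = new MyHouse();
    Thread eater = new Thread(() -> {
        try {
            house.eatPizza(); // blocks until pizzaArrived is set
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    eater.start();
    Thread.sleep(1000); // simulate delivery time
    house.pizzaGuy();   // sets the flag and wakes the eater
    eater.join();
}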
Some important points:
1) NEVER do

if (!pizzaArrived) {
    wait();
}

Always use while (condition), because:

a) Threads can sporadically awake from the waiting state without being notified by anyone (even when the pizza guy didn't ring the chime, somebody might decide to try eating the pizza).

b) You should check the condition again after acquiring the synchronized lock. Let's say pizza doesn't last forever: you awake, line up for the pizza, but it's not enough for everybody. If you don't check, you might eat paper! :) (A better example would probably be while (!pizzaExists) { wait(); }.)
2) You must hold the lock (synchronized) before invoking wait/notify, and a waking thread must re-acquire the lock before it can return from wait().
3) Try to avoid acquiring any lock within your synchronized block and strive to not invoke alien methods (methods you don't know for sure what they are doing). If you have to, make sure to take measures to avoid deadlocks.
4) Be careful with notify(). Stick with notifyAll() until you know what you are doing.
5) Last, but not least, read Java Concurrency in Practice!
Even though you asked for wait() and notify() specifically, I feel that this quote is still important enough:
Josh Bloch, Effective Java 2nd Edition, Item 69: Prefer concurrency utilities to wait and notify (emphasis his):
Given the difficulty of using wait and notify correctly, you should use the higher-level concurrency utilities instead [...] using wait and notify directly is like programming in "concurrency assembly language", as compared to the higher-level language provided by java.util.concurrent. There is seldom, if ever, reason to use wait and notify in new code.
Have you taken a look at this Java Tutorial?
Further, I'd advise you to stay the heck away from playing with this kind of stuff in real software. It's good to play with it so you know what it is, but concurrency has pitfalls all over the place. It's better to use higher level abstractions and synchronized collections or JMS queues if you are building software for other people.
That is at least what I do. I'm not a concurrency expert so I stay away from handling threads by hand wherever possible.
Example

Both threads must synchronize on the same monitor object for wait() and notify() to pair up; the shared static lock and flag below are assumptions added to make the original sketch work.
public class MyThread extends Thread {
    static final Object LOCK = new Object();
    static boolean waiting = true; // read and written only under LOCK

    @Override
    public void run() {
        while (true) {
            threadCondWait(); // loop, waiting for the condition
            // ... do the actual work here ...
        }
    }

    private void threadCondWait() {
        synchronized (LOCK) {
            while (waiting) {
                try {
                    LOCK.wait(); // communicates with notify()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            waiting = true; // re-arm for the next round
        }
    }
}
public class MyAnotherThread extends Thread {
    @Override
    public void run() {
        // ... do the actual work here ...
        synchronized (MyThread.LOCK) {
            MyThread.waiting = false;
            MyThread.LOCK.notify(); // triggers the wait() to proceed
        }
    }
}
The question asks for a wait() + notify() example involving a queue (buffer). The first thing that comes to mind is a producer-consumer scenario using a buffer.
Three Components in our system:
Queue [Buffer] - A fixed-size queue shared between threads
Producer - A thread produces/inserts values to the buffer
Consumer - A thread consumes/removes values from the buffer
PRODUCER THREAD:
The producer inserts values into the buffer until the buffer is full.
If the buffer is full, the producer calls wait() and enters the waiting state until the consumer wakes it.
static class Producer extends Thread {
    private Queue<Integer> queue;
    private int maxSize;

    public Producer(Queue<Integer> queue, int maxSize, String name) {
        super(name);
        this.queue = queue;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (queue) {
                // use while, not if: re-check the condition after every wake-up
                while (queue.size() == maxSize) {
                    try {
                        System.out.println("Queue is full, "
                                + "Producer thread waiting for "
                                + "consumer to take something from queue");
                        queue.wait();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
                Random random = new Random();
                int i = random.nextInt();
                System.out.println(" ^^^ Producing value : " + i);
                queue.add(i);
                queue.notify();
            }
            sleepRandom();
        }
    }
}
CONSUMER THREAD:
The consumer thread removes values from the buffer until the buffer is empty.
If the buffer is empty, the consumer calls wait() and enters the waiting state until a producer sends a notify signal.
static class Consumer extends Thread {
    private Queue<Integer> queue;
    private int maxSize;

    public Consumer(Queue<Integer> queue, int maxSize, String name) {
        super(name);
        this.queue = queue;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (queue) {
                // use while, not if: re-check the condition after every wake-up
                while (queue.isEmpty()) {
                    System.out.println("Queue is empty,"
                            + "Consumer thread is waiting"
                            + " for producer thread to put something in queue");
                    try {
                        queue.wait();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
                System.out.println(" vvv Consuming value : " + queue.remove());
                queue.notify();
            }
            sleepRandom();
        }
    }
}
UTIL METHOD:
public static void sleepRandom() {
    Random random = new Random();
    try {
        Thread.sleep(random.nextInt(250));
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
Application Code:
public static void main(String args[]) {
    System.out.println("How to use wait and notify method in Java");
    System.out.println("Solving Producer Consumer Problem");

    Queue<Integer> buffer = new LinkedList<>();
    int maxSize = 10;

    Thread producer = new Producer(buffer, maxSize, "PRODUCER");
    Thread consumer = new Consumer(buffer, maxSize, "CONSUMER");

    producer.start();
    consumer.start();
}
A Sample Output:
^^^ Producing value : 1268801606
vvv Consuming value : 1268801606
Queue is empty,Consumer thread is waiting for producer thread to put something in queue
^^^ Producing value : -191710046
vvv Consuming value : -191710046
^^^ Producing value : -1096119803
vvv Consuming value : -1096119803
^^^ Producing value : -1502054254
vvv Consuming value : -1502054254
Queue is empty,Consumer thread is waiting for producer thread to put something in queue
^^^ Producing value : 408960851
vvv Consuming value : 408960851
^^^ Producing value : 2140469519
vvv Consuming value : 65361724
^^^ Producing value : 1844915867
^^^ Producing value : 1551384069
^^^ Producing value : -2112162412
vvv Consuming value : -887946831
vvv Consuming value : 1427122528
^^^ Producing value : -181736500
^^^ Producing value : -1603239584
^^^ Producing value : 175404355
vvv Consuming value : 1356483172
^^^ Producing value : -1505603127
vvv Consuming value : 267333829
^^^ Producing value : 1986055041
Queue is full, Producer thread waiting for consumer to take something from queue
vvv Consuming value : -1289385327
^^^ Producing value : 58340504
vvv Consuming value : 1244183136
^^^ Producing value : 1582191907
Queue is full, Producer thread waiting for consumer to take something from queue
vvv Consuming value : 1401174346
^^^ Producing value : 1617821198
vvv Consuming value : -1827889861
vvv Consuming value : 2098088641
Example for wait() and notifyAll() in threading.
A static ArrayList is used as the shared resource, guarded by synchronized blocks on the list itself. wait() is called when the list is empty, and notifyAll() is invoked once an element is added to the list.
public class PrinterResource extends Thread {

    // shared resource, guarded by synchronized blocks on the list itself
    public static List<String> arrayList = new ArrayList<String>();

    public void addElement(String a) {
        synchronized (arrayList) {
            arrayList.add(a);
            arrayList.notifyAll();
        }
    }

    public void removeElement() {
        synchronized (arrayList) {
            // wait in a loop so the condition is re-checked after every wake-up
            while (arrayList.size() == 0) {
                try {
                    arrayList.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            arrayList.remove(0);
        }
    }

    public void run() {
        System.out.println("Thread name -- " + this.getName());
        if (!this.getName().equalsIgnoreCase("p4")) {
            this.removeElement();
        }
        this.addElement("threads");
    }

    public static void main(String[] args) {
        PrinterResource p1 = new PrinterResource();
        p1.setName("p1");
        p1.start();
        PrinterResource p2 = new PrinterResource();
        p2.setName("p2");
        p2.start();
        PrinterResource p3 = new PrinterResource();
        p3.setName("p3");
        p3.start();
        PrinterResource p4 = new PrinterResource();
        p4.setName("p4");
        p4.start();
        try {
            p1.join();
            p2.join();
            p3.join();
            p4.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Final size of arraylist " + arrayList.size());
    }
}

How to get the ThreadPoolExecutor to increase threads to max before queueing?

I've been frustrated for some time with the default behavior of ThreadPoolExecutor which backs the ExecutorService thread-pools that so many of us use. To quote from the Javadocs:
If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full.
What this means is that if you define a thread pool with the following code, it will never start the 2nd thread because the LinkedBlockingQueue is unbounded.
ExecutorService threadPool =
        new ThreadPoolExecutor(1 /*core*/, 50 /*max*/, 60 /*timeout*/,
                TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(/* unlimited queue */));
Only if you have a bounded queue and the queue is full are any threads above the core number started. I suspect a large number of junior Java multithreaded programmers are unaware of this behavior of the ThreadPoolExecutor.
Now I have a specific use case where this is not optimal. I'm looking for ways, without writing my own TPE class, to work around it.
My requirements are for a web service that is making call-backs to a possibly unreliable 3rd party.
I don't want to make the call-back synchronously with the web-request, so I want to use a thread-pool.
I typically get a couple of these a minute so I don't want to have a newFixedThreadPool(...) with a large number of threads that mostly are dormant.
Every so often I get a burst of this traffic and I want to scale up the number of threads to some max value (let's say 50).
I need to make a best attempt to do all callbacks so I want to queue up any additional ones above 50. I don't want to overwhelm the rest of my web-server by using a newCachedThreadPool().
How can I work around this limitation in ThreadPoolExecutor where the queue needs to be bounded and full before more threads will be started? How can I get it to start more threads before queuing tasks?
Edit:
@Flavio makes a good point about using ThreadPoolExecutor.allowCoreThreadTimeOut(true) to have the core threads time out and exit. I considered that, but I still wanted the core-threads feature: I did not want the number of threads in the pool to drop below the core size if possible.
How can I work around this limitation in ThreadPoolExecutor where the queue needs to be bounded and full before more threads will be started.
I believe I have finally found a somewhat elegant (maybe a little hacky) solution to this limitation with ThreadPoolExecutor. It involves extending LinkedBlockingQueue to have it return false for queue.offer(...) when there are already some tasks queued. If the current threads are not keeping up with the queued tasks, the TPE will add additional threads. If the pool is already at max threads, then the RejectedExecutionHandler will be called which does the put(...) into the queue.
It certainly is strange to write a queue where offer(...) can return false and put() never blocks so that's the hack part. But this works well with TPE's usage of the queue so I don't see any problem with doing this.
Here's the code:
// extend LinkedBlockingQueue to force offer() to return false conditionally
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
private static final long serialVersionUID = -6903933921423432194L;
#Override
public boolean offer(Runnable e) {
// Offer it to the queue if there is 0 items already queued, else
// return false so the TPE will add another thread. If we return false
// and max threads have been reached then the RejectedExecutionHandler
// will be called which will do the put into the queue.
if (size() == 0) {
return super.offer(e);
} else {
return false;
}
}
};
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1 /*core*/, 50 /*max*/,
        60 /*secs*/, TimeUnit.SECONDS, queue);
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // This does the actual put into the queue. Once the max threads
            // have been reached, the tasks will then queue up.
            executor.getQueue().put(r);
            // we do this after the put() to stop race conditions
            if (executor.isShutdown()) {
                throw new RejectedExecutionException(
                        "Task " + r + " rejected from " + executor);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});
With this mechanism, when I submit tasks to the queue, the ThreadPoolExecutor will:
Scale the number of threads up to the core size initially (here 1).
Offer it to the queue. If the queue is empty it will be queued to be handled by the existing threads.
If the queue has 1 or more elements already, the offer(...) will return false.
If false is returned, scale up the number of threads in the pool until they reach the max number (here 50).
If at the max then it calls the RejectedExecutionHandler
The RejectedExecutionHandler then puts the task into the queue to be processed by the first available thread in FIFO order.
Although in my example code above the queue is unbounded, you could also define it as a bounded queue (see the sketch after this list). For example, if you add a capacity of 1000 to the LinkedBlockingQueue then it will:
scale the threads up to max
then queue up until it is full with 1000 tasks
then block the caller until space becomes available to the queue.
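A minimal sketch of that bounded variant (the capacity of 1000 is just an example value):
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(1000) {
    @Override
    public boolean offer(Runnable e) {
        // same trick as above, but on a bounded queue
        return size() == 0 && super.offer(e);
    }
};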
Also, if you need to use offer(...) in the RejectedExecutionHandler, you can use the offer(E, long, TimeUnit) method instead, with Long.MAX_VALUE as the timeout.
Warning:
If you expect tasks to be added to the executor after it has been shut down, then you may want to be smarter about throwing RejectedExecutionException out of the custom RejectedExecutionHandler when the executor service has been shut down. Thanks to @RaduToader for pointing this out.
Edit:
Another tweak to this answer could be to ask the TPE whether there are idle threads and only enqueue the item if there are. You would have to make a true class for this and add an ourQueue.setThreadPoolExecutor(tpe); method on it.
Then your offer(...) method might look something like:
Check to see if the tpe.getPoolSize() == tpe.getMaximumPoolSize() in which case just call super.offer(...).
Else if tpe.getPoolSize() > tpe.getActiveCount() then call super.offer(...) since there seem to be idle threads.
Otherwise return false to fork another thread.
Maybe this:
int poolSize = tpe.getPoolSize();
int maximumPoolSize = tpe.getMaximumPoolSize();
if (poolSize >= maximumPoolSize || poolSize > tpe.getActiveCount()) {
    return super.offer(e);
} else {
    return false;
}
Note that the get methods on TPE are expensive since they access volatile fields or (in the case of getActiveCount()) lock the TPE and walk the thread-list. Also, there are race conditions here that may cause a task to be enqueued improperly or another thread forked when there was an idle thread.
Set core size and max size to the same value, and allow core threads to be removed from the pool with allowCoreThreadTimeOut(true).
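A minimal sketch of this approach (the pool sizes and timeout are example values):
// core == max, so all threads are created before any queueing ever happens;
// allowCoreThreadTimeOut lets idle threads exit after the keep-alive period.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        50, 50, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
pool.allowCoreThreadTimeOut(true);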
I've already got two other answers on this question, but I suspect this one is the best.
It's based on the technique of the currently accepted answer, namely:
Override the queue's offer() method to (sometimes) return false,
which causes the ThreadPoolExecutor to either spawn a new thread or reject the task, and
set the RejectedExecutionHandler to actually queue the task on rejection.
The problem is when offer() should return false. The currently accepted answer returns false when the queue has a couple of tasks on it, but as I've pointed out in my comment there, this causes undesirable effects. Alternatively, if you always return false, you'll keep spawning new threads even when you have threads waiting on the queue.
The solution is to use Java 7 LinkedTransferQueue and have offer() call tryTransfer(). When there is a waiting consumer thread the task will just get passed to that thread. Otherwise, offer() will return false and the ThreadPoolExecutor will spawn a new thread.
BlockingQueue<Runnable> queue = new LinkedTransferQueue<Runnable>() {
    @Override
    public boolean offer(Runnable e) {
        return tryTransfer(e);
    }
};
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue);
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});
Note: I now prefer and recommend my other answer.
Here's a version which feels to me much more straightforward: Increase the corePoolSize (up to the limit of maximumPoolSize) whenever a new task is executed, then decrease the corePoolSize (down to the limit of the user specified "core pool size") whenever a task completes.
To put it another way, keep track of the number of running or enqueued tasks, and ensure that the corePoolSize is equal to the number of tasks as long as it is between the user specified "core pool size" and the maximumPoolSize.
public class GrowBeforeQueueThreadPoolExecutor extends ThreadPoolExecutor {
    private int userSpecifiedCorePoolSize;
    private int taskCount;

    public GrowBeforeQueueThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        userSpecifiedCorePoolSize = corePoolSize;
    }

    @Override
    public void execute(Runnable runnable) {
        synchronized (this) {
            taskCount++;
            setCorePoolSizeToTaskCountWithinBounds();
        }
        super.execute(runnable);
    }

    @Override
    protected void afterExecute(Runnable runnable, Throwable throwable) {
        super.afterExecute(runnable, throwable);
        synchronized (this) {
            taskCount--;
            setCorePoolSizeToTaskCountWithinBounds();
        }
    }

    private void setCorePoolSizeToTaskCountWithinBounds() {
        int threads = taskCount;
        if (threads < userSpecifiedCorePoolSize) threads = userSpecifiedCorePoolSize;
        if (threads > getMaximumPoolSize()) threads = getMaximumPoolSize();
        setCorePoolSize(threads);
    }
}
As written the class doesn't support changing the user specified corePoolSize or maximumPoolSize after construction, and doesn't support manipulating the work queue directly or via remove() or purge().
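A hypothetical usage sketch (the pool sizes are example values, and doCallback() is a made-up placeholder):
GrowBeforeQueueThreadPoolExecutor pool = new GrowBeforeQueueThreadPoolExecutor(
        1, 50, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
pool.execute(() -> doCallback()); // corePoolSize tracks the in-flight task count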
We have a subclass of ThreadPoolExecutor that takes an additional creationThreshold and overrides execute.
public void execute(Runnable command) {
    super.execute(command);
    final int poolSize = getPoolSize();
    if (poolSize < getMaximumPoolSize()) {
        if (getQueue().size() > creationThreshold) {
            synchronized (this) {
                setCorePoolSize(poolSize + 1);
                setCorePoolSize(poolSize);
            }
        }
    }
}
maybe that helps too, but yours looks more artsy of course…
The recommended answer resolves only one (1) of the issues with the JDK thread pool:
1) JDK thread pools are biased towards queuing. Instead of spawning a new thread, they will queue the task; only if the queue reaches its limit will the thread pool spawn a new thread.
2) Thread retirement does not happen when load lightens. For example, if we have a burst of jobs hitting the pool that causes it to go to max, followed by a light load of at most 2 tasks at a time, the pool will use all threads to service the light load, preventing thread retirement (when only 2 threads would be needed).
Unhappy with the behavior above, I went ahead and implemented a pool to overcome the deficiencies above.
To resolve 2), using LIFO scheduling solves the issue. This idea was presented by Ben Maurer at the ACM Applicative 2015 conference:
Systems # Facebook scale
So a new implementation was born:
LifoThreadPoolExecutorSQP
So far this implementation improves async execution performance for ZEL.
The implementation is spin-capable to reduce context-switch overhead, yielding superior performance for certain use cases.
Hope it helps...
PS: The JDK fork/join pool implements ExecutorService and works as a "normal" thread pool. The implementation is performant and uses LIFO thread scheduling; however, there is no control over internal queue size or retirement timeout, and, most importantly, tasks cannot be interrupted when cancelling them.
Note: I now prefer and recommend my other answer.
I have another proposal, following the original idea of changing the queue to return false. In this one all tasks can enter the queue, but whenever a task is enqueued after execute(), we follow it with a sentinel no-op task which the queue rejects, causing a new thread to spawn, which will execute the no-op immediately followed by something from the queue.
Because worker threads may be polling the LinkedBlockingQueue for a new task, it's possible for a task to get enqueued even when there's an available thread. To avoid spawning new threads even when there are threads available, we need to keep track of how many threads are waiting for new tasks on the queue, and only spawn a new thread when there are more tasks on the queue than waiting threads.
final Runnable SENTINEL_NO_OP = new Runnable() { public void run() { } };

final AtomicInteger waitingThreads = new AtomicInteger(0);

BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
    @Override
    public boolean offer(Runnable e) {
        // offer returning false will cause the executor to spawn a new thread
        if (e == SENTINEL_NO_OP) return size() <= waitingThreads.get();
        else return super.offer(e);
    }

    @Override
    public Runnable poll(long timeout, TimeUnit unit) throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.poll(timeout, unit);
        } finally {
            waitingThreads.decrementAndGet();
        }
    }

    @Override
    public Runnable take() throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.take();
        } finally {
            waitingThreads.decrementAndGet();
        }
    }
};

ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue) {
    @Override
    public void execute(Runnable command) {
        super.execute(command);
        if (getQueue().size() > waitingThreads.get()) super.execute(SENTINEL_NO_OP);
    }
};

threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (r == SENTINEL_NO_OP) return;
        else throw new RejectedExecutionException();
    }
});
The best solution that I can think of is to extend ThreadPoolExecutor.
It offers a few hook methods: beforeExecute and afterExecute. In your extension you could use a bounded queue to feed in tasks and a second unbounded queue to handle overflow. When someone calls submit, you could attempt to place the request into the bounded queue. If you're met with an exception, you just stick the task in your overflow queue. You could then use the afterExecute hook to see if there is anything in the overflow queue after finishing a task. This way, the executor will take care of the stuff in its bounded queue first, and automatically pull from this unbounded queue as time permits. A sketch of this idea follows.
It seems like more work than your solution, but at least it doesn't involve giving queues unexpected behaviors. I also imagine that there's a better way to check the status of the queue and threads rather than relying on exceptions, which are fairly slow to throw.
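A rough sketch of the idea, under the assumptions above (the class name and hand-off details are made up, not a tested implementation):
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class OverflowThreadPoolExecutor extends ThreadPoolExecutor {
    // unbounded overflow queue for tasks the bounded work queue rejected
    private final Queue<Runnable> overflow = new ConcurrentLinkedQueue<>();

    public OverflowThreadPoolExecutor(int core, int max, long keepAlive,
            TimeUnit unit, int queueCapacity) {
        super(core, max, keepAlive, unit,
                new ArrayBlockingQueue<Runnable>(queueCapacity),
                new AbortPolicy()); // rejection signals "bounded queue is full"
    }

    @Override
    public void execute(Runnable task) {
        try {
            super.execute(task); // try the bounded queue first
        } catch (RejectedExecutionException full) {
            overflow.add(task);  // park it in the overflow queue
        }
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        Runnable next = overflow.poll(); // pull overflow work as capacity frees up
        if (next != null) {
            execute(next);
        }
    }
}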
Note: for the JDK ThreadPoolExecutor, when you have a bounded queue, you only create new threads when offer returns false. You might obtain something useful with CallerRunsPolicy, which creates a bit of backpressure and directly calls run() in the caller thread.
I need tasks to be executed from threads created by the pool and have an unbounded queue for scheduling, while the number of threads within the pool may grow or shrink between corePoolSize and maximumPoolSize, so...
I ended up doing a full copy-paste from ThreadPoolExecutor and changing the execute method a bit, because
unfortunately this could not be done by extension (it calls private methods).
I didn't want to spawn new threads immediately when a new request arrives and all threads are busy (because I have in general short-lived tasks). I've added a threshold, but feel free to change it to your needs (maybe for mostly-IO workloads it is better to remove this threshold):
private final AtomicInteger activeWorkers = new AtomicInteger(0);
private volatile double threshold = 0.7d;

protected void beforeExecute(Thread t, Runnable r) {
    activeWorkers.incrementAndGet();
}

protected void afterExecute(Runnable r, Throwable t) {
    activeWorkers.decrementAndGet();
}

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && this.workQueue.offer(command)) {
        int recheck = this.ctl.get();
        if (!isRunning(recheck) && this.remove(command)) {
            this.reject(command);
        } else if (workerCountOf(recheck) == 0) {
            this.addWorker((Runnable) null, false);
        }
        //>>change start
        else if (workerCountOf(recheck) < maximumPoolSize
                && (activeWorkers.get() > workerCountOf(recheck) * threshold
                        || workQueue.size() > workerCountOf(recheck) * threshold)) {
            this.addWorker((Runnable) null, false);
        }
        //<<change end
    } else if (!this.addWorker(command, false)) {
        this.reject(command);
    }
}
Below is a solution using two thread pools, both with the same core and max pool size. The second pool is used when the first pool is busy.
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyExecutor {
    ThreadPoolExecutor tex1, tex2;

    public MyExecutor() {
        tex1 = new ThreadPoolExecutor(15, 15, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        tex1.allowCoreThreadTimeOut(true);
        tex2 = new ThreadPoolExecutor(45, 45, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        tex2.allowCoreThreadTimeOut(true);
    }

    public Future<?> submit(Runnable task) {
        ThreadPoolExecutor ex = tex1;
        int excessTasks1 = tex1.getQueue().size() + tex1.getActiveCount() - tex1.getCorePoolSize();
        if (excessTasks1 >= 0) {
            int excessTasks2 = tex2.getQueue().size() + tex2.getActiveCount() - tex2.getCorePoolSize();
            if (excessTasks2 <= 0 || excessTasks2 / (double) tex2.getCorePoolSize() < excessTasks1 / (double) tex1.getCorePoolSize()) {
                ex = tex2;
            }
        }
        return ex.submit(task);
    }
}

Multithreading programming in Java, using semaphores

I'm learning Java multithreading and I have a problem: I can't understand semaphores. How can I execute threads in a given order? For example, in image 1, the 5th thread starts running only once the 1st and 2nd have finished executing.
[Images 1 and 2 omitted: diagrams of the required thread execution order.]
Usually in Java you use mutexes (also called monitors), which prohibit two or more threads from accessing the code region protected by that mutex.
That code region is defined using the synchronized statement:

synchronized (mutex) {
    // mutual exclusive code begin
    // ...
    // ...
    // mutual exclusive code end
}
where mutex is defined as e.g.:
Object mutex = new Object();
To prevent a task from being started you need more advanced techniques, such as barriers, defined in the java.util.concurrent package.
But first make yourself comfortable with the synchronized statement.
If you think that you will often use multithreading in Java, you might want to read
"Java Concurrency in Practice"
synchronized is used so that each thread will enter that method or that portion of code one at a time. If you want to control how many threads may proceed at once, you can build a counting semaphore on top of wait/notify:
public class CountingSemaphore {
    private int value = 0;
    private int waitCount = 0;
    private int notifyCount = 0;

    public CountingSemaphore(int initial) {
        if (initial > 0) {
            value = initial;
        }
    }

    public synchronized void waitForNotify() {
        if (value <= waitCount) {
            waitCount++;
            try {
                do {
                    wait();
                } while (notifyCount == 0);
            } catch (InterruptedException e) {
                notify();
            } finally {
                waitCount--;
            }
            notifyCount--;
        }
        value--;
    }

    public synchronized void notifyToWakeup() {
        value++;
        if (waitCount > notifyCount) {
            notifyCount++;
            notify();
        }
    }
}
This is an implementation of a counting semaphore. It maintains the counter variables value, waitCount and notifyCount, and makes a thread wait when value is no greater than the number of threads already waiting and no notification is pending.
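A possible usage sketch of the class above (the thread structure and timing are made up):
public static void main(String[] args) throws InterruptedException {
    CountingSemaphore semaphore = new CountingSemaphore(0); // no permits initially

    // worker: blocks in waitForNotify() until a permit is released
    Thread worker = new Thread(semaphore::waitForNotify);
    worker.start();

    Thread.sleep(100);          // give the worker time to block (demo only)
    semaphore.notifyToWakeup(); // releases exactly one waiting thread
    worker.join();
}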
You can use Java Counting Semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
Semaphores are often used to restrict the number of threads than can access some (physical or logical) resource. For example, here is a class that uses a semaphore to control access to a pool of items:
class Pool {
    private static final int MAX_AVAILABLE = 100;
    private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    public Object getItem() throws InterruptedException {
        available.acquire();
        return getNextAvailableItem();
    }

    public void putItem(Object x) {
        if (markAsUnused(x))
            available.release();
    }

    // Not a particularly efficient data structure; just for demo
    protected Object[] items = ...; // whatever kinds of items being managed
    protected boolean[] used = new boolean[MAX_AVAILABLE];

    protected synchronized Object getNextAvailableItem() {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (!used[i]) {
                used[i] = true;
                return items[i];
            }
        }
        return null; // not reached
    }

    protected synchronized boolean markAsUnused(Object item) {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (item == items[i]) {
                if (used[i]) {
                    used[i] = false;
                    return true;
                } else
                    return false;
            }
        }
        return false;
    }
}
Before obtaining an item each thread must acquire a permit from the semaphore, guaranteeing that an item is available for use. When the thread has finished with the item it is returned back to the pool and a permit is returned to the semaphore, allowing another thread to acquire that item. Note that no synchronization lock is held when acquire() is called as that would prevent an item from being returned to the pool. The semaphore encapsulates the synchronization needed to restrict access to the pool, separately from any synchronization needed to maintain the consistency of the pool itself.
A semaphore initialized to one, and which is used such that it only has at most one permit available, can serve as a mutual exclusion lock. This is more commonly known as a binary semaphore, because it only has two states: one permit available, or zero permits available. When used in this way, the binary semaphore has the property (unlike many Lock implementations), that the "lock" can be released by a thread other than the owner (as semaphores have no notion of ownership). This can be useful in some specialized contexts, such as deadlock recovery.
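A minimal sketch of the binary-semaphore-as-lock idea from the paragraph above (the class and field names are made up):
import java.util.concurrent.Semaphore;

class BinaryLockDemo {
    private final Semaphore mutex = new Semaphore(1); // one permit: "unlocked"
    private int counter = 0;

    void increment() throws InterruptedException {
        mutex.acquire();     // take the single permit (lock)
        try {
            counter++;       // critical section
        } finally {
            mutex.release(); // return the permit (unlock); any thread may do this
        }
    }
}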

java concurrency: multi-producer one-consumer

I have a situation where different threads populate a queue (producers) and one consumer retrieves elements from this queue. My problem is that when these elements are retrieved from the queue, some are missed (a missing signal?). The producer code is:
class Producer implements Runnable {
    private Consumer consumer;

    Producer(Consumer consumer) { this.consumer = consumer; }

    @Override
    public void run() {
        consumer.send("message");
    }
}
and they are created and run with:
ExecutorService executor = Executors.newSingleThreadExecutor();
for (int i = 0; i < 20; i++) {
    executor.execute(new Producer(consumer));
}
Consumer code is:
class Consumer implements Runnable {
    private Queue<String> queue = new ConcurrentLinkedQueue<String>();

    void send(String message) {
        synchronized (queue) {
            queue.add(message);
            System.out.println("SIZE: " + queue.size());
            queue.notify();
        }
    }

    @Override
    public void run() {
        int counter = 0;
        synchronized (queue) {
            while (true) {
                try {
                    System.out.println("SLEEP");
                    queue.wait(10);
                } catch (InterruptedException e) {
                    Thread.interrupted();
                }
                System.out.println(counter);
                if (!queue.isEmpty()) {
                    queue.poll();
                    counter++;
                }
            }
        }
    }
}
When the code is run, I sometimes get 20 elements added and 20 retrieved, but in other cases the number retrieved is less than 20. Any idea how to fix that?
I'd suggest you use a BlockingQueue instead of a Queue. A LinkedBlockingDeque might be a good candidate for you.
Your code would look like this:
private final BlockingQueue<String> queue = new LinkedBlockingDeque<>();

void send(String message) throws InterruptedException {
    queue.put(message); // no synchronized needed: the BlockingQueue is thread-safe
    System.out.println("SIZE: " + queue.size());
}
and then on your consumer thread you'd just need to call:
queue.take()
The idea is that take() will block until an item is available and then return exactly one (which is where I think your implementation suffers: a missed notification while polling). put() is responsible for doing all the notifications for you. No wait/notify needed.
The issue in your code is probably because you are using notify instead of notifyAll. The former will only wake up a single thread, if there is one waiting on the lock. This allows a race condition where no thread is waiting and the signal is lost. A notifyAll will force correctness at a minor performance cost by requiring all threads to wake up to check whether they can obtain the lock.
This is best explained in Effective Java 1st ed (see p.150). The 2nd edition removed this tip since programmers are expected to use java.util.concurrent which provides stronger correctness guarantees.
It looks like a bad idea to use ConcurrentLinkedQueue and synchronization at the same time; it defies the purpose of concurrent data structures in the first place.
There is no problem with the ConcurrentLinkedQueue data structure, and replacing it with a BlockingQueue will solve the problem, but it is not the root cause.
The problem is with queue.wait(10): this is a timed wait, so the consumer reacquires the lock once 10 ms elapse. A notification (queue.notify()) sent after that point is lost, because no consumer thread is waiting on the monitor, and the producers cannot add to the queue in the meantime because the lock is claimed again by the consumer.
Moving to a BlockingQueue solved your problem because you removed the wait(10) code, and the waiting and notifying are taken care of by the BlockingQueue data structure.

Producer-consumer problem with a twist

The producer is finite, as should be the consumer.
The problem is when to stop, not how to run.
Communication can happen over any type of BlockingQueue.
Can't rely on poisoning the queue (PriorityBlockingQueue).
Can't rely on locking the queue (SynchronousQueue).
Can't rely on offer/poll exclusively (SynchronousQueue).
Probably even more exotic queues in existence.
Creates a queued seq on another (presumably lazy) seq s. The queued seq will produce a concrete seq in the background, and can get up to n items ahead of the consumer. n-or-q can be an integer n buffer size, or an instance of java.util.concurrent BlockingQueue. Note that reading from a seque can block if the reader gets ahead of the producer.
http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/seque
My attempts so far + some tests: https://gist.github.com/934781
Solutions in Java or Clojure appreciated.
class Reader {
    private final ExecutorService ex = Executors.newSingleThreadExecutor();
    private final List<Object> completed = new ArrayList<Object>();
    private final BlockingQueue<Object> doneQueue = new LinkedBlockingQueue<Object>();
    private int pending = 0;

    public synchronized Object take() {
        removeDone();
        queue();
        Object rVal;
        if (completed.isEmpty()) {
            try {
                rVal = doneQueue.take();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            pending--;
        } else {
            rVal = completed.remove(0);
        }
        queue();
        return rVal;
    }

    private void removeDone() {
        Object current = doneQueue.poll();
        while (current != null) {
            completed.add(current);
            pending--;
            current = doneQueue.poll();
        }
    }

    private void queue() {
        while (pending < 10) {
            pending++;
            ex.submit(new Runnable() {
                @Override
                public void run() {
                    doneQueue.add(compute());
                }

                private Object compute() {
                    // do actual computation here
                    return new Object();
                }
            });
        }
    }
}
Not exactly an answer I'm afraid, but a few remarks and more questions. My first answer would be: use clojure.core/seque. The producer needs to communicate end-of-seq somehow for the consumer to know when to stop, and I assume the number of produced elements is not known in advance. Why can't you use an EOS marker (if that's what you mean by queue poisoning)?
If I understand your alternative seque implementation correctly, it will break when elements are taken off the queue outside your function, since channel and q will be out of step in that case: channel will hold more #(.take q) elements than there are elements in q, causing it to block. There might be ways to ensure channel and q are always in step, but that would probably require implementing your own Queue class, and it adds so much complexity that I doubt it's worth it.
Also, your implementation doesn't distinguish between normal EOS and abnormal queue termination due to thread interruption - depending on what you're using it for you might want to know which is which. Personally I don't like using exceptions in this way — use exceptions for exceptional situations, not for normal flow control.
