I saw this self-implemented bounded blocking queue.
A change was made to it, aiming to eliminate contention by replacing notifyAll with notify.
But I don't quite get the point of the two extra variables that were added: waitOfferCount and waitPollCount.
Their initial values are both 0.
The diff from before to after they were added only touches the offer and poll methods; the full code is below.
My understanding is that the purpose of the two variables is to avoid useless notify calls when nothing is waiting on the object. But what harm would it do if it were not done this way?
Another thought is that they may have something to do with the switch from notifyAll to notify, but again I think we can safely use notify even without them?
Full code below:
// Queue here is presumably the post's own interface (java.util.Queue's
// offer/poll don't throw InterruptedException).
class FairnessBoundedBlockingQueue implements Queue {

    protected final int capacity;
    protected Node head;
    protected Node tail;

    // guard: canPollCount, head
    protected final Object pollLock = new Object();
    protected int canPollCount;
    protected int waitPollCount;

    // guard: canOfferCount, tail
    protected final Object offerLock = new Object();
    protected int canOfferCount;
    protected int waitOfferCount;

    public FairnessBoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.canPollCount = 0;
        this.canOfferCount = capacity;
        this.waitPollCount = 0;
        this.waitOfferCount = 0;
        this.head = new Node(null);
        this.tail = head;
    }

    public boolean offer(Object obj) throws InterruptedException {
        synchronized (offerLock) {
            while (canOfferCount <= 0) {
                waitOfferCount++;
                offerLock.wait();
                waitOfferCount--;
            }
            Node node = new Node(obj);
            tail.next = node;
            tail = node;
            canOfferCount--;
        }
        synchronized (pollLock) {
            ++canPollCount;
            if (waitPollCount > 0) {
                pollLock.notify();
            }
        }
        return true;
    }

    public Object poll() throws InterruptedException {
        Object result;
        synchronized (pollLock) {
            while (canPollCount <= 0) {
                waitPollCount++;
                pollLock.wait();
                waitPollCount--;
            }
            result = head.next.value;
            head.next.value = null;
            head = head.next;
            canPollCount--;
        }
        synchronized (offerLock) {
            canOfferCount++;
            if (waitOfferCount > 0) {
                offerLock.notify();
            }
        }
        return result;
    }

    // minimal Node type, inferred from its usage above
    static class Node {
        Object value;
        Node next;
        Node(Object value) { this.value = value; }
    }
}
You would need to ask the authors of that change what they thought they were achieving.
My take is as follows:
Changing from notifyAll() to notify() is a good thing. If there are N threads waiting on a queue's offerLock or pollLock, then this avoids N - 1 unnecessary wakeups.
It seems that the counters are being used to avoid calling notify() when there isn't a thread waiting. This looks to me like a doubtful optimization. AFAIK, a notify on a monitor when nothing is waiting on it is very cheap. So this may make a small difference ... but it is unlikely to be significant.
If you really want to know, write some benchmarks. Write four versions of this class: no optimization, the notify optimization, the counter optimization, and both combined. Then compare the results ... for different levels of queue contention.
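For what it's worth, a rough, hand-rolled harness along these lines could drive the comparison (not JMH; the producer/consumer counts, run length, and capacity are arbitrary choices for illustration):

import java.util.concurrent.atomic.LongAdder;

public class QueueBenchmark {

    // Assumes each of the four variants is compiled as its own copy of
    // FairnessBoundedBlockingQueue with the same offer/poll signatures.
    static long run(FairnessBoundedBlockingQueue queue,
                    int producers, int consumers, long millis) throws InterruptedException {
        LongAdder polled = new LongAdder();
        Thread[] threads = new Thread[producers + consumers];
        for (int i = 0; i < threads.length; i++) {
            final boolean producer = i < producers;
            threads[i] = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        if (producer) {
                            queue.offer("x");
                        } else {
                            queue.poll();
                            polled.increment();
                        }
                    }
                } catch (InterruptedException expectedOnShutdown) {
                    // interrupted while blocked in wait(); just exit
                }
            });
        }
        for (Thread t : threads) t.start();
        Thread.sleep(millis);                  // measure for a fixed interval
        for (Thread t : threads) t.interrupt();
        for (Thread t : threads) t.join();
        return polled.sum();                   // items actually transferred
    }

    public static void main(String[] args) throws InterruptedException {
        // Vary producer/consumer counts to vary contention.
        long transferred = run(new FairnessBoundedBlockingQueue(1024), 4, 4, 5_000);
        System.out.println("transferred in 5s: " + transferred);
    }
}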
I'm not sure what "fairness" is supposed to mean here, but I can't see anything in this class to guarantee that threads that are waiting in offer or poll get treated fairly.
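For comparison, the JDK's ArrayBlockingQueue can be asked for a real fairness policy (FIFO granting to blocked threads) through a constructor flag; a tiny illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FairQueueExample {
    public static void main(String[] args) throws InterruptedException {
        // Second argument 'true' requests fairness: threads blocked in put()/take()
        // are granted access in roughly FIFO order, at some throughput cost.
        BlockingQueue<String> q = new ArrayBlockingQueue<>(16, true);
        q.put("hello");
        System.out.println(q.take());
    }
}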
Another thought is that they may have something to do with the switch from notifyAll to notify, but again I think we can safely use notify even without them?
Yes, since two locks (pollLock and offerLock) are used, it is no problem to change notifyAll to notify even without these two variables. But if you were using a single lock for both sides, you would have to use notifyAll.
My understanding is that the purpose of the two variables is to avoid useless notify calls when nothing is waiting on the object. But what harm would it do if it were not done this way?
Yes, these two variables are there to avoid useless notify calls, but they also add extra bookkeeping on every operation. I think benchmarking may be needed to determine the performance in different scenarios.
Besides,
1. As a blocking queue, it should implement the interface BlockingQueue; poll and offer should be non-blocking, and the blocking operations should be take and put (see the sketch below).
2. This is not actually a fair queue.
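To illustrate point 1, here is how the standard BlockingQueue contract splits the blocking and non-blocking operations (using ArrayBlockingQueue just as an example):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueContract {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);

        q.put(1);                         // blocking: waits for space
        boolean added = q.offer(2);       // non-blocking: returns false, queue is full
        System.out.println("offer on full queue: " + added);

        Integer first = q.take();         // blocking: waits for an element
        Integer second = q.poll();        // non-blocking: returns null, queue is empty
        System.out.println(first + " then " + second);

        // There are also timed variants in between:
        q.offer(3, 100, TimeUnit.MILLISECONDS);
        q.poll(100, TimeUnit.MILLISECONDS);
    }
}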
Related
I have a scenario with dozens of producer and one single consumer. Timing is critical: for performance reason I want to avoid any locking of producers and I want the consumer to wait as little as possible when no messages are ready.
I've started using a ConcurrentLinkedQueue, but I don't like to call sleep on the consumer when queue.poll() == null because I could waste precious milliseconds, and I don't want to use yield because I end up wasting cpu.
So I came to implement a sort of ConcurrentBlockingQueue so that the consumer can run something like:
T item = queue.poll();
if (item == null) {
    wait();
    item = queue.poll();
}
return item;
And producer something like:
queue.offer(item);
notify();
Unfortunately wait/notify only work inside synchronized blocks, which in turn would drastically reduce producer performance. Is there any other implementation of a wait/notify mechanism that does not require synchronization?
I am aware of the risks related to not having wait and notify synchronized, and I managed to resolve them by having an external thread running the following:
while (true) {
    notify();
    sleep(100);
}
I've started using a ConcurrentLinkedQueue, but I don't like to call sleep on the consumer when queue.poll() == null
You should check the BlockingQueue interface, which has a take method that blocks until an item becomes available.
It has several implementations as detailed in the javadoc, but ConcurrentLinkedQueue is not one of them:
All Known Implementing Classes:
ArrayBlockingQueue, DelayQueue, LinkedBlockingDeque, LinkedBlockingQueue, LinkedTransferQueue, PriorityBlockingQueue, SynchronousQueue
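For example, with a LinkedBlockingQueue the consumer simply parks in take() until a producer adds an item; a minimal sketch:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TakeExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();   // parks until an item is available
                    System.out.println("got " + item);
                }
            } catch (InterruptedException e) {
                // shutdown signal
            }
        });
        consumer.start();

        // Producers just call offer()/put(); no explicit wait/notify needed.
        queue.put("hello");
        queue.put("world");

        Thread.sleep(200);
        consumer.interrupt();
    }
}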
I came up with the following implementation:
private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
private final Semaphore semaphore = new Semaphore(0);
private int size;

public void offer(T item) {
    size += 1;
    queue.offer(item);
    semaphore.release();           // wake the consumer if it is parked
}

public T poll(long timeout, TimeUnit unit) {
    semaphore.drainPermits();      // discard stale permits from earlier offers
    T item = queue.poll();
    if (item == null) {
        try {
            semaphore.tryAcquire(timeout, unit);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();   // preserve the interrupt status
        }
        item = queue.poll();
    }
    if (item == null) {
        size = 0;
    } else {
        size = Math.max(0, size - 1);
    }
    return item;
}

/** An inaccurate, O(1)-access view of the queue size. */
public int size() {
    return size;
}
With the following properties:
producers never go to the SLEEP state (which I think can happen with BlockingQueue implementations that use a Lock in offer(), or with synchronized blocks using wait/notify)
the consumer only goes to the SLEEP state when the queue is empty, but it is woken up promptly whenever a producer offers an item (no fixed-time sleep, no yield)
the consumer can sometimes be woken up even though the queue is empty, but it's OK here to waste a few CPU cycles
Is there any equivalent implementation in jdk that I'm not aware of? Open for criticism.
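A small driver for the snippet above might look like this (assuming it is wrapped in a generic class, called SemaphoreQueue<T> here purely for illustration):

import java.util.concurrent.TimeUnit;

public class SemaphoreQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical wrapper class around the fields and methods shown above.
        SemaphoreQueue<String> queue = new SemaphoreQueue<>();

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                // Blocks (via the semaphore) for at most one second when empty.
                String item = queue.poll(1, TimeUnit.SECONDS);
                System.out.println("consumed: " + item);
            }
        });
        consumer.start();

        for (int i = 0; i < 3; i++) {
            queue.offer("item-" + i);    // producers never block
            Thread.sleep(50);
        }
        consumer.join();
    }
}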
I am learning about synchronized methods as a means of preventing race conditions and unwanted behavior in Java. I was presented with the following example, and told the race condition is quite subtle:
public class Messages {

    private String message = null;
    private int count = 2;
    // invariant: 0 <= count && count <= 2

    public synchronized void put(String message) throws InterruptedException {
        while (count < 2)
            this.wait();
        this.message = message;
        this.count = 0;
        this.notifyAll();
    }

    public synchronized String getMessage() throws InterruptedException {
        while (this.count == 2)
            this.wait();
        String result = this.message;
        this.count += 1;
        this.notifyAll();
        return result;
    }
}
Subtle or not, I think I have a fundamental misunderstanding of what synchronized methods do. I was under the impression that they restrict access by threads through the use of a lock token (or similar), and thus can never race. How, then, does this example have a race condition if its methods are synchronized? Can anyone help clarify?
I presume that what the author had in mind is that, since count goes from 0 to 2, two threads might call put() in sequence, and the reader threads would thus miss one of the messages.
It's indeed a race condition: readers and putters compete for the same lock, and which messages get read depends on which thread is notified by notifyAll().
Think about ways that count could become > 2...
That code has a bad smell, too. What is count supposed to be counting? Why does get increment it and put reset it? Why the unnecessary use of 'this'? If I saw code like that in a project, I would look at it very carefully...
Multithreading is when you use new Thread(runnable).start(); this starts a new thread, which executes the run() method. The runnable is any class that implements Runnable (or extends Thread). A synchronized method makes sure that when these threads want to read data changed by the synchronized method, they actually see the change; otherwise the data might appear unchanged, or worse, half-changed.
Java's synchronized methods buy you mutual exclusion between the two methods, which means that you can assume they will not interleave.
However, you still have a race condition because you can get different behavior depending on which method runs first.
As JB Nizet suggested in his answer, consider what happens with each of the two orderings (assume they are running in different threads).
A race condition occurs whenever two entities compete for a single resource, which can cause unpredictable behavior if the outcome depends on the order. When you use notifyAll() all threads are woken up and they race to obtain the lock they were waiting for, and it's impossible to say which will execute next.
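To make that nondeterminism concrete, a small driver (hypothetical, not part of the original exercise) shows that which reader receives which message varies from run to run; one reader may even receive the same message twice while the other misses it:

public class MessagesDemo {
    public static void main(String[] args) throws InterruptedException {
        Messages m = new Messages();

        Runnable reader = () -> {
            try {
                for (int i = 0; i < 2; i++) {
                    System.out.println(Thread.currentThread().getName()
                            + " read: " + m.getMessage());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread r1 = new Thread(reader, "reader-1");
        Thread r2 = new Thread(reader, "reader-2");
        r1.start();
        r2.start();

        m.put("first");    // consumed by two getMessage() calls, in some order
        m.put("second");   // may go to the same reader twice, or one to each

        r1.join();
        r2.join();
    }
}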
I don't think having a count value > 2 is the problem if the code works as expected.
Since both the put() and getMessage() methods are synchronized, the two methods can't run at the same time. So if a thread calls getMessage() while the count value is 2, another thread can't call the put() method to set count = 0 and notify the waiting thread. There is too much synchronizing, which causes a deadlock. So the while part shouldn't be synchronized, and it could be solved like this:
public void put(String message) throws InterruptedException {
    while (count < 2)
        this.wait();
    synchronized (this) {
        this.message = message;
        this.count = 0;
        this.notifyAll();
    }
}

public String getMessage() throws InterruptedException {
    while (this.count == 2)
        this.wait();
    String result;
    synchronized (this) {
        result = this.message;
        this.count += 1;
        this.notifyAll();
    }
    return result;
}
The producer is finite, as should be the consumer.
The problem is when to stop, not how to run.
Communication can happen over any type of BlockingQueue.
Can't rely on poisoning the queue (PriorityBlockingQueue)
Can't rely on locking the queue (SynchronousQueue)
Can't rely on offer/poll exclusively (SynchronousQueue)
There are probably even more exotic queues in existence.
Creates a queued seq on another (presumably lazy) seq s. The queued seq will produce a concrete seq in the background, and can get up to n items ahead of the consumer. n-or-q can be an integer n buffer size, or an instance of java.util.concurrent.BlockingQueue. Note that reading from a seque can block if the reader gets ahead of the producer.
http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/seque
My attempts so far + some tests: https://gist.github.com/934781
Solutions in Java or Clojure appreciated.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

class Reader {

    private final ExecutorService ex = Executors.newSingleThreadExecutor();
    private final List<Object> completed = new ArrayList<Object>();
    private final BlockingQueue<Object> doneQueue = new LinkedBlockingQueue<Object>();
    private int pending = 0;

    public synchronized Object take() {
        removeDone();
        queue();
        Object rVal;
        if (completed.isEmpty()) {
            try {
                rVal = doneQueue.take();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            pending--;
        } else {
            rVal = completed.remove(0);
        }
        queue();
        return rVal;
    }

    private void removeDone() {
        Object current = doneQueue.poll();
        while (current != null) {
            completed.add(current);
            pending--;
            current = doneQueue.poll();
        }
    }

    private void queue() {
        while (pending < 10) {
            pending++;
            ex.submit(new Runnable() {
                @Override
                public void run() {
                    doneQueue.add(compute());
                }

                private Object compute() {
                    // do actual computation here
                    return new Object();
                }
            });
        }
    }
}
Not exactly an answer I'm afraid, but a few remarks and more questions. My first answer would be: use clojure.core/seque. The producer needs to communicate end-of-seq somehow for the consumer to know when to stop, and I assume the number of produced elements is not known in advance. Why can't you use an EOS marker (if that's what you mean by queue poisoning)?
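In Java terms, an EOS marker is just a sentinel object that the consumer recognizes and stops on; a minimal sketch (this assumes a FIFO queue, which is exactly why a PriorityBlockingQueue can't be poisoned this way):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EosMarkerExample {
    // A private sentinel that can never be a real item.
    private static final Object EOS = new Object();

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Object> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("item-" + i);
                }
                queue.put(EOS);            // signal end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        Object item;
        while ((item = queue.take()) != EOS) {   // consumer stops on the sentinel
            System.out.println("consumed: " + item);
        }
        producer.join();
    }
}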
If I understand your alternative seque implementation correctly, it will break when elements are taken off the queue outside your function, since channel and q will be out of step in that case: channel will hold more #(.take q) elements than there are elements in q, causing it to block. There might be ways to ensure channel and q are always in step, but that would probably require implementing your own Queue class, and it adds so much complexity that I doubt it's worth it.
Also, your implementation doesn't distinguish between normal EOS and abnormal queue termination due to thread interruption - depending on what you're using it for you might want to know which is which. Personally I don't like using exceptions in this way — use exceptions for exceptional situations, not for normal flow control.
I made a simple synchronized Stack object in Java, just for training purposes.
Here is what I did:
import java.util.ArrayDeque;
import java.util.Iterator;

public class SynchronizedStack {

    private ArrayDeque<Integer> stack;

    public SynchronizedStack() {
        this.stack = new ArrayDeque<Integer>();
    }

    public synchronized Integer pop() {
        return this.stack.pop();
    }

    public synchronized int forcePop() {
        while (isEmpty()) {
            System.out.println(" Stack is empty");
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return this.stack.pop();
    }

    public synchronized void push(int i) {
        this.stack.push(i);
        notifyAll();
    }

    public boolean isEmpty() {
        return this.stack.isEmpty();
    }

    public synchronized void pushAll(int[] d) {
        for (int i = 0; i < d.length; i++) {
            this.stack.push(d[i]);   // push the element itself, not the loop index
        }
        notifyAll();
    }

    public synchronized String toString() {
        String s = "[";
        Iterator<Integer> it = this.stack.iterator();
        while (it.hasNext()) {
            s += it.next() + ", ";
        }
        s += "]";
        return s;
    }
}
Here are my questions:
Is it OK not to synchronize the isEmpty() method? I figured it was, because even if another thread is modifying the stack at the same time, it would still return a coherent result (there is no intermediate state it could observe that is neither the initial nor the final one). Or is it better design to have all the methods of a synchronized object synchronized?
I don't like the forcePop() method. I just wanted to create a thread that was able to wait until an item was pushed onto the stack before popping an element, and I thought the best option was to do the loop with the wait() in the run() method of the thread, but I can't because it throws an IllegalMonitorStateException. What is the proper way to do something like this?
Any other comment/suggestion?
Thank you!
Stack itself is already synchronized, so it doesn't make sense to apply synchronization again (use ArrayDeque if you want a non-synchronized stack implementation)
It's NOT OK (quite apart from the previous point), because the lack of synchronization may cause memory visibility effects.
forcePop() is pretty good, though it should propagate InterruptedException instead of catching it, to follow the contract of an interruptible blocking method. That would allow you to interrupt a thread blocked in a forcePop() call by calling Thread.interrupt().
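A minimal sketch of what that change would look like, reusing the fields from the question's class:

public synchronized int forcePop() throws InterruptedException {
    while (isEmpty()) {
        wait();                  // releases the monitor until push() calls notifyAll()
    }
    return this.stack.pop();
}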
Assuming that stack.isEmpty() won't need synchronization might be true, but you are relying on an implementation detail of a class that you have no control over.
The javadocs of Stack state that the class is not thread-safe, so you should synchronize all access.
I think you're mixing idioms a little. You are backing your SynchronizedStack with java.util.Stack, which in turn is backed by java.util.Vector, which is already synchronized. I think you should encapsulate the wait() and notify() behavior in another class.
The only problem with not synchronizing isEmpty() is that you don't know what's happening underneath. While your reasoning is, well, reasonable, it assumes that the underlying Stack is also behaving in a reasonable manner. Which it probably is in this case, but you can't rely on it in general.
And the second part of your question, there's nothing wrong with a blocking pop operation, see this for a complete implementation of all the possible strategies.
And one other suggestion: if you're creating a class that is likely to be re-used in several parts of an application (or even several applications), don't use synchronized methods. Do this instead:
public class Whatever {

    private final Object lock = new Object();

    public void doSomething() {
        synchronized (lock) {
            ...
        }
    }
}
The reason for this is that you don't really know if users of your class want to synchronize on your Whatever instances or not. If they do, they might interfere with the operation of the class itself. This way you've got your very own private lock which nobody can interfere with.
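The kind of interference being described would look something like this (a contrived example; Whatever and doSomething() refer to the snippet above):

// If Whatever used synchronized methods (locking on 'this'), this caller could
// stall every method of the instance for ten seconds by grabbing the same monitor.
void hogTheInstance(Whatever w) throws InterruptedException {
    synchronized (w) {
        Thread.sleep(10_000);   // blocks any synchronized(this) method of w
    }
}
// With the private 'lock' field, the block above has no effect on doSomething(),
// because the class only ever synchronizes on an object nobody else can reach.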
I've been reading up on Doug Lea's 'Concurrent Programming in Java' book. As you may know, Doug originally wrote the Java concurrency API. However, something has caused me some confusion and I was hoping to get a few more opinions on this little conundrum!
Take the following code from Doug Lea's queuing example...
class LinkedQueue {

    protected Node head = new Node(null);
    protected Node last = head;
    protected final Object pollLock = new Object();
    protected final Object putLock = new Object();

    public void put(Object x) {
        Node node = new Node(x);
        synchronized (putLock) {         // insert at end of list
            synchronized (last) {
                last.next = node;        // extend list
                last = node;
            }
        }
    }

    public Object poll() {               // returns null if empty
        synchronized (pollLock) {
            synchronized (head) {
                Object x = null;
                Node first = head.next;  // get to first real node
                if (first != null) {
                    x = first.object;
                    first.object = null; // forget old object
                    head = first;        // first becomes new head
                }
                return x;
            }
        }
    }

    static class Node {                  // local node class for queue
        Object object;
        Node next = null;
        Node(Object x) { object = x; }
    }
}
This is quite a nice queue. It uses two monitors, so a producer and a consumer can access the queue at the same time. Nice! However, the synchronization on 'last' and 'head' is confusing me here. The book states this is needed for the situation where the queue currently has, or is about to have, 0 entries. OK, fair enough, and this kind of makes sense.
However, I then looked at the java.util.concurrent LinkedBlockingQueue. The original version of that queue doesn't synchronize on head or tail (I also wanted to post another link to the modern version, which also suffers from the same problem, but I couldn't because I'm a newbie). I wonder why not? Am I missing something here? Is there some part of the idiosyncratic nature of the Java Memory Model I'm missing? I would have thought this synchronization is needed for visibility purposes. I'd appreciate some expert opinions!
In the version you linked to, as well as the version in the latest JRE, the item field inside the Node class is volatile, which forces reads and writes of it to be visible to all other threads. Here is a more in-depth explanation: http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
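As a minimal illustration of what a volatile field buys you (this is the general idea, not the JDK source):

class VisibilityExample {
    // Without volatile, a reader thread might never observe the write below.
    private volatile Object item;

    void writer(Object x) {
        item = x;        // the write happens-before any subsequent read that sees it
    }

    Object reader() {
        return item;     // sees the most recent write to the volatile field
    }
}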
The subtlety here is that synchronized(null) would throw a NullPointerException, so neither head nor last is allowed to become null. They are both initialized to the same dummy node, which is never returned or removed from the list.
put() and poll() are synchronized on two different locks. The methods would only need to synchronize on the same lock to be thread-safe with respect to one another if they could modify the same value from different threads. The only situation in which this is a problem is when head == last (i.e. they are the same object, referenced through different member variables). This is why the code also synchronizes on head and last: most of the time these will be fast, uncontended locks, but occasionally head and last will be the same instance and one of the threads will have to block the other.
The only time that visibility is an issue is when the queue is nearly empty; the rest of the time, put() and poll() work on different ends of the queue and don't interfere with each other.