I am reading a book titled "Beginning Algorithms", which has examples in Java. In the chapter about queues it explains the "blocking queue", and even though my background is C# rather than Java, something looks funny to me.
This is part of the code (I have omitted the non-relevant parts):
public void enqueue(Object value){
    synchronized(_mutex){
        while(size == _max_size){
            waitForNotification();
        }
        _queue.enqueue(value);
        _mutex.notifyAll();
    }
}

private void waitForNotification(){
    try {
        _mutex.wait();
    } catch( InterruptedException e){
        // Ignore
    }
}

public Object dequeue() throws EmptyQueueException {
    synchronized(_mutex){
        while(isEmpty()){
            waitForNotification();
        }
        Object value = _queue.dequeue();
        _mutex.notifyAll();
        return value;
    }
}
I see two major problems.
First, if the queue is full, 5 threads are waiting to add items, and another thread dequeues 1 item, the other 5 will be released; they will check at the same time that "size == _max_size" is not true anymore and will try to call "_queue.enqueue" 5 times, overflowing the queue.
Second, the same happens with "dequeue". If several threads are blocked trying to dequeue items because the queue is empty, adding one item will cause all of them to see that the queue is not empty anymore, and all of them will try to dequeue, getting null or an exception, I guess.
Am I right? In C# there is "Monitor.Pulse", which only releases one blocked thread; would that be the solution?
Cheers.
You are disregarding the synchronized statement. It allows only one thread at a time to hold the lock on _mutex; consequently, only that one thread can check the value of size, because the while statement is inside the synchronized block.
As described in this thread, the wait() method actually releases the _mutex object and waits for a call to notify(), or notifyAll() in this case. Furthermore, even though notifyAll() wakes all waiting threads, each of them must reacquire the lock on _mutex before wait() returns, so they proceed one at a time and re-check the while condition before touching the queue.
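To see why the overflow cannot happen, here is a minimal, self-contained sketch of the same guarded-wait pattern (the class and field names are mine, not the book's): even if five producers are woken by notifyAll(), each one must reacquire the monitor before its wait() returns, and each then re-tests the while condition, so only as many producers proceed as there are free slots.

import java.util.LinkedList;

public class BoundedBuffer {

    private final Object mutex = new Object();
    private final LinkedList<Object> items = new LinkedList<Object>();
    private final int maxSize;

    public BoundedBuffer(int maxSize) {
        this.maxSize = maxSize;
    }

    public void enqueue(Object value) throws InterruptedException {
        synchronized (mutex) {
            // Every thread woken by notifyAll() reacquires the monitor one at a
            // time and re-evaluates this condition before it is allowed past it.
            while (items.size() == maxSize) {
                mutex.wait();
            }
            items.addLast(value);
            mutex.notifyAll();
        }
    }

    public Object dequeue() throws InterruptedException {
        synchronized (mutex) {
            while (items.isEmpty()) {
                mutex.wait();
            }
            Object value = items.removeFirst();
            mutex.notifyAll();
            return value;
        }
    }
}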
For an ArrayBlockingQueue in Java, does queue.add(element) ever block the calling thread? I have an application with dozens of threads running that will all put information into one ArrayBlockingQueue. The threads cannot afford to be blocked for even a short amount of time. If they all call add, will the method return immediately and let the queue insert the object later, or will it wait until the object has actually been put into the queue?
ArrayBlockingQueue is an implementation of Queue which additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
The add method inserts the specified element at the tail of this queue if it is possible to do so immediately without exceeding the queue's capacity, returning true upon success and throwing an IllegalStateException if this queue is full.
Attempts to put an element into a full queue will result in the operation blocking; attempts to take an element from an empty queue will similarly block.
Once created, the capacity cannot be changed.
Yes, when you call the add method on an ArrayBlockingQueue it takes a lock to do the operation; otherwise it could not be thread-safe. You cannot put an object into any shared structure in a multi-threaded environment without synchronization. If you want to avoid blocking entirely, you could look at a non-blocking collection (or create your own linked list) where your threads add values and a single daemon thread reads them one by one and puts them into the queue.
Java implementation
The add method internally calls offer. If you don't want to wait longer than a given time, you can use public boolean tryLock(long timeout, TimeUnit unit).
public boolean offer(E e) {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        if (count == items.length)
            return false;
        else {
            enqueue(e);
            return true;
        }
    } finally {
        lock.unlock();
    }
}
In ArrayBlockingQueue, concurrent operations are guarded with java.util.concurrent.locks.ReentrantLock, and the operations are synchronous: when you add an item to the queue, add returns only after the enqueue operation has completed.
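As a quick illustration (my own sketch, not code from the answer above), the three insertion methods behave differently once the queue is full: add throws, offer returns false (immediately or after a timeout), and put blocks until space appears.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class InsertBehaviorDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);
        queue.add("first");                       // succeeds, queue is now full

        boolean accepted = queue.offer("second"); // returns false immediately
        System.out.println("offer accepted: " + accepted);

        // offer with a timeout waits up to 100 ms, then gives up
        accepted = queue.offer("second", 100, TimeUnit.MILLISECONDS);
        System.out.println("timed offer accepted: " + accepted);

        try {
            queue.add("second");                  // throws IllegalStateException
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e);
        }

        // put would block here until another thread calls take()
        // queue.put("second");
    }
}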
I know this question has been asked and answered many times before, but I just couldn't figure out one trick in the examples found around the internet, like this one or that one.
Both of these solutions check whether the blocking queue's array/queue/linked list is empty before calling notifyAll() on waiting threads in the put() method, and vice versa in the get() methods. A comment in the second link emphasizes this situation and mentions that it is not necessary.
So the question is: it also seems a bit odd to me to check whether the queue is empty/full before notifying all waiting threads. Any ideas?
Thanks in advance.
I know this is an old question by now, but after reading the question and answers I couldn't help myself; I hope you find this useful.
Regarding checking whether the queue is actually full or empty before notifying other waiting threads: the point is that both put(T t) and T get() are synchronized methods, so only one thread can enter either of them at a time, yet that alone does not stop them from interleaving, because wait() releases the monitor. So if thread A has entered put(T t) and is waiting, thread B can enter and start executing T get() before thread A has exited put(T t). This double-checking design makes the developer feel a little safer, because you cannot know if or when a CPU context switch will happen.
A better and more recommended approach is to use ReentrantLock and Condition objects:
// I've edited the source code from this link
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BQueue<T> {

    private final Queue<T> q = new LinkedList<T>();
    private final int limit;
    private final Lock lock;
    private final Condition isFullCondition;
    private final Condition isEmptyCondition;

    public BQueue() {
        this(Integer.MAX_VALUE);
    }

    public BQueue(int limit) {
        this.limit = limit;
        lock = new ReentrantLock();
        isFullCondition = lock.newCondition();
        isEmptyCondition = lock.newCondition();
    }

    public void put(T t) {
        lock.lock();
        try {
            // await() releases the lock until signalled, then the condition is re-checked
            while (isFull()) {
                try {
                    isFullCondition.await();
                } catch (InterruptedException ex) {}
            }
            q.add(t);
            // only consumers waiting on an empty queue are woken
            isEmptyCondition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public T get() {
        T t = null;
        lock.lock();
        try {
            while (isEmpty()) {
                try {
                    isEmptyCondition.await();
                } catch (InterruptedException ex) {}
            }
            t = q.poll();
            // only producers waiting on a full queue are woken
            isFullCondition.signalAll();
        } finally {
            lock.unlock();
        }
        return t;
    }

    private boolean isFull() {
        return q.size() >= limit;
    }

    private boolean isEmpty() {
        return q.isEmpty();
    }
}
Using this approach there's no need for double checking, because the lock object is shared between the two methods, so only one thread can be executing inside either of them at a time. More importantly, only the threads waiting because the queue is full are signalled when space becomes available, and only the threads waiting because the queue is empty are signalled when an element arrives, which leads to better CPU utilization.
You can find a more detailed example with source code here.
I think that, logically, there is no harm in doing that extra check before notifyAll().
You can simply call notifyAll() every time you put/get something from the queue. Everything will still work, and your code is shorter. However, there is also no harm in checking whether anyone could be waiting (by checking whether the queue has just hit its boundary) before you invoke notifyAll(). This extra piece of logic saves unnecessary notifyAll() invocations.
It just depends on whether you want shorter and cleaner code, or code that runs more efficiently. (I haven't looked into the implementation of notifyAll(); if it is a cheap operation when no one is waiting, the performance gain from that extra check may not be noticeable anyway.)
The reason why the authors used notifyAll() is simple: they had no clue whether or not it was necessary, so they opted for the "safer" choice.
In the above example it would be sufficient to just call notify(), as for each single element added, only a single waiting thread can be served under any circumstances.
This becomes more obvious if your queue also has an option to add multiple elements in one step, such as addAll(Collection<T> list); in that case more than one thread waiting on an empty list could be served, to be exact: as many threads as elements have been added.
notifyAll(), however, causes extra overhead in the single-element case, as many threads are woken up unnecessarily and therefore have to be put to sleep again, blocking queue access in the meantime. So replacing notifyAll() with notify() would improve speed in this special case.
But then, not using wait/notify and synchronized at all, and instead using the java.util.concurrent package, would increase speed by far more than any smart wait/notify implementation could ever get to.
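For example, the whole hand-written queue above collapses to something like this with java.util.concurrent (a sketch of mine; the class and thread bodies are arbitrary):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                 // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    Integer value = queue.take(); // blocks while the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}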
I would like to present a simple blocking queue implementation, which should help people understand this easily. This is for someone who is new to the topic.
import java.util.LinkedList;
import java.util.List;

class BlockingQueue {

    private List queue = new LinkedList();
    private int limit = 10;

    public BlockingQueue(int limit) {
        this.limit = limit;
    }

    public synchronized void enqueue(Object ele) throws InterruptedException {
        while (queue.size() == limit)
            wait();             // queue is full, wait for a consumer
        if (queue.size() == 0)
            notifyAll();        // queue was empty, so consumers may be waiting
        // add
        queue.add(ele);
    }

    public synchronized Object deque() throws InterruptedException {
        while (queue.size() == 0)
            wait();             // queue is empty, wait for a producer
        if (queue.size() == limit)
            notifyAll();        // queue was full, so producers may be waiting
        return queue.remove(0);
    }
}
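A short usage sketch (mine, not part of the original answer) that exercises the class above with one producer and one consumer might look like this:

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue queue = new BlockingQueue(5);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    queue.enqueue("item-" + i);               // blocks once 5 items are queued
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    System.out.println("got " + queue.deque()); // blocks while empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}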
I'm trying to implement a simple blocking queue in Java ME. In the Java ME API the concurrency utilities of Java SE are not available, so I have to use wait/notify like in the old times.
This is my provisional implementation. I'm using notify instead of notifyAll because in my project there are multiple producers but only a single consumer. I used a dedicated object for wait/notify on purpose, to improve readability, even though it wastes a reference:
import java.util.Vector;

public class BlockingQueue {

    private Vector queue = new Vector();
    private Object queueLock = new Object();

    public void put(Object o) {
        synchronized (queueLock) {
            queue.addElement(o);
            queueLock.notify();
        }
    }

    public Object take() {
        Object ret = null;
        synchronized (queueLock) {
            while (queue.isEmpty()) {
                try {
                    queueLock.wait();
                } catch (InterruptedException e) {}
            }
            ret = queue.elementAt(0);
            queue.removeElementAt(0);
        }
        return ret;
    }
}
My main question is about the put method. Could I put the queue.addElement line out of the synchronized block? Will performance improve if so?
Also, the same applies to take: could I take the two operations on queue out of the synchronized block?
Any other possible optimization?
EDIT:
As @Raam correctly pointed out, the consumer thread can starve when being awakened from wait. So what are the alternatives to prevent this? (Note: in Java ME I don't have all these nice classes from Java SE. Think of it as the old Java 1.2.)
The Vector class synchronizes its individual methods, but it makes no guarantees about compound operations, so you should synchronize access to it like you have done. Unless you have evidence that your current solution has performance problems, I wouldn't worry about it.
On a side note, I see no harm in using notifyAll rather than notify to support multiple consumers.
synchronized is used to protect access to shared state and ensure atomicity.
Note that the methods of Vector are already synchronized, therefore Vector protects its own shared state itself. So your synchronization blocks are only needed to ensure atomicity of your compound operations.
You certainly cannot move the operations on queue out of the synchronized block in your take() method, because atomicity is crucial for the correctness of that method. But, as far as I understand, you could move the queue operation out of the synchronized block in the put() method (I cannot imagine a situation where it could go wrong).
However, the reasoning above is purely theoretical, because in both cases you have double synchronization: you synchronize on queueLock, and the methods of Vector implicitly synchronize on queue. Therefore the proposed optimization doesn't make much sense, and its correctness depends on the presence of that double synchronization.
To avoid double synchronization you need to synchronize on queue as well:
synchronized (queue) { ... }
Another option would be to use a non-synchronized collection (such as ArrayList) instead of Vector, but Java ME doesn't support it. In that case you wouldn't be able to use the proposed optimization either, because synchronized blocks would also protect the shared state of the non-synchronized collection.
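For illustration, here is a sketch (my adaptation, not the answerer's code) of the asker's class with the Vector itself used as the monitor, so the explicit block and Vector's own internal synchronization share one lock:

import java.util.Vector;

public class BlockingQueue {

    private Vector queue = new Vector();

    public void put(Object o) {
        synchronized (queue) {      // same monitor that Vector's methods lock on
            queue.addElement(o);
            queue.notify();         // wake one waiting consumer
        }
    }

    public Object take() {
        Object ret = null;
        synchronized (queue) {
            while (queue.isEmpty()) {
                try {
                    queue.wait();   // releases the monitor on queue while waiting
                } catch (InterruptedException e) {}
            }
            ret = queue.elementAt(0);
            queue.removeElementAt(0);
        }
        return ret;
    }
}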
Unless you have performance issues specifically due to garbage collection, I would rather use a linked list than a Vector to implement a queue (first in,first out).
I would also write code that could be reused when your project (or another one) gets multiple consumers. In that case, be aware that the Java language specification does not impose a particular way to implement monitors. In practice, that means you don't control which consumer thread gets notified (half of the existing Java virtual machines implement monitors using a FIFO model and the other half using a LIFO model).
I also think that whoever uses the blocking class should be the one to deal with the InterruptedException. After all, the client code would otherwise have to deal with a null Object return.
So, something like this:
/*package*/ class LinkedObject {

    private Object iCurrentObject = null;
    private LinkedObject iNextLinkedObject = null;

    LinkedObject(Object aNewObject, LinkedObject aNextLinkedObject) {
        iCurrentObject = aNewObject;
        iNextLinkedObject = aNextLinkedObject;
    }

    Object getCurrentObject() {
        return iCurrentObject;
    }

    LinkedObject getNextLinkedObject() {
        return iNextLinkedObject;
    }
}

public class BlockingQueue {

    private LinkedObject iLinkedListContainer = null;
    private Object iQueueLock = new Object();
    private int iBlockedThreadCount = 0;

    public void appendObject(Object aNewObject) {
        synchronized (iQueueLock) {
            iLinkedListContainer = new LinkedObject(aNewObject, iLinkedListContainer);
            if (iBlockedThreadCount > 0) {
                iQueueLock.notify(); // one at a time because we only appended one object
            }
        } // synchronized(iQueueLock)
    }

    public Object getFirstObject() throws InterruptedException {
        Object result = null;
        synchronized (iQueueLock) {
            if (null == iLinkedListContainer) {
                ++iBlockedThreadCount;
                try {
                    iQueueLock.wait();
                    --iBlockedThreadCount; // instead of having a "finally" statement
                } catch (InterruptedException iex) {
                    --iBlockedThreadCount;
                    throw iex;
                }
            }
            result = iLinkedListContainer.getCurrentObject();
            iLinkedListContainer = iLinkedListContainer.getNextLinkedObject();
            if ((iBlockedThreadCount > 0) && (null != iLinkedListContainer)) {
                iQueueLock.notify();
            }
        } // synchronized(iQueueLock)
        return result;
    }
}
I think that if you try to put less code in the synchronized blocks, the class will not be correct anymore.
There seem to be some issues with this approach. You can have scenarios where the consumer misses notifications and waits on the queue even when there are elements in the queue.
Consider the following sequence in chronological order
T1 - Consumer acquires the queueLock and then calls wait. Wait will release the lock and cause the thread to wait for a notification
T2 - One producer acquires the queueLock and adds an element to the queue and calls notify
T3 - The Consumer thread is notified and attempts to acquire queueLock BUT fails as another producer comes at the same time. (from the notify java doc - The awakened thread will compete in the usual manner with any other threads that might be actively competing to synchronize on this object; for example, the awakened thread enjoys no reliable privilege or disadvantage in being the next thread to lock this object.)
T4 - The second producer now adds another element and calls notify. This notify is lost as the consumer is waiting on queueLock.
So, theoretically, it's possible for the consumer to starve (forever stuck trying to get the queueLock). You can also run into a memory issue, with multiple producers adding elements to the queue that are never read and removed from it.
Some changes that I would suggest are as follows -
Keep an upper bound on the number of items that can be added to the queue.
Ensure that the consumer always reads all the elements. Here is a program which shows how the producer-consumer problem can be coded; a sketch along the same lines follows below.
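Since Java ME lacks java.util.concurrent, a bounded wait/notifyAll variant along the lines of these suggestions could look like the following sketch (mine, not the code from the linked program):

import java.util.Vector;

public class BoundedQueue {

    private final Vector queue = new Vector();
    private final int capacity;

    public BoundedQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(Object o) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();         // bound the queue so producers cannot outrun the consumer
        }
        queue.addElement(o);
        notifyAll();        // notifyAll so a waiting consumer cannot miss the signal
    }

    public synchronized Object take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        Object ret = queue.elementAt(0);
        queue.removeElementAt(0);
        notifyAll();        // wake any producers blocked on a full queue
        return ret;
    }
}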
I'm reading the book Beginning Android Games.
It uses synchronized() a lot but I don't really understand what it does. I haven't used Java in a long time and I'm not sure if I ever used multithreading.
In the Canvas examples it uses synchronized(this). However in the OpenGL ES example, it creates an Object called stateChanged and then uses synchronized(stateChanged). When the game state changes it calls stateChanged.wait() and then stateChanged.notifyAll();
Some code:
Object stateChanged = new Object();

//The onPause() looks like this:
public void onPause()
{
    synchronized(stateChanged)
    {
        if(isFinishing())
            state = GLGameState.Finished;
        else
            state = GLGameState.Paused;
        while(true)
        {
            try
            {
                stateChanged.wait();
                break;
            } catch(InterruptedException e)
            {
            }
        }
    }
}

//The onDrawSurface looks like this:
public void onDrawFrame(GL10 gl)
{
    GLGameState state = null;
    synchronized(stateChanged)
    {
        state = this.state;
    }
    if(state == GLGameState.Running)
    {
    }
    if(state == GLGameState.Paused)
    {
        synchronized(stateChanged)
        {
            this.state = GLGameState.Idle;
            stateChanged.notifyAll();
        }
    }
    if(state == GLGameState.Finished)
    {
        synchronized(stateChanged)
        {
            this.state = GLGameState.Idle;
            stateChanged.notifyAll();
        }
    }
}

//the onResume() looks like this:
synchronized(stateChanged)
{
    state = GLGameState.Running;
    startTime = System.nanoTime();
}
The synchronized keyword is used to keep variables or methods thread-safe. If you wrap a variable in a synchronized block like so:

synchronized(myVar) {
    // Logic involving myVar
}

Then any other thread that tries to enter a block synchronized on myVar while the logic inside this block is running will wait until the block has finished execution. Note that this only protects code that also synchronizes on myVar; as long as all access goes through the lock, it ensures that the value going into the block stays consistent through the lifecycle of that block.
This Java Tutorial can probably help you understand what using synchronized on an object does.
When object.wait() is called it will release the lock held on that object (which happens when you say synchronized(object)), and freeze the thread. The thread then waits until object.notify() or object.notifyAll() is called by a separate thread. Once one of these calls occurs, it will allow any threads that were stopped due to object.wait() to continue. This does not mean that the thread that called object.notify() or object.notifyAll() will freeze and pass control to a waiting thread, it just means these waiting threads are now able to continue, whereas before they were not.
When used like this:
private synchronized void someMethod()
You get these effects:
1. First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
2. Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
(Taken from here)
You get a similar effect when you use a synchronized block of code:
private void someMethod() {
    // some actions...

    synchronized(this) {
        // code here has synchronized access
    }

    // more actions...
}
As explained here
Java (which Android is based on) can run multiple threads that can utilize multiple CPU cores. Multi-threading means that Java can be doing two things at the exact same moment. If you have a block of code or a method that you need to ensure can only be operated on by one thread at a time, you synchronize that block of code.
Here is the official Java explanation from Oracle
It's important to know that there is a processor/IO cost involved with using synchronized, and you only want to use it when you need it. It is also important to research which Java classes/methods are thread-safe. For instance, the ++ increment operator is not guaranteed to be thread-safe, whereas you can easily create a block of synchronized code that increments a value using += 1.
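For instance, a minimal sketch (mine, not from the Oracle page) of making an increment thread-safe with a synchronized block:

public class Counter {

    private int value = 0;
    private final Object lock = new Object();

    public void increment() {
        // value++ alone is a read-modify-write sequence and is not atomic;
        // the synchronized block makes the whole operation atomic.
        synchronized (lock) {
            value += 1;
        }
    }

    public int get() {
        synchronized (lock) {
            return value;
        }
    }
}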
Only one thread can be active inside a block synchronized on a given object.
Calling wait() gives up this right and deactivates the current thread until someone calls notify() (or notifyAll()).
Then the inactive thread starts wanting to run in the synchronized block again, but is treated equally with all other threads that want it. Only one, somehow chosen (the programmer can neither influence nor depend on which one), actually gets in.
The synchronized keyword in Java is used for two things.
The first meaning is the so-called critical section, i.e. a part of code that can be executed by only one thread at a time. The object you pass to synchronized allows a kind of naming: while code is running inside synchronized(a), no other thread can enter another block guarded by synchronized(a), but it can enter a block guarded by synchronized(b).
The other issue is inter-thread communication. A thread can wait until another thread notifies it. Both wait and notify must be called within a synchronized block.
It was a very short description. I'd suggest you search for a tutorial about multithreading and read it.
The keyword synchronized, together with the wait and notify operations, forms a condition monitor, a construct useful for coordinating multiple threads.
I've been having some thread related problems recently with a consumer that takes points. Here is the original, which works fine except for taking up a lot of cpu constantly checking the queue. The idea is that cuePoint can be called casually and the main thread keeps going.
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PointConsumer implements Runnable {

    public static final int MAX_QUEUE_SIZE = 500;

    BlockingQueue<Point> queue;

    public PointConsumer() {
        this.queue = new ArrayBlockingQueue<Point>(MAX_QUEUE_SIZE);
    }

    public void cuePoint(Point p) {
        try {
            this.queue.add(p);
        }
        catch (java.lang.IllegalStateException i) {}
    }

    public void doFirstPoint() {
        if (queue.size() != 0) {
            Point p = queue.poll();
            //operation with p that will take a while
        }
    }

    public void run() {
        while (true) {
            doFirstPoint();
        }
    }
}
I tried to fix the cpu issue by adding notify() every time the cue function is called, and re-working doFirstPoint() to something like this:
public void doFirstPoint() {
    if (queue.size() != 0) {
        //operation with p that will take a while
    }
    else {
        try {
            wait();
        }
        catch (InterruptedException ie) {}
    }
}
However, I found out that notify() and wait() only work in synchronized functions. When I make doFirstPoint and cuePoint synchronized, the main thread which has called cuePoint will be kept waiting.
I had a few ideas to get around this, including making the thread an object and having it be directly notified, but I wasn't sure if that would cause more problems than it would fix, be very bad form, or simply not work. Is there a simple solution to this problem that I'm missing?
The point of BlockingQueue is that you don't have to write this code yourself.
Just call take() instead, which will wait until an object is inserted onto the queue, or use poll but with a timeout, so that it only returns null if the timeout elapses.
EDIT: Just to clarify the answer - as it's in the comments - not only does this mean you can remove the wait/notify code; you also remove the size check as the queue does that for you.
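Applied to the code in the question, doFirstPoint() disappears and run() could collapse into something like this sketch (assuming the same queue field and Point class as above):

public void run() {
    try {
        while (true) {
            Point p = queue.take();   // blocks until a point is available, no busy-waiting
            //operation with p that will take a while
        }
    } catch (InterruptedException e) {
        // allow the consumer thread to exit cleanly when interrupted
        Thread.currentThread().interrupt();
    }
}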
Just to add to what's been said, it's important to note the difference between put() and add(). If your queue is full, points you try to insert with add() may never actually make it into the queue, because an IllegalStateException will be thrown, while put() will wait, if necessary, for space to insert the point.
Documentation for add() states:
Adds the specified element to this queue if it is possible to do so immediately, returning true upon success, else throwing an IllegalStateException.
while put() states:
Adds the specified element to this queue, waiting if necessary for space to become available.
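In the question's cuePoint method, that difference would look roughly like this (a sketch; tryCuePoint is just an illustrative name, and whether blocking the caller is acceptable depends on the application):

// Blocks the caller until space is available; no points are silently dropped
public void cuePoint(Point p) throws InterruptedException {
    queue.put(p);
}

// Alternative: never blocks, but reports whether the point was accepted
public boolean tryCuePoint(Point p) {
    return queue.offer(p);
}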