Is the ArrayBlockingQueue add method instant? - java

For an ArrayBlockingQueue in Java, does queue.add(element) ever block the thread that calls it? I have an application with dozens of threads running that will all put information into one ArrayBlockingQueue. The threads cannot afford to be blocked for even a short amount of time. If they are all putting objects into the queue, will the add method return immediately and let the queue insert the object at some point in the future, or will it wait until the object has actually been put inside the queue?

ArrayBlockingQueue is an implementation of Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
The add method inserts the specified element at the tail of the queue if it is possible to do so immediately without exceeding the queue's capacity, returning true upon success and throwing an IllegalStateException if the queue is full.
Attempts to put an element into a full queue will block; attempts to take an element from an empty queue will similarly block.
Once created, the capacity cannot be changed.
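To make the difference concrete, here is a minimal sketch (the class name and values are made up for illustration) contrasting add, offer, and put on a full queue:
import java.util.concurrent.ArrayBlockingQueue;

public class AddVsOfferVsPut {
    public static void main(String[] args) {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        queue.add("first");                          // succeeds; the queue is now full
        System.out.println(queue.offer("second"));   // prints false immediately, no blocking

        try {
            queue.add("third");                      // throws because the queue is full
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e);
        }

        // queue.put("fourth") would block the calling thread here
        // until another thread removes an element
    }
}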

Yes, when you call the add method on an ArrayBlockingQueue it takes a lock to perform the operation; otherwise it could not be thread-safe. How else would you put an object into any shared data structure in a multi-threaded environment? You need synchronization. If you really cannot tolerate any blocking, you can look at a non-blocking collection (or build your own linked list) that your threads append to, and have a single daemon thread read the values one by one and put them into the queue.
Java implementation
The add method internally calls offer. If you don't want to wait longer than a given time, use the timed variant offer(E e, long timeout, TimeUnit unit) rather than trying to work with the internal lock's tryLock(long timeout, TimeUnit unit) yourself.
public boolean offer(E e) {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        if (count == items.length)
            return false;
        else {
            enqueue(e);
            return true;
        }
    } finally {
        lock.unlock();
    }
}

In ArrayBlockingQueue, concurrent operations are guarded by a java.util.concurrent.locks.ReentrantLock, and the operations are synchronous: when you add an item to the queue, the add operation returns only after the enqueue operation has completed.
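If the producer threads must never block for long, here is a hedged sketch of the non-blocking and timed insertion variants (the queue name, capacity, and payloads are illustrative):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedWaitProducer {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        // non-blocking: returns false right away if the queue is full
        boolean accepted = queue.offer("event-1");

        // bounded wait: blocks for at most 50 ms before giving up
        boolean acceptedInTime = queue.offer("event-2", 50, TimeUnit.MILLISECONDS);

        System.out.println(accepted + " " + acceptedInTime);
    }
}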

Related

Java Blocking Queue Implementation Questions

The common implementation is here, Java's built-in implementation is here. I have two questions regarding these two implementations:
1) The first implementation uses the synchronized keyword on the put() and take() methods, which means only one thread can be inside either method at a time. Let's say thread A calls put() and finds the queue is full, so it waits; then no one can ever call the take() method, since the lock has not been released yet. How can this implementation be used?
2) Java's built-in implementation uses two locks, takeLock and putLock, used in put() and take() respectively. I saw that the internal queue is a linked list, which is not thread-safe; how can that work?
As already mentioned in some of the comments, the first implementation just uses the traditional wait()/notify() mechanism, where one thread waits (releasing the lock, of course) until it is notified by another thread.
The second one uses a different lock for each of the put and take operations, so simultaneous put() calls (or simultaneous take() calls) are serialized against each other. But the two sides still need to communicate when the queue becomes full or empty, and they do so through conditions. Check out these two private methods:
/**
 * Signals a waiting take. Called only from put/offer (which do not
 * otherwise ordinarily lock takeLock.)
 */
private void signalNotEmpty() {
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
}

/**
 * Signals a waiting put. Called only from take/poll.
 */
private void signalNotFull() {
    final ReentrantLock putLock = this.putLock;
    putLock.lock();
    try {
        notFull.signal();
    } finally {
        putLock.unlock();
    }
}
The put method signals other threads trying to take/poll from an empty queue, and the take method signals other threads trying to put elements into a full queue.
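As a small usage sketch of that two-lock design (not JDK code; the thread names and counts are made up), a producer and a consumer can run largely in parallel because they mostly touch different locks:
import java.util.concurrent.LinkedBlockingQueue;

public class TwoLockDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1_000; i++) {
                    queue.put(i);      // guarded by putLock; signals notEmpty when needed
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer");

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 1_000; i++) {
                    queue.take();      // guarded by takeLock; signals notFull when needed
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer");

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println("remaining: " + queue.size());   // prints 0
    }
}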

Is BlockingQueue completely thread safe in Java

I know that the documentation says that the object is thread safe, but does that mean that all access to it from all methods is thread safe? So if I call put() on it from many threads at once and take() on the same instance at the same time, will nothing bad happen?
I ask because this answer is making me second guess:
https://stackoverflow.com/a/22006181/4164238
The quick answer is yes, they are thread safe. But let's not leave it there...
Firstly, a little housekeeping: BlockingQueue is an interface, and any implementation that is not thread safe would be breaking the documented contract. The link that you included was referring to LinkedBlockingQueue, which has some cleverness to it.
The link that you included makes an interesting observation: yes, there are two locks within LinkedBlockingQueue. However, it fails to recognize that the edge case a 'simple' implementation would have fallen foul of was in fact being handled, which is why the take and put methods are more complicated than one would at first expect.
LinkedBlockingQueue is optimized to avoid using the same lock for both reading and writing; this reduces contention, but for correct behavior it relies on the queue not being empty. When the queue has elements within it, the push and pop points are not at the same region of memory and contention can be avoided. However, when the queue is empty the contention cannot be avoided, so extra code is required to handle this common 'edge' case. This is a common trade-off between code complexity and performance/scalability.
The question then follows: how does LinkedBlockingQueue know when the queue is empty or not empty, and thus handle the threading? The answer is that it uses an AtomicInteger and a Condition as two extra concurrent data structures. The AtomicInteger is used to check whether the length of the queue is zero, and the Condition is used to wait for a signal that notifies a waiting thread when the queue is probably in the desired state. This extra coordination does have an overhead; however, measurements have shown that when ramping up the number of concurrent threads, the overhead of this technique is lower than the contention introduced by using a single lock.
Below I have copied the code from LinkedBlockingQueue and added comments explaining how they work. At a high level, take() first locks out all other calls to take() and then signals put() as necessary. put() works in a similar way, first it blocks out all other calls to put() and then signals take() if necessary.
From the put() method:
// putLock coordinates the calls to put() only; further coordination
// between put() and take() follows below
putLock.lockInterruptibly();
try {
    // block while the queue is full; count is shared between put() and take()
    // and is safely visible between cores but prone to change between calls
    // a while loop is used because state can change between signals, which is
    // why signals get rechecked and resent.. read on to see more of that
    while (count.get() == capacity) {
        notFull.await();
    }
    // we know that the queue is not full so add
    enqueue(e);
    c = count.getAndIncrement();
    // if the queue is not full, send a signal to wake up
    // any thread that is possibly waiting for the queue to be a little
    // emptier -- note that this is logically part of 'take()' but it
    // has to be here because take() blocks itself
    if (c + 1 < capacity)
        notFull.signal();
} finally {
    putLock.unlock();
}
if (c == 0)
    signalNotEmpty();
From take()
takeLock.lockInterruptibly();
try {
    // wait for the queue to stop being empty
    while (count.get() == 0) {
        notEmpty.await();
    }
    // remove element
    x = dequeue();
    // decrement shared count
    c = count.getAndDecrement();
    // send signal that the queue is not empty
    // note that this is logically part of put(), but
    // for thread coordination reasons is here
    if (c > 1)
        notEmpty.signal();
} finally {
    takeLock.unlock();
}
if (c == capacity)
    signalNotFull();
Yes, all implementations of BlockingQueue are thread safe for put, take, and all other operations.
The link only goes halfway... it does not cover the full details. It is thread safe.
That answer is a little strange - for a start, BlockingQueue is an interface so it doesn't have any locks. Implementations such as ArrayBlockingQueue use the same lock for add() and take() so would be fine. Generally, if any implementation is not thread safe then it is a buggy implementation.
I think @Chris K has missed some points. "When the queue has elements within it, then the push and the pop points are not at the same region of memory and contention can be avoided." Notice that when the queue has one element, head.next and tail point to the same node, and put() and take() can both acquire their locks and execute.
I think the empty and full conditions could be handled by the synchronized put() and take(); however, when it comes to a single element, the LinkedBlockingQueue keeps a dummy head node whose item is null, which may have something to do with the thread safety.
I tried this implementation on Leetcode
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

class FooBar {
    private final BlockingQueue<Object> line = new LinkedBlockingDeque<>(1);
    private static final Object PRESENT = new Object();
    private int n;

    public FooBar(int n) {
        this.n = n;
    }

    public void foo(Runnable printFoo) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            line.put(PRESENT);
            // printFoo.run() outputs "foo". Do not change or remove this line.
            printFoo.run();
        }
    }

    public void bar(Runnable printBar) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            line.take();
            // printBar.run() outputs "bar". Do not change or remove this line.
            printBar.run();
        }
    }
}
With n = 3, most times I get the correct response of foobarfoobarfoobar, but sometimes I get barbarfoofoofoobar, which is quite surprising.
I resolved to using ReentrantLock and Condition instead; @chris-k, can you shed more light?
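For what it's worth, here is one minimal sketch of the ReentrantLock/Condition approach (the fooTurn flag and the condition name are my own, not the poster's actual code):
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class FooBar {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition turnChanged = lock.newCondition();
    private boolean fooTurn = true;   // whose turn it is to print
    private final int n;

    public FooBar(int n) {
        this.n = n;
    }

    public void foo(Runnable printFoo) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            lock.lock();
            try {
                while (!fooTurn) {
                    turnChanged.await();   // wait until it is foo's turn
                }
                printFoo.run();            // outputs "foo"
                fooTurn = false;
                turnChanged.signalAll();   // wake the bar thread
            } finally {
                lock.unlock();
            }
        }
    }

    public void bar(Runnable printBar) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            lock.lock();
            try {
                while (fooTurn) {
                    turnChanged.await();   // wait until it is bar's turn
                }
                printBar.run();            // outputs "bar"
                fooTurn = true;
                turnChanged.signalAll();   // wake the foo thread
            } finally {
                lock.unlock();
            }
        }
    }
}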

Blocking queue in Java

I am reading a book titled "Beginning Algorithms", which has examples in Java. In the chapter about queues, it explains the "blocking queue", and even though my background is C# and not Java, something looks funny to me.
This is part of the code (I have omitted the non-relevant parts):
public void enqueue(Object value) {
    synchronized (_mutex) {
        while (size == _max_size) {
            waitForNotification();
        }
        _queue.enqueue(value);
        _mutex.notifyAll();
    }
}

private void waitForNotification() {
    try {
        _mutex.wait();
    } catch (InterruptedException e) {
        // Ignore
    }
}

public Object dequeue() throws EmptyQueueException {
    synchronized (_mutex) {
        while (isEmpty()) {
            waitForNotification();
        }
        Object value = _queue.dequeue();
        _mutex.notifyAll();
        return value;
    }
}
I see two major problems.
First, if the queue is full and 5 threads are waiting to add items, and another thread dequeues 1 item, the 5 waiting threads will be released, will all check at the same time that size == _max_size is no longer true, and will each try to call _queue.enqueue, overflowing the queue.
Second, the same thing happens with dequeue: if several threads are blocked trying to dequeue items because the queue is empty, adding one item will cause all of them to see that the queue is no longer empty, and all of them will try to dequeue, getting null or an exception, I guess.
Am I right? In C# there is Monitor.Pulse, which releases only one blocked thread; would that be the solution here?
Cheers.
You are disregarding the synchronized statement. It allows only one thread at a time to hold _mutex; consequently, only that one thread will be able to check the value of size, because the while statement is inside the synchronized block.
As described in this thread, the wait() method actually releases the _mutex object and waits for a call to notify(), or notifyAll() in this case. Furthermore, even though notifyAll() wakes all of the waiting threads, each of them must reacquire the lock on _mutex before returning from wait(), so they re-check the while condition one at a time and the queue cannot overflow.

Java class as a Monitor

I need to write a Java program, but I need some advice before starting on my own.
The program I will be writing has to do the following:
Simulate a shop that takes advance orders for donuts
The shop will not take further orders once 5000 donuts have been ordered
OK, I am kind of stuck deciding whether I should write the Java class to act as a monitor or whether I should use the Java Semaphore class instead.
Please advise me. Thanks for the help.
Any Java object can work as a monitor via the wait/notify methods inherited from Object:
Object monitor = new Object();

// thread 1
synchronized (monitor) {
    monitor.wait();
}

// thread 2
synchronized (monitor) {
    monitor.notify();
}
Just make sure to hold the lock on the monitor object when calling these methods (don't worry about the wait, the lock is released automatically to allow other threads to acquire it). This way, you have a convenient mechanism for signalling among threads.
It seems to me like you are implementing a bounded producer-consumer queue. In this case:
The producer will keep putting items in a shared queue.
If the queue size reaches 5000, it will call wait on a shared monitor and go to sleep.
When it puts an item, it will call notify on the monitor to wake up the consumer if it's waiting.
The consumer will keep taking items from the queue.
When it takes an item, it will call notify on the monitor to wake up the producer.
If the queue size reaches 0 the consumer calls wait and goes to sleep.
For an even more simplified approach, have a look at the various implementations of BlockingQueue, which provide the above features out of the box! (A minimal sketch follows below.)
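A minimal sketch of that producer-consumer setup using a BlockingQueue (interpreting the 5000 limit as a bound on pending orders, as described above; the class name, thread setup, and order counts are made up):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DonutOrderQueue {
    public static void main(String[] args) throws InterruptedException {
        // the bound makes put() block once 5000 orders are pending
        BlockingQueue<String> orders = new ArrayBlockingQueue<>(5000);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10_000; i++) {
                    orders.put("order-" + i);       // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10_000; i++) {
                    String order = orders.take();   // blocks while the queue is empty
                    // process the order here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}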
It seems to me that the core of this exercise is updating a counter (number of orders taken), in a thread-safe and atomic fashion. If implemented incorrectly, your shop could end up taking more than 5000 pre-orders due to missed updates and possibly different threads seeing stale values of the counter.
The simplest way to update a counter atomically is to use synchronized methods to get and increment it:
class DonutShop {
    private int ordersTaken = 0;

    public synchronized int getOrdersTaken() {
        return ordersTaken;
    }

    public synchronized void increaseOrdersBy(int n) {
        ordersTaken += n;
    }

    // Other methods here
}
The synchronized methods mean that only one thread can be calling either method at any time (and they also provide a memory barrier to ensure that different threads see the same value rather than locally cached ones which may be outdated). This ensures a consistent view of the counter across all threads in your application.
(Note that I didn't provide a "set" method but an "increment" method. The problem with "set" is that if a client has to call shop.set(shop.get() + 1);, another thread could have incremented the value between the calls to get and set, so this update would be lost. By making the whole increment operation atomic - because it's inside a synchronized method - this situation cannot occur.)
In practice I would probably use an AtomicInteger instead, which is basically a wrapper around an int to allow for atomic queries and updates, just like the DonutShop class above. It also has the advantage that it's more efficient in terms of minimising exclusive blocking, and it's part of the standard library so will be more immediately familiar to other developers than a class you've written yourself.
In terms of correctness, either will suffice.
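For comparison, here is a minimal sketch of an AtomicInteger-based counter with the 5000 cap enforced by a compare-and-set loop (the class name, method names, and cap check are my own additions, not part of the answer above):
import java.util.concurrent.atomic.AtomicInteger;

class AtomicDonutShop {
    private static final int MAX_ORDERS = 5000;
    private final AtomicInteger ordersTaken = new AtomicInteger();

    // Tries to record n more orders; returns false once the cap would be exceeded.
    public boolean tryIncreaseOrdersBy(int n) {
        while (true) {
            int current = ordersTaken.get();
            if (current + n > MAX_ORDERS) {
                return false;                              // shop is "full"
            }
            if (ordersTaken.compareAndSet(current, current + n)) {
                return true;                               // atomic update succeeded
            }
            // another thread updated the counter in between; retry
        }
    }

    public int getOrdersTaken() {
        return ordersTaken.get();
    }
}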
Like Tudor wrote, you can use any object as a monitor for general-purpose locking and synchronization.
However, if you have the requirement that only x orders (x = 5000 in your case) can be processed at any one time, you could use the java.util.concurrent.Semaphore class. It is made specifically for use cases where only a fixed number of jobs can run at once; these are called permits in Semaphore terminology.
If you do the processing immediately, you can go with
private Semaphore semaphore = new Semaphore(5000);

public void process(Order order)
{
    if (semaphore.tryAcquire())
    {
        try
        {
            // do your processing here
        }
        finally
        {
            semaphore.release();
        }
    }
    else
    {
        throw new IllegalStateException("can't take more orders");
    }
}
If it takes longer than that (human input required, starting another thread/process, etc.), you need to add a callback for when the processing is over, like:
private Semaphore semaphore = new Semaphore(5000);

public void process(Order order)
{
    if (semaphore.tryAcquire())
    {
        // start a new job to process order
    }
    else
    {
        throw new IllegalStateException("can't take more orders");
    }
}

// call this from the job you started, once it is finished
public void processingFinished(Order order)
{
    semaphore.release();
    // any other post-processing for that order
}

BlockingQueue - blocked drainTo() methods

BlockingQueue has a method called drainTo(), but it does not block. I need a queue that blocks but that also lets me retrieve the queued objects in a single method call.
Object first = blockingQueue.take();
if (blockingQueue.size() > 0)
    blockingQueue.drainTo(list);
I guess the above code will work but I'm looking for an elegant solution.
Are you referring to the comment in the JavaDoc:
Further, the behavior of this operation is undefined if the specified collection
is modified while the operation is in progress.
I believe that this refers to the collection list in your example:
blockingQueue.drainTo(list);
meaning that you cannot modify list at the same time you are draining from blockingQueue into list. However, the blocking queue internally synchronizes so that when drainTo is called, puts and (see the note below) gets will block. If it did not do this, then it would not be truly thread-safe. You can look at the source code and verify that drainTo is thread-safe regarding the blocking queue itself.
Alternatively, do you mean that when you call drainTo you want it to block until at least one object has been added to the queue? In that case, you have little choice other than:
list.add(blockingQueue.take());
blockingQueue.drainTo(list);
to block until one or more items have been added, and then drain the entire queue into the collection list.
Note: As of Java 7, a separate lock is used for gets and puts. Put operations are now permitted during a drainTo (and a number of other take operations).
If you happen to use Google Guava, there's a nifty Queues.drain() method.
Drains the queue as BlockingQueue.drainTo(Collection, int), but if the
requested numElements elements are not available, it will wait for
them up to the specified timeout.
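A hedged usage sketch, assuming Guava is on the classpath (the queue contents, batch size, and timeout are made up):
import com.google.common.collect.Queues;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class GuavaDrainExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> batch = new ArrayList<>();

        // waits up to 1 second for 100 elements, then returns however many were drained
        int drained = Queues.drain(queue, batch, 100, 1, TimeUnit.SECONDS);
        System.out.println("drained " + drained + " elements");
    }
}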
I found this pattern useful.
List<byte[]> blobs = new ArrayList<byte[]>();
if (queue.drainTo(blobs, batch) == 0) {
blobs.add(queue.take());
}
With the API available, I don't think you are going to get much more elegant, other than that you can remove the size test.
If you want to atomically retrieve a contiguous sequence of elements, even when another removal operation coincides, I don't believe even drainTo guarantees that.
Source code:
public int drainTo(Collection<? super E> c) {
    // arg. check
    lock.lock();
    try {
        for (n = 0; n != count; n++) {
            c.add(items[n]);
        }
        if (n > 0) {
            notFull.signalAll();
        }
        return n;
    } finally {
        lock.unlock();
    }
}
ArrayBlockingQueue's drainTo is eager to return 0 when the queue is empty; by the way, it could do that check before taking the lock.
