Java's BlockingQueue design question - java

The JavaDoc for java.util.concurrent.BlockingQueue.add(E e) reads:

boolean add(E e)

Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions, returning true upon success and throwing an IllegalStateException if no space is currently available. When using a capacity-restricted queue, it is generally preferable to use offer.
My question is: will it ever return false? If not, why does this method return a boolean at all?
It seems weird to me. What is the design decision behind this?
Thanks for your knowledge!
Manuel

It follows the contract of Collection.add(E e) (since BlockingQueue is a subtype of Collection):
If a collection refuses to add a particular element for any reason other than that it already contains the element, it must throw an exception (rather than returning false). This preserves the invariant that a collection always contains the specified element after this call returns.

The design decision behind it is: fail fast. The IllegalStateException is thrown if the queue has a limited capacity and is full. IllegalStateException is a RuntimeException, so if it gets thrown, you probably have a fault in your application logic, or your application logic is not defensive enough. To put it another way: if you want to use a bounded queue, your application should deal with that properly (use offer instead).
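For illustration, here is a small sketch (the queue type and capacity are arbitrary choices of mine) showing that on a full bounded queue add() fails fast with an IllegalStateException, while offer() simply returns false:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AddVsOffer {
    public static void main(String[] args) {
        // Bounded queue with capacity 1, just for illustration.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        System.out.println(queue.add("first"));    // true, queue is now full
        System.out.println(queue.offer("second")); // false, no exception

        try {
            queue.add("third");                    // full queue: add() fails fast
        } catch (IllegalStateException e) {
            System.out.println("add() threw: " + e); // "Queue full"
        }
    }
}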

I'm guessing it has a boolean return type because it's a subinterface of Queue, which also has a boolean add(E obj) method (which in turn is derived from Collection). Certain Queue implementations reject attempts to add objects to the queue by returning false.
Thus, the answer to your question is that implementations of BlockingQueue will never return false.

The method returns a boolean because it overrides Collection#add(E e).

Related

What thread safety issues does java.util.Stack or java.util.Queue have?

If I ignore the size() inaccuracy, and assume I allocated a large enough underlying Vector so that no reallocation happens, what thread safety issues does java.util.Stack or java.util.Queue have?
I cannot think of a valid/reasonable consistency argument to say they are thread unsafe.
Does anybody have some insights?
"Thread safe" isn't an absolute attribute for a class -- what's safe or unsafe is your usage of the object. You can come up with unsafe ways to use a ConcurrentHashMap, and you can come up with thread-safe ways to use a plain HashMap.
When people say a class is thread-safe, they generally mean that each method is implemented in a way that's thread-safe on its own. In that sense, a Stack is thread-safe. But its interface doesn't allow for easy/safe handling of common use cases, so in that sense it's not very thread-safe.
For instance, if your code checks that the Stack is not empty and, if so, pops an element -- that's unsafe, because the stack could have had one element (and thus was not empty), but someone else popped it before you got a chance to (in which case you're trying to pop an empty stack and will get an exception).
To be more thread-safe, you really need a single method that handles that case for you. A BlockingQueue gives you that. For instance, take() will block until there's a value to pop, while poll() will instantly return a value, or null if there's no element to pop.
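A small sketch of that check-then-act race and of the single-method alternative (the class below is illustrative, not taken from the question):

import java.util.Stack;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CheckThenAct {
    private final Stack<String> stack = new Stack<>();
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Unsafe: another thread may pop between the isEmpty() check and pop(),
    // so pop() can throw EmptyStackException even though the check passed.
    String unsafeTake() {
        if (!stack.isEmpty()) {
            return stack.pop();
        }
        return null;
    }

    // Safe: poll() performs the check and the removal as one atomic operation,
    // returning null if the queue happens to be empty.
    String safeTake() {
        return queue.poll();
    }
}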
Stack, which extends Vector, has every method synchronized. This means that interactions with individual methods are thread-safe.
Queue is an interface. The safety of use across threads is up to the individual implementations. For example, an ArrayBlockingQueue is thread safe, but a LinkedList is not.
Look at this method from ArrayBlockingQueue (leave any existing synchronisation aside):
private void insert(E x) {
    items[putIndex] = x;
    // HERE
    putIndex = inc(putIndex);
    ++count;
    notEmpty.signal();
}
Let thread A progress until HERE, and let thread B take over and execute the method; then let A continue. It is easy to see that B's E x overwrites A's E x, with count being incremented by 2 and putIndex being advanced twice.
Similar HEREs can be found in other methods as well.
All data structures with memory for data and variables for bookkeeping are blatantly vulnerable to unsynced concurrent access.

How to simulate ConcurrentModificationException in own class?

I have been reading "Effective Java" Item 60, which is "Favor the use of standard exceptions".
Another general-purpose exception worth knowing about is ConcurrentModificationException. This exception should be thrown if an object that was designed for use by a single thread or with external synchronization detects that it is being concurrently modified.
Normally people run into a CME when they try to remove from a collection while looping over it.
But here I am interested in what a concise example of detecting concurrent modification in a self-implemented class would look like.
I expect it to be something like synchronizing on an internal object and a related boolean flag: if another thread finds the flag set to false, an exception is thrown.
As a bit of research, I found this in the source of ArrayList:
final void checkForComodification() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
}
but I don't understand the principle behind how modCount is maintained; I cannot find where it is being decremented.
The modification count is essentially the "revision number" of the collection's state. It is incremented every time there is a structural change to the collection. It is never decremented.
When starting iteration, the iterator remembers the value of the mod count. It then periodically checks that the container's current mod count still equals the remembered value. If it doesn't -- meaning there has been a structural change since iteration began -- a ConcurrentModificationException is thrown.
As for how you would implement this kind of behaviour yourself: Your class depends on some assumptions to work properly, which could be violated if access to an object is not synchronized properly. Try to check those assumptions and throw a CME if you find that they are not true.
In the example of ArrayList, the assumption is that nobody will change the structure of the list while you are iterating over it, so ArrayList keeps track of a modification count that is not supposed to change during an iteration.
However, this checking is only there to make it more likely that a bad access will cause a clean exception instead of weird behaviour - in other words, this exception is just a help to developers and does not need to enforce correctness, because correctness is already compromised when you encounter it.
It is a good idea to offer this kind of help where it does not impact the performance of your class much, but using e.g. synchronization to ensure correct use is probably a bad idea - then you might as well make the class threadsafe to begin with.
This is why the API documentation for ArrayList says:
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
So the general approach is the following: remember the current state of your object and check it every time the object is accessed; if the state has changed unexpectedly, throw the exception.
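As a minimal sketch of that approach (the class below is made up for illustration and is not from the ArrayList source), a home-grown collection can keep its own modCount and have each iterator remember the value it saw when it was created:

import java.util.ConcurrentModificationException;
import java.util.Iterator;

// Hypothetical fail-fast container, illustrating only the modCount idea.
public class FailFastBag<E> implements Iterable<E> {
    private Object[] elements = new Object[10];
    private int size = 0;
    private int modCount = 0; // incremented on every structural change, never decremented

    public void add(E e) {
        if (size == elements.length) {
            Object[] bigger = new Object[size * 2];
            System.arraycopy(elements, 0, bigger, 0, size);
            elements = bigger;
        }
        elements[size++] = e;
        modCount++;
    }

    @Override
    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private int cursor = 0;
            private final int expectedModCount = modCount; // snapshot at creation time

            @Override
            public boolean hasNext() {
                return cursor < size;
            }

            @SuppressWarnings("unchecked")
            @Override
            public E next() {
                if (modCount != expectedModCount) {
                    // a structural change happened after this iterator was created
                    throw new ConcurrentModificationException();
                }
                return (E) elements[cursor++];
            }
        };
    }
}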

I keep getting java.util.concurrentmodificationexception.. How to fix this?

I've been working on this snippet of code. Here is the pseudocode of what I want to happen:
a. Check whether the size of sections (which is a list) is 0.
b. If the size of sections is 0, automatically enroll the student in the section by calling sections.add(newSection).
c. Else, if the size of sections is not zero, check for conflicts with the schedule.
d. If there are no conflicts, enroll the student in the section by calling sections.add(newSection).
e. Else do nothing.
Java keeps throwing a java.util.ConcurrentModificationException at me. I know I'm not supposed to alter the size of the ArrayList while traversing the list, because it invalidates the iterator. Is there another way to solve this? :D
Thanks a lot.
Your help is highly appreciated. :)
public String enrollsTo(Section newSection){
    StringBuffer result = new StringBuffer();
    String resultNegative = "Failed to enroll in this section.";
    String resultPositive = "Successfully enrolled in section: " + newSection.getSectionName() + ".";
    int previousSectionSize = sections.size();
    if(this.sections.isEmpty()){
        this.sections.add(newSection);
        result.append(resultPositive);
    }else{
        for(Iterator<Section> iterator = sections.iterator(); iterator.hasNext(); ){
            Section thisSection = iterator.next();
            if(thisSection.conflictsDayWith(newSection)==false &&
               thisSection.conflictsTimeWith(newSection)==false){
                this.sections.add(newSection); // <-- I believe the problem lies here.
                result.append(resultPositive);
            }
        }
    }
    // if(this.sections.size() == previousSectionSize){
    //     result.append(resultNegative);
    // }
    return result.toString();
}
Don't do sections.add(newSection) inside your for loop, as this is a modification of the collection you're currently iterating over.
Also, don't you want to check all sections before deciding whether to add the newSection or not? Maybe something like this:
boolean conflict = false;
for (...) {
    if (/* check for conflict */) {
        conflict = true;
        break;
    }
}
if (!conflict) {
    sections.add(newSection);
}
From the javadoc for ConcurrentModificationException (my emphasis):
This exception may be thrown by methods that have detected concurrent modification of an object when such modification is not permissible. For example, it is not generally permissible for one thread to modify a Collection while another thread is iterating over it. In general, the results of the iteration are undefined under these circumstances. Some Iterator implementations (including those of all the general purpose collection implementations provided by the JRE) may choose to throw this exception if this behavior is detected. Iterators that do this are known as fail-fast iterators, as they fail quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception. For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception.

Note that fail-fast behavior cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast operations throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: ConcurrentModificationException should be used only to detect bugs.
Potential solution: instead of adding directly to the list you're iterating, add to a temporary list and then when you've finished iterating, do an addAll().
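A minimal sketch of that fix, reusing the method from the question and folding in the check-all-sections advice from the earlier answers (it assumes the same Section API and that java.util.List and java.util.ArrayList are imported):

public String enrollsTo(Section newSection) {
    StringBuffer result = new StringBuffer();
    String resultPositive = "Successfully enrolled in section: " + newSection.getSectionName() + ".";

    // Buffer the addition while iterating; apply it with addAll() afterwards.
    List<Section> toAdd = new ArrayList<Section>();
    if (this.sections.isEmpty()) {
        toAdd.add(newSection);
    } else {
        boolean conflict = false;
        for (Section thisSection : this.sections) {
            if (thisSection.conflictsDayWith(newSection) || thisSection.conflictsTimeWith(newSection)) {
                conflict = true;
                break;
            }
        }
        if (!conflict) {
            toAdd.add(newSection);
        }
    }
    if (!toAdd.isEmpty()) {
        this.sections.addAll(toAdd); // safe: the iteration above has already finished
        result.append(resultPositive);
    }
    return result.toString();
}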
While iterating over a collection, you can't modify it. The line this.sections.add(newSection); is throwing the exception. You may need to use a boolean marker to track whether the condition
if (thisSection.conflictsDayWith(newSection) == false &&
    thisSection.conflictsTimeWith(newSection) == false)
held for every section. After the for loop, if your boolean marker is true, then you can write
this.sections.add(newSection);
result.append(resultPositive);
You're correct in your assumption,
this.sections.add(newSection);
is definitely the source of your problem.
Simplest solution: have a boolean representing whether the section can be added. Start out assuming it can. If there is any conflict during the iteration, set it to false. After the loop, add the section if the boolean is still true.
ConcurrentModificationExceptions often occur when you are modifying a collection while you are iterating over its elements. Read this tutorial for more details and this old SO post Why does it.next() throw java.util.ConcurrentModificationException?
I agree with #sudocode that you don't want to be adding the newSection every time you find even a section which doesn't conflict. I would have thought that when you step through the code in your debugger this would be apparent. ;)
BTW another (more obscure) way to do this without a flag is
CHECK: {
for (...) {
if (/* check for conflict */)
break CHECK;
}
sections.add(newSection);
}

LinkedList Vs ConcurrentLinkedQueue

Currently, in a multithreaded environment, we are using a LinkedList to hold data. Sometimes in the logs we get a NoSuchElementException while polling the LinkedList. Please help me understand the performance impact if we move from LinkedList to a ConcurrentLinkedQueue implementation.
Thanks,
Sachin
When you get a NoSuchElementException, this may be because you are not synchronizing properly.
For example: you're checking with it.hasNext() whether an element is in the list and afterwards trying to fetch it with it.next(). This may fail when the element has been removed in between, and that can also happen when you use synchronized versions of the Collection API.
So your problem cannot really be solved by moving to ConcurrentLinkedQueue. You may not get an exception, but you have to be prepared for null being returned even though you checked beforehand that the queue is not empty. (This is still the same error; only the implementation differs.) This holds as long as YOUR code does not have proper synchronization that puts the emptiness check and the element retrieval in the SAME synchronized scope.
There is a good chance that you trade the NoSuchElementException for a new NullPointerException afterwards.
This may not be an answer directly addressing your question about performance, but having a NoSuchElementException in LinkedList as the reason to move to ConcurrentLinkedQueue sounds a bit strange.
Edit
Some pseudo-code for broken implementations:
// list is a LinkedList
if (!list.isEmpty()) {
    ... list.getFirst()
}
Some pseudo-code for proper sync:
// list is a LinkedList
synchronized (list) {
    if (!list.isEmpty()) {
        ... list.getFirst()
    }
}
Some code for "broken" sync (does not work as intended).
This may be the result of directly switching from LinkedList to CLQ in the hope of getting rid of synchronizing on your own.
// queue is an instance of CLQ
if (!queue.isEmpty()) { // Does not really make sense, because ...
    ... queue.poll() // May return null! Good chance for an NPE here!
}
Some proper code:
// queue is an instance of CLQ
element = queue.poll();
if (element != null) {
    ...
}
or
// queue is an instance of CLQ
synchronized (queue) {
    if (!queue.isEmpty()) {
        ... queue.poll() // is not null
    }
}
ConcurrentLinkedQueue [is] an unbounded, thread-safe, FIFO-ordered queue. It uses a linked structure, similar to those we saw in Section 13.2.2 as the basis for skip lists, and in Section 13.1.1 for hash table overflow chaining. We noticed there that one of the main attractions of linked structures is that the insertion and removal operations implemented by pointer rearrangements perform in constant time. This makes them especially useful as queue implementations, where these operations are always required on cells at the ends of the structure, that is, cells that do not need to be located using the slow sequential search of linked structures.
ConcurrentLinkedQueue uses a CAS-based wait-free algorithm; that is, one that guarantees that any thread can always complete its current operation, regardless of the state of other threads accessing the queue. It executes queue insertion and removal operations in constant time, but requires linear time to execute size. This is because the algorithm, which relies on co-operation between threads for insertion and removal, does not keep track of the queue size and has to iterate over the queue to calculate it when it is required.
From Java Generics and Collections, ch. 14.2.
Note that ConcurrentLinkedQueue does not implement the List interface, so it suffices as a replacement for LinkedList only if the latter was used purely as a queue. In this case, ConcurrentLinkedQueue is obviously a better choice. There should be no big performance issue unless its size is frequently queried. But as a disclaimer, you can only be sure about performance if you measure it within your own concrete environment and program.
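If LinkedList really is used only through the Queue interface, the swap can be as small as changing how the field is constructed; a hedged sketch (the class and field names are made up):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class Holder {
    // Before (not thread-safe without external synchronization):
    // private final Queue<String> data = new java.util.LinkedList<>();

    // After: same Queue interface, lock-free thread-safe implementation.
    private final Queue<String> data = new ConcurrentLinkedQueue<>();

    public void produce(String s) {
        data.offer(s);      // never blocks, never fails on an unbounded queue
    }

    public String consume() {
        return data.poll(); // returns null if empty; no NoSuchElementException
    }
}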

A bounded BlockingQueue that doesn't block

The title of this question makes me doubt if this exist, but still:
I'm interested in whether there is an implementation of Java's BlockingQueue that is bounded by size and never blocks, but rather throws an exception when trying to enqueue too many elements.
Edit - I'm passing the BlockingQueue to an Executor, which I suppose uses its add() method, not offer(). One can write a BlockingQueue that wraps another BlockingQueue and delegates calls to add() to offer().
Edit: Based on your new description I believe that you're asking the wrong question. If you're using an Executor you should probably define a custom RejectedExecutionHandler rather than modifying the queue. This only works if you're using a ThreadPoolExecutor, but if you're not, it would probably be a better idea to modify the Executor rather than the queue.
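For illustration, a minimal sketch of that idea (pool sizes, queue capacity and the handler body are assumptions of mine, not taken from the question):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectingExecutorExample {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 2,                                    // fixed pool of 2 threads (arbitrary)
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(10),    // bounded work queue
                // Custom handler: throw instead of blocking or silently dropping.
                // (The default AbortPolicy already behaves essentially like this.)
                (task, pool) -> {
                    throw new RejectedExecutionException("Work queue is full");
                });

        try {
            executor.execute(() -> System.out.println("working"));
        } catch (RejectedExecutionException e) {
            System.err.println("Task rejected: " + e.getMessage());
        } finally {
            executor.shutdown();
        }
    }
}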
It's my opinion that it's a mistake to override offer and make it behave like add. Interface methods constitute a contract. Client code that uses blocking queues depends on the methods actually doing what the documentation specifies. Breaking that rule opens up a world of hurt. And it's inelegant.
The add() method on BlockingQueues does that, but they also have an offer() method which is generally a better choice. From the documentation for offer():
Inserts the specified element at the tail of this queue if it is possible to do so immediately without exceeding the queue's capacity, returning true upon success and false if this queue is full. This method is generally preferable to method add(E), which can fail to insert an element only by throwing an exception.
This works for all such queues regardless of the specific implementation (ArrayBlockingQueue, LinkedBlockingQueue etc.)
BlockingQueue<String> q = new LinkedBlockingQueue<String>(2);
System.out.println(q.offer("foo")); // true
System.out.println(q.offer("bar")); // true
System.out.println(q.offer("baz")); // false
One can write a BlockingQueue that wraps another BlockingQueue and delegates calls to add() to offer().
If that is supposed to be a question ... the answer is "Yes", but you can do it more neatly by creating a subclass that overrides the add(). The only catch (in both cases) is that your version of add cannot throw any checked exceptions that aren't in the method you are overriding, so your "would block" exception will need to be unchecked.
This is sad: you cannot block. There are so many use cases where you would want to block; the whole idea of providing your own bounded blocking queue to the executor loses its meaning.
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
        if (runState == RUNNING && workQueue.offer(command)) { // <-- uses offer(), which never blocks
            if (runState != RUNNING || poolSize == 0)
                ensureQueuedTaskHandled(command);
        }
        else if (!addIfUnderMaximumPoolSize(command))
            reject(command); // is shutdown or saturated
    }
}
A simple use case: queries are fetched in batches from a source DB (one executor), enriched in batches, and written into another DB (another executor); you would want to execute queries only as fast as they are being written into the other DB. In that case, the destination executor should accept a blocking bounded queue to solve the problem, rather than having to keep polling and checking how many tasks have completed before submitting more queries.
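One common workaround for that kind of back-pressure, not taken from the answers above but sketched here under the assumption that a ThreadPoolExecutor is acceptable, is to keep the bounded queue and use ThreadPoolExecutor.CallerRunsPolicy, so a saturated destination pool throttles the producer by making it run the task itself:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottledPipeline {
    public static void main(String[] args) {
        // Destination pool with a small bounded queue (sizes are illustrative).
        ThreadPoolExecutor destExecutor = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.CallerRunsPolicy()); // saturated pool -> caller runs the task

        for (int i = 0; i < 10_000; i++) {
            final int batch = i;
            // When the queue is full, execute() runs writeBatch(batch) in the
            // submitting thread, which naturally slows down the producer.
            destExecutor.execute(() -> writeBatch(batch));
        }
        destExecutor.shutdown();
    }

    private static void writeBatch(int batch) { /* write one batch to the destination DB */ }
}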
