ConcurrentLinkedDeque vs LinkedBlockingDeque - java

I need a thread-safe LIFO structure and found that I can use the thread-safe implementations of Deque for this. Java 7 introduced ConcurrentLinkedDeque and Java 6 introduced LinkedBlockingDeque.
If I were to use only the non-blocking methods in LinkedBlockingDeque, such as addFirst() and removeFirst(), is there any difference compared to ConcurrentLinkedDeque?
i.e. if you disregard the blocking aspect, is there any other difference between ConcurrentLinkedDeque and LinkedBlockingDeque, apart from LinkedBlockingDeque being (optionally) bounded?

To quote the great Doug Lea:
LinkedBlockingDeque vs ConcurrentLinkedDeque
The LinkedBlockingDeque class is intended to be the "standard" blocking deque class. The current implementation has relatively low overhead but relatively poor scalability. ...
... ConcurrentLinkedDeque has almost the opposite performance profile as LinkedBlockingDeque: relatively high overhead, but very good scalability. ... in concurrent applications, it is not all that common to want a Deque that is thread safe yet does not support blocking. And most of those that do are probably better off with special-case solutions.
He seems to be suggesting that you should use LinkedBlockingDeque unless you specifically need the features of ConcurrentLinkedDeque.
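Whichever you choose, both classes implement java.util.Deque, so (as a hedged sketch with illustrative names) you can write the LIFO code against the interface and swap the implementation later:

import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class LifoSketch {
    public static void main(String[] args) {
        // Either implementation gives a thread-safe LIFO; swap as needed.
        Deque<String> stack = new ConcurrentLinkedDeque<>();
        // Deque<String> stack = new LinkedBlockingDeque<>(1024); // lock-based, optionally bounded

        stack.push("first");              // equivalent to addFirst()
        stack.push("second");
        System.out.println(stack.pop());  // equivalent to removeFirst(); prints "second"
    }
}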

ConcurrentLinkedDeque is lock-free (see the comments in its source code) while LinkedBlockingDeque uses locking, so the former is expected to scale better under contention.

Two things:
1: If I were to use only the non-blocking methods in LinkedBlockingDeque, such as addFirst() and removeFirst(), is there any difference compared to ConcurrentLinkedDeque?
These methods do differ in their concurrent locking behavior. In LinkedBlockingDeque:
public E removeFirst() {
    E x = pollFirst();
    ..
}

public E pollFirst() {
    lock.lock(); // common lock for the whole deque
    try {
        return unlinkFirst();
    } finally {
        lock.unlock();
    }
}
The addFirst() path is similar. In ConcurrentLinkedDeque both methods behave differently: there is no lock at all, and the deque is updated with CAS operations on individual nodes, so threads do not contend on a single lock for the whole structure. Checking the source of ConcurrentLinkedDeque will give you more clarity on this.
2: From the javadoc of ConcurrentLinkedDeque:
Beware that, unlike in most collections, the size method is NOT a
constant-time operation.
..
Additionally, the bulk operations addAll, removeAll, retainAll,
containsAll, equals, and toArray are not guaranteed to be performed
atomically.
The size() caveat, in particular, does not apply to LinkedBlockingDeque, which maintains an element count and answers size() in constant time.
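As a hedged illustration of the size() caveat (class and variable names here are just for the example), with ConcurrentLinkedDeque it is usually better to call isEmpty() and null-check the poll result than to compare size() with 0:

import java.util.concurrent.ConcurrentLinkedDeque;

public class SizeCaveat {
    public static void main(String[] args) {
        ConcurrentLinkedDeque<String> deque = new ConcurrentLinkedDeque<>();
        deque.push("job");

        // size() traverses the nodes (O(n)) and may already be stale when it returns,
        // so prefer isEmpty() plus a null-checked poll over a size()-based check.
        if (!deque.isEmpty()) {
            String job = deque.pollFirst(); // may still be null if another thread raced us
            if (job != null) {
                System.out.println("processing " + job);
            }
        }
    }
}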

First of all, both LinkedBlockingDeque and ConcurrentLinkedDeque are thread-safe; which one to use depends on your application's requirements.
For example,
LinkedBlockingDeque: use this deque when you need blocking behaviour or a capacity bound. Internally it guards the whole deque with a single lock, so only one thread actually operates on it at a time and the others wait their turn.
ConcurrentLinkedDeque: also a thread-safe deque, but lock-free. If your application has many threads accessing the deque concurrently and you never need blocking or a bound, ConcurrentLinkedDeque is usually the better choice.
As in your question,
1. I need to have a thread-safe LIFO structure.
Use LinkedBlockingDeque if you want blocking operations, a capacity bound, or the simplicity of a single lock.
Use ConcurrentLinkedDeque if many threads push and pop concurrently and scalability matters more than blocking.
2. If you disregard the blocking aspect, is there any other difference between ConcurrentLinkedDeque and LinkedBlockingDeque?
Yes: LinkedBlockingDeque uses a lock while ConcurrentLinkedDeque does not, which can affect performance under contention, and (as noted above) size() and the bulk operations behave differently between the two.
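To make the blocking difference concrete, here is a minimal hedged sketch: LinkedBlockingDeque can wait for an element to arrive, while ConcurrentLinkedDeque can only report that the deque is currently empty.

import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingDeque<String> blocking = new LinkedBlockingDeque<>();
        ConcurrentLinkedDeque<String> nonBlocking = new ConcurrentLinkedDeque<>();

        new Thread(() -> {
            blocking.addFirst("work");      // wakes up the consumer waiting below
            nonBlocking.addFirst("work");
        }).start();

        String fromBlocking = blocking.takeFirst();       // blocks until an element arrives
        String fromNonBlocking = nonBlocking.pollFirst(); // returns immediately, possibly null
        System.out.println(fromBlocking + " / " + fromNonBlocking);
    }
}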

Related

Are Synchronized Blocks needed for Blocking Queues

public BlockingQueue<Message> queue;
queue = new LinkedBlockingQueue<>();
I know that if I use, say, a synchronized List, I need to surround its use with synchronized blocks to safely use it across threads.
Is that the same for Blocking Queues?
No, you do not need to surround it with synchronized blocks.
From the JDK javadocs...
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless specified otherwise in an implementation. So it is possible, for example, for addAll(c) to fail (throwing an exception) after adding only some of the elements in c.
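As a hedged illustration of that javadoc (String stands in for the asker's Message type), the queuing methods can be called from multiple threads with no external synchronized blocks:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NoExternalLocking {
    public static void main(String[] args) throws InterruptedException {
        // String stands in for the asker's Message type.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                queue.put("hello"); // thread-safe on its own, no synchronized block
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(queue.take()); // blocks until the producer has put()
    }
}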
Just want to point out that, in my experience, the classes in the java.util.concurrent package of the JDK do not need external synchronized blocks. Those classes manage the concurrency for you and are thread-safe. Whether intentional or not, java.util.concurrent has largely superseded the need for synchronized blocks around collections in modern Java code.
It depends on the use case; here are two scenarios, one where synchronized blocks are not needed and one where you may need extra coordination.
Case 1: not required when using the queuing methods, e.g. put(), take(), etc.
Why they are not required is explained in the javadoc; the important line is below:
BlockingQueue implementations are thread-safe. All queuing methods
achieve their effects atomically using internal locks or other forms
of concurrency control.
Case 2: extra coordination may be needed when iterating over blocking queues and most concurrent collections.
The iterator is weakly consistent, meaning it reflects some but not necessarily all of the changes made to the backing collection since it was created. So if you care about seeing a consistent view while iterating, you need something more than the queue itself, for example a snapshot copy, or an external lock that every mutating thread also takes.
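A hedged sketch of the snapshot approach (illustrative names): copying the queue gives you a stable view to iterate, while a live iterator may or may not reflect concurrent puts.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SnapshotIteration {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(List.of(1, 2, 3));

        // The copy is fixed at this point; later puts cannot change it.
        List<Integer> snapshot = new ArrayList<>(queue);

        queue.put(4); // a live (weakly consistent) iterator might or might not see this

        for (Integer i : snapshot) {
            System.out.println(i); // prints 1, 2, 3 regardless of concurrent puts
        }
    }
}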
You are thinking about synchronization at too low a level. It doesn't have anything to do with what classes you use. It's about protecting data and objects that are shared between threads.
If one thread is able to modify any single data object or group of related data objects while other threads are able to look at or modify the same object(s) at the same time, then you probably need synchronization. The reason is, it often is not possible for one thread to modify data in a meaningful way without temporarily putting the data into an invalid state.
The purpose of synchronization is to prevent other threads from seeing the invalid state and possibly doing bad things to the same data or to other data as a result.
Java's Collections.synchronizedList(...) gives you a way for two or more threads to share a List such that the list itself is safe from being corrupted by the actions of the different threads. But it does not offer any protection for the data objects that are in the List. If your application needs that protection, then it's up to you to supply it.
If you need the equivalent protection for a queue, you can use any of the several classes that implement java.util.concurrent.BlockingQueue. But beware! The same caveat applies. The queue itself will be protected from corruption, but the protection does not automatically extend to the objects that your threads pass through the queue.
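A hedged sketch of that last caveat (the Counter class is purely illustrative): the queue hands the object over safely, but if both threads keep mutating the same object afterwards, that object needs its own synchronization.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SharedObjectCaveat {
    // Illustrative shared object: its own methods must be synchronized, because
    // the queue only protects the hand-off, not later concurrent use.
    static class Counter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Counter> queue = new LinkedBlockingQueue<>();
        Counter shared = new Counter();
        queue.put(shared);

        Thread consumer = new Thread(() -> {
            try {
                queue.take().increment(); // safe only because Counter synchronizes itself
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        shared.increment(); // the producer still holds a reference and mutates it too
        consumer.join();
        System.out.println(shared.get()); // 2
    }
}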

Is synchronized enough to make the drainTo() method of a BlockingQueue atomic?

If I simply do something like this:
synchronized (taskQueue) {    // taskQueue is a BlockingQueue
    taskQueue.drainTo(tasks); // tasks is a List
}
Am I assured that concurrent calls to taskQueue.put() and taskQueue.take() can not be executed inside the synchronized block?
In other words, am I making the drainTo() method atomic?
Or more generally, how do I make a composition of thread safe operations atomic?
Example:
if (taskQueue.size() == 1) {
    /* Do a lot of things here, but I do not want other threads
       to change the size of the queue here with take or put */
}
// taskQueue.size() must still be equal to 1
See the excerpt below from the Java docs of BlockingQueue:
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms
of concurrency control. However, the bulk Collection operations
addAll, containsAll, retainAll and removeAll are not necessarily
performed atomically unless specified otherwise in an implementation.
So it is possible, for example, for addAll(c) to fail (throwing an
exception) after adding only some of the elements in c.
Also, check the usage example in the javadoc, which shows that a BlockingQueue implementation can safely be used with multiple producers and multiple consumers.
So, as long as you are not using the bulk Collection operations (addAll, containsAll, retainAll and removeAll), you are thread-safe.
You don't even need synchronized (taskQueue) { ... }; you can call taskQueue.drainTo(tasks) directly, because BlockingQueue implementations are thread-safe for the non-bulk operations such as put, take and drainTo.
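For example, a hedged sketch of a batch consumer that relies on drainTo() alone, with no external lock (names are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
        taskQueue.put(() -> System.out.println("task 1"));
        taskQueue.put(() -> System.out.println("task 2"));

        List<Runnable> tasks = new ArrayList<>();
        taskQueue.drainTo(tasks); // moves whatever is queued right now, no synchronized block

        for (Runnable task : tasks) {
            task.run();
        }
    }
}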
Hope this helps!
Take LinkedBlockingQueue as an example: it has a takeLock and a putLock as member variables.
So client-side synchronization does not help here: the queue's own take and put operations are guarded by those internal locks, not by the monitor of the queue object, even if the object you synchronize on is the queue itself.
The drainTo() method is guarded by takeLock, so it is already atomic with respect to the other take-side operations. The put operations are guarded by putLock, so they are not blocked by a drain and can keep adding elements.
So I think nothing extra is needed here!
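For the more general question ("how do I make a composition of thread-safe operations atomic?"), one hedged approach is to route every access through a wrapper so that all threads contend on the same lock; this is only a sketch with illustrative names, and it trades away the queue's internal fine-grained locking.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Compound operations are atomic only if every thread goes through these
// synchronized methods instead of touching the queue directly.
public class GuardedTaskQueue<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    public synchronized void add(T task) {
        queue.offer(task); // unbounded queue: offer() always succeeds and never blocks
    }

    public synchronized List<T> drainIfSingle() {
        List<T> tasks = new ArrayList<>();
        if (queue.size() == 1) {  // the check-then-act is now done under one lock
            queue.drainTo(tasks);
        }
        return tasks;
    }
}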

Is ConcurrentSkipListMap put method thread-safe?

Recently, while exploring ConcurrentSkipListMap, I went through its implementation and it seemed to me that its put method is not thread-safe. It internally calls doPut, which actually adds the item, but I found that this method does not use any kind of lock, unlike ConcurrentHashMap.
Therefore, I want to know whether put is thread-safe or not. Looking at the method, it seems that it is not; that is, if this method is executed by two threads simultaneously, a problem may occur.
I know ConcurrentSkipListMap internally uses a skip list data structure, but I was expecting the put method to be thread-safe. Am I misunderstanding something? Is ConcurrentSkipListMap really not thread-safe?
Just because it doesn't use a lock doesn't make it thread-unsafe; a skip list can be implemented lock-free.
You should read the API carefully.
... Insertion, removal, update, and access operations safely execute concurrently by multiple threads. Iterators are weakly consistent, returning elements reflecting the state of the map at some point at or since the creation of the iterator. They do not throw ConcurrentModificationException, and may proceed concurrently with other operations. ...
The comments in the implementation say:
Given the use of tree-like index nodes, you might wonder why this
doesn't use some kind of search tree instead, which would support
somewhat faster search operations. The reason is that there are no
known efficient lock-free insertion and deletion algorithms for search
trees. The immutability of the "down" links of index nodes (as opposed
to mutable "left" fields in true trees) makes this tractable using
only CAS operations.
So they use low-level compare-and-swap (CAS) operations to make changes to the map atomic. With this they ensure thread safety without the need to synchronize access.
You can read it in more detail in the source code.
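As a hedged illustration of how CAS replaces locking, here is a toy lock-free stack (a Treiber-style sketch, not the actual ConcurrentSkipListMap algorithm): each thread prepares its change and then tries to publish it with compareAndSet, retrying if another thread got there first.

import java.util.concurrent.atomic.AtomicReference;

// Toy lock-free stack: threads race with compareAndSet instead of taking a lock,
// retrying until their new node is successfully linked in.
public class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> current = head.get();
            node.next = current;
            if (head.compareAndSet(current, node)) {
                return; // our node is now the head; no lock was ever held
            }
            // another thread changed head first: loop and retry
        }
    }

    public T pop() {
        while (true) {
            Node<T> current = head.get();
            if (current == null) {
                return null;
            }
            if (head.compareAndSet(current, current.next)) {
                return current.value;
            }
        }
    }
}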
We should trust the Java API documentation, and this is what the java.util.concurrent package docs say:
Concurrent Collections
Besides Queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet.

Excluding concurrent activity from a concurrent Java collection

Joshua Bloch's Effective Java, Second Edition, Item 69, states that
[...] To provide
high concurrency, these implementations manage their own synchronization internally (Item 67). Therefore, it is impossible to exclude concurrent activity from
a concurrent collection; locking it will have no effect but to slow the program.
Is this last statement correct? If two threads lock the collection and perform several operations while holding that lock, can these operations still be interleaved?
For the statement to be correct I would expect that either these collections run threads internally with which you cannot synchronize, or they somehow "override" the standard synchronization behavior such that a statement like synchronized(map){ ... } behaves differently than it does on a 'normal' object. From the answers/comments to related questions I think neither of these is true:
Exclusively Locking ConcurrentHashMap
ConcurrentHashMap and compound operations
To avoid possible misinterpretation:
I'm aware that concurrent collections are designed exactly to avoid this global locking, my question is whether it's possible in principle
I find Effective Java an excellent book and I'm just seeking clarity on a particular item.
The sources suggest that ConcurrentHashMap uses an internal mechanism for locking (static final class Segment<K,V> extends ReentrantLock) and therefore does not use any synchronized methods for its locking.
It should therefore be simple to use the Map as a lock and synchronize on it, in the same way you could use a new Object() or your own ReentrantLock. However, it would not affect the inner workings of the Map, which is, I think, what he is trying to say.
This might clarify it (a hint from Item 67, which the quoted Item 69 references):
It is not possible for clients to perform external synchronization on such a method because there can be no guarantee that unrelated clients will do likewise.
Your code is a client of these internally synchronized concurrent implementations. Even if you take an external lock (which only slows you down), other clients may not, and they will still execute the internal implementation concurrently.
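A hedged sketch of that point (names are illustrative): the synchronized block below only excludes threads that also synchronize on the map object; a thread calling put() directly never asks for that monitor, so it proceeds anyway.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExternalLockDoesNotExclude {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();

        Thread other = new Thread(() -> {
            // This client never synchronizes on 'map', so the block below
            // does nothing to keep it out: it proceeds concurrently.
            map.put("other", 1);
        });

        synchronized (map) {   // only excludes clients that also do synchronized (map)
            other.start();
            other.join();      // the put above completes while we "hold the lock"
            System.out.println(map); // {other=1}
        }
    }
}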

Java (Android), thread safe FIFO without locking?

I have a producer-consumer situation with exactly two threads. One takes objects from a pool and puts them in a FIFO; the other reads the objects (multiple at a time), does calculations, removes them from the queue and puts them back in the pool.
With ConcurrentLinkedQueue that pattern should be thread-safe without additional locks. Each object is only written once, read once and removed once, and add() and poll() are safe in ConcurrentLinkedQueue.
a) Is this correct?
b) Which other containers support this specific pattern? I remember claims about LinkedList or even ArrayList being safe because of some supposedly atomic effects with "getSize()" or "head = ...", but I am not sure and could not find them.
Yes, the add and poll methods of ConcurrentLinkedQueue are thread-safe (as are all of its other methods).
No, do not use ArrayList or LinkedList directly in a concurrent environment. These classes are not thread-safe, as their documentation states:
Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally.
If you are not satisfied with ConcurrentLinkedQueue, have a look at the other container implementations in the java.util.concurrent package:
ConcurrentLinkedDeque (a Deque, which extends Queue)
LinkedBlockingQueue (a BlockingQueue)
LinkedBlockingDeque (a BlockingDeque)
ArrayBlockingQueue (a bounded BlockingQueue)
I assume either Queue or BlockingQueue is the interface you want.
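A minimal hedged sketch of the two-thread pattern in the question, using ConcurrentLinkedQueue with no extra locks (the consumer busy-polls here for brevity; a BlockingQueue would let it wait instead):

import java.util.concurrent.ConcurrentLinkedQueue;

public class TwoThreadFifo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<int[]> fifo = new ConcurrentLinkedQueue<>();

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                fifo.add(new int[] { i });   // add() is thread-safe, no lock needed
            }
        });

        Thread consumer = new Thread(() -> {
            int received = 0;
            while (received < 5) {
                int[] item = fifo.poll();    // poll() is thread-safe; null if empty
                if (item != null) {
                    received++;
                    // ... do the calculation, then return the object to the pool
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println("all items processed");
    }
}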
