Joshua Bloch's Effective Java, Second Edition, Item 69, states that:
[...] To provide high concurrency, these implementations manage their own synchronization internally (Item 67). Therefore, it is impossible to exclude concurrent activity from a concurrent collection; locking it will have no effect but to slow the program.
Is this last statement correct? If two threads both lock the collection and perform several operations while holding that lock, can those operations still end up interleaved?
For the statement to be correct, I would expect that either these collections run threads internally with which you cannot synchronize, or they somehow "override" the standard synchronization behavior so that a statement like synchronized(map){ ... } behaves differently than it would on a 'normal' object. From the answers/comments to related questions I think neither of these is true:
Exclusively Locking ConcurrentHashMap
ConcurrentHashMap and compound operations
To avoid possible misinterpretation:
I'm aware that concurrent collections are designed exactly to avoid this kind of global locking; my question is whether it's possible in principle.
I find Effective Java an excellent book and I'm just seeking clarity on a particular item.
Sources suggest that ConcurrentHashMap uses an internal mechanism for locking (static final class Segment<K,V> extends ReentrantLock) and therefore does not use any synchronized methods for its locking mechanism.
It should therefore be simple to use the Map as a lock and synchronize on it - in the same way you could use a new Object() or your own ReentrantLock. However, it would not affect the inner workings of the Map which is - I think - what he is trying to say.
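As a hedged sketch (not from the book) of how that plays out: the map reference can serve as a monitor like any other object, but the map's own methods never acquire that monitor.

import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

// Thread A: uses the map object as an ordinary intrinsic lock around a compound action.
synchronized (map) {
    if (!map.containsKey("key")) {
        map.put("key", 1);
    }
}

// Thread B: calls the map directly and is not blocked by thread A's block,
// because ConcurrentHashMap's internals use their own locks/CAS rather than
// the intrinsic lock of the map object.
map.put("key", 2);

Only threads that also synchronize on the same map reference are excluded, which is exactly the "unrelated clients" problem quoted in the next answer.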
This might clarify it (a hint from another item, Item 67):
It is not possible for clients to perform external synchronization on such a method because there can be no guarantee that unrelated clients will do likewise.
Your code is a client of these internally-synchronized concurrent implementations. Even if you use an external lock (slowing yourself down), other clients may not, and they will still execute the internal implementation concurrently.
Related
public BlockingQueue<Message> queue;
queue = new LinkedBlockingQueue<>();
I know that if I use, say, a synchronized List, I need to surround accesses with synchronized blocks to use it safely across threads.
Is that the same for Blocking Queues?
No, you do not need to surround it with synchronized blocks.
From the JDK javadocs...
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless specified otherwise in an implementation. So it is possible, for example, for addAll(c) to fail (throwing an exception) after adding only some of the elements in c.
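So, as a rough sketch (the Message constructor here is made up), a producer and a consumer can call the queue's methods directly with no external synchronization:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

// Producer thread: put() blocks only if the queue is bounded and currently full.
new Thread(() -> {
    try {
        queue.put(new Message("hello")); // hypothetical Message constructor
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Consumer thread: take() blocks until an element becomes available.
new Thread(() -> {
    try {
        Message m = queue.take();
        // process m ...
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();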
Just want to point out that, in my experience, the classes in the java.util.concurrent package of the JDK do not need synchronization blocks. Those classes manage the concurrency for you and are typically thread-safe. Whether intentional or not, it seems java.util.concurrent has largely superseded the need for synchronization blocks in modern Java code.
It depends on the use case. Here are two scenarios, one where you do not need synchronized blocks and one where you do.
Case 1: Not required when using the queuing methods, e.g. put, take, etc.
Why it is not required is explained in the BlockingQueue javadoc; the important line is below:
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
Case 2: Required when iterating over blocking queues and most concurrent collections
The iterator (one example raised in the comments) is weakly consistent, meaning it reflects some but not necessarily all of the changes made to its backing collection since it was created. So if you need the iteration to reflect all changes, you have to use synchronized blocks or locks, shared with every writer, while iterating, as sketched below.
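A minimal sketch of that second case, assuming every writer cooperates by taking the same external lock (if other code updates the queue without acquiring it, this gives no guarantee):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

BlockingQueue<String> queue = new LinkedBlockingQueue<>();
ReentrantLock iterationLock = new ReentrantLock(); // shared by all writers and the iterating thread

// Writer: acquires the shared lock around each update.
iterationLock.lock();
try {
    queue.offer("item");
} finally {
    iterationLock.unlock();
}

// Iterating thread: holding the same lock keeps writers out for the duration,
// so the (weakly consistent) iterator walks a stable snapshot.
iterationLock.lock();
try {
    for (String s : queue) {
        System.out.println(s);
    }
} finally {
    iterationLock.unlock();
}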
You are thinking about synchronization at too low a level. It doesn't have anything to do with what classes you use. It's about protecting data and objects that are shared between threads.
If one thread is able to modify any single data object or group of related data objects while other threads are able to look at or modify the same object(s) at the same time, then you probably need synchronization. The reason is, it often is not possible for one thread to modify data in a meaningful way without temporarily putting the data into an invalid state.
The purpose of synchronization is to prevent other threads from seeing the invalid state and possibly doing bad things to the same data or to other data as a result.
Java's Collections.synchronizedList(...) gives you a way for two or more threads to share a List in such a way that the list itself is safe from being corrupted by the actions of the different threads. But it does not offer any protection for the data objects that are in the List. If your application needs that protection, then it's up to you to supply it.
If you need the equivalent protection for a queue, you can use any of the several classes that implement java.util.concurrent.BlockingQueue. But beware! The same caveat applies. The queue itself will be protected from corruption, but the protection does not automatically extend to the objects that your threads pass through the queue.
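A minimal sketch of that caveat, using a hypothetical mutable Message class: the queue hands the object over safely, but concurrent mutation of the object itself still needs its own protection.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical mutable payload; the queue does not protect its fields.
class Message {
    private String text;
    synchronized void setText(String t) { this.text = t; } // Message guards its own state
    synchronized String getText() { return text; }
}

BlockingQueue<Message> queue = new LinkedBlockingQueue<>();
Message m = new Message();
queue.offer(m);       // the queue's internals are thread-safe
m.setText("updated"); // safe only because Message synchronizes its own accessors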
Recently, while exploring ConcurrentSkipListMap, I went through its implementation, and it looked to me as if its put method is not thread-safe. It internally calls doPut, which actually adds the item, but I found that this method does not use any kind of lock, unlike ConcurrentHashMap.
Therefore, I want to know whether put is thread-safe or not. Looking at the method, it seems that it is not; that is, if this method is executed by two threads simultaneously, then a problem may occur.
I know ConcurrentSkipListMap internally uses a skip list data structure, but I was expecting the put method to be thread-safe. Am I misunderstanding something? Is ConcurrentSkipListMap really not thread-safe?
Just because it doesn't use a Lock doesn't make it thread-unsafe. The skip list structure can be implemented lock-free.
You should read the API carefully.
... Insertion, removal, update, and access operations safely execute concurrently by multiple threads. Iterators are weakly consistent, returning elements reflecting the state of the map at some point at or since the creation of the iterator. They do not throw ConcurrentModificationException, and may proceed concurrently with other operations. ...
The comments in the implementation say:
Given the use of tree-like index nodes, you might wonder why this doesn't use some kind of search tree instead, which would support somewhat faster search operations. The reason is that there are no known efficient lock-free insertion and deletion algorithms for search trees. The immutability of the "down" links of index nodes (as opposed to mutable "left" fields in true trees) makes this tractable using only CAS operations.
So it uses low-level compare-and-swap (CAS) operations to make changes to the map atomic. With this it ensures thread safety without the need to synchronize access.
You can read it in more detail in the source code.
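To illustrate the principle only (this is not the ConcurrentSkipListMap code), here is a minimal lock-free push onto a linked stack, where thread safety comes from compareAndSet rather than from synchronized or a Lock:

import java.util.concurrent.atomic.AtomicReference;

// Treiber-style lock-free stack: a retry loop around a single CAS.
class LockFreeStack<E> {
    private static class Node<E> {
        final E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    private final AtomicReference<Node<E>> head = new AtomicReference<>();

    public void push(E value) {
        Node<E> newHead = new Node<>(value);
        Node<E> oldHead;
        do {
            oldHead = head.get();
            newHead.next = oldHead;
            // compareAndSet fails if another thread changed head in the meantime; retry.
        } while (!head.compareAndSet(oldHead, newHead));
    }
}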
We should trust the Java API. This is what the java.util.concurrent package docs say:
Concurrent Collections
Besides Queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet.
In Java, an object itself can act as a lock for guarding its own state. This convention is used in many built-in classes, like Vector and the other synchronized collections, where every method is synchronized and thus guarded by the intrinsic lock of the object itself. Is this good or bad? Please give reasons as well.
Pros
It's simple.
You can control the lock externally.
Cons
It breaks encapsulation.
You can't change its locking behaviour without changing its implied contract.
For the most part, it doesn't matter unless you are developing an API which will be widely used. So while using synchronized(this) is not ideal, it is simple.
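For comparison, a minimal sketch of the usual alternative: synchronize on a private lock object, which keeps the locking policy encapsulated at the cost of clients no longer being able to participate in the lock.

class Counter {
    // Private lock: callers cannot acquire it, so the locking policy
    // remains an implementation detail that can change later.
    private final Object lock = new Object();
    private int count;

    void increment() {
        synchronized (lock) {
            count++;
        }
    }

    int get() {
        synchronized (lock) {
            return count;
        }
    }
}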
Well, Vector, Hashtable, etc. were synchronized like this internally, and we all know what happened to them...
I honestly can't find any good reason to do synchronization like this. Here are the disadvantages that I see:
There's almost always a more efficient way of ensuring thread-safety than just putting a lock on the entire method.
It slows down the code in single threaded environments because you pay the overhead of locking and unlocking without actually needing the lock.
It gives a false sense of security, because although each operation is synchronized, sequences of operations are not, and you can still accidentally create race conditions. Imagine a collection which is synchronized on each method and the following code:
if (collection.isEmpty()) {
    collection.add(...);
}
Assuming the aim is to have only a single item added, the above code is not thread-safe, because another thread can modify the collection between the if check and the actual call to add, even though both operations are synchronized individually. So it is possible to end up with two items in the collection.
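One sketch of a fix for a wrapper-synchronized collection (assuming every caller cooperates): hold the collection's own lock across both the check and the add, which is the same lock Collections.synchronizedList asks you to hold while iterating.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<String> collection = Collections.synchronizedList(new ArrayList<>());

// Client-side locking: the synchronized wrapper uses the returned list itself
// as its mutex, so holding it makes the check-then-act sequence atomic.
synchronized (collection) {
    if (collection.isEmpty()) {
        collection.add("only-item");
    }
}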
We know that ConcurrentHashMap can provide concurrent access to multiple threads to boost performance, and inside this class the segments are individually synchronized (am I right?). The question is: can this design guarantee thread safety? Say we have 30+ threads accessing and changing an object mapped by the same key in a ConcurrentHashMap instance; my guess is they still have to line up for that, don't they?
From my recollection, the book "Java Concurrency in Practice" says that ConcurrentHashMap provides concurrent reading and a decent level of concurrent writing. In the aforementioned scenario, and if my guess is correct, wouldn't the performance be no better than using the Collections static synchronization wrapper API?
You will still have to synchronize any access to the object being modified, and as you suspect all access to the same key will still have contention. The performance improvement comes in access to different keys, which is of course the more typical case.
All a ConcurrentMap can give you with respect to concurrency is that modifications to the map itself are done atomically, and that any writes happen-before any reads (this is important, as it provides safe publication of any reference retrieved from the map).
Safe-publishing means that any (mutable) object retrieved from the map will be seen with all writes to it before it was placed in the map. It won't help for publishing modifications that are made after retrieving it though.
However, concurrency and thread-safety are generally hard to reason about and get right if you have mutable objects that are being modified by multiple parties. Usually you have to lock in order to get it right. A better approach is often to use immutable objects in conjunction with the ConcurrentMap conditional putIfAbsent/replace methods and linearize your algorithm that way. This lock-free style tends to be easier to reason about.
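As a sketch of that lock-free style: keep the values immutable (boxed Integers here) and retry putIfAbsent/replace until the conditional update succeeds.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
String key = "hits";

// Atomically increment the count for a key without any external lock.
for (;;) {
    Integer current = counts.get(key);
    if (current == null) {
        // Another thread may win the first insert; if so, loop and try again.
        if (counts.putIfAbsent(key, 1) == null) {
            break;
        }
    } else {
        // replace() only succeeds if the value is still 'current'.
        if (counts.replace(key, current, current + 1)) {
            break;
        }
    }
}

On Java 8 and later the same effect can be had in one call with counts.merge(key, 1, Integer::sum).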
Question is, can this design guarantee the thread safety?
It guarantees the thread safety of the map; i.e. that access and updates on the map have a well defined and orderly behaviour in the presence of multiple threads performing updates simultaneously.
It does not guarantee thread safety of the key or value objects, and it does not provide any form of higher-level synchronization.
Say we have 30+ threads accessing and changing an object mapped by the same key in a ConcurrentHashMap instance, my guess is, they still have to line up for that, don't they?
If you have multiple threads trying to use the same key, then their operations will inevitably be serialized to some degree. That is unavoidable.
In fact, from briefly looking at the source code, it looks like ConcurrentHashMap falls back to using conventional locks if there is too much contention for a particular segment of the map. And if you have multiple threads trying to access AND update the same key simultaneously, that will trigger locking.
First, remember that a thread-safe tool doesn't, in and of itself, guarantee thread-safe usage of it.
The check-then-act construct if (!map.containsKey(k)) map.put(k, v); for example is not thread-safe, unlike putIfAbsent (see the sketch below).
And each access to or modification of a value still has to be made thread-safe independently.
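A minimal sketch of that difference:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

// Not atomic: another thread can insert between the check and the put.
if (!map.containsKey("k")) {
    map.put("k", "v");
}

// Atomic: the map performs the check and the insert as a single operation.
map.putIfAbsent("k", "v");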
Reads are concurrent, even for the same key, so performance will be better for typical applications.
When using any of the java.util.concurrent classes, do I still need to synchronize access on the instance to avoid visibility issues between difference threads?
Elaborating the question a bit more
When using an instance of java.util.concurrent, is it possible that one thread modifies the instance (e.g., puts an element into a ConcurrentHashMap) and a subsequent thread won't see the modification?
My question arises from the fact that The Java Memory Model allows threads to cache values instead of fetching them directly from memory if the access to the value is not synchronized.
Regarding memory consistency, you can check the Memory Consistency Properties section of the Javadoc API for the java.util.concurrent package:
The methods of all classes in java.util.concurrent and its subpackages extend these guarantees to higher-level synchronization. In particular:
Actions in a thread prior to placing an object into any concurrent collection happen-before actions subsequent to the access or removal of that element from the collection in another thread.
[...]
Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock, Semaphore.acquire, Condition.await, and CountDownLatch.await on the same synchronizer object in another thread.
[...]
So the classes in this package take care of the concurrency for you, making use of a set of thread-control classes (Lock, Semaphore, etc.). These classes handle the happens-before logic internally, e.g. by queuing waiting threads and suspending/resuming them (typically via LockSupport.park/unpark rather than your own wait()/notify() calls).
So, theoretically, you don't need to synchronize the statements accessing these classes, because they control concurrent thread access for you.
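A minimal sketch of that visibility guarantee, following the happen-before rule quoted above: the writes made before the put are visible to the thread that later reads the entry, with no extra synchronization.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<String, int[]> shared = new ConcurrentHashMap<>();

// Thread A: writes the array, then publishes it via the map.
int[] data = new int[] { 42 }; // this write happens-before the put
shared.put("result", data);

// Thread B: retrieving the entry happens-after the put.
int[] seen = shared.get("result");
if (seen != null) {
    System.out.println(seen[0]); // guaranteed to observe 42, not a stale or default value
}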
Because the ConcurrentHashMap (for example) is designed to be used from a concurrent context, you don't need to synchronise it further. In fact, doing so could undermine the optimisations it introduces.
For example, Collections.synchronizedMap(...) is one way to make a map thread-safe; as I understand it, it works essentially by wrapping every call in the synchronized keyword. Something like ConcurrentHashMap, on the other hand, partitions the collection into independently locked "buckets", giving finer-grained concurrency control and therefore less lock contention under heavy usage, and it may not lock on reads at all. If you wrap this again with some synchronised access, you could undermine that. Obviously, with the wrapper approach you also have to be careful that all access to the collection is synchronised (iteration in particular), which is another advantage of the newer library: you don't have to worry (as much!).
The java.util.concurrent collections may implement their thread safety via synchronized, in which case the language specification guarantees visibility. They may also implement things without using locks. I'm not as clear on this, but I assume the same visibility guarantees would be in place there.
If you're seeing what look like lost updates in your code, it may be that it's just a race condition. Something like ConcurrentHashMap will give you the most recent value on a read, while a concurrent write may not have completed yet. It's often a trade-off between accuracy and performance.
The point is: the java.util.concurrent classes are meant to handle this, so I'd be confident that they ensure visibility, and the use of volatile and/or additional synchronisation shouldn't be needed.