ConcurrentHashMap provides thread-safe operations, but the docs state:
"However, even though all operations are thread-safe, retrieval operations do not entail locking"
So from this I understand that getting or setting a key and value are thread-safe, but modifying the actual VALUE of any given key isn't (by value I actually mean the value or state of that object).
I'm just confused about how this works; at the moment I think it works like this:
The ConcurrentHashMap only guarantees that the keys are thread-safe in terms of setting/getting them. But the object you put inside the map has to guard against concurrent access by itself.
Is this correct?
But the object you put inside the map has to guard against concurrent access by itself.
Your understanding is correct.
From the documentation:
However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access.
What the above is also saying is that there is no built-in mechanism for automatic locking of the hash map while the reading takes place. In particular, this means that get() operations can overlap with concurrent modifications performed by other threads.
The document goes on to explain the concurrency semantics:
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset. For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries. Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
What you say is true by default -- there would be no way for the map to enforce the thread safety of either its keys or its values, since these are objects that come from outside. What you read about the retrieval of objects, however, has nothing to do with that fact. The map doesn't block while retrieving a value, so another update may be happening at the same time (these operations can overlap).
The basic idea of the ConcurrentHashMap is that only modifications use a lock, while retrieval-only operations don't. This is possible because the entire data structure and the operations on it are defined in a way that allows get() to only ever see a "consistent enough" state of the map to do its work. If there's currently an insert operation in progress, then get() either sees the result or it doesn't, but it won't ever see a partial result or even temporarily invalid data.
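To illustrate the distinction, here is a minimal sketch (the Counter class and its fields are hypothetical, purely for illustration): the map guarantees safe insertion and retrieval of the entry, but the value object must protect its own mutable state.

import java.util.concurrent.ConcurrentHashMap;

// Hypothetical value type, used only to illustrate the point above.
class Counter {
    private int hits;                              // plain field: NOT protected by the map
    synchronized void increment() { hits++; }      // the value guards its own state
    synchronized int hits() { return hits; }
}

public class MapVsValueSafety {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Counter> map = new ConcurrentHashMap<>();

        // Thread-safe: the map guarantees safe insertion and retrieval of the entry.
        map.putIfAbsent("page", new Counter());

        // NOT covered by the map's guarantees: mutating the retrieved value.
        // This is only safe because Counter synchronizes its own methods.
        map.get("page").increment();
        System.out.println(map.get("page").hits());
    }
}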
Related
Task is to keep track of some running processes. Keeping that information in memory is just fine, so I'm using a concurrent hash map to store that data:
ConcurrentHashMap<String, ProcessMetaData> RUNNING_PROCESSES = new ConcurrentHashMap<>();
It's all good and fine with safely putting new objects in the map; the problem is that the state of those processes changes, so I have to update ProcessMetaData from time to time. I made ProcessMetaData immutable and use ConcurrentHashMap's compute() method to update values, but now ProcessMetaData is getting more complicated and keeping it immutable is becoming hard to manage. The question is: as long as I only update ProcessMetaData in the atomic (as per the javadoc) compute() method, may the object be mutable with everything still being thread-safe? Is my assumption correct?
As long as you only access the value within the function passed to compute, modifications made in that function are safe.
This, however, is a pointless theoretical view. The purpose of storing values into a collection or map, is to eventually retrieve and use them. And this is where the problems start.
The compute method returns the result value, just like get returns the currently stored value. Once a caller starts using that value, this use may be concurrent to subsequent compute operations on the map. The get method may even retrieve the value while a compute operation is in progress. Allowing non-blocking retrieval operations is one of ConcurrentHashMap's main features. Therefore, all kinds of race conditions may occur.
So, using a mutable object and modifying an already stored value in compute is only safe when you use the map as write-only memory, which is a far-fetched scenario. It might work when you use a different thread-safe mechanism to ensure that all updates have been completed before starting to read the map, but your use case seems to be different.
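For completeness, here is a minimal sketch of the safe pattern discussed above, assuming Java 16+ for records; the field names on ProcessMetaData are assumptions, not taken from the question:

import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical immutable value; the real ProcessMetaData will look different.
record ProcessMetaData(String state, Instant lastUpdate) { }

public class ProcessRegistry {
    static final ConcurrentHashMap<String, ProcessMetaData> RUNNING_PROCESSES =
            new ConcurrentHashMap<>();

    // Atomic read-old-value / derive-new-value / store-it update.
    // Because the value is immutable, a concurrent get() always observes either
    // the complete old object or the complete new one, never a half-updated one.
    static void markState(String processId, String newState) {
        RUNNING_PROCESSES.compute(processId,
                (id, old) -> new ProcessMetaData(newState, Instant.now()));
    }
}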
If I start a forEach action on a ConcurrentHashMap, and other threads are still performing puts on this map, will I see new updates into other bins?
The reason for this is I am trying to find the most effective way to broadcast the contents of a ConcurrentHashMap to listeners, without causing contention to writers of new data to the map. But I want all listeners to receive the same snapshot of the Map when I notify the listeners.
It is not a snapshot (and it is not fail-fast either), but updates are not reflected in the way you might think: once the traversal has already passed a bin, you will not see updates to that bin anymore.
If you know how the hashing (spread) works internally, you can even cause an OOM with this (this was an excellent comment from Holger on a question I had answered, but I can't seem to find it right now...):
// The lambda keeps inserting new keys derived from the current value; the traversal
// can keep seeing these new entries, so the map grows without bound and eventually
// runs out of memory.
ConcurrentHashMap<Integer, Integer> chm = new ConcurrentHashMap<>(500_000_000);
chm.put(1, 1);
chm.forEach((key, value) -> chm.put(++value ^ (value >>> 16), value));
The class-level API docs have this to say:
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset. (More formally, an update operation for a given key bears a happens-before relation with any (non-null) retrieval for that key reporting the updated value.) For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries. Similarly, Iterators, Spliterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
That does not explicitly address forEach(), but I would expect its behavior to be similar to that achievable via an Iterator over the map's entry set. That is, the forEach() iteration would reflect the map's contents as of some fixed point in time. Therefore, I do not think it at all safe to suppose that modifications made to the map by other threads will be seen by the forEach(). I would in fact expect modifications by other threads generally not to be reflected in the forEach()'s behavior, though there is room in the spec for it to see some modifications.
To provide a snapshot of a map, you need to copy the map at a given point. If the iterator were simply a snapshot, it would have to create a copy initially as well. As that costs extra memory and computation, it won't do that; and there are architectural reasons why that might not be desirable anyway.
Any non-null result returned from get(key) and related access methods bears a happens-before relation with the associated insertion or update. The result of any bulk operation reflects the composition of these per-element relations
What these lines are (not so clearly) stating is that any changes that completed before a get (whether via an iterator or a single call) are already included and reflected by that get operation. So a forEach bulk operation will work on the most recent state of the map at any given time.
You already gave the (only) solution to this problem in your question: create a local snapshot using the copy constructor of the map prior to distribution. It is an extra memory overhead, but that is the only way to get a snapshot.
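A minimal sketch of that snapshot approach (the Listener interface is hypothetical): writers keep working on the live map without blocking, while every listener receives the same frozen copy.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SnapshotBroadcast {
    static final ConcurrentHashMap<String, String> LIVE = new ConcurrentHashMap<>();

    interface Listener {
        void onSnapshot(Map<String, String> snapshot);
    }

    // Copy the map at one point in time, then hand the same copy to all listeners.
    static void broadcast(Iterable<Listener> listeners) {
        Map<String, String> snapshot = Map.copyOf(LIVE);   // or new HashMap<>(LIVE)
        for (Listener listener : listeners) {
            listener.onSnapshot(snapshot);
        }
    }
}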
I have one doubt. What will happen if I get from the map at the same time as I am putting some data into it?
What I mean is: what if map.get() and map.put() are called by two separate threads at the same time? Will get() wait until put() has been executed?
It depends on which Map implementation you are using.
For example, ConcurrentHashMap supports full concurrency, and get() will not wait for put() to be executed, as stated in the Javadoc:
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset.
Other implementations (such as HashMap) don't support concurrency and shouldn't be used by multiple threads at the same time.
It might throw a ConcurrentModificationException (I'm not sure about that). It is always better to use a synchronized map. This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:
Map map = Collections.synchronizedMap(new HashMap(...));
Map is an interface, so the answer depends on the implementation you're using.
Generally speaking, the simpler implementations of this interface, such as HashMap and TreeMap, are not thread-safe. If you don't have some synchronization built around them, concurrently putting and getting will result in undefined behavior: you may get the new value, you may get the old one, or you may get a ConcurrentModificationException, or something worse.
If you want to handle the same Map from different threads, either use one of the implementations of ConcurrentMap (e.g., a ConcurrentHashMap), which guarantees a happens-before relation (i.e., if the get was fired before the put completed, you'd get the old value, even if the put is ongoing, and vice versa), or synchronize the Map's access (e.g., by calling Collections#synchronizedMap(Map)).
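A small sketch of what that means in practice with ConcurrentHashMap (thread names and values are made up): the reader never blocks and never throws, it simply sees either the old or the new value.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GetWhilePut {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("answer", 41);

        Thread writer = new Thread(() -> map.put("answer", 42));
        Thread reader = new Thread(() -> {
            // Does not block and does not throw ConcurrentModificationException:
            // depending on timing it sees 41 or 42, but never a torn value.
            System.out.println("reader saw " + map.get("answer"));
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}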
Suppose there is a ConcurrentHashMap and there are two threads.
If both threads are reading some data from the same bucket, then my understanding says that both can read that bucket concurrently, as CHM does not block reading operations.
But suppose one thread is writing (put) to a bucket. Then, can a second thread simultaneously read (get) from the same bucket or will the second thread have to wait for the put operation to complete?
If it were a Hashtable, then get would have to wait until the put operation is complete. But how will it behave in the case of CHM?
There is no need for speculation. The source code for ConcurrentHashMap is open, and anyone can read it. (This is JDK 8 build 128, the first JDK 8 release candidate.)
You should have no trouble understanding it, as it's only 6,300 lines long. :-) Actually, a good fraction of this is comments, and most of the code goes toward handling edge cases. The straightforward paths of get() and put() aren't terribly complicated and are only a few dozen lines of code.
Your understanding of read operations (get(), contains()) is correct; there is no blocking. Hashing to a bucket and searching within the bucket, if necessary, is straightforward, with no locking. Memory visibility is ensured by volatile reads. (At lines 622-623, the val and next fields of Node are volatile.) Read operations proceed concurrently with other reads and also with writes to the same bucket.
The policy for removing and replacing values is fairly straightforward in that the head of the bucket is locked while the bucket is being searched and modified. See the synchronized block at line 1117 of replaceNode. A put that adds to an existing bucket is similar; see the synchronized block at line 1027 of putVal. These operations will of course block other threads attempting to remove, replace, or add entries to this same bucket. If a value is in the midst of being replaced, a thread that is getting the value for this key will see either the old value or the new value, depending on whether the reading thread finds the node before or after the value is replaced by the writing thread.
There is a special case for putting the first element into a bucket. At lines 1018-1020, if putVal finds a bucket empty, it will create a new Node and CAS (compare-and-swap) it into place. If this succeeds, the operation is complete. If two threads are attempting to add nodes into the same bucket more-or-less simultaneously, the CAS for the first will succeed, and the CAS for the second will fail. But note that this code is within a for-loop (line 1014). The thread whose CAS has failed simply goes around the loop and retries. In fact, all the other write operations are within a loop. The general approach is that operations proceed optimistically but are checked for concurrent writers. If the optimistic attempt fails, the operation is retried and goes through a (possibly) different path based on the now updated state.
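To make the retry idea concrete, here is a simplified sketch of the CAS-and-retry pattern described above. It is not the JDK code; it only mimics putVal's first-node insertion, using an AtomicReferenceArray in place of the real table:

import java.util.concurrent.atomic.AtomicReferenceArray;

// Simplified illustration only; the real ConcurrentHashMap is far more involved.
class TinyBucketTable<V> {
    private final AtomicReferenceArray<V> table = new AtomicReferenceArray<>(16);

    // Insert into an empty slot; retry if another thread wins the race.
    V putIfEmpty(int index, V value) {
        for (;;) {                            // retry loop, like putVal's for-loop
            V current = table.get(index);     // volatile read, no lock
            if (current != null) {
                return current;               // slot already taken; caller handles the collision
            }
            if (table.compareAndSet(index, null, value)) {
                return null;                  // CAS succeeded: we installed the first node
            }
            // CAS failed: another thread inserted concurrently; go around and re-check.
        }
    }
}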
As far as I know, ConcurrentHashMap allows multiple readers to read concurrently without any blocking. This is achieved by partitioning the map into different parts based on the concurrency level and locking only a portion of the map during updates. The default concurrency level is 16, so the map is divided into 16 parts and each part is guarded by a different lock. This means that 16 threads can operate on the map simultaneously, as long as they are operating on different parts of it. This gives ConcurrentHashMap high performance while keeping thread safety intact. It does come with a caveat, though: since update operations like put(), remove(), putAll() or clear() are not synchronized with retrievals, a concurrent retrieval may not reflect the most recent change to the map.
I hope this helps.
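If you want to tune that level, it can be passed as a constructor argument. A small sketch; note that since Java 8 this argument is treated only as a sizing hint, because locking is per-bin rather than per-segment:

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelHint {
    public static void main(String[] args) {
        // initialCapacity = 64, loadFactor = 0.75f, concurrencyLevel = 16
        // (the estimated number of concurrently updating threads).
        // In the segment-based implementation this fixed the number of segments;
        // since Java 8 it is only a sizing hint.
        ConcurrentHashMap<String, String> map =
                new ConcurrentHashMap<>(64, 0.75f, 16);
        map.put("key", "value");
        System.out.println(map);
    }
}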
This is from the JavaDocs of ConcurrentHashMap class:
"Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset"
In Hashtable, concurrent operations lock the whole collection, but in ConcurrentHashMap only one bucket is locked.
From the doc:
A hash table supporting full concurrency of retrievals and adjustable
expected concurrency for updates. This class obeys the same functional
specification as Hashtable, and includes versions of methods
corresponding to each method of Hashtable. However, even though all
operations are thread-safe, retrieval operations do not entail
locking, and there is not any support for locking the entire table in
a way that prevents all access. This class is fully interoperable with
Hashtable in programs that rely on its thread safety but not on its
synchronization details.
Retrieval operations (including get) generally do not block, so may
overlap with update operations (including put and remove). Retrievals
reflect the results of the most recently completed update operations
holding upon their onset. For aggregate operations such as putAll and
clear, concurrent retrievals may reflect insertion or removal of only
some entries. Similarly, Iterators and Enumerations return elements
reflecting the state of the hash table at some point at or since the
creation of the iterator/enumeration. They do not throw
ConcurrentModificationException. However, iterators are designed to be
used by only one thread at a time.
So, you shouldn't expect operations to synchronize exactly as in a Hashtable, but the same (series of) operations are thread-safe. The sentence about retrievals reflecting the most recently completed update operations does not imply, but in my opinion strongly suggests, what is going on here: a put in progress, i.e. not finished, will not block a get - the get will simply not see the changes yet.
Although I have not worked my way through the whole CHM class, this piece of documentation supports my hypothesis (taken from OpenJDK 6):
static final class Segment<K,V> extends ReentrantLock implements Serializable {
/*
* Segments maintain a table of entry lists that are always
* kept in a consistent state, so can be read (via volatile
* reads of segments and tables) without locking. This
* requires replicating nodes when necessary during table
* resizing, so the old lists can be traversed by readers
* still using old version of table.
When an update is "complete" doesn't seem to be explicitly defined; generally, as soon as the new node is linked into the bucket's entry list, I guess. CHM also makes heavy use of volatile fields to ensure that threads read the most recent entries in the list.
I have been reading about concurrency since yesterday and I don't know much yet... However, some things are starting to become clear...
I understand why double-checked locking isn't safe (I wonder what the probability is of the rare condition actually occurring), but volatile fixes the issue in 1.5+...
But I wonder if this also occurs with putIfAbsent,
like...
MyObject myObj = new MyObject("CodeMonkey");
cHashM.putIfAbsent("keyy", myObj);
So does this ensure that myObj will be 100% initialised when another thread does a cHashM.get()? Because it could be a reference that isn't completely initialised (the double-checked locking problem).
If you invoke concurrentHashMap.get(key) and it returns an object, that object is guaranteed to be fully initialized. Each put (or putIfAbsent) will obtain a bucket-specific lock and will append the element to the bucket's entries.
Now you may go through the code and notice that the get method doesn't obtain this same lock. So you might argue that there can be an out-of-date read; that isn't true either. The reason is that the value within the entry itself is volatile, so you will be sure to get the most up-to-date read.
The putIfAbsent method in ConcurrentHashMap is a check-if-absent-then-set method. It's an atomic operation. But to answer the part "does this ensure that myObj would be 100% initialised when another thread does a cHashM.get()": it depends on when the object is put into the map. There is a happens-before precedence, i.e., if the caller's get happens before the object is placed in the map, then null is returned; otherwise the value is returned.
The relevant part of the documentation is this:
Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a ConcurrentMap as a key or value happen-before actions subsequent to the access or removal of that object from the ConcurrentMap in another thread.
-- java.util.ConcurrentMap
So, yes you have your happens-before relationship.
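Putting that together with the putIfAbsent question above, a minimal sketch (MyObject is a hypothetical stand-in for the class in the question): the writer fully constructs the object before publishing it through the map, so a reader that obtains a non-null result is guaranteed to see it completely initialised.

import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the "myObject" class from the question.
class MyObject {
    final String name;
    MyObject(String name) { this.name = name; }
}

public class SafePublication {
    static final ConcurrentHashMap<String, MyObject> cHashM = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Writer: construct fully, then publish via the map.
        new Thread(() -> cHashM.putIfAbsent("keyy", new MyObject("CodeMonkey"))).start();

        // Reader: a non-null result happens-after the complete construction and
        // insertion, so the object is never seen half-initialised.
        new Thread(() -> {
            MyObject obj = cHashM.get("keyy");
            if (obj != null) {
                System.out.println(obj.name);   // prints "CodeMonkey"
            }
        }).start();
    }
}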
I'm not an expert on this, but looking at the implementation of Segment in ConcurrentHashMap I see that the volatile field count appears to be used to ensure proper visibility between threads. All read operations have to read the count field and all write operations have to write to it. From comments in the class:
Read operations can thus proceed without locking, but rely
on selected uses of volatiles to ensure that completed
write operations performed by other threads are
noticed. For most purposes, the "count" field, tracking the
number of elements, serves as that volatile variable
ensuring visibility. This is convenient because this field
needs to be read in many read operations anyway:
- All (unsynchronized) read operations must first read the
"count" field, and should not look at table entries if
it is 0.
- All (synchronized) write operations should write to
the "count" field after structurally changing any bin.
The operations must not take any action that could even
momentarily cause a concurrent read operation to see
inconsistent data. This is made easier by the nature of
the read operations in Map. For example, no operation
can reveal that the table has grown but the threshold
has not yet been updated, so there are no atomicity
requirements for this with respect to reads.
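The same "write the volatile last, read it first" idiom can be sketched in isolation. This is a simplified illustration of the idea in the comment above, not the actual ConcurrentHashMap code:

// Simplified illustration of using a volatile field as a visibility guard.
class VisibilityGuard {
    private final Object[] table = new Object[16];  // plain (non-volatile) data, no resizing
    private volatile int count;                     // the guard, playing the role of CHM's "count"

    // Writer: make the structural change first, then write the volatile field.
    synchronized void add(Object element) {
        table[count] = element;   // structural change
        count = count + 1;        // volatile write publishes the change
    }

    // Reader: read the volatile field first; everything written before the
    // matching volatile write is then guaranteed to be visible.
    Object last() {
        int n = count;            // volatile read
        return n == 0 ? null : table[n - 1];
    }
}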