I am getting an error in the for(Entry...) loop: after calling dfs(), it throws a ConcurrentModificationException. I don't know why it is happening, since visitedOrder is not related to the foreach loop. How can this be fixed?
public TreeMap<Integer, Integer> DFS()
{
    TreeMap<Integer, Integer> stack = new TreeMap<Integer, Integer>();
    TreeMap<Integer, Integer> visitedOrder = stack;
    for(int i = 1; i < graph[0].length-1; i++)
    {
        stack.put(i, 0);
    }
    for(Entry<Integer, Integer> vertex : stack.entrySet())
    {
        if(vertex.getValue() == 0)
            dfs(vertex.getKey(), visitedOrder);
    }
    System.out.println(visitedOrder.values());
    return visitedOrder;
}
public void dfs(int vertex, TreeMap<Integer, Integer> visited)
{
    visited.put(vertex, order++);
    int currVertex = vertex;
    for(int i = vertex; i < graph[0].length-1; i++)
    {
        if(graph[vertex][i+1] == 1)
        {
            dfs(++currVertex, visited);
            break;
        }
        currVertex++;
    }
}
Here is the Javadoc for "Class ConcurrentModificationException":
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/ConcurrentModificationException.html
This exception may be thrown by methods that have detected concurrent
modification of an object when such modification is not permissible.
For example, it is not generally permissible for one thread to modify
a Collection while another thread is iterating over it. In general,
the results of the iteration are undefined under these circumstances.
Some Iterator implementations (including those of all the general
purpose collection implementations provided by the JRE) may choose to
throw this exception if this behavior is detected. Iterators that do
this are known as fail-fast iterators, as they fail quickly and
cleanly, rather than risking arbitrary, non-deterministic behavior at
an undetermined time in the future.
Note that this exception does not always indicate that an object has
been concurrently modified by a different thread. If a single thread
issues a sequence of method invocations that violates the contract of
an object, the object may throw this exception. For example, if a
thread modifies a collection directly while it is iterating over the
collection with a fail-fast iterator, the iterator will throw this
exception.
As it happens, that's precisely what you're doing: modifying the very structure you're using in your "foreach" loop.
WORKAROUND:
If you believe your design is correct, then substitute a simple for loop: for (int i=0; i < myContainer.size(); i++) ...
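Applied to the DFS() method above, one hedged sketch of that workaround (assuming the keys really are the contiguous range 1 .. graph[0].length-2 inserted by the first loop) might look like this:

    // Sketch only: iterate by key range instead of over the live entry set,
    // so dfs() can keep put()-ing into the map without invalidating an iterator.
    for (int i = 1; i < graph[0].length - 1; i++) {
        if (visitedOrder.get(i) == 0) {
            dfs(i, visitedOrder);
        }
    }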
I don't know why it is happening even though visitedOrder is not
related with the foreach loop.
You are trying to modify the TreeMap while you are iterating over it.
In the lines below you are only copying the reference, so it is the same TreeMap under a different name:
TreeMap<Integer, Integer> stack = new TreeMap<Integer, Integer>();
TreeMap<Integer, Integer> visitedOrder = stack;
There is just one TreeMap instance, created when you do new TreeMap<Integer, Integer>(). The stack variable refers to this instance; the visitedOrder variable also refers to the same instance. And when you call dfs(int vertex, TreeMap<Integer, Integer> visited), the visited parameter also refers to the same TreeMap instance.
Now you're iterating over the entry set of this TreeMap instance in the for(Entry<Integer,... loop. While iterating, you call the dfs(int, TreeMap<Integer, Integer>) method, and within this method you invoke a put on the TreeMap instance, which modifies it; hence the ConcurrentModificationException.
From the code you've provided, my understanding is that you are trying to convert a graph array to a TreeMap by doing a DFS. You are iterating over the TreeMap referenced by stack and trying to populate visitedOrder. To resolve the exception you are getting, just point the visitedOrder variable to a new TreeMap<Integer, Integer>() instance.
Note that the fix I've suggested is aimed at fixing the exception while keeping your code flow and logic unchanged, as I only have a limited picture of your solution.
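For example, a minimal sketch of that change (note that the vertex.getValue() == 0 check in the loop would then need to consult visitedOrder rather than stack):

    // Two separate instances: iterate over 'stack', record the visit order in 'visitedOrder'.
    TreeMap<Integer, Integer> stack = new TreeMap<Integer, Integer>();
    TreeMap<Integer, Integer> visitedOrder = new TreeMap<Integer, Integer>();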
Related
I am maintaining some legacy code and found an implementation that uses the synchronized keyword on a ConcurrentHashMap. It seems unnecessary to me:
public class MyClass{
    private final Map<MyObj, Map<String, List<String>>> conMap = new ConcurrentHashMap<>();
    //...
    //adding new record into conMap:
    private void addToMap(MyObj id, String name, String value){
        conMap.putIfAbsent(id, new ConcurrentHashMap<>());
        Map<String, List<String>> subMap = conMap.get(id);
        synchronized(subMap){ // <-- is it necessary?
            subMap.putIfAbsent(name, new ArrayList<>());
            subMap.get(name).add(value);
        }
    }
    //...
    public void doSomthing(MyObj id){
        List<Map<String, List<String>>> mapsList = new LinkedList<>();
        for(MyObj objId: conMap.keySet()){
            if(objId.key1.equals(id.key1)){
                mapsList.add(conMap.get(objId));
            }
        }
        for(Map<String, List<String>> map: mapsList){
            synchronized(map){ // <-- is it necessary?
                if(timeout <= 0){
                    log(map.size());
                    for(List<String> value: map.values()){
                        log(id, value);
                    }
                }
                else{
                    int sum = 0;
                    for(Map.Entry<String, List<String>> val: map.entrySet()){
                        sum += val.getValue().size();
                    }
                    log(sum);
                    map.wait(timeout);
                }
            }
        }
        //...
    }
}
So, is it reasonable to use the synchronized keyword on an object that is already concurrent? Or are those two different things?
In this case:
synchronized(subMap){ // <-- is it necessary?
    subMap.putIfAbsent(name, new ArrayList<>());
    subMap.get(name).add(value);
}
the synchronized is necessary. Without it, you could have two threads simultaneously updating the same ArrayList instance. Since ArrayList is not thread-safe, the addToMap method would not be thread-safe either.
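To illustrate the race being guarded against, here is a small standalone demo of my own (not from the original code): two threads appending to one unsynchronized ArrayList typically lose elements, and can even throw.

    import java.util.ArrayList;
    import java.util.List;

    // Demo of the hazard described above: ArrayList.add() is not atomic,
    // so concurrent appends can corrupt the size and/or drop elements.
    public class ArrayListRaceDemo {
        public static void main(String[] args) throws InterruptedException {
            List<Integer> list = new ArrayList<>();
            Runnable appender = () -> {
                for (int i = 0; i < 10_000; i++) {
                    list.add(i);
                }
            };
            Thread t1 = new Thread(appender);
            Thread t2 = new Thread(appender);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Would be 20000 if add() were atomic; usually prints less,
            // and occasionally an ArrayIndexOutOfBoundsException is thrown instead.
            System.out.println(list.size());
        }
    }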
In this case:
synchronized(map){ // <-- is it necessary?
    if(/*condition*/){
        log(map.size());
        for(List<String> value: map.values()){
            log(id, value);
        }
    }
    else{
        int sum = 0;
        for(Map.Entry<String, List<String>> val: map.entrySet()){
            sum += val.getValue().size();
        }
        log(sum);
        map.wait(timeout);
    }
}
the synchronized is necessary.
In the if branch, the log method (or something it calls) will probably call ArrayList::toString, which iterates each ArrayList. Without synchronizing at the submap level, there could be a simultaneous add by another thread (e.g. an addToMap call). That means there are memory hazards, and a ConcurrentModificationException is possible in the toString() method.
In the else branch, the size() call is accessing a size field in each ArrayList in the submap. Without synchronizing at the submap level, there could be a simultaneous add on one of those lists. That could cause size() to return a stale value. In addition, you are not guaranteed to see map entries added to a submap while you are iterating it. If either of those happens, the sum could be inaccurate. (Whether that is really an issue depends on the requirements for this method: inaccurate counts could be acceptable.)
ConcurrentHashMap synchronizes each individual method call itself, so that no other thread can access the map (and possibly break the internal data structure of the map).
Synchronized block synchronizes two or more consecutive method calls, so that no other thread can modify the data structure between the calls (and possibly break the consistency of the data, with regards to the application logic).
Note that the synchronized block only works if all access to the map is performed from synchronized blocks using the same monitor object.
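As a sketch of that distinction, modelled on the question's addToMap (the class and method names here are my own): each call below is individually thread-safe on a ConcurrentHashMap, but the pair forms a check-then-act sequence that only stays atomic under a common monitor.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CheckThenActDemo {
        private final Map<String, List<String>> subMap = new ConcurrentHashMap<>();

        public void add(String name, String value) {
            // Each call is individually thread-safe, but only the shared monitor
            // makes the putIfAbsent + get + add sequence atomic as a whole.
            synchronized (subMap) {
                subMap.putIfAbsent(name, new ArrayList<>());
                subMap.get(name).add(value);
            }
        }
    }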
It is sort of necessary, as multiple threads may try to append to the same ArrayList at the same time. The synchronized block protects against that, since ArrayList is obviously not synchronized.
Since Java 8 we have computeIfAbsent, which means the put-followed-by-get calls they are doing can be simplified. I would write it like this, with no synchronization required:
conMap.computeIfAbsent(id, k -> new ConcurrentHashMap<>())
.computeIfAbsent(name, k -> new CopyOnWriteArrayList<>()) // or other thread-safe list
.add(value);
Other answers don't adequately address this bit...
for(Map<String, List<String>> map: mapsList){
    synchronized(map){ // <-- is it necessary?
        if(/*condition*/){
            ...iterate over map...
        }
        else {
            ...iterate over map...
        }
    }
}
Is it necessary? Hard to tell.
What is /*condition*/ ? Does synchronizing on map prevent some other thread A from changing the value of /*condition*/ after thread B has tested it, but before or while thread B is performing either of the two branches? If so, then the synchronized block could be very important.
How about those iterations? Does synchronizing on map prevent some other thread A from changing the contents of the map while thread B is iterating? If so, then the synchronized block could be very important.
Code:
I have a HashMap
private Map<K, V> map = new HashMap<>();
One method will put K-V pair into it by calling put(K,V).
The other method wants to extract a set of random elements from its values:
int size = map.size(); // size > 0
V[] value_array = map.values().toArray(new V[size]);
Random rand = new Random();
int start = rand.nextInt(size); int end = rand.nextInt(size);
// return value_array[start .. end - 1]
The two methods are called in two different concurrent threads.
Error:
I got a ConcurrentModificationException error:
at java.util.HashMap$HashIterator.nextEntry(Unknown Source)
at java.util.HashMap$ValueIterator.next(Unknown Source)
at java.util.AbstractCollection.toArray(Unknown Source)
It seems that the toArray() call in one thread is iterating over the HashMap while a put() modification occurs in the other thread.
Question: How to avoid "ConcurrentModificationException" while using HashMap.values().toArray() and HashMap.put() in concurrent threads?
Avoiding values().toArray() in the second method altogether is also OK.
You need to provide some level of synchronization so that the call to put is blocked while the toArray call is executing and vice versa. There are three simple approaches:
Wrap your calls to put and toArray in synchronized blocks that synchronize on the same lock object (which might be the map itself or some other object); a sketch of this option follows after the list.
Turn your map into a synchronized map using Collections.synchronizedMap()
private Map<K, V> map = Collections.synchronizedMap(new HashMap<>());
Use a ConcurrentHashMap instead of a HashMap.
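A minimal sketch of the first option, using a hypothetical holder class of my own (the names and structure are assumptions, not from the question):

    import java.util.HashMap;
    import java.util.Map;

    // Both operations synchronize on the same lock, so put() can never run
    // while the values snapshot is being taken, and vice versa.
    class SynchronizedSnapshotMap<K, V> {
        private final Object lock = new Object();
        private final Map<K, V> map = new HashMap<>();

        void put(K key, V value) {
            synchronized (lock) {
                map.put(key, value);
            }
        }

        Object[] valuesSnapshot() {
            synchronized (lock) {
                return map.values().toArray();
            }
        }
    }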
EDIT: The problem with using Collections.synchronizedMap is that once the call to values() returns, the concurrency protection will disappear. At that point, calls to put() and toArray() might execute concurrently. A ConcurrentHashMap has a somewhat similar problem, but it can still be used. From the docs for ConcurrentHashMap.values():
The view's iterator is a "weakly consistent" iterator that will never throw ConcurrentModificationException, and guarantees to traverse elements as they existed upon construction of the iterator, and may (but is not guaranteed to) reflect any modifications subsequent to construction.
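A sketch of the third option (assuming an Object[] snapshot is acceptable to the caller; the class name is my own):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // values() on a ConcurrentHashMap is weakly consistent: toArray() never throws
    // ConcurrentModificationException even if another thread calls put() concurrently;
    // the snapshot may or may not include those late insertions.
    class ConcurrentSnapshotMap<K, V> {
        private final Map<K, V> map = new ConcurrentHashMap<>();

        void put(K key, V value) {
            map.put(key, value);          // no external locking needed
        }

        Object[] valuesSnapshot() {
            return map.values().toArray();
        }
    }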
I would use a ConcurrentHashMap instead of a HashMap and protect it from concurrent reading and modification by different threads. See the implementation below. It is not possible for thread 1 and thread 2 to read and write at the same time: while thread 1 is extracting the values from the map into an array, any other thread that invokes storeInMap(K, V) will block on the map's monitor until the first thread is done with it.
Note: I do not use synchronized methods in this context; I do not completely rule them out, but I would use them with caution. A synchronized method is just syntactic sugar for acquiring the lock on 'this' and holding it for the duration of the method, so it can hurt throughput.
private final Map<K, V> map = new ConcurrentHashMap<K, V>();

// thread 1
@SuppressWarnings("unchecked")
public V[] pickRandom() {
    V[] value_array;
    int size;
    synchronized (map) {
        size = map.size(); // size > 0
        value_array = map.values().toArray((V[]) new Object[size]);
    }
    Random rand = new Random();
    int start = rand.nextInt(size);
    int end = rand.nextInt(size);
    // return value_array[start .. end - 1]
    return Arrays.copyOfRange(value_array, Math.min(start, end), Math.max(start, end));
}

// thread 2
public void storeInMap(K key, V value) {
    synchronized (map) {
        map.put(key, value);
    }
}
I'm not sure what is triggering the java.util.ConcurrentModificationException when I iterate over the LinkedHashMap structure in the code below. Using the Map.Entry approach works fine. I did not find a good explanation of what triggers this in previous posts.
Any help would be appreciated.
import java.util.LinkedHashMap;
import java.util.Map;

public class LRU {

    // private Map<String,Integer> m = new HashMap<String,Integer>();
    // private SortedMap<String,Integer> lru_cache = Collections.synchronizedSortedMap(new TreeMap<String, Integer>());

    private static final int MAX_SIZE = 3;

    private LinkedHashMap<String,Integer> lru_cache = new LinkedHashMap<String,Integer>(MAX_SIZE, 0.1F, true){
        @Override
        protected boolean removeEldestEntry(Map.Entry eldest) {
            return(lru_cache.size() > MAX_SIZE);
        }
    };

    public Integer get1(String s){
        return lru_cache.get(s);
    }

    public void displayMap(){
        /**
         * Exception in thread "main" java.util.ConcurrentModificationException
         *     at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:373)
         *     at java.util.LinkedHashMap$KeyIterator.next(LinkedHashMap.java:384)
         *     at LRU.displayMap(LRU.java:23)
         *     at LRU.main(LRU.java:47)
         */
        for(String key : lru_cache.keySet()){
            System.out.println(lru_cache.get(key));
        }

        // This parser works fine
        // for(Map.Entry<String, Integer> kv : lru_cache.entrySet()){
        //     System.out.println(kv.getKey() + ":" + kv.getValue());
        // }
    }

    public void set(String s, Integer val){
        if(lru_cache.containsKey(s)){
            lru_cache.put(s, get1(s) + val);
        }
        else{
            lru_cache.put(s, val);
        }
    }

    public static void main(String[] args) {
        LRU lru = new LRU();
        lru.set("Di", 1);
        lru.set("Da", 1);
        lru.set("Daa", 1);
        lru.set("Di", 1);
        lru.set("Di", 1);
        lru.set("Daa", 2);
        lru.set("Doo", 2);
        lru.set("Doo", 1);
        lru.set("Sa", 2);
        lru.set("Na", 1);
        lru.set("Di", 1);
        lru.set("Daa", 1);
        lru.displayMap();
    }
}
Read the Javadoc for LinkedHashMap:
A structural modification is any operation that adds or deletes one or
more mappings or, in the case of access-ordered linked hash maps,
affects iteration order. In insertion-ordered linked hash maps, merely
changing the value associated with a key that is already contained in
the map is not a structural modification. In access-ordered linked
hash maps, merely querying the map with get is a structural
modification.
Since you're passing in true to the LinkedHashMap constructor, it is in access order and when you are trying to get something from it, you are structurally modifying it.
Also note that when you use the enhanced for syntax, you are actually using an iterator. Simplified quote from JLS §14.14.2:
The enhanced for statement has the form:
EnhancedForStatement:
for ( TargetType Identifier : Expression ) Statement
[...]
If the type of Expression is a subtype of Iterable<X> for some type
argument X, then let I be the type java.util.Iterator<X>; otherwise,
let I be the raw type java.util.Iterator.
The enhanced for statement is equivalent to a basic for statement of
the form:
for (I #i = Expression.iterator(); #i.hasNext(); ) {
TargetType Identifier =
(TargetType) #i.next();
Statement
}
#i is an automatically generated identifier that is distinct from any other identifiers (automatically generated or otherwise) that are in
scope (§6.3) at the point where the enhanced for statement occurs.
Also, in the Javadoc for LinkedHashMap:
The iterators returned by the iterator method of the collections
returned by all of this class's collection view methods are
fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own
remove method, the iterator will throw a
ConcurrentModificationException.
Therefore, when you are calling get on the map, you are performing structural modifications to it, causing the iterator in the enhanced-for to throw an exception. I think you meant to do this, which avoids calling get:
for (Integer i : lru_cache.values()) {
System.out.println(i);
}
You're using an access-ordered linked hash map: from the spec at http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html,
A structural modification is any operation that adds or deletes one or
more mappings or, in the case of access-ordered linked hash maps,
affects iteration order. In insertion-ordered linked hash maps, merely
changing the value associated with a key that is already contained in
the map is not a structural modification. In access-ordered linked
hash maps, merely querying the map with get is a structural
modification.
Simply calling get is enough to be considered a structural modification, triggering the exception. If you use the entrySet() sequence you're only querying the entry and NOT the map, so you don't trigger the ConcurrentModificationException.
In the LinkedHashMap constructor you pass true to get the LRU behaviour (true means access order; false would mean insertion order).
So every time you call get(key), the map moves the just-accessed entry to the end of its internal linked list (the iteration order) and increments its modification count.
The iterator (implicitly created by the for loop) checks that modification count against the copy it took when it was created; since they no longer match, it throws the ConcurrentModificationException.
To avoid this, iterate over entrySet() and read the values from the entries, so you never call get() (and therefore never structurally modify the map) while iterating:
for(Map.Entry<String,Integer> e : lru_cache.entrySet()){
System.out.println(e.getValue());
}
Be aware that this class isn't thread-safe, so in concurrent environments you will need a potentially expensive guard like Collections.synchronizedMap(Map). In that scenario a better option might be Google's Guava Cache.
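For reference, a hedged sketch of what that could look like with Guava's cache (assuming Guava is on the classpath; the names mirror the question's cache but are otherwise my own):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class GuavaLruSketch {
        public static void main(String[] args) {
            // A bounded, thread-safe cache that evicts roughly least-recently-used
            // entries, replacing the hand-rolled LinkedHashMap above.
            Cache<String, Integer> lruCache = CacheBuilder.newBuilder()
                    .maximumSize(3)
                    .build();

            lruCache.put("Di", 1);
            Integer hit = lruCache.getIfPresent("Di");   // null if absent, no exception
            System.out.println(hit + " " + lruCache.asMap().values());
        }
    }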
Your code
for(String key : lru_cache.keySet()){
System.out.println(lru_cache.get(key));
}
Actually compiles to:
Iterator<String> it = lru_cache.keySet().iterator();
while (it.hasNext()) {
    String key = it.next();
    System.out.println(lru_cache.get(key));
}
Next, because you constructed the cache with accessOrder = true, calling get() reorders it, and that reordering counts as a structural modification - the answers above explain why. (The shrinking to MAX_SIZE itself happens during put(), via removeEldestEntry.)
Thus we have the following behavior:
a new iterator is created to iterate over the lru_cache.keySet() collection
lru_cache.get() is called to extract an element from your cache
that get() invocation reorders the access-ordered lru_cache, which is a structural modification
the iterator it becomes invalid due to that modification and throws on the next iteration (a snapshot-based alternative is sketched below).
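One hedged way to keep the println-via-get() style while avoiding the problem is to iterate over a snapshot of the keys (using java.util.ArrayList), so the reordering caused by get() cannot invalidate the iterator driving the loop:

    // Iterate a copy of the key set; get() may still reorder lru_cache,
    // but the copy's iterator is not backed by the map, so no exception.
    for (String key : new ArrayList<>(lru_cache.keySet())) {
        System.out.println(lru_cache.get(key));
    }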
java.util.ConcurrentModificationException: thrown if there are any structural changes (additions, removals, rehashing, etc.) to the underlying collection while an iterator over it exists. The iterator checks whether the collection has changed before each operation. This is known as fail-fast behaviour.
If a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception. Here you cannot call get() while using an iterator, because calling get() structurally modifies the map, and hence the next call to one of the iterator's methods fails and throws a ConcurrentModificationException.
It is because of the fail-fast behaviour of the collections framework; you also get this error when you modify a list (by adding or removing elements) while traversing it with an Iterator. I came across this error some time back. Refer to the threads below for more detail:
ConcurrentModificationException when adding inside a foreach loop in ArrayList
Though this says ArrayList, it applies to most of the collection data structures.
Concurrent Modification Exception : adding to an ArrayList
http://docs.oracle.com/javase/6/docs/api/java/util/ConcurrentModificationException.html
I stumbled over this odd bug. It seems Collections.sort() does not modify the sorted list in a way that enables detection of concurrent modification while also iterating over the same list. Example code:
List<Integer> my_list = new ArrayList<Integer>();
my_list.add(2);
my_list.add(1);

for (Integer num : my_list) {
    /*
     * print list
     */
    StringBuilder sb = new StringBuilder();
    for (Integer i : my_list)
        sb.append(i).append(",");
    System.out.println("List: " + sb.toString());

    /*
     * sort list
     */
    System.out.println("CurrentElement: " + num);
    Collections.sort(my_list);
}
outputs
List: 2,1,
CurrentElement: 2
List: 1,2,
CurrentElement: 2
One would expect a ConcurrentModificationException, but it is not being raised and the code works although it shouldn't.
Why would it throw ConcurrentModificationException when you are not adding/removing elements from your collection while iterating?
Note that ConcurrentModificationException only occurs when a new element is added to your collection or removed from it while iterating, i.e., when your Collection is structurally modified.
(Structural modifications are those that change the size of this list,
or otherwise perturb it in such a fashion that iterations in progress
may yield incorrect results.)
sort doesn't structurally modify your Collection; all it does is change the order of the elements.
The code below would throw ConcurrentModificationException, as it adds an extra element to the collection while iterating:
for(Integer num : my_list) {
my_list.add(12);
}
If you look at the source of the sort method in the Collections class, it does not throw ConcurrentModificationException.
This implementation dumps the specified list into an array, sorts the
array, and iterates over the list resetting each element from the
corresponding position in the array. This avoids the n² log(n)
performance that would result from attempting to sort a linked list in
place.
public static <T extends Comparable<? super T>> void sort(List<T> list) {
    Object[] a = list.toArray();
    Arrays.sort(a);
    ListIterator<T> i = list.listIterator();
    for (int j=0; j<a.length; j++) {
        i.next();
        i.set((T)a[j]);
    }
}
Extract from the book java Generics and Collections:
The policy of the iterators for the Java 2 collections is to fail
fast, as described in Section 11.1: every time they access the backing
collection, they check it for structural modification (which, in
general, means that elements have been added or removed from the
collection). If they detect structural modification, they fail
immediately, throwing ConcurrentModificationException rather than
continuing to attempt to iterate over the modified collection with
unpredictable results.
Speaking of functionality, I don't see why it should not throw ConcurrentModificationException. But according to the documentation, the iterator throws the exception when it notices a structural modification, and structural modification is defined as:
Structural modifications are those that change the size of the list,
or otherwise perturb it in such a fashion that iterations in progress
may yield incorrect results.
I think there is an argument for claiming that sort rearranging the elements causes an iteration in progress to yield wrong results, but I haven't checked how the "right" results of an iterator are defined.
Speaking of implementation, it is easy to see why it does not: see the source for ArrayList and Collections:
ArrayList.modCount changes with the so-called structural modifications.
ListItr makes a copy of modCount when it is created and checks that it hasn't changed in its methods.
Collections.sort calls ListItr.set, which calls ArrayList.set. This last method does not increment modCount.
So ListItr.next() sees the same modCount and no exception is thrown.
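To make that last point concrete, here is a small standalone illustration of my own (not from the original post): replacing elements through ListIterator.set() while another iteration is in progress on the same list does not trip the fail-fast check, because set() does not bump modCount.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.ListIterator;

    public class SetIsNotStructural {
        public static void main(String[] args) {
            List<Integer> list = new ArrayList<>(Arrays.asList(2, 1, 3));
            for (Integer num : list) {                  // enhanced-for iterator in progress
                ListIterator<Integer> it = list.listIterator();
                while (it.hasNext()) {
                    Integer v = it.next();
                    it.set(v);                          // in-place replace: no exception
                }
                System.out.println("visited " + num);
            }
        }
    }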
For Android, it depends on the API version. From API 26, Collections#sort(List<T>, Comparator<? super T>) actually calls List#sort(Comparator<? super E>). So, if you sort an ArrayList, you can get a ConcurrentModificationException if the list is modified (e.g., by another thread) while the sort is running. Here's the source code from java/util/ArrayList.java that throws the exception:
public void sort(Comparator<? super E> c) {
    final int expectedModCount = modCount;
    Arrays.sort((E[]) elementData, 0, size, c);
    if (modCount != expectedModCount) {
        throw new ConcurrentModificationException();
    }
    modCount++;
}
Code:
public class MyClass {
    private Map<Integer,String> myMap = new HashMap<Integer, String>();
    ...........................
    void methodFillMap(){
        myMap.put(.....);
        .....................
    }
}
What is correct:
void methodFillMap(){
    myMap.clear();
    myMap.put(.....);
    .....................
}
or
void methodFillMap(){
    myMap=null;
    myMap.put(.....);
    .....................
}
or better
void methodFillMap(){
    myMap=new HashMap<Integer, String>();
    myMap.put(.....);
    .....................
}
No, they are not the same.
map = null assigns null to the Map reference.
map.clear() clears the contents of the map, but the object still exists and map still references it.
void methodFillMap(){
    myMap=null;
    myMap.put(.....);
will simply throw a NullPointerException.
To clear a map you should use myMap.clear().
By the way, there are two differences between re-instantiating the map and using clear():
- clear() won't resize the map. If the HashMap contained n buckets, after a clear() it will still contain n empty buckets, with performance consequences (positive or negative depending on how you use the map).
- if you use clear() you are not throwing away the object, so there is nothing extra for the next GC cycle to collect, which has a positive impact on GC time if this happens a lot.
The last one is the best, unless you are coding for a system with very limited memory, in which case the first one is best.
In the first case you have to clear the hash table, which takes some computation.
The second won't even work, since you just have a null reference and not a HashMap.
In the third case you just throw away the old HashMap and let the garbage collector handle it.
After setting the map to null, putting anything inside of it will result in a NullPointerException.
They are not the same, because map = null does not nullify the map entries; it only nullifies the reference to the map.
See the clear implementation from JDK 7 below:
public void clear() {
    modCount++;
    Entry[] tab = table;
    for (int i = 0; i < tab.length; i++)
        tab[i] = null;
    size = 0;
}
I would use map.clear().