I keep getting a ConcurrentModificationException in my code. I'm simply iterating through a HashMap and modifying values. From researching this I found people saying to use iterators and iterator.remove(), etc. I tried implementing that and still kept getting the error. I thought maybe multiple threads were accessing it (although in my code this block is only run in one thread), so I put it in a synchronized block. However, I'm still getting the error.
Map map = Collections.synchronizedMap(questionNumberAnswerCache);
synchronized (map) {
    for (Iterator<Map.Entry<String, Integer>> it = questionNumberAnswerCache.entrySet().iterator(); it.hasNext(); ) {
        Map.Entry<String, Integer> entry = it.next();
        if (entry.getKey() == null || entry.getValue() == null) {
            continue;
        } else {
            try {
                Question me = Question.getQuery().get(entry.getKey());
                int i = Activity.getQuery()
                        .whereGreaterThan(Constants.kQollegeActivityCreatedAtKey, lastUpdated.get("AnswerNumberCache " + entry.getKey()))
                        .whereEqualTo(Constants.kQollegeActivityTypeKey, Constants.kQollegeActivityTypeAnswer)
                        .whereEqualTo(Constants.kQollegeActivityQuestionKey, me)
                        .find().size();
                lastUpdated.put("AnswerNumberCache " + entry.getKey(), Calendar.getInstance().getTime());
                int old_num = entry.getValue();
                entry.setValue(i + old_num);
            } catch (ParseException e) {
                entry.setValue(0);
            }
        }
    }
}
Error:
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:787)
at java.util.HashMap$EntryIterator.next(HashMap.java:824)
at java.util.HashMap$EntryIterator.next(HashMap.java:822)
at com.juryroom.qollege_android_v1.QollegeCache.refreshQuestionAnswerNumberCache(QollegeCache.java:379)
at com.juryroom.qollege_android_v1.QollegeCache.refreshQuestionCaches(QollegeCache.java:267)
at com.juryroom.qollege_android_v1.UpdateCacheService.onHandleIntent(UpdateCacheService.java:28)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:65)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.os.HandlerThread.run(HandlerThread.java:61)
What is happening:
The iterator is looping through the map. A map isn't really like a list, because it doesn't care about order: when you add something to the map it might land among the entries you already looped over, among the ones still to come, at the end, and so on. So instead of giving you random behavior, the iterator fails fast with this exception.
Your solutions:
A synchronized map and synchronized blocks prevent two threads from touching the map at the same time. That doesn't really help here, since the problem is that the same thread is modifying it in an illegal manner while iterating it.
What you should do:
You could first collect the keys you want to modify. Building a separate map of keys and new values won't be a problem unless this is a really time-critical piece of code.
Then you just iterate through the newValues map and update the oldValues map. Since you are no longer iterating through the map being updated, there is no problem.
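A minimal sketch of that approach, assuming the maps are Map<String, Integer> and using a hypothetical computeNewValue helper (neither name is from the original code):
// Hedged sketch: collect the new values first, then apply them after the loop.
Map<String, Integer> newValues = new HashMap<>();
for (Map.Entry<String, Integer> entry : oldValues.entrySet()) {
    newValues.put(entry.getKey(), computeNewValue(entry.getKey(), entry.getValue()));
}
// The iteration over oldValues is finished, so updating it now is safe.
oldValues.putAll(newValues);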
Or you could simply iterate over just the keys (for (String s : yourMap.keySet())) and then look up the values you want to change. Since you are only iterating over the keys, you are free to change the values (but you can't remove entries).
You could also try a ConcurrentHashMap, which allows modification during iteration; its iterators are weakly consistent rather than fail-fast. Just changing values shouldn't lead to problems, but if you add or remove entries you never know whether the in-progress iteration will see them or not.
Creating a new object and locking on it, while the map is still accessed through other references, is a good way to shoot yourself in the foot.
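In the original code above, for example, the synchronized block locks the wrapper returned by Collections.synchronizedMap while the loop iterates questionNumberAnswerCache directly, so the lock protects nothing. A minimal sketch of the intended pattern, assuming every other piece of code also goes through the wrapper:
// Share only the synchronized wrapper; never touch the backing map directly.
Map<String, Integer> cache = Collections.synchronizedMap(questionNumberAnswerCache);

synchronized (cache) { // must hold the wrapper's own lock while iterating any of its views
    for (Map.Entry<String, Integer> entry : cache.entrySet()) {
        entry.setValue(entry.getValue() + 1); // value changes only; no structural modification
    }
}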
I recommend the following code for removing entries from the HashMap.
HashMap<Key, Object> hashMap = new HashMap<>();
LinkedList<Key> listToRemove = new LinkedList<>();

for (Map.Entry<Key, Object> s : hashMap.entrySet()) {
    if (s.getValue().equals("ToDelete")) {
        listToRemove.add(s.getKey());
    }
}

for (Key s : listToRemove) {
    hashMap.remove(s);
}
It's neither the most beautiful nor the fastest option, but it should help you understand how to work with a HashMap.
Once you understand how this version works, you can learn how iterators work and how to use them in a loop (rather than simply copy-pasting):
Iterator<Key> it = tokenMap.keySet().iterator();
while (it.hasNext()) {
    Key key = it.next();
    if (/* some condition */) {
        it.remove();
    }
}
I would suggest the following for your use case:
for (Key key : hashMap.keySet()) {
    Object value = hashMap.get(key);
    if (<condition>) {
        hashMap.put(key, <new value>);
    }
}
If you are not deleting any entries and just changing the value, this should work for you.
Related
I understand that I get a ConcurrentModificationException when I try to modify (add to, in this case) the list while iterating, but what is the best solution to fix that?
for (Map.Entry<String, Child> entry : children.entrySet()) {
    childEvent.child = entry.getValue();
    if (childEvent.getDate() != null && childEvent.getDate().equals(selectedDate)) {
        if (this.selectedDayevents.isEmpty()) {
            // List
            this.selectedDayevents.add(childEvent);
        }
        for (CareDay selectedCareDay : this.selectedDayevents) {
            // Here I have to combine data in some cases...
        }
    }
}
One simple way around this problem is to iterate over a copy of the entry set:
for (Map.Entry<String, Child> entry : new HashSet<>(children.entrySet())) {
    // same code
}
If your map is not too big and you’re not doing it very often, you won’t notice a difference in performance.
If your requirement is to enable concurrent access to the collection, then you should explore the Java concurrency APIs, especially ConcurrentHashMap or ConcurrentSkipListMap for your case.
I am maintaining some legacy code and found an implementation using the synchronized keyword on a ConcurrentHashMap. It seems unnecessary to me:
public class MyClass {
    private final Map<MyObj, Map<String, List<String>>> conMap = new ConcurrentHashMap<>();
    //...

    // adding new record into conMap:
    private void addToMap(MyObj id, String name, String value) {
        conMap.putIfAbsent(id, new ConcurrentHashMap<>());
        Map<String, List<String>> subMap = conMap.get(id);
        synchronized (subMap) { // <-- is it necessary?
            subMap.putIfAbsent(name, new ArrayList<>());
            subMap.get(name).add(value);
        }
    }
    //...

    public void doSomthing(MyObj id) {
        List<Map<String, List<String>>> mapsList = new LinkedList<>();
        for (MyObj objId : conMap.keySet()) {
            if (objId.key1.equals(id.key1)) {
                mapsList.add(conMap.get(objId));
            }
        }
        for (Map<String, List<String>> map : mapsList) {
            synchronized (map) { // <-- is it necessary?
                if (timeout <= 0) {
                    log(map.size());
                    for (List<String> value : map.values()) {
                        log(id, value);
                    }
                } else {
                    int sum = 0;
                    for (Map.Entry<String, List<String>> val : map.entrySet()) {
                        sum += val.getValue().size();
                    }
                    log(sum);
                    map.wait(timeout);
                }
            }
        }
        //...
    }
}
So, is it reasonable to use the synchronized keyword on an object that is already concurrent? Or are those two different things?
In this case:
synchronized (subMap) { // <-- is it necessary?
    subMap.putIfAbsent(name, new ArrayList<>());
    subMap.get(name).add(value);
}
the synchronized is necessary. Without it, you could have two threads simultaneously updating the same ArrayList instance. Since ArrayList is not thread-safe, the addToMap method would not be thread-safe either.
In this case:
synchronized (map) { // <-- is it necessary?
    if (/*condition*/) {
        log(map.size());
        for (List<String> value : map.values()) {
            log(id, value);
        }
    } else {
        int sum = 0;
        for (Map.Entry<String, List<String>> val : map.entrySet()) {
            sum += val.getValue().size();
        }
        log(sum);
        map.wait(timeout);
    }
}
the synchronized is necessary.
In the if branch, the log method (or something called from it) will probably call ArrayList::toString which will iterate each ArrayList. Without the synchronizing at the submap level, there could be a simultaneous add by another thread (e.g. an addToMap call). That means that there are memory hazards, and a ConcurrentModificationException may be possible in the toString() method.
In the else branch, the size() call is accessing a size field in each ArrayList in the submap. Without synchronizing at the submap level, there could be a simultaneous add to one of those lists. That could cause the size() method to return a stale value. In addition, you are not guaranteed to see map entries added to a submap while you are iterating it. If either of those events happens, the sum could be inaccurate. (Whether that is really an issue depends on the requirements for this method: inaccurate counts could be acceptable.)
ConcurrentHashMap synchronizes each individual method call itself, so that no other thread can access the map (and possibly break the internal data structure of the map).
A synchronized block synchronizes two or more consecutive method calls, so that no other thread can modify the data structure between the calls (and possibly break the consistency of the data with regard to the application logic).
Note that the synchronized block only works if all access to the HashMap is performed from synchronized blocks using the same monitor object.
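To illustrate the difference, here is a hedged sketch of a check-then-act sequence; the map and key names are made up for the example:
ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

// Each call below is individually thread-safe, but the pair of calls is not:
// another thread can change the value between get() and put().
Integer current = counts.get("answers");
counts.put("answers", (current == null ? 0 : current) + 1);

// A synchronized block makes the two calls atomic, but only if every writer uses the same lock:
synchronized (counts) {
    Integer c = counts.get("answers");
    counts.put("answers", (c == null ? 0 : c) + 1);
}

// A single atomic method on the map avoids the external lock entirely:
counts.merge("answers", 1, Integer::sum);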
It is sort of necessary, as multiple threads may try to append to the same ArrayList at the same time. The synchronized block protects against that happening, since ArrayList is obviously not synchronized.
Since Java 8 we have computeIfAbsent, which means the put-followed-by-get pattern they are using can be simplified. I would write it like this; no synchronization required:
conMap.computeIfAbsent(id, k -> new ConcurrentHashMap<>())
      .computeIfAbsent(name, k -> new CopyOnWriteArrayList<>()) // or other thread-safe list
      .add(value);
Other answers don't adequately address this bit:
for (Map<String, List<String>> map : mapsList) {
    synchronized (map) { // <-- is it necessary?
        if (/*condition*/) {
            ...iterate over map...
        } else {
            ...iterate over map...
        }
    }
}
Is it necessary? Hard to tell.
What is /*condition*/ ? Does synchronizing on map prevent some other thread A from changing the value of /*condition*/ after thread B has tested it, but before or while thread B is performing either of the two branches? If so, then the synchronized block could be very important.
How about those iterations? Does synchronizing on map prevent some other thread A from changing the contents of the map while thread B is iterating? If so, then the synchronized block could be very important.
I would like to implement the following logic:
-the following structure is to be used
//Map<String, CopyOnWriteArrayList> keeping the pending updates
//grouped by the id of the updated object
final Map<String, List<Update>> updatesPerId = new ConcurrentHashMap<>();
-n producers will add updates to updatesPerId map (for the same id, 2 updates can be added at the same time)
-one TimerThread will run from time to time and has to process the received updates. Something like:
final Map<String, List<Update>> toBeProcessed = new HashMap<>(updatesPerId);
updatesPerId.clear();
// iterate over toBeProcessed and process them
Is there any way to make this logic thread safe without synchronizing the adding logic from producers and the logic from timerThread(consumer)? I am thinking about an atomic clear+get but it seems that ConcurrentMap does not provide something like that.
Also, I have to mention that updates should be kept by updated object id so I cannot replace the map with a queue or something else.
Any ideas?
Thanks!
You can leverage the fact that ConcurrentHashMap.compute executes atomically.
You can put into the updatesPerId like so:
updatesPerId.compute(id, (k, list) -> {
    if (list == null) list = new ArrayList<>();
    // ... add to the list
    // Return a non-null list, so the key/value pair is stored in the map.
    return list;
});
Note that this deliberately does not use computeIfAbsent and then add to the returned list; that sequence would not be atomic.
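For contrast, a sketch of the non-atomic variant being warned against: computeIfAbsent itself is atomic, but the add on the returned list happens outside the map's internal locking, so two producers can hit the same plain ArrayList at once.
// Anti-pattern: the add() is not covered by the map's lock, so it can race.
updatesPerId.computeIfAbsent(id, k -> new ArrayList<>())
            .add(update);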
Then in your thread to remove things:
for (String key : updatesPerId.keySet()) {
    updatesPerId.compute(key, (k, list) -> {
        // ... Process the contents of the list.
        // Return null to remove the key/value pair from the map.
        return null;
    });
}
So adding an update for a key (or processing all the values for that key) might block if you happen to be adding and processing the same key at the same time; otherwise, neither will block.
Edit: as pointed out by #StuartMarks, it might be better to simply get all things out of the map first, and then process them later, in order to avoid blocking other threads trying to add:
Map<String, List<Update>> newMap = new HashMap<>();
for (String key : updatesPerId.keySet()) {
    newMap.put(key, updatesPerId.remove(key));
}
// ... Process entries in newMap.
I'd suggest using LinkedBlockingQueue instead of CopyOnWriteArrayList as the map value. With COWAL, adds get successively more expensive, so adding N elements results in N^2 performance. LBQ addition is O(1). Also, LBQ has drainTo which can be used effectively here. You could do this:
final Map<String, Queue<Update>> updatesPerId = new ConcurrentHashMap<>();
Producer:
updatesPerId.computeIfAbsent(id, k -> new LinkedBlockingQueue<>()).add(update);
Consumer:
updatesPerId.forEach((id, queue) -> {
    List<Update> updates = new ArrayList<>();
    queue.drainTo(updates);
    processUpdates(id, updates);
});
This is somewhat different from what you had suggested. This technique processes the updates for each id, but lets producers continue to add updates to the map while this is going on. This leaves map entries and queues in the map for each id. If the ids end up getting reused a lot, the number of map entries will plateau at a high-water mark.
If new ids are continually coming in, and old ids becoming disused, the map will grow continually, which probably isn't what you want. If this is the case you could use the technique in Andy Turner's answer.
If the consumer really needs to snapshot and clear the entire map, I think you have to use locking, which you wanted to avoid.
Is there any way to make this logic thread safe without synchronizing the adding logic from producers and the logic from timerThread(consumer)?
In short, no - depending on what you mean by "synchronizing".
The easiest way is to wrap your Map into a class of your own.
class UpdateManager {
    Map<String, List<Update>> updates = new HashMap<>();

    public void add(Update update) {
        synchronized (updates) {
            updates.computeIfAbsent(update.getKey(), k -> new ArrayList<>()).add(update);
        }
    }

    public Map<String, List<Update>> getUpdatesAndClear() {
        synchronized (updates) {
            Map<String, List<Update>> copy = new HashMap<>(updates);
            updates.clear();
            return copy;
        }
    }
}
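A short usage sketch of that wrapper; the Update construction and the processUpdates call are placeholders, not part of the answer above:
UpdateManager manager = new UpdateManager();

// Producer threads simply add; the wrapper does the locking.
manager.add(new Update(/* id, payload, ... */));

// The timer thread atomically snapshots and clears, then processes outside the lock.
Map<String, List<Update>> batch = manager.getUpdatesAndClear();
batch.forEach((id, updates) -> processUpdates(id, updates)); // processUpdates is hypothetical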
Right now I am trying to create a producer/consumer thread setup. The producer thread goes through all possible combinations of letters and creates their respective MD5 hashes, then puts each combination and its hash into a HashMap<String, String>. In my consumer thread I want to be able to treat the HashMap like a Queue<>, so the consumer can call poll() etc., removing entries like a queue while still giving me both the combination and its hash when calling poll(). How would I go about doing this? I have the HashMap but don't know how to 'make' or cast it as a Queue.
Thanks.
You should not use a HashMap from multiple threads without handling the thread-safety of your code; otherwise you may end up with a live-lock.
To be able to iterate your Map in the order in which keys were inserted, you can use a LinkedHashMap.
Map m = Collections.synchronizedMap(new LinkedHashMap(...));
The producer would push entries like this (nothing special):
m.put(key, object)
The consumer would poll entries like this:
while (someCondition) {
    Map.Entry nextEntry = null;

    // This block is equivalent to polling
    synchronized (m) {
        Iterator i = m.entrySet().iterator(); // Must be inside the synchronized block
        if (i.hasNext()) {
            nextEntry = i.next();
            i.remove();
        }
    }

    if (nextEntry != null) {
        // Process the entry
        ...
    } else {
        // Sleep for some time
        ...
    }
}
The LinkedHashMap type is like a combination of a HashMap and a Queue - it stores key/value pairs, but also remembers the order in which they were inserted. This might be exactly the type you're looking for. There is no explicit poll() function, but if you get an iterator over the LinkedHashMap you will visit the elements in the order in which they were added. You could probably then write a function like this:
public <KeyType, ValueType> KeyType first(LinkedHashMap<KeyType, ValueType> map) {
    assert !map.isEmpty();
    return map.keySet().iterator().next();
}
which will give you back the first element. Just make sure to synchronize appropriately.
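If you also want poll-like behavior (take and remove the oldest entry), here is a hedged sketch of a helper, assuming every access to the shared map synchronizes on the same map object:
// Removes and returns the oldest entry, or null if the map is empty.
public static <K, V> Map.Entry<K, V> pollFirst(Map<K, V> map) {
    synchronized (map) { // all other access to 'map' must use this same lock
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        if (!it.hasNext()) {
            return null;
        }
        Map.Entry<K, V> first = it.next();
        // Copy before removing; an entry is not guaranteed to stay valid after removal.
        Map.Entry<K, V> result = new java.util.AbstractMap.SimpleImmutableEntry<>(first.getKey(), first.getValue());
        it.remove();
        return result;
    }
}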
Alternatively, you could consider just storing key/value pairs inside a Queue by defining a helper class Pair and then storing Pairs in the queue.
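A hedged sketch of that alternative; Pair is a made-up helper class and md5Of a hypothetical function, with a BlockingQueue so the consumer can simply block on take():
// Minimal immutable pair of a combination and its MD5 hash.
class Pair {
    final String combination;
    final String hash;
    Pair(String combination, String hash) {
        this.combination = combination;
        this.hash = hash;
    }
}

BlockingQueue<Pair> queue = new LinkedBlockingQueue<>();

// Producer:
queue.add(new Pair(combination, md5Of(combination))); // md5Of is a hypothetical helper

// Consumer (take() blocks until an element is available and throws InterruptedException):
Pair next = queue.take();
// next.combination and next.hash give you both sides of the entry.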
Hope this helps!
I suggest you create a Queue of Map.Entry objects:
Queue<Map.Entry<String, String>> queue = new SynchronousQueue<>();

for (Map.Entry<String, String> entry : map.entrySet()) {
    queue.add(entry);
}
You can also consider another type of queue that lets you add elements without a consumer being ready to take them, such as a LinkedBlockingQueue.
The consumer will then be able to recompose a map from the Map.Entry objects, if needed.
I know that it's typically a big no-no to remove from a list using Java's "foreach" and that one should use iterator.remove(). But is it safe to remove() if I'm looping over a HashMap's keySet()? Like this:
for (String key : map.keySet()) {
    Node n = map.get(key).optimize();
    if (n == null) {
        map.remove(key);
    } else {
        map.put(key, n);
    }
}
EDIT:
I hadn't noticed that you weren't really adding to the map - you were just changing the value within the entry. In this case, pstanton's (pre-edit1) solution is nearly right, but you should call setValue on the entry returned by the iterator, rather than calling map.put. (It's possible that map.put will work, but I don't believe it's guaranteed - whereas the docs state that entry.setValue will work.)
for (Iterator<Map.Entry<String, Node>> it = map.entrySet().iterator(); it.hasNext(); ) {
    Map.Entry<String, Node> entry = it.next();
    Node n = entry.getValue().optimize();
    if (n == null) {
        it.remove();
    } else {
        entry.setValue(n);
    }
}
(It's a shame that entry doesn't have a remove method, otherwise you could still use the enhanced for loop syntax, making it somewhat less clunky.)
Old answer
(I've left this here for the more general case where you just want to make arbitrary modifications.)
No - you should neither add to the map nor remove from it directly. The set returned by HashMap.keySet() is a view onto the keys, not a snapshot.
You can remove via the iterator, although that requires that you use the iterator explicitly instead of via an enhanced for loop.
One simple option is to create a new set from the original:
for (String key : new HashSet<String>(map.keySet())) {
...
}
At this point you're fine, because you're not making any changes to the set.
EDIT: Yes, you can definitely remove elements via the key set iterator. From the docs for HashMap.keySet():
The set supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Set.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations.
This is even specified within the Map interface itself.
1 I decided to edit my answer rather than just commenting on psanton's, as I figured the extra information I'd got for similar-but-distinct situations was sufficiently useful to merit this answer staying.
You should use the entry set:
for (Iterator<Map.Entry<String, Node>> it = map.entrySet().iterator(); it.hasNext(); ) {
    Map.Entry<String, Node> entry = it.next();
    Node n = entry.getValue().optimize();
    if (n == null) {
        it.remove();
    } else {
        entry.setValue(n);
    }
}
EDIT: fixed code.