In my Android app I use this code while drawing some waypoints on a map
Iterator<Waypoint> iterator = waypoints.iterator();
while (iterator.hasNext()) {
    Waypoint w = iterator.next();
}
But I am getting this error
Fatal Exception: java.util.ConcurrentModificationException
java.util.ArrayList$ArrayListIterator.next (ArrayList.java:573)
I am not modifying the list directly inside the loop that iterates over it.
But it is possible that the list is modified from another thread, because a user can move waypoints: the drawing of a waypoint can happen at the same time the user moves a waypoint via the touch display.
Can I avoid that exception somehow?
If you want to maintain a List you use from several threads, it's best to use a concurrent list, such as CopyOnWriteArrayList.
Locally, you can avoid the exception by creating a copy of the waypoints list first and iterating over that:
Iterator<Waypoint> iterator = new ArrayList<>(waypoints).iterator();
while (iterator.hasNext()) {
    handle(iterator.next());
}
The iterator provided by ArrayList is a fail-fast iterator, meaning it fails as soon as the underlying list is structurally modified.
One way to avoid the exception is to take a snapshot of the list into another list and then iterate over it.
Iterator<Waypoint> iterator = new ArrayList<>(waypoints).iterator();
while (iterator.hasNext()) {
    Waypoint w = iterator.next();
}
Another way is to use a collection whose iterators are fail-safe, such as CopyOnWriteArrayList.
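For example, a CopyOnWriteArrayList iterator works on an immutable snapshot of the backing array, so modifications from another thread can never break an ongoing iteration. A minimal sketch, where Waypoint and draw() are placeholders for the asker's own types:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

List<Waypoint> waypoints = new CopyOnWriteArrayList<>();
// The touch-handling thread can add, remove, or replace waypoints at any time.
// The drawing thread iterates a snapshot and never sees a ConcurrentModificationException.
for (Waypoint w : waypoints) {
    draw(w); // placeholder for the actual drawing code
}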
I see some options:
a. Avoid multithreading. Well, you don't have to avoid multithreading completely, just for access to the list. All accesses to the list (even reads) must happen from the same thread. Heavy computations can happen on other threads, of course. This can be a reasonable approach when you can iterate fast.
b. Lock the ArrayList, even for reading. This can be tricky, as excessive locking can introduce deadlocks (a small sketch follows after this list).
c. Use data copies. Remember, you copy just the references; you usually don't have to clone all the objects. For large data structures, it might be worth considering a persistent data structure, which does not require copying all the data.
d. Deal with the ConcurrentModificationException somehow. Maybe restart the computation. This might be useful in some cases, but it can get tricky in complex code. Also, when accessing multiple shared data structures, you might get a livelock: two (or more) threads repeatedly causing ConcurrentModificationExceptions for each other.
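For option b, a minimal sketch of locking every access, reads included, assuming a plain ArrayList named waypoints shared between the drawing thread and the touch handler (index and movedWaypoint are placeholders):
// Drawing thread: hold the lock for the whole iteration.
synchronized (waypoints) {
    for (Waypoint w : waypoints) {
        draw(w); // placeholder
    }
}

// Touch handler: every write must use the same monitor.
synchronized (waypoints) {
    waypoints.set(index, movedWaypoint);
}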
EDIT: For some approaches (at least A), you might find reactive programming useful, because this programming style reduces the time spent in the main thread.
Related
I'm looking for a fast Set or Map implementation that has weaker thread-safety in favor of speed.
The idea is to have a data structure that can quickly be checked whether it contains a (Long) entry at best without thread-synchronization. It is okay if a new entry that is written by another thread becomes visible to the other threads at a later time.
I already know that the non-thread-safe standard Java HashSet implementation may corrupt its internal data structures while inserting a new element, so that a reader thread ends up in an endless loop during lookup.
I also know that whenever the writing methods are using synchronized blocks, all reader methods should be synchronized as well in a multi-threaded implementation.
So my ultimate goal is to find a possibility to insert in O(1) and lookup in O(1) where
the inserts might get queued in some way for bulk insert at a sync-point (if there is no other possibility)
the read does not get stuck but should not need to wait (for any writers)
the inserted element should be visible to any subsequent reads of the thread that added the element (which might prevent the aforementioned queue)
I am experimenting with Longs that represent hash-codes mapping to Lists of usually one, sometimes two or more entries.
Is there a way to achieve this e.g. via an array and compare-and-exchange and which is faster than using the ConcurrentHashMap?
What would a sketched implementation look like, given that the input consists of node ids (type Long) of a graph that is traversed by multiple threads, which somehow exchange information about which nodes have been visited already (as described in the list above)?
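To make the "array and compare-and-exchange" idea concrete, here is the rough shape of what I have in mind: an AtomicLongArray used as an open-addressed visited set. This is purely a sketch; the table size and the hashing are naive placeholders and there is no resizing.
import java.util.concurrent.atomic.AtomicLongArray;

// Fixed-size, power-of-two table of node ids; 0 marks an empty slot
// (so node id 0 would need special handling in a real version).
final AtomicLongArray slots = new AtomicLongArray(1 << 20);

boolean addIfAbsent(long nodeId) {
    int idx = (int) (nodeId & (slots.length() - 1)); // naive masking "hash"
    for (int probe = 0; probe < slots.length(); probe++) {
        long cur = slots.get(idx);
        if (cur == nodeId) {
            return false;                            // already visited
        }
        if (cur == 0) {
            if (slots.compareAndSet(idx, 0L, nodeId)) {
                return true;                         // we claimed the slot
            }
            if (slots.get(idx) == nodeId) {
                return false;                        // another thread inserted the same id
            }
        }
        idx = (idx + 1) & (slots.length() - 1);      // linear probing
    }
    return false; // table full; a real version would resize or fall back to a sync point
}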
I really appreciate any comments and ideas,
thanks in advance!
Edit: Added extended information on the actual task that I am doing some hobby-research on and which led me to asking the question here in the forum.
I know that a ConcurrentModificationException is thrown when the collection is structurally changed while iterating over it, but why?
What are the potential problems if we don't throw a ConcurrentModificationException?
How does throwing a ConcurrentModificationException prevent undetermined behaviour later on?
If it exists to prevent multithreading-related issues, why is it also thrown when the same thread that obtained the iterator modifies the collection's structure?
This may be a pretty basic question, but I need a proper scenario to convince myself that checking for ConcurrentModificationException is absolutely necessary.
It's not "absolutely necessary", but it helps prevent a number of potential bugs, both in single-threaded and multi-threaded code. Suppose for instance you're iterating over a set and add or remove an element - how should the iterator handle that? There isn't an obviously correct behavior (you might want it to appear later in the iteration, or you might not).
To avoid CMEs you'll generally want to use two separate collections - one you iterate and one you mutate. This also often leads to cleaner code that's easier to reason about. In the specific case of lists you can iterate over the indices (for (int i = 0; i < list.size(); i++)) but you need to be careful about how i changes when you modify the list so you don't skip or double-iterate over an element.
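For example, a small sketch of index-based removal that avoids the CME; the filter condition is just a placeholder:
import java.util.ArrayList;
import java.util.List;

List<String> list = new ArrayList<>(List.of("a", "bb", "c", "dd"));
for (int i = 0; i < list.size(); i++) {
    if (list.get(i).length() > 1) { // placeholder condition
        list.remove(i);
        i--;                        // step back so the element shifted into slot i isn't skipped
    }
}
System.out.println(list);           // [a, c]
On a recent JDK, list.removeIf(s -> s.length() > 1) does the same thing without the index bookkeeping.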
Today I was reading about how HashMap works in Java. I came across a blog post and I am quoting directly from that article. I have gone through this article on Stack Overflow. Still, I want to know the details.
So the answer is yes, there is a potential race condition while resizing a HashMap in Java: if two threads at the same time find that the HashMap needs resizing, they both try to resize it. During the resizing of a HashMap in Java, the elements in a bucket, which are stored in a linked list, get reversed in order during their migration to the new bucket, because Java's HashMap doesn't append the new element at the tail; instead it appends the new element at the head to avoid tail traversal. If a race condition happens, you will end up with an infinite loop.
It states that since HashMap is not thread-safe, a potential race condition can occur during resizing. Even in our office projects I have seen people using HashMaps extensively while knowing they are not thread-safe. If it is not thread-safe, why should we use HashMap at all? Is it just a lack of knowledge among developers, who might not be aware of structures like ConcurrentHashMap, or is there some other reason? Can anyone shed some light on this puzzle?
I can confidently say ConcurrentHashMap is a pretty ignored class. Not many people know about it and not many people care to use it. The class offers a very robust and fast method of synchronizing a Map collection. I have read a few comparisons of HashMap and ConcurrentHashMap on the web. Let me just say that they're totally wrong. There is no way you can compare the two: one offers thread-safe access to a map while the other offers no synchronization whatsoever.
What most of us fail to notice is that while our applications, web applications especially, work fine during the development and testing phase, they usually fall over under heavy (or even moderately heavy) load. This is because we expect our HashMaps to behave a certain way, but under load they usually misbehave. Hashtables offer concurrent access to their entries, with a small caveat: the entire map is locked to perform any sort of operation.
While this overhead is ignorable in a web application under normal load, under heavy load it can lead to delayed response times and overtaxing of your server for no good reason. This is where ConcurrentHashMap steps in. It offers all the features of Hashtable with performance almost as good as a HashMap. ConcurrentHashMap accomplishes this by a very simple mechanism.
Instead of a map-wide lock, the collection (in its original, pre-Java-8 implementation) maintains 16 locks by default, each of which guards one segment, i.e. a portion of the map's buckets. This effectively means that 16 threads can modify the collection at the same time (as long as they're all working on different segments). In fact, hardly any operation performed by this collection locks the entire map.
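A minimal usage sketch (the key and value types here are arbitrary):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

Map<String, Integer> hits = new ConcurrentHashMap<>();
// Safe to call from many threads at once; no external synchronization needed.
hits.merge("page1", 1, Integer::sum);   // atomic read-modify-write
hits.putIfAbsent("page2", 0);           // atomic insert-if-missing
Integer count = hits.get("page1");      // reads never lock the whole map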
There are several aspects to this. First of all, most of the collections are not thread-safe. If you want a thread-safe view of a collection you can call Collections.synchronizedCollection or Collections.synchronizedMap.
But the main point is this: You want your threads to run in parallel, no synchronization at all - if possible of course. This is something you should strive for but of course cannot be achieved every time you deal with multithreading.
But there is no point in making the default collection/map thread-safe, because sharing a map between threads should be an edge case, and synchronization means more work for the JVM.
In a multithreaded environment, you have to ensure that it is not modified concurrently, or you can run into serious memory consistency problems, because it is not synchronized in any way.
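For example, the synchronized wrapper makes individual calls thread-safe, but you must hold the wrapper's lock yourself while iterating (a small sketch):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

Map<String, String> map = Collections.synchronizedMap(new HashMap<>());
map.put("a", "1"); // individual calls are synchronized for you

// Iteration is NOT atomic: synchronize on the wrapper manually, otherwise a
// concurrent put/remove can cause a ConcurrentModificationException.
synchronized (map) {
    for (Map.Entry<String, String> e : map.entrySet()) {
        System.out.println(e.getKey() + "=" + e.getValue());
    }
}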
Just check the API; I used to think the same way.
I thought that the solution was to use the static Collections.synchronizedMap method. I was expecting it to return a better implementation. But if you look at the source code you will realize that all it does is wrap the map and synchronize every call on a mutex, which happens to be the map itself, so not even reads can occur concurrently.
In the Jakarta Commons project, there is an implementation called FastHashMap. This implementation has a property called fast. If fast is true, then the reads are non-synchronized and the writes perform the following steps (sketched below):
Clone the current structure
Perform the modification on the clone
Replace the existing structure with the modified clone
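A rough sketch of that copy-on-write style write path (not the actual Commons code; the class name and fields are made up):
import java.util.HashMap;
import java.util.Map;

public class CopyOnWriteStyleMap<K, V> {
    // Readers see a fully built map through this volatile reference; get() never locks.
    private volatile Map<K, V> current = new HashMap<>();

    public V get(Object key) {
        return current.get(key);                 // non-synchronized read
    }

    public synchronized V put(K key, V value) {
        Map<K, V> copy = new HashMap<>(current); // 1. clone the current structure
        V old = copy.put(key, value);            // 2. perform the modification on the clone
        current = copy;                          // 3. replace the existing structure with the clone
        return old;
    }
}
The wrapper below takes a different route and guards a single map with a ReentrantReadWriteLock instead: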
public class FastSynchronizedMap<K, V> implements Map<K, V>, Serializable {
    private final Map<K, V> m;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    .
    .
    .
    public V get(Object key) {
        lock.readLock().lock();
        V value = null;
        try {
            value = m.get(key);
        } finally {
            lock.readLock().unlock();
        }
        return value;
    }

    public V put(K key, V value) {
        lock.writeLock().lock();
        V v = null;
        try {
            v = m.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
        return v;
    }
    .
    .
    .
}
Note that we use a try/finally block: we want to guarantee that the lock is released no matter what problem is encountered in the block.
This implementation works well when you have almost no write operations, and mostly read operations.
A HashMap can be used when only a single thread has access to it. However, when multiple threads start accessing the HashMap, there are two main problems:
1. Resizing of the HashMap is not guaranteed to work as expected.
2. A ConcurrentModificationException may be thrown. This can also happen in a single thread, when the thread modifies the HashMap while iterating over it.
A partial workaround for using a HashMap in a multi-threaded environment is to initialize it with the expected number of entries, hence avoiding the need for resizing; note that this only sidesteps the resize problem and does not make the HashMap thread-safe.
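For example (a sketch; the entry count is arbitrary):
import java.util.HashMap;
import java.util.Map;

int expectedCount = 10_000; // hypothetical number of entries
// Capacity chosen so expectedCount entries stay under the default 0.75 load factor,
// so the map never needs to resize while it is being filled.
Map<Long, String> cache = new HashMap<>((int) (expectedCount / 0.75f) + 1);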
One basic argument to use a Queue over an ArrayList is that Queue guarantees FIFO behavior.
But if I add 10 elements to an ArrayList and then iterate over the elements starting from the 0th element, then I will retrieve the elements in the same order as they were added. So essentially, that guarantees a FIFO behavior.
What is so special about Queue as compared to traditional ArrayList?
You can look at the javadoc here. The main difference is a List lets you look at any element whenever you want. A queue only lets you look at the "next" one.
Think about it as a real queue or as a line for the cash register at a grocery store. You don't ask the guy in the middle or the end to pay next, you always ask the guy who's in the front/been waiting the longest.
It's worth noting that some lists are queues. Look at LinkedList, for example.
If I gave you a Queue instance then you would know that by iteratively calling remove() you would retrieve the elements in FIFO order. If I gave you an ArrayList instance then you can make no such guarantee.
Take the following code as an example:
ArrayList<Integer> list = new ArrayList<Integer>();
list.add(5);
list.add(4);
list.add(3);
list.add(2);
list.add(1);
list.set(4,5);
list.set(3,4);
list.set(2,3);
list.set(1,2);
list.set(0,1);
System.out.println(list);
If I were now to give you this list, then by iterating from 0 to 4 you would not get the elements in FIFO order.
Also, I would say another difference is abstraction. With a Queue instance you don't have to worry about indexes and this makes things easier to think about if you don't need everything ArrayList has to offer.
The limitations imposed on a queue (FIFO, no random access), as compared to an ArrayList, allow for the data structure to be better optimized, have better concurrency, and be a more appropriate and cleaner design when called for.
In regards to optimization and concurrency, imagine the common scenario where a producer is filling a queue while a consumer consumes it. If we used an ArrayList for this, then in the naive implementation each removal of the first element would cause a shift operation on the ArrayList in order to move down every other element. This is very inefficient, especially in a concurrent implementation, since the list would be locked for the duration of the entire shift operation.
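A small sketch of that producer/consumer pattern with a BlockingQueue (the task strings and counts are arbitrary):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

BlockingQueue<String> tasks = new ArrayBlockingQueue<>(100);

// Producer thread
new Thread(() -> {
    try {
        for (int i = 0; i < 10; i++) {
            tasks.put("task-" + i);     // blocks if the queue is full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Consumer thread
new Thread(() -> {
    try {
        while (true) {
            String task = tasks.take(); // blocks until an element is available
            System.out.println("processing " + task);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();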
In regards to design, if items are to be accessed in a FIFO fashion then using a queue automatically communicates that intention, whereas a list does not. This clarity of communication allows for easier understanding of the code, and may possibly make the code more robust and bug free.
The difference is that for a Queue, you are guaranteed to pull elements out in FIFO order. For an ArrayList, you have no idea what order the elements were added. Depending on how you use it, you could enforce FIFO ordering on an ArrayList. I could also design a wrapper for a Queue that allowed me to pull out which-ever element I wanted.
The point I'm trying to make is that these classes are designed to be good at something. You don't have to use them for that, but that's what they are designed and optimized for. Queues are very good at adding and removing elements, but bad if you need to search through them. ArrayLists, on the other hand, are a bit slower to add elements, but allow easy random access. You won't see it in most applications you write, but there is often a performance penalty for choosing one over the other.
Yes!
I would use the poll() and peek() methods of Queue, which retrieve-and-remove and examine the head element, respectively. These methods return a special value (null) if the operation fails instead of throwing an exception, whereas remove() throws a NoSuchElementException on an empty queue.
Ref: docs.oracle.com
For example, the Queue methods poll() and remove() both retrieve the head element and remove it from the Queue.
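For example (a small sketch):
import java.util.ArrayDeque;
import java.util.Queue;

Queue<String> q = new ArrayDeque<>();
System.out.println(q.poll()); // prints null: the queue is empty
System.out.println(q.peek()); // prints null: nothing to examine
q.remove();                   // throws NoSuchElementException on an empty queue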
Some implementations of the Queue interface (such as PriorityQueue) allow you to assign a priority to the elements and retrieve them according to that priority. That is much more than FIFO behaviour.
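For example:
import java.util.PriorityQueue;
import java.util.Queue;

Queue<Integer> pq = new PriorityQueue<>();
pq.offer(5);
pq.offer(1);
pq.offer(3);
System.out.println(pq.poll()); // 1: the smallest element, not the first one inserted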
Consider a situation in which arbitrary processes update an ArrayList at random times and we are supposed to process the elements in FIFO order.
There is no clean way to do that other than changing the data structure from an ArrayList to a queue.
Sorry if this was asked before, but I could not find my exact scenario.
Currently I have a background thread that adds an element to a list and removes the old data every few minutes. Theoretically there can be at most 2 items in the list at a time, and the items are immutable. I also have multiple threads that grab the first element in the list whenever they need it. In this scenario, is it necessary to explicitly serialize operations on the list? My assumption is that since I am just grabbing references to the elements, it should not matter if the background thread deletes elements from the list, because the reading thread has already grabbed a copy of the reference before the deletion. There is probably a better way to do this. Thanks in advance.
Yes, synchronization is still needed here, because adding and removing are not atomic operations. If one thread calls add(0, new Object()) at the same time another calls remove(0), the result is undefined; for example, the remove() might end up having no effect.
Depending on your usage, you might be able to use a non-blocking list class like ConcurrentLinkedQueue. However, given that you are pushing one change every few minutes, I doubt you are gaining much in performance by avoiding synchronization.
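A rough sketch of that scenario with a ConcurrentLinkedQueue, assuming the readers only ever need the head element (the element type here is just a placeholder):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

Queue<String> items = new ConcurrentLinkedQueue<>();

// Background thread, every few minutes:
items.add("fresh data");       // publish the new immutable item
while (items.size() > 1) {
    items.poll();              // drop the older item(s)
}

// Reader threads:
String current = items.peek(); // non-blocking; may be null before the first add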