How to loop over an updatable list? - Java

I'm building a Java Running class that will process a set of items one by one. While it is working (running), that set may be updated (items are only added).
How can I loop over that list while being sure that it will take newly added elements into consideration?
Update
Following the answers, I implemented the code and posted it on Code Review.

Answer
You should use a Queue, e.g. java.util.concurrent.ConcurrentLinkedQueue, java.util.concurrent.LinkedBlockingQueue, java.util.concurrent.ArrayBlockingQueue, or one of the other implementations that suits your needs.
They have different methods which allow you to implement different scenarios. You can check the javadocs for the differences between the Queue methods (there are methods that throw exceptions and methods that return a special value; methods for viewing elements and methods for retrieving them with removal).
             Throws exception     Returns special value
Insert       add(e)               offer(e)
Remove       remove()             poll()
Examine      element()            peek()
In the case of BlockingQueue implementations there are two additional options: methods that block and methods that time out. The extended table of the possible methods is in its javadoc.
Choose the required implementation carefully. Do you want a fixed capacity? Do you want to block on retrieval from an empty queue? Do you want your Queue bounded or unbounded? If still in doubt, look up Stack Overflow answers that explain the differences between the queue types, Google them, or check the javadocs.
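For example, here is a small sketch of the two flavors of retrieval methods (the queue type and its contents are just illustrative):

import java.util.NoSuchElementException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueMethodsDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("task-1");               // insert; returns false instead of throwing if a bounded queue is full

        System.out.println(queue.peek());    // "task-1" - examines the head without removing it
        System.out.println(queue.poll());    // "task-1" - removes and returns the head
        System.out.println(queue.poll());    // null - empty queue, special value instead of exception

        try {
            queue.remove();                  // same operation as poll(), but throws on an empty queue
        } catch (NoSuchElementException e) {
            System.out.println("remove() throws when the queue is empty");
        }
    }
}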
RANT
There is one problem you may or may not run into, depending on your design: how to tell whether the queue is empty because you are done producing elements. Is your producer (whatever is inserting elements into your queue) simply not fast enough to add items to the queue before they are consumed? Or is the queue empty because all the tasks were completed? In the latter case, if you use a blocking queue, you can block on retrieval of an element when none will ever become available. In that situation you can consider a non-blocking queue, use a "poison pill" marker element that means the producers are done producing, or, even better, decouple the producer from the consumer by using an intermediary mediator class which holds the queue, so that producer and consumer interact only with the mediator.
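As a rough sketch of the poison-pill idea (the element type, the sentinel value and the item names are made up for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPillExample {
    // A dedicated sentinel object that can never be a real work item.
    private static final String POISON_PILL = "__DONE__";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();         // blocks until an element is available
                    if (item == POISON_PILL) break;     // reference check is intentional: the sentinel is one dedicated object
                    System.out.println("processing " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer side: items can keep arriving while the consumer is already running.
        for (int i = 0; i < 5; i++) {
            queue.put("item-" + i);
        }
        queue.put(POISON_PILL);                         // tell the consumer we are done producing
        consumer.join();
    }
}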

Use a queue (for example, java.util.concurrent.LinkedBlockingQueue) instead of a list. Queues are specifically designed for this kind of scenario.

Don't go with List.
If you use a Queue instance, then you can call remove() repeatedly and you will retrieve the elements in FIFO order. If you use a List instance, you can make no such guarantee.
Take the following code as an example:
ArrayList<Integer> list = new ArrayList<Integer>();
list.add(5);       // insertion order: 5, 4, 3, 2, 1
list.add(4);
list.add(3);
list.add(2);
list.add(1);
list.set(4, 5);    // now overwrite the slots in place
list.set(3, 4);
list.set(2, 3);
list.set(1, 2);
list.set(0, 1);
System.out.println(list);   // prints [1, 2, 3, 4, 5] - not the order the elements were added in
Also, another difference is abstraction. With a Queue instance you don't have to worry about indexes and this makes things easier to think about if you don't need everything a List has to offer.

Related

Ordered lists and class thread-safety

I have a class which has:
2 fields holding time-ordered lists (list1, list2).
3 read-only methods which iterate over the above lists to generate summary statistics.
1 mutating method, which looks for a match of a given 'new-item' in list1. If a match is not found, it adds 'new-item' to list1. If a match is found, it removes the match from list1 and adds both the match and 'new-item' to list2.
Let's assume that multiple concurrent invocations of all methods are possible. I need to achieve thread-safety while maximising performance.
Approach 1 (extremely slow) - Declare the field types as ArrayList and use the synchronized keyword on all methods.
Approach 2 - Declare the field types as CopyOnWriteArrayList and synchronize the mutating method.
Questions
Does Approach 2 ensure thread-safety?
Are there better alternatives?
Do you need the random access offered by an ArrayList? Can you instead use a thread-safe ordered collection like ConcurrentSkipListSet (non-blocking) or PriorityBlockingQueue (blocking)? Both have O(log n) insertion.
Mutation methods in both cases are thread-safe.
Edit: Just note, you would still run into atomicity concerns. If you need the adds to be done atomically, then you would need more coarse-grained locking.
Approach number 2 does not guarantee thread-safety.
The two operations on collections are not atomic: first you remove an item, then you add it to the other collection. Some thread might in the meantime execute a read-only method to find out that the item is missing in list 1, and is not yet added to the list 2. It depends on your application whether this is acceptable.
On the other hand, it is also possible that a read-only method first iterates through list 1 and finds that it contains item x; in the meantime the updating method executes and transfers item x; the read-only method then continues, iterates through list 2, and finds item x a second time. Again, it depends on your application whether this is acceptable.
Other solutions are possible, but they would require more details about what exactly you are trying to achieve.
One obvious way would be to modify approach number 1, and instead of using synchronized on every method, use a readers-writer lock. You would read-lock in every read-only method and write-lock in the mutating one.
You could also use two separate readers-writer locks. One for the first collection and one for the other. If your read-only methods iterate through both of the lists, they would have to read-acquire both of the locks up front, before doing anything. On the other hand the mutating method would have to first write-acquire the first lock, and if it wishes to transfer an item, then it should write-acquire the second lock.
You'd need to do some testing to see if it works nicely for you. Still there are definitely even better ways to handle it, but you'd need to provide more details.
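A minimal sketch of that readers-writer variant of Approach 1 (field names, the matching logic and the statistic are placeholders, not taken from the question):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TimeOrderedLists {
    private final List<String> list1 = new ArrayList<>();
    private final List<String> list2 = new ArrayList<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // Read-only method: any number of threads may hold the read lock at once.
    public int summaryStatistic() {
        lock.readLock().lock();
        try {
            return list1.size() + list2.size();     // stand-in for the real statistics
        } finally {
            lock.readLock().unlock();
        }
    }

    // Mutating method: the write lock is exclusive, so readers see the
    // remove-then-add transfer as a single atomic step.
    public void addOrTransfer(String newItem) {
        lock.writeLock().lock();
        try {
            int i = list1.indexOf(newItem);         // stand-in for the real matching logic
            if (i < 0) {
                list1.add(newItem);
            } else {
                String match = list1.remove(i);
                list2.add(match);
                list2.add(newItem);
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}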
The time it takes to lock a method is less than a microsecond. If a fraction of a microsecond matters, you might consider something more complex, but otherwise something simple is usually better.
Just using a thread-safe collection is not enough when you perform multiple operations; e.g. removing from one list and adding to another is two operations, and any number of threads can get in between those operations.
Note: if you do lots of updates this can be slower.
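For example, even if both lists are thread-safe collections, a transfer has to be guarded as one unit (a sketch with illustrative names):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class GuardedTransfer {
    private final List<String> list1 = new CopyOnWriteArrayList<>();
    private final List<String> list2 = new CopyOnWriteArrayList<>();
    private final Object lock = new Object();

    // Each individual call below is thread-safe on its own, but the sequence
    // is not, so the whole transfer must run under a single lock.
    void transfer(String item) {
        synchronized (lock) {
            if (list1.remove(item)) {
                list2.add(item);
            }
        }
    }
}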

Effective thread-safe Java List impl when traversals match mutations

I have a number of threads that will be consuming messages from a broker and processing them. Each message is XML containing, amongst other elements, an alpha-numeric <itemId>WI354DE48</itemId> element that serves as a unique ID for the item to "process". Due to criteria I can't control or change, it is possible for items/messages to be duplicated on the broker queue that these threads are consuming from. So the same item (with an ID of WI354DE48) might be sent to the queue only once, or it might get sent 100 times. Regardless, I can only allow the item to be processed once; so I need a way to prevent Thread A from processing a duplicated item that Thread B already processed.
I'm looking to use a simple thread-safe list that can be shared by all threads (workers) to act as a cache mechanism. Each thread will be given the same instance of a List<String>. When each worker thread consumes a message, it checks to see if the itemId (a String) exists on the list. If it doesn't then no other worker has processed the item. In this case, the itemID is added to the list (locking/caching it), and then the item is processed. If the itemId does already exist on the list, then another worker has already processed the item, so we can ignore it. Simple, yet effective.
It's obviously then paramount to have a thread-safe list implementation. Note that the only two methods we will ever be calling on this list will be:
List#contains(String) - traversing/searching the list
List#add(String) - mutating the list
...and it's important to note that we will be calling both methods with about the same frequency. Only rarely will contains() return true and save us from needing to add the ID.
I first thought that CopyOnWriteArrayList was my best bet, but after reading the Javadocs, it seems like each worker would just wind up with its own thread-local copy of the list, which isn't what I want. I then looked into Collections.synchronizedList(new ArrayList<String>), and that seems to be a decent bet:
List<String> processingCache = Collections.synchronizedList(new ArrayList<String>());
List<Worker> workers = getWorkers(processingCache); // Inject the same list into all workers.
for (Worker worker : workers)
    executor.submit(worker);

// Inside each Worker's run method:
@Override
public void run() {
    String itemXML = consumeItemFromBroker();
    Item item = toItem(itemXML);
    if (processingCache.contains(item.getId()))
        return;
    else
        processingCache.add(item.getId());
    // ... continue processing.
}
Am I on track with Collections.synchronizedList(new ArrayList<String>), or am I way off base? Is there a more efficient thread-safe List impl given my use case, and if so, why?
Collections.synchronizedList is very basic; it just marks all methods as synchronized.
This will work, but only under some specific assumptions, namely that you never carry out compound accesses to the List, such as:
if(!list.contains(x))
list.add(x);
This is not thread-safe, as the monitor is released between the two calls.
It can also be somewhat slow if you have many reads and few writes, because every thread acquires an exclusive lock.
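If you do stay with Collections.synchronizedList, the compound check-then-add has to be done while holding the list's own monitor (the wrapper synchronizes on itself); roughly, as a sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SynchronizedListIdiom {
    private final List<String> cache = Collections.synchronizedList(new ArrayList<>());

    // Returns true if this call claimed the id, i.e. it had not been seen before.
    boolean markIfAbsent(String itemId) {
        synchronized (cache) {              // same monitor the wrapper methods use internally
            if (cache.contains(itemId)) {
                return false;               // another worker already processed this id
            }
            cache.add(itemId);
            return true;
        }
    }
}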
You can look at the implementations in the java.util.concurrent package, there are several options.
I would recommend using a ConcurrentHashMap with dummy values.
The reason for the recommendation is that ConcurrentHashMap synchronizes on groups of keys, so if you have a good hashing algorithm (and String does) you can actually get a massive amount of concurrent throughput.
I would prefer this over a ConcurrentSkipListSet because a hash map doesn't maintain any ordering (which you don't need here), so you avoid that overhead.
Of course with threading it's never entirely obvious where the bottlenecks are so I would suggest trying both and seeing which gives you better performance.
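A sketch of the ConcurrentHashMap approach for this de-duplication case, using the Set view so you don't have to manage dummy values yourself (the id-extraction helper is assumed, not part of the answer):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class DedupCache {
    // Set view backed by a ConcurrentHashMap; add() is an atomic check-and-insert.
    private final Set<String> seenIds = ConcurrentHashMap.newKeySet();

    void handle(String itemXml) {
        String itemId = extractId(itemXml);     // hypothetical helper, not shown
        if (!seenIds.add(itemId)) {
            return;                             // some other worker already claimed this id
        }
        // ... continue processing, exactly once per id
    }

    private String extractId(String xml) {
        return xml;                             // placeholder for the real XML parsing
    }
}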

When to use a Queue over an ArrayList

One basic argument to use a Queue over an ArrayList is that Queue guarantees FIFO behavior.
But if I add 10 elements to an ArrayList and then iterate over the elements starting from the 0th element, then I will retrieve the elements in the same order as they were added. So essentially, that guarantees a FIFO behavior.
What is so special about Queue as compared to traditional ArrayList?
You can look at the javadoc here. The main difference is a List lets you look at any element whenever you want. A queue only lets you look at the "next" one.
Think about it as a real queue or as a line for the cash register at a grocery store. You don't ask the guy in the middle or the end to pay next, you always ask the guy who's in the front/been waiting the longest.
It's worth noting that some lists are queues. Look at LinkedList, for example.
If I gave you a Queue instance then you would know that by iteratively calling remove() you would retrieve the elements in FIFO order. If I gave you an ArrayList instance then you could make no such guarantee.
Take the following code as an example:
ArrayList<Integer> list = new ArrayList<Integer>();
list.add(5);       // insertion order: 5, 4, 3, 2, 1
list.add(4);
list.add(3);
list.add(2);
list.add(1);
list.set(4, 5);    // now overwrite the slots in place
list.set(3, 4);
list.set(2, 3);
list.set(1, 2);
list.set(0, 1);
System.out.println(list);   // prints [1, 2, 3, 4, 5] - not the order the elements were added in
If I were now to give you this list, then by iterating from 0 to 4 you would not get the elements in FIFO order.
Also, I would say another difference is abstraction. With a Queue instance you don't have to worry about indexes and this makes things easier to think about if you don't need everything ArrayList has to offer.
The limitations imposed on a queue (FIFO, no random access), as compared to an ArrayList, allow for the data structure to be better optimized, have better concurrency, and be a more appropriate and cleaner design when called for.
In regards to optimization and concurrency, imagine the common scenario where a producer is filling a queue while a consumer consumes it. If we used an ArrayList for this, then in the naive implementation each removal of the first element would cause a shift operation on the ArrayList in order to move down every other element. This is very inefficient, especially in a concurrent implementation, since the list would be locked for the duration of the entire shift operation.
In regards to design, if items are to be accessed in a FIFO fashion then using a queue automatically communicates that intention, whereas a list does not. This clarity of communication allows for easier understanding of the code, and may possibly make the code more robust and bug free.
The difference is that for a Queue, you are guaranteed to pull elements out in FIFO order. For an ArrayList, you have no idea what order the elements were added. Depending on how you use it, you could enforce FIFO ordering on an ArrayList. I could also design a wrapper for a Queue that allowed me to pull out which-ever element I wanted.
The point I'm trying to make is that these classes are designed to be good at something. You don't have to use them for that, but that's what they are designed and optimized for. Queues are very good at adding and removing elements, but bad if you need to search through them. ArrayLists, on the other hand, are a bit slower to add elements, but allow easy random access. You won't see it in most applications you write, but there is often a performance penalty for choosing one over the other.
Yes!
I would use the poll() and peek() methods of Queue, which retrieve-and-remove and examine the head element, respectively. These methods return the special value null if the operation fails rather than throwing an exception, whereas the remove() and element() methods throw a NoSuchElementException on an empty queue.
Ref: docs.oracle.com
For example, the Queue methods poll() and remove() retrieve the element and remove it from the Queue.
Some implementations of the Queue interface (such as PriorityQueue) allow you to assign a priority to the elements and retrieve them according to that priority. That is much more than FIFO behavior.
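For instance, a small sketch of priority-based retrieval (the task representation and the comparator are made up):

import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

public class PriorityDemo {
    public static void main(String[] args) {
        // Highest priority first, regardless of insertion order.
        Queue<int[]> tasks = new PriorityQueue<>(Comparator.comparingInt((int[] t) -> t[0]).reversed());
        tasks.offer(new int[]{1, 100});     // {priority, payload}
        tasks.offer(new int[]{5, 200});
        tasks.offer(new int[]{3, 300});

        while (!tasks.isEmpty()) {
            int[] t = tasks.poll();
            System.out.println("priority=" + t[0] + " payload=" + t[1]);
        }
        // Prints priorities 5, 3, 1 - retrieval order is by priority, not FIFO.
    }
}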
Consider a situation in which arbitrary processes update an ArrayList at random and we are supposed to process the elements in FIFO order.
There is no way to do that other than to change the data structure from an ArrayList to a Queue.

Re-ordering of a PriorityBlockingQueue after an element is added

I use a PriorityBlockingQueue to maintain a list of objects whose order is dictated by a comparator. My requirement is as follows: first, I add N objects to the queue, and the queue maintains the ordered list. Later, I change the values in the objects that have been added to the queue. The issue is that the queue is not re-ordered based on the updated values in the objects. In contrast, I observed that the queue is re-ordered when a single object is removed.
Is there any way I can manually refresh the ordering of the queue before obtaining values from it, in an efficient manner?
Not with PriorityBlockingQueue. It sounds like the option you're looking for is decrease-key, which isn't supported by the Java priority queue abstractions.
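A commonly used workaround (not a true decrease-key, and O(n) per update because remove(Object) scans the heap) is to remove the changed element and re-insert it so the queue re-evaluates its position; a rough sketch under that assumption:

import java.util.concurrent.PriorityBlockingQueue;

class RequeueExample {
    // Re-insert an element whose ordering fields have changed so the queue
    // places it according to the new values. Relies on equals() still
    // identifying the element after the change.
    static <T> void reorder(PriorityBlockingQueue<T> queue, T changed) {
        if (queue.remove(changed)) {
            queue.add(changed);
        }
    }
}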

Java Concurrency: lock efficiency

My program has 100 threads.
Every single thread does this:
1) if the arrayList is empty, add an element with certain properties to it
2) if the arrayList is not empty, iterate through the elements in the arrayList; if a suitable element is found (matching certain properties), take it and remove it from the arrayList
The problem here is that while one thread is iterating through the arrayList, the other 99 threads are waiting for the lock on the arrayList.
What would you suggest to me if I want all 100 threads to work in lock-less condition? So they all have work to do?
Thanks
Have you looked at shared vs exclusive locking? You could use a shared lock on the list, and then have a 'deleted' property on the list elements. The predicate you use to check the list elements would need to make sure the element is not marked 'deleted' in addition to whatever other queries you have - also due to potential read-write conflicts, you would need to lock on each element as you traverse. Then periodically get an exclusive lock on the list to perform the deletes for real.
The read lock allows for a lot of concurrency on the list. The exclusive locks on each element of the list are not as nice, but you need to force the memory model to update your 'deleted' flag to each thread, so there's no way around that.
First, if you're not running on a machine that has 64 cores or more, your 100 threads are probably a performance hog in themselves.
Then, an ArrayList is certainly not a good choice for what you're describing, because removing an element does not run in amortized constant time but in linear time O(n). So that's a second performance hog. You probably want to use a LinkedList instead of your ArrayList (if you insist on using a List).
Now, of course, I doubt very much that you need to iterate over your complete list each time you need to find one element: wouldn't another data structure be more appropriate? Maybe the elements that you put in your list have a concept of "equality", and hence a Map with O(1) lookup time could be used instead?
That's just for a start: as I showed, there are at least two serious performance issues in what you described. Maybe you should clarify your question if you want more help.
If your notion of "suitable element (matching certain properties)" can be encoded using a Comparator then a PriorityBlockingQueue would allow each thread to poll the queue, taking the next element without having to search the list or enqueuing a new element if the queue is empty.
Addendum: Thilo raises an essential point: as your approach evolves, you may want to determine empirically how many threads are optimal.
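A rough sketch of that idea, assuming "suitability" can be expressed as a Comparator (the Task type and all names are illustrative):

import java.util.concurrent.PriorityBlockingQueue;

class WorkPool {
    static class Task {
        final int suitability;
        Task(int suitability) { this.suitability = suitability; }
    }

    // Most "suitable" element first, as defined by the comparator.
    private final PriorityBlockingQueue<Task> queue =
            new PriorityBlockingQueue<>(11, (a, b) -> Integer.compare(b.suitability, a.suitability));

    // Each of the 100 threads calls this instead of scanning a shared ArrayList.
    void workerLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            Task task = queue.poll();           // non-blocking: returns null if nothing is available
            if (task == null) {
                queue.put(new Task(0));         // queue empty: add a fresh element, as in step 1
            } else {
                // process the task here, without holding any list-wide lock
            }
        }
    }
}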
The key is to use the object lock on the arrayList only when you actually need to.
A good idea would be to subclass ArrayList and provide synchronization on the individual read, write, and delete operations.
This ensures fine-grained locking while still allowing the threads to move through the list, and it protects the semantics of the ArrayList.
Have a single thread own the array and be responsible for adding to it and iterating over it to find work to do. Once a unit of work is found, put the work on a BlockingQueue. Have all your worker threads use take() to remove work from the queue.
This allows multiple units of work to be discovered per pass through the array and they can be handed off to waiting worker threads fairly efficiently.
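A sketch of that single-owner pattern (the scanning and matching logic are placeholders):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class OwnerThreadPattern {
    private final List<String> backlog = new ArrayList<>();     // touched only by the owner thread
    private final BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();

    // Owner thread: the only thread that reads or mutates the list.
    void ownerLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            for (Iterator<String> it = backlog.iterator(); it.hasNext(); ) {
                String candidate = it.next();
                if (isSuitable(candidate)) {    // placeholder for the real matching logic
                    it.remove();
                    workQueue.put(candidate);   // hand the unit of work to a waiting worker
                }
            }
            Thread.sleep(10);                   // or block on a "new items arrived" signal
        }
    }

    // Worker threads: block with take() until work is handed over.
    void workerLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            String work = workQueue.take();
            // process 'work' here
        }
    }

    private boolean isSuitable(String candidate) {
        return true;                            // placeholder
    }
}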
