Reorder queue in Java's ThreadPoolExecutor [duplicate] - java

This question already has answers here:
Possible Duplicate: Java Executors: how can I set task priority?
Closed 10 years ago.
I have a ThreadPoolExecutor built using a LinkedBlockingDeque, and I want to manipulate the underlying queue; however, reading this in the documentation makes me very nervous.
Queue maintenance
Method getQueue() allows access to the work queue for purposes of monitoring and debugging. Use of this method for any other purpose is strongly discouraged. Two supplied methods, remove(java.lang.Runnable) and purge() are available to assist in storage reclamation when large numbers of queued tasks become cancelled.
Specifically I want to be able to:
Check the queue to see if an element already exists. I assume this is fine, as no locking should be necessary just to view the elements in the queue.
Reorder the queue based on some signal. This can obviously be troublesome. I was wondering if there is a preferred way to do this so that I won't mess up the queue for other uses.
Thanks

getQueue() will always return the exact BlockingQueue<Runnable> that you pass into the ThreadPoolExecutor.
The documentation's warning is there because you can easily run into double-running issues if you don't respect the thread safety of the BlockingQueue. If you use a PriorityBlockingQueue, and only use remove and add (or, more directly, offer), then you will be safe, and you can even do it directly on the queue returned by getQueue().
In other words, whenever your signal tells you that some Runnable's priority has changed, remove it, check the result of remove() (true if it was actually removed), and only then re-add it. You are not guaranteed that a worker won't pick the task up in between those operations, but you are at least guaranteed that you will not double-run the Runnable, which could easily happen with a contains -> remove -> add sequence.
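To make that concrete, here is a minimal sketch of the remove-then-re-add pattern; the PrioritizedTask class and its mutable priority field are assumptions made for this example, not part of the original question.

import java.util.concurrent.*;

class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
    volatile int priority;
    final Runnable work;

    PrioritizedTask(int priority, Runnable work) {
        this.priority = priority;
        this.work = work;
    }

    @Override public void run() { work.run(); }

    // Higher priority values are taken from the queue first.
    @Override public int compareTo(PrioritizedTask other) {
        return Integer.compare(other.priority, this.priority);
    }
}

public class ReprioritizeDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

        // Keep the single worker busy so later tasks actually sit in the queue.
        pool.execute(() -> { try { Thread.sleep(1000); } catch (InterruptedException ignored) { } });

        PrioritizedTask task = new PrioritizedTask(1, () -> System.out.println("report"));
        pool.execute(task);

        // Signal arrives: bump the priority. Re-add only if remove() returned true;
        // otherwise a worker has already taken the task and re-adding would run it twice.
        if (pool.getQueue().remove(task)) {
            task.priority = 10;
            pool.getQueue().add(task);
        }

        pool.shutdown();
    }
}

Note that even after the re-add a worker may take the task at any moment; the pattern only guarantees the task is never queued, and therefore never run, twice.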
Either that, or you can write your own implementation of a BlockingQueue that uses a Comparator (like the PriorityBlockingQueue) that finds the highest priority whenever asked for new data. This sounds like a lot more work given the various interfaces involved.

Related

Concurrently iterating over a BlockingQueue

In an application which uses a BlockingQueue, I am facing a new requirement that can only be implemented by iterating over the elements present in the queue (to provide info about the current status of the elements in there).
According to the API Javadoc, only the queueing methods of a BlockingQueue implementation are required to be thread-safe. Other API methods (e.g. those inherited from the Collection interface) may not be safe to use concurrently, though I am not sure whether this also applies to mere read access...
Can I safely use iterator() WITHOUT altering the producer/consumer threads which may normally interact with the queue at any time? I have no need for a 100% consistent iteration (it does not matter whether I see elements added/removed while iterating the queue), but I don't want to end up with nasty ConcurrentModificationExceptions.
Note that the application is currently using a LinkedBlockingQueue, but I am free to choose any other (unbounded) BlockingQueue implementation (including free open-source third-party implementations). Also, I don't want to rely on things that may break in the future, so I want a solution that is OK according to the API and does not merely happen to work with the current JRE.
Actually, the Java 8 javadoc for BlockingQueue states this:
BlockingQueue implementations are thread-safe.
Nothing in the javadoc says¹ that this only applies to the methods specified in the BlockingQueue API itself.
Can I safely use iterator() WITHOUT altering the producer/consumer threads which may normally interact with the queue at any time?
Basically, yes. The Iterator's behavior in the face of concurrent modifications is specified in the implementation class javadocs. For LinkedBlockingQueue, the javadoc specifies that the Iterator returned by iterator() is weakly consistent. That means (for example) that your application won't get a ConcurrentModificationException if the queue is modified while it is iterating, but the iteration is not guaranteed to see all queue entries.
¹ The javadoc mentions that the bulk operations may be non-atomic, but non-atomic does not mean non-thread-safe. What it means here is that some other thread may observe the queue in a state where some entries have been added (or removed, or whatever) and others haven't.
@John Vint warns:
Keep in mind, this is as of Java 8 and can change.
If Oracle decided to alter the behavior specified in the javadoc, that would be an impediment to migration. Past history shows that Sun / Oracle avoid doing that kind of thing.
Yes, you can iterate over the entire queue, but looking at the LinkedBlockingQueue and ArrayBlockingQueue implementations there is a side effect: when constructing and operating the Iterator, there are three places where full locks are acquired:
During construction
When invoking next()
When invoking remove()
Keep in mind, this is as of Java 8 and can change.
So, yes, you do get to iterate safely, but you will affect the performance of puts and offers.
Now for your question: does BlockingQueue offer safe iteration? The answer is that it depends on the implementation. A future BlockingQueue implementation could throw an UnsupportedOperationException.
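As a small illustration of that weak consistency (the element type and contents here are made up), iterating a LinkedBlockingQueue while a producer keeps offering elements never throws a ConcurrentModificationException, although the pass may miss entries added after the iterator was created:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WeaklyConsistentIterationDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 5; i++) {
            queue.offer("task-" + i);
        }

        Thread producer = new Thread(() -> {
            for (int i = 5; i < 10; i++) {
                queue.offer("task-" + i);
            }
        });
        producer.start();

        // Status/monitoring pass: safe to run alongside producers and consumers,
        // but it briefly takes the queue's internal locks, slowing puts and takes.
        for (String element : queue) {
            System.out.println("seen: " + element);
        }

        producer.join();
    }
}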

Multiple message listeners to single data store. Efficient design

I have a data store that is written to by multiple message listeners. Each of these message listeners can also be in the hundreds of individual threads.
The data store is a PriorityBlockingQueue, as it needs to order the inserted objects by a timestamp. To make checking for items in the queue efficient, rather than looping over the queue, a ConcurrentHashMap is used as a form of index.
private Map<String, SLAData> SLADataIndex = new ConcurrentHashMap<String, SLAData>();
private BlockingQueue<SLAData> SLADataQueue;
Question 1: Is this an acceptable design, or should I just use the single PriorityBlockingQueue?
Each message listener performs an operation; these listeners are scaled up to multiple threads.
Insert method, so it inserts into both:
this.SLADataIndex.put(dataToWrite.getMessageId(), dataToWrite);
this.SLADataQueue.add(dataToWrite);
Update method:
this.SLADataIndex.get(messageId).setNodeId(updatedNodeId);
Delete method:
SLAData data = this.SLADataIndex.get(messageId);
// remove is O(log n)
this.SLADataQueue.remove(data);
// remove from index
this.SLADataIndex.remove(messageId);
Question 2: Using these methods, is this the most efficient way? They have wrappers around them via another object for error handling.
Question 3: Using a ConcurrentHashMap and a BlockingQueue, does this mean these operations are thread-safe? I don't need to use a lock object?
Question 4: When these methods are called by multiple threads and listeners without any sort of synchronized block, can they be called at the same time by different threads or listeners?
Question 1: Is this an acceptable design, or should I just use the single PriorityBlockingQueue?
Certainly you should try to use a single Queue. Keeping the two collections in sync is going to require a lot more synchronization complexity and worry in your code.
Why do you need the Map? If it is just to call setNodeId(...) then I would have the processing thread do that itself when it pulls from the Queue.
// processing thread
try {
    while (!Thread.currentThread().isInterrupted()) {
        SLAData dataToWrite = queue.take();   // blocks until data is available
        dataToWrite.setNodeId(myNodeId);
        // process data
        ...
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();       // asked to shut down
}
Question 2: Using these methods, is this the most efficient way? They have wrappers around them via another object for error handling.
Sure, that seems fine, but again, you will need to do some synchronization locking; otherwise you will suffer from race conditions keeping the two collections in sync.
Question 3: Using a ConcurrentHashMap and a BlockingQueue, does this mean these operations are thread-safe? I don't need to use a lock object?
Both of those classes (ConcurrentHashMap and the BlockingQueue implementations) are thread-safe, yes. BUT since there are two of them, you can have race conditions where one collection has been updated but the other one has not. Most likely, you will have to use a lock object to ensure that both collections are properly kept in sync.
Question 4: When these methods are called by multiple threads and listeners without any sort of synchronized block, can they be called at the same time by different threads or listeners?
That's a tough question to answer without seeing the code in question. For example, one thread might be calling Insert(...) and have added the item to the Map but not to the queue yet, when another thread calls Delete(...); the item would be found in the Map and removed, but queue.remove() would not find it in the queue, since the Insert(...) has not finished in the other thread.
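For illustration, here is a rough sketch of that locking; the SLAData fields and the store's method names are assumptions reconstructed from the snippets above. Both collections are individually thread-safe, but the compound insert/delete operations take a shared lock so they can never interleave in the way just described.

import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.PriorityBlockingQueue;

class SLAData implements Comparable<SLAData> {
    private final String messageId;
    private final long timestamp;
    private volatile String nodeId;

    SLAData(String messageId, long timestamp) {
        this.messageId = messageId;
        this.timestamp = timestamp;
    }

    String getMessageId() { return messageId; }
    void setNodeId(String nodeId) { this.nodeId = nodeId; }

    // Queue ordering: oldest timestamp first.
    @Override public int compareTo(SLAData other) {
        return Long.compare(this.timestamp, other.timestamp);
    }
}

class SLADataStore {
    private final Object lock = new Object();
    private final Map<String, SLAData> index = new ConcurrentHashMap<>();
    private final BlockingQueue<SLAData> queue = new PriorityBlockingQueue<>();

    void insert(SLAData data) {
        synchronized (lock) {            // serialize compound updates on both collections
            index.put(data.getMessageId(), data);
            queue.add(data);
        }
    }

    void update(String messageId, String nodeId) {
        SLAData data = index.get(messageId);   // single lookup, no extra lock needed
        if (data != null) {
            data.setNodeId(nodeId);
        }
    }

    void delete(String messageId) {
        synchronized (lock) {            // cannot run while an insert is half done
            SLAData data = index.remove(messageId);
            if (data != null) {
                queue.remove(data);
            }
        }
    }
}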

Multithreaded Observer in Java - preserve proper order

I am implementing something that I would call "Observable Set". It is just a normal set, but it can have some observers that are notified about adding new elements.
What is important for me is that elements may be added from many threads at a time, and there are also many observing threads. I hold the observers in a CopyOnWriteArrayList (it is thread-safe). The key point is to inform the observers about added elements in such a way that the informing order for each observer is the same as the order in which the elements were added.
What is the best approach?
The most naive one is to put the adding and the informing in a synchronized block, but I believe that can be slow.
The second one I tried was to just add the element to the set and also add it to an "informing queue". With each added element, it was checked whether informing was turned on; if not, it was started and ran until the queue was empty. It worked quite well, but I was afraid it wasn't a nice approach.
The last one I implemented I would call "informing threads". When an observer is added, it gets its own "informing thread". That thread runs in the background and checks whether it has reached the end of a global "informing queue"; if it hasn't, it informs its observer about the new elements. However, I have problems with the synchronization and the while(true) loop: I don't know how to set a condition to end the thread. Another problem I noticed while writing it is that every new thread gets informed from the beginning... That's not good.
I hope I have described everything quite well. If not, please let me know and I will try to fix it.
What is best way to accomplish this task?
Thanks!
Your second solution could be improved to use a BlockingQueue: with it you don't need to check whether "informing is turned on", you just call take(), and it will wait for something to appear in the queue.
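A rough sketch of that improvement (every name here is made up for the example): adders drop elements into a BlockingQueue, and one dispatcher thread take()s them and notifies every observer, so each observer sees the notifications in the same queue order.

import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

class ObservableSet<E> {
    interface Observer<T> { void onAdded(T element); }

    private final Set<E> elements = ConcurrentHashMap.newKeySet();
    private final CopyOnWriteArrayList<Observer<E>> observers = new CopyOnWriteArrayList<>();
    private final BlockingQueue<E> pending = new LinkedBlockingQueue<>();
    private final Thread dispatcher;

    ObservableSet() {
        dispatcher = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    E element = pending.take();        // waits until something is added
                    for (Observer<E> observer : observers) {
                        observer.onAdded(element);
                    }
                }
            } catch (InterruptedException e) {
                // interrupt() is the shutdown signal; just fall out of the loop
            }
        }, "observable-set-dispatcher");
        dispatcher.start();
    }

    void addObserver(Observer<E> observer) {
        observers.add(observer);
    }

    void add(E element) {
        if (elements.add(element)) {
            pending.add(element);                      // queue order defines notification order
        }
    }

    void shutdown() {
        dispatcher.interrupt();
    }
}

Because a single thread does all the notifying, observers never see notifications out of order relative to each other, and the while(true)/termination problem from the question reduces to interrupting that one thread.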
You could also look into the RxJava project. It is somewhat complex, but it has lots of features you might need.
It extends the observer pattern to support sequences of data/events and adds operators that allow you to compose sequences together declaratively while abstracting away concerns about things like low-level threading, synchronization, thread-safety and concurrent data structures.

java single writer and multiple reader

Sorry if this was asked before, but I could not find my exact scenario.
Currently I have a background thread that adds an element to a list and removes the old data every few minutes. Theoretically there can be at most 2 items in the list at a time, and the items are immutable. I also have multiple threads that grab the first element in the list whenever they need it. In this scenario, is it necessary to explicitly serialize operations on the list? My assumption is that since I am just grabbing references to the elements, it should not matter if the background thread deletes elements from the list, since the reading thread has already grabbed a copy of the reference before the deletion. There is probably a better way to do this. Thanks in advance.
Yes, synchronization is still needed here, because adding and removing are not atomic operations. If one thread calls add(0, new Object()) at the same time another calls remove(0), the result is undefined; for example, the remove() might end up having no effect.
Depending on your usage, you might be able to use a non-blocking list class like ConcurrentLinkedQueue. However, given that you are pushing one change every few minutes, I doubt you are gaining much in performance by avoiding synchronization.
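For example, the non-blocking route might look like the sketch below (the class name, the String items, and the keep-two policy are illustrative assumptions restating the question): the writer adds and trims, and readers only peek at the head, so the reference they grab stays usable even if the writer removes that item afterwards.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SingleWriterMultiReader {
    private final Queue<String> items = new ConcurrentLinkedQueue<>();

    // Writer thread: called every few minutes; adds the new item and drops old ones.
    public void refresh(String newItem) {
        items.add(newItem);
        while (items.size() > 2) {       // keep at most the two newest items
            items.poll();
        }
    }

    // Reader threads: grab a reference to the head item without removing it.
    // No explicit locking is needed, and the returned reference remains valid
    // even after the writer eventually removes that item from the queue.
    public String current() {
        return items.peek();
    }
}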

Java concurrency - use which technique to achieve safety?

I have a list of personId. There are two API calls to update it (add and remove):
public void add(String newPersonName) {
    if (personNameIdMap.get(newPersonName) != null) {
        myPersonId.add(personNameIdMap.get(newPersonName));
    } else {
        // get the id from Twitter and add to the list
    }
    // make an API call to Twitter
}

public void delete(String personName) {
    if (personNameIdMap.get(personName) != null) {
        myPersonId.remove(personNameIdMap.get(personName));
    } else {
        // wrong person name
    }
    // make an API call to Twitter
}
I know there can be concurrency problems. I read about 3 solutions:
synchronize the method
use Collections.synchronizedList()
CopyOnWriteArrayList
I am not sure which one to prefer to prevent inconsistency.
1) synchronize the method
2) use Collections.synchronizedList
3) CopyOnWriteArrayList
All will work, it's a matter of what kind of performance / features you need.
Methods #1 and #2 are blocking approaches. If you synchronize the methods, you handle the concurrency yourself. If you wrap the list in Collections.synchronizedList, it handles it for you. (IMHO #2 is safer -- just be sure to use it as the docs say, and don't let anything access the raw list that is wrapped inside the synchronizedList.)
CopyOnWriteArrayList is one of those weird things that has use in certain applications. It's a non-blocking quasi-immutable list, namely, if Thread A iterates through the list while Thread B is changing it, Thread A will iterate through a snapshot of the old list. If you need non-blocking performance, and you are rarely writing to the list, but frequently reading from it, then perhaps this is the best one to use.
edit: There are at least two other options:
4) use Vector instead of ArrayList; Vector implements List and is already synchronized. However, it's generally frowned upon, as it's considered an old-school class (it has been there since Java 1.0!), and it should be equivalent to #2.
5) access the List serially from only one thread. If you do this, you're guaranteed not to have any concurrency problems with the List itself. One way to do this is to use Executors.newSingleThreadExecutor and queue up tasks one-by-one to access the list. This moves the resource contention from your list to the ExecutorService; if the tasks are short, it may be fine, but if some are lengthy they may cause others to block longer than desired.
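A quick sketch of option #5 (the names are illustrative and the error handling is minimal): every read and write of the plain ArrayList is funneled through a single-thread executor, so the list itself is only ever touched by one thread and needs no synchronization of its own.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SerializedListAccess {
    private final List<Long> myPersonId = new ArrayList<>();
    private final ExecutorService listExecutor = Executors.newSingleThreadExecutor();

    public void add(long id) {
        listExecutor.execute(() -> myPersonId.add(id));      // queued behind other list tasks
    }

    public void delete(long id) {
        listExecutor.execute(() -> myPersonId.remove(Long.valueOf(id)));
    }

    // Reads go through the executor too; Future.get() blocks until the task has run.
    public boolean contains(long id) throws Exception {
        Future<Boolean> result = listExecutor.submit(() -> myPersonId.contains(id));
        return result.get();
    }

    public void shutdown() {
        listExecutor.shutdown();
    }
}

As noted above, this trades list contention for queueing in the ExecutorService: fine for short tasks, but long tasks will make callers of contains() wait.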
In the end you need to think about concurrency at the application level: thread-safety should be a requirement, and find out how to get the performance you need with the simplest design possible.
On a side note, you're calling personNameIdMap.get(newPersonName) twice in add() and delete(). This suffers from concurrency problems if another thread modifies personNameIdMap between the two calls in each method. You're better off doing
PersonId id = personNameIdMap.get(newPersonName);
if (id != null) {
    myPersonId.add(id);
} else {
    // something else
}
Collections.synchronizedList is the easiest to use and probably the best option. It simply wraps every call to the underlying list in a synchronized block. Note that multi-step operations (e.g. a for loop over the list) still need to be synchronized by you.
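For instance (a minimal sketch with made-up list contents), single calls on the wrapper need no extra locking, but a loop over it must hold the wrapper's own monitor, as the Collections.synchronizedList javadoc requires:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListIteration {
    public static void main(String[] args) {
        List<Long> myPersonId = Collections.synchronizedList(new ArrayList<>());
        myPersonId.add(42L);                 // individual calls are already synchronized
        myPersonId.add(7L);

        synchronized (myPersonId) {          // required for the multi-step iteration
            for (Long id : myPersonId) {
                System.out.println(id);
            }
        }
    }
}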
Some quick things
Don't synchronize the method unless you really need to: it holds the object's lock for the entire method, which is hardly a desirable effect.
CopyOnWriteArrayList is a very specialized list that you most likely don't want, since you have an add method. It's essentially a normal ArrayList, but each time something is added the whole backing array is copied, which is a very expensive operation. It's thread-safe, but not really the desired result.
Synchronized is the old way of working with threads. Avoid it in favor of new idioms mostly expressed in the java.util.concurrent package.
See 1.
A CopyOnWriteArrayList has fast read and slow writes. If you're making a lot of changes to it, it might start to drag on your performance.
Concurrency isn't about an isolated choice of what mechanism or type to use in a single method. You'll need to think about it from a higher level to understand all of its impacts.
Are you making changes to personNameIdMap within those methods, or to any other data structures whose access should also be synchronized? If so, it may be easiest to mark the methods as synchronized; otherwise, you might consider using Collections.synchronizedList to get a synchronized view of myPersonId and then doing all list operations through that synchronized view. Note that you should not manipulate myPersonId directly in this case, but do all accesses solely through the list returned from the Collections.synchronizedList call.
Either way, you have to make sure that there can never be a situation where a read and a write or two writes could occur simultaneously to the same unsynchronized data structure. Data structures documented as thread-safe or returned from Collections.synchronizedList, Collections.synchronizedMap, etc. are exceptions to this rule, so calls to those can be put anywhere. Non-synchronized data structures can still be used safely inside methods declared to be synchronized, however, because such methods are guaranteed by the JVM to never run at the same time, and therefore there could be no concurrent reading / writing.
In your case from the code that you posted, all 3 ways are acceptable. However, there are some specific characteristics:
#3: This should have the same effect as #2 but may run faster or slower depending on the system and workload.
#1: This way is the most flexible. Only with #1 can you make the add() and delete() methods more complex. For example, if you need to read or write multiple items in the list, then you cannot use #2 or #3, because some other thread could still see the list half updated.
Java concurrency (multi-threading):
Concurrency is the ability to run several programs or several parts of a program in parallel. If a time-consuming task can be performed asynchronously or in parallel, this improves the throughput and the interactivity of the program.
We can do concurrent programming in Java. Java's concurrency support covers parallel programming, immutability, threads, the executor framework (thread pools), futures, callables, and the fork/join framework.
