I am using an ArrayBlockingQueue, but sometimes it fills up and prevents further objects from being added to it.
What I would like to do is remove the oldest object in the queue before adding another one when the ArrayBlockingQueue is full. In other words, I need the ArrayBlockingQueue to behave like Guava's EvictingQueue, but thread safe. I intend to extend ArrayBlockingQueue and override the offer(E e) method like below:
public class MyArrayBlockingQueue<E> extends ArrayBlockingQueue<E> {

    // Capacity of the queue
    private int size;

    // Constructor
    public MyArrayBlockingQueue(int queueSize) {
        super(queueSize);
        this.size = queueSize;
    }

    @Override
    synchronized public boolean offer(E e) {
        // Is the queue full?
        if (super.size() == this.size) {
            // If the queue is full, remove an element
            this.remove();
        }
        return super.offer(e);
    }
}
Is the above approach OK? Or is there a better way of doing it?
Thanks
Your MyArrayBlockingQueue doesn't override BlockingQueue.offer(E, long, TimeUnit) or BlockingQueue.poll(long, TimeUnit). Do you actually need a queue with "blocking" features? If you do not, you can create a thread-safe queue backed by an EvictingQueue using Queues.synchronizedQueue(Queue):
Queues.synchronizedQueue(EvictingQueue.create(maxSize));
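For illustration, a minimal sketch of that approach (the RecentEvents class, the record method and the capacity of 100 are invented for the example):

import com.google.common.collect.EvictingQueue;
import com.google.common.collect.Queues;
import java.util.Queue;

class RecentEvents {

    // Thread-safe, size-bounded queue that evicts its oldest element when full.
    private final Queue<String> recent = Queues.synchronizedQueue(EvictingQueue.<String>create(100));

    void record(String event) {
        recent.add(event); // once 100 elements are present, add() evicts the head first
    }
}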
For an evicting blocking queue, I see a few issues with your proposed implementation:
remove() may throw an exception if the queue is empty. Your offer method is marked synchronized, but poll, remove, etc. are not, so another thread could drain your queue between your calls to size() and remove(). I suggest using poll() instead, which won't throw an exception.
Your call to offer may still return false (i.e. not "add" the element) because of another race condition: between checking the size and/or removing an element, a different thread could add an element and fill the queue again. I recommend looping on the result of offer until it returns true (see below).
Calling size(), remove() and offer(E) each requires the lock, so in the worst case your code locks and unlocks three times (and even then it might fail to behave as desired due to the issues described above).
I believe the following implementation will get you what you are after:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class EvictingBlockingQueue<E> extends ArrayBlockingQueue<E> {

    public EvictingBlockingQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        // Keep evicting the head until the new element fits
        while (!super.offer(e)) poll();
        return true;
    }

    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        while (!super.offer(e, timeout, unit)) poll();
        return true;
    }
}
Note that this implementation can unnecessarily remove an element if between two calls to super.offer(E) another thread removes an element. This seems acceptable to me and I don't really see a practical way around it (ArrayBlockingQueue.lock is package-private and java.util.concurrent is a prohibited package so we can't place an implementation there to access and use the lock, etc.).
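A quick usage illustration of the class above (assuming no other threads touch the queue while this runs):

EvictingBlockingQueue<Integer> queue = new EvictingBlockingQueue<>(2);
queue.offer(1);
queue.offer(2);
queue.offer(3);            // the queue is full, so 1 is evicted and 3 is accepted
System.out.println(queue); // prints [2, 3]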
When you say "it gets to full and prevents other objects to be added", does that mean it would be sufficient to ensure that objects can be added anytime? If that's true, you could simply switch to an unbounded queue such as LinkedBlockingQueue. But be aware of the differences compared with ArrayBlockingQueue:
Linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications.
You can find an overview of the JDK queue implementations in the java.util.concurrent package summary.
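A minimal sketch of that swap (EventBuffer, publish and awaitNext are names invented for the example):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class EventBuffer {

    // Unbounded (capacity Integer.MAX_VALUE by default), so offer() never rejects
    // an element and put() never blocks the producer.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void publish(String event) {
        queue.offer(event); // always succeeds with the default (unbounded) constructor
    }

    String awaitNext() throws InterruptedException {
        return queue.take(); // consumers still block until an element is available
    }
}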
Related
I have a simple, managed group of stacks that need to be accessed in a thread-safe manner. My first implementation works correctly but uses synchronized methods for all access, i.e. locking is at the coarsest level. I'd like to make the locking as granular as possible, but I'm unsure of the best way to go about it.
Here's the basics of my Stack manager class (with some details elided for brevity):
public class StackManager {

    private final Map<String, Deque<String>> myStacks;

    public StackManager() {
        myStacks = new ConcurrentHashMap<String, Deque<String>>();
    }

    public synchronized void addStack(String name) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }

    public synchronized void removeStack(String name) {
        if (!myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.remove(name);
    }

    public synchronized void push(String stack, String payload) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        myStacks.get(stack).push(payload);
    }

    public synchronized String pop(String stack) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        return myStacks.get(stack).pop();
    }
}
The stack-level methods (addStack(), removeStack()) are not used that often. However, I'd like to know if their level of locking can be reduced. If these methods were unsynchronized and instead locked on myStacks, would this reduce contention? For example:
public void addStack(String name) {
    synchronized (myStacks) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }
}
The per-stack methods (push(), pop()) are where I feel the most gains can be made. I'd like to achieve per-stack locking if I could. That is, only lock the single stack within the stack manager that is being operated on. However I cannot see a good way to do this. Any suggestions?
While we're here, is it necessary to use the concurrent implementations of both Map and Deque?
Both data structures are thread safe, so every isolated operation on them is thread safe.
The problem is performing more than one operation when there is a dependency between them.
In your case, checking for existence must be atomic with the actual operation to avoid race conditions.
To add a new stack, you can use putIfAbsent, which is atomic and requires no external synchronization.
To remove a stack, you don't need to check for existence first. If you want to know whether it existed, just check the return value of remove: if it's null, the stack didn't exist.
To push or pop, first get the stack and assign it to a local variable. If the local is null, the stack didn't exist; if it's non-null, you can safely push or pop on it.
The myStacks attribute must be either final or volatile to be published safely to other threads.
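Putting those pieces together, a minimal sketch (same class and method names as the question; treat it as an illustration rather than a tested drop-in):

import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentMap;

public class StackManager {

    private final ConcurrentMap<String, Deque<String>> myStacks = new ConcurrentHashMap<String, Deque<String>>();

    public void addStack(String name) {
        // putIfAbsent is atomic: it returns null only if no stack existed under this name
        if (myStacks.putIfAbsent(name, new ConcurrentLinkedDeque<String>()) != null) {
            throw new IllegalArgumentException("Stack already exists: " + name);
        }
    }

    public void removeStack(String name) {
        // remove returns the previous value, or null if there was none
        if (myStacks.remove(name) == null) {
            throw new IllegalArgumentException("No such stack: " + name);
        }
    }

    public void push(String stack, String payload) {
        Deque<String> deque = myStacks.get(stack); // read once into a local
        if (deque == null) {
            throw new IllegalArgumentException("No such stack: " + stack);
        }
        deque.push(payload);
    }

    public String pop(String stack) {
        Deque<String> deque = myStacks.get(stack);
        if (deque == null) {
            throw new IllegalArgumentException("No such stack: " + stack);
        }
        return deque.pop();
    }
}

One caveat: if another thread removes a stack between the get and the push, the pushed element lands on a deque that is no longer in the map and is silently lost; if that matters, you are back to needing some per-stack coordination.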
Now you don't need any synchronization. I would also consider a design without exceptions; only addStack really seems to warrant one. If the situation can occur in a correct program, it should be a checked exception; a runtime exception is more suitable when it indicates a bug.
Oh, and triple-check and test it, as concurrent programming is tricky.
Given the following variation of a queue:
interface AsyncQueue<T> {

    // Add a new element to the queue
    void add(T elem);

    // Request a single element from the queue via a callback.
    // The callback is called once, for a single polled element, when it is available;
    // so, to request multiple elements, poll() must be called multiple times with
    // (possibly) different callbacks.
    void poll(Consumer<T> callback);
}
I found out I do not know how to implement it using java.util.concurrent primitives! So my questions are:
What is the right way to implement it using the java.util.concurrent package?
Is it possible to do this without using an additional thread pool?
Your AsyncQueue is very similar to a BlockingQueue such as ArrayBlockingQueue. The Future returned would simply delegate to the ArrayBlockingQueue methods. Future.get would call blockingQueue.poll for instance.
As for your update, I'm assuming the thread that calls add should invoke the callback if there's one waiting? If so, it's a simple matter of keeping two queues: one for elements and one for callbacks.
Upon add, check whether a callback is waiting; if so, call it, otherwise put the element on the element queue.
Upon poll, check whether an element is waiting; if so, call the callback with that element, otherwise put the callback on the callback queue.
Code outline:
import java.util.LinkedList;
import java.util.Queue;
import java.util.function.Consumer;

class AsyncQueue<E> {

    private final Queue<Consumer<E>> callbackQueue = new LinkedList<>();
    private final Queue<E> elementQueue = new LinkedList<>();

    public synchronized void add(E e) {
        if (callbackQueue.size() > 0)
            callbackQueue.remove().accept(e);   // a callback is already waiting: serve it now
        else
            elementQueue.offer(e);              // nobody waiting: park the element
    }

    public synchronized void poll(Consumer<E> c) {
        if (elementQueue.size() > 0)
            c.accept(elementQueue.remove());    // an element is already waiting: deliver it
        else
            callbackQueue.offer(c);             // nothing available: park the callback
    }
}
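A quick usage sketch of the outline above (single-threaded here, just to show both paths):

AsyncQueue<String> q = new AsyncQueue<>();
q.poll(s -> System.out.println("got " + s)); // nothing available yet: the callback is parked
q.add("hello");                              // finds the parked callback and prints "got hello"
q.add("world");                              // no callback waiting: the element is parked
q.poll(s -> System.out.println("got " + s)); // element available: prints "got world" immediately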
I've been working on a project where I need a synchronized queue, because my program is multi-threaded and several threads may access this queue.
I used an ArrayList to do that, but I seem to have some issues with it and my threads got deadlocked. I don't know if the queue is the reason, but I just wanted to check:
public class URLQueue {

    private ArrayList<URL> urls;

    public URLQueue() {
        urls = new ArrayList<URL>();
    }

    public synchronized URL remove() throws InterruptedException {
        while (urls.isEmpty())
            wait();
        URL r = urls.remove(0);
        notifyAll();
        return r;
    }

    public synchronized void add(URL newURL) throws InterruptedException {
        urls.add(newURL);
        notifyAll();
    }

    public int getSize() {
        return urls.size();
    }
}
EDITS:
Even when using a LinkedBlockingQueue I get stuck the same way as before. I think this happens because one thread is waiting for the queue to be filled, but it never is, since the other parts of the program have finished running... any ideas?
It is better to use a LinkedBlockingQueue here, as it is designed for exactly this purpose: it waits until an element is available when you try to remove one.
LinkedBlockingQueue
It provides a take() method which
Retrieves and removes the head of this queue, waiting if necessary until an element becomes available
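For example, a URLQueue wrapper backed by a LinkedBlockingQueue might look something like this (a sketch, keeping the method names from the question):

import java.net.URL;
import java.util.concurrent.LinkedBlockingQueue;

public class URLQueue {

    private final LinkedBlockingQueue<URL> urls = new LinkedBlockingQueue<>();

    public URL remove() throws InterruptedException {
        return urls.take(); // blocks until a URL is available
    }

    public void add(URL newURL) {
        urls.add(newURL);   // the queue is unbounded, so this never blocks
    }

    public int getSize() {
        return urls.size();
    }
}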
In your code, notifyAll() doesn't throw InterruptedException, so you should remove the throws clause from add().
The remove() method doesn't need to call notifyAll(), as its action shouldn't need to wake other threads.
The getSize() method should be synchronized.
Otherwise, there is no chance for your code to deadlock, as you need at least two locks to create a deadlock.
I have a PriorityBlockingQueue as follows:
BlockingQueue<Robble> robbleListQueue = new PriorityBlockingQueue<Robble>();
Robble implements Comparable<Robble> and I am able to sort lists without issue, so I know my comparisons work.
I also have the following Runnable:
private class RobbleGeneratorRunnable implements Runnable {

    private final BlockingQueue<Robble> robbleQueue;

    public RobbleGeneratorRunnable(BlockingQueue<Robble> robbleQueue) {
        this.robbleQueue = robbleQueue;
    }

    @Override
    public void run() {
        try {
            robbleQueue.put(generateRobble());
        } catch (InterruptedException e) {
            // ...
        }
    }

    private Robble generateRobble() {
        // ...
    }
}
I submit a few thousand of these runnables to an ExecutorService and then call shutdown() and awaitTermination().
According to the BlockingQueue Javadoc, put(...) is a blocking action. However, when I iterate over the items in the queue they are only mostly in order; some are out of order, which suggests to me that the queue is not blocking properly. As I said before, I can sort the Robbles just fine.
What could be causing robbleQueue.put(generateRobble()) to not block properly?
According to the javadoc,
The Iterator provided in method iterator() is not guaranteed to
traverse the elements of the priority queue in any particular order.
If you need ordered traversal, consider using
Arrays.sort(pq.toArray())
Add, peek, poll and remove are required to operate in priority sequence, but NOT the iterator.
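For example, after awaitTermination() you could get an ordered view in either of these ways (a sketch reusing robbleListQueue from the question):

// 1. Snapshot and sort: leaves the queue itself untouched
Robble[] snapshot = robbleListQueue.toArray(new Robble[0]);
Arrays.sort(snapshot);

// 2. Drain with poll(), which always returns the current head in priority order
List<Robble> ordered = new ArrayList<>();
Robble next;
while ((next = robbleListQueue.poll()) != null) {
    ordered.add(next);
}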
PriorityBlockingQueue is an unbounded queue, and if you read the javadocs for put() it states:
Inserts the specified element into this priority queue. As the queue
is unbounded this method will never block.
Why would you expect put() to block?
Iterating a PriorityQueue or PriorityBlockingQueue is explicitly stated in the Javadoc not to be ordered. Only add(), peek(), poll(), and remove() are ordered. This has nothing to do with whether blocking is happening correctly.
public final class ClientGateway {

    private static ClientGateway instance;
    private static List<NetworkClientListener> listeners =
            Collections.synchronizedList(new ArrayList<NetworkClientListener>());
    private static final Object listenersMutex = new Object();

    protected EventHandler eventHandler;

    private ClientGateway() {
        eventHandler = new EventHandler();
    }

    public static synchronized ClientGateway getInstance() {
        if (instance == null)
            instance = new ClientGateway();
        return instance;
    }

    public void addNetworkListener(NetworkClientListener listener) {
        synchronized (listenersMutex) {
            listeners.add(listener);
        }
    }

    class EventHandler {
        public void onLogin(final boolean isAdviceGiver) {
            new Thread() {
                public void run() {
                    synchronized (listenersMutex) {
                        for (NetworkClientListener nl : listeners)
                            nl.onLogin(isAdviceGiver);
                    }
                }
            }.start();
        }
    }
}
This code throws a ConcurrentModificationException.
But I thought that if both blocks synchronize on listenersMutex they should execute serially? All of the code that operates on the listeners list does so within blocks synchronized on that mutex. The only code that modifies the list is addNetworkListener(...) and removeNetworkListener(...), but removeNetworkListener is never called at the moment.
What appears to be happening with the error is that a NetworkClientListener is still being added while the onLogin function/thread is iterating the listeners.
Thank you for your insight!
EDIT: NetworkClientListener is an interface and leaves the implementation of "onLogin" up to the coder implementing the function, but their implementation of the function does not have access to the listeners List.
Also, I just completely rechecked, and there is no modification of the list outside of the addNetworkListener() and removeNetworkListener() functions; the other functions only iterate the list. Changing the code from:
for (NetworkClientListener nl : listeners)
nl.onLogin(isAdviceGiver);
To:
for (int i = 0; i < listeners.size(); i++)
    listeners.get(i).onLogin(isAdviceGiver);
Appears to solve the concurrency issue, but I already knew this and would like to know what's causing it in the first place.
Thanks again for your continuing help!
Exception:
Exception in thread "Thread-5" java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:782)
at java.util.ArrayList$Itr.next(ArrayList.java:754)
at chapchat.client.networkcommunication.ClientGateway$EventHandler$5.run(ClientGateway.java:283)
EDIT: Okay, I feel a little dumb, but thank you for all your help! Particularly MJB & jprete!
Answer: Someone's implementation of onLogin() added a new listener to the gateway. Since Java's locks are held per thread and are reentrant, a thread cannot block on a lock it already holds; so when that onLogin() implementation ran, we were iterating over the listeners and, in the middle of doing so, adding a new listener to the same list.
Solution: MJB's suggestion to use a CopyOnWriteArrayList instead of a synchronized list.
Mutexes only guard against access from multiple threads. If nl.onLogin() happens to contain logic that adds a listener to the listeners list, then a ConcurrentModificationException may be thrown, because the list is being accessed (by the iterator) and changed (by the add) at the same time, on the same thread.
EDIT: Some more information would probably help. As I recall, Java collections check for concurrent modifications by keeping a modification count for each collection. Every time you do an operation that changes the collection, the count gets incremented. In order to check the integrity of operations, the count is checked at the beginning and end of the operation; if the count changed, then the collection throws a ConcurrentModificationException at the point of access, not at the point of modification. For iterators, it checks the counter after every call to next(), so on the next iteration of the loop through listeners, you should see the exception.
I must admit that I don't see it either, if indeed removeNetworkListener is not called.
What is the logic in the nl.onLogin() bit? If it modifies the list, it could cause the exception.
A tip, by the way: if you expect listeners to be added relatively rarely, you could make the list a CopyOnWriteArrayList, in which case you don't need your mutexes at all. CopyOnWriteArrayList is fully thread safe and returns a snapshot iterator that never throws a ConcurrentModificationException, even if nl.onLogin() adds a listener mid-iteration.
Instead of an ArrayList, you can use the thread-safe class CopyOnWriteArrayList, which does not throw a ConcurrentModificationException even if it is modified while you iterate. If the list is modified (add, update) during iteration, a fresh copy of the underlying array is made, while the iterator keeps working on the original snapshot.
It is a bit slower than ArrayList. It is useful in cases where you do not want to synchronise the iteration yourself.
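For reference, the relevant part of ClientGateway rewritten around a CopyOnWriteArrayList might look roughly like this (a sketch assuming the NetworkClientListener interface from the question; the mutex and synchronized blocks are gone because the list handles thread safety itself):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public final class ClientGateway {

    // Copy-on-write list: iteration works on an immutable snapshot, so a listener
    // added from inside onLogin() can never cause a ConcurrentModificationException.
    private static final List<NetworkClientListener> listeners =
            new CopyOnWriteArrayList<NetworkClientListener>();

    public void addNetworkListener(NetworkClientListener listener) {
        listeners.add(listener); // no external locking needed
    }

    class EventHandler {
        public void onLogin(final boolean isAdviceGiver) {
            new Thread() {
                public void run() {
                    for (NetworkClientListener nl : listeners) {
                        nl.onLogin(isAdviceGiver);
                    }
                }
            }.start();
        }
    }
}

Each add() copies the backing array, which is why this works best when listeners are registered far less often than events are fired.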