Given the following variation of a queue:
interface AsyncQueue<T> {
    // add a new element to the queue
    void add(T elem);

    // request a single element from the queue via a callback;
    // the callback is called once, for a single polled element, when one is available,
    // so to request multiple elements, poll() must be called multiple times
    // with (possibly) different callbacks
    void poll(Consumer<T> callback);
}
I found out I do not know how to implement it using java.util.concurrent primitives! So my questions are:
What is the right way to implement it using java.util.concurrent package?
Is it possible to do this w/o using additional thread pool?
Your AsyncQueue is very similar to a BlockingQueue such as ArrayBlockingQueue. The Future returned would simply delegate to the ArrayBlockingQueue methods. Future.get would call blockingQueue.poll for instance.
As for your update, I'm assuming the thread that calls add should invoke the callback if there's one waiting? If so it's a simple task of creating one queue for elements, and one queue for callbacks.
Upon add, check if there's a callback waiting, then call it, otherwise put the element on the element queue
Upon poll, check if there's an element waiting, then call the callback with that element, otherwise put the callback on the callback queue
Code outline:
import java.util.LinkedList;
import java.util.Queue;
import java.util.function.Consumer;

class AsyncQueue<E> {
    private final Queue<Consumer<E>> callbackQueue = new LinkedList<>();
    private final Queue<E> elementQueue = new LinkedList<>();

    public synchronized void add(E e) {
        if (!callbackQueue.isEmpty())
            callbackQueue.remove().accept(e);
        else
            elementQueue.offer(e);
    }

    public synchronized void poll(Consumer<E> c) {
        if (!elementQueue.isEmpty())
            c.accept(elementQueue.remove());
        else
            callbackQueue.offer(c);
    }
}
I have a ConcurrentLinkedQueue that allows insertion from multiple threads. However, when I poll the queue, I do it in one function and I poll until the queue is empty. This can lead to an infinite loop because other threads can keep inserting into the queue while I am polling.
How can I create a view of the queue and empty it before polling and still be thread safe?
One way I see is to use a ConcurrentLinkedDeque and iterate until you reach the most recently added item. You cannot do this with a single-ended queue because reads look at the head first, and you would need to read the tail in order to find the last added element.
The way that ConcurrentLinkedDeque works is that calls to offer(Object) and add(Object) will place the item at the tail of the queue. Calls to poll() will read the head of the queue, like so:
// Read direction --->
HEAD -> E1 -> E2 -> E3 = TAIL
// Write direction --->
As you add more items, the tail will extend to the last element, but since we want to empty the queue as we last saw it, we grab the tail pointer and iterate until we reach it. We can then let subsequent invocations deal with whatever was added while we were emptying the queue. We use peekLast() rather than pollLast() because polling would remove the marker element, and without the marker we could not tell when to stop removing elements.
ConcurrentLinkedDeque<Object> deque = new ConcurrentLinkedDeque<>();

public void emptyCurrentView() {
    Object tail = deque.peekLast();   // marker: last element of this view
    if (tail != null) {
        while (true) {
            // Poll the current head
            Object current = deque.poll();
            // Process the element
            process(current);
            // Once we have processed the marker element, exit the method
            if (current == tail) {
                return;
            }
        }
    }
}
You do not need to modify the producer code as the producer's default offer(Object) and add(Object) do exactly the same thing as adding the element to the tail.
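To make that concrete, here is a self-contained sketch of the approach; process() is a stand-in (not part of the original code) that simply records each element so the behavior is observable:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;

// Runnable sketch of the answer above; process() is a stand-in
// that records each element so the drained view can be inspected.
class ViewDrainer {
    final ConcurrentLinkedDeque<Object> deque = new ConcurrentLinkedDeque<>();
    final List<Object> processed = new ArrayList<>();

    public void emptyCurrentView() {
        Object tail = deque.peekLast();     // marker: last element of this view
        if (tail != null) {
            while (true) {
                Object current = deque.poll();  // take the current head
                process(current);
                if (current == tail) {
                    return;  // marker processed: stop, leave newer additions alone
                }
            }
        }
    }

    private void process(Object o) {
        processed.add(o);
    }
}
```

Elements added after peekLast() was taken are left in the deque for the next invocation.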
How can I create a view of the queue and empty it before polling and still be thread safe?
Yeah this sounds like a really bad pattern. The whole point of using a concurrent queue implementation is that you can add to and remove from the queue at the same time. If you want to stick with ConcurrentLinkedQueue then I'd just do something like this:
// run every so often
while (true) {
    // poll() returns null immediately if the queue is empty
    Item item = queue.poll();
    if (item == null) {
        break;
    }
    // process the item ...
}
However, I would consider switching to use LinkedBlockingQueue instead, because it supports take(). The consumer thread would be in a loop like this:
private final BlockingQueue<Item> blockingQueue = new LinkedBlockingQueue<>();
...
while (!Thread.currentThread().isInterrupted()) {
    try {
        // wait for the queue to get an item
        Item item = blockingQueue.take();
        // process item ...
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
BlockingQueue extends Queue so the poll() loop is also available.
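For illustration, here is a minimal, self-contained version of the poll() loop, with String standing in for the question's Item type; since it is written against the Queue interface, it works for both ConcurrentLinkedQueue and LinkedBlockingQueue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Drains whatever is in the queue right now; poll() returns null
// once the queue is empty, so the loop terminates even if producers
// keep adding elements afterwards.
class DrainDemo {
    static List<String> drainCurrent(Queue<String> queue) {
        List<String> out = new ArrayList<>();
        while (true) {
            String item = queue.poll();   // null if the queue is empty
            if (item == null) {
                break;
            }
            out.add(item);                // stand-in for "process the item"
        }
        return out;
    }
}
```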
I am using an ArrayBlockingQueue, but sometimes it gets full and prevents other objects from being added to it.
What I would like to do is to remove the oldest object in the queue before adding another one when the ArrayBlockingQueue gets full. I need the ArrayBlockingQueue to be like the Guava EvictingQueue but thread safe. I intend to extend the ArrayBlockingQueue and override the offer(E e) method like below:
public class MyArrayBlockingQueue<E> extends ArrayBlockingQueue<E> {
    // Capacity of the queue
    private final int size;

    // Constructor
    public MyArrayBlockingQueue(int queueSize) {
        super(queueSize);
        this.size = queueSize;
    }

    @Override
    public synchronized boolean offer(E e) {
        // Is the queue full?
        if (super.size() == this.size) {
            // if the queue is full, remove the head element
            this.remove();
        }
        return super.offer(e);
    }
}
Is the above approach OK? Or is there a better way of doing it?
Thanks
Your MyArrayBlockingQueue doesn't override BlockingQueue.offer(E, long, TimeUnit) or BlockingQueue.poll(long, TimeUnit). Do you actually need a queue with "blocking" features? If you do not, then you can create a thread-safe queue backed by an EvictingQueue using Queues.synchronizedQueue(Queue):
Queues.synchronizedQueue(EvictingQueue.create(maxSize));
For an evicting blocking queue, I see a few issues with your proposed implementation:
remove() may throw an exception if the queue is empty. Your offer method is marked synchronized, but poll, remove, etc. are not, so another thread could drain your queue between your calls to size() and remove(). I suggest using poll() instead, which won't throw an exception.
Your call to offer may still return false (i.e. not "add" the element) because of another race condition: between checking the size and/or removing an element, a different thread could add an element, filling the queue. I recommend looping on the result of offer until it returns true (see below).
Calling size(), remove() and offer(E) each requires a lock, so in the worst-case scenario your code locks and unlocks three times (and even then it might fail to behave as desired due to the previous issues).
I believe the following implementation will get you what you are after:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class EvictingBlockingQueue<E> extends ArrayBlockingQueue<E> {
    public EvictingBlockingQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        while (!super.offer(e)) poll();
        return true;
    }

    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        while (!super.offer(e, timeout, unit)) poll();
        return true;
    }
}
Note that this implementation can unnecessarily remove an element if between two calls to super.offer(E) another thread removes an element. This seems acceptable to me and I don't really see a practical way around it (ArrayBlockingQueue.lock is package-private and java.util.concurrent is a prohibited package so we can't place an implementation there to access and use the lock, etc.).
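As a quick sanity check of the eviction behavior (repeating the class so the snippet is self-contained): offering to a full queue should drop the oldest element rather than block or return false.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

// The evicting queue from the answer above: when full, offer()
// evicts the head (the oldest element) until there is room.
class EvictingBlockingQueue<E> extends ArrayBlockingQueue<E> {
    public EvictingBlockingQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        while (!super.offer(e)) poll();   // evict the head until the offer succeeds
        return true;
    }

    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        while (!super.offer(e, timeout, unit)) poll();
        return true;
    }
}
```

With a capacity of 2, offering 1, 2, 3 leaves the queue holding 2 and 3.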
When you say "it gets to full and prevents other objects to be added", does that mean it would be sufficient to ensure that objects can be added anytime? If that's true, you could simply switch to an unbounded queue such as LinkedBlockingQueue. But be aware of the differences compared with ArrayBlockingQueue:
Linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications.
You can find an overview of JDK queue implementations here.
Let's assume we have n Workers which do some computation. The computation may take a while and n computations may run in parallel. Each Worker needs some data structure (not shared between Workers) to do the work.
I thought about setting up each Worker during some initialization of the Master and handing over the required data structure to the Worker's constructor.
public class Master {
    public Master() {
        // initialize n Workers and "register" them "somewhere"
    }

    public boolean doCompute(int someInput) throws NoIdleWorkerException {
        // check if there is an idle Worker, otherwise throw NoIdleWorkerException
        // call the idle Worker, hand over someInput and wait for the result
        // synchronously return the result
    }
}
A Worker may implement Runnable and then be handed over to a Thread. An instance of Worker may be reused.
public class Worker implements Runnable {
    private SomeDataStructure ds;

    public Worker(SomeDataStructure ds) {
        this.ds = ds;
    }

    public void run() {
        // may call doCompute, but run() doesn't have a return type
    }

    public boolean doCompute(int someInput) {
        // do the computation and return
    }
}
What is the best way to manage the Worker instances? I was thinking about using a ThreadFactory that returns a Thread only if a Worker instance is idle, and null otherwise. With this approach, I would have to manage the Worker instances in some data structure.
Also, since Master.doCompute(int someInput) has a return value but its computation is done by a Thread, thus asynchronously, I may have to use Futures. Are there any alternatives?
Assuming that your Master has to do something with the results of all the workers for a given invocation, I would implement each worker as a Callable, with the work to be done implemented in its call function.
Then the master can generate its list of Callables from whatever you pass in and pass those to a ThreadPoolExecutor (which means you can control the number of threads actually in use for parallel processing of this kind) via the invokeAll method.
invokeAll returns a list of Futures, which will either have completed or have had a timeout expire (if you choose to set one). You can check whether they have been cancelled (timed out).
See http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/AbstractExecutorService.html#invokeAll(java.util.Collection) for further details.
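A hedged sketch of how the Master could look with this approach; the names and the trivial stand-in computation are illustrative, not from the original:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: the Master turns its inputs into Callables and hands them
// to a fixed-size pool via invokeAll, which blocks until all are done.
class Master {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public List<Boolean> doCompute(List<Integer> inputs)
            throws InterruptedException, ExecutionException {
        List<Callable<Boolean>> tasks = new ArrayList<>();
        for (int someInput : inputs) {
            tasks.add(() -> someInput % 2 == 0);   // stand-in for the real work
        }
        List<Boolean> results = new ArrayList<>();
        // invokeAll returns when every task has completed (or timed out)
        for (Future<Boolean> f : pool.invokeAll(tasks)) {
            results.add(f.get());
        }
        return results;
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Because invokeAll blocks until all tasks finish, doCompute stays synchronous from the caller's point of view, which matches the original intent.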
I have a PriorityBlockingQueue as follows:
BlockingQueue<Robble> robbleListQueue = new PriorityBlockingQueue<Robble>();
Robble implements Comparable<Robble> and I am able to sort lists without issue, so I know my comparisons work.
I also have the following Runnable:
private class RobbleGeneratorRunnable implements Runnable {
    private final BlockingQueue<Robble> robbleQueue;

    public RobbleGeneratorRunnable(BlockingQueue<Robble> robbleQueue) {
        this.robbleQueue = robbleQueue;
    }

    @Override
    public void run() {
        try {
            robbleQueue.put(generateRobble());
        } catch (InterruptedException e) {
            // ...
        }
    }

    private Robble generateRobble() {
        // ...
    }
}
I push a few thousand of these runnables into an ExecutorService and then shutdown() and awaitTermination().
According to the BlockingQueue JavaDoc, put(...) is a blocking action. However, when I iterate over the items in the queue they are only mostly in order -- there are some that are out of order indicating to me that the queue is not blocking properly. Like I said before I can sort the Robbles just fine.
What could be causing robbleQueue.put(generateRobble()) to not block properly?
According to the javadoc,
The Iterator provided in method iterator() is not guaranteed to
traverse the elements of the priority queue in any particular order.
If you need ordered traversal, consider using
Arrays.sort(pq.toArray())
Add, peek, poll and remove are required to operate in priority sequence, but NOT the iterator.
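A small demonstration of the difference, assuming Integer elements: toArray() gives an unordered snapshot that you can sort (as the javadoc suggests), while repeated poll() drains in priority order.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.PriorityBlockingQueue;

// poll() honors the priority order; iterator()/toArray() do not,
// so for an ordered traversal we sort a snapshot copy.
class PriorityOrderDemo {
    static List<Integer> drainInOrder(PriorityBlockingQueue<Integer> q) {
        List<Integer> out = new ArrayList<>();
        Integer e;
        while ((e = q.poll()) != null) {
            out.add(e);   // each poll() returns the current minimum
        }
        return out;
    }
}
```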
PriorityBlockingQueue is an unbounded queue, and if you read the javadocs for put() it states:
Inserts the specified element into this priority queue. As the queue
is unbounded this method will never block.
Why would you expect put() to block?
Iterating a PriorityQueue or PriorityBlockingQueue is explicitly stated in the Javadoc not to be ordered. Only add(), peek(), poll(), and remove() are ordered. This has nothing to do with whether blocking is happening correctly.
What is the recommended / best way to implement a blocking function call in Java, that can be later unblocked by a call from another thread?
Basically I want to have two methods on an object, where the first call blocks any calling thread until the second method is run by another thread:
public class Blocker {
    /* Any thread that calls this function will get blocked */
    public static SomeResultObject blockingCall() {
        // ...
    }

    /* when this function is called, all blocked threads will continue */
    public void unblockAll() {
        // ...
    }
}
The intention BTW is not just to get blocking behaviour, but to write a method that blocks until some future point when it is possible to compute the required result.
You can use a CountDownLatch.
CountDownLatch latch = new CountDownLatch(1);
To block, call:
latch.await();
To unblock, call:
latch.countDown();
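Putting those two calls together, here is a minimal sketch of the Blocker from the question built on a CountDownLatch; SomeResultObject is replaced by String, and the way the result is published is an assumption of this sketch:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the Blocker from the question: blockingCall() parks on the
// latch until unblockAll() publishes a result and counts down.
class Blocker {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile String result;   // stand-in for SomeResultObject

    /* Any thread that calls this method blocks until the result is ready */
    public String blockingCall() throws InterruptedException {
        latch.await();
        return result;
    }

    /* Publishes the result and releases every blocked caller */
    public void unblockAll(String r) {
        result = r;        // written before countDown(), so waiters see it
        latch.countDown();
    }
}
```

The write to result happens before countDown(), and await() establishes a happens-before edge, so blocked callers reliably see the published value.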
If you're waiting on a specific object, you can call myObject.wait() from one thread and then wake it up with myObject.notify() or myObject.notifyAll(). The calling thread must hold the object's monitor, i.e. be inside a synchronized block on that object:
class Example {
    List<Object> list = new ArrayList<>();

    // Wait for the list to have an item and return it
    public Object getFromList() throws InterruptedException {
        synchronized (list) {
            // Do this inside a while loop -- wait() is
            // not guaranteed to return only when the condition
            // is satisfied -- it can return at any time
            while (list.isEmpty()) {
                // Wait until list.notify() is called.
                // Note: the lock is released while we wait
                // and re-acquired before wait() returns.
                list.wait();
            }
            return list.remove(0);
        }
    }

    // Add an object to the list and wake up
    // anyone waiting
    public void addToList(Object item) {
        synchronized (list) {
            list.add(item);
            // Wake up anything blocking on list.wait().
            // Note that we know only one waiting call can
            // complete (since we only added one item to process).
            // To wake them all up, we'd use list.notifyAll().
            list.notify();
        }
    }
}
There are a couple of different approaches and primitives available, but the most appropriate sounds like a CyclicBarrier or a CountDownLatch.