How to snapshot a queue to avoid an infinite loop - Java

I have a ConcurrentLinkedQueue that allows insertion from multiple threads. However, when I poll the queue, I do it in one method and I poll until the queue is empty. This can lead to an infinite loop because other threads can keep inserting into the queue while I am polling.
How can I create a view of the queue, empty it before polling, and still be thread safe?

One way I see is to use a ConcurrentLinkedDeque and iterate until you reach the most recently added item. You cannot do this with a single-ended queue because reads start at the head, and you would need to read the tail in order to find the last added element.
The way ConcurrentLinkedDeque works is that calls to offer(Object) and add(Object) place the item at the tail of the queue, while calls to poll() read from the head, like so:
// Read direction --->
HEAD -> E1 -> E2 -> E3 = TAIL
// Write direction --->
As you add more items, the tail extends to the last element. Since we want to empty the queue as we last saw it, we grab the tail element and poll until we reach it, letting subsequent invocations deal with whatever was added while we were emptying. We use peekLast rather than pollLast because pollLast would remove the last added value, and without that marker we could not determine when to stop removing elements.
ConcurrentLinkedDeque<Object> deque = new ConcurrentLinkedDeque<>();

public void emptyCurrentView() {
    Object tail = deque.peekLast();
    if (tail != null) {
        while (true) {
            // Poll the current head
            Object current = deque.poll();
            // Process the element
            process(current);
            // Once we have processed the marker, exit the method
            if (current == tail) {
                return;
            }
        }
    }
}
You do not need to modify the producer code, as the default offer(Object) and add(Object) both add the element at the tail anyway.
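To make the stopping condition concrete, here is a runnable sketch of the drain method above (the element type, the counter, and the `process` body are illustrative additions, not from the original post):

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicInteger;

public class SnapshotDrain {
    static final ConcurrentLinkedDeque<Integer> deque = new ConcurrentLinkedDeque<>();
    static final AtomicInteger processed = new AtomicInteger();

    // Stand-in for whatever per-element work you do.
    static void process(Integer element) {
        processed.incrementAndGet();
    }

    // Drain only the elements that were present when the method started.
    public static void emptyCurrentView() {
        Integer tail = deque.peekLast(); // marker: last element of the current view
        if (tail == null) {
            return; // nothing to do
        }
        while (true) {
            Integer current = deque.poll(); // remove from the head
            process(current);
            if (current == tail) { // reference equality: we hit the marker
                return;
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            deque.offer(i);
        }
        emptyCurrentView();
        deque.offer(99); // simulates a producer racing the drain
        System.out.println(processed.get()); // 5 - the late element waits for the next pass
        System.out.println(deque.size());    // 1
    }
}
```

Note the marker comparison is by reference: peekLast hands back the exact object stored in the deque, so poll() will eventually return that same reference regardless of the element type.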

How can I create a view of the queue and empty it before polling and still be thread safe?
Yeah, this sounds like a really bad pattern. The whole point of using a concurrent queue implementation is that you can add to and remove from the queue at the same time. If you want to stick with ConcurrentLinkedQueue, then I'd just do something like this:
// run every so often
while (true) {
    // poll() returns null immediately if the queue is empty
    Item item = queue.poll();
    if (item == null) {
        break;
    }
    // process the item...
}
However, I would consider switching to use LinkedBlockingQueue instead, because it supports take(). The consumer thread would be in a loop like this:
private final BlockingQueue<Item> blockingQueue = new LinkedBlockingQueue<>();
...
while (!Thread.currentThread().isInterrupted()) {
    try {
        // wait for the queue to get an item
        Item item = blockingQueue.take();
        // process item...
    } catch (InterruptedException e) {
        // take() throws InterruptedException; restore the flag so the loop exits
        Thread.currentThread().interrupt();
    }
}
BlockingQueue extends Queue so the poll() loop is also available.
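If the goal really is to "snapshot" the queue, BlockingQueue also has drainTo(Collection), which moves everything currently queued into a batch in one call instead of a poll() loop. A minimal sketch (the class and element names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DrainExample {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("a");
        queue.add("b");
        queue.add("c");

        // drainTo transfers the elements currently in the queue into the batch;
        // anything added afterwards is left for the next drain.
        List<String> batch = new ArrayList<>();
        int moved = queue.drainTo(batch);

        System.out.println(moved);        // 3
        System.out.println(batch);        // [a, b, c]
        System.out.println(queue.size()); // 0
    }
}
```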

Related

How to implement asynchronous queue?

Given following variation of queue:
interface AsyncQueue<T> {
    // add new element to the queue
    void add(T elem);

    // request a single element from the queue via callback;
    // the callback will be called once, for a single polled element, when it is available,
    // so to request multiple elements, poll() must be called multiple times with (possibly) different callbacks
    void poll(Consumer<T> callback);
}
I found out I do not know how to implement it using java.util.concurrent primitives! So the questions are:
What is the right way to implement it using java.util.concurrent package?
Is it possible to do this without using an additional thread pool?
Your AsyncQueue is very similar to a BlockingQueue such as ArrayBlockingQueue. The Future returned would simply delegate to the ArrayBlockingQueue methods. Future.get would call blockingQueue.poll for instance.
As for your update, I'm assuming the thread that calls add should invoke the callback if there's one waiting? If so it's a simple task of creating one queue for elements, and one queue for callbacks.
Upon add, check if there's a callback waiting, then call it, otherwise put the element on the element queue
Upon poll, check if there's an element waiting, then call the callback with that element, otherwise put the callback on the callback queue
Code outline:
class AsyncQueue<E> {
    Queue<Consumer<E>> callbackQueue = new LinkedList<>();
    Queue<E> elementQueue = new LinkedList<>();

    public synchronized void add(E e) {
        if (callbackQueue.size() > 0)
            callbackQueue.remove().accept(e);
        else
            elementQueue.offer(e);
    }

    public synchronized void poll(Consumer<E> c) {
        if (elementQueue.size() > 0)
            c.accept(elementQueue.remove());
        else
            callbackQueue.offer(c);
    }
}
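The outline above can be exercised with a small driver that shows both hand-off paths (the demo class and string values are made up for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

class AsyncQueue<E> {
    private final Queue<Consumer<E>> callbackQueue = new LinkedList<>();
    private final Queue<E> elementQueue = new LinkedList<>();

    public synchronized void add(E e) {
        if (!callbackQueue.isEmpty())
            callbackQueue.remove().accept(e); // a callback was waiting: feed it directly
        else
            elementQueue.offer(e);            // no consumer yet: park the element
    }

    public synchronized void poll(Consumer<E> c) {
        if (!elementQueue.isEmpty())
            c.accept(elementQueue.remove());  // an element was waiting: deliver it now
        else
            callbackQueue.offer(c);           // nothing yet: park the callback
    }
}

public class AsyncQueueDemo {
    public static void main(String[] args) {
        AsyncQueue<String> q = new AsyncQueue<>();
        List<String> seen = new ArrayList<>();

        q.poll(seen::add); // no element yet: the callback is parked
        q.add("first");    // add() invokes the parked callback
        q.add("second");   // no callback waiting: the element is parked
        q.poll(seen::add); // poll() delivers the parked element

        System.out.println(seen); // [first, second]
    }
}
```

Either side can arrive first; whichever does gets parked until its counterpart shows up, all under the one monitor.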

Which collection should I choose in Java? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I need a collection to store a lot of requests from many clients, while one thread processes all the stored requests every five seconds. Which collection should I choose in Java for the best efficiency? Obviously, the collection should be thread-safe and efficient at polling all elements every five seconds, right?
You may try to use ArrayBlockingQueue in this case.
A bounded blocking queue backed by an array. This queue orders
elements FIFO (first-in-first-out). The head of the queue is that
element that has been on the queue the longest time. The tail of the
queue is that element that has been on the queue the shortest time.
New elements are inserted at the tail of the queue, and the queue
retrieval operations obtain elements at the head of the queue.
This is a classic "bounded buffer", in which a fixed-sized array holds
elements inserted by producers and extracted by consumers. Once
created, the capacity cannot be changed. Attempts to put an element
into a full queue will result in the operation blocking; attempts to
take an element from an empty queue will similarly block.
It also has a take() method which blocks, without consuming CPU cycles, until an item gets added to the queue. And it is thread-safe.
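For the "process everything every five seconds" pattern, a common shape is put() from the client threads and a periodic drainTo() in the single processing thread. A sketch under those assumptions (the scheduling itself, e.g. via a ScheduledExecutorService, is omitted, and the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchConsumer {
    static final BlockingQueue<String> requests = new ArrayBlockingQueue<>(1024);

    // Called by the processing thread, e.g. from a ScheduledExecutorService
    // every five seconds (the scheduling itself is left out here).
    static List<String> drainBatch() {
        List<String> batch = new ArrayList<>();
        requests.drainTo(batch); // move everything currently queued, without blocking
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        requests.put("req-1"); // clients call put(); it blocks only if the queue is full
        requests.put("req-2");
        List<String> batch = drainBatch();
        System.out.println(batch.size());    // 2
        System.out.println(requests.size()); // 0
    }
}
```

The bounded capacity also gives you back-pressure for free: if the processor falls behind, producers block instead of exhausting memory.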
I wrote a lock-free DoubleBufferedList for just this situation. Essentially, you can write to it from multiple threads and the writes will accumulate. When a read comes along the whole list is returned while, at the same time, in a thread-safe way, a new list is created for the writers to write to.
The critical difference between this and any kind of BlockingQueue is that with a Queue you will need to poll each entry out of it one-at-a-time. This structure gives you the whole accumulated list all at once, containing everything that has accumulated since the last time you looked.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicMarkableReference;

public class DoubleBufferedList<T> {
    // Atomic reference so I can atomically swap it through.
    // Mark = true means I am adding to it, so it is momentarily unavailable for iteration.
    private AtomicMarkableReference<List<T>> list = new AtomicMarkableReference<>(newList(), false);

    // Factory method to create a new list - may be best to abstract this.
    protected List<T> newList() {
        return new ArrayList<>();
    }

    // Get the current list and replace it with an empty one - can return an empty list - does not mean failed.
    public List<T> get() {
        // Atomically grab and replace the list with an empty one.
        List<T> empty = newList();
        List<T> it;
        // Replace an unmarked list with an empty one.
        if (!list.compareAndSet(it = list.getReference(), empty, false, false)) {
            // Failed to replace!
            // It is probably marked as being appended to but may have been replaced by another thread.
            // Return empty and come back again soon.
            return Collections.<T>emptyList();
        }
        // Successfully replaced an unmarked list with an empty list!
        return it;
    }

    // Grab and lock the list in preparation for append.
    private List<T> grab() {
        List<T> it;
        // We cannot fail so spin on get and mark.
        while (!list.compareAndSet(it = list.getReference(), it, false, true)) {
            // Spin on mark - waiting for another grabber to release (which it must).
        }
        return it;
    }

    // Release the list.
    private void release(List<T> it) {
        // Unmark it - should this be a compareAndSet(it, it, true, false)?
        if (!list.attemptMark(it, false)) {
            // Should never fail because once marked it will not be replaced.
            throw new IllegalMonitorStateException("It changed while we were adding to it!");
        }
    }

    // Add an entry to the list.
    public void add(T entry) {
        List<T> it = grab();
        try {
            // Successfully marked! Add my new entry.
            it.add(entry);
        } finally {
            // Always release after a grab.
            release(it);
        }
    }

    // Add many entries to the list.
    public void add(List<T> entries) {
        List<T> it = grab();
        try {
            // Successfully marked! Add my new entries.
            it.addAll(entries);
        } finally {
            // Always release after a grab.
            release(it);
        }
    }

    // Add a number of entries.
    @SafeVarargs
    public final void add(T... entries) {
        // Make a list of them.
        add(Arrays.<T>asList(entries));
    }
}
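The central swap trick can be shown in miniature with a plain AtomicReference (a simplified sketch that drops the mark used above to guard in-progress appends, so it leaves a small window where a writer could append to a list that was just handed to the reader; the class name is made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class SwapList<T> {
    private final AtomicReference<List<T>> ref =
            new AtomicReference<>(Collections.synchronizedList(new ArrayList<>()));

    // Writers append to whichever list is current.
    public void add(T entry) {
        ref.get().add(entry);
    }

    // The reader atomically swaps in a fresh list and receives everything
    // accumulated so far in one shot.
    public List<T> getAndReset() {
        return ref.getAndSet(Collections.synchronizedList(new ArrayList<>()));
    }

    public static void main(String[] args) {
        SwapList<Integer> s = new SwapList<>();
        s.add(1);
        s.add(2);
        List<Integer> batch = s.getAndReset();
        System.out.println(batch);                     // [1, 2]
        System.out.println(s.getAndReset().isEmpty()); // true
    }
}
```

The full DoubleBufferedList closes that writer/reader race with AtomicMarkableReference, which is exactly what the grab/release pair is for.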
A static ConcurrentHashMap with the timestamp as key and the request object as value would be my suggestion.

Why is my producer-consumer program blocking?

I have created my own queue.
MyQueue.java
import java.util.Date;
import java.util.Queue;

public class MyQueue {
    private int size;
    private Queue<String> q;

    public MyQueue(int size, Queue<String> queue) {
        this.size = size;
        this.q = queue;
    }

    // getter and setter

    public synchronized void putTail(String s) {
        System.out.println(this.size); // It should print 0, 1, 2
        while (q.size() != size) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        Date d = new Date();
        q.add(d.toString());
        notifyAll();
    }
}
MyProducer.java
import com.conpro.MyQueue;

public class MyProducer implements Runnable {
    private final MyQueue queue;
    private final int size;

    MyProducer(int size, MyQueue q) {
        this.queue = q;
        this.size = size;
    }

    @Override
    public void run() {
        queue.putTail(String.valueOf(Math.random()));
    }
}
MyTest.java
import java.util.PriorityQueue;
import java.util.Queue;

public class MyTest {
    public static void main(String[] args) {
        Queue<String> q = new PriorityQueue<String>();
        MyQueue mq = new MyQueue(3, q);
        MyProducer p = new MyProducer(3, mq);
        MyProducer p1 = new MyProducer(3, mq);
        MyProducer p2 = new MyProducer(3, mq);
        new Thread(p).start();
        new Thread(p1).start();
        new Thread(p2).start();
    }
}
Now, here I have created 3 producers.
So after executing these 3 lines, the queue should be full.
The output should be:
0
1
2
But it only prints 0.
Why?
P.S.: I have written only the producer code since I have not got to the consumer yet.
Since putTail() is synchronized, only one of the three threads can run in it at a time. Each thread that enters finds q.size() != size, calls wait() (which releases the monitor so the next thread can enter), and then sleeps forever.
The problem is that all 3 threads end up in wait() but never get notified via notifyAll().
There's a problem with your code that means it isn't really a blocking queue. Here's what I would expect a blocking queue to do:
Queue is bound to maximum size of 3, start with no elements
Get something from a producer, size is not yet 3, add it, don't block
Get something else from a producer, size is not yet 3, add it, don't block
Get something else from a producer, size is not yet 3, add it, don't block
Get something else from a producer, size is now 3, block until something is taken
Get something else from a producer, size is still 3, block until something is taken
A consumer takes from the queue, the threads from (5) and (6) are notified, and the first one to get scheduled obtains the lock long enough to add his element, the other one is forced to block again until another consumer takes from the queue.
Here's what yours is actually doing as-written:
Queue is bound to maximum size of 3, start with no elements
Get something from a producer, size is not yet 3, block on wait() without adding it
Get something from a producer, size is not yet 3, block on wait() without adding it
Get something from a producer, size is not yet 3, block on wait() without adding it
In all 3 cases of adding the element, the element doesn't actually get added, and we get stuck at wait() because all 3 enter the while loop, then nothing ever calls notifyAll().
Your fix in the comments:
while (q.size() == size)
This makes it do what it's supposed to: if the size has already reached the maximum, block until it's told to continue via a notify, then check if the size is still the maximum. For the thread from my example above that receives the lock after being notified (e.g. the thread from step 6), it will get the chance to add its message. The thread that doesn't receive the lock will receive the lock after the first one releases it, but the size will have increased to the max size again, which causes it to block again. That being said, I think your approach is a good start.
The one thing in your code that's incorrect is that you're calling notifyAll after you add it. Adding will never cause the queue size to shrink, but you're notifying all the threads waiting in the putTail method to continue. There's no reason to notify the threads that are waiting to add something to the queue if you just put something into it that made it reach the maximum size anyway. I think you meant for that notify to do something with the threads waiting on your eventual take method, which leads me to my next point:
Your next step will be to have two lock objects instead of always using this. That way the take method can block separately from the put method. Use one lock to wait in the put method and notifyAll on it in take, then use the other lock to wait in the take method and notifyAll on it in put. This will make it so that you can separately notify the takers and putters without notifying all of them at once like you would using this.notifyAll.
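One common way to realize that "separate waits for putters and takers" idea is a single ReentrantLock with two Condition objects, which is roughly how ArrayBlockingQueue and LinkedBlockingQueue are structured internally. A sketch, not the poster's exact design (the class name is made up):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Queue<T> q = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // putters wait here
    private final Condition notEmpty = lock.newCondition(); // takers wait here

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (q.size() == capacity) // note: ==, not != as in the question
                notFull.await();
            q.add(item);
            notEmpty.signal();           // wake a taker, not the other putters
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (q.isEmpty())
                notEmpty.await();
            T item = q.remove();
            notFull.signal();            // wake a putter
            return item;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer<String> buf = new BoundedBuffer<>(2);
        buf.put("a");
        buf.put("b");
        System.out.println(buf.take()); // a
        buf.put("c");                   // there is room again, so this does not block
        System.out.println(buf.take()); // b
    }
}
```

Because each side signals only the condition the other side waits on, a put never pointlessly wakes other putters, which is exactly the problem described above with notifyAll on this.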
The problem is in the MyQueue class's putTail() method. There you are calling wait() on this (the current object), and it will never be notified, so the thread will wait forever.

Java Threadpools with competing queues

I have a situation where I'd like to use an extension of Java's fixed thread pools. I have N groups of runnable objects that I'd like to compete for resources. However, I'd like the total number of threads used to remain constant. The way that I would like this to work is outlined here
1. Allocate an object with N threads and M queues;
2. Schedule job n on queue m.
3. Have a pointer to the first queue
4. Repeat:
   a. If the maximum number of threads is currently in use, wait.
   b. Pop off a job from the current queue
   c. Move the pointer one queue over (or from the last queue to the first)
First, does something like this already exist? Second, if not, I'm nervous about writing my own because I know writing my own thread pools can be dangerous. Can anyone point me to some good examples for writing my own?
Your best bet is probably creating your own implementation of a Queue that cycles through other queues. For example (in pseudo-code):
class CyclicQueue<T> {
    Queue<T>[] queues;
    int current = 0;

    CyclicQueue(int size) {
        queues = new Queue[size];
        for (int i = 0; i < size; i++)
            queues[i] = new LinkedList<T>();
    }

    T get() {
        int i = current;
        T value;
        while ((value = queues[i].poll()) == null) {
            i = (i + 1) % queues.length;
            if (i == current)
                return null; // went all the way around: every queue is empty
        }
        current = (i + 1) % queues.length; // the next call starts at the following queue
        return value;
    }
}
Of course, with this, if you want blocking you'll need to add that in yourself.
In which case, you'll probably want a custom Queue for each queue which can notify the parent queue that value has been added.
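A runnable, single-threaded version of the round-robin idea might look like this (blocking and synchronization are left out, as noted above; the add(index, value) method is an illustrative stand-in for however jobs get routed to the queues):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class CyclicQueue<T> {
    private final Queue<T>[] queues;
    private int current = 0;

    @SuppressWarnings("unchecked")
    public CyclicQueue(int size) {
        queues = new Queue[size];
        for (int i = 0; i < size; i++)
            queues[i] = new ArrayDeque<>();
    }

    public void add(int queueIndex, T value) {
        queues[queueIndex].add(value);
    }

    // Take from the current queue if it has work, otherwise scan the others
    // once; advance the pointer so the next call starts one queue over.
    public T get() {
        for (int n = 0; n < queues.length; n++) {
            int i = (current + n) % queues.length;
            T value = queues[i].poll();
            if (value != null) {
                current = (i + 1) % queues.length;
                return value;
            }
        }
        return null; // all queues empty
    }

    public static void main(String[] args) {
        CyclicQueue<String> cq = new CyclicQueue<>(2);
        cq.add(0, "a0");
        cq.add(0, "a1");
        cq.add(1, "b0");
        System.out.println(cq.get()); // a0  (queue 0)
        System.out.println(cq.get()); // b0  (queue 1)
        System.out.println(cq.get()); // a1  (back to queue 0)
    }
}
```

Worker threads in a fixed pool would then loop on get(), so the N groups share the pool while each group's backlog is served in turn.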

Synchronization of a Queue

I've been reading Doug Lea's 'Concurrent Programming in Java' book. As you may know, Doug originally wrote the Java concurrency API. However, something has caused me some confusion and I was hoping to get a few opinions on this little conundrum!
Take the following code from Doug Lea's queuing example...
class LinkedQueue {
    protected Node head = new Node(null);
    protected Node last = head;
    protected final Object pollLock = new Object();
    protected final Object putLock = new Object();

    public void put(Object x) {
        Node node = new Node(x);
        synchronized (putLock) { // insert at end of list
            synchronized (last) {
                last.next = node; // extend list
                last = node;
            }
        }
    }

    public Object poll() { // returns null if empty
        synchronized (pollLock) {
            synchronized (head) {
                Object x = null;
                Node first = head.next; // get to first real node
                if (first != null) {
                    x = first.object;
                    first.object = null; // forget old object
                    head = first; // first becomes new head
                }
                return x;
            }
        }
    }

    static class Node { // local node class for queue
        Object object;
        Node next = null;
        Node(Object x) { object = x; }
    }
}
This is quite a nice Queue. It uses two monitors so a producer and a consumer can access the Queue at the same time. Nice! However, the synchronization on 'last' and 'head' is confusing me here. The book states this is needed for the situation whereby the Queue currently has, or is about to have, 0 entries. OK, fair enough, and this kind of makes sense.
However, I then looked at the java.util.concurrent LinkedBlockingQueue. The original version of the queue doesn't synchronize on head or tail (I also wanted to post another link to the modern version, which suffers from the same problem, but I couldn't do so because I'm a newbie). I wonder why not? Am I missing something here? Is there some part of the idiosyncratic nature of the Java Memory Model I'm missing? I would have thought that for visibility purposes this synchronization is needed. I'd appreciate some expert opinions!
In the version you linked to, as well as the version in the latest JRE, the item inside the Node class is volatile, which forces reads and writes of it to be visible to all other threads. Here is a more in-depth explanation: http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
The subtlety here is that synchronized(null) would throw a NullPointerException, so neither head nor last is allowed to become null. They are both initialized to the same dummy node, which is never returned or removed from the list.
put() and poll() are synchronized on two different locks. The methods would only need to synchronize on the same lock to be thread-safe with respect to one another if they could modify the same value from different threads. The only situation in which that is possible is when head == last (i.e. they are the same object, referenced through different member variables). This is why the code also synchronizes on head and last - most of the time these will be fast, uncontended locks, but occasionally head and last will be the same instance and one of the threads will have to block the other.
The only time that visibility is an issue is when the queue is nearly empty, the rest of the time put() and poll() work on different ends of the queue and don't interfere with each other.
