How does an implementation of java.util.Queue use LIFO? - java

In Java doc:
[...] Among the exceptions are priority queues, which order elements according to a supplied comparator, or the elements' natural ordering, and LIFO queues (or stacks) which order the elements LIFO (last-in-first-out)
How can an implementation of java.util.Queue use LIFO instead of FIFO?

You can use any Deque as a LIFO queue using the Collections.asLifoQueue method:
Queue<Integer> arrayLifoQueue = Collections.asLifoQueue(new ArrayDeque<Integer>());
Queue<Integer> linkedListLifoQueue = Collections.asLifoQueue(new LinkedList<Integer>());

You can use a java.util.LinkedList (which implements Deque) with its push() and pop() methods to treat it like a stack, which is a LIFO queue.
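A minimal sketch of that stack-style usage (the variable name is illustrative):
import java.util.Deque;
import java.util.LinkedList;

Deque<Integer> stack = new LinkedList<>();
stack.push(1); // head is now 1
stack.push(2); // head is now 2
stack.pop();   // returns 2
stack.pop();   // returns 1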

An implementation of Queue can be based on FIFO ordering, on priorities, or on LIFO ordering; that is what the official documentation says.
When a programmer first sees "Queue", they automatically think "it must be FIFO order" (or possibly priority order). But as the documentation says, the Queue interface can also be used with LIFO ordering. Let me explain how it can be done.
// FIFO queue usage
Queue<Integer> fifoQueue = new LinkedList<>();
fifoQueue.add(1);
fifoQueue.add(2);
fifoQueue.remove(); // returns 1
fifoQueue.remove(); // returns 2

// LIFO queue usage
Queue<Integer> lifoQueue = Collections.asLifoQueue(new ArrayDeque<>());
lifoQueue.add(1);
lifoQueue.add(2);
lifoQueue.remove(); // returns 2
lifoQueue.remove(); // returns 1
As you can see, depending on the backing implementation, the Queue interface can also be used as a LIFO.

Stack and LinkedList offered here are just collections (note that java.util.Queue itself extends Collection; the concurrent queue implementations live in java.util.concurrent and can be used with thread pools).
I have just verified this again and reread the javadoc that you quoted. One option for a LIFO queue is to use a priority queue with a custom comparator that compares elements according to their insertion time, in reverse order.
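A rough sketch of that comparator idea; the Stamped wrapper and its seq counter are my own illustration, not from the original answer:
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

public class LifoViaPriority {
    // Wrapper that records insertion order (hypothetical helper)
    static class Stamped<T> {
        static long seq = 0;        // insertion counter; not thread-safe, demo only
        final long stamp = seq++;
        final T value;
        Stamped(T value) { this.value = value; }
    }

    public static void main(String[] args) {
        // Highest stamp (most recently inserted) comes out first: LIFO
        Queue<Stamped<String>> lifo = new PriorityQueue<>(
                Comparator.comparingLong((Stamped<String> s) -> s.stamp).reversed());
        lifo.add(new Stamped<>("first"));
        lifo.add(new Stamped<>("second"));
        System.out.println(lifo.remove().value); // second
        System.out.println(lifo.remove().value); // first
    }
}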

Deque can be used as LIFO or FIFO
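For example, a small sketch showing the same ArrayDeque driven both ways:
import java.util.ArrayDeque;
import java.util.Deque;

Deque<Integer> deque = new ArrayDeque<>();

// FIFO: add at the tail, take from the head
deque.addLast(1);
deque.addLast(2);
deque.pollFirst(); // returns 1

// LIFO: push and pop both operate on the head
deque.push(3);
deque.push(4);
deque.pop();       // returns 4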

A Queue is a data structure that uses the First-In-First-Out technique.
Here's a useful link: magi.toolkit.util.queue, class LIFOQueue:
An implementation of a "Last In, First Out" Queue. Basically, a LIFO
Queue is a Stack.

I made a LIFO queue with a limited size. The limited size is maintained by replacing the oldest entries with new ones. The implementation is based on LinkedList.
package XXXX;

import java.util.LinkedList;

public class LIFOQueueLimitedSize<E> extends LinkedList<E> {

    /**
     * generated serial number
     */
    private static final long serialVersionUID = -7772085623838075506L;

    // Maximum size of the queue
    private final int maxSize;

    // Constructor: fixes the capacity of the queue
    public LIFOQueueLimitedSize(int maxSize) {
        this.maxSize = maxSize;
    }

    // If the queue is full, remove the oldest/first element before adding, FIFO-style
    @Override
    public synchronized boolean add(E e) {
        // Is the queue full already?
        if (super.size() == this.maxSize) {
            // remove the head element to make room
            this.remove();
        }
        return super.add(e);
    }
}
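Hypothetical usage of the class above:
LIFOQueueLimitedSize<Integer> q = new LIFOQueueLimitedSize<>(2);
q.add(1);
q.add(2);
q.add(3); // the queue is full, so the oldest element (1) is evicted first
System.out.println(q); // prints [2, 3]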

Related

How to snapshot a queue to avoid infinite loop

I have a ConcurrentLinkedQueue that allows insertion from multiple threads; however, when I poll the queue I do it in one function, and I poll until the queue is empty. This can lead to an infinite loop because threads can be inserting into the queue while I am polling.
How can I create a view of the queue and empty it before polling and still be thread safe?
One way I see is to use a ConcurrentLinkedDeque and iterate until you reach the most recently added item. You cannot do this with a single-ended queue because reads look at the head first, and you would need to read the tail in order to find the last added element.
The way that ConcurrentLinkedDeque works is that calls to offer(Object) and add(Object) will place the item at the tail of the queue. Calls to poll() will read the head of the queue, like so:
// Read direction --->
HEAD -> E1 -> E2 -> E3 = TAIL
// Write direction --->
As you add more items, the tail will extend to the last element, but since we want to empty the queue as we last saw it, we grab the tail pointer and iterate until we reach it. Subsequent calls can then deal with whatever was added while we emptied the queue. We use peekLast rather than pollLast because polling would remove the last added value, and then we would not be able to tell when to stop removing elements: our marker would already be gone.
ConcurrentLinkedDeque<Object> deque = new ConcurrentLinkedDeque<>();

public void emptyCurrentView() {
    // Marker: the tail of the queue as it exists right now
    Object tail = deque.peekLast();
    if (tail != null) {
        while (true) {
            // Poll the current head
            Object current = deque.poll();
            // Process the element
            process(current);
            // Once we have processed the marker, exit the method
            if (current == tail) {
                return;
            }
        }
    }
}
You do not need to modify the producer code as the producer's default offer(Object) and add(Object) do exactly the same thing as adding the element to the tail.
How can I create a view of the queue and empty it before polling and still be thread safe?
Yeah, this sounds like a really bad pattern. The whole point of using a concurrent queue implementation is that you can add to and remove from the queue at the same time. If you want to stick with ConcurrentLinkedQueue, then I'd just do something like this:
// run every so often
while (true) {
    // poll() returns null immediately if the queue is empty
    Item item = queue.poll();
    if (item == null) {
        break;
    }
    // process the item...
}
However, I would consider switching to use LinkedBlockingQueue instead, because it supports take(). The consumer thread would be in a loop like this:
private final BlockingQueue<Item> blockingQueue = new LinkedBlockingQueue<>();
...
while (!Thread.currentThread().isInterrupted()) {
    // take() blocks until the queue has an item
    Item item = blockingQueue.take();
    // process item...
}
BlockingQueue extends Queue so the poll() loop is also available.

Evict object from ArrayBlockingQueue if full

I am using an ArrayBlockingQueue, but sometimes it gets full and prevents other objects from being added to it.
What I would like to do is remove the oldest object in the queue before adding a new one when the ArrayBlockingQueue is full. I need the ArrayBlockingQueue to be like the Guava EvictingQueue, but thread safe. I intend to extend ArrayBlockingQueue and override the offer(E e) method like below:
public class MyArrayBlockingQueue<E> extends ArrayBlockingQueue<E> {

    // Capacity of the queue
    private int size;

    // Constructor
    public MyArrayBlockingQueue(int queueSize) {
        super(queueSize);
        this.size = queueSize;
    }

    @Override
    public synchronized boolean offer(E e) {
        // Is the queue full?
        if (super.size() == this.size) {
            // if the queue is full, remove the head element
            this.remove();
        }
        return super.offer(e);
    }
}
Is the above approach OK? Or is there a better way of doing it?
Thanks
Your MyArrayBlockingQueue doesn't override BlockingQueue.offer(E, long, TimeUnit) or BlockingQueue.poll(long, TimeUnit). Do you actually need a queue with "blocking" features? If you do not, then you can create a thread-safe queue backed by an EvictingQueue using Queues.synchronizedQueue(Queue):
Queues.synchronizedQueue(EvictingQueue.create(maxSize));
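A slightly fuller sketch of that suggestion, assuming Guava is on the classpath (the capacity of 100 is arbitrary):
import com.google.common.collect.EvictingQueue;
import com.google.common.collect.Queues;
import java.util.Queue;

// Thread-safe bounded queue: once 100 elements are present,
// adding another evicts the head (the oldest element)
Queue<String> queue = Queues.synchronizedQueue(EvictingQueue.<String>create(100));
queue.offer("a");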
For an evicting blocking queue, I see a few issues with your proposed implementation:
remove() may throw an exception if the queue is empty. Your offer method is marked synchronized, but poll, remove, etc. are not, so another thread could drain your queue in between your calls to size() and remove(). I suggest using poll() instead, which won't throw an exception.
Your call to offer may still return false (i.e. not "add" the element) because of another race condition: between checking the size and/or removing an element to make room, a different thread may add an element, filling the queue again. I recommend looping on the result of offer until true is returned (see below).
Calling size(), remove() and offer(E) each acquires a lock, so in the worst-case scenario your code locks and unlocks three times (and even then it might fail to behave as desired due to the issues described above).
I believe the following implementation will get you what you are after:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class EvictingBlockingQueue<E> extends ArrayBlockingQueue<E> {

    public EvictingBlockingQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        // Keep evicting the head until the insert succeeds
        while (!super.offer(e)) poll();
        return true;
    }

    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        while (!super.offer(e, timeout, unit)) poll();
        return true;
    }
}
Note that this implementation can unnecessarily remove an element if between two calls to super.offer(E) another thread removes an element. This seems acceptable to me and I don't really see a practical way around it (ArrayBlockingQueue.lock is package-private and java.util.concurrent is a prohibited package so we can't place an implementation there to access and use the lock, etc.).
When you say "it gets full and prevents other objects from being added", does that mean it would be sufficient to ensure that objects can be added at any time? If that's true, you could simply switch to an unbounded queue such as LinkedBlockingQueue. But be aware of the differences compared with ArrayBlockingQueue:
Linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications.
You can find an overview of JDK queue implementations here.

How to implement asynchronous queue?

Given following variation of queue:
interface AsyncQueue<T> {
    // add a new element to the queue
    void add(T elem);

    // request a single element from the queue via callback;
    // the callback will be called once, for a single polled element, when it is available,
    // so to request multiple elements, poll() must be called multiple times with (possibly) different callbacks
    void poll(Consumer<T> callback);
}
I found out I do not know how to implement this using java.util.concurrent primitives! So the questions are:
What is the right way to implement it using java.util.concurrent package?
Is it possible to do this w/o using additional thread pool?
Your AsyncQueue is very similar to a BlockingQueue such as ArrayBlockingQueue. The Future returned would simply delegate to the ArrayBlockingQueue methods. Future.get would call blockingQueue.poll for instance.
As for your update, I'm assuming the thread that calls add should invoke the callback if there's one waiting? If so it's a simple task of creating one queue for elements, and one queue for callbacks.
Upon add, check if there's a callback waiting; if so, call it, otherwise put the element on the element queue
Upon poll, check if there's an element waiting; if so, call the callback with that element, otherwise put the callback on the callback queue
Code outline:
class AsyncQueue<E> {
    // callbacks waiting for elements
    Queue<Consumer<E>> callbackQueue = new LinkedList<>();
    // elements waiting for callbacks
    Queue<E> elementQueue = new LinkedList<>();

    public synchronized void add(E e) {
        if (callbackQueue.size() > 0)
            callbackQueue.remove().accept(e);
        else
            elementQueue.offer(e);
    }

    public synchronized void poll(Consumer<E> c) {
        if (elementQueue.size() > 0)
            c.accept(elementQueue.remove());
        else
            callbackQueue.offer(c);
    }
}
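Hypothetical usage of the outline above:
AsyncQueue<String> q = new AsyncQueue<>();
q.poll(s -> System.out.println("got " + s)); // no element yet, so the callback is queued
q.add("hello"); // a callback is waiting: it fires immediately and prints "got hello"
q.add("world"); // no callback waiting: the element is queued
q.poll(s -> System.out.println("got " + s)); // an element is waiting: prints "got world"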

PriorityBlockingQueue not blocking?

I have a PriorityBlockingQueue as follows:
BlockingQueue<Robble> robbleListQueue = new PriorityBlockingQueue<Robble>();
Robble implements Comparable<Robble> and I am able to sort lists without issue, so I know my comparisons work.
I also have the following Runnable:
private class RobbleGeneratorRunnable implements Runnable {

    private final BlockingQueue<Robble> robbleQueue;

    public RobbleGeneratorRunnable(BlockingQueue<Robble> robbleQueue) {
        this.robbleQueue = robbleQueue;
    }

    @Override
    public void run() {
        try {
            robbleQueue.put(generateRobble());
        } catch (InterruptedException e) {
            // ...
        }
    }

    private Robble generateRobble() {
        // ...
    }
}
I push a few thousand of these runnables into an ExecutorService and then call shutdown() and awaitTermination().
According to the BlockingQueue JavaDoc, put(...) is a blocking action. However, when I iterate over the items in the queue, they are only mostly in order; some are out of order, which indicates to me that the queue is not blocking properly. Like I said before, I can sort the Robbles just fine.
What could be causing robbleQueue.put(generateRobble()) to not block properly?
According to the javadoc,
The Iterator provided in method iterator() is not guaranteed to
traverse the elements of the priority queue in any particular order.
If you need ordered traversal, consider using
Arrays.sort(pq.toArray())
Add, peek, poll and remove are required to operate in priority sequence, but NOT the iterator.
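A small sketch of the difference; the iterator may visit elements in any order, while poll() always drains in priority order:
import java.util.Arrays;
import java.util.concurrent.PriorityBlockingQueue;

PriorityBlockingQueue<Integer> pq = new PriorityBlockingQueue<>(Arrays.asList(3, 1, 2));

for (Integer i : pq) {
    System.out.println(i); // may print in any order
}

Integer head;
while ((head = pq.poll()) != null) {
    System.out.println(head); // always prints 1, 2, 3
}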
PriorityBlockingQueue is an unbounded queue, and the javadoc for put() states:
Inserts the specified element into this priority queue. As the queue
is unbounded this method will never block.
Why would you expect put() to block?
Iterating a PriorityQueue or PriorityBlockingQueue is explicitly stated in the Javadoc not to be ordered. Only add(), peek(), poll(), and remove() are ordered. This has nothing to do with whether blocking is happening correctly.

How to make my data structure thread safe?

I defined an Element class:
class Element<T> {
    T value;
    Element<T> next;

    Element(T value) {
        this.value = value;
    }
}
I also defined a List class based on Element. It is a typical list, just like in any data structures book; it has insertHead, delete, and other such operations:
public class List<T> implements Iterable<T> {
    private Element<T> head;
    private Element<T> tail;
    private long size;

    public List() {
        this.head = null;
        this.tail = null;
        this.size = 0;
    }

    public void insertHead(T node) {
        Element<T> e = new Element<T>(node);
        if (size == 0) {
            head = e;
            tail = e;
        } else {
            e.next = head;
            head = e;
        }
        size++;
    }

    // Other method code omitted
}
How do I make this List class thread safe?
Put synchronized on all methods? That seems not to work: two threads may work in different methods at the same time and cause a collision.
If I had used an array to keep all the elements in the class, then I could use volatile on the array to make sure only one thread is working with the internal elements. But currently all the elements are linked through object references via each element's next pointer, so I have no way to use volatile.
Using volatile on head, tail and size? This may cause deadlocks if two threads running different methods each hold a resource the other is waiting for.
Any suggestions?
If you put synchronized on every method, the data structure WILL BE thread-safe: by definition, only one thread will be executing any method on the object at a time, and inter-thread ordering and visibility are also ensured. So it is as good as one thread doing all the operations.
Putting a synchronized(this) block won't be any different if the block covers the whole method body. You might get better performance if the covered area is smaller than that.
Doing something like
private final Object LOCK = new Object();

public void method() {
    synchronized (LOCK) {
        doStuff();
    }
}
is considered good practice, although not for performance reasons: doing this ensures that nobody else can use your lock and unintentionally create a deadlock-prone implementation, etc.
In your case, I think you could use a ReadWriteLock to get better read performance. As the name suggests, a ReadWriteLock lets multiple threads through if they are accessing "read methods", methods that do not mutate the state of the object (of course, you have to correctly identify which of your methods are "read methods" and "write methods", and use the ReadWriteLock accordingly!). Also, it ensures that no other thread is accessing the object while a "write method" executes. And it takes care of the scheduling of the reader and writer threads. See the sketch below.
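Here is a minimal sketch of that approach applied to your list, assuming the Element class from the question; only insertHead and size are shown, and the class name is illustrative:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConcurrentList<T> {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private Element<T> head;
    private long size;

    // "write method": takes the exclusive lock
    public void insertHead(T value) {
        lock.writeLock().lock();
        try {
            Element<T> e = new Element<>(value);
            e.next = head;
            head = e;
            size++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // "read method": many readers may hold the read lock at once
    public long size() {
        lock.readLock().lock();
        try {
            return size;
        } finally {
            lock.readLock().unlock();
        }
    }
}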
Another well-known way of making a class thread-safe is "copy on write", where you copy the whole data structure upon each mutation. This is only recommended when the object is mostly "read" and rarely "written".
Here is a sample implementation of that strategy.
http://www.codase.com/search/smart?join=class+java.util.concurrent.CopyOnWriteArrayList
private volatile transient E[] array;

/**
 * Returns the element at the specified position in this list.
 *
 * @param index index of element to return.
 * @return the element at the specified position in this list.
 * @throws IndexOutOfBoundsException if index is out of range <tt>(index
 *         < 0 || index >= size())</tt>.
 */
public E get(int index) {
    // array() returns the current volatile array (accessor not shown here)
    E[] elementData = array();
    rangeCheck(index, elementData.length);
    return elementData[index];
}

/**
 * Appends the specified element to the end of this list.
 *
 * @param element element to be appended to this list.
 * @return true (as per the general contract of Collection.add).
 */
public synchronized boolean add(E element) {
    int len = array.length;
    E[] newArray = (E[]) new Object[len + 1];
    System.arraycopy(array, 0, newArray, 0, len);
    newArray[len] = element;
    array = newArray;
    return true;
}
The reason the write methods have to "copy" is that the assignment array = newArray has to happen in one shot (in Java, assignment of an object reference is atomic), and you may not touch the original array during the manipulation.
I'd look at the source code for the java.util.LinkedList class for a real implementation.
Synchronized by default will lock on the instance of the class, which may not be what you want (especially if Element is externally accessible). If you synchronize all the methods on the same lock, you'll have terrible concurrent performance, but it'll prevent them from executing at the same time, effectively single-threading access to the class.
Also: I see a tail reference, but I don't see a corresponding previous field in Element for a doubly linked list. Any reason?
I'd suggest using a ReentrantLock, which you can pass to every element of the list, though you will have to use a factory to instantiate every element.
Any time you need to take something out of the list, you will lock that very same lock, so you can be sure that no two threads are accessing it at the same time. A rough sketch follows.
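A minimal sketch of the shared-lock idea condensed to its core, one ReentrantLock guarding all mutations; the class and method names are illustrative, and the Element class is assumed from the question:
import java.util.concurrent.locks.ReentrantLock;

public class LockedList<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private Element<T> head;

    public void insertHead(T value) {
        lock.lock(); // every mutator funnels through this one lock
        try {
            Element<T> e = new Element<>(value);
            e.next = head;
            head = e;
        } finally {
            lock.unlock();
        }
    }
}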
