What's the difference between dequeue() and front() in a Java Queue? - java

I have two definitions from some college notes I'm reading.
"dequeue(): Remove the object from the front of the queue and return it; an error occurs if the queue is empty"
"front(): Return the front object in the queue, BUT DO NOT remove it; an error occurs if the queue is empty"
I understand the dequeue method but the front method has me a bit perplexed. Just wondering if someone has a good example of the front method being used so I can get my head around the difference between the two. Thanks.

Imagine a scenario with a single producer and multiple consumers, where a particular thread-safe queue is used as the buffer between the producer and the consumers.
Now imagine that a particular consumer only has the ability to process a certain type of data from the queue. It could use the front() method to peek at the next data to see if it actually can process it and then call dequeue() if it can. If it cannot, it simply won't call dequeue(), leaving the queue unmodified.
Arguably, in the same scenario, you could call dequeue() to obtain the data, examine it, and determine whether you can process it; if not, you would add it back to the queue. But this takes a lot more effort, since the queue is modified twice, and putting an element back at the front of a queue may be expensive or outright impossible. Most likely you would be adding the element to the end of the queue, disrupting the processing order.
The front() method optimizes queue access by keeping modifications to a minimum, since modifying the queue is usually more expensive than simply peeking at it. By looking at the first element without removing it, consumers can decide whether they actually need to modify the queue, reducing the number of modifications compared with calling dequeue() and then re-queuing the data.
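A minimal sketch of that pattern using Java's own Queue interface, where peek() plays the role of front() and poll() the role of dequeue(); the canProcess() check and the String message type are hypothetical stand-ins:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class SelectiveConsumer {
    // Thread-safe queue shared between the producer and the consumers
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    void consumeIfPossible() {
        // front() in the notes, peek() in Java: look at the head without removing it
        String next = queue.peek();
        if (next != null && canProcess(next)) {
            // dequeue() in the notes, poll() in Java: only now remove it
            process(queue.poll());
        }
        // Otherwise the queue is left unmodified for another consumer.
        // (With several consumers the peek/poll pair would need extra coordination
        //  to stay atomic; this sketch only illustrates the basic idea.)
    }

    // Hypothetical check for "data this consumer can handle"
    private boolean canProcess(String msg) { return msg.startsWith("A"); }

    private void process(String msg) { System.out.println("Processed " + msg); }
}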

Assume you have a Queue of Integers that you are iterating over for whatever reason. You have a label on your UI that shows the upcoming number. To update that label, you would use yourQueue.front() to retrieve the number in question without removing it. When your next calculation starts, your calculation method would use yourQueue.dequeue() to retrieve the next element and remove it from the queue.
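In real Java code those operations map onto the Queue interface's peek() and poll(); a small sketch of the label scenario, assuming a plain ArrayDeque and made-up numbers:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class LabelDemo {
    public static void main(String[] args) {
        Queue<Integer> yourQueue = new ArrayDeque<>(List.of(3, 1, 4, 1, 5));

        // front(): look at the upcoming number for the UI label without removing it
        Integer upcoming = yourQueue.peek();
        System.out.println("Next up: " + upcoming);   // Next up: 3

        // dequeue(): when the calculation starts, remove it and use it
        Integer current = yourQueue.poll();
        System.out.println("Processing: " + current); // Processing: 3
    }
}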

Related

How to correctly use BlockingQueue in Java when I want to drop messages from its head

I'm writing an app for Android that processes real-time data.
My app reads binary data from a data bus (CAN), parses it, and displays it on the screen.
The app reads data in a background thread. I need to transfer data rapidly from one thread to another, and the displayed data should be as current as possible.
I've found a Java queue that almost implements the required behavior: LinkedBlockingQueue. I plan to set a hard capacity limit for this queue (about 100 messages).
The consumer thread should read data from the queue with the take() method. But the producer thread can't wait for the consumer, so it can't use the standard put() method (because it blocks).
So, I plan to put messages to my queue using the following construction:
while (!messageQueue.offer(message)) {
    messageQueue.poll();
}
That is, the oldest message should be removed from the queue to make room for the new, more current data.
Is this good practice? Or have I missed some important details?
I can't see anything wrong with it. You know what you are doing (losing the head record), and it's your call to use the API the way you want. I personally prefer ArrayBlockingQueue, though (fewer temporary objects).
This should be what you're looking for: Size-limited queue that holds last N elements in Java
The top answer refers to an Apache library queue which will drop elements.
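For reference, a self-contained sketch of the drop-oldest idea discussed above, using the ArrayBlockingQueue the answer prefers; the capacity of 100 comes from the question, and String stands in for the real message type:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LatestDataBuffer {
    // Hard limit of about 100 messages, as in the question
    private final BlockingQueue<String> messageQueue = new ArrayBlockingQueue<>(100);

    // Producer side: never block; evict the oldest entry until the new one fits
    public void publish(String message) {
        while (!messageQueue.offer(message)) {
            messageQueue.poll(); // drop the stalest message to make room
        }
    }

    // Consumer side: block until data is available
    public String awaitNext() throws InterruptedException {
        return messageQueue.take();
    }
}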

Why isn't there a method which blocks until the queue becomes non-empty?

The BlockingQueue documentation says that a method which examines the queue and blocks until it becomes non-empty is "not applicable". But it's not clear to me why. Could you explain that?
Maybe you are misunderstanding that part of the documentation (the table which groups the different methods into four categories)?
The javadoc is simply saying that the BlockingQueue interface has no need for a method which may block (or time out) when it examines the content of the queue, unlike actions such as inserting or removing items into/from the queue, where you may need the option of waiting or timing out if the queue is 'unavailable' at that precise moment.
I don't know exactly why this is, but I can make an educated guess: if you want to look inside a queue, you may get an exception, or you may get a value returned, but then the code will just move on from there -- you can't really block or time out.
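A small sketch against the standard BlockingQueue API illustrating that asymmetry: the examine operation (peek()) only comes in an immediate, non-blocking form, while removal comes in non-blocking, timed, and blocking flavours:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ExamineDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Examine: peek() returns immediately (null if empty); there is no blocking variant
        System.out.println("peek() on empty queue: " + queue.peek());

        // Remove: here the interface does offer non-blocking, timed and blocking forms
        String immediate = queue.poll();                        // null, returns at once
        String timed = queue.poll(100, TimeUnit.MILLISECONDS);  // null after the timeout
        queue.put("hello");
        String blocking = queue.take();                         // waits until an element arrives
        System.out.println(immediate + " / " + timed + " / " + blocking);
    }
}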

is there a blocking queue in Java that only allows peek?

I need a blocking queue that has a size of 1, and every time put is applied it removes the last value and adds the next one. The consumers would be a thread pool in which each thread needs to read the message as it gets put on the queue and decide what to do with it, but they shouldn't be able to take from the queue since all of them need to read from it.
I was considering just taking and putting every time the producer sends out a new message, but having only peek in the run method of the consumers will result in them constantly peeking, won't it? Ideally the message will disappear as soon as the peeking stops, but I don't want to use a timed poll as it's not guaranteed that every consumer will peek the message in time.
My other option at the moment is to iterate over the collection of consumers and call a public method on them with the message, but I really don't want to do that since the system relies on real time updates, and a large collection will take a while to iterate through completely if I'm going through each method call on the stack.
After some consideration, I think you're best off with each consumer having its own queue and the producer putting its messages on all queues.
If there are few consumers, then putting the messages on those few queues will not take too long (except when the producer blocks because a consumer can't keep up).
If there are many consumers, this setup is highly preferable to one where many consumers are contending with each other over a single queue.
At the very least this would be a good measure to compare alternate solutions against.
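A hedged sketch of that per-consumer-queue layout; the class and method names are made up, and String stands in for the real message type:

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class Broadcaster {
    // One bounded queue per consumer; each consumer only ever reads its own queue
    private final List<BlockingQueue<String>> consumerQueues = new CopyOnWriteArrayList<>();

    // Each consumer registers once and then simply calls take() on its own queue
    public BlockingQueue<String> register() {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1); // size-1 buffer, as in the question
        consumerQueues.add(queue);
        return queue;
    }

    // Producer: put the message on every consumer's queue
    public void publish(String message) throws InterruptedException {
        for (BlockingQueue<String> queue : consumerQueues) {
            queue.put(message); // blocks only if that particular consumer hasn't kept up
        }
    }
}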

Iterating LinkedBlockingQueue in batches without removing items

Is there a way to iterate a LinkedBlockingQueue starting with a specific index number?
I have a LinkedBlockingQueue that contains a list of changes to make to a game world. It works perfectly fine when I'm actually making those changes, which involves iterating the queue, using the object polled this iteration, and then removing it from the queue.
The next time the process runs, iterating the queue from the beginning works because we had removed all "used" items.
However, I also have a preview mode, where the changes from the queue need to be read and shown to the player, but not actually removed from the queue yet (since they're not officially "used").
These are all done in batches of 1000 so we don't overload network traffic or the clients.
I'd rather not have to re-iterate the queue each "batch" and use something to tell us to continue on until a specific index - and I'd rather not create a secondary queue or "holder".
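For context, a minimal sketch of the "apply" loop described above, assuming hypothetical WorldChange and applyChange names, which drains up to 1000 changes per batch by polling (and therefore removing) each one:

import java.util.concurrent.LinkedBlockingQueue;

public class ChangeProcessor {
    private final LinkedBlockingQueue<WorldChange> changes = new LinkedBlockingQueue<>();

    // "Apply" mode: consume up to 1000 changes, removing each one as it is used
    public void applyBatch() {
        for (int i = 0; i < 1000; i++) {
            WorldChange change = changes.poll(); // removes the head, or returns null if empty
            if (change == null) {
                break;
            }
            applyChange(change);
        }
    }

    private void applyChange(WorldChange change) {
        // hypothetical game-world update
    }

    // Hypothetical placeholder for a queued change
    static class WorldChange { }
}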

java single writer and multiple reader

Sorry if this was asked before, but I could not find my exact scenario.
Currently I have a background thread that adds an element to a list and removes the old data every few minutes. Theoretically there can be at most 2 items in the list at a time, and the items are immutable. I also have multiple threads that grab the first element in the list whenever they need it. In this scenario, is it necessary to explicitly serialize operations on the list? My assumption is that since I am just grabbing references to the elements, it should not matter if the background thread deletes elements from the list, because the reading thread already holds a copy of the reference before the deletion. There is probably a better way to do this. Thanks in advance.
Yes, synchronization is still needed here, because adding and removing are not atomic operations. If one thread calls add(0, new Object()) at the same time another calls remove(0), the result is undefined; for example, the remove() might end up having no effect.
Depending on your usage, you might be able to use a non-blocking, thread-safe collection like ConcurrentLinkedQueue. However, given that you are pushing one change every few minutes, I doubt you are gaining much in performance by avoiding synchronization.
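A hedged sketch of that non-blocking option: a single writer refreshes a ConcurrentLinkedQueue every few minutes while readers only peek at the head; the Item class is a placeholder for the immutable element type:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LatestItemHolder {
    // Thread-safe, non-blocking queue holding at most ~2 immutable items
    private final Queue<Item> items = new ConcurrentLinkedQueue<>();

    // Background writer thread, runs every few minutes
    public void refresh(Item newItem) {
        items.add(newItem);        // publish the new element first...
        while (items.size() > 2) {
            items.poll();          // ...then drop the stale ones
        }
    }

    // Reader threads: grab a reference to the current first element
    public Item current() {
        return items.peek();       // may be null before the first refresh
    }

    // Placeholder for the immutable element type from the question
    public static final class Item {
        final String data;
        Item(String data) { this.data = data; }
    }
}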
