I've been reading about blocking queues and a few questions came up. All the examples I've read demonstrate only situations with one consumer and one producer thread. My question is: suppose we have 1 producer and 3 consumers, and at the current moment all the consumers have called the take() method, but the queue is empty, so they are all waiting for the first element to appear. Which of the consumer threads will take the first element when it appears? The consumer thread that called take() first?
I don't know if you can tell. The real question is: why do you need to know? All listeners should be equivalent. It should not matter which one handles a request. If you have to know, you designed and implemented it incorrectly.
Check ArrayBlockingQueue(int capacity, boolean fair): if fair is true, then queue accesses for threads blocked on insertion or removal are processed in FIFO order.
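For illustration, a minimal sketch of the fair variant (the element type and capacity here are made up):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// fair = true: threads blocked on take() are served in FIFO order,
// so the consumer that called take() first gets the first element
BlockingQueue<String> queue = new ArrayBlockingQueue<>(10, true);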
Which of the consumer threads will take the first element when it appears? The consumer thread that called take() first?
This is tied to the blocking queue implementation as well as the JVM in question, but the short answer is most likely yes. Each of the threads will be waiting on a condition, and the first thread in the wait queue will be awoken when the condition is signaled.
That said, you should not depend on this functionality since it is very dependent on the particulars of the blocking queue in question as well as the JVM and OS version.
I agree with duffymo, the idea of having multiple threads waiting indefinitely for some new elements to pop up in the queue does not sound very well structured.
Also, if you need to know which one of the consumers removes the element, that suggests the consumers are actually doing different things, producing different outputs in different scenarios depending on the order in which the consumers perform the take(). If that is the case, you might want to have different queues for the different threads.
If you are not planning to change your code, what about having the threads perform a poll() on a regular basis?
The ArrayBlockingQueue will block the producer thread if the queue is full and it will block the consumer thread if the queue is empty.
Doesn't this concept of blocking go against the very idea of multithreading? Say I have a 'main' thread, and I want to delegate all logging activities to another thread. So basically, inside my main thread I create a Runnable to log the output, and I put that Runnable on an ArrayBlockingQueue. The whole purpose of doing this is to have the 'main' thread return immediately, without wasting any time on an expensive logging operation.
But if the queue is full, the main thread will be blocked and will wait until a spot is available. So how does it help us?
The queue doesn't block out of spite, it blocks to introduce an additional quality into the system. In this case, it's prevention of starvation.
Picture a set of threads, one of which produces work units really fast. If the queue were allowed unbounded growth, that rapid producer could potentially hog all the queuing capacity. Sometimes, preventing such side effects is more important than having all threads unblocked.
I think this is the designer's decision. If they choose blocking mode, ArrayBlockingQueue provides it with the put() method. If the designer doesn't want blocking mode, ArrayBlockingQueue has the offer() method, which will return false when the queue is full, but then they need to decide what to do with the rejected logging event.
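A minimal sketch of both choices (the queue, its capacity, and the LogEvent type are invented for illustration; put() throws InterruptedException, which real code has to handle):

BlockingQueue<LogEvent> logQueue = new ArrayBlockingQueue<>(1000);

// blocking mode: the main thread waits until a slot frees up
logQueue.put(event);

// non-blocking mode: offer() returns false when the queue is full
if (!logQueue.offer(event)) {
    // decide what to do with the rejected event: drop it, count it,
    // or log it synchronously as a fallback
}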
In your example I would consider blocking to be a feature: It prevents an OutOfMemoryError.
Generally speaking, one of your threads is just not fast enough to cope with the assigned load. So the others must slow down somehow in order not to endanger the whole application.
On the other hand, if the load is balanced, the queue will not block.
Blocking is a necessary function of multithreading. You must block to have synchronized access to data. It does not defeat the purpose of multithreading.
I would suggest throwing an exception when the producer attempts to submit an item to a queue that is full. There are methods to test whether the queue is full beforehand, I believe.
This would allow the invoking code to decide how it wants to handle a full queue.
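ArrayBlockingQueue supports both of these out of the box: add() throws IllegalStateException when the queue is full, and remainingCapacity() lets you test beforehand. A rough sketch (queue and item are placeholders):

// note: this check is racy if several producers share the queue
if (queue.remainingCapacity() == 0) {
    // decide up front how to handle the full queue
}

try {
    queue.add(item); // throws IllegalStateException when full
} catch (IllegalStateException e) {
    // the invoking code decides how to handle a full queue
}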
If the execution order when processing items from the queue is unimportant, I recommend using a thread pool (known as an ExecutorService in Java).
It depends on the nature of your multithreading philosophy. For those of us who favour Communicating Sequential Processes, a blocking queue is nearly perfect. In fact, the ideal would be one where no message can be put into the queue at all unless the receiver is ready to receive it.
So no, I don't think that a blocking queue goes against the very purpose of multi-threading. In fact, the scenario that you describe (the main thread eventually getting stalled) is a good illustration of the major problem with the actor-model of multi-threading; you've no idea whether or not it will deadlock / block, and you can't exhaustively test for it either.
In contrast, imagine a blocking queue that is zero messages deep. That way for the system to work at all you'd have to find a way to ensure that the logger is always guaranteed to be able to receive a message from the main thread. That's CSP. It might mean that in your hypothetical logger thread you have to have application defined buffering (as opposed to some framework developer's best guess of how deep a FIFO should be), a fast I/O subsystem, checks for keeping up, ways of dealing with falling behind, etc. In short it doesn't let you get away with it, you're forced to address every aspect of your system's performance.
That is of course harder, but that way you end up with a system that's definitely OK rather than the questionable "maybe" that you have if your blocking queues are an unknown number of messages deep.
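In Java terms, a queue that is zero messages deep already exists: SynchronousQueue. Every put() blocks until another thread is ready to take(), which gives you the CSP-style rendezvous described above. A minimal sketch (InterruptedException handling omitted):

import java.util.concurrent.SynchronousQueue;

SynchronousQueue<String> handoff = new SynchronousQueue<>();

// main thread: blocks until the logger is actually ready to receive
handoff.put("log line");

// logger thread: blocks until the main thread hands something over
String line = handoff.take();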
It sounds like you have the general idea right of why you'd use something like an ArrayBlockingQueue to talk between threads.
Having a blocking queue gives you the option to do something different in case something goes wrong with your background worker threads, rather than blindly adding more requests to the queue. If there is room in the queue, there is no blocking.
For your specific use case, though, I would use ExecutorService rather than reading/writing queues directly, which creates a pool of background worker threads:
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ExecutorService.html
ExecutorService pool = Executors.newFixedThreadPool(poolSize); // poolSize worker threads sharing one internal queue
pool.submit(myRunnable); // returns immediately; a free worker picks up the task
A multithreaded program is non-deterministic insofar as you can't say beforehand: n producer actions will take exactly as long as m consumer actions. Therefore, synchronization between n producers and m consumers is necessary in every case.
You'll want to choose the queue size so that the number of active producers and consumers is maximized most of the time. But Java's thread model does not guarantee that any particular consumer will run unless it is the only unblocked thread. (Of course, on multi-core CPUs it is very likely that the consumer will run.)
You have to make a choice about what to do when a queue is full. In the case of ArrayBlockingQueue, that choice is to wait.
Another option would be to just throw away new objects if the queue is full; you can achieve this with offer().
You have to make a trade-off.
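There is also a middle ground: the timed offer() waits a bounded amount of time for space before giving up (queue and item are placeholders; the timed offer() throws InterruptedException):

// wait up to 50 ms for a free slot, then discard the item
if (!queue.offer(item, 50, TimeUnit.MILLISECONDS)) {
    // handle the discarded item
}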
I am trying to find a solution to a queuing problem I have. In the typical scenario, the producer puts something on the queue and the consumer takes it off. What about a producer that also consumes, and a consumer that initially takes something off the queue and then puts something (like a result) back on the queue? In other words, there is a two-way flow. Is it possible to synchronize two threads to do this effectively? Naively, I had put a loop in the run() method of one of my threads, only to discover that the other thread would only run once and then die. Apologies if this appears vague; hopefully someone can point me in the right direction.
Cheers
If you just use a ConcurrentLinkedQueue, you can add and remove elements from any thread. There is no strict distinction between producer and consumer threads, and the queue object guarantees the consistency of each operation.
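One way to structure the two-way flow from the question is with two of these queues, one per direction; a rough sketch (Task, Result, and process() are placeholders):

ConcurrentLinkedQueue<Task> tasks = new ConcurrentLinkedQueue<>();
ConcurrentLinkedQueue<Result> results = new ConcurrentLinkedQueue<>();

// worker loop: drain work from one queue, push results onto the other
Task t;
while ((t = tasks.poll()) != null) { // poll() returns null when empty; it never blocks
    results.offer(process(t));
}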
I have three of Java's LinkedBlockingQueue instances, and I'd like to read from them (the take operation) using only one thread. The naive approach is to have one thread per queue.
Is there anything like the UNIX select system call for blocking queues in Java?
Thanks.
Well, those BlockingQueues were really meant to be serviced by their own Threads.
Something I'd consider trying is to set up a 4th queue for much smaller items, say Booleans, and have every offer() on the 3 other queues be accompanied by inserting a Boolean into that 4th queue. Your thread can then go to sleep on the 4th queue, and when it wakes up it can peek() into the other 3 to find out where to get the goods.
A highly inelegant solution, I think, and I suspect there are possible race conditions where you won't be cleanly woken up sometimes. But it should basically work.
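A rough sketch of that idea, with one twist: inserting the queue's index instead of a Boolean, so the reader knows where to look without peeking (names invented; InterruptedException handling omitted):

LinkedBlockingQueue<String> q1 = new LinkedBlockingQueue<>();
LinkedBlockingQueue<String> q2 = new LinkedBlockingQueue<>();
LinkedBlockingQueue<String> q3 = new LinkedBlockingQueue<>();
LinkedBlockingQueue<Integer> signals = new LinkedBlockingQueue<>(); // the 4th queue

// producers pair every insert with a signal saying which queue received it
q1.put(item);
signals.put(1);

// the single reader sleeps on the signal queue only
int which = signals.take(); // wakes when any queue has data
String next = (which == 1 ? q1 : which == 2 ? q2 : q3).take(); // each signal matches exactly one item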
We are developing a Java application with several worker threads. These threads will have to deliver a lot of computation results to our UI thread. The order in which the results are delivered does not matter.
Right now, all threads simply push their results onto a synchronized Stack - but this means that every thread must wait for the other threads before results can be delivered.
Is there a data structure that supports simultaneous insertions with each insertion completing in constant time?
Thanks,
Martin
ConcurrentLinkedQueue is designed for high contention. Producers enqueue stuff on one end and consumers collect elements at the other end, so everything will be processed in the order it's added.
ArrayBlockingQueue is better suited for lower contention, and has lower space overhead.
Edit: although that's not what you asked for. Simultaneous inserts? You may want to give every thread its own output queue (say, an ArrayBlockingQueue) and then have the UI thread poll the separate queues. However, I'd think you'll find one of the two Queue implementations above sufficient.
Right now, all threads simply push their results onto a synchronized Stack - but this means that every thread must wait for the other threads before results can be delivered.
Do you have any evidence indicating that this is actually a problem? If the computation performed by those threads is even the least little bit complex (and you don't have literally millions of threads), then lock contention on the result stack is simply a non-issue because when any given thread delivers its results, all others are most likely busy doing their computations.
Take a step back and evaluate whether performance is the key design consideration here. Don't think, know: does profiling back it up?
If not, I'd say a bigger concern is clarity and readability of design, and not introducing new code to maintain. It just so happens that, if you're using Swing, there is a library for doing exactly what you're trying to do, called SwingWorker.
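A minimal SwingWorker sketch, assuming the results are strings (compute() is a placeholder for your actual work):

import java.util.List;
import javax.swing.SwingWorker;

SwingWorker<Void, String> worker = new SwingWorker<Void, String>() {
    @Override
    protected Void doInBackground() {
        publish(compute()); // runs on a background worker thread
        return null;
    }

    @Override
    protected void process(List<String> chunks) {
        // runs on the Event Dispatch Thread; safe to update the UI here
    }
};
worker.execute();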
Take a look at java.util.concurrent.ConcurrentLinkedQueue, java.util.concurrent.ConcurrentHashMap or java.util.concurrent.ConcurrentSkipListSet. They might do what you need. ConcurrentSkipListSet, for instance, claims to have "expected average log(n) time cost for the contains, add and remove operations and their variants. Insertion, removal, and access operations safely execute concurrently by multiple threads."
Two other patterns you might want to look at are
each thread has its own collection; when polled, it returns the collection and creates a new one, so the collection only holds the pending items between polls. The thread needs to protect operations on its own collection, but there is no contention between worker threads. This is blocking (a thread cannot add to its collection while the UI thread pulls updates from it), but it reduces contention.
each thread has its own collection, and appends the results to a common queue that is protected using Lock.tryLock(). The thread continues processing if it fails to acquire the lock. This makes it less likely that a thread will block waiting for the shared queue (see the sketch below).
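A sketch of that second pattern (Result is a placeholder; Lock and ReentrantLock come from java.util.concurrent.locks):

private final Lock lock = new ReentrantLock();
private final List<Result> shared = new ArrayList<>(); // drained by the UI thread
private final List<Result> local = new ArrayList<>();  // one such list per worker thread

void deliver(Result r) {
    local.add(r); // always cheap, never contended
    if (lock.tryLock()) { // returns false immediately if another thread holds the lock
        try {
            shared.addAll(local); // flush the backlog while we hold the lock
            local.clear();
        } finally {
            lock.unlock();
        }
    } // otherwise keep accumulating; retry on the next result
}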
I need a queue that can be processed by multiple readers.
The readers will dequeue an element and send it to a REST service.
What's important to note:
Each reader should be dequeueing different elements. If the queue has elements A, B & C, Thread 1 should dequeue A and Thread 2 should dequeue B in concurrent fashion. And so forth until there's nothing in the queue.
I understand that it is CPU-intensive to always run in a busy loop, peeking into the queue for items. So I am not sure if a blocking queue is a good option.
What are my options?
ConcurrentLinkedQueue or LinkedBlockingQueue are two options that immediately come to mind, depending on whether you want blocking behavior or not.
As Adamski notes, the take() method of the LinkedBlockingQueue does not needlessly burn cpu cycles while waiting for data to arrive.
I am not sure from your question description whether the threads need to dequeue elements in a strict round-robin fashion. Assuming this isn't a restriction, you can use BlockingQueue's take() method, which will cause the thread to block until data is available (therefore not consuming CPU cycles).
Also note that take() implementations are atomic (e.g. in LinkedBlockingQueue): if multiple threads are blocked on take() and a single element is enqueued, then only one thread's take() call will return; the others will remain blocked.
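A minimal consumer sketch along those lines (sendToRestService() is a placeholder for the actual REST call):

BlockingQueue<String> queue = new LinkedBlockingQueue<>();

Runnable consumer = () -> {
    try {
        while (true) {
            String element = queue.take(); // blocks without burning CPU; atomic handoff
            sendToRestService(element);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag and exit
    }
};

// each reader dequeues a different element; start as many as you need
new Thread(consumer).start();
new Thread(consumer).start();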
The major difference between ConcurrentLinkedQueue and LinkedBlockingQueue is throughput. Under moderate thread contention, ConcurrentLinkedQueue greatly outperforms the BlockingQueue implementations. Under heavy contention, however, a BlockingQueue is a slightly better choice, as it will appropriately put contending threads into the waiting thread set.