I have three Java LinkedBlockingQueue instances and I'd like to read from them (the take operation) using only one thread. The naive approach is to have one thread per queue.
Is there anything like the UNIX select system call for blocking queues in Java?
Thanks.
Well, those BlockingQueues were really meant to be serviced by their own Threads.
Something I'd consider trying is to set up a 4th queue for much smaller items, say Booleans, and have every offer() on the 3 other queues be accompanied by an insertion of a Boolean into that 4th queue. Your thread can then go to sleep on the 4th queue, and when it wakes up it can peek() in the other 3 to find out where to get the goods.
A highly inelegant solution, I think, and I suspect there are possible race conditions where you won't be cleanly woken up sometimes. But it should basically work.
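A minimal sketch of that idea, with one tweak: instead of a plain Boolean plus peek(), the 4th queue carries a marker saying which queue was written, so the consumer knows exactly where to take() from. All names here (Source, qA, produceA, ...) are made up for illustration.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative names only: Source, qA/qB/qC and produceA are not from the question.
enum Source { A, B, C }

class SelectLikeConsumer {
    private final BlockingQueue<String> qA = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> qB = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> qC = new LinkedBlockingQueue<>();
    // The 4th queue: producers put a marker here after every insertion.
    private final BlockingQueue<Source> signals = new LinkedBlockingQueue<>();

    // Producers call this instead of offering to qA directly; produceB/produceC are analogous.
    void produceA(String item) throws InterruptedException {
        qA.put(item);
        signals.put(Source.A);
    }

    // The single consumer thread blocks on the signal queue only.
    void consumeLoop() throws InterruptedException {
        while (true) {
            Source s = signals.take();           // blocks until some producer has signalled
            String item;
            switch (s) {
                case A: item = qA.take(); break; // the matching item was put before the signal
                case B: item = qB.take(); break;
                default: item = qC.take(); break;
            }
            System.out.println("got " + item + " from queue " + s);
        }
    }
}
Because each producer puts the item before the marker, every marker the consumer takes corresponds to an item that is already sitting in the matching queue, which sidesteps the wake-up race mentioned above.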
I have learned that we can use a BlockingQueue instead of the classical wait() and notify() when implementing the producer-consumer pattern. My question is: which implementation is more efficient? An article about blocking queues (http://javarevisited.blogspot.com/2012/02/producer-consumer-design-pattern-with.html#ixzz2lczIZ3Mo) states that "you don't require to use wait and notify to communicate between Producer and Consumer". Does this simplicity come at the cost of efficiency?
The BlockingQueue implementations will generally be faster, because they do not use wait/notify or coarse synchronized blocks for queue access. The java.util.concurrent classes are built on ReentrantLock and on lock-free algorithms that use the Atomic* classes.
Think about a queue of 100 elements and 1000 threads wanting to do their work. With a synchronized implementation, for each element 999 threads need to wait until 1 thread has picked its task. With a lock-free algorithm, 100 threads can pick their tasks simultaneously, and only the other 900 have to wait.
If the number of objects produced/consumed every second is less than 100,000, you won't be able to see a difference between the standard implementations and your own.
Otherwise, you have the following options to speed up your code:
Use ArrayBlockingQueue instead of LinkedBlockingQueue: there is no need to create a wrapper node object for each transferred message. Another advantage of ArrayBlockingQueue is that the producer thread is blocked when the queue is full; and indeed, the producer should slow down if the consumer is not fast enough, otherwise we would end up exhausting memory.
Send messages in batches, say in arrays of 10 messages each. This reduces contention between threads on the shared object (see the sketch after this list).
If you have to send tens of millions of messages per second, look at the LMAX Disruptor.
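A minimal sketch of the first two points, assuming a bounded ArrayBlockingQueue of batches of strings; the batch size of 10, the capacity of 1,000 and the message type are placeholders:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BatchingExample {
    static final int BATCH_SIZE = 10;
    // Bounded queue: put() blocks the producer when the consumer falls behind (back-pressure).
    static final BlockingQueue<List<String>> QUEUE = new ArrayBlockingQueue<>(1_000);

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    List<String> batch = QUEUE.take();   // one contended operation per 10 messages
                    for (String msg : batch) {
                        // process msg ...
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        List<String> batch = new ArrayList<>(BATCH_SIZE);
        for (int i = 0; i < 100_000; i++) {
            batch.add("message-" + i);
            if (batch.size() == BATCH_SIZE) {
                QUEUE.put(batch);                        // blocks if the queue is full
                batch = new ArrayList<>(BATCH_SIZE);
            }
        }
        if (!batch.isEmpty()) {
            QUEUE.put(batch);                            // flush the final partial batch
        }
    }
}
Because the queue holds whole batches, the threads contend once per 10 messages instead of once per message, and put() supplies the back-pressure described above.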
BlockingQueue is simply a class that puts wait() and notify() to this common use. Generally, doing it yourself is just reinventing the wheel, and only worth it if you have lots of producers and consumers and you can optimize in some way that's specific to your code.
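For reference, here is a rough sketch of the wheel being reinvented: a hand-rolled bounded buffer built on wait()/notifyAll() (the class name and the ArrayDeque backing store are illustrative), next to the one-line java.util.concurrent equivalent.
import java.util.ArrayDeque;
import java.util.Deque;

// A simplified hand-rolled bounded buffer using wait()/notifyAll().
class HandRolledQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    HandRolledQueue(int capacity) { this.capacity = capacity; }

    synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                 // producers wait while the buffer is full
        }
        items.addLast(item);
        notifyAll();                // wake any waiting consumers
    }

    synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                 // consumers wait while the buffer is empty
        }
        T item = items.removeFirst();
        notifyAll();                // wake any waiting producers
        return item;
    }
}
// The java.util.concurrent equivalent is one line:
// BlockingQueue<T> queue = new ArrayBlockingQueue<>(capacity);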
I've been reading about blocking queues and certain questions came up. All the examples that I've read demonstrated only situations where there is one consumer and one producer thread. The question is: suppose we have 1 producer and 3 consumers, and at the current moment all consumers have called the take() method, but the queue is empty, so they are all waiting for the first element to appear. Which of the consumer threads will take the first element when it appears? The consumer thread which called take() first?
I don't know if you can tell. The real question is: why do you need to know? All listeners should be equivalent. It should not matter which one handles a request. If you have to know, you designed and implemented it incorrectly.
Check ArrayBlockingQueue(int capacity, boolean fair). If fair is true, then queue accesses for threads blocked on insertion or removal are processed in FIFO order.
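A minimal sketch of constructing such a fair queue; the capacity of 100 and the String element type are arbitrary:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class FairQueueExample {
    // fair = true: threads blocked in take() (or put()) are granted access in FIFO order,
    // at some cost in throughput.
    static final BlockingQueue<String> QUEUE = new ArrayBlockingQueue<>(100, true);
}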
Which of the consumer threads will take the first element when it appears? The consumer thread which called take() first?
This is tied to the blocking queue implementation as well as the JVM in question, but the short answer is most likely yes. Each of the threads will be waiting on a condition, and the first thread in the wait queue will be awoken when the condition is signaled.
That said, you should not depend on this functionality since it is very dependent on the particulars of the blocking queue in question as well as the JVM and OS version.
I agree with duffymo; the idea of having multiple threads waiting indefinitely for new elements to pop up in the queue does not sound very well structured.
Also, if you need to know which one of the consumers removes the element, that makes me think the consumers are actually doing different things, producing different outputs in different scenarios depending on the order in which the consumers perform the take(). If that is the case, you might want to have different queues for the different threads.
If you are not planning to change your code, what about having the threads perform a poll() on a regular basis?
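If you go the polling route, a sketch like the following avoids busy-waiting by using poll() with a timeout; the 500 ms timeout, the queue and the class name are illustrative:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class PollingConsumer implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // Wait up to 500 ms instead of blocking indefinitely.
                String item = queue.poll(500, TimeUnit.MILLISECONDS);
                if (item == null) {
                    continue;        // nothing arrived: check shutdown flags, do housekeeping, retry
                }
                // process item ...
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // preserve the interrupt and exit the loop
            }
        }
    }
}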
I am trying to find a solution for a queuing problem I have. In the typical scenario, the producer puts something on the queue and the consumer takes it off. What if we have a producer that also consumes, and a consumer that initially takes something off the queue and then puts something (like a result) back on the queue? In other words, there is a two-way flow. Is it possible to synchronize two threads to do this effectively? Naively, I put a loop in the run method of one of my threads, only to discover that the other thread runs once and then dies. Apologies if this appears vague; hopefully someone can point me in the right direction.
Cheers
If you just use a ConcurrentLinkedQueue, you can offer to and poll from it from any thread. There is no strict distinction between producer and consumer threads, and the queue object guarantees the consistency of each operation.
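A possible sketch of the two-way flow with ConcurrentLinkedQueue (all names are illustrative). Note that poll() returns null instead of blocking, so each side has to loop or back off while its queue is empty; a pair of BlockingQueues would let the threads sleep instead.
import java.util.concurrent.ConcurrentLinkedQueue;

class TwoWayFlow {
    static final ConcurrentLinkedQueue<String> tasks = new ConcurrentLinkedQueue<>();
    static final ConcurrentLinkedQueue<String> results = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            while (true) {
                String task = tasks.poll();          // non-blocking: null if nothing is queued
                if (task == null) {
                    Thread.onSpinWait();             // back off briefly and try again
                    continue;
                }
                results.offer("result of " + task);  // put the answer back for the other side
            }
        });
        worker.setDaemon(true);
        worker.start();

        for (int i = 0; i < 5; i++) {
            tasks.offer("task-" + i);
        }
        int received = 0;
        while (received < 5) {
            String r = results.poll();
            if (r != null) {
                System.out.println(r);
                received++;
            }
        }
    }
}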
Theoretical question. If I have two SwingWorkers and an outputObject with the method
public synchronized void outputToPane(String output)
If each SwingWorker has a loop in it as shown:
//SwingWorker1
while (true) {
    outputObject.outputToPane("garbage");
}

//SwingWorker2
Integer i = 0;
while (true) {
    outputObject.outputToPane(i.toString());
    i++;
}
How would those interact? Does the outputToPane method receive an argument from one thread and block the other one until it finishes with the first, or does it build a queue of tasks that will execute in the order received, or some other option?
The reason I ask:
I have two threads that will be doing some heavy number crunching, one on a non-pausable data stream and the other from a file. I would like them both to output to a central messaging area when they hit certain milestones; however, I CANNOT risk the data stream getting blocked while it waits for the other thread to finish with the output, or I will risk losing data.
synchronized only guarantees mutual exclusion. It is not fair, which in practice means that your workers might alternate quite nicely, or the first one might get precedence and block the second one completely until it is finished, or anything in between.
See the ReentrantLock docs for more about fairness. Maybe you could consider using it instead of synchronized. Probably an even better alternative would be using a queue.
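One queue-based sketch for your situation: the workers only offer() onto a ConcurrentLinkedQueue, which never blocks, and a javax.swing.Timer drains it on the Event Dispatch Thread. The class name, the 100 ms period and the JTextArea are assumptions for illustration.
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.swing.JTextArea;
import javax.swing.Timer;

class MessageArea {
    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();
    private final JTextArea pane = new JTextArea();

    MessageArea() {
        // Runs every 100 ms on the Event Dispatch Thread and drains whatever has accumulated.
        new Timer(100, e -> {
            String msg;
            while ((msg = pending.poll()) != null) {
                pane.append(msg + "\n");
            }
        }).start();
    }

    // Called from either worker thread; offer() on ConcurrentLinkedQueue never blocks,
    // so the data-stream thread is never held up by the other worker or by the UI.
    void outputToPane(String output) {
        pending.offer(output);
    }
}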
I would advise you to have two output objects in your messaging area, because if one thread starts to modify the output then the other one will have to wait for it to finish. Even if you can optimize it to be fast enough, the actual display of info would make your threads slow each other down over time.
Although you might try to synchronize them, the result might not always be 100% safe.
We are developing a Java application with several worker threads. These threads will have to deliver a lot of computation results to our UI thread. The order in which the results are delivered does not matter.
Right now, all threads simply push their results onto a synchronized Stack - but this means that every thread must wait for the other threads before results can be delivered.
Is there a data structure that supports simultaneous insertions with each insertion completing in constant time?
Thanks,
Martin
ConcurrentLinkedQueue is designed for high contention. Producers enqueue stuff on one end and consumers collect elements at the other end, so everything will be processed in the order it's added.
ArrayBlockingQueue is a better fit for lower contention, with lower space overhead.
Edit: although that's not quite what you asked for. Simultaneous inserts? You may want to give every thread its own output queue (say, an ArrayBlockingQueue) and then have the UI thread poll the separate queues. However, I think you'll find one of the two Queue implementations above sufficient.
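A sketch of the one-queue-per-thread variant, assuming bounded ArrayBlockingQueues of String results and a UI thread that sweeps them with non-blocking poll(); all names and the capacity are illustrative.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

class PerWorkerQueues {
    private final List<BlockingQueue<String>> queues = new ArrayList<>();

    PerWorkerQueues(int workerCount) {
        for (int i = 0; i < workerCount; i++) {
            queues.add(new ArrayBlockingQueue<>(1_000));   // one bounded queue per worker
        }
    }

    // Each worker writes only to its own queue, so workers never contend with each other.
    BlockingQueue<String> queueFor(int workerIndex) {
        return queues.get(workerIndex);
    }

    // Called from the UI thread: a non-blocking sweep over all worker queues.
    void drainAll(Consumer<String> handler) {
        for (BlockingQueue<String> q : queues) {
            String result;
            while ((result = q.poll()) != null) {
                handler.accept(result);
            }
        }
    }
}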
Right now, all threads simply push their results onto a synchronized Stack - but this means that every thread must wait for the other threads before results can be delivered.
Do you have any evidence indicating that this is actually a problem? If the computation performed by those threads is even the least little bit complex (and you don't have literally millions of threads), then lock contention on the result stack is simply a non-issue because when any given thread delivers its results, all others are most likely busy doing their computations.
Take a step back and evaluate whether performance is the key design consideration here. Don't think, know: does profiling back it up?
If not, I'd say a bigger concern is clarity and readability of design, and not introducing new code to maintain. It just so happens that, if you're using Swing, there is a library for doing exactly what you're trying to do, called SwingWorker.
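A minimal SwingWorker sketch of that approach; the loop, the String result type and the class name are placeholders for your real computation.
import java.util.List;
import javax.swing.SwingWorker;

class CruncherWorker extends SwingWorker<Void, String> {
    @Override
    protected Void doInBackground() {
        for (int i = 0; i < 1_000; i++) {
            String result = "result " + i;   // stand-in for the real computation
            publish(result);                 // hand the result off towards the EDT
        }
        return null;
    }

    @Override
    protected void process(List<String> chunk) {
        // Runs on the Event Dispatch Thread: safe to update Swing components here.
        for (String r : chunk) {
            System.out.println(r);           // e.g. append to the shared messaging area
        }
    }
}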
Take a look at java.util.concurrent.ConcurrentLinkedQueue, java.util.concurrent.ConcurrentHashMap or java.util.concurrent.ConcurrentSkipListSet. They might do what you need. ConcurrentSkipListSet, for instance, claims to have "expected average log(n) time cost for the contains, add and remove operations and their variants. Insertion, removal, and access operations safely execute concurrently by multiple threads."
Two other patterns you might want to look at are
each thread has its own collection; when polled, it hands back that collection and creates a new one, so the collection only holds the items pending since the last poll. Each thread needs to protect operations on its own collection, so this is blocking (a thread cannot add to its collection while the UI thread is pulling updates from it), but it removes contention between the worker threads themselves.
each thread has its own collection, and appends the results to a common queue which is protected using a Lock.tryLock(). The thread continues processing if it fails to acquire the lock. This makes it less likely that a thread will block waiting for the shared queue.
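A sketch of the second pattern, with illustrative names and a String result type: each worker buffers locally and only flushes to the shared queue when tryLock() succeeds immediately.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

class TryLockPublisher {
    private final Deque<String> localBuffer = new ArrayDeque<>();   // owned by one worker thread
    private final Deque<String> shared;                             // common queue, guarded by the lock
    private final ReentrantLock lock;

    TryLockPublisher(Deque<String> shared, ReentrantLock lock) {
        this.shared = shared;
        this.lock = lock;
    }

    void publish(String result) {
        localBuffer.addLast(result);
        if (lock.tryLock()) {                 // never waits: skip the transfer if the lock is busy
            try {
                shared.addAll(localBuffer);   // flush everything buffered so far
                localBuffer.clear();
            } finally {
                lock.unlock();
            }
        }
        // If tryLock() failed, the results stay in localBuffer and are flushed on a later call.
        // The reader (e.g. the UI thread) must take the same lock before draining 'shared'.
    }
}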