I'm using the observer pattern and a BlockingQueue to add some instances. Now in another method I'm using the queue, but it seems take() is waiting forever, even though I'm doing it like this:
/** {@inheritDoc} */
@Override
public void diffListener(final EDiff paramDiff, final IStructuralItem paramNewNode,
        final IStructuralItem paramOldNode, final DiffDepth paramDepth) {
    final Diff diff =
        new Diff(paramDiff, paramNewNode.getNodeKey(), paramOldNode.getNodeKey(), paramDepth);
    mDiffs.add(diff);
    try {
        mDiffQueue.put(diff);
    } catch (final InterruptedException e) {
        LOGWRAPPER.error(e.getMessage(), e);
    }
    mEntries++;
    if (mEntries == AFTER_COUNT_DIFFS) {
        try {
            mRunner.run(new PopulateDatabase(mDiffDatabase, mDiffs));
        } catch (final Exception e) {
            LOGWRAPPER.error(e.getMessage(), e);
        }
        mEntries = 0;
        mDiffs = new LinkedList<>();
    }
}

/** {@inheritDoc} */
@Override
public void diffDone() {
    try {
        mRunner.run(new PopulateDatabase(mDiffDatabase, mDiffs));
    } catch (final Exception e) {
        LOGWRAPPER.error(e.getMessage(), e);
    }
    mDone = true;
}
where mDiffQueue is a LinkedBlockingQueue, and I'm using it like this:
while (!(mDiffQueue.isEmpty() && mDone) || mDiffQueue.take().getDiff() == EDiff.INSERTED) {}
But I think the first expression is checked while mDone isn't true yet, then maybe mDone is set to true (is an observer always multithreaded?), but by that time it's already blocked in mDiffQueue.take()? :-/
Edit: I really don't get it right now. I've recently changed it to:
synchronized (mDiffQueue) {
    while (!(mDiffQueue.isEmpty() && mDone)) {
        if (mDiffQueue.take().getDiff() != EDiff.INSERTED) {
            break;
        }
    }
}
If I wait a little in the debugger it works, but it should also work in "real time", since mDone is initialized to false and therefore the while condition should be true and the body should be executed.
If mDiffQueue is empty and mDone is true, it should skip the body of the while loop (which means the queue isn't being filled anymore).
Edit: It seems to work with this:
synchronized (mDiffQueue) {
    while (!(mDiffQueue.isEmpty() && mDone)) {
        if (mDiffQueue.peek() != null) {
            if (mDiffQueue.take().getDiff() != EDiff.INSERTED) {
                break;
            }
        }
    }
}
Even though I don't get why the peek() is mandatory.
Edit:
What I'm doing is iterating over a tree and I want to skip all INSERTED nodes:
for (final AbsAxis axis = new DescendantAxis(paramRtx, true); axis.hasNext(); axis.next()) {
    skipInserts();
    final IStructuralItem node = paramRtx.getStructuralNode();
    if (node.hasFirstChild()) {
        depth++;
        skipInserts();
        ...
Basically I'm computing the maximum depth or level in the tree without considering nodes which have been deleted in another revision of the tree (for a comparison Sunburst visualization), but OK, that's maybe out of scope. Just to illustrate that I'm doing something with the nodes which haven't been inserted, even if it's just adjusting the maximum depth.
regards,
Johannes
take() is a "blocking call". That means it will block (wait forever) until something is on the queue, and then it will return what was added. Of course, if something is already on the queue, it will return immediately.
You can use peek() to return what would be returned by take() - that is, peek() returns the next item without removing it from the queue, or returns null if there's nothing on the queue. Try using peek() instead in your test (but check for null too).
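To make the difference concrete, here's a minimal self-contained sketch (the names are made up, not taken from your code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PeekVsTake {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        System.out.println(queue.peek()); // prints "null" immediately: nothing is queued yet

        queue.put("hello");
        System.out.println(queue.peek()); // "hello" - still in the queue
        System.out.println(queue.take()); // "hello" - removed; returns at once because an element was present

        // queue.take();                  // would now block this thread until another thread puts something
    }
}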
First advice: don't synchronize on mDiffQueue. You would get a deadlock if the LinkedBlockingQueue used some synchronized methods internally; that's not the case here, but it's a practice you should avoid. Anyway, I don't see why you are synchronizing at that point.
You have to "wake up" periodically while waiting to check if mDone has been set:
while (!(mDiffQueue.isEmpty() && mDone)) {
    // poll returns null if nothing is added to the queue for 0.1 second (100 ms).
    Diff diff = mDiffQueue.poll(100, TimeUnit.MILLISECONDS);
    if (diff != null)
        process(diff);
}
This is about the same as using peek, except that peek returns immediately instead of waiting. Using peek is called "busy waiting" (your thread runs the while loop non-stop) and using poll is called "semi-busy waiting" (you let the thread sleep at intervals).
I guess in your case process(diff) would be to get out of the loop if diff is not of type EDiff.INSERTED. I'm not sure if that is what you are trying to accomplish. This seems odd since you are basically just stalling the consumer thread until you get a single element of the right type, and then you do nothing with it. And you cannot receive the future incoming elements since you are out of the while loop.
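If the goal really is "skip everything that is INSERTED and hand back the first other diff", a rough sketch using a timed poll could look like this (Diff, EDiff, mDiffQueue and mDone come from your question; the method shape and names here are my assumption):

// Sketch only: skips INSERTED diffs and returns the first non-INSERTED one,
// or null once the queue is drained and the producer has signalled completion via mDone.
Diff awaitNextNonInsert() throws InterruptedException {
    while (!(mDiffQueue.isEmpty() && mDone)) {
        // semi-busy wait: returns null if nothing arrives within 100 ms
        final Diff diff = mDiffQueue.poll(100, TimeUnit.MILLISECONDS);
        if (diff != null && diff.getDiff() != EDiff.INSERTED) {
            return diff;
        }
    }
    return null;
}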
I want to have a thread which does some I/O work when it is interrupted by a main thread and then goes back to sleep/wait until it is interrupted again.
So, I have come up with an implementation which does not seem to be working. The code snippet is below.
Note: here flag is a public variable which can be accessed via the thread class, which is in the main class.
// in the main function this is how I am calling it
if (!flag) {
    thread.interrupt();
}
// this is how my thread class is implemented
class IOworkthread extends Thread {
    @Override
    public void run() {
        while (true) {
            try {
                flag = false;
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                flag = true;
                try {
                    // doing my I/O work
                } catch (Exception e1) {
                    // print the exception message
                }
            }
        }
    }
}
In the above snippet, the second try-catch block also catches the InterruptedException. This means that both the first and the second try-catch blocks are catching the interrupt, but I only intended the interrupt to be caught in the first try-catch block.
Can you please help me with this?
EDIT
If you feel that there can be another solution for my objective, I will be happy to know about it :)
If it's important to respond fast to the flag you could try the following:
class IOworkthread extends Thread { // implements Runnable would be better here, but that's another story
    @Override
    public void run() {
        while (true) {
            try {
                flag = false;
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                flag = true;
            }
            // after the catch block the interrupted state of the thread should be reset
            // and there should be no exceptions here
            try {
                // doing I/O work
            } catch (Exception e1) {
                // print the exception message
                // here of course other exceptions could appear, but if there is no Thread.sleep()
                // used here there should be no InterruptedException in this block
            }
        }
    }
}
This should behave differently because when the InterruptedException is caught, the interrupted status of the thread has already been cleared (it is cleared when the exception is thrown), so after the catch block no further InterruptedException will show up in the I/O block.
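A small self-contained sketch of that behaviour (class and variable names are made up): after sleep() throws InterruptedException, the thread's interrupted status reports false unless you restore it explicitly.

public class InterruptStatusDemo {
    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                // The interrupted status was cleared when the exception was thrown.
                System.out.println("interrupted? " + Thread.currentThread().isInterrupted()); // false
                // Restore it if code further up the call stack needs to see it.
                Thread.currentThread().interrupt();
                System.out.println("interrupted? " + Thread.currentThread().isInterrupted()); // true
            }
        });
        worker.start();
        worker.interrupt();
        worker.join();
    }
}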
It does sound like a producer/consumer construct. You seem to kind of have it the wrong way around; the IO should be driving the algorithm. Since you stay very abstract about what your code actually does, I'll need to stick to that.
So let's say your "distributed algorithm" works on data of type T; that means that it can be described as a Consumer<T> (the method name in this interface is accept(T value)). Since it can run concurrently, you want to create several instances of that; this is usually done using an ExecutorService. The Executors class provides a nice set of factory methods for creating one, let's use Executors.newFixedThreadPool(parallelism).
Your "IO" thread runs to create input for the algorithm, meaning it is a Supplier<T>. We can run it in an Executors.newSingleThreadExecutor().
We connect these two using a BlockingQueue<T>; this is a FIFO collection. The IO thread puts elements in, and the algorithm instances take out the next one that becomes available.
This makes the whole setup look something like this:
void run() throws InterruptedException {
    int parallelism = 4; // or whatever
    ExecutorService algorithmExecutor = Executors.newFixedThreadPool(parallelism);
    ExecutorService ioExecutor = Executors.newSingleThreadExecutor();

    // this queue will accept up to 4 elements
    // this might need to be changed depending on performance of each
    BlockingQueue<T> queue = new ArrayBlockingQueue<T>(parallelism);

    ioExecutor.submit(new IoExecutor(queue));

    // take elements from the queue until the end-of-input marker arrives
    T nextElement = getNextElement(queue);
    while (nextElement != null) {
        final T element = nextElement; // effectively final copy for the lambda
        algorithmExecutor.submit(() -> new AlgorithmInstance().accept(element));
        nextElement = getNextElement(queue);
    }

    // wait until the algorithms have finished running and clean up
    algorithmExecutor.shutdown();
    algorithmExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    ioExecutor.shutdown(); // the io thread should have terminated by now already
}
T getNextElement(BlockingQueue<T> queue) {
    int timeOut = 1; // adjust depending on your IO
    T result = null;
    while (result == null) {
        try {
            // poll returns null if nothing arrives within the timeout; retry, we will get a value eventually
            result = queue.poll(timeOut, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    return result;
}
Now this doesn't actually answer your question because you wanted to know how the IO thread can be notified when it can continue reading data.
This is achieved by the capacity limit of the BlockingQueue, which will not accept elements once it has been reached, meaning the IO thread can just keep reading and trying to put in elements.
abstract class IoExecutor<T> implements Runnable {

    private final BlockingQueue<T> queue;

    public IoExecutor(BlockingQueue<T> q) { queue = q; }

    public void run() {
        try {
            while (hasMoreData()) {
                T data = readData();
                // this will block if the queue is full, so IO will pause
                queue.put(data);
            }
            // put a marker into the queue to signal the end of the input
            // (note: standard BlockingQueue implementations reject null,
            //  so a dedicated sentinel element would be needed in practice)
            queue.put(null);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    protected abstract boolean hasMoreData();

    protected abstract T readData();
}
As a result, during runtime you should at all times have 4 threads of the algorithm running, as well as (up to) 4 items in the queue waiting for one of the algorithm threads to finish and pick them up.
This is a follow-up question from my previous question asked here.
I am using a PriorityBlockingQueue now. I changed my producer to the following:
synchronized (Manager.queue) {
    Manager.queue.add(new Job());
    Manager.queue.notify();
}
And changed Consumer to the following. Full code skeleton is here:
// my consumer thread run()
public void run() {
    synchronized (Manager.queue) {
        while (Manager.queue.peek() == null) {
            System.out.println("111111111111111");
            try {
                Manager.queue.wait();
            } catch (InterruptedException e) {
            }
        }
        Job job = Manager.queue.peek();
        if (job != null) {
            submitJob(job);
            if (job.SubmissionFailed.equals("false")) {
                // successful submission. Remove from queue. Add to another.
                Manager.queue.poll();
                Manager.submissionQueue.put(job.uniqueid, job);
            }
        }
    }
}
My code only works the first time (first produce and first consume), but it doesn't work the second time. Somewhere the wait/notify logic fails, I guess. The producer pushes new jobs to the queue, but the consumer doesn't pick up any more items. In fact, it doesn't even enter the while loop, and there is no more 111111111111111 printing.
What is the problem? How to fix it?
You could simplify all this code to just:
In the Producer:
Manager.queue.add(new Job());
and in the Consumer:
while (true) {
    try {
        submitJob(Manager.queue.take()); // or do something else with the Job
        // your code here, then remove the break
        break;
    } catch (InterruptedException ex) {
        // usually no need to do anything, simply live on unless you
        // caused that
    }
}
// or your code here, then you need a surrounding while and the break
When using a PriorityBlockingQueue, you don't need any synchronized statements, since they're already inside the PriorityBlockingQueue. And according to the documentation, take() waits for an element to be added if necessary and then polls it. See https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/PriorityBlockingQueue.html#take() for reference.
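For illustration, a minimal self-contained sketch of that style (a String stands in for your Job; all other names here are made up):

import java.util.concurrent.PriorityBlockingQueue;

public class PriorityQueueSketch {
    // Thread-safe on its own: no external synchronized blocks or wait/notify needed.
    private static final PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // Producer side: add() never blocks, the queue is unbounded.
        queue.add("job-2");
        queue.add("job-1"); // smallest element by natural ordering is handed out first

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String job = queue.take(); // blocks until an element is available
                    System.out.println("consumed " + job);
                }
            } catch (InterruptedException e) {
                // interrupted: stop consuming
            }
        });
        consumer.start();

        Thread.sleep(500);    // let the consumer drain the queue
        consumer.interrupt(); // then stop it (it is blocked in take() by now)
        consumer.join();
    }
}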
And for the InterruptedException you might want to take a look here: Handling InterruptedException in Java
Edit: added missing try{} catch()
Please see the following code segment:
private static Queue<Message> m_Queue;

public boolean isQueueEmpty()
{
    if (m_Queue.isEmpty())
        return true;
    else
        return false;
}
public Message dequeue()
{
    try
    {
        if (!isQueueEmpty())
        {
            Message message = m_Queue.poll();
            if (message != null)
            {
                if (!message.getMessage().equals(""))
                    Log4jWrapper.writeLog("Retrieved " + message.getMessage() + " from queue");
                else
                    Log4jWrapper.writeLog(LogLevelEnum.ERROR, "<Queue> dequeue", "Message empty");
                return message;
            }
            else
            {
                Log4jWrapper.writeLog(LogLevelEnum.TRACE, "<Queue> dequeue", " Q is empty!");
                return null;
            }
        }
        else
            return null;
    }
    catch (Exception e)
    {
        ExceptionHandler.printException(e, "<Queue>", "dequeue");
        return null;
    }
}
public void enqueue(Message a_Message) throws Exception
{
    try
    {
        if (m_Queue.offer(a_Message))
            Log4jWrapper.writeLog(LogLevelEnum.TRACE, "<Queue> enqueue", "Pushed " + a_Message.getMessage() + " to queue");
        else
            throw new Exception("Queue - Could not push message to queue");
    }
    catch (Exception e)
    {
        ExceptionHandler.printException(e, "Queue", "enqueue");
    }
}
My problem is that eventually I get the " Q is empty!" log line.
And I can't understand how that can be.
isQueueEmpty() says the Q is not empty, and poll says it is!
Can you advise please?
Thank you.
Assuming this code is accessed by multiple threads, the reason is that the check for emptiness and the subsequent polling are not done atomically: they are two separate actions. This means that it is possible for a different thread to call poll on the queue in between the first thread checking if it is empty and calling poll itself; if there happens only to have been one element in the queue, one of these threads is going to get null back from the call to poll.
Quoting the Javadoc for Queue:
Queue implementations generally do not allow insertion of null elements, although some implementations, such as LinkedList, do not prohibit insertion of null. Even in the implementations that permit it, null should not be inserted into a Queue, as null is also used as a special return value by the poll method to indicate that the queue contains no elements.
This means that you should use the fact that null is returned by poll as an indication that the queue was empty - you don't need to do the calls separately.
poll may be atomic - depending on the implementation of Queue you are actually using:
If you're using a non-synchronized implementation like LinkedList, you should be synchronizing it anyway if multiple threads are modifying the list, making poll atomic;
Concurrent implementations like BlockingQueue implement poll atomically, so you don't need to worry about doing anything explicitly.
TL;DR:
Remove the !isQueueEmpty() check
Ensure that your poll method is atomic either by choosing a concurrent implementation, or by synchronizing mutations of the queue.
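For illustration, here's a minimal sketch of that advice applied to the dequeue method from the question (the logging helpers are the ones from the question; the rest of the class is assumed unchanged):

public Message dequeue()
{
    // poll() is the atomic check-and-remove: it returns null when the queue is empty,
    // so there is no separate isEmpty() check that another thread could race against.
    Message message = m_Queue.poll();
    if (message == null)
    {
        Log4jWrapper.writeLog(LogLevelEnum.TRACE, "<Queue> dequeue", "Q is empty!");
    }
    return message;
}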
Your initialization of the Queue is as follows:
m_Queue = new LinkedList<Message>();
LinkedList is an implementation of Queue which allows null to be added.
So basically you are adding null values into your m_Queue.
And as @Andy mentioned, such an implementation should not be used with the poll() method.
There are two ways to avoid that:
Before adding a Message to m_Queue, you can check whether it is null or not.
Change new LinkedList<Message>(); to new ArrayDeque<Message>();, which throws a NullPointerException if you add null to your Queue.
I prefer the second one, as it makes the queue behave as a Queue really should.
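A tiny self-contained sketch of the difference (the class name is made up):

import java.util.ArrayDeque;
import java.util.LinkedList;
import java.util.Queue;

public class NullElementDemo {
    public static void main(String[] args) {
        Queue<String> linked = new LinkedList<>();
        linked.offer(null);                 // accepted: LinkedList permits null elements
        System.out.println(linked.poll());  // prints "null", indistinguishable from an empty queue

        Queue<String> deque = new ArrayDeque<>();
        try {
            deque.offer(null);              // ArrayDeque rejects null elements
        } catch (NullPointerException e) {
            System.out.println("ArrayDeque refused the null element");
        }
    }
}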
How to wait x seconds or until a condition becomes true? The condition should be tested periodically while waiting. Currently I'm using this code, but there should be a shorter way to do this.
for (int i = 10; i > 0 && !condition(); i--) {
    Thread.sleep(1000);
}
Assuming you want what you asked for, as opposed to suggestions for redesigning your code, you should look at Awaitility.
For example, if you want to see if a file will be created within the next 10 seconds, you do something like:
await().atMost(10, SECONDS).until(() -> myFile.exists());
It's mainly aimed at testing, but does the specific requested trick of waiting for an arbitrary condition, specified by the caller, without explicit synchronization or sleep calls. If you don't want to use the library, just read the code to see the way it does things.
Which, in this case, comes down to a similar polling loop to the question, but with a Java 8 lambda passed in as an argument, instead of an inline condition.
I didn't find a solution in the JDK. I think this feature should be added to the JDK.
Here is what I've implemented with a functional interface:
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public interface WaitUntilUtils {

    static void waitUntil(BooleanSupplier condition, long timeoutms) throws TimeoutException {
        long start = System.currentTimeMillis();
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() - start > timeoutms) {
                throw new TimeoutException(String.format("Condition not met within %s ms", timeoutms));
            }
        }
    }
}
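A possible usage sketch (the file path and timeout are made up; note that waitUntil polls in a tight loop, so a short Thread.sleep() inside it would reduce CPU usage):

import java.io.File;
import java.util.concurrent.TimeoutException;

public class WaitUntilExample {
    public static void main(String[] args) throws TimeoutException {
        File myFile = new File("/tmp/ready.marker"); // hypothetical file we are waiting for
        // Wait up to 5 seconds for the file to appear, or fail with TimeoutException.
        WaitUntilUtils.waitUntil(myFile::exists, 5_000);
        System.out.println("Condition met, continuing.");
    }
}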
Have you thought about some classes from java.util.concurrent - for example a BlockingQueue?
You could use:
BlockingQueue<Boolean> conditionMet = new LinkedBlockingQueue<>(); // BlockingQueue is an interface, so instantiate a concrete implementation
conditionMet.poll(10,TimeUnit.SECONDS);
And then in the code that changes your condition do this:
conditionMet.put(true);
EDIT:
Another example from java.util.concurrent may be CountDownLatch:
CountDownLatch siteWasRenderedLatch = new CountDownLatch(1);
boolean siteWasRendered = siteWasRenderedLatch.await(10,TimeUnit.SECONDS);
This way you'll wait 10 seconds or until the latch reaches zero. To reach zero all you have to do is:
siteWasRenderedLatch.countDown();
This way you won't need to use locks, which would be needed in the Condition examples presented by @Adrian. I think it's just simpler and more straightforward.
And if you don't like the naming 'Latch' or 'Queue', you can always wrap it into your own class, called e.g. LimitedTimeCondition:
public class LimitedTimeCondition
{
    private CountDownLatch conditionMetLatch;
    private Integer unitsCount;
    private TimeUnit unit;

    public LimitedTimeCondition(final Integer unitsCount, final TimeUnit unit)
    {
        conditionMetLatch = new CountDownLatch(1);
        this.unitsCount = unitsCount;
        this.unit = unit;
    }

    public boolean waitForConditionToBeMet()
    {
        try
        {
            return conditionMetLatch.await(unitsCount, unit);
        }
        catch (final InterruptedException e)
        {
            System.out.println("Someone has disturbed the condition awaiter.");
            return false;
        }
    }

    public void conditionWasMet()
    {
        conditionMetLatch.countDown();
    }
}
And the usage would be:
LimitedTimeCondition siteRenderedCondition = new LimitedTimeCondition(10, TimeUnit.SECONDS);
//
...
//
if (siteRenderedCondition.waitForConditionToBeMet())
{
    doStuff();
}
else
{
    System.out.println("Site was not rendered properly");
}
//
...
// in condition checker/achiever:
if (siteWasRendered)
{
    condition.conditionWasMet();
}
Have a look at Condition.
Conditions (also known as condition queues or condition variables)
provide a means for one thread to suspend execution (to "wait") until
notified by another thread that some state condition may now be true.
Because access to this shared state information occurs in different
threads, it must be protected, so a lock of some form is associated
with the condition. The key property that waiting for a condition
provides is that it atomically releases the associated lock and
suspends the current thread, just like Object.wait.
A Condition instance is intrinsically bound to a lock. To obtain a
Condition instance for a particular Lock instance use its
newCondition() method.
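For illustration, a minimal sketch of waiting up to a timeout on a Condition (all names here are made up; this is just the standard Lock/Condition idiom):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionWaitSketch {
    private final Lock lock = new ReentrantLock();
    private final Condition becameTrue = lock.newCondition();
    private boolean conditionMet; // guarded by lock

    // Waits up to the given number of seconds; returns whether the condition was met.
    public boolean awaitCondition(long seconds) throws InterruptedException {
        lock.lock();
        try {
            long nanosLeft = TimeUnit.SECONDS.toNanos(seconds);
            while (!conditionMet) {
                if (nanosLeft <= 0) {
                    return false; // timed out
                }
                nanosLeft = becameTrue.awaitNanos(nanosLeft); // releases the lock while waiting
            }
            return true;
        } finally {
            lock.unlock();
        }
    }

    // Called by the thread that makes the condition true.
    public void signalConditionMet() {
        lock.lock();
        try {
            conditionMet = true;
            becameTrue.signalAll();
        } finally {
            lock.unlock();
        }
    }
}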
EDIT:
Related question Sleep and check until condition is true
Related question is there a 'block until condition becomes true' function in java?
You may want to use something like the code below (where secondsToWait holds the maximum number of seconds you want to wait to see if condition() turns true). The variable isConditionMet will contain true if the condition was found, or false if the code timed out waiting for the condition.
long endWaitTime = System.currentTimeMillis() + secondsToWait * 1000;
boolean isConditionMet = false;
while (System.currentTimeMillis() < endWaitTime && !isConditionMet) {
    isConditionMet = condition();
    if (isConditionMet) {
        break;
    } else {
        Thread.sleep(1000);
    }
}
I'm using the following adaptation of the original question's solution:
import java.util.concurrent.Callable;

public class Satisfied {

    public static boolean inTime(Callable<Boolean> condition, int timeoutInSecs) {
        int count;
        try {
            for (count = 1; count < timeoutInSecs * 20 && !condition.call(); count++)
                Thread.sleep(50);
            return (count < timeoutInSecs * 20);
        } catch (Exception e) {
            throw new AssertionError(e.getMessage());
        }
    }
}
When used in testing, it appears like this:
assertThat(Satisfied.inTime(() -> myCondition(), 5)).isTrue();
Using Awaitility:
Awaitility.with().pollDelay(1000, TimeUnit.MILLISECONDS).await().until(() -> true);
The producer is finite, as should be the consumer.
The problem is when to stop, not how to run.
Communication can happen over any type of BlockingQueue.
Can't rely on poisoning the queue (PriorityBlockingQueue)
Can't rely on locking the queue (SynchronousQueue)
Can't rely on offer/poll exclusively (SynchronousQueue)
Probably even more exotic queues in existence.
Creates a queued seq on another (presumably lazy) seq s. The queued
seq will produce a concrete seq in the background, and can get up to
n items ahead of the consumer. n-or-q can be an integer n buffer
size, or an instance of java.util.concurrent BlockingQueue. Note
that reading from a seque can block if the reader gets ahead of the
producer.
http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/seque
My attempts so far + some tests: https://gist.github.com/934781
Solutions in Java or Clojure appreciated.
class Reader {

    private final ExecutorService ex = Executors.newSingleThreadExecutor();
    private final List<Object> completed = new ArrayList<Object>();
    private final BlockingQueue<Object> doneQueue = new LinkedBlockingQueue<Object>();
    private int pending = 0;

    public synchronized Object take() {
        removeDone();
        queue();
        Object rVal;
        if (completed.isEmpty()) {
            try {
                rVal = doneQueue.take();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            pending--;
        } else {
            rVal = completed.remove(0);
        }
        queue();
        return rVal;
    }

    private void removeDone() {
        Object current = doneQueue.poll();
        while (current != null) {
            completed.add(current);
            pending--;
            current = doneQueue.poll();
        }
    }

    private void queue() {
        while (pending < 10) {
            pending++;
            ex.submit(new Runnable() {
                @Override
                public void run() {
                    doneQueue.add(compute());
                }

                private Object compute() {
                    // do actual computation here
                    return new Object();
                }
            });
        }
    }
}
Not exactly an answer I'm afraid, but a few remarks and more questions. My first answer would be: use clojure.core/seque. The producer needs to communicate end-of-seq somehow for the consumer to know when to stop, and I assume the number of produced elements is not known in advance. Why can't you use an EOS marker (if that's what you mean by queue poisoning)?
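For reference, this is roughly what an EOS marker (a.k.a. poison pill) over a BlockingQueue looks like in Java; it's a generic sketch with made-up names, not taken from the gist:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EosMarkerSketch {
    // A dedicated sentinel instance; BlockingQueue rejects null, so use a real object
    // and compare by identity.
    private static final String EOS = new String("EOS");

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(4); // bounded: producer blocks when full

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put("item-" + i); // blocks when the bounded queue is full
                }
                queue.put(EOS);             // tell the consumer there is nothing more to come
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Consumer: take() blocks until an element arrives; stop at the sentinel.
        for (String item = queue.take(); item != EOS; item = queue.take()) {
            System.out.println("consumed " + item);
        }
        producer.join();
    }
}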
If I understand your alternative seque implementation correctly, it will break when elements are taken off the queue outside your function, since channel and q will be out of step in that case: channel will hold more #(.take q) elements than there are elements in q, causing it to block. There might be ways to ensure channel and q are always in step, but that would probably require implementing your own Queue class, and it adds so much complexity that I doubt it's worth it.
Also, your implementation doesn't distinguish between normal EOS and abnormal queue termination due to thread interruption - depending on what you're using it for you might want to know which is which. Personally I don't like using exceptions in this way — use exceptions for exceptional situations, not for normal flow control.