LMAX Disruptor remainingCapacity equals 0, even after waiting some time - java

I added data to the Disruptor by calling the tryPublishEvent method.
After waiting 40 seconds I tried to check the unprocessed-data count with the following calculation:
long ringBufferUnProcessedCount = disruptor.getBufferSize()
        - disruptor.getRingBuffer().remainingCapacity();
Sometimes remainingCapacity equals 0, even though we waited 40 seconds before reading ringBufferUnProcessedCount.
This happens very rarely.
Does anyone know why this might be?

If disruptor.getRingBuffer().remainingCapacity() equals 0, then your Disruptor's ring buffer is full and you are experiencing backpressure.
This can be caused by two things: either one of the event handlers is blocked for some reason and cannot make forward progress, or the event handlers cannot process events quickly enough to keep up with the rate at which new events are produced.
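A minimal sketch of how you might detect this condition on the publishing side, assuming a com.lmax.disruptor RingBuffer (the helper class, method name, and log wording are made up for illustration):

import com.lmax.disruptor.RingBuffer;

final class BufferMonitor {
    // Reports how many slots are occupied and whether the buffer is saturated,
    // i.e. whether the consumers are applying backpressure.
    static void report(RingBuffer<?> ringBuffer) {
        long unprocessed = ringBuffer.getBufferSize() - ringBuffer.remainingCapacity();
        if (ringBuffer.remainingCapacity() == 0) {
            System.out.println("Ring buffer full (" + unprocessed
                    + " events pending): a handler is blocked or cannot keep up");
        } else {
            System.out.println(unprocessed + " events pending");
        }
    }
}

In the same spirit, checking the boolean returned by tryPublishEvent tells you immediately, at publish time, whether the buffer was full and the event was dropped.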

Related

Selector.select() does not fill selectedKeys() if wakeup() is called before select()

I have a somewhat complex loop processing several different NIO channels and other objects. Some of these objects trigger their processing (i.e. simulate a "select"-event) by calling Selector.wakeup().
When I accidentally put my thread into a busy-loop by always calling wakeup() before every call to select(), I noticed that my other NIO channels were no longer getting serviced at all. As soon as the condition triggering the busy-loop went away, all other channels were immediately serviced again as usual.
After a bit of messing around with this I figured out that the reason for my other channels no longer being serviced was that select() did not fill the list of selectedKeys() if it returned immediately due to a pending wakeup().
Is this expected behavior? I couldn't find anything in the docs regarding this detail.
If so, is there a way to prevent this more elegantly than by following every select() with an additional selectNow()?
Of course it's expected. It returned immediately because of the wakeup, so it didn't block looking for ready channels, populate the selected keys, etc.
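If you want to keep the wakeup()-based design, the work-around the question already hints at looks roughly like this (a sketch, not a drop-in replacement for your loop):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

void pollOnce(Selector selector) throws IOException {
    // select() may return 0 immediately after a wakeup() without populating
    // the selected-key set; a non-blocking selectNow() picks up any channels
    // that are actually ready so they do not get starved.
    if (selector.select() == 0) {
        selector.selectNow();
    }
    for (SelectionKey key : selector.selectedKeys()) {
        // ... service the channel behind 'key' ...
    }
    selector.selectedKeys().clear();
}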

Should I have a dedicated thread for watching a timeout?

Right now I have two threads running in my program. One constantly tries to read input from the user, and the other watches for a timeout. A timeout occurs if the user does not send any input in a given amount of time. The two threads look like this:
User input thread
while (true) {
    if (in.hasNextLine()) {
        processLine(in.nextLine());
        timeLastReceived = System.currentTimeMillis();
    }
}
Timeout thread
while (true) {
    // Check for a timeout
    if (timeLastReceived + timeoutDuration <= System.currentTimeMillis()) {
        timeUserOut();
    } else {
        // Sleep until it is possible for a timeout to occur
        Thread.sleep((timeLastReceived + timeoutDuration) - System.currentTimeMillis());
    }
}
As of now I have these threads separated, but I could combine them like this...
while (true) {
    if (in.hasNextLine()) {
        processLine(in.nextLine());
        timeLastReceived = System.currentTimeMillis();
    }
    // Check for a timeout
    if (timeLastReceived + timeoutDuration <= System.currentTimeMillis()) {
        timeUserOut();
    }
}
But I really don't need to check for a timeout that frequently. So should I combine the threads and check for a timeout more often than necessary, or should I keep the two threads? I am not as worried about performance as I am about proper coding etiquette. If it matters, the timeout duration is something like 15 minutes.
EDIT: Just to point out that in the two-thread version I sleep between checks, but in the combined version I never sleep the thread. This obviously causes the if statement that checks for a timeout to run more often than necessary.
To summarize my comments: I don't think a separate thread to check for timeouts is necessary.
Reasons:
You'd need to share information like timeLastReceived between them, which could be more complex than wanted (for example, plain reads and writes of long values are not guaranteed to be atomic unless the field is volatile).
From your description it seems that polling for user input and timeout (no input provided in time) are closely related, thus the polling thread could check for the timeout as well. That doesn't mean it has to handle the timeout too, just reporting it somewhere or calling some timeout handler might be better design.
It is easier to read and understand, since updating timeLastReceived and checking for a timeout are handled in the same place.
Since no inter-thread communication or coordination is needed, it is probably more robust as well.
A few hints on checking for the timeout:
You should calculate the timeout threshold when you update timeLastReceived and then only compare against the current time, instead of recalculating it in every iteration.
You might want to calculate the timeout threshold before processing the input, so that it does not also depend on the processing time (see the sketch below).
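A sketch of those two hints applied to the combined loop from the question (still busy-polling; variable names follow the question):

long timeoutAt = System.currentTimeMillis() + timeoutDuration;   // initial deadline

while (true) {
    if (in.hasNextLine()) {
        // Refresh the deadline before processing, so slow processing
        // does not eat into the next timeout window.
        timeoutAt = System.currentTimeMillis() + timeoutDuration;
        processLine(in.nextLine());
    }
    if (System.currentTimeMillis() >= timeoutAt) {
        timeUserOut();
        break;
    }
}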
Finally, there are alternative approaches, such as using java.util.Timer. Here you could simply schedule a timeout task which is executed when the timeout should occur. That task would then check whether the timeout really happened and, if not, just return.
To handle new input before the timeout occurs, you could use at least two approaches (the first is sketched below):
Cancel the current timeout task, remove it from the timer, and schedule a new one.
If there is already a scheduled timeout task, don't schedule a new one but wait for the current one to run. The current task then checks for the timeout and, if none happened, schedules a new task (or itself) for the currently anticipated timeout (note that this requires some inter-thread communication, so be careful here).
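A hedged sketch of the first approach using java.util.Timer; the class and method names other than Timer/TimerTask are made up for illustration:

import java.util.Timer;
import java.util.TimerTask;

class InactivityWatchdog {
    private final Timer timer = new Timer("inactivity-timer", true); // daemon thread
    private final long timeoutDuration;
    private TimerTask pending;

    InactivityWatchdog(long timeoutDurationMillis) {
        this.timeoutDuration = timeoutDurationMillis;
    }

    // Call this every time input arrives: cancel the pending timeout task
    // and schedule a fresh one for one full timeout interval from now.
    synchronized void inputReceived() {
        if (pending != null) {
            pending.cancel();
        }
        pending = new TimerTask() {
            @Override public void run() {
                timeUserOut();
            }
        };
        timer.schedule(pending, timeoutDuration);
    }

    void timeUserOut() {
        System.out.println("User timed out");
    }
}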
You need to have two threads: one waiting for data coming in through the InputStream/Reader, and one watching the clock to see whether too much time has elapsed. The only way to do it with one thread would be to sleep for a segment of the timeout period and then poll for data periodically. But that's less efficient than having a separate thread dedicated to reading from your InputStream/Reader.
You may want to check out Timeout as a generic option for implementing a timeout

Long delay between Akka actors

I'm consistently seeing very long delays (60+ seconds) between two actors, from the time at which the first actor sends a message for the second, and when the second actor's onReceive method is actually called with the message. What kinds of things can I look for to debug this problem?
Details
Each instance of ActorA is sending one message to ActorB with ActorRef.tell(Object, ActorRef). I collect a millisecond timestamp (with System.currentTimeMillis()) right after calling the tell method in ActorA, and collect another one at the start of ActorB's onReceive(Object). The interval between these timestamps is consistently 60 seconds or more. Specifically, when plotted over time, this interval follows a rough sawtooth pattern ranging from just over 60 seconds to almost 120 seconds.
These actors are early in the data flow of the system; several other actors follow after ActorB. The large gap only occurs between these two specific actors; the gap between other pairs of adjacent actors is typically less than a millisecond, occasionally a few tens of milliseconds. Additionally, the actual time spent inside any given actor is never more than a second.
Generally, each actor in the system only passes a single message to another actor. One of the actors (subsequent to ActorB) sends a single message to each of a few different actors, and a small percentage (less than 0.1%) of the time, certain actors will send multiple messages to the same subsequent actor (i.e., multiple instances of the subsequent actor will be demanded). When this occurs, the number of multiple messages is typically on the order of a dozen or less.
Can this be explained (explicitly) by the normal reactive nature of Akka? Does it indicate a problem with the way work is distributed or the way the actors are configured? Is there something that can explicitly block a particular actor from spinning up? What other information should I collect or look at to understand the source of this, or to understand whether or not it is actually a problem?
You have a limited thread pool. If your Actors block, they still take up space in the thread pool. New threads will not be created if your thread pool is saturated.
You may want to configure
core-pool-size-factor,
core-pool-size-min, and
core-pool-size-max.
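A hedged sketch of what those settings might look like in application.conf; the dispatcher name is made up, the numbers are placeholders, and you would assign it to an actor with something like Props.create(ActorB.class).withDispatcher("blocking-io-dispatcher"):

blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min    = 8
    core-pool-size-factor = 3.0
    core-pool-size-max    = 64
  }
  throughput = 1
}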
If you expect certain actions to block, you can instead wrap them in Future { blocking { ... } } and register a callback. But it's better to use asynchronous, non-blocking calls.

Running a thread for 2 milliseconds and then waiting for a particular time before running it again

I have a method execute(data) which takes considerable time (depending on the data, 10 or 20 seconds) and has a timeout feature that defaults to 30 seconds. I want to test that method. One way of doing it is to collect enough data to last more than 30 seconds and then see whether I get a timeout exception. The other way is to use threads. What I intend to do is run the method for a few milliseconds and then make the thread wait until I get the timeout exception, or make it last for a few extra seconds. Can anyone please suggest how I can achieve that?
You should walk through the Java Threads Tutorial (Concurrency). Any answer on Stack Overflow would need to be really long to help you here, and the Threads/Concurrency tutorials already cover this well.
http://docs.oracle.com/javase/tutorial/essential/concurrency/
You could use
Thread.sleep( millis );
to put the thread to sleep for the required time.
Or, you could put your data processing code into a loop, so that it processes the same data multiple times. This would recreate the scenario of the thread actually processing data for longer than 30 seconds.
Or, you could test your code with a shorter timeout value.
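A hedged sketch of the loop/sleep idea above: simulate input that takes longer than the 30-second timeout by sleeping per item instead of collecting 30+ seconds' worth of real data (executeSlowly and the data shape are placeholders, not the method from the question):

import java.util.List;

static void executeSlowly(List<String> data, long perItemDelayMillis)
        throws InterruptedException {
    for (String item : data) {
        Thread.sleep(perItemDelayMillis);   // artificial per-item processing cost
        // ... real processing of 'item' would go here ...
    }
}

For example, executeSlowly(List.of("a", "b", "c"), 11_000) runs for roughly 33 seconds, which should trip a 30-second timeout.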

Is sleep method accurate in timing precision?

We all know of the sleep method available for Java threads.
I understand that timing precision depends on the precision of the hardware clock in the system.
So my question is: how accurate is this method, or rather, what is the error in milliseconds or nanoseconds on a typical PC?
My requirement is to synchronise data transfer using sleep for timing. The data is to be sent at fixed intervals (10-20 ms), and if successive timing errors add up to a delay of more than 1 second it could be bad!
So is it advisable to use the sleep method?
Sleep is not the thing you want here.
I suggest reading through this.
If you need to synchronize data, I suggest you do this yourself rather than relying on threads to wake up at preset times, i.e. use one thread to simulate when events occur, in the order you expect them to occur.
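One way to "do it yourself" without relying on sleep arithmetic is a ScheduledExecutorService: scheduleAtFixedRate keeps the rate fixed, so small per-tick wake-up errors do not accumulate into the multi-second drift the question worries about (sendNextChunk is a placeholder for the actual transfer code):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
// Fire every 15 ms, starting immediately; the scheduler compensates for
// late wake-ups by keeping the long-run rate at one tick per period.
scheduler.scheduleAtFixedRate(() -> sendNextChunk(), 0, 15, TimeUnit.MILLISECONDS);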
