I'm using a ScheduledExecutorService to update a database every hour with the scheduleAtFixedRate method. The problem is that it gradually gets later - over a long-running service I've been logging it, and it drifts by about a second a day.
I made a small class just to examine this aspect - it seems to work fine when nothing is happening on the PC (running WinXP), but if things are going on it rapidly gets later. Its first log last night was 18:00:00.5, and this morning it logged 09:00:00.5, then 10:00:05.9, 11:00:26.8, 12:00:45.3, 13:01:07.8...
I can attach the code although my example isn't the smallest.
Anyone else experienced this? Any ideas why this isn't working properly?
I can think of lots of ways around it but I'd really like to know why it doesn't work as advertised!
Thanks, Mike
This is normal AFAIK. With scheduleAtFixedRate, if any execution of the task takes longer than its period, then subsequent executions may start late. That said, I'd recommend scheduleWithFixedDelay instead: it ensures that the specified delay elapses between the end of one execution and the start of the next.
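For illustration, here's a minimal sketch of both options with a ScheduledExecutorService (the HourlyUpdater class and the updateDatabase Runnable are placeholders, not your actual code):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HourlyUpdater {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        Runnable updateDatabase = () ->
                System.out.println("Update ran at " + java.time.LocalTime.now()); // placeholder for the hourly DB update

        // Fixed rate: runs are targeted at start, start + 1h, start + 2h, ...
        // If one run overruns or the machine is busy, later runs may start late.
        scheduler.scheduleAtFixedRate(updateDatabase, 0, 1, TimeUnit.HOURS);

        // Fixed delay: the next run starts one hour after the previous run finishes,
        // so the interval between runs is stable even if individual runs are slow.
        // scheduler.scheduleWithFixedDelay(updateDatabase, 0, 1, TimeUnit.HOURS);
    }
}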
Related
In my server and client main loops, I have custom timers. They're pretty basic things. I'm using System.currentTimeMillis() to get the milliseconds and then comparing it to different variables for different timers. If the timer variable is less than tickCount, it runs the code, then sets the timer to tickCount + UpdateTime.
Here is an example:
long tickCount = System.currentTimeMillis();
if (LastUpdateTime_WoodCutting < tickCount) {
    woodcutting();
    LastUpdateTime_WoodCutting = tickCount + UpdateTime_WoodCutting;
}
UpdateTime_WoodCutting is set to 10. In theory, this should update this timer every 10ms. I'm sure it's not exactly that accurate, but the problem I'm having is, overall, this timer is meant to be a 10 second timer, which would be 10000ms.
The timer seems to be taking anywhere from 20-30 seconds to get there. The woodcutting method just checks whether the timer in the player class is less than 10000; if so, it adds 10 to it, and once it reaches 10000 or more, it executes the code for cutting down a tree in the game.
Another problem is that the client uses the exact same code for timers as the server, yet even while running on the same machine, they do not line up. The client's timer seems to finish about halfway through the server's timer. I've tried a bunch of alternatives to System.currentTimeMillis() but they all pretty much work exactly the same, so it hasn't been very fruitful.
Basically, what I'm trying to figure out is how I should handle these timers, because it doesn't appear that I'm handling them properly. Before a bunch of changes to my code, these timers worked flawlessly, but all of a sudden they do not. I don't know if it's the result of updating Gradle or Java (from 1.7 to 1.8), but I'm very frustrated with this and it's a pretty game-breaking issue.
My source code is easily almost 40k lines, and I'm unable to share all of it, but I will provide whatever someone might need to see in order to help me with this.
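For what it's worth, a common drift-free pattern for this kind of game-loop timer is to accumulate elapsed time and run the update once per missed interval, rather than scheduling the next run at "now + interval". A rough sketch only (the running flag is assumed; woodcutting() and UpdateTime_WoodCutting are from the snippet above):

long lastTick = System.currentTimeMillis();
long accumulator = 0;

while (running) {
    long now = System.currentTimeMillis();
    accumulator += now - lastTick;
    lastTick = now;

    // Run one update per elapsed interval, so a slow loop iteration
    // doesn't stretch the overall 10-second timer.
    while (accumulator >= UpdateTime_WoodCutting) {
        woodcutting();
        accumulator -= UpdateTime_WoodCutting;
    }

    // ... rest of the main loop ...
}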
I am using the java.util.Timer class to execute repetitive tasks (e.g. a polling mechanism that checks a status every second).
timer.scheduleAtFixedRate(new Poller(), 0, pollingInterval);
The problem case is: timers can fall behind, e.g. if they need to execute every second but the task takes 2 seconds to run.
The documentation says that the timer will try to catch up: if the task suddenly takes only half a second again, it will speed up and fire the missed executions in quick succession.
First of all, I am wondering: is there a built-in way to detect that it's running behind?
But anyway, I am looking for a way to disable the catching-up behavior.
If it falls behind, I just want it to skip a couple of cycles.
(I've also used a ScheduledThreadPoolExecutor for similar tasks. It has more options, and maybe the solution is in there, but it's a bit overwhelming to find the right one.)
EDIT:
Now that I think about it, I think the way to do it with a ScheduledThreadPoolExecutor is to use the scheduleWithFixedDelay method, which uses an interval between task executions.
Still, is there a way to achieve the same with a Timer?
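For reference, Timer does offer fixed-delay semantics through the plain schedule(task, delay, period) overload. A small sketch (the anonymous TimerTask stands in for the Poller class from the question, and the interval value is just a placeholder):

import java.util.Timer;
import java.util.TimerTask;

public class PollerDemo {
    public static void main(String[] args) {
        long pollingInterval = 1000; // ms; placeholder value
        Timer timer = new Timer();

        TimerTask poller = new TimerTask() {
            @Override
            public void run() {
                // poll the status here
            }
        };

        // schedule(...) is fixed-delay execution: each run is scheduled relative to
        // when the previous run actually happened, so there is no catch-up burst.
        timer.schedule(poller, 0, pollingInterval);

        // scheduleAtFixedRate(...) is fixed-rate execution: late runs are fired
        // in rapid succession to catch up with the original schedule.
        // timer.scheduleAtFixedRate(poller, 0, pollingInterval);
    }
}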
If you don't care if the timer falls behind, why not use a simple while loop with a Thread.sleep, such as:
while (condition) {
    // do work here
    Thread.sleep(1000L); // sleep throws InterruptedException, so declare or catch it
}
If this falls behind due to OS scheduling constraints, it will never try to make up the time that was lost. It will always delay for at least the amount of time you specify. Remember, simple is better than complex.
How do things like scheduleAtFixedRate work? How does it work behind the scenes, and is there a penalty to using it?
More specifically, I have a task that I want to run periodically, say every 12 hours. The period is not strict at all, so my first instinct was to check on every request (Tomcat server) whether it's been more than 12 hours since the task last executed and, if so, execute it and reset the timer. The downside of this is that I have to do a small time check on every request, make sure the task is run only once (using a semaphore or something similar), and the task might not execute for a long time if there are no requests.
scheduleAtFixedRate makes it easier to schedule a recurring task, but since I don't know how it does it, I don't know what the performance impact is. Is there a thread continually checking if the task is due to run? etc.
edit:
In Timer.java, there's a mainLoop function which, in my understanding, is something like this (overly simplified):
while (true) {
    currentTime = System.currentTimeMillis();
    if (myTask.nextExecutionTime == currentTime) myTask.run();
}
Won't this loop try to run as fast as possible and use a ton of CPU (I know, obviously not, but why)? There's no Thread.sleep in there to slow things down.
You can read the code if you wish to work out how it works.
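The short version is that the worker thread doesn't spin; it blocks on the task queue with a timed wait until the next task is due. Here's a small, self-contained toy that illustrates the same wait-until-due idea (NaiveScheduler is purely illustrative, not the JDK's actual implementation):

import java.util.concurrent.TimeUnit;

public class NaiveScheduler {
    private final Object lock = new Object();

    public void runEvery(Runnable task, long periodMillis) throws InterruptedException {
        long nextExecutionTime = System.currentTimeMillis();
        while (true) {
            synchronized (lock) {
                long now = System.currentTimeMillis();
                if (now < nextExecutionTime) {
                    lock.wait(nextExecutionTime - now); // block until the task is due; no busy loop
                    continue;
                }
            }
            task.run();
            nextExecutionTime += periodMillis; // fixed-rate style: schedule relative to the plan, not to "now"
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new NaiveScheduler().runEvery(
                () -> System.out.println("tick " + System.currentTimeMillis()),
                TimeUnit.SECONDS.toMillis(1));
    }
}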
There is an overhead to using a ScheduledExecutorService in terms of CPU and memory; however, on the scale of hours, minutes, seconds, even milliseconds, it is probably not worth worrying about. If you have a task running in the range of microseconds, I would consider something more lightweight.
In short, the overhead is probably too small for you to notice. The benefit it gives you is ease of use, and it is likely to be worth it.
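To give a sense of the ease of use: the 12-hour task from the question comes down to a single call (the class name and the lambda are placeholders, not your actual task):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MaintenanceScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // One worker thread sleeps until the task is due; it does not poll or depend on requests.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("running the periodic maintenance task"),
                0, 12, TimeUnit.HOURS);
    }
}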
Part of an application I'm writing uses a chronometer system. The timer should tick once every ms.
In my chronometer, I have these variables.
private static final int DELAY_IN_MILLISECONDS = 0;
private int intervalInMilliseconds = 1;
I start the timer like this:
timer = new Timer();
timer.schedule(new Task(), DELAY_IN_MILLISECONDS,
getIntervalInMilliseconds());
Yet after a second it has only reached about +-100 ms instead of 1000 ms.
It used to work fine, until I added code to a different part of the game. I'm fairly sure I've changed nothing in the timer, yet it became slower than normal (it worked fine at first).
Is it possible that my timer runs slower because the application needs too much CPU time for other things? (It's a game I'm creating.) If so, what would be the conventional way to solve this? Keep in mind that it's more important that the game runs smoothly than that the timer is accurate.
Thanks in advance!
EDIT: Is there a way to find out which part of your application is "bottlenecking" it, such as checking where it uses the most resources?
If long-term accuracy of scheduling is what you are after, then you should use the Timer#scheduleAtFixedRate method. If you continually reschedule the task with a delay, then the Timer instance cannot compensate for its past timing errors.
If short-term accuracy is also a concern, then you should switch to a ScheduledExecutorService, which uses a more accurate low-level mechanism to schedule the tasks.
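As a sketch of what that could look like for the chronometer (the Chronometer class and field names here are illustrative, not the original code):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class Chronometer {
    private final AtomicLong elapsedMillis = new AtomicLong();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Fixed-rate scheduling compensates for individual late ticks,
        // so the count stays close to wall-clock time over the long run.
        scheduler.scheduleAtFixedRate(elapsedMillis::incrementAndGet,
                1, 1, TimeUnit.MILLISECONDS);
    }

    public long elapsed() {
        return elapsedMillis.get();
    }
}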
I actually have a bit of an ad-hoc solution to the problem.
I stopped counting every single ms and instead add 15 ms per tick, to account for the interval the timer actually sleeps.
The timer runs smoothly now, and after a minute it was less than 1 second off the time it should have shown.
Thanks everyone for your help, but any other (less ad-hoc) solutions are still very welcome!
EDIT: I got this method thanks to Boris, so you can post your comment as an answer if you like :)
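For completeness, a sketch of that workaround (CoarseChronometer and the 15 ms tick size are illustrative; scheduleAtFixedRate is used so the count keeps long-term accuracy, as suggested in the answer above):

import java.util.Timer;
import java.util.TimerTask;

public class CoarseChronometer {
    private static final long TICK_MILLIS = 15;
    private volatile long elapsedMillis = 0;

    public void start() {
        new Timer().scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                elapsedMillis += TICK_MILLIS; // add the tick size instead of counting single ms
            }
        }, TICK_MILLIS, TICK_MILLIS);
    }

    public long elapsedMillis() {
        return elapsedMillis;
    }
}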
For the isValid method from Connection, how much of a timeout should I give it? :S I have no idea what a normal timeout would be - how long should it take? :)
I don't want isValid() to return false when it could have returned true given more time, but I also don't want it to slow down the whole program and give me "freezes".
If I set 0, does that mean I don't care about any timeout and it will try for as long as it needs to?
Thanks!
This depends on a lot of things. Generally, I'd assume that the time that isValid takes is about the same time that a simple query would take. For that reason, I would use the maximum acceptable time for the user.
E.g. if you think that users of your (say) web application will wait at most 5 seconds for a response before giving up, you might want to use that value for isValid, because it makes no sense to declare the connection valid if it takes, say, 50 seconds to reach the database.
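As a sketch (ConnectionCheck and the parameter name are illustrative): Connection.isValid takes its timeout in seconds, and a timeout of 0 means no limit is applied - it waits as long as the validation operation takes.

import java.sql.Connection;
import java.sql.SQLException;

public class ConnectionCheck {
    // isValid takes its timeout in SECONDS; a value of 0 means no timeout is applied.
    static boolean isUsable(Connection connection, int maxAcceptableSeconds) {
        try {
            return connection.isValid(maxAcceptableSeconds);
        } catch (SQLException e) {
            return false; // isValid only throws if the timeout value is negative
        }
    }
}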
I have no idea what a normal timeout would be, how much time should it take?
Then put the timeout into the program configuration (whatever that is). Maybe log the events when timeouts occur and, over time, build up some experience of what a normal timeout is.
... but also I don't want it to slow down the whole program and give me "freezes"
If this is an interactive program for end users, think about how long the user will wait before getting nervous. For me, 2-3 seconds is still OK, depending on what the program is doing for me.
If this is a background server program, think about what could cause the connection to be delayed (network reconnects, etc.). A background program can wait longer.
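A sketch of the configuration-plus-logging idea (the property name, default value, and logger usage are assumptions, not from the original program):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.logging.Logger;

public class ConnectionChecker {
    private static final Logger LOG = Logger.getLogger(ConnectionChecker.class.getName());
    // Timeout comes from configuration (a system property here), with a fallback default.
    private static final int TIMEOUT_SECONDS =
            Integer.getInteger("db.validation.timeout.seconds", 5);

    public static boolean check(Connection connection) {
        long start = System.currentTimeMillis();
        try {
            boolean valid = connection.isValid(TIMEOUT_SECONDS);
            if (!valid) {
                // Log failures so you can learn over time what a "normal" timeout is.
                LOG.warning("isValid returned false after "
                        + (System.currentTimeMillis() - start) + " ms");
            }
            return valid;
        } catch (SQLException e) {
            LOG.warning("isValid threw: " + e.getMessage());
            return false;
        }
    }
}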