Accuracy Vs. Precision
What I would like to know is whether I should use System.currentTimeMillis() or System.nanoTime() when updating my objects' positions in my game. Their change in movement is directly proportional to the elapsed time since the last call, and I want to be as precise as possible.
I've read that there are some serious time-resolution differences between operating systems (namely that Mac / Linux have an almost 1 ms resolution while Windows can be as coarse as 50 ms?). I'm primarily running my apps on Windows, and a 50 ms resolution seems pretty inaccurate.
Are there better options than the two I listed?
Any suggestions / comments?
If you're just looking for extremely precise measurements of elapsed time, use System.nanoTime(). System.currentTimeMillis() will give you the most accurate possible elapsed time in milliseconds since the epoch, but System.nanoTime() gives you a nanosecond-precise time, relative to some arbitrary point.
From the Java Documentation:
public static long nanoTime()
Returns the current value of the most precise available system timer, in nanoseconds.

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
For example, to measure how long some code takes to execute:
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
See also: JavaDoc System.nanoTime() and JavaDoc System.currentTimeMillis() for more info.
Since no one else has mentioned this…
It is not safe to compare the results of System.nanoTime() calls between different JVMs; each JVM may have an independent 'origin' time.
System.currentTimeMillis() will return the (approximate) same value between JVMs, because it is tied to the system wall clock time.
If you want to compute the amount of time that has elapsed between two events, like a stopwatch, use nanoTime(); changes in the system wall-clock make currentTimeMillis() incorrect for this use case.
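A minimal stopwatch along these lines might look as follows. The class and method names here are my own, purely illustrative; the point is that all arithmetic is done on nanoTime() values, which are monotonic within a single JVM:

```java
// A minimal stopwatch built on System.nanoTime(), so it is immune to
// wall-clock adjustments (NTP sync, user changing the time, etc.).
// Class and method names are illustrative, not from any library.
class Stopwatch {
    private final long startNanos = System.nanoTime();

    // Elapsed nanoseconds since this stopwatch was created.
    long elapsedNanos() {
        return System.nanoTime() - startNanos;
    }

    // Elapsed time converted to milliseconds for display.
    long elapsedMillis() {
        return elapsedNanos() / 1_000_000L;
    }
}
```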
Update by Arkadiy: I've observed more correct behavior of System.currentTimeMillis() on Windows 7 in Oracle Java 8. The time was returned with 1 millisecond precision. The source code in OpenJDK has not changed, so I do not know what causes the better behavior.
David Holmes of Sun posted a blog article a couple years ago that has a very detailed look at the Java timing APIs (in particular System.currentTimeMillis() and System.nanoTime()), when you would want to use which, and how they work internally.
Inside the Hotspot VM: Clocks, Timers and Scheduling Events - Part I - Windows
One very interesting aspect of the timer used by Java on Windows for APIs that have a timed wait parameter is that the resolution of the timer can change depending on what other API calls may have been made - system wide (not just in the particular process). He shows an example where using Thread.sleep() will cause this resolution change.
As others have said, currentTimeMillis is clock time, which changes due to users changing the time settings, leap seconds, and internet time sync. (Daylight saving time and time zones do not affect it, since it returns milliseconds since the UTC epoch.) If your app depends on monotonically increasing elapsed time values, you should prefer nanoTime instead.
You might think that the players won't be fiddling with the time settings during game play, and maybe you'd be right. But don't underestimate the disruption due to internet time sync, or perhaps remote desktop users. The nanoTime API is immune to this kind of disruption.
If you want to use clock time, but avoid discontinuities due to internet time sync, you might consider an NTP client such as Meinberg, which "tunes" the clock rate to zero it in, instead of just resetting the clock periodically.
I speak from personal experience. In a weather application that I developed, I was getting randomly occurring wind speed spikes. It took a while for me to realize that my timebase was being disrupted by the behavior of clock time on a typical PC. All my problems disappeared when I started using nanoTime. Consistency (monotonicity) was more important to my application than raw precision or absolute accuracy.
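To sketch the kind of calculation that was going wrong: a rate (such as wind speed) computed from successive readings divided by elapsed time will spike whenever a stepped wall clock shrinks the apparent interval. Using nanoTime() as the timebase avoids that. All names below are illustrative:

```java
// Computes a rate (units per second) from successive sensor readings.
// Using the monotonic nanoTime() as the timebase avoids spurious spikes
// when the wall clock is stepped by NTP sync or a user adjustment.
class RateCalculator {
    private long lastNanos = System.nanoTime();
    private double lastValue;

    // Returns the rate of change since the previous sample.
    double update(double value) {
        long now = System.nanoTime();
        double seconds = (now - lastNanos) / 1e9;
        // Guard against a zero-length interval between calls.
        double rate = seconds > 0 ? (value - lastValue) / seconds : 0.0;
        lastNanos = now;
        lastValue = value;
        return rate;
    }
}
```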
System.nanoTime() isn't supported in older JVMs. If that is a concern, stick with currentTimeMillis().
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms). I'm not sure why, but some Windows machines are just as accurate as Linux machines.
I have used GAGETimer in the past with moderate success.
Yes, if such precision is required use System.nanoTime(), but be aware that you are then requiring a Java 5+ JVM.
On my XP systems, I see system time reported to at least 100 microseconds 278 nanoseconds using the following code:
private void test() {
    System.out.println("currentTimeMillis: " + System.currentTimeMillis());
    System.out.println("nanoTime         : " + System.nanoTime());
    System.out.println();

    testNano(false); // to sync with currentTimeMillis() timer tick
    for (int xa = 0; xa < 10; xa++) {
        testNano(true);
    }
}

private void testNano(boolean shw) {
    long strMS = System.currentTimeMillis();
    long strNS = System.nanoTime();
    long curMS;
    // Spin until currentTimeMillis() ticks over, printing nanoTime deltas.
    while ((curMS = System.currentTimeMillis()) == strMS) {
        if (shw) { System.out.println("Nano: " + (System.nanoTime() - strNS)); }
    }
    if (shw) { System.out.println("Nano: " + (System.nanoTime() - strNS) + ", Milli: " + (curMS - strMS)); }
}
For game graphics & smooth position updates, use System.nanoTime() rather than System.currentTimeMillis(). I switched from currentTimeMillis() to nanoTime() in a game and got a major visual improvement in smoothness of motion.
While one millisecond may seem as though it should already be precise, visually it is not. The factors nanoTime() can improve include:
accurate pixel positioning below wall-clock resolution
ability to anti-alias between pixels, if you want
Windows wall-clock inaccuracy
clock jitter (inconsistency of when wall-clock actually ticks forward)
As other answers suggest, nanoTime does have a performance cost if called repeatedly -- it would be best to call it just once per frame, and use the same value to calculate the entire frame.
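A sketch of that once-per-frame pattern, with illustrative names (the position field, velocity, and frame() method are placeholders for whatever your game actually updates):

```java
// Game-loop sketch: sample nanoTime() once per frame and derive a
// single delta (in seconds) that is reused for every position update
// in that frame, rather than calling nanoTime() per object.
class GameLoop {
    private long lastFrameNanos = System.nanoTime();
    double x;                 // example object position, in pixels
    double velocity = 100.0;  // pixels per second

    void frame() {
        long now = System.nanoTime();            // one call per frame
        double dt = (now - lastFrameNanos) / 1e9;
        lastFrameNanos = now;
        x += velocity * dt;                      // same dt reused everywhere
    }
}
```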
System.currentTimeMillis() is not safe for measuring elapsed time, because it is sensitive to changes in the system's real-time clock.
You should use System.nanoTime.
Please refer to Java System help:
About nanoTime method:
.. This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis() ..
If you use System.currentTimeMillis() your elapsed time can be negative (Back <-- to the future)
I've had good experience with the nanotime JNI library. It provides wall-clock time as two longs (seconds since the epoch and nanoseconds within that second). It's available with the JNI part precompiled for both Windows and Linux.
One thing here is the inconsistency of the nanoTime method: it does not give very consistent values for the same input. currentTimeMillis does much better in terms of performance and consistency, and, though not as precise as nanoTime, has a lower margin of error and therefore more accuracy in its value. I would therefore suggest that you use currentTimeMillis.
I am essentially trying to create a visual representation of the memory being used in my program. I have created a LineChart and set the Y-axis to be Memory used, and the X-axis to be time. My question is, what is the best way to set up a timer, so that incoming data about memory usage can be paired with the current time.
By this I mean, I want to start a timer when the window displays, and continue to count up (possibly with millisecond precision), and so I can say that after the program has been running for this long, this is the amount of memory used.
What would be the best resources to use for this task?
The best bet would probably be just to use System.currentTimeMillis(): save it to a variable when you start the count, then call it again later and compare the saved value with the new one to get your elapsed time.

So:
long startTime = System.currentTimeMillis();
//Do whatever stuff
long timeElapsed = System.currentTimeMillis() - startTime;
One thing to keep in mind with this, though, is that currentTimeMillis() is platform-dependent in how granular it is. On Unix-based systems you get a granularity of about 1 ms; on Windows I think it can be as coarse as 50 ms. So if you need something more accurate than 50 ms time steps, you might need a different method.
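Pairing memory readings with elapsed time might look like the following sketch. The class name, its fields, and the use of Runtime for heap usage are my own choices for illustration; each recorded pair is an (elapsed ms, used bytes) point ready to plot on the chart's X and Y axes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pair elapsed time (ms since sampler creation) with current
// heap usage, producing (x, y) points suitable for a LineChart.
class MemorySampler {
    private final long start = System.currentTimeMillis();
    final List<long[]> samples = new ArrayList<>();

    void sample() {
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        long elapsedMs = System.currentTimeMillis() - start;
        samples.add(new long[] { elapsedMs, usedBytes });
    }
}
```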
You can use a StopWatch to measure the time. Please go through the following link:
https://stackoverflow.com/a/8255766/1759128
There are many alternatives in the different answers to that question; you can use any of them!
I am creating a Java application where the OS's system clock is adjusted from time to time (so it's like a peer-to-peer NTP experiment).
I am looking for a Java construct that is something like a virtual clock, from which I can still get the age of the application in milliseconds from the time it was started. If I always just use System.currentTimeMillis(), it might give me a false age for the application.
Is there something like that, without actually creating another thread solely for it?
To calculate the elapsed time of your program you have multiple possibilities. Not all will fit your program, because your system time could be changed while it is running.
currentTimeMillis()
With that method you get the current time of your system in milliseconds. If you want to calculate the running time of your program, you can save the current time in a long variable. When you later want to know how long the program has run, you simply subtract the saved value from the current time.
Save the time when your program starts:
long start = System.currentTimeMillis();
Subtract the start time from the end time:
long need = System.currentTimeMillis() - start;
Keep in mind that if the system time is changed, you get a wrong result!
nanoTime()
With nanoTime you get a monotonic time value in nanoseconds, relative to some arbitrary origin within the Java Virtual Machine. If you want to calculate the elapsed time, you do the same as with currentTimeMillis(): save the time at the beginning and subtract it at the end.
Save the time when your program starts:
long start = System.nanoTime();
Subtract the start time from the end time:
long need = (System.nanoTime() - start) / 1000000; // divide to get milliseconds
Keep in mind that you get the right elapsed time even if the system time is changed, because nanoTime() does not depend on the wall clock!
Difference
You only get the right elapsed time with System.nanoTime(). You should not use System.currentTimeMillis() unless you do not mind that your result may be wrong: currentTimeMillis() measures "wall-clock" time, so when the system time is updated you simply get a wrong interval. nanoTime() is actually made for calculating elapsed time.
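Put together, an "application age" clock that survives system-clock adjustments needs no extra thread at all; a static nanoTime() snapshot is enough. The class name below is illustrative:

```java
// "Application age" clock: keeps counting correctly even if the OS
// wall clock is adjusted, because nanoTime() is monotonic within a
// single JVM. No background thread required.
class AppAge {
    private static final long START = System.nanoTime();

    // Milliseconds since this class was first loaded.
    static long ageMillis() {
        return (System.nanoTime() - START) / 1_000_000L;
    }
}
```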
There is no way to do this directly in Java; the only solution is to record the time differences applied to the system clock and take them into account in your application.
Of course this depends greatly on the underlying operating system and the tools used to adjust the system clock.
I am writing a mini program in Java to use as a stop watch but I am not sure if I am using the right methods in terms of efficiency and accuracy.
From what I have read on Stack Overflow, it appears that System.nanoTime() is the best method to use when measuring elapsed time. Is that right? To what extent is it accurate - to the nearest nanosecond, microsecond, millisecond, etc.?
Also, while my stop watch is running, I would like it to display the current time elapsed every second. To do this I plan to use a TimerTask and schedule it to report the time (converted to seconds) every second.
Is this the best way? Will this have any effect on the accuracy?
Lastly with my current design will this use up much of a computer's resources e.g. processing time.
PS Sorry can't share much code right now cause I've just started designing it. I just did not want to waste time on a design that would be inefficient and make an inaccurate timer.
Yes, you can use java.util.Timer and a TimerTask that runs periodically and updates your view every second. However, I do not think you have to deal with nanoseconds when you actually need a resolution of seconds only. Use regular System.currentTimeMillis().
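A sketch of that Timer/TimerTask arrangement (the class name is mine, and the println stands in for whatever display update your stopwatch UI performs):

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch: report elapsed whole seconds once per second using
// java.util.Timer. The println is a placeholder for a UI update.
class SecondsReporter {
    private final long startNanos = System.nanoTime();
    private final Timer timer = new Timer(true); // daemon thread

    void start() {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                long seconds = (System.nanoTime() - startNanos) / 1_000_000_000L;
                System.out.println("Elapsed: " + seconds + " s");
            }
        }, 0, 1000); // fire immediately, then every 1000 ms
    }

    void stop() {
        timer.cancel();
    }
}
```

Scheduling at one-second granularity has no meaningful effect on accuracy here, since the displayed value is recomputed from the start time on every tick rather than accumulated.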
I know that System.nanoTime() is now the preferred method for measuring time over System.currentTimeMillis(). The first obvious reason is that nanoTime() gives more precise timing; the other reason I read is that the latter is affected by adjustments to the system's real-time clock. What does "affected by the system's real-time clock" mean?
In this case I've found following blog post excerpt useful:
If you are interested in measuring absolute time then always use
System.currentTimeMillis(). Be aware that its resolution may be quite
coarse (though this is rarely an issue for absolute times.)
If you are interested in measuring/calculating elapsed time, then
always use System.nanoTime(). On most systems it will give a
resolution on the order of microseconds. Be aware though, this call
can also take microseconds to execute on some platforms.
Clocks and Timers - General Overview by David Holmes
Since System.currentTimeMillis() relies on the system's time-of-day clock, adjustments to the time are legitimate, in order to keep it on time.
What do adjustments mean here? Take for instance a look at the description of CLOCK_REALTIME from Linux:
System-wide clock that measures real (i.e., wall-clock) time.
Setting this clock requires appropriate privileges. This clock is
affected by discontinuous jumps in the system time (e.g., if the
system administrator manually changes the clock), and by the
incremental adjustments performed by adjtime(3) and NTP.
Just check the JavaDoc of the methods:
System.nanoTime()
"... This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. ..."
System.currentTimeMillis()
"... Returns the current time in milliseconds. ..."
So as you can see if the system time changes during the measurement using the System.currentTimeMillis(), the interval you measure will change too. However, it will not change when measuring the interval using the System.nanoTime() method.
It means that the value that System.currentTimeMillis() returns is obtained from the internal clock of the machine. If a sysadmin (or NTP) changes the time, for example if the clock is found to be running 5 minutes fast and the sysadmin goes and corrects it, System.currentTimeMillis() will be affected. This means that you can even see the value decrease, and if you use it to measure intervals the timings can be off. You may even measure negative timings.
System.nanoTime() on the other hand returns a value that is derived from some internal CPU counter/clock. The time measured by this clock cannot be changed by any user or program, which means it is more reliable for timing. But that counter is reset on power-off, so it's not useful for finding the current "wall-clock" time.