System.nanoTime how consistently accurate should it be?

I am speed-testing a function that creates a maze, and the total run time I get back varies on every run, from around:
0.004692336s to 0.01715823s
Just wondering if this is normal or a little odd, as I have never used the system timer before. Thanks.

System.nanoTime() provides nanosecond precision, but not necessarily nanosecond accuracy, and because its origin is arbitrary you should only use it to measure elapsed time. The variation you are seeing is normal: JIT compilation, garbage collection and OS scheduling all change how long any single run takes.
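To reduce that run-to-run noise, time the maze generation several times after a warm-up and look at the minimum or median rather than a single sample. Below is a minimal sketch of such a harness; generateMaze() here is only a placeholder standing in for the asker's maze code, not their actual method.

import java.util.Arrays;

public class MazeTiming {
    public static void main(String[] args) {
        int runs = 20;
        long[] samples = new long[runs];

        // Warm-up so the JIT compiles the code under test before we measure it.
        for (int i = 0; i < 5; i++) {
            generateMaze();
        }

        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            generateMaze();                          // the code being measured
            samples[i] = System.nanoTime() - start;
        }

        Arrays.sort(samples);
        System.out.printf("min    = %.6f s%n", samples[0] / 1e9);
        System.out.printf("median = %.6f s%n", samples[runs / 2] / 1e9);
        System.out.printf("max    = %.6f s%n", samples[runs - 1] / 1e9);
    }

    // Placeholder for the maze generator being timed.
    private static void generateMaze() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        if (sum == 42) System.out.println();         // keep the loop from being optimized away
    }
}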

Related

Digging deeper into System.currentTimeMillis(), how precise can we expect it to be? What does millisecond precision actually mean?

Please only answer when you fully comprehend the question.
Please do not close this down; there does not exist a similar question.
I am aware that System.nanoTime() gives nanoseconds from an arbitrary "random" point after the JVM starts, and I am aware that System.currentTimeMillis() only gives millisecond precision.
What I am looking for (keep an open mind) is proof of the hypothesis that the millisecond changes are not exact, once we try to define what "exact" means.
"Exact" would, in my world, mean that every time a new millisecond is registered - say we go from 97 ms to 98 ms to 99 ms and so forth - each update, through whatever mechanism, lands precisely on the boundary; yet at least as observed from Java, we cannot expect nanosecond precision at the switches.
I know, I know. It sounds weird to expect that, but then the question becomes: how accurate are the millisecond switches?
It appears that when you ask System.nanoTime() repeatedly you get a linear graph with nanosecond resolution.
If we at the same time ask System.currentTimeMillis() right after System.nanoTime(), and we disregard the variance in the cost of the calls, it appears that the graph is not linear at the same resolution. The ms graph wanders by roughly ±250 ns.
This is to be expected, yet I cannot find any information on the error margin, or the accuracy, of the ms value.
The same issue exists for second precision as well, or hour, day, or year precision, and so forth. When the year ticks over, how big is the error?
When the ms ticks over, how big is the error in terms of ns?
System.currentTimeMillis() cannot be trusted to stay linear against System.nanoTime(), and we cannot expect System.currentTimeMillis() to keep up with ns precision.
But how big is the error? In computing? In Java, on Unix systems?
From the documentation:
"Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
See the description of the class Date for a discussion of slight discrepancies that may arise between "computer time" and coordinated universal time (UTC)."
So both the precision and accuracy of the call is undefined. They pass the buck to the OS and shrug. I doubt that 250 ns is an accurate measure of its quality. The gap is likely much larger than that. "Tens of milliseconds" as per the documentation is a much more likely value, especially across multiple systems.
Also they essentially disavow any knowledge of UTC as well. "Slight discrepancies" are allowed, whatever that means. Technically this allows any value at all, because what exactly is "slight?" It could be a second or a minute depending on your point of view.
Finally the system clock could be misconfigured by the person operating the system, and at that point everything goes out the window.
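If you want to see the actual tick size on your own machine rather than trust "tens of milliseconds", you can probe it empirically: spin on currentTimeMillis() and record how far nanoTime() advances between changes. A rough sketch of my own (the tick count of 10 is arbitrary, and the observed step depends entirely on the OS and JVM):

public class MillisGranularityProbe {
    public static void main(String[] args) {
        long prev = System.currentTimeMillis();
        long markNs = System.nanoTime();

        for (int ticks = 0; ticks < 10; ) {
            long now = System.currentTimeMillis();
            if (now != prev) {
                long nowNs = System.nanoTime();
                System.out.println("millis advanced by " + (now - prev)
                        + " ms after " + (nowNs - markNs) + " ns");
                prev = now;
                markNs = nowNs;
                ticks++;
            }
        }
    }
}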

currentTimeMillis() and waiting by spinning on nanoTime() [duplicate]

Accuracy Vs. Precision
What I would like to know is whether I should use System.currentTimeMillis() or System.nanoTime() when updating my objects' positions in my game. Their change in movement is directly proportional to the elapsed time since the last call, and I want to be as precise as possible.
I've read that there are some serious time-resolution issues between different operating systems (namely that Mac / Linux have an almost 1 ms resolution while Windows has a 50 ms resolution??). I'm primarily running my apps on Windows, and 50 ms resolution seems pretty inaccurate.
Are there better options than the two I listed?
Any suggestions / comments?
If you're just looking for extremely precise measurements of elapsed time, use System.nanoTime(). System.currentTimeMillis() will give you the most accurate possible elapsed time in milliseconds since the epoch, but System.nanoTime() gives you a nanosecond-precise time, relative to some arbitrary point.
From the Java Documentation:
public static long nanoTime()
Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
For example, to measure how long some code takes to execute:
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
See also: JavaDoc System.nanoTime() and JavaDoc System.currentTimeMillis() for more info.
Since no one else has mentioned this…
It is not safe to compare the results of System.nanoTime() calls between different JVMs; each JVM may have an independent 'origin' time.
System.currentTimeMillis() will return the (approximate) same value between JVMs, because it is tied to the system wall clock time.
If you want to compute the amount of time that has elapsed between two events, like a stopwatch, use nanoTime(); changes in the system wall-clock make currentTimeMillis() incorrect for this use case.
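A small sketch to make the distinction concrete (the 50 ms sleep is just an arbitrary stand-in for some work): nanoTime() deltas are only meaningful inside one JVM, while currentTimeMillis() values can be logged and compared across processes.

public class TimestampChoice {
    public static void main(String[] args) throws InterruptedException {
        // In-process stopwatch: nanoTime() deltas are meaningful within one JVM.
        long start = System.nanoTime();
        Thread.sleep(50);
        long elapsedNs = System.nanoTime() - start;
        System.out.println("elapsed ~" + elapsedNs / 1_000_000 + " ms (nanoTime delta)");

        // Cross-process timestamp: currentTimeMillis() is anchored to the wall clock,
        // so values logged by different JVMs can be compared (approximately).
        System.out.println("wall-clock stamp: " + System.currentTimeMillis());

        // A raw nanoTime() value, by contrast, is only meaningful relative to this
        // JVM's arbitrary origin and should not be compared with another process.
        System.out.println("raw nanoTime (JVM-relative): " + System.nanoTime());
    }
}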
Update by Arkadiy: I've observed more correct behavior of System.currentTimeMillis() on Windows 7 in Oracle Java 8. The time was returned with 1 millisecond precision. The source code in OpenJDK has not changed, so I do not know what causes the better behavior.
David Holmes of Sun posted a blog article a couple years ago that has a very detailed look at the Java timing APIs (in particular System.currentTimeMillis() and System.nanoTime()), when you would want to use which, and how they work internally.
Inside the Hotspot VM: Clocks, Timers and Scheduling Events - Part I - Windows
One very interesting aspect of the timer used by Java on Windows for APIs that have a timed wait parameter is that the resolution of the timer can change depending on what other API calls may have been made - system wide (not just in the particular process). He shows an example where using Thread.sleep() will cause this resolution change.
As others have said, currentTimeMillis is clock time. (Note: daylight saving and time zone changes do not affect currentTimeMillis, since it counts UTC milliseconds since the epoch.) It does change when users alter the time settings, on leap seconds, and on internet time sync. If your app depends on monotonically increasing elapsed time values, you might prefer nanoTime instead.
You might think that the players won't be fiddling with the time settings during game play, and maybe you'd be right. But don't underestimate the disruption due to internet time sync, or perhaps remote desktop users. The nanoTime API is immune to this kind of disruption.
If you want to use clock time, but avoid discontinuities due to internet time sync, you might consider an NTP client such as Meinberg, which "tunes" the clock rate to zero it in, instead of just resetting the clock periodically.
I speak from personal experience. In a weather application that I developed, I was getting randomly occurring wind speed spikes. It took a while for me to realize that my timebase was being disrupted by the behavior of clock time on a typical PC. All my problems disappeared when I started using nanoTime. Consistency (monotonicity) was more important to my application than raw precision or absolute accuracy.
System.nanoTime() isn't supported in older JVMs. If that is a concern, stick with currentTimeMillis
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms). I'm not sure why, but some Windows machines are just as accurate as Linux machines.
I have used GAGETimer in the past with moderate success.
Yes, if such precision is required use System.nanoTime(), but be aware that you are then requiring a Java 5+ JVM.
On my XP systems, I see system time reported to at least 100 microseconds (278 nanoseconds) using the following code:
private void test() {
    System.out.println("currentTimeMillis: "+System.currentTimeMillis());
    System.out.println("nanoTime         : "+System.nanoTime());
    System.out.println();
    testNano(false); // to sync with currentTimeMillis() timer tick
    for(int xa=0; xa<10; xa++) {
        testNano(true);
    }
}

private void testNano(boolean shw) {
    long strMS=System.currentTimeMillis();
    long strNS=System.nanoTime();
    long curMS;
    while((curMS=System.currentTimeMillis()) == strMS) {
        if(shw) { System.out.println("Nano: "+(System.nanoTime()-strNS)); }
    }
    if(shw) { System.out.println("Nano: "+(System.nanoTime()-strNS)+", Milli: "+(curMS-strMS)); }
}
For game graphics & smooth position updates, use System.nanoTime() rather than System.currentTimeMillis(). I switched from currentTimeMillis() to nanoTime() in a game and got a major visual improvement in smoothness of motion.
While one millisecond may seem as though it should already be precise, visually it is not. The factors nanoTime() can improve include:
accurate pixel positioning below wall-clock resolution
ability to anti-alias between pixels, if you want
Windows wall-clock inaccuracy
clock jitter (inconsistency of when wall-clock actually ticks forward)
As other answers suggest, nanoTime does have a performance cost if called repeatedly -- it would be best to call it just once per frame, and use the same value to calculate the entire frame.
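A minimal sketch of that "one nanoTime() per frame" pattern; the position, velocity and sleep-based pacing below are invented for illustration, not taken from any particular game loop:

public class GameLoopSketch {
    private double x;                            // example object position, in pixels
    private final double velocity = 120.0;       // example speed, in pixels per second

    public static void main(String[] args) throws InterruptedException {
        new GameLoopSketch().run();
    }

    private void run() throws InterruptedException {
        long previous = System.nanoTime();
        for (int frame = 0; frame < 300; frame++) {
            long now = System.nanoTime();        // read the clock once per frame
            double dtSeconds = (now - previous) / 1e9;
            previous = now;

            update(dtSeconds);                   // every update in this frame sees the same dt
            // render() would go here

            Thread.sleep(16);                    // crude ~60 FPS pacing, just for the sketch
        }
    }

    private void update(double dtSeconds) {
        x += velocity * dtSeconds;               // movement proportional to elapsed time
    }
}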
System.currentTimeMillis() is not safe for measuring elapsed time because it is sensitive to changes in the system's real-time clock.
You should use System.nanoTime.
Please refer to Java System help:
About nanoTime method:
… This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis() …
If you use System.currentTimeMillis() your elapsed time can be negative (Back <-- to the future)
I've had good experience with nanotime (a JNI-based library, not to be confused with System.nanoTime()). It provides wall-clock time as two longs (seconds since the epoch and nanoseconds within that second), and it's available with the JNI part precompiled for both Windows and Linux.
One thing to note here is the inconsistency of the nanoTime method: it does not give very consistent values for the same input. currentTimeMillis does much better in terms of performance and consistency, and also, though not as precise as nanoTime, has a lower margin of error, and therefore more accuracy in its value. I would therefore suggest that you use currentTimeMillis.

How to improve System.currentTimeMillis() granularity?

How can I achieve this without giving very large arrays as input? I am measuring the running time of different algorithms, and for an array of 20 elements I get very similar (in fact the same) values. I tried dividing the total time by 1000000000 to get rid of the E (exponent notation), and then used something like 16 "mirrors" where I copied the input array and executed the sort again on each copy. But still the result is the same for heap sort and quick sort. Any ideas without needing to write redundant lines?
Sample output:
Random array:
MergeSort:
Total time 14.333066343496
QuickSort:
Total time 14.3330663435256
HeapSort:
Total time 14.3330663435256
If you need code snippets just notify me.
To your direct question, use System.nanoTime() for more granular timestamps.
To your underlying question of how to get better benchmarks, you should run the benchmark repeatedly and on larger data sets. A benchmark that takes ~14ms to execute is going to be very noisy, even with a more precise clock. See also How do I write a correct micro-benchmark in Java?
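As a concrete illustration of "repeatedly and on larger data sets", here is one way to time a sort: grow the array until a single run takes a measurable amount of time, run it several times on fresh copies, and keep the best time. This is only a sketch; the array size, run count, and the use of Arrays.sort as a stand-in for the asker's own sort implementations are all assumptions.

import java.util.Arrays;
import java.util.Random;
import java.util.function.Consumer;

public class SortTiming {
    public static void main(String[] args) {
        time("Arrays.sort (stand-in)", Arrays::sort);
    }

    static void time(String label, Consumer<int[]> sorter) {
        Random rnd = new Random(42);
        int size = 1_000_000;                        // large enough that one run is measurable
        int runs = 10;
        long best = Long.MAX_VALUE;

        for (int r = 0; r < runs; r++) {
            int[] data = rnd.ints(size).toArray();   // fresh copy of the input for each run
            long start = System.nanoTime();
            sorter.accept(data);
            long elapsed = System.nanoTime() - start;
            best = Math.min(best, elapsed);
        }
        System.out.printf("%s: best of %d runs = %.3f ms%n", label, runs, best / 1e6);
    }
}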
You can't improve the granularity of this method. According to the Java SE documentation:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
(source)
As others stated, for time lapses, public static long nanoTime() would give you more precision, though not necessarily better resolution:
This method provides nanosecond precision, but not necessarily nanosecond resolution.
(source)

inconsistent time elapsed in java

I am trying to compute elapsed time in Java using nanoTime, but every time it gives me different results. Why is it not always consistent?
Sample code :
long number = 123456789L; // example value; the original snippet assumed 'number' was defined elsewhere
long startTime = System.nanoTime();
String.valueOf(number).length();
long endTime = System.nanoTime();
System.out.println(endTime - startTime);
nanoTime() and its sister currentTimeMillis() are not exact, and depending on the architecture you run your code on they suffer from rounding (see the javadoc for details):
This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis().
If you measure the time in order to decide whether alternative A or B is faster, you are basically doing a micro-benchmark. There are frameworks for this and you should use them. Probably the best-known one for Java is JMH. If you need to do the same for larger code parts, you might consider profiling.
You might want to have a look at this stackoverflow post: How do I write a correct micro-benchmark in Java?
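For completeness, here is roughly what the asker's snippet could look like as a JMH benchmark. This is only a sketch: it assumes JMH is on the classpath, and the class name, field name and value for number are invented; JMH itself takes care of warm-up, iteration counts and reporting.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ValueOfBenchmark {

    long number = 123456789L;   // example input; @Param could be used to try several values

    @Benchmark
    public int lengthOfValueOf() {
        // Returning the result prevents the JIT from eliminating the call as dead code.
        return String.valueOf(number).length();
    }
}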
The elapsed time will vary depending on how the JVM executes and allocates processing time to the thread in which this code runs.
I tried multiple runs and always got result in between 9000 to 11000 nanoseconds range. This looks fairly consistent.

Why is System.nanoTime() way slower (in performance) than System.currentTimeMillis()?

Today I did a quick little benchmark to test the speed of System.nanoTime() and System.currentTimeMillis():
long startTime = System.nanoTime();
for(int i = 0; i < 1000000; i++) {
    long test = System.nanoTime();
}
long endTime = System.nanoTime();
System.out.println("Total time: "+(endTime-startTime));
These are the results:
System.currentTimeMillis(): average of 12.7836022 / function call
System.nanoTime(): average of 34.6395674 / function call
Why are the differences in running speed so big?
Benchmark system:
Java 1.7.0_25
Windows 8 64-bit
CPU: AMD FX-6100
From this Oracle blog:
System.currentTimeMillis() is implemented using the
GetSystemTimeAsFileTime method, which essentially just reads the low
resolution time-of-day value that Windows maintains. Reading this
global variable is naturally very quick - around 6 cycles according to
reported information.
System.nanoTime() is implemented using the
QueryPerformanceCounter/ QueryPerformanceFrequency API (if available,
else it returns currentTimeMillis*10^6).
QueryPerformanceCounter(QPC) is implemented in different ways
depending on the hardware it's running on. Typically it will use
either the programmable-interval-timer (PIT), or the ACPI power
management timer (PMT), or the CPU-level timestamp-counter (TSC).
Accessing the PIT/PMT requires execution of slow I/O port instructions
and as a result the execution time for QPC is in the order of
microseconds. In contrast reading the TSC is on the order of 100 clock
cycles (to read the TSC from the chip and convert it to a time value
based on the operating frequency).
Perhaps this answers the question. The two methods use different numbers of clock cycles, which is why the latter is slower.
Further in that blog in the conclusion section:
If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.
Most OSes (you didn't mention which one you are using) have an in-memory counter/clock which provides millisecond accuracy (or close to that). For nanosecond accuracy, most have to read a hardware counter. Communicating with hardware is slower than reading a value already in memory.
It may only be the case on Windows. See this answer to a similar question.
Basically, System.currentTimeMillis() just reads a global variable maintained by Windows (which is why it has low granularity), whereas System.nanoTime() actually has to do IO operations.
You are measuring that on Windows, aren't you? I went through this exercise in 2008. nanoTime IS slower on Windows than currentTimeMillis. As I recall, on Linux, nanoTime is faster than currentTimeMillis and is certainly faster than it is on Windows.
The important thing to note is that if you are trying to measure the aggregate of multiple sub-millisecond operations, you must use nanoTime: if an operation finishes in less than 1/1000th of a second, code comparing currentTimeMillis values will show the operation as instantaneous, so 1,000 of these will still appear instantaneous. What you might want to do is use nanoTime and then round to the nearest millisecond, so if an operation took 8000 nanoseconds it will be counted as 1 millisecond, not 0.
Arithmetic note:
8000 nanoseconds is 8 microseconds is 0.008 milliseconds. Rounding will take that to 0 milliseconds.
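A tiny sketch of the aggregation idea that avoids that rounding trap: accumulate nanoseconds across all operations and convert to milliseconds once at the end. doSmallOperation() is a made-up stand-in for a sub-millisecond task.

import java.util.concurrent.TimeUnit;

public class AggregateSubMillisecondOps {
    static double sink;   // accumulator to keep the work from being optimized away

    public static void main(String[] args) {
        long totalNanos = 0;
        for (int i = 0; i < 1_000; i++) {
            long start = System.nanoTime();
            doSmallOperation();                       // each call finishes in well under 1 ms
            totalNanos += System.nanoTime() - start;
        }
        // Convert only once, at the end; per-operation millisecond readings would all be 0.
        System.out.println("total: " + totalNanos + " ns = "
                + TimeUnit.NANOSECONDS.toMillis(totalNanos) + " ms (sink=" + sink + ")");
    }

    private static void doSmallOperation() {
        sink += Math.sqrt(System.nanoTime());         // trivial stand-in for real work
    }
}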
