currentTimeMillis() and waiting by spinning on nanoTime() [duplicate] - java

Accuracy Vs. Precision
What I would like to know is whether I should use System.currentTimeMillis() or System.nanoTime() when updating my object's positions in my game? Their change in movement is directly proportional to the elapsed time since the last call and I want to be as precise as possible.
I've read that there are some serious time-resolution issues between different operating systems (namely that Mac / Linux have an almost 1 ms resolution while Windows has a 50 ms resolution??). I'm primarily running my apps on Windows, and a 50 ms resolution seems pretty inaccurate.
Are there better options than the two I listed?
Any suggestions / comments?

If you're just looking for extremely precise measurements of elapsed time, use System.nanoTime(). System.currentTimeMillis() will give you the most accurate possible time in milliseconds since the epoch, but System.nanoTime() gives you a nanosecond-precise time, relative to some arbitrary point.
From the Java Documentation:
public static long nanoTime()
Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
For example, to measure how long some code takes to execute:
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
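If you want the result in a friendlier unit, java.util.concurrent.TimeUnit (available since Java 5) can convert the nanosecond difference; a minimal sketch:

import java.util.concurrent.TimeUnit;

long startTime = System.nanoTime();
// ... the code being measured ...
long elapsedNanos = System.nanoTime() - startTime;
// Convert the nanosecond delta to milliseconds for display.
long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(elapsedNanos);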
See also: JavaDoc System.nanoTime() and JavaDoc System.currentTimeMillis() for more info.

Since no one else has mentioned this…
It is not safe to compare the results of System.nanoTime() calls between different JVMs; each JVM may have an independent 'origin' time.
System.currentTimeMillis() will return the (approximate) same value between JVMs, because it is tied to the system wall clock time.
If you want to compute the amount of time that has elapsed between two events, like a stopwatch, use nanoTime(); changes in the system wall-clock make currentTimeMillis() incorrect for this use case.
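A minimal stopwatch sketch along those lines (the class is illustrative, not from any library):

// Minimal stopwatch based on System.nanoTime(); immune to wall-clock changes.
public final class Stopwatch {
    private long startNanos;

    public void start() {
        startNanos = System.nanoTime();
    }

    // Elapsed time in milliseconds since start() was called.
    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1000000L;
    }
}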

Update by Arkadiy: I've observed more accurate behavior of System.currentTimeMillis() on Windows 7 in Oracle Java 8: the time was returned with 1 millisecond precision. The source code in OpenJDK has not changed, so I do not know what causes the improved behavior.
David Holmes of Sun posted a blog article a few years ago that takes a very detailed look at the Java timing APIs (in particular System.currentTimeMillis() and System.nanoTime()), when you would want to use which, and how they work internally.
Inside the Hotspot VM: Clocks, Timers and Scheduling Events - Part I - Windows
One very interesting aspect of the timer used by Java on Windows for APIs that have a timed wait parameter is that the resolution of the timer can change depending on what other API calls may have been made - system wide (not just in the particular process). He shows an example where using Thread.sleep() will cause this resolution change.
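If you want to observe the effect yourself, here is a rough sketch (mine, not from the article) that measures the gap between successive currentTimeMillis() ticks, first on an idle JVM and then while a background thread sleeps with an interval that is not a multiple of the default interrupt period:

public class TickGranularity {
    // Spin until currentTimeMillis() changes and return the size of the jump.
    static long measureTick() {
        long t0 = System.currentTimeMillis();
        long t1;
        while ((t1 = System.currentTimeMillis()) == t0) {
            // busy-wait for the next clock tick
        }
        return t1 - t0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Idle tick: " + measureTick() + " ms");
        Thread sleeper = new Thread(new Runnable() {
            public void run() {
                try {
                    // On some Windows versions this raises the system-wide
                    // timer resolution, as described above.
                    for (int i = 0; i < 2000; i++) Thread.sleep(5);
                } catch (InterruptedException ignored) { }
            }
        });
        sleeper.setDaemon(true);
        sleeper.start();
        Thread.sleep(100); // give the sleeper time to affect the timer
        System.out.println("Tick with sleeper running: " + measureTick() + " ms");
    }
}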

As others have said, currentTimeMillis is clock time, which changes due to users changing the time settings, leap seconds, and internet time sync. (Daylight saving and time zone are unrelated to currentTimeMillis, since it counts milliseconds since the UTC epoch.) If your app depends on monotonically increasing elapsed time values, you might prefer nanoTime instead.
You might think that the players won't be fiddling with the time settings during game play, and maybe you'd be right. But don't underestimate the disruption due to internet time sync, or perhaps remote desktop users. The nanoTime API is immune to this kind of disruption.
If you want to use clock time, but avoid discontinuities due to internet time sync, you might consider an NTP client such as Meinberg, which "tunes" the clock rate to zero it in, instead of just resetting the clock periodically.
I speak from personal experience. In a weather application that I developed, I was getting randomly occurring wind speed spikes. It took a while for me to realize that my timebase was being disrupted by the behavior of clock time on a typical PC. All my problems disappeared when I started using nanoTime. Consistency (monotonicity) was more important to my application than raw precision or absolute accuracy.

System.nanoTime() isn't supported in older JVMs. If that is a concern, stick with currentTimeMillis.
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms). I'm not sure why, but some Windows machines are just as accurate as Linux machines.
I have used GAGETimer in the past with moderate success.

Yes, if such precision is required, use System.nanoTime(), but be aware that you are then requiring a Java 5+ JVM.
On my XP systems, I see system time reported at a resolution of at least 100 microseconds (278 nanoseconds in some runs) using the following code:
private void test() {
    System.out.println("currentTimeMillis: " + System.currentTimeMillis());
    System.out.println("nanoTime         : " + System.nanoTime());
    System.out.println();

    testNano(false);                            // to sync with currentTimeMillis() timer tick
    for(int xa = 0; xa < 10; xa++) {
        testNano(true);
    }
}

private void testNano(boolean shw) {
    long strMS = System.currentTimeMillis();
    long strNS = System.nanoTime();
    long curMS;
    // Spin until currentTimeMillis() ticks over, printing nanoTime() deltas as it goes.
    while((curMS = System.currentTimeMillis()) == strMS) {
        if(shw) { System.out.println("Nano: " + (System.nanoTime() - strNS)); }
    }
    if(shw) { System.out.println("Nano: " + (System.nanoTime() - strNS) + ", Milli: " + (curMS - strMS)); }
}

For game graphics & smooth position updates, use System.nanoTime() rather than System.currentTimeMillis(). I switched from currentTimeMillis() to nanoTime() in a game and got a major visual improvement in smoothness of motion.
While one millisecond may seem as though it should already be precise, visually it is not. The factors nanoTime() can improve include:
accurate pixel positioning below wall-clock resolution
ability to anti-alias between pixels, if you want
Windows wall-clock inaccuracy
clock jitter (inconsistency of when wall-clock actually ticks forward)
As other answers suggest, nanoTime does have a performance cost if called repeatedly -- it would be best to call it just once per frame, and use the same value to calculate the entire frame.
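A minimal sketch of that pattern (running, updatePositions and render are placeholders, not from any framework):

long previous = System.nanoTime();
while (running) {
    long now = System.nanoTime();                   // one sample per frame
    double deltaSeconds = (now - previous) / 1.0e9;
    previous = now;
    updatePositions(deltaSeconds);  // movement proportional to elapsed time
    render();
}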

System.currentTimeMillis() is not safe for measuring elapsed time, because it is sensitive to changes in the system's real-time clock.
You should use System.nanoTime() instead.
Please refer to Java System help:
About nanoTime method:
.. This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis() ..
If you use System.currentTimeMillis() your elapsed time can be negative (Back <-- to the future)

I've had good experience with nanotime. It provides wall-clock time as two longs (seconds since the epoch and nanoseconds within that second), using a JNI library. It's available with the JNI part precompiled for both Windows and Linux.

One thing to consider here is the inconsistency of the nanoTime method: it does not give very consistent values for the same input. currentTimeMillis does much better in terms of performance and consistency, and also, though not as precise as nanoTime, has a lower margin of error, and therefore more accuracy in its value. I would therefore suggest that you use currentTimeMillis.

Related

Why does the Java Scheduler exhibit significant time drift on Windows?

I have a Java service running on Windows 7 that runs once per day on a SingleThreadScheduledExecutor. I've never given it much thought as it's non-critical, but recently I looked at the numbers and saw that the service was drifting approximately 15 minutes per day, which sounded way too much, so I dug into it.
Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
    long drift = (System.currentTimeMillis() - lastTimeStamp - seconds * 1000);
    lastTimeStamp = System.currentTimeMillis();
}, 0, 10, TimeUnit.SECONDS);
This method pretty consistently drifts +110ms per each 10 seconds. If I run it on a 1 second interval the drift averages +11ms.
Interestingly if I do the same on a Timer() values are pretty consistent with an average drift less than a full millisecond.
new Timer().schedule(new TimerTask() {
    @Override
    public void run() {
        long drift = (System.currentTimeMillis() - lastTimeStamp - seconds * 1000);
        lastTimeStamp = System.currentTimeMillis();
    }
}, 0, seconds * 1000);
Linux: doesn't drift (neither with Executor nor with Timer)
Windows: drifts like crazy with Executor, doesn't with Timer
Tested with Java8 and Java11.
Interestingly, if you assume a drift of 11ms per second you'll get 950400ms drift per day which amounts to 15.84 minutes per day. So it's pretty consistent.
The question is: why?
Why would this happen with a SingleThreadExecutor but not with a Timer?
Update1: following Slaw's comment I tried on multiple different hardware. What I found is that this issue doesn't manifest on any personal hardware. Only on the company one. On company hardware it also manifests on Win10, though an order of magnitude less.
As pointed out in the comments, the ScheduledThreadPoolExecutor bases its calculations on System.nanoTime(). For better or worse, the old Timer API predates nanoTime(), and so uses System.currentTimeMillis() instead.
The difference here might seem subtle, but is more significant than one might expect. Contrary to popular belief, nanoTime() is not just a "more accurate version" of currentTimeMillis(). Millis is locked to system time, whereas nanos is not. Or as the docs put it:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. [...] The values returned by this method become meaningful only when the difference between two such values, obtained within the same instance of a Java virtual machine, is computed.
In your example, you're not following this guidance for the values to be "meaningful" - understandably, because the ScheduledThreadPoolExecutor only uses nanoTime() as an implementation detail. But the end result is the same, that being that you can't guarantee that it will stay synchronised to the system clock.
But why not? Seconds are seconds, right, so the two should stay in sync from a certain, known point?
Well, in theory, yes. But in practice, probably not.
Taking a look at the relevant native code on windows:
LARGE_INTEGER current_count;
QueryPerformanceCounter(&current_count);
double current = as_long(current_count);
double freq = performance_frequency;
jlong time = (jlong)((current/freq) * NANOSECS_PER_SEC);
return time;
We see nanos() uses the QueryPerformanceCounter API, which returns the "ticks" of a counter whose frequency is defined by QueryPerformanceFrequency. That frequency will stay identical, but the timer it's based on, and the synchronisation algorithm Windows uses, vary by configuration, OS, and underlying hardware. Even ignoring the above, it's never going to be close to 100% accurate (it's based on a reasonably cheap crystal oscillator somewhere on the board, not a Caesium time standard!) so it's going to drift away from the system time as NTP keeps the latter in sync with reality.
In particular, this link gives some useful background, and reinforces the above point:
When you need time stamps with a resolution of 1 microsecond or better and you don't need the time stamps to be synchronized to an external time reference, choose QueryPerformanceCounter.
(Bolding is mine.)
For your specific case of Windows 7 performing badly, note that in Windows 8+, the TSC synchronisation algorithm was improved, and QueryPerformanceCounter is always based on the TSC (as opposed to Windows 7, where it could be the TSC, the HPET or the ACPI PM timer - the last of which is especially inaccurate). I suspect this is the most likely reason the situation improves tremendously on Windows 10.
That being said, the above factors still mean that you can't rely on the ScheduledThreadPoolExecutor to keep in time with "real" time - it will always drift. If that drift is an issue, then it's not a solution you can rely on in this context.
Side note: In Windows 8+, there is a GetSystemTimePreciseAsFileTime function which offers the high resolution of QueryPerformanceCounter combined with the accuracy of the system time. If Windows 7 was dropped as a supported platform, this could in theory be used to provide a System.getCurrentTimeNanos() method or similar, assuming other similar native functions exist for other supported platforms.
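If the drift matters, one workaround is to stop relying on scheduleAtFixedRate's internal nanoTime() arithmetic and instead schedule each run as a fresh one-shot delay computed from System.currentTimeMillis(), so the error cannot accumulate. A sketch of the idea (doWork and the period are illustrative); the CronScheduler answer below packages a similar approach:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WallClockAligned {
    private static final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private static final long periodMillis = 10000; // illustrative period

    static void scheduleNext() {
        // Delay until the next multiple of periodMillis on the wall clock.
        long delay = periodMillis - (System.currentTimeMillis() % periodMillis);
        executor.schedule(() -> {
            doWork();
            scheduleNext(); // re-anchor to the wall clock every time
        }, delay, TimeUnit.MILLISECONDS);
    }

    static void doWork() { /* ... */ }

    public static void main(String[] args) {
        scheduleNext();
    }
}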
CronScheduler is a project of mine designed to be proof against the time-drift problem, and at the same time it avoids some of the problems with the old Timer class described in this post.
Example usage:
Duration syncPeriod = Duration.ofMinutes(1);
CronScheduler cron = CronScheduler.create(syncPeriod);
cron.scheduleAtFixedRateSkippingToLatest(0, 1, TimeUnit.MINUTES, runTimeMillis -> {
    // Collect and send summary metrics to a remote monitoring system
});
Note: this project was actually inspired by this StackOverflow question.

System.currentTimeMillis() vs System.nanoTime()

I know that System.nanoTime() is now the preferred method for measuring time over System.currentTimeMillis(). The first obvious reason is that nanoTime() gives more precise timing; the other reason I read is that the latter is affected by adjustments to the system's real-time clock. What does "affected by the system's real-time clock" mean?
In this case I've found following blog post excerpt useful:
If you are interested in measuring absolute time then always use System.currentTimeMillis(). Be aware that its resolution may be quite coarse (though this is rarely an issue for absolute times.)
If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.
Clocks and Timers - General Overview by David Holmes
Since System.currentTimeMillis() relies on the system's time-of-day clock, adjustments to the time are legitimate, in order to keep it on time.
What do "adjustments" mean here? Take for instance a look at the description of CLOCK_REALTIME from Linux:
System-wide clock that measures real (i.e., wall-clock) time. Setting this clock requires appropriate privileges. This clock is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the clock), and by the incremental adjustments performed by adjtime(3) and NTP.
Just check the JavaDoc of the methods:
System.nanoTime()
"... This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. ..."
System.currentTimeMillis()
"... Returns the current time in milliseconds. ..."
So as you can see, if the system time changes during a measurement made with System.currentTimeMillis(), the interval you measure will change too. However, it will not change when you measure the interval using System.nanoTime().
It means that the value that System.currentTimeMillis() returns is obtained from the internal clock of the machine. If a sysadmin (or NTP) changes the time, for example if the clock is found to be running 5 minutes fast and the sysadmin goes and corrects it, System.currentTimeMillis() will be affected. This means that you can even see the value decrease, and if you use it to measure intervals the timings can be off. You may even measure negative timings.
System.nanoTime() on the other hand returns a value that is derived from some internal CPU counter/clock. The time measured by this clock cannot be changed by any user or program. This means that it will be more reliable for timing. But the CPU clock is reset on poweroff so it's not useful for finding the current "wall-clock" time.

Java timing: System.nanoTime() better than System.currentTimeMillis(), but does it persist over sleep?

I am trying to implement a timer, it may be used for short (seconds) events, or longer (hours, etc) events.
Ideally it should persist over periods when the CPU is off, for example, battery has died. If I set the start time using System.currentTimeMillis() and end time using the same function, it works in almost all cases, except during periods like leap seconds, leap years, daylight savings time changes, etc... Or, if the user just changes the time (I've verified this). This is on an Android system, btw.
Instead, if I use System.nanoTime(), in addition to potentially being more accurate, it won't have the usual "hard time" issues with time changes, etc. My question is: does System.nanoTime() measure nanoseconds from some arbitrary time, in "hard time"? I'm not sure what the proper term is, but for example: if System.nanoTime() is run at time X, and at X+1 hour the system is shut off (dead battery on an Android device, for example), and at X+10 hours the system is started again, will System.nanoTime() at that point report 10 hours elapsed? Or will it report 1 hour (since the counter that nanoTime uses may not be running while the system is off/asleep)?
android.os.SystemClock.elapsedRealtime() - milliseconds since the system was booted including time spent in sleep state. This should be your best bet.
I don't think you can measure the switched-off time in Android otherwise.
For more info it might be better to check the Android SystemClock page: http://developer.android.com/reference/android/os/SystemClock.html
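A minimal sketch of the suggested approach (Android only):

import android.os.SystemClock;

// Per the Android docs, elapsedRealtime() keeps counting through deep sleep,
// unlike uptimeMillis()/System.nanoTime(), and cannot be changed by the user.
long startRealtime = SystemClock.elapsedRealtime();
// ... later, possibly hours later; the device may have slept in between ...
long elapsedMillis = SystemClock.elapsedRealtime() - startRealtime;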
It is undefined:
"The value returned represents nanoseconds since some fixed but
arbitrary origin time (perhaps in the future, so values may be
negative). The same origin is used by all invocations of this method
in an instance of a Java virtual machine; other virtual machine
instances are likely to use a different origin."
For simplicity, we'll say when you run it at time X, the origin is X (this is allowed). That means it will return 0 then, and within the VM instance, time will then elapse at the same rate as a normal clock.
When you use Thread.sleep, that doesn't change the VM instance, so it isn't treated specially.
However, after the device is rebooted, you're in a different VM instance. Therefore, X is no longer guaranteed to be the origin.
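If the timer has to survive a reboot, the elapsed-time clocks cannot help, since their origin is lost with the VM. A sketch of the fallback (file name is illustrative): persist the wall-clock start time, and accept the caveat from the question that clock adjustments will distort the result:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class PersistentTimer {
    private static final File STORE = new File("timer.properties"); // illustrative path

    public static void start() throws IOException {
        Properties p = new Properties();
        p.setProperty("startMillis", Long.toString(System.currentTimeMillis()));
        FileOutputStream out = new FileOutputStream(STORE);
        try {
            p.store(out, "wall-clock start time");
        } finally {
            out.close();
        }
    }

    public static long elapsedMillis() throws IOException {
        Properties p = new Properties();
        FileInputStream in = new FileInputStream(STORE);
        try {
            p.load(in);
        } finally {
            in.close();
        }
        // Subject to user/NTP clock changes, as discussed above.
        return System.currentTimeMillis() - Long.parseLong(p.getProperty("startMillis"));
    }
}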

Comparing System.nanoTime() values resulting from different machines

Is it correct to compare two values resulting from a call to System.nanoTime() on two different machines? I would say no, because System.nanoTime() returns a nanosecond-precise time relative to some arbitrary point in time, using the Time Stamp Counter (TSC), which is processor dependent.
If I am right, is there a way (in Java) to capture an instant on two different machines and to compare (safely) these values with at least a microsecond precision or even nanotime precision?
System.currentTimeMillis() is not a solution because it is not returning a linearly increasing number of time stamps. The user or services such as NTP can change the system clock at any time and the time will leap back and forward.
You might want to look into the various clock synchronization algorithms available. Apparently the Precision Time Protocol can get you within sub-microsecond accuracy on a LAN.
If you don't need a specific time value but rather would like to know the ordering of various events, you could for instance use Lamport timestamps.
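For the ordering-only case, a minimal Lamport clock sketch (not tied to any particular messaging library):

import java.util.concurrent.atomic.AtomicLong;

// Lamport logical clock: gives a causally consistent ordering of events
// across machines without any physical clock synchronization.
public final class LamportClock {
    private final AtomicLong counter = new AtomicLong();

    // Stamp a local event (or an outgoing message).
    public long tick() {
        return counter.incrementAndGet();
    }

    // Merge the timestamp carried by an incoming message.
    public long onReceive(long remoteTimestamp) {
        long local;
        long next;
        do {
            local = counter.get();
            next = Math.max(local, remoteTimestamp) + 1;
        } while (!counter.compareAndSet(local, next));
        return next;
    }
}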
You cannot use nanoTime between two different machines. From the Java API docs:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative).
There's no guarantee that nanoTime is relative to any timebase.
This is a processor- and OS-dependent question. Looking at POSIX clocks, for example, there are high-precision time-of-day-aware timestamps (e.g. CLOCK_REALTIME returns a nano epoch time value) and high-precision arbitrary-origin timestamps (e.g. CLOCK_MONOTONIC) (NB: the difference between these two is nicely explained in this answer).
The latter is often something like time since the box was booted and therefore there's no way to accurately compare them across servers unless you have high precision clock sync (e.g. PTP as referenced in the other answer) in the first place (as then you'd be able to share an offset between them).
Whether NTP is good enough for you depends on what you're trying to measure. For example, if you're trying to measure an interval of a few hundred micros (e.g. boxes connected to the same switch), then your results will be rough. At the other extreme, NTP can be perfectly good if your servers are in entirely different geographical locations (e.g. London to NY), because the clock-sync error (as long as it's not way, way off) is swamped by the latency between the locations.
FWIW the JNI required to access such clocks from java is pretty trivial.
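For illustration, the Java side of such a binding can be as small as this (the class and library names are hypothetical; the native side is a few lines wrapping clock_gettime(2) and packing tv_sec and tv_nsec into one long):

public final class PosixClock {
    static {
        System.loadLibrary("posixclock"); // hypothetical native library name
    }

    // Nanoseconds from CLOCK_REALTIME (epoch-based, adjusted by NTP).
    public static native long realtimeNanos();

    // Nanoseconds from CLOCK_MONOTONIC (arbitrary origin, never steps).
    public static native long monotonicNanos();
}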
You can synchronize the time to current time millis. However even if you use NTP this can drift by 1 ms to 10 ms between machines. The only way to be micro-second synchronization between machines is to use specialist hardware.
Nor is nanoTime guaranteed to be determined the same way, or to have the same resolution, on two different OSes.

Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?

For diagnostic purposes, I want to be able to detect changes in the system time-of-day clock in a long-running server application. Since System.currentTimeMillis() is based on wall clock time and System.nanoTime() is based on a system timer that is independent(*) of wall clock time, I thought I could use changes in the difference between these values to detect system time changes.
I wrote up a quick test app to see how stable the difference between these values is, and to my surprise the values diverge immediately for me at the level of several milliseconds per second. A few times I saw much faster divergences. This is on a Win7 64-bit desktop with Java 6. I haven't tried this test program below under Linux (or Solaris or MacOS) to see how it performs. For some runs of this app, the divergence is positive, for some runs it is negative. It appears to depend on what else the desktop is doing, but it's hard to say.
public class TimeTest {
    private static final int ONE_MILLION  = 1000000;
    private static final int HALF_MILLION =  499999;

    public static void main(String[] args) {
        long start = System.nanoTime();
        long base = System.currentTimeMillis() - (start / ONE_MILLION);

        while (true) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Don't care if we're interrupted
            }
            long now = System.nanoTime();
            long drift = System.currentTimeMillis() - (now / ONE_MILLION) - base;
            long interval = (now - start + HALF_MILLION) / ONE_MILLION;
            System.out.println("Clock drift " + drift + " ms after " + interval
                    + " ms = " + (drift * 1000 / interval) + " ms/s");
        }
    }
}
Inaccuracies with the Thread.sleep() time, as well as interruptions, should be entirely irrelevant to timer drift.
Both of these Java "System" calls are intended for use as a measurement -- one to measure differences in wall clock time and the other to measure absolute intervals, so when the real-time-clock is not being changed, these values should change at very close to the same speed, right? Is this a bug or a weakness or a failure in Java? Is there something in the OS or hardware that prevents Java from being more accurate?
I fully expect some drift and jitter(**) between these independent measurements, but I expected well less than a minute per day of drift. 1 msec per second of drift, if monotonic, is almost 90 seconds per day! My worst-case observed drift was perhaps ten times that. Every time I run this program, I see drift on the very first measurement. So far, I have not run the program for more than about 30 minutes.
I expect to see some small randomness in the values printed, due to jitter, but in almost all runs of the program I see steady increase of the difference, often as much as 3 msec per second of increase and a couple times much more than that.
Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?
(*) I understand that on some architectures, System.nanoTime() will of necessity use the same mechanism as System.currentTimeMillis(). I also believe it's fair to assume that any modern Windows server is not such a hardware architecture. Is this a bad assumption?
(**) Of course, System.currentTimeMillis() will usually have a much larger jitter than System.nanoTime() since its granularity is not 1 msec on most systems.
You might find this Sun/Oracle blog post about JVM timers to be of interest.
Here are a couple of the paragraphs from that article about JVM timers under Windows:
System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10ms or 15ms (this value seems tied to the default interrupt period).
System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions and as a result the execution time for QPC is in the order of microseconds. In contrast reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking if QueryPerformanceFrequency returns the signature value of 3,579,545 (ie 3.57 MHz). If you see a value around 1.19 MHz then your system is using the old 8254 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power-management that might be in effect.)
I am not sure how much this will actually help. But this is an area of active change in the Windows/Intel/AMD/Java world. The need for accurate and precise time measurement has been apparent for several (at least 10) years. Both Intel and AMD have responded by changing how TSC works. Both companies now have something called Invariant-TSC and/or Constant-TSC.
Check out rdtsc accuracy across CPU cores. Quoting from osgx (who refers to an Intel manual).
"16.11.1 Invariant TSC
The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
See also http://www.citihub.com/requesting-timestamp-in-applications/. Quoting from the author
For AMD:
If CPUID 8000_0007.edx[8] = 1, then the TSC rate is ensured to be invariant across all P-States, C-States, and stop-grant transitions (such as STPCLK Throttling); therefore, the TSC is suitable for use as a source of time.
For Intel:
Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behaviour moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource.
Now the really important point is that the latest JVMs appear to exploit the newly reliable TSC mechanisms. There isn't much online to show this. However, do take a look at http://code.google.com/p/disruptor/wiki/PerformanceResults.
"To measure latency we take the three stage pipeline and generate events at less than saturation. This is achieved by waiting 1 microsecond after injecting an event before injecting the next and repeating 50 million times. To time at this level of precision it is necessary to use time stamp counters from the CPU. We choose CPUs with an invariant TSC because older processors suffer from changing frequency due to power saving and sleep states. Intel Nehalem and later processors use an invariant TSC which can be accessed by the latest Oracle JVMs running on Ubuntu 11.04. No CPU binding has been employed for this test"
Note that the authors of the "Disruptor" have close ties to the folks working on Azul and other JVMs.
See also "Java Flight Recorder Behind the Scenes". This presentation mentions the new invariant TSC instructions.
"Returns the current value of the most precise available system timer, in nanoseconds.
"This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2**63 nanoseconds) will not accurately compute elapsed time due to numerical overflow."
Note that it says "precise", not "accurate".
It's not a "bug in Java" or a "bug" in anything. It's a definition. The JVM developers look around to find the fastest clock/timer in the system and use it. If that's in lock-step with the system clock then good, but if it's not, that's just the way the cookie crumbles. It's entirely plausible, say, that a computer system will have an accurate system clock but then have a higher-rate timer internally that's tied to the CPU clock rate or some such. Since clock rate is often varied to minimize power consumption, the increment rate of this internal timer would vary.
System.currentTimeMillis() and System.nanoTime() are not necessarily provided by the same hardware. System.currentTimeMillis(), backed by GetSystemTimeAsFileTime(), has 100 ns resolution elements; its source is the system timer. System.nanoTime() is backed by the system's high-performance counter, and there is a whole variety of different hardware providing this counter, so its resolution varies with the underlying hardware.
In no case can it be assumed that these two sources are in phase. Measuring the two values against each other will disclose a different running speed. If the update of System.currentTimeMillis() is taken as the real progress in time, the output of System.nanoTime() may be sometimes slower, sometimes faster, and also varying. A careful calibration has to be done in order to phase-lock these two time sources.
A more detailed description of the relation between these two time sources can be found at the Windows Timestamp Project.
Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?
The Windows Timestamp Project does what you are asking for. As far as I know it only affects the wall-clock timer.
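To illustrate the kind of calibration mentioned above, a rough sketch (window length and names are mine): sample both clocks over an interval, estimate their relative rate, and then project wall-clock time from nanoTime():

public class ClockCalibration {
    public static void main(String[] args) throws InterruptedException {
        long ms0 = System.currentTimeMillis();
        long ns0 = System.nanoTime();
        Thread.sleep(10000); // calibration window; longer gives a better estimate
        long ms1 = System.currentTimeMillis();
        long ns1 = System.nanoTime();

        // Wall-clock milliseconds elapsed per nanoTime() nanosecond.
        double rate = (ms1 - ms0) / (double) (ns1 - ns0);
        System.out.println("wall clock runs at " + (rate * 1.0e6)
                + "x the rate of nanoTime");

        // Project the current wall time from nanoTime() using the estimate;
        // a steady divergence from the actual value is drift, a jump is a step.
        long projected = ms1 + (long) ((System.nanoTime() - ns1) * rate);
        System.out.println("projected=" + projected
                + " actual=" + System.currentTimeMillis());
    }
}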
