Comparing System.nanoTime() values resulting from different machines - java

Is it correct to compare two values resulting from calls to System.nanoTime() on two different machines? I would say no, because System.nanoTime() returns a nanosecond-precise time relative to some arbitrary point in time, using the Time Stamp Counter (TSC), which is processor-dependent.
If I am right, is there a way (in Java) to capture an instant on two different machines and to compare these values safely, with at least microsecond or even nanosecond precision?
System.currentTimeMillis() is not a solution, because it does not return a monotonically increasing sequence of timestamps. The user, or services such as NTP, can change the system clock at any time, and the time will leap backward and forward.

You might want to look into the various clock synchronization algorithms available. Apparently the Precision Time Protocol can get you within sub-microsecond accuracy on a LAN.
If you don't need a specific time value but rather would like to know the ordering of various events, you could for instance use Lamport timestamps.
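For illustration, a Lamport clock is only a few lines of Java. A minimal sketch (the class and method names here are mine, not from any library):

import java.util.concurrent.atomic.AtomicLong;

/** Minimal Lamport clock: orders events across machines without synchronized wall clocks. */
class LamportClock {
    private final AtomicLong counter = new AtomicLong();

    /** Call for each local event; returns the timestamp to attach to it. */
    long tick() {
        return counter.incrementAndGet();
    }

    /** Call on receiving a message carrying the sender's timestamp. */
    long receive(long remoteTimestamp) {
        return counter.updateAndGet(local -> Math.max(local, remoteTimestamp) + 1);
    }
}

Events can then be partially ordered by comparing timestamps; equal timestamps need a tiebreaker such as a machine ID.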

You cannot use nanoTime between two different machines. From the Java API docs:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative).
There's no guarantee that nanoTime is relative to any common timebase across machines.

This is a processor- and OS-dependent question. Looking at POSIX clocks, for example, there are high-precision time-of-day-aware timestamps (e.g. CLOCK_REALTIME returns a nanosecond-resolution epoch time) and high-precision arbitrary-origin timestamps (e.g. CLOCK_MONOTONIC) (NB: the difference between these two is nicely explained in this answer).
The latter is often something like time since the box was booted, so there's no way to accurately compare such values across servers unless you have high-precision clock sync in the first place (e.g. PTP, as referenced in the other answer), as only then could you share an offset between them.
Whether NTP is good enough for you depends on what you're trying to measure. If you're trying to measure an interval of a few hundred micros (e.g. boxes connected to the same switch), your results will be rough. At the other extreme, NTP can be perfectly good if your servers are in entirely different geographical locations (e.g. London to NY), because then the clock-sync error (as long as it's not way, way off) is swamped by the latency between the locations.
FWIW, the JNI required to access such clocks from Java is pretty trivial.
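For example, the Java side of such a binding can be as small as the sketch below; the library and method names are hypothetical, and the C side would simply forward to clock_gettime(2):

/** Sketch of a thin JNI wrapper over POSIX clock_gettime; names are illustrative only. */
public final class PosixClock {
    static {
        System.loadLibrary("posixclock"); // hypothetical libposixclock.so, implemented in C
    }

    /** Nanoseconds from CLOCK_MONOTONIC: arbitrary origin, never steps backward. */
    public static native long monotonicNanos();

    /** Nanoseconds since the epoch from CLOCK_REALTIME: wall clock, may step. */
    public static native long realtimeNanos();
}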

You can synchronize the time to the current time in millis. However, even with NTP this can drift by 1 ms to 10 ms between machines. The only way to get microsecond synchronization between machines is to use specialist hardware.
nanoTime is not guaranteed to be determined the same way, or to have the same resolution, on two different OSes.

Related

Digging deeper into System.currentTimeMillis(), how precise can we expect it to be? What does millisecond precision actually mean?

Please only answer WHEN you fully comprehend the question.
Do not close this, as a similar question does not exist.
I am aware that System.nanoTime() gives nanoseconds from an arbitrary "random" point after the JVM starts. And I am aware that System.currentTimeMillis() only gives millisecond precision.
What I am looking for is PROOF, approached with an open mind, for the hypothesis that the millisecond transitions are not exact, once we try to define what "exact" means.
Exact would, in my world, mean that every time we register a new millisecond (say we go from 97 ms to 98 ms to 99 ms and so forth, on every update, through whatever mechanism), we cannot expect Java, at least as observed, to give us nanosecond precision at the switches.
I know, I know. It sounds weird to expect that, but then the question becomes: how accurate are the millisecond switches?
It appears that when you call System.nanoTime() repeatedly you get a linear graph with nanosecond resolution.
If we at the same time ask System.currentTimeMillis() right after System.nanoTime(), and disregard the variance in the cost of the calls themselves, it appears that we do not get a linear graph at the same resolution: the millisecond graph jitters by about ±250 ns.
This is to be expected, yet I cannot find any information on the error margin, or the accuracy, of the millisecond value.
This issue exists for second precision as well, and for hour, day, and year precision. When the year ticks over, how big is the error?
When the millisecond ticks over, how big is the error in terms of nanoseconds?
System.currentTimeMillis() cannot be trusted to stay linear against System.nanoTime(), and we cannot expect System.currentTimeMillis() to keep up with nanosecond precision.
But how big is the error? In computing generally? In Java? On Unix systems?
From the documentation:
"Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
See the description of the class Date for a discussion of slight discrepancies that may arise between "computer time" and coordinated universal time (UTC)."
So both the precision and accuracy of the call is undefined. They pass the buck to the OS and shrug. I doubt that 250 ns is an accurate measure of its quality. The gap is likely much larger than that. "Tens of milliseconds" as per the documentation is a much more likely value, especially across multiple systems.
Also they essentially disavow any knowledge of UTC as well. "Slight discrepancies" are allowed, whatever that means. Technically this allows any value at all, because what exactly is "slight?" It could be a second or a minute depending on your point of view.
Finally the system clock could be misconfigured by the person operating the system, and at that point everything goes out the window.
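If you want an empirical answer for your own machine rather than a guarantee, you can log how far apart successive currentTimeMillis() transitions land on the nanoTime() scale. A rough sketch (the cost of the calls themselves is not subtracted out):

// Spin until currentTimeMillis() ticks, and report the tick spacing in nanoTime() terms.
public class TickProbe {
    public static void main(String[] args) {
        long lastMs = System.currentTimeMillis();
        long lastNs = System.nanoTime();
        for (int i = 0; i < 20; i++) {
            long ms;
            while ((ms = System.currentTimeMillis()) == lastMs) { /* busy-wait for the tick */ }
            long ns = System.nanoTime();
            System.out.println("ms advanced by " + (ms - lastMs)
                    + ", nanoTime advanced by " + (ns - lastNs) + " ns");
            lastMs = ms;
            lastNs = ns;
        }
    }
}

On a system with a coarse timer interrupt you would expect the ms value to advance in jumps of 10-15 ms rather than 1 ms.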

currentTimeMillis() and waiting by spinning on nanoTime() [duplicate]

Accuracy vs. Precision
What I would like to know is whether I should use System.currentTimeMillis() or System.nanoTime() when updating my object's positions in my game? Their change in movement is directly proportional to the elapsed time since the last call and I want to be as precise as possible.
I've read that there are some serious time-resolution issues between different operating systems (namely that Mac/Linux have an almost 1 ms resolution while Windows has a 50 ms resolution?). I'm primarily running my apps on Windows, and 50 ms resolution seems pretty inaccurate.
Are there better options than the two I listed?
Any suggestions / comments?
If you're just looking for extremely precise measurements of elapsed time, use System.nanoTime(). System.currentTimeMillis() will give you the most accurate possible elapsed time in milliseconds since the epoch, but System.nanoTime() gives you a nanosecond-precise time, relative to some arbitrary point.
From the Java Documentation:
public static long nanoTime()
Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
For example, to measure how long some code takes to execute:
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
See also: JavaDoc System.nanoTime() and JavaDoc System.currentTimeMillis() for more info.
Since no one else has mentioned this…
It is not safe to compare the results of System.nanoTime() calls between different JVMs; each JVM may have an independent 'origin' time.
System.currentTimeMillis() will return the (approximate) same value between JVMs, because it is tied to the system wall clock time.
If you want to compute the amount of time that has elapsed between two events, like a stopwatch, use nanoTime(); changes in the system wall-clock make currentTimeMillis() incorrect for this use case.
Update by Arkadiy: I've observed more correct behavior of System.currentTimeMillis() on Windows 7 in Oracle Java 8. The time was returned with 1 millisecond precision. The source code in OpenJDK has not changed, so I do not know what causes the better behavior.
David Holmes of Sun posted a blog article a couple years ago that has a very detailed look at the Java timing APIs (in particular System.currentTimeMillis() and System.nanoTime()), when you would want to use which, and how they work internally.
Inside the Hotspot VM: Clocks, Timers and Scheduling Events - Part I - Windows
One very interesting aspect of the timer used by Java on Windows for APIs that have a timed wait parameter is that the resolution of the timer can change depending on what other API calls may have been made - system wide (not just in the particular process). He shows an example where using Thread.sleep() will cause this resolution change.
As others have said, currentTimeMillis is clock time, which changes due to users changing the time settings, leap seconds, and internet time sync. (Note: daylight saving time and time zone changes do not affect currentTimeMillis, since it is UTC-based.) If your app depends on monotonically increasing elapsed time values, you might prefer nanoTime instead.
You might think that the players won't be fiddling with the time settings during game play, and maybe you'd be right. But don't underestimate the disruption due to internet time sync, or perhaps remote desktop users. The nanoTime API is immune to this kind of disruption.
If you want to use clock time, but avoid discontinuities due to internet time sync, you might consider an NTP client such as Meinberg, which "tunes" the clock rate to zero it in, instead of just resetting the clock periodically.
I speak from personal experience. In a weather application that I developed, I was getting randomly occurring wind speed spikes. It took a while for me to realize that my timebase was being disrupted by the behavior of clock time on a typical PC. All my problems disappeared when I started using nanoTime. Consistency (monotonicity) was more important to my application than raw precision or absolute accuracy.
System.nanoTime() isn't supported in older JVMs. If that is a concern, stick with currentTimeMillis().
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms). I'm not sure why, but some Windows machines are just as accurate as Linux machines.
I have used GAGETimer in the past with moderate success.
Yes, if such precision is required use System.nanoTime(), but be aware that you are then requiring a Java 5+ JVM.
On my XP systems, I see system time reported to at least 100 microseconds, and nanoTime to 278 nanoseconds, using the following code:
private void test() {
    System.out.println("currentTimeMillis: " + System.currentTimeMillis());
    System.out.println("nanoTime         : " + System.nanoTime());
    System.out.println();

    testNano(false); // to sync with currentTimeMillis() timer tick
    for (int xa = 0; xa < 10; xa++) {
        testNano(true);
    }
}

private void testNano(boolean shw) {
    long strMS = System.currentTimeMillis();
    long strNS = System.nanoTime();
    long curMS;
    while ((curMS = System.currentTimeMillis()) == strMS) {
        if (shw) { System.out.println("Nano: " + (System.nanoTime() - strNS)); }
    }
    if (shw) { System.out.println("Nano: " + (System.nanoTime() - strNS) + ", Milli: " + (curMS - strMS)); }
}
For game graphics & smooth position updates, use System.nanoTime() rather than System.currentTimeMillis(). I switched from currentTimeMillis() to nanoTime() in a game and got a major visual improvement in smoothness of motion.
While one millisecond may seem as though it should already be precise, visually it is not. The factors nanoTime() can improve include:
accurate pixel positioning below wall-clock resolution
ability to anti-alias between pixels, if you want
Windows wall-clock inaccuracy
clock jitter (inconsistency of when wall-clock actually ticks forward)
As other answers suggest, nanoTime does have a performance cost if called repeatedly -- it would be best to call it just once per frame, and use the same value to calculate the entire frame.
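A common shape for that is sketched below; GameObject and the object list stand in for whatever your game actually uses:

/** Sketch of a game loop that reads nanoTime() once per frame. */
class GameLoop {
    interface GameObject { void update(double deltaSeconds); }

    private final java.util.List<GameObject> objects = new java.util.ArrayList<>();
    private volatile boolean running = true;

    void run() {
        long previous = System.nanoTime();
        while (running) {
            long now = System.nanoTime();               // one read per frame
            double deltaSeconds = (now - previous) / 1_000_000_000.0;
            previous = now;
            for (GameObject obj : objects) {
                obj.update(deltaSeconds);               // movement scales with elapsed time
            }
            // rendering would happen here, using the same frame delta
        }
    }
}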
System.currentTimeMillis() is not safe for measuring elapsed time because it is sensitive to changes in the system's real-time clock.
You should use System.nanoTime.
Please refer to the Java documentation for System, about the nanoTime method:
.. This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis()..
If you use System.currentTimeMillis() your elapsed time can be negative (Back <-- to the future)
I've had good experience with nanotime. It provides wall-clock time as two longs (seconds since the epoch and nanoseconds within that second), using a JNI library. It's available with the JNI part precompiled for both Windows and Linux.
One thing here is the inconsistency of the nanoTime method: it does not give very consistent values for the same input. currentTimeMillis does much better in terms of performance and consistency, and also, though not as precise as nanoTime, has a lower margin of error, and therefore more accuracy in its value. I would therefore suggest that you use currentTimeMillis.

Java timing, System.nanoTime() better than System.currentTimeMillis() but does it persist over sleep?

I am trying to implement a timer, it may be used for short (seconds) events, or longer (hours, etc) events.
Ideally it should persist over periods when the CPU is off, for example when the battery has died. If I set the start time using System.currentTimeMillis() and the end time using the same function, it works in almost all cases, except around things like leap seconds, leap years, daylight saving time changes, etc., or if the user just changes the time (I've verified this). This is on an Android system, by the way.
Instead, if I used System.nanoTime(), in addition to potentially being more accurate, it won't have the usual issues with wall-clock time changes. My question is: does System.nanoTime() measure nanoseconds from some arbitrary origin in "hard time", i.e. real elapsed time? I'm not sure what the proper term is, but for example: run System.nanoTime() at time X; at X+1 hour the system is shut off (dead battery on an Android device, for example); at X+10 hours the system is started again. Will System.nanoTime() at that point report 10 hours elapsed, or 1 hour (since the counter nanoTime uses may not run while the system is off or asleep)?
android.os.SystemClock.elapsedRealtime() - milliseconds since the system was booted, including time spent in sleep state. This should be your best bet.
I don't think you can measure the switched-off time in Android.
For more info it might be better to check the Android SystemClock page: http://developer.android.com/reference/android/os/SystemClock.html
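A minimal sketch of a stopwatch based on it (this survives deep sleep, but the counter restarts at boot):

// Elapsed time including time the device spends asleep; resets on reboot.
long startMs = android.os.SystemClock.elapsedRealtime();
// ... later, even if the device slept in between ...
long elapsedMs = android.os.SystemClock.elapsedRealtime() - startMs;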
It is undefined:
"The value returned represents nanoseconds since some fixed but
arbitrary origin time (perhaps in the future, so values may be
negative). The same origin is used by all invocations of this method
in an instance of a Java virtual machine; other virtual machine
instances are likely to use a different origin."
For simplicity, we'll say when you run it at time X, the origin is X (this is allowed). That means it will return 0 then, and within the VM instance, time will then elapse at the same rate as a normal clock.
When you use Thread.sleep, that doesn't change the VM instance, so it isn't treated specially.
However, after the device is rebooted, you're in a different VM instance. Therefore, X is no longer guaranteed to be the origin.
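If the timer must survive a reboot, one workaround is to persist a System.currentTimeMillis() anchor instead of relying on nanoTime(). A sketch, assuming prefs is an android.content.SharedPreferences obtained elsewhere, with the caveat that it inherits the wall-clock adjustments the question complains about:

// Persist the wall-clock start; after a reboot nanoTime()'s origin is gone,
// but the stored epoch time still yields a total elapsed time.
long startEpochMs = System.currentTimeMillis();
prefs.edit().putLong("timer.start", startEpochMs).apply();  // assumed SharedPreferences instance
// ... after restart ...
long elapsedMs = System.currentTimeMillis() - prefs.getLong("timer.start", startEpochMs);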

Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?

For diagnostic purposes, I want to be able to detect changes in the system time-of-day clock in a long-running server application. Since System.currentTimeMillis() is based on wall clock time and System.nanoTime() is based on a system timer that is independent(*) of wall clock time, I thought I could use changes in the difference between these values to detect system time changes.
I wrote up a quick test app to see how stable the difference between these values is, and to my surprise the values diverge immediately for me at the level of several milliseconds per second. A few times I saw much faster divergences. This is on a Win7 64-bit desktop with Java 6. I haven't tried this test program below under Linux (or Solaris or MacOS) to see how it performs. For some runs of this app, the divergence is positive, for some runs it is negative. It appears to depend on what else the desktop is doing, but it's hard to say.
public class TimeTest {
    private static final int ONE_MILLION  = 1000000;
    private static final int HALF_MILLION =  499999;

    public static void main(String[] args) {
        long start = System.nanoTime();
        long base = System.currentTimeMillis() - (start / ONE_MILLION);

        while (true) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Don't care if we're interrupted
            }
            long now = System.nanoTime();
            long drift = System.currentTimeMillis() - (now / ONE_MILLION) - base;
            long interval = (now - start + HALF_MILLION) / ONE_MILLION;
            System.out.println("Clock drift " + drift + " ms after " + interval
                    + " ms = " + (drift * 1000 / interval) + " ms/s");
        }
    }
}
Inaccuracies with the Thread.sleep() time, as well as interruptions, should be entirely irrelevant to timer drift.
Both of these Java "System" calls are intended for use as a measurement -- one to measure differences in wall clock time and the other to measure absolute intervals, so when the real-time-clock is not being changed, these values should change at very close to the same speed, right? Is this a bug or a weakness or a failure in Java? Is there something in the OS or hardware that prevents Java from being more accurate?
I fully expect some drift and jitter(**) between these independent measurements, but I expected well less than a minute per day of drift. 1 msec per second of drift, if monotonic, is almost 90 seconds per day! My worst-case observed drift was perhaps ten times that. Every time I run this program, I see drift on the very first measurement. So far, I have not run the program for more than about 30 minutes.
I expect to see some small randomness in the values printed, due to jitter, but in almost all runs of the program I see steady increase of the difference, often as much as 3 msec per second of increase and a couple times much more than that.
Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?
(*) I understand that on some architectures, System.nanoTime() will of necessity use the same mechanism as System.currentTimeMillis(). I also believe it's fair to assume that any modern Windows server is not such a hardware architecture. Is this a bad assumption?
(**) Of course, System.currentTimeMillis() will usually have a much larger jitter than System.nanoTime() since its granularity is not 1 msec on most systems.
You might find this Sun/Oracle blog post about JVM timers to be of interest.
Here are a couple of the paragraphs from that article about JVM timers under Windows:
System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10ms or 15ms (this value seems tied to the default interrupt period).
System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter(QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions and as a result the execution time for QPC is in the order of microseconds. In contrast reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking if QueryPerformanceFrequency returns the signature value of 3,579,545 (ie 3.57MHz). If you see a value around 1.19Mhz then your system is using the old 8245 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power-management that might be in effect.)
I am not sure how much this will actually help. But this is an area of active change in the Windows/Intel/AMD/Java world. The need for accurate and precise time measurement has been apparent for several (at least 10) years. Both Intel and AMD have responded by changing how TSC works. Both companies now have something called Invariant-TSC and/or Constant-TSC.
Check out rdtsc accuracy across CPU cores. Quoting from osgx (who refers to an Intel manual).
"16.11.1 Invariant TSC
The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
See also http://www.citihub.com/requesting-timestamp-in-applications/. Quoting from the author
For AMD:
If CPUID 8000_0007.edx[8] = 1, then the TSC rate is ensured to be invariant across all P-States, C-States, and stop-grant transitions (such as STPCLK Throttling); therefore, the TSC is suitable for use as a source of time.
For Intel:
Processor’s support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behaviour moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource.
Now the really important point is that the latest JVMs appear to exploit the newly reliable TSC mechanisms. There isn't much online to show this. However, do take a look at http://code.google.com/p/disruptor/wiki/PerformanceResults.
"To measure latency we take the three stage pipeline and generate events at less than saturation. This is achieved by waiting 1 microsecond after injecting an event before injecting the next and repeating 50 million times. To time at this level of precision it is necessary to use time stamp counters from the CPU. We choose CPUs with an invariant TSC because older processors suffer from changing frequency due to power saving and sleep states. Intel Nehalem and later processors use an invariant TSC which can be accessed by the latest Oracle JVMs running on Ubuntu 11.04. No CPU binding has been employed for this test"
Note that the authors of the "Disruptor" have close ties to the folks working on the Azul and other JVMs.
See also "Java Flight Records Behind the Scenes". This presentation mentions the new invariant TSC instructions.
"Returns the current value of the most precise available system timer, in nanoseconds.
"This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2**63 nanoseconds) will not accurately compute elapsed time due to numerical overflow."
Note that it says "precise", not "accurate".
It's not a "bug in Java" or a "bug" in anything. It's a definition. The JVM developers look around to find the fastest clock/timer in the system and use it. If that's in lock-step with the system clock then good, but if it's not, that's just the way the cookie crumbles. It's entirely plausible, say, that a computer system will have an accurate system clock but then have a higher-rate timer internally that's tied to the CPU clock rate or some such. Since clock rate is often varied to minimize power consumption, the increment rate of this internal timer would vary.
System.currentTimeMillis() and System.nanoTime() are not necessarily provided by the same hardware. System.currentTimeMillis(), backed by GetSystemTimeAsFileTime(), has 100 ns resolution elements. Its source is the system timer. System.nanoTime() is backed by the system's high performance counter. There is a whole variety of different hardware providing this counter. Therefore its resolution varies, depending on the underlying hardware.
In no case can it be assumed that these two sources are in phase. Measuring the two values against each other will disclose a different running speed. If the update of System.currentTimeMillis() is taken as the real progress in time, the output of System.nanoTime() may be sometimes slower, sometimes faster, and also varying.
A careful calibration has to be done in order to phase-lock these two time sources.
A more detailed description of the relation between these two time sources can be found at the Windows Timestamp Project.
Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?
The Windows Timestamp Project does what you are asking for. As far as I know it only affects the wall-clock timer.
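To make the "careful calibration" mentioned above concrete, here is a minimal sketch that maps nanoTime() readings onto the wall clock via an anchor pair and a periodically re-estimated rate. The class is mine, not from any library, and a real phase-locked loop would smooth and slew rather than jump:

/** Sketch: project nanoTime() onto wall-clock time using a two-point calibration. */
class CalibratedClock {
    private final long anchorNs = System.nanoTime();
    private final long anchorMs = System.currentTimeMillis();
    private volatile double rate = 1.0;  // nanoTime milliseconds per wall-clock millisecond

    /** Re-estimate the relative rate from the drift accumulated so far; call periodically. */
    void recalibrate() {
        long dNs = System.nanoTime() - anchorNs;
        long dMs = System.currentTimeMillis() - anchorMs;
        if (dMs > 1000) rate = (dNs / 1_000_000.0) / dMs;  // need a long enough baseline
    }

    /** Wall-clock estimate with nanoTime()'s fine granularity. */
    long currentTimeMillisEstimate() {
        return anchorMs + (long) ((System.nanoTime() - anchorNs) / 1_000_000.0 / rate);
    }
}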

Big difference in timestamps when running same application multiple times on an emulator

In an Android application I am trying to do an exponential modulus operation and want to measure the time taken by that operation. So I have created two timestamps, one just before the operation and the other just after it:
Calendar calendar0 = Calendar.getInstance();
java.util.Date now0 = calendar0.getTime();
java.sql.Timestamp currentTimestamp0 = new java.sql.Timestamp(now0.getTime());
BigInteger en = big.modPow(e, n);
Calendar calendar1 = Calendar.getInstance();
java.util.Date now1 = calendar1.getTime();
java.sql.Timestamp currentTimestamp1 = new java.sql.Timestamp(now1.getTime());
The difference in time reported by these two timestamps varies over a large range for the same inputs when I run the application multiple times. It gives times in the range of 6 ns to 200 ns.
Can someone point out the reason for such results, or something I am doing wrong?
Well for one thing, you're going about your timing in a very convoluted way. Here's something which gives just as much accuracy, but rather simpler:
long start = System.currentTimeMillis();
BigInteger en = big.modPow(e, n);
long end = System.currentTimeMillis();
Note that java.util.Date only has accuracy to the nearest millisecond, and using that same value and putting it in a java.sql.Timestamp doesn't magically make it more accurate. So any result under a millisecond (6ns-200ns) is obviously spurious - basically anything under a millisecond is 0.
You may be able to get a more accurate reading using System.nanoTime - I don't know whether that's supported on Android. There may be alternative high-precision timers available, too.
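For the record, System.nanoTime() is part of the Android SDK. Reusing big, e, and n from the question's snippet, the measurement would look like this:

long start = System.nanoTime();
BigInteger en = big.modPow(e, n);
long elapsedNanos = System.nanoTime() - start;  // elapsed time, immune to wall-clock changes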
Now as for why operations may actually take very different amounts of recorded time:
The granularity of the system clock used above may well be considerably coarser than 1 ms. For example, if it measures nothing smaller than 15 ms, you could see some operations supposedly taking 0 ms and some taking 15 ms, even though they actually take the same amount of time.
There can easily be other factors involved, the most obvious being garbage collection
Do more calculations in the trial so that your expected time is several milliseconds.
Consider doing your output to logcat, recording that on the PC and then processing the logcat to only select trials with no intervening platform messages about garbage collection or routine background operations.
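To act on the first suggestion above, a sketch of a batched trial (reusing big, e, and n from the question; the iteration count is arbitrary):

// Batch many modPow calls so the measured interval is well above the clock granularity.
final int trials = 1000;
BigInteger sink = BigInteger.ZERO;              // keep results live so they can't be optimized away
long start = System.currentTimeMillis();
for (int i = 0; i < trials; i++) {
    sink = sink.xor(big.modPow(e, n));
}
long totalMs = System.currentTimeMillis() - start;
System.out.println("average per call: " + (totalMs * 1_000_000L / trials) + " ns (sink " + sink.signum() + ")");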
A large part of the reason for the variation is that your application is being run by a virtual machine running under a general-purpose operating system on an emulated computer which is running under a general-purpose operating system on a real computer.
With the exception of your application, every one of the things in that pile (the JVM running your app, Android's Linux OS, the emulator and whatever OS runs it) may use the CPU (real or virtual) to do something else at any time for any duration. Those cycles will take away from your program's execution and add to its wall-clock execution time. The result is completely nondeterministic behavior.
If you count emulator CPU cycles consumed and discount whatever Java does behind the scenes, I have no doubt that the standard deviation in execution times would be a lot lower than what you're seeing for the same inputs. The emulated Android environment isn't the place to be benchmarking algorithms because there are too many variables you can't control.
You can't draw any conclusions from running a single function once. There's so much more going on: you have no control over the underlying Linux operating system taking up cycles, let alone the Java virtual machine, and all the other programs running. Try running it 1000 or 10000 times.
