I have a program that I have written myself in Java, and I want to test method execution times and get timings for specific methods. Is this possible, for example via an Eclipse plug-in or by inserting some code?
For context: it is quite a small program, no more than 1500 lines. Which would be better, a dedicated tool or System.currentTimeMillis()?
Other than using a profiler, a simple way of getting what you want is the following:
public class SomeClass {
    public void somePublicMethod() {
        long startTime = System.currentTimeMillis();
        someMethodWhichYouWantToProfile();
        long endTime = System.currentTimeMillis();
        System.out.println("Total execution time: " + (endTime - startTime) + "ms");
    }
}
If the bottleneck is big enough to be observed with a profiler, use a profiler.
If you need more accuracy, the best way of measuring a small piece of code is to use a Java microbenchmark framework like OpenJDK's JMH or Google's Caliper. I believe they are as simple to use as JUnit, and not only will you get more accurate results, you will also benefit from the community's expertise in doing it right.
Here is a JMH microbenchmark measuring the execution time of Math.log():
private double x = Math.PI;

@Benchmark
public double myBenchmark() {
    return Math.log(x);
}
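For completeness, here is a rough sketch of how such a benchmark is usually wired up as a whole class. This assumes the JMH dependencies (jmh-core plus its annotation processor) are on the classpath; the class name, fork count and time units below are my own illustrative choices, not anything prescribed by JMH:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// A self-contained benchmark class; names and settings are illustrative.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LogBenchmark {

    private double x = Math.PI;

    // Returning the result keeps the JIT from treating the call as dead code.
    @Benchmark
    public double measureLog() {
        return Math.log(x);
    }

    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                .include(LogBenchmark.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}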
Using currentTimeMillis() and nanoTime() for measuring has many limitations:
They have latency (the calls themselves take time to execute), which biases your measurements.
They have very limited precision; this means you can only measure things in steps of roughly 26 ns on Linux and 300 ns or so on Windows, as described here.
The warmup phase is not taken into consideration, making your measurements fluctuate a lot.
The currentTimeMillis() and nanoTime() methods can be useful, but they must be used with extreme caution or you can get wrong measurements, like this one where the order of the measured snippets influences the results, or like this one where the author wrongly concluded that several million operations were performed in less than a millisecond, when in fact the JVM realised the results were never used and hoisted the code out, running no code at all.
Here is a wonderful video explaining how to microbenchmark the right way: https://shipilev.net/#benchmarking
For quick and dirty time measurement tests, don't use wall-clock time (System.currentTimeMillis()). Prefer System.nanoTime() instead:
public static void main(String... ignored) throws InterruptedException {
    final long then = System.nanoTime();
    TimeUnit.SECONDS.sleep(1);
    final long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - then);
    System.out.println("Slept for (ms): " + millis); // = something around 1000.
}
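If you end up sprinkling this start/stop pattern all over the code, a tiny helper keeps the call sites readable. This is purely illustrative; the Timing class and timeMillis method are made up for this example and are not part of any library:

import java.util.concurrent.TimeUnit;

public final class Timing {

    // Runs the task once and returns the elapsed wall time in milliseconds.
    // nanoTime() is used because it is intended for elapsed-time measurement.
    public static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            // ... the code being measured ...
        });
        System.out.println("Took " + elapsed + " ms");
    }
}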
You should use a profiler such as:
JProfiler
YourKit
They will easily integrate with any IDE and show whatever detail you need.
Of course these tools are complex and meant for profiling complex programs. If you just need some simple benchmarks, I suggest you use System.currentTimeMillis() or System.nanoTime() and calculate the delta in milliseconds between calls yourself.
Using a profiler is better because you can find out average execution times and bottlenecks in your app.
I use VisualVM. Slick and simple.
Google Guava has a stopwatch, makes things simple and easy.
Stopwatch stopwatch = Stopwatch.createStarted();
myFunctionCall();
LOGGER.debug("Time taken by myFunctionCall: " + stopwatch.stop());
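If you need the elapsed time as a number rather than the formatted string, Stopwatch also has elapsed(TimeUnit). A minimal sketch, assuming Guava is on the classpath:

import java.util.concurrent.TimeUnit;

import com.google.common.base.Stopwatch;

public class StopwatchExample {
    public static void main(String[] args) throws InterruptedException {
        Stopwatch stopwatch = Stopwatch.createStarted();
        Thread.sleep(250); // stand-in for the work being measured
        stopwatch.stop();

        // toString() gives a human-friendly value such as "250.3 ms";
        // elapsed(TimeUnit) gives a number you can log, store or compare.
        System.out.println("Formatted: " + stopwatch);
        System.out.println("Millis:    " + stopwatch.elapsed(TimeUnit.MILLISECONDS));
    }
}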
JProfiler and YourKit are good, but they cost money.
There is a free plugin for Eclipse called TPTP (Test & Performance Tools Platform) that can give you code execution times. Here is a tutorial that a quick Google search brought up: http://www.eclipse.org/articles/Article-TPTP-Profiling-Tool/tptpProfilingArticle.html
Another custom-made solution could be based on the following post: http://www.mkyong.com/spring/spring-aop-examples-advice/
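For reference, the general shape of such a timing advice looks roughly like the following. This is only a sketch assuming Spring AOP with AspectJ annotations; the pointcut package (com.example.service) and the class name are placeholders of mine, not taken from the linked post:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimingAspect {

    // Times every public method in the (hypothetical) com.example.service package.
    // Requires AspectJ auto-proxying to be enabled in the Spring configuration.
    @Around("execution(public * com.example.service..*(..))")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}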
You also have the option of using the utilities around application monitoring and SNMP. If you need to time your methods on a regular basis in a production environment, you should probably consider using one of those SNMP tools.
Usually I store the time in a .txt file so I can analyse the outcome later.
// StopWatch here is assumed to be Apache Commons Lang's org.apache.commons.lang3.time.StopWatch;
// TimeUnit, File, FileWriter and BufferedWriter come from the JDK (java.util.concurrent / java.io).
StopWatch sWatch = new StopWatch();
sWatch.start();

// do stuff that you want to measure
downloadContent();

sWatch.stop();

// make the time pretty
long timeInMilliseconds = sWatch.getTime();
long hours = TimeUnit.MILLISECONDS.toHours(timeInMilliseconds);
long minutes = TimeUnit.MILLISECONDS.toMinutes(timeInMilliseconds - TimeUnit.HOURS.toMillis(hours));
long seconds = TimeUnit.MILLISECONDS.toSeconds(timeInMilliseconds - TimeUnit.HOURS.toMillis(hours) - TimeUnit.MINUTES.toMillis(minutes));
long milliseconds = timeInMilliseconds - TimeUnit.HOURS.toMillis(hours) - TimeUnit.MINUTES.toMillis(minutes) - TimeUnit.SECONDS.toMillis(seconds);
String t = String.format("%02d:%02d:%02d:%d", hours, minutes, seconds, milliseconds);

// each line to store in the txt file, terminated with a new line
String content = "Ref: " + ref + " - " + t + "\r\n";

// you may want to wrap this section in a try/catch (or try-with-resources)
File file = new File("C:\\time_log.txt");
FileWriter fw = new FileWriter(file.getAbsoluteFile(), true); // append set to true, so it does not overwrite existing data
BufferedWriter bw = new BufferedWriter(fw);
bw.write(content);
bw.close();
You can add this code and it will tell you how long the method took to execute.
long millisecondsStart = System.currentTimeMillis();
executeMethod();
long timeSpentInMilliseconds = System.currentTimeMillis() - millisecondsStart;
System.out.println("Method took " + timeSpentInMilliseconds + " ms");
From time to time I encounter mentions of System.nanoTime() being a lot slower (the call could cost up to microseconds) than System.currentTimeMillis(), but the proof links are often outdated, or lead to fairly opinionated blog posts that can't really be trusted, or contain information pertaining only to a specific platform, and so on.
I didn't run benchmarks, since I'm realistic about my ability to conduct an experiment on such a sensitive matter, but my conditions are well-defined, so I'm expecting a fairly simple answer.
So, on an average 64-bit Linux machine (implying a 64-bit JRE), with Java 8 and modern hardware, will switching to nanoTime() cost me those microseconds per call? Should I stay with currentTimeMillis()?
As always, it depends on what you're using it for. Since others are bashing nanoTime, I'll put a plug in for it. I exclusively use nanoTime to measure elapsed time in production code.
I shy away from currentTimeMillis in production because I typically need a clock that doesn't jump backwards and forwards like the wall clock can (and does). This is critical in my systems, which make important timer-based decisions. nanoTime should be monotonically increasing at the rate you'd expect.
In fact, one of my co-workers says "currentTimeMillis is only useful for human entertainment," (such as the time in debug logs, or displayed on a website) because it cannot be trusted to measure elapsed time.
But really, we try not to use time as much as possible, and attempt to keep time out of our protocols; then we try to use logical clocks; and finally if absolutely necessary, we use durations based on nanoTime.
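To illustrate what a nanoTime-based duration looks like in practice, here is a minimal sketch of a timeout/deadline; unlike a wall-clock deadline, it keeps working even if the system clock is adjusted mid-wait:

import java.util.concurrent.TimeUnit;

public class DeadlineExample {
    public static void main(String[] args) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(5);

        // Compare with a subtraction rather than comparing absolute values,
        // because nanoTime() has an arbitrary origin and may wrap around.
        while (deadline - System.nanoTime() > 0) {
            // ... poll for work, check a condition, etc. ...
            Thread.sleep(100);
        }
        System.out.println("Deadline reached");
    }
}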
Update: there is one place where we use currentTimeMillis as a sanity check when connecting two hosts, but there we're only checking whether the hosts' clocks are more than 5 minutes apart.
If you are currently using currentTimeMillis() and are happy with the resolution, then you definitely shouldn't change.
According to the javadoc:
This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes); no guarantees are made except that the resolution is at least as good as that of {@link #currentTimeMillis()}.
So depending on the OS implementation, there is no guarantee that the nano time returned is any finer-grained; it may just be a longer number carrying the same millisecond information as currentTimeMillis().
A perfectly valid implementation could be currentTimeMillis() * 1000000.
Therefore, I don't think you really gain a benefit from nanoseconds even if there weren't a performance issue.
I want to stress that even if the calls were very cheap, you would not get nanosecond resolution in your measurements.
Let me give you an example (code from http://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime--):
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
So while both long values are expressed in nanoseconds, the JVM does not guarantee that every call you make to nanoTime() will return a new value.
To illustrate this, I wrote a simple program and ran it on Win7x64 (feel free to run it and report the results as well):
package testNano;
public class Main {
public static void main(String[] args) {
long attempts = 10_000_000L;
long stale = 0;
long prevTime;
for (int i = 0; i < attempts; i++) {
prevTime = System.nanoTime();
long nanoTime = System.nanoTime();
if (prevTime == nanoTime) stale++;
}
System.out.format("nanoTime() returned stale value in %d out of %d tests%n", stale, attempts);
}
}
It prints out nanoTime() returned stale value in 9117171 out of 10000000 tests.
EDIT
I also recommend reading the Oracle article on this: https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks. The conclusions of the article are:
If you are interested in measuring absolute time then always use System.currentTimeMillis(). Be aware that its resolution may be quite coarse (though this is rarely an issue for absolute times.)
If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.
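In code, those two recommendations look roughly like this (a trivial sketch; the sleep just stands in for real work):

public class ClockUsage {
    public static void main(String[] args) throws InterruptedException {
        // Absolute time (timestamps, log entries): currentTimeMillis().
        System.out.println("Started at epoch millis: " + System.currentTimeMillis());

        // Elapsed time (how long did this take?): only nanoTime() deltas.
        long start = System.nanoTime();
        Thread.sleep(50); // the work being measured
        long elapsedMicros = (System.nanoTime() - start) / 1_000;
        System.out.println("Work took ~" + elapsedMicros + " microseconds");
    }
}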
Also you might find this discussion interesting: Why is System.nanoTime() way slower (in performance) than System.currentTimeMillis()?.
Running this very simple test:
public static void main(String[] args) {
    // Warmup loops
    long l;
    for (int i = 0; i < 1000000; i++) {
        l = System.currentTimeMillis();
    }
    for (int i = 0; i < 1000000; i++) {
        l = System.nanoTime();
    }

    // Full loops
    long start = System.nanoTime();
    for (int i = 0; i < 10000000; i++) {
        l = System.currentTimeMillis();
    }
    start = System.nanoTime() - start;
    System.err.println("System.currentTimeMillis() " + start / 1000);

    start = System.nanoTime();
    for (int i = 0; i < 10000000; i++) {
        l = System.nanoTime();
    }
    start = System.nanoTime() - start;
    System.err.println("System.nanoTime() " + start / 1000);
}
On Windows 7 this shows millis to be just over 2 times as fast:
System.currentTimeMillis() 138615
System.nanoTime() 299575
On other platforms, the difference isn't as large, with nanoTime() actually being slightly (~10%) faster:
On OS X:
System.currentTimeMillis() 463065
System.nanoTime() 432896
On Linux with OpenJDK:
System.currentTimeMillis() 352722
System.nanoTime() 312960
Well, the best thing to do in such situations is always to benchmark it. And since the timing depends solely on your platform and OS, there's really nothing we can do for you here, particularly since you don't explain anywhere what you actually use the timer for.
Neither nanoTime nor currentTimeMillis generally guarantees monotonicity (nanoTime does on HotSpot for Solaris only, and otherwise relies on an existing monotonic time source of the OS; so for most people it will be monotonic even if currentTimeMillis is not).
Luckily for you, writing benchmarks in Java is relatively easy these days thanks to JMH (the Java Microbenchmark Harness), and even luckier for you, Aleksey Shipilёv actually investigated nanoTime a while ago: see here, including source code to do the interesting benchmarking yourself. It is also a nice primer on JMH itself; if you want to write accurate benchmarks with relatively little background knowledge, that's the one to pick. It is amazing how far the engineers behind that project went to make benchmarking as straightforward as possible for the general populace, although you can certainly still mess it up if you're not careful ;-)
To summarize the results for a modern Linux distribution or Solaris on an x86 CPU:
Precision: 30ns
Latency: 30ns best case
Windows:
Precision: Hugely variable, 370ns to 15 µs
Latency: Hugely variable, 15ns to 15 µs
But note that Windows is also known to give you a precision of up to 100 ms for currentTimeMillis in some rare situations, so... pick your poison.
Mac OS X:
Precision: 1µs
Latency: 50ns
Be aware that these results will differ greatly depending on the platform (CPU/motherboard; there are some interesting older hardware combinations around, although they're luckily becoming rarer) and the OS. Obviously, running this on an 800 MHz CPU will give rather different results than a 3.6 GHz server.
I've seen this question and it's somewhat similar. I would like to know if it really is a big factor that would affect the performance of my application. Here's my scenario.
I have this Java webapp that can upload thousands of rows of data from a spreadsheet, which is read row by row from top to bottom. I'm using System.out.println() to show on the server side which line the application is currently reading.
- I'm aware that I could create a log file. In fact, I am creating a log file and, at the same time, displaying the logs on the server's prompt.
Is there any other way of printing the current data on the prompt?
I was recently testing (reading and) writing large (1-1.5 GB) text files, and I found out that:
PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(java.io.FileDescriptor.out), "UTF-8"), 512));
out.println(yourString);
//...
out.flush();
is in fact almost 250% faster than
System.out.println(yourString);
My test program first read about 1 GB of data, processed it a bit, and output it in a slightly different format.
Test results (on a MacBook Pro, with SSD, reading & writing using the same disk):
data-output-to-system-out > output.txt => 1min32sec
data-written-to-file-in-java => 37sec
data-written-to-buffered-writer-stdout > output.txt => 36sec
I tried multiple buffer sizes between 256 and 10k, but that didn't seem to matter.
So keep in mind: if you're creating Unix command-line tools in Java whose output is meant to be redirected or piped somewhere else, don't use System.out directly!
It can have an impact on your application performance. The magnitude will vary depending on the kind of hardware you are running on and the load on the host.
Some of the ways this can translate into a performance cost:
-> As Rocket boy stated, println is synchronized, which means you will incur locking overhead on the object header and may create thread bottlenecks depending on your design.
-> Printing to the console requires kernel time; kernel time means the CPU is not running in user mode, i.e. it is busy executing kernel code instead of your application code.
-> If you are already logging this, that means extra kernel time for I/O, and if your platform does not support asynchronous I/O, your CPU might stall on busy waits.
You can actually try and benchmark this and verify this yourself.
There are ways to get away with this, for example having really fast I/O, a big dedicated machine, and biased locking in your JVM options if your console printing is not multithreaded by design.
Like everything on performance, it all depends on your hardware and priorities.
System.out.println()
is synchronized.
public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}
If multiple threads write to it, its performance will suffer.
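A rough way to see that contention for yourself is to hammer System.out from several threads at once. This is only a sketch and the numbers will vary enormously between machines:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PrintlnContention {
    public static void main(String[] args) throws InterruptedException {
        int threads = 8;
        int linesPerThread = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < linesPerThread; i++) {
                    System.out.println("line " + i); // every thread blocks on the same lock
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        // Printed to stderr so it is not lost in the flood of stdout lines.
        System.err.println("Elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
    }
}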
Yes, it will have a HUGE impact on performance. If you want a quantifiable number, well then there's plenty of software and/or ways of measuring your own code's performance.
System.out.println is very slow compared to most slow operations. This is because it places more work on the machine than other I/O operations (and it is effectively single-threaded, since every call synchronizes on the stream).
I suggest you write the output to a file and tail the output of this file. This way, the output will still be slow, but it won't slow down your web service so much.
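A minimal sketch of that approach, writing progress lines to a buffered file that you then follow with tail -f (the file name and the every-1000-rows interval are just example choices):

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileProgressLog {
    public static void main(String[] args) throws IOException {
        // Watch progress from another terminal with: tail -f progress.log
        try (PrintWriter log = new PrintWriter(
                Files.newBufferedWriter(Paths.get("progress.log")))) {
            for (int row = 1; row <= 100_000; row++) {
                // ... process the spreadsheet row here ...
                if (row % 1000 == 0) {
                    log.println("Processed row " + row);
                    log.flush(); // flush occasionally so tail sees the lines promptly
                }
            }
        }
    }
}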
Here's a very simple program to check the performance of System.out.println and compare it with a multiplication operation (you can use other operations or functions specific to your requirements).
public class Main {
    public static void main(String[] args) throws InterruptedException {
        long tTime = System.nanoTime();
        long a = 123L;
        long b = 234L;
        long c = a * b;
        long uTime = System.nanoTime();
        System.out.println("a * b = " + c + ". Time taken for multiplication = " + (uTime - tTime) + " nano Seconds");
        long vTime = System.nanoTime();
        System.out.println("Time taken to execute Print statement : " + (vTime - uTime) + " nano Seconds");
    }
}
Output depends on your machine and its current state.
Here's what I got on : https://www.onlinegdb.com/online_java_compiler
a * b = 28782. Time taken for multiplication = 330 nano Seconds
Time taken to execute Print statement : 338650 nano Seconds
EDIT:
I have a logger set up on my local machine, so I wanted to give you an idea of the performance difference between System.out.println and logger.info, i.e. a comparison between console printing and logging:
public static void main(String[] args) throws InterruptedException {
    long tTime = System.nanoTime();
    long a = 123L;
    long b = 234L;
    long c = a * b;
    long uTime = System.nanoTime();
    System.out.println("a * b = " + c + ". Time taken for multiplication = " + (uTime - tTime) + " nano Seconds");
    long vTime = System.nanoTime();
    System.out.println("Time taken to execute Print statement : " + (vTime - uTime) + " nano Seconds");
    long wTime = System.nanoTime();
    // "logger" is assumed to be a logging-framework logger (e.g. SLF4J/Log4j) configured elsewhere
    logger.info("a * b = " + c + ". Time taken for multiplication = " + (uTime - tTime) + " nano Seconds");
    long xTime = System.nanoTime();
    System.out.println("Time taken to execute log statement : " + (xTime - wTime) + " nano Seconds");
}
Here's what I got on my local machine :
a * b = 28782. Time taken for multiplication = 1667 nano Seconds
Time taken to execute Print statement : 34080917 nano Seconds
2022-11-15 11:36:32.734 [] INFO CreditAcSvcImpl uuid: - a * b = 28782. Time taken for multiplication = 1667 nano Seconds
Time taken to execute log statement : 9645083 nano Seconds
Notice that System.out.println took almost 24 ms longer than logger.info.
I am making a call to a method, passing an ipAddress, and it returns the location of that IP address (country, city, etc.). I wanted to see how much time each call takes, so I set start_time before making the call to the method and end_time after the call. Sometimes I get a difference of 0, even though resp contains a valid response.
long start_time = System.currentTimeMillis();
resp = GeoLocationService.getLocationIp(ipAddress);
long end_time = System.currentTimeMillis();
long difference = end_time-start_time;
So that means sometimes it is taking 0 ms to get the response back. Any suggestions will be appreciated.
Try this
long start_time = System.nanoTime();
resp = GeoLocationService.getLocationByIp(ipAddress);
long end_time = System.nanoTime();
double difference = (end_time - start_time) / 1e6;
I pretty much like the (relatively) new java.time library: it's close to awesome, imho.
You can calculate a duration between two instants this way:
import java.time.*;
Instant before = Instant.now();
// do stuff
Instant after = Instant.now();
long delta = Duration.between(before, after).toMillis(); // .toWhatsoever()
API is awesome, highly readable and intuitive.
Classes are thread-safe, too!
References: Oracle Tutorial, Java Magazine
No, it doesn't mean it's taking 0 ms; it means it's taking less time than you can measure with currentTimeMillis(). That may well be 10 ms or 15 ms. currentTimeMillis() is not a good method to call for timing; it's more appropriate for getting the current time.
To measure how long something takes, consider using System.nanoTime instead. The important point here isn't that the precision is greater, but that the resolution will be greater... but only when used to measure the time between two calls. It must not be used as a "wall clock".
Note that even System.nanoTime just uses "the most accurate timer on your system" - it's worth measuring how fine-grained that is. You can do that like this:
public class Test {
    public static void main(String[] args) throws Exception {
        long[] differences = new long[5];
        long previous = System.nanoTime();
        for (int i = 0; i < 5; i++) {
            long current;
            while ((current = System.nanoTime()) == previous) {
                // Do nothing...
            }
            differences[i] = current - previous;
            previous = current;
        }
        for (long difference : differences) {
            System.out.println(difference);
        }
    }
}
On my machine that shows differences of about 466 nanoseconds... so I can't possibly expect to measure the time taken for something quicker than that. (And other times may well be roughly multiples of that amount of time.)
Since Java 1.5, you can get a more precise time value with System.nanoTime(), which obviously returns nanoseconds instead.
There is probably some caching going on in the instances when you get an immediate result.
From Java 8 onward you can try the following:
import java.time.*;
import java.time.temporal.ChronoUnit;
Instant start_time = Instant.now();
// Your code
Instant stop_time = Instant.now();
System.out.println(Duration.between(start_time, stop_time).toMillis());
//or
System.out.println(ChronoUnit.MILLIS.between(start_time, stop_time));
I do not know how your PersonalizationGeoLocationServiceClientHelper works. It probably performs some sort of caching, so requests for the same IP address may return extremely fast.
In the old days (you know, anytime before yesterday) a PC's BIOS timer would "tick" at a certain interval. That interval would be on the order of 12 milliseconds. Thus, it's quite easy to perform two consecutive calls to get the time and have them return a difference of zero. This only means that the timer didn't "tick" between your two calls. Try getting the time in a loop and displaying the values to the console. If your PC and display are fast enough, you'll see that time jumps, making it look as though it's quantized! (Einstein would be upset!) Newer PCs also have a high resolution timer. I'd imagine that nanoTime() uses the high resolution timer.
In such small cases, where the difference is less than a millisecond, you can get the difference in nanoseconds instead.
System.nanoTime()
You can use
System.nanoTime();
To get the result in a readable format, use TimeUnit.MILLISECONDS or TimeUnit.NANOSECONDS to convert the raw nanosecond value.
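For example, a small sketch of converting the nanoTime() delta into something readable (the sleep stands in for the GeoLocationService call from the question):

import java.util.concurrent.TimeUnit;

public class ElapsedExample {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(120); // stand-in for GeoLocationService.getLocationIp(ipAddress)
        long elapsedNanos = System.nanoTime() - start;

        System.out.println("Elapsed: "
                + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms ("
                + elapsedNanos + " ns)");
    }
}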
I have a very long string with the pattern </value> at the very end. I am trying to test the performance of some function calls, so I made the following test to try to find out the answer... but I think I might be using nanoTime incorrectly, because the result doesn't make sense no matter how I swap the order around.
long start, end;
start = System.nanoTime();
StringUtils.indexOf(s, "</value>");
end = System.nanoTime();
System.out.println(end - start);
start = System.nanoTime();
s.indexOf("</value>");
end = System.nanoTime();
System.out.println(end - start);
start = System.nanoTime();
sb.indexOf("</value>");
end = System.nanoTime();
System.out.println(end - start);
I get the following:
163566 // StringUtils
395227 // String
30797 // StringBuilder
165619 // StringBuilder
359639 // String
32850 // StringUtils
No matter which order I swap them into, the numbers are always roughly the same... What's the deal here?
From java.sun.com website's FAQ:
Using System.nanoTime() between various points in the code to perform elapsed time measurements should always be accurate.
Also:
http://download.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
The difference between the two runs is on the order of microseconds, and that is expected. There are many things going on on your machine that make the execution environment never quite the same between two runs of your application. That is why you get that difference.
EDIT: Java API says:
This method provides nanosecond precision, but not necessarily
nanosecond accuracy.
Most likely there are memory initialization issues or other things happening at the JVM's startup that are skewing your numbers. You should take a bigger sample to get more accurate numbers. Play around with the order, run it multiple times, etc.
It is more than likely that the methods you are checking share some common code behind the scenes. But the JIT will do its work only after about 10,000 invocations, which could be why your first two examples always seem slower.
Quick fix: just perform the three method calls, on a long enough string, before the first measurement.
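A sketch of that quick fix, reusing s, sb and StringUtils from the question; the 20,000 warmup iterations are only a rough guess chosen to exceed the default JIT compilation threshold:

// Warm up every code path so the JIT has compiled the shared internals
// before any measurement is taken (StringUtils is Apache Commons Lang).
for (int i = 0; i < 20_000; i++) {
    StringUtils.indexOf(s, "</value>");
    s.indexOf("</value>");
    sb.indexOf("</value>");
}

// Only now repeat the original nanoTime() measurements, e.g.:
long start = System.nanoTime();
StringUtils.indexOf(s, "</value>");
System.out.println(System.nanoTime() - start);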
One of my friends showed me something he had done, and I was at a serious loss to explain how it could have happened: he was using System.nanoTime to time something, and it gave the user an update every second to say how much time had elapsed (it uses Thread.sleep(1000) for that part), yet it took seemingly forever (something that was supposed to wait 10 seconds took roughly 3 minutes to finish). We tried using the milli time to see how much time had elapsed: the program printed how much nano time had elapsed every second, and we saw that for every second, the nano time was moving by only roughly 40-50 milliseconds.
I checked for bugs relating to System.nanoTime and Java, but it seemed the only things I could find involved the nano time suddenly jumping ahead greatly and then stopping. I also browsed this blog entry, based on something I read in a different question, but that didn't have anything that might cause it.
Obviously this could be worked around here by just using the milli time instead; there are lots of workarounds. What I'm curious about is whether anything, other than a hardware issue with the system clock (or whatever the most accurate clock the CPU has, since that's what System.nanoTime seems to use), could cause it to run consistently slow like this.
long initialNano = System.nanoTime();
long initialMili = System.currentTimeMillis();
// Obviously the code isn't actually doing a while(true),
// but it illustrates the point
while (true) {
    Thread.sleep(1000);
    long currentNano = System.nanoTime();
    long currentMili = System.currentTimeMillis();
    double secondsNano = ((double) (currentNano - initialNano)) / 1000000000D;
    double secondsMili = ((double) (currentMili - initialMili)) / 1000D;
    System.out.println(secondsNano);
    System.out.println(secondsMili);
}
secondsNano will print something along the lines of 0.04, whereas secondsMili will print something very close to 1.
It looks like a bug along these lines has been reported in Sun's bug database, but they closed it as a duplicate, and their link doesn't go to an existing bug. It seems to be very system-specific, so I'm getting more and more sure this is a hardware issue.
... he was using a System.nanotime to cause the program to wait before doing something, and ...
Can you show us some code that demonstrates exactly what he was doing? Was it some strange kind of busy loop, like this:
long t = System.nanoTime() + 1000000000L;
while (System.nanoTime() < t) { /* do nothing */ }
If yes, then that's not the right way to make your program pause for a while. Use Thread.sleep(...) instead to make the program wait for a specified number of milliseconds.
You do realise that the loop you are using doesn't take exactly 1 second to run? Firstly, Thread.sleep() isn't guaranteed to be accurate, and the rest of the code in the loop does take some time to execute (both nanoTime() and currentTimeMillis() can actually be quite slow depending on the underlying implementation). Secondly, System.currentTimeMillis() is not guaranteed to be accurate either (it only updates every 50 ms on some operating system and hardware combinations). You also mention it being inaccurate by 40-50 ms above, and then go on to say 0.004 s, which is actually only 4 ms.
I would recommend you change your System.out.println() to be:
System.out.println(secondsNano - secondsMili);
This way, you'll be able to see how much the two clocks differ on a second-by-second basis. I left it running for about 12 hours on my laptop and it was out by 1.46 seconds (fast, not slow). This shows that there is some drift in the two clocks.
I would think that the currentTimeMillis() method provides a more accurate time over a long period, whereas nanoTime() has greater resolution and is good for timing code or providing sub-millisecond timing over short periods.
I've experienced the same problem. Except in my case, it is more pronounced.
With this simple program:
public class test {
    public static void main(String[] args) {
        while (true) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
            // OStream appears to be the author's own logging helper; its output
            // below is prefixed with a timestamp and the thread name.
            OStream.out("\t" + System.currentTimeMillis() + "\t" + nanoTimeMillis());
        }
    }

    static long nanoTimeMillis() {
        return Math.round(System.nanoTime() / 1000000.0);
    }
}
I get the following results:
13:05:16:380 main: 1288199116375 61530042
13:05:16:764 main: 1288199117375 61530438
13:05:17:134 main: 1288199118375 61530808
13:05:17:510 main: 1288199119375 61531183
13:05:17:886 main: 1288199120375 61531559
The nanoTime is showing only ~400ms elapsed for each second.