Measure execution time for a Java method [duplicate]

This question already has answers here:
How do I time a method's execution in Java?
(42 answers)
Closed 9 years ago.
How do I calculate the time taken for the execution of a method in Java?

For better precision, I would use the nanoTime() method rather than currentTimeMillis():
long startTime = System.nanoTime();
myCall();
long stopTime = System.nanoTime();
System.out.println(stopTime - startTime);
In Java 8 (output format is ISO-8601):
Instant start = Instant.now();
Thread.sleep(63553);
Instant end = Instant.now();
System.out.println(Duration.between(start, end)); // prints PT1M3.553S
Guava Stopwatch:
Stopwatch stopwatch = Stopwatch.createStarted();
myCall();
stopwatch.stop(); // optional
System.out.println("Time elapsed: " + stopwatch.elapsed(TimeUnit.MILLISECONDS));

You can take timestamp snapshots before and after, then repeat the experiment several times and average the results. There are also profilers that can do this for you.
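The snapshot-and-average idea can be sketched as follows; workload() is a hypothetical stand-in for whatever method you actually want to time:

```java
public class AverageTiming {
    // Stand-in for the method under test.
    static long workload() {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += i;
        }
        return total;
    }

    // Runs the workload `repeats` times, snapshotting nanoTime before and
    // after each run, and returns the mean elapsed nanoseconds.
    static double averageNanos(int repeats) {
        long sum = 0;
        for (int r = 0; r < repeats; r++) {
            long start = System.nanoTime();
            workload();
            sum += System.nanoTime() - start;
        }
        return (double) sum / repeats;
    }

    public static void main(String[] args) {
        System.out.println("mean ns: " + averageNanos(10));
    }
}
```

Averaging smooths out scheduler noise, but it does not remove systematic effects such as JIT warm-up, which the answers below discuss.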
From "Java Platform Performance: Strategies and Tactics" book:
With System.currentTimeMillis()
class TimeTest1 {
    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        long total = 0;
        for (int i = 0; i < 10000000; i++) {
            total += i;
        }
        long stopTime = System.currentTimeMillis();
        long elapsedTime = stopTime - startTime;
        System.out.println(elapsedTime);
    }
}
With a StopWatch class
You can use this StopWatch class, calling start() and stop() before and after the method.
class TimeTest2 {
    public static void main(String[] args) {
        Stopwatch timer = new Stopwatch().start();
        long total = 0;
        for (int i = 0; i < 10000000; i++) {
            total += i;
        }
        timer.stop();
        System.out.println(timer.getElapsedTime());
    }
}
See here (archived).
NetBeans Profiler:
Application Performance profiles method-level CPU performance (execution time). You can choose to profile the entire application or a part of the application.
See here.

Check this: System.currentTimeMillis.
With this you can calculate the time taken by your method:
long start = System.currentTimeMillis();
someObject.method(); // the method you want to time
long time = System.currentTimeMillis() - start;

If you develop applications for Android, you should try the TimingLogger class.
Take a look at these articles describing the usage of the TimingLogger helper class:
Measuring performance in the Android SDK (27.09.2010)
Discovering the Android API - Part 1 (03.01.2017)

You might want to think about aspect-oriented programming. You don't want to litter your code with timings. You want to be able to turn them off and on declaratively.
If you use Spring, take a look at their MethodInterceptor class.

If you are still writing the application, then System.currentTimeMillis or System.nanoTime serves the purpose, as pointed out above.
But if you have already written the code and don't want to change it, it is better to use Spring's method interceptors. For instance, your service is:
public class MyService {
    public void doSomething() {
        for (int i = 1; i < 10000; i++) {
            System.out.println("i=" + i);
        }
    }
}
To avoid changing the service, you can write your own method interceptor:
public class ServiceMethodInterceptor implements MethodInterceptor {
    public Object invoke(MethodInvocation methodInvocation) throws Throwable {
        long startTime = System.currentTimeMillis();
        Object result = methodInvocation.proceed();
        long duration = System.currentTimeMillis() - startTime;
        Method method = methodInvocation.getMethod();
        String methodName = method.getDeclaringClass().getName() + "." + method.getName();
        System.out.println("Method '" + methodName + "' took " + duration + " milliseconds to run");
        return result; // return the intercepted method's result, not null
    }
}
There are also open-source tools available for Java, e.g. BTrace, or the NetBeans profiler as suggested above by @bakkal and @Saikikos.
Thanks.

As proposed, nanoTime() is very precise on short time scales. When this precision is required, you need to take care about what you really measure, in particular not to measure the nanoTime call itself:
long start1 = System.nanoTime();
// maybe add a call to the measured method here too, to remove call set-up time
// and avoid optimization
long start2 = System.nanoTime();
myCall();
long stop = System.nanoTime();
// (stop - start2) - (start2 - start1): subtracts the cost of the nanoTime call itself
long diff = stop - 2 * start2 + start1;
System.out.println(diff + " ns");
By the way, you will measure different values for the same call due to:
other load on your computer (background tasks, network, mouse movement, interrupts, task switching, threads)
cache filling (cold vs. warm)
JIT compilation (no optimization, a performance hit while the compiler runs, a performance boost from compiled code; but sometimes code with the JIT is slower than without it!)
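A common mitigation for the JIT effect is to run enough warm-up iterations before the measured ones that the method is compiled by the time timing starts. A minimal sketch, where work() is an arbitrary placeholder workload:

```java
public class WarmupTiming {
    // Placeholder workload to be timed.
    static long work() {
        long total = 0;
        for (int i = 0; i < 100_000; i++) {
            total += i;
        }
        return total;
    }

    // Runs `warmupRuns` untimed iterations first so the JIT has a chance to
    // compile `work`, then times `measuredRuns` iterations.
    static long timedNanos(int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            work();
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredRuns; i++) {
            work();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        System.out.println(timedNanos(10_000, 100) + " ns for 100 measured runs");
    }
}
```

This is only a sketch; a real harness (e.g. JMH) also guards against dead-code elimination and other optimizations this simple loop does not.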

nanoTime is in fact not even good for elapsed time, because it drifts significantly more than currentTimeMillis. Furthermore, nanoTime tends to provide excessive precision at the expense of accuracy, so it is highly inconsistent and needs refinement.
For any time-measuring process, currentTimeMillis (though almost as bad) does better at balancing accuracy and precision.

Related

Accuracy of System.nanoTime() to measure time elapsed decreases after a call to Thread.sleep()

I'm encountering a really unusual issue here. It seems that calling Thread.sleep(n), where n > 0, causes subsequent System.nanoTime() calls to be less predictable.
The code below demonstrates the issue.
Running it on my computer (rMBP 15" 2015, OS X 10.11, jre 1.8.0_40-b26) outputs the following result:
Control: 48497
Random: 36719
Thread.sleep(0): 48044
Thread.sleep(1): 832271
On a virtual machine running Windows 8 (VMware Horizon, Windows 8.1, jre 1.8.0_60-b27):
Control: 98974
Random: 61019
Thread.sleep(0): 115623
Thread.sleep(1): 282451
However, running it on an enterprise server (VMware, RHEL 6.7, jre 1.6.0_45-b06):
Control: 1385670
Random: 1202695
Thread.sleep(0): 1393994
Thread.sleep(1): 1413220
Which is, surprisingly, the result I expect.
Clearly Thread.sleep(1) affects the timings in the code below. I have no idea why this happens. Does anyone have a clue?
Thanks!
public class Main {
    public static void main(String[] args) {
        int N = 1000;
        long timeElapsed = 0;
        long startTime, endTime = 0;

        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();
            timeElapsed += endTime - startTime;
        }
        System.out.println("Control: " + timeElapsed);

        timeElapsed = 0;
        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();
            timeElapsed += endTime - startTime;
            for (int j = 0; j < N; j++) {
                int k = (int) Math.pow(i, j);
            }
        }
        System.out.println("Random: " + timeElapsed);

        timeElapsed = 0;
        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();
            timeElapsed += endTime - startTime;
            try {
                Thread.sleep(0);
            } catch (InterruptedException e) {
                break;
            }
        }
        System.out.println("Thread.sleep(0): " + timeElapsed);

        timeElapsed = 0;
        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();
            timeElapsed += endTime - startTime;
            try {
                Thread.sleep(2);
            } catch (InterruptedException e) {
                break;
            }
        }
        System.out.println("Thread.sleep(1): " + timeElapsed);
    }
}
Basically I'm running a search within a while-loop which takes a break every iteration by calling Thread.sleep(). I want to exclude the sleep time from the overall time taken to run the search, so I'm using System.nanoTime() to record the start and finishing times. However, as you notice above, this doesn't work well.
Is there a way to remedy this?
Thanks for any input!
This is a complex topic because the timers used by the JVM are highly CPU- and OS-dependent and also change with JVM versions (e.g. by using newer OS APIs). Virtual machines may also limit the CPU capabilities they pass through to guests, which may alter the choices in comparison to a bare metal setup.
On x86 the RDTSC instruction provides the lowest latency and best granularity of all clocks, but under some configurations it's not available or reliable enough as a time source.
On Linux, you should check the kernel startup messages (dmesg), the TSC-related /proc/cpuinfo flags, and the selected /sys/devices/system/clocksource/*/current_clocksource. The kernel will try to use the TSC by default; if it doesn't, there may be a reason for that.
For some history you may want to read the following, but note that some of those articles may be a bit dated; TSC reliability has improved a lot over the years:
OpenJDK Bug 8068730, exposing more precise system clocks in Java 9 through the Date and Time APIs introduced in Java 8
http://shipilev.net/blog/2014/nanotrusting-nanotime/ (mentions the -XX:+AssumeMonotonicOSTimers manual override/footgun)
https://blog.packagecloud.io/eng/2017/03/14/using-strace-to-understand-java-performance-improvement/ (mentions the similar option for linux UseLinuxPosixThreadCPUClocks)
https://btorpey.github.io/blog/2014/02/18/clock-sources-in-linux/
https://stas-blogspot.blogspot.de/2012/02/what-is-behind-systemnanotime.html
https://en.wikipedia.org/wiki/Time_Stamp_Counter (especially CPU capabilities, constant_tsc tsc_reliable nonstop_tsc in linux nomenclature)
http://vanillajava.blogspot.de/2012/04/yield-sleep0-wait01-and-parknanos1.html
I can suggest at least two possible reasons for such behavior:
Power saving. When executing a busy loop, the CPU runs at its maximum performance state. However, after Thread.sleep it is likely to fall into one of the power-saving states, with frequency and voltage reduced. After that, the CPU won't return to its maximum performance immediately; this may take from several nanoseconds to microseconds.
Scheduling. After a thread is descheduled due to Thread.sleep, it will be scheduled for execution again after a timer event which might be related to the timer used for System.nanoTime.
In both cases you can't directly work around this: Thread.sleep will also affect timings in your real application. But if the amount of useful work measured is large enough, the inaccuracy will be negligible.
The inconsistencies probably arise not from Java, but from the different OSs and VMs "atomic-" or system- clocks themselves.
According to the official nanoTime() documentation (source):
"no guarantees are made except that the resolution is at least as good as that of currentTimeMillis()"
...I can tell from personal knowledge that this is because in some OSs and VMs, the system itself doesn't support "atomic" clocks, which are necessary for higher resolutions. (I will post the link to source this information as soon as I find it again...It's been a long time.)

Interrupt Loop after some Time in Java

I am looking for a way to interrupt a loop in Java without multithreading.
public class Foo{
public Foo(){
}
public void bar(long timeLimit) {
long endTime = System.currentTimeMillis() + (timeLimit * 1000);
while (System.currentTimeMillis() < endTime) {
// Some really long and complicated computation
}
}
}
At the moment I have scattered various (System.currentTimeMillis() < timeLimit) checks through the computation to see whether there is time left, but I guess that eats up some time, and I also face the problem that even if a loop iteration starts in time, the computation might not finish in time.
Amending the timeLimit accordingly (say, using only 40 % of the time) also does not work, because I cannot predict how long some computations take.
Which options are there?
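One option, assuming the computation can be split into chunks: compute a single deadline up front and check the clock only once per chunk, rather than scattering checks through the computation. This is only a sketch; runWithDeadline and the chunk body are illustrative names, not from the question:

```java
public class DeadlineLoop {
    // Runs chunks of work until either all chunks complete or the deadline
    // passes. Returns the number of chunks actually completed.
    static int runWithDeadline(long timeLimitMillis, int totalChunks) {
        long deadline = System.currentTimeMillis() + timeLimitMillis;
        int done = 0;
        while (done < totalChunks && System.currentTimeMillis() < deadline) {
            // one chunk of the long computation (placeholder work)
            long sink = 0;
            for (int i = 0; i < 10_000; i++) {
                sink += i;
            }
            done++;
        }
        return done;
    }

    public static void main(String[] args) {
        System.out.println("chunks done: " + runWithDeadline(50, 1_000_000));
    }
}
```

The trade-off is chunk size: one clock check per chunk keeps the overhead low, but a chunk that starts just before the deadline may still overrun by up to one chunk's duration.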

Execution time and process time fluctuation

Why do the execution time and process CPU time fluctuate every time I run the code below in NetBeans? Does my CPU usage also vary every time? And how do I find the CPU usage for the code below?
import java.lang.management.ManagementFactory;
import java.util.concurrent.TimeUnit;

class test {
    static com.sun.management.OperatingSystemMXBean mxbean =
            (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

    public static void main(String[] args) throws InterruptedException {
        long start1 = System.currentTimeMillis();
        long pstart1 = mxbean.getProcessCpuTime();
        long pstart2 = TimeUnit.MILLISECONDS.convert(pstart1, TimeUnit.NANOSECONDS);
        for (int i = 0; i < 10000; i++) {
            System.out.println("hello");
        }
        long end1 = System.currentTimeMillis();
        long pend1 = mxbean.getProcessCpuTime();
        long pend2 = TimeUnit.MILLISECONDS.convert(pend1, TimeUnit.NANOSECONDS);
        float pdif = pend2 - pstart2;
        float edif = end1 - start1;
        System.out.println(edif);
        System.out.println(pdif);
    }
}
You are performing mostly IO, so you are very dependent on the behaviour of the OS.
Milliseconds are not very accurate, and the clock can have a resolution of more than 1 ms; e.g. currentTimeMillis has a resolution of about 15 ms on Windows XP. The CPU time used has a resolution of 10 ms on many Unix systems.
If you make the task CPU bound instead of IO bound and run it for longer you should see more consistent timings.

How accurate is execution time calculation?

I was wondering if calculating a method's execution time this way is accurate:
public class GetExecutionTimes {
    public static void main(String args[]) {
        long startTime = System.currentTimeMillis();
        GetExecutionTimes ext = new GetExecutionTimes();
        ext.callMethod();
        long endTime = System.currentTimeMillis();
        System.out.println("Total elapsed time in execution of"
                + " method callMethod() is :" + (endTime - startTime));
    }

    public void callMethod() {
        System.out.println("Calling method");
        for (int i = 1; i <= 10; i++) {
            System.out.println("Value of counter is " + i);
        }
    }
}
More specifically: Will the time difference be the same if I execute in different conditions?
If not how can I make this calculate more precise?
There are several aspects of Java that make writing micro-benchmarks difficult. Garbage collection and just-in-time compilation are two of them.
I would suggest the following as a very good starting point: How do I write a correct micro-benchmark in Java?
If you attempt to time only the method callMethod() you should move the constructor call before registering the start time.
You should also use nanoTime() rather than currentTimeMillis(), since the latter is susceptible to errors due to changes in the local clock (e.g. daylight-saving-time adjustments or someone on the machine simply changing the time).
The accuracy will depend on your platform and is not necessarily in the units the function names suggest.
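The platform-dependent resolution can be probed empirically. A small sketch (the method name is illustrative) that estimates the smallest non-zero step System.nanoTime() reports on the current machine:

```java
public class ClockGranularity {
    // Returns the smallest non-zero difference observed between consecutive
    // System.nanoTime() readings over `samples` attempts. This approximates
    // the effective granularity of the clock on this platform.
    static long smallestNanoStep(int samples) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            long t1 = System.nanoTime();
            while (t1 == t0) {          // spin until the clock visibly advances
                t1 = System.nanoTime();
            }
            min = Math.min(min, t1 - t0);
        }
        return min;
    }

    public static void main(String[] args) {
        System.out.println("smallest observed nanoTime step: "
                + smallestNanoStep(10_000) + " ns");
    }
}
```

On many systems this prints a few tens of nanoseconds, but the value varies with OS, hardware, and the clock source the JVM selects, which is exactly the caveat above.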

How to compute accurately the time it takes a Java program to write or read a file?

How to compute accurately the time it takes a Java program to write or read a number of bytes from/to a file ?
It is really important that the time is being measured accurately. (The time should be computed by the program itself).
The standard idiom is:
long startTime = System.nanoTime();
doSomething();
long elapsedTime = System.nanoTime() - startTime;
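Applied to file IO, the idiom might look like the following sketch; timeWrite and the temp-file handling are illustrative, not from the question:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileTiming {
    // Writes `bytes` to a temporary file and returns the elapsed nanoseconds
    // for the write call only (file creation and cleanup are excluded).
    static long timeWrite(byte[] bytes) throws IOException {
        Path tmp = Files.createTempFile("timing", ".bin");
        long start = System.nanoTime();
        Files.write(tmp, bytes);
        long elapsed = System.nanoTime() - start;
        Files.delete(tmp);
        return elapsed;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(timeWrite(new byte[1024 * 1024]) + " ns to write 1 MiB");
    }
}
```

Note that a single measurement mostly reflects OS page-cache behaviour rather than disk speed; repeating and averaging, as suggested elsewhere on this page, helps.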
not tested, but something like:
long delta = System.nanoTime();
try {
    // do your stuff
} finally {
    delta = System.nanoTime() - delta;
}
There is a code sample here:
http://www.goldb.org/stopwatchjava.html
/*
Copyright (c) 2005, Corey Goldberg
StopWatch.java is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
*/
public class StopWatch {
    private long startTime = 0;
    private long stopTime = 0;
    private boolean running = false;

    public void start() {
        this.startTime = System.currentTimeMillis();
        this.running = true;
    }

    public void stop() {
        this.stopTime = System.currentTimeMillis();
        this.running = false;
    }

    // elapsed time in milliseconds
    public long getElapsedTime() {
        long elapsed;
        if (running) {
            elapsed = (System.currentTimeMillis() - startTime);
        } else {
            elapsed = (stopTime - startTime);
        }
        return elapsed;
    }

    // elapsed time in seconds
    public long getElapsedTimeSecs() {
        long elapsed;
        if (running) {
            elapsed = ((System.currentTimeMillis() - startTime) / 1000);
        } else {
            elapsed = ((stopTime - startTime) / 1000);
        }
        return elapsed;
    }

    // sample usage
    public static void main(String[] args) {
        StopWatch s = new StopWatch();
        s.start();
        // code you want to time goes here
        s.stop();
        System.out.println("elapsed time in milliseconds: " + s.getElapsedTime());
    }
}
The way I would do that is just to run it in a loop some number of times and divide. For example, if you run it 1,000 times and clock the total in seconds, the average gives you milliseconds; run it 1,000,000 times and the average gives you microseconds.
If you also want to find out why it's taking as long as it is, you can just pause it some number of times (like 10) while it's running, and that will tell you what it's doing and why.
The problem with the System.xxx methods is that the call itself takes some time to execute. The usually "accepted" way of doing this is to run the test a few tens of thousands of times and calculate an average.
Also, depending on your OS, there is something called time granularity (example for Windows): the smallest amount of time your OS can resolve. On some OSs it is a millisecond, on others a nanosecond. It may or may not be relevant in your case.
