I am trying to measure the complexity of an algorithm using a timer to measure the execution time, whilst changing the size of the input array.
The code I have at the moment is rather simple:
private long start; // field implied by the original snippet

public void start() {
    start = System.nanoTime();
}

public long stop() {
    long time = System.nanoTime() - start;
    start = 0;
    return time;
}
It appears to work fine until the array becomes very large, at which point what I expect to be an O(n) algorithm starts to look like O(n^2). I believe this is due to CPU scheduling, with other processes cutting in for more time during the runs with larger values of n.
Basically, I want to measure how much time my process has been running for, rather than how long it has been since I invoked the algorithm. Is there an easy way to do this in Java?
Measuring execution time is an interesting but complicated topic. To do it right in Java, you have to know a little about how the JVM works. Here is a good article from developerWorks about benchmarking and measuring. Read it; it will help you a lot.
The author also provides a small framework for doing benchmarks. You can use this framework. It will give you exactly what you need - the CPU time consumed, instead of just two timestamps taken before and after. The framework also handles JVM warm-up and keeps track of just-in-time compilation.
You can also use a performance monitor like this one for Eclipse. The problem with such a performance monitor is that it doesn't perform a benchmark. It only tracks the time, memory, and similar resources that your application is currently using. But that's not a real measurement - it's just a snapshot at a specific time.
Benchmarking in Java is a hard problem, not least because the JIT can have weird effects as your method gets more and more heavily optimized. Consider using a purpose-built tool like Caliper. Examples of how to use it and to measure performance on different input sizes are here.
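For a sense of the shape, a benchmark over several input sizes might look roughly like the sketch below. This assumes the older Caliper 0.x API (the class name and runAlgorithm() stub are mine, not from the linked examples):

import com.google.caliper.Param;
import com.google.caliper.SimpleBenchmark;

public class MyAlgorithmBenchmark extends SimpleBenchmark {

    @Param({"1000", "10000", "100000"})
    private int n; // Caliper runs the benchmark once per listed size

    private int[] input;

    @Override
    protected void setUp() {
        input = new int[n]; // build an input of the requested size
    }

    public int timeMyAlgorithm(int reps) {
        int result = 0;
        for (int i = 0; i < reps; i++) {
            result ^= runAlgorithm(input); // hypothetical algorithm under test
        }
        return result; // returning a value helps defeat dead-code elimination
    }

    private int runAlgorithm(int[] data) {
        return data.length; // stand-in for the real work
    }
}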
If you want the actual CPU time of the current thread (or indeed, any arbitrary thread) rather than the wall clock time then you can get this via ThreadMXBean. Basically, do this at the start:
ThreadMXBean thx = ManagementFactory.getThreadMXBean();
thx.setThreadCpuTimeEnabled(true);
Then, whenever you want to get the elapsed CPU time for the current thread:
long cpuTime = thx.getCurrentThreadCpuTime();
You'll see that ThreadMXBean has calls to get CPU time and other info for arbitrary threads too.
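Putting those calls together with the timer from the question, a minimal sketch (the class name is mine) could look like this:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimer {
    private final ThreadMXBean thx = ManagementFactory.getThreadMXBean();
    private long start;

    public CpuTimer() {
        // CPU-time measurement can be disabled by default on some JVMs
        if (thx.isThreadCpuTimeSupported()) {
            thx.setThreadCpuTimeEnabled(true);
        }
    }

    public void start() {
        start = thx.getCurrentThreadCpuTime();
    }

    // CPU time (in nanoseconds) consumed by the current thread since start()
    public long stop() {
        return thx.getCurrentThreadCpuTime() - start;
    }
}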
Other comments about the complexities of timing also apply. The timing of an individual invocation of a piece of code can depend, among other things, on the state of the CPU and on what the JIT compiler decides to do at that particular moment. The overall scalability behaviour of an algorithm is generally a trend that emerges across a number of invocations, and you will always need to be prepared for some outliers in your timings.
Also, remember that just because a particular timing is expressed in nanoseconds (or indeed milliseconds) does not mean that the timing actually has that granularity.
Related
I am trying to compute elapsed time in Java using nanoTime, but it gives me different results every time. Why is it not always consistent?
Sample code:
int number = 123456; // some input value; not shown in the original snippet
long startTime = System.nanoTime();
String.valueOf(number).length();
long endTime = System.nanoTime();
System.out.println(endTime - startTime);
nanoTime() and its sibling currentTimeMillis() are not exact; depending on the architecture you run your code on, they suffer from rounding (see the javadoc for details):
This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis().
If you measure time in order to decide whether alternative a or b is faster, you are basically doing a micro-benchmark. There are frameworks for this and you should use them. Probably the best known for Java is JMH. If you need to do the same for larger parts of code, you might consider profiling.
You might want to have a look at this stackoverflow post: How do I write a correct micro-benchmark in Java?
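For illustration, a minimal JMH benchmark of the expression from the question might look like this sketch (the input value is arbitrary):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class NumberLengthBenchmark {

    int number = 123456; // arbitrary sample input

    @Benchmark
    public int numberLength() {
        // returning the result keeps the JIT from eliminating the call
        return String.valueOf(number).length();
    }
}

JMH handles warm-up iterations, forking, and statistical aggregation for you, which is exactly what a hand-rolled nanoTime() measurement lacks.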
The elapsed time will vary depending on how the JVM executes and allocates processing time to the thread in which this code runs.
I tried multiple runs and always got results in the 9,000 to 11,000 nanosecond range. That looks fairly consistent.
I have measured the execution time of my program using the System.nanoTime() function. It gives a different execution time on every run. I have also calculated the number of clock cycles by multiplying the time by the processor speed, and because the execution time varies, the number of clock cycles varies too. I don't know whether this is correct or whether I am doing something wrong. Please suggest an answer.
What you are doing is wrong. Your line of questioning indicates you do not understand how processors and clock cycles work, even at a rudimentary level.
There is no way to measure processor speed by timing how long it takes to execute a java program.
You have several errors in your assumptions:
First, the value returned by System.nanoTime() has nanosecond precision, but not necessarily nanosecond resolution. It is possible that the value is only updated once per millisecond (this is implementation dependent).
Next, you are measuring wall-clock time. The time elapsed depends highly on your system load; you never know which other processes are also taking processing time. Therefore you cannot derive any execution speed directly from wall-clock time.
Third, you assume that a given piece of Java code always corresponds to the same number of processor instructions. This is wrong. HotSpot, for example, can reorder your instructions, or compile your code in the meantime and re-optimize the compiled code several times.
I have developed an image processing algorithm in core Java (without using any third-party API). Now I have to calculate the execution time of that algorithm. For this I have used System.currentTimeMillis(), like this:
public class MyAlgo {

    public MyAlgo(String imagePath) {
        long stTime = System.currentTimeMillis();
        // ..........................
        // My Algorithm
        // ..........................
        long endTime = System.currentTimeMillis();
        System.out.println("Time ==> " + (endTime - stTime));
    }

    public static void main(String args[]) {
        new MyAlgo("d:\\myImage.bmp");
    }
}
But the problem is that each time I run this program I get a different execution time. Can anyone please suggest how I can deal with this?
If you don't want to use external profiling libraries, just wrap your algorithm in a for loop that executes it 1000 times and divide the total time by 1000 (see the sketch after these notes). The result will be much more accurate, since the influence of other tasks/processes will even out.
Note: the overall measured time will reflect the expected time for the algorithm to finish, including runtime overheads, not just the cost of the algorithm's own instructions.
For example, if your algorithm uses a lot of memory and on average the Java VM calls the garbage collector twice per execution, then the measured time should also include the time spent in the garbage collector.
That averaging is exactly what the for loop gives you, so you will get good results.
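A minimal sketch of that idea (runAlgorithm() is a placeholder for the algorithm under test):

public class AverageTimer {
    public static void main(String[] args) {
        final int runs = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            runAlgorithm();
        }
        long total = System.nanoTime() - start;
        System.out.println("Average per run: " + (total / runs) + " ns");
    }

    private static void runAlgorithm() {
        // the code being measured goes here
    }
}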
You cannot get a reliable result from a single execution; Java (well, the JVM) performs runtime optimizations, plus there are other processes competing for CPU time and resource access. Also, are you sure your algorithm runs in constant time whatever the input?
Your best bet for a measurement as reliable as possible is to use a library dedicated to performance measurement; one of them is Caliper.
Set up a benchmark with different inputs/outputs etc. and run it.
You need to apply some statistical analysis over multiple executions of your algorithm. For instance, execute it 1000 times and analyze the min, max, and average times.
Running it in different scenarios might provide insights too, for instance on different hardware or with images of different resolutions.
I suppose your algorithm can be divided into multiple steps. You can monitor the steps independently to understand the impact of each one.
Marvin Image Processing Framework, for instance, provides methods to monitor and analyze the time and the number of executions of each algorithm step.
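For the min/max/average analysis suggested above, the JDK's own LongSummaryStatistics (Java 8+) is enough. A sketch, with runAlgorithm() again standing in for the algorithm under test:

import java.util.LongSummaryStatistics;

public class TimingStats {
    public static void main(String[] args) {
        LongSummaryStatistics stats = new LongSummaryStatistics();
        for (int i = 0; i < 1000; i++) {
            long start = System.nanoTime();
            runAlgorithm();
            stats.accept(System.nanoTime() - start);
        }
        System.out.printf("min=%d ns, max=%d ns, avg=%.0f ns%n",
                stats.getMin(), stats.getMax(), stats.getAverage());
    }

    private static void runAlgorithm() {
        // the code being measured goes here
    }
}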
I want to optimize a method so it runs in less time. I was using System.currentTimeMillis() to calculate the time it lasted.
However, I just read the System.currentTimeMillis() Javadoc and it says this:
This method shouldn't be used for measuring timeouts or other elapsed time measurements, as changing the system time can affect the results.
So, if I shouldn't use it to measure the elapsed time, how should I measure it?
Android's native Traceview will help you measure the time and will also give you more information.
Using it is as simple as:
// start tracing to "/sdcard/calc.trace"
Debug.startMethodTracing("calc");
// ...
// stop tracing
Debug.stopMethodTracing();
There is a post with more information on the Android Developers Blog.
Also take Rajesh J Advani's post into account.
There are a few issues with System.currentTimeMillis():
1) If you are not in control of the system clock, you may be reading the elapsed time wrong.
2) For server code or other long-running Java programs, your code is likely to be called over a few thousand iterations. By that point, the JVM will have optimized the bytecode to the extent that the time taken is actually a lot less than what you measured during your testing.
3) It doesn't take into account the fact that there might be other processes on your computer, or other threads in the JVM, competing for CPU time.
You can still use the method, but you need to keep the above points in mind. As others have mentioned, a profiler is a much better way of measuring system performance.
Welcome to the world of benchmarking.
As others point out, techniques based on timing methods like currentTimeMillis will only tell you elapsed wall-clock time, rather than how long the method spent on the CPU.
I'm not aware of a way in Java to isolate the timing of a method to how long it spent on the CPU, so the answer is one of the following:
1) If the method is long-running (or you run it many times, while following sensible benchmarking rules), use something like the "time" tool on Linux (http://linux.die.net/man/1/time), which will tell you how long the app spent on the CPU (obviously you have to subtract the overhead of application startup etc.).
2) Use a profiler, as others pointed out. This has its own dangers, such as adding too much overhead when tracing techniques are used; and if it uses stack sampling, it won't be 100% accurate.
3) I am not sure how feasible this is on Android, but you could run your benchmark on a quiet multicore system and isolate a core (or ideally a whole socket) so that it can only run your application.
You can use System.nanoTime(), as documented here:
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
As the document says
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Hope this will help.
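The usual pattern, immune to system-clock changes (doSomething() is a placeholder for the operation being timed):

long start = System.nanoTime();
doSomething();
long elapsedNanos = System.nanoTime() - start;
System.out.println("Elapsed: " + (elapsedNanos / 1_000_000) + " ms");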
SystemClock.elapsedRealtime()
Quoting the linked page: elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, and include deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so it is the recommended basis for general purpose interval timing.
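On Android, the equivalent start/stop pattern would be the following sketch (doSomething() is a placeholder; the values are milliseconds):

long start = android.os.SystemClock.elapsedRealtime();
doSomething();
long elapsedMillis = android.os.SystemClock.elapsedRealtime() - start;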
I want to do some timing tests on a Java application. This is what I am currently doing:
long startTime = System.currentTimeMillis();
doSomething();
long finishTime = System.currentTimeMillis();
System.out.println("That took: " + (finishTime - startTime) + " ms");
Is there anything "wrong" with performance testing like this? What is a better way?
Duplicate: Is stopwatch benchmarking acceptable?
The one flaw in that approach is that the "real" time doSomething() takes to execute can vary wildly depending on what other programs are running on the system and what its load is. This makes the performance measurement somewhat imprecise.
A more accurate way of tracking the time it takes to execute code, assuming the code is single-threaded, is to look at the CPU time consumed by the thread during the call. You can do this with the JMX classes; in particular, with ThreadMXBean. You can retrieve an instance of ThreadMXBean from java.lang.management.ManagementFactory, and, if your platform supports it (most do), use the getCurrentThreadCpuTime method in place of System.currentTimeMillis to do a similar test. Bear in mind that getCurrentThreadCpuTime reports time in nanoseconds, not milliseconds.
Here's a sample (Scala) method that could be used to perform a measurement:
def measureCpuTime(f: => Unit): java.time.Duration = {
  import java.lang.management.ManagementFactory.getThreadMXBean
  if (!getThreadMXBean.isThreadCpuTimeSupported)
    throw new UnsupportedOperationException(
      "JVM does not support measuring thread CPU-time")
  var finalCpuTime: Option[Long] = None
  val thread = new Thread {
    override def run(): Unit = {
      f
      finalCpuTime = Some(getThreadMXBean.getThreadCpuTime(
        Thread.currentThread.getId))
    }
  }
  thread.start()
  while (finalCpuTime.isEmpty && thread.isAlive) {
    Thread.sleep(100)
  }
  java.time.Duration.ofNanos(finalCpuTime.getOrElse {
    throw new Exception("Operation never returned, and the thread is dead " +
      "(perhaps an unhandled exception occurred)")
  })
}
(Feel free to translate the above to Java!)
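Taking up that invitation, a rough Java equivalent might be the following sketch (it uses join() instead of the polling loop and an AtomicLong instead of the Option):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;

public final class CpuTimeMeasurement {

    // Runs the task on a fresh thread and returns the CPU time that thread consumed.
    public static Duration measureCpuTime(Runnable task) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isThreadCpuTimeSupported()) {
            throw new UnsupportedOperationException(
                    "JVM does not support measuring thread CPU-time");
        }
        AtomicLong cpuNanos = new AtomicLong(-1);
        Thread thread = new Thread(() -> {
            task.run();
            cpuNanos.set(bean.getThreadCpuTime(Thread.currentThread().getId()));
        });
        thread.start();
        thread.join(); // wait for the task to finish
        if (cpuNanos.get() < 0) {
            throw new IllegalStateException("Operation never returned its CPU time "
                    + "(perhaps an unhandled exception occurred)");
        }
        return Duration.ofNanos(cpuNanos.get());
    }
}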
This strategy isn't perfect, but it's less subject to variations in system load.
The code shown in the question is not good performance-measuring code:
The compiler might choose to optimize your code by reordering statements. Yes, it can do that. That means your entire test might fail. It can even choose to inline the method under test and reorder the measuring statements into the now-inlined code.
HotSpot might choose to reorder your statements, inline code, cache results, delay execution...
Even assuming the compiler/HotSpot didn't trick you, what you measure is "wall time". What you should be measuring is CPU time (unless your code uses OS resources and you want to include these as well, or you are measuring lock contention in a multi-threaded environment).
The solution? Use a real profiler. There are plenty around, both free profilers and demos / time-locked trials of commercial-strength ones.
Using a Java profiler is the best option; it will give you all the insight you need into the code: response times, thread call traces, memory utilisation, etc.
I suggest JENSOR, an open-source Java profiler, for its ease of use and low CPU overhead. You can download it, instrument your code, and get all the information you need about it.
You can download it from: http://jensor.sourceforge.net/
Keep in mind that the resolution of System.currentTimeMillis() varies between different operating systems. I believe Windows is around 15 msec. So if your doSomething() runs faster than the time resolution, you'll get a delta of 0. You could run doSomething() in a loop multiple times, but then the JVM may optimize it.
Have you looked at the profiling tools in NetBeans and Eclipse? These tools give you a better handle on what is REALLY taking up all the time in your code. I have found problems using these tools that I had not realized were there.
Well, that is just one part of performance testing. Depending on what you are testing, you may have to look at heap size, thread count, network traffic, or a whole host of other things. Otherwise I use that technique for simple things that I just want to know how long they take to run.
That's good when you are comparing one implementation to another or trying to find a slow part in your code (although it can be tedious). It's a really good technique to know and you'll probably use it more than any other, but be familiar with a profiling tool as well.
I'd imagine you'd also want to call doSomething() before you start timing, so that the code is JITted and "warmed up".
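Something along these lines, reusing the code from the question (the warm-up iteration count is arbitrary):

// warm-up: give the JIT a chance to compile and optimize doSomething()
for (int i = 0; i < 10_000; i++) {
    doSomething();
}
long startTime = System.currentTimeMillis();
doSomething();
long finishTime = System.currentTimeMillis();
System.out.println("That took: " + (finishTime - startTime) + " ms");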
Japex may be useful to you, either as a way to quickly create benchmarks, or as a way to study benchmarking issues in Java through the source code.