Java Performance Testing [duplicate]

This question already has answers here:
Is stopwatch benchmarking acceptable?
(13 answers)
Closed 7 years ago.
I want to do some timing tests on a Java application. This is what I am currently doing:
long startTime = System.currentTimeMillis();
doSomething();
long finishTime = System.currentTimeMillis();
System.out.println("That took: " + (finishTime - startTime) + " ms");
Is there anything "wrong" with performance testing like this? What is a better way?

The one flaw in that approach is that the "real" time doSomething() takes to execute can vary wildly depending on what other programs are running on the system and what its load is. This makes the performance measurement somewhat imprecise.
A more accurate way of tracking the time it takes to execute code, assuming the code is single-threaded, is to look at the CPU time consumed by the thread during the call. You can do this with the JMX classes; in particular, with ThreadMXBean. You can retrieve an instance of ThreadMXBean from java.lang.management.ManagementFactory, and, if your platform supports it (most do), use the getCurrentThreadCpuTime method in place of System.currentTimeMillis to do a similar test. Bear in mind that getCurrentThreadCpuTime reports time in nanoseconds, not milliseconds.
Here's a sample (Scala) method that could be used to perform a measurement:
def measureCpuTime(f: => Unit): java.time.Duration = {
  import java.lang.management.ManagementFactory.getThreadMXBean
  if (!getThreadMXBean.isThreadCpuTimeSupported)
    throw new UnsupportedOperationException(
      "JVM does not support measuring thread CPU-time")
  var finalCpuTime: Option[Long] = None
  val thread = new Thread {
    override def run(): Unit = {
      f
      finalCpuTime = Some(getThreadMXBean.getThreadCpuTime(
        Thread.currentThread.getId))
    }
  }
  thread.start()
  while (finalCpuTime.isEmpty && thread.isAlive) {
    Thread.sleep(100)
  }
  java.time.Duration.ofNanos(finalCpuTime.getOrElse {
    throw new Exception("Operation never returned, and the thread is dead " +
      "(perhaps an unhandled exception occurred)")
  })
}
(Feel free to translate the above to Java!)
This strategy isn't perfect, but it's less subject to variations in system load.
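Since the answer invites a Java translation, here is a rough equivalent sketch. The CpuTimer class name is mine, and it waits for the worker thread with join() instead of polling:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;

public final class CpuTimer {

    // Runs the task in a fresh thread and returns the CPU time that
    // thread consumed, mirroring the Scala sketch above.
    public static Duration measureCpuTime(Runnable task) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            throw new UnsupportedOperationException(
                    "JVM does not support measuring thread CPU-time");
        }
        AtomicLong cpuNanos = new AtomicLong(-1);
        Thread thread = new Thread(() -> {
            task.run();
            cpuNanos.set(mx.getThreadCpuTime(Thread.currentThread().getId()));
        });
        thread.start();
        thread.join(); // wait for completion instead of polling
        long nanos = cpuNanos.get();
        if (nanos < 0) {
            throw new IllegalStateException(
                    "Task never reported its CPU time (perhaps it threw an exception)");
        }
        return Duration.ofNanos(nanos);
    }
}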

The code shown in the question is not good performance-measuring code:
The compiler might choose to optimize your code by reordering statements. Yes, it can do that. That means your entire test might fail. It can even choose to inline the method under test and reorder the measuring statements into the now-inlined code.
The hotspot might choose to reorder your statements, inline code, cache results, delay execution...
Even assuming the compiler/hotspot didn't trick you, what you measure is "wall time". What you should be measuring is CPU time (unless you use OS resources and want to include these as well, or you are measuring lock contention in a multi-threaded environment).
The solution? Use a real profiler. There are plenty around, both free profilers and demos / time-locked trials of commercial-strength ones.

Using a Java profiler is the best option, and it will give you all the insight you need into the code: response times, thread call traces, memory utilisation, and so on.
I suggest JENSOR, an open-source Java profiler, for its ease of use and low CPU overhead. You can download it, instrument your code, and get all the info you need about it.
You can download it from: http://jensor.sourceforge.net/

Keep in mind that the resolution of System.currentTimeMillis() varies between different operating systems. I believe Windows is around 15 msec. So if your doSomething() runs faster than the time resolution, you'll get a delta of 0. You could run doSomething() in a loop multiple times, but then the JVM may optimize it.
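For illustration, here is a minimal sketch of the loop-and-average idea; doSomething() is the method from the question, and the JIT caveat just mentioned still applies:

// Time many iterations and divide, so the ~15 ms clock resolution is
// amortized. Beware: the JIT may still optimize or even eliminate the
// loop body if it has no observable effect.
int iterations = 10_000;
long start = System.nanoTime();
for (int i = 0; i < iterations; i++) {
    doSomething();
}
long elapsed = System.nanoTime() - start;
System.out.println("Average: " + (elapsed / iterations) + " ns per call");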

Have you looked at the profiling tools in NetBeans and Eclipse? These tools give you a better handle on what is REALLY taking up all the time in your code. I have found problems that I did not realize existed by using these tools.

Well, that is just one part of performance testing. Depending on the thing you are testing, you may have to look at heap size, thread count, network traffic, or a whole host of other things. Otherwise, I use that technique for simple things that I just want to see how long they take to run.

That's good when you are comparing one implementation to another or trying to find a slow part in your code (although it can be tedious). It's a really good technique to know and you'll probably use it more than any other, but be familiar with a profiling tool as well.

I'd imagine you'd want to call doSomething() before you start timing too, so that the code is JIT-compiled and "warmed up".
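Something like this sketch, say (the warm-up iteration count is an arbitrary assumption):

// Run the method a few thousand times first so the JIT has a chance
// to compile and optimize it, then take the actual measurement.
for (int i = 0; i < 10_000; i++) {
    doSomething(); // warm-up; results discarded
}
long startTime = System.currentTimeMillis();
doSomething();
System.out.println("That took: " + (System.currentTimeMillis() - startTime) + " ms");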

Japex may be useful to you, either as a way to quickly create benchmarks, or as a way to study benchmarking issues in Java through the source code.

Related

Measuring method time

I want to optimize a method so it runs in less time. I was using System.currentTimeMillis() to measure how long it took.
However, I just read the System.currentTimeMillis() Javadoc and it says this:
This method shouldn't be used for measuring timeouts or other elapsed
time measurements, as changing the system time can affect the results.
So, if I shouldn't use it to measure the elapsed time, how should I measure it?
Android's native Traceview will help you measure the time and will also give you more information.
Using it is as simple as:
// start tracing to "/sdcard/calc.trace"
Debug.startMethodTracing("calc");
// ...
// stop tracing
Debug.stopMethodTracing();
There is a post with more information on the Android Developers Blog.
Also take @Rajesh J Advani's post into account.
There are a few issues with System.currentTimeMillis().
If you are not in control of the system clock, you may be reading the elapsed time wrong.
For server code or other long-running Java programs, your code is likely going to be called over a few thousand iterations. By the end of this time, the JVM will have optimized the bytecode to the extent that the time taken is actually much less than what you measured as part of your testing.
It doesn't take into account the fact that there might be other processes on your computer or other threads in the JVM that compete for CPU time.
You can still use the method, but you need to keep the above points in mind. As others have mentioned, a profiler is a much better way of measuring system performance.
Welcome to the world of benchmarking.
As others point out - techniques based on timing methods like currentTimeMillis will only tell you elapsed time, rather than how long the method spent in the CPU.
I'm not aware of a way in Java to isolate timings of a method to how long it spent on the CPU - the answer is to either:
1) If the method is long-running (or you run it many times, following sound benchmarking practice), use something like the "time" tool on Linux (http://linux.die.net/man/1/time), which will tell you how long the app spent on the CPU (obviously you have to subtract the overhead of application startup, etc.).
2) Use a profiler, as others pointed out. This has dangers: tracing techniques can add too much overhead, and if the profiler uses stack sampling, it won't be 100% accurate.
3) I'm not sure how feasible this is on Android, but you could run your benchmark on a quiet multicore system and isolate a core (or ideally a whole socket) to run only your application.
You can use System.nanoTime(), as documented here:
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
As the documentation says:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
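For instance, a minimal sketch (doSomething() stands in for the code under test):

// System.nanoTime() is monotonic, so it is unaffected by changes to
// the system clock, unlike System.currentTimeMillis().
long start = System.nanoTime();
doSomething();
long elapsedNanos = System.nanoTime() - start;
System.out.println("Elapsed: " + (elapsedNanos / 1_000_000) + " ms");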
Hope this helps.
SystemClock.elapsedRealtime()
Quoting the linked page: elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, and include deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so it is the recommended basis for general purpose interval timing.
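On Android, a minimal sketch might look like this (doSomething() is a placeholder):

import android.os.SystemClock;

// elapsedRealtime() is monotonic and keeps ticking through deep sleep,
// which makes it a safe basis for interval timing on Android.
long start = SystemClock.elapsedRealtime();
doSomething();
long elapsedMs = SystemClock.elapsedRealtime() - start;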

Java - Measuring Method Execution Time

I am trying to measure the complexity of an algorithm using a timer to measure the execution time, whilst changing the size of the input array.
The code I have at the moment is rather simple:
private long start;

public void start() {
    start = System.nanoTime();
}

public long stop() {
    long time = System.nanoTime() - start;
    start = 0;
    return time;
}
It appears to work fine, up until the size of the array becomes very large, and what I expect to be an O(n) complexity algorithm turns out appearing to be O(n^2). I believe that this is due to the threading on the CPU, with other processes cutting in for more time during the runs with larger values for n.
Basically, I want to measure how much time my process has been running for, rather than how long it has been since I invoked the algorithm. Is there an easy way to do this in Java?
Measuring execution time is a really interesting, but also complicated topic. To do it right in Java, you have to know a little bit about how the JVM works. Here is a good article from developerWorks about benchmarking and measuring. Read it, it will help you a lot.
The author also provides a small framework for doing benchmarks. You can use this framework. It will give you exactly what you need: the consumed CPU time, instead of just two timestamps from before and after. The framework will also handle the JVM warm-up and will keep track of just-in-time compilation.
You can also use a performance monitor like this one for Eclipse. The problem with such a performance monitor is that it doesn't perform a benchmark. It just tracks the time, memory, and other resources your application currently uses. But that's not a real measurement - it's just a snapshot at a specific time.
Benchmarking in Java is a hard problem, not least because the JIT can have weird effects as your method gets more and more heavily optimized. Consider using a purpose-built tool like Caliper. Examples of how to use it and to measure performance on different input sizes are here.
If you want the actual CPU time of the current thread (or indeed, any arbitrary thread) rather than the wall clock time then you can get this via ThreadMXBean. Basically, do this at the start:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean thx = ManagementFactory.getThreadMXBean();
thx.setThreadCpuTimeEnabled(true);
Then, whenever you want to get the elapsed CPU time for the current thread:
long cpuTime = thx.getCurrentThreadCpuTime();
You'll see that ThreadMXBean has calls to get CPU time and other info for arbitrary threads too.
Other comments about the complexities of timing also apply. The timing of the individual invocation of a piece of code can depend among other things on the state of the CPU and on what the JIT compiler decides to do at that particular moment. The overall scalability behaviour of an algorithm is generally a trend that emerges across a number of invocations and you will always need to be prepared for some "outliers" in your timings.
Also, remember that just because a particular timing is expressed in nanoseconds (or indeed milliseconds) does not mean that the timing actually has that granularity.

Benchmarking inside Java code

I have been looking into benchmarking lately; I have always been interested in logging program data, etc. I was interested in knowing whether we can efficiently implement our own memory-usage and time-consumption measuring code inside our program. I know how to check how long a piece of code takes to run:
public static void main(String[] args) {
    long start = System.currentTimeMillis();
    // code
    System.out.println(System.currentTimeMillis() - start);
}
I also looked into Robust Java benchmarking, Part 1: Issues; this tutorial is very comprehensive and shows the downsides of System.currentTimeMillis(). The tutorial then suggests that we use System.nanoTime() (making it more accurate?).
I also looked at Determining Memory Usage in Java for memory usage. The website shows how you can implement it. The code that has been provided looks inefficient because the person is calling
long L = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
After this he calls System.gc() (4 * 4) = 16 times, then repeats the whole process.
Doesn't this also take up memory?
So, in conclusion: is it possible to implement efficient benchmarking code inside your Java program?
Yes, it is possible to effectively implement performance benchmarks in Java code. The important question is how much of the benchmark's own overhead you are willing to accept, since any kind of performance benchmark adds some. System.currentTimeMillis() is a good enough basis for performance measurement, and in most cases nanoTime() is overkill.
For memory, System.gc() will show you varied results across runs (a GC run is never guaranteed). I generally use VisualVM for memory profiling (it's free) and then use TDA for analyzing thread dumps.
One way to do it less invasively is to use aspect-oriented programming. You can create just one aspect that runs on a particular annotation or set of methods and write an @Around advice to collect performance data.
Here is a small snippet:
public class TestAspect {

    @LogPerformance
    public void thisMethodNeedsToBeMonitored() {
        // Do Something
    }

    public void thisMethodNeedsToBeMonitoredToo() {
        // Do Something
    }
}

@interface LogPerformance {}

@Aspect
class PerformanceAspect {

    @Around("the pointcut expression to pick up all " +
            "the @LogPerformance annotated methods")
    public void logPerformance() {
        // log performance here
        // Log it to a file
    }
}
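To make that concrete, here is one hedged way the advice body might look with AspectJ; the pointcut string, package name, and class names are assumptions, not part of the original answer:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
class TimingAspect {

    // Assumes LogPerformance lives in a package named "app"; adjust the
    // pointcut to the annotation's actual fully qualified name. For
    // runtime matching the annotation also needs RUNTIME retention.
    @Around("@annotation(app.LogPerformance)")
    public Object logPerformance(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed(); // run the intercepted method
        } finally {
            long elapsed = System.nanoTime() - start;
            System.out.println(pjp.getSignature() + " took " + elapsed + " ns");
        }
    }
}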
It may be impossible to benchmark without some Heisenberg effect, i.e. your benchmarking code also being measured. However, if you measure at a high enough granularity, the effect will be negligible.
Any benchmarking code is going to be less efficient than non-benchmarked code simply because it has more to do. That said, Java in particular causes issues, as the article states, due to garbage collection happening whenever the JRE feels like it. Even the documentation for System.gc says it makes a "best effort".
As for your specific questions:
System.gc shouldn't take up more memory, but it will take processor resources.
It is somewhat possible based on what you're trying to benchmark. There will always be some interference. If you are willing to go outside of your code, there are tools like VisualVM to watch memory usage from outside of your application.
Edit: Corrected the wording of what System.gc docs say.

Java reduce CPU usage

Greets!
We've got a few nutters at work who enjoy using
while(true) { //Code }
in their code. As you can imagine, this maxes out the CPU. Does anyone know of ways to reduce the CPU utilization so that other people can use the server as well?
The code itself is just constantly polling the internet for updates on sites. Therefore I'd imagine a little sleep would greatly reduce the CPU usage.
Also, all manipulation is being done on String objects (Java); does anyone know how much StringBuilder would reduce the overhead by?
Thanks for any pointers
A lot of the "folk wisdom" about StringBuilder is incorrect. For example, changing this:
String s = s1 + ":" + s2 + ":" + s3;
to this:
StringBuilder sb = new StringBuilder(s1);
sb.append(":");
sb.append(s2);
sb.append(":");
sb.append(s3);
String s = sb.toString();
probably won't make it go any faster. This is because the Java compiler actually translates the concatenation sequence into an equivalent sequence of appends to a temporary StringBuilder. Unless you are concatenating Strings in a loop, you are better off just using the + operator. Your code will be easier to read.
The other point that should be made is that you should use a profiler to identify the places in your code that would benefit from work to improve performance. Most developers' intuition about what is worth optimizing is not that reliable.
I'll start off with your second question. I agree with the rest that StringBuilder vs. String is very much dependent on the particular string manipulations. I "benchmarked" this once, and generally speaking, as the number of new string allocations went up (usually in the form of concatenations), the overall execution time went up. I won't go into the details; I'll just say that StringBuilder turned out to be the most efficient over time, compared to String, StringBuffer, String.format(), MessageFormat...
My rule of thumb is that whenever I wish to concatenate more than 3 strings together, I use StringBuilder.
As for your first question: we had a requirement to bring CPU usage down to 5%. Not an easy task. We used Spring's AOP mechanism to add a Thread.sleep() before any execution of a CPU-intensive method. The Thread.sleep() would get invoked only if some limit had been exceeded. I am sorry to say that the computation of this limit is not that simple, and sorrier still that I have not obtained permission to post it on the net. So this is just to put you on an interesting but complicated track that has proven to work over time.
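Stripped of Spring, the general shape might be something like the following sketch; limitExceeded() is an explicitly hypothetical placeholder for the unpublished limit computation:

// Hypothetical throttle: back off before each CPU-intensive call when
// some usage limit has been exceeded. limitExceeded() is a placeholder
// for the computation the author could not share.
void throttledRun(Runnable cpuIntensiveTask) throws InterruptedException {
    if (limitExceeded()) {
        Thread.sleep(50); // yield CPU time to other processes
    }
    cpuIntensiveTask.run();
}

private boolean limitExceeded() {
    return false; // replace with a real CPU-usage check
}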
How often do those sites update? You're probably really annoying the hosts. Just stick a Thread.sleep(60 * 1000); at the end of the loop and you can avoid this. That'll poll them once a minute—surely that's enough?
Make it wait some time before firing again, like this:
while (true) {
    // Code
    try {
        Thread.sleep(1000); // wait 1 second
    } catch (InterruptedException e) {
        break; // stop polling if interrupted
    }
}
As for the second question, it would reduce memory and possibly CPU usage as well, but the gains really depend on what's happening with those strings.
A sleep would reduce the CPU usage. As for the StringBuilders they could lower the memory usage and improve performance.
You could also try
Thread.yield()
If their code is in a separate JVM and is running on Linux or some other Unix, require them to run their program at nice 20.
Or you could have them run inside a virtual machine like VirtualBox and use it to limit the processor utilization.
Or you could fire them if they continue to burn cycles like that instead of using an event-driven model or Thread.sleep.
Never assume something you see in the code is bad (or good) for performance. Performance issues are notoriously deceptive.
Here's an easy way to see for sure what is taking the time.
What sites is the code polling? Do they support an RSS push? If they do, then there is no need to poll the sites; instead, plug in a module that waits for updates. The current method then waits using wait(), and the new module notifies the waiting object using notifyAll().
Admittedly this is some extra work, but it saves a whole lot of bandwidth and computation.
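A minimal sketch of that wait()/notifyAll() hand-off (the UpdateMonitor class and its method names are illustrative):

// The polling thread blocks in awaitUpdate() instead of spinning;
// whichever module learns of new data calls signalUpdate().
class UpdateMonitor {
    private boolean updated = false;

    synchronized void awaitUpdate() throws InterruptedException {
        while (!updated) {
            wait(); // releases the lock; burns no CPU while waiting
        }
        updated = false;
    }

    synchronized void signalUpdate() {
        updated = true;
        notifyAll(); // wake every thread blocked in awaitUpdate()
    }
}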

How can I measure the execution time of a for loop?

I want to measure the execution time of for loops on various platforms like PHP, C, Python, Java, JavaScript... How can I measure it?
I know these platforms, so I am talking about these:
for (i = 0; i < 1000000; i++)
{
}
I don't want to measure anything within the loop.
A small modification:
@all: Some friends of mine are saying the compiler will optimize this code and make the loop useless. I agree with this. We could add a small statement, such as an increment, but the fact is I just want to calculate the execution time per iteration of a loop in various languages. Adding an increment statement would add to the execution time and affect the results, because the execution time for incrementing a value also differs across platforms, which would make the results useless.
In short, a better way to ask:
I want to calculate the execution time per iteration of a loop on various platforms. How can I do this?
Edit:
I came to know about Python profilers, modules which evaluate CPU time and absolute time. Any suggestions? Meanwhile, I am working on this...
Although an answer has been given for C++, it looks from your description ("[You] don't want to measure anything within the loop") like you're trying to measure the time it takes a program to iterate over an empty loop.
Please take care here: not only will it take varying times from different platforms and processors, but many compilers will optimise away such loops, effectively rendering the answer as "0" for any loop size.
Note that it also depends on what exactly you want to achieve: do you care about the time your program waits due to it being preempted by the system scheduler? All the solutions above take actual time elapsed into consideration, but that also involves the time when other processes run instead of your own.
If you don't care about this, all of the solutions above are good. If you do care, you probably need some profiling software to actually see how long the loop takes.
I'd start with a program that does nothing but your loop and (in a Linux environment at least) run time you-prg-executable.
Then I'd investigate whether there are tools that work like time. I'm not sure, but I'd look at JRat for Java, and gcc's gcov for C and C++. No doubt there are similar tools for the other languages. But, of course, you need to see whether they give actual time or not.
JavaScript:
var start = new Date();
for (var i = 0; i < 1000000; i++) {}
var time = new Date() - start;
The right way to do it in python is to run timeit from the command line:
$ python -m timeit "for i in xrange(100): pass"
100000 loops, best of 3: 2.5 usec per loop
Another version in PHP that doesn't require anything extra:
$start = microtime(true);
for (...) {
....
}
$end = microtime(true);
echo ($end - $start).' seconds';
For compiled languages such as C and C++, make sure that your compiler flags are set such that the loop isn't optimized away. With optimization switched on I would expect most compilers to detect that nothing is going on in the loop and optimize it away.
If you're using Python, you can use a module specifically built for timing things. It's called Timeit.
Here are a couple of references I found (just Googled it):
Dive Into Python: Using the Timeit Module
Python Documentation: Timeit Module
And here's some example code to get you started quickly:
import timeit
t = timeit.Timer("for i in range(100): pass", "")
# Timeit will run the statement 1,000,000 times by default, and return the time it took for all the runs together (it doesn't try to average them out or anything).
t.timeit()
2.9035916423318398 # This is the result. Don't forget (like I did in an earlier edit) that this is the result of running the code 1,000,000 times!
In PHP (using a code timer class):
$timer = new timer();
$timer->start();
for ($i = 0; $i < 1000000; $i++) {
}
$timer->stop();
echo $timer->getTime();
I asked the same question a while back, specifically for the C++ language.
Here's the answer I ended up using:
#include <omp.h>

// Starting the time measurement
double start = omp_get_wtime();

// Computations to be measured
...

// Measuring the elapsed time
double end = omp_get_wtime();

// Time calculation (in seconds)
double elapsed = end - start;
The result won't make sense for an empty loop, since most compilers will optimize it away at compile time, before the runtime.
To compare languages speed, you will need a real algorithm, like "Merge Sort", "Binary Search" or maybe "Dijkstra" if you want something complicated.
Implement the same algorithm in all languages then compare.
Here is a benchmark on a bio-informatics algorithm; check the Results page.
To the point: just get the current time before doing something (this is the start time) and get the current time after doing something (this is the end time) and then just do the primary school math to get the elapsed time. Every API provides ways to get the current time. In Java for example it's System.currentTimeMillis() and System.nanoTime().
But: especially in Java, the elapsed time isn't always that reliable. There can be micro-differences, and it also depends a lot on how you do the tests. I've seen circumstances where test2() is faster than test1() because it is executed a bit later, and where it becomes slower when you rearrange the execution to test2() and then test1().
Last but not least, micro-optimization is the root of all evil.
For Java, both Apache Commons Lang and the Spring Framework have StopWatch classes (see the Javadoc for Apache's here) that you can use to measure execution time. Under the covers, though, it is just subtracting System.currentTimeMillis() values, so it doesn't save you that much code to use this utility.
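Usage is straightforward; a short sketch, assuming Commons Lang 3 on the classpath and a doSomething() placeholder:

import org.apache.commons.lang3.time.StopWatch;

StopWatch watch = new StopWatch();
watch.start();
doSomething(); // the code under test
watch.stop();
System.out.println("That took: " + watch.getTime() + " ms");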
