Imagine you have a command-line application that takes an input file and does something with it. Now imagine you want to sample/profile this application. In Visual Studio you would just select the profiling method (sampling/instrumentation) and VS would run the application for you, collecting data until the program completes. But as far as I can see there is no similar functionality in VisualVM: you have to run your application, then select it in VisualVM and then explicitly start sampling/profiling.

The problem is that sometimes running the program on certain input data takes less time than it takes to set up VisualVM. Also, with such an approach there is no way to batch-profile the application.

Someone suggested starting the application in debug mode from Eclipse with a breakpoint near the beginning of the main() method, then setting up VisualVM and continuing execution. But I suspect that running in Debug vs. Release mode has performance implications of its own.
Suggestions?
There is a new Startup Profiler plugin for VisualVM 1.3.6, which allows you to profile your application from its startup. See this article for additional information.
If the program does I/O, the Visual Studio sampler will not see the I/O because it is a "CPU Sampler" (even if nearly all of the time is spent waiting for I/O).
If you use Instrumentation, you won't see any line-level information because it only summarizes at the function level.
I use this technique.
If the program runs too quickly to sample, just put a temporary outer loop around it of, say, 100 or 1000 iterations.
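As a rough sketch of that temporary loop (runOnce() here is just a stand-in for whatever your main() currently does; remove the loop once you are done profiling):

// Hypothetical sketch: repeat the real work so the sampler has time to collect data.
public static void main(String[] args) throws Exception {
    for (int i = 0; i < 1000; i++) {   // 100 or 1000 iterations, as needed
        runOnce(args);                 // stand-in for the existing main logic
    }
}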
The difference between Debug and Release mode will be next to nothing unless you are spending a good fraction of time in tight loops, in your code, where the loops do not contain any function calls, OR if you are doing data structure operations that do a lot of validation in the libraries.
If you are, then your samples will show that you are, and you will know that Release will make a speed difference.
As far as batch profiling is concerned, I don't. I just keep an eye on the program's overall throughput rate. If there is some input that seems to make it take too long, then I do the sampling procedure on the program with that input, see what the problem is, and fix it.
I am developing a Java 8 SE application in NetBeans. A new feature I added recently was running too slowly (about a minute until the calculations finished). So I fired up the profiler to see where the major bottleneck was. To my surprise, the calculations completed in about 7 seconds.
Couldn't believe it at first, but the results were correct.
Tried it a few times again, but the app always ran 10 times faster with the profiler attached to it. I also tried to run the compiled .jar file directly from the Windows command line, but the computations took about a minute again and again.
How is it possible that attaching the profiler gives such a massive performance boost? What changes does it make to the JVM or the application?
BTW, I am using native OpenCV in these calculations, with the provided Java wrapper, if that makes any difference.
//Edit - Additional info: I am using the built-in Netbeans 8.1 profiler, which I believe is basically VisualVM. As for a profiling method I chose to monitor "Methods" and their execution times and invocation counts. The performance bump happens both with instrumented and sampled profiling.
Unfortunately there probably isn't one single answer that will explain why this is the case. Of course, it will depend on what the program is doing as well as how the program is being launched. For example, if you're using the profiler to launch the application (as opposed to connecting afterwards) then it may be that the profiler is launching with different configuration (heap size, garbage collector etc.) and that is the cause of the difference.
If you run jcmd you should see a list of Java processes. You can then run jcmd <id> VM.flags to see what flags the JVM has been configured with, and verify that they are the same when the application runs under the profiler and when it doesn't.
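For example, assuming the application's process id turns out to be 12345 (yours will differ):

jcmd
jcmd 12345 VM.flags

Running this once for the normal launch and once for the profiled launch, and comparing the two outputs, will show any configuration differences.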
Another possibility is that your program is excessively locking, and this excessive locking is causing thrashing in your application when the profiler isn't attached. With it attached the locking may be slower, resulting in the application threads co-operating and ultimately making faster progress.
However, these are just suggestions for how to investigate further; it's quite likely that the problem you're seeing is something else entirely that hasn't been uncovered yet (e.g. the application defaulting to a different logging level...).
I need to optimise a Java application. It makes some 3rd party calls. I need some good tool to accurately measure the time taken by individual API calls.
To give an idea of the complexity: the application takes a data source file containing 1 million rows and takes around an hour to complete the processing. As part of the processing it makes some 3rd party calls (including some network calls). I need to identify which calls take more time than others and, based on that, find a way to optimise the application.
Any suggestions would be appreciated.
I can recommend JVisualVM. It's a great monitoring / profiling tool that is bundled with the Oracle/Sun JDK. Just fire it up, connect to your application and start the CPU-profiling. You should get great histograms over where the time is spent.
Getting Started with VisualVM has a great screen-cast showing you how to work with it.
Another more rudimentary alternative is to go with the -Xprof command line option:
-Xprof
Profiles the running program, and sends profiling data to
standard output. This option is provided as a utility that is
useful in program development and is not intended to be
used in production systems.
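For example (MyMain stands in for your own main class; the option exists in Sun/Oracle Java 6 but was removed in later Java versions), the invocation would look roughly like this, with the profiling data written to standard output:

java -Xprof MyMain input.txt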
I've used YourKit a few times and was quite happy with it. However, I've never profiled a long-running operation.
Is the processing the same for each row? In which case the size of the input file doesn't really matter. You could profile a subset to figure out which calls are expensive.
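As a rough sketch of what profiling a subset could look like (the file handling and processRow() are placeholders for whatever the application actually does), you could cap the number of rows during the profiling run:

import java.io.BufferedReader;
import java.io.FileReader;

public class SubsetRun {
    public static void main(String[] args) throws Exception {
        int limit = 10_000;                            // profile only the first 10k rows
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            int count = 0;
            while ((line = in.readLine()) != null && count++ < limit) {
                processRow(line);                      // stand-in for the existing per-row logic
            }
        }
    }

    static void processRow(String row) {
        // existing processing, including the 3rd party / network calls
    }
}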
Just wanted to mention the inspectIT tool. It recently became completely open source (https://github.com/inspectIT/inspectIT). It provides a complete and detailed call graph with contextual information, and there are many out-of-the-box sensors for database calls, HTTP monitoring, exceptions, etc.
Seems perfect for your use case.
Try OPNET's Panorama software product
It sounds like a normal profiler might not be the right tool in this case, since they're geared towards measuring the CPU time taken by the program being profiled rather than external APIs that it calls, and they tend to incur a high overhead of their own and collect a large amount of data that would probably overwhelm your system if left running for a long time.
If you really need to collect performance data over such a long time, and mainly for external calls, then Perf4J is probably a better tool.
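A minimal Perf4J sketch (the tag name is arbitrary and callThirdPartyApi() is a placeholder for the real external call) looks roughly like this:

import org.perf4j.StopWatch;
import org.perf4j.LoggingStopWatch;

// Wrap each external call in a stop watch; Perf4J can then aggregate the
// logged timings into per-tag statistics over a long run.
StopWatch watch = new LoggingStopWatch("thirdParty.lookup");
try {
    callThirdPartyApi();   // placeholder for the real external / network call
} finally {
    watch.stop();          // logs the elapsed time under the "thirdParty.lookup" tag
}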
In our office we use the YourKit profiler on a day-to-day basis. It's really lightweight and covers most of the performance-related use cases we have had.
But I have also used Visual VM. It's free and fast. You may first want to give Visual VM a try before going towards YourKit (YourKit is not freeware).
VisualVM (part of the JDK) together with Java 7 can produce detailed profiling.
I use profiler in NetBeans (it is really brilliant and already built in, no need to install plugin) or JVisualVM when not using NetBeans.
I have started to use VisualVM to profile my application, which I launch from Eclipse. I then start VisualVM, which initially gives believable results.
After some time two processes appear in the monitor which consume huge amounts of time.
I have not deliberately invoked these. After a time they disappear. Are they an artefact of the profiling process and do I need to worry?
Very few of my routines appear in the profile, mainly the libraries they call. Is there a way of showing which routines call the most heavily used ones?
It is better to start with CPU sampling, if you don't know which part of the code is slow. Once you know better (based on the sampling results) what is going on, you can profile just part of your application, which is slow. You need to set profiling roots and instrumentation filter and don't forget to take the snapshot of collected results. See Profiling With VisualVM, Part 1 and Profiling With VisualVM, Part 2 to get more information about profiling and how to set profiling roots and instrumentation filter.
VisualVM uses Java to perform its work. This means you will see some artefacts which relate to the RMI calls it makes. You can ignore them.
I use YourKit which doesn't do this, but it's not free ;)
VisualVM will track all methods being called by the java program it's monitoring, so either your program or one of its libraries is calling those methods. VisualVM is also connecting to it so there might be some small artifacts.
As for searching, probably the easiest way is to filter by your own packages. There is a box at the bottom where you can enter them, so you can see which of your own methods are really taking time. You should also pay attention to which thread you are in; usually you will want to look at whatever your "main" thread is. The other threads are interesting but won't always give you the best idea of how your program behaves.
Is there any Java profiler that allows profiling short-lived applications? The profilers I have found so far seem to work only with applications that keep running until the user terminates them. However, I want to profile applications that work like command-line utilities: they run and exit immediately. Tools like VisualVM or the NetBeans Profiler do not even recognize that the application was run.
I am looking for something similar to Python's cProfile, in that the profiler result is returned when the application exits.
You can profile your application using the JVM's built-in HPROF agent.
It provides two methods:
sampling the active methods on the stack
timing method executions using injected bytecode (BCI, byte code injection)
Sampling
This method reveals how often methods were found on top of the stack.
java -agentlib:hprof=cpu=samples,file=profile.txt ...
Timing
This method counts the actual invocations of a method. The instrumenting code has been injected by the JVM beforehand.
java -agentlib:hprof=cpu=times,file=profile.txt ...
Note: this method will slow down the execution time drastically.
For both methods, the default filename is java.hprof.txt if the file= option is not present.
Full help can be obtained using java -agentlib:hprof=help or can be found in Oracle's documentation.
Sun Java 6 has the java -Xprof switch that'll give you some profiling data.
-Xprof output cpu profiling data
A program running for 30 seconds is not short-lived. What you want is a profiler that can start your program instead of you having to attach to a running process. I believe most profilers can do that, but you would most likely prefer one integrated into an IDE. Have a look at NetBeans.
Profiling short-running Java applications has a couple of technical difficulties:
Profiling tools typically work by sampling the processor's SP or PC register periodically to see where the application is currently executing. If your application is short-lived, insufficient samples may be taken to get an accurate picture.
You can address this by modifying the application to run a number of times in a loop, as suggested by #Mike. You'll have problems if your app calls System.exit(), but the main problem is ...
The performance characteristics of a short-lived Java application are likely to be distorted by JVM warm-up effects. A lot of time will be spent in loading the classes required by your app. Then your code (and library code) will be interpreted for a bit, until the JIT compiler has figured out what needs to be compiled to native code. Finally, the JIT compiler will spend time doing its work.
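To get a feel for how large the warm-up component is, a rough sketch (doWork() is a placeholder for the app's real work) is to time the same work repeatedly inside one JVM:

// The first iterations include class loading, interpretation and JIT compilation;
// later iterations show the "warm" steady-state cost.
for (int i = 0; i < 5; i++) {
    long start = System.nanoTime();
    doWork();                                          // placeholder for the real work
    System.out.printf("run %d: %.1f ms%n", i, (System.nanoTime() - start) / 1e6);
}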
I don't know if profilers attempt to compensate for JVM warm-up effects. But even if they do, these effects influence your application's real behaviour, and there is not a great deal that the application developer can do to mitigate them.
Returning to my previous point ... if you run a short-lived app in a loop you are actually doing something that modifies its normal execution pattern and removes the JVM warm-up component. So when you optimize the method that takes (say) 50% of the execution time in the modified app, that is really 50% of the time excluding JVM warm-up. If JVM warm-up is using (say) 80% of the execution time when the app is executed normally, you are actually optimizing 50% of 20% ... and that is not worth the effort.
If it doesn't take long enough, just wrap a loop around it, an infinite loop if you like. That will have no effect on the inclusive time percentages spent either in functions or in lines of code. Then, given that it's taking plenty of time, I just rely on this technique. That tells which lines of code, whether they are function calls or not, are costing the highest percentage of time and would therefore gain the most if they could be avoided.
Start your application with profiling turned on, waiting for the profiler to attach. Any profiler that conforms to the Java profiling architecture should work; I've tried this with the NetBeans profiler.
Basically, when your application starts, it waits for a profiler to be attached before executing. So technically even the very first line of code can be profiled.
With this approach you can profile all kinds of things: threads, memory, CPU, method/class invocation times and durations...
http://profiler.netbeans.org/
The SD Java Profiler can capture statement block execution-count data no matter how short your run is. Relative execution counts will tell you where the time is spent.
You can use a measurement (metering) recording: http://www.jinspired.com/site/case-study-scala-compiler-part-9
You can also inspect the resulting snapshots: http://www.jinspired.com/site/case-study-scala-compiler-part-10
Disclaimer: I am the architect of JXInsight/OpenCore.
I suggest you try YourKit. It can profile from the start and dump the results when the program finishes. You have to pay for it, but you can get an eval license or use the EAP version without one (time-limited).
YourKit can take a snapshot of a profiling session, which can later be analyzed in the YourKit GUI. I use this feature to profile a short-lived command-line application I work on. See my answer to this question for details.
I am writing a simple checkers game in Java. When I mouse over the board my processor ramps up to 50% (100% on a core).
I would like to find out what part of my code (assuming it's my fault) is executing during this.
I have tried debugging, but step-through debugging doesn't work very well in this case.
Is there any tool that can tell me where my problem lies? I am currently using Eclipse.
This is called "profiling". Your IDE probably comes with one: see Open Source Profilers in Java.
Use a profiler (e.g. YourKit).
Profiling? I don't know what IDE you are using, but Eclipse has a decent profiler, and there is also a list of some open-source profilers at java-source.
In a nutshell, profilers will tell you which parts of your program are being called how often.
I don't profile my programs much, so I don't have too much experience, but I have played around with the NetBeans IDE profiler when I was testing it out. (I usually use Eclipse as well. I will also look into the profiling features in Eclipse.)
The NetBeans profiler will tell you which thread was executing for how long, and which methods were called and how much time they took, and will give you bar graphs showing how much time each method consumed. This should give you a hint as to which method is causing problems. You can take a look at the Java profiler that the NetBeans IDE provides, if you are curious.
Profiling is a technique usually used to measure which parts of a program take up the most execution time, which in turn can be used to evaluate whether performing optimizations would be beneficial to the program's performance.
Good luck!
1) It is your fault :)
2) If you're using eclipse or netbeans, try using the profiling features -- it should pretty quickly tell you where your code is spending a lot of time.
3) failing that, add console output where you think the inner loop is -- you should be able to find it quickly.
Yes, there are such tools: you have to profile the code. You can either try TPTP in eclipse or perhaps try JProfiler. That will let you see what is being called and how often.
Use a profiler. There are many. Here is a list: http://java-source.net/open-source/profilers.
For example, you can use JIP, a profiler written in Java.
Clover will give a nice report showing hit counts for each line and branch. For example, this line was executed 7 times.
Plugins for Eclipse, Maven, Ant and IDEA are available. It is free for open source, or you can get a 30 day evaluation license.
If you're using Sun Java 6, then the most recent JDK releases come with JVisualVM in the bin directory. This is a capable monitoring and profiling tool that will require very little effort to use - you don't even need to start your program with special parameters - JVisualVM simply lists all the currently running java processes and you choose the one you want to play with.
This tool will tell you which methods are using all the processor time.
There are plenty of more powerful tools out there, but have a play with a free one first. Then, when you read about what other features are available, you'll have an inkling of how they might help you.
This is a typical 'high CPU' problem.
There are two kinds of high CPU problems:
a) Where one thread is using 100% of one core (this is your scenario)
b) Where CPU usage is 'abnormally high' when we execute certain actions. In such cases the CPU may not be at 100%, but it will be abnormally high. Typically this happens when the code contains CPU-intensive operations like XML parsing, serialization/de-serialization, etc.
Case (a) is easy to analyze. When you experience 100% CPU, take 5-6 thread dumps at 30-second intervals. Look for a thread which is active (in "runnable" state) and which stays inside the same method (you can infer that by comparing the thread stacks). Most probably you will see a 'busy wait' (see the code below for an example):
while (true) {
    if (status) break;
    // Thread.sleep(60000); // such a statement would have avoided the busy wait
}
Case (b) can also be analyzed using thread dumps taken at equal intervals. If you are lucky you will be able to find the problem code; if you cannot identify it from the thread dumps, you need to resort to a profiler. In my experience the YourKit profiler is very good.
I always try thread dumps first; profilers are only a last resort. In 80% of cases the problem can be identified from thread dumps alone.
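The dumps themselves are easy to take with the JDK's own tools, for example (12345 being the application's process id):

jstack 12345 > dump1.txt

repeated every 30 seconds or so; kill -3 <pid> on Unix or Ctrl+Break in a Windows console will also print a dump to the process's standard output.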
Or use JUnit test cases and a code coverage tool for some common components of yours. If there are components that call other components, you'll quickly see those executed many more times.
I use Clover with JUnit test cases, but for open-source, I hear EMMA is pretty good.
In single-threaded code, I find adding some statements like this:
System.out.println("A: "+ System.currentTimeMillis());
is simpler and as effective as using a profiler. You can soon narrow down the part of the code causing the problem.
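For instance, bracketing a suspected region (updateBoard() is just an illustrative placeholder) quickly shows how long it takes:

long t0 = System.currentTimeMillis();
updateBoard();   // the code you suspect is hot
System.out.println("updateBoard: " + (System.currentTimeMillis() - t0) + " ms");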