How to profile a Java program run from start to finish?

I have a Java program that runs for only about 20-30 seconds and I want to profile it.
Firing up the jvisualvm profiler manually at each start is not reliable, because you lose several seconds operating the UI.
Is there a way to profile the entire execution, like the old -Xrunhprof:cpu=samples which no longer works?

With Flight Recorder you can add to the command line
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
-XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,settings=profile
https://docs.oracle.com/javacomponents/jmc-5-5/jfr-runtime-guide/run.htm#JFRRT176
JFR only records memory allocations that occur on a new TLAB, so I make the TLAB much smaller than the default to get more samples.
-XX:TLABSize=128k
You need to run jmc from the $JAVA_HOME/bin directory to open the .jfr file created.
I find this harder to use than VisualVM, but it produces much more detailed results with less noise, i.e. you get far fewer allocations caused by the profiler itself.
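Putting those flags together, a complete invocation might look something like this (the jar name and dump file are illustrative; dumponexitpath names the output file):

java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=myapp.jfr,settings=profile \
     -XX:TLABSize=128k \
     -jar myapp.jar

Because the recording starts with the JVM and is dumped on exit, the whole 20-30 second run is captured without touching any UI.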

Related

Why does my Java app run faster with profiler attached?

I am developing a Java 8 SE application in Netbeans. A new feature I added recently was running too slowly (about a minute until the calculations stopped). So I fired up the profiler to see what the major bottleneck was. To my surprise, the calculations completed in about 7 seconds.
Couldn't believe it at first, but the results were correct.
I tried it a few more times, but the app always ran 10 times faster with the profiler attached. I also tried running the compiled .jar file directly from the Windows command line, but the computations took about a minute, again and again.
How is it possible, that the attached profiler provides such a massive boost to the performance? What changes does it do to the JVM or application?
BTW, I am using native OpenCV in these calculations with provided Java wrapper, if it makes any difference.
//Edit - Additional info: I am using the built-in Netbeans 8.1 profiler, which I believe is basically VisualVM. As for a profiling method I chose to monitor "Methods" and their execution times and invocation counts. The performance bump happens both with instrumented and sampled profiling.
Unfortunately there probably isn't one single answer that will explain why this is the case. Of course, it will depend on what the program is doing as well as how the program is being launched. For example, if you're using the profiler to launch the application (as opposed to connecting afterwards) then it may be that the profiler is launching with different configuration (heap size, garbage collector etc.) and that is the cause of the difference.
If you run jcmd you should see a list of processes. You can then run jcmd <id> VM.flags to see what the JVM has been configured with, and verify that the flags are the same when the application runs under the profiler and when it doesn't.
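For instance (the pid 12345 is illustrative):

jcmd
jcmd 12345 VM.flags

The first command lists running Java processes with their pids; the second prints the flags that particular JVM was started with, so you can diff the two environments.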
Another possibility is that your program is excessively locking, and this excessive locking is causing thrashing in your application when the profiler isn't attached. With it attached the locking may be slower, resulting in the application threads co-operating and ultimately making faster progress.
However, these are just suggestions for how you can investigate further; it's quite possible that the problem you're seeing is something completely different and as yet undiscovered (e.g. the app defaulting to a different level of logging ...)

Profiling application with VisualVM

Imagine you have a command-line application that takes an input file and does something with it. Now imagine you want to sample/profile this application. If it were Visual Studio, you would just select the profiling method (sampling/instrumentation) and VS would run the application for you, collecting data until the program completes. But as far as I can see there is no similar functionality in VisualVM: you have to run your application, then select it in VisualVM, and then explicitly start sampling/profiling.

The problem is that sometimes executing the program with certain input data takes less time than is required to set up VisualVM. With this approach there is also no way to batch-profile the application.

Someone suggested starting the application in debug mode from Eclipse and setting a breakpoint near the beginning of the main() method, then setting up VisualVM and continuing execution. But I suspect that running in Debug vs Release mode has performance implications of its own.
Suggestions?
There is a new Startup Profiler plugin for VisualVM 1.3.6, which allows you to profile your application from its startup. See this article for additional information.
If the program does I/O, the Visual Studio sampler will not see the I/O because it is a "CPU Sampler" (even if nearly all of the time is spent waiting for I/O).
If you use Instrumentation, you won't see any line-level information because it only summarizes at the function level.
I use this technique.
If the program runs too quickly to sample, just put a temporary outer loop around it of, say, 100 or 1000 iterations.
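For example, a throwaway driver along these lines:

public class ProfileDriver {
    public static void main(String[] args) throws Exception {
        // Temporary outer loop so the sampler has time to attach and
        // collect enough samples; remove it when you're done profiling.
        for (int i = 0; i < 1000; i++) {
            MyApp.run(args);  // MyApp.run is a hypothetical stand-in for your real entry point
        }
    }
}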
The difference between Debug and Release mode will be next to nothing unless you are spending a good fraction of time in tight loops in your own code where the loops contain no function calls, or you are doing data-structure operations that do a lot of validation in the libraries.
If you are, then your samples will show that you are, and you will know that Release will make a speed difference.
As far as batch profiling is concerned, I don't. I just keep an eye on the program's overall throughput rate. If there is some input that seems to make it take too long, then I do the sampling procedure on the program with that input, see what the problem is, and fix it.

Java application memory usage

I have been writing a small Java application (my first!) that does only a few things at the moment. Currently, it runs the Main class, which launches a gui class (a class I wrote that extends JFrame and only contains a JTextArea), a class that loads a local file of approximately 40kb through a BufferedInputStream, and a class that loads an entry from a Java properties file.
Everything works wonderfully; however, I was watching the Windows task manager and I noticed something that struck me as odd. When I launch the application, the RAM usage jumps to about 40MB while it loads the local file and pulls a few values from it to display in the JTextArea, which seems normal to me because of the JVM, Java base classes, etc. At this point, however, when the application has finished loading the file, it merely sits idle, as I currently don't have it doing anything else. While it is sitting idle, as long as the window is active, the application's memory usage starts climbing by 10-20kb every second. This strikes me as odd. If I click on another program to make this one the inactive window, the memory still rises, but at a much slower rate (about 10kb every 3-5 seconds).
I have not tested to see how far it would go up, but this strikes me as very odd behavior. Is this normal Java behavior? I guess it is possible that my code could be leaking memory, but I'm not sure how. I did make sure to close the BufferedInputStream I am using, and I can't see what else would cause this.
I'm sorry if my explanation doesn't make sense, but I would appreciate any insight and/or pointers anyone may have.
UPDATE:
Upon suggestion, I basically stripped my application down to the Main class, which simply calls the gui class. The gui class only extends JFrame and sets the window size, close operation, and visible properties. With these changes, the memory still grows at 10-20kb, but at a slower rate. This, in conjunction with other advice I have received, leads me to believe that this is just Java. I will continue to play with it and let you all know if I find out anything else interesting.
Try monitoring the heap usage with jconsole instead of the Windows task manager:
1. Launch your app with the -Dcom.sun.management.jmxremote option, e.g.
java -Dcom.sun.management.jmxremote -jar myapp.jar
2. Launch jconsole from the command line, and connect to the local pid of the Java process you started in the last step.
3. Click over to Memory and watch heap memory (the default display).
If you watch for a while, you'll probably get a "sawtooth" pattern as the memory climbs over time, but then has sharp drop-offs when the garbage collector runs. You can try to "suggest" garbage collection by clicking the so-labelled button.
When you do this, does the memory usage drop down to the same minimum level, or is the overall minimum increasing over the course of several minutes? If the minimum usage increases, then you have a memory leak. If it always returns to the same minimum level, then you're fine.
Congrats on your first app! Now, a couple of things to think about. First, the Windows task manager is not a great resource for understanding how quickly your VM is growing. Instead, you should monitor your garbage collection stats in the console (use the -verbose:gc command-line param). Second, if you are concerned about potential leaks and the growth of the VM, there are a bunch of great profilers out there that are easy to use and can help you diagnose memory issues. Check out these two posts for some profiler options.
Congratulations for your first Java app!
Java applications run in a virtual machine. The virtual machine is assigned a maximum amount of heap memory at startup. As long as the application uses less than that limit, the garbage collector may not be in any hurry to kick in and search for "dead" memory blocks. The JVM memory limit can be changed with the -Xmx option; try lowering the limit to 32 MB, for example.
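For example (the jar name is illustrative):

java -Xmx32m -jar myapp.jar

With a small cap like this, the garbage collector has to run more often, so any real leak shows up quickly as an OutOfMemoryError instead of slow, ambiguous growth in the task manager.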
Is this normal Java behavior?
No.
I guess it is possible that my code could be leaking memory
That is definitely the cause. Please post your source code, otherwise further diagnosis isn't possible.
I noticed you are using Swing; make sure you are launching your JFrame on the event dispatch thread, using the invokeLater(Runnable) method (see the sketch after these tips).
If your are using any sort of collections, make sure you clear them once done.
Since you are doing some file IO, make sure you close all of the streams involved in the IO operations after you are done with them.
If you are using any event listeners, remember to explicitly remove event listeners when they are no longer necessary.
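For the Swing point above, a minimal sketch of constructing the frame on the event dispatch thread (the title and size are illustrative):

import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class Main {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                // All Swing components are created and shown on the EDT.
                JFrame frame = new JFrame("My App");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        });
    }
}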
One thing you could try is experimenting. Take your application and remove the file IO; see what happens. Does the memory usage still climb as before? Now restore your application to normal and remove the text area instead: does the memory still climb as before? Etc., etc. This will help you to determine what the source is, and you can focus your efforts there. Most likely you will uncover what you are after by doing this.
Another useful diagnosis tool is to use System.gc() at particular points in time, usually after the heavy-lifting blocks of code. This will tell the JVM to perform a garbage collection at that point in the execution, rather than at another time determined by memory consumption. This will help you to take into account any periodic fluctuations in the memory usage of your application.
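For instance, a small helper you could call after each heavy-lifting block (the class name is illustrative):

public final class HeapLogger {
    public static void log(String label) {
        System.gc();  // a request only; the JVM may ignore it
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        System.out.println(label + ": " + usedKb + " KB in use");
    }
}

Calling HeapLogger.log("after file load") before and after the suspect code gives you two post-GC baselines, which makes real growth easy to spot.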
Failing which, you can always use a memory profiler. If you are using Netbeans IDE, there's one built right into it. For Eclipse, there're several plugins which can perform profiling.
It is normal. Some background calculation might leave dead objects around, which the JVM isn't in a hurry to clean up; eventually they will be garbage collected, when max memory is approached.
Leave your program running overnight, and your machine won't blow up.

How to detect Out Of Memory condition?

I have an application running on WebSphere Application Server 6.0 and it crashes nearly every day because of an Out-Of-Memory error. From the verbose GC output it is certain that there are memory leaks (many of them).
Unfortunately the application is provided by external vendor and getting things fixed is slow & painful process. As part of the process I need to gather the logs and heapdumps each time the OOM occurs.
Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script that periodically searches for new heap dumps. This approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it.
Or does WAS have some kind of trigger/hook for this? Thank you very much for any advice!
You can pass the following arguments to the JVM on startup and a heap dump will be automatically generated on an OutOfMemoryError. The second argument lets you specify the path for the heap dump file. By using this at least you could check for the existence of a specific file to see if a heap dump has occurred.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<value>
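For example (the jar name and dump directory are illustrative):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar

Your monitoring script then only needs to watch /tmp/dumps for a new .hprof file to know an OOM has occurred.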
I see two options if you want heap dumping automated but @Mark's solution with a heap dump on OOM isn't satisfactory.
You can use the MemoryMXBean to detect high memory pressure, and then programmatically create a heap dump if the usage (or usage delta) seems high (see the sketch below).
You can periodically get memory usage info and generate heap dumps with a cron'd shell script using jmap (works both locally and remotely).
It would be nice if you could have a callback on OOM, but, uhm, that callback probably would just crash with an OOM error. :)
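A minimal sketch of the first option, assuming Java 5+ and that you supply the actual dump step yourself (e.g. shelling out to jmap -dump:format=b,file=heap.hprof <pid>):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class LowMemoryWatcher {
    // Runs dumpAction when any heap pool crosses 90% of its maximum usage.
    public static void install(final Runnable dumpAction) {
        // The platform MemoryMXBean can be cast to a NotificationEmitter.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                        .equals(n.getType())) {
                    dumpAction.run();  // e.g. trigger jmap here
                }
            }
        }, null, null);
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.9));  // 90% of the pool
                }
            }
        }
    }
}

Note that this runs inside the monitored JVM, so keep the dump action itself lightweight.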
Have you looked at JConsole? It uses JMX to give you visibility into a variety of JVM metrics, including memory info. It would probably be worth monitoring your application with this to begin with, to get a feel for how and when the memory is consumed. You may find the memory is consumed uniformly over the day, or only when using certain features.
Take a look at the detecting low memory section of the above link.
If you need to, you can then write a JMX client to watch the application automatically and trigger whatever actions are required. JConsole will indicate which JMX methods you need to poll.
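A rough sketch of such a client (it requires the target JVM to be started with remote JMX enabled; the host and port in the URL are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapPoller {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Proxy the remote JVM's Memory MXBean over the connection.
            MemoryMXBean mem = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            long used = mem.getHeapMemoryUsage().getUsed();
            System.out.println("Heap used: " + used / (1024 * 1024) + " MB");
            // Compare against a threshold here and trigger dumps/alerts as needed.
        } finally {
            connector.close();
        }
    }
}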
An alternative to waiting until the application has crashed may be to script a controlled restart, say every night, if you're optimistic that it can survive for twelve hours.
Maybe WebSphere can even do that for you!?
You could add a listener class (a session-scoped or application-scoped attribute listener) that is called each time a new object is added to session/app scope.
In it, you can check the total memory used by the app (and log it) and also request a GC run (note that invoking it does not guarantee a GC will actually run).
(The above covers the logging part and GC based on usage growth.)
For scheduled GC:
In addition, you can keep a timer task class that runs every few hours and requests a GC (see the sketch below).
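A rough sketch of that timer-task idea (the class name and interval are illustrative, and System.gc() is only a request):

import java.util.Timer;
import java.util.TimerTask;

public class GcLogTimer {
    public static void start() {
        Timer timer = new Timer(true);  // daemon thread, won't block shutdown
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                Runtime rt = Runtime.getRuntime();
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("Heap used: " + usedMb + " MB");
                System.gc();  // a hint only; the JVM may ignore it
            }
        }, 0, 4 * 60 * 60 * 1000L);  // every four hours
    }
}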
Our experience with ITCAM has been less than stellar from the monitoring perspective. We dumped it in favor of CA Wily Introscope.
Have you had a look at the jvisualvm tool in the latest Java 6 JDKs?
It is great for inspecting running code.
I'd dispute that you need the heap dumps at the moment the OOM occurs. Periodic gathering of the information over time should give a picture of what's going on.
As has been observed, various tools exist for analysing these problems. I have had success with ITCAM for WebSphere; as an IBMer I have ready access to that. We were very quickly able to identify the exact lines of code in our problem situation.
If there's any way you can get a tool of that nature then that's the way to go.
It should be possible to write a simple program to get the process list from the kernel and scan it to see if your WAS process is still running. On a Unix box you could probably whip up something in Perl in a few minutes (if you know Perl), not sure how difficult it would be under Windows. Run it as a scheduled task every five minutes or so, and if the process doesn't show up you could have it fork off another process that would deal with the heap dump and re-start WAS.

How to gather profiling information for a Java 1.4 application?

A Java application I support that runs on JRE 1.4.2_12 is hanging near midnight every night. I'd like to try and record as much profiling information as I can to discover if there is an issue in the JVM or external to the app.
I'd like to use HPROF to collect as much information as possible.
Is there a way to have HPROF dump its cpu sample and memory allocation report every minute instead of at the termination of the JVM?
Is there a different, more appropriate profiler that can collect information like this?
Rather than relying on dump files, I would try hooking up a profiler to the VM and leaving it attached until the hang occurs. Then use the profiler to introspect the state of the threads.
The use of Java 1.4 is a minor issue here, since 1.4's debug interface is not great, but some profilers still support it. I can particularly recommend YourKit, which is commercial but offers an evaluation licence. It's the best profiler I've used, by some margin.
First things first: did you analyze the thread dump when your application hangs? A lot of the time that has enough information to troubleshoot a hanging java app...
Ctrl-Break in the process window on Windows, or kill -QUIT [pid] on Linux.
I would first try to determine if it's actually your app or something else.
Are there any other apps on the box? If so, do they run any batch jobs around midnight? Your app could be suffering from a lack of resources due to other things running on the box or chewing up bandwidth.
Was this always the case, or did it start recently? If this is new, look at what changed on the box as a whole, not just your own app.
