I have a Java application that is crashing in production. It doesn't do so in dev/QA. The JVM is creating a .mdmp file and a text file. How do I analyze the binary dump file? I googled but had no luck. We are using the BEA JRockit JVM 1.5 R27.
The .mdmp file is a Windows minidump file that you can only read with a debugger (like WinDbg). Typically you need the sources of the crashed application to really get information out of the dump, so in your case you can't do much beyond contacting JRockit support.
Here is a link to the Oracle JRockit documentation about JVM crashes.
.mdmp files are the Windows equivalent of Unix/Linux core dumps. You can analyse them with WinDbg, but if it's a Java process that has crashed, most likely you'll want to use Java's own tools to analyse the crashed process.
If you want to look at the heap of the crashed Java process, you can use a tool that ships with the JDK called jmap to extract an HPROF file from a .core or .mdmp file and then load this into a memory analyser. Note also that some memory analysers can load core dumps and Windows minidumps directly.
Related issue and the jmap docs
If you want to see the state of the threads then you can use a tool called jstack to print stack traces for every thread at the point the dump was created. jstack docs.
Related
I've got a java application that half the time just hangs, and the other half the JVM crashes. Is there a tool I can use to see what's going on that makes it hang and/or crash?
I'm using CentOS 5.6
For starters I would suggest JVisualVM. It comes with the JDK, so you should just need to type jvisualvm into the command line to start it.
Once it starts, you can connect to a running JVM, so you should be able to connect to your hung Java process and inspect the stack dump for all its running threads as well as the contents of the heap.
Other useful built-in tools include:
jps lists process ids of running java processes
jstack prints a stack dump for each thread in the specified JVM process
jmap generates a heap dump for the specified JVM process (jvisualvm can also generate heap dumps)
jhat analyzes heap dumps generated with jmap or jvisualvm
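Much of what jstack reports can also be captured from inside a running JVM through the standard management API, which can help when attaching an external tool to a hung process is not possible. A minimal sketch (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints a stack trace for every live thread, similar to jstack's output.
public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true = also report locked monitors and ownable synchronizers,
        // which is what you need when hunting a deadlock
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info.toString());
        }
    }
}
```

Calling this from a watchdog thread (or exposing it over an admin endpoint) gives you a self-service thread dump without shelling out to jstack.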
Of course, there are also more sophisticated profilers available; JProfiler is quite highly regarded.
There are two different cases.
Application crash:
Was it an OOM? An NPE? What was the exception? If there was a JVM crash you will see an hs_err_pid<pid>.log file (http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf).
Looking at the file you may see whether the crash was caused by your own JNI code or by a JVM bug.
Application hang: I would start with VisualVM or jstack (both are part of the JDK). You can see the current state of the threads and check whether there is an application error.
Other linux tools that could help to see inside process:
lsof : you can check if the process opened too many files
strace: see current activity from system call point of view.
The Oracle troubleshooting tools documentation provides a pretty neat listing. It also links to operating-system-specific tools.
In these cases (hang, freeze, ...) you have to analyze a heap dump to try to figure out what's happening in your application. You can use JVisualVM to take the dump, or you can add the appropriate JVM parameter (-XX:+HeapDumpOnOutOfMemoryError) to dump the contents of the heap when the process dies with an OutOfMemoryError.
I performed a heap dump manually by invoking the com.sun.management.HotSpotDiagnostic MXBean's dumpHeap operation in jconsole. So I got a dump file.
My question:
Can jconsole read the dump file? If not, which tool can read it? Thanks!
EDIT: Now I know jconsole doesn't provide a read feature; I am wondering why jconsole only writes dump files without being able to read them. (This is not my question, I am just curious about it.)
I found the Eclipse Memory Analyzer plugin to read the dump file myself. Other tools are still welcome.
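For reference, the dumpHeap operation mentioned above can also be invoked programmatically rather than through jconsole; here is a minimal sketch using the platform MXBean proxy (the output file name is illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

// Triggers the same HotSpotDiagnostic.dumpHeap operation that jconsole
// exposes, but from inside the application itself.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // second argument: true = dump only live (reachable) objects
        mxBean.dumpHeap("manual-dump.hprof", true);
        System.out.println("Wrote " + new File("manual-dump.hprof").length() + " bytes");
    }
}
```

Note that dumpHeap refuses to overwrite an existing file, so generate a fresh name (e.g. with a timestamp) if you call it repeatedly.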
You can use jvisualvm, which comes with JDK 6 update 7 and above. It's present in the bin folder of the JDK. This is a very good tool that can even be used to profile running Java applications.
You can also use JProfiler to read heap dump files, but it is commercial software.
Is there a tool to analyze a large Java heap dump (2 GB) if one can only assign 1.5 GB to the JVM? I can't believe the dump must be fully loaded into memory to be analyzed...
Eclipse Memory Analyzer fails, and the IBM tool does also.
Do I need to use command line tools here now?
If it's a dev server, restrict the max heap size to something a 32-bit OS can handle. If it's in production, demand a 64-bit OS! If you can't get that, you can run jhat on the server (it has a web interface you can access from your own PC).
One solution is to install the MAT tool on the remote server and generate an HTML output of the analysis to download and view locally. This saves the headache of attempting to get X Windows installed on the remote machine and get all of the ssh tunneling sorted out (which is of course an option as well).
First, download and install the stand-alone Eclipse RCP application, then transfer it to your server and unpack it. Then determine how large the heap dump is and, if necessary, modify the MemoryAnalyzer.ini file to instantiate a JVM with enough RAM for your heap dump.
In this example, I have an 11 GB heap dump and have modified the last two lines (raising -Xmx and adding -Xms):
-startup
plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.300.v20150602-1417
-vmargs
-Xmx16g
-Xms16g
Do an initial run to parse the heap dump. This will generate intermediary data that can be used by subsequent runs to make future analysis faster.
./ParseHeapDump.sh /path/to/heap-dump
After that completes, you can run any of a number of different analyses on the data. The following illustrates how to search for memory leak suspects.
./ParseHeapDump.sh /path/to/heap-dump org.eclipse.mat.api:suspects
Unfortunately, Eclipse MAT and other heap dump analysis tools load the entire heap dump into memory in order to do the analysis. If Eclipse MAT fails for you, you may try the HeapHero tool. JHat takes a lot more memory and time than Eclipse MAT to analyze heap dumps.
So far I have learned about generating a thread dump and a heap dump using jstack and jmap respectively.
However, the jstack thread dump contains only text describing the stack of each thread, and opening the heap dump (.hprof file) with Java VisualVM only shows the objects allocated in the heap.
What I actually want is to be able to see the stack, switch to a particular stack frame, and watch local variables. This kind of post-mortem debugging can normally be done with tools like WinDbg or gdb and a core file (for a native C++ program).
I wonder if such 'core' file (which will allow me to debug in non-live environment) exists in Java?
Java does. If you are using an IBM VM, use com.ibm.jvm.Dump.SystemDump() to programmatically generate a dump, which can then be loaded into a debugger. I believe "kill"-ing your Java process should generate a system dump too: on Unix use kill -4 pid, where pid is the process id and can be found by typing top | grep java if you have one VM instance running.
You could also add -Xdump:system or -Xdump:heap etc. to your java command line to filter events and generate dumps on certain events like VM stop (-Xdump:system:events=vmstop), full garbage collections (-Xdump:system:events=fullgc), etc. Note that, depending on your heap size, generating a dump on every full GC may not be a good idea (you might create 50 dumps within 20 seconds if your heap grows from 4 MB to around 60 MB in that time), so you can add a range like -Xdump:system:events=fullgc,range=50..55, which would generate dumps only from the 50th to the 55th full garbage collection.
I've found relevant information in a Sun forum and in an SO discussion: I have not had much luck with it, but it might work in your case.
Note: some of the tools mentioned are Java tools, but are unsupported and are not available on Windows versions of the JDK.
I don't think such a dump mechanism exists in standard Java.
Some operating systems support using the normal native debugger on dump files (for example mdb on Solaris or gdb on Linux), with some special support for showing Java stack frames. But this is pretty hardcore and probably not what you want, since it is not well integrated with the Java debugger.
I have a java process which is acting dubiously. I'd like to see what's up using the various HPROF analysis tools.
How do I generate one on the fly?
Yes. You can generate an HPROF file (containing heap memory usage) on the fly using the jmap tool, which ships with Sun's Java VM:
jmap -dump:format=b,file=<file_name> <pid>
You have to start the Java process with the HPROF agent enabled (for example -agentlib:hprof=heap=dump,format=b; the exact arguments vary a little depending on the JVM version). Then, send a QUIT signal to the process to generate a new file.
The output is normally generated when the VM exits, although this can be disabled by setting the “dump on exit” option to “n” (doe=n). In addition, a profile is generated when Ctrl-\ or Ctrl-Break (depending on platform) is pressed. On Solaris OS and Linux a profile is also generated when a QUIT signal is received (kill -QUIT pid). If Ctrl-\ or Ctrl-Break is pressed multiple times, the profiles are appended to the same file.
VisualVM can help you dig into what your process is doing, including the ability to arbitrarily force a heap dump on a running process.
jconsole now has the ability to create a heap dump, written to the app's current working directory:
Connect to your JMX-enabled instance
Navigate to com.sun.management --> HotSpotDiagnostic --> Operations
Fill in p0 with a name for the heap dump file
Press the heapDump button
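The jconsole steps above drive a single JMX operation; here is a minimal sketch invoking it directly against the in-process MBean server (the file name is illustrative, and p0 in jconsole corresponds to the first element of the arguments array):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Invokes the same HotSpotDiagnostic.dumpHeap operation that the
// jconsole UI exposes under Operations.
public class JmxHeapDump {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.invoke(new ObjectName("com.sun.management:type=HotSpotDiagnostic"),
                "dumpHeap",
                // p0 = output file, p1 = dump only live objects
                new Object[] {"jconsole-style.hprof", Boolean.TRUE},
                new String[] {"java.lang.String", "boolean"});
        System.out.println("heap dump written");
    }
}
```

The same invoke call works over a remote JMXConnector, which is effectively what jconsole does when you press the heapDump button (the dump file still lands on the remote machine, not your workstation).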