Taking a heap dump leads to a crash - java

I have a Java process running inside a container under Kubernetes orchestration. I was observing a high memory footprint in docker stats.
I have -Xmx40g set, and docker stats was reporting 34.5 GiB of memory. To get a better understanding of heap usage, I tried to take a heap dump of the running process with the command below:
jmap -dump:live,format=b,file=/tmp/dump.hprof $pid
But this causes a container restart. The generated dump file is around 9.5 GiB, but Eclipse Memory Analyzer reports that the file is incomplete and cannot open it:
Invalid HPROF file: Expected to read another 1,568,483,080 bytes, but only 528,482,104 bytes are available for heap dump record
I didn't find much information in the kubelet logs or the container logs, except for a liveness probe failure, which could have been caused by the heap dump.
I have been unable to reproduce the issue so far. I just want to understand what could have happened, and whether taking the heap dump could interfere with my running process. I understand that the -dump:live option forces a full GC cycle before collecting the heap dump; could that have interfered with my running process?

I have faced situations like this when the JVM was under stress. Could you attach the hs_err_pid log file? After checking that file, you can configure the OS to generate a core dump (in case your system has not generated a core file yet), and you should be able to extract the heap dump from the core file.
At the following link you can read about a similar issue that was related to a JDK bug.
Furthermore, I recommend enabling the garbage collector log and the automatic heap dump using the flags below. Please attach the gc.log after enabling it, and allocate enough storage to hold the heap dumps.
-XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xloggc:/some/path/gc.log
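For reference, a minimal sketch of extracting the heap dump from a core file, assuming a Linux host and a JDK 8 (or earlier) HotSpot jmap, which can read core files (later JDKs moved this to jhsdb; the file names and pid are illustrative):
ulimit -c unlimited                # allow the OS to write core files; set before the JVM starts
# after a crash produces e.g. core.12345:
jmap -dump:format=b,file=/tmp/from-core.hprof $JAVA_HOME/bin/java core.12345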

Related

OutOfMemoryError heap dump

I have a java.lang.OutOfMemoryError: GC overhead limit exceeded.
The -XX:+HeapDumpOnOutOfMemoryError command-line option was not set for my application.
I need a heap dump, but when I try to capture one with the jmap or jcmd tools, they do not respond:
jmap
D:\program>jmap -dump:live,format=b,file=d:\dump4.hprof 8280
Dumping heap to D:\dump4.hprof ...
jcmd
D:\program>jcmd 8280 GC.heap_dump d:\dump6.hprof
8280:
The processes never complete, but dump files are created. When I open them with VisualVM, they load forever.
If I instead capture a heap dump of another process, e.g. VisualVM itself, the tools complete successfully and the dumps are created and can be opened.
Could you please explain why jmap and jcmd are not completing? And how can I capture a dump of an application that has thrown an OutOfMemoryError? The application is still working, but there are only a few live threads.
One possibility is that the heap you are trying to dump is too large. Please specify the size of the heap and the amount of RAM.
It is not because your intended heap size exceeds the allocated heap size. This error occurs when the JVM has spent too much time performing garbage collection while reclaiming very little heap space. Your application has probably ended up using almost all of the available heap, and the garbage collector has repeatedly spent too much time trying to clean it up and failed.
Your application's performance will also be comparatively slow, because the CPU is spending nearly its entire capacity on garbage collection and cannot perform other tasks.
The following questions need to be addressed:
What are the objects in the application that occupy large portions of the heap?
In which parts of the source code are these objects being allocated?
You can also use automated graphical tools such as JConsole, which help detect performance problems in the code, including java.lang.OutOfMemoryError.
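For illustration, a minimal Java sketch that typically reproduces this error when run with a small heap (the class name and heap size are illustrative, e.g. java -Xmx64m GcOverheadDemo):
import java.util.HashMap;
import java.util.Map;

public class GcOverheadDemo {
    public static void main(String[] args) {
        Map<Integer, String> retained = new HashMap<>();
        for (int i = 0; ; i++) {
            // every entry stays reachable, so each GC cycle reclaims almost nothing
            retained.put(i, String.valueOf(i));
        }
    }
}
Because the collector runs constantly but frees very little, HotSpot typically gives up with "GC overhead limit exceeded" rather than "Java heap space".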

Tomcat heap dump creation

In our environment the Tomcat server hangs frequently; we then increase the heap and restart Tomcat.
Is there another way to analyze heap dumps in Tomcat?
Can we create heap dumps in Tomcat? If so, how?
Thanks
Surya
First, you should analyze exactly what causes your Tomcat to hang. There are many things that can cause an application to "hang", e.g. deadlocks, long GC pauses, etc.
Looking at the heap dump makes sense if your Tomcat crashes with an OutOfMemoryError.
In that case you can use a tool like MAT to analyze the heap dump.
You can create heap dumps any time with jcmd <pid> GC.heap_dump <file>. You can also set the VM option -XX:+HeapDumpOnOutOfMemoryError. This will dump the heap automatically when you get an OutOfMemoryError.
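For example, a minimal sketch of wiring that option into Tomcat, assuming a standard installation that reads $CATALINA_BASE/bin/setenv.sh at startup (the dump path is illustrative):
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat/dumps"
export CATALINA_OPTS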

Getting OutOfMemoryError: GC overhead limit exceeded with Tomcat server in Linux machine

I am getting "OutOfMemoryError: GC overhead limit exceeded" with my Tomcat server on a Linux machine. Please let me know if there are any tools that can help me analyze which part of my Java application is consuming a lot of memory. Are there any debugging tools that provide information about where the Tomcat server failed with this error?
Thanks in advance.
Be sure you set the command-line parameter -XX:+HeapDumpOnOutOfMemoryError to produce a heap dump on OOM. Also useful: -XX:HeapDumpPath=<folder for heap dump>.
When the OOM occurs, you can analyze the heap dump with MAT. It is a very useful tool for analyzing heap dumps.
You can also use jmap to take heap dumps manually. Example: jmap -dump:format=b,file=<output-filename> <java process id>
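Putting both suggestions together, a minimal sketch (the jar name, path, and pid are illustrative):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar myapp.jar
# later, a manual dump of the same running process:
jmap -dump:format=b,file=/var/dumps/manual.hprof 4242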

Collecting Java heapdump under load

I am running load against Tomcat 6 on Java 6. I want to collect a heap dump of the Java heap while the Tomcat server is under load. I normally use jmap -dump to collect my heap dumps.
However, when I try to do this while Tomcat is handling a high load, I find that the heap dump collection fails.
Is jmap the best tool for collecting a heap dump from a process under load? What could cause jmap to fail to collect a heap dump?
If jmap is not the best tool - what is better?
It is entirely acceptable to me for jmap (or some other tool) to stop the world within the Java process while the heap dump is taken.
Is jmap the best tool for collecting a heap dump from a process under load?
I think: no, it isn't. From this link:
NOTE - This utility is unsupported and may or may not be available in future versions of the JDK.
I've also found jmap can be pretty temperamental. If you're having problems:
Try it again; it often manages to get a heap dump after a couple of attempts if it fails at first
Use the -F (force) option (see the sketch after this list)
Add -XX:+HeapDumpOnOutOfMemoryError as a standard configuration to proactively take heap dumps when an OOM error is thrown
Run Tomcat interactively and add the heap-dump-on-ctrl-break option. This gives you a thread dump too, something you'll probably need anyway
If your heap size is especially large and you have a repeatable condition, temporarily lower your heap size. It makes the resulting file much easier to handle, takes less time, and is more likely to succeed
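A minimal sketch of those fallbacks, assuming a HotSpot JVM of that era (the pid is illustrative):
jmap -F -dump:format=b,file=/tmp/forced.hprof 4242    # -F attaches via the debugger interface: slower, but can work when the normal attach fails
kill -3 4242                                          # SIGQUIT is the Unix equivalent of ctrl-break and prints a thread dump to the JVM's stdout
# on older HotSpot releases such as Java 6, -XX:+HeapDumpOnCtrlBreak also writes a heap dump on the same signal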
I have found that running Tomcat with a JMX port allows me to take a remote heap dump using VisualVM. This succeeded for me when jmap failed.
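A minimal sketch of exposing such a JMX port, assuming no authentication is needed (the port number is illustrative; do not use these settings on an untrusted network):
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar myapp.jar
For Tomcat, the same properties can go into CATALINA_OPTS.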

Loading a large hprof into jhat

I have a 6.5 GB hprof file that was dumped by a 64-bit JVM using the -XX:+HeapDumpOnOutOfMemoryError option. I have it sitting on a 16 GB 64-bit machine and am trying to load it into jhat, but jhat keeps running out of memory. I have tried passing in JVM args for minimum settings, but it rejects any minimum, and seems to run out of memory before hitting the maximum.
It seems kind of silly that a JVM running out of memory dumps a heap so large that it can't be loaded on a box with twice as much RAM. Are there any ways of getting this running, or possibly of amortizing the analysis?
Use the equivalent of jhat -J-d64 -J-mx16g myheap.hprof to launch jhat; this will start jhat in 64-bit mode with a maximum heap size of 16 gigabytes.
If the JVM on your platform defaults to 64-bit-mode operation, then the -J-d64 option should be unnecessary.
I would take a look at the Eclipse Memory Analyzer. This tool is great, and I have analyzed several multi-gigabyte heaps with it. The nice thing about the tool is that it builds indexes on the dump, so the whole dump is not held in memory at once.
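If the interactive UI still struggles, MAT also ships a headless parser that builds the indexes and a leak-suspects report up front; a minimal sketch, assuming the script name matches your MAT distribution (the file path is illustrative):
./ParseHeapDump.sh /path/to/myheap.hprof org.eclipse.mat.api:suspects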
I had to load an 11 GB hprof file and couldn't with Eclipse Memory Analyzer. What I ended up doing was writing a program to reduce the size of the hprof file by randomly removing instance information. Once I got the size of the hprof file down to 1 GB, I could open it with Eclipse Memory Analyzer and get a clue about what was causing the memory leak.
What flags are you passing to jhat? Make sure that you're in 64-bit mode and you're setting the heap size large enough.
