I am running load against Tomcat 6 on Java 6. I want to collect a heap dump of the Java heap while the Tomcat server is under load. I normally use jmap -dump to collect my heap dumps.
However, when I try to do this when Tomcat is handling a high load I find that the heapdump collection fails.
Is jmap the best tool for collecting a heap dump from a process under load? What could cause jmap to fail to collect a heap dump?
If jmap is not the best tool - what is better?
It is entirely acceptable to me for jmap (or some other tool) to stop the world within the Java process while the heap dump is taken.
Is jmap the best tool for collecting a heap dump from a process under load?
I think: No it isn't. From this link:
NOTE - This utility is unsupported and may or may not be available in future versions of the JDK.
I've also found jmap can be pretty temperamental. If you're having problems:
Try again. It often manages to get a heap dump after a couple of attempts if it fails at first
Use the -F (force) option; see the example after this list
Add -XX:+HeapDumpOnOutOfMemoryError as a standard configuration to proactively take heap dumps when an OOM error is thrown
Run Tomcat interactively and add the heap dump on ctrl-break option. This gives you a thread dump too, something you'll probably need anyway
If your heap size is especially large and you have a repeatable condition, temporarily lower your heap size. It makes the resulting file much easier to handle, takes less time and is more likely to succeed
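For example, a forced dump might look like this (the pid 12345 and the output path are placeholders):
jmap -F -dump:format=b,file=/tmp/heap.hprof 12345
The -F option uses the slower forced attach mechanism, so expect it to take noticeably longer on a large heap.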
I have found that running Tomcat with a JMX port allows me to take a remote heap dump using VisualVM. This succeeded for me when jmap failed.
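For reference, a minimal sketch of the JMX settings I mean; CATALINA_OPTS and port 9010 are assumptions, and disabling authentication/SSL like this is only sensible on a trusted network:
CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
With that in place, VisualVM can add a JMX connection to host:9010 and take the heap dump remotely.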
Related
I have a Java process running inside a container with Kubernetes orchestration. I was observing a high memory footprint in docker stats.
I have -Xmx set to 40 GB and docker stats was reporting 34.5 GiB of memory. To get a better understanding of heap usage, I tried to take a heap dump of the running process with the command below:
jmap -dump:live,format=b,file=/tmp/dump.hprof $pid
But this caused a container restart. The generated dump file is around 9.5 GiB, but Eclipse Memory Analyzer reports that the file is incomplete and can't open it.
Invalid HPROF file: Expected to read another 1,56,84,83,080 bytes, but only 52,84,82,104 bytes are available for heap dump record
I didn't find much information in the kubelet logs or the container logs, except for a liveness probe failure, which could have been caused by the heap dump.
I have been unable to recreate the issue so far. I just want to understand what could have happened, and whether the heap dump could interfere with my running process. I understand that the -dump:live flag forces a GC cycle before collecting the heap dump; could that have interfered with my running process?
I have faced situations like this when the JVM was under stress. Could you attach the hs_err_pid log file? After checking that file, you can configure the OS to generate a core dump (in case your system has not generated a core file yet), and you should be able to extract the heap dump from the core file.
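For reference, on JDK 8 and earlier jmap can extract a heap dump from a core file; a minimal sketch, where the java binary path and core.12345 are placeholders:
jmap -dump:format=b,file=heap-from-core.hprof /usr/bin/java core.12345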
On the following link you can read about a similar issue that was related to a JDK bug.
Furthermore, I recommend enabling the garbage collector log and the automatic heap dump using the flags below. Please attach gc.log after enabling it, and allocate enough storage space to save the heap dumps.
-XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xloggc:/some/path/gc.log
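A minimal sketch of how these might be wired into a containerized JVM; JAVA_OPTS and the /dumps mount path are assumptions, and -XX:HeapDumpPath only controls where the automatic dump is written:
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xloggc:/dumps/gc.log"
Make sure /dumps is backed by a volume with enough free space, since a heap dump can be close to the size of the live heap.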
I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that the memory usage increases quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so, it can reach up to 2GB.
Shortly after, it will throw an OutOfMemory exception (the memory usage will be 2 GB - 2.5 GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, down to the level of seeing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
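A rough sketch of how that flag might be passed to a Tomcat-based Windows service; the dump directory and the use of CATALINA_OPTS are assumptions, since your service wrapper may take JVM options elsewhere:
set CATALINA_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps
The .hprof file written on the next OutOfMemoryError can then be opened directly in Eclipse MAT.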
quoting:
the process starts with ~200MB and after a week or so, it can reach up to 2GB. Shortly after it will generate OutOfMemory exception (the memory usage will be 2GB - 2.5GB).
The problem might not be as simple as seeing which Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on Java 8).
Alternatively, you could just wire up Nashorn to a command line or run your program via jjs: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
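A minimal sketch of the jjs route, assuming your classes are packaged in myapp.jar (the jar and class names are placeholders); inside the interactive shell you can call into your code and watch heap usage between calls:
jjs -cp myapp.jar
jjs> java.lang.Runtime.getRuntime().totalMemory() - java.lang.Runtime.getRuntime().freeMemory()
jjs> Java.type("com.example.SessionCache").getInstance().size()
Re-running the suspected code path and re-checking the used-heap figure helps narrow down which calls actually leak.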
I've got a java application that half the time just hangs, and the other half the JVM crashes. Is there a tool I can use to see what's going on that makes it hang and/or crash?
I'm using CentOS 5.6
For starters I would suggest JVisualVM. It comes with the JDK, so you should just need to type jvisualvm into the command line to start it.
Once it starts, you can connect to a running JVM, so you should be able to connect to your hung Java process and inspect the stack dump for all its running threads as well as the contents of the heap.
Other useful built-in tools include (example invocations follow this list):
jps lists process ids of running java processes
jstack prints a stack dump for each thread in the specified JVM process
jmap generates a heap dump for the specified JVM process (jvisualvm can also generate heap dumps)
jhat analyzes heap dumps generated with jmap or jvisualvm
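A hedged sketch of a typical session with these tools; the pid (12345) and file names are placeholders:
jps -l
jstack 12345 > threads.txt
jmap -dump:format=b,file=heap.hprof 12345
jhat heap.hprof
jps identifies the pid, jstack shows where the threads are stuck, and jhat serves the heap dump for browsing (at http://localhost:7000 by default).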
Of course, there are also more sophisticated profilers available. JProfiler is quite highly regarded.
There are two different cases.
Application crash:
Was it an OOM? An NPE? What was the exception? If there was a JVM crash you will find an hs_err_pid*.log file (http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf).
Looking at that file, you can see whether your own JNI code caused the crash or whether it was a JVM bug.
Application hang: I would start with VisualVM or jstack (both are part of the JDK). You can see the current state of the threads and check whether there is an application error.
Other Linux tools that can help you look inside the process (example invocations follow this list):
lsof: check whether the process has opened too many files
strace: see current activity from the system-call point of view
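For example, with 12345 as a placeholder pid:
lsof -p 12345 | wc -l
strace -f -p 12345
The first gives a rough count of open file descriptors; the second attaches to the process and prints the system calls its threads are making.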
The Oracle tools documentation provides a pretty neat listing. It also links to operating-system-specific tools.
In these cases (hang, freeze, ...) you have to analyze a heap dump to figure out what's happening in your application. You can use JVisualVM to take the dump, or you can add the appropriate JVM parameter to dump the contents of the heap in the case of a crash.
I have used a thread pool in my NIO server design, created with the Executors.newFixedThreadPool factory method. My server throws an exception after it has been running for 20 to 30 minutes. How do I handle this exception?
java.lang.OutOfMemoryError: Java heap space
Obviously you are using too much memory, so now you need to find out why. Without your source it is very hard to say what is wrong, and even with the source it can be difficult once the program starts to become complex.
What I have found helpful is to take memory dumps and look at them in tools such as Memory Analyzer (MAT). It can even compare several dumps to see what kinds of objects are allocated. When you spot objects that you don't think should be there, you can use the tool to see their GC roots (which objects hold a reference to them).
To get a memory dump from a running Java program, use jmap -dump:format=b,file=heap.bin <pid>, and to automatically get a memory dump when your program hits an OutOfMemoryError you can run it with java -XX:+HeapDumpOnOutOfMemoryError failing.java.Program.
Normally it's java -Xms5m -Xmx15m MyApp
-Xms sets the initial Java heap size
-Xmx sets the maximum Java heap size
and in Eclipse, under the Java run configuration, as VM arguments:
-Xms128m
-Xmx512m
-XX:PermSize=128M
-XX:MaxPermSize=384M
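Put together on a single command line (MyApp is a placeholder main class), that would be:
java -Xms128m -Xmx512m -XX:PermSize=128M -XX:MaxPermSize=384M MyApp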
You can definitely try increasing the heap size and check whether you still hit the issue.
However, I would prefer that you try profiling your application to find out why your heap is growing and where the memory is being consumed.
There are a few open-source profilers you can try.
I'm trying to diagnose a PermGen memory leak problem in a Sun One 9.1 Application Server. In order to do that I need to get a heap dump of the JVM process. Unfortunately, the JVM process is version 1.5 running on Windows. Apparently, none of the ways for triggering a heap dump support that setup. I can have the JVM do a heap dump after it runs out of memory, or when it shuts down, but I need to be able to get heap dumps at arbitrary times.
The two often mentioned ways for getting heap dumps are either using jmap or using the HotSpotDiagnostic MBean. Neither of those support jvm 1.5 on Windows.
Is there a method that I've missed? If there's a way to programmatically trigger a heap dump (without using the HotSpotDiagnostic MBean), that would do too...
If it's really not possible to do it in Windows, I guess I'd have to resort to building a Linux VM and doing my debugging in there.
Thanks.
There was a new HotSpot option introduced in Java 6, -XX:+HeapDumpOnOutOfMemoryError, which was actually backported to the Java 5 JVM.
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
Dump heap to file when java.lang.OutOfMemoryError is thrown. Manageable. (Introduced in 1.4.2 update 12, 5.0 update 7.)
It's very handy. The JVM lives just long enough to dump its heap to a file, then falls over.
Of course, it does mean that you have to wait for the leak to get bad enough to trigger an OutOfMemoryError.
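A minimal sketch of how the option is typically added; -XX:HeapDumpPath is optional, and the dump directory and jar name here are placeholders:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps -jar myserver.jar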
An alternative is to use a profiler, like YourKit. This provides the means to take a heap snapshot of a running JVM. I believe it still supports Java5.
P.S. You really need to upgrade to java 6....
If it's 1.5.0_14 or later, you can use -XX:+HeapDumpOnCtrlBreak and hit Ctrl-Break in the console.
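For example (the jar name is a placeholder):
java -XX:+HeapDumpOnCtrlBreak -jar myserver.jar
Pressing Ctrl-Break in that console window then writes a heap dump (alongside the usual thread dump) without waiting for an OutOfMemoryError.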