-XX:+HeapDumpOnOutOfMemoryError Multiple heap dump creation

"-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp" This parameter will help to take heap dump automatically when server limit is reached.
http://www.oracle.com/technetwork/java/javase/clopts-139448.html#gbzrr
I can see detailed information at the link above, but the "OutOfMemoryError" message is printed many times in my server log.
So if the error occurs multiple times, will the JVM take multiple heap dumps?
Regards,
Peter

The Oracle JVM creates a heap dump only on the first OOM when this flag is specified. However, you can manually create multiple heap dumps if the JVM process is still alive and responsive. A little bit of googling turns up:
-XX:+HeapDumpOnOutOfMemoryError not creating hprof file in OOM
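For example, a manual dump can be taken with jmap while the process is still up (the PID and file name are placeholders):
jmap -dump:format=b,file=/tmp/manual.hprof <pid>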

It depends on the JVM. I think the Oracle JVM only dumps once.

Related

JBoss java.lang.OutOfMemoryError: Java heap space

From time to time the JBoss server throws "An unhandled exception has occurred:
java.lang.OutOfMemoryError: Java heap space".
Could you please explain how this works?
A few details:
JBoss runs continuously in standalone mode on Windows.
From standalone.conf: "JAVA_OPTS=-Xms1G -Xmx1G -XX:MaxPermSize=256M"
I deploy a ~50 MB WAR file and remove it after tests.
What is the possible cause of this Java heap space exception?
Should I restart the server between deploys?
Is there any command to clean the heap space?
If I understand correctly, increasing the -Xmx argument will not help; it will only delay the appearance of the exception. Right?
Thanks in advance
What is the possible cause of this Java heap space exception?
On the face of it, the explanation is simple. The JVM has run out of heap space, and the GC is unable to reclaim enough space to continue.
But what caused the JVM to get into that state?
There are a number of possible explanations, but they mostly fall into three classes:
1. Your application has a memory leak.
2. Repeated deploys are causing memory to leak.
3. There are no memory leaks, but occasionally your application gets a request that simply needs too much memory.
Should I restart the server between deploys?
That may help if the deploys are causing the leaks. If not, it won't.
Is there any command to clean the heap space?
There is no command that will do this. The JVM will already have run a "full" GC before throwing the OOME.
If I understand correctly, increasing the -Xmx argument will not help; it will only delay the appearance of the exception. Right?
That depends. If the root cause is #3 above, then increasing the heap size may solve the problem. But if the root cause is #1 or #2, then tweaking the heap size will (at best) cause the JVM to survive longer between crashes.
My recommendation is to start by treating this as a "normal" (cause #1) memory leak, and use a memory profiler to identify and fix leaks that are likely to build over time.
If / when you can definitively eliminate cause #1, consider the others.
I absolutely agree with the answer from Stephen C; I just want to show a possible way to analyse it.
A minimal tool for monitoring the memory is jstat, which comes with the JDK.
After starting JBoss you can start monitoring the memory and GC with
jstat -gc <JBOSS_PID> 2s
The output can then be loaded into a spreadsheet such as Excel, for example.
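On a Java 6-era JDK the output has columns roughly like the following (the exact headers vary by JDK version, and the numbers below are made up purely for illustration):
S0C    S1C    S0U   S1U    EC      EU      OC        OU       PC       PU       YGC  YGCT  FGC FGCT  GCT
1024.0 1024.0 0.0   512.0  8192.0  4096.0  102400.0  65536.0  65536.0  40960.0  120  0.85  4   1.20  2.05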
Now when you recognize something strange happens with the memory, take a heap dump:
jcmd <JBOSS_PID> GC.heap_dump <filename>
jcmd also comes with the JDK.
You can then load the heap dump into MAT and analyse it. It takes some practice and patience to work with MAT. They also have a good tutorial. You can also compare heap dumps in MAT.
I also suggest you add -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path> to your JAVA_OPTS.
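Putting it together, the JAVA_OPTS line from the question might look like this (the dump path is a placeholder you should adapt):
JAVA_OPTS=-Xms1G -Xmx1G -XX:MaxPermSize=256M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:/dumps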
Please increase the MaxPermSize value and check whether it helps:
JAVA_OPTS=-Xms1G -Xmx1G -XX:MaxPermSize=512M

Is it possible to view threads from hprof dump / threads in heap dump

I have a large (5 GB) hprof dump, created by the application when an OutOfMemoryError occurred (using -XX:+HeapDumpOnOutOfMemoryError).
Unfortunately no logs were collected when this error happened, and re-creating it would take a couple of hours. I was hoping some tool could show the exception stack trace, or all the thread stacks, from the hprof file.
I am currently using MAT but could not see a way to get thread information. Which tool could I use?
(I am not sure whether the hprof file contains information about the threads or the location of the call when the OOM occurred.)
(I do know how to take a thread dump in normal cases. The trouble here is that the event has already happened; all I have is the hprof dump.)
Answering my own question. Credit goes to @RC.
Open the dump using VisualVM. It takes a while.
Click on "Threads at heap dump".
MAT can show the threads directly now (perhaps this was added since the question was asked).
Threads Overview
To get an overview of all the threads in the heap dump, use the "Thread Overview" button in the toolbar. Alternatively, use the Query Browser > Thread Overview and Stacks query.
I don't think the heap dump contains thread information beyond the GC roots. If you need thread-related information, you need to take a thread dump as well.
Eclipse MAT lets you see the suspect threads in the Leak Suspects report. Look for the classes in your application's namespace, with their line numbers, to find how much memory they occupy in the heap. This will give you a hint about leaky classes.
You can kill -3 the process ID to get a thread dump to standard out. This will not kill the Java process, so you can do it as many times as you want.
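For example (the PID is a placeholder; on Windows you would press Ctrl-Break in the console instead):
kill -3 <pid>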
As RC stated, VisualVM is a good tool that gives you object counts by class type and all kinds of graphs and profiling tools.
Use VisualVM.
Try to analyse the graph when the perm gen space is exceeded.
You should also check the memory samples and save a snapshot of them.
Analysing the thread stacks will help you narrow down the problem.
To turn on an option you need +, and to turn off an option you need -.
What is confusing about the documentation is that it shows the default setting, to make it "clear" what setting you already have. The options listed with + are on by default and the ones listed with - are off by default. This means that if you copy any of the + or - options straight from the documentation, they should do nothing (except where the default has changed over time).
-XX:-HeapDumpOnOutOfMemoryError turns off the heap dump, which is the default.
-XX:+HeapDumpOnOutOfMemoryError turns on the heap dump.

Taking a heap dump on an already crashed system

My Tomcat application crashed due to a memory leak.
I want to take a heap dump on the crashed system/JVM.
Is it possible? I am using Windows and Tomcat 6.
How?
The process does not exist anymore, so there is no heap to dump.
Use -XX:+HeapDumpOnOutOfMemoryError for the next time.
You can get a heap dump at runtime with (where <pid> is the Java process ID):
jmap -dump:live,format=b,file=heap.dump <pid>
You can't get a heap dump from a process that is no longer running. Next time you start Tomcat, you will first have to edit the catalina.sh file in the bin directory so that it contains options to automatically dump the heap if it runs out of memory.
What you need to do is edit the JAVA_OPTS variable so that it contains the JVM options you need. Near the top of the file, after JAVA_OPTS has been created, do something like:
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
You can also take heap dumps using JConsole, but to do this you need to know roughly when Tomcat is running out of memory in order for the heap dump to help you diagnose the problem.
If your application is not responding but the JVM is still limping along, you can try using JConsole to trigger a heap dump. Search for "Heap Dump" at that link.
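If you would rather trigger the dump from code than from JConsole, here is a minimal sketch using the HotSpot diagnostic MXBean (the output path is a placeholder, and this MXBean is HotSpot-specific):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot diagnostic MXBean from the platform MBean server.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Dump live objects only; passing false would also include unreachable objects.
        bean.dumpHeap("/tmp/heap.hprof", true);
    }
}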

About an out-of-memory exception

I have used a thread pool for a NIO server design, created with the Executors.newFixedThreadPool factory method. My server throws the following exception after running for 20 to 30 minutes. How do I handle this exception?
java.lang.OutOfMemoryError: Java heap space
Obviously you are using too much memory, so now you need to find out why. Without your source it is very hard to say what is wrong, and even with the source it can be problematic once the program starts to become complex.
What I have found helpful is to take memory dumps and look at them in tools such as Memory Analyzer (MAT). It can even compare several dumps to see what kinds of objects are allocated. When you find objects that you don't think should be there, you can use the tool to see what roots they have (which objects hold a reference to them).
To get a memory dump from a running Java program, use jmap -dump:format=b,file=heap.bin <pid>, and to automatically get a memory dump when your program hits an OutOfMemoryError you can run it with java -XX:+HeapDumpOnOutOfMemoryError failing.java.Program
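Since the question mentions Executors.newFixedThreadPool: that factory backs the pool with an unbounded LinkedBlockingQueue, so if tasks are submitted faster than the fixed number of threads can drain them, the queue itself can grow until the heap is exhausted. A minimal sketch of a bounded alternative (all sizes are illustrative assumptions, not recommendations):

import java.util.concurrent.*;

// Fixed-size pool with a bounded work queue: when the queue fills up,
// CallerRunsPolicy runs the task on the submitting thread, applying
// back-pressure instead of letting queued tasks exhaust the heap.
ExecutorService pool = new ThreadPoolExecutor(
        16, 16,                                  // core and max pool size (fixed, illustrative)
        0L, TimeUnit.MILLISECONDS,               // keep-alive for excess idle threads
        new ArrayBlockingQueue<Runnable>(1000),  // bounded queue (illustrative capacity)
        new ThreadPoolExecutor.CallerRunsPolicy());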
Normally it's java -Xms5m -Xmx15m MyApp
-Xms sets the initial Java heap size
-Xmx sets the maximum Java heap size
and in Eclipse, under the Java run configuration, as VM arguments:
-Xms128m
-Xmx512m
-XX:PermSize=128M
-XX:MaxPermSize=384M
You can definitely try increasing the heap size and check whether you still get the issue.
However, I would prefer that you profile your application to find out why your heap grows and where the memory is being consumed.
There are a few open-source profilers you can try.

Collecting Java heapdump under load

I am running load against Tomcat 6 on Java 6. I want to collect a heap dump of the Java heap while the Tomcat server is under load. I normally use jmap -dump to collect my heap dumps.
However, when I try to do this while Tomcat is handling a high load, I find that the heap dump collection fails.
Is jmap the best tool for collecting a heap dump from a process under load? What could cause jmap to fail to collect a heap dump?
If jmap is not the best tool - what is better?
It is entirely acceptable to me for jmap (or some other tool) to stop the world within the Java process while the heap dump is taken.
Is jmap the best tool for collecting a heap dump from a process under load?
I think: No it isn't. From this link:
NOTE - This utility is unsupported and may or may not be available in future versions of the JDK.
I've also found jmap can be pretty temperamental. If you're having problems:
Try it again. It often manages to get a heap dump after a couple of attempts if it fails at first.
Use the -F (force) option (see the example command after this list)
Add -XX:+HeapDumpOnOutOfMemoryError as a standard configuration to proactively take heap dumps when an OOM error is thrown
Run Tomcat interactively and add the heap-dump-on-Ctrl-Break option. This gives you a thread dump too, something you'll probably need anyway
If your heap size is especially large and you have a repeatable condition, temporarily lower your heap size. It makes the resulting file much easier to handle, takes less time and is more likely to succeed
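For example, a forced dump of a process under load might look like this (the PID and file name are placeholders):
jmap -F -dump:format=b,file=/tmp/loaded.hprof <pid>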
I have found that running Tomcat with a JMX port open allows me to take a remote heap dump using VisualVM. This succeeded for me when jmap failed.
