I am new to heap analysis. We have been using Spring Boot in our web application. Recently heap usage has become too high. To analyse the heap dump with tools like MAT and JProfiler, I am downloading it using the actuator as follows:
http://localhost:8080/actuator/heapdump
But each time I take a heap dump, heap usage becomes low. I suspect GC kicks in at that moment; please correct me if I am wrong. So I am not able to capture the actual scenario. Is there any way to take a heap dump without triggering GC? Or is there any way to generate a heap dump automatically whenever heap usage increases beyond, say, 500 MB?
You can have a @Scheduled task that regularly reads heap usage for you and generates the heap dump once usage crosses 500 MB. You can use ManagementFactory.getMemoryPoolMXBeans(); it shows the different heap regions and their usage.
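A minimal sketch of that idea (class and method names are my own; in a Spring Boot app the check would sit in a @Scheduled method, but the calls below are plain JDK). Passing live = false to dumpHeap also avoids forcing a GC before the dump, which addresses the original question:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapWatcher {

    static final long THRESHOLD_BYTES = 500L * 1024 * 1024; // 500 MB

    // Sum used bytes across all heap pools (eden, survivor, old gen, ...)
    static long currentHeapUsedBytes() {
        long used = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                used += pool.getUsage().getUsed();
            }
        }
        return used;
    }

    // live = false dumps all objects without forcing a GC first;
    // live = true would trigger a full GC before dumping.
    static void dumpHeap(String path) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, /* live = */ false);
    }

    // In Spring Boot: annotate with @Scheduled(fixedDelay = 10_000)
    public static void checkAndDump() throws Exception {
        if (currentHeapUsedBytes() > THRESHOLD_BYTES) {
            dumpHeap("/tmp/over-threshold-" + System.currentTimeMillis() + ".hprof");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("heap used = " + currentHeapUsedBytes() + " bytes");
        checkAndDump();
    }
}
```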
To do that externally:
You can use jstat for external monitoring of heap usage. Wrap it in a script that analyzes the output of jstat -gc and then uses jmap to take the heap dump.
Related
I have a java.lang.OutOfMemoryError: GC overhead limit exceeded.
The -XX:+HeapDumpOnOutOfMemoryError command line option was not set for my application.
I need a heap dump, but when I try to capture one with the jmap or jcmd tools they do not respond:
jmap
D:\program>jmap -dump:live,format=b,file=d:\dump4.hprof 8280
Dumping heap to D:\dump4.hprof ...
jcmd
D:\program>jcmd 8280 GC.heap_dump d:\dump6.hprof
8280:
The processes do not complete, but dump files are created. When I open them with VisualVM, they load forever.
If I capture a heap dump of another process, e.g. VisualVM itself, the tools complete successfully and the dumps are created and can be opened.
Could you please explain why jmap and jcmd are not completing? And how can I capture a dump of an application with an OutOfMemoryError? The application is still working, but there are only a few live threads.
One possibility is that the heap you are trying to dump is too large.
Please specify the size of the heap and the amount of RAM.
It is not because your intended heap size is larger than the allocated heap size. This error occurs when the JVM has spent too much time performing garbage collection and was only able to reclaim very little heap space. Your application has probably ended up using almost all of the available memory, and the garbage collector has spent too much time trying to clean it up and failed repeatedly.
Your application's performance will be comparatively slow, because the CPU is using its entire capacity for garbage collection and hence cannot perform any other tasks.
Following questions need to be addressed:
What are the objects in the application that occupy large portions of the heap?
In which parts of the source code are these objects being allocated?
You can also use graphical tools such as JConsole, which help detect performance problems in the code, including java.lang.OutOfMemoryErrors.
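To get a quick programmatic sense of whether GC really is consuming the CPU, you can read the collector MXBeans. This is a small sketch (the class name is illustrative, not from the original post); a GC-time share approaching the uptime is the symptom behind "GC overhead limit exceeded":

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverheadCheck {

    // Total time (ms) spent in GC across all collectors since JVM start.
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long gcMs = totalGcTimeMillis();
        long upMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC time: %d ms of %d ms uptime (%.2f%%)%n",
                gcMs, upMs, 100.0 * gcMs / Math.max(1, upMs));
    }
}
```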
Problem
We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding a memory leak, but we found out how to make a java heap dump using jmap and analyze it in Eclipse MAT.
However, with our application using 56/60GB memory, the heap dump is only 16GB in size and is even less in Eclipse MAT.
Context
Our server runs WildFly 8.2.0 on Ubuntu 14.04 for our Java application, whose process uses 95% of the available memory. When we made the dump, the buffers/cache used space was at 56 GB.
We used the following command to create the dump: sudo -u {application user} jmap -dump:file=/mnt/heapdump/dump_prd.bin {pid}
The heap dump file size is 16.4 GB, and when analyzing it with Eclipse MAT, it says there are around 1 GB of live objects and ~14.8 GB of unreachable objects (shallow heap).
EDIT: Here is some more info about the problem we see happening. We monitor our memory usage and see it grow and grow until there is ~300 MB of free memory left. It then stays around that amount until the process crashes, unfortunately without an error in the application log.
This makes us assume it is a hard OOM error because this only happens when the memory is near-depleted. We use the settings -Xms25000m -Xmx40000m for our JVM.
Question
Basically, we are wondering why the majority of our memory isn't captured in this dump. The top retained-size classes don't look too suspicious, so we are wondering if there is something heap dump-related that we are doing wrong.
When dumping its heap, the JVM will first run a garbage collection cycle to free any unreachable objects.
How can I take a heap dump on Java 5 without garbage collecting first?
In my experience, in a true OutOfMemoryError where your application is simply demanding more heap space than is available, this GC is a fool's errand and the final heap dump will be close to the maximum heap size.
When the heap dump is much smaller, that means the system was not truly out of memory, but perhaps had memory pressure. For example, there is the java.lang.OutOfMemoryError: GC overhead limit exceeded error, which means that the JVM may have been able to free enough memory to service some new allocation request, but it had to spend too much time collecting garbage.
It's also possible that you don't have a memory problem. What makes you think you do? You didn't mention anything about heap usage or an OutOfMemoryError. You've only mentioned the JVM's memory footprint on the operating system.
In my experience, a heap dump much smaller than the real memory used can be due to a leak in JNI code.
Even if you don't directly use any native code, certain libraries use it to speed things up.
In our case, it was Deflater and Inflater instances that were not properly ended (end() was never called).
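For reference, the safe pattern looks roughly like this (a sketch; java.util.zip.Deflater allocates native, off-heap memory that a heap dump never shows, and calling end() in a finally block releases it deterministically instead of waiting for finalization):

```java
import java.util.zip.Deflater;

public class DeflaterUsage {

    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            // Headroom for small inputs, where deflate output can exceed input size
            byte[] buffer = new byte[input.length + 64];
            int n = deflater.deflate(buffer);
            byte[] out = new byte[n];
            System.arraycopy(buffer, 0, out, 0, n);
            return out;
        } finally {
            deflater.end(); // releases native memory; omitting this leaks off-heap
        }
    }

    public static void main(String[] args) {
        byte[] compressed = compress("hello world".getBytes());
        System.out.println("compressed to " + compressed.length + " bytes");
    }
}
```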
I am running a Tomcat 7 service which processes quite a big load from customers. I left the application running over a weekend, and when I came back I noticed that Tomcat's CPU usage had increased to 99% and I found the following errors in the logs:
Exception in thread "http-bio-8080-exec-908" java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-bio-8080-exec-948" java.lang.OutOfMemoryError: Java heap space
Does it mean that at the time I got the OutOfMemoryError I had 908 and 948 active threads open?
Currently my Tomcat is running under the default configuration; I've never increased the heap size.
We are receiving around 200 queries/sec.
My hardware:
CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Memory: 2GB
Could you please point me in the right direction: what should I look at in order to resolve this issue?
Thanks for any help!
There could be a couple of reasons:
Since you have not increased the default memory allocation, under high load the JVM probably won't have enough resources to serve all requests, and some are obviously running out of memory. So the first thing to try is to tweak the JVM memory configuration.
If 1 does not work, it could be a bug in your application that is preventing garbage collection. You might want to run your application with a profiler attached to see which objects are collected over time and use that information to debug.
When your JVM runs out of heap memory, a java.lang.OutOfMemoryError: Java heap space error will occur.
Solution 1: Increase the availability of that resource. If your application does not have enough Java heap space to run properly, alter your JVM launch configuration and add the following:
-Xmx1024m
The above configuration gives the application 1024 MB of Java heap space. You can use g or G for GB, m or M for MB, and k or K for KB. For example, all of the following are equivalent ways of saying that the maximum Java heap space is 1 GB:
-Xmx1073741824
-Xmx1048576k
-Xmx1024m
-Xmx1g
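To verify which limit the JVM actually applied, you can print what Runtime reports (a tiny sketch; the class name is my own, and maxMemory() closely approximates the -Xmx value):

```java
public class HeapReport {
    public static void main(String[] args) {
        // maxMemory() reflects the -Xmx setting (approximately, minus
        // JVM-internal reservations depending on the collector in use)
        long max = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (max / (1024 * 1024)) + " MB");
    }
}
```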
Solution 2: If your application contains a memory leak, adding more heap will just postpone the java.lang.OutOfMemoryError: Java heap space error.
If you wish to solve the underlying problem with the Java heap space instead of masking the symptoms, you have several tools available.
For example, the Plumbr tool: among other performance problems, it catches all java.lang.OutOfMemoryErrors and automatically tells you what causes them. You can look into the other available tools as well, and the choice is yours!
Source:
https://plumbr.eu/outofmemoryerror/java-heap-space
I am running a Java application in JBoss which periodically undertakes a memory-intensive task. The amount of memory used by org.jboss.Main rises during this time, but does not fall once the task is completed.
I've been profiling the process using VisualVM and have forced a garbage collection. The memory used by org.jboss.Main remains at about 1GB. The basic info of the heap dump is as follows.
Total bytes: 53,795,695
Total classes: 5,696
Total instances: 732,292
Classloaders: 324
GC roots: 2,540
Number of objects pending for finalization: 0
I've also looked at the classes using the memory and found that char[] accounts for most of the memory.
Could anyone tell me why VisualVM shows 20 times less memory being used by JBoss than my activity monitor does? And if anyone could provide any advice or pointers to further reading on how to get JBoss to release memory after the job is complete, that would be greatly appreciated.
VisualVM is showing the size of the heap after a Full GC. This is the amount of data you can use before the heap has to grow.
The heap dump is showing you how much of the heap is used at present.
I have a Tomcat application server that is configured to create a memory dump on OOM, and it is started with -Xmx1024M, so a gigabyte should be available to it.
Now I found one such dump and it contains only 260 MB of unretained memory. How is it possible that the dump is so much smaller than the size the JVM should have had available?
PermGen space is managed independently of the heap and can run out even when there's plenty of free memory overall. Some web frameworks (especially JSF) are real hogs and will easily cause the default configuration to run out. It can be increased with -XX:MaxPermSize=###m
Remember that the process's memory is constrained by the sum of heap and permgen, so if you don't decrease the heap by the amount PermGen is increased, you can consume fewer total resources before you start getting the Cannot Create Native Thread OOM exception.
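An illustrative launch line combining both adjustments (the values and jar name here are examples only, not from the original post; shrink -Xmx by roughly what you add to MaxPermSize):

```
java -Xmx768m -XX:MaxPermSize=256m -jar myapp.jar
```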
Only information about the usage of allocated memory is dumped to the file.
A heap dump isn't a binary image of your heap; it contains information about data types etc., and may exceed your available memory.
Text (classic) Heapdump file format