JVM heap not released - java

I am new to analyzing memory issues in Java, so pardon me if this question seems naive.
I have an application running with the following JVM parameters set:
-Xms3072m -Xmx3072m
-XX:MaxNewSize=1008m -XX:NewSize=1008m
-XX:PermSize=224m -XX:MaxPermSize=224m -XX:SurvivorRatio=6
I am using VisualVM to monitor the usage; here is what I see (screenshot not included):
The problem is that even when the application is not receiving any data to process, the used memory doesn't go down. When the application is started, the used space starts low (around 1GB) but grows while the application runs, and then the used memory never goes down.
My question is: why doesn't the used heap memory go down even when no major processing is happening in the application, and what configuration can be set to correct it?
My understanding is that if the application is not doing any processing, then the used heap should be low and the available heap (or max heap) should remain the same, 3GB in this case.

This is a totally normal trend. Even if you believe the application is idle, there are probably threads running tasks that create objects, and those objects become unreferenced once the tasks are done. They are eligible for the next GC, but as long as no minor/major GC runs they take up more and more room in your heap, so usage climbs until a GC is triggered; then you get back to the normal heap size, and so on.
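As a rough illustration (the class name and the scheduled task below are invented for this sketch, not taken from your application), even an "idle" service with a periodic housekeeping job produces garbage that sits in the heap until the next collection:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleGarbageSketch {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Temporary objects referenced only while the task runs...
            List<byte[]> buffers = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                buffers.add(new byte[64 * 1024]);
            }
            // ...become unreachable as soon as the task returns, but they keep
            // occupying heap space (so "used heap" keeps climbing) until the
            // next minor/major GC actually runs.
        }, 0, 1, TimeUnit.SECONDS);
    }
}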
An abnormal trend would look the same, except that after a GC the heap usage would settle higher than it did just after the previous GC, which is not the case here.
Your real question is rather: what is my application doing when it is not receiving any data to process? For that, a thread dump should help: you can run jcmd to get the PID, then run jstack <pid> to get the thread dump.
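For example (the PID shown is a placeholder):

jcmd                              (with no arguments, lists the JVMs running as your user, one PID per line)
jstack 12345 > threaddump.txt     (replace 12345 with your application's PID)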
Here is an example of a typical trend in the case of a memory leak (chart not included):
As you can see, the starting heap size changes between two GCs: the new post-GC heap size is higher than the previous one, which may be due to a memory leak.

Related

java process consumes more memory over time but no memory leak [duplicate]

My Java service is running on a host with 16GB of RAM, with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service is consuming more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it used about 9GB+. It consumed about 100MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared the two dumps, but there was not much difference in heap usage: the dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing heap usage (as seen in the dumps) going from 1.1GB to 1.3GB. That is not, in itself, an indication of a problem. Certainly, the JVM is not using anywhere near as much memory as you have said it may.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold class metadata and bytecodes (in "metaspace") and JIT-compiled native code (in the code cache).
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
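For example, Native Memory Tracking (available in HotSpot since Java 8; the jar name below is a placeholder) reports most of these categories area by area, at the cost of a small overhead:

java -XX:NativeMemoryTracking=summary -Xms8g -Xmx8g -jar my-service.jar
jcmd <pid> VM.native_memory summary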
The third thing is that the memory actually used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Once a GC run has finished, the collector looks at the ratio of free space to used space; if that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
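For reference, the thresholds that drive this grow/shrink behavior are the heap free ratios; the values below are the usual defaults and are shown only for illustration (note that with -Xms equal to -Xmx, as in your case, the heap already starts at its maximum size):

java -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 ...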
Finally, the evidence you have presented does not (to me!) rule out a memory leak. I think you need to take the heap dumps further apart: I would suggest taking one dump two hours after startup and the second one two or more hours later. That should give you enough "leakage" to show up in a comparison of dumps.
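A sketch of one way to do that with jcmd (the PID and paths are placeholders):

jcmd <pid> GC.heap_dump /tmp/dump-after-2h.hprof     (about two hours after startup)
jcmd <pid> GC.heap_dump /tmp/dump-after-4h.hprof     (two or more hours later)

Comparing the histograms or dominator trees of the two dumps in a tool like Eclipse MAT should make a slow leak stand out.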
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a third-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] Or if you use SoftReferences, the GC will break them for you.
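A minimal sketch of the idea in the footnote (the class is invented for illustration; a real cache would also purge map entries whose referents have been cleared):

import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        // The GC is allowed to clear soft references when the heap gets close
        // to full, so memory held this way looks "used" but is reclaimable.
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get(); // null if the GC evicted it
    }
}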

Kubernetes Pod memory usage does not fall when the JVM runs garbage collection

I'm struggling to understand why my Java application is slowly consuming all memory available to the pod causing Kubernetes to mark the pod as out of memory. The JVM (OpenJDK 8) is started with the following arguments:
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
I'm monitoring the memory used by the pod as well as the JVM memory and was expecting to see some correlation, e.g. after a major garbage collection the pod memory used would also fall. However, I don't see this. I've attached some graphs below:
(Graphs not included: pod memory; total JVM memory; detailed breakdown of JVM memory.)
What I'm struggling with is: why, when there is a significant reduction in heap memory just before 16:00, does the pod's memory not also fall?
It looks like you are creating a pod with a resource limit of 1GB of memory.
You are setting -XX:MaxRAMFraction=2, which means you are allocating 50% of the available memory to the JVM heap; that seems to match what you are graphing as Memory Limit.
The JVM then reserves around 80% of that, which is what you are graphing as Memory Consumed.
When you look at Memory Consumed you will not see the effect of internal garbage collection (visible in your second graph), because the memory freed by GC is released back to the JVM but remains reserved (committed) by the process.
Is it possible that there is a memory leak in your Java application? It could cause more and more memory to stay reserved over time until the heap reaches its 512MB limit and, together with the JVM's non-heap memory, the pod's 1GB limit is exceeded and the pod gets OOM-killed.
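For reference, a rough sketch of the sizing arithmetic under these flags, assuming the 1GB container limit above:

max heap = container limit / MaxRAMFraction = 1024MB / 2 = 512MB
everything else (metaspace, code cache, thread stacks, GC bookkeeping, native allocations) has to fit in the remaining ~512MB

On newer JVMs (10+, or 8u191 and later) the percentage-based flag, e.g. -XX:MaxRAMPercentage=50.0, is generally easier to reason about than MaxRAMFraction.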

OutOfMemoryError heap dump

I have a java.lang.OutOfMemoryError: GC overhead limit exceeded.
The -XX:+HeapDumpOnOutOfMemoryError command-line option was not set for my application.
I need a heap dump, but when I try to capture one with the jmap or jcmd tools, they do not respond:
jmap
D:\program>jmap -dump:live,format=b,file=d:\dump4.hprof 8280
Dumping heap to D:\dump4.hprof ...
jcmd
D:\program>jcmd 8280 GC.heap_dump d:\dump6.hprof
8280:
The processes never complete, but dump files are created. When I open them with VisualVM, they load endlessly.
If I capture a heap dump of another process (e.g. VisualVM itself), the tools complete successfully and the dumps can be created and opened.
Could you please explain why jmap and jcmd are not completing? And how can I capture a dump of the application that hit the OutOfMemoryError? The application is still working, but there are only a few live threads.
One possibility is that the heap you are trying to dump is simply too large.
Please specify the size of the heap and of the RAM.
This error is not caused by your intended heap size being larger than the allocated heap size. It occurs when the JVM has spent too much time performing garbage collection while reclaiming very little heap space: your application has probably ended up using almost all of its heap, and the garbage collector has repeatedly spent too much time trying to clean it up and failed.
Your application's performance will also be comparatively slow, because the CPU is spending most of its capacity on garbage collection and cannot perform other tasks.
The following questions need to be addressed:
What are the objects in the application that occupy large portions of the heap?
In which parts of the source code are these objects being allocated?
You can also use graphical tools such as JConsole, which help detect performance problems in the code, including java.lang.OutOfMemoryError.
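If the application can be restarted, a more reliable route next time is to let the JVM write the dump itself at the moment of failure; it may also be worth trying jmap without the live option, since restricting the dump to live objects adds extra collection work, which is exactly what is struggling here (the paths below are placeholders):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=D:\dumps -jar yourapp.jar
jmap -dump:format=b,file=d:\dump5.hprof 8280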

Why is my Java heap dump size much smaller than used memory?

Problem
We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding memory leaks, but we found out how to make a Java heap dump using jmap and analyze it in Eclipse MAT.
However, with our application using 56/60GB memory, the heap dump is only 16GB in size and is even less in Eclipse MAT.
Context
Our server runs Wildfly 8.2.0 on Ubuntu 14.04 for our Java application, whose process uses 95% of the available memory. When we made the dump, the used space (including buffers/cache) was at 56GB.
We used the following command to create the dump: sudo -u {application user} jmap -dump:file=/mnt/heapdump/dump_prd.bin {pid}
The heap dump file is 16.4GB and, when analyzed with Eclipse MAT, it shows around 1GB of live objects and ~14.8GB of unreachable objects (shallow heap).
EDIT: Here is some more info about the problem we see happening. We monitor our memory usage and see it grow and grow until there is only ~300MB of free memory left. It then stays around that level until the process crashes, unfortunately without an error in the application log.
This makes us assume it is a hard OOM error, because it only happens when memory is nearly depleted. We use the settings -Xms25000m -Xmx40000m for our JVM.
Question
Basically, we are wondering why the majority of our memory isn't captured in this dump. The top retained-size classes don't look too suspicious, so we are wondering whether we are doing something wrong with the heap dump itself.
When dumping its heap, the JVM will first run a garbage collection cycle to free any unreachable objects.
(Related: How can I take a heap dump on Java 5 without garbage collecting first?)
In my experience, in a true OutOfMemoryError, where your application is simply demanding more heap space than is available, this GC is a fool's errand and the final heap dump will be roughly the size of the maximum heap.
When the heap dump is much smaller, that means the system was not truly out of memory, but perhaps had memory pressure. For example, there is the java.lang.OutOfMemoryError: GC overhead limit exceeded error, which means that the JVM may have been able to free enough memory to service some new allocation request, but it had to spend too much time collecting garbage.
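If you want to sanity-check this, you could compare what the JVM itself reports about heap occupancy with what the dump contains (the PID is a placeholder):

jstat -gcutil <pid> 1000 5                                        (per-generation occupancy, sampled five times at one-second intervals)
jmap -dump:live,format=b,file=/mnt/heapdump/dump_live.bin <pid>   (live keeps only reachable objects, so it should be close to the ~1GB that MAT reports)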
It's also possible that you don't have a memory problem. What makes you think you do? You didn't mention anything about heap usage or an OutOfMemoryError. You've only mentioned the JVM's memory footprint on the operating system.
In my experience, a heap dump that is much smaller than the real memory used can be due to a leak in JNI (native) code.
Even if you don't directly use any native code, certain libraries use it under the hood to speed things up.
In our case, it was Deflater and Inflater instances that were not properly ended.
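A sketch of the fix, assuming the code owns the Deflater directly (the helper below is invented for illustration; the same applies to Inflater.end()):

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflateUtil {
    // Always call end() so the Deflater's native (off-heap) buffers are
    // released deterministically instead of waiting for finalization.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            while (!deflater.finished()) {
                int n = deflater.deflate(chunk);
                out.write(chunk, 0, n);
            }
            return out.toByteArray();
        } finally {
            deflater.end(); // the call that was missing in our case
        }
    }
}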

Java program (Tomcat) keeps eating memory (RES in top)

I am running Tomcat on a 4-CPU, 32GB-memory, 64-bit machine (the OS is CentOS 6.3). The Java options I start Tomcat with are -server -Xms1024m -Xmx1024m -XX:PermSize=512m -XX:MaxPermSize=512m
At the beginning, RES in top is just 810MB, and it keeps increasing. During this period I run jmap -J-d64 -histo <pid> to check the Java heap, and I think GC works fine, because the heap peaks at 510MB and drops to around 200MB after GC. But when RES in top hits 1.1g, the CPU usage exceeds 100% and Tomcat hangs.
Using jstack <pid> to take a thread dump when the CPU usage is at 100%, I see a thread named "VM Thread" eating almost 100% of a CPU. From what I found, this is the JVM's GC thread. So my question is: why does RES keep growing when GC seems to work fine, and how can I resolve this problem? Thanks.
This might be a PermGen leak. If your heap stays at ~500MB and -XX:MaxPermSize is set to 512MB, a full PermGen will bring you to about 1GB of memory usage.
It can happen if you have lots (like, lots!) of JSPs being loaded late in the lifecycle of your program, or if you have used String.intern() a lot.
Follow this thread for further investigation: How to dump Permgen?
And this thread for tuning the GC to sweep PermGen: What does the JVM flag CMSClassUnloadingEnabled actually do?
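For a JVM of that generation (Java 7 and earlier with CMS), the options and tooling discussed in those threads look roughly like this; treat them as a starting point, not a recommendation:

java -server -Xms1024m -Xmx1024m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled ...
jmap -J-d64 -permstat <pid>     (shows which class loaders are holding how many classes in PermGen)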
If the garbage-collection thread is spinning at 100%, it is likely trying to perform garbage collection but finding that none of the objects can be collected, so you are in a garbage-collection death spiral where it keeps running without being able to free any memory.
This may happen because you have a memory leak in your program, or because you are simply not giving the VM enough memory to handle the number of objects you load during regular use. It sounds like you have plenty of headroom to increase the heap size. That may only prolong the time before you hit the death spiral, or it may get you to a stable running state; you'll want to run a test for quite some time to make sure you're not just delaying the inevitable.
