How to reduce committed heap memory in the JVM (Java)

Our JVMs are consuming more memory than expected. We did some profiling and found that there is no leak. Used heap peaks at about 2.9 GB and drops back to around 800 MB when the application is idle, but committed heap grows to 3.5 GB (sometimes 4 GB) and never comes down. Also, after the idle period, when used heap climbs again from 800 MB, committed heap grows further from 3.5 GB. As a result our servers reach their maximum memory size quickly and we have to restart them every other day.
My questions are:
1. My understanding is that committed heap memory is the memory the JVM has currently allocated. When used heap memory shrinks, why does the committed memory not shrink as well?
2. When used heap memory grows again from its level (800 MB), why does committed heap memory also grow from its level (3.5 GB)?
We have the following memory settings on our servers:
-Xmx4096M -Xms1536M -XX:PermSize=128M -XX:MaxPermSize=512M

You can try tuning -XX:MaxHeapFreeRatio, which is the "maximum percentage of heap free after GC to avoid shrinking". Its default value is 70.
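For example (a sketch with assumed values, not a drop-in fix; pick percentages that match your workload), lowering the free-ratio bounds tells the JVM to give committed-but-unused heap back to the OS sooner after a GC:
-Xms1536M -Xmx4096M -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:PermSize=128M -XX:MaxPermSize=512M
Shrinking only happens as part of a GC cycle, and how aggressively the heap is actually returned also depends on which collector you are running.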

Related

JVM in a container always using maximum ram percentage

I have the following VM Options:
-XX:InitialRAMPercentage=50 -XX:MaxRAMPercentage=80
As stated earlier, this JVM runs inside a Docker container, and memory allocation to the pod is managed by Kubernetes. If I assign 10 GB to the pod, 8 GB of heap is committed; if I assign 20 GB, 16 GB is committed. What I actually see used, though, is only around 2.5 GB of heap, and the 8 GB or 16 GB is committed from the very beginning of the application's run.
Any ideas on how to keep it around the 2.5 GB used, plus some free space to allow for expansion?
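For what it's worth (a sketch with assumed values, not a recommendation): -XX:MaxRAMPercentage=80 is what turns 80% of the pod's memory (8 GB of 10 GB, 16 GB of 20 GB) into the maximum heap size. Starting smaller and letting the heap shrink back after GC keeps committed memory closer to used memory:
-XX:InitialRAMPercentage=25 -XX:MaxRAMPercentage=80 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40
Whether committed heap is actually handed back to the OS still depends on the collector and JVM version in use.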

Why does memory go down after I take a heap dump?

I am running a web application on WildFly 9 and I've allocated X GB of memory. Over a period of time memory accumulates, and I run the jmap utility to collect a heap dump for analysing the memory utilization.
When I do this, the memory consumed by java.exe comes down from X to X-Y GB. Sometimes Y amounts to gigabytes of memory.
Why does this happen? If this chunk of memory is marked to be cleared, why doesn't GC clear it?
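A likely explanation, assuming the dump is taken with the :live option (as in the jmap command shown further down this page): -dump:live forces a full GC before writing the dump so that only reachable objects are included, and that full GC frees garbage an ordinary collection cycle had not yet reclaimed:
/usr/<java_version>/bin/jmap -dump:live,format=b,file=heap.bin <pid>
So the memory was not "marked to be cleared" and ignored; it was unreachable garbage the collector simply had not needed to collect yet, and the dump request triggered the collection.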

Spring Boot App memory consumption

1) Ours is a Spring Boot/Java 8 application that we run with -Xms256m and -Xmx2g.
2) Our release engineers run the top command on the Unix server where the application is running, and they report that the application is using 3.5 GB.
3) When I profile our application's production JVM instance using VisualVM, I see that the used heap is only about 1.4 GB.
Any thoughts on why the memory consumption numbers are so different between #2 and #3, above?
Thanks for your feedback.
The -Xmx parameter only sets a maximum size for the Java heap. Any memory outside the Java heap is not limited/controlled by -Xmx.
Examples of non-heap memory usage include thread stacks, direct memory and perm gen (which in Java 8 has been replaced by Metaspace).
The total virtual memory used (as reported by top) is the sum of heap usage (which you have capped with -Xmx) and non-heap usage (which cannot be capped by -Xmx).
So, the numbers in #2 and #3 are not comparable because they are not measurements of the same thing.
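If you want to see where the non-heap memory goes, one option (an aside, not part of the original answer) is the JVM's Native Memory Tracking. Start the JVM with tracking enabled and query it with jcmd; app.jar below is just a placeholder for your Spring Boot jar:
java -XX:NativeMemoryTracking=summary -Xms256m -Xmx2g -jar app.jar
jcmd <pid> VM.native_memory summary
This prints a per-category breakdown (Java heap, thread stacks, class/metaspace data, GC structures, code cache, ...) of the memory the JVM itself has reserved and committed.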
They will never be the same, but if you want to bring them closer together (or at least have more control over the amount of virtual memory used) then you might consider using:
-XX:MaxPermSize to limit perm gen size (on Java 8, where PermGen no longer exists, the equivalent is -XX:MaxMetaspaceSize)
-XX:MaxHeapFreeRatio to encourage more aggressive heap shrinkage
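Putting these together as a single command line (a sketch with placeholder values; size the limits from your own measurements, and app.jar again stands in for your application):
java -Xms256m -Xmx2g -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=256m -Xss512k -XX:MaxHeapFreeRatio=40 -jar app.jar
-XX:MaxDirectMemorySize and -Xss cap the other non-heap consumers mentioned above (direct memory and per-thread stack size).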

Several Full GCs, heap size reached -Xmx (1.2 GB) but heap dump shows data for only 200 MB

We noticed frequent Full GC entries in the GC log (for more than 3-4 hrs). Our -Xmx was 1.2 GB and the Full GCs didn't recover much; used heap stayed at around 1 GB.
To see what is held in the heap we took a heap dump, but in Memory Analyzer Tool we see only 30% of the heap occupied and the remaining 70% free.
The heap dump file is about 1 GB in size, though.
We used the :live option to take the heap dump:
/usr/<java_version>/bin/jmap -dump:live,format=b,file=heap.bin <pid>
Is there any other way to get a complete snapshot of the heap?
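One thing worth knowing (based only on the command above): the :live option forces a full GC and then dumps just the reachable objects, so the dump reflects the heap after unreachable garbage has been collected. Dropping :live should dump all objects, reachable or not, which is usually closer to what the GC log is reporting; the output file name here is arbitrary:
/usr/<java_version>/bin/jmap -dump:format=b,file=heap_all.bin <pid>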

GC spinning all the time despite much free heap

I have an application running with -mx7000m. I can see it has allocated a 5.5 GB heap, yet for some reason it is GCing constantly, and since it uses CMS this turns out to be quite CPU intensive. So it already has 5.5 GB of heap allocated, but somehow it is spending all its CPU time trying to keep the used heap as small as possible, which is around 2 GB, and the other 3.5 GB are allocated by the JVM but unused.
I have a few servers with the exact same configuration and 2 out of 5 are behaving this way. What could explain it?
It's Java 6, the flags are: -server -XX:+PrintGCDateStamps -Duser.timezone=America/Chicago -Djava.awt.headless=true -Djava.io.tmpdir=/whatever -Xloggc:logs/gc.log -XX:+PrintGCDetails -mx7000m -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC
The default threshold for triggering a CMS collection is the tenured space being 70% full (in Java 6). A rule of thumb is that the heap size should be about 2.5x the heap used after a full GC (but your use case is likely to be different).
So in your case, say you have:
- 2.5 GB of young generation space
- 3 GB of tenured space
When your tenured space reaches 70% occupancy, i.e. ~2.1 GB, CMS will start cleaning up that region.
The setting involved is -XX:CMSInitiatingOccupancyFraction=70.
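If the trigger threshold really is the problem, one option (a sketch; the 80% fraction and the -Xmn size are illustrative assumptions, not recommendations) is to raise the occupancy fraction, pin it so CMS uses only that value rather than its own adaptive estimate, and size the young generation explicitly:
-server -mx7000m -Xmn2500m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseCMSInitiatingOccupancyOnly is what stops the JVM from overriding the configured fraction with its runtime statistics.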
However, if you want to reduce the impact of GC, the simplest thing to do is to create less garbage, i.e. use a memory profiler and make sure your allocation rate is as low as possible. Your system will behave very differently if you are creating as much garbage as the CPUs can handle versus, say, 100 MB/s or 1 MB/s or less.
The reason your servers behave differently could be that the relative sizes of the regions differ: if you had, say, 0.5 GB of young space and 5.0 GB of tenured space, you shouldn't be seeing this, because tenured would have to reach ~3.5 GB before CMS kicked in. The difference could come down purely to how busy the machine was when you started the process, or what it has done since then.
