I am running my Java application on CentOS 6 with OpenJDK version "1.8.0_232", using G1GC. I am seeing the total heap usage grow gradually until the application crashes. When I take a heap dump of live objects,
the dump size is only 1.6 GB, but my total used heap was 32 GB.
Command used for taking dump: jmap -dump:live,format=b,file=/tmp/dump.hprof
I read somewhere that the jmap dump command triggers a full GC and releases unreachable heap, which is why the dump is smaller. I can see that after triggering the dump command my total heap usage comes down, and then it starts growing gradually again.
My JVM args : -XX:-AllowUserSignalHandlers -Xmx49000m -DFCGI_PORT=6654 -XX:+UseG1GC -XX:+UseStringDeduplication -XX:InitiatingHeapOccupancyPercent=55 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/xyz -XX:+PerfDisableSharedMem -Djava.io.tmpdir=/var/XXX/temp
Is there a better way to efficiently do full GC with G1?
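For reference, two ways to force a full GC without writing a dump file (a sketch only; <pid> is a placeholder and this assumes the jmap/jcmd binaries from the running JDK):
# Prints a histogram of live objects, which forces a full GC as a side effect (no dump file is written)
jmap -histo:live <pid> > /dev/null
# Or ask the JVM for a collection directly; this is effectively System.gc(), ignored if -XX:+DisableExplicitGC is set
jcmd <pid> GC.run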
This is because what is being presented to you is the committed memory, which is different from the used memory.
In more recent versions of Java, the GC algorithms have been improved so that committed memory is released back to the operating system more promptly.
Use Java Mission Control to see your memory in more detail (committed memory vs. used memory).
My recommendation is to use a newer version of Java; if that is not possible, lower the -Xmx value (e.g. 3 GB). You will notice that the JVM will always approach the defined limit.
https://openjdk.java.net/jeps/346
https://openjdk.java.net/jeps/351
https://blog.idrsolutions.com/2019/09/improved-garbage-collection-in-java-13/
https://www.slideshare.net/jelastic/choosing-right-garbage-collector-to-increase-efficiency-of-java-memory-usage
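To make the JEP 346 change above concrete, here is a rough sketch of the flags involved (G1PeriodicGCInterval exists only from JDK 12 onward, and the values shown are arbitrary placeholders, not recommendations):
# JDK 12+: trigger a periodic concurrent cycle when the JVM is otherwise idle so G1 can uncommit
# unused heap and return it to the OS; the free-ratio flags steer how much committed heap G1 keeps
java -XX:+UseG1GC \
     -XX:G1PeriodicGCInterval=60000 \
     -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
     -jar app.jar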
I built a simple application using Spring Boot. When I deploy it to a Linux server with the ZGC garbage collector, it uses a lot of memory. I tried to limit the maximum heap to 500 MB with -Xmx500m, but the Java process still used more than 1 GB. When I used the G1 collector, it only used 350 MB. I don't know why; is this a bug in JDK 11, or is there a problem with my startup parameters?
#### Runtime environment
operating system: CentOS Linux release 7.8.2003
JDK version: jdk11
springboot version: v2.3.0.RELEASE
Here is my Java startup command
java -Xms128m -Xmx500m \
-XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
-jar app.jar
Here is a screenshot of the memory usage at run time
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201259.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201357.png?raw=true
Here's what happens when you use the default garbage collector
Java startup command
java -Xms128m -Xmx500m \
-jar app.jar
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202442.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202421.png?raw=true
By default, JDK 11 uses the G1 garbage collector. Theoretically, shouldn't G1 be more memory intensive than ZGC? Why am I seeing the opposite? Did I misunderstand something? Since I'm a beginner with the JVM, I don't understand why.
ZGC employs a technique known as colored pointers. The idea is to use some free bits in 64-bit pointers into the heap for embedded metadata. However, when dereferencing such pointers, these bits need to be masked, which implies some extra work for the JVM.
To avoid the overhead of masking pointers, ZGC relies on a multi-mapping technique. Multi-mapping is when multiple ranges of virtual memory are mapped to the same range of physical memory.
ZGC uses 3 views of Java heap ("marked0", "marked1", "remapped"), i.e. 3 different "colors" of heap pointers and 3 virtual memory mappings for the same heap.
As a consequence, the operating system may report 3x larger memory usage. For example, for a 512 MB heap, the reported committed memory may be as large as 1.5 GB, not counting memory besides the heap. Note: multi-mapping affects the reported used memory, but physically the heap will still use 512 MB in RAM. This sometimes leads to a funny effect that RSS of the process looks larger than the amount of physical RAM.
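One way to see this from the OS side (a sketch, assuming a Linux kernel that exposes Pss in smaps; <pid> is a placeholder) is to compare Rss, which counts every mapping of a page, with Pss, which divides each page by its number of mappings:
# For a ZGC process, the summed Pss is typically much closer to real physical usage than Rss
awk '/^Rss:/ {rss += $2} /^Pss:/ {pss += $2} END {print "Rss:", rss, "kB  Pss:", pss, "kB"}' /proc/<pid>/smaps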
See also:
ZGC: A Scalable Low-Latency Garbage Collector by Per Lidén
Understanding Low Latency JVM GCs by Jean Philippe Bempel
The JVM uses much more than just the heap memory. Read this excellent answer to understand JVM memory consumption better: Java using much more memory than heap size (or size correctly Docker memory limit)
You'll need to go beyond the heap inspection and use things like Native Memory Tracking to get a clearer picture.
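A minimal sketch of how Native Memory Tracking is typically enabled and queried (summary level; <pid> and app.jar are placeholders):
# Start the JVM with NMT enabled (adds some overhead)
java -XX:NativeMemoryTracking=summary -Xms128m -Xmx500m -jar app.jar
# Then, from another shell, print the per-category native memory breakdown
jcmd <pid> VM.native_memory summary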
I don't know what the particular issue with your application is, but ZGC is often mentioned as a good fit for large heaps.
It's also a fairly new collector that has received many changes recently; I'd upgrade to JDK 14 if you want to use it (see the "Change Log" here: https://wiki.openjdk.java.net/display/zgc/Main)
This is a result of the throughput-latency-footprint tradeoff. When choosing between these 3 things, you can only pick 2.
ZGC is a concurrent GC with low pause times. Since you presumably don't want to give up throughput either, you keep latency and throughput and pay for them with footprint. So there is nothing surprising about such high memory consumption.
G1 is not a low-pause collector, so you shift that tradeoff towards footprint and get bigger pause times but win some memory.
The amount of OS memory the JVM uses (i.e., the committed heap) depends on how often the GC runs (and also on whether it uncommits unneeded memory if the app starts to use less), and this is tunable. Unfortunately ZGC isn't (currently) as aggressive about this by default as G1, but both have tuning options that you can try.
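For example, a sketch of the relevant ZGC knobs (available from roughly JDK 13 onward; the delay value is arbitrary, and uncommit only works when -Xms is smaller than -Xmx):
# Return heap memory that has been unused for 60 seconds back to the OS
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
     -XX:+ZUncommit -XX:ZUncommitDelay=60 \
     -Xms128m -Xmx500m -jar app.jar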
P.S. As others have noted, the RES htop column is misleading, but the VisualVM chart shows the real picture.
My Tomcat Java 8 application is running with a 24 GB -Xmx on a 32 GB virtual machine with the G1 GC. It processes large files, each 5-7 GB in size.
After deployment, while processing one of these files, the VM starts to use more than 90% of the allocated memory. This value never drops and only increases, even though processing is over and Java has a ton of unused heap (~7 GB).
It dropped, however, once we triggered a full GC from VisualVM. We've also noticed the application had only one major GC during its lifetime, which was the full GC we triggered from VisualVM.
It's clear that Java has committed the heap and doesn't want to give it back to the OS.
The question is: how do we get rid of this behavior and return memory to the OS (it looks like G1 doesn't do this until Java 12), or how do we investigate why the remaining 8 GB of non-heap space is full? It seems I can't do that with VisualVM.
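For context, a hedged sketch of the pre-JDK-12 workaround hinted at above: on JDK 8, G1 only shrinks the heap at a full GC, and then only as far as the free-ratio flags allow, so one option is to set those flags and trigger the full GC explicitly (the values and <pid> are placeholders):
# Let a full GC uncommit more of the heap when it runs (e.g. via Tomcat's JAVA_OPTS)
JAVA_OPTS="$JAVA_OPTS -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40"
# Trigger the full GC from the command line, roughly what the VisualVM "Perform GC" button does
jcmd <pid> GC.run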
We currently have problems with a Java native memory leak. The server is quite big (40 CPUs, 128 GB of memory). The Java heap size is 64 GB, and we run a very memory-intensive application that reads a lot of data into strings with about 400 threads and throws it away after a few minutes.
So the heap fills up very fast, but objects on the heap become obsolete and can be GCed very fast, too. We have to use G1 to avoid stop-the-world pauses lasting minutes.
Now, that seems to work fine: the heap is big enough to run the application for days, and nothing is leaking there. Nevertheless, the Java process keeps growing over time until all 128 GB are used and the application crashes with an allocation failure.
I've read a lot about native Java memory leaks, including the glibc issue with the maximum number of malloc arenas (we are on Debian wheezy with glibc 2.13, so no fix is possible here by setting MALLOC_ARENA_MAX=1 or 4 without a dist upgrade).
So we tried jemalloc, which gave us profiling graphs for inuse-space and inuse-objects (graphs not reproduced here).
I can't figure out what the issue is here; does anyone have an idea?
If I set MALLOC_CONF="narenas:1" for jemalloc as an environment variable for the Tomcat process running our app, could the process still somehow end up using the glibc malloc implementation anyway?
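As far as I understand, MALLOC_CONF is read only by jemalloc itself, so it should only take effect if jemalloc is actually loaded into the process, e.g. via LD_PRELOAD. A sketch of how that is typically done for Tomcat (the library path is an assumption and varies by distribution):
# e.g. in Tomcat's bin/setenv.sh; the .so path is an assumption, adjust for your system
export LD_PRELOAD=/usr/lib/libjemalloc.so.1
export MALLOC_CONF=narenas:1   # only jemalloc reads this; glibc malloc ignores it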
This is our G1 setup; maybe there is some issue here?
-XX:+UseCompressedOops
-XX:+UseNUMA
-XX:NewSize=6000m
-XX:MaxNewSize=6000m
-XX:NewRatio=3
-XX:SurvivorRatio=1
-XX:InitiatingHeapOccupancyPercent=55
-XX:MaxGCPauseMillis=1000
-XX:PermSize=64m
-XX:MaxPermSize=128m
-XX:+PrintCommandLineFlags
-XX:+PrintFlagsFinal
-XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution
-XX:-UseAdaptiveSizePolicy
-XX:+UseG1GC
-XX:MaxDirectMemorySize=2g
-Xms65536m
-Xmx65536m
Thanks for your help!
We never called System.gc() explicitly, and in the meantime we stopped using G1, specifying nothing other than -Xms and -Xmx.
We are therefore using nearly all of the 128 GB for the heap now. The Java process's memory usage is high, but it has been constant for weeks. I'm sure this is a G1-specific, or at least a general GC, issue. The only disadvantage of this "solution" is the long GC pauses, but they decreased from up to 90 s to about 1-5 s as we increased the heap, which is acceptable for the benchmark we run on our servers.
Before that, I played around with the -XX:ParallelGCThreads option, which had a significant influence on the speed of the memory leak when decreasing it from 28 (the default for 40 CPUs) down to 1. The memory graphs looked somewhat like a hand fan when using different values on different instances...
I've been profiling the x64 version of my application because the memory usage has been outrageously high. All of it seems to be coming from the JavaFX MediaPlayer, and I am correctly releasing listeners and event handlers.
Here is the stark contrast.
The x32 version at start
And now the x64 version at start
The x32 version stays below 256 MB, while the x64 version will shoot over a gigabyte; this is while both are left to play through their playlist.
All the code is the same.
JDK: jdk1.8.0_20
JRE: jre1.8.0_20
VM arguments on both
-XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -Xms3670k -Xmx256m -Dsun.java2d.noddraw=true -XX:+UseParallelGC
Same issue occurring on another x64 Java application
Is this a bug or am I overlooking something?
What you are seeing is the memory usage of the entire JVM running your process. The -Xmx256m setting only limits the maximum heap space available for your application to allocate (and the JVM would enforce that). Outside of heap space, the JVM can use additional memory for a host of other purposes (I am sure I will miss a few in the list below):
PermGen, which has now been replaced by Metaspace. According to the documentation, there is no default limit for this:
-XX:MaxMetaspaceSize=size
Sets the maximum amount of native memory that can be allocated for class metadata. By default, the size is not limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system.
Stack space (memory used = (number of threads) * (stack size)); you can control the stack size with the -Xss parameter
Off-heap space (either use of ByteBuffers in your code, or use of third-party libraries like EHCache, which would in turn use off-heap memory)
JNI code
GC (garbage collectors need their own memory, which is again not part of the heap and can vary greatly depending on the collector used and the application memory usage)
In your case, you are seeing the "almost doubling" of memory use, plus probably a more relaxed Metaspace allocation, when you move from a 32-bit to a 64-bit JVM. Using -XX:MaxMetaspaceSize=128m will probably bring memory usage down to under 512 MB for the 64-bit JVM.
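As a rough sketch of how those non-heap areas can be capped alongside the heap (the values are illustrative only, and app.jar is a placeholder jar name):
# Cap class metadata, per-thread stack size, and direct (off-heap) buffers in addition to the heap
java -Xmx256m \
     -XX:MaxMetaspaceSize=128m \
     -Xss512k \
     -XX:MaxDirectMemorySize=64m \
     -jar app.jar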
I don't know your application or how it is implemented.
One possible reason for such a surprising difference could be how much memory can be used before a garbage collection is performed. It is conceivable that on a machine with 64-bit words more memory is allocated than on a machine with 32-bit words. The garbage collector would then run less often, so there would be more garbage still allocated, even when that is not really necessary or useful.
I have a Java application with the arguments below, but the heap is not getting reclaimed even when the total free space is greater than 45% (as seen via VisualVM). Is there any reason the JVM wouldn't free that heap space? The same settings work as expected on Java 6; we are running on a Java 5 runtime and compiling against Java 5.
java -jar -Xmx1024m -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=45 -XX:+HeapDumpOnOutOfMemoryError <myjarname>
We believe we have found an answer to the question. The systems we are running on are server-class machines with multiple CPUs. The JRE was detecting the multiple CPUs and selecting the parallel GC instead of the serial GC, and the parallel GC is not compatible with the -XX:MaxHeapFreeRatio setting.
The only GC that gives memory back to the OS on my system is G1 (-XX:+UseG1GC).
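Based on that finding, a hedged sketch of pinning the collector explicitly instead of letting the JRE auto-select the parallel collector on server-class hardware (G1 is shown because the answer above reports it as the one that returns memory on this system, which assumes a JVM recent enough to include G1; <myjarname> is the same placeholder as above):
java -XX:+UseG1GC -Xmx1024m -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=45 -XX:+HeapDumpOnOutOfMemoryError -jar <myjarname>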