I need some guidance on the Java 11 GC algorithm for the JVM. We are migrating our application from JDK 8 to Java 11. We are seeing spikes in memory usage with the GC algorithm that is the default in Java 11, i.e., G1. Earlier we used CMS. Our previous JVM startup params for GC were as below:
Application infrastructure: 8-core CPU, 16 GB RAM, Linux EC2 (c5.2xlarge)
JDK 8 : -d64 -server -Xms4g -Xmx12g -XX:NewRatio=1 -XX:SurvivorRatio=4 -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat -XX:CMSInitiatingOccupancyFraction=70
Here we see average memory usage of 4 GB to 6 GB when there is full load on the system.
Java11 : -server -Xms12g -Xmx12g -verbose:gc -Xlog:gc:gc.log -XX:+PrintGCDetails -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=20 -XX:+UseStringDeduplication -Xlog:safePoint -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat
Here we see the average memory usage gradually increase up to about 90% under full load, and that is when GC runs and frees up the space. Also, we do not see the memory usage coming down when there is no load at all on the system. I read that this is the expected behavior of G1.
Kindly advise!
Application behavior is similar in both cases.
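One thing that stands out in the new configuration (a sketch only, not verified against this workload): with -Xms12g equal to -Xmx12g the committed heap is pinned at 12 GB, so G1 has nothing to give back to the OS even at zero load. Allowing the heap to resize and lowering the free-ratio bounds so the JVM shrinks the heap more eagerly, for example:

-Xms4g -Xmx12g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40

lets G1 shrink the heap, but note that on Java 11 G1 only returns memory to the OS at a full GC or at the end of a concurrent cycle; the periodic uncommit option (-XX:G1PeriodicGCInterval) only arrived in Java 12.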
Related
Our application runs 24*7 under heavy user load. Recently we started having issues with system performance and the application not responding, and we had to restart the JVM to get it back online. While investigating we found that when the JVM runs GC, the application slows down and sometimes does not respond at all.
Below are our GC stats for that period.
GC Heap / GC Summary (charts not reproduced here)
Looking at this data, are there any recommendations on what to look for, or does any JVM configuration need to be corrected?
Java version is 7. GC settings are: -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -Xloggc:/opt/Server/logs/gclog.log -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Xms4096M -Xmx6144M -XX:PermSize=512M -XX:MaxPermSize=1024M -XX:ErrorFile=./log/error.log -XX:HeapDumpPath=./log/heap_dump.hprof
I am working on a Struts 1.x web application running on a WebLogic server. When performing some processing on very large data, it consumes almost all of the RAM, but after the processing completes or the session times out, it does not return the consumed RAM.
It never releases the RAM until I stop the server or kill it forcefully. I want to understand this scenario where RAM is not released after usage, and whether it can be solved by tuning WebLogic parameters or in some other way.
We are using the following parameters in WebLogic:
-Xms12g -Xmx12g -XX:MetaspaceSize=1G -XX:MaxMetaspaceSize=1G
-XX:NewRatio=3 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
-DBASE_CONFIG_LA=/data01/Lending_Analytics_weblogic/LendingAnalyticsPQM/config/
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError
-XX:+UseGCOverheadLimit -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2m -Xloggc:/data01/Lending_Analytics_weblogic/LendingAnalyticsPQM/GCLogs/gc.log
*** We are not able to find any GC issue after analyzing the GC logs.
*** We fixed all the memory leak issues found in the code through the Eclipse memory analyzer.
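When the complaint is about RAM that the operating system never sees returned, it can help to compare the JVM's own view of the heap with the process footprint the OS reports. A minimal sketch, assuming a HotSpot JDK and a known process id (the pid below is a placeholder):

# HotSpot's view of heap capacity and current usage
jmap -heap <pid>

# resident set size the OS attributes to the same process, in KB
ps -o rss= -p <pid>

If jmap shows the heap largely free while RSS stays near 12 GB, that is expected with this configuration: -Xms12g equal to -Xmx12g means the full 12 GB heap is reserved up front and, once touched, is not handed back to the OS, regardless of how much of it the application is actually using.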
I am trying GC log rotation in JDK 8, and I have achieved it by using the GC log JVM parameters below:
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:verbose-jdk8-gc.log -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1k
But now I also want the log files to be compressed when rotation is done.
Is there any JVM parameter in JDK 8 for compressing the GC log?
Can anyone help?
On Linux, you can use logrotate (https://linux.die.net/man/8/logrotate) to manage the rotation and compression of the JVM GC log.
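A minimal sketch of a logrotate rule for this (the log path is a placeholder for wherever -Xloggc writes; the size and count are just examples, and copytruncate is used because the JVM keeps the log file open):

/path/to/verbose-jdk8-gc.log {
    size 10M
    rotate 5
    compress
    missingok
    copytruncate
}

If you go this route, it is simpler to drop -XX:+UseGCLogFileRotation and let logrotate handle rotation on its own, so the two mechanisms do not fight over the same files.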
I am running a Fortify static code scan.
main\Src>sourceanalyzer -64 -Xmx6500m -b project -scan -f project.fpr
The JDK is 1.8
java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
After 20 hours, I got a PermGen OOM error:
[error]: Unexpected exception: java.lang.OutOfMemoryError: PermGen space
^Cendering results 99% [====================]
According to a lot of resources, PermGen is obsolete in Java 8.
Any Ideas?
There is the option to increase PermGen space:
-XX:MaxPermSize=128m
There is also the option to enable garbage collection of the PermGen space:
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
Nevertheless, you should check with HP whether it is actually necessary to increase PermGen space; it could also be a bug in the software. PermGen out-of-memory errors are often caused by memory leaks.
Fortify uses its own JRE (look under [Fortify Install Root]\jre and [Fortify Install Root]\jre64).
If you use -debug-verbose -logfile Project_Scan.txt, you can see whether the scanning engine is running into memory pressure anywhere, since it dumps memory usage every so often.
Be sure you are using the latest version of Fortify. If your scan is taking 20 hours, it could be that the latest version addresses the speed and memory issues. The latest is 4.20.
To speed up the scan, have you looked at the Parallel Analysis Mode? This was introduced in 4.00 and you can read about it in the User Guide, Appendix B.
I did some heavy memory tuning with the translation cycle at one point and the following worked well. It should also work for the scan cycle. Please note that this was an extreme memory usage case and this is not the best for all Fortify usage.
-XX:MaxPermSize=256m -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
You can, and should, use JConsole to watch the process.
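If the scan machine is headless, a rough command-line alternative (assuming the scan JVM's pid is known; the pid below is a placeholder) is to sample the collector counters instead:

# print heap-occupancy and GC percentages every 5 seconds
jstat -gcutil <pid> 5000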
I think in Java 8 the PermGen space has been completely replaced with Metaspace. The JVM options
-XX:PermSize and -XX:MaxPermSize
have been replaced by
-XX:MetaspaceSize and -XX:MaxMetaspaceSize, respectively.
Try that.
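For illustration only, a generic invocation with the Java 8 replacement flags (the jar name and sizes are placeholders, and this is a plain java launch rather than the Fortify command line):

java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -Xmx6500m -jar app.jar

That said, a PermGen error appearing despite a Java 8 java -version suggests the scan may actually be running on the bundled JRE mentioned earlier, in which case the -XX:MaxPermSize route still applies.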
We performed a server migration from Solaris SunOS 5.10 to a Red Hat Linux VM recently. The JVM was upgraded from 1.5.0_22 (32-bit) to 1.6.0_06 (64-bit).
However, since then we have encountered OutOfMemory errors frequently. We had read that a 64-bit JVM can require 30-50% more heap, so we increased our heap size from 1200 MB to 2048 MB and gave it a try. However, we still observed OOMEs after the server had run for a few days.
Upon checking the GC log, we found that Full GCs happened frequently after the server had been running for a few days, and each Full GC released only a little memory; the frequent Full GCs slow down the application.
As you can see in the excerpt of the GC log, almost no memory was released in PSOldGen:
205023.895: [Full GC [PSYoungGen: 225919K->157256K(240960K)] [PSOldGen: 1841151K->1841151K(1841152K)] 2067071K->1998408K(2082112K) [PSPermGen: 108720K->108720K(109056K)], 6.2785770 secs] [Times: user=6.23 sys=0.01, real=6.28 secs]
Heap after GC invocations=1638 (full 251):
PSYoungGen total 240960K, used 157256K [0x00002aab2e800000, 0x00002aab3e200000, 0x00002aab3e200000)
eden space 225920K, 69% used [0x00002aab2e800000,0x00002aab38192208,0x00002aab3c4a0000)
from space 15040K, 0% used [0x00002aab3d350000,0x00002aab3d350000,0x00002aab3e200000)
to space 15040K, 0% used [0x00002aab3c4a0000,0x00002aab3c4a0000,0x00002aab3d350000)
PSOldGen total 1841152K, used 1841151K [0x00002aaabe200000, 0x00002aab2e800000, 0x00002aab2e800000)
object space 1841152K, 99% used [0x00002aaabe200000,0x00002aab2e7fffc8,0x00002aab2e800000)
PSPermGen total 109056K, used 108720K [0x00002aaaae200000, 0x00002aaab4c80000, 0x00002aaabe200000)
object space 109056K, 99% used [0x00002aaaae200000,0x00002aaab4c2c3f8,0x00002aaab4c80000)
}
Here is the heap usage pattern for a single OC4J instance over 24 hours, which is quite strange to me: it doesn't show a zig-zag path but instead some random pattern.
May I know what I can do?
Server config:
Red Hat Enterprise Linux Server release 5.7 (Tikanga), kernel 2.6.18-274.el5 (64-bit)
CPU: 8 cores, 16 GB RAM
JVM version : Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
Application server : OC4J 10.1.3.5
JVM startup arguments:
// Old config
-server -Xms1200M -Xmx1200M -XX:MaxPermSize=64M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
// New config
-server -Xms2048M -Xmx2048M -XX:MaxPermSize=256M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
Either this is a memory leak in the application or a bug in Java memory management.
Since the 1.6.0_06 release there have been a whole bunch of bug fixes regarding memory management and garbage collection, for example these two:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6676016
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6824570
So try the following:
Upgrade to a newer Java 1.6.0 build. (If that is not possible, you can change the GC strategy from, for example, parallel to concurrent to see if that bypasses a potential Java bug.)
Troubleshoot your application for a memory leak by taking a heap dump (jmap) to see what's occupying the heap.
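A rough sketch of both suggestions (the pid is a placeholder, and the flags are just one common way to switch a Java 6 VM to the concurrent collector):

# dump the live heap for offline analysis, e.g. in Eclipse MAT or jhat
jmap -dump:live,format=b,file=heap.hprof <pid>

# replace the default parallel collector with CMS in the JVM startup arguments
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Given that PSOldGen sits at 99% after every Full GC in the log above, comparing a heap dump from a freshly started server with one that has been running for a few days is the most direct way to see what is accumulating.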