Full GC releases only a little memory in PSOldGen - java

We performed a server migration from Solaris SunOS 5.10 to a Red Hat Linux VM recently. The JVM was upgraded from 1.5.0_22 (32-bit) to 1.6.0_06 (64-bit).
However, since then we have encountered OutOfMemory errors frequently. We had read that a 64-bit JVM may require 30 - 50% more heap, so we increased our heap
size from 1200 MB to 2048 MB to try it out. However, we still observed some OOMEs after the server had been running for a few days.
Upon checking the GC log, we found that Full GCs happen frequently once the server has been up for a few days, and each Full GC
releases only a little memory; the frequent Full GCs slow down the application.
As you can see from the excerpt of the GC log, almost no memory was released in PSOldGen:
205023.895: [Full GC [PSYoungGen: 225919K->157256K(240960K)] [PSOldGen: 1841151K->1841151K(1841152K)] 2067071K->1998408K(2082112K) [PSPermGen: 108720K->108720K(109056K)], 6.2785770 secs] [Times: user=6.23 sys=0.01, real=6.28 secs]
Heap after GC invocations=1638 (full 251):
PSYoungGen total 240960K, used 157256K [0x00002aab2e800000, 0x00002aab3e200000, 0x00002aab3e200000)
eden space 225920K, 69% used [0x00002aab2e800000,0x00002aab38192208,0x00002aab3c4a0000)
from space 15040K, 0% used [0x00002aab3d350000,0x00002aab3d350000,0x00002aab3e200000)
to space 15040K, 0% used [0x00002aab3c4a0000,0x00002aab3c4a0000,0x00002aab3d350000)
PSOldGen total 1841152K, used 1841151K [0x00002aaabe200000, 0x00002aab2e800000, 0x00002aab2e800000)
object space 1841152K, 99% used [0x00002aaabe200000,0x00002aab2e7fffc8,0x00002aab2e800000)
PSPermGen total 109056K, used 108720K [0x00002aaaae200000, 0x00002aaab4c80000, 0x00002aaabe200000)
object space 109056K, 99% used [0x00002aaaae200000,0x00002aaab4c2c3f8,0x00002aaab4c80000)
}
Here is the heap usage pattern for a single OC4J instance over 24 hours, which is quite strange to me: it doesn't show a zig-zag path but instead a rather random pattern.
May I know what I can do?
Server config:
Red Hat Enterprise Linux Server release 5.7 (Tikanga) 2.6.18-274.el5 (64-bit)
CPU : 8, 16GB RAM
JVM version : Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
Application server : OC4J 10.1.3.5
JVM startup arguments:
//Old config
-server -Xms1200M -Xmx1200M -XX:MaxPermSize=64M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
//New config
-server -Xms2048M -Xmx2048M -XX:MaxPermSize=256M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k

Either this is a memory leak in the application or a bug in the Java memory management.
Since the 1.6.0_06 release there have been a whole bunch of bug fixes regarding memory management and garbage collection, for example these two:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6676016
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6824570
So try the following:
Upgrade to a newer Java 1.6.0 build. (If that is not possible, you can change the GC strategy from, for example, parallel to concurrent to see whether that bypasses a potential Java bug; see the example flags below.)
Troubleshoot your application for a memory leak by taking a heap dump (jmap) to see what's occupying the heap; an example command is sketched below.
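For illustration (a sketch, assuming a HotSpot 1.6 JVM and that <pid> is the process id of the OC4J JVM; the dump can then be analyzed in a heap analyzer such as Eclipse MAT):
//Switch from the parallel to the concurrent collector
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
//Dump live objects to a binary file for offline analysis
jmap -dump:live,format=b,file=heap.hprof <pid>
//Or print a quick histogram of live objects per class
jmap -histo:live <pid>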

Related

Java 11 G1 GC memory management issue

I need some guidance related to the Java 11 GC algorithm for the JVM. We are migrating our application from JDK 8 to Java 11. We are seeing spikes in memory usage with the new GC algorithm that is the default in Java 11, i.e. G1. Earlier we used CMS. Our earlier JVM startup params for the GC were as below:
Application Infrastructure : 8 core CPU, 16GB RAM Linux EC2 (c5.2xlarge)
JDK 8 : -d64 -server -Xms4g -Xmx12g -XX:NewRatio=1 -XX:SurvivorRatio=4 -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat -XX:CMSInitiatingOccupancyFraction=70
Here we see the average memory usage at 4 GB to 6 GB when there is full load on the system.
Java11 : -server -Xms12g -Xmx12g -verbose:gc -Xlog:gc:gc.log -XX:+PrintGCDetails -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=20 -XX:+UseStringDeduplication -Xlog:safePoint -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat
Here we see the average memory usage gradually increase and go up to 90% on full load, and that's when the GC runs and frees up space. Also, we do not see the memory usage coming down when there is no load (0 load) on the system. I read that this is the expected behavior of G1.
Kindly advise!
Application behavior is similar in both cases.
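One hedged observation, not from the original question: with -Xms12g equal to -Xmx12g the heap is committed at 12 GB from startup, and G1 will not shrink it below -Xms, so the committed footprint staying high at zero load is expected; only the used portion varies. Also, the -XX:+PrintGCDetails flag carried over from the JDK 8 line is superseded by unified logging in JDK 11; a minimal logging sketch to see exactly when and why G1 collects could look like this:
//Unified GC logging on JDK 11 (sketch); replaces -XX:+PrintGCDetails/-XX:+PrintGCTimeStamps
-Xlog:gc*,safepoint:file=gc.log:time,uptime,level,tags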

Why is the JVM memory footprint smaller than -Xms? [duplicate]

This question already has answers here:
Java heap Xms and linux free memory different
(1 answer)
Are some allocators lazy?
(6 answers)
Closed 2 years ago.
The startup command for the application is as follows:
java -server -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=77 -XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark -Djava.security.egd=file:/dev/./urandom -jar /app.jar --spring.profiles.active=prod
The following information can be obtained from the jvm command of Arthas:
RUNTIME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
MACHINE-NAME 7#authserver-5fc5c7c775-9fs2m
JVM-START-TIME 2020-04-15 00:04:27
MANAGEMENT-SPEC-VERSION 1.2
SPEC-NAME Java Virtual Machine Specification
SPEC-VENDOR Oracle Corporation
SPEC-VERSION 1.8
VM-NAME Java HotSpot(TM) 64-Bit Server VM
VM-VENDOR Oracle Corporation
VM-VERSION 25.111-b14
INPUT-ARGUMENTS -javaagent:/var/agent/pinpoint-agent/pinpoint-bootstrap.jar
-Xms15g
-Xmx15g
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=77
-XX:+ScavengeBeforeFullGC
-XX:+CMSScavengeBeforeRemark
-Djava.security.egd=file:/dev/./urandom
CLASS-PATH /app.jar:/var/agent/pinpoint-agent/pinpoint-bootstrap.jar
BOOT-CLASS-PATH /usr/java/jre1.8.0_111/lib/resources.jar:/usr/java/jre1.8.0_111/lib/rt.jar:/usr/java/jre1.8.0_111/lib/sunrsasign.jar
:/usr/java/jre1.8.0_111/lib/jsse.jar:/usr/java/jre1.8.0_111/lib/jce.jar:/usr/java/jre1.8.0_111/lib/charsets.jar:/usr
/java/jre1.8.0_111/lib/jfr.jar:/usr/java/jre1.8.0_111/classes
LIBRARY-PATH /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
CLASS-LOADING
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
LOADED-CLASS-COUNT 21926
TOTAL-LOADED-CLASS-COUNT 21944
UNLOADED-CLASS-COUNT 18
IS-VERBOSE false
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
COMPILATION
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
NAME HotSpot 64-Bit Tiered Compilers
TOTAL-COMPILE-TIME 366850(ms)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
GARBAGE-COLLECTORS
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
ParNew 1797/78247(ms)
[count/time]
ConcurrentMarkSweep 3/460(ms)
[count/time]
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
MEMORY-MANAGERS
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
CodeCacheManager Code Cache
Metaspace Manager Metaspace
Compressed Class Space
ParNew Par Eden Space
Par Survivor Space
ConcurrentMarkSweep Par Eden Space
Par Survivor Space
CMS Old Gen
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
MEMORY
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
HEAP-MEMORY-USAGE 15992750080(14.9 GiB)/16106127360(15.0 GiB)/15992750080(14.9 GiB)/2393542120(2.2 GiB)
[committed/init/max/used]
NO-HEAP-MEMORY-USAGE 255647744(243.8 MiB)/2555904(2.4 MiB)/-1(-1 B)/247922192(236.4 MiB)
[committed/init/max/used]
PENDING-FINALIZE-COUNT 0
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
OPERATING-SYSTEM
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
OS Linux
ARCH amd64
PROCESSORS-COUNT 16
LOAD-AVERAGE 5.08
VERSION 4.4.0-141-generic
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
THREAD
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
COUNT 256
DAEMON-COUNT 103
PEAK-COUNT 263
STARTED-COUNT 41467
DEADLOCK-COUNT 0
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
FILE-DESCRIPTOR
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
MAX-FILE-DESCRIPTOR-COUNT 1048576
OPEN-FILE-DESCRIPTOR-COUNT 811
Affect(row-cnt:0) cost in 6 ms.
From the above information, you can see that the committed and init sizes of the JVM heap are both about 15 GiB.
However, from the system top command or the free command, the memory footprint of this process is only 3.712g.
(The results of the top and free commands were shown as screenshots.)
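A likely explanation, consistent with the linked duplicates: Linux commits heap pages lazily, so resident memory (RSS) only grows as the JVM actually touches pages, even though the whole 15 GiB heap is reserved and reported as committed inside the JVM. A minimal sketch to make the difference visible (assuming HotSpot; -XX:+AlwaysPreTouch touches every heap page at startup, so RSS should then be close to -Xms right away):
//Pre-touch the whole heap at JVM start; RSS should then roughly match -Xms
java -server -Xms15g -Xmx15g -XX:+AlwaysPreTouch -jar /app.jar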

java -XX:-UseAdaptiveSizePolicy is not effective

I want to modify the maximum heap size through jinfo.
jinfo -flag MaxHeapSize=3122032640 <pid>
Since AdaptiveSizePolicy is enabled by default, modifying flags directly will result in an exception. So I disabled AdaptiveSizePolicy when the process started.
java -XX:-UseAdaptiveSizePolicy Sleep.java
I can also get the right result through jinfo
jinfo -flag UseAdaptiveSizePolicy 18220
-XX:-UseAdaptiveSizePolicy
But when I try to modify the maximum heap size through jinfo again, an exception still occurs:
jinfo -flag MaxHeapSize=3122032640 18220
Exception in thread "main" com.sun.tools.attach.AttachOperationFailedException: flag 'MaxHeapSize' cannot be changed
at jdk.attach/sun.tools.attach.VirtualMachineImpl.execute(VirtualMachineImpl.java:224)
at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.executeCommand(HotSpotVirtualMachine.java:309)
at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.setFlag(HotSpotVirtualMachine.java:282)
at jdk.jcmd/sun.tools.jinfo.JInfo.flag(JInfo.java:146)
at jdk.jcmd/sun.tools.jinfo.JInfo.main(JInfo.java:127)
It seems that -XX:-UseAdaptiveSizePolicy is not effective.
Does anyone know the reason?
I know about the -Xmx flag for setting the maximum heap size.
JDK: openjdk 13.0.1
OS: Ubuntu 18.04
VM flags:
-XX:CICompilerCount=3 -XX:ConcGCThreads=1 -XX:G1ConcRefinementThreads=4 -XX:G1HeapRegionSize=1048576 -XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=134217728 -XX:MarkStackSize=4194304 -XX:MaxHeapSize=536870912 -XX:MaxNewSize=321912832 -XX:MinHeapDeltaBytes=1048576 -XX:MinHeapSize=134217728 -XX:NonNMethodCodeHeapSize=5830732 -XX:NonProfiledCodeHeapSize=122913754 -XX:ProfiledCodeHeapSize=122913754 -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:SoftMaxHeapSize=536870912 -XX:-UseAdaptiveSizePolicy -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC
I want to modify the maximum heap size through jinfo.
It is not possible. MaxHeapSize is not a manageable flag, so it cannot be changed at runtime.
The -XX:UseAdaptiveSizePolicy flag is a completely different thing. It configures whether the GC may resize heap generations based on GC statistics, in order to achieve pause/throughput/footprint goals.
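A quick way to see which flags are actually writable at runtime (a sketch; only flags marked manageable can be set via jinfo):
//List the flags marked "manageable" in the JVM's flag table
java -XX:+PrintFlagsFinal -version | grep manageable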

Eclipse Photon excessive garbage collection

I'm using the latest Eclipse Photon Java EE IDE with Java 8 "1.8.0_181" 64-Bit on Windows 10.
After about an hour of usage, it becomes so unresponsive that it's effectively unusable. I checked eclipse.ini and the default memory settings -Xms256m -Xmx1024m, which I changed to 512m and 2g respectively. The current GC settings are:
-XX:+UseG1GC
-XX:+UseStringDeduplication
-Xms2g
-Xmx2g
Here is a Visual GC memory profile. The long linear growth in Eden space was while I did nothing but look at the graph. The spikes are during actual work.
GC of Eden space is basically triggered whenever it reaches ~512m. After an hour, every other keystroke seems to trigger GC, but the max heap size isn't even close to being exhausted.
Are there any GC settings to tune this? How can I find out what triggered the GC anyway?
(Or is this an Eclipse bug?)
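To see what triggers each collection, one option (a sketch, assuming the IDE runs on the Java 8 JVM mentioned above) is to add GC logging to the -vmargs section of eclipse.ini; on Java 8 the GC cause, e.g. "(Allocation Failure)" or "(G1 Evacuation Pause)", is printed with each event:
-Xloggc:gc.log
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps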

PermGen outofmemory error using Fortify

I am running fortify static code scan.
main\Src>sourceanalyzer -64 -Xmx6500m -b project -scan -f project.fpr
The JDK is 1.8
java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
After 20 hours, I got PermGen OOM error
[error]: Unexpected exception: java.lang.OutOfMemoryError: PermGen space
^Cendering results 99% [====================]
According to a lot of resources, PermGen is obsolete in Java 8.
Any Ideas?
There is also the option to increase PermGen space
Increase permgen space
-XX:MaxPermSize=128m
There is also the option to enable garbage collection on PermGen space
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
Nevertheless, you should check with HP whether it is actually necessary to increase PermGen space; it could also be a bug in the software. PermGen out-of-memory errors are often caused by memory leaks.
Fortify uses its own JRE (look under [Fortify Install Root]\jre and [Fortify Install Root]\jre64).
If you use -debug-verbose -logfile Project_Scan.txt, you can see whether the scanning engine is running into memory pressure anywhere, since it dumps memory usage every so often.
Be sure you are using the latest version of Fortify. If your scan is taking 20 hours, it could be that the latest version addresses the speed and memory issues. The latest is 4.20.
To speed up the scan, have you looked at the Parallel Analysis Mode? This was introduced in 4.00 and you can read about it in the User Guide, Appendix B.
I did some heavy memory tuning with the translation cycle at one point and the following worked well. It should also work for the scan cycle. Please note that this was an extreme memory usage case and this is not the best for all Fortify usage.
-XX:MaxPermSize=256m -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
You can / should use JConsole to watch the process.
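A lighter-weight alternative sketch for watching permanent-generation pressure from the command line (<pid> here stands for the process id of the sourceanalyzer JVM; on a pre-Java-8 JVM the P column reports PermGen utilization):
//Print GC utilization percentages every 5 seconds
jstat -gcutil <pid> 5000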
I think in Java 8 the PermGen space has been completely replaced with Metaspace. The JVM options
-XX:PermSize and -XX:MaxPermSize
have been replaced by
-XX:MetaspaceSize and -XX:MaxMetaspaceSize respectively.
Try that.
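For reference, on a Java 8 JVM the replacement sizing flags would look like this (a sketch; whether it helps depends on which JRE Fortify actually launches, see the note about the bundled JRE above):
//Java 8 replacements for -XX:PermSize/-XX:MaxPermSize
-XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m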
