Application not responding when GC runs - java

Our application runs 24x7 under heavy user load. Recently we started having issues with system performance: the application stops responding and we have to restart the JVM to bring it back online. While investigating, we found that the slowdowns happen while the JVM runs GC; at those times the application slows down and sometimes does not respond at all.
Below are our GC stats during that time.
(GC Heap and GC Summary screenshots)
Looking at this data, are there any recommendations on what to look for, or does any JVM configuration need to be corrected?
The Java version is 7. The GC settings are:
-XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -Xloggc:/opt/Server/logs/gclog.log -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Xms4096M -Xmx6144M -XX:PermSize=512M -XX:MaxPermSize=1024M -XX:ErrorFile=./log/error.log -XX:HeapDumpPath=./log/heap_dump.hprof
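A hedged starting point, not a definitive fix: on JDK 7 with CMS, long unresponsive periods are often stop-the-world full collections caused by CMS starting its concurrent cycle too late (concurrent mode failure). The flags below, with an illustrative occupancy threshold of 70, add pause-time visibility to the existing GC log and make CMS start its cycle at a fixed threshold:
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintTenuringDistribution
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70
-XX:+CMSScavengeBeforeRemark
If the log then shows "concurrent mode failure" or "promotion failed" entries around the unresponsive periods, the old generation is filling faster than CMS can reclaim it, which usually points to the heap being too small for the live data set or to a memory leak.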

Related

Java 11 G1 GC memory management issue

I need some guidance on the Java 11 GC algorithm. We are migrating our application from JDK 8 to Java 11, and we are seeing spikes in memory usage with the new default GC algorithm in Java 11, i.e. G1. Previously we used CMS. Our earlier JVM startup parameters for GC were as follows:
Application Infrastructure : 8 core CPU, 16GB RAM Linux EC2 (c5.2xlarge)
JDK 8 : -d64 -server -Xms4g -Xmx12g -XX:NewRatio=1 -XX:SurvivorRatio=4 -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat -XX:CMSInitiatingOccupancyFraction=70
Here we see average memory usage of 4 GB to 6 GB when the system is under full load.
Java11 : -server -Xms12g -Xmx12g -verbose:gc -Xlog:gc:gc.log -XX:+PrintGCDetails -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=20 -XX:+UseStringDeduplication -Xlog:safePoint -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat
Here we see the average memory usage gradually increase up to 90% under full load, which is when GC runs and frees up space. Also, we do not see memory usage coming down when there is no load at all on the system. I have read that this is expected behavior for G1.
Kindly advise!
Application behavior is similar in both cases.
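A hedged sketch rather than a recommendation: with -Xms12g -Xmx12g the whole 12 GB heap is committed at startup, so the process footprint staying high at zero load is expected and is separate from how full the heap is inside the JVM. If the goal is for the JVM to shrink the heap when load drops, one option is to leave G1 some headroom and tune the free-ratio bounds (the values below are only illustrative; in JDK 11, G1 mainly resizes the heap at a full GC, and the periodic-GC flag on the last line exists only from JDK 12 onward):
-Xms4g -Xmx12g
-XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40
-XX:G1PeriodicGCInterval=300000
Usage climbing close to the maximum before dropping is also normal for G1, which collects lazily to meet its pause-time goal rather than keeping the heap as empty as possible.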

Eclipse Photon excessive garbage collection

I'm using the latest Eclipse Photon Java EE IDE with Java 8 "1.8.0_181" 64-Bit on Windows 10.
After about an hour of usage, it becomes so unresponsive that it's effectively unusable. I checked eclipse.ini and the default memory settings, -Xms256m -Xmx1024m, which I changed to 512m and 2g respectively. The current GC settings are:
-XX:+UseG1GC
-XX:+UseStringDeduplication
-Xms2g
-Xmx2g
Here is a Visual GC memory profile. The long linear growth in Eden space was while I did nothing but look at the graph. The spikes are during actual work.
GC of Eden space is basically triggered whenever it reaches ~512m. After an hour, every other keystroke seems to trigger GC, but the max heap size isn't even close to being exhausted.
Are there any GC settings to tune this? How can I find out what triggered the GC anyway?
(Or is this an Eclipse bug?)
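For the "what triggered the GC" part, a minimal sketch of additions to the -vmargs section of eclipse.ini (the log path is only an example) that makes the JDK 8 G1 collector log the cause of every pause and its sizing decisions:
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintAdaptiveSizePolicy
-Xloggc:C:/temp/eclipse-gc.log
Each pause then appears in the log with its cause, e.g. "GC pause (G1 Evacuation Pause) (young)", and the adaptive-size lines show why G1 chose that Eden size. Eden being collected at ~512m while the 2 GB maximum is far from exhausted is expected with G1, which sizes the young generation to meet its pause-time goal rather than to fill the heap, so the GC frequency by itself may not be what makes the IDE unresponsive.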

Struts1.x application not releasing RAM after using it

I am working on a Struts 1.x web application running on WebLogic Server. When it performs some processing on very large data sets, it consumes almost all the RAM, but after the processing completes or the session times out, it does not return the consumed RAM.
It never releases the RAM until I stop the server or kill it forcefully. I would like to understand why the RAM is not released after use and whether this can be solved by tuning WebLogic parameters or in some other way.
I am using the following parameters in WebLogic:
-Xms12g -Xmx12g -XX:MetaspaceSize=1G -XX:MaxMetaspaceSize=1G
-XX:NewRatio=3 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
-DBASE_CONFIG_LA=/data01/Lending_Analytics_weblogic/LendingAnalyticsPQM/config/
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError
-XX:+UseGCOverheadLimit -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2m -Xloggc:/data01/Lending_Analytics_weblogic/LendingAnalyticsPQM/GCLogs/gc.log
*** We were not able to find any GC issue after analyzing the GC logs.
*** We fixed all the memory leak issues found in the code using the Eclipse memory analysis tooling.
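One thing worth ruling out before tuning: with -Xms12g -Xmx12g the JVM commits the full 12 GB heap at startup, and CMS does not return committed heap to the operating system, so the process RSS staying near 12 GB after the job finishes is expected even if the Java heap inside it is mostly free. A hedged sketch (the <pid> is a placeholder) to check whether the memory is actually still live inside the heap:
jmap -heap <pid>            # committed vs. used sizes for each generation
jmap -histo:live <pid>      # forces a full GC, then lists what is still live, by class
If the live set after the job is small, the "RAM not released" is committed-but-unused heap rather than a leak, and lowering -Xms (accepting some resize overhead) is the lever, not the CMS settings.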

GC log rotation with compression parameter in JDK 8

I am trying GC log rotation in JDK 8, and I have achieved it by using the GC log JVM parameters below:
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:verbose-jdk8-gc.log -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1k
Now I also want the log to be compressed when rotation is done.
Is there any JVM parameter in JDK 8 for compressing the GC log?
Can anyone help?
On Linux, you can use logrotate (https://linux.die.net/man/8/logrotate) to manage the rotation and compression of the JVM GC log.
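There is no JDK 8 JVM flag that compresses rotated GC logs, so the compression has to be done outside the JVM. A minimal logrotate sketch under that assumption (the path is a placeholder for wherever -Xloggc points; you would normally drop -XX:+UseGCLogFileRotation so only logrotate rotates the file):
/opt/app/logs/verbose-jdk8-gc.log {
    # rotate once the file reaches 10 MB, keep 5 compressed generations
    size 10M
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
copytruncate is the important directive here: the JVM keeps the log file open, so logrotate copies and truncates it in place instead of moving it out from under the running process.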

Full GC releases only a little memory in PSOldGen

We recently performed a server migration from Solaris SunOS 5.10 to a Red Hat Linux VM. The JVM was upgraded from 1.5.0_22 (32-bit) to 1.6.0_06 (64-bit).
However, since then we have encountered OutOfMemoryError frequently. We had read that a 64-bit JVM can require 30-50% more heap, so we increased our heap size from 1200 MB to 2048 MB to try it out. However, we still observe OOMEs after the server has been running for a few days.
Upon checking the GC log, we found that full GCs happen frequently once the server has been up for a few days, each full GC releases only a little memory, and the frequent full GCs slow down the application.
As you can see in the excerpt of the GC log below, almost no memory was released in PSOldGen:
205023.895: [Full GC [PSYoungGen: 225919K->157256K(240960K)] [PSOldGen: 1841151K->1841151K(1841152K)] 2067071K->1998408K(2082112K) [PSPermGen: 108720K->108720K(109056K)], 6.2785770 secs] [Times: user=6.23 sys=0.01, real=6.28 secs]
Heap after GC invocations=1638 (full 251):
PSYoungGen total 240960K, used 157256K [0x00002aab2e800000, 0x00002aab3e200000, 0x00002aab3e200000)
eden space 225920K, 69% used [0x00002aab2e800000,0x00002aab38192208,0x00002aab3c4a0000)
from space 15040K, 0% used [0x00002aab3d350000,0x00002aab3d350000,0x00002aab3e200000)
to space 15040K, 0% used [0x00002aab3c4a0000,0x00002aab3c4a0000,0x00002aab3d350000)
PSOldGen total 1841152K, used 1841151K [0x00002aaabe200000, 0x00002aab2e800000, 0x00002aab2e800000)
object space 1841152K, 99% used [0x00002aaabe200000,0x00002aab2e7fffc8,0x00002aab2e800000)
PSPermGen total 109056K, used 108720K [0x00002aaaae200000, 0x00002aaab4c80000, 0x00002aaabe200000)
object space 109056K, 99% used [0x00002aaaae200000,0x00002aaab4c2c3f8,0x00002aaab4c80000)
}
Here is the heap usage pattern for a single OC4J instance over 24 hours, which looks quite strange to me: it doesn't show a zig-zag path but rather a somewhat random pattern.
What can I do?
Server config:
Red Hat Enterprise Linux Server release 5.7 (Tikanga), kernel 2.6.18-274.el5 (64-bit)
CPU : 8, 16GB RAM
JVM version : Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
Application server : OC4J 10.1.3.5
JVM startup arguments:
// Old config
-server -Xms1200M -Xmx1200M -XX:MaxPermSize=64M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
// New config
-server -Xms2048M -Xmx2048M -XX:MaxPermSize=256M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
Either this is a memory leak in the application or a bug in the Java memory management.
Since the 1.6.0_06 release there have been a whole bunch of bug fixes regarding memory management and garbage collection, for example these two:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6676016
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6824570
So try the following:
Upgrade to a newer Java 1.6.0 build. (If that is not possible, you can change the GC strategy from, for example, parallel to concurrent to see whether that bypasses a potential Java bug.)
Troubleshoot your application for a memory leak by taking a heap dump (jmap) to see what's occupying the heap.
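A minimal sketch of that heap-dump step with the tools that ship with JDK 6 (the pid and file path are placeholders):
jmap -histo:live <pid> | head -20                        # forces a full GC, then lists the classes holding the most heap
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>      # writes a binary dump of the live objects
The histogram gives a quick first look; the .hprof file can then be opened in a tool such as Eclipse MAT or jhat to find what is keeping the ~1.8 GB old generation alive. Since PSPermGen is also at 99% and the JVM runs with -Xnoclassgc, classes that are never unloaded are worth checking as well.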
