I am facing a strange issue while instrumenting a Java web application using ASM 5.0.3 / Java 1.7.0_80 / Apache Tomcat 6.0.35.
Within 1 or 2 days after the server starts, the code cache memory is completely filled (100%) and is never garbage collected. I verified the same setup without instrumentation: it works fine and the code cache never fills to 100%.
I am using JSRInlinerAdapter in my instrumentation code, since my application contains class file versions from 48 to 51 (Java 1.4 to 1.7).
My question is: is the code cache filled with the inlined methods when the JIT compiler compiles them (since I use JSRInlinerAdapter)?
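For reference, JSRInlinerAdapter is plugged into the class visitor chain in the usual way, roughly like this (a simplified sketch; the class name is only a placeholder, not my actual transformer):

import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.commons.JSRInlinerAdapter;

// Passes every method through JSRInlinerAdapter so that JSR/RET subroutines
// (present in older class files, up to version 49) are inlined before any
// further instrumentation is applied.
class InliningClassVisitor extends ClassVisitor {
    InliningClassVisitor(ClassVisitor next) {
        super(Opcodes.ASM5, next);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String signature, String[] exceptions) {
        MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
        return new JSRInlinerAdapter(mv, access, name, desc, signature, exceptions);
    }
}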
Additional info
I am using the following VM arguments:
-Djava.util.Arrays.useLegacyMergeSort=true
-XX:+UseCodeCacheFlushing
-Djsse.enableCBCProtection=false
-XX:NewSize=384m
-XX:MaxNewSize=384m
-XX:PermSize=384m
-XX:MaxPermSize=384m
-XX:+DisableExplicitGC
-XX:MaxGCPauseMillis=400
-XX:MaxGCMinorPauseMillis=100
-XX:+UseG1GC
-XX:+UnlockDiagnosticVMOptions
-XX:+G1SummarizeConcMark
-XX:+UseTLAB -XX:NewRatio=3
-XX:+UseCMSInitiatingOccupancyOnly
-XX:InitiatingHeapOccupancyPercent=30
-XX:ConcGCThreads=5
-XX:ParallelGCThreads=20
-XX:+UseFastAccessorMethods
-XX:+UseAdaptiveGCBoundary
-XX:+UseStringCache
-XX:+AggressiveOpts
-Xms1536m
-Xmx1536m
What could be the reason for the code cache filling up so quickly?
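As far as I know, code cache occupancy can also be printed at VM exit with the flag below (assuming it is available in this 1.7.0_80 build):

-XX:+PrintCodeCache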
Related
I need some guidance on the Java 11 GC algorithm for the JVM. We are migrating our application from JDK 8 to Java 11, and we are seeing spikes in memory usage with the GC algorithm that is the default in Java 11, i.e., G1. Earlier we used CMS. Our previous JVM startup params for GC were as below:
Application infrastructure: 8-core CPU, 16 GB RAM, Linux EC2 (c5.2xlarge)
JDK 8 : -d64 -server -Xms4g -Xmx12g -XX:NewRatio=1 -XX:SurvivorRatio=4 -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat -XX:CMSInitiatingOccupancyFraction=70
Here we see the average memory usage at 4 GB to 6 GB when there is full load on the system.
Java11 : -server -Xms12g -Xmx12g -verbose:gc -Xlog:gc:gc.log -XX:+PrintGCDetails -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=20 -XX:+UseStringDeduplication -Xlog:safePoint -XX:ThreadStackSize=1024 -XX:+OptimizeStringConcat
Here we see the average memory usage gradually increase up to 90% under full load, and that is when the GC runs and frees up space. Also, we do not see the memory usage coming down when there is no load at all on the system. I read that this is the expected behavior of G1.
Kindly advise!
Application behavior is similar in both cases.
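As far as I understand, on Java 11 the legacy -XX:+PrintGCDetails flag is deprecated in favour of unified logging, so detailed GC logging would be configured roughly like the line below (a sketch, not what we currently run):

-Xlog:gc*:file=gc.log:time,uptime,level,tags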
On 64-bit Linux with Java 8, when running the java command, it seems all three options (-client / -server / -d64) use the 64-bit server compiler.
The questions are (for 64-bit Linux with Java 8):
Since -client and -server use the same compiler, does it make any difference to specify either of the two options?
For a long-running Java daemon, is it preferable to use -server with or without -XX:+TieredCompilation, when it is OK for startup to be a little slow?
Look at the file jre/lib/amd64/jvm.cfg. You'll likely see the lines
-server KNOWN
-client IGNORE
This means that the -client option is ignored. -server also does nothing, since JDK 8 for x64 has only one JVM, which includes both the C1 and C2 compilers, and tiered compilation is on by default.
with -XX:+TieredCompilation or without it
Does not matter, because this option is on by default. The advanced compilation policy works fine for both client-grade and server-grade applications. There is usually no need to turn it off manually.
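You can double-check this on your own JDK, e.g. with (assuming a Unix shell):

java -XX:+PrintFlagsFinal -version | grep TieredCompilation

It should report TieredCompilation := true.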
I am running a Fortify static code scan.
main\Src>sourceanalyzer -64 -Xmx6500m -b project -scan -f project.fpr
The JDK is 1.8
java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
After 20 hours, I got a PermGen OOM error:
[error]: Unexpected exception: java.lang.OutOfMemoryError: PermGen space
^Cendering results 99% [====================]
According to many resources, PermGen is obsolete in Java 8.
Any ideas?
There is also the option to increase PermGen space:
-XX:MaxPermSize=128m
There is also the option to enable garbage collection on PermGen space
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
Nevertheless, you should check with HP whether it is actually necessary to increase the PermGen space; it could also be a bug in the software. PermGen out-of-memory errors are often caused by memory leaks.
Fortify uses its own JRE (look under [Fortify Install Root]\jre and [Fortify Install Root]\jre64).
If you use -debug -verbose -logfile Project_Scan.txt, you can see whether the scanning engine is running into memory pressure anywhere, since it dumps memory usage every so often.
Be sure you are using the latest version of Fortify. If your scan is taking 20 hours, it could be that the latest version addresses the speed and memory issues. The latest is 4.20.
To speed up the scan, have you looked at the Parallel Analysis Mode? This was introduced in 4.00 and you can read about it in the User Guide, Appendix B.
I did some heavy memory tuning with the translation cycle at one point and the following worked well. It should also work for the scan cycle. Please note that this was an extreme memory usage case and this is not the best for all Fortify usage.
-XX:MaxPermSize=256m -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
You can / should use JConsole to watch the process.
I think in Java 8 the PermGen space has been completely replaced with Metaspace. The JVM options
-XX:PermSize and -XX:MaxPermSize
have been replaced by
-XX:MetaspaceSize and -XX:MaxMetaspaceSize, respectively.
Try that.
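For example (the sizes below are only illustrative starting points, not recommendations):

-XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m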
I have a memory leak issue in my web app, which is deployed in Tomcat. To find the root cause I enabled HeapDumpOnOutOfMemoryError by setting:
-XX:-HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/tomcat/logs
and the memory settings in Tomcat are:
-Xms256m -Xmx768m -XX:PermSize=128m -XX:MaxPermSize=256m
When the out-of-memory issue happens, I see
java.lang.OutOfMemoryError: Java heap space
in the Tomcat log file, but the .hprof file is not generated. Am I missing some settings here?
As @beny23 wrote, you should use -XX:+HeapDumpOnOutOfMemoryError (note the +; the - in your setting disables the option),
and as is stated here:
The -XX:+HeapDumpOnOutOfMemoryError Option
This option tells the Java HotSpot VM to generate a heap dump when an allocation from the Java heap or the permanent generation cannot be satisfied. There is no overhead in running with this option, so it can be useful for production systems where the OutOfMemoryError exception takes a long time to surface.
Also check your Java version, since this option was introduced in 1.4.2 update 12 and 5.0 update 7.
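So, for the setup in the question, the corrected options would be:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/tomcat/logs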
Environment: Windows Server 2003 x86, Intel Xeon 2.3 GHz, 4 GB RAM | Tomcat 7.0.27 | JDK 1.7.0_25
I am facing an OutOfMemoryError. SO suggests increasing the PermGen space using the following Java options:
-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC
However, the Tomcat status page still shows the PermGen memory as 64 MB. Why can't it pick up the value specified in the parameters?
There is no PermGen in the status page; see http://tomcat.apache.org/tomcat-7.0-doc/manager-howto.html#Introduction.
Anyway, your conf seems OK.
Since PermGen is a Java thing (not Tomcat), you should use Java tools to check it; take a look at this.
As your settings seem OK, check the way you apply them: if you start Tomcat via the batch files, create setenv.bat with the content below (note that on Windows the variable has to be assigned with the set command):
set "CATALINA_OPTS=-Dyour-settings-from-above ... all of them"
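For example, using the exact options from the question (a sketch; adjust the values to your setup):

rem %CATALINA_BASE%\bin\setenv.bat
set "CATALINA_OPTS=-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"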
If you run Tomcat as a Windows service, you'll need to update the service configuration instead; I'm not on Windows, and it's been a long time since I did that. Did you use tomcat7w.exe to create/configure the service? Not sure...