I'm using the latest Eclipse Photon Java EE IDE with Java 8 "1.8.0_181" 64-Bit on Windows 10.
After about an hour of usage it becomes so unresponsive that it's effectively unusable. I checked eclipse.ini and found the default memory settings -Xms256m -Xmx1024m, which I changed to 512m and 2g respectively. The current GC settings are:
-XX:+UseG1GC
-XX:+UseStringDeduplication
-Xms2g
-Xmx2g
Here is a Visual GC memory profile. The long linear growth in Eden space was while I did nothing but look at the graph. The spikes are during actual work.
GC of Eden space is basically triggered whenever it reaches ~512m. After an hour, every other keystroke seems to trigger GC, but the max heap size isn't even close to being exhausted.
Are there any GC settings to tune this? How can I find out what triggered the GC anyway?
(Or is this an Eclipse bug?)
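For reference, Java 8 can log the cause of each collection; adding something like the following to eclipse.ini after -vmargs should show it (the log file name is arbitrary):
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:gc.log
Each pause in gc.log then carries its trigger, e.g. GC (Allocation Failure) or GC pause (G1 Evacuation Pause).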
Related
I want to modify the maximum heap size through jinfo.
jinfo -flag MaxHeapSize=3122032640 <pid>
Since AdaptiveSizePolicy is enabled by default, modifying the flag directly results in an exception, so I disabled AdaptiveSizePolicy when starting the process:
java -XX:-UseAdaptiveSizePolicy Sleep.java
I can also confirm the flag through jinfo:
jinfo -flag UseAdaptiveSizePolicy 18220
-XX:-UseAdaptiveSizePolicy
But when I try to modify the maximum heap size through jinfo again, the exception still occurs:
jinfo -flag MaxHeapSize=3122032640 18220
Exception in thread "main" com.sun.tools.attach.AttachOperationFailedException: flag 'MaxHeapSize' cannot be changed
at jdk.attach/sun.tools.attach.VirtualMachineImpl.execute(VirtualMachineImpl.java:224)
at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.executeCommand(HotSpotVirtualMachine.java:309)
at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.setFlag(HotSpotVirtualMachine.java:282)
at jdk.jcmd/sun.tools.jinfo.JInfo.flag(JInfo.java:146)
at jdk.jcmd/sun.tools.jinfo.JInfo.main(JInfo.java:127)
It seems that -XX:-UseAdaptiveSizePolicy is not taking effect.
Does anyone know the reason?
I already know that the -Xmx flag sets the maximum heap size at startup.
JDK: openjdk 13.0.1
OS: Ubuntu 18.04
VM flags:
-XX:CICompilerCount=3 -XX:ConcGCThreads=1 -XX:G1ConcRefinementThreads=4 -XX:G1HeapRegionSize=1048576 -XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=134217728 -XX:MarkStackSize=4194304 -XX:MaxHeapSize=536870912 -XX:MaxNewSize=321912832 -XX:MinHeapDeltaBytes=1048576 -XX:MinHeapSize=134217728 -XX:NonNMethodCodeHeapSize=5830732 -XX:NonProfiledCodeHeapSize=122913754 -XX:ProfiledCodeHeapSize=122913754 -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:SoftMaxHeapSize=536870912 -XX:-UseAdaptiveSizePolicy -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC
I want to modify the maximum heap size through jinfo.
It is not possible. MaxHeapSize is not a manageable flag; it cannot be changed at runtime.
The -XX:UseAdaptiveSizePolicy flag is a completely different thing. It configures whether the GC may resize the heap generations based on GC statistics, in order to achieve its pause/throughput/footprint goals.
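Only flags marked manageable can be set through jinfo at runtime; you can list them like this (a sketch, the exact set varies by JDK build):
java -XX:+PrintFlagsFinal -version | grep manageable
Flags such as MinHeapFreeRatio or HeapDumpPath show up there and can be changed on a live VM, e.g. jinfo -flag MinHeapFreeRatio=20 <pid>; MaxHeapSize does not, hence the exception.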
I am running fortify static code scan.
main\Src>sourceanalyzer -64 -Xmx6500m -b project -scan -f project.fpr
The JDK is 1.8
java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
After 20 hours, I got a PermGen OOM error:
[error]: Unexpected exception: java.lang.OutOfMemoryError: PermGen space
^Cendering results 99% [====================]
According to many resources, PermGen is obsolete in Java 8.
Any ideas?
There is also the option to increase the PermGen space:
-XX:MaxPermSize=128m
There is also the option to enable garbage collection on PermGen space
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
Nevertheless, you should check with HP whether it is actually necessary to increase the PermGen space; it could also be a bug in the software. PermGen out-of-memory errors are often caused by memory leaks.
Fortify uses its own JRE (look under [Fortify Install Root]\jre and [Fortify Install Root]\jre64).
If you use -debug-verbose -logfile Project_Scan.txt you can see whether the scanning engine is running into memory pressure anywhere, since it dumps memory usage every so often.
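For example, combined with the scan command from the question (option placement may vary by SCA version):
sourceanalyzer -64 -Xmx6500m -b project -scan -debug-verbose -logfile Project_Scan.txt -f project.fpr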
Be sure you are using the latest version of Fortify. If your scan is taking 20 hours, it could be that the latest version addresses the speed and memory issues. The latest is 4.20.
To speed up the scan, have you looked at the Parallel Analysis Mode? This was introduced in 4.00 and you can read about it in the User Guide, Appendix B.
I did some heavy memory tuning with the translation cycle at one point and the following worked well. It should also work for the scan cycle. Please note that this was an extreme memory usage case and this is not the best for all Fortify usage.
-XX:MaxPermSize=256m -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68
You can (and should) use JConsole to watch the process.
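If you prefer the command line, jstat gives the same view as text (a sketch; <pid> is the sourceanalyzer process id, 1000 is the sampling interval in ms):
jstat -gcutil <pid> 1000
Watch the P (perm) column, or M (metaspace) on a Java 8 JRE, to see whether class metadata is creeping toward 100%.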
I think in Java 8 the PermGen space has been completely replaced with Metaspace. The JVM options
-XX:PermSize and -XX:MaxPermSize
have been replaced by
-XX:MetaspaceSize and -XX:MaxMetaspaceSize respectively.
Try that.
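A sketch of what that substitution could look like (the sizes below are assumptions; tune them to the scan):
-XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m
Unlike PermGen, Metaspace is unbounded by default, so without a cap a class-metadata leak grows native memory instead of failing fast.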
I have a computer running Windows 7 with 8GB RAM.
When I edit the IntelliJ VM options and set the following, I get the above error:
-Xms768m
-Xmx2g
Is there anything I can do in relation to memory management etc.? IntelliJ seems to be hogging a lot of CPU, which is the reason I decided to increase its -Xms parameter.
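For reference, a 64-bit IntelliJ installation on Windows reads its options from idea64.exe.vmoptions (newer builds expose this via Help > Edit Custom VM Options; the exact path depends on your version). A typical file, with the code cache size being an assumption based on common defaults, looks like:
-Xms768m
-Xmx2g
-XX:ReservedCodeCacheSize=240m
If the error complains about reserving space for the object heap, double-check that the IDE is actually launched with a 64-bit JVM: a 32-bit JVM on Windows usually cannot map a contiguous 2g heap.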
We performed a server migration from Solaris SunOS 5.10 to a Red Hat Linux VM recently. The JVM was upgraded from 1.5.0_22 (32-bit) to 1.6.0_06 (64-bit).
However, since then we have encountered OutOfMemory errors frequently. We had read that a 64-bit JVM requires 30-50% more heap, so we increased our heap size from 1200MB to 2048MB and tried again. However, we still observed OOMEs after the server had run for a few days.
Checking the GC log, we found that Full GCs happen frequently once the server has been up for a few days, and each Full GC releases only a little memory; the frequent Full GCs slow down the application.
As you can see in this excerpt of the GC log, almost no memory was released in PSOldGen:
205023.895: [Full GC [PSYoungGen: 225919K->157256K(240960K)] [PSOldGen: 1841151K->1841151K(1841152K)] 2067071K->1998408K(2082112K) [PSPermGen: 108720K->108720K(109056K)], 6.2785770 secs] [Times: user=6.23 sys=0.01, real=6.28 secs]
Heap after GC invocations=1638 (full 251):
PSYoungGen total 240960K, used 157256K [0x00002aab2e800000, 0x00002aab3e200000, 0x00002aab3e200000)
eden space 225920K, 69% used [0x00002aab2e800000,0x00002aab38192208,0x00002aab3c4a0000)
from space 15040K, 0% used [0x00002aab3d350000,0x00002aab3d350000,0x00002aab3e200000)
to space 15040K, 0% used [0x00002aab3c4a0000,0x00002aab3c4a0000,0x00002aab3d350000)
PSOldGen total 1841152K, used 1841151K [0x00002aaabe200000, 0x00002aab2e800000, 0x00002aab2e800000)
object space 1841152K, 99% used [0x00002aaabe200000,0x00002aab2e7fffc8,0x00002aab2e800000)
PSPermGen total 109056K, used 108720K [0x00002aaaae200000, 0x00002aaab4c80000, 0x00002aaabe200000)
object space 109056K, 99% used [0x00002aaaae200000,0x00002aaab4c2c3f8,0x00002aaab4c80000)
}
Here is the heap usage pattern for a single OC4J instance over 24 hours, which is quite strange to me: it doesn't show a zig-zag path but instead some random pattern.
What can I do?
Server config:
Red Hat Enterprise Linux Server release 5.7 (Tikanga) 2.6.18-274.el5 (64-bit)
CPU : 8, 16GB RAM
JVM version : Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
Application server : OC4J 10.1.3.5
JVM startup arguments:
//Old config
-server -Xms1200M -Xmx1200M -XX:MaxPermSize=64M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
//New config
-server -Xms2048M -Xmx2048M -XX:MaxPermSize=256M -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xnoclassgc -verbose:gc -XX:NewSize=250M -XX:MaxNewSize=250M -XX:SurvivorRatio=15 -Xconcurrentio -Xss128k
Either this is a memory leak in the application or a bug in the Java memory management.
Since the 1.6.0_06 release there have been a whole bunch of bug fixes regarding memory management and garbage collection, for example these two:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6676016
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6824570
So try the following:
Upgrade to a newer Java 1.6.0 build. If that is not possible, you can change the GC strategy from, for example, parallel to concurrent to see if that bypasses a potential Java bug (see the flag sketch below).
Troubleshoot your application for a memory leak, using a heap dump (jmap) to see what's occupying the heap (see the jmap sketch below).
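A sketch for the first suggestion, switching from the parallel collector to the concurrent one (the flags are standard HotSpot; everything else stays as in your config):
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
And for the second, taking a heap dump of the running process (the file name is arbitrary):
jmap -dump:format=b,file=heap.hprof <pid>
Open the resulting .hprof in Eclipse MAT or jhat to see which objects are pinning PSOldGen at 99%.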
I have Tomcat in a virtual machine with a dynamic memory size. The admin said the memory size is increased when the system needs it.
But when I try to set -Xms2048m -Xmx4096m -XX:MaxPermSize=256m in setenv.sh, I get an error:
Tomcat could not reserve enough space for object heap
Right now Tomcat starts with the settings -Xms256m -Xmx1024m -XX:MaxPermSize=256m.
Is it possible to set a 2GB initial memory size in my case?
OS: Ubuntu 13.04 64bit
If you have more than 2GB of memory available on your system for the Tomcat process to start, then you can use -Xms2048m. -Xms2048m means the JVM will request that much memory for its initial allocation, and if it cannot allocate it, you get this kind of error.
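A minimal setenv.sh sketch, assuming the standard Tomcat startup scripts pick it up from CATALINA_HOME/bin (the values mirror the question):
#!/bin/sh
# Read by catalina.sh at startup
export CATALINA_OPTS="-Xms2048m -Xmx4096m -XX:MaxPermSize=256m"
Before raising -Xms, check what the VM actually has free at that moment (free -m): with dynamically grown memory the guest may not hold 2GB yet when Tomcat starts, which is exactly when -Xms demands it.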