Cannot understand IntelliJ IDEA's memory usage and management

For a few years now I have been developing with IDEA again, and I am happy so far.
The problem is the weird memory-usage behaviour and GC activity while I am working on projects, which causes my IDE to freeze for a few seconds while the GC does its job.
Regardless of how big the project I am working on is, after a few days the memory usage climbs to about 500 MB (my heap space is capped at 512 MB, which I assume should be sufficient for web projects with roughly 100 Java files). After the GC has done its job I see 400 MB still used - not collected - and only about 100 MB free on the heap, and within a few minutes the usage grows until the heap is full again.
JVM version is 19.0-b09
using thread-local object allocation.
Parallel GC with 2 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 536870912 (512.0MB)
NewSize = 178257920 (170.0MB)
MaxNewSize = 178257920 (170.0MB)
OldSize = 4194304 (4.0MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 16777216 (16.0MB)
MaxPermSize = 314572800 (300.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 145489920 (138.75MB)
used = 81242600 (77.4789810180664MB)
free = 64247320 (61.271018981933594MB)
55.84070704004786% used
From Space:
capacity = 16384000 (15.625MB)
used = 0 (0.0MB)
free = 16384000 (15.625MB)
0.0% used
To Space:
capacity = 16384000 (15.625MB)
used = 0 (0.0MB)
free = 16384000 (15.625MB)
0.0% used
PS Old Generation
capacity = 358612992 (342.0MB)
used = 358612992 (342.0MB)
free = 0 (0.0MB)
100.0% used
PS Perm Generation
capacity = 172621824 (164.625MB)
used = 172385280 (164.3994140625MB)
free = 236544 (0.2255859375MB)
99.86296981776765% used
This is how my heap looks. It is remarkable that the Old Generation and the Perm Generation are both at about 100% usage, even though I have triggered GC manually several times. The question is: how can I get the IDE to sweep those objects out of the old generation without restarting it? (After startup the memory usage is about 60-90 MB.) And how can I find out what those objects are? (A command-line sketch for that follows after the update below.) There are some threads visible in VisualVM - RMI TCP Connection, RMI TCP Accept, XML RPC Weblistener and so on - which keep consuming memory, even 5-10 MB per second, although I am doing nothing in the IDE.
$ uname -a
Linux bagdemir 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 21:21:01 UTC 2011 i686 GNU/Linux
$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Server VM (build 19.1-b02, mixed mode)
UPDATE:
memory configuration:
-Xms256m -Xmx512m -Xmn170m -XX:MaxPermSize=300m
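To find out what is actually filling the old generation - and whether a full collection can reclaim any of it - a class histogram is a quick first look. A sketch, assuming a JDK's tools are on the PATH and 1234 stands for the IDE's PID; -histo:live forces a full GC before printing:
$ jps -l                      # find the IDEA process ID
$ jmap -histo:live 1234 | head -20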

You may find this useful: Intellij Idea JVM options benchmark: default settings are worst

The right way to go is to capture a memory snapshot and submit a ticket to the JetBrains tracker with the snapshot attached.
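With the JDK 6 tools, such a snapshot can be taken from the command line. A sketch, where 1234 again stands for the IDE's PID; the resulting .hprof file can be attached to the ticket or inspected first in VisualVM:
$ jmap -dump:live,format=b,file=idea-heap.hprof 1234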

As of April 2, 2019, excess memory usage persists. IntelliJ IDEA Ultimate Edition ships with 131 plugins enabled by default.
I turned off about 50 of those plugins.
Go to File >> Settings >> Plugins to manage plugins, then click Installed to view the plugins that are already active.

Related

Why does the JVM use at most 40% of the CPU?

The problem: at load peaks the CPU gets stuck at 40% and responses slow down.
I have a dedicated backend API server with a real load of ~7,332 hits/min.
The database is on a dedicated server and its load is OK.
There are almost no IO ops on this machine.
12 cores x 2 CPU = 24 cores
OS: OS Linux, 4.4.0-98-generic , amd64/64 (24 cores)
java version "1.7.0_151"
OpenJDK Runtime Environment (IcedTea 2.6.11) (7u151-2.6.11-0ubuntu1.14.04.1)
OpenJDK 64-Bit Server VM (build 24.151-b01, mixed mode)
Tomcat 7.0.82
-Xms50g
-Xmx50g
-XX:PermSize=512m
-XX:MaxPermSize=512m
-XX:MaxJavaStackTraceDepth=-1
-Djava.net.preferIPv4Stack=true
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70
-XX:+ScavengeBeforeFullGC
-XX:+CMSScavengeBeforeRemark
We used the BIO connector (same behaviour); now we use the NIO connector:
<Connector port="8080" redirectPort="8443"
protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="2000"
minSpareThreads="20"
acceptCount="200"
acceptorThreadCount="2"
connectionTimeout="20000"
processorCache="-1"
URIEncoding="UTF-8" />
Stats from JavaMelody
Busy threads = 121 / 2,000
Bytes received = 8,679,400,308
Bytes sent = 83,345,586,407
Request count = 6,169,418
Error count = 961
Sum of processing times (ms) = 2,396,325,165
Max processing time (ms) = 4,168,515
Memory: Non heap memory = 275 Mb (Perm Gen, Code Cache),
Buffered memory = 5 Mb,
Loaded classes = 48,952,
Garbage collection time = 1,238,271 ms,
Process cpu time = 197,922,070 ms,
Committed virtual memory = 66,260 Mb,
Free physical memory = 267 Mb,
Total physical memory = 64,395 Mb,
Free swap space = 0 Mb,
Total swap space = 0 Mb
Perm Gen memory: 247 Mb / 512 Mb
Free disk space: 190,719 Mb
I can't reproduce this on the test server.
Where is my bottleneck?
CPU usage chart from JMX and htop stats were attached as screenshots.
Update: profiler screenshots attached - one shows threads hanging in TaskQueue.poll(), the other is ordered by self CPU time.
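One low-tech check worth adding: a few thread dumps taken seconds apart show which threads are actually RUNNABLE versus merely parked in the pool (TaskQueue.poll() at the top of a profile is often just idle Tomcat worker threads waiting for work). A sketch, with 1234 standing for the Tomcat PID:
$ jstack 1234 > dump-1.txt
$ sleep 5
$ jstack 1234 > dump-2.txt
$ grep -c 'java.lang.Thread.State: RUNNABLE' dump-1.txt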

Java app gets slower and slower until a full GC is performed

I have a program which receives UDP packets, parses some data from them, and saves it to a DB, in multiple threads. It uses Hibernate and Spring via Grails (GORM stand-alone).
It works OK on one server: it starts fast (20-30 ms per packet, except for the very first ones while the JIT kicks in) and after a while stabilizes at 50-60 ms.
However, on a newer, more powerful server it starts fast but gradually gets slower and slower (reaching 200 ms or even 300 ms per packet, always under the same load). Then, when the JVM performs a full GC (or I trigger one manually from VisualVM), it gets fast again and the cycle starts over.
Any ideas about what could cause this behaviour? It seems to get slower as the old gen fills up. Eden fills up quite fast, but GC pauses seem to be short. And it works OK on the old server, so it puzzles me.
Servers and settings:
The servers' specs are:
Old server: Intel Xeon E3-1245 V2 @ 3.40GHz, 32 GB RAM without ECC
New server: Intel Xeon E5-1620 @ 3.60GHz, 64 GB RAM with ECC
OS: Debian 7.6
JVM:
java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
JVM settings:
Old server: was running with no special RAM or GC params, PrintFlagsFinal gives: -XX:InitialHeapSize=525445120 -XX:+ManagementServer -XX:MaxHeapSize=8407121920 -XX:+UseCompressedOops -XX:+UseParallelGC
New server: tried forcing those same flags, same results.
The old server seemed to support UseFastStosb; it was enabled by default. Forcing it on the new server results in a message saying it's not supported.
Can you try G1, which is supported by your JVM version?
Applications running with either the CMS or the Parallel Old GC collector would benefit from switching to G1 if the application has one or more of the following traits:
(1) Full GC durations are too long or too frequent.
(2) The rate of object allocation or promotion varies significantly.
(3) Undesired long garbage collection or compaction pauses (longer than 0.5 to 1 second).
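On 7u65, trying G1 is a one-flag change (the pause-time goal below is only an illustrative value):
-XX:+UseG1GC -XX:MaxGCPauseMillis=200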
I cannot possibly say if there's anything wrong with your application / the server VM defaults.
Try adding -XX:+PrintGCDetails to learn more about the young and old generation sizes at the time of each collection. According to those values, your initial heap size starts around 525 MB and the maximum heap is around 8.4 GB. The JVM resizes the heap based on demand, and every time it resizes the heap, all the young and old generations are resized accordingly, which can cause a Full GC.
Also, your flags indicate UseParallelGC, which does the young generation collection using multiple threads, but the old gen is still collected serially by a single thread.
The default value of NewRatio is 2, which means that the young gen takes 1/3 of the heap and the old gen takes 2/3. If you have too many short-lived objects, try resizing the young gen, and perhaps give the G1 GC a try now that you're on 7u65.
But before tuning, I strongly recommend that you:
(1) Do a proper analysis using GC logs - see whether any Full GCs coincide with your slow response times (a flag sketch follows below).
(2) Try Java Mission Control. Use it to monitor your remote server process; it's feature-rich and you'll learn much more about the GC.
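Both recommendations are a couple of startup flags away. A sketch for JDK 7 - the log path is arbitrary, and the JMX remote settings below disable authentication and SSL, which is only acceptable on a trusted network:
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/var/log/app/gc.log
-Dcom.sun.management.jmxremote.port=7091
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false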
You can use the -XX:+PrintGCDetails option to see how frequently each GC occurs.
However, I don't think it is a GC issue (or a GC-parameter issue). As your post says, the program runs OK on one machine, and the problem appears when it is moved to a new, faster machine. My guess is that there is some bottleneck in your program which slows down the release of references to allocated objects. Consequently, memory accumulates and the VM spends a lot of time on GC and memory allocation.
In other words: a producer allocates heap memory for packet processing, and the consumer recycles that memory once the packets are saved to the DB - but the consumer cannot keep up with the producer.
So my suggestion is to check your program and do some measurements.
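If that imbalance is the cause, a bounded hand-off between the receiver and the DB writers makes it visible immediately instead of letting the heap fill up. A hypothetical sketch (not the poster's actual code; saveToDb is a made-up placeholder):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedHandoff {
    // Bounded queue: when the DB writers fall behind, put() blocks the
    // receiving thread instead of letting parsed packets pile up on the heap.
    static final BlockingQueue<byte[]> QUEUE = new ArrayBlockingQueue<byte[]>(10000);

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        byte[] packet = QUEUE.take();
                        // saveToDb(packet) would go here (hypothetical helper)
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();
        // Producer side: QUEUE.put(packet) applies backpressure when the queue
        // is full, so a slow consumer shows up as receiver blocking rather than
        // as old-gen growth followed by a long full GC.
        QUEUE.put(new byte[512]); // example packet
    }
}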

What is the purpose of -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio?

Kindly tell me the purpose of those options.
After googling I think:
MinHeapFreeRatio means "at least the specified percentage of heap space will be free after a GC"
and
MaxHeapFreeRatio means "no more than the specified percentage of heap space will be free after a GC" [if more memory is free than the specified percentage, the excess will be returned to the OS].
When I tried these options with a value of 10 for both, memory was not released back to the OS even when more than 80 percent of the heap was free.
Details:
Java HotSpot(TM) 64-Bit Server VM (1.5.0_15-b04, mixed mode)
ParallelGC (also known as the throughput collector, the default collector in a server-class VM)
I specified -Xms50M and -Xmx1000M as JVM arguments
OS: Windows 7 Professional (8 GB memory, 64-bit OS)
Note: I also tried with SerialGC; the min and max heap free ratio options were ignored there too.
The best explanation of the -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio parameters is available in the Oracle garbage collection tuning documentation:
https://docs.oracle.com/javase/10/gctuning/factors-affecting-garbage-collection-performance.htm
-XX:MinHeapFreeRatio=40
-XX:MaxHeapFreeRatio=70
With these options, if the percent of free space in a generation falls below 40%, then the generation expands to maintain 40% free space, up to the maximum allowed size of the generation. Similarly, if the free space exceeds 70%, then the generation contracts so that only 70% of the space is free, subject to the minimum size of the generation.
I combine them with G1GC in this way:
-XX:+UseG1GC -XX:MinHeapFreeRatio=2 -XX:MaxHeapFreeRatio=10
With this result (a memory graph was attached in the original answer):
Java very rarely releases memory back to the OS.
Generally speaking, applications use more memory over time, rather than less. Are you sure your memory is so limited that you need this? And are you sure you are checking the resident memory, not the virtual memory size, which in your case will be about 1.2 GB the whole time?
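The effect of the two ratios can be watched with a small program that prints the committed heap before and after dropping a large allocation. A minimal sketch - whether and how far the heap shrinks depends on the collector; run it, for example, with -Xms50m -Xmx1000m -XX:+UseG1GC -XX:MinHeapFreeRatio=2 -XX:MaxHeapFreeRatio=10:

public class HeapFreeRatioDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("committed before: " + rt.totalMemory() / (1024 * 1024) + " MB");
        byte[][] blocks = new byte[32][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[8 * 1024 * 1024]; // allocate ~256 MB in 8 MB chunks
        }
        System.out.println("committed at peak: " + rt.totalMemory() / (1024 * 1024) + " MB");
        blocks = null; // drop all references
        System.gc();   // request a full collection; shrinking may follow it
        System.out.println("committed after GC: " + rt.totalMemory() / (1024 * 1024) + " MB");
    }
}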

How to reduce Sun/Oracle JVM internal overhead?

This problem is specifically about the Sun Java JVM running on Linux x86-64. I'm trying to figure out why the Sun JVM takes so much of the system's physical memory even when I have set Heap and Non-Heap limits.
The program I'm running is Eclipse 3.7 with multiple plugins/features. The most used features are PDT, EGit and Mylyn. I'm starting Eclipse with the following command-line switches:
-nosplash -vmargs -Xincgc -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -Dorg.eclipse.swt.internal.gtk.disablePrinting
Worth noting are especially the switches:
-Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m
These switches should limit the JVM heap to a maximum of 200 MB and the non-heap to 150 MB ("CMS Permanent generation" and "Code Cache", as labeled by JConsole). Logically the JVM should take a total of 350 MB plus the internal overhead required by the JVM.
In reality, the JVM takes 544.6 MB for my current Eclipse process as computed by ps_mem.py (http://www.pixelbeat.org/scripts/ps_mem.py), which computes the real physical memory pages reserved by the Linux 2.6+ kernel. That's an internal Sun JVM overhead of 35%, or roughly 200 MB!
Any hints about how to decrease this overhead?
Here's some additional info:
$ ps auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
me 23440 2.4 14.4 1394144 558440 ? Sl Oct12 210:41 /usr/bin/java ...
And according to JConsole, the process has used 160 MB of heap and 151 MB of non-heap.
I'm not saying that I cannot afford an extra 200 MB for running Eclipse, but if there's a way to reduce this waste, I'd rather use that 200 MB for kernel block device buffers or the file cache. In addition, I have similar experience with other Java programs - perhaps I could reduce the overhead for all of them with similar tweaks.
Update: after posting the question, I found a previous SO post:
Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
It seems that I should use pmap to investigate the problem.
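pmap breaks the resident size down by mapping, which separates the heap, perm gen, thread stacks, and native libraries. A sketch using the PID from the ps output above (-x adds an RSS column; sorting on it surfaces the largest mappings):
$ pmap -x 23440 | sort -n -k3 | tail -20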
I think the reason for the high memory consumption of your Eclipse environment is the use of SWT. SWT is a native graphics library that lives outside of the JVM heap and, to make matters worse, the Linux implementation is not really optimized.
I don't think there's much chance of reducing your Eclipse environment's memory consumption outside the heap.
Eclipse is a memory and CPU hog. In addition to the Java class libraries, all the low-level GUI work is handled by native system calls, so you will have a substantial native JNI library attached to your process to execute the low-level X calls.
Eclipse offers millions of useful features and lots of helpers to speed up your day-to-day programming tasks - but lean and mean it is not. Any reduction in memory or resources will probably result in a noticeable slowdown. It really depends on how much you value your time versus your computer's memory.
If you want lean and mean, gvim and make are unbeatable. If you want code completion, automatic builds, etc., you must expect to pay for it with extra resources.
If I run the following program
public class Main {
    public static void main(String... args) throws InterruptedException {
        for (int i = 0; i < 60; i++) {
            System.out.println("waiting " + i);
            Thread.sleep(1000); // keep the process alive long enough to measure it
        }
    }
}
then ps auwx prints
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
500 13165 0.0 0.0 596680 13572 pts/2 Sl+ 13:54 0:00 java -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -cp . Main
The amount of memory used is 13.5 MB. There are about 200 MB of shared libraries that count towards the VSZ size; the rest can be accounted for by the max heap and max perm gen, plus an overhead for the thread stacks etc.
The problem doesn't appear to be the JVM itself but the application running in it. Using additional shared libraries, direct memory, and memory-mapped files can increase the amount of memory used.
Given that you can buy 16 GB for around $100, are you sure this is actually a problem?

Shrinking survivor spaces lead to continuous full GC

I've had this troubling experience with a Tomcat server, which runs:
our Hudson server;
a staging version of our web application, redeployed 5-8 times per day.
The problem is that we end up with continuous garbage collection even though the old generation is nowhere near full. I've noticed that the survivor spaces are next to nonexistent, and the garbage collector output looks like this:
[GC 103688K->103688K(3140544K), 0.0226020 secs]
[Full GC 103688K->103677K(3140544K), 1.7742510 secs]
[GC 103677K->103677K(3140544K), 0.0228900 secs]
[Full GC 103677K->103677K(3140544K), 1.7771920 secs]
[GC 103677K->103677K(3143040K), 0.0216210 secs]
[Full GC 103677K->103677K(3143040K), 1.7717220 secs]
[GC 103679K->103677K(3143040K), 0.0219180 secs]
[Full GC 103677K->103677K(3143040K), 1.7685010 secs]
[GC 103677K->103677K(3145408K), 0.0189870 secs]
[Full GC 103677K->103676K(3145408K), 1.7735280 secs]
The heap information before restarting Tomcat is:
Attaching to process ID 10171, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 14.1-b02
using thread-local object allocation.
Parallel GC with 8 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 3221225472 (3072.0MB)
NewSize = 2686976 (2.5625MB)
MaxNewSize = 17592186044415 MB
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 268435456 (256.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 1073479680 (1023.75MB)
used = 0 (0.0MB)
free = 1073479680 (1023.75MB)
0.0% used
From Space:
capacity = 131072 (0.125MB)
used = 0 (0.0MB)
free = 131072 (0.125MB)
0.0% used
To Space:
capacity = 131072 (0.125MB)
used = 0 (0.0MB)
free = 131072 (0.125MB)
0.0% used
PS Old Generation
capacity = 2147483648 (2048.0MB)
used = 106164824 (101.24666595458984MB)
free = 2041318824 (1946.7533340454102MB)
4.943684861063957% used
PS Perm Generation
capacity = 268435456 (256.0MB)
used = 268435272 (255.99982452392578MB)
free = 184 (1.7547607421875E-4MB)
99.99993145465851% used
The relevant JVM flags passed to Tomcat are:
-verbose:gc -Dsun.rmi.dgc.client.gcInterval=0x7FFFFFFFFFFFFFFE -Xmx3g -XX:MaxPermSize=256m
Please note that the survivor spaces are sized at about 40 MB at startup.
How can I avoid this problem?
Updates:
The JVM version is
$ java -version
java version "1.6.0_15"
Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02, mixed mode)
I'm going to look into bumping up the PermGen size and seeing if that helps - probably the sizing of the survivor spaces was unrelated.
The key is probably PS Perm Generation, which is at 99.999% (only 184 bytes out of 256 MB free).
Usually, I'd suggest that you give it more perm gen but you already gave it 256MB which should be plenty. My guess is that you have a memory leak in some code generation library. Perm Gen is mostly used for bytecode for classes.
It's very easy to create ClassLoader leaks - all it takes is a single object loaded through that ClassLoader being referred to by an object not loaded by it. A constantly redeployed app will then quickly fill the PermGen space.
This article explains what to look out for, and a followup describes how to diagnose and fix the problem.
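The underlying pattern is small enough to sketch. Hypothetically: a registry class loaded by the parent (shared) ClassLoader holds on to an object whose class was loaded by the web app's ClassLoader, which pins that loader - and every class it loaded - in PermGen across redeploys:

import java.util.ArrayList;
import java.util.List;

// Imagine this class ships in a shared library, loaded by the parent ClassLoader.
public class SharedRegistry {
    private static final List<Object> LISTENERS = new ArrayList<Object>();

    public static void register(Object listener) {
        // If 'listener' is an instance of a class loaded by the web app's
        // ClassLoader, this static list keeps that ClassLoader reachable.
        // The loader and all classes it loaded can then never be collected,
        // so every redeploy leaks another full copy of the app into PermGen.
        LISTENERS.add(listener);
    }
}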
I think this is not that uncommon for an application server that gets continuously deployed to. The perm gen space, which is full for you, is where classes go. Keep in mind that JSPs are compiled as Java classes, and when you change a JSP, a new class gets generated and loaded.
We have had this problem, and our solution is to have the app server restart occasionally.
This is what I'd do:
Deploy Hudson to a separate server from your staging server
Configure Hudson to restart your staging server from time to time. You can do this in one of two ways:
Restart periodically (e.g., every night at midnight, regardless of whether there's build activity); or
Have the web app deployment job trigger the server-restart job. If you do this, make sure there's a really long quiet period for the restart job (we set ours to 2 hours), so that you don't get a server restart for every build (i.e., if two web app deployments happen within 2 hours, they'll only trigger one restart).
The flag -XX:SurvivorRatio sets the ratio between Eden and the survivor spaces. According to the JDK 1.5 tuning doc, the default value is 32, which gives a 1:32 ratio. This is in accordance with what you're seeing. It seems incredibly small to me, although I understand that only a very small number of objects are expected to make their way from Eden to the survivor space.
So, assuming that you have a lot of long-lived objects, you should decrease the survivor ratio. The risk is that you only have those long-lived objects during a startup phase, and so are limiting the Eden size. For a testing server, I doubt this is going to be an issue.
I'd probably also reduce the size of the Eden space by increasing -XX:NewRatio (the default is 3). My gut says that a hundred MB or so is sufficient for the young generation, and that allocating such a large amount of space just increases the cost of garbage collection (i.e., objects will live in Eden far too long). But that's just instinct, and it should definitely be validated for your environment.
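As a concrete starting point (illustrative values only - validate with -verbose:gc before keeping them):
-Xmx3g -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:MaxPermSize=256m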
And a semi-related comment, after reading other replies: if you're not seeing errors for running out of permgen space, don't spend your time fiddling with it. The permgen is managed separately from the rest of the heap.
