This problem is specifically about the Sun Java JVM running on Linux x86-64. I'm trying to figure out why the Sun JVM takes so much of the system's physical memory even when I have set heap and non-heap limits.
The program I'm running is Eclipse 3.7 with multiple plugins/features. The most used features are PDT, EGit and Mylyn. I'm starting Eclipse with the following command-line switches:
-nosplash -vmargs -Xincgc -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -Dorg.eclipse.swt.internal.gtk.disablePrinting
Especially worth noting are the switches:
-Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m
These switches should limit the JVM heap to a maximum of 200 MB and the non-heap to 150 MB ("CMS Permanent generation" and "Code Cache", as labeled by JConsole). Logically, the JVM should take a total of 350 MB plus whatever internal overhead the JVM itself requires.
In reality, the JVM takes 544.6 MB for my current Eclipse process as computed by ps_mem.py (http://www.pixelbeat.org/scripts/ps_mem.py), which measures the physical memory pages actually reserved by the Linux 2.6+ kernel. That's an internal Sun JVM overhead of roughly 200 MB, or about 35%!
Any hints about how to decrease this overhead?
Here's some additional info:
$ ps auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
me 23440 2.4 14.4 1394144 558440 ? Sl Oct12 210:41 /usr/bin/java ...
And according to JConsole, the process has used 160 MB of heap and 151 MB of non-heap.
I'm not saying that I can't afford an extra 200 MB for running Eclipse, but if there's a way to reduce this waste, I'd rather give that 200 MB to kernel block device buffers or the file cache. In addition, I have had similar experiences with other Java programs -- perhaps I could reduce the overhead for all of them with similar tweaks.
Update: After posting the question, I found a previous post on SO:
Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
It seems that I should use pmap to investigate the problem.
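As a rough first step (the PID is the one from the ps output above; pmap -x prints an RSS column, and sorting on it shows which mappings actually hold physical pages):
$ pmap -x 23440 | sort -n -k3 | tail -20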
I think the reason for the high memory consumption of your Eclipse environment is its use of SWT. SWT is a native graphics library that lives outside the JVM heap, and to make matters worse, its Linux implementation is not particularly well optimized.
I don't think there's much chance of reducing the memory your Eclipse environment consumes outside the heap.
Eclipse is a memory and CPU hog. In addition to the Java class libraries, all the low-level GUI work is handled through native calls, so you will have a substantial "native" JNI library for the low-level X calls attached to your process.
Eclipse offers millions of useful features and lots of helpers to speed up your day to day programming tasks - but lean and mean it is not. Any reduction in memory or resources will probably result in a noticeable slowdown. It really depends on how much you value your time vs. your computers memory.
If you want lean and mean, gvim and make are unbeatable. If you want code completion, automatic builds, etc., you must expect to pay for them with extra resources.
If I run the following program
public class Main {
    public static void main(String... args) throws InterruptedException {
        for (int i = 0; i < 60; i++) {
            System.out.println("waiting " + i);
            Thread.sleep(1000);
        }
    }
}
then ps auxw prints
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
500 13165 0.0 0.0 596680 13572 pts/2 Sl+ 13:54 0:00 java -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -cp . Main
The amount of memory actually used (RSS) is 13.5 MB. There are about 200 MB of shared libraries that count towards the VSZ size. The rest can be accounted for by the maximum heap (200 MB), the maximum perm gen (150 MB) and some overhead for the code cache, thread stacks, etc.
The problem doesn't appear to be with the JVM but with the application running in it. Using additional shared libraries, direct memory, and memory-mapped files can increase the amount of memory used (a small example of the direct-memory case follows).
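For example, here is a minimal sketch (the class name and buffer sizes are mine, purely for illustration) of the direct-memory case: the buffers live in native memory, so the heap figure it prints stays tiny while the RSS reported by ps grows by roughly 160 MB.
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
// Illustrative only: direct buffers consume native memory that is invisible
// to -Xmx but still counted in the process RSS.
public class DirectMemoryDemo {
    public static void main(String[] args) throws InterruptedException {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        for (int i = 0; i < 10; i++) {
            // 16 MB per buffer, allocated outside the Java heap
            buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
        }
        Runtime rt = Runtime.getRuntime();
        System.out.printf("heap used: %d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        Thread.sleep(60000); // keep the process alive so ps/pmap can inspect it
        System.out.println("still holding " + buffers.size() + " direct buffers");
    }
}
Run it with a small -Xmx and compare the heap figure it prints against the RSS reported by ps.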
Given you can buy 16 GB for around $100, do you know this is actually a problem?
Related
We currently have problems with a Java native memory leak. The server is quite big (40 CPUs, 128 GB of memory). The Java heap size is 64 GB, and we run a very memory-intensive application that reads lots of data into strings with about 400 threads and throws them away after a few minutes.
So the heap fills up very fast, but what's on the heap becomes obsolete and can be GCed very fast, too. We have to use G1 to avoid stop-the-world pauses lasting minutes.
Now, that seems to work fine: the heap is big enough to run the application for days, and nothing is leaking there. Nevertheless, the Java process keeps growing over time until all 128 GB are used and the application crashes with an allocation failure.
I've read a lot about native Java memory leaks, including the glibc issue with the maximum number of malloc arenas (we have Debian wheezy with glibc 2.13, so no fix is possible here by setting MALLOC_ARENA_MAX=1 or 4 without a dist upgrade).
So we tried jemalloc, which gave us profiling graphs for inuse-space and inuse-objects (graphs not reproduced here).
I can't figure out what the issue is here. Does anyone have an idea?
If I set MALLOC_CONF="narenas:1" for jemalloc as an environment variable for the Tomcat process running our app, could the process still somehow end up using the glibc malloc implementation anyway?
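One sanity check I'm considering, to confirm jemalloc really is the allocator in use (the library path is only a guess for our Debian box, and MALLOC_CONF is read by jemalloc only, never by glibc malloc):
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 MALLOC_CONF=narenas:1 ./bin/catalina.sh start
grep -c jemalloc /proc/<tomcat-pid>/maps    # non-zero means jemalloc is mapped into the process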
Here is our G1 setup - maybe there is some issue here?
-XX:+UseCompressedOops
-XX:+UseNUMA
-XX:NewSize=6000m
-XX:MaxNewSize=6000m
-XX:NewRatio=3
-XX:SurvivorRatio=1
-XX:InitiatingHeapOccupancyPercent=55
-XX:MaxGCPauseMillis=1000
-XX:PermSize=64m
-XX:MaxPermSize=128m
-XX:+PrintCommandLineFlags
-XX:+PrintFlagsFinal
-XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution
-XX:-UseAdaptiveSizePolicy
-XX:+UseG1GC
-XX:MaxDirectMemorySize=2g
-Xms65536m
-Xmx65536m
Thanks for your help!
We never called System.gc() explicitly, and in the meantime we stopped using G1, specifying nothing other than -Xms and -Xmx.
We are therefore using nearly all of the 128 GB for the heap now. The Java process's memory usage is high, but constant for weeks. I'm sure this is some G1 issue, or at least a general GC issue. The only disadvantage of this "solution" is high GC pauses, but they decreased from up to 90 s to about 1-5 s after increasing the heap, which is OK for the benchmark we run on our servers.
Before that, I played around with the -XX:ParallelGCThreads option, which had a significant influence on how fast the memory leaked when decreasing it from 28 (the default for 40 CPUs) down to 1. The memory graphs looked somewhat like a hand fan when using different values on different instances...
First time user of Jenkins here, and having a bit of trouble getting it started. From the Linux shell I run a command like:
java -Xms512m -Xmx512m -jar jenkins.war
and consistently get an error like:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# An error report file with more information is saved as:
# /home/twilliams/.jenkins/hs_err_pid36290.log
First, the basics:
Jenkins 1.631
Running via the jetty embedded in the war file
OpenJDK 1.7.0_51
Oracle Linux (3.8.13-55.1.5.el6uek.x86_64)
386 GB ram
40 cores
I get the same problem with a number of other configurations as well: using Java Hotspot 1.8.0_60, running through Apache Tomcat, and using all sorts of different values for -Xms/-Xmx/-Xss and similar options.
I've done a fair bit of research and think I know what the problem is, but am at a loss as to how to solve it. I suspect that I'm running into the virtual memory overcommit issue mentioned here; the relevant bits from ulimit:
--($:)-- ulimit -a
...
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 8388608
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) 8388608
...
If I double the virtual memory limit as root, I can start Jenkins, but I'd rather not run Jenkins as the root user.
Another workaround: a soon-to-be-decommissioned machine with 48 GB ram and 24 cores can start Jenkins without issue, though (I suspect) just barely: according to htop, its virtual memory footprint is just over 8 GB. I suspect, as a result, that the memory overcommit issue is scaling with the number of processors on the machine, presumably the result of Jenkins starting a number of threads proportional to the number of processors present on the host machine. I roughly captured the thread count via ps -eLf | grep jenkins | wc -l and found that the thread count spikes at around 114 on the 40 core machine, and 84 on the 24 core machine.
Does this explanation seem sound? Provided it does...
Is there any way to configure Jenkins to reduce the number of threads it spawns at startup? I tried the arguments discussed here but, as advertised, they didn't seem to have any effect.
Are there any VMs available that don't suffer from the overcommit issue, or some other configuration option to address it?
The sanest option at this point may be to just run Jenkins in a virtualized environment to limit the resources at its disposal to something reasonable, but at this point I'm interested in this problem on an intellectual level and want to know how to get this recalcitrant configuration to behave.
Edit
Here's a snippet from the hs_err_pid log file, which guided my initial investigation:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
Here are a couple of command lines I tried, all with the same result.
Starting with a pitiful amount of heap space:
java -Xms2m -Xmx2m -Xss228k -jar jenkins.war
Starting with significantly more heap space:
java -Xms2048m -Xmx2048m -Xss1m -XX:ReservedCodeCacheSize=512m -jar jenkins.war
Room to grow:
java -Xms2m -Xmx1024m -Xss228k -jar jenkins.war
A number of other configurations were tried as well. Ultimately I don't think the problem is heap exhaustion here - it's that the JVM is trying to reserve more virtual memory for itself (in which to store the heap, thread stacks, etc.) than the ulimit settings allow. Presumably this is the result of the overcommit issue linked earlier, such that if Jenkins is spawning 120 threads, it's erroneously trying to reserve 120x as much VM space as the master process originally occupied.
Having done what I could with the other options suggested in that log, I'm trying to figure out how to reduce the thread count in Jenkins to test the thread VM overcommit theory (a small sketch of the effect is below).
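To convince myself that idle thread stacks alone can eat into the virtual memory limit, I put together this little sketch (thread count and class name are arbitrary); each started thread should reserve roughly -Xss of virtual address space even though it barely touches it:
// Illustrative only: idle threads still reserve their full stack as virtual memory.
public class ThreadStackDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable idle = new Runnable() {
            public void run() {
                try {
                    Thread.sleep(600000); // park so the stack stays reserved
                } catch (InterruptedException ignored) {
                }
            }
        };
        for (int i = 0; i < 100; i++) {
            Thread t = new Thread(idle);
            t.setDaemon(true);
            t.start();
        }
        System.out.println("100 idle threads started; compare VSZ in ps for different -Xss values");
        Thread.sleep(600000);
    }
}
Running it with different -Xss values and watching VSZ should show the reservation scale with the stack size.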
Edit #2
Per Michał Grzejszczak, this is an issue with the glibc distributed with Red Hat Enterprise Linux 6, as discussed here. The issue can be worked around by explicitly setting the environment variable MALLOC_ARENA_MAX, in my case export MALLOC_ARENA_MAX=2. Without this variable, glibc will apparently create up to 8 x CPU-core-count malloc arenas, each reserving 64 MB of virtual address space. My 40-core case would have come to 40 x 8 x 64 MB = 20 GB of virtual address space, by itself well north of the 8 GB ulimit on my machine. Setting it to 2 reduces the arena reservation to around 128 MB.
Jenkins' memory footprint is related more to the number and size of the projects it manages than to the number of CPUs or the available memory. Jenkins should run fine with 1 GB of heap memory unless you have gigantic projects on it.
You may have misconfigured the JVM, though. The -Xmx and -Xms parameters govern the heap space the JVM can use: -Xmx is the limit for heap memory and -Xms is its starting value. The heap is a single memory area for the entire JVM. You can easily monitor it with tools like JConsole or VisualVM, or from inside the process (see the sketch below).
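If you prefer to see the numbers from inside the process, a small sketch along these lines (class name is mine) prints the same heap and non-heap figures JConsole would show:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
// Illustrative only: report heap and non-heap usage via the standard MemoryMXBean.
public class HeapReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.printf("heap:     used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%d MB, committed=%d MB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}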
On the other hand, -Xss is not related to the heap: it is the stack size for each thread in the JVM process. As Java programs tend to create numerous threads, setting this parameter too big can prevent your program from launching. Typically this value is in the range of 512 KB; entering 512m here instead makes it impossible for the JVM to start. Make sure your settings do not contain any such mistakes (and post your memory config, too).
My Tomcat was suddenly shut down. I checked the log file and found that it was killed with the message:
kernel: Killed process 17420, UID 0, (java) total-vm:8695172kB, anon-rss:4389088kB, file-rss:20kB
My settings for running Tomcat are -Xms2048m -Xmx4096m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=1024m
My system, when I run "free -m", shows:
             total       used       free     shared    buffers     cached
Mem:          7859       7713        146          0         97       1600
-/+ buffers/cache:        6015       1844
Swap:            0          0          0
I monitored the program with "top -p"; the result is below:
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8048440k total,  7900616k used,   147824k free,   100208k buffers
Swap:        0k total,        0k used,        0k free,  1640888k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4473 root 20 0 8670m 2.5g 6568 S 0.0 32.6 71:07.84 java
My question is:
1. Why is VIRT = 8670m (in the "top -p" result) greater than the total memory (Mem: 8048440k), yet my application is still running?
2. Why was my Tomcat killed by the kernel? I don't see anything strange with the memory (it looks similar to how it looks while running normally).
3. What should I do to avoid this error, and why?
The only thing I know of that causes the kernel to kill tasks in Linux is the out-of-memory (OOM) killer. This article from Oracle might be a little more recent and relevant.
The solution depends on what else is running on the system. From what you showed, you have less than 2 GB of usable memory, but your Java heap max is topping out around 4 GB. What we don't know is how big the Java heap is at the time you took that snapshot. If it's at its initial 2 GB, then you could be running close to the limit. Also, based on your output, you have no swap space to use as a fallback.
If you have any other significant processes on the system, you need to account for their maximum memory usage. The short answer: try to reduce -Xmx and -XX:MaxPermSize if at all possible; you'll have to analyze your load to see whether this is feasible or whether it will cause unreasonable GC CPU usage (an illustrative example follows).
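Purely as an illustration (the exact values are guesses and need to be validated against your load and GC behavior), settings along these lines would leave more headroom on an 8 GB box with no swap:
-Xms1024m -Xmx2048m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=512m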
Some notes:
Java uses more memory than just the heap; it also needs memory for the native code running the VM itself.
Java 8 replaces PermGen with Metaspace, which lives outside the heap, so I believe it adds memory on top of the -Xmx value; you may want to note that if you're running Java 8.
As you reduce the memory limit, you'll hit 3 ranges:
Far above real requirements: no noticeable difference
Very close to real requirements: server freezes/stops responding and uses 100% CPU (GC overhead)
Below real requirements: OutOfMemoryErrors
Per your first question, it is possible for a process's virtual memory size to exceed RAM + swap. I remember running Java on a swapless embedded system with 256 MB of RAM, seeing 500 MB of memory usage, and being surprised. Some reasons:
In Linux you can allocate memory, but it's not actually used until you write to it
Memory-mapped files (and probably things like shared memory segments) count towards this limit. I believe Java opens all of the jar files as memory-mapped files, so included in that VIRT size are all of the jars on your classpath, including the 80 MB or so of rt.jar (see the sketch after this list).
Shared objects probably count towards VIRT but only occupy physical space once (i.e., one copy of the .so is loaded for many processes).
I've heard, though I can't find a reference right now, that Linux can effectively use binaries and .so files as read-only "swap" space: loading a 2 MB binary/.so will increase your VM size by 2 MB but not actually use that much RAM, because only the parts actually accessed are paged in from disk.
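A small sketch of the memory-mapped-file point (class name and sizes are mine): mapping a 1 GB sparse file bumps VIRT by about 1 GB immediately, while RES only grows for the pages that are actually touched.
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
// Illustrative only: a large mapping inflates VIRT, but RSS stays tiny until pages are touched.
public class MmapDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("mmap-demo", ".bin");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(1L << 30); // 1 GB, sparse on most filesystems
        FileChannel ch = raf.getChannel();
        MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1L << 30);
        map.put(0, (byte) 1); // touch a single page: one 4 KB page becomes resident
        System.out.println("mapped 1 GB; compare VIRT vs RES in top");
        Thread.sleep(60000); // keep the mapping alive for inspection
        ch.close();
        raf.close();
    }
}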
Linux has an OOM (out-of-memory) mechanism: when the OS's memory is insufficient, the OOM killer kills the process consuming the most memory (in most situations; see Linux Out Of Memory Management). Obviously your Tomcat owns the most memory.
How to solve it? In my experience, you should observe the OS's memory usage; you can use the top command to find the offending process, and at the same time use jvisualvm to observe Tomcat's memory usage.
An application is running on a high-end Linux server with 16 GB RAM and a 100 GB hard disk. We use the following command to run the program:
nohup java -server -Xmn512m -Xms1024m -Xmx1024m -Xss256k -verbose:gc
-Xloggc:/logs/gc_log.txt MyClass >> /output.txt
This was written by someone else; I'm trying to understand why -verbose:gc and -Xloggc: are used.
When I checked, this command collects GC logs, but does this cause a performance issue for the application?
Also, I want to take another look at my memory arguments: the application is using the complete 16 GB of RAM while CPU utilization is less than 5 percent. We are analyzing the reasons for the memory utilization.
Can we increase the memory parameter values for better performance, or will the existing ones suffice?
The answer to the first part of your question: -verbose:gc turns on GC logging, and the output is appended to the file given with -Xloggc. If the CPU usage is <5%, then there is no issue in keeping verbose GC logging on.
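If growth of the log file is ever a concern, HotSpot from 7u2 onwards can also rotate the GC log; for example (file count and size are arbitrary here):
-verbose:gc -Xloggc:/logs/gc_log.txt -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10M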
For the second part: it should not be possible for the application to take 16 GB of RAM if you assign only 1024m of heap.
I've got a Java webapp running on one tomcat instance. During peak times the webapp serves around 30 pages per second and normally around 15.
My environment is:
O/S: SUSE Linux Enterprise Server 10 (x86_64)
RAM: 16GB
server: Tomcat 6.0.20
JVM: Java HotSpot(TM) 64-Bit Server VM 1.6.0_14
JVM options:
CATALINA_OPTS="-Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m
-XX:+UseParallelGC
-Djava.awt.headless=true
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
JAVA_OPTS="-server"
After a couple of days of uptime, Full GCs start occurring more frequently, and this becomes a serious problem for the application's availability. After a Tomcat restart the problem goes away but, of course, returns after 5, 10 or 30 days (it's not consistent).
The Full GC log before and after a restart is at http://pastebin.com/raw.php?i=4NtkNXmi
It shows a log before the restart at 6.6 days uptime where the app was suffering because Full GC needed 2.5 seconds and was happening every ~6 secs.
Then it shows a log just after the restart where Full GC only happened every 5-10 minutes.
I took two dumps using jmap -dump:format=b,file=dump.hprof PID while the Full GCs were occurring (I'm not sure whether I caught them exactly during a Full GC or between two Full GCs) and opened them in Eclipse MAT (http://www.eclipse.org/mat/), but didn't find anything useful in Leak Suspects:
60MB: 1 instance of "org.hibernate.impl.SessionFactoryImpl" (I use hibernate with ehcache)
80MB: 1,024 instances of "org.apache.tomcat.util.threads.ThreadWithAttributes" (these are probably the 1024 workers of tomcat)
45MB: 37 instances of "net.sf.ehcache.store.compound.impl.MemoryOnlyStore" (these should be my ~37 cache regions in ehcache)
Note that I never get an OutOfMemoryError.
Any ideas on where I should look next?
When we had this issue we eventually tracked it down to the young generation being too small. Although we had given the JVM plenty of RAM, the young generation wasn't getting its fair share.
This meant that minor garbage collections happened more frequently and caused some young objects to be promoted into the tenured generation, which in turn meant more major garbage collections as well.
Try using -XX:NewRatio with a fairly low value (say 2 or 3) and see if this helps (see the example below).
More info can be found here.
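A hedged example of how that could look with the poster's existing CATALINA_OPTS, adding only the ratio (2 is just a starting value to experiment with):
CATALINA_OPTS="-Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m
 -XX:NewRatio=2
 -XX:+UseParallelGC
 -Djava.awt.headless=true
 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"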
I've switched from -Xmx1024m to -Xmx2048m and the problem went away. I now have 100 days of uptime.
Besides tuning the various JVM options, I would also suggest upgrading to a newer release of the VM, because later versions have a much better tuned garbage collector (even without trying the new experimental one).
Beyond that, even if it is (partially) true that assigning more RAM to the JVM can increase the time required for each GC, there is a trade-off point between using the whole 16 GB of memory and your current memory occupation, so to start you could try doubling all the values:
-Xms1024m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=512m
Regards
Massimo
What might be happening in your case is that you have a lot of objects that live a little longer than the NewGen life cycle. If the survivor space is too small, they go straight to the OldGen. -XX:+PrintTenuringDistribution could provide some insight. Your NewGen is large enough, so try decreasing SurvivorRatio (a hedged example follows).
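For example (the value 6 is only a guess; the default SurvivorRatio is 8, and lowering it enlarges the survivor spaces, so watch the tenuring output to tune it):
-XX:+PrintTenuringDistribution -XX:SurvivorRatio=6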
Also, JConsole will probably provide more visual insight into what happens with your memory; try it.