Java using more memory than the allocated memory

An Apache Tomcat (Atlassian Confluence) instance is started using the following Java options:
JAVA_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=256m -Djava.awt.headless=true "
However, I see that after starting up it quickly eats through most of the 1GB of memory available on my virtual server.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6082 root 19 0 1105m 760m 16m S 0.7 74.2 5:20.51 java
Shouldn't the overall consumed memory (heap + PermGen) stay under what is specified using -Xmx? One of the problems this causes is that I cannot shut down the server using the shutdown script, since it tries to spawn a JVM with 256MB of memory, which fails because that memory is not available.

For example, a native library can easily allocate memory outside the Java heap.
A direct ByteBuffer also does that: http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html
The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious.
There are good reasons to allocate huge direct ByteBuffers.
http://ehcache.org/documentation/offheap_store.html
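For instance, here is a minimal sketch (the 256 MB size is arbitrary) showing how a direct buffer claims memory that -Xmx does not constrain; such allocations can be capped separately with -XX:MaxDirectMemorySize:

import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 256 MB allocated outside the garbage-collected heap; -Xmx does not limit this.
        ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024);
        long heapUsed = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        System.out.println("Direct buffer capacity: " + buffer.capacity() + " bytes");
        System.out.println("Heap actually in use:   " + heapUsed + " bytes");
    }
}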

Total Tomcat memory consumption should be calculated at no less than -Xmx + -XX:MaxPermSize (in your case, 768MB), but I do recall seeing somewhere that it can go over that. -Xmx covers only the heap space, and PermGen sits outside the heap (kind of).

Related

Linux top output unrealistic?

I'm having a weird issue with a Java process which is consuming a lot of resources on a Linux VM.
The output of top for the process is below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1182 blabla 20 0 25.5g 21g 45m S 382.9 33.9 139840:17 java
So this shows that the process is actually consuming 21G of physical memory?
When checking the process in more detail I can see it was started with -Xmx4G. Is there any logical explanation for the 21G shown on "top"?
(Ignore the first -Xmx from ps output below)
blabla#dtraflocorh391:~ $ ps aux | grep 1182
blabla 1182 249 33.8 26716716 22363868 ? Sl Feb13 139847:01 java -Xms32M -Xmx128M -showversion -Djavax.xml.stream.XMLEventFactory=com.ctc.wstx.stax.WstxEventFactory -Djavax.xml.stream.XMLInputFactory=com.ctc.wstx.stax.WstxInputFactory -Djavax.xml.stream.XMLOutputFactory=com.ctc.wstx.stax.WstxOutputFactory -Xms256M -Xmx4G -verbose:gc -XX:+PrintGCTimeStamps ... ... ...
...
...
-Xmx controls the maximum heap size - essentially where stuff created with the new keyword is stored.
Java has other pools of memory - for the execution stack, for caching JIT compiled code, for storing loaded classes and dynamically created classes, for memory-mapped files, the heap space used by native code (JNI, or just Hotspot itself): see How is the java memory pool divided?
JConsole is one tool you can use to see what your Java program is using memory for. Once JConsole has told you which area is big/full, you can start to reason about why it's got like that.
For example, I once had a program that loaded a new Jackson object mapper for every parse, and that version of Jackson had a bug whereby it left dynamic classes in non-heap memory which never got GC'd. Your problem might be this, or it might be something completely different. JConsole will tell you.
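If you would rather check this programmatically than through the JConsole UI, a minimal sketch using the standard java.lang.management API prints the same pools (it has to run inside the JVM you care about, or be exposed through JMX):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Each pool reports whether it is heap or non-heap along with its current usage.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s %-15s used=%,d bytes%n",
                    pool.getName(), pool.getType(), pool.getUsage().getUsed());
        }
    }
}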
So this shows that the process is actually consuming 21G of physical memory ?
Yes. The "top" command is reporting the processes physical memory usage as provided by the operating system. There is no reason to distrust it.
Is there any logical explanation for the 21G shown on "top" ?
It could be lots of off-heap memory allocation, or a large memory-mapped file, or ... a huge number of thread stacks.
Or potentially other things.
Note that the VIRT figure represents the total virtual memory that has been committed to the process, one way or another. The 'RES' figure represents total physical memory usage to hold the subset of the 'VIRT' figure that is currently "paged in".
So, the VIRT figure is a better measure of the memory resources the Java process has asked for. The fact that VIRT and RES are so different is evidence that points to your process causing "thrashing".
See also:
Difference between "on-heap" and "off-heap"
Relation between memory host and memory arguments xms and xmx from Java

Tomcat was killed by kernel

My Tomcat was suddenly shut down. I checked the log file and found that it was killed with this message:
kernel: Killed process 17420, UID 0, (java) total-vm:8695172kB, anon-rss:4389088kB, file-rss:20kB
My setting for running tomcat is -Xms2048m -Xmx4096m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=1024m
The output of "free -m" on my system is:
             total       used       free     shared    buffers     cached
Mem:          7859       7713        146          0         97       1600
-/+ buffers/cache:        6015       1844
Swap:            0          0          0
I monitored the process with "top -p"; the result is below:
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8048440k total,  7900616k used,   147824k free,   100208k buffers
Swap:        0k total,        0k used,        0k free,  1640888k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4473 root 20 0 8670m 2.5g 6568 S 0.0 32.6 71:07.84 java
My questions are:
1. Why is VIRT = 8670m (in the "top -p" result) greater than "Mem: 8048440k total", yet my application is still running?
2. Why was my Tomcat killed by the kernel? I don't see anything strange with the memory (it looks similar to when it is running normally).
3. What should I do to prevent this error from happening again, and why?
The only thing I know of that causes the kernel to kill tasks in Linux is the out-of-memory (OOM) killer. This article from Oracle might be a little more recent and relevant.
The solution depends on what else is running on the system. From what you showed, you have less than 2GB of usable memory, but your Java heap max is topping out around 4GB. What we don't know is how big the Java heap is at the time you took that snapshot. If it's at its initial 2GB, then you could be running close to the limit. Also based on your formatting, you have no swap space to use as a fallback.
If you have any other significant processes on the system, you need to account for their maximum memory usage. The short answer is to try to reduce -Xmx and -XX:MaxPermSize if at all possible; you'll have to analyze your load to see whether this is feasible or would cause unreasonable GC CPU usage (a purely hypothetical example follows the list below).
Some notes:
Java uses more memory than just the heap; it also needs memory for the native code running the VM itself.
Java 8 replaces PermGen with Metaspace, which lives outside the heap, so it adds memory on top of the -Xmx parameter; you may want to note that if running Java 8.
As you reduce the memory limit, you'll hit 3 ranges:
Far above real requirements: no noticeable difference
Very close to real requirements: server freezes/stops responding and uses 100% CPU (GC overhead)
Below real requirements: OutOfMemoryErrors
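For example (these numbers are purely hypothetical; the right values depend on your load and on what else runs on the box), a more conservative configuration for this 8 GB host might look like:

JAVA_OPTS="-Xms1024m -Xmx2048m -XX:NewSize=128m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=512m"

Turn on GC logging (-verbose:gc) after lowering the limits so that growing GC overhead or OutOfMemoryErrors show up early.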
It's possible for a process's VM size to exceed RAM+swap size per your first question. I remember running Java on a swapless embedded system with 256MB RAM and seeing 500MB of memory usage and being surprised. Some reasons:
In Linux you can allocate memory, but it's not actually used until you write to it
Memory-mapped files (and probably things like shared memory segments) count towards this limit. I believe Java opens all of the jar files as memory-mapped files, so included in that VIRT size are all of the jars on your classpath, including the 80MB or so of rt.jar (see the sketch after this list).
Shared objects probably count towards VIRT but only occupy space once (i.e. one copy of so loaded for many processes)
I've heard, but I can't find a reference right now, that Linux can actually use binaries/.so files as read-only "swap" space, meaning essentially loading a 2MB binary/so will increase your VM size by 2MB but not actually use all of that RAM, because it pages in from disk only the parts actually accessed.
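To illustrate the memory-mapped file point, here is a small sketch (the file name is made up): mapping a file adds its full size to VIRT immediately, while RES only grows as pages are actually touched:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("bigfile.dat", "r");
             FileChannel channel = file.getChannel()) {
            // A single map() call can only cover up to Integer.MAX_VALUE bytes.
            long size = Math.min(channel.size(), Integer.MAX_VALUE);
            // The whole mapping is added to the process's virtual address space...
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, size);
            // ...but physical pages are only faulted in as they are read.
            System.out.println("Mapped " + map.capacity() + " bytes, first byte = " + map.get(0));
        }
    }
}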
Linux has an OOM (out-of-memory) mechanism: when the OS runs short of memory, the OOM killer kills the process consuming the most memory (in most situations; see Linux Out Of Memory Management). Obviously your Tomcat owned the most memory.
How to solve it? In my experience, you must observe the memory usage of the OS: use the top command to watch it and find the offending process, and at the same time use jvisualvm to observe Tomcat's memory usage.

Understanding memory usage for Jetty

I have a Jetty server that I use for websocket connections for an app I am working on. The only issue is that Jetty is consuming way too much virtual memory (12.5GB of virtual memory) and around 650MB RES.
My issue is that, as mentioned above, most of the memory (around 12GB) is not heap, so analyzing it and understanding what is happening is harder.
Do you have any tips on how to understand where the 12gb consumption is coming from and how to figure out memory leaks or any other issues with the server?
I wanted to clarify what I mean by virtual memory (because my understanding could be wrong). Virtual memory is "VIRT" when I run top. Here is what I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
-------------------------------------------------------------
9442 root 20 0 12.6g 603m 10m S 0 1.3 1:50.06 java
Thanks!
Please paste the JVM Options you use on startup. You can adjust the maximum memory used by the JVM with the -Xmx option as already mentioned.
Your application is using only 603MB of resident memory, so it doesn't look like something that should concern you. You can get some detailed information about memory usage by using "jmap", enabling JMX and connecting via JConsole, or using a profiler. If you want to stay in *nix land you can also try "free" if your OS supports it.
In your case Jetty is NOT occupying 12.5 GB of memory; it's occupying 603MB. Google for "virtual memory linux", for example, and you should get plenty of information about the difference between virtual and resident memory.
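If you want a quick in-process summary before reaching for jmap or a profiler, a minimal sketch using the standard MemoryMXBean separates heap from non-heap usage (it has to run inside the JVM you are interested in, for example from a servlet, or be read remotely over JMX):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Heap usage is bounded by -Xmx; non-heap covers areas such as the code cache and class metadata.
        System.out.println("Heap:     " + memory.getHeapMemoryUsage());
        System.out.println("Non-heap: " + memory.getNonHeapMemoryUsage());
    }
}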
Virtual memory has next to no cost in a 64-bit environment so I am not sure what the concern is. The resident memory is 650 MB or a mere 1.3% of MEM. It's not clear it is using much memory.
The default maximum heap size is 1/4 of the main memory for 64-bit JVMs. If you have 48 GB of memory you might find the default heap size is 12 GB and with some shared libraries, threads etc this can result in a virtual memory size of 12.5 GB. This doesn't mean you have a memory leak, or that you even have a problem but if you would prefer you can reduce the maximum heap size.
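A quick way to confirm which default your JVM actually picked (PrintFlagsFinal is a standard HotSpot flag):

java -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize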
BTW: You can buy 32 GB for less than $200. If you are running low on memory, I would buy some more.

My app has 128MB heap but uses 1.6GB of real RAM. How can I find out where the 1.4GB go to?

On the Linux platform, my Java app on JBoss has a 128MB heap but uses 1.6GB of real RAM. How can I find out where the 1.4GB goes?
PID USER PR NI VIRT RES SHR S %CPU %MEM CODE DATA TIME+ COMMAND
1379 root 16 0 9.7g 1.6g 1980 S 0.7 1.3 36 9.7g 0:11.03 java
thanks,
Emre
I'm not sure how you'd find out. But my theory is that your application has mapped a huge file as a MappedByteBuffer. The stats in your question say that you are using 9.7 gigabytes of address space.
My first step would be to examine the process in a memory profiler, such as VisualVM or YourKit. This can give quite a bit of initial insight into what's actually happening to your process's memory usage.
Another tool that could be useful in troubleshooting this sort of issue is jmap.
What do you mean by "my app has 128MB heap"? Are you starting JBoss with -Xmx128M?
Anyway, the heap size does not determine the amount of memory your process uses; the JVM will allocate memory for other things, including a stack for each thread.
But in your case that is probably not the whole explanation, because 1.4GB seems enormous. Are you doing something particular in your application?
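To get a rough feel for how much the per-thread stacks alone could account for, here is a minimal sketch using the standard ThreadMXBean; the 1 MB per stack below is only an assumption, since the real figure depends on -Xss and the platform default:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class StackEstimate {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = threads.getThreadCount();
        // Assumes roughly 1 MB of reserved stack per thread (adjust for your -Xss setting).
        System.out.println(count + " live threads, roughly " + count + " MB of stack reserved");
    }
}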
Emre He,
according to your settings, you have set only the PermGen start/max size.
Try adding these parameters to your JVM: -Xms128m -Xmx128m

How to reduce Sun/Oracle JVM internal overhead?

This problem is specifically about the Sun Java JVM running on Linux x86-64. I'm trying to figure out why the Sun JVM takes so much of the system's physical memory even when I have set heap and non-heap limits.
The program I'm running is Eclipse 3.7 with multiple plugins/features. The most used features are PDT, EGit and Mylyn. I'm starting the Eclipse with the following command line switches:
-nosplash -vmargs -Xincgc -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -Dorg.eclipse.swt.internal.gtk.disablePrinting
Worth noting are especially the switches:
-Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m
These switches should limit the JVM heap to a maximum of 200 MB and non-heap to 150 MB ("CMS Permanent generation" and "Code Cache" as labeled by JConsole). Logically the JVM should take a total of 350 MB plus the internal overhead required by the JVM.
In reality, the JVM takes 544.6 MB for my current Eclipse process as computed by ps_mem.py (http://www.pixelbeat.org/scripts/ps_mem.py), which computes the real physical memory pages reserved by the Linux 2.6+ kernel. That's an internal Sun JVM overhead of 35%, or roughly 200MB!
Any hints about how to decrease this overhead?
Here's some additional info:
$ ps auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
me 23440 2.4 14.4 1394144 558440 ? Sl Oct12 210:41 /usr/bin/java ...
And according to JConsole, the process has used 160 MB of heap and 151 MB of non-heap.
I'm not saying that I cannot afford using extra 200MB for running Eclipse, but if there's a way to reduce this waste, I'd rather use that 200MB for kernel block device buffers or file cache. In addition, I have similar experience with other Java programs -- perhaps I could reduce the overhead for all of them with similar tweaks.
Update: After posting the question, I found previous post to SO:
Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
It seems that I should use pmap to investigate the problem.
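For example, something like this (using the PID 23440 from the ps output above) sorts the mappings by resident size so the largest consumers stand out:

pmap -x 23440 | sort -n -k3 | tail -20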
I think the reason for the high memory consumption of your Eclipse environment is the use of SWT. SWT is a native graphics library living outside of the heap of the JVM, and to make matters worse, the implementation on Linux is not really optimized.
I don't think there's really a chance to reduce the memory consumption of your eclipse environment concerning the memory outside the heap.
Eclipse is a memory and CPU hog. In addition to the Java class libraries, all the low-level GUI work is handled by native system calls, so you will have a substantial native JNI library attached to your process to execute the low-level X calls.
Eclipse offers millions of useful features and lots of helpers to speed up your day to day programming tasks - but lean and mean it is not. Any reduction in memory or resources will probably result in a noticeable slowdown. It really depends on how much you value your time vs. your computers memory.
If you want lean and mean, gvim and make are unbeatable. If you want code completion, automatic builds, etc., you must expect to pay for this with extra resources.
If I run the following program
public class Main {
    public static void main(String... args) throws InterruptedException {
        for (int i = 0; i < 60; i++) {
            System.out.println("waiting " + i);
            Thread.sleep(1000);
        }
    }
}
then "ps auxw" prints
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
500 13165 0.0 0.0 596680 13572 pts/2 Sl+ 13:54 0:00 java -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -cp . Main
The amount of memory used is 13.5 MB. There are about 200 MB of shared libraries which count towards the VSZ size. The rest can be accounted for by the max heap and max perm gen, with an overhead for the thread stacks etc.
The problem doesn't appear to be with the JVM but the application running in it. Using additional shared libraries, direct memory and memory mapped files can increase the amount of memory used.
Given you can buy 16 GB for around $100, do you know this is actually a problem?
