I'm having a weird issue with a Java process that is consuming a lot of resources on a Linux VM.
The output of top for the process is shown below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1182 blabla 20 0 25.5g 21g 45m S 382.9 33.9 139840:17 java
So this shows that the process is actually consuming 21G of physical memory?
When checking the process in more detail, I can see it was started with -Xmx4G. Is there any logical explanation for the 21G shown by top?
(Ignore the first -Xmx in the ps output below.)
blabla#dtraflocorh391:~ $ ps aux | grep 1182
blabla 1182 249 33.8 26716716 22363868 ? Sl Feb13 139847:01 java -Xms32M -Xmx128M -showversion -Djavax.xml.stream.XMLEventFactory=com.ctc.wstx.stax.WstxEventFactory -Djavax.xml.stream.XMLInputFactory=com.ctc.wstx.stax.WstxInputFactory -Djavax.xml.stream.XMLOutputFactory=com.ctc.wstx.stax.WstxOutputFactory -Xms256M -Xmx4G -verbose:gc -XX:+PrintGCTimeStamps ... ... ...
...
...
-Xmx controls the maximum heap size - essentially where stuff created with the new keyword is stored.
Java has other pools of memory: the per-thread execution stacks, the cache for JIT-compiled code, storage for loaded classes and dynamically created classes, memory-mapped files, and memory allocated by native code (JNI, or HotSpot itself): see How is the java memory pool divided?
JConsole is one tool you can use to see what your Java program is using memory for. Once JConsole has told you which area is big or full, you can start to reason about why it got that way.
For example, I once had a program that loaded a new Jackson object mapper for every parse; that version of Jackson had a bug whereby it left dynamic classes in non-heap memory that never got GCd. Your problem might be this, or it might be something completely different. JConsole will tell you.
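If you'd rather check programmatically than attach JConsole, the standard java.lang.management API exposes the same heap and non-heap pools. A minimal sketch (just illustrative; it only shows the JVM-managed pools, not raw JNI/malloc allocations):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPoolDump {
    public static void main(String[] args) {
        // Heap vs non-heap totals
        System.out.println("Heap:     " + ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
        System.out.println("Non-heap: " + ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
        // Individual pools: Eden/Survivor/Old Gen, Perm Gen (or Metaspace), Code Cache, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getType() + " - " + pool.getName() + ": " + pool.getUsage());
        }
    }
}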
So this shows that the process is actually consuming 21G of physical memory?
Yes. The "top" command is reporting the process's physical memory usage as provided by the operating system. There is no reason to distrust it.
Is there any logical explanation for the 21G shown by top?
It could be lots of off-heap memory allocation, or a large memory-mapped file, or ... a huge number of thread stacks.
Or potentially other things.
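To illustrate the memory-mapped-file case: the sketch below (hypothetical file path and size, not taken from your process) maps a large file in chunks; every map() call is charged to VIRT immediately, while RES and the Java heap only grow as pages are actually touched.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class MmapVirtDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical multi-gigabyte file; adjust the path to something on your box.
        RandomAccessFile raf = new RandomAccessFile("/tmp/big.dat", "r");
        FileChannel channel = raf.getChannel();
        List<MappedByteBuffer> keep = new ArrayList<MappedByteBuffer>();
        long size = channel.size();
        long pos = 0;
        while (pos < size) {
            // A single map() call is limited to 2 GB, so map the file in chunks.
            long len = Math.min(Integer.MAX_VALUE, size - pos);
            keep.add(channel.map(FileChannel.MapMode.READ_ONLY, pos, len));
            pos += len;
        }
        // VIRT now includes the whole file; compare with top / pmap while this sleeps.
        Thread.sleep(60000);
        raf.close();
    }
}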
Note that the VIRT figure represents the total virtual memory that has been committed to the process, one way or another. The RES figure represents the physical memory currently used to hold the subset of VIRT that is paged in.
So the VIRT figure is a better measure of the memory resources the Java process has asked for. The fact that VIRT and RES are so different is evidence that points to your process causing "thrashing".
See also:
Difference between "on-heap" and "off-heap"
Relation between memory host and memory arguments xms and xmx from Java
Related
I've made a program that does something like "tail -f" on a number of log files on a machine, using the Tailer class from Apache Commons IO. Basically it runs in a thread, opens the file as a RandomAccessFile, checks its length, seeks to the end, etc., and sends all collected log lines to a client.
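Roughly, the per-file loop looks something like this (a simplified sketch, not my actual code and not the real Tailer internals):

import java.io.RandomAccessFile;

public class SimpleTail implements Runnable {
    private final String path;
    private long position;

    SimpleTail(String path) { this.path = path; }

    public void run() {
        try {
            RandomAccessFile file = new RandomAccessFile(path, "r");
            position = file.length();               // start at the end, like tail -f
            while (!Thread.currentThread().isInterrupted()) {
                long length = file.length();
                if (length < position) {            // log was rotated or truncated
                    position = 0;
                }
                if (length > position) {
                    file.seek(position);
                    String line;
                    while ((line = file.readLine()) != null) {
                        sendToClient(line);         // hypothetical: ship the line to the client
                    }
                    position = file.getFilePointer();
                }
                Thread.sleep(1000);                 // poll interval
            }
            file.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void sendToClient(String line) { /* placeholder */ }
}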
The somewhat uncomfortable thing about it is that on Linux it can show an enormous amount of VIRT memory. Right now it says 16.1g VIRT (!!) and 203m RES.
I have read up a little on virtual memory and understood that it's often "nothing to worry about".. But still, 16 GB? Is it really healthy?
When I look at the process with pmap, none of the log file names are shown, so I guess they are not memory-mapped. And I read (man pmap) that "[ anon ]" in the "Mapping" column of pmap output means "allocated memory". Now what does that mean? :)
However, pmap -x shows:
Address Kbytes RSS Dirty Mode Mapping
...
---------------- ------ ------ ------
total kB 16928328 208824 197096
..so I suppose it's not residing in RAM, after all.. But how does it work memory-wise when opening a file like this, seeking to the end of it, etc?
Should I worry about all those GB of VIRT memory? It "watches" 84 different log files right now, and the total size of these on disk is 31414239 bytes.
EDIT:
I just tested it on another, less "production-like", Linux machine and did not get the same numbers. VIRT got to ~2.5 GB at most there. I found that some of the default JVM settings were different (checked with "java -XX:+PrintFlagsFinal -version"):
Flag               Small machine   Big machine
InitialHeapSize    62690688        2114573120
MaxHeapSize        1004535808      32038191104
ParallelGCThreads  2               13
..So, uhm.. I guess it grabs more heap on the big machine, since the max limit is (way) higher? And I also guess it's a good idea to always specify those values explicitly..
A couple of things:
Each Tailer instance will have its own thread, and each thread has a stack. By default (on a 64-bit JVM) thread stacks are 1 MB each, so you will be using 84 MB just for stacks. You might consider reducing that using the -Xss option at launch time (or per thread, as in the sketch after these points).
A large virt size is not necessarily bad. But if it translates into a demand on physical memory ... and you don't have that much ... then that really is bad.
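On the first point, if you'd rather not change -Xss globally, you can also pass a per-thread stack-size hint when you create the threads. A sketch (the 84 and 256k numbers are just examples, and the JVM is free to ignore the hint):

public class SmallStackThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {        // stands in for whatever Runnable you hand to the thread (a Tailer is a Runnable)
            public void run() {
                try { Thread.sleep(120000); } catch (InterruptedException ignored) { }
            }
        };
        for (int i = 0; i < 84; i++) {
            // The 4th argument is a stack-size hint in bytes; some JVMs ignore it.
            Thread t = new Thread(null, work, "tailer-" + i, 256 * 1024);
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(120000);                   // time to compare VIRT in top with and without the hint
    }
}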
Hmm I actually run it without any JVM args at all right now. Is that good or bad? :-)
I understand now. Yes, it is bad. The JVM's default heap sizing on a large 64-bit machine is far more than you really need.
Assuming that your application is only doing simple processing of the log lines, I recommend that you set the max heap size to a relatively small value (e.g. 64 MB). That way, if you do get a leak, it won't impact the rest of your system by gobbling up lots of real memory.
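You can check what the JVM actually decided for the heap without any extra tooling; a throwaway sketch:

public class HeapDefaults {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the effective -Xmx (or the ergonomic MaxHeapSize default)
        System.out.println("max heap:     " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("current heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("processors:   " + rt.availableProcessors());
    }
}

Run it with and without an explicit -Xmx64m and you'll see the difference between the ergonomic default and a hard limit.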
On the Linux platform, my Java app on JBoss has a 128 MB heap but uses 1.6 GB of real RAM. How can I find out where the other 1.4 GB goes?
PID USER PR NI VIRT RES SHR S %CPU %MEM CODE DATA TIME+ COMMAND
1379 root 16 0 9.7g 1.6g 1980 S 0.7 1.3 36 9.7g 0:11.03 java
thanks,
Emre
I'm not sure how you'd find out, but my theory is that your application has mapped a huge file as a MappedByteBuffer. The stats in your question say that you are using 9.7 GB of address space.
My first step would be to examine the process in a memory profiler, such as VisualVM or YourKit. This can give quite a bit of initial insight into what's actually happening with your process's memory usage.
Another tool that can be useful in troubleshooting this sort of issue is jmap.
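For example (substitute your own pid; these are standard jmap invocations, shown only as a starting point). eg:
jmap -histo:live 1379
jmap -dump:format=b,file=heap.hprof 1379
Bear in mind that jmap only looks at the Java heap, so in a case like yours it mainly helps you rule the heap out; native allocations won't show up there.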
What do you mean by "my app has 128MB heap"? Are you starting JBoss with -Xmx128M?
Anyway, the heap size does not determine the amount of memory your process uses; the JVM will allocate memory for other things, including a stack for each thread.
But in your case that is probably not the explanation, because 1.4 GB seems enormous. Are you doing something particular in your application?
Emre He,
according to your settings, you have set only the perm gen start/max size.
Try adding these parameters to your JVM: -Xms128m -Xmx128m
I am just testing some JNI calls with a simple standalone Java program on a 16-core CPU. I am using a 64-bit JVM and the OS is Linux AS5. As soon as I start my test program with the 64-bit C++ libraries, I see that the SIZE column shows 16G. The top command output is something like this:
PID PSID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
3505 31483 xxxxxxx 23 16 0 16G 215M sleep 0:02 0.00% java
I understand that my heap is OK and that JNI memory can increase the process size, but I am confused as to why it starts with 16G in SIZE, which I believe is the virtual memory size. Is it really taking that much memory? Should I be concerned about it?
Odd, but it could be that your JNI calls are either somehow rapidly leaking memory or allocating a number of massive chunks of memory. What command line are you using to start the JVM, and what is your JNI code doing?
The comment by Greg Hewgill basically says it all: You're using only 215 MB of real memory. On my machine a nothing-doing Java process without any VM arguments gets 8 GB. You've got four times as many cores, so starting with twice as much memory makes sense to me.
As this number uses nothing but the virtual address space, which is something like 2^48 B or maybe 2^64 B, it's actually free. You could use -mx1G in order to limit the memory to 1 GB, but note that it is a hard limit and you'll get an OOME when the process needs more.
Server Configuration:
Physical RAM - 16 GB
Swap Memory - 27 GB
OS - Solaris 10
Physical Memory Free - 598 MB
Swap Memory Used - ~26 GB
Java Version - Java HotSpot(TM) Server VM - 1.6.0_17-b04
My task is to reduce the used swap memory.
Solutions I have thought of:
1. Stop all Java applications and wait until enough physical memory has been freed, then execute "swapoff -a" (yet to find the Solaris equivalent of this command), wait until the used swap goes down to zero, then execute "swapon -a".
2. Increase physical memory.
I need help on the following points:
What's the Solaris equivalent of swapoff and swapon?
Will option 1 work to clear the used swap?
Thanks a Million!!!
First, Java and swap don't mix. If your Java app is swapping, you're simply doomed. Few things murder a machine like a swapped Java process; GC plus swap is just a nightmare.
So, given that, if the machine with the Java process is swapping, that machine is too small. Get more memory, or reduce the load on the machine (including the heap of the Java process, if possible).
The fact that your machine has almost no free physical memory (~600 MB) and almost no free swap space (~1 GB) is another indicator that the machine is way overloaded.
All manner of things could be faulting your Java process when resources are exhausted.
Killing the Java process will "take it out of swap": since the process no longer exists, it can't be in swap. The same goes for all of the other processes. The "swap memory used" figure may not instantly go down, but if a process doesn't exist it can't swap (barring the use of persistent shared memory buffers that have the misfortune of being swapped out, and Java doesn't typically use those).
There isn't really a good way that I know of to tell the OS to lock a specific program into physical RAM and prevent it from being paged out. And, frankly, you don't want to.
Whatever is taking all of your RAM, you need to seriously consider reducing its footprint(s), or moving the Java process off of this machine. You're simply running into hard limits, and there's no more blood in this stone.
It's not quite clear to me what you're asking - stopping applications that take memory should free memory (and potentially swap space). It's not clear from your description that Java is taking all the memory on your box - there's usually no reason to run a JVM that allocates more memory than the physical memory on the box. Check how you start the JVM and how much memory is allocated.
Here is how to manage swap on Solaris:
http://www.aspdeveloper.net/tiki-index.php?page=SolarisSwap
a bit late to the party, but for Solaris:
list details of the swap space with:
swap -l
which lists swap slices. eg:
swapfile              dev    swaplo  blocks   free
/dev/dsk/c0t0d0s1     136,1  16      1206736  1084736
/export/data/swapfile -      16      40944    40944
swapoff equivalent:
swap -d <file or device>
eg:
swap -d /dev/dsk/c1t0d0s3
swapon equivalent:
swap -a <file or device>
eg:
swap -a /dev/dsk/c1t0d0s3
Note: you may have to describe the device before you can use the -a switch, by editing the /etc/vfstab file and adding an entry for the swap slice. eg:
/dev/dsk/c1t0d0s3  -  -  swap  -  no  -
An Apache Tomcat (Atlassian Confluence) instance is started using the following Java options:
JAVA_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=256m -Djava.awt.headless=true "
However, I see that after starting up it quickly eats through most of the 1 GB of memory that is available on my virtual server.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6082 root 19 0 1105m 760m 16m S 0.7 74.2 5:20.51 java
Shouldn't the overall consumed memory (heap + PermGen) stay under what is specified using -Xmx? One of the problems this is causing is that I cannot shut down the server using the shutdown script, since it tries to spawn a JVM with 256 MB of memory, which fails because that much is not available.
For example, a native library can easily allocate memory outside the Java heap.
Direct ByteBuffer also does that: http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html
The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious.
There are good reasons to allocate huge direct ByteBuffers.
http://ehcache.org/documentation/offheap_store.html
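As a small illustration of how easily direct buffers escape the -Xmx budget (a standalone sketch, nothing Tomcat- or Confluence-specific):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        List<ByteBuffer> keep = new ArrayList<ByteBuffer>();
        for (int i = 0; i < 8; i++) {
            // 64 MB each, allocated outside the garbage-collected heap
            keep.add(ByteBuffer.allocateDirect(64 * 1024 * 1024));
        }
        // ~512 MB of native memory is now in use even with a tiny -Xmx.
        // It is bounded by -XX:MaxDirectMemorySize, which on HotSpot defaults to roughly the max heap size.
        Thread.sleep(60000);                    // inspect RES/VIRT with top while this sleeps
    }
}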
Total Tomcat memory consumption should be calculated at no less than -Xmx + -XX:MaxPermSize (in your case, 768 MB), but I do recall seeing somewhere that it can go over that. -Xmx covers only the heap space, and PermGen is outside the heap (kind of).
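As a rough, back-of-the-envelope budget for the flags above (the thread count is a guess, not a measurement): 512 MB heap + 256 MB PermGen + ~50 threads at 1 MB of stack each + the JIT code cache and other native JVM overhead lands comfortably in the region of the 760m RES that top reports, without anything necessarily being wrong.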