My Tomcat was shut down suddenly. I checked the log file and found that it was killed with this message:
kernel: Killed process 17420, UID 0, (java) total-vm:8695172kB, anon-rss:4389088kB, file-rss:20kB
My settings for running Tomcat are: -Xms2048m -Xmx4096m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=1024m
The output of "free -m" on my system is:
             total       used       free     shared    buffers     cached
Mem:          7859       7713        146          0         97       1600
-/+ buffers/cache:       6015       1844
Swap:            0          0          0
I monitored the process with "top -p"; the result is below:
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8048440k total,  7900616k used,   147824k free,   100208k buffers
Swap:        0k total,        0k used,        0k free,  1640888k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4473 root 20 0 8670m 2.5g 6568 S 0.0 32.6 71:07.84 java
My questions are:
1. Why is VIRT = 8670m (in the "top -p" result) greater than the total memory (Mem: 8048440k), and yet my application still runs?
2. Why was my Tomcat killed by the kernel? I don't see anything strange in the memory figures (they look similar to when it is running normally).
3. What should I do to avoid this error happening again, and why?
The only thing I know of that causes the kernel to kill tasks in Linux is the out-of-memory killer. This article from Oracle might be a little more recent and relevant.
The solution depends on what else is running on the system. From what you showed, you have less than 2 GB of usable memory, but your Java heap max tops out around 4 GB. What we don't know is how big the Java heap was at the time you took that snapshot. If it was already at its initial 2 GB, you could be running close to the limit. Also, based on your free output, you have no swap space to use as a fallback.
If you have any other significant processes on the system, you need to account for their maximum memory usage as well. The short answer is to reduce -Xmx and -XX:MaxPermSize if at all possible; you'll have to analyze your load to see whether that is feasible or whether it causes unreasonable GC CPU usage.
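As a rough illustration of that advice on a ~8 GB box where Tomcat is the main tenant, something like the following in Tomcat's bin/setenv.sh would keep heap plus PermGen well below physical RAM (the values are only a starting point, not a recommendation; tune them against your own load):

# $CATALINA_BASE/bin/setenv.sh -- illustrative sizing only
CATALINA_OPTS="-Xms2048m -Xmx3072m -XX:MaxPermSize=512m"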
Some notes:
Java uses more memory than just the heap; it also needs memory for the native code that runs the VM itself.
Java 8 replaces PermGen with Metaspace, which lives outside the heap, so it adds memory on top of the -Xmx figure; keep that in mind if you are running Java 8.
As you reduce the memory limit, you'll hit 3 ranges:
Far above real requirements: no noticeable difference
Very close to real requirements: server freezes/stops responding and uses 100% CPU (GC overhead)
Below real requirements: OutOfMemoryErrors
As for your first question: yes, it is possible for a process's virtual memory size to exceed RAM + swap. I remember running Java on a swapless embedded system with 256 MB of RAM and being surprised to see 500 MB of memory usage. Some reasons (with a quick check after this list):
In Linux you can allocate memory, but it's not actually used until you write to it
Memory-mapped files (and probably things like shared memory segments) count towards this figure. I believe Java opens all of the jar files as memory-mapped files, so the VIRT size includes every jar on your classpath, including the roughly 80 MB rt.jar.
Shared objects probably count towards VIRT but only occupy physical space once (i.e. one copy of a .so is shared by many processes).
I've heard, though I can't find a reference right now, that Linux can use binaries/.so files as read-only "swap" space: loading a 2 MB binary/.so increases your VM size by 2 MB but doesn't actually use that much RAM, because only the parts actually accessed are paged in from disk.
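A quick way to see this reservation-versus-usage gap on your own box (17420 is just the PID from the kernel message above; substitute your Java process):

# VSZ counts every mapping (heap reservation, mapped jars, shared libraries);
# RSS counts only the pages actually backed by physical RAM.
ps -o pid,vsz,rss,cmd -p 17420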
Linux has an OOM mechanism: when the OS's memory is insufficient, the OOM killer kills the process that is consuming the most memory (in most situations; see Linux Out Of Memory Management). Obviously your Tomcat owns the most memory here.
How to solve it? In my experience, you should observe the OS's memory usage with the top command to find the offending process, and at the same time use jvisualvm to observe Tomcat's memory usage.
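To confirm that the OOM killer was indeed responsible, checking the kernel log is usually enough (log paths vary by distribution; this assumes a standard syslog setup):

dmesg | grep -iE "killed process|out of memory"
grep -i "out of memory" /var/log/messages /var/log/syslog 2>/dev/null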
Related
It is related to my previous question
I set -Xms to 512M and -Xmx to 6G for one Java process. I have three such processes.
My total RAM is 32 GB. Of that, 2 GB is always occupied.
I ran the free command to confirm that at least 27 GB is free, and my jobs need only 18 GB max at any time.
It was running fine. Each job occupied around 4 to 5 GB but used around 3 to 4 GB. I understand that -Xmx doesn't mean the process should always occupy 6 GB.
When another process X was started on the same server by another user, it occupied 14 GB. Then one of my processes failed.
I understand that I need to increase RAM or manage the colliding jobs.
The question here is: how can I force my job to always use 6 GB, and why does it throw a GC limit reached error in this case?
I used VisualVM to monitor them, and jstat as well.
Any advice is welcome.
Simple answer: -Xmx is not a hard limit on the JVM's total memory. It only limits the heap available to Java code inside the JVM. Lower your -Xmx and you may stabilize process memory at a size that suits you.
Long answer: JVM is a complex machine. Think of this like an OS for your Java code. The Virtual Machine does need extra memory for its own housekeeping (e.g. GC metadata), memory occupied by threads' stack size, "off-heap" memory (e.g. memory allocated by native code through JNI; buffers) etc.
-Xmx only limits the heap size for objects: the memory that's dealt with directly in your Java code. Everything else is not accounted for by this setting.
There's a newer JVM setting, -XX:MaxRAM (1, 2), which tells the JVM's ergonomics how much memory to assume the machine has, so the default heap is sized against that figure rather than against physical RAM.
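A minimal sketch of how to see its effect (the flags are standard HotSpot options; the 2g value is arbitrary):

# Without MaxRAM the default MaxHeapSize is derived from physical memory;
# with it, the ergonomics size the heap against the given figure instead.
java -XX:MaxRAM=2g -XX:+PrintFlagsFinal -version 2> /dev/null | grep -E "MaxHeapSize|MaxRAM"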
From your other question:
It is multi-threaded: 100 reader and 100 writer threads. Each one has its own connection to the database.
Keep in mind that the OS' I/O buffers also need memory for their own function.
If you have over 200 threads, you also pay a price: N*(stack size), plus approximately N*(TLAB size) reserved in the young generation for each thread (dynamically resizable):
java -Xss1024k -XX:+PrintFlagsFinal 2> /dev/null | grep -i tlab
size_t MinTLABSize = 2048
intx ThreadStackSize = 1024
Approximately half a gigabyte just for this (and probably more)!
Thread Stack Size (in Kbytes). (0 means use default stack size)
[Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.] - Java HotSpot VM Options; Linux x86 JDK source
In short: -Xss (stack size) defaults depend on the VM and OS environment.
Thread Local Allocation Buffers are more intricate; they help avoid allocation contention/resource locking. For an explanation of the setting and of how TLABs work, see: TLAB allocation and TLABs and Heap Parsability.
Further reading: "Native Memory Tracking" and Q: "Java using much more memory than heap size"
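For reference, a minimal Native Memory Tracking session looks roughly like this (assumes JDK 8+; yourapp.jar and <pid> are placeholders):

# Start the JVM with tracking enabled, then ask it for a breakdown
java -XX:NativeMemoryTracking=summary -Xms512m -Xmx6g -jar yourapp.jar &
jcmd <pid> VM.native_memory summary
# The report splits process memory into Java Heap, Thread (stacks), Class, Code,
# GC bookkeeping and so on -- everything that -Xmx does not cover.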
why does it throw GC limit reached error in this case.
"GC overhead limit exceeded". In short: each GC cycle reclaimed too little memory and the ergonomics decided to abort. Your process needs more memory.
When another X process started on the same server with another user, it has occupied 14g. Then one of my process got failed.
Another point: when running multiple large-memory processes back-to-back, consider this:
java -Xms28g -Xmx28g <...>;
# above process finishes
java -Xms28g -Xmx28g <...>; # crashes, can't allocate enough memory
When the first process finishes, your OS needs some time to zero out the memory deallocated by the ending process before it can give these physical memory regions to the second process. This task may need some time and until then you cannot start another "big" process that immediately asks for the full 28GB of heap (observed on WinNT 6.1). This can be worked around with:
Reduce -Xms so the allocation happens later in the second process's lifetime
Reduce overall -Xmx heap
Delay the start of the second process
I have a Jetty server that I use for websocket connections for an app I am working on. The only issue is that Jetty is consuming way too much virtual memory (~12.5 GB of virtual memory) and around 650 MB RES.
My issue is that, as mentioned above, most of the memory (around 12 GB) is not heap, so analyzing it and understanding what is happening is harder.
Do you have any tips on how to understand where the 12 GB consumption is coming from and how to figure out memory leaks or any other issues with the server?
I wanted to clarify what I mean by virtual memory (because my understanding could be wrong). Virtual memory is "VIRT" when I run top. Here is what I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
-------------------------------------------------------------
9442 root 20 0 12.6g 603m 10m S 0 1.3 1:50.06 java
Thanks!
Please paste the JVM Options you use on startup. You can adjust the maximum memory used by the JVM with the -Xmx option as already mentioned.
Your application is using only 603 MB of resident memory, so this doesn't look like something that should concern you. You can get more detailed information about memory usage by using "jmap", enabling JMX and connecting via jconsole, or using a profiler. If you want to stay in *nix land you can also try "free" if your OS supports it.
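For example (9442 being the PID from your top output; jmap ships with the JDK):

jmap -heap 9442               # heap layout and current usage per generation
jmap -histo 9442 | head -20   # the classes currently holding the most heap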
In your case Jetty is NOT occupying 12.5 GB of memory; it's occupying 603 MB. Google "virtual memory linux", for example, and you should find plenty of information about the difference between virtual and resident memory.
Virtual memory has next to no cost in a 64-bit environment so I am not sure what the concern is. The resident memory is 650 MB or a mere 1.3% of MEM. It's not clear it is using much memory.
The default maximum heap size is 1/4 of the main memory for 64-bit JVMs. If you have 48 GB of memory you might find the default heap size is 12 GB and with some shared libraries, threads etc this can result in a virtual memory size of 12.5 GB. This doesn't mean you have a memory leak, or that you even have a problem but if you would prefer you can reduce the maximum heap size.
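You can check what default your JVM actually picked on that machine with the JVM's diagnostic flag:

# MaxHeapSize here is the ergonomic default, roughly 1/4 of physical RAM on 64-bit JVMs
java -XX:+PrintFlagsFinal -version 2> /dev/null | grep -i maxheapsize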
BTW: You can buy 32 GB for less than $200. If you are running low on memory, I would buy some more.
So every couple of days my java process on Ubuntu is killed automatically, and I can't figure out why.
My box has 35.84 GB of RAM. When I launch my Java process I pass it the -Xmx28g parameter, so it should be using far less than the maximum RAM available.
I ran jstat as follows:
# jstat -gccause -t `pgrep java` 60000
The last few lines of output from jstat immediately before the process was killed were:
Time S0 S1 E O P YGC YGCT FGC FGCT GCT LGCC GCC
14236.1 99.98 0.00 69.80 99.40 49.88 1011 232.305 11 171.041 403.347 unknown GCCause No GC
14296.2 93.02 0.00 65.79 99.43 49.88 1015 233.000 11 171.041 404.041 unknown GCCause No GC
14356.1 79.20 0.00 80.50 99.55 49.88 1019 233.945 11 171.041 404.986 unknown GCCause No GC
14416.2 0.00 99.98 24.32 99.64 49.88 1024 234.945 11 171.041 405.987 unknown GCCause No GC
This seems to be what went down in the /var/log/syslog around this time: https://gist.github.com/1369135
There is really nothing running on this server other than my java app. What's going on?
edit: I'm running Java version 1.6.0_20; the only notable parameters I'm passing to java on startup are "-server -Xmx28g". I'm not using an application server, but my app embeds the "Simple web framework".
Assuming the problem is the OOM killer, then it has killed your process in a desperate attempt to keep the OS functioning in a severe memory shortage crisis.
I would conclude that:
your JVM is actually using significantly more than 28 GB; i.e. you have significant non-heap memory usage, and
the OS is not configured with an adequate amount of swap space.
I'd try adding more swap space, so that the OS can swap out parts of your application in an emergency.
Alternatively, reduce the JVM's heap size.
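If you do add swap, something along these lines creates an 8 GB swap file (sizes are illustrative; run as root):

fallocate -l 8G /swapfile && chmod 600 /swapfile
mkswap /swapfile && swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab to keep it across reboots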
Note that "-Xmx ..." sets the maximum heap size, not the maximum amount of memory that your JVM can use. The JVM puts some stuff outside the heap, including such things as the memory for thread stacks and memory-mapped files that your application is using.
The syslog confirms that it is the OOM killer at work.
In what way does the linked syslog say so?
It says this:
Nov 15 13:53:49 ip-10-71-94-36 kernel: [3707038.606133] Out of memory: kill process 6368 (run.sh) score 4747288 or a child
Nov 15 13:53:49 ip-10-71-94-36 kernel: [3707038.606146] Killed process 9359 (java)
The console says that java was killed, not that it quit.
Correct. It was killed by the operating system's OOM killer.
If it had run out of memory it would typically throw an OutOfMemory exception, which it didn't.
That is what would have happened if you had filled up the Java heap.
That is not what is going on here. The actual problem is that there is not enough physical RAM to hold the Java heap. The OOM killer deals with it ...
I'm running with such a huge heap because I need to store millions of objects each of which require several kilobytes of RAM.
Unfortunately, you are trying to use way more RAM than is available on the system. This is causing virtual memory to thrash, affecting the entire operating system.
When the system starts to thrash badly, the OOM killer (not the JVM) identifies your Java process as the cause of the problem. It then kills it (with a SIGKILL) to protect the rest of the system. If it didn't, there is a risk that the entire system would lock up completely and need to be hard rebooted.
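You can see how attractive a target your JVM currently is to the OOM killer, and (as root) bias it away from that process, though this only shifts the problem rather than fixing the memory shortage (a sketch that assumes a single java process, so pgrep returns one PID):

cat /proc/$(pgrep java)/oom_score              # higher score = more likely to be killed
echo -1000 > /proc/$(pgrep java)/oom_score_adj # exempt it entirely (use with care)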
Finally, you said:
My box has 35.84 GB of RAM ...
That is rather a strange value. 32 GiB is 34,359,738,368 bytes or 34.35 GB.
But based on that and the observed behavior, I suspect that that is the available virtual memory rather than physical RAM. Alternatively, your "box" could be a virtual machine with RAM overcommit enabled at the hypervisor level.
Welcome to the OOM killer, a Linux 'feature' that is the bane of large-memory applications everywhere. There's no simple recipe to deal with it; just google for it and start reading and weeping.
While I can't put my mental fingers on a concise explanation of the shenanigans of the OOM killer, I recall that the critical tuning parameter is called 'swappiness'. On one of our big servers, we have:
/etc/sysctl.conf:vm.swappiness=20
Read http://www.gentooexperimental.org/~patrick/weblog/archives/2009-11.html.
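To experiment with the value without rebooting (20 is simply the figure from our server, not a universal recommendation):

sysctl -w vm.swappiness=20                    # takes effect immediately
echo "vm.swappiness=20" >> /etc/sysctl.conf   # persists across reboots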
What JVM are you using, and what application server? It's possible that you're allocating way too much memory, and that can be problematic: the garbage collector might have trouble doing its job.
I'm not sure if this applies to your case, but I found this article explaining the way Linux overcommits memory quite interesting.
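If you want to check how your kernel is configured in that respect, the overcommit policy lives in procfs:

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always allow, 2 = never overcommit
cat /proc/sys/vm/overcommit_ratio    # only consulted when overcommit_memory = 2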
Wow, can you actually have 28 GB of heap?! Maybe you should try reducing it; keep it at no more than 50% of the RAM, I think (so ~18 GB, or maybe even 15 GB). Also, 171 full GCs is a lot! How long was this app running? 171 in 2-3 days sounds huge. By the way, the gist indicates an OOM before termination, so I think reducing the heap will fix it (you may be preventing the JVM from expanding native space). Try adjusting various parameters; try stack size (-Xss), for example, if needed. Check max perm size and other sections too. It's a memory problem and it may not necessarily be the heap.
Ubuntu has a "watchdog" process which kills other processes when memory runs low.
See the manpage:
http://manpages.ubuntu.com/manpages/natty/man8/watchdog.8.html
This problem is specifically about Sun Java JVM running on Linux x86-64. I'm trying to figure out why the Sun JVM takes so much of system's physical memory even when I have set Heap and Non-Heap limits.
The program I'm running is Eclipse 3.7 with multiple plugins/features. The most used features are PDT, EGit and Mylyn. I'm starting the Eclipse with the following command line switches:
-nosplash -vmargs -Xincgc -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -Dorg.eclipse.swt.internal.gtk.disablePrinting
Worth noting are especially the switches:
-Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m
These switches should limit the JVM heap to a maximum of 200 MB and non-heap to 150 MB ("CMS Permanent generation" and "Code Cache" as labeled by JConsole). Logically the JVM should take a total of 350 MB plus the internal overhead required by the JVM.
In reality, the JVM takes 544.6 MB for my current Eclipse process as computed by ps_mem.py (http://www.pixelbeat.org/scripts/ps_mem.py) which computes the real physical memory pages reserved by the Linux 2.6+ kernel. That's internal Sun JVM overhead of 35% or roughly 200MB!
Any hints about how to decrease this overhead?
Here's some additional info:
$ ps auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
me 23440 2.4 14.4 1394144 558440 ? Sl Oct12 210:41 /usr/bin/java ...
And according to JConsole, the process has used 160 MB of heap and 151 MB of non-heap.
I'm not saying that I cannot afford using extra 200MB for running Eclipse, but if there's a way to reduce this waste, I'd rather use that 200MB for kernel block device buffers or file cache. In addition, I have similar experience with other Java programs -- perhaps I could reduce the overhead for all of them with similar tweaks.
Update: After posting the question, I found a previous post on SO:
Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
It seems that I should use pmap to investigate the problem.
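A starting point with pmap (23440 is the Eclipse PID from the ps output above; pmap is part of procps):

pmap -x 23440 | sort -n -k3 | tail -20   # the mappings with the largest resident size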
I think the reason for the high memory consumption of your Eclipse Environment is the use of SWT. SWT is a native graphic library living outside of the heap of the JVM, and to worsen the situation, the implementation on Linux is not really optimized.
I don't think there's really a chance to reduce the memory consumption of your eclipse environment concerning the memory outside the heap.
Eclipse is a memory and CPU hog. In addition to the Java class libraries, all the low-level GUI work is handled by native system calls, so you will have a substantial "native" JNI library attached to your process to execute the low-level X calls.
Eclipse offers millions of useful features and lots of helpers to speed up your day to day programming tasks - but lean and mean it is not. Any reduction in memory or resources will probably result in a noticeable slowdown. It really depends on how much you value your time vs. your computers memory.
If you want lean and mean gvim and make are unbeatable. If you want the code completion, automatic builds etc. you must expect to pay for this with extra resources.
If I run the following program
public class Main {
    public static void main(String... args) throws InterruptedException {
        // idle for a minute so the process can be inspected with ps/top
        for (int i = 0; i < 60; i++) {
            System.out.println("waiting " + i);
            Thread.sleep(1000);
        }
    }
}
then "ps auxw" prints:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
500 13165 0.0 0.0 596680 13572 pts/2 Sl+ 13:54 0:00 java -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -cp . Main
The amount of memory used is 13.5 MB. There are about 200 MB of shared libraries which count towards the VSZ size. The rest can be accounted for by the max heap and max perm gen, plus an overhead for the thread stacks etc.
The problem doesn't appear to be with the JVM but the application running in it. Using additional shared libraries, direct memory and memory mapped files can increase the amount of memory used.
Given that you can buy 16 GB for around $100, are you sure this is actually a problem?
An Apache Tomcat (Atlassian Confluence) instance is started using the following Java options:
JAVA_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=256m -Djava.awt.headless=true "
However I see that after starting up it quickly eats through most of the 1GB of memory that is available on my virtual server.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6082 root 19 0 1105m 760m 16m S 0.7 74.2 5:20.51 java
Shouldn't the overall consumed memory (heap + PermGen) stay under what is specified using -Xmx? One of the problems this is causing is that I cannot shut down the server using the shutdown script, since it tries to spawn a JVM with 256 MB of memory, which fails because that memory is not available.
For example, a native library can easily allocate memory outside the Java heap.
Direct ByteBuffer also does that: http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html
The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious.
There are good reasons to allocate huge direct ByteBuffers.
http://ehcache.org/documentation/offheap_store.html
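If direct buffers are a suspect, their total can be capped explicitly so they fail fast instead of silently growing the process footprint (a sketch; 256m is an arbitrary ceiling and yourapp.jar is a placeholder):

java -Xmx512m -XX:MaxDirectMemorySize=256m -jar yourapp.jar
# direct ByteBuffer allocations beyond the cap then throw OutOfMemoryError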
Total Tomcat memory consumption should be estimated at NO LESS THAN -Xmx + -XX:MaxPermSize (in your case, 768 MB), but I do recall seeing somewhere that it can go over that. -Xmx covers only the heap space, and PermGen is outside the heap (sort of).
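So on a 1 GB virtual server, a more conservative sizing would look something like this (illustrative values only; check Confluence's own minimum requirements before shrinking the heap):

JAVA_OPTS="-Xms128m -Xmx384m -XX:MaxPermSize=192m -Djava.awt.headless=true"
# 384 MB heap + 192 MB PermGen + JVM/native overhead still has to leave room for the OS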