I am running a Tomcat 7 service which processes quite a heavy load from customers. I left the application over the weekend, and when I came back I noticed that Tomcat's CPU usage had increased to 99% and the logs contained the following errors:
Exception in thread "http-bio-8080-exec-908" java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-bio-8080-exec-948" java.lang.OutOfMemoryError: Java heap space
Does this mean that at the time I got the OutOfMemoryError I had 908 and 948 active threads open?
Currently my Tomcat is running with its default configuration; I have never increased the heap size.
We are receiving around 200 queries/sec.
My hardware:
CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Memory: 2GB
Could you please point me in the right direction: what should I look at in order to resolve this issue?
Thanks for any help!
There could be a couple of reasons:
Since you have not increased the default memory allocation, under high load the JVM probably does not have enough heap to serve all requests, and some of them are clearly running out of memory. So the first thing to try is to tweak the JVM memory configuration (see the sketch below).
If 1 does not work, it could be a bug in your application that is preventing garbage collection. You might want to run your application with a profiler attached to see which objects are retained over time and use that information to debug.
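As a minimal sketch (assuming a standard Tomcat installation; the values are only illustrative for a machine with 2 GB of RAM), the heap settings are usually passed to Tomcat through CATALINA_OPTS, for example in a bin/setenv.sh file created alongside catalina.sh:
export CATALINA_OPTS="-Xms512m -Xmx1024m"
After restarting Tomcat, watch the heap under your ~200 queries/sec load (e.g. with jvisualvm) to confirm the new size actually holds.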
When your JVM runs out of heap memory, a java.lang.OutOfMemoryError: Java heap space error will occur.
Solution 1: Increase the availability of that resource. If your application does not have enough Java heap space to run properly, you need to alter your JVM launch configuration and add the following:
-Xmx1024m
The above configuration would give the application 1024 MB of Java heap space. You can use g or G for GB, m or M for MB, and k or K for KB. For example, all of the following are equivalent ways of saying that the maximum Java heap space is 1 GB:
-Xmx1073741824
-Xmx1048576k
-Xmx1024m
-Xmx1g
Solution 2: If your application contains a memory leak, adding more heap will just postpone the java.lang.OutOfMemoryError: Java heap space error.
If you wish to solve the underlying problem with the Java heap space instead of masking the symptoms, you have several tools available.
For example, the Plumbr tool: among other performance problems, it catches java.lang.OutOfMemoryErrors and automatically tells you what causes them. You can also look into the other available tools; the choice is yours.
Source:
https://plumbr.eu/outofmemoryerror/java-heap-space
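For illustration only (this code is not from the question), the classic shape of such a leak is a collection reachable from a static field that only ever grows; a profiler or a tool like Plumbr would show these objects accumulating over time:
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Reachable from a static field, so nothing added here is ever garbage collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // "Caches" 1 MB per request and never evicts anything.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // eventually: java.lang.OutOfMemoryError: Java heap space
        }
    }
}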
Related
I have a VPS with 20 GB of RAM running Ubuntu. I am trying to allocate 10 GB of RAM as the maximum Java heap using JAVA_TOOL_OPTIONS, but I can't. Please see the attached screenshot; it shows available memory as 17 GB. It works when I set the heap to 7 GB, but a heap error occurs whenever it is > 7 GB. I have already installed GlassFish and allocated 3 GB to its cluster, and that works fine. So why am I not able to allocate more than 7 GB when I have 17 GB of RAM free?
Screenshots attached (not reproduced here): top, ulimits, java -version, overcommit memory settings.
My hardware is virtually hosted. Below is the configuration:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
If I had to guess, you don't have a contiguous block of RAM that's 7GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested <size> bytes
If you see this symptom, consider increasing the available swap space and/or limiting the number of applications you run simultaneously. Setting -Xms and -Xmx to the same value prevents the VM from trying to expand the heap, but note that simply increasing -Xmx will not help when no swap space is left.
For more information, see the evaluation section of bug 4697804.
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ Gb.
Why is it taking that much swap space? What needs to be done in order to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" is their total virtual memory footprint, and some of that will be code segments which may be shared.
They mostly seem to have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
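For reference, a typical way to add more swap on Linux is a swap file (the 4G size and /swapfile path here are only illustrative):
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
As noted above, more swap only delays thrashing if the working sets genuinely exceed physical RAM.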
This is true. But is this normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am confused with Resident memory, virtual memory and physical memory now.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason I wasn't able to allocate more than 5G is that privvmpages is set to 5G.
On Linux we can get that information with the command "cat /proc/user_beancounters".
Also, on a VPS the hosting provider will not allow us to customize this value; we have to move to a larger virtual or dedicated server to raise the limit.
This was the root cause. However, Stephen and Robin's explanations of virtual memory and RES memory were spot on. Thanks, guys!
In our application we have both an Apache server (for the front end only) and JBoss 4.2 (for the business/back end). We are using Ubuntu 12 as the server OS. Our application is repeatedly throwing java.lang.OutOfMemoryError: Java heap space. (It throws OOMEs for an hour or so, then goes back to working normally for the next 2-3 hours, and then the pattern repeats.) Our Java memory settings are:
-Xms512m -Xmx1024m
Our server has 6 GB of physical RAM. Please advise: do we need to increase the Java heap size? If so, what would be an appropriate size given the 6 GB of physical RAM?
Are you sure you don't have memory leaks? Also, if you are using memory-hungry APIs such as POI for documents or iText for PDFs, make sure your code keeps the memory footprint low. You can use a profiler to see what exactly is happening. If you still need to increase the heap, increase it step by step until it reaches an appropriate value, for example:
-Xms512m -Xmx1024m
then
-Xms512m -Xmx2048m
and so on.
I would check whether you have a memory leak e.g. are there objects building up and not being freed.
You can do that with a profiler, e.g. VisualVM, or jmap -histo:live might be enough.
If you don't have a memory leak and the memory usage is valid, I would try raising the maximum heap to the largest amount of memory you would want the JVM to use, e.g. perhaps 4 GB.
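For example (the <pid> is whatever jps or top reports for your Java process, and the dump path is only an example):
jmap -histo:live <pid>
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
The first prints a histogram of live objects by class; the second writes a heap dump you can open in VisualVM or Eclipse MAT.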
How do you identify the cause when a java OutOfMemoryError or StackOverflowError occurs in production, i.e. what is causing it and why the server goes down?
For example, I am developing an application that is live in production and UAT. Suddenly, in production, a java OutOfMemoryError or StackOverflowError occurs.
How can we track down this issue and find out why it happened? Is there any technique that can tell me which code path is causing it?
Please explain; I have faced this issue many times.
If you face it in production and you cannot really reason about it from stack traces or logs, you need to analyze what was in memory at the time.
Get the VM to dump the heap on OOM:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/tmp"
Then use that dump for analysis. The Eclipse Memory Analyzer Tool (http://eclipse.org/mat/) is a good standalone program for this.
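Put together, a launch line might look like this (myapp.jar is only a placeholder; for Tomcat or another container you would add the flags to its JVM options instead):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar myapp.jar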
The Oracle docs on Troubleshooting Memory Leaks have a detailed explanation of this:
This error is thrown when there is insufficient space to allocate an
object in the Java heap or in a particular area of the heap. The
garbage collector cannot make any further space available to
accommodate a new object, and the heap cannot be expanded further.
.....
An early step to diagnose an OutOfMemoryError is to determine what the
error means. Does it mean that the Java heap is full, or does it mean
that the native heap is full? To help you answer this question, the
following subsections explain some of the possible error messages,
with reference to the detail part of the message:
Exception in thread "main": java.lang.OutOfMemoryError: Java heap space
See 3.1.1 Detail Message: Java heap space.
Exception in thread "main": java.lang.OutOfMemoryError: PermGen space
See 3.1.2 Detail Message: PermGen space.
Exception in thread "main": java.lang.OutOfMemoryError: Requested array size exceeds VM limit
See 3.1.3 Detail Message: Requested array size exceeds VM limit.
Exception in thread "main": java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
See 3.1.4 Detail Message: request <size> bytes for <reason>. Out of swap space?.
Exception in thread "main": java.lang.OutOfMemoryError: <reason> <stack trace> (Native method)
See 3.1.5 Detail Message: <reason> <stack trace> (Native method).
UPDATE:
You can download the HotSpot VM source code from OpenJDK. If you want to monitor and track the memory footprint of your Java heap spaces, i.e. the young generation and old generation spaces, the way to do that is to enable verbose GC on your HotSpot VM. You can add the following parameters to your JVM start-up arguments:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:<app path>/gc.log
You can use jvisualvm to monitor your process at runtime.
You can see memory, heap space, objects, etc.
This program is located in the bin directory of your JDK.
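For example, assuming the JDK's bin directory is on your PATH (jps lists the running JVMs with their pids, and jvisualvm lets you attach to one from its GUI):
jps -l
jvisualvm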
It's best to try to reproduce the problem in a place where you are free to debug it: on a dev server or on your local machine. Then debug, looking at recursive invocations, the heap size, and what objects are being created. Unfortunately it is not always easy to reproduce the production environment (with its load and so on) on a local machine, so finding the root cause of such an error can be a challenge.
The amount of memory given to a Java process is specified at startup. That memory is divided into separate areas, heap and permgen being the most familiar sub-areas.
While you specify the maximum heap size allowed for this particular process via -Xmx, the corresponding parameter for permgen is -XX:MaxPermSize. Roughly 90% of Java apps seem to require between 64 and 512 MB of permgen to work properly. To find your limits, experiment a bit.
To solve this issue you have to change your VM arguments:
-Xms256m -Xmx1024m -XX:+DisableExplicitGC -Dcom.sun.management.jmxremote
-XX:PermSize=256m -XX:MaxPermSize=512m
Add the two lines above to your VM arguments; I am fairly confident you will not face this problem any more.
To learn more, see OutOfMemory.
Low memory configuration:
It is possible that you have underestimated the memory your application needs; for example, the application needs 2 GB of memory but you have configured only 512 MB, so you will get an OOME (out-of-memory error).
Memory leak:
A memory leak steadily reduces the memory available in the heap and can lead to an out-of-memory error; for more, read What is a Memory Leak in Java?
Memory fragmentation:
It is possible that there is space in the heap but it is not contiguous, so the heap needs compaction to rearrange its memory.
Excess GC overhead:
Some JVM implementations, such as Oracle HotSpot, will throw an out-of-memory error when GC overhead becomes too great. This feature is designed to prevent near-constant garbage collection, for example spending more than 90% of execution time on garbage collection while freeing less than 2% of memory. Configuring a larger heap is most likely to fix this issue, but if not you'll need to analyze memory usage using a heap dump.
Allocating over-sized temporary objects:
Program logic may attempt to allocate an overly large temporary object. Since the JVM can't satisfy the request, an out-of-memory error is triggered and the transaction is aborted. This can be difficult to diagnose, as no heap dump or allocation-analysis tool will highlight the problem. You can only identify the area of code triggering the error and, once discovered, fix or remove the cause of the problem (a minimal sketch of this case follows below).
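A minimal sketch of that last cause (the array size here is arbitrary, chosen only so that a single allocation exceeds a small heap):
public class OversizedAllocation {
    public static void main(String[] args) {
        // Requests a single ~2 GB array; with a small -Xmx this fails immediately
        // with java.lang.OutOfMemoryError: Java heap space.
        long[] temp = new long[256 * 1024 * 1024];
        System.out.println(temp.length);
    }
}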
For more, please visit my site:
http://all-about-java-and-weblogic-server.blogspot.com/2014/02/main-causes-of-out-of-memory-errors-in.html
WebLogic 10.3.5 running with 32-bit JRockit R28.2.4-14 and -Xmx1024m -Xms1024m always runs out of native memory after 5-8 undeploy/redeploy cycles of our Java EE EAR files.
According to the error message and what is displayed in VisualVM, it is not the Java heap that fills up; rather, there is insufficient system memory available.
java.lang.OutOfMemoryError: class allocation, 865324184 loaded, 464M footprint,
in check_alloc (src/jvm/model/classload/classalloc.c:215).
Attempting to allocate 1G bytes
There is insufficient native memory for the Java
Runtime Environment to continue.
Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Disable compressed references (-XXcompressedRefs=false)
at sun.misc.Unsafe.defineClass(Native Method)
at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:45)
at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)
at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:95)
at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:313)
at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1322)
I understand the possible solutions that are suggested, but as everything is fine if the application is only deployed once, it seems classes are not correctly freed when undeploying. A heap dump after undeployment shows there are many of our classes left in memory. Shouldn't they be garbage collected then?
The path to GC root shows a thread: <JNI Local> java.lang.Thread @ 0x129ac778 JDWP Transport Listener: dt_socket Native Stack, Thread. There is no traffic on the server and I don't know why this stays active.
This memory leak is most likely in the perm-gen space (that is what it is called on the HotSpot JVM). JRockit doesn't have a dedicated perm-gen space, but uses "regular" heap space for this.
Have a look at the following sites which I found really helpful for understanding what's happening here:
What is a PermGen leak
Busting PermGen Myths
I find Eclipse MAT very helpful for debugging PermGen leaks:
My approach is usually something like this:
Do a heap dump after undeployment
Find one of your application classes (it doesn't really matter which; all of them should leak). Alternatively, display duplicate classes.
Display the path to the GC root
What the fix looks like depends on the cause.
Most likely something is holding on to the classloader, which causes the full application to leak on redeploy. You can read this article about classloaders.
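As a hypothetical sketch of such a reference chain (none of this is taken from the question): if a class loaded by the server's parent classloader keeps instances of webapp classes in a static collection, the webapp's classloader, and every class it loaded, stays reachable across redeploys:
import java.util.ArrayList;
import java.util.List;

// Imagine this class ships in a shared library loaded by the server classloader.
public class SharedRegistry {
    // A static field on a server-level class outlives any single deployment.
    private static final List<Object> LISTENERS = new ArrayList<>();

    public static void register(Object listener) {
        // If 'listener' is an instance of a webapp class, this reference pins
        // the webapp's classloader so it can never be garbage collected.
        LISTENERS.add(listener);
    }
}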
I'm trying to understand why our ColdFusion 9 (JRun) server is throwing the following error:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
The JVM arguments are as follows:
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -
I had jconsole running when the dump happened and I am trying to reconcile some numbers with the -XX:MaxPermSize=192m setting above. When JRun died it had the following memory usage:
Heap
PSYoungGen total 136960K, used 60012K [0x5f180000, 0x67e30000, 0x68d00000)
eden space 130624K, 45% used [0x5f180000,0x62c1b178,0x67110000)
from space 6336K, 0% used [0x67800000,0x67800000,0x67e30000)
to space 6720K, 0% used [0x67110000,0x67110000,0x677a0000)
PSOldGen total 405696K, used 241824K [0x11500000, 0x2a130000, 0x5f180000)
object space 405696K, 59% used [0x11500000,0x20128360,0x2a130000)
PSPermGen total 77440K, used 77070K [0x05500000, 0x0a0a0000, 0x11500000)
object space 77440K, 99% used [0x05500000,0x0a043af0,0x0a0a0000)
My first question is that the dump shows the PSPermGen being the problem - it says the total is 77440K, but it should be 196608K (based on my 192m JVM argument), right? What am I missing here? Is this something to do with the other non-heap pool - the Code Cache?
I'm running on a 32bit machine, Windows Server 2008 Standard. I was thinking of increasing the PSPermGen JVM argument, but I want to understand why it doesn't seem to be using its current allocation.
Thanks in advance!
An "out of swap space" OOME happens when the JVM has asked the operating system for more memory, and the operating system has been unable to fulfill the request because all swap (disc) space has already been allocated. Basically, you've hit a system-wide hard limit on the amount of virtual memory that is available.
This can happen through no fault of your application, or the JVM. Or it might be a consequence of increasing -Xmx etc beyond your system's capacity to support it.
There are three approaches to addressing this:
Add more physical memory to the system.
Increase the amount of swap space available on the system; e.g. on Linux look at the manual entry for swapon and friends. (But be careful that the ratio of active virtual memory to physical memory doesn't get too large ... or your system is liable to "thrash", and performance will drop through the floor.)
Cut down the number and size of processes that are running on the system.
If you got into this situation because you've been increasing -Xmx to combat other OOMEs, then now would be good time to track down the (probable) memory leaks that are the root cause of your problems.
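For reference, on Linux you can check how much swap is configured and in use with, for example:
free -m
swapon -s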
"ChunkPool::allocate. Out of swap space" usually means the JVM process has failed to allocate memory for its internal processing.
This is usually not directly related to your heap usage, as it is the JVM process itself that has run out of memory. Check the size of the JVM process within Windows; you may have hit an upper limit there.
This bug report also gives an explanation.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004956
This is usually caused by native, non-Java objects not being released by your application, rather than by Java objects on the heap.
Some example causes are:
Large thread stack sizes, or many threads being spawned and not cleaned up correctly. The thread stacks live in native "C" memory rather than the Java heap. I've seen this one myself.
Swing/AWT windows being programmatically created and not disposed of when no longer used. The native widgets behind AWT don't live on the heap either.
Direct buffers from NIO not being released. The data for a direct buffer is allocated in native process memory, not the Java heap (see the sketch at the end of this answer).
Memory leaks in JNI invocations.
Many files opened and not closed.
I found this blog helpful when diagnosing a similar problem: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html
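A small illustration of the direct-buffer case (the sizes are arbitrary): the backing memory of direct buffers lives outside the Java heap, so holding on to them exhausts native memory even while the heap itself looks healthy:
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferLeak {
    // Keeping references means the native memory behind each buffer is never freed.
    private static final List<ByteBuffer> HELD = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            // 16 MB of native (off-heap) memory per iteration; the Java heap stays
            // small while the overall process footprint keeps growing.
            HELD.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
        }
    }
}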
Check your setDomainEnv.cmd (or .sh) file. There will be three different places where PermSize is set:
-XX:MaxPermSize=xxxm -XX:PermSize=xxxm
Change it everywhere.
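For example, with purely illustrative values (choose yours based on the application's actual permgen usage), each occurrence would become:
-XX:PermSize=256m -XX:MaxPermSize=512m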