Weblogic 10.3.5 running with 32-bit JRockit R28.2.4-14 using -Xmx1024m -Xms1024m always runs out of native memory after 5-8 undeploy-redeploy cycles of our Java EE EAR files.
According to the error message and what is displayed in VisualVM, it is not the Java heap that fills up; rather, there is insufficient system memory available.
java.lang.OutOfMemoryError: class allocation, 865324184 loaded, 464M footprint,
in check_alloc (src/jvm/model/classload/classalloc.c:215).
Attempting to allocate 1G bytes
There is insufficient native memory for the Java
Runtime Environment to continue.
Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Disable compressed references (-XXcompressedRefs=false)
at sun.misc.Unsafe.defineClass(Native Method)
at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:45)
at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)
at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:95)
at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:313)
at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1322)
I understand the possible solutions that are suggested, but as everything is fine if the application is only deployed once, it seems classes are not correctly freed when undeploying. A heap dump after undeployment shows there are many of our classes left in memory. Shouldn't they be garbage collected then?
The path to GC root shows a Thread <JNI Local> java.lang.Thread # 0x129ac778 JDWP Transport Listener: dt_socket Native Stack, Thread. There is no traffic on the server, and I don't know why this thread remains active.
This memory leak is most likely in what the HotSpot JVM calls the perm-gen space. JRockit doesn't have a dedicated perm-gen space; it keeps class metadata in native memory instead, which is why the failure shows up as a native allocation error.
Have a look at the following sites which I found really helpful for understanding what's happening here:
What is a PermGen leak
Busting PermGen Myths
I find Eclipse MAT very helpful for debugging PermGen leaks:
My approach is usually something like this:
Do a heap dump after undeployment
Find one of your application classes (it doesn't really matter which; all of them should leak). Alternatively, display duplicate classes.
Display the path to the GC root
What the fix looks like depends on the cause.
Most likely something is holding on to the classloader, which causes the entire application to leak on redeploy. You can read more in this article about classloaders; a minimal sketch of the typical pattern follows.
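As an illustration only (the class names here are made up), a common way an application classloader gets pinned is a static collection in a class loaded by a server-level classloader holding an instance of a class loaded by the application classloader:

// --- SharedRegistry.java, deployed in the server's lib directory (shared classloader) ---
public class SharedRegistry {
    // a static field lives as long as the shared classloader, i.e. the whole server
    private static final java.util.List<Object> LISTENERS = new java.util.ArrayList<Object>();

    public static void register(Object listener) {
        LISTENERS.add(listener); // entries are never removed on undeploy
    }
}

// --- AppListener.java, packaged inside the EAR (application classloader) ---
public class AppListener {
    public void onStart() {
        // The registered instance references its class, the class references the
        // application classloader, and the classloader references every class it
        // loaded - so nothing from the EAR can be collected after undeploy.
        SharedRegistry.register(new AppListener());
    }
}

The fix in such a case is to deregister the listener (and clear thread locals, stop threads, and so on) when the application is undeployed.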
Related
Problem
We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding memory leaks, but we found out how to make a Java heap dump using jmap and analyze it in Eclipse MAT.
However, while our application is using 56 of 60 GB of memory, the heap dump is only 16 GB in size, and it appears even smaller in Eclipse MAT.
Context
Our server runs Wildfly 8.2.0 on Ubuntu 14.04 for our Java application, whose process uses 95% of the available memory. At the time we made the dump, the space reported as used by buffers/cache was 56 GB.
We used the following command to create the dump: sudo -u {application user} jmap -dump:file=/mnt/heapdump/dump_prd.bin {pid}
The heap dump file size is 16.4 GB, and when analyzing it with Eclipse MAT, it says there are around 1 GB of live objects and ~14.8 GB of unreachable objects (shallow heap).
EDIT: Here is some more info about the problem we see happening. We monitor our memory usage, and we see it grow and grow until there is ~300 MB of free memory left. It then stays around that amount of memory until the process crashes, unfortunately without any error in the application log.
This makes us assume it is a hard OOM error because this only happens when the memory is near-depleted. We use the settings -Xms25000m -Xmx40000m for our JVM.
Question
Basically, we are wondering why the majority of our memory isn't captured in this dump. The top retained-size classes don't look too suspicious, so we are wondering if there is something heap-dump-related that we are doing wrong.
When dumping its heap, the JVM will first run a garbage collection cycle to free any unreachable objects.
How can I take a heap dump on Java 5 without garbage collecting first?
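For reference, the same distinction (dumping only live, i.e. reachable, objects versus dumping everything, unreachable objects included) can also be made programmatically through the HotSpot diagnostic MXBean; this is just a minimal sketch:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String outputFile, boolean liveOnly) throws java.io.IOException {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly = true  -> dump only objects that are still reachable
        // liveOnly = false -> also include unreachable, not-yet-collected objects
        diagnostic.dumpHeap(outputFile, liveOnly);
    }
}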
In my experience, in a true OutOfMemoryError where your application is simply demanding more heap space than is available, this GC is a fool's errand and the final heap dump will be roughly the size of the maximum heap.
When the heap dump is much smaller, that means the system was not truly out of memory, but perhaps had memory pressure. For example, there is the java.lang.OutOfMemoryError: GC overhead limit exceeded error, which means that the JVM may have been able to free enough memory to service some new allocation request, but it had to spend too much time collecting garbage.
It's also possible that you don't have a memory problem. What makes you think you do? You didn't mention anything about heap usage or an OutOfMemoryError. You've only mentioned the JVM's memory footprint on the operating system.
In my experience, having a heap dump much smaller than the real memory used can be due to a leak in JNI / native code.
Even if you don't use any native code directly, certain libraries use it under the hood for speed.
In our case, it was Deflater and Inflater instances that were not properly ended.
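To illustrate the Deflater/Inflater case: both hold zlib buffers in native memory that are only released by end(), so relying on garbage collection alone lets native memory pile up. A minimal sketch of the safe pattern:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class CompressUtil {
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            while (!deflater.finished()) {
                int n = deflater.deflate(buffer);
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } finally {
            deflater.end(); // releases the native zlib memory; omitting this is the leak
        }
    }
}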
I got the following error in the browser when logging into the system:
java.lang.RuntimeException: javax.servlet.ServletException:
java.lang.OutOfMemoryError: PermGen space
com.opensymphony.sitemesh.webapp.decorator.BaseWebAppDecorator.render(BaseWebAppDecorator.java:39)
com.opensymphony.sitemesh.webapp.SiteMeshFilter.doFilter(SiteMeshFilter.java:84)
java.lang.OutOfMemoryError
Thrown when the Java Virtual Machine cannot allocate an object because
it is out of memory, and no more memory could be made available by the
garbage collector. OutOfMemoryError objects may be constructed by the
virtual machine as if suppression were disabled and/or the stack trace
was not writable.
Increase the heap size of your JVM
It is possible to increase the heap size allocated to the JVM by using command-line options. Here we have three options, combined in the example after the list:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
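For example, a launch command combining the three (the sizes and the jar name are placeholders you would adjust for your own application) could look like:

java -Xms512m -Xmx2048m -Xss512k -jar yourapp.jar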
A common cause of OutOfMemoryError in PermGen is the ClassLoader. Whenever a class is loaded into the JVM, its metadata, along with its ClassLoader, is kept in the PermGen area, and it will only be garbage collected when the ClassLoader that loaded it becomes eligible for garbage collection. If a ClassLoader has a memory leak, then all classes loaded by it remain in memory and cause a PermGen OutOfMemoryError once you repeat redeployment a couple of times.
Now there are two ways to solve this:
1. Find the cause of the memory leak, or confirm whether there is one at all.
2. Increase the size of the PermGen space by using the JVM parameters -XX:MaxPermSize and -XX:PermSize.
java.lang.OutOfMemoryError: PermGen space
Two step solution.
Step one: Alter your application launch configuration and add (or increase, if already present) the -XX:MaxPermSize parameter. This is a temporary step: if this is a live system, the extra memory will most probably also fill up at some point in the future, but it will fix the error for now.
Step two: Release unnecessary resources. Make sure all database connections are closed. Load and unload classes judiciously. Restructure your program to work with small amounts of data at a time. Analyse and fix memory leaks.
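As a small illustration of the "close your database connections" point, try-with-resources (Java 7+) guarantees the connection is released even when an exception is thrown; the query and DataSource used here are only placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countCustomers() throws SQLException {
        // All three resources are closed automatically, in reverse order,
        // even if an exception is thrown in the body.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customer");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}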
Either optimize your program or increase the Java heap size with runtime parameters (-Xmx).
It is important to know whether the PermGen error occurs right after the first deployment, or only after a few redeployments.
If it is the first case, just increase the memory size for your PermGen.
If it is the second, you have some dead cows in your code. In this case, increasing the memory size will only postpone the error, as your memory still grows after every deployment. You have to profile your app, for example with jvisualvm (here is a nice tutorial), and find those dead cows. It is important to understand that the PermGen error is usually not about the classes themselves, but about classloaders (Classloader leaks: the dreaded "java.lang.OutOfMemoryError: PermGen space" exception).
In my case (Glassfish server), the problem was the Log4j2 libs added to the web app. I had to add them also to the server libs (domain-dir/lib), so that, according to the Glassfish class loader hierarchy, they were loaded by the Common classloader.
How do I identify the issue when a Java OutOfMemoryError or StackOverflowError occurs in production? What is causing it, and why is the server down?
For example, I am developing an application that is live on production and UAT. Suddenly, on production, a Java OutOfMemoryError or StackOverflowError occurs.
How can we track down this issue and find out why it happened? Is there any technique that can tell me which code flow is causing it?
Please explain it. I have faced this issue many times.
If you face it in production and you cannot really reason about it from stack traces or logs, you need to analyze what was in memory at the time.
Get the VM to dump on OOM
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/tmp"
And use that for analysis. The memory analyzer tool (http://eclipse.org/mat/) is a good standalone program for this analysis.
The Oracle docs, Troubleshooting Memory Leaks, have a detailed explanation of it:
This error is thrown when there is insufficient space to allocate an
object in the Java heap or in a particular area of the heap. The
garbage collector cannot make any further space available to
accommodate a new object, and the heap cannot be expanded further.
.....
An early step to diagnose an OutOfMemoryError is to determine what the
error means. Does it mean that the Java heap is full, or does it mean
that the native heap is full? To help you answer this question, the
following subsections explain some of the possible error messages,
with reference to the detail part of the message:
Exception in thread "main": java.lang.OutOfMemoryError: Java heap space
See 3.1.1 Detail Message: Java heap space.
Exception in thread "main": java.lang.OutOfMemoryError: PermGen space
See 3.1.2 Detail Message: PermGen space.
Exception in thread "main": java.lang.OutOfMemoryError: Requested array size exceeds VM limit
See 3.1.3 Detail Message: Requested array size exceeds VM limit.
Exception in thread "main": java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
See 3.1.4 Detail Message: request <size> bytes for <reason>. Out of swap space?.
Exception in thread "main": java.lang.OutOfMemoryError: <reason> <stack trace> (Native method)
See 3.1.5 Detail Message: <reason> <stack trace> (Native method).
UPDATE:
You can download the HotSpot VM source code from OpenJDK. If you want to monitor and track the memory footprint of your Java heap spaces, i.e. the young generation and old generation spaces, enable verbose GC on your HotSpot VM. You can add the following parameters to your JVM start-up arguments:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:<app path>/gc.log
You can use jvisualvm to manage your process at runtime.
You can see memory, heap space, objects etc ...
This program is located in your bin directory of your JDK.
It's best to try to reproduce the problem in a place where you are free to debug it, on a dev server or on your local machine. Then try to debug: look for recursive invocations, heap size, and what objects are being created. Unfortunately, it's not always easy to reproduce the prod environment (with its load and so on) on a local machine, so finding the root cause of such an error might be a challenge.
The amount of memory given to a Java process is specified at startup. Memory is divided into separate areas, heap and permgen being the most familiar sub-areas.
While you specify the maximum size of the heap allowed for this particular process via -Xmx,
the corresponding parameter for permgen is -XX:MaxPermSize. 90% of the Java apps seem to require between 64 and 512 MB of permgen to work properly. In order to find your limits, experiment a bit.
To solve this issue, you have to change your VM arguments:
-Xms256m -Xmx1024m -XX:+DisableExplicitGC -Dcom.sun.management.jmxremote
-XX:PermSize=256m -XX:MaxPermSize=512m
Add the above two lines to your VM arguments; I am sure you will not face this problem any more.
To know more, go to OutOfMemory.
Low memory configuration:
It is possible that you have estimated too little memory for your application. For example, your application needs 2 GB of memory but you have configured only 512 MB, so you will get an OOME (out-of-memory error).
Due to a memory leak:
A memory leak decreases the memory available to the heap and can lead to an out-of-memory error; a simple sketch follows. For more, read What is a Memory Leak in java?
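A minimal, made-up sketch of such a leak: a static cache that only ever grows, so nothing it references can ever be garbage collected:

import java.util.HashMap;
import java.util.Map;

public class ReportCache {
    // The static map lives as long as the class (and its classloader).
    // Entries are added per request but never evicted, so the heap slowly
    // fills up until an OutOfMemoryError is thrown.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static byte[] getReport(String requestId) {
        byte[] report = CACHE.get(requestId);
        if (report == null) {
            report = new byte[1024 * 1024]; // 1 MB per entry, standing in for real data
            CACHE.put(requestId, report);
        }
        return report;
    }
}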
Memory fragmentation:
It is possible that there is space in the heap, but it is not contiguous, so the heap needs compaction to rearrange its memory.
Excess GC overhead:
Some JVM implementations, such as Oracle HotSpot, will throw an out-of-memory error when GC overhead becomes too great. This feature is designed to prevent near-constant garbage collection, for example spending more than 90% of execution time on garbage collection while freeing less than 2% of memory. Configuring a larger heap is most likely to fix this issue, but if not, you'll need to analyze memory usage using a heap dump.
Allocating over-sized temporary objects:
Program logic that attempts to allocate overly large temporary objects. Since the JVM can't satisfy the request, an out-of-memory error is triggered and the transaction is aborted. This can be difficult to diagnose, as no heap dump or allocation-analysis tool will highlight the problem. You can only identify the area of code triggering the error and, once it is discovered, fix or remove the cause of the problem. A contrived example is sketched below.
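A contrived example of this failure mode: a single allocation that exceeds what the heap can provide fails immediately, even though overall memory usage is otherwise healthy:

public class OversizedAllocation {
    public static void main(String[] args) {
        // Even on a mostly empty heap, asking for one huge contiguous array
        // fails at once with java.lang.OutOfMemoryError if it exceeds the
        // configured -Xmx (run with e.g. -Xmx256m to see it).
        long[] buffer = new long[512 * 1024 * 1024]; // roughly 4 GB requested in one shot
        System.out.println(buffer.length);
    }
}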
For more, please visit my site:
http://all-about-java-and-weblogic-server.blogspot.com/2014/02/main-causes-of-out-of-memory-errors-in.html
We have a 32-bit JVM running under 64-bit RHEL5 on a box which has plenty of memory (32 GB). For different reasons, this process requires a pretty large managed heap and permgen space; currently, it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have started seeing JVM crashes recently because it - seemingly - ran out of native memory (it could not create native threads, or failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap, as when those crashes happened the managed heap was ~50-70% full.
I know that the memory reserved for the managed heap and permgen is close to 2.5 GB, which leaves no more than 0.5 GB for the JVM itself, BUT:
- I don't understand why 0.5 GB isn't enough for the JVM, even if there is constant GC going on
- the real question is this: when I connect to the process using jconsole, it says that (currently)
Committed virtual memory:
3,211,180 kbytes
which is more than 3 GB. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06 GB) of memory, but when it tries to go over 3 GB the memory allocation fails.
Any ideas on
a) why this happens
b) how it is possible to avoid this
Thanks.
Mate
There is a lot of overhead in a typical VM that is not counted in the VM's own accounting because it is essentially taken up by the native parts of the process. For example, the .so files mapped in to provide native code for system libraries are not counted in the base VM accounting. Your typical shared library is mapped in at the top GB of memory, so if you try to allocate memory into this region you will be denied, because it would overrun the shared libraries' memory region. Memory allocation on most OSs is performed by a simple bar that is raised when you ask for more memory; when you ask for memory and the bar would conflict with other uses, the allocation simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; that is the fundamental challenge. It is trivial to get a 64-bit VM, which will allow you to use far more memory than would otherwise be accessible; it is simply the right fit for this situation.
If you are using a 32-bit process, there is a high probability that you are encountering the effective address space limit of the 32-bit process. For Windows, this is a maximum of about 3 GB; anything above this is reserved for I/O space and the kernel. You can move this limit, but it has a tendency to break applications/drivers that are designed for a 32-bit OS.
For Linux, you end up with ~3GB of usable addressable RAM per process, the rest is used up by things like the kernel and mapped in shared libraries. The limit is referred to as the 'address space limit', and I presume it can be tuned.
How do you avoid it? Well, for the most part, you can't: it's a limitation of the 32-bit address space and of having the kernel/IO share the address space with the process on a 32-bit OS.
With 64-bit OSs you have (most of) the entire 64-bit address space to play with, which is vastly more than you need.
When you start a JVM, it reserves its maximum heap size immediately; how much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which you have allocated about 2.3 GB to the heap and perm gen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
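As a rough back-of-envelope sum under those numbers (the thread count and stack size are assumptions for illustration):

 2200 MB  Java heap (-Xmx2200M)
+ 128 MB  perm gen (-XX:MaxPermSize=128M)
+ ~200 MB  shared libraries and JVM code/data
+ ~100 MB  thread stacks (e.g. 200 threads x 512 KB)
= ~2.6 GB  of a roughly 3 GB usable 32-bit address space

That leaves little headroom for the GC's working memory, the JIT code cache, NIO buffers, and other native allocations, which is consistent with the native allocation failures you are seeing.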
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial (use a 64-bit JVM). I am assuming you don't have any shared libraries which are only available in 32-bit; if you do have additional shared libraries, they can easily use hundreds of MB.
I'm trying to understand why our ColdFusion 9 (JRun) server is throwing the following error:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
The JVM arguments are as follows:
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -
I had jconsole running when the dump happened and I am trying to reconcile some numbers with the -XX:MaxPermSize=192m setting above. When JRun died it had the following memory usage:
Heap
PSYoungGen total 136960K, used 60012K [0x5f180000, 0x67e30000, 0x68d00000)
eden space 130624K, 45% used [0x5f180000,0x62c1b178,0x67110000)
from space 6336K, 0% used [0x67800000,0x67800000,0x67e30000)
to space 6720K, 0% used [0x67110000,0x67110000,0x677a0000)
PSOldGen total 405696K, used 241824K [0x11500000, 0x2a130000, 0x5f180000)
object space 405696K, 59% used [0x11500000,0x20128360,0x2a130000)
PSPermGen total 77440K, used 77070K [0x05500000, 0x0a0a0000, 0x11500000)
object space 77440K, 99% used [0x05500000,0x0a043af0,0x0a0a0000)
My first question: the dump shows PSPermGen being the problem. It says the total is 77440K, but it should be 196608K (based on my 192m JVM argument), right? What am I missing here? Is this something to do with the other non-heap pool, the code cache?
I'm running on a 32-bit machine, Windows Server 2008 Standard. I was thinking of increasing the PSPermGen JVM argument, but I want to understand why it doesn't seem to be using its current allocation.
Thanks in advance!
An "out of swap space" OOME happens when the JVM has asked the operating system for more memory, and the operating system has been unable to fulfill the request because all swap (disc) space has already been allocated. Basically, you've hit a system-wide hard limit on the amount of virtual memory that is available.
This can happen through no fault of your application, or the JVM. Or it might be a consequence of increasing -Xmx etc beyond your system's capacity to support it.
There are three approaches to addressing this:
Add more physical memory to the system.
Increase the amount of swap space available on the system; e.g. on Linux look at the manual entry for swapon and friends. (But be careful that the ratio of active virtual memory to physical memory doesn't get too large ... or your system is liable to "thrash", and performance will drop through the floor.)
Cut down the number and size of processes that are running on the system.
If you got into this situation because you've been increasing -Xmx to combat other OOMEs, then now would be a good time to track down the (probable) memory leaks that are the root cause of your problems.
"ChunkPool::allocate. Out of swap space" usually means the JVM process has failed to allocate memory for its internal processing.
This is usually not directly related to your heap usage, as it is the JVM process itself that has run out of memory. Check the size of the JVM process within Windows; you may have hit an upper limit there.
This bug report also gives an explanation.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004956
This is usually caused by native, non-Java resources not being released by your application, rather than by Java objects on the heap.
Some example causes are:
Large thread stack sizes, or many threads being spawned and not cleaned up correctly. The thread stacks live in native "C" memory rather than the Java heap. I've seen this one myself.
Swing/AWT windows being programmatically created and not disposed of when no longer used. The native widgets behind AWT don't live on the heap either.
Direct buffers from NIO not being released. The data for a direct buffer is allocated in native process memory, not the Java heap (see the sketch after this list).
Memory leaks in JNI invocations.
Many files opened and not closed.
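To illustrate the direct-buffer point above (a contrived sketch, not taken from any particular application): each allocation below consumes memory outside the Java heap, so heap graphs look healthy while the process footprint grows:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferLeak {
    public static void main(String[] args) {
        List<ByteBuffer> held = new ArrayList<ByteBuffer>();
        while (true) {
            // allocateDirect reserves memory in native process space, not in the
            // Java heap; holding the references prevents it from ever being freed,
            // and the eventual OutOfMemoryError will not be explained by a heap dump.
            held.add(ByteBuffer.allocateDirect(10 * 1024 * 1024)); // 10 MB each
        }
    }
}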
I found this blog helpful when diagnosing a similar problem: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html
Check your setDomainEnv.cmd (.sh) file. There will be three different conditional blocks that set the PermSize:
-XX:MaxPermSize=xxxm -XX:PermSize=xxxm. Change it everywhere.