JVM and Memory Usage - JRun server not using full PSPermGen allocation? - java

I'm trying to understand why our ColdFusion 9 (JRun) server is throwing the following error:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
The JVM arguments are as follows:
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -
I had jconsole running when the dump happened and I am trying to reconcile some numbers with the -XX:MaxPermSize=192m setting above. When JRun died it had the following memory usage:
Heap
PSYoungGen total 136960K, used 60012K [0x5f180000, 0x67e30000, 0x68d00000)
eden space 130624K, 45% used [0x5f180000,0x62c1b178,0x67110000)
from space 6336K, 0% used [0x67800000,0x67800000,0x67e30000)
to space 6720K, 0% used [0x67110000,0x67110000,0x677a0000)
PSOldGen total 405696K, used 241824K [0x11500000, 0x2a130000, 0x5f180000)
object space 405696K, 59% used [0x11500000,0x20128360,0x2a130000)
PSPermGen total 77440K, used 77070K [0x05500000, 0x0a0a0000, 0x11500000)
object space 77440K, 99% used [0x05500000,0x0a043af0,0x0a0a0000)
My first question: the dump shows PSPermGen as the problem - it says the total is 77440K, but shouldn't it be 196608K (based on my 192m JVM argument)? What am I missing here? Does this have something to do with the other non-heap pool, the Code Cache?
I'm running on a 32bit machine, Windows Server 2008 Standard. I was thinking of increasing the PSPermGen JVM argument, but I want to understand why it doesn't seem to be using its current allocation.
Thanks in advance!

An "out of swap space" OOME happens when the JVM has asked the operating system for more memory, and the operating system has been unable to fulfill the request because all swap (disc) space has already been allocated. Basically, you've hit a system-wide hard limit on the amount of virtual memory that is available.
This can happen through no fault of your application, or the JVM. Or it might be a consequence of increasing -Xmx etc beyond your system's capacity to support it.
There are three approaches to addressing this:
Add more physical memory to the system.
Increase the amount of swap space available on the system; e.g. on Linux, look at the manual entry for swapon and friends (example commands follow this list). (But be careful that the ratio of active virtual memory to physical memory doesn't get too large ... or your system is liable to "thrash", and performance will drop through the floor.)
Cut down the number and size of processes that are running on the system.
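For the second option, on a typical Linux box you can inspect the current swap provisioning with the standard tools (output formats vary by distribution):
swapon -s      # list active swap devices and their sizes
free -m        # overall RAM and swap usage, in megabytes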
If you got into this situation because you've been increasing -Xmx to combat other OOMEs, then now would be a good time to track down the (probable) memory leaks that are the root cause of your problems.

"ChunkPool::allocate. Out of swap space" usually means the JVM process has failed to allocate memory for its internal processing.
This is usually not directly related to your heap usage, as it is the JVM process itself that has run out of memory. Check the size of the JVM process within Windows; you may have hit an upper limit there.
This bug report also gives an explanation.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004956
This is usually caused by native, non java objects not being released by your application rather than java objects on the heap.
Some example causes are:
Large thread stack sizes, or many threads being spawned and not cleaned up correctly (see the sketch after this list). The thread stacks live in native "C" memory rather than the Java heap. I've seen this one myself.
Swing/AWT windows being programmatically created and not disposed when no longer used. The native widgets behind AWT don't live on the heap either.
Direct buffers from NIO not being released. The data for a direct buffer is allocated in native process memory, not the Java heap.
Memory leaks in JNI invocations.
Many files opened and not closed.
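As a minimal sketch of the thread-stack case (a deliberately pathological toy, not code from any real application): each thread's stack lives in native memory, so this exhausts the process address space - typically dying with java.lang.OutOfMemoryError: unable to create new native thread - while the Java heap stays nearly empty.
public class ThreadLeak {
    public static void main(String[] args) {
        while (true) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // blocks forever, so the stack is never released
                    } catch (InterruptedException ignored) {
                    }
                }
            }).start();
        }
    }
}
On a 32-bit JVM like the one in the question, this kind of leak hits the process address-space limit long before the heap fills up.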
I found this blog helpful when diagnosing a similar problem: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html

Check your setDomainEnv.cmd (or .sh) file. There will be three different places where PermSize is set via -XX:MaxPermSize=xxxm and -XX:PermSize=xxxm. Change them everywhere.
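For example (the values are placeholders; the point is simply that every occurrence must agree):
-XX:PermSize=192m -XX:MaxPermSize=192m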

Related

java process consumes more memory over time but no memory leak [duplicate]

My Java service is running on a host with 16 GB of RAM, with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service consumes more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using 9GB+. It consumed about 100MB more memory per day.
I took a heap dump. I restarted my service and took another heap dump. I compared the two dumps, but there was not much difference in the heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing a total memory usage increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can use.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and JIT compiled native code (in "metaspace")
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
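For instance, the JDK's platform MXBeans report heap, non-heap (metaspace, code cache), and NIO buffer-pool usage from inside the process; a minimal sketch (the class name is mine):
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap:     " + mem.getHeapMemoryUsage());
        System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage()); // metaspace, code cache, etc.
        // Direct and mapped NIO buffer pools (not included in either figure above)
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + " buffers: count=" + pool.getCount()
                    + ", used=" + pool.getMemoryUsed() + " bytes");
        }
    }
}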
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Once a collection has finished, the GC looks at the ratio of free space to used space; if that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
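For example, using the JDK's jmap tool (the PID and file names are placeholders):
jmap -dump:live,format=b,file=after-2h.hprof <pid>
jmap -dump:live,format=b,file=after-4h.hprof <pid>
Comparing the two dumps in a heap analyzer such as Eclipse MAT will show which object populations grew in between; those are your leak candidates.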
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] Or if you use SoftReferences, the GC will break them for you.
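To illustrate, a minimal sketch of a SoftReference-based cache (hypothetical, not from the question): the GC is free to clear softly-reachable values when the heap gets tight, so the cache shrinks under pressure instead of leaking.
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<K, SoftReference<V>>();

    void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            map.remove(key); // value was collected under memory pressure; drop the stale entry
        }
        return value;
    }
}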

How to use ulimit with java correctly?

My Java program must run in an environment where memory is constrained to a specified amount. When I run my Java service, it runs out of memory during startup.
This is an example of the commands I'm using and values I'm setting:
ulimit -Sv 1500000
java \
-Xmx1000m -Xms1000m \
-XX:MaxMetaspaceSize=500m \
-XX:CompressedClassSpaceSize=500m \
-XX:+ExitOnOutOfMemoryError \
MyClass
In theory, I've accounted for everything I could find documentation on. There's the heap (1000m) and the metaspace (500m). But it still runs out of memory on startup, while initializing the JVM. This keeps happening until I set the ulimit about 600 MiB larger than heap + metaspace.
What category of memory am I missing such that I can set ulimit appropriately?
Use case: I am running a task in a Docker container with limited memory, which means Linux cgroups is doing the limiting. When memory limits are exceeded, cgroups can only pause or kill the process that exceeds its bounds. I really want the Java process to fail gracefully if something goes wrong and it uses too much memory, so that the wrapping bash script can report the error to the task initiator.
We are using Java 8, so we need to worry about metaspace instead of permgen.
Update: It does not die with an OutOfMemoryError. This is the error:
Error occurred during initialization of VM
Could not allocate metaspace: 524288000 bytes
It's really hard to ulimit Java effectively. Many pools are unbounded, and the JVM fails catastrophically when an allocation attempt fails. Not all of the memory is actually committed, but much of it is reserved, and reserved memory counts toward the virtual memory limit imposed by ulimit.
After much investigation, I've uncovered many of the different categories of memory Java uses. This answer applies to OpenJDK and Oracle 8.x on a 64-bit system:
Heap
This is the most well understood portion of the JVM memory. It is where the majority of your program memory is used. It can be controlled with the -Xmx and -Xms options.
Metaspace
This appears to hold metadata about the classes that have been loaded. I could not find out whether this category ever releases memory to the OS, or whether it only ever grows. The default maximum appears to be 1g. It can be controlled with the -XX:MaxMetaspaceSize option. Note: specifying this might not do anything without also specifying the Compressed class space (next).
Compressed class space
This appears to be related to the Metaspace. I could not find out whether this category ever releases memory to the OS, or whether it only ever grows. The default maximum appears to be 1g. It can be controlled with the -XX:CompressedClassSpaceSize option.
Garbage collector overhead
There appears to be a fixed amount of overhead depending on the selected garbage collector, as well as an additional allocation based on the size of the heap. Observation suggests that this overhead is about 5% of the heap size. There are no known options for limiting this (other than to select a different GC algorithm).
Threads
Each thread reserves 1m for its stack. The JVM appears to reserve an additional 50m of memory as a safety measure against stack overflows. The stack size can be controlled with the -Xss option. The safety size cannot be controlled. Since there is no way to enforce a maximum thread count and each thread requires a certain amount of memory, this pool of memory is technically unbounded.
Jar files (and zip files)
The default zip implementation will use memory mapping for zip file access. This means that each jar and zip file accessed will be memory mapped (requiring an amount of reserved memory equal to the sum of file sizes). This behavior can be disabled by setting the sun.zip.disableMemoryMapping system property (as in -Dsun.zip.disableMemoryMapping=true)
NIO Direct Buffers
Any direct buffer (created using allocateDirect) will use that amount of off-heap memory. The best NIO performance comes with direct buffers, so many frameworks will use them.
The total direct-buffer pool can actually be capped with the -XX:MaxDirectMemorySize option (by default it tracks the maximum heap size); however, off-heap memory allocated by native code outside of allocateDirect is not covered by it, so off-heap usage as a whole can still grow without bound.
Additionally, this memory is duplicated on-heap for each thread that touches the buffer. See this for more details.
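A minimal sketch of how this pool fails independently of the heap - run with, say, java -XX:MaxDirectMemorySize=256m DirectBufferDemo (the class name and sizes are illustrative):
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        try {
            while (true) {
                buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024)); // 16 MB off-heap each
            }
        } catch (OutOfMemoryError e) {
            // Typically "Direct buffer memory": the Java heap is nearly empty, yet allocation failed
            System.out.println("Failed after " + buffers.size() + " buffers: " + e.getMessage());
        }
    }
}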
Native memory allocated by libraries
If you are using any native libraries, any memory they allocate will be off-heap. Some core Java libraries (like java.util.zip.ZipFile) also use native libraries that consume off-heap memory.
The JVM provides no way to limit the total amount of memory allocated by native libraries, so this pool is technically unbounded.
malloc arenas
The JVM uses malloc for many of these native memory requests. To avoid thread-contention issues, glibc's malloc maintains multiple pre-allocated pools (arenas). The default number of arenas is 8 × the number of CPUs, but it can be overridden by setting the environment variable MALLOC_ARENA_MAX. Each arena reserves a certain amount of memory even if it is not all used.
Setting MALLOC_ARENA_MAX to 1-4 is generally recommended for Java, as most frequent allocations are done from the heap, and a lower arena count will prevent wasted virtual memory from counting towards the ulimit.
This category is not technically a pool of its own, but it explains the virtual allocation of extra memory.
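As a rough cross-check against the numbers in the question (every figure below is an estimate, not a measurement): ulimit -Sv 1500000 allows about 1465 MiB of virtual memory, while the heap reservation (1000 MiB from -Xmx1000m) plus the compressed class space reservation (500 MiB - exactly the 524288000 bytes in the startup error) already come to 1500 MiB, before counting GC overhead (~5% of heap), thread stacks, the code cache, mapped jars, and malloc arenas. That is consistent with startup only succeeding once the limit is raised by roughly the 600 MiB the questioner observed.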

Java: Commandline parameter to release unused memory

In Bash, I use the command java -Xmx8192m -Xms512m -jar jarfile to start a Java process with an initial heap space of 512MB and a maximum heap space of 8GB.
I like how the heap space increases based on demand, but once it has grown, it isn't released even though the process no longer needs the memory. How can I release the memory that isn't being used by the process?
Example: the process starts and uses 600MB of memory. The heap grows from 512MB to a little over 600MB. The process then drops to 400MB of RAM usage, but the heap allocation stays at 600MB. How would I make the allocation stay near the actual RAM usage?
You cannot; it's simply not designed to work that way. Note that unused memory pages will simply be paged out by the operating system, and so won't consume any physical memory.
Generally you would not want the JVM to return memory to the OS and later claim it back, as neither operation is cheap.
There are a couple of -XX parameters that may or may not work with your preferred garbage collector, namely:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
Source
I believe you'd need a stop-the-world collector for them to be enforced.
Other JVMs may have their own parameters.
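For illustration, applied to the question's command line (the ratio values here are arbitrary placeholders, and on older JVMs G1 also needs -XX:+UnlockExperimentalVMOptions, as in the eclipse.ini further down this page):
java -XX:+UseG1GC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 -Xms512m -Xmx8192m -jar jarfile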
I wouldn't normally have replied, but the amount of negative/false info here isn't cool.
No - this is a genuinely needed function. I think the JVM on Android can probably do it, but I'm not sure.
But most JVMs - including all the Java EE VMs - are simply not interested in this.
This is not as simple as it seems: from the OS's point of view, the VM is a process that owns a mapped memory region - its stack or data segment - and in most cases that region needs to be a contiguous interval. Allocating and releasing memory at the OS level is done with a system call, by which the process asks the OS to move its segment limit.
What do you do if, for example, your JVM has 2 gigabytes of RAM but is using only 500 megs, and those 500 megs are dispersed in ten-byte fragments across the whole 2 gigs? A memory-release function would also need a defragmentation step, which would multiply the resource cost of the GC runs.
As Java runs, and objects are constructed and collected by the garbage collector, the free and allocated memory areas become interleaved throughout the stack/data segment.
When we look not at Java but at native OS processes, the situation is the same: if you malloc() ten 1-meg blocks and then release the first nine, there is no way to give that memory back to the OS, although newer libraries and OS APIs have made progress on this. Of course, if you later allocate memory again, the allocation will be served from the just-freed regions.
My opinion is that even though this is somewhat costly and complex (and quite a large amount of programming work), it would be worth the price, and it doesn't reflect well on our collective programming culture that it hasn't been done in decades, in everything - including the Java VMs.

Weird behavior of Java -Xmx on large amounts of ram

You can control the maximum heap size in java using the -Xmx option.
We are experiencing some weird behavior on Windows with this switch. We run some very beefy servers (think 196gb ram). Windows version is Windows Server 2008R2
Java version is 1.6.0_18, 64-Bit (obviously).
Anyway, we were having some weird bugs where processes were quitting with out of memory exceptions even though the process was using much less memory than specified by the -Xmx setting.
So we wrote a simple program that allocates a 1GB byte array each time the enter key is pressed, and initializes the byte array to random values (to prevent any memory compression etc.).
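A minimal reconstruction of such a benchmark might look like this (hypothetical - not the original program):
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class AllocTest {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<byte[]>();
        Random rnd = new Random();
        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.print("Press enter to allocate 1GB...");
            in.nextLine();
            byte[] block = new byte[1024 * 1024 * 1024];
            rnd.nextBytes(block); // random contents defeat memory compression / lazy zero pages
            blocks.add(block);
            System.out.println("Allocated " + blocks.size() + " GB so far");
        }
    }
}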
Basically, what's happening is that if we run the program with -Xmx35000m (roughly 35 GB), we get an out-of-memory exception when we hit 25 GB of process space (measured with the Windows Task Manager). We hit this after allocating 24 GB worth of 1 GB blocks, BTW, so that checks out.
Simply specifying a larger value for the -Xmx option lets the program work fine up to larger amounts of RAM.
So, what is going on? Is -Xmx just "off"? BTW: we need to specify -Xmx55000m to get a 35 GB process space...
Any ideas on what is going on?
Is there a bug in the Windows JVM?
Is it safe to simply set the -Xmx option bigger, even though there is a disconnect between the -Xmx option and what is going on process-wise?
Theory #1
When you request a 35Gb heap using -Xmx35000m, you are actually asking for the total space used for the heap to be 35Gb. But that total space consists of the Tenured Object space (for objects that survive multiple GC cycles), the Eden space for newly created objects, and other spaces into which objects will be copied during garbage collection.
The issue is that some of the spaces are not and cannot be used for allocating new objects. So in effect, you "lose" a significant percent of your 35Gb to overheads.
There are various -XX options that can be used to tweak the sizes of the respective spaces, etc. You might try fiddling with them to see if they make a difference. Refer to this document for more information. (The commonly used GC tuning options are listed in section 8. The -XX:NewSize option looks promising ...)
Theory #2
This might be happening because you are allocating huge objects. IIRC, objects above a certain size can be allocated directly into the Tenured Object space. In your (highly artificial) benchmark, this might result in the JVM not putting stuff into the Eden space, and therefore being able to use less of the total heap space than is normal.
As an experiment, try changing your benchmark to allocate lots of small objects, and see if it manages to use more of the available space before OOME-ing.
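A hypothetical small-object variant of the benchmark reconstructed above:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SmallAllocTest {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<byte[]>();
        Random rnd = new Random();
        while (true) {
            byte[] block = new byte[1024 * 1024]; // 1 MB: small enough to start life in Eden
            rnd.nextBytes(block);
            blocks.add(block);
            if (blocks.size() % 1024 == 0) {
                System.out.println("Allocated " + (blocks.size() / 1024) + " GB so far");
            }
        }
    }
}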
Here are some other theories that I would discount:
"You are running into OS-imposed limits." I would discount this, since you said that you can get significantly greater memory utilization by increasing the -Xmx... setting.
"The Windows task manager is reporting bogus numbers." I would discount this because the numbers reported roughly match the 25Gb that you think your application had managed to allocate.
"You are losing space to other things; e.g. the permgen heap." AFAIK, the permgen heap size is controlled and accounted independently of the "normal" heaps. Other non-heap memory usage is either a constant (for the app) or dependent on the app doing specific things.
"You are suffering from heap fragmentation." All of the JVM garbage collectors are "copying collectors", and this family of collectors has the property that heap nodes are automatically compacted.
"JVM bug on Windows." Highly unlikely. There must be tens of thousands of 64bit Java on Windows installations that maximize the heap size. Someone else would have noticed ...
Finally, if you are NOT doing this because your application requires you to allocate memory in huge chunks, and hang onto it "for ever" ... there's a good chance that you are chasing shadows. A "normal" large-memory application doesn't do this kind of thing, and the JVM is tuned for normal applications ... not anomalous ones.
And if your application really does behave this way, the pragmatic solution is to just set the -Xmx... option larger, and only worry if you start running into OS-level issues.
To get a feeling for what exactly you are measuring you should use some different tools:
the Windows Task Manager (I only know Windows XP, but I heard rumours that the Task Manager has improved since then.)
procexp and vmmap from Sysinternals
jconsole from the JDK (you are using the Sun/Oracle HotSpot JVM, aren't you?)
Now you should answer the following questions:
What does jconsole say about the used heap size? How does that differ from procexp?
Does the value from procexp change if you fill the byte arrays with non-zero numbers instead of keeping them at 0?
Did you try turning on verbose GC output to find out why the last allocation fails? Is it because the OS fails to allocate a heap beyond 25GB for the native JVM process, or is it because the GC is hitting some sort of limit on the maximum memory it can manage? I would recommend you also connect to the running process with jconsole to see the status of the heap just before the allocation failure. Tools like the Sysinternals Process Explorer might also give better details as to where the failure is occurring, if it is in the JVM process.
Since the process is dying at 25GB and you have a generational collector, maybe the rest of the generations are consuming 10GB. I would recommend you install JDK 1.6_u24 and use jvisualvm with the VisualGC plugin to see what the GC is doing - in particular, factor in the sizes of all the generations to see how the 35GB heap is being divided into different regions by the GC / VM memory manager.
See this link if you are not familiar with generational GC: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing.total_heap
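For the verbose-GC suggestion, flags along these lines (HotSpot 1.6-era; AllocTest refers to the hypothetical benchmark sketch earlier on this page) would show what the collector was doing when the allocation failed:
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xmx35000m AllocTest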
I assume this has to do with heap fragmentation. The free memory is probably not available as a single contiguous area, so when you try to allocate a large block the request fails because the memory cannot be obtained in a single piece.
The memory displayed by the Windows Task Manager is the total memory allocated to the process, which includes memory for code, stack, perm gen, and heap.
The memory you measure with your click program is the amount of heap the JVM makes available to running Java programs.
Naturally, the total memory allocated to the JVM by Windows should be greater than what the JVM makes available to your program as heap memory.

Eclipse release heap back to system

I'm using Eclipse 3.6 with the latest Sun Java 6 on Linux (64-bit) with a large number of large projects. In some special circumstances (SVN updates, for example) Eclipse needs up to 1 GB heap. But most of the time it only needs 350 MB. When I enable the heap status panel, I see this most of the time:
350M of 878M
I start Eclipse with these settings: -Xms128m -Xmx1024m
So most of the time hundreds of MB are simply wasted, used only rarely when memory usage peaks for a short time. I don't like that at all, and I want Eclipse to release the memory back to the system so I can use it for other programs.
When Eclipse needs more memory and there is not enough free RAM, Linux can swap out other running programs; I can live with that. I heard there is a -XX:MaxHeapFreeRatio option, but I never figured out what values I have to use to make it work. No value I tried ever made a difference.
So how can I tell Eclipse (or Java) to release unused heap?
Found a solution. I switched Java to the G1 garbage collector, and now the HeapFreeRatio parameters work as intended. So I use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory is actually released back to the operating system.
You can go to Preferences -> General and check Show heap status. This activates a nice view of your heap in the corner of Eclipse, like the "350M of 878M" reading quoted in the question.
If you click the trash bin, it will try to run garbage collection and return the memory.
Java's heap is nothing more than a big data structure managed within the JVM process's heap space. The two heaps are logically separate entities, even though they occupy the same memory.
The JVM is at the mercy of the host system's implementations of malloc(), which allocates memory from the system using brk(). On Linux systems (Solaris, too), memory allocated for the process heap is almost never returned, largely because it becomes fragmented and the heap must be contiguous. This means that memory allocated to the process will increase monotonically, and the only way to keep the size down is not to allocate it in the first place.
-Xms and -Xmx tell the JVM how to size the Java heap ahead of time, which causes it to allocate process memory. Java can garbage collect until the sun burns out, but that cleanup is internal to the JVM and the process memory backing it doesn't get returned.
Elaboration from comment below:
The standard way for a program written in C (notably the JVM running Eclipse for you) to allocate memory is to call malloc(3), which uses the OS-provided mechanism for allocating memory to the process and then managing individual allocations within those allocations. The details of how malloc() and free() work are implementation-specific.
On most flavors of Unix, a process gets exactly one data segment, which is a contiguous region of memory that has pointers to the start and end. The process can adjust the size of this segment by calling brk(2) and increasing the end pointer to allocate more memory or decreasing it to return it to the system. Only the end can be adjusted. This means that if your implementation of malloc() enlarges the data segment, the corresponding implementation of free() can't shrink it unless it determines that there's space at the end that's not being used. In practice, a humongous chunk of memory you allocated with malloc() rarely winds up at the very end of the data segment when you free() it, which is why processes tend to grow monotonically.
