How does Hotspot Java/JVM store memory?

Is there anywhere that a Hotspot JVM process stores memory besides these places:
perm gen
tenured generation
eden space
from space
to space
code cache
That is to say: What possible ways are there that the hotspot process can reserve & commit memory on a machine such that said memory would not show up in the statistics for one of those places?
Some answers I can think of (let me know if these are true):
virtual memory used for thread stacks is not represented in those numbers
any loaded DLLs or mapped files.
EDIT:
some other answers given:
java.exe itself
JNI methods could allocate memory themselves
any native code (e.g., from DLLs) could allocate memory.
general JVM metadata for running itself.

You're correct so far (DLLs include all JNI libraries and whatever memory they've allocated). The VM also has its own code (e.g., the contents of the java executable), bookkeeping information about the memory that's allocated to Java programs, and potentially memory used by VM agents. Basically, what you described in your first list are the items that make up the "running memory" of the virtual machine; the rest of the JVM's memory is all of the items that represent the virtual machine's "hardware", such as the libraries that connect it to the OS's networking, graphics, and so on.
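One way to see which pools the JVM itself accounts for (and, by omission, which memory it does not) is to enumerate the memory pool MXBeans. A minimal sketch, assuming a standard HotSpot JVM; the exact pool names vary by VM version and collector:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListPools {
    public static void main(String[] args) {
        // Prints the named pools (eden, survivor, tenured/old gen, perm gen or
        // metaspace, code cache). Thread stacks, mapped DLLs/jars and other
        // native allocations never show up here.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s used=%dK committed=%dK%n",
                    pool.getName(),
                    pool.getUsage().getUsed() / 1024,
                    pool.getUsage().getCommitted() / 1024);
        }
    }
}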

Related

How to use ulimit with java correctly?

My java program must run in an environment where memory is constrained to a specified amount. When I run my java service it runs out of memory during startup.
This is an example of the commands I'm using and values I'm setting:
ulimit -Sv 1500000
java \
-Xmx1000m -Xms1000m \
-XX:MaxMetaspaceSize=500m \
-XX:CompressedClassSpaceSize=500m \
-XX:+ExitOnOutOfMemoryError \
MyClass
In theory, I've accounted for everything I could find documentation on. There's the heap (1000m) and the metaspace (500m). But it still runs out of memory on startup while initializing the JVM. This happens unless I set the ulimit about 600 MiB larger than heap + metaspace.
What category of memory am I missing such that I can set ulimit appropriately?
Use case: I am running a task in a Docker container with limited memory. That means that Linux cgroups is doing the limiting. When memory limits are exceeded, cgroups can only either pause or kill the process that exceeds its bounds. I really want the java process to fail gracefully if something goes wrong and it uses too much memory, so that the wrapping bash script can report the error to the task initiator.
We are using java 8 so we need to worry about metaspace instead of permgen.
Update: It does not die with an OutOfMemoryError. This is the error:
Error occurred during initialization of VM
Could not allocate metaspace: 524288000 bytes
It's really hard to effectively ulimit java. Many pools are unbounded, and the JVM fails catastrophically when an allocation attempt fails. Not all the memory is actually committed, but much of it is reserved thus counting toward the virtual memory limit imposed by ulimit.
After much investigation, I've uncovered many of the different categories of memory java uses. This answer applies to OpenJDK and Oracle 8.x on a 64-bit system:
Heap
This is the most well understood portion of the JVM memory. It is where the majority of your program memory is used. It can be controlled with the -Xmx and -Xms options.
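As a quick sanity check, the configured heap ceiling is visible from inside the process. A small sketch (the class name is mine); run it with the same flags as your service, e.g. java -Xmx1000m -Xms1000m HeapInfo:
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() roughly corresponds to -Xmx, totalMemory() to the
        // currently committed heap (about -Xms right after startup).
        System.out.println("max heap:  " + rt.maxMemory() / (1024 * 1024) + " MiB");
        System.out.println("committed: " + rt.totalMemory() / (1024 * 1024) + " MiB");
        System.out.println("used:      " + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024) + " MiB");
    }
}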
Metaspace
This appears to hold metadata about classes that have been loaded. I could not find out whether this category will ever release memory to the OS, or if it will only ever grow. The default maximum appears to be 1g. It can be controlled with the -XX:MaxMetaspaceSize option. Note: specifying this might not do anything without also specifying the compressed class space size.
Compressed class space
This appears related to the Metaspace. I could not find out whether this category will ever release memory to the OS, or if it will only ever grow. The default maximum appears to be 1g. It can be controlled with the -XX:CompressedClassSpaceSize option.
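To watch how much of these non-heap pools is actually committed versus used, the memory pool MXBeans can be filtered to non-heap pools. A sketch only; the names it prints ("Metaspace", "Compressed Class Space", "Code Cache") are what Java 8 HotSpot happens to report and may differ on other VMs:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class NonHeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                MemoryUsage u = pool.getUsage();
                String max = u.getMax() < 0 ? "unbounded" : (u.getMax() / 1024) + "K";
                System.out.printf("%-25s used=%dK committed=%dK max=%s%n",
                        pool.getName(), u.getUsed() / 1024, u.getCommitted() / 1024, max);
            }
        }
    }
}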
Garbage collector overhead
There appears to be a fixed amount of overhead depending on the selected garbage collector, as well as an additional allocation based on the size of the heap. Observation suggests that this overhead is about 5% of the heap size. There are no known options for limiting this (other than to select a different GC algorithm).
Threads
Each thread reserves 1m for its stack. The JVM appears to reserve an additional 50m of memory as a safety measure against stack overflows. The stack size can be controlled with the -Xss option. The safety size cannot be controlled. Since there is no way to enforce a maximum thread count and each thread requires a certain amount of memory, this pool of memory is technically unbounded.
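The default reservation comes from -Xss, but an individual thread can ask for a different stack size through the four-argument Thread constructor. A minimal sketch; note the API documents the size as a hint that some platforms ignore:
public class StackDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> System.out.println("running in " + Thread.currentThread().getName());
        // Request ~256 KiB instead of the default (typically 1 MiB) reservation.
        Thread t = new Thread(null, work, "small-stack-thread", 256 * 1024);
        t.start();
        t.join();
    }
}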
Jar files (and zip files)
The default zip implementation will use memory mapping for zip file access. This means that each jar and zip file accessed will be memory mapped (requiring an amount of reserved memory equal to the sum of file sizes). This behavior can be disabled by setting the sun.zip.disableMemoryMapping system property (as in -Dsun.zip.disableMemoryMapping=true)
NIO Direct Buffers
Any direct buffer (created using allocateDirect) will use that amount of off-heap memory. The best NIO performance comes with direct buffers, so many frameworks will use them.
By default the JVM does not limit the total amount of memory used by NIO direct buffers (it can be capped explicitly with -XX:MaxDirectMemorySize), so in practice this pool can grow very large.
Additionally, this memory is duplicated on-heap for each thread that touches the buffer. See this for more details.
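A tiny sketch of where that memory comes from; nothing below counts against -Xmx, and the native allocation is only released once the buffer object itself is garbage collected:
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 64 MiB of off-heap (native) memory, invisible to the Java heap pools.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        buf.putInt(0, 42);
        System.out.println("direct=" + buf.isDirect() + " capacity=" + buf.capacity());
    }
}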
Native memory allocated by libraries
If you are using any native libraries, any memory they allocate will be off-heap. Some core Java libraries (like java.util.zip.ZipFile) also use native libraries that consume off-heap memory.
The JVM provides no way to limit the total amount of memory allocated by native libraries, so this pool is technically unbounded.
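A hedged example using only the core library: java.util.zip is backed by zlib, so each Deflater holds native buffers outside the Java heap until end() is called (or the object is eventually finalized):
import java.util.zip.Deflater;

public class NativeZipDemo {
    public static void main(String[] args) {
        byte[] input = new byte[1024 * 1024];
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            byte[] out = new byte[1024 * 1024];
            int compressed = deflater.deflate(out);
            System.out.println("compressed to " + compressed + " bytes");
        } finally {
            deflater.end(); // releases the native zlib state explicitly
        }
    }
}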
malloc arenas
The JVM uses malloc for many of these native memory requests. To avoid thread contention issues, the malloc implementation uses multiple pre-allocated pools (arenas). The default number of pools is 8 x the number of CPUs, but it can be overridden by setting the environment variable MALLOC_ARENA_MAX. Each pool will reserve a certain amount of memory even if it's not all used.
Setting MALLOC_ARENA_MAX to 1-4 is generally recommended for Java, as most frequent allocations are done from the heap, and a lower arena count will prevent wasted virtual memory from counting towards the ulimit.
This category is not technically its own pool, but it explains the virtual allocation of extra memory.
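Pulling the categories together for the original question, here is a very rough budget sketch. Every term below is an estimate I made up from the observations above (not a documented formula), so treat the result only as a starting point for ulimit -Sv, which is specified in KiB:
public class UlimitBudget {
    public static void main(String[] args) {
        long heap = 1000;                 // MiB, from -Xmx/-Xms
        long metaspace = 500;             // MiB, from -XX:MaxMetaspaceSize
        long classSpace = 500;            // MiB, from -XX:CompressedClassSpaceSize
        long gcOverhead = heap * 5 / 100; // ~5% of heap, observed, GC-dependent
        long threads = 50 + 25 * 1;       // ~50 MiB safety + 1 MiB stack each, assuming ~25 threads
        long misc = 100;                  // code cache, mapped jars, malloc arenas: a guess
        long totalMiB = heap + metaspace + classSpace + gcOverhead + threads + misc;
        System.out.println("rough ulimit -Sv floor: " + totalMiB * 1024 + " KiB (" + totalMiB + " MiB)");
    }
}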

Java maximum memory argument does not appear to work [duplicate]

I have a Jar file that is run in a server environment on demand, and I would like to limit the amount of memory that it uses so that multiple simultaneous instances can run comfortably. However, after setting the -Xmx512M parameter, it appears that Java is still using more memory than that. I am using the following command:
java -Xmx512M -jar Reporter.jar /tmp/REPmKLs8K
However I can see that the process is using more than this:
Resource: Virtual Memory Size
Exceeded: 1657 > 400 (MB)
Executable: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/jre/bin/java
Command: java -Xmx512M -jar Reporter.jar /tmp/REPmKLs8K
I'm not sure why this is, and it could potentially be an issue with the memory reporting software (ConfigServer Firewall). Has anyone experienced anything similar?
-Xmx is the maximum heap size, not the process size.
See: What does Java option -Xmx stand for?
-Xmx is used to specify the max heap allocation, but Java needs more memory for the JVM itself, permgen space (Java 7 and below), etc. You can see this post for the memory structure.
You can use tools like JVisualVM in order to profile the real memory usage in the JVM.
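For a quick look without a profiler, the JVM's own view of heap versus non-heap memory is also available programmatically. A small sketch; note that neither figure includes thread stacks, mapped files, or native library allocations, which is why the OS-reported process size is larger still:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap:     " + mem.getHeapMemoryUsage());
        System.out.println("non-heap: " + mem.getNonHeapMemoryUsage());
    }
}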
-Xmx doesn't control what you think it controls.
It only controls the JVM heap; not everything goes in the JVM heap, and the heap takes up way more native memory than what you specify, for management and bookkeeping.
You can't control what you want to control: -Xmx only controls the Java heap, not the JVM's consumption of native memory, which is used in an entirely different way and depends on the JVM implementation and the OS it is running on.
From the following article, Thanks for the Memory (Understanding How the JVM Uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector uses native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on Threads.
Plain and simple, the JVM uses more memory than what is supplied in -Xms, -Xmx, and the other command line parameters.
The classloaders (and applications can have more than one) eat up lots of memory that isn't easily documented. The JIT eats up memory, trading space for time, which is a good trade-off most of the time.
Some of the above links may refer to older Java versions; Java 8 handles garbage collection and memory allocation differently, but the general rules above apply.

Eclipse release heap back to system

I'm using Eclipse 3.6 with latest Sun Java 6 on Linux (64 bit) with a larger number of large projects. In some special circumstances (SVN updates for example) Eclipse needs up to 1 GB heap. But most of the time it only needs 350 MB. When I enable the heap status panel then I see this most of the time:
350M of 878M
I start Eclipse with these settings: -Xms128m -Xmx1024m
So most of the time hundreds of MB are simply wasted and only rarely used when memory usage peaks for a short time. I don't like that at all and I want Eclipse to release the memory back to the system, so I can use it for other programs.
When Eclipse needs more memory while there is not enough free RAM, then Linux can swap out other running programs; I can live with that. I heard there is a -XX:MaxHeapFreeRatio option. But I never figured out what values I have to use so it works. No value I tried ever made a difference.
So how can I tell Eclipse (Or Java) to release unused heap?
Found a solution. I switched Java to use the G1 garbage collector and now the HeapFreeRatio parameters work as intended. So I use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and switches back to 300 MB after garbage collection, the memory is actually released back to the operating system.
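To see the same effect outside Eclipse, a small sketch (class name and numbers are mine) can be run with the same options as in eclipse.ini plus an explicit heap ceiling, e.g. java -Xmx1024m -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=25 ShrinkDemo. The committed heap reported by totalMemory() should drop after the allocation is freed; whether the pages are actually returned to the OS still depends on the collector and JVM version:
public class ShrinkDemo {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("committed before: " + mib(Runtime.getRuntime().totalMemory()) + " MiB");
        byte[][] blocks = new byte[100][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[5 * 1024 * 1024]; // ~500 MiB of short-lived data
        }
        System.out.println("committed at peak: " + mib(Runtime.getRuntime().totalMemory()) + " MiB");
        blocks = null;      // drop the references
        System.gc();        // request a full collection
        Thread.sleep(1000); // give the VM a moment to shrink the heap
        System.out.println("committed after:  " + mib(Runtime.getRuntime().totalMemory()) + " MiB");
    }

    private static long mib(long bytes) {
        return bytes / (1024 * 1024);
    }
}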
You can go to Preferences -> General and check Show heap status. This activates a nice view of your heap in the corner of Eclipse.
If you click the trash bin, it will try to run garbage collection and return the memory.
Java's heap is nothing more than a big data structure managed within the JVM process' heap space. The two heaps are logically-separate entities even though they occupy the same memory.
The JVM is at the mercy of the host system's implementations of malloc(), which allocates memory from the system using brk(). On Linux systems (Solaris, too), memory allocated for the process heap is almost never returned, largely because it becomes fragmented and the heap must be contiguous. This means that memory allocated to the process will increase monotonically, and the only way to keep the size down is not to allocate it in the first place.
-Xms and -Xmx tell the JVM how to size the Java heap ahead of time, which causes it to allocate process memory. Java can garbage collect until the sun burns out, but that cleanup is internal to the JVM and the process memory backing it doesn't get returned.
Elaboration from comment below:
The standard way for a program written in C (notably the JVM running Eclipse for you) to allocate memory is to call malloc(3), which uses the OS-provided mechanism for allocating memory to the process and then managing individual allocations within those allocations. The details of how malloc() and free() work are implementation-specific.
On most flavors of Unix, a process gets exactly one data segment, which is a contiguous region of memory that has pointers to the start and end. The process can adjust the size of this segment by calling brk(2) and increasing the end pointer to allocate more memory or decreasing it to return it to the system. Only the end can be adjusted. This means that if your implementation of malloc() enlarges the data segment, the corresponding implementation of free() can't shrink it unless it determines that there's space at the end that's not being used. In practice, a humongous chunk of memory you allocated with malloc() rarely winds up at the very end of the data segment when you free() it, which is why processes tend to grow monotonically.

JVM and Memory Usage - JRun server not using full PSPermGen allocation?

I'm trying to understand why our ColdFusion 9 (JRun) server is throwing the following error:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
The JVM arguments are as follows:
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -
I had jconsole running when the dump happened and I am trying to reconcile some numbers with the -XX:MaxPermSize=192m setting above. When JRun died it had the following memory usage:
Heap
PSYoungGen total 136960K, used 60012K [0x5f180000, 0x67e30000, 0x68d00000)
eden space 130624K, 45% used [0x5f180000,0x62c1b178,0x67110000)
from space 6336K, 0% used [0x67800000,0x67800000,0x67e30000)
to space 6720K, 0% used [0x67110000,0x67110000,0x677a0000)
PSOldGen total 405696K, used 241824K [0x11500000, 0x2a130000, 0x5f180000)
object space 405696K, 59% used [0x11500000,0x20128360,0x2a130000)
PSPermGen total 77440K, used 77070K [0x05500000, 0x0a0a0000, 0x11500000)
object space 77440K, 99% used [0x05500000,0x0a043af0,0x0a0a0000)
My first question: the dump shows PSPermGen being the problem - it says the total is 77440K, but it should be 196608K (based on my 192m JVM argument), right? What am I missing here? Is this something to do with the other non-heap pool - the Code Cache?
I'm running on a 32bit machine, Windows Server 2008 Standard. I was thinking of increasing the PSPermGen JVM argument, but I want to understand why it doesn't seem to be using its current allocation.
Thanks in advance!
An "out of swap space" OOME happens when the JVM has asked the operating system for more memory, and the operating system has been unable to fulfill the request because all swap (disc) space has already been allocated. Basically, you've hit a system-wide hard limit on the amount of virtual memory that is available.
This can happen through no fault of your application, or the JVM. Or it might be a consequence of increasing -Xmx etc beyond your system's capacity to support it.
There are three approaches to addressing this:
Add more physical memory to the system.
Increase the amount of swap space available on the system; e.g. on Linux look at the manual entry for swapon and friends. (But be careful that the ratio of active virtual memory to physical memory doesn't get too large ... or your system is liable to "thrash", and performance will drop through the floor.)
Cut down the number and size of processes that are running on the system.
If you got into this situation because you've been increasing -Xmx to combat other OOMEs, then now would be good time to track down the (probable) memory leaks that are the root cause of your problems.
"ChunkPool::allocate. Out of swap space" usually means the JVM process has failed to allocate memory for its internal processing.
This is usually not directly related to your heap usage, as it is the JVM process itself that has run out of memory. Check the size of the JVM process within Windows. You may have hit an upper limit there.
This bug report also gives an explanation.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004956
This is usually caused by native, non java objects not being released by your application rather than java objects on the heap.
Some example causes are:
Large thread stack sizes, or many threads being spawned and not cleaned up correctly. The thread stacks live in native "C" memory rather than the java heap. I've seen this one myself.
Swing/AWT windows being programmatically created and not disposed when no longer used. The native widgets behind AWT don't live on the heap either (see the sketch at the end of this answer).
Direct buffers from nio not being released. The data for a direct buffer is allocated to the native process memory, not the java heap.
Memory leaks in JNI invocations.
Many files opened and not closed.
I found this blog helpful when diagnosing a similar problem: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html
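The sketch below illustrates the explicit cleanup that the AWT and file-handle causes above call for; it assumes Java 7+ for try-with-resources, a non-headless environment for the Frame, and example.zip is just a placeholder name:
import java.awt.Frame;
import java.util.zip.ZipFile;

public class NativeCleanup {
    public static void main(String[] args) throws Exception {
        Frame frame = new Frame("native-backed window");
        frame.setSize(200, 100);
        frame.setVisible(true);
        // ... use the window ...
        frame.dispose(); // releases the native window resources behind AWT

        try (ZipFile zip = new ZipFile("example.zip")) { // placeholder file name
            System.out.println("entries: " + zip.size());
        } // closed automatically, releasing the file handle and native structures
    }
}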
Check your setDomainEnv.cmd (.sh) file. There will be three different places where PermSize is set:
-XX:MaxPermSize=xxxm -XX:PermSize=xxxm. Change it everywhere.

What runs in a C heap vs a Java heap in HP-UX environment JVMs?

I've been running into a peculiar issue with certain Java applications in the HP-UX environment.
The heap is set to -mx512, yet, looking at the memory regions for this java process using gpm, it shows it using upwards of 1.6 GB of RSS memory, with 1.1 GB allocated to the DATA region. It grows quite rapidly over a 24-48 hour period and then slows down substantially, still growing 2 MB every few hours. However, the Java heap shows no sign of leakage.
Curious how this was possible I researched a bit and found this HP write-up on memory leaks in java heap and c heap: http://docs.hp.com/en/JAVAPERFTUNE/Memory-Management.pdf
My question is: what determines what runs in the C heap vs the Java heap, and for things that do not go through the Java heap, how would you identify those objects on the C heap? Additionally, does the Java heap sit inside the C heap?
Consider what makes up a Java process.
You have:
the JVM (a C program)
JNI Data
Java byte codes
Java data
Notably, they ALL live in the C heap (the JVM Heap is part of the C heap, naturally).
In the Java heap are simply the Java bytecodes and the Java data. But what is also in the Java heap is "free space".
The typical (i.e. Sun) JVM only grows its Java Heap as necessary, but never shrinks it. Once it reaches its defined maximum (-Xmx512M), it stops growing and deals with whatever is left. When that maximum heap is exhausted, you get the OutOfMemory exception.
What that Xmx512M option DOES NOT do, is limit the overall size of the process. It limits only the Java Heap part of the process.
For example, you could have a contrived Java program that uses 10mb of Java heap, but calls a JNI call that allocates 500MB of C Heap. You can see how your process size is large, even though the Java heap is small. Also, with the new NIO libraries, you can attach memory outside of the heap as well.
The other aspect that you must consider is that the Java GC is typically a "Copying Collector". Which means it takes the "live" data from the memory it's collecting, and copies it to a different section of memory. This empty space that it copies to IS NOT PART OF THE HEAP, at least, not in terms of the Xmx parameter. It's, like, "the new Heap", and becomes part of the heap after the copy (the old space is used for the next GC). If you have a 512MB heap, and it's at 510MB, Java is going to copy the live data someplace. The naive thought would be to another large open space (like 500+MB). If all of your data were "live", then it would need a large chunk like that to copy into.
So, you can see that in the most extreme edge case, you need at least double the free memory on your system to handle a specific heap size. At least 1GB for a 512MB heap.
Turns out that's not the case in practice, and memory allocation and such is more complicated than that, but you do need a large chunk of free memory to handle the heap copies, and this impacts the overall process size.
Finally, note that the JVM does fun things like mapping the rt.jar classes into the VM to ease startup. They're mapped in a read-only block, and can be shared across other Java processes. These shared pages will "count" against all Java processes, even though they really only consume physical memory once (the magic of virtual memory).
Now as to why your process continues to grow: if you never hit the Java OOM message, that means that your leak is NOT in the Java heap, but that doesn't mean it may not be in something else (the JRE runtime, a 3rd party JNI library, a native JDBC driver, etc.).
In general, only the data in Java objects is stored on the Java heap, all other memory required by the Java VM is allocated from the "native" or "C" heap (in fact, the Java heap itself is just one contiguous chunk allocated from the C heap).
Since the JVM requires the Java heap (or heaps if generational garbage collection is in use) to be a contiguous piece of memory, the whole maximum heap size (-mx value) is usually allocated at JVM start time. In practice, the Java VM will attempt to minimise its use of this space so that the Operating System doesn't need to reserve any real memory to it (the OS is canny enough to know when a piece of storage has never been written to).
The Java heap, therefore, will occupy a certain amount of space in memory.
The rest of the storage will be used by the Java VM and any JNI code in use. For example, the JVM requires memory to store Java bytecode and constant pools from loaded classes, the result of JIT compiled code, work areas for compiling JIT code, native thread stacks and other such sundries.
JNI code is just platform-specific (compiled) C code that can be bound to a Java object in the form of a "native" method. When such a method is invoked, the bound code is executed and can allocate memory using standard C routines (e.g. malloc), which will consume memory on the C heap.
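A minimal sketch of that binding pattern from the Java side; the library name nativedemo and the method processNative are made-up names for illustration, and the C implementation (not shown) could malloc() arbitrary amounts of C heap that never appears in the Java heap:
public class NativeBinding {
    static {
        // Loads libnativedemo.so (Unix) or nativedemo.dll (Windows) from java.library.path.
        System.loadLibrary("nativedemo");
    }

    // Declared native: the body lives in compiled C code bound at load time.
    public native long processNative(byte[] input);
}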
My only guess with the figures you have given is a memory leak in the Java VM. You might want to try one of the other VMs listed in the paper you referred to. Another (much more difficult) alternative might be to compile the open Java source on the HP platform.
Sun's Java isn't 100% open yet, they are working on it, but I believe that there is one in sourceforge that is.
Java also thrashes memory, by the way. Sometimes it confuses OS memory management a little (you see it when Windows runs out of memory and asks Java to free some up, Java touches all its objects causing them to be loaded in from the swapfile, Windows screams in agony and dies), but I don't think that's what you are seeing.
