We have a 32-bit JVM running under 64-bit RHEL5 on a box with plenty of memory (32G). For various reasons, this process requires a pretty large managed heap and permgen space - currently, it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have recently started seeing JVM crashes because it - seemingly - ran out of native memory (it could not create native threads, failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap: when they happened, the managed heap was only ~50-70% full.
I know that the memory reserved for the managed heap and permgen is close to 2.5G, which leaves no more than 0.5G for the JVM itself, BUT
- I don't understand why 0.5G isn't enough for the JVM, even if there is constant GCing going on
- the real question is this: when I connect to the process using jconsole, it says that (currently)
Committed virtual memory:
3,211,180 kbytes
which is more than 3G. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06G) of memory available, but when it tries to go over 3G the allocation fails.
Any ideas on
a) why this happens
b) how it can be avoided
Thanks.
Mate
There is a lot of overhead in a typical VM process that is not counted in the VM's own accounting, because it is essentially taken by the native parts of the process - e.g. the .so files mapped in to provide native code for system libraries are not counted in the base VM accounting. A typical shared library is mapped into the top GB of memory, so if you try to allocate memory into that region you will be denied, because the allocation would overrun the shared libraries' memory region. On most OSes, memory allocation is performed by a simple bar (the program break) that is raised when you ask for more memory; when the new bar would conflict with other uses of the address space, the request simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; this is the fundamental challenge. It is trivial to get a 64-bit VM that lets you use far more memory than is otherwise accessible - it is simply the right tool in this situation.
If you are using a 32-bit process, there is a high probability that you are hitting the effective address space limit of a 32-bit process. On Windows, this is a maximum of about 3GB - anything above that is reserved for I/O space and the kernel. You can move this boundary, but it has a tendency to break applications/drivers that are designed for a 32-bit OS.
For Linux, you end up with ~3GB of usable address space per process; the rest is used up by things like the kernel and mapped-in shared libraries. The limit is referred to as the 'address space limit', and I presume it can be tuned.
How to avoid it? Well, for the most part, you can't: it's a hard limitation of the 32-bit address space and of the need to have the kernel/IO in the same address space as the process on a 32-bit OS.
With 64-bit OSes you have (most of) the full 64-bit address space to play with, which is vastly more than you need.
When you start a JVM, it reserves its maximum heap size immediately; how much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which you have allocated about 2.3 GB to the heap and perm gen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial (use a 64-bit JVM). I am assuming you don't have any shared libraries which are only available in 32-bit; if you do have additional shared libraries, they can easily use 100s of MB.
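If you want to see how the committed figure breaks down from inside the process, here is a minimal sketch using the standard MemoryMXBean; the process-wide number jconsole shows relies on the com.sun.management extension, so the cast is guarded:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class MemoryFootprint {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            // Heap and non-heap (perm gen, code cache) as the JVM itself accounts them
            System.out.println("Heap committed:     " + mem.getHeapMemoryUsage().getCommitted());
            System.out.println("Non-heap committed: " + mem.getNonHeapMemoryUsage().getCommitted());

            // The process-wide figure jconsole reports as "Committed virtual memory".
            // com.sun.management.OperatingSystemMXBean is a Sun/Oracle extension.
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.OperatingSystemMXBean) {
                System.out.println("Committed virtual:  "
                        + ((com.sun.management.OperatingSystemMXBean) os).getCommittedVirtualMemorySize());
            }
        }
    }

The gap between the committed virtual size and heap + non-heap is what the shared libraries, thread stacks and other native allocations are taking.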
Related
We developed a highly CPU-intensive Java server application that seems to have a serious memory leak. As time passes, the application eats up more and more memory (as seen with Windows Task Manager), but if I analyse it with a specialized Java profiler, the memory seems to stay the same. For example, in Task Manager I see the application taking over 8GB of memory and growing, but in the Java profiler I see that heap memory is at most 2GB. I tried all possible combinations of JAVA_OPTS (-Xmx, -Xms, all types of GC) and nothing worked. Is the Java process not releasing memory back to the OS? Is there any way to force it to do so?
1)
I suggest you set -Xmx2100m and observe heap usage under load.
The JVM may take as much OS memory as it decides it needs to be performant, until it reaches the Xmx limit. In modern JVMs the default Xmx is calculated from the total memory available in the OS, so it may be a large value.
I think your app does not have a memory leak; your JVM simply allocates a lot of memory, because it can.
Observe your JVM through jvisualvm.
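If you'd rather log the same picture from inside the app, a minimal sketch (the 5-second interval is arbitrary):

    import java.util.Timer;
    import java.util.TimerTask;

    public class HeapWatcher {
        public static void main(String[] args) throws Exception {
            final Runtime rt = Runtime.getRuntime();
            // Print used vs. committed vs. maximum heap every 5 seconds.
            new Timer(true).scheduleAtFixedRate(new TimerTask() {
                @Override public void run() {
                    long used = rt.totalMemory() - rt.freeMemory();
                    System.out.printf("heap used=%dM committed=%dM max=%dM%n",
                            used >> 20, rt.totalMemory() >> 20, rt.maxMemory() >> 20);
                }
            }, 0, 5000);
            Thread.sleep(60000); // stand-in for the real application's workload
        }
    }

If "committed" sits far above "used" for long stretches, the JVM has simply grown the heap because it was allowed to, which matches what Task Manager shows.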
2)
Second suggestion: do you use any JNI code? Does your app call any native library (e.g. a DLL under Windows)?
I've been profiling the x64 version of my application because the memory usage has been outrageously high; all of it seems to be coming from the JavaFX MediaPlayer, and I am correctly releasing listeners and event handlers.
Here is the stark contrast.
The x32 version at start
And now the x64 version at start
The x32 version stays below 256MB while the x64 will shoot over a gig; this is while both are left to play through their playlist.
All the code is the same.
JDK: jdk1.8.0_20
JRE: jre1.8.0_20
VM arguments on both
-XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -Xms3670k -Xmx256m -Dsun.java2d.noddraw=true -XX:+UseParallelGC
Same issue occurring on another x64 Java application
Is this a bug or am I overlooking something?
What you are seeing is the memory usage of the entire JVM process running your application. The -Xmx256m setting only limits the maximum heap space your application may allocate (and the JVM enforces that). Outside of heap space, the JVM can use additional memory for a host of other purposes (I am sure I will miss a few in the list below):
PermGen, which has now been replaced by Metaspace. According to the documentation, there is no default limit for this:
-XX:MaxMetaspaceSize=size
Sets the maximum amount of native memory that can be allocated for class metadata. By default, the size is not limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system.
Stack space (memory used = (number of threads) * (stack size)). You can control this with the -Xss parameter.
Off-heap space (either the use of ByteBuffers in your code, or third-party libraries like EHCache, which in turn use off-heap memory)
JNI code
GC (garbage collectors need their own memory, which is again not part of the heap and can vary greatly depending on the collector used and the application memory usage)
In your case, you are seeing an "almost doubling" of memory use, plus probably a more relaxed Metaspace allocation, when you move from a 32-bit to a 64-bit JVM. Using -XX:MaxMetaspaceSize=128m will probably bring the memory usage down to under 512MB for the 64-bit JVM.
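To see why a heap profiler can miss memory like this, here is a minimal off-heap sketch; the buffer count and sizes are illustrative. The direct buffers live in native memory, so the process grows while the heap stays small:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class OffHeapDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<>();
            // Each direct buffer is allocated in native memory, outside -Xmx;
            // a heap profiler only sees the small ByteBuffer wrapper objects.
            for (int i = 0; i < 8; i++) {
                buffers.add(ByteBuffer.allocateDirect(64 * 1024 * 1024)); // 64 MB each
            }
            long heapUsed = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
            System.out.println("Off-heap allocated: " + buffers.size() * 64 + " MB");
            System.out.println("Heap used:          " + heapUsed / (1024 * 1024) + " MB");
        }
    }

Task-Manager-style tools report the sum of both, which is why the two views disagree.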
I don't know your application, or how it is implemented.
One possible reason for such a surprising difference could be how much memory can be used before a garbage collection is performed. It is conceivable that on a machine with 64-bit words more memory is allocated than on a machine with 32-bit words. The garbage collector would then run less often, so more garbage would still be held in memory, even when it is not really necessary or useful.
We have a customer that uses WebSphere 7.0 on RedHat Linux Server 5.6 (Tikanga) with IBM JVM 1.6.
When we look at the OS reports for memory usage, we see very high numbers, and at some point the OS starts to use swap due to a lack of memory.
On the other hand, the JConsole graphs show perfectly normal memory behavior - heap size increases until GC is invoked as expected, and heap size drops to ~30% in regular cycles. Non-heap is as expected and very constant in size.
Does anyone have an idea what this extra native memory usage can be attributed to?
I would check that you are looking at resident memory and not virtual memory (the latter can be very high).
If you swap, even slightly, this can cause the JVM to halt for very long periods of time on a GC. If your application is not locking up for seconds or minutes at a time, it probably isn't swapping (another program could be).
If your program really is using native memory, this will most likely be due to a native library you have imported. If you have a look at /proc/{pid}/maps, this may give you a clue, but more likely you will have to check which native libraries you are loading.
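On Linux you can even list those mappings from Java itself; a minimal sketch that prints the native libraries mapped into the current JVM (Linux-only, since it reads /proc/self/maps):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class NativeMaps {
        public static void main(String[] args) throws IOException {
            // Each line of /proc/self/maps describes one mapped region;
            // entries naming .so files are native libraries in this JVM.
            BufferedReader in = new BufferedReader(new FileReader("/proc/self/maps"));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.endsWith(".so") || line.contains(".so.")) {
                    System.out.println(line);
                }
            }
            in.close();
        }
    }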
Note: if you have lots of threads, the stack space for all of them can add up. I would try to keep the thread count to a minimum if you can, but I have seen JVMs with many thousands of threads, and this can chew up native memory. GUI components can also use native memory, but I assume you don't have any of those.
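To get a feel for the numbers, a minimal sketch: each thread reserves its stack from native memory, outside the heap. The four-argument Thread constructor lets you request a smaller stack, though the value is only a hint the VM may ignore:

    public class StackCost {
        public static void main(String[] args) throws InterruptedException {
            // 1000 threads x ~1 MB default stack is roughly 1 GB of native
            // memory, none of which appears in the heap graphs.
            for (int i = 0; i < 1000; i++) {
                Thread t = new Thread(null, new Runnable() {
                    public void run() {
                        try { Thread.sleep(60000); } catch (InterruptedException e) { }
                    }
                }, "worker-" + i, 256 * 1024); // request a 256 KB stack instead
                t.setDaemon(true);
                t.start();
            }
            Thread.sleep(60000); // keep the demo alive while you inspect it
        }
    }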
I have a Java SE desktop application which uses a lot of memory (1.1 GB would be desired). All target machines (Win 7, Win Vista) have plenty of physical memory (at least 4GB, most of them have more). There is also enough free memory.
Now, when the machines have some uptime and a lot of programs have been started and terminated, the memory becomes fragmented (this is what I assume). This leads to the following error when the JVM is started:
JVM creation failed
Error occurred during initialization of VM
Could not reserve enough space for object heap
Even closing all running programs doesn't help in such a situation (even though Task Manager and other tools report enough free memory). The only thing that helps is to reboot the machine and fire up the Java application as one of the first programs launched.
As far as I've investigated, the Oracle VM requires one contiguous chunk of memory for the heap.
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
I start my JVM with the following arguments:
-J-client -J-Xss2m -J-Xms512m -J-Xmx1100m -J-XX:PermSize=64m -J-Dsun.zip.disableMemoryMapping=true
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
Use an OS which doesn't fragment virtual memory, e.g. 64-bit Windows or any version of UNIX.
BTW, it is hard for me to imagine how this is possible in the first place, but I know it to be the case. Each process has its own virtual memory, so its arrangement of virtual memory shouldn't depend on anything which is already running or has run before.
I believe it might be a hangover from the MS-DOS TSR days. Shared libraries are loaded at absolute addresses (at the end of the signed address space: 2 GB, with the high half reserved for the OS and the last 512 MB for the BIOS), meaning they must use the same address range in every program that uses them. Over time, the maximum usable address is determined by the lowest shared library loaded or used (I don't know which, but I suspect the lowest loaded).
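If you are stuck with the 32-bit JVM, one pragmatic workaround is a launcher that probes whether a given -Xmx can actually be reserved and falls back to smaller values; a minimal sketch (the candidate sizes are illustrative, and your real launcher would pass the winning value through):

    import java.io.IOException;

    public class HeapProbe {
        // Returns true if a JVM with the given -Xmx can be created at all.
        static boolean canReserve(String xmx) throws IOException, InterruptedException {
            Process p = new ProcessBuilder("java", "-Xmx" + xmx, "-version")
                    .redirectErrorStream(true)
                    .start();
            // Drain the (tiny) output so the child can't block on a full pipe.
            while (p.getInputStream().read() != -1) { }
            return p.waitFor() == 0;
        }

        public static void main(String[] args) throws Exception {
            // Try the desired size first, then progressively smaller ones.
            for (String xmx : new String[] {"1100m", "900m", "700m"}) {
                if (canReserve(xmx)) {
                    System.out.println("Largest reservable heap tried: " + xmx);
                    return; // launch the real application with this -Xmx here
                }
            }
            System.out.println("Could not reserve any of the candidate heap sizes");
        }
    }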
In Bash, I use the command java -Xmx8192m -Xms512m -jar jarfile to start a Java process with an initial heap space of 512MB and a maximum heap space of 8GB.
I like how the heap space increases based on demand, but once the heap space has grown, it isn't released even though the process no longer needs the memory. How can I release the memory that isn't being used by the process?
Example: the process starts and uses 600MB of memory; heap space increases from 512MB to a little over 600MB. The process then drops down to 400MB of RAM usage, but the heap allocation stays at 600MB. How would I make the allocation stay near the actual RAM usage?
You cannot; it's simply not designed to work that way. Note that unused memory pages will simply be paged out by the OS, and so won't consume any real memory.
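You can watch this behaviour directly: after a large allocation is dropped and collected, totalMemory() typically stays at or near its peak, though the exact outcome depends on the collector and on the free-ratio flags discussed in the next answer. A minimal sketch:

    public class HeapStaysGrown {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("committed before: " + rt.totalMemory() / (1024 * 1024) + " MB");

            byte[][] chunks = new byte[60][];
            for (int i = 0; i < chunks.length; i++) {
                chunks[i] = new byte[10 * 1024 * 1024]; // grow the heap by ~600 MB
            }
            System.out.println("committed peak:   " + rt.totalMemory() / (1024 * 1024) + " MB");

            chunks = null;   // drop all references
            System.gc();     // request a collection (a hint, not a guarantee)
            System.out.println("committed after:  " + rt.totalMemory() / (1024 * 1024) + " MB");
        }
    }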
Generally you would not want the JVM to return memory to the OS and later claim it back, as both operations are not cheap.
There are a couple of -XX parameters that may or may not work with your preferred garbage collector, namely:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
Source
I believe you'd need a stop-the-world collector for them to be enforced.
Other JVMs may have their own parameters.
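For example, combining them with the command line from the question (the serial collector is chosen here because it is known to honour these ratios; whether your preferred collector does is worth verifying):

    java -XX:+UseSerialGC -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -Xms512m -Xmx8192m -jar jarfile

Lower ratios make the JVM keep less free headroom after a GC, so the heap shrinks back toward actual usage more aggressively.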
I'd normally not have replied, but the amount of negative/false info here ain't cool.
No, releasing memory back to the OS is a function that is genuinely needed. I think the JVM on Android can probably do this, but I'm not sure.
But most JVMs - including all the Java EE VMs - are simply not interested in doing it.
This is not as simple as it seems. From the OS's view, the VM is a process that has a mapped memory region, a stack or data segment.
In most cases this needs to be a contiguous interval. Memory allocation and release, from the OS's view, happen via a system call with which the process asks the OS for a new segment limit.
What do you do if, for example, you have 2 gigabytes of RAM for your JVM, of which only 500 megs are used, but those 500 megs are dispersed in ten-byte fragments across the whole 2 gigs? A memory-release function would also need a defragmentation step, which would multiply the resource costs of the GC runs.
As Java runs, and Java objects are constructed and destroyed by the garbage collector, the free and allocated memory areas become dispersed across the stack/data segment.
When we look not at Java but at native OS processes, the situation is the same: if you malloc() ten 1-meg blocks and then release the first 9, there is no way to give the memory back to the OS, although newer libraries and OS APIs have seen extensive development on this. Of course, if you later allocate memory again, the allocation will be served from the just-freed regions.
My opinion is that even though this is a bit costly and complex (and a quite large programming task), it is worth its price, and it does not reflect well on our collective programming culture that it hasn't been done for decades in everything, including the Java VMs.