I've been profiling the x64 version of my application because its memory usage has been outrageously high. All of it seems to be coming from the JavaFX MediaPlayer, and I'm correctly releasing listeners and event handlers.
Here is the stark contrast.
The x32 version at start
And now the x64 version at start
The x32 version stays below 256 MB, while the x64 version will shoot over 1 GB; this is while both are left to play through their playlist.
All the code is the same.
JDK: jdk1.8.0_20
JRE: jre1.8.0_20
VM arguments on both
-XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -Xms3670k -Xmx256m -Dsun.java2d.noddraw=true -XX:+UseParallelGC
Same issue occurring on another x64 Java application
Is this a bug or am I overlooking something?
What you are seeing is the memory usage of the entire JVM running your process. The -Xmx256m setting only limits the maximum heap space available for your application to allocate (and the JVM would enforce that). Outside of heap space, the JVM can use additional memory for a host of other purposes (I am sure I will miss a few in the list below):
PermGen, which has now been replaced by the Metaspace. According to the documentation, there is no default limit for this:
-XX:MaxMetaspaceSize=size
Sets the maximum amount of native memory that can be allocated for class metadata. By default, the size is not limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system.
Stack space (memory used = number of threads × stack size); you can control the stack size with the -Xss parameter
Off-heap space (either use of ByteBuffers in your code, or use of third-party libraries like EHCache, which would in turn use off-heap memory); see the sketch right after this list
JNI code
GC (garbage collectors need their own memory, which is again not part of the heap and can vary greatly depending on the collector used and the application memory usage)
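As an illustration of the off-heap point (this is just a sketch; the 128 MB direct buffer is an arbitrary example, not something from your application): a direct ByteBuffer consumes memory the operating system sees but the heap gauges do not, and it shows up in the "direct" buffer pool rather than in heap or non-heap usage.

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // 128 MB direct buffer: it lives outside the Java heap, so it is not
        // counted against -Xmx (it is capped by -XX:MaxDirectMemorySize instead,
        // which by default is roughly the maximum heap size).
        ByteBuffer direct = ByteBuffer.allocateDirect(128 * 1024 * 1024);

        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap used (bytes):     " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("Non-heap used (bytes): " + mem.getNonHeapMemoryUsage().getUsed());

        // The "direct" buffer pool reports the off-heap memory held by direct buffers.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + " pool used (bytes): " + pool.getMemoryUsed());
        }
        System.out.println("Buffer capacity (bytes): " + direct.capacity());
    }
}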
In your case, you are seeing the "almost doubling" of memory use, plus probably a more relaxed Metaspace allocation, when you move from a 32-bit to a 64-bit JVM. Using -XX:MaxMetaspaceSize=128m will probably bring the memory usage down to under 512 MB for the 64-bit JVM.
I don't know your application or how it is implemented.
One possible reason for such a surprising difference could be how much memory can be used before a garbage collection is performed. It is conceivable that a JVM with 64-bit words is given more memory to work with than one with 32-bit words. The garbage collector would then run less often, so there would be more garbage still allocated, even when it is not really necessary or useful.
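One way to check this theory (just a suggestion; I don't know your exact setup) is to enable GC logging on both the x32 and x64 builds and compare how often collections actually run, e.g. with the Java 8 flags:

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps <your other arguments> ...

If the 64-bit build really collects less often, you will see it directly in the intervals between log entries.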
Related
I built a simple application using Spring Boot. When I deploy it to a Linux server with the ZGC garbage collector, it uses a lot of memory. I tried to limit the maximum heap to 500 MB with -Xmx500m, but the Java process still used more than 1 GB. When I used the G1 collector, it only used 350 MB. I don't know why; is this a bug in JDK 11, or is there a problem with my startup parameters?
Runtime environment
operating system: CentOS Linux release 7.8.2003
JDK version: jdk11
springboot version: v2.3.0.RELEASE
Here is my Java startup command
java -Xms128m -Xmx500m \
-XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
-jar app.jar
Here is a screenshot of the memory usage at run time
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201259.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201357.png?raw=true
Here's what happens when you use the default garbage collector
Java startup command
java -Xms128m -Xmx500m \
-jar app.jar
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202442.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202421.png?raw=true
By default, JDK 11 uses the G1 garbage collector. In theory, shouldn't G1 use more memory than ZGC? Why is that not what I'm seeing? Did I misunderstand something? Since I'm a beginner with the JVM, I don't understand why this happens.
ZGC employs a technique known as colored pointers. The idea is to use some free bits in 64-bit pointers into the heap for embedded metadata. However, when dereferencing such pointers, these bits need to be masked, which implies some extra work for the JVM.
To avoid the overhead of masking pointers, ZGC uses a multi-mapping technique. Multi-mapping is when multiple ranges of virtual memory are mapped to the same range of physical memory.
ZGC uses 3 views of Java heap ("marked0", "marked1", "remapped"), i.e. 3 different "colors" of heap pointers and 3 virtual memory mappings for the same heap.
As a consequence, the operating system may report 3x larger memory usage. For example, for a 512 MB heap, the reported committed memory may be as large as 1.5 GB, not counting memory other than the heap. Note: multi-mapping affects the reported memory usage, but physically the heap will still use only 512 MB of RAM. This sometimes leads to the funny effect that the RSS of the process appears larger than the amount of physical RAM.
See also:
ZGC: A Scalable Low-Latency Garbage Collector by Per Lidén
Understanding Low Latency JVM GCs by Jean Philippe Bempel
The JVM uses much more than just the heap memory; read this excellent answer to understand JVM memory consumption better: Java using much more memory than heap size (or size correctly Docker memory limit)
You'll need to go beyond heap inspection and use things like Native Memory Tracking (NMT) to get a clearer picture.
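For example, a minimal way to turn NMT on and read it back (the PID is a placeholder, and note that how ZGC's multi-mapped heap shows up in NMT output has varied across JDK versions, so treat the heap line with some care):

java -XX:NativeMemoryTracking=summary -Xms128m -Xmx500m -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -jar app.jar
jcmd <pid> VM.native_memory summary

This breaks committed memory down into heap, class metadata, thread stacks, code cache, GC structures, and so on.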
I don't know what the particular issue with your application is, but ZGC is often mentioned as being a good fit for large heaps.
It's also a brand-new collector and has received many changes recently; I'd upgrade to JDK 14 if you want to use it (see the "Change Log" here: https://wiki.openjdk.java.net/display/zgc/Main).
This is a result of the throughput-latency-footprint tradeoff. When choosing between these 3 things, you can only pick 2.
ZGC is a concurrent GC with low pause times. Since you don't want to give up throughput either, you trade footprint away in exchange for latency and throughput. So there is nothing surprising in such high memory consumption.
G1 is not a low-pause collector, so you shift that tradeoff towards footprint and get bigger pause times but win some memory.
The amount of OS memory the JVM uses (i.e., the "committed heap") depends on how often the GC runs (and also on whether it uncommits unneeded memory if the app starts to use less), which is tunable. Unfortunately, ZGC isn't (currently) as aggressive about this by default as G1, but both have tuning options you can try; see the example below.
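For example (assuming a recent enough JDK; not all of these flags are available in JDK 11): G1 returns memory at full GCs according to -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio and, from JDK 12 on, can do so periodically via -XX:G1PeriodicGCInterval; ZGC gained uncommit support in JDK 13 with -XX:+ZUncommit and -XX:ZUncommitDelay. A sketch of a ZGC command line along those lines:

java -Xms128m -Xmx500m -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -XX:+ZUncommit -XX:ZUncommitDelay=300 -jar app.jar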
P.S. As others have noted, the RES column in htop is misleading, but the VisualVM chart shows the real picture.
We have a desktop Java Swing application. For shipping it, we need to specify the minimum memory requirements for deploying this application. In the JVM parameters we specify 2GB as max heap size.
Is there any tool for a Windows based machine which can quantify the requirements?
Also, as follow-up question, I would like to know: If we do not specify the max heap size in Java 7, does the JVM still automatically adjust the heap size on the fly before throwing an OutOfMemoryError?
Possible approach:
If you specify that your product works with at most 2 GB of heap, you also have to consider the other parts of memory allocated within the Java virtual machine (non-heap memory such as Metaspace, thread stacks, and GC overhead).
To find out your memory consumption, I suggest you test your application with MemoryMXBean. This bean includes methods such as getHeapMemoryUsage() and getNonHeapMemoryUsage().
Then stress-test your application and periodically check these properties. This way you should get a feel for how much memory your application consumes.
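A minimal sketch of such a periodic check (the class name and the 5-second interval are my own choices, just for illustration; in a real Swing application you would run this on a background thread inside the app rather than as a separate main):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemorySampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
            // Log used and committed sizes; the peaks observed during a stress test
            // are what you would feed into the sizing formula below.
            System.out.printf("heap used=%d MB committed=%d MB | non-heap used=%d MB committed=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20,
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
            Thread.sleep(5000); // sample every 5 seconds
        }
    }
}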
In addition, Microsoft specifies 2 GB as the minimum RAM for Windows 10.
So, your final minimum requirements should be Minimum = MaximumHeap (2GB) + StressTestNonHeap (?) + WindowsMinimum (2GB) + SomeSecurityThreshold (~1GB).
Further approaches:
You could also use VisualVM to check your memory consumption.
Another possibility is to use Java HotSpot Native Memory Tracking (NMT), for which I posted an example on Stack Overflow.
Anything that also informs you about non-heap memory usage is applicable.
Max heap limits:
Regarding your question
Also, on another note, I just wanted to know: if we do not specify the max heap limit with Java 7, does the JVM automatically allocate heap on the fly to adjust before throwing an out-of-memory error?
If you do not specify the max heap size, the JVM will set it automatically depending on the used GC (in Java 7 this should be UseParallelOldGC) and your system. To test this, run java -XX:+PrintVMOptions -XX:+AggressiveOpts -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -version and check what values are set for MaxHeapSize and UseParallelOldGC.
GC considerations:
Also: You probably want to consider using the Garbage-First (G1) GC, which will be the default GC in Java 9. In this question I show that the G1 GC also re-shrinks the heap if it thinks it is practical. This may be useful if your application has memory-intensive and non-memory-intensive parts. This way, the heap may shrink during the non-memory-intensive parts, which most probably won't happen with the ParallelOldGC.
When you run the JVM without specifying a maximum heap size, the server JVM uses 1/4 of main memory, up to 32 GB. If you use the 32-bit Windows client VM, it uses 64 MB or 128 MB.
The best way to determine the required memory consumption is to test your application with different memory sizes. The minimum memory is the lowest memory size you are willing to support. Only you know what you are comfortable supporting.
java -Xms apparently has no effect on the amount of memory the Java process consumes during a run.
I have an app that consumes about 1 GB from the system's point of view. I tried setting -Xms2048m (and -Xmx4096m) and I see absolutely no change in memory consumption.
The HotSpot docs claim the heap size is bounded below by the -Xms value or the default.
The only thing I can think of is that maybe the process cannot grab a contiguous block of memory, so it grabbed all it could and will allocate more later, or maybe Windows is not letting it have that much memory to start with. (64-bit Windows 7)
(I don't need this for anything, it is just something curious I noticed)
The default memory usage that the Windows Task Manager shows you is not what's allocated in the process's virtual memory space. It's how much of that virtual space the process has actually written to and which therefore had to be mapped onto real memory. If you enable the 'Commit Size' column in Task Manager, it will show what is actually considered "used" from the perspective of your process's virtual address space (roughly -Xms + PermGen size + the size of the VM and system machinery itself).
For Java 1, try -ms and -mx.
Since Java 2 you can use -Xms and -Xmx.
In my experience, -ms and -mx also work in Java 2. See http://www.devx.com/tips/Tip/5578
The JVM needs a contiguous region of memory for the heap. This means it reserves the maximum size as virtual memory on startup. This is not as bad as it sounds, because the OS only assigns main memory to the application as it is actually used (not when the virtual memory is reserved).
If you look at the amount of memory used in a tool like VisualVM, you may find that even with an overhead of 150-500 MB, the total is less than the minimum heap size. This is because Java doesn't simply use the minimum size if it has no use for it.
Instead, the minimum size is the point below which the JVM makes only minor attempts to clean up memory (you may see it perform minor GCs). In most cases this means the application will grow to the minimum size fairly quickly. However, a "hello world" program will not use the minimum size.
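You can see the difference between committed and used heap with a small sketch like this one (the class name is arbitrary); run it with the flags in question, e.g. java -Xms2048m -Xmx4096m HeapStartup, and compare the output with what the task manager reports:

public class HeapStartup {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                // upper bound set by -Xmx
        long committed = rt.totalMemory();        // committed heap, at least -Xms at startup
        long used = committed - rt.freeMemory();  // what live objects actually occupy
        System.out.printf("max=%d MB, committed=%d MB, used=%d MB%n",
                max >> 20, committed >> 20, used >> 20);
    }
}

The committed figure reflects -Xms, while the OS-level "memory usage" column can stay well below it until the pages are actually touched.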
maybe windows is not letting it have that much memory to start with
The JVM will fail to start if it cannot reserve the maximum size as a contiguous block. (This was a common problem on 32-bit Windows, where the limit could be 1.5 GB or as low as 1.2 GB.)
We have a 32-bit JVM running under 64-bit RHEL 5 on a box which has plenty of memory (32 GB). For various reasons, this process requires a pretty large managed heap and PermGen space; currently, it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have started seeing JVM crashes recently because it - seemingly - ran out of native memory (it could not create native threads, or failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap, as when those crashes happened the managed heap was ~50-70% full.
I know that the memory reserved for the managed heap and PermGen is close to 2.5 GB, which leaves no more than 0.5 GB for the JVM itself, BUT:
- I don't understand why 0.5 GB isn't enough for the JVM, even if there is constant GCing going on.
- The real question is this: when I connect to the process using jconsole, it currently reports
Committed virtual memory:
3,211,180 kbytes
That is more than 3 GB. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06 GB) of memory available, but when it tries to go over 3 GB the memory allocation fails.
Any ideas on:
a) why this happens
b) how it is possible to avoid this
Thanks.
Mate
There is a lot of overhead in a typical VM that is not counted in the VM's own accounting, because it is essentially taken up by the native parts of the process. For example, .so files mapped in to provide native code for system libraries are not counted in the base VM accounting. A typical shared library is mapped into the top GB of memory, so if you try to allocate memory into that region you will be denied, because the allocation would overrun the shared libraries' memory region. On most OSes, memory allocation is performed by a simple 'break' pointer that is raised when you ask for more memory; when the new break would conflict with other uses, the allocation simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; that is the fundamental challenge. It is trivial to get a 64-bit VM that lets you use far more memory than would otherwise be accessible; it is simply the right tool in this situation.
If you are using a 32-bit process, there is a high probability that you are hitting the effective address space limit of a 32-bit process. On Windows, this is a maximum of about 3 GB; anything above this is reserved for I/O space and the kernel. You can move this boundary, but doing so tends to break applications and drivers that are designed for the 32-bit OS.
On Linux, you end up with roughly 3 GB of usable addressable memory per process; the rest is used up by things like the kernel and mapped-in shared libraries. The limit is referred to as the 'address space limit', and I presume it can be tuned.
How to avoid it? For the most part, you can't; it's an inherent limitation of the 32-bit address space and of having the kernel and I/O space share that address space with the process on a 32-bit OS.
With a 64-bit OS you have (most of) the 64-bit address space to play with, which is vastly more than you will ever need.
When you start a JVM, it reserves its maximum size immediately. How much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which you have allocated about 2.3 GB to the heap and PermGen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial (use a 64-bit JVM); I am assuming you don't have any shared libraries that are only available in 32-bit. However, if you do have additional shared libraries, they can easily be using hundreds of MB.
I'm using Eclipse 3.6 with the latest Sun Java 6 on Linux (64-bit) with a large number of large projects. In some special circumstances (SVN updates, for example) Eclipse needs up to 1 GB of heap. But most of the time it only needs 350 MB. When I enable the heap status panel I see this most of the time:
350M of 878M
I start Eclipse with these settings: -Xms128m -Xmx1024m
So most of the time hundreds of MB are simply wasted and are only used rarely, when memory usage peaks for a short time. I don't like that at all, and I want Eclipse to release the memory back to the system so I can use it for other programs.
If Eclipse needs more memory while there is not enough free RAM, then Linux can swap out other running programs; I can live with that. I heard there is a -XX:MaxHeapFreeRatio option, but I never figured out what values I have to use to make it work. No value I tried ever made a difference.
So how can I tell Eclipse (Or Java) to release unused heap?
Found a solution. I switched Java to use the G1 garbage collector, and now the HeapFreeRatio parameters work as intended. So I use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory is actually released back to the operating system.
You can go to Preferences -> General and check "Show heap status". This activates a nice view of your heap in the corner of Eclipse, something like this:
If you click the trash bin, it will try to run garbage collection and return the memory.
Java's heap is nothing more than a big data structure managed within the JVM process's heap space. The two heaps are logically separate entities even though they occupy the same memory.
The JVM is at the mercy of the host system's implementation of malloc(), which allocates memory from the system using brk(). On Linux systems (Solaris, too), memory allocated for the process heap is almost never returned, largely because it becomes fragmented and the heap must be contiguous. This means that memory allocated to the process will increase monotonically, and the only way to keep the size down is not to allocate it in the first place.
-Xms and -Xmx tell the JVM how to size the Java heap ahead of time, which causes it to allocate process memory. Java can garbage collect until the sun burns out, but that cleanup is internal to the JVM and the process memory backing it doesn't get returned.
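A small sketch that makes this visible (the allocation sizes are arbitrary): the used heap drops once the allocations become garbage, but the process footprint you see in the OS generally does not shrink accordingly (whether the committed heap itself shrinks depends on the collector and the free-ratio settings mentioned above).

import java.util.ArrayList;
import java.util.List;

public class RetainDemo {
    public static void main(String[] args) throws Exception {
        List<byte[]> junk = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            junk.add(new byte[1024 * 1024]); // grow the heap by roughly 200 MB
        }
        print("after allocating");
        junk.clear();                        // drop all references
        System.gc();                         // request a collection (only a hint)
        print("after GC");
        Thread.sleep(60_000);                // now look at the process in top/Task Manager
    }

    private static void print(String label) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("%s: used=%d MB, committed=%d MB%n", label,
                (rt.totalMemory() - rt.freeMemory()) >> 20, rt.totalMemory() >> 20);
    }
}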
Elaboration from comment below:
The standard way for a program written in C (notably the JVM running Eclipse for you) to allocate memory is to call malloc(3), which uses the OS-provided mechanism for allocating memory to the process and then manages individual allocations within that larger allocation. The details of how malloc() and free() work are implementation-specific.
On most flavors of Unix, a process gets exactly one data segment, which is a contiguous region of memory that has pointers to the start and end. The process can adjust the size of this segment by calling brk(2) and increasing the end pointer to allocate more memory or decreasing it to return it to the system. Only the end can be adjusted. This means that if your implementation of malloc() enlarges the data segment, the corresponding implementation of free() can't shrink it unless it determines that there's space at the end that's not being used. In practice, a humongous chunk of memory you allocated with malloc() rarely winds up at the very end of the data segment when you free() it, which is why processes tend to grow monotonically.