How does the JVM calculate the committed heap memory?

Based on the documentation: https://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryUsage.html
committed - represents the amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine. The amount of committed memory may change over time (increase or decrease). The Java virtual machine may release memory to the system and committed could be less than init. committed will always be greater than or equal to used.
But the question is: how does the JVM calculate the committed memory?

Here you can find a little more detail, but it does not explain exactly how the committed heap space is increased:
There is also a committed heap size which acts as a "high water
mark", moving up once the JVM cannot free up space even on old
collection / young collection to make room. In this case, the
committed heap size is increased. This cycle repeats until the
committed heap size matches the maximum heap size, the maximum space
allocatable.
https://support.mulesoft.com/s/article/Java-JVM-memory-allocation-pattern-explained
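For reference, these four values can be read at runtime through the MemoryMXBean from java.lang.management; a minimal sketch (the class name is mine, and the printed numbers will vary with platform and flags):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageProbe {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();

        // init:      memory the JVM initially requests from the OS (roughly -Xms)
        // used:      memory currently occupied by objects
        // committed: memory guaranteed to be available to the JVM (always >= used)
        // max:       upper bound the JVM may ever commit (roughly -Xmx)
        System.out.printf("init      = %,d bytes%n", heap.getInit());
        System.out.printf("used      = %,d bytes%n", heap.getUsed());
        System.out.printf("committed = %,d bytes%n", heap.getCommitted());
        System.out.printf("max       = %,d bytes%n", heap.getMax());
    }
}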

Related

Does JVM metaspace always trigger GC when resizing

For a JVM after Java 8:
When the size of the metaspace exceeds -XX:MetaspaceSize, it will trigger a GC.
No matter how you configure -XX:MetaspaceSize and -XX:MaxMetaspaceSize, the initial size of the metaspace is usually a fixed value (20.8M) on a 64-bit server.
The JVM will resize the metaspace automatically when it gets close to the current capacity.
Then, if -XX:MetaspaceSize is 20G, for example, the current size of the metaspace is 18M, and a large number of new objects (about 100M) must be allocated, the JVM must resize the metaspace for these new objects. Will the JVM trigger a full GC before resizing?
First of all, "the size of the metaspace" is ambiguous, and thus meaningless without context. There are at least five metrics: reserved, committed, capacity, and used memory, as described in this answer, and the high-water mark, also known as capacity_until_gc.
Metaspace is not just one contiguous region of memory, so it does not resize in the common sense. Instead, when allocation happens, one or more of the above metrics changes.
On the fastest path a block of metadata is allocated from the current chunk. used memory increases in this case, and that's it.
If there is not enough room in the current chunk, JVM searches for a possibly free existing chunk. If it succeeds in reusing chunks, capacity increases. No GC happens until this point.
If there are no free chunks, JVM tries to commit more memory, unless the new committed size would exceed capacity_until_gc.
If capacity_until_gc threshold is reached, JVM triggers a GC cycle.
If GC does not free enough memory, the high-water mark is increased so that another Virtual Space will be allocated.
After GC, the high-water mark value is adjusted based on the following JVM flags:
-XX:MinMetaspaceFreeRatio (used to calculate how much free space is desirable in the metaspace capacity to decide how much to increase the HWM);
-XX:MaxMetaspaceFreeRatio (used to decide how much free space is desirable in the metaspace capacity before decreasing the HWM);
-XX:MinMetaspaceExpansion (the minimum expansion of Metaspace in bytes);
-XX:MaxMetaspaceExpansion (the maximum expansion of Metaspace without full GC).
TL;DR It's not that simple. JVM can definitely commit more Metaspace memory without triggering GC. However, when HWM is reached, GC is triggered and HWM is recomputed according to the ergonomics policy.
You can configure the Metaspace size, but the JVM can increase or decrease it depending on the platform.
See Oracle docs.
-XX:MetaspaceSize=size
Sets the size of the allocated class metadata space that will trigger a garbage collection the first time it is exceeded. This threshold for a garbage collection is increased or decreased depending on the amount of metadata used. The default size depends on the platform.
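To watch these metrics on your own JVM (Java 8+), you can read the "Metaspace" memory pool through MemoryPoolMXBean; a minimal sketch (the class name is mine; the pool name is HotSpot-specific):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspaceProbe {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On HotSpot, the non-heap pool holding class metadata is called "Metaspace"
            if ("Metaspace".equals(pool.getName())) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("Metaspace used      = %,d bytes%n", usage.getUsed());
                System.out.printf("Metaspace committed = %,d bytes%n", usage.getCommitted());
                // max is -1 when -XX:MaxMetaspaceSize is not set (effectively unlimited)
                System.out.printf("Metaspace max       = %,d bytes%n", usage.getMax());
            }
        }
    }
}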

Java JVM parameter Xms doesn't take effect immediately

I run my Java application through Tomcat and set -Xms1024m; however, I found that the size of the Java heap was just 200~300m after starting the application. I think Xms means the minimum heap size, so why doesn't the application reach the minimum heap size of 1024m immediately after startup?
Edit: BTW, the JVM is HotSpot 7.0.
It seems the GC does this in the method HeapRegion::setup_heap_region_size(uintx min_heap_size) from the C++ file \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\gc_implementation\g1\heapRegion.cpp, and in the method parse_each_vm_init_arg from the file \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\runtime\arguments.cpp. Perhaps someone familiar with the JVM GC source code can help analyze it.
It only shows you the space used. The heap size determines the capacity of each region. Note: the actual size available is less, as you have two survivor spaces and only one is active in normal operation. E.g. say you have survivor spaces of 50 MB each; a 1 GB heap will report only 950 MB as available.
When you set the minimum heap size, this doesn't mean that space has been used yet. However, you might not get a GC until this heap size is reached (or until the Eden, which is a portion of the heap, is full).
Say you have a 1 GB heap and the Eden space is 100 MB to start with. You will get a GC once the Eden fills up, even though little tenured space is used. Eventually the Eden grows, the tenured space starts to fill, and once it is full, the JVM might grow the heap to more than the minimum heap size you gave it.
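The difference between committed and used heap is easy to see with Runtime: totalMemory() reports the committed heap, freeMemory() the unused part of it, and maxMemory() the -Xmx ceiling. A minimal sketch (the class name is mine; run it with -Xms1024m to compare):

public class XmsCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();          // heap currently committed by the JVM
        long used = committed - rt.freeMemory();    // portion of the committed heap actually in use
        long max = rt.maxMemory();                  // upper limit, roughly -Xmx

        System.out.printf("committed = %,d MB%n", committed / (1024 * 1024));
        System.out.printf("used      = %,d MB%n", used / (1024 * 1024));
        System.out.printf("max       = %,d MB%n", max / (1024 * 1024));
    }
}

With -Xms1024m the committed value should be close to 1024 MB (minus an inactive survivor space), while used stays at the 200~300 MB a monitoring tool shows.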

Impact of heap parameters on GC/performance?

In most places on the net, I find the below info about heap parameters:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
Here is my understanding/question when I specify the parameters -Xms512M -Xmx2048M:
-Xms:- My understanding is that if my Java process actually needs only 200M, then even with -Xms512M the Java process will still be assigned only 200M (the actual memory required) instead of 512M. But if I already know that my application is going to take this 512M of memory on startup, then specifying less will have an impact on performance, as the heap will need to be resized anyway, which is a costly operation.
Per a discussion with my colleague, by default GC will trigger at 60% of the Xms value. Is that correct? If yes, is it a minor GC or a full GC that depends on the Xms value?
Update on Xms:-
This seems to be true after reading JVM heap parameters, but again: is the value 60% by default, and is it a minor or a full GC that depends on the Xms value?
-Xmx:- My understanding is that with -Xmx2048M, the Java process is actually going to reserve 2048M of memory for its use from the OS, so that another process cannot be given its share. If the Java process ever needs more than 2048M of memory, then an out-of-memory error will be thrown.
Also, I believe there is some relation between the Full GC trigger and the value of -Xmx, because what I observed in jconsole is that when memory reaches close to 70% of Xmx, a Full GC happens. Is that correct?
Configuration:- I am using a Linux box (64-bit JVM 8) with the default GC, i.e. Parallel GC.
GC is not triggered based just on the Xms or Xmx value.
Heap = New + Old generations
The heap size (which is initially set to Xms) is split into 2 generations - New (aka Young) and Old (aka Tenured). The New generation is by default 1/3 of the total heap size, while the Old generation is 2/3 of the heap size. This can be adjusted using the JVM parameter NewRatio; its default value is 2.
The Young generation is further divided into Eden and 2 Survivor spaces. The default ratios of these 3 spaces are: 3/4, 1/8, 1/8.
Side note: This is about the Parallel GC collectors. G1, the newer GC algorithm, divides the heap space differently.
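Using the ratios above, a back-of-the-envelope calculation for a hypothetical 1 GB heap with the default NewRatio=2 might look like this (a sketch of the arithmetic only; the actual sizing is done by the JVM and can differ):

public class GenerationSizes {
    public static void main(String[] args) {
        double heapMb = 1024.0;       // assume a 1 GB heap (-Xms = -Xmx = 1g)
        int newRatio = 2;             // default: Old generation is 2x the Young generation

        double youngMb = heapMb / (newRatio + 1);   // Young = 1/3 of the heap
        double oldMb = heapMb - youngMb;            // Old   = 2/3 of the heap

        // Using the split quoted above: Eden 3/4 of Young, each Survivor 1/8 of Young
        double edenMb = youngMb * 3 / 4;
        double survivorMb = youngMb / 8;

        System.out.printf("young    = %.1f MB%n", youngMb);         // roughly 341 MB
        System.out.printf("old      = %.1f MB%n", oldMb);           // roughly 683 MB
        System.out.printf("eden     = %.1f MB%n", edenMb);          // roughly 256 MB
        System.out.printf("survivor = %.1f MB each%n", survivorMb); // roughly 43 MB
    }
}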
Minor GC
All new objects are allocated in the Eden space (except massive ones, which are stored directly in the Old generation). When the Eden space becomes full, a Minor GC is triggered. Objects which survive multiple minor GCs are promoted to the Old generation (the default is 15 cycles, which can be changed with the JVM parameter MaxTenuringThreshold).
Major GC
Unlike the concurrent collector, where a Major GC is triggered based on used space (by default 70%), the parallel collectors calculate the threshold based on the 3 goals mentioned below.
Parallel Collector Goals
Max GC pause time - the maximum time spent doing GC
Throughput - the percentage of time spent in GC vs. the application. Default: 1%
Footprint - the maximum heap size (Xmx)
Thus by default, the Parallel Collector tries to spend at most 1% of the total application running time on garbage collection.
More details here
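To relate the throughput goal to a running application, the collector MXBeans expose cumulative GC time, which can be compared against JVM uptime; a minimal sketch (the class name is mine; the collector names, e.g. "PS Scavenge" / "PS MarkSweep" for the Parallel collector, are JVM-specific):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeShare {
    public static void main(String[] args) {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Cumulative collection count and elapsed time for each collector (young and old)
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            gcMillis += Math.max(0, gc.getCollectionTime());  // -1 means "undefined"
        }
        long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("time in GC: %.2f%% of %d ms uptime%n",
                100.0 * gcMillis / uptimeMillis, uptimeMillis);
    }
}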
Xms to Xmx
During startup the JVM creates a heap of size Xms but reserves extra space (up to Xmx) so it can grow later. That reserved space is called Virtual Space. Note that it just reserves the space and does not commit it.
2 parameters decide when the heap size grows (or shrinks) between Xms and Xmx.
MinHeapFreeRatio (default: 40%): Once the free heap space dips below 40%, a Full GC is triggered, and the heap size grows by 20%. Thus, the heap size can keep growing incrementally until it reaches Xmx.
MaxHeapFreeRatio (default: 70%): On the flip side, if the free heap space exceeds 70%, then the heap size is reduced by 5% incrementally during each GC until it reaches Xms.
These parameters can be set during startup. Read more about it here and here.
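If you want to check which values the running JVM actually uses for these flags, the HotSpot diagnostic MXBean can read them back; a minimal sketch (the class name is mine; this is a com.sun.management API, so it is HotSpot-specific):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class FreeRatioFlags {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // Prints the effective values, whether defaults or overridden with -XX:... at startup
        for (String flag : new String[] {"MinHeapFreeRatio", "MaxHeapFreeRatio", "NewRatio"}) {
            System.out.println(flag + " = " + hotspot.getVMOption(flag).getValue());
        }
    }
}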
PS: JVM GC is a fascinating topic and I would recommend reading this excellent article to understand it in depth. All the JVM tuning parameters can be found here.

When does out of memory happen?

Recently, when running our application, we met an out of memory exception.
This is the heap dump right before the exception happened
Heap
def new generation total 1572864K, used 366283K [0x00000006b0000000, 0x000000071aaa0000, 0x000000071aaa0000)
eden space 1398144K, 13% used [0x00000006b0000000, 0x00000006bbb12d40, 0x0000000705560000)
from space 174720K, 100% used [0x0000000710000000, 0x000000071aaa0000, 0x000000071aaa0000)
to space 174720K, 0% used [0x0000000705560000, 0x0000000705560000, 0x0000000710000000)
tenured generation total 3495296K, used 2658714K [0x000000071aaa0000, 0x00000007f0000000, 0x00000007f0000000)
the space 3495296K, 76% used [0x000000071aaa0000, 0x00000007bcf06ba8, 0x00000007bcf06c00, 0x00000007f0000000)
compacting perm gen total 42048K, used 41778K [0x00000007f0000000, 0x00000007f2910000, 0x0000000800000000)
the space 42048K, 99% used [0x00000007f0000000, 0x00000007f28ccb80, 0x00000007f28ccc00, 0x00000007f2910000)
No shared spaces configured.
It looks like the old gen was almost full (76%). I assume that when it finally reaches 100%, an OOM happens. However, it looks like eden is only at 13%.
Can someone explain why an OOM happens even if there is still some space in the young gen?
There are a dozen different reasons why the JVM may throw OutOfMemoryError, including:
Java heap space: when trying to allocate an object or an array larger than the maximum contiguous free block in any of the heap generations;
GC overhead limit exceeded: when the proportion of time JVM spends doing garbage collection becomes too high (see GCTimeLimit, GCHeapFreeLimit);
PermGen space (before Java 8) or Metaspace (since Java 8): when the amount of class metadata exceeds MaxPermSize or MaxMetaspaceSize;
Requested array size exceeds VM limit: when trying to allocate an array with length larger than Integer.MAX_VALUE - 2;
Unable to create new native thread: when reaching the OS limit of user processes (see ulimit -u) or when there is not enough virtual memory to reserve space for thread stack;
Direct buffer memory: when the size of all direct ByteBuffers exceeds MaxDirectMemorySize or when there is no virtual memory available to satisfy direct buffer allocation;
When the JVM cannot allocate memory for its internal structures, either because it has run out of available virtual memory or because a certain OS limit has been reached (e.g. the maximum number of memory map areas);
When JNI code fails to allocate some native resource;
Etc. Not to mention that an application can throw OutOfMemoryError itself at any time just because a developer decides to.
To find out the reason for your particular error, you should at least look at the error message, the stack trace, and the GC logs.
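Two of the most common variants are easy to reproduce on purpose, which also shows why the exact error message matters; a sketch (the class name is mine; run with a small heap, e.g. -Xmx64m, to hit the first case quickly):

public class OomDemo {
    public static void main(String[] args) {
        if (args.length > 0 && args[0].equals("array")) {
            // On HotSpot this throws "Requested array size exceeds VM limit":
            // the requested length is above the maximum the VM supports,
            // regardless of how much free heap there is
            int[] tooLong = new int[Integer.MAX_VALUE];
            System.out.println(tooLong.length);
        } else {
            // "Java heap space": a single allocation larger than the whole heap
            byte[] huge = new byte[512 * 1024 * 1024];   // 512 MB with -Xmx64m
            System.out.println(huge.length);
        }
    }
}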

Under which circumstances will the JVM decide to grow the size of the heap?

A JVM application runs on the Oracle HotSpot JVM; it starts up with default JVM settings, but with 100MB of initial heap size and 1GB of maximum heap size.
Under which circumstances will the JVM decide to grow the current heap size, instead of trying a GC?
The HotSpot JVM continuously monitors allocation rates and object lifetimes. It tries to achieve two key goals:
let short-lived objects die in eden
promote long-lived objects to the old generation in time, to prevent unnecessary copying between survivor spaces
In a nutshell, you can describe it like this: HotSpot has a configured threshold which indicates what percentage of the total allocated heap has to be free after running the garbage collector. For example, if this threshold is configured at 70% and after running a full GC the heap usage is 80% (so only 20% is free), then additional memory will be allocated to hit the threshold. Of course a bigger heap means longer pauses, while a smaller heap means more frequent collections.
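As a rough illustration of that free-ratio logic (a sketch of the arithmetic only, using the hypothetical numbers from the example above, not the exact HotSpot ergonomics code):

public class HeapGrowthSketch {
    public static void main(String[] args) {
        // Hypothetical numbers matching the example above
        double committedMb = 100;       // current committed heap
        double usedAfterGcMb = 80;      // live data left after a full GC
        double minFreeRatio = 0.70;     // threshold from the example: 70% of the heap should be free

        double freeRatio = 1.0 - usedAfterGcMb / committedMb;   // only 20% free here
        if (freeRatio < minFreeRatio) {
            // Grow the committed size until the live data fits into (1 - minFreeRatio) of it.
            // (The actual HotSpot default for MinHeapFreeRatio is 40%, as noted earlier on this page.)
            double desiredMb = usedAfterGcMb / (1.0 - minFreeRatio);
            System.out.printf("grow committed heap from %.0f MB to about %.0f MB (capped at -Xmx)%n",
                    committedMb, desiredMb);   // about 267 MB in this example
        }
    }
}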
But you have to remember that the JVM is very complex, and you can change this behaviour, for example by using the flags:
AdaptiveSizePausePolicy, which will pick the heap size that achieves the shortest pauses
AdaptiveSizeThroughPutPolicy, which will pick the heap size that achieves the highest throughput
GCTimeLimit and GCTimeRatio, which set the time spent in application execution
The number of objects occupying the heap increases while garbage collection cannot reclaim space. When objects cannot be collected as garbage because they are still in use by the current process, the JVM needs to increase its heap size towards its maximum to allow new objects to be created.
