In most places on the net, I find the following info about heap parameters:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
Here is my understanding/question when I specify the parameters -Xms512M -Xmx2048M:
-Xms :- My understanding is that if my Java process actually needs only 200M, then even with -Xms512M the process will still be assigned only 200M (the memory actually required) instead of 512M. But if I already know that my application is going to take 512M of memory on startup, then specifying less than that will have an impact on performance, because the heap will need to be resized anyway, which is a costly operation.
Per a discussion with my colleague, by default GC will trigger at 60% of the Xms value. Is that correct? If yes, is it minor GC or full GC that depends on the Xms value?
Update on Xms:-
This seems to be true after reading JVM heap parameters, but again: is the value 60% by default, and is it minor or full GC that depends on the Xms value?
-Xmx :- My understanding is that with -Xmx2048M, the Java process is actually going to reserve 2048M of memory for its use from the OS, so that another process cannot be given that share. If the Java process somehow needs more than 2048M of memory, then an OutOfMemoryError will be thrown.
Also, I believe there is some relation between the Full GC trigger and the value of -Xmx, because what I observed in jconsole is that when memory gets close to 70% of Xmx, a Full GC happens. Is that correct?
Configuration :- I am using a Linux box (64-bit JVM 8) with the default GC, i.e. Parallel GC.
GC is not triggered based on just Xms or Xmx value.
Heap = New + Old generations
The heap (whose size is initially set to Xms) is split into 2 generations - New (aka Young) and Old (aka Tenured). The New generation is by default 1/3rd of the total heap size, while the Old generation is 2/3rd. This can be adjusted using the JVM parameter NewRatio; its default value is 2.
The Young generation is further divided into Eden and 2 Survivor spaces. The default ratios of these 3 spaces are 3/4th, 1/8th and 1/8th.
Side note: this applies to the Parallel GC collectors. G1, the newer GC algorithm, divides the heap space differently.
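If you want to see how these ratios play out on your own JVM, the java.lang.management API exposes the per-generation memory pools. A minimal sketch (the class name HeapLayoutProbe is just illustrative; the pool names assume the Parallel collector):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapLayoutProbe {
        public static void main(String[] args) {
            // With the Parallel collector the heap pools are typically named
            // "PS Eden Space", "PS Survivor Space" and "PS Old Gen".
            // Note: getMax() returns -1 for pools with an undefined maximum.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%-25s committed=%,d KB  max=%,d KB%n",
                        pool.getName(), u.getCommitted() / 1024, u.getMax() / 1024);
            }
        }
    }

Running it with, say, java -XX:+UseParallelGC -Xms512m -Xmx512m -XX:NewRatio=2 HeapLayoutProbe should show the Old generation committed at roughly twice the young generation (Eden plus the two survivor spaces).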
Minor GC
All new objects are allocated in the Eden space (except massive ones, which are stored directly in the Old generation). When the Eden space becomes full, a Minor GC is triggered. Objects which survive multiple minor GCs are promoted to the Old generation (the default is 15 cycles, which can be changed using the JVM parameter MaxTenuringThreshold).
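You can watch this in action by running a small allocation loop under GC logging; most of the arrays below die young, so the log should show frequent young-generation collections and little promotion. This is only a sketch (ChurnDemo is a made-up name), run for example with java -verbose:gc -XX:+UseParallelGC -Xmx256m ChurnDemo:

    import java.util.ArrayList;
    import java.util.List;

    public class ChurnDemo {
        public static void main(String[] args) {
            List<byte[]> keep = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) {
                byte[] block = new byte[16 * 1024]; // short-lived: dead by the next minor GC
                if (i % 10_000 == 0) {
                    keep.add(block);                // a few survivors that may eventually be tenured
                }
            }
            System.out.println("kept " + keep.size() + " arrays");
        }
    }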
Major GC
Unlike the concurrent collector, where Major GC is triggered based on used space (by default 70%), the parallel collectors calculate the threshold based on the 3 goals mentioned below.
Parallel Collector Goals
Max GC pause time - the maximum time spent doing GC
Throughput - the percentage of time spent in GC vs. the application. Default: 1%
Footprint - Maximum heap size (Xmx)
Thus, by default, the Parallel collector tries to spend at most 1% of the total application running time in garbage collection.
More details here
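For reference, that 1% figure comes from the relationship between the throughput goal and -XX:GCTimeRatio (default 99 for the Parallel collector); a tiny sketch of the arithmetic:

    public class ThroughputGoal {
        public static void main(String[] args) {
            // Fraction of total time the Parallel collector is allowed to spend in GC:
            // 1 / (1 + GCTimeRatio). With the default GCTimeRatio of 99 this is 1%.
            int gcTimeRatio = 99; // override with -XX:GCTimeRatio=<n>
            double gcFraction = 1.0 / (1 + gcTimeRatio);
            System.out.printf("GC time goal: %.1f%% of total run time%n", gcFraction * 100);
        }
    }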
Xms to Xmx
During startup the JVM creates a heap of size Xms but reserves the extra space (up to Xmx) so that it can grow later. That reserved space is called Virtual space. Note that it just reserves the space and does not commit it.
2 parameters decide when heap size grows (or shrinks) between Xms and Xmx.
MinHeapFreeRatio (default: 40%): once the free heap space after a collection dips below 40%, the heap size is grown (by roughly 20%). Thus, the heap size can keep growing incrementally until it reaches Xmx.
MaxHeapFreeRatio (default: 70%): on the flip side, once the free heap space after a collection exceeds 70%, the heap size is reduced (by roughly 5%) at each GC until it shrinks back to Xms.
These parameters can be set during startup. Read more about it here and here.
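A quick way to observe this growth between Xms and Xmx is to retain memory gradually and print the committed heap, which Runtime.totalMemory() reports. A rough sketch (the class name HeapGrowthDemo is hypothetical), run for example with java -Xms64m -Xmx512m HeapGrowthDemo:

    import java.util.ArrayList;
    import java.util.List;

    public class HeapGrowthDemo {
        public static void main(String[] args) {
            List<byte[]> live = new ArrayList<>();
            Runtime rt = Runtime.getRuntime();
            for (int i = 1; i <= 300; i++) {
                live.add(new byte[1024 * 1024]); // retain ~1 MB per iteration
                if (i % 50 == 0) {
                    // totalMemory() = committed heap; maxMemory() roughly corresponds to Xmx
                    System.out.printf("live~%d MB committed=%d MB max=%d MB%n",
                            i, rt.totalMemory() >> 20, rt.maxMemory() >> 20);
                }
            }
        }
    }

The committed figure should start near Xms and step upward as the free-space targets above force the heap to expand.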
PS: JVM GC is a fascinating topic and I would recommend reading this excellent article to understand it in depth. All the JVM tuning parameters can be found here.
For JVM after Java 8
When the size of the metaspace exceeds -XX:MetaspaceSize, it will trigger a GC.
No matter how you configure -XX:MetaspaceSize and -XX:MaxMetaspaceSize, the initial size of the metaspace is usually a fixed value (20.8M) on a 64-bit server.
The JVM will resize the metaspace automatically when usage gets close to the current capacity.
Then, if -XX:MetaspaceSize is 20G, for example, the current size of the metaspace is 18M, and a large amount of new class metadata (about 100M) must be allocated, the JVM must resize the metaspace for that metadata. Will the JVM trigger a full GC before resizing?
First of all, "the size of the metaspace" is ambiguous, and thus meaningless without the context. There are at least five metrics: reserved, committed, capacity and used memory as described in this answer, and the high-water mark, also known as capacity_until_gc.
Metaspace is not just one contiguous region of memory, so it does not resize in the common sense. Instead, when allocation happens, one or more of the above metrics changes.
On the fastest path, a block of metadata is allocated from the current chunk. The used memory increases in this case, and that's it.
If there is not enough room in the current chunk, the JVM searches for an existing free chunk. If it succeeds in reusing chunks, the capacity increases. No GC happens up to this point.
If there are no free chunks, the JVM tries to commit more memory, unless the new committed size would exceed capacity_until_gc.
If the capacity_until_gc threshold is reached, the JVM triggers a GC cycle.
If GC does not free enough memory, the high-water mark is increased so that another Virtual Space will be allocated.
After a GC, the high-water mark value is adjusted based on the following JVM flags:
-XX:MinMetaspaceFreeRatio (used to calculate how much free space is desirable in the metaspace capacity to decide how much to increase the HWM);
-XX:MaxMetaspaceFreeRatio (used to decide how much free space is desirable in the metaspace capacity before decreasing the HWM);
-XX:MinMetaspaceExpansion (the minimum expansion of Metaspace in bytes);
-XX:MaxMetaspaceExpansion (the maximum expansion of Metaspace without full GC).
TL;DR It's not that simple. The JVM can definitely commit more Metaspace memory without triggering GC. However, when the HWM is reached, a GC is triggered and the HWM is recomputed according to the ergonomics policy.
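If you want to see these Metaspace metrics from inside a running JVM, the memory pool MXBeans expose the used, committed and max values. A small sketch (MetaspaceProbe is a made-up name; max is reported as -1 when -XX:MaxMetaspaceSize is not set):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class MetaspaceProbe {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // Typically "Metaspace" and "Compressed Class Space" on a 64-bit JVM
                if (pool.getName().contains("Metaspace") || pool.getName().contains("Class Space")) {
                    MemoryUsage u = pool.getUsage();
                    System.out.printf("%s: used=%,d KB committed=%,d KB max=%,d KB%n",
                            pool.getName(), u.getUsed() / 1024,
                            u.getCommitted() / 1024, u.getMax() / 1024);
                }
            }
        }
    }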
You can configure the metaspace size, but the JVM can increase or decrease it depending on the platform.
See Oracle docs.
-XX:MetaspaceSize=size
Sets the size of the allocated class metadata space that will trigger a garbage collection the first time it is exceeded. This threshold for a garbage collection is increased or decreased depending on the amount of metadata used. The default size depends on the platform.
I am profiling a Java process with VisualVM and I found out that while my used heap remains constantly below 100 MB, the heap size keeps increasing to a point at which it is 10 times bigger than the used heap!
Reading from the docs:
By default, the virtual machine grows or shrinks the heap at each
collection to try to keep the proportion of free space to live objects
at each collection within a specific range. This target range is set
as a percentage by the parameters -XX:MinHeapFreeRatio= and
-XX:MaxHeapFreeRatio=, and the total size is bounded below by -Xms and above by -Xmx.
So, considering that MinHeapFreeRatio and MaxHeapFreeRatio are set to 40% and 70% respectively, why is this happening?
I run my Java application through Tomcat and set -Xms1024m; however, I found the size of the Java heap to be just 200~300m after starting the application. I thought Xms means the minimum heap size, so why doesn't the application reach the minimum heap size of 1024m immediately after startup?
Edit: BTW, the JVM is HotSpot 7.0.
It seems the GC does this in the method HeapRegion::setup_heap_region_size(uintx min_heap_size) from the C++ file \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\gc_implementation\g1\heapRegion.cpp, and in the method parse_each_vm_init_arg from the file \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\runtime\arguments.cpp. Could someone familiar with the JVM GC source code help analyse this?
It only shows you the space used. The heap size determines the capacity of each region. Note: the actual size available is less, as you have two survivor spaces and only one is active in normal operation. E.g. if you have survivor spaces of 50 MB each, a 1 GB heap will report only 950 MB as available.
When you set the minimum heap size, this doesn't mean that space has been used yet. However, you might not get a GC until this heap size is reached (or until the Eden, which is a portion of the heap, is full).
Say you have a 1 GB heap and the Eden space is 100 MB to start with. You will get a GC once Eden fills up, even though little tenured space is used. Eventually Eden grows and the tenured space starts to fill; once it is full, the JVM might grow the heap beyond the minimum heap size you gave it.
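To check which collections actually ran before the heap grew, you can ask the GarbageCollectorMXBeans for their counters after some allocation churn. A rough sketch (GcCountProbe is a hypothetical name; with the Parallel collector the beans are usually "PS Scavenge" for minor GC and "PS MarkSweep" for full GC):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcCountProbe {
        public static void main(String[] args) {
            long sink = 0;
            for (int i = 0; i < 5_000_000; i++) {
                byte[] block = new byte[1024]; // churn: fills Eden repeatedly
                block[0] = (byte) i;
                sink += block[0];
            }
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: count=%d time=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            System.out.printf("committed heap=%d MB (sink=%d)%n",
                    Runtime.getRuntime().totalMemory() >> 20, sink);
        }
    }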
A JVM application runs on the Oracle HotSpot JVM; it starts up with default JVM settings, but with 100MB of initial heap size and 1GB of maximum heap size.
Under which circumstances will the JVM decide to grow the current heap size instead of trying a GC?
The HotSpot JVM continuously monitors allocation rates and object lifetimes. It tries to achieve two key goals:
let short-lived objects die in Eden
promote long-lived objects to the old generation in time, to prevent unnecessary copying between survivor spaces
In a nutshell, HotSpot has a configured threshold which indicates what percentage of the total allocated heap has to be free after running the garbage collector. For example, if this threshold is configured as 70% free and heap usage after a full GC is 80%, then additional memory will be allocated to hit the threshold. Of course, a bigger heap means longer pauses, while a smaller heap means more frequent collections.
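A back-of-the-envelope sketch of that rule, using made-up numbers matching the example above (80% of a 1 GB heap still live after a full GC, with a 70%-free target):

    public class FreeRatioSketch {
        public static void main(String[] args) {
            long currentHeap = 1024L << 20;                      // 1 GB committed heap (assumed)
            long liveAfterFullGc = (long) (currentHeap * 0.80);  // 80% still in use after the full GC
            double desiredFreeRatio = 0.70;                      // threshold from the example above
            // To keep 70% free, the heap must grow until live data is only 30% of it.
            long targetHeap = (long) (liveAfterFullGc / (1.0 - desiredFreeRatio));
            System.out.printf("live=%d MB -> target heap ~%d MB%n",
                    liveAfterFullGc >> 20, targetHeap >> 20);
        }
    }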
But you have to remember that the JVM is very complex, and you can change this behaviour, for example by using these flags:
AdaptiveSizePausePolicy, which will pick a heap size to achieve the shortest pauses
AdaptiveSizeThroughPutPolicy, which will pick a heap size to achieve the highest throughput
GCTimeLimit and GCTimeRatio, which set the target proportion of time to be spent in application execution vs. garbage collection
The number of objects occupying the heap increases while garbage collection cannot reclaim them. When objects cannot be collected as garbage because they are still in use by the current process, the JVM needs to grow its heap towards its maximum to allow new objects to be created.
While reading some notes on performance tuning, I noticed a recommendation about setting memory sizes:
A Java application should set both the initial and maximum permanent generation size to the same value, since growing or contracting the permanent generation space requires a full GC. A similar suggestion is given for setting the heap size, i.e. -Xmx = -Xms.
My question is, then, why do we have the -Xms setting at all?
Also,
Why does GC get triggered more often if I have different values for -Xmx and -Xms, but not when I have the same size for -Xmx and -Xms?
To add more to my second question: if I start with a minimum heap size of 64M and a max of 512M, I believe a full GC will not get triggered unless the memory utilized by my app reaches 512M.
Similarly, if I start with 512M for both -Xmx and -Xms, the JVM will still trigger a full GC when my app's memory use reaches this limit. So why is it advised to set both max and min to the same value?
The sizing flags were designed before the VM had generational, incremental collection. Back then, complete collections were all there were. In more modern collectors full collections are rare. That's good, because incremental collections are normally a few milliseconds, so the UI experience doesn't change. Complete collections of big arenas can take several seconds or more. Changing the arena size - as the document says - is guaranteed to cause a full collection every time.
The guidance isn't perfect 100% of the time. There are a few kinds of apps where allowing the arena to grow is reasonable.
-Xms64m -Xmx512m does not mean "start up with a heap between 64 and 512 MB". It instructs the JVM to request 64MB of committed memory and 512MB of reserved memory at startup. The heap will be 64MB to start with and, as it fills up, will expand into the space reserved for it. So, with an Xms of 64MB you would see a full collection before the heap filled to 64MB.
If you start your application with a low value for Xms and turn on GC logging (-verbose:gc -Xloggc:FILENAME) the log file will show how the heap and generation sizes change as the application runs.
Minor collections may be more frequent with a lower Xms because the new generation will be smaller (assuming you are using proportional generation sizing rather than explicit) and so will fill more quickly.
One reason to have -Xms < -Xmx is to allow the JVM not to pre-allocate the whole Xmx upfront, so that the difference is available (for a while, perhaps) to other applications. Gory details here: http://www.ibm.com/developerworks/library/j-memusage/
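To see the committed-versus-reserved distinction directly, the heap MemoryUsage from the MemoryMXBean reports init (roughly -Xms), committed, and max (roughly -Xmx). A minimal sketch (CommittedVsReserved is just an illustrative name), run for example with java -Xms64m -Xmx512m CommittedVsReserved:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class CommittedVsReserved {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // Only the committed part has actually been obtained from the OS;
            // the gap up to max is reserved address space the heap can grow into.
            System.out.printf("init=%d MB used=%d MB committed=%d MB max=%d MB%n",
                    heap.getInit() >> 20, heap.getUsed() >> 20,
                    heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }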