For JVMs after Java 8:
When the size of the metaspace exceeds -XX:MetaspaceSize, a GC is triggered.
No matter how you configure -XX:MetaspaceSize and -XX:MaxMetaspaceSize, the initial size of the metaspace is usually a fixed value (about 20.8 MB) on a 64-bit server.
The JVM will resize metaspace automatically when it reaches close to the current capacity.
So if, for example, -XX:MetaspaceSize is 20G, the current size of the metaspace is 18M, and a large number of new objects (about 100M worth) must be allocated, the JVM must resize the metaspace for these new objects. Will the JVM trigger a full GC before resizing?
First of all, "the size of the metaspace" is ambiguous, and thus meaningless without the context. There are at least five metrics: reserved, committed, capacity and used memory as described in this answer, and the high-water mark, also known as capacity_until_gc.
Metaspace is not just one contiguous region of memory, so it does not resize in the common sense. Instead, when allocation happens, one or more of the above metrics changes.
On the fastest path, a block of metadata is allocated from the current chunk. Used memory increases in this case, and that's it.
If there is not enough room in the current chunk, JVM searches for a possibly free existing chunk. If it succeeds in reusing chunks, capacity increases. No GC happens until this point.
If there are no free chunks, JVM tries to commit more memory, unless the new committed size would exceed capacity_until_gc.
If the capacity_until_gc threshold is reached, the JVM triggers a GC cycle.
If GC does not free enough memory, the high-water mark is increased so that another Virtual Space will be allocated.
After GC, the high-water mark value is adjusted based on the following JVM flags:
-XX:MinMetaspaceFreeRatio (used to calculate how much free space is desirable in the metaspace capacity to decide how much to increase the HWM);
-XX:MaxMetaspaceFreeRatio (used to decide how much free space is desirable in the metaspace capacity before decreasing the HWM);
-XX:MinMetaspaceExpansion (the minimum expansion of Metaspace in bytes);
-XX:MaxMetaspaceExpansion (the maximum expansion of Metaspace without full GC).
TL;DR It's not that simple. JVM can definitely commit more Metaspace memory without triggering GC. However, when HWM is reached, GC is triggered and HWM is recomputed according to the ergonomics policy.
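If you want to watch some of these metrics yourself, the standard java.lang.management API exposes a memory pool that HotSpot (Java 8+) names "Metaspace". A minimal sketch (the pool name is HotSpot-specific; other JVMs may report differently):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class MetaspaceStats {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if ("Metaspace".equals(pool.getName())) {
                    MemoryUsage u = pool.getUsage();
                    // used vs committed are two of the metrics discussed above
                    System.out.printf("used=%d committed=%d max=%d%n",
                            u.getUsed(), u.getCommitted(), u.getMax());
                }
            }
        }
    }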
You can configure the metaspace size, but the JVM can increase or decrease it depending on the platform.
See Oracle docs.
-XX:MetaspaceSize=size
Sets the size of the allocated class metadata space that will trigger a garbage collection the first time it is exceeded. This threshold for a garbage collection is increased or decreased depending on the amount of metadata used. The default size depends on the platform.
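In practice that means you can raise the threshold for that first metaspace-induced GC up front, e.g. (sizes purely illustrative; MyApp is a placeholder for your main class):

    java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m MyApp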
Related
This is a screenshot of a JVM (win64, 6u17) running ActiveMQ; after every garbage collection the heap size shrinks. As the heap size shrinks, garbage collection gets more frequent and the heap shrinks more quickly. Eventually the VM locks up, as it's spending all its time in GC.
-Xms is the default and -Xmx is 2048 MB.
What is happening!!? How can I avoid this?
Shrinking heap: http://imagebin.org/92614
N.B. originally posted on serverfault.com, moved to stackoverflow.com as requested
Google found me the following, from the IBM JVM FAQ (how's that for an NLA):
When does the Java heap shrink?
Heap shrinkage occurs when GC determines that there is a lot of free heap storage, and releasing some heap memory is beneficial for system performance. Heap shrinkage occurs after GC, but when all the threads are still suspended.
The Sun JVM does something similar. Below is an excerpt from an Oracle Technology Network article entitled Ergonomics in the 5.0 Java Virtual Machine.
The heap will grow or shrink to a size that will support the chosen throughput goal. Some oscillations in the size of the heap during initialization and during a change in the application's behavior can be expected.
...
It is typical that the size of the heap will oscillate as the garbage collector tries to satisfy competing goals. This is true even if the application has reached a steady state. The pressure to achieve a throughput goal (which may require a larger heap) competes with the goals for a maximum pause time and a minimum footprint (which both may require a small heap).
I suggest you have a look at the rest of that document; it may have more information relevant to your problem.
There is a JVM argument that controls when the heap is resized.
-XX:MaxHeapFreeRatio
The default value for this is 70. The free ratio is the amount of space not allocated on the heap divided by the total heap size. If the percentage of free space rises above the default of 70%, the JVM will reduce the size of the heap to allow the OS to use the memory.
If the heap is shrinking too often, you can increase the value of -XX:MaxHeapFreeRatio. If it is set to 100, presumably it will never shrink.
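For example, to effectively disable shrinking you could start the JVM like this (MyApp is a placeholder for your main class; whether shrinking stops entirely also depends on the collector in use):

    java -Xmx2048m -XX:MaxHeapFreeRatio=100 MyApp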
Just a guess:
It looks like the system is pretty much idle. There might be some caching going on, with stuff dropping out of the cache and getting gc'd. Or, since it is a queuing system, maybe it has some messages in the queue which slowly get delivered and gc'd afterwards.
The increasing frequency of GC runs might be due to ever-decreasing load on the system.
As to how to avoid it: why do you want to avoid it? It seems like your CPU load is zero, so you are free to let the GC do whatever it wants.
Based on the documentation: https://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryUsage.html
committed - represents the amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine. The amount of committed memory may change over time (increase or decrease). The Java virtual machine may release memory to the system and committed could be less than init. committed will always be greater than or equal to used.
But the question is: how does the JVM calculate the committed memory?
Here you can find a little bit more detail, but it does not explain the exact way how committed heap space is increased:
There is also a committed heap size which acts as a "high water mark", moving up once the JVM cannot free up space even on old collection / young collection to make room. In this case, the committed heap size is increased. This cycle repeats until the committed heap size matches the maximum heap size, the maximum space allocatable.
https://support.mulesoft.com/s/article/Java-JVM-memory-allocation-pattern-explained
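You can at least observe committed changing over time from inside the application via the standard MemoryMXBean; a minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapStats {
        public static void main(String[] args) {
            // Heap usage as the JVM itself reports it
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("init=%d used=%d committed=%d max=%d%n",
                    heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());
        }
    }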
In most places on the net, I find the following info about heap parameters:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
Here is my understanding/question when I specify the parameters -Xms512M -Xmx2048M:
-Xms: My understanding is that if my Java process actually needs only 200M, then even with -Xms512M the process will be assigned only 200M (the actual memory required) rather than 512M. But if I already know that my application is going to take this 512M of memory on startup, then specifying less would hurt performance, since the heap block would need to be resized, which is a costly operation.
Per a discussion with my colleague, by default GC will trigger at 60% of the Xms value. Is that correct? If yes, is it a minor GC or a full GC that depends on the Xms value?
Update on Xms:
This seems to be true after reading JVM heap parameters, but again: is the value 60% by default, and is it a minor or full GC that depends on the Xms value?
-Xmx: My understanding is that with -Xmx2048M, the Java process is actually going to reserve 2048M of memory from the OS for its use, so that another process cannot be given that share. If the Java process somehow needs more than 2048M of memory, an out-of-memory error will be thrown.
Also, I believe there is some relation between the Full GC trigger and the value of -Xmx, because what I observed in jconsole is that a Full GC happens when memory use reaches about 70% of Xmx. Is that correct?
Configuration: I am using a Linux box (64-bit JVM 8) with the default GC, i.e. Parallel GC.
GC is not triggered based on just Xms or Xmx value.
Heap = New + Old generations
The heap size (which is initially set to Xms) is split into 2 generations - New (aka Young) and Old (aka Tenured). New generation is by default 1/3rd of the total heap size, while Old generation is 2/3rd. This can be adjusted using the JVM parameter NewRatio. Its default value is 2.
Young Generation is further divided into Eden and 2 Survivor spaces. The default ratios of these 3 spaces are: 3/4, 1/8, 1/8.
Side note: This is about Parallel GC collectors. For G1 - new GC algorithm divides the heap space differently.
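To make the split concrete: with a 3G heap and the defaults above you would get roughly 1G of New and 2G of Old. You can override and verify the ratio like this (values illustrative; -XX:+PrintFlagsFinal just dumps the resolved flag values):

    java -Xmx3g -XX:NewRatio=2 -XX:+PrintFlagsFinal -version | grep NewRatio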
Minor GC
All new objects are allocated in Eden space (except massive ones which are directly stored in Old generation). When Eden space becomes full Minor GC is triggered. Objects which survive multiple minor GCs are promoted to Old Generation (default is 15 cycles which can be changed using JVM parameter: MaxTenuringThreshold).
Major GC
Unlike the concurrent collector, where Major GC is triggered based on used space (by default 70%), the parallel collector calculates the threshold based on the 3 goals mentioned below.
Parallel Collector Goals
Max GC pause time - Maximum time spent in doing GC
Throughput - Percentage of time spent in GC vs Application. Default (1%)
Footprint - Maximum heap size (Xmx)
Thus by default, Parallel Collector tries to spend maximum 1% of total application running time in Garbage Collection.
More details here
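These goals map directly onto flags. The 1% default corresponds to -XX:GCTimeRatio=99 (GC gets at most 1/(1+99) of total time). A hypothetical tuning invocation might look like this (values illustrative, MyApp is a placeholder):

    java -XX:+UseParallelGC -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 MyApp

Here GCTimeRatio=19 would allow up to 1/(1+19) = 5% of total time in GC.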
Xms to Xmx
During startup the JVM creates a heap of size Xms but reserves the extra space (up to Xmx) to be able to grow later. That reserved space is called Virtual Space. Do note that it just reserves the space and does not commit it.
2 parameters decide when heap size grows (or shrinks) between Xms and Xmx.
MinHeapFreeRatio (default: 40%): Once the free heap space dips below 40%, a Full GC is triggered, and the heap size grows by 20%. Thus, heap size can keep growing incrementally until it reaches Xmx.
MaxHeapFreeRatio (default: 70%): On the flip side, if heap free space crosses 70%, then the heap size is reduced by 5% incrementally during each GC until it reaches Xms.
These parameters can be set during startup. Read more about it here and here.
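You can watch the heap grow from Xms towards Xmx from inside a program using the Runtime API. A toy sketch (run with, say, -Xms64m -Xmx1g; the list just keeps the arrays reachable so GC cannot reclaim them):

    import java.util.ArrayList;
    import java.util.List;

    public class HeapGrowth {
        public static void main(String[] args) {
            List<byte[]> hold = new ArrayList<>();
            for (int i = 0; i < 50; i++) {
                hold.add(new byte[10 * 1024 * 1024]); // 10 MB per step
                System.out.printf("total=%dM max=%dM%n",
                        Runtime.getRuntime().totalMemory() >> 20,
                        Runtime.getRuntime().maxMemory() >> 20);
            }
        }
    }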
PS: JVM GC is a fascinating topic and I would recommend reading this excellent article to understand it in depth. All the JVM tuning parameters can be found here.
An object can be promoted from the Young Generation to the Old Generation when it reaches the Tenuring Threshold, or when the "To" Survivor Space is full while it is being transferred.
Therefore, my question is: in order to improve performance, if I know my object will be frequently used (referenced), is it possible to automatically/manually allocate an object in the Old/Permanent Generation, so that not allocating it in Eden would delay the need for a Minor Garbage Collection, thus delaying the "Stop The World" event and improving the application's performance?
Generally:
No - not for a specific single object.
In more detail:
Roughly, an allocation looks like the following:
Use thread local allocation buffer (TLAB), if tlab_top + size <= tlab_end. This is the fastest path. Allocation is just the tlab_top pointer increment.
If TLAB is almost full, create a new TLAB in the Eden space and retry in a fresh TLAB.
If the TLAB's remaining space is not enough but is still too big to discard, try to allocate the object directly in the Eden space. Allocation in the Eden space needs to be done using an atomic operation, since Eden is shared between all threads.
If allocation in the Eden space fails (eden_top + size > eden_end), typically a minor collection occurs.
If there is not enough space in the Eden space even after a Young GC, an attempt to allocate directly in the old generation is made.
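None of this is visible to Java code, but the decision logic can be sketched as a toy model (pointers modeled as plain longs; this illustrates the bump-the-pointer scheme, it is not HotSpot's actual code):

    public class ToyAllocator {
        long tlabTop, tlabEnd;   // bounds of the current thread-local buffer
        long edenTop, edenEnd;   // shared eden bounds

        String allocate(long size) {
            if (tlabTop + size <= tlabEnd) {   // fast path: bump the TLAB pointer
                tlabTop += size;
                return "tlab";
            }
            if (edenTop + size <= edenEnd) {   // slower path: allocate in eden
                edenTop += size;               // the real JVM uses an atomic CAS here
                return "eden";
            }
            return "minor-gc";                 // eden exhausted: a young GC is triggered
        }
    }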
"Hack":
The following parameter:
-XX:PretenureSizeThreshold=size
This parameter is set to 0 by default, so it is deactivated. If set, it defines a size threshold above which objects are automatically allocated in the old generation.
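For example, to pretenure every allocation larger than 1 MB (value purely illustrative; note that not every collector honors this flag):

    java -XX:PretenureSizeThreshold=1m MyApp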
BUT:
You should use this with care: setting this parameter wrongly may change your GC's behaviour drastically. And: only a few percent of objects survive their first GC, so most objects don't have to be copied during the young GC.
Therefore, the young GC is very fast and you should not really need to "optimize" it by forcing object allocation to old generation.
Java parameters:
If you want to get an overview of the possible Java parameters, run the following:
java -XX:+PrintVMOptions -XX:+AggressiveOpts -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -version
This will print all flags you can set.
Different garbage collectors:
Also keep in mind that there are different garbage collectors out there, and that it is planned that Java 9 will use the garbage-first (G1) GC as the default garbage collector, which again may handle big objects differently (by allocating them into humongous regions).
Additional source:
Stack overflow question: Size of Huge Objects directly allocated to Old Generation
You cannot create an object directly in the old generation; it has to go through the Eden space and survivor spaces (the young generation) before reaching the old generation. However, if you know that your objects are long-lived (for example if you have implemented something like a cache), you can set the following JVM parameters:
-XX:InitialTenuringThreshold=7: Sets the initial tenuring threshold to use in adaptive GC sizing in the parallel young collector. The tenuring threshold is the number of times an object survives a young collection before being promoted to the old, or tenured, generation.
-XX:MaxTenuringThreshold=n: Sets the maximum tenuring threshold for use in adaptive GC sizing. The current largest value is 15. The default value is 15 for the parallel collector and is 4 for CMS.
Source: http://www.oracle.com/technetwork/articles/java/vmoptions-jsp-140102.html
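A hypothetical cache-heavy service might lower both thresholds like this (numbers illustrative, MyCacheApp is a placeholder):

    java -XX:InitialTenuringThreshold=4 -XX:MaxTenuringThreshold=5 MyCacheApp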
So, you can reduce the application's tenuring threshold.
I actually did this, and the stop-the-world GC time was reduced for minor GC (I had a huge 250 GB JVM, so the effect was quite profound).
A Java application runs on the Oracle HotSpot JVM; it starts up with default JVM settings, but with 100 MB of initial heap size and 1 GB of maximum heap size.
Under which circumstances will JVM decide to grow the current heap size, instead of trying GC?
The HotSpot JVM continuously monitors allocation rates and object lifetimes. It tries to achieve two key goals:
let short-lived objects die in eden
promote long-lived objects to the old generation in time, to prevent unnecessary copying between survivor spaces
In a nutshell, you can describe it like this: HotSpot has a configured threshold which indicates what percentage of the total allocated heap has to be free after running the garbage collector. For example, if this threshold is configured at 70% and heap usage after running a full GC is 80%, then additional memory will be allocated to hit the threshold. Of course, a bigger heap means longer pauses, while a smaller heap means more frequent collections.
But you have to remember that JVM is very complex, and you can change this behaviour, for example by using flags:
AdaptiveSizePausePolicy, which will pick a heap size to achieve the shortest pauses
AdaptiveSizeThroughPutPolicy, which will pick a heap size to achieve the highest throughput
GCTimeLimit and GCTimeRatio, which set the time spent in application execution
The number of objects occupying the heap increases while garbage collection cannot reclaim them. When objects cannot be collected as garbage because they are still in use by the current process, the JVM needs to grow its heap towards the maximum to allow new objects to be created.