I run my Java application in Tomcat and set -Xms1024m; however, I found the Java heap is only around 200-300 MB after the application starts. I thought -Xms means the minimum heap size, so why doesn't the heap reach the minimum size of 1024 MB immediately after startup?
Edit: BTW, the JVM is HotSpot 7.0.
It seems the GC handles this in the method HeapRegion::setup_heap_region_size(uintx min_heap_size) in the C++ file \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\gc_implementation\g1\heapRegion.cpp, and in the method parse_each_vm_init_arg in \openjdk-7-fcs-src-b147-27_jun_2011\openjdk\hotspot\src\share\vm\runtime\arguments.cpp. Could someone familiar with the JVM GC source code help analyze this?
It only shows you the space used. The heap size determines the capacity of each region. Note: the actual available size is less, since you have two survivor spaces and only one is active in normal operation. E.g., say you have survivor spaces of 50 MB each: a 1 GB heap will report only 950 MB as available.
When you set the minimum heap size, this doesn't mean that space has been used yet. However, you might not get a GC until this heap size is reached (or until the Eden space, which is a portion of the heap, fills up).
Say you have a 1 GB heap and the Eden space is 100 MB to start with. You will get a GC once the Eden fills up, even though little tenured space is used. Eventually the Eden grows and the tenured space starts to fill; once it is full, the JVM might grow the heap beyond the minimum heap size you gave it.
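To watch this behavior yourself, here is a minimal sketch (class name and flag values are illustrative, not from the question) that retains memory in a loop and prints used vs. committed heap. Run with e.g. java -Xms128m -Xmx1g HeapGrowth and you should see the committed size start near Xms while used stays much lower:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            retained.add(new byte[1 << 20]);  // retain 1 MB per iteration
            long committed = rt.totalMemory() >> 20;                    // heap "Size"
            long used = (rt.totalMemory() - rt.freeMemory()) >> 20;     // actually in use
            System.out.printf("iter=%d used=%dM committed=%dM%n", i, used, committed);
        }
    }
}
```

The committed heap only grows when allocation pressure (and the free-ratio ergonomics) demands it, which is why used and committed diverge at startup.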
Related
I am profiling a Java process with VisualVM and I found out that while my used heap remains constantly below 100 MB, the heap size keeps increasing to a point at which it is 10 times bigger than the used heap!
Reading from the docs:
By default, the virtual machine grows or shrinks the heap at each collection to try to keep the proportion of free space to live objects at each collection within a specific range. This target range is set as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and -XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded below by -Xms and above by -Xmx.
So, considering that MinHeapFreeRatio and MaxHeapFreeRatio are set to 40% and 70% respectively, why is this happening?
In JVisualVM, under Monitor > Heap, I see three attributes depicting memory details, all with different figures:
Size: ?
Used: I believe this is the actual memory used
Max: I believe this is the max heap size allocated to the Java process (specified with -Xmx)
I am not sure what Size actually depicts.
The three attributes can be defined as follows:
Size: The actual total reserved heap size
Used: The actual used heap size.
Max: The max size of the Java heap (young generation + tenured generation)
Indeed, when you launch your JVM, the initial heap size (which can be set with -Xms) is the initial total reserved heap size. Then, depending on how your application behaves, the JVM may need to increase the total reserved size until it reaches the max size; if that is still not enough, you can get an OOME.
Size depicts the heap block size assigned to the Java process. Try -Xms512m or -Xms1024m: your starting size will then be 512 MB (or 1024 MB), but used memory may be much lower. As used memory grows, heap resizing occurs proactively so that memory can be allocated to live objects.
It's like having a gas tank with a 30-litre maximum capacity. You know you may need 20 litres for the trip, but the amount actually used on the trip is 5 litres.
Heap size is the actual size of the heap your running application currently has.
Used heap is the used portion of that heap.
Max heap size is the maximum value the application's heap size can reach (it can be set with the -Xmx option).
When monitoring memory usage of a Java application, you will see that the heap size may vary while the application runs. It can never be greater than the max heap size. For a sample profiling session (monitoring of an application), see the image below:
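As a minimal sketch (class name is made up here), the three VisualVM attributes map onto the standard MemoryMXBean values: Size corresponds to the committed heap, Used to the used heap, and Max to the -Xmx bound:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapAttributes {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("Size (committed): " + (heap.getCommitted() >> 20) + "M");
        System.out.println("Used:             " + (heap.getUsed() >> 20) + "M");
        System.out.println("Max (-Xmx):       " + (heap.getMax() >> 20) + "M");
    }
}
```

Note that getMax() returns -1 if no maximum is defined, and used is always at most the committed size.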
In most places on the net, I find the following info about heap parameters:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
Here is my understanding/question when I specify the parameters -Xms512M -Xmx2048M:
-Xms: My understanding is that if my Java process actually needs only 200M, then even with -Xms512M specified, the process will still be assigned only 200M (the actual memory required) instead of 512M. But if I already know that my application will take 512M at startup, then specifying less than that will hurt performance, since the heap block will need to be resized anyway, which is a costly operation.
Per discussion with my colleague, by default GC triggers at 60% of the Xms value. Is that correct? If yes, is it minor GC or full GC that depends on the Xms value?
Update on Xms:
This seems to be true after reading JVM heap parameters, but again: is the value 60% by default, and is it minor or full GC that depends on the Xms value?
-Xmx: My understanding is that with -Xmx2048M specified, the Java process actually reserves 2048M of memory from the OS for its own use, so that share cannot be given to another process. If the Java process somehow needs more than 2048M of memory, an out-of-memory error is thrown.
Also, I believe there is some relation between the Full GC trigger and the value of -Xmx, because what I observed in JConsole is that when memory reaches about 70% of Xmx, a Full GC happens. Is that correct?
Configuration: I am using a Linux box (64-bit JVM 8) with the default GC, i.e. Parallel GC.
GC is not triggered based on just the Xms or Xmx value.
Heap = New + Old generations
The heap size (initially set to Xms) is split into 2 generations - New (aka Young) and Old (aka Tenured). The New generation is by default 1/3rd of the total heap size, while the Old generation is 2/3rd. This can be adjusted with the JVM parameter NewRatio; its default value is 2.
The Young generation is further divided into Eden and 2 Survivor spaces. The default ratio of these 3 spaces is: 3/4th, 1/8th, 1/8th.
Side note: this applies to the Parallel GC collectors. G1 - the newer GC algorithm - divides the heap space differently.
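The sizing arithmetic above can be sketched in plain numbers; the 3072 MB heap is just an example, and the ratios are the defaults quoted above (NewRatio=2, Eden 3/4 of Young, each Survivor 1/8):

```java
public class GenSizes {
    public static void main(String[] args) {
        long heapMb = 3072;                   // e.g. -Xms3g -Xmx3g (illustrative)
        long newRatio = 2;                    // Old = newRatio * Young
        long young = heapMb / (newRatio + 1); // 1024 MB (1/3 of the heap)
        long old = heapMb - young;            // 2048 MB (2/3 of the heap)
        long eden = young * 3 / 4;            // 768 MB
        long survivor = young / 8;            // 128 MB each, two of them
        System.out.printf("young=%dM old=%dM eden=%dM survivor=%dMx2%n",
                young, old, eden, survivor);
    }
}
```

The pieces add up: eden plus the two survivor spaces (768 + 128 + 128) equals the 1024 MB young generation.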
Minor GC
All new objects are allocated in the Eden space (except massive ones, which are allocated directly in the Old generation). When the Eden space becomes full, a Minor GC is triggered. Objects which survive multiple minor GCs are promoted to the Old generation (the default is 15 cycles, which can be changed with the JVM parameter MaxTenuringThreshold).
Major GC
Unlike the concurrent collector, where a Major GC is triggered based on used space (by default 70%), the parallel collectors calculate the threshold based on the 3 goals mentioned below.
Parallel Collector Goals
Max GC pause time - Maximum time spent in doing GC
Throughput - Percentage of time spent in GC vs. the application. Default (1%)
Footprint - Maximum heap size (Xmx)
Thus, by default, the Parallel Collector tries to spend at most 1% of total application running time in garbage collection.
More details here
Xms to Xmx
During startup the JVM creates a heap of size Xms but reserves the extra space (up to Xmx) to be able to grow later. That reserved space is called Virtual Space. Note that the JVM just reserves the space; it does not commit it.
2 parameters decide when the heap size grows (or shrinks) between Xms and Xmx.
MinHeapFreeRatio (default: 40%): Once the free heap space dips below 40%, a Full GC is triggered and the heap size grows by 20%. Thus, the heap can keep growing incrementally until it reaches Xmx.
MaxHeapFreeRatio (default: 70%): On the flip side, once the free heap space crosses 70%, the heap size is reduced by 5% incrementally at each GC until it reaches Xms.
These parameters can be set during startup. Read more about it here and here.
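As a toy simulation of that resize walk, the sketch below applies the grow-by-20% and shrink-by-5% figures quoted above between illustrative Xms/Xmx values; the real increments are collector- and ergonomics-dependent, so treat this purely as arithmetic:

```java
public class ResizeWalk {
    public static void main(String[] args) {
        long xms = 512, xmx = 2048;  // illustrative -Xms/-Xmx values, in MB
        long size = xms;
        // Free space keeps dipping below MinHeapFreeRatio: grow by 20% until Xmx.
        while (size < xmx) {
            size = Math.min(size * 120 / 100, xmx);
            System.out.println("grown to " + size + "M");
        }
        // Free space keeps exceeding MaxHeapFreeRatio: shrink by 5% until Xms.
        while (size > xms) {
            size = Math.max(size * 95 / 100, xms);
            System.out.println("shrunk to " + size + "M");
        }
    }
}
```

Notice the asymmetry: growth happens in large steps (a Full GC is expensive, so the JVM grows aggressively), while shrinking is gradual.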
PS: JVM GC is a fascinating topic, and I would recommend reading this excellent article to understand it in depth. All the JVM tuning parameters can be found here.
I have a JEE application that has recently started to see spikes in CPU usage (e.g. 100% on 27 cores of a 40-core server) and increasingly long periods during which the application is unavailable. The behavior is very similar to the issue described in the following post, including the fact that bouncing the application server makes the issue go away until it reappears after a few hours:
Old Gen heap is full and the Eden and Survivor are low and almost empty
I've taken some core dump outputs while the application is experiencing these "freezes", and I am seeing the following JVM GC output:
PSYoungGen total 11221504K, used 2435K
eden space 9238528K, 0% used
from space 19829796K, 0% used
to space 1970176K, 0% used
ParOldGen total 39613440K, used 39276477K
object space 39613440K, 99% used
PSPermGen total 254976K, used 115497K
object space 254976K, 45% used
Based on the referenced post and the above output, I think I understand that the "freezes" are driven by the garbage collector running (in vain?) on the ParOldGen space. The parts I am missing are:
Why the PermGen space remains at 45% used. That is, will the ~39GB of stuff in ParOldGen ultimately transition into PSPermGen?
What is the significance of the nearly empty PSYoungGen space? Does this mean that the application isn't creating any/many new object instances at steady state?
The post above also describes the option of "giving more headroom" to ParOldGen, but I'm not clear if that means increasing the total heap size via -Xmx or if there's an explicit JVM GC parameter. I see the NewRatio argument controls the size of the young generation relative to the old generation. Would the fact that the PSYoungGen is essentially empty mean that it's too large, and that I should use a smaller NewRatio value?
Thanks in advance for any assistance.
will the ~39GB of stuff in ParOldGen ultimately transition into PSPermGen?
The PermGen in Java 7 (replaced with Metaspace in Java 8) holds code. The only thing that passes from the heap to PermGen is bytecode, so unless you are generating or loading classes, nothing moves from one to the other. They are separate spaces.
What is the significance of the nearly empty PSYoungGen space?
The young gen is empty after a full GC. Full GCs are common once your old gen starts to fill up.
Does this mean that the application isn't creating any/many new object instances at steady state?
It is more likely to mean it has recently Full GC-ed.
describes the option of "giving more headroom" to ParOldGen, but I'm not clear if that means increasing the total heap size via -Xmx or if there's an explicit JVM GC parameter.
Increasing the maximum heap could give you more headroom, but I would first check that:
you don't have a memory leak;
you can't move the bulk of your data off heap, e.g. into a database or native memory.
Would the fact that the PSYoungGen is essentially empty mean that it's too large, and that I should use a smaller NewRatio value?
That could help give you more space for the old gen, but it might just give you a bit more time before it runs out of memory.
Why the PermGen space remains at 45% used. That is, will the ~39GB of stuff in ParOldGen ultimately transition into PSPermGen?
No. The OldGen and PermGen spaces can't be shared. PSPermGen mostly contains classes loaded by the class loaders into the PermGen space. ParOldGen contains heap memory for long-lived objects.
In JDK 1.8, PermGen has been replaced by Metaspace. Refer to this article for more details.
Refer to below SE questions for more details:
Java heap terminology: young, old and permanent generations?
What is the significance of the nearly empty PSYoungGen space? Does this mean that the application isn't creating any/many new object instances at steady state?
Peter Lawrey has answered this correctly. After a Full GC, the YoungGen is nearly empty => you have not accumulated garbage from short-lived objects in your application.
The post above also describes the option of "giving more headroom" to ParOldGen, but I'm not clear if that means increasing the total heap size via -Xmx or if there's an explicit JVM GC parameter.
Your old gen being full implies that your application retains heap through long-lived objects.
Now you have to check for possible memory leaks (if any) in your application using profiling tools like VisualVM or MAT.
If you don't have memory leaks, you can increase your heap size with -Xmx.
Would the fact that the PSYoungGen is essentially empty mean that it's too large, and that I should use a smaller NewRatio value?
Since you are using larger heaps, I recommend using the G1GC algorithm. If you use G1GC, don't customize the default values, since the G1GC algorithm takes care of heap management better on its own.
Have a look at the Oracle article on G1GC switches (the "Complete List of G1 GC Switches" section), the Use Cases article, and this related SE question:
Java 7 (JDK 7) garbage collection and documentation on G1
Recently, when running our application, we hit an out-of-memory exception.
This is the heap dump right before the exception happened:
Heap
def new generation total 1572864K, used 366283K [0x00000006b0000000, 0x000000071aaa0000, 0x000000071aaa0000)
eden space 1398144K, 13% used [0x00000006b0000000, 0x00000006bbb12d40, 0x0000000705560000)
from space 174720K, 100% used [0x0000000710000000, 0x000000071aaa0000, 0x000000071aaa0000)
to space 174720K, 0% used [0x0000000705560000, 0x0000000705560000, 0x0000000710000000)
tenured generation total 3495296K, used 2658714K [0x000000071aaa0000, 0x00000007f0000000, 0x00000007f0000000)
the space 3495296K, 76% used [0x000000071aaa0000, 0x00000007bcf06ba8, 0x00000007bcf06c00, 0x00000007f0000000)
compacting perm gen total 42048K, used 41778K [0x00000007f0000000, 0x00000007f2910000, 0x0000000800000000)
the space 42048K, 99% used [0x00000007f0000000, 0x00000007f28ccb80, 0x00000007f28ccc00, 0x00000007f2910000)
No shared spaces configured.
It looks like the old gen was almost full (76%). I assume OOM happens when it finally reaches 100%. However, eden is only at 13% used.
Can someone explain why OOM happens even if there is still some space in young gen?
There are a dozen different reasons why the JVM may throw an OutOfMemoryError, including:
Java heap space: when trying to allocate an object or an array larger than the maximum contiguous free block in any of the heap generations;
GC overhead limit exceeded: when the proportion of time JVM spends doing garbage collection becomes too high (see GCTimeLimit, GCHeapFreeLimit);
PermGen space (before Java 8) or Metaspace (since Java 8): when the amount of class metadata exceeds MaxPermSize or MaxMetaspaceSize;
Requested array size exceeds VM limit: when trying to allocate an array with length larger than Integer.MAX_VALUE - 2;
Unable to create new native thread: when reaching the OS limit of user processes (see ulimit -u) or when there is not enough virtual memory to reserve space for thread stack;
Direct buffer memory: when the size of all direct ByteBuffers exceeds MaxDirectMemorySize or when there is no virtual memory available to satisfy direct buffer allocation;
When the JVM cannot allocate memory for its internal structures, either because it has run out of available virtual memory or because a certain OS limit has been reached (e.g. the maximum number of memory map areas);
When JNI code fails to allocate some native resource;
Etc. Not to mention that an application can throw OutOfMemoryError itself at any time, just because a developer decided so.
To find out the reason for your particular error, you should at least look at the error message, the stack trace, and the GC logs.
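As a sketch, one way to bucket these causes programmatically is to inspect the error message. The helper and the bucket names below are hypothetical, but the message substrings are the standard HotSpot ones from the list above:

```java
public class OomClassifier {
    // Maps a HotSpot OutOfMemoryError message onto the cause buckets listed above.
    static String classify(OutOfMemoryError e) {
        String m = String.valueOf(e.getMessage());
        if (m.contains("GC overhead limit exceeded"))            return "gc-overhead";
        if (m.contains("Metaspace") || m.contains("PermGen"))    return "class-metadata";
        if (m.contains("Requested array size exceeds VM limit")) return "array-size";
        if (m.contains("unable to create new native thread"))    return "native-thread";
        if (m.contains("Direct buffer memory"))                  return "direct-buffer";
        if (m.contains("Java heap space"))                       return "heap";
        return "other";  // JNI failures, internal VM structures, app-thrown errors, ...
    }

    public static void main(String[] args) {
        System.out.println(classify(new OutOfMemoryError("Java heap space")));
        System.out.println(classify(new OutOfMemoryError("Metaspace")));
    }
}
```

In practice you would apply the same idea by grepping your logs for these substrings rather than catching the error in code.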