I am trying to understand how the garbage collection process works, and I came across a good link.
Most articles say that during a minor GC, objects are moved from eden to a survivor space, and during a major GC,
objects are moved from the survivor space to the tenured space; in both cases, the memory of all unreachable objects is reclaimed. I have three questions (asked
in a single go, as they are related) based on these statements:
1) Minor vs. major GC collection: what is the difference between the two, such that one is called a major and the other a minor collection?
As per my understanding, a minor collection happens in parallel with the application run, while a major collection pauses the
application for its duration.
2) What actually happens when an object is moved from eden to a survivor space? Is the memory location of the object changed internally?
3) Why not have just one space instead of three, i.e., eden, survivor, and tenured? I know there must be a reason behind it, but I am missing it.
My point is: when the GC runs, it collects the unreachable objects and leaves the reachable ones in that same space. Just one space seems sufficient, so what advantage do three different
spaces provide over one?
1) Minor GC occurs on the new generation; major GC occurs on the old generation. Whether it runs in parallel with the application depends on the kind of GC: only CMS and G1 can work concurrently.
2) Yes, moving an object during GC changes its physical location, so all pointers to this object are updated.
3) This is to avoid frequent and long application freezes during GC. If there were one big heap, the application would often freeze for long periods of time. The JVM creates objects in the small young generation; GCs in it occur frequently but finish quickly. Most objects created by the JVM die quickly and never reach the old generation, so major GC happens rarely, or may never happen at all (the sketch below illustrates this).
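As a rough illustration of points 1 and 3 (my own sketch, not part of the original answer), the standard `java.lang.management` API can show how often each collector has run. The collector names in the output (`PS Scavenge`, `PS MarkSweep`, `G1 Young Generation`, etc.) depend on which GC the JVM is using:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatch {
    public static void main(String[] args) {
        // Churn through short-lived objects so eden fills up repeatedly;
        // almost none of these should ever reach the old generation.
        for (int i = 0; i < 10_000_000; i++) {
            byte[] temp = new byte[128];
            if (temp.length == 0) System.out.println(); // keep allocation live
        }
        // One MXBean per collector; expect many minor collections and few
        // (possibly zero) major ones for an allocation pattern like the above.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```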
The source for my answers is this Oracle article on GC basics, so these answers apply to HotSpot. I have no clue about other VMs, although I would guess that the general idea might remain the same if the same implementation techniques were used.
Minor vs. major GC collection: what is the difference between the two, such that one is called a major and the other a minor collection?
Minor GC is a GC of the young generation, where new objects are allocated. Major GC is a GC of all live objects, including the permanent generation (which is a bit interesting to me, but that's what the article says). Also, it appears that both major and minor GCs are stop-the-world events.
What actually happens when an object is moved from eden to a survivor space? Is the memory location of the object changed internally?
I can't seem to find a reference at the moment, but I would assume so. Allowing the memory location to change lets compaction be performed, which improves the performance and ease of memory allocation. Allowing each space to be compacted separately makes sense, so I would guess that moving an object from one part of the heap to another involves physically moving it from one memory location to another.
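To make "compaction" concrete, here is a toy sketch of sliding compaction (my own illustration, not HotSpot code): live slots are slid toward the start of a space so the free memory becomes one contiguous block.

```java
// Toy model of sliding compaction within one space. A real collector moves
// variable-sized objects and must also rewrite every pointer to each moved
// object; here each "object" is just one int slot with a mark bit.
class Compactor {
    final int[] data;      // the toy heap
    final boolean[] live;  // mark bits: which slots hold live objects

    Compactor(int[] data, boolean[] live) {
        this.data = data;
        this.live = live;
    }

    /** Slides live slots to the front; returns the start of the free region. */
    int compact() {
        int dest = 0;
        for (int src = 0; src < data.length; src++) {
            if (live[src]) {
                data[dest] = data[src]; // move the object toward the start
                live[dest] = true;
                if (dest != src) live[src] = false;
                dest++;
            }
        }
        return dest; // everything from here up is one contiguous free block
    }
}
```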
Why not have just one space instead of three (i.e., eden, survivor, and tenured)?
Short answer: efficiency. If you had only one space, you'd have to check all objects on every GC, which becomes inefficient if you have lots of long-lived objects (and you're almost guaranteed to have a decent number in a long-running application), as those long-lived objects are likely to still be reachable from one GC to the next. Splitting the heap lets the GC be optimized: most of the GC effort can be concentrated where object lifetimes can be assumed to be short (i.e., the young generation), with longer-lived objects being collected less frequently.
I have followed up with a couple of good questions and their answers, but I still have a doubt.
This is what I understand, and I would like to see if my understanding is correct.
A GC (Allocation Failure) kicks in whenever new memory needs to be allocated in the YoungGen.
Also, depending on the size of an object, some objects might have to be pushed to the OldGen, and significantly larger objects could be moved directly to the OldGen.
Application behavior: the reason for the 'Allocation Failure' was the creation of huge strings. Debugging further with JFR and a heap dump, everything points to a lot of char[] and String objects that are created in our system on a temporary basis (i.e., YoungGen candidates). Some of these strings are indeed huge (~25 KB each). However, there was enough space available in the YoungGen as per the error message, and the heap is not even close to the maximum memory possible.
During the same time, the OldGen kept growing and was not getting cleaned even after a full GC. There could be another memory leak, but nothing points to that. So I don't understand why the OldGen remains at the same level even after a full GC.
Apart from validating my understanding, the question is: can the creation of a lot of temporary String/char[] objects (via strA + strB, new String()/new StringBuilder().toString(), String.split(), String.substring(), Stream-to-buffer conversion, etc.) cause the GC to run very frequently even when the application has a lot of memory available in the YoungGen and in the heap in general? If yes, when, and what are the alternatives?
Thanks!
I would say that the answer is a conditional yes.
Remember that the young gen is split into three parts: eden, S0, and S1. This means you do not have as much memory in the young gen as you might think. If you overflow one of the survivor spaces, the remainder is pushed to the old gen (premature promotion), filling it up. Note also that promotion from the young gen to the old gen is based on the number of GC cycles an object has survived: if you have frequent young-gen GCs while objects that are supposed to be short-lived are still in use (because you have not finished with the temporary objects), those objects will be moved to the old gen, and you will fill it up. Note also that just because you run a full GC, there is no guarantee that you will actually get any memory back.
So, use a tool like Censum to analyse your GC logs, and look especially for premature promotion.
It might be that you will have to resize your young gen/old gen ratio.
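To make the asker's "temporary strings" scenario concrete, here is a minimal sketch of the allocation pattern in question (my own illustration, not the asker's code): repeated concatenation allocates a fresh String and char[] in eden on every iteration, while a reused StringBuilder produces far fewer temporaries.

```java
public class StringChurn {
    // Heavy churn: each `result + piece` allocates a new String (and its
    // backing char[]) in eden, so a hot loop like this can trigger frequent
    // minor GCs even when the heap as a whole is nowhere near full.
    static String concatenate(String[] pieces) {
        String result = "";
        for (String piece : pieces) {
            result = result + piece; // new temporary per iteration
        }
        return result;
    }

    // Low churn: one growing buffer, one final String.
    static String build(String[] pieces) {
        StringBuilder sb = new StringBuilder();
        for (String piece : pieces) {
            sb.append(piece);
        }
        return sb.toString();
    }
}
```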
I have gone through this link, but I am still
confused about what actually happens during minor and major GC collections.
Say I have 100 objects in the young generation, of which 85 are unreachable. When a minor GC runs,
it will reclaim the memory of the 85 objects and move the 15 live objects to the older (tenured) generation.
Now 15 live objects exist in the older generation, of which 3 are unreachable. Say a major GC takes place. It will keep
the 12 reachable objects as they are and reclaim the memory of the 3 unreachable ones. A major GC is said to be slower than a minor GC. My question is: why? Is it because a major GC generally operates on a greater number of objects than a minor one, since minor GCs occur more frequently than major ones?
As per my understanding, a major GC should be faster, as it needs to do less work (i.e., reclaiming memory from unreachable objects) than a minor GC, because of the
high mortality rate in the young generation.
1) A minor GC will first move the 15 objects to one of the survivor spaces, e.g., SS1; the next GC will move those that are still alive to SS2; the next one will move the survivors back to SS1, and so forth. Only those that survive several (e.g., 8) relocations (minor GCs) finally go to the old generation (see the sketch below).
2) A major GC happens only when the JVM cannot allocate an object in the old generation because there is no free space left in it. To reclaim the memory of dead objects, the GC goes over all objects in the old generation. Since the old generation is several times larger than the new generation, it may hold several times more objects, so processing it takes several times longer.
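The survivor-space ping-pong and age-based promotion described in point 1 can be modeled roughly like this (an illustrative toy, not HotSpot's actual implementation; the threshold of 8 is taken from the answer's example and is tunable in HotSpot via `-XX:MaxTenuringThreshold`):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a minor GC: survivors bounce between two survivor spaces,
// aging by one on each copy, until they reach the tenuring threshold.
class MinorGcModel {
    static final int TENURING_THRESHOLD = 8; // from the answer's example

    static class Obj { int age; boolean reachable = true; }

    List<Obj> eden = new ArrayList<>();
    List<Obj> fromSpace = new ArrayList<>(); // populated survivor space
    List<Obj> toSpace = new ArrayList<>();   // empty survivor space
    List<Obj> oldGen = new ArrayList<>();

    void minorGc() {
        for (List<Obj> src : List.of(eden, fromSpace)) {
            for (Obj o : src) {
                if (!o.reachable) continue; // dead objects are simply not copied
                o.age++;
                if (o.age >= TENURING_THRESHOLD) {
                    oldGen.add(o);          // promoted to the tenured generation
                } else {
                    toSpace.add(o);         // survives into the other space
                }
            }
        }
        eden.clear();
        fromSpace.clear();
        List<Obj> tmp = fromSpace;          // swap the roles of the two spaces
        fromSpace = toSpace;
        toSpace = tmp;
    }
}
```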
My question is: why? Is it because a major GC generally operates on a greater number of objects than a minor one, since minor GCs occur more frequently than major ones?
You pretty much hit the nail on the head. From the Oracle article, emphasis mine:
Often a major collection is much slower because it involves all live objects.
So not only does a major GC analyze those 15 objects in the old generation, it also goes through the young generation (again) and the permgen and collects those areas of the heap. A minor GC only analyzes the young generation, so there generally wouldn't be as many objects to look at.
As per my understanding, a major GC should be faster, as it needs to do less work (i.e., reclaiming memory from unreachable objects) than a minor GC, because of the high mortality rate in the young generation.
I think I understand why you think that. I could imagine a major GC being run very soon after a minor GC, when objects are promoted to an almost-full old generation. In that case, the young generation would (presumably) not contain many objects to collect.
However, if I'm remembering things correctly, the old generation is usually larger than the young generation, so not only does the GC have to analyze more space, it also has to go over the permgen again, as well as the remaining objects in the young generation (again). That is probably why a major GC is slower: there is simply more stuff to do. You might be able to make a major GC faster than a minor one by sizing the generations so that the young generation is larger than both the old generation and the permgen, but I don't think that would be a common setting to use...
I've read a few articles about how garbage collection works and still don't understand how using generations helps. As I understood it, the main idea is that we start collection from the youngest generation and move on to older generations. But why did the authors of this idea decide that starting from the youngest generation is the most efficient way?
The older the generation, the more times the object has been used, and it will possibly be needed again.
Removing a recently created object makes no sense; it may be a temporary (locally scoped) object.
The authors start with the youngest generation first simply because that is what fills up first after your application starts; in reality, however, which generation is swept, and when, is non-deterministic as your application runs.
The important points with generational GC are:
the young generation uses a copying collector, which copies objects from eden and the current survivor space into a space it knows to be empty (the unused survivor space); it is therefore fast, and the GC pause is minimal.
add to this the fact that most objects die young: the pause required to copy the small number of surviving objects out of eden and the current survivor space is short, as only objects with live references are copied, after which eden and the previous survivor space can be wiped.
after being copied several times, objects are moved to the tenured (old) generation. Eventually the tenured generation fills up; this time, however, there is no clean space to copy the objects into, so the garbage collector has to sweep and compact within the generation, which is slow (compared to the copy performed in eden and the survivor space), meaning a longer pause.
the good news, based on the most-objects-die-young heuristic, is that major GCs happen much less frequently than minor ones, keeping GC pauses to a minimum over the lifetime of an application.
there is also the benefit that all new objects are allocated at the top of the heap, so minimal instructions are required to do so (see the bump-pointer sketch below), with defragmentation occurring naturally as part of the copy process.
Both these pages, Oracle Garbage Collection Tuning and Useful JVM Flags – Part 5 (Young Generation Garbage Collection), describe this.
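The "allocated at the top of the heap" point above is usually called bump-the-pointer allocation. A toy sketch of the idea (my own illustration, not JVM code):

```java
// Toy bump-the-pointer allocator, roughly how eden allocation works: since
// everything beyond `top` is known to be empty, allocating is just a bounds
// check plus a pointer increment; no free-list search is needed.
class BumpAllocator {
    private final byte[] space;
    private int top = 0; // offset of the next free byte

    BumpAllocator(int size) {
        space = new byte[size];
    }

    /** Returns the offset of the new block, or -1 when a GC is needed. */
    int allocate(int size) {
        if (top + size > space.length) {
            return -1;      // space exhausted: trigger a minor GC
        }
        int result = top;
        top += size;        // "bump" the pointer
        return result;
    }

    /** After the survivors have been evacuated, the space is empty again. */
    void reset() {
        top = 0;
    }
}
```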
Read this one.
Using different generations makes the allocation of objects easy and fast, as MOST of the allocations are done in a single region of the heap: eden. Based on the observation from the weak generational hypothesis that most objects die young, collections in the young generation find mostly garbage, so they reclaim a lot of memory, and the region is relatively small compared to the heap, which means the time taken to scan the objects is also short. That's why young-generation GCs are fast.
For more details on GC and generations, you can refer to this
I've read an extensive amount of documentation about the HotSpot GC in Java SE 6 and 7. When talking about strategies for obtaining contiguous regions of free memory, two 'competing' approaches are presented: evacuation (usually applied to the young gen), where live objects are copied from a 'from' space to an empty 'to' space, and compaction (the fallback of CMS), where live objects are moved to one side within a fragmented region to form one contiguous block of used and one of unused memory.
Both approaches take time proportional to the size of the live set. The difference is that evacuation requires twice the space of the live set, whereas compaction does not.
Why do we need the evacuation technique at all? The amount of copying that needs to be done is the same, yet it requires reserving more heap space, and it does not allow for faster remapping of references.
True, evacuation can be executed in parallel (whereas compaction cannot, or at least not as easily), but this trait is never mentioned and seems not that important (considering that remapping is much more expensive than moving).
One big problem is that with "evacuation" the vacated space is indeed vacant, while with "compaction" some other object Y may be moved into the space where object X was. This makes it a lot harder to correct pointers, since one can't simply use the fact that a pointer points to an invalid location as a clue that it needs to be updated. And one can't store the "forwarding pointer" in the "invalid" location.
This makes the GC much less concurrent: the app must be in a "GC freeze" for a longer period of time.
Compaction is more suitable in cases where the number of reclaimable objects is expected to be low (e.g., the tenured generation), because after a few GC cycles the long-lived objects tend to occupy the lower portion of the heap, so the collector has less work to do. If a copying collector were used in such a case, it would perform very poorly, because almost the same set of surviving objects from the previous cycles would need to be copied again and again from one location to the other.
Copying is suitable when the number of reclaimable objects is very high (e.g., the young generation), since very few surviving objects need to be copied. If compaction were used in such a case, it might perform poorly, because the surviving objects may be scattered across the heap.
Other than that, as mentioned in @Hot Licks's answer, a copying collector lets us store a forwarding pointer, which prevents running into an infinite loop when another object from the same "from" space refers to an already-moved object.
Also, compaction cannot begin until all the live objects have been identified, whereas live objects can be copied to their new location as soon as they are identified (using multiple threads).
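Here is a toy sketch of the forwarding-pointer idea in an evacuating (copying) collector (my own illustration of the technique, not any JVM's code): the first time an object is reached it is copied and a forwarding pointer is left behind; any later reference to the old copy follows the pointer instead of copying again, which also handles cycles.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy evacuating collector with forwarding pointers. Objects are copied to
// "to-space" on first contact; the from-space copy then points at its new
// home, so every other reference to it gets redirected, not re-copied.
class CopyingGc {
    static class Obj {
        Obj[] refs = new Obj[0]; // outgoing references
        Obj forwardedTo;         // null until this object has been evacuated
    }

    static Obj evacuate(Obj o, Deque<Obj> worklist) {
        if (o == null) return null;
        if (o.forwardedTo != null) {
            return o.forwardedTo;    // already moved: just follow the pointer
        }
        Obj copy = new Obj();        // stands in for "allocate in to-space"
        copy.refs = o.refs.clone();
        o.forwardedTo = copy;        // leave the forwarding pointer behind
        worklist.add(copy);          // its refs still point into from-space
        return copy;
    }

    static Obj collect(Obj root) {
        Deque<Obj> worklist = new ArrayDeque<>();
        Obj newRoot = evacuate(root, worklist);
        while (!worklist.isEmpty()) {
            Obj copy = worklist.poll();
            for (int i = 0; i < copy.refs.length; i++) {
                copy.refs[i] = evacuate(copy.refs[i], worklist);
            }
        }
        return newRoot;
    }
}
```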
As I understand it, a generational GC divides objects into generations,
and on each cycle the GC runs on only one generation.
Why? Why is garbage-collecting only one generation enough?
P.S.: I understand all this from here.
If you read the link I provided for your earlier question about generational GC, you will understand why it does so: a collection cycle happens when the white-set memory fills up.
To optimize for this scenario, memory is managed in generations, or memory pools holding objects of different ages. Garbage collection occurs in each generation when the generation fills up. Objects are allocated in a generation for younger objects or the young generation, and because of infant mortality most objects die there. When the young generation fills up it causes a minor collection. Minor collections can be optimized assuming a high infant mortality rate. The costs of such collections are, to the first order, proportional to the number of live objects being collected. A young generation full of dead objects is collected very quickly. Some surviving objects are moved to a tenured generation. When the tenured generation needs to be collected there is a major collection that is often much slower because it involves all live objects.
Basically, objects are divided into generations (based on the hypothesis about their lifetimes) and placed into the memory pool for a particular generation. When that memory pool fills up, a GC cycle begins; the objects that still have references are moved to another memory pool, and fresh objects are added.
It's not always enough; it's just that it's usually enough, so it saves time by not examining objects that are likely to stay alive anyway.
Every object has a generation, saying how many garbage collections it has survived. If an object has survived a few garbage collections, chances are that it will also survive the next one.
MSDN has a great explanation:
A generational garbage collector makes the following assumptions:
The newer an object is, the shorter its lifetime will be.
The older an object is, the longer its lifetime will be.
Newer objects tend to have strong relationships to each other and are frequently accessed around the same time.
Compacting a portion of the heap is faster than compacting the whole heap.
Because of this, you could save some time by only trying to collect younger objects, and collecting the older generations only if that doesn't free up enough memory.
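Sketched as code, the policy in that last sentence might look like this (purely illustrative numbers and logic, not how any real collector is written):

```java
// Toy generational policy: try the cheap young-generation collection first,
// and fall back to a full collection only if that does not free enough room.
class GenerationalPolicy {
    long capacity = 1_000, used = 900;
    long youngGarbage = 300; // dead memory reclaimable by a minor GC
    long oldGarbage = 50;    // dead memory reclaimable only by a full GC

    void allocate(long size) {
        if (capacity - used >= size) { used += size; return; }
        used -= youngGarbage; youngGarbage = 0;  // minor GC: young gen only
        if (capacity - used >= size) { used += size; return; }
        used -= oldGarbage; oldGarbage = 0;      // full GC: every generation
        if (capacity - used >= size) { used += size; return; }
        throw new OutOfMemoryError("full GC could not free enough memory");
    }
}
```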
The answer really is right there:
It has been empirically observed that in many programs, the most recently created objects are also those most likely to become unreachable quickly (known as infant mortality or the generational hypothesis).
And
Generational garbage collection is a heuristic approach, and some unreachable objects may not be reclaimed on each cycle. It may therefore occasionally be necessary to perform a full mark and sweep or copying garbage collection to reclaim all available space.
Basically, generational collection gives you better performance than a full garbage collection at the cost of completeness. That's why a mixture of the two is used in practice.