-XX:G1ReservePercent and to-space exhausted - java

I'm trying to understand what the -XX:G1ReservePercent actually does. The descriptions I found in the official documentation are not really comprehensive:
Sets the percentage of reserve memory to keep free so as to reduce the
risk of to-space overflows. The default is 10 percent. When you
increase or decrease the percentage, make sure to adjust the total
Java heap by the same amount.
and the description of the to-space exhausted log entry is this:
When you see to-space overflow/exhausted messages in your logs, the G1
GC does not have enough memory for either survivor or promoted
objects, or for both.
[...]
To alleviate the problem, try the following adjustments:
Increase the value of the -XX:G1ReservePercent option (and the total
heap accordingly) to increase the amount of reserve memory for "to-space".
[...]
Judging by the quote, to-space exhausted means that when performing a mixed evacuation we do not have enough free regions to move survivors to.
But this then contradicts the following official tuning advice for the case of a Full GC (emphasis mine):
Force G1 to start marking earlier. G1 automatically determines the
Initiating Heap Occupancy Percent (IHOP) threshold based on earlier
application behavior. If the application behavior changes, these
predictions might be wrong. There are two options: Lower the target
occupancy for when to start space-reclamation by increasing the buffer
used in an adaptive IHOP calculation by modifying
-XX:G1ReservePercent;
So what is the buffer, and what does setting -XX:G1ReservePercent actually do (at first glance AdaptiveIHOP has nothing to do with it...)?
Does it keep some heap space always reserved, so that when a mixed evacuation occurs we always have free regions to move survivors to?
Or is the space used for G1's internal housekeeping tasks? If so, it is not clear what data the to-space contains that could exhaust it.

To me, understanding what it really does means going to the source code. I chose jdk-15.
The best description of this flag is here:
It determines the minimum reserve we should have in the heap to minimize the probability of promotion failure.
Excellent, so this has to do with "promotion failures" (whatever those are). But according to your quote in bold, this also has something to do with AdaptiveIHOP? Well yes, this parameter matters only with AdaptiveIHOP (which is on by default).
Another thing to notice is that G1ReservePercent is not guaranteed to be maintained all the time; it is a best effort.
If you look at how it is used in the very next line:
if (_free_regions_at_end_of_collection > _reserve_regions) {
  absolute_max_length = _free_regions_at_end_of_collection - _reserve_regions;
}
things start to make some sense (for that bold statement). Notice how _reserve_regions is subtracted in that computation. G1 will reserve that space for "promotion failures" (we will get to those).
If G1 reserves that space, less space is available for actual allocations. So if you "increase this buffer" (make G1ReservePercent bigger, as the quote suggests), the space for new objects becomes smaller, GC needs to kick in sooner, and therefore space reclamation has to start sooner too ("Lower the target occupancy for when to start space-reclamation..."). It is one complicated sentence, but in simple words it means:
If you increase G1ReservePercent, space will need to be reclaimed sooner (more frequent GC cycles).
Which, to be fair, is obvious. Not that I agree you should increase that value, but that is what the sentence says.
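To put rough numbers on it, here is a small back-of-the-envelope sketch in plain Java (the heap size, region size and free-region count are made up; the real calculation lives in the C++ G1Policy code quoted above): the reserve is a fixed percentage of all regions, and it is subtracted from the free regions before the next young generation is sized.
public class G1ReserveSketch {
    public static void main(String[] args) {
        long heapBytes = 4L * 1024 * 1024 * 1024;   // assume a 4 GB heap
        long regionBytes = 4L * 1024 * 1024;        // assume 4 MB regions
        int g1ReservePercent = 10;                  // the default

        long totalRegions = heapBytes / regionBytes;                                    // 1024
        long reserveRegions = (long) Math.ceil(totalRegions * g1ReservePercent / 100.0); // ~103

        long freeRegionsAtEndOfCollection = 300;    // hypothetical value after some GC

        // Mirrors the quoted C++: the reserve is taken off the top before sizing the next young gen.
        long maxYoungRegions = Math.max(0, freeRegionsAtEndOfCollection - reserveRegions);
        System.out.println("reserve regions: " + reserveRegions
                + ", regions usable for the next young gen: " + maxYoungRegions);
    }
}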
After a certain GC cycle, be that minor, mixed or major, G1 knows how many regions are free:
_free_regions_at_end_of_collection = _g1h->num_free_regions();
Which of course, is the length of "free list":
return _free_list.length();
Based on this value and _reserve_regions (G1ReservePercent), plus the target pause time (200 ms by default), it can compute how many regions it needs for the next cycle. By the time the next cycle ends, there might be no empty regions left (or the ones that are empty cannot hold all the live objects that are supposed to be moved). Where is Eden supposed to move live objects to (Survivor), or, if an old region is fragmented, where are its live objects supposed to be moved to defragment it? This is what this buffer is for.
It acts as a safety net (the comments in the code make this far easier to understand). It exists in the hope of avoiding a Full GC: if there are no (or not enough) free regions to move live objects to, a Full GC has to happen (most probably followed by a young GC as well).
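As a grossly simplified illustration of that safety net (the numbers are invented, and G1 actually makes this decision per region during evacuation rather than as one up-front check): if the live data that has to be copied does not fit into the free regions, you get the to-space exhausted situation and a Full GC follows.
public class ToSpaceSketch {
    public static void main(String[] args) {
        long regionBytes = 4L * 1024 * 1024;          // assume 4 MB regions
        long freeRegions = 5;                          // includes whatever the reserve kept back
        long liveBytesToEvacuate = 30L * 1024 * 1024;  // survivors plus objects moved out of old regions

        long bytesAvailable = freeRegions * regionBytes;
        if (liveBytesToEvacuate > bytesAvailable) {
            // Roughly the situation behind "to-space exhausted":
            // nowhere to copy the live objects, so a much more expensive Full GC follows.
            System.out.println("to-space exhausted -> full GC");
        } else {
            System.out.println("evacuation fits into the free regions");
        }
    }
}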
This value is usually too small when that message shows up in the logs. Either increase it or, much better, give more heap to the application.

Related

Why doesn't the JVM use more Heap Memory

I tried to increase my heap memory like this:
-Xmx9g -Xms8g
to be honest, just because I can.
Now I'm wondering why the JVM doesn't use more of it and schedule the GC less frequently.
System:
JVM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03, mixed mode)
Java: version 1.7.0_51, vendor Oracle Corporation
Edit:
I want to improve my configuration for a modelling process (throughput beats responsiveness).
The HotSpot JVM in Java 1.7 separates the heap into a few spaces, and the relevant ones for this discussion are:
Eden, where new objects go
Survivor, where objects go if they're needed after Eden is GC'ed
When you allocate a new object, it's simply appended into the Eden space. Once Eden is full, it's cleaned in what's known as a "minor collection." Objects in Eden that are still reachable are copied to Survivor, and then Eden is wiped (thus collecting any objects that weren't copied over).
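As a toy illustration of why that append is cheap (this is not how HotSpot is implemented, just the bump-the-pointer idea): allocation in Eden is little more than incrementing an offset, and a minor collection happens when the offset reaches the end.
public class EdenSketch {
    private final byte[] eden = new byte[1024];  // pretend Eden is 1 KB
    private int top = 0;                         // next free offset

    // Returns the offset of the "object", or -1 when Eden is full (a minor GC would run here).
    int allocate(int size) {
        if (top + size > eden.length) {
            return -1;   // Eden full: copy live objects to Survivor, then reset top to 0
        }
        int address = top;
        top += size;     // the actual allocation is just this addition
        return address;
    }

    public static void main(String[] args) {
        EdenSketch e = new EdenSketch();
        int n = 0;
        while (e.allocate(32) != -1) {
            n++;
        }
        System.out.println("allocated " + n + " objects before Eden filled up");
    }
}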
What you want is to fill up Eden, not the heap as a whole.
For example, take this simple app:
public class Heaps {
    public static void main(String[] args) {
        Object probe = new Object();
        for (;;) {
            Object o = new Object();
            if (o.hashCode() == probe.hashCode()) {
                System.out.print(".");
            }
        }
    }
}
The probe stuff is just there to make sure the JVM can't optimize away the loop; the repeated new Object() is really what we're after. If you run this with the default JVM options, you'll get a graph like the one you saw. Objects are allocated on Eden, which is just a small fraction of the overall heap. Once Eden is full, it triggers a minor collection, which wipes out all those new objects and brings the heap usage down to its "baseline," close to 0.
So, how do you fill up the whole heap? Set Eden to be very big! Oracle publishes its heap tuning parameters, and the two that are relevant here are -XX:NewSize and -XX:MaxNewSize. When I ran the above program with -Xms9g -XX:NewSize=8g -XX:MaxNewSize=8g, I got something closer to what you expected.
In one run, this used up nearly all of the heap, and all of the Eden space I specified; subsequent runs only took up a fraction of the Eden I specified, as you can see here. I'm not quite sure why this is.
VisualVM has a plugin called Visual GC that lets you see more details about your heap. Here's a screen shot from mine, taken at just the right moment that shows Eden nearly full, while the old space is pretty much empty (since none of those new Object()s in the loop survive Eden collections).
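If you want to check the resulting layout without VisualVM, the standard MemoryPoolMXBean API can print the per-pool sizes. Note that the pool names depend on the collector in use (e.g. "PS Eden Space" with the parallel collector), so treat the names in the output as examples rather than guarantees.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ShowPools {
    public static void main(String[] args) {
        // Lists every memory pool (Eden, Survivor, Old/Tenured, plus non-heap pools)
        // with its current usage and maximum size.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s used=%,d max=%,d%n",
                    pool.getName(),
                    pool.getUsage().getUsed(),
                    pool.getUsage().getMax());
        }
    }
}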
(I'll try to answer the question of "why" from a different angle here.)
Normally you want to balance two things with your GC settings: throughput and responsiveness.
Throughput is determined by how much time is spent doing GC overall, responsiveness is determined by the lengths of the individual GC runs. Default GC settings were determined to give you a reasonable compromise between the two.
High throughput means that measured over a long period of time the GC overhead will be less.
High responsiveness on the other hand will make it more likely that a short piece of code will run in more or less the same time and won't be held up for very long by GC.
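If you want a rough number for the throughput side, the standard GarbageCollectorMXBean API reports cumulative GC time, so you can estimate the fraction of run time spent collecting. This is only a coarse proxy; it says nothing about the length of individual pauses.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverhead {
    public static void main(String[] args) {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // cumulative since JVM start; -1 if unavailable
            if (t > 0) {
                gcMillis += t;
            }
        }
        long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC overhead so far: %.2f%%%n", 100.0 * gcMillis / uptimeMillis);
    }
}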
If you tune your GC parameters to allow all 9 GB of heap to fill up, what you'll find is that throughput might increase (although I'm not certain it always will), but when the GC does eventually run, your application freezes for several seconds. This might be acceptable for a process that runs a single, long-running calculation, but not for an HTTP server and even less so for a desktop application.
The moral of the story is: you can tune your GC to do whatever you want but unless you've got a specific problem that you diagnosed (correctly), you're likely to end up worse than with the default settings.
Update: Since it seems you want high throughput but aren't bothered about pauses, your best option is to use the throughput collector (-XX:+UseParallelGC). I obviously can't give you the exact parameters; you have to tune them using this guide, observing the effects of each change you make. I probably don't need to tell you this, but my advice is to always change one parameter at a time, then check how it affects performance.

What is the normal behavior of Java GC and Java Heap Space usage?

I am unsure whether there is a generic answer for this, but I was wondering what the normal Java GC pattern and java heap space usage looks like. I am testing my Java 1.6 application using JMeter. I am collecting JMX GC logs and plotting them with JMeter JMX GC and Memory plugin extension. The GC pattern looks quite stable with most GC operations being 30-40ms, occasional 90ms. The memory consumption goes in a saw-tooth pattern. The JHS usage grows constantly upwards e.g. to 3GB and every 40 minutes the memory usage does a free-fall drop down to around 1GB. The max-min delta however grows, so the sawtooth height constantly grows. Does it do a full GC every 40mins?
Most of your description is, in general, how the GC works. However, none of your specific observations, especially the numbers, hold for the general case.
To start with, each JVM has one or several GC implementations, and you can choose which one to use. Take the most widely used one, i.e. the Sun JVM (I like to call it that), and the common server GC pattern as an example.
Firstly, the memory is divided into 4 regions.
A young generation holds all of the recently created objects. When this generation is full, the GC does a stop-the-world collection: it stops your program, executes a black-gray-white marking algorithm, finds the obsolete objects and removes them. This is your 30-40 ms.
If an object survives a certain number of GC rounds in the young gen, it is moved into a "swap" (survivor) space. The swap space holds the objects for another number of GCs and then moves them to the old generation. There are two swap spaces which do a double-buffering kind of thing so the young gen can work faster. If the young gen dumps stuff into the swap space and finds it mostly full, a GC happens on the swap space and potentially moves the surviving objects to the old gen. This most likely accounts for your 90ms, though I am not 100% sure how the swap spaces work. Someone correct me if I am wrong.
All the objects that survive the swap spaces are moved to the old generation. The old generation is only GC-ed when it is mostly full. In your case, every 40 minutes.
There is also a "permanent gen" which holds the bytecode and resources loaded from your JARs.
The sizes of all these areas can be adjusted with JVM parameters.
You can try VisualVM, which gives you a dynamic view of how this works.
P.S. Not all JVMs / GCs work the same way. If you use the G1 collector, or JRockit, things might happen slightly differently, but the general idea holds.
Java GC works in terms of generations of objects. There are young, tenured and permanent generations. It seems like in your case every 30-40ms the GC processes only the young generation (and transfers surviving objects into the tenured generation), and every 40 minutes it performs a full collection (which causes a stop-the-world pause). Note: this happens not by time, but by percentage of used memory.
There are several JVM options which allow you to choose the generations' sizes, the type of GC (there are several GC algorithms; in Java 1.6 the Serial GC is used by default, and you can switch with flags such as -XX:+UseConcMarkSweepGC), and other parameters of how the GC works.
You'd better find some good articles about generations and the different types of GC (the algorithms are really different; some of them avoid long stop-the-world pauses almost entirely!).
Yes, most likely. Instead of guessing, you can use jstat to monitor your GCs.
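If attaching jstat is inconvenient, a rough in-process alternative is to poll the per-collector beans; they show how many young vs. full collections have run and how long they took. The bean names vary by collector (e.g. "PS Scavenge" and "PS MarkSweep" with the throughput collector), so don't hard-code them.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCounts {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // Counts and times are cumulative since JVM start.
                System.out.printf("%-20s count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(5000);   // print every 5 seconds, similar to a jstat interval
        }
    }
}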
I suggest you use a memory profiler to make sure there is nothing simple you can do to reduce the amount of garbage you are producing.
BTW, if you increase the size of the young generation, you can reduce how much garbage makes it into the tenured space, reducing the frequency of full collections. You may find you get less than one full collection per day if you tune it enough.
For a more extreme case, I have tuned a trading system to less than one collection per day (minor or major)

Repeated Full GC with available Heap

I'm experiencing repeated Full GCs even when the heap is not fully used.
This is what the GC logs look like: http://d.pr/i/iFug (the blue line is the used heap and the grey rectangles are Full GCs).
It seems to be a problem similar to the one posted in this question: Frequent full GC with empty heap
However, that thread didn't provide any actual answers to the problem. My application does use RMI, and the production servers are indeed on a 1.6 release older than Update 45, which increased the RMI GC interval from 1 minute to 1 hour (http://docs.oracle.com/javase/7/docs/technotes/guides/rmi/relnotes.html). However, from the rest of the log, I can't see that Full-GC-every-minute pattern.
What could possibly be causing this?
Most likely the cause is that you have reached the current size of the heap. The current heap size is smaller than the maximum you set and is adjusted as the program runs.
E.g. say you set a maximum of 1 GB; the initial heap size might be 256 MB, and when you reach 256 MB a full GC is performed. After this GC it might decide that 400 MB would be a better size, and when that is reached another full GC is performed, and so on.
You get a major collection when the tenured space fills up or fails to find free space, e.g. if it is fragmented.
You also get full collections if your survivor spaces are too small.
In short, the most likely cause is the GC tuning parameters you used. I suggest you simplify your tuning parameters until the system behaves the way you expect.
As noted in the linked thread, disable explicit GC with -XX:+DisableExplicitGC and see if the Full GC pattern still occurs. The RMI code may be triggering explicit GC at a fixed interval, which may not be desirable in some cases.
If the FullGCs still occur, I would take thread dumps and possibly a heap dump to analyze the issue.
Also, use jstat to see the occupation of Eden, Survivor, OldGen spaces.

Java Heap Behaviour

Given a java memory configuration like the following
-Xmx2048m -Xms512m
What would be the behaviour of the VM when memory usage increases past 512m? Is there a particular algorithm that it follows, i.e. does it go straight to the max, does it double, does it grow in increments, or does it only allocate as it needs memory? How expensive an operation is it?
I'm looking specifically at the Oracle/Sun JVM, version 1.6. I assume this is documented on the Oracle website somewhere, but I'm having trouble finding it.
It's the garbage collector's job to decide when resizing is necessary, so it is governed by the GC parameter -XX:MinHeapFreeRatio. If the GC needs more space, the heap grows until the percentage of heap specified by that value is free.
A typical value on a modern platform is 40ish, so if you start at 512MB and have less than 40% free, meaning you exceeded ~308MB, it will increase until 40% is free again. So say after collection there are still 400MB worth of live objects, your heap will go up to ~667MB. (Yes it is named ratio but expects a % value as argument... search me!)
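The arithmetic above as a tiny sketch (the 40% ratio and the 400 MB of live data are just the example values from the previous paragraph):
public class HeapGrowthSketch {
    public static void main(String[] args) {
        double minHeapFreeRatio = 0.40;   // keep at least 40% of the heap free after a collection
        double liveMb = 400;              // live data left after a collection
        double targetHeapMb = liveMb / (1 - minHeapFreeRatio);
        System.out.printf("heap grows to about %.0f MB%n", targetHeapMb);   // ~667 MB
    }
}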
Note this is a bit inexact: the garbage collector is "generational" and can actually resize individual generations, but it also enforces ratios between the generations' sizes, and if your objects are split between long-lived and short-lived roughly the way it estimates, this works out pretty well as a back-of-the-envelope estimate.
This applies to defaults in Java 6. If you use custom garbage collector config it may be different. You can read about that here: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing.total_heap
(The "expense" of the operation kind of depends on the operating system and what else is going on. If the system is loaded down and the OS has to do some swapping to make a contiguous block of memory for you, then it could be very expensive!)
Use of -verbose:gc and/or -XX:+PrintGCDetails options should give you many finer details.
Here is an example output with the -verbose:gc option switched on:
[GC 325407K->83000K(776768K), 0.2300771 secs]
[GC 325816K->83372K(776768K), 0.2454258 secs]
[Full GC 267628K->83769K(776768K), 1.8479984 secs]
An explanation of the above taken from the official document:
Here we see two minor collections followed by one major collection.
The numbers before and after the arrow (e.g., 325407K->83000K from the
first line) indicate the combined size of live objects before and
after garbage collection, respectively. After minor collections the
size includes some objects that are garbage (no longer alive) but that
cannot be reclaimed. These objects are either contained in the tenured
generation, or referenced from the tenured or permanent generations.
The next number in parentheses (e.g., (776768K) again from the first
line) is the committed size of the heap: the amount of space usable
for java objects without requesting more memory from the operating
system. Note that this number does not include one of the survivor
spaces, since only one can be used at any given time, and also does
not include the permanent generation, which holds metadata used by the
virtual machine.
The last item on the line (e.g., 0.2300771 secs) indicates the time
taken to perform the collection; in this case approximately a quarter
of a second.
The format for the major collection in the third line is similar.
Running an application this way along with updating minimum and maximum heap sizes can give good insight into heap allocation and garbage collection patterns of the VM.
It's worth noting that on initialization the JVM virtually reserves the maximum address space but does not allocate physical memory. Typically the JVM divides the space into an old and a young generation, and there are intermediary spaces within the young generation. Newly created objects live in the young generation.
When this space fills up, a GC is invoked which moves the reachable objects into one of those intermediary spaces, called survivor spaces, within the young generation. The GC might be a "stop-the-world" algorithm that saves the threads' state, or an algorithm that lets the application keep running(?).
When the survivor space fills up, the JVM invokes a full GC.

Why does java wait so long to run the garbage collector?

I am building a Java web app, using the Play! Framework. I'm hosting it on playapps.net. I have been puzzling for a while over the provided graphs of memory consumption. Here is a sample:
The graph comes from a period of consistent but nominal activity. I did nothing to trigger the falloff in memory, so I presume this occurred because the garbage collector ran as it has almost reached its allowable memory consumption.
My questions:
Is it fair for me to assume that my application does not have a memory leak, as it appears that all the memory is correctly reclaimed by the garbage collector when it does run?
(from the title) Why is java waiting until the last possible second to run the garbage collector? I am seeing significant performance degradation as the memory consumption grows to the top fourth of the graph.
If my assertions above are correct, then how can I go about fixing this issue? The other posts I have read on SO seem opposed to calls to System.gc(), ranging from neutral ("it's only a request to run GC, so the JVM may just ignore you") to outright opposed ("code that relies on System.gc() is fundamentally broken"). Or am I off base here, and I should be looking for defects in my own code that is causing this behavior and intermittent performance loss?
UPDATE
I have opened a discussion on PlayApps.net pointing to this question and mentioning some of the points here; specifically @Affe's comment regarding the settings for a full GC being set very conservatively, and @G_H's comment about settings for the initial and max heap size.
Here's a link to the discussion, though you unfortunately need a playapps account to view it.
I will report the feedback here when I get it; thanks so much everyone for your answers, I've already learned a great deal from them!
Resolution
Playapps support, which is still great, didn't have many suggestions for me, their only thought being that if I was using the cache extensively this may be keeping objects alive longer than need be, but that isn't the case. I still learned a ton (woo hoo!), and I gave @Ryan Amos the green check as I took his suggestion of calling System.gc() every half day, which for now is working fine.
Any detailed answer is going to depend on which garbage collector you're using, but there are some things that are basically the same across all (modern, sun/oracle) GCs.
Every time you see the usage in the graph go down, that is a garbage collection. The only way heap gets freed is through garbage collection. The thing is, there are two types of garbage collections, minor and full. The heap gets divided into two basic "areas": young and tenured. (There are lots more subgroups in reality.) Anything that is taking up space in young and is still in use when the minor GC comes along to free up some memory is going to get 'promoted' into tenured. Once something makes the leap into tenured, it sits around indefinitely until the heap has no free space and a full garbage collection is necessary.
So one interpretation of that graph is that your young generation is fairly small (by default it can be a fairly small % of the total heap on some JVMs) and you're keeping objects "alive" for comparatively long times (perhaps you're holding references to them in the web session?). So your objects 'survive' garbage collections until they get promoted into tenured space, where they stick around until the JVM is well and truly out of memory.
Again, that's just one common situation that fits with the data you have. Would need full details about the JVM configuration and the GC logs to really tell for sure what's going on.
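For what it's worth, here is a hypothetical sketch of that "references held in the session" scenario (the class and the map are invented for illustration): anything reachable from a long-lived map survives every minor collection and eventually gets promoted to tenured space, where only a full collection can reclaim it.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SessionRetention {
    // stands in for an HTTP session or application-scoped cache
    static final Map<String, List<byte[]>> sessionCache = new HashMap<String, List<byte[]>>();

    public static void main(String[] args) {
        List<byte[]> perUserData = new ArrayList<byte[]>();
        sessionCache.put("user-42", perUserData);
        for (int i = 0; i < 10000; i++) {
            // These arrays never become garbage while the session entry exists,
            // so they pile up in tenured space instead of dying young in Eden.
            perUserData.add(new byte[1024]);
        }
        System.out.println("retained " + perUserData.size() + " KB via the session cache");
    }
}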
Java won't run the garbage collector until it has to, because the collector slows things down quite a bit and shouldn't be run that frequently. I think you would be OK scheduling a cleaning more frequently, such as every 3 hours. If an application never consumes all of its memory, there should be no reason to ever run the garbage collector, which is why Java only runs it when memory usage is very high.
So basically, don't worry about what others say: do what works best. If you find performance improvements from running the garbage collector at 66% memory usage, do it.
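If you do go the scheduled-cleaning route the question's resolution mentions (System.gc() every half day), a minimal sketch looks like this; keep in mind System.gc() is only a request to the JVM, and proper heap sizing is usually the better fix.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc();   // only a hint; the JVM may ignore it
            }
        }, 12, 12, TimeUnit.HOURS);
        // ... application keeps running; call scheduler.shutdown() when done
    }
}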
I am noticing that the graph isn't sloping strictly upward until the drop, but has smaller local variations. Although I'm not certain, I don't think memory use would show these small drops if there was no garbage collection going on.
There are minor and major collections in Java. Minor collections occur frequently, whereas major collections are rarer and diminish performance more. Minor collections probably tend to sweep up stuff like short-lived object instances created within methods. A major collection will remove a lot more, which is what probably happened at the end of your graph.
Now, some answers that were posted while I'm typing this give good explanations regarding the differences in garbage collectors, object generations and more. But that still doesn't explain why it would take so absurdly long (nearly 24 hours) before a serious cleaning is done.
Two things of interest that can be set for a JVM at startup are the maximum allowed heap size, and the initial heap size. The maximum is a hard limit, once you reach that, further garbage collection doesn't reduce memory usage and if you need to allocate new space for objects or other data, you'll get an OutOfMemoryError. However, internally there's a soft limit as well: the current heap size. A JVM doesn't immediately gobble up the maximum amount of memory. Instead, it starts at your initial heap size and then increases the heap when it's needed. Think of it a bit as the RAM of your JVM, that can increase dynamically.
If the actual memory use of your application starts to reach the current heap size, a garbage collection will typically be instigated. This might reduce the memory use, so an increase in heap size isn't needed. But it's also possible that the application currently does need all that memory and would exceed the heap size. In that case, it is increased provided that it hasn't already reached the maximum set limit.
Now, what might be your case is that the initial heap size is set to the same value as the maximum. Suppose that would be so, then the JVM will immediately seize all that memory. It will take a very long time before the application has accumulated enough garbage to reach the heap size in memory usage. But at that moment you'll see a large collection. Starting with a small enough heap and allowing it to grow keeps the memory use limited to what's needed.
This is assuming that your graph shows heap use and not allocated heap size. If that's not the case and you are actually seeing the heap itself grow like this, something else is going on. I'll admit I'm not savvy enough regarding the internals of garbage collection and its scheduling to be absolutely certain of what's happening here, most of this is from observation of leaking applications in profilers. So if I've provided faulty info, I'll take this answer down.
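One quick way to tell the two apart in your own measurements is the Runtime API, which reports the currently allocated (committed) heap separately from what is actually in use and from the -Xmx maximum.
public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();         // current heap size (the "soft limit")
        long used = committed - rt.freeMemory();   // memory occupied by objects right now
        long max = rt.maxMemory();                 // the -Xmx hard limit
        System.out.printf("used=%,d committed=%,d max=%,d%n", used, committed, max);
    }
}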
As you might have noticed, this does not affect you. Garbage collection only kicks in when the JVM feels there is a need for it to run, and this happens for the sake of optimization: there's no use in doing many small collections if you can do a single full collection and clean up everything at once.
The current JVM contains some really interesting algorithms, and garbage collection itself is divided into 3 different approaches; you can find a lot more about this here. Here's a sample:
Three types of collection algorithms
The HotSpot JVM provides three GC algorithms, each tuned for a specific type of collection within a specific generation. The copy (also known as scavenge) collection quickly cleans up short-lived objects in the new generation heap. The mark-compact algorithm employs a slower, more robust technique to collect longer-lived objects in the old generation heap. The incremental algorithm attempts to improve old generation collection by performing robust GC while minimizing pauses.
Copy/scavenge collection
Using the copy algorithm, the JVM reclaims most objects in the new generation object space (also known as eden) simply by making small scavenges -- a Java term for collecting and removing refuse. Longer-lived objects are ultimately copied, or tenured, into the old object space.
Mark-compact collection
As more objects become tenured, the old object space begins to reach maximum occupancy. The mark-compact algorithm, used to collect objects in the old object space, has different requirements than the copy collection algorithm used in the new object space.
The mark-compact algorithm first scans all objects, marking all reachable objects. It then compacts all remaining gaps of dead objects. The mark-compact algorithm occupies more time than the copy collection algorithm; however, it requires less memory and eliminates memory fragmentation.
Incremental (train) collection
The new generation copy/scavenge and the old generation mark-compact algorithms can't eliminate all JVM pauses. Such pauses are proportional to the number of live objects. To address the need for pauseless GC, the HotSpot JVM also offers incremental, or train, collection.
Incremental collection breaks up old object collection pauses into many tiny pauses even with large object areas. Instead of just a new and an old generation, this algorithm has a middle generation comprising many small spaces. There is some overhead associated with incremental collection; you might see as much as a 10-percent speed degradation.
The -Xincgc and -Xnoincgc parameters control how you use incremental collection. The next release of HotSpot JVM, version 1.4, will attempt continuous, pauseless GC that will probably be a variation of the incremental algorithm. I won't discuss incremental collection since it will soon change.
This generational garbage collector is one of the most efficient solutions we have for the problem nowadays.
I had an app that produced a graph like that and acted as you describe. I was using the CMS collector (-XX:+UseConcMarkSweepGC). Here is what was going on in my case.
I did not have enough memory configured for the application, so over time I was running into fragmentation problems in the heap. This caused GCs with greater and greater frequency, but it did not actually throw an OOME or fall back from CMS to the serial collector (which it is supposed to do in that case), because the stats it keeps only count application paused time (GC blocks the world); application concurrent time (GC running alongside application threads) is ignored in those calculations. I tuned some parameters, mainly gave it a whole lot more heap (with a very large new space), set -XX:CMSFullGCsBeforeCompaction=1, and the problem stopped occurring.
Probably you do have a memory leak that gets cleared every 24 hours.
