Minor GC pause times are too high. Possible reasons? - java

I am experiencing regularly high minor GC pause times (~9 seconds).
The application is a server written in Java, executing 3 transactions/second.
There is no excessive I/O activity.
Heap parameters are:
-Xms1G
-Xmx14G
-XX:+UseConcMarkSweepGC
-XX:+DisableExplicitGC
-XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
What are the possible reasons for such minor GC pause time values?

As the other answer says, without GC log snippets it's not possible to answer this definitively. Some things to think about:
Assuming no impact from the underlying OS (scheduling, CPU thrashing), the time taken for a minor collection will be proportional to the amount of live data in the young gen when the collector runs (every live object in the young gen gets copied during a minor GC). Looking at the graph of your old gen, you are seeing consistent growth, which would indicate you are promoting significant amounts of data during minor GC. You're either creating a lot of long-lived objects or you're maintaining references unnecessarily.
To reduce the pauses you could try reducing the size of the Eden space (so there is less potential data to copy on each minor GC) and also reducing the tenuring threshold so that objects get moved out of the survivor spaces more quickly. The downside is that your minor GCs will happen more frequently, so you will probably see a degradation in throughput.
I would also change the -Xms value. You clearly need more than 1 GB in your heap, so it would be best to set it to 14 GB to avoid the JVM having to resize the heap as the amount of data increases.
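Concretely, the two suggestions above could look something like this (the young gen size and threshold are illustrative assumptions, not values tuned for your workload):
-Xms14G -Xmx14G
-Xmn512m
-XX:MaxTenuringThreshold=2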

For questions in the category "why is my GC pause that long?", you should always provide some snippets from your GC logs.
As pure speculation, here are a few reasons why a minor GC may be abnormally slow:
The JVM was suspended from execution by the OS (e.g. CPU starvation, swapping, a virtual server freeze)
There is some issue with bringing the JVM to a safepoint (unlikely, looking at your pause pattern, though)
Object survival spikes
Reference object processing overhead (you need to add -XX:+PrintReferenceGC to get reference processing info into the GC log)
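To gather evidence for the last two points, you could add diagnostic flags such as these (spellings are from the JDK 7/8 HotSpot option set; availability varies by JVM version):
-XX:+PrintReferenceGC
-XX:+PrintSafepointStatistics
-XX:PrintSafepointStatisticsCount=1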

Try using
-XX:+UseG1GC -XX:MaxGCPauseMillis=1000
This will try to keep max GC pause below 1s.
You need to assign enough memory using -Xmx and set MaxGCPauseMillis as needed.
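Putting those pieces together for the heap from the original question would look something like this (the 14 GB figure is simply carried over from the question, not a recommendation):
-Xms14G -Xmx14G -XX:+UseG1GC -XX:MaxGCPauseMillis=1000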

IMHO:
There is not sufficient evidence that what is shown is indeed minor GC "pause time". What is shown is garbage collection time, and GC time != GC pause time. Garbage collection activity time and garbage collection pause time are two different beasts; in technical terms, they map to different JVM ergonomics/performance goals.
Minor GC pause time is commonly negligible, and it is the major GC pause time that affects application responsiveness.
There is something called JVM ergonomics goals, and the "throughput goal" is one of the three; so in this case, at most, what can be said is that the throughput goal is not being met well for the young generation, since a lot of time is being spent on GC.
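For reference, the throughput goal can also be stated explicitly: -XX:GCTimeRatio=N asks the collector to spend at most 1/(1+N) of total time in GC, so the purely illustrative value below targets roughly 5% GC time:
-XX:GCTimeRatio=19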

Related

GC gets triggered often

I would like to understand why the GC gets triggered even though I have plenty of heap left unused. I have allocated 1.7 GB of RAM, yet I still often see 10% GC CPU usage.
I use -XX:+UseG1GC with Java 17.
JVMs will always have some GC threads running (unless you use Epsilon GC, which performs no GC; I do not recommend using it unless you know why you need it), because the JVM manages memory for you.
The heap in G1 is divided into two spaces: young and old. All objects are created in the young space. When the young space fills up (it always does eventually, unless your application produces zero garbage), a GC is triggered that cleans unreferenced objects out of the young space and promotes objects that are still referenced to the old space.
The spikes in the right screenshot correspond to young collection events (where unreferenced objects get cleaned up). The young space is always much smaller than the old space, so it fills up frequently. That is why you see those spikes even though there is much more memory free.
DISCLAIMER: This is a very high-level explanation of memory management in the JVM; some important concepts have not been mentioned.
You can read more about the G1 collector here.
Also take a look at the jstat tool, which will help you understand what is happening in your heap.
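A minimal jstat invocation (the <pid> is a placeholder for your Java process id) that prints per-space occupancy and GC counts every second looks like this:
jstat -gcutil <pid> 1000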

Force the JVM to collect garbage early and reduce system memory used early

We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data that is manipulated: up to 4 GB. At other times, very little memory is needed, around 300 MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected, and even longer until memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4 GB of RAM to create a 4 GB heap. Then, after processing has finished, the memory is only 30%-50% full. It takes a long time for memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks below 500 MB; another triggered GC and the heap shrinks to 200 MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot of VisualVM. The heap is collected (used heap goes down) but heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC-Pause-Times are not much of an issue for us. We would rather have more and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed: we set MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory actually is needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle Hotspot JVM 1.8
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to force the JVM to do garbage collection more often. When you set n to 0, constant GC should take place. But I'm not sure whether this is a good idea regarding your application's response time.
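As a sketch of the flags discussed in this question and answer (the percentages are illustrative assumptions; whether memory is actually returned to the OS depends on the collector and JVM version):
-XX:+UseG1GC
-XX:InitiatingHeapOccupancyPercent=20
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=40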

JVM heap used percentage - when to generate alert

We have an application which is deployed on Tomcat 8 application server and currently monitoring server (Zabbix) is configured to generate alert if the heap memory is 90% utilized.
There were certain alerts generated which prompted us to do heap dump analysis. Nothing really came out of the heap dump; there was no memory leak. There were a lot of unreachable objects which were not cleaned up because no GC had run.
JVM configurations:
-Xms8192m -Xmx8192m -XX:PermSize=128M -XX:MaxPermSize=256m
-XX:+UseParallelGC -XX:NewRatio=3 -XX:+PrintGCDetails
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app/apache-tomcat-8.0.33
-XX:ParallelGCThreads=2
-Xloggc:/app/apache-tomcat-8.0.33/logs/gc.log
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps -XX:GCLogFileSize=50m -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=30
We tried running garbage collection manually using jcmd command and it cleared up the memory. GC logs after running jcmd:
2016-11-04T03:06:31.751-0400: 1974627.198: [Full GC (System.gc()) [PSYoungGen: 18528K->0K(2049024K)] [ParOldGen: 5750601K->25745K(6291456K)] 5769129K->25745K(8340480K), [Metaspace: 21786K->21592K(1069056K)], 0.1337369 secs] [Times: user=0.19 sys=0.00, real=0.14 secs]
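For reference, the manual collection mentioned above can be triggered like this (the <pid> is a placeholder for the Tomcat process id):
jcmd <pid> GC.run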
Questions:
Is there any configuration above due to which GC is not running automatically?
What is the reason for this behavior? I understand that Java will do GC when it needs to. But if it is not running GC even when the heap is 90% utilized, what should the alert threshold be (and does it even make sense to have any alert based on heap utilization)?
When the garbage collector decides to collect differs per garbage collector. I have not been able to find any hard promises on when your (Parallel GC) garbage collector runs. Many garbage collectors also tune themselves based on several different variables, which can influence when they will run.
As you have noted yourself, your application can have high heap usage and still run fine. What you are looking for in an application is that the garbage collector is still efficient, meaning it can clean up quite a lot of garbage in a single run.
Some aspects of garbage collection
Most garbage collectors have two or more strategies, one for 'young' objects and one for 'old' objects. When a young object has not been collected in the latest (several) collections, it becomes an old object. The idea behind this is that if an object has not been collected yet, it probably won't be collected next time either (most objects live either really short or really long). The garbage collector does a very efficient, but not perfect, cleaning of the young objects. When that doesn't free up enough memory, a more costly garbage collection is done on all (young and old) objects.
This will often generate a sawtooth pattern (taken from this site):
Here you see many small drops in heap size and a slowly growing heap. Every now and then a large collection is done and there is a large drop. The actual 'used' memory is the amount of memory left after a large collection.
Aspects to measure
This leads to the following aspects you can look at when determining the health of your application:
The amount of time spent by your application garbage collecting (both in total and as a percentage of CPU time).
The amount of memory available right after a garbage collect.
A quick increase in the number of large garbage collections.
In most cases you will need to monitor the behavior of your application under load to see what good values are for you.
The parallel garbage collector uses a similar condition to determine if all is still well:
If more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, then an OutOfMemoryError is thrown.
All of these statistics can be seen nicely using VisualVM and JConsole. I am not sure which of them you can use as triggers in your monitoring tools.
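If you want to read the "memory available right after a garbage collect" figure programmatically rather than from VisualVM, the standard java.lang.management API exposes it; here is a minimal sketch (the class name is mine):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class AfterGcUsage {
    public static void main(String[] args) {
        // getCollectionUsage() reports each pool's usage measured right
        // after the most recent collection of that pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage afterGc = pool.getCollectionUsage();
            if (afterGc != null) { // null for pools the GC does not manage
                System.out.printf("%s: %d of %d bytes used after last GC%n",
                        pool.getName(), afterGc.getUsed(), afterGc.getMax());
            }
        }
    }
}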

PS Old Gen memory in Heap Memory Usage: GC settings for Java Out Of Memory Exception

Below are my JVM settings:
JAVA_OPTS=-server -Xms2G -Xmx2G -XX:MaxPermSize=512M -Dsun.rmi.dgc.client.gcInterval=1200000 -Dsun.rmi.dgc.server.gcInterval=1200000 -XX:+UseParallelOldGC -XX:ParallelGCThreads=2 -XX:+UseCompressedOops -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jbos88,server=y,suspend=n
Problem:
Total Heap Memory: 2GB
Old Gen: 1.4GB (2/3 of Heap)
New Gen: 600MB(1/3 of Heap)
The old gen grows in memory beyond 70% of its allocated size and is never subjected to GC, even at 100%, i.e. 1.4 GB.
One can see in the graph below that it peaks and is never GCed; the drop in memory is when a GC was forced from JConsole.
This problem is eventually bringing the web server down.
Is there anything I am missing, or am I setting the JVM wrongly?
Thanks for the help in advance.
Updating my Question:
After heap analysis it appears that a stateful session bean is the prime suspect:
We have stateful session beans that hold the persistence logic assisted by Hibernate.
The GC will be called eventually; the old gen is collected almost never (because collecting it is extremely slow).
The GC does run, but at first it will only run on the new gen and survivor spaces; it has a completely different algorithm for cleaning the old gen, which is slower than the one for the new/survivor gens.
Those numbers are really high; the old gen should never reach such a high number compared to the new gen.
My guess is that you have a memory leak.
I can only guess that your program is dealing with big files and you are probably keeping references to them for too long.
Even with the main problem (the memory leak) resolved, if you still want the old gen to be cleared in frequent, small pauses, you may try setting
-XX:MaxGCPauseMillis=(time in millis)
This is only applicable with the Parallel Collector and when the Adaptive Sizing Policy is on. By default the Adaptive Sizing Policy is on, but if you want to enable it explicitly you can use
-XX:+UseAdaptiveSizePolicy
Or else you can switch to CMS collector where you can use
-XX:CMSInitiatingOccupancyFraction=(% value)
-XX:+UseCMSInitiatingOccupancyOnly
Which is a more reliable way of collecting the old gen once it has reached a certain occupancy fraction.
The stateful session beans were making the JVM run out of memory.
Explicitly handling them using the @Remove annotation resolved this issue.
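A sketch of what that looks like (the bean and method names here are made up for illustration):

import javax.ejb.Remove;
import javax.ejb.Stateful;

@Stateful
public class PersistenceSessionBean {

    // ... persistence logic assisted by Hibernate ...

    // Calling a method marked @Remove tells the container the conversation
    // is over, so the bean instance (and everything it references) becomes
    // eligible for destruction and garbage collection instead of piling up.
    @Remove
    public void finished() {
        // release any held resources here
    }
}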

Force hotspot to make frequent GCs?

I am benchmarking a server process in Java, and it appears that HotSpot is not doing many GCs, but when it does, it hits performance massively.
Can I force HotSpot to do frequent smaller GCs, rather than a few massive, long GCs?
You can try changing the GC to parallel or concurrent.
Here's a link to the documentation.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
Interfering with when the GC is called is usually a bad idea.
A better approach would be tuning the sizes of eden, survivor and old space if you have problems with performance of the gc.
If a full sweep has to be done, it does not really matter how often it is called; the speed will always be relatively slow. The only fast GC calls are those in the eden and survivor spaces.
So increasing the eden and survivor spaces might solve your problem, but unfortunately good memory profiling is rather time-consuming and complex to perform.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
(link taken from the other answer) also describes how to configure that if necessary. -XX:NewRatio=2 or -XX:NewRatio=3 might increase your speed, but it might also slow it down; unfortunately that is very application dependent.
You can tell the JVM to do a garbage collection programmatically via System.gc(). Please note that the Javadoc says this is only a suggestion. You can try calling this before getting into a critical section where you don't want the GC performance penalty.
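For example (runCriticalSection is a placeholder for your own workload):

public class Benchmark {
    public static void main(String[] args) {
        System.gc();           // only a suggestion; the JVM may ignore it
        runCriticalSection();  // latency-sensitive work on a freshly collected heap
    }

    private static void runCriticalSection() {
        // benchmark workload goes here
    }
}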
You can increase how often the GC is performed by decreasing the young/new generation sizes or by calling GC more often. This doesn't mean you will pause for less time, just that it will happen more often.
The best way to reduce the impact of GC is to memory profile your application and reduce the amount of garbage you are producing. This will not only make your code faster, but reduce how often and for how long each GC occurs.
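As a trivial illustration of producing less garbage (the names are made up; the point is reusing one buffer instead of allocating on every call):

import java.util.Arrays;
import java.util.List;

public class GarbageReduction {
    // Reusing one StringBuilder avoids allocating a new builder and
    // intermediate Strings on every call, so less garbage reaches the GC.
    static String joinReusing(List<String> parts, StringBuilder reusable) {
        reusable.setLength(0);
        for (String p : parts) {
            reusable.append(p).append(',');
        }
        return reusable.toString();
    }

    public static void main(String[] args) {
        StringBuilder buffer = new StringBuilder(64);
        System.out.println(joinReusing(Arrays.asList("a", "b", "c"), buffer));
    }
}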
In the more extreme case, you can reduce how often the GC occurs to less than once per day, removing it as an issue all together.
