Serial Mark-Sweep-Compact (PSOldGen): what does PS stand for?

When I searched for the PSOldGen garbage collector, which I saw in the GC log, I found out that it is the serial Mark-Sweep-Compact collector. If this GC is serial, what does the PS in PSOldGen stand for? AFAIK it stands for Parallel Scavenge, but this confuses me.
[Full GC [PSYoungGen: 647K->0K(60352K)] [PSOldGen: 45361K->45875K(54528K)] 46008K->45875K(114880K) [PSPermGen: 10201K->10201K(21248K)], 0.0359430 secs]

There are two collectors at work in the JVM: a young-space collector and an old-space collector. The HotSpot JVM implements a bunch of algorithms, but only certain combinations of collectors are workable.
PSYoungGen is the "parallel scavenge" young-space GC algorithm, but it's not compatible with the default serial algorithm for the old space (Tenured). PSOldGen is a serial old-space algorithm that was added specifically to work with the parallel scavenge young-space algorithm, PSYoungGen.
You can enable a parallel algorithm for the old space too (-XX:+UseParallelOldGC); in that case you will see the PSYoungGen, ParOldGen pair of algorithms at work.
You can also enable another parallel young-space algorithm, -XX:+UseParNewGC, which works in tandem with the default serial old-space algorithm, Tenured.
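To make the pairings concrete, here is roughly how the flags map to the collector names that show up in the GC log on the HotSpot versions discussed here (an illustrative sketch; defaults changed in later JDK updates):
-XX:+UseParallelGC     -> PSYoungGen + PSOldGen  (parallel young, serial old)
-XX:+UseParallelOldGC  -> PSYoungGen + ParOldGen (parallel young, parallel old)
-XX:+UseParNewGC       -> ParNew + Tenured       (parallel young, serial old)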
Have I lost you already? :)
You can read more about algorithms implemented in HotSpot JVM in my blog.

You are correct, in a way, but it really depends on how you have configured your JVM command-line options. The young-gen GC is Parallel Scavenge and is multithreaded.
Interestingly, if you start it using -XX:+UseParallelGC, you'll get a serial (single-threaded) old-gen GC. If you use -XX:+UseParallelOldGC, you get both a multithreaded, parallel young-gen GC and a multithreaded, parallel old-gen GC.
Source: Java Performance, chapter 7, Garbage Collectors section.
Surprising, isn't it? There's a lot of scope for tinkering here too! The Java Performance book is well worth a read!

Related

GC gets triggered often

I would like to understand why the GC gets triggered even though I have plenty of heap left unused. I have allocated 1.7 GB of RAM, and I still often see around 10% GC CPU usage.
I use -XX:+UseG1GC with Java 17.
JVMs will always have some GC threads running, because the JVM manages memory for you (unless you use Epsilon GC, which performs no GC at all; I do not recommend it unless you know exactly why you need it).
The heap in G1 is divided into two spaces: young and old. All objects are created in the young space. When the young space fills up (it always does eventually, unless you are developing zero-garbage code), a GC is triggered that cleans unreferenced objects out of the young space and promotes objects that are still referenced into the old space.
The spikes in the right screenshot correspond to young collection events (where unreferenced objects get cleaned up). The young space is always much smaller than the old space, so it fills up frequently. That is why you see those spikes even though plenty of memory is still free.
DISCLAIMER: This is a very high-level explanation of memory management in the JVM; some important concepts have not been mentioned.
You can read more about the G1 collector here.
Also take a look at the jstat tool, which will help you understand what is happening in your heap.
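If you want to watch those young collections as they happen, a jstat one-liner along these lines (the PID is a placeholder) prints region occupancy and GC counters every second:
jstat -gcutil <pid> 1000
Seeing the E (Eden) column drop while the YGC counter increments is exactly the spiky young-collection pattern described above.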

JVM heap used percentage - when to generate alert

We have an application deployed on a Tomcat 8 application server, and the monitoring server (Zabbix) is currently configured to generate an alert if heap memory is 90% utilized.
Some alerts were generated, which prompted us to do a heap dump analysis. Nothing really came out of the heap dump; there was no memory leak. There were lots of unreachable objects that had not been cleaned up because no GC had run.
JVM configurations:
-Xms8192m -Xmx8192m -XX:PermSize=128M -XX:MaxPermSize=256m
-XX:+UseParallelGC -XX:NewRatio=3 -XX:+PrintGCDetails
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app/apache-tomcat-8.0.33
-XX:ParallelGCThreads=2
-Xloggc:/app/apache-tomcat-8.0.33/logs/gc.log
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps -XX:GCLogFileSize=50m -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=30
We tried running garbage collection manually using the jcmd command, and it cleared up the memory. GC log after running jcmd:
2016-11-04T03:06:31.751-0400: 1974627.198: [Full GC (System.gc()) [PSYoungGen: 18528K->0K(2049024K)] [ParOldGen: 5750601K->25745K(6291456K)] 5769129K->25745K(8340480K), [Metaspace: 21786K->21592K(1069056K)], 0.1337369 secs] [Times: user=0.19 sys=0.00, real=0.14 secs]
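For reference, the manual collection mentioned above is typically triggered with something like this (the PID is a placeholder); it is what produces the (System.gc()) tag in the log line above:
jcmd <pid> GC.run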
Questions:
Is there any configuration above due to which GC is not running automatically?
What is the reason for this behavior? I understand that Java will run GC when it needs to. But if it is not running GC even when the heap is 90% utilized, what should the alert threshold be (and does it even make sense to have an alert based on heap utilization)?
When the garbage collector decides to collect differs per garbage collector. I have not been able to find any hard promises about when your (Parallel GC) garbage collector runs. Many garbage collectors also tune themselves on several different variables, which can influence when they run.
As you have noted yourself, your application can have high heap usage and still run fine. What you are looking for in an application is that the garbage collector is still efficient, meaning it can clean up quite a lot of garbage in a single run.
Some aspects of garbage collection
Most garbage collectors have two or more strategies, one for 'young' objects and one for 'old' objects. When a young object has survived the last (several) collections, it becomes an old object. The idea behind this is that if an object has not been collected so far, it probably won't be collected next time either (most objects live either really short or really long). The garbage collector does a very efficient, but not perfect, cleaning of the young objects. When that doesn't free up enough memory, a more costly garbage collection is done on all (young and old) objects.
This will often generate a saw-tooth pattern (taken from this site):
Here you see many small drops in heap size and a slowly growing heap. Every now and then a large collection is done and there is a large drop. The actual 'used' memory is the amount of memory left after a large collection.
Aspects to measure
This leads to the following aspects you can look at when determining the health of your application:
The amount of time spent by your application garbage collecting (both in total and as a percentage of CPU time).
The amount of memory available right after a garbage collection.
A quick increase in the number of large garbage collections.
In most cases you will need to monitor the behavior of your application under load to see what good values are for you.
The parallel garbage collector uses a similar condition to determine if all is still well:
If more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, then an OutOfMemoryError is thrown.
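As far as I know, that threshold corresponds to the parallel collector's GC overhead limit, which is controlled by the following flags (the values shown are the documented defaults; treat this as a sketch):
-XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2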
All of these statistics can be viewed nicely using VisualVM and JConsole. I am not sure which of them you can use as triggers in your monitoring tools.
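If your monitoring tool can run a custom probe inside the JVM, here is a minimal sketch of reading those statistics through the standard java.lang.management MXBeans (the class name GcHealthProbe is just a placeholder; wire the printed values into whatever alerting you use):
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class GcHealthProbe {
    public static void main(String[] args) {
        // Total GC time and count across all collectors since JVM start.
        long gcMillis = 0, gcCount = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcMillis += gc.getCollectionTime();
            gcCount += gc.getCollectionCount();
        }
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC: %d ms over %d collections (%.2f%% of uptime)%n",
                gcMillis, gcCount, 100.0 * gcMillis / uptime);

        // Occupancy measured right after the last collection of each pool;
        // this is a better alerting signal than the instantaneous heap usage.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage afterGc = pool.getCollectionUsage(); // null if the pool is not collected
            if (afterGc != null && afterGc.getMax() > 0) {
                System.out.printf("%s: %.1f%% used after last GC%n",
                        pool.getName(), 100.0 * afterGc.getUsed() / afterGc.getMax());
            }
        }
    }
}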

What Java Garbage Collectors clean up PermGen?

List of Garbage Collectors:
Serial GC
Parallel GC
Parallel Old GC
Conc Mark Sweep GC
G1 GC
I know that the Conc Mark Sweep GC supports cleaning up PermGen when you enable the ClassUnloading JVM option. Do other Garbage Collectors support cleaning up PermGen?
Reason: We use Spring, Hibernate, and Groovy that create a lot of Proxies and Perm Gen gets big.
Edit:
I should have mentioned that I am using Java 7. I'm aware that Java 8 removes PermGen, and hopefully we will upgrade sometime in the future. In the meantime, my question is whether garbage collectors other than Conc Mark Sweep support cleaning up PermGen:
Serial GC
Parallel GC (Believe -server uses this by default and confirmed that it cleans up perm gen)
Parallel Old GC
Conc Mark Sweep GC (Can clean perm gen using JVM flag)
G1 GC
All algorithms clean PermGen, but:
not every GC cycle includes PermGen cleaning
CMS can clean PermGen concurrently (see the example flags right after this list); until Java 8u40, G1 required a stop-the-world Full GC to unload classes (i.e. clean PermGen)
Java 8 has Metaspace instead of PermGen, but it needs to be garbage collected too (otherwise you'll get an OOME in Metaspace)
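As an illustration of the flags involved on Java 7 (treat the exact combination as a sketch; behaviour varies between JDK updates):
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
If I remember correctly, older JDK 6 builds also wanted -XX:+CMSPermGenSweepingEnabled, which later versions deprecate.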
I fought OOMEs in PermGen quite a lot when I was actively using ClassLoaders to simulate multiple JVMs in a single process for test purposes. My conclusion: PermGen GC is just not very reliable. One run it works as expected, another it throws an OOME.
The problem with perm gen is that it was not supposed to be dynamic; it was supposed to hold static data, like classes and constants. But what we developers are doing with Java, things like creating classes on the fly, redefining classes, and so on, requires a more dynamic use of that space, and that is why Metaspace has to be dynamic.
The biggest issue with perm gen, and, going back to your question, one that IMO doesn't go away with Metaspace, is the creation and destruction of class loaders: it is too easy to leak class metadata through ThreadLocals and libraries loaded by other class loaders, and that leaves live objects that cannot be reclaimed by any of the collectors you use.
Nowadays all production garbage collectors clean up the Metaspace, but they are not bound to clean it with the same frequency as other memory regions.
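For completeness, on Java 8+ the knobs that bound that region, and as far as I understand also trigger its collection, are the Metaspace sizing flags; the values here are only placeholders:
-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m
A class-unloading GC is induced when committed Metaspace reaches the current high-water mark (initially MetaspaceSize), so setting it too low causes frequent cycles, while leaving MaxMetaspaceSize unset lets the region grow until native memory runs out.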

jvm conf for normal gc at high load

I have a server application based on Netty. It decodes a message (from JSON) and sends it back to the client (a simple echo). When a lot of messages are sent from one client (more than 15k/second), the garbage collector doesn't start and memory usage keeps growing.
How can I configure the JVM to decrease GC pauses and decrease memory usage?
Your description sounds like a memory leak. Does the garbage collector eventually start, or do you end up with an OutOfMemoryError?
If you don't, then it sounds like you're running into a situation where objects are living long enough to get into the tenured generation (I'm assuming Sun JVM here). And the solution to that is to increase the size of the young generation relative to the tenured generation.
Here's a link that explains the Sun JVM generational collector (it's for the 1.5 JVM, but I believe that the options haven't changed for 1.6): http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
The options that you would want to experiment with are NewRatio, which is the ratio between the young and tenured generations, and SurvivorRatio, which is the ratio between Eden and the two survivor spaces. I might try the following:
-XX:NewRatio=1 gives the young generation half of the object heap
-XX:SurvivorRatio=2 makes each survivor space be half that of Eden
These two settings will make the "Eden" space for new objects take 1/4 of the heap. This is pretty big, so hopefully most objects will spend their entire lives in Eden. The survivor ratio gives another 1/4 of the heap to the survivor spaces (1/8 to each), to hold objects with a medium life.
Of course, don't blindly set options. Instead, use jconsole (part of the JDK distribution) to see what's really happening with your heap. You might find that the default survivor ratio of (1:6) is better than what I've suggested.
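Putting those suggestions together, a hedged starting point for an experiment might look something like this (heap size, GC-logging flags, and jar name are placeholders; measure with jconsole before and after):
java -Xms1g -Xmx1g -XX:NewRatio=1 -XX:SurvivorRatio=2 -verbose:gc -XX:+PrintGCDetails -jar echo-server.jar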
To configure the JVM to decrease GC pauses and memory usage, you need to choose an appropriate GC collector. CMS is a low-pause collector; you can set -XX:+UseConcMarkSweepGC to enable it. You can also fine-tune other parameters, such as
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=nn
to control when the concurrent collection starts (and thereby reduce the risk of long fallback pauses).
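For example (the occupancy fraction is only an illustrative starting point, not a recommendation; tune it against your own GC logs):
-XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70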

Shrinking survivor spaces leads to continuous full GC

I've had this troubling experience with a Tomcat server, which runs:
our Hudson server;
a staging version of our web application, redeployed 5-8 times per day.
The problem is that we end up with continuous garbage collection, but the old generation is nowhere near full. I've noticed that the survivor spaces are next to nonexistent, and the garbage collector output is similar to:
[GC 103688K->103688K(3140544K), 0.0226020 secs]
[Full GC 103688K->103677K(3140544K), 1.7742510 secs]
[GC 103677K->103677K(3140544K), 0.0228900 secs]
[Full GC 103677K->103677K(3140544K), 1.7771920 secs]
[GC 103677K->103677K(3143040K), 0.0216210 secs]
[Full GC 103677K->103677K(3143040K), 1.7717220 secs]
[GC 103679K->103677K(3143040K), 0.0219180 secs]
[Full GC 103677K->103677K(3143040K), 1.7685010 secs]
[GC 103677K->103677K(3145408K), 0.0189870 secs]
[Full GC 103677K->103676K(3145408K), 1.7735280 secs]
The heap information before restarting Tomcat is:
Attaching to process ID 10171, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 14.1-b02
using thread-local object allocation.
Parallel GC with 8 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 3221225472 (3072.0MB)
NewSize = 2686976 (2.5625MB)
MaxNewSize = 17592186044415 MB
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 268435456 (256.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 1073479680 (1023.75MB)
used = 0 (0.0MB)
free = 1073479680 (1023.75MB)
0.0% used
From Space:
capacity = 131072 (0.125MB)
used = 0 (0.0MB)
free = 131072 (0.125MB)
0.0% used
To Space:
capacity = 131072 (0.125MB)
used = 0 (0.0MB)
free = 131072 (0.125MB)
0.0% used
PS Old Generation
capacity = 2147483648 (2048.0MB)
used = 106164824 (101.24666595458984MB)
free = 2041318824 (1946.7533340454102MB)
4.943684861063957% used
PS Perm Generation
capacity = 268435456 (256.0MB)
used = 268435272 (255.99982452392578MB)
free = 184 (1.7547607421875E-4MB)
99.99993145465851% used
The relevant JVM flags passed to Tomcat are:
-verbose:gc -Dsun.rmi.dgc.client.gcInterval=0x7FFFFFFFFFFFFFFE -Xmx3g -XX:MaxPermSize=256m
Please note that the survivor spaces are sized at about 40 MB at startup.
How can I avoid this problem?
Updates:
The JVM version is
$ java -version
java version "1.6.0_15"
Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02, mixed mode)
I'm going to look into bumping up the PermGen size and seeing if that helps - probably the sizing of the survivor spaces was unrelated.
The key is probably PS Perm Generation, which is at 99.999% used (only 184 bytes out of 256 MB free).
Usually, I'd suggest that you give it more perm gen but you already gave it 256MB which should be plenty. My guess is that you have a memory leak in some code generation library. Perm Gen is mostly used for bytecode for classes.
It's very easy to have ClassLoader leaks: all it takes is a single object loaded through the ClassLoader being referred to by an object not loaded by it. A constantly redeployed app will then quickly fill the PermGen space.
This article explains what to look out for, and a follow-up describes how to diagnose and fix the problem.
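To make the failure mode concrete, here is a hypothetical minimal sketch of that pattern (class and method names are made up): a class loaded by Tomcat's shared/common class loader keeps a ThreadLocal whose value was created by the web application's class loader. The worker threads outlive the webapp, so after an undeploy the value, its class, and therefore the entire webapp ClassLoader and every class it loaded stay reachable, and PermGen never shrinks.
// Loaded by the shared/common class loader, e.g. from tomcat/lib.
public class SharedCache {
    // The ThreadLocal itself lives as long as this shared class does.
    private static final ThreadLocal<Object> PER_THREAD = new ThreadLocal<Object>();

    // Called from web application code with an object whose class was
    // loaded by the webapp's ClassLoader.
    public static void remember(Object value) {
        PER_THREAD.set(value); // the worker thread now pins the webapp's ClassLoader
    }
}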
I think this is not that uncommon for an application server that gets continuously deployed to. The perm gen space, which is full for you, is where classes go. Keep in mind that JSPs are compiled as Java classes, and when you change a JSP, a new class gets generated and loaded.
We have had this problem, and our solution is to have the app server restart occasionally.
This is what I'd do:
Deploy Hudson to a separate server from your staging server
Configure Hudson to restart your staging server from time to time. You can do this in one of two ways:
Restart periodically (e.g., every night at midnight, regardless of whether there's build activity); or
Have the web app deployment job trigger the server restart job. If you do this make sure there's a really long quiet period for the restart job (we set ours to 2 hours), so that you don't get a server restart for every build (i.e., if two web app deployments happen within 2 hours, they'll only trigger one server restart).
The flag -XX:SurvivorRatio sets the ratio between Eden and the survivor spaces. According to the JDK 1.5 tuning doc, the default value is 32, which gives a 1:32 ratio. This is in accordance with what you're seeing. It seems incredibly small to me, although I understand that only a very small number of objects are expected to make their way from Eden to the survivor space.
So, assuming that you have a lot of long-lived objects, you should decrease the survivor ratio. The risk is that you only have those long-lived objects during a startup phase, and so are limiting the Eden size. For a testing server, I doubt this is going to be an issue.
I'd probably also reduce the size of the Eden space, by increasing -XX:NewRatio (the default is 3). My gut says that a hundred MB or so is sufficient for the young generation, and you'll just be increasing the cost of garbage collection by having such a large amount of space allocated (i.e., objects will live in Eden far too long). But that's just instinct, and should definitely be validated for your environment.
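As a sketch, those two adjustments together would look something like this on the command line (the numbers are just examples to experiment with, not a recommendation):
-Xmx3g -XX:MaxPermSize=256m -XX:NewRatio=8 -XX:SurvivorRatio=6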
And a semi-related comment, after reading other replies: if you're not seeing errors for running out of permgen space, don't spend your time fiddling with it. The permgen is managed separately from the rest of the heap.
