Tenured generation is not cleared by garbage collection - Java

I have a Java Spring Boot project. When the code that uses multi-threading (ExecutorService) is executed, memory fills up and the GC does not clear it. After reading the GC docs, I learned that the tenured memory is not getting cleared.
By monitoring the JVM with a Java profiler, I can see that the tenured generation never gets cleared (until it is full, in my case).
How can I make the GC clear the tenured space?
We run the app using a Docker image.

There are two potential issues here.
The garbage collector can only release objects that are unreachable. So if the tenured objects are still reachable, they won't ever be released. This is a memory leak scenario.
The JVM may not be running the old / tenured space collection simply because it doesn't need to. The JVM will typically only run the old space collector when it thinks it is necessary / economical to do so. This is actually a good thing, because most of the easily (cheaply) collectable garbage is normally in the new space.
After reading gc docs, came to know that the tenured memory is not getting cleared.
That's probably a misreading of the documentation. The GC will collect objects in tenured space. It should happen before you start getting OOMEs.
How can I make the GC clear the tenured space?
You can call System.gc() ... which gives the JVM a hint that it would be a good thing to run the GC now. However:
It is only a hint. The JVM may ignore it.
It may or may not run an old space collection.
It won't "clear" the old generation. It won't necessarily even release all of the unreachable objects.
It is generally inefficient to call System.gc(). The JVM typically has a better idea of the most efficient time to run the GC, and which kind of GC to run.
Running the GC is probably not actually going to release memory for the rest of the system to use.
In short, it is probably a BAD IDEA to call System.gc(). In most cases, it doesn't achieve anything.
It certainly won't help if your real problem is a memory leak, and if you are getting OOMEs, it won't help either.
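To illustrate the first scenario (objects that stay reachable), here is a minimal hypothetical sketch, not taken from the question's code, of how results collected from ExecutorService tasks can remain strongly reachable and therefore survive every old-generation collection:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LeakySketch {
    // Every task appends its result here and nothing ever removes entries,
    // so the byte arrays stay strongly reachable and no GC can reclaim them.
    private static final List<byte[]> RESULTS =
            Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> RESULTS.add(new byte[1024 * 1024])); // ~1 MB per task
        }
        pool.shutdown();
    }
}

The fix in cases like this is to stop holding the references (or bound the collection), not to force a GC.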

Related

GC gets triggered often

I would like to understand why the GC gets triggered even though I have plenty of heap left unused. I have allocated 1.7 GB of RAM, yet I still often see around 10% GC CPU usage.
I use -XX:+UseG1GC with Java 17.
JVMs will always have some GC threads running (unless you use Epsilon GC, which performs no GC; I do not recommend using it unless you know why you need it), because the JVM manages memory for you.
The heap in G1 is divided into two spaces: young and old. All objects are created in the young space. When the young space fills up (it always does eventually, unless your application produces zero garbage), a collection is triggered that cleans unreferenced objects from the young space and promotes some still-referenced objects to the old space.
The spikes in the right-hand screenshot correspond to young collection events (where unreferenced objects get cleaned up). The young space is always much smaller than the old space, so it fills up frequently. That is why you see those spikes even though there is plenty of free memory.
DISCLAIMER: This is a very high-level explanation of memory management in the JVM; some important concepts have not been mentioned.
You can read more about the G1 collector here.
Also take a look at the jstat tool, which will help you understand what is happening in your heap.
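For example (assuming a standard JDK install and that <pid> is the JVM's process id), this prints the occupancy of the survivor, eden and old spaces plus the GC counts every second:

jstat -gcutil <pid> 1000

The E column is eden occupancy in percent, O is the old generation, and YGC/FGC are the young and full collection counts, so you can see directly which kind of collection is responsible for the CPU usage you observe.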

Where is my JVM memory leak? Garbage collector is working properly?

I have a Scala daemon application that runs on a server at Rackspace with a 2 GB memory limit. For an unknown reason, the server gets stuck some time after the application starts running. I suspect there is a memory leak, because the server's memory fills up over time.
I tried running jvisualvm, taking memory snapshots at two different moments and comparing them to see whether any objects remained allocated, but I could not find anything.
The heap allocation is just around 400 MB. Here is a snapshot of the JVM memory in New Relic:
Notice that the PS Eden Space heap is what keeps increasing. I did a workaround that kills the application every 3 hours and starts it back up again (this is why the graph suddenly drops back down).
Any idea of why this PS Eden is increasing? How to fix it?
Edit 1:
Screenshot of the machine that halted minutes before 13:00
Edit 2:
On a new round, I let the server hang by itself and used G1GC. Here is the New Relic graph for this run:
It's normal for Eden to grow constantly; that is where new objects are allocated. Eden will keep growing until it gets full, or until a partial collection runs that collects unused objects and moves objects still in use to the survivor region S0.
This is by design for this type of garbage collection. The idea is that it's OK for Eden to fill up: we let it grow and garbage collect it only when it's most convenient, minimizing the impact on application code.
Try removing the workaround, let the server freeze, and see if there are any out-of-memory errors in the logs. Too many classes would cause such errors.
Check whether the old gen is full. Then, using VisualVM, force a garbage collection and see if it goes down. If it doesn't, there is your problem.
Then take a heap dump and a thread dump, and analyse the heap dump in MAT (the Eclipse Memory Analyzer tool); see this tutorial as well. It could also be that the server just needs more memory.
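If you want to capture that heap dump from the command line, two standard options (the file paths and jar name below are just placeholders) are to dump a running process with jmap, or to start the JVM so it writes a dump automatically on an OutOfMemoryError:

jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app.hprof -jar app.jar

The live option restricts the dump to objects that are still reachable, which is usually what you want for leak hunting.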
One important note: in Java there is really no notion of a memory leak; the garbage collector works mostly flawlessly to collect unused objects.
Usually the problem comes from objects that are created but then accidentally kept around, for example in static collections or thread-local variables, and because they are still referenced they never get collected.
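As a hypothetical illustration of the thread-local case (the names and sizes here are invented for the example), a ThreadLocal buffer used from a long-lived thread pool stays reachable through the pool threads until remove() is called:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalRetentionSketch {
    // Each pool thread lazily gets its own 10 MB buffer. Pool threads live for
    // the life of the application, so every buffer stays reachable (and tends to
    // end up in the old gen) unless BUFFER.remove() is called when the work is done.
    private static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[10 * 1024 * 1024]);

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 50; i++) {
            pool.submit(() -> { BUFFER.get()[0] = 1; }); // touching the ThreadLocal pins a buffer to this thread
        }
        pool.shutdown();
    }
}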
A tool that has a free trial and lets you generate a report pinpointing a lot of these common causes is Plumbr. That is probably your best chance at a quick solution; try running Plumbr to see if it finds something, and if not, then do a MAT analysis of the heap dump.

Identify old gen in heap dump (or take heap dump of old gen only)

I think I have a memory leak.
(they say the first step is admitting the problem, right?)
Anyway, I think I do - see the attached image for heap usage by region:
Green is Eden, blue/red is S0/S1, purple is old. I have unlimited tenuring (>15), lots of time passed between memory being allocated and it spilling to old gen. Hence - a memory leak. I think.
So - the question - how can I analyze what is leaking? As you can see, my Eden is very active. Lots of objects are being created and destroyed all the time.
Is there a way of taking a heap dump of the old gen only? Or somehow identify the old gen in a full heap dump (if so, with what tool)?
Edit 1:
Clarification: I'm not doing anything that should retain objects in memory. Everything I allocate after the initial startup should die young.
Edit 2:
New findings: I took a heap dump, GCed like crazy, and took another. The second one shows a significantly reduced level of old gen usage. The main difference between the two was objects held by finalizers.
Don't finalizers run in young GC cycles? Do they always wait for a full GC to be cleaned?
Seeing some things propagate to old gen isn't a huge concern. After your old gen reaches a certain threshold, a full GC will kick off. If that isn't able to reclaim the memory, then you have an issue. The fact that you are seeing some memory allocated during a young collection shouldn't be an alarming concern.
lots of time passed between memory being allocated and it spilling to old gen. Hence - a memory leak. I think
Not really. Just because memory is being added to old gen doesn't mean it is a memory leak. It is normal during a young collection for older objects that are still referenced to be promoted to old gen. This may just be your application still ramping up. In large-scale applications there may be features that are not used every day, which may be getting into memory later than you expected.
That being said, if you really are concerned about memory being added to the old gen and want to investigate further, I would recommend running this application in a demo environment. Attach a profiler (VisualVM will work) and load test your application (JMeter is good and free). If you look at the objects, you can get an idea of what generation an object is in. You also want to see what happens when your old gen reaches a threshold where a full GC kicks off (normally in the 70%-90% range). If your old gen recovers back to the 20% level, then there is no leak. In some cases the old gen may never reach the point where a full GC gets kicked off, and may instead level off as you expected. The load test will help identify that.
If it doesn't recover and you confirm you have a memory leak then you will want to capture a heap dump (hprof) and use a tool like MAT (Memory Analyzer Tool) to analyze the dump to find the culprit.
Using JVisualVM (part of the JDK since Java 6 build 10 or something like that), you can look at the TYPE of objects that are in memory. That will help you track down where the leak is. Of course, it takes a lot of digging into the code, but that's the best tool I've used that's always available and reliable.
Watch out for objects being passed around; it could be that you have a handle that's being kept in a list or array and never cleared out. I find that if I watch the number of objects being created and kept in JVisualVM over a period of a few minutes, I usually get an idea of where in the code to dig for the offending objects that are not being released.
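If you prefer the command line to JVisualVM, a class histogram gives a similar per-type breakdown of the heap (assuming jmap from the JDK and that <pid> is your process id); the live option counts only reachable objects:

jmap -histo:live <pid> | head -n 30

Classes whose instance count or byte total keeps growing between snapshots are good candidates for the leak.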

Force hotspot to make frequent GCs?

I am benchmarking a server process in Java, and it appears that HotSpot is not doing many GCs, but when it does, it hits performance massively.
Can I force HotSpot to do frequent, smaller GCs rather than a few massive, long GCs?
You can try changing the GC to parallel or concurrent.
Here's a link to the documentation.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
Interfering with when the GC is called is usually a bad idea.
A better approach would be tuning the sizes of the eden, survivor and old spaces if you have problems with the performance of the GC.
If a full sweep has to be done, it does not really matter how often it is called; it will always be relatively slow. The only fast GC runs are those in the eden and survivor spaces.
So increasing the eden and survivor spaces might solve your problem, but unfortunately good memory profiling is rather time-consuming and complex to perform.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
(link borrowed from the other answer) also gives the options for configuring this if necessary. -XX:NewRatio=2 or -XX:NewRatio=3 might increase your speed, but it might also slow it down. Unfortunately, that is very application dependent.
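As a rough illustration (the heap sizes and jar name are placeholders, not recommendations), a command line that makes the young generation a third of a fixed 2 GB heap might look like this:

java -Xms2g -Xmx2g -XX:NewRatio=2 -XX:SurvivorRatio=8 -jar server.jar

-XX:NewRatio=2 means the old generation is twice the size of the young generation, and -XX:SurvivorRatio=8 splits the young generation so eden is eight times the size of each survivor space. Measure before and after, since the effect is very application dependent.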
You can tell the JVM to do a garbage collection programmatically with System.gc(). Please note that the Javadoc says this is only a suggestion. You can try calling it before getting into a critical section where you don't want the GC performance penalty.
You can increase how often the GC is performed by decreasing the young/new generation sizes, or by calling gc more often. This doesn't mean you will pause for less time, just that pauses will happen more often.
The best way to reduce the impact of GC is to memory profile your application and reduce the amount of garbage you are producing. This will not only make your code faster, but reduce how often and for how long each GC occurs.
In the more extreme case, you can reduce how often the GC occurs to less than once per day, removing it as an issue altogether.
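As a generic illustration of reducing garbage (a sketch, not taken from any particular benchmark), building a string with a single StringBuilder instead of repeated concatenation avoids creating thousands of short-lived intermediate objects:

import java.util.concurrent.ThreadLocalRandom;

public class GarbageReductionSketch {
    // Naive version: every += allocates a new String (and a hidden StringBuilder),
    // all of which immediately become garbage.
    static String joinWithConcat(int[] values) {
        String result = "";
        for (int v : values) {
            result += v + ",";
        }
        return result;
    }

    // Lower-garbage version: one StringBuilder, one final String.
    static String joinWithBuilder(int[] values) {
        StringBuilder sb = new StringBuilder(values.length * 4);
        for (int v : values) {
            sb.append(v).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] values = ThreadLocalRandom.current().ints(10_000).toArray();
        System.out.println(joinWithConcat(values).length());
        System.out.println(joinWithBuilder(values).length());
    }
}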

Difference between System.gc() and dead object reclamation performed by taking a live-only heap dump?

There are at least two ways, directly or indirectly, of suggesting that the JVM expend effort collecting garbage:
System.gc()
taking a heap dump and requesting live objects only
In the latter case, I can get hold of a heap dump programmatically, for example through
HotSpotDiagnosticMXBean hotspotMBean = ManagementFactory.newPlatformMXBeanProxy(
        ManagementFactory.getPlatformMBeanServer(),
        "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
hotspotMBean.dumpHeap(filename, live);
What difference, if any, is there between what these two operations will do to collect non-strongly-reachable objects?
I believe I have evidence that the heap dump approach is more aggressive than System.gc() in the presence of some combination of weak references, RMI distributed garbage collection, and invisible objects strongly reachable from the stack. In particular, objects that are only weakly reachable locally and have become Unreferenced with respect to RMI appear to be collected only by the heap dump. I haven't yet been able to distil this into a small test case, but it is reproducible.
(Before I'm warned against relying on particular GC behaviour in prod code, I'm not. I discovered this while investigating a potential memory leak, and noticed that the result varied depending on when I took the heap dump. I'm just curious.)
This is using HotSpot 64-Bit Server VM 1.6.0_22 on Windows 7.
System.gc() is probably less aggressive because it merely indicates to the JVM that it should run the GC. The GC is then free to decide whether it should collect (find and free the memory of) all dead objects, only some of them, etc. It can decide that the previous major collection happened too recently and that it is not time to collect all objects again.
I believe that dumping the heap and explicitly asking for only the live objects forces the GC to compute precisely, for each object, whether it is still alive. With that part of the collection work already done, it does not cost much more to free the memory used by the dead objects as well.
Alas, I have no strong evidence for this behaviour, and this is more a wild guess than a real explanation.
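For what it's worth, here is a self-contained sketch of the comparison (file name and allocation sizes are invented for illustration; it uses the same HotSpotDiagnosticMXBean as the question):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class GcVersusLiveDump {
    public static void main(String[] args) throws Exception {
        // Create some garbage: allocate ~200 MB and then drop the references.
        byte[][] blocks = new byte[200][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[1024 * 1024];
        }
        blocks = null;

        // Hint only: the JVM may run a young collection, a full collection, or nothing.
        System.gc();

        // Dumping with live = true restricts the dump to reachable objects,
        // so the unreachable ones are not written out.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap("live.hprof", true);
    }
}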
