(Committed and Max lines are the same)
I am looking at the memory usage for a Java application in New Relic. Here are several questions:
# 1
The committed PS Survivor Space heap size has varied over the past few days. Shouldn't it be constant, since it is configured by the JVM?
# 2
From what I understand, heap usage should decrease when a garbage collection happens: Eden usage can drop after either a minor or a major GC, while Old usage only drops after a major GC.
But if you look at the Old memory usage, some time between June 6th and 7th the usage went up and then later came back down. That should mean a major GC happened, right? However, there was still a lot of unused memory left; it never came close to the limit. So how was the major GC triggered? The same goes for Eden: it never reached the limit, yet it still decreased.
The application fetches a file from elsewhere. This file can be large and is processed in memory. Could that explain the behaviour above?
To answer this definitively you would need to provide more information about your configuration; I will assume you are using the HotSpot JVM from Oracle and that you are using the G1 collector. Posting the flags you start the JVM with would also be useful.
The key term here is 'committed'. This is memory reserved by the JVM, but not necessarily in use (or even mapped to physical pages, it's just a range of virtual memory that can be used by the JVM). There's a good description of this in the MemoryUsage class of the java.lang.management package (check the API docs). It says, "committed represents the amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine. The amount of committed memory may change over time (increase or decrease). The Java virtual machine may release memory to the system..." This is why you see it change.
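For reference, you can read the same used/committed/max numbers from inside the JVM via the java.lang.management API mentioned above. A minimal sketch (the class name is mine):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class CommittedVsUsed {
        public static void main(String[] args) {
            // Whole-heap view: used <= committed <= max.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap: used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

            // Per-pool view, e.g. "PS Survivor Space" when the Parallel collector is in use.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%-25s used=%dMB committed=%dMB%n",
                        pool.getName(), u.getUsed() >> 20, u.getCommitted() >> 20);
            }
        }
    }

The per-pool committed figure is what the New Relic chart plots, and as the MemoryUsage documentation says, it is allowed to move up and down over time.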
Assuming you are using G1, the collector performs incremental compaction. You are correct that if the collector could not keep up with allocation in the old gen and it was getting low on space, it would perform a full compacting collection. That is not happening here, as the last graph shows you are using nowhere near the allocated heap space. However, to avoid this, G1 collects and compacts concurrently with your application. This is why you see usage go up (as your application instantiates more objects) and then go down (as the G1 collector reclaims space from objects that are no longer required). For a more detailed explanation of how G1 works there is a good read in the documentation: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/g1_gc.html.
Related
I would like to understand why GC gets triggered even though I have plenty of heap left unused. I have allocated 1.7 GB of RAM, yet I still often see around 10% GC CPU usage.
I use -XX:+UseG1GC with Java 17.
JVMs always have some GC threads running, because the JVM manages memory for you (the exception is Epsilon GC, which performs no GC at all; I do not recommend it unless you know exactly why you need it).
The G1 heap is divided into two spaces: young and old. All objects are created in the young space. When the young space fills up (it always does eventually, unless your code produces zero garbage), a GC is triggered that cleans unreferenced objects out of the young space and promotes some still-referenced objects to the old space.
The spikes in the right-hand screenshot correspond to young collection events (where unreferenced objects get cleaned up). The young space is always much smaller than the old space, so it fills up frequently. That is why you see those spikes even though there is plenty of free memory overall.
DISCLAIMER: This is a very high-level explanation of memory management in the JVM; some important concepts have not been mentioned.
You can read more about the G1 collector here.
Also take a look at the jstat tool, which will help you understand what is happening in your heap.
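If you want to see this effect in isolation, a throwaway program along these lines (class name and sizes are mine; run it with, say, -Xmx1700m and -Xlog:gc) shows frequent young collections while most of the heap stays free, simply because the small young space keeps filling up with short-lived garbage:

    import java.util.ArrayList;
    import java.util.List;

    public class YoungChurn {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // ~10 MB of short-lived objects per batch; almost everything dies young,
                // so only the young space fills and gets collected, over and over.
                List<byte[]> batch = new ArrayList<>();
                for (int i = 0; i < 10_000; i++) {
                    batch.add(new byte[1024]);
                }
                batch.clear();              // nothing survives the batch
                Thread.sleep(10);
            }
        }
    }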
My Java program reads data from a stream and creates an in-memory cache of parts of it. At some point it throws an OutOfMemoryError, and I've caused it to create a heap dump at that time so that I can see what causes the issue.
But when I load the heap dump I see that about half of the memory is unused: I started the VM with -Xmx8000m, yet the heap dump, when loaded into Eclipse Memory Analyzer or VisualVM, only shows about 4GB in use. The dump file itself, however, is around 8GB in size.
What is also odd is that both tools report a lot of int arrays of size int[262136] as "unreferenced objects", i.e. garbage. There is about 4GB of those, which really points to them not being garbage but being the reason for the OOM.
By the way, my code does not create int arrays of this size at all.
Why do I get this OOM, and what is the matter with those int[] arrays?
I am running on a Java 11 JDK, but the same issue also occurs on Java 14.
This is a problem with Java's garbage collector, and it was very hard to find.
The default garbage collector for these versions is the G1 collector. This garbage collector divides the available memory into fixed-size memory regions. These are always a power of two in size, at least 1MB, and the default size depends on the -Xmx max memory parameter.
Those int[262136] arrays are a trick the GC uses to mark these regions as Java objects. Such an int array takes up just about 1MB, i.e. the size of a region. The arrays are left unreferenced, so most tools either do not show them or report them as garbage, which is highly misleading because they appear to be the cause of the OOM issue.
The real reason for the OOM is that the caching code allocates (and releases) objects that the G1 garbage collector considers "humongous objects". G1 has huge problems with reclaiming or moving these objects, and this apparently causes memory fragmentation, which in turn causes the OOM even though enough memory appears to be available. For some reason the GC logging gives no indication that this could be an issue 8-(.
A good test to see whether this is the cause of your issue is to run the same program with either the old Concurrent Mark Sweep collector (by adding -XX:+UseConcMarkSweepGC to the Java command line; this works best, but that collector was removed in Java 14) or with the parallel GC (by adding -XX:+UseParallelGC).
To solve this, either use one of the above GCs or play with the -XX:G1HeapRegionSize parameter. Set it to a larger power-of-two size (such as 2m, 4m, or 16m) to see whether that fixes the issue.
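For context, a G1 "humongous" allocation is any single object larger than half a region. The sketch below (sizes and the 500-entry cap are illustrative, and whether it actually hits an OOM depends on your heap and region size) shows the kind of allocation pattern that fragments the region map: large arrays of varying humongous sizes being retained and released.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class HumongousChurn {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            List<int[]> cache = new ArrayList<>();
            while (true) {
                // ~1-4 MB int arrays: each one is "humongous" for a 1 MB G1 region,
                // so each needs its own contiguous run of regions.
                cache.add(new int[262_144 + rnd.nextInt(786_432)]);
                if (cache.size() > 500) {
                    cache.subList(0, 250).clear();   // release half, keep the rest pinned
                }
            }
        }
    }

Running it once with default G1 settings and once with -XX:G1HeapRegionSize=16m (so that none of these arrays counts as humongous) is a quick way to check whether region size is the variable that matters for you.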
Some more information on this can be found on the site of jxray.com, a heap dump analysis tool: https://jxray.com/documentation#humongous_objs, and in an Oracle article about the G1 collector at https://www.oracle.com/technical-resources/articles/java/g1gc.html.
This question already has answers here: Java using much more memory than heap size (or size correctly Docker memory limit) and Growing Resident Size Set in JVM.
My Java service is running on a host with 16 GB of RAM, with -Xms and -Xmx both set to 8GB.
The host is running a few other processes.
I noticed that my service is consuming more and more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using 9GB+. It consumed roughly 100MB more memory per day.
I took a heap dump, then restarted my service and took another heap dump. I compared the two dumps, but there was not much difference in heap usage: the dumps show the service using about 1.3GB before the restart and about 1.1GB after it.
From the process memory usage, my service is consuming more memory over time, but that growth is not reported in the heap dump. How do I identify where the increase in memory usage is coming from?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and the heap dumps show usage of around 1.1GB to 1.3GB. That is not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have allowed it to.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold class metadata and bytecodes (in "metaspace"), and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
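For instance, some of the non-heap usage can be read from the platform MBeans. A small sketch (class name is mine); note that memory allocated by native code outside the JVM still will not show up here:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class NonHeapReport {
        public static void main(String[] args) {
            // Metaspace, code cache, compressed class space, ...
            MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
            System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);

            // Direct and mapped NIO buffers: off-heap, so never in a heap dump.
            for (BufferPoolMXBean buf :
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("buffer pool '%s': used=%dMB%n",
                        buf.getName(), buf.getMemoryUsed() >> 20);
            }
        }
    }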
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Once it has finished a run, the GC looks at the ratio of free space to used space; if that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
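If taking dumps on a schedule by hand is awkward, the dump can also be triggered from inside the service. A minimal sketch (the file path and the two-hour interval are placeholders) using the HotSpotDiagnosticMXBean; in a real service you would call this from an existing scheduler rather than a dedicated loop:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class PeriodicHeapDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            while (true) {
                // "live = true" dumps only reachable objects (a GC is done first),
                // which makes successive dumps easier to compare in MAT / VisualVM.
                String file = "/tmp/heap-" + System.currentTimeMillis() + ".hprof";
                diag.dumpHeap(file, true);
                System.out.println("wrote " + file);
                Thread.sleep(2 * 60 * 60 * 1000L);
            }
        }
    }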
From the process memory usage, my service is consuming more memory over time, but that growth is not reported in the heap dump. How do I identify where the increase in memory usage is coming from?
I don't think you need to do that. I think that the difference between 1.1GB and 1.3GB in the heap dumps is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] Or if you use SoftReferences, the GC will break them for you.
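As a rough sketch of what such a well-behaved cache can look like (not production code, just the SoftReference idea from the footnote): each value is held only through a SoftReference, so the GC is free to reclaim it when the heap gets tight, and the cache reloads it on the next miss.

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

        public V get(K key, Function<K, V> loader) {
            SoftReference<V> ref = map.get(key);
            V value = (ref != null) ? ref.get() : null;   // null if the GC cleared it
            if (value == null) {
                value = loader.apply(key);                // reload / recompute on a miss
                map.put(key, new SoftReference<>(value));
            }
            return value;
        }
        // A real cache would also purge map entries whose references were cleared,
        // e.g. via a ReferenceQueue; omitted here for brevity.
    }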
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data being manipulated: up to 4GB. At other times very little memory is needed, around 300MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected, and even longer until memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4GB of RAM to create a 4GB heap. After processing has finished, the heap is only 30%-50% full, and it takes a long time for memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks below 500MB; another triggered GC shrinks it to 200MB. Sometimes memory is returned to the system, but often it is not.
Here is a typical screenshot from VisualVM. The heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC pause times are not much of an issue for us; we would rather have more frequent and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows that they are not always obeyed. We set MaxHeapFreeRatio to 40% and still see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx), since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle Hotspot JVM 1.8
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles earlier and therefore more often. If you set n to 0, GC runs essentially constantly. I'm not sure whether that is a good idea for your application's response times, though.
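To see whether a given combination of flags actually hands memory back in your case, a throwaway test like the one below (the sizes, and the flags in the comment, are just a starting point, not a recommendation) lets you watch the heap in VisualVM while an allocation spike comes and goes:

    import java.util.ArrayList;
    import java.util.List;

    // Simulates the workload described above: a burst that needs a few GB,
    // followed by a long quiet phase. Run it with candidate flags, e.g.
    //   java -Xmx4g -XX:+UseG1GC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30
    //        -XX:InitiatingHeapOccupancyPercent=20 HeapShrinkTest
    // and watch whether the heap size comes back down during the idle phase.
    public class HeapShrinkTest {
        public static void main(String[] args) throws InterruptedException {
            List<byte[]> work = new ArrayList<>();
            for (int i = 0; i < 3_000; i++) {
                work.add(new byte[1 << 20]);   // ~3 GB burst, like the large task
            }
            work = null;                       // task finished, data no longer needed
            System.out.println("burst done, idling...");
            Thread.sleep(30 * 60 * 1000L);     // idle phase: does the heap shrink?
        }
    }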
I have a Java app which shows different GC behaviors in different environments. In one environment, the heap usage graph is a slow sawtooth with major GCs every 10 hours or so, only when the heap is >90% full. In another environment, the JVM does major GCs every hour on the dot (the heap is normally between 10% and 30% at these times).
My question is, what are the factors which cause the JVM to decide to do a major GC?
Obviously it collects when the heap is nearly full, but there is some other cause at play which I am guessing is related to an hourly scheduled task within my app (although there is no spike in memory usage at this time).
I assume GC behaviour depends heavily on the JVM; I am using:
Java HotSpot(TM) 64-Bit Server VM 1.7.0_21 Oracle Corporation
No specific GC options, so using the default settings for 64-bit server (PS MarkSweep and PS Scavenge)
Other info:
This is a web app running in Tomcat 6.
Perm gen hovers around 10% in both environments.
The environment with the sawtooth behaviour has a 7GB max heap, the other has 14GB.
Please, no guesswork. The JVM must have rules for deciding when to perform a major GC, and these rules must be encoded deep in the source somewhere. If anyone knows what they are, or where they are documented, please share!
I have found four conditions that can cause a major GC (given my JVM config):
The old gen area is full (even if it can be grown, a major GC will still be run first)
The perm gen area is full (even if it can be grown, a major GC will still be run first)
Someone is manually calling System.gc(): a bad library or something related to RMI (see links 1, 2 and 3)
The young gen areas are all full and nothing is ready to be moved into old gen (see 1)
As others have commented, cases 1 and 2 can be improved by allocating plenty of heap and permgen, and setting -Xms and -Xmx to the same value (along with the perm equivalents) to avoid dynamic heap resizing.
Case 3 can be avoided using the -XX:+DisableExplicitGC flag.
Case 4 requires more involved tuning, e.g., -XX:NewRatio=N (see Oracle's tuning guide).
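Case 3 is easy to demonstrate for yourself. The snippet below forces a full collection on a stock HotSpot JVM; run it again with -XX:+DisableExplicitGC (and -verbose:gc to watch) and the call becomes a no-op:

    public class ExplicitGc {
        public static void main(String[] args) {
            byte[] garbage = new byte[64 << 20];   // allocate something to collect
            garbage = null;
            // Case 3: an explicit full-GC request. Libraries (and RMI's distributed GC)
            // can make this same call behind your back.
            System.gc();
        }
    }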
Garbage collection is a pretty complicated topic, and while you could learn all the details about this, I think what’s happening in your case is pretty simple.
Sun’s Garbage Collection Tuning guide, under the “Explicit Garbage Collection” heading, warns:
applications can interact with garbage collection … by invoking full garbage collections explicitly … This can force a major collection to be done when it may not be necessary … One of the most commonly encountered uses of explicit garbage collection occurs with RMI … RMI forces full collections periodically
That guide says that the default time between garbage collections is one minute, but the sun.rmi Properties reference, under sun.rmi.dgc.server.gcInterval says:
The default value is 3600000 milliseconds (one hour).
If you’re seeing major collections every hour in one application but not another, it’s probably because the application is using RMI, possibly only internally, and you haven’t added -XX:+DisableExplicitGC to the startup flags.
Disable explicit GC, or test this hypothesis by setting -Dsun.rmi.dgc.server.gcInterval=7200000 and observing if GCs happen every two hours instead.
It depends on your configuration, since HotSpot configures itself differently in different environments. For example, on a server-class machine with more than 2GB of RAM and at least two processors, some JVMs run in '-server' mode instead of the default '-client' mode, which sizes the memory spaces (generations) differently, and that affects when garbage collection occurs.
A full GC can occur automatically, but also when the garbage collector is called from your code (e.g. via System.gc()). When it happens automatically, it depends on how the minor collections are behaving.
There are at least two algorithms being used. If you are using defaults, a copying algorithm is used for minor collections, and a mark-sweep algorithm for major collections.
A copying algorithm works by copying live objects from one block of memory to another and then clearing the space that contained the blocks with no references to them. The copying algorithm in the JVM uses a large area for objects that have just been created (called Eden) and two smaller ones (called survivor spaces). Surviving objects are copied once out of Eden and then between the survivor spaces during each minor collection, until they become tenured and are copied to another space (the tenured space), from which they can only be removed by a major collection.
Most objects in Eden die quickly, so the first collection copies the surviving objects into the survivor spaces (which are by default much smaller). There are two survivors, s1 and s2. Every time Eden fills, the surviving objects from Eden and s1 are copied to s2, and Eden and s1 are cleared. The next time, survivors from Eden and s2 are copied back to s1. Objects keep being copied between s1 and s2 until a certain number of copies is reached, an object is too big to fit, or some other criterion is met; then the surviving objects are copied to the tenured generation.
The tenured objects are not affected by the minor collections. They accumulate until the area gets full (or the garbage collector is called). Then the JVM will run a mark-sweep algorithm in a major collection which will preserve only the surviving objects that still have references.
If you have larger objects that don't fit into the survivors, they might be copied directly to the tenured space, which will fill more quickly and you will get major collections more frequently.
Also, the sizes of the survivor spaces, the number of copies between s1 and s2, the Eden size relative to s1 and s2, and the size of the tenured generation may all be configured automatically, and differently, in different environments by JVM ergonomics, which may also select -server or -client behaviour for you. You could try running both JVMs with an explicit -server or -client setting and check whether they still behave differently.
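As a small illustration of the "larger objects skip the survivors" point above: a program like the one below keeps allocating chunks that are large relative to typical survivor spaces, so they tend to end up in the tenured generation quickly and bring major collections forward. The sizes are illustrative only, since ergonomics decides how big Eden and the survivors actually are; watch it with jstat -gcutil <pid> 1000 to confirm what happens on your setup.

    import java.util.ArrayList;
    import java.util.List;

    public class DirectPromotion {
        public static void main(String[] args) throws InterruptedException {
            List<byte[]> retained = new ArrayList<>();
            while (true) {
                retained.add(new byte[16 << 20]);   // 16 MB chunks, likely too big for a survivor space
                if (retained.size() > 100) {
                    retained.remove(0);             // cap the retained set at ~1.6 GB
                }
                Thread.sleep(50);
            }
        }
    }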
Even if this gets downvotes... my best guess (you will have to test this) is that the heap needs to expand, and when that happens a full GC is triggered. Not all memory is allocated to the JVM at once.
You can test this by setting -Xms and -Xmx to the same value, for example 7GB each.