I am running a J2EE application on a 3-year-old Solaris system with a used heap of about 300 MB. From the GC logs I can see that the full GC, which is triggered a few times a day, takes about 5 seconds and recovers about 200 MB each time. What could cause a full GC to take that long on such a small heap?
I am running Java 1.6.0_37.
A slow full GC (and a slow minor GC, for that matter) is primarily the result of a poor hardware setup, secondly of the software configuration (i.e. GC ergonomics), and lastly of the number of objects residing in the heap.
Looking at the hardware: what CPU model and vendor are you using on your Solaris machine? Is it an SMP system with more than one core? Do you have more than one thread per core? Does your GC utilize all available virtual processors on the system, i.e. is the garbage collection work distributed across more than one processor?
Another situation that makes a full GC slow is when part of the heap has been swapped out of main memory. In that case the swapped-out pages must be swapped back in during the collection, which can be a rather time-consuming process; it also means you do not have sufficient physical memory installed on the machine.
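On Solaris you can get a rough indication of whether the machine is paging by watching vmstat while the application is busy; this is only a sketch, and the exact column layout depends on your Solaris release:

    vmstat 5
    # Watch the "swap"/"free" columns under "memory" and the "sr" (scan rate)
    # column under "page". A consistently non-zero scan rate while the JVM is
    # under load suggests that parts of the process are being paged out.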
Do any other applications on the system compete for the same physical resources, i.e. CPU and memory?
Looking at the GC ergonomics: which collector are you using? I would recommend the parallel throughput collector or the G1 collector, both with multiple collector threads. I would also recommend using a NUMA configuration.
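As a sketch of such a configuration on a 1.6 HotSpot JVM (the heap sizes, thread count and jar name below are placeholders that have to be adapted to your machine):

    java -XX:+UseParallelGC -XX:+UseParallelOldGC \
         -XX:ParallelGCThreads=8 \
         -XX:+UseNUMA \
         -Xms512m -Xmx512m \
         -jar yourapp.jar

Note that on 1.6 the G1 collector is still experimental and may need -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC to be enabled.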
Some general rules:
The better the hardware and the GC ergonomics, the faster the individual garbage collections will be.
The fewer and smaller the objects the application creates, the less often the garbage collector will run.
The fewer long-lived objects that are created, the less often the full garbage collector will run.
For more information about GC ergonomics:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
(Committed and Max lines are the same)
I am looking at the memory usage for a Java application in newrelic. Here are several questions:
# 1
The committed PS Survivor Space heap has varied over the past few days. But shouldn't it be constant, since it is configured by the JVM?
# 2
From what I understand, heap memory should decrease when there is a garbage collection. The memory in Eden can decrease when either a major GC or a minor GC happens, while the memory in Old can decrease when a major GC happens.
But if you look at the Old memory usage, at some point between June 6th and 7th the memory went up and then later went down. That should mean a major GC happened, right? However, there was still a lot of unused memory left; it did not come anywhere near the limit. So what triggered the major GC? The same goes for the Eden memory usage: it never reached the limit, yet it still decreased.
The application fetches a file from another location. This file can be large and is processed in memory. Could this explain the behaviour above?
You need to provide more information about your configuration to answer this definitively; I will assume you are using the HotSpot JVM from Oracle and the G1 collector. Posting the flags you start the JVM with would also be useful.
The key term here is 'committed'. This is memory reserved by the JVM, but not necessarily in use (or even mapped to physical pages, it's just a range of virtual memory that can be used by the JVM). There's a good description of this in the MemoryUsage class of the java.lang.management package (check the API docs). It says, "committed represents the amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine. The amount of committed memory may change over time (increase or decrease). The Java virtual machine may release memory to the system..." This is why you see it change.
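You can also query the same used/committed/max numbers that the newrelic graphs show directly from the JVM with the standard java.lang.management API; a minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapReport {
        public static void main(String[] args) {
            // Snapshot of the heap as the JVM's memory manager sees it.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("used      = " + (heap.getUsed() >> 20) + " MB");
            System.out.println("committed = " + (heap.getCommitted() >> 20) + " MB");
            System.out.println("max       = " + (heap.getMax() >> 20) + " MB");
        }
    }

Used can never exceed committed, and committed can grow and shrink anywhere between used and max, which is exactly the movement you see in the graphs.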
Assuming you are using G1, the collector performs incremental compaction. You are correct that if the collector could not keep up with allocation in the old gen and was getting low on space, it would perform a full compacting collection. That is not happening here, as the last graph shows you are using nowhere near the allocated heap space. To avoid that situation, G1 collects and compacts concurrently with your application. This is why you see usage go up (as your application instantiates more objects) and then go down (as the G1 collector reclaims space from objects that are no longer required). For a more detailed explanation of how G1 works there is a good read in the documentation: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/g1_gc.html
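If you want to verify what the collector is actually doing, GC logging will show every young collection, concurrent cycle and (if it ever happens) full GC. A sketch for a HotSpot 8 JVM; the log path and jar name are placeholders:

    java -XX:+UseG1GC \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -Xloggc:/var/log/myapp/gc.log \
         -jar yourapp.jar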
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data being manipulated: up to 4 GB. At other times very little memory is needed, around 300 MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected, and even longer until memory is returned to the operating system. That is a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4 GB of RAM to create a 4 GB heap. Then, after processing has finished, the memory is only 30%-50% full. It takes a long time for the memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks to below 500 MB. After another triggered GC the heap shrinks to 200 MB. Sometimes memory is returned to the system, often not.
Here is a typical VisualVM screenshot. The heap is collected (the used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC pause times are not much of an issue for us. We would rather have more frequent and earlier GCs that reclaim memory in time.
And how can we tweak the JVM to release memory back to the operating system, so that the memory held by the JVM shrinks?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC, so whichever GC solves the problem best can be applied.
We use the Oracle HotSpot JVM 1.8.
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection more often. When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea with regard to your application's response times.
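A sketch of what such a start-up command could look like with G1 on a HotSpot 1.8 JVM; the concrete values are only illustrative and need to be validated against your workload:

    java -XX:+UseG1GC \
         -XX:InitiatingHeapOccupancyPercent=30 \
         -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
         -Xms512m -Xmx4g \
         -jar yourapp.jar

A lower InitiatingHeapOccupancyPercent makes the concurrent marking cycle start earlier, and the free-ratio flags give the collector permission to shrink the committed heap once usage drops.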
According to:
9 Garbage-First Garbage Collector
and:
G1: Java's Garbage First Garbage Collector
G1 is targeted for multiprocessor machines with large memories.
Those two papers (and other articles on the web) do not describe why we need:
a. large memories
b. multiprocessors (I assume this is needed because of the concurrent and parallel work)
What is the technical explanation for those requirements?
It's the other way around: G1 is not targeted at large memories as such; rather, if your application demands a large heap, G1 is effective.
Why would your application demand a large heap? That depends on the business requirements and the specific needs of the application. You may load a huge set of master data, or you may use in-memory caching to reduce response times. Think of big data applications (Spark, Hadoop) that process terabytes of data and use memory for processing.
Multiprocessor machines have more processing power and are effective for parallel execution of different tasks. Applications with large heaps obviously demand more processing power.
By setting a max pause time goal, G1GC tries to meet that goal. Compared to other algorithms, G1GC by default allows up to about 10% of total time to be spent in garbage collection activities. You have to fine-tune the parameters properly to achieve your pause time goals.
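For example (the values below are roughly the G1 defaults, shown only to make the knobs explicit): the pause goal is set with -XX:MaxGCPauseMillis, and the time-spent-in-GC goal with -XX:GCTimeRatio, where a ratio of 9 corresponds to the ~10% mentioned above (1 / (1 + 9)):

    java -XX:+UseG1GC \
         -XX:MaxGCPauseMillis=200 \
         -XX:GCTimeRatio=9 \
         -jar yourapp.jar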
This related question is helpful to get some more insight into G1GC: Java 7 (JDK 7) garbage collection and documentation on G1
G1 is the only collection algorithm in the HotSpot VM that can deal with very large heaps efficiently. However, a large heap is NOT a requirement; rather, G1 is built for situations where your application needs a very large heap. In low-heap situations it is still outperformed by the older algorithms. The same is true for the number of processors.
I've got a memory-intensive Java 8 server application, which is also latency-sensitive. The application runs fine with a 2.5 GB heap, but there are spikes of garbage collector CPU usage once a day.
Now I wonder how to reduce the spikes. I probably can't reduce the memory usage or add more memory. I am OK with the average CPU usage of the GC; I would just like to distribute the CPU load evenly over time. Is that possible?
First of all you should make sure that it's the CPU utilization that introduces latency and not stop-the-world pauses (e.g. Full GC if you're using CMS).
If full GCs are not the issue, then you can inspect the effective VM flags of your application by starting it with all present flags (present flags may affect the defaults of others) and appending -XX:+PrintFlagsFinal.
Look for ConcGCThreads and see if reducing that number has the desired effect. It will use fewer cores (but more wall-clock time) for the concurrent cycles.
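A sketch of how to inspect and then adjust this (replace <your existing flags> with your real start-up options, and treat the thread count as a value to experiment with):

    # Print the effective values of the relevant VM flags for this start-up command:
    java <your existing flags> -XX:+PrintFlagsFinal -version | grep -E 'ConcGCThreads|ParallelGCThreads'

    # Then retry with fewer concurrent GC threads, e.g.:
    java <your existing flags> -XX:ConcGCThreads=2 -jar yourapp.jar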
We have a pool of servers that sits behind a load balancer. The machines in this pool run a garbage collection every 6 seconds on average, and each collection takes almost half a second. We also see a CPU spike during garbage collection.
The client machines see the average time to make a connection to the server spike by almost 10% during the day.
Theory: the CPU is busy doing GC, and that is why it cannot allocate a connection faster.
Is that a valid theory?
JVM: IBM
GC algorithm: gencon
Nursery: 5 GB
Heap size: 18 GB
I'd say that with that many allocations all bets are off -- it could absolutely get worse over time. If you are doing a GC every 6 seconds all day long, that seems problematic.
Do you have access to the code? Can it be rewritten to reuse objects and be more intelligent about allocation? I've done a few embedded systems, and the trick is to NEVER call new once the system is up and running (quite doable if you have control over the entire system); see the sketch below.
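A minimal sketch of that idea, assuming the code can be changed (the class and sizes below are made up for illustration): allocate the buffers once at start-up and recycle them instead of calling new in the hot path.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical buffer pool: all allocation happens once, up front.
    public class BufferPool {
        private final BlockingQueue<byte[]> pool;

        public BufferPool(int buffers, int sizeBytes) {
            pool = new ArrayBlockingQueue<byte[]>(buffers);
            for (int i = 0; i < buffers; i++) {
                pool.add(new byte[sizeBytes]);  // the only new calls ever made
            }
        }

        public byte[] acquire() throws InterruptedException {
            return pool.take();                 // blocks instead of allocating
        }

        public void release(byte[] buffer) {
            pool.offer(buffer);                 // hand the buffer back for reuse
        }
    }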
If you don't have access to the code, look into the GC tuning options available (including the choice of garbage collector), both those distributed with the JDK and third-party options. You may be able to improve performance with a few command-line modifications.
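For the IBM JVM the relevant knobs are the -Xgcpolicy option and the nursery/heap size settings; a hedged sketch (the sizes are examples, not recommendations):

    java -Xgcpolicy:gencon -Xmx18g -Xmn6g -verbose:gc -jar yourapp.jar
    # Alternatively, compare against another policy, e.g. -Xgcpolicy:optavgpause,
    # using the verbose GC output to judge pause times and frequency.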
It's possible I guess.
Given that garbage collection is such an intensive process, is there any reason for it to occur every 6 seconds? I'm not familiar with the IBM JVM or the particular collection algorithm you are using, so I can't really comment on those. However, there are some good tuning documents provided by Sun (now offered by Oracle) that discuss the different types of collectors and when you would use them. See this link for some ideas.
One way to test your theory would be to add some code that logs the time a connection was requested and the time it was actually allocated. If the GC-related CPU spikes coincide with longer connection allocation times, that would support your theory. Your problem will then become how to get around it.
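A minimal sketch of such logging (the class below is hypothetical; wrap it around whatever call actually hands out a connection on your servers, and correlate the slow samples with the GC log timestamps):

    import java.util.concurrent.Callable;

    // Hypothetical timing wrapper: logs how long a connection request waited.
    public class AllocationTimer {
        public static <T> T timed(String what, Callable<T> allocation) throws Exception {
            long requested = System.nanoTime();
            T result = allocation.call();
            long waitedMs = (System.nanoTime() - requested) / 1000000L;
            if (waitedMs > 100) {   // threshold is arbitrary; tune as needed
                System.err.println(what + " took " + waitedMs + " ms");
            }
            return result;
        }
    }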