I've got a memory-intensive Java 8 server application, which is also latency-sensitive. The application runs fine with a 2.5G heap, but there are spikes of garbage collector CPU usage once a day.
Now I wonder how to reduce the spikes. I probably can't reduce the memory usage or add more memory. I am OK with the average CPU usage of the GC; I would just like to distribute the CPU load evenly over time. Is that possible?
First of all, you should make sure that it's the CPU utilization that introduces latency and not stop-the-world pauses (e.g. Full GC if you're using CMS).
If Full GCs are not the issue, then you can inspect the effective VM flags of your application by starting it with all of its present flags (present flags may affect the defaults of others) and appending -XX:+PrintFlagsFinal.
Look for ConcGCThreads and see if reducing that number has the desired effect. It will use fewer cores (but more wall-clock time) for the concurrent cycles.
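For example (assuming, for illustration, a CMS setup with the 2.5G heap mentioned in the question), the effective values can be dumped and filtered like this; -version makes the JVM print the flags and exit without running the application:

    java -Xms2560m -Xmx2560m -XX:+UseConcMarkSweepGC \
        -XX:+PrintFlagsFinal -version | grep -E 'ConcGCThreads|ParallelGCThreads'

From there, you could experiment with an explicit -XX:ConcGCThreads=1 and watch how the CPU profile of the concurrent cycles changes.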
My Java service is running on a 16 GB RAM host with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service consumes more memory over time.
I ran the command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage of my Java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it used more than 9GB. It consumed about 100MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared those two dumps, but there was not much difference in the heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing memory usage increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and JIT compiled native code (in "metaspace")
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
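For instance, on Java 8 the JVM's Native Memory Tracking can attribute most of the non-heap usage. A sketch of how you might use it (the jar name and the PID are placeholders):

    # start the service with native memory tracking enabled (small overhead)
    java -XX:NativeMemoryTracking=summary -Xms8g -Xmx8g -jar my-service.jar

    # later: take a baseline, wait a day, and diff against it
    jcmd <pid> VM.native_memory baseline
    jcmd <pid> VM.native_memory summary.diff

The diff would show which native category (metaspace, threads, internal, etc.) is growing by ~100MB per day.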
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at the ratio of free space to used space. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
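Something like this (jmap ships with the JDK; the file names are just examples) gives two comparable dumps a few hours apart; the live option forces a full GC first, so only reachable objects end up in the dump:

    jmap -dump:live,format=b,file=dump-2h.hprof <pid>
    # ... wait two or more hours ...
    jmap -dump:live,format=b,file=dump-4h.hprof <pid>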
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] Or if you use SoftReferences, the GC will break them for you.
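To illustrate that last point, here is a minimal, hypothetical sketch (not anything from the service in question) of a cache whose entries the GC is free to reclaim under memory pressure:

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // A toy cache: the GC may clear the SoftReferences when the heap runs low,
    // so memory held by cached values is reclaimable and is not a leak.
    class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

        void put(K key, V value) {
            map.put(key, new SoftReference<>(value));
        }

        V get(K key) {
            SoftReference<V> ref = map.get(key);
            V value = (ref == null) ? null : ref.get();
            if (value == null) {
                map.remove(key); // entry was cleared by the GC (or never existed)
            }
            return value;
        }
    }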
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data that is manipulated: up to 4GB. At other times, very little memory is needed, around 300MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected, and even longer until memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4GB of RAM to create a 4GB heap. Then, after processing has finished, the heap is only 30%-50% full. It takes a long time for the memory consumption to change. When I trigger a GC (via JConsole), the heap shrinks below 500MB. Another triggered GC and the heap shrinks to 200MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot from VisualVM: the heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC pause times are not much of an issue for us. We would rather have more frequent and earlier GCs to reclaim memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx), since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle HotSpot JVM 1.8.
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles more often. When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea regarding your application's response time.
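Note that -XX:InitiatingHeapOccupancyPercent is a G1 flag, so it needs -XX:+UseG1GC to take effect; and, if I remember correctly, from 8u20 onward G1 also honours -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio and can shrink the heap after a collection. A possible starting point (the numbers are guesses to be tuned against your workload, and the jar name is a placeholder):

    java -XX:+UseG1GC \
        -XX:InitiatingHeapOccupancyPercent=35 \
        -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 \
        -Xms512m -Xmx4g -jar your-app.jar

Keeping -Xms low while leaving -Xmx at the required maximum lets the heap grow for the big tasks and gives the JVM room to shrink it again afterwards.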
Attached is the overview display of my JConsole.
As you can see, the heap memory usage spikes up and the CPU usage is very spiky as well. However, when the heap memory usage dips (I guess a GC happens), the CPU does not spike for a moment.
What could be the cause of the heap memory increasing, which leads to CPU spikes and, in turn, high CPU utilisation?
A standard busy while loop uses around 50% of a CPU core, so it isn't wise to have many such loops running at once. What I designed to combat that is a wait block that pauses for a specific amount of time, using an InputStream that blocks until input is available; if the wait block waits 1 millisecond, the loop runs 1000 times a second, so although the CPU usage will be basically 0%, it has speed limitations. As for the memory usage: from what I understand, if you are using sockets and you don't close them but keep recreating them, the memory usage just builds and builds. I might be able to help more if I know what you have done in your code.
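For reference, the same throttling idea can be sketched with a plain Thread.sleep instead of a blocking InputStream; this is a minimal illustration, not the poster's actual code, and doWork is a placeholder:

    // A loop throttled to roughly 1000 iterations per second.
    class ThrottledLoop implements Runnable {
        private volatile boolean running = true;

        @Override
        public void run() {
            while (running) {
                doWork();            // placeholder for the real loop body
                try {
                    Thread.sleep(1); // yield the CPU for ~1 ms each iteration
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;           // stop cleanly if interrupted
                }
            }
        }

        void stop() { running = false; }

        private void doWork() { /* placeholder */ }
    }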
It seems when a Full GC is performed, the CPU utilisation is normalised.
I have a multi-threaded program that does heavy memory allocation. The performance is fine on a quad-core i7 CPU, where the speed-up is around 3.9x. But when the program is executed on a 12-core Xeon CPU, the speed-up does not go beyond 5.5x.
I should mention that GC does not seem to be the problem, because VisualGC reports less than 1 second of GC time after more than 100 seconds of execution. The main memory usage is in the Eden section of the heap; other sections are hardly used. The code does massive int-array allocations and performs some arithmetic operations on them. It is somewhat like state-space exploration, and the allocation of new instances cannot be avoided.
As you know, the standard memory allocators of both Windows and Linux show unsatisfactory performance for multi-threaded programs, and good alternatives like tcmalloc and Hoard are available for C/C++. Since the parallel section consists of fully independent tasks and the GC time is very low, I suspected that the main reason might be the poor performance of the JVM's memory allocator when many threads compete for allocation.
Does anybody have experience with the JVM's allocator in massively multi-threaded programs, and can you give advice on how I can overcome this problem?
P.S. I have tested the code with JVM 6, 7, and 8. The allocation rate is also very high (around 10 million allocations per second), but as I mentioned, the Eden section is heavily used and the working set is less than a gigabyte.
Is the Eden space smaller than it should be in your case? If so, consider using
-XX:NewRatio=1 or another appropriate value.
To ascertain that, use
-XX:+PrintTenuringDistribution
to see the tenuring distribution.
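Concretely, a test run could combine both flags (NewRatio=1 gives the young generation half of the heap; the jar name is a placeholder):

    java -XX:NewRatio=1 -XX:+PrintTenuringDistribution -jar your-benchmark.jar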
I am running a J2EE application on a 3-year-old Solaris system with a used heap of about 300 MB. From the GC logs I have seen that the full GC that is triggered a few times a day takes about 5 seconds and recovers about 200 MB every time. What could be the reason for a full GC taking such a long time on such a small heap?
I run Java 1.6.0_37.
A slow full GC (and minor GC, for that matter) is primarily a result of a poor hardware setup, secondly of software configuration (i.e. GC ergonomics), and lastly of the number of objects residing in the heap.
Looking at the hardware: what CPU model and vendor are you using on your Solaris machine? Is it an SMP system with more than one core? Do you have more than one thread per core? Does your GC utilize all available virtual processors on the system, i.e. is the garbage collection distributed across more than one processor?
Another situation that makes a full GC slow is if part of the heap has been swapped out of main memory. In that case the swapped-out pages must be swapped back in during the garbage collection, which can be a rather time-consuming process. That happens when you do not have sufficient physical memory installed in the machine.
Do any other applications on the system compete for the same physical resources, i.e. CPU and memory?
Looking at the GC ergonomics: which collector are you using? I would recommend the parallel throughput collector or the G1 collector with multiple collector threads. I would also recommend using a NUMA-aware configuration.
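On HotSpot 1.6 that could look like the following (the thread count is only an example to match your core count, the jar name is a placeholder, and -XX:+UseNUMA helps only on an actual NUMA machine):

    java -XX:+UseParallelGC -XX:+UseParallelOldGC \
        -XX:ParallelGCThreads=8 -XX:+UseNUMA -jar your-app.jar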
Some general rules:
The better the hardware and GC ergonomics, the faster individual garbage collections will complete.
The fewer and smaller the objects the application creates, the less often the garbage collector will run.
The fewer long-lived objects created, the less often the full garbage collector will run.
For more information about GC ergonomics:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html