Attached is the overview display of my JConsole.
As you can see, the Heap Memory Usage spikes up and the CPU Usage is very spiky as well. However, when the Heap Memory Usage dips (I guess a GC happens), the CPU briefly stops spiking.
What could be the possible cause of the Heap Memory increasing, which causes the CPU spikes and, in turn, the high CPU utilisation?
A standard busy while loop uses around 50% of the CPU, so it isn't wise to have many such loops running at once. What I designed to combat that is a waitBlock that waits a specific amount of time using an InputStream that blocks until input is available; if the waitBlock waits 1 millisecond per iteration, the loop will run at most 1000 times a second. The CPU usage will be basically 0%, but the loop has a speed limit; a sketch of the idea follows below. As for the memory usage: from what I understand, if you are using sockets etc. and you don't close them but keep recreating them, the memory usage just builds and builds. I might be able to help more if I know what you have done in your code.
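For illustration, here is a minimal sketch of that throttling idea (the names are made up, and Thread.sleep(1) stands in for the blocking InputStream read described above; any call that blocks for about a millisecond has the same effect):

    // Hypothetical sketch: a loop throttled to at most ~1000 iterations per second.
    // Thread.sleep(1) is the blocking wait here; the answer above uses a blocking
    // InputStream read instead, but the principle is the same.
    public class ThrottledLoop {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                doWork();           // whatever the loop actually has to do
                Thread.sleep(1);    // block ~1 ms: near-zero CPU usage, but capped speed
            }
        }

        private static void doWork() {
            // placeholder for the real work
        }
    }

For the socket point, try-with-resources (for example, try (Socket s = new Socket(host, port)) { ... }) makes sure every socket is closed instead of piling up.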
It seems when a Full GC is performed, the CPU utilisation is normalised.
What are the limitations on Java heap space? What are the advantages/disadvantages of not increasing the size to a large value?
This question stems from a program I am running that sometimes runs out of heap space. I am aware that we can set the heap to a larger size in order to reduce the chance of this. However, I am looking for the pros and cons of keeping the heap size small or large. Does decreasing it increase speed?
Increasing the heap size may, under some circumstances, delay the activities of the garbage collector. If you are lucky, it may be delayed until your program has finished its work and terminated.
Depending on the data structures that you are moving around in your program, a larger heap size may also improve performance because the GC can work more effectively. But tuning a program in that direction is tricky, at best.
Using in-memory caches in your program will definitely improve performance (if you have a decent cache-hit ratio), and of course, this will need heap space, perhaps a lot.
But if your program reports OutOfMemoryError: Java heap space because of the data it has to process, you do not have many alternatives other than increasing the heap space; performance is your least problem in this case. Or you change your algorithm so that it does not load all data into memory and instead processes it from disk, as sketched below. But then again, performance goes out of the window.
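As a rough illustration of that alternative (the file name and the processing method are hypothetical), the data can be streamed record by record instead of being loaded completely:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class StreamingExample {
        public static void main(String[] args) throws IOException {
            // Read and process one line at a time; only the current record is on the heap.
            try (BufferedReader in = Files.newBufferedReader(Paths.get("data.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            }
        }

        private static void process(String record) {
            // placeholder for the per-record work
        }
    }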
If you run a server of some kind, having about 80% to 85% heap utilisation is a good value, if you do not have heavy spikes. If you know for sure that incoming requests do not cause significant additional memory load, you may even go up to 95%. You want value for money, and you paid for the memory one way or the other – so you want to use it!
You can even set -Xms and -Xmx to different values; then the JVM can grow the heap when needed, and today it can even release that additional memory once it is no longer needed. But growing the heap costs performance – on the other hand, a slow system is still better than one that crashes.
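For example (the values and the application name are only illustrative):

    # start with a 256 MB heap and let the JVM grow it up to 4 GB when needed
    java -Xms256m -Xmx4g -jar MyApp.jar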
A too-small heap size may also hurt performance if your system does not have enough cores, because the garbage collector threads then compete with the business threads for the CPU. At some point, the CPU spends a significant amount of time on garbage collection. Again, if you run a server, this behaviour is nearly unavoidable, but for a tool-like program, increasing the heap can prevent it (because the program may finish before the GC needs to become active) – as already said in the beginning.
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, up to 4 GB depending on the data that is manipulated. At other times, very little memory is needed, around 300 MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected and even longer until the memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4 GB of RAM to create a 4 GB heap. Then, after processing has finished, the heap is only 30%-50% full, and it takes a long time for the memory consumption to change. When I trigger a GC (via JConsole), the used heap shrinks below 500 MB; after another triggered GC it shrinks to 200 MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot from VisualVM. The heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC-Pause-Times are not much of an issue for us. We would rather have more and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle HotSpot JVM 1.8.
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles more often. When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea with regard to your application's response time.
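A sketch of a flag combination you might try (the values and the application name are illustrative; -XX:InitiatingHeapOccupancyPercent applies to the G1 collector, which is possible since you are not limited to a particular GC):

    java -XX:+UseG1GC \
         -XX:InitiatingHeapOccupancyPercent=20 \
         -XX:MinHeapFreeRatio=10 \
         -XX:MaxHeapFreeRatio=20 \
         -Xms300m -Xmx4g \
         -jar YourApplication.jar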
I've got a memory-intensive Java 8 server application, which is also latency-sensitive. The application runs fine with a 2.5 GB heap, but there are spikes of garbage collector CPU usage once a day.
Now I wonder how to reduce the spikes. I probably can't reduce the memory usage or add more memory. I am OK with the average CPU usage of the GC; I would just like to distribute the CPU load evenly over time. Is that possible?
First of all you should make sure that it's the CPU utilization that introduces latency and not stop-the-world pauses (e.g. Full GC if you're using CMS).
If Full GCs are not the issue then you can inspect the effective VM flags of your application by starting it with all present flags (present flags may affect the defaults of others) and then appending -XX:+PrintFlagsFinal.
Look for ConcGCThreads and see if reducing that number has the desired effect. The concurrent cycles will then use fewer cores (but take more wall-clock time).
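For example (the application name and the grep filter are only illustrative):

    # show the effective value after all your other flags have been applied
    java <your current flags> -XX:+PrintFlagsFinal -version | grep ConcGCThreads

    # then experiment with a lower value, e.g.
    java <your current flags> -XX:ConcGCThreads=2 -jar YourServer.jar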
I was doing a quick experiment to see what my algorithm's memory performance looks like.
The input is about 2 MB and the algorithm takes about 1 second to process it.
I ran this in a loop 500 times to be able to look at the memory allocation.
This is how jConsole shows the memory usage:
As you can see, the heap memory usage increases (almost exponentially) with every couple of runs before a GC starts, even though the input is the same.
Does anybody know if this is expected and why it happens? Is it some optimization done by JVM?
Thanks!
Does anybody know if this is expected and why it happens? Is it some optimization done by JVM?
The JVM is trying to minimise the time spent GC-ing. If you use more memory, it doesn't have to GC as often.
a leak?
If you look at the memory usage after a GC, it is much the same, so clearly it doesn't have a memory leak. Or at least not a big one.
You have to look at the memory used after Full GCs to confirm a memory leak, and I assume these are minor collections.
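One way to check that outside JConsole (the PID is a placeholder) is with the standard JDK tools:

    # print GC utilisation every second; watch the O (old generation) column after Full GCs
    jstat -gcutil <pid> 1000

    # force a Full GC to compare before/after (same effect as JConsole's "Perform GC" button)
    jcmd <pid> GC.run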
I have a multi-threaded program that does heavy memory allocation. The performance is fine on a quad-core i7 CPU, where the speedup is around 3.9x. But when the program is executed on a 12-core Xeon CPU, the speedup does not go beyond 5.5x.
I should mention that the GC does not seem to be a problem, because VisualGC reports less than 1 second spent in GC after more than 100 seconds of execution. Most of the memory usage is in the Eden section of the heap; the other sections are hardly used. The code does massive int array allocations and performs some arithmetic operations on them. It is somewhat like state-space exploration, and the allocation of new instances cannot be avoided.
As you know, the standard memory allocators of both Windows and Linux show unsatisfactory performance for multi-threaded programs, and good alternatives like tcmalloc and Hoard are available for C/C++. Since the parallel section consists of fully independent tasks and the GC time is very low, I suspect the main reason is poor performance of the JVM's memory allocator when too many threads compete for allocation.
Does anybody have experience with the JVM's allocator in massively multi-threaded programs and can give advice on how I can overcome this problem?
P.S. I have tested the code using JVM 6, 7, and 8. The allocation rate is also very high (around 10 million allocations per second), but as I mentioned, the Eden section is heavily used and the working set is less than a gigabyte.
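For reference, here is a minimal, hypothetical sketch of the allocation pattern described in this question (many independent tasks, each allocating lots of short-lived int arrays), which can be run with different thread counts to measure the speedup:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class AllocationScalingSketch {
        // one unit of work: allocate many small, short-lived int arrays (they die in Eden)
        static int crunch() {
            int sum = 0;
            for (int i = 0; i < 100_000; i++) {
                int[] a = new int[32];
                a[0] = i;
                sum += a[0];
            }
            return sum;
        }

        public static void main(String[] args) throws Exception {
            int threads = args.length > 0 ? Integer.parseInt(args[0]) : 4;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.nanoTime();
            List<Future<?>> futures = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                futures.add(pool.submit(() -> {
                    for (int r = 0; r < 1_000; r++) {
                        crunch();
                    }
                }));
            }
            for (Future<?> f : futures) {
                f.get();
            }
            System.out.printf("threads=%d time=%.2fs%n", threads, (System.nanoTime() - start) / 1e9);
            pool.shutdown();
        }
    }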
Is it the case that the Eden space is smaller than it should be for your workload? If so, consider using
-XX:NewRatio=1
or another appropriate value. To ascertain that, use
-XX:+PrintTenuringDistribution
to see the tenuring distribution.
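For example (the application name is illustrative):

    # first inspect how objects age through the survivor spaces
    java -XX:+PrintTenuringDistribution -jar YourApp.jar

    # then, if Eden turns out to be too small, give the young generation half of the heap
    java -XX:NewRatio=1 -jar YourApp.jar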