How can I tell from JVM monitoring whether the heap memory of a Java application is set too high? Currently our Java application's heap is set to 16 GB, but from experience this value seems too high, and I want to convince my colleagues to lower it to save resources. However, I don't know how to show visually, through monitoring, that the memory setting is too large.
The difficult point, I think, is that memory is not like CPU, where there is an obvious contrast between what is actually used and what the Pod is given. With the heap set to 16 GB, the JVM occupies 16 GB right at startup, and whether the heap is set to 8 GB or 16 GB mostly just changes how often garbage collection is triggered. So I can't justify lowering the memory to 8 GB or 4 GB - I can't prove that with a smaller heap the program will still run smoothly and not trigger an OOM exception.
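One way to make this visible (a minimal sketch, not a definitive recipe - in practice you would export the same numbers to whatever monitoring stack you already use, such as Prometheus/Grafana, a JMX dashboard, or GC logs) is to chart heap used versus heap committed over time, and especially the used heap right after full collections (the "live set"). If the live set stays far below 16 GB for weeks, that is the visual argument for a smaller heap:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sketch: samples heap usage and GC activity once a minute using the
// standard java.lang.management API. In a real setup these values would
// be pushed to a metrics system instead of printed.
public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("  %s: collections=%d totalTime=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(60_000); // sample once a minute
        }
    }
}
```

The committed 16 GB line by itself proves nothing, as noted above; the numbers that carry the argument are the used heap immediately after major collections and the GC frequency and pause times, both before and after a trial run with a smaller -Xmx.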
Related
I have a Java microservice that I have just inherited and that is increasing in memory daily in production. It is currently using 80% of a 4GB heap and rising a few percent each day.
The -Xms and -Xmx values in the JVM options are both set to 4GB.
Does this mean the garbage collector won't activate until the JVM reaches its heap limit?
Does this mean the garbage collector won't activate until the JVM reaches its heap limit?
No, the garbage collector will activate its collection cycles whenever the GC algorithm tells it to - which will usually be frequent incremental collections, and occasional full collections. Running out of heap can force it to perform a full collection more often and be more aggressive, but it's certainly not true (for all GC implementations I'm aware of anyway) to say it won't do anything until it's out of, or nearly out of heap space.
What could be happening (depending on the GC algorithm) is that the incremental collections are running regularly and as intended, but are unable to clear everything properly - and the slower full collection won't be performed until you're lower on space. That's perfectly valid behaviour - you've given the VM a 4GB heap, and it'll use it in the way it sees fit. If that's the case, you'd see a sudden drop when it gets closer to the 4GB limit and performs a full collection.
Alternatively, it's possible you have a memory leak in the application somewhere and it's going to run out of memory eventually, be unable to free any space, and then throw an OutOfMemoryError.
If it were me I'd profile & stress test the application under a testing environment with the same heap & GC options set to check the behaviour - that's really the only way to find out for sure.
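For that kind of test, GC logging gives you the raw data to compare behaviour across heap sizes (a sketch, assuming a HotSpot JVM; the first form uses the Java 8 flags, the second the Java 9+ unified logging syntax, and the jar name and log path are just placeholders):

```
# Java 8
java -Xms4g -Xmx4g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -Xloggc:/tmp/gc.log -jar app.jar

# Java 9 and later (unified logging)
java -Xms4g -Xmx4g -Xlog:gc*:file=/tmp/gc.log:time -jar app.jar
```

Plotting the post-GC heap occupancy from those logs (tools such as GCViewer can do this) shows whether the sawtooth keeps returning to roughly the same baseline (normal behaviour) or whether the baseline creeps upward (likely a leak).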
This means the JVM will always allocate 4GB of heap. It may or may not run the GC before heap is full. AFAIK this is up to the GC implementation. My experience is that it won't garbage collect before the heap is (almost) full, but that may not be true for all implementations. If you suspect a memory leak I'd suggest reducing -Xms, then monitor if heap grows beyond -Xms value.
BTW: Is there a reason for this high -Xms value? In all the cases I have seen where -Xms was specified, it was by mistake or lack of knowledge.
My Java service is running on a 16 GB RAM host with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service is consuming more memory over time.
I ran this command on the host and recorded the memory usage of my Java service: ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n (column 6 is the resident set size in KB, column 11 is the command).
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using about 9GB+. It consumed about 100MB more memory per day.
I took a heap dump, then restarted my service and took another heap dump. I compared the two dumps, but there was not much difference in heap usage: the dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB. The host has 16GB of RAM. Do I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and the heap dumps show heap usage in the region of 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much heap as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold class metadata and bytecodes (in "metaspace"), and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
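Native Memory Tracking is one such tool on HotSpot JVMs (JDK 7u40/8 and later); it breaks the JVM's own footprint down into most of the categories above. A sketch (the PID and jar name are placeholders, and note that NMT does not cover memory allocated directly by third-party native libraries or memory-mapped files):

```
# Start the JVM with tracking enabled (small runtime overhead)
java -XX:NativeMemoryTracking=summary -Xms8g -Xmx8g -jar service.jar

# Ask the running JVM for a breakdown
jcmd <pid> VM.native_memory summary

# Or record a baseline and diff against it later
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```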
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Once a collection has finished, the GC looks at the ratio of free space to used space; if that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
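For the comparison, something along these lines is enough (the pid and file names are placeholders; jmap ships with the JDK, and the live option forces a full GC first so the dump contains only reachable objects):

```
# Roughly 2 hours after startup
jmap -dump:live,format=b,file=/tmp/heap-1.hprof <pid>

# 2 or more hours later
jmap -dump:live,format=b,file=/tmp/heap-2.hprof <pid>
```

Loading both files into a heap analyzer such as Eclipse MAT and comparing the histograms or dominator trees will show which object types account for the growth, if any.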
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the difference between 1.1GB and 1.3GB in the heap dumps is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Do I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] - Or if you use SoftReferences, the GC will break them for you.
This question is regarding garbage collection behavior when a request needs more memory than is allocated to the Pod. If the GC is not able to free memory, will it keep running continuously, or will it throw an OutOfMemoryError?
One Pod contains a Java-based app and another contains a PHP-based app. In the Java case, the -Xmx value is the same as the Pod limit.
I can only talk about Java GC. (PHP's GC behavior will be different.)
If the GC is not able to free memory, will it continue to run continuously at regular intervals, or throw an OutOfMemoryError?
It depends on the JVM options.
A JVM starts with an initial size for the heap and will expand it as required. However, it will only expand the heap up to a fixed maximum size. That maximum size is determined when the JVM starts from either an option (-Xmx) or default heap size rules. It can't be changed after startup.
As the heap space used gets close to the limit, the GC is likely to run more and more frequently. The default behavior on a modern JVM is to monitor the percentage of time spent doing garbage collection. If it exceeds a (configurable) threshold, you will get an OOME with a message saying the GC overhead limit has been exceeded. This can happen even if there is enough space to "limp along" for a bit longer.
You can turn off the GC Overhead Limit stuff, but it is inadvisable.
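For reference, the switches involved look roughly like this (a sketch; 98% and 2% are my understanding of the HotSpot defaults, and as said above, disabling the check is rarely a good idea):

```
# Default behaviour: throw OOME ("GC overhead limit exceeded") when more than
# GCTimeLimit percent of time is spent in GC while less than GCHeapFreeLimit
# percent of the heap is being recovered
java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar app.jar

# Turning the check off: the JVM will grind in near-constant GC instead of failing fast
java -XX:-UseGCOverheadLimit -jar app.jar
```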
The JVM will also throw an OOME if it simply doesn't have enough heap space after doing a full garbage collection.
Finally, a JVM will throw an OOME if it tries to grow the heap and the OS refuses to give it the memory it requested. This could happen because:
the OS has run out of RAM
the OS has run out of swap space
the process has exceeded a ulimit, or
the process group (container) has exceeded a container limit.
The JVM is only marginally aware of the memory available in its environment. On a bare metal OS, or on a VM under a hypervisor, the default heap size depends on the amount of RAM. On a bare metal OS, that is physical RAM. On a VM, it will be ... whatever the guest OS sees as its physical memory.
With Kubernetes, the memory available to an application is likely to be further limited by cgroups or similar. I understand that recent Java releases have tweaks that make them more suitable for running in containers. I think this means that they can use the cgroup memory limits rather than the physical memory size when calculating a default heap size.
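Concretely, on JDK 10+ (or 8u191+, where this support was backported) the heap can be sized relative to the container's memory limit rather than the host's RAM - a sketch, with the percentages as examples only:

```
# UseContainerSupport is on by default in these versions; shown here for clarity
java -XX:+UseContainerSupport \
     -XX:InitialRAMPercentage=40.0 \
     -XX:MaxRAMPercentage=70.0 \
     -jar app.jar
```

Leaving part of the container limit unclaimed matters because, as described above, the JVM's total footprint is the heap plus metaspace, thread stacks, code cache and other native allocations.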
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data that is manipulated - up to 4GB. At other times very little memory is needed, around 300MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected, and even longer until memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4GB of RAM to create a 4GB heap. Then, after processing has finished, the heap is only 30%-50% full, and it takes a long time for memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks below 500MB. After another triggered GC the heap shrinks to 200MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot from VisualVM: the heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC-Pause-Times are not much of an issue for us. We would rather have more and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which helps a bit, but watching the heap with VisualVM shows us, that it's not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM, that the heap is only filled to about 10%.
We can't reduce the maximum memory (-Xmx), since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle HotSpot JVM 1.8.
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles earlier, and therefore more often. When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea with regard to your application's response time.
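Combined with the free-ratio flags mentioned in the question, that might look like the sketch below. A caveat: this assumes G1 (InitiatingHeapOccupancyPercent is primarily a G1 setting), and on HotSpot 1.8 G1 generally only uncommits heap back to the OS after a full GC, so how much memory is actually returned will vary. The numbers are examples only:

```
java -XX:+UseG1GC \
     -XX:InitiatingHeapOccupancyPercent=35 \
     -XX:MinHeapFreeRatio=10 \
     -XX:MaxHeapFreeRatio=30 \
     -Xmx4g -jar app.jar
```

Later JDKs (12+) add periodic G1 collections (-XX:G1PeriodicGCInterval) and collectors such as Shenandoah and ZGC that return idle memory more eagerly, but those are not available on HotSpot 1.8.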
I am running a Java application on a Linux cluster with SLURM as the resource manager. To run my application I have to tell SLURM how much memory I will need; SLURM then runs my application in a kind of VM with the specified amount of memory. To tell my Java application how much memory it can use, I use the "-Xmx##g" parameter, which I choose to be 1GB less than what I have requested from SLURM.
My problem is that I am exceeding the amount of memory I have requested from SLURM, and it terminates my application. It seems that the JVM uses about 1GB of memory on top of the heap, probably for things like GC and so on.
Is there a possibility to restrict the total size of the JVM process, or at least to tame it?
Cheers,
Markus
The maximum heap setting only limits the maximum heap. There are other memory regions which you have not limited, such as the following (the flags that cap them are sketched after the list):
thread stacks
perm gen
shared libraries
native memory used by libraries
direct memory
memory mapped files.
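A sketch of how most of those regions can be capped individually (the values are examples only; the right numbers come from measuring your own application, and Java 7 uses -XX:MaxPermSize instead of MaxMetaspaceSize):

```
java \
  -Xmx512m \
  -Xss512k \
  -XX:MaxMetaspaceSize=128m \
  -XX:ReservedCodeCacheSize=128m \
  -XX:MaxDirectMemorySize=128m \
  -jar app.jar
```

Memory-mapped files and native allocations made by third-party libraries are not covered by any of these flags, which is why measuring the resident footprint, as described below, still matters.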
If you want to limit the overall memory usage, you need to be clear about whether you are limiting virtual memory or resident memory. Monitoring tools often make the mistake of monitoring virtual memory, which shows a surprising lack of understanding of how applications work, or even of why you monitor an application in the first place.
You want to monitor resident memory usage, which means you need to know how much memory your application uses over time apart from the heap, and then work out how much heap you can have, plus some margin for error.
To tell my Java application how much memory it can use, I use the "-Xmx##g" parameter, which I choose to be 1GB less than what I have requested from SLURM.
At a guess, I would start with 1/2 GB (-Xmx512m), see what the peak resident memory is, and increase it if you find there is always a few hundred MB of headroom.
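To get that peak resident figure, something as simple as this, recorded over a representative run, is usually enough (a sketch; <pid> is a placeholder for the JVM's process id):

```
# Print the resident set size (in KB) of the JVM process once a minute
while true; do
  echo "$(date '+%F %T')  $(ps -o rss= -p <pid>) KB"
  sleep 60
done
```

The peak of that series minus the heap you configured gives the non-heap overhead you need to leave as headroom inside the SLURM allocation.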
BTW 1 GB of memory doesn't cost that much these days (as little as $5). Your time could be worth much more than the resources you are trying to save.