This question is regarding garbage collection behavior when a request needs more memory than is allocated to the Pod. If the GC is not able to free memory, will it keep running continuously, or throw an out-of-memory error?
One pod contains a Java-based app and another contains a PHP-based app. In the Java case, the -Xmx value is the same as the pod limit.
I can only talk about Java GC. (PHP's GC behavior will be different.)
If the GC is not able to free memory, will it continue to run at regular intervals, or throw an out-of-memory error?
It depends on the JVM options.
A JVM starts with an initial size for the heap and will expand it as required. However, it will only expand the heap up to a fixed maximum size. That maximum size is determined when the JVM starts, from either an option (-Xmx) or the default heap size rules. It can't be changed after startup.
As the heap space used gets close to the limit, GC is likely to occur more and more frequently. The default behavior on a modern JVM is to monitor the percentage of time spent doing garbage collection. If it exceeds a (configurable) threshold, you will get an OOME with a message saying that the GC overhead limit has been exceeded. This can happen even if there is enough space to "limp along" for a bit longer.
You can turn off the GC Overhead Limit stuff, but it is inadvisable.
The JVM will also throw an OOME if it simply doesn't have enough heap space after doing a full garbage collection.
Finally, a JVM will throw an OOME if it tries to grow the heap and the OS refuses to give it the memory it requested. This could happen because:
the OS has run out of RAM
the OS has run out of swap space
the process has exceeded a ulimit, or
the process group (container) has exceeded a container limit.
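As a hedged illustration of the plain heap-space case above, the following toy program (my own sketch, not from the original question) keeps every allocation reachable, so no amount of GC can help; run with a small limit such as -Xmx64m it ends with java.lang.OutOfMemoryError: Java heap space.

    import java.util.ArrayList;
    import java.util.List;

    public class HeapFiller {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<>();
            try {
                while (true) {
                    retained.add(new byte[1024 * 1024]); // 1 MB blocks, all kept reachable
                }
            } catch (OutOfMemoryError e) {
                int mb = retained.size();
                retained.clear(); // free the blocks so the log line below can allocate safely
                System.err.println("Heap exhausted after retaining about " + mb + " MB");
            }
        }
    }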
The JVM is only marginally aware of the memory available in its environment. On a bare metal OS or a VM under a hypervisor, the default heap size depends on the amount of RAM. On a bare metal OS, that is physical RAM. On a VM, it will be whatever the guest OS sees as its physical memory.
With Kubernetes, the memory available to an application is likely to be further limited by cgroups or similar. I understand that recent Java releases have tweaks that make them more suitable for running in containers. I think this means that they can use the cgroup memory limits rather than the physical memory size when calculating a default heap size.
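To see what limit the JVM actually derived inside a pod, a quick check like this (my own sketch) can be run in the container; with a container-aware JDK the reported ceiling should track the cgroup limit, optionally scaled by flags such as -XX:MaxRAMPercentage.

    public class EffectiveHeapLimit {
        public static void main(String[] args) {
            // Reports the heap ceiling the JVM derived at startup (from -Xmx,
            // the cgroup limit, or the default heap sizing rules).
            long maxHeapMb = Runtime.getRuntime().maxMemory() >> 20;
            System.out.println("Effective max heap: " + maxHeapMb + " MB");
        }
    }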
Related
I have a Java microservice, which I have just inherited, that is increasing in memory daily in production. It is currently using 80% of a 4GB heap and rising a few percent each day.
The -Xms and -Xmx values in the JVM options are both set to 4GB.
Does this mean the garbage collector won't activate until the JVM reaches its heap limit?
Does this mean the garbage collector won't activate until the JVM reaches its heap limit?
No, the garbage collector will activate its collection cycles whenever the GC algorithm tells it to - usually frequent incremental collections and occasional full collections. Running out of heap can force it to perform a full collection more often and be more aggressive, but it's certainly not true (for any GC implementation I'm aware of, anyway) to say it won't do anything until it's out of, or nearly out of, heap space.
What could be happening (depending on the GC algorithm) is that the incremental collections are running regularly and as intended, but are unable to clear everything properly - and the slower full collection won't be performed until you're lower on space. That's perfectly valid behaviour - you've given the VM a 4GB heap, and it'll use it in the way it sees fit. If that's the case, you'd see a sudden drop when it gets closer to the 4GB limit and performs a full collection.
Alternatively, it's possible you have a memory leak in the application somewhere and it's going to run out of memory eventually, be unable to free any space, and then throw an OutOfMemoryError.
If it were me I'd profile and stress test the application in a testing environment with the same heap and GC options set to check the behaviour - that's really the only way to find out for sure.
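If you want cheap ongoing monitoring alongside the profiling, a small sketch along these lines (class name and interval are my own choices) logs heap used/committed/max via the standard MemoryMXBean, which makes the incremental-vs-full collection pattern visible over time.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(60_000); // one sample per minute
            }
        }
    }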
This means the JVM will always allocate 4GB of heap. It may or may not run the GC before the heap is full; AFAIK this is up to the GC implementation. My experience is that it won't garbage collect before the heap is (almost) full, but that may not be true for all implementations. If you suspect a memory leak I'd suggest reducing -Xms and then monitoring whether the heap grows beyond the -Xms value.
BTW: Is there a reason for this high -Xms value? In all cases I have seen where -Xms was specified, it was by mistake / lack of knowledge.
My Java service is running on a 16GB RAM host with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service is consuming more memory over time.
I ran the command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage of my Java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it used about 9GB+. It consumed about 100MB more memory per day.
I took a heap dump, restarted my service and took another heap dump. I compared those two dumps but there was not much difference in heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing a total memory usage increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold class metadata and bytecodes (in "metaspace") and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
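For example, on a HotSpot JVM you can get a breakdown of most of these non-heap categories with Native Memory Tracking; the flag and the jcmd subcommand below are the standard ones, while service.jar and the pid are placeholders.

    # start the JVM with NMT enabled (adds a little overhead)
    java -XX:NativeMemoryTracking=summary -Xmx8g -jar service.jar

    # then ask the running JVM for a breakdown of heap, metaspace, code cache,
    # thread stacks, GC structures, etc.
    jcmd <pid> VM.native_memory summary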
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at the ratio of space that is (now) free to space used. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
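For the dumps themselves, the usual tool is jmap (shipped with the JDK); the file names and pid below are placeholders, and -dump:live forces a full GC first so the dump only contains reachable objects.

    # roughly 2 hours after startup
    jmap -dump:live,format=b,file=heap-early.hprof <pid>

    # 2 or more hours later; compare the two in Eclipse MAT or a similar tool
    jmap -dump:live,format=b,file=heap-later.hprof <pid>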
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links1 and evicting cached data. Hence, not a memory leak ... hypothetically.
1 - Or if you use SoftReferences, the GC will break them for you.
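For what a "properly implemented" memory-sensitive cache can look like, here is a minimal sketch (my own, generic example) where each value is held through a SoftReference, so the GC is allowed to clear entries when the heap gets tight instead of the application throwing an OOME.

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Entries can be cleared by the GC under memory pressure, so a miss can
    // mean either "never cached" or "already collected"; the caller just recomputes.
    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

        public void put(K key, V value) {
            map.put(key, new SoftReference<>(value));
        }

        public V get(K key) {
            SoftReference<V> ref = map.get(key);
            return (ref == null) ? null : ref.get();
        }
    }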
I know that the -Xms flag tells the JVM to allocate a specific amount of memory for its heap when the process starts. And with regard to the performance of a Java application, it is often recommended to set the same value for both -Xms and -Xmx when starting the application, like -Xms2048M -Xmx2048M.
I'm curious whether the -Xms and -Xmx flags mean that the JVM process reserves that specific amount of memory, preventing other processes on the same machine from using it.
Is this right?
Xmx merely reserves virtual address space.
Xms actually allocates (commits) it but does not necessarily prefault it.
How operating systems respond to allocations varies.
Windows does allow you to reserve very large chunks of address space (Xmx) but will not allow overcommit (Xms). The limit is defined by swap + physical RAM. The exception is large pages (which need to be enabled with a group policy setting), which are limited by physical RAM.
Linux behavior is more complicated: it depends on vm.overcommit_memory and related sysctls and on various flags passed to the mmap syscall, which to some extent can be controlled by JVM configuration flags. The behavior can range from a) Xms can exceed total RAM + swap, to b) Xmx is capped by available physical RAM.
Short answer: Depends on the OS, though it's definitely a NO in all popular operating systems.
I'll take the example of Linux's memory allocation terminology here.
-Xms and -Xmx specify the minimum and maximum size of the JVM heap. These sizes are VIRTUAL MEMORY allocations, parts of which can be mapped to physical pages in RAM; the portion mapped at any time is the RESIDENT SET SIZE of the process.
When the JVM starts, it allocates the -Xms amount of virtual memory. This gets mapped to resident (physical) memory as you dynamically create more objects on the heap. That mapping does not require the JVM to request any new allocation from the OS, but it will increase your RAM utilization, because those virtual pages now have corresponding physical memory allocated too. However, once your process tries to create more objects on the heap after consuming all of its Xms allocation, it has to request more virtual memory from the OS, which may or may not be mapped to physical memory later, depending on when you need it. The limit for this is your -Xmx allocation.
Note that this is all possible because memory in Linux is shared. So, even if a process allocates memory beforehand, what it gets is virtual memory, which is just an addressable, contiguous, fictional allocation that may or may not be mapped to real physical pages depending on demand. Read this answer for a short description of how memory management works in popular operating systems. Here is much more detailed (slightly outdated but very useful) information on how Linux's memory management works.
Also note that these flags only affect heap sizes. The resident memory size that you see will be larger than the current JVM heap size. More specifically, the memory consumed by a JVM is equal to its HEAP SIZE plus its DIRECT (non-heap) MEMORY, which covers things like method stacks, native buffer allocations, etc.
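To make the virtual-vs-resident distinction concrete, here is a quick Linux-only sketch (the class name is mine, and it reads /proc, so it won't work elsewhere) that prints the committed and maximum heap alongside the VmRSS line the kernel reports for the process:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class RssVsHeap {
        public static void main(String[] args) throws IOException {
            Runtime rt = Runtime.getRuntime();
            System.out.println("heap committed (totalMemory): " + (rt.totalMemory() >> 20) + " MB");
            System.out.println("heap limit     (maxMemory)  : " + (rt.maxMemory() >> 20) + " MB");
            // Resident set size as the kernel sees it (Linux only).
            try (Stream<String> lines = Files.lines(Paths.get("/proc/self/status"))) {
                lines.filter(l -> l.startsWith("VmRSS")).forEach(System.out::println);
            }
        }
    }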
Does the JVM process make a reservation for the specific amount of memory?
Yes, the JVM reserves the memory specified by Xms at the start and might reserve up to Xmx, but the reservation need not be in physical memory; it can also be in swap. The JVM's pages will be swapped in and out of memory as needed.
Why is it recommended to have same value for Xms and Xmx?
Note: Setting Xms equal to Xmx is generally recommended for production systems where the machine is dedicated to a single application (or there aren't many applications competing for system resources). It does not generalize to being good everywhere.
Avoids Heap Resizing:
The JVM starts with the heap size specified by the Xms value. When that heap is exhausted due to allocation of objects by the application, the JVM starts increasing the heap. Each time the JVM increases the heap size it must ask the operating system for additional memory. This is a time-consuming operation and results in increased GC pause times and, in turn, increased response times for requests.
Application Behaviour in the Long Run:
Even though I cannot generalize, many applications in the long run eventually grow to the maximum heap value. This is another reason to start off with the maximum memory instead of growing the heap over time and creating the unnecessary overhead of heap resizing. It is like asking the application to take up at the start the memory it will eventually take anyway.
Number of GCs:
Starting off with small heap sizes results in garbage collection happening more often. Bigger heap sizes reduce the number of GCs because more memory is available for object allocation. However, it must be noted that increased heap sizes increase GC pause times. This is an advantage only if your garbage collection has been tuned properly and the pause times don't increase significantly with the increase in heap size.
One more reason for doing this is that servers generally come with large amounts of memory, so why not use the resources available?
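Putting that together, a typical launch line following this advice looks something like the following (app.jar and the 2GB figure are placeholders; -Xlog:gc is the JDK 9+ unified-logging flag if you also want to watch the collections):

    java -Xms2048m -Xmx2048m -Xlog:gc -jar app.jar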
We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data that is manipulated - up to 4GB. At other times, very little memory is needed, around 300MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected and even longer until memory is returned back to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4GB of RAM to create a 4GB heap. Then, after processing has finished, the heap is only 30%-50% full. It takes a long time for the memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks below 500MB. After another triggered GC, the heap shrinks to 200MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot from VisualVM: the heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC pause times are not much of an issue for us. We would rather have more and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% full.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle Hotspot JVM 1.8
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles more often. When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea for your application's response times.
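A hedged example of what that might look like on the command line, combined with the free-ratio flags mentioned in the question (app.jar and the numbers are purely illustrative, and how strictly they are honoured depends on the collector and JDK version):

    java -XX:+UseG1GC \
         -XX:InitiatingHeapOccupancyPercent=35 \
         -XX:MinHeapFreeRatio=20 \
         -XX:MaxHeapFreeRatio=40 \
         -Xmx4g -jar app.jar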
Suppose the maximum size of a JVM heap is 2GB (-Xmx2048m -Xms100m), and we find that the peak used heap is 1GB and the peak committed heap is 1.2GB by the time it finishes. My question is whether the free space (2GB - 1.2GB) can be consumed by other applications while the JVM is running.
I think the free space cannot be used by others, but I'm not sure: the operating system reserves 2GB of free space before the JVM runs, and that reserved space may not be consumed by other applications even though the JVM never uses it all.
The JVM checks whether the OS has enough address space for -Xmx, but the OS won't actually allocate the memory until the JVM requests it. The JVM will only reserve -Xms memory initially but can extend up to -Xmx, provided that much memory is available.
Runtime.getRuntime().totalMemory() will return the current size of the heap, which will not exceed the maximum size specified on the command line.
That is approximately the amount of memory assigned to the JVM (not including non-heap JVM memory) by the operating system. Other memory is free for use by other applications.
Of course that is grossly oversimplified -- total system memory is total physical memory + total available swap, with other complications (e.g. Linux makes promises of memory to processes but doesn't actually commit it to that process unless it is touched, also simplified). In any case though, the short answer is: Yes, you specify a maximum size on the command line, but the current size is what is allocated to the JVM; the rest is available for other applications.
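As a small illustration of that distinction (my own sketch), the Runtime numbers show the currently committed size versus the -Xmx ceiling; only roughly the committed amount is actually held by the JVM:

    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max   (-Xmx ceiling)        : " + (rt.maxMemory() >> 20) + " MB");
            System.out.println("total (currently committed) : " + (rt.totalMemory() >> 20) + " MB");
            System.out.println("free  (within total)        : " + (rt.freeMemory() >> 20) + " MB");
        }
    }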
The memory seen by the Java process is virtual memory. The operating system doesn't really need to reserve 2GB of free physical memory for the Java process.