Spring Boot App memory consumption - java

1) Ours is a Spring Boot/Java 8 application that we run with -Xms256m (256 MB) and -Xmx2g (2 GB).
2) Our release engineers are running the top command on the Unix server where the application is running, and they report that the application is using 3.5 GB.
3) When I profile our application's production JVM instance using VisualVM, I see that the used heap size is only about 1.4 GB.
Any thoughts on why the memory consumption numbers are so different between #2 and #3, above?
Thanks for your feedback.

The -Xmx parameter only sets a maximum size for the Java heap. Any memory outside the Java heap is not limited/controlled by -Xmx.
Examples of non-heap memory usage include thread stacks, direct (off-heap) memory, and metaspace (the Java 8 replacement for perm gen).
The total virtual memory used (as reported by top) is the sum of heap usage (which you have capped by using -Xmx) and non heap usage (which cannot be capped by -Xmx).
So, the numbers in #2 and #3 are not comparable because they are not measurements of the same thing.
They will never be the same but if you want to bring them closer to each other (or at least have more control over the amount of virtual memory used) then you might consider using ...
-XX:MaxMetaspaceSize to limit metaspace size (on Java 8, -XX:MaxPermSize no longer applies)
-XX:MaxHeapFreeRatio to facilitate more aggressive heap shrinkage
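For example, a launch command along these lines (just a sketch; the flag values and app.jar are placeholders, not from the original question) caps both the heap and the metaspace and encourages the JVM to return unused heap to the OS:
java -Xms256m -Xmx2g \
  -XX:MaxMetaspaceSize=256m \
  -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
  -jar app.jar
Thread stacks can additionally be bounded per thread with -Xss, but none of this will make the top number equal the VisualVM heap number; it only narrows the gap.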

java process consumes more memory over time but no memory leak

My Java service is running on a 16 GB RAM host with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service consumes more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it used about 9GB+ of memory. It consumed about 100MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared those two dumps but there was not much difference in the heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and your heap dumps show usage in the 1.1GB to 1.3GB range. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much heap as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes (in "metaspace") and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
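Native Memory Tracking is one such tool. As a sketch (my-service.jar and <pid> are placeholders), start the JVM with tracking enabled, take a baseline, and compare again days later to see which non-heap category is growing:
java -XX:NativeMemoryTracking=summary -Xms8g -Xmx8g -jar my-service.jar
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff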
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a collection, the GC looks at the ratio of the space that is (now) free to the space used. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
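For example (a sketch; the file name and <pid> are placeholders), you could capture each dump with jmap and then compare them in a tool such as Eclipse MAT or VisualVM:
jmap -dump:live,format=b,file=dump-$(date +%s).hprof <pid>
The live option forces a full GC first, so the dump only contains reachable objects, which is what you want for leak analysis.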
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in heap usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Do I set the min/max heap too high (50% of the total memory on the host)? would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] - Or if you use SoftReferences, the GC will break them for you.
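As a minimal sketch of that idea (the SoftCache class is hypothetical, not from the original post), a cache can hold its values through SoftReference so that the GC is free to reclaim them under memory pressure:
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache; the GC may clear the soft-referenced values when memory runs low.
class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            map.remove(key); // the value was collected (or never cached); drop the stale entry
        }
        return value;
    }
}
The map entries themselves still need occasional cleanup (for example via a ReferenceQueue), but the large cached values no longer pin heap memory.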

The Java ZGC garbage collector USES a lot of memory

I built a simple application using Spring Boot. The ZGC garbage collector I use when deploying to a Linux server uses a lot of memory. I tried to limit the maximum heap memory to 500MB with -Xmx500m, but the Java program still used more than 1GB. When I used the G1 collector, it only used 350MB. I don't know why. Is this a bug in JDK 11, or is there a problem with my startup parameters?
Runtime environment
operating system: CentOS Linux release 7.8.2003
JDK version: jdk11
springboot version: v2.3.0.RELEASE
Here is my Java startup command
java -Xms128m -Xmx500m \
-XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
-jar app.jar
Here is a screenshot of the memory usage at run time
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201259.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20201357.png?raw=true
Here's what happens when you use the default garbage collector
Java startup command
java -Xms128m -Xmx500m \
-jar app.jar
Heap memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202442.png?raw=true
System memory usage
https://github.com/JoyfulAndSpeedyMan/assets/blob/master/2020-07-13%20202421.png?raw=true
By default, JDK 11 uses the G1 garbage collector. Theoretically, shouldn't G1 be more memory-intensive than ZGC? Why am I seeing the opposite? Did I misunderstand something? Since I'm a beginner with the JVM, I don't understand why.
ZGC employs a technique known as colored pointers. The idea is to use some free bits in 64-bit pointers into the heap for embedded metadata. However, when dereferencing such pointers, these bits need to be masked, which implies some extra work for the JVM.
To avoid the overhead of masking pointers, ZGC uses a multi-mapping technique. Multi-mapping is when multiple ranges of virtual memory are mapped to the same range of physical memory.
ZGC uses 3 views of Java heap ("marked0", "marked1", "remapped"), i.e. 3 different "colors" of heap pointers and 3 virtual memory mappings for the same heap.
As a consequence, the operating system may report 3x larger memory usage. For example, for a 512 MB heap, the reported committed memory may be as large as 1.5 GB, not counting memory besides the heap. Note: multi-mapping affects the reported memory usage, but physically the heap still uses 512 MB of RAM. This sometimes leads to the funny effect that the RSS of the process looks larger than the amount of physical RAM.
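A sketch of how to observe this (<pid> is a placeholder): pmap counts RSS per mapping, so the three heap views are each counted, whereas the VisualVM heap chart shows the single real heap:
pmap -x <pid> | tail -n 1
The total on the last line can include the heap roughly three times over, even though the heap pages are only resident once in physical RAM.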
See also:
ZGC: A Scalable Low-Latency Garbage Collector by Per Lidén
Understanding Low Latency JVM GCs by Jean Philippe Bempel
JVM uses much more than just the heap memory - read this excellent answer to understand JVM memory consumption better: Java using much more memory than heap size (or size correctly Docker memory limit)
You'll need to go beyond the heap inspection and use things like Native Memory Tracking to get a clearer picture.
I don't know what the particular issue with your application is, but ZGC is often mentioned as being aimed at large heaps.
It's also a brand-new collector that has received many changes recently - I'd upgrade to JDK 14 if you want to use it (see the "Change Log" here: https://wiki.openjdk.java.net/display/zgc/Main).
This is a result of the throughput-latency-footprint tradeoff. When choosing between these 3 things, you can only pick 2.
ZGC is a concurrent GC with low pause times. Since you don't want to give up throughput either, you pay for the low latency with a larger footprint. So there is nothing surprising in such high memory consumption.
G1 is not a low-pause collector, so you shift that tradeoff towards footprint and get bigger pause times but win some memory.
The amount of OS memory the JVM uses (i.e., the "committed heap") depends on how often the GC runs (and also on whether it uncommits unneeded memory if the app starts to use less), which is tunable. Unfortunately, ZGC isn't (currently) as aggressive about this by default as G1, but both have some tuning options that you can try.
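For example (a sketch; exact flag availability depends on the JDK version, and ZGC only gained heap uncommit in JDK 13), these options influence how eagerly each collector gives memory back to the OS:
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx500m -XX:+ZUncommit -XX:ZUncommitDelay=60 -jar app.jar
java -XX:+UseG1GC -Xmx500m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 -jar app.jar
On JDK 13+ ZUncommit is enabled by default for ZGC; lowering ZUncommitDelay just makes the uncommit happen sooner.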
P.S. As others have noted, the RES htop column is misleading, but the VisualVM chart shows the real picture.

Why default java max heap is 1/4th of Physical memory?

I have read a couple of articles on Java heap space and found out that the default max heap for the JVM is 1/4th of the actual physical memory, but none of the articles gave a reason for this.
What's the reason for having it as 1/4th of the actual memory?
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
This dates back to JDK 5, which introduced JVM ergonomics. Prior to this, the JVM would set very small defaults for the heap space. JDK 1.1 had a default of 16 MB for both Xms and Xmx; JDK 1.2 changed this to an Xms of 1 MB and an Xmx of 64 MB by default. In JDK 1.3, the Xms default increased to 2 MB.
Since Java was proving more popular on servers and memory capacities were increasing significantly, Sun introduced the concept of a server-class machine in JDK 5. This is one that has 2 or more physical processors and 2 or more Gb of memory (if I remember rightly, in JDK 5, the machine also had to not be running Windows to count as a server).
On server-class machines by default, the following parameters were set
Throughput garbage collector (i.e. the parallel collector)
initial heap size of 1/64 of physical memory up to 1Gbyte
maximum heap size of 1/4 of physical memory up to 1Gbyte
Server runtime compiler
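As a quick sketch of how to check what ergonomics actually picked on a given machine (the grep filter is only illustrative), print the final flag values:
java -XX:+PrintFlagsFinal -version | grep -i heapsize
This lists InitialHeapSize and MaxHeapSize as chosen for the current hardware.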
Ergonomics provided two command-line flags that allowed a user to set a performance goal for the JVM; the idea being that the JVM would then figure out internally how to achieve this goal by modifying its parameters. The ultimate goal was to eliminate a lot of the -XX flags that were being used to tune JVM performance manually.
The parameters are:
-XX:MaxGCPauseMillis=nnn which sets the maximum pause time you want for GC in milliseconds.
-XX:GCTimeRatio=nnn which sets the ratio of garbage collection time to application time to 1 / (1 + nnn). This was referred to as the throughput goal.
You can specify either of these goals or both. If the JVM manages to achieve both of these goals it then attempts to reduce the memory being used (the footprint goal).
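For instance (a sketch; the values and app.jar are placeholders), asking for GC pauses of at most 200 ms while spending no more than 5% of total time in GC would look like this:
java -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 -jar app.jar
With GCTimeRatio=19, the throughput goal is 1 / (1 + 19) = 5% of time spent in garbage collection.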
There's more detail here:
https://www.oracle.com/technetwork/java/ergo5-140223.html

What's the Metaspace size in jvm8?

Based on the description of metaspace, it only uses the native memory (no paging).
Since the class metadata is allocated out of native memory, the max available space is the total available system memory.
I found the above two explanations on the internet.
I have one question.
Is the so-called native memory located in the JVM process? Is the native memory size = Java process memory size - heap size? If so, why do they say the max available space is the total available system memory, given that the maximum size of a 32-bit Java process is limited to only about 2 GB?
it only uses the native memory (no paging).
This memory can be swapped, as required.
Is the so-called native memory located in the JVM process?
Native memory is in the JVM process.
Is the native memory size = Java process memory size - heap size?
Native memory is all the memory the native code can see. You might want to exclude the heap.
If so, why do they say the max available space is the total available system memory
This is true provided you don't have OS or architectural limitations, such as
the maximum size of a 32-bit Java process being limited to about 2 GB
The maximum is 4 GB, but on different OSes portions of the virtual memory are used by the OS. On Windows XP you have only 1.2 - 1.5 GB; on some UNIXes a 32-bit process can use 3.0 - 3.5 GB.
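As a sketch (the 256m value and app.jar are placeholders), metaspace is effectively unbounded by default but can be capped explicitly, and its current usage can be checked with jstat:
java -XX:MaxMetaspaceSize=256m -jar app.jar
jstat -gc <pid>
In the jstat output, the MC and MU columns show metaspace capacity and metaspace used.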

will java use more memory when running on machine with larger ram

Suppose I have a smaller-RAM machine and a larger-RAM machine, and I run the same Java code on both.
Will the JVM do garbage collection more lazily on the machine with the larger RAM?
The problem I am trying to solve is an out-of-memory issue. People reported that they hit an out-of-memory error on a small-RAM machine. I want to test that, but the only machine I have now has much more RAM than theirs. I am wondering: if I do the test on this larger-RAM machine and keep track of the memory usage, will the memory usage be the same as on the smaller-RAM machine, or will it use even less memory?
Thanks!
Erben
You need to take a look at the JVM memory parameters. You can actually give your JVM as much memory as you want:
-Xmx2048m -> sets the maximum heap memory that the JVM can allocate
-Xms1024m -> the initial heap memory that the JVM will allocate on startup
-XX:MaxPermSize=512M -> the maximum Permanent Generation memory (pre-Java 8; Java 8 replaced perm gen with metaspace and -XX:MaxMetaspaceSize)
So in your case you can set the same memory values as on the other machine, and your JVM will not take more heap than the -Xmx value.
You may also want to check these parameters:
-XX:MaxNewSize= -> this needs to be about 40% of your -Xmx value
-XX:NewSize=614m -> this needs to be about 40% of your -Xmx value
You may also tell your JVM what type of GC to use, for example:
-XX:+UseConcMarkSweepGC
So if you set these parameters on both machines, you will most likely get the same results and the same GC activity.
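Put together (only a sketch: the sizes are the ones quoted above, the MaxNewSize value of 820m simply illustrates the 40% rule, app.jar is a placeholder, and -XX:MaxPermSize applies to pre-Java 8 JVMs), the launch command would look something like:
java -Xms1024m -Xmx2048m -XX:MaxPermSize=512M \
  -XX:NewSize=614m -XX:MaxNewSize=820m \
  -XX:+UseConcMarkSweepGC -jar app.jar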
Yes, it will. The default maximum heap size depends on the machine's RAM. You can check your current maximum heap size using this command:
java -XshowSettings:vm
On my wife's laptop (Windows 8.1, 4 GB RAM, 32-Bit-Java-Runtime) it is 247.5 MB, while on my laptop (Windows 7, 8 GB RAM, 64-Bit-Java-Runtime) it is 903.12 MB.
This is determined by Java (see https://stackoverflow.com/a/4667635/3236102; the values shown there are for server-class machines and may differ for normal machines).
If you want your VM to simulate a low-RAM machine, just use the -Xmx flag to limit the heap (e.g. -Xmx128m for a 128 MB maximum heap).
The best thing might be to ask the users who encounter the out-of-memory issues to check their maximum heap size (using the command above) and set your machine to the same maximum heap size, so you have the same conditions as they have.
The issue can be reproduced on a machine with more RAM.
First you need to get the heap size configuration from the people who reported the issue.
Use the same heap size to reproduce the issue.
Use the JVM params below for the heap settings.
-Xmx512m Max heap memory, which is used to store objects
-XX:MaxPermSize=64m Max perm gen size; this space is used to store meta-information such as loaded classes (pre-Java 8; use -XX:MaxMetaspaceSize on Java 8+)
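As a sketch (app.jar is a placeholder; on Java 8+ replace -XX:MaxPermSize with -XX:MaxMetaspaceSize), the reproduction run would be started like this:
java -Xmx512m -XX:MaxPermSize=64m -jar app.jar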
