I have a dockerized Java application running in a Kubernetes cluster. Until now I had configured a CPU limit of 1.5 cores. I then increased the available CPUs to 3 to make my app perform better.
Unfortunately, it now needs significantly more memory and gets OOMKilled by Kubernetes. This graph shows a direct comparison of the container's overall memory consumption for 1.5 cores (green) and 3 cores (yellow); nothing changed except the CPU limit:
The Java heap always looks fine and does not seem to be the problem; the growth is in native memory.
My application is implemented with Spring Boot 1.5.15.RELEASE, Hibernate 5.2.17.Final, Flyway, and Tomcat. I compile with Java 8 and run it in an OpenJDK 10 Docker container.
I have spent the last few days debugging with JProfiler and jemalloc, as described in this post about native memory leak detection. jemalloc pointed me to a large number of java.util.zip.Inflater allocations.
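A common cause of Inflater-related native growth is instances whose end() is never called: each Inflater holds native zlib buffers outside the Java heap, and since the Java-side object is tiny, the GC may feel no pressure to finalize it. The sketch below (demo data is made up; the class names are from the JDK) shows the pattern of releasing the native state explicitly:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterDemo {

    // Inflate a zlib-compressed buffer, releasing the native zlib state
    // in a finally block. Without end(), each Inflater keeps its native
    // buffers until finalization, which under low heap pressure may
    // effectively never happen -- native memory then grows unbounded.
    static byte[] inflate(byte[] compressed, int originalLength) {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed);
            byte[] out = new byte[originalLength];
            inflater.inflate(out);
            return out;
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        } finally {
            inflater.end(); // frees the native memory immediately
        }
    }

    public static void main(String[] args) {
        byte[] data = "hello native memory".getBytes();
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[100];
        int n = deflater.deflate(buf);
        deflater.end();
        System.out.println(new String(inflate(Arrays.copyOf(buf, n), data.length)));
    }
}
```

In practice the leaky Inflaters usually come from streams (e.g. GZIPInputStream) that are opened but never closed, so auditing try-with-resources usage is a good first step.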
Does anyone have any clue what could explain this (to me very irrational) coupling of available CPUs to native memory consumption?
Any hints would be appreciated :-)!
Thanks & Regards
Matthias
Related
I have a Spring Boot app that I suspect has a memory leak. Over time its memory consumption grows to around 500 MB until I restart the application; after a fresh restart it takes about 150 MB. The app should be a fairly stateless REST service, with no objects retained after a request completes, so I would expect the garbage collector to take care of this.
Currently in production the Spring Boot app uses 343 MB of memory (RSS). I took a heap dump of the application and analysed it: the heap dump is only 31 MB. So where do the missing 300 MB lie? How does the heap dump correlate with the memory the application actually uses, and how can I profile memory consumption beyond the heap dump? If the memory used is not in the heap, where is it? How do I discover what is consuming the memory of the Spring Boot application?
So where do the missing 300 MB lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all this research, the memory calculator still has a slack option called "head-room", usually set to 5 to 10% of total available memory, in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input-parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - about 19,000 for a recent Spring Boot application. This seems to grow with each Spring version. Note that this is a maximum: setting it too low results in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64 MB. Peak "CodeCache" usage is often during application startup, so going from 192 MB to 64 MB frees a lot of memory that can be used as heap instead. Applications with a lot of active code at runtime (e.g. a web application with many endpoints) may need more "CodeCache". If the "CodeCache" is too small, your application will use a lot of CPU without getting much done; this can also show up during startup, which may take a very long time. The "CodeCache" is reported by the JVM as a non-heap memory region, so it should not be hard to measure.
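To check whether your deepest call chains still fit in a smaller stack, a quick probe like this can help (a sketch: run it once with the default -Xss and once with -Xss256k and compare):

```java
public class StackDepthProbe {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        // The attainable recursion depth drops roughly in proportion to
        // the stack size. If your application's deepest real call chains
        // are nowhere near this limit, the smaller stack is safe.
        try {
            recurse();
        } catch (StackOverflowError expected) {
            System.out.println("max recursion depth ~" + depth);
        }
    }
}
```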
The output of the memory calculator is a bunch of Java options that all affect what memory the JVM uses. If you really want to know where "the missing 300 MB" went, study and research each of these options, in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
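To confirm that the calculated options were actually picked up by the container entrypoint, the effective heap limit can be read back at startup (a sketch; Runtime.maxMemory() reflects the configured -Xmx):

```java
public class FlagCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the effective heap limit (-Xmx),
        // so a quick log line like this at startup confirms the
        // calculator's output made it into the JVM's command line.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("effective max heap: " + maxHeapMb + " MB");
    }
}
```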
Besides heap, you have thread stacks, meta space, JIT code cache, native shared libraries and the off-heap store (direct allocations).
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1 MB for its stack by default (depending on Java version, platform, etc.). With, say, 300 active threads (idle or not), you'll allocate 300 MB of stack memory.
Consider making all your thread pools fixed-size (or at least provide reasonable upper bounds). Even if this proves not to be root cause for what you observed, it makes the app behaviour more deterministic and will help you better isolate the problem.
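A fixed-size pool as suggested above can be sketched like this (pool size and task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // A fixed-size pool puts a hard ceiling on thread-stack memory:
        // with the default 1 MB stack, 8 worker threads reserve ~8 MB no
        // matter how many tasks are queued. An unbounded cached pool, by
        // contrast, grows its native stack footprint with load.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        Future<Integer> answer = pool.submit(() -> 21 + 21);
        System.out.println(answer.get()); // prints 42
        pool.shutdown();
    }
}
```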
You can view the memory consumption of a Spring Boot app this way:
Package the Spring Boot app as a .jar file and run it with java -jar springboot-example.jar.
Now open a terminal, type jconsole, and hit Enter.
Note: the .jar file must be running before you open jconsole.
A window opens, and the application you just started is listed in the Local Process section.
Select springboot-example.jar and click the Connect button.
When prompted, choose the Insecure connection option.
Finally, the Overview tab shows heap memory usage, threads, classes, and CPU usage.
You can use JProfiler (https://www.ej-technologies.com/products/jprofiler/overview.html) remotely or locally to monitor a running Java app's memory usage.
You can also use YourKit with IntelliJ IDEA, if that is your IDE, to troubleshoot memory issues in your Spring Boot app. I have used it before and it gives good insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers
While a Java application server runs several (micro)services inside a single JVM, a dockerized Java microservices architecture runs one JVM per dockerized microservice.
With 20+ Java microservices and a limited number of hosts, the amount of resources consumed by the JVMs on each host is huge.
Is there an efficient way to manage this problem? Is it possible to tune each JVM to limit its resource consumption?
The aim is to limit the overhead of using docker in a java microservices architecture.
Each running Docker container and JVM copy uses memory. Normally, having multiple JVMs on a single node would let them share memory, but this isn't an option with Docker.
What you can do is reduce the maximum heap size of each JVM. However, I would allow at least 1 GB per Docker image as overhead, plus your heap size, for each JVM. While that sounds like a lot of memory, it doesn't cost so much these days.
Say you give each JVM a 2 GB heap and add 1 GB for Docker + JVM overhead: you are looking at needing a 64 GB server to run 20 JVMs/containers.
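The arithmetic behind that estimate, as a sketch (the 1 GB overhead figure is the answer's own assumption): 20 containers at 3 GB each is 60 GB, which in practice means provisioning a 64 GB box.

```java
public class SizingDemo {
    public static void main(String[] args) {
        // Back-of-the-envelope sizing: heap per JVM plus the assumed
        // 1 GB Docker + non-heap JVM overhead, times the service count.
        int services = 20;
        int heapGb = 2;     // -Xmx per JVM
        int overheadGb = 1; // Docker + JVM overhead (estimate from above)
        int totalGb = services * (heapGb + overheadGb);
        System.out.println(totalGb + " GB"); // 60 GB -> a 64 GB server
    }
}
```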
Issue Description:
We are facing the following issue in a web application (on CQ5):
System Configurations details:
• System memory: 7GB
• Xmx: 3.5 GB
• Xms: 1 GB
• MaxPermGen: 300MB
• Max no of observed threads: 620 (including 300 http request serving threads)
• Xss: default
The issue is that the memory consumed by the CQ5 Java process (which runs the servlet engine) keeps increasing over time.
Once it goes above 6 to 6.5 GB (approaching the 7 GB of system memory), the JVM stops responding, due to the shortage of memory and heavy paging activity.
The heap and PermGen, however, collectively stay at or below 3.8 GB (3.5 + 0.3).
This means that non-heap memory (native memory + thread stack space) grows from a few hundred MB (after a CQ5 server restart) to more than 2-3 GB (after long runs of 4-5 hours under heavy load).
So our goal is basically to find the memory leaks in non-heap memory, which could be introduced by third-party libraries, indirect references in Java code, etc. We are not receiving any out-of-memory errors.
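As a first step, the JVM-managed part of non-heap memory can be inspected in-process via the standard management beans (a sketch; this covers regions such as PermGen/Metaspace and the code cache, but not thread stacks or third-party native allocations, so it only rules the JVM-managed regions in or out):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class NonHeapProbe {
    public static void main(String[] args) {
        // Print per-region usage for every non-heap pool the JVM knows
        // about. If these stay flat while RSS grows, the leak is in
        // native allocations outside the JVM's own bookkeeping.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%-30s used=%,d bytes%n", pool.getName(), u.getUsed());
            }
        }
    }
}
```

If these numbers stay flat, the next suspects are thread stacks and native allocations from third-party libraries, which need OS-level tools (pmap, jemalloc profiling) rather than JVM beans.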
Help needed:
Most of the tools we used give good information and details about heap memory, but we are unable to get a view into native memory. Please share your suggestions on how to monitor non-heap memory, at the object level or at the memory-area level.
If any of you have faced a similar issue (a non-heap memory leak) in your applications and would like to share how you fixed it, please share your experience.
This is really dependent on your specific implementation: what code you've deployed, what infrastructure you're using, what version you're running, what application servers (if any) you're using, etc.
That said, I have experienced memory leak issues with CQ5.5 and the Image Servlet. It's actually a memory leak down in one of the Java libraries that powers the Image Servlet, way down under the covers. It's remedied by a Java version update, but it's caused by the Image servlet. Kind of a long shot that it's your issue, but probably worth mentioning.
By default, the JVM uses at most about 1.5 GB of RAM per Java application.
But my server has 8 GB, and the application still needs more RAM. How do I start a cluster of JVMs on a single server?
If memory use grows in a single JVM, the garbage collector and other JVM daemon threads slow down.
What is the solution for this? Is a JVM cluster the right thing?
The application runs under high load; when requests come in, the JVM slows down and memory usage climbs to 95-99%.
My server configuration: Linux
4-core multiprocessor
8 GB RAM
HDD space is not an issue.
Any solution for this problem?
You might want to look into memory grids like:
Oracle Coherence: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html
GridGain: http://www.gridgain.com/
Terracotta: http://terracotta.org/
We use Coherence to run 3 JVMs on 1 machine, each process using 1 GB of RAM.
There are a number of solutions.
Use a larger heap size (possibly with a 64-bit JVM).
Use less heap and more off-heap memory. Off-heap memory can scale into the terabytes.
Split the JVM into multiple processes. This is easier for some applications than others; I tend to avoid it, as my applications can't be split easily.
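The off-heap option above can be sketched with direct buffers (sizes are illustrative):

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // Direct buffers are allocated outside the Java heap, so data
        // parked here neither counts against -Xmx nor lengthens GC
        // pauses; the ceiling is -XX:MaxDirectMemorySize instead.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(16 * 1024 * 1024); // 16 MB
        offHeap.putLong(0, 42L);
        System.out.println(offHeap.getLong(0)); // prints 42
        System.out.println(offHeap.isDirect()); // prints true
    }
}
```

Libraries such as Chronicle Map or Ehcache's off-heap tier build on the same mechanism, managing serialization on top of raw buffers.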
We run the 32-bit Sun Java 5 JVM on 64-bit Linux 2.6 servers, but apparently this limits our maximum memory per process to 2 GB. So it has been proposed that we upgrade to 64-bit JVMs to remove the restriction. We currently run multiple JVMs (Tomcat instances) per server to stay under the 2 GB limit, but we'd like to consolidate them in the interest of simplifying deployment.
If you've done this, can you share your experiences, please? Are you running 64-bit JVMs in production? Would you recommend staying on Java 5, or would it be OK to move to both Java 6 and 64 bits simultaneously? Should we expect performance differences, better or worse? Are there any particular areas we should focus our regression testing on?
Thanks for any tips!
At the Kepler Science Operations Center we have about 50 machines with 32-64 GB each. The JVM heaps are typically 7-20 GB. We are using Java 6, and the OS has a Linux 2.6 kernel.
When we migrated to 64-bit I expected there would be some issues with running the 64-bit JVM, but really there have not been. Out-of-memory conditions are more difficult to debug since the heap dumps are so much larger. The Java Service Wrapper needed some modifications to support the larger heap sizes.
There are some sites on the web claiming GC does not scale well past 2 GB, but I've not seen any problems. Finally, we are doing throughput-intensive rather than interactive-intensive computing; I've never looked at latency differences, but my guess is worst-case GC latency will be longer with the larger heap sizes.
We use a 64-bit JVM with heaps of around 40 GB. In our application, a lot of data is cached, resulting in a large "old" generation. The default garbage collection settings did not work well and needed some painful tuning in production. Lesson: make sure you have adequate load-testing infrastructure before you scale up like this. That said, once we got the kinks worked out, GC performance has been great.
I can confirm Sean's experience. We run pure-Java, computationally intensive web services (a home-cooked Jetty integration with, nowadays, more than 1k servlet threads and over 6 GB of loaded data in memory), and all our applications scaled very well to a 64-bit JVM when we migrated two years ago. I would advise using the latest Sun JVM, as substantial improvements to GC overhead have been made in the last few releases. I did not have any issues with Tanukisoftware's Wrapper either.
Any JNI code you have written that assumes it is running in 32 bits will need to be retested. For problems you may run into porting C code from 32 to 64 bits, see this link; it's not JNI-specific but still applies: http://www.ibm.com/developerworks/library/l-port64.html
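A quick way to confirm which data model a given JVM is actually running under (a sketch; "os.arch" is a standard system property, while "sun.arch.data.model" is HotSpot-specific, hence the fallback):

```java
public class BitnessCheck {
    public static void main(String[] args) {
        // JNI libraries compiled for 32 bits will fail to load on a
        // 64-bit JVM, so check the data model before deploying.
        System.out.println(System.getProperty("os.arch"));
        System.out.println(System.getProperty("sun.arch.data.model", "unknown"));
    }
}
```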
After migrating from 32-bit JDK 5 to 64-bit JDK 6 (on Windows Server), we got a leak in the "perm gen space" memory block. After playing with JDK parameters it was resolved. Hope you will be luckier than we were.
If you run numactl --show, you can see the size of the memory banks in your server.
I have found that the GC doesn't scale well when it uses more than one memory bank. This is more a hardware than a software issue IMHO, but it can affect your GC times all the same.