While a Java application server runs several (micro)services inside a single JVM, a dockerized Java microservices architecture runs a separate JVM for each dockerized microservice.
With 20+ Java microservices and a limited number of hosts, the amount of resources consumed by the JVMs on each host seems huge.
Is there an efficient way to manage this problem? Is it possible to tune each JVM to limit its resource consumption?
The aim is to limit the overhead of using Docker in a Java microservices architecture.
Each running Docker container and JVM copy uses memory. Normally, multiple JVMs on a single node could share some memory, but this isn't an option with Docker.
What you can do is reduce the maximum heap size of each JVM. However, I would allow at least 1 GB per Docker container as overhead, plus your heap size for each JVM. While that sounds like a lot of memory, it doesn't cost that much these days.
Say you give each JVM a 2 GB heap and add 1 GB for Docker + JVM overhead: you are looking at needing a 64 GB server to run 20 JVMs/containers.
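As a rough sketch (the image name and sizes here are hypothetical), you can make that budget explicit by capping both the container and the heap, so each service stays within a predictable slice of the host:

    # Cap the container at 3 GB and the heap at 2 GB, leaving ~1 GB for the
    # JVM's own overhead (metaspace, threads, code cache) and the container.
    # Whether JAVA_OPTS is honored depends on the image's entrypoint.
    docker run -d --memory=3g \
        -e JAVA_OPTS="-Xms2g -Xmx2g" \
        my-service-image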
Related
I have a Spring Boot API hosted on AWS Elastic Container Service (ECS), running inside a Docker container. I am using an m5.xlarge instance, which has 4 vCPUs and 16 GB of physical RAM, on the cluster. I am currently fine-tuning CPU and memory, but I am finding it very random and tedious, despite following this article:
https://medium.com/@vlad.fedosov/how-to-calculate-resources-reservation-for-ecs-task-3c68a1e12725
I'm still a little confused about what to set the JVM heap size to. From what I read, Java 10+ automatically detects the container and sets the JVM heap to 25% of the container RAM (or is it actually the physical RAM of the cluster??). However, I am using Java 8.
I am logging the garbage collection logs via these VM arguments:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:${LOG_DIR}/gc.log -
My questions are:
1. What is the easiest way to get the JVM heap size my app is using at runtime? Is there a command, a tool, etc.?
2. I am running Java 8, which I believe does NOT detect the container and will size the JVM based on the PHYSICAL SERVER RAM. I am using an m5.xlarge instance on AWS with 4 vCPUs and 16 GB RAM, so if I didn't specify -Xms or -Xmx, the JVM heap size would be 16 GB * 0.25 = 4 GB, correct??
3. If my container memory/task on AWS ECS is currently at 120%, how do I know whether the JVM heap or the container memory is the problem/too low, or whether it is even my application code being inefficient? This is an API and it queries the database many thousands of times per minute, so many objects are floating around in memory and not being garbage collected?? I'm not sure, but I would appreciate any help.
Regarding 1 and 3: you'll have the data in the log file you've specified in ${LOG_DIR}/gc.log.
TIP: use gc_%t.log to keep all of the GC logs (not only the most recent one) and write them to persistent storage.
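For example, the logging arguments from the question could be adjusted along these lines (the path is illustrative) so each JVM start writes to its own timestamped file instead of overwriting the previous one:

    # %t expands to the JVM start timestamp, so restarts don't clobber old logs
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps \
        -Xloggc:${LOG_DIR}/gc_%t.log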
You can visualise the data, e.g. at gceasy.io - you'll see the entire heap: collections, pause times, etc.
But remember your app is not only heap; it also uses off-heap memory, which is not as easy to track (e.g. Native Memory Tracking consumes additional resources, etc.). The easiest approach is to grab a thread dump (the fastest way, though not recommended on a PROD environment, is to build your app image with a full JDK, run the jstack command in the app's terminal and redirect its output to a file on persistent storage). Then you can also visualise it at https://fastthread.io/
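A minimal sketch of the commands involved, assuming the JDK tools are available inside the container and <pid> is your Java process id (the output path is illustrative):

    # Question 1: heap usage at runtime (Java 8) - per-generation sizes every 5 s
    jstat -gc <pid> 5000

    # One-shot heap configuration and usage summary (Java 8 HotSpot)
    jmap -heap <pid>

    # Thread dump redirected to persistent storage, viewable at fastthread.io
    jstack -l <pid> > /persistent/threaddump_$(date +%s).txt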
Regarding 2: it depends on which Java 8 update you're running - container support was backported in 8u191 (see e.g. the blog posts on the topic).
Especially note the -XX:*RAMPercentage VM flags (MaxRAMPercentage, InitialRAMPercentage, MinRAMPercentage).
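On 8u191+ (or Java 10+) you can size the heap relative to the container's memory limit rather than the host's RAM. A hedged example (the percentage and jar name are arbitrary):

    # Requires 8u191+ or Java 10+; the heap becomes 75% of the container's
    # memory limit instead of a fraction of the host's physical RAM
    java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar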
Start by investigating your app's gc.log; if you see nothing suspicious there, then check your container utilization.
I have 23 Java processes running on one machine with 32 GB. No process specifies JVM memory params such as -Xmx. java -XX:+PrintFlagsFinal -version | grep MaxHeapSize reports that the default max heap size is 8 GB, as expected.
Every process runs embedded Tomcat (Spring Boot apps, most at v2.3.4), except one, which is a standalone Tomcat 9 instance running three WARs. These apps have low usage (usually one user and about 10 minutes of use a day). They are not memory or CPU intensive. One of them is Spring Boot Admin and another is Spring Cloud's Eureka service registry. For these two, I have only a main method that simply bootstraps the Spring Boot application.
Yet RES memory, as shown in top, keeps gradually increasing for every process. For example, the Spring Boot service registry has increased from 1.1 GB to 1.5 GB in the last 12 hours. All processes show a similar small increase in RES, but the total increase has reduced free memory by 2 GB in that same 12-hour period. It was the same in the previous 12 hours (and so on), until current free memory is now only 4.7 GB.
My concern is that I continue to see this trend (even without app usage). Memory is never freed from the apps, so total free memory continues to decrease. Is this normal, since perhaps each JVM sees that memory is still available in the OS and that 8 GB of heap space is available to it? Will the JVMs stop taking memory at some point, say once an OS free-memory threshold is reached? Or will it continue until all free memory is used?
Update
The heap used for most apps is under 200 MB, but the heap size is 1.5-2.8 GB. Heap max is 8 GB.
Resident memory as reported by the OS doesn't tell you which component is consuming it. You'll have to gather additional data to figure out which part of the process is growing.
You'll have to track:
Java heap and metaspace use - you can monitor this with JMC, GC logging and many other Java monitoring tools
JVM off-heap use - NMT (Native Memory Tracking; see the command sketch after this list)
direct byte buffer use - MX beans, also available via JMC
use by mapped files - pmap -x <pid>
use by native libraries, e.g. via JNI - difficult to monitor
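For the NMT item above, a minimal sketch of how to use it (the flag level, jar name and <pid> are illustrative; NMT adds some overhead, as noted elsewhere in this thread):

    # Start the JVM with Native Memory Tracking enabled
    java -XX:NativeMemoryTracking=summary -jar app.jar

    # Take a baseline, then diff later to see which native category is growing
    jcmd <pid> VM.native_memory baseline
    jcmd <pid> VM.native_memory summary.diff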
I also faced this situation, and after a long time of research I found the solution here. Basically, in my case, it was just a matter of setting the -Xms and -Xmx parameters on the jar invocation, forcing the GC to act constantly.
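Something along these lines, for example (the sizes and jar name are made up and depend entirely on the app); the JVM then garbage-collects within that fixed bound instead of growing toward the multi-GB default maximum:

    # Pin the heap between 256 MB and 512 MB for this service
    java -Xms256m -Xmx512m -jar my-service.jar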
I have a dockerized Java application running in a Kubernetes cluster. Until now I had configured a CPU limit of 1.5 cores. Now I have increased the available CPUs to 3 to make my app perform better.
Unfortunately, it needs significantly more memory now and gets OOMKilled by Kubernetes. This graph shows a direct comparison of the container's overall memory consumption with 1.5 cores (green) and 3 cores (yellow); nothing differs but the CPU limit:
The Java heap always looks fine and does not seem to be the problem. The extra memory consumption is in native memory.
My application is implemented with Spring Boot 1.5.15.RELEASE, Hibernate 5.2.17.Final, Flyway and Tomcat. I compile with Java 8 and run it in an OpenJDK 10 Docker container.
I debugged a lot over the last few days using JProfiler and jemalloc, as described in this post about native memory leak detection. jemalloc pointed me to a large amount of java.util.zip.Inflater.
Does anyone have any clue what could explain this (to me very irrational) coupling of available CPUs to native memory consumption?
Any hints would be appreciated :-)!
Thanks & Regards
Matthias
I have an application with a number of microservices, and I'm trying to understand whether Docker provides any memory advantages. My services are Java 7 / Tomcat 7. Let's say I have 20 of them; is there any advantage to running Docker on top of an AWS EC2 Ubuntu 12.04 VM? I understand the value of run-anywhere for developer workstations, etc.; my primary question/concern is the VM memory footprint. If I run each of these 20 services in its own container, with its own Tomcat, my assumption is that I'll need 20x the memory overhead for Tomcat, right? If this is true, I'm trying to decide whether Docker is of value or more overhead than it's worth. It seems like Docker's best value proposition is on top of a native OS, not so much inside a VM; is there a different approach, besides an EC2 VM on AWS, where Docker works best?
I'm curious how others would handle this situation or if Docker is even a solution in this space. Thanks for any insight you can provide.
No, there's no memory advantage over running 20 Tomcat processes. The Docker daemon and ancillary processes for 'publishing' ports will consume extra memory.
Docker's advantage is over 20 VMs, which will consume vastly more memory. It provides more isolation than processes alone, e.g. each process will see its own filesystem, network interface, process space. Also Docker provides advantages for packaging and shipping software.
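If you want to quantify that overhead rather than guess, a simple (illustrative) check is to snapshot per-container memory use and, if needed, cap each service:

    # Memory and CPU snapshot for every running container
    docker stats --no-stream

    # Optionally limit a container so one Tomcat can't starve the others
    # (image name is hypothetical)
    docker run -d -m 512m my-tomcat-service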
By default, the JVM uses a maximum of about 1.5 GB of RAM per Java application.
But my server has 8 GB and the application still needs more RAM. How can I start a cluster of JVMs on a single server?
If I just increase the memory of a single JVM, the garbage collector and the other JVM daemon threads slow down...
What is the solution for this? Is a cluster of JVMs the right thing?
The application needs a high-spec configuration; when requests start coming in, the JVM slows down and memory usage reaches 95%-99%.
My server configuration: Linux
4-core multiprocessor
8 GB RAM
no issue with HDD space.
Any solution for this problem?
You might want to look into memory grids like:
Oracle Coherence: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html
GridGain: http://www.gridgain.com/
Terracotta: http://terracotta.org/
We use Coherence to run 3 JVMs on one machine, each process using 1 GB of RAM.
There are a number of solutions.
Use a larger heap size (possibly 64-bit JVM)
Use less heap and more off-heap memory. Off-heap memory can scale into the terabytes.
Split the JVM into multiple processes. This is easier for some applications than others. I tend to avoid this as my applications can't be split easily.
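As a rough sketch of the first two suggestions on an 8 GB box (the sizes and jar name are illustrative, not a recommendation):

    # Larger heap: one 64-bit JVM with a 6 GB heap, leaving headroom for the OS
    java -Xms6g -Xmx6g -jar app.jar

    # Less heap, more off-heap: small heap plus a cap on direct (off-heap) buffers
    java -Xmx2g -XX:MaxDirectMemorySize=4g -jar app.jar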