I have an application with a number of microservices and I'm trying to understand if Docker provides any memory advantages. My services are Java 7/Tomcat 7. Let's say I have 20 of them; is there any advantage for me to run Docker on top of an AWS EC2 Ubuntu 12.04 VM? I understand the value of run-anywhere for developer workstations, etc.; my primary question/concern is about the VM memory footprint. If I run each of these 20 services in their own container, with their own Tomcat, my assumption is that I'll need 20x the memory overhead for Tomcat, right? If this is true, I'm trying to decide if Docker is of value or is more overhead than it's worth. It seems like Docker's best value proposition is on top of a native OS, not as much in a VM; is there a different approach besides EC2 VM on AWS where Docker is best?
I'm curious how others would handle this situation or if Docker is even a solution in this space. Thanks for any insight you can provide.
No, there's no memory advantage over running 20 Tomcat processes. The Docker daemon and ancillary processes for 'publishing' ports will consume extra memory.
Docker's advantage is over 20 VMs, which will consume vastly more memory. It provides more isolation than processes alone, e.g. each process will see its own filesystem, network interface, process space. Also Docker provides advantages for packaging and shipping software.
I have a Spring Boot API hosted on AWS Elastic Container Service (ECS) running inside a Docker container. I am using a m5.xlarge instance which has 4 vCPUs and 16GB on physical RAM on the cluster. I was currently fine-tuning CPU and Memory, but finding it very random and tedious, despite following this article:
https://medium.com/@vlad.fedosov/how-to-calculate-resources-reservation-for-ecs-task-3c68a1e12725
I'm still a little confused about what to set the JVM heap size to. From what I've read, Java 10+ automatically detects that it is running in a container and defaults the JVM heap to 25% of the container's RAM (or is this the actual physical RAM of the cluster instance??). However, I am using Java 8.
I am logging the garbage collections logs via VM arguments:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:${LOG_DIR}/gc.log
My questions are:
1. What is the easiest way to get the JVM heap size my app is using at runtime? Is there a command, a tool, etc.?
2. I am running Java 8, which I believe does NOT detect the container and sizes the JVM based on the PHYSICAL SERVER RAM. I am using an m5.xlarge instance on AWS with 4 vCPUs and 16 GB RAM, so if I didn't specify -Xms or -Xmx, the JVM heap size would be 16 GB * 0.25 = 4 GB, correct??
3. If my container memory/task utilization on AWS ECS is currently at 120%, how do I know whether the problem is the JVM heap, the container memory limit being too low, or my application code being inefficient? This is an API and it queries the database many thousands of times per minute, so many objects may be lingering in memory without being garbage collected?? I'm not sure, but would appreciate any help.
Regarding question 3: you'll have the data in the log file you've specified, ${LOG_DIR}/gc.log.
TIP: use gc_%t.log to keep all of the GC logs (not only the last one) and write them to persistent storage.
You can visualise the data, e.g. at gceasy.io - you'll see the entire heap: collections, times, etc.
But remember your app is not only heap; there is also off-heap memory, and off-heap is not so easy to track (e.g. native memory tracking consumes additional resources). The easiest way is to grab a thread dump - the fastest way (though not recommended on a PROD environment) is to build your app image with a full JDK, run the jstack command in the app's terminal and redirect it to a file on persistent storage. Then you can also visualise it at https://fastthread.io/
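For the heap-vs-off-heap split mentioned above, the simplest in-process check (question 1) is the standard `MemoryMXBean`, which reports heap and non-heap (metaspace, code cache, etc.) usage at runtime. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // used = live objects plus garbage not yet collected;
        // max = the -Xmx ceiling (or the ergonomic default)
        System.out.printf("Heap:     used=%d MB, committed=%d MB, max=%d MB%n",
            heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("Non-heap: used=%d MB, committed=%d MB%n",
            nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}
```

From outside the process, `jcmd <pid> GC.heap_info` or `jstat -gc <pid>` report the same numbers without touching the application code.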
It depends on which update of Java 8 you're running - container support was backported in 8u191. In particular, note the VM flags -XX:InitialRAMPercentage, -XX:MinRAMPercentage and -XX:MaxRAMPercentage.
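To verify what the JVM actually picked as its heap ceiling, you can print the max heap against physical RAM. A minimal sketch; note that the `com.sun.management` bean is HotSpot-specific, so this is an assumption about your JVM vendor:

```java
import java.lang.management.ManagementFactory;

public class HeapCheck {
    public static void main(String[] args) {
        // Max heap the JVM will grow to (-Xmx, a RAMPercentage flag,
        // or the ergonomic default of roughly 1/4 of visible RAM)
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %.2f GB%n", maxHeap / 1e9);

        // HotSpot-specific bean exposing physical/container-visible RAM
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize();
        System.out.printf("Visible RAM: %.2f GB (heap is %.0f%% of it)%n",
            physical / 1e9, 100.0 * maxHeap / physical);
    }
}
```

Run it inside the container: on a container-aware JVM the "visible RAM" will be the container limit, not the host's 16 GB.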
Start by investigating your app's gc.log; if you see nothing suspicious there, then check your container utilization.
I have a dockerized Java application running in a Kubernetes cluster. Until now I had configured a CPU limit of 1.5 cores. Now I have increased the available CPUs to 3 to make my app perform better.
Unfortunately it now needs significantly more memory and gets OOMKilled by Kubernetes. This graph shows a direct comparison of the container's overall memory consumption with 1.5 cores (green) and 3 cores (yellow); nothing changed except the CPU limit:
The Java Heap is always looking good and seems not to be a problem. The memory consumption is in the native memory.
My application is implemented with Spring Boot 1.5.15.RELEASE, Hibernate 5.2.17.FINAL, Flyway, Tomcat. I compile with Java 8 and start it with a Docker OpenJDK 10 container.
I have spent the last few days debugging with JProfiler and jemalloc, as described in this post about native memory leak detection. jemalloc pointed me to a large number of allocations from java.util.zip.Inflater.
Does anyone have any clue what could explain the (to me very irrational) coupling of available CPUs to native memory consumption?
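One thing worth checking: several JVM subsystems size themselves from the visible CPU count (parallel GC threads, JIT compiler threads, the common ForkJoin pool), so raising the CPU limit can raise native memory use along with it. A minimal sketch to see what the JVM thinks it has inside the container:

```java
public class CpuCheck {
    public static void main(String[] args) {
        // GC and JIT thread pools are sized from this value; if it jumped
        // from 2 to 3 with the new limit, their native buffers grow too.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Visible CPUs: " + cpus);
    }
}
```

If this is the cause, flags such as -XX:ActiveProcessorCount, -XX:ParallelGCThreads and -XX:CICompilerCount can pin those pools back down independently of the container's CPU limit.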
Any hints would be appreciated :-)!
Thanks & Regards
Matthias
While a Java application server runs several (micro)services inside a single JVM, a dockerized Java microservices architecture runs one JVM per dockerized microservice.
Considering 20+ Java microservices and a limited number of hosts, the amount of resources consumed by the JVMs on each host seems huge.
Is there an efficient way to manage this problem? Is it possible to tune each JVM to limit resource consumption?
The aim is to limit the overhead of using Docker in a Java microservices architecture.
Each Docker container and each JVM copy uses memory. Multiple JVMs on a single node would normally share some memory (read-only segments and, with class data sharing, common class metadata), but this isn't an option across Docker containers.
What you can do is reduce the maximum heap size of each JVM. However, I would allow at least 1 GB per Docker container as overhead, plus your heap size for each JVM. While that sounds like a lot of memory, it doesn't cost so much these days.
Say you give each JVM a 2 GB heap and add 1 GB for Docker + JVM overhead; you are looking at needing a 64 GB server to run 20 JVMs/containers.
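The sizing above is simple arithmetic; the per-JVM heap and overhead figures are just the assumptions stated in this answer, so plug in your own:

```java
public class HostSizing {
    public static void main(String[] args) {
        int services = 20;   // number of dockerized microservices
        int heapGb = 2;      // -Xmx per JVM
        int overheadGb = 1;  // Docker + JVM off-heap allowance per container
        int totalGb = services * (heapGb + overheadGb);
        // 20 * (2 + 1) = 60 GB -> round up to a 64 GB host
        System.out.println("Total memory needed: " + totalGb + " GB");
    }
}
```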
I am trying to figure out why the same web application uses more memory on Ubuntu Linux 16 than on Windows 10. Is there any reason for it? I always thought Linux was faster and lighter for running any application in server mode. By the way, both operating systems are 64-bit.
See below screenshots and memory consumed.
Windows 10
Ubuntu Linux
As you can see in the task managers, Linux is using more memory to run the same application. I also tried running Spring Boot on a 64-bit JVM, and this requires more memory than running on a 32-bit JVM.
Is Windows better to manage Java application with Spring Boot?
As mentioned in the comments, Windows and Linux have different memory management systems. There are a variety of reasons their reported memory usage could differ; for instance, if the Java runtime on Windows is using a dynamic link library (DLL) already loaded by another application, the shared library may not be counted in the process's memory usage. In addition, the code required to implement the JVM and its API on Windows versus Linux is different.
Windows and Linux may page or swap different parts of the JVM to disk while the program is running, depending on the operating system's configuration and how the kernel is programmed.
Your best bet is to run the code through a Java profiler like VisualVM to get more information on how much memory the various parts of the application are using. Windows can be notoriously tricky when measuring the actual memory usage of a program; see https://superuser.com/questions/895168/how-to-measure-total-ram-usage-of-a-program-under-windows
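Before comparing task-manager numbers, it also helps to look at what the JVM itself reports, since OS tools count the whole process (heap, metaspace, thread stacks, mapped files), not just the heap. A minimal sketch you can run on both operating systems:

```java
public class JvmView {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // heap actually in use
        long committed = rt.totalMemory();              // heap reserved from the OS
        long max = rt.maxMemory();                      // -Xmx ceiling
        System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
            used >> 20, committed >> 20, max >> 20);
        // If 'committed' matches on both OSes but the task managers disagree,
        // the difference lies outside the heap (or in how RSS is reported).
    }
}
```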
I have a VPS on which I serve Tomcat 6.0.
500 MB of memory was enough when I had only two applications.
Last week I deployed another web application and set up a new virtual host by editing Tomcat's server.xml.
But the server's responses slowed down and Linux started to eat into swap.
When I increased the memory to 750 MB it became stable again.
But memory is not so cheap, so I won't be happy to pay for 250 MB of RAM for each additional application.
Is needing an additional 250 MB of memory for each web app normal?
Is there any solution to decrease this cost?
For example, would putting the common libraries of these applications into Tomcat's shared folder have a positive impact on Tomcat's memory use and performance?
Note: the deployed applications are web applications that use Spring, Hibernate, SiteMesh and related libraries; the WAR file sizes total about 30 MB.
Thanks.
It's unlikely that this memory is being consumed by the Spring / Hibernate / etc. classes themselves, so the size of their .jar files isn't going to matter much. Putting these libraries in Tomcat's shared library directory would save a bit of memory, in that only one copy of these classes would be loaded, but that won't save much.
Isn't it simply more likely that your applications are just using this much memory for data and so forth? You need to use a profiler to figure out what is consuming the heap memory first. Until you know the problem, there's not much use in pursuing particular solutions.
I think you need to measure the memory consumption of the application.
You can use JProfiler, or the profiling tools built into Java 6 such as VisualVM and jhat (the best way to start).
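If you want a first look before attaching a profiler, the JVM can report its memory pools directly. A minimal sketch; pool names vary by garbage collector, so the exact output differs across JVMs:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolReport {
    public static void main(String[] args) {
        // Lists each pool (e.g. Eden Space, Old Gen, Code Cache) with its
        // usage, which shows where the memory is actually going.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s used=%d KB, committed=%d KB%n",
                pool.getName(),
                pool.getUsage().getUsed() >> 10,
                pool.getUsage().getCommitted() >> 10);
        }
    }
}
```

If most of the usage sits in the old generation and keeps growing across collections, that points at retained data rather than per-application class overhead.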
Check this link for more info:
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/