Tomcat committed virtual memory = more than 2x Xmx setting - java

I'm having difficulties understanding the current memory commitments (virtual memory) of my JVMs.
For better context:
We're running a closed-source Java application. We just host / run the application by deploying the delivered .war files, but overall we are responsible for running it (this construct, as obscure as it is, is non-negotiable ;) )
The OS is RHEL 7, Tomcat v8. Overall we're running several Tomcats with an Apache 2.2 in front of them as a load balancer.
We experience massive memory swapping. It seems like every time we give the server VM more RAM it immediately gets consumed by the Tomcats. Currently we run 7 Tomcats with a total of 48GB of RAM. The plan is to upgrade to 64GB or even higher, but I fear this won't change a thing given Tomcat's hunger for memory...
For security reasons I've blacked out some internal paths/IPs/ports etc.
I do know that the memory consumption of a JVM consists of more than just the assigned heap space, but in my case Xmx is set to 4096m and committed memory is about 10GB - as seen in the jconsole screenshot, and also confirmed by top on the server.
And this just doesn't seem right. I mean - over 2x more than heap space?
I've also read up on memory-related Java topics, but as stated above, the application is closed source, which doesn't give me much leeway in debugging / trying things out.
And since I am relatively new to Tomcat I figured I might as well turn to a more experienced community :)
So here are the main questions, which came up:
Is there a certain setting I am missing that definitely caps the maximum committed memory size for a Tomcat?
Is the behaviour we're experiencing normal?
How should I talk to the devs about this problem? We've already done heap dumps but they weren't as telling as we hoped. The heap is correctly sized and is almost always at around 50% usage.
Am I missing something here?
I know it's hard to tell from a distance, especially if no source code can be provided.
But any help is appreciated :)
Thanks!
Alex
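
For reference, one way to see which regions besides the heap the committed memory goes to is the standard java.lang.management API: every pool (the heap generations, Metaspace, Code Cache, Compressed Class Space, ...) is exposed as a MemoryPoolMXBean. Below is a minimal, illustrative sketch that prints each pool's committed size when run inside a JVM; the same values can also be read externally through jconsole's MBeans tab, which matters here since the application is closed source.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPoolReport {
    public static void main(String[] args) {
        long totalCommitted = 0;
        // Every pool the JVM manages: heap generations, Metaspace,
        // Code Cache, Compressed Class Space, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (!pool.isValid()) continue;   // a pool may have been removed at runtime
            long committed = pool.getUsage().getCommitted();
            totalCommitted += committed;
            System.out.printf("%-30s type=%-16s committed=%,d bytes%n",
                    pool.getName(), pool.getType(), committed);
        }
        System.out.printf("Total committed across pools: %,d bytes%n", totalCommitted);
        // Note: thread stacks, direct buffers and native allocations made by
        // libraries are NOT included here, but still count towards RSS.
    }
}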

Related

How to profile spring-boot application memory consumption?

I have a spring-boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching around 500M of memory until I restart the application. After a fresh restart it takes something like 150M. The spring-boot app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the spring-boot app seems to use 343M of memory (RSS). I got a heap dump of the application and analysed it. According to the analysis, the heap dump is only 31M in size. So where does the missing 300M lie? How is the heap dump correlated with the actual memory the application is using? And how can I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How do I discover what is consuming the memory of the spring-boot application?
So where does the missing 300M lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack-option called "head-room" - usually set to 5 to 10% of total available memory in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input-parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - for a recent Spring Boot application, about 19,000. This seems to grow with each Spring version. Note that this is a maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web-application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256kb. Unless your application has really deep stacks (recursion), going from 1 MB to 256kb per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64MB. Peak "CodeCache" usage is often during application startup; going from 192 MB to 64 MB saves a lot of memory that can be used as heap. Applications that have a lot of active code during runtime (e.g. a web application with a lot of endpoints) may need more "CodeCache". If "CodeCache" is too low, your application will use a lot of CPU without doing much (this can also manifest during startup: if "CodeCache" is too low, your application can take a very long time to start up). "CodeCache" is reported by the JVM as a non-heap memory region, so it should not be hard to measure.
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides heap, you have thread stacks, meta space, JIT code cache, native shared libraries and the off-heap store (direct allocations).
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300MB of stack memory.
Consider making all your thread pools fixed-size (or at least providing reasonable upper bounds). Even if this proves not to be the root cause of what you observed, it makes the app's behaviour more deterministic and will help you isolate the problem.
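As an illustration of what a bounded pool can look like (the pool sizes here are made-up examples, not recommendations), together with a quick way to read the current live thread count at runtime:

import java.lang.management.ManagementFactory;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {
    public static void main(String[] args) {
        // How many threads are alive right now? Each one costs roughly -Xss of stack.
        int liveThreads = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("Live threads: " + liveThreads);

        // A pool with a hard upper bound: at most 16 threads and 1000 queued tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 16,                        // core and maximum pool size
                60, TimeUnit.SECONDS,         // idle threads above core are released
                new ArrayBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure instead of unbounded growth

        pool.submit(() -> System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}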
We can view the memory consumption of a Spring Boot app in this way.
Build the Spring Boot app as a .jar file and execute it using java -jar springboot-example.jar
Now open a command prompt, type jconsole and hit enter.
Note: the .jar file needs to be running before you open jconsole.
Now you can see a window like the one below, and the application you just started will appear in the Local Process section.
Select springboot-example.jar and click the Connect button below.
It will then show the prompt below; choose the Insecure connection option.
Finally you can see the Overview below (Heap Memory, Threads, ...).
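If you prefer to read the same Overview numbers programmatically rather than through the jconsole UI, the standard MemoryMXBean exposes them. A minimal sketch (not specific to Spring Boot):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryOverview {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        // "committed" is what the OS has actually handed to the JVM,
        // "max" is the configured ceiling (-1 if undefined).
        System.out.printf("Heap:     used=%,d committed=%,d max=%,d%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());
        System.out.printf("Non-heap: used=%,d committed=%,d max=%,d%n",
                nonHeap.getUsed(), nonHeap.getCommitted(), nonHeap.getMax());
    }
}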
You can use JProfiler (https://www.ej-technologies.com/products/jprofiler/overview.html) remotely or locally to monitor a running Java app's memory usage.
You can use YourKit with IntelliJ, if that is your IDE, to troubleshoot memory-related issues in your Spring Boot app. I have used this before and it provides better insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers

heroku R14 errors with java Play 1 app running on OpenJDK 1.8

I recently updated a Play v1 app to use OpenJDK 1.8 on Heroku and am finding that after typical load I start getting R14 errors relatively quickly - the physical memory has exceeded the 512MB limit and is now swapping, impacting performance. I need to restart the application frequently to stop the R14 errors. The application is a very typical web application and I would expect it to run comfortably within the memory constraints.
Here is a screenshot from NewRelic showing the physical memory being exceeded. I don't really have 59 JVMs, just the result of numerous restarts.
I don't quite understand why the "used heap" appears to impact the "physical memory" when it hasn't come close to the "committed heap", or why the "physical memory" seems to more closely follow the "used heap" rather than the "committed heap".
I've used the Eclipse Memory Analyzer to analyse some heap dumps and the Leak Suspects report mentions play.Play and play.mvc.Router as suspects though I'm not sure if that's expected and/or if they are directly related to the physical memory being exceeded.
See the reports generated by MAT for more details:
Leak Suspects
Top Components
Any guidance on how to resolve this would be great. I'm developing on OS X with Oracle Java 1.8 and have not yet been able to replicate the exact Heroku dyno environment locally (e.g. Ubuntu, OpenJDK 1.8) to attempt to reproduce the issue.
UPDATE 11/12/2014:
Here is the response from Heroku Support:
Play, and in turn Netty, allocate direct memory ByteBuffer objects for IO. Because they use direct memory, they will not be reported by the JVM (Netty 4.x now uses ByteBuf objects, which use heap memory).
The 65MB of anonymous maps that your app is using is not terribly uncommon (I’ve seen some use 100+mb). Some solutions include:
Limiting the concurrency of your application (possibly by setting play.pool)
Increasing your dyno size to a 2X dyno.
Decreasing your Xmx setting to 256m. This will give the JVM more room for non-heap allocation.
Please let us know if these solutions do not work for you, or if you continue to experience problems after adopting them.
If you want to reproduce the Heroku environment locally, I recommend installing Docker, and using this Docker image with your application. Let us know if you have any trouble with this.
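Since direct ByteBuffer allocations don't show up in heap dumps, one way to at least see how much memory the JVM has handed out to NIO buffer pools is the BufferPoolMXBean. A rough sketch (it won't capture every kind of native allocation, and the hard cap, if you set one, is controlled by -XX:MaxDirectMemorySize):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryReport {
    public static void main(String[] args) {
        // The "direct" pool covers ByteBuffer.allocateDirect(); "mapped" covers memory-mapped files.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%-8s count=%d used=%,d bytes capacity=%,d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}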

Use Large pages in Jboss

I was going through the JBoss manual where I read about LARGEPAGES.
We have allocated more than 10GB to heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects of doing so.
Is there any tool or utility that will let us know which pages are being created? Basically I want to analyse page-related performance.
What I have done till now is below:
1) Changed the local security policy in Windows, where JBoss is running, to include the user account that will be running JBoss. Do I need to do this where JBoss is running or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything else apart from the above that needs to be done or that I need to take care of?
Machine details
a) Windows Server 8
b) Processor: Intel Xeon
In my opinion, a 10G heap for a web application server like JBoss etc. is very large.
A big heap size means it needs more full GC time. As you know, during a full GC it will freeze the world.
Is there any specific reason to use such a big heap?
If you have a lot of memory in a box, I recommend running a number of JBoss instances on it and load balancing with a reverse proxy. It will be more efficient.
It depends on your OS as to how large pages are handled and whether they really help. For example if you have a processor up to Sandy Bridge, it can have up to 8 large pages in the TLB cache. For the latest Haswell, it can have up to 1024 which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance and there may be an advantage in pre-allocating this memory.
However, I am not aware of any big performance advantage in using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW, 10 GB isn't that big these days. The limit for CompressedOops is around 30 GB, and anything up to that is a medium-sized heap IMHO.
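As for checking whether the flag actually took effect: one option from inside the JVM is the HotSpot diagnostic MXBean. This is only a sketch and it only confirms the JVM-side setting (not whether the OS actually granted large pages), assuming a HotSpot-based JVM:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class LargePagesCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Prints the effective value of the flags as the JVM sees them
        System.out.println("UseLargePages = " + hotspot.getVMOption("UseLargePages").getValue());
        System.out.println("LargePageSizeInBytes = " + hotspot.getVMOption("LargePageSizeInBytes").getValue());
    }
}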

Java/Tomcat heap size question

I am not a Java dev, but an app landed on my desk. It's a web-service server-side app that runs in a Tomcat container. The users hit it up from a client application.
The users constantly complain about how slow it is and the app has to be restarted about twice a week, cause things get really bad.
The previous developer told me that the app simply runs out of memory (as it loads more data over time) and eventually spends all its time doing garbage collection. Meanwhile, the Heap Size for Tomcat is set at 6GB. The box itself has 32GB of RAM.
Is there any harm in increasing the Heap Size to 16GB?
Seems like an easy way to fix the issue, but I am no Java expert.
You should identify the leak and fix it, not add more heap space. That's just a stopgap.
You should configure Tomcat to dump the heap on error, then analyze the heap in any number of tools after a crash. You can compute the retained sizes of all the classes, which should give you a very clear picture of what is wrong.
In my profile I have a link to a blog post about this, since I had to do it recently.
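Dumping the heap on error usually means starting the JVM with -XX:+HeapDumpOnOutOfMemoryError (optionally with -XX:HeapDumpPath=...). If you also want to grab a dump on demand rather than waiting for a crash, the HotSpot diagnostic bean can do it. A minimal sketch (the output path is just an example; the file must not already exist):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumpOnDemand {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Second argument "true" dumps only live (reachable) objects,
        // which is usually what you want for leak analysis.
        hotspot.dumpHeap("/tmp/tomcat-heap.hprof", true);
        System.out.println("Heap dump written to /tmp/tomcat-heap.hprof");
    }
}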
No, there is no harm in increasing the Heap Size to 16GB.
The previous developer told me that the app simply runs out of memory (as it loads more data over time)
This looks like a memory leak, a serious bug in the application. If you increase the amount of memory available from 6 to 16 GiB, you'll still have to restart the application, only less frequently. Some experienced developer should take a look at the application heap while it is running (look at hvgotcodes' tips) and fix the application.
To resolve these issues you need to do performance testing. This includes both CPU and memory analysis. The JDK (6) bundles a tool called VisualVM; on my Mac OS X machine this is on the path by default as "jvisualvm". It's free and bundled, so it's a place to start.
Next up is the NetBeans Profiler (netbeans.org). That does more memory and CPU analysis. It's free as well, but a bit more complicated.
If you can spend the money, I highly recommend YourKit (http://www.yourkit.com/). It's not terribly expensive but it has a lot of built-in diagnostics that make it easier to figure out what's going on.
The one thing you can't do is assume that just adding more memory will fix the problem. If it's a leak, adding more memory may just make it run really badly a bit longer between restarts.
I suggest you use a profiling tool like JProfiler, VisualVM, jConsole, YourKit etc. You can take a heap dump of your application and analyze which objects are eating up memory.

Java Servlet Container on a small VPS

A while back I was using Virtual Private Server (VPS) that had very limited RAM. I used it to host Jetty. It was so slow that it became completely unusable. I believe the main problem was memory-related. I switched the project over to PHP and the problems disappeared.
Needless to say, I'm very hesitant to try Java again in a VPS. Even though the RAM in my VPS is significantly higher, it seems like PHP is streamlined for low RAM. Has anyone tried a VPS with a Servlet container and had a lot of success? Could it have been something simple with my Java config? Is PHP usually the better choice for a small VPS deployment?
ServerFault may be a better place to ask this than here, but in my experience 128MB is dreadfully low. I run a Tomcat instance on a Linode VPS with 1 gig of guaranteed memory and haven't had any issues. The particular site in my case also has very low traffic, so I can't vouch for it under heavy load.
The 'Burst' signifies that your VPS may be given access to more than your 128 megs at times (depending on overall server usage). For a server instance, access to this memory should be considered unreliable, and you're better off assuming the worst-case scenario of only having 128 megs.
In other words, pay more for more memory =)
Edit:
Ask and ye shall receive. Top reports 1025 megs virtual memory, and 416 megs reserved. It's by far the largest memory hog running on my VPS.
