I have a VPS on which I run Tomcat 6.0.
500 MB of memory was enough when I had only two applications.
Last week I deployed another web application and created a new virtual host by editing Tomcat's server.xml.
But the server's responses slowed down and Linux started eating into swap.
When I increased the memory to 750 MB it became stable again.
But memory is not cheap, so I won't be happy to pay for an extra 250 MB of RAM for each additional application.
Is needing roughly 250 MB of additional memory for each web app normal?
Is there any solution to decrease this cost?
For example, would putting the common libraries of these applications into Tomcat's shared folder have a positive impact on memory usage and performance?
Note: the deployed applications use Spring, Hibernate, SiteMesh and related libraries; the WAR files total about 30 MB.
Thanks.
It's unlikely that this memory is being consumed by the Spring / Hibernate / etc. classes themselves, so the size of their .jar files isn't going to matter much. Putting these libraries in Tomcat's shared library directory would save a bit of memory, in that only one copy of these classes would be loaded, but that won't save much.
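If you do want to try the shared-library route, here is a minimal sketch for Tomcat 6 (assuming the stock conf/catalina.properties; the directory name below is arbitrary):
# Move the jars used by all webapps out of each WEB-INF/lib into one shared directory:
$ mkdir -p $CATALINA_HOME/shared/lib
$ mv spring-*.jar hibernate-*.jar sitemesh-*.jar $CATALINA_HOME/shared/lib/
# Then point the shared class loader at that directory in conf/catalina.properties:
#   shared.loader=${catalina.home}/shared/lib/*.jar
Remember that each webapp must then stop bundling those jars, otherwise the duplicate copies defeat the purpose.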
Isn't it simply more likely that your applications are just using this much memory for data and so forth? You need to use a profiler to figure out what is consuming the heap first. Until you know the problem, there's not much point in pursuing particular solutions.
I think you need to measure the memory consumption of the application.
You can use JProfiler, or the profiling tools built into Java 6 such as VisualVM and JHat (the best way to start).
Check this link for more info:
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
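For example, a minimal command-line session with the JDK 6 tools (the <pid> and file name below are placeholders for your Tomcat process):
# Take a heap dump of the running Tomcat:
$ jmap -dump:live,format=b,file=tomcat-heap.bin <pid>
# Browse the dump in a web UI (jhat serves it on http://localhost:7000 by default):
$ jhat tomcat-heap.bin
VisualVM can also open the same dump and makes it easier to sort classes by size.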
I have a spring-boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching around 500M before I restart the application. After a fresh restart it takes something like 150M. The spring-boot app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the spring-boot app seems to use 343M of memory (RSS). I took a heap dump of the application and analysed it. According to the analysis, the heap dump is only 31M in size. So where does the missing 300M lie? How does the heap dump correlate with the actual memory the application is using? And how could I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How can I discover what is consuming the memory of the spring-boot application?
So where does the missing 300M lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack-option called "head-room" - usually set to 5 to 10% of total available memory in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs four additional input parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - about 19,000 for a recent Spring Boot application. This seems to grow with each Spring version. Note that this is a maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web-application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256kb. Unless your application has really deep stacks (recursion), going from 1 MB to 256kb per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64MB. Peak "CodeCache" usage is often during application startup, and going from 192 MB to 64 MB saves a lot of memory that can be used as heap. Applications with a lot of active code at runtime (e.g. a web application with many endpoints) may need more "CodeCache". If "CodeCache" is too small, your application will use a lot of CPU without doing much (this can also show up during startup: the application can take a very long time to start). "CodeCache" is reported by the JVM as a non-heap memory region, so it should not be hard to measure, as sketched below.
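For example, a quick way to see real CodeCache usage (assuming a reasonably recent HotSpot JVM where -XX:+PrintCodeCache is available; app.jar is a placeholder for your jar):
# Prints code cache size and usage statistics when the JVM exits:
$ java -XX:ReservedCodeCacheSize=64M -XX:+PrintCodeCache -jar app.jar
jconsole and VisualVM also show "Code Cache" as one of the non-heap memory pools.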
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides the heap, you have thread stacks, metaspace, the JIT code cache, native shared libraries and the off-heap store (direct allocations).
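One way to get a per-region breakdown is the JVM's Native Memory Tracking - a sketch, assuming you can restart the app with an extra flag (app.jar and the pgrep pattern are placeholders for your setup):
# Start the app with Native Memory Tracking enabled (small overhead):
$ java -XX:NativeMemoryTracking=summary -jar app.jar
# From a second shell, print reserved/committed sizes for heap, metaspace,
# thread stacks, code cache and internal allocations:
$ jcmd $(pgrep -f app.jar) VM.native_memory summary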
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300MB of stack memory.
Consider making all your thread pools fixed-size (or at least giving them reasonable upper bounds). Even if this proves not to be the root cause of what you observed, it makes the application's behaviour more deterministic and will help you isolate the problem.
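To get a rough feel for the stack contribution, a sketch (the pgrep pattern is a placeholder; thread count times default stack size is an upper bound, not an exact figure):
# Count the Java threads of the running app:
$ jstack $(pgrep -f app.jar) | grep -c 'java.lang.Thread.State'
# If your call depths allow it, cap the per-thread stack below the usual 1 MB default:
$ java -Xss256k -jar app.jar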
You can view the memory consumption of a Spring Boot app this way:
Build the Spring Boot app as a .jar file and run it with java -jar springboot-example.jar
Now open a terminal, type jconsole and hit Enter.
Note: the .jar file must already be running before you open jconsole.
jconsole opens a window that lists the application you just started in the Local Process section.
Select springboot-example.jar and click the Connect button.
It will then show a prompt with an Insecure connection option.
Finally you can see the Overview (heap memory, threads, ...).
You can use "JProfiler" https://www.ej-technologies.com/products/jprofiler/overview.html
remotely or locally to monitor running java app memory usage.
You can use YourKit with IntelliJ IDEA, if that is your IDE, to troubleshoot memory-related issues in your Spring Boot app. I have used it before and it gives good insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers
I'm having difficulties understanding / ignoring the current memory commitments (virtual memory) of my JVMs.
For better context:
We're running a closed-source Java application. We just host/run the application by deploying the delivered .war files, but overall we are responsible for running it (this arrangement, as obscure as it is, is non-negotiable ;) ).
The OS is RHEL 7 with Tomcat 8. We're running several Tomcats with an Apache 2.2 in front of them as a load balancer.
We experience massive memory swapping. It seems like every time we give the server VM more RAM it is immediately consumed by the Tomcats. Currently we run 7 Tomcats with a total of 48 GB of RAM. The plan is to upgrade to 64 GB or even higher, but I fear this won't change a thing given Tomcat's hunger for memory...
For security reasons I've blacked out some internal paths/IPs/ports etc.
I do know that the memory consumption of a JVM consists of more than just the assigned heap space, but in my case Xmx is set to 4096m and committed memory is about 10 GB - as seen in the jconsole screenshot, and also confirmed by top on the server.
And this just doesn't seem right. I mean - more than 2x the heap space?
I've also read some memory-related Java material, but as stated above the application is closed source, which doesn't give me much leeway for debugging or trying things out.
And since I am relatively new to Tomcat I figured I might as well turn to a more experienced community :)
So here are the main questions, which came up:
Is there a certain setting I am missing which definitely caps the maximum committed memory size for a Tomcat?
Is the behaviour we're experiencing normal?
How should I talk to the devs about this problem? We've already taken heap dumps, but they weren't as telling as we hoped. The heap is correctly sized and is almost always at around 50% utilization.
Am I missing something here?
I know it's hard to tell from a distance, especially if no source code can be provided.
But any help is appreciated :)
Thanks!
Alex
We are running WebLogic and appear to have a memory leak - we eventually run out of heap space.
We have 5 apps (5 war deployments) on the server.
Can you think of a way to gather memory usage on a per-application basis?
(Then we can concentrate our search by looking through the code in the appropriate app.)
I have run jmap to get a heap dump and loaded the results in jvisualvm but it's unclear where the bulk of objects have come from - for example Strings.
I was thinking that WebLogic perhaps uses a separate classloader per application, so we may be able to figure something out via that route...
Try using Eclipse MAT; it gives hints about memory leaks, among other features.
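As a starting point, a rough sketch for getting per-application numbers (assuming the standard JDK jmap tool, and that WebLogic does use one classloader per deployment as you suspect):
# Dump the WebLogic server JVM's heap (<pid> and the file name are placeholders):
$ jmap -dump:live,format=b,file=weblogic-heap.hprof <pid>
Open the dump in MAT, take the histogram and group it by class loader; the retained size under each application's classloader is roughly that application's share of the heap.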
I was going through the JBoss manual, where I read about large pages.
We have allocated more than 10GB to heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects.
Is there any tool or utility that will let us know which pages are being created? Basically I want to analyse page-related performance.
What I have done so far is below:
1) Changed the local security policy in Windows, where JBoss is running, to include the user account that will be running JBoss. Do I need to do this on the machine where JBoss is running or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything else apart from the above that needs to be done or that I need to take care of?
Machine details
a) Windows Server 8
b) Processor: Intel Xeon
In my opinion a 10 GB heap for a web application server like JBoss is very large.
A big heap means longer full GC times, and as you know a full GC freezes the world.
Is there a specific reason to use that much memory?
If you have a lot of memory in one box, I recommend running several JBoss instances on it and load balancing them with a reverse proxy. It will be more efficient.
It depends on your OS how large pages are handled and whether they really help. For example, a processor up to Sandy Bridge can have up to 8 large pages in the TLB cache; the latest Haswell can have up to 1024, which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance and there may be an advantage in pre-allocating this memory.
However, I am not aware of any big performance advantage in using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW, 10 GB isn't that big these days. The limit for CompressedOops is around 30 GB, and anything up to that is a medium-sized heap IMHO.
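If you do experiment with them, one way to check what the JVM actually settled on is a sketch like this (use findstr instead of grep on Windows; the JVM also prints a startup warning if it cannot lock pages in memory):
# Show the final values of the large-page related flags:
$ java -XX:+UseLargePages -XX:+PrintFlagsFinal -version | grep -i largepage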
I develop web applications with JBoss 4.0.2, and when I have redeployed my WAR several times from Eclipse, JBoss crashes because it runs out of memory. When I have to install a new version in the production environment, redeployment consumes the production server's memory, which means I have to stop JBoss to keep redeploying from eating memory on the customer's server. Is there any workaround for this problem?
Basically, no. Because of the way the JBoss classloaders work, each deployment will use up a chunk of PermGen that will not be released even if the application is undeployed.
You can mitigate the symptoms by ramping up the PermGen memory pool size to several hundred megs (or even gigs), which makes the problem easier to live with. I've also found that reducing the usage of static fields in your code (especially static fields that refer to large objects) reduces the impact on PermGen.
Ideally, I would not use hot deployment in production, but rather shut the server down, replace the WAR/EAR, then restart it.
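If you want to confirm what is pinning PermGen, a quick check on a Sun/Oracle JDK 6/7 (where jmap still supports -permstat; <pid> is the JBoss process id) looks like this:
# Lists each classloader with its loaded class count and whether it is alive or dead;
# piles of "dead" loaders left over from old deployments are the classic redeploy-leak signature:
$ jmap -permstat <pid>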
I'm not sure it's linked, but I suspect it is - JBoss, as it comes out of the box, is not J2EE compliant as far as application separation is concerned.
Out of the box there is one classloader into which all classes are put, so it is not possible to unload classes, and therefore you are going to have this problem. You can configure JBoss to be more J2EE compliant in this respect.
Are you getting an "out of memory: PermGen" error or a regular out-of-memory error?
I also made progress by connecting JProfiler to it and checking memory usage that way.
I ended up simply restarting JBoss all the time - it didn't take up too much time.
Try this (which applies to Sun's Java):
-XX:+UseConcMarkSweepGC
-XX:+CMSPermGenSweepingEnabled
-XX:+CMSClassUnloadingEnabled
-XX:MaxPermSize=128m
CMS can actually GC the permanent generation heap (the heap where your classes are). Setting MaxPermSize is unnecessary, but the default is low for an application server.
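For JBoss 4 these flags usually end up in JAVA_OPTS, for example in bin/run.conf (Linux) or bin/run.bat (Windows) - a sketch assuming the stock run scripts:
# bin/run.conf
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled \
    -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128m"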