Use large pages in JBoss - java

I was going through the JBoss manual where I read about LARGEPAGES.
We have allocated more than 10 GB to the heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects of doing so.
Is there any tool or utility which will let us know the pages that are being created? Basically I want to analyse page-related performance.
What I have done till now is below:
1) Changed the local security policy in Windows, where JBoss is running, to include the user account which will be running JBoss. Do I need to do this where JBoss is running or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything else apart from the above which needs to be done or that I need to take care of?
Machine details:
a) Windows Server 8
b) Processor: Intel Xeon

In my opinion, a 10 GB heap for a web application server like JBoss is very large.
A big heap means longer full GC times, and as you know, a full GC stops the world.
Is there any specific reason to use such a big heap?
If you have a lot of memory in one box, I recommend running several JBoss instances on it and load balancing them with a reverse proxy. It will be more efficient.

It depends on your OS how large pages are handled and whether they really help. For example, a processor up to Sandy Bridge can hold up to 8 large pages in the TLB cache, while the latest Haswell can hold up to 1024, which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance, and there may be an advantage in pre-allocating this memory.
However, I am not aware of any big performance advantages in using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW 10 GB isn't that big these days. The limit for CompressedOops is around 30 GB, and anything up to that is a medium-sized heap IMHO.
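If you want to check whether the JVM was actually granted large pages, one quick check (a sketch using standard HotSpot flags; the exact output and warning text vary by JVM version) is to print the effective flag values at startup:
java -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -XX:+PrintFlagsFinal -version | findstr /i LargePage
On Windows, if the account running the JVM lacks the "Lock pages in memory" privilege, HotSpot typically falls back to normal pages and prints a warning along the lines of "JVM cannot use large page memory because it does not have enough privilege to lock pages in memory".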

Related

How to profile spring-boot application memory consumption?

I have a spring-boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, taking around 500M of memory until I restart the application. After a fresh restart it takes something like 150M. The spring-boot app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the spring-boot app seems to use 343M of memory (RSS). I took a heap dump of the application and analysed it. According to the analysis the heap dump is only 31M in size. So where does the missing 300M lie? How is the heap dump correlated with the actual memory the application is using? And how could I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How do I discover what is consuming the memory of the spring-boot application?
So where does the missing 300M lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack-option called "head-room" - usually set to 5 to 10% of total available memory in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input-parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - for a recent Spring Boot application, about 19 000. This seems to grow with each Spring version. Note that this is a maximum: setting a value that is too low will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always). A way to measure this on a running application is shown in the jstat example after this list.
thread-count - 40 for a "normal usage" Spring Boot web-application.
jvm-options - see the two parameters below.
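One rough way to pick loaded-class-count (my suggestion, not something from the calculator docs) is to ask a running instance how many classes it has actually loaded, using the JDK's jstat tool, where <pid> is the process id of the running Spring Boot app:
jstat -class <pid>
The "Loaded" column shows the current number of loaded classes; take the value after the application has warmed up and add some margin.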
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64MB. Peak "CodeCache" usage is often during application startup, going from 192 MB to 64 MB saves a lot of memory which can be used as heap. Applications that have a lot of active code during runtime (e.g. a web-application with a lot of endpoints) may need more "CodeCache". If "CodeCache" is too low, your application will use a lot of CPU without doing much (this can also manifest during startup: if "CodeCache" is too low, your application can take a very long time to startup). "CodeCache" is reported by the JVM as a non-heap memory region, it should not be hard to measure.
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
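To see roughly where that -Xmx comes from, you can redo the calculator's arithmetic by hand (approximate; the metaspace figure is the calculator's own estimate based on loaded-class-count):
   total memory (512 MB)                       =  524288 KB
 - head-room (5%)                              ~   26214 KB
 - metaspace estimate for 19 000 classes       =  121289 KB
 - code cache (-XX:ReservedCodeCacheSize)      =   65536 KB
 - direct memory (-XX:MaxDirectMemorySize)     =   10240 KB
 - thread stacks (40 threads x 256 KB -Xss)    =   10240 KB
 = left over for the heap                      ~  290769 KB  (the calculator prints -Xmx290768K)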
Besides heap, you have thread stacks, meta space, JIT code cache, native shared libraries and the off-heap store (direct allocations).
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300MB of stack memory.
Consider making all your thread pools fixed-size (or at least providing reasonable upper bounds). Even if this proves not to be the root cause of what you observed, it makes the app's behaviour more deterministic and will help you isolate the problem better.
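If you are not sure how many threads are actually live, a quick way to count them on a running JVM (a sketch using the standard JDK tools; replace <pid> with your application's process id) is:
jstack <pid> | grep "java.lang.Thread.State" | wc -l
Multiplying that count by your -Xss value gives a rough upper bound on stack memory.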
We can view the memory consumption of a Spring Boot app in the following way.
Create the Spring Boot app as a .jar file and execute it using java -jar springboot-example.jar
Now open a command prompt, type jconsole and hit enter.
Note: before opening jconsole you need to run the .jar file.
The jconsole window will open, and the application that you just ran will appear in the Local Process section.
Select springboot-example.jar and click the Connect button.
It will then show a prompt; choose the Insecure connection option.
Finally you can see the Overview (heap memory, threads, ...).
You can use "JProfiler" https://www.ej-technologies.com/products/jprofiler/overview.html
remotely or locally to monitor running java app memory usage.
You can using "yourkit" with IntelliJ if you are using that as your IDE to troubleshoot memory related issues for your spring boot app. I have used this before and it provides better insight to applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers

Memory consumption for java web app (300MB too high?)

Can I pick your brains about a memory issue?
My Java app, which isn't huge (about 14,000 LOC), is using about 300MB of memory. It's running on Tomcat with a MySQL database. I'm using Hibernate, Spring and Velocity.
It doesn't seem to have any leaks, because it stabilizes at 300MB without growing further. (Also, I've done some profiling.) There's been some concern from my team, however, about the amount of space it's using. Does this seem high? Do you have any suggestions for ways to shrink it?
Any thoughts are appreciated.
Joe
The number of LOC is not an indicator of how much heap a Java app is going to use; there is no correlation from one to the other.
300MB is not particularly large for a server application that is caching data, but it is somewhat large for an application that is not holding any type of cached or session data (but since this includes the webserver itself, 300MB is generally reasonable).
The amount of code (LOCs) rarely has much impact on the memory usage of your application; after all, it's the variables and objects stored that take up most of the memory. To me, 300 megabytes doesn't sound like much, but of course it depends on your specific usage scenario:
How much memory does the production server have?
How many users are there with this amount of memory used?
How much does the memory usage grow per user session?
How many users are you expecting to be concurrently accessing the application in production use?
Based on these, you can do some calculations, e.g. is your production environment ready to handle the number of users you expect, do you need more hardware, do you perhaps need to serialize some data to disk/db, etc.
I can't make any promises, but I don't think you need to worry. We run a couple of web apps at work through Glassfish, using Hibernate as well, and each uses about 800-900MB in dev; we will often have two domains running, each of that size.
If you do need to reduce your footprint, at least make sure you are using Velocity 1.6 or higher. 1.5 wasted a fair bit of memory.

Java/Tomcat heap size question

I am not a Java dev, but an app landed on my desk. It's a web-service server-side app that runs in a Tomcat container. The users hit it up from a client application.
The users constantly complain about how slow it is, and the app has to be restarted about twice a week because things get really bad.
The previous developer told me that the app simply runs out of memory (as it loads more data over time) and eventually spends all its time doing garbage collection. Meanwhile, the Heap Size for Tomcat is set at 6GB. The box itself has 32GB of RAM.
Is there any harm in increasing the Heap Size to 16GB?
Seems like an easy way to fix the issue, but I am no Java expert.
You should identify the leak and fix it, not add more heap space. That's just a stopgap.
You should configure Tomcat to dump the heap on error, then analyze the heap in one of any number of tools after a crash. You can compute the retained sizes of all the classes, which should give you a very clear picture of what is wrong.
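A minimal way to get that dump automatically is the standard HotSpot pair of options below (a sketch; add them to CATALINA_OPTS or wherever your Tomcat picks up JVM arguments, and /path/to/dumps is just a placeholder directory):
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps
The resulting .hprof file can be opened in a heap analyzer such as Eclipse MAT or VisualVM and sorted by retained size.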
In my profile I have a link to a blog post about this, since I had to do it recently.
No, there is no harm in increasing the Heap Size to 16GB.
The previous developer told me that the app simply runs out of memory (as it loads more data over time)
This looks like a memory leak, a serious bug in the application. If you increase the amount of memory available from 6 to 16 GiB, you will still have to restart the application, only less frequently. Some experienced developer should take a look at the application heap while it is running (see hvgotcodes' tips) and fix the application.
To resolve these issues you need to do performance testing. This includes both CPU and memory analysis. The JDK (6) bundles a tool called VisualVM; on my Mac OS X machine this is on the path by default as "jvisualvm". It's free and bundled, so it's a place to start.
Next up is the NetBeans Profiler (netbeans.org). That does more memory and CPU analysis. It's free as well, but a bit more complicated.
If you can spend the money, I highly recommend YourKit (http://www.yourkit.com/). It's not terribly expensive but it has a lot of built-in diagnostics that make it easier to figure out what's going on.
The one thing you can't do is assume that just adding more memory will fix the problem. If it's a leak, adding more memory may just make it run really badly a bit longer between restarts.
I suggest you use a profiling tool like JProfiler, VisualVM, jConsole, YourKit etc. You can take a heap dump of your application and analyze which objects are eating up memory.
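For example, you can capture a heap dump from the running JVM with the JDK's jmap tool (a sketch; <pid> is the process id of the Tomcat JVM) and open it in any of the tools above:
jmap -dump:live,format=b,file=/tmp/tomcat-heap.hprof <pid>
The live option triggers a full GC first, so the dump only contains objects that are still reachable.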

Application Servers (Java): Should adding RAM to a server depend on each domain's -Xmx value?

We have the Glassfish application server running on Linux servers.
Each Glassfish installation hosts 3 domains. Each domain has a JVM configuration such as -Xms 1 GB and -Xmx 2 GB. That means that if all three domains are running at max memory, the server should be able to allocate a total of 6 GB to the JVMs.
With that math, each of our servers has 8 GB RAM (a 2 GB buffer).
First of all - is this a good approach? I did not think so, because when we analyzed memory utilization on this server over the past few months, it was only up to about 1 GB.
Now there are requests to add an additional domain to these servers - does that mean adding an additional 2 GB of RAM just to be safe, or, based on the trend, continuing with whatever memory the server has?
A few rules of thumb:
You really want the sum of your -Xmx values and the RAM needed for whatever else must run on the box constantly (including the OS) to be lower than the physical RAM available; see the worked example after this list. Otherwise, something will be swapped to disk, and when that "something" needs to run, things will slow down dramatically. If things go constantly in and out of swap, nothing will perform.
You might be able to get by with a lower -Xmx on some of your application servers (your question seems to imply some of your JVMs have too much RAM allocated). I believe Tomcat can start with as little as 64 MB of -Xmx, but many applications will run out of memory pretty quickly with that setup. Tuning your memory allocation and the garbage collector might be worth it. Frankly, I'm not very up to date with GC performance lately (I haven't had to tune anything to get decent performance in 4-5 years), so I don't know how valuable that will be. GC pauses used to be bad, and bigger heaps meant longer pauses...
Any memory you spare will not be "wasted". The OS will use it for disk cache and that might be a benefit.
In any case, the answer is normally... you need to run tests. Being able to run stress tests is really invaluable, and I suggest you spend some time writing one and running it. It will allow you to make educated decisions in this matter.
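As a worked example with the numbers from the question (illustrative only):
  3 existing domains x 2 GB -Xmx   = 6 GB
  + 1 new domain x 2 GB -Xmx       = 8 GB of potential heap
  physical RAM                     = 8 GB
On worst-case math that leaves nothing for the OS, the JVMs' own non-heap memory or the disk cache, so you would either add RAM or, given that observed usage has only been around 1 GB, lower -Xmx on the existing domains instead.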

Tomcat memory and virtual hosts

I have a VPS on which I serve Tomcat 6.0.
500 MB of memory was enough when I had only two applications.
Last week I deployed another web application and created a new virtual host by editing Tomcat's server.xml.
But the server's responses slowed down and Linux started to eat into swap.
When I increased the memory to 750 MB it became stable again.
But memory is not so cheap, so I won't be happy to pay for 250 MB of RAM for each additional application.
Is an additional 250 MB of memory for each web app normal?
Is there any solution to decrease this cost?
For example, does putting the common libraries of these applications into Tomcat's shared folder have a positive impact on Tomcat memory and performance?
Note: The deployed applications are web applications that use Spring, Hibernate, SiteMesh and related libraries; the WAR file sizes total up to 30 MB.
Thanks.
It's unlikely that this memory is being consumed by the Spring / Hibernate / etc. classes themselves, so the size of their .jar files isn't going to matter much. Putting these libraries in Tomcat's shared library directory would help save a bit of memory, in that only one copy of these classes would be loaded, but that won't save much.
Isn't it simply more likely that your applications are just using this much memory for data and so forth? You need to use a profiler to figure out what is consuming the heap memory first. Until you know the problem, there's not much use in pursuing particular solutions.
I think you need to measure the memory consumption of the application.
You can use JProfiler or the Java 6 built-in profiling tools such as VisualVM and jhat (the best way to start).
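For instance, with just the JDK tools you can snapshot the heap and browse it in a web page (a sketch; <pid> is the Tomcat process id, and jhat serves its report on port 7000 by default):
jmap -dump:format=b,file=heap.bin <pid>
jhat heap.bin
Then open http://localhost:7000 and look at which classes hold the most instances.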
Check this link for more info:
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
