Memory consumption for Java web app (300MB too high?)

Can I pick your brains about a memory issue?
My Java app, which isn't huge (about 14,000 LOC), is using about 300MB of memory. It's running on Tomcat with a MySQL database, and I'm using Hibernate, Spring, and Velocity.
It doesn't seem to have any leaks, because it stabilizes at 300MB without growing further. (Also, I've done some profiling.) There's been some concern from my team, however, about the amount of space it's using. Does this seem high? Do you have any suggestions for ways to shrink it?
Any thoughts are appreciated.
Joe

The number of LOC is not an indicator of how much heap a Java app is going to use; there is no correlation from one to the other.
300MB is not particularly large for a server application that is caching data, but it is somewhat large for one that is not holding any kind of cached or session data (though since this figure includes the web server itself, 300MB is generally reasonable).

The amount of code (LOC) rarely has much impact on the memory usage of your application; after all, it's the variables and objects stored that take up most of the memory. To me, 300 megabytes doesn't sound like much, but of course it depends on your specific usage scenario:
How much memory does the production server have?
How many users are there with this amount of memory used?
How much does the memory usage grow per user session?
How many users are you expecting to be concurrently accessing the application in production use?
Based on these, you can do some calculations, e.g. whether your production environment is ready to handle the number of users you expect, whether you need more hardware, or whether you need to serialize some data to disk/DB, etc.
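To put a rough number on the per-session growth question above, here is a minimal sketch; createSessionData() is a hypothetical stand-in for whatever your app actually keeps per session, and System.gc() is only a hint to the JVM, so treat the output as a ballpark figure rather than a benchmark:

    // Rough estimate of per-session heap growth: measure used heap before
    // and after building N representative sessions. Not a benchmark.
    import java.util.ArrayList;
    import java.util.List;

    public class SessionFootprint {

        static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            System.gc(); // only a hint; good enough for a rough estimate
            return rt.totalMemory() - rt.freeMemory();
        }

        // Hypothetical stand-in: replace with your real per-session state.
        static Object createSessionData() {
            return new byte[16 * 1024]; // pretend each session holds ~16KB
        }

        public static void main(String[] args) {
            final int n = 1_000;
            long before = usedHeap();
            List<Object> sessions = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                sessions.add(createSessionData());
            }
            long after = usedHeap();
            // Referencing the list keeps it live until after the measurement.
            System.out.printf("~%d bytes per session (n=%d)%n",
                    (after - before) / n, sessions.size());
        }
    }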

I can't make any promises, but I don't think you need to worry. We run a couple of web apps at work on GlassFish, using Hibernate as well; each uses about 800-900MB in dev, and we'll often have two domains of that size running.

If you do need to reduce your footprint, at least make sure you are using Velocity 1.6 or higher. 1.5 wasted a fair bit of memory.

Related

Spring Boot embedded Tomcat server takes over 800 MB RAM?

I'm working on a Spring Boot application that uses the embedded Tomcat server. The application takes more than 800MB of RAM. Is that common? Is there any way to bring memory usage down?
The amount of memory consumed by your Tomcat depends entirely on what your application requires.
You need to do some sort of memory profiling of your application.
Is that common?
Yes, it could be. It all depends on your application, the way you create objects, and the amount of memory your objects use.
You can start by setting -Xms to 1GB, then run your application and perform normal operations.
Use tools like JVisualVM or JConsole to observe the heap size, GC performance, and even the amount of memory consumed by different types of objects in the JVM.
This will give you an initial idea of the amount of heap your application requires.
After this, use a tool like JMeter to load-test your application and check how the load affects your heap usage.
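For a quick in-process view of the same numbers those tools show, here is a small sketch using only the standard java.lang.management API (no assumptions beyond the JDK itself):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapReport {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap: used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            // Per-pool breakdown: eden, survivor, old generation, metaspace, ...
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                if (u != null) {
                    System.out.printf("%-35s used=%dMB%n", pool.getName(), u.getUsed() >> 20);
                }
            }
        }
    }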
Suggested Reading:
http://blog.manupk.com/2012/09/java-memory-model-simplified.html
This is pretty common; Java VMs are fairly heavy. Look at the JVM startup flags, which will tell you how large the heap can grow (you may see something like -Xmx768m, which caps the heap at 768MB). You might try setting the CATALINA_OPTS environment variable, e.g. CATALINA_OPTS=-Xmx512m, but if the Spring Boot script that starts the VM overrides this, you will have to track down the value being set in that script. However, the default value generally works well and will keep the JVM from throwing out-of-memory errors if you start instantiating many or large (read: Hibernate) objects that take a while to be garbage collected.
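To make the flag placement concrete: for a standalone Tomcat the conventional spot is bin/setenv.sh, while a Spring Boot executable jar runs embedded Tomcat inside your own JVM, so CATALINA_OPTS is not read and the flags go on the java command line. The sizes below are illustrative, not recommendations:

    # Standalone Tomcat: create bin/setenv.sh if it doesn't exist;
    # catalina.sh sources it on startup.
    CATALINA_OPTS="-Xms256m -Xmx512m"

    # Spring Boot executable jar: pass the flags directly to the JVM.
    java -Xms256m -Xmx512m -jar app.jar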
Is there any way to bring memory usage down?
There are two approaches:
You can attempt to "squeeze" the heap size. That is not recommended, as it leads to the JVM spending a larger percentage of the CPU in the GCs, more frequent GC pauses, and ultimately OOMEs.
This approach often doesn't work at all; i.e. it just causes the application to die quicker.
You can figure out why your application is using so much memory. It could be due to many things:
The problem your application is solving may simply be too big.
Your application could be "bloated" with all sorts of unnecessary libraries, features, etc.
Your in-memory data structures could be poorly designed.
Your application could be caching too much in memory.
Your application could have memory leaks.
I agree with @cowbert's advice. Use performance-monitoring tools to try to track down what is using most of the JVM's memory. If there is a memory leak, it often shows up as an unexpectedly large amount of memory used by certain kinds of objects.
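To make the leak bullet above concrete, here is the classic pattern in miniature; in a heap histogram it shows up as an ever-growing population of map entries and cached values. The cache and its sizes are purely illustrative:

    import java.util.HashMap;
    import java.util.Map;

    public class LeakyCache {
        // The leak: a static map that is only ever added to, so entries
        // stay strongly reachable and can never be garbage collected.
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        static byte[] lookup(String key) {
            // Hypothetical "expensive" load, cached with no eviction policy.
            return CACHE.computeIfAbsent(key, k -> new byte[32 * 1024]);
        }

        public static void main(String[] args) {
            // Unique keys mean unbounded growth; this eventually dies with an OOME.
            for (long i = 0; ; i++) {
                lookup("request-" + i);
            }
        }
    }

The usual fix is a bounded cache with an eviction policy (for example LinkedHashMap's removeEldestEntry, or a dedicated caching library) rather than a bare static map.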

Use Large pages in Jboss

I was going through the JBoss manual where I read about LARGEPAGES.
We have allocated more than 10GB to the heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects of doing this?
Is there any tool or utility which will let us know the pages that are being created? Basically, I want to analyse page-related performance.
What I have done till now is below:
1) Changed the local security policy on the Windows machine where JBoss is running to include the user account that will be running JBoss. Do I need to do this where JBoss is running or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything else apart from the above that needs to be done or that I need to take care of?
Machine details
a) Windows Server 8
b) Processor: Intel Xeon
In my opinion, a 10GB heap for a web application server like JBoss is very large.
A big heap means longer full GC times, and as you know, a full GC stops the world.
Is there any specific reason to use such a big heap?
If you have a lot of memory in one box, I recommend running a number of JBoss instances on it and load balancing with a reverse proxy. That will be more efficient.
It depends on your OS how large pages are handled and whether they really help. For example, a processor up to Sandy Bridge can hold up to 8 large pages in the TLB cache; the latest Haswell can hold up to 1024, which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance, and there may be an advantage in pre-allocating this memory.
However, I am not aware of any big performance advantages to using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW, 10GB isn't that big these days. The limit for CompressedOops is around 30GB, and anything up to that is a medium-sized heap IMHO.

What is the maximum object size before GAE throws Heap overflow error

I never thought about this until now; I have been using GAE for quite some time already, but never thought about its memory model. Since its JVM is a given, I can't say which JVM or which version of the JVM they are using.
So my question is: when will GAE throw a heap overflow error? Or at least, what would break my app, or whatever GAE will do; I don't know.
For example, say I push a String to its limit and store data of size 2^31 - 1.
Design-wise: I know this is crazy, but the idea is the same as having millions or billions of users pushing data into your GAE application, where your application's job is to process it (serialize/deserialize) before persisting it.
The sum of those heaps would be huge; it might not all happen at the same time, but there will surely be a tipping point where heap use becomes huge.
Is this something that a GAE application has to be concerned with?
You can read more about Adjusting Application Performance for your running application based on your needs, and from the same link you can see the memory and CPU that each frontend class gets.
You need to code it so you will never run out of memory, independent of user load. If you allow multithreading, your instance might get reused, and you need to take that into account. If memory, CPU, or queue load gets too high, App Engine will automatically launch more instances, each with its own RAM as specified in the app settings (128MB, 256MB, etc.).
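A minimal sketch of that "never run out of memory regardless of load" idea: process the payload as a bounded stream instead of buffering it whole. processChunk() and the 1MB cap are hypothetical placeholders for your own logic and limits:

    import java.io.IOException;
    import java.io.InputStream;

    public class StreamingHandler {
        // Illustrative cap: reject anything over 1MB instead of buffering it.
        static final int MAX_PAYLOAD = 1 << 20;

        static void handle(InputStream in) throws IOException {
            byte[] buf = new byte[8192];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
                if (total > MAX_PAYLOAD) {
                    throw new IOException("payload too large"); // fail fast, memory stays bounded
                }
                processChunk(buf, n);
            }
        }

        // Hypothetical incremental processing: feed a streaming parser,
        // write through to the datastore, etc.
        static void processChunk(byte[] buf, int len) {
        }
    }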
An application on GAE is distributed across many, many instances. If you're handling millions of simultaneous users, you'll probably be running thousands of instances. Each instance has its own RAM (stack + heap space).
Your total heap may be huge, but at any one point you only need to manage the heap for the requests running on a particular instance, which can only handle a fairly limited number of requests at once. For the memory sizes of the different instance types, refer to:
https://developers.google.com/appengine/docs/adminconsole/performancesettings?hl=en

How to reduce memory consumption in Java application server

I have a Java application server which processes data from network clients. The problem I face is the memory consumption of the Java application. What are effective techniques for reducing memory consumption in Java applications? Would you please share some experience?
Use a profiler, identify where the memory is consumed, and optimize what should be optimized.
It could be settings (number of threads in the pool, etc.), algorithms, data structures, database queries. It's impossible to know without knowing the internals of the application.
And even then, guessing has a good chance of giving incorrect results. Measuring and analyzing is the best thing to do.
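One concrete way to gather data for that analysis, sketched under the assumption that you are on a HotSpot JVM: trigger a heap dump programmatically and open the resulting .hprof file in a profiler such as Eclipse MAT or VisualVM:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.io.IOException;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        public static void main(String[] args) throws IOException {
            HotSpotDiagnosticMXBean diag =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // live=true dumps only reachable objects, which is what you want
            // when hunting retained memory rather than allocation churn.
            diag.dumpHeap("app-heap.hprof", true);
            System.out.println("wrote app-heap.hprof");
        }
    }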

Can Sun JVM handle gigantic heap sizes without problems, and how?

I have heard several people claim that you cannot scale the JVM heap size up. I've heard claims of the practical limit being 4 gigabytes (I heard an IBM consultant say that), 10 gigabytes, 32 gigabytes, and so on... I simply cannot believe any of those numbers and have been wondering about the issue for a while now.
So, I have a three-part question that I hope someone with experience can answer:
Given the following case how would you tune the heap and GC settings?
Would there be noticeable hiccups (JVM pauses, etc.) that end users would notice?
Should this really still work? I think it should.
The case:
64 bit platform
64 cores
64 gigabytes of memory
The application server is client-facing (i.e. a JBoss/Tomcat web application server) - complete JVM pauses would probably be noticed by end users
Sun JVM, probably 1.5
To prove I am not asking you guys to do my homework this is what I came up with:
-XX:+UseConcMarkSweepGC -XX:+AggressiveOpts -XX:+UnlockDiagnosticVMOptions -XX:-EliminateZeroing -Xmn768m -Xmx55000m
CMS should reduce the amount of pauses, although it comes with overhead. The other settings for CMS seem to default automatically to the number of CPUs so they seem sane to me. The rest that I added are extras that might do good or bad generally for performance, and they should probably be tested.
Definitely.
I think it's going to be difficult for anybody to give you anything more than general advice, without having further knowledge of your application.
What I would suggest is that you use VisualGC (or the VisualGC plugin for VisualVM) to actually look at what the garbage collection is doing when your app is running. Once you have a greater understanding of how the GC is working alongside your application, it'll be far easier to tune it.
#1. Given the following case how would you tune the heap and GC settings?
First, having 64 gigabytes of memory doesn't imply that you have to use it all for one JVM. Actually, it rather means you can run many of them. Then, it is impossible to answer your question without access to your machine and application to measure and analyse things (knowing what your application is doing isn't enough). And no, I'm not asking for access to your environment :)
#2. Would there be noticeable hickups (pauses of JVM etc) that would be noticed by the end users?
The goal of tuning is to find a good compromise between frequency and duration of (major) GCs. With a ~55g heap, GC won't be frequent but will take noticeable time, for sure (the bigger the heap, the longer the major GC). Using a Parallel or Concurrent garbage collector will help on multiprocessor systems but won't entirely solve this issue. Why do you need ~55g (this is mega ultra huge for a webapp IMO), that's my question. I'd rather run many clustered JVMs to handle load if required (at some point, the database will become the bottleneck anyway with a data oriented application).
#3. Should this really still work? I think it should.
Hmm... not sure I get the question. What is "this"? Instantiating a JVM with a big heap? Yes, it should. Is it equivalent to running several JVMs? No, certainly not.
PS: 4G is the maximum theoretical heap limit for the 32-bit JVM running on a 64-bit operating system (see Why can't I get a larger heap with the 32-bit JVM?)
PPS: On 64-bit VMs, you have 64 bits of addressability to work with resulting in a maximum Java heap size limited only by the amount of physical memory and swap space your system provides. (see How large a heap can I create using a 64-bit VM?)
Obviously the heap size is not unlimited, and the larger the heap size, the more time your JVM will eventually spend on GC. Though I think it is possible to set the heap size quite high on a 64-bit JVM, I still think it's not really practical. The advice here is that it's better to have several JVMs running with the same parameters, i.e. a cluster of JBoss/Tomcat nodes running on the same physical machine; you will get better throughput.
EDIT: Also your GC behavior depends on the taxonomy of your heap. If you have a lot of short-living objects and each request to the server creates a lot of those, then your GC will collect a lot of garbage very often and thus on large heap size this will result in longer pauses. If you have very many long-living objects (e.g. caching most of your data in memory) and the amount of short-living objects is not that big, then having bigger heap size is OK.
As Chris Rice already wrote, I wouldn't expect any obvious problems with the GC for heap sizes up to 32-64GB, although there may of course be some aspect of your application logic that causes problems.
Not directly related to GC, but I would still recommend that you perform a realistic load test on your production system. I used to work on a project where we had a similar setup (a relatively large, clustered JBoss/Tomcat setup serving a public web application), and without exaggeration, JBoss does not behave very well under high load or with a high number of concurrent calls if you are using EJBs. JBoss spends a lot of time in synchronized blocks when accessing and managing the EJB instance pools, and if you opt for a cluster, it will even wait for intra-cluster network communication within these synchronized blocks. Be especially aware of poorly performing state replication if you are using SFSBs.
Only to add some more switches I would use by default: -Xms55g can help reduce the ramp-up time because it frees Java from the need to check whether it can fall back to the initial size, and it also allows better internal initial sizing of memory areas.
Additionally, we had good experiences with NewSize, which gives you a large young generation to get rid of short-term garbage: -XX:NewSize=1g. Most webapps create a lot of short-lived garbage that never survives request processing, so you can even make that bigger. Note that with -Xms55g the VM already reserves a large chunk, so maybe downsizing can help.
-Xincgc helps to clean the young generation incrementally and return the cpu often to the user threads.
-XX:CMSInitiatingOccupancyFraction=70 If you really fill all that memory, try to start CMS garbage collection earlier.
-XX:+CMSIncrementalMode puts the CMS into incremental mode to return the cpu to the user threads more often.
Attach to the process with jstat -gc -h 10 <pid> 1s and watch the GC working.
Will you really fill up the memory? I assume that 64 CPUs for request processing might even be able to work with less memory. What do you store in there?
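If you'd rather sample those GC numbers from inside the JVM than via jstat, here is a small sketch using the standard GarbageCollectorMXBeans, which expose cumulative collection counts and times:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%-25s count=%d time=%dms%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(1_000); // roughly the same cadence as jstat's 1s interval
            }
        }
    }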
Depending on your GC pause analysis, you may wish to implement Incremental mode whereby the long pause may be broken out over a period of time.
I have found that memory architecture plays a part at large memory sizes. Applications in general don't perform as well if they use more than one memory bank. The JVM appears to suffer as well, especially the GC, which has to sweep the whole memory.
If you have an application which doesn't fit into one memory bank, your application has to pull in memory which is not local to a processor and use memory local to another processor.
On linux you can run numactl --hardware to see the layout of processors and memory banks.
