How to reduce memory consumption in a Java application server

I have a Java application server that processes data from network clients. The problem I face is the memory consumption of the application. What are effective techniques for reducing memory consumption in Java applications? Could you share some experience?

Use a profiler, identify where the memory is consumed, and optimize what should be optimized.
It could be settings (number of threads in the pool, etc.), algorithms, data structures, database queries. It's impossible to know without knowing the internals of the application.
And even then, guessing has a good chance of giving incorrect results. Measuring and analyzing is the best thing to do.
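If you want a quick first look before reaching for a full profiler, the JDK's built-in management beans can report heap usage from inside the application. A minimal sketch (the sampling interval and output format are my own choices, not anything your server requires):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // Shift by 20 bits to convert bytes to MB for readability.
                System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(10_000); // sample every 10 seconds
            }
        }
    }

Numbers like these tell you whether the heap is actually growing or just committed up front; a real profiler then tells you which objects account for it.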

Related

Selecting a JVM for an embedded streaming device

We're developing a device that's basically a Raspberry Pi that reads file data, processes it, and streams data out of a USB device at a given frame rate.
Due to the nature of the features we're using, we can't totally eliminate garbage allocation, and our GC pauses for even minor, young-generation GC are causing frame skips.
Right now we're using the HotSpot JVM, but my understanding is that it's better suited to large heap sizes. Our memory needs rarely go over 256 MB, so I'm wondering if there's a better VM whose garbage collector can give us pauses under 15 ms on a Raspberry Pi?
I think you will really struggle with this. You don't provide the flags you're using to start your JVM, so it's not possible to recommend alternatives.
A well configured G1 collector with an application that does not generate constantly increasing long-life objects will avoid stop-the-world full GCs. However, your problem is that even minor GCs (which are typically very fast) are causing unacceptable latency. Part of the problem is the speed of the memory bus on the Pi, which is not really that great.
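For reference, a G1 setup along those lines might look like the following. The heap size and pause target are illustrative only, and note that MaxGCPauseMillis is a goal the collector tries to meet, not a guarantee:

    java -Xms256m -Xmx256m -XX:+UseG1GC -XX:MaxGCPauseMillis=15 -jar streaming-app.jar

Pinning -Xms to -Xmx avoids heap-resizing work at runtime, which matters on hardware as constrained as the Pi.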
We (Azul, where I work) produce a pauseless collector (C4), but it is designed for machines with more resources. It needs a minimum of 1 GB of RAM and uses multiple cores to handle GC concurrently with application threads.
Ultimately, we've decided that we're fighting an uphill battle making the application do something it really wasn't built for, or at least that it isn't worth the development resources to continue along that path.
Our current solution is to leave the application as it is and live with the reality of garbage-collection pauses, so we don't hamstring future development of our application with a crazy amount of optimization. Let Java do what it was designed to do.
To stop the pauses that were causing frame skips, we have instead opted to create a second, tiny buffer application, managed by our primary application via IPC.
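As a rough illustration of that split (the jar name, heap size, and framing here are all hypothetical), the primary application can spawn the buffer process with a deliberately tiny heap and stream frames to its stdin:

    import java.io.IOException;
    import java.io.OutputStream;

    public class BufferLauncher {
        public static void main(String[] args) throws IOException {
            // Hypothetical child process: a heap this small keeps its minor GCs very short.
            Process buffer = new ProcessBuilder("java", "-Xmx16m", "-jar", "frame-buffer.jar")
                    .redirectErrorStream(true)
                    .start();
            try (OutputStream frames = buffer.getOutputStream()) {
                byte[] frame = new byte[4096]; // stand-in for real frame data
                frames.write(frame);           // one frame per write, in this sketch
                frames.flush();
            }
        }
    }

The buffer process can then absorb the primary application's GC pauses, since it only needs to hold a few frames at a time.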

Spring Boot embedded Tomcat server takes over 800 MB RAM?

I'm working on a Spring Boot application that uses the embedded Tomcat server. The application takes more than 800 MB of RAM. Is that common? Is there any way to bring memory usage down?
The amount of memory consumed by your Tomcat instance depends entirely on your application's requirements.
You need to do some sort of memory profiling of your application.
Is that common?
Yes, it could be. It all depends on your application, the way you create objects, and the amount of memory used by your objects.
You can start by setting -Xms to 1 GB, then run your application and perform normal operations.
Use tools like JVisualVM or JConsole to observe the heap size, GC performance, and even the amount of memory consumed by different types of objects in the JVM.
This will give you an initial idea of the amount of heap required by your application.
After this, use a tool like JMeter to load-test your application and check how the load affects your heap usage.
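Concretely, the starting point described above could look like this (the jar name is a placeholder, and 1 GB is the experiment suggested here, not a recommendation):

    java -Xms1g -Xmx1g -jar your-application.jar

With the application running, attach JVisualVM or JConsole to the process and watch the heap graph while JMeter drives the load.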
Suggested Reading:
http://blog.manupk.com/2012/09/java-memory-model-simplified.html
This is pretty common. Java VMs are pretty heavy. Look at the JVM startup flags, which will tell you what the heap size can grow to (you may see something like -Xmx768m, which allows a maximum of 768 MB of heap). You might try setting the CATALINA_OPTS environment variable: CATALINA_OPTS=-Xmx512m. If the Spring Boot script that boots the VM overrides this, you will have to track down the value being set in the script. However, the default value generally works well and will keep the JVM from throwing out-of-memory errors if you start instantiating many or large (read: Hibernate) objects that take a while to be garbage collected.
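For standalone Tomcat, the conventional place to set this is a bin/setenv.sh file; for a Spring Boot executable jar you pass the flag on the java command line instead. The 512m figure is just the example from above:

    # bin/setenv.sh (standalone Tomcat)
    export CATALINA_OPTS="-Xmx512m"

    # Spring Boot executable jar
    java -Xmx512m -jar app.jar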
Is there any way to bring memory uses down?
There are two approaches:
You can attempt to "squeeze" the heap size. That is not recommended, as it leads to the JVM spending a larger percentage of CPU time on garbage collection, more frequent GC pauses, and ultimately OOMEs.
This approach often doesn't work at all; i.e. it just causes the application to die quicker.
You can figure out why your application is using so much memory. It could be due to many things:
The problem you are solving may simply be too big.
Your application could be "bloated" with all sorts of unnecessary libraries, features, etc.
Your in-memory data structures could be poorly designed.
Your application could be caching too much in memory.
Your application could have memory leaks (a classic pattern is sketched below).
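On that last point, one classic leak pattern is a static collection that only ever grows. A contrived sketch (the class and cache are invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class RequestStats {
        // Leak: an entry is added per request but never evicted,
        // so this map grows for the life of the JVM.
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static void record(String requestId, byte[] payload) {
            CACHE.put(requestId, payload);
        }
    }

In a heap dump this shows up exactly as described in the next paragraph: one unexpectedly large population of map entries and their payloads.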
I agree with @cowbert's advice. Use performance-monitoring tools to try to track down what is using most of the JVM's memory. If there is a memory leak, it often shows up as an unexpectedly large amount of memory used by certain kinds of objects.

Memory consumption for a Java web app (is 300 MB too high?)

Can I pick your brains about a memory issue?
My Java app, which isn't huge (about 14,000 LOC), is using about 300 MB of memory. It's running on Tomcat with a MySQL database. I'm using Hibernate, Spring, and Velocity.
It doesn't seem to have any leaks, because it stabilizes at 300 MB without growing further. (Also, I've done some profiling.) There's been some concern from my team, however, about the amount of space it's using. Does this seem high? Do you have any suggestions for ways to shrink it?
Any thoughts are appreciated.
Joe
The number of LOC is not an indicator of how much heap a Java app is going to use; there is no correlation from one to the other.
300 MB is not particularly large for a server application that is caching data, but it is somewhat large for an application that is not holding any type of cached or session data (but since this includes the web server itself, 300 MB is generally reasonable).
The amount of code (LOC) rarely has much impact on the memory usage of your application; after all, it's the variables and objects stored that take most of the memory. To me, 300 megabytes doesn't sound like much, but of course it depends on your specific usage scenario:
How much memory does the production server have?
How many users are there with this amount of memory used?
How much does the memory usage grow per user session?
How many users are you expecting to be concurrently accessing the application in production use?
Based on these, you can do some calculations: is your production environment ready to handle the number of users you expect, do you need more hardware, or do you perhaps need to serialize some data to disk/DB, etc.
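For example (all of these numbers are invented): with a 50 MB application baseline and roughly 500 KB of state per session, 500 concurrent sessions need about 50 MB + 500 × 0.5 MB = 300 MB of heap, which would make the figure you're seeing entirely expected.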
I can't make any promises, but I don't think you need to worry. We run a couple of web apps at work through GlassFish, using Hibernate as well, and each uses about 800-900 MB in dev; we'll often have two domains running, each of that size.
If you do need to reduce your footprint, at least make sure you are using Velocity 1.6 or higher. 1.5 wasted a fair bit of memory.

What does fragmented memory look like?

I have a mobile application that is suffering from slow-down over time. My hunch (in part fed by this article) is that this is due to fragmentation of memory slowing the app down, but I'm not sure. Here's a graph of the app's memory use over time:
(Graph of the app's memory use over time: http://kupio.com/image-dump/fragmented.png)
The 4 peaks on the graph are 4 executions of the exact same task on the app. I start the task, it allocates a bunch of memory, it sits for a bit (The flat line on top) and then I stop the task. At that point it calls System.gc(); and the memory gets cleaned up.
As can be seen, each of the four runs of the exact same task takes longer to execute. The low points in the graph all return to the same level, so there don't seem to be any memory leaks between task runs.
What I want to know is: is memory fragmentation a feasible explanation, or should I look elsewhere first, bearing in mind that I've already done a lot of looking? The low points on the graph are relatively low, so my assumption is that in this state the memory would not be very fragmented, since there can't be many small memory holes causing problems.
I don't know how the J2ME memory allocator works, though, so I really don't know. Can anyone advise? Has anyone else had problems with this and recognises the memory profile of the app?
If you've got a little bit of time, you could test your theory with memory-pool techniques: each run of the task uses the 'same' chunks of memory by getting them from a pool and returning them at release time.
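A minimal sketch of that idea, kept to plain arrays so it stays close to what CLDC-era Java allows (the buffer size and pool depth are arbitrary):

    public class BufferPool {
        private final byte[][] free;
        private final int bufferSize;
        private int count; // number of buffers currently available

        public BufferPool(int capacity, int bufferSize) {
            this.free = new byte[capacity][];
            this.bufferSize = bufferSize;
            for (int i = 0; i < capacity; i++) {
                free[i] = new byte[bufferSize]; // pre-allocate everything up front
            }
            this.count = capacity;
        }

        // Borrow a pre-allocated buffer; allocate only if the pool is exhausted.
        public synchronized byte[] acquire() {
            return (count > 0) ? free[--count] : new byte[bufferSize];
        }

        // Return a buffer so the next run of the task reuses the same memory.
        public synchronized void release(byte[] buffer) {
            if (count < free.length && buffer.length == bufferSize) {
                free[count++] = buffer;
            }
        }
    }

If each run of the task acquires and releases from a pool like this, the allocator sees the same blocks every time, so fragmentation (if that is the culprit) should stop getting worse from run to run.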
If you're still seeing the degrading performance after doing this investigation, it's not memory fragmentation causing the problem. Let us all know your results and we can help troubleshoot further.
Memory fragmentation would account for it. What is not clear is whether the app's use of memory is causing paging; that would also slow things down and could cause the same issues.
If the problem really is memory fragmentation, there is not much you can do about it.
But before you give up in despair, try running your app with an execution profiler to see if it is spending a lot of time executing in an unexpected place. It is possible that the slowdown is actually due to a problem in your algorithms and has nothing to do with memory fragmentation. As people have already said, J2ME garbage collectors should not suffer from fragmentation issues.
Consider looking at garbage-collection statistics. You should see far more collections on the last run than on the first if your theory is to hold. Another thought: something else might be eating your memory, so your application has less.
In other words, profiler time :)
What OS are you running this on? I have some experience with Windows CE5 (or Windows Mobile) devices. CE5's operating-system-level memory architecture is quite broken and will fail quickly under memory-intensive applications. Your graph does not have any scales, but every process gets only 32 MB of address space on CE5. The VM and shared libraries will take their fair share of that as well, leaving you with quite little.
The only way around this is to re-use the memory you allocated instead of giving it back to the collector and re-allocating later. This is, of course, much lower-level programming than you would usually want to do in Java, but on this platform you might be out of luck.

How can I profile a very large Java webapp?

I have a very large Java app. It runs on Tomcat and is your typical Spring/Hibernate webapp. It's easy for me to test the performance of database queries, since I can run those separately, but I have no idea how to look for Java bottlenecks on a stack like this. I tried Eclipse's TPTP profiler, but it really didn't seem to like my program; I suspect that's because the program is simply too large. Does anyone have any advice on profiling a large webapp?
The Visual VM profiler that now comes with the JDK can be attached to running processes and may at least give an initial overview of the performance. It is based on the Netbeans profiler.
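Attaching is straightforward when the webapp runs under the same user: list the candidate JVMs, then pick the Tomcat process in the GUI. Both commands ship with the JDK (jvisualvm since Java 6 update 7):

    jps -lv      # list running JVMs with their main class and startup flags
    jvisualvm    # start VisualVM, then double-click the Tomcat process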
Try JProfiler. It's easy to integrate with Tomcat.
If you can get Tomcat and your application running in NetBeans, you can use the NetBeans built-in profiler to test performance, memory usage, and so on. There's a wiki page on Tomcat in NetBeans.
I have used YourKit to profile applications with an 8 GB heap and it worked quite well.
Check out JAMon. It's not a profiler, but it's the best tool for this job that I can recommend. It's very easy to integrate with Spring; we use it in test and live environments.
I've never found an easy way to do this, because there's typically so much going on that it's hard to get a clear overall picture. With things like Hibernate it's even harder, because the correct behavior may be to grab a big chunk of memory for cached data even though your app isn't really "doing anything", so a genuinely memory-inefficient process you run may get swamped in the profiling noise.
Are you profiling for memory, for speed, or just generally looking for poor performance? Try to test the processes you suspect are bad in isolation; it's certainly much easier.
JProbe and JProfiler are both good, and free demos are available. Testing inside an IDE complicates the memory picture; I've found it easier not to bother.
Try JProfiler. It has a trial license and it is very full featured. To use it, you'll have to:
Add the JProfiler agent as an argument to your java command
Start the program on the server
Start JProfiler and choose the "Connect to an application running remotely" option
Give it the port number and whatever host it's running on
All these are in the instructions that come with JProfiler, but the important part is that you'll connect through a host and port to your running app.
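The agent argument in the first step typically looks something like this; the installation path and port are machine-specific, so treat both values as placeholders:

    java -agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849 -jar yourapp.jar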
As for what to profile, I'm sure you have an idea of things that could be memory/CPU intensive - loading large data sets, sorting, even simple network I/O if it's done incorrectly. Do these things (it's great if you can automate load testing using some scripts that bang on your server) and collect a snapshot with JProfiler.
Then view the graphs at your leisure. Turn on CPU monitoring and watch where the CPU cycles are being spent. You'll be able to narrow down by percentage in each method call, so if you're using more than 1 or 2% of CPU in methods that you have source for, go investigate and see if you can make them less CPU intensive.
Same goes for memory. Disable all the CPU profiling, enable all the memory profiling, run the tests again and get your snapshot.
Rinse, repeat.
You might also take this time to read up on memory management and garbage collection. There's no better time to tune your garbage collection than when you're already profiling: http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
Pay special attention to the part on eden/survivor object promotion. In web apps you get a lot of short-lived objects, so it often makes sense to enlarge the young generation at the expense of the tenured generation.
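In HotSpot terms, that tuning is done with flags along these lines; the sizes are illustrative, so measure before and after rather than copying them:

    # Enlarge the young generation so short-lived request objects die in eden:
    java -Xmx1g -Xmn384m -XX:SurvivorRatio=8 -jar webapp.jar

-Xmn fixes the young generation size, and SurvivorRatio=8 makes eden eight times the size of each survivor space.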
