Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly consumes more and more memory until it approaches the maximum heap size and grinds to a halt. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like a memory leak in the application, but to my knowledge nothing accesses any unmanaged resources. Also, the Hibernate cache seems to be in check. If I run the garbage collector I get a decent chunk of memory back, but I don't know how to do this sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.

You can use the VisualVM application bundled with the Oracle JDK to attach to the Tomcat instance while it is running (assuming it is already running on an Oracle JVM) and inspect what is going on. The memory profiler can tell you quite a bit and point you in the right direction. Most likely you will be looking either for objects that keep growing or for types of objects that get allocated more and more often.
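Before reaching for a full profiler, you can also watch the heap from the command line with jstat, which ships with the JDK. A minimal sketch, where <pid> stands for the Tomcat process id and 10000 is a 10-second sampling interval:

    jstat -gcutil <pid> 10000

If the O (old generation) column keeps creeping upward even right after full collections, that is the classic leak signature.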
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.

Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-local storage. You may want to double-check whether this applies to your situation.
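As a generic illustration of the thread-local half of that problem (this is not the plugin's actual code): a value set on a pooled scheduler thread and never removed stays reachable for the lifetime of the thread, so it should be cleared in a finally block.

    // Hypothetical sketch: thread-locals on pooled threads must be removed when done,
    // otherwise the value (and everything it references) leaks for the life of the thread.
    public class SessionHolder {
        private static final ThreadLocal<Object> CURRENT = new ThreadLocal<Object>();

        public static void runWithSession(Object session, Runnable work) {
            CURRENT.set(session);
            try {
                work.run();
            } finally {
                CURRENT.remove(); // without this, a worker thread pins the last session forever
            }
        }
    }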

Related

Profiling memory leak in a non-redundant uptime-critical application

We have a major challenge which has been stumping us for months now.
A couple of months ago, we took over the maintenance of a legacy application, where the last developer to touch the code left the company several years ago.
This application needs to be more or less always online. It was developed many years ago without staging or test environments, and without a redundant infrastructure setup.
We're dealing with a legacy Java EJB application running on Payara application server (Glassfish derivative) on an Ubuntu server.
Within the last year or two, it has been necessary to restart Payara approximately once a week, and the Ubuntu server once a month.
This is due to a memory leak which slows down the application over a period of around a week. The GUI becomes almost entirely non-responsive, but a restart of Payara fixes this, at least for a while.
However, after each Payara restart there is still some kind of residual memory use: the baseline memory usage increases, thereby reducing the time between Payara restarts. Roughly every month we therefore do a full Ubuntu reboot, which fixes the issue.
Naturally we want to find the memory leak, but we are unable to run a profiler on the server because it's resource intensive, and would need to run for several days in order to capture the memory leak.
We have also tried several times to dump the heap using the "gcore" command, but it always results in a segfault and then we need to reboot the Ubuntu server.
What other options / approaches do we have to figure out which objects in the heap are not being garbage collected?
I would try to clone the server in some way onto another system where you can perform tests without clients being affected. It could even be a system with fewer resources, if you want to trigger a resource-based problem sooner.
To be able to observe the memory leak without having to wait for days, I would create a load test, maybe with Apache JMeter, to compress a week's worth of accesses into a day, or even into hours or minutes (I don't know whether the base load is at a level where that is feasible for the server and network infrastructure).
First you could set up the load test to act as a "regular" mix of requests like those seen in the wild. Once you can trigger the loss of responsiveness, you can try to find out whether specific requests are more likely to be the cause of the leak than others. (It could also be that some basic component reused in nearly every call contains the leak, in which case you will not be able to single out "the" call with the leak.)
Then you can instrument this test server with a profiler.
As another approach (which you could pursue in parallel), you can also use a static code inspection tool like SonarQube to analyze the source code for typical memory-leak patterns.
One other idea comes to mind, but it comes with many preconditions: if you have recorded typical scenarios for the backend calls, if you have enough development resources, and if it is a stateless web application where each call can be inspected more or less individually, then you could set up partial integration tests where you simulate the incoming web calls, with database and file access but, if possible, without the application server, and record the increase in heap usage after each call, as sketched below. Statistically you might be able to find the "bad" call this way. (So this would be something I would try as a very last option.)
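A rough sketch of that last idea, assuming you can invoke the request-handling code directly; handleCall() is a placeholder for whatever entry point you can drive, and the GC hint means the numbers are only indicative:

    // Hypothetical heap-growth probe; handleCall() stands in for one simulated backend call.
    public class LeakProbe {
        static long usedHeap() {
            System.gc(); // best-effort hint only, so treat the result as approximate
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long before = usedHeap();
            for (int i = 0; i < 1000; i++) {
                handleCall();
            }
            long after = usedHeap();
            System.out.println("Heap retained after 1000 calls: " + (after - before) + " bytes");
        }

        static void handleCall() {
            // placeholder: invoke the backend code that serves one incoming web call
        }
    }

If one call type consistently leaves more heap behind than the others, that is the one to profile first.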
Apart from heap dumps, have you tried any real-time application performance monitoring (APM) such as AppDynamics, or an open-source alternative like https://github.com/scouter-project/scouter?
Another approach would be to look into known issues with your existing stack, e.g. Payara issues like https://github.com/payara/Payara/issues/4098, or maybe the Ubuntu patch level you are currently running the app on.
You can use jmap, a tool bundled with the JDK, to check the memory. From the documentation:
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
For more information you can see the documentation, or the Stack Overflow question How to analyse the heap dump using jmap in java.
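For example, to write a binary dump of the live objects in a running process to a file (replace <pid> with the target process id; the file name is just an example):

    jmap -dump:live,format=b,file=heap.hprof <pid>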
There is also a tool called jhat which can be used to analyze a Java heap dump.
From the documentation:
The jhat command parses a java heap dump file and launches a webserver. jhat enables you to browse heap dumps using your favorite webbrowser. jhat supports pre-designed queries (such as 'show all instances of a known class "Foo"') as well as OQL (Object Query Language) - a SQL-like query language to query heap dumps. Help on OQL is available from the OQL help page shown by jhat. With the default port, OQL help is available at http://localhost:7000/oqlhelp/
See the jhat documentation, or How to analyze the heap dump using jhat.
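For example, assuming the heap.hprof file produced by the jmap command above, you can start the jhat web server and then browse to http://localhost:7000/:

    jhat heap.hprof

Note that jhat itself needs roughly as much heap as the dump file is large; if it runs out of memory you can give it more with the -J-Xmx option.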

Hibernate+JPA+Spring Memory Leak

I am developing a web application with Hibernate, JPA, Spring and Struts2. When I run the application for a few hours on my web server (Tomcat on a VPS), the OS sends a SIGKILL to Tomcat because of its memory usage. My server has 288 MB; Tomcat gets killed when it reaches approximately 200 MB. Someone has told me that I need more memory, but my application is small and doesn't have much traffic; it is not in production yet. I am using PostgreSQL and my database is about 150 MB, much of it images. I have tried to use a memory profiler with NetBeans, but the IDE becomes too slow and I have not been able to find anything.
I'll appreciate any help.
Do you properly close your connections in a finally block?
It's hard to answer without the code; there is only so much to go on from this information alone.
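For reference, a minimal sketch of the pre-Java-7 close-in-finally pattern (the DAO, table and column names here are made up for illustration):

    // Illustrative only: release JDBC resources in a finally block so the
    // connection goes back to the pool even when the query throws.
    import java.sql.*;
    import javax.sql.DataSource;

    public class UserDao {
        private final DataSource dataSource;

        public UserDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public long findUserId(String name) throws SQLException {
            Connection con = null;
            PreparedStatement ps = null;
            ResultSet rs = null;
            try {
                con = dataSource.getConnection();
                ps = con.prepareStatement("select id from users where name = ?");
                ps.setString(1, name);
                rs = ps.executeQuery();
                return rs.next() ? rs.getLong(1) : -1;
            } finally {
                if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
                if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
                if (con != null) try { con.close(); } catch (SQLException ignored) {}
            }
        }
    }

Connections that are opened but never closed are one of the most common causes of the slow memory (and pool) exhaustion described above.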
I have used JProfiler and YourKit, but I was not satisfied with their output for actual performance tuning and memory usage, so we have since switched to JavaMelody. It helps with performance optimization not only in development but also in production. JavaMelody is very easy to integrate and configure, and in production you can enable or disable it just by updating web.xml.
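For reference, the classic web.xml wiring for JavaMelody looks roughly like this (class names as documented by JavaMelody; the exact setup may differ between versions):

    <filter>
        <filter-name>javamelody</filter-name>
        <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>javamelody</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <listener>
        <listener-class>net.bull.javamelody.SessionListener</listener-class>
    </listener>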

Java Webapp Performance Issues

I have a web application made entirely with Java. The webapp doesn't use any graphical/model framework; instead, it follows the Model-View-Controller pattern, built only on the Servlet specification (Servlet 2.4).
The webapp has been in development since 2001 and is very complex. It was initially built to work with Tomcat 4.x/5.x; it currently runs on Tomcat 6.x. But we are still having memory leaks.
In depth, the specifications of the webapp can be summarized as:
Uses the Servlet 2.4 specification.
It doesn't use any framework.
It doesn't use Java EE (no EJBs).
It's based on Java SE (with servlets).
Works only on IE 6+ (because of its age).
Infrastructure Specification
Currently, the webapp runs in three environments:
First
IBM server (I don't remember the exact model)
Intel Xeon 2.4 GHz
32GB RAM
1TB HDD
Tomcat (Version 6) is configured to use 8GB of RAM
Second
Dell Server
Intel Xeon 2.0 GHz
4GB RAM
500GB HDD
Tomcat (Version 5.5) is configured to use 1.5GB of RAM
Third
Dell Server
AMD Opteron 1214 2.20 GHz
4GB RAM
320GB HDD
Tomcat (Version 6) is Configured to use 1.5GB of RAM
Database specification
The webapp uses SQL Server 2008 R2 Express Edition as its DBMS, except for the first server configuration, which uses SQL Server 2008 R2 Standard Edition. For connection pooling, the app uses Apache DBCP.
Problem
Well, it has very serious performance issues. The webapp slows down continually and many times denies service entirely. The only way to recover the app is to restart the Apache Tomcat service.
During a performance audit, I found several programming issues (like database connections that are never closed, and excessive use of the Vector collection instead of ArrayList).
I want to know how I can improve the app's performance, and which applications can help me monitor Tomcat's performance and the webapp's memory usage.
All suggestions are gladly accepted.
You could also try stagemonitor. It is an open source performance monitoring library. It records request response times, JVM metrics, request details including a call stack (profile) of the called methods during the request and more. Because of the low overhead, you can also use it in production.
The tuning procedure would be the following.
Identify slow requests with the Request Dashboard
Analyze the stack trace of the request with the Request Detail Dashboard to find out about slow methods
Dive into your code and try to optimize those slow methods
You can also correlate metrics such as throughput or the number of sessions with the response time or CPU usage
Analyze the heap with the JVM Memory Dashboard
Note: I am the developer of stagemonitor.
I would start with some tools that can help you profile the application. Since you are developing a webapp, start with Lambda Probe and JavaMelody.
The first step is to determine the conditions under which the app starts to behave oddly. Ask yourself a few questions:
Do the performance issues arise right after the application starts, or over time?
Are the performance issues correlated with the quantity of client requests?
What is the real performance problem: high load on the server, or lack of memory? (Note that they are related, so check which one starts first.)
Are there any background processes performing massive operations? Are they scheduled to run at some particular time?
Try to find some clues before going deep into code. It will help you to narrow down possible causes.
As Joshua Bloch states in his book "Effective Java", performance issues are rarely the effect of minor mistakes in the source code (although, of course, misuse of Java constructs can lead to disaster). Usually the cause is bad system (API) architecture.
The last suggestion, based on my experience: try not to assume that high memory consumption is in itself a bad thing. Tomcat will use as much memory as the operating system and JVM will let it (up to the max heap setting), and only when it needs more will it perform garbage collection. So a typical (healthy!) graph of memory consumption looks like a sawtooth. If you are dealing with a memory leak, the graph will instead increase constantly and indefinitely. This is the most commonly misunderstood aspect of memory leaks, so keep it in mind.
To be honest, we cannot help you much further. These are just pointers; now you will have to do extensive research to figure out the cause :)
The general solution is to use a profiler e.g. YourKit, with a realistic workload which reproduces the problem.
What I do first is a CPU-only profile, then a memory-only profile, and finally a combined CPU & memory profile (I then look at the CPU profile results).
YourKit can also monitor high-level operations such as Java EE resources and JDBC connections. I haven't tried these as I don't use them. ;)
It can be a good idea to improve efficiency even if it's not the cause of the problem, as it will reduce the amount of "noise" in these profiles and make your issues more obvious.
You could try increasing the amount of memory available, but I suspect it will just delay the problem.
OK, so I have seen huge Java applications run on lesser configurations. You should try to do the following:
First, connect a profiler to your application and see which part of your application takes the most time. You can use JProfiler or Eclipse MAT (I personally prefer JProfiler). Also take a look at the objects taking the most memory. This will help you narrow down the parts you need to rewrite to improve performance.
Once you have taken a look at the memory leaks, update your application to use a 64-bit JDK (assuming it does not already do so).
Take a look at your JVM arguments and optimize them.
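As a hedged example of the kind of arguments worth reviewing (the sizes are placeholders that depend on your machine; the heap-dump flag at least guarantees you get something to analyze the next time the app dies):

    -Xms2g -Xmx2g
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps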
You can try the open source tool Webapp Watcher to identify where in the code the performance issue is.
You first have to add a filter to the webapp (as explained here) in order to record metrics, then import the logs into the WAW Analyzer tool and follow the steps described in the doc to find where the potential performance issue in the code is.

How to debug Java memory errors?

There is a Java Struts application running on Tomcat that has some memory errors. Sometimes it becomes slow and hoards all of Tomcat's memory until it crashes.
I know how to find and fix "normal code errors", using tests, debugging, etc., but I don't know how to deal with memory errors. (How can I reproduce them? How can I test for them? In which places in the code is it most common to create a memory error?)
In one question: Where can I start? Thanks
EDIT:
A snapshot sent by the IT department (I don't have direct access to the production application)
Use one of the many "profilers". They hook into the JVM and can tell you things like how many new objects are being created per second, and what type they are etc.
Here's just one of many: http://www.ej-technologies.com/products/jprofiler/overview.html
I've used this one and it's OK.
http://kohlerm.blogspot.com/
It is quite a good intro to finding memory leaks using Eclipse Memory Analyzer.
If you prefer video tutorials, try YouTube; although it is Android-specific, it is very informative.
If your application becomes slow, you could create a heap dump and compare it to another heap dump created when the system is in a healthy condition. Look for differences in the larger data structures.
You should run it under a profiler (JProfiler or YourKit, for example) for some time and watch the memory/resource usage. Also try taking thread dumps.
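Thread dumps can be taken with jstack from the JDK, or by sending SIGQUIT to the process (in which case the dump goes to Tomcat's stdout, typically catalina.out):

    jstack <pid> > threads.txt
    kill -3 <pid>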
There are a couple of options. A profiler is one of them; another is to dump the Java heap to a file and analyze it with a special tool (e.g. IBM provides a very cool tool called Memory Analyzer that presents a very detailed report of allocated memory at the time of a JVM crash: http://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/).
A third option is to start your server with the JMX server enabled and connect to it via JConsole; with this approach you can monitor memory usage/allocation at runtime. JConsole is provided with the standard Sun JDK under the bin directory (see Connecting remote tomcat JMX instance using jConsole for how to connect to Tomcat via JConsole).
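To enable the JMX server, system properties along these lines are typically added to the Tomcat startup options (the port is an example; do not leave authentication and SSL disabled on a host reachable from outside):

    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=9010
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false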

Java profiler for IBM JVM 1.4.2 (WebSphere 6.0.2)

I'm looking for a Java profiler that works well with the JVM shipped with WebSphere 6.0.2 (IBM JVM 1.4.2). I use YourKit for my usual profiling needs, but it specifically refuses to work with this old JVM (I'm sure the authors had their reasons...).
Can anybody point me to a decent profiler that can do the job? I'm not interested in a generic list of profilers, BTW; I've seen the other Stack Overflow thread, but I'd rather not try them one by one.
I would prefer a free version, if possible, since this is a one-off need (I hope!) and I would rather not pay for another profiler just for this.
Old post, but this may help someone. You can use IBM Health Center which is free. It can be downloaded standalone or as part of the IBM Support Assistant. I suggest downloading ISA since it has a ton of other useful tools such as Garbage Collection and Memory Visualizer and Memory Analyzer.
What are you looking to profile? Is it stuff in the JVM or the app server? If it's the latter, there's loads of stuff in the WAS 6 GUI to help with this. Assuming you really want to see stuff like the heap etc., then the IBM HeapAnalyzer might help. There are other tools listed at the bottom of this page.
Something else I've learned: ideally, you'll be able to connect your IDE's profiler to the running JVM. Some let you do this with a remote JVM as well as the local one you are developing on. Is the JVM you wish to profile live or remote? If so, you might have to force dumps and take them out of the live environment to look at at your leisure. Otherwise, set up something locally and get the info from it that way.
Update: I found out that JProfiler integrates smoothly with WAS 6.0.2 (IBM JDK 1.4).
