Hibernate+JPA+Spring Memory Leak - java

I am developing a web application with Hibernate, JPA, Spring and Struts2. When I run the application for a few hours on my web server (Tomcat on a VPS), the OS sends a SIGKILL to Tomcat because of its memory usage. My server has 288 MB; Tomcat gets killed when it reaches approximately 200 MB. Someone told me I need more memory, but my application is small and doesn't get much traffic; it is not in production yet. I am using PostgreSQL and my database is about 150 MB; it contains many images. I have tried to use a memory profiler with NetBeans, but the IDE becomes too slow and I have not been able to find anything.
I'd appreciate any help.

Do you properly close your connections in a finally block?
It's hard to answer with only this information and without the code.
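For example, the classic pattern is (a minimal sketch assuming plain JDBC against a pooled DataSource; the query and variable names are made up, and on Java 7+ try-with-resources does the same more concisely):

    Connection con = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        con = dataSource.getConnection(); // dataSource: your configured pool
        ps = con.prepareStatement("SELECT id FROM images WHERE owner = ?");
        ps.setString(1, owner);
        rs = ps.executeQuery();
        // ... read the results ...
    } finally {
        // close in reverse order; a connection that is never closed is never
        // returned to the pool, and its buffers keep the heap growing
        if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
        if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
        if (con != null) try { con.close(); } catch (SQLException ignored) {}
    }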

I have used JProfiler and YourKit, but I was not satisfied with the output for actual performance tuning and memory analysis, so we have since switched to JavaMelody. It helps with performance optimization not only in development but also on production systems. JavaMelody is very easy to integrate and configure, and in production you can enable or disable it just by updating web.xml.
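For reference, integrating it amounts to registering its filter in web.xml, roughly like this (a sketch; check the JavaMelody documentation for your version):

    <filter>
        <filter-name>javamelody</filter-name>
        <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>javamelody</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>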

Related

Profiling memory leak in a non-redundant uptime-critical application

We have a major challenge which has been stumping us for months now.
A couple of months ago, we took over the maintenance of a legacy application where the last developer to touch the code left the company several years ago.
This application needs to be more or less always online. It was developed many years ago without staging or test environments, and without a redundant infrastructure setup.
We're dealing with a legacy Java EJB application running on Payara application server (Glassfish derivative) on an Ubuntu server.
Within the last year or two, it has been necessary to restart Payara approximately once a week, and the Ubuntu server once a month.
This is due to a memory leak which slows down the application over a period of around a week. The GUI becomes almost entirely non-responsive, but a restart of Payara fixes this, at least for a while.
However after each Payara restart, there is still some kind of residual memory use. The baseline memory usage increases, thereby reducing the time between Payara restarts. Around every month, we thus do a full Ubuntu reboot, which fixes the issue.
Naturally we want to find the memory leak, but we are unable to run a profiler on the server because it's resource intensive, and would need to run for several days in order to capture the memory leak.
We have also tried several times to dump the heap using the "gcore" command, but it always results in a segfault, after which we need to reboot the Ubuntu server.
What other options / approaches do we have to figure out which objects in the heap are not being garbage collected?
I would try to clone the server in some way onto another system where you can run tests without affecting clients. It could even be a system with fewer resources, if you want to trigger a resource-based problem.
To be able to observe the memory leak without having to wait for days, I would create a load test, for example with Apache JMeter, to simulate a week's worth of accesses within a day, or even within hours or minutes (I don't know whether the base load is at a level where that is feasible for the server and network infrastructure).
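Once a test plan exists, it can run headless on the test server (a sketch; weekly-load.jmx is an assumed plan name):

    # run the JMeter plan without the GUI, logging results for later analysis
    jmeter -n -t weekly-load.jmx -l results.jtl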
First you could set up the load test to issue a "regular" mix of requests, like the one seen in the wild. Once you can trigger the loss of responsiveness, you can try to find out whether specific requests are more likely to cause the leak than others. (It could also be that some basic component reused in nearly every call contains the leak, in which case you cannot isolate "the" call with the leak.)
Then you can instrument this test server with a profiler.
For another angle (you could pursue it in parallel), you can also use a static code inspection tool like SonarQube to analyze the source code for typical memory-leak patterns.
One other idea comes to mind, but it carries many preconditions: if you have recorded typical scenarios for the backend calls, if you have enough development resources, and if it is a stateless web application where each call can be inspected more or less individually, then you could set up partial integration tests that simulate the incoming web calls, with database and file access but, if possible, without the application server, and record the increase in heap usage after each call. Statistically you might be able to find the "bad" call this way. (This is something I would try only as a very last option.)
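A rough way to record that per-call heap delta (a sketch; handleSimulatedCall() is a hypothetical stand-in for the extracted call under test, and System.gc() is only a best-effort hint):

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // around each simulated backend call:
    System.gc();                          // best-effort collection before measuring
    long before = usedHeap();
    handleSimulatedCall();                // hypothetical: the call under test
    System.gc();
    long retained = usedHeap() - before;  // memory the call failed to release
    System.out.println("retained after call: " + retained + " bytes");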
Apart from heap dumps, have you tried any real-time application performance monitoring (APM) like AppDynamics, or an open source alternative like https://github.com/scouter-project/scouter?
An alternative approach would be to look for known issues in your existing stack, e.g. Payara issues like https://github.com/payara/Payara/issues/4098, or perhaps the Ubuntu patch level you are currently running the app on.
You can use jmap, an executable bundled with the JDK, to check the memory. From the documentation:
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
For more information you can see the documentation, or the Stack Overflow question How to analyse the heap dump using jmap in java.
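Typical invocations look like this (1234 stands for the target JVM's process id):

    # print a heap summary of a running JVM
    jmap -heap 1234
    # write a binary dump containing only live (reachable) objects
    jmap -dump:live,format=b,file=heap.hprof 1234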
There is also a tool called jhat which can be used to analyse the Java heap.
From the documentation:
The jhat command parses a java heap dump file and launches a webserver. jhat enables you to browse heap dumps using your favorite webbrowser. jhat supports pre-designed queries (such as 'show all instances of a known class "Foo"') as well as OQL (Object Query Language) - a SQL-like query language to query heap dumps. Help on OQL is available from the OQL help page shown by jhat. With the default port, OQL help is available at http://localhost:7000/oqlhelp/
See the jhat documentation, or How to analyze the heap dump using jhat.
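Usage is a one-liner (assuming the heap.hprof dump from the jmap example above):

    # parse the dump and serve the browser UI on the default port 7000
    jhat heap.hprof
    # then browse http://localhost:7000/ (OQL help at /oqlhelp/)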

Extreme slowdown Cloud vs VPS (Amazon, Jelastic)

We are trying to move one of our web services (Java) from a development server to the cloud. Here are the details:
There is a PHP front-end connecting to a Java-based web service, which in turn is connected to a MySQL database (all requests to the database are sent from the web service; the PHP part communicates with the Java back-end only and has no direct connection to the database).
Start Point
Dev Server - CentOS (cPanel), 765MB-1.5GB RAM, 4 CPUs, Tomcat 7
*the software is running fast, no speed issues, logs show normal CPU and memory usage
Scenario #1
PHP front-end on Elastic Beanstalk and Java web-service with database on Elastic Beanstalk
*the software is about 80% slower, logs show normal CPU and memory usage
Scenario #2
PHP front-end on VPS (same company/location with Jelastic) and Java web-service with database on Jelastic
*the software is about 70% slower, logs show normal CPU and memory usage
Scenario #3
PHP front-end on VPS, Java web-service with database on Elastic Beanstalk and Jelastic (switching)
*the software is about 70-80% slower, logs show normal CPU and memory usage on both cloud environments
What I figured out: no matter where the PHP front-end is located, it loads fast, so there is nothing to look for there.
As soon as the Java back-end is moved from the VPS to the cloud (whether Amazon or Jelastic), the whole application slows down dramatically. Based on the logs, and since we tried two providers, this doesn't seem like a resource issue.
It cannot be a connection issue since we tried to have the PHP and Java in the same environment (Scenario #1).
It is either the Java web service itself slowing down dramatically (for an unknown reason, as the logs show low resource usage), or the connection between the Java application and the database (which I doubt, since in the first scenario all three components are on Amazon, in the same environment and location).
Anyone ever had such an issue before? Any ideas? Thank you!
(note, I have zero experience with cloud hosting)
It might be related to specific parameters in the configuration files, mostly for the DB. Please double-check that they are the same in each test.
Also, it is not clear how you measure performance and what "slower" exactly means. And you have not specified the size of the resources on Jelastic and EB. Please double-check that the resources are equal as well.
For high performance Java cloud backend, you can try Jelastic implementation by Elastx - see the performance research that CloudSpectator did on them (they also used Amazon and Rackspace cloud in the study): http://blog.jelastic.com/wp-content/uploads/2013/09/Elastx-Fueld-by-SolidFire-9-5-13+Jelastic.pdf
Also, I do not know who your current Jelastic provider is, but if you contact them by clicking Help / Contact Support in Jelastic dashboard, I am sure that they will be happy to troubleshoot the issue! If this does not help - please ping me offline.
What you are measuring is CPU and memory. Since both give normal results, and your application communicates over the network, I'd suspect network latency to be the culprit. The next thing to look into would be, for example, disk I/O performance, which can slow down your application like a handbrake.
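One cheap way to separate application time from network time is to time a trivial round trip from the web-service host to the database (a sketch; the JDBC URL and credentials are placeholders):

    import java.sql.*;

    public class DbPing {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://db-host/app", "user", "secret")) {
                long start = System.nanoTime();
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) {
                    rs.next();
                }
                // a trivial query approximates pure network + driver latency
                System.out.printf("round trip: %.2f ms%n",
                        (System.nanoTime() - start) / 1e6);
            }
        }
    }

If that number jumps from sub-millisecond on the VPS to tens of milliseconds in the cloud, the latency between the web service and the database would explain most of the slowdown.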

Java Webapp Performance Issues

I have a web application made entirely with Java. The webapp doesn't use any graphical/model framework; instead, it follows the Model-View-Controller pattern, built only on the Servlet specification (Servlet 2.4).
The webapp has been under development since 2001 and is very complex. It was initially built to run on Tomcat 4.x/5.x; currently it runs on Tomcat 6.x. But we still have memory leaks.
In depth, the specifications of the webapp can be summarized as:
Uses the Servlet 2.4 specification
It doesn't use any framework
It doesn't use Java EE (no EJB)
It's based on Java SE (with servlets)
Works only on IE 6+ (because of its age)
Infrastructure Specification
Currently, the webapp runs in three environments:
First
IBM server (I don't remember the exact model)
Intel Xeon 2.4 GHz
32GB RAM
1TB HDD
Tomcat (Version 6) is configured to use 8GB of RAM
Second
Dell Server
Intel Xeon 2.0 GHz
4GB RAM
500GB HDD
Tomcat (Version 5.5) is configured to use 1.5GB of RAM
Third
Dell Server
AMD Opteron 1214 2.20 GHz
4GB RAM
320GB HDD
Tomcat (Version 6) is Configured to use 1.5GB of RAM
Database specification
The webapp uses SQL Server 2008 R2 Express Edition as its DBMS, except for the first server specification, which uses SQL Server 2008 R2 Standard Edition. For the connection pools, the app uses Apache DBCP.
Problem
Well, it has very serious performance issues. The webapp slows down continually and frequently denies service altogether. The only way to recover it is to restart the Apache Tomcat service.
During a performance audit, I found several programming issues (like database connections that are never closed, and excessive use of the Vector collection [instead of ArrayList]).
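(For the never-closed connections, DBCP can at least be configured to reclaim and log abandoned connections while the code is being fixed - a sketch, assuming DBCP 1.x; the URL is a placeholder:)

    import org.apache.commons.dbcp.BasicDataSource;

    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
    ds.setUrl("jdbc:sqlserver://localhost;databaseName=app");
    ds.setMaxActive(20);               // cap the pool so leaks fail fast instead of exhausting memory
    ds.setRemoveAbandoned(true);       // reclaim connections that were never closed
    ds.setRemoveAbandonedTimeout(60);  // ... after 60 seconds of inactivity
    ds.setLogAbandoned(true);          // log the stack trace that borrowed the connection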
I want to know how I can improve the app's performance, and which applications can help me monitor Tomcat's performance and the webapp's memory usage.
All suggestions are gladly accepted.
You could also try stagemonitor. It is an open source performance monitoring library. It records request response times, JVM metrics, request details including a call stack (profile) of the methods called during the request, and more. Because of its low overhead, you can also use it in production.
The tuning procedure would be the following:
Identify slow requests with the Request Dashboard
Analyze the stack trace of the request with the Request Detail Dashboard to find out about slow methods
Dive into your code and try to optimize those slow methods
You can also correlate metrics like throughput or the number of sessions with response time or CPU usage
Analyze the heap with the JVM Memory Dashboard
Note: I am the developer of stagemonitor.
I would start with some tools that can help you profile the application. Since you are developing a webapp, start with Lambda Probe and JavaMelody.
The first step is to determine the conditions under which the app starts to behave oddly. Ask yourself a few questions:
Do the performance issues arise right after the application starts, or over time?
Are the performance issues correlated with the quantity of client requests?
What is the real performance problem - high load on the server, or lack of memory? (Note that the two are related, so check which one starts first.)
Are there any background processes which are performing some massive operations? Are they scheduled to run at some particular time period?
Try to find some clues before going deep into code. It will help you to narrow down possible causes.
As Joshua Bloch states in his book "Effective Java", performance issues are rarely the effect of minor mistakes in the source code (although, of course, misuse of Java constructs can lead to disaster). Usually the cause is bad system (API) architecture.
The last suggestion, based on my experience: try not to assume that high memory consumption is something bad. Tomcat will use as much memory as the operating system and JVM let it (no more than the max settings), and only when it needs more will it perform garbage collection. So a typical (proper!) graph of memory consumption looks like a saw. If you are dealing with a memory leak, the graph will instead climb steadily. This is the most often misunderstood aspect of memory leaks, so keep it in mind.
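To see which shape your graph has, GC logging is usually enough (a sketch of HotSpot flags from the Tomcat 5.5/6 era; add them to CATALINA_OPTS):

    # log every collection with timestamps, so heap-after-GC can be plotted over time
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log

A sawtooth whose valleys stay level is healthy; valleys that creep upward indicate a leak.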
To be honest, we cannot help you much further. Those are just pointers; now you will have to do extensive research to figure out the cause :)
The general solution is to use a profiler, e.g. YourKit, with a realistic workload that reproduces the problem.
What I do first is a CPU-only profile, then a memory-only profile, and finally a combined CPU & memory profile (I then look at the CPU profile results).
YourKit can also monitor high-level operations such as Java EE resources and JDBC connections. I haven't tried these, as I don't use them. ;)
It can be a good idea to improve efficiency even if it's not the cause of the problem, as it will reduce the amount of "noise" in these profiles and make your issues more obvious.
You could try increasing the amount of memory available, but I suspect that would just delay the problem.
OK. I have seen huge Java applications run on lesser configurations. You should try the following:
First, connect a profiler to your application and see which parts take the most time. You can use JProfiler or Eclipse MAT (I personally prefer JProfiler). Also take a look at the objects consuming the most memory. This will help you narrow down the parts you need to rewrite to improve performance.
Once you have taken a look at the memory leaks, update your application to use a 64-bit JDK (assuming it does not already do so).
Take a look at your JVM arguments and optimize them.
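For example (a sketch; the sizes and path are assumptions to adapt to each server):

    # fix the heap size so growth is visible, and capture a dump when it blows up
    -Xms1536m -Xmx1536m
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat/dumps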
You can try the open source tool Webapp Watcher to identify where in the code the performance issue lies.
You first have to add a filter to the webapp (as explained here) in order to record metrics, and then import the logs into the WAW Analyzer tool and follow the steps described in the doc to find the potential performance issue in the code.

Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but it slowly eats up more and more memory over time until it grinds to a halt and then exceeds the maximum heap. Once I restart the container, everything is fine again. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be in check. It looks like running the garbage collector gets a decent chunk of memory back, but I don't know how to do that sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.
You can use the VisualVM application from the Oracle JDK to attach to the Tomcat instance while it is running (if you are already using an Oracle JVM) and inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You are most likely looking either for objects that keep growing or for types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-locals. You may want to double-check whether this applies to your situation.

low end virtual private server for java development

Will a VPS with 360 MB of RAM running Linux be able to support a single user developing a Java web application that uses Spring, Hibernate, and MySQL for the database? The server will be for development only, so the application will not have more than one or two concurrent users.
edit:
By development I mean a server I can deploy to and test on. The actual coding will be done on Windows, but I want a Linux server to test on as well.
This could work OK, but it depends a lot on your application setup. If you cache a lot - your appserver caching page content, Hibernate caching query results/objects, or MySQL caching query results - you will probably need more RAM. So if your content is big it might not fit; otherwise it might just fit. If you have absolutely no option of increasing the amount of memory should you find you need more, I would certainly not recommend this setup.
But maybe more to the point: what is your target platform? I would say that your server should match it.
Just for Linux testing, it is probably easier to either get a cheap PC or run it inside a virtual machine on your development machine (assuming you've got plenty of RAM on that one).
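If you do try the 360 MB VPS, cap Tomcat's heap explicitly so the JVM doesn't fight the OS and MySQL for memory (a sketch for bin/setenv.sh; the sizes are assumptions):

    # leave headroom for MySQL and the OS on a 360 MB box
    export CATALINA_OPTS="-Xms64m -Xmx192m -XX:MaxPermSize=64m"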
It depends on what you're running for your IDE. If you're using Eclipse, you're going to want somewhere around 1 GB of RAM (Eclipse is a memory hog... and slow as all hell if you don't have enough).
If you're using a more memory-efficient IDE, then you should be good to go with that setup for development.
UPDATE
Since no coding is going to happen on the box, you should be just fine with it for your testing. Enjoy!
Short answer - I don't think you will have any problems with the amount of RAM. I've deployed a Rails app to a 256 MB VPS and it worked great for development.
