I'm facing an issue where I have a Java application running on a server that keeps growing in memory until eventually the server can't handle it anymore.
This looks like some sort of memory/resource leak, which I thought was extremely rare in Java thanks to garbage collection. My guess is that something is being referenced but never used, so the garbage collector never collects it.
The problem is that the memory grows so slowly that I can't debug it properly (it can take two weeks to make the server unusable).
I'm using Java with mysql-connector, and I'm sure the memory leak is caused by something related to the database connection handling.
Here is how I connect to the database:
private static Connection connection; // cached, shared connection used by getConnection() below

private static Connection connect() {
    try {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/database", "client", "password");
        return conn;
    } catch (SQLException ex) {
        System.out.println("SQLException: " + ex.getMessage());
        System.out.println("SQLState: " + ex.getSQLState());
        System.out.println("VendorError: " + ex.getErrorCode());
        return null;
    }
}

public static Connection getConnection() {
    try {
        if (connection == null || connection.isClosed()) connection = connect();
        return connection;
    } catch (SQLException exception) {
        System.out.println("exception trying to connect to the database");
        return null;
    }
}
I can't find any possible problem here, but who knows!
Here's how I retrieve information from the database:
public void addPoints(long userId, int cantidad) {
    try {
        if (DatabaseConnector.getConnection() != null) {
            PreparedStatement stm = DatabaseConnector.getConnection()
                    .prepareStatement("UPDATE users SET points = points + ? WHERE id = ?");
            stm.setLong(2, userId);
            stm.setInt(1, cantidad);
            if (stm.executeUpdate() == 0) { // user doesn't have any point records in the database yet
                PreparedStatement stm2 = DatabaseConnector.getConnection()
                        .prepareStatement("INSERT INTO users (id, points) VALUES (?, ?)");
                stm2.setLong(1, userId);
                stm2.setInt(2, cantidad);
                stm2.executeUpdate();
            }
        }
    } catch (SQLException exception) {
        System.out.println("error recording points");
    }
}
public ArrayList<CustomCommand> getCommands(long chatId) throws SQLException {
    ArrayList<CustomCommand> commands = new ArrayList<>();
    if (DatabaseConnector.getConnection() != null) {
        PreparedStatement stm = DatabaseConnector.getConnection()
                .prepareStatement("SELECT text, fileID, commandText, type, probability FROM customcommands WHERE chatid = ?");
        stm.setLong(1, chatId);
        ResultSet results = stm.executeQuery();
        if (!results.isBeforeFirst()) return null;
        while (results.next()) {
            commands.add(new CustomCommand(results.getString(1), results.getString(2), results.getString(3),
                    CustomCommand.Type.valueOf(results.getString(4)), results.getInt(5)));
        }
        return commands;
    }
    return null;
}
Maybe the problem is something related to exception catching and statements not being correctly executed? Maybe something related to result sets?
It's driving me crazy. Thanks for helping me!
You do nothing to clean up the ResultSet and Statement before you return. That's a bad idea. You should be closing each one in its own try/catch inside a finally block.
A ResultSet represents a database cursor. You should close it so you don't run out of cursors.
I wouldn't have a single static Connection. I'd expect a thread-safe, managed pool of connections.
I wouldn't return null. You don't make clear what the caller is supposed to do with it. Better to throw an exception.
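Applied to the addPoints() method from the question, that might look like this with try-with-resources (a sketch: it keeps the shared DatabaseConnector connection and the original UPDATE-then-INSERT logic, and only fixes the statement cleanup):
public void addPoints(long userId, int cantidad) {
    String update = "UPDATE users SET points = points + ? WHERE id = ?";
    String insert = "INSERT INTO users (id, points) VALUES (?, ?)";
    try {
        Connection conn = DatabaseConnector.getConnection();
        if (conn == null) return;
        try (PreparedStatement stm = conn.prepareStatement(update)) {
            stm.setInt(1, cantidad);
            stm.setLong(2, userId);
            if (stm.executeUpdate() == 0) { // no points row for this user yet
                try (PreparedStatement stm2 = conn.prepareStatement(insert)) {
                    stm2.setLong(1, userId);
                    stm2.setInt(2, cantidad);
                    stm2.executeUpdate();
                }
            }
        } // both statements are closed here, even if an exception is thrown
    } catch (SQLException exception) {
        System.out.println("error recording points: " + exception.getMessage());
    }
}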
Related
It is said to be a good habit to close all JDBC resources after usage. But if I have the following code, is it necessary to close the ResultSet and the Statement?
Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    conn = ...; // retrieve connection
    stmt = conn.prepareStatement(...); // some SQL
    rs = stmt.executeQuery();
} catch (Exception e) {
    // error handling
} finally {
    try { if (rs != null) rs.close(); } catch (Exception e) {}
    try { if (stmt != null) stmt.close(); } catch (Exception e) {}
    try { if (conn != null) conn.close(); } catch (Exception e) {}
}
The question is whether closing the connection alone does the job, or whether it leaves some resources in use.
What you have done is perfect and very good practice.
The reason I say it's good practice: if, for example, you are using a "primitive" kind of database pooling and you call connection.close(), the connection is returned to the pool, but the ResultSet and Statement are never closed, and then you will run into many different new problems!
So you can't always count on connection.close() to clean up.
Java 1.7 makes our lives much easier thanks to the try-with-resources statement.
try (Connection connection = dataSource.getConnection();
     Statement statement = connection.createStatement()) {

    try (ResultSet resultSet = statement.executeQuery("some query")) {
        // Do stuff with the result set.
    }

    try (ResultSet resultSet = statement.executeQuery("some query")) {
        // Do more stuff with the second result set.
    }
}
This syntax is quite brief and elegant, and the connection will indeed be closed even if the statement can't be created.
From the javadocs:
When a Statement object is closed, its current ResultSet object, if one exists, is also closed.
However, the javadocs are not very clear on whether the Statement and ResultSet are closed when you close the underlying Connection. They simply state that closing a Connection:
Releases this Connection object's database and JDBC resources immediately instead of waiting for them to be automatically released.
In my opinion, always explicitly close ResultSets, Statements and Connections when you are finished with them, as the implementation of close() can vary between database drivers.
You can save yourself a lot of boilerplate code by using methods such as closeQuietly from Apache Commons DbUtils.
I'm now using Oracle with Java. Here's my point of view:
You should close the ResultSet and Statement explicitly, because Oracle has had problems in the past with keeping cursors open even after closing the connection. If you don't close the ResultSet (cursor), it will throw an error like "Maximum open cursors exceeded".
I think you may encounter the same problem with the other databases you use.
Here is a tutorial, Close ResultSet when finished:
Close the ResultSet object as soon as you finish working with it. Even though the Statement object closes its ResultSet implicitly when it is closed, closing the ResultSet explicitly gives the garbage collector a chance to reclaim the memory as early as possible, because a ResultSet object may occupy a lot of memory depending on the query.
ResultSet.close();
If you want more compact code, I suggest using Apache Commons DbUtils. In this case:
Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    conn = ...; // retrieve connection
    stmt = conn.prepareStatement(...); // some SQL
    rs = stmt.executeQuery();
} catch (Exception e) {
    // error handling
} finally {
    DbUtils.closeQuietly(rs);
    DbUtils.closeQuietly(stmt);
    DbUtils.closeQuietly(conn);
}
No, you are not required to close anything but the Connection. Per the JDBC spec, closing a higher-level object automatically closes the lower-level objects: closing a Connection closes any Statements it created, and closing a Statement closes any ResultSets that Statement created. It doesn't matter whether the Connection is pooled or not; even a pooled connection has to be cleaned up before being returned to the pool.
Of course, you might have long nested loops on one Connection creating lots of statements; then closing them as you go is appropriate. I almost never close ResultSets, though; it seems excessive when closing the Statement or Connection will close them anyway.
Regarding "even a pooled connection has to be cleaned up before being returned to the pool": "clean up" usually means closing ResultSets and rolling back any pending transactions, but not closing the connection itself. Otherwise pooling loses its point.
The correct and safe way to close the resources associated with JDBC is this (taken from How to Close JDBC Resources Properly – Every Time):
Connection connection = dataSource.getConnection();
try {
    Statement statement = connection.createStatement();
    try {
        ResultSet resultSet = statement.executeQuery("some query");
        try {
            // Do stuff with the result set.
        } finally {
            resultSet.close();
        }
    } finally {
        statement.close();
    }
} finally {
    connection.close();
}
I created the following method as a reusable one-liner:
public void oneMethodToCloseThemAll(ResultSet resultSet, Statement statement, Connection connection) {
    if (resultSet != null) {
        try {
            if (!resultSet.isClosed()) {
                resultSet.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    if (statement != null) {
        try {
            if (!statement.isClosed()) {
                statement.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    if (connection != null) {
        try {
            if (!connection.isClosed()) {
                connection.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
I use this code in a parent class that all of my classes that send DB queries inherit from. I can use the one-liner on all queries, even if I don't have a ResultSet. The method takes care of closing the ResultSet, Statement and Connection in the correct order. This is what my finally block looks like:
finally {
    oneMethodToCloseThemAll(resultSet, preStatement, sqlConnection);
}
With Java 6 and later, I think it is better to check whether a resource is already closed before closing it, because the statement or result set can end up in a closed state outside your control, for example if a connection pooler evicts the connection in another thread, or after a network problem. (It doesn't happen often, but I had this problem with Oracle and DBCP.) My pattern for that (in older Java syntax) is:
try {
    // ...
    return resp;
} finally {
    if (rs != null && !rs.isClosed()) {
        try {
            rs.close();
        } catch (Exception e2) {
            log.warn("Cannot close resultset: " + e2.getMessage());
        }
    }
    if (stmt != null && !stmt.isClosed()) {
        try {
            stmt.close();
        } catch (Exception e2) {
            log.warn("Cannot close statement: " + e2.getMessage());
        }
    }
    if (con != null && !con.isClosed()) {
        try {
            con.close();
        } catch (Exception e2) {
            log.warn("Cannot close connection: " + e2.getMessage());
        }
    }
}
In theory it is not 100% perfect, because between checking the closed state and the close itself there is a small window in which the state can still change. In the worst case you get a warning in the log, but that is less likely than a state change during long-running queries. We use this pattern in production under an "average" load (150 simultaneous users) and have had no problems with it; I have never seen that warning message.
Some convenience functions:
public static void silentCloseResultSets(Statement st) {
    try {
        // Drain any remaining results so that their ResultSets are released.
        while (!(!st.getMoreResults() && (st.getUpdateCount() == -1))) {}
    } catch (SQLException ignore) {}
}

public static void silentCloseResultSets(Statement... statements) {
    for (Statement st : statements) silentCloseResultSets(st);
}
As far as I remember, in current JDBC, ResultSet and Statement implement the AutoCloseable interface. That means they can be closed automatically by a try-with-resources statement; they are not closed merely because they go out of scope and get garbage collected.
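To be clear about what AutoCloseable buys you, here is a small sketch (the query is a placeholder): the resources are closed automatically only when they are declared in a try-with-resources header, not just because the variables go out of scope.
// Closed automatically at the end of the try block:
try (Statement st = connection.createStatement();
     ResultSet rs = st.executeQuery("SELECT 1")) {
    while (rs.next()) {
        // read from rs
    }
}

// NOT closed automatically: rs2 simply goes out of scope, and the cursor
// stays open until the statement or connection is eventually closed.
Statement st2 = connection.createStatement();
ResultSet rs2 = st2.executeQuery("SELECT 1");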
I have a Java package that connects to a database and fetches some data. In some rare cases I get a heap memory exception, because the size of the fetched data exceeds the Java heap space. Increasing the Java heap space is not something the business will consider for now.
The other option is to catch the exception and continue the flow without stopping the execution. (I know catching OutOfMemoryError is not a good idea, but here only my local variables are affected.) My code is below:
private boolean stepCollectCustomerData() {
    try {
        boolean biResult = generateMetricCSV();
    } catch (OutOfMemoryError e) {
        log.error("OutOfMemoryError while collecting data");
        log.error(e.getMessage());
        return false;
    }
    return true;
}

private boolean generateMetricCSV() {
    // Executing the PAC & BI cluster SQL queries.
    try (Connection connection = DriverManager.getConnection("connectionURL", "username", "password")) {
        connection.setAutoCommit(false);
        for (RedshiftQueryDefinition redshiftQueryDefinition : redshiftQueryDefinitions) {
            File csvFile = new File(dsarConfig.getDsarHomeDirectory() + dsarEntryId,
                    redshiftQueryDefinition.getCsvFileName());
            log.info("Running the query for metric: " + redshiftQueryDefinition.getMetricName());
            try (PreparedStatement preparedStatement = createPreparedStatement(connection,
                         redshiftQueryDefinition.getSqlQuery(), redshiftQueryDefinition.getArgumentsList());
                 ResultSet resultSet = preparedStatement.executeQuery();
                 CSVWriter writer = new CSVWriter(new FileWriter(csvFile))) {
                if (resultSet.next()) {
                    resultSet.beforeFirst();
                    log.info("Writing the data to CSV file.");
                    writer.writeAll(resultSet, true);
                    log.info("Metric written to csv file: " + csvFile.getAbsolutePath());
                    filesToZip.put(redshiftQueryDefinition.getCsvFileName(), csvFile);
                } else {
                    log.info("There is no data for the metric " + redshiftQueryDefinition.getCsvFileName());
                }
            } catch (SQLException | IOException e) {
                log.error("Exception while generating the CSV file: " + e);
                e.printStackTrace();
                return false;
            }
        }
    } catch (SQLException e) {
        log.error("Exception while creating connection to the Redshift cluster: " + e);
        return false;
    }
    return true;
}
We are getting the exception at the line "ResultSet resultSet = preparedStatement.executeQuery()" in the second method, and I am catching it in the parent method. Now, I need to make sure that when the exception is caught in the parent method, the GC has already been triggered and has cleared the memory used by the local variables (such as the connection and result set variables). If not, when will that happen?
I am worried about the Java heap space because this is a continuous flow and I need to keep fetching data for other users.
The code I have provided is only to explain the underlying issue and flow, so kindly ignore syntax issues, etc. I am using JDK 8.
Thanks in advance.
I use this code to fetch data from a database table.
public List<Dashboard> getDashboardList() throws SQLException {
    if (ds == null) {
        throw new SQLException("Can't get data source");
    }
    // get database connection
    Connection con = ds.getConnection();
    if (con == null) {
        throw new SQLException("Can't get database connection");
    }
    PreparedStatement ps = con.prepareStatement("SELECT * from GLOBALSETTINGS");
    // get customer data from database
    ResultSet result = ps.executeQuery();
    List<Dashboard> list = new ArrayList<Dashboard>();
    while (result.next()) {
        Dashboard cust = new Dashboard();
        cust.setUser(result.getString("SessionTTL"));
        cust.setPassword(result.getString("MAXACTIVEUSERS"));
        // store all data into a List
        list.add(cust);
    }
    return list;
}
This code is part of a JSF page deployed on a GlassFish server. The problem is that when I reload the JSF page many times (around 8), the web page freezes. I suspect that the connection pool is full and there is no room for new connections. How can I solve the problem? Close the connection when the query is finished, or is there another way?
Best wishes
First of all: yes, you should close your connection when you're done by explicitly calling the close() method. Closing a connection releases database resources.
UPDATE: You should close the PreparedStatement as well (with close()). I would also recommend handling SQLExceptions inside your method rather than throwing them, since you need to make sure that your statement and connection are closed even if an exception occurs.
Something like this:
Connection connection = dataSource.getConnection();
try {
    PreparedStatement statement = connection.prepareStatement(...); // some SQL
    try {
        // Work with the statement
    } catch (SQLException e) {
        // Handle exceptions from the query
    } finally {
        statement.close();
    }
} catch (SQLException e) {
    // Handle exceptions from creating the statement
} finally {
    connection.close();
}
Furthermore, you should not query the database in a bean field's getter method. Getters can be called several times during each request. The more elegant way is to prepare the dashboard list in the constructor or a @PostConstruct method of your bean.
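A sketch of that idea (the bean and DAO names here are made up; the point is that the query from getDashboardList() runs once per bean lifecycle instead of on every getter call):
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class DashboardBean {

    private List<Dashboard> dashboards;

    // Hypothetical helper/DAO that contains the getDashboardList() code from the question.
    private DashboardDao dashboardDao = new DashboardDao();

    @PostConstruct
    public void init() {
        try {
            dashboards = dashboardDao.getDashboardList(); // query runs once per view
        } catch (SQLException e) {
            dashboards = Collections.emptyList();         // or add a FacesMessage
        }
    }

    // The getter only returns the cached list; it never touches the database.
    public List<Dashboard> getDashboards() {
        return dashboards;
    }
}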
I have an app that connects to a MySQL database. It loses the connection in the middle of the night and then spouts messages about null connections and JDBC not having received any messages in X seconds.
I call getConnection() before I do anything that requires communication with the SQL server.
This is my getConnection() method:
private Connection getConnection() {
    try {
        if (connection != null) {
            if (connection.isClosed() || !connection.isValid(10000)) {
                this.initializeRamsesConnection();
            }
        } else {
            this.initializeRamsesConnection();
        }
    } catch (Exception e) {
        debug("Connection failed: " + e);
    }
    return connection;
}
In the initializeRamsesConnection() method I put the password and other connection information into a string and then create the connection in the standard JDBC way.
Then I call this method:
private Connection getConnectionFromConnectionString() {
    Connection con = null;
    String driver = "com.mysql.jdbc.Driver";
    try {
        Class.forName(driver); // jdbc sorcery
        // if there is no connection string
        if (getConnectionString() == null) {
            HMIDatabaseAdapter.debug("No connection string");
        }
        // makes a string out of the values of db/host
        String str = getConnectionString();
        // if there is no driver
        if (driver == null) {
            debug("" + ": " + "No driver");
        }
        // tries to make a connection from the connection string, username, and password
        con = DriverManager.getConnection(str, username, password);
        // if for some reason the connection is null
        if (con == null) {
            HMIDatabaseAdapter.debug("CONNECTION IS NULL, WHAT?");
        }
    } catch (Exception ex) {
        HMIDatabaseAdapter.debug("getConnection() " + ex);
    }
    return con;
}
What can I change in either of these methods to accommodate losing connection?
This is not the correct way of retrieving a connection. You're retrieving the connection and assigning it to an instance (or worse, static) variable of the class. Basically, you're keeping the connection open forever and reusing a single connection for all queries. This may end in disaster if the queries are executed by different threads. Also, when it has been kept open too long, the DB will reclaim it because it assumes it is dead or leaked.
You should acquire and close the connection in the shortest possible scope, i.e. in the very same try block as where you're executing the query. Something like this:
public Entity find(Long id) throws SQLException {
    Entity entity = null;
    try (
        Connection connection = dataSource.getConnection(); // This should return a NEW connection!
        PreparedStatement statement = connection.prepareStatement(SQL_FIND);
    ) {
        statement.setLong(1, id);
        try (ResultSet resultSet = statement.executeQuery()) {
            if (resultSet.next()) {
                entity = new Entity(
                    resultSet.getLong("id"),
                    resultSet.getString("name"),
                    resultSet.getInt("value")
                );
            }
        }
    }
    return entity;
}
If you worry about connection performance and want to reuse connections, then you should be using a connection pool. You could grow your own, but I strongly discourage it as you seem to be fairly new to this. Just use an existing connection pool like BoneCP, C3P0 or DBCP. Note that you should not change the JDBC idiom shown in the example above: you still need to acquire and close the connection in the shortest possible scope. The connection pool itself will worry about actually reusing, testing and/or closing the connections.
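As an illustration only, a minimal pool setup might look like this (a sketch using Apache Commons DBCP2; property names differ slightly between pool libraries and DBCP versions, and the URL and credentials are placeholders):
import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

public class Database {

    // One pool for the whole application; it hands out, validates and reclaims connections.
    private static final BasicDataSource dataSource = new BasicDataSource();

    static {
        dataSource.setUrl("jdbc:mysql://localhost:3306/yourdb");
        dataSource.setUsername("user");
        dataSource.setPassword("secret");
        dataSource.setMaxTotal(10); // maximum number of pooled connections (DBCP2 name)
    }

    public static DataSource getDataSource() {
        return dataSource;
    }
}
Each DAO method then calls getDataSource().getConnection() in a try-with-resources block exactly as in the find() example above; close() returns the connection to the pool rather than physically closing it.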
See also:
Am I Using JDBC Connection Pooling?
JDBC MySql connection pooling practices to avoid exhausted connection pool
Where in your code do the errors about losing the connection come from? That would probably be the best place to start.
Off the top of my head (and I may be wrong), JDBC connections only get closed on an actual fatal error, so you won't know a connection has failed until you try to do something with it.
What I've done in the past is invalidate the connection at the point of failure and retry periodically.
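A rough sketch of that idea against the code above (it assumes the connection field and getConnection() method from the question; the helper names are made up):
// Drop the cached connection so the next getConnection() call reconnects.
private synchronized void invalidateConnection() {
    try {
        if (connection != null) connection.close();
    } catch (SQLException ignore) {
        // the connection is already broken; nothing useful to do here
    }
    connection = null;
}

// Run an update, invalidating and retrying once if the connection has gone stale.
public int updateWithRetry(String sql) throws SQLException {
    SQLException last = null;
    for (int attempt = 0; attempt < 2; attempt++) {
        Connection con = getConnection();  // re-creates the connection if needed
        if (con == null) {
            continue;                      // connection attempt itself failed; try again
        }
        try (Statement st = con.createStatement()) {
            return st.executeUpdate(sql);
        } catch (SQLException e) {
            last = e;
            invalidateConnection();        // invalidate at the point of failure
        }
    }
    throw last != null ? last : new SQLException("could not obtain a connection");
}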
Maybe this is what you are looking for:
http://dev.mysql.com/doc/refman/5.0/en/auto-reconnect.html
For java see autoReconnect:
http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
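For example, the property can be appended to the JDBC URL (host and database are placeholders; note that the MySQL documentation itself discourages relying on autoReconnect, and a validated connection pool is usually the more robust fix):
jdbc:mysql://<host>:3306/<database>?autoReconnect=true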
My application has a memory leak resulting from my usage of JDBC. I have verified this by looking at a visual dump of the heap and seeing thousands of instances of ResultSet and associated objects. My question, then, is how do I appropriately manage resources used by JDBC so they can be garbage collected? Do I need to call ".close()" for every statement that is used? Do I need to call ".close()" on the ResultSets themselves?
How would you free the memory used by the call:
ResultSet rs = connection.createStatement().executeQuery("some sql query");
??
I see that there are other, very similar questions. Apologies if this is redundant, but either I don't quite follow the answers or they don't seem to apply universally. I am trying to get an authoritative answer on how to manage memory when using JDBC.
::EDIT:: Adding some code samples
I have a class that is basically a JDBC helper I use to simplify database interactions. The two main methods are for executing an insert or update, and for executing select statements.
This one is for executing insert or update statements:
public int executeCommand(String sqlCommand) throws SQLException {
    if (connection == null || connection.isClosed()) {
        sqlConnect();
    }
    Statement st = connection.createStatement();
    int ret = st.executeUpdate(sqlCommand);
    st.close();
    return ret;
}
And this one for returning ResultSets from a select:
public ResultSet executeSelect(String select) throws SQLException {
    if (connection == null || connection.isClosed()) {
        sqlConnect();
    }
    ResultSet rs = connection.createStatement().executeQuery(select);
    return rs;
}
After using the executeSelect() method, I always call resultset.getStatement().close().
Examining a heap dump with object allocation tracing turned on shows statements still being held onto from both of those methods...
You should close the Statement if you are not going to reuse it. It is usually good form to first close the ResultSet as some implementations did not close the ResultSet automatically (even if they should).
If you are repeating the same queries, you should probably use a PreparedStatement to reduce parsing overhead. And if you add parameters to your query, you really should use a PreparedStatement to avoid the risk of SQL injection.
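For example, a parameterized lookup might look like this (a sketch; the table and column names are illustrative):
String sql = "SELECT name, points FROM users WHERE id = ?"; // '?' is a bind parameter
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setLong(1, userId); // the value is bound, never concatenated into the SQL
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            String name = rs.getString("name");
            int points = rs.getInt("points");
            // ... use the row ...
        }
    } // ResultSet closed here
}     // PreparedStatement closed here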
Yes, ResultSets and Statements should always be closed in a finally block. Using JDBC wrappers such as Spring's JdbcTemplate helps make the code less verbose and closes everything for you.
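For instance, a select that currently goes through executeSelect() could be written roughly like this with JdbcTemplate (a sketch; the table, columns and User class are illustrative, and dataSource is assumed to be a pooled DataSource):
public List<User> findUsers(int minPoints) {
    JdbcTemplate jdbc = new JdbcTemplate(dataSource);
    // JdbcTemplate creates and closes the Statement and ResultSet internally,
    // so nothing is left for the caller to clean up.
    return jdbc.query(
            "SELECT id, name FROM users WHERE points >= ?",
            (rs, rowNum) -> new User(rs.getLong("id"), rs.getString("name")),
            minPoints);
}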
I copied this from a project I have been working on. I am in the process of refactoring it to use Hibernate (from the code it should be clear why!). Using an ORM tool like Hibernate is one way to resolve your issue. Otherwise, here is the way I used plain DAOs to access the data. There is no memory leak in our code, so this may help as a template. Hope it helps; memory leaks are terrible!
@Override
public List<CampaignsDTO> getCampaign(String key) {
    ResultSet resultSet = null;
    PreparedStatement statement = null;
    try {
        statement = connection.prepareStatement(getSQL("CampaignsDAOImpl.getPendingCampaigns"));
        statement.setString(1, key);
        resultSet = statement.executeQuery();
        List<CampaignsDTO> list = new ArrayList<CampaignsDTO>();
        while (resultSet.next()) {
            list.add(new CampaignsDTO(
                resultSet.getTimestamp(resultSet.findColumn("cmp_name")),
                ...));
        }
        return list;
    } catch (SQLException e) {
        logger.fatal(LoggerCodes.DATABASE_ERROR, e);
        throw new RuntimeException(e);
    } finally {
        close(statement);
    }
}
The close() method looks like this:
public void close(PreparedStatement statement) {
    try {
        if (statement != null && !statement.isClosed())
            statement.close();
    } catch (SQLException e) {
        logger.debug(LoggerCodes.TRACE, "Warning! PreparedStatement could not be closed.");
    }
}
You should close JDBC statements when you are done. ResultSets should be released when their associated statements are closed, but you can close them explicitly if you want.
You need to make sure that you also close all JDBC resources in exception cases.
Use a try-catch-finally block, e.g.:
try {
    conn = dataSource.getConnection();
    stmt = conn.createStatement();
    rs = stmt.executeQuery("select * from sometable");
    stmt.close();
    conn.close();
} catch (Throwable t) {
    // do error handling
} finally {
    try {
        if (stmt != null) {
            stmt.close();
        }
        if (conn != null) {
            conn.close();
        }
    } catch (Exception e) {
    }
}