I've been doing code review (mostly using tools like FindBugs) of one of our pet projects, and FindBugs marked the following code as erroneous (pseudocode):
Connection conn = dataSource.getConnection();
try {
    PreparedStatement stmt = conn.prepareStatement();
    // initialize the statement
    stmt.execute();
    ResultSet rs = stmt.getResultSet();
    // get data
} finally {
    conn.close();
}
The error was that this code might not release resources. I figured out that the ResultSet and Statement were not closed, so I closed them in the finally block:
finally {
    try {
        rs.close();
    } catch (SQLException se) {
        // log it
    }
    try {
        stmt.close();
    } catch (SQLException se) {
        // log it
    }
    conn.close();
}
But I have encountered the pattern above in many projects (from quite a few companies), and no one was closing ResultSets or Statements.
Have you had trouble with ResultSets and Statements not being closed when the Connection is closed?
I found only this, and it refers to Oracle having problems with closing ResultSets when closing Connections (we use an Oracle DB, hence my corrections). The java.sql API says nothing about it in the Connection.close() Javadoc.
One problem with closing ONLY the connection and not the result set is that, if your connection-management code uses connection pooling, connection.close() just puts the connection back in the pool. Additionally, some databases have a cursor resource on the server that will not be freed properly unless it is explicitly closed.
I've had problems with unclosed ResultSets in Oracle, even though the connection was closed. The error I got was
"ORA-01000: maximum open cursors exceeded"
So: Always close your ResultSet!
You should always close all JDBC resources explicitly. As Aaron and John already said, closing a connection will often only return it to a pool, and not all JDBC drivers are implemented in exactly the same way.
Here is a utility method that can be used from a finally block:
public static void closeEverything(ResultSet rs, Statement stmt, Connection con) {
    if (rs != null) {
        try {
            rs.close();
        } catch (SQLException e) {
            // ignore (or log)
        }
    }
    if (stmt != null) {
        try {
            stmt.close();
        } catch (SQLException e) {
            // ignore (or log)
        }
    }
    if (con != null) {
        try {
            con.close();
        } catch (SQLException e) {
            // ignore (or log)
        }
    }
}
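For example, it could be called like this from a finally block (dataSource and the SQL are just placeholders, and the surrounding method would declare throws SQLException):
Connection con = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    con = dataSource.getConnection();
    stmt = con.prepareStatement("SELECT name FROM employee");
    rs = stmt.executeQuery();
    while (rs.next()) {
        // process each row
    }
} finally {
    closeEverything(rs, stmt, con); // safe to call even if some of them are still null
}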
Oracle will give you errors about open cursors in this case.
According to http://java.sun.com/javase/6/docs/api/java/sql/Statement.html,
it looks like reusing a statement will close any open ResultSet, and closing a statement will close its ResultSet, but I don't see anything saying that closing a connection will close any of the resources it created.
All of those details are left to the JDBC driver provider.
It's always safest to close everything explicitly. We wrote a util class that wraps everything with try { ... } catch (Throwable t) {} so that you can just call Utils.close(rs), Utils.close(stmt), etc. without having to worry about exceptions that close() can supposedly throw.
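A sketch of what such a utility class might look like (the class and method names are my guess at what is being described, not the actual code):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public final class Utils {

    private Utils() {}

    // Each overload swallows whatever close() throws, so callers can
    // unconditionally call Utils.close(...) from a finally block.
    public static void close(ResultSet rs) {
        if (rs != null) {
            try { rs.close(); } catch (Throwable ignored) { /* log if you care */ }
        }
    }

    public static void close(Statement stmt) {
        if (stmt != null) {
            try { stmt.close(); } catch (Throwable ignored) { /* log if you care */ }
        }
    }

    public static void close(Connection con) {
        if (con != null) {
            try { con.close(); } catch (Throwable ignored) { /* log if you care */ }
        }
    }
}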
The ODBC Bridge can produce a memory leak with some ODBC drivers.
If you use a good JDBC driver then you should not have any problems with closing just the connection. But there are two problems:
Do you know whether you have a good driver?
Will you use other JDBC drivers in the future?
So the best practice is to close everything explicitly.
I work in a large J2EE web environment. We have several databases that may be connected to in a single request. We began getting logical deadlocks in some of our applications. The issue was as follows:
1. User requests a page
2. Server connects to DB 1
3. Server runs a SELECT on DB 1
4. Server "closes" the connection to DB 1
5. Server connects to DB 2
6. Deadlock!
This occurred for two reasons: we were experiencing a far higher volume of traffic than normal, and the J2EE spec by default does not actually close your connection until the thread finishes execution. So, in the above example, step 4 never actually closed the connection, even though it was closed properly in the finally block.
To fix this, you have to use resource references in web.xml for your database connections, and you have to set res-sharing-scope to Unshareable.
Example:
<resource-ref>
    <description>My Database</description>
    <res-ref-name>jdbc/jndi/pathtodatasource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
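With that resource-ref in place, the code typically looks the DataSource up through JNDI under java:comp/env, roughly like this (a sketch reusing the res-ref-name from the example above):
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    public Connection getUnsharedConnection() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        // "java:comp/env/" + the res-ref-name declared in web.xml
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/jndi/pathtodatasource");
        return ds.getConnection(); // remember to close() it in a finally block
    }
}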
I've definitely seen problems with unclosed ResultSets, and what can it hurt to close them all the time, right? The unreliability of having to remember to do this is one of the best reasons to move to frameworks that manage these details for you. It might not be feasible in your development environment, but I've had great luck using Spring to manage JPA transactions. The messy details of opening connections, statements, and result sets, and of writing over-complicated try/catch/finally blocks (with try/catch blocks inside the finally block!) to close them again, just disappear, leaving you to actually get some work done. I'd highly recommend migrating to that kind of solution.
In Java, Statements (not ResultSets) correlate to cursors in Oracle. It is best to close the resources that you open, as unexpected behavior can occur with regard to the JVM and system resources.
Additionally, some JDBC pooling frameworks pool Statements and Connections, so failing to close them might not mark those objects as free in the pool, causing performance issues in the framework.
In general, if there is a close() or destroy() method on an object, there's a reason to call it, and ignoring it is done at your own peril.
Related
I have a Bukkit plugin (Minecraft) that requires a connection to the database.
Should a database connection stay open all the time, or be opened and closed when needed?
The database connection should be opened only when it's needed and closed after all the necessary work with it is done. Code sample:
Prior to Java 7:
Connection con = null;
try {
con = ... //retrieve the database connection
//do your work...
} catch (SQLException e) {
//handle the exception
} finally {
try {
if (con != null) {
con.close();
}
} catch (SQLException shouldNotHandleMe) {
//...
}
}
Java 7:
try (Connection con = ...) {
} catch (SQLException e) {
}
// no need to call Connection#close since the Connection interface now extends AutoCloseable
But since manually opening a database connection is expensive, it is highly recommended to use a database connection pool, represented in Java by the DataSource interface. The pool will manage the physical database connections for you: when you close a pooled connection (i.e. call Connection#close), the physical database connection is simply returned to the pool and kept open.
Related Q/A:
Java Connection Pooling
Some tools to handle database connection pooling:
BoneCP
c3p0
Apache Commons DBCP
HikariCP
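For example, a minimal HikariCP setup (one of the pools listed above) might look like this; the JDBC URL and credentials are placeholders:
import java.sql.Connection;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        config.setUsername("appuser");                         // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);

        try (HikariDataSource dataSource = new HikariDataSource(config);
             Connection con = dataSource.getConnection()) {
            // con.close() at the end of this block only returns the connection
            // to the pool; it does not tear down the physical connection
        }
    }
}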
It depends on what your needs are.
Creating a connection takes some time, so if you need to access the database frequently it's better to keep the connection open. It's also better to create a pool, so that many users can access the database simultaneously (if that's needed).
If you only need to use the connection a few times, you may not want to keep it open, but then you will have a delay whenever you access the database. So I suggest using a timer that keeps the connection open for some time (a connection timeout).
You need to close your connection after each query execution. Sometimes you need to execute multiple queries at the same time because the queries depend on each other, such as "first insert the task, then assign it to the employees". In that case, execute the queries in the same transaction and commit it; if an error occurs, roll back. Note that auto-commit is enabled by default in JDBC, so you have to disable it for the transaction. Example
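A rough sketch of the "insert a task, then assign it" case in one transaction (dataSource and the table/column names here are made up for illustration):
Connection con = dataSource.getConnection();
try {
    con.setAutoCommit(false); // auto-commit is on by default; turn it off for the transaction
    try (PreparedStatement insertTask = con.prepareStatement(
                 "INSERT INTO task (name) VALUES (?)");
         PreparedStatement assignTask = con.prepareStatement(
                 "INSERT INTO task_assignment (task_name, employee_id) VALUES (?, ?)")) {
        insertTask.setString(1, "Prepare report");
        insertTask.executeUpdate();
        assignTask.setString(1, "Prepare report");
        assignTask.setLong(2, 42L);
        assignTask.executeUpdate();
        con.commit();   // both statements succeed together...
    } catch (SQLException e) {
        con.rollback(); // ...or neither does
        throw e;
    }
} finally {
    con.close();
}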
Use connection pooling. If you are developing a web application, use the app server's connection pooling. The app server will use the same pool for all of your applications, so you can control the connection count from one place. I highly recommend the Apache Tomcat connection pool. Example
Some additional info about Connection, Statement and ResultSet:
1. If you close the Connection, you don't need to close the Statement or ResultSet; both of them will be closed automatically.
2. If you close the Statement, it will close the ResultSet as well.
3. If you use try-with-resources like this:
try (Connection con = ...) {
} catch (SQLException e) {
}
it will close the connection automatically, because try-with-resources requires AutoCloseable objects and Connection is AutoCloseable. You can see the details about try-with-resources here.
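A fuller sketch with all three resources (dataSource and the query are just examples); they are closed automatically in reverse order of declaration:
String sql = "SELECT id, name FROM employee WHERE department = ?";
try (Connection con = dataSource.getConnection();
     PreparedStatement stmt = con.prepareStatement(sql)) {
    stmt.setString(1, "SALES");
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getLong("id") + " " + rs.getString("name"));
        }
    }
} catch (SQLException e) {
    // handle or log
}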
Actually, it all depends on how you write your application! It's an art, but sadly everyone takes a tutorial (like Microsoft's tutorials) as best practice.
If you know what you are coding, you can keep your connection open for the lifetime of the application. It's like roads: we don't build a special route just for you to get to work in the morning; you take the same one, two or four routes everyone else does. You judge the traffic and build two, four or six routes as needed. If there is still traffic on those routes, you wait!
Happy coding.
The connection should be opened only when required. If it is opened before the actual need, it takes one active connection away from the connection pool, so it ultimately affects the users of the application.
So it is always better practice to open the connection only when required and to close it after the work is done.
Always put your connection-closing logic inside a finally block; that ensures your connection is closed even if an exception occurs in the application:
finally {
    connection.close();
}
I'm working on a project written in JSF, but with no persistence layer; the queries are plain JDBC in the beans. At application start the JDBC connection is instantiated, and if the user exists and enters the correct password, the authentication bean is instantiated. My problem is that I don't know exactly how to destroy the connection when the authentication bean dies, for example because of a timeout. My other problem is: how would I know the user's session is over if they don't click the log-out button and simply close the browser?
Consider seriously using a connection pool. It will make your life easier :)
For example, when you authenticate a user, you just grab a connection from the pool, do the validation, and then close the connection, which returns it to the pool.
At application start the JDBC connection is instantiated
This is the wrong approach. The connection should be opened in the very same try block where you create and execute the statement and gather the results. The connection (and statement and result set) must be closed in the finally block of that try block.
Not doing so may lead to resource leaks and to unexpected (and undesired) application behaviour when that happens, and/or when the DB server decides to time out the connection because it has been kept open too long by your application.
The following is the basic JDBC idiom:
Connection connection = null;
PreparedStatement statement = null;
ResultSet resultSet = null;
try {
connection = database.getConnection();
statement = connection.prepareStatement(SOME_SQL);
resultSet = statement.executeQuery();
// ...
} finally {
if (resultSet != null) try { resultSet.close(); } catch (SQLException ignore) {}
if (statement != null) try { statement.close(); } catch (SQLException ignore) {}
if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
}
To improve connection performance, you can always use a connection pool, but do not change the basic JDBC idiom of acquiring and closing the resources in the shortest scope in a try-finally block. Most decent servlet containers / application servers ship with built-in connection pooling facilities. As long as it's unclear which one you're using, it's impossible to give a well-suited answer about it.
That said, I would still strongly recommend detaching the persistence layer from your MVC layer. It will make it more testable, reusable and maintainable.
See also:
Basic DAO tutorial
I am reviewing a big pile of existing code, trying to find unclosed connections that would cause the connection pool to run out or throw other errors.
In some places I see the connection is returned to the pool, the ResultSet is closed, but the PreparedStatement is not closed.
In pseudocode it would look like this:
Connection conn = null;
try {
conn = MyJdbcTemplateHolder.getNewConnectionFromPool();
PreparedStatement ps = conn.prepareStatement(sql, ...);
ResultSet rs = ps.executeQuery();
// do stuff with results
} catch(Exception e) {
// exception
} finally {
rs.close();
MyJdbcTemplateHolder.returnConnectionToPool(conn);
//***** Here is what's missing: ps.close(); *****
}
The question is: can the open statement cause issues because it wasn't explicitly closed? Or is closing the ResultSet and returning the connection enough?
Obviously I am not talking about one open statement - we have a pool of 100 connections and dozens of places in the code where this issue may come up.
MySQL version is 5.1
My JDBC jar is mysql-connector-java-5.1.11-bin.jar
The answer is yes, it can cause issues. As discussed here on SO:
Closing JDBC Connections in Pool
JDBC MySql connection pooling practices to avoid exhausted connection pool
If you don't close connection-related resources in reverse order after you're done with them (or in a finally block), you're at risk. Connection pools vary in how they handle this, but it is worrisome, at a minimum, that an improperly closed set of resources is thrown back into the pool.
In case it was unclear (and you may already know this), proper closing of resources is discussed further here:
How to properly clean up JDBC resources in Java?
Note that in the forthcoming Java 7 there will be some help here:
http://www.javaspecialists.eu/archive/Issue190.html
in which a new try-with-resources statement is introduced that automatically closes any AutoCloseable resources declared in the try statement.
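Applied to the pseudocode in the question, the Java 7 form would look roughly like this (MyJdbcTemplateHolder and sql are the poster's own names, reused only for illustration):
Connection conn = MyJdbcTemplateHolder.getNewConnectionFromPool();
try {
    try (PreparedStatement ps = conn.prepareStatement(sql);
         ResultSet rs = ps.executeQuery()) {
        // do stuff with results
    } // rs and ps are closed here automatically, even when an exception is thrown
} catch (Exception e) {
    // exception handling as before
} finally {
    MyJdbcTemplateHolder.returnConnectionToPool(conn);
}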
I have a connection leak in some older Java web applications which do not utilize connection pooling.
Trying to find the leak is hard because IT will not grant me access to v$session (SELECT COUNT(*) FROM v$session;).
So instead I am trying to debug with System.out statements. Even after closing the connection with conn.close(), when I print conn to the system log file it still gives me the connection object's name.
Connection conn = null;
try {
    conn = ...; // obtain the connection (no pooling in these apps)
    // ... work with the connection ...
    conn.close();
} catch (SQLException e) {
    // ignored
} finally {
    if (conn != null) {
        try {
            System.out.println("Closing the connection");
            conn.close();
        } catch (Exception ex) {
            System.out.println("Exception is " + ex);
        }
    }
}
// I then check conn and it is not null and I can print the object name.
if (conn != null) {
System.out.println("Connection is still open and is " + conn);
}
However, if I also add conn = null; below the conn.close(); statement, the connection now seems closed. So my question is: does conn.close(); actually release my connection, or do I also have to set it to null to really release it? Like I said, it is really hard for me to determine whether the connection is actually released without being able to query v$session. Is there a snippet of Java code which can give me my open connections?
It's probably educational at this point because I plan to refactor these applications to use connection pooling but I'm looking for a quick bandaid for now.
The important part of the close is what's happening on the database side. It's the RDBMS that has to close that connection. Calling the close() method is what communicates the message to the database to close the connection.
Setting the connection to null doesn't instruct RDBMS to do anything.
The same logic applies to the ResultSet, which is a cursor on the database side, and to the Statement. You need to close those in individual try/catch blocks in the finally block of the method that created them, in reverse order of creation. Otherwise you'll see errors about "Max cursors exceeded".
Setting conn to null only breaks the reference to the connection object and has no influence on whether the connection is open. If the connection is still open, it will still be referenced from inside the JDBC driver / connection pool, etc.
Setting a variable to null is, more than anything else, telling the garbage collector that it is OK to clean up the original object whenever it wants to.
As others have said, you've got two different concepts here: closing the connection and tracking the connection in a variable.
To close the connection, call conn.close(). This will not set the variable conn to null. You can test whether the connection is open with conn.isClosed().
If you don't care to track the connection in your code any more, you can set conn = null. This does not immediately close the connection. I believe the connection will eventually be closed automatically, based on the Connection.close() Javadoc:
Releases this Connection object's database and JDBC resources immediately instead of waiting for them to be automatically released.
If you choose to go this route, be aware that the garbage collector may not close your connection as quickly as you want, and you may have what appears to be a resource leak: reserved database locks won't be released until the connection is garbage collected. Certain drivers (I don't know whether Oracle's is one) impose a maximum limit on the number of connections that may exist at one time, so leaving connections open can also cause later connection attempts to fail.
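A tiny sketch of the difference (the Oracle URL and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CloseVsNullDemo {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
        conn.close();                         // asks the driver/DB to release the connection
        System.out.println(conn == null);     // false: closing never nulls the variable
        System.out.println(conn.isClosed());  // true: the reliable way to test for "closed"
        conn = null;                          // only drops the reference, for the GC's benefit
    }
}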
Connection leaks are a beast. I think a good strategy is to wrap the getting and releasing of connections in a couple of functions, and then always get and release your connections through those functions. Those functions can maintain a list of all open connections and record a stack trace of the caller of the allocate function. Then have a screen that shows a list of all open connections and where they came from. Run this in a test environment, click around through a bunch of screens, then exit them all so all the connections SHOULD be closed, bring up the screen that shows open connections, and the villain should be revealed.
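A rough sketch of that idea (all of the names here are hypothetical; the stack trace captured at allocation time is what tells you where a leaked connection came from):
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class ConnectionTracker {

    private static final Map<Connection, Throwable> OPEN = new ConcurrentHashMap<>();

    public static Connection allocate(javax.sql.DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        OPEN.put(con, new Throwable("allocated here")); // remember the caller's stack trace
        return con;
    }

    public static void release(Connection con) throws SQLException {
        OPEN.remove(con);
        con.close();
    }

    // Call this from your "show open connections" screen.
    public static void dumpOpenConnections() {
        for (Throwable origin : OPEN.values()) {
            origin.printStackTrace(); // shows where the still-open connection was allocated
        }
    }
}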
My explanation here is an educated guess.
As a practice, I have always set conn = null after the close. I believe that when you do conn.close() you are telling the garbage collector that the object is ready to be garbage collected; however, it is up to the garbage-collection process to determine when to do so.
Also, instead of checking if (conn != null), you can check whether the connection is actually still open with if (!conn.isClosed()).
Is there a snippet of Java code which can give me my open connections?
Statement smt = null;
ResultSet rs = null;
try {
    // Create Statement from connection
    smt = conn.createStatement();
    // Execute query on the statement
    rs = smt.executeQuery("SELECT 1 FROM DUAL");
    if (rs.next()) {
        return true; // connection is valid
    }
    return false;
} catch (SQLException e) {
    // Some sort of logging
    return false;
} finally {
    if (rs != null) try { rs.close(); } catch (SQLException ignore) {}
    if (smt != null) try { smt.close(); } catch (SQLException ignore) {}
}
Just a quick guess, assuming you are using Oracle.
Suggestion: why don't you install JBoss and set up connection pooling through it?
I'm using red5 1.0.0rc1 to create an online game.
I'm connecting to a MySQL database using the JDBC MySQL connector v5.1.12.
It seems that after several hours of idle time my application can no longer run queries, because the connection to the DB got closed, and I have to restart the application.
How can I resolve this issue?
Kfir
The MySQL JDBC driver has an autoReconnect feature that can be helpful on occasion; see "Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J"1, and read the caveats.
A second option is to use a JDBC connection pool.
A third option is to perform a query to test that your connection is still alive at the start of each transaction. If the connection is not alive, close it and open a new connection. A common query is SELECT 1. See also:
Cheapest way to to determine if a MySQL connection is still alive
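A minimal sketch of that check, assuming you keep a reference to the current connection and have a connect() helper of your own (from JDBC 4 onwards, Connection#isValid(timeout) does much the same thing without a hand-written query):
private Connection ensureAlive(Connection con) throws SQLException {
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery("SELECT 1")) {
        rs.next();
        return con;                 // the connection answered, keep using it
    } catch (SQLException dead) {
        try { con.close(); } catch (SQLException ignore) {}
        return connect();           // hypothetical helper that opens a fresh connection
    }
}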
A simple solution is to change the MySQL configuration properties to set the session idle timeout to a really large number. However:
This doesn't help if your application is liable to be idle for a really long time.
If your application (or some other application) is leaking connections, increasing the idle timeout could mean that lost connections stay open indefinitely ... which is not good for database memory utilization.
1 - If the link breaks (again), please Google for the quoted page title then edit the answer to update it with the new URL.
Well, you reopen the connection.
Connection pools (which are highly recommended, BTW; if you run Java EE, your container - Tomcat, JBoss, etc. - can provide a javax.sql.DataSource through JNDI that handles pooling and more for you) validate connections before handing them out by running a very simple validation query (like SELECT 1). If the validation query doesn't work, the pool throws away the connection and opens a new one.
Increasing the connection or server timeout tends to just postpone the inevitable.
I had the same issue in my application, and I removed the idle-timeout tag from the datasource configuration; that's it, it really worked fine. I was using the JBoss server, and I made the change in the mysql-ds.xml file. Let me know if you have any more doubts.
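The relevant part of a JBoss mysql-ds.xml usually looks something like the sketch below; the values are placeholders and the idle-timeout element is my assumption about the tag being described:
<datasources>
    <local-tx-datasource>
        <jndi-name>MySqlDS</jndi-name>
        <connection-url>jdbc:mysql://localhost:3306/mydb</connection-url>
        <driver-class>com.mysql.jdbc.Driver</driver-class>
        <user-name>appuser</user-name>
        <password>secret</password>
        <!-- removing (or raising) this idle timeout keeps pooled connections
             from being dropped while they sit unused -->
        <idle-timeout-minutes>15</idle-timeout-minutes>
    </local-tx-datasource>
</datasources>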
The normal JDBC idiom is to acquire and close the Connection (and also the Statement and ResultSet) in the shortest possible scope, i.e. in the very same try-finally block of the method where you're executing the query. You should not hold the connection open all the time. The DB will time out and reclaim it sooner or later; in MySQL it's after 8 hours by default.
To improve connection performance you should really consider using a connection pool, like c3p0 (here's a developer guide). Note that even when using a connection pool, you still have to write proper JDBC code: acquire and close all the resources in the shortest possible scope. The connection pool will in turn worry about actually closing the connection or just releasing it back to the pool for further reuse.
Here's a kick-off example of how your method retrieving a list of entities from the DB should look:
public List<Entity> list() throws SQLException {
// Declare resources.
Connection connection = null;
PreparedStatement statement = null;
ResultSet resultSet = null;
List<Entity> entities = new ArrayList<Entity>();
try {
// Acquire resources.
connection = database.getConnection();
statement = connection.prepareStatement("SELECT id, name, value FROM entity");
resultSet = statement.executeQuery();
// Gather data.
while (resultSet.next()) {
Entity entity = new Entity();
entity.setId(resultSet.getLong("id"));
entity.setName(resultSet.getString("name"));
entity.setValue(resultSet.getInt("value"));
entities.add(entity);
}
} finally {
// Close resources in reversed order.
if (resultSet != null) try { resultSet.close(); } catch (SQLException logOrIgnore) {}
if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
}
// Return data.
return entities;
}
See also:
DAO tutorial - How to write proper JDBC code
Do you have a validationQuery defined (like SELECT 1)? If not, using a validation query would help.
You can check here for a similar issue.
Appending ?autoReconnect=true to the end of your database's JDBC URL worked for me.
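For example (host, schema and credentials below are placeholders):
String url = "jdbc:mysql://localhost:3306/mydb?autoReconnect=true";
Connection con = DriverManager.getConnection(url, "appuser", "secret");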
I saw that ?autoReconnect=true wasn't working for me.
What I did is simply create a function called executeQuery:
private ResultSet executeQuery(String sql, boolean retry) throws SQLException {
    ResultSet resultSet = null;
    try {
        resultSet = getConnection().createStatement().executeQuery(sql);
    } catch (SQLException e) {
        // disconnection or timeout error?
        // ("transation" is how the driver actually spells it in that message)
        if (retry && (e instanceof CommunicationsException
                || e instanceof MySQLNonTransientConnectionException
                || e.toString().contains("Could not retrieve transation read-only status server"))) {
            // connect again
            connect();
            // recursive call with retry=false to avoid an infinite loop
            return executeQuery(sql, false);
        } else {
            throw e;
        }
    }
    return resultSet;
}
I know, I'm using a string match to detect the error; it needs to be done better, but it's a good start, and it works :-)
This handles almost all causes of a disconnect.