I am working on reducing deadlocks, and it was pointed out to me that I should not run multiple queries on one connection, because a transaction may be left open and uncommitted, causing deadlocks. So in pseudo code, something like this:
try (Connection con = datasource.getConnection()) {
    PreparedStatement stm1 = con.prepareStatement(getSQL());
    stm1.setString(1, owner);
    stm1.setTimestamp(2, Timestamp.valueOf(LocalDateTime.now()));
    stm1.setInt(3, count);
    int updateCount = stm1.executeUpdate();
    stm1.close();
    PreparedStatement stm2 = con.prepareStatement(getSQL2());
    stm2.setString(1, owner);
    ResultSet rs = stm2.executeQuery();
    List<Object> results = new ArrayList<>();
    while (rs.next()) {
        results.add(create(rs));
    }
    return results;
} catch (SQLException e) {
    throw new RuntimeException("Failed to claim message", e);
}
When does stm1 commit the transaction when auto-commit is set to true?
Is it good practice to reuse a connection like that, or should both statements use separate connections instead?
Questions like these can usually be answered by reading the JDBC specification. JDBC 4.2 section 10.1 Transaction Boundaries and Auto-commit says:
When to start a new transaction is a decision made implicitly by
either the JDBC driver or the underlying data source. Although some
data sources implement an explicit “begin transaction” statement,
there is no JDBC API to do so. Typically, a new transaction is started
when the current SQL statement requires one and there is no
transaction already in place. Whether or not a given SQL statement
requires a transaction is also specified by SQL:2003.
The Connection attribute auto-commit specifies when to end
transactions. Enabling auto-commit causes a transaction commit after
each individual SQL statement as soon as that statement is complete.
The point at which a statement is considered to be “complete” depends
on the type of SQL statement as well as what the application does
after executing it:
- For Data Manipulation Language (DML) statements such as Insert, Update, Delete, and DDL statements, the statement is complete as soon as it has finished executing.
- For Select statements, the statement is complete when the associated result set is closed.
- For CallableStatement objects or for statements that return multiple results, the statement is complete when all of the associated result sets have been closed, and all update counts and output parameters have been retrieved.
In your code a transaction is committed as part of stm1.executeUpdate() (this transaction might have been started on prepare, or on execute). A new transaction is started at the prepare or execute of stm2, but as you don't close stm2 or rs, it is the connection close that triggers the commit.
As to whether you should reuse connections and statements: it depends on context and your code. For a specific unit of work you use a single connection. If you want further reuse of connections you should use a connection pool. Reusing statements should only be done when it makes sense to do so (otherwise your code might get complicated with resource leaks as a consequence), and again there are connection pools that provide built-in statement pooling that reduces this complexity for you.
Statements like "[..] should not use multiple queries with one connection because the transaction may not be committed and open, causing deadlocks." are usually incorrect and would lead to badly performing applications if applied. They might apply to misbehaving drivers that don't properly follow the auto-commit rules above, or to situations where the connection is much longer lived and you don't properly finish a statement (as in the case of stm2). That said, it is usually better to disable auto-commit and explicitly commit or rollback when you are done.
Your code could be improved by using try-with-resources for the statements and result sets as well, as this ensures result set and statement are closed as soon as possible, even when exceptions occur:
try (Connection con = datasource.getConnection()) {
    try (PreparedStatement stm1 = con.prepareStatement(getSQL())) {
        stm1.setString(1, owner);
        stm1.setTimestamp(2, Timestamp.valueOf(LocalDateTime.now()));
        stm1.setInt(3, count);
        int updateCount = stm1.executeUpdate();
    }
    try (PreparedStatement stm2 = con.prepareStatement(getSQL2())) {
        stm2.setString(1, owner);
        try (ResultSet rs = stm2.executeQuery()) {
            List<Object> results = new ArrayList<>();
            while (rs.next()) {
                results.add(create(rs));
            }
            return results;
        }
    }
} catch (SQLException e) {
    throw new RuntimeException("Failed to claim message", e);
}
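To make the "disable auto-commit and commit explicitly" advice above concrete, here is a sketch of the same kind of unit of work under explicit transaction control. The SQL, table name, and owner column are invented for illustration; the `fake` helper is only a dynamic-proxy test double so the sketch can run without a real database:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class TxSketch {

    // Same shape of unit of work, but with auto-commit disabled: the update
    // and the select commit (or roll back) together as one transaction.
    static List<String> claim(Connection con, String owner) throws SQLException {
        con.setAutoCommit(false);
        try {
            try (PreparedStatement upd = con.prepareStatement(
                    "UPDATE message SET owner = ? WHERE owner IS NULL")) {
                upd.setString(1, owner);
                upd.executeUpdate();
            }
            List<String> ids = new ArrayList<>();
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT id FROM message WHERE owner = ?")) {
                sel.setString(1, owner);
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getString(1));
                    }
                }
            }
            con.commit();   // end of the unit of work: both statements become visible together
            return ids;
        } catch (SQLException e) {
            con.rollback(); // undo both statements on any failure
            throw e;
        }
    }

    // Test double: a dynamic proxy standing in for the JDBC interfaces and
    // recording each method call, so commit/rollback behaviour is observable.
    @SuppressWarnings("unchecked")
    static <T> T fake(Class<T> iface, List<String> calls) {
        InvocationHandler handler = (proxy, method, args) -> {
            calls.add(method.getName());
            Class<?> ret = method.getReturnType();
            if (ret.isInterface()) return fake(ret, calls); // e.g. PreparedStatement
            if (ret == boolean.class) return false;         // rs.next(): no rows
            if (ret == int.class) return 1;                 // update count
            return null;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```

With auto-commit off, nothing the first statement does becomes visible to other sessions until the commit, so a failure between the two statements cannot leave a half-finished unit of work behind.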
When auto-commit mode is disabled, no SQL statements are committed until you call the method commit explicitly. All statements executed after the previous call to the method commit are included in the current transaction and committed together as a unit.
As you have auto-commit set to true, each statement is committed immediately in the database.
It is good practice to reuse a connection. This is perfectly safe as long as the same connection is not in use by two threads at the same time.
Related
I am developing a server working with MySQL, and I have been trying to understand the advantage of working with a connection pool vs a single connection that is kept open and passed down to the different methods throughout the application.
The idea of working with a connection pool is understood; however, there could be scenarios where this creates a bottleneck that would not exist when working without the pool.
Let me explain my meaning using code:
Let's say the following method is called connectionPoolSize + 1 times simultaneously (e.g. 10), meaning that we have exhausted the connections in the connection pool, so the last query attempt will fail since no connection is available:
public void getData() throws Exception {
    Connection con = null;
    Statement s = null;
    ResultSet rs = null;
    try {
        con = connectionPool.getConnection();
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE");
        // Some long process that takes a while....
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        if (rs != null) rs.close();
        if (s != null) s.close();
        if (con != null) con.close();
    }
}
However if we are using a single connection, that is kept open, and all methods can use it, there is no need for any of the methods to wait for the connection to be sent back to pool (which as we saw above, could take some time).
e.g. call this method also 10 times, this would work
public void getData(Connection con) throws Exception {
    Statement s = null;
    ResultSet rs = null;
    try {
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE");
        // Some long process that takes a while....
        // But this time we don't care that this will take time,
        // since nobody is waiting for us to release the connection
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        if (rs != null) rs.close();
        if (s != null) s.close();
    }
}
Obviously the statements and result sets will still be kept open until the method is finished, but this doesn't affect the connection itself, so it doesn't hold back any other attempts to use this connection.
I assume there is some further insight that I am missing; I understand the standard approach is to work with connection pools, so how do you handle these issues?
It depends on your use case. Suppose you are building a web application that will be used by multiple users simultaneously. If you have a single connection, the queries from all the user threads will be queued, and the single db connection will process them one by one. So in a multi-user system (which is most normal cases), a single db connection will be a bottleneck and won't work. Additionally, you need to take care of thread safety if you are writing and committing data to the db.
If you need truly simultaneous query execution in the db, then you should go with a connection pool. Different user threads can then use different connections and execute queries in parallel.
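The "pool exhausted" scenario in the question is worth modelling: a pool does not fail outright on request N+1, it makes that request wait. Conceptually, a pool of N connections behaves like a semaphore with N permits; borrowing a connection acquires a permit, returning it releases one. The class name and sizes below are invented for illustration:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolModel {

    // A connection pool is conceptually a semaphore over N reusable connections:
    // getConnection() = acquire, close()/return = release.
    private final Semaphore permits;

    PoolModel(int size) {
        this.permits = new Semaphore(size, true); // fair: first-come, first-served
    }

    // Real pools (HikariCP, DBCP, c3p0) block like this up to a configurable
    // connection timeout when the pool is exhausted, rather than failing at once.
    boolean tryBorrow(long timeoutMillis) throws InterruptedException {
        return permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void giveBack() {
        permits.release();
    }
}
```

So the fix for the bottleneck described in the question is usually to size the pool for the expected concurrency (and to keep each borrow short), not to share one connection across all threads.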
Connection pools are used to keep a number of opened connections ready for use and to eliminate the need to open a new connection each time it is required.
If your application is single threaded then you probably don’t need a pool and can use a single connection instead.
Even though sharing a connection between multiple threads is permitted there are some pitfalls of this approach. Here is a description for Java DB: https://docs.oracle.com/javadb/10.8.3.0/devguide/cdevconcepts89498.html. You should check if this is also the case for MySQL.
In many cases it is easier to have an individual connection for each thread.
I'm using PostgreSQL JDBC, and I have a connection for some select and insert queries. Some queries take some time, so I added a timeout. The problem is that the timeout closes the connection, but the query is still executed in the db server and it creates locks.
A simplified code for the problem (The real code is much complex and bigger, but it doesn't matter):
PGPoolingDataSource source = new PGPoolingDataSource();
source.setUrl(url);
source.setUser(user);
source.setPassword(password);
source.setMaxConnections(10);
source.setSocketTimeout(5); // Timeout of 5 seconds
// Insert a row of data, and cause a timeout
Connection con = source.getConnection();
con.setAutoCommit(false);
try {
    Statement st2 = con.createStatement();
    st2.execute("insert into data.person values (4, 'a')");
    Statement st3 = con.createStatement();
    st3.executeQuery("select pg_sleep(200)"); // A query which takes a lot of time and causes a timeout
    con.commit();
    con.close();
} catch (SQLException ex) {
    if (!con.isClosed()) {
        con.rollback();
        con.close();
    }
    ex.printStackTrace();
}

Connection con2 = source.getConnection();
con2.setAutoCommit(false);
try {
    Statement st2 = con2.createStatement();
    // This insert query is blocked because the previous query is still executing,
    // the rollback hasn't happened yet, and the lock on the row with id 4 has not been released
    st2.execute("insert into data.person values (4, 'b')");
    con2.commit();
    con2.close();
} catch (SQLException ex) {
    if (!con2.isClosed()) {
        con2.rollback();
        con2.close();
    }
    ex.printStackTrace();
}
(data.person is a table with id and name.)
The timeout closes the connection, and execution never even reaches the line con.rollback();. I have read that when an exception occurs on a query, a rollback happens in the background, so that should be OK.
But the query takes a long time (a few hours), and as a result the rollback will only occur after the big select query has finished. So I can't add the row to data.person for several hours (the second time I try to insert, I get a timeout exception because it waits for the lock to be released...).
I have read that I can use the function pg_terminate_backend in PostgreSQL to terminate the query, and so I can execute the insert query the second time.
My questions are:
1) How safe is it?
2) How common is this solution?
3) Is there a safer solution that JDBC or PostgreSQL provides?
pg_terminate_backend will work and is the safe and correct procedure if you want to interrupt the query and close the database connection.
There is also pg_cancel_backend which will interrupt the query but leave the connection open.
These functions require that you know the process ID of the session backend process, which you can get with the pg_backend_pid function.
You must run these statements on a different database connection than the original one!
Another, probably simpler method is to set the statement_timeout. This can be set in the configuration file or for an individual session or transaction. To set it for a transaction, use:
BEGIN; -- in JDBC, use setAutoCommit(false)
SET LOCAL statement_timeout = 30000; -- in milliseconds
SELECT /* your query */;
COMMIT;
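Driven from JDBC, the same per-transaction timeout might look like the sketch below; SET LOCAL scopes the setting to the current transaction, so it resets automatically on commit or rollback. (A pure-JDBC alternative is Statement.setQueryTimeout(seconds), which the PostgreSQL driver honours by cancelling the statement when the timer expires.) The helper name and query are illustrative, and `fake` is only a dynamic-proxy test double so the sketch runs without a database:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class StatementTimeoutSketch {

    // Run one query inside a transaction with a statement_timeout that only
    // applies to this transaction (SET LOCAL), mirroring the SQL above.
    static void runWithTimeout(Connection con, String query, int millis) throws SQLException {
        con.setAutoCommit(false);                              // BEGIN
        try (Statement st = con.createStatement()) {
            st.execute("SET LOCAL statement_timeout = " + millis);
            st.execute(query);                                 // cancelled server-side if it runs too long
            con.commit();
        } catch (SQLException e) {
            con.rollback();                                    // the timeout surfaces as an SQLException
            throw e;
        }
    }

    // Test double: records method calls (and String arguments) on proxied
    // JDBC interfaces so we can check which SQL was sent.
    @SuppressWarnings("unchecked")
    static <T> T fake(Class<T> iface, List<String> calls) {
        InvocationHandler handler = (proxy, method, args) -> {
            calls.add(args != null && args.length > 0 && args[0] instanceof String
                    ? method.getName() + ":" + args[0]
                    : method.getName());
            Class<?> ret = method.getReturnType();
            if (ret.isInterface()) return fake(ret, calls);
            if (ret == boolean.class) return false;
            if (ret == int.class) return 0;
            return null;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```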
I was setting con.setAutoCommit(false); as soon as I created the connection, so that nothing would go into the DB uncommitted. But it turns out that if you close the connection, the whole transaction is committed, no matter what your setAutoCommit() status is.
Class.forName("oracle.jdbc.driver.OracleDriver");
con = DriverManager.getConnection("jdbc:oracle:thin:@192.168.7.5:xxxx:xxx", "xxx", "xxx");
con.setAutoCommit(false);
String sql = "INSERT INTO emp (eid, name, dob, address) VALUES (?, ?, ?, ?)";
PreparedStatement statement = con.prepareStatement(sql);
statement.setInt(1, 8);
statement.setString(2, "kkk");
statement.setDate(3, date);
statement.setString(4, "pppp");
int rowsInserted = statement.executeUpdate();
//con.commit();
System.out.println("rowsInserted " + rowsInserted);
con.close();
Even after commenting out con.commit();, the row is still inserted when the connection closes, so I was wondering: what is the use of con.commit();?
Another answer says it's vendor specific:
If a Connection is closed without an explicit commit or a rollback; JDBC does not mandate anything in particular here and hence the behaviour is dependent on the database vendor. In case of Oracle, an implict commit is issued.
That does not make sense. Thanks.
Logging off of Oracle commits any pending transaction. This happens whether you use sqlplus or the JDBC driver through conn.close(). Note that it's not the driver that issues the commit; it's the server. During logoff the server commits pending changes. From your Java program you can always call conn.rollback() before calling conn.close() if you want to make sure that pending changes are not committed.
You asked what is the use of conn.commit(). It's used to explicitly commit a transaction at a specific point of your business logic. For example if you cache connections in a connection pool, you can disable auto-commit and commit pending changes before releasing the connection back to the pool. Some people prefer enabling auto-commit mode (which is the default in JDBC) and not worry about explicitly committing or rolling back changes. It depends on your business logic. You can ask yourself: will I ever need to rollback a DML execution? If the answer is yes then you should disable auto-commit and explicitly commit transactions.
Oracle documentation provides a very good explanation of when and why this should be used. Please go through the same!
If your JDBC Connection is in auto-commit mode, whether set programmatically or by default (it is on by default, FYI), then every SQL statement is committed to the database upon its completion.
You can refer to this question for more detailed explanation on the same topic.
con.commit() is an explicit commit, and you can call it whenever you have to commit the transaction. In your case there is no explicit commit or rollback, even though you have set AutoCommit to false. The Oracle Database commits all the transactions of a session that exits the connection gracefully. If the session terminates abnormally, then it rolls back the transactions.
Statement statement;
Connection connection;
I am closing the connection after every database operation, as below.
connection.close();
I created the statement object as below.
connection.createStatement(....)
Do I need to close the statement too, similar to closing the connection?
I mean, do I need to call statement.close();?
What will happen if it is not called?
Thanks!
The Javadoc says it all:
Statement.close()
Releases this Statement object's database and JDBC resources immediately instead of waiting for this to happen when it is automatically closed. It is generally good practice to release resources as soon as you are finished with them to avoid tying up database resources.
One good way to ensure that close() is always called is by placing it inside a finally block:
Statement stmt = connection.createStatement();
try {
    // use `stmt'
} finally {
    stmt.close();
}
In Java 7, the above can be more concisely written as:
try (Statement stmt = connection.createStatement()) {
    // ...
}
This is called the try-with-resources statement.
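A quick way to convince yourself of the closing behaviour is to run try-with-resources over stand-in resources that record when they are closed. The names below are invented; any AutoCloseable works in the header:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {

    static final List<String> closed = new ArrayList<>();

    // Stand-in for Connection/Statement/ResultSet: records its name on close().
    static AutoCloseable tracked(String name) {
        return () -> closed.add(name);
    }

    static void useResources() throws Exception {
        // Resources declared left to right are closed right to left, exactly
        // the order the finally-based idiom has to hand-code, and closing
        // still happens if the body throws.
        try (AutoCloseable con = tracked("connection");
             AutoCloseable st = tracked("statement");
             AutoCloseable rs = tracked("resultSet")) {
            // use rs ...
        }
    }
}
```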
In general, if you close a connection, all associated resources will be closed as well, provided the driver implementer did their work correctly. That said: it is better to close resources when you're done with them, as that will free up resources both on your side and on the database.
I'm using red5 1.0.0rc1 to create an online game.
I'm connecting to a MySQL database using a jdbc mysql connector v5.1.12
it seems that after several hours of idle time my application can no longer run queries, because the connection to the db got closed, and I have to restart the application.
How can I resolve this issue?
Kfir
The MySQL JDBC driver has an autoreconnect feature that can be helpful on occasion; see "Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J"1, and read the caveats.
A second option is to use a JDBC connection pool.
A third option is to perform a query to test that your connection is still alive at the start of each transaction. If the connection is not alive, close it and open a new connection. A common query is SELECT 1. See also:
Cheapest way to determine if a MySQL connection is still alive
A simple solution is to change the MySQL configuration properties to set the session idle timeout to a really large number. However:
This doesn't help if your application is liable to be idle for a really long time.
If your application (or some other application) is leaking connections, increasing the idle timeout could mean that lost connections stay open indefinitely ... which is not good for database memory utilization.
1 - If the link breaks (again), please Google for the quoted page title then edit the answer to update it with the new URL.
Well, you reopen the connection.
Connection pools (which are highly recommended, BTW, and if you run Java EE your container - Tomcat, JBoss, etc - can provide a javax.sql.DataSource through JNDI which can handle pooling and more for you) validate connections before handing them out by running a very simple validation query (like SELECT 1 or something). If the validation query doesn't work, it throws away the connection and opens a new one.
Increasing the connection or server timeout tends to just postpone the inevitable.
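The validate-before-handing-out behaviour described above can be sketched with a minimal stand-in interface. Real pools use a validation query such as SELECT 1, or JDBC 4's Connection.isValid(timeout); the Conn interface here is invented purely to keep the sketch self-contained:

```java
import java.util.function.Supplier;

public class ValidateSketch {

    // Minimal stand-in for a pooled connection.
    interface Conn {
        boolean isAlive();  // real pools: Connection.isValid() or a SELECT 1
        void close();
    }

    // Borrow logic: hand out the cached connection only if it is still alive;
    // otherwise discard it and open a replacement.
    static Conn checkout(Conn cached, Supplier<Conn> opener) {
        if (cached != null && cached.isAlive()) {
            return cached;
        }
        if (cached != null) {
            cached.close();   // throw away the dead connection
        }
        return opener.get();  // open a fresh one
    }
}
```

This is why, with a pool, a server-side idle disconnect is largely invisible to the application: the dead connection is detected and replaced at checkout.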
I had the same issue in my application, and I removed the idle timeout tag.
That's it.
It really worked fine.
Try this: I was using the JBoss server, and I made the change in the mysql-ds.xml file.
Let me know if you have any more doubts.
The normal JDBC idiom is to acquire and close the Connection (and also the Statement and ResultSet) in the shortest possible scope, i.e. in the very same try-finally block of the method in which you're executing the query. You should not hold the connection open all the time. The DB will time out and reclaim it sooner or later. In MySQL that happens by default after 8 hours.
To improve connecting performance you should really consider using a connection pool, like c3p0 (here's a developer guide). Note that even when using a connection pool, you still have to write proper JDBC code: acquire and close all the resources in the shortest possible scope. The connection pool will in turn worry about actually closing the connection or just releasing it back to pool for further reuse.
Here's a kickoff example how your method retrieving a list of entities from the DB should look like:
public List<Entity> list() throws SQLException {
    // Declare resources.
    Connection connection = null;
    PreparedStatement statement = null;
    ResultSet resultSet = null;
    List<Entity> entities = new ArrayList<Entity>();

    try {
        // Acquire resources.
        connection = database.getConnection();
        statement = connection.prepareStatement("SELECT id, name, value FROM entity");
        resultSet = statement.executeQuery();

        // Gather data.
        while (resultSet.next()) {
            Entity entity = new Entity();
            entity.setId(resultSet.getLong("id"));
            entity.setName(resultSet.getString("name"));
            entity.setValue(resultSet.getInt("value"));
            entities.add(entity);
        }
    } finally {
        // Close resources in reversed order.
        if (resultSet != null) try { resultSet.close(); } catch (SQLException logOrIgnore) {}
        if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
        if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
    }

    // Return data.
    return entities;
}
See also:
DAO tutorial - How to write proper JDBC code
Do you have a validationQuery defined (like select 1)? If not, using a validation query would help.
You can check here for a similar issue.
Appending '?autoReconnect=true' to the end of your database's JDBC URL (without the quotes) worked for me.
I saw that ?autoReconnect=true wasn't working for me.
What I did is simply create a method called executeQuery:
private ResultSet executeQuery(String sql, boolean retry) throws Exception {
    ResultSet resultSet = null;
    try {
        resultSet = getConnection().createStatement().executeQuery(sql);
    } catch (Exception e) {
        // disconnection or timeout error
        // (CommunicationsException and MySQLNonTransientConnectionException
        // come from the MySQL Connector/J driver; the misspelled "transation"
        // below matches the driver's actual error message)
        if (retry && (e instanceof CommunicationsException
                || e instanceof MySQLNonTransientConnectionException
                || (e instanceof SQLException
                        && e.toString().contains("Could not retrieve transation read-only status server")))) {
            // connect again
            connect();
            // recursive call, with retry=false to avoid an infinite loop
            return executeQuery(sql, false);
        } else {
            throw e;
        }
    }
    return resultSet;
}
I know, matching on the error message string is crude and needs to be done better, but it's a good start, and it works :-)
This handles almost all causes of a disconnect.