I'm using the PostgreSQL JDBC driver, and I have a connection for some select and insert queries. Some queries take a while, so I added a timeout. The problem is that the timeout closes the connection, but the query is still being executed on the db server and it creates locks.
Simplified code for the problem (the real code is much more complex and bigger, but that doesn't matter):
PGPoolingDataSource source = new PGPoolingDataSource();
source.setUrl(url);
source.setUser(user);
source.setPassword(password);
source.setMaxConnections(10);
source.setSocketTimeout(5); // timeout of 5 seconds

// Insert a row, then trigger the timeout
Connection con = source.getConnection();
con.setAutoCommit(false);
try {
    Statement st2 = con.createStatement();
    st2.execute("insert into data.person values (4, 'a')");
    Statement st3 = con.createStatement();
    st3.executeQuery("select pg_sleep(200)"); // a query which takes a long time and causes a timeout
    con.commit();
    con.close();
} catch (SQLException ex) {
    if (!con.isClosed()) {
        con.rollback();
        con.close();
    }
    ex.printStackTrace();
}
Connection con2 = source.getConnection();
con2.setAutoCommit(false);
try {
    Statement st2 = con2.createStatement();
    // This insert is blocked because the previous query is still executing,
    // the rollback hasn't happened yet, and the row with id 4 is not released
    st2.execute("insert into data.person values (4, 'b')");
    con2.commit();
    con2.close();
} catch (SQLException ex) {
    if (!con2.isClosed()) {
        con2.rollback();
        con2.close();
    }
    ex.printStackTrace();
}
(data.person is a table with id and name.)
The timeout closes the connection, and execution never even reaches the line con.rollback();. I have read that when an exception occurs on a query, a rollback happens in the background, so that part is ok.
But the query takes a lot of time (a few hours), and as a result the rollback only occurs after the big select query has finished. So I can't add the row to data.person for several hours (the second time I try to insert, I get a timeout exception because it waits for the lock to be released...).
I have read that I can use the function pg_terminate_backend in PostgreSQL to terminate the query, and so I can execute the insert query the second time.
My questions are:
1) How safe is it?
2) How common is this solution?
3) Is there a safer solution that JDBC or PostgreSQL provides?
pg_terminate_backend will work and is the safe and correct procedure if you want to interrupt the query and close the database connection.
There is also pg_cancel_backend which will interrupt the query but leave the connection open.
These functions require that you know the process ID of the session backend process, which you can get with the pg_backend_pid function.
You must run these statements on a different database connection than the original one!
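For instance, a minimal JDBC sketch of this pattern, assuming the worker connection captures its own backend PID up front and a separate connection issues the kill (workerCon and adminCon are illustrative names):
// On the worker connection, before running the long query:
int backendPid;
try (Statement st = workerCon.createStatement();
     ResultSet rs = st.executeQuery("SELECT pg_backend_pid()")) {
    rs.next();
    backendPid = rs.getInt(1);
}

// Later, from a DIFFERENT connection, once you decide to give up on the query:
try (PreparedStatement ps = adminCon.prepareStatement("SELECT pg_terminate_backend(?)")) {
    ps.setInt(1, backendPid);
    ps.execute(); // terminates the query and closes the session server-side
}
// Use "SELECT pg_cancel_backend(?)" instead to interrupt the query but keep the session.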
Another, probably simpler method is to set the statement_timeout. This can be set in the configuration file or for an individual session or transaction. To set it for a transaction, use:
BEGIN; -- in JDBC, use setAutoCommit(false)
SET LOCAL statement_timeout = 30000; -- in milliseconds
SELECT /* your query */;
COMMIT;
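The same pattern through JDBC might look like this sketch (the timeout value and query are illustrative; if the timeout fires, the statement fails with a SQLException, so the commit is never reached in that case):
con.setAutoCommit(false); // BEGIN
try (Statement st = con.createStatement()) {
    st.execute("SET LOCAL statement_timeout = 30000"); // 30 s, this transaction only
    try (ResultSet rs = st.executeQuery("select pg_sleep(200)")) {
        // if the timeout elapses, PostgreSQL cancels the query server-side,
        // so no lock is left behind
    }
}
con.commit();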
Related
I have a Java application where I am executing some queries against a SQL Server 2008 database.
I am trying to execute a stored procedure with this piece of code:
//...
try (Connection connection = dataSource.getConnection()) {
    PreparedStatement preparedStmt = connection.prepareStatement("exec dbo.myProc");
    preparedStmt.execute();
    connection.commit();
    connection.close();
}
//...
But with some debugging I found out that the procedure was not over when the connection was being committed and closed.
So my question is: why is that? And how can I ensure that the procedure is over before closing the connection?
Make sure you have SET NOCOUNT ON in the proc code and/or consume all results returned using preparedStmt.getMoreResults(). This will ensure the proc runs to completion.
SET NOCOUNT ON will suppress DONE_IN_PROC (row counts) that need to be consumed. Besides row counts, other operations that return results to the client, such as SELECT, PRINT, and RAISERROR, will require getMoreResults() to retrieve the results and ensure the proc runs to completion. If you don't have those in the proc code and no exceptions are raised, SET NOCOUNT ON alone will be enough.
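A minimal sketch of consuming all results so the proc runs to completion (dataSource and dbo.myProc as in the question):
try (Connection connection = dataSource.getConnection();
     PreparedStatement preparedStmt = connection.prepareStatement("exec dbo.myProc")) {
    boolean isResultSet = preparedStmt.execute();
    while (true) {
        if (isResultSet) {
            try (ResultSet rs = preparedStmt.getResultSet()) {
                while (rs.next()) { /* consume (or ignore) the rows */ }
            }
        } else if (preparedStmt.getUpdateCount() == -1) {
            break; // no more results: the proc has finished
        }
        isResultSet = preparedStmt.getMoreResults();
    }
}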
There are times when closing a connection takes a long time, like more than 10 minutes and up to 1 hour, or worse, an indefinite amount of time, depending on how heavy or slow the query was.
In a situation where the client cancels the query because it has been taking too much time, I want to free up the underlying connection as soon as possible.
I tried cancelling the PreparedStatement, closing it, then closing the ResultSet, and then finally closing the connection. Cancelling happened almost instantly. Closing the PreparedStatement and ResultSet took so long that I had to wrap them in a Callable with a timeout to skip that step in due time and proceed with closing the connection itself. I haven't had much luck with anything else I've tried.
How do I deal with this? I can't simply leave the connections unclosed, and I can't let the users wait for 10 minutes before they can make another similar query.
Also, what's causing the closing of the connection to take so much time? Is there anything else I could do? Do you think Oracle query hints would help?
I'm using the Oracle JDBC thin driver, by the way.
UPDATE:
Apparently, it's possible to close the connection forcefully by configuring the TimeToLive property in connectionCacheProperties, which closes the connection after a specific amount of time. However, what I need is something on an as-needed basis. This is worth mentioning because it proves that a forceful close is possible, since the connection pool just did exactly that. In fact, I even got the following message in my logs:
ORA-01013: user requested cancel..
Main function:
String g_sid = "";
Thread 1:
String sql = ...;
Connection conn = ...your connection func...;
Statement stmt = conn.createStatement();
// first capture this session's ID
ResultSet rset = stmt.executeQuery("SELECT sid FROM v$mystat");
if (rset.next()) g_sid = rset.getString("sid");
rset.close();
// now to the actual long-running SQL (reuse rset; redeclaring it would not compile)
rset = stmt.executeQuery(sql);
//
stmt.close();
Thread 2:
String serialN = "";
Connection conn = ...your admin connection func...;
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery("SELECT serial# serialN FROM v$session WHERE sid=" + g_sid);
if (rset.next()) {
    serialN = rset.getString("serialN");
    stmt.execute("alter system kill session '" + g_sid + "," + serialN + "'");
}
stmt.close();
// probably keep the admin connection open for further maintenance
//
This is what we use at my place of work instead of "grant alter system to $UID", as a SYS-owned procedure (simplified working version):
CREATE OR REPLACE procedure SYS.kill_session(in_sid varchar2)
as
    l_serial number;
    l_runsql varchar2(1000) := 'alter system kill session ''$1,$2'' immediate';
begin
    begin
        select serial# into l_serial
          from v$session
         where username = (SELECT USER FROM DUAL)
           and sid = in_sid
           and rownum <= 1;
    exception when no_data_found then
        raise_application_error(-20001, 'Kill candidate not found');
    end;
    l_runsql := replace(l_runsql, '$1', in_sid);
    l_runsql := replace(l_runsql, '$2', l_serial);
    execute immediate l_runsql;
end;
/
This way you can only kill your own sessions.
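Calling it from JDBC might look like this sketch (it assumes your user has been granted execute on SYS.kill_session, and g_sid is the SID captured earlier):
try (CallableStatement cs = conn.prepareCall("{call sys.kill_session(?)}")) {
    cs.setString(1, g_sid); // the SID of the session to kill
    cs.execute();
}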
I am wondering how LIMIT in a query prevents the application thread reading from a MySQL stream from hanging in the close operation, and why LIMIT enables query cancellation, which otherwise does not work.
Statement statement = connection.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
statement.setFetchSize(Integer.MIN_VALUE); // enables MySQL streaming mode
// Statement statement = connection.createStatement(); // this one can be canceled

new Thread(new Runnable() {
    // tries to cancel the query after streaming starts
    @Override
    public void run() {
        try {
            Thread.sleep(5);
            statement.cancel(); // does nothing when streaming
            // statement.close(); // makes the application thread hang
        } catch (SQLException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}).start();

// adding a LIMIT to the query makes it possible to cancel the stream, even if the limit is not yet reached
ResultSet resultSet = statement.executeQuery("SOME_LONG_RUNNING_QUERY");
int i = 0;
while (resultSet.next()) {
    System.out.println(++i);
}
connection.close();
A regular (non-streaming) query can be safely canceled, with or without a LIMIT. In streaming mode, however, the close/cancel operations simply make the application thread hang or do nothing, presumably while it performs a blocking read on the socket.
If I add some large LIMIT to the long-running query then, as expected, the cancel() operation results in:
com.mysql.jdbc.exceptions.jdbc4.MySQLQueryInterruptedException: Query execution was interrupted
I understand there are a couple of questions on this matter, but none of them discusses the aspects below:
Why does LIMIT make it possible to cancel a streaming query?
Can this bug/feature be relied upon? Can it be changed in the next releases? Is there any official explanation?
LIMIT makes it so only a certain number of records are pulled from the database. LIMIT is useful and is the best option to use in case you have a large query that is liable to hang.
In your case, when you are streaming a query that has both a limit and a close statement, it follows the order of operations. Since the LIMIT comes first, it will as a result end your query. This would explain why, even though you have a close statement, it does not reach it, and why you receive the exception.
I hope this clears up some of the issues you are having.
I am working on reducing deadlocks, and it was pointed out to me that I should not use multiple queries with one connection, because the transaction may not be committed and may remain open, causing deadlocks. So in pseudocode, something like this:
try (Connection con = datasource.getConnection())
{
    PreparedStatement stm1 = con.prepareStatement(getSQL());
    stm1.setString(1, owner);
    stm1.setTimestamp(2, Timestamp.valueOf(LocalDateTime.now()));
    stm1.setInt(3, count);
    int updateCount = stm1.executeUpdate();
    stm1.close();

    PreparedStatement stm2 = con.prepareStatement(getSQL2());
    stm2.setString(1, owner);
    ResultSet rs = stm2.executeQuery();
    List<Object> results = new ArrayList<>();
    while (rs.next()) {
        results.add(create(rs));
    }
    return results;
} catch (SQLException e) {
    throw new RuntimeException("Failed to claim message", e);
}
When does stm1 commit the transaction when auto-commit is set to true?
Is it good practice to reuse a connection like that, or should both statements use separate connections instead?
Questions like these can usually be answered by reading the JDBC specification. JDBC 4.2 section 10.1 Transaction Boundaries and Auto-commit says:
When to start a new transaction is a decision made implicitly by either the JDBC driver or the underlying data source. Although some data sources implement an explicit “begin transaction” statement, there is no JDBC API to do so. Typically, a new transaction is started when the current SQL statement requires one and there is no transaction already in place. Whether or not a given SQL statement requires a transaction is also specified by SQL:2003.

The Connection attribute auto-commit specifies when to end transactions. Enabling auto-commit causes a transaction commit after each individual SQL statement as soon as that statement is complete. The point at which a statement is considered to be “complete” depends on the type of SQL statement as well as what the application does after executing it:

For Data Manipulation Language (DML) statements such as Insert, Update, Delete, and DDL statements, the statement is complete as soon as it has finished executing.

For Select statements, the statement is complete when the associated result set is closed.

For CallableStatement objects or for statements that return multiple results, the statement is complete when all of the associated result sets have been closed, and all update counts and output parameters have been retrieved.
In your code a transaction is committed as part of stm1.executeUpdate() (this transaction might have been started on prepare, or on execute). A new transaction is started at prepare or execute of stm2, but as you don't close stm2 or rs, the connection close will trigger the commit.
As to whether you should reuse connections and statements: it depends on context and your code. For a specific unit of work you use a single connection. If you want further reuse of connections you should use a connection pool. Reusing statements should only be done when it makes sense to do so (otherwise your code might get complicated, with resource leaks as a consequence), and again there are connection pools that provide built-in statement pooling that reduces this complexity for you.
Statements like "[..] should not use multiple queries with one connection because the transaction may not be committed and open, causing deadlocks." are usually incorrect and would lead to badly performing applications if applied. It might apply to misbehaving drivers that don't properly follow the auto-commit rules above, or maybe in situations where the connection is much longer lived and you don't properly finish a statement (like in the case of stm2). That said, it is usually better to disable auto-commit and explicitly commit or rollback when you are done.
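A rough sketch of that last suggestion, using the names from the question:
con.setAutoCommit(false);
try {
    // ... execute stm1, stm2, read the result set ...
    con.commit();
} catch (SQLException e) {
    con.rollback();
    throw e;
}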
Your code could be improved by using try-with-resources for the statements and result sets as well, as this ensures result set and statement are closed as soon as possible, even when exceptions occur:
try (Connection con = datasource.getConnection()) {
try (PreparedStatement stm1 = con.prepareStatement(getSQL())) {
stm1.setString(1, owner);
stm1.setTimestamp(2, Timestamp.valueOf(LocalDateTime.now()));
stm1.setInt(3, count);
int updateCount = stm1.executeUpdate();
}
try (PreparedStatement stm2 = con.prepareStatement(getSQL2())) {
stm2.setString(1, owner);
try (ResultSet rs = stm2.executeQuery()) {
List<Object> results = new ArrayList<>();
while(rs.next()) {
results.add(create(rs));
}
return results;
}
}
} catch (SQLException e) {
throw new RuntimeException("Failed to claim message", e);
}
When auto-commit mode is disabled, no SQL statements are committed until you call the method commit explicitly. All statements executed after the previous call to the method commit are included in the current transaction and committed together as a unit.
Since you set auto-commit to true, each statement is committed immediately in the database.
It is good practice to reuse a connection. This is perfectly safe as long as the same connection is not in use by two threads at the same time.
I'm using red5 1.0.0rc1 to create an online game.
I'm connecting to a MySQL database using the JDBC MySQL Connector v5.1.12.
It seems that after several hours of idling, my application can no longer run queries, because the connection to the db got closed, and I have to restart the application.
How can I resolve the issue?
Kfir
The MySQL JDBC driver has an autoreconnect feature that can be helpful on occasion; see "Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J"1, and read the caveats.
A second option is to use a JDBC connection pool.
A third option is to perform a query to test that your connection is still alive at the start of each transaction. If the connection is not alive, close it and open a new connection. A common query is SELECT 1. See also:
Cheapest way to determine if a MySQL connection is still alive
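As a variation on the "test before use" idea, JDBC 4 drivers (including Connector/J 5.x) expose Connection#isValid, so a sketch might look like this (con, url, user, and password are illustrative):
if (con == null || !con.isValid(2 /* seconds */)) {
    if (con != null) try { con.close(); } catch (SQLException ignore) {}
    con = DriverManager.getConnection(url, user, password); // reopen
}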
A simple solution is to change the MySQL configuration properties to set the session idle timeout to a really large number. However:
This doesn't help if your application is liable to be idle for a really long time.
If your application (or some other application) is leaking connections, increasing the idle timeout could mean that lost connections stay open indefinitely ... which is not good for database memory utilization.
1 - If the link breaks (again), please Google for the quoted page title then edit the answer to update it with the new URL.
Well, you reopen the connection.
Connection pools (which are highly recommended, BTW, and if you run Java EE your container - Tomcat, JBoss, etc - can provide a javax.sql.DataSource through JNDI which can handle pooling and more for you) validate connections before handing them out by running a very simple validation query (like SELECT 1 or something). If the validation query doesn't work, it throws away the connection and opens a new one.
Increasing the connection or server timeout tends to just postpone the inevitable.
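A sketch of the JNDI lookup mentioned above (the resource name jdbc/MyDB is illustrative and must match your container's configuration; the lookup throws NamingException):
Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDB");
try (Connection con = ds.getConnection()) {
    // the pool validates the connection before handing it out
}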
I had the same issue in my application, and I removed the idle-timeout tag. That's it; it really worked fine.
Try this: I was using the JBoss server, and I made the change in the mysql-ds.xml file. Let me know if you have any more doubts.
The normal JDBC idiom is to acquire and close the Connection (and also Statement and ResultSet) in the shortest possible scope, i.e. in the very same try-finally block of the method as you're executing the query. You should not hold the connection open all the time. The DB will timeout and reclaim it sooner or later. In MySQL it's by default after 8 hours.
To improve connecting performance you should really consider using a connection pool, like c3p0 (here's a developer guide). Note that even when using a connection pool, you still have to write proper JDBC code: acquire and close all the resources in the shortest possible scope. The connection pool will in turn worry about actually closing the connection or just releasing it back to pool for further reuse.
Here's a kickoff example of how your method retrieving a list of entities from the DB should look:
public List<Entity> list() throws SQLException {
    // Declare resources.
    Connection connection = null;
    Statement statement = null;
    ResultSet resultSet = null;
    List<Entity> entities = new ArrayList<Entity>();

    try {
        // Acquire resources.
        connection = database.getConnection();
        statement = connection.createStatement();
        resultSet = statement.executeQuery("SELECT id, name, value FROM entity");

        // Gather data.
        while (resultSet.next()) {
            Entity entity = new Entity();
            entity.setId(resultSet.getLong("id"));
            entity.setName(resultSet.getString("name"));
            entity.setValue(resultSet.getInt("value"));
            entities.add(entity);
        }
    } finally {
        // Close resources in reversed order.
        if (resultSet != null) try { resultSet.close(); } catch (SQLException logOrIgnore) {}
        if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
        if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
    }

    // Return data.
    return entities;
}
See also:
DAO tutorial - How to write proper JDBC code
Do you have a validationQuery defined (like select 1)? If not, using a validation query would help.
You can check here for a similar issue.
Appending '?autoReconnect=true' to the end of your database's JDBC URL (without the quotes) worked for me.
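For example (host, port, and database name are illustrative):
String url = "jdbc:mysql://localhost:3306/mydb?autoReconnect=true";
Connection con = DriverManager.getConnection(url, user, password);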
I saw that ?autoReconnect=true wasn't working for me.
What I did is simply create a function called executeQuery:
private ResultSet executeQuery(String sql, boolean retry) throws SQLException {
    ResultSet resultSet = null;
    try {
        resultSet = getConnection().createStatement().executeQuery(sql);
    } catch (SQLException e) {
        // disconnection or timeout error; the parentheses make retry guard
        // all three conditions ("transation" is the driver's own misspelling)
        if (retry && (e instanceof CommunicationsException
                || e instanceof MySQLNonTransientConnectionException
                || e.toString().contains("Could not retrieve transation read-only status server"))) {
            // connect again (getConnection() and connect() are the application's own helpers)
            connect();
            // recursive call with retry=false to avoid an infinite loop
            return executeQuery(sql, false);
        } else {
            throw e;
        }
    }
    return resultSet;
}
I know, I'm using the string to detect the error... I need to do it better, but it's a good start, and it WORKS :-)
This covers almost all the reasons for a disconnect.