I am working on a program that allows multiple users to access a db (MySQL), and at various times I'm getting a SQLException: Lock wait timeout exceeded.
The connection is created using:
conn = DriverManager.getConnection(connString, username, password);
conn.setAutoCommit(false);
and the calls all go through this bit of code:
try {
    saveRecordInternal(record);
    conn.commit();
} catch (Exception ex) {
    conn.rollback();
    throw ex;
}
Where saveRecordInternal has some internal logic, saving the given record. Somewhere along the way is the method which I suspect is the problem:
private long getNextIndex() throws Exception {
    String query = "SELECT max(IDX) FROM MyTable FOR UPDATE";
    PreparedStatement stmt = conn.prepareStatement(query);
    ResultSet rs = stmt.executeQuery();
    if (rs.next()) {
        return rs.getLong(1) + 1; // read by index: the column label is "max(IDX)", not "IDX"
    } else {
        return 1;
    }
}
This method is called by saveRecordInternal somewhere along its operation, if needed.
For reasons that are currently beyond my control I cannot use an auto-increment index, and anyway the index to be inserted is needed for some internal program logic.
I would assume that having either conn.commit() or conn.rollback() called would suffice to release the lock, but apparently it doesn't. So my question is: should I call stmt.close() or rs.close() inside getNextIndex? Would that release the lock before the transaction is either committed or rolled back, or would it simply ensure the lock is indeed released when conn.commit() or conn.rollback() is called?
Is there anything else I'm missing / doing entirely wrong?
Edit: At the time the lock occurs, all connected clients seem to be responsive, with no queries currently underway, but closing all connected clients does resolve the issue. This leads me to think the lock is somehow preserved even though the transaction (supposedly?) ends, either by committing or by rolling back.
Even though not closing a Statement or ResultSet is a bad idea, that function doesn't seem responsible for the error you are receiving. getNextIndex() creates a local Statement and ResultSet but never closes them. Close them right there, or create the Statement and ResultSet objects in saveRecordInternal() and pass them as parameters, or better, create them at your starting point and reuse them. Finally, close them when they are no longer needed (in the following order: ResultSet, Statement, Connection).
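For example, getNextIndex() could be rewritten with try-with-resources (Java 7+) so both objects are always closed. This is only a sketch; it reads the aggregate by column index, since the driver may label the column "max(IDX)" rather than "IDX":

private long getNextIndex() throws SQLException {
    String query = "SELECT max(IDX) FROM MyTable FOR UPDATE";
    try (PreparedStatement stmt = conn.prepareStatement(query);
         ResultSet rs = stmt.executeQuery()) {
        // max() always returns exactly one row; on an empty table its value is NULL and reads as 0
        if (rs.next()) {
            return rs.getLong(1) + 1;
        }
        return 1;
    }
}

Note that the lock taken by FOR UPDATE is still held until conn.commit() or conn.rollback(); closing the statement only frees the JDBC resources.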
The error simply means that a lock was present on some DB object (connection, table, row, etc.) while another thread/process needed it at the same time and had to wait (it was already locked), but the wait timed out because it took longer than expected.
Refer to How to avoid Lock wait timeout exceeded exception? to learn more about this issue.
All in all, this is an environment-specific issue and needs to be debugged on your machine with extensive logging turned on.
Hope it helps!
From the statements above I don't see any locks that remain open!
In general MySQL should release the locks whenever a commit or rollback is called, or when the connection is closed.
In your case
SELECT max(IDX) FROM MyTable FOR UPDATE
would result in locking the whole table, but I assume that this is the expected logic! You lock the table until the new row is inserted and then release it to let the others insert.
I would test with:
SELECT IDX FROM MyTable ORDER BY IDX DESC LIMIT 1 FOR UPDATE
to check whether the problem still occurs when only a single row is locked. If it does not, the original timeout may simply be caused by locking a very large table.
So what I think happens here is: your query is trying to execute on some table, but that table is locked by another process. Until that older lock is released, your query will wait to execute, and if the lock is not released in time you will get the lock wait timeout exception.
You can also take a look at table-level locks and row-level locks.
In brief: a table-level lock locks the whole table, and while the lock is held you won't be able to execute any other query on the same table,
while
a row-level lock locks a specific row, so apart from that row you can still execute queries on the table.
You can also check whether some query has been running on the table for a long time, preventing yours from executing and producing the exception. How to check this varies by database; search for the statement that lists open connections or long-running queries for your specific database.
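For MySQL specifically (the database in this question), SHOW FULL PROCESSLIST lists the client threads and what they are currently running, and the information_schema.INNODB_TRX table (MySQL 5.5 and later) lists open InnoDB transactions. A sketch of querying the latter via JDBC:

// List transactions that are currently open, including how long they have been running
try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT trx_id, trx_state, trx_started, trx_query FROM information_schema.INNODB_TRX")) {
    while (rs.next()) {
        System.out.println(rs.getString("trx_id") + " " + rs.getString("trx_state")
                + " started " + rs.getString("trx_started") + ": " + rs.getString("trx_query"));
    }
}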
I am updating the status value of a table in the db (MySQL), i.e. from -2 to 0, through a JDBC connection in Java. Now I need to fetch all the rows having 0 as status in another method.
When I try to fetch all the values having status 0, the recently updated rows are not included.
I think there might be some buffering issue, but I'm not sure about it; can somebody help me out with a solution to the above issue?
Edit: Currently I am using something like this.
connection.setAutoCommit(false);
String taskUpdateString = "update query"; // the actual UPDATE statement
taskStatement = connection.prepareStatement(taskUpdateString);
taskStatement.execute();
connection.commit();
Updates in MySQL become visible immediately after commit, but to perform the update MySQL has to acquire a lock on the table or row (for details: http://dev.mysql.com/doc/refman/5.7/en/internal-locking.html).
So it might be that MySQL is waiting for locks, or that the buffering is on the query side.
Are you sure you are closing all the Java connections properly?
When you are done using your Connection, you need to explicitly close it by calling its close() method in order to release any other database resources (cursors, handles, etc.) the connection may be holding on to.
The safe pattern in Java is to close your ResultSet, Statement, and Connection (in that order) in a finally block when you are done with them, something like the following. See the answer with 55 upvotes on Stack Overflow here: Closing Database Connections in Java
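A minimal sketch of that pattern (pre-Java-7 style; with Java 7 or later, try-with-resources achieves the same thing with less code):

Connection con = null;
Statement stmt = null;
ResultSet rs = null;
try {
    con = DriverManager.getConnection(url, username, password);
    stmt = con.createStatement();
    rs = stmt.executeQuery("SELECT ..."); // your query here
    // ... use the ResultSet ...
} finally {
    // close in reverse order of acquisition, ignoring failures on close
    if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
    if (stmt != null) try { stmt.close(); } catch (SQLException ignored) {}
    if (con != null) try { con.close(); } catch (SQLException ignored) {}
}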
Hope that helps.
I am writing a Java application where different threads (each thread has its own connection object, using a c3p0 connection pool) call a method like the following.
Pseudocode:
void example(Connection connection) {
    connection.update("LOCK TABLES Test WRITE");
    resultSet = connection.query("SELECT * FROM Test WHERE Id = '5'");
    if (resultSet.next()) {
        connection.update("UPDATE Test SET Amount = '10' WHERE Id = '5'");
    } else {
        connection.update("INSERT INTO Test (Id, Amount) VALUES ('5', '10')");
    }
    connection.update("UNLOCK TABLES");
    connection.commit();
}
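For concreteness, a rough JDBC equivalent of the pseudocode above might look like this (a sketch only: it assumes plain Statement objects, autocommit already disabled, and MySQL's LOCK TABLES syntax; note that in MySQL, LOCK TABLES itself implicitly commits any transaction that is already open):

void example(Connection connection) throws SQLException {
    try (Statement st = connection.createStatement()) {
        st.execute("LOCK TABLES Test WRITE");
        boolean exists;
        try (ResultSet rs = st.executeQuery("SELECT * FROM Test WHERE Id = '5'")) {
            exists = rs.next();
        }
        if (exists) {
            st.executeUpdate("UPDATE Test SET Amount = '10' WHERE Id = '5'");
        } else {
            st.executeUpdate("INSERT INTO Test (Id, Amount) VALUES ('5', '10')");
        }
        st.execute("UNLOCK TABLES");
    }
    connection.commit();
}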
There are a few other similar methods which lock a table, select/update/insert something and then unlock the table. The aim is to prevent race conditions and deadlocks.
Is it possible to cause MySQL deadlocks when I call such a method from different threads? If yes, can you give me an example of how that happens (the timing of two transactions that causes a deadlock)? I am a noob with deadlocks and I want to get into this.
Edit: Make clear that the connection that should be used in the method is passed from the thread that calls the method.
Edit: Replace READ with WRITE
It cannot deadlock here. As there is no complex logic and the code commits immediately after the update, there is always one thread that goes through. Even more complex scenarios would probably require the highest isolation level (SERIALIZABLE) before deadlocks become likely.
This could possibly create a deadlock. Actually, I'm not sure it would even execute as originally written, because you need to acquire a WRITE lock, not a READ lock.
I am curious to know: if I create a SQL Statement for an SQLite database file in Java as
public Statement GetStatement() throws SQLException
{
    if (connection == null || connection.isClosed())
    {
        connection = DriverManager.getConnection("jdbc:sqlite:" + filePath);
    }
    return connection.createStatement(); // connection is a private field of type java.sql.Connection
}
The Statement returned by this function is used for executing INSERT, SELECT and UPDATE SQL in different scenarios, and multiple threads or functions will be inserting, updating or selecting from the database.
Now, if I do not close the statements, are there chances of memory leaks?
I do close all the ResultSet objects obtained by executing SELECT SQL.
I know it is good practice to close statements, but what are the side effects if I do not do it?
SQLite enforces sequential write access to the database (one process at a time). This makes it vital that you close the database connection when you have completed any INSERT or UPDATE operations. If you don't, you might receive "DatabaseObjectNotClosed" exceptions when your script next attempts to write to it, not to mention memory leaks and a possible performance decrease.
I am using a simple piece of code to update a single row in an Oracle DB. The query updates just a single row, but the execution still hangs on stmt.executeUpdate().
It does not throw any exception; the control just hangs there.
I am lost here, as the same piece of code worked fine earlier.
try {
    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
    conn = DriverManager.getConnection(url, username, pwd);
    stmt = conn.createStatement();
    String sql = "update report set status =" + "'" + NO + "'" + " where name=" + "'" + name + "'" + " and user_id=" + "'" + sub + "'";
    System.out.println(sql);
    stmt.executeUpdate(sql);
You should never create statements by concatenating variables like this. It leaves you open to SQL injection. Furthermore, each statement will be seen by Oracle as a new statement that needs a hard parse, which can lead to performance problems. Instead, use a PreparedStatement.
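For illustration, the same update as a parameterized statement might look like this (a sketch; it assumes status, name and user_id are all character columns, as in the concatenated version):

String sql = "update report set status = ? where name = ? and user_id = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, NO);   // the status value
    ps.setString(2, name);
    ps.setString(3, sub);
    ps.executeUpdate();
}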
However, I think that in your case it's not the cause of your problem. When a simple update hangs, it is almost always one of these reasons:
The row is locked by another transaction.
You are updating a key that is referenced by another table as an unindexed foreign key.
The table has an ON UPDATE trigger that does a lot of work.
I'm pretty sure that you're in the first case. Oracle prevents multiple users from updating the same row at the same time, mostly for consistency reasons. This means that a DML statement will wait on another transaction if the modified row is locked, until the blocking transaction commits or rolls back.
I advise you to always lock a row before trying to update it, by doing a SELECT ... FOR UPDATE NOWAIT first. This will fail if the row is already locked, and you can exit gracefully by telling the user that this row is currently locked by another session.
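A sketch of that approach (hypothetical code; ORA-00054, error code 54, is what Oracle raises when NOWAIT finds the row already locked):

String lockSql = "select status from report where name = ? and user_id = ? for update nowait";
try (PreparedStatement ps = conn.prepareStatement(lockSql)) {
    ps.setString(1, name);
    ps.setString(2, sub);
    ps.executeQuery(); // succeeds only if the row is not locked by another session
    // the row is now locked by this transaction, so the update cannot block
} catch (SQLException e) {
    if (e.getErrorCode() == 54) {
        // ORA-00054: row is locked by another session; tell the user and exit gracefully
    } else {
        throw e;
    }
}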
You also have to make sure that no transaction ends in an uncommitted state. All sessions should commit or roll back their changes before exiting (don't use autocommit, though).
Use a JDBC PreparedStatement (see the Java docs):
Class.forName("org.apache.derby.jdbc.ClientDriver");
Connection con = DriverManager.getConnection("jdbc:derby://localhost:1527/testDb", "name", "pass");
PreparedStatement updateemp = con.prepareStatement("insert into emp values(?,?,?)");
updateemp.setInt(1, 23);
updateemp.setString(2, "Roshan");
updateemp.setString(3, "CEO");
updateemp.executeUpdate();
I had the same problem you had. I don't know about others, but I solved it by simply closing SQL*Plus! I didn't think that could be the fix, since some queries worked while SQL*Plus was running in the console, but I closed it and now every query works and nothing hangs.
You may want to try this if you have the same problem: exit SQL*Plus in your console (or close whatever other database client you are using), then try running your code again.
Use a PreparedStatement instead:
http://www.mkyong.com/jdbc/jdbc-preparestatement-example-insert-a-record/
With a plain Statement, the server has to check the syntax of the SQL text on every execution, which consumes time, so go for PreparedStatement.
What could be the best solution for a multithreaded Java application to ensure that all threads access the db synchronously? For example, each thread represents a separate transaction that first checks the db for a value and then, depending on the answer, has to insert or update some fields in the database (note: between the check, the insert and the commit, the application is doing other processing). But the problem is that another thread might be doing just the same thing on the same table.
A more specific example: thread T1 starts a transaction, then checks the table ENTITY_TABLE for an entry with code '111'. If found, it updates its date; if not found, it inserts a new entry, then commits the transaction. Now imagine thread T2 does exactly the same thing. There are a few problems:
T1 and T2 check the db, find nothing and both insert the same entry.
T1 checks the db and finds an entry with an old date, but by the time it commits, T2 has already updated the entry to a more recent date.
If we use a cache and synchronize access to the cache, we have a problem: T1 acquires the lock, checks the db and the cache, adds the entry to the cache if not found, releases the lock and commits. T2 does the same and finds the entry in the cache, so it goes on to commit. But T1's transaction fails and is rolled back. Now T2 is in bad shape, because it should insert into ENTITY_TABLE but doesn't know that.
more?
I'm working on creating a simple custom cache with synchronization to solve problem 3. But maybe there is a simpler solution?
This should be dealt with primarily within the DB, by configuring the desired transaction isolation level. Then on top of this, you need to select your locking strategy (optimistic or pessimistic).
Without transaction isolation, you will have a hard time trying to ensure transaction integrity solely in the Java domain. Especially taking into consideration that even if the DB is currently accessed only from your Java app, this may change in the future.
Now, as to which isolation level to choose: from your description it might seem that you need the highest isolation level, SERIALIZABLE. However, in practice this tends to be a real performance hog due to extensive locking, so you may want to reevaluate your requirements to find the best balance of isolation and performance for your specific situation.
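With plain JDBC, the isolation level is configured per connection; a minimal sketch:

// Run the check-then-insert/update sequence under SERIALIZABLE isolation
conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
try {
    // ... SELECT from ENTITY_TABLE, then INSERT or UPDATE ...
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // on serialization failures, retry the whole transaction
    throw e;
}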
If you want to SQL SELECT a row from a database and later UPDATE the same row, you have two choices as a Java developer:
1. SELECT with ROWLOCK, or whatever the row-lock syntax is for your particular database.
2. SELECT the row, do your processing, and just before you're ready to update, SELECT the row again with ROWLOCK to see if any other thread made changes. If the two SELECTs return the same values, UPDATE; if not, throw an error or do your processing again.
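A sketch of the second option in compare-and-set form (hypothetical table and column names; a version column avoids having to compare every field):

// The UPDATE succeeds only if the row still carries the version we read earlier
String sql = "update entity_table set entry_date = ?, version = version + 1 "
           + "where code = ? and version = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setDate(1, newDate);
    ps.setString(2, "111");
    ps.setLong(3, versionReadEarlier);
    if (ps.executeUpdate() == 0) {
        // another thread changed the row in the meantime: re-read and retry, or report an error
    }
}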
The problem you are facing is transaction isolation.
Seems like you need to have each thread lock the row concerned in the where clause, which requires serializable isolation.
I stumbled into this problem when working with a multi-threaded Java program that was using an SQLite database. SQLite uses file locking, so I had to make sure that only one thread was doing work at a time.
I basically ended up using synchronized. When the ConnectionFactory returns a db connection, it also returns a lock object that one should hold while using the connection. So you can do the synchronization manually, or create a subclass of the class below, which does it for you:
/**
 * Subclass this class and implement the persistInTransaction method to perform
 * an update to the database.
 */
public abstract class DBOperationInTransaction {

    protected Logger logger = Logger.getLogger(DBOperationInTransaction.class.getName());

    public DBOperationInTransaction(ConnectionFactory connectionFactory) {
        DBConnection con = null;
        try {
            con = connectionFactory.getConnection();
            if (con == null) {
                logger.log(Level.SEVERE, "Could not get db connection");
                throw new RuntimeException("Could not get db connection");
            }
            synchronized (con.activityLock) {
                con.connection.setAutoCommit(false);
                persistInTransaction(con.connection);
                con.connection.commit();
            }
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Failed to persist data:", e);
            // Roll back so a failed transaction does not keep holding its locks
            if (con != null) {
                try {
                    con.connection.rollback();
                } catch (SQLException rollbackFailure) {
                    logger.log(Level.SEVERE, "Rollback failed:", rollbackFailure);
                }
            }
            throw new RuntimeException(e);
        } finally {
            if (con != null) {
                // Close con.connection silently.
            }
        }
    }

    /**
     * Method for persisting data within a transaction. If any SQLExceptions
     * occur, they are logged and the transaction is rolled back.
     *
     * In the scope of the method there is a logger object available for any
     * errors/warnings besides SQLExceptions that you want to log.
     *
     * @param con
     *            Connection ready for use; do not do any transaction handling
     *            on this object.
     * @throws SQLException
     *             Any SQL exception that your code might throw. These errors
     *             are logged, and any exception rolls back the transaction.
     */
    abstract protected void persistInTransaction(Connection con) throws SQLException;
}
And the DBConnection struct:
final public class DBConnection {

    public final Connection connection;
    public final String activityLock;

    public DBConnection(Connection connection, String activityLock) {
        this.connection = connection;
        this.activityLock = activityLock;
    }
}
Offhand, I think you would have to lock the table before you query it. This will force sequential operation of your threads. Your threads should then be prepared for the fact that they will have to wait for the lock, and of course the lock acquisition might time out. This could introduce quite a bottleneck into your application, as your threads will all have to queue up for database resources.