I am deleting more than 90k rows from my table through a JDBC prepared statement.
My code looks like this:
//Open JDBC Connection
//MySQL query
String deleteSQL = "DELETE FROM MYTABLE WHERE ID=? AND X=? AND Y=?";
preparedStatement = dbConnection.prepareStatement(deleteSQL);
dbConnection.setAutoCommit(false);
for (Data dt : dataList) {
    preparedStatement.setLong(1, dt.getID());
    preparedStatement.setLong(2, dt.getX());
    preparedStatement.setLong(3, dt.getY());
    preparedStatement.addBatch();
}
preparedStatement.executeBatch();
dbConnection.commit();
//close prepared statement
//close connection
Here dataList contains more than 90k records which I want to delete. I have also added a MySQL index to MYTABLE on (ID, X, Y).
Unfortunately I got this error:
Deadlock found when trying to get lock; try restarting transaction
I have googled, but I did not find a working solution.
Please help me find a solution, or an alternative if one exists.
Thank you
As @bodi0 points out, the MySQL Reference Manual (14.2.7.9. How to Cope with Deadlocks) has lots of advice on how to diagnose deadlocks, and how to deal with them.
In this case I can think of two possible explanations:
You are deadlocking against some other transaction being performed by a different database connection or a different database client.
You have entries in the dataList that have the same {ID, X, Y} values, so you end up adding multiple deletes for the same row or rows to the batch. Maybe this results in MySQL attempting to lock the same row twice, and deadlocking. (Just a theory ...)
But a better idea would be to diagnose the deadlock for yourself.
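For example, you can pull InnoDB's most recent deadlock report straight from JDBC; a minimal sketch, assuming an open Connection named dbConnection (the account needs the PROCESS privilege to run this statement):
// Sketch: SHOW ENGINE INNODB STATUS returns one row whose "Status" column
// contains the full report, including the "LATEST DETECTED DEADLOCK" section.
try (Statement st = dbConnection.createStatement();
     ResultSet rs = st.executeQuery("SHOW ENGINE INNODB STATUS")) {
    if (rs.next()) {
        System.out.println(rs.getString("Status"));
    }
}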
I need to insert a couple hundred million records into the MySQL DB. I'm batch inserting one million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?
try {
    // Disable auto-commit so the whole batch commits at once
    connection.setAutoCommit(false);

    // Create a prepared statement
    String sql = "INSERT INTO mytable (xxx) VALUES (?)";
    PreparedStatement pstmt = connection.prepareStatement(sql);

    Object[] vals = set.toArray();
    for (int i = 0; i < vals.length; i++) {
        pstmt.setString(1, vals[i].toString());
        pstmt.addBatch();
    }

    // Execute the batch and commit
    int[] updateCounts = pstmt.executeBatch();
    connection.commit();
    System.out.println("inserted " + updateCounts.length);
I had a similar performance issue with MySQL and solved it by setting the useServerPrepStmts and rewriteBatchedStatements properties in the connection URL.
Connection c = DriverManager.getConnection("jdbc:mysql://host:3306/db?useServerPrepStmts=false&rewriteBatchedStatements=true", "username", "password");
I'd like to expand on Bertil's answer, as I've been experimenting with the connection URL parameters.
rewriteBatchedStatements=true is the important parameter. useServerPrepStmts is already false by default, and even changing it to true doesn't make much difference in terms of batch insert performance.
Now I think it is time to explain how rewriteBatchedStatements=true improves performance so dramatically. It does so by rewriting the batched prepared INSERT statements into a single multi-value INSERT when executeBatch() is called (Source). That means that instead of sending the following n INSERT statements to the MySQL server each time executeBatch() is called:
INSERT INTO X VALUES (A1,B1,C1)
INSERT INTO X VALUES (A2,B2,C2)
...
INSERT INTO X VALUES (An,Bn,Cn)
It would send a single INSERT statement:
INSERT INTO X VALUES (A1,B1,C1),(A2,B2,C2),...,(An,Bn,Cn)
You can observe this by turning on MySQL's general log (with SET GLOBAL general_log = 1), which logs every statement sent to the server into a file.
You can insert multiple rows with one INSERT statement, and doing a few thousand at a time can greatly speed things up. That is, instead of doing e.g. three inserts of the form INSERT INTO tbl_name (a,b,c) VALUES(1,2,3);, you do INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(1,2,3),(1,2,3);. (It might be that JDBC's .addBatch() does a similar optimization now, though the MySQL addBatch used to be entirely unoptimized and just issued individual queries anyhow; I don't know if that's still the case with recent drivers.)
If you really need speed, load your data from a comma-separated file with LOAD DATA INFILE; we get around a 7-8x speedup doing that versus tens of millions of individual inserts.
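From JDBC that is just an ordinary statement; a minimal sketch, where the file name 'data.csv', the table name, and the column layout are assumptions, and the server/driver must permit local infile:
// Sketch: bulk-load a CSV file instead of sending row-by-row inserts
try (Statement st = connection.createStatement()) {
    st.execute("LOAD DATA LOCAL INFILE 'data.csv' " +
               "INTO TABLE mytable " +
               "FIELDS TERMINATED BY ',' " +
               "LINES TERMINATED BY '\\n'");
}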
If:
It's a new table, or the amount to be inserted is greater than the already inserted data
There are indexes on the table
You do not need other access to the table during the insert
Then ALTER TABLE tbl_name DISABLE KEYS can greatly improve the speed of your inserts. When you're done, run ALTER TABLE tbl_name ENABLE KEYS to start building the indexes, which can take a while, but not nearly as long as doing it for every insert.
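From JDBC, the same two statements can bracket the batch insert; a minimal sketch, assuming the connection and table name from the answers above (note that DISABLE KEYS only affects non-unique indexes, and only on MyISAM tables):
// Sketch: suspend index maintenance around a bulk insert, then rebuild once
try (Statement st = connection.createStatement()) {
    st.execute("ALTER TABLE tbl_name DISABLE KEYS");
    // ... run the batched inserts here ...
    st.execute("ALTER TABLE tbl_name ENABLE KEYS"); // rebuilds the indexes in one pass
}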
You may try using a DDBulkLoad object.
// Get a DDBulkLoad object
DDBulkLoad bulkLoad = DDBulkLoadFactory.getInstance(connection);
bulkLoad.setTableName("mytable");
bulkLoad.load("data.csv");
try {
    // Disable auto-commit
    connection.setAutoCommit(false);
    int maxInsertBatch = 10000;

    // Create a prepared statement
    String sql = "INSERT INTO mytable (xxx) VALUES (?)";
    PreparedStatement pstmt = connection.prepareStatement(sql);

    Object[] vals = set.toArray();
    int count = 0;
    for (int i = 0; i < vals.length; i++) {
        pstmt.setString(1, vals[i].toString());
        pstmt.addBatch();
        count++;
        // Flush every maxInsertBatch rows so the batch doesn't grow unbounded
        if (count % maxInsertBatch == 0) {
            pstmt.executeBatch();
        }
    }

    // Execute whatever remains in the final, partial batch
    pstmt.executeBatch();
    connection.commit();
    System.out.println("inserted " + count);
Is there any way of 'previewing' SQL SELECT statements?
What I'm trying to do is get the names of the columns that a SQL statement returns, without actually running the statement.
At application startup I need to know the column names; the problem is that some of the queries can run for a while.
ResultSetMetaData may help
You still have to execute the query to get the metadata, but you may be able to add a restriction to the WHERE clause that makes it return zero rows very quickly. For example, you could append and 1 = 0 to the WHERE clause.
The DBMS still has to do all the query parsing that it would normally do; it just means that the execution should complete (or fail) very quickly.
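A minimal sketch of that trick, with placeholder table and column names:
// Sketch: the contradiction makes the query return zero rows almost instantly,
// but the ResultSet still carries the full column metadata.
Statement st = connection.createStatement();
ResultSet rs = st.executeQuery("SELECT * FROM some_table WHERE 1 = 0");
ResultSetMetaData md = rs.getMetaData();
for (int i = 1; i <= md.getColumnCount(); i++) {
    System.out.println(md.getColumnName(i));
}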
You didn't mention your DBMS, but the following works with the Postgres and Oracle JDBC drivers. I didn't test any other.
// the statement is only prepared, not executed!
PreparedStatement pstmt = con.prepareStatement("select * from foo");
ResultSetMetaData metaData = pstmt.getMetaData();
for (int i=1; i <= metaData.getColumnCount(); i++)
{
System.out.println(metaData.getColumnName(i));
}
We are trying to fetch data from an Oracle DB using a PreparedStatement. It keeps fetching zero records, while the same query runs and fetches data when run from PL/SQL Developer.
We found the root cause while trying to debug: while debugging, the code fetched the two records properly.
We made a temporary fix by adding this piece of code:
ResultSet rs = ps.executeQuery();
// re-run the query until it returns at least one row
while (!rs.next()) {
    rs = ps.executeQuery();
}
This works, but it is not the best solution, since it results in unwanted DB hits. It clearly looks like a timing issue. We also explicitly committed earlier transactions, since they can affect the result of this query.
What could be causing this? What's the best way to solve it?
The method is quite big: I'll just post some parts here:
private static boolean loadCommission(Member member) {
    Connection conn = getConnection("schema1"); // obtained through connection pool
    // insertion into table
    conn.close();

    Connection conn2 = getConnection("schema2"); // obtained through connection pool
    PreparedStatement ps = conn2.prepareStatement(sql);
    // this sql combines data from schema1
    // and schema2 with DB links
    ResultSet rs = ps.executeQuery();
    // business logic
    conn2.close();
    return true;
}
Thanks
We tried a few more things yesterday. We replaced the second connection code with a direct JDBC connection, like so:
Connection conn = DriverManager.getConnection(URL, USER, PASS);
This too works. Now we are not sure if the delay is in getting the connection from the pool, or in completing the previous transaction, as we thought earlier.
If your query selects from a materialized view, then there may be some elapsed time before it will yield results (as materialized views do not necessarily refresh instantly after a commit, depending upon how they've been created).
If this is the case, then you can resolve the problem by either selecting directly from the base table (or equivalent non-materialized views), or forcing the materialized view to refresh.
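Forcing a refresh is a single PL/SQL call; a minimal sketch, where the materialized view name MY_MVIEW is a placeholder:
// Sketch: refresh the materialized view before running the SELECT against it
try (CallableStatement cs = conn2.prepareCall("{call DBMS_MVIEW.REFRESH(?)}")) {
    cs.setString(1, "MY_MVIEW"); // placeholder view name
    cs.execute();
}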
I've created a program which works really well with MySQL. However, when I convert it to SQLite, everything works, such as creating tables and getConnection() to the database, EXCEPT inserting values into a table.
I am unable to insert values into the table using Java (NetBeans) with a SQLite database. I can only insert ONE row, but not multiple rows. In MySQL I could insert multiple rows at once; why can't I in SQLite?
The code is (this works for only ONE row):
Connection con;
Statement s = con.createStatement();
String st = "INSERT INTO table1 VALUES (10,'abc')";
s.executeUpdate(st);
s.close();
If I do something like this, it DOES NOT work for more than one row in SQLite; I have no idea why:
Connection con;
Statement s = con.createStatement();
String st = "INSERT INTO table1 VALUES (10,'abc'), (5,'vfvdv')"; // now it doesn't work since I'm inserting more than one row; I can't figure out what I'm doing wrong
// I've also tried INSERT INTO table1(ID, Name) VALUES (10,'abc'), (5,'afb'); but it doesn't work either.
s.executeUpdate(st);
s.close();
Could any Java expert help me with this? I can't figure out what I'm doing wrong, because when I type my commands into the SQLite command line it works fine for ALL of them. But in Java with SQLite, I can only insert one row for some reason.
Java with MySQL works fine, but not SQLite.
If anyone could clarify what I'm doing wrong, that would be brilliant.
Thanks a lot for reading this and for your time.
It's possible that the SQLite JDBC driver doesn't support the multi-insert syntax. This syntax is also not standard, though many databases do support it.
Another option for doing multiple inserts like this is to use the batching mechanism of PreparedStatement.
Connection conn = ....;
PreparedStatement stmt =
conn.prepareStatement("INSERT INTO table1(id, name) VALUES (?, ?)");
stmt.setInt(1, 10);
stmt.setString(2, "abc");
stmt.addBatch();
stmt.setInt(1, 5);
stmt.setString(2, "vfvdv");
stmt.addBatch();
stmt.executeBatch();
Typically, you'd use the setInt, setString, addBatch sequence in a loop, instead of unrolled as I've shown here, and call executeBatch after the loop; see the sketch below. This will work with all JDBC-compliant databases.
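A minimal sketch of the loop form, where the Row class and the rows list are assumptions:
// Sketch: the same batch, driven by a hypothetical List<Row> of (id, name) pairs
PreparedStatement stmt =
    conn.prepareStatement("INSERT INTO table1(id, name) VALUES (?, ?)");
for (Row row : rows) {
    stmt.setInt(1, row.getId());
    stmt.setString(2, row.getName());
    stmt.addBatch();
}
stmt.executeBatch();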
Multi-row insert is not standard according to the SQL-92 standard.
SQLite supports it since version 3.7.11.
If you have a version older than that, you need to issue one INSERT per row...
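You can check which SQLite version your JDBC driver bundles; a minimal sketch, assuming an open Connection named con:
// Sketch: sqlite_version() reports the library version, so you can tell
// whether multi-row VALUES lists (3.7.11+) are available.
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT sqlite_version()")) {
    if (rs.next()) {
        System.out.println("SQLite version: " + rs.getString(1));
    }
}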
I have the following query:
String updatequery = "UPDATE tbl_page SET linkCount = ?, pageProcessed = 1 WHERE pageUrl =?";
PreparedStatement updatestmt = kon.prepareStatement(updatequery);
updatestmt.clearParameters();
//updatestmt.setQueryTimeout(10);
updatestmt.setInt(1, linkCount);
updatestmt.setString(2, urlLink);
updatestmt.executeUpdate();
When I set the query timeout to 10 seconds, it catches an exception saying the query timed out; but when I don't, it just keeps waiting. What's wrong with the query? The pageUrl column is the primary key, a varchar(900).
I know something might be wrong with the prepared statement, because when I run this query in MS SQL Server Management Studio (with '?' replaced by its value) it works fine.
Am I missing something in Java or MSSQL?
Since the code looks just fine, this could be an issue on the database side. Maybe another session has locked the row by updating it without doing a commit/rollback (quite possibly from your MS SQL Server Management Studio session!). You could look for locks held by other processes on the same record, so you can be sure this is not a database issue.
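One way to check, as a minimal sketch (querying SQL Server's dynamic management views; kon is the connection from the question):
// Sketch: list sessions that are currently blocked and who is blocking them
try (Statement st = kon.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT session_id, blocking_session_id, wait_type " +
         "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0")) {
    while (rs.next()) {
        System.out.println("session " + rs.getInt(1)
            + " blocked by " + rs.getInt(2)
            + " (" + rs.getString(3) + ")");
    }
}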
Create an index on pageUrl:
create index tbl_page_pageUrl_index on tbl_page(pageUrl);
That will allow speedy access to the rows you want to update.
Without this index, the database must do a full table scan, which, combined with an update command, is likely to lead to lock contention and possibly even deadlocks, depending on your locking options.