Hi, I want to delete millions of rows from a table in batches to avoid locking. I am trying the code below, but it deletes all the rows.
Session session;
try {
    session = dao.getHibernateTemplate().getSessionFactory().getCurrentSession();
} catch (HibernateException e) {
    session = dao.getHibernateTemplate().getSessionFactory().openSession();
}
String sql = "delete from " + clazz.getSimpleName();
session.createQuery(sql).setFetchSize(limit).executeUpdate();
dao.getHibernateTemplate().flush();
Is there a better way of doing this?
I am assuming that clazz.getSimpleName() returns the table name.
If that is the case, then your query is "delete from tablename". You are not specifying any condition to restrict the delete statement; that is why it deletes all the rows from the table.
As for setFetchSize: setFetchSize(int value) is a hint to the JDBC driver, telling it how many rows it should fetch at a time.
I think this method is not required for a delete query.
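To actually delete in batches so each transaction stays short, one option is to loop over bounded chunks. A minimal sketch, assuming a MySQL-style DELETE ... LIMIT clause issued as a native query (the table name and chunk size are illustrative; other databases need a different idiom, e.g. deleting by an id range):
int batchSize = 10000; // illustrative chunk size
int deleted;
do {
    Transaction tx = session.beginTransaction();
    deleted = session.createSQLQuery("delete from my_table limit " + batchSize)
                     .executeUpdate();
    tx.commit(); // committing each chunk releases locks between batches
} while (deleted > 0);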
I'm trying to delete all the records from a MySQL table (46 records).
Here is the code I have tried. What am I doing wrong?
Session hs = connection.NewHibernateUtil.getSessionFactory().openSession();
Criteria cr = hs.createCriteria(Bookmark.class);
Bookmark b;
List<Bookmark> li = cr.list();
for (Bookmark s : li) {
    b = new Bookmark();
    b.setId(s.getId());
    Transaction tr = hs.beginTransaction();
    hs.delete(b);
    tr.commit();
    hs.flush();
    hs.close();
}
Error
org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session: [mypojos.Bookmark#7]
You can't delete objects like that. cr.list() has already loaded every Bookmark into the session, so deleting a new detached instance with the same identifier collides with the attached one, which is exactly what the NonUniqueObjectException says (closing the session inside the loop would also break the next iteration). You would first have to fetch the object from the database and then delete it with hs.delete(b); that approach is mainly useful when you need to cascade changes to associated objects.
The best approach in this case is to use an HQL bulk delete, something like this:
String stringQuery = "DELETE FROM Bookmark"; // entity name, not table name
Query query = session.createQuery(stringQuery);
query.executeUpdate();
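Note that an HQL bulk delete only takes effect inside an active transaction; a minimal usage sketch:
Transaction tx = hs.beginTransaction();
int deleted = hs.createQuery("DELETE FROM Bookmark").executeUpdate();
tx.commit();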
I am trying to use the update query with the LIMIT clause using sqlite-JDBC.
Let's say there are 100 "bob" rows in the table, but I only want to update one of them.
Sample code:
String name1 = "bob";
String name2 = "alice";
String updateSql = "update mytable set user = :name1 " +
        "where user is :name2 " +
        "limit 1";
try (Connection con = sql2o.open()) {
    con.createQuery(updateSql)
       .addParameter("name1", name1)
       .addParameter("name2", name2)
       .executeUpdate();
} catch (Exception e) {
    e.printStackTrace();
}
I get an error:
org.sql2o.Sql2oException: Error preparing statement - [SQLITE_ERROR] SQL error or missing database (near "limit": syntax error)
Using
sqlite-jdbc 3.31
sql2o 1.6 (easy database query library)
The flag:
SQLITE_ENABLE_UPDATE_DELETE_LIMIT
needs to be set to get the limit clause to work with the update query.
I know SELECT works with the LIMIT clause, but then I would need two queries to do this task: a SELECT followed by an UPDATE.
If there is no way to get LIMIT to work with UPDATE, I will just use the slightly messier method of a query with a subquery to get things to work.
Maybe there is a way to get sqlite-JDBC to use an external SQLite engine, compiled with that flag set, instead of the integrated one.
Any help appreciated.
You can try this query instead:
UPDATE mytable SET user = :name1
WHERE ROWID = (SELECT MIN(ROWID)
FROM mytable
WHERE user = :name2);
ROWID is a special column available in all tables (unless the table was created using WITHOUT ROWID).
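For completeness, a sketch of that query through sql2o; note that the names passed to addParameter must match the :name1/:name2 placeholders in the SQL:
String updateSql =
        "update mytable set user = :name1 " +
        "where rowid = (select min(rowid) from mytable where user = :name2)";
try (Connection con = sql2o.open()) {
    con.createQuery(updateSql)
       .addParameter("name1", name1)
       .addParameter("name2", name2)
       .executeUpdate();
}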
I want to copy a table (10 million records) in originDB (SQLite 3) into another database called targetDB.
My method works as follows:
read data from the origin table into a ResultSet, generate a corresponding insert statement for every record, and commit (batch insert) whenever the record count reaches 10000. The code is as follows:
public void transfer() throws IOException, SQLException {
    targetDBOperate.setCommit(false); // batch insert
    int count = 0;
    String[] cols = parser(propertyPath); // get fields of data table
    String query = "select * from " + originTable;
    ResultSet rs = originDBOperate.executeQuery(query); // get origin table
    String base = "insert into " + targetTable;
    while (rs.next()) {
        count++;
        String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
        targetDBOperate.executeSql(insertSql);
        if (count % 10000 == 0) {
            targetDBOperate.commit(); // batch insert
        }
    }
    targetDBOperate.closeConnection();
}
The following picture shows the memory usage trend (the vertical axis represents memory usage).
As you can see, usage keeps growing until the process runs out of memory. Stack Overflow has some related questions, such as Out of memory when inserting records in SQLite, FireDac, Delphi, but they didn't solve my problem because the implementations differ. My hypothesis is that while the record count has not yet reached 10000, the corresponding insert statements are cached in memory and are not released when commit executes by default. Any advice would be appreciated.
When moving a large number of rows in SQLite, or any other relational database, you should follow some basic principles:
1) set autoCommit to false, i.e. do not commit each insert
2) use batch updates, i.e. do not make a round trip for each row
3) use a prepared statement, i.e. do not re-parse each insert
Putting this together, you get the following code (cn is the source connection, cn2 is the target connection).
For each inserted row you call addBatch, but only once per batchSize do you call executeBatch, which triggers a round trip.
Do not forget the last executeBatch after the loop and the final commit.
cn2.setAutoCommit(false);
String SEL_STMT = "select id, col1, col2 from tab1";
String INS_STMT = "insert into tab2(id, col1, col2) values (?, ?, ?)";
int batchSize = 10000;
int i = 0;
PreparedStatement stmt = cn.prepareStatement(SEL_STMT);
PreparedStatement stmtIns = cn2.prepareStatement(INS_STMT);
ResultSet rs = stmt.executeQuery();
while (rs.next()) {
    stmtIns.setLong(1, rs.getLong(1));
    stmtIns.setString(2, rs.getString(2));
    stmtIns.setTimestamp(3, rs.getTimestamp(3));
    stmtIns.addBatch();
    i++;
    if (i == batchSize) {
        stmtIns.executeBatch(); // one round trip per batch
        i = 0;
    }
}
rs.close();
stmt.close();
stmtIns.executeBatch(); // flush the final, partial batch
stmtIns.close();
cn2.commit();
Sample test with your size with sqlite-jdbc-3.23.1:
inserted rows: 10000000
total time taken to insert the batch = 46848 ms
I did not observe any memory issues or problems with the large transaction.
You are trying to fetch 10M records in one go with the following code. This will certainly exhaust your memory:
String query = "select * from " + originTable;
ResultSet rs = originDBOperate.executeQuery(query);//get origin table
Use paginated queries to read in batches and do the batch updates accordingly, as sketched below.
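For illustration, a sketch of such a paginated read, assuming a raw JDBC Connection cn to the source database and SQLite's implicit rowid column (keyset pagination by rowid avoids the growing cost of OFFSET over 10M rows):
long lastRowId = 0;
int pageSize = 10000;
PreparedStatement page = cn.prepareStatement(
        "select rowid, * from " + originTable +
        " where rowid > ? order by rowid limit ?");
while (true) {
    page.setLong(1, lastRowId);
    page.setInt(2, pageSize);
    ResultSet rs = page.executeQuery();
    int rows = 0;
    while (rs.next()) {
        lastRowId = rs.getLong(1); // remember the last key we saw
        rows++;
        // ... addBatch() an insert for this row, as in the answer above
    }
    rs.close();
    if (rows < pageSize) {
        break; // last page reached
    }
}
page.close();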
You are also not doing a real batch update; you are simply firing 10K statements one after the other with the following code:
String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
targetDBOperate.executeSql(insertSql);
if (count % 10000 == 0) {
    targetDBOperate.commit(); // this simply means that you are committing after 10K records
}
I have a table without unique index tuples; let's say the table has these records:
A->B->Status
A->C->Status
A->B->Status
A->B->Status
A->C->Status
I fetch the first and second records and process them. After that I want to update only these two records.
How can I make this possible at the Java application layer?
Since there are no unique index tuples, I cannot use an UPDATE statement with a proper WHERE clause.
Using
Spring 3.XX
Oracle 11g
I think you may try to use the ROWID pseudocolumn.
For each row in the database, the ROWID pseudocolumn returns the address of the row. Oracle Database rowid values contain information necessary to locate a row:
The data object number of the object
The data block in the datafile in which the row resides
The position of the row in the data block (first row is 0)
The datafile in which the row resides (first file is 1). The file number is relative to the tablespace.
Usually, a rowid value uniquely identifies a row in the database. However, rows in different tables that are stored together in the same cluster can have the same rowid.
SELECT ROWID, last_name
FROM employees
WHERE department_id = 20;
The rowid for the row stays the same, even when the row migrates.
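Applied to the question, a minimal sketch with plain JDBC (the table and column names are illustrative): read the ROWID alongside the data, then address exactly those physical rows in the update. Oracle accepts the ROWID bound back as a string:
// Fetch rows together with their ROWIDs
PreparedStatement sel = con.prepareStatement(
        "select rowid, status from my_table where col_a = ?"); // hypothetical table/columns
sel.setString(1, "A");
ResultSet rs = sel.executeQuery();
List<String> rowIds = new ArrayList<>();
while (rs.next()) {
    rowIds.add(rs.getString(1)); // the ROWID read as a string
}
rs.close();
// Update exactly the rows that were processed
PreparedStatement upd = con.prepareStatement(
        "update my_table set status = ? where rowid = ?");
for (String rowId : rowIds) {
    upd.setString(1, "PROCESSED");
    upd.setString(2, rowId); // Oracle implicitly converts the string back to a ROWID
    upd.addBatch();
}
upd.executeBatch();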
You can also solve this by using updatable result sets. This feature relies on rowid to perform all modifications (delete/update/insert).
This is an excerpt highlighting the feature itself:
String sqlString = "SELECT EmployeeID, Name, Office " +
                   "FROM employees WHERE EmployeeID = 1001";
try {
    Statement stmt = con.createStatement(
            ResultSet.TYPE_SCROLL_SENSITIVE,
            ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = stmt.executeQuery(sqlString);
    // Check that the result set is an updatable result set
    int concurrency = rs.getConcurrency();
    if (concurrency == ResultSet.CONCUR_UPDATABLE) {
        rs.first();
        rs.updateString("Office", "HQ222");
        rs.updateRow();
    } else {
        System.out.println("ResultSet is not an updatable result set.");
    }
    rs.close();
} catch (SQLException ex) {
    System.err.println("SQLException: " + ex.getMessage());
}
I have code that looks like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int i = 0;
try {
    for (Customer customer : customers) {
        i++;
        session.update(customer);
        if (i % 200 == 0) { // 200, same as the JDBC batch size
            // flush a batch of updates and release memory:
            session.flush();
            session.clear();
        }
    }
} catch (Exception e) {
    // TODO want to know the customer id here!
}
tx.commit();
session.close();
Say, at some point session.flush() raises a DataException because one field in that batch of 200 customers did not fit its database column size. Nothing wrong with that; the data can be corrupted, which is acceptable in this case. BUT, I really need to know the id of the customer that failed. The database returns a meaningless error message that does not state the parameters of the statement. The caught exception also does not say which customer failed; it only contains the SQL text, something like 'update Customer set name=?'.
Can I somehow determine it using the Hibernate session? Does it store anywhere the information about the last entity it tried to save?
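Hibernate's exception does not carry the failing entity, so one common workaround (a sketch, not a Hibernate API; the chunk list and Customer.getId() are assumed) is to remember the entities updated since the last flush, and when the batched flush fails, reprocess that chunk one entity at a time in a fresh session so the first single-row failure identifies the offender:
try {
    session.flush();
    session.clear();
} catch (DataException e) {
    session.close(); // a session that threw must be discarded
    Session retry = sessionFactory.openSession();
    for (Customer c : chunk) { // chunk: the customers updated since the last flush
        Transaction t = retry.beginTransaction();
        try {
            retry.update(c);
            retry.flush();
            t.commit();
        } catch (DataException inner) {
            t.rollback();
            System.err.println("Customer " + c.getId() + " failed: " + inner.getMessage());
        }
    }
    retry.close();
}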