I want to insert an object into the database in a transaction, and after that object is saved, I'd like to delete it once a specific operation is done. Can I start a new transaction, perform the deletion, and then commit? Is this a correct way of doing it?
Example :
Employee employee = new Employee();
String name = "Ronnie";
entityManager.getTransaction().begin();
employee.setName(name);
entityManager.persist(employee);
entityManager.getTransaction().commit();
// After a few steps
entityManager.getTransaction().begin();
entityManager.remove(employee);
entityManager.getTransaction().commit();
SHORT ANSWER: Yes, you can do that without problems.
LONG ANSWER: Yes, you can.
Every transaction is independent of any other transaction. So, if you do some operations, commit them (remember, committing a transaction executes the operations in the DB and closes it), and then later open a new transaction, it is independent of the previous one.
You can even stay in the same transaction, without closing it, by flushing changes to the DB:
Employee employee = new Employee();
String name = "Ronnie";
entityManager.getTransaction().begin();
employee.setName(name);
entityManager.persist(employee);
entityManager.flush();
// After a few steps, the transaction is still the same
entityManager.remove(employee);
entityManager.getTransaction().commit();
The transaction isolates database state from other transactions, so you can insert and delete in the same transaction; there is no need to commit in between.
We have a function in our application that cleans up the database and resets the data. We call a method cleanup() which first deletes all the data from the database and then runs a SQL script file to insert all the necessary default data for our application. Our application supports Oracle, MySQL and MSSQL; the function works fine on Oracle and MySQL but doesn't work as it is supposed to on MSSQL.
The problem is that it clears all the data from the database but does not insert the default data: the first section of the method works fine, but the second section does not get committed to the database. Here is the function:
public boolean cleanup(...){
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
// delete and drop sql queries here...
tx.commit();
session.close();
// end of first section
session = sessionFactory.openSession();
tx = session.beginTransaction();
// insert default data sql queries here...
tx.commit();
session.close();
// end of second section
}
The database gets cleared successfully, but the default data is not inserted into the database. Please let me know what I am doing wrong here. I tried doing both the delete and insert sections in one transaction, with no luck.
How are highly contended records saved when running concurrent transactions on the same document(s)?
It appears that this is happening:
MVCC Transaction A begins.
MVCC Transaction B begins.
Transaction A updates docA and docB.
Transaction A commits.
Transaction B updates docA and docC - locks are acquired, since Transaction A has committed and holds no locks.
Transaction B commits, overwriting the work Transaction A has done on docA.
Here is the example code:
mongoClient = new MongoClient( "localhost" , 27017 );
db = mongoClient.getDB("test");
collection = db.getCollection("testData");
//Create usable Mongo key from key String (i.e {_id:ObjectId("53b4477d44aef43e83c18922")})
String key = "53b4477d44aef43e83c18922";
String key2 = "53bfff9e44aedb6d98a5c578";
ObjectId keyObj = new ObjectId(key);
ObjectId keyObj2 = new ObjectId(key2);
//Set up the transaction
BasicDBObject transaction = new BasicDBObject();
transaction.append("beginTransaction", 1);
transaction.append("isolation", "mvcc");
db.command(transaction);
//Create search query
BasicDBObject query = new BasicDBObject().append("_id",keyObj);
BasicDBObject query2 = new BasicDBObject().append("_id",keyObj2);
//Create set
BasicDBObject set = new BasicDBObject();
set.append("$inc", new BasicDBObject().append("balance",50));
//Run command
collection.update(query, set);
collection.update(query2, set);
//Commit the transactions
BasicDBObject commitTransaction = new BasicDBObject();
commitTransaction.append("commitTransaction", 1);
db.command(commitTransaction);
Is there a check I can do to decide whether or not to commit the transaction? Or is this intended behaviour of TokuMX (or am I doing something wrong)?
TokuMX multi-statement transactions are very similar to MySQL multi-statement transactions. In your example, a document-level lock will be held while the update is happening, so updates to that record will be serialized.
If there is a conflict because two transactions are updating the same document at the same time, the update method will return an error that says there is a lock conflict.
To help you understand what happens, have two threads run this, but have neither commit. You will see one thread wait and eventually time out with a lock timeout error.
Also, if your transaction is a single update, you can just run it; you don't need to wrap it in a transaction.
If you want to use a multi-statement transaction, you may want "serializable" isolation rather than MVCC if you'll be doing reads as part of the transaction: http://docs.tokutek.com/tokumx/tokumx-transactions.html#tokumx-transactions-isolation-serializable
Also, you will need to reserve a connection for the transaction, or the connection pool can make your transactions behave improperly: http://docs.tokutek.com/tokumx/tokumx-transactions.html#tokumx-transactions-multi-statement-drivers
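As for the check asked about above, one approach is to catch the lock-conflict error, roll the transaction back with TokuMX's rollbackTransaction command, and retry. This is only a sketch: MAX_RETRIES is an assumption, the exact exception surfaced on a lock conflict may vary by driver version, and it reuses db, collection, query, query2 and set from the question (with the "serializable" isolation suggested above):
// Sketch: retry a TokuMX multi-statement transaction on a lock conflict.
// MAX_RETRIES is an assumption; error handling details may differ per driver version.
final int MAX_RETRIES = 3;
for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
        db.command(new BasicDBObject("beginTransaction", 1)
                .append("isolation", "serializable"));
        collection.update(query, set);
        collection.update(query2, set);
        db.command(new BasicDBObject("commitTransaction", 1));
        break; // success, stop retrying
    } catch (MongoException e) {
        // Assumed: a lock conflict surfaces here as a MongoException.
        db.command(new BasicDBObject("rollbackTransaction", 1));
        if (attempt == MAX_RETRIES) {
            throw e; // give up after the last attempt
        }
    }
}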
I have code which updates a column in the database; it looks like this:
logger.info("Entering Update Method");
Query query = session.createQuery("update CardMaster cm set cm.otpAmount = :otpAmount where cm.cardNumber = :cardnumber");
double otpAmount = cardMaster.getOtpAmount();
String cardNumber = cardMaster.getCardNumber();
query.setParameter("otpAmount", otpAmount);
query.setParameter("cardnumber", cardNumber);
query.executeUpdate();
logger.info("cardMasterUpdated successfully");
I am getting the otpAmount and cardNumber values correctly, and executeUpdate() returns 1, but the change is not reflected in the database. I am opening the session and committing correctly outside.
If I use Hibernate's update() instead of this query, it works correctly.
Can you help me out with this?
You have to commit the transaction.
Since you do not commit, nothing is visible to other processes, like the tool you are using to view your DB.
You can acquire your transaction with session.getTransaction(). However, normally you start your transaction manually like this:
Transaction tx = session.beginTransaction();
// Do your stuff
session.flush();
tx.commit();
I am using Hibernate to update 20K products in my database.
As of now I am pulling in the 20K products, looping through them and modifying some properties and then updating the database.
so:
load products
foreach (product p) {
    session.beginTransaction();
    productDao.MakePersistant(p);
    session.commit();
}
As of now, things are pretty slow compared to standard JDBC. What can I do to speed things up?
I am sure I am doing something wrong here.
The right place to look in the documentation for this kind of processing is the whole Chapter 13, Batch processing.
There are several obvious mistakes in your current approach:
you should not start/commit the transaction for each update.
you should enable JDBC batching and set it to a reasonable number (10-50):
hibernate.jdbc.batch_size 20
you should flush() and then clear() the session at regular intervals (every n records, where n is equal to the hibernate.jdbc.batch_size parameter), or the session will keep growing and may eventually blow up with an OutOfMemoryException.
Below is the example given in section 13.2, Batch updates, illustrating this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
ScrollableResults customers = session.getNamedQuery("GetCustomers")
.setCacheMode(CacheMode.IGNORE)
.scroll(ScrollMode.FORWARD_ONLY);
int count=0;
while ( customers.next() ) {
Customer customer = (Customer) customers.get(0);
customer.updateStuff(...);
if ( ++count % 20 == 0 ) {
//flush a batch of updates and release memory:
session.flush();
session.clear();
}
}
tx.commit();
session.close();
You may also consider using the StatelessSession.
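For reference, the StatelessSession variant from section 13.3 of the same chapter looks roughly like this; a stateless session bypasses the first-level cache, so there is nothing to flush or clear, but update() must be called explicitly:
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();

ScrollableResults customers = session.getNamedQuery("GetCustomers")
    .scroll(ScrollMode.FORWARD_ONLY);
while ( customers.next() ) {
    Customer customer = (Customer) customers.get(0);
    customer.updateStuff(...);
    // changes are not tracked automatically: the update must be explicit
    session.update(customer);
}

tx.commit();
session.close();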
Another option would be to use DML-style operations (in HQL!): UPDATE FROM? EntityName (WHERE where_conditions)?. Here is the HQL UPDATE example:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlUpdate = "update Customer c set c.name = :newName where c.name = :oldName";
// or String hqlUpdate = "update Customer set name = :newName where name = :oldName";
int updatedEntities = session.createQuery( hqlUpdate )
.setString( "newName", newName )
.setString( "oldName", oldName )
.executeUpdate();
tx.commit();
session.close();
Again, refer to the documentation for the details (especially how to deal with the version or timestamp property values using the VERSIONED keyword).
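For instance, if Customer has a version or timestamp property, the VERSIONED keyword makes the bulk update increment it as well (syntax taken from that section of the documentation):
String hqlVersionedUpdate = "update versioned Customer set name = :newName where name = :oldName";
int updatedEntities = session.createQuery( hqlVersionedUpdate )
    .setString( "newName", newName )
    .setString( "oldName", oldName )
    .executeUpdate();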
If this is pseudo-code, I'd recommend moving the transaction outside the loop, or at least using a double loop if having all 20K products in a single transaction is too much:
load products
foreach (batch)
{
    try
    {
        session.beginTransaction()
        foreach (product in batch)
        {
            product.saveOrUpdate()
        }
        session.commit()
    }
    catch (Exception e)
    {
        e.printStackTrace()
        session.rollback()
    }
}
Also, I'd recommend that you batch your UPDATEs instead of sending each one individually to the database. There's too much network traffic that way. Bundle each chunk into a single batch and send them all at once.
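At the plain JDBC level, that bundling looks roughly like the sketch below (the PRODUCT table, its columns, and the connection variable are hypothetical; SQLException handling is omitted for brevity). With Hibernate, the hibernate.jdbc.batch_size setting from the accepted answer achieves the same effect transparently:
// Hypothetical sketch of JDBC statement batching; table and column names are invented.
PreparedStatement ps = connection.prepareStatement(
        "update PRODUCT set PRICE = ? where ID = ?");
int count = 0;
for (Product p : products) {
    ps.setDouble(1, p.getPrice());
    ps.setLong(2, p.getId());
    ps.addBatch(); // queue the statement instead of sending it immediately
    if (++count % 50 == 0) {
        ps.executeBatch(); // one round-trip for a chunk of 50 updates
    }
}
ps.executeBatch(); // send the remainder
ps.close();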
I agree with the answer above about looking at the chapter on batch processing.
I also wanted to add that you should make sure you only load what is necessary for the changes you need to make to the product.
What I mean is: if the product eagerly loads a large number of other objects that are not important for this transaction, you should consider not loading the joined objects. It will speed up the loading of products and, depending on their persistence strategy, may also save you time when making the product persistent again.
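For example, if each product eagerly pulls in an association that this job never touches, mapping it as lazy avoids the extra joins. A sketch; the Category association below is hypothetical:
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Product {
    @Id
    private Long id;

    private String name;

    // Hypothetical association, fetched lazily so that loading 20K
    // products does not also load 20K category rows.
    @ManyToOne(fetch = FetchType.LAZY)
    private Category category;

    // getters and setters omitted
}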
The fastest possible way to do a batch update would be to convert it to a single SQL statement and execute it as raw SQL on the session. Something like:
update TABLE set x = y where w = z;
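With Hibernate, such a raw statement can be issued through the session; a sketch, with hypothetical table and column names:
int rowsAffected = session.createSQLQuery(
        "update PRODUCT set PRICE = PRICE * 1.1 where CATEGORY_ID = :cat")
    .setParameter("cat", categoryId)
    .executeUpdate();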
Failing that, you can try to make fewer transactions and do the updates in batches:
start session
start transaction
products = session.getNamedQuery("GetProducts")
    .setCacheMode(CacheMode.IGNORE)
    .scroll(ScrollMode.FORWARD_ONLY);
count = 0;
foreach (product) {
    update product
    if (++count % 20 == 0) {
        session.flush();
        session.clear();
    }
}
commit transaction
close session
For more information look at the Hibernate Community Docs
I want to update some of my tables in the database and want all of this work done in one transaction.
First of all I delete some entries in the branchbuilding table, and insert new ones after this action.
The problem occurs when I insert an entry with the same buildingname and branch_fk (because I have this constraint on the table: uniqueConstraints={@UniqueConstraint(columnNames={"buildingname","branch_fk"})}). When I don't use a Hibernate session and use a normal JDBC transaction, I don't have this problem.
List<Integer> allBranchBuilding = branchBuildingDao.getAllBranchBuildingID(pkId, sess);
for (Integer integer : allBranchBuilding) {
branchBuildingDao.delete(integer, sess); // delete all BranchBuilding entries and their phone numbers
}
Address myAdr = new Address();
setAddress(myAdr, centralFlag, city, latit, longit, mainstreet, remainAdr, state);
BranchBuildingEntity bbe = new BranchBuildingEntity();
setBranchBuildingEntity(bbe, be, myAdr, city, centralFlag, latit, longit, mainstreet, buildingName, remainAdr, state, des);
branchBuildingDao.save(bbe, sess);//Exception Occurred
I get my session at the start of the method:
Session sess = HibernateUtil.getSession();
Transaction tx = sess.beginTransaction();
You're right, everything occurs in the same transaction, and the same Hibernate Session.
The Session keeps track of every entity it manages. Even though you asked to delete it in the database, the corresponding object is still memorized in the Session until the Session is terminated.
In general, it is possible that Hibernate reorders your operations when sending them to the database, for efficiency reasons.
What you could do is flush (i.e. send to the database) your pending changes before the save (if needed, you could also clear the Session - i.e. empty the entities memorized by the Session - after flushing):
sess.flush();
// sess.clear(); // if needed or convenient for you
branchBuildingDao.save(bbe, sess);
Note also that while your entities are memorized by the Session, modifying them will trigger an automatic update when the Session is flushed (for instance at commit).
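For example, the following issues an UPDATE without any explicit save call, purely through dirty checking (a sketch: setDescription and the literal identifier are hypothetical):
// Sketch of automatic dirty checking; setDescription and the id are hypothetical.
Transaction tx = sess.beginTransaction();
BranchBuildingEntity bb = (BranchBuildingEntity) sess.get(BranchBuildingEntity.class, 42);
bb.setDescription("updated"); // managed entity: the change is tracked by the Session
tx.commit(); // the flush at commit issues the UPDATE automatically, no save() needed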
In our project, we have a method that efficiently deletes a collection of entities (and another for an array, declared using the convenient ... parameter syntax). It works for all entities, so it doesn't have to be rewritten for each one; it removes the entities from the session at the same time and takes care of the flushing, as sketched below:
Loop on all entities: delete each one (using sess.delete(e)) and add it to a 'deleteds' list.
Every 50 entities (corresponding to the batch size we configured for efficiency reasons), and at the end:
flush the Session to force Hibernate to send the pending changes to the database immediately,
loop on the 'deleteds' list and evict each entity from the Session (using sess.evict(e)),
empty the 'deleteds' list.
Don't worry, flush only sends the SQL to the database. It is still subject to commit or rollback.
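A sketch of that helper, following the description above (the method name deleteAll is invented; the 50 matches the batch size we mention):
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.hibernate.Session;

public static void deleteAll(Session sess, Collection<?> entities) {
    List<Object> deleteds = new ArrayList<Object>();
    int count = 0;
    for (Object entity : entities) {
        sess.delete(entity);
        deleteds.add(entity);
        // every 50 entities (our configured batch size), and at the end:
        if (++count % 50 == 0 || count == entities.size()) {
            sess.flush(); // force Hibernate to send the DELETEs to the database now
            for (Object deleted : deleteds) {
                sess.evict(deleted); // clear the deleted entity from the Session
            }
            deleteds.clear();
        }
    }
}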