Optimistic locking and overriding - java

I am having difficulty understanding how version-based optimistic locking prevents the "last-commit-wins" issue, i.e. one transaction inappropriately overwriting another's changes.
To make the question more concrete, let's consider the following pseudo-code that uses JDBC:
connection.setAutoCommit(false);
Account account = select(id);
if (account.getBalance() >= amount) {
    account.setBalance(account.getBalance() - amount);
}
// UPDATE ... SET version = :oldVer + 1 WHERE version = :oldVer
int rowsUpdated = update(account);
if (rowsUpdated == 0) throw new OptimisticLockException();
connection.commit();
Here, what if another transaction commits its change right between the update and the commit? If the transactions are concurrent, the update made by the first transaction is not yet committed and therefore not visible to the second transaction (at the usual isolation levels), so won't the first transaction's commit override the changes of the second transaction without any notification or error?
Is it the case that optimistic locking merely decreases the probability of the issue without preventing it in general?

The idea of a database "transaction" is that it is supposed to provide a guarantee of consistency across multiple conceptual operations, and the database is in charge of enforcing this. So when the transaction commits, the database should only allow it to complete if it can ensure that everything that happened during the transaction is still valid.
In practice, a database will typically handle this by write-locking the relevant row as soon as one of the UPDATE statements succeeds, and holding that lock until the owning transaction completes. The second transaction's UPDATE therefore blocks; once the first transaction commits, the version has changed, so the second UPDATE matches zero rows. Thus, one of the updates is guaranteed to fail.
Note: this also requires an appropriate isolation level on your JDBC connection. The isolation level ensures that the check of the current value done before the update is still applicable at the time of the write.
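To make the mechanism concrete, here is a minimal runnable sketch of the version-filtered update in plain JDBC. The accounts table and its columns (id, balance, version) are assumptions for illustration, not from the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountDao {

    // Deducts 'amount' using a version-filtered UPDATE; returns false on
    // insufficient funds or on a lost race.
    boolean withdraw(Connection connection, long id, long amount) throws SQLException {
        connection.setAutoCommit(false);
        long balance;
        long version;
        try (PreparedStatement select = connection.prepareStatement(
                "SELECT balance, version FROM accounts WHERE id = ?")) {
            select.setLong(1, id);
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next() || rs.getLong("balance") < amount) {
                    connection.rollback();
                    return false;
                }
                balance = rs.getLong("balance");
                version = rs.getLong("version");
            }
        }
        try (PreparedStatement update = connection.prepareStatement(
                "UPDATE accounts SET balance = ?, version = version + 1 "
                        + "WHERE id = ? AND version = ?")) {
            update.setLong(1, balance - amount);
            update.setLong(2, id);
            update.setLong(3, version);
            // A concurrent writer that already updated this row either still
            // holds the row lock (so this statement waits) or has committed a
            // new version (so this statement matches zero rows). Either way,
            // it cannot silently overwrite the other update.
            if (update.executeUpdate() == 0) {
                connection.rollback();
                return false; // stale version: another transaction won the race
            }
        }
        connection.commit();
        return true;
    }
}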

Related

LockModeType.OPTIMISTIC_FORCE_INCREMENT is this lock taken at application level or database level

In the case of Hibernate JPA's LockModeType.OPTIMISTIC_FORCE_INCREMENT, is this lock taken at the application level or at the database level?
I am using following snippet for taking optimistic locks:
setting = this.entityManager.find(Setting.class, setting.getId(),
        LockModeType.OPTIMISTIC_FORCE_INCREMENT);
setting.setUpdateTimestamp(new Date());
newSettingList.add(setting);
Suppose two JVMs are running, both executing the same method, and there is a conflict; will this locking mechanism still work?
My observation is that while debugging, when execution was paused at the line "newSettingList.add(setting);", I did not see any change in the database at that point. So how is the locking ensured at the database level?
Optimistic locking is a strategy where you read a record and use a version number to check that the version hasn't changed before you write the record back. When you write the record back, you filter the UPDATE on the version to make sure it's atomic.
Pessimistic locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking, but it requires you to be careful with your application design to avoid deadlocks.
This means that with optimistic locking you don't intervene in the locks at the database level. What you do is simply use the database to keep a version for each object/entity. For example:
a) You open transaction T1 from the 1st JVM and read an object with version v1.
b) You open transaction T2 from the 2nd JVM and read the same object, still at version v1 (no update to this object has been made yet).
c) In transaction T1 you update the object, setting its version to v2, and commit the transaction.
d) When T2 then tries to write the object, you get an exception because of the version check.
Note that there is no need for the two transactions to come from the same JVM.
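A minimal sketch of what such a versioned entity looks like, assuming a Setting entity along the lines of the question (the field names are assumptions):

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Setting {

    @Id
    private Long id;

    private Date updateTimestamp;

    // Hibernate adds "WHERE version = ?" to every UPDATE and increments the
    // column on success; OPTIMISTIC_FORCE_INCREMENT bumps it at commit even
    // if no other field changed.
    @Version
    private long version;

    public Long getId() { return id; }

    public void setUpdateTimestamp(Date updateTimestamp) {
        this.updateTimestamp = updateTimestamp;
    }
}

Because the version check is part of the UPDATE statement itself, it works across JVMs: whichever transaction flushes second matches zero rows, and Hibernate raises an OptimisticLockException. That is also why you see nothing in the database while paused in the debugger: nothing is written until the persistence context flushes, typically at commit.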

Does Hibernate 4 require a transaction when Hibernate 3 did not? [duplicate]

Why do I need Transaction in Hibernate for read-only operations?
Does the following transaction put a lock in the DB?
Example code to fetch from DB:
Transaction tx = HibernateUtil.getCurrentSession().beginTransaction(); // why begin a transaction?
// read-only operation here
tx.commit(); // why tx.commit()? I don't want to write anything
Can I use session.close() instead of tx.commit()?
Transactions for reading might indeed look strange, and people often don't mark methods as transactional in this case. But JDBC will create a transaction anyway; it will simply run with autocommit=true unless a different option was set explicitly. Still, there are practical reasons to mark transactions as read-only:
Impact on databases
The read-only flag may let the DBMS optimize such transactions, or transactions running in parallel with them.
Having a transaction that spans multiple SELECT statements guarantees proper isolation for levels starting from Repeatable Read or Snapshot (e.g. see PostgreSQL's Repeatable Read). Otherwise two SELECT statements could see an inconsistent picture if another transaction commits in parallel. This doesn't apply when using Read Committed.
Impact on ORM
An ORM may cause unpredictable results if you don't begin/finish transactions explicitly. E.g. Hibernate will open a transaction before the first statement, but it won't finish it, so the connection is returned to the connection pool with an unfinished transaction. What happens then? JDBC is silent on this, so the outcome is implementation-specific: the MySQL and PostgreSQL drivers roll such a transaction back, while Oracle commits it. Note that this can also be configured at the connection-pool level; e.g. C3P0 gives you such an option, with rollback as the default.
Spring sets FlushMode.MANUAL for read-only transactions, which enables further optimizations such as skipping dirty checks. This can be a huge performance gain depending on how many objects you have loaded.
Impact on architecture & clean code
There is no guarantee that your method doesn't write to the database. If you mark the method as @Transactional(readOnly = true), you dictate whether it's actually possible to write to the DB within the scope of this transaction. If your architecture is cumbersome and some team member puts a modification query where it's not expected, this flag will point you to the problematic place.
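For illustration, a minimal sketch of the read-only flag in Spring; the Account entity and the JPQL query are placeholders, not from the question:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @PersistenceContext
    private EntityManager entityManager;

    // One connection for the whole method, FlushMode.MANUAL (no dirty
    // checking at commit), and accidental writes are flagged.
    @Transactional(readOnly = true)
    public List<Account> overdrawnAccounts() {
        // At Repeatable Read or above, all queries inside this transaction
        // see one consistent snapshot.
        return entityManager
                .createQuery("select a from Account a where a.balance < 0", Account.class)
                .getResultList();
    }
}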
All database statements are executed within the context of a physical transaction, even when we don’t explicitly declare transaction boundaries (e.g., BEGIN, COMMIT, ROLLBACK).
If you don't declare transaction boundaries explicitly, then each statement will have to be executed in a separate transaction (autocommit mode). This may even lead to opening and closing one connection per statement unless your environment can deal with connection-per-thread binding.
Declaring a service as @Transactional gives you one connection for the whole transaction duration, and all statements run on that single connection. This is much better than not using explicit transactions in the first place.
On large applications you may have many concurrent requests, and reducing the connection acquisition rate will definitely improve overall application performance.
JPA doesn't enforce transactions on read operations; only writes throw a TransactionRequiredException if you forget to start a transactional context. Nevertheless, it's always better to declare transaction boundaries even for read-only operations (in Spring, @Transactional(readOnly = true) marks a transaction as read-only, which brings a real performance benefit).
Transactions do take locks in the database (good database engines handle concurrent locks sensibly), and they are useful even for read-only work to ensure that no other transaction adds data that would make your view inconsistent. You always want a transaction (though it is sometimes reasonable to tune the isolation level, it's best not to do that starting out); if you never write to the DB during your transaction, committing and rolling back work out to be the same thing (and very cheap).
Now, if you're lucky and your queries against the DB are such that the ORM always maps them to single SQL statements, you can get away without explicit transactions by relying on the DB's built-in autocommit behavior. But ORMs are relatively complex systems, so it isn't at all safe to rely on that unless you do a lot more work checking what the implementation actually does. Writing explicit transaction boundaries is far easier to get right (especially with AOP or a similar ORM-driven technique; from Java 7 onwards, try-with-resources can be used too).
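As a sketch of those explicit boundaries with try-with-resources (assuming Hibernate 5+, where Session is AutoCloseable; sessionFactory and the Account entity are assumed):

import org.hibernate.Session;
import org.hibernate.Transaction;

// The session always closes, even on an exception, and the transaction is
// always finished explicitly, so it never returns to the pool half-open.
try (Session session = sessionFactory.openSession()) {
    Transaction tx = session.beginTransaction();
    try {
        Account account = session.get(Account.class, 1L); // read-only work
        tx.commit(); // cheap when nothing was written
    } catch (RuntimeException e) {
        tx.rollback();
        throw e;
    }
}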
It doesn't matter whether you only read or not; the database must still keep track of your result set, because other database clients may want to write data that would change it.
I have seen faulty programs kill huge database systems because they just read data but never commit, forcing the transaction log to grow: the DB can't release the transaction data before a COMMIT or ROLLBACK, even if the client has done nothing for hours.

How to deal with "StaleObjectStateException" in Hibernate with pessimistic locking?

Edit:
It turns out that since I was using the "version" annotation, I was actually using optimistic locking, not pessimistic locking.
If I remove the version attribute and hence disable optimistic locking, pessimistic locking takes over and performance degrades significantly.
So I guess I have to live with optimistic locking and the occasional exceptions. Is there a better solution?
Original:
I currently have multiple Tomcat instances behind an Apache 2.2 load balancer via AJP. The backend is Hibernate. The system serves multiple users and requests, and for each request it deducts one credit from the user's account.
I use Hibernate's pessimistic locking for managing the credit deductions.
From time to time I get the following on the user account object in the logs:
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Code
private boolean decUserAccountQuota(UserAccount userAccount, int creditDec) {
    if (userAccount.getBalance() < 1) return false;
    String userName = userAccount.getUsername();
    Manager manager = getManager(userName);
    try {
        manager.beginTransaction();
        // Pessimistic lock: this calls sessionFactory.getCurrentSession().refresh()
        manager.refresh(userAccount, LockMode.UPGRADE);
        userAccount.setBalance(userAccount.getBalance() - creditDec);
        manager.commitTransaction(); // this causes the exception
    } catch (Exception exp) {
        exp.printStackTrace();
        manager.rollbackTransaction();
        return false;
    } finally {
        manager.closeSession();
    }
    return true;
}
Questions:
1. How do I prevent this exception from happening? What happens here is that more than one thread tries to update the same entity; one thread succeeds, and when the next thread goes to commit its data, it sees that the entity has already been modified and throws a StaleObjectStateException. But if I'm already using pessimistic locking, how can the exception still happen?
2. Are there better ways, in terms of performance and integrity, of managing the user account credit system?
Your Hibernate entity is either using the @Version annotation or defines <version> in your XML Hibernate mapping. This enables Hibernate's optimistic locking.
If you want only the explicit pessimistic locking you described, removing the above should fix your problem.
In your code, the line below is not taking a pessimistic lock, even though you ask it to, because your database does not support SELECT ... FOR UPDATE:
manager.refresh(userAccount, LockMode.UPGRADE);
For this reason Hibernate falls back to an alternative mode, LockMode.READ, for userAccount, which is optimistic locking based on the @Version attribute of your entity.
I searched for an alternative way of forcing Hibernate to take a pessimistic lock in this scenario, but it doesn't look possible.
Coming back to your question of how to minimize or avoid the StaleObjectStateException, here are my thoughts:
Synchronize on the user's account (a sketch follows below). This seems to hurt performance, but it would only matter when the same user places many concurrent requests, and giving up a little performance so the user isn't thrown an exception and forced to retry seems a reasonable trade-off.
If you have found any alternative solution, please do share it.
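A minimal sketch of that synchronization idea. Since each Hibernate session materializes its own entity instance, synchronizing on the UserAccount object itself won't work reliably; the per-user lock map below is my adaptation, not part of the original suggestion:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AccountLocks {

    // One monitor object per username; entity instances can't serve as
    // monitors because every session loads its own copy.
    private static final ConcurrentMap<String, Object> LOCKS = new ConcurrentHashMap<>();

    public static Object forUser(String userName) {
        return LOCKS.computeIfAbsent(userName, k -> new Object());
    }
}

The deduction would then run inside synchronized (AccountLocks.forUser(userName)) { ... }, serializing requests for the same user within one JVM. Note that this only helps within a single JVM; across the load-balanced Tomcat instances in the question, the database-level version check remains the real safeguard.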

Hibernate second level cache and RR transaction isolation

Suppose two transactions (both at the RR isolation level) ask for the same item, which is second-level cached, and then both modify and store it. Since the read is served from the cache, no SQL is issued for it; in this case, will they actually start a database transaction? And when they commit their changes, will they run into the lost-update problem?
From a pessimistic point of view:
If the second-level cache is configured to participate in the transaction, then only the transaction that first acquires the write lock can modify the cached object and write the change to the database. When the second transaction wants to acquire the write lock, it has to wait until the first transaction ends and releases it.
With optimistic locking, I would guess an optimistic-lock exception (StaleObjectStateException, OptimisticLockException, or a similar name) occurs, and the second transaction would retry the operation.
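A minimal sketch of that retry pattern with plain JPA; the Item entity, its quantity field, and emf are placeholders for illustration:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.OptimisticLockException;

public class RetryingUpdater {

    // Re-runs the unit of work with a fresh read whenever the version
    // check fails at flush time.
    static void decrementWithRetry(EntityManagerFactory emf, long itemId, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                Item item = em.find(Item.class, itemId); // may be served from the 2nd-level cache
                item.setQuantity(item.getQuantity() - 1); // versioned entity
                em.flush(); // a version conflict surfaces here as OptimisticLockException
                em.getTransaction().commit();
                return;
            } catch (OptimisticLockException e) {
                if (em.getTransaction().isActive()) em.getTransaction().rollback();
                if (attempt >= maxAttempts) throw e;
                // loop: re-read the item and try again
            } finally {
                em.close();
            }
        }
    }
}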
