Edit:
It turns out that in this case, since I was using the "version" annotation, I was actually using optimistic locking, not pessimistic locking.
If I remove the version annotation and hence disable optimistic locking, pessimistic locking takes over and performance degrades significantly.
So I guess I have to live with optimistic locking and occasional exceptions. Is there a better solution?
Original:
I currently have multiple Tomcat instances behind an Apache 2.2 load balancer via AJP. The backend uses Hibernate. The system serves multiple users and requests; for each request it deducts one credit from the user's account.
I use hibernate's pessimistic locking for managing credit deductions.
From time to time I keep getting the following exception on the user account object in the logs:
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Code
private boolean decUserAccountQuota(UserAccount userAccount, int creditDec) {
    if (userAccount.getBalance() < 1) return false;
    String userName = userAccount.getUsername();
    Manager manager = getManager(userName);
    try {
        manager.beginTransaction();
        // meant to take a pessimistic lock; this calls
        // sessionFactory.getCurrentSession().refresh()
        manager.refresh(userAccount, LockMode.UPGRADE);
        userAccount.setBalance(userAccount.getBalance() - creditDec);
        manager.commitTransaction(); // this throws the exception
    } catch (Exception exp) {
        exp.printStackTrace();
        manager.rollbackTransaction();
        return false;
    } finally {
        manager.closeSession();
    }
    return true;
}
Questions:
How do I prevent this exception from happening? What happens here is that more than one thread tries to update the same entity; one thread succeeds, so when the next thread goes to commit its data, it sees that the entity has already been modified and throws a StaleObjectStateException. But if I'm already using pessimistic locking, how can the exception still happen?
Are there any better ways, in terms of performance and integrity, of managing the user account credit system?
Your Hibernate entity is either using the @Version annotation or defines <version> in your XML Hibernate mapping. Either of these enables the optimistic locking provided by Hibernate.
If you are explicitly using pessimistic locking as you described, removing the above should fix your problem.
More info here
In your code, the line below is not taking a pessimistic lock, even though you ask it to, because your database does not support SELECT ... FOR UPDATE:
manager.refresh(userAccount, LockMode.UPGRADE);
For this reason Hibernate falls back to an alternative lock mode, LockMode.READ, for userAccount, which is optimistic locking based on the @Version attribute in your entity.
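For reference, a minimal sketch of the kind of mapping that enables this behavior (any detail beyond the fields mentioned in the question is illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class UserAccount {

    @Id
    private Long id;

    private int balance;

    // The presence of this field enables Hibernate's optimistic locking:
    // every UPDATE is issued as "... SET version = :old + 1
    // WHERE id = :id AND version = :old", and a zero-row match raises
    // StaleObjectStateException.
    @Version
    private int version;

    // getters and setters omitted
}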
I searched for an alternative way of forcing Hibernate to take a pessimistic lock in your scenario, but it looks like that is not possible.
Coming back to your question of how to minimize or avoid the StaleObjectStateException, here are my thoughts:
Synchronize on the userAccount object, as sketched below. This seems to hurt performance, but contention would only occur when the same user places many concurrent requests. Giving up a little performance to make sure the user is not thrown an exception and forced to retry seems like the preferable trade-off.
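A rough sketch of that idea, using one shared lock object per user name rather than the entity instance itself (the lock map and wrapper method are illustrative, and JVM-level synchronization will not help across multiple load-balanced Tomcat instances):

import java.util.concurrent.ConcurrentHashMap;

// Entity instances loaded in different sessions are different objects,
// so synchronize on a shared per-user key rather than on the entity.
private final ConcurrentHashMap<String, Object> userLocks = new ConcurrentHashMap<>();

private boolean decUserAccountQuotaSerialized(UserAccount userAccount, int creditDec) {
    Object lock = userLocks.computeIfAbsent(userAccount.getUsername(), k -> new Object());
    synchronized (lock) { // serializes deductions per user within this JVM
        return decUserAccountQuota(userAccount, creditDec);
    }
}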
If you have found an alternative solution, please do share it.
Related
I found that @Transactional is used to ensure a transaction around a repository method or a service method.
@Lock is used on a repository method to lock the entity and provide isolation.
This raises some questions in my mind:
What are the major differences/relations between these two annotations?
When should I use @Transactional and when @Lock?
Is @Lock useful in a distributed database system to provide data concurrency and consistency?
Transactional: Whenever you put the @Transactional annotation on a method, it enables transactional behavior satisfying the ACID properties.
ACID: ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties of database transactions intended to guarantee data validity even in the event of errors.
Atomicity
Guarantees that all operations in a transaction are treated as a single "unit", which either succeeds completely or fails completely.
Consistency
Ensures that a transaction can only bring the database from one valid state to another by preventing data corruption.
Isolation
Determines how and when changes made by one transaction become visible to others. Serializable and Snapshot Isolation are the top two isolation levels from a strictness standpoint.
Durability
Ensures that the results of the transaction are permanently stored in the system. The modifications must persist even in case of power loss or system failures.
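As an illustration of the atomicity guarantee, here is a minimal sketch of a Spring service method (the service, repository, and entity names are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository accountRepository; // assumed Spring Data repository

    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    // Both balance changes commit together or roll back together;
    // if the second lookup or the flush fails, the first change is undone.
    @Transactional
    public void transfer(Long fromId, Long toId, int amount) {
        Account from = accountRepository.findById(fromId).orElseThrow();
        Account to = accountRepository.findById(toId).orElseThrow();
        from.setBalance(from.getBalance() - amount);
        to.setBalance(to.getBalance() + amount);
        // JPA dirty checking flushes both updates at commit time.
    }
}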
Lock: This should not be confused with @Transactional; @Lock enables locking behavior during a transaction.
JPA has two main lock types defined.
Pessimistic Locking
Optimistic Locking
If you want to know more about pessimistic and optimistic locking you can explore the internet; below is the explanation from Baeldung.
Pessimistic Locking: When we are using pessimistic locking in a transaction and access an entity, it will be locked immediately. The transaction releases the lock either by committing or rolling back the transaction.
Optimistic Locking: In optimistic locking, the transaction doesn't lock the entity immediately. Instead, the transaction commonly saves the entity's state with a version number assigned to it.
When we try to update the entity's state in a different transaction, the transaction compares the saved version number with the existing version number during the update.
At this point, if the version number differs, it means that the entity can't be modified. If there is an active transaction, then that transaction will be rolled back and the underlying JPA implementation will throw an OptimisticLockException.
Apart from the version number approach, we can use other approaches such as timestamps, hash value computation, or serialized checksum, depending on which approach is the most suitable for our current development context.
There are also other lock types available in Spring:
NONE: No lock.
OPTIMISTIC: Optimistic lock.
OPTIMISTIC_FORCE_INCREMENT: Optimistic lock, with version update.
PESSIMISTIC_FORCE_INCREMENT: Pessimistic write lock, with version update
PESSIMISTIC_READ: Pessimistic read lock.
PESSIMISTIC_WRITE: Pessimistic write lock.
READ: Synonymous with OPTIMISTIC.
WRITE: Synonymous with OPTIMISTIC_FORCE_INCREMENT.
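For example, in a Spring Data JPA repository a lock mode is attached to a query method roughly like this (the entity and method names are illustrative):

import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // On databases that support it, this issues SELECT ... FOR UPDATE,
    // blocking other writers until the surrounding transaction ends.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Account findWithLockingById(Long id);
}

Pessimistic lock modes require an active transaction, so such a method is typically invoked from a @Transactional service method.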
Now to answer your questions:
What are the major differences/relations between these two annotations?
This should be clear from the explanation above.
When to use @Transactional and when to use @Lock?
If you want transactional behavior, add @Transactional; if your use case also requires locking, pick the lock type appropriate to that use case.
Is @Lock useful in a distributed database system to provide data concurrency and consistency?
The two main tools we use to cope with concurrency are database transactions and distributed locks. These two are not interchangeable. You can't use a transaction when you need a lock. You can't use a lock when you need a transaction. source
In the case of Hibernate JPA's LockModeType.OPTIMISTIC_FORCE_INCREMENT, is this lock taken at the application level or the database level?
I am using the following snippet for taking optimistic locks:
setting = this.entityManager.find(Setting.class, setting.getId(),
        LockModeType.OPTIMISTIC_FORCE_INCREMENT);
setting.setUpdateTimestamp(new Date());
newSettingList.add(setting);
Suppose there are two JVMs running, both executing the same method, and there is a conflict; will this locking mechanism work in that case?
My observation is that while debugging, when I was at the line newSettingList.add(setting); I was not seeing any changes in the database at that point. So how is locking ensured at the database level?
Optimistic locking is a strategy where you read a record and use its version number to check that the version hasn't changed before you write the record back. When you write the record back, you filter the update on the version to make sure it's atomic.
Pessimistic locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid deadlocks.
Better explanation here.
This means that since you use optimistic locking you don't intervene in the locks at the database level. What you do is simply use the database to keep the versioning of the objects/entities. For example:
a) You open a transaction T1 from the 1st JVM and read an object with version v1.
b) You open a transaction T2 from the 2nd JVM and read the same object with version v1 (no update of this object has been made yet).
c) In transaction T1 you update the object, setting its version to v2, and commit the transaction.
d) In transaction T2 you try to update the object in the db again, but you get an exception because of the versioning.
Note that the two transactions do not need to run in the same JVM, since the version check is enforced by the database.
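Concretely, nothing is written at find() time; the check happens when the transaction flushes and commits. A hedged sketch of the full cycle (entityManagerFactory and settingId are assumed to exist, and the SQL in the comment is approximate):

import java.util.Date;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

Setting setting = em.find(Setting.class, settingId,
        LockModeType.OPTIMISTIC_FORCE_INCREMENT);
setting.setUpdateTimestamp(new Date());

// On commit, Hibernate issues roughly:
//   UPDATE setting SET version = :old + 1, update_timestamp = :ts
//   WHERE id = :id AND version = :old
// If another JVM committed in between, zero rows match and an
// OptimisticLockException is thrown. The guard lives in the atomic
// UPDATE itself, which is why it works across JVMs even though no
// lock is held between find() and commit().
em.getTransaction().commit();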
I am having difficulty understanding how version-based optimistic locking can prevent the "last commit wins" issue and inappropriate overriding.
To make the question more concrete, let's consider the following pseudo-code that uses JDBC:
connection.setAutoCommit(false);

Account account = select(id);
if (account.getBalance() >= amount) {
    account.setBalance(account.getBalance() - amount);
}
int rowsUpdated = update(account); // version = :oldVer + 1 WHERE version = :oldVer
if (rowsUpdated == 0) {
    throw new OptimisticLockException();
}

connection.commit();
Now, what if another transaction committed its change right between the update and the commit? If the transactions are concurrent, the update made by the first transaction is not yet committed and so not visible to the second transaction (with proper isolation levels), so the first transaction's commit will override the changes of the second transaction without any notification or error.
Is it the case that optimistic locking merely decreases the probability of the issue while not preventing it in general?
The idea of a database "transaction" is that it is supposed to provide a guarantee of "consistency" across multiple conceptual operations, and the database is in charge of enforcing this. So, when the transaction commits, the database should only allow the transaction to complete if it can ensure that everything that happened during the transaction is still valid.
In practice, a database will typically handle this such that once one of the updates succeeds, the relevant row is write-locked until the relevant transaction completes. Thus, one of the updates is guaranteed to fail.
Note: this also requires an appropriate isolation level on your JDBC connection. The isolation level ensures that the check of the current value done before the update is still applicable at the time of the write.
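To make that concrete, here is a hedged JDBC sketch of the guarded update (connection, accountId, oldVersion, and newBalance are assumed to exist; table and column names are illustrative). The version check and the write are one atomic statement, and the row lock it takes is held until the transaction ends:

import java.sql.PreparedStatement;
import javax.persistence.OptimisticLockException;

// Under READ COMMITTED, a concurrent transaction running the same
// statement blocks on the row lock until this transaction ends, then
// sees the incremented version and matches zero rows.
String sql = "UPDATE account SET balance = ?, version = version + 1 "
        + "WHERE id = ? AND version = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setInt(1, newBalance);
    ps.setLong(2, accountId);
    ps.setLong(3, oldVersion);
    if (ps.executeUpdate() == 0) {      // someone else won the race
        connection.rollback();
        throw new OptimisticLockException("Row changed since it was read");
    }
}
connection.commit();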
Our app mostly uses optimistic locking via Hibernate's versioning support. We are planning to implement pessimistic locking in one particular scenario. I don't have much experience with pessimistic locking, so please excuse me if this question sounds naïve.
When a user shows the intention to update an entry, we lock the corresponding DB row using "select for update". Now, if this user takes a long time to commit his changes, or forgets about it after locking, how do we release this lock using some timeout/rollback mechanism, so that the row doesn't stay locked for a very long time, preventing all other users from editing it?
I doubt this will be handled by the WebLogic-JTA-Spring transaction mechanism we are using, where we already have a transaction timeout of 30 minutes. (??)
So, should this rollback be handled directly at the Oracle level? If yes, then how? Please advise on the best way to handle this so that such locks don't linger around for too long.
Locks will be released only when the transaction ends. The transaction will end either when an explicit commit or rollback is issued to the database or when the database session is terminated (which does an implicit rollback). If your middle tier is already set to rollback any transactions that are open for more than 30 minutes, that would be sufficient to release the locks.
If you have a Java application running in a Weblogic application server, however, it strikes me as unusual for pessimistic locking to be appropriate. First, I assume that you are using a connection pool in the middle tier. If that is the case, then one database connection from the connection pool would need to be held by the middle tier for the length of the transaction (up to 30 minutes in this case). But allowing one session to hold open a particular database session for an extended period of time defeats the purpose of having a connection pool. Normally, dozens if not hundreds of application sessions can share a single connection from the connection pool-- if you are going to allow pessimistic locking, you're now forcing a 1:1 relationship between application sessions and database sessions for those sessions.
There are numerous cases in which optimistic locking cannot replace pessimistic locking. The lock timeout is handled in the database; refer to this page about how to configure it in Oracle:
Can Oracle's default object lock timeout be changed?
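If you'd rather control the waiting side from JPA, there is the standard javax.persistence.lock.timeout hint, sketched below (support and exact semantics vary by database and Hibernate version; the Entry entity and entryId are illustrative). Note this bounds how long a second session waits to acquire the lock; how long the holder keeps it is still governed by the transaction timeout discussed above.

import java.util.Collections;
import java.util.Map;
import javax.persistence.LockModeType;

// Wait at most 5 seconds for the row lock; on Oracle this maps to
// SELECT ... FOR UPDATE WAIT 5, and hitting the timeout raises a
// LockTimeoutException instead of blocking indefinitely.
Map<String, Object> hints =
        Collections.singletonMap("javax.persistence.lock.timeout", 5000);
Entry entry = entityManager.find(Entry.class, entryId,
        LockModeType.PESSIMISTIC_WRITE, hints);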
I am using Java EE 6 with JBoss 7 and JPA2 + Hibernate. For my clients I provide a REST API.
My concern is how to efficiently ensure that no resources are modified concurrently. It shouldn't happen too often, but in case it does happen I would like to ensure proper handling.
My approaches so far:
1. A Map<String, ReentrantLock> to store the locks (my IDs are always UUIDs). Locks are created on demand if missing from the map. What I like about this approach is that concurrent access will be blocked and I can control how long the other thread tries to lock the resource.
2. Use JPA2 optimistic locking.
Which one would you recommend? Or is there an even better approach?
1. The lock map seems error-prone, plus it might not scale. I've never seen such a design and would discourage it.
2. Transactions with optimistic locking are a viable option. In this case, some transactions might fail and you will need to deal with the errors and retry (a sketch of that follows below).
3. Transactions with pessimistic locking are another viable option. It's like 1) but using the database to lock and order operations. AFAIK, JPA supports pessimistic locking as well. Otherwise you can use SELECT ... FOR UPDATE (supported by most DBMSs) to explicitly acquire row locks. Make sure you figure out a scheme where locks are acquired in a consistent order, to avoid deadlocks.
The choice between 2 and 3 depends on the use case, e.g. whether contention is expected to be high or not, or whether it is easy to retry a failed transaction.
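For option 2, the retry handling might look roughly like this (a minimal sketch assuming a plain EntityManagerFactory; the Resource entity and the attempt count are illustrative):

import java.util.function.Consumer;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.OptimisticLockException;
import javax.persistence.RollbackException;

// Retry a versioned update a few times; each attempt runs in a fresh
// transaction and re-reads the entity, so it works on the latest
// committed version.
boolean updateWithRetry(EntityManagerFactory emf, String id,
                        Consumer<Resource> change, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            Resource resource = em.find(Resource.class, id);
            change.accept(resource);
            em.getTransaction().commit(); // @Version check happens here
            return true;
        } catch (OptimisticLockException | RollbackException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            // fall through and retry against the fresh state
        } finally {
            em.close();
        }
    }
    return false; // caller decides how to surface repeated conflicts
}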