Lock, process and release lock in JDBC - Java

That's my requirement - to lock a database record, process it, and release the lock.
Environment - weblogic 10.3
Database - Oracle 11g
Datasources - multiple XA resources involved
Tx mgr - JTA
Here are the results of the experiments I have done so far:
Experiment 1 - Rely on read uncommitted
Read the db record
Lock the record by id in another table, as part of the global JTA transaction
Process the record
A second transaction which tries to lock the same record will fail and will drop (skip) the record.
But for this to work the RDBMS must allow dirty reads, and unfortunately Oracle does not support the read uncommitted isolation level.
Experiment 2 - Lock record in local transaction
Read the db record
Lock the record by id in another table, as a separate local transaction
Process the record and delete the record when the transaction commits successfully
A second transaction which tries to lock the same record will fail and will drop (skip) the record. Since this approach is based on committed data, it should work fine.
Here is the problem - since the lock transaction and the global parent transaction are different, if the processing fails and the main transaction rolls back, I should compensate by rolling back the lock transaction, which I do not know how to do - need help here.
If I am not able to roll back the record-locking transaction, I would have to write some dirty logic around the record-locking code, which I would prefer to avoid.
This appears to be a very common requirement. I would like to know how you guys handle this elegantly.
Does Oracle support any way of making uncommitted updates visible to all transactions?
Thanks a lot in advance.

We have a utility class that implements roughly what you describe in experiment 2:
Prerequisite: a dedicated table for the lock.
On the lock phase, a new connection is created and an INSERT INTO is performed on the lock table.
On the unlock phase, a rollback on that connection is performed, regardless of the outcome of the business logic.
It is used like a java.util.concurrent.locks.Lock:
Lock lock = new Lock(...);
lock.lock();
try {
    // your business logic
} finally {
    lock.unlock();
}
It works on WebSphere / Oracle.
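For illustration, here is a minimal sketch of such a utility, assuming a dedicated non-XA datasource and a lock table named record_lock with a unique record_id column (these names are assumptions, not the actual implementation):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: holds a dedicated connection for the duration of the lock.
// The uncommitted INSERT makes any competing inserter of the same key
// block on the row lock; the rollback in unlock() releases the lock
// and leaves no row behind.
public class Lock {
    private final DataSource dataSource; // plain (non-XA) datasource
    private final String recordId;
    private Connection connection;

    public Lock(DataSource dataSource, String recordId) {
        this.dataSource = dataSource;
        this.recordId = recordId;
    }

    public void lock() throws SQLException {
        connection = dataSource.getConnection();
        connection.setAutoCommit(false); // keep the INSERT uncommitted
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO record_lock (record_id) VALUES (?)")) {
            ps.setString(1, recordId);
            ps.executeUpdate(); // blocks here if another session holds the lock
        }
    }

    public void unlock() throws SQLException {
        try {
            connection.rollback(); // releases the lock
        } finally {
            connection.close();
        }
    }
}

Because the lock lives entirely on its own connection, it is released by a plain rollback no matter what the surrounding JTA transaction does, which sidesteps the compensation problem from experiment 2.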
Note that if you use JPA, there is built-in support for entity locking.
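For example, with the standard JPA API a pessimistic lock can be taken in one call (the Order entity here is hypothetical):

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Issues SELECT ... FOR UPDATE on Oracle and blocks until the row is free;
// the lock is released when the surrounding transaction commits or rolls back.
public Order lockForProcessing(EntityManager em, Long orderId) {
    return em.find(Order.class, orderId, LockModeType.PESSIMISTIC_WRITE);
}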

Related

Concurrent update and delete on one table

Using Java, Hibernate and Oracle database.
I have two concurrent processes:
Process1 removes some entities from table1 (multiple statements: delete from table1 where id = ...), done by a native Hibernate query.
Process2 updates the SAME or other entities in table1 (multiple statements: update table1 set name = ... where id = ...), done via a JPA repository method.
Currently a CannotAcquireLockException is sometimes thrown:
SQL Error: 60, SQLState: 61000
ORA-00060: deadlock detected while waiting for resource
So, the question is: what is going on, and how can I avoid the exception? Any workaround?
IMPORTANT: in case of collisions I would be satisfied if the delete succeeds and the update does nothing.
Session A waits for B, and B waits for A - that is what a deadlock basically is. Since neither session can ever proceed, Oracle kills one of them.
Option 1
Create a semaphore table to effectively serialize the concurrent processes.
create table my_semaphore(dummy char(1));
Session 1:
LOCK TABLE my_semaphore in exclusive mode;
UPDATE <your update here>;
COMMIT;
Session 2:
LOCK TABLE my_semaphore in exclusive mode;
DELETE <your delete here>;
COMMIT;
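From JDBC, the same serialization might look like this sketch (the my_semaphore table comes from the DDL above; the update statement and method name are placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public void runSerialized(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement st = conn.createStatement()) {
        // Blocks until the other session commits and releases the table lock.
        st.execute("LOCK TABLE my_semaphore IN EXCLUSIVE MODE");
        st.executeUpdate("UPDATE table1 SET name = 'x' WHERE id = 1"); // your update here
        conn.commit(); // releases the table lock
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}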
Option 2
Try processing rows in the same order in both statements, say ordered by rowid or some key, so that session B never comes back for rows already held by A while A is waiting behind rows locked by B. This is more tricky and resource-intensive.
"locking tables doesnt look attractive at all -what the point then of having severaal processes working with database"
Obviously we want to enable concurrent processes. The trick is to design processes which can run concurrently without interfering with each other. Your architecture is failing to address this point. It should not be possible for Process B to update records which are being deleted by Process A.
This is an unfortunate side-effect of the whole web paradigm which is stateless and favours an optimistic locking strategy. Getting locks at the last possible moment "scales" but incurs the risk of deadlock.
The alternative is a pessimistic locking strategy, in which a session locks the rows it wants upfront. In Oracle we can do this with SELECT ... FOR UPDATE. This locks a subset of rows (the set defined by the WHERE clause), not the whole table.
So it doesn't hinder concurrent processes which operate on different subsets of data, but it will prevent a second session from grabbing records which are already being processed. This still results in an exception for the second session, but at least it happens before the session has done any work, and it provides information for re-evaluating the task (hmm, do we want to delete these records if they're being updated?).
Hibernate supports the SELECT FOR UPDATE syntax. This StackOverflow thread discusses it.
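As a plain JDBC illustration of the claim-first approach (table and column names are assumptions), NOWAIT makes the second session fail immediately with ORA-00054 instead of queueing up and risking a deadlock:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Claim the row up front; a competing session fails fast rather than
// deadlocking halfway through its work.
public void deleteWithClaim(Connection conn, long id) throws SQLException {
    conn.setAutoCommit(false);
    try (PreparedStatement lock = conn.prepareStatement(
            "SELECT id FROM table1 WHERE id = ? FOR UPDATE NOWAIT");
         PreparedStatement del = conn.prepareStatement(
            "DELETE FROM table1 WHERE id = ?")) {
        lock.setLong(1, id);
        try (ResultSet rs = lock.executeQuery()) {
            if (rs.next()) { // row exists and is now locked by us
                del.setLong(1, id);
                del.executeUpdate();
            }
        }
        conn.commit();
    } catch (SQLException e) {
        conn.rollback(); // e.g. ORA-00054: row already locked by the updater
        throw e;
    }
}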

What kind of lock on MySQL is set when I use connection.setAutoCommit(false) in Java?

Suppose I have the following piece of code.
try {
    connection.setAutoCommit(false);
    ....
    ....
    connection.commit();
} catch (Exception e) {
}
Does the above transaction acquire locks on the MySQL tables referenced in the code in the try block? If it does, what kind of locks are they - read locks or write locks?
Also, does it take row-level locks or lock the whole table?
There are two kinds of locking commonly used with JDBC:
Optimistic locking
Pessimistic locking
Pessimistic locking locks the record as soon as the row to be updated is selected. Optimistic locking holds no database lock at all: it detects conflicting changes (typically via a version column) only when the update takes place.
But connection.setAutoCommit(false) means that you are starting a transaction on this connection. All changes you make to the DB tables within this connection will be saved on commit or reverted on rollback (or on disconnect without commit). It does not mean that you lock the whole DB. Whether other sessions will block when accessing the tables your transaction uses depends on the operations your transaction performs and on the transaction isolation level.
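To make that concrete, here is a hedged sketch against an assumed InnoDB table named account: the UPDATE takes exclusive row locks on the rows it matches, other writers to those rows block, and the locks are held until commit() or rollback():

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public void debitAccount(Connection connection, long accountId) throws SQLException {
    connection.setAutoCommit(false); // start a transaction
    try (PreparedStatement ps = connection.prepareStatement(
            "UPDATE account SET balance = balance - 100 WHERE id = ?")) {
        ps.setLong(1, accountId);
        // InnoDB locks the matched row(s) exclusively here; plain SELECTs
        // by other sessions still proceed via MVCC snapshot reads.
        ps.executeUpdate();
        connection.commit(); // releases the row locks
    } catch (SQLException e) {
        connection.rollback(); // also releases the row locks
        throw e;
    }
}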

How to rollback/timeout “select for update” locks in Oracle?

Our app mostly uses optimistic locking via Hibernate's versioning support. We are planning to implement pessimistic locking in one particular scenario. I don't have much experience with pessimistic locking, so please excuse me if this question sounds naive.
When a user shows the intention to update an entry, we lock the corresponding DB row using "select for update". Now, if this user takes a long time to commit his changes or forgets about it after locking, how do we release this lock using some timeout/rollback mechanism, so that the row doesn't stay locked for a very long time, preventing all other users from editing it?
I doubt this will be handled by the Weblogic-JTA-Spring transaction mechanism we are using - where we already have a transaction timeout of 30 minutes. (??)
So, should this rollback be handled directly at the Oracle level? If yes, then how? Please advise on the best way to handle this so that such locks don't linger around for too long.
Locks will be released only when the transaction ends. The transaction ends either when an explicit commit or rollback is issued to the database or when the database session is terminated (which does an implicit rollback). If your middle tier is already set to roll back any transaction that has been open for more than 30 minutes, that is sufficient to release the locks.
If you have a Java application running in a WebLogic application server, however, it strikes me as unusual for pessimistic locking to be appropriate. First, I assume that you are using a connection pool in the middle tier. If that is the case, then one database connection from the pool would need to be held by the middle tier for the length of the transaction (up to 30 minutes in this case). But allowing one application session to hold a particular database connection for an extended period defeats the purpose of having a connection pool. Normally, dozens if not hundreds of application sessions can share a single connection from the pool; if you allow pessimistic locking, you are forcing a 1:1 relationship between application sessions and database sessions for those sessions.
There are numerous cases in which optimistic locking cannot replace pessimistic locking. Lock timeouts are handled in the database; refer to this page about how to configure them in Oracle:
Can Oracle's default object lock timeout be changed?
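One Oracle-side option worth knowing is the WAIT clause of SELECT FOR UPDATE, which bounds how long a session waits to acquire the lock (table and column names below are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Assumes autoCommit is false so the acquired lock is held until commit/rollback.
// If the row is still locked after 10 seconds, Oracle raises ORA-30006.
public void lockEntry(Connection conn, long entryId) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT id FROM entries WHERE id = ? FOR UPDATE WAIT 10")) {
        ps.setLong(1, entryId);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next(); // row is locked once fetched
        }
    }
}

Note that WAIT bounds the acquisition of a lock, not how long an already-acquired lock is held - that still ends only with the transaction, which is why the middle-tier transaction timeout remains the right place to cap it.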

Hibernate second level cache and RR transaction isolation

If two transactions (both at the RR isolation level) ask for the same item, which is second-level cached, and then both modify and store this item: for the read of that item they did not run any SQL, because it was cached; so in this case, will they actually start a database transaction? And when they commit their changes, will they run into the lost update problem?
From a pessimistic point of view:
If the second level cache is configured to participate in the transaction, then only the one that first acquired the write lock would be able to modify the cached object, and then write the change to the database. When the second transaction wants to acquire the write lock, it would have to wait until the first transaction ends and releases it.
With optimistic locking, I guess an optimistic locking failure (Hibernate's StaleObjectStateException, or OptimisticLockException under JPA) should happen and the second transaction would retry the operation.
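For reference, version-based optimistic locking is declared with a single annotation (the entity below is a made-up example); Hibernate then appends "and version = ?" to its UPDATE statement and reports a conflict when no row matches:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Item {
    @Id
    private Long id;

    @Version // incremented on every update; a stale value makes the UPDATE match 0 rows
    private int version;

    private String name;
}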

How come hibernate executes update sql statements when I do a read using HQL on the same object/table?

What's happening is I'm performing a read of some records, say Car where color = red, and it returns 10 cars. Then I iterate over those 10 cars and update a date in each car object, i.e. car.setDate(5/1/2010). Then I perform another read from Cars where color = green.

I have SQL logging turned on, and I noticed that when I call query.list() it actually prints out some update statements against the Cars table, and I end up getting a lock wait timeout. Also note this is all done in a single database transaction, so I can understand the lock wait timeout - it seems like I have a lock on the table I'm reading from, and in that same transaction I'm trying to update it before I release the lock.

But it seems like it shouldn't be trying to run the SQL to update those records until the end of the transaction when I call commit? This is all using Hibernate's HQL to perform the reads. I'm not calling anything directly at all to do the saves; I'm just calling car.setDate.
The database writes are controlled by the FlushMode on your session. By default, Hibernate uses FlushMode.AUTO, which allows it to perform a session.flush() whenever it sees fit - in particular, before executing a query whose results might be affected by pending changes. A session.flush() causes uncommitted data on the session to be written to the database. Flushing session data to the database does not make it permanent until you commit the transaction (or roll it back). Depending on your database's table/row locking strategy, the rows that have been updated as part of this transaction may be locked for read or read/write access.
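If the automatic flush before queries is the problem, one option is to defer flushing until commit (a sketch against the classic Hibernate Session API):

import org.hibernate.FlushMode;
import org.hibernate.Session;

public void deferFlushing(Session session) {
    // UPDATEs are no longer issued before each HQL query, only at commit.
    // Caveat: queries in this session will not see the pending in-memory changes.
    session.setFlushMode(FlushMode.COMMIT);
}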
I think the answer is in the database - do your tables have the appropriate locking strategy to support your use case?
