Suppose two transactions (both at the RR isolation level) ask for the same item, which is held in the second-level cache, and then both modify and store it. Since the item is cached, reading it does not run any SQL; in this case, will they actually start a database transaction? And when they commit their changes, will they run into the lost update problem?
From a pessimistic point of view:
If the second level cache is configured to participate in the transaction, then only the one that first acquired the write lock would be able to modify the cached object, and then write the change to the database. When the second transaction wants to acquire the write lock, it would have to wait until the first transaction ends and releases it.
With optimistic locking, I guess a concurrent modification exception (or a similarly named one) should be thrown, and the second transaction would retry the operation.
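For illustration, here is a minimal sketch of a versioned entity (the entity and its fields are hypothetical). With a version attribute, Hibernate issues the UPDATE at flush time filtered on that version, so the transaction that commits second updates zero rows and gets an optimistic-lock/stale-state exception instead of silently overwriting the first one, even if its read was served from the second-level cache.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Item {

    @Id
    private Long id;

    private String description;

    // Enables optimistic version checking: updates run as
    // "UPDATE item ... WHERE id = ? AND version = ?"
    @Version
    private long version;

    // getters and setters omitted
}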
In the case of Hibernate JPA's LockModeType.OPTIMISTIC_FORCE_INCREMENT, is this lock taken at the application level or at the database level?
I am using the following snippet to take optimistic locks:
setting = this.entityManager.find(Setting.class, setting.getId(),
        LockModeType.OPTIMISTIC_FORCE_INCREMENT);
setting.setUpdateTimestamp(new Date());
newSettingList.add(setting);
Suppose there are two JVMs running, both executing the same method, and there is a conflict; will this locking mechanism work in that case?
My observation is that when I was debugging and paused at the line newSettingList.add(setting);, I did not see any changes in the database at that point. So how is locking ensured at the database level?
Optimistic locking is a strategy where you read a record, take note of its version number, and check that the version hasn't changed before you write the record back. When you write the record back, you filter the update on the version to make sure it's atomic (i.e. the record hasn't been changed between your read and your write).
Pessimistic locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking, but it requires you to be careful with your application design to avoid deadlocks.
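To make the pessimistic side concrete, here is a minimal JPA sketch (the Account entity and the surrounding active transaction are assumptions). PESSIMISTIC_WRITE typically translates to SELECT ... FOR UPDATE, so the row stays locked by this transaction until it commits or rolls back.

import java.math.BigDecimal;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class AccountService {

    // Locks the row for the duration of the current transaction; other
    // transactions trying to lock or update the same row will block
    // (or time out) until this one ends.
    public void debit(EntityManager em, Long accountId, BigDecimal amount) {
        Account account = em.find(Account.class, accountId,
                LockModeType.PESSIMISTIC_WRITE); // usually SELECT ... FOR UPDATE
        account.setBalance(account.getBalance().subtract(amount));
        // the database lock is released when the transaction commits or rolls back
    }
}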
This means that since you use optimistic locking, you do not intervene in locks at the database level. What you do is simply use the database to keep a version for each object/entity. For example:
a) You open a transaction T1 from the 1st JVM and read an object at version v1.
b) You open a transaction T2 from the 2nd JVM and read the same object at version v1 (no update of this object has been made yet).
c) In T1 you update the object, which sets its version to v2, and you commit the transaction.
d) When T2 then tries to write the object back to the database, it gets an exception because of the version check.
The two transactions do not even need to come from the same JVM, because the version lives in the database.
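As a rough sketch of the retry that usually goes with this (assuming a RESOURCE_LOCAL EntityManager and the Setting entity from the question; other names are hypothetical): when the version check fails, roll back, re-read the fresh state, and try again.

import java.util.Date;
import javax.persistence.EntityManager;
import javax.persistence.OptimisticLockException;

public class SettingUpdater {

    // Retries the update when another transaction (possibly in another JVM)
    // has committed a newer version of the row in the meantime.
    public void updateWithRetry(EntityManager em, Long settingId, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            em.getTransaction().begin();
            try {
                Setting setting = em.find(Setting.class, settingId);
                setting.setUpdateTimestamp(new Date());
                em.flush();                      // version-checked UPDATE runs here
                em.getTransaction().commit();
                return;
            } catch (OptimisticLockException e) {
                em.getTransaction().rollback();  // someone else committed first
                em.clear();                      // drop the stale state and retry
            }
        }
        throw new IllegalStateException("Update failed after " + maxRetries + " attempts");
    }
}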
I'd like to realize the following scenario in PostgreSQL from Java:
User selects data
User starts transaction: inserts, updates, deletes data
User commits transaction
I'd like the data not to be available to other users during the transaction. It would be enough if I got an exception when another user tries to update the table.
I've tried to use SELECT ... FOR UPDATE and SELECT ... FOR SHARE, but they lock the data for reading as well. I've tried the LOCK command, but either I'm not able to get the lock (ERROR: could not obtain lock on relation "fppo10") or the other transaction only hits the lock when it tries to commit, not when it updates the data.
Is there a way to lock the data at the moment the transaction starts, so that any other UPDATE, INSERT or DELETE statement is prevented?
I have had this scenario working successfully for a couple of years on a DB2 database. Now I need the same application to work with PostgreSQL as well.
Finally, I think I get what you're going for.
This isn't a "transaction" problem per se (and depending on the number of tables to deal with and the required statements, you may not even need one); it's an application design problem. You have two general ways to deal with this: optimistic and pessimistic locking.
Pessimistic locking is explicitly taking and holding a lock. It's best used when you can guarantee that you will be changing the row plus stuff related to it, and when your transactions will be short. You would use it in situations like updating "current balance" when adding sales to an account, once a purchase has been made (the update will definitely happen, and the transaction is short because there are no further choices to be made at that point). Pessimistic locking becomes frustrating if a user reads a row and then goes to lunch (or on vacation...).
Optimistic locking is reading a row (or set of), and not taking any sort of db-layer lock. It's best used if you're just reading through rows, without any immediate plan to update any of them. Usually, row data will include a "version" value (incremented counter or last updated timestamp). If your application goes to update the row, it compares the original value(s) of the data to make sure it hasn't been changed by something else first, and alerts the user if the data changed. Most applications interfacing with users should use optimistic locking. It does, however, require that users notice and pay attention to updated values.
Note that, because a lock is rarely (and for a short period) taken in optimistic locking, it usually will not conflict with a separate process that takes a pessimistic lock. A pessimistic locking app would prevent an optimistic one from updating locked rows, but not reading them.
Also note that this doesn't usually apply to bulk updates, which will have almost no user interaction (if any).
tl;dr
Don't lock your rows on read. Just compare the old value(s) with what the app last read, and reject the update if they don't match (and alert the user). Train your users to respond appropriately.
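A minimal JDBC sketch of that tl;dr, using a "last updated" timestamp as the version value (the item table and its columns are hypothetical): the UPDATE is filtered on the value the app originally read, and an update count of zero means somebody else changed the row first.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class OptimisticItemDao {

    // Returns true if the row was updated, false if it changed since it was
    // read; in the latter case the caller should re-read and alert the user.
    public boolean updateName(Connection conn, long id, String newName,
                              Timestamp updatedAtWhenRead) throws SQLException {
        String sql = "UPDATE item "
                   + "SET name = ?, updated_at = CURRENT_TIMESTAMP "
                   + "WHERE id = ? AND updated_at = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setTimestamp(3, updatedAtWhenRead);
            return ps.executeUpdate() == 1;
        }
    }
}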
Instead of SELECT ... FOR UPDATE, try a "row exclusive" table lock:
LOCK TABLE YourTable IN ROW EXCLUSIVE MODE;
According to the documentation, this lock:
The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies data in a table.
Note that the name of the lock is confusing, but it does lock the entire table:
Remember that all of these lock modes are table-level locks, even if the name contains the word "row"; the names of the lock modes are historical.
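A minimal JDBC sketch of using this from Java (connection details are placeholders; the table name is just the answer's example): LOCK TABLE only has an effect inside a transaction, so autocommit must be off, and the lock is released when the transaction commits or rolls back.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TableLockExample {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            conn.setAutoCommit(false); // LOCK TABLE must run inside a transaction
            try (Statement st = conn.createStatement()) {
                st.execute("LOCK TABLE YourTable IN ROW EXCLUSIVE MODE");
                // ... the inserts, updates and deletes of the use case go here ...
            }
            conn.commit(); // the table lock is released here
        }
    }
}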
I am having difficulty understanding how version-based optimistic locking can prevent the "last commit wins" issue and the inappropriate overwrites that come with it.
To make the question more concrete, let's consider the following pseudo-code that uses JDBC:
connection.setAutoCommit(false);
Account account = select(id);
if (account.getBalance() >= amount) {
    account.setBalance(account.getBalance() - amount);
}
int rowsUpdated = update(account); // version=:oldVer+1 WHERE version=:oldVer
if (rowsUpdated == 0) throw new OptimisticLockException();
connection.commit();
Here, what if another transaction committed its change right between the update and the commit? If the transactions are concurrent, then the update made by the first transaction is not yet committed and therefore not visible to the second transaction (with the usual isolation levels), so the first transaction's commit will override the changes of the second transaction without any notification or error.
Is it the case that optimistic locking merely decreases the probability of the issue without preventing it in general?
The idea of a database "transaction" is that it is supposed to provide a guarantee of "consistency" across multiple conceptual operations. The database is in charge of enforcing this. So, when the transaction commits, the database should only allow the transaction to complete if it can ensure that everything that happened during the transaction is still valid.
In practice, a database will typically handle this such that once one of the updates succeeds, the relevant row will be write locked until the relevant transaction completes. Thus, one of the updates is guaranteed to fail.
Note: this also requires an appropriate isolation level on your JDBC connection. The isolation level ensures that the check of the current version done before the update is still applicable at the time of the write.
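For illustration, the update(account) step of the pseudo-code might be implemented like this (the account table and its columns are assumptions). Under READ COMMITTED, if a concurrent transaction has already updated the row but not yet committed, this statement blocks on the row's write lock; once that transaction commits, the WHERE clause no longer matches and executeUpdate() returns 0.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AccountDao {

    // Version-filtered update: increments the version only if nobody else
    // has incremented it since the row was read.
    public int update(Connection connection, Account account) throws SQLException {
        String sql = "UPDATE account "
                   + "SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setBigDecimal(1, account.getBalance());
            ps.setLong(2, account.getId());
            ps.setLong(3, account.getVersion()); // the version seen by select(id)
            return ps.executeUpdate();           // 0 means a concurrent commit won
        }
    }
}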
That's my requirement: lock a database record, process it, and release it.
Environment - WebLogic 10.3
Database - Oracle 11g
Datasources - multiple XA resources involved
Tx mgr - JTA
Here are the results of the experiments I have done so far:
Experiment 1 - Rely on read uncommitted
Read the db record
Lock the record by id in another table, as part of the global JTA transaction
Process the record
A second transaction that tries to lock the same record will fail and will drop the record.
But for this to work, the RDBMS has to allow dirty reads.
Unfortunately, Oracle does not support the read uncommitted isolation level.
Experiment 2 - Lock record in local transaction
Read the db record
Lock the record by id in another table, as a separate local transaction
Process the record and delete the record when the transaction commits successfully
A second transaction that tries to lock the same record will fail and will drop the record. This approach is based on committed data, so it should work fine.
Here is the problem: since the locking transaction and the global parent are different, if the processing fails and the main transaction is rolled back, I should compensate by rolling back the locking transaction as well, and I do not know how to do that. I need help here.
If I am not able to roll back the record-locking transaction, I would have to write some dirty logic around the record-locking code, which I would prefer to avoid.
This appears to be a very common requirement. I would like to know how you guys handle this elegantly.
Does Oracle support, in any way, making uncommitted updates visible to all transactions?
Thanks a lot in advance.
We have a utility class that implements roughly what you describe in experiment 2:
Prerequisite: having a dedicated table for the lock
In the lock phase, a new connection is created and an INSERT INTO is performed on the lock table.
In the unlock phase, a rollback on that connection is performed, regardless of the outcome of the business logic.
It is used like a java.util.concurrent.locks.Lock:
Lock lock = new Lock(...);
lock.lock();
try {
    // your business logic
} finally {
lock.unlock();
}
It works on WebSphere / Oracle.
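For reference, a minimal sketch of such a utility class (table and column names are hypothetical, and LOCK_KEY is assumed to be the primary key of the lock table). The key point is that it uses its own non-XA connection, so it is independent of the JTA transaction, and the rollback in unlock() both releases the row lock and leaves no data behind.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RecordLock {

    private final DataSource dataSource;
    private final String lockKey;
    private Connection connection;

    public RecordLock(DataSource dataSource, String lockKey) {
        this.dataSource = dataSource;
        this.lockKey = lockKey;
    }

    // Acquires the lock by inserting a row into a dedicated lock table on a
    // separate connection; a second session inserting the same key blocks on
    // the primary key index until this session rolls back.
    public void lock() throws SQLException {
        connection = dataSource.getConnection();
        connection.setAutoCommit(false);
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO APP_LOCK (LOCK_KEY) VALUES (?)")) {
            ps.setString(1, lockKey);
            ps.executeUpdate();
        }
    }

    // Releases the lock by rolling back; the inserted row never becomes visible.
    public void unlock() {
        try {
            if (connection != null) {
                connection.rollback();
            }
        } catch (SQLException ignored) {
            // best effort: closing the connection releases the lock anyway
        } finally {
            try {
                if (connection != null) {
                    connection.close();
                }
            } catch (SQLException ignored) {
                // ignore
            }
        }
    }
}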
Note that if you use JPA, there is built-in support for entity locking.
What's happening is that I'm performing a read of some records, say Car where color = red, which returns 10 cars. Then I iterate over those 10 cars and update a date on each car object, i.e. car.setDate(5/1/2010). Then I perform another read, Cars where color = green. I have SQL logging turned on, and I noticed that when I call query.list() it actually prints out some UPDATE statements against the Cars table, and I end up getting a lock wait timeout.

Note that this is all done in a single database transaction, so I can understand the lock wait timeout: it seems like I hold a lock on the table I'm reading from, and in that same transaction I'm trying to update it before releasing the lock. But shouldn't the SQL that updates those records only run at the end of the transaction, when I call commit? This is all using Hibernate's HQL to perform the reads; I'm not calling anything directly to do the saves, I'm just calling car.setDate.
The database writes are controlled by the FlushMode on your session. By default, Hibernate uses FlushMode.AUTO, which allows it to perform a session.flush() whenever it sees fit. A session.flush() causes uncommitted data on the session to be written to the database. Flushing session data to the database does not make it permanent until you commit your session (or roll it back). Depending on your database's table/row locking strategy, the rows that have been updated as part of this transaction may be locked for read or read/write access.
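For illustration, the flush mode can be set explicitly on the session; a minimal sketch, assuming the session comes from your Hibernate SessionFactory:

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class FlushModeExample {

    // With FlushMode.COMMIT, the pending UPDATEs for the modified Car rows are
    // deferred until commit, so an intermediate HQL query no longer triggers a
    // flush; the trade-off is that queries in the same transaction do not see
    // those in-memory changes until then.
    public void configure(SessionFactory sessionFactory) {
        Session session = sessionFactory.getCurrentSession();
        session.setFlushMode(FlushMode.COMMIT);
    }
}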
I think the answer is in the database: do your tables have a locking strategy that supports your use case?