How to rollback/timeout “select for update” locks in Oracle? - java

Our app mostly uses optimistic locking via Hibernate's versioning support. We are planning to implement pessimistic locking in one particular scenario. I don't have much experience with pessimistic locking, so please excuse me if this question sounds naïve.
When a user shows the intention of updating an entry, we lock the corresponding DB row using "select for update". Now, if this user takes a long time to commit his changes or forgets about it after locking, how do we release this lock using some timeout/rollback mechanism, so that the row doesn't stay locked for a very long time, blocking all other users from editing it?
I doubt this will be handled by the WebLogic/JTA/Spring transaction mechanism we are using, where we already have a transaction timeout of 30 minutes. (??)
So, should this rollback be handled directly at the Oracle level? If yes, then how? Please advise on the best way to handle this so that such locks don't linger around for too long.

Locks will be released only when the transaction ends. The transaction ends either when an explicit commit or rollback is issued to the database or when the database session is terminated (which does an implicit rollback). If your middle tier is already set to roll back any transaction that has been open for more than 30 minutes, that will be sufficient to release the locks.
If you have a Java application running in a WebLogic application server, however, it strikes me as unusual for pessimistic locking to be appropriate. First, I assume that you are using a connection pool in the middle tier. If that is the case, then one database connection from the pool would need to be held by the middle tier for the length of the transaction (up to 30 minutes in this case). But allowing one application session to hold a particular database connection open for an extended period of time defeats the purpose of having a connection pool. Normally, dozens if not hundreds of application sessions can share a single connection from the pool; if you allow pessimistic locking, you are now forcing a 1:1 relationship between application sessions and database sessions for those sessions.

There are numerous cases in which optimistic locking cannot replace pessimistic locking. Lock timeouts are handled in the database; refer to this page on how to configure them in Oracle:
Can Oracle's default object lock timeout be changed?
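Oracle also lets you bound the wait at the statement level with a WAIT clause on SELECT FOR UPDATE itself. Note that this caps how long a second session waits to acquire the lock; it does not release a lock that an idle session already holds (that still requires the holder's transaction to end, e.g. via the 30-minute JTA timeout). A minimal JDBC sketch, with an illustrative table and column:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowLocker {
    // Locks the row with the given id, giving up after 10 seconds instead of
    // blocking indefinitely. Table and column names are illustrative.
    public static void lockRow(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM my_table WHERE id = ? FOR UPDATE WAIT 10")) {
            ps.setLong(1, id);
            ps.executeQuery(); // the row stays locked until commit/rollback
        } catch (SQLException e) {
            if (e.getErrorCode() == 30006) {
                // ORA-30006: resource busy, the 10-second wait expired;
                // surface a "record is being edited" message instead of hanging
                throw new SQLException("Row " + id + " is locked by another session", e);
            }
            throw e;
        }
    }
}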

Related

Row lock contention issue with large transactions

I have a situation where we acquire a lock on an object in the database using SELECT FOR UPDATE. This is necessary for us to insert and delete records from multiple tables in an orderly fashion. The functionality works something like this:
Login -> Acquire lock on unique lock object and insert records to multiple tables and release lock -> Logout -> Acquire lock on same unique lock object and delete records from multiple tables and release lock.
We have synchronization in place to verify that a user has logged in before logging him out; that is handled in Java code. However, we obtain another lock at the database level to make sure the database transactions are synchronized when a large number of users are logging in.
Problem: The whole system works perfectly on multi-clustered servers and singleton servers. However, when the number of concurrent users reaches 4000+, we face row lock contention (Mode 6, exclusive) in the database, and a few users are not able to log in.
Objective: To fix the locking mechanism to enable users to login and logout successfully.
Things tried so far: Added NOWAIT and SKIP LOCKED to the SELECT FOR UPDATE query. Neither solves my problem: the first simply throws an error, and the second skips the locked row, which would break the synchronization.
Need suggestions and opinions from Database experts to resolve this issue. TIA.
UPDATE: Just adding one more piece of information. We do not update or do anything with the locked row. It is just used as a mechanism to synchronize the other database tasks we do.
Instead of relying on pessimistic locking (your current approach), use optimistic locking, possibly via an ORM; see the sketch below.
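For illustration, a minimal sketch of version-based optimistic locking with JPA/Hibernate; the entity and column names are illustrative, not from the question:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class LoginRecord {   // illustrative entity name

    @Id
    private Long id;

    // Hibernate checks and increments this column on every UPDATE; a
    // concurrent writer gets an OptimisticLockException and can retry.
    @Version
    private long version;

    // ... business fields, getters and setters
}

Concurrent logins then no longer queue on a single lock row: the rare conflicting update fails fast and can be retried.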

Does Hibernate 4 require a transaction when Hibernate 3 did not? [duplicate]

Why do I need Transaction in Hibernate for read-only operations?
Does the following transaction put a lock in the DB?
Example code to fetch from DB:
Transaction tx = HibernateUtil.getCurrentSession().beginTransaction(); // why begin a transaction?
// read-only operation here
tx.commit(); // why tx.commit()? I don't want to write anything
Can I use session.close() instead of tx.commit()?
Transactions for reading might indeed look strange, and often people don't mark methods as transactional in this case. But JDBC will create a transaction anyway; it will just run with autocommit=true if a different option wasn't set explicitly. There are, however, practical reasons to mark transactions as read-only:
Impact on databases
The read-only flag may let the DBMS optimize such transactions or those running in parallel.
Having a transaction that spans multiple SELECT statements guarantees proper isolation for levels starting from Repeatable Read or Snapshot (e.g., see PostgreSQL's Repeatable Read). Otherwise, two SELECT statements could see an inconsistent picture if another transaction commits in parallel. This isn't relevant when using Read Committed.
Impact on ORM
An ORM may cause unpredictable results if you don't begin/finish transactions explicitly. E.g., Hibernate will open a transaction before the first statement, but it won't finish it, so the connection will be returned to the connection pool with an unfinished transaction. What happens then? The JDBC spec is silent, so the outcome is implementation specific: the MySQL and PostgreSQL drivers roll back such a transaction, while Oracle commits it. Note that this can also be configured at the connection pool level; e.g., C3P0 gives you such an option (rollback by default).
Spring sets FlushMode.MANUAL for read-only transactions, which enables further optimizations such as skipping dirty checks. Depending on how many objects you have loaded, this can be a huge performance gain.
Impact on architecture & clean code
There is no guarantee that your method doesn't write into the database. If you mark a method as @Transactional(readOnly = true), you dictate whether it is actually possible to write to the DB within the scope of this transaction. If your architecture is cumbersome and some team members may put a modification query where it's not expected, this flag will point you to the problematic place.
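To make the flag concrete, a minimal Spring sketch (the service and repository are illustrative, not from the question):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repository, standing in for e.g. a Spring Data interface.
interface OrderRepository {
    long count();
}

@Service
public class ReportService {

    private final OrderRepository orderRepository;

    public ReportService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // readOnly = true lets Spring set FlushMode to MANUAL on the Hibernate
    // session (no dirty checking) and flags accidental writes in this scope.
    @Transactional(readOnly = true)
    public long countOrders() {
        return orderRepository.count();
    }
}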
All database statements are executed within the context of a physical transaction, even when we don’t explicitly declare transaction boundaries (e.g., BEGIN, COMMIT, ROLLBACK).
If you don't declare transaction boundaries explicitly, then each statement will have to be executed in a separate transaction (autocommit mode). This may even lead to opening and closing one connection per statement unless your environment can deal with connection-per-thread binding.
Declaring a service method as @Transactional gives you one connection for the whole transaction duration, and all statements use that single connection. This is way better than not using explicit transactions in the first place.
On large applications, you may have many concurrent requests, and reducing the database connection acquisition rate will definitely improve your overall application performance.
JPA doesn't enforce transactions on read operations. Only writes throw a TransactionRequiredException if you forget to start a transactional context. Nevertheless, it's always better to declare transaction boundaries even for read-only operations (in Spring, @Transactional lets you mark read-only transactions, which has a great performance benefit).
Transactions do indeed take locks in the database (good database engines handle concurrent locks sensibly), and they are useful even for read-only use to ensure that no other transaction adds data that makes your view inconsistent. You always want a transaction (though sometimes it is reasonable to tune the isolation level, it's best not to do that to start out with); if you never write to the DB during your transaction, committing and rolling back work out to be the same (and very cheap).
Now, if you're lucky and your queries against the DB are such that the ORM always maps them to single SQL queries, you can get away without explicit transactions by relying on the DB's built-in autocommit behavior. But ORMs are relatively complex systems, so it isn't at all safe to rely on that behavior unless you do a lot more work checking what the implementation actually does. Writing in the explicit transaction boundaries is far easier to get right (especially with AOP or a similar ORM-driven technique; from Java 7 onwards, try-with-resources can be used too, as sketched below).
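Regarding the try-with-resources remark: in Hibernate 5.2+, Session implements AutoCloseable, so the boundaries can be written roughly like this (a sketch; the entity name in the query is illustrative):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class ReadOnlyQuery {
    // The session is closed even if the query throws; the explicit commit
    // ends the transaction deterministically instead of leaving the decision
    // to the pool's unfinished-transaction policy.
    public static Long countEntities(SessionFactory factory) {
        try (Session session = factory.openSession()) {
            Transaction tx = session.beginTransaction();
            Long count = (Long) session
                    .createQuery("select count(*) from MyEntity") // MyEntity is illustrative
                    .uniqueResult();
            tx.commit();
            return count;
        }
    }
}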
It doesn't matter whether you only read or not: the database must still keep track of your result set, because other database clients may want to write data that would change it.
I have seen faulty programs kill huge database systems because they just read data but never committed, forcing the transaction log to grow: the DB can't release the transaction data before a COMMIT or ROLLBACK, even if the client did nothing for hours.

Lock, process and release lock in jdbc

That's my requirement: lock a database record, process it, and release the lock.
Environment - WebLogic 10.3
Database - Oracle 11g
Datasources - multiple XA resources involved
Tx mgr - JTA
Here are the results of the experiments I have done so far:
Experiment 1 - Rely on read uncommitted
Read the db record
Lock the record by id in another table, as part of the global JTA transaction
Process the record
A second transaction that tries to lock the same record will fail and drop the record.
But for this to work, the RDBMS would have to allow dirty reads.
Unfortunately, Oracle does not support the read uncommitted isolation level.
Experiment 2 - Lock record in local transaction
Read the db record
Lock the record by id in another table, as a separate local transaction
Process the record and delete the record when the transaction commits successfully
A second transaction that tries to lock the same record will fail and drop the record. This approach is based on committed data and should work fine.
Here is the problem: since the lock transaction and the global parent are different, if the processing fails and the main transaction rolls back, I should compensate by rolling back the lock transaction, which I do not know how to do. Need help here.
If I am not able to roll back the record-locking transaction, I would have to write some dirty logic around the record-locking code, which I would rather avoid.
This appears to be a very common requirement. I would like to know how you guys handle this elegantly.
Does Oracle support, in any way, making uncommitted updates visible to all transactions?
Thanks a lot in advance.
We have a utility class that implements roughly what you describe in experiment 2:
Prerequisite: have a dedicated table for the lock.
In the lock phase, a new connection is created and an INSERT INTO is performed on the lock table.
In the unlock phase, a rollback is performed on the connection, regardless of the outcome of the business logic.
It is used like a java.util.concurrent.locks.Lock:
Lock lock = new Lock(...);
lock.lock();
try {
// your business logic
} finally {
lock.unlock();
}
It works on WebSphere / Oracle.
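A minimal sketch of such a utility, assuming a dedicated table LOCKS with a single primary-key column LOCK_NAME (both names illustrative) and a plain non-XA datasource kept apart from the JTA-managed ones:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class DbLock {

    private final DataSource dataSource; // plain non-XA datasource, separate from the JTA ones
    private final String lockName;
    private Connection connection;

    public DbLock(DataSource dataSource, String lockName) {
        this.dataSource = dataSource;
        this.lockName = lockName;
    }

    // Acquire: insert a row but do not commit. A second session inserting the
    // same key blocks on the primary-key index until this connection ends its
    // transaction, which is exactly the mutual exclusion we want.
    public void lock() throws SQLException {
        connection = dataSource.getConnection();
        connection.setAutoCommit(false);
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO LOCKS (LOCK_NAME) VALUES (?)")) {
            ps.setString(1, lockName);
            ps.executeUpdate();
        }
    }

    // Release: always roll back, whatever the business logic did. The
    // uncommitted row vanishes, so no cleanup job is ever needed; if the JVM
    // dies, the database's implicit rollback releases the lock as well.
    public void unlock() throws SQLException {
        if (connection != null) {
            try {
                connection.rollback();
            } finally {
                connection.close();
            }
        }
    }
}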
Note that if you use JPA, there is built-in support for entity locking (e.g. EntityManager#lock with LockModeType.PESSIMISTIC_WRITE).

Extended Session for transactions

What is the 'Extended Session' anti-pattern?
An extended (or long) session (also known as session-per-conversation) is a session that may live beyond the duration of a transaction, as opposed to transaction-scoped sessions (session-per-request). This is not necessarily an anti-pattern; it is a way to implement long conversations (i.e., conversations with the database that span multiple transactions), which are just another way of designing units of work.
Like anything, I'd just say that long conversations can be misused or wrongly implemented.
Here is how the documentation introduces Long conversations:
12.1.2. Long conversations

The session-per-request pattern is not the only way of designing units of work. Many business processes require a whole series of interactions with the user that are interleaved with database accesses. In web and enterprise applications, it is not acceptable for a database transaction to span a user interaction. Consider the following example:

1. The first screen of a dialog opens. The data seen by the user has been loaded in a particular Session and database transaction. The user is free to modify the objects.

2. The user clicks "Save" after 5 minutes and expects their modifications to be made persistent. The user also expects that they were the only person editing this information and that no conflicting modification has occurred.

From the point of view of the user, we call this unit of work a long-running conversation or application transaction. There are many ways to implement this in your application.

A first naive implementation might keep the Session and database transaction open during user think time, with locks held in the database to prevent concurrent modification and to guarantee isolation and atomicity. This is an anti-pattern, since lock contention would not allow the application to scale with the number of concurrent users.

You have to use several database transactions to implement the conversation. In this case, maintaining isolation of business processes becomes the partial responsibility of the application tier. A single conversation usually spans several database transactions. It will be atomic if only one of these database transactions (the last one) stores the updated data. All others simply read data (for example, in a wizard-style dialog spanning several request/response cycles). This is easier to implement than it might sound, especially if you utilize some of Hibernate's features:

Automatic Versioning: Hibernate can perform automatic optimistic concurrency control for you. It can automatically detect if a concurrent modification occurred during user think time. Check for this at the end of the conversation.

Detached Objects: if you decide to use the session-per-request pattern, all loaded instances will be in the detached state during user think time. Hibernate allows you to reattach the objects and persist the modifications. The pattern is called session-per-request-with-detached-objects. Automatic versioning is used to isolate concurrent modifications.

Extended (or Long) Session: the Hibernate Session can be disconnected from the underlying JDBC connection after the database transaction has been committed and reconnected when a new client request occurs. This pattern is known as session-per-conversation and makes even reattachment unnecessary. Automatic versioning is used to isolate concurrent modifications, and the Session will not be allowed to be flushed automatically, but only explicitly.

Both session-per-request-with-detached-objects and session-per-conversation have advantages and disadvantages. These disadvantages are discussed later in this chapter in the context of optimistic concurrency control.
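To make the automatic-versioning piece concrete, here is a rough sketch of reattaching a detached object at the end of a conversation (my sketch, not from the reference guide; it assumes the entity carries a @Version field):

import org.hibernate.Session;
import org.hibernate.StaleObjectStateException;
import org.hibernate.Transaction;

public class ConversationSave {
    // Reattach a detached, @Version-annotated object that was edited during
    // user think time. Hibernate issues UPDATE ... WHERE id = ? AND version = ?,
    // so a conflicting change committed in the meantime makes the commit fail
    // instead of being silently overwritten.
    public static boolean save(Session session, Object detachedEntity) {
        Transaction tx = session.beginTransaction();
        try {
            session.update(detachedEntity); // schedules a version-checked UPDATE
            tx.commit();
            return true;
        } catch (StaleObjectStateException e) {
            tx.rollback();
            return false; // tell the user the data is stale and reload it
        }
    }
}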
I've added some references below but I suggest reading the whole Chapter 12. Transactions and Concurrency.
References
Hibernate Core Reference Guide
12.1.2. Long conversations
12.3. Optimistic concurrency control
