Transaction in PostgreSQL - Java

I'd like to realize the following scenario in PostgreSQL from Java:
User selects data
User starts transaction: inserts, updates, deletes data
User commits transaction
I'd like the data not to be available to other users for the duration of the transaction. It would be enough if other users got an exception when they tried to update the table.
I've tried SELECT FOR UPDATE and SELECT FOR SHARE, but they lock the data for reading as well. I've tried the LOCK command, but either I'm not able to get the lock (ERROR: could not obtain lock on relation "fppo10") or the other transaction only hits the lock when it tries to commit, not when it updates the data.
Is there a way to lock the data at the moment a transaction starts, so as to prevent any other UPDATE, INSERT, or DELETE against it?
I have had this scenario working successfully for a couple of years on a DB2 database. Now I need the same application to work on PostgreSQL as well.

Finally, I think I get what you're going for.
This isn't a "transaction" problem per se (and depending on the number of tables to deal with and the required statements, you may not even need one); it's an application design problem. There are two general ways to deal with it: optimistic and pessimistic locking.
Pessimistic locking means explicitly taking and holding a lock. It's best used when you can guarantee that you will be changing the row (plus related data) and that your transactions will be short. You would use it in situations like updating a "current balance" when adding sales to an account once a purchase has been made (the update will definitely happen, and the transaction is short because there are no further choices to be made at that point). Pessimistic locking becomes frustrating when a user reads a row and then goes to lunch (or on vacation...).
Optimistic locking means reading a row (or set of rows) without taking any sort of db-layer lock. It's best used if you're just reading through rows without any immediate plan to update them. Usually, the row data will include a "version" value (an incremented counter or a last-updated timestamp). When your application goes to update the row, it first compares against the original value(s) of the data to make sure nothing else has changed them, and alerts the user if they have changed. Most applications interfacing with users should use optimistic locking. It does, however, require that users notice and pay attention to updated values.
Note that, because a lock is rarely (and only briefly) taken under optimistic locking, it usually will not conflict with a separate process that takes a pessimistic lock. A pessimistic-locking app would prevent an optimistic one from updating locked rows, but not from reading them.
Also note that this doesn't usually apply to bulk updates, which will have almost no user interaction (if any).
tl;dr
Don't lock your rows on read. Just compare the old value(s) with what the app last read, and reject the update if they don't match (and alert the user). Train your users to respond appropriately.
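As an illustration, here is a minimal JDBC sketch of the optimistic pattern, assuming a hypothetical account table with id, balance, and an integer version column (none of these names come from the question):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Returns true if the update won the version check, false if another
// session changed the row since we read it (caller should alert the user).
boolean updateBalance(Connection conn, long accountId,
                      BigDecimal newBalance, int versionRead)
        throws SQLException {
    String sql = "UPDATE account SET balance = ?, version = version + 1 "
               + "WHERE id = ? AND version = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setBigDecimal(1, newBalance);
        ps.setLong(2, accountId);
        ps.setInt(3, versionRead);   // the version value seen at read time
        return ps.executeUpdate() == 1;
    }
}

No lock is held between the read and the write; the WHERE clause on version is what rejects stale updates.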

Instead of SELECT FOR UPDATE, try a "row exclusive" table lock:
LOCK TABLE YourTable IN ROW EXCLUSIVE MODE;
According to the documentation, this lock mode:
"The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies data in a table."
Note that the name of the lock mode is confusing; despite the word "row", it is a table-level lock:
"Remember that all of these lock modes are table-level locks, even if the name contains the word "row"; the names of the lock modes are historical."
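For completeness, here is a hedged sketch of how that could look from Java, using the relation name from the question; the NOWAIT clause (which LOCK TABLE supports) makes the second session fail immediately with an exception, which is the behaviour the question asks for:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Takes the table lock at the start of the transaction; NOWAIT makes a
// second session fail immediately instead of blocking on the lock.
void lockAndModify(Connection conn) throws SQLException {
    conn.setAutoCommit(false);   // LOCK TABLE is only valid inside a transaction
    try (Statement st = conn.createStatement()) {
        st.execute("LOCK TABLE fppo10 IN ROW EXCLUSIVE MODE NOWAIT");
        // ... the transaction's inserts, updates, and deletes go here ...
        conn.commit();           // committing releases the lock
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}

One caveat: ROW EXCLUSIVE does not conflict with itself, so two sessions can hold it on the same table simultaneously; if concurrent writers must actually collide while plain reads continue, a stronger mode such as EXCLUSIVE would be needed.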

Related

Transactions are failing as table is Locked in ORACLE

I have the following scenario in a legacy codebase:
The 'TEAM' table holds information about a team and a counter. It has columns named 'TEAM_NAME' and 'COUNTER'.
The following 3-step operation is executed in a transaction:
Take an exclusive LOCK on the table.
Read the counter corresponding to the team.
Use that counter, increment its value, and save it back to the TEAM table.
Once these steps are performed, commit the complete operation.
Because an exclusive LOCK is taken on the table in the first step, other concurrent transactions are failing. I want to perform this without losing transactions in the system.
I think that removing the LOCK statement and making my method synchronized could work, but I have 4 JVMs in production, so concurrent transactions can still hit this.
Please suggest a better design to handle this.
You should almost never need to take a manual LOCK in Oracle. If you're doing that, you should probably rethink what you are doing. What you should probably be doing is:
Do a SELECT ... FOR UPDATE on the row corresponding to the team. This locks only that row, not the entire table; concurrent sessions working on different teams are free to continue.
Do whatever you need to do.
Run an UPDATE ... to increment the counter.
An even simpler way would be:
Do an UPDATE ... RETURNING my_counter INTO ..., which returns the updated value of the counter in a single statement.
Do what you need to do, keeping in mind that you have the incremented counter value. (The first approach is sketched below.)
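A minimal JDBC sketch of the SELECT ... FOR UPDATE approach, assuming the TEAM table from the question (lower-case identifiers and the absence of error handling are simplifications):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Locks only the one team's row, reads its counter, writes back the
// incremented value, and commits (which releases the row lock).
int nextCounter(Connection conn, String teamName) throws SQLException {
    conn.setAutoCommit(false);
    int counter;
    try (PreparedStatement sel = conn.prepareStatement(
            "SELECT counter FROM team WHERE team_name = ? FOR UPDATE")) {
        sel.setString(1, teamName);
        try (ResultSet rs = sel.executeQuery()) {
            rs.next();
            counter = rs.getInt(1) + 1;   // the value this caller gets to use
        }
    }
    try (PreparedStatement upd = conn.prepareStatement(
            "UPDATE team SET counter = ? WHERE team_name = ?")) {
        upd.setInt(1, counter);
        upd.setString(2, teamName);
        upd.executeUpdate();
    }
    conn.commit();
    return counter;
}

Sessions working on different teams lock different rows and never wait for each other.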

Row lock contention issue with large transactions

I have a situation where we acquire a lock on an object in the database using SELECT FOR UPDATE. This is necessary so that we can insert and delete records in multiple tables in an orderly fashion. The functionality works something like this:
Login -> Acquire lock on unique lock object and insert records to multiple tables and release lock -> Logout -> Acquire lock on same unique lock object and delete records from multiple tables and release lock.
We have synchronization in place to track whether a user has logged in before logging him out; that is taken care of in the Java code. However, we take another lock at the database level to make sure the database transactions stay synchronized when large numbers of users are logging in.
Problem: The whole system works perfectly on multi-clustered servers and on singleton servers. However, when the number of concurrent users reaches 4000+, we hit row lock contention (mode 6) in the database, and a few users are unable to log in.
Objective: To fix the locking mechanism to enable users to login and logout successfully.
Things tried so far: adding NOWAIT and SKIP LOCKED to the SELECT FOR UPDATE query. Neither solves the problem: the first simply throws an error, and the second skips the locked row entirely, which would break the synchronization.
Need suggestions and opinions from Database experts to resolve this issue. TIA.
UPDATE: Just adding one more piece of information. We do not update or otherwise touch the locked row; it is used purely as a mechanism to synchronize the other database tasks we do.
Instead of relying on pessimistic locking (your current approach), use optimistic locking, possibly through an ORM.
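For instance, with JPA/Hibernate the entity backing the lock row can carry a @Version column, and the provider performs the version check on every UPDATE automatically. A minimal sketch; the entity name is purely illustrative:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class LoginLock {
    @Id
    private Long id;

    // The provider appends "... AND version = ?" to every UPDATE of this
    // row and throws OptimisticLockException if the check fails - no
    // SELECT FOR UPDATE is issued, so there is no mode-6 row lock
    // contention while rows are merely being read.
    @Version
    private long version;

    // getters and setters omitted for brevity
}

Conflicts then surface as exceptions to be retried in application code, instead of sessions queueing on a single hot row lock.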

Concurrent update and delete on one table

Using Java, Hibernate and Oracle database.
I have two concurrent processes:
Process 1 removes some entities from table1 (multiple: delete from table1 where id = ...), done by a native Hibernate query.
Process 2 updates the SAME or other entities in table1 (multiple: update table1 set name = ... where id = ...), done by a JPA repository method.
Currently, an exception is sometimes thrown:
CannotAcquireLockException
(SQL Error: 60, SQLState: 61000,
ORA-00060: deadlock detected while waiting for resource)
So, the question is: what is going on, and how can I avoid the exception? Any workaround?
IMPORTANT: In case of collisions I would be satisfied if the delete succeeds and the update does nothing.
Session A waits for B, and B waits for A - that is what a deadlock basically is. Since neither session can ever proceed, Oracle kills one of the two.
Option 1
Create a semaphore table to effectively serialize the concurrent processes:
CREATE TABLE my_semaphore (dummy CHAR(1));
Session 1:
LOCK TABLE my_semaphore IN EXCLUSIVE MODE;
UPDATE <your update here>;
COMMIT;
Session 2:
LOCK TABLE my_semaphore IN EXCLUSIVE MODE;
DELETE <your delete here>;
COMMIT;
Option 2
Try processing the rows with both statements in the same order, say by ROWID or whatever, so that session B never comes back to rows already held by A while A is stuck behind rows locked by B. This is more tricky and resource-intensive; a sketch of the idea follows.
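A hedged sketch of that idea in JDBC, using the table1 name from the question (the id-list handling is simplified and assumed to be validated):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Both processes call this first, inside their transaction, before issuing
// their UPDATE or DELETE. Because the rows are locked in a fixed (id)
// order, the two sessions cannot acquire row locks in opposite orders -
// the precondition for a deadlock - so one simply queues behind the other.
void lockRowsInOrder(Connection conn, String validatedIdList)
        throws SQLException {
    // validatedIdList is assumed to be a safe, comma-separated list of ids.
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT id FROM table1 WHERE id IN (" + validatedIdList + ") "
            + "ORDER BY id FOR UPDATE")) {
        ps.executeQuery();   // rows are locked once the query returns
    }
}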
"locking tables doesnt look attractive at all -what the point then of having severaal processes working with database"
Obviously we want to enable concurrent processes. The trick is to design processes which can run concurrently without interfering with each other. Your architecture is failing to address this point. It should not be possible for Process B to update records which are being deleted by Process A.
This is an unfortunate side-effect of the whole web paradigm which is stateless and favours an optimistic locking strategy. Getting locks at the last possible moment "scales" but incurs the risk of deadlock.
The alternative is a pessimistic locking strategy, in which a session locks the rows it wants upfront. In Oracle we can do this with SELECT ... FOR UPDATE. This locks a subset of rows (the set defined by the WHERE clause) and not the whole table. Find out more.
So it doesn't hinder concurrent processes which operate on different subsets of data but it will prevent a second session grabbing records which are already being processed. This still results in an exception for the second session but at least that happens before the session has done any work, and provides information to re-evaluate the task (hmmm, do we want to delete these records if they're being updated?).
Hibernate supports the SELECT FOR UPDATE syntax; this StackOverflow thread discusses it, and a short sketch follows.
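A minimal sketch through the JPA API, assuming em is an open EntityManager and Entity1 a hypothetical entity mapped to table1:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Issues SELECT ... FOR UPDATE under the covers: the matching rows are
// locked up front, before any work is done, so a competing session blocks
// on the lock instead of deadlocking halfway through its own changes.
List<Entity1> rows = em.createQuery(
        "SELECT e FROM Entity1 e WHERE e.name = :name", Entity1.class)
    .setParameter("name", "someName")
    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
    .getResultList();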

MySQL/JDBC: Deadlock

I have a J2EE server, currently running only one thread (the problem arises even within one single request), that saves its internal model of data to MySQL/InnoDB tables.
The basic idea is to read data from flat files, do a lot of calculation, and then write the result to MySQL; then read another set of flat files for the next day and repeat from step 1. As only a minor part of the rows change, I use a recordset of already-written rows, compare it to the current result in memory, and then update/insert correspondingly (no deletes, just setting a deletedFlag).
Problem: Despite a purely sequential process I get lock timeout errors (#1204), and the InnoDB status dump shows record locks (though I do not know how to work out the details). To complicate things, under my Windows machine everything works, while the production system (where I can't install innotop) has some record locks.
To the critical code:
Read data and calculate (works).
Get a connection from the Tomcat pool and set autocommit=false.
Use a Statement to issue "LOCK TABLES order WRITE".
Open an updatable recordset on the order table.
For each row in the recordset: if there is a difference, update it from the in-memory object.
For objects not yet in the database: insert the data.
Commit and close the connection.
Steps 5/6 (the update and insert steps) use a commit counter so that the rows are committed after every 500 changes (to avoid having 50,000 uncommitted rows). In the first run (i.e. without any locks) this takes at most 30 seconds per table.
As stated above, right now I avoid any other interaction with the database, but in the future other processes (user requests) might read data or even write some fields. I would not mind those processes reading either old or new data, and waiting a couple of minutes (that is, on a lock) to save their changes to the db.
I would be happy about any recommendation to do better than that.
Summary: Complex code calculates in-memory objects which are to be synchronized with the database. This sync currently seems to lock itself, despite the fact that it sequentially locks, changes, and unlocks the tables without any exceptions being thrown. But for some reason, row locks seem to remain.
Kind regards
Additional information:
MySQL: SHOW PROCESSLIST lists no active connections (all asleep, or alternatively waiting for table locks on the order table), while SHOW ENGINE INNODB STATUS reports a number of row locks (unfortunately I can't tell which transaction is meant, as the output is quite cryptic).
Solved: I had wrongly declared a ResultSet as updatable. The ResultSet was closed only in a finalize() method via the garbage collector, which was not fast enough: before that happened I reopened the ResultSet and therefore tried to acquire a lock on an already-locked table.
It was still odd that innotop showed another of my queries hanging on a completely different table. But as it works for me, I don't care about the oddities :-)
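The underlying lesson generalizes: never rely on the garbage collector to release JDBC resources. A minimal sketch with try-with-resources, assuming the order table from the question (opened read-only here, since the updatable declaration was the bug):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// try-with-resources closes the ResultSet and Statement deterministically,
// releasing their locks before the next statement runs, instead of waiting
// for the garbage collector to get around to calling finalize().
void scanOrders(Connection conn) throws SQLException {
    try (Statement st = conn.createStatement(
             ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
         ResultSet rs = st.executeQuery("SELECT * FROM `order`")) {
        while (rs.next()) {
            // compare the row with the in-memory object; remember the
            // ids that need an UPDATE or INSERT for a separate statement
        }
    }   // rs and st are closed here, whatever happens above
}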

How come Hibernate executes UPDATE SQL statements when I do a read using HQL on the same object/table?

What's happening is: I'm performing a read of some records, say Car where color = red, and it returns 10 cars. Then I iterate over those 10 cars and update a date on each car object, i.e. car.setDate(5/1/2010). Then I perform another read, from Cars where color = green.
I have SQL logging turned on, and I noticed that when I call query.list() it actually prints out some UPDATE statements against the Cars table, and I end up getting a lock wait timeout. Note that this is all done in a single database transaction, so I can understand the lock wait timeout: it seems like I hold a lock on the table I'm reading from, and in that same transaction I'm trying to update it before I release the lock.
But shouldn't it wait to run the SQL that updates those records until the end of the transaction, when I call commit? This is all using Hibernate's HQL to perform the reads. I'm not calling anything directly to do the saves; I'm just calling car.setDate.
The database writes are controlled by the FlushMode on your session. By default, hibernate uses FlushMode.AUTO, which allows it to perform a session.flush() whenever it sees fit. A session.flush() causes uncommitted data on the session to be written to the database. Flushing session data to the database does not make it permanent until you commit your session (or roll it back). Depending on your database Table/Row locking strategy, the rows that have been updated as part of this transaction may be locked for Read or Read/Write access.
I think the answer is in the database: do your tables have the appropriate locking strategy to support your use case?
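If the goal is to defer those UPDATEs until commit, the flush mode can be changed explicitly. A minimal sketch against the Hibernate Session API (the Car entity and queries are the ones described in the question; setFlushMode(FlushMode) is the classic API, renamed setHibernateFlushMode in Hibernate 5.2+):

import java.util.List;
import org.hibernate.FlushMode;
import org.hibernate.Session;

// With FlushMode.COMMIT, Hibernate stops flushing dirty entities before
// each query; the UPDATEs generated by car.setDate(...) are written only
// when the transaction commits. The trade-off: queries inside the same
// transaction will not see those pending in-memory changes.
session.setFlushMode(FlushMode.COMMIT);

List<Car> greenCars = session
    .createQuery("from Car c where c.color = :color", Car.class)
    .setParameter("color", "green")
    .list();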
