I have two databases, and I must commit my transaction only when the relevant entries exist in both. For that, I planned to use an algorithm similar to two-phase commit, but the implementation is getting somewhat tricky.
I thought about implementation as follows:
I query both databases for the respective entries; if an entry is available, I lock it so that no other user can access it, and return success.
If I get success from both, I commit. Otherwise, I release whichever locks were taken.
I also have to handle faults, so the lock should be released after a timeout if an explicit release query hasn't been sent.
My question is: how do I explicitly lock the rows so that no other client can access them concurrently, and is there a better implementation I should consider? Thanks in advance for the help.
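A minimal sketch of that flow, with each database modeled by a hypothetical in-memory `Resource` class (all names here are illustrative, not a real driver API): lock both entries, commit only if both locks succeeded, otherwise roll back whichever lock was taken:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for one database holding lockable entries.
class Resource {
    private final ConcurrentHashMap<String, Boolean> locked = new ConcurrentHashMap<>();
    private final java.util.Set<String> entries = ConcurrentHashMap.newKeySet();

    Resource(String... initial) { for (String e : initial) entries.add(e); }

    // Phase 1: succeed only if the entry exists and is not already locked.
    boolean tryLock(String key) {
        return entries.contains(key) && locked.putIfAbsent(key, Boolean.TRUE) == null;
    }

    void unlock(String key) { locked.remove(key); }

    // Phase 2: the actual commit action (a no-op here, releasing the lock).
    void commit(String key) { unlock(key); }
}

public class TwoPhaseSketch {
    // Commit only if both entries can be locked; otherwise roll back any lock taken.
    static boolean commitBoth(Resource a, Resource b, String key) {
        boolean lockedA = a.tryLock(key);
        boolean lockedB = b.tryLock(key);
        if (lockedA && lockedB) {
            a.commit(key);
            b.commit(key);
            return true;
        }
        if (lockedA) a.unlock(key);
        if (lockedB) b.unlock(key);
        return false;
    }

    public static void main(String[] args) {
        Resource db1 = new Resource("order-1");
        Resource db2 = new Resource("order-1");
        System.out.println(commitBoth(db1, db2, "order-1")); // both present -> true
        Resource db3 = new Resource();                        // entry missing here
        System.out.println(commitBoth(db1, db3, "order-1")); // -> false
    }
}
```

The timeout mentioned above would be a scheduled task that calls `unlock` if no explicit release arrives in time; it is omitted here to keep the sketch short.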
Related
Suppose I have an application deployed on 3 nodes, and from each node one thread tries to fetch and update the same data record at the same time. Data can only be fetched and updated from the central database. The method that makes the database connection is thread-safe. In this scenario, is it possible that the data gets modified concurrently and becomes inconsistent? If yes, how can we solve this problem?
You're conflating several completely different things:
In general, to protect "data" between "threads", one would use a "lock". One way of protecting data in Java is with synchronized.
In general, a "thread" running on one "node" cannot - and will not - interfere with in-memory data objects being manipulated by some thread on a different "node".
"Database access" brings completely different issues to the table. In particular, read up on isolation levels.
Finally, IF you're doing "database updates" and IF "concurrency" is an issue ... then you probably want to perform your update(s) within a DB transaction.
A "transaction" is ACID:
Atomicity
Consistency
Isolation
Durability
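For the first point (threads sharing in-memory data on one node), a minimal `synchronized` example; note that this protects nothing across nodes or inside the database:

```java
public class SyncCounter {
    private int value = 0;

    // synchronized ensures threads in the SAME JVM see consistent updates;
    // it does nothing for threads on other nodes or for database rows.
    public synchronized void increment() { value++; }
    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 10_000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 40000 with synchronized; unpredictable without it
    }
}
```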
I have a situation where we are acquiring lock on an object from database using SELECT FOR UPDATE. This is necessary for us to insert and delete records from multiple tables in an orderly fashion. The functionality works something like this.
Login -> Acquire lock on unique lock object and insert records to multiple tables and release lock -> Logout -> Acquire lock on same unique lock object and delete records from multiple tables and release lock.
We have synchronization in the Java code to verify that a user has logged in before logging him out. However, we obtain another lock at the database level to make sure the database transactions are serialized when a large number of users are logging in.
Problem: The whole system works perfectly on both multi-node clusters and single servers. However, when the number of concurrent users reaches 4000+, we face row lock contention (mode 6) in the database, and some users are unable to log in.
Objective: To fix the locking mechanism so that users can log in and log out successfully.
Things tried so far: Added NOWAIT and SKIP LOCKED to the SELECT FOR UPDATE query. Neither solves the problem: the first simply throws an error, and the second skips the locked row entirely, which defeats the synchronization.
Need suggestions and opinions from Database experts to resolve this issue. TIA.
UPDATE: One more piece of information: we do not update or otherwise touch the locked row. It is used purely as a mechanism to synchronize the other database tasks we do.
Instead of relying on pessimistic locking (your current approach), use optimistic locking, possibly via an ORM.
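A sketch of what optimistic locking looks like, modeled here as a hypothetical plain-Java `VersionedRow` (an ORM such as Hibernate gives you the same behavior with a `@Version` field; the in-database equivalent is `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` plus a check of the affected row count):

```java
public class VersionedRow {
    private String data = "initial";
    private long version = 0;

    public synchronized long readVersion() { return version; }
    public synchronized String readData() { return data; }

    // Succeeds only if nobody updated the row since expectedVersion was read.
    // Mirrors: UPDATE t SET data = ?, version = version + 1
    //          WHERE id = ? AND version = ?
    public synchronized boolean update(long expectedVersion, String newData) {
        if (version != expectedVersion) return false; // stale: caller re-reads and retries
        data = newData;
        version++;
        return true;
    }
}
```

No row is ever held locked between the read and the write, so thousands of concurrent logins never queue up on a single lock row; the occasional loser simply retries.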
I'd like to implement the following scenario in PostgreSQL from Java:
User selects data
User starts transaction: inserts, updates, deletes data
User commits transaction
I'd like the data to be unavailable to other users for the duration of the transaction. It would be enough if another user got an exception when trying to update the table.
I've tried SELECT FOR UPDATE and SELECT FOR SHARE, but they lock the data for reading as well. I've tried the LOCK command, but either I cannot obtain the lock (ERROR: could not obtain lock on relation "fppo10") or the other transaction only hits the lock when committing, not when updating the data.
Is there a way to lock the data at the moment the transaction starts, so that any other INSERT, UPDATE, or DELETE is prevented?
I have had this scenario working successfully for a couple of years on DB2. Now I need the same application to work on PostgreSQL as well.
Finally, I think I get what you're going for.
This isn't a "transaction" problem per se (and depending on the number of tables involved and the required statements, you may not even need one); it's an application design problem. You have two general ways to deal with it: optimistic and pessimistic locking.
Pessimistic locking is explicitly taking and holding a lock. It's best used when you can guarantee that you will be changing the row plus stuff related to it, and when your transactions will be short. You would use it in situations like updating "current balance" when adding sales to an account, once a purchase has been made (update will happen, short transaction duration time because no further choices to be made at that point). Pessimistic locking becomes frustrating if a user reads a row and then goes to lunch (or on vacation...).
Optimistic locking is reading a row (or set of), and not taking any sort of db-layer lock. It's best used if you're just reading through rows, without any immediate plan to update any of them. Usually, row data will include a "version" value (incremented counter or last updated timestamp). If your application goes to update the row, it compares the original value(s) of the data to make sure it hasn't been changed by something else first, and alerts the user if the data changed. Most applications interfacing with users should use optimistic locking. It does, however, require that users notice and pay attention to updated values.
Note that, because a lock is rarely (and for a short period) taken in optimistic locking, it usually will not conflict with a separate process that takes a pessimistic lock. A pessimistic locking app would prevent an optimistic one from updating locked rows, but not reading them.
Also note that this doesn't usually apply to bulk updates, which will have almost no user interaction (if any).
tl;dr
Don't lock your rows on read. Just compare the old value(s) with what the app last read, and reject the update if they don't match (and alert the user). Train your users to respond appropriately.
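The compare-on-write idea can also work without a version column, sketched here as a hypothetical `CompareOnWrite` class (the SQL equivalent is `UPDATE t SET col = ? WHERE id = ? AND col = <value last read>`, then checking the affected row count):

```java
import java.util.Objects;

// Optimistic update by comparing the previously-read value itself
// (no version column). The caller keeps the value it last read and
// passes it back when attempting the write.
public class CompareOnWrite {
    private String col = "A";

    public synchronized String read() { return col; }

    // Returns true iff the row still holds what the caller last read;
    // false means someone else changed it first, so alert the user.
    public synchronized boolean update(String lastRead, String newValue) {
        if (!Objects.equals(col, lastRead)) return false;
        col = newValue;
        return true;
    }
}
```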
Instead of SELECT FOR UPDATE, try a "row exclusive" table lock:
LOCK TABLE YourTable IN ROW EXCLUSIVE MODE;
According to the documentation:
The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies data in a table.
Note that the name of the lock is confusing, but it does lock the entire table:
Remember that all of these lock modes are table-level locks, even if the name contains the word "row"; the names of the lock modes are historical.
I have a J2EE server, currently running only one thread (the problem arises even within one single request), that saves its internal data model to MySQL/InnoDB tables.
Basic idea is to read data from flat files, do a lot of calculation and then write the result to MySQL. Read another set of flat files for the next day and repeat with step 1. As only a minor part of the rows change, I use a recordset of already written rows, compare to the current result in memory and then update/insert it correspondingly (no delete, just setting a deletedFlag).
Problem: Despite the purely sequential process, I get lock timeout errors (#1204), and the InnoDB status output shows record locks (though I do not know how to dig out the details). To complicate things, everything works on my Windows machine, while the production system (where I can't install innotop) shows record locks.
To the critical code:
1. Read data and calculate (works)
2. Get a connection from the Tomcat pool and set autocommit=false
3. Use a Statement to issue "LOCK TABLES order WRITE"
4. Open an updatable ResultSet on table order
5. For each row in the ResultSet: if there is a difference, update it from the in-memory object
6. For objects not yet in the database: insert the data
7. Commit and close the connection
Steps 5 and 6 use a commit counter so that the rows are committed every 500 changes (to avoid having 50,000 uncommitted rows). In the first run (i.e., without any locks) this takes at most 30 seconds per table.
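That commit-counter logic can be sketched like this (a hypothetical `BatchCommitter`; with JDBC, `flush()` would call `connection.commit()`):

```java
import java.util.ArrayList;
import java.util.List;

// Flush every `batchSize` changes so that no more than that many rows
// are ever uncommitted. A final flush() picks up the remainder.
public class BatchCommitter {
    private final int batchSize;
    private int pending = 0;
    private int totalSeen = 0;
    private final List<Integer> commitPoints = new ArrayList<>();

    public BatchCommitter(int batchSize) { this.batchSize = batchSize; }

    public void recordChange() {
        totalSeen++;
        if (++pending >= batchSize) flush();
    }

    public void flush() {                 // stand-in for connection.commit()
        if (pending > 0) { commitPoints.add(totalSeen); pending = 0; }
    }

    // Which change counts triggered a commit (for inspection).
    public List<Integer> commitPoints() { return commitPoints; }
}
```

Note that each intermediate commit also releases the InnoDB row locks taken so far, which is exactly why the counter helps bound lock lifetimes.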
As stated above, right now I avoid any other interaction with the database, but in the future other processes (user requests) might read data or even write some fields. I would not mind those processes reading either old or new data, or waiting a couple of minutes (that is, on a lock) for the changes to be saved to the db.
I would be happy about any recommendation on how to do this better.
Summary: Complex code calculates in-memory objects which are to be synchronized with the database. This sync currently seems to lock itself, despite the fact that it sequentially locks, changes, and unlocks the tables without any exceptions being thrown. But for some reason row locks seem to remain.
Kind regards
Additional information:
MySQL: SHOW PROCESSLIST lists no active connections (all asleep, or alternatively waiting for table locks on table order), while SHOW ENGINE INNODB STATUS reports a number of row locks (unfortunately I can't tell which transaction is meant, as the output is quite cryptic).
Solved: I had wrongly declared a ResultSet as updatable. The ResultSet was only closed in a finalize() method via the garbage collector, which was not fast enough; before it ran, I reopened the ResultSet and therefore tried to acquire a lock on an already-locked table.
Still, it was odd that innotop showed another of my queries hanging on a completely different table. But as it works for me now, I don't care about the oddities :-)
I have a Java frontend and a MySQL backend, and I use LOCK IN SHARE MODE on a SELECT. If I request the same row from another process, it returns the data but does not allow me to update it. What I would like to do is inform the user that they only have a read-only copy, so they can view the information now or request it again later. How can I check the lock status of the row so that the user can be told about this situation? If I use FOR UPDATE instead, the second request just waits until the first user saves the data, which I find less user-friendly: the user gets a blank screen, or a button that does nothing when clicked. Any help will be greatly appreciated. Using MySQL 5.5, Java 7.
The short answer is "You can't"!
You may want to take a look at this discussion.
[EDIT]
The answer to that post states:
You can't (check a lock's state) for non-named locks! More info:
http://forums.mysql.com/read.php?21,222363,223774#msg-223774
Row-level locks are not meant to be application-level locks. They are just a means to implement consistent reads and writes, which means you have to release them as soon as possible. You need to implement your own application-level lock, and it's not that hard. Perhaps a simple user_id field will do: if it is null, there's no lock; if it's not null, the id indicates who is holding the record. You'll need a (short-lived) row-level lock only while updating the user_id field. And as I said before, you have to release the MySQL lock as soon as you are done locking or unlocking the record.
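That application-level lock can be sketched with a hypothetical `RecordCheckout` class standing in for the nullable `user_id` column (in a real system, each map operation below would be a short transaction updating that column under a brief row-level lock):

```java
import java.util.concurrent.ConcurrentHashMap;

// Models a per-record user_id column: absent means "free",
// present means "checked out by that user".
public class RecordCheckout {
    private final ConcurrentHashMap<Long, String> holder = new ConcurrentHashMap<>();

    // Returns true if userId now holds the record, false if someone else does.
    public boolean tryCheckOut(long recordId, String userId) {
        String prev = holder.putIfAbsent(recordId, userId);
        return prev == null || prev.equals(userId);
    }

    // null means the record is free; otherwise the holder's id,
    // which is what the UI would show as "locked by <user>".
    public String holderOf(long recordId) { return holder.get(recordId); }

    public void release(long recordId, String userId) {
        holder.remove(recordId, userId); // only the current holder can release
    }
}
```

Unlike a database row lock, this state is queryable, so the frontend can show "read-only, currently edited by alice" instead of blocking.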
The question's entire premise rests on a rather liberal use of the RDBMS's row-level locking (which is normally short-lived concurrency control) directly for interactive UI control.
But putting that aside and answering the question: one can set the session's innodb_lock_wait_timeout to a very short value (the minimum being 1) and catch the resulting "Lock wait timeout exceeded; try restarting transaction" error when the lock cannot be acquired.
The exception class was com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException when I just tried with mysql-connector-java 5.1.38, but exception classes have changed across releases, so this too may differ in older versions of MySQL Connector/J.
The "attempt and fail" method of acquiring locks is the standard way of tackling these kinds of concurrency situations, as the alternative "check before attempting" approach is an anti-pattern that creates a race condition between the check and the actual attempt to lock.
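The attempt-and-fail idea in miniature, using an in-process `ReentrantLock` as a stand-in for the database lock: `tryLock()` checks and acquires atomically, so there is no window between the check and the acquisition:

```java
import java.util.concurrent.locks.ReentrantLock;

public class AttemptAndFail {
    private final ReentrantLock lock = new ReentrantLock();

    // Attempt the edit; if someone else holds the lock, return false
    // immediately so the UI can tell the user instead of blocking.
    public boolean tryEdit(Runnable edit) {
        if (!lock.tryLock()) return false;
        try {
            edit.run();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

A separate "is it locked?" query followed by an acquire would reintroduce exactly the race condition described above, because the state can change between the two calls.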