Transactions are failing as table is locked in Oracle - Java

I have the following scenario in a legacy codebase:
A 'Team' table holds information about a team and a counter. It has the columns 'TEAM_NAME' and 'COUNTER'.
The following 3-step operation is executed in a transaction:
Take an exclusive LOCK on the table.
Read the counter corresponding to the team.
Use that counter, increment its value, and save it back to the TEAM table.
Once these steps are performed, commit the complete operation.
Because of the exclusive LOCK taken on the table in the first step, other concurrent transactions are failing. I want to perform this without losing transactions in the system.
I think that removing the LOCK statement and making my method synchronized could work, but I have 4 JVMs in the live system, so concurrent transactions can still hit this.
Please suggest a better design to handle this.

You should almost never need to take a manual LOCK in Oracle. If you're doing that, you should probably rethink what you are doing. What you should be doing instead is:
Do a SELECT ... FOR UPDATE on the row corresponding to the team. This will lock only that row, not the entire table. Concurrent sessions working on different teams will be free to continue.
Do whatever you need to do.
Run an UPDATE ... to update the counter.
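A minimal JDBC sketch of that three-step flow (table and column names taken from the question; connection setup and error handling are assumed):

conn.setAutoCommit(false);

// Step 1: lock only this team's row, not the whole table.
int counter;
try (PreparedStatement sel = conn.prepareStatement(
        "SELECT counter FROM team WHERE team_name = ? FOR UPDATE")) {
    sel.setString(1, teamName);
    try (ResultSet rs = sel.executeQuery()) {
        rs.next();
        counter = rs.getInt(1);
    }
}

// Step 2: do whatever you need to do with 'counter'.

// Step 3: write back the incremented value.
try (PreparedStatement upd = conn.prepareStatement(
        "UPDATE team SET counter = ? WHERE team_name = ?")) {
    upd.setInt(1, counter + 1);
    upd.setString(2, teamName);
    upd.executeUpdate();
}

conn.commit(); // releases the row lock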
An even simpler way would be:
Do UPDATE ... RETURNING my_counter INTO ... which will return the updated value of the counter.
Do what you need to do, keeping in mind you have the incremented counter value.
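From JDBC, the RETURNING ... INTO form can be driven through a small PL/SQL block (a sketch; table and column names are from the question, the anonymous block and bind layout are assumptions):

conn.setAutoCommit(false);
try (CallableStatement cs = conn.prepareCall(
        "BEGIN UPDATE team SET counter = counter + 1 "
      + "WHERE team_name = ? RETURNING counter INTO ?; END;")) {
    cs.setString(1, teamName);
    cs.registerOutParameter(2, java.sql.Types.INTEGER);
    cs.execute();
    int newCounter = cs.getInt(2); // the already-incremented value
    // ... do what you need to do with newCounter ...
}
conn.commit();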

Related

Best practice to process multiple rows from a database in different threads

I would like to ask about the best way to do the following. Currently I have many rows being inserted into the database with some status like 'NEW'.
One thread (ThreadA) reads 20 rows from the table with the following query: select * from TABLE where status = 'NEW' order by some_date asc and puts the data into a queue. It only reads data when the number of elements in the queue is less than 20.
Another thread (ThreadB) reads data from the queue and processes it; during processing it changes the status of the row to something like 'IN PROGRESS'.
My fear is that while ThreadB is processing one row but has not yet updated its status, the number of elements in the queue may drop below 20, so ThreadA will fetch another 20 rows and put them into the queue, and there is a possibility of duplicates in the queue.
The data would come back because its status is still 'NEW'. I thought I could mark the rows read with some flag (something like 'fetched') and clear the flag after processing.
I feel like I am missing something, so I would like to ask if there is some best practice for handling tasks like this.
PS: The number of threads that read the data might be increased in the future; this is what I try to keep in mind.
Right, since no-one seems to be picking this one up, I'll continue here what was started in the comments:
There are lots of solutions to this one. In your case, with just one processing thread, you might want, for example, to store just the record ids in the queue. Then ThreadB can fetch the row itself to make sure the status is indeed NEW. Or use optimistic locking with update table set status='IN_PROGRESS' where id=rowId and status='NEW' and quit processing this row on exception.
Optimistic locking is fun, and you could also use it to get rid of the producer thread altogether. Imagine a few threads, each processing records from the database. They could each pick up a record and try to set the status with optimistic locking as in the first example. It's quite possible to get a lot of contention for records this way, so each thread could fetch N rows, where N is the number of threads, or twice that many, and then try to process the first row it succeeded in setting to IN_PROGRESS. This solution makes for a less complicated system, and one less thing to take care of/synchronize with.
And you can have the threads pick up not only records that are NEW, but also those that are IN_PROGRESS with started_date < sysdate - timeout; that would include records that were not processed because of a system failure (like a thread managing to set one row to IN_PROGRESS just before your system went down). So you get some resilience here.
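A JDBC sketch of that claim query (the table name, column names, and the 10-minute timeout interval are all assumptions):

String claim =
      "UPDATE tasks SET status = 'IN_PROGRESS', started_date = CURRENT_TIMESTAMP "
    + "WHERE id = ? AND (status = 'NEW' "
    + "OR (status = 'IN_PROGRESS' AND started_date < CURRENT_TIMESTAMP - INTERVAL '10' MINUTE))";
try (PreparedStatement ps = conn.prepareStatement(claim)) {
    ps.setLong(1, rowId);
    if (ps.executeUpdate() == 1) {
        // we won the race: process rowId
    } else {
        // another thread claimed it first: skip it
    }
}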

How to avoid deadlock in PostgreSQL

I am facing a problem in a scenario where user points are being deducted from a database table, and when a request to add points arrives at the same time, those points are not added because of a deadlock. I need advice on how to avoid the deadlock. I cannot make the code thread-safe, as it would affect processing. I am using Postgres as the database.
If the deadlock happens only occasionally, don't worry. Just repeat the transaction.
If it happens often, you have to do more to get decent performance. There are two measures to reduce the frequency of deadlocks:
Keep the transactions short and don't add or delete more points in one transaction than necessary.
Whenever you modify more than one point in a transaction, handle (and thus lock) the points in some fixed order, for example in the order of the primary key column.
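"Repeat the transaction" can be a small retry loop (a sketch; PostgreSQL reports a detected deadlock with SQLState 40P01):

int maxRetries = 3;
for (int attempt = 1; ; attempt++) {
    try {
        conn.setAutoCommit(false);
        // ... add or deduct the points, touching rows in primary-key order ...
        conn.commit();
        break; // success
    } catch (SQLException e) {
        conn.rollback();
        if (!"40P01".equals(e.getSQLState()) || attempt == maxRetries) {
            throw e; // not a deadlock, or retries exhausted
        }
    }
}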

Why does a SELECT wait for a lock?

In my application I have the problem that SELECT statements sometimes run into a java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction. Sadly I can't create an example, as the circumstances are very complex. So the question is just about general understanding.
A little bit of background information: I'm using MySQL (InnoDB) with the READ_COMMITTED isolation level.
Actually I don't understand how a SELECT can ever run into a lock timeout with that setup. I thought a SELECT would never lock, as it just returns the latest committed state (managed by MySQL). Judging by what is happening, though, this seems to be wrong. So how is it really?
I already read https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html but that didn't really give me a clue. No SELECT ... FOR UPDATE or anything like that is used.
That is probably due to your database. In my experience, this kind of problem usually comes from that side, not from the programming side that accesses it; in the end, the programming side is just a "go and get that for me in that db" thing.
I found this without much effort.
It basically explains that:
A lock wait timeout typically occurs when a transaction is waiting on row(s) of data to update that have already been locked by some other transaction.
You should also check this answer about a specific transaction problem, which might help you, since trying to change different tables can cause the timeout:
the query was attempting to change at least one row in one or more InnoDB tables. Since you know the query, all the tables being accessed are candidates for being the culprit.
To speed up queries, a DB can execute several transactions at the same time. For example, if someone runs a SELECT over a table for the wages of a company's employees (each employee identified by an id) and someone else changes the last name of an employee who has, say, married, both queries can run at the same time because they don't interfere.
But in other cases even a SELECT statement might interfere with another statement.
To prevent unexpected results in a SQL transactions, transactions follow the ACID-model which stands for Atomicity, Consistency, Isolation and Durability (for further information read wikipedia).
Let's say transaction 1 starts to calculate something and then wants to write the results to table A. Before writing, it blocks SELECT statements on table A; otherwise this would violate the isolation requirement, because if a transaction 2 started while 1 was still writing, 2's results would depend on which rows 1 had already written and which it had not.
Now it might even produce a deadlock. E.g. before transaction 1 can write the last field in table A, it still has to write something to table B, but transaction 2 has already locked table B in order to read safely from it after reading from A. Now you have a deadlock: 2 wants to read from A, which is blocked by 1, so it waits for 1 to finish, but 1 waits for 2 to unlock table B so that it can finish itself.
To solve this problem one strategy is to rollback certain transactions after a certain timeout. (more here)
So that might be one reason for your SELECT statement to get a lock wait timeout exceeded.
But a deadlock usually happens by coincidence, so if transaction 2 was forced to roll back, transaction 1 should be able to finish, and 2 should succeed on a later try.
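On the application side, the timeout can be treated like the deadlock case and retried (a sketch; MySQL reports "Lock wait timeout exceeded" as vendor error code 1205, and runQuery is a hypothetical method standing in for the failing SELECT):

try {
    runQuery(conn);
} catch (SQLException e) {
    if (e.getErrorCode() == 1205) { // ER_LOCK_WAIT_TIMEOUT
        conn.rollback();            // restart the transaction, as the message says
        runQuery(conn);             // single retry; a loop with backoff is sturdier
    } else {
        throw e;
    }
}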

Multiple threads update the same row in a database at the same time: how to maintain consistency

In my Java application, multiple threads update the same row at the same time. How do I get consistent results?
For example, the current row value is count = 0:
thread 1 updates it to count+1 = 1;
thread 2 updates it at the same time to count+1 = 2
But it should not happen like this:
thread 1 updates it to count+1 = 1;
thread 2 updates it at the same time to count+1 = 1;
Both threads should not pick up the same value just because they run at the same time.
How can we achieve this with JDBC, Hibernate, or the database?
There are two possible ways to go.
Either you choose a pessimistic approach and lock rows, tables or even ranges of rows.
Or you work with versioned Entities (Optimistic Locking).
Maybe you will find more information here:
https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/transactions.html
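A minimal sketch of the versioned-entity (optimistic) option; the entity and its columns are made up:

import javax.persistence.*;

@Entity
public class TeamCounter {
    @Id
    private Long id;

    private long count;

    @Version         // Hibernate adds "and version = ?" to every UPDATE and raises
    private int version; // an optimistic-lock exception when the row has gone stale
}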
Incrementing a counter in this way is hard to manage concurrently. You really need to use pessimistic locking to solve this particular problem.
SELECT 1 FROM mytable WHERE x IS TRUE FOR UPDATE
This will force each thread to wait until the previous one commits before it reads the counter.
This is necessary because you have two potential issues, the first is the read race and the second is the write lock. The write lock gets taken automatically in most RDBMSs but unless you take it explicitly before you read, the counter will be incremented once by both threads together (because both read the original value before the update).
If you need parallel writes, then you need to insert into a table and materialize an aggregate later, as sketched below. That is a more complex design pattern, though.
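A sketch of that insert-then-aggregate pattern (the counter_events table is an assumed name): each writer inserts a delta row, so writers never contend, and the current value is computed on read.

// Writers append a delta instead of updating a shared row.
try (PreparedStatement ins = conn.prepareStatement(
        "INSERT INTO counter_events (team_name, delta) VALUES (?, 1)")) {
    ins.setString(1, teamName);
    ins.executeUpdate();
}

// Readers materialize the aggregate.
try (PreparedStatement sum = conn.prepareStatement(
        "SELECT COALESCE(SUM(delta), 0) FROM counter_events WHERE team_name = ?")) {
    sum.setString(1, teamName);
    try (ResultSet rs = sum.executeQuery()) {
        rs.next();
        long count = rs.getLong(1);
    }
}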
Your question is not 100% clear, but I guess you're looking for the different locking strategies: http://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/Hibernate_User_Guide.html#locking
If you're working on a DB that has sequence generators (Oracle, Postgres, ...), you should consider using those. Assuming you always increment by the same value, and it's not the case that one thread increments by one and another by two, that should be a good solution.
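A sketch of the sequence approach (assumes a sequence named team_seq already exists; this is PostgreSQL syntax, Oracle would use SELECT team_seq.NEXTVAL FROM dual):

try (PreparedStatement ps = conn.prepareStatement("SELECT nextval('team_seq')");
     ResultSet rs = ps.executeQuery()) {
    rs.next();
    long next = rs.getLong(1); // unique across all threads and JVMs, no row lock involved
}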
Here is a detailed answer to this question:
How to properly handle two threads updating the same row in a database
To summarize:
The biggest question is: are the two threads trying to persist the same data? To summarize the content of the linked answer, let's name the two threads T1 and T2. There are a couple of approaches:
Approach 1: this is more or less the "last update wins" situation. It more or less avoids optimistic locking (the version counting). If you have no dependency from T1 to T2, or the reverse, in order to set the status PARSED, this should be good.
Approach 2: optimistic locking. This is what you have now. The solution is to refresh the data and restart your operation.
Approach 3: row-level DB lock. The solution here is more or less the same as for approach 2, with the small correction that a pessimistic lock is held for the duration. The main difference is that in this case it may be a READ lock, and you might not even be able to read the data from the database in order to refresh it, if it is PESSIMISTIC_READ.
Approach 4: application-level synchronization. There are many different ways to synchronize. One example would be to arrange all your updates in a BlockingQueue or JMS queue (if you want it to be persistent) and push all updates from a single thread. To visualize it a bit: T1 and T2 put elements on the queue, and there is a single thread T3 that reads operations and pushes them to the database server.
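A sketch of that queue idea (the class and method names are made up):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// All database writes funnel through one writer thread (T3 in the text above).
public class SingleWriter {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public SingleWriter() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // one update at a time: no row contention
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // T1, T2, ... call this instead of hitting the database directly.
    public void submit(Runnable update) throws InterruptedException {
        queue.put(update);
    }
}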

Transaction in PostgreSQL

I'd like to realize the following scenario in PostgreSQL from Java:
User selects data
User starts transaction: inserts, updates, deletes data
User commits transaction
I'd like the data not to be available to other users during the transaction. It would be enough if the other user got an exception when trying to update the table.
I've tried select for update and select for share, but they lock the data for reading as well. I've tried the lock command, but either I'm not able to get the lock (ERROR: could not obtain lock on relation "fppo10") or the other transaction hits the lock only when trying to commit, not when updating the data.
Is there a way to lock data at the moment a transaction starts, so as to prevent any other update, insert, or delete statement?
I have had this scenario working successfully for a couple of years on a DB2 database. Now I need the same application to work with PostgreSQL as well.
Finally, I think I get what you're going for.
This isn't a "transaction" problem per se (and depending on the number of tables to deal with and the required statements, you may not even need one), it's an application design problem. You have two general ways to deal with this; optimistic and pessimistic locking.
Pessimistic locking is explicitly taking and holding a lock. It's best used when you can guarantee that you will be changing the row plus stuff related to it, and when your transactions will be short. You would use it in situations like updating "current balance" when adding sales to an account, once a purchase has been made (update will happen, short transaction duration time because no further choices to be made at that point). Pessimistic locking becomes frustrating if a user reads a row and then goes to lunch (or on vacation...).
Optimistic locking is reading a row (or set of), and not taking any sort of db-layer lock. It's best used if you're just reading through rows, without any immediate plan to update any of them. Usually, row data will include a "version" value (incremented counter or last updated timestamp). If your application goes to update the row, it compares the original value(s) of the data to make sure it hasn't been changed by something else first, and alerts the user if the data changed. Most applications interfacing with users should use optimistic locking. It does, however, require that users notice and pay attention to updated values.
Note that, because a lock is rarely (and for a short period) taken in optimistic locking, it usually will not conflict with a separate process that takes a pessimistic lock. A pessimistic locking app would prevent an optimistic one from updating locked rows, but not reading them.
Also note that this doesn't usually apply to bulk updates, which will have almost no user interaction (if any).
tl;dr
Don't lock your rows on read. Just compare the old value(s) with what the app last read, and reject the update if they don't match (and alert the user). Train your users to respond appropriately.
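A JDBC sketch of that compare-before-update check (assumes the row carries a version column; the column names are made up):

String sql = "UPDATE fppo10 SET payload = ?, version = version + 1 "
           + "WHERE id = ? AND version = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, newPayload);
    ps.setLong(2, id);
    ps.setInt(3, versionReadEarlier);
    if (ps.executeUpdate() == 0) {
        // the row changed since it was read: reject the update and alert the user
    }
}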
Instead of select for update, try a "row exclusive" table lock:
LOCK TABLE YourTable IN ROW EXCLUSIVE MODE;
According to the documentation:
The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies data in a table.
Note that the name of the lock is confusing: even though it says "row", it is taken at the table level:
Remember that all of these lock modes are table-level locks, even if the name contains the word "row"; the names of the lock modes are historical.
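From Java, note that LOCK TABLE is only valid inside a transaction block and is held until commit (a minimal sketch):

conn.setAutoCommit(false); // LOCK TABLE must run inside a transaction
try (Statement st = conn.createStatement()) {
    st.execute("LOCK TABLE YourTable IN ROW EXCLUSIVE MODE");
    // ... inserts, updates, deletes ...
}
conn.commit(); // the table lock is released here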
