I am facing a problem in a scenario where user points are being deducted from a database table while, at the same time, a request comes in to add points, and those points are not getting added because of a deadlock. I need advice on how to avoid the deadlock. I cannot make the code thread-safe as it would affect processing. I am using Postgres as the database.
If the deadlock happens only occasionally, don't worry. Just repeat the transaction.
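As a rough sketch in plain JDBC (doPointsTransfer stands in for your actual transaction), a retry loop could look like this; PostgreSQL reports a deadlock with SQLState 40P01:
int attempts = 0;
while (true) {
    try {
        doPointsTransfer(conn);   // your transaction: deduct and add points
        break;                    // success, leave the loop
    } catch (SQLException e) {
        conn.rollback();
        // 40P01 is PostgreSQL's SQLState for deadlock_detected
        if ("40P01".equals(e.getSQLState()) && ++attempts < 3) {
            continue;             // repeat the whole transaction
        }
        throw e;                  // a different error, or too many retries
    }
}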
If it happens often, you have to do more to get decent performance. There are two measures to reduce the frequency of deadlocks:
Keep the transactions short and don't add or delete more points in one transaction than necessary.
Whenever you modify more than one point in a transaction, handle (and thus lock) the points in some fixed order, for example in the order of the primary key column.
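For example, a sketch in JDBC (table and column names assumed): sorting the ids first makes every transaction lock the rows in the same order, so two transactions can no longer wait on each other crosswise:
// update rows in ascending primary-key order
List<Long> ids = new ArrayList<>(affectedUserIds);
Collections.sort(ids);
PreparedStatement ps = conn.prepareStatement(
        "UPDATE points SET balance = balance + ? WHERE user_id = ?");
for (Long id : ids) {
    ps.setInt(1, delta);
    ps.setLong(2, id);
    ps.executeUpdate();          // each UPDATE locks its row until commit
}
conn.commit();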
Related
I have the below scenario in a legacy codebase -
The 'Team' table holds information about a team and a counter. It has columns named 'TEAM_NAME' and 'COUNTER'.
The below 3-step operation is executed in a transaction -
Take an exclusive LOCK on the table.
Read the counter corresponding to the team.
Use that counter, increment its value, and save it back to the TEAM table.
Once these steps are performed, commit the complete operation.
Because of the exclusive LOCK taken on the table in the first step, other concurrent transactions are failing. I want to perform this without losing transactions in the system.
I think that removing the LOCK statement and making my method synchronized could work, but I have 4 JVMs in production, so concurrent transactions can still hit this.
Please suggest a better way to design this.
You should almost never need to take a manual LOCK in Oracle. If you're doing that, you should probably rethink what you are doing. What you should probably be doing instead is:
Do SELECT ... FOR UPDATE on the row of your table corresponding to the team. This will lock only that row, not the entire table, so concurrent sessions working on different teams are free to continue (see the sketch after these steps).
Do whatever you need to do.
Run an UPDATE ... to update the counter.
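A sketch of those steps in plain JDBC (assuming the TEAM table from the question):
conn.setAutoCommit(false);
PreparedStatement sel = conn.prepareStatement(
        "SELECT counter FROM team WHERE team_name = ? FOR UPDATE");
sel.setString(1, teamName);
ResultSet rs = sel.executeQuery();
rs.next();
long counter = rs.getLong(1);   // only this team's row is locked now

// ... do whatever you need with the counter ...

PreparedStatement upd = conn.prepareStatement(
        "UPDATE team SET counter = ? WHERE team_name = ?");
upd.setLong(1, counter + 1);
upd.setString(2, teamName);
upd.executeUpdate();
conn.commit();                  // commit releases the row lock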
An even simpler way would be:
Do UPDATE ... RETURNING my_counter INTO ... which will return the updated value of the counter.
Do what you need to do, keeping in mind you have the incremented counter value.
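From JDBC this can be done with an anonymous PL/SQL block (a sketch, same assumed TEAM table; the RETURNING clause hands back the post-update value):
CallableStatement cs = conn.prepareCall(
        "BEGIN UPDATE team SET counter = counter + 1 " +
        "WHERE team_name = ? RETURNING counter INTO ?; END;");
cs.setString(1, teamName);
cs.registerOutParameter(2, Types.NUMERIC);
cs.execute();
long counter = cs.getLong(2);   // the already-incremented value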
In my Java application multiple threads update the same row at the same time. How do I get consistent results?
For example, say the current row value is count = 0. What should happen is:
thread 1 updates it to count+1 = 1;
thread 2, running at the same time, updates it to count+1 = 2.
But it should not happen like this:
thread 1 updates it to count+1 = 1;
thread 2, running at the same time, updates it to count+1 = 1.
Both threads must not pick up the same value just because they run at the same time.
How can we achieve this in JDBC, Hibernate, or the database?
There are two possible ways to go.
Either you choose a pessimistic approach and lock rows, tables or even ranges of rows.
Or you work with versioned entities (optimistic locking), as sketched below.
Maybe you will find more information here:
https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/transactions.html
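For the optimistic variant, a minimal sketch of a versioned entity (class and field names assumed):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Counter {
    @Id
    private Long id;

    private long count;

    @Version   // Hibernate compares this column on every UPDATE
    private long version;

    // getters and setters omitted
}
If two sessions update the same row, the loser's flush fails with an optimistic-lock exception, and the application can reload the entity and retry.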
Incrementing a counter in this way is hard to manage concurrently. You really need to use pessimistic locking to solve this particular problem.
SELECT 1 FROM mytable WHERE x IS TRUE FOR UPDATE
This will force each thread to wait until the previous one commits before it reads the counter.
This is necessary because you have two potential issues: the first is the read race, the second is the write lock. The write lock is taken automatically in most RDBMSs, but unless you take it explicitly before you read, both threads will read the original value before either update, and two increments will only raise the counter by one.
If you need parallel writes, then you need to insert rows into a table and materialize an aggregate later. That is a more complex design pattern, though.
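A sketch of that pattern in JDBC (table and column names assumed): every thread only inserts delta rows, so writers never contend on a single counter row, and the total is computed later:
// each increment is its own row; plain inserts don't block each other
PreparedStatement ins = conn.prepareStatement(
        "INSERT INTO counter_events (counter_id, delta) VALUES (?, ?)");
ins.setLong(1, counterId);
ins.setInt(2, 1);
ins.executeUpdate();

// materialize the aggregate later, e.g. in a periodic job
PreparedStatement agg = conn.prepareStatement(
        "SELECT SUM(delta) FROM counter_events WHERE counter_id = ?");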
Your question is not 100% clear, but I guess you're looking for the different locking strategies: http://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/Hibernate_User_Guide.html#locking
If you're working on a DB that has sequence generators (Oracle, Postgres, ...) you should consider using those. Assuming you always increment by the same value, and it's not that one thread increments by one and another by two, that should be a good solution.
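For example, on PostgreSQL (sequence name assumed), a sequence hands out distinct values without taking any row locks:
// one-time DDL: CREATE SEQUENCE my_counter_seq;
PreparedStatement ps = conn.prepareStatement(
        "SELECT nextval('my_counter_seq')");
ResultSet rs = ps.executeQuery();
rs.next();
long next = rs.getLong(1);   // distinct even across JVMs, no locks taken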
Here is a detailed answer to this question:
How to properly handle two threads updating the same row in a database
To summarize:
The biggest question is: are the two threads trying to persist the same data? To summarize the content of the linked answer, let's name the two threads T1 and T2. There are a couple of approaches:
Approach 1: this is more or less the last-update-wins situation. It more or less avoids optimistic locking (the version counting). If you don't have a dependency from T1 to T2, or the reverse, in order to set the status PARSED, this should be fine.
Approach 2: optimistic locking. This is what you have now. The solution is to refresh the data and restart your operation.
Approach 3: row-level DB lock. The solution here is more or less the same as approach 2, with the small correction that the pessimistic lock is held for the duration of the transaction. The main difference is that in this case it may be a READ lock, and you might not even be able to read the data from the database in order to refresh it if it is PESSIMISTIC_READ.
Approach 4: application-level synchronization. There are many different ways to do synchronization. One example would be to arrange all your updates in a BlockingQueue or JMS queue (if you want it to be persistent) and push all updates from a single thread. To visualize it a bit: T1 and T2 will put elements on the queue, and a single thread T3 will read operations off it and push them to the database server.
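A minimal sketch of approach 4 (UpdateTask and applyToDatabase are assumed placeholders):
BlockingQueue<UpdateTask> queue = new LinkedBlockingQueue<>();

// T1 and T2 only enqueue their updates
queue.put(new UpdateTask(rowId, newValue));

// a single writer thread (T3) drains the queue and talks to the database
Thread writer = new Thread(() -> {
    try {
        while (true) {
            UpdateTask task = queue.take();   // blocks until work arrives
            applyToDatabase(task);            // the only thread that writes
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
writer.start();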
Say I'm creating an entity like this:
Answer answer = new Answer(this, question, optionId);
ofy().save().entity(answer);
Should I check whether the write process is successful?
Say I want to take another action (increment a counter). Should I open a transaction that includes the write?
And also, how can I check whether the write was successful?
An error while saving will produce an exception. Keep in mind that since you are not calling now(), you have started an async operation, and the actual exception may surface when the session is closed (e.g., at the end of the request).
Yes, if you want to increment a counter, you need to start a transaction which encompasses the load, increment, and save. Also keep in mind that it's possible for a transaction to retry even though it is successful, so a naive transaction can possibly overcount. If you need a rigidly exact increment, the pattern is significantly more complex. All databases suffer from some variation of this problem.
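A sketch of such a transactional increment with Objectify (a Counter entity with a numeric field is assumed):
// load, increment and save inside one transaction; the closure may be
// retried by Objectify on contention, hence the overcounting caveat above
ofy().transact(() -> {
    Counter c = ofy().load().type(Counter.class).id(counterId).now();
    c.setCount(c.getCount() + 1);
    ofy().save().entity(c).now();
});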
I want to iterate over records in the database and update them. However, since the updating both takes some time and is prone to errors, I need to a) not keep the db waiting (as it would be, e.g., with a ScrollableResults) and b) commit after each update.
The second thing is that this is done in multiple threads, so I need to ensure that if thread A is taking care of a record, thread B gets another one.
How can I implement this sensibly with hibernate?
To give a better idea, the following code would be executed by several threads, where all threads share a single instance of the RecordIterator:
Iterator<Record> iter = db.getRecordIterator();
while(iter.hasNext()){
Record rec = iter.next();
// do something lengthy here
db.save(rec);
}
So my question is how to implement the RecordIterator. If I perform a query on every next(), how do I ensure that I don't return the same record twice? If I don't, which query do I use to return detached objects? Is there a flaw in the general approach (e.g. use one RecordIterator per thread and let the db somehow handle the synchronization)? Additional info: there are way too many records to keep them locally (e.g. in a set of treated records).
Update: Because the overall process takes some time, the status of the records can change in the meantime. Because of that, the ordering of a query's result can change. I guess that to solve this problem I have to mark records in the database once I return them for processing...
Hmmm, what about pushing your objects from a reader thread into some bounded blocking queue, and letting your updater threads read from that queue?
In your reader, do some paging with setFirstResult/setMaxResults. E.g. if your queue holds 1000 elements at most, fill it up 500 at a time. When the queue is full, the next push will automatically wait until the updaters take the next elements.
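A rough sketch of that layout (sizes arbitrary, interrupt handling elided):
BlockingQueue<Record> queue = new ArrayBlockingQueue<>(1000);

// single reader thread: pages through the table and feeds the queue
int first = 0;
while (true) {
    List<Record> page = session.createQuery("from Record", Record.class)
            .setFirstResult(first)
            .setMaxResults(500)
            .list();
    if (page.isEmpty()) break;
    for (Record r : page) {
        queue.put(r);            // blocks while the queue is full
    }
    first += 500;
}

// each updater thread: take a record, do the lengthy work, save and commit
Record rec = queue.take();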
My suggestion, since you're sharing an instance of the master iterator, would be to run all of your threads using a shared Hibernate transaction, with one load at the beginning and a big save at the end. You load all of your data into a single Set which you can iterate over with your threads (be careful of locking, so you might want to split off a section for each thread, or somehow manage the shared resource so that you don't overlap).
The beauty of the Hibernate solution is that the records aren't immediately saved to the database, since you're using a transaction, and are stored in Hibernate's cache. At the end they'd all be written back to the database at once. This saves on the expensive database writes you're worried about, plus it gives you an actual object to work with on each iteration, instead of just a database row.
I see in your update that the status of the records may change during processing, and this could always cause a problem. If this is a constantly running or long-running process, then my advice, using a Hibernate solution, would be to work in smaller sets and, yes, add a flag to mark records that have been updated, so that when you move to the next set you can pick up the ones that haven't been touched.
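A sketch of such a claim step in JDBC (table, column, and status values assumed); a conditional UPDATE marks a record as taken atomically, so only one thread can win it:
PreparedStatement claim = conn.prepareStatement(
        "UPDATE record SET status = 'IN_PROGRESS' " +
        "WHERE id = ? AND status = 'NEW'");
claim.setLong(1, candidateId);
boolean mine = claim.executeUpdate() == 1;   // 0 means another thread got it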
I have an EJB RESTEasy controller with CMT.
One critical method which creates some entities in the DB works fine and quickly on a single invocation.
But when I invoke it simultaneously with 10 users it works very slowly.
I've tracked the time in the logs, and the place that grows the most compared to a single invocation is
the lag between exiting the RESTEasy controller and entering MainFilter.
This lag grows from 0-1 ms for a single invocation to 8 sec. for 10 simultaneous invocations!
I need ideas as to what could be the reason and how I can speed it up.
My immediate reaction is that it's a database locking problem. Can you tell if the lag occurs as the flow of control passes across the transaction boundary? Try the old technique of littering your code with print statements to see when things stop.
Lock contention over RESTEasy threads? The database? It is very difficult to predict where the bottleneck is.
As per some of the comments above, it does sound like it could be a database locking problem. From what you said, the "lag" occurs between the controller and the filter invoking the controller. Presumably that is where the transaction commit is occurring.
You say, however, that the code creates some entities in the database, but you don't say whether the code also does any updates or selects. Just doing inserts wouldn't normally create a locking problem with most databases, unless there are associated updates or selects (i.e. SELECT ... FOR UPDATE in Oracle).
Check and see if there are any resources, like a table of keys or a parent record, that are being updated and might be causing the problem.
Also check your JDBC documentation. Most JDBC drivers have logging levels that should allow you to see the operations being performed on the database. While this may generate a sizeable log, if you include a thread identifier in the log you should be able to see where the problems are occurring.