What is the most suitable way to handle optimistic locking in JPA? I have the solutions below, but I don't know which one is best:
1. Handle the optimistic locking exception in a catch block and retry.
2. Use an atomic flag: if another thread is processing, wait until it finishes. This way data modification conflicts or locking contention may be avoided.
3. Maintain a queue of all incoming database change requests and process them one by one.
Can anyone suggest a better solution to this problem?
You don't say why you are using optimistic locking.
You usually use it to avoid blocking resources (like database rows) for a long time: data is read from the database and displayed to the user, the user eventually makes changes, and the data is written back.
You don't want to block the data for other users during that time. In a scenario like this you don't want to use option 2, for the same reason.
Option 1 is not easy, because an optimistic locking exception tells you that something has changed the data behind your back, and you would overwrite these changes with your data. Re-trying to write the data won't help here.
Option 3 might be possible in some situations, but adds a lot of complexity and possible errors. This would be my last resort by far.
In my experience optimistic locking exceptions are quite rare. In most cases the easiest way out is to discard everything, and re-do it from start, even if it means to tell the user: sorry, there was an unexpected problem, do it again.
On the other hand, if you get these problems regularly between two competing threads, you should try to avoid them. In that case option 2 might be the way to go, but it depends on the scenario.
If the conflict occurs between a user interaction and a background thread (and not between two users) you could try to change the timing of the background thread, or signal the background thread to delay its work.
To sum it up: It mostly depends on your setup, and when and how the exception occurs.
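To make the "discard everything and re-do it from start" advice concrete, here is a minimal plain-Java sketch. It is not JPA-specific: `OptimisticFailure` and `withRetry` are hypothetical names standing in for `OptimisticLockException` and a helper of your own. The point is that each attempt must re-run the whole read-modify-write unit, so it picks up the other party's committed changes instead of re-writing stale data.

```java
import java.util.function.Supplier;

// Hypothetical stand-in for JPA's OptimisticLockException in this sketch.
class OptimisticFailure extends RuntimeException {
}

class OptimisticRetry {
    // Re-runs the whole read-modify-write unit of work on conflict, so each
    // attempt sees fresh state instead of the stale entities of the last try.
    static <T> T withRetry(int maxAttempts, Supplier<T> unitOfWork) {
        OptimisticFailure last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return unitOfWork.get(); // must re-read inside, not reuse stale data
            } catch (OptimisticFailure e) {
                last = e; // discard this attempt; the loop starts over
            }
        }
        throw last; // give up after maxAttempts, e.g. tell the user to try again
    }
}
```

Note that the unit of work passed in must begin by loading the current data; retrying a bare write of the old values would just reintroduce the lost-update problem described above.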
With different database technologies (most recently MS-SQL and H2), and even with different applications, I get exceptions from statements that could not acquire a table lock.
This happens in particular when accessing the same table from many parallel threads. Assuming these conflicts cannot be avoided in principle with complex enough statements and enough parallel threads, what is a good way to handle these failed statements?
The MS-SQL exception explicitly suggests to retry the statement.
H2 allows to set a LOCK_TIMEOUT.
Since every statement may fail this way, a retry would be needed for every statement in the software, which seems like a lot of boiler-plate code. In contrast, the timeout is a simple configuration, but it needs to be excessively high to guarantee that a few more threads do not trigger the exception again, which brings me back to the retry logic.
The legacy code contains several dozen statements, each implemented in its own method that opens and closes its own connection/statement/result-set triplet. What would be a good way to refactor it, apart from wrapping each method in a retry loop? Lower performance is acceptable, but no failed statements.
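One way to avoid sprinkling retry loops over dozens of methods is to extract the loop once into a helper and pass each open/execute/close unit in as a callback. A hedged sketch (`SqlWork` and `SqlRetry` are hypothetical names; the attempt count and backoff are illustrative, not tuned values):

```java
import java.sql.SQLException;

// Hypothetical callback type: one complete unit of work, including opening
// and closing the connection/statement/result set, so a retry starts clean.
interface SqlWork<T> {
    T run() throws SQLException;
}

class SqlRetry {
    // Retries the whole unit on SQLException (e.g. a lock-timeout from
    // MS-SQL or H2), with a simple linear backoff between attempts.
    static <T> T execute(SqlWork<T> work, int maxAttempts, long backoffMillis)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.run();
            } catch (SQLException e) {
                last = e;
                Thread.sleep(backoffMillis * attempt); // back off before retrying
            }
        }
        throw last; // all attempts failed; surface the last error
    }
}
```

Each legacy method then shrinks to a one-line call such as `SqlRetry.execute(() -> loadCustomer(id), 5, 100)`, so the boiler-plate lives in exactly one place. A refinement would be to inspect the SQLState or vendor error code and only retry genuine lock-timeout errors.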
By clustered environment I mean the same code running on multiple server machines. The scenario I can think of is as follows: multiple requests come in from different threads at the same time to update card details based on expiry time. A code snippet:
synchronized (card) { // card object
    if (card.isExpired()) {
        updateCard();
    }
}
My understanding is that a synchronized block only works at the JVM level, so how is this achieved in a multi-server environment?
Please suggest edits to rephrase the question; I wrote down what I can recollect from a question that was asked to me.
As you said, a synchronized block only coordinates threads within the local JVM.
When it comes to cluster, it is up to you how you drive your distributed transaction.
It really depends where your objects (e.g. card) are stored.
Database - You will probably need some locking strategy. Very likely optimistic locking, which stores a version on the entity and checks it whenever a change is made. Or the "safer" pessimistic locking, where you lock the whole row while making changes.
Memory - You will probably need some memory grid solution (e.g. Hazelcast...) and make use of its transaction support or implement it by yourself
Anything else? You will have to specify...
See, in a clustered environment, you will usually have multiple JVMs running the same code. If traffic is high, then actually the number of JVMs could auto-scale and increase (new instances could be spawned). This is one of the reasons why you should be really careful when using static fields to keep data in a distributed environment.
Next, coming to your actual question: if you have a single JVM serving requests, then all other threads will have to wait to get that lock. If you have multiple JVMs running, then a lock acquired by one thread on one JVM will not prevent acquisition of the (in reality not the same, but conceptually the same) lock by another thread in a different JVM.
I am assuming you want to ensure that only one thread can edit the object or perform the action (based on the method name, i.e. updateCard). I suggest you implement optimistic locking (versioning); Hibernate can do this quite easily, and it prevents lost updates.
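For illustration, what a @Version column buys you can be simulated in plain Java: a write succeeds only if the version read earlier is still current; otherwise the caller gets an optimistic failure and must re-read. (`VersionedCard` is a hypothetical class for this sketch, not Hibernate API; in JPA you would simply annotate a field with @Version and the provider would issue `UPDATE ... WHERE id = ? AND version = ?` for you.)

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical plain-Java illustration of optimistic locking semantics.
class VersionedCard {
    private final AtomicLong version = new AtomicLong(0);
    private volatile boolean expired = true;

    long readVersion() { return version.get(); }

    // Mimics "UPDATE card SET expired = ? WHERE id = ? AND version = ?":
    // the update commits only if nobody changed the row since we read it.
    boolean updateIfVersionMatches(long expectedVersion, boolean newExpired) {
        if (version.compareAndSet(expectedVersion, expectedVersion + 1)) {
            expired = newExpired;
            return true;  // commit succeeded, version bumped
        }
        return false;     // concurrent change detected: optimistic failure
    }

    boolean isExpired() { return expired; }
}
```

Because the version check happens in the database, it works across all JVMs in the cluster, unlike the synchronized block in the question.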
Say I'm creating an entity like this:
Answer answer = new Answer(this, question, optionId);
ofy().save().entity(answer);
Should I check whether the write process is successful?
Say I want to perform another action (increment a counter). Should I use a transaction that includes the write?
And also, how can I check if the writing process is successful?
An error while saving will produce an exception. Keep in mind that since you are not calling now(), you have started an async operation and the actual exception may occur when the session is closed (eg, end of the request).
Yes, if you want to increment a counter, you need to start a transaction which encompasses the load, increment, and save. Also keep in mind that it's possible for a transaction to retry even though it is successful, so a naive transaction can possibly overcount. If you need a rigidly exact increment, the pattern is significantly more complex. All databases suffer from some variation of this problem.
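To sketch the "rigidly exact increment" concern: if a transaction can retry even after a successful commit, a naive counter overcounts, and one common fix is to record an operation id so the increment becomes idempotent. Below is a plain-Java illustration with in-memory maps standing in for the datastore; `IdempotentCounter` and the op-id scheme are hypothetical, not Objectify API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: an increment that is safe to replay. The caller
// assigns each logical increment a unique opId; a retry reuses the same
// opId and is therefore ignored the second time.
class IdempotentCounter {
    private final Map<String, Long> counters = new HashMap<>();
    private final Set<String> appliedOps = new HashSet<>(); // applied-op log

    synchronized void increment(String counterId, String opId) {
        if (!appliedOps.contains(opId)) {       // skip if already applied
            counters.merge(counterId, 1L, Long::sum);
            appliedOps.add(opId);
        }
    }

    synchronized long get(String counterId) {
        return counters.getOrDefault(counterId, 0L);
    }
}
```

In a real datastore both the counter and the applied-op record would have to be written in the same transaction, so that a retry sees either both or neither.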
My backend system serves about 10K POS devices, and each device issues its requests sequentially; however, I am wondering how the backend can guarantee that it handles the requests of a given client sequentially.
For example, a device issues a 'sell' request and times out waiting for the response (perhaps the DB is blocked), so it issues a 'cancellation' to cancel that sale request. In this case, the backend may still be handling the 'sale' transaction when it gets the 'cancellation' request, which may cause unexpected results.
My idea is to set up a persistent queue for each device (client), but is it OK to set up 10K queues? I am not sure, please help.
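For what it's worth, 10K logical queues need not mean 10K threads: a small shared pool plus one lightweight FIFO per device key is enough to serialize each device's requests. A hypothetical in-process sketch (a production version would additionally need the queues to be persistent, as the question notes, and would have to evict idle device entries):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: per-device FIFO ordering over a shared thread pool.
// Each device key holds the tail of its chain; a new task is appended to
// that tail, so tasks for one device never overlap or reorder.
class PerDeviceSerializer {
    private final ConcurrentHashMap<String, CompletableFuture<Void>> tails =
            new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    // Runs tasks for the same deviceId strictly one after another.
    CompletableFuture<Void> submit(String deviceId, Runnable task) {
        return tails.compute(deviceId, (id, tail) -> {
            CompletableFuture<Void> prev =
                    (tail != null) ? tail : CompletableFuture.completedFuture(null);
            return prev.thenRunAsync(task, pool);
        });
    }

    void shutdown() { pool.shutdown(); }
}
```

With this shape, a 'cancellation' submitted after a 'sell' for the same device is guaranteed to run only after the 'sell' has finished, while requests from different devices still run in parallel.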
This is an incredibly complex area of computer science and a lot of these problems have been solved many times. I would not try to reinvent the wheel.
My advice:
Read about and thoroughly understand ACID (summaries paraphrased):
Atomicity - If any part of a transaction fails, the whole transaction fails, and the database is not left in an unknown or corrupted state. This is hugely important. Rely on existing software to make this happen in the actual database. And don't invent data structures that require you to reinvent your own transaction system. Make your transactions as small as possible to reduce failures.
Consistency - The database is never left in an invalid state. All operations committed to it will take it to a new valid state.
Isolation - The operations you perform on a database can be performed at the same time and result in the same state as if performed one after the other. OR performed safely inside a locking transaction.
Durability - Once a transaction is committed, it will remain so.
Both your existing system and your proposed idea sound like they could potentially be violating ACID:
A stateful request system probably violates (or makes it hard not to violate) isolation.
A queue could violate durability if not done in a bullet-proof way.
Not to mention, you have scalability issues as well. Combine scalability and ACID and you have a heavyweight situation requiring serious expertise.
If you can help it, I would strongly suggest relying on existing systems, especially if this is for point of sale.
What are the best ways to avoid race conditions in JSP without slowing down request processing? I have tried:
isThreadSafe=false
synchronized(session)
However, is there any other alternative available?
A one-size-fits-all solution (e.g. isThreadSafe=false) causes requests to be executed one at a time, and that inevitably slows down request processing.
To avoid that scenario, you need to understand why you are getting race conditions and (re-)design your architecture to avoid the problem. For example:
if the race condition is in updates to some shared in-memory data structures, you need to synchronize access and updates to the data structure at the appropriate level of granularity
if the race condition is in updates to your database, you need to restructure your SQL to use transactions at the appropriate level of granularity.
These are just (possible) schemas for how to solve your race condition problems. In reality, you have to understand the root causes yourself.
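As one example of "the appropriate level of granularity" for shared in-memory data: instead of serializing every request with a single global lock, make each update atomic per key. A minimal sketch (`HitCounter` is a hypothetical example class, not part of any servlet API):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example: a shared per-page hit counter. ConcurrentHashMap's
// merge() performs each update atomically for its key, so concurrent
// requests for different pages never block each other.
class HitCounter {
    private final ConcurrentHashMap<String, Long> hits = new ConcurrentHashMap<>();

    void record(String page) {
        hits.merge(page, 1L, Long::sum); // atomic read-modify-write per key
    }

    long count(String page) {
        return hits.getOrDefault(page, 0L);
    }
}
```

A bean like this would be shared at application scope and called from the JSP, rather than putting the mutable state and synchronization logic in the page itself.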
It depends on why you have race conditions.
The simplest thought is to have no global variables that you write to.
Do all of your logic in methods that only use local variables.
Don't put any Java code in your JSP page; call out to a bean to do the operations.
What are you doing that is causing a race condition?
isThreadSafe=false
This will probably lead to poor performance as it makes page access sequential. It also will only affect one page, so if you access the data via another page, this will do nothing for you.
synchronized(session)
This is not guaranteed to work (though it will on some servers as a side-effect of the implementation).
Any solution will probably require more information about the data you are trying to guard and the configuration of your server.