Achieving a Pessimistic Lock in Spring Boot JPA with the @Transactional Approach - java

So here is how it goes,
After reading about locking transactions in Spring JPA for ACID properties, I did a POC (proof of concept) to get transactions working.
In my scenario, I have two applications trying to access the database with read and write operations. Both applications have a controller, an entity, and a repository for a sample table with 3 columns.
In my first application, the controller goes like this:
@GetMapping("/test")
@Transactional
public void test() {
    JpaPoc jpaPoc = jpaRepository.findById(2).orElseThrow();
    // some System.out prints
    // sleep the thread for 2 minutes
    // set or change one or more properties
    jpaRepository.save(jpaPoc);
}
Repository File:
@Lock(LockModeType.PESSIMISTIC_WRITE)
Optional<JpaPoc> findById(Integer id);
In my second application there's a controller, a blank repository, and the same entity.
Controller:
@GetMapping("/test")
@Transactional
public void test() {
    JpaPoc jpaPoc = jpaRepository.findById(2).orElseThrow();
    // some System.out prints for testing
    // setting the jpaPoc property to something else
    jpaRepository.save(jpaPoc);
    // success print
}
Now the thing is that when I run these applications and hit the controller methods (first, then second), my second controller is not waiting for the lock to be released. It just goes through and updates the values.
Furthermore, let's say that we did end up fixing things and it is now waiting for the lock to be released.
Now let's say I don't want the second transaction to wait for the lock to be released before performing its operation. I want it to throw an exception saying it's locked and roll itself back. Is there any advice I can get on this? It would be of great help.
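If failing fast is the goal, one common approach (a sketch, not from the question: the repository name is made up, it uses the standard JPA lock-timeout hint, and it assumes a database that supports NOWAIT-style locking) is to combine the pessimistic lock with a lock timeout of 0 on the locking repository method, so the second transaction fails with a locking exception instead of blocking:
import java.util.Optional;
import javax.persistence.LockModeType;
import javax.persistence.QueryHint;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.QueryHints;

public interface JpaPocRepository extends JpaRepository<JpaPoc, Integer> {

    // A lock timeout of 0 asks the database not to wait for the row lock;
    // if the row is already locked, the call fails instead of blocking.
    // (On Jakarta-based versions the hint name is "jakarta.persistence.lock.timeout".)
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @QueryHints(@QueryHint(name = "javax.persistence.lock.timeout", value = "0"))
    Optional<JpaPoc> findById(Integer id);
}
Spring usually translates the resulting failure into a PessimisticLockingFailureException (or a similar DataAccessException), which causes the transaction to roll back when it propagates out of the @Transactional method.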
Thanks in Advance..

Related

Best way to implement Java multithreading to prevent transaction rollback

I have a List which contains, say, 4 DTOs. I am performing some processing on each of the DTOs in my list. If any exception comes up for one of the DTOs, then all the transactions are rolled back (even if the processing succeeded for the other 3 DTOs).
My code looks like this:
@Transactional
public void processEvent(List<MyObject> myList) {
    myList.forEach(dto -> process(dto));
}

public void process(MyObject dto) {
    // some code which calls another class marked as @Transactional
    // and saves the processed data to the database
}
I want to perform this processing for each DTO on a separate thread, such that an exception encountered in one thread does not roll back the transactions for all the DTOs.
Also, is there a way to process these DTOs one by one on different threads so that data consistency is maintained?
Simply move the transactional annotation to the method called with the DTO. Also, I am not sure a transaction is needed for the DTO at all. This looks like a controller, which should not have any transactional annotations. In the service, once you change the DTO to an entity and are ready to save it, you may put the annotation there. Furthermore, if you are simply calling the repository's save method, you do not need to be in a transaction, as the save method already has the annotation in the repository.
public void processEvent(List<MyObject> myList) {
    myList.forEach(dto -> process(dto));
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void process(MyObject dto) {
    // some code which calls another class marked as @Transactional
    // and saves the processed data to the database
}
And one last piece of advice: do not put @Transactional on classes, unless it has the readOnly parameter set to true. Then you can put @Transactional on the individual methods that perform any CRUD operations.
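For illustration, a minimal sketch of that pattern (the service, entity, and repository names here are made up, not from the question):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional(readOnly = true) // class-level default: read-only transactions
public class OrderService {

    private final OrderRepository orderRepository;

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // Read-only by default, thanks to the class-level annotation
    public Order findOrder(Long id) {
        return orderRepository.findById(id).orElseThrow();
    }

    // Write methods override the class-level setting with a read-write transaction
    @Transactional
    public Order placeOrder(Order order) {
        return orderRepository.save(order);
    }
}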

Is there a generic way to work with optimistic locking using Hibernate/Spring Data JPA?

I'm using Hibernate's @Version annotation for optimistic locking. My usual use case for updating something in the DB looks like this:
1. Open a transaction (using @Transactional)
2. Select the entities I need to change
3. Check if I can make the needed changes (validation step)
4. Do the changes
5. Save everything (Spring Data JPA repository save() method), commit the transaction
6. If I catch an OptimisticLockException, retry everything from step 1 (until a successful save or a failure at the validation step, and not more than X times)
This algorithm seems quite common for any kind of optimistic locking processing. Is there a generic way, using Hibernate or Spring Data JPA, to do these retries and handle optimistic locking failures, or should I write such a method myself? I mean something like (but not literally):
boolean trySaveUntilDoneOrNotOptimisticLockingExceptionOccur(Runnable codeWhichSelectsValidatesUpdatesAndSavesDataButWithoutOptimisticLockingProcessing, int maxOptimisticLockingRetries)
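For concreteness, a hand-rolled sketch of the kind of wrapper described above might look roughly like this (the method name is shortened, and it assumes Spring surfaces the conflict as an ObjectOptimisticLockingFailureException and that the Runnable runs in its own new transaction on every attempt):
import org.springframework.orm.ObjectOptimisticLockingFailureException;

public boolean trySaveUntilDoneOrRetriesExhausted(Runnable unitOfWork, int maxRetries) {
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            unitOfWork.run(); // select, validate, update and save in a fresh transaction
            return true;
        } catch (ObjectOptimisticLockingFailureException e) {
            // another transaction changed the row in the meantime; retry from scratch
        }
    }
    return false; // gave up after maxRetries attempts
}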
As the question is tagged with spring-data-jpa, I will answer from the Spring world.
Just have a look at @Retryable. I find it quite useful for exactly the use case you describe. This is my usual pattern:
@Service
@Transactional
@Retryable(maxAttempts = 7,
        backoff = @Backoff(delay = 50),
        include = { TransientDataAccessException.class,
                    RecoverableDataAccessException.class })
public class MyService {
    // all methods in this service are now transactional and automatically retried
}
You can play with backoff options, of course.
Check out here for further examples on @Retryable.
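Applied to the optimistic-locking case specifically, a method-level variant might look like this (a sketch, not from the answer; it assumes spring-retry is on the classpath, @EnableRetry is configured, and Spring reports the conflict as an OptimisticLockingFailureException):
import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.transaction.annotation.Transactional;

// inside a Spring-managed service bean:
@Retryable(include = OptimisticLockingFailureException.class,
        maxAttempts = 3,
        backoff = @Backoff(delay = 50))
@Transactional
public void updateWithRetry(Long id) {
    // select, validate, change and save the entity here; if the version
    // check fails at flush/commit time, the whole method is retried
}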

Why do we have to use the @Modifying annotation for queries in Spring Data JPA

For example, I have a method in my CRUD interface which deletes a user from the database:
public interface CrudUserRepository extends JpaRepository<User, Integer> {

    @Transactional
    @Modifying
    @Query("DELETE FROM User u WHERE u.id=:id")
    int delete(@Param("id") int id, @Param("userId") int userId);
}
This method will work only with the annotation @Modifying. But what is the need for the annotation here? Why can't Spring analyze the query and understand that it is a modifying query?
CAUTION!
Using @Modifying(clearAutomatically=true) will drop any pending updates on the managed entities in the persistence context. Spring states the following:
Doing so triggers the query annotated to the method as an updating query instead of a selecting one. As the EntityManager might contain outdated entities after the execution of the modifying query, we do not automatically clear it (see the JavaDoc of EntityManager.clear() for details), since this effectively drops all non-flushed changes still pending in the EntityManager. If you wish the EntityManager to be cleared automatically, you can set the @Modifying annotation's clearAutomatically attribute to true.
Fortunately, starting from Spring Data JPA 2.0.4.RELEASE, Spring Data added the flushAutomatically flag (https://jira.spring.io/browse/DATAJPA-806) to automatically flush any managed entities in the persistence context before executing the modifying query; see the reference: https://docs.spring.io/spring-data/jpa/docs/2.0.4.RELEASE/api/org/springframework/data/jpa/repository/Modifying.html#flushAutomatically
So the safest way to use @Modifying is:
@Modifying(clearAutomatically=true, flushAutomatically=true)
What happens if we don't use those two flags?
Consider the following code:
repo {
    @Modifying
    @Query("delete User u where u.active=0")
    public void deleteInActiveUsers();
}
Scenario 1: why flushAutomatically
service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    johnUser.setActive(false);
    repo.save(johnUser);
    repo.deleteInActiveUsers(); // BAM, it won't delete JOHN right away
    // JOHN still exists, since John with active == false was not
    // flushed into the database when @Modifying kicked in.
    // So imagine that after the `deleteInActiveUsers` line you called a native
    // query or started a new transaction: in both cases John
    // was not deleted, which can lead to faulty business logic.
}
Scenario 2: why clearAutomatically
In the following, consider that johnUser.active is already false:
service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    repo.deleteInActiveUsers(); // you think that John is deleted now
    System.out.println(userRepo.findById(1).isPresent()); // TRUE!!!
    System.out.println(userRepo.count()); // 1 !!!
    // JOHN still exists, since in this transaction's persistence context
    // John's object was not cleared upon the @Modifying query execution;
    // John's object will still be fetched from the 1st level cache.
    // `clearAutomatically` takes care of clearing the modified objects
    // from the current transaction's persistence context.
}
So if, in the same transaction, you are working with modified objects before or after the line that runs the @Modifying query, then use clearAutomatically and flushAutomatically; if not, you can skip these flags.
BTW, this is another reason why you should always put the @Transactional annotation on the service layer, so that you have only one persistence context for all your managed entities in the same transaction.
Since the persistence context is bound to the Hibernate session, you need to know that a session can contain a couple of transactions; see this answer for more info: https://stackoverflow.com/a/5409180/1460591
The way Spring Data works is that it joins the transactions together (known as transaction propagation) into one transaction (default propagation: REQUIRED); see this answer for more info: https://stackoverflow.com/a/25710391/1460591
To connect things together: if you have multiple isolated transactions (e.g. because there is no transactional annotation on the service), then, given the way Spring Data works, you have multiple sessions and hence multiple persistence contexts (aka 1st level caches). That means you might delete/modify an entity in one persistence context, yet, even with flushAutomatically, the same deleted/modified entity might already have been fetched and cached in another transaction's persistence context. That would cause wrong business decisions due to wrong or un-synced data.
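To make the service-layer advice concrete, here is a minimal sketch (the service and repository names are made up) of wrapping both the read and the modifying query in one transaction so they share a single persistence context:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserRepository userRepo;

    public UserService(UserRepository userRepo) {
        this.userRepo = userRepo;
    }

    @Transactional // one transaction, hence one persistence context, for both calls below
    public void deactivateAndPurge(Integer userId) {
        User user = userRepo.findById(userId).orElseThrow();
        user.setActive(false);          // pending change held in the 1st level cache
        userRepo.deleteInActiveUsers(); // flushAutomatically/clearAutomatically decide whether
                                        // that pending change is flushed first and evicted after
    }
}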
This will trigger the query annotated to the method as an updating query instead of a selecting one. As the EntityManager might contain outdated entities after the execution of the modifying query, we automatically clear it (see the JavaDoc of EntityManager.clear() for details). This will effectively drop all non-flushed changes still pending in the EntityManager. If you don't wish the EntityManager to be cleared automatically, you can set the @Modifying annotation's clearAutomatically attribute to false.
For further detail you can follow this link:
http://docs.spring.io/spring-data/jpa/docs/1.3.4.RELEASE/reference/html/jpa.repositories.html
Queries that require a @Modifying annotation include INSERT, UPDATE, DELETE, and DDL statements.
Adding the @Modifying annotation indicates that the query is not a SELECT query.
When you use only the @Query annotation, you should use SELECT queries.
However, with the @Modifying annotation you can use INSERT, DELETE, and UPDATE queries on the method.
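Putting the pieces together, a small sketch of a modifying UPDATE query using both safety flags (the entity field and method names are made up for illustration):
import java.time.LocalDate;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface UserRepository extends JpaRepository<User, Integer> {

    // flushAutomatically pushes pending changes to the database before the bulk update;
    // clearAutomatically evicts possibly stale entities from the persistence context afterwards.
    @Modifying(clearAutomatically = true, flushAutomatically = true)
    @Query("UPDATE User u SET u.active = false WHERE u.lastLogin < :cutoff")
    int deactivateUsersNotSeenSince(@Param("cutoff") LocalDate cutoff);
}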

JPA2 Optimistic Lock

Hi, I'm struggling with an optimistic lock in JPA2 and I have no more ideas why it is occurring.
My case is that I'm running multiple threads, but there is one entity in the DB which stores progress, which means that different threads are trying to update this entity during execution so that the user can see the progress.
I have two methods, addAllItems and addDone. Both methods are used to update the entity by several threads, and I'm displaying the result by showing (done/allItems)*100.
The methods were simple at the beginning:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void addAllItems(Long id, Integer items) {
    Job job = jobDao.findById(id);
    job.setAll(job.getAll() + items);
    jobDao.merge(job);
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void addDone(Long id, Integer done) {
    Job job = jobDao.findById(id);
    job.setDone(job.getDone() + done);
    jobDao.merge(job);
}
When I realized that the optimistic lock exception was occurring, I changed both methods by adding synchronized to the signature. It had no effect, so I added a refresh (from the entity manager) to make sure that I had the current version. It also made no difference. I also added a manual flush at the end, but still nothing better...
Here is the final version of the method (addAllItems is pretty much the same; the only difference is in the setter):
@Transactional(propagation = Propagation.REQUIRES_NEW)
public synchronized void addDone(Long id, Integer done) {
    Job job = jobDao.findById(id);
    job = jobDao.refresh(job);
    job.setDone(job.getDone() + done);
    jobDao.merge(job);
    jobDao.flush();
}
Where the jobDao.refresh method just calls refresh on the entityManager.
I'm using EclipseLink 2.40.
What else can I check?
I'm out of ideas at the moment...
As you are sure that you are using a proxy (I assume you correctly configured the PlatformTransactionManager), you could try an explicit pessimistic lock inside the transaction. Normally you should not need to do so, but if it fixes the problem...
I suppose that in your DAO you have something like:
Job job = entityManager.find(Job.class, jobId);
To force a pessimistic lock, just change it to:
Job job = entityManager.find(Job.class, jobId, LockModeType.PESSIMISTIC_WRITE);
As you just load, modify, store, and commit, it might be a correct use case for pessimistic locking.
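For context, a minimal sketch of how that could look inside the DAO (the field and method names are assumptions, not from the question):
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;

public class JobDao {

    @PersistenceContext
    private EntityManager entityManager;

    // PESSIMISTIC_WRITE typically translates to SELECT ... FOR UPDATE, so a
    // concurrent transaction blocks here until this one commits or rolls back.
    public Job findByIdForUpdate(Long id) {
        return entityManager.find(Job.class, id, LockModeType.PESSIMISTIC_WRITE);
    }
}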

How to refresh JPA entities when backend database changes asynchronously?

I have a PostgreSQL 8.4 database with some tables and views which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST based services derived from those views and tables and deployed those to a Glassfish 3.1.2.2 server.
There is another process which asynchronously updates contents in some of the tables used to build the views. I can directly query the views and tables and see that these changes have occurred correctly. However, when pulled from the REST based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the Glassfish server and JPA needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
    private Class<T> entityClass;
    private String entityName;
    private static boolean _refresh = true;

    public static void refresh() { _refresh = true; }

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.entityName = entityClass.getSimpleName();
    }

    private void doRefresh() {
        if (_refresh) {
            EntityManager em = getEntityManager();
            em.flush();
            for (EntityType<?> entity : em.getMetamodel().getEntities()) {
                if (entity.getName().contains(entityName)) {
                    try {
                        em.refresh(entity);
                        // log success
                    }
                    catch (IllegalArgumentException e) {
                        // log failure ... typically complains entity is not managed
                    }
                }
            }
            _refresh = false;
        }
    }
    ...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that an IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl#28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: Turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier, namely that the view had no single field which could be used as a unique identifier. NetBeans required that I select an ID field, so I just chose one part of what should have been a multi-part key. This exhibited the behavior that all records with a particular ID field appeared identical, even though the database had records with the same ID field whose remaining fields differed. JPA didn't go any further than looking at what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (I never was able to get the multi-part key to work properly).
I recommend adding an @Startup @Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
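A rough sketch of that listener loop with plain PgJDBC follows (the connection details and channel name are assumptions, and a real @Startup @Singleton would run this on a managed thread):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgInvalidationListener {

    public void listenForChanges() throws Exception {
        // a dedicated, non-pooled connection kept open for LISTEN
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN entity_changed"); // channel name used by the trigger (assumed)
        }
        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (!Thread.currentThread().isInterrupted()) {
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    String payload = n.getParameter(); // e.g. the changed row's ID
                    // evict the matching entry from the 2nd level cache here
                }
            }
            Thread.sleep(500); // simple polling interval for delivered notifications
        }
    }
}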
As an alternative to LISTEN and NOTIFY, you could poll a change log table on a timer, and have a trigger on the problem table append changed row IDs and change timestamps to the change log table. This approach is portable, apart from needing a different trigger for each DB type, but it's inefficient and less timely. It requires frequent, inefficient polling and still has a time delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the costs of this approach a little bit.
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your #Startup #Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can disable caching entirely (see: http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), but be prepared for a fairly large performance loss.
Otherwise, you can clear the cache programmatically with:
em.getEntityManagerFactory().getCache().evictAll();
You can map it to a servlet so you can call it externally. This is better if your database is modified externally very seldom and you just want to be sure JPA will pick up the new version.
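For example, a small sketch of exposing that as a JAX-RS resource on Glassfish (the path and class name are made up for illustration):
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/admin/clear-cache")
public class CacheEvictionResource {

    @PersistenceUnit
    private EntityManagerFactory emf;

    @POST
    public Response clearSharedCache() {
        emf.getCache().evictAll(); // drop every entity from the 2nd level (shared) cache
        return Response.ok("cache cleared").build();
    }
}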
Just a thought, but how do you receive your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one, and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or you could try merge (or similar methods).
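For illustration, re-attaching and reloading a detached instance might look like this (a tiny sketch; em is the current EntityManager and the variable names are made up):
MyView managed = em.merge(detachedView); // merge() returns a managed copy of the detached instance
em.refresh(managed);                     // refresh() reloads its state from the database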
JPA doesn't do any caching by default. You have to explicitly configure it. I believe it's a side effect of the architectural style you have chosen: REST. I think caching is happening at the web servers, proxy servers, etc. I suggest you read this and debug more.
