PessimisticLockException when saving data in loop - java

I am getting a PessimisticLockException when trying to persist multiple objects of the same type through JPA.
Here is my code for reference
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public Boolean changeDplListMappingByCustomWatchList(List<Integer> dplIds, Integer customWatchListId,
        ServiceRequestor customServiceRequestor) {
    try {
        for (Integer dplId : dplIds) {
            if (dplId != null) {
                CustomWatchListDplMapping customWatchListDplMapping = new CustomWatchListDplMapping();
                customWatchListDplMapping.setDplId(dplId);
                customWatchListDplMapping.setWatchListId(customWatchListId);
                this.create(customWatchListDplMapping);
            }
        }
    } catch (Exception e) {
        LOG.error("Exception occurred while changing dpl mapping by custom watchList id", e);
    }
    return true;
}

public void create(Model entity) {
    manager.persist(entity);
    manager.joinTransaction();
}
After the first entity is persisted, it throws the exception as soon as it iterates to the second one. If there is only one entity to save it works fine, but for more than one entity it throws this exception.

By default the pessimistic lock timeout is about one second, so change it in your properties file; that will give the lock time to be released and you will be able to save into the database.
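A minimal sketch of that property change, assuming a plain JPA setup (the persistence-unit name myUnit and the 5-second value are illustrative; the default timeout and its exact handling are vendor-specific):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Raise the standard JPA pessimistic lock timeout (in milliseconds) so
// the persists in the loop get more time to acquire their locks.
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.lock.timeout", 5000);
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit", props);

The same key can also be set in persistence.xml or, under Spring, via spring.jpa.properties.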

Related

@Transactional on @Async methods in Spring Boot with shared object

I have a requirement where I have to save a lot of data into an Oracle DB.
I want to use multithreading to speed things up. Everything is in a transaction.
I have two tables in a many to one relation.
First, I synchronously save the one-side entity with repo.save()
Next I build batches of 25 many-side entity entries (from an initial list of a few hundred) and start a new thread using @Async to save each batch.
Each of these entities has to reference the one-side entity, so I pass the newly created one-side entity to the @Async methods.
I wait for all of them to complete before exiting.
The problem is that I get the following exception:
java.sql.BatchUpdateException: ORA-02291: integrity constraint (USER.ENTITY_FK1) violated - parent key not found
This basically means that the one-side entity I am trying to reference from the many-side does not exist.
From here the transaction fails and everything gets rolled back.
Can someone help me?
Thanks
Update with code:
In OrchestratorService.class:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void importDataFromFile(MultipartFile file) throws Exception {
    MyDto myDto = convertor.convertFileToDto(file);
    log.info("myDto : {}", myDto.toString());
    ParentEntity savedParent = parentEntityService.saveParent(myDto.getFileName());
    childService.saveAllChildren(savedParent, myDto);
}
ParentEntityService.class:
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
public ParentEntity saveParent(String fileName) {
    ParentEntity parentEntity = ParentEntity.builder()
            .fileName(fileName)
            .loadingDate(LocalDate.now())
            .build();
    ParentEntity savedParentEntity = parentEntityRepository.save(parentEntity);
    log.info("Saved ParentEntity : {} ", savedParentEntity);
    return savedParentEntity;
}
ChildService.class:
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
public void saveAllChildren(ParentEntity parentEntity, MyDto myDto) throws Exception {
    List<CompletableFuture<ProcessStatus>> completableFutures = new ArrayList<>();
    List<List<ChildrenData>> lists = ListUtils.partition(myDto.getChildrenDataList(), 25);
    for (List<ChildrenData> list : lists) {
        CompletableFuture<ProcessStatus> result = asyncSaveBatchService.saveBatch(parentEntity, list);
        completableFutures.add(result);
    }
    // wait for all to finish
    CompletableFuture<Void> combinedFuture = CompletableFuture.allOf(completableFutures.toArray(new CompletableFuture[0]));
    combinedFuture.get();
    for (CompletableFuture<ProcessStatus> result : completableFutures) {
        if (result.get().getStatus().equals(Boolean.FALSE)) {
            throw new RuntimeException("Process failed : \n " + result.get().getErrorMessage());
        }
    }
}
And finally AsyncSaveBatchService.class:
@Async("threadPoolTaskExecutor")
public CompletableFuture<ProcessStatus> saveBatch(ParentEntity parentEntity, List<ChildrenData> list) {
    try {
        List<ChildrenEntity> childrenEntitiesToSave = new ArrayList<>();
        for (ChildrenData childrenData : list) {
            ChildrenEntity childrenEntity = ChildrenEntity.builder()
                    .firstName(childrenData.getFirstName())
                    .lastName(childrenData.getLastName())
                    .parent(parentEntity)
                    .build();
            childrenEntitiesToSave.add(childrenEntity);
        }
        childrenEntityRepository.saveAll(childrenEntitiesToSave);
        return CompletableFuture.completedFuture(ProcessStatus.builder().status(Boolean.TRUE).errorMessage(null).build());
    } catch (Exception ex) {
        return CompletableFuture.completedFuture(ProcessStatus.builder().status(Boolean.FALSE).errorMessage(ExceptionUtils.getStackTrace(ex)).build());
    }
}

Hibernate JPA update multi threads single entity

I have a message queue that delivers messages with update info for some entity fields. There are 10 threads that process messages from the queue.
For example:
The 1st thread processes a message and should update field A of my entity with id 123.
The 2nd thread processes another message and should update field B of the same entity with id 123 at the same time.
Sometimes, after the updates, the database does not contain some of the updated fields.
some updater:
someService.updateEntityFieldA(entityId, newFieldValue);
some service:
public Optional<Entity> findById(String entityId) {
    return Optional.ofNullable(new DBWorker().findOne(Entity.class, entityId));
}

public void updateEntityFieldA(String entityId, String newFieldValue) {
    findById(entityId).ifPresent(entity -> {
        entity.setFieldA(newFieldValue);
        new DBWorker().update(entity);
    });
}
db worker:
public <T> T findOne(final Class<T> type, Serializable entityId) {
    T findObj;
    try (Session session = HibernateUtil.openSessionPostgres()) {
        findObj = session.get(type, entityId);
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
    return findObj;
}

public void update(Object entity) {
    try (Session session = HibernateUtil.openSessionPostgres()) {
        session.beginTransaction();
        session.update(entity);
        session.getTransaction().commit();
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
}
HibernateUtil.openSessionPostgres() obtains a new session each time from sessionFactory.openSession().
Is it possible to implement such logic without thread locks, optimistic locking, or pessimistic locking?
If you use sessionFactory.openSession() to always open a new session, then on the update Hibernate may be losing the information about which fields are dirty and need updating, so it issues an UPDATE for all fields.
Setting the hibernate.show_sql property to true will show you the SQL UPDATE statements generated by Hibernate.
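For example, one way to flip it on programmatically (the property can equally go in hibernate.cfg.xml or hibernate.properties; the surrounding bootstrap code is assumed):

import org.hibernate.cfg.Configuration;

// Log every SQL statement Hibernate issues, so you can see whether the
// UPDATE touches one column or all of them.
Configuration cfg = new Configuration();
cfg.setProperty("hibernate.show_sql", "true");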
Try refactoring your code so that, in the same transaction, you load the entity and update the field. A session.update is not needed: the entity is managed, so on transaction commit Hibernate will flush the change and issue the SQL UPDATE.
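A minimal sketch of that refactoring, reusing the question's HibernateUtil-style session handling (the names come from the question, but the exact wiring here is assumed):

// Load and modify the entity inside one session/transaction. Dirty
// checking tracks the changed field, so no explicit update() call is
// needed; the commit flushes just that change.
public void updateEntityFieldA(String entityId, String newFieldValue) {
    try (Session session = HibernateUtil.openSessionPostgres()) {
        session.beginTransaction();
        Entity entity = session.get(Entity.class, entityId);
        if (entity != null) {
            entity.setFieldA(newFieldValue);
        }
        session.getTransaction().commit();
    }
}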

How to roll back a spring jpa repo.save() function

I've seen several posts on this topic, but most of them are about it not working. What I'm trying to do is save several objects in a loop, but if one of them fails, roll back all of the saved objects.
Here is my current code.
@Override
public Fleet saveFleet(String fleetId, List<String> serialNoList) {
    Fleet fleet = new Fleet();
    Fleet tempFleet = new Fleet();
    fleet.setKey(new FleetKey());
    //Change this to string utils uppercase
    fleet.getKey().setFleetId(StringUtils.upperCase(fleetId));
    fleet.getKey().setUserId(StringUtils.upperCase(userService.getCurrentUser().getUserId()));
    fleet.getKey().setDealerCd("USER");
    for (int i = 0; i < serialNoList.size(); i++) {
        //Try catch block?
        tempFleet = fleetRepo.save(fleet);
    }
    //commit if all the data goes correctly, rollback if there is an exception.
    return tempFleet;
}
Add @Transactional(rollbackFor = Exception.class) to the top of the method. Spring will roll back all data within this transaction for you if any exception is thrown by the database.
As others have mentioned, you can use @Transactional:
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/Transactional.html
What it actually does is:
try {
    transaction.begin();
    saveFleet(fleetId, serialNoList);
    transaction.commit();
} catch (Exception ex) {
    transaction.rollback();
    throw ex;
}
You can use the @Transactional annotation to roll back the transaction in case of an exception.
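Applied to the method from the question, a minimal sketch (the key/field setup is elided; it is assumed the method is called through the Spring proxy, i.e. from another bean, since self-invocation bypasses @Transactional):

@Override
@Transactional(rollbackFor = Exception.class) // one transaction for the whole loop
public Fleet saveFleet(String fleetId, List<String> serialNoList) {
    Fleet fleet = new Fleet();
    Fleet tempFleet = new Fleet();
    // ... same key/field setup as in the question ...
    for (int i = 0; i < serialNoList.size(); i++) {
        // If this save throws, the whole transaction is marked for
        // rollback, so rows saved in earlier iterations are undone too.
        tempFleet = fleetRepo.save(fleet);
    }
    return tempFleet;
}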

multiple batch queries within transaction template

I am trying to delete records read from a file, 100 at a time, from 3 tables, using Spring JDBC batch delete. If I wrap the logic inside a TransactionTemplate, is it going to work as expected? For example, say I create 3 batches out of 300 records and wrap the logic inside a transaction: will the transaction roll back the 1st and 2nd batches if the 3rd batch hits a problem? I wrote the code below to achieve what I have described; is it correct?
TransactionTemplate txnTemplate = new TransactionTemplate(txnManager);
txnTemplate.execute(new TransactionCallbackWithoutResult() {
    @Override
    public void doInTransactionWithoutResult(final TransactionStatus status) {
        try {
            deleteFromABCTable(jdbcTemplate, successList);
            deleteFromDEFTable(jdbcTemplate, successList);
            deleteFromXYZTable(jdbcTemplate, successList);
        } catch (Exception e) {
            status.setRollbackOnly();
            throw new RuntimeException(e); // rethrow as unchecked: the callback cannot throw checked exceptions
        }
    }
});
My delete methods:
private void deleteFromABCTable(JdbcTemplate jdbcTemplate, List<String> successList) {
    jdbcTemplate.batchUpdate(
        "delete from ABC where document_id in (select document_id from ABC where item in (?))",
        new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                ps.setString(1, successList.get(i)); // JDBC parameter indexes are 1-based
            }
            @Override
            public int getBatchSize() { // required by BatchPreparedStatementSetter
                return successList.size();
            }
        });
    // Don't catch and swallow exceptions here; they must propagate so the
    // surrounding TransactionTemplate can roll the transaction back.
}

JDBC Replication driver always returning same data without active cache

I am using the MySQL JDBC Replication Driver com.mysql.jdbc.ReplicationDriver to shift load between Master and Slave.
I am using the following connection URL:
jdbc.de.url=jdbc:mysql:replication://master:3306,slave1:3306,slave2:3306/myDatabase?zeroDateTimeBehavior=convertToNull&characterEncoding=UTF-8&roundRobinLoadBalance=true
As soon as I start my application, I only get the data from the point in time when it started, as if I were working on a locked snapshot of the database. If I do any CRUD operation, the new data is not readable and updates are not shown. MySQL replication itself is working just fine, and I can query the correct data directly from the database.
There is no level-2 cache active, and I am using Hibernate with pooled connections.
If I use the normal JDBC driver com.mysql.jdbc.Driver, everything works just fine. So why do I always get the same result sets, no matter what I change in the database?
Update 1
It seems to be related to my aspect:
@Aspect
public class ReadOnlyConnectionInterceptor implements Ordered {

    private class ReadOnly implements ReturningWork<Object> {
        ProceedingJoinPoint pjp;

        public ReadOnly(ProceedingJoinPoint pjp) {
            this.pjp = pjp;
        }

        @Override
        public Object execute(Connection connection) throws SQLException {
            boolean autoCommit = connection.getAutoCommit();
            boolean readOnly = connection.isReadOnly();
            try {
                connection.setAutoCommit(false);
                connection.setReadOnly(true);
                return pjp.proceed();
            } catch (Throwable e) {
                //if an exception was raised, return it
                return e;
            } finally {
                // restore state
                connection.setReadOnly(readOnly);
                connection.setAutoCommit(autoCommit);
            }
        }
    }

    private int order;
    private EntityManager entityManager;

    public void setOrder(int order) {
        this.order = order;
    }

    @Override
    public int getOrder() {
        return order;
    }

    @PersistenceContext
    public void setEntityManager(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Around("@annotation(readOnlyConnection)")
    public Object proceed(ProceedingJoinPoint pjp, ReadOnlyConnection readOnlyConnection) throws Throwable {
        Session hibernateSession = entityManager.unwrap(Session.class);
        Object result = hibernateSession.doReturningWork(new ReadOnly(pjp));
        if (result == null) {
            return result;
        }
        //If the returned object extends Throwable, throw it
        if (Throwable.class.isAssignableFrom(result.getClass())) {
            throw (Throwable) result;
        }
        return result;
    }
}
I annotate all my read-only requests with @ReadOnlyConnection. Before, all my service-layer methods were annotated with it, even though they might call each other. Now I only annotate the request method, and I have reached the state where I get the database updates on the second call:
1) Doing initial call => getting data as expected
2) Changing data in the database
3) Doing same call again => getting the exact same data from the first call
4) Doing same call again => getting the changed data
The thing with connection.setAutoCommit(false) is that it does not seem to issue a commit when the connection is set back to connection.setAutoCommit(true). So after adding the following line to the aspect, everything worked as expected again:
try {
    connection.setAutoCommit(false);
    connection.setReadOnly(true);
    return pjp.proceed();
} catch (Throwable e) {
    return e;
} finally {
    // restore state
    connection.commit(); // THIS LINE
    connection.setReadOnly(readOnly);
    connection.setAutoCommit(autoCommit);
}
