Spring - Two hibernate update queries are executed instead of one - java

I am working with Spring and Hibernate. I have a requirement where I need to update a particular field by incrementing it. Since multiple threads could execute this at the same time, the update compares the field against the old value; if nothing was updated, that means it was incremented by some other thread and we trigger a retry.
CompanyService
public Company getAndIncrementRequestId(final int companyId, int retry) throws Exception {
Optional<Company> companyOptional = companyRepository.findById(companyId);
if (!companyOptional.isPresent()) {
throw new EntityNotFoundException("Company not found for given id " + companyId);
}
Company company = companyOptional.get();
int oldRequestId = company.getRequestId();
int requestId;
if (oldRequestId == Integer.MAX_VALUE) {
requestId = 1;
} else {
requestId = oldRequestId + 1;
}
company.setRequestId(requestId); //--------------------------> PROBLEM
int result = companyRepository.updateRequestId(companyId, requestId, oldRequestId);
if (result == 0) {
if (retry < 0) {
throw new Exception("Unable to get requestId");
}
LOG.warn("Retrying since there was some update on requestId by some other thread");
try {
TimeUnit.MILLISECONDS.sleep(100);
} catch (InterruptedException e) {
LOG.warn("Unexpected InterruptException occurred while trying to get requestId");
}
return getAndIncrementRequestId(companyId, retry - 1);
}
return company;
}
CompanyRepository
@Transactional
public interface CompanyRepository extends CrudRepository<Company, Integer> {
Optional<Company> findById(Integer id);
@Modifying(clearAutomatically = true)
@Query("update Company c set c.requestId = :requestId WHERE c.id = :companyId AND c.requestId = :oldRequestId")
int updateRequestId(@Param("companyId") Integer companyId, @Param("requestId") Integer requestId, @Param("oldRequestId") Integer oldRequestId);
}
But the above code in the service triggers two Hibernate update queries: one that sets the requestId to the latest value and then the actual update from updateRequestId(). I could observe both queries in the log after setting show-sql to true.
But if the line
company.setRequestId(requestId);
is moved down, after the call to companyRepository.updateRequestId(), it works fine.
Working CompanyService
public Company getAndIncrementRequestId(final int companyId, int retry) throws Exception {
Optional<Company> companyOptional = companyRepository.findById(companyId);
if (!companyOptional.isPresent()) {
throw new EntityNotFoundException("Company not found for given id " + companyId);
}
Company company = companyOptional.get();
int oldRequestId = company.getRequestId();
int requestId;
if (oldRequestId == Integer.MAX_VALUE) {
requestId = 1;
} else {
requestId = oldRequestId + 1;
}
int result = companyRepository.updateRequestId(companyId, requestId, oldRequestId);
if (result == 0) {
if (retry < 0) {
throw new Exception("Unable to get requestId");
}
LOG.warn("Retrying since there was some update on requestId by some other thread");
try {
TimeUnit.MILLISECONDS.sleep(100);
} catch (InterruptedException e) {
LOG.warn("Unexpected InterruptException occurred while trying to get requestId");
}
return getAndIncrementRequestId(companyId, retry - 1);
}
company.setRequestId(requestId); //--------------------------> PROBLEM DOES NOT EXIST
return company;
}
So my question is: why are there two queries when I have not even passed the Company entity anywhere?

It is because when you call companyRepository.findById(companyId), the returned Company is in the managed (persistent) state.
So in case 1, where you set the request id before invoking companyRepository.updateRequestId(companyId, requestId, oldRequestId), the modifying query runs against the same persistence context; before executing it, Hibernate flushes the pending changes of the managed entity, so the dirty-checking UPDATE for the setter is fired in addition to the updateRequestId query.
In the second case you call the setter after companyRepository.updateRequestId(companyId, requestId, oldRequestId) has completed, so there is no later transactional work that would flush the change, and the update on the managed object never gets fired.
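If you want to keep the setter before the modifying query without triggering that extra dirty-checking UPDATE, one option is to detach the entity first. This is only a sketch against the code above, assuming an EntityManager (javax.persistence) can be injected into the service; it is not from the original post:
// Sketch only: relies on an injected EntityManager, which is an assumption not shown in the question.
@PersistenceContext
private EntityManager entityManager;

public Company getAndIncrementRequestId(final int companyId, int retry) throws Exception {
    Company company = companyRepository.findById(companyId)
            .orElseThrow(() -> new EntityNotFoundException("Company not found for given id " + companyId));
    int oldRequestId = company.getRequestId();
    int requestId = (oldRequestId == Integer.MAX_VALUE) ? 1 : oldRequestId + 1;

    // Detach the entity so Hibernate's dirty checking no longer tracks it;
    // the setter below then no longer produces a second UPDATE when the
    // modifying query forces a flush.
    entityManager.detach(company);
    company.setRequestId(requestId);

    int result = companyRepository.updateRequestId(companyId, requestId, oldRequestId);
    // ... retry handling as in the original method ...
    return company;
}
Otherwise, the simplest fix is exactly what you already found: call the setter only after updateRequestId() has succeeded.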

Related

#Transactional method insert value on exception and multithread wildfly CDI

I have a method in a CDI bean which is transactional; on error it creates an entry in the database with the exception message. This method can be called by a REST endpoint and in a multithreaded way.
We have a SQL constraint to avoid duplicates in the database.
@Transactional
public RegistrationRuleStatus performCheck(RegistrationRule rule, User user) {
try {
//check if rule is dependent of other rules and if all proved, perform check
List<RegistrationRule> rules = rule.getRuleParentDependencies();
boolean parentDependenciesAreProved = true;
if (!CollectionUtils.isEmpty(rules)) {
parentDependenciesAreProved = ruleDao.areParentDependenciesProved(rule,user.getId());
}
if (parentDependenciesAreProved) {
Object service = CDI.current().select(Object.class, new NamedAnnotation(rule.getProvider().name())).get();
Method method = service.getClass().getMethod(rule.getProviderType().getMethod(), Long.class, RegistrationRule.class);
return (RegistrationRuleStatus) method.invoke(service, user.getId(), rule);
} else {
RegistrationRuleStatus status = statusDao.getStatusByUserAndRule(user, rule);
if (status == null) {
status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.START, new Date());
statusDao.create(status);
}
return status;
}
} catch (Exception e) {
LOGGER.error("could not perform check {} for provider {}", rule.getProviderType().name(), rule.getProvider().name(), e.getCause()!=null?e.getCause():e);
return statusDao.createErrorStatus(user,rule,e.getCause()!=null?e.getCause().getMessage():e.getMessage());
}
}
create Error method:
@Transactional
public RegistrationRuleStatus createErrorStatus(User user, RegistrationRule rule, String message) {
RegistrationRuleStatus status = getStatusByUserAndRule(user, rule);
if (status == null) {
status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.ERROR, new Date());
status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
status.setErrorMessage(message);
create(status);
}else {
status.setStatus(RegistrationActionStatus.ERROR);
status.setStatusDate(new Date());
status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
status.setErrorMessage(message);
update(status);
}
return status;
}
The problem is that the method is called twice at the same time and the error recorded is a DuplicateException, but we don't want that. We check at the beginning whether the object already exists, but I think both calls run at exactly the same time.
Java 8 / WildFly / CDI / JPA / EclipseLink
Any idea?
I'd suggest you consider the following approaches:
1) Implement retry logic. Catch the exception and analyze it. If it indicates an unexpected duplicate (as you described), don't treat it as an error and just repeat the method call. On the retry, your code will notice that a record already exists and will not create a duplicate.
2) Use the SERIALIZABLE isolation level. Then, within a single transaction, you will "see" consistent behaviour: if a select has not found a particular record, then until the end of that transaction no other transaction can insert such a record, so there will be no duplicate-related exception. The price is that the table is effectively locked for each such transaction, which can significantly degrade application performance.
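A minimal sketch of option 1, reusing the types from the question; the wrapper method, the attempt count and the duplicate detection below are assumptions, and it presumes the constraint violation actually propagates to the caller (for example because createErrorStatus itself failed):
// Sketch of retry logic: retry the whole call so the second attempt runs in a
// fresh transaction and simply finds the row the competing thread inserted.
public RegistrationRuleStatus performCheckWithRetry(RegistrationRule rule, User user) {
    final int maxAttempts = 3;
    for (int attempt = 1; ; attempt++) {
        try {
            return performCheck(rule, user);
        } catch (RuntimeException e) {
            if (attempt < maxAttempts && isDuplicateViolation(e)) {
                continue; // benign race: another thread created the record first
            }
            throw e;
        }
    }
}

// Walk the cause chain looking for the underlying unique-constraint violation.
private boolean isDuplicateViolation(Throwable t) {
    for (Throwable c = t; c != null; c = c.getCause()) {
        if (c instanceof java.sql.SQLIntegrityConstraintViolationException) {
            return true;
        }
    }
    return false;
}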

Synchronized block throwing DuplicateKeyException

Below is my class, which runs under threaded Spring executors. Based on the type of source/service (TAXP, TAXS, TAXT), the corresponding method is called.
The logic is: if taxInfo.getGroupingId() is already present in the primary tax table, do not insert; otherwise insert into the primary table.
All secondary and tertiary table records are always inserted. TAXP, TAXS and TAXT are the topics and they receive data at any time; the data might arrive milliseconds apart or at exactly the same time, which is why the blocks are synchronized.
The three methods are called from three different thread executors.
executor1.insertPrimaryTaxInfo(taxInfo);
executor2.insertSecTaxInfo(taxInfo);
executor3.insertTerTaxInfo(taxInfo);
@Service
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public class TaxServiceImpl implements TaxService {
private static final Logger LOG = LogManager.getLogger(TaxServiceImpl.class);
// TAXP
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = TaxServiceException.class)
public void insertPrimaryTaxInfo(TaxInfo taxInfo) throws TaxServiceException {
String taxId = null;
try {
synchronized (this) {
taxId = taxMapper.checkExists(taxInfo.getGroupingId());
if (taxId == null) {
taxMapper.insertTaxInfo(taxInfo); // primary tax table
}
}
LOG.info("tax id -- " + taxId);
} catch (Exception ex) {
LOG.error("Error inserting txId for " + taxInfo.getGroupingId()
+ ex);
throw new TaxServiceException(ex);
}
}
// TAXS
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = TaxServiceException.class)
public void insertSecTaxInfo(TaxInfo taxInfo) throws TaxServiceException {
String taxId = null;
try {
synchronized (this) {
taxId = taxMapper.checkExists(taxInfo.getGroupingId());
if (taxId == null) {
taxMapper.insertTaxInfo(taxInfo); // primary tax table
}
}
taxMapper.insertIntoSecTable(taxInfo); // secondary tax table
LOG.info("tax id -- " + taxId);
} catch (Exception ex) {
LOG.error("Error inserting txId for " + taxInfo.getGroupingId()
+ ex);
throw new TaxServiceException(ex);
}
}
// TAXT
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = TaxServiceException.class)
public void insertTerTaxInfo(TaxInfo taxInfo) throws TaxServiceException {
String taxId = null;
try {
synchronized (this) {
taxId = taxMapper.checkExists(taxInfo.getGroupingId());
if (taxId == null) {
taxMapper.insertTaxInfo(taxInfo); // primary tax table
}
}
taxMapper.insertIntoSecTable(taxInfo); // secondary tax table
taxMapper.insertIntoTerTable(taxInfo); // Tertiary tax table
LOG.info("tax id -- " + taxId);
} catch (Exception ex) {
LOG.error("Error inserting txId for " + taxInfo.getGroupingId()
+ ex);
throw new TaxServiceException(ex);
}
}
}
The issue is that when TAXP, TAXS and TAXT receive data at the same time, the three methods above are called simultaneously. With a millisecond difference, one thread inserts into the primary table while another thread, trying to do the same, finds a record already existing in the table and throws a duplicate key exception.
I'm getting the exception below:
"com.data.exception.TaxServiceException: org.springframework.dao.DuplicateKeyException:
### Error updating database. Cause: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (TAXDB2.TAX_PK) violated
The reason for synchronizing the block is to overcome this exception. What is wrong with the above code?
It looks like you may be attempting to generate non-GUID-based primary keys in your application, instead of letting the database generate them, and running into conflicts.
It's a losing battle to attempt to synchronize database access in your application. You should let the database manage concurrency through its existing mechanisms. For more information, refer to ACID. Additionally, it is very instructive to look up the current isolation level on your database implementation and what it does in comparison to the others, for example the SQL Server docs on understanding isolation levels.
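One way to follow that advice, sketched against the mapper calls from the question (this is not the poster's code, and it assumes the DuplicateKeyException from the stack trace is what surfaces), is to drop the JVM-level synchronized block and treat a duplicate-key failure on the primary insert as "another thread already created the row":
// Sketch only: let the unique constraint arbitrate the race instead of synchronized.
private void insertPrimaryIfAbsent(TaxInfo taxInfo) {
    if (taxMapper.checkExists(taxInfo.getGroupingId()) == null) {
        try {
            taxMapper.insertTaxInfo(taxInfo); // primary tax table
        } catch (org.springframework.dao.DuplicateKeyException e) {
            // A concurrent transaction inserted the same grouping id first; that is
            // exactly the "do not insert if already present" outcome we want.
            LOG.info("Primary tax row already inserted for " + taxInfo.getGroupingId());
        }
    }
}
Another option along the same lines is to make the existence check and the insert a single atomic statement in SQL (for example an Oracle MERGE), so there is no window between the check and the insert at all.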

Java - How to delete an entity from Google Cloud Datastore

Architecture: I have a web application from which I'm interacting with the Datastore, and a client (Raspberry Pi) which calls methods of the web application using Google Cloud Endpoints.
I have to add that I'm not very familiar with web applications, and I assume that something's wrong with the setConsumed() method, because I can see the call to /create in the App Engine dashboard but there's no entry for /setConsumed.
I'm able to add entities to the Datastore using objectify:
//client method
private static void sendSensorData(long index, String serialNumber) throws IOException {
SensorData data = new SensorData();
data.setId(index+1);
data.setSerialNumber(serialNumber);
sensor.create(data).execute();
}
//api method in the web application
#ApiMethod(name = "create", httpMethod = "post")
public SensorData create(SensorData data, User user) {
// check if user is authenticated and authorized
if (user == null) {
log.warning("User is not authenticated");
System.out.println("Trying to authenticate user...");
createUser(user);
// throw new RuntimeException("Authentication required!");
} else if (!Constants.EMAIL_ADDRESS.equals(user.getEmail())) {
log.warning("User is not authorised, email: " + user.getEmail());
throw new RuntimeException("Not authorised!");
}
data.save();
return data;
}
//method in entity class SensorData
public Key<SensorData> save() {
return ofy().save().entity(this).now();
}
However, I'm not able to delete an entity from the datastore using the following code.
EDIT: There are many logs of the create request in Stackdriver Logging, but none for setConsumed(). So it seems like the calls don't even reach the API, although both methods are in the same class.
EDIT 2: The entity gets removed when I invoke the method from PowerShell, so the problem is most likely on the client side.
//client method
private static void removeSensorData(long index) throws IOException {
sensor.setConsumed(index+1);
}
//api method in the web application
#ApiMethod(name = "setConsumed", httpMethod = "put")
public void setConsumed(#Named("id") Long id, User user) {
// check if user is authenticated and authorized
if (user == null) {
log.warning("User is not authenticated");
System.out.println("Trying to authenticate user...");
createUser(user);
// throw new RuntimeException("Authentication required!");
} else if (!Constants.EMAIL_ADDRESS.equals(user.getEmail())) {
log.warning("User is not authorised, email: " + user.getEmail());
throw new RuntimeException("Not authorised!");
}
Key serialKey = KeyFactory.createKey("SensorData", id);
datastore.delete(serialKey);
}
This is what I follow to delete an entity from datastore.
public boolean deleteEntity(String propertyValue) {
String entityName = "YOUR_ENTITY_NAME";
String gql = "SELECT * FROM "+entityName +" WHERE property= "+propertyValue+"";
Query<Entity> query = Query.newGqlQueryBuilder(Query.ResultType.ENTITY, gql)
.setAllowLiteral(true).build();
try{
QueryResults<Entity> results = ds.run(query);
if (results.hasNext()) {
Entity rs = results.next();
ds.delete(rs.getKey());
return true;
}
return false;
}catch(Exception e){
logger.error(e.getMessage());
return false;
}
}
If you don't want to use literals, you can also use binding as follows:
String gql = "SELECT * FROM "+entityName+" WHERE property1= #prop1 AND property2= #prop2";
Query<Entity> query = Query.newGqlQueryBuilder(Query.ResultType.ENTITY, gql)
.setBinding("prop1", propertyValue1)
.setBinding("prop2", propertyValue2)
.build();
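For reference, the ds handle used in deleteEntity() is assumed to be a google-cloud-datastore client; something along these lines (your project may obtain it differently):
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;

// Builds a client for the default project and credentials of the environment.
Datastore ds = DatastoreOptions.getDefaultInstance().getService();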
Hope this helps.
I was able to solve it by myself finally!
The problem was just related to the data type of the index used for removeSensorData(long index): it came out of a for loop and was therefore an Integer instead of a long.
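In other words (a hypothetical reconstruction, since the loop itself isn't shown in the question), declaring the loop variable as a long makes the client pass the type that removeSensorData(long index) expects:
// Hypothetical client-side loop; 'entryCount' is a placeholder for the real bound.
for (long index = 0; index < entryCount; index++) {
    removeSensorData(index); // index is a long, so setConsumed receives the id as expected
}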

Error: Hibernate could not initialize proxy - no Session

ReportService Code
private void generatePaySummary() {
try {
Map params = new HashMap();
params = getOrganizationInfo(params);
params.put("rptsubtitle", "Payroll Date: "+date_formatter.format(tbpaydate.getDate()));
int i = cboDept.getSelectedIndex();
int deptno = 0;
if (i != -1) deptno = (Integer)deptnos.get(i);
ReportService srv = new ReportService();
List empids = srv.getEmployeesInPayroll(deptno, tbpaydate.getDate());
if (!empids.isEmpty()) {
PayslipService.setEmployees(empids);
PayslipService.setPayDate(tbpaydate.getDate());
RepGenService repsrv = new RepGenService();
JRBeanCollectionDataSource jbsrc = new JRBeanCollectionDataSource(PaySummaryFactory.getPaySummary());
repsrv.generateReport(false, "/orgpayroll/reports/jasper/payrollsummary.jasper", true, params, jbsrc);
}
else
SysUtils.messageBox("No employees in payroll on "+date_formatter.format(tbpaydate.getDate())+"!");
} catch (Exception e) {
JOptionPane.showMessageDialog(null, "Error" + e.getMessage());
}
}
I am trying to execute a function which opens a Jasper report template.
The function works if it only has to process one employee from the database, but if I process more with the same date, it says "Hibernate could not initialize proxy - no Session".
This means that you have a collection with a lazy fetch type.
You can solve it by changing it to EAGER mode.
So go to the entity class that ReportService loads the employees from and change the employee collection's fetch type to EAGER, i.e. add (fetch = FetchType.EAGER) to the mapping.
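A sketch of what that mapping change might look like, on a hypothetical owning entity; the actual entity and association names are not shown in the question:
import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;

// Hypothetical entity; Department/Employee are placeholder names.
@Entity
public class Department {

    @Id
    private Long id;

    // EAGER loads the collection together with the owning entity, so it is
    // already initialized by the time the Hibernate session is closed.
    @OneToMany(mappedBy = "department", fetch = FetchType.EAGER)
    private List<Employee> employees = new ArrayList<>();
}
If you don't want to load the collection eagerly everywhere, an alternative is to keep it lazy and fetch it explicitly inside the session that builds the report data, e.g. with a JOIN FETCH query.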

Objectify BATCH delete has no effect

I have a DAO below, with a transactional delete per entity and a transactional batch delete.
Deleting one entity at a time works just fine.
Batch delete has NO effect whatsoever:
the code below is simple and straightforward IMO, but the call to deleteMyObjects(Long[] ids) - which calls Objectify's delete(Iterable keysOrEntities) - has no effect!
public class MyObjectDao {
private ObjectifyOpts transactional = new ObjectifyOpts().setBeginTransaction(true);
private ObjectifyOpts nonTransactional = new ObjectifyOpts().setBeginTransaction(false);
private String namespace = null;
public MyObjectDao(String namespace) {
Preconditions.checkNotNull(namespace, "Namespace cannot be NULL");
this.namespace = namespace;
}
/**
* set namespace and get a non-transactional instance of Objectify
*
* @return
*/
protected Objectify nontxn() {
NamespaceManager.set(namespace);
return ObjectifyService.factory().begin(nonTransactional);
}
/**
* set namespace and get a transactional instance of Objectify
*
* @return
*/
protected Objectify txn() {
NamespaceManager.set(namespace);
Objectify txn = ObjectifyService.factory().begin(transactional);
log.log(Level.FINE, "transaction <" + txn.getTxn().getId() + "> started");
return txn;
}
protected void commit(Objectify txn) {
if (txn != null && txn.getTxn().isActive()) {
txn.getTxn().commit();
log.log(Level.FINE, "transaction <" + txn.getTxn().getId() + "> committed");
} else {
log.log(Level.WARNING, "commit NULL transaction");
}
}
protected void rollbackIfNeeded(Objectify txn) {
if (txn != null && txn.getTxn() != null && txn.getTxn().isActive()) {
log.log(Level.WARNING, "transaction <" + txn.getTxn().getId() + "> rolling back");
txn.getTxn().rollback();
} else if (txn == null || txn.getTxn() == null) {
log.log(Level.WARNING, "finalizing NULL transaction, not rolling back");
} else if (!txn.getTxn().isActive()) {
log.log(Level.FINEST, "transaction <" + txn.getTxn().getId() + "> NOT rolling back");
}
}
public void deleteMyObject(Long id) {
Objectify txn = null;
try {
txn = txn();
txn.delete(new Key<MyObject>(MyObject.class, id));
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
public void deleteMyObjects(Long[] ids) {
Objectify txn = null;
List<Key<? extends MyObject>> keys = new ArrayList<Key<? extends MyObject>>();
for (long id : ids) {
keys.add(new Key<MyObject>(MyObject.class, id));
}
try {
txn = txn();
txn.delete(keys);
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
}
When I call deleteMyObjects(Long[]), I see nothing suspicious in the logs below. The transaction commits just fine without errors, but the data is not affected. Looping through the same list of ids and deleting the objects one at a time works just fine.
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao txn
FINE: transaction <6> started
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao commit
FINE: transaction <6> committed
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao rollbackIfNeeded
FINEST: transaction <6> NOT rolling back
But the data is unchanged and present in the datastore !?!?!
Any help welcome.
UPDATE
Stepping into the Objectify code, I wonder whether this has something to do with the namespace? Right here in the Objectify code:
@Override
public Result<Void> delete(Iterable<?> keysOrEntities)
{
// We have to be careful here, objs could contain raw Keys or Keys or entity objects or both!
List<com.google.appengine.api.datastore.Key> keys = new ArrayList<com.google.appengine.api.datastore.Key>();
for (Object obj: keysOrEntities)
keys.add(this.factory.getRawKey(obj));
return new ResultAdapter<Void>(this.ads.delete(this.txn, keys));
}
When I inspect this.factory.getRawKey(obj) in the debugger, I notice that the namespace of the key is empty, even though NamespaceManager.get() returns the correct namespace!?
The namespace was not set when creating the keys.
The namespace must be set BEFORE creating a key!
So rewriting it like this fixed my problem:
public void deleteMyObjects(Long[] ids) {
Objectify txn = null;
try {
txn = txn();
List<Key<MyObject>> keys = new ArrayList<Key<MyObject>>();
for (long id : ids) {
keys.add(new Key<MyObject>(MyObject.class, id));
}
txn.delete(keys);
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
Then I call it like this:
new MyObjectDao("somenamespace").deleteMyObjects(new Long[] { 1L, 34L, 116L });
