Using Both Query API And Criteria API in Hibernate Leads to Problems - java

When I update a row with the query API and then retrieve the data with the criteria API in the same transaction, I get the old value, not the updated one. Why is that, and how can I solve the problem? I need to get the updated value.
@Service
@Transactional
public class ExampleServiceImpl implements ExampleService {

    @Autowired
    ExampleRepository exampleRepository;

    @Autowired
    SessionFactory sessionFactory; // assumed to be injected; not shown in the original snippet

    @Transactional
    public void example() {
        ExampleEntity entity = (ExampleEntity) sessionFactory.getCurrentSession()
                .createCriteria(ExampleEntity.class).add(Restrictions.eq("id", 190001L)).uniqueResult();
        exampleRepository.updateState(190001L, State.CLOSED);
        ExampleEntity updatedEntity = (ExampleEntity) sessionFactory.getCurrentSession()
                .createCriteria(ExampleEntity.class).add(Restrictions.eq("id", 190001L)).uniqueResult();
        assertEquals(State.CLOSED, updatedEntity.getState());
    }
}
@Repository
public class ExampleRepositoryImpl implements ExampleRepository {

    @Autowired
    SessionFactory sessionFactory; // assumed to be injected; not shown in the original snippet

    public void updateState(Long id, State state) {
        String updateScript = "update exampleEntity set state = '%s', " +
                "VERSION = VERSION + 1 " +
                "where ID = %s;";
        updateScript = String.format(updateScript, state, id);
        Query sqlQuery = sessionFactory.getCurrentSession().createSQLQuery(updateScript);
        sqlQuery.executeUpdate();
    }
}
Note: If I delete the first line and don't retrieve the entity at the beginning, everything works as I expect.

You are mixing native SQL and Hibernate. When you first retrieve the entity, it gets stored in the session's persistence context (the first-level cache). You then use plain SQL to update the row in the database, but as far as Hibernate is concerned the entity has not been dirtied, because Hibernate is not clever enough to understand how plain SQL relates to the object model. When you retrieve the entity the second time, it simply gives you the original instance it already has cached rather than querying the database.
The solution is to manually evict the entity from the session after the update, as follows:
sessionFactory.getCurrentSession().evict(entity);
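Placed in the example() method from the question, the evict call goes between the update and the second criteria query; the sketch below just reuses the question's own code and is untested:

    exampleRepository.updateState(190001L, State.CLOSED);

    // Tell Hibernate its cached copy is stale before querying again.
    sessionFactory.getCurrentSession().evict(entity);

    ExampleEntity updatedEntity = (ExampleEntity) sessionFactory.getCurrentSession()
            .createCriteria(ExampleEntity.class).add(Restrictions.eq("id", 190001L)).uniqueResult();
    assertEquals(State.CLOSED, updatedEntity.getState());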
Or you could simply update the entity you fetched and persist it (the best solution IMHO: no superfluous DAO method, and the best abstraction away from the database):
ExampleEntity entity = (ExampleEntity) sessionFactory.getCurrentSession()
        .createCriteria(ExampleEntity.class).add(Restrictions.eq("id", 190001L)).uniqueResult();
entity.setState(State.CLOSED);
entity.setVersion(entity.getVersion() + 1);
sessionFactory.getCurrentSession().update(entity);
Basically, whichever option you choose, don't mix plain SQL and Hibernate queries in the same transaction. Once Hibernate has an object loaded, it will return that same entity from its cache until it knows for a fact that it is dirty. It is not clever enough to know that an entity is dirty when plain SQL was used to dirty it. If you have no choice and must use SQL (and this should never be the case in a well-designed Hibernate model), then call evict to tell Hibernate the entity is dirty.

Your transaction is still not committed when you read the result; this is the reason you get the "old" value.

Related

JPA flushing to Database before @PreUpdate is called

I am trying to capture the entity data in the database before the save is executed, for the purpose of creating a shadow copy.
I have implemented the following EntityListener in my Spring application:
public class CmsListener {

    public CmsListener() {
    }

    @PreUpdate
    private void createShadow(CmsModel entity) {
        EntityManager em = BeanUtility.getBean(EntityManager.class);
        CmsModel p = em.find(entity.getClass(), entity.getId());
        System.out.println(entity);
    }
}
The entity parameter does indeed contain the entity object that is about to be saved, and injecting the EntityManager via another tool works fine - but for some reason the changes have already been saved to the database: the CmsModel p returned by em.find(...) contains exactly the same data as entity.
Why is JPA/Hibernate persisting the changes before @PreUpdate is called? How can I prevent that?
I would assume this is because em.find doesn't actually query the database but fetches the object from the cache, so it returns the same object entity refers to (with the changes already applied).
You could check your database log for the query that fetches the data for entity.id to verify this is indeed the case, or you could add a breakpoint in createShadow() and look at the database entry for the entity at the time the method is called, to see for yourself whether the changes have already been applied to the database at that point.
To actually solve your problem and get your shadow copy, you could fetch the object directly from the database via a native query.
Here is an untested example of what this could look like:
public CmsModel fetchCmsModelDirectly() {
    // Untested; assumes the query matches exactly one row, otherwise add a WHERE clause for the id you need.
    Query q = em.createNativeQuery("SELECT cm.id, cm.value_a, cm.value_b FROM CmsModel cm", CmsModel.class);
    try {
        return (CmsModel) q.getSingleResult();
    } catch (NoResultException e) {
        return null;
    }
}
Have you checked whether the entity is really updated in the database? My suspicion is that the change has only been applied to the persistence context (the cache), and when the entity is queried back in the listener, the one from the cache is returned, so the two are identical.
This is the default behavior of most ORMs (JPA in this case) to speed up data lookups. The ORM framework takes care of synchronizing the persistence context with the database, usually when the transaction is committed.
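One way to check this (a sketch, assuming you can get hold of the EntityManagerFactory; the variable names are illustrative) is to read the row through a separate EntityManager, which has its own persistence context and therefore cannot hand back the cached instance:

    // Reads the current database state outside the listener's persistence context.
    EntityManager freshEm = entityManagerFactory.createEntityManager();
    try {
        CmsModel dbCopy = freshEm.find(CmsModel.class, entity.getId());
        System.out.println(dbCopy); // reflects what a new connection can see in the database
    } finally {
        freshEm.close();
    }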

Bulk inserting existing data: preventing JPA from doing a select before every insert

I'm working on a Spring Boot application that uses JPA (Hibernate) for the persistence layer.
I'm currently implementing a migration functionality. We basically dump all the existing entities of the system into an XML file. This export includes ids of the entities as well.
The problem I'm having is located on the other side, reimporting the existing data. In this step the XML gets transformed to a Java object again and persisted to the database.
When trying to save the entity, I'm using the merge method of the EntityManager class, which works: everything is saved successfully.
However, when I turn on Hibernate's query logging, I see that before every insert query a select query is executed to check whether an entity with that id already exists. This happens because the entity already has an id that I provided.
I understand this behavior and it actually makes sense. I'm sure, however, that the ids will not exist yet, so the select is pointless in my case. I'm saving thousands of records, which means thousands of select queries on large tables, and that slows the import down drastically.
My question: Is there a way to turn this "checking if an entity exists before inserting" off?
Additional information:
When I use entityManager.persist() instead of merge, I get this exception:
org.hibernate.PersistentObjectException: detached entity passed to persist
To be able to use a supplied/provided id I use this id generator:
@Id
@GeneratedValue(generator = "use-id-or-generate")
@GenericGenerator(name = "use-id-or-generate", strategy = "be.stackoverflowexample.core.domain.UseIdOrGenerate")
@JsonIgnore
private String id;
The generator itself:
public class UseIdOrGenerate extends UUIDGenerator {

    private String entityName;

    @Override
    public void configure(Type type, Properties params, ServiceRegistry serviceRegistry) throws MappingException {
        entityName = params.getProperty(ENTITY_NAME);
        super.configure(type, params, serviceRegistry);
    }

    @Override
    public Serializable generate(SessionImplementor session, Object object) {
        // Use the id already assigned to the object if present, otherwise generate a UUID.
        Serializable id = session
                .getEntityPersister(entityName, object)
                .getIdentifier(object, session);
        if (id == null) {
            return super.generate(session, object);
        } else {
            return id;
        }
    }
}
If you are certain that you will never be updating any existing entry in the database and that the entities should always be freshly inserted, then I would go for the persist operation instead of merge.
Per the update to the question:
In that case (the id field being set up as auto-generated), the only way would be to remove the generation annotations from the id field and leave the configuration as:
@Id
@JsonIgnore
private String id;
So basically set the id up to always be assigned manually. The persistence provider will then consider your entity transient even when the id is present, meaning persist will work and no extra selects will be generated.
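Under those assumptions (manually assigned ids, rows that are always new), a bulk import loop could look roughly like the sketch below; the entityManager variable, the entities list and the batch size are made up for illustration, and flushing and clearing in batches simply keeps the persistence context from growing while inserting thousands of rows:

    int batchSize = 50;
    for (int i = 0; i < entities.size(); i++) {
        entityManager.persist(entities.get(i)); // no SELECT: the entity is treated as new
        if (i > 0 && i % batchSize == 0) {
            entityManager.flush();  // push the pending inserts to the database
            entityManager.clear();  // detach them so the context stays small
        }
    }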
I'm not sure whether you fill in the ID or not. In case you do fill it in on the application side, check the answer here. I copied it below:
Here is the code of Spring SimpleJpaRepository you are using by using Spring Data repository:
@Transactional
public <S extends T> S save(S entity) {
    if (entityInformation.isNew(entity)) {
        em.persist(entity);
        return entity;
    } else {
        return em.merge(entity);
    }
}
It does the following:
By default Spring Data JPA inspects the identifier property of the given entity. If the identifier property is null, then the entity is assumed to be new, otherwise it is assumed to be not new.
Link to Spring Data documentation
And so if one of your entities has a non-null ID field, Spring will make Hibernate perform a merge (and therefore a SELECT beforehand).
You can override this behavior in the two ways listed in the same documentation. An easy way is to make your entity implement Persistable (instead of Serializable), which will require you to implement the isNew() method.
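For illustration, a minimal sketch of the Persistable approach might look like this; the entity name is hypothetical, and always returning true from isNew() is an assumption that only holds because the imported ids are known not to exist yet:

    @Entity
    public class ImportedEntity implements Persistable<String> {

        @Id
        @JsonIgnore
        private String id;

        @Override
        public String getId() {
            return id;
        }

        // Reporting the entity as new makes SimpleJpaRepository.save() call persist()
        // directly, so no SELECT is issued before the INSERT.
        @Override
        public boolean isNew() {
            return true;
        }
    }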

Does JPA's bulk update work when also changing the objects?

Normally, if I change an object mapped with @Entity, it will be persisted at the end of a transactional method, even if I don't call any save method.
I'm doing a bulk update for performance reasons using JPA's CriteriaUpdate (executed through the EntityManager), but I need to trigger some events in the setters of the objects, so I call the setters without calling any save method.
What I want to know is whether the bulk update is still useful if I also change the objects, or whether each object will be persisted individually even though the bulk update is executed.
PgtoDAO:
public void bulkUpdateStatus(List<Long> pgtos, Long newStatusId) {
    CriteriaBuilder cb = this.manager.getCriteriaBuilder();
    CriteriaUpdate<Pgto> update = cb.createCriteriaUpdate(Pgto.class);
    Root<Pgto> e = update.from(Pgto.class);
    update.set("status", newStatusId);
    update.where(e.get("id").in(pgtos));
    this.manager.createQuery(update).executeUpdate();
}
PgtoService:
@Transactional(readOnly = false)
public int changePgtosStatus(List<Pgto> pgtos, StatusEnum newStatus) {
    ...
    List<Long> pgtoIds = new ArrayList<>();
    for (Pgto pgto : pgtos) {
        // Hibernate will persist each object here, individually?
        pgto.setStatus(newStatus.id());
        pgtoIds.add(pgto.getId());
    }
    pgtoDao.bulkUpdateStatus(pgtoIds, newStatus.id());
    // I tried setting a different status here on the objects, but it was not persisted
}
Perhaps I should end the connection after the bulk update?
Criteria queries and changed entities are treated separately. The criteria query is just executed, while managed entities (those loaded via the entity manager) that have been changed are synchronized with the database on transaction commit.
If you want to prevent this, you will have to detach those entities from the entity manager; their changes will then no longer be propagated to the database.
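Applied to the service method from the question, that could look roughly like the sketch below; it assumes the EntityManager is injected into the service, which the question does not show:

    for (Pgto pgto : pgtos) {
        pgto.setStatus(newStatus.id());   // still triggers the setter-side events
        pgtoIds.add(pgto.getId());
        entityManager.detach(pgto);       // stops dirty checking from issuing a per-row UPDATE on commit
    }
    pgtoDao.bulkUpdateStatus(pgtoIds, newStatus.id());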

Update and return data in Spring Data JPA

For concurrency purposes, I have a requirement to update the state column of a row to USED while selecting it from the AVAILABLE pool.
I was thinking of using @Modifying and @Query (with a query that updates the state based on the where clause).
That is all fine, but since this is an update query it doesn't return the updated data.
So, is it possible in Spring Data to update and return a row, so that whoever reads the row first can use it exclusively?
My update query is something like UPDATE MyObject o SET o.state = 'USED' WHERE o.id = (select min(id) from MyObject a where a.state='AVAILABLE'), so basically the lowest available id will be marked used. There is the option of locking, but that requires exception handling, and if an exception occurs another thread has to try again, which is not acceptable in my scenario.
You need to explicitly declare a transaction to prevent other transactions from reading the values involved until it is committed. The level with the best performance that allows this is READ_COMMITTED, which does not allow dirty reads from other transactions (and suits your case). The code would look like this:
Repo:
@Repository
public interface MyObjectRepository extends JpaRepository<MyObject, Long> {

    @Modifying
    @Query("UPDATE MyObject o SET o.state = 'USED' WHERE o.id = :id")
    void lockObject(@Param("id") long id);

    @Query("select min(a.id) from MyObject a where a.state = 'AVAILABLE'")
    Long minId();
}
Service:
@Transactional(isolation = Isolation.READ_COMMITTED)
public MyObject findFirstAvailable() {
    Long minId = repo.minId();
    if (minId != null) {
        repo.lockObject(minId);
        return repo.findOne(minId);
    }
    return null;
}
I suggest using multiple transactions plus optimistic locking.
Make sure your entity has an attribute annotated with @Version.
In the first transaction, load the entity, mark it as USED and close the transaction.
This will flush and commit the change and make sure nobody else touched the entity in the meantime.
In the second transaction you can now do whatever you want with the entity.
For these small transactions I find it clumsy to move them into separate methods just so I can use @Transactional; I therefore use the TransactionTemplate instead.
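A sketch of that approach is below; the names (transactionTemplate, repo, setState) are illustrative, and it assumes MyObject carries a @Version attribute so that a concurrent modification makes the commit fail with an optimistic-lock exception:

    public MyObject acquireFirstAvailable() {
        // First transaction: find and claim the lowest AVAILABLE row.
        MyObject claimed = transactionTemplate.execute(status -> {
            Long minId = repo.minId();
            if (minId == null) {
                return null;
            }
            MyObject candidate = repo.findOne(minId);
            candidate.setState("USED"); // flushed and version-checked when this transaction commits
            return candidate;
        });
        // Any follow-up work on the claimed entity happens in a second transaction.
        return claimed;
    }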

JPA, removing an entity which was found by a different manager

Assume we have a simple entity bean, like the one below:
@Entity
public class Schemes implements Serializable {
    ...
    @Id private long id;
    ...
}
I find a record using the find method and it works perfectly. The problem is that I cannot manipulate (remove) it with another EntityManager later. For example, I find it in one method, and later I want to remove it. If I look it up again with the same manager I can remove it, but if the object was found by another manager I cannot. What is the problem?
@ManagedBean @SessionScoped
class JSFBean {

    private Schemes s;

    public JSFBean() {
        ....
        EntityManager em; //.....
        s = em.find(Schemes.class, 0x10L); // okay!
        ....
    }

    public void remove() { // later
        ....
        EntityManager em; //.....
        em.getTransaction().begin();
        em.remove(s); // Error! some weird error, it throws IllegalArgumentException!
        em.getTransaction().commit();
        ....
    }
}
many thanks.
You are probably getting a java.lang.IllegalArgumentException: Removing a detached instance.
The two EMs do not share a persistence context and for the second EM, your object is considered detached. Trying to remove a detached object will result in an IllegalArgumentException.
You can refetch the entity before the removal:
Schemes originalS = em.find(Schemes.class, s.getId());
em.remove(originalS);
EDIT You can also delete the entity without fetching it first by using parametrized bulk queries:
DELETE FROM Schemes s WHERE s.id = :id
Be aware that bulk queries can cause problems of their own. First, they bypass the persistence context, meaning that whatever you do with a bulk query will not be reflected in the objects held in the persistence context. This is less of an issue for delete queries than for update queries. Secondly, if you have defined any cascading rules on your entities, they will be ignored by a bulk query.
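Executing that bulk delete through the EntityManager could look like this (a sketch; it assumes Schemes exposes a getId() accessor, which the snippet above does not show):

    em.getTransaction().begin();
    em.createQuery("DELETE FROM Schemes sc WHERE sc.id = :id")
            .setParameter("id", s.getId())
            .executeUpdate();
    em.getTransaction().commit();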

Categories

Resources