The idea is basically to extend some repositories with custom functionality. So I have this setup, which DOES work!
@MappedSuperclass
abstract class MyBaseEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    var id: Int = 0

    var eid: Int = 0
}

interface MyRepository<T : MyBaseEntity> {
    @Transactional
    fun saveInsert(entity: T): Optional<T>
}
open class MyRepositoryImpl<T : MyBaseEntity> : MyRepository<T> {

    @Autowired
    private lateinit var entityManager: EntityManager

    @Transactional
    override fun saveInsert(entity: T): Optional<T> {
        // lock the table
        entityManager.createNativeQuery("LOCK TABLE myTable WRITE").executeUpdate()
        // get the current max EID
        val result = entityManager.createNativeQuery("SELECT MAX(eid) FROM myTable LIMIT 1").singleResult as? Int ?: 0
        // set the entity's EID to the incremented result
        entity.eid = result + 1
        // test whether the table is locked by manually sending 2-3 POST requests to the REST endpoint
        Thread.sleep(5000)
        // save
        entityManager.persist(entity)
        // unlock
        entityManager.createNativeQuery("UNLOCK TABLES").executeUpdate()
        return Optional.of(entity)
    }
}
How would I do this in a more Spring-like way?
At first, I thought @Transactional would do the LOCK and UNLOCK part. I tried a couple of additional parameters and @Lock. I went through the docs and some tutorials, but the abstract technical English is often not easy to understand. In the end, I did not get a working solution, so I manually added the table locking, which works fine. I would still prefer a more Spring-like way to do it.
1) There might be a problem with your current design as well. persist does not instantly INSERT a row into the database; that happens on transaction commit, when the method returns.
So you unlock the table before the actual insert:
// save
entityManager.persist(entity) // -> There is no INSERT at this point.
// unlock
entityManager.createNativeQuery("UNLOCK TABLES").executeUpdate()
2) Going back to how to do it with JPA only, without native queries (it still requires a bit of a workaround, as this is not supported by default):
// "lock" the table by loading one existing entity with a pessimistic write lock
Entity lockedEntity = entityManager.find(Entity.class, 1, LockModeType.PESSIMISTIC_WRITE);
// get the current max EID (try not to use a native query here)
// set the entity's EID to the incremented result
// save
entityManager.persist(entity);
entityManager.flush(); // force the actual INSERT
// unlock by releasing the lock on the previously loaded entity
entityManager.lock(lockedEntity, LockModeType.NONE);
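For a more Spring-like version of the whole thing, a common pattern is to pessimistically lock a single well-known row instead of the whole table and let @Transactional release the lock at commit. A minimal sketch, assuming a hypothetical EidSequence helper entity and a MyEntity with an eid property (neither is from the original post):

@Service
public class MyEntityService {

    @PersistenceContext
    private EntityManager em;

    @Transactional // the pessimistic lock is released automatically at commit
    public MyEntity saveInsert(MyEntity entity) {
        // Assumed helper row that stores the last EID handed out; PESSIMISTIC_WRITE
        // blocks concurrent callers on this row until the transaction commits.
        EidSequence seq = em.find(EidSequence.class, 1L, LockModeType.PESSIMISTIC_WRITE);
        seq.setLastEid(seq.getLastEid() + 1);
        entity.setEid(seq.getLastEid());
        em.persist(entity);
        return entity; // flush and unlock both happen at commit
    }
}

This avoids both the native LOCK TABLE statements and the unlock-before-insert ordering problem from point 1.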
Related
I have a REST application where one of the resources can be updated. Below are the two methods responsible for this task:
updateWithRelatedEntities(String, Store): receives an id and a new Store object (constructed by deserializing the PUT request entity), sets the version (used for optimistic locking) on the new object, and calls update in a transaction.
public Store updateWithRelatedEntities(String id, Store newStore) {
    Store existingStore = this.get(id);
    newStore.setVersion(existingStore.getVersion());

    em.getTransaction().begin();
    newStore = super.update(id, newStore);
    em.getTransaction().commit();

    return newStore;
}
update(String, T): a generic method for making an update. It checks that the ids match and performs the merge operation.
public T update(String id, T newObj) {
    Type superclass = getClass().getGenericSuperclass();
    if (superclass instanceof Class) {
        superclass = ((Class) superclass).getGenericSuperclass();
    }
    Class<T> type = (Class<T>) ((ParameterizedType) superclass).getActualTypeArguments()[0];

    if (newObj == null) {
        throw new EmptyPayloadException(type.getSimpleName());
    }

    T obj = em.find(type, id);
    if (!newObj.getId().equals(obj.getId())) {
        throw new IdMismatchException(id, newObj.getId());
    }

    return em.merge(newObj);
}
The problem is that this call: T obj = em.find(type, id); triggers an UPDATE of the store object in the database, which means we get an OptimisticLockException when triggering the merge (because the versions are now different).
Why is this happening? What would be the correct way to achieve this?
I'd rather not copy the properties from newStore to existingStore and use existingStore for the merge, although I think that would solve the optimistic lock problem.
This code is not running on an application server and I am not using JTA.
EDIT:
If I detach existingStore before calling update, T obj = em.find(type, id); doesn't trigger an update of the store object, so this solves the problem. The question still remains, though: why does it trigger the update when the entity is not detached?
I can't see your entity in the code you added, but I believe you are missing a key point of optimistic locking: the @Version annotation on the version field.
If you have this field on your entity, then the container should be able to perform the merge without problems. Please take a look at
Optimistic Locking and also at the good article Don't Break Optimistic Locking.
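For illustration, a minimal sketch of such an entity (names assumed, not taken from the question):

@Entity
public class Store {

    @Id
    private String id;

    // JPA increments this column on every successful update and compares it
    // during merge; a mismatch raises an OptimisticLockException.
    @Version
    private long version;

    // other fields, getters and setters omitted
}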
I know that there are some questions about this subject already, but I think this one is different.
Let's say I have this class:
@Entity
public class Foo {

    @Id
    @GeneratedValue
    private long id;

    @Version
    private long version;

    private String description;

    ...
}
Then I create some objects and persist them to the DB using JPA add().
Later, I get all objects from the repository using JPA all().
From that list I select one object and change its description.
Then I want to update that object in the repository using JPA merge() (see code below).
The problem here is that it works the first time I change the description (the version value is then 2).
The second time, an OptimisticLockException is raised, saying that the object was changed in the meantime.
I'm using H2 as the DB in embedded mode.
MERGE CODE:
// First, persist is tried; if the object already exists, an exception is raised and then this code is executed
try {
    tx = em.getTransaction();
    tx.begin();
    entity = em.merge(entity);
    tx.commit();
} catch (PersistenceException pex) {
    // Do stuff
}
What can be wrong here?
Thank you.
EDIT (more code)
// Foo b is obtained by getting all objects from the DB using JPA all(), and then one object is selected from that list
b.changeDescription("Something new!");
// Call the update method (merge code already posted)
I would assume that you are changing elements in the list from different clients or different threads. This is what causes an OptimisticLockException.
One thread, in its own EntityManager, reads the Foo object and gets a @Version value at the time of the read.
// select and update AnyEntity
EntityManager em1 = emf.createEntityManager();
EntityTransaction tx1 = em1.getTransaction();
tx1.begin();
AnyEntity firstEntity = em1.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
firstEntity.setName("name1");
em1.merge(firstEntity);
Another client reads and updates the Foo object at the same time, before the first client has committed its changes to the database:
// select and update AnyEntity from a different EntityManager from a different thread or client
EntityManager em2 = emf.createEntityManager();
EntityTransaction tx2 = em2.getTransaction();
tx2.begin();
AnyEntity secondEntity = em2.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
secondEntity.setName("name2");
em2.merge(secondEntity);
Now the first client commits its changes to the database:
// commit first change while second change still pending
tx1.commit();
em1.close();
And the second client gets an OptimisticLockException when it updates its changes:
// OptimisticLockException thrown here means that a change happened while AnyEntity was still "checked out"
try {
    tx2.commit();
    em2.close();
} catch (RollbackException ex) {
    Throwable cause = ex.getCause();
    if (cause instanceof OptimisticLockException) {
        System.out.println("Someone already changed AnyEntity.");
    } else {
        throw ex;
    }
}
Reference: Java - JPA - @Version annotation
Are you properly initialising the version field?
If not, it is not supposed to work with null; try adding a default value to it:
@Version
private Long version = 0L;
Here is a post which explains perfectly when an OptimisticLockException is thrown.
Also, just for future reference: you can make JPA skip this in-memory validation of entities (when you are updating them but only want the change applied on the DB side at the end of the transaction) by using the detach method on the EntityManager:
em.detach(employee);
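A minimal sketch of that detach pattern (the Employee entity and its name property are assumed for illustration):

Employee employee = em.find(Employee.class, id);
em.detach(employee);           // the EntityManager stops tracking this instance
employee.setName("changed");   // this change is no longer auto-flushed
employee = em.merge(employee); // explicit write-back at the end, with version check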
I'm trying to cover my repository code with JUnit tests, but unexpectedly I am facing the following problem:
@Test
@Transactional
public void shouldDeactivateAll() {
    /* get all entities from the DB */
    List<SomeEntity> someEntities = someEntityRepository.findAll();

    /* for each entity, set the active field to 1 */
    someEntities.forEach(entity -> {
        entity.setActive(1);
        /* save changes */
        someEntityRepository.save(entity);
    });

    /* call the service, which walks through all rows and updates the active field to 0 */
    unActiveService.makeAllUnactive();

    /* get all entities again */
    someEntities = someEntityRepository.findAll();

    /* check that all entities now have active = 0 */
    someEntities.forEach(entity -> assertEquals(0, entity.getActive()));
}
where:
The makeAllUnactive() method is just a @Query:
@Modifying
@Query(value = "update SomeEntity e set v.active=0 where v.active =1")
public void makeAllUnactive();
And someEntityRepository extends JpaRepository.
This test method returns an AssertionError: expected 0 but was 1.
It means that makeAllUnactive either didn't change the status of the entities, or made changes that are invisible to the test.
Could you please help me understand where the "gap" in my code is?
In the query you have:
@Query(value = "update SomeEntity e set v.active=0 where v.active =1")
You should rather change it to:
@Query(value = "update SomeEntity e set e.active=0 where e.active =1")
If that does not work, try flushing after running someEntityRepository.save(entity).
EDIT:
You should enable the clearAutomatically flag on @Modifying so that the EntityManager gets updated. However, keep in mind that it may also cause the loss of all non-flushed changes. For some more reading, take a look at:
http://docs.spring.io/spring-data/jpa/docs/1.5.0.M1/reference/htmlsingle/#jpa.modifying-queries
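Putting both suggestions together, the repository method would look roughly like this (a sketch with the corrected alias):

@Modifying(clearAutomatically = true) // clears the persistence context so the test re-reads fresh state
@Query("update SomeEntity e set e.active = 0 where e.active = 1")
void makeAllUnactive();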
I need to save data into 2 tables (an entity table and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert rows into the association table in native SQL. The rows have a reference to the entity I saved before.
Here is the issue: I get an integrity constraint violation on a foreign key. The entity saved first isn't known to this second query.
Here is my code:
The repository:
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId, @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service:
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }

        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
    }
}
The 2 queries seem to be in different transactions, although I set the @Transactional annotation. I also tried to execute my second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction! (Or you need a new transaction isolation level.)
You also wrote:
"I don't really understand why the flush action is not done by default"
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means that Hibernate will flush the data at some point in time, but always before you commit the transaction or execute another SQL statement VIA HIBERNATE. So normally this is not a problem; in your case, however, you bypass Hibernate with your native query, so Hibernate does not know about this statement and therefore will not flush its data.
See also this answer of mine: https://stackoverflow.com/a/17889017/280244 about this topic
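For completeness, the explicit-flush variant of the service step would look roughly like this (a sketch, assuming an EntityManager injected into the service):

created = distributionRepository.save(created);
entityManager.flush(); // writes the pending INSERT so the native query's FK reference resolves
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);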
I have two fields, aId and bId, and I want the first one to be generated:
@Id
@Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN")
@BusinessKey
public Long getAId() {
    return this.aId;
}
I want bId to initially be exactly the same as aId. One approach is to insert the entity, then fetch the aId generated by the DB (a 2nd query), and then update the entity, setting bId equal to aId (a 3rd query). Is there a way for bId to get the same generated value as aId?
Note that afterwards, I want to be able to update bId from my GUI.
If the solution is JPA, even better.
Choose your poison:
Option #1
You could annotate bId with org.hibernate.annotations.Generated and use a database trigger on insert (I'm assuming the nextval has already been assigned to AID, so we'll assign the currval to BID):
CREATE OR REPLACE TRIGGER "MY_TRIGGER"
BEFORE INSERT ON "MYENTITY"
FOR EACH ROW
BEGIN
    SELECT "MYENTITY_SEQ".currval INTO :NEW.BID FROM dual;
END;
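The mapping side of this option would look roughly like this (a sketch; the BID column name is assumed from the trigger above):

// Tells Hibernate that the database populates this column on insert,
// so Hibernate re-reads the value after the INSERT instead of writing it.
@Generated(GenerationTime.INSERT)
@Column(name = "BID")
private Long bId;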
I'm not a big fan of triggers and things that happen behind the scenes, but this seems to be the easiest option (not the best one for portability, though).
Option #2
Create a new entity, persist it, flush the entity manager to get the id assigned, set aId on bId, and merge the entity.
em.getTransaction().begin();
MyEntity e = new MyEntity();
...
em.persist(e);
em.flush();
e.setBId(e.getAId());
em.merge(e);
...
em.getTransaction().commit();
Ugly, but it works.
Option #3
Use callback annotations to set the bId in-memory (until it gets written to the database):
@PostPersist
@PostLoad
public void initializeBId() {
    if (this.bId == null) {
        this.bId = aId;
    }
}
This should work if you don't need the id to be written on insert (but in that case, see Option #4).
Option #4
You could actually add some logic in the getter of bId instead of using callbacks:
public Long getBId() {
    if (this.bId == null) {
        return this.aId;
    }
    return this.bId;
}
Again, this will work if you don't need the id to be persisted in the database on insert.
If you use JPA, after inserting the new entity the id should be set to the generated value, I thought (maybe it depends on which JPA provider you use), so no 2nd query is needed. Then set bId to the aId value in your DAO?