I need to save data into 2 tables (an entity and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance reasons, I need to insert rows into an association table in native SQL. The rows have a reference to the entity I saved before.
The issue comes here: I get an integrity constraint exception concerning a foreign key. The entity saved first isn't known in this second query.
Here is my code:
The repo:
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId, @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service:
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }

        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
    }
}
The 2 queries seem to be in different transactions even though I set the @Transactional annotation. I also tried to execute my second query with an entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction! (or you would need a different transaction isolation level)
You also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means that Hibernate may flush the data at any point in time, but always before you commit the transaction or execute another SQL statement via Hibernate. So normally this is not a problem, but in your case you bypass Hibernate with your native query, so Hibernate does not know about this statement and therefore will not flush its data.
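For completeness, here is a minimal sketch of the explicit-flush variant. It assumes the same service as in your question; the injected EntityManager and the placeholder return are additions for this example only:

@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    // Injected only for this sketch, so we can flush explicitly.
    @PersistenceContext
    private EntityManager entityManager;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        Distribution created = new Distribution();
        // ... populate the entity exactly as in the question ...
        created = distributionRepository.save(created);

        // Force Hibernate to issue the pending INSERT so that the native query
        // below can see the new DISTRIBUTION row within the same transaction.
        entityManager.flush();

        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
        return distribution; // return value omitted in the question; placeholder here
    }
}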
See also this answer of mine about this topic: https://stackoverflow.com/a/17889017/280244
Related
The idea is basically to extend some Repositories with custom functionality. So I got this setup, which DOES work!
@MappedSuperclass
abstract class MyBaseEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    var id: Int = 0

    var eid: Int = 0
}

interface MyRepository<T : MyBaseEntity> {
    @Transactional
    fun saveInsert(entity: T): Optional<T>
}
open class MyRepositoryImpl<T : MyBaseEntity> : MyRepository<T> {

    @Autowired
    private lateinit var entityManager: EntityManager

    @Transactional
    override fun saveInsert(entity: T): Optional<T> {
        // lock table
        entityManager.createNativeQuery("LOCK TABLE myTable WRITE").executeUpdate()
        // get current max EID
        val result = entityManager.createNativeQuery("SELECT MAX(eid) FROM myTable LIMIT 1").singleResult as? Int ?: 0
        // set the entity's EID to the incremented result
        entity.eid = result + 1
        // test whether the table is locked by manually sending 2-3 POST requests to the REST endpoint
        Thread.sleep(5000)
        // save
        entityManager.persist(entity)
        // unlock
        entityManager.createNativeQuery("UNLOCK TABLES").executeUpdate()
        return Optional.of(entity)
    }
}
How would I do this in a more Spring-like way?
At first, I thought @Transactional would do the LOCK and UNLOCK stuff. I tried a couple of additional parameters and @Lock. I did go through the docs and some tutorials, but the abstract technical English is often not easy to understand. In the end I did not get a working solution, so I manually added the table locking, which works fine. I would still prefer a more Spring-like way to do it.
1) There might be a problem with your current design as well. persist() does not instantly INSERT a row into the database; that happens on transaction commit, when the method returns.
So you unlock the table before the actual insert:
// save
entityManager.persist(entity) // -> There is no INSERT at this point.
// unlock
entityManager.createNativeQuery("UNLOCK TABLES").executeUpdate()
2) Going back to how to do it with JPA only, without native queries (it still requires a bit of a workaround, as it is not supported by default):
// lock the table by loading one existing entity and setting the LockModeType
Entity lockedEntity = entityManager.find(Entity.class, 1, LockModeType.PESSIMISTIC_WRITE);

// get the current max EID - TRY NOT TO USE A NATIVE QUERY HERE
// set the new entity's EID to the incremented result

// save
entityManager.persist(entity);
entityManager.flush(); // -> force an actual INSERT

// unlock by passing the previously locked entity
entityManager.lock(lockedEntity, LockModeType.NONE);
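Put together, a rough Java sketch of that approach could look like this. MyEntity, its eid accessor and the lock-anchor id 1 are placeholders rather than names from your code, and the max(eid) read is done in JPQL instead of native SQL:

@Transactional
public MyEntity saveInsert(MyEntity entity) {
    // Lock by loading one existing row with a pessimistic write lock
    // (row id 1 is only a placeholder "lock anchor").
    MyEntity lockAnchor = entityManager.find(MyEntity.class, 1, LockModeType.PESSIMISTIC_WRITE);

    // Read the current maximum eid with JPQL instead of a native query.
    Number maxEid = entityManager
            .createQuery("select coalesce(max(e.eid), 0) from MyEntity e", Number.class)
            .getSingleResult();
    entity.setEid(maxEid.intValue() + 1);

    entityManager.persist(entity);
    entityManager.flush(); // force the actual INSERT while the lock is still held

    // Release the lock on the anchor row, as suggested above.
    entityManager.lock(lockAnchor, LockModeType.NONE);
    return entity;
}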
I am not a hardcore Hibernate programmer.
I create a native query that returns bulk data (a million records):
return super.em.createNativeQuery(query).getResultList();
I then fetch the data and build objects, which are persisted using a DAO in a loop, resulting in about 1 million persist calls:
persist(object)
When I simultaneously run a select query on the table, it shows me 0 results:
Select count(*) from Audit_Log;
After all records are inserted successfully, MySQL shows me the result.
Earlier I was using a named query for fetching values from the DAO and it worked well. Now I have opted for a native query and got this behavior. Is there something I need to change?
Code:
public abstract class GenericDaoImpl<T, PK> implements GenericDao<T, PK> {

    @Override
    public T create(final T t) {
        this.em.persist(t);
        return t;
    }

    @Override
    public void flush() {
        org.hibernate.Session session = (org.hibernate.Session) em.getDelegate();
        session.flush();
    }
DAO:
@Override
public Person create(Person person) {
    return (Person) create((MainRecord) person);
}
Are you persisting 10 lakh (i.e. 1 million) records in a single transaction? It looks like that is the case. Since the transaction is open for a considerably long period of time, firing a query from TOAD (or any other SQL client) won't return anything, because the data isn't committed in the DB yet.
Try flushing the data in between.
Also, I hope you're using batching at the Hibernate level as well as at the driver level. There is no way I would persist 1M records without batching.
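For reference, here is one hedged way to switch both on when bootstrapping plain JPA. The persistence-unit name "audit-pu", the JDBC URL and the batch size are placeholders; the same properties can also go into persistence.xml:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class BatchingBootstrap {

    public static EntityManagerFactory createEntityManagerFactory() {
        Map<String, String> props = new HashMap<String, String>();
        // Hibernate-side batching: group INSERTs into JDBC batches.
        props.put("hibernate.jdbc.batch_size", "50");
        props.put("hibernate.order_inserts", "true");
        // Driver-side batching: let MySQL Connector/J rewrite batches into multi-row INSERTs.
        props.put("javax.persistence.jdbc.url",
                "jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true");
        return Persistence.createEntityManagerFactory("audit-pu", props);
    }
}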
The first thing you need to try is to flush, or close the session, right after the persist is done.
The explanation is here:
persist() makes a transient instance persistent. However, it does not guarantee that the identifier value will be assigned to the persistent instance immediately, the assignment might happen at flush time. persist() also guarantees that it will not execute an INSERT statement if it is called outside of transaction boundaries. This is useful in long-running conversations with an extended Session/persistence context.
Try this code snippet; it is similar to the approach we use with JDBC batching:
public Long save(HttpServletRequest request) {
    // Further business logic here....
    // count, model and ABC come from the omitted business logic above.
    EntityTransaction tx = getEntityManager().getTransaction();
    tx.begin();
    for (int i = 0; i < count; i++) {
        getEntityManager().persist((ABC) model);
        if (i > 0 && i % 2500 == 0) {
            // flush the batch of pending inserts and detach them to free memory
            getEntityManager().flush();
            getEntityManager().clear();
        }
    }
    tx.commit();
    getEntityManager().close();
}
I am trying to cover my repository code with JUnit tests, but unexpectedly I am facing the following problem:
@Test
@Transactional
public void shouldDeactivateAll() {
    /* get all entities from the DB */
    List<SomeEntity> someEntities = someEntityRepository.findAll();

    /* for each entity set the active field to 1 */
    someEntities.forEach(entity -> {
        entity.setActive(1);
        /* save changes */
        someEntityRepository.save(entity);
    });

    /* call the service, which walks through all rows and updates the "active" field to 0 */
    unActiveService.makeAllUnactive();

    /* get all entities again */
    someEntities = someEntityRepository.findAll();

    /* check that all entities now have active = 0 */
    someEntities.forEach(entity -> assertEquals(0, entity.getActive()));
}
where:
the makeAllUnactive() method is just a @Query:
@Modifying
@Query(value = "update SomeEntity e set v.active=0 where v.active =1")
public void makeAllUnactive();
And: someEntityRepository extends JpaRepository
This test method returns an AssertionError: Expected 0 but was 1.
It means that makeAllUnactive either didn't change the status of the entities, or did the changes but they are invisible.
Could you please help me understand where the "gap" in my code is?
In the query you have:
#Query(value = "update SomeEntity e set v.active=0 where v.active =1")
you should rather change it to:
#Query(value = "update SomeEntity e set e.active=0 where e.active =1")
If that does not work, try flushing after running someEntityRepository.save(entity);
EDIT:
You should enable the clearAutomatically flag in the @Modifying annotation, so that the EntityManager gets updated. However, keep in mind that it may also cause losing all the non-flushed changes. For some more reading, take a look:
http://docs.spring.io/spring-data/jpa/docs/1.5.0.M1/reference/htmlsingle/#jpa.modifying-queries
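For example, a sketch of the repository with the flag enabled (assuming the same SomeEntity mapping as in your question; the Long id type is an assumption):

public interface SomeEntityRepository extends JpaRepository<SomeEntity, Long> {

    // clearAutomatically tells Spring Data to clear the persistence context after
    // this update, so entities loaded afterwards reflect the new values.
    @Modifying(clearAutomatically = true)
    @Query("update SomeEntity e set e.active = 0 where e.active = 1")
    void makeAllUnactive();
}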
I have a Java project with a collection of unit tests that perform simple updates and deletes using JPA2. The unit tests run without a problem, and I can verify the changes in the database - all good. When I attempt to copy/paste this same function into a handler (Smartfox Extension), I receive a rollback exception:
Column 'levelid' cannot be null.
Looking for suggestions as to why this might be. I can perform data reads from within this extension (GetModelHandler) but trying to set data does not work. It's completely baffling.
So in summary -
This works...
@Test
public void Save()
{
    LevelDAO dao = new LevelDAO();
    List levels = dao.findAll();
    int i = levels.size();
    Level l = new Level();
    l.setName("test");
    Layer y = new Layer();
    y.setLayername("layer2");
    EntityManagerHelper.beginTransaction();
    dao.save(l);
    EntityManagerHelper.commit();
}
This fails with a rollback exception:
public class SetModelHandler extends BaseClientRequestHandler
{
    @Override
    public void handleClientRequest(User sender, ISFSObject params)
    {
        LevelDAO dao = new LevelDAO();
        List levels = dao.findAll();
        int i = levels.size();
        Level l = new Level();
        l.setName("test");
        Layer y = new Layer();
        y.setLayername("layer2");
        EntityManagerHelper.beginTransaction();
        dao.save(l);
        EntityManagerHelper.commit();
    }
}
The Level and Layer classes have a @OneToMany and a @ManyToOne attribute respectively.
Any ideas appreciated.
Update
Here's the schema
Level
--------
levelid (int) PK
name (varchar)
Layer
--------
layerid (int) 11 PK
layername (varchar) 100
levelid (int)
Foreign Key Name: Level.levelid,
On Delete: no action,
On Update: no action
When I changed
EntityManagerHelper.beginTransaction();
dao.update(l);
EntityManagerHelper.commit();
to
EntityManagerFactory factory = Persistence.createEntityManagerFactory("bwmodel");
EntityManager entityManager = factory.createEntityManager();
entityManager.getTransaction().begin();
dao.update(l);
entityManager.persist(l);
entityManager.getTransaction().commit();
This performs a save but not an update? I'm missing something obvious here.
The most likely problem I can see would be different database definitions. Tests often use an in-memory database that is generated on the fly, whereas in actual production you are using a real database which is probably enforcing constraints.
Try assigning levelid a value, or changing the database schema.
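For instance, if levelid is supposed to come from a MySQL AUTO_INCREMENT column, the mapping could look roughly like this (a sketch, assuming your Level entity maps that column; it is an illustration, not your actual mapping):

@Entity
public class Level {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // let MySQL assign the AUTO_INCREMENT value
    @Column(name = "levelid")
    private Integer levelid;

    @Column(name = "name")
    private String name;

    // getters, setters and the @OneToMany side omitted
}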
I want the first to be generated:
@Id
@Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN")
@BusinessKey
public Long getAId() {
    return this.aId;
}
I want bId to initially be exactly the same as aId. One approach is to insert the entity, then get the aId generated by the DB (2nd query) and then update the entity, setting bId to be equal to aId (3rd query). Is there a way to have bId receive the same generated value as aId?
Note that afterwards, I want to be able to update bId from my gui.
If the solution is JPA, even better.
Choose your poison:
Option #1
You could annotate bId with org.hibernate.annotations.Generated and use a database trigger on insert (I'm assuming the nextval has already been assigned to AID, so we'll assign the currval to BID):
CREATE OR REPLACE TRIGGER "MY_TRIGGER"
before insert on "MYENTITY"
for each row
begin
    select "MYENTITY_SEQ".currval into :NEW.BID from dual;
end;
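On the entity side, the matching mapping would look roughly like this (a sketch; the BID column name is assumed from the trigger above, and Hibernate re-reads the trigger-assigned value after the INSERT):

@org.hibernate.annotations.Generated(org.hibernate.annotations.GenerationTime.INSERT)
@Column(name = "BID", insertable = false)
private Long bId;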
I'm not a big fan of triggers and things that happen behind the scenes, but this seems to be the easiest option (not the best one for portability, though).
Option #2
Create a new entity, persist it, flush the entity manager to get the id assigned, set bId from aId, then merge the entity.
em.getTransaction().begin();
MyEntity e = new MyEntity();
...
em.persist(e);
em.flush();
e.setBId(e.getAId());
em.merge(e);
...
em.getTransaction().commit();
Ugly, but it works.
Option #3
Use callback annotations to set the bId in-memory (until it gets written to the database):
@PostPersist
@PostLoad
public void initializeBId() {
    if (this.bId == null) {
        this.bId = aId;
    }
}
This should work if you don't need the id to be written on insert (but in that case, see Option #4).
Option #4
You could actually add some logic in the getter of bId instead of using callbacks:
public Long getBId() {
    if (this.bId == null) {
        return this.aId;
    }
    return this.bId;
}
Again, this will work if you don't need the id to be persisted in the database on insert.
If you use JPA, after inserting the new A, the id should be set to the generated value, I thought (maybe it depends on which JPA provider you use), so no 2nd query is needed. Then set bId to the aId value in your DAO?
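Roughly like this, reusing the names from Option #2 (a sketch; whether the id is available right after persist() without a flush depends on the generator and the provider):

em.persist(e);        // with a sequence generator, Hibernate typically assigns aId here
e.setBId(e.getAId()); // copy it to bId before the INSERT is flushed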