Unique constraint violated on replace item from list in test - java

So I have a client (creditor) which has a list of documents. This list can contain only one document of each type, so I have an addDocument method which adds a new document, but if a document of that type already exists it should be replaced.
This test fails on the unique constraint:
def "should replace documents with same type"() {
given:
def creditor = creditors.create(CreditorHelper.createSampleCreditorForm())
def documentType = DocumentTypeEvent.INVESTMENT_INSTRUCTION
and:
def old = documents.addDocument(new DocumentForm("urlOld", creditor.creditorReference, documentType, ZonedDateTime.now()))
when:
documents.addDocument(new DocumentForm("urlNew", creditor.creditorReference, documentType, ZonedDateTime.now()))
then:
def newResult = documentRepository.findByCreditorReference(creditor.creditorReference)
newResult.size() == 1
newResult.find {
it.url == "urlNew"
}
and:
documentRepository.findByHash(old.hash) == Optional.empty()
}
The implementation is a simple replace:
@Transactional
public Document addDocument(final DocumentForm documentForm) {
    return creditorRepository.findByCreditorReferenceIgnoreCase(documentForm.getCreditorReference())
            .addDocument(new Document(documentForm));
}
This calls the following method on the creditor entity:
public Document addDocument(Document newDocument) {
    documents.removeIf(existingDocument -> existingDocument.getType() == newDocument.getType());
    documents.add(newDocument);
    return newDocument;
}
The entity mapping:
@OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
@JoinColumn(name = "creditor_id")
@Builder.Default
private List<Document> documents = new ArrayList<>();
The funny thing is that when I remove the unique constraint from the Flyway migration the test passes, so it seems like a problem with the transaction.

I think it is related to Hibernate's ordering of queries at flush time. Because persisting new entities is the first operation Hibernate's session executes during a flush, the INSERT for the new document runs while the old row is still in the database, and the unique constraint fires. Turn on the show_sql option in Hibernate and look at the logs to see the real order of the queries sent to the database.
Also read Vlad Mihalcea's post about ordering: A beginner's guide to Hibernate flush operation order. You can also read the code of the EventListenerRegistryImpl class to see what the ordering looks like.
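A common workaround, given that ordering, is to flush the pending removal before the replacement document is added, so the DELETE reaches the database ahead of the INSERT. The sketch below only illustrates the idea; the Creditor type name, its getDocuments() accessor, the DocumentForm.getType() getter, and the injected EntityManager are assumptions, not code from the question.

@PersistenceContext
private EntityManager entityManager;

@Transactional
public Document addDocument(final DocumentForm documentForm) {
    Creditor creditor = creditorRepository
            .findByCreditorReferenceIgnoreCase(documentForm.getCreditorReference());
    // Remove the existing document of the same type first...
    creditor.getDocuments()
            .removeIf(existing -> existing.getType() == documentForm.getType());
    // ...and flush so the DELETE is issued before the INSERT of the replacement.
    entityManager.flush();
    return creditor.addDocument(new Document(documentForm));
}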

Related

Spring Data, JPA @ManyToOne lazy initialization not working

I know there are many similar questions about this problem, but nothing works for me.
I have a @ManyToOne relationship between Aim and User:
@ManyToOne(fetch = FetchType.LAZY, optional = false)
@JoinColumn(name = "user_id", nullable = false, updatable = false)
private User user;
and
@OneToMany(fetch = FetchType.LAZY, mappedBy = "user")
private Collection<Aim> userAims;
respectively.
@Override
@Transactional(propagation = Propagation.REQUIRED)
@PreAuthorize("isAuthenticated() and principal.user.isEnabled() == true")
public Aim findById(String aimId) throws NumberFormatException, EntityNotFoundException {
    Aim aim = null;
    try {
        aim = aimRepository.findOne(Long.parseLong(aimId));
    } catch (NumberFormatException e) {
        throw new InvalidDataAccessApiUsageException(e.getMessage(), e);
    }
    if (aim == null) throw new EntityNotFoundException("Aim with id: " + aimId + " not found!");
    return aim;
}
@OneToMany associations work fine with lazy fetching. The method isn't nested inside another @Transactional method, so @Transactional works fine.
So the record exists.
Classes User and Aim aren't final and implement Serializable.
Some sources advise putting the annotations on getters. That also doesn't work.
@Fetch(FetchMode.SELECT) gives the same situation.
Querying via Hibernate gives the same result, but an HQL query with left join fetch works fine.
My FK is ON UPDATE CASCADE ON INSERT CASCADE.
optional = false was also tried...
Note that I am not getting a LazyInitializationException.
Thanks in advance!
I'm guessing from the code in your findById method, and from the reference to "lazy initialization not working" in the title, that you want to find an Aim object by its numeric id, along with the associated User object.
In order to do this with lazy loading, you need to 'get' the associated object and, most importantly, you need to 'get' one of the associated entity's fields.
So the code inside the try block should be:
aim = aimRepository.findOne(Long.parseLong(aimId));
if (aim != null && aim.getUser() != null) {
    aim.getUser().getUserId(); // doesn't need to be assigned to anything
}
Alternatively, if you have a logger available you can use the userId in a debug or trace log message:
if (aim != null && aim.getUser() != null) {
    logger.debug("Lazy-loaded User " + aim.getUser().getUserId());
}
This has the added benefit that you can debug how things are lazy-loaded.
By the way, we found out the hard way that making a find routine throw an exception when it doesn't find something is a bad idea. This is because you might want to use the find routine to check that an entity does NOT exist. If that happens within a transaction, your exception may trigger an unwanted rollback (unless you specifically ignore it). It is better to return null and check for that, instead of using a try ... catch.
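Since the question mentions that an HQL query with left join fetch works fine, another option is to fetch the association eagerly for this one call with a fetch-join repository method. This is only a sketch; the repository interface and the method name are hypothetical, not taken from the question.

public interface AimRepository extends JpaRepository<Aim, Long> {

    // Loads the Aim together with its User in a single select,
    // so no lazy initialization is needed afterwards.
    @Query("select a from Aim a join fetch a.user where a.id = :id")
    Aim findByIdFetchingUser(@Param("id") Long id);
}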

Spring data - insert data depending on previous insert

I need to save data into 2 tables (an entity and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert rows into an association table with native SQL. The rows reference the entity I saved before.
The issue comes here: I get an integrity constraint violation on a foreign key. The entity saved first isn't known to this second query.
Here is my code :
The repo :
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId, @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service :
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }

        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
    }
}
The two queries seem to be in different transactions, even though I set the @Transactional annotation. I also tried to execute my second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction! (Otherwise you would need a different transaction isolation level.)
You also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means that Hibernate will flush the data at some point in time, but always before you commit the transaction or execute another SQL statement VIA HIBERNATE. So normally this is no problem, but in your case you bypass Hibernate with your native query, so Hibernate does not know about this statement and therefore does not flush its data first.
See also this answer of mine: https://stackoverflow.com/a/17889017/280244 about this topic
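Applied to the distribute method above, the two variants would look roughly like this sketch; the injected EntityManager is an assumption, the other names come from the question.

// Variant 1: flush the persistence context yourself before the native insert.
@PersistenceContext
private EntityManager entityManager;
...
created = distributionRepository.save(created);
entityManager.flush(); // pushes the pending INSERT so the native query can see the row
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);

// Variant 2: let Spring Data flush for you.
created = distributionRepository.saveAndFlush(created);
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);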

Removing entities from Collection in hibernate

I'm trying to manually delete every entity in a collection on an entity. The problem is, the entities don't get deleted from the database, even though they get removed from the collection on the task.
Below is the code I'm using to achieve this:
public int removeExistingCosts(final DataStoreTask task) {
    int removedAccumulator = 0;
    Query query = entityManager.createNamedQuery(DataStoreCost.GET_COSTS_FOR_TASK);
    query.setParameter(DataStoreCost.TASK_VARIABLE_NAME, task);
    try {
        List costsForTask = query.getResultList();
        for (Object cost : costsForTask) {
            task.getCosts().remove(cost);
            removedAccumulator++;
        }
    } catch (NoResultException e) {
        logger.debug("Couldn't find costs for task: {}", task.getId());
    }
    entityManager.flush();
    entityManager.persist(task);
    return removedAccumulator;
}
Any ideas?
P.S. the collection is mapped as:
@OneToMany(targetEntity = DataStoreCost.class, mappedBy = "task", cascade = CascadeType.ALL)
private Collection<DataStoreCost> costs;
Cheers.
I think you need to explicitly remove the Cost entity via the entityManager. When you remove the Cost from the Task's cost list you only remove the reference to that instance; Hibernate does not know that that particular Cost will not be used anywhere else.
It's not deleting the entity, because it doesn't know if something else is referring to it.
You need to enable orphan removal. In JPA 2, use the orphanRemoval attribute. If you're using Hibernate's own annotations, use the DELETE_ORPHAN cascade style.
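For the mapping posted in the question, the JPA 2 variant would look roughly like this sketch (same names as above, with only the orphanRemoval attribute added):

@OneToMany(targetEntity = DataStoreCost.class, mappedBy = "task",
        cascade = CascadeType.ALL, orphanRemoval = true)
private Collection<DataStoreCost> costs;

With orphanRemoval = true, removing a DataStoreCost from task.getCosts() schedules a DELETE for that row at flush time.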

Why does EclipseLink/JPA attempt to persist an entity when I don't ask it to?

I'm trying to persist 3 entities (exp, def, meng) in a transaction, and then persist another 2 (def', meng'), where meng' is related to exp.
However, as I attempt to persist meng', EclipseLink/JPA 2 issues:
Call: INSERT INTO EXPRESSION (EXPRESSION, GENDER) VALUES (?, ?)
bind => [exp, null]
which throws an exception, since that row has already been inserted and EXPRESSION is a key.
So apparently persisting the entity meng', which references exp, somehow makes EclipseLink think I asked to persist a new exp.
Here is the test:
@Test
public void testInsertWords() throws MultipleMengsException, Exception {
    final List<String[]> mengsWithSharedExp = new LinkedList<String[]>();
    mengsWithSharedExp.add(mengsList.get(3));
    mengsWithSharedExp.add(mengsList.get(4));
    insertWords(mengsWithSharedExp, null, mengsDB);
}
Here is the problematic code:
public void insertWords(EnumMap<Input, MemoEntity> input) throws MultipleMengsException {
    Expression def = (Expression) input.get(Input.definition);
    Expression exp = (Expression) input.get(Input.expression);
    beginTransaction();
    persistIfNew(def);
    persistIfNew(exp);
    persistNewMeng(null, exp, def);
    commitTransaction();
}

private void persistNewMeng(final MUser usr, Expression exp, final Expression def) throws RuntimeException {
    final Meaning meng = new Meaning(usr, exp, def);
    if (!persistIfNew(meng)) {
        throw new RuntimeException("Meng ." + meng.toString() + " was expected to be new.");
    }
    if (usr != null) {
        usr.addMeng(meng);
    }
}

public <Entity> boolean persistIfNew(final Entity entity) {
    final Object key = emf.getPersistenceUnitUtil().getIdentifier(entity);
    if (em.find(entity.getClass(), key) != null) {
        return false;
    }
    em.persist(entity);
    return true;
}
You can check out the Maven source code (to test) from here.
Is this expected behavior? If so, why? And most importantly, how can I solve it?
It looks as if
@ManyToMany(cascade = CascadeType.ALL)
private Set<Expression> exps;
in Meaning is the culprit, although I don't understand why it should be. The documentation says:
If the entity is already managed, the persist operation is ignored, although the persist operation will cascade to related entities that have the cascade element set to PERSIST or ALL in the relationship annotation.
Frank is correct. You are not reading in the Expression, so when you call persist on the Meaning while it references existing Expressions, those Expressions are detached, which causes the transaction to fail. Calling merge will work. Alternatively, you can remove the cascade persist from the exps relationship: since you seem to persist new Expressions directly anyway, it is not needed.
Most likely you are running into:
[...] If the entity is detached [...] the transaction commit will fail.
(same source that you are citing)
If you persist a new entity that references an already persistent entity, you must use "merge" instead of "persist". "merge" will persist new entities and update existing entities.
Also beware that the merge operation returns an attached data graph, which must be used for further operations within the same persistence context.
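A minimal sketch of the merge-based approach in the insertWords flow above; the types and helper calls come from the question, and the key point is reusing the managed instances that merge returns:

beginTransaction();
// merge returns the managed copies; keep using those, not the detached originals
def = em.merge(def);
exp = em.merge(exp);
Meaning meng = new Meaning(null, exp, def);
em.persist(meng);
commitTransaction();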

Hibernate: same generated value in two properties

I want the first to be generated:
@Id
@Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN")
@BusinessKey
public Long getAId() {
    return this.aId;
}
I want bId to initially be exactly the same as aId. One approach is to insert the entity, then get the aId generated by the DB (2nd query) and then update the entity, setting bId equal to aId (3rd query). Is there a way to get bId to receive the same generated value as aId?
Note that afterwards, I want to be able to update bId from my GUI.
If the solution is JPA, even better.
Choose your poison:
Option #1
You could annotate bId with org.hibernate.annotations.Generated and use a database trigger on insert (I'm assuming the nextval has already been assigned to AID, so we'll assign the currval to BID):
CREATE OR REPLACE TRIGGER "MY_TRIGGER"
before insert on "MYENTITY"
for each row
begin
    select "MYENTITY_SEQ".currval into :NEW.BID from dual;
end;
I'm not a big fan of triggers and things that happen behind the scenes, but this seems to be the easiest option (not the best one for portability, though).
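On the mapping side, bId would be marked as generated on insert so Hibernate re-reads the trigger-assigned value after the INSERT. This is a sketch rather than code from the answer; the column name is assumed to match the trigger, and updatable is left at its default so the GUI can still change bId later.

@Generated(GenerationTime.INSERT)
@Column(name = "BID", insertable = false)
public Long getBId() {
    return this.bId;
}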
Option #2
Create a new entity, persist it, flush the entity manager to get the id assigned, set bId from aId, then merge the entity.
em.getTransaction().begin();
MyEntity e = new MyEntity();
...
em.persist(e);
em.flush();
e.setBId(e.getAId());
em.merge(e);
...
em.getTransaction().commit();
Ugly, but it works.
Option #3
Use callback annotations to set bId in memory (until it gets written to the database):
@PostPersist
@PostLoad
public void initializeBId() {
    if (this.bId == null) {
        this.bId = aId;
    }
}
This should work if you don't need the id to be written on insert (but in that case, see Option #4).
Option #4
You could actually add some logic in the getter of bId instead of using callbacks:
public Long getBId() {
    if (this.bId == null) {
        return this.aId;
    }
    return this.bId;
}
Again, this will work if you don't need the id to be persisted in the database on insert.
If you use JPA, after inserting the new entity the id should already be set to the generated value, I thought (maybe it depends on which JPA provider you use), so no second query is needed. Then set bId to the aId value in your DAO?
