Delete an item from a list in a JPA entity - java

I want to remove an item from a list in an entity. I have this entity:
@Entity
public class PairingCommit extends Model
{
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public long id;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "commit")
    public List<CommitItem> items;
}
I do the following to remove an item:
commit.items.remove(item);
commit.update();
But it doesn't remove the object from the database.
I suppose I missed something...
EDIT: After some searching, I'm not sure I'm even using JPA... I'm working with Play Framework 2, which uses Ebean... but it seems that I have access to the JPA annotations.
My first problem was that I tried to delete the item directly, like this:
CommitItem.byId(id).delete();
But it gives an OptimisticLockException.

You should call the EntityManager's remove method on the item.
EntityManager em;
item = em.merge(item); // Now item is attached
em.find(PairingCommit.class, [Pairing Commit PK]).items.remove(item);
em.remove(item);

Take a look at this question/answer. The CascadeType annotation will propagate EntityManager operations to the linked entities. The way your code is currently set up, calling
entityManager.remove(pairingCommit);
would also delete all of the CommitItems that the PairingCommit is linked to, but
commit.items.remove(item);
is not an EntityManager operation, so nothing gets propagated.
You can get rid of the linked items directly with the EntityManager.

The specification says:
It is particularly important to ensure that changes to the inverse side of a relationship result in appropriate updates on the owning side, so as to ensure the changes are not lost when they are synchronized to the database.
So you must remove the item from the owning side of the relation:
commitItem.setCommit(null);
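For example, a minimal sketch under the question's setup (assuming CommitItem exposes a setter for its commit field; the byId finder and the update/delete calls are the Play/Ebean Model API used in the question):
// Sketch: break the link on the owning side, keep the in-memory list in sync,
// then delete the now-orphaned child row.
CommitItem item = CommitItem.byId(id); // finder from the question
item.setCommit(null);                  // owning side
commit.items.remove(item);             // inverse side, keep memory consistent
item.delete();                         // remove the row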

OK, so I have solved the OptimisticLockException problem. The cause was that MySQL failed to compare floating-point numbers; I switched the column to the DECIMAL type and it works fine now.
But I still don't understand why removing from the list doesn't work.
Here is an article on how optimistic locking works: http://www.avaje.org/occ.html
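For illustration, one way to express that fix in the entity mapping (a sketch only; the field name and precision are made-up examples, not from the question):
// Map the value as BigDecimal backed by a DECIMAL column, so the optimistic-lock
// WHERE clause compares exact values instead of floating-point ones.
@Column(columnDefinition = "DECIMAL(19,4)")
public BigDecimal amount; // hypothetical field; previously a double/FLOAT column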

Related

How to maintain bi-directional relationships with Spring Data REST and JPA?

Working with Spring Data REST, if you have a OneToMany or ManyToOne relationship, the PUT operation returns 200 on the "non-owning" entity but does not actually persist the joined resource.
Example Entities:
@Entity(name = 'author')
@ToString
class AuthorEntity implements Author {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    String fullName

    @ManyToMany(mappedBy = 'authors')
    Set<BookEntity> books
}

@Entity(name = 'book')
@EqualsAndHashCode
class BookEntity implements Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(nullable = false)
    String title

    @Column(nullable = false)
    String isbn

    @Column(nullable = false)
    String publisher

    @ManyToMany(fetch = FetchType.LAZY, cascade = [CascadeType.ALL])
    Set<AuthorEntity> authors
}
If you back them with a PagingAndSortingRepository, you can GET a Book, follow the authors link on the book and do a PUT with the URI of an Author to associate with. You cannot go the other way.
If you do a GET on an Author and do a PUT on its books link, the response returns 200, but the relationship is never persisted.
Is this the expected behavior?
tl;dr
The key to that is not so much anything in Spring Data REST - as you can easily get it to work in your scenario - but making sure that your model keeps both ends of the association in sync.
The problem
The problem you see here arises from the fact that Spring Data REST basically modifies the books property of your AuthorEntity. That itself doesn't reflect this update in the authors property of the BookEntity. This has to be worked around manually, which is not a constraint that Spring Data REST makes up but the way that JPA works in general. You will be able to reproduce the erroneous behavior by simply invoking setters manually and trying to persist the result.
How to solve this?
If removing the bi-directional association is not an option (see below on why I'd recommend this) the only way to make this work is to make sure changes to the association are reflected on both sides. Usually people take care of this by manually adding the author to the BookEntity when a book is added:
class AuthorEntity {

    void add(BookEntity book) {
        this.books.add(book);
        if (!book.getAuthors().contains(this)) {
            book.add(this);
        }
    }
}
The additional if clause would also have to be added on the BookEntity side if you want to make sure that changes from the other side are propagated, too. The if is basically required, as otherwise the two methods would constantly call each other.
Spring Data REST by default uses field access, so there's actually no method you can put this logic into. One option would be to switch to property access and put the logic into the setters. Another option is to use a method annotated with @PreUpdate/@PrePersist that iterates over the entities and makes sure the modifications are reflected on both sides.
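A rough sketch of the property-access variant (the entity skeleton is abridged; only the setter logic matters here):
class AuthorEntity {

    private Set<BookEntity> books = new HashSet<>();

    // With property access, JPA/Spring Data REST populate the entity through this
    // setter, so the back-references can be fixed up in one place.
    public void setBooks(Set<BookEntity> books) {
        this.books = books;
        if (books != null) {
            books.forEach(book -> book.getAuthors().add(this));
        }
    }

    public Set<BookEntity> getBooks() {
        return books;
    }
}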
Removing the root cause of the issue
As you can see, this adds quite a lot of complexity to the domain model. As I joked on Twitter yesterday:
#1 rule of bi-directional associations: don't use them… :)
It usually simplifies matters if you try not to use bi-directional relationships whenever possible and rather fall back to a repository to obtain all the entities that make up the back side of the association.
A good heuristic for determining which side to cut is to think about which side of the association is really core and crucial to the domain you're modeling. In your case I'd argue that it's perfectly fine for an author to exist with no books written by her. On the flip side, a book without an author doesn't make much sense at all. So I'd keep the authors property in BookEntity but introduce the following method on the BookRepository:
interface BookRepository extends Repository<Book, Long> {
    List<Book> findByAuthor(Author author);
}
Yes, that requires all clients that previously could just have invoked author.getBooks() to now work with a repository. But on the positive side you've removed all the cruft from your domain objects and created a clear dependency direction from book to author along the way. Books depend on authors but not the other way round.
I faced a similar problem: while sending my POJO (containing a bi-directional @OneToMany and @ManyToOne mapping) as JSON via a REST API, the data was persisted in both the parent and child entities, but the foreign key relation was not established. This happens because bi-directional associations need to be maintained manually.
JPA provides the @PrePersist annotation, which can be used to make sure that the annotated method is executed before the entity is persisted. Since JPA first inserts the parent entity into the database followed by the child entity, I included a method annotated with @PrePersist that iterates through the list of child entities and manually sets the parent entity on each of them.
In your case it would be something like this:
class AuthorEntity {

    @PrePersist
    public void populateBooks() {
        for (BookEntity book : books)
            book.addToAuthorList(this);
    }
}

class BookEntity {

    @PrePersist
    public void populateAuthors() {
        for (AuthorEntity author : authors)
            author.addToBookList(this);
    }
}
After this you might get an infinite recursion error during JSON serialization; to avoid that, annotate the parent side with @JsonManagedReference and the child side with @JsonBackReference. This solution worked for me; hopefully it will work for you too.
This link has a very good tutorial on how you can navigate the recursion problem: Bidirectional Relationships
I was able to use @JsonManagedReference and @JsonBackReference and it worked like a charm.
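For what it's worth, a minimal sketch of where the two Jackson annotations go in the @OneToMany/@ManyToOne case described above (class and field names are illustrative; this affects JSON serialization only, not persistence):
@Entity
class Parent {
    @Id @GeneratedValue
    Long id;

    @JsonManagedReference            // serialized normally (forward side)
    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    List<Child> children;
}

@Entity
class Child {
    @Id @GeneratedValue
    Long id;

    @JsonBackReference               // omitted from the JSON, which breaks the cycle
    @ManyToOne
    Parent parent;
}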
I believe one can also utilize @RepositoryEventHandler by adding a @HandleBeforeLinkSave handler to cross-link the bi-directional relation between entities. This seems to be working for me.
@Component
@RepositoryEventHandler
public class BiDirectionalLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Collection<Book> books) {
        for (Book b : books) {
            b.setAuthor(author);
        }
    }
}
Note: @HandleBeforeLinkSave is dispatched based on the first parameter; if you have multiple relations in your equivalent of an Author class, the second parameter should be Object and you will need to test within the method for the different relation types.

Why am I getting "deleted instance passed to merge" when merging the entity first?

I believe the entity that I wish to delete is a managed entity. But regardless, why does merging it and then removing it give me the following error:
deleted instance passed to merge
Someone said on Stack Overflow that merge should be ignored if it is a managed entity. So why is this not being ignored?
The way I wish to delete it is like so:
TrialUser mergedEntity = em.merge(tu);
em.remove(mergedEntity);
But this errors; if I get rid of the first line it seems to work fine. However, I want it the other way because that is consistent with the rest of the code.
EDIT:
@PersistenceContext(unitName = "UnitName")
protected EntityManager entityManager;

@Table(name = "TRIAL_USER")

@Id
private BigDecimal id;

@ManyToOne(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER)
@JoinColumn(name = "TRIAL_USER_CLASS_ID3")
private TrialUserElement trialUserElement3;

@ManyToOne(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER)
@JoinColumn(name = "TRIAL_USER_CLASS_ID1")
private TrialUserElement trialUserElement1;

@ManyToOne(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER)
@JoinColumn(name = "TRIAL_USER_CLASS_ID2")
private TrialUserElement trialUserElement2;
You can get this error when you run some code inside a transaction and the commit happens at the end of the method, for example when using Spring in a method or class annotated with
@Transactional
This happens because you first delete the object (without committing) and then try to update it.
This code will generate the exception:
@Transactional
public void myMethod() {
    dao.delete(myObject);
    myObject.setProperty("some value");
    dao.save(myObject);
}
To avoid the error you should not delete and then save in the same transaction.
This is a bit of a shot in the dark, as I can't run your code, and these types of problems can turn out to be a bit complex. But rest assured that it should be fine to merge and then delete. I suspect it may be related to your many-to-one associated entities.
At the point where the transaction commits, the remove is being cascaded to the linked entities.
Even though the merge is redundant for your parent entity, I think the merge is being cascaded to the child entities, which have been deleted, hence the exception.
Try changing your cascade rules: pull them back to CascadeType.MERGE (for all three) and see if you still get the exception. Or change to CascadeType.DELETE; this will prevent the merge from being cascaded.
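For illustration, the first option would look roughly like this for each of the three fields from the question:
// Cascade only MERGE (not REMOVE), so the redundant merge of the parent is no
// longer propagated to already-deleted, eagerly-fetched children.
// Repeat for all three TrialUserElement fields.
@ManyToOne(cascade = CascadeType.MERGE, fetch = FetchType.EAGER)
@JoinColumn(name = "TRIAL_USER_CLASS_ID1")
private TrialUserElement trialUserElement1;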
I faced the same issue. I discovered that I had a cascade-delete annotation on the child entity, so when I deleted the child during an update of the parent entity, it triggered a cascade delete of the parent as well, and I got this error.
To resolve the issue, just remove the cascade delete from the child entity.
The JPA spec says (3.2.7.1 Merging Detached Entity State):
If X is a removed entity instance, an IllegalArgumentException will be thrown by the merge operation (or the transaction commit will fail).
That is why you cannot merge a deleted entity.

Removing entries in DB by setting their status in JPA

My team made a decision not to remove entries from the database, but to give them a status such as ACTIVE or DELETED.
The problem arises when, for an entity - let's call it Customer - I have a collection of other entities:
@Entity
public class Customer {

    @ManyToMany
    private List<Order> orders;

    private MyEnumStatus status;
    ...
}
If I want to 'remove' one order from the orders list, I have no influence over how it is done (in particular, it will certainly not set the desired status on the entry in the link table; it will just remove the record from it).
@PersistenceContext
EntityManager em;
...
Customer customer = customerService.getRandomOne();
customer.getOrders().remove(0);
em.merge(customer);
My question is: is it possible to apply a status field (in the Order entity) to this scenario? I mean, to somehow override this behaviour so that it sets statuses instead of removing entries.
Yes it is.
You can use either Events or Interceptors. Both are described in chapter 14 of the reference docs. This option is powerful but hairy; play with it at your own risk.
Soft deletes are probably a better option. @mdatwood's link shows one way of doing this. There's a fuller example at feraturenotbug. Don't forget to tweak all your queries to return only objects that aren't deleted. That's mentioned in the example.
You need to implement logical deletes. See this answer for more information.
You can override the remove() behaviour using the @SQLDelete annotation on your Customer entity, as below:
@Entity
@SQLDelete(sql = "UPDATE customer SET status = 'deleted' WHERE id = ?")
public class Customer {

    @ManyToMany
    private List<Order> orders;
}
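If you also want queries to skip the soft-deleted rows, Hibernate's @Where annotation is a common companion to @SQLDelete. A sketch (the status column and value are assumptions):
@Entity
@SQLDelete(sql = "UPDATE customer SET status = 'deleted' WHERE id = ?")
@Where(clause = "status <> 'deleted'") // filter soft-deleted rows out of queries
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String status;

    @ManyToMany
    private List<Order> orders;
}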

Found shared references to a collection (org.hibernate.HibernateException)

I got this error message:
error: Found shared references to a collection: Person.relatedPersons
When I tried to execute addToRelatedPersons(anotherPerson):
person.addToRelatedPersons(anotherPerson);
anotherPerson.addToRelatedPersons(person);
anotherPerson.save();
person.save();
My domain:
class Person {
    static hasMany = [relatedPersons: Person]
}
Any idea why this happens?
Hibernate shows this error when you attempt to persist more than one entity instance sharing the same collection reference (i.e. the collection identity in contrast with collection equality).
Note that it means the same collection, not collection element - in other words relatedPersons on both person and anotherPerson must be the same. Perhaps you're resetting that collection after entities are loaded? Or you've initialized both references with the same collection instance?
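For example, a minimal way to reproduce the error (illustrative names), which can help pin down where the shared reference comes from:
// Both entities end up holding the *same* List instance; Hibernate rejects this
// at flush time with "Found shared references to a collection".
Person person = new Person();
Person anotherPerson = new Person();

List<Person> shared = new ArrayList<>();
person.setRelatedPersons(shared);
anotherPerson.setRelatedPersons(shared); // same reference, not a copy

session.save(person);
session.save(anotherPerson);             // fails when the session flushes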
I had the same problem. In my case, the issue was that someone used BeanUtils to copy the properties of one entity to another, so we ended up having two entities referencing the same collection.
Given that I spent some time investigating this issue, I would recommend the following checklist:
Look for scenarios like entity1.setCollection(entity2.getCollection()) where getCollection() returns the internal reference to the collection (if getCollection() returns a new instance of the collection, you don't need to worry); a defensive-copy getter is sketched after this list.
Look at whether clone() has been implemented correctly.
Look for BeanUtils.copyProperties(entity1, entity2).
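One way to rule out the first scenario is to copy collections at the entity boundary; a sketch with illustrative names:
public class Entity1 {

    private Set<Item> items = new HashSet<>();

    // Hand out a copy so callers never hold the entity's internal collection.
    public Set<Item> getItems() {
        return new HashSet<>(items);
    }

    // Copy in as well, so entity1.setItems(entity2.getItems()) stays safe.
    public void setItems(Set<Item> items) {
        this.items = new HashSet<>(items);
    }
}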
An explanation in practice: if you try to save your object, e.g.:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
message.setFolders(folders);
MESSAGESDAO.getMessageDAO().save(message);
you don't need to set the updated collection back on the parent object:
message.setFolders(folders);
Simply save your parent object like this:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
// Do not set the updated collection here
MESSAGESDAO.getMessageDAO().save(message);
Reading online, the cause of this error can also be a Hibernate bug; a workaround that seems to work is to call:
session.clear()
You must put the clear() after getting the data and before the commit and close; see the example:
//getting data
SrReq sr = (SrReq) crit.uniqueResult();
SrSalesDetailDTO dt=SrSalesDetailMapper.INSTANCE.map(sr);
//CLEAR
session.clear();
//close session
session.getTransaction().commit();
session.close();
return dt;
I use this solution for selects from the database; for updates or inserts I don't know whether it works or causes problems.
My problem is 100% identical to this one: http://www.progtown.com/topic128073-hibernate-many-to-many-on-two-tables.html
I have experienced a great example of reproducing such a problem.
Maybe my experience will help someone one day.
Short version
Check that the @Embedded Id of your container entity has no possible collisions.
Long version
When Hibernate instantiates a collection wrapper, it searches for an already-instantiated collection by CollectionKey in an internal Map.
For an entity with an @Embedded id, CollectionKey wraps EmbeddedComponentType and uses the @Embedded Id properties for equality checks and hashCode calculation.
So if you have two entities with equal @Embedded Ids, Hibernate will instantiate and register a new collection for the first key and then find that same collection for the second key.
So two entities with the same @Embedded Id will be populated with the same collection.
Example
Suppose you have an Account entity which has a lazy set of loans, and Account has an @Embedded Id consisting of several parts (columns).
@Entity
@Table(schema = "SOME", name = "ACCOUNT")
public class Account {

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "account")
    private Set<Loan> loans;

    @Embedded
    private AccountId accountId;
    ...
}

@Embeddable
public class AccountId {

    @Column(name = "X")
    private Long x;

    @Column(name = "BRANCH")
    private String branchId;

    @Column(name = "Z")
    private String z;
    ...
}
Then suppose that Account has an additional property mapped by the @Embedded Id but related to another entity, Branch.
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "BRANCH")
@MapsId("accountId.branchId")
@NotFound(action = NotFoundAction.IGNORE) // Look at this!
private Branch branch;
It could happen that you have no FK for the Account-to-Branch relation in the DB, so the Account.BRANCH column can hold any value, including one not present in the Branch table.
According to @NotFound(action = NotFoundAction.IGNORE), if the value is not present in the related table, Hibernate will load null for the property.
If the X and Z columns of two Accounts are the same (which is fine), but BRANCH is different and not present in the Branch table, Hibernate will load null for both and the Embedded Ids will be equal.
So two CollectionKey objects will be equal and will have same hashCode for different Accounts.
result = {CollectionKey#34809} "CollectionKey[Account.loans#Account#43deab74]"
role = "Account.loans"
key = {Account#26451}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
result = {CollectionKey#35653} "CollectionKey[Account.loans#Account#33470aa]"
role = "Account.loans"
key = {Account#35225}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
Because of this, Hibernate will load the same PersistentSet for the two entities.
In my case, I was copying and pasting code from my other classes, so I did not notice that the getter code was badly written:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "credito")
public Set getConceptoses() {
    return this.letrases;
}

public void setConceptoses(Set conceptoses) {
    this.conceptoses = conceptoses;
}
Everything references conceptoses, but if you look at the getter, it returns letrases.
I also got the same issue: someone had used BeanUtils.copyProperties(source, target), and both source and target were using the same collection as an attribute.
So I just used a deep copy, as described below:
How to Clone Collection in Java - Deep copy of ArrayList and HashSet
Consider an entity:
public class Foo {
    private List<User> user;
    /* with getters and setters */
}
And consider a business logic class:
class Foo1 {
    List<User> user = new ArrayList<>();
    user = foo.getUser();
}
Here the user and foo.getUser() share the same reference. But saving the two references creates a conflict.
The proper usage should be:
class Foo1 {
    List<User> user = new ArrayList<>();
    user.addAll(foo.getUser());
}
This avoids the conflict.
I faced a similar exception in my application. After looking into the stack trace it was clear that the exception was thrown from within a FlushEntityEventListener class.
In Hibernate 4.3.7 the MSLocalSessionFactory bean no longer supports the eventListeners property. Hence, one has to explicitly fetch the service registry from the individual Hibernate session beans and then set the required custom event listeners.
In the process of adding custom event listeners, we need to make sure the corresponding default event listeners are removed from the respective Hibernate session.
If the default event listener is not removed, you end up with two event listeners registered against the same event. In that case, while iterating over these listeners, any collection in the session is flagged as already reached by the first listener, and processing the same collection with the second listener throws this Hibernate exception.
So, make sure that when registering custom listeners the corresponding default listeners are removed from the registry.
My problem was that I had set up a @ManyToOne relationship. Maybe if the answers above don't fix your problem, you might want to check the relationship that was mentioned in the error message.
Posting here because it took me over 2 weeks to get to the bottom of this, and I still haven't fully resolved it.
There is a chance that you're also just running into this bug, which has been around since 2017 and hasn't been addressed.
I honestly have no clue how to get around this bug. I'm posting here for my sanity and hopefully to shave a couple of weeks off your googling. I'd love any input anyone may have, but my particular "answer" to this problem was not listed in any of the above answers.
I had to replace the following collection initialization:
challenge.setGoals(memberChallenge.getGoals());
with
challenge.setGoals(memberChallenge.getGoals()
        .stream()
        .map(dmo -> {
            final ChallengeGoal goal = new ChallengeGoalImpl();
            goal.setMemberChallenge(challenge);
            goal.setGoalDate(dmo.getGoalDate());
            goal.setGoalValue(dmo.getGoalValue());
            return goal;
        })
        .collect(Collectors.toList()));
I changed
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(
    name = "some_id",
    referencedColumnName = "some_id"
)
to
@OneToMany(mappedBy = "some_id", cascade = CascadeType.ALL)
You're (indirectly) using references, so sometimes you're copying the reference instead of the object/collection you want. Hibernate checks for this and throws that error. Here's what you can do:
Don't copy the object/collection;
Instantiate a new, empty one;
Make a function that copies its content, and call it.
For example:
public Entity copyEntity(Entity e) {
    Entity copy = new Entity();
    copy.setName(e.getName());
    copy.setCollection2(null);
    copy.setCollection3(copyCollection(e.getCollection3()));
    return copy;
}
In a one-to-many / many-to-one relationship this error will occur if you attempt to assign the same instance from the many-to-one side to more than one instance on the one-to-many side.
For example, each person can have many books, but each of those books can be owned by only one person; if you give a book more than one owner, this issue is raised.

How do you remove rows after changing the item in a JPA OneToOne relationship?

How do you get a OneToOne item to be removed automatically with JPA/Hibernate? I would expect that simply setting the OneToOne item to null in the containing class would be enough for Hibernate to delete it.
Given a simple object, simplified:
@Entity
public class Container {

    private Item item;

    @OneToOne(cascade = CascadeType.ALL)
    public Item getItem() { return item; }
    public void setItem(Item newItem) { item = newItem; }
}
When an Item is set on the Container and the Container is persisted with merge, a row gets inserted.
Container container = new Container();
container.setItem(new Item());
container = entityManager.merge(container);
// Row count is 1
But when the item is set to null, or to another item, the old object still exists in the table.
container.setItem(null);
container = entityManager.merge(container);
// Row count is STILL 1, leaving orphaned rows.
So, how do I remove these OneToOne orphans?
I'm guessing that the reason behind Hibernate not allowing DELETE_ORPHAN on OneToOne relations is related to this issue.
If you really want this badly, you can hack your way around it with these steps:
transform your OneToOne relation into a OneToMany;
add a method that just returns the first element of the child collection (this is optional but handy);
use the DELETE_ORPHAN annotation.
Of course, this is a big hack; a rough sketch follows below.
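A sketch of that hack, using Hibernate-specific annotations (names are illustrative):
@Entity
public class Container {

    @OneToMany(cascade = CascadeType.ALL)
    @Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
    private List<Item> items = new ArrayList<>();

    // Convenience accessor so callers can keep treating this as one-to-one.
    public Item getItem() {
        return items.isEmpty() ? null : items.get(0);
    }

    public void setItem(Item newItem) {
        items.clear();
        if (newItem != null) {
            items.add(newItem);
        }
    }
}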
As JPA 2.0 has been released for a very long time now, you could simply use:
@OneToOne(cascade = CascadeType.ALL, orphanRemoval = true)
Try to change to
@OneToOne
@Cascade(cascade = {CascadeType.ALL, CascadeType.DELETE_ORPHAN})
See also my answer on a similar post here.
Unfortunately, there is no way to do this in JPA without tying yourself to Hibernate's implementation.
So yes, as Foxy says, you can use org.hibernate.annotations.CascadeType instead of the standard JPA annotation, which allows you to specify DELETE_ORPHAN. If you want to stick to the JPA abstraction, you must delete orphans yourself for now.
