How to delete children when updating the parent? - java

I have "Parent" and "Child" hibernate entities.
On "Parent" I have a Set<Child> to hold it's children.
When I update the Parent with new children, all works fine: the children are created on "child" table.
But, when I remove one element from the Parent hashset and save, the correspondent child on database is not be deleted.
Here is:
On PARENT (named Workflow):
@OneToMany(orphanRemoval=true, cascade = CascadeType.ALL, mappedBy="workflow", fetch = FetchType.EAGER)
private Set<ActivityDB> activities;
On CHILD (named Activity):
@ManyToOne
@JoinColumn(name="id_workflow")
@Fetch(FetchMode.JOIN)
private WorkflowDB workflow;
I'm working with a persistent instance inside the session. No error is raised; everything seems to work, but the row is still in the database.
As a test, I load the Workflow and do
workflow.activities.remove( activity_index_x )
and then save the workflow using session.update( workflow ),
but "activity_index_x" is still in the database and comes back to life when I reload the workflow.

Make sure you go through the manual regarding bidirectional association links.
The best practices include adding the add/remove child methods:
class WorkflowDB {

    public void remove(ActivityDB a) {
        if (a != null) {
            this.activities.remove(a);
            a.setWorkflow(null);
        }
    }

    public void add(ActivityDB a) {
        if (a != null) {
            this.activities.add(a);
            a.setWorkflow(this);
        }
    }
}
But because you use a Set as the one-to-many side, you need to pay extra attention to equals and hashCode. The best approach is to use a business key for the equality check and the hash-code algorithm, and never use the database identifier for equals/hashCode, especially in conjunction with hash-based data structures (Set/Map).
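A minimal sketch of business-key equality, assuming ActivityDB has a natural key such as a unique name (the name column here is hypothetical, purely for illustration):
@Entity
public class ActivityDB {

    @Id
    @GeneratedValue
    private Long id;

    // hypothetical business key; pick a column that is unique and rarely changes
    @Column(nullable = false, unique = true)
    private String name;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ActivityDB)) return false;
        // equality is based on the business key, never on the generated id
        return name != null && name.equals(((ActivityDB) o).name);
    }

    @Override
    public int hashCode() {
        // stable across persist, unlike a generated identifier
        return name == null ? 0 : name.hashCode();
    }
}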
Bidirectional associations are more complicated to manage than unidirectional ones. If you don't really need the one-to-many side, you can remove it and replace it with a query instead. That way you'd have to manage only the many-to-one side.
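For example, if you dropped the activities collection entirely, the children could be fetched on demand with a query on the many-to-one side; a sketch, where the HQL string and the session variable are illustrative:
List<ActivityDB> activities = session
        .createQuery("select a from ActivityDB a where a.workflow = :workflow", ActivityDB.class)
        .setParameter("workflow", workflow)
        .getResultList();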

This is caused by the child-to-parent reference not being cleared. Since you mapped both sides (and configured the mapping this way), Hibernate actually looks at the child end of the relation.
The best way to fix this is to also clear the workflow field on the activity when you remove the activity from the workflow (and vice versa), so:
class Workflow {

    public void remove(Activity a) {
        if (this.activities.remove(a)) {
            a.setWorkflow(null);
        }
    }

    public void add(Activity a) {
        if (this.activities.add(a)) {
            a.setWorkflow(this);
        }
    }
}
The main question is: on which side of the relation do you want to maintain the relation state?
You could also map the relation on the Workflow (do not use the mappedBy attribute, but use a JoinTable annotation to keep the column on the child table) and only map the parent Workflow as a read-only (insertable=false, updatable=false) field in the Activity.
This way the Workflow is completely in control of which activities are part of it and the activities can still see the workflow they are part of.
class Workflow {
    @OneToMany
    @JoinTable(...)
    private Set<Activity> activities;
}

class Activity {
    @ManyToOne
    @JoinColumn(name = "id_workflow", insertable = false, updatable = false)
    private Workflow workflow;
}

Related

Proper way to update a Set element in a JPA #OneToMany relationship?

Let's assume we have a bidirectional one-to-many relationship between Parent and Child.
I like the idea of modeling that relationship with a Set, because of its intrinsic nature of disallowing duplicates.
Question:
1) What would be the proper JPA way to update a child in such a situation?
Query the Parent and pass an updated Child into it?
Query the Child directly and just call its setters?
2) Does either way have performance advantages or disadvantages?
@Entity
public class Parent extends AbstractPersistable<Long> {

    @OneToMany(cascade = CascadeType.ALL, ... )
    private Set<Child> children = new HashSet<>();

    public void addChild( Child child ) { ... }

    public void removeChild( Child child ) { ... }

    // non-anemic domain model ?
    public void updateChild( Child child ) {
        // how to update the element in the Set?
    }
}
UPDATE:
How do I properly write the update method, given that Sets in Java do not have a get method?
To update a Child, you don't need to operate on the parent collection.
Thanks to the dirty checking mechanism, once the Child becomes managed in the currently running Persistence Context, every change is picked up automatically and synchronized to the database.
That's the reason there is no update method in JPA: you only have persist and merge on the EntityManager.
So, you need to do the following steps:
You load the Child by id:
Child child = entityManager.find(Child.class, childId);
Make the changes on the Child and you are done:
child.setName(newName);
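Putting the two steps together, a minimal sketch assuming a resource-local EntityManager where you control the transaction yourself (in a container-managed setup the transaction boundaries would be handled for you):
entityManager.getTransaction().begin();

Child child = entityManager.find(Child.class, childId); // child is now managed
child.setName(newName);                                 // picked up by dirty checking

entityManager.getTransaction().commit();                // the UPDATE is issued at flush/commit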

Wicket - Serialization of persisted and non-persisted JPA entities

I know that when using Wicket with JPA frameworks it is not advisable to serialize entities that have already been persisted to the database (because of problems with lazy fields and to save space). In such cases we are supposed to use LoadableDetachableModel. But what about the following use-case?
Suppose we want to create a new entity (say, a Contract) which will consist, among other things, of persisted entities (say, a Client which is selected from a list of clients stored in the DB). The entity under creation is a model object of some Wicket component (say, a Wizard). In the end (when we finish our wizard) we save the new entity to the DB. So my question is: what is the best generic solution to the serialization problem of such model objects? We can't use LDM because the entity is not in the DB yet but we don't want our inner entities (like Client) to be serialized wholly, too.
My idea was to implement a custom wicket serializer that checks if the object is an entity and if it is persisted. If so, store only its id, otherwise use the default serialization. Similarly, when deserializing use the stored id and get the entity from the DB or deserialize using the default mechanism. Not sure, though, how to do that in a generic way. My next thought was that if we can do it, then we do not need any LDM anymore, we can just store all our entities in simple org.apache.wicket.model.Model models and our serialization logic will take care of them, right?
Here's some code:
@Entity
class Client {
    String clientName;

    @ManyToOne(fetch = FetchType.LAZY)
    ClientGroup group;
}

@Entity
class Contract {
    Date date;

    @ManyToOne(fetch = FetchType.LAZY)
    Client client;
}
class ContractWizard extends Wizard {
    ContractWizard(String markupId, IModel<Contract> model) {
        super(markupId);
        setDefaultModel(model);
    }
}
Contract contract = DAO.createEntity(Contract.class);
ContractWizard wizard = new ContractWizard("wizard", ?);
How to pass the contract? If we just say Model.of(contract), the whole contract will be serialized along with the inner client (and it can be big); moreover, if we access contract.client.group after deserialization we can bump into this problem: https://en.wikibooks.org/wiki/Java_Persistence/Relationships#Serialization.2C_and_Detaching
So I wonder how people go about solving such issues, I'm sure it's a fairly common problem.
I guess there are 2 approaches to your problem:
a.) Only save the stuff the user actually sees in Models. In your example that might be "contractStartDate", "contractEndDate", and a list of clientIds. That's the main approach if you don't want your database objects in your view.
b.) Write your own LoadableDetachableModel and make sure you only serialize transient objects. For example (assuming that any negative id means the object is not yet saved to the database):
public class MyLoadableDetachableModel extends LoadableDetachableModel<MyObject> {

    // MyObject stands in for your entity type
    private MyObject myObject;
    private Integer id;

    public MyLoadableDetachableModel(MyObject myObject) {
        this.myObject = myObject;
        this.id = myObject.getId();
    }

    @Override
    protected MyObject load() {
        if (id < 0) {
            // not persisted yet: keep serving the in-memory instance
            return myObject;
        }
        return myObjectDao.getMyObjectById(id);
    }

    @Override
    protected void onDetach() {
        super.onDetach();
        if (myObject != null) {
            id = myObject.getId();
            if (id >= 0) {
                // persisted by now: drop the reference so it is reloaded next time
                myObject = null;
            }
        }
    }
}
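Applied to the wizard from the question, the missing constructor argument could then be such a model wrapping the not-yet-persisted contract. A sketch, where ContractModel is a hypothetical Contract-typed variant of the class above:
Contract contract = DAO.createEntity(Contract.class);
ContractWizard wizard = new ContractWizard("wizard", new ContractModel(contract));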
The downside of this is that you'll have to make your database objects Serializable, which is not really ideal and can lead to all kinds of problems. You would also need to decouple the references to other entities from the transient object by using a ListModel.
Having worked with both approaches, I personally prefer the first. From my experience, injecting DAO objects into Wicket can lead to disaster. :) I would only use this in view-only projects that aren't too big.
Most projects I know of just accept serializing referenced entities (e.g. your Clients) along with the edited entity (Contract).
Using conversations (keeping a Hibernate/JPA session open over several requests) is a nice alternative for applications with complex entity relations:
The Hibernate session and its entities are kept separate from the page and are never serialized. The component just keeps an identifier to fetch its conversation.

hibernate issue when cascading entities from a relationship

I made myself a little Hibernate sandbox to understand how it works.
I've done quite well so far with all the basics. Everything works as expected.
I only have one unsolved issue.
To make it short, I have a Rats entity and a Sickness entity.
A Rat can have a single Sickness.
The association is correctly set up in the DB, and the entity files include this part:
In the Rats class:
[...]
@ManyToOne(fetch = FetchType.LAZY)
@Cascade({ CascadeType.SAVE_UPDATE, CascadeType.DELETE })
@JoinColumn(name = "Sickness_Id")
public Sickness getSickness() {
    return this.sickness;
}
[...]
In the Sickness class:
[...]
@OneToMany(fetch = FetchType.LAZY, mappedBy = "sickness")
@Cascade({ /*CascadeType.SAVE_UPDATE,*/ CascadeType.MERGE, CascadeType.REFRESH })
public Set<Rats> getRatses() {
    return this.ratses;
}
[...]
If I create a new Rats with a new Sickness and save the Rats, the cascade works as expected and the Sickness is automatically added to the DB too.
The deletion part works too: when I delete a Rats, its Sickness is deleted.
What does not work is creating a Sickness and trying to spread it to many Rats via its setRatses method:
String sick_name2 = "Tourista";
System.out.println("\nsetting new sickness: " + sick_name2 + " and assigning it to all rats");

Sickness sickness2 = new Sickness();
sickness2.setNom(sick_name2);

List<Rats> sickRatsList = session.createCriteria(Rats.class).list();
Set<Rats> sickRatsSet = new HashSet<Rats>();
for (Rats rat : sickRatsList) {
    sickRatsSet.add(rat);
}
sickness2.setRatses(sickRatsSet);
session.save(sickness2);
Debugging this shows that the Sickness is correctly inserted into the DB, and its ratses set is correctly filled with all the rats.
But... if I check the Rats' status, their Sickness has not been updated.
Trying different CascadeType values on the Sickness relationship did not help.
I know that I could solve it with something like:
List<Rats> sickRatsList = session.createCriteria(Rats.class).list();
for (Rats rat : sickRatsList) {
    rat.setSickness(sickness2);
    session.save(rat);
}
But I would like to understand how to do it via Sickness.setRatses,
so that I can find my way later with a many-to-many relationship (I suppose it will be pretty similar).
Thanks in advance.
A bidirectional association has an owner side (the side without the mappedBy attribute), and an inverse side (the side with the mappedBy attribute).
Hibernate only considers the owner side when deciding which entities are associated with each other.
Adding rats to a sickness thus won't make Hibernate associate the rats with the sickness, since that only modifies the inverse side. You must set each rat's sickness.
Note that using a DELETE cascade on a ManyToXxx annotation doesn't make much sense. There is no reason to delete the tourista sickness from the database as soon as one of the thousand rats having the tourista is deleted. And that will obviously cause an exception anyway, since 999 other rats have a foreign key to the tourista sickness.
This error happens because you're retrieving objects that are already cached in the first-level cache (the session), and those objects don't have the bidirectional association set correctly. In your code, you never call rat.setSickness(sickness).
Try calling the following methods and check if the data is now correct
session.flush()
session.clear()
// load the rats /sickness again and the relations should be set.
Bottom line: when you have a bidirectional association, it is the developer's responsibility to add/set the objects on both ends, otherwise you'll run into this error. The simplest way to avoid it is to have only one method, in one of your two objects, that knows how to maintain the association. For example:
public class Rat {

    public void setSickness(Sickness sickness) {
        this.sickness = sickness;
        sickness.addRat(this);
    }
}

public class Sickness {

    // leave this as package-protected! So the only way to set the association is from the Rat
    void addRat(Rat rat) {
        rats.add(rat);
    }
}
You might want to read the Hibernate documentation about Session and how it works as a 1st level cache.
Thanks JB and Augusto, I have a much better understanding now.
I was able to solve my issue by overriding setRatses this way:
public void setRatses(Set<Rats> ratses) {
    this.ratses = ratses;
    for (Rats rat : ratses) {
        rat.setSickness(this);
    }
}
This brings me to another methodological question.
I found out that if I do the following:
raton.setSickness(sickness1);
raton.display();     // -> raton.sickness == sickness1, as expected
sickness1.display(); // -> sickness1.ratses does not contain raton, for the reasons you guys pointed out
I can either use session.flush() and session.clear(), or commit the transaction and start a new one, if I need sickness1.ratses to be up to date.
I suppose that I can also override Rats.setSickness this way:
public void setSickness(Sickness sickness) {
    if (sickness.ratses.contains(this)) {
        sickness.ratses.remove(this);
    }
    this.sickness = sickness;
    sickness.ratses.add(this);
}
This way, my sickness is up to date inside the session without me needing to flush the session.
Would this be a good idea?
On the performance side, I suppose the override solution generates additional DB operations that might not really be needed?

How to maintain bi-directional relationships with Spring Data REST and JPA?

Working with Spring Data REST, if you have a OneToMany or ManyToOne relationship, the PUT operation returns 200 on the "non-owning" entity but does not actually persist the joined resource.
Example Entities:
@Entity(name = 'author')
@ToString
class AuthorEntity implements Author {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    String fullName

    @ManyToMany(mappedBy = 'authors')
    Set<BookEntity> books
}

@Entity(name = 'book')
@EqualsAndHashCode
class BookEntity implements Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(nullable = false)
    String title

    @Column(nullable = false)
    String isbn

    @Column(nullable = false)
    String publisher

    @ManyToMany(fetch = FetchType.LAZY, cascade = [CascadeType.ALL])
    Set<AuthorEntity> authors
}
If you back them with a PagingAndSortingRepository, you can GET a Book, follow the authors link on the book, and do a PUT with the URI of an author to associate with it. You cannot go the other way.
If you do a GET on an Author and do a PUT on its books link, the response returns 200, but the relationship is never persisted.
Is this the expected behavior?
tl;dr
The key to that is not so much anything in Spring Data REST - as you can easily get it to work in your scenario - but making sure that your model keeps both ends of the association in sync.
The problem
The problem you see here arises from the fact that Spring Data REST basically modifies the books property of your AuthorEntity. That itself doesn't reflect this update in the authors property of the BookEntity. This has to be worked around manually, which is not a constraint that Spring Data REST makes up but the way that JPA works in general. You will be able to reproduce the erroneous behavior by simply invoking setters manually and trying to persist the result.
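A plain-JPA sketch of that reproduction (entityManager, authorId and bookId are illustrative): only the inverse side is touched, so nothing is written to the join table:
AuthorEntity author = entityManager.find(AuthorEntity.class, authorId);
BookEntity book = entityManager.find(BookEntity.class, bookId);

author.getBooks().add(book);       // inverse (mappedBy) side only
// book.getAuthors().add(author);  // owning side is never updated

entityManager.flush();             // no row appears in the join table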
How to solve this?
If removing the bi-directional association is not an option (see below on why I'd recommend this) the only way to make this work is to make sure changes to the association are reflected on both sides. Usually people take care of this by manually adding the author to the BookEntity when a book is added:
class AuthorEntity {

    void add(BookEntity book) {
        this.books.add(book);
        if (!book.getAuthors().contains(this)) {
            book.add(this);
        }
    }
}
The additional if clause would have to be added on the BookEntity side as well if you want to make sure that changes from the other side are propagated, too. The if is basically required, as otherwise the two methods would constantly call each other.
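The mirrored method on the BookEntity side might look like this (a sketch; it relies on the add method shown above existing on AuthorEntity):
class BookEntity {

    void add(AuthorEntity author) {
        this.authors.add(author);
        // the guard breaks the mutual recursion between the two add methods
        if (!author.getBooks().contains(this)) {
            author.add(this);
        }
    }
}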
Spring Data REST by default uses field access, so there's actually no method you can put this logic into. One option would be to switch to property access and put the logic into the setters. Another option is to use a method annotated with @PreUpdate/@PrePersist that iterates over the entities and makes sure the modifications are reflected on both sides.
Removing the root cause of the issue
As you can see, this adds quite a lot of complexity to the domain model. As I joked on Twitter yesterday:
#1 rule of bi-directional associations: don't use them… :)
It usually simplifies the matter if you try not to use bi-directional relationship whenever possible and rather fall back to a repository to obtain all the entities that make up the backside of the association.
A good heuristics to determine which side to cut is to think about which side of the association is really core and crucial to the domain you're modeling. In your case I'd argue that it's perfectly fine for an author to exist with no books written by her. On the flip side, a book without an author doesn't make too much sense at all. So I'd keep the authors property in BookEntity but introduce the following method on the BookRepository:
interface BookRepository extends Repository<Book, Long> {
    List<Book> findByAuthor(Author author);
}
Yes, that requires all clients that previously could just have invoked author.getBooks() to now work with a repository. But on the positive side you've removed all the cruft from your domain objects and created a clear dependency direction from book to author along the way. Books depend on authors but not the other way round.
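For illustration, a caller that previously used author.getBooks() would then ask the repository instead (a sketch; bookRepository is an injected instance of the interface above):
List<Book> booksOfAuthor = bookRepository.findByAuthor(author);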
I faced a similar problem: while sending my POJO (containing a bidirectional @OneToMany/@ManyToOne mapping) as JSON via a REST API, the data was persisted in both the parent and child entities, but the foreign-key relation was not established. This happens because bidirectional associations need to be maintained manually.
JPA provides the @PrePersist annotation, which ensures that the annotated method is executed before the entity is persisted. Since JPA first inserts the parent entity into the database, followed by the child entities, I included a method annotated with @PrePersist that iterates through the list of child entities and manually sets the parent entity on each of them.
In your case it would be something like this:
class AuthorEntity {

    @PrePersist
    public void populateBooks() {
        for (BookEntity book : books) {
            book.addToAuthorList(this);
        }
    }
}

class BookEntity {

    @PrePersist
    public void populateAuthors() {
        for (AuthorEntity author : authors) {
            author.addToBookList(this);
        }
    }
}
After this you might get an infinite recursion error; to avoid that, annotate your parent class with @JsonManagedReference and your child class with @JsonBackReference. This solution worked for me, hopefully it will work for you too.
This link has a very good tutorial on how you can navigate the recursion problem: Bidirectional Relationships
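For a plain parent/child pair (@OneToMany/@ManyToOne) like the one described in this answer, the Jackson annotations end up on the two ends of the reference; a sketch with hypothetical Parent/Child entities:
@Entity
public class Parent {

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    @JsonManagedReference   // serialized as usual
    private List<Child> children = new ArrayList<>();
}

@Entity
public class Child {

    @ManyToOne
    @JsonBackReference      // omitted during serialization to break the cycle
    private Parent parent;
}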
I was able to use @JsonManagedReference and @JsonBackReference and it worked like a charm.
I believe one can also utilize @RepositoryEventHandler by adding a @HandleBeforeLinkSave handler to cross-link the bidirectional relation between entities. This seems to be working for me.
@Component
@RepositoryEventHandler
public class BiDirectionalLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Collection<Book> books) {
        for (Book b : books) {
            b.setAuthor(author);
        }
    }
}
Note: @HandleBeforeLinkSave is dispatched based on the first parameter. If you have multiple relations in your equivalent of an Author class, the second parameter should be Object and you will need to test within the method for the different relation types, as sketched below.
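Following that note, a handler covering several relations might be sketched like this (the instanceof branches and entity types are illustrative):
@Component
@RepositoryEventHandler
public class AuthorLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Object linked) {
        // one handler for all links saved on Author; branch per relation type
        if (linked instanceof Collection) {
            for (Object item : (Collection<?>) linked) {
                if (item instanceof Book) {
                    ((Book) item).setAuthor(author);
                }
            }
        }
    }
}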

Merging JPA entity returns old values

I have 2 JPA entities that have a bidirectional relationship between them.
@Entity
public class A {

    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    B b;
    // ...
}
and
@Entity
public class B {

    @OneToMany(mappedBy = "b", cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    Set<A> as = new HashSet<A>();
    // ...
}
Now I update some field values of a detached A, which also has relationships to some Bs (and vice versa), and merge it back with:
public String save(A a) {
    A returnedA = em.merge(a);
    // ...
}
returnedA now has the values of A prior to updating them.
I suppose that
FINEST: Merge clone with references A#a7caa3be
FINEST: Register the existing object B#cacf2dfb
FINEST: Register the existing object A#a7caa3be
FINEST: Register the existing object A#3f2584b8
indicates that the referenced As in B (which still have the old values) are responsible for overwriting the new ones?
Does anyone have a hint how to prevent this to happen?
Any idea is greatly appreciated!
Thanks in advance.
Dirk, I've had a similar problem, and the solution (I might not be leveraging the API correctly) was labor-intensive. EclipseLink maintains a cache of objects, and if they are not merged/persisted, the database often reflects the change but the cascading objects (particularly the parents) are not updated.
(I've declared A as the record joining multiple B's)
Entities:
public class A
{
    @OneToMany(cascade = CascadeType.ALL)
    Collection<B> b;
}

public class B
{
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.REFRESH}) // I don't want to cascade a persist operation, as that might create another A object
    A a;
}
In the case above a workaround is:
public void saveB(B b) // "child" side of the relationship
{
    A a = b.getA(); // do null checks as needed and get a reference to the parent
    a.getBs().add(b); // I've had the collection be null
    // persistence here
    entityInstance.merge(a); // or persist; this will cascade and use b
}

public void saveA(A a)
{
    // persistence
    entityInstance.merge(a); // or persist
}
What you're doing here is physically cascading the merge down the chain from the top. It is irritating to maintain, but it does solve the problem. Alternatively, you can deal with it by checking whether the entity is detached and refreshing/replacing it, but I've found that to be less desirable and irritating to work with.
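One way that detach-check alternative might look, as a sketch (entityInstance is the EntityManager from the snippets above):
public void saveB(B b)
{
    if (!entityInstance.contains(b)) {
        b = entityInstance.merge(b);     // reattach the detached child and work with the managed copy
    }
    A a = b.getA();
    if (a != null && entityInstance.contains(a)) {
        entityInstance.refresh(a);       // re-read the managed parent so its collection is current
    }
}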
If someone has a better answer as to what the correct setup is I would be happy to hear it. Right now I've taken this approach for my relational entities and it is definitely irritating to maintain.
Best of luck with it, I'd love to hear a better solution.
