I have 2 JPA entities that have a bidirectional relationship between them.
@Entity
public class A {
    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    B b;
    // ...
}
and
@Entity
public class B {
    @OneToMany(mappedBy = "b", cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    Set<A> as = new HashSet<A>();
    // ...
}
Now I update some field values on a detached A (which also has relationships to some Bs, and vice versa) and merge it back with:
public A save(A a) {
    A returnedA = em.merge(a);
    return returnedA;
}
returnedA now holds the values A had before the update.
I suppose that
FINEST: Merge clone with references A#a7caa3be
FINEST: Register the existing object B#cacf2dfb
FINEST: Register the existing object A#a7caa3be
FINEST: Register the existing object A#3f2584b8
indicates that the referenced As in B (which still have the old values) are responsible for overwriting the new ones?
Does anyone have a hint on how to prevent this from happening?
Any idea is greatly appreciated!
Thanks in advance.
Dirk, I've had a similar problem, and the solution (I might not be leveraging the API correctly) was labor-intensive. EclipseLink maintains a cache of objects, and if they are not merged/persisted explicitly, the database often reflects the change while the cascading objects (particularly the parents) are not updated.
(I've declared A as the record joining multiple B's)
Entities:
@Entity
public class A
{
    @OneToMany(cascade = CascadeType.ALL)
    Collection<B> bs;
}

@Entity
public class B
{
    // I don't want to cascade a persist operation, as that might create another A
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.REFRESH})
    A a;
}
In the case above a workaround is:
public void saveB(B b) // "child relationship"
{
    A a = b.getA(); // do null checks as needed and get a reference to the parent
    a.getBs().add(b); // I've had the collection be null
    // Persistence here
    entityInstance.merge(a); // or persist; this will cascade and save b
}

public void saveA(A a)
{
    // Persistence
    entityInstance.merge(a); // or persist
}
What you're doing here is manually cascading the merge down the chain from the top. It is irritating to maintain, but it does solve the problem. Alternatively, you can check whether the entity is detached and refresh/replace it, but I've found that to be less desirable and irritating to work with.
If someone has a better answer as to what the correct setup is I would be happy to hear it. Right now I've taken this approach for my relational entities and it is definitely irritating to maintain.
Best of luck with it, I'd love to hear a better solution.
So, I have found myself in quite a pickle regarding Hibernate. When I started developing my web application, I used "eager" loading everywhere so I could easily access children, parents etc.
After a while, I ran into my first problem - re-saving of deleted objects. Multiple Stack Overflow threads suggested that I should remove the object from all the collections it's in. Reading those suggestions made my "spidey sense" tingle: my relations aren't really simple, and I had to iterate over multiple objects, which made my code look kind of ugly and made me wonder if this was the best approach.
For example, take deleting an Employee (which belongs to a User, in the sense that a User can act as multiple different Employees). Let's say an Employee can leave Feedback for a Party, so an Employee can have multiple Feedbacks and a Party can have multiple Feedbacks. Additionally, both Employee and Party belong to some kind of parent object, let's say an Organization. Basically, we have:
class User {
// Has many
Set<Employee> employees;
// Has many
Set<Organization> organizations;
// Has many through employees
Set<Organization> associatedOrganizations;
}
class Employee {
// Belongs to
User user;
// Belongs to
Organization organization;
// Has many
Set<Feedback> feedbacks;
}
class Organization {
// Belongs to
User user;
// Has many
Set<Employee> employees;
// Has many
Set<Party> parties;
}
class Party {
// Belongs to
Organization organization;
// Has many
Set<Feedback> feedbacks;
}
class Feedback {
// Belongs to
Party party;
// Belongs to
Employee employee;
}
Here's what I ended up with when deleting an employee:
// First remove feedbacks related to employee
Iterator<Feedback> iter = employee.getFeedbacks().iterator();
while (iter.hasNext()) {
Feedback feedback = iter.next();
iter.remove();
feedback.getParty().getFeedbacks().remove(feedback);
session.delete(feedback);
}
session.update(employee);
// Now remove employee from organization
Organization organization = employee.getOrganization();
organization.getEmployees().remove(employee);
session.update(organization);
This is, by my definition, ugly. I would've assumed that by using
@Cascade({CascadeType.ALL})
Hibernate would magically remove the Employee from all associations by simply doing:
session.delete(employee);
instead I get:
Error during managed flush [deleted object would be re-saved by cascade (remove deleted object from associations)]
So, in order to get my code a bit cleaner and maybe even optimized (sometimes a lazy fetch is enough, sometimes I need eager), I tried lazily fetching almost everything, hoping that if I do, for example:
employee.getFeedbacks()
the feedbacks would be fetched on demand without any problem. But nope, everything breaks:
failed to lazily initialize a collection of role: ..., could not initialize proxy - no Session
The next thing I thought about was removing the possibility for objects to insert/delete their related child objects, but that would probably be a bad idea performance-wise - inserting every object separately with
child.parent = parent
instead of in bulk with
parent.getChildren().add(children).
Finally, I saw multiple people recommend writing my own custom queries, but at that point, why should I even bother with Hibernate? Is there really no reasonably clean way to handle my problem, or am I missing something?
If I understood the question correctly it's all about cascading through simple 1:N relations. In that case Hibernate can do the job rather well:
@Entity
public class Post {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @OneToMany(cascade = CascadeType.ALL,
               mappedBy = "post", orphanRemoval = true)
    private List<Comment> comments = new ArrayList<>();
}

@Entity
public class Comment {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne
    private Post post;
}
Code:
Post post = newPost();
doInTransaction(session -> {
session.delete(post);
});
Generates:
delete from Comment where id = 1
delete from Comment where id = 2
delete from Post where id = 1
But if you have some other (synthetic) collections, Hibernate has no way of knowing about them, so you have to handle them yourself.
As for Hibernate and custom queries: Hibernate provides HQL, which is more compact than traditional SQL, but still less transparent than annotations.
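For instance, a minimal HQL sketch of the Feedback cleanup from the question (entity and property names assumed from the snippets above):

session.createQuery("delete from Feedback f where f.employee = :employee")
       .setParameter("employee", employee)
       .executeUpdate(); // bulk delete; bypasses cascades and the 1st-level cache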
I made myself a little Hibernate sandbox to understand how it works.
I've done quite well so far with all the basics; everything works as expected.
I only have one unsolved issue.
To make it short, I have a Rats entity and a Sickness entity.
A Rat can have a single Sickness.
The association is correctly set up in the DB, and the entity files include this part:
in Rats class:
[...]
@ManyToOne(fetch = FetchType.LAZY)
@Cascade({ CascadeType.SAVE_UPDATE, CascadeType.DELETE })
@JoinColumn(name = "Sickness_Id")
public Sickness getSickness() {
    return this.sickness;
}
[...]
in Sickness class:
[...]
@OneToMany(fetch = FetchType.LAZY, mappedBy = "sickness")
@Cascade({ /*CascadeType.SAVE_UPDATE,*/ CascadeType.MERGE, CascadeType.REFRESH })
public Set<Rats> getRatses() {
    return this.ratses;
}
[...]
If I create a new Rats with a new Sickness and save the Rats, the cascade works as expected and the Sickness is automatically added to the DB too.
Deletion works too: when I delete a Rats, its Sickness is deleted.
What does not work is creating a Sickness and trying to spread it to many Rats via its setRatses method:
String sick_name2 = "Tourista";
System.out.println("\nsetting new sickness: " + sick_name2 + " and assigning it to all rats");
Sickness sickness2 = new Sickness();
sickness2.setNom(sick_name2);
List<Rats> sickRatsList = session.createCriteria(Rats.class).list();
Set<Rats> sickRatsSet = new HashSet<Rats>();
for (Rats rat : sickRatsList) {
    sickRatsSet.add(rat);
}
sickness2.setRatses(sickRatsSet);
session.save(sickness2);
Debugging shows that the Sickness is correctly inserted into the DB and that its ratses set is correctly populated with all the rats.
But... if I check the Rats, their Sickness has not been updated.
Trying different CascadeType values on the Sickness relationship did not help.
I know that I could solve it with something like:
List<Rats> sickRatsList = session.createCriteria(Rats.class).list();
for (Rats rat : sickRatsList) {
    rat.setSickness(sickness2);
    session.save(rat);
}
But I would like to understand how to do it via Sickness.setRatses, so that I can find my way later with a many-to-many relationship (I suppose it will be pretty similar).
Thanks in advance.
A bidirectional association has an owner side (the side without the mappedBy attribute), and an inverse side (the side with the mappedBy attribute).
Hibernate only considers the owner side when deciding which entities are associated with each other.
Adding rats to a sickness thus won't make Hibernate associate the rats with the sickness, since that only modifies the inverse side. You must set each rat's sickness.
Note that using a DELETE cascade on a ManyToXxx annotation doesn't make much sense. There is no reason to delete the tourista sickness from the database as soon as one of the thousand rats having the tourista is deleted. And that will obviously cause an exception anyway, since 999 other rats have a foreign key to the tourista sickness.
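A minimal sketch of the fix, reusing the names from the question: set the owner side on each rat, and keep the inverse side in sync only for in-memory consistency.

for (Rats rat : sickRatsSet) {
    rat.setSickness(sickness2); // owner side: this is what Hibernate persists
}
sickness2.setRatses(sickRatsSet); // inverse side: in-memory consistency only
session.save(sickness2);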
This error happens because you're retrieving objects that are already cached in the 1st-level cache (the session), and those objects don't have the bidirectional association set correctly. In your code, you're never calling rat.setSickness(sickness).
Try calling the following methods and check if the data is now correct
session.flush()
session.clear()
// load the rats /sickness again and the relations should be set.
Bottom line: when you have a bidirectional association, it is the developer's responsibility to add/set the objects on both ends; otherwise you'll run into this error. The simplest way to fix this is to have one method in one of your two objects that knows how to maintain the association. For example:
public class Rat {
    public void setSickness(Sickness sickness) {
        this.sickness = sickness;
        sickness.addRat(this);
    }
}

public class Sickness {
    // leave this as package-private! So the only way to set the association is from the Rat
    void addRat(Rat rat) {
        rats.add(rat);
    }
}
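Usage then looks like this (a sketch following the example above):

Rat rat = new Rat();
Sickness tourista = new Sickness();
rat.setSickness(tourista); // maintains both ends of the association
session.save(tourista);
session.save(rat);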
You might want to read the Hibernate documentation about Session and how it works as a 1st level cache.
Thanks JB and Augusto, I got a much better understanding now.
I was able to solve my issue by overriding the setter this way:
public void setRatses(Set<Rats> ratses) {
    this.ratses = ratses;
    for (Rats rat : ratses) {
        rat.setSickness(this);
    }
}
This brings me to another methodological question.
I found out that if I do the following:
raton.setSickness(sickness1);
raton.display(); // raton.sickness == sickness1, as expected
sickness1.display(); // sickness1.ratses does not contain raton, for the reasons you guys pointed out
I can either use session.flush() and session.clear(), or commit the transaction and start a new one, if I need sickness1.ratses to be up to date.
I suppose that I can also override Rats.setSickness this way:
public void setSickness(Sickness sickness) {
    if (sickness.ratses.contains(this)) sickness.ratses.remove(this);
    this.sickness = sickness;
    sickness.ratses.add(this);
}
That way, my Sickness is up to date inside the session without my needing to flush it.
Would this be a good idea?
Regarding performance, I suppose the override solution generates additional DB operations that might not really be needed?
I have "Parent" and "Child" hibernate entities.
On "Parent" I have a Set<Child> to hold it's children.
When I update the Parent with new children, all works fine: the children are created on "child" table.
But when I remove one element from the Parent's set and save, the corresponding child row in the database is not deleted.
Here is:
On PARENT (named Workflow):
@OneToMany(orphanRemoval = true, cascade = CascadeType.ALL, mappedBy = "workflow", fetch = FetchType.EAGER)
private Set<ActivityDB> activities;
On Child (named Activity)
@ManyToOne
@JoinColumn(name = "id_workflow")
@Fetch(FetchMode.JOIN)
private WorkflowDB workflow;
I'm working on a persistent instance inside the session. No error is raised; everything just seems to work fine, but the row is still there in the database.
To test this, I load the Workflow, do
workflow.activities.remove(activity_index_x)
and then save the workflow using session.update(workflow),
but activity_index_x is still in the database and comes back to life when I reload the workflow.
Make sure you go through the manual regarding bidirectional association links.
The best practices include adding the add/remove child methods:
class WorkflowDB {
public void remove (ActivityDB a) {
if (a != null) {
this.activities.remove(a);
a.setWorkflow(null);
}
}
public void add (ActivityDB a) {
if (a != null) {
this.activities.add(a);
a.setWorkflow(this);
}
}
}
But because you use a Set on the one-to-many side, you need to pay extra attention to equals and hashCode. The best way is to use a business key for the equality check and the hash-code calculation, and never use the database identifier for equals/hashCode, especially in conjunction with hash-based data structures (Set/Map). A sketch is shown below.
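A minimal sketch, assuming ActivityDB has a unique, immutable business key (the code field is an assumption for illustration):

@Entity
public class ActivityDB {
    @Id
    @GeneratedValue
    private Long id;

    @Column(nullable = false, unique = true)
    private String code; // hypothetical business key

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ActivityDB)) return false;
        return code.equals(((ActivityDB) o).code);
    }

    @Override
    public int hashCode() {
        return code.hashCode();
    }
}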
Bidirectional associations are more complicated to manage than unidirectional ones. If you don't really need the one-to-many side, you can remove it and replace it with a query instead. That way you'd have to manage only the many-to-one side.
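For instance, a hedged sketch of replacing the one-to-many side with an HQL query (names taken from the question):

List<ActivityDB> activities = session
        .createQuery("from ActivityDB a where a.workflow = :wf")
        .setParameter("wf", workflow)
        .list(); // unchecked assignment; the query replaces the mapped collection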
This is caused by the child-to-parent reference not being cleared. Since you mapped both sides (and configured it this way) Hibernate will actually look at the child end of the relation.
The best way to fix this is to also clear the workflow field on the activity when you remove the activity from the workflow (and reversely), so:
class Workflow {
public void remove (Activity a) {
if (this.activities.remove(a)) {
a.setWorkflow(null);
}
}
public void add (Activity a) {
if (this.activities.add(a)) {
a.setWorkflow(this);
}
}
}
The main question is which side of the relation do you want to maintain the relation-state in?
You could also map the relation on the Workflow (do not use the mappedBy attribute, but use a JoinTable annotation to keep the column on the child table) and only map the parent-Workflow as a read-only (insertable=false,updatable=false) field in the Activity.
This way the Workflow is completely in control of which activities are part of it and the activities can still see the workflow they are part of.
class Workflow {
    @OneToMany
    @JoinTable(...)
    private Set<Activity> activities;
}

class Activity {
    @ManyToOne
    @JoinColumn(insertable = false, updatable = false)
    private Workflow workflow;
}
TGIF guys, but I am still stuck on one of my projects. I have two interfaces IMasterOrder and IOrder. One IMasterOrder may have a Collection of IOrder. So there can be several MasterOrder entity classes and Order entity classes who implements the interfaces.
To simplify the coding, I use IMasterOrder and IOrder references everywhere, and when the concrete type is needed I just cast the IMasterOrder object to that class type.
The problem is that this makes the master class always return null for its orders. I am very curious how JPA handles polymorphism in general.
Update
Sorry for the earlier confusion; the question is actually much simpler.
The entity class is something like this:
public class MasterOrder implements IMasterOrder {
    // Relationships
    @OneToOne(mappedBy = "masterOrder")
    private OrderCustomFields customFields;

    @OneToMany(mappedBy = "masterOrder")
    private List<OrderLog> logs;

    @OneToMany(mappedBy = "masterOrder")
    private Collection<Order> orders;

    // Fields...
And the finder method to get the Master order entity instance is like this
public static MasterOrder findMasterOrder(String id) {
if (id == null || id.length() == 0) return null;
return entityManager().find(MasterOrder.class, id);
}
However, on the MasterOrder instance returned by this finder method, customFields, logs, and orders are all null. How can I fix this? Thanks in advance.
When you access logs and orders, is the MasterOrder still part of an active persistence context? I.e., has the EntityManager that found the MasterOrder entity been closed or cleared? If yes, everything is working as expected.
For giggles, you could try changing the fetch attribute on logs and orders to EAGER ... this will help pinpoint if there is something else bad going on.
@OneToMany(mappedBy = "masterOrder", fetch = FetchType.EAGER)
private List<OrderLog> logs;

@OneToMany(mappedBy = "masterOrder", fetch = FetchType.EAGER)
private Collection<Order> orders;
Sounds like a problem with your mapping. I don't think empty collections should be null: they should either be an empty collection (if initialized) or a proxy that is initialized when you read from it. If you leave the transaction and then read from the collection, it should throw a lazy initialization exception. In either case, you should include all relevant classes in the question to provide further information.
I got this error message:
error: Found shared references to a collection: Person.relatedPersons
When I tried to execute addToRelatedPersons(anotherPerson):
person.addToRelatedPersons(anotherPerson);
anotherPerson.addToRelatedPersons(person);
anotherPerson.save();
person.save();
My domain:
class Person {
    static hasMany = [relatedPersons: Person]
}
Any idea why this happens?
Hibernate shows this error when you attempt to persist more than one entity instance sharing the same collection reference (i.e. the collection identity in contrast with collection equality).
Note that it means the same collection, not collection element - in other words relatedPersons on both person and anotherPerson must be the same. Perhaps you're resetting that collection after entities are loaded? Or you've initialized both references with the same collection instance?
I had the same problem. In my case, the issue was that someone used BeanUtils to copy the properties of one entity to another, so we ended up having two entities referencing the same collection.
Given that I spent some time investigating this issue, I would recommend the following checklist:
Look for scenarios like entity1.setCollection(entity2.getCollection()), where getCollection() returns the internal reference to the collection (if getCollection() returns a new instance of the collection, then you don't need to worry); see the sketch after this list.
Look if clone() has been implemented correctly.
Look for BeanUtils.copyProperties(entity1, entity2).
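To make the first scenario concrete, a minimal sketch of the problem and its fix (entity1/entity2 are hypothetical entities with a Set<User> property):

// Problem: both entities now reference the same collection instance,
// which Hibernate rejects with "Found shared references to a collection":
entity1.setUsers(entity2.getUsers());

// Fix: copy the contents into a fresh collection instead:
entity1.setUsers(new HashSet<>(entity2.getUsers()));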
An explanation in practice. If you try to save your object like this:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
message.setFiles(folders);
MESSAGESDAO.getMessageDAO().save(message);
you don't need to set the updated collection back on the parent object:
message.setFiles(folders);
Simply save your parent object:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
// Not set updated object here
MESSAGESDAO.getMessageDAO().save(message);
Reading online, the cause of this error can also be a Hibernate bug; a workaround that seems to work is to call:
session.clear()
You must put the clear() after getting the data and before committing and closing the session. See the example:
//getting data
SrReq sr = (SrReq) crit.uniqueResult();
SrSalesDetailDTO dt=SrSalesDetailMapper.INSTANCE.map(sr);
//CLEAR
session.clear();
//close session
session.getTransaction().commit();
session.close();
return dt;
I use this solution for selects from the database; for updates or inserts I don't know whether it works or causes problems.
My problem was 100% identical to this one: http://www.progtown.com/topic128073-hibernate-many-to-many-on-two-tables.html
I have experienced a great example of reproducing such a problem.
Maybe my experience will help someone one day.
Short version
Check that the @Embedded Id of your container entity has no possible collisions.
Long version
When Hibernate instantiates a collection wrapper, it searches for an already instantiated collection by CollectionKey in an internal Map.
For an entity with an @Embedded id, CollectionKey wraps EmbeddedComponentType and uses the @Embedded Id properties for equality checks and hashCode calculation.
So if you have two entities with equal @Embedded Ids, Hibernate will instantiate and register a new collection for the first key and then find that same collection for the second key.
Thus two entities with the same @Embedded Id will be populated with the same collection.
Example
Suppose you have an Account entity with a lazy set of loans, and Account has an @Embedded Id consisting of several parts (columns).
@Entity
@Table(schema = "SOME", name = "ACCOUNT")
public class Account {
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "account")
    private Set<Loan> loans;

    @Embedded
    private AccountId accountId;
    ...
}

@Embeddable
public class AccountId {
    @Column(name = "X")
    private Long x;

    @Column(name = "BRANCH")
    private String branchId;

    @Column(name = "Z")
    private String z;
    ...
}
Then suppose that Account has an additional property that is mapped by the @Embedded Id but also holds a relation to another entity, Branch.
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "BRANCH")
@MapsId("accountId.branchId")
@NotFound(action = NotFoundAction.IGNORE) // Look at this!
private Branch branch;
It could happen that you have no FK constraint for the Account-to-Branch relation in the DB, so the Account.BRANCH column can hold a value that is not present in the Branch table.
According to @NotFound(action = NotFoundAction.IGNORE), if the value is not present in the related table, Hibernate will load null for the property.
If the X and Z columns of two Accounts are the same (which is fine) but their BRANCH values are different and not present in the Branch table, Hibernate will load null for both, and the Embedded Ids will be equal.
So two CollectionKey objects will be equal and will have the same hashCode for different Accounts.
result = {CollectionKey#34809} "CollectionKey[Account.loans#Account#43deab74]"
role = "Account.loans"
key = {Account#26451}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
result = {CollectionKey#35653} "CollectionKey[Account.loans#Account#33470aa]"
role = "Account.loans"
key = {Account#35225}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
Because of this, Hibernate will load the same PersistentSet for the two entities.
In my case, I was copying and pasting code from my other classes, so I did not notice that the getter was badly written:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "credito")
public Set getConceptoses() {
    return this.letrases;
}

public void setConceptoses(Set conceptoses) {
    this.conceptoses = conceptoses;
}
Everything references conceptoses, but if you look at the getter, it returns letrases.
I got the same issue too: someone had used BeanUtils.copyProperties(source, target), where both source and target use the same collection as an attribute.
So I just used a deep copy, as described in: How to Clone Collection in Java - Deep copy of ArrayList and HashSet. A sketch follows.
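A minimal sketch of that deep-copy approach (the User entity and its copy constructor are assumptions for illustration):

// Deep copy: a fresh collection holding fresh copies of each element,
// so source and target no longer share any references.
Set<User> copy = new HashSet<>();
for (User u : source.getUsers()) {
    copy.add(new User(u)); // hypothetical copy constructor
}
target.setUsers(copy);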
Consider an entity:
public class Foo {
    private List<User> user;
    /* with getters and setters */
}
And consider a business-logic class:
class Foo1 {
    void copyFrom(Foo foo) {
        List<User> user = new ArrayList<>();
        user = foo.getUser(); // 'user' now aliases Foo's internal list
    }
}
Here, user and foo.getUser() share the same reference, and saving two entities that share it creates the conflict.
The proper usage is:
class Foo1 {
    void copyFrom(Foo foo) {
        List<User> user = new ArrayList<>();
        user.addAll(foo.getUser()); // copies the contents instead of aliasing the reference
    }
}
This avoids the conflict.
I faced a similar exception in my application. After looking into the stack trace, it was clear that the exception was thrown within a FlushEntityEventListener class.
In Hibernate 4.3.7 the MSLocalSessionFactory bean no longer supports the eventListeners property. Hence, one has to explicitly fetch the service registry from individual Hibernate session beans and then set the required custom event listeners.
In the process of adding custom event listeners we need to make sure the corresponding default event listeners are removed from the respective Hibernate session.
If the default event listener is not removed, you end up with two event listeners registered against the same event. While iterating over these listeners, the first listener flags every collection in the session as processed, and processing the same collection against the second listener then throws this Hibernate exception.
So, make sure that when registering custom listeners, the corresponding default listeners are removed from the registry.
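A hedged sketch of such a registration in Hibernate 4.x (MyFlushEntityEventListener is a hypothetical custom listener; note that setListeners replaces the defaults, whereas appendListeners would add a second listener for the same event):

EventListenerRegistry registry = ((SessionFactoryImplementor) sessionFactory)
        .getServiceRegistry()
        .getService(EventListenerRegistry.class);
// Replace, don't append, so the default listener is removed:
registry.setListeners(EventType.FLUSH_ENTITY, new MyFlushEntityEventListener());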
My problem was that I had set up a @ManyToOne relationship. Maybe if the answers above don't fix your problem, you might want to check the relationship mentioned in the error message.
Posting here because it's taken me over 2 weeks to get to the bottom of this, and I still haven't fully resolved it.
There is a chance that you're also just running into this bug, which has been around since 2017 and hasn't been addressed.
I honestly have no clue how to get around this bug. I'm posting here for my sanity and hopefully to shave a couple of weeks off your googling. I'd love any input anyone may have, but my particular "answer" to this problem was not listed in any of the above answers.
I had to replace the following collection initialization:
challenge.setGoals(memberChallenge.getGoals());
with
challenge.setGoals(memberChallenge.getGoals()
.stream()
.map(dmo -> {
final ChallengeGoal goal = new ChallengeGoalImpl();
goal.setMemberChallenge(challenge);
goal.setGoalDate(dmo.getGoalDate());
goal.setGoalValue(dmo.getGoalValue());
return goal;
})
.collect(Collectors.toList()));
I changed
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(
    name = "some_id",
    referencedColumnName = "some_id"
)
to
@OneToMany(mappedBy = "some_id", cascade = CascadeType.ALL)
You're (indirectly) working with object references, so sometimes you end up copying the reference instead of the object/collection you want. Hibernate checks for this and throws that error. Here's what you can do:
Don't copy the object/collection;
Initialize a new, empty one;
Write a function that copies its contents, and call that.
For example:
public Entity copyEntity(Entity e) {
    Entity copy = new Entity();
    copy.setName(e.getName());
    copy.setCollection2(null); // don't carry the shared reference over
    copy.setCollection3(copyCollection(e.getCollection3()));
    return copy;
}
This error can also occur in a one-to-many/many-to-one relationship, if you assign the same instance from the many-to-one entity to more than one instance of the one-to-many entity. For example, each person can have many books, but each book can be owned by only one person; if you give a book more than one owner, this issue is raised. A sketch follows.
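A minimal sketch of how this variant arises (hypothetical Person/Book entities):

Person alice = new Person();
Person bob = new Person();

Set<Book> shared = new HashSet<>();
alice.setBooks(shared);
bob.setBooks(shared); // same collection instance on two entities

// On flush, Hibernate throws:
// "Found shared references to a collection: Person.books"
session.save(alice);
session.save(bob);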