I am having a little issue with Ebean (in the context of Play Framework, Java).
I have elements sharing a one-to-many relationship (BankAccount <- BankingOperation).
I have defined the BankAccount class with, among others, the following fields:
@JsonIgnore
@OneToMany(cascade = CascadeType.ALL)
public List<BankingOperation> operations = new ArrayList<BankingOperation>();
For the BankingOperation class, the corresponding field is:
@ManyToOne
@JsonIgnore
public BankAccount bankAccount;
My issue is that when I try to update the bank account, it deletes the related operations. Here's the code I am using:
public static Result saveAccount(Long id) {
    Form<BankAccount> form = Form.form(BankAccount.class).bindFromRequest();
    if (form.hasErrors() || !form.get().id.equals(id)) { // Long must be compared with equals(), not !=
        return badRequest();
    }
    form.get().update(id);
    return ok();
}
I have the feeling that the operations are deleted because they aren't loaded when I call form.get(), and thus, when synchronizing with the DB, Ebean treats the empty collection as authoritative and deletes the missing rows.
Would anyone have any clue on this issue? Is there another solution that I haven't discovered yet?
Thanks in advance for your help!
For now, I have found an (ugly) workaround, which is adding the following line before doing the update:
Ebean.refreshMany(form.get(), "operations");
Another solution could be to build the form not on the model class but on another class, forcing me to map each field one by one.
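For reference, a minimal sketch of that second approach, assuming a hypothetical BankAccountForm class (the label field and the static find finder on the model are assumptions, not code from the project):

public class BankAccountForm {
    public Long id;
    public String label; // hypothetical editable field

    // Map each editable field by hand; the operations collection is never touched.
    public void applyTo(BankAccount account) {
        account.label = label;
    }
}

public static Result saveAccount(Long id) {
    Form<BankAccountForm> form = Form.form(BankAccountForm.class).bindFromRequest();
    if (form.hasErrors() || !form.get().id.equals(id)) {
        return badRequest();
    }
    BankAccount account = BankAccount.find.byId(id); // loads the managed entity, operations included
    form.get().applyTo(account);
    account.update();
    return ok();
}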
So, I have found myself in quite a pickle regarding Hibernate. When I started developing my web application, I used "eager" loading everywhere so I could easily access children, parents etc.
After a while, I ran into my first problem - re-saving of deleted objects. Multiple Stack Overflow threads suggested that I should remove the object from all the collections that it's in. Reading those suggestions made my "spidey sense" tingle, as my relations weren't really simple and I had to iterate over multiple objects, which made my code look kind of ugly and made me wonder if this was the best approach.
For example, when deleting an Employee (which belongs to a User, in the sense that a User can act as multiple different Employees). Let's say an Employee can leave Feedback for a Party, so an Employee can have multiple Feedback and a Party can have multiple Feedback. Additionally, both Employee and Party belong to some kind of a parent object, let's say an Organization. Basically, we have:
class User {
    // Has many
    Set<Employee> employees;
    // Has many
    Set<Organization> organizations;
    // Has many through employees
    Set<Organization> associatedOrganizations;
}

class Employee {
    // Belongs to
    User user;
    // Belongs to
    Organization organization;
    // Has many
    Set<Feedback> feedbacks;
}

class Organization {
    // Belongs to
    User user;
    // Has many
    Set<Employee> employees;
    // Has many
    Set<Party> parties;
}

class Party {
    // Belongs to
    Organization organization;
    // Has many
    Set<Feedback> feedbacks;
}

class Feedback {
    // Belongs to
    Party party;
    // Belongs to
    Employee employee;
}
Here's what I ended up with when deleting an employee:
// First remove feedbacks related to employee
Iterator<Feedback> iter = employee.getFeedbacks().iterator();
while (iter.hasNext()) {
    Feedback feedback = iter.next();
    iter.remove();
    feedback.getParty().getFeedbacks().remove(feedback);
    session.delete(feedback);
}
session.update(employee);

// Now remove employee from organization
Organization organization = employee.getOrganization();
organization.getEmployees().remove(employee);
session.update(organization);
This is, by my definition, ugly. I would've assumed that by using
@Cascade({CascadeType.ALL})
then Hibernate would magically remove Employee from all associations by simply doing:
session.delete(employee);
instead I get:
Error during managed flush [deleted object would be re-saved by cascade (remove deleted object from associations)
So, in order to try to get my code a bit cleaner and maybe even optimized (sometimes lazy fetch is enough, sometimes I need eager), I tried lazy fetching almost everything and hoping that if I do, for example:
employee.getFeedbacks()
then the feedbacks are nicely fetched without any problem but nope, everything breaks:
failed to lazily initialize a collection of role: ..., could not initialize proxy - no Session
The next thing I thought about was removing the possibility for objects to insert/delete their related children objects but that would probably be a bad idea performance-wise - inserting every object separately with
child.parent=parent
instead of in a bulk with
parent.children().add(children).
Finally, I saw that multiple people recommended creating my own custom queries and such, but at that point, why should I even bother with Hibernate? Is there really no good way to handle my problem relatively cleanly, or am I missing something, or am I an idiot?
If I understood the question correctly, it's all about cascading through simple 1:N relations. In that case Hibernate can do the job rather well:
@Entity
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @OneToMany(cascade = CascadeType.ALL,
               mappedBy = "post", orphanRemoval = true)
    private List<Comment> comments = new ArrayList<>();
}

@Entity
public class Comment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne
    private Post post;
}
Code:
Post post = newPost();

doInTransaction(session -> {
    session.delete(post);
});
Generates:
delete from Comment where id = 1
delete from Comment where id = 2
delete from Post where id = 1
But if you have some other (synthetic) collections, Hibernate has no chance to know which ones, so you have to handle them yourself.
As for Hibernate and custom queries, Hibernate provides HQL, which is more compact than traditional SQL, but still less transparent than annotations.
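For completeness, a minimal sketch of what such a custom HQL cleanup could look like, reusing the entity names from the question (illustrative only, not the definitive fix):

// Delete an employee's feedbacks with one bulk HQL statement instead of
// iterating the collection. Bulk deletes bypass cascade settings and the
// first-level cache, so refresh or evict affected entities afterwards.
session.createQuery("delete from Feedback f where f.employee = :employee")
       .setParameter("employee", employee)
       .executeUpdate();

employee.getOrganization().getEmployees().remove(employee); // keep both sides in sync
session.delete(employee);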
Working with Spring Data REST, if you have a OneToMany or ManyToOne relationship, the PUT operation returns 200 on the "non-owning" entity but does not actually persist the joined resource.
Example Entities:
@Entity(name = 'author')
@ToString
class AuthorEntity implements Author {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    String fullName

    @ManyToMany(mappedBy = 'authors')
    Set<BookEntity> books
}

@Entity(name = 'book')
@EqualsAndHashCode
class BookEntity implements Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(nullable = false)
    String title

    @Column(nullable = false)
    String isbn

    @Column(nullable = false)
    String publisher

    @ManyToMany(fetch = FetchType.LAZY, cascade = [CascadeType.ALL])
    Set<AuthorEntity> authors
}
If you back them with a PagingAndSortingRepository, you can GET a Book, follow the authors link on the book, and do a PUT with the URI of an author to associate with it. You cannot go the other way.
If you do a GET on an Author and do a PUT on its books link, the response returns 200, but the relationship is never persisted.
Is this the expected behavior?
tl;dr
The key to that is not so much anything in Spring Data REST - as you can easily get it to work in your scenario - but making sure that your model keeps both ends of the association in sync.
The problem
The problem you see here arises from the fact that Spring Data REST basically modifies the books property of your AuthorEntity. That itself doesn't reflect this update in the authors property of the BookEntity. This has to be worked around manually, which is not a constraint that Spring Data REST makes up but the way that JPA works in general. You will be able to reproduce the erroneous behavior by simply invoking setters manually and trying to persist the result.
How to solve this?
If removing the bi-directional association is not an option (see below on why I'd recommend this) the only way to make this work is to make sure changes to the association are reflected on both sides. Usually people take care of this by manually adding the author to the BookEntity when a book is added:
class AuthorEntity {

    void add(BookEntity book) {
        this.books.add(book);
        if (!book.getAuthors().contains(this)) {
            book.add(this);
        }
    }
}
The additional if clause would have to be added on the BookEntity side as well if you want to make sure that changes from the other side are propagated, too. The if is basically required as otherwise the two methods would constantly call each other.
Spring Data REST by default uses field access, so there's actually no method that you can put this logic into. One option would be to switch to property access and put the logic into the setters. Another option is to use a method annotated with @PreUpdate/@PrePersist that iterates over the entities and makes sure the modifications are reflected on both sides.
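A minimal sketch of the property-access variant (exactly where you place the sync logic is an assumption on my part, not Spring Data REST API):

class AuthorEntity {

    private Set<BookEntity> books = new HashSet<>();

    // Putting the mapping annotation on the getter switches JPA to property access.
    @ManyToMany(mappedBy = "authors")
    public Set<BookEntity> getBooks() {
        return books;
    }

    public void setBooks(Set<BookEntity> books) {
        this.books = books;
        // Keep the owning side in sync whenever the collection is replaced.
        for (BookEntity book : books) {
            if (!book.getAuthors().contains(this)) {
                book.getAuthors().add(this);
            }
        }
    }
}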
Removing the root cause of the issue
As you can see, this adds quite a lot of complexity to the domain model. As I joked on Twitter yesterday:
#1 rule of bi-directional associations: don't use them… :)
It usually simplifies the matter if you try not to use bi-directional relationship whenever possible and rather fall back to a repository to obtain all the entities that make up the backside of the association.
A good heuristic to determine which side to cut is to think about which side of the association is really core and crucial to the domain you're modeling. In your case I'd argue that it's perfectly fine for an author to exist with no books written by her. On the flip side, a book without an author doesn't make much sense at all. So I'd keep the authors property in BookEntity but introduce the following method on the BookRepository:
interface BookRepository extends Repository<Book, Long> {

    List<Book> findByAuthor(Author author);
}
Yes, that requires all clients that previously could just have invoked author.getBooks() to now work with a repository. But on the positive side you've removed all the cruft from your domain objects and created a clear dependency direction from book to author along the way. Books depend on authors but not the other way round.
I faced a similar problem: while sending my POJO (containing the bi-directional mappings @OneToMany and @ManyToOne) as JSON via a REST API, the data was persisted in both the parent and child entities, but the foreign key relation was not established. This happens because bidirectional associations need to be maintained manually.
JPA provides the annotation @PrePersist, which can be used to make sure that the method annotated with it is executed before the entity is persisted. Since JPA first inserts the parent entity into the database followed by the child entity, I included a method annotated with @PrePersist which iterates through the list of child entities and manually sets the parent entity on each of them.
In your case it would be something like this:
class AuthorEntity {

    @PrePersist
    public void populateBooks() {
        for (BookEntity book : books) {
            book.addToAuthorList(this);
        }
    }
}

class BookEntity {

    @PrePersist
    public void populateAuthors() {
        for (AuthorEntity author : authors) {
            author.addToBookList(this);
        }
    }
}
After this you might get an infinite recursion error; to avoid that, annotate your parent class with @JsonManagedReference and your child class with @JsonBackReference. This solution worked for me; hopefully it will work for you too.
This link has a very good tutorial on how you can navigate the recursion problem: Bidirectional Relationships
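For illustration, a minimal sketch of where those Jackson annotations go, assuming the parent/child (@OneToMany/@ManyToOne) shape this answer describes:

class AuthorEntity {

    // Forward part of the reference: serialized normally.
    @JsonManagedReference
    @OneToMany(mappedBy = "author", cascade = CascadeType.ALL)
    private List<BookEntity> books;
}

class BookEntity {

    // Back part of the reference: omitted from serialization, which breaks the cycle.
    @JsonBackReference
    @ManyToOne
    private AuthorEntity author;
}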
I was able to use @JsonManagedReference and @JsonBackReference and it worked like a charm.
I believe one can also utilize @RepositoryEventHandler by adding a @HandleBeforeLinkSave handler to cross-link the bidirectional relation between entities. This seems to be working for me.
@Component
@RepositoryEventHandler
public class BiDirectionalLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Collection<Book> books) {
        for (Book b : books) {
            b.setAuthor(author);
        }
    }
}
Note: @HandleBeforeLinkSave is dispatched based on the first parameter. If you have multiple relations in your equivalent of an Author class, the second param should be Object, and you will need to test within the method for the different relation types.
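A minimal sketch of that multi-relation variant (the Author and Book types are the hypothetical ones from above):

@Component
@RepositoryEventHandler
public class AuthorLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Object linked) {
        // One handler for all of Author's links: branch on the payload type.
        if (linked instanceof Collection) {
            for (Object item : (Collection<?>) linked) {
                if (item instanceof Book) {
                    ((Book) item).setAuthor(author);
                }
            }
        }
    }
}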
TGIF guys, but I am still stuck on one of my projects. I have two interfaces, IMasterOrder and IOrder. One IMasterOrder may have a Collection of IOrder. So there can be several MasterOrder entity classes and Order entity classes that implement the interfaces.
To simplify the coding, I create IMasterOrder and IOrder objects everywhere, and when I need to specify the concrete type, I just cast the IMasterOrder object to the concrete class.
The problem is that this makes the master class always return null for its orders. I am very curious how JPA works with polymorphism in general.
Update
Sorry for the earlier confusion. Actually, the question is much simpler.
The entity class is something like this:
public class MasterOrder implements IMasterOrder {

    // Relationships
    @OneToOne(mappedBy = "masterOrder")
    private OrderCustomFields customFields;

    @OneToMany(mappedBy = "masterOrder")
    private List<OrderLog> logs;

    @OneToMany(mappedBy = "masterOrder")
    private Collection<Order> orders;

    // Fields...
And the finder method to get the MasterOrder entity instance is like this:
public static MasterOrder findMasterOrder(String id) {
    if (id == null || id.length() == 0) return null;
    return entityManager().find(MasterOrder.class, id);
}
However, the MasterOrder instance from this finder method returns customFields and logs and orders which are all null. So how to fix this? Thanks in advance.
When you access logs and orders, is the MasterOrder still part of an active persistence context? I.e., has the EntityManager that found the MasterOrder entity been closed or cleared? If yes, everything is working as expected.
For giggles, you could try changing the fetch attribute on logs and orders to EAGER ... this will help pinpoint if there is something else bad going on.
@OneToMany(mappedBy = "masterOrder", fetch = FetchType.EAGER)
private List<OrderLog> logs;

@OneToMany(mappedBy = "masterOrder", fetch = FetchType.EAGER)
private Collection<Order> orders;
Sounds like a problem with your mapping. I don't think empty collections should be NULL; they should either be an empty list (if initialized) or a proxy that will be initialized when you read from it. If you leave the transaction and try to read from the collection, it SHOULD throw a lazy initialization exception. In either case, you should include all relevant classes in the question to provide further information.
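If the persistence context is indeed being closed before the collections are read, a common fix is to touch the lazy collections while it is still open. A minimal sketch, assuming getters exist on the entity:

public static MasterOrder findMasterOrderWithDetails(String id) {
    if (id == null || id.length() == 0) return null;
    MasterOrder order = entityManager().find(MasterOrder.class, id);
    if (order != null) {
        // Touch the lazy collections while the EntityManager is still open,
        // so they are initialized before the entity becomes detached.
        order.getLogs().size();
        order.getOrders().size();
    }
    return order;
}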
I have found several questions about this, but none with a complete explanation of the problem and how to debug it - the answers are all anecdotal.
The problem is that in a Play 1.2.4 JPA test, I'm getting this exception when I save() a model:
org.hibernate.HibernateException: Found two representations of same
collection: models.Position.projects
I would like to know:
Is there documentation of this problem in general, unrelated to Play? The issue is in Hibernate, yet a lot of the Google results on this are within Play apps.
What are some basic best practices to avoid this problem?
Is it caused by Play? Or something I'm doing wrong?
How to resolve in my specific case?
Here is a reproduction of the problem on github. I have four entities:
@Entity
public class Person extends Model {

    public String name;

    @OneToMany(cascade = CascadeType.ALL)
    public List<Position> positions;
}

@Entity
public class Position extends Model {

    public Position() {}

    public Position(Company companies) {
        this.companies = companies;
        this.projects = new ArrayList<Project>();
    }

    @OneToOne
    public Company companies;

    @ManyToOne
    public Person person;

    @OneToMany
    public List<Project> projects;
}

@Entity
public class Company extends Model {

    public String name;
}

@Entity
public class Project extends Model {

    public Project() {}

    public Project(String field, String status) {
        this.theField = field;
        this.status = status;
    }

    @ManyToOne
    public Position position;

    public String theField;
    public String status;
}
And my persistence code:
Company facebook = new Company();
facebook.name = "Facebook";
facebook.save();
Company twitter = new Company();
twitter.name = "Twitter";
twitter.save();
Person joe = new Person();
joe.name = "Joe";
joe.save();
joe.positions = new ArrayList<Position>();
Position joeAtFacebook = new Position(facebook);
joeAtFacebook.projects.add(new Project("Stream", "Architect"));
joeAtFacebook.projects.add(new Project("Messages", "Lead QA"));
joe.positions.add(joeAtFacebook);
Position joeAtTwitter = new Position(twitter);
joeAtTwitter.projects.add(new Project("Steal stuff from Facebook", "CEO"));
joe.positions.add(joeAtTwitter);
joe.save();
BTW, I've tried adding the Play associations module as one person suggested, and it doesn't seem to help.
I see that the tables that are created are indeed duplicated in a sense:
I have both a person_position table and a position table, and both contain similar fields: person_position contains a Person_id and a positions_id, while the position table contains id (meaning the position id), person_id, and companies_id. So I understand some kind of unintended redundancy is created by my model definition, but I don't really understand how to solve it.
I thought this might be related to bi-directional mappings, but here is a branch where the model is uni-directional (I removed some back-references) - and the problem still occurs.
As far as I've been able to tell, the error is caused by any combination of:
A lacking/missing mappedBy parameter on @OneToMany annotations. This parameter should receive the name of the field in the target model that refers back to this model.
Old Hibernate - Play 1.2.4 ships with Hibernate 3.6.1 ... upgrading to 3.6.8 seems to resolve another such issue (just add the following to dependencies.yml, and run play deps):
- org.hibernate -> hibernate-core 3.6.8.Final:
    force: true
For me, the above steps solved the issue.
It is in fact a bug in Hibernate, because it is thrown when persisting objects, while it actually implies a "design time" problem that should be detected when creating the schema.
Steps I used to debug:
Wrote a test that reproduced the problem
Added the associations module - I'm not sure if it resolved a part of the issue, or made it worse.
Debugged through hibernate code, and realized this probably indicates a hibernate problem, not a user / configuration error.
Noticed that hibernate has quite a few bugfix versions after 3.6.1, and decided to try my luck and upgrade.
Also important: cleaning the tmp folder can't hurt - Play caches compiled jars there, and after a major change like upgrading the Hibernate version, it might be worthwhile to clean it.
Try
@OneToMany(mappedBy = "position")
public List<Project> projects;
First, I think you are missing a line just before the last one:
joe.positions.add(joeAtTwitter);
Second:
I think that you should not do
joe.positions = new ArrayList<Position>();
instead change Person to:
@Entity
public class Person extends Model {

    public String name;

    @OneToMany(cascade = CascadeType.ALL)
    public List<Position> positions = new ArrayList<Position>();
}
It will solve your problem; plus, it's a best practice to use an empty collection instead of a null value (see Effective Java), both in general and specifically when working with Hibernate-managed objects. Read the first paragraph here for an explanation of why you should initialize with empty collections.
Now, what I think happened is: when you called joe.save() you made the object managed (by Hibernate), and then you overwrote a property with a new collection. I can't understand why the error you got is about models.Position.projects, but I think that's the case.
I got this error message:
error: Found shared references to a collection: Person.relatedPersons
When I tried to execute addToRelatedPersons(anotherPerson):
person.addToRelatedPersons(anotherPerson);
anotherPerson.addToRelatedPersons(person);
anotherPerson.save();
person.save();
My domain:
class Person {
    static hasMany = [relatedPersons: Person]
}
Any idea why this happens?
Hibernate shows this error when you attempt to persist more than one entity instance sharing the same collection reference (i.e. the collection identity in contrast with collection equality).
Note that it means the same collection, not collection element - in other words relatedPersons on both person and anotherPerson must be the same. Perhaps you're resetting that collection after entities are loaded? Or you've initialized both references with the same collection instance?
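As an illustration, a minimal sketch of the shared-reference situation in plain Java (hypothetical setters; the point is the single collection instance):

List<Person> shared = new ArrayList<>();

Person person = new Person();
Person anotherPerson = new Person();

// BUG: both entities now hold the *same* collection instance, which is
// exactly what "shared references to a collection" complains about.
person.setRelatedPersons(shared);
anotherPerson.setRelatedPersons(shared);

// FIX: give each entity its own collection (copying the elements is fine).
person.setRelatedPersons(new ArrayList<>(shared));
anotherPerson.setRelatedPersons(new ArrayList<>(shared));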
I had the same problem. In my case, the issue was that someone used BeanUtils to copy the properties of one entity to another, so we ended up having two entities referencing the same collection.
Given that I spent some time investigating this issue, I would recommend the following checklist:
Look for scenarios like entity1.setCollection(entity2.getCollection()) and getCollection returns the internal reference to the collection (if getCollection() returns a new instance of the collection, then you don't need to worry).
Look if clone() has been implemented correctly.
Look for BeanUtils.copyProperties(entity1, entity2).
An explanation in practice. If you try to save your object, e.g.:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
message.setFiles(folders);
MESSAGESDAO.getMessageDAO().save(message);
you don't need to set the updated collection back on the parent object:
message.setFiles(folders);
Simply save your parent object, like:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
// Not set updated object here
MESSAGESDAO.getMessageDAO().save(message);
Reading online, the cause of this error can also be a Hibernate bug; a workaround that seems to work is to put a:
session.clear()
You must put the clear() after getting the data and before the commit and close; see the example:
// getting data
SrReq sr = (SrReq) crit.uniqueResult();
SrSalesDetailDTO dt = SrSalesDetailMapper.INSTANCE.map(sr);

// clear
session.clear();

// close session
session.getTransaction().commit();
session.close();

return dt;
I use this solution for selects from the database; for updates or inserts I don't know whether it works or causes problems.
My problem was 100% identical to this one: http://www.progtown.com/topic128073-hibernate-many-to-many-on-two-tables.html
I have experienced a great example of reproducing such a problem.
Maybe my experience will help someone one day.
Short version
Check that your @Embedded Id of the container has no possible collisions.
Long version
When Hibernate instantiates a collection wrapper, it searches for an already instantiated collection by CollectionKey in an internal Map.
For an entity with an @Embedded id, CollectionKey wraps the EmbeddedComponentType and uses the @Embedded Id properties for equality checks and hashCode calculation.
So if you have two entities with equal @Embedded Ids, Hibernate will instantiate and store a new collection for the first key and will then find that same collection for the second key.
So two entities with the same @Embedded Id will be populated with the same collection.
Example
Suppose you have an Account entity which has a lazy set of loans, and Account has an @Embedded Id consisting of several parts (columns).
@Entity
@Table(schema = "SOME", name = "ACCOUNT")
public class Account {

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "account")
    private Set<Loan> loans;

    @Embedded
    private AccountId accountId;

    ...
}

@Embeddable
public class AccountId {

    @Column(name = "X")
    private Long x;

    @Column(name = "BRANCH")
    private String branchId;

    @Column(name = "Z")
    private String z;

    ...
}
Then suppose that Account has an additional property mapped by the @Embedded Id but with a relation to another entity, Branch.
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "BRANCH")
@MapsId("accountId.branchId")
@NotFound(action = NotFoundAction.IGNORE) // Look at this!
private Branch branch;
It could happen that you have no FK for the Account-to-Branch relation in the DB, so the Account.BRANCH column can have any value not present in the Branch table.
According to @NotFound(action = NotFoundAction.IGNORE), if the value is not present in the related table, Hibernate will load null for the property.
If the X and Z columns of two Accounts are the same (which is fine), but BRANCH is different and not present in the Branch table, Hibernate will load null for both, and the Embedded Ids will be equal.
So two CollectionKey objects will be equal and will have the same hashCode for different Accounts:
result = {CollectionKey@34809} "CollectionKey[Account.loans#Account#43deab74]"
    role = "Account.loans"
    key = {Account@26451}
    keyType = {EmbeddedComponentType@21355}
    factory = {SessionFactoryImpl@21356}
    hashCode = 1187125168
    entityMode = {EntityMode@17415} "pojo"

result = {CollectionKey@35653} "CollectionKey[Account.loans#Account#33470aa]"
    role = "Account.loans"
    key = {Account@35225}
    keyType = {EmbeddedComponentType@21355}
    factory = {SessionFactoryImpl@21356}
    hashCode = 1187125168
    entityMode = {EntityMode@17415} "pojo"
Because of this, Hibernate will load the same PersistentSet for the two entities.
In my case, I was copying and pasting code from my other classes, so I did not notice that the getter code was badly written:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "credito")
public Set getConceptoses() {
    return this.letrases;
}

public void setConceptoses(Set conceptoses) {
    this.conceptoses = conceptoses;
}
Everything references conceptoses, but if you look at the getter, it says letrases.
I too got the same issue: someone used BeanUtils.copyProperties(source, target), and both source and target were using the same collection as an attribute.
So I just used a deep copy, as described below:
How to Clone Collection in Java - Deep copy of ArrayList and HashSet
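A minimal sketch of such a deep copy, assuming a hypothetical Item entity with a copy constructor (illustrative, not the linked article's exact code):

// Deep copy: a new collection AND new element instances, so neither the
// collection nor its contents are shared between source and target.
Set<Item> copy = new HashSet<>();
for (Item item : source.getItems()) {
    copy.add(new Item(item)); // assumes Item has a copy constructor
}
target.setItems(copy);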
Consider an entity:
public class Foo {
    private List<User> user;
    /* with getters and setters */
}
And consider a business logic class:
class Foo1 {
    void process(Foo foo) {
        List<User> user = new ArrayList<>();
        user = foo.getUser(); // 'user' is now the same reference as Foo's internal list
    }
}
Here user and foo.getUser() share the same reference, and saving the two references creates a conflict.
The proper usage should be:
class Foo1 {
    void process(Foo foo) {
        List<User> user = new ArrayList<>();
        user.addAll(foo.getUser()); // copies the elements; the two lists stay distinct
    }
}
This avoids the conflict.
I faced a similar exception in my application. After looking into the stack trace, it was clear that the exception was thrown within a FlushEntityEventListener class.
In Hibernate 4.3.7 the MSLocalSessionFactory bean no longer supports the eventListeners property. Hence, one has to explicitly fetch the service registry from individual Hibernate session beans and then set the required custom event listeners.
In the process of adding custom event listeners we need to make sure the corresponding default event listeners are removed from the respective Hibernate session.
If the default event listener is not removed, you end up with two event listeners registered against the same event. In that case, while iterating over these listeners, any collections in the session will be flagged as reached by the first listener, and processing the same collection against the second listener throws this Hibernate exception.
So, make sure that when registering custom listeners, the corresponding default listeners are removed from the registry.
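A minimal sketch against the Hibernate 4.x event SPI (CustomFlushEntityEventListener is a hypothetical class; note that setListeners replaces the defaults rather than appending to them):

import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;

// Replace (not append to) the FLUSH_ENTITY listeners, so the default and
// the custom listener are never both registered for the same event.
SessionFactoryImplementor sfi = (SessionFactoryImplementor) sessionFactory;
EventListenerRegistry registry =
        sfi.getServiceRegistry().getService(EventListenerRegistry.class);
registry.setListeners(EventType.FLUSH_ENTITY, new CustomFlushEntityEventListener());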
My problem was that I had set up a @ManyToOne relationship. Maybe, if the answers above don't fix your problem, you might want to check the relationship that was mentioned in the error message.
Posting here because it's taken me over 2 weeks to get to the bottom of this, and I still haven't fully resolved it.
There is a chance that you're also just running into this bug, which has been around since 2017 and hasn't been addressed.
I honestly have no clue how to get around this bug. I'm posting here for my sanity and hopefully to shave a couple of weeks off your googling. I'd love any input anyone may have, but my particular "answer" to this problem was not listed in any of the above answers.
I had to replace the following collection initialization:
challenge.setGoals(memberChallenge.getGoals());
with
challenge.setGoals(memberChallenge.getGoals()
        .stream()
        .map(dmo -> {
            final ChallengeGoal goal = new ChallengeGoalImpl();
            goal.setMemberChallenge(challenge);
            goal.setGoalDate(dmo.getGoalDate());
            goal.setGoalValue(dmo.getGoalValue());
            return goal;
        })
        .collect(Collectors.toList()));
I changed
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(
    name = "some_id",
    referencedColumnName = "some_id"
)
to
@OneToMany(mappedBy = "some_id", cascade = CascadeType.ALL)
You're using references (indirectly, like pointers), so sometimes you're copying the reference instead of the object/collection you want. Hibernate checks this and throws that error. Here's what you can do:
Don't copy the object/collection;
Instantiate a new empty one;
Make a function to copy its content and call it.
For example:
public Entity copyEntity(Entity e) {
    Entity copy = new Entity();
    copy.setName(e.getName());      // copy plain fields onto the new instance
    copy.setCollection2(null);      // or start with a fresh, empty collection
    copy.setCollection3(copyCollection(e.getCollection3())); // deep-copy helper for collections
    return copy;
}
In a one-to-many/many-to-one relationship, this error will occur if you attach the same instance from the "many" side to more than one instance on the "one" side.
For example, each person can have many books, but each of these books can be owned by only one person; if you give a book more than one owner, this issue is raised.
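To close with a minimal sketch of that scenario (hypothetical Person/Book entities with the obvious setters):

List<Book> books = new ArrayList<>();
books.add(new Book("Dune"));

Person alice = new Person();
Person bob = new Person();

alice.setBooks(books);
bob.setBooks(books); // BUG: the same collection instance now has two owners

session.save(alice);
session.save(bob);   // throws "Found shared references to a collection"

// FIX: give each person an independent collection.
bob.setBooks(new ArrayList<>(books));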