I have a list with 25 MyApplication objects that I want to save using hibernate/JPA. This is done with the following method:
MyApplicationRepository.saveAll(myAppList);
However, I noticed that Hibernate creates over 60,000 MyApplication objects (close to the total number of records already in the database for this entity) while inserting/updating this list of 25. I don't have a lot of Hibernate experience, which leads me to believe I created inefficient entity relations. Part of the MyApplication class:
```java
public class MyApplication {

    @ManyToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinTable(name = "APPLICATION_CATEGORY",
            joinColumns = { @JoinColumn(name = "applicationid", nullable = false, updatable = false) },
            inverseJoinColumns = { @JoinColumn(name = "categoryid", nullable = false, updatable = false) })
    private Set<Category> categorySet;

    @OneToMany(mappedBy = "myApplication", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    private Set<Screenshot> screenshotSet;
}
```
Category class (one example of the several many-to-many relations of MyApplication):
```java
public class Category {

    @ManyToMany(fetch = FetchType.LAZY, mappedBy = "categorySet")
    private Set<MyApplication> myApplicationSet;
}
```
Screenshot class:
```java
public class Screenshot {

    @ManyToOne
    @JoinColumn(name = "applicationid")
    private MyApplication myApplication;
}
```
What did I do wrong that resulted in Hibernate creating so many instances of MyApplication when saving?
Note 1: In the end, all of the information of MyApplication and the information of its categories and screenshots is saved correctly in the database.
Note 2: It's important that not only MyApplication is saved, but also everything from all its categories and screenshots.
I was able to fix the issue. The problem was caused by the bidirectional nature of the many-to-many relation: the Category side was also loading all of its applications from the database before saving. As this is not what I want, I resolved the issue by turning it into a unidirectional relation and removing the myApplicationSet from the Category class. Now only 25 MyApplication instances are constructed to save 25 applications, and my memory usage remains stable.
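For reference, a minimal sketch of the unidirectional version described above, with only the owning side keeping the mapping. The javax.persistence namespace and the id fields are assumptions for illustration; the join table and column names are taken from the question.

```java
import java.util.HashSet;
import java.util.Set;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;

// Owning side keeps the join-table mapping.
@Entity
public class MyApplication {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinTable(name = "APPLICATION_CATEGORY",
            joinColumns = @JoinColumn(name = "applicationid", nullable = false, updatable = false),
            inverseJoinColumns = @JoinColumn(name = "categoryid", nullable = false, updatable = false))
    private Set<Category> categorySet = new HashSet<>();
}

// The inverse Set<MyApplication> myApplicationSet was removed, so saving an
// application no longer pulls every linked application of each category into
// the persistence context.
@Entity
class Category {

    @Id
    @GeneratedValue
    private Long id;
}
```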
Related
I have a ManyToMany relationship between Profile and ProfileExperience that is mapped as follows:
```java
@ManyToMany
@JoinTable(name = "profile_experience_relations",
        joinColumns = { @JoinColumn(name = "profile_id") },
        inverseJoinColumns = { @JoinColumn(name = "profile_experience_id") })
private List<ProfileExperience> experiences;
```
I have added localization support inside of ProfileExperience, following this guide like so:
ProfileExperience Class
```java
@OneToMany(mappedBy = "profileExperience",
        cascade = {CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH},
        orphanRemoval = true)
@MapKey(name = "localizedProfileExperiencePk.locale")
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
private Map<String, LocalizedProfileExperience> localizations = new HashMap<>();
```
LocalizedProfileExperience Class
```java
@Entity
@Getter
@Setter
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class LocalizedProfileExperience {

    @EmbeddedId
    private LocalizedProfileExperiencePk localizedProfileExperiencePk;

    @ManyToOne
    @MapsId("id")
    @JoinColumn(name = "profileExperienceId")
    private ProfileExperience profileExperience;

    private String value;
}
```
Composite PK Class
```java
@Embeddable
@Getter
@Setter
public class LocalizedProfileExperiencePk implements Serializable {

    private static final long serialVersionUID = 1L;

    private String profileExperienceId;
    private String locale;

    public LocalizedProfileExperiencePk() {
    }
}
```
Before adding the localization there were no duplicate entries in the responses; however, everything retrieved is now duplicated.
I can solve the issue by using a Set, but I'm curious as to why this happened. What is the explanation? Can I solve it without using a Set? Am I overlooking something incredibly simple?
The problem is that you are probably using a join fetch or an entity graph to fetch nested collections. When you look at the JDBC result set, you will see that there are many duplicate result set rows. If you have a profile with 2 profile experiences, and each has 3 localizations, you will see 6 (2 * 3) rows repeating the same profile data. Theoretically, Hibernate could try to retain the expected object graph cardinality, but this is not easy, especially when multiple collections are involved. Also, for certain collection mappings it would simply not be possible.
So the short answer to your problem is: never use a List unless duplicates matter to you. And in that case you would also have an order column (e.g. @OrderColumn), so even then it would be safe to use a List.
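A minimal sketch of the two options for the Profile side, assuming the mapping from the question and the javax.persistence namespace; the id field and the order-column name are made up for illustration.

```java
import java.util.HashSet;
import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;

@Entity
public class Profile {

    @Id
    @GeneratedValue
    private Long id;

    // Option 1: a Set collapses the duplicated JDBC rows back into a single
    // element per ProfileExperience (it relies on a sensible equals/hashCode).
    @ManyToMany
    @JoinTable(name = "profile_experience_relations",
            joinColumns = @JoinColumn(name = "profile_id"),
            inverseJoinColumns = @JoinColumn(name = "profile_experience_id"))
    private Set<ProfileExperience> experiences = new HashSet<>();

    // Option 2 (instead of the field above): keep List<ProfileExperience> and
    // add @OrderColumn(name = "profile_experience_order") to the mapping so
    // Hibernate can map each row to a distinct list index rather than
    // appending duplicates.
}
```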
Implement the equals (and hashCode) method of your data class. Hibernate needs it.
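As a sketch of that suggestion, an identifier-based equals/hashCode for ProfileExperience might look like the following; the id field and its type are assumptions, so adapt them to the real key of the class.

```java
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class ProfileExperience {

    @Id
    private String id;

    public String getId() {
        return id;
    }

    // Two instances are equal only when both have an id and the ids match.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ProfileExperience)) return false;
        ProfileExperience other = (ProfileExperience) o;
        return id != null && id.equals(other.getId());
    }

    // A constant hash code keeps the equals/hashCode contract stable before
    // and after the id is assigned, e.g. when the entity sits in a HashSet
    // across a persist.
    @Override
    public int hashCode() {
        return getClass().hashCode();
    }
}
```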
In the context of a Spring Boot project using Spring Data JPA, I have defined the following entities:
Ent1 contains a list of Ent2 elements
Ent2 contains a list of Ent3 elements
When fetching a top-level Ent1 object through a repository, I'm seeing that every Ent2 which has more than one child appears multiple times in the Ent1.ent2 list. For example, an Ent2 with two children will appear twice.
So instead of getting each Ent2 exactly once in the list, I'm getting it repeated once per Ent3 child.
Notes:
There are no duplicates in the database
If I delete ent3b in the database, the duplicated ent2 disappears
Here's a simplified version of the code:
```java
@Entity
public class Ent1 {

    @OneToMany(mappedBy = "parent", fetch = FetchType.EAGER, cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Ent2> ent2 = new ArrayList<Ent2>();
}

@Entity
public class Ent2 {

    @ManyToOne
    @JoinColumn(name = "PARENT_ID", nullable = false)
    protected Ent1 parent;

    @OneToMany(mappedBy = "parent", fetch = FetchType.EAGER, cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Ent3> ent3 = new ArrayList<Ent3>();
}

@Entity
public class Ent3 {

    @ManyToOne
    @JoinColumn(name = "PARENT_ID", nullable = false)
    protected Ent2 parent;
}
```
The solution was to convert the Lists into Sets. Lists in JPA require additional data (i.e. an ordering column) to extract a total ordering of elements from the relationship. It can be done, but typically a Set is all the average user needs, and it better reflects the relationship most people are modeling.
The OP also commented that their previous provider didn't have this requirement, so if you are switching ORM providers (for example from EclipseLink), this may be a problem for you too.
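A minimal sketch of the Set-based version of the top-level entity, under the same simplified model as the question; the id field and the javax.persistence namespace are assumptions.

```java
import java.util.HashSet;
import java.util.Set;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Ent1 {

    @Id
    @GeneratedValue
    private Long id;

    // A Set de-duplicates the rows produced by the nested eager joins; a List
    // would need an extra ordering column (@OrderColumn) to keep each row
    // distinct by index.
    @OneToMany(mappedBy = "parent", fetch = FetchType.EAGER,
            cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<Ent2> ent2 = new HashSet<>();
}
```

The same change applies to the Ent2.ent3 collection.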
I have this class:
```java
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @NaturalId
    @Column(name = "name", nullable = false, updatable = false, unique = true)
    private String name;

    @OneToMany(mappedBy = "tenant", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<User> users;

    @OneToMany(mappedBy = "tenant", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Role> roles;

    @OneToOne(mappedBy = "tenant", cascade = CascadeType.ALL, orphanRemoval = true, optional = false)
    private TenantLimits limits;
}
```
All referenced classes are, of course, entities. I'm able to create, update and retrieve everything from here, but since private TenantLimits limits; refers to an entity that is created after the Tenant is created, many of my Tenant records don't contain any matching TenantLimits.
So my question is: how can I insert a value into TenantLimits in the database if it is null when I retrieve the Tenant? In Java I can easily check whether the property is null and insert it manually for each retrieval, but since this entity is retrieved in several places in my code, I'd like something that manages this automatically, if such a thing exists.
You are telling Hibernate that Tenant.limits cannot be null by mapping it with optional = false. It will 100% adhere to this definition: it will only create valid tenants, and I assume it will throw exceptions if the state of the database is invalid. It won't let you fix your data.
You should fix the state of your database by any means other than this particular Hibernate mapping.
You might have to migrate in two steps. For example, make the mapping optional = true, then run a Java process to fix your data (maybe by using an entity listener), and then change it back to optional = false.
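As a rough sketch of that intermediate step (run while the mapping is still optional = true), a plain JPA batch job could backfill the missing rows; the persistence-unit name and the Tenant/TenantLimits setters are assumptions for illustration.

```java
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// One-off data fix: create a default TenantLimits for every Tenant that has
// none, after which Tenant.limits can be switched back to optional = false.
public class TenantLimitsBackfill {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit");
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();

            List<Tenant> withoutLimits = em.createQuery(
                    "select t from Tenant t left join t.limits l where l is null",
                    Tenant.class).getResultList();

            for (Tenant tenant : withoutLimits) {
                TenantLimits limits = new TenantLimits();
                limits.setTenant(tenant);   // keep both sides of the association in sync
                tenant.setLimits(limits);
                em.persist(limits);
            }

            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}
```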
Posting this here as I wasn't seeing much interest here: http://www.java-forums.org/jpa/96175-openjpa-one-many-within-one-many-merge-problems.html
Trying to figure out if this is a problem with OpenJPA or something I may be doing wrong...
I'm facing a problem when trying to use OpenJPA to update an entity that has a one-to-many relationship to another entity, which in turn has a one-to-many relationship to a third. Here's a quick example of what I'm talking about:
```java
@Entity
@Table(name = "school")
public class School {

    @Id
    @Column(name = "id")
    protected Long id;

    @Column(name = "name")
    protected String name;

    @OneToMany(mappedBy = "school", orphanRemoval = true, cascade = CascadeType.ALL)
    protected Collection<ClassRoom> classRooms;
}

@Entity
@Table(name = "classroom")
public class ClassRoom {

    @Id
    @Column(name = "id")
    protected Long id;

    @Column(name = "room_number")
    protected String roomNumber;

    @ManyToOne
    @JoinColumn(name = "school_id")
    protected School school;

    @OneToMany(mappedBy = "classRoom", orphanRemoval = true, cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    protected Collection<Desk> desks;
}

@Entity
@Table(name = "desk")
public class Desk {

    @Id
    @Column(name = "id")
    protected Long id;

    @ManyToOne
    @JoinColumn(name = "classroom_id")
    protected ClassRoom classRoom;
}
```
In the SchoolService class, I have the following update method:
```java
@Transactional
public void update(School school) {
    em.merge(school);
}
```
I'm trying to remove a ClassRoom from the School. I remove it from the classRooms collection and call update. I'm noticing that if the ClassRoom has no desks, there are no issues. But if the ClassRoom has desks, it throws a constraint error, as it seems to try to delete the ClassRoom first and then the Desks. (There is a foreign key constraint on the classroom_id column.)
Am I going about this the wrong way? Is there some setting I'm missing to get it to delete the interior "Desk" instances first before deleting the Class Room instance that was removed?
Any help would be appreciated. If you need any more info, please just let me know.
There are various bug reports around FK violations in OpenJPA when cascading remove operations to child entities (see the issue linked at the bottom of this answer).
The OpenJPA FAQ notes the following:
http://openjpa.apache.org/faq.html#reorder
Can OpenJPA reorder SQL statements to satisfy database foreign key constraints?
Yes. OpenJPA can reorder and/or batch the SQL statements using different configurable strategies. The default strategy is capable of reordering the SQL statements to satisfy foreign key constraints. However, you must tell OpenJPA to read the existing foreign key information from the database schema:
It would seem you can force the correct ordering of the statements either by setting the following property in your OpenJPA configuration:
```xml
<property name="openjpa.jdbc.SchemaFactory" value="native(ForeignKeys=true)"/>
```
or by adding the org.apache.openjpa.persistence.jdbc.ForeignKey annotation to the mapping:
```java
@OneToMany(mappedBy = "classRoom", orphanRemoval = true, cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@org.apache.openjpa.persistence.jdbc.ForeignKey
protected Collection<Desk> desks;
```
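If the EntityManagerFactory is bootstrapped programmatically rather than through persistence.xml, the same SchemaFactory property can be passed as a map entry; a minimal sketch, with the persistence-unit name made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaBootstrap {

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Tell OpenJPA to read existing foreign keys from the schema so it can
        // reorder the DELETE statements (desks before their classroom).
        props.put("openjpa.jdbc.SchemaFactory", "native(ForeignKeys=true)");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("school-unit", props);
        // ... obtain EntityManagers from emf as usual ...
        emf.close();
    }
}
```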
See also:
https://issues.apache.org/jira/browse/OPENJPA-1936
I'm using tenant-per-schema multitenancy and I have the following entities:
```java
@Entity
@Multitenant(MultitenantType.TABLE_PER_TENANT)
@TenantTableDiscriminator(type = TenantTableDiscriminatorType.SCHEMA)
public class Person {

    @OneToOne(mappedBy = "person", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    private CTPS ctps;
}

@Entity
@Table(name = "CTPS")
@Multitenant(MultitenantType.TABLE_PER_TENANT)
@TenantTableDiscriminator(type = TenantTableDiscriminatorType.SCHEMA)
public class CTPS {

    @OneToOne
    @JoinTable(name = "PERSON_CTPS",
            joinColumns = @JoinColumn(name = "CTPS_ID"),
            inverseJoinColumns = @JoinColumn(name = "PERSON_ID"))
    private Person person;
}
```
During simultaneous updates using two different tenants, a key violation error occurs in one of the requests, because tenant_a is trying to execute an insert into the person_ctps table using tenant_b's schema.
I'm using:
postgresql-9.4.5-3
wildfly-8.2.0
EclipseLink 2.6.3 with the patches for issues 410870 and 493235.
Does anyone know how to fix this?
I found the problem: the object that maintains the relation tables is not cloned in EclipseLink.
With the patch attached to issue 498891, the problem is solved.