Hibernate: Persist tree-like structure - java

I have a tree-like structure in a Collection. I have ensured that no node in the collection makes extraneous references and that the nodes are topologically sorted, so the root node is at the head of the collection and the leaves are near its end.
My primary abstract node class is something like this:
@Entity
public abstract class Node {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public long ID;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id", insertable = false, updatable = false)
    Node parent;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "root_id", insertable = false, updatable = false)
    Node root;
}
I do not maintain a children list; instead, each node points to its parent. The root reference is a convenience field that refers to the root node of the tree; for example, it makes deleting entire trees easier. I have many descendants of Node, such as A, B, C, etc.
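A minimal sketch (mine, not from the question) of how such a root reference could be used to drop a whole tree with one bulk statement; it assumes root_id is actually populated for every node, that rootId holds the root's identifier, and that an open Session named session is available:
// Hypothetical bulk delete of an entire tree via the root reference.
// Assumes every node's root_id points at the tree's root node.
session.createQuery("delete from Node n where n.root.id = :rootId or n.ID = :rootId")
        .setParameter("rootId", rootId)
        .executeUpdate();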
The problem:
When trying to persist the entire tree, I use the following code.
// Check for extraneous references, and sort them topologically.
Session s = hibernate.openSession();
Transaction tx = s.beginTransaction();
try {
    int i = 0;
    for (Node p : objects) {
        if (p.parent == null) {
            throw new IOException("Parent is `null`.");
        }
        s.persist(p);
        if (i % batchSize == 0) {
            s.flush();
            s.clear();
        }
        i++;
    }
    tx.commit();
}
catch (Throwable t) {
    log.error(t.getMessage(), t);
    tx.rollback();
    throw new IOException(t);
}
This method doesn't persist objects correctly. If the batch size is too small, I get a PersistentObjectException with a message:
org.hibernate.PersistentObjectException: detached entity passed to persist: com.example.Node
If the batch size is at least as large as the total number of objects, persisting succeeds, but PARENT_ID and ROOT_ID in the database are all set to null. I am using H2 while testing. Note that class A is always the root node; all other objects can appear at any level below A. I tried s.merge() too, but that didn't work either. I have implemented equals() and hashCode() according to my business keys.
Is it a problem with my equals/hashCode methods? Or is it the way I'm attempting to persist? I don't know what's wrong with my code. Somehow I feel this is a trivial error and that I'm overlooking a fundamental aspect. Could someone please help me fix it? I tried reading through different blogs that talk about hierarchical representation using Hibernate, but nothing helped.

Try removing s.clear().
It detaches the objects from the persistence context, which may be what causes the detached entity passed to persist exception.
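A sketch of the loop from the question with only the clear() call removed; periodic flushes still push the pending inserts, but the already-persisted parents remain managed so later children can still reference them:
int i = 0;
for (Node p : objects) {
    if (p.parent == null) {
        throw new IOException("Parent is `null`.");
    }
    s.persist(p);
    if (i % batchSize == 0) {
        s.flush(); // flush, but do not clear: parents stay attached to the session
    }
    i++;
}
tx.commit();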

Related

Hibernate update before insert in one to many

I am getting a constraint violation exception because of the order of operations performed by Hibernate. I have the following entities defined.
@Entity
public class A {

    @Id
    private Integer id;

    @OneToMany(mappedBy = "a", fetch = FetchType.LAZY, cascade = CascadeType.ALL, orphanRemoval = true)
    private List<B> bList;

    public void setBList(List<B> bList) {
        if (CollectionUtils.isNotEmpty(this.bList)) {
            this.bList.clear();
        }
        if (CollectionUtils.isNotEmpty(bList)) {
            this.bList.addAll(bList);
        }
    }
}
@Entity
@Table(uniqueConstraints = {@UniqueConstraint(columnNames = {"name", "a_id", "isDeleted"})})
public class B {

    @Id
    private Integer id;

    private String name;

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn(name = "a_id")
    private A a;

    private boolean isDeleted;
}
When I set the new list of Bs (containing one item updated as deleted and a new item having the same values in the columns covered by the constraint) on entity A and save entity A, I get a constraint violation.
Hibernate performs the insert of the new item before updating the old item as deleted, leading to a constraint violation even though the data is correct from the application's point of view.
Am I doing something wrong here, or is there any configuration or fix for this?
Answer changed on 2021/05/07 due to comment from the OP pointing out it was missing the point
There are two things you should change for this to work:
You should not rely on Hibernate to guess the right order of operations for you; it relies on heuristics that might not fit your intent. In your case, call EntityManager.flush() after your soft delete of the old B and before persisting the new one (see the sketch below).
Your unique constraint will cause problems anyway when you soft-delete your second B, which is identical with respect to the unique columns. More on that hereafter.
In general, enforcing this kind of constraint in the database is a bad idea. If you try to update or insert an entity that violates it, you get an obscure PersistenceException, and it will be hard to tell your users the exact cause. So you will have to check those constraints programmatically before insertion or update anyway. Hence, you'd better remove them and ensure uniqueness in your application, unless they are vital to data integrity. The same goes for not-nullable columns and other constraints that are pure business logic.
One last piece of advice from experience: for a soft-delete column, use a timestamp rather than a boolean. It is the same effort to update and read your records, but it gives you valuable information about when a record was deleted.
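A minimal sketch of the flush ordering from the first point; the service method, the injected EntityManager, the @Transactional boundary, and the getters/setters on A and B are assumptions for illustration, not from the original question:
// Hypothetical service method: soft-delete the old B, flush so its UPDATE
// reaches the database first, then add the replacement B whose INSERT can no
// longer collide with the not-yet-updated row.
@Transactional
public void replaceB(A a, B oldB, B newB) {
    oldB.setDeleted(true);   // or set a deletion timestamp, as suggested above
    entityManager.flush();   // force the UPDATE before any INSERT
    newB.setA(a);
    a.getBList().add(newB);  // cascaded INSERT happens on the next flush/commit
}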

Hibernate many to many fetching associated objects

@Entity
@Table(name = "MATCHES")
public class Match implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MATCH_ID")
    private Long id;

    @ManyToMany(mappedBy = "matches", cascade = CascadeType.ALL)
    private Set<Team> teams = new HashSet<Team>();
}

@Entity
@Table(name = "Teams")
public class Team implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "TEAM_ID")
    private long id;

    @ManyToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinTable(name = "TEAM_MATCH",
               joinColumns = { @JoinColumn(name = "TEAM_ID") },
               inverseJoinColumns = { @JoinColumn(name = "MATCH_ID") })
    private Set<Match> matches = new HashSet<Match>();
}
Given those classes, I now want to get all the matches and, let's say, print the names of both teams.
public List getAllMatches() {
    Session session = HibernateUtil.getSession();
    Transaction t = session.beginTransaction();
    Criteria criteria = session.createCriteria(Match.class, "match");
    criteria.createAlias("match.teams", "mt", JoinType.LEFT_OUTER_JOIN);
    List result = criteria.list();
    t.commit();
    session.close();
    return result;
}
But when I invoke that method, result has size 2 even though I have only one match in my table. Both of those matches in result have two teams, which is correct. I have no idea why this happens. What I want is one Match object with two Team objects in its teams set, but I get two such Match objects. They are fine, but there are two of them. I'm completely new to this and have no idea how to fix these criteria. I tried removing FetchType.LAZY from the @ManyToMany in Team, but it doesn't help. Team also has properties like Players/Trainer etc. which are in their own tables, but I don't want to dig that deep yet; baby steps. I do wonder, though, whether doing such queries is a good idea; should I just return Matches and then, if I want the Teams, get them in another session?
Edit: I added criteria.setResultTransformer(DistinctRootEntityResultTransformer.INSTANCE); and it works. Is that how I was supposed to fix it, or is this for something completely different and I just got lucky?
I think the duplication is a result of your createAlias call, which besides having this side effect is redundant in the first place.
By calling createAlias with those arguments, you are telling Hibernate to not just return all matches, but to first cross index the MATCHES table with the TEAM_MATCH table and return a result for each matching pair of rows. You get one result for a row in the matches table paired with the many-to-many mapping to the first team, and another result for the same row in the matches table paired with the many-to-many mapping to the second team.
I'm guessing your intent with that line was to tell Hibernate to fetch the association. This is not necessary; Hibernate will fetch associated objects on its own automatically when needed.
Simply delete the criteria.createAlias call, and you should get the result you expected - with one caveat. Because the association uses lazy fetching, Hibernate won't load it until you access it, and if that happens after the session is closed you will get a LazyInitializationException.
In general, I would suggest solving this by opening and closing the session at a higher level of abstraction - getting all matches is presumably part of some larger task, and in most cases you should use one session for the duration of the entire task unless there are substantial delays (such as waiting for user input) involved. Changing that would likely require a significant redesign of your code, however; the quick solution is to loop over the result list and call Hibernate.initialize() on the teams collection of each Match, as sketched below. Or you could change the fetch type to eager, if the performance cost of always loading the association whether or not you need it is acceptable.
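A sketch of that quick solution, keeping the question's (legacy) Criteria API; getTeams() is an assumed getter on Match:
public List<Match> getAllMatches() {
    Session session = HibernateUtil.getSession();
    Transaction t = session.beginTransaction();
    // No createAlias: just load the matches themselves.
    List<Match> result = session.createCriteria(Match.class).list();
    // Initialize the lazy 'teams' collection of each match while the session
    // is still open, so no LazyInitializationException occurs after close().
    for (Match match : result) {
        Hibernate.initialize(match.getTeams());
    }
    t.commit();
    session.close();
    return result;
}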

StackOverflowError when retrieving ManyToMany join table with Java EJB 3

I have the following entity mappings for my EJB3 application that map a many-to-many relationship:
@Entity
class Crawl {
    @OneToMany(fetch = FetchType.EAGER, mappedBy = "pk.crawl")
    public List<Change> changes;
}

@Entity
class Change {
    @EmbeddedId
    ChangePK pk;

    @Temporal(javax.persistence.TemporalType.DATE)
    Date changeDate;
}

@Embeddable
class ChangePK {
    @ManyToOne
    Crawl crawl;

    @ManyToOne
    Page page;
}

@Entity
class Page {
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "pk.page")
    List<Change> changes;
}
I am trying to get all of the changes that are related to a crawl and order them by date using:
this.entityManager
    .createQuery("SELECT c FROM Change c "
            + "WHERE c.pk.crawl.id = :id "
            + "ORDER BY c.changeDate DESC")
    .setParameter("id", crawl.getId());
This is giving me a stack overflow error. I believe the eager fetch may have something to do with it, but in every other occurrence I want the changes loaded with a crawl, and it will cause a lot of problems in the rest of my application if I change the fetch type to lazy.
I have overridden the hashCode and equals methods for each class.
Edit:
hashcode and equals code:
@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + id;
    return result;
}

@Override
public boolean equals(Object obj) {
    if (obj == null)
        return false;
    if (getClass() != obj.getClass())
        return false;
    Crawl other = (Crawl) obj;
    if (id != other.id)
        return false;
    return true;
}
These were generated by Eclipse, and I selected the primary keys for use in them; the other classes all use the same approach.
If the whole object tree is big, there's no way* to avoid the StackOverflowError, as Hibernate resolves the dependencies recursively. That is fine for 99.9% of cases (in 8 years of using Hibernate, this is the first time I've seen this error).
One alternative is to increase the stack size, but that will increase it for all of the application's threads (which might not be a good thing). For example, you can add the option -Xss1m when you run the JVM and you'll get a 1 MB stack size (the default stack size varies from platform to platform, but it's usually around 512 KB).
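If raising -Xss for every thread is undesirable, one variation on the same idea (my sketch, not part of the original answer) is to run only the deep fetch on a dedicated thread created with a larger stack; note that the stackSize argument of the Thread constructor is only a hint that some JVMs ignore:
// crawlDao and loadCrawlWithChanges are hypothetical stand-ins for whatever
// code triggers the deep, recursive hydration.
void fetchOnBigStack(long crawlId) throws InterruptedException {
    Thread worker = new Thread(
            null,
            () -> crawlDao.loadCrawlWithChanges(crawlId),
            "deep-fetch",
            4L * 1024 * 1024); // ~4 MB stack for this thread only
    worker.start();
    worker.join();
}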
Another alternative is to change the mapping, but I think all the options involve denormalizing the table a bit.
One option is to flatten the tree, so that given a specific Crawl you can retrieve all of its children with one query. In this case, the collection Crawl.changes contains all the children, grandchildren, etc. of the crawl.
*there is always a way

JPA ManyToMany ConcurrentModificationException issues

We have three entities with bidirectional many-to-many mappings in an A <-> B <-> C "hierarchy", like so (simplified, of course):
@Entity
class A {
    @Id int id;

    @JoinTable(
        name = "a_has_b",
        joinColumns = { @JoinColumn(name = "a_id", referencedColumnName = "id") },
        inverseJoinColumns = { @JoinColumn(name = "b_id", referencedColumnName = "id") })
    @ManyToMany
    Collection<B> bs;
}

@Entity
class B {
    @Id int id;

    @JoinTable(
        name = "b_has_c",
        joinColumns = { @JoinColumn(name = "b_id", referencedColumnName = "id") },
        inverseJoinColumns = { @JoinColumn(name = "c_id", referencedColumnName = "id") })
    @ManyToMany(fetch = FetchType.EAGER,
        cascade = { CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<C> cs;

    @ManyToMany(mappedBy = "bs", fetch = FetchType.EAGER,
        cascade = { CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<A> as;
}

@Entity
class C {
    @Id int id;

    @ManyToMany(mappedBy = "cs", fetch = FetchType.EAGER,
        cascade = { CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<B> bs;
}
There's no concept of an orphan - the entities are "standalone" from the application's point of view - and most of the time we're going to have a fistful of A:s, each with a couple of B:s (some may be "shared" among the A:s), and some 1000 C:s, not all of which are always "in use" by any B. We've concluded that we need bidirectional relations, since whenever an entity instance is removed, all links (entries in the join tables) have to be removed too. That is done like this:
void removeA( A a ) {
    if ( a.getBs() != null ) {
        for ( B b : a.getBs() ) { // <--------- ConcurrentModificationException here
            b.getAs().remove( a );
            entityManager.merge( b );
        }
    }
    entityManager.remove( a );
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown. I've been banging my head for a while now, but can't think of a reasonable way of removing the links without meddling with the collection, which makes the underlying Iterator angry.
Q1: How am I supposed to do this, given the current ORM setup? (If at all...)
Q2: Is there a more reasonable way to design the OR mappings that will let JPA (provided by Hibernate in this case) take care of everything? It'd be just swell if we didn't have to include those "I'll be deleted now, so everybody I know, listen carefully: you don't need to know about this!" loops, which aren't working anyway, as it stands...
This problem has nothing to do with the ORM, as far as I can tell. You cannot use the syntactic-sugar foreach construct in Java to remove an element from a collection.
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
Source
Simplified example of the problematic code:
List<B> bs = a.getBs();
for (B b : bs)
{
if (/* some condition */)
{
bs.remove(b); // throws ConcurrentModificationException
}
}
You must use the Iterator version to remove elements while iterating. Correct implementation:
List<B> bs = a.getBs();
for (Iterator<B> iter = bs.iterator(); iter.hasNext();)
{
    B b = iter.next();
    if (/* some condition */)
    {
        iter.remove(); // works correctly
    }
}
Edit: I think this will work; untested however. If not, you should stop seeing ConcurrentModificationExceptions but instead (I think) you'll see ConstraintViolationExceptions.
void removeA(A a)
{
    if (a != null)
    {
        a.setBs(new ArrayList<B>()); // wipe out all of a's Bs
        entityManager.merge(a);      // synchronize the state with the database
        entityManager.remove(a);     // removing should now work without ConstraintViolationExceptions
    }
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown
The issue is that the collections inside of A, B, and C are magical Hibernate collections so when you run the following statement:
b.getAs().remove( a );
this removes a from b's collection but it also removes b from a's list which happens to be the collection being iterated over in the for loop. That generates the ConcurrentModificationException.
Matt's solution should work if you are really removing all elements in the collection. If you aren't, however, another workaround is to copy all of the b's into a local collection, which takes the magical Hibernate collection out of the process.
// copy out of the magic hibernate collection to a local collection
List<B> copy = new ArrayList<>(a.getBs());
for (B b : copy) {
    b.getAs().remove(a);
    entityManager.merge(b);
}
That should get you a little further down the road.
Gray's solution worked! Fortunately for us, the JPA implementers seem to have kept their collections close to what the official Sun documentation on the proper use of List<> collections indicates:
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
I was all but pulling out my hair over this exception, thinking it meant one @Stateless method could not call another @Stateless method from its own class. That seemed odd, as I was sure I had read somewhere that nested transactions are allowed. So when I searched for this very exception, I found this posting and applied Gray's solution; only in my case I happened to have two independent collections that had to be handled. As Gray indicated, and as the Java documentation on the proper way to remove a member from a container says, you need to iterate over a copy of the original container and then call remove() on the original container, which makes a lot of sense. Otherwise, the original container's iteration gets confused.
for ( Participant p2 : new ArrayList<Participant>( p1.getFollowing() )) {
    p1.getFollowing().remove(p2);
    getEm().merge(p1);
    p2.getFollowers().remove(p1);
    getEm().merge(p2);
}
Notice that I only make a copy of the first collection (p1.getFollowing()) and not the second collection (p2.getFollowers()). That is because I only need to iterate over one collection, even though I need to remove associations from both collections.

Hibernate - @ElementCollection - Strange delete/insert behavior

@Entity
public class Person {

    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    private List<Location> locations;

    [...]
}

@Embeddable
public class Location {
    [...]
}
Given this class structure, when I try to add a new location to a Person's list of Locations, it always results in the following SQL queries:
DELETE FROM PERSON_LOCATIONS WHERE PERSON_ID = :idOfPerson
And
A lotsa' inserts into the PERSON_LOCATIONS table
Hibernate (3.5.x / JPA 2) deletes all associated records for the given Person and re-inserts all previous records, plus the new one.
I had the idea that the equals/hashcode method on Location would solve the problem, but it didn't change anything.
Any hints are appreciated!
The problem is explained on the ElementCollection page of the JPA wikibook:
Primary keys in CollectionTable
The JPA 2.0 specification does not provide a way to define the Id in the Embeddable. However, to delete or update an element of the ElementCollection mapping, some unique key is normally required. Otherwise, on every update the JPA provider would need to delete everything from the CollectionTable for the Entity, and then insert the values back. So, the JPA provider will most likely assume that the combination of all of the fields in the Embeddable are unique, in combination with the foreign key (JoinColumn(s)). This however could be inefficient, or just not feasible if the Embeddable is big, or complex.
And this is exactly what happens here: Hibernate doesn't generate a primary key for the collection table, has no way to detect which element of the collection changed, and therefore deletes the old content from the table and re-inserts the new content.
However, if you define an @OrderColumn (to specify a column used to maintain the persistent order of a list - which would make sense since you're using a List), Hibernate will create a primary key (made of the order column and the join column) and will be able to update the collection table without deleting the whole content.
Something like this (if you want to use the default column name):
@Entity
public class Person {
    ...

    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @OrderColumn
    private List<Location> locations;

    ...
}
References
JPA 2.0 Specification
Section 11.1.12 "ElementCollection Annotation"
Section 11.1.39 "OrderColumn Annotation"
JPA Wikibook
Java Persistence/ElementCollection
In addition to Pascal's answer, you have to also set at least one column as NOT NULL:
@Embeddable
public class Location {

    @Column(name = "path", nullable = false)
    private String path;

    @Column(name = "parent", nullable = false)
    private String parent;

    public Location() {
    }

    public Location(String path, String parent) {
        this.path = path;
        this.parent = parent;
    }

    public String getPath() {
        return path;
    }

    public String getParent() {
        return parent;
    }
}
This requirement is documented in AbstractPersistentCollection:
Workaround for situations like HHH-7072. If the collection element is a component that consists entirely of nullable properties, we currently have to forcefully recreate the entire collection. See the use of hasNotNullableColumns in the AbstractCollectionPersister constructor for more info. In order to delete row-by-row, that would require SQL like "WHERE ( COL = ? OR ( COL is null AND ? is null ) )", rather than the current "WHERE COL = ?" (fails for null for most DBs). Note that the param would have to be bound twice. Until we eventually add "parameter bind points" concepts to the AST in ORM 5+, handling this type of condition is either extremely difficult or impossible. Forcing recreation isn't ideal, but not really any other option in ORM 4.
We discovered that the entities we were using as our ElementCollection types did not have equals or hashCode methods defined and had nullable fields. We provided those (via Lombok, for what it's worth) on the entity type, and that allowed Hibernate (v 5.2.14) to identify whether or not the collection was dirty.
Additionally, this error manifested for us because we were inside a service method marked with @Transactional(readOnly = true). Since Hibernate would attempt to clear the related element collection and insert it all over again, the transaction would fail when being flushed, and things broke with this very difficult-to-trace message:
HHH000346: Error during managed flush [Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1]
Here is an example of our entity model that had the error
@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}

public class Entity2 {
    private UUID someUUID;
}
Changing it to this
@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}

@EqualsAndHashCode
public class Entity2 {
    @Column(nullable = false)
    private UUID someUUID;
}
Fixed our issue. Good luck.
I had the same issue but wanted to map a list of enums: List<EnumType>.
I got it working like this:
@ElementCollection
@CollectionTable(
    name = "enum_table",
    joinColumns = @JoinColumn(name = "some_id")
)
@OrderColumn
@Enumerated(EnumType.STRING)
private List<EnumType> enumTypeList = new ArrayList<>();

public void setEnumList(List<EnumType> newEnumList) {
    this.enumTypeList.clear();
    this.enumTypeList.addAll(newEnumList);
}
The issue in my case was that the List object was always replaced using the default setter, so Hibernate treated it as a completely "new" object even though the enums did not change.
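For contrast, a default setter like the following (hypothetical, not from the original post) is what triggers the delete-and-reinsert behavior, because it swaps in a brand-new collection instance instead of mutating the one Hibernate tracks:
// Problematic: replaces Hibernate's managed collection with a different instance,
// so Hibernate sees an entirely "new" collection on every update.
public void setEnumList(List<EnumType> newEnumList) {
    this.enumTypeList = newEnumList;
}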
