class A {
    @ManyToOne
    private B b;
}

class B {
    @OneToMany(mappedBy = "b")
    private List<A> listA = new ArrayList<A>();

    private void addA(A a) {
        listA.add(a);
    }
}
So A is the owning side.
If I do A.setB(new B()) and then merge A, everything works and the association is kept.
If I do B.addA(new A()) and then merge B, the link between A and B will not be updated, right?
What should I do so that B.addA(new A()) updates the link between A and B?
Thank you very much
I don't fully understand your question, but I think you should set the other side of the association inside addA:
private void addA(A a) {
    listA.add(a);
    a.setB(this);
}
If I understand you correctly, it should be enough to set the cascade attribute. If you want to "control" links through B's collection, set it to @OneToMany(mappedBy = "b", cascade = CascadeType.ALL).
According to the docs, no operations are cascaded by default.
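Putting the two answers together, here is a minimal sketch of what B might look like (the id field and its generation strategy are assumptions I added to make the snippet self-contained):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class B {

    @Id
    @GeneratedValue
    private Long id;

    // Cascade so that an A added through addA() is merged/persisted along with B
    @OneToMany(mappedBy = "b", cascade = CascadeType.ALL)
    private List<A> listA = new ArrayList<>();

    // Keep both sides of the bidirectional association in sync:
    // the owning side (A.b) is what actually ends up in the database.
    public void addA(A a) {
        listA.add(a);
        a.setB(this);
    }
}

With the helper method and the cascade in place, b.addA(new A()) followed by a merge of b should persist the new A with its foreign key pointing at b.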
So, I know this question has been asked a lot, but I haven't seen it asked about a case like this.
I have the following entities:
@Entity
public class A {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(updatable = false)
    private Integer id;

    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL)
    private List<B> bs;
}

@Entity
public class B {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(updatable = false)
    private Integer id;

    @ManyToMany(cascade = CascadeType.ALL)
    @JoinTable(
        name = "BToC",
        joinColumns = {
            @JoinColumn(name = "BId", referencedColumnName = "id")
        },
        inverseJoinColumns = {
            @JoinColumn(name = "CId", referencedColumnName = "id")
        }
    )
    private List<C> cs;
}

@Entity
public class C {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(updatable = false)
    private Integer id;

    @ManyToMany(mappedBy = "cs", cascade = CascadeType.ALL)
    private List<B> bs;
}
If you had an A, a1, containing two B's with ids 1 and 2 (let's call them b1 and b2), each containing a single C with id 1 (call the two in-memory instances c1 and c2), then c1 and c2 would each have a bs list containing b1 and b2. When trying to merge A, the merge cascades to b1 and b2, and each B's merge then cascades to its C (c1 and c2). Since c1 and c2 have identical contents, I would expect the following result to be pushed to table BToC:
BId | CId
----------
1 | 1
2 | 1
However, the merge fails because c1 and c2 both represent the same entity but are technically different objects.
So, my question is this: is there a change I can make so that merges applied to A, B, or C succeed and cascade, even in cases like c1 and c2 where the contents are identical but the objects are different? Or would I need to omit CascadeType.MERGE from at least one of the entities and handle the "cascading" manually (i.e. iterate through each entity, merging only those whose contents have not already been merged, or assigning objects with identical contents to a single reference)?
OK, it seems the fix was pretty simple: I just needed to implement a proper equals method for the C class. Using that, I'm guessing JPA was able to determine that both representations were identical and had no problem merging them.
Hoping this works for anyone else running into a similar issue.
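For anyone hitting the same issue, here is a minimal sketch of what such an equals/hashCode pair might look like (basing equality on the database id is my assumption; the original poster doesn't show their implementation):

import java.util.List;
import javax.persistence.*;

@Entity
public class C {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(updatable = false)
    private Integer id;

    @ManyToMany(mappedBy = "cs", cascade = CascadeType.ALL)
    private List<B> bs;

    // Two C instances represent the same entity if they carry the same (non-null) id.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof C)) return false;
        return id != null && id.equals(((C) o).id);
    }

    // Constant hash so the value does not change when the id is generated at insert time.
    @Override
    public int hashCode() {
        return getClass().hashCode();
    }
}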
I have the following problem (pseudo Java code).
Let there be classes A, B, and C with the following relationships:
@Entity
@Table(name = "A")
public class A {
    @OneToMany(mappedBy = "a")
    private List<B> b;
}

@Entity
@Table(name = "B")
public class B {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "a_id")
    private A a;

    @OneToOne(mappedBy = "b", fetch = FetchType.LAZY)
    private C c;
}

@Entity
@Table(name = "C")
public class C {
    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "b_id")
    private B b;
}
I'm using JpaRepository with the @Query annotation, and I implemented the following query:
@Query("SELECT DISTINCT(a) FROM A a "
        + "LEFT JOIN FETCH a.b as b "
        + "WHERE a.id = :id")
A findById(@Param("id") Integer id);
I want to retrieve the information about classes A and B, but not C.
Somehow (I don't know why) the query also tries to retrieve the relation between B and C, and Hibernate then starts a lazy invocation to fetch C.
Naturally, if I also fetch the relation between B and C (adding LEFT JOIN FETCH b.c as c), that doesn't happen.
My question is: why? Why am I forced to fetch all nested relations and not only the ones I need?
Thank you.
Carmelo
Nullable @OneToOne relations are always eagerly fetched, as explained in this post:
Making a OneToOne-relation lazy
Unconstrained (nullable) one-to-one association is the only one that can not be proxied without bytecode instrumentation. The reason for this is that owner entity MUST know whether association property should contain a proxy object or NULL and it can't determine that by looking at its base table's columns due to one-to-one normally being mapped via shared PK, so it has to be eagerly fetched anyway making proxy pointless.
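If the association can be made mandatory in your model, a commonly suggested workaround is to declare it non-optional so Hibernate can hand out a lazy proxy. A sketch, assuming every B really does have a C (behaviour varies between Hibernate versions, so treat this as something to verify rather than a guarantee):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.OneToOne;
import javax.persistence.Table;

@Entity
@Table(name = "B")
public class B {

    // ...the @ManyToOne to A stays exactly as in the question...

    // optional = false promises Hibernate the reference is never null,
    // so it can use a lazy proxy instead of eagerly selecting the C row.
    @OneToOne(mappedBy = "b", fetch = FetchType.LAZY, optional = false)
    private C c;
}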
Shot in the dark, but there are some issues with lazy-loading @OneToOne relationships, at least in older versions of Hibernate. I think (but can't find any documentation) that this was fixed in one of the newer versions, so if you are not using a recent version of Hibernate, try upgrading.
I have two entities,
class A { @OneToOne B b; }
class B { /* lots of properties and associations */ }
When I create a new A() and then save it, I'd like to only set the id of b.
So new A().setB(new B().setId(123)).
Then save that and have the database persist it.
I do not really need to or want to fetch the entire B first from the database, to populate an instance of A.
I remember this used to work, but when I test it now it doesn't.
I have tried CascadeType.ALL as well.
B bRef = (B) hibernateSession.byId(B.class).getReference(b.getId());
a.setB(bRef);
// hibernateSession.load(B.class, b.getId()) can also be used, as it does the same thing.
The JPA equivalent is:
entityManager.getReference(B.class, id)
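A short sketch of how that would be used to save the new A without loading B (the wrapper method, transaction handling, and the bId parameter are assumptions for illustration):

import javax.persistence.EntityManager;

// Assumes an active transaction and the A/B entities from the question.
void linkNewAToExistingB(EntityManager entityManager, Integer bId) {
    // getReference returns an uninitialized proxy; no SELECT is issued for B here.
    B bRef = entityManager.getReference(B.class, bId);

    A a = new A();
    a.setB(bRef);
    entityManager.persist(a); // only B's id is written into the foreign key column
}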
The code below should help. It will fetch B only when it's accessed.
class A {
    @OneToOne(fetch = FetchType.LAZY)
    B b;
}
Say I have an entity like this
@Entity
class A {
    // fields

    @OneToMany
    Set<B> b;
}
Now, how do I limit the number of B's in the collection in such a way that when there is a new entry, the oldest one is removed, something like the removeEldestEntry we have in LinkedHashMap?
I am using MySQL 5.5 DB with Hibernate. Thanks in advance.
EDIT
My goal is to never have more than N entries in that table at any point in time.
One solution I have is to use a Set and schedule a job to remove the older entries, but I find that dirty. I am looking for a cleaner solution.
I would enforce this rule manually in code. The main idea is that the collection of B should be well encapsulated, so that clients can only change its content through a public method (i.e. addB()). Then enforce the rule inside addB() so that the number of entries in the collection can never grow beyond a maximum value.
A:

@Entity
public class A {

    public static final int MAX_NUM_B = 4;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<B> b = new LinkedHashSet<B>();

    public void addB(B b) {
        if (this.b.size() == MAX_NUM_B) {
            Iterator<B> it = this.b.iterator();
            it.next();
            it.remove();
        }
        this.b.add(b);
    }

    public Set<B> getB() {
        return Collections.unmodifiableSet(this.b);
    }
}
B:

@Entity
public class B {
    @ManyToOne
    private A a;
}
Main points:
A should be the owner of the relationship.
In A, do not simply return the B collection, as clients could bypass the checking logic implemented in addB(B b) and change its content freely. Instead, return an unmodifiable view of it.
In @OneToMany, set orphanRemoval to true to tell JPA to remove B's DB records after the corresponding instances are removed from the collection.
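A short usage sketch of the above, assuming A is loaded and saved through a standard EntityManager (the merge call and variable names are illustrative):

A a = entityManager.find(A.class, someId);
a.addB(new B());        // if the set already holds MAX_NUM_B entries, the oldest is dropped
entityManager.merge(a); // cascade saves the new B; orphanRemoval deletes the dropped one's row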
There is an API for this provided by Apache Commons Collections: the class CircularFifoBuffer. For the problem you have, you can achieve it as shown in the example below:

// from Apache Commons Collections 3.x
import org.apache.commons.collections.Buffer;
import org.apache.commons.collections.buffer.CircularFifoBuffer;

Buffer buf = new CircularFifoBuffer(4);
buf.add("A");
buf.add("B");
buf.add("C");
buf.add("D"); // ABCD
buf.add("E"); // BCDE
I think you will have to do it manually.
One solution that comes to mind is using @PrePersist and @PreUpdate lifecycle callbacks in entity A.
Within the method annotated with those annotations, check the size of Set<B>; if it is above the max limit, delete the oldest B entries (which may be tracked by a created_time timestamp property on B).
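A rough sketch of that idea (the id field, the MAX_B limit, the createdTime property on B, and the comparator are all assumptions; note also that the JPA spec discourages modifying relationships inside lifecycle callbacks, so verify the behaviour with your provider):

import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class A {

    private static final int MAX_B = 4;

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<B> b = new LinkedHashSet<>();

    // Trim the collection just before A is written, dropping the oldest entries
    // (assumes B exposes a getCreatedTime() value set when the entry was added).
    @PrePersist
    @PreUpdate
    private void enforceMaxSize() {
        while (b.size() > MAX_B) {
            B oldest = Collections.min(b, Comparator.comparing(B::getCreatedTime));
            b.remove(oldest);
        }
    }
}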
We have three entities with bidirectional many-to-many mappings in an A <-> B <-> C "hierarchy" like so (simplified, of course):
@Entity
class A {
    @Id int id;

    @JoinTable(
        name = "a_has_b",
        joinColumns = {@JoinColumn(name = "a_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "b_id", referencedColumnName = "id")})
    @ManyToMany
    Collection<B> bs;
}

@Entity
class B {
    @Id int id;

    @JoinTable(
        name = "b_has_c",
        joinColumns = {@JoinColumn(name = "b_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "c_id", referencedColumnName = "id")})
    @ManyToMany(fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<C> cs;

    @ManyToMany(mappedBy = "bs", fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<A> as;
}

@Entity
class C {
    @Id int id;

    @ManyToMany(mappedBy = "cs", fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<B> bs;
}
There's no concept of an orphan (the entities are "standalone" from the application's point of view), and most of the time we're going to have a handful of A's, each with a couple of B's (some may be "shared" among the A's), and some 1000 C's, not all of which are always "in use" by any B. We've concluded that we need bidirectional relations, since whenever an entity instance is removed, all links (entries in the join tables) have to be removed too. That is done like this:
void removeA(A a) {
    if (a.getBs() != null) {
        for (B b : a.getBs()) { // <--------- ConcurrentModificationException here
            b.getAs().remove(a);
            entityManager.merge(b);
        }
    }
    entityManager.remove(a);
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown. I've been banging my head for a while now, but can't think of a reasonable way of removing the links without meddling with the collection, which makes the underlying Iterator angry.
Q1: How am I supposed to do this, given the current ORM setup? (If at all...)
Q2: Is there a more reasonable way to design the OR mappings that will let JPA (provided by Hibernate in this case) take care of everything? It'd be just swell if we didn't have to include those "I'll be deleted now, so everybody I know, listen carefully: you don't need to know about this!" loops, which aren't working anyway, as it stands...
This problem has nothing to do with the ORM, as far as I can tell. You cannot use Java's syntactic-sugar for-each construct to remove an element from the collection you are iterating over.
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
Source
Simplified example of the problematic code:
List<B> bs = a.getBs();
for (B b : bs)
{
    if (/* some condition */)
    {
        bs.remove(b); // throws ConcurrentModificationException
    }
}
You must use the Iterator version to remove elements while iterating. Correct implementation:
List<B> bs = a.getBs();
for (Iterator<B> iter = bs.iterator(); iter.hasNext();)
{
    B b = iter.next();
    if (/* some condition */)
    {
        iter.remove(); // works correctly
    }
}
Edit: I think this will work, though it's untested. If not, you should stop seeing ConcurrentModificationExceptions, but instead (I think) you'll see ConstraintViolationExceptions.
void removeA(A a)
{
    if (a != null)
    {
        a.setBs(new ArrayList<B>()); // wipe out all of a's Bs
        entityManager.merge(a);      // synchronize the state with the database
        entityManager.remove(a);     // removing should now work without ConstraintViolationExceptions
    }
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown
The issue is that the collections inside of A, B, and C are magical Hibernate collections, so when you run the following statement:
b.getAs().remove(a);
this removes a from b's collection, but it also removes b from a's list, which happens to be the collection being iterated over in the for loop. That generates the ConcurrentModificationException.
Matt's solution should work if you are really removing all elements in the collection. If you aren't, however, another workaround is to copy all of the B's into a local collection, which removes the magical Hibernate collection from the process.
// copy out of the magic hibernate collection to a local collection
List<B> copy = new ArrayList<>(a.getBs());
for (B b : copy) {
    b.getAs().remove(a);
    entityManager.merge(b);
}
That should get you a little further down the road.
Gray's solution worked! Fortunately for us, the JPA people seem to have implemented collections as close as possible to what the official Sun documentation on the proper use of collections indicates:
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
I was all but pulling out my hair over this exception, thinking it meant one @Stateless method could not call another @Stateless method from its own class. I thought that odd, as I was sure I had read somewhere that nested transactions are allowed. So when I searched for this very exception, I found this posting and applied Gray's solution. Only in my case I happened to have two independent collections that had to be handled. As Gray indicated, according to the Java spec on the proper way to remove a member from a container, you need to iterate over a copy of the original container and then do your remove() on the original, which makes a lot of sense. Otherwise, the original container's linked-list algorithm gets confused.
for (Participant p2 : new ArrayList<Participant>(p1.getFollowing())) {
    p1.getFollowing().remove(p2);
    getEm().merge(p1);
    p2.getFollowers().remove(p1);
    getEm().merge(p2);
}
Notice I only make a copy of the first collection (p1.getFollowing()) and not the second collection (p2.getFollowers()). That is because I only need to iterate over one collection, even though I need to remove associations from both collections.