This question sort of asks what I'm trying to achieve, but there isn't really an answer: Hibernate validate ManyToOne has at least one
I have two objects (A and B). A is the parent, B is the child. It's a one-to-many relationship; however, I need there to always be at least one B for each A. There are default values for all fields in B, so if an A is created without a B, a default B can be added to make sure there is always one. If one or more B objects are added to A, then there's no need to create a default B.
This is A:
[Fields]
@OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
@JoinColumn(name = "key", nullable = false)
@Fetch(value = FetchMode.SUBSELECT)
private List<B> b = new ArrayList<>();
...
@PrePersist
protected void onCreate() {
    // Default values configured here, for example
    if (fieldA1 == null) {
        fieldA1 = "A DEFAULT";
    }
    ...
}
This is B:
[Fields]
@PrePersist
protected void onCreate() {
    // Default values configured here, for example
    if (fieldB1 == null) {
        fieldB1 = "B DEFAULT";
    }
    ...
}
I thought I could use the same @PrePersist annotation in A, check if there are any B objects, and if not create a default B:
@PrePersist
protected void onCreate() {
    // Default values configured here
    ...
    if (b.size() == 0) {
        b.add(new B());
    }
}
That doesn't work. If I create an A with no B objects then in the log I just get:
Handling transient entity in delete processing
and A is created without the B. If I try and create an A with at least one B then I get the following error:
Caused by: org.hibernate.TransientObjectException: object references
an unsaved transient instance - save the transient instance before
flushing
Any ideas how I can make this work?
Without knowing more of your code (and it might be a little too much for SO anyway), I'll have to make some assumptions, but I'd say that when onCreate() is called the Hibernate session is already in a "flushing" state and thus won't accept any new entities.
Currently I can think of two options, which we also use in some cases:
Throw a CDI event and have an event handler (asynchronously) trigger the creation of the default element in a new transaction once the current transaction has completed successfully (see the sketch after this list).
Create a sub-session in Hibernate that forks from the current one and uses the same transaction. Then use this sub-session to create your default element.
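For the first option, a rough sketch of what the CDI side could look like (untested; the event class, observer, bean and getter names are all made up, and you would fire the event with an injected Event<ACreatedEvent> after persisting A):
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

class ACreatedEvent {
    private final Long aId;
    ACreatedEvent(Long aId) { this.aId = aId; }
    Long getAId() { return aId; }
}

@ApplicationScoped
class DefaultBCreator {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction, after the original one has committed.
    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void ensureDefaultB(Long aId) {
        A a = em.find(A.class, aId);
        if (a != null && a.getB().isEmpty()) {
            a.getB().add(new B()); // CascadeType.ALL persists the default B
        }
    }
}

@ApplicationScoped
class ACreatedObserver {

    @Inject
    private DefaultBCreator defaultBCreator;

    // Only react once the creating transaction has completed successfully.
    void onACreated(@Observes(during = TransactionPhase.AFTER_SUCCESS) ACreatedEvent event) {
        defaultBCreator.ensureDefaultB(event.getAId());
    }
}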
For the second option, here's how we do it in a Hibernate PostInsertEventListener:
if (postInsertEvent.getEntity() instanceof ClassThatNeedsToBeHandled) {
    ClassThatNeedsToBeHandled insertedEntity = (ClassThatNeedsToBeHandled) postInsertEvent.getEntity();
    Session subSession = postInsertEvent.getSession().sessionWithOptions()
            .connection()               // use the same connection as the parent session
            .noInterceptor()            // we don't need additional interceptors
            .flushMode(FlushMode.AUTO)  // flush on close
            .autoClose(true)            // close after the transaction is completed
            .autoJoinTransactions(true) // use the same transaction as the parent session
            .openSession();
    subSession.saveOrUpdate(new SomeRelatedEntity(insertedEntity));
}
One thing to keep in mind is that our SomeRelatedEntity is the owner of the relation. Your code indicates that A would be the owner, but that might cause problems because you'd have to change the A instance during flush to get the relation persisted. If B were the owning side (it has a back-reference to A, and in A you have mappedBy in your @OneToMany), it should work.
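For reference, a sketch of what the mapping could look like with B as the owning side (untested, field names guessed from your snippets):
@Entity
public class B {
    // B owns the relation: the foreign key column lives in B's table
    @ManyToOne(optional = false)
    @JoinColumn(name = "key", nullable = false)
    private A a;

    // fields and @PrePersist defaults as before
}

@Entity
public class A {
    // A becomes the inverse side and simply mirrors B's "a" field
    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<B> b = new ArrayList<>();
}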
Edit:
Actually, there might be a third option: when you create a new A, add a default element, and remove it again once real elements are added. That way you don't have to mess with Hibernate sessions or transaction scopes.
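A rough sketch of that third option; the factory method and the isDefault() marker on B are made-up names:
@Entity
public class A {

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "key", nullable = false)
    private List<B> b = new ArrayList<>();

    // Used by application code instead of `new A()`, so entities loaded by
    // Hibernate (which also calls the no-arg constructor) are unaffected.
    public static A createWithDefaultB() {
        A a = new A();
        a.b.add(new B()); // B's @PrePersist fills in its default values
        return a;
    }

    public void addB(B realB) {
        // Once a real B arrives the placeholder is no longer needed;
        // orphanRemoval = true makes Hibernate delete its row.
        b.removeIf(B::isDefault);
        b.add(realB);
    }
}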
Related
I modeled a unidirectional @ManyToMany self-referencing relationship. A test may require other tests in order to be executed:
@Entity
public class Test {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToMany
    @JoinTable(
        name = "required_",
        joinColumns = @JoinColumn(name = "test_id", referencedColumnName = "id"),
        inverseJoinColumns = @JoinColumn(name = "required_test_id", referencedColumnName = "id")
    )
    private Set<Test> requiredTests;
}
Each test is described by an XML-file.
In the XML the required tests are referenced by name.
Now I'm trying to import all the tests, but so far the dependency relationships between the tests are not correctly saved in the DB. I guess I'm doing something wrong.
This is what I did (pseudo-code):
// Each test is imported with this method:
private void importTest(TestCaseXml testCaseXml) {
    Test test = testRepository.findByName(testCaseXml.getName()).orElse(new Test());
    test.setRequiredTests(fetchAlreadyExistentOrCreateRequiredTestsDeclaredIn(testCaseXml));
    testRepository.save(test);
}

private Set<Test> fetchAlreadyExistentOrCreateRequiredTestsDeclaredIn(
        final TestCaseXml testCaseXml) {
    final List<String> requiredTestcasesByName = testCaseXml.getNameOfRequiredTests();
    return requiredTestcasesByName.stream()
            .map(name -> testRepository.findByName(name)
                    .orElse(testRepository.save(new Test().setName(name))))
            .collect(Collectors.toSet());
}

// The import of all the tests is wrapped in one single transaction.
So far the result is that only one dependency is persisted: if, say, Test A requires B, C and D, the join table ends up containing only (A - D).
Does anyone have a clue?
Edit after Chris' feedback
I made some more experiments that I'd like to share here, since the outcome is really confusing.
First scenario: Let's say I have 3 tests I want to import (A, B and C) and they will be processed in this order.
Test A requires Test B and Test C.
Test B and C have no requirement.
When Test A is being imported, at some point fetchAlreadyExistentOrCreateRequiredTestsDeclaredIn() will be called. I debugged it and can confirm that the method returns a Set containing Test B and Test C, both of them with a name and an id (the presence of the id is a bit surprising; maybe Hibernate flushed before the end of the global transaction?). Anyway, this result does not confirm Chris' hypothesis, since the method does return a Set with the two expected tests.
Nevertheless: I repeated this first scenario, but this time using List instead of Set, as Chris suggested, and indeed it did work. To be honest, I don't understand why.
Now it gets still a bit more weird:
Second scenario: I have 3 tests I want to import (A, B and C) and they will be processed in this order.
Test A has no requirement
Test B requires Test A and C
Test C has no requirement
This will throw an Exception
java.sql.SQLIntegrityConstraintViolationException: (conn=819) Duplicate entry 'Test A' for key 'XYZ'
Somehow it seems I fixed this by getting rid of the functional syntax in fetchAlreadyExistentOrCreateRequiredTestsDeclaredIn()
I replaced
return requiredTestcasesByName.stream()
.map(name -> testRepository.findByName(name)
.orElse(testRepository.save(new Test().setName(name))))
.collect(Collectors.toList());
with this:
final var requiredTests = new ArrayList<Test>();
for (final String name : requiredTestcasesByName) {
    final Test test = testRepository.findByName(name).isPresent()
            ? testRepository.findByName(name).get()
            : testRepository.save(new Test().setName(name));
    requiredTests.add(test);
}
return requiredTests;
After performing these two changes (List instead of Set, and getting rid of the functional syntax) it seems to work as expected. I'd like to understand what is happening behind the scenes.
Edit 27.06.22
I set up a demo project to reproduce this strange behaviour:
https://github.com/JulienDeBerlin/manyToMany/tree/master
I'm pretty confident you have implemented equals and hashCode methods in your entity classes, and that they rely on the ID. Your code can then be broken down into the following equivalent sequence:
Set<Test> set = new HashSet<>();
Test b = new Test().setName("B");
set.add(b);
Test c = new Test().setName("C");
set.add(c);
Test d = new Test().setName("D");
set.add(d);
assertEquals(1, set.size());
Why? If you check what is returned from each testRepository.save call, the entities do not have their IDs generated yet. JPA does NOT guarantee that sequence values are set on persist calls (which are underneath your Spring repository.save call), but it does guarantee they will be set on the instance when the transaction is synchronized (flushed or committed) to the database. As they are all in the same transaction, that only happens AFTER they are added to the set. Your hashCode/equals methods have already compared all three and decided they are the same instance (null id), so only one of them survives in the set.
Simplest solution is to return a list instead:
private List<Test> fetchAlreadyExistentOrCreateRequiredTestsDeclaredIn(
final TestCaseXml testCaseXml) {
final List<String> requiredTestcasesByName = testCaseXml.getNameOfRequiredTests();
return requiredTestcasesByName.stream()
.map(name -> testRepository.findByName(name)
.orElse(testRepository.save(new Test().setName(name))))
.collect(Collectors.toList());
}
I'd also suggest fixing or just outright removing the equals/hashCode methods from your entities; I bet you don't really need them.
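If you do want to keep id-based equality, one common pattern (general advice, not something taken from your code) is to treat entities without an id as equal only to themselves and to keep hashCode constant, so the hash does not change when the id is assigned at flush time:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Test)) return false;
    Test other = (Test) o;
    // unsaved entities (null id) are never equal to anything but themselves
    return id != null && id.equals(other.getId());
}

@Override
public int hashCode() {
    // constant per class, so adding the entity to a Set before the id is generated is safe
    return getClass().hashCode();
}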
I have two entities,
class A { #OneToOne B b; }
class B { ... lots of properties and associations ... }
When I create a new A() and then save it, I'd like to only set the id of b.
So new A().setB(new B().setId(123)).
Then save that and have the database persist it.
I do not really need to or want to fetch the entire B first from the database, to populate an instance of A.
I remember this used to work, but in my current tests it does not.
I have tried Cascade All as well.
B b = hibernateSession.byId(B.class).getReference(bId); // bId is the known id, e.g. 123
a.setB(b);
hibernateSession.load(...) can also be used, as it does the same thing.
The JPA equivalent is:
entityManager.getReference(B.class, id)
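A minimal sketch of how that could be used here (assuming a plain EntityManager and the id 123 from the example above):
// no SELECT is issued here; only a proxy holding the id is created
B bRef = entityManager.getReference(B.class, 123); // 123 = the known id, using B's id type

A a = new A();
a.setB(bRef);
entityManager.persist(a); // inserts A with the foreign key to B, without ever loading B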
The code below should help. It will fetch B only when it's accessed.
class A {
#OneToOne(fetch = FetchType.LAZY) B b;
}
I'm getting the good, old and dreaded TransientObjectException, and, as often happens in such cases, I'm having problems locating what kind of subtle bug in the code is causing the problem.
My question is: is there a way to obtain a list of every object that's in the current Hibernate session?
I'll probably have solved the current problem by the time I get an answer for this question, but, anyway, being able to list everything that is in the session would help a lot the next time this happens.
Hibernate does not expose its internals to the public, so you won't find what you are searching for in the public API. However you can find your answer in the implementation classes of the Hibernate interfaces:
This method (taken from http://code.google.com/p/bo2/source/browse/trunk/Bo2ImplHibernate/main/gr/interamerican/bo2/impl/open/hibernate/HibernateBo2Utils.java) will tell if an object exists in the session:
public static Object getFromSession(Serializable identifier, Class<?> clazz, Session s) {
    String entityName = clazz.getName();
    if (identifier == null) {
        return null;
    }
    SessionImplementor sessionImpl = (SessionImplementor) s;
    EntityPersister entityPersister = sessionImpl.getFactory().getEntityPersister(entityName);
    PersistenceContext persistenceContext = sessionImpl.getPersistenceContext();
    EntityKey entityKey = new EntityKey(identifier, entityPersister, EntityMode.POJO);
    Object entity = persistenceContext.getEntity(entityKey);
    return entity;
}
If you drill down a little more, you will see that the only implementation of PersistenceContext is org.hibernate.engine.StatefulPersistenceContext.
This class has the following collections:
// Loaded entity instances, by EntityKey
private Map entitiesByKey;
// Loaded entity instances, by EntityUniqueKey
private Map entitiesByUniqueKey;
// Identity map of EntityEntry instances, by the entity instance
private Map entityEntries;
// Entity proxies, by EntityKey
private Map proxiesByKey;
// Snapshots of current database state for entities
// that have *not* been loaded
private Map entitySnapshotsByKey;
// Identity map of array holder ArrayHolder instances, by the array instance
private Map arrayHolders;
// Identity map of CollectionEntry instances, by the collection wrapper
private Map collectionEntries;
// Collection wrappers, by the CollectionKey
private Map collectionsByKey; //key=CollectionKey, value=PersistentCollection
// Set of EntityKeys of deleted objects
private HashSet nullifiableEntityKeys;
// properties that we have tried to load, and not found in the database
private HashSet nullAssociations;
// A list of collection wrappers that were instantiating during result set
// processing, that we will need to initialize at the end of the query
private List nonlazyCollections;
// A container for collections we load up when the owning entity is not
// yet loaded ... for now, this is purely transient!
private Map unownedCollections;
// Parent entities cache by their child for cascading
// May be empty or not contains all relation
private Map parentsByChild;
So, what you need to do is cast the PersistenceContext to a StatefulPersistenceContext, then use reflection to get the private collection that you want and then iterate on it.
I strongly suggest you do that only in debugging code. This is not public API and it could break with newer releases of Hibernate.
Found @nakosspy's post very useful. Inspired by his post, I added this very simple utility method that outputs the contents of the Hibernate Session.
As nakosspy said this is ONLY for debugging purposes as it is a HACK.
public static void dumpHibernateSession(Session s) {
    try {
        SessionImplementor sessionImpl = (SessionImplementor) s;
        PersistenceContext persistenceContext = sessionImpl.getPersistenceContext();
        Field entityEntriesField = StatefulPersistenceContext.class.getDeclaredField("entityEntries");
        entityEntriesField.setAccessible(true);
        IdentityMap map = (IdentityMap) entityEntriesField.get(persistenceContext);
        log.info(map);
    } catch (Exception e) {
        log.error(e);
    }
}
Say I have an entity like this
@Entity
class A {
    //fields
    @OneToMany
    Set<B> b;
}
Now, how do I limit the number of B's in the collection in such a way that, when there is a new entry in the collection, the oldest one is removed, something like the removeEldestEntry we have in LinkedHashMap?
I am using MySQL 5.5 DB with Hibernate. Thanks in advance.
EDIT
My goal is to never have more than N entries in that table at any point in time.
One solution I have is to use a Set and schedule a job to remove the older entries. But I find it dirty. I am looking for a cleaner solution.
I would use code to enforce this rule manually. The main idea is that the collection b should be well encapsulated, so that clients can only change its content through a public method (i.e. addB()). Then enforce the rule inside addB() so that the number of entries in the collection can never exceed the maximum.
A:
@Entity
public class A {

    public static int MAX_NUM_B = 4;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<B> b = new LinkedHashSet<B>();

    public void addB(B b) {
        if (this.b.size() == MAX_NUM_B) {
            Iterator<B> it = this.b.iterator();
            it.next();
            it.remove();
        }
        this.b.add(b);
    }

    public Set<B> getB() {
        return Collections.unmodifiableSet(this.b);
    }
}
B:
@Entity
public class B {

    @ManyToOne
    private A a;
}
Main points:
A should be the owner of the relationship.
In A, do not simply return the b collection directly, as clients could bypass the checking logic implemented in addB(B b) and change its content freely. Instead, return an unmodifiable view of it.
In @OneToMany, set orphanRemoval to true to tell JPA to remove B's DB records once the corresponding instances are removed from the collection.
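For illustration, the eviction then behaves roughly like this (a quick usage sketch, not part of the original answer):
A a = new A();
for (int i = 0; i < 5; i++) {
    a.addB(new B());
}
// After the fifth add, the first B has been evicted (LinkedHashSet keeps insertion order),
// and orphanRemoval = true deletes its row at the next flush.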
There is also an API provided by Apache Commons Collections that addresses the same problem you have: the class CircularFifoBuffer. If you want an example, you can achieve it as shown below:
Buffer buf = new CircularFifoBuffer(4);
buf.add("A");
buf.add("B");
buf.add("C");
buf.add("D"); //ABCD
buf.add("E"); //BCDE
I think you will have to do it manually.
One solution that comes to mind is using @PrePersist and @PreUpdate event listeners in entity A.
Within the method annotated with these annotations, check the size of Set<B>; if it is above the max limit, delete the oldest B entries (which could be tracked by a created_time timestamp property on B).
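A rough sketch of that idea (untested; it assumes B exposes a getCreatedTime() accessor, and whether collection changes made this late in the lifecycle are picked up may depend on your Hibernate version):
@Entity
public class A {

    public static final int MAX_B = 10; // hypothetical limit

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<B> b = new HashSet<>();

    @PrePersist
    @PreUpdate
    private void trimOldest() {
        while (b.size() > MAX_B) {
            // remove the entry with the smallest created_time; orphanRemoval deletes the row
            b.stream()
             .min(Comparator.comparing(B::getCreatedTime))
             .ifPresent(b::remove);
        }
    }
}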
We have three entities with bidirectional many-to-many mappings in an A <-> B <-> C "hierarchy" like so (simplified, of course):
@Entity
class A {
    @Id int id;

    @JoinTable(
        name = "a_has_b",
        joinColumns = {@JoinColumn(name = "a_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "b_id", referencedColumnName = "id")})
    @ManyToMany
    Collection<B> bs;
}

@Entity
class B {
    @Id int id;

    @JoinTable(
        name = "b_has_c",
        joinColumns = {@JoinColumn(name = "b_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "c_id", referencedColumnName = "id")})
    @ManyToMany(fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<C> cs;

    @ManyToMany(mappedBy = "bs", fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<A> as;
}

@Entity
class C {
    @Id int id;

    @ManyToMany(mappedBy = "cs", fetch = FetchType.EAGER,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT)
    private Collection<B> bs;
}
There's no concept of an orphan (the entities are "standalone" from the application's point of view), and most of the time we're going to have a handful of A's, each with a couple of B's (some may be "shared" among the A's), and some 1000 C's, not all of which are always "in use" by any B. We've concluded that we need bidirectional relations, since whenever an entity instance is removed, all links (entries in the join tables) have to be removed too. That is done like this:
void removeA(A a) {
    if (a.getBs() != null) {
        for (B b : a.getBs()) { // <--------- ConcurrentModificationException here
            b.getAs().remove(a);
            entityManager.merge(b);
        }
    }
    entityManager.remove(a);
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown. I've been banging my head against this for a while now, but can't think of a reasonable way of removing the links without meddling with the collection, which makes the underlying Iterator angry.
Q1: How am I supposed to do this, given the current ORM setup? (If at all...)
Q2: Is there a more reasonable way to design the OR mappings that will let JPA (provided by Hibernate in this case) take care of everything? It'd be just swell if we didn't have to include those "I'll be deleted now, so everybody I know, listen carefully: you don't need to know about this!" loops, which aren't working anyway, as it stands...
This problem has nothing to do with the ORM, as far as I can tell. You cannot use the syntactic-sugar foreach construct in Java to remove an element from a collection.
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
Source
Simplified example of the problematic code:
List<B> bs = a.getBs();
for (B b : bs)
{
if (/* some condition */)
{
bs.remove(b); // throws ConcurrentModificationException
}
}
You must use the Iterator version to remove elements while iterating. Correct implementation:
List<B> bs = a.getBs();
for (Iterator<B> iter = bs.iterator(); iter.hasNext();)
{
B b = iter.next();
if (/* some condition */)
{
iter.remove(); // works correctly
}
}
Edit: I think this will work; untested however. If not, you should stop seeing ConcurrentModificationExceptions but instead (I think) you'll see ConstraintViolationExceptions.
void removeA(A a)
{
if (a != null)
{
a.setBs(new ArrayList<B>()); // wipe out all of a's Bs
entityManager.merge(a); // synchronize the state with the database
entityManager.remove(a); // removing should now work without ConstraintViolationExceptions
}
}
If the collection, a.getBs() here, contains more than one element, then a ConcurrentModificationException is thrown
The issue is that the collections inside of A, B, and C are magical Hibernate collections so when you run the following statement:
b.getAs().remove( a );
this removes a from b's collection, but it also removes b from a's list, which happens to be the collection being iterated over in the for loop. That generates the ConcurrentModificationException.
Matt's solution should work if you are really removing all elements in the collection. If you aren't, however, another workaround is to copy all of the B's into a local collection, which takes the magical Hibernate collection out of the iteration.
// copy out of the magic hibernate collection to a local collection
List<B> copy = new ArrayList<>(a.getBs());
for (B b : copy) {
b.getAs().remove(a) ;
entityManager.merge(b);
}
That should get you a little further down the road.
Gray's solution worked! Fortunately for us, the JPA people seem to have implemented their collections as close as possible to what the official Sun documentation on the proper use of List<> collections indicates:
Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
I was all but pulling out my hair over this exception, thinking it meant one @Stateless method could not call another @Stateless method from its own class. I thought this odd, as I was sure I had read somewhere that nested transactions are allowed. So when I did a search on this very exception, I found this posting and applied Gray's solution. Only in my case I happened to have two independent collections that had to be handled. As Gray indicated, and according to the Java spec on the proper way to remove a member from a container, you need to iterate over a copy of the original container and then do your remove() on the original container, which makes a lot of sense. Otherwise, the original container's linked-list algorithm gets confused.
for ( Participant p2 : new ArrayList<Participant>( p1.getFollowing() )) {
p1.getFollowing().remove(p2);
getEm().merge(p1);
p2.getFollowers().remove(p1);
getEm().merge(p2);
}
Notice I only make a copy of the first collection (p1.getFollowing()) and not the second collection (p2.getFollowers()). That is because I only need to iterate over one collection, even though I need to remove associations from both.