Hibernate @MapsId: why am I getting these errors? - java

So, I have Class A and Class B.
They share their primary key, using the following configuration:
In Class A I reference Class B as a child:
@OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@PrimaryKeyJoinColumn
public B getB()
{
    return b;
}
In Class B, in order to get the ID from the parent Class A, I use the following annotations:
@Id
@GeneratedValue(generator = "customForeignGenerator")
@org.hibernate.annotations.GenericGenerator(name = "customForeignGenerator", strategy = "foreign",
    parameters = @org.hibernate.annotations.Parameter(name = "property", value = "a"))
@Column(name = "a_id")
public Long getId()
{
    return id;
}

@MapsId("id")
@OneToOne(mappedBy = "b")
@PrimaryKeyJoinColumn
public A getA()
{
    return a;
}
The problem is that upon saving A with
session.saveOrUpdate(aInstance);
the DB returns the following error:
Duplicate entry '123456' for key 'PRIMARY'
This tells us two things: first, @MapsId is working correctly, giving A's Id to B as it should; second, Hibernate decided it was a 'save' and not an 'update', and this only happens on saveOrUpdate when the Id is null, right? (weird?)
The usual solution would be to fetch the old B from the DB and merge, if it existed, but that raises a whole lot of problems, like also having to fetch the old A from the DB into the session, or provoking the dreaded "a different object with the same identifier value was already associated with the session" Hibernate error for the associated objects. It is also not very performance friendly, doing unnecessary DB hits.
Is there an error in my annotations? Am I doing it wrong? What is the normal configuration for this?
EDIT:
It kind of defeats the purpose of using @MapsId, setting the IDs manually, but since no solution was found I did set the IDs manually, like this:
if (aInstance.getId() != null)
    aInstance.getB().setId(aInstance.getId());
session.saveOrUpdate(aInstance);
Until moments ago, this was returning the following error:
org.hibernate.StaleStateException:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
But for some reason it stopped throwing the error and now it works. In any case, the previous code is still valid, since aInstance might not have an Id, and in that case @MapsId works perfectly, inserting a new A and B in the DB. The problem was only on update.
Was/Is it a Hibernate bug? Probably. I'll let you guys know when the StaleStateException turns up again.
For now this is a temporary solution, until someone comes up with the actual one.

I finally found the answer to all the problems.
To understand the root of the problem, we must recall how saveOrUpdate(object) works:
1) If the object has its ID set, saveOrUpdate will Update; otherwise it will Save.
2) If Hibernate decides it is a Save but the object is already in the DB (you wanted an update), the Duplicate entry '123456' for key 'PRIMARY' exception occurs.
3) If Hibernate decides it is an Update but the object is not in the DB (you wanted a save), the StaleStateException occurs.
The problem lies in the fact that if aInstance exists in the DB and already has an ID, @MapsId will give that ID to B, ignoring the rules above and making Hibernate think B also exists in the DB when it may not. It only works properly when both A and B don't exist in the DB, or when both do.
Therefore the workaround is to make sure you set the ID if and only if the object exists in the DB, and set the ID to null when it does not:
B dbB = (B) unmarshaller.getDetachedSession().createCriteria(B.class)
        .add(Restrictions.idEq(aInstance.getId())).uniqueResult();
if (dbB != null) // exists in DB
{
    aInstance.getB().setId(aInstance.getId()); // Tell hibernate it is an Update
    // Do the same for any other child classes of B with the same strategy, if there are any
}
else
{
    aInstance.getB().setId(null); // Tell hibernate it is a Save
}
unmarshaller.getDetachedSession().clear();
(Using a detached session, so that the main session stays clear of unwanted objects, avoiding the "object with the same identifier in session" exception.)
If you don't need the DB object and only want to know whether it exists in the DB, you can use a count instead, making it much lighter:
String query = "select count(*) from " + B.class.getName() + " where id = " + aInstance.getId();
Long count = DataAccessUtils.uniqueResult(hibernateTemplate.find(query));
if (count != null && count > 0)
{
    aInstance.getB().setId(aInstance.getId()); // update
}
else
{
    aInstance.getB().setId(null); // save
}
Now you can saveOrUpdate(aInstance);
But like I said, the @MapsId strategy is not very Hibernate-friendly.

Some key realizations that helped me understand @MapsId better:
The @MapsId annotation changes the ID strategy of the entity from assigned to generated. (I wish I could override this behavior and set the IDs manually.)
A generated ID assumes that an entity with a non-null ID already exists in the DB. Therefore, setting the ID manually leads to the StaleStateException, because Hibernate issues an EntityUpdateAction instead of a create, but there is nothing to update in the DB.
On persist, the null ID is automatically set using the ID of the other side of the @OneToOne relationship.
If the other side is missing and the ID is null, an exception is raised.
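For comparison, the standard JPA 2.0 derived-identity mapping puts @MapsId on the owning side of the @OneToOne in the child entity, not on the inverse (mappedBy) side as in the question. A minimal sketch, with field access and assumed names:
@Entity
public class A {
    @Id
    @GeneratedValue
    private Long id;

    @OneToOne(mappedBy = "a", cascade = CascadeType.ALL)
    private B b;
}

@Entity
public class B {
    @Id
    private Long id; // populated from A's id, no foreign generator needed

    @MapsId // B's @Id is derived from this association
    @OneToOne
    @JoinColumn(name = "a_id")
    private A a;
}
With this mapping, persisting a new A cascades to B and the provider copies A's generated id into B.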

Related

Struts2 and Hibernate insert operation error [duplicate]

org.hibernate.HibernateException: identifier of an instance of org.cometd.hibernate.User altered from 12 to 3
In fact, my user table really must change its value dynamically; my Java app is multithreaded.
Any ideas how to fix it?
Are you changing the primary key value of a User object somewhere? You shouldn't do that. Check that your mapping for the primary key is correct.
What does your mapping XML file or mapping annotations look like?
You must detach your entity from the session before modifying its ID fields.
In my case, the PK field in hbm.xml was of type "integer" but in the bean code it was long.
In my case, the getter and setter names were different from the variable name:
private Long stockId;

public Long getStockID() {
    return stockId;
}
public void setStockID(Long stockID) {
    this.stockId = stockID;
}
where it should be
public Long getStockId() {
    return stockId;
}
public void setStockId(Long stockID) {
    this.stockId = stockID;
}
In my case, I solved it by changing the @Id field type from long to Long.
In my particular case, this was caused by a method in my service implementation that needed the Spring @Transactional(readOnly = true) annotation. Once I added that, the issue was resolved. Unusual, though, since it was just a select statement.
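A minimal sketch of that fix; the service method and repository names are assumed:
@Transactional(readOnly = true)
public List<User> findUsers() {
    // a read-only transaction keeps Hibernate from flushing accidental
    // in-memory changes (such as an altered id) back to the database
    return userRepository.findAll();
}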
Make sure you aren't trying to reuse the same User object while changing the ID. In other words, if you were doing something in a batch-type operation:
User user = new User(); // Using the same one over and over won't work
List<Customer> customers = fetchCustomersFromSomeService();
for (Customer customer : customers) {
    // User user = new User(); <-- This would work, you get a new one each time
    user.setId(customer.getId());
    user.setName(customer.getName());
    saveUserToDB(user);
}
In my case, a template had a typo: instead of checking for equality (==), it was using an assignment (=).
So I changed the template logic from:
if (user1.id = user2.id) ...
to
if (user1.id == user2.id) ...
and now everything is fine. So, check your views as well!
It is a problem in your update method. Just instantiate a new User before you save changes and you will be fine. If you map between DTO and entity classes, do this before the mapping.
I had this error too. I had a User object and was trying to change its Location; Location was a FK in the User table. I solved the problem with:
@Transactional
public void update(User input) throws Exception {
    User userDB = userRepository.findById(input.getUserId()).orElse(null);
    userDB.setLocation(new Location());
    userMapper.updateEntityFromDto(input, userDB);
    User user = userRepository.save(userDB);
}
I also ran into this error message, but the root cause was of a different flavor from those referenced in the other answers here.
Generic answer:
Make sure that once Hibernate loads an entity, no code changes the primary key value in that object in any way. When Hibernate flushes all changes back to the database, it throws this exception because the primary key changed. If you don't do it explicitly, look for places where this may happen unintentionally, perhaps on related entities that only have LAZY loading configured.
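A minimal sketch of the anti-pattern this generic answer describes, using the entity from the question:
// loading a managed entity and then mutating its identifier
User user = em.find(User.class, 12L);
user.setId(3L); // on flush, Hibernate throws
                // "identifier of an instance of ... User altered from 12 to 3"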
In my case, I was using a mapping framework (MapStruct) to update an entity. In the process, other referenced entities were also being updated, as mapping frameworks tend to do by default. I later replaced the original entity with a new one (in DB terms, changed the value of the foreign key to reference a different row in the related table), but the primary key of the previously referenced entity had already been updated, and Hibernate attempted to persist this update on flush.
I was facing this issue too.
The target table is a relation table, wiring two IDs from different tables. I have a UNIQUE constraint on the value combination, replacing the PK.
When updating one of the values of a tuple, this error occurred.
This is what the table looks like (MySQL):
CREATE TABLE my_relation_table (
    mrt_left_id BIGINT NOT NULL,
    mrt_right_id BIGINT NOT NULL,
    UNIQUE KEY uix_my_relation_table (mrt_left_id, mrt_right_id),
    FOREIGN KEY (mrt_left_id) REFERENCES left_table(lef_id),
    FOREIGN KEY (mrt_right_id) REFERENCES right_table(rig_id)
);
The Entity class for the RelationWithUnique entity looks basically like this:
@Entity
@IdClass(RelationWithUnique.class)
@Table(name = "my_relation_table")
public class RelationWithUnique implements Serializable {

    ...

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_left_id", referencedColumnName = "left_table.lef_id")
    private LeftTableEntity leftId;

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_right_id", referencedColumnName = "right_table.rig_id")
    private RightTableEntity rightId;

    ...
I fixed it with:
// usually, we would need to detach the object as we are updating the PK
// (rightId being part of the UNIQUE constraint => PK),
// but that would produce a duplicate entry,
// so we simply delete the old tuple and add the new one
final RelationWithUnique newRelation = new RelationWithUnique();
newRelation.setLeftId(oldRelation.getLeftId());
newRelation.setRightId(rightId); // here, the value is actually updated
entityManager.remove(oldRelation);
entityManager.persist(newRelation);
Thanks a lot for the hint about the PK, I had just missed it.
The problem can also be a mismatch between the type of the object's PK ("User" in your case) and the type you ask Hibernate for in session.get(type, id);.
In my case the error was identifier of an instance of <skipped> was altered from 16 to 32.
The object's PK type was Integer, but Hibernate was asked for a Long.
In my case it was because the property was long on the object but int in the mapping XML; this exception message should be clearer.
If you are using Spring MVC or Spring Boot, try to avoid:
@ModelAttribute("user") in one controller, and in another controller
model.addAttribute("user", userRepository.findOne(someId));
This situation can produce such an error.
This is an old question, but I'm going to add the fix for my particular issue (Spring Boot, JPA using Hibernate, SQL Server 2014), since it doesn't exactly match the other answers included here:
I had a foreign key, e.g. my_id = '12345', but the value in the referenced column was my_id = '12345 '. It had an extra space at the end, which Hibernate didn't like. I removed the space, fixed the part of my code that was allowing this extra space, and everything works fine.
Faced the same issue.
I had an association between two beans. In bean A I had defined the variable type as Integer, and in bean B I had defined the same variable as Long.
I changed both of them to Integer. This solved my issue.
I solved this by creating a new instance of the dependent object. For example:
instanceA.setInstanceB(new InstanceB());
instanceA.setInstanceB(YOUR_NEW_VALUE);
In my case I had a primary key in the database that had an accent, while in another table its foreign key didn't. For some reason, MySQL allowed this.
It looks like you have changed the identifier of an instance of org.cometd.hibernate.User managed by the JPA entity context.
In this case, create a new User entity object with the appropriate id and set it in place of the original User object.
Are you using multiple transaction managers from the same service class, e.g. because your project has two or more transaction configurations?
If so, separate them first.
I got the issue when I tried fetching an existing DB entity, modified a few fields, and executed
session.save(entity)
instead of
session.merge(entity)
Since the entity already exists in the DB, we should merge() instead of save().
You may have modified the primary key of a fetched entity and then tried to save it in the same transaction to create a new record from the existing one.
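A minimal sketch of the difference; entity and field names are assumed:
// the row already exists in the DB, so reattach and update it;
// save() would schedule an INSERT and collide with the existing row
User detached = loadUserFromSomewhere();
detached.setName("new name");
session.merge(detached); // schedules an UPDATE for the existing row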

Conditional insert with Spring JPA / Hibernate

I'm working on a project that runs in a clustered environment, where there are many nodes and a single database. The project uses Spring Data JPA (1.9.0) and Hibernate (5.0.1). I'm having trouble figuring out how to prevent duplicate row issues.
For the sake of example, here's a simple table:
@Entity
@Table(name = "scheduled_updates")
public class ScheduledUpdateData {

    public enum UpdateType {
        TYPE_A,
        TYPE_B
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private UUID id;

    @Column(name = "type", nullable = false)
    @Enumerated(EnumType.STRING)
    private UpdateType type;

    @Column(name = "source", nullable = false)
    private UUID source;
}
The important part is that there is a UNIQUE(type, source) constraint.
And of course, the matching example repository:
@Repository
public interface ScheduledUpdateRepository extends JpaRepository<ScheduledUpdateData, UUID> {
    ScheduledUpdateData findOneByTypeAndSource(final UpdateType type, final UUID source);
    //...
}
The idea for this example is that parts of the system can insert rows to be scheduled for something that runs periodically, any number of times between said runs. When whatever that something is actually runs, it doesn't have to worry about operating on the same thing twice.
How can I write a service method that conditionally inserts into this table? A few things I've tried that don't work:
Find > Act - The service method would use the repository to see if an entry already exists, and then either update the found entry or save a new one as needed. This does not work.
Try insert > Update if fail - The service method would try to insert, catch the exception due to the unique constraint, and then do an update instead. This does not work, since the transaction will already be in a rolled-back state and no further operations can be done in it.
Native query with "INSERT INTO ... WHERE NOT EXISTS ..." - The repository has a new native query:
@Repository
public interface ScheduledUpdateRepository extends JpaRepository<ScheduledUpdateData, UUID> {
    // ...

    @Modifying
    @Query(nativeQuery = true, value = "INSERT INTO scheduled_updates (type, source)" +
            " SELECT :type, :src" +
            " WHERE NOT EXISTS (SELECT * FROM scheduled_updates WHERE type = :type AND source = :src)")
    void insertUniquely(@Param("type") final String type, @Param("src") final UUID source);
}
This unfortunately also does not work, as Hibernate appears to perform the SELECT used by the WHERE clause on its own first - which means in the end multiple inserts are attempted, causing a unique constraint violation.
I definitely don't know a lot of the finer points of JTA, JPA, or Hibernate. Any suggestions on how to insert into tables with unique constraints (beyond just the primary key) across multiple JVMs?
Edit 2016-02-02
With Postgres (2.3) as the database, I tried using isolation level SERIALIZABLE - sadly, by itself this still caused constraint violation exceptions.
You are trying to ensure that only one node can perform this operation at a time.
The best (or at least most DB-agnostic) way to do this is with a 'lock' table. This table will have a single row, and will act as a semaphore to ensure serial access.
Make sure that this method is wrapped in a transaction:
// this line will block if any other thread already has a lock,
// until that thread's transaction commits
Lock lock = entityManager.find(Lock.class, Lock.ID, LockModeType.PESSIMISTIC_WRITE);

// just some change to the row, it doesn't matter what
lock.setDateUpdated(new Timestamp(System.currentTimeMillis()));
entityManager.merge(lock);
entityManager.flush();

// find your entity by unique constraint:
// if it exists, update it; if it doesn't, insert it
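This assumes a Lock entity backing the single-row semaphore table; a minimal sketch of what it might look like, with table and column names assumed:
@Entity
@Table(name = "app_lock")
public class Lock {

    public static final long ID = 1L; // the one and only semaphore row

    @Id
    private Long id;

    private Timestamp dateUpdated;

    public void setDateUpdated(Timestamp dateUpdated) {
        this.dateUpdated = dateUpdated;
    }
}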
Hibernate and its query language offer support for an insert statement. So you can actually write that query with HQL. See here for more information. http://docs.jboss.org/hibernate/orm/5.0/userguide/html_single/Hibernate_User_Guide.html#_hql_syntax_for_insert
It sounds like an upsert case, which can be handled as suggested here.
Find > Act - The service method would use the repository to see if a entry already exists, and then either update the found entry or save a new one as needed. This does not work.
Why does this not work?
Have you considered "optimistic locking"?
These two posts may help:
https://www.baeldung.com/jpa-optimistic-locking
https://www.baeldung.com/java-jpa-transaction-locks
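For reference, optimistic locking in JPA is typically enabled by adding a @Version attribute to the entity; a minimal sketch applied to the entity from the question:
@Entity
@Table(name = "scheduled_updates")
public class ScheduledUpdateData {

    // ... id, type, and source as shown in the question ...

    @Version
    private long version; // the provider increments this on every update and
                          // throws an OptimisticLockException when two
                          // transactions modify the same row concurrently
}
Note that a @Version column guards concurrent updates of an existing row; it does not by itself prevent two nodes from inserting duplicates.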

How to not crash if a foreign key refers to a null row while the key is not nullable?

I have this table "regions":
id | name     | parent_id
1  | whatever | 100000
where parent_id is a self-reference to id, meaning this row geographically belongs to region 100000.
However, because the data imported at the beginning is dirty, the row with id 100000 doesn't exist.
Therefore, in the given entity:
@Entity(name = "regions")
public class Region {

    private int id;
    private String name;
    private Region parent;
    ...

    @ManyToOne
    @JoinColumn(name = "parent_id")
    public Region getParent() {
        return parent;
    }

    public void setParent(Region parent) {
        this.parent = parent;
    }
}
When I do a list with Hibernate:
Session session = sessionHandler.getSession(); // gets current session
Transaction tx = session.beginTransaction();
try {
    return (List<T>) session.createQuery("FROM regions").list();
}
catch (HibernateException ex) {
    ex.printStackTrace();
    if (tx != null) tx.rollback();
    throw ex;
} finally {
    sessionHandler.close();
}
It will throw this exception:
org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [whatever.entities.Region#6046193]
which indicates that the region with id 6046193 doesn't exist. As explained before, I expected something like this to happen.
My question: given that I can't change the parent_id column to nullable, is there a way to handle this exception so that the system ignores it and keeps the program going?
You could try setting the fetch type of the many-to-one relationship to lazy:
...
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "parent_id")
public Region getParent() {
    return parent;
}
...
I guess Hibernate would then not throw an error during the "select all", and you can handle the error when you first call the getter, if you do it like that.
But I am not 100% sure this works, and I don't think it is a really good solution. I really think you should sanitize your data during/after the import.
If you do not sanitize your data, you'll have to keep workarounds for the problems in the code forever. What if, in the future, someone removes the fetch = FetchType.LAZY because they think it would lead to better performance? Your application will break in an unexpected way, just because your entities do not correctly reflect what is in your database.
You said you cannot set parent_id to null since the column is not nullable. But what about creating dummy entries for the missing IDs? You could do that right after importing your dirty data, before you start up your application for the first time.
Also, just changing the column to nullable (assuming for a moment that you could) won't work anyway. You would still have to sanitize the data - in that case, by setting all the parent_ids to null where the row with the referenced id does not exist.
I would say your data model design is flawed.
From a relational perspective, you're using parent_id as a non-nullable self-referencing foreign key to id. This means that whatever value you place in parent_id should have a matching row with the same id value, and inserting the invalid rows you cite in your post should automatically trigger a foreign key constraint violation.
If the field must remain nullable=false, you could create a sentinel row, and any legacy data you load with invalid parent_id references could be changed to use the sentinel row's id, just so the data model is valid. If the data model can be altered slightly, a legacy_parent_id column could hold the legacy reference, and your code could have different logic paths based on the sentinel row.
The only other idea I have, assuming you can modify the data model slightly, would be to consider using a discriminator that separates the legacy rows from the non-legacy rows.
In the legacy model, you'd populate a legacy_parent_id that is simply an integer. In the non-legacy model, you could have the parent_id foreign-key-validated relationship that is not nullable.

EclipseLink inserting entity with 0 ID

I just discovered a rather strange problem with EclipseLink (both 2.3.0 and 2.4.1) and just wanted to ask if anyone can confirm that this is a bug and not just that I am missing something obvious ...
Basically, I have two entities involved; let us simply call them A and B. A had an eager (if that matters), non-cascading, unidirectional many-to-one reference to B, modelled in the database with a join column. The DB table A contains the columns ID (PK), B_ID (FK to B.ID), and more; the table B contains the column ID and a few more.
Now, I had a list of As, which should be updated with references to new B instances. In pseudo-code something like:
for (A a : list) {
    // using some criteria, check whether a matching B
    // entity already exists in the database
    B b = perhapsGetExistingBFromDatabase();
    if (b == null) {
        // no existing B was found, create a new one
        b = new B();
        // set values in B
        b.setFoo(4711);
        b.setBar(42);
        // save B
        em.merge(b);
    }
    // set reference from A to B ...
    a.setB(b);
    // ... and update A
    em.merge(a);
}
Since the reference was non-cascading, it was necessary to merge both b and a. Everything worked as expected.
Then someone (I) changed the cascade type of the relationship from none to merge/persist, since that was required somewhere else in the code. I expected the old code to keep working; merging b is no longer really required, but IMHO it shouldn't hurt. A brief test confirmed that it still worked: the new B entity was inserted and A updated accordingly.
BUT, it only works if there is only one A entity in the list. Running through the loop a second time causes EclipseLink to auto-flush the session, since perhapsGetExistingBFromDatabase does a "SELECT .. FROM B", there is a merged B entity cached, and it wants the database table to be up to date. Using FINEST logging level and breakpoints in the code, I can verify that EclipseLink determines that it needs to generate an id for the new B entity; it invokes the sequence generator and also sets the id in the correct field of the entity. Still, EclipseLink issues SQL statements similar to these:
INSERT INTO B (ID, ...) VALUES(0, ...);
UPDATE A SET B_ID = 0 WHERE ID = ...;
The generated id is lost somewhere, and EclipseLink tries to create a new entity with id=0. The data is indeed invalid, and later EclipseLink also throws a PersistenceException: Null or zero primary key encountered in unit of work clone.
Bug or my mistake?
Edit: James asked for the mapping of B.ID:
@Id
@GeneratedValue(generator = "sq_idgen", strategy = GenerationType.SEQUENCE)
@SequenceGenerator(name = "sq_idgen", sequenceName = "...", allocationSize = 100)
@Column(name = "id")
protected long id;
Note that removing the unnecessary em.merge(b); solves my problem. It is just not obvious to me why the invocation of merge causes EclipseLink to fail completely, trying to insert a B instance without a populated id.
That is odd; how is the Id of B mapped?
It seems like the merge might somehow be getting two separate instances of B (as there is nothing to identify them as being the same, since they have no Id). Note that merge() is normally not required; it is only needed for detached objects, such as when using serialization (try using persist instead).
To avoid the flush, you can set the flushMode in the EntityManager or persistence unit to COMMIT instead of AUTO.
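A minimal sketch of both suggestions (persist instead of merge, plus the flush mode change):
em.setFlushMode(FlushModeType.COMMIT); // flush only at commit, not before queries

B b = perhapsGetExistingBFromDatabase();
if (b == null) {
    b = new B();
    b.setFoo(4711);
    b.setBar(42);
    em.persist(b); // persist the new instance instead of merging it
}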

org.hibernate.HibernateException: Found shared references to a collection

I got this error message:
error: Found shared references to a collection: Person.relatedPersons
When I tried to execute addToRelatedPersons(anotherPerson):
person.addToRelatedPersons(anotherPerson);
anotherPerson.addToRelatedPersons(person);
anotherPerson.save();
person.save();
My domain:
class Person {
    static hasMany = [relatedPersons: Person]
}
Any idea why this happens?
Hibernate shows this error when you attempt to persist more than one entity instance sharing the same collection reference (i.e. the collection identity in contrast with collection equality).
Note that it means the same collection, not collection element - in other words relatedPersons on both person and anotherPerson must be the same. Perhaps you're resetting that collection after entities are loaded? Or you've initialized both references with the same collection instance?
I had the same problem. In my case, the issue was that someone used BeanUtils to copy the properties of one entity to another, so we ended up having two entities referencing the same collection.
Given that I spent some time investigating this issue, I would recommend the following checklist:
Look for scenarios like entity1.setCollection(entity2.getCollection()), where getCollection() returns the internal reference to the collection (if getCollection() returns a new instance of the collection, you don't need to worry); see the sketch after this list.
Check whether clone() has been implemented correctly.
Look for BeanUtils.copyProperties(entity1, entity2).
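A minimal sketch of the defensive-copy pattern that avoids the first scenario; entity and property names are assumed:
// copying into a fresh collection ensures that two entities never share
// the same PersistentCollection instance
public void setRelatedPersons(Set<Person> relatedPersons) {
    this.relatedPersons = new HashSet<>(relatedPersons);
}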
An explanation in practice. If you try to save your object, e.g.:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
message.setFolders(folders);
MESSAGESDAO.getMessageDAO().save(message);
you don't need to set the updated collection on the parent object:
message.setFolders(folders);
Simply save your parent object like:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
// do not set the updated collection here
MESSAGESDAO.getMessageDAO().save(message);
Reading online, the cause of this error can also be a Hibernate bug; a workaround that seems to work is to call:
session.clear()
You must put the clear() after getting the data and before committing and closing; see the example:
// getting data
SrReq sr = (SrReq) crit.uniqueResult();
SrSalesDetailDTO dt = SrSalesDetailMapper.INSTANCE.map(sr);
// CLEAR
session.clear();
// close session
session.getTransaction().commit();
session.close();
return dt;
I use this solution for selects from the database; for update or insert I don't know whether it works or causes problems.
My problem is 100% identical to this one: http://www.progtown.com/topic128073-hibernate-many-to-many-on-two-tables.html
I have experienced a great example of reproducing such a problem.
Maybe my experience will help someone one day.
Short version
Check that your @Embedded Id of the container has no possible collisions.
Long version
When Hibernate instantiates a collection wrapper, it searches for an already instantiated collection by CollectionKey in an internal Map.
For an entity with an @Embedded id, CollectionKey wraps EmbeddedComponentType and uses the @Embedded Id properties for equality checks and hashCode calculation.
So if you have two entities with equal @Embedded Ids, Hibernate will instantiate and store a new collection for the first key, and then find that same collection for the second key.
So two entities with the same @Embedded Id will be populated with the same collection.
Example
Suppose you have an Account entity which has a lazy set of loans, and Account has an @Embedded Id consisting of several parts (columns):
@Entity
@Table(schema = "SOME", name = "ACCOUNT")
public class Account {

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "account")
    private Set<Loan> loans;

    @Embedded
    private AccountId accountId;

    ...
}

@Embeddable
public class AccountId {

    @Column(name = "X")
    private Long x;

    @Column(name = "BRANCH")
    private String branchId;

    @Column(name = "Z")
    private String z;

    ...
}
Then suppose that Account has an additional property mapped by the @Embedded Id, a relation to another entity Branch:
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "BRANCH")
@MapsId("accountId.branchId")
@NotFound(action = NotFoundAction.IGNORE) // Look at this!
private Branch branch;
It could happen that you have no FK for the Account-to-Branch relation in the DB, so the Account.BRANCH column can hold values not present in the Branch table.
According to @NotFound(action = NotFoundAction.IGNORE), if the value is not present in the related table, Hibernate loads null for the property.
If the X and Z columns of two Accounts are the same (which is fine), but BRANCH differs and is not present in the Branch table, Hibernate will load null for both, and the @Embedded Ids will be equal.
So the two CollectionKey objects will be equal and will have the same hashCode for different Accounts:
result = {CollectionKey@34809} "CollectionKey[Account.loans#Account@43deab74]"
 role = "Account.loans"
 key = {Account@26451}
 keyType = {EmbeddedComponentType@21355}
 factory = {SessionFactoryImpl@21356}
 hashCode = 1187125168
 entityMode = {EntityMode@17415} "pojo"

result = {CollectionKey@35653} "CollectionKey[Account.loans#Account@33470aa]"
 role = "Account.loans"
 key = {Account@35225}
 keyType = {EmbeddedComponentType@21355}
 factory = {SessionFactoryImpl@21356}
 hashCode = 1187125168
 entityMode = {EntityMode@17415} "pojo"
Because of this, Hibernate will load the same PersistentSet for the two entities.
In my case, I was copying and pasting code from my other classes, so I did not notice that the getter code was badly written:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "credito")
public Set getConceptoses() {
    return this.letrases;
}
public void setConceptoses(Set conceptoses) {
    this.conceptoses = conceptoses;
}
Everything references conceptoses, but if you look at the getter, it says letrases.
I too got the same issue: someone had used BeanUtils.copyProperties(source, target), and both source and target were using the same collection as an attribute.
So I just used a deep copy, as described here:
How to Clone Collection in Java - Deep copy of ArrayList and HashSet
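A minimal sketch of breaking the shared reference with a copy; types are assumed from the earlier answers:
// even a shallow copy into a new collection instance is enough to stop
// two entities from sharing the same collection reference
Set<Person> copied = new HashSet<>(source.getRelatedPersons());
target.setRelatedPersons(copied);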
Consider an entity:
public class Foo {
    private List<User> user;
    /* with getters and setters */
}
And consider a business logic class:
class Foo1 {
    List<User> user = new ArrayList<>();
    user = foo.getUser();
}
Here user and foo.getUser() share the same reference. Saving the two references creates a conflict.
The proper usage should be:
class Foo1 {
    List<User> user = new ArrayList<>();
    user.addAll(foo.getUser());
}
This avoids the conflict.
I faced a similar exception in my application. After looking into the stacktrace, it was clear that the exception was thrown within a FlushEntityEventListener class.
In Hibernate 4.3.7, the MSLocalSessionFactory bean no longer supports the eventListeners property. Hence, one has to explicitly fetch the service registry from the individual Hibernate session beans and then set the required custom event listeners.
In the process of adding custom event listeners, we need to make sure the corresponding default event listeners are removed from the respective Hibernate session.
If the default event listener is not removed, you end up with two event listeners registered against the same event. In that case, while iterating over these listeners, the first listener will flag any collections in the session as reached, and processing the same collection against the second listener will throw this Hibernate exception.
So, make sure that when registering custom listeners, the corresponding default listeners are removed from the registry.
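A hedged sketch of replacing (rather than appending to) the default listeners via Hibernate's EventListenerRegistry; the custom listener class is assumed:
// setListeners(...) replaces the defaults for the event type,
// while appendListeners(...) would leave them registered as well
EventListenerRegistry registry = ((SessionFactoryImpl) sessionFactory)
        .getServiceRegistry()
        .getService(EventListenerRegistry.class);
registry.setListeners(EventType.FLUSH_ENTITY, new MyFlushEntityEventListener());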
My problem was that I had set up a @ManyToOne relationship. Maybe if the answers above don't fix your problem, you might want to check the relationship mentioned in the error message.
Posting here because it's taken me over two weeks to get to the bottom of this, and I still haven't fully resolved it.
There is a chance that you're also just running into this bug, which has been around since 2017 and hasn't been addressed.
I honestly have no clue how to get around this bug. I'm posting here for my sanity and hopefully to shave a couple weeks off your googling. I'd love any input anyone may have, but my particular "answer" to this problem was not listed in any of the above answers.
I had to replace the following collection initialization:
challenge.setGoals(memberChallenge.getGoals());
with
challenge.setGoals(memberChallenge.getGoals()
        .stream()
        .map(dmo -> {
            final ChallengeGoal goal = new ChallengeGoalImpl();
            goal.setMemberChallenge(challenge);
            goal.setGoalDate(dmo.getGoalDate());
            goal.setGoalValue(dmo.getGoalValue());
            return goal;
        })
        .collect(Collectors.toList()));
I changed
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(name = "some_id", referencedColumnName = "some_id")
to
@OneToMany(mappedBy = "some_id", cascade = CascadeType.ALL)
You're (indirectly) using pointers, so sometimes you're copying the memory address instead of the object/collection you want. Hibernate checks this and throws that error. Here's what you can do:
Don't copy the object/collection;
Initialize a new empty one;
Make a function to copy its contents, and call it.
For example:
public Entity copyEntity(Entity e) {
    Entity copy = new Entity();
    copy.setName(e.getName());
    copy.setCollection2(null);
    copy.setCollection3(copyCollection(e.getCollection3()));
    return copy;
}
This error will also occur in a one-to-many / many-to-one relationship, if you attempt to assign the same instance from the many-to-one entity to more than one instance of the one-to-many entity.
For example, each person can have many books, but each of these books can be owned by only one person; if you give a book more than one owner, this issue is raised.
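A minimal sketch of that situation, with Person/Book names from the example and setters assumed:
Set<Book> shelf = new HashSet<>(books);

// wrong: both owners share the exact same collection instance
alice.setBooks(shelf);
bob.setBooks(shelf); // flushing both entities triggers
                     // "Found shared references to a collection"

// right: give each owner its own collection instance
alice.setBooks(new HashSet<>(books));
bob.setBooks(new HashSet<>(books));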
