I have had a problem since Hibernate 4.1.8 resulting in the following exception:
org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [test.hibernate.TestPrepravkaOsobaSAdresou$Uvazek#2]
I have a simple OneToMany association between two entities:
#Entity(name = "Ppv")
#Table(name = "PPV")
public static class Ppv {
#Id
Long ppvId;
#OneToMany(fetch = FetchType.EAGER, mappedBy = "ppv")
Set<Uvazek> uvazeks = new HashSet<Uvazek>(0);
}
#Entity(name = "Uvazek")
#Table(name = "UVAZEK")
public static class Uvazek {
#Id
Long uvazekId;
#ManyToOne
#JoinColumn(name = "PPV_FXID")
Ppv ppv;
}
and a test case where I have one Ppv and two Uvazeks. When I load and detach a Ppv, delete one Uvazek associated with the loaded Ppv, and then merge the Ppv, I get an exception.
jdbcTemplate.execute("insert into PPV values(1)");
jdbcTemplate.execute("insert into UVAZEK values(2, 1)");
jdbcTemplate.execute("insert into UVAZEK values(3, 1)");
Ppv ppv = (Ppv) getSession().get(Ppv.class, 1L);
getSession().clear();
getSession().delete(getSession().get(Uvazek.class, 2L));
getSession().flush();
getSession().merge(ppv);
getSession().flush(); //Causes the exception
During the merge of Ppv, Hibernate tries to load the deleted Uvazek. Even though the Uvazek is deleted, Hibernate still holds information about it in
org.hibernate.collection.internal.AbstractPersistentCollection.storedSnapshot
of the uvazeks set on the detached Ppv. In previous versions (< 4.1.8) this worked. In this simple example I can work around it by adding orphanRemoval = true to the uvazeks set on Ppv and, instead of deleting the Uvazek directly, removing it from the uvazeks set on Ppv.
So my question is: Is this a Hibernate bug or my bad practice?
The problem is that Hibernate attempts to merge the Uvazek with id = 2. Hibernate sees that it has a key, but it does not know whether the object is dirty, so it is not clear whether an SQL update should be made.
But because the key is 2, Hibernate knows that the object has to exist in the database, so it tries to load the object in order to compare it to the version it just received in memory, to see if the object has pending modifications to be synchronized to the database.
But the select returns no results, so Hibernate has contradictory information: on one hand the database says the object does not exist. On the other hand, the object in memory says the object must exist with key 2. There is no way to decide which is correct, so the ObjectNotFoundException is thrown.
Before this version, the code was accidentally relying on a bug that has since been fixed, so it no longer works.
The best practice is to avoid clear() and to use it only when needed as a memory optimization, clearing only objects that you know won't be modified or needed again in the same session. Have a look at "Is Session clear() considered harmful?".
You also need to remove the reference to Uvazek from Ppv. Otherwise Hibernate tries to restore the relation when you merge it back and fails, because you deleted the referenced Uvazek.
This is also why adding orphan removal works for you.
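To illustrate, here is a minimal sketch of that fix against the test from the question: after deleting the Uvazek, the stale reference is dropped from the detached Ppv before merging (whether this alone is enough depends on the mapping; the question's author combined it with orphanRemoval = true):

Ppv ppv = (Ppv) getSession().get(Ppv.class, 1L);
getSession().clear();

Uvazek toDelete = (Uvazek) getSession().get(Uvazek.class, 2L);
getSession().delete(toDelete);
getSession().flush();

// drop the now-deleted child from the detached collection as well
ppv.uvazeks.removeIf(u -> u.uvazekId.equals(2L));

getSession().merge(ppv);
getSession().flush();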
I had the same Hibernate exception.
After debugging for some time, I realized that the issue was caused by orphan child records.
As many people point out, the record does exist when you search for it.
The issue is not that Hibernate cannot find the record in the table; it is caused by orphan child records: records that reference non-existing parents.
What I did was find the foreign key references for the table linked to the bean.
To find foreign key references in SQL Developer:
1. Save the XML code below into a file (fk_reference.xml):
<items>
<item type="editor" node="TableNode" vertical="true">
<title><![CDATA[FK References]]></title>
<query>
<sql>
<![CDATA[select a.owner,
a.table_name,
a.constraint_name,
a.status
from all_constraints a
where a.constraint_type = 'R'
and exists(
select 1
from all_constraints
where constraint_name=a.r_constraint_name
and constraint_type in ('P', 'U')
and table_name = :OBJECT_NAME
and owner = :OBJECT_OWNER)
order by table_name, constraint_name]]>
</sql>
</query>
</item></items>
2. Add the USER DEFINED extension to SQL Developer:
Tools > Preferences
Database > User Defined Extensions
Click "Add Row" button
In Type choose "EDITOR", Location - where you saved the xml file above
Click "Ok" then restart SQL Developer
3. Navigate to any table and you will see an additional tab next to SQL, labelled FK References, displaying FK information.
4. References:
http://www.oracle.com/technetwork/issue-archive/2007/07-jul/o47sql-086233.html
How can I find which tables reference a given table in Oracle SQL Developer?
To find the orphan records in all the referencing tables:
select * from CHILD_TABLE
where FOREIGNKEY not in (select PRIMARYKEY from PARENT_TABLE);
Delete these orphan records, commit the changes, and restart the server if required.
This solved my exception. You may try the same.
I'm having an Entity which has a primary key / id field like the following:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
This works well. I'm using EclipseLink to create the DDL-Schema, and the column is correctly created like so:
`id` bigint(20) NOT NULL AUTO_INCREMENT
However, I've got several entities for which I do want to specify the PK myself (it's a little application that transfers data from an old database to the new one we're building). If I specify the ID on the POJO (using setId(Long id)) and persist it, EclipseLink does not use it (i.e. the record is saved, but the id is auto-generated by EclipseLink).
Is there a way to manually specify the value of a column which has a @GeneratedValue?
Here some thoughts on the issue:
I tried to work around the problem by not using @GeneratedValue at all and simply defining the column manually as AUTO_INCREMENT. However, this forces me to always provide IDs manually, since EclipseLink validates the primary key (so it may not be null, zero, or a negative number). The exception message says I should specify eclipselink.id_validation, but this does not seem to make any difference (I annotated @PrimaryKey(validation = IdValidation.NONE) but still got the same message).
To clarify: I'm using EclipseLink (2.4.0) as persistence provider and I can't switch away from it (large portions of the project depend on eclipselink specific query hints, annotations, and classes).
EDIT (In Response to the answers):
Custom sequencing: I tried to implement my own sequencing. I tried subclassing DefaultSequence, but EclipseLink tells me Internal Exception: org.eclipse.persistence.platform.database.MySQLPlatform could not be found. But I've checked: the class is on the classpath.
So I subclassed another class, NativeSequence:
public class MyNativeSequence extends NativeSequence {

    public MyNativeSequence() {
        super();
    }

    public MyNativeSequence(final String name) {
        super(name);
    }

    @Override
    public boolean shouldAlwaysOverrideExistingValue() {
        return false;
    }

    @Override
    public boolean shouldAlwaysOverrideExistingValue(final String seqName) {
        return false;
    }
}
However, what I get is the following:
javax.persistence.RollbackException: Exception [EclipseLink-7197] (Eclipse Persistence Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.ValidationException
Exception Description: Null or zero primary key encountered in unit of work clone [de.dfv.datenbank.domain.Mitarbeiter[ id=null ]], primary key [null]. Set descriptors IdValidation or the "eclipselink.id-validation" property.
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commitInternal(EntityTransactionImpl.java:102)
...
Caused by: Exception [EclipseLink-7197] (Eclipse Persistence Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.ValidationException
Exception Description: Null or zero primary key encountered in unit of work clone [de.dfv.datenbank.domain.Mitarbeiter[ id=null ]], primary key [null]. Set descriptors IdValidation or the "eclipselink.id-validation" property.
at org.eclipse.persistence.exceptions.ValidationException.nullPrimaryKeyInUnitOfWorkClone(ValidationException.java:1451)
...
(stack trace shortened for clarity). This is the same message I got before. Shouldn't I subclass NativeSequence? If so, I don't know what to implement for the abstract methods in Sequence or StandardSequence.
It may also be worth noting that simply subclassing the class (without overriding any methods) works as expected. However, returning false in shouldAlwaysOverrideExistingValue(...) does not generate a single value at all (I stepped through the program and getGeneratedValue() is not called once).
Also, when I inserted about 8 entities of a certain kind within a transaction, it resulted in 11 records in the database (what the hell?!).
EDIT (2012-09-01): I still do not have a solution for the problem; implementing my own sequence did not solve it. What I need is a way to not set an id explicitly (so it will be auto-generated) and also to be able to set an id explicitly (so it will be used when the record is created in the database).
I tried to define the column as AUTO_INCREMENT myself and omit @GeneratedValue, however the validation kicks in and does not allow me to save such an entity. If I specify a value that is not null and not zero, MySQL complains about a duplicate primary key.
I'm running out of ideas and options to try. Any? (starting a bounty)
This works with eclipselink. It will create a seperate table for the sequence, but that shouldn't pose a problem.
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id", insertable = true, updatable = true, unique = true, nullable = false)
private Long id;
GenerationType.AUTO will choose the ideal generation strategy. Since the field is specified as insertable and updatable, a TABLE generation strategy will be used. This means EclipseLink will generate another table holding the current sequence value and will generate the sequence itself instead of delegating it to the database. Since the column is declared insertable, if the id is null when persisting, EclipseLink will generate the id; otherwise the existing id will be used.
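As a minimal sketch of the behaviour described above (MyEntity and the surrounding EntityManager setup are assumptions, not taken from the question):

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

MyEntity fresh = new MyEntity();   // id left null
em.persist(fresh);                 // the provider assigns the id

MyEntity migrated = new MyEntity();
migrated.setId(12345L);            // id carried over from the old database
em.persist(migrated);              // claimed behaviour: the supplied id is kept

em.getTransaction().commit();
em.close();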
If you use TABLE sequencing, then EclipseLink will allow you to override the value (or SEQUENCE if your database supports this, MySQL does not).
For IDENTITY, I'm not even sure that MySQL will allow you to supply your own id; you might want to verify this. In general I would never recommend using IDENTITY, as it does not support preallocation.
There are a few issues with allowing IDENTITY to either provide the id or not. One is that two different INSERT statements would need to be generated depending on the id value, because with IDENTITY the id cannot appear in the insert at all. You may wish to log a bug to have IDENTITY support user-provided ids.
You should still be able to get it working with your own Sequence subclass, or possibly MySQLPlatform subclass. You would set your MySQLPlatform subclass using the "eclipselink.target-database" persistence unit property.
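For that last suggestion, a hedged sketch of how such a platform subclass could be registered (the class name com.example.MyMySQLPlatform and the unit name are hypothetical):

// pass the persistence unit property programmatically; persistence.xml works too
Map<String, Object> props = new HashMap<>();
props.put("eclipselink.target-database", "com.example.MyMySQLPlatform");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);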
Database-centric solution to your problem:
Create an auxiliary, nullable column in your table. It will hold your manually assigned ids:
CREATE TABLE `test_table`
(
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`manual_id` bigint(20) NULL,
`some_other_field` varchar(200) NOT NULL,
PRIMARY KEY(id)
);
Map this column to a normal field in your Entity:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;

@Column(name = "manual_id")
private Integer manualId;
Create a trigger that sets the table id to the manual assigned id if it is not null:
DELIMITER //
CREATE TRIGGER `test_table_bi` BEFORE INSERT ON `test_table`
FOR EACH ROW
BEGIN
IF NEW.`manual_id` IS NOT NULL THEN
SET NEW.`id` = NEW.`manual_id`;
END IF;
END;//
DELIMITER ;
Always use the manualId when you need to assign a custom id. The trigger will do the magic for you:
testEntity.setManualId(300);
entityManager.persist(testEntity);
After the database import phase, simply remove the trigger, the auxiliary column, and its mapping:
DROP TRIGGER `test_table_bi`;
ALTER TABLE `test_table` DROP COLUMN `manual_id`;
Warning
If you manually specify an id greater than the current AUTO_INCREMENT value, the next generated id will jump to the value of the manually assigned id plus 1, e.g.:
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (50, 'Something');
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (NULL, 'Something else');
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (90, 'Something 2');
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (NULL, 'Something else 2');
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (40, 'Something 3');
INSERT INTO `test_table` (manual_id, some_other_field) VALUES (NULL, 'Something else 3');
will yield the following results:
+----+-----------+------------------+
| id | manual_id | some_other_field |
+----+-----------+------------------+
| 50 | 50 | Something |
| 51 | NULL | Something else |
| 90 | 90 | Something 2 |
| 91 | NULL | Something else 2 |
| 40 | 40 | Something 3 |
| 92 | NULL | Something else 3 |
+----+-----------+------------------+
To avoid problems it is highly recommended to set the AUTO_INCREMENT column to start with a number greater than all of the existing ids in your previous database, e.g.:
ALTER TABLE `test_table` AUTO_INCREMENT = 100000;
I might be missing something obvious, but why not just define another entity with the same @Table(name = ".....") annotation and the same fields, but with a non-generated id? Then you can use that entity for the code that copies data from the old DB to the new one, and the one with the generated id can be used for normal creates that require id generation.
I can't tell you if it works with EclipseLink, but we're using Hibernate here and it doesn't seem to mind it.
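For illustration, a rough sketch of that approach (table and class names are made up; only the id mapping differs between the two entities):

@Entity
@Table(name = "PERSON")
public class Person {                 // used by the normal application code
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    // getters and setters ...
}

@Entity
@Table(name = "PERSON")
public class ImportedPerson {         // used only by the migration code
    @Id
    private Long id;                  // assigned manually, no @GeneratedValue
    private String name;
    // getters and setters ...
}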
Using GenerationType.SEQUENCE with PostgreSQL and EclipseLink worked for me.
1) Change
@GeneratedValue(strategy = GenerationType.IDENTITY)
to
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "step_id_seq")
@SequenceGenerator(name = "step_id_seq", sequenceName = "step_id_seq")
Now you can call the sequence using a native query:
return ((Vector<Integer>) em.createNativeQuery("select nextval('step_id_seq')::int").getSingleResult()).get(0);
and set the returned id on your entity before calling the EntityManager.persist() method.
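Put together, a small sketch of that flow (the Step entity and its setter are assumptions chosen to match the sequence name above):

// fetch the next value from the database sequence
long nextId = ((Number) em.createNativeQuery("select nextval('step_id_seq')")
        .getSingleResult()).longValue();

// assign it explicitly, then persist as usual
Step step = new Step();
step.setId(nextId);
em.persist(step);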
Hope it's not too late!
Look for Custom Id Generator
http://blog.anorakgirl.co.uk/2009/01/custom-hibernate-sequence-generator-for-id-field/
maybe this could help.
My way (MySQL) is to deactivate GeneratedValue:
@Id
//@GeneratedValue
@Column(unique = true, nullable = false, columnDefinition = "BINARY(16)")
private UUID id;
And add this to the entity:
@PrePersist
protected void onCreation() {
    if (id == null) setId(UUID.randomUUID());
}
Now in my code I can do this (in a service, for example):
String clientID = env.getParam("id");
Entity entity = entityRepository.findFirstById(UUID.fromString(clientID));
// is it new?
if (entity == null) {
    entity = new Entity();
    entity.setId(UUID.fromString(clientID)); // set the custom ID here
}
Entity entityNew = entityRepository.save(entity); // insert
if (entityNew.getId().equals(entity.getId())) {
    Log.i("OK!");
}
👌
I get the following Hibernate error. I am able to identify the function that causes the issue. Unfortunately there are several DB calls in that function. I am unable to find the line that causes the issue, since Hibernate flushes the session at the end of the transaction. The Hibernate error below looks like a general error; it doesn't even mention which bean causes the issue. Is anyone familiar with this Hibernate error?
org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:500)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:473)
at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:267)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)
I got the same exception while deleting a record by id that did not exist at all. So check that the record you are updating/deleting actually exists in the DB.
Without code and mappings for your transactions, it'll be next to impossible to investigate the problem.
However, to get a better handle on what causes the problem, try the following:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem (see the sketch after this list).
Set the log levels for Spring and Hibernate to DEBUG, again this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
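For the first two suggestions, a minimal sketch of the programmatic equivalents (the same properties can go into hibernate.cfg.xml, and the logger configuration depends on your logging framework):

// show the SQL that Hibernate executes
Configuration cfg = new Configuration().configure();
cfg.setProperty("hibernate.show_sql", "true");
cfg.setProperty("hibernate.format_sql", "true");
SessionFactory sessionFactory = cfg.buildSessionFactory();

// e.g. in log4j.properties: log4j.logger.org.hibernate=DEBUG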
Solution:
In the Hibernate mapping file for the id property, if you use any generator class, you should not set the value of that property explicitly with a setter method.
If you set the value of the id property explicitly, it will lead to the error above. Check this to avoid the error.
or
This error also appears when the mapping file declares the id generator as "native" or "increment" but the mapped table in your database is not auto-incremented.
Solution: go to your database and update the table to set auto_increment.
In my case, I came across this exception in two similar cases:
In a method annotated with @Transactional I had a call to another service (with long response times). The method updates some properties of the entity (after the method, the entity still exists in the database). If the user invokes the method twice (because he thinks it didn't work the first time), then when exiting the transactional method the second time, Hibernate tries to update an entity whose state has already changed since the beginning of the transaction. Since Hibernate looks for the entity in one state but finds the same entity already changed by the first request, it throws an exception because it can't update the entity. It's like a conflict in Git.
I had automatic requests (for monitoring the platform) that update an entity (with a manual rollback a few seconds later). But this platform is already used by a test team. When a tester performs a test on the same entity as the automatic requests (within the same hundredth of a millisecond), I get the exception. As in the previous case, when exiting the second transaction, the entity previously fetched has already changed.
Conclusion: in my case, it wasn't a problem that could be found in the code. This exception is thrown when Hibernate finds that the entity first fetched from the database changed during the current transaction, so it can't flush it to the database, because Hibernate doesn't know which version of the entity is correct: the one the current transaction fetched at the beginning, or the one already stored in the database.
Solution: to solve the problem, you will have to play with the Hibernate LockMode to find the one that best fits your requirements.
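For example, a hedged sketch using the standard JPA lock modes (the entity and property names are placeholders); whether a pessimistic lock or an optimistic force-increment fits best depends on the two scenarios above:

// read the row with a pessimistic write lock, so a concurrent updater
// has to wait until this transaction finishes
MyEntity entity = entityManager.find(MyEntity.class, entityId, LockModeType.PESSIMISTIC_WRITE);
entity.setStatus("UPDATED"); // placeholder property
// the flush at commit time can no longer race with the other transaction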
This happened to me once by accident when I was assigning specific IDs to some objects (for testing) and then trying to save them in the database. The problem was that in the database there was a specific policy for setting the IDs of the objects. Just do not assign an ID if you have a policy at the Hibernate level.
I just encountered this problem and found out I was deleting a record and trying to update it afterwards in a Hibernate transaction.
Hibernate 5.4.1 and HHH-12878 issue
Prior to Hibernate 5.4.1, the optimistic locking failure exceptions (e.g., StaleStateException or OptimisticLockException) didn't include the failing statement.
The HHH-12878 issue was created to improve Hibernate so that when throwing an optimistic locking exception, the JDBC PreparedStatement implementation is logged as well:
if ( expectedRowCount > rowCount ) {
    throw new StaleStateException(
        "Batch update returned unexpected row count from update ["
            + batchPosition + "]; actual row count: " + rowCount
            + "; expected: " + expectedRowCount + "; statement executed: "
            + statement
    );
}
Testing Time
I created the BatchingOptimisticLockingTest in my High-Performance Java Persistence GitHub repository to demonstrate how the new behavior works.
First, we will define a Post entity that defines a #Version property, therefore enabling the implicit optimistic locking mechanism:
#Entity(name = "Post")
#Table(name = "post")
public class Post {
#Id
#GeneratedValue(strategy = GenerationType.SEQUENCE)
private Long id;
private String title;
#Version
private short version;
public Long getId() {
return id;
}
public Post setId(Long id) {
this.id = id;
return this;
}
public String getTitle() {
return title;
}
public Post setTitle(String title) {
this.title = title;
return this;
}
public short getVersion() {
return version;
}
}
We will enable the JDBC batching using the following 3 configuration properties:
properties.put("hibernate.jdbc.batch_size", "5");
properties.put("hibernate.order_inserts", "true");
properties.put("hibernate.order_updates", "true");
We are going to create 3 Post entities:
doInJPA(entityManager -> {
    for (int i = 1; i <= 3; i++) {
        entityManager.persist(
            new Post()
                .setTitle(String.format("Post no. %d", i))
        );
    }
});
And Hibernate will execute a JDBC batch insert:
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
Query: [
INSERT INTO post (title, version, id)
VALUES (?, ?, ?)
],
Params:[
(Post no. 1, 0, 1),
(Post no. 2, 0, 2),
(Post no. 3, 0, 3)
]
So, we know that JDBC batching works just fine.
Now, let's replicate the optimistic locking issue:
doInJPA(entityManager -> {
    List<Post> posts = entityManager.createQuery("""
        select p
        from Post p
        """, Post.class)
    .getResultList();

    posts.forEach(
        post -> post.setTitle(
            post.getTitle() + " - 2nd edition"
        )
    );

    executeSync(
        () -> doInJPA(_entityManager -> {
            Post post = _entityManager.createQuery("""
                select p
                from Post p
                order by p.id
                """, Post.class)
            .setMaxResults(1)
            .getSingleResult();

            post.setTitle(post.getTitle() + " - corrected");
        })
    );
});
The first transaction selects all Post entities and modifies the title properties.
However, before the first EntityManager is flushed, we are going to execute a second transaction using the executeSync method.
The second transaction modifies the first Post, so its version is going to be incremented:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - corrected', 1, 1, 0)
]
Now, when the first transaction tries to flush the EntityManager, we will get the OptimisticLockException:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - 2nd edition', 1, 1, 0),
('Post no. 2 - 2nd edition', 1, 2, 0),
('Post no. 3 - 2nd edition', 1, 3, 0)
]
o.h.e.j.b.i.AbstractBatchImpl - HHH000010: On release of batch it still contained JDBC statements
o.h.e.j.b.i.BatchingBatch - HHH000315: Exception executing batch [
org.hibernate.StaleStateException:
Batch update returned unexpected row count from update [0];
actual row count: 0;
expected: 1;
statement executed:
PgPreparedStatement [
update post set title='Post no. 3 - 2nd edition', version=1 where id=3 and version=0
]
],
SQL: update post set title=?, version=? where id=? and version=?
So, you need to upgrade to Hibernate 5.4.1 or newer to benefit from this improvement.
This can happen when trigger(s) execute additional DML (data modification) queries which affect the row counts. My solution was to add the following at the top of my trigger:
SET NOCOUNT ON;
I was facing the same issue.
The code was working in the testing environment, but it was not working in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (In the staging DB the table didn't have any primary key constraint, and there were multiple entries.)
So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the actual update count was 3. Since the expected and actual update counts didn't match, it throws the exception and rolls back.
After I removed all the records with duplicate primary keys and added the primary key constraint, it worked fine.
Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
actual row count: 0 // means no record was found to update
update: 0 // means no record was found, so nothing was updated
expected: 1 // means it expected at least 1 record with the key in the DB table
The problem here is that the query is trying to update a record for some key, but Hibernate didn't find any record with that key.
It can also happen when you try to UPDATE a PRIMARY KEY.
My two cents.
Problem: With Spring Boot 2.7.1 the H2 database version has changed to v2.1.214, which may result in an OptimisticLockException being thrown when using generated UUIDs for id columns, see https://hibernate.atlassian.net/browse/HHH-15373.
Solution: Add columnDefinition="UUID" to the @Column annotation.
E.g., with a primary key definition for an entity like this:
@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(name = "UUID", strategy = "org.hibernate.id.UUIDGenerator")
@Column(name = COLUMN_UUID, updatable = false, nullable = false)
UUID uUID;
Change the column annotation to:
@Column(name = COLUMN_UUID, updatable = false, nullable = false, columnDefinition = "UUID")
As Julius says, this happens when an update occurs on an object whose children are being deleted. (Probably because the whole father object needed an update, and sometimes we prefer to delete the children and re-insert them on the father (new or old, it doesn't matter) along with any other updates the father may have on its other plain fields.)
So, in order for this to work, delete the children (within a transaction) by calling childrenList.clear() (don't loop through the children and delete each one with something like childDAO.delete(childrenList.get(i))), and set
@OneToMany(cascade = CascadeType.XXX, orphanRemoval = true) on the side of the father object. Then update the father (fatherDAO.update(father)). (Repeat for every father object.) The result is that the children have their link to their father stripped off and are then removed as orphans by the framework, as sketched below.
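A rough sketch of that pattern (entity and DAO names are illustrative, not from the original post):

@Entity
public class Father {
    @Id
    @GeneratedValue
    private Long id;

    // orphanRemoval makes the framework delete children removed from the collection
    @OneToMany(mappedBy = "father", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Child> children = new ArrayList<>();

    public List<Child> getChildren() {
        return children;
    }
}

// inside a transaction: clear the children instead of deleting them one by one,
// then update the father; the orphans are removed by the framework
father.getChildren().clear();
fatherDAO.update(father);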
I encountered this problem where we had a one-to-many relationship.
In the Hibernate hbm mapping file for the master, for the object with the set-type arrangement, I added cascade="save-update" and it worked fine.
Without this, by default Hibernate tries to update a non-existent record; with the cascade, it inserts instead.
Another way to get this error is if you have a null item in a collection.
It happens when you try to delete an object and then update the same object. Use this after the delete:
session.clear();
I got the same problem, and I verified that it may occur because of an auto-increment primary key. To solve this problem, do not insert the auto-increment value with the data set. Insert the data without the primary key.
This happened to me too, because my id was a Long and I was receiving the value 0 from the view; when I tried to save it to the database I got this error. I fixed it by setting the id to null.
This problem mainly occurs when we are trying to save or update an object that has already been fetched into memory by a running session.
If you've fetched an object from the session and you're trying to update it in the database, this exception may be thrown.
I used session.evict() to remove the cached instance from Hibernate first, or, if you don't want to risk losing data, make another object to store the data temporarily.
try {
    if (!session.isOpen()) {
        session = EmployeyDao.getSessionFactory().openSession();
    }
    tx = session.beginTransaction();
    session.evict(e);
    session.saveOrUpdate(e);
    tx.commit();
    EmployeyDao.shutDown(session);
} catch (HibernateException exc) {
    exc.printStackTrace();
    tx.rollback();
}
I ran into this issue when I was manually beginning and committing transactions inside of method annotated as #Transactional. I fixed the problem by detecting if an active transaction already existed.
//Detect underlying transaction
if (session.getTransaction() != null && session.getTransaction().isActive()) {
myTransaction = session.getTransaction();
preExistingTransaction = true;
} else {
myTransaction = session.beginTransaction();
}
Then I allowed Spring to handle committing the transaction.
private void finishTransaction() {
if (!preExistingTransaction) {
try {
tx.commit();
} catch (HibernateException he) {
if (tx != null) {
tx.rollback();
}
log.error(he);
} finally {
if (newSessionOpened) {
SessionFactoryUtils.closeSession(session);
newSessionOpened = false;
maxResults = 0;
}
}
}
}
This happens when you declared the JSF managed bean as
@RequestScoped
when you should declare it as
@SessionScoped
Regards
I got this error when I tried to update an object with an id that did not exist in the database. The reason for my mistake was that I had manually assigned a property with the name 'id' to the client side JSON-representation of the object and then when deserializing the object on the server side this 'id' property would overwrite the instance variable (also called 'id') that Hibernate was supposed to generate. So be careful of naming collisions if you are using Hibernate to generate identifiers.
I also came across the same challenge. In my case I was updating an object that did not even exist, using hibernateTemplate.
Actually, in my application I was fetching a DB object to update. While updating its values, I also updated its ID by mistake, went ahead with the update, and came across this error.
I am using hibernateTemplate for CRUD operations.
After reading all the answers, I didn't find anyone talking about the inverse attribute of Hibernate.
In my opinion you should also verify, in your relationship mappings, whether the inverse keyword is set appropriately. The inverse keyword defines which side is the owner responsible for maintaining the relationship, and the procedure for updating and inserting varies according to this attribute.
Let's suppose we have two tables:
principal_table, middle_table
with a one-to-many relationship. The Hibernate mapping classes are Principal and Middle respectively.
So the Principal class has a set of Middle objects. The XML mapping file should look like the following:
<hibernate-mapping>
<class name="path.to.class.Principal" table="principal_table" ...>
...
<set name="middleObjects" table="middle_table" inverse="true" fetch="select">
<key>
<column name="PRINCIPAL_ID" not-null="true" />
</key>
<one-to-many class="path.to.class.Middle" />
</set>
...
As inverse is set to "true", it means the Middle class is the relationship owner, so the Principal class will NOT update the relationship.
So the procedure for updating could be implemented like this:
session.beginTransaction();
Principal principal = new Principal();
principal.setSomething("1");
principal.setSomethingElse("2");
Middle middleObject = new Middle();
middleObject.setSomething("1");
middleObject.setPrincipal(principal);
principal.getMiddleObjects().add(middleObject);
session.saveOrUpdate(principal);
session.saveOrUpdate(middleObject); // NOTICE: you will need to save it manually
session.getTransaction().commit();
This worked for me, but you can suggest edits to improve the solution. That way we will all be learning.
In our case we finally found the root cause of the StaleStateException.
In fact, we were deleting the row twice in a single Hibernate session. Earlier we were using the ojdbc6 lib, and this was OK in that version.
But when we upgraded to ojdbc7 or ojdbc8, deleting records twice started throwing the exception. There was a bug in our code where we were deleting twice, but that was not evident with ojdbc6.
We were able to reproduce with this piece of code:
Detail detail = getDetail(Long.valueOf(1396451));
session.delete(detail);
session.flush();
session.delete(detail);
session.flush();
On the first flush, Hibernate makes the change in the database. During the second flush, Hibernate compares the session's object with the actual table's record, cannot find one, and hence throws the exception.
I solved it. I found that there was no primary key on my id column in the table.
Once I created it, the problem was solved. There were also duplicate ids in the table, which I deleted, and that also helped.
This thread is a bit old; however, I thought I should drop my fix here in case it may help someone with the same root cause.
I was migrating a Java Spring Hibernate app from Oracle to Postgres. Along the way I converted a trigger from Oracle to Postgres; the trigger was an "on before insert" trigger on a table and set one of the column values (of course the column was marked update="false" insert="false" in the Hibernate mapping to let the trigger set its value). When inserting data from the application I got this error: Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1.
My mistake was that I was returning NULL at the end of the trigger function, so when the trigger set the column value and control went back to Hibernate for saving, the record was lost because the trigger returned null.
My fix was to change RETURN NULL to RETURN NEW in the trigger. This keeps the record available after being altered by the trigger; simply put, that is what "unexpected row count from update: 0; expected: 1" meant here.
This can happen if you change something in a data set using a native SQL query while a persisted object for the same data set is present in the session cache.
Use session.evict(yourObject);
Hibernate caches objects in the session. If an object is accessed and modified by more than one user, org.hibernate.StaleStateException may be thrown. It may be solved by merging/refreshing the entity before saving, or by using a lock. More info: http://java-fp.blogspot.lt/2011/09/orghibernatestalestateexception-batch.html
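As a rough sketch of the refresh-before-save idea (the entity and its property are placeholders):

// re-read the current database state into the managed instance ...
session.refresh(entity);
// ... then re-apply the change on top of the fresh state; it is written out at flush/commit
entity.setStatus("PROCESSED");
session.flush();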
One such case:
SessionFactory sf=new Configuration().configure().buildSessionFactory();
Session session=sf.openSession();
UserDetails user=new UserDetails();
session.beginTransaction();
user.setUserName("update user agian");
user.setUserId(12);
session.saveOrUpdate(user);
session.getTransaction().commit();
System.out.println("user::"+user.getUserName());
sf.close();
I was facing this exception even though Hibernate was working well. I tried to insert one record manually using pgAdmin, and there the issue became clear: the SQL INSERT returned 0 inserted rows. There was a trigger function causing the issue, because it returned null; I only had to make it return new.
That finally solved the problem.
Hope that helps anybody.