Hibernate DataException: could not update - java

I have received an exception from a live deployment of a web application (JBoss, Turbine, Hibernate). I cannot reproduce the exception, and therefore I cannot fix the bug.
Here is the exception that I get:
org.hibernate.exception.DataException: could not update: [com.myproject.project.mypackage.objects.MyObject#1190]
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:77)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2425)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2307)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2607)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:92)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:234)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:41)
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:969)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1114)
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:79)
The interesting thing is that I get this could not update error when the following HQL is executed:
select sum(entity.totalPrice) from Entity entity where entity.parent.id = :parentId and deleted is null
Several entities belong to one parent.
This HQL is part of a bigger update process: I need the sum of totalPrice values in order to update another entity with it. Is it possible that the could not update refers to that update process?
I do not think this is the case, since the error occurs before the update is executed. More exactly, the exception occurs when the list() method is called on the Query object which holds the HQL.
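For illustration, here is a minimal sketch (with hypothetical entity and field names) of how list() can end up running a pending UPDATE through Hibernate's auto-flush, which matches the DefaultAutoFlushEventListener frames in the stack trace above:

// A previously loaded entity has been modified; its UPDATE is queued, not yet executed.
MyObject dirty = (MyObject) session.get(MyObject.class, 1190L);
dirty.setTotalPrice(suspectValue); // hypothetical setter and value

// Before running the SELECT, Hibernate auto-flushes pending changes so the query
// sees consistent data; the queued UPDATE runs here and can fail with
// "could not update" even though we only asked for a sum.
Query query = session.createQuery(
    "select sum(e.totalPrice) from Entity e"
    + " where e.parent.id = :parentId and e.deleted is null");
query.setParameter("parentId", parentId);
List result = query.list(); // the DataException surfaces at this call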
I have tried to reproduce the exception with the totalPrice of the entities set to null, but that does not produce any exception. If I attach a lot of entities to the same parent, so that the sum of their totalPrice values exceeds the column limit, I get a could not insert exception instead.
I cannot figure out what the problem is.

I think the first attribute name is supposed to be entity.parent_id, and the last condition should be entity.deleted instead of simply deleted.
The final query is supposed to be something like:
select sum(entity.totalPrice) from Entity entity where entity.parent_id = :parentId and entity.deleted is null
Make sure that the parent_id attribute name is correct.

Related

How does EclipseLink manage table locks?

I need some approach or help to visualize a problem I'm having. I work on a Java 6 application with a MySQL database, EclipseLink 1.0.1, and GlassFish 2.1.
Sometimes some of the functionalities have problems accessing the database, and looking at the log I found this:
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.0.1 (Build 20080905)): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
Error Code: 1205
Call: UPDATE table_person SET sip_codigo_barra_bat_64 = ?, sip_version = ? WHERE ((sip_id = ?) AND (sip_version = ?))
bind => [null, 2, 89608, 1]
Searching another Stack Overflow post, I found out that it could be a deadlock, and in my code there is a method that could be the problem, but I have some doubts about how EclipseLink manages locks.
1- If my Java method doesn't have any lock annotation, does EclipseLink lock the table anyway? Or does it depend on the connection pool, or something else?
2- If three threads execute the same Java method at the same time, and the method runs against the same table a SELECT, then an UPDATE, and then another UPDATE, could that generate a deadlock? How?
3- I think the problem I have is a deadlock in a method that does something like the following; it modifies two tables: table_person and table_payment (table_payment has a foreign key to table_person).
public void method(Integer id) {
    // Select the table_persona row from the database (bind the id parameter)
    Query q = em.createNativeQuery(
            "SELECT * FROM table_persona WHERE id = ?", Person.class);
    q.setParameter(1, id);
    Person tablePersonElement = (Person) q.getResultList().get(0);

    // Select the table_payment row from the database
    Query q2 = em.createNativeQuery(
            "SELECT * FROM table_payment WHERE id_table_persona = ?", Payment.class);
    q2.setParameter(1, id);
    List<?> payments = q2.getResultList();
    Payment tablePaymentElement = payments.isEmpty() ? null : (Payment) payments.get(0);

    // If the payment element doesn't exist, it is created
    if (tablePaymentElement == null) {
        tablePaymentElement = new Payment();
        tablePaymentElement.setMoney(money); // money comes from surrounding context
        tablePaymentElement.setIdTablePerson(tablePersonElement);
        em.persist(tablePaymentElement);
        em.flush();
    } else {
        tablePaymentElement = new Payment();
        tablePaymentElement.setMoney(money);
        tablePaymentElement.setIdTablePerson(tablePersonElement);
        em.merge(tablePaymentElement);
        em.flush();
    }

    // Finally set some values on tablePerson and save it
    tablePersonElement.setValue(value); // value comes from surrounding context
    em.merge(tablePersonElement);
    em.flush();
}
Is it possible to generate a deadlock with this method on its own, or only in the case that another method updates something in table_person while this one is executing?
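One common way to avoid this class of deadlock, sketched here under the assumption that JPA 2.0 locking is available (EclipseLink 1.0.1 implements JPA 1.0, where the equivalent is a native SELECT ... FOR UPDATE), is to take the table_person row lock first in every transaction, so locks are always acquired in the same order:

// Hedged sketch, not the original code: lock the parent row up front so two
// concurrent calls cannot acquire person/payment locks in opposite order.
Person person = em.find(Person.class, id);
em.lock(person, LockModeType.PESSIMISTIC_WRITE); // row lock on table_person

// ...now read/insert/update the Payment and modify the Person inside the
// same transaction; the second caller blocks on the lock instead of deadlocking.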

QuerySyntaxException when trying to validate a table of unclean records into a new table with clean data. Is it necessary to map entities?

I have an UncleanRecord table with dirty, unvalidated data.
My plan is to go through all the records in that table, then clean and validate them and add them into my new CleanRecord table.
I am using Spring Batch with Hibernate. When I execute the job, it returns this error:
org.springframework.batch.core.step.AbstractStep - Encountered an error executing step step1 in job validate
org.springframework.batch.item.ItemStreamException: Failed to initialize the reader
Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: UNCLEANRECORD is not mapped [SELECT name, contact,address FROM UNCLEANRECORD WHERE date is null]
Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: UNCLEANRECORD is not mapped
I have the entities UncleanRecord and CleanRecord.
Given the error in the console log, I suppose that I need to map UncleanRecord to CleanRecord? However, the data in UncleanRecord is inconsistent, and there are no unique identifiers in UncleanRecord; it purely stores whatever data is passed into the application. The rows in UncleanRecord would be checked against CleanRecord before being added into CleanRecord.
Is it possible to achieve what I want without Hibernate mapping?
I understand I can do the above with RowMapper<>; however, I am using Hibernate.
Looks like you are using an HQL/JPQL query which requires an entity named UNCLEANRECORD to exist, but it doesn't. I don't know how Spring Batch works here, but it looks like you should mark the query as a native query instead.
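As a hedged sketch of that suggestion (plain Hibernate API rather than the asker's Spring Batch reader configuration), a native SQL query avoids the need for an UNCLEANRECORD entity mapping entirely:

// Each row of the unmapped table comes back as an Object[] of the selected columns.
List<Object[]> rows = session
    .createSQLQuery("SELECT name, contact, address FROM UNCLEANRECORD WHERE date IS NULL")
    .list();

for (Object[] row : rows) {
    String name    = (String) row[0];
    String contact = (String) row[1];
    String address = (String) row[2];
    // clean and validate here, then persist a mapped CleanRecord entity
}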

Getting ObjectOptimisticLockingFailureException without version annotation or OptimisticLocking strategy

I am getting an optimistic locking exception (shown below), but the strange thing is that we haven't annotated any of our entities with @Version or configured any OptimisticLocking strategy, so I am wondering what can cause this exception. We are using JPA, Hibernate, Spring Data and Spring. The database is PostgreSQL.
System exception occurred while processing request, ERROR_CODE: a18d5739 org.springframework.orm.ObjectOptimisticLockingFailureException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:301)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:225)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:521)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:485)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
This is not about optimistic locking. This exception is thrown while deleting/updating a record by id that does not exist at all. So check that the record you are updating/deleting actually exists in the DB.
However, to get a better handle on what causes the problem, you can:
1) Set show_sql to true
2) Set the log levels for Spring and Hibernate to DEBUG
This will help you understand the issue and fix it.
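For reference, a minimal sketch of those two settings; the property style matches Hibernate's configuration API, and the logger names are the usual Spring/Hibernate ones:

// 1) Make Hibernate print every SQL statement it executes
properties.put("hibernate.show_sql", "true");
properties.put("hibernate.format_sql", "true"); // optional: pretty-print the SQL

// 2) Raise the log levels (log4j.properties equivalent shown as comments):
//    log4j.logger.org.hibernate=DEBUG
//    log4j.logger.org.springframework=DEBUG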
So the reason you get an ObjectOptimisticLockingFailureException is that the update statement uses the version column as part of its predicate.
When Hibernate fetches the record that you are going to update, it also pulls back the version column:
select id, value, version from my_table where id = 1;
which, let's say, returns (1, 'cur_val', 6).
You then update 'cur_val' to 'new_val'.
On commit of the transaction, the update statement will look like so:
update my_table set value='new_val', version=7 where id = 1 and version=6;
So if another update has already been made between the time you queried for the object and the transaction commit, the version no longer matches, zero rows are updated, and Hibernate reports the stale state.

Exception when inserting data into the database using JPA in NetBeans

SEVERE: Local Exception Stack:
Exception [EclipseLink-7092] (Eclipse Persistence Services - 2.0.0.v20091127-r5931):
org.eclipse.persistence.exceptions.ValidationException
Exception Description: Cannot add a query whose types conflict with an existing query.
Query To Be Added: [ReadAllQuery(name="Voter.findAll" referenceClass=Voter jpql="SELECT v FROM Voter v")] is named: [Voter.findAll] with arguments [[]].
The existing conflicting query: [ReadAllQuery(name="Voter.findAll" referenceClass=Voter jpql="SELECT v FROM Voter v")] is named: [Voter.findAll] with arguments: [[]].
I too have come across this issue and it makes little sense. I only have one entity bean with one defined query, and it continues to tell me it's the problem. I did a stop, then a start of GF3, redeployed my app, and I still get it. And worse, I am not even using the query.
One thing I don't understand.. why is EclipseLink being used in GF? Is that part of GF? I use Eclipse IDE, but I don't deploy from within Eclipse.. I deploy from my ant build script at command line. I am guessing GF must be using some EclipseLink (used to be TopLink?).
One answer above said to make sure there are no stale files, undeploy app, etc. Would be great if someone that has figured this out could provide more details and explain it. If it is another query that has an error in it, sure would be nice if the error was shown instead of this misleading one.
So far, I've stopped GF, dropped all the tables, restarted, redeployed (in autodeploy folder), and still get this issue right away. I generally build/deploy to autodeploy folder several times in short periods of time, as I make quick changes then build/redeploy.
I encountered this problem too, and I found out the exception isn't related to the reported file at all; the problem was in another query, for example:
@NamedQuery(name = "ChannelType.ALL", query = "SELECT channelType FROM ChannelType channelType WHERE channelType.applicationClient.applicationClientId =:applicationClientId ORDER BY channelType.channelTypId ASC")
The problem was "ORDER BY channelType.channelTypId": apparently it did not accept ordering by the primary key. When I removed that clause, the exception just went away.
Maybe someone else can explain why this happens.
Thanks
Just for the people out there that are still struggling with this error:
Undeploy your application and check if there are any stale (maybe locked) files left. Stale files would cause the old named queries to still exist and not be replaced.
Delete the files and redeploy. The error should disappear.
Edit:
Also check if you haven't done anything like ... WHERE o.object_id = :id ... instead of ... WHERE o.object = :object ...
This was the solution for my problems. Took me 3 weeks to figure that out. EclipseLink isn't very clear when it comes to exceptions. There was actually a query compile error; instead, it throws a duplicate query exception.
It looks like you have the query defined twice. Either on the same entity, or on another entity, or in orm.xml
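A hypothetical sketch of such a duplicate, reusing the Voter entity from the error message above; EclipseLink raises this conflict when the same query name is registered twice:

@Entity
@NamedQuery(name = "Voter.findAll", query = "SELECT v FROM Voter v")
public class Voter {

    @Id
    @GeneratedValue
    private Long id;
}

// ...plus a second registration of the same name elsewhere, e.g. in orm.xml:
// <named-query name="Voter.findAll">
//     <query>SELECT v FROM Voter v</query>
// </named-query>
// Two definitions of "Voter.findAll" trigger the conflict at deployment.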
Not sure what you're doing exactly, since you're not showing any code, but this is what the EclipseLink documentation says about error ECLIPSELINK-07092:
ECLIPSELINK-07092: Cannot add a query whose types conflict with an existing query. Query To Be Added: [{0}] is named: [{1}] with arguments [{2}]. The existing conflicting query: [{3}] is named: [{4}] with arguments: [{5}].
Cause: EclipseLink has detected a conflict between a custom query with the same name and arguments added to a session.
Action: Ensure that no query is added to the session more than once, or change the query name so that the query can be distinguished from the others.
According to the above description and to the trace, it seems that you're adding a query (actually the same one) with the same query name more than once to the session. You shouldn't (or you should use another query name).
The error can also come from a malformed named query; I had a WHERE clause like "where o.activo" (with no comparison), and that showed me this exact error.
I had the same problem.
The real exception was a malformed @NamedQuery, but the stack trace just said:
"Exception Description: Cannot add a query whose types conflict with an existing query"
My solution:
In the persistence unit, change the Table Generation Strategy to "None" and the Validation Strategy to "None". When I ran it again, I got the real exception (malformed query). I fixed the error in the query, returned to the old persistence-unit configuration, and all the exceptions disappeared.
I was going crazy, but at least I found it. This does not work:
@NamedQuery(name = "xyx", query = "SELECT count(v) FROM Classe v WHERE v.id =:_id")
This works:
@NamedQuery(name = "xyx", query = "SELECT count(v) FROM Classe v WHERE v.id = :_id")
"WHERE v.id =:_id" was the error.

Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

I get the following Hibernate error. I am able to identify the function which causes the issue. Unfortunately there are several DB calls in the function. I am unable to find the line which causes the issue, since Hibernate flushes the session at the end of the transaction. The Hibernate error below looks like a general error; it doesn't even mention which bean causes the issue. Is anyone familiar with this Hibernate error?
org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:500)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:473)
at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:267)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)
I got the same exception while deleting a record by id that did not exist at all. So check that the record you are updating/deleting actually exists in the DB.
Without code and mappings for your transactions, it'll be next to impossible to investigate the problem.
However, to get a better handle on what causes the problem, try the following:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
Solution:
In the Hibernate mapping file, if you use any generator class for the id property, you should not set that property's value explicitly via its setter method.
If you set the value of the id property explicitly, it will lead to the error above. Check this to avoid the error.
Or:
This error also shows up when you declare generator="native" or "incremental" in the mapping file while the mapped table in your database is not auto-incremented.
Solution: Go to your database and alter your table to set auto_increment.
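To illustrate the first point, a minimal sketch with a hypothetical entity: when a generator is configured, assigning the id by hand makes Hibernate issue an UPDATE for a row that was never inserted:

// Wrong: the id is supposed to come from the generator.
MyEntity broken = new MyEntity();
broken.setId(42L);            // manually assigned id
session.saveOrUpdate(broken); // treated as an UPDATE of row 42 -> row count 0

// Right: leave the id unset and let the generator assign it on INSERT.
MyEntity fresh = new MyEntity();
session.save(fresh);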
In my case, I came to this exception in two similar situations:
In a method annotated with @Transactional I had a call to another service (with long response times). The method updates some properties of the entity (after the method, the entity still exists in the database). If the user invokes the method twice (because he thinks it didn't work the first time), on exiting the transactional method the second time Hibernate tries to update an entity whose state has already changed since the beginning of the transaction. As Hibernate looks for the entity in one state but finds the same entity already changed by the first request, it throws an exception because it can't update the entity. It's like a conflict in Git.
I had automatic requests (for monitoring the platform) which update an entity (with a manual rollback a few seconds later). But this platform is already used by a test team. When a tester performs a test on the same entity as the automatic requests (within the same hundredth of a millisecond), I get the exception. As in the previous case, on exiting the second transaction, the previously fetched entity has already changed.
Conclusion: in my case, it wasn't a problem that could be found in the code. This exception is thrown when Hibernate finds that the entity first fetched from the database changed during the current transaction, so it can't flush it to the database, as Hibernate doesn't know which is the correct version of the entity: the one the current transaction fetched at the beginning, or the one already stored in the database.
Solution: to solve the problem, you will have to play with the Hibernate LockMode to find the one which best fits your requirements.
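As a hedged sketch of that last suggestion (Hibernate 3 style, hypothetical entity name), fetching the row with an explicit pessimistic lock keeps concurrent writers out between the fetch and the flush:

// LockMode.UPGRADE issues SELECT ... FOR UPDATE, so no other transaction can
// change the row until this transaction commits or rolls back.
MyEntity entity = (MyEntity) session.get(MyEntity.class, id, LockMode.UPGRADE);
entity.setValue(newValue);
session.flush(); // the row could not have changed underneath us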
This happened to me once by accident when I was assigning specific IDs to some objects (for testing) and then trying to save them to the database. The problem was that in the database there was a specific policy for setting up the IDs of the objects. Just do not assign an ID if you have a policy at the Hibernate level.
I just encountered this problem and found out I was deleting a record and trying to update it afterwards in a Hibernate transaction.
Hibernate 5.4.1 and HHH-12878 issue
Prior to Hibernate 5.4.1, the optimistic locking failure exceptions (e.g., StaleStateException or OptimisticLockException) didn't include the failing statement.
The HHH-12878 issue was created to improve Hibernate so that when throwing an optimistic locking exception, the JDBC PreparedStatement implementation is logged as well:
if ( expectedRowCount > rowCount ) {
    throw new StaleStateException(
        "Batch update returned unexpected row count from update ["
        + batchPosition + "]; actual row count: " + rowCount
        + "; expected: " + expectedRowCount + "; statement executed: "
        + statement
    );
}
Testing Time
I created the BatchingOptimisticLockingTest in my High-Performance Java Persistence GitHub repository to demonstrate how the new behavior works.
First, we will define a Post entity that declares a @Version property, thereby enabling the implicit optimistic locking mechanism:
@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    private String title;

    @Version
    private short version;

    public Long getId() {
        return id;
    }

    public Post setId(Long id) {
        this.id = id;
        return this;
    }

    public String getTitle() {
        return title;
    }

    public Post setTitle(String title) {
        this.title = title;
        return this;
    }

    public short getVersion() {
        return version;
    }
}
We will enable JDBC batching using the following 3 configuration properties:
properties.put("hibernate.jdbc.batch_size", "5");
properties.put("hibernate.order_inserts", "true");
properties.put("hibernate.order_updates", "true");
We are going to create 3 Post entities:
doInJPA(entityManager -> {
    for (int i = 1; i <= 3; i++) {
        entityManager.persist(
            new Post()
                .setTitle(String.format("Post no. %d", i))
        );
    }
});
And Hibernate will execute a JDBC batch insert:
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
Query: [
INSERT INTO post (title, version, id)
VALUES (?, ?, ?)
],
Params:[
(Post no. 1, 0, 1),
(Post no. 2, 0, 2),
(Post no. 3, 0, 3)
]
So, we know that JDBC batching works just fine.
Now, let's replicate the optimistic locking issue:
doInJPA(entityManager -> {
    List<Post> posts = entityManager.createQuery("""
        select p
        from Post p
        """, Post.class)
    .getResultList();

    posts.forEach(
        post -> post.setTitle(
            post.getTitle() + " - 2nd edition"
        )
    );

    executeSync(
        () -> doInJPA(_entityManager -> {
            Post post = _entityManager.createQuery("""
                select p
                from Post p
                order by p.id
                """, Post.class)
            .setMaxResults(1)
            .getSingleResult();

            post.setTitle(post.getTitle() + " - corrected");
        })
    );
});
The first transaction selects all Post entities and modifies the title properties.
However, before the first EntityManager is flushed, we are going to execute a second transaction using the executeSync method.
The second transaction modifies the first Post, so its version is going to be incremented:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - corrected', 1, 1, 0)
]
Now, when the first transaction tries to flush the EntityManager, we will get the OptimisticLockException:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - 2nd edition', 1, 1, 0),
('Post no. 2 - 2nd edition', 1, 2, 0),
('Post no. 3 - 2nd edition', 1, 3, 0)
]
o.h.e.j.b.i.AbstractBatchImpl - HHH000010: On release of batch it still contained JDBC statements
o.h.e.j.b.i.BatchingBatch - HHH000315: Exception executing batch [
org.hibernate.StaleStateException:
Batch update returned unexpected row count from update [0];
actual row count: 0;
expected: 1;
statement executed:
PgPreparedStatement [
update post set title='Post no. 3 - 2nd edition', version=1 where id=3 and version=0
]
],
SQL: update post set title=?, version=? where id=? and version=?
So, you need to upgrade to Hibernate 5.4.1 or newer to benefit from this improvement.
This can happen when trigger(s) execute additional DML (data modification) queries which affect the row counts. My solution was to add the following at the top of my trigger:
SET NOCOUNT ON;
I was facing the same issue. The code was working in the testing environment, but not in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The problem is that in the staging DB the table didn't have any primary key constraint, so there were multiple entries.)
So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the resulting update count was 3. Since the expected update count and the actual update count didn't match, it throws the exception and rolls back.
After I removed all the records with duplicate primary keys and added the primary key constraint, it worked fine.
Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
actual row count: 0 // no record was found to update
update: 0 // no record found, so nothing was updated
expected: 1 // at least 1 record with the key was expected in the DB table
So the problem is that the query tries to update a record for some key, but Hibernate didn't find any record with that key.
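In code, a small sketch of that check (hypothetical names):

// get() returns null when no row exists, so we never issue an UPDATE
// that would come back with a row count of 0.
MyEntity entity = (MyEntity) session.get(MyEntity.class, key);
if (entity != null) {
    entity.setValue(newValue); // dirty-checked and flushed with row count 1
} else {
    // nothing to update: the record was deleted or never existed
}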
It also can happen when you try to UPDATE a PRIMARY KEY.
My two cents.
Problem: With Spring Boot 2.7.1, the H2 database version has changed to v2.1.214, which may result in a thrown OptimisticLockException when using generated UUIDs for id columns, see https://hibernate.atlassian.net/browse/HHH-15373.
Solution: Add columnDefinition="UUID" to the @Column annotation.
E.g., with a primary key definition for an entity like this:
@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(name = "UUID", strategy = "org.hibernate.id.UUIDGenerator")
@Column(name = COLUMN_UUID, updatable = false, nullable = false)
UUID uUID;
Change the column annotation to:
@Column(name = COLUMN_UUID, updatable = false, nullable = false, columnDefinition = "UUID")
As Julius says, this happens when an update occurs on an object whose children are being deleted. (Probably because the whole father object needed an update, and sometimes we prefer to delete the children and re-insert them on the father (new or old, it doesn't matter), along with any other updates the father could have on any of its other plain fields.)
So, in order for this to work, delete the children (within a transaction) by calling childrenList.clear() (don't loop through the children and delete each one with something like childDAO.delete(childrenList.get(i))), and set
@OneToMany(cascade = CascadeType.XXX, orphanRemoval = true) on the side of the father object. Then update the father (fatherDAO.update(father)). (Repeat for every father object.) The result is that the children have their link to their father stripped off, and they are then removed as orphans by the framework.
I encountered this problem where we had a one-to-many relationship.
In the Hibernate hbm mapping file for the master, on the object with the set-type arrangement, I added cascade="save-update" and it worked fine.
Without this, by default Hibernate tries to update the non-existent record instead of inserting it.
Another way to get this error is if you have a null item in a collection.
It happens when you try to delete an object and then you try to update the same object. Use this after delete:
session.clear();
I got the same problem, and I verified that it may occur because of an auto-increment primary key. To solve this problem, do not insert the auto-increment value with the data set; insert the data without the primary key.
This happened to me too, because I had my id as a Long and I was receiving the value 0 from the view; when I tried to save it to the database I got this error. I fixed it by setting the id to null.
This problem mainly occurs when we are trying to save or update an object which has already been fetched into memory by a running session.
If you've fetched an object from the session and you're trying to update it in the database, this exception may be thrown.
I used session.evict() to remove the cached copy from Hibernate first; or, if you don't want to risk losing data, use another object to store the data temporarily.
try {
    if (!session.isOpen()) {
        session = EmployeyDao.getSessionFactory().openSession();
    }
    tx = session.beginTransaction();

    session.evict(e);        // drop the stale cached copy first
    session.saveOrUpdate(e); // then reattach and save the entity
    tx.commit();

    EmployeyDao.shutDown(session);
} catch (HibernateException exc) {
    exc.printStackTrace();
    if (tx != null) {
        tx.rollback();
    }
}
I ran into this issue when I was manually beginning and committing transactions inside a method annotated with @Transactional. I fixed the problem by detecting whether an active transaction already existed.
//Detect underlying transaction
if (session.getTransaction() != null && session.getTransaction().isActive()) {
myTransaction = session.getTransaction();
preExistingTransaction = true;
} else {
myTransaction = session.beginTransaction();
}
Then I allowed Spring to handle committing the transaction.
private void finishTransaction() {
if (!preExistingTransaction) {
try {
tx.commit();
} catch (HibernateException he) {
if (tx != null) {
tx.rollback();
}
log.error(he);
} finally {
if (newSessionOpened) {
SessionFactoryUtils.closeSession(session);
newSessionOpened = false;
maxResults = 0;
}
}
}
}
This happens when you declare the JSF managed bean as
@RequestScoped
when you should declare it as
@SessionScoped
Regards
I got this error when I tried to update an object with an id that did not exist in the database. The reason for my mistake was that I had manually assigned a property named 'id' to the client-side JSON representation of the object, and then, when deserializing the object on the server side, this 'id' property would overwrite the instance variable (also called 'id') that Hibernate was supposed to generate. So be careful of naming collisions if you are using Hibernate to generate identifiers.
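A hypothetical sketch of that collision; the entity and field names are illustrative:

// Client sends: {"id": 123, "name": "widget"}
// Deserializing it straight into the entity overwrites the generated identifier:
@Entity
public class Item {

    @Id
    @GeneratedValue
    private Long id;     // clobbered by the JSON "id" property on the server

    private String name;
}
// Saving this instance makes Hibernate UPDATE row 123, which may not exist,
// producing the unexpected-row-count error instead of an INSERT.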
I also came across the same challenge. In my case I was updating an object which didn't even exist, using hibernateTemplate.
Actually, in my application I was fetching a DB object to update, and while updating its values I also updated its ID by mistake, then went ahead with the update and ran into this error.
I am using hibernateTemplate for CRUD operations.
After reading all the answers, I didn't find anyone talking about the inverse attribute of Hibernate.
In my opinion, you should also verify in your relationship mappings whether the inverse keyword is appropriately set. The inverse keyword defines which side is the owner that maintains the relationship, and the procedure for updating and inserting varies according to this attribute.
Let's suppose we have two tables:
principal_table, middle_table
with a one-to-many relationship. The Hibernate mapping classes are Principal and Middle respectively.
So the Principal class has a Set of Middle objects. The XML mapping file should be like the following:
<hibernate-mapping>
    <class name="path.to.class.Principal" table="principal_table" ...>
        ...
        <set name="middleObjects" table="middle_table" inverse="true" fetch="select">
            <key>
                <column name="PRINCIPAL_ID" not-null="true" />
            </key>
            <one-to-many class="path.to.class.Middle" />
        </set>
        ...
As inverse is set to "true", it means the Middle class is the relationship owner, so the Principal class will NOT update the relationship.
So the procedure for updating could be implemented like this:
session.beginTransaction();

Principal principal = new Principal();
principal.setSomething("1");
principal.setSomethingElse("2");

Middle middleObject = new Middle();
middleObject.setSomething("1");
middleObject.setPrincipal(principal);
principal.getMiddleObjects().add(middleObject);

session.saveOrUpdate(principal);
session.saveOrUpdate(middleObject); // NOTICE: you will need to save it manually

session.getTransaction().commit();
This worked for me, but you can suggest some edits in order to improve the solution. That way we will all be learning.
In our case we finally found the root cause of the StaleStateException.
In fact we were deleting the row twice in a single Hibernate session. Earlier we were using the ojdbc6 library, and this was OK in that version.
But when we upgraded to ojdbc7 or ojdbc8, deleting a record twice started throwing the exception. There was a bug in our code where we were deleting twice, but that was not evident with ojdbc6.
We were able to reproduce with this piece of code:
Detail detail = getDetail(Long.valueOf(1396451));
session.delete(detail);
session.flush();
session.delete(detail);
session.flush();
On the first flush, Hibernate goes and makes the changes in the database. During the second flush, Hibernate compares the session's object with the actual table's record, but cannot find one, hence the exception.
I solved it. I found that there was no primary key on my Id column in the table.
Once I created it, the problem was solved. There were also duplicate ids in the table from before, which I deleted, and that solved it.
This thread is a bit old; however, I thought I should drop my fix here in case it may help someone with the same root cause.
I was migrating a Java Spring Hibernate app from Oracle to Postgres. Along the way, I converted a trigger from Oracle to Postgres. The trigger fired "on before insert" of a table and set one of the column values (of course, the desired column was marked update=false insert=false in the Hibernate mapping to allow the trigger to set its value). When inserting data from the application I got this error: Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1.
My mistake was that I was writing "RETURN NULL" at the end of the trigger function, so when the trigger set the column value and control went back to Hibernate for saving, the record was lost because I was returning null.
My fix was to change "RETURN NULL" to "RETURN NEW" in the trigger; this keeps the record available after being altered by the trigger. Simply put, that is what "unexpected row count from update: 0; expected: 1" meant here.
This can happen if you change something in the data set using a native SQL query while a persisted object for the same data set is present in the session cache.
Use session.evict(yourObject);
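A minimal sketch of that situation, with table and variable names assumed for illustration:

// Change rows behind the session's back with native SQL...
session.createSQLQuery("UPDATE person SET status = :s WHERE id = :id")
       .setParameter("s", "LOCKED")
       .setParameter("id", personId)
       .executeUpdate();

// ...then evict the stale first-level-cache copy so a later saveOrUpdate
// does not flush outdated state against the changed row.
session.evict(cachedPerson);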
Hibernate caches objects in the session. If an object is accessed and modified by more than one user, org.hibernate.StaleStateException may be thrown. It may be solved by merging/refreshing the entity before saving, or by using a lock. More info: http://java-fp.blogspot.lt/2011/09/orghibernatestalestateexception-batch.html
One such case:
SessionFactory sf = new Configuration().configure().buildSessionFactory();
Session session = sf.openSession();

UserDetails user = new UserDetails();

session.beginTransaction();
user.setUserName("update user again");
user.setUserId(12); // fails if no row with id 12 exists
session.saveOrUpdate(user);
session.getTransaction().commit();

System.out.println("user::" + user.getUserName());
sf.close();
I was facing this exception even though Hibernate was working well. When I tried to insert one record manually using pgAdmin, the issue became clear: the SQL insert query returned 0 inserts. There was a trigger function causing this issue, because it returned null, so I only had to set it to return NEW.
That finally solved the problem.
Hope that helps anybody.
