How does EclipseLink manage table locks? - java

I need some approach or help to visualize a problem I'm having. I work on a Java 6 application with a MySQL database, EclipseLink 1.0.1, and GlassFish 2.1.
Sometimes some of the functionality has trouble accessing the database, and in the log I found this:
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.0.1 (Build 20080905)): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
Error Code: 1205
Call: UPDATE table_person SET sip_codigo_barra_bat_64 = ?, sip_version = ? WHERE ((sip_id = ?) AND (sip_version = ?))
bind => [null, 2, 89608, 1]
Searching another Stack Overflow post I found out that it could be a deadlock, and in my code there is a method that could be the problem, but I have some doubts about how EclipseLink manages locks.
1. If my Java method doesn't have any lock annotation, does EclipseLink lock the table anyway? Or does it depend on the connection pool, or something else?
2. If three threads execute the same Java method at the same time, and for the same table the method executes a SELECT, then an UPDATE, and then another UPDATE, could that generate a deadlock? How?
3. I think the problem I have is a deadlock in a method that does something like the following. It modifies two tables: table_person and table_payment (table_payment has a foreign key to table_person).
public void method(Integer id) {
    // Select the table_person row from the database
    Query personQuery = em.createNativeQuery(
            "SELECT * FROM table_person WHERE id = ?", Person.class);
    personQuery.setParameter(1, id);
    Person tablePersonElement = (Person) personQuery.getResultList().get(0);

    // Select the table_payment row from the database
    Query paymentQuery = em.createNativeQuery(
            "SELECT * FROM table_payment WHERE id_table_person = ?", Payment.class);
    paymentQuery.setParameter(1, id);
    List<Payment> payments = paymentQuery.getResultList();
    Payment tablePaymentElement = payments.isEmpty() ? null : payments.get(0);

    // If the table_payment row doesn't exist, it is created
    if (tablePaymentElement == null) {
        tablePaymentElement = new Payment();
        tablePaymentElement.setMoney(money);
        tablePaymentElement.setIdTablePerson(tablePersonElement);
        em.persist(tablePaymentElement);
        em.flush();
    } else {
        // Otherwise the existing row is updated
        tablePaymentElement.setMoney(money);
        tablePaymentElement.setIdTablePerson(tablePersonElement);
        em.merge(tablePaymentElement);
        em.flush();
    }

    // Finally it sets some values on tablePerson and saves it
    tablePersonElement.setValue(value);
    em.merge(tablePersonElement);
    em.flush();
}
Is it possible to generate a deadlock with this method? Or could it happen when another method updates table_person while this one is executing?
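To make the scenario in question 2 concrete, here is a hedged sketch (not from my actual code) of the classic lock-ordering interleaving that produces a deadlock, assuming InnoDB row locks that are held until commit:

// Two concurrent transactions touching the same two rows in opposite order:
//
//   T1: UPDATE table_payment ...   -- T1 now holds the payment row lock
//   T2: UPDATE table_person ...    -- T2 now holds the person row lock
//   T1: UPDATE table_person ...    -- T1 blocks, waiting for T2
//   T2: UPDATE table_payment ...   -- T2 blocks, waiting for T1: deadlock
//
// InnoDB detects the cycle and rolls one transaction back; a transaction that
// merely waits too long on a lock (without a cycle) instead fails with
// "Lock wait timeout exceeded" (error 1205), which matches the log above.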

Related

How to set lock timeout in postgres - Hibernate

I'm trying to set a lock on the row I'm working on, to be held until the next commit:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
What I thought should happen: if two threads try to write to the DB at the same time, one thread will reach the update operation before the other, and the second thread should wait 10 seconds and then throw a PessimisticLockException.
But instead the thread hangs until the other thread finishes, regardless of the timeout set.
Look at this example:
database.createTransaction(transaction -> {
    // Execute the first request to the db, and lock the table
    requestAndLock(transaction);
    // Open another transaction, and execute the second request
    // in a different transaction
    database.createTransaction(secondTransaction -> {
        requestAndLock(secondTransaction);
    });
    transaction.commit();
});
I expected that in the second request the transaction would wait for the configured timeout and then throw a PessimisticLockException, but instead it deadlocks forever.
Hibernate generates my request to the db this way :
SELECT value from Table where id=123 FOR UPDATE
In this answer I saw that Postgres allows only SELECT ... FOR UPDATE NOWAIT, which sets the timeout to 0, so it isn't possible to set a non-zero timeout that way.
Is there any other way that I can use with Hibernate / JPA?
Maybe this way is somehow recommended?
Hibernate supports a bunch of query hints. The one you're using sets the timeout for the query, not for the pessimistic lock. The query and the lock are independent of each other, and you need to use the hint shown below.
But before you do that, please be aware that Hibernate doesn't handle the timeout itself. It only sends it to the database, and it depends on the database if and how it applies it.
To set a timeout for the pessimistic lock, you need to use the javax.persistence.lock.timeout hint instead. Here's an example:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
I think you could try
SET LOCAL lock_timeout = '10s';
SELECT ....;
I doubt Hibernate supports this out of the box. You could try to find a way to extend it; I'm not sure it's worth it, because using locks on a Postgres database (which is MVCC) is arguably not the smartest option.
You could also use NOWAIT and delay-retry several times from your code.
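A minimal sketch of that delay-retry idea (hypothetical MyEntity and helper method; per the JPA spec, a javax.persistence.lock.timeout hint of 0 requests NO WAIT where the database supports it):

import java.util.Collections;
import javax.persistence.*;

// Hypothetical helper: try to grab a PESSIMISTIC_WRITE lock with NOWAIT,
// backing off between attempts instead of blocking on the row lock.
public MyEntity lockWithRetry(EntityManager em, Long id, int maxAttempts)
        throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return em.find(MyEntity.class, id, LockModeType.PESSIMISTIC_WRITE,
                    Collections.singletonMap("javax.persistence.lock.timeout", 0));
        } catch (LockTimeoutException | PessimisticLockException e) {
            Thread.sleep(500L * attempt); // linear backoff before the next try
        }
    }
    throw new PessimisticLockException("Could not lock row " + id);
}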
There is the lock_timeout parameter that does exactly what you want.
You can set it in postgresql.conf or with ALTER ROLE or ALTER DATABASE per user or per database.
The hint for the lock timeout for PostgreSQL doesn't work on PostgreSQL 9.6 (.setHint("javax.persistence.lock.timeout", 10000)).
The only solution I found is uncommenting the lock_timeout property in postgresql.conf:
lock_timeout = 10000 # in milliseconds, 0 is disabled
For anyone who's still looking for a Spring Data JPA solution, this is how I managed to do it.
First I created a function in Postgres:
CREATE FUNCTION function_name(some_var bigint)
RETURNS TABLE (id bigint, counter bigint, organisation_id bigint) -- here you list all the columns you want returned by the select statement
LANGUAGE plpgsql
AS
$$
BEGIN
    SET LOCAL lock_timeout = '5s';
    RETURN QUERY SELECT * FROM some_table WHERE some_table.id = some_var FOR UPDATE;
END;
$$;
Then, in the repository interface, I created a native query that calls the function. This will apply the lock timeout on that particular transaction:
@Transactional
@Query(value = """
        select * from function_name(:id);
        """, nativeQuery = true)
Optional<SomeTableEntity> findById(Long id);

Timeout on select query from external service after making an update on the same table

This is such a weird situation. I have a Microsoft SQL Server database, an app written in Java (queries run using javax.persistence.Query), and an external service written in C#.
I have a procedure that updates the table MyTable.
There are only two update statements in this procedure:
UPDATE MyTable
SET status = dbo.getStat(id)
WHERE EXISTS
(
    ...
)

UPDATE MyTable
SET status = 3
WHERE status = 2
AND EXISTS
(
    ...
)
The EXISTS part is the same in both statements.
In my Java app I use the following code to launch this procedure:
Query query = em.createNativeQuery(recources.getString("updateStatuses"));
query.setHint("toplink.refresh", "true");
query.setParameter(1, sessId);
query.executeUpdate();
Under updateStatuses I have the following SQL:
EXEC updateStatusesP ?
After that, I select records from this table that have status equal to 3:
SELECT id
FROM MyTable
WHERE status = 3
AND EXISTS
(
    ...
)
I iterate through the ids and make a call to the external service written in C#, which modifies data in the DB.
The thing is that currently I'm getting a timeout on a select query launched from this external service. It's a query selecting data from MyTable.
If I remove the first update statement from my updateStatusesP procedure, then it works fine (no timeout). If I modify the first query to include a "status = " condition in the WHERE clause, then it also works fine. But without it, I get this timeout.
There are a trigger and an index on this table. I removed them to check whether something was happening there. No changes.
If I make a direct call to this external service (via Postman), I get a correct response. Only when calling it from the application code (the procedure, the select query to fetch the ids, and then the call to the external service) do I get a response from the service with info about the timeout, and I can see in the DB monitoring tool that it didn't get past the first select query launched inside the service.
I don't understand what is happening. I tried putting those updates into a transaction and committing it at the end of the procedure, but it didn't help. Any ideas?

Hibernate select and insert strange behaviour

I'm developing a Spring + Hibernate application and everything is working pretty well. While writing a method I found some strange behaviour that I can't really explain, so I'll show you what I've got and maybe we'll find a solution.
This method retrieves a list of soccer players by parsing a web page, and I try to find whether I already have a player with the same name in the database. If I already have it, I set some parameters and update that object. If I have no player with that name, I want to insert it. I obviously can't use the saveOrUpdate method, as my parsed objects have no id, since I didn't retrieve them from the DB.
This is the code snippet that generates the error (it's in the service layer, declared as @Transactional):
List<Calciatore> calciatoriAggiornati = PopolaDbCalciatori.getListaCalciatori(imagesPath);
for (Calciatore calciatore : calciatoriAggiornati) {
    Calciatore current = calciatoreDao.getCalciatoreByNome(calciatore.getNome());
    if (current != null) {
        current.setAttivo(true);
        current.setRuolo(calciatore.getRuolo());
        current.setUrlFigurina(calciatore.getUrlFigurina());
        current.setSquadraReale(calciatore.getSquadraReale());
        calciatoreDao.update(current);
    } else {
        calciatore.setAttivo(true);
        calciatoreDao.insert(calciatore);
    }
}
return true;
The getCalciatoreByNome method is the following (it works if used alone):
public Calciatore getCalciatoreByNome(String nomeCalciatore) {
    List<Calciatore> calciatori = getSession().createCriteria(Calciatore.class)
            .add(Restrictions.eq("nome", nomeCalciatore)).list();
    return calciatori.size() == 0 ? null : calciatori.get(0);
}
The insert method, inherited from the class BaseDaoImpl, works when used standalone too, and is the following:
public Boolean insert(T obj) throws DataAccessException {
    getSession().save(obj);
    return true;
}
The result is strange: the first object of the list passes through getCalciatoreByNome without problems; as I have no instances in the database, the flow goes to the insert. After the first round of the for loop, this is the console:
Hibernate:
select
this_.kid as kid1_0_3_,
this_.attivo as attivo2_0_3_,
this_.dataDiNascita as dataDiNa3_0_3_,
this_.nome as nome4_0_3_,
this_.ruolo as ruolo5_0_3_,
this_.squadraCorrente_kid as squadraC9_0_3_,
this_.squadraReale as squadraR6_0_3_,
this_.urlFigurina as urlFigur7_0_3_,
this_.version as version8_0_3_,
squadrafan2_.kid as kid1_7_0_,
squadrafan2_.attiva as attiva2_7_0_,
squadrafan2_.nome as nome3_7_0_,
squadrafan2_.utenteAssociato_kid as utenteAs5_7_0_,
squadrafan2_.version as version4_7_0_,
utente3_.kid as kid1_10_1_,
utente3_.attivo as attivo2_10_1_,
utente3_.hashPwd as hashPwd3_10_1_,
utente3_.ruolo_kid as ruolo_ki6_10_1_,
utente3_.username as username4_10_1_,
utente3_.version as version5_10_1_,
ruolo4_.kid as kid1_5_2_,
ruolo4_.nome as nome2_5_2_,
ruolo4_.version as version3_5_2_
from
Calciatore this_
left outer join
SquadraFantacalcio squadrafan2_
on this_.squadraCorrente_kid=squadrafan2_.kid
left outer join
Utente utente3_
on squadrafan2_.utenteAssociato_kid=utente3_.kid
left outer join
Ruolo ruolo4_
on utente3_.ruolo_kid=ruolo4_.kid
where
this_.nome=?
Hibernate:
call next value for SEQ_CALCIATORE
As you can see, no exception is raised, but the behaviour is already compromised: no insert is actually executed! The last lines of the log show only the sequence generator!
On the second round of the for loop, as the flow approaches the getCalciatoreByNome method, this is the console log:
Hibernate:
insert
into
Calciatore
(attivo, dataDiNascita, nome, ruolo, squadraCorrente_kid, squadraReale, urlFigurina, version, kid)
values
(?, ?, ?, ?, ?, ?, ?, ?, ?)
24/06/2015 09:03:27 - INFO - (AbstractBatchImpl.java:208) - HHH000010: On release of batch it still contained JDBC statements
24/06/2015 09:03:27 - WARN - (SqlExceptionHelper.java:144) - SQL Error: -5563, SQLState: 42563
24/06/2015 09:03:27 - ERROR - (SqlExceptionHelper.java:146) - incompatible data type in operation
24/06/2015 09:03:39 - DEBUG - (AbstractPlatformTransactionManager.java:847) - Initiating transaction rollback
Wow, that's strange. As I execute the select method the second time, Hibernate tries to perform the insert, generating an error that I can't really find documented anywhere, and the rollback/exception flow begins.
I tried to debug as much as I could, but I can't really understand what's going on, as everything seems to work fine when I execute these operations standalone.
Any suggestion?
When you use AUTO flushing, the current pending changes are flushed when:
the transaction commits
a query is executed
When you issue the insert, Hibernate only adds an EntityInsertAction to the action queue; it delays the INSERT until flush time.
The reason you see the insert executed on the second iteration of the loop is that the select query triggers a flush.
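To visualise the ordering with the names from the question (a sketch of the behaviour, not a fix):

// insert() only queues an EntityInsertAction; the id may be fetched from the
// sequence right away, but the INSERT statement itself is deferred.
calciatoreDao.insert(calciatore);                 // no INSERT executed yet

// The next query has to see consistent data, so Hibernate flushes the pending
// insert before running the SELECT -- that is the INSERT you see on round two.
calciatoreDao.getCalciatoreByNome("Some Name");   // flush happens here, then SELECT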

The following simple query does not complete. Prepared Statement

I have the following query:
String updatequery = "UPDATE tbl_page SET linkCount = ?, pageProcessed = 1 WHERE pageUrl =?";
PreparedStatement updatestmt = kon.prepareStatement(updatequery);
updatestmt.clearParameters();
//updatestmt.setQueryTimeout(10);
updatestmt.setInt(1, linkCount);
updatestmt.setString(2, urlLink);
updatestmt.executeUpdate();
When I set the query timeout to 10 seconds, it catches an exception saying the query timed out; but when I don't, it just keeps waiting. What's wrong with the query? The pageUrl column is the primary key, a varchar(900).
I know something might be wrong with the prepared statement, because when I run this query in MS SQL Server Management Studio (with '?' replaced by its value) it works fine.
Am I missing something in Java or MSSQL?
Since the code looks just fine, this could be an issue on the database side. Maybe someone else has locked the row by updating it without doing a commit/rollback (quite possibly from your MS SQL Server Management Studio!). You could look for locks owned by other processes on the same record, so that you can rule out a database issue.
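For example, a hedged diagnostic using the poster's kon connection (sys.dm_tran_locks is a standard SQL Server view; the filter below assumes table-level lock resources; needs java.sql.Statement and java.sql.ResultSet):

// List sessions holding or waiting for locks on tbl_page (SQL Server).
String lockQuery =
        "SELECT request_session_id, request_mode, request_status " +
        "FROM sys.dm_tran_locks " +
        "WHERE resource_type = 'OBJECT' " +
        "AND resource_associated_entity_id = OBJECT_ID('tbl_page')";
try (Statement st = kon.createStatement();
     ResultSet rs = st.executeQuery(lockQuery)) {
    while (rs.next()) {
        System.out.println(rs.getInt(1) + " " + rs.getString(2) + " " + rs.getString(3));
    }
}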
Create an index on pageUrl:
create index tbl_page_pageUrl_index on tbl_page(pageUrl);
That will allow speedy access to the rows you want to update.
Without this index, the database must do a full table scan; combined with an update command, that is likely to lead to lock contention and possibly even deadlocks, depending on your locking options.

Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

I get the following Hibernate error. I am able to identify the function which causes the issue. Unfortunately there are several DB calls in the function. I am unable to find the line which causes the issue, since Hibernate flushes the session at the end of the transaction. The Hibernate error below looks like a generic error; it doesn't even mention which bean causes the issue. Is anyone familiar with this Hibernate error?
org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:500)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:473)
at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:267)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)
I got the same exception while deleting a record by an id that does not exist at all. So check that the record you are updating/deleting actually exists in the DB.
Without code and mappings for your transactions, it'll be next to impossible to investigate the problem.
However, to get a better handle on what causes the problem, try the following (see the sketch after this list):
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
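A minimal sketch of the first two suggestions (standard Hibernate property names; the log4j lines are one common way to raise the log levels, adjust to your logging setup):

// Programmatic Hibernate configuration:
properties.put("hibernate.show_sql", "true");    // print every SQL statement executed
properties.put("hibernate.format_sql", "true");  // pretty-print the SQL
// log4j.properties equivalent for the log levels:
// log4j.logger.org.hibernate=DEBUG
// log4j.logger.org.springframework=DEBUG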
Solution:
In the Hibernate mapping file, if you use any generator class for the id property, you should not set that property's value explicitly using a setter method.
If you set the value of the id property explicitly, it will lead to the error above. Check this to avoid the error.
or
The error also shows up when you declare the field in the mapping file with generator="native" or "incremental" while the mapped table in your database is not auto-incremented.
Solution: go to your database and update the table to set auto_increment.
In my case, I came to this exception in two similar situations:
In a method annotated with @Transactional, I had a call to another service (with long response times). The method updates some properties of the entity (after the method, the entity still exists in the database). If the user requests the method twice (because he thinks it didn't work the first time), when exiting the transactional method the second time, Hibernate tries to update an entity whose state already changed since the beginning of the transaction. As Hibernate looks for the entity in one state but finds it already changed by the first request, it throws an exception, as it can't update the entity. It's like a conflict in Git.
I had automatic requests (for monitoring the platform) which update an entity (with a manual rollback a few seconds later). But the platform was already being used by a test team. When a tester performed a test on the same entity as the automatic requests (within the same hundredth of a millisecond), I got the exception. As in the previous case, when exiting the second transaction, the entity previously fetched had already changed.
Conclusion: in my case, it wasn't a problem that could be found in the code. This exception is thrown when Hibernate finds that the entity first fetched from the database changed during the current transaction, so it can't flush it to the database, as Hibernate doesn't know which is the correct version of the entity: the one the current transaction fetched at the beginning, or the one already stored in the database.
Solution: to solve the problem, you will have to play with the Hibernate LockMode to find the one which best fits your requirements.
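For illustration, a hedged sketch of requesting one such lock through the JPA API (hypothetical entity and field; PESSIMISTIC_WRITE translates to SELECT ... FOR UPDATE on most databases):

// Lock the row for the remainder of the transaction, so no concurrent
// transaction can change it between this read and the flush.
MyEntity entity = entityManager.find(MyEntity.class, id, LockModeType.PESSIMISTIC_WRITE);
entity.setTitle("new title"); // safe until commit: no concurrent update can race us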
This happened to me once by accident when I was assigning specific IDs to some objects (for testing) and then trying to save them to the database. The problem was that there was a specific policy in the database for setting up the IDs of the objects. Just do not assign an ID if you have a policy at the Hibernate level.
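A minimal sketch of that pitfall (hypothetical Post entity with a database-generated id, similar to the one defined further down):

Post post = new Post();
post.setId(42L);             // wrong: a non-null id makes Hibernate treat the
                             // instance as detached and schedule an UPDATE
session.saveOrUpdate(post);  // UPDATE matches 0 rows -> StaleStateException at flush

Post fresh = new Post();     // correct: leave the id null so the configured
session.saveOrUpdate(fresh); // generator assigns it and an INSERT is issued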
I just encountered this problem and found out I was deleting a record and trying to update it afterwards in a Hibernate transaction.
Hibernate 5.4.1 and HHH-12878 issue
Prior to Hibernate 5.4.1, the optimistic locking failure exceptions (e.g., StaleStateException or OptimisticLockException) didn't include the failing statement.
The HHH-12878 issue was created to improve Hibernate so that when throwing an optimistic locking exception, the JDBC PreparedStatement implementation is logged as well:
if (expectedRowCount > rowCount) {
    throw new StaleStateException(
        "Batch update returned unexpected row count from update ["
        + batchPosition + "]; actual row count: " + rowCount
        + "; expected: " + expectedRowCount + "; statement executed: "
        + statement
    );
}
Testing Time
I created the BatchingOptimisticLockingTest in my High-Performance Java Persistence GitHub repository to demonstrate how the new behavior works.
First, we will define a Post entity that declares a @Version property, therefore enabling the implicit optimistic locking mechanism:
@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    private String title;

    @Version
    private short version;

    public Long getId() {
        return id;
    }

    public Post setId(Long id) {
        this.id = id;
        return this;
    }

    public String getTitle() {
        return title;
    }

    public Post setTitle(String title) {
        this.title = title;
        return this;
    }

    public short getVersion() {
        return version;
    }
}
We will enable the JDBC batching using the following 3 configuration properties:
properties.put("hibernate.jdbc.batch_size", "5");
properties.put("hibernate.order_inserts", "true");
properties.put("hibernate.order_updates", "true");
We are going to create 3 Post entities:
doInJPA(entityManager -> {
    for (int i = 1; i <= 3; i++) {
        entityManager.persist(
            new Post()
                .setTitle(String.format("Post no. %d", i))
        );
    }
});
And Hibernate will execute a JDBC batch insert:
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
Query: [
INSERT INTO post (title, version, id)
VALUES (?, ?, ?)
],
Params:[
(Post no. 1, 0, 1),
(Post no. 2, 0, 2),
(Post no. 3, 0, 3)
]
So, we know that JDBC batching works just fine.
Now, let's replicate the optimistic locking issue:
doInJPA(entityManager -> {
    List<Post> posts = entityManager.createQuery("""
        select p
        from Post p
        """, Post.class)
    .getResultList();

    posts.forEach(
        post -> post.setTitle(
            post.getTitle() + " - 2nd edition"
        )
    );

    executeSync(
        () -> doInJPA(_entityManager -> {
            Post post = _entityManager.createQuery("""
                select p
                from Post p
                order by p.id
                """, Post.class)
            .setMaxResults(1)
            .getSingleResult();

            post.setTitle(post.getTitle() + " - corrected");
        })
    );
});
The first transaction selects all Post entities and modifies their title properties.
However, before the first EntityManager is flushed, we are going to execute a second transaction using the executeSync method.
The second transaction modifies the first Post, so its version is going to be incremented:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - corrected', 1, 1, 0)
]
Now, when the first transaction tries to flush the EntityManager, we will get the OptimisticLockException:
Query:[
UPDATE
post
SET
title = ?,
version = ?
WHERE
id = ? AND
version = ?
],
Params:[
('Post no. 1 - 2nd edition', 1, 1, 0),
('Post no. 2 - 2nd edition', 1, 2, 0),
('Post no. 3 - 2nd edition', 1, 3, 0)
]
o.h.e.j.b.i.AbstractBatchImpl - HHH000010: On release of batch it still contained JDBC statements
o.h.e.j.b.i.BatchingBatch - HHH000315: Exception executing batch [
org.hibernate.StaleStateException:
Batch update returned unexpected row count from update [0];
actual row count: 0;
expected: 1;
statement executed:
PgPreparedStatement [
update post set title='Post no. 3 - 2nd edition', version=1 where id=3 and version=0
]
],
SQL: update post set title=?, version=? where id=? and version=?
So, you need to upgrade to Hibernate 5.4.1 or newer to benefit from this improvement.
This can happen when trigger(s) execute additional DML (data modification) queries which affect the row counts. My solution was to add the following at the top of my trigger:
SET NOCOUNT ON;
I was facing the same issue. The code was working in the testing environment, but it was not working in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The problem was that the staging DB table didn't have any primary key constraints, and there were multiple entries.)
So every update operation failed. It tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the actual update count was 3. Since the expected update count and the actual update count didn't match, it throws the exception and rolls back.
After I removed all the records with duplicate primary keys and added primary key constraints, it worked fine.
Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
actual row count: 0 // means no record was found to update
update: 0 // means no record was found, so nothing was updated
expected: 1 // means it expected at least 1 record with the key in the DB table
Here the problem is that the query tries to update a record for some key, but Hibernate didn't find any record with that key.
It can also happen when you try to UPDATE a PRIMARY KEY.
My two cents.
Problem: with Spring Boot 2.7.1 the H2 database version changed to v2.1.214, which may result in an OptimisticLockException being thrown when using generated UUIDs for id columns; see https://hibernate.atlassian.net/browse/HHH-15373.
Solution: add columnDefinition="UUID" to the @Column annotation.
E.g., with a primary key definition for an entity like this:
@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(name = "UUID", strategy = "org.hibernate.id.UUIDGenerator")
@Column(name = COLUMN_UUID, updatable = false, nullable = false)
UUID uUID;
Change the column annotation to:
@Column(name = COLUMN_UUID, updatable = false, nullable = false, columnDefinition = "UUID")
As Julius says, this happens when an update occurs on an object whose children are being deleted. (Probably because there was a need to update the whole parent object, and sometimes we prefer to delete the children and re-insert them on the parent (new or old, it doesn't matter), along with any other updates the parent could have on its other plain fields.)
So, in order for this to work, delete the children (within a transaction) by calling childrenList.clear() (don't loop through the children and delete each one with something like childDAO.delete(childrenList.get(i))), and set @OneToMany(cascade = CascadeType.XXX, orphanRemoval = true) on the side of the parent object. Then update the parent (fatherDAO.update(father)). (Repeat for every parent object.) The result is that the children have their link to their parent stripped off, and then they are removed as orphans by the framework.
I encountered this problem where we had a one-to-many relationship.
In the Hibernate hbm mapping file for the master, for the object with the set-type arrangement, I added cascade="save-update" and it worked fine.
Without this, by default Hibernate tries to update a non-existent record instead of inserting it.
Another way to get this error is if you have a null item in a collection.
It happens when you try to delete an object and then you try to update the same object. Use this after delete:
session.clear();
I got the same problem, and I verified that it may occur because of an auto-increment primary key. To solve this problem, do not insert the auto-increment value with the data set; insert the data without the primary key.
This happened to me too, because I had my id as a Long and I was receiving the value 0 from the view; when I tried to save to the database, I got this error. I then fixed it by setting the id to null.
This problem mainly occurs when we are trying to save or update an object which has already been fetched into memory by a running session.
If you've fetched an object from the session and you're trying to update it in the database, this exception may be thrown.
I used session.evict() to remove the cached copy from Hibernate first; or, if you don't want to risk losing data, you'd better use another object to store the data temporarily.
try {
    if (!session.isOpen()) {
        session = EmployeyDao.getSessionFactory().openSession();
    }
    tx = session.beginTransaction();

    session.evict(e);        // detach the stale cached copy first
    session.saveOrUpdate(e);
    tx.commit();

    EmployeyDao.shutDown(session);
} catch (HibernateException exc) {
    exc.printStackTrace();
    tx.rollback();
}
I ran into this issue when I was manually beginning and committing transactions inside a method annotated as @Transactional. I fixed the problem by detecting whether an active transaction already existed.
// Detect the underlying transaction
if (session.getTransaction() != null && session.getTransaction().isActive()) {
    myTransaction = session.getTransaction();
    preExistingTransaction = true;
} else {
    myTransaction = session.beginTransaction();
}
Then I allowed Spring to handle committing the transaction.
private void finishTransaction() {
    if (!preExistingTransaction) {
        try {
            tx.commit();
        } catch (HibernateException he) {
            if (tx != null) {
                tx.rollback();
            }
            log.error(he);
        } finally {
            if (newSessionOpened) {
                SessionFactoryUtils.closeSession(session);
                newSessionOpened = false;
                maxResults = 0;
            }
        }
    }
}
This happens when you declare the JSF managed bean as @RequestScoped when you should declare it as @SessionScoped.
Regards.
I got this error when I tried to update an object with an id that did not exist in the database. The reason for my mistake was that I had manually assigned a property named 'id' to the client-side JSON representation of the object, and then, when deserializing the object on the server side, this 'id' property would overwrite the instance variable (also called 'id') that Hibernate was supposed to generate. So be careful of naming collisions if you are using Hibernate to generate identifiers.
I also came across the same challenge. In my case I was updating an object which didn't even exist, using hibernateTemplate.
Actually, in my application I was fetching a DB object to update, and while updating its values I also updated its ID by mistake, then went ahead with the update and ran into the said error.
I am using hibernateTemplate for CRUD operations.
After reading all the answers, I didn't find anyone talking about the inverse attribute of Hibernate.
In my opinion you should also verify in your relationship mappings whether the inverse keyword is appropriately set. The inverse keyword defines which side is the owner responsible for maintaining the relationship, and the procedure for updating and inserting varies according to this attribute.
Let's suppose we have two tables:
principal_table, middle_table
with a one-to-many relationship. The Hibernate mapping classes are Principal and Middle respectively.
So the Principal class has a Set of Middle objects. The XML mapping file should be like the following:
<hibernate-mapping>
    <class name="path.to.class.Principal" table="principal_table" ...>
        ...
        <set name="middleObjects" table="middle_table" inverse="true" fetch="select">
            <key>
                <column name="PRINCIPAL_ID" not-null="true" />
            </key>
            <one-to-many class="path.to.class.Middle" />
        </set>
        ...
As inverse is set to "true", it means the Middle class is the relationship owner, so the Principal class will NOT update the relationship.
So the procedure for updating could be implemented like this:
session.beginTransaction();

Principal principal = new Principal();
principal.setSomething("1");
principal.setSomethingElse("2");

Middle middleObject = new Middle();
middleObject.setSomething("1");
middleObject.setPrincipal(principal);
principal.getMiddleObjects().add(middleObject);

session.saveOrUpdate(principal);
session.saveOrUpdate(middleObject); // NOTICE: you will need to save it manually

session.getTransaction().commit();
This worked for me, but you can suggest some edits in order to improve the solution. That way we'll all be learning.
In our case we finally found the root cause of the StaleStateException.
In fact, we were deleting a row twice in a single Hibernate session. Earlier we were using the ojdbc6 library, and this was OK in that version.
But when we upgraded to ojdbc7 or ojdbc8, deleting records twice threw the exception. There was a bug in our code where we deleted twice, but that was not evident with ojdbc6.
We were able to reproduce with this piece of code:
Detail detail = getDetail(Long.valueOf(1396451));
session.delete(detail);
session.flush();
session.delete(detail);
session.flush();
On the first flush, Hibernate goes and makes the change in the database. During the second flush, Hibernate compares the session's object with the actual table record, but cannot find one, hence the exception.
I solved it: I found that there was no primary key on my Id column in the table. Once I created it, the problem was solved. Also, there were duplicate ids in the table, which I deleted, and that solved it.
This thread is a bit old; however, I thought I should drop my fix here in case it may help someone with the same root cause.
I was migrating a Java Spring Hibernate app from Oracle to Postgres. Along the way I converted a trigger from Oracle to Postgres. The trigger was an "on before insert" trigger on a table and was setting one of the column values (of course, the desired column was marked update=false insert=false in the Hibernate mapping, to allow the trigger to set its value). When inserting data from the application I got this error: Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1.
My mistake was that I was putting "RETURN NULL" at the end of the trigger function, so when the trigger set the column value and control came back to Hibernate for saving, the record was lost, as I was returning null.
My fix was to change "RETURN NULL" to "RETURN NEW" in the trigger; this keeps the record available after being altered by the trigger. Simply put, that is what "unexpected row count from update: 0; expected: 1" meant here.
This happens if you change something in the data set using a native SQL query while a persistent object for the same data set is present in the session cache.
Use session.evict(yourObject);
Hibernate caches objects from the session. If an object is accessed and modified by more than one user, an org.hibernate.StaleStateException may be thrown. It may be solved by merging/refreshing the entity before saving, or by using a lock. More info: http://java-fp.blogspot.lt/2011/09/orghibernatestalestateexception-batch.html
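A hedged two-line sketch of the refresh option (hypothetical entity variable and setter):

// Discard the stale in-memory snapshot by re-reading the current row state,
// then apply the new changes and save.
session.refresh(entity);
entity.setStatus(newStatus);
session.saveOrUpdate(entity);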
One such case:
SessionFactory sf = new Configuration().configure().buildSessionFactory();
Session session = sf.openSession();

UserDetails user = new UserDetails();

session.beginTransaction();
user.setUserName("update user agian");
user.setUserId(12);                 // manually set id: saveOrUpdate issues an
                                    // UPDATE, which fails if no row with id 12 exists
session.saveOrUpdate(user);
session.getTransaction().commit();

System.out.println("user::" + user.getUserName());
sf.close();
I was facing this exception even though Hibernate was working well. I tried to insert one record manually using pgAdmin, and there the issue became clear: the SQL insert query was returning 0 inserts. There is a trigger function that caused this issue, because it returned null. So I only had to set it to return new, and the problem was solved.
Hope that helps anybody.
