Why does Hibernate throw org.hibernate.exception.LockAcquisitionException?

I have this method:
void mymethod(long id) {
    Person p = DAO.findPerson(id);
    Car car = new Car();
    car.setPerson(p);
    p.getCars().add(car);
    DAO.saveOrUpdate(car);
    DAO.saveOrUpdate(p);
    DAO.delete(p.getCars().get(0)); // a person has many cars
}
Mapping:
Person.hbm.xml
<!-- one-to-many : [1,1] -> [0,n] -->
<set name="car" table="cars" lazy="true" inverse="true">
    <key column="id_doc" />
    <one-to-many class="Car" />
</set>
<many-to-one name="officialCar"
    class="Car"
    column="officialcar_id" lazy="false"/>
Cars.hbm.xml
<many-to-one name="person" class="Person"
    column="id_person" not-null="true" lazy="false"/>
This method works fine with a single thread, but with multiple threads it gives me this error:
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - WARN - org.hibernate.util.JDBCExceptionReporter - SQL Error: 60, SQLState: 61000
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.util.JDBCExceptionReporter - ORA-00060: deadlock detected while waiting for resource

02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - WARN - org.hibernate.util.JDBCExceptionReporter - SQL Error: 60, SQLState: 61000
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.util.JDBCExceptionReporter - ORA-00060: deadlock detected while waiting for resource

02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.event.def.AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.LockAcquisitionException: Could not execute JDBC batch update
AOP transaction configuration:
<tx:advice id="txAdviceNomService" transaction-manager="txManager">
    <tx:attributes>
        <tx:method name="*" propagation="REQUIRED" rollback-for="java.lang.Exception" />
        <tx:method name="getAll*" read-only="true" propagation="SUPPORTS" />
        <tx:method name="find*" read-only="true" propagation="SUPPORTS" />
    </tx:attributes>
</tx:advice>
NB: When I add Thread.sleep(5000) after the update, it works, but that solution is not clean.

According to your mapping, the sequence of operations should look like this:
Person p = DAO.findPerson(id);
Car car = new Car();
car.setPerson(p);
DAO.saveOrUpdate(car);
p.getCars().add(car);

Car firstCar = p.getCars().get(0);
firstCar.setPerson(null);
p.getCars().remove(firstCar);
if (firstCar.equals(p.officialCar)) {
    p.officialCar = null; // clear the reference before deleting the car
}
DAO.delete(firstCar);
An update or a delete means acquiring an exclusive row lock, even at the READ_COMMITTED isolation level.
If another transaction wants to update the same row that the currently running transaction has already locked, you won't get a deadlock but a lock acquisition timeout exception.
Since you got a deadlock, it means you are acquiring locks on multiple tables and the lock acquisitions are not properly ordered.
So make sure the service-layer methods set the transaction boundaries, not the DAO methods. I see you declared the get and find methods to use SUPPORTS, meaning they will only run in a transaction if one is already started. I think you should use REQUIRED for those as well, and simply mark them as read-only="true".
So make sure the transaction aspect applies the transaction boundary to "mymethod" and not to the DAO methods.
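A minimal sketch of that advice, assuming the same txManager bean and a pointcut that targets the service layer rather than the DAOs:
<tx:advice id="txAdviceNomService" transaction-manager="txManager">
    <tx:attributes>
        <tx:method name="getAll*" read-only="true" propagation="REQUIRED" />
        <tx:method name="find*" read-only="true" propagation="REQUIRED" />
        <tx:method name="*" propagation="REQUIRED" rollback-for="java.lang.Exception" />
    </tx:attributes>
</tx:advice>
<!-- the matching <aop:advisor> pointcut should cover the service method (mymethod), not the DAOs -->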

I have Cars -> (1-n) Places, and a foreign key in the place table (id_car). This foreign key did not have an index.
When I added an index on this foreign key, my problem was resolved.
Refer to This Answer
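For reference, a sketch of that fix, assuming the table and column names above (the index name is illustrative):
-- In Oracle, an unindexed foreign key forces table-level locks on the child
-- table when parent rows are updated or deleted, which can lead to ORA-00060
CREATE INDEX idx_place_id_car ON place (id_car);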

Related

SQLite in-memory database encounters SQLITE_LOCKED_SHAREDCACHE intermittently

I am using MyBatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration to use an in-memory database with shared-cache mode enabled:
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class.
And I managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the property read_uncommitted according to this issue (which was reported by me as well):
<environment id="${db.env}">
    <transactionManager type="jdbc"/>
    <dataSource type="POOLED">
        <property name="driver" value="${db.driver}" />
        <property name="url" value="${db.url}"/>
        <property name="username" value="${db.username}" />
        <property name="password" value="${db.password}" />
        <property name="defaultTransactionIsolationLevel" value="1" />
        <property name="driver.synchronous" value="OFF" />
        <property name="driver.transaction_mode" value="IMMEDIATE"/>
        <property name="driver.foreign_keys" value="ON"/>
    </dataSource>
</environment>
This line of configuration:
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick to set the correct value of PRAGMA read_uncommitted (1 is the value of java.sql.Connection.TRANSACTION_READ_UNCOMMITTED).
I am pretty sure of this, since I debugged the underlying code that initializes the connection and checked that the value had been set correctly.
However, with the above settings my program still encounters SQLITE_LOCKED_SHAREDCACHE intermittently while reading, which I think shouldn't happen according to the description highlighted in the red rectangle of the screenshot below. I want to know the reason and how to resolve it, even though the probability of this error occurring is low.
Any ideas would be appreciated!
The debug configuration is below:
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
The exception is below:
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue by myself and share the workaround below in case someone else encounters a similar issue in the future.
First of all, we're able to get the complete call stack of the exception, shown below.
Going through the source code indicated by the call stack, we have the following findings:
SQLite has auto-commit enabled by default, which contradicts MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization using the method setDesiredAutoCommit, which finally invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit incurs a begin immediate operation against the database, which is actually exclusive, since we configured our transaction mode to be IMMEDIATE:
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. The alternative solution of making the auto-commit setting the same between MyBatis and SQLite was considered as well, but it was not adopted: there is no way to set the auto-commit of SQLiteConnection during the initialization stage, so there would always be a switch (from true to false or vice versa), and that switch probably causes the above error if the transaction mode is not set properly.
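A minimal sketch of the change, using the same dataSource block as above:
<!-- DEFERRED defers lock acquisition until the first statement that needs it,
     so the implicit BEGIN issued by setAutoCommit(false) no longer contends for the shared cache -->
<property name="driver.transaction_mode" value="DEFERRED"/>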

PSQLException on Heroku

I use Java, Apache Cayenne, and PostgreSQL.
My app works fine on desktop, but I get an error when I run it on Heroku:
org.postgresql.util.PSQLException: Bad value for type timestamp/date/time:
{1}
There are also warnings:
INFO org.apache.cayenne.log.JdbcEventLogger - --- transaction started.
WARN org.apache.cayenne.access.types.SerializableTypeFactory - Haven't found suitable ExtendedType for class 'java.time.LocalDate'. Most likely you need to define a custom ExtendedType.
WARN org.apache.cayenne.access.types.SerializableTypeFactory - SerializableType will be used for type conversion.
INFO org.apache.cayenne.log.JdbcEventLogger - --- transaction started.
INFO org.apache.cayenne.log.JdbcEventLogger - SELECT t0.DATE, t0.ROOM, t0.TIME, t0.TYPE, t0.PROFESSOR_ID, t0.SUBJECT_ID, t0.LESSON_ID FROM Lesson t0 JOIN Subject t1 ON (t0.SUBJECT_ID = t1.SUBJECT_ID) WHERE (t0.DATE = ?) AND (t1.USER_ID = ?) [bind: 1->DATE:2017-12-08, 2->USER_ID:81627965]
Here is my XML:
<db-entity name="Lesson">
    <db-attribute name="DATE" type="DATE"/>
    <db-attribute name="LESSON_ID" type="INTEGER" isPrimaryKey="true" isMandatory="true"/>
    <db-attribute name="PROFESSOR_ID" type="INTEGER"/>
    <db-attribute name="ROOM" type="VARCHAR" length="50"/>
    <db-attribute name="SUBJECT_ID" type="INTEGER"/>
    <db-attribute name="TIME" type="TIME"/>
    <db-attribute name="TYPE" type="INTEGER"/>
</db-entity>
<obj-entity name="Lesson" className="com.intetics.organizerbot.entities.Lesson" dbEntityName="Lesson">
    <obj-attribute name="date" type="java.time.LocalDate" db-attribute-path="DATE"/>
    <obj-attribute name="room" type="java.lang.String" db-attribute-path="ROOM"/>
    <obj-attribute name="time" type="java.time.LocalTime" db-attribute-path="TIME"/>
    <obj-attribute name="type" type="int" db-attribute-path="TYPE"/>
</obj-entity>
I work with the same Heroku Postgres database from both the desktop and Heroku.
It seems there is some problem connected with the LocalDate class, but I have no idea why everything works fine on my computer while there are problems on Heroku.
I also tried deploying a jar that had worked fine locally, and it still doesn't work.
Do you have any idea why this happens and how I can fix it?
A similar question was asked about a production server (Bad value for type timestamp on production server), but it doesn't seem I can apply its answers on Heroku.
In order to use the Java 8 java.time.* classes with Cayenne you need to make sure you include the cayenne-java8 module in your project; see the docs for details. Without it, Cayenne simply doesn't know how to handle those classes.
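For example, a sketch of the Maven dependency (the version shown is illustrative; match it to your Cayenne version):
<dependency>
    <groupId>org.apache.cayenne</groupId>
    <artifactId>cayenne-java8</artifactId>
    <!-- illustrative; use the version that matches your cayenne-server artifact -->
    <version>4.0.M5</version>
</dependency>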

How to flush each transaction to DB using Entity Manager

I fetch records from a .csv file using Smooks. The file size is around 30MB, and around 3 lakh (300,000) records from the single file are read into a List.
Next, the List is divided into subparts using subList, with a partition size of 2000.
I want to flush 2000 records per transaction, but my code does not allow that.
I am using Seam 2.1.2, JPA with Hibernate, EntityManager, and JTA transactions.
components.xml
<core:init debug="false" jndi-pattern="#jndiPattern#" />
<core:manager concurrent-request-timeout="2000"
    conversation-id-parameter="cid" conversation-timeout="120000"
    parent-conversation-id-parameter="pid" />
<web:hot-deploy-filter url-pattern="/*.mobee" />
<persistence:entity-manager-factory
    installed="#seamBootstrapsPu#" name="entityManagerFactory"
    persistence-unit-name="mobeeadmin" />
<persistence:managed-persistence-context
    auto-create="true" entity-manager-factory="#seamEmfRef#" name="entityManager"
    persistence-unit-jndi-name="#puJndiName#" />
<async:quartz-dispatcher />
<security:identity authenticate-method="#{authenticator.authenticate}" />
<web:rewrite-filter view-mapping="*.mobee" />
<web:multipart-filter create-temp-files="true" max-request-size="28672000" url-pattern="*.seam"/>
<event type="org.jboss.seam.security.notLoggedIn">
    <action execute="#{redirect.captureCurrentView}" />
</event>
<event type="org.jboss.seam.security.loginSuccessful">
    <action execute="#{redirect.returnToCapturedView}" />
</event>
<mail:mail-session host="localhost" port="25" />
Java code:
private List<DoTempCustomers> doTempCustomers;

int partitionSize = 2000;
for (int i = 0; i < doTempCustomers.size(); i += partitionSize) {
    String message = tempCustomerMigration(doTempCustomers.subList(i,
            i + Math.min(partitionSize, doTempCustomers.size() - i)));
}

@Begin(join = true)
public String tempCustomerMigration(List<DoTempCustomers> list) {
    PersistenceProvider.instance().setFlushModeManual(getEntityManager());
    TempCustomers temp = null;
    for (DoTempCustomers tempCustomers : list) {
        try {
            temp = new TempCustomers();
            BeanUtils.copyProperties(temp, tempCustomers);
            getEntityManager().persist(temp);
            getEntityManager().flush();
        } catch (Exception e) {
            // ...
        }
    }
    // ...
}
I have tried many times but never found a solution for how to flush the records to the DB in each transaction before sending the server response to the GUI.
Otherwise, after some processing time, the exception I get is:
2012-12-06 17:09:56,380 WARN [com.arjuna.ats.arjuna.logging.arjLoggerI18N] [com.arjuna.ats.arjuna.coordinator.BasicAction_58] - Abort of action id -53eff40e:f2db:50c0a356:7d invoked while multiple threads active within it.
2012-12-06 17:09:56,380 WARN [com.arjuna.ats.arjuna.logging.arjLoggerI18N] [com.arjuna.ats.arjuna.coordinator.CheckedAction_2] - CheckedAction::check - atomic action -53eff40e:f2db:50c0a356:7d aborting with 1 threads active!
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.parentTraceEnabled=true
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.nestedTraceEnabled=false
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.detectDuplicateNesting=true
2012-12-06 17:09:56,524 INFO [STDOUT] [Mobee]- WARN 2012-12-06 17:09:56,524 [] JDBCExceptionReporter - SQL Error: 0, SQLState: null
2012-12-06 17:09:56,525 INFO [STDOUT] [Mobee]-ERROR 2012-12-06 17:09:56,524 [] JDBCExceptionReporter - Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eff40e:f2db:50c0a356:7d status: ActionStatus.ABORTING >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eff40e:f2db:50c0a356:7d status: ActionStatus.ABORTING >)
2012-12-06 17:09:56,545 INFO [STDOUT] [Mobee]-ERROR 2012-12-06 17:09:56,527 [] AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
For this exception, the solution I found on Google was to increase the TransactionTimeout in jboss-service.xml, but for me even increasing the timeout parameter did not help.
You could create a new Seam component that is an EJB session bean and use a UserTransaction to perform your updates/inserts in batches; UserTransaction also lets you specify the transaction timeout. You would inject this new component into the component you're using above. Here is an example (see the 5th post). Otherwise, if you don't want to use an EJB, I think your approach would require a nested Seam conversation, since it appears you are using a Seam-managed persistence context that is scoped to a single conversation.
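A minimal sketch of that bean-managed-transaction approach (all class, component, and method names are illustrative, as is the timeout value):
import java.util.List;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;
import org.apache.commons.beanutils.BeanUtils;
import org.jboss.seam.annotations.Name;

@Stateless
@Name("batchMigrator")
@TransactionManagement(TransactionManagementType.BEAN) // we drive the JTA transaction ourselves
public class BatchMigrator {

    @PersistenceContext
    private EntityManager entityManager;

    @Resource
    private UserTransaction userTransaction;

    public void migrateInBatches(List<DoTempCustomers> all, int batchSize) throws Exception {
        userTransaction.setTransactionTimeout(120); // seconds per batch; illustrative
        for (int i = 0; i < all.size(); i += batchSize) {
            List<DoTempCustomers> batch = all.subList(i, Math.min(i + batchSize, all.size()));
            userTransaction.begin();
            try {
                for (DoTempCustomers source : batch) {
                    TempCustomers target = new TempCustomers();
                    BeanUtils.copyProperties(target, source);
                    entityManager.persist(target);
                }
                entityManager.flush();
                entityManager.clear(); // detach the batch so the persistence context stays small
                userTransaction.commit(); // each batch is durable before the next begins
            } catch (Exception e) {
                userTransaction.rollback();
                throw e;
            }
        }
    }
}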

PostgreSQL + Hibernate + C3P0 = FATAL: sorry, too many clients already

I have the following code:
Configuration config = new Configuration().configure();
config.buildMappings();
serviceRegistry = new ServiceRegistryBuilder().applySettings(config.getProperties()).buildServiceRegistry();
SessionFactory factory = config.buildSessionFactory(serviceRegistry);
Session hibernateSession = factory.openSession();
Transaction tx = hibernateSession.beginTransaction();
ObjectType ot = (ObjectType)hibernateSession.merge(someObj);
tx.commit();
return ot;
hibernate.cfg.xml contains:
<session-factory>
    <property name="connection.url">jdbc:postgresql://127.0.0.1:5432/dbase</property>
    <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
    <property name="connection.driver_class">org.postgresql.Driver</property>
    <property name="connection.username">username</property>
    <property name="connection.password">password</property>
    <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
    <property name="hibernate.c3p0.acquire_increment">1</property>
    <property name="hibernate.c3p0.min_size">5</property>
    <property name="hibernate.c3p0.max_size">20</property>
    <property name="hibernate.c3p0.max_statements">50</property>
    <property name="hibernate.c3p0.timeout">300</property>
    <property name="hibernate.c3p0.idle_test_period">3000</property>
    <property name="hibernate.c3p0.acquireRetryAttempts">1</property>
    <property name="hibernate.c3p0.acquireRetryDelay">250</property>
    <property name="hibernate.show_sql">true</property>
    <property name="hibernate.use_sql_comments">true</property>
    <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
    <property name="hibernate.current_session_context_class">thread</property>
    <mapping class="...." />
</session-factory>
After a few seconds and some successful inserts, the following exception appears:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
12:24:19.151 [ Thread-160] WARN internal.JdbcServicesImpl - HHH000342: Could not obtain connection to query metadata : Connections could not be acquired from the underlying database!
12:24:19.151 [ Thread-160] INFO dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
12:24:19.151 [ Thread-160] INFO internal.LobCreatorBuilder - HHH000422: Disabling contextual LOB creation as connection was null
12:24:19.151 [ Thread-160] INFO internal.TransactionFactoryInitiator - HHH000268: Transaction strategy: org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
12:24:19.151 [ Thread-160] INFO ast.ASTQueryTranslatorFactory - HHH000397: Using ASTQueryTranslatorFactory
12:24:19.151 [ Thread-160] INFO hbm2ddl.SchemaUpdate - HHH000228: Running hbm2ddl schema update
12:24:19.151 [ Thread-160] INFO hbm2ddl.SchemaUpdate - HHH000102: Fetching database metadata
12:24:19.211 [Runner$PoolThread-#0] WARN resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#ee4084 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (1). Last acquisition attempt exception:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
It seems that Hibernate doesn't release the connections. But calling hibernateSession.close() causes a "Session is closed" exception, because tx.commit() has already been called.
I'm not quite sure what's going on here, but I'd recommend you not set hibernate.c3p0.acquireRetryAttempts to 1. First, that renders your next setting, hibernate.c3p0.acquireRetryDelay, irrelevant -- that sets the length of time between retry attempts, but if there is only one attempt (ok, the param name is misleading, it sets the total number of tries), there are no retries. The effect of your settings is simply to have the pool try to fetch a Connection whenever a client comes in, then throw an Exception to clients immediately if that fails. It doesn't at all limit the number of Connections the pool will try to acquire (unless you set breakOnAcquireFailure to true, in which case, with your settings, any failure to acquire a Connection would invalidate the whole pool).
I share sola's concern about your lack of reliable resource cleanup. If, under your settings, commit() means close() (and you are not allowed to call close explicitly? that seems bad), then it is commit that should be in the finally block (but commit in a finally block also seems bad, sometimes you don't want to commit). Whatever the issue with close/commit, with the code you have, occasional Exceptions between openSession and commit will lead to Connection leaks.
But that should not be the cause of your too-many-open-Connections problem. If you leak Connections, you'll find that the Connection pool eventually freezes (as maxPoolSize Connections are checked out forever due to the leaks). You'd only have 20 open Connections (your max_size). Something else is going on. Try reviewing your logs. Is more than one Connection pool somehow being initialized? (c3p0 dumps config information at INFO level on pool init, so if multiple pools are getting opened, you should see multiple messages. Alternatively, you can inspect running c3p0 pools via JMX to see whether/why more than 20 Connections have been opened.)
Good luck!
I found the cause of why c3p0 behaved this way. The issue was quite trivial...
This part of code:
Configuration config = new Configuration().configure();
config.buildMappings();
serviceRegistry = new ServiceRegistryBuilder().applySettings(config.getProperties()).buildServiceRegistry();
SessionFactory factory = config.buildSessionFactory(serviceRegistry);
was executed multiple times. Thank you Steve for the tip.
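A minimal sketch of the fix, building the SessionFactory exactly once and reusing it (the holder class name is illustrative; the API calls are the same Hibernate 4.x ones as above):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.service.ServiceRegistryBuilder;

public final class HibernateUtil {

    // built once, when the class is first loaded
    private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

    private static SessionFactory buildSessionFactory() {
        Configuration config = new Configuration().configure();
        config.buildMappings();
        ServiceRegistry serviceRegistry = new ServiceRegistryBuilder()
                .applySettings(config.getProperties()).buildServiceRegistry();
        return config.buildSessionFactory(serviceRegistry);
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }

    private HibernateUtil() {
    }
}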
I suggest you use a try-catch-finally block and close the session in the finally block, i.e.:
try {
    tx.commit();
} catch (HibernateException e) {
    handleException(e);
} finally {
    hibernateSession.close();
}
Also, the max_connections property in postgresql.conf is 100 by default. Increase it if you need to.
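For instance (the value is illustrative; PostgreSQL must be restarted for it to take effect):
# postgresql.conf
max_connections = 200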

Resolving MySQL Deadlock issue

Service: We have a service that receives a Request and just saves it in the database. The following is the hbm file for it.
<hibernate-mapping package="com.project.dao">
    <class name="Request" table="requests">
        <id name="requestId" type="long" column="req_id">
            <generator class="native" />
        </id>
        <discriminator column="req_type" type="string" force="true"/>
        <property name="status" type="string" column="status"/>
        <property name="processId" type="string" column="process_id"/>
        <subclass name="RequestType1" discriminator-value="Type1">
        ..
        </subclass>
        <subclass name="RequestType2" discriminator-value="Type2">
        ..
        </subclass>
    </class>
</hibernate-mapping>
The code that obtains the session and saves the Request is as follows.
try {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = session.beginTransaction();
    requestDAO.save(request);
    tx.commit();
} catch (Exception e) {
    log.error(e);
}
Client: There are two hosts on the client side that read these received and unprocessed requests. Each client (Host1, Host2) does the following:
Update the process_id of unprocessed requests to its hostname:
update requests set process_id='" + hostName + "' where status='Received' and process_id is null order by req_id asc limit 100"
Retrieve the requests updated above and process them:
select * from requests where process_id='" + hostName + "' and status='Received';
Now the problem is that these clients work fine for some time, but then they start throwing the following exception:
org.hibernate.exception.GenericJDBCException: could not execute native bulk manipulation query at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126)
Caused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
The clients start working normally again if we restart them.
On the service side, we also sometimes see the following exception:
org.hibernate.AssertionFailure: null id don't flush the Session after an exception occurs
We have been trying to resolve this issue but are not able to figure out the root cause. Any insight into the probable cause would be helpful.
Thanks
