In my web application I'm using stateless sessions with Hibernate to get better performance on my inserts and updates.
It was working fine with the H2 database (the one used by the Play Framework in dev mode).
But when I test it with MySQL, I get the following exception:
ERROR ~ Lock wait timeout exceeded; try restarting transaction
ERROR ~ HHH000315: Exception executing batch [Lock wait timeout exceeded; try restarting transaction]
Here is the code:
public static void update() {
    Session session = (Session) JPA.em().getDelegate();
    StatelessSession stateless = session.getSessionFactory().openStatelessSession();
    try {
        stateless.beginTransaction();
        // Fetch all products
        {
            List<ProductType> list = ProductType.retrieveAllWithHistory();
            for (ProductType pt : list) {
                updatePrice(pt, stateless);
            }
        }
        // Fetch all raw materials
        {
            List<RawMaterialType> list = RawMaterialType.retrieveAllWithHistory();
            for (RawMaterialType rm : list) {
                updatePrice(rm, stateless);
            }
        }
    } catch (Exception ex) {
        play.Logger.error(ex.getMessage());
        ExceptionLog.log(ex, Thread.currentThread());
    } finally {
        stateless.getTransaction().commit();
        stateless.close();
    }
}
private static void updatePrice(ProductType pt, StatelessSession stateless) {
    pt.priceDelta = computeDelta();
    pt.unitPrice = computePrice();
    stateless.update(pt);
    PriceHistory ph = new PriceHistory(pt, price);
    stateless.insert(ph);
}

private static void updatePrice(RawMaterialType rm, StatelessSession stateless) {
    rm.priceDelta = computeDelta();
    rm.unitPrice = computePrice();
    stateless.update(rm);
    PriceHistory ph = new GoodPriceHistory(rm, price);
    stateless.insert(ph);
}
In this example I have 3 simple entities (ProductType, RawMaterialType and PriceHistory).
computeDelta and computePrice are just algorithmic functions with no DB access.
The retrieveAllWithHistory functions fetch some data from the database using Play Framework model functions.
So this code retrieves some data, edits some of it, creates new records and finally saves everything.
Why do I get a lock exception with MySQL and no exception with H2?
I'm not sure why you have a commit in a finally block. Give this structure a try:
try {
    factory.getCurrentSession().beginTransaction();
    factory.getCurrentSession().getTransaction().commit();
} catch (RuntimeException e) {
    factory.getCurrentSession().getTransaction().rollback();
    throw e; // or display error message
}
Also, it might be helpful for you to check this documentation.
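Applied to the update() method from the question, a possible restructuring could look like this (just a sketch, reusing the StatelessSession and entities from the question: the commit moves into the try block, a rollback goes into the catch block, and the finally block only closes the session):
public static void update() {
    Session session = (Session) JPA.em().getDelegate();
    StatelessSession stateless = session.getSessionFactory().openStatelessSession();
    try {
        stateless.beginTransaction();
        for (ProductType pt : ProductType.retrieveAllWithHistory()) {
            updatePrice(pt, stateless);
        }
        for (RawMaterialType rm : RawMaterialType.retrieveAllWithHistory()) {
            updatePrice(rm, stateless);
        }
        // Commit only if everything above succeeded
        stateless.getTransaction().commit();
    } catch (RuntimeException ex) {
        // Roll back instead of committing a half-finished batch
        stateless.getTransaction().rollback();
        play.Logger.error(ex.getMessage());
        ExceptionLog.log(ex, Thread.currentThread());
    } finally {
        stateless.close();
    }
}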
Related
I have a message queue that delivers messages with entity field update info. There are 10 threads that process messages from the queue.
For example:
The 1st thread processes a message and should update field A of my entity with id 123.
The 2nd thread processes another message and should update field B of the same entity with id 123 at the same time.
Sometimes, after the updates, the database doesn't contain some of the updated fields.
Some updater:
someService.updateEntityFieldA(entityId, newFieldValue);
Some service:
public Optional<Entity> findById(String entityId) {
    return Optional.ofNullable(new DBWorker().findOne(Entity.class, entityId));
}

public void updateEntityFieldA(String entityId, String newFieldValue) {
    findById(entityId).ifPresent(entity -> {
        entity.setFieldA(newFieldValue);
        new DBWorker().update(entity);
    });
}
DB worker:
public <T> T findOne(final Class<T> type, Serializable entityId) {
    T findObj;
    try (Session session = HibernateUtil.openSessionPostgres()) {
        findObj = session.get(type, entityId);
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
    return findObj;
}

public void update(Object entity) {
    try (Session session = HibernateUtil.openSessionPostgres()) {
        session.beginTransaction();
        session.update(entity);
        session.getTransaction().commit();
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
}
HibernateUtil.openSessionPostgres() gets a new session each time from
sessionFactory.openSession()
Is it possible to implement this logic without thread locks, optimistic locking or pessimistic locking?
If you use sessionFactory.openSession() to always open a new session, then on the update Hibernate may be losing the information about the dirty fields it needs to update, so it issues an UPDATE for all fields.
Setting the hibernate.show_sql property to true will show you the SQL UPDATE statements generated by Hibernate.
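For example, in hibernate.properties (or the equivalent entry in hibernate.cfg.xml):
hibernate.show_sql=true
hibernate.format_sql=true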
Try refactoring your code so that, in the same transaction, you load the entity and update the field. A session.update() is not needed: the entity is managed, so on transaction commit Hibernate will flush the changes and issue a SQL UPDATE.
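A rough sketch of that approach, reusing the HibernateUtil.openSessionPostgres() helper and the entity from the question (not tested):
public void updateEntityFieldA(String entityId, String newFieldValue) {
    // One session and one transaction cover both the read and the write
    try (Session session = HibernateUtil.openSessionPostgres()) {
        session.beginTransaction();
        Entity entity = session.get(Entity.class, entityId);
        if (entity != null) {
            entity.setFieldA(newFieldValue);
            // No session.update() needed: the entity is managed, so the change
            // is flushed as a SQL UPDATE when the transaction commits
        }
        session.getTransaction().commit();
    }
}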
I would like to wrap a PessimisticLockingFailureException that gets thrown in a JPA repository when trying to get a lock on an entity that is already locked, and handle the wrapped exception in my exception handlers.
But it seems that when Spring tries to end the transaction, the connection is already closed and Spring throws a new exception that overwrites the exception I would like to see.
In the logs I get "Application exception overridden by rollback exception", and it is this I would like to avoid. (The cause of the rollback exception is "Connection is closed".)
Is there a solution to this? Or am I doing something wrong?
(Here's some pseudo-code of what I'm doing.)
String restControllerMethod(String args) {
    try {
        return service.serviceMethod(args);
    } catch (Exception e1) {
        throw e1; // org.springframework.orm.jpa.JpaSystemException caused by org.hibernate.TransactionException caused by java.sql.SQLException
    }
}

@Transactional
String serviceMethod(String args) {
    Entity entity;
    try {
        entity = repo.repoFindMethod(args);
    } catch (Exception e2) {
        throw new WrappingException(e2); // org.springframework.dao.PessimisticLockingFailureException caused by org.hibernate.PessimisticLockException
    }
    // do some processing with entity
    return result;
}

@Lock(LockModeType.PESSIMISTIC_READ)
String repoFindMethod(String args);
I'm using spring-boot-starter-parent 2.3.2.RELEASE with spring-boot-starter-web, spring-boot-starter-data-jpa and an embedded H2 db.
Fixed this by adding a com.zaxxer.hikari.SQLExceptionOverride implementation and pointing
spring.datasource.hikari.exception-override-class-name to it.
This causes Hikari not to close the connection when the db throws an exception with the specified error code.
I've also added @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) to the locking query, since default lock wait times can be vendor specific.
The issue with this solution is that it is vendor specific (both for H2 and Hikari), and not all vendors support a custom timeout for obtaining locks (H2, for example, does not support this, but it matters less since its timeout is very short anyway).
Example of my solution (for H2):
spring.datasource.hikari.exception-override-class-name=com.example.H2SQLExceptionOverride

import java.sql.SQLException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.zaxxer.hikari.SQLExceptionOverride;

public class H2SQLExceptionOverride implements SQLExceptionOverride {

    private static final Logger logger = LoggerFactory.getLogger(H2SQLExceptionOverride.class);

    public static final int LOCK_TIMOUT_ERROR_CODE = 50200;

    @java.lang.Override
    public Override adjudicate(SQLException sqlException) {
        if (sqlException.getErrorCode() == LOCK_TIMOUT_ERROR_CODE) {
            logger.debug("Diverting from default hikari impl and continuing transaction with errorCode: "
                    + sqlException.getErrorCode() + " and sqlState: " + sqlException.getSQLState());
            return Override.DO_NOT_EVICT;
        }
        return Override.CONTINUE_EVICT;
    }
}
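For reference, the locking repository method with the query hint mentioned above might look roughly like this (a sketch; the repository, entity and method names are made up):
import java.util.Optional;

import javax.persistence.LockModeType;
import javax.persistence.QueryHint;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.QueryHints;

public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {

    @Lock(LockModeType.PESSIMISTIC_READ)
    @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")})
    Optional<MyEntity> findByName(String name);
}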
We have a large multithreaded Java EE application running on WildFly 8.
We are using OrientDB 2.1.19.
We have some problems with connection leaks. At some point the OrientDB server stops responding and all threads working with the db get stuck retrieving a new connection.
The configuration is the following:
OGlobalConfiguration.CLIENT_CONNECT_POOL_WAIT_TIMEOUT.setValue(5000);
OGlobalConfiguration.CLIENT_DB_RELEASE_WAIT_TIMEOUT.setValue(5000);
OGlobalConfiguration.CLIENT_CHANNEL_MAX_POOL.setValue(1000);
OGlobalConfiguration.DB_POOL_MIN.setValue(100);
OGlobalConfiguration.DB_POOL_MAX.setValue(5000);
OGlobalConfiguration.STORAGE_LOCK_TIMEOUT.setValue(5000);
OGlobalConfiguration.DB_POOL_IDLE_TIMEOUT.setValue(5000);
OGlobalConfiguration.DB_POOL_IDLE_CHECK_DELAY.setValue(1000);
We get the OPartitionedDatabasePool through an OPartitionedDatabasePoolFactory:
poolFactory = new OPartitionedDatabasePoolFactory();
documentPool = poolFactory.get(orientDBPath, username, password);
Then, depending on our needs, we use ODatabaseDocumentTx or OObjectDatabaseTx obtained via
documentPool.acquire();
The acquired ODatabaseDocumentTx or OObjectDatabaseTx is passed to a so-called executor, which executes a Runnable or Callable.
public void run(ORunnable<DB> oRunnable) {
    // db is the private member of the executor containing the acquired db.
    try {
        //...
        oRunnable.run(db);
        //...
    } catch (Exception e) {
    } finally {
        if (!db.isClosed()) {
            db.close();
        }
    }
}
As you can see, we close the db in the finally section, and we fully detach all documents received from the db. But in some cases the number of connections to the DB does not drop, and after that we get the following exception:
2016-07-28 23:10:45,957 FINE [com.orientechnologies.orient.client.remote.ORemoteConnectionManager] (default task-46) Network connection pool is receiving a closed connection to reuse: discard it
2016-07-28 23:10:45,957 FINE [com.orientechnologies.orient.client.remote.ORemoteConnectionManager] (default task-46) Cannot unlock connection lock: java.lang.IllegalMonitorStateException
at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:155) [rt.jar:1.7.0_13]
at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1260) [rt.jar:1.7.0_13]
at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:460) [rt.jar:1.7.0_13]
at com.orientechnologies.common.concur.lock.OAdaptiveLock.unlock(OAdaptiveLock.java:123) [orientdb-core-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynchClient.unlock(OChannelBinaryAsynchClient.java:371) [orientdb-enterprise-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.client.remote.ORemoteConnectionManager.remove(ORemoteConnectionManager.java:128) [orientdb-client-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.client.remote.ORemoteConnectionManager.release(ORemoteConnectionManager.java:119) [orientdb-client-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.client.remote.OStorageRemote.endResponse(OStorageRemote.java:1643) [orientdb-client-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.client.remote.OStorageRemote.command(OStorageRemote.java:1240) [orientdb-client-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.command(OStorageRemoteThread.java:453) [orientdb-client-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.core.sql.query.OSQLQuery.run(OSQLQuery.java:72) [orientdb-core-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.core.sql.query.OSQLSynchQuery.run(OSQLSynchQuery.java:85) [orientdb-core-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.core.query.OQueryAbstract.execute(OQueryAbstract.java:33) [orientdb-core-2.1.19.jar:2.1.19]
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.query(ODatabaseDocumentTx.java:717) [orientdb-core-2.1.19.jar:2.1.19]
Here is the piece of code that produces the exception above:
ODocument storedSession = orient.onDocuments().call(new ODocCallable<ODocument>() {
    @Override
    public ODocument call(ODatabaseDocumentTx db) {
        try {
            List<ODocument> list = db.query(new OSQLSynchQuery<ODocument>("select from " + CLAZZ + " where storedSession_sessionID = ?"), sessionId);
            if (list.isEmpty()) {
                return null;
            }
            return documentUnpin(list.get(0));
        } catch (Exception e) {
            // PersistentSession may not be in DB on request (for example in workspaces)
        }
        return null;
    }
});
Any suggestions on how to improve the situation with the connections?
Thanks.
I am working with Java, Hibernate and MySQL. I want to use the transaction setTimeout for the payment functionality of my application. I just tested the code below to check whether setTimeout works.
Transaction tx = (Transaction) threadTransaction.get();
try {
    if (tx == null) {
        Session session = (Session) threadSession.get();
        session.getTransaction().setTimeout(5);
        tx = session.beginTransaction();
        try {
            Thread.sleep(6000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        if (session.getTransaction().isActive()) {
            System.out.println("session active");
        } else {
            System.out.println("session inactive");
        }
        threadTransaction.set(tx);
    }
} catch (HibernateException e) {
    throw new HibernateException("", e);
}
But it prints "session active", which means the timeout doesn't work. What is the reason? Please help!
Hibernate is pretty good at doing nothing as long as nothing needs to be done (efficiency and such). I think that is what your test shows, not that the timeout is not working for what it was intended to do:
"... ensuring that database level deadlocks and queries with huge result sets are limited by a defined timeout."
Also note that "setTimeout() cannot be called in a CMT bean ..."
Test the transaction timeout with some code that does what the transaction timeout was intended for and I think you'll find it working properly.
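As a rough illustration (my sketch, not tested; Payment is a hypothetical entity): replace the Thread.sleep() with real database work inside the transaction, so the timeout actually has something to limit, for example a query that is blocked by a lock held by another transaction:
Session session = (Session) threadSession.get(); // same thread-bound session as in the question
Transaction tx = session.getTransaction();
tx.setTimeout(5); // seconds; set before begin()
tx.begin();
try {
    // This query must actually take longer than 5 seconds, e.g. because
    // another transaction holds a conflicting row lock on the PAYMENT table
    session.createQuery("from Payment").list();
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
}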
I am getting a
org.hibernate.TransactionException: nested transactions not supported
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:152)
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1395)
at com.mcruiseon.server.hibernate.ReadOnlyOperations.flush(ReadOnlyOperations.java:118)
Here is the code that throws that exception. I am calling flush from a thread that runs indefinitely until there is data to flush.
public void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Transaction txD;
    Session session;
    session = currentSession();
    // Below: line 118
    txD = session.beginTransaction();
    txD.begin();
    session.saveOrUpdate(dataStore);
    try {
        txD.commit();
        while (!txD.wasCommitted());
    } catch (ConstraintViolationException e) {
        txD.rollback();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        txD.rollback();
    } finally {
        // session.flush();
        txD = null;
        session.close();
    }
    // mySession.clear();
}
Edit:
I am calling flush in an independent thread whenever the datastore list contains data. From what I see it is a synchronous call to flush, so ideally flush should not return until the transaction is complete; that is at least what I would expect. Since it is an independent thread doing its job, all I care about is flush being a synchronous operation. Now my question is: is txD.commit() an asynchronous operation? Does it return before the transaction has had a chance to finish? If yes, is there a way to make commit "wait" until the transaction completes?
public void run() {
    Object dataStore = null;
    while (true) {
        try {
            synchronized (flushQ) {
                if (flushQ.isEmpty())
                    flushQ.wait();
                if (flushQ.isEmpty()) {
                    continue;
                }
                dataStore = flushQ.removeFirst();
                if (dataStore == null) {
                    continue;
                }
            }
            try {
                flush(dataStore);
            } catch (DidNotSaveRequestSomeRandomError e) {
                e.printStackTrace();
                log.fatal(e);
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Edit 2: I added while (!txD.wasCommitted()); (in the code above), and I still get that nested transactions not supported exception. In fact, because of this exception a record is not being written to the table either. Does it have something to do with the type of table? I have InnoDB for all my tables.
Finally got the nested transaction not supported error fixed. The changes made to the code are:
if (session.getTransaction() != null
        && session.getTransaction().isActive()) {
    txD = session.getTransaction();
} else {
    txD = session.beginTransaction();
}
// txD = session.beginTransaction();
// txD.begin();
session.saveOrUpdate(dataStore);
try {
    txD.commit();
    while (!txD.wasCommitted());
}
Credit for the above code also goes to Venkat. I did not find HbTransaction, so I just used getTransaction and beginTransaction. It worked.
I also made changes to the Hibernate properties based on advice given here. I added these lines to hibernate.properties. This alone did not solve the issue, but I am leaving it there:
hsqldb.write_delay_millis=0
shutdown=true
You probably already began a transaction before calling this method.
Either this should be part of the enclosing transaction, and you should thus not start another one; or it shouldn't be part of the enclosing transaction, and you should thus open a new session and a new transaction rather than using the current session.
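A minimal sketch of the second option (assuming a sessionFactory is available; names are illustrative):
public void flush(Object dataStore) {
    // A dedicated session, so this work is independent of whatever
    // transaction is already running on the current session
    Session session = sessionFactory.openSession();
    Transaction txD = session.beginTransaction();
    try {
        session.saveOrUpdate(dataStore);
        txD.commit();
    } catch (RuntimeException e) {
        txD.rollback();
        throw e;
    } finally {
        session.close();
    }
}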