Quarkus and JDBC error attempting to commit transaction - java

I am experimenting with Quarkus, building a small REST application. For this I have elected to use the Agroal datasource, but neither Panache nor plain Hibernate (as shown in their examples); instead I'm using plain JDBC.
For database interaction I have created a small service that injects the AgroalDataSource and uses it to open database connections. Said service exposes two methods: one for running non-transactional queries and one for transactional ones. The first works fine, but when I attempt to update a database entry the operation fails when committing the connection.
public <E> E update(TransactionalRunner<E> runner) {
    try (var connection = dataSource.getConnection()) {
        return attemptTransactional(runner, connection);
    } catch (SQLException e) {
        log.error("Establishing a database connection has failed", e);
        throw new DataAccessException(e);
    }
}

private <E> E attemptTransactional(TransactionalRunner<E> runner, Connection connection) {
    try {
        connection.setAutoCommit(Boolean.FALSE);
        E result = runner.run(new QueryRunner(), connection);
        connection.commit();
        return result;
    } catch (SQLException e) {
        try {
            log.error("Committing a transaction has failed. Attempting rollback", e);
            DbUtils.rollback(connection);
            throw new DataAccessException(e);
        } catch (SQLException ex) {
            log.error("Rolling back a transaction has failed. Giving up...", ex);
            throw new DataAccessException(ex);
        }
    }
}
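(Not shown in the post: the TransactionalRunner callback. Judging from the way it is used above, it is presumably a small functional interface along these lines; this is an assumption, not the original code.)

// Assumed shape of the callback, inferred from runner.run(new QueryRunner(), connection)
// and from the SQLException handling in attemptTransactional().
@FunctionalInterface
public interface TransactionalRunner<E> {
    E run(QueryRunner queryRunner, Connection connection) throws SQLException;
}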
The stack trace I'm getting is the following:
2019-12-21 14:25:59,350 ERROR [com.ari.rev.dat.DatabaseAccessService] (vert.x-worker-thread-1) Committing a transaction has failed. Attempting rollback: java.sql.SQLException: Attempting to commit while taking part in a transaction
at io.agroal.pool.wrapper.ConnectionWrapper.commit(ConnectionWrapper.java:183)
at com.ariskourt.revolut.database.DatabaseAccessService.attemptTransactional(DatabaseAccessService.java:44)
at com.ariskourt.revolut.database.DatabaseAccessService.update(DatabaseAccessService.java:33)
at com.ariskourt.revolut.database.DatabaseAccessService_ClientProxy.update(DatabaseAccessService_ClientProxy.zig:114)
at com.ariskourt.revolut.services.AccountTransferService.transferAmount(AccountTransferService.java:66)
at com.ariskourt.revolut.services.AccountTransferService_Subclass.transferAmount$$superaccessor2(AccountTransferService_Subclass.zig:164)
at com.ariskourt.revolut.services.AccountTransferService_Subclass$$function$$2.apply(AccountTransferService_Subclass$$function$$2.zig:51)
at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorBase.invokeInOurTx(TransactionalInterceptorBase.java:119)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorBase.invokeInOurTx(TransactionalInterceptorBase.java:92)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorRequired.doIntercept(TransactionalInterceptorRequired.java:32)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorBase.intercept(TransactionalInterceptorBase.java:53)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorRequired.intercept(TransactionalInterceptorRequired.java:26)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorRequired_Bean.intercept(TransactionalInterceptorRequired_Bean.zig:168)
at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41)
at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32)
I'm using the default Agroal datasource offered by Quarkus without any custom configuration. As for the QueryRunner, it is part of the Apache Commons DbUtils suite.
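(For illustration only, a caller of update() might look roughly like the sketch below; the SQL, parameters, and field name are placeholders rather than code from the post.)

// Illustrative caller with placeholder SQL and parameters: both statements run on the
// single connection opened by update(), so they commit or roll back together.
Integer updatedRows = databaseAccessService.update((queryRunner, connection) -> {
    int rows = queryRunner.update(connection,
            "UPDATE account SET balance = balance - ? WHERE id = ?", amount, fromId);
    rows += queryRunner.update(connection,
            "UPDATE account SET balance = balance + ? WHERE id = ?", amount, toId);
    return rows;
});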
Has anyone got any idea on how to resolve this?

Related

Why is db connection closed after trying and failing to get a lock with spring-data-jpa?

I would like to wrap a PessimisticLockingFailureException that gets thrown in a JPA repository when trying to get a lock on an entity that is already locked, and handle the wrapped exception in my exception handlers.
But it seems that when Spring tries to end the transaction, the connection is already closed, and Spring throws a new exception that overwrites the one I would like to see.
In the logs I get "Application exception overridden by rollback exception", and it is this I would like to avoid. (The cause of the rollback exception is "Connection is closed".)
Is there a solution to this? Or am I doing something wrong?
(Here's some pseudo code of what I'm doing)
String restControllerMethod(String args) {
    try {
        return service.serviceMethod(args);
    } catch (Exception e1) {
        throw e1; // org.springframework.orm.jpa.JpaSystemException caused by org.hibernate.TransactionException caused by java.sql.SQLException
    }
}

@Transactional
String serviceMethod(String args) {
    Entity entity;
    try {
        entity = repo.repoFindMethod(args);
    } catch (Exception e2) {
        throw new WrappingException(e2); // org.springframework.dao.PessimisticLockingFailureException caused by org.hibernate.PessimisticLockException
    }
    // do some processing with entity
    return result;
}

@Lock(LockModeType.PESSIMISTIC_READ)
String repoFindMethod(String args);
I'm using spring-boot-starter-parent 2.3.2.RELEASE with spring-boot-starter-web, spring-boot-starter-data-jpa, and an embedded H2 database.
Fixed this by adding a com.zaxxer.hikari.SQLExceptionOverride implementation and pointing spring.datasource.hikari.exception-override-class-name to it.
This causes Hikari not to close the connection when the database throws an exception with the specified error code.
I've also added @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) to the locking query, since default lock wait times can be vendor-specific.
The issue with this solution is that it is vendor-specific (both to H2 and to Hikari), and not all vendors support a custom timeout for obtaining locks (H2, for example, does not, but that matters less since its timeout is very short anyway).
Example of my solution (for H2):
spring.datasource.hikari.exception-override-class-name=com.example.H2SQLExceptionOverride
import java.sql.SQLException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.zaxxer.hikari.SQLExceptionOverride;

public class H2SQLExceptionOverride implements SQLExceptionOverride {

    private static final Logger logger = LoggerFactory.getLogger(H2SQLExceptionOverride.class);

    public static final int LOCK_TIMEOUT_ERROR_CODE = 50200;

    @java.lang.Override
    public Override adjudicate(SQLException sqlException) {
        if (sqlException.getErrorCode() == LOCK_TIMEOUT_ERROR_CODE) {
            logger.debug("Diverting from default hikari impl and continuing transaction with errorCode: "
                    + sqlException.getErrorCode() + " and sqlState: " + sqlException.getSQLState());
            return Override.DO_NOT_EVICT;
        }
        return Override.CONTINUE_EVICT;
    }
}
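For illustration, the locking repository method with the query hint might look roughly like this (entity and method names are placeholders, not my actual code):

import java.util.Optional;

import javax.persistence.LockModeType;
import javax.persistence.QueryHint;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.QueryHints;

public interface EntityRepository extends JpaRepository<Entity, Long> {

    // Pessimistic read lock with an immediate lock timeout, as described above.
    // H2 ignores the timeout hint; other vendors may honour it.
    @Lock(LockModeType.PESSIMISTIC_READ)
    @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")})
    Optional<Entity> findByArgs(String args);
}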

Spring Boot 2.x Connection Factory

I have the code below, which works well in Spring Boot 1.5.x:
public ConnectionFactory connectionFactory() {
    org.springframework.data.jdbc.config.oracle.AqJmsFactoryBeanFactory f = new AqJmsFactoryBeanFactory();
    f.setDataSource(dataSource);
    f.setCoordinateWithDataSourceTransactions(true);
    f.setNativeJdbcExtractor(new org.springframework.jdbc.support.nativejdbc.Jdbc4NativeJdbcExtractor());
    f.setConnectionFactoryType(ConnectionFactoryType.QUEUE_CONNECTION);
    try {
        return f.getObject();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
I have now upgraded to version 2.0.4, where NativeJdbcExtractor is no longer present. Can someone help me reconfigure the above to get the ConnectionFactory?
The whole nativejdbc hierarchy doesn't exist anymore, and what isn't there cannot be used. So either don't upgrade, or figure out a way not to need it (JDBC 4 can use unwrap to get the actual underlying connection).
To get the actual underlying connection there is an unwrap method on Connection, so something along the lines of connection.unwrap(OracleConnection.class) should get you the actual underlying connection. However, that might require an additional upgrade of the aq-jms-connection-factory, which I'm not certain supports Spring 5.0.
I have used something like this, which can help you:
public OracleConnection getOracleConnection(Connection connection) throws SQLException {
    OracleConnection oconn = null;
    try {
        if (connection.isWrapperFor(oracle.jdbc.OracleConnection.class)) {
            oconn = (OracleConnection) connection.unwrap(oracle.jdbc.OracleConnection.class)._getPC();
        }
    } catch (SQLException e) {
        throw e;
    }
    return oconn;
}
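A caller could then look something like this (a sketch; the dataSource field and the enclosing method are assumptions, not part of the answer above):

// Sketch of a caller; assumes a javax.sql.DataSource field named dataSource.
public void useOracleConnection() throws SQLException {
    try (Connection connection = dataSource.getConnection()) {
        OracleConnection oracleConnection = getOracleConnection(connection);
        if (oracleConnection != null) {
            // hand the physical connection to Oracle-specific APIs (for example AQ) here
        }
    }
}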

Debugging database connection leak in concurrent environment

I'm currently working on a project which was originally not built for high load.
My problem at the moment is that at some point during the stress test (30 users) the application seems to "get stuck", and when it releases it spits out a lot of exceptions: Unable to get managed connection for [MY_DS]
When I run only one user, it works like a charm, so it has something to do with the concurrency.
I also checked whether there were any unclosed DB connections at the end of one run and there were none, so under normal usage there are no connection leaks.
My suspicion falls on my open and close methods (because they are static). Here are the methods:
public static Connection getConnection() {
    if (logger.isDebugEnabled()) logger.debugLog("getConnection()");
    try {
        return DSUtils.getDefaultDataSource().getConnection();
    } catch (SQLException se) {
        logger.errorLog("SQLException", se);
        throw new ApplicationRuntimeException(MessageCodesConstants.ERROR_SQL_EXCEPTION, se);
    }
}

public static void closeConnection(Connection con) {
    if (logger.isDebugEnabled()) logger.debugLog("closeConnection");
    try {
        if (con != null) {
            con.close();
        }
    } catch (SQLException se) {
        logger.warnLog("SQLException while closing connection");
    }
}
It is an EE application running on JBoss EAP 6.2.0, backed by a SQL Server 2008 database.
Can somebody point me in the right direction to find out where the problem might be?
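One caller-side pattern worth checking for while debugging this is try-with-resources, which guarantees the connection goes back to the pool even when an exception is thrown between getConnection() and closeConnection(). A sketch using the helpers above, with placeholder SQL:

// Sketch only: the query and table name are placeholders. try-with-resources closes the
// result set, statement and connection even if an exception is thrown, so the connection
// is always returned to the pool.
public static int countRows() {
    String sql = "SELECT COUNT(*) FROM some_table";
    try (Connection con = DSUtils.getDefaultDataSource().getConnection();
         PreparedStatement ps = con.prepareStatement(sql);
         ResultSet rs = ps.executeQuery()) {
        return rs.next() ? rs.getInt(1) : 0;
    } catch (SQLException se) {
        throw new ApplicationRuntimeException(MessageCodesConstants.ERROR_SQL_EXCEPTION, se);
    }
}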

Hibernate setTimeout not working with MySQL

I am working with Java, Hibernate, and MySQL. I want to use a transaction setTimeout for the payment functionality of the application. I wrote the code below to test whether setTimeout works.
Transaction tx = (Transaction) threadTransaction.get();
try {
    if (tx == null) {
        Session session = (Session) threadSession.get();
        session.getTransaction().setTimeout(5);
        tx = session.beginTransaction();
        try {
            Thread.sleep(6000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        if (session.getTransaction().isActive()) {
            System.out.println("session active");
        } else {
            System.out.println("session inactive");
        }
        threadTransaction.set(tx);
    }
} catch (HibernateException e) {
    throw new HibernateException("", e);
}
But it prints "session active", which means the timeout doesn't work. What is the reason? Please help!
Hibernate is pretty good at doing nothing as long as nothing needs to be done (efficiency and such). I think that is what your test shows, not that the timeout is not working for what it was intended to do:
"... ensuring that database level deadlocks and queries with huge result sets are limited by a defined timeout."
Also note that "setTimeout() cannot be called in a CMT bean ..."
Test the transaction timeout with some code that does what the transaction timeout was intended for and I think you'll find it working properly.
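For example, a test that actually hits the database after the deadline would be expected to fail with a timeout-related exception rather than print "session active". A sketch, assuming a plain SessionFactory and MySQL's SLEEP() function (names are illustrative):

// Exercise the timeout with real database work instead of Thread.sleep().
Session session = sessionFactory.openSession();
Transaction tx = session.getTransaction();
tx.setTimeout(5);                 // seconds, must be set before begin()
tx.begin();
try {
    // Keeps the database busy longer than the timeout (MySQL-specific SLEEP function),
    // so the statement should be aborted once the deadline passes.
    session.createSQLQuery("SELECT SLEEP(10)").uniqueResult();
    tx.commit();                  // not expected to be reached
} catch (RuntimeException e) {
    tx.rollback();                // the timeout surfaces here as an exception
} finally {
    session.close();
}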

H2 on Tomcat shutdown hanging, memory leak when using Tomcat Datasource

This is related to these 2 posts:
What is the proper way to close H2?
Tomcat doesn't stop. How can I debug this?
Basically H2 keeps a lock on the database even when all connections are closed, so when stopping Tomcat it hangs waiting on a thread and the process keeps running.
The only way I managed to get H2 to release the lock is by issuing the SHUTDOWN IMMEDIATELY statement (the vanilla SHUTDOWN or SHUTDOWN COMPACT did not release the lock).
This is performed in my ServletContextListener's contextDestroyed method like this (I have omitted comments and log lines):
ServletContext ctx = servletContextEvent.getServletContext();
DataSource closeDS = databaseConnection.getDatasource();
Connection closeConn = null;
PreparedStatement closePS = null;
try {
    closeConn = closeDS.getConnection();
    closePS = closeConn.prepareStatement("SHUTDOWN IMMEDIATELY");
    closePS.execute();
} catch (Exception ex) {
} finally {
    if (closePS != null) {
        try { closePS.close(); } catch (SQLException ex) {}
    }
    if (closeConn != null) {
        try { closeConn.close(); } catch (SQLException ex) {}
    }
}

try {
    databaseConnection.close();
    databaseConnection = null;
    ctx.setAttribute("databaseConnection", null);
} catch (Exception e) {
}

Enumeration<Driver> drivers = DriverManager.getDrivers();
while (drivers.hasMoreElements()) {
    Driver driver = drivers.nextElement();
    try {
        DriverManager.deregisterDriver(driver);
    } catch (Exception e) {
    }
}
Now the lock is released and Tomcat stops (although I still get the SEVERE memory-leak messages in the logs), but I also receive a number of error stacks in the logs, like this:
INFO: Illegal access: this web application instance has been stopped already.
Could not load java.lang.ThreadGroup.
The eventual following stack trace is caused by an error thrown
for debugging purposes as well as to attempt to terminate
the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1531)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1491)
at org.h2.engine.DatabaseCloser.reset(DatabaseCloser.java:43)
at org.h2.engine.Database.close(Database.java:1155)
at org.h2.engine.DatabaseCloser.run(DatabaseCloser.java:80)
10-sep-2013 13:31:41 org.apache.catalina.loader.WebappClassLoader loadClass
The question is: how can I shut down the database without causing illegal state exceptions? Is there something wrong in the way my code calls the shutdown command?
Why is this such an issue with H2? I do not have this problem on JBoss or WebSphere, where the application also runs using datasources provided by the container.
