This is my connection detail in JBoss standalone.xml
<connection-url>
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=xx.1xx.119.1xx)(PORT=1521))(LOAD_BALANCE=on)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=XE)))
</connection-url>
I want to handle a failover corner case where the connection is lost after the EntityManager object has been obtained but before persist() completes. The failover option does not switch to the next database within the same transaction; it only switches to an active connection in the next transaction. I attempted something like this (catch the exception and get an updated bean object):
public EntityManager getEntityManager() {
    try {
        entityManager = getEntityManagerDao(Constant.JNDI_NFVD_ASSURANCE_ENTITY_MANAGER);
    } catch (NamingException e) {
        LOGGER.severe("Data could not be persisted.");
        throw new PersistenceException();
    }
    return entityManager.getEntityManager();
}

/**
 * Inserts record in database. In case multiple connections/databases exist, one more attempt will be made to
 * insert record.
 *
 * @param entry
 */
public void persist(Object entry) {
    try {
        getEntityManager().persist(entry);
    } catch (PersistenceException pe) {
        LOGGER.info("Could not persist data. Trying new DB connection.");
        getEntityManager().persist(entry);
    }
}

private static Object getJNDIObject(String path) throws NamingException {
    Object jndiObject = null;
    InitialContext initialContext = new InitialContext();
    jndiObject = initialContext.lookup(path);
    return jndiObject;
}

private static AssuranceEntityManager getEntityManagerDao(String path) throws NamingException {
    return (AssuranceEntityManager) getJNDIObject(path);
}
But this is not helping either. After catching the exception, the bean obtained by a fresh JNDI lookup still does not contain an updated connection, an exception is thrown again, and the data of that transaction is lost.
Please suggest how to handle this corner case of "connection lost after getting the EntityManager and before persisting".
I think what you want to achieve is close to impossible. The thing is that once the internal DB transaction is aborted, the JTA transaction is in an aborted state and you can't continue with it.
I expect it's similar to this case:
@Stateless
public class TableCreator {

    @Resource
    DataSource datasource;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void create() {
        try (Connection connection = datasource.getConnection()) {
            Statement st = connection.createStatement();
            st.execute("CREATE TABLE user (id INTEGER NOT NULL, name VARCHAR(255))");
        } catch (SQLException sqle) {
            // ignore this as table already exists
        }
    }
}
@Stateless
public class Inserter {

    @EJB
    private TableCreator creator;

    @PersistenceContext
    private EntityManager em;

    public void call() {
        creator.create();
        UserEntity entity = new UserEntity(1, "EAP QE");
        em.persist(entity);
    }
}
If the table user already exists and you used the annotation @TransactionAttribute(TransactionAttributeType.REQUIRED), then the create call would be part of the same JTA global transaction as the persist call. Because that transaction was already aborted, the persist call would fail with an exception like this (PostgreSQL case):
Caused by: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
I mean, if the Oracle JDBC driver is not able to handle the connection failure transparently to the JBoss app server and throws the exception upwards, then I think the only possible solution is to repeat the whole update action.
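If you go that route, one way to make the retry independent of the aborted transaction is to run the persist in its own bean method with a new transaction. A minimal sketch (the bean and method names here are my own, not from the question):

@Stateless
public class RetryingPersister {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own JTA transaction, independent of any aborted caller transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void persistInNewTransaction(Object entry) {
        em.persist(entry);
    }
}

The caller can then catch the PersistenceException and call persistInNewTransaction(entry) once more; since each attempt starts a fresh transaction, the second attempt is free to use the failed-over connection.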
So I would like to wrap a PessimisticLockingFailureException that gets thrown in a JPA repository when trying to get a lock for an entity that is already locked, and handle the wrapped exception in my exception handlers.
But it seems that when Spring tries to end the transaction, the connection is already closed, and Spring throws a new exception that overwrites the exception I would like to see.
In the logs I get "Application exception overridden by rollback exception", and it is this I would like to avoid. (The cause of the rollback exception is "Connection is closed".)
Is there a solution to this? Or am I doing something wrong?
(Here's some pseudo code of what I'm doing)
String restControllerMethod(String args) {
    try {
        return service.serviceMethod(args);
    } catch (Exception e1) {
        throw e1; // org.springframework.orm.jpa.JpaSystemException caused by org.hibernate.TransactionException caused by java.sql.SQLException
    }
}

@Transactional
String serviceMethod(String args) {
    Entity entity;
    try {
        entity = repo.repoFindMethod(args);
    } catch (Exception e2) {
        throw new WrappingException(e2); // org.springframework.dao.PessimisticLockingFailureException caused by org.hibernate.PessimisticLockException
    }
    // do some processing with entity
    return result;
}

@Lock(LockModeType.PESSIMISTIC_READ)
String repoFindMethod(String args);
I'm using spring-boot-starter-parent 2.3.2.RELEASE with spring-boot-starter-web, spring-boot-starter-data-jpa, and an embedded H2 DB.
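(For illustration only, the kind of exception handler referred to above might look like this; the class and message are hypothetical, not part of my actual code:)

@ControllerAdvice
public class LockingExceptionHandler {

    // Receives the wrapped exception once it is no longer overridden by the rollback exception.
    @ExceptionHandler(WrappingException.class)
    public ResponseEntity<String> handleLockConflict(WrappingException e) {
        return ResponseEntity.status(HttpStatus.CONFLICT).body("Entity is currently locked");
    }
}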
Fixed this by adding a com.zaxxer.hikari.SQLExceptionOverride implementation and pointing spring.datasource.hikari.exception-override-class-name to it.
This causes Hikari not to close the connection when the DB throws an exception with the specified error code.
I've also added @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) to the locking query, since default lock wait times can be vendor-specific.
The issue with this solution is that it is vendor-specific (both for H2 and Hikari), and not all vendors support a custom timeout for obtaining locks (H2, for example, does not support this, but that matters less since its timeout is very short anyway).
Example of my solution (for H2):
spring.datasource.hikari.exception-override-class-name=com.example.H2SQLExceptionOverride

public class H2SQLExceptionOverride implements SQLExceptionOverride {

    private static final Logger logger = LoggerFactory.getLogger(H2SQLExceptionOverride.class);

    public static final int LOCK_TIMOUT_ERROR_CODE = 50200;

    @java.lang.Override
    public Override adjudicate(SQLException sqlException) {
        if (sqlException.getErrorCode() == LOCK_TIMOUT_ERROR_CODE) {
            logger.debug("Diverting from default hikari impl and continuing transaction with errorCode: "
                    + sqlException.getErrorCode() + " and sqlState: " + sqlException.getSQLState());
            return Override.DO_NOT_EVICT;
        }
        return Override.CONTINUE_EVICT;
    }
}
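For illustration, the locking repository method with that query hint could be declared like this (the entity and repository names are placeholders taken from the pseudo-code above, and the actual query definition is omitted):

public interface EntityRepository extends JpaRepository<Entity, Long> {

    // Fail fast instead of waiting for the lock; 0 means "no wait" where the vendor supports it.
    @Lock(LockModeType.PESSIMISTIC_READ)
    @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")})
    Entity repoFindMethod(String args);
}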
I have a problem when testing the JTA 1.2 @Transactional annotation on the Glassfish 4.1 application server.
If I run the execute() method of this bean:
@Named
@RequestScoped
public class IndexController {

    @Resource(name = "ds")
    private DataSource ds;

    @Transactional
    public void execute() throws SQLException, SystemException {
        try (Connection con = ds.getConnection()) {
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO test(id) VALUES(1)"
            )) {
                ps.executeUpdate();
                throw new IllegalArgumentException();
            }
        }
    }
}
I get the expected error:
Caused by: javax.transaction.RollbackException: Transaction marked for rollback.
but when I execute a select statement:
SELECT * FROM test;
I see that the row was inserted. What's wrong?
What DB are you using, and in which mode is it running? Maybe I'm missing it, but where are you doing a rollback? The entry will still be temporarily written to the DB until you roll back. Try this:
@Transactional(rollbackOn = {Exception.class})
And normally, in a catch block, you would call the rollback method, because the transaction is only marked for rollback and you are responsible for rolling it back.
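Applied to the bean above, that suggestion would look roughly like this (a sketch of the idea, not tested on Glassfish):

@Named
@RequestScoped
public class IndexController {

    @Resource(name = "ds")
    private DataSource ds;

    // Roll back on checked exceptions too, not only on unchecked ones.
    @Transactional(rollbackOn = {Exception.class})
    public void execute() throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("INSERT INTO test(id) VALUES(1)")) {
            ps.executeUpdate();
            throw new IllegalArgumentException();
        }
    }
}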
I have JDBC code in which multiple savepoints are present; something like this:
1st insert statement
2nd insert statement
savepoint = conn.setSavepoint("S1");
1st insert statement
2nd update statement
savepoint = conn.setSavepoint("S2");
1st delete statement
2nd delete statement
savepoint = conn.setSavepoint("S3");
1st insert statement
2nd delete statement
savepoint = conn.setSavepoint("S4");
Now in the catch block, I catch the exception and check whether the savepoint is null or not; if it is, I roll back the entire connection, otherwise I roll back to a savepoint. But I am not able to work out to which savepoint I should roll back.
Will it be fine if I change all the savepoint names to "S1"? In that case, how will I know up to which savepoint the work was done correctly?
Please advise how to determine up to which savepoint the work was performed correctly.
I would view this as multiple transactions, and hence you could handle it with multiple try/catch blocks. You also seem to be overwriting the savepoint objects, so rolling back to an earlier one would not be feasible.
More info:
JDBC also supports setting savepoints and then rolling back to a specified savepoint. The following method can be used to define a savepoint:
Savepoint savePoint1 = connection.setSavepoint();
Roll back a transaction to an already defined savepoint using the rollback call with an argument:
connection.rollback(savePoint1);
Reference.
https://www.stackstalk.com/2014/08/jdbc-handling-transactions.html
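Putting those two points together, a sketch that keeps each savepoint in its own variable might look like this (the SQL statements and table names are placeholders for the ones in the question):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public class SavepointDemo {

    // In the catch block, roll back only to the last savepoint that was actually reached,
    // so it is always clear how far the work got.
    static void run(Connection conn) throws SQLException {
        Savepoint s1 = null;
        Savepoint s2 = null;
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO t1 VALUES (1)");    // 1st insert statement
            st.executeUpdate("INSERT INTO t2 VALUES (1)");    // 2nd insert statement
            s1 = conn.setSavepoint("S1");

            st.executeUpdate("INSERT INTO t1 VALUES (2)");    // insert statement
            st.executeUpdate("UPDATE t2 SET c = 2");          // update statement
            s2 = conn.setSavepoint("S2");

            st.executeUpdate("DELETE FROM t1 WHERE id = 1");  // delete statements
            st.executeUpdate("DELETE FROM t2 WHERE id = 1");

            conn.commit();               // everything succeeded
        } catch (SQLException e) {
            if (s2 != null) {
                conn.rollback(s2);       // work up to S2 is kept
            } else if (s1 != null) {
                conn.rollback(s1);       // work up to S1 is kept
            } else {
                conn.rollback();         // no savepoint reached, undo everything
            }
            conn.commit();               // commit whatever was kept
        }
    }
}

Whether you really want to commit the part that succeeded, or roll everything back, is a design decision; the point is that separate savepoint variables tell you exactly which section failed.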
In such cases, I've found the tricky part is making sure you commit the transaction only if all inserts succeed, but roll back all updates if any insert fails. I've used a savepoint stack to handle such situations. The highly simplified code is as follows:
A connection wrapper class:
public class MyConnection {

    Connection conn;
    static DataSource ds;
    Stack<Savepoint> savePoints = null;

    static {
        //... stuff to initialize datasource.
    }

    public MyConnection() throws SQLException {
        conn = ds.getConnection();
    }

    public void beginTransaction() throws SQLException {
        if (savePoints == null) {
            // Outermost call: start the real transaction.
            savePoints = new Stack<Savepoint>();
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        } else {
            // Nested call: just mark a savepoint.
            savePoints.push(conn.setSavepoint());
        }
    }

    public void commit() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            // Outermost commit: make everything permanent.
            conn.commit();
        } else {
            // Nested commit: release the savepoint, keep the work pending.
            Savepoint sp = savePoints.pop();
            conn.releaseSavepoint(sp);
        }
    }

    public void rollback() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            // Outermost rollback: undo the whole transaction.
            conn.rollback();
        } else {
            // Nested rollback: undo only the work since the matching savepoint.
            Savepoint sp = savePoints.pop();
            conn.rollback(sp);
        }
    }

    public void releaseConnection() throws SQLException {
        conn.close();
    }
}
Then you can have various methods that may be called independently or in combination. In the example below, methodA may be called on its own, or as a result of calling methodB.
public class AccessDb {

    public void methodA(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            // update table A
            // update table B
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e;
        } finally {
        }
    }

    public void methodB(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            methodA(myConn);
            // update table C
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e;
        } finally {
        }
    }
}
This way, if anything goes wrong, it rolls back fully (as a result of the exception handling), but it only ever commits the entire transaction, never a partially completed one.
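For what it's worth, an outermost caller would then drive it roughly like this (a sketch; it assumes the datasource initialization in the static block is in place):

public class AccessDbDemo {

    public static void main(String[] args) throws Exception {
        MyConnection myConn = new MyConnection();   // borrows a connection from the datasource
        try {
            // methodB starts the real transaction (empty savepoint stack); its nested call to
            // methodA only marks a savepoint, so the single real commit happens in methodB.
            new AccessDb().methodB(myConn);
        } finally {
            myConn.releaseConnection();
        }
    }
}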
I am creating a Java application that connects to multiple databases. A user will be able to select the database they want to connect to from a drop-down box.
The program then connects to the database by passing the name to a method that creates an initial context so it can talk with an Oracle WebLogic data source.
public class dbMainConnection {

    private static dbMainConnection conn = null;
    private static java.sql.Connection dbConn = null;
    private static javax.sql.DataSource ds = null;
    private static Logger log = LoggerUtil.getLogger();

    private dbMainConnection(String database) {
        try {
            Context ctx = new InitialContext();
            if (ctx == null) {
                log.info("JNDI problem, cannot get InitialContext");
            }
            database = "jdbc/" + database;
            log.info("This is the database string in dbMainConnection: " + database);
            ds = (javax.sql.DataSource) ctx.lookup(database);
        } catch (Exception ex) {
            log.error("eMTSLogin: Error in dbMainConnection while connecting to the database : " + database, ex);
        }
    }

    public Connection getConnection() {
        try {
            return ds.getConnection();
        } catch (Exception ex) {
            log.error("Error in main getConnection while connecting to the database : ", ex);
            return null;
        }
    }

    public static dbMainConnection getInstance(String database) {
        if (dbConn == null) {
            conn = new dbMainConnection(database);
        }
        return conn;
    }

    public void freeConnection(Connection c) {
        try {
            c.close();
            log.info(c + " is now closed");
        } catch (SQLException sqle) {
            log.error("Error in main freeConnection : ", sqle);
        }
    }
}
My problem is: what happens if someone forgets to create the data source for the database but still adds it to the drop-down box? Right now, if I try to connect to a database that doesn't have a data source, it errors saying it cannot get a connection, which is what I want. But if I first connect to a database that does have a data source, which works, and then try to connect to the database that doesn't have one, it again errors with
javax.naming.NameNotFoundException: Unable to resolve 'jdbc.peterson'. Resolved 'jdbc'; remaining name 'peterson'.
Which again I would expect, but what confuses me is that it then grabs the last good connection, which is for a different database, and processes everything as if nothing had happened.
Does anyone know why that is? Is WebLogic caching the connection or something as a fail-safe? Is it a bad idea to create connections this way?
You're storing a single datasource (and connection, and dbMainConnection) in static variables of your class. Each time someone asks for a datasource, you replace the previous one with the new one. If an exception occurs while getting a datasource from JNDI, the static datasource stays as it is. You should not store anything in a static variable. Since your dbMainConnection class is constructed with the name of a database, and there are several database names, it makes no sense to make it a singleton.
Just use the following code to access the datasource:
public final class DataSourceUtil {

    /**
     * Private constructor to prevent unnecessary instantiations
     */
    private DataSourceUtil() {
    }

    public static DataSource getDataSource(String name) {
        try {
            Context ctx = new InitialContext();
            String database = "jdbc/" + name;
            return (javax.sql.DataSource) ctx.lookup(database);
        } catch (NamingException e) {
            // Keep the original exception as the cause so the JNDI error is not lost.
            throw new IllegalStateException("Error accessing JNDI and getting the database named " + name, e);
        }
    }
}
And let the callers get a connection from the datasource and close it when they have finished using it.
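A caller would then look roughly like this (a sketch; "peterson" is just the database name from the error above and the query is a placeholder):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DataSourceUtilDemo {

    public void query() {
        // Look up the datasource by name, borrow a connection, and let try-with-resources close it.
        try (Connection c = DataSourceUtil.getDataSource("peterson").getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT 1 FROM dual");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            // the connection, statement, and result set are closed automatically; just handle the failure
        }
    }
}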
You're catching the JNDI exception upon lookup of the nonexistent datasource, but your singleton still keeps the reference to the previously looked-up datasource. As A.B. Cade says, null the ds reference upon exception, or even before the lookup.
On a more general note, perhaps using a singleton is not the best idea.
When I try to save a new entity to the database, I get the following error:
org.hibernate.AssertionFailure: null id in xxx.nameBean entry (don't flush the Session after an exception occurs)
produced by the code
session.save(nameBean)
but, "magically" it only appears at Production Server. When I try to reproduce the error at localhost, with the same code and data (using copy of the DB of Production Server, via bak file) it works ok.
What can it be?
EDIT: Adding the code that probably cause the error. The objective of that code is save the bean and update the otherBean in the same transaction, so if something wrong ocurrs make the rollback.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean);
            doOtherAction(bean);
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("get failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }

    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        .
        .
        .
        otherBeanDao.attachDirty(otherBean);
    }
}
As the error message says, it's probably caused by an attempt to use the session after it has thrown an exception. Make sure your code doesn't swallow any Hibernate exceptions.
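In the code above, that would typically mean something inside doOtherAction() (or the DAO methods it calls) resembles this hypothetical fragment:

// Hypothetical anti-pattern, for illustration only: catching and swallowing the exception here
// leaves the Session in an inconsistent state, so the later flush/commit in save() fails with
// the "null id ... don't flush the Session after an exception occurs" AssertionFailure.
private void doOtherAction(Bean bean) {
    try {
        OtherBean otherBean = new OtherBeanDao().findById(bean.getIdOtherBean());
        new OtherBeanDao().attachDirty(otherBean);
    } catch (RuntimeException e) {
        log.error("update failed", e);  // swallowed: save() then keeps using the broken Session
    }
}
// Let the exception propagate instead, so the catch block in save() rolls back and the Session is discarded.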