I have a JDBC batch update operation which might take a long time, hence I am using a transaction timeout to handle this.
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW, timeout = 10)
public void saveAllUsingBatch(List<KillPrintModel> list) {
    PreparedStatmentMapper ps = new HibernateDao.PreparedStatmentMapper<KillPrintModel>() {
        @Override
        public void prepareStatement(PreparedStatement ps, KillPrintModel t)
                throws SQLException {
            ps.setString(1, t.getOffice());
            ps.setString(2, t.getAccount());
            ps.setDate(3, new java.sql.Date(t.getUpdatedOn().getTime()));
        }
    };
    String sql = String.format("INSERT INTO dbo.%s (%s,%s,%s) VALUES (?,?,?)",
            KillPrintModel.TABLE_NAME, KillPrintModel.FIELD_Office,
            KillPrintModel.FIELD_Account, KillPrintModel.FIELD_UpdatedOn);
    this.jdbcBatchOperation(list, sql, ps);
}
This method runs for more than a minute (and returns successfully) even though I have a transaction timeout of 10 seconds. It works fine when the timeout is 0.
Is it because my thread is always in a running state once it starts execution?
If debugging in trace mode does not help, just put a breakpoint in the following Hibernate classes; they ultimately set a timeout via PreparedStatement.setQueryTimeout(...) from the @Transactional annotation:
org.hibernate.engine.jdbc.internal.StatementPreparerImpl
private void setStatementTimeout(PreparedStatement preparedStatement) throws SQLException {
    final int remainingTransactionTimeOutPeriod = jdbcCoordinator.determineRemainingTransactionTimeOutPeriod();
    if ( remainingTransactionTimeOutPeriod > 0 ) {
        preparedStatement.setQueryTimeout( remainingTransactionTimeOutPeriod );
    }
}
Or even better, start as early as the transaction manager and step through until you hit statement.setQueryTimeout(..).
org.springframework.orm.hibernate4.HibernateTransactionManager
int timeout = determineTimeout(definition);
if (timeout != TransactionDefinition.TIMEOUT_DEFAULT) {
    // Use Hibernate's own transaction timeout mechanism on Hibernate 3.1+
    // Applies to all statements, also to inserts, updates and deletes!
    hibTx = session.getTransaction();
    hibTx.setTimeout(timeout);
    hibTx.begin();
}
else {
    // Open a plain Hibernate transaction without specified timeout.
    hibTx = session.beginTransaction();
}
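For reference, this is the plain JDBC behavior those classes hook into: once a query timeout is set on a statement, the driver cancels the statement and throws when the limit elapses. A minimal standalone sketch (the connection URL and query are placeholders, not from the original code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLTimeoutException;

public class QueryTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical in-memory H2 database; any driver that honors query timeouts will do
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
             PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
            ps.setQueryTimeout(10); // seconds, the same unit as @Transactional(timeout = 10)
            try {
                ps.executeQuery();
            } catch (SQLTimeoutException e) {
                // thrown if the statement runs longer than the configured timeout
            }
        }
    }
}

If the breakpoint in StatementPreparerImpl is never hit, the timeout never reaches the driver, which would explain the batch running to completion.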
We are trying to get the Connection object from the EntityManager. Below is the sample code:
final Session unwrap = proxy.unwrap(Session.class);
unwrap.doWork(new Work()
{
    @Override
    public void execute(Connection connection) throws SQLException
    {
        PreparedStatement ps = connection.prepareStatement(MY_QUERY);
        for (Object value : valueSet)
        {
            ....
            ....
            ps.addBatch();
        }
        try
        {
            int[] ints = ps.executeBatch();
        } finally
        {
            ps.close();
        }
    }
});
This works fine.
The concern we have is that every time this code is invoked, getConnection is called on the DataSource. Does that mean a new connection is obtained from the pool?
This has a performance impact in our use case.
Our understanding is that the currently active connection will be utilised.
Is that understanding incorrect?
The Hibernate documentation says:
Controller for allowing users to perform JDBC related work using the Connection managed by this Session.
So it is the (single) Connection used by the Session.
Everything else would be a bug.
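If you want to verify this, a quick sketch (hedged; it assumes Hibernate's one-method Work interface can be written as a lambda, i.e. Java 8+) is to compare the Connection reference seen by two doWork calls on the same Session:

final Connection[] seen = new Connection[2];
unwrap.doWork(connection -> seen[0] = connection);
unwrap.doWork(connection -> seen[1] = connection);
// With a session-managed connection, both callbacks are expected to receive
// the same underlying Connection (possibly the same pool proxy), so this prints true.
System.out.println(seen[0] == seen[1]);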
This is my connection detail in JBoss standalone.xml:
<connection-url>
    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=xx.1xx.119.1xx)(PORT=1521))(LOAD_BALANCE=on)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=XE)))
</connection-url>
I want to handle a corner case of failover where the connection is lost after getting the EntityManager object but during the call to persist(). The failover option does not switch to the next database within the same transaction; it only switches to an active connection in the next transaction. I attempted something like this (catch the exception and get an updated bean object):
public EntityManager getEntityManager() {
    try {
        entityManager = getEntityManagerDao(Constant.JNDI_NFVD_ASSURANCE_ENTITY_MANAGER);
    } catch (NamingException e) {
        LOGGER.severe("Data could not be persisted.");
        throw new PersistenceException();
    }
    return entityManager.getEntityManager();
}

/**
 * Inserts record in database. In case multiple connections/databases exist, one more attempt will be made to
 * insert record.
 *
 * @param entry
 */
public void persist(Object entry) {
    try {
        getEntityManager().persist(entry);
    } catch (PersistenceException pe) {
        LOGGER.info("Could not persist data. Trying new DB connection.");
        getEntityManager().persist(entry);
    }
}

private static Object getJNDIObject(String path) throws NamingException {
    Object jndiObject = null;
    InitialContext initialContext = new InitialContext();
    jndiObject = initialContext.lookup(path);
    return jndiObject;
}

private static AssuranceEntityManager getEntityManagerDao(String path) throws NamingException {
    return (AssuranceEntityManager) getJNDIObject(path);
}
But this is not helping either. After catching the exception, the new bean obtained via JNDI lookup does not contain a fresh connection, and an exception is thrown again. This results in the loss of that transaction's data.
Please suggest how to handle this corner case of "connection lost after getting the EntityManager and before persisting."
I think what you want to achieve is pretty much impossible. The thing is that if the internal DB transaction is aborted, then the JTA transaction is in the aborted state and you can't continue with it.
I expect it's similar to this case:
@Stateless
public class TableCreator {

    @Resource
    DataSource datasource;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void create() {
        try (Connection connection = datasource.getConnection()) {
            Statement st = connection.createStatement();
            st.execute("CREATE TABLE user (id INTEGER NOT NULL, name VARCHAR(255))");
        } catch (SQLException sqle) {
            // ignore this as table already exists
        }
    }
}

@Stateless
public class Inserter {

    @EJB
    private TableCreator creator;

    @PersistenceContext
    private EntityManager em; // not declared in the original snippet

    public void call() {
        creator.create();
        UserEntity entity = new UserEntity(1, "EAP QE");
        em.persist(entity);
    }
}
In case the table user exists and you used the annotation @TransactionAttribute(TransactionAttributeType.REQUIRED), the create call would be part of the same JTA global transaction as the call to persist. As the transaction was aborted in such a case, the persist call would fail with an exception like (PostgreSQL case):
Caused by: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
I mean, if the Oracle JDBC driver is not able to handle the connection failure transparently to the JBoss app server and throws the exception upwards, then I think the only possible solution is to repeat the whole update action.
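If you do repeat it, the retry has to run in a fresh JTA transaction rather than in the aborted one. A rough sketch of that idea (bean and method names are hypothetical), using a separate EJB so that REQUIRES_NEW actually starts a new transaction on each call:

@Stateless
public class RetryingWriter {

    @EJB
    private TransactionalWriter writer; // hypothetical helper bean, see below

    public void persistWithRetry(Object entry) {
        try {
            writer.persistInNewTransaction(entry);
        } catch (EJBException e) {
            // the first transaction was rolled back in its entirety;
            // retry once, in another brand-new transaction
            writer.persistInNewTransaction(entry);
        }
    }
}

@Stateless
public class TransactionalWriter {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void persistInNewTransaction(Object entry) {
        em.persist(entry);
    }
}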
I have some entities using join-inheritance, and I'm doing bulk operations on them. As explained in Multi-table Bulk Operations, Hibernate uses a temporary table to execute the bulk operations.
As I understand temporary tables, the data in them is temporary (deleted at the end of the transaction or session) but the tables themselves are permanent. What I see is that Hibernate tries to create the temporary table every time such a query is executed, which in my case is more than 35,000 times per hour. The CREATE TABLE statement obviously fails every time, because a table with that name already exists. This is really unnecessary and probably hurts performance; the DBAs are not happy either...
Is there a way to make Hibernate remember that it already created the temporary table?
If not, are there any workarounds? My only idea is to use single-table-inheritance instead, to avoid temporary tables completely.
Hibernate version is 4.2.8; the DB is Oracle 11g.
I think this is a bug in TemporaryTableBulkIdStrategy, because the Oracle8iDialect says that temporary tables shouldn't be dropped after use:
@Override
public boolean dropTemporaryTableAfterUse() {
    return false;
}
But this check is made only when deleting the table:
protected void releaseTempTable(Queryable persister, SessionImplementor session) {
    if ( session.getFactory().getDialect().dropTemporaryTableAfterUse() ) {
        TemporaryTableDropWork work = new TemporaryTableDropWork( persister, session );
        if ( shouldIsolateTemporaryTableDDL( session ) ) {
            session.getTransactionCoordinator()
                    .getTransaction()
                    .createIsolationDelegate()
                    .delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
        }
        else {
            final Connection connection = session.getTransactionCoordinator()
                    .getJdbcCoordinator()
                    .getLogicalConnection()
                    .getConnection();
            work.execute( connection );
            session.getTransactionCoordinator()
                    .getJdbcCoordinator()
                    .afterStatementExecution();
        }
    }
    else {
        // at the very least cleanup the data :)
        PreparedStatement ps = null;
        try {
            final String sql = "delete from " + persister.getTemporaryIdTableName();
            ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( sql, false );
            session.getTransactionCoordinator().getJdbcCoordinator().getResultSetReturn().executeUpdate( ps );
        }
        catch( Throwable t ) {
            log.unableToCleanupTemporaryIdTable(t);
        }
        finally {
            if ( ps != null ) {
                try {
                    session.getTransactionCoordinator().getJdbcCoordinator().release( ps );
                }
                catch( Throwable ignore ) {
                    // ignore
                }
            }
        }
    }
}
but not when creating the table:
protected void createTempTable(Queryable persister, SessionImplementor session) {
    // Don't really know all the codes required to adequately decipher returned jdbc exceptions here.
    // simply allow the failure to be eaten and the subsequent insert-selects/deletes should fail
    TemporaryTableCreationWork work = new TemporaryTableCreationWork( persister );
    if ( shouldIsolateTemporaryTableDDL( session ) ) {
        session.getTransactionCoordinator()
                .getTransaction()
                .createIsolationDelegate()
                .delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
    }
    else {
        final Connection connection = session.getTransactionCoordinator()
                .getJdbcCoordinator()
                .getLogicalConnection()
                .getConnection();
        work.execute( connection );
        session.getTransactionCoordinator()
                .getJdbcCoordinator()
                .afterStatementExecution();
    }
}
As a workaround, you could extend the Oracle dialect and override the dropTemporaryTableAfterUse method to return true, so the temporary table is dropped after each use and the next CREATE TABLE no longer collides with a leftover table.
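A minimal sketch of such a dialect (assuming Oracle10gDialect is the configured base dialect; adjust to whichever one you actually use):

public class TempTableDroppingOracleDialect extends Oracle10gDialect {

    // Tell Hibernate to drop the bulk-id temporary table after each use,
    // so the subsequent CREATE TABLE succeeds instead of failing.
    @Override
    public boolean dropTemporaryTableAfterUse() {
        return true;
    }
}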
I filed the HHH-9744 issue for this.
With Vlad pointing me in the right direction I came up with the following workaround to cache the names of already created temporary tables:
public class FixedTemporaryTableBulkIdStrategy extends TemporaryTableBulkIdStrategy {

    private final Set<String> tables = new CopyOnWriteArraySet<>();

    @Override
    protected void createTempTable(Queryable persister, SessionImplementor session) {
        final String temporaryIdTableName = persister.getTemporaryIdTableName();
        if (!tables.contains(temporaryIdTableName)) {
            super.createTempTable(persister, session);
            tables.add(temporaryIdTableName);
        }
    }
}
This can be used by setting the property hibernate.hql.bulk_id_strategy to the fully qualified name of this class.
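For example, in hibernate.properties (the package name is assumed):

hibernate.hql.bulk_id_strategy=com.example.FixedTemporaryTableBulkIdStrategy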
Please note that this is not a general solution and only works if the database/dialect uses global temporary tables (as opposed to session- or transaction-specific ones).
I have JDBC code in which multiple savepoints are present; something like this:
1st insert statement
2nd insert statement
savepoint = conn.setSavepoint("S1");
1st insert statement
2nd update statement
savepoint = conn.setSavepoint("S2");
1st delete statement
2nd delete statement
savepoint = conn.setSavepoint("S3");
1st insert statement
2nd delete statement
savepoint = conn.setSavepoint("S4");
Now in the catch block, I am catching the exception and checking whether the savepoint is null; if it is, I roll back the entire connection, otherwise I roll back to a savepoint. But I am not able to work out to which savepoint I should roll back.
Will it be fine if I change all the savepoint names to "S1"? In that case, how will I know up to which savepoint the work was done correctly?
Please advise how to determine up to which savepoint the work was performed correctly.
I would view this as multiple transactions; hence you could handle it with multiple try/catch blocks. You also seem to be overwriting the savepoint objects, so rolling back to an earlier savepoint would not be feasible.
More info.
JDBC also supports setting savepoints and then rolling back to a specified savepoint. The following method can be used to define a savepoint:
Savepoint savePoint1 = connection.setSavepoint();
Roll back a transaction to an already defined savepoint using the rollback call with an argument:
connection.rollback(savePoint1);
Reference.
https://www.stackstalk.com/2014/08/jdbc-handling-transactions.html
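Applied to the question's flow, a hedged sketch: keep each savepoint in its own variable instead of overwriting a single one, and in the catch block roll back to the last savepoint that was actually reached:

Savepoint s1 = null, s2 = null;
try {
    // ... first group of statements ...
    s1 = conn.setSavepoint("S1");
    // ... second group of statements ...
    s2 = conn.setSavepoint("S2");
    // ... third group of statements ...
    conn.commit();
} catch (SQLException e) {
    if (s2 != null) {
        conn.rollback(s2);  // keep the work up to S2, discard the rest
        conn.commit();
    } else if (s1 != null) {
        conn.rollback(s1);  // keep the work up to S1, discard the rest
        conn.commit();
    } else {
        conn.rollback();    // no savepoint reached, discard everything
    }
}

Whether to commit the surviving part or roll back everything is a business decision; the null checks are what tell you how far the work got.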
In such cases, I've found that the tricky part is to make sure you commit the transaction only if all inserts succeed, but roll back all updates if any insert fails. I've used a savepoint stack to handle such situations. The highly simplified code is as follows:
A connection wrapper class:
public class MyConnection {
    Connection conn;
    static DataSource ds;
    Stack<Savepoint> savePoints = null;

    static {
        //... stuff to initialize datasource.
    }

    public MyConnection() throws SQLException {
        conn = ds.getConnection();
    }

    public void beginTransaction() throws SQLException {
        if (savePoints == null) {
            savePoints = new Stack<Savepoint>();
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        } else {
            savePoints.push(conn.setSavepoint());
        }
    }

    public void commit() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            conn.commit();
        } else {
            Savepoint sp = savePoints.pop();
            conn.releaseSavepoint(sp);
        }
    }

    public void rollback() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            conn.rollback();
        } else {
            Savepoint sp = savePoints.pop();
            conn.rollback(sp);
        }
    }

    public void releaseConnection() throws SQLException {
        conn.close();
    }
}
Then you can have various methods that may be called independently or in combination. In the example below, methodA may be called on its own, or as a result of calling methodB.
public class AccessDb {

    public void methodA(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            // update table A
            // update table B
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e; // or display error message
        }
    }

    public void methodB(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            methodA(myConn);
            // update table C
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e;
        }
    }
}
This way, if anything goes wrong, it rolls back fully (as a result of the exception handling), but it will only ever commit the entire transaction, never a partially completed one.
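A usage sketch (hedged; exception handling elided) showing the nesting in action:

MyConnection myConn = new MyConnection();
try {
    // methodB opens the real transaction; the nested methodA call
    // runs under a savepoint, so its work is only committed together
    // with methodB's own updates
    new AccessDb().methodB(myConn);
} finally {
    myConn.releaseConnection();
}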
In my web application I'm using stateless sessions with Hibernate to get better performance on my inserts and updates.
It was working fine with the H2 database (the one used by the Play framework in dev mode).
But when I test it with MySQL I get the following exception:
ERROR ~ Lock wait timeout exceeded; try restarting transaction
ERROR ~ HHH000315: Exception executing batch [Lock wait timeout exceeded; try restarting transaction]
Here is the code:
public static void update() {
    Session session = (Session) JPA.em().getDelegate();
    StatelessSession stateless = session.getSessionFactory().openStatelessSession();
    try {
        stateless.beginTransaction();
        // Fetch all products
        {
            List<ProductType> list = ProductType.retrieveAllWithHistory();
            for (ProductType pt : list) {
                updatePrice(pt, stateless);
            }
        }
        // Fetch all raw materials
        {
            List<RawMaterialType> list = RawMaterialType.retrieveAllWithHistory();
            for (RawMaterialType rm : list) {
                updatePrice(rm, stateless);
            }
        }
    } catch (Exception ex) {
        play.Logger.error(ex.getMessage());
        ExceptionLog.log(ex, Thread.currentThread());
    } finally {
        stateless.getTransaction().commit();
        stateless.close();
    }
}
private static void updatePrice(ProductType pt, StatelessSession stateless) {
    pt.priceDelta = computeDelta();
    pt.unitPrice = computePrice();
    stateless.update(pt);
    PriceHistory ph = new PriceHistory(pt, price);
    stateless.insert(ph);
}

private static void updatePrice(RawMaterialType rm, StatelessSession stateless) {
    rm.priceDelta = computeDelta();
    rm.unitPrice = computePrice();
    stateless.update(rm);
    PriceHistory ph = new GoodPriceHistory(rm, price);
    stateless.insert(ph);
}
In this example I have 3 simple entities (ProductType, RawMaterialType and PriceHistory).
computeDelta and computePrice are just algorithmic functions with no DB work.
The retrieveAllWithHistory functions fetch some data from the database using Play framework model functions.
So this code retrieves some data, edits some of it, creates new records, and finally saves everything.
Why do I get a lock exception with MySQL and no exception with H2?
I'm not sure why you have a commit in a finally block. Give this structure a try:
try {
    factory.getCurrentSession().beginTransaction();
    // ... do the inserts/updates here ...
    factory.getCurrentSession().getTransaction().commit();
} catch (RuntimeException e) {
    factory.getCurrentSession().getTransaction().rollback();
    throw e; // or display error message
}
Also, it might be helpful for you to check this documentation.