Best way to truncate all tables with Hibernate?

I would like to truncate all my database tables between one integration test and the next. What is the best way to do this using Hibernate?
Currently I'm doing this:
public void cleanDatabase() {
    doWithSession(new Action1<Session>() {
        @Override
        public void doSomething(Session session) {
            SQLQuery query = session.createSQLQuery("truncate table stuff");
            // todo - generify this to all tables
            query.executeUpdate();
        }
    });
}
(doWithSession is a small wrapper that creates and closes a session). I could iterate over all my mapped objects using reflection ... I'm wondering if someone has already solved this problem.

I guess you probably don't use Spring. If you did, Spring's transactional test support would be ideal.
In short: Spring automatically starts a transaction before each test case and automatically rolls it back after the test case, leaving you with an empty (or at least unchanged) database.
Perhaps you could just mimic that mechanism:
Open a transaction in a @Before method, roll it back in an @After method.
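A minimal sketch of that approach with JUnit 4 (the sessionFactory field and its setup are assumed, not taken from the question):
public class TransactionalTestBase {

    private static SessionFactory sessionFactory; // assumed to be built once, e.g. in @BeforeClass

    private Session session;
    private Transaction transaction;

    @Before
    public void beginTransaction() {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();
    }

    @After
    public void rollbackTransaction() {
        // undo everything the test wrote, leaving the database unchanged
        transaction.rollback();
        session.close();
    }
}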

I wrote an integrator exactly for this purpose. Basically, we hook into the session factory creation flow, iterate over the table mappings found by Hibernate, and execute TRUNCATE TABLE xxx for each table. Since tables with foreign key constraints cannot be truncated, foreign key checks are disabled before the truncation and re-enabled afterwards.
static final class TruncatorIntegrator implements org.hibernate.integrator.spi.Integrator {

    @Override
    public void integrate(Metadata metadata,
                          SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        try (Session session = sessionFactory.openSession()) {
            session.doWork(connection -> {
                try (PreparedStatement preparedStatement = connection.prepareStatement("SET FOREIGN_KEY_CHECKS = 0;")) {
                    preparedStatement.executeUpdate();
                    System.out.printf("Disabled foreign key checks%n");
                } catch (SQLException e) {
                    System.err.printf("Cannot disable foreign key checks: %s: %s%n", e, e.getCause());
                }
                metadata.collectTableMappings().forEach(table -> {
                    String tableName = table.getQuotedName();
                    try (PreparedStatement preparedStatement = connection.prepareStatement("TRUNCATE TABLE " + tableName)) {
                        preparedStatement.executeUpdate();
                        System.out.printf("Truncated table: %s%n", tableName);
                    } catch (SQLException e) {
                        System.err.printf("Couldn't truncate table %s: %s: %s%n", tableName, e, e.getCause());
                    }
                });
                try (PreparedStatement preparedStatement = connection.prepareStatement("SET FOREIGN_KEY_CHECKS = 1;")) {
                    preparedStatement.executeUpdate();
                    System.out.printf("Enabled foreign key checks%n");
                } catch (SQLException e) {
                    System.err.printf("Cannot enable foreign key checks: %s: %s%n", e, e.getCause());
                }
            });
        }
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
    }
}
Usage: We have to register this Integrator in the session factory creation flow, and we also need to create a new session factory for each test.
BootstrapServiceRegistry bootstrapServiceRegistry = new BootstrapServiceRegistryBuilder().applyIntegrator(new TruncatorIntegrator()).build();
StandardServiceRegistry registry = new StandardServiceRegistryBuilder(bootstrapServiceRegistry).build();
SessionFactory sessionFactory = new Configuration().buildSessionFactory(registry);

Have you looked at in-memory MySQL (http://dev.mysql.com/downloads/connector/mxj/)? Also, is it not possible to roll back after each test? I believe you can configure it to do so.

You could probably drop and recreate the Hibernate schema using SchemaExport, although that seems pretty heavy-handed. Rolling back a transaction sounds like a better idea.
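If you do go the SchemaExport route, a minimal sketch, assuming a Hibernate 5 style bootstrap (configure() picks up hibernate.cfg.xml):
Metadata metadata = new MetadataSources(
        new StandardServiceRegistryBuilder().configure().build()).buildMetadata();
SchemaExport schemaExport = new SchemaExport();
schemaExport.drop(EnumSet.of(TargetType.DATABASE), metadata);   // drop all mapped tables
schemaExport.create(EnumSet.of(TargetType.DATABASE), metadata); // recreate them empty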

You could use an in-memory database and drop the complete database between your tests.
That approach works if you have few but long tests.
But be aware that every database behaves a bit differently than all others. So an in-memory database (for example HyperSQL) will not behave exactly like your normal database in some cases, so it is not a fully faithful integration test.
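For completeness, a minimal sketch of such an in-memory setup with HSQLDB, using standard Hibernate property names (the URL and the create-drop choice are assumptions, not from the question):
Properties props = new Properties();
props.put("hibernate.connection.driver_class", "org.hsqldb.jdbc.JDBCDriver");
props.put("hibernate.connection.url", "jdbc:hsqldb:mem:testdb");
props.put("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
props.put("hibernate.hbm2ddl.auto", "create-drop"); // schema is dropped when the factory closes
SessionFactory sessionFactory = new Configuration().addProperties(props).buildSessionFactory();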

Related

Mix JOOQ query with JDBC transaction

I have a use case where I would like to mix a JDBC transaction with a jOOQ context.
The JDBC code looks like this:
public void inTransaction(InTransaction lambda) {
    DataSource ds = dataSource.get();
    try (Connection connection = ds.getConnection()) {
        try {
            logger.info("set autocommit to false");
            connection.setAutoCommit(false);
            try (Statement statement = connection.createStatement()) {
                lambda.execute(statement);
                logger.info("committing transaction");
                connection.commit();
            }
        } catch (RuntimeException e) {
            logger.info("rolling back transaction");
            connection.rollback();
            throw e;
        } finally {
            logger.info("set autocommit to true");
            connection.setAutoCommit(true);
        }
    } catch (SQLException e) {
        throw new TilerException(e);
    }
}

@FunctionalInterface
public interface InTransaction {
    void execute(Statement statement) throws SQLException;
}
And I would like the lambda parameter to be able to work with both JDBC and jOOQ.
For JDBC, using a statement is pretty straightforward, for example something like this tutorial:
inTransaction(stmt -> {
    String SQL = "INSERT INTO Employees " +
                 "VALUES (106, 20, 'Rita', 'Tez')";
    stmt.executeUpdate(SQL);
    SQL = "INSERT INTO Employees " +
          "VALUES (107, 22, 'Sita', 'Singh')";
    stmt.executeUpdate(SQL);
});
In order to execute jOOQ queries in the same transaction I have to obtain a context. I found an API to get a DSLContext from a datasource/connection.
What is not clear to me is if/how to create a jOOQ DSLContext from a statement.
A solution to the problem you described
You can do all of this with jOOQ's transaction API:
// Create this ad-hoc, or inject it, or whatever
DSLContext ctx = DSL.using(dataSource, dialect);
And then:
public void inJDBCTransaction(InJDBCTransaction lambda) {
    ctx.transaction(config -> {
        config.dsl().connection(connection -> {
            try (Statement statement = connection.createStatement()) {
                lambda.execute(statement);
            }
        });
    });
}

public void inJOOQTransaction(InJOOQTransaction lambda) {
    ctx.transaction(config -> lambda.execute(config.dsl()));
}

@FunctionalInterface
public interface InJDBCTransaction {
    void execute(Statement statement) throws SQLException;
}

@FunctionalInterface
public interface InJOOQTransaction {
    void execute(DSLContext ctx);
}
Your final code:
inJDBCTransaction(stmt -> {
    String SQL = "INSERT INTO Employees " +
                 "VALUES (106, 20, 'Rita', 'Tez')";
    stmt.executeUpdate(SQL);
    SQL = "INSERT INTO Employees " +
          "VALUES (107, 22, 'Sita', 'Singh')";
    stmt.executeUpdate(SQL);
});
inJOOQTransaction(ctx -> {
    ctx.insertInto(EMPLOYEES).values(106, 20, "Rita", "Tez").execute();
    ctx.insertInto(EMPLOYEES).values(107, 22, "Sita", "Singh").execute();
});
I'm not too convinced about the need for this abstraction over jOOQ and JDBC. jOOQ never hides JDBC from you. You can always access the JDBC API via the DSLContext.connection() method. So, as shown above:
The jOOQ transaction API does exactly what you're planning to do: wrap a lambda in a transactional context, commit if it succeeds, roll back if it fails (your version's rollback doesn't work because it catches only RuntimeException, while the SQLException thrown by the lambda bypasses the rollback entirely).
If the "JDBC escape hatch" is needed, jOOQ can offer that.
Side note
In many RDBMS, you don't want to run queries on a static JDBC Statement. You'll want to use PreparedStatement instead because:
You'll profit from execution plan caching (and less contention on the cache)
You'll avoid syntax errors (in case your real query is dynamic)
You'll avoid SQL injection trouble
If you want to get the query string from jOOQ you can call
String sqlString = query.getSQL();
and then use this string in your statement:
stmt.executeUpdate(sqlString);
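If you do that, prefer binding the values rather than inlining them, to keep the PreparedStatement benefits listed above. A sketch assuming an open JDBC connection (query is any jOOQ Query):
String sql = query.getSQL();                // SQL with ? placeholders (jOOQ's default rendering)
List<Object> binds = query.getBindValues(); // bind values in placeholder order
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    for (int i = 0; i < binds.size(); i++) {
        ps.setObject(i + 1, binds.get(i)); // JDBC parameter indices are 1-based
    }
    ps.executeUpdate();
}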

Hibernate JPA update multi threads single entity

I have a message queue that delivers messages with entity field update info. There are 10 threads that process messages from the queue.
For example:
The 1st thread processes a message and should update field A of my entity with id 123.
The 2nd thread processes another message and should update field B of my entity with id 123 at the same time.
Sometimes after the updates the database doesn't contain some of the updated fields.
some updater:
someService.updateEntityFieldA(entityId, newFieldValue);
some service:
public Optional<Entity> findById(String entityId) {
    return Optional.ofNullable(new DBWorker().findOne(Entity.class, entityId));
}

public void updateEntityFieldA(String entityId, String newFieldValue) {
    findById(entityId).ifPresent(entity -> {
        entity.setFieldA(newFieldValue);
        new DBWorker().update(entity);
    });
}
db worker:
public <T> T findOne(final Class<T> type, Serializable entityId) {
    T findObj;
    try (Session session = HibernateUtil.openSessionPostgres()) {
        findObj = session.get(type, entityId);
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
    return findObj;
}

public void update(Object entity) {
    try (Session session = HibernateUtil.openSessionPostgres()) {
        session.beginTransaction();
        session.update(entity);
        session.getTransaction().commit();
    } catch (Exception e) {
        throw new HibernateException("database error. " + e.getMessage(), e);
    }
}
HibernateUtil.openSessionPostgres() obtains a new session each time from
sessionFactory.openSession()
Is it possible to do such logic without thread locks, optimistic locking, or pessimistic locking?
If you use sessionFactory.openSession() to always open a new session, then on the update Hibernate may be losing the info about which fields are dirty and need updating, so it issues an UPDATE of all fields.
Setting the hibernate.show_sql property to true will show you the SQL UPDATE statements generated by Hibernate.
Try refactoring your code to load the entity and update the field in the same transaction. A session.update is not needed: as the entity is managed, Hibernate will flush the changes on transaction commit and issue a SQL UPDATE.
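A minimal sketch of that refactoring, reusing the names from the question so the read and the write share one session and transaction (the DBWorker indirection is bypassed on purpose):
public void updateEntityFieldA(String entityId, String newFieldValue) {
    try (Session session = HibernateUtil.openSessionPostgres()) {
        Transaction tx = session.beginTransaction();
        Entity entity = session.get(Entity.class, entityId);
        if (entity != null) {
            entity.setFieldA(newFieldValue); // managed entity: dirty checking flushes this on commit
        }
        tx.commit();
    }
}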

Hibernate Multi-table Bulk Operations always try to create the temporary table

I have some entities using join-inheritance and I'm doing bulk operations on them. As explained in Multi-table Bulk Operations, Hibernate uses a temporary table to execute the bulk operations.
As I understand temporary tables, the data in them is temporary (deleted at the end of the transaction or session) but the tables themselves are permanent. What I see is that Hibernate tries to create the temporary table every time such a query is executed, which in my case is more than 35,000 times per hour. The CREATE TABLE statement obviously fails every time, because a table with that name already exists. This is really unnecessary and probably hurts performance; also the DBAs are not happy...
Is there a way that Hibernate remembers that it already created the temporary table?
If not, are there any workarounds? My only idea is to use single-table-inheritance instead to avoid using temporary tables completely.
Hibernate version is 4.2.8, DB is Oracle 11g.
I think this is a bug in TemporaryTableBulkIdStrategy, because the Oracle8iDialect says that temporary tables shouldn't be dropped after use:
@Override
public boolean dropTemporaryTableAfterUse() {
    return false;
}
But this check is made only when dropping the table:
protected void releaseTempTable(Queryable persister, SessionImplementor session) {
    if ( session.getFactory().getDialect().dropTemporaryTableAfterUse() ) {
        TemporaryTableDropWork work = new TemporaryTableDropWork( persister, session );
        if ( shouldIsolateTemporaryTableDDL( session ) ) {
            session.getTransactionCoordinator()
                    .getTransaction()
                    .createIsolationDelegate()
                    .delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
        }
        else {
            final Connection connection = session.getTransactionCoordinator()
                    .getJdbcCoordinator()
                    .getLogicalConnection()
                    .getConnection();
            work.execute( connection );
            session.getTransactionCoordinator()
                    .getJdbcCoordinator()
                    .afterStatementExecution();
        }
    }
    else {
        // at the very least cleanup the data :)
        PreparedStatement ps = null;
        try {
            final String sql = "delete from " + persister.getTemporaryIdTableName();
            ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( sql, false );
            session.getTransactionCoordinator().getJdbcCoordinator().getResultSetReturn().executeUpdate( ps );
        }
        catch( Throwable t ) {
            log.unableToCleanupTemporaryIdTable(t);
        }
        finally {
            if ( ps != null ) {
                try {
                    session.getTransactionCoordinator().getJdbcCoordinator().release( ps );
                }
                catch( Throwable ignore ) {
                    // ignore
                }
            }
        }
    }
}
but not when creating the table:
protected void createTempTable(Queryable persister, SessionImplementor session) {
    // Don't really know all the codes required to adequately decipher returned jdbc exceptions here.
    // simply allow the failure to be eaten and the subsequent insert-selects/deletes should fail
    TemporaryTableCreationWork work = new TemporaryTableCreationWork( persister );
    if ( shouldIsolateTemporaryTableDDL( session ) ) {
        session.getTransactionCoordinator()
                .getTransaction()
                .createIsolationDelegate()
                .delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
    }
    else {
        final Connection connection = session.getTransactionCoordinator()
                .getJdbcCoordinator()
                .getLogicalConnection()
                .getConnection();
        work.execute( connection );
        session.getTransactionCoordinator()
                .getJdbcCoordinator()
                .afterStatementExecution();
    }
}
As a workaround, you could extend the Oracle dialect and override the dropTemporaryTableAfterUse method to return true, so the table is dropped after each use and the next CREATE succeeds.
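A minimal sketch of that dialect workaround (the subclass name is made up; extend whichever Oracle dialect version you actually use):
public class TempTableDroppingOracleDialect extends Oracle10gDialect {
    @Override
    public boolean dropTemporaryTableAfterUse() {
        return true; // drop the temp table after use so the next CREATE TABLE no longer collides
    }
}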
I filed the HHH-9744 issue for this.
With Vlad pointing me in the right direction I came up with the following workaround to cache the names of already created temporary tables:
public class FixedTemporaryTableBulkIdStrategy extends TemporaryTableBulkIdStrategy {

    private final Set<String> tables = new CopyOnWriteArraySet<>();

    @Override
    protected void createTempTable(Queryable persister, SessionImplementor session) {
        final String temporaryIdTableName = persister.getTemporaryIdTableName();
        if (!tables.contains(temporaryIdTableName)) {
            super.createTempTable(persister, session);
            tables.add(temporaryIdTableName);
        }
    }
}
This can be used by setting the property hibernate.hql.bulk_id_strategy to the fully qualified name of this class.
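For example, in hibernate.properties (the com.example package is an assumption):
hibernate.hql.bulk_id_strategy=com.example.FixedTemporaryTableBulkIdStrategy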
Please note that this is not a general solution and only works if the database/dialect uses global temporary tables (as opposed to session/transaction-specific ones).

MS SQL JDBC: set connection URL parameter to disable foreign-key checks

Exactly like here I want to disable the foreign key check when modifying the database.
However I can't get it to work. If I try this:
jdbc:sqlserver://myserver:1433?sessionVariables=FOREIGN_KEY_CHECKS=0
It will try to take the whole 1433?sessionVariables=FOREIGN_KEY_CHECKS=0 as a port number.
Trying it like this:
jdbc:sqlserver://myserver:1433;FOREIGN_KEY_CHECKS=0
doesn't help either, it will setup the connection but throw a foreign-key constraint violation.
I already looked through the Microsoft API on the JDBC driver and googled, but it wasn't any help.
Does someone know a solution?
It seems like this is not possible with MSSQL. The spec for the URL is
jdbc:sqlserver://[serverName[\instanceName][:portNumber]];property=value[;property=value]]
It seems to me that no session variables can be added here.
My solution is to disable and re-enable the database constraints using a statement.
public static void resetDatabase(String dataSetFilename) throws DatabaseException {
    IDatabaseConnection connection = null;
    try {
        connection = getDatabaseConnection();
        disableDatabaseConstraints(connection);
        executeDatabaseReset(connection, dataSetFilename);
    } catch (Exception ex) {
        throw new DatabaseException(ex);
    } finally {
        if (connection != null) { // guard: getDatabaseConnection() itself may have failed
            enableDatabaseConstraints(connection);
        }
    }
}

private static void disableDatabaseConstraints(IDatabaseConnection connection) throws DatabaseException {
    try {
        Statement disableConstraintsStatement = connection.getConnection().createStatement();
        disableConstraintsStatement.execute("exec sp_MSforeachtable \"ALTER TABLE ? NOCHECK CONSTRAINT ALL\"");
        disableConstraintsStatement.execute("exec sp_MSforeachtable \"ALTER TABLE ? DISABLE TRIGGER ALL\"");
    } catch (SQLException ex) {
        throw new DatabaseException(ex);
    }
}

private static void enableDatabaseConstraints(IDatabaseConnection connection) throws DatabaseException {
    try {
        Statement enableConstraintsStatement = connection.getConnection().createStatement();
        enableConstraintsStatement.execute("exec sp_MSforeachtable \"ALTER TABLE ? CHECK CONSTRAINT ALL\"");
        enableConstraintsStatement.execute("exec sp_MSforeachtable \"ALTER TABLE ? ENABLE TRIGGER ALL\"");
    } catch (SQLException ex) {
        throw new DatabaseException(ex);
    }
}

Lock wait timeout exceeded with Hibernate and MySQL (using play framework)

In my web application I'm using stateless sessions with Hibernate to get better performance on my inserts and updates.
It was working fine with the H2 database (the one used in the Play framework in dev mode).
But when I test it with MySQL I get the following exception:
ERROR ~ Lock wait timeout exceeded; try restarting transaction
ERROR ~ HHH000315: Exception executing batch [Lock wait timeout exceeded; try restarting transaction]
Here is the code :
public static void update() {
    Session session = (Session) JPA.em().getDelegate();
    StatelessSession stateless = session.getSessionFactory().openStatelessSession();
    try {
        stateless.beginTransaction();
        // Fetch all products
        {
            List<ProductType> list = ProductType.retrieveAllWithHistory();
            for (ProductType pt : list) {
                updatePrice(pt, stateless);
            }
        }
        // Fetch all raw materials
        {
            List<RawMaterialType> list = RawMaterialType.retrieveAllWithHistory();
            for (RawMaterialType rm : list) {
                updatePrice(rm, stateless);
            }
        }
    } catch (Exception ex) {
        play.Logger.error(ex.getMessage());
        ExceptionLog.log(ex, Thread.currentThread());
    } finally {
        stateless.getTransaction().commit();
        stateless.close();
    }
}

private static void updatePrice(ProductType pt, StatelessSession stateless) {
    pt.priceDelta = computeDelta();
    pt.unitPrice = computePrice();
    stateless.update(pt);
    PriceHistory ph = new PriceHistory(pt, price);
    stateless.insert(ph);
}

private static void updatePrice(RawMaterialType rm, StatelessSession stateless) {
    rm.priceDelta = computeDelta();
    rm.unitPrice = computePrice();
    stateless.update(rm);
    PriceHistory ph = new GoodPriceHistory(rm, price);
    stateless.insert(ph);
}
In this example I have 3 simple Entities (ProductType, RawMaterialType and PriceHistory).
computeDelta and computePrice are just algorithm functions with no DB stuff.
retrieveAllWithHistory functions are functions that fetch some data from the database using Play framework model functions.
So this code retrieves some data, edits some of it, creates new records, and finally saves everything.
Why do I get a lock exception with MySQL and no exception with H2?
I'm not sure why you have a commit in a finally block. Give this structure a try:
try {
    factory.getCurrentSession().beginTransaction();
    // ... do the work here ...
    factory.getCurrentSession().getTransaction().commit();
} catch (RuntimeException e) {
    factory.getCurrentSession().getTransaction().rollback();
    throw e; // or display error message
}
Also, it might be helpful for you to check this documentation.
