I have a use case where I would like to mix a JDBC transaction with a jOOQ context.
The JDBC code looks like this:
public void inTransaction(InTransaction lambda) {
    DataSource ds = dataSource.get();
    try (Connection connection = ds.getConnection()) {
        try {
            logger.info("set autocommit to false");
            connection.setAutoCommit(false);
            try (Statement statement = connection.createStatement()) {
                lambda.execute(statement);
                logger.info("committing transaction");
                connection.commit();
            }
        } catch (RuntimeException e) {
            logger.info("rolling back transaction");
            connection.rollback();
            throw e;
        } finally {
            logger.info("set autocommit to true");
            connection.setAutoCommit(true);
        }
    } catch (SQLException e) {
        throw new TilerException(e);
    }
}
@FunctionalInterface
public interface InTransaction {
    void execute(Statement statement) throws SQLException;
}
And I would like the lambda parameter to be able to work with both JDBC and jOOQ.
For JDBC, using a statement is pretty straightforward, for example something like this tutorial:
inTransaction(stmt -> {
    String sql = "INSERT INTO Employees " +
                 "VALUES (106, 20, 'Rita', 'Tez')";
    stmt.executeUpdate(sql);
    sql = "INSERT INTO Employees " +
          "VALUES (107, 22, 'Sita', 'Singh')";
    stmt.executeUpdate(sql);
});
In order to execute jOOQ queries in the same transaction I have to obtain a context. I found an API to get a DSLContext from a DataSource/Connection.
What is not clear to me is if/how to create a jOOQ DSLContext from a Statement.
A solution to the problem you described
You can do all of this with jOOQ's transaction API:
// Create this ad-hoc, or inject it, or whatever
DSLContext ctx = DSL.using(dataSource, dialect);
And then:
public void inJDBCTransaction(InJDBCTransaction lambda) {
    ctx.transaction(config -> {
        config.dsl().connection(connection -> {
            try (Statement statement = connection.createStatement()) {
                lambda.execute(statement);
            }
        });
    });
}

public void inJOOQTransaction(InJOOQTransaction lambda) {
    ctx.transaction(config -> lambda.execute(config.dsl()));
}
@FunctionalInterface
public interface InJDBCTransaction {
    void execute(Statement statement) throws SQLException;
}

@FunctionalInterface
public interface InJOOQTransaction {
    void execute(DSLContext ctx);
}
Your final code:
inJDBCTransaction(stmt -> {
    String sql = "INSERT INTO Employees " +
                 "VALUES (106, 20, 'Rita', 'Tez')";
    stmt.executeUpdate(sql);
    sql = "INSERT INTO Employees " +
          "VALUES (107, 22, 'Sita', 'Singh')";
    stmt.executeUpdate(sql);
});
inJOOQTransaction(ctx -> {
    ctx.insertInto(EMPLOYEES).values(106, 20, "Rita", "Tez").execute();
    ctx.insertInto(EMPLOYEES).values(107, 22, "Sita", "Singh").execute();
});
I'm not too convinced about the need for this abstraction over jOOQ and JDBC. jOOQ never hides JDBC from you: you can always access the JDBC API through the DSLContext.connection() method, as shown above. So:
The jOOQ transaction API does exactly what you're planning to do: wrap a lambda in a transactional context, commit if it succeeds, roll back if it fails (your version's rollback doesn't work because it catches the wrong exception; it catches only RuntimeException, so a SQLException thrown from the lambda bypasses the rollback).
If the "JDBC escape hatch" is needed, jOOQ can offer that, too.
Side note
In many RDBMS, you don't want to run queries on a static JDBC Statement. You'll want to use PreparedStatement instead because:
You'll profit from execution plan caching (and less contention on the cache)
You'll avoid syntax errors (in case your real query is dynamic)
You'll avoid SQL injection trouble
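As a minimal sketch (reusing the Employees example from above, so the table and values are just illustrative), the raw JDBC version with bind values would look like this:
// Sketch: the same insert as above, but with bind markers instead of
// string-concatenated literals.
String sql = "INSERT INTO Employees VALUES (?, ?, ?, ?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setInt(1, 107);
    ps.setInt(2, 22);
    ps.setString(3, "Sita");
    ps.setString(4, "Singh");
    ps.executeUpdate();
}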
If you want to get the query string from jOOQ you can call
String sqlString = query.getSQL()
and then use this string in your statement:
stmt.executeUpdate(sqlString);
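Note that by default getSQL() renders bind variables as ? placeholders, so the string pairs most naturally with a PreparedStatement. A minimal sketch, assuming the same ctx and EMPLOYEES as in the examples above:
// Sketch: pair query.getSQL() with query.getBindValues() on a PreparedStatement.
Query query = ctx.insertInto(EMPLOYEES).values(106, 20, "Rita", "Tez");
try (PreparedStatement ps = connection.prepareStatement(query.getSQL())) {
    List<Object> bindValues = query.getBindValues();
    for (int i = 0; i < bindValues.size(); i++)
        ps.setObject(i + 1, bindValues.get(i));
    ps.executeUpdate();
}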
Related
While connecting an applet to an Access DB using jdbc:ucanaccess, I get the following error:
Firstdb.java:44: error: unreported exception SQLException;
must be caught or declared to be thrown
stmt.executeUpdate(sql);
^
The code that I used for the applet is as follows (add() and setBounds() are removed from init()):
public class Firstdb extends Applet implements ActionListener {
    TextField t1, t2;
    Label l1;
    Button b1, b2;
    Connection con;
    Statement stmt;

    public void init() {
        try {
            con = DriverManager.getConnection("jdbc:ucanaccess://H:/test/db.mdb");
            stmt = con.createStatement();
        } catch (Exception e) {
        }
    }

    public void actionPerformed(ActionEvent ae) {
        String sql;
        if (ae.getSource() == b1) {
            sql = "insert into user (Uname) values(" + t1.getText() + ")";
            stmt.executeUpdate(sql);
        } else if (ae.getSource() == b2) {
            //do something
        }
    }
}
Note: java version "1.8.0_141"
Why am I getting this error?
Your code has two fatal flaws:
1. value is not a valid SQL keyword. It should be values. [Fixed in subsequent edit to the question.]
2. Your dynamic SQL is generating command text with invalid syntax (an unquoted string literal).
Also, user is a reserved word (a function name), so if you need to use it as a table name you really should enclose it in square brackets.
The proper solution to issue #2 above is to use a parameterized query, e.g.,
sql = "insert into [user] ([Uname]) values (?)";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setString(1, t1.getText());
ps.executeUpdate();
It is a compile error, which means that you either need to surround your stmt.executeUpdate(sql); with a try-catch statement, or your method should declare that it throws SQLException:
public void actionPerformed(ActionEvent ae) {
    try {
        String sql;
        if (ae.getSource() == b1) {
            sql = "insert into user (Uname) values(" + t1.getText() + ")";
            stmt.executeUpdate(sql);
        } else if (ae.getSource() == b2) {
            //do something
        }
    } catch (SQLException e) {
        // do something
    }
}
or (note that this second variant won't actually compile for an ActionListener, because the overridden actionPerformed method does not declare any checked exceptions):
public void actionPerformed(ActionEvent ae) throws SQLException {
    String sql;
    if (ae.getSource() == b1) {
        sql = "insert into user (Uname) values(" + t1.getText() + ")";
        stmt.executeUpdate(sql);
    } else if (ae.getSource() == b2) {
        //do something
    }
}
EDIT: By the way, I don't see whether you are loading or registering the UCanAccess JDBC driver class before opening the database connection:
// Step 1: Loading or registering the UCanAccess JDBC driver class
try {
    Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
} catch (ClassNotFoundException cnfex) {
    System.out.println("Problem in loading or "
            + "registering MS Access JDBC driver");
    cnfex.printStackTrace();
}
So do you have it or not? If not, you should add it before getting your connection:
con = DriverManager.getConnection("jdbc:ucanaccess://H:/test/db.mdb");
The message
Firstdb.java:44: error: unreported exception SQLException;
must be caught or declared to be thrown
stmt.executeUpdate(sql);
^
looks like a compile error, so the question is how you compile/deploy the applet classes and where exactly you see that error message. IDEs like Eclipse create class files even in case of compile errors; those classes contain and output the error message that you also see in the IDE. Maybe you fail to update the applet classes and end up testing with the same old classes instead of the changed ones, leading to the same message over and over again.
If you deploy the applet as part of a web app, the HTTP server might cache the class file if the web app has been deployed as a WAR file. Browsers also tend to cache class files quite aggressively, so you should make sure that the browser's cache is empty each time you start a new test.
I'm developing a tool to continuously export changes from MongoDB to an Oracle database.
I have a problem with executing batch operations (Oracle).
static void save(List result) {
    withBatchConnection { Statement stm ->
        result.each { String line ->
            stm.addBatch(line)
        }
    }
}

static withConnection(Closure closure) {
    def conn = null
    boolean success = false
    while (!success) {
        try {
            conn = getConnection()
            closure.call(conn)
            success = true
        } catch (e) {
            log.error('Connection problem', e)
            log.error(e, e)
            log.info('Retrying for 30 sec')
            sleep(30000)
        } finally {
            conn?.close()
        }
    }
}

static withTransactionConnection(Closure closure) {
    withConnection { Sql sql ->
        OracleConnection conn = sql.getConnection() as OracleConnection
        conn.setAutoCommit(false)
        closure.call(conn)
        conn.commit()
    }
}

static withBatchConnection(Closure closure) {
    withTransactionConnection { Connection conn ->
        def statement = conn.createStatement()
        closure.call(statement)
        statement.executeBatch()
        statement.close()
    }
}
The problem is that I can't use a prepared statement, because the order of operations is very important.
When I'm saving to MySQL with rewriteBatchedStatements enabled, I get about 10k operations per second. For Oracle it is 400 operations/s.
Is there any chance to make it faster?
I'm using OJDBC 7 and Groovy 2.4.7.
Please set the array size on the client side to the maximum and try again.
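One technique worth sketching here (my own suggestion, not something the answer above spells out): you can keep the overall order of operations and still benefit from PreparedStatement batching by flushing the current batch whenever the SQL text changes, so that runs of consecutive operations with identical SQL share one batch. The Operation carrier type below is hypothetical:
// Hedged sketch: preserve operation order, but batch consecutive operations
// that share the same SQL text on a single PreparedStatement.
class OrderedBatchSaver {

    // Hypothetical carrier of SQL text plus its bind values.
    static class Operation {
        final String sql;
        final List<Object> params;
        Operation(String sql, List<Object> params) { this.sql = sql; this.params = params; }
    }

    static void saveInOrder(Connection conn, List<Operation> ops) throws SQLException {
        String currentSql = null;
        PreparedStatement ps = null;
        try {
            for (Operation op : ops) {
                if (!op.sql.equals(currentSql)) {
                    if (ps != null) {
                        ps.executeBatch(); // SQL text changed: flush the pending batch
                        ps.close();
                    }
                    currentSql = op.sql;
                    ps = conn.prepareStatement(currentSql);
                }
                for (int i = 0; i < op.params.size(); i++)
                    ps.setObject(i + 1, op.params.get(i));
                ps.addBatch();
            }
            if (ps != null)
                ps.executeBatch(); // flush the final batch
        } finally {
            if (ps != null)
                ps.close();
        }
    }
}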
When a customer gets removed from the database, his status has to be set to offline (rather than actually deleting the customer from the database). In my database I'm using tinyint(1) to set the status to 0 (inactive) or 1 (active). Now how do I go about coding this?
// Delete a customer on the database by setting his status to inactive
public void deleteCust(int id) throws DBException {
    // connect (and close connection)
    try (Connection conn = ConnectionManager.getConnection()) {
        // PreparedStatement
        try (PreparedStatement stmt = conn.prepareStatement(
                "update customer set active = ? where id = ?")) {
            stmt.settinyint(1, 0);
            stmt.setInt(2, id);
            stmt.execute();
        } catch (SQLException sqlEx) {
            throw new DBException("SQL-exception in deleteCust - statement" + sqlEx);
        }
    } catch (SQLException sqlEx) {
        throw new DBException(
                "SQL-exception in deleteCust - connection" + sqlEx);
    }
}
I'm using an SQL database via USB webserver on localhost. I'm unsure whether this works right now because the rest of my code is incomplete and I'm unable to test, but I have to be sure before continuing. Thank you
Use setByte() instead, because TINYINT in SQL corresponds to byte in Java. The JDBC API also has matching methods for it: setByte(), updateByte(), and getByte().
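Applied to the code in the question, the line with the non-existent settinyint() would become:
stmt.setByte(1, (byte) 0); // 0 = inactive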
I have JDBC code in which multiple savepoints are present; something like this:
1st insert statement
2nd insert statement
savepoint = conn.setSavepoint("S1");
1st insert statement
2nd update statement
savepoint = conn.setSavepoint("S2");
1st delete statement
2nd delete statement
savepoint = conn.setSavepoint("S3");
1st insert statement
2nd delete statement
savepoint = conn.setSavepoint("S4");
Now in the catch block, I am catching the exception and checking whether the savepoint is null; if it is, I roll back the entire connection, otherwise I roll back to a savepoint. But I am not able to understand to which savepoint I should roll back.
Would it be fine if I changed all the savepoint names to "S1"? In that case, how would I know up to which savepoint the work was done correctly?
Please advise how to determine up to which savepoint the work was performed correctly.
I would view this as multiple transactions. Hence you could handle this with multiple try/catch blocks, for example as sketched below. You also seem to be overwriting the savepoint objects, so rolling back to an earlier savepoint would not be feasible.
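A minimal sketch of that idea (the actual statements are elided, as in the question): keep each savepoint in its own variable, and in the catch block roll back to the last savepoint that was actually reached:
// Sketch: distinct savepoint variables tell you how far the work got.
void runGroups(Connection conn) throws SQLException {
    Savepoint s1 = null, s2 = null;
    try {
        // ... first group of statements ...
        s1 = conn.setSavepoint("S1");
        // ... second group of statements ...
        s2 = conn.setSavepoint("S2");
        // ... third group of statements ...
        conn.commit();
    } catch (SQLException e) {
        if (s2 != null) conn.rollback(s2);      // groups 1 and 2 succeeded
        else if (s1 != null) conn.rollback(s1); // only group 1 succeeded
        else conn.rollback();                   // roll back the whole transaction
        throw e;
    }
}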
More info.
JDBC also supports setting savepoints and then rolling back to a specified savepoint. The following method can be used to define a savepoint:
Savepoint savePoint1 = connection.setSavepoint();
Roll back a transaction to an already defined savepoint using the rollback call with an argument:
connection.rollback(savePoint1);
Reference.
https://www.stackstalk.com/2014/08/jdbc-handling-transactions.html
In such cases, I've found the tricky part is to make sure you commit the transaction only if all inserts succeed, but roll back all updates if any insert fails. I've used a savepoint stack to handle such situations. The highly simplified code is as follows:
A connection wrapper class:
public class MyConnection {
    Connection conn;
    static DataSource ds;
    Stack<Savepoint> savePoints = null;

    static {
        //... stuff to initialize datasource.
    }

    public MyConnection() throws SQLException {
        conn = ds.getConnection();
    }

    public void beginTransaction() throws SQLException {
        if (savePoints == null) {
            savePoints = new Stack<Savepoint>();
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        } else {
            savePoints.push(conn.setSavepoint());
        }
    }

    public void commit() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            conn.commit();
        } else {
            Savepoint sp = savePoints.pop();
            conn.releaseSavepoint(sp);
        }
    }

    public void rollback() throws SQLException {
        if (savePoints == null || savePoints.empty()) {
            conn.rollback();
        } else {
            Savepoint sp = savePoints.pop();
            conn.rollback(sp);
        }
    }

    public void releaseConnection() throws SQLException {
        conn.close();
    }
}
Then you can have various methods that may be called independently or in combination. In the example below, methodA may be called on its own, or as a result of calling methodB.
public class AccessDb {
    public void methodA(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            // update table A
            // update table B
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e;
        }
    }

    public void methodB(MyConnection myConn) throws Exception {
        myConn.beginTransaction();
        try {
            methodA(myConn);
            // update table C
            myConn.commit();
        } catch (Exception e) {
            myConn.rollback();
            throw e;
        }
    }
}
This way, if anything goes wrong, it rolls back fully (as a result of the exception handling), but it will only ever commit the entire transaction, never a partially completed one.
I would like to truncate all my database tables between one integration test and the next. What is the best way to do this using Hibernate?
Currently I'm doing this:
public void cleanDatabase() {
    doWithSession(new Action1<Session>() {
        @Override
        public void doSomething(Session session) {
            SQLQuery query = session.createSQLQuery("truncate table stuff");
            // todo - generify this to all tables
            query.executeUpdate();
        }
    });
}
(doWithSession is a small wrapper that creates and closes a session.) I could iterate over all my mapped objects using reflection ... but I'm wondering if someone has already solved this problem.
I guess you probably don't use Spring. If you did, Spring's transactional test support would be ideal.
In short: Spring automatically starts a transaction before each test case and automatically rolls it back after the test case, leaving you with an empty (or at least unchanged) database.
Perhaps you could just mimic that mechanism:
Open a transaction in a @Before method, roll it back in an @After method.
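A minimal sketch of that mimicry with plain Hibernate and JUnit 4 (the sessionFactory field is assumed to be initialized elsewhere):
// Sketch: open a transaction before each test and always roll it back,
// so every test sees an unchanged database.
public abstract class TransactionalTestBase {
    protected static SessionFactory sessionFactory; // assumed set up elsewhere
    protected Session session;
    private Transaction tx;

    @Before
    public void openTransaction() {
        session = sessionFactory.openSession();
        tx = session.beginTransaction();
    }

    @After
    public void rollbackTransaction() {
        if (tx != null) tx.rollback();
        if (session != null) session.close();
    }
}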
I wrote an integrator exactly for this purpose. Basically, we hook into the session factory creation flow, iterate over the table mappings found by Hibernate, and execute TRUNCATE TABLE xxx for each table. Since we couldn't truncate tables with foreign key constraints, foreign key checks are disabled before the truncation and re-enabled afterwards (the SET FOREIGN_KEY_CHECKS statement used below is MySQL-specific).
static final class TruncatorIntegrator implements org.hibernate.integrator.spi.Integrator {

    @Override
    public void integrate(Metadata metadata,
                          SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        try (Session session = sessionFactory.openSession()) {
            session.doWork(connection -> {
                try (PreparedStatement preparedStatement = connection.prepareStatement("SET FOREIGN_KEY_CHECKS = 0;")) {
                    preparedStatement.executeUpdate();
                    System.out.printf("Disabled foreign key checks%n");
                } catch (SQLException e) {
                    System.err.printf("Cannot disable foreign key checks: %s: %s%n", e, e.getCause());
                }
                metadata.collectTableMappings().forEach(table -> {
                    String tableName = table.getQuotedName();
                    try (PreparedStatement preparedStatement = connection.prepareStatement("TRUNCATE TABLE " + tableName)) {
                        preparedStatement.executeUpdate();
                        System.out.printf("Truncated table: %s%n", tableName);
                    } catch (SQLException e) {
                        System.err.printf("Couldn't truncate table %s: %s: %s%n", tableName, e, e.getCause());
                    }
                });
                try (PreparedStatement preparedStatement = connection.prepareStatement("SET FOREIGN_KEY_CHECKS = 1;")) {
                    preparedStatement.executeUpdate();
                    System.out.printf("Enabled foreign key checks%n");
                } catch (SQLException e) {
                    System.err.printf("Cannot enable foreign key checks: %s: %s%n", e, e.getCause());
                }
            });
        }
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
    }
}
Usage: we have to register this Integrator in the session factory creation flow, and we also need to create a new session factory for each test.
BootstrapServiceRegistry bootstrapServiceRegistry = new BootstrapServiceRegistryBuilder().applyIntegrator(new TruncatorIntegrator()).build();
StandardServiceRegistry registry = new StandardServiceRegistryBuilder(bootstrapServiceRegistry).build();
SessionFactory sessionFactory = new Configuration().buildSessionFactory(registry);
Have you looked at in-memory MySQL (http://dev.mysql.com/downloads/connector/mxj/)? Also, is it not possible to roll back after each test? I believe you can configure it that way.
You could probably drop and recreate the Hibernate schema using SchemaExport, although that seems pretty heavy-handed. Rolling back a transaction sounds like a better idea.
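For completeness, a sketch of the SchemaExport route, assuming the classic Configuration-based Hibernate API that the question's code suggests:
// Sketch: drop and recreate every mapped table between tests (heavy-handed).
Configuration configuration = new Configuration().configure();
SchemaExport schemaExport = new SchemaExport(configuration);
schemaExport.drop(false, true);   // don't print the script, do execute it against the DB
schemaExport.create(false, true); // recreate the schema empty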
You could use an in-memory database and drop the complete database between your tests.
That approach works if you have few but long tests.
But be aware that every database behaves a bit differently from every other. An in-memory database (for example HyperSQL) will not behave exactly like your normal database in some cases, so strictly speaking it is not a fully faithful integration test.