How to get the underlying connection inside a transaction using jOOQ? - java

I'm using jOOQ inside an existing project which also uses some custom JDBC code. Inside a jOOQ transaction I need to call some other JDBC code, and I need to pass through the active connection so that everything runs in the same transaction.
I don't know how to retrieve the underlying connection inside a jOOQ transaction.
create.transaction(configuration -> {
    DSLContext ctx = DSL.using(configuration);

    // standard jOOQ code
    ctx.insertInto(...);

    // now I need a Connection
    Connection c = ctx.activeConnection(); // not real, this is what I need
    someOtherCode(c, ...);
});
Reading the docs and peeking a bit at the source code, my best bet is this:
configuration.connectionProvider().acquire()
But the name is a bit misleading in this particular use case. I don't want a new connection, just the current one. I think this is the way to go, because the configuration is derived and I will always get the same connection, but I'm not sure and I can't find the answer in the documentation.

jOOQ's API makes no assumptions about the existence of a "current" connection. Depending on your concrete implementations of ConnectionProvider, TransactionProvider, etc., this may or may not be possible.
Your workaround is generally fine, though. Just make sure you follow the ConnectionProvider's SPI contract:
Connection c = null;
try {
    c = configuration.connectionProvider().acquire();
    someOtherCode(c, ...);
}
finally {
    configuration.connectionProvider().release(c);
}
The above is fine when you're using jOOQ's DefaultTransactionProvider, for instance.
Note there is a pending feature request #4552 that will allow you to run code in the context of a ConnectionProvider and its calls to acquire() and release(). This is what it will look like:
DSL.using(configuration)
   .connection(c -> someOtherCode(c, ...));

Related

LazyInitializationException in unit tests under Spring-Data/Spring-Boot

My unit tests are failing with org.hibernate.LazyInitializationException: could not initialize proxy [org.openapitools.entity.MenuItem#5] - no Session. I'm not sure why they expect a session in a unit test. I'm writing to an in-memory H2 database for the unit tests of my Controller classes that implement the RESTful APIs. I'm not using any mock objects for the test, because I want to test the actual database transactions. This worked fine when I was using Spring Boot version 1.x, but broke when I moved to version 2. (I'm not sure if that's what caused the tests to break, since I made lots of other changes. My point is that my code has passed these tests already.)
My repositories extend JpaRepository, so I'm using a standard Hibernate interface.
There are many answers to this question on StackOverflow, but very few describe a solution that I could use with Spring-Data.
Addendum: Here's a look at the unit test:
@Test
public void testDeleteOption() throws ResponseException {
    MenuItemDto menuItemDto = createPizzaMenuItem();
    ResponseEntity<CreatedResponse> responseEntity
            = adminApiController.addMenuItem(menuItemDto);
    final CreatedResponse body = responseEntity.getBody();
    assertNotNull(body);
    Integer id = body.getId();
    MenuItem item = menuItemApiController.getMenuItemTestOnly(id);
    // Hibernate.initialize(item); // attempted fix blows up
    List<String> nameList = new LinkedList<>();
    for (MenuItemOption option : item.getAllowedOptions()) { // blows up here
        nameList.add(option.getName());
    }
    assertThat(nameList, hasItems("pepperoni", "olives", "onions"));
    // ... (more code)
}
My test application.properties has these settings
spring.datasource.url=jdbc:h2:mem:pizzaChallenge;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=pizza
spring.datasource.password=pizza
spring.jpa.show-sql=true
This is not standard Hibernate, but Spring Data. You have to understand that Hibernate uses lazy loading to avoid loading the whole object graph from the database. If you close the session or connection to the database, e.g. by ending a transaction, Hibernate can't lazy-load anymore, and apparently your code tries to access state that needs lazy loading.
You can use @EntityGraph on your repository to specify that an association should be fetched, or you can avoid accessing state that isn't initialized outside of a transaction. Maybe you just need to enlarge the transaction scope by putting @Transactional on the method that calls the repository and accesses the state, so that lazy loading works.
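A minimal sketch of the @EntityGraph approach, assuming a Spring Data repository for MenuItem with an "allowedOptions" association; the repository and method names are illustrative, not from the original post:

import java.util.Optional;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface MenuItemRepository extends JpaRepository<MenuItem, Integer> {

    // Fetch the lazy "allowedOptions" association together with the entity,
    // so it is already initialized when the session is gone
    @EntityGraph(attributePaths = "allowedOptions")
    Optional<MenuItem> findWithAllowedOptionsById(Integer id);
}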
I found a way around this. I'm not sure if this is the best approach, so if anyone has any better ideas, I'd appreciate hearing from them.
Here's what I did. First of all, before reading a value from the lazy-loaded entity, I call Hibernate.initialize(item);
This throws the same exception. But now I can add a property to the test version of application.properties that says
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
Now the initialize method will work.
P.S. I haven't been able to find a good reference for Spring properties like this one. If anyone knows where I can see the available properties, I'd love to hear about it. The folks at Spring don't do a very good job of documenting these properties. Even when they mention a specific property, they don't provide a link that might explain it more thoroughly.

How to write JUnit tests for a Vert.x microservice which is interacting with an Oracle DB?

I need to write the JUnit tests for an oracle-wrapper (basically a microservice written on Vert.x which interacts with an Oracle DB). How do I proceed? Mockito can't be used.
How about using an in-memory database, e.g. the H2 database, which can run in Oracle compatibility mode:
To use the Oracle mode, use the database URL
jdbc:h2:~/test;MODE=Oracle or the SQL statement SET MODE Oracle.
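As a quick sketch, a test could open an in-memory H2 connection in Oracle mode like this (DB_CLOSE_DELAY=-1 keeps the in-memory database alive between connections; "sa" with an empty password is just the H2 default):

import java.sql.Connection;
import java.sql.DriverManager;

// In-memory H2 database running in Oracle compatibility mode
Connection conn = DriverManager.getConnection(
        "jdbc:h2:mem:test;MODE=Oracle;DB_CLOSE_DELAY=-1", "sa", "");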
First you write unit tests focusing on establishing that the DAO is working properly, that is, that every insert, delete, update and query works as intended, and so on. This approach assumes that network access to the microservice from the clients is working properly.
Example:
public class MyFirstDaoTest {

    private static final MyFirstDao dao = new MyFirstDao(dbAddress, dbName, ...);

    @Test
    public void insert() {
        SomeResult result = dao.insert(someObjectToInsert);
        assertSomething(result);
    }

    ...
}
After that you can create a fake client that you can use to access the microservice and perform predefined operations. Though if you only have one type of client accessing your microservice, I would probably put these tests on the client side rather than write the same code twice. I'm just speculating here, but I hope this is of use.

JOOQ transactions with DefaultTransactionProvider without functions

I'm aware that I can use DefaultTransactionProvider with DSLContext and lambdas like this
DSL.using(configuration)
   .transaction(ctx -> {
       DSL.using(ctx)
          .update(TABLE)
          .set(TABLE.COL, newValue)
          .where(...)
          .execute();
   });
However, I would like to control my transaction outside the scope of a code block (but still using DefaultTransactionProvider, as its behavior with checkpointing and such is what I'm looking for). More like this:
configuration.transactionProvider().begin(transactionContext);

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL, newValue)
   .where(...)
   .execute();

configuration.transactionProvider().commit(transactionContext);
Is this possible or will I need to implement the transaction SPI myself to accomplish this?
As of jOOQ 3.8, this is not possible out of the box. There is a pending feature request for this:
https://github.com/jOOQ/jOOQ/issues/5376
Your code will probably work:
configuration.transactionProvider().begin(transactionContext);

DSL.using(configuration)
   .update(TABLE)...

configuration.transactionProvider().commit(transactionContext);
But beware that you're calling SPI methods, not API methods. These methods have not been designed for direct access by you as an API consumer. They're designed for implementation and injection into the jOOQ SPI context in the Configuration. If you want to continue down this path, your TransactionProvider will need to access the Configuration.connectionProvider() and modify its state in order to always produce the right connection until commit() or rollback() is called.
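A hedged sketch of that idea: a ConnectionProvider that pins a single connection, so every acquire() between begin() and commit() hands out the connection the transaction runs on. This is illustrative only, not how jOOQ's DefaultTransactionProvider is implemented:

import java.sql.Connection;
import org.jooq.ConnectionProvider;

public class PinnedConnectionProvider implements ConnectionProvider {
    private final Connection connection;

    public PinnedConnectionProvider(Connection connection) {
        this.connection = connection;
    }

    @Override
    public Connection acquire() {
        // Always hand out the pinned transactional connection
        return connection;
    }

    @Override
    public void release(Connection released) {
        // No-op: the transaction owner closes the connection
        // after commit() or rollback()
    }
}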
See also a related discussion on the jOOQ user group:
https://groups.google.com/forum/#!msg/jooq-user/1JwWMChD2SM/NHUhSnI8AgAJ

JDBC connection pool design

I have developed a JDBC connection pool using synchronized methods like getConnection and returnConnection. This works well and is fast enough for my purposes. Now the problem happens when this connection pool has to be shared with other packages of our application, so other developers will make use of it as well. I feel it is a bit confusing, as they always need to perform a returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea to expose only one method in my connection pool and force the other developers to encapsulate their code, so that I handle the getConnection / returnConnection inside the connection pool.
It would be something like this:
public class MyConnectionPool {

    private Connection getConnection() {
        // return connection
    }

    private void returnConnection(Connection connection) {
        // add connection to list
    }

    public void executeDBTask(DBTaskIF task) throws SQLException {
        Connection connection = getConnection();
        try {
            task.execute(connection);
        } finally {
            returnConnection(connection); // returned even if the task fails
        }
    }
}
where:
public interface DBTaskIF {
    void execute(Connection connection) throws SQLException;
}
with an example of this DBTaskIF:
connectionPool.executeDBTask(new DBTaskIF() {
    public void execute(Connection connection) throws SQLException {
        PreparedStatement preStmt = null;
        try {
            preStmt = connection.prepareStatement(Queries.A_QUERY);
            preStmt.setString(1, baseName);
            preStmt.executeUpdate();
        } finally {
            if (preStmt != null) {
                try {
                    preStmt.close();
                } catch (SQLException e) {
                    log.error(e.getMessage(), e);
                }
            }
        }
    }
});
I hope you get the idea. What I want to know is your opinion about this approach. I want to propose it to the development team, and I worry someone will come up saying that this is not standard, or not OOP, or something else...
Any comments are much appreciated.
I feel it is a bit confusing, as they always need to perform a returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea to expose only one method in my connection pool and force the other developers to encapsulate their code, so that I handle the getConnection / returnConnection inside the connection pool.
I'm concerned by these statements. APIs should not (never?) assume that whoever uses them will do so in some way that is not enforced contractually by whichever methods they expose.
And java.sql.Connection is a widely used interface, so you'll be making enemies by telling people how to use it with your pool.
Instead, you should assume that your Connection instances will be used correctly, i.e., that they will be closed (connection.close() in a finally block) once their use is over (see, for instance, Connecting with DataSource Objects):
Connection con = null;
PreparedStatement stmt = null;
try {
    con = pool.getConnection();
    con.setAutoCommit(false);
    stmt = con.prepareStatement(...);
    stmt.setFloat(1, ...);
    stmt.setString(2, ...);
    stmt.executeUpdate();
    con.commit();
} catch (SQLException e) {
    if (con != null)
        con.rollback();
} finally {
    try {
        if (stmt != null)
            stmt.close();
        if (con != null)
            con.close(); // hands the connection back to the pool
    } catch (SQLException e) {
        ...
    }
}
And the Connection implementation of your pool should be recycled when closed.
I second @lreeder's comment that you're really reinventing the wheel here; most connection pools already available are definitely fast enough for most purposes and have undergone plenty of fine-tuning over time. This also applies to embedded databases.
Disclaimer: this is just my opinion, but I have written custom connection pools before.
I find Java code where you have to create inner-class implementations a little clunky. However, with Java 8 lambdas or Scala anonymous functions this would be a clean design (see the sketch below). I would probably just expose returnConnection() as a public method and allow callers to use it directly.
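For illustration: if DBTaskIF is kept to a single abstract method, it can be used as a functional interface, and the call site from the question could shrink to something like this sketch:

connectionPool.executeDBTask(connection -> {
    // try-with-resources closes the statement even if the update fails
    try (PreparedStatement preStmt = connection.prepareStatement(Queries.A_QUERY)) {
        preStmt.setString(1, baseName);
        preStmt.executeUpdate();
    }
});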
Third option: use a utility class that takes care of most of the administration.
Not only forgetting to close a Connection can cause trouble; forgetting to close a Statement or a ResultSet can cause trouble too. This is similar to using various IO streams in a method: at some point you make an extra utility class in which you register all opened IO streams, so that if an error occurs you can call close on the utility class and be sure that all opened streams are closed.
Such a utility class will not cover all use cases, but there is always the option to write another one for other (more complex) use cases. As long as they keep the same kind of contract, using them should just make things easier (and will not feel forced).
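A minimal sketch of such a utility class, assuming nothing beyond the JDK; the name ResourceCloser is made up for illustration:

import java.util.ArrayDeque;
import java.util.Deque;

public class ResourceCloser implements AutoCloseable {

    // Opened resources, closed again in reverse order of registration
    private final Deque<AutoCloseable> resources = new ArrayDeque<>();

    public <T extends AutoCloseable> T register(T resource) {
        resources.push(resource);
        return resource;
    }

    @Override
    public void close() {
        while (!resources.isEmpty()) {
            try {
                resources.pop().close();
            } catch (Exception e) {
                // log and keep closing the remaining resources
            }
        }
    }
}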
Wrapping or proxying a Connection to change the behavior of close() so that it returns the Connection to the pool is, in general, how connection pools prevent connections from actually being closed. But if a connection pool is not used, the application is usually written in a different manner: a connection (or two) is created at startup and used wherever a query is executed, and it is only closed when it is known that a connection will not be needed for a while (at shutdown). In contrast, when a pool is used, the connection is "closed" as soon as possible so that other processes can re-use it.
This, together with the option to use a utility class, made me decide NOT to wrap or proxy a connection, but instead let the utility class actually return the connection to the pool if a pool was used (i.e. not call connection.close() but call pool.release(connection)). A usage example of such a utility class is here; the utility class itself is here.
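For completeness, a sketch of the proxying approach described above, using a JDK dynamic proxy that intercepts close() and recycles the connection instead; pool.returnConnection(...) is an assumed method on the pool, not part of the question's code:

import java.lang.reflect.Proxy;
import java.sql.Connection;

static Connection pooledProxy(Connection raw, MyConnectionPool pool) {
    return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, args) -> {
                if ("close".equals(method.getName())) {
                    // Recycle instead of really closing the connection
                    pool.returnConnection(raw);
                    return null;
                }
                return method.invoke(raw, args);
            });
}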
Proxying causes small delays, which is why, for example, BoneCP decided to wrap Connection and DataSource (wrapping causes very little overhead). The DataSource interface changes between Java versions (at least from 1.6 to 1.7), which means the code will not compile with older/newer versions of Java. This made me decide to proxy the DataSource because it is easier to maintain, but it is not easy to set up (see the various proxy helper classes here). Proxying also has the drawback of making stack traces harder to read (which makes debugging harder) and sometimes makes exceptions disappear (I have seen this happen in JBoss, where the underlying object threw a runtime exception from the constructor).
tl;dr: if you make your own specialized pool, also deliver a utility class which makes it easy to use the pool and takes care of most of the administration that is required (like closing used resources), so that it is unlikely to be forgotten. If a utility class is not an option, wrapping or proxying is the standard way to go.

How to do transactions without losing encapsulation?

I have code that saves a bean and updates another bean in a DB via Hibernate. This must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), a rollback must be executed for both operations.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean); // SAVE
            doOtherAction(bean);                           // UPDATE
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("save failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }

    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        // ... (doing operations)
        otherBeanDao.attachDirty(otherBean);
    }
}
The problem is: if session.save(bean) throws an error, then I get an AssertionFailure, because the function doOtherAction (which is used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was extracting the code of the function doOtherAction, but then I would have the same code duplicated, which doesn't seem like the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of declarative transaction management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be set as the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once, or rolled back on an exception.
This solution is about as loosely coupled as it gets.
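A minimal sketch of the declarative variant with Spring; the service class, method names, and the assumption that the DAOs join the Spring-managed transaction are all illustrative (with EJB, @TransactionAttribute(REQUIRED) plays the same role as @Transactional here):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BeanService {

    private final BeanDao beanDao;
    private final OtherBeanDao otherBeanDao;

    public BeanService(BeanDao beanDao, OtherBeanDao otherBeanDao) {
        this.beanDao = beanDao;
        this.otherBeanDao = otherBeanDao;
    }

    // Both DAO calls join the same transaction: either both commit,
    // or an exception rolls both back
    @Transactional
    public Integer saveAndUpdate(Bean bean) {
        Integer idBean = beanDao.save(bean);
        otherBeanDao.update(bean.getIdOtherBean());
        return idBean;
    }
}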
The others are correct in that they take into account what is common practice currently. But that doesn't really help you with your current code.
What you could do is create two new DAO methods, such as createGlobalSession and commitGlobalSession. These do the same thing as your current create and commit routines, except that they set a "global" session variable (most likely best done with a ThreadLocal; see the sketch at the end of this answer). Then you change the current routines so that they check whether this global session already exists: if your create detects the global session, it simply returns it; if your commit detects the global session, it does nothing.
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error; since a successful commit clears the global session, the rollback in the finally block then becomes a no-op.
While the other techniques are considered best practice, and ideally you could one day evolve to something like that, this will get you over the hump with little more than three new methods and changes to two existing methods. After that, the rest of your code stays the same.
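As promised above, a hedged sketch of the ThreadLocal-based global session. Only the method names come from this answer; the session helpers (createNewSessionAndTransaction and friends) are from the question's ManagedSession, and the rest is illustrative:

import org.hibernate.Session;

public class BeanDao extends ManagedSession {

    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<>();

    public void createGlobalSession() {
        if (GLOBAL_SESSION.get() == null) {
            GLOBAL_SESSION.set(createNewSessionAndTransaction());
        }
    }

    public void commitGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            commitTransaction(session);
            GLOBAL_SESSION.remove(); // makes a later rollback a no-op
        }
    }

    public void rollbackGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) { // null after a successful commit: nothing to do
            rollbackTransaction(session);
            GLOBAL_SESSION.remove();
        }
    }

    // Existing routines call this instead of creating sessions directly
    protected Session currentSession() {
        Session global = GLOBAL_SESSION.get();
        return global != null ? global : createNewSessionAndTransaction();
    }
}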
