JDBI manual transaction management - java

I'm using JDBI for preparing data for test scenarios (setting up preconditions) in a separate project (rest-assured stuff). Now I have to use those JDBI-powered Daos for integration tests in a production project.
An example data setup Dao looks like this:
class DocumentDao(jdbi: Jdbi) : DefaultCrudRepository<Document>(jdbi) {
    /* ... */
    override fun save(entity: Document, handle: Handle): Int {
        return handle.createUpdate(
            """
            INSERT INTO documents (
                /* ... */
            )
            VALUES (
                /* ... */
            )
            """
        )
            .bindBean(entity)
            .execute()
    }
    /* ... */
}
And now I must use it together with a production persistence framework, because each scenario in integration tests is performed in a single transaction. So both precondition insertion and production data modification in DB must happen in a single transaction and be rolled back. Like this:
@Test
public void LoadDocument_001() throws Exception
{
    // Start a transaction
    startTx();
    Integer id = testDocumentDao.nextIdFromSequence();
    // Insert fake document into DB as a test precondition
    test.document.Document document = new test.document.Document(/* ... */);
    testDocumentDao.save(document);
    // Production Dao (subject) is called
    Document loadedDocument = subject.load(id.toString());
    // Rollback the transaction
    rollbackTx();
}
startTx() and rollbackTx() are methods I can use to control a transaction manually by talking to the connection directly. BUT! I can't control transactions for those JDBI-powered Daos: every operation that goes through JDBI is committed automatically, so the Document I create as a precondition is left in the DB after the test finishes.
Note: of course, I'm using the same connection instance in JDBI and in the production persistence layer.
So I have two questions:
How do I do manual transaction management for JDBI-only operations? For instance, I have two Daos (DocumentDao and CustomerDao) built like the example above. I want to call methods on both of them, but I want all changes to happen in a single transaction. And then I want to roll back that transaction.
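A minimal sketch of the kind of thing I'm after, assuming JDBI 3's Handle API (handle.begin()/rollback()) and the Handle-accepting save(...) methods from the DAO above; CustomerDao, Customer and the class name here are made up for illustration:
import org.jdbi.v3.core.Handle;
import org.jdbi.v3.core.Jdbi;

public class PreconditionTxSketch {
    // One Handle = one connection; both DAO calls join the same manual transaction.
    static void insertAndRollBack(Jdbi jdbi, DocumentDao documentDao, CustomerDao customerDao,
                                  Document document, Customer customer) {
        try (Handle handle = jdbi.open()) {
            handle.begin();                         // take manual control instead of auto-commit
            try {
                documentDao.save(document, handle); // both saves run in the same transaction
                customerDao.save(customer, handle);
            } finally {
                handle.rollback();                  // discard both inserts together
            }
        }
    }
}
(jdbi.useTransaction(h -> ...) would be the callback-style alternative, but it commits on success, which is not what I want for test preconditions.)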
How do I do manual transaction management for JDBI operations mixed with other, non-JDBI code? E.g. I'm using DocumentDao and also some production code which changes DB state too. How can I control the transaction via JDBI and/or by talking to the connection directly? Do I have to somehow disable automatic transaction management in JDBI? I tried doing that with a fake TransactionHandler with empty methods, but it has no effect.
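And for the second question, something along these lines is what I have in mind, under the assumption that JDBI 3 can be handed the already-shared java.sql.Connection through Jdbi.create with a connection factory; the class and method names are invented:
import java.sql.Connection;
import org.jdbi.v3.core.Handle;
import org.jdbi.v3.core.Jdbi;

public class MixedTxSketch {
    static void run(Connection sharedConnection, DocumentDao documentDao, Document document) throws Exception {
        sharedConnection.setAutoCommit(false);            // take over transaction control at the JDBC level
        Jdbi jdbi = Jdbi.create(() -> sharedConnection);  // connection factory handing JDBI the same connection
        Handle handle = jdbi.open();
        try {
            documentDao.save(document, handle);           // JDBI-powered precondition insert
            // ... production persistence code working on sharedConnection goes here ...
        } finally {
            sharedConnection.rollback();                  // undo JDBI and non-JDBI changes together
            handle.close();                               // note: closing the handle also closes sharedConnection
        }
    }
}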

Related

Spring JdbcTemplate rollback using annotations

I am new to Java and Spring. I am learning Spring JDBC connectivity using JdbcTemplate. I wrote the code below.
Controller
service.importInsuranceEstimates(insuranceEstimates);
Service class
public void importInsuranceEstimates(List<String[]> insuranceEstimates) {
    for (String[] insuranceEstimate : insuranceEstimates) {
        repository.insertInsuranceEstimate(insuranceEstimate);
    }
}
Repository class
public void insertInsuranceEstimate(String[] insuranceEstimate) {
    jdbcTemplate.update("insert into some_table values (?, ?, ?)",
            insuranceEstimate[0], insuranceEstimate[1], insuranceEstimate[2]);
}
Assume that after inserting a few records, the next insert statement fails. In this case, I would like the previously inserted records to be rolled back.
So I decorated the repository method with @Transactional(propagation = Propagation.REQUIRED). But I still don't see the previous records being rolled back when an insert fails.
Then I understood that the rollback is not done because each insert runs in its own transaction and is committed before the repository method returns.
So then I decorated the service method with the same annotation, @Transactional(propagation = Propagation.REQUIRED), but no success. The records are still not being rolled back.
Then, I understood that I have to insert all the records under the same transaction. So I changed my repository signature to
public void importInsuranceEstimates(List<String[]> insuranceEstimates)
then service class
repository.importInsuranceEstimates(insuranceEstimates);
In the repository class I am using batchUpdate instead of using the regular update.
What I understood is:
1. Queries related to a single transaction must be run/executed under a single transaction.
2. Annotation-based rollback is not possible using JdbcTemplate; we have to get the connection and play with the setAutoCommit(boolean) method.
Are my observations right?
Also, in some cases one would like to make multiple insert/update/delete DB calls for different tables from the service layer. How can multiple DB calls from the service layer run under the same transaction? Is it even possible?
For example, I want to write code to transfer money from one account to another. So I have to make two DB calls, one to debit the sender and one to credit the receiver. In this case I would write something like below
Service class
repository.debitSender(id, amount);
repository.creditReceiver(id, amount);
Since I cannot run these two method calls under the same transaction, I have to modify my service class to
repository.transferMoney(senderId, receiverId, amount)
and do the two updates under the same transaction in the repository like below
public void transferMoney(String senderId, String receiverId, double amount) {
    jdbcTemplate.getConnection().setAutoCommit(false);
    // update query to debit the sender
    // update query to credit the receiver
    jdbcTemplate.getConnection().setAutoCommit(true);
}
What if I do not want to use a transferMoney method in the repository, and instead split it into two methods - debitSender and creditReceiver - and call those two methods from the service class under the same transaction with JdbcTemplate?
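For reference, the kind of arrangement I'm hoping is possible would look roughly like this (a sketch only; TransferService and AccountRepository are invented names, and I'm assuming Spring's declarative transaction management is switched on):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    private final AccountRepository repository; // the JdbcTemplate-based repository with debitSender/creditReceiver

    public TransferService(AccountRepository repository) {
        this.repository = repository;
    }

    @Transactional // both repository calls share one transaction-bound connection
    public void transfer(String senderId, String receiverId, double amount) {
        repository.debitSender(senderId, amount);
        repository.creditReceiver(receiverId, amount);
        // a RuntimeException thrown anywhere in here rolls back both updates
    }
}
My understanding is that JdbcTemplate picks up the transaction-bound connection automatically as long as it is built from the same DataSource the transaction manager uses, so setAutoCommit would not need to be touched - but I'd like confirmation.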

Is there a generic way to work with optimistic locking using Hibernate/Spring Data JPA?

I'm using Hibernate's @Version annotation for optimistic locking. My usual use case for updating something in the DB looks like this:
1. Open a transaction (using @Transactional)
2. Select the entities I need to change
3. Check if I can make the needed changes (validation step)
4. Do the changes
5. Save everything (Spring Data JPA repository save() method), commit the transaction
6. If I catch any optimistic-locking exception, I have to retry everything from step 1 (until a successful save or a failure at the validation step, and not more than X times)
This algorithm looks quite common for any kind of optimistic-locking processing. Is there a generic way, using Hibernate or Spring Data JPA, to do these retries / handle optimistic-locking failures, or should I write a method like this myself? I mean something like (but not literally):
boolean trySaveUntilDoneOrNotOptimisticLockingExceptionOccur(Runnable codeWhichSelectsValidatesUpdatesAndSavesDataButWithoutOptimisticLockingProcessing, int maxOptimisticLockingRetries)
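Spelled out a bit more, the hand-rolled helper I have in mind would look roughly like this (still a sketch; OptimisticRetry and the parameter names are invented, and I assume Spring reports the conflict as ObjectOptimisticLockingFailureException):
import org.springframework.orm.ObjectOptimisticLockingFailureException;

public final class OptimisticRetry {

    // Re-runs the whole select/validate/update/save block until it succeeds
    // or the retry budget is used up.
    static void retry(Runnable selectValidateUpdateSave, int maxRetries) {
        for (int attempt = 1; ; attempt++) {
            try {
                selectValidateUpdateSave.run();  // must run in its own, fresh transaction each attempt
                return;
            } catch (ObjectOptimisticLockingFailureException e) {
                if (attempt >= maxRetries) {
                    throw e;                     // give up after maxRetries attempts
                }
            }
        }
    }
}
(The Runnable would have to call a @Transactional method on another bean so that every attempt gets a new transaction and re-reads fresh versions.)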
As the question is tagged with spring-data-jpa, I will answer from the Spring world.
Just have a look at @Retryable. I find it quite useful for exactly the use case you describe. This is my usual pattern:
@Service
@Transactional
@Retryable(maxAttempts = 7,
        backoff = @Backoff(delay = 50),
        include = { TransientDataAccessException.class,
                    RecoverableDataAccessException.class }
)
public class MyService {
    // all methods in this service are now transactional and automatically retried.
}
You can play with backoff options, of course.
Check out here for further examples of @Retryable.
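For the optimistic-locking failures you describe, I would expect the include list to point at Spring's wrapper exception instead; something like the following sketch (the class name and the attempt/backoff numbers are arbitrary):
import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
@Retryable(maxAttempts = 3,
        backoff = @Backoff(delay = 50),
        include = OptimisticLockingFailureException.class)
public class DocumentUpdateService {
    // each public method runs select -> validate -> change -> save in its own transaction,
    // and the whole method is re-invoked when an optimistic-lock conflict is reported
}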

How does transaction propagation impact updates to the database

I am trying to understand the behavior of transaction propagation using Spring JTA - JPA - Hibernate.
Essentially I am trying to update an entity. To do so, I have written a test method where I fetch an object using the entity manager's (em) find method (so this object is now a managed object), update the attributes of the fetched object, and then optionally make a call to the service layer (service layer propagation=REQUIRED), which calls em.merge.
Now I have three variations here:
1. The test method has no transactional annotation. Update the attributes of the fetched object and make no call to the service layer.
1.1. Result: the level 1 cache doesn't get updated and there is no update to the DB.
2. The test method has no transactional annotation. Update the attributes of the fetched object. Call the service layer.
2.1. Result: the level 1 cache and the DB get updated.
3. The test method has a Transactional annotation, which could be any of the following. Please see the table below for the propagation value at the test method and the outcome of a service call (service layer propagation=REQUIRED).
So, to read the above table: row 1 says that if the test method has transaction propagation=REQUIRED and a service-layer call is made, then the result is an update to the level 1 cache but not to the DB.
Below is my test case
@Test
public void testUpdateCategory() {
    // Get the object via the entity manager
    Category rootAChild1 = categoryService.find(TestCaseConstants.CategoryConstant.rootAChild1PK);
    assertNotNull(rootAChild1);
    rootAChild1.setName(TestCaseConstants.CategoryConstant.rootAChild1 + "_updated");
    // OPTIONALLY call update
    categoryService.update(rootAChild1);
    // Get the object via the entity manager. I believe this time the object is fetched from the L1 cache,
    // as the DB doesn't get updated but the test case passes.
    Category rootAChild1Updated = categoryService.find(TestCaseConstants.CategoryConstant.rootAChild1PK);
    assertNotNull(rootAChild1Updated);
    assertEquals(TestCaseConstants.CategoryConstant.rootAChild1 + "_updated", rootAChild1Updated.getName());
    List<Category> categories = rootAChild1Updated.getCategories();
    assertNotNull(categories);
    assertEquals(TestCaseConstants.CategoryConstant.rootAChild1_Child1, categories.get(0).getName());
}
Service Layer
@Service
public class CategoryServiceImpl implements CategoryService {
    @Transactional
    @Override
    public void update(Category category) {
        categoryDao.update(category);
    }
}
DAO
@Repository
public class CategoryDaoImpl {
    @Override
    public void update(Category category) {
        em.merge(category);
    }
}
Question
Can someone please explain why REQUIRED, REQUIRES_NEW, and NESTED don't lead to an insertion in the DB?
And why does the absence of a transaction annotation on the test case lead to an insertion in the DB, as presented in my three variations?
Thanks
The effect you're seeing for REQUIRED, NESTED, and REQUIRES_NEW is due to the fact that you're checking for updates too early.
(I'm assuming here that you check for DB changes at the moment the test method reaches the assertions, or that you roll the test-method transaction back somehow after executing the test.)
Simply put, your assertions are still within the context created by the @Transactional annotation on the test method. Consequently, the implicit flush to the DB has not been invoked yet.
In the other three cases, the @Transactional annotation on the test method does not start a transaction for the service method to join. As a result, the transaction only spans the execution of the service method, and the flush occurs before your assertions are tested.
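If you want the test itself to observe what actually reached the database, one option is Spring's TestTransaction helper, which ends the test-managed transaction before the assertions run. A sketch, dropped into your existing test class (it assumes spring-test 4.1+ and reuses your categoryService and constants):
import org.springframework.test.context.transaction.TestTransaction;

@Test
@Transactional
public void testUpdateCategoryAndVerifyDb() {
    Category rootAChild1 = categoryService.find(TestCaseConstants.CategoryConstant.rootAChild1PK);
    rootAChild1.setName(TestCaseConstants.CategoryConstant.rootAChild1 + "_updated");
    categoryService.update(rootAChild1);

    // End the test-managed transaction so the pending changes are flushed and committed.
    TestTransaction.flagForCommit();
    TestTransaction.end();

    // A fresh transaction now reads what was really written to the database.
    TestTransaction.start();
    Category reloaded = categoryService.find(TestCaseConstants.CategoryConstant.rootAChild1PK);
    assertEquals(TestCaseConstants.CategoryConstant.rootAChild1 + "_updated", reloaded.getName());
}
Bear in mind that flagForCommit really commits, so the updated row has to be cleaned up afterwards (or the test run against a throwaway database).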

How to refresh JPA entities when backend database changes asynchronously?

I have a PostgreSQL 8.4 database with some tables and views which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST based services derived from those views and tables and deployed those to a Glassfish 3.1.2.2 server.
There is another process which asynchronously updates contents in some of the tables used to build the views. I can directly query the views and tables and see these changes have occurred correctly. However, when pulled from the REST-based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the Glassfish server and JPA needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
    private Class<T> entityClass;
    private String entityName;
    private static boolean _refresh = true;

    public static void refresh() { _refresh = true; }

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.entityName = entityClass.getSimpleName();
    }

    private void doRefresh() {
        if (_refresh) {
            EntityManager em = getEntityManager();
            em.flush();
            for (EntityType<?> entity : em.getMetamodel().getEntities()) {
                if (entity.getName().contains(entityName)) {
                    try {
                        em.refresh(entity);
                        // log success
                    }
                    catch (IllegalArgumentException e) {
                        // log failure ... typically complains entity is not managed
                    }
                }
            }
            _refresh = false;
        }
    }
    ...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that an IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl#28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for some suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: Turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier, namely the view had no single field which could be used as a unique identifier. NetBeans required I select an ID field, so I just chose one part of what should have been a multi-part key. This exhibited the behavior that all records with a particular ID field were identical, even though the database had records with the same ID field but the rest of it was different. JPA didn't go any further than looking at what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (never was able to get the multipart key to work properly).
I recommend adding an @Startup @Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
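A rough sketch of that listener loop with PgJDBC (the channel name entity_changes and the class name are placeholders, and the trigger that sends the NOTIFY is not shown):
import java.sql.Connection;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class EntityChangeListener {

    // Subscribes to the channel and loops, invalidating cache entries for each payload received.
    void listen(Connection rawConnection) throws Exception {
        try (Statement stmt = rawConnection.createStatement()) {
            stmt.execute("LISTEN entity_changes");
        }
        PGConnection pgConnection = rawConnection.unwrap(PGConnection.class);
        while (true) {
            // A trivial query makes the driver pick up any notifications waiting on the socket.
            try (Statement stmt = rawConnection.createStatement()) {
                stmt.execute("SELECT 1");
            }
            PGNotification[] notifications = pgConnection.getNotifications();
            if (notifications != null) {
                for (PGNotification notification : notifications) {
                    String rowId = notification.getParameter(); // the payload the trigger attached
                    // evict the matching entity from the 2nd level cache here (see below)
                }
            }
            Thread.sleep(500);
        }
    }
}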
As an alternative to LISTEN and NOTIFY, you could poll a change-log table on a timer, and have a trigger on the problem table append changed row IDs and change timestamps to that change-log table. This approach is portable apart from needing a different trigger for each DB type, but it's inefficient and less timely: it requires frequent polling and still has a time delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the costs of this approach a little.
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your @Startup @Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can either disable caching entirely (see: http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), but be prepared for a fairly large performance loss.
Otherwise, you can clear the cache programmatically with
em.getEntityManagerFactory().getCache().evictAll();
You can map it to a servlet so you can call it externally - this is better if your database is modified externally very seldom and you just want to be sure JPA will pick up the new version.
Just a thought, but how do you receive your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or you could try merge (or similar methods).
JPA doesn't do any caching by default; you have to configure it explicitly. I believe it's a side effect of the architectural style you have chosen: REST. I think the caching is happening at the web servers, proxy servers, etc. I suggest you read this and debug more.

Getting DbUnit to Work with Hibernate Transaction

I'm having a problem trying to push changes made within a Hibernate transaction to the database so that DbUnit works properly in my test case. It seems like DbUnit is not seeing the changes made by Hibernate because they are not yet committed at the end of the transaction... and I'm not sure how to restructure my test case to get this to work.
Here's my over-simplified test case to demonstrate my problem:-
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {
        "classpath:applicationContext-test.xml"
})
@TransactionConfiguration(transactionManager = "transactionManager")
@Transactional
public class SomeTest {

    @Autowired
    protected DataSource dataSource;

    @Autowired
    private SessionFactory sessionFactory;

    @Test
    public void testThis() throws Exception {
        Session session = sessionFactory.getCurrentSession();
        assertEquals("initial overlayType count", 4, session.createQuery("from OverlayType").list().size());

        //-----------
        // Imagine this block is an API call, ex: someService.save("AAA");
        // But for the sake of simplicity, I do it this way
        OverlayType overlayType = new OverlayType();
        overlayType.setName("AAA");
        session.save(overlayType);
        //-----------

        // flush has no effect here
        session.flush();
        assertEquals("new overlayType count", 5, session.createQuery("from OverlayType").list().size());

        // pull the data from the database using DbUnit
        IDatabaseConnection connection = new DatabaseConnection(dataSource.getConnection());
        connection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MySqlDataTypeFactory());
        QueryDataSet partialDataSet = new QueryDataSet(connection);
        partialDataSet.addTable("resultSet", "select * from overlayType");
        ITable actualTable = partialDataSet.getTable("resultSet");

        // FAIL: Actual row count is 4 instead of 5
        assertEquals("dbunit's overlayType count", 5, actualTable.getRowCount());

        DataSourceUtils.releaseConnection(connection.getConnection(), dataSource);
    }
}
My whole idea in using DbUnit is to:-
Call someService.save(...) that saves data into several tables.
Use DbUnit to get expected table from XML.
Use DbUnit to get actual table from database.
Do Assertion.assertEquals(expectedTable, actualTable);.
But, at this point, I'm not able to get DbUnit to see the changes made by Hibernate within the transaction.
How should I change this to get DbUnit to work nicely with the Hibernate transaction?
Thanks.
I have never worked with DbUnit, but it seems like TransactionAwareDataSourceProxy will do the trick. Basically you need to wrap your original data source with this proxy and use it instead, so that this code:
new DatabaseConnection(dataSource.getConnection())
actually goes through the proxy and uses the same transaction and connection as Hibernate.
I found the Transaction aware datasource (use dbunit & hibernate in spring) blog post, which explains this.
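Applied to your test, that would amount to something like this in place of the original connection setup (a sketch; only the DataSource wrapping changes):
import javax.sql.DataSource;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;

// inside testThis(), replacing the original DatabaseConnection setup:
DataSource txAwareDataSource = new TransactionAwareDataSourceProxy(dataSource);
IDatabaseConnection connection = new DatabaseConnection(txAwareDataSource.getConnection());
// the rest of the DbUnit code stays the same and should now see the uncommitted row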
Another approach would be to skip transactional tests altogether and instead clean up the database manually. Check out my transactional tests considered harmful article.
Looks like that test case needs two transactions: one for putting data into the database, and a second one to retrieve it.
What I would do is:
Use an in-memory database so the data is cleaned up when the unit test ends.
Remove the transactional annotations and use the session's beginTransaction and commit methods directly.
The initial overlayType count would be 0, and after the save is committed, it should be 1.
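A sketch of how that restructured test body could look, reusing your sessionFactory and dataSource (an in-memory database that starts empty is assumed, hence the counts):
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.database.QueryDataSet;
import org.hibernate.Session;
import org.hibernate.Transaction;

// test body, with no @Transactional on the class or the method:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
OverlayType overlayType = new OverlayType();
overlayType.setName("AAA");
session.save(overlayType);
tx.commit();      // really committed, so DbUnit's own connection can see the row
session.close();

IDatabaseConnection connection = new DatabaseConnection(dataSource.getConnection());
QueryDataSet partialDataSet = new QueryDataSet(connection);
partialDataSet.addTable("resultSet", "select * from overlayType");
assertEquals("overlayType count after commit", 1, partialDataSet.getTable("resultSet").getRowCount());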
