Guice - multiple persistence providers using same connection per transaction

I have two persistence providers I'd like to use - my own JDBC approach (DBAccess), and jOOQ (DSLContext). Both a DSLContext and my DBAccess can be created from a Connection and some configuration details. I'm trying to convert a project to use Guice, and would like to create a DAO that can use both in one transaction, e.g.
class ThingDAO {

    final DBAccess dbAccess;
    final DSLContext dslContext;

    @Inject
    ThingDAO(DBAccess dbAccess, DSLContext dslContext) {
        this.dbAccess = dbAccess;
        this.dslContext = dslContext;
    }

    Thing getThingForId(int id) {
        return dslContext.select().from(OBJECT)....
    }

    void save(Thing t) {
        dbAccess.save(t);
    }

    Stuff joinThingToStuffTableAndGetStuff(Thing t) {
        // the Stuff I get may depend on what has been saved so far, so I need
        // the dslContext and dbAccess operating on the same connection
        dslContext....
    }
}
which I could then use along the lines of
@Transactional
void doTheThings(int id, int[] data) {
    ThingDAO dao = thingDaoProvider.get();
    Thing t = dao.getThingForId(id);
    t.doTheThings(data);
    dao.save(t);
    Stuff s = dao.joinThingToStuffTableAndGetStuff(t);
    ....
}
I've been looking at this Guice extension for jOOQ, which makes me think I want something along the lines of a UnitOfWork that grabs a connection from my DataSource and hands it to both a DBAccess and a DSLContext, but I'm unsure whether that's right, or how to proceed even if it is.
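For illustration, the kind of wiring I have in mind looks something like the sketch below. The @TransactionScoped annotation and TransactionScope class are hypothetical (something would have to enter/exit the scope around each @Transactional call), I'm assuming DBAccess can be built from a plain Connection, and the dialect is a placeholder:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class PersistenceModule extends AbstractModule {

    @Override
    protected void configure() {
        // Hypothetical custom scope, entered/exited by the @Transactional interceptor:
        // bindScope(TransactionScoped.class, new TransactionScope());
    }

    // One Connection per transaction; both providers below are built from it,
    // so DBAccess and DSLContext operate on the same connection.
    @Provides
    @TransactionScoped
    Connection provideConnection(DataSource dataSource) throws SQLException {
        return dataSource.getConnection();
    }

    @Provides
    DSLContext provideDslContext(Connection connection) {
        return DSL.using(connection, SQLDialect.POSTGRES); // dialect is a placeholder
    }

    @Provides
    DBAccess provideDbAccess(Connection connection) {
        return new DBAccess(connection); // assumes DBAccess wraps a Connection
    }
}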

Related

Correct use of Transactions in EJB Timer

I am using a timer to send out emails on a schedule based on a JPA query, but I am getting an error that the driver doesn't support XA. I am unable to switch over to an XA driver, and while I know I can set some options on the data source, I suspect I am not handling transactions correctly.
There is nothing being persisted, and the pseudocode of the timer would be:
Get list of emails to send from DB (from an EntityManager)
Get email addresses (from a CDI bean)
Send emails
So I don't actually need a two-phase commit, and I was wondering what the correct way of handling this would be?
The code looks like
@Startup
@Singleton
public class EmailTimer {

    @PersistenceContext(unitName = "xyz")
    private EntityManager em;

    @Inject
    private EmailLookup emailLookup;

    @Resource
    TimerService timerService;

    @PostConstruct
    public void initTimer() {
        // define schedule
        timerService.createCalendarTimer(schedExpr, timerConfig);
    }

    @Timeout
    private void sendEmails() {
        List<Email> emailsToSend = listEmailsToSend();
        for (Email e : emailsToSend) {
            sendMail(emailLookup.getEmail(e.userName), "Some Text");
        }
    }

    private List<Email> listEmailsToSend() {
        String sql = "select ..."; // moderately long select query
        TypedQuery<Email> emailsQuery = em.createQuery(sql, Email.class);
        return emailsQuery.getResultList();
    }
}
As nothing has been set explicitly, everything should default to TransactionAttributeType.REQUIRED and TransactionManagementType.CONTAINER, so currently everything will be running in the same transaction (hence the need for the two-phase commit)?
Do I want to mark the @Timeout method as TransactionAttributeType.NOT_SUPPORTED, should I mark the DB method as TransactionAttributeType.REQUIRES_NEW, or should I be handling this in a different way?
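For concreteness, the two options I'm weighing would look something like this (just a sketch; note that for the second option the call would have to go through the container, e.g. via a business interface, for the attribute to take effect):

// Option 1: run the whole timer callback outside a transaction
@Timeout
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
private void sendEmails() { ... }

// Option 2: give only the query its own transaction
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public List<Email> listEmailsToSend() { ... }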

Intercept repository method calls in Spring Data, refining the query on the fly

Say I've got a few interfaces extending CrudRepository. There are methods in there like findByField. Some of these methods should only return entities that belong to a group of entities to which the user has access (the group is a column in the database, so it's a field that's defined for most entities). I want to achieve this by allowing the use of annotations (like @Protected) on the repository methods, so that when these methods are called, instead of calling findByField, a method findByFieldAndGroup is called behind the scenes. With the use of AOP (which intercepts methods annotated with my @Protected tag) the group can be assigned before the method is effectively executed.
public interface MyRepository extends CrudRepository<MyEntity, Long> {

    @Protected
    Optional<MyEntity> findById(Long id); // Should become findByIdAndGroup(Long id, String group) behind the scenes

    @Protected
    Collection<MyEntity> findAll();
}
Is there a way to achieve this? In the worst case I either add all the methods manually, switch completely to a query-by-example approach (where you can more easily add the group dynamically), or generate methods with a Java agent using ASM (manipulating the bytecode)... but these are much less practical approaches that would demand a good deal of refactoring.
Edit: I found these relevant questions:
Spring data jpa - modifying query before execution
Spring Data JPA and spring-security: filter on database level (especially for paging)
Other relevant references include this ticket on GitHub (no progress, only a sort-of solution with QueryDSL, which precludes the use of queries based on method names) and this thread.
You can use filters, a specific Hibernate feature, for this problem.
The idea is the following.
First, you need to annotate your entity with the different filters you want to apply, in your case, something like:
@Entity
//...
@Filters({
    @Filter(name = "filterByGroup", condition = "group_id = :group_id")
})
public class MyEntity implements Serializable {
    // ...
}
Then, you need access to the underlying EntityManager because you need to interact with the associated Hibernate Session. You have several ways to do this. For example, you can define a custom transaction manager for the task, something like:
public class FilterAwareJpaTransactionManager extends JpaTransactionManager {

    @Override
    protected EntityManager createEntityManagerForTransaction() {
        final EntityManager entityManager = super.createEntityManagerForTransaction();
        // Get access to the underlying Session object
        final Session session = entityManager.unwrap(Session.class);
        // Enable filter
        try {
            this.enableFilterByGroup(session);
        } catch (Throwable t) {
            // Handle exception as you consider appropriate
            t.printStackTrace();
        }
        return entityManager;
    }

    private void enableFilterByGroup(final Session session) {
        final String group = this.getGroup();
        if (group == null) {
            // Consider logging the problem
            return;
        }
        session
            .enableFilter("filterByGroup")
            .setParameter("group_id", group);
    }

    private String getGroup() {
        // You need access to the user information. For instance, with Spring Security you can try:
        final Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        if (authentication == null) {
            return null;
        }
        // Your user type
        MyUser user = (MyUser) authentication.getPrincipal();
        return user.getGroup();
    }
}
Then, register this TransactionManager in your database configuration instead of the default JpaTransactionManager:
@Bean
public PlatformTransactionManager transactionManager() {
    JpaTransactionManager transactionManager = new FilterAwareJpaTransactionManager();
    transactionManager.setEntityManagerFactory(entityManagerFactory());
    return transactionManager;
}
You can also get access to the EntityManager and the associated Session by creating a custom JpaRepository or by injecting @PersistenceContext into your beans, but I think the above-mentioned approach is the simplest one, although it has the drawback of always being applied.
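For completeness, the @PersistenceContext variant could look roughly like this (a sketch; findAllForGroup is a made-up method, and the filter is only active where you enable it):

@PersistenceContext
private EntityManager em;

public List<MyEntity> findAllForGroup(String group) {
    // Enable the Hibernate filter on the underlying Session for this unit of work
    em.unwrap(Session.class)
      .enableFilter("filterByGroup")
      .setParameter("group_id", group);
    return em.createQuery("select e from MyEntity e", MyEntity.class).getResultList();
}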

How to write a unit test for a method that creates a database table on the fly?

tl;dr: I have a method that creates a new database table on the fly, and I want to write a unit test for it. Unfortunately, the test runner does not properly roll back after the tests, and the table remains in the DB after they finish. What should I do?
Long story:
I am not very familiar with either Java Persistence or Spring, so if you find the current solution ugly (to me, it is rather ugly), please tell me how to improve it; I would very much appreciate your opinion.
I have a SysDatastoreService with the following implementation of the addStaticDatastore method.
@Service
public class SysDatastoreServiceImpl implements SysDatastoreService {

    @Autowired
    private SysDatastoreRepository datastoreRepository;

    @Autowired
    private DataSource dataSource;

    @Override
    @Transactional
    public Optional<SysDatastore> addStaticDatastore(String name, String tableName, String ident, Long ord) {
        String createTableSql = PostgresTableSqlBuilder.createTableInPublicSchemaWithBigintPkAndFkId(
                tableName,
                SysObject.TABLE_NAME,
                Optional.of(SysObject.ID_COLUMN_NAME)).buildSql();
        Optional<SysDatastore> sysDatastore = Optional.empty();
        try (
            Connection connection = dataSource.getConnection();
            Statement statement = connection.createStatement()
        ) {
            connection.setAutoCommit(false);
            Savepoint beforeTableCreation = connection.setSavepoint();
            try {
                statement.execute(createTableSql);
                sysDatastore = Optional.ofNullable(
                        datastoreRepository.save(new SysDatastore(name, tableName, DatastoreType.STATIC, ident, ord)));
            } catch (SQLException e) {
                e.printStackTrace();
            }
            if (!sysDatastore.isPresent()) {
                connection.rollback(beforeTableCreation);
            } else {
                connection.commit();
            }
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
        return sysDatastore;
    }
}
So, as you can see, I obtain a new connection from the DataSource and try to create a new table. On success, I create a new entry in the SysDatastoreRepository, and if that fails, I roll back the table creation.
There are some disadvantages to the current approach; one of them is that table creation and entry insertion operate on separate connections (am I right?).
But I have a problem writing a unit test for it. This is what I tried:
@Transactional(propagation = Propagation.REQUIRED)
@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@ContextConfiguration(locations = "file:src/main/webapp/WEB-INF/rest-servlet.xml")
public class SysDatastoreServiceTest {

    @Autowired
    private SysDatastoreService sysDatastoreService;

    @Autowired
    private DataSource dataSource;

    @Test
    public void testAddStaticDatastore() throws Exception {
        Optional<SysDatastore> sysDatastore =
                sysDatastoreService.addStaticDatastore("New static datastore", "new_datastore_table",
                        "NEW_DATASTORE_TABLE", 42L);
        assertTrue(sysDatastore.isPresent());
        assertEquals("New static datastore", sysDatastore.get().getName());
        assertEquals("NEW_DATASTORE_TABLE", sysDatastore.get().getIdent());
        assertEquals("new_datastore_table", sysDatastore.get().getTableName());
        assertEquals(DatastoreType.STATIC, sysDatastore.get().getDynType());
        assertEquals(42L, sysDatastore.get().getOrd().longValue());
        assertTrue(dataSource.getConnection()
                .getMetaData()
                .getTables(null, null, sysDatastore.get().getTableName(), null)
                .next());
    }
}
This test seems pretty easy: I just compare all the fields and then check the database for the new table.
However, this test fails when I run it two or more times. Looking at the database, I noticed that the table new_datastore_table still remained in the schema. I guess it was not rolled back properly because of the hand-written transaction handling and raw SQL execution, but I am not sure.
Question: How should I write a test case for this method in a proper way? And, in case the current approach is fundamentally wrong, how should it be changed?
Side note: I use a PostgreSQL database, and it cannot be replaced with a non-relational one.
First, CREATE TABLE is a DDL statement, not a DML one, and your method commits it on a connection of its own, outside the transaction managed by the test framework, so the test's rollback cannot remove the table. If you want to clean your database, you must explicitly remove the table in an @After or @AfterClass method.
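For example (a sketch; the table name is the one hard-coded in your test):

@After
public void dropCreatedTable() throws Exception {
    try (Connection con = dataSource.getConnection();
         Statement st = con.createStatement()) {
        // The service committed the CREATE TABLE on its own connection,
        // so it has to be dropped explicitly.
        st.execute("DROP TABLE IF EXISTS new_datastore_table");
    }
}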
But do you really need to run the tests against the PostgreSQL database? Spring has great support for in-memory databases: the default embedded database is HSQL, which has pretty good support for PostgreSQL syntax. Provided you have no complex statements, it could be enough, and it avoids cluttering the main database with (potentially destructive) unit tests.
You could create the database in a @BeforeClass method. Here is an oversimplified example:
private static DriverManagerDataSource dataSource;

@BeforeClass
public static void setupClass() throws Exception {
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
    populator.addScript(new ClassPathResource("path/to/package/defaults.sql"));
    dataSource = new DriverManagerDataSource();
    dataSource.setUrl("jdbc:hsqldb:mem:pgtest;sql.syntax_pgs=true");
    dataSource.setUsername("SA");
    Connection con = dataSource.getConnection();
    assertNotNull(con);
    populator.populate(con);
    con.close();
}

Unit testing a DAO class that uses Spring JDBC

I have several DAO objects that are used to retrieve information from a database and I really want to write some automated tests for them but I'm having a hard time figuring out how to do it.
I'm using Spring's JdbcTemplate to run the actual query (via a prepared statement) and map the results to the model object (via the RowMapper class).
If I were to write unit tests, I'm not sure how I would/should mock the objects. For example, since there are only reads, I would use the actual database connection and not mock the jdbcTemplate, but I'm not sure that's right.
Here's the (simplified) code for the simplest DAO of the batch:
/**
 * Implementation of the {@link BusinessSegmentDAO} interface using JDBC.
 */
public class GPLBusinessSegmentDAO implements BusinessSegmentDAO {

    private JdbcTemplate jdbcTemplate;

    private static class BusinessSegmentRowMapper implements RowMapper<BusinessSegment> {
        public BusinessSegment mapRow(ResultSet rs, int arg1) throws SQLException {
            try {
                return new BusinessSegment(rs.getString(...));
            } catch (SQLException e) {
                return null;
            }
        }
    }

    private static class GetBusinessSegmentsPreparedStatementCreator
            implements PreparedStatementCreator {

        private String region, cc, ll;
        private int regionId;

        private GetBusinessSegmentsPreparedStatementCreator(String cc, String ll) {
            this.cc = cc;
            this.ll = ll;
        }

        public PreparedStatement createPreparedStatement(Connection connection)
                throws SQLException {
            String sql = "SELECT ...";
            PreparedStatement ps = connection.prepareStatement(sql);
            ps.setString(1, cc);
            ps.setString(2, ll);
            return ps;
        }
    }

    public GPLBusinessSegmentDAO(DataSource dataSource) {
        jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public Collection<BusinessSegment> getBusinessSegments(String cc, String ll) {
        return jdbcTemplate.query(
                new GetBusinessSegmentsPreparedStatementCreator(cc, ll),
                new BusinessSegmentRowMapper());
    }
}
Any idea would be appreciated.
Thanks!
Please have a look at the links below:
Testing SQL queries with Spring and DbUnit
MockObjects or DBUnit for testing Code using JdbcTemplate
Hope that helps.
EDIT:
Here is the GitHub version of RowMapperTests for easy reference.
I recommend breaking your dependency on the JdbcTemplate class and using the JdbcOperations interface instead, e.g.
public class GPLBusinessSegmentDAO implements BusinessSegmentDAO {

    private final JdbcOperations jdbc;

    public GPLBusinessSegmentDAO(DataSource dataSource) {
        this(new JdbcTemplate(dataSource));
    }

    public GPLBusinessSegmentDAO(JdbcOperations jdbc) {
        this.jdbc = jdbc;
    }

    // ... DAO methods here
}
Your unit test can invoke the second constructor, passing in a mock JdbcOperations object. Since all DB operations are performed via the jdbc object, you can mock that easily enough.
Your live code can call the first constructor as before.
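For example, with Mockito (a sketch; it assumes BusinessSegment has a single-String constructor, as in your RowMapper, and uses static imports from Mockito and JUnit):

JdbcOperations jdbc = mock(JdbcOperations.class);
when(jdbc.query(any(PreparedStatementCreator.class), any(RowMapper.class)))
        .thenReturn(Collections.singletonList(new BusinessSegment("retail")));

GPLBusinessSegmentDAO dao = new GPLBusinessSegmentDAO(jdbc);
assertEquals(1, dao.getBusinessSegments("US", "en").size());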
To write a true unit test for this, you would not be touching a real database.
You may, however, find it more practical to pass in a real DataSource for your underlying DB and test that the getBusinessSegments() method returns 0, 1, and many results depending on the cc and ll values you pass in.
Another option worth investigating would be to pass in a DataSource of an embedded Java DB that was initialised with your schema in a setUp/#Before method. I guess what you really want to test is that the SELECT... query maps correctly to the schema, so such a test would catch any errors that arise at runtime when the schema, say, changes.
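A sketch of that embedded-DB option, using Spring's embedded database support (schema.sql is a placeholder for your schema script; HSQL is the default type):

private EmbeddedDatabase db;
private GPLBusinessSegmentDAO dao;

@Before
public void setUp() {
    db = new EmbeddedDatabaseBuilder()
            .addScript("classpath:schema.sql")
            .build();
    dao = new GPLBusinessSegmentDAO(db); // EmbeddedDatabase is a DataSource
}

@After
public void tearDown() {
    db.shutdown();
}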

JUnit + Derby + Spring: drop in-memory db after every test

In my unit tests I autowired some DataSources, which use URLs like
jdbc:derby:memory:mydb;create=true
to create in-memory DBs.
To drop an in-memory Derby db you have to connect with:
jdbc:derby:memory:mydb;drop=true
I would like this to happen after every test and start with a fresh db. How can I do this using Spring?
How to shutdown Derby in-memory database Properly
gave me a hint to a solution:
mydb.drop.url = jdbc:derby:memory:mydb;drop=true
...
<bean id="mydbDropUrl" class="java.lang.String">
    <constructor-arg value="${mydb.drop.url}" />
</bean>
...
@Resource
private String mydbDropUrl;

@After
public void tearDown() {
    try {
        DriverManager.getConnection(mydbDropUrl);
    } catch (SQLException e) {
        // ignore
    }
}
A downside is the use of the String constructor, which wraps one immutable String object in another. I read that there is a @Value annotation in Spring 3, which might help here, but I'm using Spring 2.5.
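(For reference, my understanding is that under Spring 3 the string bean above could be replaced by injecting the property directly, something like:

@Value("${mydb.drop.url}")
private String mydbDropUrl;

but that is not available to me on 2.5.)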
Please let me know if you have a nicer solution.
There is a database-agnostic way to do this if you are using Spring together with Hibernate.
Make sure the application context will be created / destroyed before / after every test method:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration({"classpath*:application-context-test.xml"})
@TestExecutionListeners({DirtiesContextTestExecutionListener.class,
        DependencyInjectionTestExecutionListener.class})
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public abstract class AbstractTest {
}
Instruct Hibernate to auto create the schema on startup and to drop the schema on shutdown:
hibernate.hbm2ddl.auto = create-drop
Now, before every test:
the application context is created and the required Spring beans are injected (Spring)
the database structures are created (Hibernate)
the import.sql is executed, if present (Hibernate)
and after every test:
the application context is destroyed (Spring)
the database schema is dropped (Hibernate)
If you are using transactions, you may want to add the TransactionalTestExecutionListener.
Since Spring Test 3, you can use annotations to specify the configuration:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/spring-test.xml")
public class MyTest {
}
Just do something like:
public class DatabaseTest implements ApplicationContextAware {

    private ApplicationContext context;
    private DataSource source;

    public void setApplicationContext(ApplicationContext applicationContext) {
        this.context = applicationContext;
    }

    @Before
    public void before() {
        source = context.getBean("dataSource", DataSource.class);
    }

    @After
    public void after() {
        source = null;
    }
}
Make your bean have a scope of prototype (scope="prototype"). This will get a new instance of the data source before every test.
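For example (a sketch; the class is a placeholder for whatever DataSource implementation you use):

<bean id="dataSource" class="org.apache.derby.jdbc.EmbeddedDataSource" scope="prototype">
    <property name="databaseName" value="memory:mydb" />
    <property name="createDatabase" value="create" />
</bean>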
If you use the spring-test.jar library, you can do something like this:
public class MyDataSourceSpringTest extends
        AbstractTransactionalDataSourceSpringContextTests {

    @Override
    protected String[] getConfigLocations() {
        return new String[]{"classpath:test-context.xml"};
    }

    @Override
    protected void onSetUpInTransaction() throws Exception {
        super.deleteFromTables(new String[]{"myTable"});
        super.executeSqlScript("file:db/load_data.sql", true);
    }
}
And here is an updated version, based on the latest comment, that drops the DB and recreates the tables before every test:
public class MyDataSourceSpringTest extends
        AbstractTransactionalDataSourceSpringContextTests {

    @Override
    protected String[] getConfigLocations() {
        return new String[]{"classpath:test-context.xml"};
    }

    @Override
    protected void onSetUpInTransaction() throws Exception {
        super.executeSqlScript("file:db/recreate_tables.sql", true);
    }
}
This is what we do at the start of every test:
Drop all previous objects.
Create all tables mentioned in create_table.sql.
Insert values into the created tables based on what you want to test.
@Before
public void initialInMemoryDatabase() throws IOException, FileNotFoundException {
    inMemoryDerbyDatabase.dropAllObjects();
    inMemoryDerbyDatabase.executeSqlFile("/create_table_policy_version_manager.sql");
    inMemoryDerbyDatabase.executeSqlFile("/insert_table_policy_version_manager.sql");
}
Works like a charm!
