I'm learning Mockito and unit testing in general, and I want to learn how to unit test better by using ArgumentCaptor. I'm using JDBC to handle my SQL statements. I have a method that inserts a user into my DB.
public void insert(User user) {
    String sql = "INSERT INTO user (id) VALUES (?)";
    jdbcTemplate.update(new PreparedStatementCreator() {
        @Override
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            final PreparedStatement ps = connection.prepareStatement(sql);
            ps.setString(1, user.getId().trim());
            return ps;
        }
    });
}
Below is the test that I'm trying to write with ArgumentCaptor.
@Test
public void testInsert() {
    User user = new User("testID");
    ArgumentCaptor<PreparedStatementCreator> captor = ArgumentCaptor.forClass(PreparedStatementCreator.class);
    insert(user);
    verify(mockJdbc, times(1)).update(captor.capture());
    PreparedStatementCreator actual = captor.getValue();
    assertEquals(??, actual.createPreparedStatement(??));
}
Any advice or insight on what should go in the '??' spots of the assert statement, or on whether this is the correct way to use ArgumentCaptor?
Thank You
Edit:
@Test
public void testInsert() throws SQLException {
    User user = new User("testID");
    ArgumentCaptor<PreparedStatementCreator> captor = ArgumentCaptor.forClass(PreparedStatementCreator.class);
    PreparedStatement ps = mockConnection.prepareStatement("INSERT INTO user (id) VALUES (?)");
    ps.setString(1, user.getId().trim());
    insert(user);
    verify(mockJdbcTemplate, times(1)).update(captor.capture());
    PreparedStatementCreator actual = captor.getValue();
    assertEquals(ps, actual.createPreparedStatement(mockConnection));
}
I like your approach of using ArgumentCaptors.
You are using the ArgumentCaptor correctly to capture the argument of the update method on the mocked JDBC template; however, you cannot extract the argument used to call the PreparedStatementCreator, because this object is not a mock.
Conceptually, the difficulty in testing this part of your code comes from the fact that you don't control the creation of the PreparedStatementCreator. One possible solution is to take back control of how and when you create these objects, so that you can mock them in your tests.
Following a standard creational pattern, you could introduce a factory whose (single) responsibility is to create PreparedStatementCreator instances.
interface PreparedStatementCreatorFactory {
    PreparedStatementCreator newPreparedStatementCreator(Connection connection, String sql, User user);
}
public final class DefaultPreparedStatementCreatorFactory implements PreparedStatementCreatorFactory {
    @Override
    public PreparedStatementCreator newPreparedStatementCreator(Connection connection, String sql, User user) {
        // the returned creator receives the live Connection from JdbcTemplate
        return conn -> {
            final PreparedStatement ps = conn.prepareStatement(sql);
            ps.setString(1, user.getId().trim());
            return ps;
        };
    }
}
Then, in the class you are testing (which contains the JDBC mock), you can inject a mock of the PreparedStatementCreatorFactory. Instead of capturing the argument of the JDBC mock, you can capture the arguments passed to the factory, and, of course, specify what the mocked factory returns.
PreparedStatementCreatorFactory factory = Mockito.mock(PreparedStatementCreatorFactory.class);
PreparedStatementCreator creator = Mockito.mock(PreparedStatementCreator.class);
// Mock the creator at your convenience.
when(factory.newPreparedStatementCreator(any(Connection.class), any(String.class), any(User.class))).thenReturn(creator);
...
User user = new User("testID");
ArgumentCaptor<Connection> connectionCaptor = ArgumentCaptor.forClass(Connection.class);
ArgumentCaptor<String> sqlCaptor = ArgumentCaptor.forClass(String.class);
ArgumentCaptor<User> userCaptor = ArgumentCaptor.forClass(User.class);
insert(user);
verify(factory, times(1)).newPreparedStatementCreator(connectionCaptor.capture(), sqlCaptor.capture(), userCaptor.capture());
assertEquals(user, userCaptor.getValue());
One drawback of this approach is that it adds a level of indirection and some complexity; the main advantage is, as we have seen, that it improves the separation of concerns in your design and, ultimately, the testability of your code.
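For completeness, here is one possible shape for the class under test once the factory is injected. This is only a sketch; the class and field names are illustrative, not from the original post:

public class UserDao {
    private final JdbcTemplate jdbcTemplate;
    private final PreparedStatementCreatorFactory creatorFactory;

    public UserDao(JdbcTemplate jdbcTemplate, PreparedStatementCreatorFactory creatorFactory) {
        this.jdbcTemplate = jdbcTemplate;
        this.creatorFactory = creatorFactory;
    }

    public void insert(User user) {
        String sql = "INSERT INTO user (id) VALUES (?)";
        // the creator itself receives the live Connection from JdbcTemplate,
        // so no Connection is available (or needed) at this call site
        jdbcTemplate.update(creatorFactory.newPreparedStatementCreator(null, sql, user));
    }
}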
Related
I am working on a Java + Hibernate project, and I came across an interface in my code (ReturningWork<Long>) which has one method called execute(java.sql.Connection).
My question is: what is the use of this ReturningWork interface?
As I explained in more detail on my blog, you can use the ReturningWork and Work interfaces to implement any logic that requires direct access to the java.sql.Connection used by your Hibernate session.
Here is a simple example that uses the ReturningWork interface to execute a very simple query (which you could also implement with JPQL) and return the result.
Session session = em.unwrap(Session.class);
Integer bookCount = session.doReturningWork(new ReturningWork<Integer>() {
    @Override
    public Integer execute(Connection con) throws SQLException {
        // run a plain SQL count query on the underlying Connection
        try (PreparedStatement stmt = con.prepareStatement("SELECT count(b.id) FROM Book b")) {
            ResultSet rs = stmt.executeQuery();
            rs.next();
            return rs.getInt(1);
        }
    }
});
log.info("Found " + bookCount + " books.");
Hibernate Session doReturningWork
The Hibernate Session doReturningWork method has the following signature:
<T> T doReturningWork(ReturningWork<T> work) throws HibernateException;
And the ReturningWork interface looks as follows:
public interface ReturningWork<T> {
    public T execute(Connection connection) throws SQLException;
}
So, unlike the doWork method, doReturningWork allows us to return an object to the method caller.
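For contrast, here is a minimal doWork sketch; it is not from the original post, and the book table with its archived column is assumed purely for illustration. Work.execute returns void, so any result would have to be captured via side effects:

Session session = entityManager.unwrap(Session.class);
session.doWork(connection -> {
    // illustrative UPDATE; assumes a book table with an archived column
    try (PreparedStatement stmt = connection.prepareStatement("UPDATE book SET archived = true")) {
        stmt.executeUpdate();
    }
});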
For example, we can use the doReturningWork method to get the current transaction isolation level:
Session session = entityManager.unwrap(Session.class);
int isolationLevel = session.doReturningWork(
    connection -> connection.getTransactionIsolation()
);
assertEquals(
    Connection.TRANSACTION_READ_COMMITTED,
    isolationLevel
);
When to use doWork and doReturningWork?
In general, you can use the JPA or Hibernate-specific API to execute SQL statements or call database procedures or functions.
However, if you want to get access to the underlying JDBC Connection and execute any operation the JDBC API allows, then you need to use the doWork and doReturningWork Hibernate Session methods.
tl;dr: I have a method that creates a new database table on the fly, and I want to write a unit test for it. Unfortunately, the test runner does not roll back after the tests properly, and the table remains in the DB after the tests finish. What should I do?
Long story:
I am not very familiar with either Java Persistence or Spring, so if you find the current solution ugly (to me, it is rather ugly), please tell me how to improve it; I would very much appreciate your opinion.
I have a SysDatastoreService with the following implementation of the addStaticDatastore method.
@Service
public class SysDatastoreServiceImpl implements SysDatastoreService {

    @Autowired
    private SysDatastoreRepository datastoreRepository;

    @Autowired
    private DataSource dataSource;

    @Override
    @Transactional
    public Optional<SysDatastore> addStaticDatastore(String name, String tableName, String ident, Long ord) {
        String createTableSql = PostgresTableSqlBuilder.createTableInPublicSchemaWithBigintPkAndFkId(
                tableName,
                SysObject.TABLE_NAME,
                Optional.of(SysObject.ID_COLUMN_NAME)).buildSql();
        Optional<SysDatastore> sysDatastore = Optional.empty();
        try (
                Connection connection = dataSource.getConnection();
                Statement statement = connection.createStatement()
        ) {
            connection.setAutoCommit(false);
            Savepoint beforeTableCreation = connection.setSavepoint();
            try {
                statement.execute(createTableSql);
                sysDatastore = Optional.ofNullable(
                        datastoreRepository.save(new SysDatastore(name, tableName, DatastoreType.STATIC, ident, ord)));
            } catch (SQLException e) {
                e.printStackTrace();
            }
            if (!sysDatastore.isPresent()) {
                connection.rollback(beforeTableCreation);
            } else {
                connection.commit();
            }
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
        return sysDatastore;
    }
}
So, as you can see, I get a new connection from the DataSource and try to create a new table. On success, I create a new entry in the SysDatastoreRepository; if that fails, I roll back the table creation.
There are some disadvantages to the current approach, one of them being that table creation and entry insertion operate on separate connections (am I right?).
But I have a problem writing a unit test for this. Here is what I tried:
@Transactional(propagation = Propagation.REQUIRED)
@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@ContextConfiguration(locations = "file:src/main/webapp/WEB-INF/rest-servlet.xml")
public class SysDatastoreServiceTest {

    @Autowired
    private SysDatastoreService sysDatastoreService;

    @Autowired
    private DataSource dataSource;

    @Test
    public void testAddStaticDatastore() throws Exception {
        Optional<SysDatastore> sysDatastore =
                sysDatastoreService.addStaticDatastore("New static datastore", "new_datastore_table",
                        "NEW_DATASTORE_TABLE", 42L);
        assertTrue(sysDatastore.isPresent());
        assertEquals("New static datastore", sysDatastore.get().getName());
        assertEquals("NEW_DATASTORE_TABLE", sysDatastore.get().getIdent());
        assertEquals("new_datastore_table", sysDatastore.get().getTableName());
        assertEquals(DatastoreType.STATIC, sysDatastore.get().getDynType());
        assertEquals(42L, sysDatastore.get().getOrd().longValue());
        assertTrue(dataSource.getConnection()
                .getMetaData()
                .getTables(null, null, sysDatastore.get().getTableName(), null)
                .next());
    }
}
This test seems pretty straightforward: I just compare all the fields and then check the database for the new table.
However, this test fails when I run it twice or more. Looking at the database, I noticed that the table new_datastore_table remained in the schema. I guess it was not rolled back properly because of the hand-written transaction and raw SQL execution, but I am not sure.
Question: how should I write a test case for this method properly? And, if the current approach is fundamentally wrong, how should it be changed?
Side note: I use a PostgreSQL database, and it cannot be replaced with a non-relational one.
First, CREATE TABLE is a DDL statement, not a DML one, and your service commits it on its own connection, outside any test-managed transaction, so a rollback at the end of the test will not delete the table. If you want to clean your database, you must explicitly remove the table in an @After or @AfterClass method.
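For example, a cleanup along these lines (a sketch; it drops the table your test creates so reruns start clean):

@After
public void dropCreatedTable() throws Exception {
    try (Connection con = dataSource.getConnection();
         Statement st = con.createStatement()) {
        st.execute("DROP TABLE IF EXISTS new_datastore_table");
    }
}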
But do you really need to run the tests against the PostgreSQL database? Spring has great support for in-memory databases, and the default embedded database is HSQL, which has pretty good support for PostgreSQL syntax. Provided you have no complex statements, it could be enough, and it avoids cluttering the main database with (potentially destructive) unit tests.
You could create the database in a @BeforeClass method. Here is an oversimplified example:
private static DriverManagerDataSource dataSource;

@BeforeClass
public static void setupClass() throws Exception {
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
    populator.addScript(new ClassPathResource("path/to/package/defaults.sql"));
    dataSource = new DriverManagerDataSource();
    dataSource.setUrl("jdbc:hsqldb:mem:pgtest;sql.syntax_pgs=true");
    dataSource.setUsername("SA");
    Connection con = dataSource.getConnection();
    assertNotNull(con);
    populator.populate(con);
    con.close();
}
Imagine I have the following simple example interface:
public interface UserDB {
    void addUser(User user);
    void updateUser(User user);
    User getUser(String id);
    void deleteUser(String id);
}
I want to write tests with Mockito for the simple CRUD operations. I want to verify things like:
Update/get/delete work only if the user was added before with 'add'
They fail if the user was deleted before with 'delete'.
Already created users cannot be created again
etc.
My idea is that I need something like this:
UserDB udb = mock(UserDB.class);
when(udb.getUser("1")).thenThrow(new UserNotFoundException());
when(udb.addUser(new User("1"))).when(udb.getUser("1").thenReturn(new User("1"));
However, things like the last line are not proper Mockito syntax. How can I verify different results for different preconditions or different orders of method calls?
Doing it this way is a code smell. The fact that you want to write all this code to check that a "user cannot be added twice" means you are really, in effect, writing a new class, and that logic has nothing to do with your database rules.
Here is one idea for what you could do instead: structure your validation rules as a Decorator on the database, and then test the decorator itself against a mock "undecorated" database. For example:
public class ValidatingUserDB implements UserDB {
    private UserDB delegate;

    public ValidatingUserDB(UserDB delegate) {
        this.delegate = delegate;
    }

    public void addUser(User user) {
        User oldUser = delegate.getUser(user.getId());
        if (oldUser != null) throw new IllegalArgumentException(
            "User " + user.getId() + " already exists!");
        delegate.addUser(user);
    }

    // updateUser, getUser, and deleteUser delegate (and validate) similarly
}
Then, you would write your tests like this:
@Test(expected = IllegalArgumentException.class)
public void testNoDuplicateUsers() {
    User sampleUser = new User("1");
    UserDB delegate = mock(UserDB.class);
    when(delegate.getUser(anyString())).thenReturn(sampleUser);

    UserDB db = new ValidatingUserDB(delegate);
    db.addUser(sampleUser);
}
@Test
public void testAddingUser() {
    User sampleUser = new User("1");
    UserDB delegate = mock(UserDB.class);

    UserDB db = new ValidatingUserDB(delegate);
    db.addUser(sampleUser);

    verify(delegate).getUser(sampleUser.getId());
    verify(delegate).addUser(sampleUser);
}
By separating the validation behavior from the CRUD behavior, you make it possible to write tests without rewriting them all with super-complicated Answer rules and so forth.
Here are examples of the methods:
public <T> T save(final T o) {
    return (T) sessionFactory.getCurrentSession().save(o);
}

public <T> T get(final Class<T> type, final Long id) {
    return (T) sessionFactory.getCurrentSession().get(type, id);
}

public <T> List<T> getFieldLike(final Class<T> type, final String propertyName,
        final String value, final MatchMode matchMode) {
    final Session session = sessionFactory.getCurrentSession();
    final Criteria crit = session.createCriteria(type);
    crit.add(Restrictions.like(propertyName, value, matchMode));
    return crit.list();
}
Any tips on how to unit or integration test these? Pass in a mock session?
About the only thing you could do in a unit test is mock Session and Criteria and set expectations. I've done this for a few cases using JMock and ended up having to write a Hamcrest matcher for the Restrictions. I'm not convinced there's much value in it, other than blindly increasing test coverage.
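For reference, here is roughly what that mocking looks like with Mockito instead of JMock; this is a sketch, and MyType and expected are placeholders for your entity class and expected result:

SessionFactory sessionFactory = mock(SessionFactory.class);
Session session = mock(Session.class);
Criteria criteria = mock(Criteria.class);
when(sessionFactory.getCurrentSession()).thenReturn(session);
when(session.createCriteria(MyType.class)).thenReturn(criteria);
// Criteria.add returns the Criteria itself, so chained calls keep working
when(criteria.add(any(Criterion.class))).thenReturn(criteria);
when(criteria.list()).thenReturn(Collections.singletonList(expected));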
On the other hand, writing an integration test here would be of definite use: set up an in-memory database with some data and assert that the methods return the correct objects.
An example for integration test for get():
// if you can inject the object:
// @Inject
// public MyTestDAO<MyType> dao;
//
@Test
public void testGet() throws Exception {
    Session session = HibernateUtils.getSessionFactory().getCurrentSession();
    MyTestDAO<MyType> dao = new MyTestDAO<MyType>();
    // if you use a DI facility, check against null
    // assertNotNull(dao);
    MyType myType = dao.get(MyType.class, 1L);
    assertNotNull(myType);
    assertEquals(myType, equalObj);
    // more asserts?
}
If you use the Spring Framework, you can use the spring-test module to get DI via the Spring context and class runner:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations={"classpath:spring.xml"})
From your question, I assume you are managing your sessions yourself. I will talk about the unit testing part; integration tests can be run against an in-memory database, and H2 and Derby are among the most widely used ones as far as I know.
You can get by with passing a mock session to your DAO, but what happens if one of your colleagues writes a unit test and forgets to pass a mock session? Do you hit the database? If you do NOT want to hit the database, this approach can be a problem.
A better approach would be to use an abstract factory pattern to even ACQUIRE the DAO. Basically, your code currently looks like this:
MyDao.get(someId);
However, you can go the following way. First define your DAO like this:
public interface IDao<T, ID extends Serializable> {
    T get(Class<T> type, ID key);
    // more generic methods
}
Then you can have an implementation of the DAO like this:
public class MyDao<T, ID extends Serializable> implements IDao<T, ID> {
    private SessionFactory sessionFactory;

    public T get(Class<T> type, ID key) {
        // some real session stuff
        return type.cast(sessionFactory.getCurrentSession().get(type, key));
    }
    // ...more real session stuff
}
Then in your unit tests you can define a mock of your IDao interface, kinda like this:
public class MyDaoMock<T, ID extends Serializable> implements IDao<T, ID> {
    // this thingy can hold a small map of mock instances, keyed by id
    private final Map<ID, T> mockInstances = new HashMap<ID, T>();

    public void addMockInstance(ID key, T instance) {
        mockInstances.put(key, instance);
    }

    public T get(Class<T> type, ID key) {
        // some fake stuff, kinda like this:
        if (mockInstances.containsKey(key)) {
            return mockInstances.get(key);
        }
        // if not found, emulate your persistence provider's behavior:
        // return null, throw an exception, or do whatever fits
        return null;
    }
    // ...more fake stuff
}
Now you have an interface and two implementations: one for your production code and one for your unit tests. All you have to do now is implement the factory pattern to hand out the right DAO in production and in testing code.
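Here is a minimal sketch of such a factory; the names (DaoFactory, DaoFactories, install) are mine, not from any framework:

public interface DaoFactory {
    <T> IDao<T, Long> daoFor(Class<T> type);
}

public final class DaoFactories {
    // production default; tests swap in a factory that returns MyDaoMock
    private static DaoFactory instance = new DaoFactory() {
        @Override
        public <T> IDao<T, Long> daoFor(Class<T> type) {
            return new MyDao<T, Long>();
        }
    };

    public static DaoFactory get() { return instance; }

    // tests call this to install a factory that hands out mocks
    public static void install(DaoFactory factory) { instance = factory; }
}

Production code then asks DaoFactories.get().daoFor(User.class) for its DAO, while a test first installs a factory that returns MyDaoMock instances.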
I have several DAO objects that are used to retrieve information from a database and I really want to write some automated tests for them but I'm having a hard time figuring out how to do it.
I'm using Spring's JdbcTemplate to run the actual query (via a prepared statement) and map the results to the model object (via the RowMapper class).
If I were to write unit tests, I'm not sure how I would/should mock the objects. For example, since there are only reads, I would use the actual database connection and not mock the jdbcTemplate, but I'm not sure that's right.
Here's the (simplified) code for the simplest DAO of the batch:
/**
 * Implementation of the {@link BusinessSegmentDAO} interface using JDBC.
 */
public class GPLBusinessSegmentDAO implements BusinessSegmentDAO {

    private JdbcTemplate jdbcTemplate;

    private static class BusinessSegmentRowMapper implements RowMapper<BusinessSegment> {
        public BusinessSegment mapRow(ResultSet rs, int arg1) throws SQLException {
            try {
                return new BusinessSegment(rs.getString(...));
            } catch (SQLException e) {
                return null;
            }
        }
    }

    private static class GetBusinessSegmentsPreparedStatementCreator
            implements PreparedStatementCreator {

        private String region, cc, ll;
        private int regionId;

        private GetBusinessSegmentsPreparedStatementCreator(String cc, String ll) {
            this.cc = cc;
            this.ll = ll;
        }

        public PreparedStatement createPreparedStatement(Connection connection)
                throws SQLException {
            String sql = "SELECT ...";
            PreparedStatement ps = connection.prepareStatement(sql);
            ps.setString(1, cc);
            ps.setString(2, ll);
            return ps;
        }
    }

    public GPLBusinessSegmentDAO(DataSource dataSource) {
        jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public Collection<BusinessSegment> getBusinessSegments(String cc, String ll) {
        return jdbcTemplate.query(
                new GetBusinessSegmentsPreparedStatementCreator(cc, ll),
                new BusinessSegmentRowMapper());
    }
}
Any idea would be appreciated.
Thanks!
Please have a look at the links below:
Testing SQL queries with Spring and DbUnit
MockObjects or DBUnit for testing Code using JdbcTemplate
Hope that helps.
EDIT:
Here is the GitHub version of RowMapperTests for easy reference.
I recommend breaking your dependency on the JdbcTemplate class and using the JdbcOperations interface instead, e.g.
public class GPLBusinessSegmentDAO implements BusinessSegmentDAO {

    private final JdbcOperations jdbc;

    public GPLBusinessSegmentDAO(DataSource dataSource) {
        this(new JdbcTemplate(dataSource));
    }

    public GPLBusinessSegmentDAO(JdbcOperations jdbc) {
        this.jdbc = jdbc;
    }

    // ... DAO methods here
}
Your unit test can invoke the second constructor, passing in a mock JdbcOperations object. Since all DB operations are performed via the jdbc object, you can mock that easily enough.
Your live code can call the first constructor as before.
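A sketch of such a test (the BusinessSegment value and the cc/ll arguments here are invented for illustration):

@Test
public void getBusinessSegmentsQueriesViaJdbcOperations() {
    JdbcOperations jdbc = mock(JdbcOperations.class);
    BusinessSegment segment = new BusinessSegment("retail"); // made-up value
    when(jdbc.query(any(PreparedStatementCreator.class), any(RowMapper.class)))
            .thenReturn(Collections.singletonList(segment));

    GPLBusinessSegmentDAO dao = new GPLBusinessSegmentDAO(jdbc);
    Collection<BusinessSegment> result = dao.getBusinessSegments("US", "en");

    assertEquals(1, result.size());
    verify(jdbc).query(any(PreparedStatementCreator.class), any(RowMapper.class));
}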
To write a true unit test for this, you would not be touching a real database.
You may, however, find it more practical to pass in a real DataSource for your underlying DB and test that the getBusinessSegments() method returns 0, 1, and many results depending on the cc and ll values you pass in.
Another option worth investigating is to pass in the DataSource of an embedded Java DB that is initialised with your schema in a setUp/@Before method. I guess what you really want to test is that the SELECT... query maps correctly to the schema, so such a test would catch any errors that arise at runtime when, say, the schema changes.
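For instance, with Spring's embedded-database support (a sketch; the script names are illustrative):

private EmbeddedDatabase db;
private GPLBusinessSegmentDAO dao;

@Before
public void setUp() {
    db = new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("schema.sql")      // illustrative script names
            .addScript("test-data.sql")
            .build();
    dao = new GPLBusinessSegmentDAO(db);
}

@After
public void tearDown() {
    db.shutdown();
}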