I'm writing tests for my "MySQL Requests Manager", and the problem is that some of the tests depend on the data contained in the database. So if another test deletes the required records, or someone else deletes them, those tests will fail even though the code is correct.
I'm thinking about two approaches here:
1. In the test itself, back up all the needed data first, run the test, and then restore the data from the backup. But this is more error-prone and "heavier", in my humble opinion.
2. Before running one of the tests, or even all of them, create a whole new database with the structure and required data (from a previously made dump, I think). This involves only two 'global' actions: creating the database and dropping it. Of course, I need a totally isolated MySQL user and database for this.
What do you think, and what can you recommend? How do other programmers deal with this kind of issue?
Here's a different idea if you want to check it out: there's a Java framework, Mockito, that is pretty helpful in cases like this. With it, you can create 'mock' instances of certain objects/services so you can avoid actually instantiating them. With a mock, you can return custom or hard-coded results and test that your service handles that response correctly. For example, say you have a class 'SQLTestService' that has a method called 'getData()'. You can instantiate a mock of 'SQLTestService' and have it return a specific value when 'getData()' is called. That way your tests are never actually dependent on the data in the DB, and you can test for a specific outcome that you know your service should be able to handle.
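A minimal sketch of that idea (SQLTestService and getData() are the hypothetical names from above; I'm assuming here that getData() returns a String):

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class SQLTestServiceMockExample {

    @Test
    public void returnsCannedDataInsteadOfHittingTheDatabase() {
        // a mock stands in for the real, DB-backed service
        SQLTestService service = mock(SQLTestService.class);

        // hard-code the result; no database is touched
        when(service.getData()).thenReturn("expected row");

        // in a real test you would hand this mock to the class that consumes
        // SQLTestService and assert on that class's behaviour
        assertEquals("expected row", service.getData());
    }
}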
When writing unit tests, one should:
Use a test DB
Load data before each test (or load data before all tests)
@Before
public void setUp() {
    // insert your test data here
}
Drop data after each test (or drop data after all tests)
@After
public void tearDown() {
    // drop your test data here
}
That way the DB is system-independent, and each test runs in isolation without any fear of losing data or of interference between tests.
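Put together, a minimal sketch might look like this (assuming a JDBC connection to a dedicated test database; the class name, users table, URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class UserDaoTest {

    private Connection conn;

    @Before
    public void setUp() throws Exception {
        // connect to an isolated test database (placeholder URL/credentials)
        conn = DriverManager.getConnection("jdbc:mysql://localhost/testdb", "testuser", "testpass");
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO users (id, name) VALUES (1, 'Alice')");
        }
    }

    @After
    public void tearDown() throws Exception {
        // remove the test data so the next test starts clean
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("DELETE FROM users WHERE id = 1");
        }
        conn.close();
    }

    @Test
    public void findsSeededUser() throws Exception {
        // run the code under test against the known data inserted in setUp()
        try (Statement st = conn.createStatement()) {
            assertTrue(st.executeQuery("SELECT * FROM users WHERE id = 1").next());
        }
    }
}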
I've been struggling to understand the minefield of classes supported by the jmockit Mocking API. I found what I thought was a bug with File, but the issue was simply closed saying "Don't mock a File or Path", with no explanation. Can someone here help me understand why certain classes should not be mocked and how I'm supposed to work around that? I'm trying to contribute to the library with bug reports, but each one I file just leaves me more confused. Pardon my ignorance, but if anyone can point me to the rationale for forbidden mocks etc., I'd greatly appreciate it.
I'm not certain that there's a foolproof list of rules. There's always a certain degree of knowledge and taste in these matters. Dogma is usually not a good idea.
I voted to close this question because I think a lot of it is opinion.
But with that said, I'll make an attempt.
Unit tests should be about testing individual classes, not interactions between classes. Those are called integration tests.
If your object is calling out to other remote objects, like services, you should mock those to return the data needed for your test. The idea is that services and their clients should also be tested individually in their own unit tests. You need not repeat those when testing a class that depends on them.
One exception to this rule, in my opinion, is data access objects. There is no sense in testing one of those without connecting to a remote database. Your test needs to prove the proper operation of your code, and that requires a database connection in the case of data access objects. These tests should be written to be transactional: seed the database, perform the test, and reverse the actions of the test. The database should be in the same state when you're done.
Once your data access objects are certified as working correctly, all clients that use them should mock them. No need to re-test.
You should not be mocking classes in the JVM.
You asked for a why about File or Stream in particular - here's one. Take it or ignore it.
You don't have to test JVM classes because Sun/Oracle have already done that. You know they work. You want your class to use those classes because a failing test will expose the fact that the necessary file isn't available. A mock won't tell me that I've neglected to put a required file in my CLASSPATH. I want to find out during testing, not in production.
Another reason is that unit tests are also documentation. It's a live demonstration for others to show how to properly use your class.
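For example, instead of mocking File, a test can exercise the code against a real temporary file, which JUnit 4 makes easy with the TemporaryFolder rule (a sketch; the file name, contents, and the ConfigLoader class mentioned in the comment are made up for illustration):

import java.io.File;
import java.io.FileWriter;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import static org.junit.Assert.assertTrue;

public class RealFileTest {

    // JUnit creates the folder before each test and deletes it afterwards
    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void worksWithARealFile() throws Exception {
        File f = folder.newFile("config.properties");
        try (FileWriter writer = new FileWriter(f)) {
            writer.write("key=value");
        }

        // exercise the class under test against the real file here,
        // e.g. new ConfigLoader().load(f)  (ConfigLoader is hypothetical)
        assertTrue(f.exists());
    }
}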
In a unit test you have to test that your code is doing the right thing. You can mock out any external piece of code that is not the direct code being tested. This is a strict definition of unit testing, and it assumes there is another form of testing, called integration testing, that will be done later on. In integration testing you test how your code interacts with external elements, like a DB, another web service, the network, or the hard drive.
If I have a piece of code that interacts with an object, like a File, and my code does 3 things to that file, then in my unit test I am going to test that my code has done those three things.
For example:
public void processFile(File f) {
    if (f.exists()) {
        // perform some tasks
    } else {
        // perform some other tasks
    }
}
To properly unit-test the code above I would write at least two unit tests: one to test what happens when the file exists, and another to test that my code does the correct thing when the file does not exist. Because unit testing, IMHO, is only about testing my code and not about integration, it is perfectly fine to mock File so that you can test both branches of this method.
During integration testing you can then test with a real File as your application will be interacting with its surroundings.
Try Mockito:
I do not know why jmockit does not allow you to mock the File class. In Mockito it can be done. Example below.
import java.io.File;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class NewMain {
    public static void main(String[] args) {
        File f = mock(File.class);
        when(f.exists()).thenReturn(true);
        System.out.println("f.exists = " + f.exists());
    }
}
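Applying the same idea to the processFile method above, the two branch tests might look something like this (a sketch, assuming processFile lives in a class I'm calling FileProcessor for illustration):

import java.io.File;
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class FileProcessorTest {

    @Test
    public void handlesExistingFile() {
        File f = mock(File.class);
        when(f.exists()).thenReturn(true);

        new FileProcessor().processFile(f);
        // assert the "file exists" behaviour here
    }

    @Test
    public void handlesMissingFile() {
        File f = mock(File.class);
        when(f.exists()).thenReturn(false);

        new FileProcessor().processFile(f);
        // assert the "file missing" behaviour here
    }
}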
I have a DAO class in which I need to test a method called getItemById(), which returns an Item object from a DB table.
As far as I understand, I have to create an Item object in the test and check whether it equals the one returned from the method? Or do I just have to check that it returns an Item object?
What if the table is empty, or there is no row with that id at all?
Sorry, this is quite a newbie question, but I can't get it clear in my head. Please help!
Running tests against a database where you can't predict what's in it is not effective; any test that is resilient enough to accommodate changing data is going to be worthless for the purpose of confirming whether the code under test actually does the right thing. I would make the test use its own database instance, so that there's no question of interference from other users mucking up my test, or my test changing data out from under somebody else. The ideal choice would be an in-memory database like H2, that the test can instantiate and throw away when it's done with it. That way the test can run anywhere (for instance on a CI server), with the same results.
The test needs to run the DDL to create the schema and populate the database before executing. There are different tools you can use for this. DbUnit is popular; there is also an alternative called DBSetup, which is supposed to be less complicated. You can have separate test data for different scenarios. DbUnit has tools to extract data from a database to make it easier to create your test data.
Since the database is under your control and you can populate it as you wish, you should verify that the returned object's fields are what you expect based on the populated data. Make the test as specific as possible.
For testing the SQL and how the object is mapped from the ResultSet, it makes sense to use a database. For some parts of this it would make sense to use a unit test that doesn't touch the database and uses mocks. For instance, it would be good to confirm that the connection gets closed in all cases; it's easier to use mocks for that than it is to cause a SQLException in your code.
Testing using mocks would be easier if the DBConnection class was injected instead of being instantiated within the method. If you changed the code to inject the DBConnection then you could write a unit test (one using mocks that doesn't use a database) that checks whether the connection gets closed.
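A minimal sketch of that kind of database-backed test, assuming an in-memory H2 database and a hypothetical ItemDao class wrapping the getItemById() method from the question (the item table, Item.getName(), the Connection-based constructor, and the "return null when not found" contract are all assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

public class ItemDaoTest {

    private Connection conn;
    private ItemDao dao;              // hypothetical DAO under test

    @Before
    public void setUp() throws Exception {
        // fresh in-memory database for every test run
        conn = DriverManager.getConnection("jdbc:h2:mem:test");
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE item (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO item VALUES (1, 'widget')");
        }
        dao = new ItemDao(conn);      // assumes the DAO accepts a Connection
    }

    @After
    public void tearDown() throws Exception {
        conn.close();                 // the in-memory DB disappears with the connection
    }

    @Test
    public void returnsItemWithExpectedFields() {
        Item item = dao.getItemById(1);
        assertEquals("widget", item.getName());
    }

    @Test
    public void returnsNothingWhenNoRowMatches() {
        assertNull(dao.getItemById(42));   // or assert whatever your DAO's contract is
    }
}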
To write a unit test like this, you should follow three steps:
Prepare the test environment (e.g. populate the DB with known test data), so you don't have to wonder whether the table is empty or not.
Perform the test and assert the result.
Clean up, so the test doesn't influence other tests.
Besides that, you should test all scenarios, because you have to handle all of them.
I want to test my class MyTypeDAO implemented with Hibernate 4.1 using JUnit 4.9. I have the following question:
In my DAO, I have a findById method that retrieves an instance of my type by its ID. How do I test this method?
What I've done:
I create an instance of my type.
Then, I need to persist this instance, but how? Can I rely on my saveMyType method? I don't think so, since I'm in the test case and this method is not tested.
Then, I need to call the findById method with the ID of the instance created in step 1.
Finally, I check that the instance created in step 1 equals the one I get in step 3.
Any idea? What are the best practices?
I have the same question for the save method, since after running it I need to retrieve the saved instance. Here also, I don't think I can rely on my findById method, since it isn't tested yet.
Thanks
One possible way is:
Create an in-memory DB for testing, load the contents of this DB from a predefined SQL script, and then test your DAO classes against this database.
Every time you start the tests, the database will be created from scratch using the SQL script, and you will know which id should return a result and which one should not.
See DbUnit (from satoshi's comment).
I don't think you have much choice here. It's not good practice to have tests that test two things or depend on each other. Nevertheless, you should consider this a valid exception, and it stays fast. You are right: persisting an object and then retrieving it is a good way to test this DAO layer.
Other options include having a record in the database that you are sure about and testing retrieval (findById) on it, and then a second test that persists an object and removes it in the tearDown method.
But really, it is simpler to test loading and saving together, and it makes a lot of sense; see the sketch below.
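A round-trip sketch of that idea, assuming the DAO is wired to a SessionFactory pointing at an in-memory test database (e.g. H2 with hbm2ddl.auto=create-drop). MyType, MyTypeDAO, saveMyType and findById come from the question; the constructor, the getters, and the TestSessionFactory helper are assumptions for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyTypeDAOTest {

    // assumed helper that builds a SessionFactory against the in-memory test DB
    private final MyTypeDAO dao = new MyTypeDAO(TestSessionFactory.get());

    @Test
    public void saveThenFindByIdRoundTrip() {
        // step 1: create an instance
        MyType original = new MyType();
        original.setName("something");                       // assumed property

        // step 2: persist it and reload it by its generated ID
        dao.saveMyType(original);
        MyType loaded = dao.findById(original.getId());

        // step 3: the reloaded instance should match what was saved
        assertEquals(original.getName(), loaded.getName());
    }
}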
How do you manage dummy data used for tests? Keep them with their respective entities? In a separate test project? Load them with a Serializer from external resources? Or just recreate them wherever needed?
We have an application stack with several modules depending on one another, each containing entities. Each module has its own tests and needs dummy data to run with.
Now a module that has a lot of dependencies will need a lot of dummy data from the other modules. Those, however, do not publish their dummy objects because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable, so even running transactions against the object layer requires them to contain some value, most of the time with further limitations like uniqueness, length, etc.
Is there a best practice way out of this or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java --> gets jarred (.../entities/*.java contains the entities)
src/main/resources --> gets jarred
src/test/java --> contains dummy object setup, will NOT get jarred
src/test/resources --> not jarred
We use Maven to handle dependencies.
module example:
Module A has some dummy objects
Module B needs its own objects AND the same as Module A
Option a)
A test module T could hold all dummy objects and provide them in test scope (so the loaded dependencies don't get jarred) to the tests in all modules. Will that work? Meaning: if I load T in A and run install on A, will it NOT contain references introduced by T, and especially not to B? However, A would then know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy allowing B to get them while A does not know about B's dummy data
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them, because it has a dependency on the module they belong to. This requires every module to create and serialize its dummy objects, though, and how would one do that? If with another unit test, it introduces dependencies between unit tests, which should never happen; if with a script, it will be hard to debug and not flexible.
Option d)
Use a mock framework and assign the required fields manually for each test as needed. The problem here is that most fields in our entities are not nullable and thus require setters or constructors to be called, which would put us back at the start again.
What we don't want
We don't want to set up a static database with static data, as the required objects' structure will constantly change - a lot right now, a little later. So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. Also, a static database would introduce a lot of potential errors and test interdependencies.
Are my thoughts going in the right direction? What's the best practice to deal with tests that require a lot of data? We'll have several interdependent modules that will require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
    public Person mockPerson(parameters...) {
        return mockedPerson;
    }
}

public class TestPerson {

    @Inject
    private MockPerson mockPerson;

    @Test
    public void testCreate() {
        Person person = mockPerson.mockPerson(...);
        // Asserts...
    }
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }

public class TestProduct {

    @Inject
    private MockProduct mockProduct;

    // ...
}
MockProduct is needed but will not be packaged.
Now the Order Tests will require MockPerson and MockProduct, so now we currently need to create both as well as MockOrder to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes
public class MockProduct { ... }
public class MockPerson { ... }
This is the only class that should be here:
public class MockOrder { ... }

public class TestOrder {

    @Inject
    private order.MockPerson mockPerson;

    @Inject
    private order.MockProduct mockProduct;

    @Inject
    private order.MockOrder mockOrder;

    @Test
    public void testCreate() {
        Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
        // Asserts...
    }
}
The problem is that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the mocks with the jar, so that every other test that has the dependency anyway can just call Mock.mock and get a nicely set-up object? Or is this the dark side - the easy way?
This may or may not apply - I'd be curious to see an example of your dummy objects and the related setup code, to get a better idea of whether it applies to your situation. But what I've done in the past is not introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially to package and maintain.
What I've usually done (and AFAICT in Java this is the best practice) is use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post; there's a small sketch of the pattern after the links below.
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
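To illustrate the idea, a minimal builder for the Person entity from the question might look like this (the fields, defaults and constructor are assumptions; the point is that every required, non-nullable field gets a sensible default, and individual tests override only what they care about):

public class PersonBuilder {

    // sensible defaults so every required field is always populated
    private String firstName = "Jane";
    private String lastName = "Doe";
    private String location = "Nowhere";

    public PersonBuilder withFirstName(String firstName) {
        this.firstName = firstName;
        return this;
    }

    public PersonBuilder withLocation(String location) {
        this.location = location;
        return this;
    }

    public Person build() {
        return new Person(firstName, lastName, location);   // assumed constructor
    }
}

// usage in a test: only the relevant detail is spelled out
// Person person = new PersonBuilder().withLocation("Berlin").build();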
Well, I have carefully read all the answers so far, and it is a very good question. I see the following approaches to the problem:
1. Set up a (static) test database;
2. Each test has its own setup code that creates (dynamic) test data prior to running the unit tests;
3. Use dummy or mock objects. All modules know all dummy objects, so there are no duplicates;
4. Reduce the scope of the unit tests.
The first option is pretty straightforward but has many drawbacks: somebody has to recreate the data once in a while when the unit tests "mess it up"; if there are changes in the data module, somebody has to make corresponding changes to the test data; in short, a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
With the second option, you write test code that, prior to testing, invokes some of your "core" business methods to create the entities. Ideally, your test code should be independent of the production code, but in this case you will end up with duplicate code that you have to maintain twice. Sometimes it is good to split your production business methods in order to have an entry point for your unit tests (I make such methods private and use reflection to invoke them; a comment on the method is also needed, and refactoring becomes a bit trickier). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you can't test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work. Also, with any change in this area you should keep in mind how it will affect your unit tests. Sometimes it is also impossible to reproduce all the required data dynamically, usually because of third-party APIs; for example, you call another application with its own DB from which you need to use some keys. These keys (with their associated data) are created manually through the third-party application. In such a case, this data, and only this data, should be created statically. For example, you create 10,000 keys starting from 300,000.
The third option should be good. Options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your modules knowing about all your entities.
The fourth option means that you redefine what a "unit" is in your unit tests. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we originally tested was an integration test and not a unit test. So we split our methods and extract small "units of work" that receive all of their dependencies on other modules as parameters. These parameters can (hopefully) be mocked easily. The main drawback of this approach is that you don't test all of your code, but only, so to say, the "focal points". You need to do integration testing separately (usually by a QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit Testing a module which depends on other modules and, because of that, on the test data of other modules is not a real unit test!
What if you injected a mock for each of the dependencies of your module under test, so you can test it in complete isolation? Then you don't need to set up a complete environment where each dependent module has the data it needs; you only set up the data for the module you're actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests and at the top you have some scenario tests (or as Google calls them, small, medium and big tests).
You will have a huge amount of Unit Tests that can test every code path because the mocked dependencies are completely configurable. Then you can trust in your individual parts and the only thing that your functional and scenario tests will do is test if each module is wired correctly to other modules.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
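As a sketch of what that isolation can look like with Mockito (OrderService, PersonRepository, ProductRepository and the constructors used here are made-up names standing in for your module and its dependencies):

import org.junit.Test;
import static org.junit.Assert.assertNotNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class OrderServiceTest {

    @Test
    public void createsOrderWithoutTouchingOtherModules() {
        // mock the other modules' entry points instead of loading their test data
        PersonRepository persons = mock(PersonRepository.class);
        ProductRepository products = mock(ProductRepository.class);
        when(persons.findById(1L)).thenReturn(new Person("Jane", "Doe", "Berlin"));
        when(products.findById(2L)).thenReturn(new Product("Widget"));

        // the module under test gets fully controlled dependencies
        OrderService service = new OrderService(persons, products);
        Order order = service.createOrder(1L, 2L);

        assertNotNull(order);
    }
}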
The Builder Pattern as mentioned by cwash will definitely help in your functional tests.
We are using a .NET Builder that is configured to build a complete object tree and generate default values for each property so when we save this to the database all required data is present.
I created one class, in which I am inserting values into SQL as follows:
public class ABC {
    // some code here..........
    // ...............
    public void insertUsers(String firstName, String lastName, String location) {
        pre.setString(1, firstName);
I created a test class for this class.
I want to write a test case for this method insertUsers(), using an assert statement.
How do I write an assert statement for the above method?
When doing unit testing one should avoid accessing external resources such as databases, filesystems, network etc. This is to keep the tests in memory (fast), but also isolated from external failures. You only want to test a specific part of some functionality in e.g. a class, nothing else.
What this means for you is that the conn variable (which I assume is the DB connection) needs to be mocked out. You can do this easily with something like dependency injection, which means you pass things into your class when constructing it. In this case you would pass in an interface which has the necessary functions conn uses.
Then in production you pass in the real DB connection object, while in tests you pass in a mock that you control. You can then check that ABC calls and does what you expect it to do with conn. The same goes for the pre you're using.
You can see it like this: I would like to test class ABC, and in order to do that I need to see how it uses pre and conn, so I replace those with my own test implementations I can check after doing something with ABC.
In order to help you more specifically with what you're doing, you would need to show what pre is and tell us what you intend to test.
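For illustration, if the Connection were injected, a test could look roughly like this (a sketch; the constructor taking a Connection, the executeUpdate() call inside insertUsers, and the parameter order are assumptions about code that isn't shown):

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.junit.Test;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class ABCTest {

    @Test
    public void insertUsersBindsParametersAndExecutes() throws Exception {
        Connection conn = mock(Connection.class);
        PreparedStatement pre = mock(PreparedStatement.class);
        // whatever SQL insertUsers prepares, hand back our mocked statement
        when(conn.prepareStatement(anyString())).thenReturn(pre);

        ABC abc = new ABC(conn);          // assumes conn is injected via the constructor
        abc.insertUsers("John", "Smith", "London");

        // verify the interaction with the statement instead of hitting a real DB
        verify(pre).setString(1, "John");
        verify(pre).setString(2, "Smith");
        verify(pre).setString(3, "London");
        verify(pre).executeUpdate();      // assumes insertUsers executes the statement
    }
}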
Well, if you really want to test updating your database, you can do that. Usually people follow one of the two approaches below:
Use Spring's AbstractTransactionalDataSourceSpringContextTests. This allows you to add any values to the database, and then Spring will take care of rolling back the values that you have inserted.
Use a separate database just for your JUnit tests. You really don't need anything heavy; you can use something like HSQLDB, which is a really lightweight Java database. This will allow you to keep your test data separate from your production/QA database.
After the above is done (and you have run the insert statement), simply run a select statement from your JUnit test to get the data, and then compare the expected data with the actual data.
A couple of remarks.
I'd use the standard assert keyword during development only. It checks a condition and throws an AssertionError if the condition evaluates to false.
If you expect illegal arguments, then it's much better to add some "normal" code to the method to handle those values, or to throw an IllegalArgumentException and write a log entry.
Do not close the connection in this method! Do that only when you open/create the connection in the very same method. In larger applications you won't be able to find out who closed the connection after a while. If the caller of insertUsers opened the connection, the caller should close it itself!
(More help is possible if you tell us what exactly you want to test - the method parameters, or whether the insert was a success.)
I wouldn't test the insertion of the data into the database; accessing the database during unit testing is slow, and that path can be covered through automated functional GUI testing of your application.
What you may want to test is the generation of the expected queries. This can be realised if you separate the generation and the execution of the statements; you will then be able to compare the generated statements with the expected ones without having to access your database from the unit test.
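A minimal sketch of that separation (the UserQueryBuilder class and its method are made up for illustration; the builder only generates SQL, while execution and parameter binding happen elsewhere):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class UserQueryBuilderTest {

    // hypothetical builder that only generates SQL and never touches the database
    static class UserQueryBuilder {
        String insertUserSql() {
            return "INSERT INTO users (first_name, last_name, location) VALUES (?, ?, ?)";
        }
    }

    @Test
    public void generatesExpectedInsertStatement() {
        // compare the generated statement with the expected one, no DB access needed
        assertEquals(
            "INSERT INTO users (first_name, last_name, location) VALUES (?, ?, ?)",
            new UserQueryBuilder().insertUserSql());
    }
}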