I annotate my test methods like this:
@Test
@DatabaseSetup("/default_database_data.xml")
@ExpectedDatabase(value = "/expected_database_1.xml", assertionMode = NON_STRICT)
Is it possible to manually perform the things that @DatabaseSetup and @ExpectedDatabase do:
@Test
public void test() {
    // DBUnit.setup("/default_database_data.xml");
    dao.insert(...);
    // DBUnit.expected("/expected_database_1.xml");
}
I made the syntax up just to give you an idea of what I need: to perform two setups and assertions in one unit test.
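For what it's worth, plain DbUnit exposes a programmatic API that roughly matches this pseudocode. A hedged sketch follows; the JDBC wiring is an assumption, and the comparison here is strict (spring-test-dbunit's NON_STRICT mode ignores extra tables and columns, which plain Assertion.assertEquals does not):

import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.Assertion;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Test;

public class ManualDbUnitTest {

    @Test
    public void test() throws Exception {
        // Obtain a JDBC connection however your test configuration provides one (assumed here).
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:test");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // Manual equivalent of @DatabaseSetup("/default_database_data.xml")
        IDataSet setup = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/default_database_data.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(connection, setup);

        // dao.insert(...);

        // Rough equivalent of @ExpectedDatabase("/expected_database_1.xml")
        IDataSet expected = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/expected_database_1.xml"));
        Assertion.assertEquals(expected, connection.createDataSet());
    }
}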
Two things that might work: check this link.
Link
And also this annotation:
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
Related
I am writing JUnit tests which add some users before each test case. The code is:
@BeforeEach
void setUp() {
    saveUser();
    saveEntry();
}
@Test
void saveUser() {
    User user = new User();
    user.setUserId(null);
    user.setUsername("John");
    user.setEmail("john@foo.com");
    user.setPassword("password");
    userService.saveUser(user);
}
@Test
void saveEntry() {
    Entry entry = new Entry();
    entry.setText("test text");
    entry.setUserId(1L);
    entryService.saveEntry(entry);
}
As you can see, I am using the methods from my service layer to create entries and users. If I run the tests one by one there is no problem, but when I run all the tests together the db returns multiple items instead of one and an exception occurs.
I need to clean up the H2 db after each test, maybe with the @AfterEach annotation, but I do not have a delete method in my code to invoke. How can I clean up the H2 db?
In addition to @J Asgarov's answer, which is correct provided you use Spring Boot: if you want to perform some actions before and after each test (more specifically, before @Before and after @After methods), you can use the @Sql annotation to execute a specific SQL script, for example from test resources.
#Sql("init.sql")
#Sql(scripts = "clean.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
public class TestClass
}
It might be handy for sequences, since they are not affected by rollback.
Regarding @Transactional mentioned by @Mark Bramnik, be careful: the transaction spans the entire test method, so you cannot verify the correctness of the transaction boundaries.
You've mentioned Spring and it looks like you're testing the DAO layer, so I assume the tests are running with SpringExtension (or SpringRunner if you're on JUnit 4).
In this case, have you tried to use @Transactional on the test method? Alternatively, if all the test methods are "transactional", you can place it once on the test class.
This works as follows:
If everything is configured correctly, Spring will open a transaction before the test starts and roll that transaction back after the test ends.
The rollback is supposed to clean up the inserted data automatically.
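A minimal sketch of that setup, assuming a Spring Boot test and reusing the UserService and User from the question:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // Spring opens a transaction per test and rolls it back afterwards
class UserServiceTransactionalTest {

    @Autowired
    private UserService userService;

    @Test
    void saveUser() {
        User user = new User();
        user.setUsername("John");
        user.setEmail("john@foo.com");
        user.setPassword("password");
        userService.saveUser(user);
        // assert here; the inserted row disappears when the transaction is rolled back
    }
}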
Quick googling revealed this tutorial; surely there are many others.
Annotating the test class with @DataJpaTest will roll back every test by default.
https://www.baeldung.com/spring-jpa-test-in-memory-database
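For completeness, a sketch of that slice-test variant; EntryRepository is a hypothetical Spring Data repository, not something shown in the question:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest // boots an embedded database and rolls back each test by default
class EntryRepositoryTest {

    @Autowired
    private EntryRepository entryRepository; // hypothetical repository for the Entry entity

    @Test
    void savesEntry() {
        Entry entry = new Entry();
        entry.setText("test text");
        entryRepository.save(entry);
        // verify here; the row is rolled back after the test
    }
}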
I have a Maven Java project in IntelliJ IDEA Community. The TestNG version is very old, i.e. 6.9.5, and I simply cannot update it. I have 6 TestNG test methods in a class. Five of the six methods use data provider methods, all of which are in one data provider class.
When I run the test class, only the method without a data provider (say test_5) runs successfully. The others are marked as "test ignored". Moreover, when I comment out or disable test_5, all the other tests run. Can I make TestNG give a detailed reason for ignoring tests?
Here is brief information about my project. I can't give the full code.
public class MyUtilityClass {
    public MyUtilityClass() {
        // Load data from property files and initialize members, do other stuff.
    }
}
public class BaseTest {
    MyUtilityClass utilObj = new MyUtilityClass();
    // do something with utilObj, provide common annotated methods for tests etc.
}
public class TestClass extends BaseTest {
    @BeforeClass
    public void beforeMyClass() {
        // Get some data from this.utilObj and do other things also.
    }
    @Test(dataProvider = "test_1", dataProviderClass = MyDataProvider.class)
    public void test_1() {}
    @Test(dataProvider = "test_2", dataProviderClass = MyDataProvider.class)
    public void test_2() {}
    ...
    // test_5 was the only one without a data provider.
    @Test
    public void test_5() {}
    @Test(dataProvider = "test_6", dataProviderClass = MyDataProvider.class)
    public void test_6() {}
}
public class MyDataProvider {
    MyUtilityClass utilObj = new MyUtilityClass();
    // do something with utilObj besides other things.
}
Your tests need to end in exactly the same environment in which they started.
You gave nary a clue as to what your code is like, but I can say that it is almost certainly either a database that is being written to and not reverted or an internal, persistent data structure that is being modified and not cleared.
If the tests go to the database, try enclosing the entire test in a transaction that you revert at the end of the test. If you can't do this, try mocking out the database.
If it's not the DB, look for an internal static somewhere, either a singleton pattern or a static collection contained in an object. Improve that stuff right out of your design and you should be okay.
I could give you more specific tips with code, but as is--that's about all I can tell you.
I solved my problem. test_5 is the only test method which does not have a data provider, so I provided a mock data provider method for it.
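A sketch of that workaround (the one-row provider is hypothetical; adapt the parameter list to whatever test_5 actually needs):

import org.testng.annotations.DataProvider;

public class MyDataProvider {

    // Minimal provider so that test_5 is wired up exactly like its siblings.
    @DataProvider(name = "test_5")
    public static Object[][] test5Data() {
        return new Object[][] { { "unused" } };
    }
}

// In TestClass, test_5 then declares a provider like the other tests do:
// @Test(dataProvider = "test_5", dataProviderClass = MyDataProvider.class)
// public void test_5(String unused) { ... }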
I have three JUnit tests, which are shown below. These tests all succeed if they are executed individually, i.e. commenting out the two other tests and only executing one.
However, if I execute all three tests uncommented then "testOrderDatabaseReturnsOrdersCorrectly" produces an error and "testOrderDatabaseRemovesOrdersCorrectly" fails.
I really don't understand why this is happening. I'm using @Before to set up before each test, so the conditions for all three tests should be the same? Why are some of them failing when they work fine individually?
@Before
public void setup()
{
    sys = new OrderSystem();
    sys.getDb().clearDb();
}
@Test
public void testOrderDatabaseAddsOrders()
{
    sys.getDb().clearDb();
    sys.createOrder(25);
    assertEquals(sys.getDb().getDbArrayList().size(), 1);
    sys.createOrder(30);
    assertEquals(sys.getDb().getDbArrayList().size(), 2);
    sys.createOrder(35);
    assertEquals(sys.getDb().getDbArrayList().size(), 3);
}
@Test
public void testOrderDatabaseRemovesOrdersCorrectly()
{
    sys.createOrder(25);
    assertEquals(sys.getDb().getDbArrayList().size(), 1);
    sys.removeOrder("BRICK1");
    assertEquals(sys.getDb().getDbArrayList().size(), 0);
}
@Test
public void testOrderDatabaseReturnsOrdersCorrectly()
{
    System.out.println("Size of db: " + sys.getDb().getDbArrayList().size());
    sys.createOrder(25);
    System.out.println("Size of db: " + sys.getDb().getDbArrayList().size());
    BrickOrder o = sys.getOrder("BRICK1");
    assertEquals(o.getNumberOfBricks(), 25);
}
However, if I execute all three tests uncommented then "testOrderDatabaseReturnsOrdersCorrectly" produces an error and "testOrderDatabaseRemovesOrdersCorrectly" fails.
Your problem most likely is that the results from one of the test methods are not being cleared when the next test method is run, and they are clashing. Maybe you are using an in-memory database like H2 which is not being fully cleared even though you are calling sys.getDb().clearDb()?
A couple of ways you can verify this:
First, put a System.out.println() message in setup() to make sure it is being called before each method (see the sketch after this list).
At the start of your test methods, do a lookup in your database to see if there are results there to verify the clearDb() did something. I suspect you will see that it isn't working fully.
Change the test methods to use non-overlapping tables or non-overlapping data to see if that works. For example, create an int orderNumber field and do a ++ on it in each test method to make sure you are using a new order. This is a workaround, of course; it would be better to understand why the clear isn't doing what you want.
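To make the first two checks concrete, a sketch using the OrderSystem API from the question:

@Before
public void setup()
{
    System.out.println("setup() running");                                              // check 1: called before every test?
    sys = new OrderSystem();
    sys.getDb().clearDb();
    System.out.println("size after clearDb(): " + sys.getDb().getDbArrayList().size()); // check 2: did the clear empty it?
}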
How to fix this is a more complicated problem that depends on what is actually backing your db. Maybe you just have some bugs in your clearDb() method. I mention H2 because even if you set up a brand new database connection, the old one is never destroyed and will be reused. If it is a SQL database, fully dropping the tables and recreating them is one thing that forces even a persistent in-memory database like H2 to clear its stuff.
Hope this helps.
I have figured out what was causing the errors and strange test behaviour.
I had a field variable which was declared static. For some reason this was not being reset before each test, even though the sys variable was being reset each time. When I removed the static declaration for this variable all of the tests succeeded.
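For illustration, the kind of field that causes this (hypothetical, since the real class wasn't shown):

import java.util.ArrayList;

public class OrderSystem {
    // A static collection is shared by every OrderSystem created in setup(),
    // so orders from one test leak into the next.
    private static ArrayList<BrickOrder> orders = new ArrayList<>();

    // Dropping 'static' ties the list to each instance, so the fresh OrderSystem
    // built in @Before really does start empty:
    // private ArrayList<BrickOrder> orders = new ArrayList<>();
}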
I ran into some trouble testing a Spring app. The current approach in my team is to write scenarios in Gherkin and have Serenity provide its pretty reports.
A new component in the app will need a lot of test cases. The requirements will be provided in a few 'parsable' Excel files, so I thought it would be neat to just use them directly, row by row, in a JUnit parameterized test. Another option would be to write a bloated Gherkin feature and tediously compose each example manually.
So I thought of something like this:
@RunWith(Parameterized.class)
public class Tests {
    @Parameterized.Parameters(name = "...") // name with the params
    public static Collection<Object[]> params() {
        // parse excel here or use some other class to do it
    }

    @Test
    public void test() {
        // do the actual test - it involves sending and receiving some JSON objects
    }
}
This works smoothly but I ran into trouble trying to use
@RunWith(SerenityRunner.class)
The problem is that JUnit does not support multiple runners. A solution I found is to make a nested class and annotate each with a different runner, but I don't know how to make it work (which runner should be on the outside, where do I actually run the tests, and so on).
Any thoughts?
Actually, Serenity provides another runner, SerenityParameterizedRunner, which seems to have the same features as JUnit's Parameterized.
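A hedged sketch of how that runner is typically wired; the annotation and package names may differ between Serenity versions, so check them against your dependency:

import java.util.Arrays;
import java.util.Collection;

import net.serenitybdd.junit.annotations.TestData;
import net.serenitybdd.junit.runners.SerenityParameterizedRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SerenityParameterizedRunner.class)
public class ExcelDrivenTest {

    @TestData
    public static Collection<Object[]> testData() {
        // parse the Excel rows here; hard-coded values only for illustration
        return Arrays.asList(new Object[][] {
                { "row1-input", "row1-expected" },
                { "row2-input", "row2-expected" }
        });
    }

    private final String input;
    private final String expected;

    public ExcelDrivenTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void test() {
        // send and receive the JSON objects, then assert against 'expected'
    }
}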
I have a JUnit class with different methods to perform different tests.
I use Mockito to create a spy on a real instance, and then override some method which is not relevant to the actual test I perform.
Is there a way, just for the sake of cleaning up after me in case some other tests that run after my tests also use the same instances and might execute a mocked method they didn't ask to mock, to un-mock a method?
say I have a spy object called 'wareHouseSpy'
say I have overridden the method isSomethingMissing:
doReturn(false).when(wareHouseSpy).isSomethingMissing()
What would be the right way to un-override it and bring things back to normal on the spy, i.e. make the next invocation of isSomethingMissing run the real method?
something like
doReturn(Mockito.RETURN_REAL_METHOD).when(wareHouseSpy).isSomethingMissing()
or maybe
Mockito.unmock(wareHouseSpy)
Who knows? I couldn't find anything in that area.
Thanks!
Assaf
I think
Mockito.reset(wareHouseSpy)
would do it.
Let's say most of your tests use the stubbed response. Then you would have a setUp() method that looks like this:
@Before
public void setUp() {
    wareHouseSpy = spy(realWarehouse);
    doReturn(false).when(wareHouseSpy).isSomethingMissing();
}
Now let's say you want to undo the stubbed response and use the real implementation in one test:
@Test
public void isSomethingMissing_useRealImplementation() {
    // Setup
    when(wareHouseSpy.isSomethingMissing()).thenCallRealMethod();

    // Test - uses real implementation
    boolean result = wareHouseSpy.isSomethingMissing();
}
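A side note on that last snippet: stubbing a spy with when(...) calls the method on the spy while the stub is being set up, which can hit the real implementation if it is not already stubbed. Mockito's do-family sidesteps that:

// static import assumed: org.mockito.Mockito.doCallRealMethod
// Routes later invocations to the real implementation without touching it during stubbing.
doCallRealMethod().when(wareHouseSpy).isSomethingMissing();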
It depends on whether you are testing with TestNG or JUnit.
JUnit creates a new instance of the test class for each test method, so you basically don't have to worry about resetting mocks.
With TestNG, you have to reset the mock(s) with Mockito.reset(mockA, mockB, ...) in either a @BeforeMethod or an @AfterMethod.
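A sketch of the TestNG variant (WareHouse is a hypothetical class standing in for whatever wareHouseSpy wraps):

import org.mockito.Mockito;
import org.testng.annotations.BeforeMethod;

public class WarehouseTest {

    private WareHouse wareHouseSpy = Mockito.spy(new WareHouse()); // one instance reused by TestNG

    @BeforeMethod
    public void resetSpy() {
        // TestNG keeps a single test-class instance, so stubbing and interactions
        // must be cleared by hand between test methods.
        Mockito.reset(wareHouseSpy);
    }
}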
The "normal" way is to re-instantiate things in your "setUp" method. However, if you have a real object that is expensive to construct for some reason, you could do something like this:
public class MyTests {
    private static MyBigWarehouse realWarehouse = new MyBigWarehouse();
    private MyBigWarehouse warehouseSpy;

    @Before
    public void setUp() {
        warehouseSpy = spy(realWarehouse); // same real object - brand new spy!
        doReturn(false).when(warehouseSpy).isSomethingMissing();
    }

    @Test
    ...
    @Test
    ...
    @Test
    ...
}
Maybe I am not following but when you have a real object real:
Object mySpy = spy(real);
Then to "unspy" mySpy... just use real.
As per the documentation, we have
reset(mock);
//at this point the mock forgot any interactions & stubbing
The documentation specifies further
Normally, you don't need to reset your mocks, just create new mocks
for each test method. Instead of #reset() please consider writing
simple, small and focused test methods over lengthy, over-specified
tests.
Here's an example from their github repo which tests this behavior and uses it:
@Test
public void shouldRemoveAllInteractions() throws Exception {
    mock.simpleMethod(1);
    reset(mock);
    verifyZeroInteractions(mock);
}
Reference: ResetTest.java
Addressing this piece specifically:
Is there a way, just for the sake of cleaning up after me in case some other tests that run after my tests also use the same instances and might execute a mocked method they didn't ask to mock, to un-mock a method?
If you are using JUnit, the cleanest way to do this is to use @Before and @After (other frameworks have equivalents) and recreate the instance and the spy so that no test depends on or is impacted by whatever you have done in any other test. Then you can do the test-specific configuration of the spy/mock inside of each test. If for some reason you don't want to recreate the object, you can recreate the spy. Either way, everyone starts with a fresh spy each time.
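A minimal sketch of that pattern, reusing the hypothetical warehouse names from earlier in the thread:

import static org.mockito.Mockito.spy;

import org.junit.After;
import org.junit.Before;

public class WarehouseSpyTest {

    private WareHouse realWarehouse; // hypothetical class, as elsewhere in this thread
    private WareHouse wareHouseSpy;

    @Before
    public void setUp() {
        realWarehouse = new WareHouse();   // fresh real object for every test
        wareHouseSpy = spy(realWarehouse); // fresh spy; stub whatever each test needs inside that test
    }

    @After
    public void tearDown() {
        wareHouseSpy = null;   // nothing stubbed here can leak into the next test
        realWarehouse = null;
    }
}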