I was annoyed to find in the Parameterized documentation that "when running a parameterized test class, instances are created for the cross-product of the test methods and the test data elements." This means that the constructor is run once for every single test, instead of before running all of the tests. I have an expensive operation (1-5 seconds) that I put in the constructor, and now the operation is repeated way too many times, slowing the whole test suite needlessly. The operation is only needed once to set the state for all of the tests. How can I run several tests with one instance of a parameterized test?
I would move the expensive operation to a @BeforeClass method, which should execute just once for the entire parameterized test.
A silly example is shown below:
import java.util.Arrays;
import java.util.Collection;

import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class QuickTest {

    private static Object expensiveObject;
    private final int value;

    @BeforeClass
    public static void before() {
        System.out.println("Before class!");
        expensiveObject = new String("Just joking!");
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1 }, { 2 } });
    }

    public QuickTest(int value) {
        this.value = value;
    }

    @Test
    public void test() {
        System.out.println(String.format("Ran test #%d.", value));
        System.out.println(expensiveObject);
    }
}
Will print:
Before class!
Ran test #1.
Just joking!
Ran test #2.
Just joking!
Is there a way to put a priority on a @Factory method? I've tried @AfterMethod, @AfterTest, and @AfterClass; all of them result in my factory method running immediately after my setup call with the @BeforeClass tag.
My code is similar to this:
@BeforeClass
public void setup() {
}

@Test
public void method1() throws Exception {
}

@Test(dependsOnMethods = "method1")
public void method2() {
}

@Test(dependsOnMethods = "method2")
public void method3() throws Exception {
}

@Test(dependsOnMethods = "method3")
public void method4() throws Exception {
}

@Test(dependsOnMethods = "method4")
public void method5() throws Exception {
}

@AfterClass
@Factory
public Object[] factory() {
    Object[] values = new Object[10];
    for (int i = 0; i < values.length; i++) {
        values[i] = new validationFactory(map.get(i).x, map.get(i).y);
    }
    return values;
}
What the code is doing is reaching out to an API, retrieving any requested data, slicing that data up into a map, and then passing that map of data into the factory method in order to validate it. The problem is that immediately after my setup method runs, the factory shoots off and validates an empty map. Is there any way to make the factory method wait until the data is ready?
The purpose of @Factory is to create tests dynamically. It doesn't make sense to run those tests after an @AfterClass method, so even if you could work around your problem by checking whether the map was empty (so that factory() would run twice but the loop only once), any tests created by the factory would not be executed by the framework.
If what you need is to validate some data after all the tests have finished, put that validation in a method annotated with @AfterClass (without @Factory). You can use assertions there just as in a regular test.
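A minimal sketch, assuming the map has been populated by the earlier test methods; the loop body is only a placeholder based on the map.get(i) usage in your question, not your actual checks:

@AfterClass
public void validateData() {
    // Sketch only: 'map' is whatever data the tests collected; assert on it here.
    for (int i = 0; i < map.size(); i++) {
        Assert.assertNotNull(map.get(i));
        // ... further assertions on map.get(i).x and map.get(i).y
    }
}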
If for some reason you want to run the validation as separate tests and have to use a factory, there is a way to defer their execution until after the other tests, but they still have to be instantiated at the beginning. So it looks like you need to pass some object that would load the data when required instead of initializing the validation tests with map entries right away. A data provider may work, but here's a simpler example.
Add all the main tests to a group.
#Test(dependsOnMethods = "method1", groups = "Main")
public void method2() { }
Create a class or a method that will load the data when needed; how exactly depends on how you populate the map. It has to be thread-safe because TestNG may run tests in parallel. A very simplistic implementation:
public class DataLoader {

    // Location has data members x and y
    private Map<Integer, Location> locations;

    public synchronized Location getLocation(int index) {
        if (locations == null) {
            locations = new HashMap<>();
            // load the map
        }
        return locations.get(index);
    }
}
Create a class to represent a validation test. Notice its only test depends on the main group.
public class ValidationTest {

    private final int index;
    private final DataLoader loader;

    public ValidationTest(int index, DataLoader loader) {
        this.index = index;
        this.loader = loader;
    }

    @Test(dependsOnGroups = "Main")
    public void validate() {
        Location location = this.loader.getLocation(this.index);
        // do whatever with location.x and location.y
    }
}
Instantiate the validation tests. They will run after the main group has finished. Notice I have removed the @AfterClass annotation.
@Factory
public Object[] factory() {
    DataLoader loader = new DataLoader();
    Object[] validators = new Object[10];
    for (int i = 0; i < validators.length; i++) {
        validators[i] = new ValidationTest(i, loader);
    }
    return validators;
}
By the way, dependencies between test methods indicate poorly written tests and should be avoided, at least for unit tests. There are frameworks other than TestNG for testing complex scenarios.
A TestNG run has two distinct phases:
Creation of tests;
Running of tests.
So you cannot expect to create new tests while the tests are already running.
I changed a JUnit test class to run with Parameterized to test two different implementations of the same interface. Here it is:
@RunWith(Parameterized.class)
public class Stack_Tests {

    private Stack<String> stack;

    public Stack_Tests(Stack<String> stack) {
        this.stack = stack;
    }

    @Parameters
    public static Collection<Object[]> parameters() {
        // The two objects to test
        return Arrays.asList(new Object[][] { { new LinkedStack<String>() }, { new BoundedLinkedStack<String>(MAX_SIZE) } });
    }

    @Test
    public void test() {
        ...
    }
}
The results are wrong since I changed to Parameterized. Half of the tests fail (the same ones for both objects); all of them passed before.
It works without Parameterized like this:
public class Stack_Tests {

    private Stack<String> stack;

    @Before
    public void setUp() throws Exception {
        stack = new LinkedStack<String>();
    }

    @Test
    public void test() {
        ...
    }
}
The complete test class here
As you suggested in the comments, try resetting the stack before every test, since previous tests change it.
You can create a new stack instance before every unit test:
@Before
public void setUp() throws Exception {
    stack = stack.getClass().newInstance();
}
Though this has the side effect that your classes must have zero-argument constructors.
Note: If some of your stacks cannot have zero-argument constructors, consider invoking the constructor with arguments as per this SO answer. This means that you must provide the constructor types list and its arguments list along with the stack object to the unit test class as parameters, as sketched after the snippet below. Then you can do:
@Before
public void setUp() throws Exception {
    stack = stack.getClass().getDeclaredConstructor(typesList).newInstance(argsList);
}
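For example, a rough sketch of how the parameters could be shaped for this. The Class[]/Object[] entries and the assumption that BoundedLinkedStack takes an int capacity are mine; the test constructor would need matching typesList and argsList fields:

@Parameters
public static Collection<Object[]> parameters() {
    // Each row: the stack, plus the constructor parameter types and arguments
    // needed to rebuild that stack in setUp().
    return Arrays.asList(new Object[][] {
            { new LinkedStack<String>(), new Class<?>[] {}, new Object[] {} },
            { new BoundedLinkedStack<String>(MAX_SIZE), new Class<?>[] { int.class }, new Object[] { MAX_SIZE } }
    });
}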
Add:
@Before
public void setUp() throws Exception {
    stack.clear();
}
The stack objects are created once in parameters() and shared across all the tests, and your tests modify them.
To get a consistent stack for all tests, an alternative approach is to clone the stack before modifying it in a particular test.
@Test
public void testPush() {
    Stack<String> myStack = (Stack<String>) stack.clone();
    myStack.push("hello");
    assertFalse(myStack.empty());
}
Thus, every test that modifies the stack should first clone it.
This is a little more cumbersome, but allows you to provide more sophisticated stacks as parameters (e.g., ones that start with some elements already in them).
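For instance, a hedged sketch of a parameter row with a pre-filled stack; the push call assumes LinkedStack exposes the same push method used in the tests above:

@Parameters
public static Collection<Object[]> parameters() {
    // Sketch: one empty stack plus one that already starts with an element
    LinkedStack<String> prefilled = new LinkedStack<String>();
    prefilled.push("initial");
    return Arrays.asList(new Object[][] { { new LinkedStack<String>() }, { prefilled } });
}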
I am running some JUnit tests programmatically with JUnitCore, and I want to get some data out of the test class once it is finished (so @AfterClass). Here is a pseudocode example of the constraints I am working under:
public class A {

    public static String testData;

    public static void runTest() {
        JUnitCore juc = new JUnitCore();
        juc.run(B.class);
        // This is where I would like to access testData for this
        // particular run
    }

    public static void setTestData(String s) {
        testData = s;
    }
}

public class B {

    // Some @Test methods and stuff omitted

    @AfterClass
    public static void done() {
        A.setTestData(someData); // someData is whatever the tests produced
    }
}
My problem is that different threads might be calling runTest(), so testData might be wrong. How do I work around this? I'm so lost.
If you really need/want to go with this design, you can make testData a java.lang.ThreadLocal<String>. This will solve the multi-threading issue.
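A minimal sketch of what that could look like, assuming JUnitCore runs the tests on the calling thread (which it does by default), so B's @AfterClass sets the value on the same thread that later reads it; the remove() call is my addition:

public class A {

    // Each thread calling runTest() sees only the value set during its own run.
    private static final ThreadLocal<String> testData = new ThreadLocal<>();

    public static void runTest() {
        JUnitCore juc = new JUnitCore();
        juc.run(B.class);
        String result = testData.get(); // the value B's @AfterClass stored for this thread
        testData.remove();              // clean up so later runs on this thread start fresh
    }

    public static void setTestData(String s) {
        testData.set(s);
    }
}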
I want to back up my application's database before replacing it with the test fixture. I'm forced to use JUnit 3 because of Android limitations, and I want to implement the equivalent behavior of @BeforeClass and @AfterClass.
UPDATE: There is now a tool (Junit4Android) to get support for
JUnit 4 on Android. It's a bit of a kludge but should work.
To achieve the @BeforeClass equivalent, I had been using a static variable and initializing it during the first run, like this. But I also need to be able to restore the database after running all the tests, and I can't think of a way of detecting when the last test has run (since I believe there is no guarantee on the order of test execution).
public class MyTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private static boolean firstRun = true;

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        if (firstRun) {
            firstRun = false;
            setUpDatabaseFixture();
        }
    }
    ...
}
From the JUnit website:
Wrap the setUp and tearDown methods in the suite. This is for the
case where you want to run a single YourTestClass test case.
public static Test suite() {
    return new TestSetup(new TestSuite(YourTestClass.class)) {

        protected void setUp() throws Exception {
            System.out.println(" Global setUp ");
        }

        protected void tearDown() throws Exception {
            System.out.println(" Global tearDown ");
        }
    };
}
If you would like to run only one setUp and tearDown for all the
test cases, make a suite, add the test class to it, and pass the suite
object to the TestSetup constructor. But I think there is not much use
for this, and in a way it violates the JUnit philosophy.
Recently, I was looking for a similar solution too. Fortunately, in my case the JVM exits after the last test is run, so I was able to achieve this by adding a JVM shutdown hook.
// Restore database after running all tests
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        restoreDatabase();
    }
});
Hope this helps.
I would suggest avoiding this kind of dependency, where you need to know the order in which tests are run. If all you need is to restore a real database that was replaced by setUpDatabaseFixture(), your solution probably lies in using a RenamingDelegatingContext. Anyway, if you can't avoid knowing when the last test was run, you can use something like this:
...
private static final int NUMBER_OF_TESTS = 5; // count your tests here
private static int sTestsRun = 0;
...

protected void tearDown() throws Exception {
    super.tearDown();
    sTestsRun += countTestCases();
    if (sTestsRun >= NUMBER_OF_TESTS) {
        android.util.Log.d("tearDown", "*** Last test run ***");
    }
}
Isn't this (dealing elegantly with data, so you don't have to worry about restoring it) what testing with mock objects is for? Android supports mocking.
I ask as a question, since I've never mocked Android.
In my experience, and from this blog post, when Android tests are made into a suite and run by the InstrumentationTestRunner (ActivityInstrumentationTestCase2 is an extension of ActivityTestCase, which is an extension of InstrumentationTestCase), they are ordered alphabetically using android.test.suitebuilder.TestGrouping.SORT_BY_FULLY_QUALIFIED_NAME. So you can just restore your DB with a method whose name is the lowest in the alphabet out of your test names, like:
// underscore is low in the alphabet
public void test___________Restore() {
...
}
Note:
You have to pay attention to inherited tests, since they will not run in this order. The solution is to override all inherited tests and simply call the super implementation from each override. This will once again have everything execute alphabetically.
Example:
// Reusable class w/ only one-time setup and finish.
// Abstract so it is not run by itself.
public abstract class Parent extends InstrumentationTestCase {

    @LargeTest
    public void test_001_Setup() { ... }

    @LargeTest
    public void test_____Finish() { ... }
}

/*-----------------------------------------------------------------------------*/

// These will run in the order shown due to naming.
// Inherited tests would not run in the order shown w/o the use of overrides & supers
public class Child extends Parent {

    @LargeTest
    public void test_001_Setup() { super.test_001_Setup(); }

    @SmallTest
    public void test_002_MainViewIsVisible() { ... }

    ...

    @LargeTest
    public void test_____Finish() { super.test_____Finish(); }
}
So I thought the following code would run fine in TestNG, although it doesn't:
public class Tests {

    int i = 0;

    @Test
    public void testA() {
        Assert.assertEquals(0, i);
        ++i;
    }

    @Test
    public void testB() {
        Assert.assertEquals(0, i);
        ++i;
    }
}
Is there a way to make TestNG fire up a new Tests class for each test method?
The common solution is to use a @BeforeMethod method to set up test state:
@BeforeMethod
public void setup() {
    i = 0;
}
By far the most common solution to this issue that I have found is to use ThreadLocals and just accept the fact that you only have one instance of each test class. This also takes care of all the questions about how to handle parallel/threaded tests. It works, but it is a bit ugly.
private ThreadLocal<Integer> i = new ThreadLocal<>();

@BeforeMethod
public void setup() {
    i.set(0);
}

@Test
public void testA() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}

@Test
public void testB() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}
Now back to the root of your question: new instances for each method.
I've been researching similar topics for a few weeks, and I have identified this as the number one issue I personally have with TestNG. It has literally driven me crazy.
If I were to ignore the fact that your real tests have a bunch of complexities, you could potentially hack together a workaround to meet the requirements you listed.
A TestNG @Factory method allows you to create new instances of your test classes.
@Factory
public Object[] factory() {
    return new Object[] { new Tests(), new Tests() };
}
I've now created two Tests instances to be run by TestNG.
The issue then is that your tests still fail, because TestNG will try to run every test method on each of those instances. To hack around this you could implement an IMethodInterceptor and enforce that each Tests instance runs only one method: maintain a list of methods and go through them one at a time.
Here is a rough example I hacked together.
public class TestFactory implements IMethodInterceptor {

    private List<String> methodsToRun = new ArrayList<>();
    private List<Object> testInstances = new ArrayList<>();

    @Factory
    public Object[] factory() {
        return new Object[] { new Tests(), new Tests() };
    }

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        ArrayList<IMethodInstance> tempList = new ArrayList<>();
        for (IMethodInstance i : methods) {
            if (testInstances.contains(i.getInstance())) {
                continue;
            }
            String mName = i.getMethod().getConstructorOrMethod().getName();
            if (!methodsToRun.contains(mName)) {
                tempList.add(i);
                methodsToRun.add(mName);
                testInstances.add(i.getInstance());
            }
        }
        return tempList;
    }
}
Then add the listener to the top of your Tests class:
@Listeners(TestFactory.class)
You could improve this by dynamically creating new instances of the tests in the factory (a sketch follows below), by breaking the listener out into its own file, and with numerous other improvements, but you get the gist.
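For instance, a rough sketch of a factory that creates one Tests instance per @Test method found via reflection instead of hard-coding two; the reflection-based counting is my assumption about how you might generalize it, and it needs java.lang.reflect.Method and the usual java.util imports:

@Factory
public Object[] factory() {
    // Create one Tests instance per @Test method, so the interceptor above
    // can hand each instance exactly one method to run.
    List<Object> instances = new ArrayList<>();
    for (Method m : Tests.class.getDeclaredMethods()) {
        if (m.isAnnotationPresent(org.testng.annotations.Test.class)) {
            instances.add(new Tests());
        }
    }
    return instances.toArray();
}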
Maybe a crazy solution like the above will work for you or someone else.