Is there a way to put a priority on a @Factory method? I've tried @AfterMethod, @AfterTest, and @AfterClass; all of them result in my factory method running immediately after my setup method annotated with @BeforeClass.
My code is similar to this:
@BeforeClass
public void setup() {
}

@Test
public void method1() throws Exception {
}

@Test(dependsOnMethods = "method1")
public void method2() {
}

@Test(dependsOnMethods = "method2")
public void method3() throws Exception {
}

@Test(dependsOnMethods = "method3")
public void method4() throws Exception {
}

@Test(dependsOnMethods = "method4")
public void method5() throws Exception {
}

@AfterClass
@Factory
public Object[] factory() {
    Object[] values = new Object[10];
    for (int i = 0; i < values.length; i++) {
        values[i] = new validationFactory(map.get(i).x, map.get(i).y);
    }
    return values;
}
What the code is doing is reaching out to an API, retrieving any requested data, slicing that data up into a map, and then passing that map of data into the factory method in order to validate it. The problem is that immediately after my setup method runs, the factory shoots off and validates an empty map. Is there any way to make the factory method wait until the data is ready?
The purpose of @Factory is to create tests dynamically. It doesn't make sense to run those tests after an @AfterClass method, so even if you worked around the problem by checking whether the map was empty (so that factory() runs twice but the loop body only once), any tests created by the factory would not be executed by the framework.
If what you need is to validate some data after all the tests have finished, put that validation in a method annotated with @AfterClass (without @Factory). You can use assertions there as in a regular test.
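As a rough illustration of that approach, here is a dependency-free sketch of what the body of such an @AfterClass validation method could look like. The Location type and the specific checks are assumptions for the example; the real suite's map and value types may differ.

```java
import java.util.HashMap;
import java.util.Map;

public class AfterClassValidationSketch {
    // Hypothetical value type; the real suite's map/Location types may differ.
    static class Location {
        final int x, y;
        Location(int x, int y) { this.x = x; this.y = y; }
    }

    // This is the kind of body you would put in the @AfterClass-annotated
    // method: by then the map is populated, so it can be checked like a test.
    static void validateData(Map<Integer, Location> map) {
        if (map.isEmpty()) {
            throw new AssertionError("map was never populated");
        }
        for (Map.Entry<Integer, Location> entry : map.entrySet()) {
            Location loc = entry.getValue();
            if (loc == null) {
                throw new AssertionError("missing location for index " + entry.getKey());
            }
            // ...assert whatever the real tests need about loc.x and loc.y...
        }
    }

    public static void main(String[] args) {
        Map<Integer, Location> map = new HashMap<>();
        map.put(0, new Location(1, 2));
        validateData(map);
        System.out.println("validation passed");
    }
}
```

Because it runs after every test method, a failure here is reported against the class rather than an individual test, which is usually acceptable for a final consistency check.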
If for some reason you want to run the validation as separate tests and have to use a factory, there is a way to defer their execution until after the other tests, but they still have to be instantiated at the beginning. So it looks like you need to pass some object that would load the data when required instead of initializing the validation tests with map entries right away. A data provider may work, but here's a simpler example.
Add all the main tests to a group.
@Test(dependsOnMethods = "method1", groups = "Main")
public void method2() { }
Create a class or a method that will load the data when needed; the details depend on how you populate the map. It has to be thread-safe because TestNG runs tests in parallel. A very simplistic implementation:
public class DataLoader {
    // Location has data members X and Y
    private Map<Integer, Location> locations;

    public synchronized Location getLocation(int index) {
        if (locations == null) {
            locations = new HashMap<>();
            // load the map
        }
        return locations.get(index);
    }
}
Create a class to represent a validation test. Notice its only test depends on the main group.
public class ValidationTest {
    private int index;
    private DataLoader loader;

    public ValidationTest(int index, DataLoader loader) {
        this.index = index;
        this.loader = loader;
    }

    @Test(dependsOnGroups = "Main")
    public void validate() {
        Location location = this.loader.getLocation(this.index);
        // do whatever with location.x and location.y
    }
}
Instantiate the validation tests. They will run after the main group has finished. Notice I have removed the @AfterClass annotation.
@Factory
public Object[] factory() {
    DataLoader loader = new DataLoader();
    Object[] validators = new Object[10];
    for (int i = 0; i < validators.length; i++) {
        validators[i] = new ValidationTest(i, loader);
    }
    return validators;
}
By the way, dependencies between test methods indicate poorly written tests and should be avoided, at least for unit tests. There are frameworks other than TestNG for testing complex scenarios.
A TestNG run has two distinct phases:
1. creation of tests;
2. running of tests.
So you cannot expect to create new tests during the run phase.
I have a simple function which was recently wrapped by some additional logic. I am struggling to update the test logic since, suddenly, the method body is wrapped in a mock.
Let me give you an example.
Former logic & test:
// logic
public void doSomething(Transaction t, int a) {
    myService.foo(t, a);
}
And my test:
// test
TestedService testedService;

@Mock
MyService myService;

@Mock
Transaction t;

@Test
public void testSomething() {
    testedService.doSomething(t, 10);
    Mockito.verify(myService).foo(t, 10);
}
What happened is that we wrapped our logic in some additional function:
public void doSomething(Transaction t, int a) {
    model.runInEnhancedTransaction(t, t2 -> myService.foo(t2, a));
}
My question is: how do I test this when the logic is suddenly wrapped in a model method (model is a mock in my test)?
I basically need to verify that t2 -> myService.foo(t2, a) was called when the model object is a mock.
EDIT: I made it work with implementing my custom version of model just for the test purpose but still wonder if there is some more elegant way.
It's kind of tough to test these kinds of lambda invocations. What I do is perform two tests: one that model.runInEnhancedTransaction() was called, and one for model.runInEnhancedTransaction() itself, e.g.:
@Test
void doSomethingCallsModelEnhancedTransaction() {
    testedService.doSomething(t, 10);
    verify(model).runInEnhancedTransaction(eq(t), any());
}

@Test
void modelRunInEnhancedTransaction() {
    Transaction t = mock(Transaction.class);
    BiConsumer<Transaction, Integer> consumer = mock(BiConsumer.class);
    model.runInEnhancedTransaction(t, consumer);
    verify(consumer).accept(...);
}
Preconditions
I have the following class (fictional, just for demonstrating the problem):
public class MySingleton {
    private static MySingleton sMySingleton;
    private static List<String> sItemList;

    private MySingleton(List<String> list) {
        sItemList = list;
    }

    public static MySingleton getInstance(List<String> list) {
        if (sMySingleton == null) {
            sMySingleton = new MySingleton(list);
        }
        return sMySingleton;
    }

    public void addItem(String item) {
        sItemList.add(item);
    }

    public void removeItem(String item) {
        sItemList.remove(item);
    }
}
And an according test class:
public class MySingletonTest {
    private MySingleton mInstance;
    private List<String> mList;

    @Before
    public void setup() {
        mList = mock(List.class);
        mInstance = MySingleton.getInstance(mList);
    }

    @Test
    public void testAddItem() throws Exception {
        String item = "Add";
        mInstance.addItem(item);
        verify(mList, times(1)).add(item);
    }

    @Test
    public void testRemoveItem() throws Exception {
        String item = "Remove";
        mInstance.removeItem(item);
        verify(mList, times(1)).remove(item);
    }
}
Problem
If I now execute the complete test class, Mockito tells me for the test testRemoveItem() that there were 0 interactions with the mock.
How is that possible?
Note:
Please do not start a discussion about the sense of singletons.
This question is about Mockito and why it's not working.
JUnit creates a new test class instance for every single test, which Mockito populates with a new mock instance for every single test. However, your singleton only ever initializes itself once, meaning that mList == MySingleton.sItemList during the first test but mList != MySingleton.sItemList for every test after that.
In other words, the interaction is happening, but by the second test, you're checking the wrong mock.
Though I know you're not here to debate the merits of this type of singleton, bear in mind that you might have a hard time replacing the instance in tests if you do it this way. Instead, consider making the singleton's constructor available (only) to your tests, and keeping the List (or other state) within the instance. That way you can create a brand new "Singleton" for every individual test.
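A minimal sketch of that suggested shape, with hypothetical names (FriendlySingleton here stands in for the original MySingleton): the list becomes instance state and the constructor is package-private, so production code still goes through getInstance() while each test can construct a fresh instance.

```java
import java.util.ArrayList;
import java.util.List;

public class FriendlySingleton {
    private static FriendlySingleton sInstance;

    // Instance state, not static: each instance owns its own list.
    private final List<String> mItemList;

    // Package-private constructor: production code uses getInstance(),
    // but tests in the same package can create a fresh instance per test.
    FriendlySingleton(List<String> list) {
        this.mItemList = list;
    }

    public static synchronized FriendlySingleton getInstance(List<String> list) {
        if (sInstance == null) {
            sInstance = new FriendlySingleton(list);
        }
        return sInstance;
    }

    public void addItem(String item) { mItemList.add(item); }
    public void removeItem(String item) { mItemList.remove(item); }

    public static void main(String[] args) {
        List<String> listA = new ArrayList<>();
        List<String> listB = new ArrayList<>();
        FriendlySingleton first = new FriendlySingleton(listA);
        FriendlySingleton second = new FriendlySingleton(listB);
        first.addItem("Add");
        second.removeItem("Remove");
        if (!listA.contains("Add") || !listB.isEmpty()) {
            throw new AssertionError("instances share state");
        }
        System.out.println("each instance owns its own list");
    }
}
```

With this shape, the original setup() method would build a new instance directly, and each test's mock list would then see exactly the interactions of that test.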
I am currently developing an automation framework and would like to ask: what is the best way to initialize the web driver?
Should it be in a base test class that every test class inherits from, with the driver initialized in the @BeforeClass method? Or should the web driver be a singleton object? Or should I use a JUnit Rule? What I want is the ability to execute the test suite on multiple browsers via a property file. It does not necessarily have to run on multiple threads (i.e. Selenium Grid), but I do want the ability to run in sequence. So, for example, if the property file has IE and Chrome set to true, it will run the test cases for IE, then Chrome. I would like to know the best way to facilitate this. It will also be data driven, via Excel files and JUnit parameterized tests.
Thanks
We did something like this with JUnit and Cucumber-JVM. We used a singleton for a WebDriver instance. The specific instance that gets created is based on a system property. To run against multiple browsers, we perform separate runs of the Suite with a different system property for the browser type. We manage that in our build tool.
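The property-to-driver selection can be sketched without Selenium on the classpath by mapping the system property to the WebDriver class you would instantiate. The method and property names here are assumptions for illustration; the real code would construct new ChromeDriver(), new InternetExplorerDriver(), and so on directly.

```java
import java.util.Locale;

public class BrowserSelector {
    // Hypothetical mapping from a -Dbrowser=... system property to the
    // WebDriver class that would be instantiated for the run.
    public static String driverClassFor(String browserProperty) {
        String browser = browserProperty == null
                ? "chrome"  // assumed default when the property is unset
                : browserProperty.toLowerCase(Locale.ROOT);
        switch (browser) {
            case "chrome":  return "org.openqa.selenium.chrome.ChromeDriver";
            case "ie":      return "org.openqa.selenium.ie.InternetExplorerDriver";
            case "firefox": return "org.openqa.selenium.firefox.FirefoxDriver";
            default: throw new IllegalArgumentException("unknown browser: " + browser);
        }
    }

    public static void main(String[] args) {
        // e.g. launched with -Dbrowser=ie; the build tool sets this per run
        System.out.println(driverClassFor(System.getProperty("browser")));
    }
}
```

Running the whole suite once per browser then becomes a loop in the build script over `-Dbrowser=ie`, `-Dbrowser=chrome`, etc., which matches the "separate runs of the Suite" approach described above.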
One advantage of Cucumber-JVM is that it's pretty easy to write hooks that run before or after any test. Without that, you'll want to reset some portion of the WebDriver state before each test.
A while ago, in an unrelated case, I wrote a custom test runner to run tests against multiple database systems by extending ParentRunner and BlockJUnit4ClassRunner. It's easier to just script multiple runs of the same suite in the build tool.
I posted a quick pass at https://github.com/sethkraut/multiwebdriver. I'll also post the code below. It would probably need some polish around preparing and clearing the WebDriver instances, but it should be a good starting point.
This class runs an individual test class after populating a WebDriver field.
public class SingleWebDriverTestRunner extends BlockJUnit4ClassRunner {
    private final WebDriver webDriver;

    public SingleWebDriverTestRunner(Class<?> klass, WebDriver webDriver) throws InitializationError {
        super(klass);
        this.webDriver = webDriver;
    }

    // Test Description methods
    @Override
    protected String getName() {
        return super.getName() + " on " + driverName();
    }

    private String driverName() {
        return webDriver.getClass().getSimpleName();
    }

    @Override
    protected String testName(FrameworkMethod method) {
        return super.testName(method) + " on " + driverName();
    }

    @Override
    protected Object createTest() throws Exception {
        Object o = super.createTest();
        for (Field f : o.getClass().getDeclaredFields()) {
            if (f.getType().isAssignableFrom(WebDriver.class)) {
                f.setAccessible(true);
                f.set(o, webDriver);
            }
        }
        return o;
    }
}
And this class iterates over multiple WebDrivers, creating instances of the previous class:
public class MultiWebDriverTestRunner extends ParentRunner<Runner> {
    private List<WebDriver> drivers = new ArrayList<WebDriver>(
            Arrays.asList(new FirefoxDriver(), new ChromeDriver())
    );

    private List<Runner> children = null;

    public MultiWebDriverTestRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected Description describeChild(Runner child) {
        return child.getDescription();
    }

    @Override
    protected List<Runner> getChildren() {
        if (children == null) {
            children = getChildrenNew();
        }
        return children;
    }

    protected List<Runner> getChildrenNew() {
        List<Runner> runners = new ArrayList<Runner>();
        for (WebDriver driver : drivers) {
            try {
                Class<?> javaClass = getTestClass().getJavaClass();
                runners.add(new SingleWebDriverTestRunner(javaClass, driver));
            } catch (InitializationError e) {
                e.printStackTrace();
            }
        }
        return runners;
    }

    @Override
    protected void runChild(Runner child, RunNotifier notifier) {
        child.run(notifier);
    }
}
I was annoyed to find in the Parameterized documentation that "when running a parameterized test class, instances are created for the cross-product of the test methods and the test data elements." This means that the constructor is run once for every single test, instead of before running all of the tests. I have an expensive operation (1-5 seconds) that I put in the constructor, and now the operation is repeated way too many times, slowing the whole test suite needlessly. The operation is only needed once to set the state for all of the tests. How can I run several tests with one instance of a parameterized test?
I would move the expensive operation to a @BeforeClass method, which should execute just once for the entire parameterized test.
A silly example is shown below:
@RunWith(Parameterized.class)
public class QuickTest {
    private static Object expensiveObject;
    private final int value;

    @BeforeClass
    public static void before() {
        System.out.println("Before class!");
        expensiveObject = new String("Just joking!");
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1 }, { 2 } });
    }

    public QuickTest(int value) {
        this.value = value;
    }

    @Test
    public void test() {
        System.out.println(String.format("Ran test #%d.", value));
        System.out.println(expensiveObject);
    }
}
Will print:
Before class!
Ran test #1.
Just joking!
Ran test #2.
Just joking!
So I thought the following code would run fine in TestNG, although it doesn't:
public class Tests {
    int i = 0;

    @Test
    public void testA() {
        Assert.assertEquals(0, i);
        ++i;
    }

    @Test
    public void testB() {
        Assert.assertEquals(0, i);
        ++i;
    }
}
Is there a way to make TestNG fire up a new Tests class for each test method?
The common solution is to use a @BeforeMethod method to set up test state:
@BeforeMethod
public void setup() {
    i = 0;
}
By far the most common solution to this issue I have found is to use ThreadLocals and just accept the fact that you only have one instance of each test class. This deals with all the questions about how to handle parallel/threaded tests. It works, but is a bit ugly.
private ThreadLocal<Integer> i = new ThreadLocal<>();

@BeforeMethod
public void setup() {
    i.set(0);
}

@Test
public void testA() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}

@Test
public void testB() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}
Now, back to the root of your question: new instances for each method.
I’ve been researching for a few weeks similar topics, and I have identified this is the number one issue I was personally having with TestNG. It has literally driven me crazy.
If I were to ignore the fact that your tests have a bunch of complexities, you could potentially hack together a workaround to meet the requirements you listed.
A TestNG @Factory allows you to create new instances of your test classes.
@Factory
public Object[] factory() {
    return new Object[]{ new Tests(), new Tests() };
}
I’ve now created two Tests instances to be run by TestNG.
Then the issue is that your tests still fail, because TestNG will try to run all test methods on each test class instance. To hack around this you could implement an IMethodInterceptor and enforce that each Tests instance runs only one method: maintain a list of methods and go through them one at a time.
Here is a brute example I hacked together.
public class TestFactory implements IMethodInterceptor {
    private List<String> methodsToRun = new ArrayList<>();
    private List<Object> testInstances = new ArrayList<>();

    @Factory
    public Object[] factory() {
        return new Object[]{ new Tests(), new Tests() };
    }

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        ArrayList<IMethodInstance> tempList = new ArrayList<>();
        for (IMethodInstance i : methods) {
            if (testInstances.contains(i.getInstance())) {
                continue;
            }
            String mName = i.getMethod().getConstructorOrMethod().getName();
            if (!methodsToRun.contains(mName)) {
                tempList.add(i);
                methodsToRun.add(mName);
                testInstances.add(i.getInstance());
            }
        }
        return tempList;
    }
}
Then add your listener to the top of your Tests class
@Listeners(TestFactory.class)
You can improve this by dynamically creating new instances of the tests in the factory, breaking the listener out into its own file, and making numerous other improvements, but you get the gist.
Maybe a crazy solution like the above will work for you or someone else.
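The "dynamically creating new instances" improvement can be sketched with plain reflection: build one Tests instance per test method instead of hard-coding two. The nested Tests class and the name-prefix filter below are stand-ins for illustration; the real factory would check for TestNG's @Test annotation via m.isAnnotationPresent(Test.class).

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class DynamicFactorySketch {
    // Stand-in for the real Tests class from the question.
    public static class Tests {
        public void testA() {}
        public void testB() {}
    }

    // The @Factory-annotated method could build one Tests instance per
    // declared test method instead of hard-coding the count:
    public static Object[] factory() {
        List<Object> instances = new ArrayList<>();
        for (Method m : Tests.class.getDeclaredMethods()) {
            // real code: m.isAnnotationPresent(org.testng.annotations.Test.class)
            if (m.getName().startsWith("test")) {
                instances.add(new Tests());
            }
        }
        return instances.toArray();
    }

    public static void main(String[] args) {
        System.out.println(factory().length + " instances created");
    }
}
```

Paired with the interceptor above, adding a third test method to Tests would then automatically get its own fresh instance without touching the factory.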