I am currently developing an automation framework and would like to ask a question. What is the best way to initialize the WebDriver?
Should it be in a base test class that every test class inherits from, initialized in @BeforeClass? Or should the WebDriver be a singleton object? Or should I use a JUnit Rule? My goal is to be able to execute the test suite on multiple browsers via a property file. It does not necessarily have to run on multiple threads (i.e. Selenium Grid), but I do want the ability to run in sequence. So, for example, if in a property file I have IE and Chrome set to true, it will run the test cases for IE, then Chrome. I would like to know the best way to facilitate this. It will also be data driven, via Excel files and JUnit parameterized tests.
Thanks
We did something like this with JUnit and Cucumber-JVM. We used a singleton for a WebDriver instance. The specific instance that gets created is based on a system property. To run against multiple browsers, we perform separate runs of the Suite with a different system property for the browser type. We manage that in our build tool.
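A minimal sketch of that singleton pattern, with a `Driver` interface standing in for Selenium's `WebDriver` so the snippet stays self-contained; the property name `browser` and the default value `chrome` are assumptions, and in real code the branch would construct `ChromeDriver`, `FirefoxDriver`, etc.:

```java
public class DriverHolder {
    // Stand-in for org.openqa.selenium.WebDriver so this sketch has no dependencies.
    interface Driver {
        String name();
    }

    private static Driver instance;

    // Lazily create a single driver per JVM, chosen by the -Dbrowser system property.
    public static synchronized Driver get() {
        if (instance == null) {
            String browser = System.getProperty("browser", "chrome");
            // Real code would branch here: new ChromeDriver(), new FirefoxDriver(), ...
            instance = () -> browser;
        }
        return instance;
    }

    public static void main(String[] args) {
        System.setProperty("browser", "firefox");
        System.out.println(DriverHolder.get().name()); // prints "firefox"
        System.out.println(DriverHolder.get() == DriverHolder.get()); // prints "true"
    }
}
```

Each build-tool invocation then sets a different property, e.g. one run with `-Dbrowser=ie` and another with `-Dbrowser=chrome`, which lines up with the property-file-driven sequential runs the question asks about.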
One advantage of Cucumber-JVM is that it's pretty easy to write hooks that run before or after any test. Without that, you'll want to reset some portion of the WebDriver state before each test.
A while ago, in an unrelated case, I wrote a custom test runner to run tests against multiple database systems by extending ParentRunner and BlockJUnit4ClassRunner. It's easier to just script multiple runs of the same suite in the build tool.
I posted a quick pass at https://github.com/sethkraut/multiwebdriver. I'll also post the code below. It would probably need some polish around preparing and clearing the WebDriver instances, but it should be a good starting point.
This class runs an individual test class after populating a WebDriver field.
import java.lang.reflect.Field;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.openqa.selenium.WebDriver;

public class SingleWebDriverTestRunner extends BlockJUnit4ClassRunner {

    private final WebDriver webDriver;

    public SingleWebDriverTestRunner(Class<?> klass, WebDriver webDriver) throws InitializationError {
        super(klass);
        this.webDriver = webDriver;
    }

    // Test description methods

    @Override
    protected String getName() {
        return super.getName() + " on " + driverName();
    }

    private String driverName() {
        return webDriver.getClass().getSimpleName();
    }

    @Override
    protected String testName(FrameworkMethod method) {
        return super.testName(method) + " on " + driverName();
    }

    @Override
    protected Object createTest() throws Exception {
        Object o = super.createTest();
        for (Field f : o.getClass().getDeclaredFields()) {
            if (f.getType().isAssignableFrom(WebDriver.class)) {
                f.setAccessible(true);
                f.set(o, webDriver);
            }
        }
        return o;
    }
}
And this class iterates over multiple WebDrivers, creating an instance of the previous runner for each one:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class MultiWebDriverTestRunner extends ParentRunner<Runner> {

    private final List<WebDriver> drivers = new ArrayList<WebDriver>(
            Arrays.asList(new FirefoxDriver(), new ChromeDriver())
    );

    private List<Runner> children = null;

    public MultiWebDriverTestRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected Description describeChild(Runner child) {
        return child.getDescription();
    }

    @Override
    protected List<Runner> getChildren() {
        if (children == null) {
            children = createChildren();
        }
        return children;
    }

    private List<Runner> createChildren() {
        List<Runner> runners = new ArrayList<Runner>();
        for (WebDriver driver : drivers) {
            try {
                Class<?> javaClass = getTestClass().getJavaClass();
                runners.add(new SingleWebDriverTestRunner(javaClass, driver));
            } catch (InitializationError e) {
                e.printStackTrace();
            }
        }
        return runners;
    }

    @Override
    protected void runChild(Runner child, RunNotifier notifier) {
        child.run(notifier);
    }
}
Related
I am new to the DataStage world and I am trying to invoke the process() method by myself.
"Why do you want to do that?"
Sadly, I don't have hands-on access to DataStage directly; I am "just" the Java developer in charge of creating the Java classes that DataStage will use. My goal is to know exactly what my classes do without the DataStage treatment in the picture, so that I can debug and explain the process.
"What have you tried?"
My approach so far was to "simulate" the Configuration interface in order to call the validateConfiguration() method and have the InputLink set up before calling the process() method. But I don't know how to "simulate" a Configuration object with my data.
I have considered using a mock of my Configuration or InputLink objects, but I don't know how to do that either.
It's time to show you some code:
First, the class that extends com.ibm.is.cc.javastage.api.Processor class:
public class DownloadExportOutputDatastage extends Processor {

    @Override
    public boolean validateConfiguration(Configuration configuration, boolean b) throws Exception {
        this.m_inputLink = configuration.getInputLink(0);
        this.m_outputLink = configuration.getOutputLink(0);
        return true;
    }

    @Override
    public Capabilities getCapabilities() {
        Capabilities capabilities = new Capabilities();
        // ...
        return capabilities;
    }

    @Override
    public void process() throws Exception {
        InputRecord inputRecord;
        while ((inputRecord = this.m_inputLink.readRecord()) != null) { // How to use a custom InputLink with my data in it?
            DownloadExportOutput objDownloadExportOutput = new DownloadExportOutput();
            objDownloadExportOutput.setRequestId((String) inputRecord.getValue("idrequest")); // The only value that I need in my record
            // My custom process here
            OutputRecord outputRecord = this.m_outputLink.getOutputRecord();
            outputRecord.setValue("body", soapContent);
            this.m_outputLink.writeRecord(outputRecord);
        }
    }
}
Finally, what I have tried:
public class main {
    public static void main(String[] args) throws Exception {
        StandaloneConfiguration standaloneConfiguration = new StandaloneConfiguration(); // A custom implementation of the DataStage Configuration interface
        DownloadExportOutputDatastage sample = new DownloadExportOutputDatastage(); // The class that you can see above
        sample.validateConfiguration(standaloneConfiguration, true); // I try to load an InputLink but don't know what to put in it
        sample.getCapabilities();
        sample.process(); // Gets an NPE because of my empty InputLink
    }
}
Is there a way to put a priority on a @Factory method? I've tried @AfterMethod, @AfterTest, and @AfterClass; all result in my factory method running immediately after my setup call with the @BeforeClass annotation.
My code is similar to this:
@BeforeClass
public void setup() {
}

@Test()
public void method1() throws Exception {
}

@Test(dependsOnMethods = "method1")
public void method2() {
}

@Test(dependsOnMethods = "method2")
public void method3() throws Exception {
}

@Test(dependsOnMethods = "method3")
public void method4() throws Exception {
}

@Test(dependsOnMethods = "method4")
public void method5() throws Exception {
}

@AfterClass
@Factory
public Object[] factory() {
    Object[] values = new Object[10];
    for (int i = 0; i < values.length; i++) {
        values[i] = new validationFactory(map.get(i).x, map.get(i).y);
    }
    return values;
}
What the code is doing is reaching out to an API, retrieving any requested data, slicing that data up into a map, and then passing that map of data into the factory method in order to validate it. The problem is that immediately after my setup method runs, the factory shoots off and validates an empty map. Is there any way to make the factory method wait until the data is ready?
The purpose of @Factory is to create tests dynamically. It doesn't make sense to run those tests after an @AfterClass method, so even if you could work around your problem by checking whether the map was empty (so that factory() runs twice, but the loop only once), any tests created by the factory would not be executed by the framework.
If what you need is to validate some data after all the tests have finished, put that validation in a method annotated with @AfterClass (without @Factory). You can use assertions there just as in a regular test.
If for some reason you want to run the validation as separate tests and have to use a factory, there is a way to defer their execution until after the other tests, but they still have to be instantiated at the beginning. So it looks like you need to pass some object that loads the data when required, instead of initializing the validation tests with map entries right away. A data provider may work, but here's a simpler example.
Add all the main tests to a group.
@Test(dependsOnMethods = "method1", groups = "Main")
public void method2() { }
Create a class or a method that will load the data when needed; how depends on how you populate the map. It has to be thread-safe because TestNG can run tests in parallel. A very simplistic implementation:
public class DataLoader {
    // Location has data members X and Y
    private Map<Integer, Location> locations;

    public synchronized Location getLocation(int index) {
        if (locations == null) {
            locations = new HashMap<>();
            // load the map
        }
        return locations.get(index);
    }
}
Create a class to represent a validation test. Notice its only test depends on the main group.
public class ValidationTest {
    private int index;
    private DataLoader loader;

    public ValidationTest(int index, DataLoader loader) {
        this.index = index;
        this.loader = loader;
    }

    @Test(dependsOnGroups = "Main")
    public void validate() {
        Location location = this.loader.getLocation(this.index);
        // do whatever with location.x and location.y
    }
}
Instantiate the validation tests. They will run after the main group has finished. Notice I have removed the @AfterClass annotation.
@Factory
public Object[] factory() {
    DataLoader loader = new DataLoader();
    Object[] validators = new Object[10];
    for (int i = 0; i < validators.length; i++) {
        validators[i] = new ValidationTest(i, loader);
    }
    return validators;
}
By the way, dependencies between test methods usually indicate poorly written tests and should be avoided, at least for unit tests. There are frameworks other than TestNG for testing complex scenarios.
A TestNG run has two distinct phases:
Creation of tests;
Run of tests.
So you can't expect to create new tests during the run phase.
I want to back up my application's database before replacing it with the test fixture. I'm forced to use JUnit 3 because of Android limitations, and I want to implement the equivalent behavior of @BeforeClass and @AfterClass.
UPDATE: There is now a tool (Junit4Android) to get support for
Junit4 on Android. It's a bit of a kludge but should work.
To achieve the @BeforeClass equivalent, I had been using a static variable and initializing it during the first run, like this, but I need to be able to restore the database after running all the tests. I can't think of a way of detecting when the last test has run (since I believe there is no guarantee on the order of test execution).
public class MyTest extends ActivityInstrumentationTestCase2<MainActivity> {
    private static boolean firstRun = true;

    @Override
    protected void setUp() {
        if (firstRun) {
            firstRun = false;
            setUpDatabaseFixture();
        }
    }
    ...
}
From the junit website:
Wrapped the setUp and tearDown method in the suite. This is for the
case if you want to run a single YourTestClass testcase.
public static Test suite() {
    return new TestSetup(new TestSuite(YourTestClass.class)) {
        protected void setUp() throws Exception {
            System.out.println(" Global setUp ");
        }

        protected void tearDown() throws Exception {
            System.out.println(" Global tearDown ");
        }
    };
}
If you would like to run only one setUp and tearDown for all the
testcases, make a suite, add the test class to it, and pass the suite
object to the TestSetup constructor. But I think there is not much usage
for this, and in a way it is violating the JUnit philosophy.
Recently, I was looking for a similar solution too. Fortunately, in my case the JVM exits after the last test is run, so I was able to achieve this by adding a JVM shutdown hook.
// Restore database after running all tests
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        restoreDatabase();
    }
});
Hope this helps.
I would suggest avoiding these kinds of dependencies, where you need to know the order in which tests are run. If all you need is to restore a real database that was replaced by setUpDatabaseFixture(), your solution probably comes from using a RenamingDelegatingContext. Anyway, if you can't avoid knowing when the last test was run, you can use something like this:
...
private static final int NUMBER_OF_TESTS = 5; // count your tests here
private static int sTestsRun = 0;
...
protected void tearDown() throws Exception {
    super.tearDown();
    sTestsRun += countTestCases();
    if (sTestsRun >= NUMBER_OF_TESTS) {
        android.util.Log.d("tearDown", "*** Last test run ***");
    }
}
Isn't this (dealing elegantly with data, so you don't have to worry about restoring it) what testing with mock objects is for? Android supports mocking.
I ask as a question, since I've never mocked Android.
In my experience, and from this blog post, when the Android tests are made into a suite and run by the InstrumentationTestRunner (ActivityInstrumentationTestCase2 is an extension of ActivityTestCase, which is an extension of InstrumentationTestCase), they are ordered alphabetically using android.test.suitebuilder.TestGrouping.SORT_BY_FULLY_QUALIFIED_NAME. So you can just restore your DB with a method whose name is lowest in the alphabet among your test names, like:
// underscore is low in the alphabet
public void test___________Restore() {
...
}
Note:
You have to pay attention to inherited tests, since they will not run in this order. The solution is to override all inherited tests and simply call super() from the override. This will once again have everything execute alphabetically.
Example:
// Reusable class w/ only one-time setup and finish.
// Abstract so it is not run by itself.
public abstract class Parent extends InstrumentationTestCase {
    @LargeTest
    public void test_001_Setup() { ... }

    @LargeTest
    public void test_____Finish() { ... }
}

/*-----------------------------------------------------------------------------*/

// These will run in the order shown due to naming.
// Inherited tests would not run in the order shown w/o the use of overrides & supers.
public class Child extends Parent {
    @LargeTest
    public void test_001_Setup() { super.test_001_Setup(); }

    @SmallTest
    public void test_002_MainViewIsVisible() { ... }
    ...

    @LargeTest
    public void test_____Finish() { super.test_____Finish(); }
}
So I thought the following code would run fine in TestNG, although it doesn't:
public class Tests {
    int i = 0;

    @Test
    public void testA() {
        Assert.assertEquals(0, i);
        ++i;
    }

    @Test
    public void testB() {
        Assert.assertEquals(0, i);
        ++i;
    }
}
Is there a way to make TestNG fire up a new Tests class for each test method?
The common solution is to use an @BeforeMethod method to set up test state:
@BeforeMethod
public void setup() {
    i = 0;
}
By far the most common solution to this issue I have found is to use ThreadLocals and just accept that you only have one instance of each test class. This handles all the questions about how to deal with parallel/threaded tests. It works, but it's a bit ugly.
private ThreadLocal<Integer> i = new ThreadLocal<>();

@BeforeMethod
public void setup() {
    i.set(0);
}

@Test
public void testA() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}

@Test
public void testB() {
    Integer i1 = i.get();
    Assert.assertEquals(0, i.get().intValue());
    i.set(i1 + 1);
}
Now, back to the root of your question: new instances for each method.
I've been researching similar topics for a few weeks, and I have identified this as the number one issue I was personally having with TestNG. It has literally driven me crazy.
If I were to ignore the fact that your tests had a bunch of complexities, you could potentially hack together a workaround to meet the requirements you listed.
A TestNG @Factory method allows you to create new instances of your test classes.
@Factory
public Object[] factory() {
    return new Object[]{new Tests(), new Tests()};
}
I've now created two Tests instances to be run by TestNG.
Then the issue is that your tests still fail, because TestNG will try to run all test methods on each of your test class instances. To hack around this, you could implement an IMethodInterceptor and enforce that each Tests instance only runs one method: maintain a list of methods and go through them one at a time.
Here is a rough example I hacked together.
public class TestFactory implements IMethodInterceptor {

    private List<String> methodsToRun = new ArrayList<>();
    private List<Object> testInstances = new ArrayList<>();

    @Factory
    public Object[] factory() {
        return new Object[]{new Tests(), new Tests()};
    }

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        ArrayList<IMethodInstance> tempList = new ArrayList<>();
        for (IMethodInstance i : methods) {
            if (testInstances.contains(i.getInstance())) {
                continue;
            }
            String mName = i.getMethod().getConstructorOrMethod().getName();
            if (!methodsToRun.contains(mName)) {
                tempList.add(i);
                methodsToRun.add(mName);
                testInstances.add(i.getInstance());
            }
        }
        return tempList;
    }
}
Then add the listener to the top of your Tests class:
@Listeners(TestFactory.class)
You could improve this by dynamically creating new instances of the tests in the factory, breaking the listener out into its own file, and making numerous other improvements, but you get the gist.
Maybe a crazy solution like the above will work for you or someone else.
I'm using TestNG to run Selenium-based tests in Java. I have a bunch of repeated tests. Generally, they all do the same thing except for the test name and one parameter.
I want to automate the generation of these tests. I was thinking about using a factory. Is there a way to generate tests with different names? What would be the best approach to this?
For now I have something like below, and I want to create 10 tests like LinkOfInterestIsActiveAfterClick:
@Test(dependsOnGroups = "loggedin")
public class SmokeTest extends BrowserStartingStoping {
    public void LinkOfInterestIsActiveAfterClick() {
        String link = "link_of_interest";
        browser.click("*", link);
        Assert.assertTrue(browser.isLinkActive(link));
    }
}
My XML suite is auto-generated from Java code.
Test names are crucial for logging which link is active, and which one is not.
Have your test class implement org.testng.ITest and override getTestName() to return the name you want.
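A minimal sketch of that approach; the `ITest` interface is reproduced inline (it mirrors `org.testng.ITest`, which declares a single `getTestName()` method) so the snippet is self-contained, and the class and method names here are made up for illustration:

```java
public class NamedLinkTest {
    // Mirrors org.testng.ITest, which declares exactly this one method.
    interface ITest {
        String getTestName();
    }

    // One instance per link, typically created by an @Factory method;
    // TestNG's reports show getTestName() instead of the method name.
    static class LinkTest implements ITest {
        private final String link;

        LinkTest(String link) {
            this.link = link;
        }

        @Override
        public String getTestName() {
            return "LinkIsActiveAfterClick[" + link + "]";
        }

        // The real class would carry @Test here, e.g.
        // browser.click("*", link); Assert.assertTrue(browser.isLinkActive(link));
    }

    public static void main(String[] args) {
        System.out.println(new LinkTest("link_of_interest").getTestName());
        // prints "LinkIsActiveAfterClick[link_of_interest]"
    }
}
```

This way each factory-created instance carries its own display name, which covers the logging requirement of knowing which link was tested.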
So I connected a Factory with a DataProvider and used attributes of the test contexts.
@DataProvider(name = "DP1")
public Object[][] createData() {
    Object[][] retObjArr = {
            {"Link1", "link_to_page"},
            {"Link2", "link_to_page"},
    };
    return retObjArr;
}

@Test(dataProvider = "DP1")
public void isActive(String name, String link) {
    this.context.setAttribute("name", name);
    browser.click(link);
    Assert.assertTrue(browser.isLinkActive(link));
}
And in the Listener
public class MyListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult tr) {
        log("+", tr);
    }

    // and similar

    private void log(String string, ITestResult tr) {
        List<ITestContext> contexts = this.getTestContexts();
        String testName = tr.getTestClass().getName();
        for (ITestContext i : contexts) {
            if (i.getAttribute("name") != null) {
                logger.info(testName + "." + i.getAttribute("name"));
            }
        }
    }
}