Can someone recommend the best way to tear down in an @AfterClass method with TestNG? I occasionally get a hung window or popup and can't get back to the homepage to log out gracefully, so the next class starts with modal dialog errors.
I've tried getting a window handle on the homepage, then one on the popup, then putting a switchTo in my @AfterClass, but it doesn't work. I've also tried driver.close(), but then my next class gets skipped with an unreachable browser error.
I really need a solid @AfterClass tearDown method that can recover from whatever errors and popups are on the page and leave a clean browser window at the login page for the next test class to run.
Edit: Adding Code:
package TestPackage;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.*;
import org.testng.annotations.Test;

//@Test (priority = 1)
public class test1 {
    public static WebDriver driver = new FirefoxDriver();
    Environment environment = Environment.QA03();
    User testUser = User.ns_system();
    AxUtilities axUtilities;

    @BeforeClass
    @Parameters("environment")
    @Test
    public void login(String environment1){
        // environment = new Environment(environment1);
        axUtilities = new DAxUtilities(environment1, driver);
        // DAxUtilities dAxUtilities = new DAxUtilities(environment);
        Login login = new Login();
        login.getLogin(testUser, axUtilities, driver);
        axUtilities.sleep(5000);
    }

    @Test
    public void testNothing(){
        String s = "public void testNothing() reached...";
        System.out.println(s);
    }

    @AfterClass
    public void verifyOKAndLogout() {
        driver.quit();
        // DAxUtilities dAxUtilities;
    }
}
Test classes 1 and 2 are the same except for the class name...
And the tests are run from an XML file. I've tried many variants of the XML file, even putting each test in its own XML file:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="TestSuite1" verbose="1">
    <parameter name="environment" value="QA03"/>
    <test name="Test1">
        <classes>
            <class name="TestPackage.test1"/>
            <!--<class name="TestPackage.test2"/>-->
        </classes>
    </test>
    <test name="Test2">
        <classes>
            <class name="TestPackage.test2"/>
        </classes>
    </test>
</suite>
But two WebDrivers always get created, one right after the other, at the very beginning of the test run, before any other code is executed. And breakpoints show the run definitely skips from the beginning of class 1 to the beginning of class 2 before running class 1's tests...
From what you wrote, it seems you are creating the browser (instantiating WebDriver) once before the entire test suite and then executing all tests in the same browser instance. Although that gives faster execution, it also introduces problems like the one you are having right now.
I would suggest creating a new instance of the browser before executing each test class (or even each test method). In that case you can safely use driver.quit() in your @AfterClass method. Restarting the browser for every test will make your tests much more stable.
EDIT: comments on your newly added code:
The way you are instantiating WebDriver is not correct. If you have several test classes, you will end up with multiple browser windows opened before any test is executed.
You annotated one method with both @Test and @BeforeClass; this causes the method to be executed twice in a row.
The way it should (more or less) look is:
public class MyTest {
    protected WebDriver driver;

    @BeforeClass //I would even suggest using @BeforeMethod, but that's up to you
    public void setUp() {
        driver = new FirefoxDriver();
        //other instantiations and assignments if necessary
    }

    @Test
    public void login() {
        driver.get("http://watever.com");
        //do whatever
    }

    @Test
    public void someOtherTest() {
        //do something else
    }

    @AfterClass //or @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
I believe that calling driver.close() or driver.quit() and then creating a new WebDriver and assigning it to your driver field should do the trick. You'd probably want to call the same driver-creation function at both the beginning and end of your class.
Use

@AfterClass
public void tearDown() {
    driver.quit();
}

It will close all the windows opened by the test class.
Then, in the next class, create a new instance of WebDriver as shown below:

WebDriver driver = new FirefoxDriver(); //Or some other driver
Related
I have a project where, before every @Test method, I check whether the method's annotation data is valid. If the data isn't valid, I want to skip that test method and continue with the rest of my test suite.
All the data parsing and logic works fine, but from what I can tell I am using the wrong tool for the job.
My code has...
private SoftAssert s_assert = new SoftAssert();

@BeforeMethod
public void beforeMethod(Method m){
    //reads code
    if (dataNotCorrect)
        s_assert.fail();
}

@Test @MyCustomAnnotation(data = incorrect)
public void Test1(){
    //Do stuff
}

@Test @MyCustomAnnotation(data = correct)
public void Test2(){
    //Do stuff
}
In this scenario, when the suite runs, Test1() should be skipped and TestNG should continue on to run Test2(). Yet as soon as I catch the failure at Test1(), the whole suite ends there, skipping not only Test1() but also Test2().
I've tried both a SoftAssert and a normal assert, but neither seems to work.
SkipException is what you are looking for.
In beforeMethod, check your data and throw SkipException if it is not valid. This will skip the test. In this complete yet simple example, test2 is skipped:
import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class TestA {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface MyCustomAnnotation {
        boolean dataCorrect();
    }

    @BeforeMethod
    public void beforeMethod(Method m) {
        if (!m.getAnnotation(MyCustomAnnotation.class).dataCorrect()) {
            throw new SkipException("Invalid data");
        }
    }

    @Test
    @MyCustomAnnotation(dataCorrect = true)
    public void test1() {
        System.out.println("test1");
    }

    @Test
    @MyCustomAnnotation(dataCorrect = false)
    public void test2() {
        System.out.println("test2");
    }
}
See also: How do I use TestNG SkipException?
You also need to alter the default configuration failure policy, to let the other tests run even if one is skipped. This is done at the suite level:
<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite" configfailurepolicy="continue">
...
</suite>
Thanks to @Kyle, the OP, for pointing out this attribute.
@Benoit got me most of the way there by replacing my asserts with a thrown SkipException. But my issue of wanting only the current test to be skipped, rather than every remaining test, still remained.
The issue turned out to be the configfailurepolicy in TestNG. It defaults to skip (skipping all remaining tests) when I wanted it set to continue (continuing with the rest of the suite).
Here is an answer I found elsewhere which I managed to apply in two different ways. Link here
1.
First, make a testng.xml and run your tests from there. Next to the suite name, add the attribute configfailurepolicy="continue".
Here is my testng.xml below
<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite" configfailurepolicy="continue">
    <test name="MyTests" preserve-order="true">
        <classes>
            <class name="testclassLocation..." />
        </classes>
    </test>
</suite>
Make sure that you run your tests from testng.xml if you do it this way.
2.
Find where the TestNG .jar is located. I'm using Maven, so it was "${user.dir}.m2\repository\org\testng\testng\6.14.3".
Then open the .jar archive, view the file 'testng-1.0.dtd', and find the line
configfailurepolicy (skip | continue) "skip"
And change it to
configfailurepolicy (skip | continue) "continue"
Should work fine after that.
Edit:
As mentioned in the comments, the first solution is recommended because it makes these changes/fixes portable across multiple projects/devices. The second solution only applies the fix to your machine.
Create a static boolean flag. In BeforeMethod, if the data is not correct, set the flag to true. Then, in the Test, if the flag is true, first reset it to false and fail the test.
Sample code
public static boolean myflag = false;

@BeforeMethod
public void beforeMethod(Method m){
    //reads code
    if (dataNotCorrect)
        myflag = true;
}

@Test @MyCustomAnnotation(data = incorrect)
public void Test1(){
    if (myflag) {
        myflag = false;
        s_assert.fail();
    }
    //Do stuff
}
Below is my code:
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.Test;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.AfterMethod;

public class NewTest {
    public WebDriver driver;
    String driverPath = "F:\\AutomationTesting\\geckodriver-v0.10.0-win64\\geckodriver.exe";

    @Test
    public void main() {
        driver.findElement(By.id("account")).click();
        // Find the element whose ID attribute is 'log' (Username)
        // and enter the username into it.
        driver.findElement(By.id("log")).sendKeys("testuser_1");
        // Find the element whose ID attribute is 'pwd' (Password)
        // and enter the password into it.
        driver.findElement(By.id("pwd")).sendKeys("Test#123");
        // Now submit the form. WebDriver will find the form for us from the element.
        driver.findElement(By.id("login")).click();
        // Print a log-in message to the screen.
        System.out.println(" Login Successfully, now it is the time to Log Off buddy.");
        // Find the element whose ID attribute is 'account_logout' (Log Out).
        driver.findElement(By.id("account_logout"));
    }

    @BeforeMethod
    public void beforeMethod() {
        System.setProperty("webdriver.gecko.driver", driverPath);
        driver = new FirefoxDriver();
        // Set an implicit wait: any search for elements on the page may take up to
        // this long before an exception is thrown.
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        // Launch the Online Store website.
        driver.get("http://www.onlinestore.toolsqa.wpengine.com");
    }

    @AfterMethod
    public void afterMethod() {
        driver.quit();
    }
}
When I try to run my test suite using TestNG, I get an exception like "Class path not found". I have tried cleaning the project and reinstalling TestNG, but no luck.
I'm using Selenium WebDriver 3.0, geckodriver, and TestNG.
How do I solve this problem?
You need to add a testng.xml file to execute all your tests as a suite.
Just add a testng.xml file in the project root and paste the content below into it:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="FirstTestSuit">
    <test name="FunctionlTest">
        <classes>
            <class name="package.subpackage.TestClassName1"/>
            <class name="package.subpackage.TestClassName2"/>
        </classes>
    </test>
</suite>

After saving the changes, right-click on the project > Run As > TestNG Suite.
Hope this helps.
This happens when you try to run your testng.xml as a TestNG suite and the class path is incorrectly described under the <class> tag.
Say you have the following class tag in your testng.xml:

<classes>
    <class name="packageName.yourClassName"/>
</classes>

You need to specify the correct path, including the package name, for each included class.
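For instance (the class and package names here are hypothetical), a test class located at src/test/java/com/example/tests/LoginTest.java and declared in package com.example.tests must be referenced by its fully qualified name:

```
<classes>
    <class name="com.example.tests.LoginTest"/>
</classes>
```

If only the simple class name (LoginTest) is given, TestNG cannot resolve the class and reports a class-path error like the one above.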
I'm using Selenium WebDriver with Java and a test runner suite (XML), and I have 2 tests to run.
I had a setup method in each test class and the tests were working well (each test with its own browser window).
Then I decided to move the setup method to a configuration class. Each test class extends this class, which creates a problem: the second test run overrides the first by using the same browser.
Setup class method code:
public class Configrations_And_ScreenShotsFunc_POM {
    protected WebDriver driver;

    public void setup()
    {
        System.setProperty("webdriver.edge.driver", "C:\\Program Files (x86)\\Microsoft Web Driver\\MicrosoftWebDriver.exe");
        driver = new EdgeDriver();
        driver.get(URL);
    }
}
Test 1 class code:

public class TestCase1_POM extends Configrations_And_ScreenShotsFunc_POM {
    @BeforeTest
    public void Begain() throws InterruptedException
    {
        setup(); //Setup Browser
    }
}

Test 2 class code:

public class TestCase2_POM extends Configrations_And_ScreenShotsFunc_POM {
    @BeforeTest
    public void Begain() throws InterruptedException
    {
        setup(); //Setup Browser
    }
}
Runner.xml:

<?xml version="1.0" encoding="UTF-8"?>
<suite name="TestSuite" thread-count="2" parallel="tests">
    <test name="TestCase1">
        <parameter name="browser" value="Edge" />
        <classes>
            <class name="POM.Tests.TestCase1_POM"></class>
        </classes>
    </test>
    <test name="TestCase2_POM">
        <parameter name="browser" value="Edge" />
        <classes>
            <class name="POM.Tests.TestCase2_POM"></class>
        </classes>
    </test>
</suite>
Comment: The 2 test files are in one folder, and the configuration class is in another folder.
How can this be solved?
Use the @BeforeTest annotation on your setup() method instead of your current Begain(), and remove Begain().
The annotation @BeforeTest is intended to be invoked only once per <test> tag in your suite XML file. Putting browser instantiation in a @BeforeTest method in a base class from which all test classes extend has the following disadvantage:
Depending on how the WebDriver instance is saved (either as a static data member or as an instance variable), test methods can either end up using the same WebDriver instance (in the static case) or end up getting a NullPointerException (in the instance case).
You can consider moving the browser instantiation to a more granular level, such as:
@BeforeClass (the drawback here is that if there is more than one @Test method in your class using the WebDriver instance initialized by your @BeforeClass, then during parallel execution you will end up with race conditions among your test methods), or
a @BeforeMethod annotated method.
I wrote a blog post that shows how to do parallel executions with TestNG without using any of these configuration annotations, inheritance, etc. Please see if it helps you.
Blog link: https://rationaleemotions.wordpress.com/2013/07/31/parallel-webdriver-executions-using-testng/
My setup:
A TestBase class containing a @BeforeClass method
Several Test classes extending the TestBase class, each also containing a @BeforeClass method
TestNG 6.8.8
Why this setup?:
I need the @BeforeClass in the TestBase class to provide setup that all test classes will need and that I don't want to repeat in every Test class, for example thread-id-dependent login credentials.
The TestBase class also instantiates the Selenium WebDriver.
I need the @BeforeClass in the Test classes to initialize everything that all @Test methods will use but that only needs to (or must) be built/invoked once for all tests. This includes calls to said WebDriver instance (which is why a "normal" constructor doesn't work here).
Here's what happens:
When I run the tests via a TestNG XML file and there is an exception within the @BeforeClass method of one of the Test classes, all subsequent Test classes are skipped by TestNG.
Why does this happen? How can I prevent it?
When I change the annotation in the TestBase class to @BeforeSuite, for example, then all tests are run, even if there is an exception in one of the @BeforeClass methods.
Example:
When you run the XML file, the complete RunAllTestClasses02 class is skipped.
TestNG XML file:

<?xml version="1.0" encoding="UTF-8"?>
<suite name = "MiscSuite">
    <test name = "MiscTest">
        <classes >
            <class name="drkthng.misc.RunAllTestClasses01" />
            <class name="drkthng.misc.RunAllTestClasses02" />
        </classes>
    </test>
</suite>
TestBase class with a @BeforeClass method:

public abstract class RunAllTestClassesBase {
    @BeforeClass
    public void beforeClass() {
        // do something that all Test classes will need
    }
}

Test class that throws an exception within its @BeforeClass method:

public class RunAllTestClasses01 extends RunAllTestClassesBase {
    @BeforeClass
    public void beforeClass() {
        Assert.assertTrue(false);
    }

    @Test
    public void Test01() {
        Assert.assertTrue(true);
    }
}
This was a bug in TestNG, solved in 6.9.5. Please upgrade.
Try adding @AfterClass(alwaysRun = true) and/or @AfterMethod(alwaysRun = true), as by default these methods are skipped if @BeforeClass or @BeforeMethod did not complete.
The TestNG documentation on configuration failures, failure policy, and alwaysRun explains whether and when configuration failures cause configuration methods to be skipped, the available failure policies, and best practices.
So currently I am doing something like this for cross-browser testing:
@DataProvider(name="foo")
public Object[][] getDrivers() {
    DesiredCapabilities firefoxCapabs = DesiredCapabilities.firefox();
    firefoxCapabs.setCapability("version", "26");
    firefoxCapabs.setCapability("platform", Platform.WINDOWS);
    DesiredCapabilities chromeCapabs = ....
    ....
    DesiredCapabilities ieCapabs = ...
    ....
    return new Object[][]{
        {new RemoteWebDriver(url, firefoxCapabs)},
        {new RemoteWebDriver(url, chromeCapabs)},
        ......
    };
}

@Test(dataProvider="foo")
public void testSomething(WebDriver driver) {
    //some test
}
This seems extremely inefficient, as I am basically creating and destroying these WebDriver objects every time I run a test. Is there no way to do something like this at least at the test-suite level, so that I am not generating and destroying these objects for every test? I would like something like the code below. I am aware that you cannot have a DataProvider for @BeforeSuite methods!
public class TestSuite {
    public static WebDriver driver;

    @BeforeSuite(dataProvider="foo")
    public void setDriver(WebDriver driver) {
        this.driver = driver;
    }
}

public class TestClass {
    private WebDriver driver;

    @BeforeTest
    public void getDriver() {
        this.driver = TestSuite.driver;
    }

    @Test
    public void myTest() {
        //use this.driver to do testing stuff
    }
}
Are there options I am not seeing to do something like this?
Sauce Labs OnDemand has a great plugin for Jenkins (https://saucelabs.com/jenkins/5). Their approach is pretty simple: you check/uncheck which OSes and browsers you want to test, and Jenkins sets environment variables for your tests to pick up. Below is a complete example using Spring's @Configuration:
package com.acme.test;

import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.core.env.Environment;

@Configuration
public class SauceLabsWebDriverConfiguration {

    @Autowired private Environment environment;

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public WebDriver webDriver() throws MalformedURLException {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("version", environment.getProperty("SELENIUM_VERSION", "17.0.1"));
        capabilities.setCapability("platform", environment.getProperty("SELENIUM_PLATFORM", "XP"));
        capabilities.setCapability("browserName", environment.getProperty("SELENIUM_BROWSER", "firefox"));
        String username = environment.getProperty("SAUCE_USER_NAME", "enter_your_username_here");
        String accessKey = environment.getProperty("SAUCE_API_KEY", "enter_your_api_here");
        return new RemoteWebDriver(new URL("http://" + username + ":" + accessKey + "@ondemand.saucelabs.com:80/wd/hub"), capabilities);
    }
}
Sauce Labs has some free plans, but if you don't want to use them, you should be able to switch out the last part that constructs the URL ("http://" + username + ":" + accessKey + "@ondemand.saucelabs.com:80/wd/hub") for the actual server URL you want to point to ("http://mydomain.com").
The trick is basically to replace hard-coded browser/capability names with environment-provided ones, and then have your build runner (Ant/Maven/etc.) set environment variables for each OS/browser combo you want to test and "loop" over them somehow. The Sauce Labs plugin just makes the looping easy. You can still provide default fallback values in case you want to run a simple local test.
// Before
DesiredCapabilities firefoxCapabs = DesiredCapabilities.firefox();
firefoxCapabs.setCapability("version", "26");
firefoxCapabs.setCapability("platform", Platform.WINDOWS);

// After
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("version", environment.getProperty("SELENIUM_VERSION", "17.0.1"));
capabilities.setCapability("platform", environment.getProperty("SELENIUM_PLATFORM", "XP"));
capabilities.setCapability("browserName", environment.getProperty("SELENIUM_BROWSER", "firefox"));
Hope it helps.
Two ways to implement this:
1) Since you are using a suite, you can create two tests that run at the start and end of the suite: one sets up the WebDriver and stores it in a global value, which each test class can later access, and the second closes the WebDriver.
2) You can use dependency injection to set up the WebDriver as a singleton/global value and inject it into each test:
http://testng.org/doc/documentation-main.html#dependency-injection
Here's something you can try:
Use an ITestListener. Implement the onStart and onFinish methods to create ThreadLocal variables for your driver based on the parameter value.
That is, in onStart(context), fetch your parameter value using context.getCurrentXmlTest().getParameter("something"), and depending on that value populate the ThreadLocal with the appropriately initialized driver:

ThreadLocal<WebDriver> threadDriver = new ThreadLocal<WebDriver>();

In your XML, pass the browser as a parameter on the individual test tags and set parallel="tests":
<test name="1" >
    <parameter name="browser" value="ff"></parameter>
    <classes>
        <class name="com.nv.test.testngtests.Eq"/>
    </classes>
</test>
<test name="2" >
    <parameter name="browser" value="chrome"></parameter>
    <classes>
        <class name="com.nv.test.testngtests.Eq"/>
    </classes>
</test>
You can take this to the suite level too with ISuiteListener, but I think the per-test approach at least gives you a chance at some parallelism.
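The ThreadLocal isolation this listener approach relies on can be sketched without Selenium; in the sketch below a String stands in for the per-thread WebDriver instance, and the two threads stand in for two parallel <test> tags:

```java
public class ThreadLocalDemo {
    // One slot per thread; in the listener approach this would be ThreadLocal<WebDriver>.
    static final ThreadLocal<String> threadDriver = new ThreadLocal<>();
    static final StringBuilder log = new StringBuilder();

    static Runnable test(String browser) {
        return () -> {
            // onStart would do this: initialize the per-thread "driver".
            threadDriver.set(browser);
            // The test body reads only its own thread's value.
            synchronized (log) {
                log.append(Thread.currentThread().getName())
                   .append("=").append(threadDriver.get()).append(";");
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(test("ff"), "test1");
        Thread t2 = new Thread(test("chrome"), "test2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(log); // each thread saw only its own value
    }
}
```

Each parallel test thread gets its own slot, so a "ff" test and a "chrome" test never stomp on each other's driver, which is the whole point of pairing ThreadLocal with parallel="tests".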
Two ways I see to fix this:
Have a different instance of the driver for each class, and use a @BeforeClass to create the driver (I personally do this, because it allows me to run all of my classes in parallel).
Construct your driver in your class's constructor, and use a @Factory.
Both of these solutions create one driver per class. If you want a single set of drivers for your entire suite, create and store an Object[][] statically, and use it in your @DataProvider.
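The "build the matrix once, reuse it across data-provider calls" idea can be sketched independently of TestNG. Here DriverMatrix and the string labels are placeholders for a class holding real RemoteWebDriver instances:

```java
public class DriverMatrix {
    private static Object[][] drivers;   // cached matrix, built once
    static int buildCount = 0;

    // Stand-in for the expensive RemoteWebDriver construction.
    static String createDriver(String browser) {
        return "driver:" + browser;
    }

    // What a @DataProvider method could delegate to:
    // the first call builds the matrix, later calls reuse it.
    public static synchronized Object[][] getDrivers() {
        if (drivers == null) {
            buildCount++;
            drivers = new Object[][] {
                { createDriver("firefox") },
                { createDriver("chrome") },
            };
        }
        return drivers;
    }

    public static void main(String[] args) {
        Object[][] first = getDrivers();
        Object[][] second = getDrivers();
        System.out.println(first == second);  // true: same matrix reused
        System.out.println(buildCount);       // 1: built only once
    }
}
```

With this shape, every test class's data provider returns the same cached array, so the drivers are created once per suite rather than once per test; you would still need a suite-level hook to quit them at the end.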
What I have done in my framework is use a configuration file (a Java properties file) to set the browser parameter.
In your project, create separate classes for the browsers, for example IE, Firefox, etc. Inside these classes, put the functions that create a WebDriver with the desired capabilities.
Now create a Driver class with a function that initializes the WebDriver from the appropriate browser class, according to the input from your properties file. For example:
Save the properties file as Framework.properties and add a line like the one below:

Browser=Firefox
'Browser=internetExplorer
...and so on

To read this properties file, use java.util.Properties like below:

Properties prop = new Properties();
FileInputStream stream = new FileInputStream("path to property file");
prop.load(stream);
String browser = prop.getProperty("Browser");
stream.close();

Now use this browser parameter in your Driver class as below:

switch (browser) {
    case "FireFox":
        return FireFox.initializeFireFoxDriver();
    case "InternetExplorer":
        return IE.initializeInternetExplorerDriver();
    case "Chrome":
        return Chrome.initializeChromeDriver();
    case "Safari":
        return Safari.initializeSafariDriver();
    default:
        break;
}

Call this class's method in your fixture setup and it will start a browser instance as specified in the properties file.
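The properties-to-driver selection above can be sketched end to end in plain Java. The per-browser factory classes are replaced by string labels here, and the config is loaded from an in-memory string rather than Framework.properties, purely to keep the example self-contained:

```java
import java.io.StringReader;
import java.util.Properties;

public class BrowserConfigDemo {
    // Stand-in for the per-browser factory classes; returns a label
    // where the real framework would return a WebDriver.
    static String initializeDriver(String browser) {
        switch (browser) {
            case "Firefox":          return "FirefoxDriver";
            case "InternetExplorer": return "InternetExplorerDriver";
            case "Chrome":           return "ChromeDriver";
            default: throw new IllegalArgumentException("Unknown browser: " + browser);
        }
    }

    // Reads the Browser key the way the answer describes
    // (a real setup would load from a FileInputStream instead).
    static String driverFromConfig(String config) throws Exception {
        Properties prop = new Properties();
        prop.load(new StringReader(config));
        return initializeDriver(prop.getProperty("Browser"));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(driverFromConfig("Browser=Firefox")); // FirefoxDriver
        System.out.println(driverFromConfig("Browser=Chrome"));  // ChromeDriver
    }
}
```

Switching browsers then means editing one line in the properties file, with no recompilation of the test code.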
If you don't want to use the properties-file method, use the @Parameters option in testng.xml to pass the browser name when running the test suite.
Assuming you are using CI, you can maintain a configuration file that specifies which browser and base URL to use when running a test or test suite.
This configuration can be kept in a properties file that is always read in @BeforeSuite, and accordingly you can load the WebDriver with the matching DesiredCapabilities.
Within CI you can then have multiple projects that differ only in browser, each running in parallel/simultaneously, which uses your resources efficiently; each project overrides the configuration file with the required data.
Have a look at Selenium Grid. I've never used it myself, but as far as I understand your question it matches your needs: Selenium Grid allows you to run your Selenium test cases on different OS/browser combinations.
More info at http://docs.seleniumhq.org/docs/07_selenium_grid.jsp and http://code.google.com/p/selenium/wiki/Grid2