I'm trying to construct a suite of Cucumber tests using Selenium. The first step in each test logs in to a web application.
I'm using the Selenium ChromeDriver, and I can see that Cucumber is using dependency injection to initialise the driver. After each test completes, I would like to start fresh with a new web browser, but Cucumber insists on using the same driver used in the previous test. I've tried a number of things to start from a clean point. I'm not sure what the recommended way of doing this is; I presume you have to use the 'Hooks' class, as that contains methods which run before and after each test scenario. Here's what I currently have:
public class Hooks {

    private final WebDriver driver;

    @Inject
    public Hooks(final WebDriver driver) {
        this.driver = driver;
    }

    @Before
    public void openWebSite() {
    }

    @After
    public void closeSession() {
        driver.close();
    }
}
As you can see, I put a driver.close() statement into the @After method, but I don't see a way to reopen the session or create a new one, and I'm getting the following exception when the next test tries to log in:
Message: org.openqa.selenium.NoSuchSessionException: no such session
Presumably because it didn't like the fact that I just called close().
But really, I want to tell Cucumber that I'd like a completely fresh driver to be used for each test scenario.
I've searched around for Cucumber examples, but all the example code I've found involves just a single test. I haven't turned up anything that uses a suite of tests to do something similar to what I've described above.
What's the recommended pattern for this?
Related
I am running a test over and over. Each time I run it, I see that another Firefox window appears.
Where can I add the driver.quit() (or similar) function so it properly cleans itself up on program close?
I am only calling driver with this:
me.Drivers.Test = new FirefoxDriver();
me.Drivers.Test.get(websiteLink);
Any assistance greatly appreciated.
You can use the test annotations and call me.Drivers.Test.quit() in @After (in JUnit it's @After; every testing framework has its own naming convention).
Example:
@Before
public void before() {
    me.Drivers.Test = new FirefoxDriver();
}

@Test
public void test() {
    me.Drivers.Test.get(websiteLink);
}

@After
public void after() {
    me.Drivers.Test.quit();
}
The @Before annotated method will run before the test starts, as a kind of test setup.
In @Test you are doing the actual testing.
And @After will run after the test has finished; that is where you do all the cleanup.
For more details you can look here.
Just as you initialize the driver, you have to quit it.
Use:
me.Drivers.Test.quit()
I assume that me.Drivers.Test is an instance of WebDriver, so you can use me.Drivers.Test.quit() at the end of your script to quit the WebDriver.
Apologies for any formatting issues or anything else against etiquette on this site, this is my first post after lurking for the last couple of months and everything that I am working with is pretty new to me.
I have recently started to write some selenium tests in Java/Cucumber/JUnit and have reached an issue that I can't work my way around. I know what the problem is but can't figure out how to actually change my tests to remedy it. Here is some of the background info:
feature file example:
Feature: Form Submission functionality

  @Run
  Scenario: Submitting the demo form with correct details is successful
    Given I am on the demo page
    When I submit the demo form with valid information
    Then the thank you page is displayed
StepDefs file example (I have four files like this, testing different parts of the site):
package testFiles.stepDefinitions;

import testFiles.testClasses.formSubmissionFunctionalityTest;
import cucumber.api.java.en.*;
import cucumber.api.java.After;
import cucumber.api.java.Before;

public class formSubmissionFunctionalityStepDefs {

    private formSubmissionFunctionalityTest script = new formSubmissionFunctionalityTest();

    @Before
    public void setUpWebDriver() throws Exception {
        script.setUp();
    }

    @Given("^I am on the demo page$")
    public void i_am_on_the_demo_page() throws Throwable {
        script.goToDemoPage();
    }

    @When("^I submit the demo form with valid information$")
    public void i_submit_the_demo_form_with_valid_information() throws Throwable {
        script.fillSubmitDemoForm();
    }

    @Then("^the thank you page is displayed$")
    public void the_thank_you_page_is_displayed() throws Throwable {
        script.checkThankYouPageTitle();
    }

    @After
    public void tidyUp() {
        script.tearDown();
    }
}
I then also have a formSubmissionFunctionalityTest.java file which contains all of the actual code for methods such as fillSubmitDemoForm. I also have a setupTest.java file with methods such as tearDown and setUp in it.
The problem I am having is that every time I execute a test, four browser sessions are opened rather than the desired single browser. I know that this is because the @Before and @After annotations are executed before each test, rather than before the whole suite. I think that the best solution would be to have a new file with the @Before and @After in, but this is the part that I can't seem to figure out. In each file, script is different, which is where I think the problems come from, but I am not entirely sure.
Does anyone know of a way I can restructure my tests so that they all share the same #Before and #After methods, without causing multiple browser sessions to open? Thank you in advance
The issue isn't really the before and after; it's how you are managing your instance of WebDriver. Generally you need to maintain a single instance of it inside something like a singleton. You can do this through a classic singleton pattern, or you can do it through injection.
I highly recommend that you check out The Cucumber for Java Book. It's not going to solve all of the challenges you will face, but it is a great book for Cucumber when you are working in Java. Chapter 12 is all about using WebDriver in cucumber and talks about using injection to reuse the browser.
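For illustration only, here is a minimal sketch of the classic-singleton approach; the class name DriverHolder and its methods are made up for this example and are not part of Selenium or Cucumber:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverHolder {

    private static WebDriver driver;

    // Lazily create one shared driver for the whole run
    public static WebDriver getDriver() {
        if (driver == null) {
            driver = new ChromeDriver();
        }
        return driver;
    }

    // Call this from a single @After hook (or a shutdown hook) to clean up
    public static void quitDriver() {
        if (driver != null) {
            driver.quit();
            driver = null;
        }
    }
}

Each step-definition class would then call DriverHolder.getDriver() instead of creating its own instance, so only one browser is opened no matter how many step-definition files take part in a scenario.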
I have a Selenium test case that I need to write, but before it executes I need to get some information from the user for the Test to run.
Currently, my code is structured like this:
public class myTest {

    private WebDriver driver;

    @Before
    public void setUp() throws Exception {
        System.setProperty("webdriver.ie.driver",
                "C:\\Users\\ktuck\\Documents\\Selenium\\Selenium Server\\IEDriverServer.exe");
        driver = new InternetExplorerDriver(); // I guess I don't need to fire this up as I'm only collecting information from the user?
    }

    @Test
    public void test() {
        // Code to collect user inputted data to use later in my test
    }

    @After
    public void tearDown() throws Exception {
        driver.quit(); // Do I need this?
    }
}
My initial thoughts were to put the collection code inside of a main function and then call the rest of my test script which would be in a different file, passing the information collected into it. But I'm not quite sure how to do that as I'm quite new to Selenium/Java :p
Can anyone help?
If you are not using any testing framework, you can choose TestNG. TestNG supports data-driven and parameterized tests.
You can pass parameters via testng.xml.
Since you are using Maven, you can configure Maven to pass parameters without using testng.xml.
You can also pass parameters to TestNG via Maven on the command line, like below:
mvn -Dtest=<testName> -D<paramName>=<paramValue> test
If you don't want to use any testing framework, you can pass the parameter as a JVM argument
and retrieve it using System.getProperty("paramName").
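As a rough sketch of the JVM-argument option (the property name app.url and the class name are made up for illustration):

import org.testng.annotations.Test;

public class ParamFromSystemPropertyTest {

    @Test
    public void openConfiguredUrl() {
        // Value passed on the command line, e.g.: mvn -Dapp.url=https://example.com test
        String url = System.getProperty("app.url", "https://example.com");
        // ... use the value in the test, e.g. driver.get(url);
    }
}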
Consider using a test framework like JUnit or TestNG. This would enable you to use methods that are run before and after the actual test (as indicated by the pseudo-code given above).
Using this approach, you can do all the lookup stuff in the @BeforeClass method and quit the WebDriver in the @AfterTest method. To keep the test class clean, I recommend moving the @BeforeClass and @AfterTest methods to an abstract super class which you inherit from.
Abstract Superclass
public abstract class AbstractSeleniumTest {

    private WebDriver webDriver;

    @BeforeClass
    public void setup() {
        // do all the initialization stuff, e.g. system property lookup
    }

    @AfterTest(alwaysRun = true)
    public void tearDown() {
        // do all the clean-up stuff, e.g. webDriver.quit();
    }
}
Test Class
@Test
public class MySeleniumTest extends AbstractSeleniumTest {

    public void testSomething() {
        // do the actual test logic
    }
}
I hope this covers most of your question. For further assistance, please give more information.
I'm in the process of migrating a test framework from JUnit to TestNG. This framework is used to perform large end-to-end integration tests with Selenium that take several minutes to run and consist of several hundred steps across dozens of browser pages.
DISCLAIMER: I understand that this makes unit testing idealists very uneasy, but this sort of testing is required at most large service oriented companies and using unit testing tools to manage these integration tests is currently the most widespread solution. It wasn't my decision. It's what I've been asked to work on and I'm attempting to make the best of it.
At any rate, these tests fail very frequently (surprise) and making them easy to debug is of high importance. For this reason we like to detect test failures before they're reported, append some information about the failure, and then allow JUnit to fail with this extra information. For instance, without this information a failure may look like:
java.lang.<'SomeObscureException'>: <'Some obscure message'> at <'StackTrace'>
But with the added information it will look like:
java.lang.AssertionError:
Reproduction Seed: <'Random number used to generate test case'>
Country: <'Country for which test was set to run'>
Language: <'Localized language used by test'>
Step: <'Test step where the exception occurred'>
Exception Message: <'Message explaining probable cause of failure'>
Associated Exception Type: <'SomeObscureException'>
Associated Exception Message: <'Some obscure message'>
Associated Exception StackTrace: <'StackTrace'>
Exception StackTrace: <'StackTrace where we appended this information'>
It's important to note that we add this information before the test actually fails. Because our reporting tool is based entirely on the exceptions thrown by JUnit, this ensures that the information we need is present in those exceptions. Ideally I'd like to add this information to an HTML or XML document using a reporter class after the test fails but before teardown is performed, and then modify our reporting tool to pick up this extra information and append it to our e-mail reports. However, this has been a hard sell at our sprint planning meetings and I have not been allotted any time to work on it (running endless regressions for the developers is given higher priority than working on the test framework itself; such is the life of the modern SDET). I also believe strongly in balance and refuse to cut into other parts of my life to get this done outside of tracked time.
What we're currently doing is this:
public class SomeTests extends TestBase {

    @Test
    public void someTest() {
        // Test code
    }

    // More tests
}

public abstract class TestBase {

    @Rule
    public MyWatcher watcher = new MyWatcher();

    // More rules and variables

    @Before
    public final void setup() {
        // Read config, generate test data, create Selenium WebDriver, etc.
        // Send references to all test objects to MyWatcher
    }
}

public class MyWatcher extends TestWatcher {

    // Test object references

    @Override
    public void failed(Throwable throwable, Description description) {
        StringBuilder sb = new StringBuilder();
        // Append custom test information to sb.
        String exceptionSummary = sb.toString();
        Assert.fail(exceptionSummary);
    }

    @Override
    public void finished(Description description) {
        // Shut down Selenium WebDriver, kill proxy server, etc.
    }

    // Miscellaneous teardown and logging methods
}
1. JUnit starts.
2. SomeTests inherits from the TestBase class. TestBase instantiates our own instance of a TestWatcher via the @Rule annotation (MyWatcher).
3. Test setup is run in the TestBase class.
4. References to test objects are sent to MyWatcher.
5. JUnit begins the someTest() method.
6. someTest fails at some point.
7. JUnit calls the overridden failed() method in MyWatcher.
8. The failed() method appends custom test information to a new message using the references passed by TestBase.
9. The failed() method calls JUnit's Assert.fail() method with the customized message.
10. JUnit throws a java.lang.AssertionError for this new failure with the customized message. This is the exception that actually gets recorded in the test results.
11. JUnit calls the overridden finished() method.
12. The finished() method performs test teardown.
13. Our reporting tool picks up the summarized errors thrown by JUnit and includes them in the e-mails we receive. This makes life easier than debugging the original exceptions would be without any of the extra information added by MyWatcher after the original failure.
I'd now like to implement a similar mechanism using TestNG. I first tried adding an IInvokedMethodListener in a @Listeners annotation on our TestBase class as a way of replacing the TestWatcher that we were using in JUnit. Unfortunately the methods in this listener were getting called after every @BeforeMethod and @AfterMethod call as well as for the actual tests. This was causing quite a mess when I called Assert.fail from inside the IInvokedMethodListener, so I opted to scrap this approach and insert the code directly into an @AfterMethod call in our TestBase class.
Unfortunately TestNG does not appear to handle the 'failing twice' approach that we were using in JUnit. When I call Assert.fail in the @AfterMethod of a test that has already failed, it gets reported as an additional failure. It seems like we're going to have to come up with another way of doing this until I can get authorization to write a proper test reporter that includes the information we need for debugging.
In the meantime, we still need to dress up the exceptions that get thrown by TestNG so that the debugging information will appear in our e-mail reports. One idea I have for doing this is to wrap every single test in a try/catch block. If the test fails (an exception gets thrown), then we can catch that exception, dress it up in a summary exception with the debugging information added to that exception's message, and call Assert.fail with our new summarized exception. That way TestNG only ever sees that one exception and should only report one failure. This feels like a kludge on top of a kludge though, and I can't help but feel that there's a better way of doing this.
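For concreteness, that wrapping idea would look roughly like the sketch below; runTestSteps() and buildFailureSummary() are placeholders for whatever runs the real test steps and builds the debugging message, not existing methods:

@Test
public void someTest() {
    try {
        runTestSteps(); // the original test body
    } catch (Throwable t) {
        // Dress the original exception up with the debugging information...
        String summary = buildFailureSummary(t);
        // ...and fail exactly once, so TestNG records a single failure
        Assert.fail(summary, t);
    }
}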
Does anybody know of a better method for modifying what gets reported by TestNG? Is there some kind of trick I can use for replacing the original exception with my own using ITestContext or ITestResult? Can I dive in somewhere and remove the original failure from some list, or is it already too late to stop TestNG's internal reporting by the time I get to the @AfterMethod functions?
Do you have any other advice regarding this sort of testing or exception handling in general? I don't have many knowledgeable co-workers to help with this stuff so I'm pretty much just winging it.
Implement IInvokedMethodListener:
import org.apache.commons.lang3.reflect.FieldUtils;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class InvokedMethodListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (method.isTestMethod() && ITestResult.FAILURE == result.getStatus()) {
            Throwable throwable = result.getThrowable();
            String originalMessage = throwable.getMessage();
            String newMessage = originalMessage + "\nReproduction Seed: ...\nCountry: ...";
            try {
                // Overwrite the exception's private detailMessage field via reflection
                FieldUtils.writeField(throwable, "detailMessage", newMessage, true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Register it in your test:
@Listeners(InvokedMethodListener.class)
public class YourTest {

    @Test
    public void test() {
        Assert.fail("some message");
    }
}
or in testng.xml.
If you execute it, you should get:
java.lang.AssertionError: some message
Reproduction Seed: ...
Country: ...
You can use the SoftAssert class in TestNG to implement the above scenario. SoftAssert keeps a map of all the errors raised by its assert methods during a test case and reports them at the end of the test case. You can also extend the Assertion class to implement methods as per your requirements.
More information regarding the SoftAssert class and its implementation can be found here.
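For reference, a small sketch of how SoftAssert is typically used (the class name and messages here are invented for the example):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExampleTest {

    @Test
    public void collectsAllFailures() {
        SoftAssert softly = new SoftAssert();

        // Failed assertions are recorded here but do not stop the test immediately
        softly.assertEquals("actualTitle", "expectedTitle", "Page title mismatch");
        softly.assertTrue(false, "Some other condition failed");

        // assertAll() throws a single AssertionError summarising every recorded failure
        softly.assertAll();
    }
}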
I am writing tests in testNG. Each test method shares a number of common attributes stored at the class level but each test method needs its own independent driver, so the driver cannot be stored as a class variable. This allows each test method to be invoked multiple times with different drivers while running concurrently.
Basically my pseudo-code of what I am trying to do would look something like the following:
@BeforeMethod
public void setup(Argument someArg) {
    Driver driver = new Driver(someArg);
}

@Test
public void test() {
    driver.dostuff();
}

@AfterMethod(alwaysRun = true)
public void teardown() {
    driver.quit();
}
My thought is that I might store the drivers in a concurrent map collection using the classname and test method as a key for storing and retrieving the driver, but I would like to find a simpler, less verbose way of doing this.
I apologize if there is an answer that already addresses this. I searched high and low and couldn't find the solution I was looking for or couldn't make the connection to how a specific idea would apply to my problem. My case is specific to Selenium Webdriver, but I imagine that there are other cases that may want to do something like this.
How about using a ThreadLocal<Driver>?
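For example, a minimal sketch of that approach, using Selenium's WebDriver/ChromeDriver in place of the poster's Driver class purely for illustration:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ThreadLocalDriverTest {

    // Each TestNG worker thread sees its own driver instance
    private static final ThreadLocal<WebDriver> driver = new ThreadLocal<>();

    @BeforeMethod
    public void setup() {
        driver.set(new ChromeDriver());
    }

    @Test
    public void test() {
        driver.get().get("https://example.com");
    }

    @AfterMethod(alwaysRun = true)
    public void teardown() {
        driver.get().quit();
        driver.remove();
    }
}

Because the mapping from thread to driver lives inside the ThreadLocal, there is no need to key a concurrent map by class and method name.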