Apologies for any formatting issues or anything else against etiquette on this site; this is my first post after lurking for the last couple of months, and everything that I am working with is pretty new to me.
I have recently started to write some Selenium tests in Java/Cucumber/JUnit and have reached an issue that I can't work my way around. I know what the problem is but can't figure out how to actually change my tests to remedy it. Here is some of the background info:
feature file example:
Feature: Form Submission functionality

  @Run
  Scenario: Submitting the demo form with correct details is successful
    Given I am on the demo page
    When I submit the demo form with valid information
    Then the thank you page is displayed
StepDefs file example (I have four files like this, testing different parts of the site):
package testFiles.stepDefinitions;

import testFiles.testClasses.formSubmissionFunctionalityTest;
import cucumber.api.java.en.*;
import cucumber.api.java.After;
import cucumber.api.java.Before;

public class formSubmissionFunctionalityStepDefs {

    private formSubmissionFunctionalityTest script = new formSubmissionFunctionalityTest();

    @Before
    public void setUpWebDriver() throws Exception {
        script.setUp();
    }

    @Given("^I am on the demo page$")
    public void i_am_on_the_demo_page() throws Throwable {
        script.goToDemoPage();
    }

    @When("^I submit the demo form with valid information$")
    public void i_submit_the_demo_form_with_valid_information() throws Throwable {
        script.fillSubmitDemoForm();
    }

    @Then("^the thank you page is displayed$")
    public void the_thank_you_page_is_displayed() throws Throwable {
        script.checkThankYouPageTitle();
    }

    @After
    public void tidyUp() {
        script.tearDown();
    }
}
I then also have a formSubmissionFunctionalityTest.java file which contains all of the actual code for methods such as fillSubmitDemoForm. I also have a setupTest.java file with methods such as tearDown and setUp in it.
The problem I am having is that every time I execute a test, four browser sessions are opened rather than the desired single browser. I know that this is because the @Before and @After annotations are executed before and after each test, rather than once for the whole suite. I think that the best solution would be to have a new file with the @Before and @After in, but this is the part that I can't seem to figure out. In each file, script is different, which is where I think the problems come from, but I am not entirely sure.
Does anyone know of a way I can restructure my tests so that they all share the same @Before and @After methods, without causing multiple browser sessions to open? Thank you in advance.
The issue isn't really the @Before and @After hooks; it's how you are managing your instance of WebDriver. Generally you need to maintain a single instance of it inside something like a singleton. You can do this through a classic singleton pattern, or you can do it through dependency injection.
I highly recommend that you check out The Cucumber for Java Book. It's not going to solve all of the challenges you will face, but it is a great book for Cucumber when you are working in Java. Chapter 12 is all about using WebDriver in Cucumber and talks about using injection to reuse the browser.
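For illustration only, here is a minimal sketch of the classic singleton approach: a small DriverManager class (my own name, not something from the question) that every step-definition class asks for the browser, so only one WebDriver instance is ever created for the whole run. It assumes Firefox, but any driver would do.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical helper: one shared WebDriver for the entire run.
public final class DriverManager {

    private static WebDriver driver;

    private DriverManager() {
        // no instances; use the static getter
    }

    public static synchronized WebDriver getDriver() {
        if (driver == null) {
            driver = new FirefoxDriver();
            // Close the browser when the JVM (and therefore the suite) finishes.
            Runtime.getRuntime().addShutdownHook(new Thread(() -> driver.quit()));
        }
        return driver;
    }
}

Each of the four step-definition classes (and the setUp/tearDown code in setupTest.java) would then call DriverManager.getDriver() instead of creating its own browser, so they all share a single session.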
Related
I need to take a screenshot inside a step, at specific places. That means not in @BeforeStep nor in @AfterStep. I need to call:
// public void someStep(Scenario scenario) // This does not work
public void someStep() {
    page.openUrl();
    scenario.attach(screenshot(), "image/png", fileName1);
    page.doSomething();
    scenario.attach(screenshot(), "image/png", fileName2);
    page.doSomethingElse();
}
But I am not able to get the current scenario related to the step execution. Is it possible or not? I tried calling it like someStep(Scenario scenario), but it throws an error.
If you want access to the scenario object, your best bet is an AfterStep hook. However, this is not supported in all flavours of Cucumber, so check the docs or API documentation for your language.
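As a hedged sketch of a workaround (not something the answer above spells out): regular @Before hooks can receive the current Scenario as a parameter, so you can capture it in a field of the step-definition class and attach screenshots from anywhere inside a step. This assumes the newer io.cucumber API (where scenario.attach() lives) and a ChromeDriver; the page interactions are placeholders for the ones in the question.

import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import io.cucumber.java.en.When;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SomeStepDefs {

    private final WebDriver driver = new ChromeDriver();
    private Scenario scenario;

    // Hooks (unlike ordinary step methods) may declare a Scenario parameter,
    // so capture it here and reuse it inside the steps themselves.
    @Before
    public void rememberScenario(Scenario scenario) {
        this.scenario = scenario;
    }

    @When("I do something on the page")
    public void someStep() {
        driver.get("https://example.com");                      // stand-in for page.openUrl()
        scenario.attach(screenshot(), "image/png", "before-action");
        // ... interact with the page here ...
        scenario.attach(screenshot(), "image/png", "after-action");
    }

    private byte[] screenshot() {
        return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
    }
}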
I'm trying to construct a suite of Cucumber tests using Selenium. The first step in each test logs in to a web application.
I'm using the Selenium ChromeDriver, and I can see that Cucumber is using dependency injection to initialise the driver. After each test completes I would like to start fresh with a new web browser, but Cucumber insists on using the same driver used in the previous test. I've tried a number of things to start from a clean point. I'm not sure what the recommended way of doing this is, I presume you have to use the 'Hooks' class, as that contains methods which run before and after each test scenario. Here's what I currently have:
public class Hooks {

    private final WebDriver driver;

    @Inject
    public Hooks(final WebDriver driver) {
        this.driver = driver;
    }

    @Before
    public void openWebSite() {
    }

    @After
    public void closeSession() {
        driver.close();
    }
}
As you can see, I put a driver.close() statement into the @After method, but I don't see a method to reopen, or recreate a new session, and I'm getting the following exception when the next test tries to log in:
Message: org.openqa.selenium.NoSuchSessionException: no such session
Presumably because it didn't like the fact that I just called close().
But really, I want to tell Cucumber that I'd like a completely fresh driver to be used for each test scenario.
I've searched around for Cucumber examples, but all the example code I've found just involves one single test. I didn't turn up anything which was using a suite of tests, aiming to do something similar to what I've described above.
What's the recommended pattern for this?
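Not an answer from the thread, but a hedged sketch of one common pattern: instead of injecting a single long-lived driver, let the hooks own the browser lifecycle, creating a new ChromeDriver in @Before and calling quit() (rather than close()) in @After, so every scenario starts with a fresh session. The static field is a deliberately simple, hypothetical way to make the current driver reachable from step-definition classes.

import cucumber.api.java.After;
import cucumber.api.java.Before;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Hooks {

    // Hypothetical holder so step definitions can reach the current driver.
    public static WebDriver driver;

    @Before
    public void openWebSite() {
        driver = new ChromeDriver();    // brand-new browser for every scenario
    }

    @After
    public void closeSession() {
        if (driver != null) {
            driver.quit();              // quit() ends the session cleanly; close() alone leaves it unusable
            driver = null;
        }
    }
}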
I'm in the process of migrating a test framework from JUnit to TestNG. This framework is used to perform large end-to-end integration tests with Selenium that take several minutes to run and consist of several hundred steps across dozens of browser pages.
DISCLAIMER: I understand that this makes unit testing idealists very uneasy, but this sort of testing is required at most large service oriented companies and using unit testing tools to manage these integration tests is currently the most widespread solution. It wasn't my decision. It's what I've been asked to work on and I'm attempting to make the best of it.
At any rate, these tests fail very frequently (surprise) and making them easy to debug is of high importance. For this reason we like to detect test failures before they're reported, append some information about the failure, and then allow JUnit to fail with this extra information. For instance, without this information a failure may look like:
java.lang.<'SomeObscureException'>: <'Some obscure message'> at <'StackTrace'>
But with the added information it will look like:
java.lang.AssertionError:
Reproduction Seed: <'Random number used to generate test case'>
Country: <'Country for which test was set to run'>
Language: <'Localized language used by test'>
Step: <'Test step where the exception occurred'>
Exception Message: <'Message explaining probable cause of failure'>
Associated Exception Type: <'SomeObscureException'>
Associated Exception Message: <'Some obscure message'>
Associated Exception StackTrace: <'StackTrace'>
Exception StackTrace: <'StackTrace where we appended this information'>
It's important to note that we add this information before the test actually fails. Because our reporting tool is based entirely on the exceptions thrown by JUnit this ensures that the information we need is present in those exceptions. Ideally I'd like to add this information to an HTML or XML document using a reporter class after the test fails but before teardown is performed and then modify our reporting tool to pick up this extra information and append it to our e-mail reports. However, this has been a hard sell at our sprint planning meetings and I have not been allotted any time to work on it (running endless regressions for the developers is given higher priority than working on the test framework itself. Such is the life of the modern SDET). I also believe strongly in balance and refuse to cut into other parts of my life to get this done outside of tracked time.
What we're currently doing is this:
public class SomeTests extends TestBase {

    @Test
    public void someTest() {
        // Test code
    }

    // More tests
}

public abstract class TestBase {

    @Rule
    public MyWatcher watcher = new MyWatcher();

    // More rules and variables

    @Before
    public final void setup() {
        // Read config, generate test data, create Selenium WebDriver, etc.
        // Send references to all test objects to MyWatcher
    }
}

public class MyWatcher extends TestWatcher {

    // Test object references

    @Override
    public void failed(Throwable throwable, Description description) {
        StringBuilder sb = new StringBuilder();
        // Append custom test information to sb.
        String exceptionSummary = sb.toString();
        Assert.fail(exceptionSummary);
    }

    @Override
    public void finished(Description description) {
        // Shut down Selenium WebDriver, kill proxy server, etc.
    }

    // Miscellaneous teardown and logging methods
}
1. JUnit starts.
2. SomeTests inherits from the TestBase class. TestBase instantiates our own TestWatcher (MyWatcher) via the @Rule annotation.
3. Test setup is run in the TestBase class.
4. References to test objects are sent to MyWatcher.
5. JUnit begins the someTest() method.
6. someTest fails at some point.
7. JUnit calls the overridden failed() method in MyWatcher.
8. The failed() method appends custom test information to a new message using the references passed by TestBase.
9. The failed() method calls JUnit's Assert.fail() method with the customized message.
10. JUnit throws a java.lang.AssertionError for this new failure with the customized message. This is the exception that actually gets recorded in the test results.
11. JUnit calls the overridden finished() method.
12. The finished() method performs test teardown.
Our reporting tool picks up the summarized errors thrown by JUnit and includes them in the e-mails we receive. This makes life much easier than debugging the original exceptions would be without any of the extra information added by MyWatcher after the original failure.
I'd now like to implement a similar mechanism using TestNG. I first tried adding an IInvokedMethodListener via the @Listeners annotation on our TestBase class as a way of replacing the TestWatcher we were using in JUnit. Unfortunately, the methods in this listener were getting called for every @BeforeMethod and @AfterMethod call as well as for the actual tests. This was causing quite a mess when I called Assert.fail from inside the IInvokedMethodListener, so I opted to scrap this approach and insert the code directly into an @AfterMethod call in our TestBase class.
Unfortunately, TestNG does not appear to handle the 'failing twice' approach that we were using in JUnit. When I call Assert.fail in the @AfterMethod of a test that has already failed, it gets reported as an additional failure. It seems like we're going to have to come up with another way of doing this until I can get authorization to write a proper test reporter that includes the information we need for debugging.
In the meantime, we still need to dress up the exceptions that get thrown by TestNG so that the debugging information will appear in our e-mail reports. One idea I have for doing this is to wrap every single test in a try/catch block. If the test fails (an exception gets thrown), then we can catch that exception, dress it up in a summary exception with the debugging information added to that exception's message, and call Assert.fail with our new summarized exception. That way TestNG only ever sees that one exception and should only report one failure. This feels like a kludge on top of a kludge though, and I can't help but feel that there's a better way of doing this.
Does anybody know of a better method for modifying what gets reported by TestNG? Is there some kind of trick I can use for replacing the original exception with my own using ITestContext or ITestResult? Can I dive in somewhere and remove the original failure from some list, or is it already too late to stop TestNG's internal reporting by the time I get to the @AfterMethod functions?
Do you have any other advice regarding this sort of testing or exception handling in general? I don't have many knowledgeable co-workers to help with this stuff so I'm pretty much just winging it.
Implement IInvokedMethodListener:
import org.apache.commons.lang3.reflect.FieldUtils;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class InvokedMethodListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (method.isTestMethod() && ITestResult.FAILURE == result.getStatus()) {
            Throwable throwable = result.getThrowable();
            String originalMessage = throwable.getMessage();
            String newMessage = originalMessage + "\nReproduction Seed: ...\nCountry: ...";
            try {
                // Overwrite Throwable.detailMessage via reflection so the reported
                // exception carries the extra debugging information.
                FieldUtils.writeField(throwable, "detailMessage", newMessage, true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Register it in your test:
@Listeners(InvokedMethodListener.class)
public class YourTest {

    @Test
    public void test() {
        Assert.fail("some message");
    }
}
or in testng.xml.
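For reference, a minimal testng.xml sketch for the XML route might look like the following; the suite, package, and class names are placeholders, not taken from the answer.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite">
  <listeners>
    <listener class-name="listeners.InvokedMethodListener"/>
  </listeners>
  <test name="Tests">
    <classes>
      <class name="tests.YourTest"/>
    </classes>
  </test>
</suite>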
If you execute it, you should get:
java.lang.AssertionError: some message
Reproduction Seed: ...
Country: ...
You can use the SoftAssert class in TestNG for implementing the above scenario. The SoftAssert class keeps a map which stores all the error messages from asserts in the test case and prints them at the end of the test case. You can also extend the Assertion class to implement methods as per your requirement.
More information regarding SoftAssert class and its implementation can be found here
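A minimal sketch of how SoftAssert collects failures and reports them together at the end of a test (the assertions themselves are illustrative only):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExampleTest {

    @Test
    public void collectsAllFailures() {
        SoftAssert softAssert = new SoftAssert();

        // Neither failing assertion stops the test immediately; both are recorded.
        softAssert.assertEquals("actualCountry", "expectedCountry", "Country mismatch");
        softAssert.assertTrue(false, "Step 3 precondition not met");

        // assertAll() throws a single AssertionError summarising every recorded failure.
        softAssert.assertAll();
    }
}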
I'm working on an automation framework built on Selenium 2 and based on the Page Object design pattern. I am at the point where I want to start thinking about writing test suites for my code. Due to various reasons, some of them having to do with efficiency and others having to do with my lack of ownership and control over the test environment where the web application this framework is supposed to test is installed, I wanted to avoid having to start a browser and use the SUT to verify my framework code. So, I thought that mock objects would be a decent alternative.
The problem is that I cannot really wrap my head around the idea of mock objects and I really couldn't find a decent concrete example on the internet that illustrated how this would actually work. I did find one link that appeared to be promising, but the examples were really just way too abstract to actually be useful to me.
http://www.methodsandtools.com/archive/testingcodetdd.php
So, I thought I would post my simple LoginPage page object and ask for a simple example for a unit test or two for this page object using PowerMock. Here is the source code for my LoginPage object:
public final class LoginPage extends Page<LoginPage> {

    @FindBy(how = How.ID, using = "username")
    private WebElement usernameBox;

    @FindBy(how = How.ID, using = "password")
    private WebElement passwordBox;

    public LoginPage(final WebDriver driver) {
        this(driver, driver.getCurrentUrl(), DEFAULT_TIMEOUT_IN_SECONDS);
    }

    public LoginPage(final WebDriver driver, final String url) {
        super(driver, url, DEFAULT_TIMEOUT_IN_SECONDS);
    }

    public LoginPage(final WebDriver driver, final String url, final int timeoutInSeconds) {
        super(driver, url, timeoutInSeconds);
    }

    public final void enterUsername(final String username) {
        usernameBox.clear();
        usernameBox.sendKeys(username);
    }

    public final void enterPassword(final String password) {
        passwordBox.clear();
        passwordBox.sendKeys(password);
    }

    public final void clickLoginButton() {
        loginButton.click();
    }

    public final HomePage loginWithGoodCredentials(final User user) {
        return login(user, HomePage.class);
    }

    public final LoginPage loginWithBadCredentials(final User user) {
        return login(user, LoginPage.class);
    }

    private <T extends Page<T>> T login(final User user, final Class<T> expectedPage) {
        user.getUsername(), user.getPassword(), user.getType(), expectedPage);
        enterUsername(user.getUsername());
        enterPassword(user.getPassword());
        loginButton.click();
        return Page.constructPage(getDriver(), getTimeoutInSeconds(), expectedPage);
    }
}
I understand that mocking WebDriver and WebElement is easy because they are interfaces, according to the link I posted above. But the document I referenced doesn't make it very clear to a total newbie to mock objects and mocking frameworks how exactly I use that to write a unit test for my page object. Let's take the public login methods, for example. What does a unit test for those look like exactly? I would merely need to verify that logging in returns a page object of the expected type. Or, for the methods which enter text into the username and password boxes, I would perhaps want a test that verifies that any existing text is erased before the username and password are entered. Since I wouldn't have a real browser with the real application login page loaded, I'm not exactly sure how PowerMock would instantiate and initialize all my web elements in order to test the page object's publicly exposed services.
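Not an answer from the thread, but a hedged sketch of what one such test could look like. It uses plain Mockito rather than PowerMock (mocking the WebDriver and WebElement interfaces needs nothing more), it injects the mocked element into the private @FindBy field with reflection instead of PageFactory, and it assumes the Page base-class constructor does not itself talk to the browser; the field name must match the one in LoginPage.

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;

import java.lang.reflect.Field;
import org.junit.Test;
import org.mockito.InOrder;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class LoginPageTest {

    @Test
    public void enterUsernameClearsTheFieldBeforeTyping() throws Exception {
        WebDriver driver = mock(WebDriver.class);
        WebElement usernameBox = mock(WebElement.class);

        LoginPage page = new LoginPage(driver, "http://localhost/login");
        setField(page, "usernameBox", usernameBox);

        page.enterUsername("alice");

        // Verify any existing text is erased before the username is typed.
        InOrder order = inOrder(usernameBox);
        order.verify(usernameBox).clear();
        order.verify(usernameBox).sendKeys("alice");
    }

    // Small reflection helper to place the mock into the private @FindBy field.
    private static void setField(Object target, String name, Object value) throws Exception {
        Field field = target.getClass().getDeclaredField(name);
        field.setAccessible(true);
        field.set(target, value);
    }
}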
Maybe http://popperfw.org is interesting for you. It is a much more flexible implementation of the Page Object pattern than the default implementation from Selenium, and it comes with direct support for "unit" testing your created PageObjects: it tests whether your PageObjects match your tested pages.
http://popperfw.org/unitTest.html
Selenium tests are not the appropriate place for mock objects. Mock objects are used when unit-testing classes which have dependencies: you mock the dependencies so you can focus on testing the single class.
Selenium is used for browser-based testing of a web page or application. The very fact that this is a web page means there is some sort of server running and a browser (automated, in the case of Selenium). This is, by definition, an integration test, since you're testing the final "integrated" product. If there's a problem, it might be within ClassA or perhaps in the web server configuration; integration testing is the place to find out where that stuff breaks. (This is often put in place to reduce the load on a QA team for regression tests: unit tests cover the functionality of classes, and integration tests cover the functionality of the system as a whole, or perhaps in smaller integrated parts.)
With all that said, it sounds like you're mixing the two up. From your posted code, I'd say you're doing integration testing, and you should just forget about mock objects for now.
The Selenium tests I'm going to be doing are basically based on three main steps, with different parameters. These parameters are passed in from a text file to the test. This allows easy completion of a test such as "create three of X" without writing the create code three times in one test.
Imagine I have a test involving creating two of "X" and one of "Y". CreateX and CreateY are already defined in separate tests. Is there a nice way of calling the code contained in createX and createY from, say, Test1?
I tried creating a class with the creates as separate methods, but got errors on all the selenium.-anything-, i.e. every damn line. It goes away if I extend SeleneseTestCase, but it seems that my other test classes won't import from a class that extends SeleneseTestCase. I'm probably doing something idiotic but I might as well ask!
EDIT:
Well, for example, it's going to be the same setUp method for every test, so I'd like to only write that once instead of a few hundred times...
public void ready() throws Exception {
    selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
    selenium.start();
    selenium.setSpeed("1000");
    selenium.setTimeout("999999");
    selenium.windowMaximize();
}
That's going to be used EVERYWHERE.
It's in a class called reuseable. I'd like to just call reuseable.ready(); from the tests' setUp... but it won't let me.
public class ExampleTest {

    @Before
    public void setup() {
        System.out.println("setup");
    }

    public void someSharedFunction() {
        System.out.println("shared function");
    }

    @Test
    public void test1() {
        System.out.println("test1");
        someSharedFunction();
    }

    @Test
    public void test2() {
        System.out.println("test2");
        someSharedFunction();
    }
}
The contents of the method annotated with @Before will be executed before every test. someSharedFunction() is an example of a 'reusable' function. The code above will output the following:
setup
test1
shared function
setup
test2
shared function
I would recommend using JUnit and trying out some of the tutorials on junit.org. The problem you have described can be fixed by using the @Before annotation on a method that performs this setup in a superclass of your tests.
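Applied to the ready() method from the question, a minimal sketch might look like the following; the class names are placeholders, and it assumes JUnit 4 and Selenium RC's DefaultSelenium exactly as in the question.

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Shared base class: every test inherits the same browser setup and teardown.
public abstract class BaseSeleniumTest {

    protected Selenium selenium;

    @Before
    public void ready() throws Exception {
        selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
        selenium.start();
        selenium.setSpeed("1000");
        selenium.setTimeout("999999");
        selenium.windowMaximize();
    }

    @After
    public void stop() {
        selenium.stop();
    }
}

// Individual tests then only contain their own steps.
class CreateXTest extends BaseSeleniumTest {

    @Test
    public void createsTwoOfXAndOneOfY() {
        selenium.open("/");
        // ... call the shared createX/createY code with parameters from the text file ...
    }
}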