I have a scenario where I need to run my Selenium tests in parallel using the same data provider. From what I have read this is possible, but I could not get it to work. I have a hub and a node running on one machine, and another node running on a second machine.
My DataProvider
// Data provider for Storage Rule Suite
@DataProvider(name = "StorageRuleDataProvider", parallel = true)
public static Object[][] getStorageData(Method m) {
    return TestUtil.getData(m.getName(), TestBase.storageSuite);
}
My Test
@Test(groups = { "CreateNewStorageRule" }, dependsOnGroups = { "StoragePage" }, dataProviderClass = TestDataProvider.class, dataProvider = "StorageRuleDataProvider", threadPoolSize = 20)
public void createNewStorageRuleTest(Hashtable<String, String> data) {}
XML
<suite name="Storage Rule Suite" parallel="tests" data-provider-thread-count="20" >
When I run the test from the XML file, two sets of browsers open on each node, but when a test attempts to log in, sometimes it enters the credentials twice in one browser and nothing in the other, and sometimes nothing gets entered in one browser at all.
What you describe is a classic example of a non-thread-safe Selenium test automation framework. In most cases you solve this by having one driver instance per test class and running all tests from that class in a single thread.
However, if you want to run the contents of a single test class in multiple parallel threads, you need to redesign the is-a and has-a relationships in your framework. Here is a detailed example of how this can be done:
http://automatictester.co.uk/2015/04/11/parallel-execution-on-method-level-in-selenium-testng-framework
That said, this may add extra work and additional complexity to your test automation. I'd think twice about why you want to run Selenium test methods using a data provider in parallel, and ask whether you really need to do that.
In my experience, if you start combining data providers with Selenium, you may have a problem with the overall test approach. Perhaps you are trying to automate too much at the UI level, instead of pushing the tests down the stack, e.g. to the API level.
First, you have to use parallel="methods" to run your @Test methods in parallel. Second: I had a similar problem, where several test methods were executed in the same browser, and I solved it by making my WebDriver thread-safe.
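For illustration, here is a minimal sketch of the ThreadLocal approach to a thread-safe driver. The class name and the choice of ChromeDriver are assumptions, not part of the original framework:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverFactory {

    // One driver instance per TestNG thread.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    public static WebDriver getDriver() {
        if (DRIVER.get() == null) {
            DRIVER.set(new ChromeDriver()); // assumed browser; swap in RemoteWebDriver for a grid
        }
        return DRIVER.get();
    }

    public static void quitDriver() {
        if (DRIVER.get() != null) {
            DRIVER.get().quit();
            DRIVER.remove();
        }
    }
}
Each test method then calls DriverFactory.getDriver() instead of sharing a single driver field, so parallel data-provider threads no longer type into each other's browsers.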
I'm trying to construct a suite of Cucumber tests using Selenium. The first step in each test logs in to a web application.
I'm using the Selenium ChromeDriver, and I can see that Cucumber is using dependency injection to initialise the driver. After each test completes I would like to start fresh with a new web browser, but Cucumber insists on using the same driver used in the previous test. I've tried a number of things to start from a clean point, but I'm not sure what the recommended way of doing this is. I presume you have to use the 'Hooks' class, as that contains methods which run before and after each test scenario. Here's what I currently have:
public class Hooks {

    private final WebDriver driver;

    @Inject
    public Hooks(final WebDriver driver) {
        this.driver = driver;
    }

    @Before
    public void openWebSite() {
    }

    @After
    public void closeSession() {
        driver.close();
    }
}
As you can see, I put a driver.close() statement into the @After method, but I don't see a method to reopen or recreate a new session, and I'm getting the following exception when the next test tries to log in:
Message: org.openqa.selenium.NoSuchSessionException: no such session
Presumably because it didn't like the fact that I just called close().
But really, I want to tell Cucumber that I'd like a completely fresh driver to be used for each test scenario.
I've searched around for Cucumber examples, but all the example code I've found just involves one single test. I didn't turn up anything which was using a suite of tests, aiming to do something similar to what I've described above.
What's the recommended pattern for this?
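For illustration, a minimal sketch of the kind of per-scenario lifecycle described above: build the driver in the @Before hook and quit() it in @After instead of injecting one shared instance. The use of ChromeDriver and the plain-field wiring are assumptions, not the original dependency-injection setup (the hook annotations live in cucumber.api.java in older Cucumber versions and io.cucumber.java in newer ones):
import cucumber.api.java.After;
import cucumber.api.java.Before;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Hooks {

    private WebDriver driver;

    @Before
    public void openWebSite() {
        driver = new ChromeDriver(); // fresh browser session for every scenario
    }

    public WebDriver getDriver() {
        return driver;
    }

    @After
    public void closeSession() {
        if (driver != null) {
            driver.quit(); // quit() ends the whole session cleanly, unlike close()
            driver = null;
        }
    }
}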
I ran into some trouble testing a Spring app. The current approach in my team is to write scenarios in Gherkin and have Serenity provide its pretty reports.
A new component in the app will need a lot of test cases. The requirements will be provided in a few 'parsable' Excel files, so I thought it would be neat to use them directly, row by row, in a JUnit parameterized test. Another option would be to write a bloated Gherkin feature and tediously compose each example manually.
So I thought of something like this:
@RunWith(Parameterized.class)
public class Tests {

    @Parameterized.Parameters(name = "...") // name with the params
    public static Collection params() {
        // parse excel here or use some other class to do it
    }

    @Test
    public void test() {
        /* do the actual test - it involves sending and receiving some JSON objects */
    }
}
This works smoothly but I ran into trouble trying to use
@RunWith(SerenityRunner.class)
The problem is that JUnit does not support multiple runners. A solution I found is to make a nested class and annotate each with a different runner, but I don't know how to make it work (which runner should be on the outside, where I actually run the tests, and so on).
Any thoughts?
Actually, Serenity provides another runner - SerenityParameterizedRunner - which seems to have the same features as JUnit's Parameterized.
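For illustration, a minimal sketch of what that might look like. The imports are assumptions that depend on the Serenity version (check your serenity-junit dependency), and the Excel parsing is left as a stub:
import java.util.Arrays;
import java.util.Collection;

import net.serenitybdd.junit.runners.SerenityParameterizedRunner;
import net.thucydides.junit.annotations.TestData;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SerenityParameterizedRunner.class)
public class ExcelDrivenTest {

    // Plays the role of @Parameterized.Parameters: one Object[] per Excel row.
    @TestData
    public static Collection<Object[]> testData() {
        // parse the Excel files here instead of hard-coding rows
        return Arrays.asList(new Object[][] {
                { "row1-input", "row1-expected" },
                { "row2-input", "row2-expected" }
        });
    }

    private final String input;
    private final String expected;

    public ExcelDrivenTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void test() {
        // send and verify the JSON objects for this row
    }
}
This keeps the row-by-row style of Parameterized while letting Serenity produce its reports.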
What - Detailed Steps
My test calls a 3rd party API and sends a request for a new transaction (let's say I need to do this for 5 tests, which were generated by @Factory). These tests end here with the status of 'Pending'.
The 3rd party API takes 5 minutes to process the data. I need to make a second call to the API after 5 minutes (for all my pending tests) to get the transaction ID for my request and then pass/fail the test.
I want to spin up another @Factory here to re-generate all the pending tests. These pending tests call the API again (with different inputs) to get the transaction ID and pass/fail based on this info.
How
I am trying to use @Factory to generate a bunch of tests dynamically and run them. After these tests are run, I want to use @Factory again to generate a second batch of new tests and run them. The problem is, I did not have any success when trying to call @Factory the second time.
I am using Jenkins and Maven in my setup for generating builds and that is when I would want the tests to run.
Questions
Is step 3 possible?
Is there a better way to do this?
Thanks everyone!
Reading the extra comment / improved question, this does indeed sound like an integration test.
There are some neat integration-test libraries like JBehave, Serenity, Cucumber, etc. which would probably be better suited for setting this up.
With TestNG, you could create 3 tests, where each test depends on the previous one. See the code sample below, from a TestNG dependency test:
package com.mkyong.testng.examples.dependency;

import org.testng.annotations.Test;

public class App {

    @Test
    public void method1() {
        System.out.println("This is method 1");
    }

    @Test(dependsOnMethods = { "method1" })
    public void method2() {
        System.out.println("This is method 2");
    }
}
Here the simplest dependency is shown. See the sample code for more complex cases, like groups, and for setting up two test classes, each with their own @Factory.
Solved! Responses to this question led me to the answer - thanks @Verhagen.
I added 2 tests in my testng.xml and have 2 factories set up in my code.
When a build is triggered,
@Factory 1 creates tests -->
@Factory 2 creates more tests -->
tests by @Factory 1 are executed -->
tests by @Factory 2 are executed
This solves my requirement for running a batch of tests (the first batch) and then running a second batch of tests based on the outcome of the first batch.
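For reference, a minimal sketch of such a testng.xml; the suite, test and class names are placeholders, not taken from the original project. TestNG runs the <test> blocks sequentially by default, so the second factory's tests execute after the first batch has finished:
<suite name="Transaction Suite">
    <test name="Create Pending Transactions">
        <classes>
            <class name="com.example.factories.PendingTransactionFactory" />
        </classes>
    </test>
    <test name="Verify Pending Transactions">
        <classes>
            <class name="com.example.factories.VerifyTransactionFactory" />
        </classes>
    </test>
</suite>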
Some tests make no sense to run when I am not on the network and have no access to shared resources; running them offline just produces failed tests and exceptions.
Is it possible for a given test to be set up in such a way that it runs only when some condition is met, for example, when there is network connectivity?
This sounds like a job for assumptions.
import static org.junit.Assume.assumeTrue;

@Test
public void someTestThatNeedsNetworkConnectivity() {
    // the test is skipped (not failed) when the assumption does not hold
    assumeTrue(thereIsNetworkConnectivity());
    // ...
}
The current project I'm working on requires me to write a tool which runs functional tests on a web application, and outputs method coverage data, recording which test case traversed which method.
Details:
The web application under test will be a Java EE application running in a servlet container (e.g. Tomcat). The functional tests will be written in Selenium using JUnit. Some methods will be annotated so that they are instrumented prior to deployment into the test environment. Once the Selenium tests are executed, the execution of the annotated methods will be recorded.
Problem: The big obstacle in this project is finding a way to relate the execution of a test case with the traversal of a method, especially since the tests and the application run on different JVMs: there is no way to transmit the name of the test case down to the application, and no way to use thread information to relate a test with code execution.
Proposed solution: My solution would be to use the time of execution: I extend the JUnit framework to record the time each test case was executed, and I instrument the application so that it saves the time each method was traversed. I then try to use correlation to link the test case with method coverage.
Expected problems: This solution assumes that test cases are executed sequentially, and that a test case ends before the next one starts. Is this assumption reasonable with JUnit?
Question: Simply, can I have your input on the proposed solution, and perhaps suggestions on how to improve and make it more robust and functional on most Java EE applications? Or leads to already implemented solutions?
Thank you
Edit: To add more requirements, the tool should be able to work on any Java EE application and require the least amount of configuration or change in the application. While I know it isn't a realistic requirement, the tool should at least not require any huge modification of the application itself, like adding classes or lines of code.
Have you looked at existing coverage tools (Cobertura, Clover, Emma, ...)? I'm not sure whether any of them can link the coverage data to test cases, but at least with Cobertura, which is open source, you might be able to do the following:
instrument the classes with cobertura
deploy the instrumented web app
start a test suite
after each test, invoke a URL on the web app which saves the coverage data to some file named after the test which has just been run, and resets the coverage data (see the servlet sketch below)
after the test suite, generate a cobertura report for every saved file. Each report will tell which code has been run by the test
If you need a merged report, I guess it shouldn't be too hard to generate it from the set of saved files, using the cobertura API.
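As a rough illustration of the "invoke a URL" step, something like the following servlet could be deployed alongside the instrumented classes. The servlet name, mapping and request parameter are assumptions; ProjectData.saveGlobalProjectData() should write Cobertura's in-memory counters to the cobertura.ser file (verify against your Cobertura version), and renaming that file per test and resetting the counters are left out here:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import net.sourceforge.cobertura.coveragedata.ProjectData;

public class CoverageFlushServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Flush the coverage counters collected so far to cobertura.ser.
        ProjectData.saveGlobalProjectData();
        resp.getWriter().write("coverage flushed for test: " + req.getParameter("testName"));
    }
}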
Your proposed solution seems like a reasonable one, except for the idea of relating the test and the request by timing. I've tried to do this sort of thing before, and it works - most of the time. Unless you write your JUnit code very carefully, you'll have lots of issues because of time differences between the two machines, or, if you've only got one machine, just from matching one timestamp against another.
A better solution would be to implement a Tomcat Valve which you can insert into the lifecycle in the server.xml for your webapp. Valves have the advantage that you define them in the server.xml, so you're not touching the webapp at all.
You will need to implement invoke(). The best place to start is probably with AccessLogValve. This is the implementation in AccessLogValve:
/**
 * Log a message summarizing the specified request and response, according
 * to the format specified by the <code>pattern</code> property.
 *
 * @param request Request being processed
 * @param response Response being processed
 *
 * @exception IOException if an input/output error has occurred
 * @exception ServletException if a servlet error has occurred
 */
public void invoke(Request request, Response response) throws IOException,
        ServletException {

    if (started && getEnabled()) {
        // Pass this request on to the next valve in our pipeline
        long t1 = System.currentTimeMillis();
        getNext().invoke(request, response);
        long t2 = System.currentTimeMillis();
        long time = t2 - t1;

        if (logElements == null || condition != null
                && null != request.getRequest().getAttribute(condition)) {
            return;
        }

        Date date = getDate();
        StringBuffer result = new StringBuffer(128);

        for (int i = 0; i < logElements.length; i++) {
            logElements[i].addElement(result, date, request, response, time);
        }

        log(result.toString());
    } else
        getNext().invoke(request, response);
}
All this does is log the fact that you've accessed it.
You would implement a new Valve. For your requests you pass a unique id as a parameter on the URL, which is used to identify the test that you're running. Your valve would do all of the heavy lifting before and after the invoke(). You could remove the unique parameter before calling getNext().invoke() if needed.
To measure the coverage, you could use a coverage tool as suggested by JB Nizet, based on the unique id that you're passing over.
So, from JUnit, if your original call was:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14");
}
You would change this to be:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14&testId=testSomething");
}
Then you'd pick up the parameter testId in your valve.
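For illustration, a rough sketch of such a Valve; only the testId parameter comes from the example above, while the class name, the logging and the before/after bookkeeping are assumptions about how you might wire it up:
import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;
import org.apache.juli.logging.Log;
import org.apache.juli.logging.LogFactory;

// Registered in server.xml (or in a Context element), e.g.:
// <Valve className="com.example.TestIdValve" />
public class TestIdValve extends ValveBase {

    private static final Log log = LogFactory.getLog(TestIdValve.class);

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        // Read the test id that the JUnit/Selenium test appended to the URL.
        String testId = request.getParameter("testId");
        if (testId != null) {
            // "Heavy lifting" before the request, e.g. mark the start of the test
            // in your coverage bookkeeping.
            log.info("Begin request for test " + testId);
        }
        // Pass the request on to the rest of the pipeline (the webapp itself).
        getNext().invoke(request, response);
        if (testId != null) {
            // "Heavy lifting" after the request, e.g. save/label the coverage data.
            log.info("End request for test " + testId);
        }
    }
}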