How to run the Jbehave class through TestNG - java

I just created a story file which opens Google, enters some values in the search engine, and validates the search result. I also created a steps class file containing the Given, When, and Then annotations. Now I want to create a test runner that drives the steps class according to the story file, but I want to use TestNG instead of the JUnit framework. Several sites explain how to integrate JBehave with JUnit, but so far I haven't found any that cover the combination of JBehave and TestNG. Since I'm trying this by myself, I just need some clarity. Can somebody give me sample code to help me understand better?
Please find the sample story:
Scenario: Check the google search engine
Given Open the google home page www.google.com
When Enter test automation in search box
Then Proper result should be displayed in results page
Steps class file:
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class GoogleSearchEngine {

    public static WebDriver driver;

    @Given("Open the google home page $url")
    public static void openUrl(String url) throws Exception {
        try {
            driver = new FirefoxDriver();
            driver.get(url);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @When("Enter $searchKeyword in search box")
    public static void searchKeyword(String searchKeyword) throws Exception {
        try {
            driver.findElement(By.xpath(".//*[@id='gs_htif0']")).sendKeys(searchKeyword);
            driver.findElement(By.xpath(".//*[@id='tsf']/center/input[1]")).click();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @Then("Proper result should be displayed in results page")
    public static void result() throws Exception {
        try {
            driver.quit();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Then I need to develop a test runner that drives the story file.
Can somebody help me create a test runner class using the TestNG framework?

I had to do something similar recently and was pleasantly surprised to discover it was no big deal. See my answer to another SO JBehave question for sample code describing how to set up JBehave. From there, simply create a TestNG suite file for your JBehave tests like so:
<test name="all JBehave tests">
    <packages>
        <package name="com.foo.tests.jbehave.test1"/>
        <package name="com.foo.tests.jbehave.test2"/>
        <package name="com.foo.tests.jbehave.test3"/>
    </packages>
</test>
Remember that JBehave's JUnitStory and JUnitStories classes make JBehave stories look like JUnit tests, so from TestNG's perspective it's just running JUnit tests. One thing to watch out for is integrating reporting between the two.
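If you prefer not to go through the JUnit-shaped classes at all, you can also drive JBehave's Embedder directly from a TestNG @Test method. Here is a minimal sketch, assuming JBehave 3.x or later on the classpath; the class name and story-path pattern are illustrative, not from the original question:

import java.util.List;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.embedder.Embedder;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.testng.annotations.Test;

public class GoogleSearchStories {

    @Test
    public void runStories() {
        // wire the steps class into an Embedder and run every *.story on the classpath
        Embedder embedder = new Embedder();
        embedder.useConfiguration(new MostUsefulConfiguration());
        embedder.useStepsFactory(new InstanceStepsFactory(
                embedder.configuration(), new GoogleSearchEngine()));
        List<String> storyPaths = new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(getClass()), "**/*.story", "");
        embedder.runStoriesAsPaths(storyPaths);
    }
}

Because the entry point is a plain @Test method, the story run shows up in TestNG reports like any other test, which also sidesteps the JUnit/TestNG reporting mismatch.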


TestNG: skip execution of a testMethod in @BeforeMethod instead of skipping all remaining testMethods

I have a project where, before every @Test method, I do a check to see if the method's annotation data is valid. If the data isn't valid, I want to skip the test method and continue the rest of my test suite.
All the data parsing and logic works fine, but from what I can tell I am using the wrong tool for the job.
My code has...
private SoftAssert s_assert = new SoftAssert();

@BeforeMethod
public void beforeMethod(Method m) {
    // reads code
    if (dataNotCorrect)
        s_assert.fail();
}

@Test @MyCustomAnnotation(data = incorrect)
public void Test1() {
    // Do stuff
}

@Test @MyCustomAnnotation(data = correct)
public void Test2() {
    // Do stuff
}
In this scenario, when the tests run, Test1() should be skipped and TestNG should continue on to run Test2(). Yet as soon as the failure is caught at Test1(), the whole suite ends there, skipping not only Test1() but also Test2().
I've tried both a soft assert and a normal assert, but neither seems to work.
SkipException is what you are looking for.
In beforeMethod, check your data and throw SkipException if they are not correct. This will skip the test. In this complete yet simple example, test2 is skipped:
import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class TestA {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface MyCustomAnnotation {
        boolean dataCorrect();
    }

    @BeforeMethod
    public void beforeMethod(Method m) {
        if (!m.getAnnotation(MyCustomAnnotation.class).dataCorrect()) {
            throw new SkipException("Invalid data");
        }
    }

    @Test
    @MyCustomAnnotation(dataCorrect = true)
    public void test1() {
        System.out.println("test1");
    }

    @Test
    @MyCustomAnnotation(dataCorrect = false)
    public void test2() {
        System.out.println("test2");
    }
}
See also: How do I use TestNG SkipException?
You also need to alter the default config failure policy to let the other tests run even if one is skipped. This is done at the suite level:
<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite" configfailurepolicy="continue">
...
</suite>
Thanks to @Kyle, the OP, for pointing out this attribute.
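If you launch TestNG programmatically rather than through a suite XML, the same policy can be set via the API. A minimal sketch, assuming TestNG 7.x, where the policy is the XmlSuite.FailurePolicy enum (older 6.x versions took the string "continue" instead):

import org.testng.TestNG;
import org.testng.xml.XmlSuite;

public class ProgrammaticRunner {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { TestA.class });
        // equivalent of configfailurepolicy="continue" in testng.xml
        testng.setConfigFailurePolicy(XmlSuite.FailurePolicy.CONTINUE);
        testng.run();
    }
}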
@Benoit got me most of the way there by replacing my asserts with a thrown SkipException. But my issue of wanting only the current test to be skipped, rather than every remaining test, still remained.
The issue turned out to be the configfailurepolicy setting in TestNG. It defaults to skip (skipping all remaining tests) when I wanted it set to continue (continuing with the rest of the suite).
Here is an answer I found elsewhere which I managed to apply in two different ways. Link here
1.
First, make a testng.xml and run your tests from there. In the suite tag, add the attribute configfailurepolicy="continue".
Here is my testng.xml below
<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite" configfailurepolicy="continue">
    <test name="MyTests" preserve-order="true">
        <classes>
            <class name="testclassLocation..." />
        </classes>
    </test>
</suite>
Make sure that you run your tests from testng.xml if you do it this way.
2.
Find where the .jar for TestNG is located. I'm using Maven, so it was "${user.dir}.m2\repository\org\testng\testng\6.14.3".
Then open up the .jar archive, view the file 'testng-1.0.dtd', and find the line
configfailurepolicy (skip | continue) "skip"
And change it to
configfailurepolicy (skip | continue) "continue"
Should work fine after that.
Edit:
As mentioned in the comments, it is recommended to use the first solution, as it makes these changes/fixes portable across multiple projects/devices. The second solution only applies the fix to your machine.
Create a static boolean flag. In @BeforeMethod, if the data is not correct, set the flag to true. In the @Test, if the flag is true, reset it to false and fail the test.
Sample code
public static boolean myFlag = false;

@BeforeMethod
public void beforeMethod(Method m) {
    // reads code
    if (dataNotCorrect)
        myFlag = true;
}

@Test @MyCustomAnnotation(data = incorrect)
public void Test1() {
    if (myFlag) {
        myFlag = false;
        s_assert.fail(); // the SoftAssert declared in the question
    }
}

Cucumber test not running

I am working on my first feature file/selenium project.
I have created a feature file and runner class.
package cucumberpkg2;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = "Features")
public class Runner {
}
I have the feature file test.feature
Feature: Login screen
Scenario Outline: Successful login
Given User is on Login page
When User enters valid UserName and Password
And Clicks the login button
Then User landed on the home page
But whenever I try to run the TestRunner class as a JUnit test, I get the error:
Test class not found in selected project.
Here is the solution to your question:
As per the current Cucumber documentation, you may need to change the keyword Scenario Outline to Scenario in test.feature, because a Scenario Outline requires an Examples table and yours has none.
You mentioned a TestRunner class, but your code declares public class Runner; be sure which class file you are executing as the JUnit test.
You may need to change @CucumberOptions to @Cucumber.Options for older Cucumber versions (recent versions work with @CucumberOptions).
Put the annotation and its argument on a single line: @CucumberOptions(features="Features").
Since you are specifying @CucumberOptions(features="Features"), ensure that your feature file is placed inside the Features sub-directory within the project directory.
So you will have the test.feature file within the Features sub-directory, with the following code:
Feature: Login screen
Scenario: Successful login
Given User is on Login page
When User enters valid UserName and Password
And Clicks the login button
Then User landed on the home page
Your Runner class will look like:
import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = "Features")
public class Runner {
}
Finally, when you execute the Runner class as a JUnit test, you will see the following message on your console:
You can implement missing steps with the snippets below:
@Given("^User is on Login page$")
public void User_is_on_Login_page() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^User enters valid UserName and Password$")
public void User_enters_valid_UserName_and_Password() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^Clicks the login button$")
public void Clicks_the_login_button() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@Then("^User landed on the home page$")
public void User_landed_on_the_home_page() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}
These warnings can be taken care of easily by adding the glue option to your Cucumber options, as in the sketch below.
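A minimal sketch, assuming your step definitions live in a package named cucumberpkg2.steps (that package name is an assumption, not from the original question):

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = "Features", glue = "cucumberpkg2.steps")
public class Runner {
}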
Let me know if this answers your question.
You need to provide the full path of the feature file, as mentioned below.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = {"src/test/resources/com/gaurang/steps/demo.feature",
                "src/test/resources/com/gaurang/steps/demo1.feature"
    }
)
public class RunAllTest {
}
Or, if you have too many feature files, the better way is to add tags to the feature files and then run using those tags, as mentioned below.
@userRegistrations
Feature: User Registration

RunAllTest.java:

@RunWith(Cucumber.class)
@CucumberOptions(tags = {"@userRegistrations"})
public class RunAllTest {
}
And you can use multiple tags, as in the sketch below.
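A minimal sketch of combining tags, assuming the pre-4.x Cucumber-JVM tag syntax used elsewhere in this answer (a comma inside one string means OR; separate strings mean AND). The @smoke tag is illustrative, not from the original post:

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// runs scenarios tagged @userRegistrations OR @smoke;
// tags = {"@userRegistrations", "@smoke"} would instead require both tags
@RunWith(Cucumber.class)
@CucumberOptions(tags = {"@userRegistrations,@smoke"})
public class RunTaggedTest {
}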

Is there a screenshot-to-report embed method for TestNG like the one available for Cucumber?

Is there a screenshot embed method for TestNG like the one available for Cucumber?
I have the following Cucumber method up and running, but is there a similar method for JUnit or TestNG that will append images to the created reports (XML reports)?
public void close_browser_window(Scenario scenario) throws Exception {
    if (scenario.isFailed()) {
        scenario.embed(((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES), "image/png");
    }
}
I think your question has two parts.
One is actually taking the screenshot, and the other is attaching it to an XML report.
So, in TestNG, in order to take a screenshot you can override the onTestFailure method like so:
public class OnFailure extends TestListenerAdapter {
    @Override
    public void onTestFailure(ITestResult result) {
        // driver, nameVar and envVar come from the surrounding test context
        File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        try {
            FileUtils.copyFile(scrFile, new File("C:\\Screenshots\\Regression\\" + nameVar + "_" + envVar + ".png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Then, without too much hassle, you can use Extent Reports, which can attach the screenshot to your report; check out the community edition here!
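If you want to stay with TestNG's built-in HTML report instead, a minimal sketch is to log an <img> tag that points at the saved screenshot; the path parameter here is illustrative:

import org.testng.Reporter;

public class ScreenshotReporting {

    // embeds a previously saved screenshot into TestNG's default HTML report
    public static void attachScreenshot(String screenshotPath) {
        Reporter.setEscapeHtml(false); // let the raw <img> markup through
        Reporter.log("<img src='" + screenshotPath + "' height='200' width='300'/>");
    }
}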

JUnit and Mocks in Liferay

I need to write JUnit tests using Mockito or PowerMock or something else, but I don't know what to start with. I created a testing folder and set up Mockito, but what should I do next? I couldn't find any examples, so I'm stuck. Can you show me how to write a JUnit test for this, or at least give me some idea?
public void deleteAuthor(ActionRequest actionRequest, ActionResponse actionResponse)
        throws SystemException, PortalException {
    long authorId = ParamUtil.getLong(actionRequest, "authorId");
    AuthorLocalServiceUtil.deleteAuthor(authorId);
    SessionMessages.add(actionRequest, "deleted-author");
    log.info(DELETE_SUCCESS);
}
Or this:
public void addAuthor(ActionRequest actionRequest, ActionResponse actionResponse)
        throws IOException, PortletException, SystemException {
    String authorName = ParamUtil.getString(actionRequest, "authorName");
    Author author = AuthorLocalServiceUtil.createAuthor(CounterLocalServiceUtil.increment());
    author.setAuthorName(authorName);
    author = AuthorLocalServiceUtil.addAuthor(author);
}
P.S. I'm very much a newbie and have only written one JUnit test in my life, so I'm really interested in good advice. Thanks in advance!
UPD:
I tried to do something like this:
private BookAndAuthor portlet;

@Before
public void setUp() {
    portlet = new BookAndAuthor();
}

@Test
public void testDeleteBookOk() throws Exception {
    PowerMockito.mockStatic(BookLocalServiceUtil.class);
    long id = 1;
    Book book = BookLocalServiceUtil.createBook(id);
    ActionRequest actionRequest = mock(ActionRequest.class);
    ActionResponse actionResponse = mock(ActionResponse.class);
    when(BookLocalServiceUtil.deleteBook(book)).thenReturn(null);
    Book result = BookLocalServiceUtil.deleteBook(book);
    assertEquals(result, null);
}
...but with no success.
We are running JUnit tests using the following set-up:
i. Create a test folder beside docroot in your portlet.
ii. Add a unit folder to test and create your package in it.
iii. Create a portal-ext.properties file in your test folder with the following configuration:
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost:3309/db_name?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=your_username
jdbc.default.password=your_password
jdbc.default.automaticTestTable=C3P0TestTable
jdbc.default.idleConnectionTestPeriod=36000
jdbc.default.maxIdleTime=1200
iv. Create a suite class (say AbcSuite.java) as follows:
package x.x.x;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

import com.liferay.portal.util.InitUtil;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    // Where AxTest.class would be your test class name
    A1Test.class, A2Test.class, AxTest.class
})
public class AbcSuite {

    @BeforeClass
    public static void setUp() throws Exception {
        // Loading properties and establishing connection with database
        InitUtil.initWithSpring();
        System.out.println("X Portlet's Test Suite Execution : Started.");
    }

    @AfterClass
    public static void tearDown() {
        System.out.println("X Portlet's Test Suite Execution : Completed.");
    }
}
v. Create a test class (say A1Test.java) as follows:
package x.x.x;

import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

public class A1Test {

    @BeforeClass
    public static void setUp() throws Exception {
        System.out.println("Test Running : A1Test");
    }

    @Test
    public void testAddAuthor() {
        Author author = AuthorLocalServiceUtil.createAuthor(
                CounterLocalServiceUtil.increment());
        author.setAuthorName("Testcase Author");
        author = AuthorLocalServiceUtil.addAuthor(author);
        Assert.assertNotNull(author);
        Assert.assertTrue(author.getAuthorId() > 0);
    }
}
That's it! You can execute all test cases together using the following command:
ant test -Dtest.class=AbcSuite*
or separately as:
ant test -Dtest.class=A1Test*
This will be an unpopular answer, but...
I have found that JUnit tests with a lot of mock objects are not particularly useful. The balance shows in the size of your test's setUp() method: the longer it is, the less value the test has. In the portlet world you'd have to use a lot of mocks, and you'll be more busy mirroring the runtime environment (and correcting the assumptions you made about it) than fixing issues that you only found while creating these tests.
That being said, here's my prescription:
Build your portlets with one thing in mind: Portlets are a UI technology. UI is inherently hard to test automatically. You're stuck between the JSR-286 standard and your business layer - two layers that probably don't lend themselves particularly well for connecting them in tests.
Keep your UI layer code so ridiculously simple, that you can go with just a bit of code review. You'll learn more from it than from humongous setUp() routines of your JUnit tests.
Factor out meaningful UI-layer code. Extract it into its own utility class or method. Test that - notice that you probably don't even need a full PortletRequest object for it, use just the actual data that it needs
Create Integration tests on top of all this. These will utilize the full stack, your application deployed in a test environment. They will provide a smoke test to see if your code is actually working. But make sure that testing correct wiring doesn't slow you down: Code of the complexity object.setStreet(request.getParameter("street")); should not be tested, rather code reviewed - and it should be either obviously right or obviously wrong.
Use proper coding standards to make reviews easier. E.g. name your input field "street" if that's the data it holds, not "input42"
With these in mind: whenever you write a portlet with code that you believe should be tested, extract it. Eliminate the need to mock the portlet objects or your business layer. Test the extracted code; a sketch follows below. A second { code block } within a portlet's method might be enough of a code smell to justify extraction to a separate class/method that can typically be tested trivially - and these tests will be totally independent of Liferay, teach you a lot about your code if they fail, and be far easier to understand than those that set up a lot of mock objects.
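As a concrete illustration of that extract-and-test advice, here is a minimal sketch; the validator class and its rule are purely hypothetical, not from the original question:

// hypothetical extracted UI-layer logic: no PortletRequest, no Liferay services
public class AuthorNameValidator {
    public static boolean isValidName(String name) {
        return name != null && !name.trim().isEmpty() && name.trim().length() <= 75;
    }
}

Its test needs no mocks and is completely independent of Liferay:

import org.junit.Assert;
import org.junit.Test;

public class AuthorNameValidatorTest {

    @Test
    public void rejectsBlankNames() {
        Assert.assertFalse(AuthorNameValidator.isValidName("   "));
    }

    @Test
    public void acceptsOrdinaryNames() {
        Assert.assertTrue(AuthorNameValidator.isValidName("Jane Austen"));
    }
}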
I'd rather err on the side of triviality of tests than on the side of too complex tests: Too complex tests will slow you down, rather than provide meaningful insight. They typically only fail because an assumption about the runtime environment was false and needs to be corrected.

Best Way to Reset Browser State in TestNG with Selenium and Java

Can someone recommend the best way to 'tearDown' in the @AfterClass method with TestNG? I occasionally get a hung window or popup and can't get back to the homepage to log out gracefully for the next class to start and avoid modal dialog errors.
I've tried getting a window handle on the homepage, then one on the popup, then putting a switchTo in my @AfterClass, but it doesn't work. I've also tried driver.close(), but then my next class gets skipped with an unreachable browser error.
I really need a good tearDown @AfterClass method that can get out of whatever errors and popups are on the page and just leave a clean browser window at the login page for the next test class to run.
Edit: Adding Code:
package TestPackage;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.*;

//@Test (priority = 1)
public class test1 {

    public static WebDriver driver = new FirefoxDriver();
    Environment environment = Environment.QA03();
    User testUser = User.ns_system();
    AxUtilities axUtilities;

    @BeforeClass
    @Parameters("environment")
    @Test
    public void login(String environment1) {
        // environment = new Environment(environment1);
        axUtilities = new DAxUtilities(environment1, driver);
        // DAxUtilities dAxUtilities = new DAxUtilities(environment);
        Login login = new Login();
        login.getLogin(testUser, axUtilities, driver);
        axUtilities.sleep(5000);
    }

    @Test
    public void testNothing() {
        String s = "public void testNothing() reached...";
        System.out.println(s);
    }

    @AfterClass
    public void verifyOKAndLogout() {
        driver.quit();
        // DAxUtilities dAxUtilities;
    }
}
Test classes 1 and 2 are the same except for the class name...
The tests are run from an XML file, and I've tried many variants of it, even putting each test in its own XML file:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="TestSuite1" verbose="1">
    <parameter name="environment" value="QA03"/>
    <test name="Test1">
        <classes>
            <class name="TestPackage.test1"/>
            <!--<class name="TestPackage.test2"/>-->
        </classes>
    </test>
    <test name="Test2">
        <classes>
            <class name="TestPackage.test2"/>
        </classes>
    </test>
</suite>
But two web drivers always get created, one right after the other, at the very beginning of the test run, before any other code is executed. And breakpoints show the code is definitely skipping from the beginning of class 1 to the beginning of class 2 before running class 1's tests...
From what you wrote, it seems that you are creating the browser (instantiating WebDriver) once before the entire test suite and then executing all the tests in the same browser instance. Although this results in faster execution, it also introduces problems like the one you are having right now.
I would suggest creating a new instance of the browser before executing each test class (or even each test method). In that case you can safely use driver.quit() in your @AfterClass method. Restarting the browser for every test will make your tests much more stable.
EDIT: comments on your newly added code:
The way you are instantiating WebDriver is not correct. If you have several test classes, you will end up with multiple browser windows opened before any test is executed.
You annotated one method with both @Test and @BeforeClass - this will cause the method to be executed twice in a row.
The way it should (more or less) look is:
public class MyTest {

    protected WebDriver driver;

    @BeforeClass // I would even suggest using @BeforeMethod, but that's up to you
    public void setUp() {
        driver = new FirefoxDriver();
        // other instantiations and assignments if necessary
    }

    @Test
    public void login() {
        driver.get("http://watever.com");
        // do whatever
    }

    @Test
    public void someOtherTest() {
        // do something else
    }

    @AfterClass // or @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
I believe that calling driver.close() or driver.quit() and then creating a new WebDriver and assigning it to your driver field should do the trick. You'd probably want to call the same function at both the beginning and the end of your class; a sketch follows below.
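A minimal sketch of that idea; the class and method names are illustrative, not from the original answer:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BrowserManager {

    public static WebDriver driver;

    // call this from @BeforeClass and @AfterClass alike: it quits any old
    // session (hung popups included) and leaves a fresh window in a known state
    public static void resetBrowser() {
        if (driver != null) {
            driver.quit();
        }
        driver = new FirefoxDriver();
    }
}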
Use

@AfterClass
public void tearDown() {
    driver.quit();
}

It will close all the windows opened by the test class. And in the next class, create a new instance of WebDriver as shown below:

WebDriver driver = new FirefoxDriver(); // or some other driver
