Before/After scenario not working in JBehave Serenity BDD
serenity.version 1.2.3-rc.5
serenity.jbehave.version 1.21.0
For example:
public class UploadDocumentWhatStep {
@BeforeScenario
public void beforeEachScenario(){
System.out.println("in before");
}
@Given("Sample Given")
public void cleanUp() {
System.out.println("in given");
}
@When("Sample When")
public void action() {
System.out.println("in When");
}
@Then("Sample Then")
public void verification() {
System.out.println("in then");
}
@AfterScenario
public void afterEachScenario(){
System.out.println("in After");
}
}
When I try to run this code, the output is:
in given
in When
in Then
This worked for me:
The JBehave API seems to have changed; you now appear to need to add the ScenarioType parameter:
@BeforeScenario(uponType = ScenarioType.ANY)
public void setTheStage() {
OnStage.setTheStage(new OnlineCast());
}
Source: https://github.com/serenity-bdd/serenity-jbehave/issues/117
JBehave determines scenarios from your .story file. Chances are you either did not define scenarios in your story file, or there is a syntax error and the file is being ignored. Post your story file here.
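For reference, a minimal sketch of a well-formed story file matching the steps above (the scenario title is just an example):

Scenario: Sample scenario

Given Sample Given
When Sample When
Then Sample Then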
Related
Given a test failure, I have to extract all the code statements that the test execution ran.
Let's say unit test 1 has failed; I have to extract all the code it executed.
public class Driver {
method1 {
}
method2 {
}
method3 {
}
public TakeScreenshot(int flag){
statement1;
statement2;
if(flag) {
statement_inside_flag;
return; // early return, so statement100 is not executed
}
statement100;
}
}
[TestMethod]
public void TestThings()
{
boolean result = Driver.TakeScreenshot(true);
Assert.isTrue(result);
}
Is there an easy way I can do it using an open-source tool?
If I want to extract the body of the code under test, the output in this case would be as follows (some lines in TakeScreenshot were not executed):
public class Driver {
public TakeScreenshot(int flag){
statement1;
statement2;
if(flag) {
statement_inside_flag;
return;
}
}
}
I have to unit test the method below, where all of its lines are calls into the third-party AWS library. The method also returns nothing, so the only test I can do is verify the exception. Is there any other test I can do to improve the code coverage?
public void multipartUpload() throws InterruptedException {
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.withMultipartUploadThreshold(1024L)
.build();
PutObjectRequest request = new PutObjectRequest(bucketName, keyName, filePath);
Upload upload = tm.upload(request);
upload.waitForCompletion();
}
Let's see the code that needs to be tested:
public class DemoCodeCoverage {
public void showDemo(LibraryCode library) {
System.out.println("Hello World!");
library.runDemoApplication();
// Extract the below code to a method since LibraryCode is not passed
// Then ignore running that method
// LibraryCode library = new LibraryCode()
// library.runDemoApplication_1();
// library.runDemoApplication_2();
// library.runDemoApplication_3();
System.out.println("World ends here!");
}
public boolean showBranchingDemo(boolean signal) {
if (signal) {
signalShown();
} else {
noSignal();
}
return signal;
}
public void signalShown() {
System.out.println("signalShown!");
}
public void noSignal() {
System.out.println("NoSignal!");
}
}
public class LibraryCode {
// Library can be AWS/Database code which needs authentication
// And this authentication is not a concern for our UT
// Still, it will end up throwing an exception when we run our UT
public void runDemoApplication() {
throw new RuntimeException();
}
}
The tests below can give good code coverage:
public class DemoCodeCoverageTest {
@Test
public void testShowDemo() {
DemoCodeCoverage t = Mockito.spy(new DemoCodeCoverage());
LibraryCode lib = Mockito.mock(LibraryCode.class);
Mockito.doNothing().when(lib).runDemoApplication();
t.showDemo(lib);
// when(bloMock.doSomeStuff()).thenReturn(1);
// doReturn(1).when(bloMock).doSomeStuff();
}
@Test
public void testShowBranchingDemo() {
DemoCodeCoverage t = Mockito.spy(new DemoCodeCoverage());
assertEquals(true, t.showBranchingDemo(true));
assertEquals(false, t.showBranchingDemo(false));
}
@Test
public void testSignalShown() {
DemoCodeCoverage t = Mockito.spy(new DemoCodeCoverage());
t.showBranchingDemo(true);
Mockito.verify(t, times(1)).signalShown();
}
@Test
public void testNoSignal() {
DemoCodeCoverage t = Mockito.spy(new DemoCodeCoverage());
t.showBranchingDemo(false);
Mockito.verify(t, times(1)).noSignal();
}
}
Below are the steps to increase the test code coverage:
Case_1: Testing a void method
Assume you have a method that does not take any params and returns nothing.
public void printHelloWorld() {
System.out.println("Hello World")
}
You can still write a test that calls this method and returns successfully without any RuntimeException.
Admittedly we haven't tested anything here other than giving our tests a way to run the code, but it does increase the code coverage.
Additionally you can verify the invocation:
Mockito.verify(instance, times(1)).printHelloWorld();
There are circumstances where you cannot really test the behaviour, for example a third-party library call; the library has presumably been tested already, and we just need to run through it.
@Test
public void testPrintHelloWorld() {
// may be hibernate call/other 3rd party method call
instance.printHelloWorld();
}
If your tool is not strict about 100% code coverage, you can even ignore such code and justify it.
Case_2: Testing a method that creates an object and calls another method inside the method under test
Assume you have a method that calls the DB to add an entry to the Hello_World table and also prints it to the console, like below.
public void printHelloWorld() throws DBException {
DBConnection db = new DBConnection();
db.createEntry(TABLE_NAME, "Hello World");
System.out.println("Hello World")
}
You can extract that DB code into a new method and then test it separately.
public void printHelloWorld() throws DBException {
makeHelloWorldEntryInTable();
System.out.println("Hello World")
}
public void makeHelloWorldEntryInTable() throws DBException {
DBConnection db = new DBConnection();
db.createEntry(TABLE_NAME, "Hello World");
}
While testing against the DB you would expect a DBException, since this is just a unit test. So write one test with @Test(expected = DBException.class) for makeHelloWorldEntryInTable, and another test for printHelloWorld() that skips the makeHelloWorldEntryInTable call, like below. This increases the code coverage.
@Test(expected = DBException.class)
public void testMakeHelloWorldEntryInTable() throws DBException {
//This can any third party library which cannot be configured for ut.
//One example is testing the AWS bucket exist or not.
instance.makeHelloWorldEntryInTable();
}
@Test
public void testPrintHelloWorld() throws DBException {
Mockito.doNothing()
.when(localInstance)
.makeHelloWorldEntryInTable();
localInstance.printHelloWorld();
}
Case_3: If you have a private method, make it package-private (default visibility) and test it directly, as sketched below. This also improves the code coverage.
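A minimal sketch of that idea, with purely illustrative names; the test class sits in the same package as the class under test:

public class ReportGenerator {
    // was: private String buildHeader(String title)
    String buildHeader(String title) { // package-private so a test in the same package can call it
        return "== " + title + " ==";
    }
}

public class ReportGeneratorTest {
    @Test
    public void testBuildHeader() {
        assertEquals("== Daily ==", new ReportGenerator().buildHeader("Daily"));
    }
}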
I have run this code and the screenshot gets captured after the Chrome browser closes (@After).
If I comment out CloseBrowser(); the screenshot gets captured, but the Chrome browser stays open.
I want the screenshot to be captured on a failed test and then close the browser.
In summary:
The screenshot is currently captured after the browser closes, which gives just a blank .png
I want the screenshot to be captured when a test fails, just before the browser closes
Thanks
public class TestClass extends classHelper { // has BrowserSetup() and CloseBrowser()
@Rule
public ScreenshotTestRule my = new ScreenshotTestRule();
@Before
public void BeforeTest()
{
BrowserSetup();// launches chromedriver browser
}
@Test
public void ViewAssetPage()
{
//My test code here//And want to take screenshot on failure
}
@After
public void AfterTest() throws InterruptedException
{
CloseBrowser();//closes the browser after test passes or fails
}
}
class ScreenshotTestRule implements MethodRule {
public Statement apply(final Statement statement, final FrameworkMethod frameworkMethod, final Object o) {
return new Statement() {
@Override
public void evaluate() throws Throwable {
try {
statement.evaluate();
} catch (Throwable t) {
captureScreenshot(frameworkMethod.getName());
throw t; // rethrow to allow the failure to be reported to JUnit
}
}
public void captureScreenshot(String fileName) {
try {
new File("target/surefire-reports/").mkdirs(); // Insure directory is there
FileOutputStream out = new FileOutputStream("target/surefire-reports/screenshot-" + fileName + ".png");
out.write(((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES));
out.close();
} catch (Exception e) {
// No need to crash the tests if the screenshot fails
}
}
};
}
}
You can implement TestNG listeners to execute code before a test, after a test, or when a test fails or succeeds, etc.
Implement it like below and put your screenshot code in the method I show; a fuller runnable sketch follows.
public class Listeners implements ITestListener {
// ... other ITestListener methods ...
}
And put the screenshot code inside this method:
@Override
public void onTestFailure(ITestResult result) {
// code for the screenshot goes here
}
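A minimal runnable sketch of such a listener. It assumes TestNG 7+ (where the other ITestListener methods have default implementations) and that your test class exposes its WebDriver somehow; the TestClass.getDriver() call below is a hypothetical accessor, not part of your code.

import java.nio.file.Files;
import java.nio.file.Paths;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class ScreenshotListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        try {
            WebDriver driver = TestClass.getDriver(); // hypothetical accessor to your driver
            byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
            Files.write(Paths.get("target/screenshot-" + result.getName() + ".png"), png);
        } catch (Exception e) {
            // no need to crash the tests if the screenshot fails
        }
    }
}

Register it on the test class with @Listeners(ScreenshotListener.class) or in testng.xml.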
So I have found a way to implement the screenshots. I created a method that takes a screenshot, put a try/catch around my test code, and call the screenshot method when an exception is caught.
public class TestClass extends classHelper { // has BrowserSetup() and CloseBrowser()
@Rule
public ScreenshotTestRule my = new ScreenshotTestRule();
@Before
public void BeforeTest()
{
BrowserSetup();// launches chromedriver browser
}
@Test
public void ViewAssetPage()
{
try
{
//My test code here//And want to take screenshot on failure
}
catch(Exception e)
{
//print e
takeScreenShot();
}
}
@After
public void AfterTest() throws InterruptedException
{
CloseBrowser();//closes the browser after test passes or fails
}
}
///////////////////////////////////////////
void takeScreenShot()
{
try
{
int num = 0;
String fileName = "SS"+NAME.getMethodName()+".png";//name of file/s you wish to create
String dir = "src/test/screenshot";//directory where screenshots live
new File(dir).mkdirs();//makes new directory if does not exist
File myFile = new File(dir,fileName);//creates file in a directory n specified name
while (myFile.exists())//if file name exists increment name with +1
{
fileName = "SS"+NAME.getMethodName()+(num++)+".png";
myFile = new File(dir,fileName);
}
FileOutputStream out = new FileOutputStream(myFile);//creates an output for the created file
out.write(((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES));//Takes screenshot and writes the screenshot data to the created file
//FileOutputStream out = new FileOutputStream("target/surefire-reports/" + fileName);
out.close();//closes the outputstream for the file
}
catch (Exception e)
{
// No need to crash the tests if the screenshot fails
}
}
This might help:
https://github.com/junit-team/junit4/issues/383
The ordering of rule execution has changed with the newer TestRule.
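One way to act on that hint, since rules now wrap @Before/@After: the CloseBrowser() call in @After runs before the rule ever sees the failure, so the page is already gone when the screenshot is taken. Below is a sketch that moves both the screenshot and the browser shutdown into a TestWatcher instead; how the rule obtains the driver and the close action is an assumption about your helper class.

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.function.Supplier;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenshotOnFailureRule extends TestWatcher {
    private final Supplier<WebDriver> driver;  // e.g. () -> driver from your helper class
    private final Runnable closeBrowser;       // e.g. this::CloseBrowser from your helper class

    public ScreenshotOnFailureRule(Supplier<WebDriver> driver, Runnable closeBrowser) {
        this.driver = driver;
        this.closeBrowser = closeBrowser;
    }

    @Override
    protected void failed(Throwable e, Description description) {
        try {
            new File("target/surefire-reports/").mkdirs();
            byte[] png = ((TakesScreenshot) driver.get()).getScreenshotAs(OutputType.BYTES);
            Files.write(Paths.get("target/surefire-reports/screenshot-"
                    + description.getMethodName() + ".png"), png);
        } catch (Exception ignored) {
            // no need to crash the tests if the screenshot fails
        }
    }

    @Override
    protected void finished(Description description) {
        closeBrowser.run(); // runs after failed(), so the screenshot still sees the live page
    }
}

With this rule you would drop CloseBrowser() from the @After method and declare the rule in the test class, for example:
@Rule
public ScreenshotOnFailureRule screenshots = new ScreenshotOnFailureRule(() -> driver, this::CloseBrowser);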
I am trying to use AspectJ in a sample project in IntelliJ IDEA. I have experience with Spring AOP, but this is the first time I am using AspectJ, and I cannot make it work.
Environment: Windows 10, IntelliJ IDEA, and AspectJ.
Refer to this document for configuration:
https://www.jetbrains.com/help/idea/2016.3/aspectj.html
public class Hello {
public void sayHello() {
System.out.println("test1.Hello, AspectJ!");
}
public static void main(String[] args) {
Hello hello = new Hello();
hello.sayHello();
}
}
public aspect TxAspect {
void around():call(void Hello.sayHello()){
System.out.println("Start transaction...");
proceed();
System.out.println("end transaction...");
}
}
It should produce this output:
Start transaction...
test1.Hello, AspectJ!
end transaction...
but instead it produces a lot of errors:
[screenshot of the compiler errors omitted]
Changing the JDK release from 10 to 8 can solve this problem.
Could someone tell me how to write functional application tests that combine the Selenium Page Object pattern and ExtentReports (http://extentreports.relevantcodes.com/) to generate reports from these test cases? How should I design the test class? I know that validation should be separated from page objects. What is the best approach to do this?
A sample piece of code would be very helpful.
It is a good approach, of course, to separate your model (page objects) from your tests. For this to happen, you may use a layer of services, i.e. helper classes, which can interact both with business objects and page objects.
Note: I'm going to answer the second part of your question, not the part about yet another reporting lib.
So, you have a business object:
public class Something {
boolean toHappen;
public Something(boolean toHappen) {
this.toHappen = toHappen;
}
public boolean isToHappen() {
return toHappen;
}
}
You also have your page:
public class ApplicationPage {
// how driver object is put here is your own business.
private static WebDriver driver;
@FindBy(id = "id")
private Button triggerButton;
public ApplicationPage() {
PageFactory.initElements(driver, this);
}
public static ApplicationPage open(){
driver.get("http://page.net");
return new ApplicationPage();
}
public void trigger() {
triggerButton.click();
}
}
So in order not to mix business objects and pages in tests, you create a service:
public class InevitableService {
public static void makeHappen() {
// just a very stupid code here to show interaction
Something smth = new Something(true);
ApplicationPage page = ApplicationPage.open();
if (smth.isToHappen()) {
page.trigger();
}
}
}
And finally your test
public class TestClass extends Assert {
#Test
public void test() {
InevitableService.makeHappen();
assertTrue(true);
}
}
As a result:
you have no driver in tests
you have no page objects in tests
you operate only on high-level logic
Pros:
very flexible
Cons:
gets complicated over time
Regarding your reporting tool: I believe it just listens to the results of your tests and sends them to a server, or it takes the XML/HTML results of your tests and makes pretty and useless pie charts. Either way, it has nothing to do with the Page Object pattern.
Steps:
1. Declare variables in the test suite class
public ExtentReports extent ;
public ExtentTest test;
2. Create an object of the user-defined ExtentManager class
extent = ExtentManager.instance();
3. Pass the extent instance to the page object class (a sketch of such a page object follows this list)
inbound = new DemoPageObject(driver,extent);
4. Go to the page object class method and start with a "start test" log
test = extent.startTest("View details", "Unable to view details");
5. For successful steps, log them and end the test
test.log(LogStatus.PASS, "The list of details are successfully displaying");
test.log(LogStatus.INFO, test.addScreenCapture(ExtentManager.CaptureScreen(driver, "./Send")));
log.info("The list of details are successfully displaying ");
extent.endTest(test);
6. For a failure, just log it; there is no need to end the test here
test.log(LogStatus.FAIL, "A Technical error is displaying under ");
7. Use @AfterMethod to handle failed test cases
@AfterMethod
public void tearDown(ITestResult result) {
if (result.getStatus() == ITestResult.FAILURE) {
test.log(LogStatus.FAIL, "<pre>" + result.getThrowable().getMessage() + "</pre>");
extent.endTest(test);
}
}
8. Finally, flush the results to the report
@AfterTest
public void when_I_Close_Browser() {
extent.flush();
}
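For step 3, a page object that accepts the ExtentReports handle might look roughly like this; the class name, locator, and method names are only illustrative, and the sketch assumes the ExtentReports 2.x API used above:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class DemoPageObject {
    private final WebDriver driver;
    private final ExtentReports extent;

    @FindBy(id = "details") // illustrative locator
    private WebElement detailsLink;

    public DemoPageObject(WebDriver driver, ExtentReports extent) {
        this.driver = driver;
        this.extent = extent;
        PageFactory.initElements(driver, this);
    }

    public void viewDetails() {
        ExtentTest test = extent.startTest("View details", "Unable to view details");
        detailsLink.click();
        test.log(LogStatus.PASS, "The list of details is successfully displaying");
        extent.endTest(test);
    }
}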
public class ExtentManager {
public static ExtentReports instance() {
ExtentReports extent;
String path = "./ExtentReport.html";
System.out.println(path);
extent = new ExtentReports(path, true);
//extent.config() .documentTitle("Automation Report").reportName("Regression");
extent
.addSystemInfo("Host Name", "Anshoo")
.addSystemInfo("Environment", "QA");
return extent;
}
public static String CaptureScreen(WebDriver driver, String ImagesPath) {
TakesScreenshot oScn = (TakesScreenshot) driver;
File oScnShot = oScn.getScreenshotAs(OutputType.FILE);
File oDest = new File(ImagesPath + ".jpg");
try {
FileUtils.copyFile(oScnShot, oDest);
} catch (IOException e) {
System.out.println(e.getMessage());
}
return ImagesPath + ".jpg";
}
}