How do I get the result of a Cucumber feature - java

I'm trying to run Cucumber features in JUnit 5 Jupiter. I've lifted some code from the Cucumber-jvm source and adapted it for JUnit 5's TestFactory. It is working: I see my features running when I run all JUnit tests (this is Kotlin code, but the same applies to Java):
@CucumberOptions(
    plugin = arrayOf("pretty"),
    features = arrayOf("classpath:features")
)
class Behaviours {
    @TestFactory
    fun loadCucumberTests(): Collection<DynamicTest> {
        val options = RuntimeOptionsFactory(Behaviours::class.java).create()
        val classLoader = Behaviours::class.java.classLoader
        val resourceLoader = MultiLoader(classLoader)
        val classFinder = ResourceLoaderClassFinder(resourceLoader, classLoader)
        val runtime = Runtime(resourceLoader, classFinder, classLoader, options)
        val cucumberFeatures = options.cucumberFeatures(resourceLoader)
        return cucumberFeatures.map<CucumberFeature, DynamicTest> { feature ->
            dynamicTest(feature.gherkinFeature.name) {
                val reporter = options.reporter(classLoader)
                feature.run(options.formatter(classLoader), reporter, runtime)
            }
        }
    }
}
However, JUnit reports that every feature was successful, whether or not it actually was. When features fail, the results are correctly pretty-printed, but the generated DynamicTest still passes. Neither gradle test nor IntelliJ notices the error; I have to inspect the text output.
I think I have to figure out, in the Executable passed as the second parameter to dynamicTest, what the result of the feature was, and raise an assertion when appropriate. How do I determine the result of feature or feature.gherkinFeature at that point?
And is there a way to get at the results for each scenario in the feature? Or better, is there a way to run a specific scenario, so that I can create a DynamicTest for each scenario, giving me better reporting granularity in JUnit?

To record the result of a Cucumber scenario as a JUnit 5 test, I found it easiest to implement a JunitLambdaReporter, which is essentially a simpler version of the existing JunitReporter. Once you have a reporter that remembers what the current scenario is, you can create a @TestFactory that uses this logic:
return dynamicTest(currentScenario.getName(), () -> {
    featureElement.run(formatter, reporter, runtime);
    Result result = reporter.getResult(currentScenario);
    // If the scenario is skipped, then the test is aborted (neither passes nor fails).
    Assumptions.assumeFalse(Result.SKIPPED == result);
    Throwable error = result.getError();
    if (error != null) {
        throw error;
    }
});
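The JunitLambdaReporter itself is not shown above, so here is a minimal sketch of what such a reporter could look like, written against the gherkin 2.12 Formatter and Reporter interfaces that cucumber-jvm 1.x uses. This is my own illustration, not the original class; double-check the callback signatures against your Cucumber version:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import gherkin.formatter.Formatter;
import gherkin.formatter.Reporter;
import gherkin.formatter.model.Background;
import gherkin.formatter.model.Examples;
import gherkin.formatter.model.Feature;
import gherkin.formatter.model.Match;
import gherkin.formatter.model.Result;
import gherkin.formatter.model.Scenario;
import gherkin.formatter.model.ScenarioOutline;
import gherkin.formatter.model.Step;

// Tracks which scenario is currently running and the first non-passing
// result reported for it, so a @TestFactory lambda can assert on it later.
public class JunitLambdaReporter implements Formatter, Reporter {

    private Scenario currentScenario;
    private final Map<Scenario, Result> results = new HashMap<>();

    public Scenario getCurrentScenario() {
        return currentScenario;
    }

    public Result getResult(Scenario scenario) {
        return results.get(scenario);
    }

    @Override
    public void scenario(Scenario scenario) {
        // Formatter callback: remember which scenario the next step results belong to.
        currentScenario = scenario;
    }

    @Override
    public void result(Result result) {
        // Reporter callback, invoked once per step: keep the first failing or
        // skipped result so one bad step marks the whole scenario.
        Result previous = results.get(currentScenario);
        if (previous == null || (previous.getError() == null && Result.SKIPPED != previous)) {
            results.put(currentScenario, result);
        }
    }

    // The remaining callbacks are not needed for result tracking.
    @Override public void syntaxError(String state, String event, List<String> legalEvents, String uri, Integer line) {}
    @Override public void uri(String uri) {}
    @Override public void feature(Feature feature) {}
    @Override public void scenarioOutline(ScenarioOutline scenarioOutline) {}
    @Override public void examples(Examples examples) {}
    @Override public void startOfScenarioLifeCycle(Scenario scenario) {}
    @Override public void background(Background background) {}
    @Override public void step(Step step) {}
    @Override public void endOfScenarioLifeCycle(Scenario scenario) {}
    @Override public void done() {}
    @Override public void close() {}
    @Override public void eof() {}
    @Override public void before(Match match, Result result) {}
    @Override public void after(Match match, Result result) {}
    @Override public void match(Match match) {}
    @Override public void embedding(String mimeType, byte[] data) {}
    @Override public void write(String text) {}
}

Because the Formatter's scenario callback fires before that scenario's steps are reported, getResult can be keyed by scenario, which is exactly what the dynamicTest lambda above relies on.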

Related

AWS Lambda and Java Reflections (Guava)

I am trying to run Guava reflection in my AWS Lambda function, but it does not seem to work in production.
The code I am trying to run is supposed to create a Map<String, Class> of class names to classes.
Code:
val converterClassMap by lazy {
    val cl = ClassLoader.getSystemClassLoader()
    ClassPath.from(cl).getTopLevelClasses("converters").asSequence()
        .mapNotNull { it.load().kotlin }
        .filter { it.simpleName?.endsWith("Converter") == true }
        .associateBy({ it.simpleName }, { it })
}
Running this code locally works perfectly, but running it in production on a Lambda fails with an error because the map is empty:
Key PaginationConverter is missing in the map.: java.util.NoSuchElementException
Has anyone else run into this problem?
One more possibility. You have this line in the code:
val cl = ClassLoader.getSystemClassLoader()
It means the system classloader is used to scan for classes.
Try using
class SomeClassFromYourCodeNotALibrary
val cl = SomeClassFromYourCodeNotALibrary::class.java.classLoader
That one will work stably, independent of the number of classloaders used in the application. The AWS Lambda runtime may have specific classloaders, for example.
If it does not work, try logging the classloader type and classpath, e.g. println(cl) and println((cl as? URLClassLoader)?.getURLs()?.joinToString(", "))
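For reference, here is the same fix sketched in Java, since Guava's ClassPath is a Java API (ConverterRegistry and AnchorClass are illustrative names of mine; the converters package comes from the question):

import java.util.Map;
import java.util.stream.Collectors;

import com.google.common.reflect.ClassPath;

public class ConverterRegistry {

    // A marker class from your own code, not from a library, so the scan uses
    // the classloader that actually loaded your deployment artifact.
    private static class AnchorClass {}

    public static Map<String, Class<?>> loadConverterClassMap() throws Exception {
        ClassLoader cl = AnchorClass.class.getClassLoader();
        return ClassPath.from(cl).getTopLevelClasses("converters").stream()
                .map(info -> info.load())
                .filter(c -> c.getSimpleName().endsWith("Converter"))
                .collect(Collectors.toMap(Class::getSimpleName, c -> c));
    }
}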

Reporting on multiple Selenium WebDriver jobs run from a Jenkins pipeline job

I have a pipeline job using a Groovy script set up to run multiple tests in "parallel", but I am curious how to get the reports unified.
I am coding my Selenium tests in Java and using TestNG and Maven.
When I look at the report in target/surefire-reports, the only thing there is the "last" test run of the "suite".
How can I get a report that combines all of the tests within the Pipeline parallel job?
Example Groovy code:
node() {
    try {
        parallel 'exampleScripts': {
            node() {
                stage('ExampleScripts') {
                    def mvnHome
                    mvnHome = tool 'MAVEN_HOME'
                    env.JAVA_HOME = tool 'JDK-1.8'
                    bat(/"${mvnHome}\bin\mvn" -f "C:\workspace\Company\pom.xml" test -DsuiteXmlFile=ExampleScripts.xml -DenvironmentParam="$ENVIRONMENTPARAM" -DbrowserParam="$BROWSERPARAM" -DdebugParam="false"/)
                } // end stage
            } // end node
        }, // end parallel
        'exampleScripts2': {
            node() {
                stage('ExampleScripts2') {
                    def mvnHome
                    mvnHome = tool 'MAVEN_HOME'
                    env.JAVA_HOME = tool 'JDK-1.8'
                    bat(/"${mvnHome}\bin\mvn" -f "C:\workspace\Company\pom.xml" test -DsuiteXmlFile=ExampleScripts2.xml -DenvironmentParam="$ENVIRONMENTPARAM" -DbrowserParam="$BROWSERPARAM" -DdebugParam="false"/)
                } // end stage
            } // end node
            step([$class: 'Publisher', reportFilenamePattern: 'C:/workspace/Company/target/surefire-reports/testng-results.xml'])
        } // end parallel
There is a little more to this code after this, in terms of emailing the test runner the results and such.
This works great, other than the reporting aspect.
I prefer to use ExtentReports because it has an ExtentX server that allows you to report on multiple different test reports.
I used to use ReportNG, but development on that stalled, so I don't recommend it any more. It doesn't allow you to combine reports anyway.
Other than that, you could use Couchbase or a similar JSON database to store test results and then generate your own report from that information.

JaCoCo branch coverage try with resources

I have a method that I am trying to unit-test. I cannot post the actual code, but it looks like this:
public int getTotal() throws MyException {
    int total = 0;
    try (ExternalResource externalResource = ExternalService.getResource()) {
        try (OtherExternal otherResource = externalResource.getOtherResource()) {
            if (someCondition) {
                total = otherResource.getTotal();
            }
        }
    }
    return total;
}
JaCoCo is telling me that I am missing 4/8 branches on each of the try-with-resources blocks. I am testing both someCondition being true and someCondition being false, and JaCoCo shows that block completely covered.
I read this question, and I understand from the accepted answer that the issue is in how the bytecode is generated.
I would like to better understand how to identify the various branches that are generated, so I can make a better judgement on whether to test them or not (are they unreachable, etc.).
Per the change history in version 0.8.2:
Branches and instructions generated by javac 11 for try-with-resources statement are filtered out
I've tested this out locally using openjdk java8, and my try-with-resources now reports 100% branch coverage (even though the IOException is never thrown in my tests).
While it is good to test this behavior out, there are times when you can't easily reproduce such exceptions. For instance, in a method that just returns an open port:
public int getOpenPort() throws IOException {
    try (ServerSocket boundSocket = new ServerSocket(0)) {
        return boundSocket.getLocalPort();
    }
}
I know of no simple way to force this code to throw an IOException without adding a bunch of confusing and unnecessarily complicated code just to pass a branch-coverage check. Luckily, the new (v0.8.2) JaCoCo library gives this method 100% coverage with a single test that just calls Assert.assertNotEquals(0, portChecker.getOpenPort());.
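If you did want to exercise the exception branch anyway, one option is to make the socket acquisition injectable so a test can force the IOException. This is my own sketch, not part of the original answer; PortChecker and SocketFactory are illustrative names:

import java.io.IOException;
import java.net.ServerSocket;

public class PortChecker {

    // java.util.function.Supplier cannot throw a checked exception, so we
    // declare a tiny functional interface for the socket factory.
    interface SocketFactory {
        ServerSocket open() throws IOException;
    }

    private final SocketFactory factory;

    public PortChecker() {
        this(() -> new ServerSocket(0)); // production default: bind any free port
    }

    PortChecker(SocketFactory factory) { // package-private, for tests
        this.factory = factory;
    }

    public int getOpenPort() throws IOException {
        try (ServerSocket boundSocket = factory.open()) {
            return boundSocket.getLocalPort();
        }
    }
}

A test can then do new PortChecker(() -> { throw new IOException("no port"); }) and assert that the exception propagates. Whether that indirection is worth it is exactly the judgement call above; with JaCoCo 0.8.2+ the synthetic try-with-resources branches are filtered out anyway.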
You have to test every exception and every condition, but JaCoCo sometimes fails to correctly identify what is covered and what is not.

Cucumber-JVM Step definitions

After creating my feature file in Eclipse, I run it as a Cucumber feature. I use the step-definition snippet the console gives me to create the first version of the test file:
@Given("^the input is <(\\d+)> <(\\d+)>$")
These snippets should be output by the console; however, it currently shows the feature without the step definitions.
Feature: this is a test
    this test is to test if this test works right

  Scenario: test runs # src/test/resources/Test.feature:4
    Given: i have a test
    When: i run the test
    Then: i have a working test

0 Scenarios
0 Steps
0m0,000s
This feature is just to check whether Cucumber is working properly.
The runner:
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    dryRun = false,
    format = "pretty",
    features = "src/test/resources/"
)
public class RunCukes {
}
What can be the cause of the console not showing all the info?
TL;DR: the console does not show the step regex for missing steps.
EDIT: added feature file
Feature: this is a test
    this test is to test if this test works right

  Scenario: test runs
    Given: i have a test
    When: i run the test
    Then: i have a working test
The problem is in the feature file: using : after Given, When and Then is the problem. I was able to reproduce your issue with your feature file, but when I removed the : and ran the feature file with the same runner options provided above, I got the regex snippets for the missing step definitions.
P.S. I am using IntelliJ, but I don't think it makes a difference.
Feature: this is a test
    this test is to test if this test works right

  Scenario: test runs # src/test/resources/Test.feature:4
    Given i have a test
    When i run the test
    Then i have a working test
Below is what I got:

Testing started at 19:12 ...
Undefined step: Given i have a test
1 Scenarios (1 undefined)
3 Steps (3 undefined)
0m0.000s
Undefined step: When i run the test

You can implement missing steps with the snippets below:

@Given("^i have a test$")
public void i_have_a_test() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@When("^i run the test$")
public void i_run_the_test() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@Then("^i have a working test$")
public void i_have_a_working_test() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

Undefined step: Then i have a working test
1 scenario (0 passed)
3 steps (0 passed)
Process finished with exit code 0
It can happen if your .feature file is invalid somehow. I once had it happen just because I had two || together in the Examples table of my Scenario Outline.

Running an individual JUnit test from separate class [duplicate]

This question already has answers here:
Run single test from a JUnit class using command-line
(4 answers)
Closed 9 years ago.
I am trying to run tests from a separate class where information can be compiled and reported. I am having difficulty running individual tests, however.
I tried:
for (int i = 0; i < testRuns; i++) {
    JUnitCore.runClasses(InternetExplorerTestClass.class, MozillaFirefoxTestClass.class, GoogleChromeTestClass.class);
}
but that limits the control I have over the results and reporting the data.
How do I run a single test from a test suite? Thank you in advance.
It looks like you are running something like a Selenium test. If you use Gradle as your build tool, you can easily run one specific test by using the include filter option, like so (you could do something similar with Ant, SBT, or Maven as well). Personally, I think using the build tool to pick the tests to run is more elegant than writing code to run certain classes.
tasks.withType(Test) {
    jvmArgs '-Xms128m', '-Xmx1024m', '-XX:MaxPermSize=128m'
    maxParallelForks = 4
    // System properties passed to tests (if not http://localhost:8001/index.html)
    systemProperties['testProtocol'] = 'http'
    systemProperties['testDomain'] = 'djangofan.github.io'
    systemProperties['testPort'] = 80
    systemProperties['testUri'] = '/html-test-site/site'
    systemProperties['hubUrl'] = 'localhost'
    systemProperties['hubPort'] = '4444'
}

task runParallelTestsInFirefox(type: Test) {
    description = 'Runs all JUnit test classes in parallel threads.'
    include '**/TestHandleCache*.class'
    testReportDir = file("${reporting.baseDir}/ParallelTestsFF")
    testResultsDir = file("${buildDir}/test-results/ParallelTestsFF")
    // System properties passed to tests
    systemProperties['browserType'] = 'firefox'
    // initial browser size and position
    systemProperties['windowXPosition'] = '100'
    systemProperties['windowYPosition'] = '40'
    systemProperties['windowWidth'] = '400'
    systemProperties['windowHeight'] = '600'
}
This is taken from an example project I wrote here.
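For the programmatic control over results that the question mentions, JUnit 4 can also run a single test method directly with Request.method. Here is a minimal sketch (GoogleChromeTestClass is from the question; the test method name is a placeholder):

import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class SingleTestRunner {
    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();
        // Run exactly one test method from one test class.
        Result result = core.run(Request.method(GoogleChromeTestClass.class, "testHomePageLoads"));
        System.out.printf("Ran %d test(s), %d failed, in %d ms%n",
                result.getRunCount(), result.getFailureCount(), result.getRunTime());
        // Each Failure carries the test description and the thrown exception,
        // which you can compile into your own report.
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.getTestHeader() + ": " + failure.getMessage());
        }
    }
}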
