I am working on a Cucumber and Selenium project and am trying to run the tests with a JUnit test runner. Here is the complete code (make sure you have Lombok set up in your IDE). And here is my test runner:
@RunWith(Cucumber.class)
@CucumberOptions(
        features = {"src/test/resources/features"},
        monochrome = true,
        plugin = {
                "pretty",
                "com.cucumber.listener.ExtentCucumberFormatter:target/report.html"
        },
        tags = {"@all"},
        glue = {"stepDefinitions"}
)
public class TestRunnerJUnit {

    @AfterClass
    public static void setup() {
        Reporter.setSystemInfo("User Name", System.getProperty("user.name"));
        Reporter.setSystemInfo("Time Zone", System.getProperty("user.timezone"));
        Reporter.setSystemInfo("Machine", "Windows 10 " + "64 Bit");
        Reporter.setTestRunnerOutput("Sample test runner output message");
    }
}
The point is, when I run the test using the test runner, it finds the feature file but does not find any Scenarios inside it. Here is the output of the run:
@all
Feature:
As a customer, I should be able to register for insurance.
0 Scenarios
0 Steps
0m0.000s
If I run the test directly from the feature file (by right-clicking on it and selecting Run As → Cucumber Feature), it works well. But I need to run my tests through the test runner.
I had a chance to pull your code base and look into the code.
The issue you are running into is with the Extent library: you are providing the feature name on a new line, which the Extent library can't understand. Write the feature name on the same line and it should solve your problem:
Feature: As a customer, I should be able to register for insurance.
I also suggest you move to a newer version of the Cucumber libraries (Cucumber-JVM v4+), which have concurrent execution support under a single JVM. The library you are currently using will spin up multiple JVM instances depending on your configuration.
Related
I have set up an embedded Mongo via Flapdoodle (de.flapdoodle.embed).
There are quite a lot of Mongo operations, hence I would like to run all of them as a suite and set up Mongo just once in the test suite.
Now when I run the test cases via mvn install, it seems to run them individually.
Is there a way to run the test cases only from the suite and not as a class?
baeldung.com describes the use of JUnit 5 Tags, which are very well suited for your case.
You can mark tests with two different tags:
@Test
@Tag("MyMongoTests")
public void testThatThisHappensWhenThatHappens() {
}

@Test
@Tag("MyTestsWithoutMongo")
public void testThatItDoesNotHappen() {
}
And execute either set in a suite, e.g.
@IncludeTags("MyMongoTests")
public class MyMongoTestSuite {
}
In your case, the tests could be categorized by whether Mongo is in the application context or not. So, theoretically, it might be possible to create a JUnit 5 Extension to add the tag. That would be the more complex solution though.
I have two methods in my test class:
@Test
@Stories("story1")
public void test01() {
}

@Test
@Stories("story2")
public void test02() {
}

@Test
@Stories("story1")
public void test03() {
}
To run the tests I'm using:
mvn clean test site
It will execute all tests. My question is: how do I execute only the tests with a specific user story (i.e. story1)?
I know in python it can be done by
py.test my_tests/ --allure_stories=story1
But I don't know how to do it in Java using Maven.
In Java there is no need for Allure to do this sort of thing, because you can do it with your test runner, e.g. TestNG.
Just create a listener or a @BeforeSuite method which checks an environment variable (e.g. -DallureStories) and matches it against the ITestContext to disable the tests that are not in your stories list.
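The matching logic behind such a listener can be sketched in plain Java. In this sketch, @Stories is a local stand-in for Allure's annotation and allureStories is a hypothetical system property name; the class names are illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Arrays;

public class StoryFilterDemo {

    // Local stand-in for Allure's @Stories annotation, just for this demo.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Stories {
        String[] value();
    }

    static class MyTests {
        @Stories("story1") public void test01() {}
        @Stories("story2") public void test02() {}
        @Stories("story1") public void test03() {}
    }

    public static void main(String[] args) {
        // e.g. mvn clean test -DallureStories=story1 would set this property
        String wanted = System.getProperty("allureStories", "story1");
        for (Method m : MyTests.class.getDeclaredMethods()) {
            Stories stories = m.getAnnotation(Stories.class);
            boolean enabled = stories != null
                    && Arrays.asList(stories.value()).contains(wanted);
            System.out.println(m.getName() + " enabled=" + enabled);
        }
    }
}
```

In a real TestNG setup you would put this check inside an IAnnotationTransformer (or a suite listener) and call annotation.setEnabled(false) for the non-matching tests.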
I am attempting to gather up all the files my tests generate (log file, screenshots, cucumber report etc) and send them via email. I'm doing this from the Runner class using JUnit's @AfterClass annotation.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/Features",
        glue = {"stepDefinition"},
        tags = {"@Teeest"},
        monochrome = true,
        strict = false,
        plugin = { /* "pretty", */ "html:target/cucumber-html-report",
                "json:target/cucumber-html-report/cucumber-json-report.json" })
public class TestRunner {

    @AfterClass
    public static void sendReport() {
        SomeClass.sendMail();
    }
}
Everything works fine except for the Cucumber reports (both HTML and JSON), which are blank. When I manually check them later, they look good, so I'm assuming they are generated sometime after this method is executed.
Does anyone have an idea of how I can get around this issue?
I'm thinking of either getting the Cucumber Reports plugin for Jenkins, writing a shell script to execute via Maven POM or a separate Java app that looks for the files, zips and sends the email.
All 3 of these ideas come with drawbacks so I'd really love another hook-type approach to this, if possible, as file locations have dynamic names and would be a lot easier to get the actual locations within the test suite than to look for them afterward.
Thanks!
PS: I am junior at this stuff so please don't hold back on details :)
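One hook-type approach worth considering (a sketch, not an official Cucumber feature) is a JVM shutdown hook: it runs only after all non-daemon threads, including the test runner and its report-writing plugins, have finished, so the report files should be complete by then. A minimal, self-contained illustration of the mechanism:

```java
public class ReportMailerHook {

    public static void main(String[] args) {
        // Register once, e.g. from a static initializer in the runner class.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // Runs while the JVM is shutting down, i.e. after the test
            // runner has finished writing its report files.
            System.out.println("collecting files and sending mail...");
        }));
        System.out.println("tests running");
    }
}
```

The "collecting files..." line is a stand-in for the actual SomeClass.sendMail() call; keep the hook's work short, since shutdown hooks are expected to finish quickly.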
I have a @Parameterized JUnit test that spawns 50 tests:
@RunWith(Parameterized.class)
public class NurseRosteringSolveAllTurtleTest ... {

    @Parameterized.Parameters(name = "{index}: {0}")
    public static Collection<Object[]> getSolutionFilesAsParameters() {
        return ... // returns 50 Files.
    }

    public NurseRosteringSolveAllTurtleTest(File unsolvedDataFile) {
        ...
    }

    ...

    @Test
    public void solveDataFile() {
        ...
    }
}
Running it takes an hour (and it's impossible to shorten that time, they are integration tests). Test 28 fails.
How do I run test 28 alone, without running the other 49 tests? Without changing the actual code, by simply configuring a -D or something similar in IntelliJ's (or Eclipse's) run configuration.
I just tested this in Eclipse with a simple parameterized test that always fails on test #4. One is able to right-click on the failed test and select Run. Only that test then executes.
Result:
Frustratingly, I can't see what Eclipse did to solve the problem. Nothing is apparently altered in the run configuration. In particular, if you select to run the configuration a second time, it executes all the tests.
Some further testing shows that Eclipse will regenerate all 10 parameter values, but only uses the 4th value. (This was determined by embedding a print statement in the @Parameters method.)
Eclipse is now (as of the Mars M4 release) able to run not just a single test from the Parameterized test class but any kind of subtree.
This can be:
all methods for a single data set as returned by the @Parameters method
all data sets for a single @Test method
And as already mentioned, the test can also be specified by entering the test's name into the "Method" text field within the launch configuration. There will be a marker indicating that the method doesn't exist, but the test will run anyway.
See this blog post for details.
Not sure if it will help, but you can try a trick I used with Eclipse and JUnit parameterized tests.
In the JUnit launch configuration, in the "Test method" field, you can write the full name of the parameterized test; in your example it should be something like 'solveDataFile[28: /path/to/your/file]'. Eclipse will complain that the method does not exist but will still launch it successfully.
For a subset of tests, e.g. tests 27 and 28, just add:
`.subList( startInclusive, stopExclusive );`
before returning your parameters collection.
For non-consecutive subsets, note that Arrays.asList returns a fixed-size list, so copy it into an ArrayList before adding another range to it:
Collection<Object[]> c = new ArrayList<>( Arrays.asList( data ).subList( startInclusive, stopExclusive ) );
c.addAll( Arrays.asList( data ).subList( otherStart, otherStop ) );
return c;
Similarly to Miguel's answer, if you are using the JUnit 5's
@ParameterizedTest
@CsvFileSource(resources = arrayOf("/sender.csv"))
you can go to your csv file and "comment out" some lines by prepending the # character to them.
Is there a load testing framework that I could use where I can supply my own Java class and test the performance of that class? Basically the framework would spawn threads, record when those threads finish running, and then generate a report with the final results.
Apache JMeter is exactly the project you want. You can point it at a running process or have it spin up multiple threads, each starting a process. It will monitor the throughput, error rate, and anything else you are interested in, and render it all in a set of charts.
Take a look at Metrics (http://metrics.codahale.com/). You can use it to instrument your app and get interesting reports after a test-suite run, or even have them published to a metrics server.
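If all you need is the bare mechanism the question describes — spawn threads, time them, summarize — a hand-rolled sketch with java.util.concurrent is also possible (all class and method names here are illustrative, and a real framework like JMeter gives you far more):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {

    public static void main(String[] args) throws Exception {
        int threads = 8;
        int iterationsPerThread = 100;
        int calls = threads * iterationsPerThread;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> latencies = new ArrayList<>();

        long wallStart = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            latencies.add(pool.submit(() -> {
                long t0 = System.nanoTime();
                workUnderTest(); // replace with a call into the class under test
                return System.nanoTime() - t0;
            }));
        }

        long totalLatencyNanos = 0;
        for (Future<Long> f : latencies) {
            totalLatencyNanos += f.get(); // also propagates any failure
        }
        pool.shutdown();

        long wallNanos = System.nanoTime() - wallStart;
        System.out.println("calls=" + calls);
        System.out.printf("throughput=%.0f calls/s%n", calls / (wallNanos / 1e9));
        System.out.printf("mean latency=%.3f ms%n", (totalLatencyNanos / 1e6) / calls);
    }

    // Illustrative stand-in for the real work; burns a little CPU.
    static void workUnderTest() {
        double x = 0;
        for (int i = 0; i < 10_000; i++) {
            x += Math.sqrt(i);
        }
        if (x < 0) { // never true; keeps the loop from being optimized away
            throw new IllegalStateException();
        }
    }
}
```

This only shows the mechanics; for percentiles, charts, and proper reporting, the frameworks mentioned in the answers are the better choice.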
Assuming you have a Java class and a Test method like below:
import org.junit.Test;

public class AnyTestEndPoint {

    @Test
    public void anyTestMethod() throws Exception {
        // your code goes here for a single user
    }
}
Your test above can be fed to the load generator with the following configs.
You can spawn virtual users from a simple properties config file like the one below.
# my_load_config.properties
#############################
number.of.threads=50
ramp.up.period.in.seconds=10
loop.count=1
In the above config, number.of.threads represents the number of virtual users to be ramped up concurrently.
Then your load test looks like below, pointing at the above test:
@LoadWith("my_load_config.properties")
@TestMapping(testClass = AnyTestEndPoint.class, testMethod = "anyTestMethod")
@RunWith(ZeroCodeLoadRunner.class)
public class LoadTest {
}
This can be achieved for JUnit4 load generation and JUnit5 load generation. See the running examples in the HelloWorld GitHub repo.
You could try JUnit or TestNG. I have used them in the past; not sure if that's exactly what you are looking for.