I'm struggling to get the new parallel execution feature of Cucumber-JVM v4.0.0 working.
As discussed here, an argument can be passed via the CLI to enable the multi-threading options.
However, when I run the command below, it is accepted and the tests run, but still only one test at a time.
mvn clean test -Dcucumber.options="--threads 4" -Dbrowser=chrome
I'm either overestimating the out-of-the-box functionality or, more likely, missing some other key configuration, or just completely misunderstanding.
Has anyone had any luck in getting this working?
EDIT: Sorry, I forgot to mention: it does state that dependency injection has to be used to share state between steps in order for parallel execution to work. Just to confirm, I'm using PicoContainer to manage dependency injection.
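For reference, the state sharing looks roughly like this (a simplified sketch; TestContext and SearchSteps are made-up names):

// Shared state object: cucumber-picocontainer creates one instance per
// scenario and injects it into every step class whose constructor asks for it.
public class TestContext {
    public String lastResult;
}

public class SearchSteps {
    private final TestContext context;

    public SearchSteps(TestContext context) { // constructor injection via PicoContainer
        this.context = context;
    }
}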
You cannot use this functionality with Maven. With Maven you need to use the 'parallel' options in JUnit or TestNG etc. Refer to the links for them in the same article.
This option is for running the feature files directly from the command line using the cucumber.api.cli.Main class. Refer to this - https://github.com/cucumber/cucumber-jvm/blob/v4.0.0/core/src/main/resources/cucumber/api/cli/USAGE.txt
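For example, a minimal sketch of driving that CLI class directly (the glue package and feature path below are placeholders):

import cucumber.api.cli.Main;

public class CliLauncher {
    public static void main(String[] args) throws Throwable {
        // --threads only applies when running through the CLI class,
        // not when the tests are launched via JUnit/surefire.
        Main.main(new String[] {
                "--threads", "4",
                "--glue", "com.example.stepdefs",  // placeholder glue package
                "src/test/resources/features"      // placeholder feature path
        });
    }
}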
Our team is starting a JUnit 5 project with Karate tests.
Currently we are using this as a template for our Karate test runner https://github.com/intuit/karate#junit-5-parallel-execution.
It allows us to pass in "target/surefire-reports", and then before the test finishes we call ReportBuilder.generateReports(). It is basically identical to this code https://github.com/intuit/karate/blob/b50202b3c8a8916a7db0f3d5196d42086ab80a04/karate-junit4/src/test/java/com/intuit/karate/mock/MockServerTest.java.
This works well, but while I was looking at how to set up JUnit 5 I noticed this very slick fluent API https://github.com/intuit/karate#junit-5.
It would be nice to use that syntax, but I can't get the Cucumber report generated like I can with Runner.parallel. I made sure the maven-surefire-plugin was in build.gradle (although I could have messed that up), but it didn't seem to help.
I also tried calling ReportBuilder.generateReports() and the related logic from the parallel execution example in the @AfterAll function, but couldn't get that working either. The errors suggested that the target/surefire-reports folder didn't exist.
Is the cucumber report supported in the second example? If so, is there a trick to getting it setup?
Great question. The reason we de-couple the JUnit execution and the parallel runner is that JUnit is more useful in development mode, where you expect detailed pass/fail stats in the IDE, for example. But this would be unnecessary overhead in "CI mode".
That said, we have put in some work on making the Parallel runner a fluent interface, so great timing :) You can find an example on line 57 here.
May I request you to try the develop branch and see if you are missing anything? Building is easy; here are some instructions: https://github.com/intuit/karate/wiki/Developer-Guide
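For reference, the fluent style looks roughly like this (a sketch assuming the 0.9.x fluent Runner API plus the cucumber-reporting library; names may differ on the develop branch):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import net.masterthought.cucumber.Configuration;
import net.masterthought.cucumber.ReportBuilder;
import org.apache.commons.io.FileUtils;
import org.junit.jupiter.api.Test;

import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ParallelRunnerTest {

    @Test
    void testParallel() {
        // Fluent parallel runner; JSON results land in the report directory.
        Results results = Runner.path("classpath:features").tags("~@ignore").parallel(4);
        generateReport(results.getReportDir());
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    // Feeds the cucumber JSON files to the cucumber-reporting library.
    static void generateReport(String reportDir) {
        Collection<File> jsonFiles = FileUtils.listFiles(new File(reportDir), new String[]{"json"}, true);
        List<String> jsonPaths = new ArrayList<>(jsonFiles.size());
        jsonFiles.forEach(file -> jsonPaths.add(file.getAbsolutePath()));
        Configuration config = new Configuration(new File("target"), "karate");
        new ReportBuilder(jsonPaths, config).generateReports();
    }
}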
I'm new to BDD and particularly Cucumber.
Can I get a feature and its steps from a variable? Also, I want to get a feature and its steps from a test tracker (TestRail) before the run, based on a special selection of tests, put them in a list, and then get the scenarios and run them one by one.
Is there such a possibility? Should I use Cucumber or another framework for this?
No, you can't define a Cucumber scenario in code (or at least not in a supported way). But if you were going to write code to get a scenario and its steps from your test tracker and run it, you could equally well write code to put the scenario and its steps in files and run the scenario with the cucumber executable.
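A rough sketch of that approach (the tracker call, case id, and glue package here are invented):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TrackerScenarioRunner {

    public static void main(String[] args) throws Throwable {
        // Hypothetical: fetch Gherkin text stored in a TestRail case field.
        String gherkin = fetchGherkinFromTestRail("C1234");

        // Write it to a temporary .feature file...
        Path dir = Files.createTempDirectory("tracker-features");
        Path feature = dir.resolve("C1234.feature");
        Files.write(feature, gherkin.getBytes(StandardCharsets.UTF_8));

        // ...and hand it to the Cucumber CLI with your existing glue code.
        cucumber.api.cli.Main.main(new String[] {
                "--glue", "com.example.stepdefs", // placeholder
                feature.toString()
        });
    }

    private static String fetchGherkinFromTestRail(String caseId) {
        // Placeholder for a real TestRail API call.
        return "Feature: from tracker\n  Scenario: example\n    Given a step that exists in the glue code\n";
    }
}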
I don't know of a Java testing framework in which you can define tests dynamically. You could do that in Ruby with RSpec or (less cleanly) minitest. But I don't know whether a Ruby test framework would be acceptable, or whether it would be OK for the people writing entries in your test tracker to have to read and/or write RSpec examples. (It seems strange to have Cucumber step definitions in a test tracker, too; having features in a test tracker seems more reasonable, aside from the question of how to run them.)
I have a Cucumber test runner class in which I define my test suite, like below:
@CucumberOptions(
    features = {"Feature_Files/featues"},
    glue = {"com.automation.stepdef"},
    monochrome = true,
    dryRun = false,
    plugin = {"html:target/cucumber-html-report"},
    tags = {"@Startup"}
)
If I wish to customize this tag option on successful completion of the @Startup feature, is that possible?
The most common way of running two or more dependent test suites is to create triggers for two or more jobs in your CI. This can be done with various plugins as described here.
Otherwise, if these are test-preparation actions, you can use @Before or the related JUnit @BeforeClass annotation.
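For instance, with plain JUnit 4 (a minimal sketch; the class name and the check itself are placeholders):

import org.junit.BeforeClass;
import org.junit.Test;

public class FunctionalFeaturesTest {

    @BeforeClass
    public static void verifyPreconditions() {
        // Runs once before any test in this class; throwing here
        // prevents the rest of the class from running.
    }

    @Test
    public void someScenario() {
    }
}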
This seems not possible with current Cucumber. What you are asking for is dependencies among test scenarios, which IMO would be a very good feature. For example, we have a login feature and some other functional features. It would not make any sense, and would actually be a waste of time, to run the other features if the login feature does not work in the first place. To make things worse, you will see a lot of failures in the test report, in which you cannot easily spot the root cause: the non-working login feature.
TestNG supports the "dependsOnMethods" feature. However, TestNG is not a BDD tool.
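For illustration, a minimal TestNG sketch (method names invented):

import org.testng.annotations.Test;

public class PurchaseFlowTest {

    @Test
    public void login() {
    }

    // Skipped (not failed) automatically if login() fails.
    @Test(dependsOnMethods = {"login"})
    public void checkout() {
    }
}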
QAF https://qmetry.github.io/qaf/qaf-2.1.7b/scenario.html#meta-data supports this as a BDD tool. However, it would be too heavy to introduce a new tool for such a simple feature.
All we need is some addition to the Cucumber syntax and a customized test runner to build up the scenarios' execution order as per the dependencies, and to skip features if a feature they depend on fails.
I would love to see if someone can put some effort into this :)
BTW, CI could work around this issue, but again it's too heavy and clumsy. Imagine you have multiple dependencies among test scenarios: how many CI pipelines would you need then? Also, you cannot work around this in a local dev environment with CI, simply because you would not set up CI locally.
I have a maven project with test execution by the maven-surefire-plugin. An odd phenomenon I've observed and been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I push this application to our remote repo, which Jenkins then attempts to build, I get all sorts of random EasyMock errors, typically of this sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded. at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is a legacy application we have inherited, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly, but I'm in a state where test execution gives me a successful build locally but not in Jenkins.
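For context, that particular message usually comes from mixing literal arguments with argument matchers in a single expectation; and because EasyMock records matchers in static (thread-local) state, a mistake in one test can surface as an error in whichever test runs next, which makes the failures move around with execution order. An illustrative sketch (not the actual code):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.eq;
import static org.easymock.EasyMock.expect;

public class MatcherMixupExample {

    interface Repository {
        String find(String type, int id);
    }

    public static void main(String[] args) {
        Repository repo = createMock(Repository.class);

        // Broken: one literal and one matcher, so the recorded matcher count
        // no longer lines up with the argument count:
        // expect(repo.find("user", eq(42))).andReturn("u42");

        // Fixed: all arguments use matchers.
        expect(repo.find(eq("user"), eq(42))).andReturn("u42");
    }
}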
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can introspect what is different in the Jenkins build pipeline vs. local to help identify the issue?
Is there anything I can do to force the tests to execute the way they do locally? At this point I have simply excluded many troublesome test classes, but it seems that no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, it just complains about some other test class it didn't mention before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause of mine turned out to be concurrency problems in the tests' implementations.
And, after reading your comment:
What I actually did that fixed it (like magic am I right?) is for the maven-surefire plugin, I set the property reuseForks=false, and forkCount=1C, which is just 1*(number of CPU's of machine).
... I get more convinced that you have concurrency problems in your tests. Concurrency is not easy to diagnose, especially when your experiment runs OK on one CPU. But race conditions might arise when you run on another system (which is usually faster or slower).
I strongly recommend you review your tests one by one and ensure that each of them is logically isolated:
They should not rely upon an expected previous state (files, database, etc). Instead, they should prepare the proper setup before each execution.
If they concurrently modify a common resource which might interfere with another test's execution (files, database, singletons, etc.), every assert must be done with as much synchronization as needed, taking into account that the resource's initial state is unknown:
Wrong test:
// Assumes the singleton was empty beforehand, which another
// concurrently running test may have changed:
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test:
// Locks the shared resource and asserts only what this test added:
synchronized (MySingleton.getInstance())
{
    MySingleton.getInstance().put(myObject);
    assertTrue(MySingleton.getInstance().contains(myObject));
}
A good starting point for the review is to take one of the failing tests and track the execution backwards to find the root cause of the failure.
Explicitly setting the tests' order is not a good practice, and I wouldn't recommend it to you even if I knew it was possible, because it would only hide the actual cause of the problem. Remember that, in a real production environment, the execution order is not usually guaranteed.
JUnit test run order is non-deterministic.
Are the versions of Java and Maven the same on the 2 machines? If yes, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the proper Jenkins build type can either fix build problems outright or give you a better error so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. And, if the app uses caching, ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). Also consider running the tests one package at a time to isolate the culprits; multiple surefire plugin executions might be useful.
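For example, a self-cleaning test shape (JUnit 4 sketch; the in-memory store is invented for illustration):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.util.HashSet;
import java.util.Set;

import static org.junit.Assert.assertEquals;

public class UserStoreTest {

    // Invented stand-in for whatever resource the real tests share.
    static class InMemoryStore {
        private final Set<String> items = new HashSet<>();
        void put(String s) { items.add(s); }
        void clear() { items.clear(); }
        int size() { return items.size(); }
    }

    private InMemoryStore store;

    @Before
    public void setUp() {
        store = new InMemoryStore(); // fresh state for every test
        store.put("alice");
    }

    @After
    public void tearDown() {
        store.clear(); // leave nothing behind for the next test
    }

    @Test
    public void findsSeededUser() {
        assertEquals(1, store.size());
    }
}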
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.
I want to run my unit tests automatically when I save my Eclipse project. The project is built automatically whenever I save a file, so I think this should be possible in some way.
How do I do it? Is the only option really to get an Ant script and change the project build to use the Ant script with 'build' and 'compile' targets?
Update: I will try two different approaches now:
Running an additional builder for my project that executes the Ant target 'test' (I have an Ant script anyway)
ct-eclipse, recommended by Thorbjørn
For sure it is unwise to run all tests, because we can have, for example, 20,000 tests whereas our change could affect only, let's say, 50 of them: tests for the class we have changed and tests for classes that collaborate with our class.
There is a useful plugin called Infinitest http://improvingworks.com/products/infinitest/ which runs only some tests (those related to the class we've changed) just after we save our changes. It also integrates quite nicely with the editor (using annotations) and the problem view, displaying failing tests like errors.
Right-click on your project > Properties > Builders > New, and there add your Ant builder.
But, in my opinion, it is unwise to run the unit tests on each save.
See if Eclipse has a plugin for Infinitest.
I'd also consider TestNG as an alternative to JUnit. It has a lot of features that might be helpful in partitioning your unit test classes into shorter and longer running groups.
I believe you are looking for http://ct-eclipse.tigris.org/
I've experimented with the concept earlier, and my personal conclusion was that in order for this to be useful you need a lot of tests which take time. Personally I save very frequently so this would happen frequently, and I didn't find it to be an advantage. It might be different for you.
Instead we bit the bullet and set up a "build server" which watches our CVS repository and builds projects as they change. If the compilation fails or the tests fail we are notified quickly so we can remedy it.
It is as always a matter of taste what works for you. This is what I've found.
I would recommend Infinitest for the described situation. Infinitest is nowadays a GPL v3 licensed product. Eclipse update site: http://infinitest.github.com
Then you must use Infinitest. Infinitest helps you do continuous testing.
Whenever you make a change, Infinitest runs tests for you.
It selects tests intelligently, and only runs the ones you need. It reports unit test failures like compiler errors, and provides additional information that helps you write better tests.