I have two FitNesse suites which are mutually exclusive, and I want to run them in parallel.
As they are invoked from a JUnit test case, I have written the following piece of code:
@Test
public void executeFitnesseSuites() {
    final Class<?>[] classes = { Suite1.class, Suite2.class };
    final Result result = JUnitCore.runClasses(ParallelComputer.classes(), classes);
    System.out.println(result);
}

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("Suite1")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("/tmp/fitnesse/")
public static class Suite1 {
}

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("Suite2")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("/tmp/fitnesse/")
public static class Suite2 {
}
In the earlier implementation, these were two independent classes and were being executed sequentially.
However, I am seeing roughly the same execution time for the test above as in the sequential run.
Does this mean that FitNesse is not spinning up two Slim server instances and executing these suites in parallel?
Unfortunately FitNesse itself is not thread-safe, so one should not run two Slim server instances in one JVM at the same time.
I'm not sure how JUnit behaves with the approach you use: does it spin up two parallel JVMs, or just two threads in the same JVM?
An approach I've used in the past to run two completely independent suites with JUnit is to have two separate classes (as you had before) and run them in parallel in separate JVMs using Maven's failsafe plugin. Failsafe (and surefire as well) offers a forkCount property to specify the number of processes to use (see http://maven.apache.org/surefire/maven-failsafe-plugin/examples/fork-options-and-parallel-execution.html for more details). Please note that you should NOT use the parallel property, as that parallelizes test execution within one JVM.
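For illustration, a minimal failsafe configuration along those lines; the plugin version and the forkCount value are only an example, and the suite classes are expected to follow failsafe's naming conventions (e.g. *IT):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <!-- up to two concurrently forked JVMs, each running one suite class -->
        <forkCount>2</forkCount>
        <reuseForks>false</reuseForks>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>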
If you are running tests in parallel using FitNesse's JUnit runner, you may also be interested in a tool I created to combine the HTML reports of such runs into a single report: HtmlReportIndexGenerator. This is part of my fixtures jar, but it is also available as a separate Docker image: hsac/fitnesse-fixtures-combine.
Related
I want to run the same Cucumber tests in multiple threads. More specifically, I have a set of features, and running these features in one thread works fine. I use the JSON formatter to record the running time of each step. Now I want to do a load test: I care more about the running time of each feature/step in a multi-threaded environment. So I create multiple threads, and each thread runs on the same feature set. Each thread has its own JSON report. Is this possible in theory?
For some project setup reason I cannot use the JUnit runner, so I have to resort to the CLI way:
long threadId = Thread.currentThread().getId();
String jsonFilename = String.format("json:run/cucumber%d.json", threadId);
String argv[] = new String[]{
"--glue",
"com.some.package",
"--format",
jsonFilename,
"d:\\features"};
// Do not call Main.run() directly. It has a System.exit() call at the end.
// Main.run(argv, Thread.currentThread().getContextClassLoader());
// Copied the same code from Main.run().
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
RuntimeOptions runtimeOptions = new RuntimeOptions(new Env("cucumber-jvm"), argv);
ResourceLoader resourceLoader = new MultiLoader(classLoader);
ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
runtime.writeStepdefsJson();
runtime.run();
I tried to create a separate thread for each Cucumber run. The problem is, only one of the threads has a valid JSON report. All the other threads just create empty JSON files. Is this by design in Cucumber, or is there something I missed?
We have looked into multi-threading cucumber tests under Gradle and Groovy using the excellent GPars library. We have 650 UI tests and counting.
We didn't encounter any obvious problems running cucumber-JVM in multiple threads but the multi-threading also didn't improve performance as much as we hoped.
We ran each feature file in a separate thread. There are a few details to take care of, like splicing together the Cucumber reports from different threads and making sure our step code was thread-safe. We sometimes need to store values between steps, so we used a ConcurrentHashMap keyed to the thread ID to store this kind of data:
import java.util.concurrent.ConcurrentHashMap

class ThreadedStorage {
    static private ConcurrentHashMap multiThreadedStorage = [:]

    static private String threadSafeKey(unThreadSafeKey) {
        def threadId = Thread.currentThread().toString()
        "$threadId:$unThreadSafeKey"
    }

    static private void threadSafeStore(key, value) {
        multiThreadedStorage[threadSafeKey(key)] = value
    }

    def static private threadSafeRetrieve(key) {
        multiThreadedStorage[threadSafeKey(key)]
    }
}
And here's the gist of the Gradle task code that runs the tests multi-threaded using GPars:
def group = new DefaultPGroup(maxSimultaneousThreads())
def workUnits = features.collect { File featureFile ->
    group.task {
        try {
            javaexec {
                main = "cucumber.api.cli.Main"
                ...
                args = [
                        ...
                        '--plugin', "json:$unitReportDir/${featureFile.name}.json",
                        ...
                        '--glue', 'src/test/groovy/steps',
                        "path/to/$featureFile"
                ]
            }
        } catch (ExecException e) {
            ++noOfErrors
            stackTraces << [featureFile, e.getStackTrace()]
        }
    }
}

// ensure all tests have run before reporting and finishing gradle task
workUnits*.join()
We found we needed to present the feature files in reverse order of execution time for best results.
The results were a 30% improvement on an i5 CPU, degrading above 4 simultaneous threads, which was a little disappointing.
I think the threads were too heavy for multi-threading on our hardware. Above a certain number of threads there were too many CPU cache misses.
Running concurrently on different instances using a thread-safe work queue like Amazon SQS now seems a good way forward, especially since it is not going to suffer from thread-safety issues (at least not on the test framework side).
It is non-trivial for us to test this multi-threading method on i7 hardware due to security constraints in our workplace, but I would be very interested to hear how an i7 with a larger CPU cache and more physical cores compares.
Not currently -- here is the issue you observe. I haven't found any way to parallelize by scenario.
Here's a nice write up on poor-man's concurrency. Just run multiple commands each selecting a different subset of your tests -- by feature or tag. I would fork a new JVM (as a JUnit driver would) rather than trying to thread it since cucumber was not designed for that. You have to balance them yourself, then figure out how to combine the reports. (But at least the problem is combining reports not corrupt reports.)
Supposedly you can run your Cucumber-JVM tests in parallel by using this Maven POM configuration from here: https://opencredo.com/running-cucumber-jvm-tests-in-parallel/
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.14</version>
    <executions>
        <execution>
            <id>acceptance-test</id>
            <phase>integration-test</phase>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <forkCount>${surefire.fork.count}</forkCount>
                <reuseForks>false</reuseForks>
                <argLine>-Duser.language=en -Xmx1024m -XX:MaxPermSize=256m -Dfile.encoding=UTF-8</argLine>
                <useFile>false</useFile>
                <includes>
                    <include>**/*AT.class</include>
                </includes>
                <testFailureIgnore>true</testFailureIgnore>
            </configuration>
        </execution>
    </executions>
</plugin>
In the above snippet, you can see that the maven-surefire-plugin is used to run our acceptance tests: any class whose name ends in *AT will be run as a JUnit test class. Thanks to JUnit, making the tests run in parallel is now a simple case of setting the forkCount configuration option. In the example project this is set to 5, meaning that we can run up to 5 forked processes (i.e., 5 runner classes) at a time.
Well, if you can find a way for Cucumber to output the scenario locations (i.e. feature_file_path:line_number_in_feature_file) for all the scenarios you want to run based on a given tag, then you can use GPars and Gradle to run the scenarios in parallel.
Step 1: In the first Gradle task, use the above approach to generate a text file (say scenarios.txt) containing the locations of all the scenarios that we want to execute.
Step 2: Next, read the contents of scenarios.txt generated in step 1 into a Groovy list, say scenariosList.
Step 3: Create one more task (a javaExec task), where we use GPars withPool in combination with scenariosList.eachParallel, and use the Cucumber main class and the other Cucumber options to run these scenarios in parallel. Here we provide a scenario location as the value of the "features" option, so that Cucumber runs only that scenario. There is also no need to provide any tag name, as we already have the list of scenarios we need to execute.
Note: You need a machine with a high configuration, such as a Linux server, because a new JVM instance is created per scenario; consider a cloud service like Sauce Labs to execute the scenarios so that you don't have to worry about the infrastructure.
Step 4: This is the last step. Every scenario run in step 3 will generate a JSON output file. You have to collate the output based on the feature names so as to generate one JSON file per feature file.
This solution sounds a bit complex, but with the right effort it can yield significant results.
I was writing JUnit tests, and I would like to know whether the tests within a test class can run in parallel.
class TestMyClass {
    @Test
    public void test1() {
    }

    @Test
    public void test2() {
    }
}
Will JUnit ever run test1() and test2() in parallel?
Consider TestNG if you are looking for parallel test execution.
Yes, you can. Take a look at this question for details on how to set that up. The correctness of your tests should not rely on this behaviour, though; your tests should run correctly whether they're run concurrently or not.
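One way to set this up in JUnit 4 is the experimental ParallelComputer; a minimal sketch, where the wrapper class is illustrative:
import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelMethodsExample {
    public static void main(String[] args) {
        // new ParallelComputer(classes, methods): the second flag runs the methods of
        // TestMyClass (test1, test2) on separate threads within the same JVM.
        Result result = JUnitCore.runClasses(new ParallelComputer(false, true), TestMyClass.class);
        System.out.println("Failures: " + result.getFailureCount());
    }
}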
I cannot directly answer whether JUnit will run them in parallel or not, but theoretically that shouldn't matter. The only thing you should keep in mind is the sequence of execution you can bet on, like
setup
execution of test
teardown
This should be enough, as each test should be completely independent of the others. If your tests depend on the order in which they're executed, or on whether they run in parallel, then you probably have some hidden dependencies between them.
No, because a fixture is set up before each test, and running tests in parallel could change the fixture state. I guess you could write a test executor to run tests in parallel.
Suppose I want to manually run 4000 JUnit tests from my IDE (IntelliJ IDEA or Eclipse); the first 1000 tests run pretty smoothly (say all 1000 take 3 minutes), but test 1001 alone takes over 30 minutes.
Is there a way I can skip test 1001 (while it's still running) and let test 1002 (and the others) keep going? I do not want to @Ignore test 1001 and rerun the suite, because I already have the answer for tests 1-1000; I also do not want to select only tests 1001-4000, because that takes too much time.
I would like some kind of button - Skip Current Test - which can be pressed while the test is running.
In case such a feature does not exist, would an enhancement for it need to be made by the IDE developers or by the JUnit developers?
This is actually pretty simple with JUnit 4 using Assume. Assume is a helper class like Assert. The difference is that Assert will make the test fail while Assume will skip it.
The common use case is Assume.assumeTrue( isWindows() ) for tests that only work on, say, a Windows file system.
So what you can do is define a system property skipSlowTests and add
Assume.assumeTrue( !Boolean.getBoolean("skipSlowTests") )
at the beginning of slow tests that you usually want to skip. Create an Eclipse launch configuration which sets the property to true, and you have a convenient way to switch between the two.
If you want to run a slow test, select the method in Eclipse (or the whole class) and use "Run as JUnit Test" from the context menu. Since the property is false by default, the tests will be run.
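For illustration, a minimal sketch of such a guarded slow test, assuming the skipSlowTests system property described above (the class and method names are made up):
import org.junit.Assume;
import org.junit.Test;

public class SlowAcceptanceTest {
    @Test
    public void test1001() {
        // When the JVM is started with -DskipSlowTests=true, the assumption fails and the test
        // is reported as skipped; otherwise it runs, since Boolean.getBoolean() defaults to false.
        Assume.assumeTrue(!Boolean.getBoolean("skipSlowTests"));
        // ... the slow test body ...
    }
}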
No, you cannot skip tests if they are already running.
What I suggest you do is use Categories to separate your slow tests from the rest of your tests.
For example:
public interface SlowTests {
}

public class MyTest {
    @Test
    public void test1() {
    }

    @Category(SlowTests.class)
    @Test
    public void test1001() {
        // this is a slow test
    }
}
Create a test suite for the fast tests.
@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses(MyTest.class)
public class FastTestSuite {
}
Now execute the FastTestSuite if you don't want to run the slow tests (e.g. test1001). Execute MyTest as normal if you want to run all the tests.
What you're asking for is to stop executing your code while it is mid-test. You can't stop executing the current test without having hooks in your code to allow it. Your best solution is to use Categories as others have suggested.
Basically, JUnit executes all of the @Before methods (including @Rules), then your @Test method, then the @After methods (again, including @Rules). Even assuming that JUnit had a mechanism for stopping execution of its bits of the code (which it doesn't), most of the time is spent in your code. So 'skipping' a test which has already started requires you to modify your test code (and potentially the code that it's testing) so that you can cleanly stop it. Cleanly stopping an executing thread is a question in itself [*].
So what are your options?
Run the tests in parallel, then you don't have to wait as long for the tests to finish. This may work, but parallelizing the tests may well be a lot of work.
Stop execution of the tests, and fix the one that you're working on. Most IDEs have an option to kill the JVM in which the tests are running. This is definitely the easiest option.
Implement your own test runner, which runs the test in a separate thread. This test runner then either waits for the thread to finish executing or checks a flag somewhere which signals it to stop. This sounds complicated, because you need to manage your threads and also to set the flag in a running JVM. Maybe by creating a file somewhere? This runner would then fail the currently running test, and you could move on to the next. Please note that 'stopping' a test midway may leave things in an inconsistent state, or you may end up executing stuff in parallel (a lighter-weight sketch of this idea follows below).
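A lighter-weight variant of that third option is JUnit 4's built-in Timeout rule, which already runs each test body on its own thread and abandons it after a deadline. Note that it fails the slow test rather than skipping it; the limit below is only an example (JUnit 4.12 constructor):
import java.util.concurrent.TimeUnit;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class BoundedTest {
    // Applies to every test in the class; a test exceeding the limit fails with a timeout,
    // so one slow test cannot hold up the rest of the run for 30 minutes.
    @Rule
    public Timeout perTestTimeout = new Timeout(5, TimeUnit.MINUTES);

    @Test
    public void possiblySlowTest() throws Exception {
        // ... test body ...
    }
}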
There are parallel JUnit runners out there, and I don't think you're going to get much help from IDE developers (at least in the short term). Also, look at TestNG, which allows stuff to be run in parallel.
For using categories, one solution I use is to run the long-running tests separately using Maven surefire or similar, not through the IDE. This involves checking out the source code somewhere else on my machine and building there.
[*]: Java, how to stop threads, Regarding stopping of a thread
I think a more common solution is to have two test suites: one for the fast tests and another for the slow ones. This is typically the way you divide unit tests (fast) and integration tests (slow).
It's highly unlikely that you'll get modifications to JUnit or IntelliJ for something like this. Better to change the way you use them - it'll get you to an answer faster.
You can modify your test and do something like
public void theTest(){
    if (System.getProperty("skipMyTest") == null){
        //execute the test
    }
}
and pass the system property (-DskipMyTest) if you want to skip the test.
I want to stop/destroy a running JUnitCore, which is started with
JUnitCore.run(Request.aClass(ClassToRun));
Like pleaseStop() on the RunNotifier.
Any ideas?
http://junit.sourceforge.net/javadoc/org/junit/runner/package-summary.html
Option 1:
The best option is to write your own Runner implementation, inheriting from org.junit.runners.BlockJUnit4ClassRunner, and declare it in your execution context, for instance as the main class of a raw Java command line.
Get inspired by the JUnit source code (it is really small), mainly org.junit.runners.ParentRunner, and override the runChildren method with your own so you get the opportunity to exit the execution loop when a stop command has been triggered.
The factory for Runner is Request. To start, you invoke
new JUnitCore().run(Request.runner(new MyStoppableRunner(ClassToRun)));
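Here is a rough sketch of what such a runner might look like; the stop flag and its handling are illustrative and not part of JUnit:
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class MyStoppableRunner extends BlockJUnit4ClassRunner {
    private static volatile boolean stopRequested = false;

    public MyStoppableRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    public static void requestStop() {
        stopRequested = true;
    }

    @Override
    protected void runChild(FrameworkMethod method, RunNotifier notifier) {
        if (stopRequested) {
            // Report the remaining methods as ignored instead of running them.
            notifier.fireTestIgnored(describeChild(method));
            return;
        }
        super.runChild(method, notifier);
    }
}
This checks the flag per test method via runChild; overriding ParentRunner's runChildren, as suggested above, achieves the same thing for the whole execution loop.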
Option 2:
If using your own runner is not possible in your context (for instance when launched from within Eclipse), you can register a RunListener with the runner in use and obtain a reference to the thread running your test case.
If a stop command has been triggered, your listener may throw a RuntimeException or even an Error, in the hope that it will make the original test runner give up.
Bonus
Both options are basic: the aim is simply to check a stop condition and refuse to keep looping over test methods or test classes.
You may also want to interrupt the test thread if it is stuck in a sleep or wait state. To do so, create a watchdog thread that invokes interrupt on the test thread after an inactivity timeout.
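For the second option combined with the watchdog idea, here is a rough sketch of a listener that remembers the thread running the current test so that something outside can interrupt it. The class and method names are illustrative, and whether the interrupt actually unblocks the test depends on what the test is blocked on:
import org.junit.runner.Description;
import org.junit.runner.notification.RunListener;

public class TestThreadCapturingListener extends RunListener {
    private volatile Thread testThread;

    @Override
    public void testStarted(Description description) {
        // For the standard runners the listener is notified on the thread that executes the test.
        testThread = Thread.currentThread();
    }

    public void interruptCurrentTest() {
        Thread t = testThread;
        if (t != null) {
            t.interrupt(); // only helps if the test is blocked in a wait/sleep/IO call that honours interrupts
        }
    }
}
Register an instance via JUnitCore.addListener(...) before calling run(...), and have the watchdog thread call interruptCurrentTest() after the inactivity timeout.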
As far as I know there's no such method available.
So the first answer would be: impossible...
But with Java bytecode enhancement frameworks nothing is really impossible...
So I could advise you to write a Java interface, Closeable or something like this... use the Java bytecode enhancement framework of your choice (ASM, BCEL, Javassist or any other) and enhance the JUnitCore class to implement this interface. Once done, you will be able to stop this facade for your tests...
public interface Closeable {
    public void stopMe();
}
Hi, bytecode enhancement is not the silver bullet of course... But forking and patching an open source project requires big changes to your project management... Who will remember this little patch 5 years and 3 releases later? Adding a small class that enhances the bytecode to fulfill your needs is a pragmatic, though as always not perfect, answer... I agree with Yves that trying to add the feature to JUnit would be the best solution, but that requires far more than technical knowledge... Of course you may encounter weird classloading problems while using such a technique... For integration testing I would suggest using TestNG rather than JUnit; it provides many enhancements while offering a compatibility layer...
HTH
Jerome
I want to provide another simple solution to stop a JUnitCore:
JUnitCore jUnitCore = new JUnitCore();
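// Note: "fNotifier" is the name of JUnitCore's private RunNotifier field in older JUnit 4 releases;
// in newer releases it may simply be called "notifier", so this reflection hack is version-dependent.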
Field field = JUnitCore.class.getDeclaredField("fNotifier");
field.setAccessible(true);
RunNotifier runNotifier = (RunNotifier) field.get(jUnitCore);
runNotifier.pleaseStop();
Credits to Matthew Farwell, who transferred my idea into code.
I needed to stop all running processes/threads because I was executing my test suite from a main method using java -jar test.jar within a Docker image, and otherwise I couldn't produce the correct exit code after the tests had finished. I went with this:
final JUnitCore engine = new JUnitCore();
engine.addListener(new TextListener(System.out));
final Result testsResult = engine.run(AllTestSuite.class);
if (testsResult.wasSuccessful()) {
    System.out.println("Tests complete with success!!!");
    System.exit(0);
}
System.out.println("Tests complete with " + testsResult.getFailureCount() + " failures!!!");
System.exit(1);
I'd like to know if there are some unit testing frameworks which are capable of writing multi-threaded tests easily?
I would imagine something like:
invoke a special test method from n threads at the same time, m times each. After all the test threads have finished, an assertion method where some constraints are validated would be invoked.
My current approach is to create Thread objects inside a JUnit test method, manually loop over the real test cases inside each run() method, wait for all threads, and then validate the assertions. But with this I have a large block of boilerplate code for each test.
What are your experiences?
There is ConTest, and also GroboUtils.
I used GroboUtils many years ago, and it did the job. ConTest is newer and would be my preferred starting point now, since rather than just relying on trial and error, its instrumentation forces specific interleavings of the threads, providing a deterministic test. In contrast, GroboUtils' MultiThreadedTestRunner simply runs the tests and hopes the scheduler produces an interleaving that causes the threading bug to appear.
EDIT: See also ConcuTest which also forces interleavings and is free.
There is also MultithreadedTC by Bill Pugh of FindBugs fame.
Just using the concurrency libraries would simplify your code. You can turn your boilerplate code into one method.
Something like
public static void runAll(int times, Runnable... tests) {
}
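A minimal sketch of how that helper could be implemented with java.util.concurrent (the pool size and the exception handling are one possible choice):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class ConcurrentTestSupport {

    // Submits every Runnable 'times' times to a fixed thread pool and blocks until all runs finish.
    public static void runAll(int times, Runnable... tests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (int i = 0; i < times; i++) {
                for (Runnable test : tests) {
                    futures.add(pool.submit(test));
                }
            }
            for (Future<?> future : futures) {
                future.get(); // propagates any exception thrown by a test as an ExecutionException
            }
        } finally {
            pool.shutdown();
        }
    }
}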