I want to run the same Cucumber tests in multiple threads. More specifically, I have a set of features, and running these features in one thread works fine. I use the JSON formatter to record the running time of each step. Now I want to do a load test. I care more about the running time of each feature/step in a multi-threaded environment, so I create multiple threads and each thread runs on the same feature set. Each thread has its own JSON report. Is this possible in theory?
For some project setup reason I cannot use the JUnit runner, so I have to resort to the CLI way:
long threadId = Thread.currentThread().getId();
String jsonFilename = String.format("json:run/cucumber%d.json", threadId);
String[] argv = new String[]{
        "--glue",
        "com.some.package",
        "--format",
        jsonFilename,
        "d:\\features"};

// Do not call Main.run() directly. It has a System.exit() call at the end.
// Main.run(argv, Thread.currentThread().getContextClassLoader());

// Copied the same code from Main.run() instead:
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
RuntimeOptions runtimeOptions = new RuntimeOptions(new Env("cucumber-jvm"), argv);
ResourceLoader resourceLoader = new MultiLoader(classLoader);
ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
runtime.writeStepdefsJson();
runtime.run();
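Each thread just runs that snippet with its own report name. Roughly, the harness around it looks like this (a sketch; the thread count and class name are only illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelCucumberHarness {

    public static void main(String[] args) throws InterruptedException {
        int threads = 4; // illustrative
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            // runOneCucumberInstance() stands for the Runtime-building code above,
            // which derives the JSON report name from the current thread id.
            pool.submit(ParallelCucumberHarness::runOneCucumberInstance);
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }

    private static void runOneCucumberInstance() {
        // ... the RuntimeOptions/Runtime code shown above goes here ...
    }
}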
I tried to create a separate thread for each Cucumber run. The problem is, only one of the threads has a valid JSON report. All the other threads just create empty JSON files. Is this by design in Cucumber, or is there something I missed?
We have looked into multi-threading cucumber tests under Gradle and Groovy using the excellent GPars library. We have 650 UI tests and counting.
We didn't encounter any obvious problems running cucumber-JVM in multiple threads but the multi-threading also didn't improve performance as much as we hoped.
We ran each feature file in a separate thread. There are a few details to take care of, like stitching together the Cucumber reports from the different threads and making sure our step code is thread-safe. We sometimes need to store values between steps, so we used a ConcurrentHashMap keyed by the thread ID to store this kind of data:
import java.util.concurrent.ConcurrentHashMap

class ThreadedStorage {
    static private final ConcurrentHashMap multiThreadedStorage = new ConcurrentHashMap()

    static private String threadSafeKey(unThreadSafeKey) {
        def threadId = Thread.currentThread().toString()
        "$threadId:$unThreadSafeKey"
    }

    static private void threadSafeStore(key, value) {
        multiThreadedStorage[threadSafeKey(key)] = value
    }

    static private def threadSafeRetrieve(key) {
        multiThreadedStorage[threadSafeKey(key)]
    }
}
And here's the gist of the Gradle task code that runs the tests multi-threaded using GPars:
def group = new DefaultPGroup(maxSimultaneousThreads())
def workUnits = features.collect { File featureFile ->
    group.task {
        try {
            javaexec {
                main = "cucumber.api.cli.Main"
                ...
                args = [
                        ...
                        '--plugin', "json:$unitReportDir/${featureFile.name}.json",
                        ...
                        '--glue', 'src/test/groovy/steps',
                        "path/to/$featureFile"
                ]
            }
        } catch (ExecException e) {
            ++noOfErrors
            stackTraces << [featureFile, e.getStackTrace()]
        }
    }
}
// ensure all tests have run before reporting and finishing gradle task
workUnits*.join()
We found we needed to present the feature files in reverse order of execution time (longest-running first) for best results.
The results were a 30% improvement on an i5 CPU, degrading above 4 simultaneous threads, which was a little disappointing.
I think the threads were too heavy for multi-threading on our hardware. Above a certain number of threads there were too many CPU cache misses.
Running concurrently on different instances using a thread-safe work queue like Amazon SQS now seems a good way forward, especially since it is not going to suffer from thread-safety issues (at least not on the test framework side).
It is non-trivial for us to test this multi-threading method on i7 hardware due to security constraints in our workplace, but I would be very interested to hear how an i7 with a larger CPU cache and more physical cores compares.
Not currently -- here is the issue you observe. I haven't found any way to parallelize by scenario.
Here's a nice write-up on poor man's concurrency. Just run multiple commands, each selecting a different subset of your tests -- by feature or by tag. I would fork a new JVM (as a JUnit driver would) rather than trying to thread it, since Cucumber was not designed for that. You have to balance the subsets yourself, and then figure out how to combine the reports. (But at least the problem is combining reports, not corrupt reports.)
Supposedly you can run your Cucumber-JVM tests in parallel by using this Maven POM configuration from here: https://opencredo.com/running-cucumber-jvm-tests-in-parallel/
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.14</version>
    <executions>
        <execution>
            <id>acceptance-test</id>
            <phase>integration-test</phase>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <forkCount>${surefire.fork.count}</forkCount>
                <reuseForks>false</reuseForks>
                <argLine>-Duser.language=en -Xmx1024m -XX:MaxPermSize=256m -Dfile.encoding=UTF-8</argLine>
                <useFile>false</useFile>
                <includes>
                    <include>**/*AT.class</include>
                </includes>
                <testFailureIgnore>true</testFailureIgnore>
            </configuration>
        </execution>
    </executions>
</plugin>
In the above snippet, you can see that the maven-surefire-plugin is used to run our acceptance tests: any class whose name ends in *AT will be run as a JUnit test class. Thanks to JUnit, making the tests run in parallel is now a simple case of setting the forkCount configuration option. In the example project, this is set to 5, meaning that we can run up to 5 forked JVMs (i.e., 5 runner classes) at a time.
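For reference, each of those *AT classes is just a Cucumber JUnit runner pointing at a slice of the feature set. A minimal sketch (assuming a cucumber-jvm 1.2.x-style API; the package, feature path and report name are made up):

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features/checkout", // one slice of the feature set per runner
        glue = "com.some.package",
        plugin = {"json:target/cucumber-checkout.json"})
public class CheckoutAT {
}

Each runner then produces its own JSON report, which you still have to merge afterwards if you want a single overview.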
Well, if you can find a way for Cucumber to output the scenario location (i.e. feature_file_path:line_number_in_feature_file) for all the scenarios you want run based on a given tag, then you can use GPars and Gradle to run scenarios in parallel.
Step 1: In the first Gradle task, we'll use the above solution to generate a text file (say scenarios.txt) containing the locations of all the scenarios that we want to execute.
Step 2: Next, extract the contents of scenarios.txt generated in step 1 into a Groovy list, say scenariosList.
Step 3: Create one more task (a JavaExec task). Here we'll use GPars withPool in combination with scenariosList.eachParallel, and use the Cucumber main class and the other Cucumber options to run these scenarios in parallel (a rough sketch follows after these steps). PS: here we provide a scenario location as the value of the "features" option so that Cucumber will run only that scenario. There is also no need to provide any tag name, as we already have the list of scenarios that we need to execute.
Note: You need to use a machine with a high configuration, like a Linux server, because a new JVM instance is created per scenario, and you should probably use a cloud service like Saucelabs to execute the scenarios. This way you don't have to worry about the infrastructure.
Step 4: This is the last step. Every scenario run in step 3 will generate a JSON output file. You have to collate the output based on the feature names so as to generate one JSON file per feature file.
This solution sounds a bit complex, but with the right effort it can yield significant results.
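A rough plain-Java equivalent of steps 2 and 3 (the Gradle/GPars details are left out; the pool size, glue path and report directory are illustrative, and the classpath is simply inherited from the parent JVM):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScenarioRunner {

    public static void main(String[] args) throws Exception {
        // Step 2: one "feature_file_path:line_number" entry per line
        List<String> scenarioLocations =
                Files.readAllLines(Paths.get("scenarios.txt"), StandardCharsets.UTF_8);

        // Step 3: one JVM per scenario, at most 4 running at a time
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String location : scenarioLocations) {
            pool.submit(() -> runScenario(location));
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }

    private static void runScenario(String location) {
        try {
            // Derive a file-system-safe report name from the scenario location.
            String reportName = location.replaceAll("[^A-Za-z0-9._-]", "_");
            Process process = new ProcessBuilder(
                    "java", "-cp", System.getProperty("java.class.path"),
                    "cucumber.api.cli.Main",
                    "--glue", "steps",
                    "--plugin", "json:build/reports/" + reportName + ".json",
                    location) // a single scenario location, so no tags are needed
                    .inheritIO()
                    .start();
            process.waitFor();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}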
I have two FitNesse suites which are mutually exclusive and I want to run them in parallel.
As they are invoked from a JUnit test case, I have written the following piece of code:
@Test
public void executeFitnesseSuites() {
    final Class<?>[] classes = { Suite1.class, Suite2.class };
    final Result result = JUnitCore.runClasses(ParallelComputer.classes(), classes);
    System.out.println(result);
}

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("Suite1")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("/tmp/fitnesse/")
public static class Suite1 {
}

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("Suite2")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("/tmp/fitnesse/")
public static class Suite2 {
}
In the earlier implementation, these were two independent classes and were being executed sequentially.
However, I am seeing a similar execution time for the above test.
Does this mean that FitNesse is not spinning up two slim server instances and executing these suites in parallel?
Unfortunately FitNesse itself is not thread safe, so one should not run two slim server instances in one JVM at the same time.
I'm not sure how JUnit behaves using the approach you use. Does it spin up two parallel JVMs, or just two threads in the same JVM?
An approach I've used in the past to run two completely independent suites with JUnit is to have two separate classes (as you had before) and run these in parallel in separate JVMs using Maven's Failsafe plugin. Failsafe (and Surefire as well) offers a forkCount property to specify the number of processes to use (see http://maven.apache.org/surefire/maven-failsafe-plugin/examples/fork-options-and-parallel-execution.html for more details). Please note that you should NOT use the parallel property, as that runs in parallel within one JVM.
If you are running tests in parallel using FitNesse's JUnit runner, you may also be interested in a tool I created to combine the HTML reports of such runs into a single report: HtmlReportIndexGenerator. This is part of my fixtures jar, but also available as a separate docker image: hsac/fitnesse-fixtures-combine.
First of all, have I fundamentally misunderstood Spark Standalone mode? The official documentation says
The standalone cluster mode currently only supports a simple FIFO scheduler across applications. However, to allow multiple concurrent users, you can control the maximum number of resources each application will use.
I thought that this implied multiple users could have applications running in parallel, submitting jobs to the same Spark Standalone cluster. However, now I am wondering if this was meant to mean that restricting resources would allow multiple users to each run separate Spark Standalone clusters without starving all other users (or just run other programs on the cluster without Spark starving them of resources). Is this the case?
I have Spark set up in Standalone mode on three VMs running Ubuntu. They can all see each other across a NAT network. One of the machines (192.168.56.101) is the master, while the others are slaves (192.168.56.102 and 192.168.56.103).
The Spark version is 2.1.7.
I have a Java app which creates JavaRDD objects in several threads, each calling .collect() in its own thread. I would have thought that this counts as the kind of "job" which can run in parallel for a single Spark Context object (according to https://spark.apache.org/docs/1.2.0/job-scheduling.html).
Each thread gets a JavaRDD object from a synchronized method of a class co-ordinating access to the (single) JavaSparkContext object. The JavaSparkContext is set up without much tweaking. Essentially it is
public synchronized JavaRDD<String> getRdd(List<String> fooList) {
    if (this.javaSparkContext == null) {
        SparkConf sparkConf = new SparkConf();
        sparkConf.set("spark.executor.memory", "500m");
        // A few more settings (host name, port, etc.) are set here, but nothing related
        // to an executor pool, as far as I remember. I don't have the code in front of me.
        this.javaSparkContext = JavaSparkContext.fromSparkContext(new SparkContext(sparkConf));
    }
    // Alternate submitted jobs between the two FAIR pools.
    if ("fooPool".equals(this.jobPool)) {
        this.jobPool = "barPool";
    } else {
        this.jobPool = "fooPool";
    }
    this.javaSparkContext.setLocalProperty("spark.scheduler.pool", this.jobPool);
    this.javaSparkContext.requestExecutors(1);
    return this.javaSparkContext.parallelize(fooList);
}
The Spark Context object has set up two job pools (as I set it up to), as far as I can tell from the console log:
... INFO scheduler.FairSchedulableBuilder: Created pool fooPool, schedulingMode: FAIR, minShare: 1, weight: 1
... INFO scheduler.FairSchedulableBuilder: Created pool barPool, schedulingMode: FAIR, minShare: 1, weight: 1
... INFO scheduler.FairSchedulableBuilder: Created pool default, schedulingMode: FIFO, minShare: 1, weight: 1
I started many threads, each submitting one .collect() job, alternating between the two FAIR pools. As far as I can tell, these are being allocated to the two pools:
... INFO: scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
... INFO scheduler.FairSchedulableBuilder: Added task set TaskSet_0.0 tasks to pool fooPool
and so on, alternating between the two pools.
(The .collect() call is something like
List<String> consoleOutput = getRdd(fooList).cache().pipe("python ./dummy.py").collect();
but again I don't have the code in front of me. It certainly works in the sense that an Executor correctly executes the command.)
However, the client.StandaloneAppClient$ClientEndpoint only ever creates one Executor, which then proceeds to execute all the tasks in barPool then all the tasks in fooPool in serial (but not FIFO). The Worker node VM has 1 core though I set SPARK_EXECUTOR_INSTANCES, SPARK_EXECUTOR_CORES, SPARK_WORKER_INSTANCES, and SPARK_WORKER_CORES to 4, hoping that that would help somehow.
The Master node also has SPARK_EXECUTOR_INSTANCES, SPARK_EXECUTOR_CORES, SPARK_WORKER_INSTANCES, and SPARK_WORKER_CORES set to 4.
It is only ever one of the Worker nodes which responds, and only ever sends one Executor. Both Worker nodes can communicate with the Master - I can turn off one, and the other will take up the next set of jobs which I submit.
The jobs are trivial: each delivers a Python script which sleeps for a few seconds and prints some output, and each job takes a single-element RDD. This is a proof of concept for a good business reason: essentially, multiple unrelated RDDs need to be processed in parallel by unrelated Python scripts.
Is there some setting which I have missed? I know that I am misusing Spark in that I am specifically preventing it from parallelizing according to an RDD, but this is set in stone. I am baffled though that only one Worker responds, given that there are many task sets lined up, in multiple job pools. I even call .requestExecutors(1) with every submission, with the console showing
... INFO cluster.StandaloneSchedulerBackend: Requesting 1 additional executor(s) from the cluster manager
but this seems to be totally ignored.
Any advice will be greatly appreciated!
Edit: added the Spark version and the Java code for the method that sets up the context.
As far as I can tell from a lot of research on the Internet and experimenting with my own code, the answer is "Spark does not work that way".
Specifically:
1) There can only be 1 Spark Context per Java Virtual Machine.
2) Per Spark Context, tasks are only ever executed sequentially.
The approach used by tools such as Mesos or Mist is to prepare several Spark Contexts, each in its own JVM, and to divide tasks among these Spark Contexts.
I did manage to engage a second worker by using a second JVM (in my case by running the same code simultaneously in the Eclipse debugger and in the IntelliJ debugger), but this just confirms the kind of set-up described above.
I have recently been trying Gradle. I didn't have any prior experience with it, and so far I have been able to do the things I wanted and am satisfied with the results. However, in my case I have to run Selenium tests with JUnit, and some of them are disproportionately larger than others (e.g. 25 min vs 4 min).
When using the maxParallelForks option, the run sometimes takes longer than I would expect: the tests seem to be assigned to the forks beforehand, and I sometimes end up with idle forks while one of them is stuck with a long test; when it finishes, other shorter tests run after it in the same fork, even though they could have run in any of the other available forks.
TL;DR:
When running tests in parallel, Gradle seems to assign tests as if there were multiple queues (one per fork) and I would like it to be like a single queue where the forks take the next test in the queue.
As an off-topic example, my situation is like being stuck in a queue at the supermarket, when the ones next to you are empty, but you can't change.
It's the latter. Gradle uses one queue and distributes each entry round-robin to the running processes. Say you have 4 tests:
Test1 taking 10s
Test2 taking 1s
Test3 taking 10s
Test4 taking 1s
With maxParallelForks = 2 and round-robin assignment, fork 1 gets Test1 and Test3 (10s + 10s) while fork 2 gets Test2 and Test4 (1s + 1s), so the overall test task execution would be around 20s rather than the ~11s an idle-worker-aware scheme could achieve. I guess we need to discuss whether this can be improved by getting notified about "free" processes, so that Test3 could be assigned directly to test worker process 2 after Test2 comes back after 1s.
As of July 2021, your question (and my experience) matches the behaviour described in this issue, which
"has been automatically closed due to inactivity".
The issue is not resolved, and yes, the system assigns tasks early and does not rebalance to use idle workers.
Suppose I want to manually run from my IDE (Intellij IDEA, or eclipse) 4000 JUnit tests; the first 1000 tests run pretty smoothly (say they take 3 minutes all 1000) but the test 1001 takes alone over 30 minutes.
Is there a way I can skip test 1001 (while it's still running) and let test 1002 (and the others) keep going? I do not want to @Ignore test 1001 and rerun the suite, because I already have the answer for tests 1-1000; I also do not want to select only tests 1001-4000, because that takes too much time.
I would like some kind of button - Skip Current Test - which can be pressed while the test is running.
In case such a feature does not exist, would an enhancement for it need to be made by the IDE developers or by the JUnit developers?
This is actually pretty simple with JUnit 4 using Assume. Assume is a helper class like Assert. The difference is that Assert will make the test fail while Assume will skip it.
The common use case is Assume.assumeTrue( isWindows() ) for tests that only work on, say, a Windows file system.
So what you can do is define a system property skipSlowTests and add
Assume.assumeTrue( !Boolean.getBoolean("skipSlowTests") )
at the beginning of the slow tests that you usually want to skip. Create an Eclipse launch configuration which sets the property to true, and you have a convenient way to switch between the two.
If you want to run a slow test, select the method in Eclipse (or the whole class) and use "Run as JUnit Test" from the context menu. Since the property is false by default, the tests will be run.
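Put together, a slow test guarded this way might look like the following (a minimal sketch; the class and method names are made up):

import org.junit.Assume;
import org.junit.Test;

public class SlowFeatureTest {

    @Test
    public void slowEndToEndScenario() {
        // Skipped when the JVM is started with -DskipSlowTests=true;
        // runs by default, because Boolean.getBoolean(...) returns false
        // when the property is absent.
        Assume.assumeTrue(!Boolean.getBoolean("skipSlowTests"));

        // ... the actual slow test body goes here ...
    }
}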
No, you cannot skip tests if they are already running.
What I suggest you do is use Categories to separate your slow tests from the rest of your tests.
For example:
public interface SlowTests {
}

public class MyTest {

    @Test
    public void test1() {
    }

    @Category(SlowTests.class)
    @Test
    public void test1001() {
        // this is a slow test
    }
}
Create a test suite for the fast tests.
@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses(MyTest.class)
public class FastTestSuite {
}
Now execute the FastTestSuite if you don't want to run the slow tests (e.g. test1001). Execute MyTest as normal if you want to run all the tests.
What you're asking for is to stop executing your code while it is mid-test. You can't stop executing a current test without having hooks in your code to allow it. Your best solution is to use Categories, as others have suggested.
Basically, JUnit executes all of the @Before methods (including @Rules), then your @Test method, then the @After methods (again, including @Rules). Even assuming that JUnit had a mechanism for stopping execution of its bits of the code (which it doesn't), most of the time is spent in your code. So 'skipping' a test which has already started requires you to modify your test code (and potentially the code that it's testing) so that you can cleanly stop it. Cleanly stopping an executing thread is a question in itself [*].
So what are your options?
Run the tests in parallel, then you don't have to wait as long for the tests to finish. This may work, but parallelizing the tests may well be a lot of work.
Stop execution of the tests, and fix the one that's you're working on. Most IDEs have an option to kill the JVM in which the tests are running. This is definitely the easiest option.
Implement your own test runner, which runs the test in a separate thread. This test runner then either waits for the thread to finish executing, or checks a flag somewhere which signals it to stop. This sounds complicated, because you need to manage your threads and also to set the flag in a running JVM - maybe by creating a file somewhere? The runner would then fail the currently running test, and you could move on to the next. Please note that 'stopping' a test midway may leave things in an inconsistent state, or you may end up executing stuff in parallel.
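A very rough sketch of that last option, assuming the test body is interruptible (the helper class name and the flag-file location are made up for illustration):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class SkippableBody {

    // Creating this file while a test is running signals "skip the current test".
    private static final Path SKIP_FLAG =
            Paths.get(System.getProperty("java.io.tmpdir"), "skip-current-test");

    public static void run(Runnable testBody) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> future = pool.submit(testBody);
        try {
            while (true) {
                try {
                    future.get(500, TimeUnit.MILLISECONDS); // finished normally
                    return;
                } catch (TimeoutException stillRunning) {
                    if (Files.exists(SKIP_FLAG)) {
                        future.cancel(true); // interrupts the worker thread
                        throw new AssertionError("Test skipped via " + SKIP_FLAG);
                    }
                }
            }
        } finally {
            pool.shutdownNow();
        }
    }
}

A test would then wrap its body in SkippableBody.run(() -> { ... }), and 'skipping' means failing that one test and moving on, with all the caveats about inconsistent state mentioned above.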
There are parallel JUnit runners out there, and I don't think you're going to get much help from IDE developers (at least in the short term). Also, look at TestNG, which allows stuff to be run in parallel.
For using categories, one solution I use is to run the long-running tests separately using Maven Surefire or similar, not through the IDE. This involves checking out the source code somewhere else on my machine and building there.
[*]: Java, how to stop threads, Regarding stopping of a thread
I think a more common solution is to have two test suites: one for the fast tests and another for the slow ones. This is typically the way you divide unit tests (fast) and integration tests (slow).
It's highly unlikely that you'll get modifications to JUnit or IntelliJ for something like this. Better to change the way you use them - it'll get you to an answer faster.
You can modify your test and do something like
@Test
public void theTest() {
    if (System.getProperty("skipMyTest") == null) {
        // execute the test
    }
}
and pass the system property (e.g. -DskipMyTest=true) if you want to skip the test.
My question is whether there exists a Java framework for managing and concurrently running tasks that have logical dependencies.
My task is as follows:
I have a lot of independent tasks (let's say A, B, C, D, ...). They are implemented as Commands (as in the Command pattern). I would like to have a kind of executor which will accept all these tasks and execute them in parallel.
The tasks can depend on one another (for example, I can't run C before I run A), and can be synchronous or asynchronous.
I would also like to incorporate custom heuristics to affect the scheduling, for example if tasks A and B are CPU-intensive and C, say, has high memory consumption, it makes sense to run A and C in parallel rather than A and B.
Before diving into building this myself (I'm thinking about java.util.concurrent + annotation-based constraints/rules), I was wondering if someone could point me to a project that suits my needs.
Thanks a lot in advance
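For reference, the kind of hand-rolled java.util.concurrent approach I have in mind would look roughly like this (assuming Java 8's CompletableFuture is available; the task names and dependencies are just placeholders):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DependentCommands {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Independent commands A and B start immediately.
        CompletableFuture<Void> a = CompletableFuture.runAsync(() -> run("A"), pool);
        CompletableFuture<Void> b = CompletableFuture.runAsync(() -> run("B"), pool);

        // C must not start before A has finished.
        CompletableFuture<Void> c = a.thenRunAsync(() -> run("C"), pool);

        // D waits for both B and C.
        CompletableFuture<Void> d = CompletableFuture.allOf(b, c).thenRunAsync(() -> run("D"), pool);

        d.join(); // wait for the whole graph to finish
        pool.shutdown();
    }

    private static void run(String name) {
        System.out.println("Running command " + name + " on " + Thread.currentThread().getName());
    }
}

What this does not give me is the heuristic scheduling (CPU-bound vs memory-bound), which is why I am asking about an existing framework.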
I don't think there is a framework for managing tasks that could fulfill your requirements. You are on the right path using the Command pattern. You could take a look at the Akka framework for a simplified concurrency model. Akka is based on the Actor model:
The actor model is another very simple high-level concurrency model: actors can't respond to more than one message at a time (messages are queued into mailboxes) and can only communicate by sending messages, not sharing variables. As long as the messages are immutable data structures (which is always true in Erlang, but has to be a convention in languages without means of ensuring this property), everything is thread-safe, without need for any other mechanism. This is very similar to the request cycle found in web development MVC frameworks.
http://metaphysicaldeveloper.wordpress.com/2010/12/16/high-level-concurrency-with-jruby-and-akka-actors/
Akka is written in Scala but it exposes clean Java API.
I'd recommend examining the possibility of using Ant for this purpose. Although Ant is known as a popular build tool, it is actually an XML-controlled engine that runs various tasks. I think that its fork=true flag does exactly what you need: it runs tasks concurrently. Like any Java application, Ant can be executed from another Java application: just call its main method. In that case you can wrap your tasks using the Ant API, i.e. implement them as Ant tasks.
I have never tried this approach, but I believe it should work. I thought about it several years ago and suggested it to my management as a possible solution for a problem similar to yours.
Eclipse's job scheduling module is able to handle interdependent tasks. Take a look at http://www.eclipse.org/articles/Article-Concurrency/jobs-api.html.
There is a framework specifically for this purpose called dexecutor (disclaimer: I am the owner).
Dexecutor is a very lightweight framework to execute dependent/independent tasks in a reliable way, and it provides a minimal API to do this:
an API to add nodes to the graph (addDependency, addIndependent, addAsDependentOnAllLeafNodes, addAsDependencyToAllInitialNodes; the latter two are hybrid versions of the first two),
and another to execute the nodes in order.
Here is the simplest example:
DefaultDependentTasksExecutor<Integer, Integer> executor = newTaskExecutor();
executor.addDependency(1, 2);
executor.addDependency(1, 2);
executor.addDependency(1, 3);
executor.addDependency(3, 4);
executor.addDependency(3, 5);
executor.addDependency(3, 6);
//executor.addDependency(10, 2); // cycle
executor.addDependency(2, 7);
executor.addDependency(2, 9);
executor.addDependency(2, 8);
executor.addDependency(9, 10);
executor.addDependency(12, 13);
executor.addDependency(13, 4);
executor.addDependency(13, 14);
executor.addIndependent(11);
executor.execute(ExecutionBehavior.RETRY_ONCE_TERMINATING);
Here is how the resulting dependency graph would be executed: tasks 1, 12 and 11 run in parallel; once one of these tasks finishes, its dependent tasks run. For example, let's say task 1 finishes: tasks 2 and 3 would then run. Similarly, once task 12 finishes, task 13 would run, and so on.