How to limit the number of runners in JUnitCore.runClasses()? - java

I have multiple classes and multiple tests, but when I use:
public class ParallelComputerExample {
    @Test
    public void runAllTests() {
        Class<?>[] classes = { Class1.class, Class2.class, Class3.class };
        JUnitCore.runClasses(new ParallelComputer(true, true), classes);
    }
}
it runs all @Test methods at the same time. I want it to create at most 5 threads. How can I do that?

Simple answer: the existing structure doesn't support that. JUnit can run tests in parallel, but not under the constraints you are asking for.
Thus, you most likely have to build something like that on your own.
The good thing: that should be rather easy. You see, the ParallelComputer object you are using there simply extends the base Computer class.
In that sense: go and have a look into the source code of those two classes (you know, it is open source) - and then build your own extension of that Computer class that runs jobs in parallel, but uses a limited thread pool instead.
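For example, here is a minimal sketch assuming JUnit 4.7+ (the BoundedParallelComputer class name and constructor are illustrative, not part of JUnit): a Computer whose suite schedules the child runners on a fixed-size thread pool.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.junit.runner.Computer;
import org.junit.runner.Runner;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.RunnerBuilder;
import org.junit.runners.model.RunnerScheduler;

public class BoundedParallelComputer extends Computer {
    private final int maxThreads;

    public BoundedParallelComputer(int maxThreads) {
        this.maxThreads = maxThreads;
    }

    @Override
    public Runner getSuite(RunnerBuilder builder, Class<?>[] classes) throws InitializationError {
        Runner suite = super.getSuite(builder, classes);
        if (suite instanceof ParentRunner) {
            // Hand every child runner (one per test class) to a bounded pool.
            ((ParentRunner<?>) suite).setScheduler(new RunnerScheduler() {
                private final ExecutorService pool = Executors.newFixedThreadPool(maxThreads);

                public void schedule(Runnable childStatement) {
                    pool.execute(childStatement);
                }

                public void finished() {
                    // Wait for all scheduled classes to finish before returning.
                    pool.shutdown();
                    try {
                        pool.awaitTermination(1, TimeUnit.HOURS);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        return suite;
    }
}

Called as JUnitCore.runClasses(new BoundedParallelComputer(5), classes), this parallelizes across classes with at most 5 threads; to also bound the methods within each class, you would have to set the same kind of scheduler on every class runner.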

Related

Executing JUnit test classes in order [duplicate]

I have a Java app built with Maven.
JUnit for tests, with the failsafe and surefire plugins.
I have more than 2000 integration tests.
To speed up the test run, I use failsafe's JVM fork to run my tests in parallel.
I have some heavy test classes, and they typically run at the end of my test execution, which slows down my CI verify process.
The failsafe runOrder=balanced would be a good option for me, but I can't use it because of the JVM fork.
Renaming the test classes or moving them to another package to run them alphabetically is not an option.
Any suggestion how I can run my slow test classes at the beginning of the verify process?
In JUnit 5 (from version 5.8.0 onwards) test classes can be ordered too.
src/test/resources/junit-platform.properties:
# ClassOrderer$OrderAnnotation sorts classes based on their @Order annotation
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
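With that orderer active, test classes carry an @Order annotation (the class names below are illustrative):

import org.junit.jupiter.api.Order;

@Order(1)
class FastTests { /* runs first */ }

@Order(2)
class SlowTests { /* runs second */ }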
Other JUnit built-in class orderer implementations:
org.junit.jupiter.api.ClassOrderer$ClassName
org.junit.jupiter.api.ClassOrderer$DisplayName
org.junit.jupiter.api.ClassOrderer$Random
For other ways (besides the junit-platform.properties file) to set configuration parameters, see the JUnit 5 user guide.
You can also provide your own orderer. It must implement the ClassOrderer interface:
package foo;

import java.util.Collections;

import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class MyOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.shuffle(context.getClassDescriptors());
    }
}
junit.jupiter.testclass.order.default=foo.MyOrderer
Note that @Nested test classes cannot be ordered by a ClassOrderer.
Refer to the JUnit 5 documentation and the ClassOrderer API docs to learn more about this.
I gave the combination of answers I found a try:
Running JUnit4 Test classes in specified order
Running JUnit Test in parallel on Suite Level
The second answer is based on classes from this GitHub project, which is available under the BSD-2 license.
I defined a few test classes:
public class LongRunningTest {
    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + ":\tlong test - started");
        long time = System.currentTimeMillis();
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        } while (System.currentTimeMillis() - time < 1000);
        System.out.println(Thread.currentThread().getName() + ":\tlong test - done");
    }
}
@Concurrent
public class FastRunningTest1 {
    @Test
    public void test1() {
        try {
            Thread.sleep(250);
        } catch (InterruptedException e) {
        }
        System.out.println(Thread.currentThread().getName() + ":\tfrt1-test1 - done");
    }
    // +7 more repetitions of the same method
}
Then I defined the test suites:
(FastRunningTest2 is a copy of the first class with adjusted output)
@SuiteClasses({LongRunningTest.class, LongRunningTest.class})
@RunWith(Suite.class)
public class SuiteOne {}

@SuiteClasses({FastRunningTest1.class, FastRunningTest2.class})
@RunWith(Suite.class)
public class SuiteTwo {}

@SuiteClasses({SuiteOne.class, SuiteTwo.class})
@RunWith(ConcurrentSuite.class)
public class TopLevelSuite {}
When I execute the TopLevelSuite I get the following output:
TopLevelSuite-1-thread-1: long test - started
FastRunningTest1-1-thread-4: frt1-test4 - done
FastRunningTest1-1-thread-2: frt1-test2 - done
FastRunningTest1-1-thread-1: frt1-test1 - done
FastRunningTest1-1-thread-3: frt1-test3 - done
FastRunningTest1-1-thread-5: frt1-test5 - done
FastRunningTest1-1-thread-3: frt1-test6 - done
FastRunningTest1-1-thread-1: frt1-test8 - done
FastRunningTest1-1-thread-5: frt1-test7 - done
FastRunningTest2-2-thread-1: frt2-test1 - done
FastRunningTest2-2-thread-2: frt2-test2 - done
FastRunningTest2-2-thread-5: frt2-test5 - done
FastRunningTest2-2-thread-3: frt2-test3 - done
FastRunningTest2-2-thread-4: frt2-test4 - done
TopLevelSuite-1-thread-1: long test - done
TopLevelSuite-1-thread-1: long test - started
FastRunningTest2-2-thread-5: frt2-test8 - done
FastRunningTest2-2-thread-2: frt2-test6 - done
FastRunningTest2-2-thread-1: frt2-test7 - done
TopLevelSuite-1-thread-1: long test - done
This basically shows that the LongRunningTest is executed in parallel to the FastRunningTests. The default number of threads used for parallel execution, as defined by the Concurrent annotation, is 5, which can be seen in the output of the parallel execution of the FastRunningTests.
The downside is that these threads are not shared between FastRunningTest1 and FastRunningTest2.
This behaviour shows that it is "somewhat" possible to do what you want to do (whether that works with your current setup is a different question).
Also, I am not sure whether this is actually worth the effort,
as you need to prepare those test suites manually (or write something that autogenerates them)
and you need to add the Concurrent annotation to all those classes (maybe with a different number of threads for each class).
As this basically shows that it is possible to define the execution order of classes and trigger their parallel execution, it should also be possible to get the whole process to use only one thread pool (but I am not sure what the implications of that would be).
As the whole concept is based on a ThreadPoolExecutor, using a PriorityBlockingQueue that gives long-running tasks a higher priority would get you closer to your ideal outcome of executing the long-running tests first.
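A minimal sketch of that queue idea, independent of the suite classes above (the PrioritizedTask wrapper and its estimatedMillis field are illustrative): a ThreadPoolExecutor fed from a PriorityBlockingQueue, so that when all workers are busy, the longest queued task is started next.

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolSketch {

    // Wraps a task with a rough runtime estimate; longer tasks sort first.
    static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        private final Runnable delegate;
        private final long estimatedMillis;

        PrioritizedTask(Runnable delegate, long estimatedMillis) {
            this.delegate = delegate;
            this.estimatedMillis = estimatedMillis;
        }

        @Override
        public void run() {
            delegate.run();
        }

        @Override
        public int compareTo(PrioritizedTask other) {
            // Descending: the longest estimate leaves the queue first.
            return Long.compare(other.estimatedMillis, this.estimatedMillis);
        }
    }

    public static void main(String[] args) {
        // 5 workers; the priority only matters for tasks that have to queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

        // Use execute(), not submit(): submit() wraps tasks in a FutureTask,
        // which is not Comparable and would break the priority queue.
        pool.execute(new PrioritizedTask(() -> System.out.println("slow test"), 60_000));
        pool.execute(new PrioritizedTask(() -> System.out.println("fast test"), 250));
        pool.shutdown();
    }
}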
I experimented around a bit more and implemented my own custom suite runner and JUnit runner. The idea behind it is to have your JUnit runner submit the tests into a queue which is handled by a single ThreadPoolExecutor. Because I didn't implement a blocking operation in the RunnerScheduler#finished method, I ended up with a solution where the tests from all classes were passed to the queue before the execution even started. (That might look different if there are more test classes and methods involved.)
At least it proves the point that you can mess with JUnit at this level if you really want to.
The code of my PoC is a bit messy and too lengthy to put here, but if someone is interested I can push it to a GitHub project.
In our project we created a few marker interfaces, for example:

public interface SlowTestsCategory {}

and put them into JUnit's @Category annotation on the test classes containing slow tests:

@Category(SlowTestsCategory.class)
After that we created some special Gradle tasks to run tests by category, or a few categories in a custom order:
task unitTest(type: Test) {
    description = 'description.'
    group = 'groupName'
    useJUnit {
        includeCategories 'package.SlowTestsCategory'
        excludeCategories 'package.ExcludedCategory'
    }
}
This solution relies on Gradle, but maybe it'll be helpful for you.
Let me summarize everything before I provide a recommendation.
Integration tests are slow. This is fine and it's natural.
CI build doesn't run tests that assume deployment of a system, since there is no deployment in CI. We care about deployment in CD process.
So I assume your integration tests don't assume deployment.
CI build runs unit tests first. Unit tests are extremely fast because they use only RAM.
We have good and quick feedback from unit tests.
At this moment we are sure we don't have a problem with getting quick feedback. But we still want to run the integration tests faster.
I would recommend the following solutions:
Improve the actual tests. Quite often they are not effective and can be sped up significantly.
Run integration tests in the background (i.e. don't wait for real-time feedback from them).
It's natural for them to be much slower than unit tests.
Split integration tests into groups and run them separately if you need feedback from some of them faster.
Run integration tests in different JVMs, not in different threads within the same JVM!
In this case you don't have to care about thread safety, and you shouldn't have to.
Run integration tests on different machines, and so on.
I have worked with many different projects (some of them had CI builds running for 48 hours) and the first 3 steps were enough (even for crazy cases). Step #4 is rarely needed if you have good tests. Step #5 is for very specific situations.
You see that my recommendation relates to the process and not to the tool, because the problem is in the process.
Quite often people ignore the root cause and try to tune the tool (Maven in this case). They get cosmetic improvements, but with a high maintenance cost for the created solution.
There is a solution for that from version 5.8.0-M1 of JUnit onwards.
Basically you need to create your own orderer. I did something like that.
Here is an annotation which you will use inside your test classes:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface TestClassesOrder {
    public int value() default Integer.MAX_VALUE;
}
Then you need to create a class which implements org.junit.jupiter.api.ClassOrderer:
import java.util.Collections;
import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class AnnotationTestsOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.sort(context.getClassDescriptors(), new Comparator<ClassDescriptor>() {
            @Override
            public int compare(ClassDescriptor o1, ClassDescriptor o2) {
                TestClassesOrder a1 = o1.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                TestClassesOrder a2 = o2.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                // Classes without the annotation sort last.
                if (a1 == null) {
                    return 1;
                }
                if (a2 == null) {
                    return -1;
                }
                return Integer.compare(a1.value(), a2.value());
            }
        });
    }
}
To get it working you need to tell JUnit which class to use for ordering the descriptors. Create the file "junit-platform.properties" in your resources folder. In that file you need just one line naming your orderer class:
junit.jupiter.testclass.order.default=org.example.tests.AnnotationTestsOrderer
Now you can use your orderer annotation like the @Order annotation, but at class level:
@TestClassesOrder(1)
class Tests {...}

@TestClassesOrder(2)
class MainTests {...}

@TestClassesOrder(3)
class EndToEndTests {...}
I hope that this will help someone.
You can use annotations in JUnit 5 to set the test order you wish to use.
From JUnit 5's user guide:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
Upgrading to JUnit 5 can be done fairly easily, and the documentation linked at the beginning of this post contains all the information you might need.

Instantiating huge data and sharing it with all tests before @BeforeAll

What is the correct way to initialize some relatively big data and share it (read-only, so thread-safe) across all JUnit 5 tests?
I've looked at this answer and others that are similar, but I always seem to have 1 or 2 more levels of assembly/instantiation than they deal with.
My testing setup is this:
I have a custom Repository data structure that needs to be initialized just once: read from multiple sources and assembled (about 100-200 MB), then shared with all the tests.
Each test class instantiates an Engine in @BeforeAll, which needs the repository above, and then goes on to execute its tests in series, calling engine.reset() between tests. Each test has its own unique setup. Engine is semi-heavy, and it is impossible to have one for each test.
@TestInstance(TestInstance.Lifecycle.PER_CLASS) is used so we get only one instance per test class (and one engine per class).
Multithreading/parallel testing is used: test classes run in parallel, and methods within a class run in sequence. This means:
systemProperty("junit.jupiter.execution.parallel.enabled", true)
systemProperty("junit.jupiter.execution.parallel.mode.default", "same_thread")
systemProperty("junit.jupiter.execution.parallel.mode.classes.default", "concurrent")
systemProperty("junit.jupiter.execution.parallel.config.strategy","dynamic")
systemProperty("junit.jupiter.execution.parallel.config.dynamic.factor",1) // could be 2!
Since nothing runs before @BeforeAll, I had to improvise:
I ended up declaring the repository at the top level of a Kotlin test file, outside of the class, and initializing it like this (large irrelevant chunks are omitted for clarity):
TestSetAlpha.kt:
import org.junit.jupiter.api.*

val database: Repository = Repository().also {
    it.setupData(Config(...))
    it.someOtherInit()
    // blah blah
}
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class `Engine Test Set ALPHA` {
    var eng = Engine()

    @BeforeAll
    fun initAll() {
        // configure Engine
        println("Configuring Engine ALPHA")
        eng.setDatabase(database)
        eng.configure {
            ....
            ....
        }
    }

    @BeforeEach
    fun init() {
        // reset the engine
        eng.reset()
    }

    @Test
    fun `A simple test`() {
        eng.add(...)
        eng.add(...)
        eng.execute()
        // interrogate resulting state
        assert(eng.property == ...)
        ...
    }
}
In subsequent test class files I can reuse the same database Repository, and it really only initializes once at project level (verified!). There are no changes and no mutability on the repository after it loads, and that is guaranteed by its API. This means that on a 16-thread CPU I can reuse the database and run roughly 16 test classes in parallel.
I'm not sure about the loading and instantiation semantics of that global val. With a lot of data, JUnit 5 waits for the also closure to complete before continuing with any tests, probably because it can't proceed with the classes in those files? I've never gotten an error, but I feel this will probably break with a future update or on another platform, because it's not clean and looks like a hack.
I would like to specify, and have a guarantee, that the repository is instantiated and shared properly across all classes and files, and only then have the threads start. How do you go about doing that, though? There isn't some kind of top-level, global @BeforeBeforeAll, although that would be exactly what I require. Any feedback and refactoring is welcome. I can't run the tests without parallelism, of course.
Far simpler than I thought it would be!
At top-level scope, or in another file, use a singleton object!
object DatabaseProvider {
    val database: Repository by lazy(LazyThreadSafetyMode.SYNCHRONIZED) {
        val r = Repository()
        r.setupData(Config(...))
        // Load and add everything into the database
        return@lazy r
    }
}
and then in each test class, you plug in the database as part of initialization:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class `Engine Test Set ALPHA` {
    var eng = Engine()

    @BeforeAll
    fun initAll() {
        // configure Engine
        eng.setupRepo(DatabaseProvider.database)
        eng.configure = ....
        println("Configuration of Engine 1 DONE!")
    }
}
Note the lazy init mode set to synchronized.
The @BeforeAll methods will fire before the database repository is loaded, but each test class will block until the initialization of the repository is done, and then continue.

How do you unit test a main method in Java?

This is my class with a main function. Here I initialize a Spring bean which has a Camel route in it. I do not want to test any of the other classes referenced in this code; I just want to increase the code coverage of this main class. How do I mock and test this class?
import org.apache.camel.main.Main;

public class ABC {
    public static void main(String[] args) {
        Main main = new Main();
        MyCamelRoute myCamelRoute = SpringUtil.getBean(MyCamelRoute.class);
        main.addRouteBuilder(myCamelRoute);
        Thread t = new Thread(() -> {
            try {
                main.run();
            } catch (Exception e) {
                _logger.error("Unable to add route", e);
            }
        }, "started route");
        t.start();
    }
}
Since you write of "mocks", I assume you intend to write a unit test.
ONE: Either you test a class or you mock it. You use mocks to make your test independent of the behaviour (and thus possible bugs) of other units (the "dependencies" of your "system under test" (SUT)).
TWO: You do not write tests to increase code coverage. You write tests to enforce requirements of the API contract.
THREE: To test your main method: call it! You can pass in arguments and check whether its observable effects match your expectations.
FOUR: The problem here might be that you've got static dependencies you cannot control. Spring allows you to configure mocks for bean injection. I can't tell you the details right now, but I am sure you can find out; it should be something like @Configuration-annotated classes, or test-specific versions of them.
But: your test has no control whatsoever over the main object. And sincerely, I would guess that what you actually intend to test is the Main class. You might also want to inject the Main instance via Spring means.
FIVE: I am not sure whether it is a good idea to involve multithreading in unit tests, as it means your test cannot control the environment of your SUT. If you do not know where your test starts, you cannot decide whether where it ends up is correct.
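A minimal sketch of point THREE (the test class name is illustrative, and it only asserts that main itself does not throw; the threading caveat from point FIVE still applies to the route running in the background):

import org.junit.Test;

public class ABCTest {

    @Test
    public void mainStartsWithoutThrowing() {
        // Calls the entry point directly; the Camel route starts on a
        // background thread, so this only exercises the wiring in main.
        ABC.main(new String[0]);
    }
}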

How do I create pre-formatted Java files in Eclipse?

I am currently writing JUnit test cases using the Selenium RC API. I am writing my scripts like:
// example JUnit test case
public class myScripts extends SeleneseTestCase {
    public void setUp() throws Exception {
        SeleniumServer newSelServ = new SeleniumServer();
        newSelServ.start();
        setUp("https://mySite.com", "*firefox");
    }

    public void insert_Test_Name() throws Exception {
        // write test code here
    }
}
And for each test, I have a new JUnit file. Now, since the beginnings of my JUnit files will all basically be the same, just with minor variations towards the end, I was thinking about creating a pre-formatted Java template to create each file with the redundant code already written. However, I can't find any information on whether this is possible. Does Eclipse allow you to create file templates for certain packages?
Create a superclass to hold all the common code. Creating a template is really bad, because at the end of the day you are duplicating the code.
class Super extends SeleneseTestCase {
    // Add all common code
}

class Test1 extends Super {
    // only special test case logic
}
Also, I would suggest not creating a SeleniumServer instance for each test case; it will reduce the overall performance of the test suite. You can reuse the object as long as you are running tests sequentially.
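A minimal sketch of that superclass idea, built from the question's own setup code (the class names are illustrative):

public class BaseSeleniumTest extends SeleneseTestCase {
    private static SeleniumServer server;

    @Override
    public void setUp() throws Exception {
        // Start the server once and reuse it across all subclasses.
        if (server == null) {
            server = new SeleniumServer();
            server.start();
        }
        setUp("https://mySite.com", "*firefox");
    }
}

public class LoginTest extends BaseSeleniumTest {
    public void testLogin() throws Exception {
        // only test-specific logic here
    }
}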

How to write automated unit tests for a Java annotation processor?

I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler API (in fact I'm using Hickory at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it TDD-style.
Mocking away the dependencies seems very hard (I would have to mock the entire javax.lang.model.element package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would your approach be?
This is an old question, but it seems that the state of annotation processor testing hadn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
.that(JavaFileObjects.forResource("HelloWorld.java"))
.processedWith(new MyAnnotationProcessor())
.compilesWithoutError()
.and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
.that(fileObject)
.processedWith(new NoHelloWorld())
.failsToCompile()
.withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method-call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only < 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post.
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation-processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import javax.lang.model.element.Element;
import javax.lang.model.element.VariableElement;
import javax.lang.model.type.TypeMirror;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }
        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like the Element interfaces.
An option is to bundle all tests in one class. The half a second for compiling etc. is then a constant for a given set of tests, and the real time for each additional test is negligible, I assume.
