Equivalent to @DirtiesContext(...) for Surefire + JUnit? - java

I'm using the maven-surefire-plugin with JUnit 4.1.4. I have a unit test that relies on a 3rd-party class which internally uses a static { ... } block to initialize some variables. I need to change one of these variables, but only for certain tests, so I'd like the block to be re-executed between tests, since it picks up a value the first time it runs.
When testing, it seems that Surefire instantiates the test class once, so the static { ... } block is never processed again.
This means the values my unit tests change are ignored, because the class has already been initialized.
Note: the class uses System.loadLibrary(...); from what I've found, it can't be rewritten to use instance initialization, so static is the (rare, but) proper usage.
I found a similar solution for the Spring Framework which uses the @DirtiesContext(...) annotation, allowing the programmer to mark classes or methods as "dirty" so that a new class (or in many cases, the JVM) is initialized between tests.
How do you do the same thing as @DirtiesContext(...), but with the maven-surefire-plugin?
public class MyTests {

    @Test
    public void test1() {
        assertThat(MyClass.THE_VALUE, is("something-default"));
    }

    @Test
    public void test2() {
        System.setProperty("foo.bar", "something-else");
        assertThat(MyClass.THE_VALUE, is("something-else"));
        // ^-- this assert fails
        //     value still "something-default"
    }
}
public class MyClass {

    public static String THE_VALUE;

    static {
        if (System.getProperty("foo.bar") != null) {
            THE_VALUE = System.getProperty("foo.bar"); // set to "something-else"
        } else {
            THE_VALUE = "something-default";
        }
    }
}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>4.1.2</version>
</plugin>

Static initialization blocks in Java can't be easily handled by JUnit. In general, static state doesn't play nicely with unit testing concepts.
So, assuming you can't touch this code, your options are:
Option 1:
Spawn a new JVM for each test class. This will work, but it might be overkill because it will hurt performance.
If you follow this path, you need to configure the surefire plugin with:
forkCount=1
reuseForks=false
According to the surefire plugin documentation, this combination will execute each test class in its own JVM process.
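In the POM that looks roughly like this (keep whatever surefire version your build already uses):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- fork one new JVM per test class and never reuse it,
             so static initializers run again for every class -->
        <forkCount>1</forkCount>
        <reuseForks>false</reuseForks>
    </configuration>
</plugin>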
Option 2:
Load the class with a different class loader for every test.
In Java, class com.foo.A loaded by ClassLoader M is a totally different class from the same com.foo.A loaded by ClassLoader N.
This is somewhat hacky but should work.
The overhead is much smaller than in option 1, but you'll have to understand how to incorporate new class loaders into the testing infrastructure.
For more information about creating a custom class loader, see for example this tutorial.
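As a rough sketch of the idea (the classes directory and class name are illustrative; each fresh loader re-runs the static { ... } block when the class is initialized):
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class FreshClassLoaderSketch {

    static Class<?> loadFreshCopy() throws Exception {
        // Point the loader at your compiled classes; a null parent keeps the class
        // from being delegated to (and cached by) the application class loader.
        URL classesDir = new File("target/classes/").toURI().toURL();
        URLClassLoader loader = new URLClassLoader(new URL[] { classesDir }, null);

        // initialize = true triggers the static { ... } block again
        return Class.forName("com.example.MyClass", true, loader);
    }
}
The catch: the freshly loaded class is not the same type your test code links against, so the test has to read MyClass.THE_VALUE reflectively, e.g. loadFreshCopy().getField("THE_VALUE").get(null). Also note that the JVM only lets one class loader load a given native library, so a class calling System.loadLibrary(...) may still fail with an UnsatisfiedLinkError on the second load.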

Related

Executing JUnit test classes in Order [duplicate]

I have a Java app built with Maven.
JUnit for tests, with the failsafe and surefire plugins.
I have more than 2000 integration tests.
To speed up the test run, I use the failsafe JVM fork to run my tests in parallel.
I have some heavy test classes, and they typically run at the end of my test execution, which slows down my CI verify process.
The failsafe runOrder=balanced would be a good option for me, but I can't use it because of the JVM fork.
Renaming the test classes or moving them to another package so they run alphabetically is not an option.
Any suggestion on how I can run my slow test classes at the beginning of the verify process?
In JUnit 5 (from version 5.8.0 onwards) test classes can be ordered too.
src/test/resources/junit-platform.properties:
# ClassOrderer$OrderAnnotation sorts classes based on their @Order annotation
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
Other JUnit built-in class orderer implementations:
org.junit.jupiter.api.ClassOrderer$ClassName
org.junit.jupiter.api.ClassOrderer$DisplayName
org.junit.jupiter.api.ClassOrderer$Random
For other ways (besides the junit-platform.properties file) to set configuration parameters, see the JUnit 5 user guide.
You can also provide your own orderer. It must implement the ClassOrderer interface:
package foo;

import java.util.Collections;

import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class MyOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.shuffle(context.getClassDescriptors());
    }
}
junit.jupiter.testclass.order.default=foo.MyOrderer
Note that @Nested test classes cannot be ordered by a ClassOrderer.
Refer to the JUnit 5 documentation and the ClassOrderer API docs to learn more about this.
I gave the combination of answers I found a try:
Running JUnit4 Test classes in specified order
Running JUnit Test in parallel on Suite Level
The second answer is based on classes from this GitHub project, which is available under the BSD-2 license.
I defined a few test classes:
public class LongRunningTest {

    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + ":\tlong test - started");
        long time = System.currentTimeMillis();
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        } while (System.currentTimeMillis() - time < 1000);
        System.out.println(Thread.currentThread().getName() + ":\tlong test - done");
    }
}
@Concurrent
public class FastRunningTest1 {

    @Test
    public void test1() {
        try {
            Thread.sleep(250);
        } catch (InterruptedException e) {
        }
        System.out.println(Thread.currentThread().getName() + ":\tfrt1-test1 - done");
    }

    // +7 more repetitions of the same method
}
Then I defined the test suites:
(FastRunningTest2 is a copy of the first class with adjusted output)
@SuiteClasses({LongRunningTest.class, LongRunningTest.class})
@RunWith(Suite.class)
public class SuiteOne {}

@SuiteClasses({FastRunningTest1.class, FastRunningTest2.class})
@RunWith(Suite.class)
public class SuiteTwo {}

@SuiteClasses({SuiteOne.class, SuiteTwo.class})
@RunWith(ConcurrentSuite.class)
public class TopLevelSuite {}
When I execute the TopLevelSuite I get the following output:
TopLevelSuite-1-thread-1: long test - started
FastRunningTest1-1-thread-4: frt1-test4 - done
FastRunningTest1-1-thread-2: frt1-test2 - done
FastRunningTest1-1-thread-1: frt1-test1 - done
FastRunningTest1-1-thread-3: frt1-test3 - done
FastRunningTest1-1-thread-5: frt1-test5 - done
FastRunningTest1-1-thread-3: frt1-test6 - done
FastRunningTest1-1-thread-1: frt1-test8 - done
FastRunningTest1-1-thread-5: frt1-test7 - done
FastRunningTest2-2-thread-1: frt2-test1 - done
FastRunningTest2-2-thread-2: frt2-test2 - done
FastRunningTest2-2-thread-5: frt2-test5 - done
FastRunningTest2-2-thread-3: frt2-test3 - done
FastRunningTest2-2-thread-4: frt2-test4 - done
TopLevelSuite-1-thread-1: long test - done
TopLevelSuite-1-thread-1: long test - started
FastRunningTest2-2-thread-5: frt2-test8 - done
FastRunningTest2-2-thread-2: frt2-test6 - done
FastRunningTest2-2-thread-1: frt2-test7 - done
TopLevelSuite-1-thread-1: long test - done
This basically shows that the LongRunningTest is executed in parallel with the FastRunningTests. The default number of threads used for parallel execution, defined by the @Concurrent annotation, is 5, which can be seen in the output of the parallel execution of the FastRunningTests.
The downside is that these threads are not shared between FastRunningTest1 and FastRunningTest2.
This behaviour shows that it is "somewhat" possible to do what you want to do (whether it works with your current setup is a different question).
I am also not sure whether this is actually worth the effort,
as you need to prepare those test suites manually (or write something that auto-generates them)
and you need to define the @Concurrent annotation for all those classes (maybe with a different number of threads for each class).
As this basically shows that it is possible to define the execution order of classes and trigger their parallel execution, it should also be possible to get the whole process to use only one thread pool (but I am not sure what the implications of that would be).
Since the whole concept is based on a ThreadPoolExecutor, using a PriorityBlockingQueue that gives long-running tasks a higher priority would get you closer to your ideal outcome of executing the long-running tests first; a sketch follows below.
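For illustration, a minimal sketch of that idea (names and priorities are made up; note that tasks must go through execute(), because submit() wraps them in FutureTasks that the comparator cannot cast):
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityExecutorSketch {

    // A Runnable that carries a priority; higher values are taken from the queue first.
    static class PrioritizedTask implements Runnable {
        final int priority;
        final Runnable delegate;

        PrioritizedTask(int priority, Runnable delegate) {
            this.priority = priority;
            this.delegate = delegate;
        }

        @Override
        public void run() {
            delegate.run();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Runnable> queue = new PriorityBlockingQueue<>(11,
                Comparator.comparingInt((Runnable r) -> ((PrioritizedTask) r).priority).reversed());

        // A single worker thread, so the queued tasks are clearly ordered by priority.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, queue);

        executor.execute(new PrioritizedTask(1, () -> System.out.println("fast test class")));
        executor.execute(new PrioritizedTask(1, () -> System.out.println("another fast test class")));
        executor.execute(new PrioritizedTask(10, () -> System.out.println("slow test class")));

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}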
I experimented a bit more and implemented my own custom suite runner and JUnit runner. The idea is to have your JUnit runner submit the tests into a queue which is handled by a single ThreadPoolExecutor. Because I didn't implement a blocking operation in the RunnerScheduler#finish method, I ended up with a solution where the tests from all classes were passed to the queue before the execution even started. (That might look different if there are more test classes and methods involved.)
At least it proves the point that you can mess with JUnit at this level if you really want to.
The code of my POC is a bit messy and too lengthy to put here, but if someone is interested I can push it to a GitHub project.
In our project we created a few marker interfaces, for example:
public interface SlowTestsCategory {}
and put them into JUnit's @Category annotation on the test classes with slow tests:
@Category(SlowTestsCategory.class)
After that we created some special Gradle tasks to run tests by category, or a few categories in a custom order:
task unitTest(type: Test) {
    description = 'description.'
    group = 'groupName'

    useJUnit {
        includeCategories 'package.SlowTestsCategory'
        excludeCategories 'package.ExcludedCategory'
    }
}
This solution relies on Gradle, but maybe it'll be helpful for you.
Let me summarize everything before I provide a recommendation.
Integration tests are slow. This is fine and it's natural.
A CI build doesn't run tests that assume deployment of a system, since there is no deployment in CI. We care about deployment in the CD process.
So I assume your integration tests don't assume deployment.
The CI build runs unit tests first. Unit tests are extremely fast because they use only RAM.
We get good and quick feedback from unit tests.
At this moment we are sure we don't have a problem with getting quick feedback. But we still want to run integration tests faster.
I would recommend the following solutions:
Improve the actual tests. Quite often they are not effective and can be sped up significantly.
Run integration tests in the background (i.e. don't wait for real-time feedback from them).
It's natural for them to be much slower than unit tests.
Split integration tests into groups and run them separately if you need feedback from some of them faster.
Run integration tests in different JVMs. Not different threads within the same JVM!
In this case you don't care about thread safety and you should not care about it.
Run integration tests on different machines and so on.
I have worked with many different projects (some of them had a CI build running for 48 hours), and the first 3 steps were enough (even for crazy cases). Step #4 is rarely needed with good tests. Step #5 is for very specific situations.
You can see that my recommendation relates to the process and not to the tool, because the problem is in the process.
Quite often people ignore the root cause and try to tune the tool (Maven in this case). They get cosmetic improvements but with a high maintenance cost for the created solution.
There is a solution for that from version 5.8.0-M1 of JUnit onwards.
Basically you need to create your own orderer. I did something like that.
Here is an annotation which you will use on your test classes:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface TestClassesOrder {
    public int value() default Integer.MAX_VALUE;
}
Then you need to create a class which implements org.junit.jupiter.api.ClassOrderer:
import java.util.Collections;
import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class AnnotationTestsOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.sort(context.getClassDescriptors(), new Comparator<ClassDescriptor>() {
            @Override
            public int compare(ClassDescriptor o1, ClassDescriptor o2) {
                TestClassesOrder a1 = o1.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                TestClassesOrder a2 = o2.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                // unannotated classes sort last
                if (a1 == null && a2 == null) {
                    return 0;
                }
                if (a1 == null) {
                    return 1;
                }
                if (a2 == null) {
                    return -1;
                }
                return Integer.compare(a1.value(), a2.value());
            }
        });
    }
}
To get it working you need to tell JUnit which class to use for ordering descriptors. Create a file named "junit-platform.properties" in your test resources folder (src/test/resources). In that file you just need one line with your orderer class:
junit.jupiter.testclass.order.default=org.example.tests.AnnotationTestsOrderer
Now you can use your annotation like the @Order annotation, but at class level:
@TestClassesOrder(1)
class Tests {...}

@TestClassesOrder(2)
class MainTests {...}

@TestClassesOrder(3)
class EndToEndTests {...}
I hope that this will help someone.
You can use annotations in JUnit 5 to set the test order you wish to use.
From JUnit 5's user guide:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
Upgrading to JUnit 5 can be done fairly easily, and the documentation at the link at the beginning of this post contains all the information you might need.

How do you unit test main method in Java?

This is my class with a main function. Here I initialize a Spring bean which has a Camel route in it. I do not want to test any other classes referenced in this code; I just want to increase the code coverage of this main class. How do I mock and test this class?
import org.apache.camel.main.Main;

public class ABC {

    public static void main(String[] args) {
        Main main = new Main();
        MyCamelRoute myCamelRoute = SpringUtil.getBean(MyCamelRoute.class);
        main.addRouteBuilder(myCamelRoute);

        Thread t = new Thread(() -> {
            try {
                main.run();
            } catch (Exception e) {
                _logger.error("Unable to add route", e);
            }
        }, "started route");
        t.start();
    }
}
As you're writing of "mocks" I assume you intend to write a unit test.
ONE: Either you test a class or you mock it. You use mocks to make your test independent of the behaviour (and thus possible bugs) of other units (the "dependencies" of your "system under test" (SUT)).
TWO: You do not write tests to increase code coverage. You write tests to enforce requirements of the API contract.
THREE: To test your main method: call it! You can pass in arguments and see whether the observable result matches your expectations.
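For illustration, a minimal sketch of point THREE (since main returns void, the weakest useful check is that it doesn't throw; anything stronger needs an observable side effect to assert on):
import org.junit.Test;

public class ABCMainTest {

    @Test
    public void mainStartsWithoutThrowing() {
        // Call the entry point directly with the arguments you want to exercise.
        ABC.main(new String[0]);

        // With the code as posted there is nothing to assert on; once the route and
        // Main instance are injectable (see point FOUR), assert on that side effect here.
    }
}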
FOUR: The problem here might be that you've got static dependencies you cannot control. Spring allows you to configure mocks for bean injection. I can't give you the details right now, but I am sure you can find out; it should be something like @Configuration-annotated classes or test-specific versions of them.
But: your test has no control whatsoever over the main object. And honestly, I would guess that you actually intend to test the Main class. You might also want to inject the Main instance via Spring.
FIVE: I am not sure whether it is a good idea to involve multithreading in unit tests, as it means your test cannot control the environment of your SUT. If you do not know where your test starts, you cannot decide whether where it ends up is correct.

Debugging ignored tests in testng?

I have a Java Maven project in IntelliJ IDEA Community. The TestNG version is very old (6.9.5) and I simply cannot update it. I have 6 TestNG test methods in a class. Only 5 of these 6 methods use data provider methods, all of which are in one DataProvider class.
When I run the test class, only the method without a data provider (say test_5) runs successfully. The others are marked as "test ignored". Moreover, when I comment out or disable test_5, all the other tests run. Can I make TestNG give a detailed reason for ignoring tests?
Here is brief information about my project. I can't give the full code.
public class MyUtilityClass {

    public MyUtilityClass() {
        // Load data from property files and initialize members, do other stuff.
    }
}
public class BaseTest {

    MyUtilityClass utilObj = new MyUtilityClass();

    // do something with utilObj, provide common annotated methods for tests etc.
}
public class TestClass extends BaseTest {

    @BeforeClass
    public void beforeMyClass() {
        // Get some data from this.utilObj and do other things also.
    }

    @Test(dataProvider = "test_1", dataProviderClass = MyDataProvider.class)
    public void test_1() {}

    @Test(dataProvider = "test_2", dataProviderClass = MyDataProvider.class)
    public void test_2() {}

    ...

    // test_5 was the only one without a data provider.
    @Test
    public void test_5() {}

    @Test(dataProvider = "test_6", dataProviderClass = MyDataProvider.class)
    public void test_6() {}
}
public class MyDataProvider {

    MyUtilityClass utilObj = new MyUtilityClass();

    // do something with utilObj besides other things.
}
Your tests need to end in exactly the same environment in which they started.
You gave nary a clue as to what your code is like, but I can say that it is almost certainly either a database that is being written to and not reverted or an internal, persistent data structure that is being modified and not cleared.
If the tests go to the database, try enclosing the entire test in a transaction that you revert at the end of the test. If you can't do this, try mocking out the database.
If it's not the DB, look for an internal static somewhere, either a singleton pattern or a static collection contained in an object. Improve that stuff right out of your design and you should be okay.
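For the "internal static" case, a minimal sketch of the kind of test-only reset that helps (the registry class and hook are hypothetical, not from your code):
import org.testng.annotations.AfterMethod;

// Hypothetical singleton holding persistent state between tests.
final class ConnectionRegistry {

    private static ConnectionRegistry instance;

    static synchronized ConnectionRegistry getInstance() {
        if (instance == null) {
            instance = new ConnectionRegistry();
        }
        return instance;
    }

    // Visible for testing: lets every test start from a clean slate.
    static synchronized void resetForTesting() {
        instance = null;
    }
}

class RegistryResetBase {

    @AfterMethod
    public void resetSharedState() {
        ConnectionRegistry.resetForTesting();
    }
}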
I could give you more specific tips with code, but as is--that's about all I can tell you.
I solved my problem. test_5 was the only test method which did not have a data provider, so I provided a mock data provider method for it.
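A minimal sketch of what that stand-in provider can look like (the provider name and dummy row are illustrative):
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class TestClass extends BaseTest {

    // Dummy provider so test_5 is wired up the same way as the other tests.
    @DataProvider(name = "test_5")
    public Object[][] test5Data() {
        return new Object[][] {{ "unused" }};
    }

    @Test(dataProvider = "test_5")
    public void test_5(String ignored) {
        // original test body
    }
}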

Testing Google App Engine ThreadManager outside of GAE

I've written a JUnit (4.10) unit test that makes the following call to com.google.appengine.api.ThreadManager:
ThreadManager.currentRequestThreadFactory();
When this test runs, I get a NullPointerException being thrown from within this currentRequestThreadFactory method:
Caused by: java.lang.NullPointerException
at com.google.appengine.api.ThreadManager.currentRequestThreadFactory(ThreadManager.java:39)
at com.myapp.server.plumbing.di.BaseModule.providesThreadFactory(BaseModule.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
When I pull down the source for ThreadManager, and look at Line 39 (which is the source of the NPE), I see:
public static ThreadFactory currentRequestThreadFactory() {
return (ThreadFactory) ApiProxy.getCurrentEnvironment().getAttributes()
.get(REQUEST_THREAD_FACTORY_ATTR);
}
So it seems that ApiProxy.getCurrentEnvironment() is null, and when its getAttributes() method is called, the NPE is thrown. I've confirmed this by adding some print statements higher up in my unit test code:
if (ApiProxy.getCurrentEnvironment() == null)
    System.out.println("Environment is null.");
I'm vaguely aware that GAE offers "test versions" for all its services, but haven't been able to find (specifically) how to use them and set them up. So I ask: does GAE offer such test versions? If so, how do I add an ApiProxy test version here? And if not, then what are my options? I don't think I can mock either method (ThreadManager#currentRequestThreadFactory or ApiProxy#getCurrentEnvironment) because they're both statics. Thanks in advance.
Edit: I see that there is an appengine-testing.jar that ships with the SDK. Inside this JAR is an ApiProxyLocal.class that I believe is a version of ApiProxy that could be used during JUnit testing, that would work without throwing the NPE. If that's the case (which I'm not even sure of), then the question is: how do I inject it into my ThreadManager for this test?
You will get the correct threads from the stubs if you set up your LocalServiceTestHelper along the following lines.
private static final LocalServiceTestHelper helper =
        new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());
@BeforeClass
public static void initialSetup() {
    helper.setUp();
}

@AfterClass
public static void finalTearDown() {
    helper.tearDown();
}
I suggest avoiding calling ThreadManager.currentRequestThreadFactory() directly from your code. Instead, inject the ThreadFactory into the class that needs to create threads.
One easy way to do this is with Guice:
public static class MyService {

    private final ThreadFactory threadFactory;

    @Inject
    MyService(ThreadFactory threadFactory) {
        this.threadFactory = threadFactory;
    }

    ...
}
In your tests for MyService, you could either pass a fake ThreadFactory to the MyService constructor or inject Executors.defaultThreadFactory().
In your production code, you would create a binding:
bind(ThreadFactory.class)
    .toInstance(ThreadManager.currentRequestThreadFactory());
Of course, if you're not ready to jump into dependency injection, you can create your own accessors to get and set the ThreadFactory.
If you're implementing ApiProxy.Environment as suggested in http://code.google.com/appengine/docs/java/howto/unittesting.html then the getAttributes() method is where you are not finding an entry for REQUEST_THREAD_FACTORY_ATTR in your map.
You can add that:
attributes.put(REQUEST_THREAD_FACTORY_ATTR, new RequestThreadFactory());
For other helpful additions, see:
http://googleappengine.googlecode.com/svn-history/trunk/java/src/main/com/google/appengine/tools/development/LocalEnvironment.java
To add to the last answer: if you are starting a thread within your unit test case, that Java thread needs the ApiProxy environment set too.
In your class LocalServiceTestCase extends TestCase, the setup method might look
something like this:
super.setUp();
helper1.setUp();
setEnvironment();
where:
public static void setEnvironment() {
    if (ApiProxy.getCurrentEnvironment() == null) {
        ApiProxyLocal apl = LocalServiceTestHelper.getApiProxyLocal();
        ApiProxy.setEnvironmentForCurrentThread(new TEnvironment());
        ApiProxy.setDelegate(apl);
    }
}
The TEnvironment is given in the URLs above.
You might want to make a static unsetEnvironment() too.
In your unit test case, if you've started a new Java thread, you can just call the static method at the beginning of your run method:
public void run() {
    LocalServiceTestCase.setEnvironment();
    // ... rest of the thread's work
}

How to write automated unit tests for java annotation processor?

I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler API (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right: mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was less than 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model implementing the javax.lang.model.* classes. For example declaredType. (All unsupported operations will throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {

        @Override public Element asElement() {
            return element;
        }

        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }

        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }

        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }

        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {

    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}
class Lint {

    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java, though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
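For the in-memory part, a minimal sketch of feeding a string source straight to javac and running a processor over it (MyAnnotationProcessor is a placeholder for your own processor; -proc:only skips class-file output):
import java.net.URI;
import java.util.Arrays;

import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class InMemoryProcessingSketch {

    // A source "file" that lives only in memory.
    static class StringSource extends SimpleJavaFileObject {
        private final String code;

        StringSource(String className, String code) {
            super(URI.create("string:///" + className.replace('.', '/') + Kind.SOURCE.extension),
                    Kind.SOURCE);
            this.code = code;
        }

        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static void main(String[] args) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        JavaFileObject source = new StringSource("HelloWorld",
                "@Deprecated public class HelloWorld {}");

        JavaCompiler.CompilationTask task = compiler.getTask(
                null, null, null, Arrays.asList("-proc:only"), null, Arrays.asList(source));
        task.setProcessors(Arrays.asList(new MyAnnotationProcessor())); // your processor here

        System.out.println("processing succeeded: " + task.call());
    }
}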
One option is to bundle all tests into one class. The half second for compiling etc. is then a constant for a given set of tests; the real per-test time is negligible, I assume.
