I have a class that makes native Windows API calls through JNA. How can I write JUnit tests that will execute on a Windows development machine but will be ignored on a Unix build server?
I can easily get the host OS using System.getProperty("os.name").
I can write guard blocks in my tests:
@Test
public void testSomeWindowsAPICall() throws Exception {
    if (isWindows()) {
        // do tests...
    }
}
This extra boilerplate code is not ideal.
Alternatively, I have created a JUnit rule that only runs the test method on Windows:
public class WindowsOnlyRule implements TestRule {

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                if (isWindows()) {
                    base.evaluate();
                }
            }
        };
    }

    private boolean isWindows() {
        return System.getProperty("os.name").startsWith("Windows");
    }
}
And this can be enforced by adding this annotated field to my test class:
@Rule public WindowsOnlyRule runTestOnlyOnWindows = new WindowsOnlyRule();
Both these mechanisms are deficient in my opinion in that on a Unix machine they will silently pass. It would be nicer if they could be marked somehow at execution time with something similar to @Ignore.
Does anybody have an alternative suggestion?
In JUnit 5, there are options to enable or disable tests for specific operating systems.
@Test
@EnabledOnOs({ LINUX, MAC })
void onLinuxOrMac() {
}

@Test
@DisabledOnOs(WINDOWS)
void notOnWindows() {
    // ...
}
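For the Windows-only case in the question, the inverse presumably works as well; a minimal sketch, assuming JUnit 5 Jupiter's condition annotations (class and method names are illustrative):

import static org.junit.jupiter.api.condition.OS.WINDOWS;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;

class WindowsApiTest {

    @Test
    @EnabledOnOs(WINDOWS)
    void someWindowsApiCall() {
        // Runs only on Windows; reported as skipped (not passed) elsewhere.
    }
}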
Have you looked into assumptions? In the before method you can do this:
@Before
public void windowsOnly() {
    org.junit.Assume.assumeTrue(isWindows());
}
Documentation: http://junit.sourceforge.net/javadoc/org/junit/Assume.html
Have you looked at JUnit assumptions?
useful for stating assumptions about the conditions in which a test
is meaningful. A failed assumption does not mean the code is broken,
but that the test provides no useful information. The default JUnit
runner treats tests with failing assumptions as ignored
(which seems to meet your criteria for ignoring these tests).
If you use Apache Commons Lang's SystemUtils:
In your @Before method, or inside tests that you don't want to run on Windows, you can add:
Assume.assumeTrue(SystemUtils.IS_OS_WINDOWS);
Presumably you do not need to actually call the Windows API as part of the JUnit test; you only care that the class which is the target of the unit test calls what it thinks is the Windows API.
Consider mocking the Windows API calls as part of the unit tests.
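A minimal sketch of that approach, assuming the native calls are hidden behind a hypothetical WindowsApi wrapper interface that the class under test depends on (all names here are illustrative, not from the question):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UptimeReporterTest {

    // Hypothetical wrapper around the JNA calls; the class under test depends on
    // this interface instead of calling the native API directly.
    interface WindowsApi {
        int getTickCount();
    }

    // Hypothetical class under test, delegating to the wrapper.
    static class UptimeReporter {
        private final WindowsApi api;

        UptimeReporter(WindowsApi api) {
            this.api = api;
        }

        long uptimeMillis() {
            return api.getTickCount();
        }
    }

    @Test
    public void reportsUptimeFromApi() {
        WindowsApi api = mock(WindowsApi.class);
        when(api.getTickCount()).thenReturn(1234);

        assertEquals(1234L, new UptimeReporter(api).uptimeMillis());
        verify(api).getTickCount();
    }
}

This way the test exercises your own logic on any OS, and only a thin untested adapter actually touches JNA.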
Here is the relevant code in my test:
private MockedStatic<MyDAO> myDAOStaticMock;

@Captor
ArgumentCaptor<SomeInput> someInputCaptor;

@Before
public void before() {
    myDAOStaticMock = Mockito.mockStatic(MyDAO.class);
}

@After
public void after() {
    myDAOStaticMock.close();
}

@Test
public void test() {
    thingImTesting.methodThatCallsDaoStatically();
    myDAOStaticMock.verify(() -> MyDAO.staticMethod(someInputCaptor.capture()), times(2));
}
When I run this test in IntelliJ, it almost always works. When it runs through our build system, it fails every time. The error is that there were zero interactions with this mock.
What might be the problem with this particular verify, when all of the rest of my static checks work fine?
Edit: I was wrong about the reason for the issue and the static checks were not related. See my answer for more information.
I was wrong about the reason these tests failed in the build system. The unit test I wrote called DateTime.now() which has a different result depending on the timezone of where you are running the code. My build system was running the code in a different timezone, so the test failed in that context.
By being explicit about time zone in my unit test (e.g. DateTime.now().withZone(whatever)), I was able to fix the issue.
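For example, a minimal sketch of pinning the clock instead, assuming Joda-Time's DateTimeUtils test hooks (the chosen instant is arbitrary):

import org.joda.time.DateTime;
import org.joda.time.DateTimeUtils;
import org.joda.time.DateTimeZone;
import org.junit.After;
import org.junit.Before;

public class TimeSensitiveTest {

    @Before
    public void fixClock() {
        // Pin "now" to a known UTC instant so the result no longer depends on
        // the time zone (or wall clock) of the machine running the build.
        DateTimeUtils.setCurrentMillisFixed(
                new DateTime(2020, 1, 1, 0, 0, DateTimeZone.UTC).getMillis());
    }

    @After
    public void resetClock() {
        DateTimeUtils.setCurrentMillisSystem();
    }
}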
Currently the JUnit 5 framework works with inversion of control, i.e. you annotate a test method with @Test and then JUnit scans your classpath (in the simplest case).
Now is there a way for me to be in charge of calling the test cases through JUnit APIs? Maybe by hooking my test implementations to some test registry provided by JUnit?
I'm pretty new to JUnit - how did older versions go about this?
The reason I'm asking is that normally to execute my test cases, I'd have to run something along the lines of
java -jar junit-platform-standalone.jar --class-path target --scan-class-path
on the command line. My situation requires me to run the test cases through by executing one of my own classes, like that e.g.
java /com/example/MyTestCassesLauncher
EDIT: to clarify, I need one of my own classes to be hosting/launching my test cases, something like this:
// Maybe this needs to extend one of JUnit's launchers?
public class MyTestClassesLauncher {
    public static void main(String[] args) {
        JUnitLauncher.launchTests(new MyTestClass());
    }
}
where JUnitLauncher.launchTests is some kind of API provided by the platform. I'm not looking for a method with that exact same signature but a mechanism that would allow me to ultimately call my own MyTestClassesLauncher class to run the tests.
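For reference, the JUnit Platform does ship such an entry point in the junit-platform-launcher artifact; a rough sketch of a launcher built on it might look like the following (MyTestClass stands in for an actual test class, and junit-platform-launcher plus an engine such as junit-jupiter-engine are assumed to be on the classpath):

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

import java.io.PrintWriter;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

public class MyTestClassesLauncher {
    public static void main(String[] args) {
        // Discover only the classes this launcher names explicitly.
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(MyTestClass.class))
                .build();

        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.execute(request, listener);

        // Print a summary and exit non-zero on failures, like a build tool would.
        listener.getSummary().printTo(new PrintWriter(System.out));
        System.exit(listener.getSummary().getTotalFailureCount() == 0 ? 0 : 1);
    }
}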
Thanks in advance.
Not sure what you are actually trying to achieve, but in JUnit 5, to change the behaviour of your tests you can use the extension mechanism, similar to JUnit 4's RunWith but more powerful.
Such a custom extension can provide some additional logic, as in this logging example:
public class LoggingExtension implements TestInstancePostProcessor {

    @Override
    public void postProcessTestInstance(Object testInstance, ExtensionContext context) throws Exception {
        Logger logger = LogManager.getLogger(testInstance.getClass());
        testInstance.getClass()
                .getMethod("setLogger", Logger.class)
                .invoke(testInstance, logger);
    }
}
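A test class would then opt in with @ExtendWith; a minimal sketch, where setLogger is simply the hook the extension above looks up reflectively (log4j's Logger is assumed here):

import org.apache.logging.log4j.Logger;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(LoggingExtension.class)
class SomeLoggingTest {

    private Logger logger;

    // Called reflectively by LoggingExtension after the test instance is created.
    public void setLogger(Logger logger) {
        this.logger = logger;
    }
}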
The way JUnit controls its flow is JUnit's problem - you should not modify the framework but extend it.
Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes and our test fails.
The problem is that the detection of this failure is through a litany of org.hibernate.exception.JDBCConnectionException messages. These come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete. Indeed, we generally just kill these builds manually when we realise they are done.
My question: In a Maven-driven Java testing environment, is there a way to direct the build system to watch out for specific kinds of Exceptions and kill the whole process, should they arrive (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with maven.
If you use TestNG instead of JUnit, there are other possibilities to define tests as dependent on other tests.
For example, like others mentioned above, you can have a method to check your database connection and declare all other tests as dependent on this method.
@Test
public void serverIsReachable() {}

@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests which depend on it will be skipped and not marked as failed. Skipped methods will be reported as such in the final report, which is important since skipped methods are not necessarily failures. But since your initial test serverIsReachable failed, the build should fail completely.
The positive effect is that none of your other tests will be executed, so the build should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain logic tests afterwards; you can declare each database test with a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain logic tests as dependent on these tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution for your tests.
Hope this helps to make your tests a bit more structured. For more infos, have a look at the TestNG dependency documentation.
I realize this is not exactly what you are asking for, but it could help nonetheless to speed up the build:
JUnit assumptions allow a test to pass (rather than fail) when an assumption does not hold. You could have an assumption like assumeThat(db.isReachable()) that would skip those tests when a timeout is reached.
In order to actually speed things up and to not repeat this over and over, you could put this in a @ClassRule:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
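A minimal sketch of such a rule, where db.isReachable() stands in for whatever connectivity check you already have (the rule and its names are illustrative):

import java.util.function.BooleanSupplier;

import org.junit.Assume;
import org.junit.rules.ExternalResource;

// Skips every test in the class when the supplied precondition does not hold.
public class PreconditionRule extends ExternalResource {

    private final BooleanSupplier precondition;

    public PreconditionRule(BooleanSupplier precondition) {
        this.precondition = precondition;
    }

    @Override
    protected void before() {
        // A failing assumption here marks the class's tests as skipped rather than failed.
        Assume.assumeTrue(precondition.getAsBoolean());
    }
}

Attached with something like @ClassRule public static PreconditionRule databaseUp = new PreconditionRule(() -> db.isReachable());, a single failed check skips the whole class in well under a second instead of waiting for hundreds of JDBC timeouts.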
Of course you would then have to mark your build as unstable via another way, but that should be easily doable.
I don't know if you can fail-fast the build itself, or even want to - since the administrative aspects of the build may not then complete, but you could do this:
In all your test classes that depend on the database - or the parent classes, because something like this is inheritable - add this:
@BeforeClass
public static void testJdbc() throws Exception {
    Executors.newSingleThreadExecutor()
            .submit(new Callable<Object>() {
                public Object call() throws Exception {
                    // execute the simplest SQL you can, e.g. "SELECT 1"
                    return null;
                }
            })
            .get(100, TimeUnit.MILLISECONDS);
}
If the JDBC simple query fails to return within 100ms, the entire test class won't run and will show as a "fail" to the build.
Make the wait time as small as you can and still be reliable.
One thing you could do is to write a new Test Runner which will stop if such an error occurs. Here is an example of what that might look like:
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;
public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {

    private boolean failedWithSpecialException = false;

    public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (failedWithSpecialException || isIgnored(method)) {
            notifier.fireTestIgnored(description);
        } else {
            runLeaf(methodBlock(method), description, notifier);
        }
    }

    @Override
    protected Statement methodBlock(FrameworkMethod method) {
        return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
    }

    private class FeedbackIfSpecialExceptionOccurs extends Statement {

        private final Statement next;

        public FeedbackIfSpecialExceptionOccurs(Statement next) {
            super();
            this.next = next;
        }

        @Override
        public void evaluate() throws Throwable {
            try {
                next.evaluate();
            } catch (AssumptionViolatedException e) {
                throw e;
            } catch (SpecialException e) {
                StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
                throw e;
            }
        }
    }
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically what this does is check for a certain exception (here it's SpecialException, an exception I wrote myself); if it occurs, the test that threw it fails and all following tests are skipped. You could of course limit that to tests annotated with a specific annotation if you liked.
It is also possible that a similar behavior could be achieved with a Rule, and if so that may be a lot cleaner.
My tests just repeat the code. For the method
public void start(Context context) {
    context.setA(CONST_A);
    context.setB(CONST_B);
    ...
}
I wrote a test using Mockito:
@Test
public void testStart() throws Exception {
    Context mockContext = mock(Context.class);
    action.start(mockContext);
    verify(mockContext).setA(Action.CONST_A);
    verify(mockContext).setB(Action.CONST_B);
    ...
}
Or for
public void act() {
    state.act();
}
test
@Test
public void testAct() throws Exception {
    State mockState = mock(State.class);
    context.setState(mockState);
    context.act();
    verify(mockState).act();
}
Are such tests useful? Do such methods need to be tested, and if so, how should they be tested?
In my opinion, you should not try to have 100% test coverage in general. Having high test coverage is good; having perfect coverage is useless and wastes your time. Any method that just sets, gets or delegates work to another method should not be tested, because it will cost you a lot to write and even more when refactoring. Finally, it won't add any anti-regression value or any help for anyone using your API.
Prefer testing methods with real logic that are risky or sensitive. The cases you submitted test Mockito more than your own code. This will take build time and won't help you.
Personally I don't consider verify() useful at all since it directly tests the implementation instead of the result of your method. This will give you false failures when you change the implementation while the result is still correct.
As to whether this is useful: there is no logic to test so no, it's not particularly useful.
According to the comments I left in other answers
public void start(Context context) {
    context.setA(CONST_A);
    context.setB(CONST_B);
    ...
}
should not be tested with Mockito, rather
@Test
public void testStart() throws Exception {
    Context context = new Context();
    action.start(context);
    assertThat(context.getA(), equalTo(Action.CONST_A));
    assertThat(context.getB(), equalTo(Action.CONST_B));
}
It's not much different, but in contrast to verify it will also pass if start manages to reach this state without calling a setter or getter.
I'm experimenting with Java annotation processors. I'm able to write integration tests using the "JavaCompiler" (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not ... what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
.that(JavaFileObjects.forResource("HelloWorld.java"))
.processedWith(new MyAnnotationProcessor())
.compilesWithoutError()
.and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
.that(fileObject)
.processedWith(new NoHelloWorld())
.failsToCompile()
.withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only < 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model implementing the javax.lang.model.* classes. For example declaredType. (All unsupported operations will throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;
@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {

    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {

    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests in one class. The half second for compiling etc. is then a constant for the whole set of tests; the real time per test is negligible, I assume.