Writing unit test for @Nonnull annotated parameter - java

I have a method like this one:
public void foo(@Nonnull String value) {...}
I would like to write a unit test to make sure foo() throws an NPE when value is null, but I can't, since the compiler refuses to compile the unit test when static null pointer flow analysis is enabled in the IDE.
How do I make this test compile (in Eclipse with "Enable annotation-based null analysis" enabled):
@Test(expected = NullPointerException.class)
public void test() {
    T inst = ...
    inst.foo(null);
}
Note: In theory the static null pointer analysis of the compiler should prevent cases like that. But there is nothing stopping someone from writing another module with the static flow analysis turned off and calling the method with null.
Common case: Big messy old project without flow analysis. I start with annotating some utility module. In that case, I'll have existing or new unit tests which check how the code behaves for all the modules which don't use flow analysis yet.
My guess is that I have to move those tests into an unchecked module and move them around as I spread flow analysis. That would work and fit well into the philosophy but it would be a lot of manual work.
To put it another way: I can't easily write a test which says "success when code doesn't compile" (I'd have to put code pieces into files, invoke the compiler from unit tests, check the output for errors ... not pretty). So how can I test easily that the code fails as it should when callers ignore #Nonnull?

Hiding null within a method does the trick:
public void foo(@NonNull String bar) {
    Objects.requireNonNull(bar);
}

/** Trick the Java flow analysis to allow passing <code>null</code>
 * for @Nonnull parameters.
 */
@SuppressWarnings("null")
public static <T> T giveNull() {
    return null;
}

@Test(expected = NullPointerException.class)
public void testFoo() {
    foo(giveNull());
}
The above compiles fine (and yes, double-checked - when using foo(null) my IDE gives me a compile error - so "null checking" is enabled).
In contrast to the solution given via comments, the above has the nice side effect of working for any kind of parameter type (but will probably require Java 8 to get the type inference right in all cases).
And yes, the test passes (as written above), and fails when commenting out the Objects.requireNonNull() line.
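For newer test setups the same trick combines naturally with assertThrows; a minimal sketch, assuming JUnit 5 (org.junit.jupiter.api.Assertions.assertThrows) is on the classpath and the giveNull() helper from above is available:
@Test
public void testFooJupiter() {
    // giveNull() hides the null literal from the IDE's flow analysis
    assertThrows(NullPointerException.class, () -> foo(giveNull()));
}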

Why not just use plain old reflection?
try {
    YourClass.class.getMethod("foo", String.class)
        .invoke(someInstance, (Object) null); // cast needed because invoke() is varargs
    fail("Expected InvocationTargetException with nested NPE");
} catch (InvocationTargetException e) {
    if (e.getCause() instanceof NullPointerException) {
        return; // success
    }
    throw e; // let the test fail
}
Note that this can break unexpectedly when refactoring (you rename the method, change the order of method parameters, move method to new type).

Using assertThrows from the JUnit Jupiter assertions I was able to test this:
public MethodName(@NonNull final param1 dao) { ...
assertThrows(IllegalArgumentException.class, () -> new MethodName(null));
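A fuller sketch of that approach might look like this (Param1 stands for the parameter type in the question; whether you get an IllegalArgumentException or a NullPointerException depends on how the null check inside the constructor is implemented):
class MethodName {
    private final Param1 dao;

    MethodName(@NonNull final Param1 dao) {
        if (dao == null) {
            throw new IllegalArgumentException("dao must not be null");
        }
        this.dao = dao;
    }
}

@Test
void constructorRejectsNull() {
    assertThrows(IllegalArgumentException.class, () -> new MethodName(null));
}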

Here design by contract comes into the picture: you cannot pass a null value to a parameter of a method annotated as non-null.

You can use a field which you initialize and then set to null in a set up method:
private String nullValue = ""; // set to null in clearNullValue()

@Before
public void clearNullValue() {
    nullValue = null;
}

@Test(expected = NullPointerException.class)
public void test() {
    T inst = ...
    inst.foo(nullValue);
}
As in GhostCat's answer, the compiler is unable to know whether and when clearNullValue() is called and has to assume that the field is not null.

Related

JUnit assertion to force a line to be executed

Is there any JUnit assertion with which I can force a line to be executed?
For example:
doAnswer(new Answer<Void>() {
    @SuppressWarnings("unchecked")
    @Override
    public Void answer(final InvocationOnMock invocation) throws Throwable {
        Object[] arguments = invocation.getArguments();
        Map<String, String> fieldMapActual = (Map<String, String>) arguments[0];
        assertEquals(fieldMap, fieldMapActual);
        assertFailIfThisLineIsNotExecuted();
        return null;
    }
}).when(x).myMethod(xxx);
As I simulate the behaviour of myMethod, the answer method of the anonymous inner type will be executed at runtime of myMethod (not at runtime of the JUnit test), if myMethod is called with the intended value/parameter.
In order to assert that the method is called, I must additionally define a verify (otherwise my test would still pass even if the method is not called).
verify(x).myMethod(xxx);
If I had a chance to write something like assertFailIfThisLineIsNotExecuted in the answer method, I would not have to define an extra verify. So again, is there any JUnit assertion with which I can force a line to be executed? The opposite of fail(), so to speak, without immediately marking the test as "successful".
If you want to make sure that a certain line of your test is being executed, use a boolean flag:
final boolean[] wasExecuted = { false };
...
wasExecuted[0] = true;
...
assertTrue("Some code wasn't executed", wasExecuted[0]);
But my gut feeling is that you're trying to solve a different problem.
The verify says "This method must have been called". It doesn't matter if you mocked an answer or not. So this is the approach that you should use.
I'm using my flag approach only when I can't create a mock for some reason. If that happens, I extend the class under test in my test code and add the flag.
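A minimal sketch of that fallback; the class names are made up for illustration, and codeUnderTest() stands for whatever is supposed to trigger the call:
class Worker {
    void doWork() { /* production logic */ }
}

// Test subclass that records whether doWork() was reached
class FlaggedWorker extends Worker {
    boolean wasExecuted;

    @Override
    void doWork() {
        wasExecuted = true;
        super.doWork();
    }
}

@Test
public void doWorkIsReached() {
    FlaggedWorker worker = new FlaggedWorker();
    codeUnderTest(worker); // whatever is supposed to call doWork()
    assertTrue("doWork() was not executed", worker.wasExecuted);
}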
The advantage of the flag over verify is that the flag documents in which place I expect the code to be (you can have the IDE search all places where the flag is used). verify() is not that easy to locate when it fails.
verify(x).myMethod(xxx); should be what you want. It also expresses intent.
assertFailIfThisLineIsNotExecuted() would also be a single line of code (so how would it be "better" than verify?), it's not supported by JUnit, you would have to write code to get a good error message, etc.

What is the difference between assert object!=null and Assert.assertNotNull(object)?

What is the difference between following two blocks of code?
@Test
public void getObjectTest() throws Exception {
    Object object;
    //Some code
    Assert.assertNotNull(object);
}
AND
@Test
public void getObjectTest() throws Exception {
    Object object;
    //Some code
    assert object != null;
}
I do understand that Assert.assertNotNull is a method call from TestNG and assert is a keyword of Java (introduced in Java 1.4). Are there any other differences between these two, e.g. in behaviour or performance?
Well the first case uses the Assert class from your testing framework and is the right way to go, since TestNG will report the error in an intelligent way.
You can also add your own message to the test:
Assert.assertNotNull(object, "This object should not be null");
The second case uses the assert keyword - it will give you a failure but the stack trace may or may not be understandable at a glance. You also need to be aware that assertions may NOT be enabled.
Talking about performance:
assert object != null; is a single Java statement, but Assert.assertNotNull(object) will result in calling multiple TestNG methods, so with respect to performance assert object != null; will be slightly better.
Yes, there is: the assert keyword needs to be enabled using the -ea flag, like
java -ea MyClass
The assert can therefore be turned on and off without any changes to the code.
The Assert class, on the other hand, always works. So if you're writing tests, use the Assert class, never the assert keyword: with assertions disabled, the keyword silently does nothing, which might give you the impression that all your tests are passing while they actually don't assert anything. In my 10+ years of coding, I've never seen anyone enable assertions except JetBrains, so use Assert instead. Or better, use Hamcrest.
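For reference, a Hamcrest-style version of the same check might look like this (assuming Hamcrest is on the classpath):
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.notNullValue;

assertThat("object should not be null", object, notNullValue());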
The difference is basically the lack of assert keyword.
This is the source from testng:
static public void assertNotNull(Object object) {
    assertNotNull(object, null);
}

static public void assertNotNull(Object object, String message) {
    assertTrue(object != null, message);
}
Please note that the OP used the testng tag!
This is not JUnit's Assert!
Some info about java's assert keyword: assert

Mocking a Spy method with Mockito

I am writing a unit test for a FizzConfigurator class that looks like:
public class FizzConfigurator {
    public void doFoo(String msg) {
        doWidget(msg, Config.ALWAYS);
    }

    public void doBar(String msg) {
        doWidget(msg, Config.NEVER);
    }

    public void doBuzz(String msg) {
        doWidget(msg, Config.SOMETIMES);
    }

    public void doWidget(String msg, Config cfg) {
        // Does a bunch of stuff and hits a database.
    }
}
I'd like to write a simple unit test that stubs the doWidget(String,Config) method (so that it doesn't actually fire and hit the database), but that allows me to verify that calling doBuzz(String) ends up executing doWidget. Mockito seems like the right tool for the job here.
public class FizzConfiguratorTest {
    @Test
    public void callingDoBuzzAlsoCallsDoWidget() {
        FizzConfigurator fixture = Mockito.spy(new FizzConfigurator());
        Mockito.when(fixture.doWidget(Mockito.anyString(), Config.ALWAYS)).
            thenThrow(new RuntimeException());

        try {
            fixture.doBuzz("This should throw.");
            // We should never get here. Calling doBuzz should invoke our
            // stubbed doWidget, which throws an exception.
            Assert.fail();
        } catch(RuntimeException rte) {
            return; // Test passed.
        }
    }
}
This seems like a good gameplan (to me at least). But when I actually go to code it up, I get the following compiler error on the 2nd line inside the test method (the Mockito.when(...) line:
The method when(T) in the type Mockito is not applicable for the arguments (void)
I see that Mockito can't mock a method that returns void. So I ask:
Am I approaching this test setup correctly? Or is there a better, Mockito-recommended, way of testing that doBuzz calls doWidget under the hood? And
What can I do about mocking/stubbing doWidget as it is the most critical method of my entire FizzConfigurator class?
I wouldn't use exceptions to test that, but verifications. And another problem is that you can't use when() with methods returning void.
Here's how I would do it:
FizzConfigurator fixture = Mockito.spy(new FizzConfigurator());
doNothing().when(fixture).doWidget(Mockito.anyString(), Mockito.<Config>any());
fixture.doBuzz("some string");
Mockito.verify(fixture).doWidget("some string", Config.SOMETIMES);
This isn't a direct answer to the question, but I ran across it when trying to troubleshoot my problem and haven't since found a more relevant question.
If you're trying to stub/mock an object marked as Spy, Mockito only picks up the stubs if they're created using the do...when convention as hinted at by JB Nizet:
doReturn(Set.of(...)).when(mySpy).getSomething(...);
It wasn't being picked up by:
when(mySpy.getSomething(...)).thenReturn(Set.of(...));
Which matches the comment in MockHandlerImpl::handle:
// stubbing voids with doThrow() or doAnswer() style
This is a clear sign that doWidget method should belong to another class which FizzConfigurator would depend on.
In your test, this new dependency would be a mock, and you could easily verify if its method was called with verify.
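A rough sketch of that refactoring (WidgetService and the constructor injection are made up for illustration):
// Collaborator that owns the widget logic (and the database access)
class WidgetService {
    void doWidget(String msg, Config cfg) { /* hits the database */ }
}

class FizzConfigurator {
    private final WidgetService widgets;

    FizzConfigurator(WidgetService widgets) { this.widgets = widgets; }

    public void doBuzz(String msg) { widgets.doWidget(msg, Config.SOMETIMES); }
}

@Test
public void callingDoBuzzCallsDoWidget() {
    WidgetService widgets = Mockito.mock(WidgetService.class);
    FizzConfigurator fixture = new FizzConfigurator(widgets);
    fixture.doBuzz("some string");
    Mockito.verify(widgets).doWidget("some string", Config.SOMETIMES);
}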
In my case, for the method I was trying to stub, I was passing in incorrect matchers.
My method signature (for the super class method I was trying to stub): String, Object.
I was passing in:
myMethod("string", Mockito.nullable(ClassType.class)) and getting:
Hints:
1. missing thenReturn()
2. you are trying to stub a final method, which is not supported
3: you are stubbing the behaviour of another mock inside before 'thenReturn' instruction is completed
When using a matcher in another parameter, we also need to use one for the string:
myMethod(eq("string"), Mockito.nullable(ClassType.class))
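Put together, the working stub might look roughly like this (spyObject, myMethod, ClassType and the returned value are placeholders from the question; once one argument uses a matcher, every argument must):
doReturn("result")
    .when(spyObject)
    .myMethod(eq("string"), Mockito.nullable(ClassType.class));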
Hope this helps!

JUnit4 fail() is here, but where is pass()?

There is a fail() method in JUnit4 library. I like it, but experiencing a lack of pass() method which is not present in the library. Why is it so?
I've found out that I can use assertTrue(true) instead, but that still looks illogical.
@Test
public void testSetterForeignWord(){
    try {
        card.setForeignWord("");
        fail();
    } catch (IncorrectArgumentForSetter ex){
    }
    // assertTrue(true);
}
Just call a return statement anytime your test is finished and has passed.
As long as the test doesn't throw an exception, it passes, unless your #Test annotation specifies an expected exception. I suppose a pass() could throw a special exception that JUnit always interprets as passing, so as to short circuit the test, but that would go against the usual design of tests (i.e. assume success and only fail if an assertion fails) and, if people got the idea that it was preferable to use pass(), it would significantly slow down a large suite of passing tests (due to the overhead of exception creation). Failing tests should not be the norm, so it's not a big deal if they have that overhead.
Note that your example could be rewritten like this:
@Test(expected = IncorrectArgumentForSetter.class)
public void testSetterForeignWord() throws Exception {
    card.setForeignWord("");
}
Also, you should favor the use of standard Java exceptions. Your IncorrectArgumentForSetter should probably be an IllegalArgumentException.
I think this question needs an updated answer, since most of the answers here are fairly outdated.
Firstly to the OP's question:
I think it's pretty well accepted that introducing the "expected exception" concept into JUnit was a bad move, since that exception could be raised anywhere and the test would still pass. It works if you're throwing (and asserting on) very domain-specific exceptions, but I only throw those kinds of exceptions when I'm working on code that needs to be absolutely immaculate; most APIs will simply throw the built-in exceptions like IllegalArgumentException or IllegalStateException. If two calls you're making could potentially throw these exceptions, then the @ExpectedException annotation will green-bar your test even if it's the wrong line that throws the exception!
For this situation I've written a class that I'm sure many others here have written, that's an assertThrows method:
public class Exceptions {
    private Exceptions(){}

    public static void assertThrows(Class<? extends Exception> expectedException, Runnable actionThatShouldThrow){
        try{
            actionThatShouldThrow.run();
            fail("expected action to throw " + expectedException.getSimpleName() + " but it did not.");
        }
        catch(Exception e){
            if ( ! expectedException.isInstance(e)) {
                throw e;
            }
        }
    }
}
This method simply returns if the expected exception is thrown, allowing you to do further assertions/verification in your test.
With Java 8 syntax your test looks really nice. Below is one of the simpler tests on our model that uses the method:
@Test
public void when_input_lower_bound_is_greater_than_upper_bound_axis_should_throw_illegal_arg() {
    //setup
    AxisRange range = new AxisRange(0,100);

    //act
    Runnable act = () -> range.setLowerBound(200);

    //assert
    assertThrows(IllegalArgumentException.class, act);
}
these tests are a little wonky because the "act" step doesn't actually perform any action, but I think the meaning is still fairly clear.
There's also a tiny little library on Maven called catch-exception that uses the Mockito-style syntax to verify that exceptions get thrown. It looks pretty, but I'm not a fan of dynamic proxies. That said, its syntax is so slick it remains tempting:
// given: an empty list
List myList = new ArrayList();
// when: we try to get the first element of the list
// then: catch the exception if any is thrown
catchException(myList).get(1);
// then: we expect an IndexOutOfBoundsException
assert caughtException() instanceof IndexOutOfBoundsException;
Lastly, for the situation that I ran into to get to this thread, there is a way to ignore tests if some condition is met.
Right now I'm working on getting some DLLs called through a java native-library-loading-library called JNA, but our build server is in ubuntu. I like to try to drive this kind of development with JUnit tests --even though they're far from "units" at this point--. What I want to do is run the test if I'm on a local machine, but ignore the test if we're on ubuntu. JUnit 4 does have a provision for this, called Assume:
@Test
public void when_asking_JNA_to_load_a_dll() throws URISyntaxException {
    //this line will cause the test to be branded as "ignored" when "isCircleCI"
    //(the machine running ubuntu is running this test) is true.
    Assume.assumeFalse(BootstrappingUtilities.isCircleCI());
    //an ignored test will typically result in some qualifier being put on the results,
    //but will also not typically prevent a green bar on most platforms.

    //setup
    URL url = DLLTestFixture.class.getResource("USERDLL.dll");
    String path = url.toURI().getPath();
    path = path.substring(0, path.lastIndexOf("/"));

    //act
    NativeLibrary.addSearchPath("USERDLL", path);
    Object dll = Native.loadLibrary("USERDLL", NativeCallbacks.EmptyInterface.class);

    //assert
    assertThat(dll).isNotNull();
}
I was looking for a pass method for JUnit as well, so that I could short-circuit some tests that were not applicable in some scenarios (they are integration tests, rather than pure unit tests). So, too bad it is not there.
Fortunately, there is a way to have a test ignored conditionally, which actually fits even better in my case, using the assumeTrue method:
Assume.assumeTrue(isTestApplicable);
So here the test will be executed only if isTestApplicable is true, otherwise test will be ignored.
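In context, that might look something like this (the isTestApplicable flag is illustrative):
@Test
public void integrationOnlyTest() {
    // marks the test as ignored (not failed) when the assumption does not hold
    Assume.assumeTrue(isTestApplicable);
    // ... actual integration test code ...
}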
There is no need for a pass method, because when no AssertionError is thrown from the test code, the unit test case passes.
The fail() method actually throws an AssertionError to fail the test case if control reaches that point.
I think that this question is a result of a little misunderstanding of the test execution process. In JUnit (and other testing tools) results are counted per method, not per assert call. There is not a counter, which keeps track of how many passed/failured assertX was executed.
JUnit executes each test method separately. If the method returns successfully, then the test registered as "passed". If an exception occurs, then the test registered as "failed". In the latter case two subcase are possible: 1) a JUnit assertion exception, 2) any other kind of exceptions. Status will be "failed" in the first case, and "error" in the second case.
In the Assert class many shorthand methods are available for throwing assertion exceptions. In other words, Assert is an abstraction layer over JUnit's exceptions.
For example, this is the source code of assertEquals on GitHub:
/**
 * Asserts that two Strings are equal.
 */
static public void assertEquals(String message, String expected, String actual) {
    if (expected == null && actual == null) {
        return;
    }
    if (expected != null && expected.equals(actual)) {
        return;
    }
    String cleanMessage = message == null ? "" : message;
    throw new ComparisonFailure(cleanMessage, expected, actual);
}
As you can see, in case of equality nothing happens, otherwise an exception will be thrown.
So:
assertEquals("Oh!", "Some string", "Another string!");
simply throws a ComparisonFailure exception, which will be caught by JUnit, and
assertEquals("Oh?", "Same string", "Same string");
does NOTHING.
In sum, something like pass() would not make any sense, because it would not do anything.

How to write automated unit tests for java annotation processor?

I'm experimenting with java annotation processors. I'm able to write integration tests using the "JavaCompiler" (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The Problem: a single test runs for about half a second even without any code in my annotation processor. This is way too long to using it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not ... what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hadn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was less than 350 lines of code.)
This is the part of a test to initiate the javax.lang.model objects. After the setup the model should be in the same state as the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model implementing the javax.lang.model.* classes. For example declaredType. (All unsupported operations will throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType(){
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers for this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;
@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}
class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests in one class. Half a second for compiling etc. is then a constant for a given set of tests, the real test time for a test is negligible, I assume.
