My tests just repeat the code. For the method
public void start(Context context) {
context.setA(CONST_A);
context.setB(CONST_B);
...
}
I wrote a test using Mockito:
@Test
public void testStart() throws Exception {
Context mockContext = mock(Context.class);
action.start(mockContext);
verify(mockContext).setA(Action.CONST_A);
verify(mockContext).setB(Action.CONST_B);
...
}
Or for
public void act() {
state.act();
}
test
@Test
public void testAct() throws Exception {
State mockState = mock(State.class);
context.setState(mockState);
context.act();
verify(mockState).act();
}
Are such tests useful? Do such methods need to be tested at all, and if so, how?
In my opinion, you should not aim for 100% test coverage in general. Having high test coverage is good; having perfect coverage is useless and wastes your time. Any method that just sets, gets or delegates work to another method should not be tested, because it will cost you a lot to write and even more to maintain when refactoring. In the end, it adds little anti-regression value and no help for anyone using your API.
Prefer testing methods with real logic, or ones that are risky or sensitive. The cases you posted test Mockito more than your own code; they add build time without helping you.
Personally I don't consider verify() useful at all since it directly tests the implementation instead of the result of your method. This will give you false failures when you change the implementation while the result is still correct.
As to whether this is useful: there is no logic to test so no, it's not particularly useful.
Following up on the comments I left on other answers:
public void start(Context context) {
context.setA(CONST_A);
context.setB(CONST_B);
...
}
should not be tested with Mockito, rather
@Test
public void testStart() throws Exception {
Context context = new Context();
action.start(context);
assertThat(context.getA(), equalTo(Action.CONST_A));
assertThat(context.getB(), equalTo(Action.CONST_B));
}
It's not much different, but in contrast to verify it also passes if start manages to reach this state without calling the setters.
Related
I have the following method and I wrote a unit test in Java for it. It is covered except for the if statement, and I also need to test that part.
@InjectMocks
private ProductServiceImpl productService;
public void demoMethod(final List<UUID> productUuidList) {
if (productUuidList.isEmpty()) {
return;
}
final Map<ProductRequest, PriceOverride> requestMap = getPriceRequests(productUuidList);
productService.updateByPriceList(requestMap, companyUuid);
}
However, since the method simply returns and does not produce anything when the list is empty, I cannot see how to test this if block.
So:
How can I test this if block?
Should I create a new Unit Test method for testing this if block? Or should I add related assert lines to the current test method?
Update: Here is my test method:
@Test
public void testDemoMethod() {
final UUID uuid = UUID.randomUUID();
final List<Price> priceList = new ArrayList<>();
final Price price = new Price();
price.setUuid(uuid);
priceList.add(price);
productService.demoMethod(Collections.singletonList(uuid));
}
The general idea is that you don't want to test specific code, but you want to test some behaviour.
So in your case you want to verify that getPriceRequests and priceService.updateByPriceList are not called when passing in an empty List.
How exactly you do that depends on what tools you have available. The easiest way is if you already mock priceService: then just instruct your mocking library/framework to verify that updateByPriceList is never called.
The point of doing a return in your if condition is that the rest of the code is not executed. That is, if the remaining code were executed, the method would not fulfil its purpose. Therefore, just make sure that whatever that code does, it was not done when your list is empty.
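A minimal sketch of that verification with Mockito, assuming the price update goes through a mocked collaborator (the PriceService type and the wiring below are assumptions; adjust to your actual classes):
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyMap;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;

import java.util.Collections;
import java.util.UUID;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ProductServiceImplTest {

    @Mock
    private PriceService priceService; // assumed collaborator type

    @InjectMocks
    private ProductServiceImpl productService;

    @Test
    public void demoMethodSkipsUpdateForEmptyList() {
        productService.demoMethod(Collections.emptyList());

        // the early return means the collaborator must never be called
        verify(priceService, never()).updateByPriceList(anyMap(), any(UUID.class));
    }
}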
You have three choices:
Write a unit test with mocks. Mockito allows you to verify() whether some method was invoked.
Write a more high-level test against a database. When testing the Service Facade layer this is usually the wiser choice. In this case you can inspect the resulting state of the DB in your test to check whether it did what it had to.
Refactor your code to work differently
Check out Test Pyramid and How anemic architecture spoils your tests for more details.
I have a method like this one:
public void foo(@Nonnull String value) {...}
I would like to write a unit test to make sure foo() throws an NPE when value is null, but I can't, since the compiler refuses to compile the unit test when static null pointer flow analysis is enabled in the IDE.
How do I make this test compile (in Eclipse with "Enable annotation-based null analysis" enabled):
@Test(expected = NullPointerException.class)
public void test() {
T inst = ...
inst.foo(null);
}
Note: In theory the compiler's static null analysis should prevent cases like that. But there is nothing stopping someone from writing another module with the static flow analysis turned off and calling the method with null.
Common case: Big messy old project without flow analysis. I start with annotating some utility module. In that case, I'll have existing or new unit tests which check how the code behaves for all the modules which don't use flow analysis yet.
My guess is that I have to move those tests into an unchecked module and move them around as I spread flow analysis. That would work and fit well into the philosophy but it would be a lot of manual work.
To put it another way: I can't easily write a test which says "success when code doesn't compile" (I'd have to put code pieces into files, invoke the compiler from unit tests, check the output for errors ... not pretty). So how can I test easily that the code fails as it should when callers ignore @Nonnull?
Hiding null within a method does the trick:
public void foo(@NonNull String bar) {
Objects.requireNonNull(bar);
}
/** Trick the Java flow analysis to allow passing <code>null</code>
* for @Nonnull parameters.
*/
@SuppressWarnings("null")
public static <T> T giveNull() {
return null;
}
@Test(expected = NullPointerException.class)
public void testFoo() {
foo(giveNull());
}
The above compiles fine (and yes, double-checked - when using foo(null) my IDE gives me a compile error - so "null checking" is enabled).
In contrast to the solution given in the comments, the above has the nice side effect of working for any kind of parameter type (though it probably requires Java 8 to always get the type inference right).
And yes, the test passes (as written above), and fails when commenting out the Objects.requireNonNull() line.
Why not just use plain old reflection?
try {
    // getMethod/invoke come from java.lang.reflect; the cast avoids the varargs warning
    YourClass.class.getMethod("foo", String.class).invoke(someInstance, (Object) null);
    fail("Expected InvocationTargetException with nested NPE");
} catch (InvocationTargetException e) {
    if (e.getCause() instanceof NullPointerException) {
        return; // success
    }
    throw e; // let the test fail
}
Note that this can break unexpectedly when refactoring (you rename the method, change the order of method parameters, move method to new type).
Using assertThrows from Jupiter assertions I was able to test this:
public MethodName(@NonNull final param1 dao) { ...
assertThrows(IllegalArgumentException.class, () -> new MethodName(null));
Here design by contract comes into the picture: you cannot pass a null value for a parameter annotated as not-null.
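For completeness, a fuller sketch of such a test (the class and test names below are invented, and the expected exception type follows the snippet above; use whichever exception your null check actually throws):
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class BookingRepositoryTest {

    @Test
    void constructorRejectsNullDao() {
        // BookingRepository is a made-up class whose constructor declares a
        // @NonNull dao parameter backed by a runtime null check
        assertThrows(IllegalArgumentException.class, () -> new BookingRepository(null));
    }
}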
You can use a field which you initialize and then set to null in a set up method:
private String nullValue = ""; // set to null in clearNullValue()
@Before
public void clearNullValue() {
nullValue = null;
}
@Test(expected = NullPointerException.class)
public void test() {
T inst = ...
inst.foo(nullValue);
}
As in GhostCat's answer, the compiler is unable to know whether and when clearNullValue() is called and has to assume that the field is not null.
I have a test that loops to run the same checks with different inputs. The problem is that when one assert fails, the whole test stops and is reported as a single failed test.
Here is the code:
@Test
public void test1() throws JsonProcessingException {
this.bookingsTest(Product.ONE);
}
@Test
public void test2() throws JsonProcessingException {
this.bookingsTest(Product.TWO);
}
public <B extends Sale> void bookingsTest(Product product) {
List<Booking> bookings = this.prodConnector.getBookings(product, 100);
bookings.stream().map(Booking::getBookingId).forEach((bookingId) -> {
this.bookingTest(bookingId);
});
}
public <B extends Sale> void bookingTest(String bookingId) {
...
// Do some assert:
Assert.assertEquals("XXX", "XXX");
}
The methods test1 and test2 execute as two different tests, but inside them I loop to check some things on every item returned by the collection. What I want is for the check on each item to be treated as a separate test, so that if one fails the others keep executing and I can see which ones failed and how many failed out of the total.
What you describe is parameterized testing. There are plenty of frameworks that can help you; they just have different simplicity-to-power ratios, so pick the one that fits your needs.
JUnit's native way is @Parameterized, but it's very verbose. Other JUnit plugins you may find useful are zohhak or junit-dataprovider. If you are not forced to use JUnit and/or plain Java, you can try Spock or TestNG.
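A rough sketch of the @Parameterized route, reusing the names from the question (requires JUnit 4.12+; how the connector is obtained in a static context is an assumption here):
import java.util.List;
import java.util.stream.Collectors;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BookingTest {

    @Parameters(name = "booking {0}")
    public static List<String> bookingIds() {
        // assumed: the connector can be constructed here; adapt to your setup
        ProdConnector prodConnector = new ProdConnector();
        return prodConnector.getBookings(Product.ONE, 100).stream()
                .map(Booking::getBookingId)
                .collect(Collectors.toList());
    }

    private final String bookingId;

    public BookingTest(String bookingId) {
        this.bookingId = bookingId;
    }

    @Test
    public void bookingIsConsistent() {
        // same assertions as in bookingTest(bookingId); each id is now its
        // own test case, so one failure does not stop the remaining ids
        Assert.assertNotNull(bookingId);
    }
}

With this layout the report shows one entry per booking id, including how many passed and how many failed.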
Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes and our test fails.
The problem is that the detection of this failure comes through a litany of org.hibernate.exception.JDBCConnectionException messages. These come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete; in practice we generally just kill these builds manually once we realise they are done for.
My question: In a Maven-driven Java testing environment, is there a way to direct the build system to watch out for specific kinds of Exceptions and kill the whole process, should they arrive (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with Maven.
If you use TestNG instead of JUnit, there are other possibilities to define tests as dependent on other tests.
For example, like others mentioned above, you can have a method to check your database connection and declare all other tests as dependent on this method.
@Test
public void serverIsReachable() {}
@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests which depend on it will be skipped rather than marked as failed. Skipped methods are reported as such in the final report, which is important since skipped methods are not necessarily failures. But since the initial serverIsReachable test failed, the build will still fail.
The positive effect is that none of your other tests will be executed, so the run should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain logic tests afterwards; you can declare each database test with a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain logic tests as dependent on these tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution for your tests.
Hope this helps to make your tests a bit more structured. For more info, have a look at the TestNG dependency documentation.
I realize this is not exactly what you are asking for, but it could nonetheless help to speed up the build:
JUnit assumptions allow a test to pass (effectively be skipped) when an assumption fails. You could have an assumption like assumeThat(db.isReachable()) that would skip those tests when the database cannot be reached.
In order to actually speed things up and to not repeat this over and over, you could put this in a @ClassRule:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
Of course you would then have to mark your build as unstable in another way, but that should be easily doable.
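A sketch of what such a class-level guard could look like as a rule (DatabaseHealth.isReachable() is a placeholder for whatever cheap connectivity probe your project already has; depending on your JUnit 4 version the whole class is then reported as skipped):
import org.junit.Assume;
import org.junit.rules.ExternalResource;

public class RequiresDatabase extends ExternalResource {

    @Override
    protected void before() {
        // DatabaseHealth.isReachable() is a made-up helper, e.g. a quick "SELECT 1"
        Assume.assumeTrue("database not reachable, skipping tests",
                DatabaseHealth.isReachable());
    }
}

Used as a @ClassRule, the failing assumption fires once per class instead of once per timed-out test:

@ClassRule
public static final RequiresDatabase requiresDatabase = new RequiresDatabase();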
I don't know if you can fail-fast the build itself, or even want to - since the administrative aspects of the build may not then complete, but you could do this:
In all your test classes that depend on the database - or the parent classes, because something like this is inheritable - add this:
@BeforeClass
public static void testJdbc() throws Exception {
    Executors.newSingleThreadExecutor()
            .submit(new Callable<Object>() {
                public Object call() throws Exception {
                    // execute the simplest SQL you can, e.g. "SELECT 1"
                    return null;
                }
            })
            .get(100, TimeUnit.MILLISECONDS);
}
If the JDBC simple query fails to return within 100ms, the entire test class won't run and will show as a "fail" to the build.
Make the wait time as small as you can and still be reliable.
One thing you could do is to write a new Test Runner which will stop if such an error occurs. Here is an example of what that might look like:
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;
public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {
private boolean failedWithSpecialException = false;
public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
super(klass);
}
@Override
protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
Description description = describeChild(method);
if (failedWithSpecialException || isIgnored(method)) {
notifier.fireTestIgnored(description);
} else {
runLeaf(methodBlock(method), description, notifier);
}
}
@Override
protected Statement methodBlock(FrameworkMethod method) {
return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
}
private class FeedbackIfSpecialExceptionOccurs extends Statement {
private final Statement next;
public FeedbackIfSpecialExceptionOccurs(Statement next) {
super();
this.next = next;
}
@Override
public void evaluate() throws Throwable {
boolean complete = false;
try {
next.evaluate();
complete = true;
} catch (AssumptionViolatedException e) {
throw e;
} catch (SpecialException e) {
StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
throw e;
}
}
}
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically what this does is check for a certain Exception (here it's SpecialException, an Exception I wrote myself); if it occurs, the test that threw it fails and all following tests are skipped. You could of course limit that to tests annotated with a specific annotation if you liked.
It may also be possible to achieve similar behavior with a Rule, and if so that would be a lot cleaner.
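A sketch of that rule-based variant (SpecialException is the custom exception from the runner above; a static flag is the simplest way to share the "stop" state across tests, as long as they run in the same JVM):
import org.junit.Assume;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class StopOnSpecialExceptionRule implements TestRule {

    private static volatile boolean specialExceptionSeen = false;

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // skip everything after the first special failure
                Assume.assumeFalse("skipped after earlier SpecialException",
                        specialExceptionSeen);
                try {
                    base.evaluate();
                } catch (SpecialException e) {
                    specialExceptionSeen = true;
                    throw e;
                }
            }
        };
    }
}

Each test class then only needs a @Rule field instead of a custom runner:

@Rule
public StopOnSpecialExceptionRule stopOnSpecialException = new StopOnSpecialExceptionRule();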
I have a class that makes native Windows API calls through JNA. How can I write JUnit tests that will execute on a Windows development machine but will be ignored on a Unix build server?
I can easily get the host OS using System.getProperty("os.name")
I can write guard blocks in my tests:
@Test public void testSomeWindowsAPICall() throws Exception {
if (isWindows()) {
// do tests...
}
}
This extra boilerplate code is not ideal.
Alternatively I have created a JUnit rule that only runs the test method on Windows:
public class WindowsOnlyRule implements TestRule {
@Override
public Statement apply(final Statement base, final Description description) {
return new Statement() {
@Override
public void evaluate() throws Throwable {
if (isWindows()) {
base.evaluate();
}
}
};
}
private boolean isWindows() {
return System.getProperty("os.name").startsWith("Windows");
}
}
And this can be enforced by adding this annotated field to my test class:
@Rule public WindowsOnlyRule runTestOnlyOnWindows = new WindowsOnlyRule();
Both these mechanisms are deficient, in my opinion, in that on a Unix machine they will silently pass. It would be nicer if they could be marked somehow at execution time with something similar to @Ignore.
Does anybody have an alternative suggestion?
In JUnit 5, there are options to enable or disable a test for a specific operating system.
@Test
@EnabledOnOs({ LINUX, MAC })
void onLinuxOrMac() {
    // ...
}
@Test
@DisabledOnOs(WINDOWS)
void notOnWindows() {
    // ...
}
Have you looked into assumptions? In the @Before method you can do this:
@Before
public void windowsOnly() {
org.junit.Assume.assumeTrue(isWindows());
}
Documentation: http://junit.sourceforge.net/javadoc/org/junit/Assume.html
Have you looked at JUnit assumptions?
useful for stating assumptions about the conditions in which a test
is meaningful. A failed assumption does not mean the code is broken,
but that the test provides no useful information. The default JUnit
runner treats tests with failing assumptions as ignored
(which seems to meet your criteria for ignoring these tests).
If you use Apache Commons Lang's SystemUtils:
In your @Before method, or inside tests that you don't want to run on Windows, you can add:
Assume.assumeTrue(SystemUtils.IS_OS_WINDOWS);
Presumably you do not need to actually call the Windows API as part of the JUnit test; you only care that the class under test calls what it thinks is the Windows API.
Consider mocking the Windows API calls as part of the unit tests, as sketched below.
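A short sketch of that idea: put the JNA calls behind a thin interface so the class under test can be exercised with a Mockito mock on any OS (all names below are made up):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// in production code (separate files): the interface wraps the real JNA calls
interface WindowsRegistry {
    String readValue(String key);
}

class RegistryBackedSettings {
    private final WindowsRegistry registry;

    RegistryBackedSettings(WindowsRegistry registry) {
        this.registry = registry;
    }

    String proxyHost() {
        return registry.readValue("ProxyServer");
    }
}

public class RegistryBackedSettingsTest {

    @Test
    public void readsProxyHostFromRegistry() {
        // no real Windows API call is made, so this test runs on any OS
        WindowsRegistry registry = mock(WindowsRegistry.class);
        when(registry.readValue("ProxyServer")).thenReturn("proxy.example.com");

        assertEquals("proxy.example.com",
                new RegistryBackedSettings(registry).proxyHost());
    }
}

Only the small adapter that actually implements WindowsRegistry via JNA then needs the OS-specific treatment discussed above.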