Should every possible branch in a method have a separate JUnit test? - java

This is more of a design question.
Suppose you have a method like this (as an example):
if (x == 5) {
    c = 1;
} else {
    if (z != 2) {
        b = 6;
    } else {
        a = 3;
    }
}
Do you think it's best practice to have a JUnit test for each possible branch? I.e., testX5, testXNot5ZNot2, testXNot5Z2, etc., or something like:
void testMethod() {
    // x is 5
    test/assert code;
    // x not 5, z not 2
    test/assert code;
    // x not 5, z is 2
    test/assert code;
    // etc
}
EDIT: Just to be clear, my goal is complete code coverage. I just want to know opinions on whether I should make a new test for each branch or combine them in one test. Thank you for your input.

The JUnit FAQ seems to indicate that it is better to have more tests with fewer assertions, if only because JUnit will only report the first assertion failure in a test method. With one combined method, if you broke the x == 5 case, you'd have no way to tell whether any of the x != 5 cases were still working.
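As a sketch of what the split might look like (the method signature, getters and class name here are assumptions, since the original snippet doesn't show them):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MyClassTest {
    private final MyClass obj = new MyClass();  // hypothetical class under test

    @Test
    public void testXIs5() {
        obj.method(5, 0);               // any z: the x == 5 branch wins
        assertEquals(1, obj.getC());
    }

    @Test
    public void testXNot5ZNot2() {
        obj.method(0, 3);
        assertEquals(6, obj.getB());
    }

    @Test
    public void testXNot5ZIs2() {
        obj.method(0, 2);
        assertEquals(3, obj.getA());
    }
}

If testXIs5 breaks, the other two still run and report their own results.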

What you're discussing is called Branch Coverage.
The conventional wisdom is that if it's important enough to write code for a use case, it's important enough to write a test to cover that code. So 100% branch coverage would seem to be an excellent goal (and it also implies 100% statement coverage, though not necessarily 100% loop or condition coverage).
However, you also need to balance the effort of writing tests against the value those tests provide. For example, if the code you're testing has a try/catch for a checked exception, but the exception is almost never thrown (or is difficult to trigger in a test), then writing a test to cover that exception is probably not worth your time.
This is why you often read that aiming for a certain % of test coverage is a bad idea: you end up writing test cases to get coverage, not to find bugs. In your example, yes, each branch deserves its own test case. Across an entire production codebase, though, that's probably not necessary.

In unit testing, your goal is to test behaviors, not the code itself. Think of the method you're testing as a black box: you don't know how it does its job internally, but you expect certain results for certain inputs. So you'd want to create tests for the different ways you expect the code to behave, as if you knew nothing about how the internals of the method actually work. You'd write tests like "appliesDiscountToShoppingCart" and "addsDeliveryFeeToShoppingCart".
Now, all that being said, it's also useful to create "edge case" tests that exercise things likely to break (nulls, zeros, negatives, data too big or too small, etc.) to see whether the code fails in an expected manner. Usually, to write those, you do need to know how the method works. If you can design tests that cover all your code, that's great! 100% test coverage is a worthy goal, but it's not always practical (or useful), depending on the situation.
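For example, behavior-focused tests might look like this (the ShoppingCart API is made up for illustration; prices are in cents so the assertions stay exact):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {

    @Test
    public void appliesDiscountToShoppingCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2000);          // 20.00, expressed in cents
        cart.applyDiscountPercent(10);
        assertEquals(1800, cart.getTotalCents());
    }

    @Test
    public void addsDeliveryFeeToShoppingCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2000);
        cart.addDeliveryFee(500);
        assertEquals(2500, cart.getTotalCents());
    }
}

Note that neither test cares how the cart computes its total; only the observable behavior is asserted.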

Especially on build servers it is easier to have many small test methods, because it is then easy to identify which test failed. A downside of one combined test method is that it halts at the first failing assertion, so you never learn the results of the remaining checks.
For me personally, this benefit stops when I have to do a lot of copy-pasting to set up/explain the test case; at that point I will just put several asserts in the same test.
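If the duplication is mostly setup, JUnit 4's @Before hook can absorb it, so separate test methods stay cheap to write (a minimal sketch; the Fixture class is hypothetical):

import static org.junit.Assert.assertTrue;
import org.junit.Before;
import org.junit.Test;

public class SharedSetupTest {
    private Fixture fixture;   // hypothetical object that is tedious to set up

    @Before
    public void setUp() {
        // runs before every @Test method, so each case starts from the same state
        fixture = new Fixture();
        fixture.loadDefaults();
    }

    @Test
    public void firstCase() {
        assertTrue(fixture.isReady());
    }

    @Test
    public void secondCase() {
        assertTrue(fixture.isReady());
    }
}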

Related

How to test a method that wraps another method?

Let's imagine having a class (written in Java-like pseudocode):
class MyClass {
    ...
    public List<Element> getElementsThatContains(String str) {
        return this.getElementsThatContains(Set.of(str));
    }
    public List<Element> getElementsThatContains(Set<String> strs) {
        ...
    }
}
First of all - I have getElementsThatContains(Set<String> strs) properly 100% covered.
How should I cover getElementsThatContains(String str)?
Should I copy (almost) all the tests, but with calls to getElementsThatContains(String str)?
Should I just make one test method that checks that the results from the first and second methods are the same (with the same input data)?
Should I refactor my code so I do not have such a situation? (If yes, how?)
Yes, you should cover both methods. The reason for having unit tests is the safety net they provide when the code is refactored. For example, someone might refactor the implementation of getElementsThatContains(String str) so that it always returns an empty List. Even though getElementsThatContains(Set<String> strs) has 100% coverage, those tests won't catch this.
No, you should not make one test method that checks whether the results from the first and second methods are the same. This is generally considered bad practice. Moreover, if there is a bug in one method, your test would just check that the other method returns the same incorrect result.
No, you should not copy all the tests, because the test cases for each method will be different: the arguments to the methods are different. So you will have different test cases for each, even though the same method is called underneath.
Yes, you should test both methods, and you should use distinct test cases for each method.
But you should care less about your line coverage.
Don't get me wrong here! It is important to keep line coverage high. But it is more important to have 100% behavior coverage. And if you come across untested lines, your question should be: "Is this untested code needed (i.e., what requirement does it implement), or is it obsolete?"
When we write our tests with line coverage in mind, we tend to focus on the implementation details of the code under test. In consequence, our tests are likely to fail when we change those implementation details (e.g., during refactoring). But our tests should only fail if the tested behavior changes, not when we change the way that behavior is achieved.
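A behavior-level sketch of distinct tests for the two overloads could look like this (the constructor and the Element setup are assumptions for illustration):

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import org.junit.Test;

public class GetElementsThatContainsTest {
    // Hypothetical setup: one element containing "foo", one containing "bar".
    private final MyClass subject = new MyClass(Arrays.asList(
            new Element("foo"), new Element("bar")));

    @Test
    public void singleStringOverloadReturnsMatchingElements() {
        List<Element> result = subject.getElementsThatContains("foo");
        assertEquals(1, result.size());
    }

    @Test
    public void setOverloadReturnsElementsMatchingAnyString() {
        List<Element> result = subject.getElementsThatContains(Set.of("foo", "bar"));
        assertEquals(2, result.size());
    }
}

Neither test knows (or cares) that one overload delegates to the other, so a broken delegation is caught without the tests being coupled to it.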

I don't want AssertJ assertThat to end the test when an assertion fails

I use AssertJ and have multiple assertThat assertions in my test case.
When the first assertion fails, the test is finished, but I don't want that.
I'd like to get information about all failing assertions after a single execution of the test case.
Is there any way to do that?
I have found a solution with SoftAssertions here -> http://joel-costigliola.github.io/assertj/assertj-core-features-highlight.html#soft-assertions
but it's ugly to add a variable before each assertThat.
A bit of example code would help, but then, this is more of a theoretical problem, as the real answer is: consider not having multiple assertions in one test method!
Meaning: the idea of a failing test is to get you to the problem as quickly as possible. When you combine multiple asserts into a single test, you make that harder by default, because instead of knowing "test X with assertion Y failed", you first have to study the logs carefully to identify which asserts passed and which one failed.
Therefore the recommended practice is to not put multiple asserts/checks into a single test.
If you don't like soft assertions, you can give JUnit 5's assertAll a try, but otherwise I would follow @GhostCat's advice and try to assert one thing per test (which usually leads to only a few assertions).
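For reference, the JUnit 5 variant looks like this; assertAll runs every lambda and reports all failures together (AssertJ's SoftAssertions.assertSoftly does the same job without a manual assertAll() call). The values under test are made up:

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class AssertAllTest {

    @Test
    void reportsEveryFailedAssertion() {
        // Each lambda is executed even if an earlier one fails,
        // and all failures are reported in one go.
        String firstName = "Jane";
        String lastName = "Doe";
        assertAll("person",
                () -> assertEquals("Jane", firstName),
                () -> assertEquals("Doe", lastName));
    }
}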
I think that in some cases you may, and sometimes even must, assert multiple things in a single test method: namely when the tested method performs multiple changes that you should check at different levels of abstraction.
For example, as you test a method that adds an element to an object that stores it, you can assert that the number of elements contained in the object was incremented by one, but you can also check that the new element was added with the expected values.
You have two levels of abstraction: the object that contains the elements has a "direct/core" state, and the elements it contains have their own states.
Splitting it into two assertions gives a test that looks like this:
@Test
public void addElt() {
    int actualSize = foo.getSize();
    Element addedElt = new Element("a name", "a role");
    foo.addElt(addedElt);
    assertThat(foo).extracting(Foo::getSize)
                   .contains(actualSize + 1);
    assertThat(foo.getLastElt()).extracting(Element::getName, Element::getRole)
                                .containsExactly(addedElt.getName(), addedElt.getRole());
}
So now, why try to couple two assertions that check two different things?
Does it really bring any value for debugging your test?
I don't think so.
Trying to assert the changes at the two levels of abstraction in a single assertion clearly makes no sense: it is complex and adds useless noise.
If the first assertion fails:
assertThat(foo).extracting(Foo::getSize)
               .contains(actualSize + 1);
It very probably means that the element was not added.
So in this case, performing the second assertion:
assertThat(foo.getLastElt()).extracting(Element::getName, Element::getRole)
                            .containsExactly(addedElt.getName(), addedElt.getRole());
makes no sense, as it will very probably be in error as well.
The developer who handles the failing test needs useful information, not noise that makes the diagnosis harder. Feedback that the size is not the expected one is just what you need.
What I am trying to explain holds for AssertJ as for any other testing framework.

What is a test script?

I was given an assignment:
Consider a list of single digit numbers. For example, [2,2,5].
Consider the successors of such lists.
The successor of [2,2,5] is [2,2,6].
The successor of [2,3,9,9] is [2,4,0,0].
The successor of [9,9,9,9] is [1,0,0,0,0].
Write a Java program that takes such a list of numbers as input and returns its successor. You cannot express this list as a decimal number and compute its successor, as the number may be so large that an overflow occurs. Make your program as short as possible.
Submit code in Java along with compilation instructions and test scripts.
I have written the program, and it compiles and runs successfully. The program does exactly what the assignment requires. However, I am not sure what the instructor means by test scripts. Would someone please explain and provide an example of what that is?
Note: If you decide to downvote the question, provide a reason so I can edit my question.
Look up JUnit. The professor wants you to create a test class that tests specific scenarios. Something like this:
@Test
public void testSuccessor() {
    int[] start = {2, 2, 5};
    int[] expected = {2, 2, 6};
    int[] actual = callSuccessorMethod(start);  // call your successor method here
    assertArrayEquals(expected, actual);
}
The test will fail if the expected value does not match the actual value. Unit testing is a great way to validate your code. It also helps with future development: when you change the code, you can just rerun your tests to validate that you didn't break anything.
Check out TDD as well; it's a style of programming that might interest you.
A test script is a program that exercises your code (known as the "system under test" or simply "SUT") in a predictable manner, and verifies that its behaviour is correct for the given inputs. The test may be in the same language as the SUT, but it's very common to use a different language, as you have different requirements on performance, ease of modification, readability and robustness.
For example, a simple shell script may be enough in your case:
#!/bin/bash
set -e  # important, so the script fails if any single command fails
test '2,2,6' = "$(./successor '2,2,5')"
test '2,4,0,0' = "$(./successor '2,3,9,9')"
# Add any more tests you think you should have
If your compilation instructions are presented in the form of a Makefile, you could run your tests from there, too. It's a common convention to call the target test, so make test would compile your program and run its tests.

Only one valid test case, can I save the time to write failing ones? (Can I generate them automatically?)

Imagine the following situation: There is a legacy application I want to write tests for in order to be able to refactor the (ugly) code.
Now there is a big chunk of code that deals with some constraints. Basically it boils down to this signature:
public Boolean isValidRequest(Boolean constraint0, Boolean constraint1, Boolean constraint2, Boolean constraint3) {
    return constraint0 && constraint1 && constraint2 && constraint3;
}
Now what I want to test is exactly that "chaining" (here all constraints are chained with "&&", and I want to make sure it stays that way, i.e. no typos!).
To test all combinations I would have to write 2^n tests, where n is the number of constraints. Here that is 2^4, hence 16 tests.
I wonder if there is an easier way. The only valid test case is:
@Test
public void testAllConstraintsFulfilled() {
    assertThat(myObjectToTest.isValidRequest(true, true, true, true)).isTrue();
}
All other combinations should fail. I wonder whether there is a way I don't know about to skip manually writing the other 15 test cases and instead say something like:
1) Try all combinations.
2) Only (true, true, true, true) should succeed.
I have Mockito, AssertJ and JUnit 4 available. Any hints or ideas?
There are two ways here:
A) You turn to coverage, and you keep writing test cases until you hit 100%. You don't need to test all possible input combinations - only the paths your program can actually take!
B) You reverse that approach: instead of writing test cases to hit paths, you use a solution that generates tests for you. In other words, you provide rules, and an engine verifies that the code under test matches those rules. QuickCheck would be one example of such tooling.
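In this specific case you don't even need an external generator: the input space is only 2^4 = 16 combinations, so one loop over a bit mask covers them all. A minimal JUnit 4 sketch (MyService stands in for the real class under test):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class IsValidRequestTest {
    private final MyService service = new MyService();  // hypothetical class under test

    @Test
    public void onlyAllTrueCombinationIsValid() {
        // Each bit of the mask drives one constraint flag.
        for (int mask = 0; mask < 16; mask++) {
            boolean c0 = (mask & 1) != 0;
            boolean c1 = (mask & 2) != 0;
            boolean c2 = (mask & 4) != 0;
            boolean c3 = (mask & 8) != 0;
            boolean expected = (mask == 0b1111);  // only all-true is valid
            assertEquals("mask=" + mask, expected,
                    service.isValidRequest(c0, c1, c2, c3));
        }
    }
}

The expected value is derived from the mask rather than by re-chaining the flags with &&, so the test does not just duplicate the implementation; a typo like a stray || in the production code would make one of the 16 cases fail.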

Testing for optional exception in parameterized JUnit 4+ test

I am trying to write a unit test for a method which takes a string as a parameter and throws an exception if it is malformed (AND NONE if it is okay).
I want to write a parameterized test which feeds in several strings and the expected exception (INCLUDING the case that none is thrown if the input string is well-formed!). When trying to use the @Test(expected=SomeException.class) annotation, I encountered two problems:
expected=null is not allowed.
So how could I test for the expected outcome of NO exception being thrown (for well-formed input strings)?
Parameterizing expected= is not possible?
I have not yet tried it, but I strongly suspect that this is the case after reading this (could you please state whether this is true?):
http://tech.groups.yahoo.com/group/junit/message/19383
This then seems to be the best solution I have found yet. What do you think about it, especially compared to this:
How do I test exceptions in a parameterized test?
Thank you in advance for any help; I look forward to the discussion :)
Create two test case classes:
ValidStringsTest
InvalidStringsTest
Obviously the first one tests all sorts of valid inputs (not throwing an exception), while the second one always expects the exception.
Remember: readability of your tests is even more important than readability of production code. Don't use wacky flags, conditions and logic inside JUnit test cases. Simplicity is king.
Also see my answer here for a hint on how to test for exceptions cleanly.
Have two different tests - one for valid inputs and one for invalid ones. I haven't used JUnit 4, so I can't comment on the exact annotation format - but basically you'd have one parameterized test with various invalid inputs, which says that it expects an exception, and a separate test with various valid inputs which doesn't say anything about exceptions. If an exception is thrown when your test doesn't say it should be, the test will fail.
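A sketch of the invalid-inputs half using JUnit 4's Parameterized runner (the exception type, the Parser class and the sample inputs are all assumptions); the valid-inputs class would mirror it without the expected attribute:

import java.util.Arrays;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class InvalidStringsTest {

    @Parameters
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "" }, { "not-a-date" }, { "12-34-5678" }  // sample malformed inputs
        });
    }

    private final String input;

    public InvalidStringsTest(String input) {
        this.input = input;  // the runner constructs one instance per data row
    }

    @Test(expected = IllegalArgumentException.class)  // assumed exception type
    public void rejectsMalformedInput() {
        Parser.parse(input);  // hypothetical method under test
    }
}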
Splitting the test cases into two test classes is the appropriate approach in many cases, as both Tomasz and Jon have already outlined.
But there are other cases where this split is not a good choice in terms of readability. Let's assume the rows in the tested data set have a natural order; if the rows are sorted by this natural order, it may be easy to see whether the test data covers all relevant use cases. If one splits the test cases into two test classes, there is no longer an easy way to see whether all relevant cases are covered. For these cases,
How do I test exceptions in a parameterized test?
seems to provide the best solution indeed.
