Dealing with private methods (white-box testing) and methods with many conditions - Java

I have 2 questions:
1) In JUnit you shouldn't test or mock private methods. But how do I deal with them when they are called inside a public method? Let's assume I have the following setup:
public void method(String value){
    if(value.contains("something")){
        doSomethingToString(value);
    } else {
        //do something else
    }
}

private void doSomethingToString(String value){
    Object obj = service.getObject(); //service is mocked in my test-class
    //do something with the obj and value
}
I am doing a white-box test, so I know the methods and what's going on. Now I want to test the public method method(String value). When I only consider what happens in there, I get into trouble, since I need to influence what service.getObject() in my private method returns. Is it OK if I just go on like I normally would, meaning using doReturn(objectICreatedInMyTestClass).when(service).getObject(), or do I need to find another way?
2) Methods which have more than one condition. For example:
public void method(String value){
    Object obj = null;
    if(value.contains("something")){
        obj = service.getObj(value);
    } else {
        //do something else
    }

    if(obj.getAddress() == null){
        //do something
    } else {
        //do something else
    }

    if("Name".equals(obj.getName())){
        // do something
    } else {
        // do something else
    }
}
How many times do I need to test this method? Only twice: once where all conditions are true, and once where they are all false? Or is it advisable to test every possible scenario? That would mean testing with condition 1 = true, condition 2 = false, condition 3 = false, then condition 1 = true, condition 2 = true, condition 3 = false, and so on (= 8 possibilities).

1) Should I test private methods on their own, outside of the public methods that call them?
Typically I've always followed the assumption that if the only way your code will access that method is by going through another, then that's how you should test it. My experience with testing systems has led me to believe that this is generally the way it's done in 'the real world' as well.
If we take your example, we can assume that our first step is to write tests to thoroughly test our main method. As part of this we should be testing scenarios that include properly exercising all the conditions we could expect it to face. This will include at least a subset of the scenarios that your private method will face.
Your private method may be used by multiple (possibly wildly different) methods, so its space of possible inputs and outputs may be greater than that of any single public method that uses it. If you thoroughly test the public methods that use it, however, you end up covering all of the scenarios that the private method will actually encounter.
Because of this you shouldn't need to write tests specifically for the private method. There may be other scenarios where it is unavoidable to try to test private methods or private classes. Usually I would argue this is because the code is simply written in a way that makes testing hard or impossible, and could be rewritten to make it friendlier to tests (and therefore friendlier to being updated/refactored later).
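To make that concrete, here is a minimal sketch of how the public method could be tested while stubbing the service used inside the private method. The names MyClass and MyService, and the final interaction check, are assumptions for illustration only, not the asker's actual code:

import static org.mockito.Mockito.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class MyClassTest {

    @Mock
    private MyService service;      // hypothetical dependency type

    @InjectMocks
    private MyClass underTest;      // hypothetical class under test

    @Test
    public void method_usesServiceObject_whenValueContainsSomething() {
        // Arrange: stub the call that happens inside the private method
        Object objectICreatedInMyTestClass = new Object();
        doReturn(objectICreatedInMyTestClass).when(service).getObject();

        // Act: exercise only the public entry point
        underTest.method("this contains something");

        // Assert: check the interaction and whatever observable state method(...) changes
        verify(service).getObject();
    }
}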
2) Should all of those combinations be tested?
This depends on what is happening in the example. There are two different scenarios to consider:
a) Neither set of branches has anything to do with the other. That is, whatever happens in the first set of branches has no way of impacting the logic of what happens in the later branches. In that case, covering each condition's true and false paths is enough, rather than every combination.
b) Running the logic in one of the earlier branches could change the result of the logic in one or more of the later branches. In that case, the interacting combinations deserve their own tests.
This comes down to your reading and understanding of what is happening in the code, so your example isn't enough to point to one way or the other.

Related

How can I test if a class method is calling another method on one of the class's private objects

I'm trying to create Unit tests for a project. In my project I have a Menu Class, and a VerticalOptions Class.
My menu class has a private VerticalOptions object and a public handleInput method.
When I call my menu's handleInput(key) method, depending on the key I give it, it will do different things, namely call different methods of my VerticalOptions object.
I want to make a unit test to check that the methods being called are the correct ones. How can I do that?
I've tried adding a Mockito spy to my menu; however, since I want to test whether the method being called was the method on the private VerticalOptions object, it doesn't really work.
I've also tried putting the spy on the VerticalOptions object, after getting it with a getVerticalOptions method, but it also doesn't work.
public void handleInput(InputKey key)
{
    switch (key) {
        case S:
        case DOWN:
            optionsInterface.cycleDown();
            break;
        case W:
        case UP:
            optionsInterface.cycleUp();
            break;
        case SPACE:
        case ENTER:
            optionsInterface.select();
            break;
        default:
            break;
    }
}
@Test
public void testInput() {
    MainMenu menu = new MainMenu(game);
    VerticalButtonInterface buttonInterface = menu.getOptionsInterface();
    VerticalButtonInterface spy = spy(buttonInterface);
    menu.handleInput(InputKey.DOWN);
    verify(spy, times(1)).cycleDown();
}
This is the test failure I got:
Wanted but not invoked:
verticalButtonInterface.cycleDown();
-> at MenuTest.testInput(MenuTest.java:60)
Actually, there were zero interactions with this mock.
I will give you an alternative view on this. I have seen a lot of people go down the wrong path, and when you do that, everything else becomes hard to write and test, which is exactly what is happening to you now.
Start here: what are you trying to achieve?
I want to test and make sure that a certain method is called ...
Is this a good thing? One thing a unit test is not meant to have is deep knowledge of the code under test.
Why? Because every time you make slight changes to the code, you'll have to change the test because of this deep knowledge. If you have 1000 tests, you're in for a hard road.
OK, so we now know what the problem is; how do we solve it? Well, first let's make sure we can have a test without deep knowledge of the code.
How do we do that? Imagine that your code adds an extra step: a variable which stores a resulting state.
You have 3 methods you want to call, so you will need 3 different states; create a variable which reflects that, be it a string, an enum or whatever else makes you happy.
For example's sake, let's say we create a string with 3 possible values: cycleDown, cycleUp and select.
Your code starts to look something like:
public void handleInput(InputKey key)
{
    String state = determineState(key);
    executeActionForState(state);
}

public String determineState(InputKey key)
{
    String state = "";
    switch (key) {
        case S:
        case DOWN:
            state = "cycleDown";
            break;
        case W:
        case UP:
            state = "cycleUp";
            break;
        case SPACE:
        case ENTER:
            state = "select";
            break;
        default:
            break;
    }
    return state;
}

public void executeActionForState(String state)
{
    if ("cycleUp".equals(state)) {
        // ...
    }
    // etc etc
}
Now, I would not necessarily code your example exactly like this; it is a bit forced and depends on what else you're doing with the code, but it is meant to show how you separate functionality from UI aspects.
I can easily test the state method, and I can change its code without having to change the test, because the test looks at inputs and outputs and not at how things are achieved.
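For example, a minimal sketch of such an input/output test, assuming determineState ends up on the menu class and is reachable from the test (that placement, and the test class name, are assumptions):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MainMenuStateTest {

    @Test
    public void determineState_mapsKeysToStates() {
        MainMenu menu = new MainMenu(game);   // 'game' created in test setup, as in the original test

        // No mocks or spies needed: only inputs and outputs are checked
        assertEquals("cycleDown", menu.determineState(InputKey.DOWN));
        assertEquals("cycleUp", menu.determineState(InputKey.UP));
        assertEquals("select", menu.determineState(InputKey.ENTER));
    }
}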
Unit testing is about functionality; it's about having simple tests that don't need to change once created. Verifying that a method has been called doesn't give you anything worthwhile on its own, because you don't know what that method does afterwards.
UI behaviour you can test in other ways; unit testing is only about correct functionality. If you do not make this separation clear, you will have trouble maintaining your tests, and it will become harder and harder until you give up.
You would test that you get the correct state, then you would test that the cycleUp method does something correct based on your requirements, and that's how you know each part works in isolation. Later on you can start looking at integration tests and automated UI tests, but those are different things. Keep unit testing for what it's meant to do, keep it simple, and keep it not tied to other code; then everything becomes simple. You won't need to mock much, you won't need to worry about complex setups, and you won't need to change your tests every time something in the code changes.
Now, to address the final part of the question, private methods: you test them by observing their outputs. You must have something public in your class that changes when a private method is called. So test that.

What's the point of verifying the number of times a function is called with Mockito?

In my understanding, testing code means testing whether results are right; as with a calculator, I need to write a test case to verify that the result of 1+1 is 2.
But I have read many test cases that verify the number of times a method is called, and I'm very confused about that. The best example is what I just saw in Spring in Action:
public class BraveKnight implements Knight {
    private Quest quest;

    public BraveKnight(Quest quest) {
        this.quest = quest;
    }

    public void embarkOnQuest() {
        quest.embark();
    }
}

public class BraveKnightTest {
    @Test
    public void knightShouldEmbarkOnQuest() {
        Quest mockQuest = mock(Quest.class);
        BraveKnight knight = new BraveKnight(mockQuest);
        knight.embarkOnQuest();
        verify(mockQuest, times(1)).embark();
    }
}
I really have no idea why they need to verify that the embark() function is called one time. Don't you think that embark() will certainly be invoked once embarkOnQuest() is called? Or, if some error occurs, I will notice error messages in the logs, which show the error line number and help me quickly locate the wrong code.
So what's the point of verifying like above?
The need is simple: to verify that the correct number of invocations was made. There are scenarios in which method calls should not happen, and others in which they should happen more or fewer times than the default (once).
Consider the following modified version of embarkOnQuest:
public void embarkOnQuest() {
    quest.embark();
    quest.embarkAgain();
}
And suppose you are testing error cases for quest.embark():
@Test
public void knightShouldEmbarkOnQuest() {
    Quest mockQuest = mock(Quest.class);
    Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
    ...
}
In this case you want to make sure that quest.embarkAgain is NOT invoked (or is invoked 0 times):
verify(mockQuest, times(0)).embarkAgain(); // or verify(mockQuest, never()).embarkAgain()
Of course this is just one simple example. There are many others that could be added:
A database connector that should cache entries on first fetch: one can make multiple calls and verify that the connection to the database was used just once (per test query).
A singleton object that does initialization on load (or lazily): one can test that initialization-related calls are made just once.
Consider the following code:
public void saveFooIfFlagTrue(Foo foo, boolean flag) {
    if (flag) {
        fooRepository.save(foo);
    }
}
If you don't check the number of times that fooRepository.save() is invoked, then how can you know whether this method is doing what you want it to?
This applies to other void methods. If a method has no return value, and therefore no response to validate, checking which other methods it calls is a good way of validating that it is behaving correctly.
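A minimal sketch of what those checks could look like, assuming fooRepository is a Mockito mock passed into a hypothetical class under test called FooService (both type names are illustrative, not from the book):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class FooServiceTest {

    @Test
    public void savesFoo_whenFlagIsTrue() {
        FooRepository fooRepository = mock(FooRepository.class); // hypothetical dependency
        FooService service = new FooService(fooRepository);      // hypothetical class under test
        Foo foo = new Foo();

        service.saveFooIfFlagTrue(foo, true);

        verify(fooRepository).save(foo);                          // exactly one call expected
    }

    @Test
    public void doesNotSaveFoo_whenFlagIsFalse() {
        FooRepository fooRepository = mock(FooRepository.class);
        FooService service = new FooService(fooRepository);

        service.saveFooIfFlagTrue(new Foo(), false);

        verify(fooRepository, never()).save(any(Foo.class));      // no call expected at all
    }
}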
Good question. You raise a good point that mocking can be overly circuitous when you can just check the results. However, there are contexts where this does lead to more robust tests.
For example, if a method needs to make a call to an external API, there are several problems with simply testing the result:
Network I/O is slow. If you have many checks like this, they will slow down your tests.
Any round-trip like this would have to rely on the code making the request, the API, and the code interpreting the API's response all to work correctly. This is a lot of failure points for a single test.
If something stupid happens and you accidentally make multiple requests, this could cause performance issues with your program.
To address your sub-questions:
Don't you think that embark() will certainly be invoked after embarkOnQuest() called?
Tests also have value in letting you refactor without worry about breaking things. This is obvious now, yes. Will it be obvious in 6 months?
I really have no idea about why they need to verify the embark() function is called one time
Verifying that an invocation happened on a mock a specific number of times is simply the standard way Mockito.verify() works.
In fact this:
verify(mockQuest, times(1)).embark();
is just a more verbose way to write:
verify(mockQuest).embark();
In the general case, verifying a single call on the mock is what you need.
In some uncommon scenarios you may want to verify that a method was invoked a specific number of times (more than one).
But you should avoid such specific verifications.
In fact, you should verify as little as possible.
If you find yourself needing to verify the exact number of invocations on the mock, it generally means one of two things: the mocked dependency is too tightly coupled to the class under test, and/or the method under test performs too many small tasks that produce only side effects.
Such a test is not straightforwardly readable or maintainable. It is as if you had re-coded the production flow inside the verification calls.
As a consequence it also makes the tests more brittle, because it checks invocation details rather than the overall logic and state.
In most cases a refactoring is the remedy and removes the need to specify a number of invocations.
I'm not saying it is never required, but use it only when it happens to be the only decent choice for the class under test.

How should I use JUnit with Mockito to test to see if a method makes a method call to another Class?

I ran into a problem when I had to test an if-else clause that only has method calls inside of it.
public CLI(String[] input){
    cliCheck(input);
}

public static void cliCheck(String[] input){
    if (input.length == 0) {
        System.out.println("No input");
        Help.help();
        System.exit(0);
    }
    if (input.length == 1) {
        if (input[0].equals("help") || input[0].equals("-h")) {
            Help.help();
            System.exit(0);
        }
    }
    inputParser(input);
}
This code is from the beginning portion of a command-line interface program.
The first if is true when there is no input.
The second if is true when the user types in "help" or "-h".
If not, then the input String array is sent as the parameter of the inputParser method.
This is what I have so far...
@Test
public void cliCheckTest_Help(){
    String[] input = {"help"};
    CLI cli = new CLI(input);
    Help help = mock(Help.class);
    cli.cliCheck(input);
    verify(help, times(1)).help();
}
(please tell me there is a better way to test for 100% branch coverage)
The problem is you created hard-to-test code here.
You see, that call to System.exit() will tear down your unit test in a very unpleasant manner.
You could do something like this instead:
public interface ShutdownService {
    public void systemExit();
}
and then create a "default" implementation that simply calls System.exit().
But for your unit tests, you could instead "insert" a mocked version of that interface; and use that to verify that the expected call took place.
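A rough sketch of how that could look. The default implementation, the non-static redesign of the CLI class (here called Cli, taking the dependency through its constructor), and the static Mockito/JUnit imports are assumptions for illustration:

// Production implementation: simply delegates to System.exit()
public class RealShutdownService implements ShutdownService {
    @Override
    public void systemExit() {
        System.exit(0);
    }
}

// In the test, inject a mock instead and verify the expected call took place
@Test
public void printsHelpAndExits_whenInputIsHelp() {
    ShutdownService shutdown = mock(ShutdownService.class);
    Cli cli = new Cli(shutdown);                 // hypothetical non-static redesign

    cli.cliCheck(new String[] { "help" });

    verify(shutdown).systemExit();
}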
Beyond that, your code is also doing other things that make testing much harder than necessary - starting with the fact that you have static methods there, which in turn call other static code such as inputParser and Help.help().
Long story short: static might look convenient, but it very much kills your ability to write reasonable unit tests.
So, my advice: learn how to create testable code, for example by watching these videos, and then improve the design of your production code. You will find that writing tests becomes much easier!
And beyond that: reasonable handling of command line options is much more complicated than your naive implementation here. Unless this is for learning purposes: do not re-invent the wheel. There are libraries out there that do this kind of work for you. Use one of them!
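For illustration, a minimal sketch using Apache Commons CLI (the choice of library is an assumption; the answer does not name a specific one):

import org.apache.commons.cli.*;

public class Main {
    public static void main(String[] args) throws ParseException {
        Options options = new Options();
        options.addOption("h", "help", false, "print this help message");

        CommandLine cmd = new DefaultParser().parse(options, args);
        if (args.length == 0 || cmd.hasOption("h")) {
            new HelpFormatter().printHelp("myprogram", options);
            return;
        }
        // hand the remaining arguments to the rest of the program
    }
}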

Mockito style anyXXX methods for unit testing

While unit testing some methods, there can be scenarios where the value of some parameters does not matter and can be anything.
For example in this piece of code:
public void method(String arg1, String arg2, int arg3){
    if(arg1 == null) throw new NullPointerException("arg1 is null");
    //some other code
}
When unit testing the behavior that an NPE must be thrown when arg1 is null, the values of the other arguments do not matter; they can be any value, or null.
So I wanted to document the fact that the values do not matter for the method under test.
I thought of the following options:
Option 1: Define constants of ANY_XXX
I thought of explicitly creating constants ANY_STRING and ANY_INT, which contain a fixed value which documents that it can be any value and the method under test does not care about the actual value.
I can put all these constants in a single class called Any and reuse them across all test classes.
Option 2: Random values for ANY_XXX
This option seems a bit hacky to me as I have read somewhere that randomness should not be brought into test cases. But in this scenario this randomness will not be visible as the parameters will not create any side effect.
Which approach would be more suitable for better, readable tests?
UPDATE:
While I can use the ANY_XXX approach by defining constants in an Any class, I am also thinking of generating ANY_XXX values with some constraints, such as:
Any.anyInteger().nonnegative();
Any.anyInteger().negative();
Any.anyString().thatStartsWith("ab");
I am thinking that maybe Hamcrest matchers could be used for creating this chaining, but I am not sure whether this approach is a good one. Mockito already provides similar methods such as anyObject(), but those only work on mocks and spies, not on normal objects. I want to achieve the same for normal objects for more readable tests.
Why do I want to do this?
Suppose I have a class
class MyObject{
    public MyObject(int param1, Object param2){
        if(param1 < 0) throw new IllegalArgumentException();
        if(param2 == null) throw new NullPointerException();
    }
}
And now while writing tests for constructor
class MyObjectTest{
    @Test(expected=NullPointerException.class)
    public void testConstructor_ShouldThrowNullpointer_IfSecondParamIsNull(){
        //emphasizing the fact that value of first parameter has no relationship with result, for better test readability
        new MyObject(Any.anyInteger().nonnegative(), null);
    }
}
I see both of them quite a lot.
Personally, I disagree that randomness should not be brought into tests. Using randomness to some degree should make your tests more robust, though not necessarily easier to read.
If you go for the first approach, I would not create a constants class, but rather pass the values (or nulls) directly, since then you see what you pass in without needing to look in another class - which should make your tests more readable. You can also easily modify your tests later if you need the other parameters.
My preference is to build up a utility class of constants along with methods to help with the creation of the constant values for tests, e.g.:
public final class Values {
    public static final int ANY_INT = randomInt(Integer.MIN_VALUE, Integer.MAX_VALUE);
    public static final int ANY_POSITIVE_INT = randomInt(1, Integer.MAX_VALUE);
    public static final String ANY_ISBN = randomIsbn();
    // etc...

    public static int randomInt(int min, int max) { /* omitted */ }
    public static String randomIsbn() { /* omitted */ }
    // etc...
}
Then I would use static imports to pull the constants and methods I needed for a particular test class.
I use the ANY_ constants only in situations where I do not care about the value; I find that they can make the intent of the test clearer, for example:
// when
service.fooBar(ANY_INT, ANY_INT, ANY_INT, ANY_INT, 5);
It's clear that the value 5 is of some significance - although it would be better as a local variable.
The utility methods can be used for ad hoc generation of values when setting up tests, e.g.:
// given
final String isbn1 = randomIsbn();
final String isbn2 = randomIsbn();
final Book[] books = { new Book(isbn1), new Book(isbn2) };
// when
bookRepository.store(books);
Again, this can help to keep the test classes concerned about the tests themselves and less about data set up.
In addition to this I have also used a similar approach for domain objects. When you combine the two approaches it can be quite powerful, e.g.:
public final class Domain {
    public static Book book() {
        return new Book(randomIsbn());
    }
    // etc...
}
I faced the same problem when I started to write unit tests for my project and had to deal with numerous arrays, lists, integer inputs, strings, etc.
So I decided to use QuickCheck and create a generator utility class.
Using the generators in this library, you can generate primitive data types and Strings easily.
For example, when you want to generate an integer, simply use the IntegerGenerator class. You can define maximum and minimum values in the constructor of the generator. You can also use the CombinedGeneratorSamples class to generate data structures like lists, maps and arrays.
Another feature of this library is that you can implement the Generator interface to build generators for your own classes.
You're overthinking it and creating unnecessary barriers for your project:
If you want to document your method, do it with words! That's what Javadoc is for.
If you want to test your method with "any positive int", then just call it with a couple of different positive ints. In your case, ANY does not mean testing every possible integer value.
If you want to test your method with "a string that starts with ab", call it with "abcd", then "abefgh", and just add a comment on the test method!
Sometimes we are so caught up in frameworks and good practices that it takes common sense away.
In the end: most readable = simplest.
How about using a caller method for the actual method?
//This is the actual method that needs to be tested
public void theMethod(String arg1, String arg2, int arg3, float arg4){
}
Create a caller method that calls the method with the required parameters and default (or null) values for the rest of the params, and run your test case on this caller method:
//The caller method
@Test
public void invokeTheMethod(String param1){
    theMethod(param1, "", 0, 0.0F); //Pass in some default values or even null
}
Although you will have to be pretty sure that passing default values to theMethod(...) for the other parameters won't cause any NPE.
I see 3 options:
Never pass nulls; forbid your team from passing nulls. Nulls are evil. Passing null should be an exception, not the rule.
Simply use an annotation in production code: @NotNull or something like that. If you use Lombok, its @NonNull annotation will also generate the actual validation.
And if you really have to do it in tests, then simply create a test with a proper name:
static final String ANY_STRING = "whatever";
static final int ANY_INT = 42;

@Test
public void should_throw_NPE_when_first_parameter_is_null() {
    object.method(null, ANY_STRING, ANY_INT); //use catch-exception or JUnit's expected
}
If you're willing to give the JUnitParams framework a go, you could parameterize your tests, giving meaningful names to your parameters:
@Test
@Parameters({
    "17, M",
    "2212312, M" })
public void shouldCreateMalePerson(int ageIsNotRelevant, String sex) throws Exception {
    assertTrue(new Person(ageIsNotRelevant, sex).isMale());
}
I'm always in favor of the constants approach. The reason is that I believe it is more readable than chaining several matchers.
Instead of your example:
class MyObjectTest{
    @Test(expected=NullPointerException.class)
    public void testConstructor_ShouldThrowNullpointer_IfSecondParamIsNull(){
        new MyObject(Any.anyInteger().nonnegative(), null);
    }
}
I would do:
class MyObjectTest{
    private static final int SOME_NON_NEGATIVE_INTEGER = 5;

    @Test(expected=NullPointerException.class)
    public void testConstructor_ShouldThrowNullpointer_IfSecondParamIsNull(){
        new MyObject(SOME_NON_NEGATIVE_INTEGER, null);
    }
}
Also, I prefer the use of 'SOME' over 'ANY', but that's also a matter of personal taste.
If you're considering testing the constructor with a number of different variants as you mentioned (nonNegative(), negative(), thatStartsWith(), etc.), I would suggest that you instead write parameterized tests. I recommend JUnitParams for that; here's how I'd do it:
@RunWith(JUnitParamsRunner.class)
class MyObjectTest {
    @Test(expected = NullPointerException.class)
    @Parameters({"-4000", "-1", "0", "1", "5", "10000"})
    public void testConstructor_ShouldThrowNullpointer_IfSecondParamIsNull(int i){
        new MyObject(i, null);
    }
    ...
}
I suggest you go with constant values for those parameters which may be arbitrary. Adding randomness makes your test runs not repeatable. Even if parameter values "don't matter" here, actually the only "interesting" case is when a test fails, and with random behavior added in, you might not be able to reproduce the error easily. Also, simpler solutions are often better, and easier to maintain: using a constant is certainly simpler than using random numbers.
Of course if you go with constant values, you could put these values in static final fields, but you could also put them in methods, with names such as arbitraryInt() (returning e.g. 0) and so on. I find the syntax with methods cleaner than with constants, as it resembles Mockito's any() matchers. It also allows you to replace the behavior more easily in case you need to add more complexity later on.
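A tiny sketch of that method-based style (the class and method names are illustrative only, not an existing API):

public final class Arbitrary {
    private Arbitrary() {}

    // A fixed value is enough; the method name documents that the caller does not care about it
    public static int arbitraryInt() {
        return 0;
    }

    public static String arbitraryString() {
        return "arbitrary";
    }
}

// Usage in a test, reading much like Mockito's any() matchers:
// new MyObject(Arbitrary.arbitraryInt(), null);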
In case you want to indicate that a parameter doesn't matter and the parameter is an object (not primitive type), you can also pass empty mocks, like so: someMethod(null, mock(MyClass.class)). This conveys to a person reading the code that the second parameter can be "anything", since a newly created mock has only very basic behavior. It also doesn't force you to create your own methods for returning "arbitrary" values. The downside is it doesn't work for primitive types or for classes which can't be mocked, e.g. final classes like String.
OK... I see a big problem with your approach!
The other value doesn't matter? Who guarantees this? The writer of the test, the writer of the code? What if you have a method which throws some unrelated exception if the first parameter is exactly 1000000, even when the second parameter is null?
You have to formulate your test cases: what is the test specification, what do you want to prove? Is it:
In some cases, if the first parameter is some arbitrary value and the second is null, this method should throw a NullPointerException.
For any possible first input value, if the second value is null the method should always throw a NullPointerException.
If you want to test the first case, your approach is OK. Use a constant, a random value, a builder... whatever you like.
But if your specification actually requires the second condition, none of the presented solutions are up to the task, since they only test some arbitrary value. A good test should still be valid if the programmer changes some code in the method. This means the right way to test this method would be a whole series of test cases, testing all corner cases as with all other methods. So each critical value which can lead to a different execution path should be checked - or you need a test suite which checks for code-path completeness...
Otherwise your test is just bogus and there to look pretty...

If statements in tests

I have a parameterized test class with a bunch of unit tests that generally check the creation of custom email messages. Right now the class has a lot of tests which depend on the factor(s) used in the parameterized class, and the flow of the tests is the same for every test. An example of a test:
@Test
public void testRecipientsCount() {
    assertEquals(3, recipientsCount);
}
I had to add extra functionality to my email class that adds some extra internal emails to the list of recipients; that only happens for some of the cases, and that leads to my problem.
Let's say I want to assert the number of messages created. For the old tests it was the same for each case, but now it's different depending on the case. The most intuitive way for me was to add if statements:
@Test
public void testRecipientsCount(){
    if (something) {
        assertEquals(3, recipientsCount);
    } else {
        assertEquals(4, recipientsCount);
    }
}
...
but my more experienced co-worker says we should avoid ifs in test classes (and I kinda agree on that).
I thought that splitting the test into two test classes might work, but that would lead to redundant code in both classes (I still have to check whether the non-internal messages were created, their size, content, etc.), plus a few lines added to one of them.
My question is: how do I do this so I don't use ifs or loads of redundant code (not using a parameterized class would produce even more redundant code)?
In my opinion a JUnit test should read like a protocol.
That means you can accept some redundant code to make the test case more readable.
Write a test case for each possible if statement in your business logic, as well as the negative cases. That's the only way to get 100% test coverage.
I use the structure:
- test data preparation
- executing logic
- check results
- clear data
Furthermore, you should put complex asserts on big objects into their own abstract classes:
abstract class YourBusinessObjectAssert {
    public static void assertYourBusinessObjectIsValid(YourBusinessObject pYourBusinessObject,
                                                       Collection<YourBusinessObject> pAllYourBusinessObject) {
        for (YourBusinessObject lYourBusinessObject : pAllYourBusinessObject) {
            if (lYourBusinessObject.isTechnicalEqual(pYourBusinessObject)) {
                return;
            }
        }
        fail("Could not find requested YourBusinessObject in List<YourBusinessObject>!");
    }
}
This will reduce the complexity of your code, and you make the assert available to other developers.
A unit test should, in my opinion, test only one thing if possible. As such I'd say that if you need an if statement then you probably need more than one unit test - one for each block in the if/else code.
If possible I'd say a test should read like a story - my preferred layout (and it's not my idea :-) - it's fairly widely used) is:
- given: do setup etc
- when: the place you actually execute/call the thing under test
- expect: verify the result
Another advantage of a unit test testing only one thing is that when a failure occurs it's unambiguous what the cause was - if you have a long test with many possible outcomes, it becomes much harder to reason about why a test has failed.
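Applied to the question, a sketch of that split; the expected counts come from the question, while how each case is set up (with or without the extra internal recipients) is left as an assumption:

@Test
public void recipientsCount_withoutInternalRecipients() {
    // given: a case that does not add the extra internal emails (setup omitted)
    assertEquals(3, recipientsCount);
}

@Test
public void recipientsCount_withInternalRecipients() {
    // given: a case that adds the extra internal emails (setup omitted)
    assertEquals(4, recipientsCount);
}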
I'm not sure it's possible to cleanly do what you're after in a parameterized test. If you need different test-case behavior depending on the parameter for some features, you might just be better off testing those features separately - in different test classes that are not parameterized.
If you really do want to keep everything in the parametrized test classes, I would be inclined to make a helper function so that your example test at least reads as a simple assertion:
@Test
public void testRecipientsCount(){
    assertEquals(expectedCount(something), recipientsCount);
}

private int expectedCount(boolean something) {
    if (something) {
        return 3;
    } else {
        return 4;
    }
}
Why not have a private method that tests the things that are common for each method? Something like (but probably with some input parameter for the testCommonStuff() method):
@Test
public void testRecipientsCountA(){
    testCommonStuff();
    // Assert stuff for test A
}

@Test
public void testRecipientsCountB(){
    testCommonStuff();
    // Assert stuff for test B
}

private void testCommonStuff() {
    // Assert common stuff
}
This way you don't get redundant code, and you can split your test into smaller tests. You also make your tests less error-prone, if they really should test the same things. You will still know which test failed, so traceability should be no problem.
