How to unit test a method in an enum that might change - java

So I have an autogenerated enum where each enum constant contains several fields, and I wish to test some of the logic of the methods contained in the enum. An example would be "find all enum constants with this value in this field". However, the enum can change: specifically, the values and the number of enum constants can change, but not the number of fields in each constant. This also includes the possibility of mocking the values() method.
Now I'm afraid that if I make tests using specific values, those tests might fail if those values are no longer present in the enum.
So my options are either to add elements to the existing enum that I can then use in the test, or to mock the entire enum with new values I can use in the test.
Now my question: what is good practice here? I've read about PowerMock, but there seem to be differing opinions on it. Are there better solutions? Am I looking at this wrong?

The part that can be easily answered: you don't need a mocking framework here.
You have enums with some content - and when you want to test their internals, a mocking framework is of no use. There is no point in mocking values() when your goal is to test certain properties of these generated enums.
In other words: your test cases should boil down to code that fetches values and then somehow asserts something about them. Worst case, you might have to use reflection, as in:
somehow collect the names of all enum classes to test (this could be achieved by scanning the class path, for example)
for each such enum, use reflection to acquire certain fields, and then assert against the expected results, as in the sketch below
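A minimal sketch of that second step, assuming a generated enum named GeneratedStatus with a private field code (both names are hypothetical stand-ins for whatever the generator produces):

import java.lang.reflect.Field;
import org.junit.Assert;
import org.junit.Test;

public class GeneratedEnumTest {
    @Test
    public void everyConstantHasACode() throws Exception {
        // GeneratedStatus and its "code" field are hypothetical names.
        Field codeField = GeneratedStatus.class.getDeclaredField("code");
        codeField.setAccessible(true);
        for (GeneratedStatus constant : GeneratedStatus.class.getEnumConstants()) {
            // Assert a structural property that must hold for any constant,
            // rather than pinning the test to specific values.
            Assert.assertNotNull(codeField.get(constant));
        }
    }
}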
But most likely, the real answer is completely different: it is wrong to unit test generated code in the first place. Rather, have unit tests that verify the code generator instead.
Think about it: when your unit tests find a problem in the generated enum, what will you do? Most probably, change your generator.

Related

Is there a way to assert that an enum has not changed since the last revision?

There exists an enum class that will cause problems downstream if it is changed without also making changes to the downstream project. Unfortunately, this is not easily identified by just searching for usages of the enum. We have had big warning comments in the code saying "Don't change this without also changing (downstream project)", but apparently that wasn't enough: Murphy's Law held firm. I need some other way of preventing other developers (or future me) from breaking things.
My current approach is to create a Unit test that will throw an error whenever the enum is changed. Changing the enum will therefore cause the build to fail which should get the attention of the developer. Included in the failure message will be instructions on how to safely update the enum. Unfortunately I can't see any way of writing this unit test short of copying the entire enum into the test class and then comparing every value from the test enum to the actual enum.
Is there a way that I can avoid duplicating the enum in the test class here? Is there a best practice that you recognize I should be following based on my description?
If all you want to verify is the enum member names and perhaps their order, then you can create a static method on the enum type that computes a digested form of that information. Your unit test can then invoke a single method to check whether the enum matches the expected form.
For example,
enum Test {
    T1;

    // Digest of all constant names, in declaration order.
    static int computeSignature() {
        StringBuilder sb = new StringBuilder();
        for (Test t : values()) {
            sb.append(t.name()).append(';');
        }
        return sb.toString().hashCode();
    }
}
// ...
private static final int EXPECTED_ENUM_SIG = /* some number */;

@Test
public void testEnumSignature() {
    assertEquals("enum Test has been unexpectedly changed",
            EXPECTED_ENUM_SIG, Test.computeSignature());
}
If you decide you don't care about the enum order, then sort the names as part of the signature computation.
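A sketch of one possible order-insensitive variant, replacing the body of computeSignature() above:

static int computeSignature() {
    String[] names = new String[values().length];
    for (int i = 0; i < names.length; i++) {
        names[i] = values()[i].name();
    }
    java.util.Arrays.sort(names);   // declaration order no longer affects the digest
    return String.join(";", names).hashCode();
}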
Once an enum class is published, it becomes part of the public API and, for the most part, should not be changed. Enum constant names should definitely not be changed, and you probably shouldn't add any new constants either.
This really isn't that much different from changing method signatures on interfaces and should be treated the same way.
Any change is an API-breaking change and will affect any downstream programs that link to your enum.
If you really must change your enum, then your library should bump the major version number and include detailed release notes explaining the change and exactly what must be done to make old code compatible.
Unfortunately, the workarounds for enums are not pretty.
Since enums can't be extended, if you want to add new constants you can't just extend the old enum class and add a few more.
I would recommend extracting an interface for the methods in the enum, having your enum class implement the new interface, and trying to get your downstream software to use only the interface. Future versions of your code can then add new constants with fewer problems. The initial transition to an interface will be painful, but hopefully it only happens once.
This is covered in some detail in Effective Java 2nd Edition Item 34: "Emulate extensible enums with interfaces"
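A minimal sketch of that pattern, loosely following the book's Operation example (all names here are illustrative):

// The interface becomes the published API; downstream code should
// depend on it rather than on any particular enum.
interface Operation {
    int apply(int x, int y);
}

enum BasicOperation implements Operation {
    PLUS  { public int apply(int x, int y) { return x + y; } },
    MINUS { public int apply(int x, int y) { return x - y; } }
}

// A later release can add constants without touching BasicOperation.
enum ExtendedOperation implements Operation {
    TIMES { public int apply(int x, int y) { return x * y; } }
}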

Using mockito.when without knowing the parameters of the method call

I'm JUnit testing a class and had to create some Mockito mock objects. The line of code I'm interested in is this:
Mockito.when(emailer.sendEmail(INPUT GOES HERE)).thenReturn(true);
The sendEmail() method of emailer takes two parameters, and I'm not sure what they will be. Is there some sort of wildcard that can be used there to stand in for the parameters without knowing what they will be?
As mentioned in the question comments.
Matchers.any(ClassName.class), which is usually what you want. In Mockito 1.x, this stands in for any object, regardless of its type, but by receiving a class it will typically avoid the need for a cast. (According to Mockito contributor Brice in an SO comment, this behavior will change in Mockito 2 and beyond, presumably to behave more like isA as any(MyClass.class) would suggest in English.)
Matchers.any(), which will usually require a cast, and isn't a good idea for primitives.
Matchers.anyInt() or Matchers.anyShort() (etc), which are good for primitives.
Matchers.anyString(), because Strings are a common use-case.
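Applied to the question's example - a sketch assuming sendEmail(String, String); the actual parameter types aren't given, so adjust the matchers accordingly:

import static org.mockito.Matchers.anyString;

// Matches the call regardless of which two Strings are passed.
Mockito.when(emailer.sendEmail(anyString(), anyString())).thenReturn(true);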
Because Mockito extends Matchers, most of these methods will be available on Mockito, but some IDEs have trouble finding static methods across subclasses. You can find all of them by using import static org.mockito.Matchers.*;.
Read more about all of the matchers available to you at the org.mockito.Matchers documentation.
If you run into trouble, or want to learn more about how these wildcards work under the surface, read more here.

JUnit multiple inputs/multiple methods testing via @Parameters

I'm relatively new to JUnit 4; I have figured out that I can repeat the same test on a method with different inputs, using the @Parameters annotation to tag a method returning an Iterable of arrays, say a List<Integer[]>.
I found out that JUnit requires the provider method of the Iterable of arrays to be static and named data, which means that you can test different methods, but always with the same data.
As a matter of fact you can tag any method (with any return type, BTW) with @Parameters, but with no effect; only the data() method is taken into account.
What I was hoping JUnit would allow was having several different data sets annotated with @Parameters, and some mechanism (say, an argument to @Test) to specify that data set X should be used while performing testFoo() and data set Y for testBar().
In other words, I would like to set up a local (as opposed to class/instance) data set in each of my test methods.
As I understand the whole thing, you are obliged to build a separate class for each of the methods you want to test with multiple inputs, which IMHO makes the thing pretty useless; as a matter of fact, I built myself a modest framework (actually based on JUnit) which indeed allows me to test multiple methods with multiple inputs (with a tracing feature), all contained in a single class, avoiding code proliferation.
Am I missing something?!?
It shouldn't matter what the method is called, as long as it has the @Parameters annotation.
The easiest way to have multiple data sets is to put the tests in an abstract class and have each concrete subclass provide its own method annotated with @Parameters, as sketched below.
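A minimal sketch of that approach (the class and method names are illustrative; @Parameter field injection requires JUnit 4.11 or later):

import java.util.Arrays;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
abstract class AbstractSquareTest {
    @Parameter
    public int input;   // injected by the runner for each data row

    @Test
    public void squareIsNonNegative() {
        Assert.assertTrue(input * input >= 0);
    }
}

// Each concrete subclass supplies its own data set.
class SmallInputsTest extends AbstractSquareTest {
    @Parameters
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1 }, { 2 }, { -3 } });
    }
}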
However, I would question why one would want to have multiple datasets per test in the same test class. This probably violates the single responsibility principle and you should probably break the data sets into 1 test class each.

Checking for deep equality in JUnit tests

I am writing unit tests for objects that are cloned, serialized, and/or written to an XML file. In all three cases I would like to verify that the resulting object is the "same" as the original one. I have gone through several iterations in my approach and having found fault with all of them, was wondering what other people did.
My first idea was to manually implement the equals method in all the classes and use assertEquals. I abandoned this approach after deciding that overriding equals to perform a deep compare on mutable objects is a bad thing, as you almost always want collections to use reference equality for the mutable objects they contain[1].
Then I figured I could just rename the method to contentEquals or something. However, after thinking more, I realized this wouldn't help me find the sort of regressions I was looking for. If a programmer adds a new (mutable) field, and forgets to add it to the clone method, then he will probably forget to add it to the contentEquals method too, and all these regression tests I'm writing will be worthless.
I then wrote a nifty assertContentEquals function that uses reflection to check the values of all the (non-transient) members of an object, recursively if necessary. This avoids the problems with the manual compare method above, since it assumes by default that all fields must be preserved and the programmer must explicitly declare fields to skip. However, there are legitimate cases when a field really shouldn't be the same after cloning[2]. I put an extra parameter into assertContentEquals that lists which fields to ignore, but since this list is declared in the unit test, it gets real ugly real fast in the case of recursive checking.
So I am now thinking of moving back to including a contentEquals method in each class being tested, but this time implemented using a helper function similar to the assertContentEquals described above. This way, when operating recursively, the exemptions will be defined in each individual class.
Any comments? How have you approached this issue in the past?
Edited to expound on my thoughts:
[1] I got the rationale for not overriding equals on mutable classes from this article. Once you put a mutable object in a Set/Map, if a field changes then its hash will change but its bucket will not, breaking things. So the options are to not override equals/hashCode on mutable objects, or to have a policy of never changing a mutable object once it has been put into a collection.
I didn't mention that I am implementing these regression tests on an existing codebase. In this context, the idea of changing the definition of equals, and then having to find all the instances where it could change the behavior of the software, frightens me. I feel like I could easily break more than I fix.
[2] One example in our code base is a graph structure, where each node needs a unique identifier used to link the nodes when the graph is eventually written to XML. When we clone these objects we want the identifier to be different, but everything else to remain the same. After ruminating on it more, it seems like the questions "is this object already in this collection" and "are these objects defined the same" use fundamentally different concepts of equality in this context. The first asks about identity, and I would want the ID included when doing a deep compare; the second asks about similarity, and I don't want the ID included. This makes me lean further against implementing the equals method.
Do you guys agree with this decision, or do you think that implementing equals is the better way to go?
I would go with the reflection approach and define a custom Annotation with RetentionPolicy.RUNTIME to allow the implementers of the tested classes to mark the fields that are expected to change after cloning. You can then check the annotation with reflection and skip the marked fields.
This way you can keep your test code generic and simple and have a convenient means to mark exceptions directly in the code without affecting the design or runtime behavior of the code that needs to be tested.
The annotation could look like this:
import java.lang.annotation.*;
#Retention(RetentionPolicy.RUNTIME)
#Target({ElementType.FIELD})
public #interface ChangesOnClone
{
}
This is how it can be used in the code that is to be tested:
class ABC
{
    private String name;

    @ChangesOnClone
    private Cache cache;
}
And finally the relevant part of the test code:
for (Field field : fields)
{
    // getAnnotation returns null when the annotation is absent
    if (field.getAnnotation(ChangesOnClone.class) != null)
        continue;
    // else test it
}
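Putting it together, a minimal sketch of such a checker - shallow (one level deep) for brevity, where a real version would recurse into non-primitive fields; the name assertContentEquals follows the helper described in the question:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import org.junit.Assert;

static void assertContentEquals(Object expected, Object actual) throws IllegalAccessException {
    Assert.assertEquals(expected.getClass(), actual.getClass());
    for (Field field : expected.getClass().getDeclaredFields()) {
        // Skip static/transient fields and fields marked @ChangesOnClone.
        if (Modifier.isStatic(field.getModifiers())
                || Modifier.isTransient(field.getModifiers())
                || field.getAnnotation(ChangesOnClone.class) != null) {
            continue;
        }
        field.setAccessible(true);
        Assert.assertEquals("field " + field.getName(),
                field.get(expected), field.get(actual));
    }
}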
AssertJ offers a recursive comparison function:
assertThat(cloned).usingRecursiveComparison().isEqualTo(original);
See the AssertJ documentation for details: https://assertj.github.io/doc/#basic-usage
Prerequisites for using AssertJ:
import:
import static org.assertj.core.api.Assertions.*;
maven dependency:
<!-- test -->
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.19.0</version>
    <scope>test</scope>
</dependency>
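The recursive comparison can also exclude fields that are legitimately different after cloning, such as the unique identifier mentioned in the question - a sketch assuming the field is named id:

assertThat(cloned)
        .usingRecursiveComparison()
        .ignoringFields("id")   // "id" is a hypothetical field name
        .isEqualTo(original);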

Difference between JUnit Theories and Parameterized Tests

What is the difference between a Theory and a Parameterized test?
I'm not interested in implementation differences when creating the test classes, just when you would choose one over the other.
From what I understand:
With Parameterized tests you can supply a series of static inputs to a test case.
Theories are similar but different in concept. The idea behind them is to create test cases that test on assumptions rather than static values.
So if the supplied test data is true according to some assumptions, the resulting assertion should always hold.
One of the driving ideas behind this is that you could supply any number of test inputs and your test case would still hold; also, you often need to test a whole universe of possibilities within the input data, like negative numbers. If you test that statically, that is, by supplying a few negative numbers, it is not guaranteed that your component will work against all negative numbers, even if it is highly probable to do so.
From what I can tell, xUnit frameworks try to apply theories' concepts by creating all possible combinations of your supplied test data.
Both should be used in data-driven scenarios (i.e. only the inputs change, while the test performs the same assertions over and over).
But since Theories seem experimental, I would use them only if I needed to test a series of combinations in my input data. For all other cases I'd use Parameterized tests.
Parameterized.class tests "parametrize" tests with a single variable, while Theories.class "parametrize" with all combinations of several variables.
For examples please read:
http://blogs.oracle.com/jacobc/entry/parameterized_unit_tests_with_junit
http://blog.schauderhaft.de/2010/02/07/junit-theories/
http://blogs.oracle.com/jacobc/entry/junit_theories
Theories.class is similar to Haskell QuickCheck:
http://en.wikibooks.org/wiki/Haskell/Testing
but QuickCheck autogenerates parameter combinations
In addition to the above responses:
For an input with 4 values and 2 test methods:
@RunWith(Theories.class) - will generate 2 JUnit tests
@RunWith(Parameterized.class) - will generate 8 (4 inputs x 2 methods) JUnit tests
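For illustration, a minimal Theories sketch (the names are illustrative) - the runner invokes the method for every combination of data points, yet reports it as a single test:

import static org.junit.Assert.assertTrue;
import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class AdditionTheoryTest {
    @DataPoints
    public static int[] numbers = { 0, 1, 2, 3 };   // the "4 values" above

    // Runs against all 16 (x, y) combinations, reported as one JUnit test.
    @Theory
    public void additionIsCommutative(int x, int y) {
        assertTrue(x + y == y + x);
    }
}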
A little late in replying, but it may be helpful to future testers.
Parameterized Tests vs Theories:
Class annotated with @RunWith(Parameterized.class) vs @RunWith(Theories.class).
Test inputs are retrieved from a static method returning a Collection and annotated with @Parameters vs static fields annotated with @DataPoints or @DataPoint.
Inputs are passed to the constructor (mandatory) and used by the test method vs inputs are passed directly to the test method.
The test method is annotated with @Test and doesn't take arguments vs the test method is annotated with @Theory and may take arguments.
From my understanding, the difference is that a Parameterized test is used when all you want to do is test a different set of inputs (each one individually), while a Theory is a special case of a parameterized test in which you are testing the inputs as a whole (the assertion must hold for every combination of data points).
