JUnit multiple inputs/multiple methods testing via @Parameters - java

I'm relatively new to JUnit 4; I have figured out that I can repeat the same test on a method with different inputs by using the @Parameters annotation to tag a method returning an Iterable of arrays, say a List<Integer[]>.
I found out that JUnit seems to require the provider of the Iterable of arrays to be static and named data(), which means that you can test different methods, but always with the same data.
As a matter of fact you can tag any method (with any return type, by the way) with @Parameters, but with no effect; only the data() method is taken into account.
What I was hoping JUnit would allow is having several data sets annotated with @Parameters and some mechanism (say an argument to @Test) to specify that data set X should be used while running testFoo() and data set Y for testBar().
In other words, I would like to set up a local (as opposed to class/instance) data set in each of my test methods.
As I understand the whole thing, you are obliged to build a separate class for each of the methods you want to test with multiple inputs, which IMHO makes the feature pretty useless. As a matter of fact, I built myself a modest framework (actually based on JUnit) which does allow me to test multiple methods with multiple inputs (with a tracing feature), all contained in a single class, thus avoiding code proliferation.
Am I missing something?

It shouldn't matter what the method is called, as long as it has the @Parameters annotation.
The easiest way to have multiple data sets is to put the tests in an abstract class and have the concrete subclasses provide the @Parameters method.
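A minimal sketch of that layout, with invented class names (each concrete subclass would live in its own source file):

import java.util.Arrays;
import java.util.Collection;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public abstract class AbstractSquareTest {
    protected final int input;
    protected final int expectedSquare;

    public AbstractSquareTest(int input, int expectedSquare) {
        this.input = input;
        this.expectedSquare = expectedSquare;
    }

    @Test
    public void squaresCorrectly() {
        Assert.assertEquals(expectedSquare, input * input);
    }
}

// In its own file: the concrete subclass only contributes the data set.
public class SmallNumbersSquareTest extends AbstractSquareTest {
    public SmallNumbersSquareTest(int input, int expectedSquare) {
        super(input, expectedSquare);
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1, 1 }, { 2, 4 }, { 3, 9 } });
    }
}

A second subclass can extend the same abstract class with a different @Parameters method, giving each data set its own class without duplicating the test logic.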
However, I would question why one would want multiple data sets per test in the same test class. This probably violates the single responsibility principle, and you should probably break the data sets into one test class each.

Related

How to unit test a method in an enum that might change

So I have an autogenerated enum in which each constant contains several fields, and I wish to test some of the logic of the methods contained in the enum. Examples could be "find all constants with this value in this field". However, the enum can change; more specifically, the values and the number of constants can change, but not the number of fields in each constant. This also raises the possibility of mocking the values() method.
Now I'm afraid that if I write tests using specific values, those tests might fail once the values are no longer present in the enum.
So my options are either to add elements to the existing enum that I can then use in the test, or to mock the entire enum with new values I can use in the test.
Now my question: what is good practice? I've read about PowerMock, but there seem to be differing opinions on it. Any better solutions? Am I looking at this wrong?
The part that can be easily answered: you don't need a mocking framework here.
You have enums with some content, and when you want to test their internals, a mocking framework is of no use. There is no point in mocking values() when your goal is to test certain properties of these generated enums.
In other words, your test cases should boil down to code that fetches values and then asserts something about them. Worst case, you might have to use reflection, as in:
somehow collect the names of all enum classes to test (which could be achieved by scanning the class path, for example)
for each such enum, maybe use reflection to acquire certain fields, then assert against the expected results.
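A small sketch of that idea, assuming a made-up generated enum named Country; the test only touches values() and one field via reflection, so it keeps working when constants are added or removed:

import java.lang.reflect.Field;
import org.junit.Assert;
import org.junit.Test;

public class GeneratedEnumTest {

    // Stand-in for the real generated enum.
    enum Country {
        DE("Germany"), FR("France");
        private final String label;
        Country(String label) { this.label = label; }
    }

    @Test
    public void everyConstantHasANonEmptyLabel() throws Exception {
        Field labelField = Country.class.getDeclaredField("label");
        labelField.setAccessible(true);
        for (Country constant : Country.values()) {
            String label = (String) labelField.get(constant);
            Assert.assertNotNull("label missing for " + constant.name(), label);
            Assert.assertFalse("label empty for " + constant.name(), label.isEmpty());
        }
    }
}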
But most likely, the real answer is completely different: it is probably wrong to unit test generated code in the first place. Rather, have unit tests that verify the code generator instead.
After all, when your unit tests find a problem in the generated enum, what will you do? Most likely, change your generator.

If I have multiple JUnit test classes, but they share a large amount of setup, is there a way to reuse some of this setup code?

I have a number of classes I'm testing, and I've separated all of the tests for these classes out into their own files, i.e. I have the following sort of directory structure:
src
    ClassOne.java
    ClassTwo.java
test
    ClassOneTests.java
    ClassTwoTests.java
I have a much larger number of classes to test than this, however. I've started to write tests for many of these classes, and I realised that there is a lot of code in the setup that is the same between them all. I'd like to be able to reuse this code, both to reduce the time it takes to write more tests, and so that if I end up modifying my code base, there is only one place in the tests to change rather than many dozens of almost identical instantiations of mocks.
An example of my code layout is like this:
public class ClassOneTests {
    // Declare fields that are dependencies for the tests
    AClass var1;
    // etc...
    ClassOne testThis;

    @Before
    public void setUp() throws Exception {
        // instantiate and populate the dependencies
        var1 = new AClass();
        // etc...
        testThis = new ClassOne();
        testThis.setDependencies(var1 /*, and other dependencies... */);
    }

    @Test
    public void aTest() {
        // Do test stuff
        Var result = testThis.doStuff();
        Assert.assertNotNull(result);
    }
    // etc...
}
This means that the vast majority of the declarations and the setUp method is duplicated across my test classes.
I have partially solved this by simply having a GenericTestSetup.java that does all of this setup, which each ClassXTests.java then extends. The problem lies with the few classes that are unique to each ClassXTests.java file but used in each of its methods. So, in the example above, AClass is a variable specific to ClassOneTests. When I write ClassTwoTests I'd have a bunch of shared variables, but instead of AClass I'd have BClass, for example.
The problem I'm having is that I can't declare two @Before methods, one in GenericTestSetup for the generic setup and another in the ClassXTests class, if any of the objects inside ClassXTests depend on something in GenericTestSetup, because you can't guarantee an execution order.
In summary, is there a way to have this generic test setup that's used by multiple test classes, but also have another setup method for each specific class that is guaranteed to run after the generic objects have been set up?
EDIT: It was suggested that my question is a duplicate of this question: Where to put common setUp-code for differen testclasses?
I think they're similar, but that question was asking for a more general approach in terms of policy; I have a more specific set of requirements and am asking how it would actually be implemented. Plus, none of the answers there actually gave a proper answer about implementation, which is what I need, so they aren't useful to me.
What I usually do when overloading increases the complexity is use what I call Logic classes...
Basically, these are classes with static methods which take the parameters that need to be initialized and set them up:
Logic.init(param1, param2)
and I usually use only one @Before with different Logic methods.
The advantage is that the code can be shared between similar initializations, and you can reuse code within the Logic class itself. For example,
Logic.init(param1, param2, param3) can call Logic.init(param1, param2), and you can use different variants in the different @Before methods.
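A rough sketch of such a Logic class (the names and the things being initialized are invented); the point is only that the overloads delegate to each other and that each test class's @Before calls the variant it needs:

import java.util.List;
import java.util.Map;

public final class Logic {

    private Logic() { }

    // Shared initialization used by several test classes: fills the supplied containers.
    public static void init(List<String> users, Map<String, String> settings) {
        users.add("defaultUser");
        settings.put("locale", "en");
    }

    // The wider variant reuses the narrower one, as described above.
    public static void init(List<String> users, Map<String, String> settings, List<String> roles) {
        init(users, settings);
        roles.add("admin");
    }
}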
EDIT:
I do not know whether there is a pattern or a name for this solution. If there is, I would like to know the name as well :).
You could use the Template Method pattern: the @Before method in the generic class calls a protected method in the same class which is empty by default but can be overridden in the implementing classes to add specific behaviour.
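A minimal sketch of that approach, reusing the names from the question (GenericTestSetup, ClassOneTests and AClass are the asker's; the hook method name is made up):

import org.junit.Before;

public abstract class GenericTestSetup {

    @Before
    public final void baseSetUp() throws Exception {
        // shared setup for all test classes goes here
        setUpSpecific(); // subclass hook, guaranteed to run after the shared part
    }

    // Empty by default; concrete test classes override it.
    protected void setUpSpecific() throws Exception { }
}

// In its own file:
public class ClassOneTests extends GenericTestSetup {
    private AClass var1;

    @Override
    protected void setUpSpecific() {
        var1 = new AClass(); // can rely on everything baseSetUp has already created
    }
}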

Reuse expectations block several times in JMockit

I am writing test cases for a Liferay portal in which I want to mock ActionRequest, ThemeDisplay and similar objects. I have tried writing expectations in each test method.
Now I want to generalize the approach by creating a BaseTest class which provides all the expectations needed by each method, so that I don't have to write them again in all the test classes.
For one class I have tried writing the expectations in a @Before method. How can I reuse the same in different classes?
For example I want to do following in several classes:
@Before
public void setUp() {
    // All expectations which are required by each test method
    new Expectations() {{
        themeDisplay.getPermissionChecker();
        returns(permissionChecker);
        actionRequest.getParameter("userId");
        returns("111");
        actionRequest.getParameter("userName");
        returns("User1");
    }};
}
Also, is there a way to specify that whenever I call actionRequest.getParameter() it returns the specific value that I provide?
Any help will be appreciated.
Generally, what you want is to create named Expectations and Verifications subclasses to be reused from multiple test classes. Examples can be found in the documentation.
Note that mocked instances have to be passed in, when instantiating said subclasses.
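For example, something along these lines (a sketch against the older JMockit API used in the question, with a made-up class name); the mocked ActionRequest is passed into the reusable subclass:

import javax.portlet.ActionRequest;
import mockit.Expectations;

// Named, reusable expectations; instantiate it from any test's @Before.
public final class ActionRequestExpectations extends Expectations {
    public ActionRequestExpectations(ActionRequest actionRequest) {
        actionRequest.getParameter("userId");
        returns("111");
        actionRequest.getParameter("userName");
        returns("User1");
    }
}

Each test class's @Before can then simply do new ActionRequestExpectations(actionRequest); instead of repeating the block.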
Methods like getPermissionChecker(), however, usually don't need to be explicitly recorded, since a cascaded instance is going to be automatically returned as needed.
Mocking methods like getParameter, though, hints that perhaps it would be better to use real objects rather than mocks. Mocking isn't really meant for simple "getters", and this often indicates that you may be mocking too much.

Test classes needing lots of objects (Java)

I'm trying to write a test class for a class that has exactly one method. It's not too difficult a method, but it does require some objects (in my case two, passed as parameters) from upper layers. The problem is that in order to create those objects, other objects are needed, and so on. So I decided to write two inner classes which function as placeholders for the two parameters, containing only the information I need. I set the attributes of these inner classes to the values I'd expect under fully working circumstances.
Now Eclipse doesn't let me compile because of the following error:
The method produceTasks(IOrder, List) in the type AssemblyTaskFactory is not applicable for the arguments (AssemblyTaskFactoryTest.IOrder, List)
As you guessed, produceTasks is the method I'm trying to test, and the arguments are my inner classes. The error is pretty clear about it: it won't accept my inner classes as valid parameters, despite them having the same class names (because they are in fact different classes). I had expected this to work if I used the same class and method names. Is there a workaround to make this work, or what would be the alternative to avoid creating a hundred objects just to test one method?
Mock the dependencies and inject them during testing.
Mockito is rather nice.
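A small sketch of what that could look like for the question's produceTasks method (IOrder and AssemblyTaskFactory come from the question; the no-argument constructor and the non-null return value are assumptions):

import static org.mockito.Mockito.mock;

import java.util.Collections;
import org.junit.Assert;
import org.junit.Test;

public class AssemblyTaskFactoryTest {

    @Test
    public void produceTasksAcceptsAMockedOrder() {
        // The mock satisfies the real IOrder type, so no object graph has to be built.
        IOrder order = mock(IOrder.class);
        AssemblyTaskFactory factory = new AssemblyTaskFactory();

        Assert.assertNotNull(factory.produceTasks(order, Collections.emptyList()));
    }
}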

Difference between JUnit Theories and Parameterized Tests

What is the difference between a Theory and a Parameterized test?
I'm not interested in implementation differences when creating the test classes, just when you would choose one over the other.
From what I understand:
With Parameterized tests you can supply a series of static inputs to a test case.
Theories are similar but different in concept. The idea behind them is to create test cases that test on assumptions rather than static values.
So if my supplied test data is true according to some assumptions, the resulting assertion is always deterministic.
One of the driving ideas behind this is that you should be able to supply an unlimited number of test data points and your test case would still hold; also, you often need to test a whole universe of possibilities within your input data, such as negative numbers. If you test that statically, that is, by supplying a few negative numbers, it is not guaranteed that your component will work for all negative numbers, even if it is highly probable that it does.
From what I can tell, xUnit frameworks try to apply theories' concepts by creating all possible combinations of your supplied test data.
Both should be used in data-driven scenarios (i.e. only the inputs change, but the test always performs the same assertions).
But, since theories seem experimental, I would use them only if I needed to test a series of combinations in my input data. For all the other cases I'd use Parameterized tests.
Parameterized.class "parameterizes" tests with a single variable, while Theories.class "parameterizes" them with all combinations of several variables.
For examples please read:
http://blogs.oracle.com/jacobc/entry/parameterized_unit_tests_with_junit
http://blog.schauderhaft.de/2010/02/07/junit-theories/
http://blogs.oracle.com/jacobc/entry/junit_theories
Theories.class is similar to Haskell QuickCheck:
http://en.wikibooks.org/wiki/Haskell/Testing
but QuickCheck autogenerates parameter combinations
In addition to the above responses:
For an input with 4 values and 2 test methods,
@RunWith(Theories.class) will generate 2 JUnit tests, while
@RunWith(Parameterized.class) will generate 8 (4 inputs x 2 methods) JUnit tests.
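To make the difference concrete, here is a minimal Theories sketch (names invented); with the four data points below, JUnit reports one test per @Theory method, but the body runs for every combination of arguments:

import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class AdditionTheoryTest {

    @DataPoints
    public static final Integer[] VALUES = { 0, 1, 2, 3 };

    // Invoked for every (a, b) combination of the data points.
    @Theory
    public void sumOfNonNegativesIsNonNegative(Integer a, Integer b) {
        assumeTrue(a >= 0 && b >= 0);
        assertTrue(a + b >= 0);
    }
}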
A little late in replying, but it may be helpful to future testers.
Parameterized Tests vs Theories
Class annotated with @RunWith(Parameterized.class) vs @RunWith(Theories.class).
Test inputs are retrieved from a static method returning a Collection and annotated with @Parameters vs static fields annotated with @DataPoints or @DataPoint.
Inputs are passed to the constructor (mandatory) and used by the test method vs inputs are passed directly to the test method.
Test method is annotated with @Test and doesn't take arguments vs test method is annotated with @Theory and may take arguments.
From my understanding, the difference is that a Parameterized test is used when all you want to do is run the same test against a different set of inputs (each one tested individually), whereas a Theory is a special case of a Parameterized test in which you are testing every input as a whole (the assertion needs to hold for every combination).
