What is the difference between a Theory and a Parameterized test?
I'm not interested in implementation differences when creating the test classes, just when you would choose one over the other.
From what I understand:
With Parameterized tests you can supply a series of static inputs to a test case.
Theories are similar but different in concept. The idea behind them is to create test cases that test on assumptions rather than static values.
So as long as my supplied test data satisfies the assumptions, the resulting assertion should always hold.
One of the driving ideas behind this is that you could supply an effectively infinite amount of test data and your test case would still pass. Also, you often need to test against a whole universe of possibilities within the input data, such as negative numbers. If you test that statically, that is, supply a few negative numbers, there is no guarantee that your component will work for all negative numbers, even if it is highly probable that it does.
From what I can tell, xUnit frameworks try to apply theories' concepts by creating all possible combinations of your supplied test data.
Both should be used in data-driven scenarios (i.e. only the inputs change, while the test performs the same assertions over and over).
But, since theories seem experimental, I would use them only if I needed to test a series of combinations in my input data. For all the other cases I'd use Parameterized tests.
Parameterized.class "parameterizes" tests with a single variable, while Theories.class "parameterizes" them with all combinations of several variables.
For examples please read:
http://blogs.oracle.com/jacobc/entry/parameterized_unit_tests_with_junit
http://blog.schauderhaft.de/2010/02/07/junit-theories/
http://blogs.oracle.com/jacobc/entry/junit_theories
Theories.class is similar to Haskell QuickCheck:
http://en.wikibooks.org/wiki/Haskell/Testing
but QuickCheck auto-generates the parameter combinations.
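To make the contrast concrete, here is a minimal Parameterized sketch (JUnit 4 assumed; the class and data are made up). Each Object[] in the collection becomes one invocation of every @Test method, delivered through the constructor:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {

    @Parameters
    public static Collection<Object[]> data() {
        // Static inputs: each array is one run of every @Test method
        return Arrays.asList(new Object[][] { { -3 }, { -1 }, { 0 }, { 2 } });
    }

    private final int input;

    public SquareTest(int input) {
        this.input = input; // inputs arrive via the mandatory constructor
    }

    @Test
    public void squareIsNonNegative() {
        assertTrue(input * input >= 0);
    }
}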
In addition to the above responses:
Given an input with 4 values and 2 test methods:
@RunWith(Theories.class) - will generate 2 JUnit tests (one per theory method; the data points are combined inside it)
@RunWith(Parameterized.class) - will generate 8 (4 inputs x 2 methods) JUnit tests
A little late in replying, but it may be helpful to future testers.
Parameterized Tests vs Theories
A class annotated with @RunWith(Parameterized.class) vs @RunWith(Theories.class).
Test inputs are retrieved from a static method returning a Collection and annotated with @Parameters vs static fields annotated with @DataPoints or @DataPoint.
Inputs are passed to the constructor (mandatory) and used by the test method vs inputs passed directly to the test method.
The test method is annotated with @Test and doesn't take arguments vs the test method is annotated with @Theory and may take arguments.
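For contrast, the same property written as a theory (again a sketch with made-up data; JUnit 4 assumed). Note that the inputs go straight into the method and the assertion must hold for every data point:

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheoryTest {

    @DataPoints
    public static Integer[] VALUES = { -3, -1, 0, 2 };

    @Theory
    public void squareIsNonNegative(Integer input) {
        // Runs as a single JUnit test that iterates over all data points
        assertTrue(input * input >= 0);
    }
}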
From my understanding, the difference is that a Parameterized test is used when all you want to do is run the same test over a set of inputs (testing each one individually), while a Theory is a special case of a parameterized test in which the assertion must hold for every combination of supplied inputs.
I'm writing a JUnit 4 test class comprising 5 test methods.
All 5 test methods contain the same 10 assertEquals lines of code.
Would it be best practice to move these lines into one method, e.g. public void callAssertions() { ... }, and call this method from all the tests?
As matt freake's comment mentions (Calling Junit Assertions via method call), there are differing opinions, but what I would do is separate assertions that are similar in nature.
So, for example, if you want to assert a person's details, I would separate them into an assertPersonDetails() method, and so forth for other assertions; see the sketch below. It really depends on the business logic underneath.
I wouldn't recommend moving them all into a generically named method like the callAssertions() you suggested.
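For illustration, a sketch of what that could look like (Person, its getters, and the fixtures are hypothetical):

// Hypothetical helper: groups the assertions that belong to one concept
private void assertPersonDetails(Person expected, Person actual) {
    assertEquals(expected.getName(), actual.getName());
    assertEquals(expected.getAge(), actual.getAge());
    assertEquals(expected.getEmail(), actual.getEmail());
}

@Test
public void savedPersonKeepsItsDetails() {
    Person saved = repository.save(person); // 'repository' and 'person' assumed as test fixtures
    assertPersonDetails(person, saved);
}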
As with all code, you should make sure that every test and every method in your test is readable and comprehensible.
You should have a clear pattern in your top-level test methods: given X, when Y, then Z. You can extract common code from each of the given/when/then parts, but you should not mix them.
So an assertThatZzzIsConsistent(...) method is OK if it extracts only from the 'then' part. But an executeYyyAndAssertThatZzzIsConsistent(...) is not OK, because it combines the 'when' and 'then' parts.
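A sketch of that rule, with hypothetical names:

@Test
public void checkoutKeepsOrderTotalConsistent() {
    Order order = givenAnOrderWithTwoItems(); // given - extracted setup is fine
    order.checkout();                         // when - stays visible in the test
    assertThatTotalIsConsistent(order);       // then - extracted assertions are fine
}

// OK: extracts only the 'then' part
private void assertThatTotalIsConsistent(Order order) {
    assertEquals(order.getSubtotal().add(order.getTax()), order.getTotal());
}

// Not OK: a checkoutAndAssertThatTotalIsConsistent(order) helper would hide
// the 'when' inside the 'then' and blur what the test actually exercises.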
I think that is a neater solution, yes. Personally, I always keep everything as separate as possible so that every test is completely standalone. As long as that is the case, it's okay.
I'm trying to mock an ArrayList to verify the add method, but I'm getting the message:
FAILED: testInit
Wanted but not invoked:
arrayList.add(<any>);
-> at AsyncRestTemplateAutoConfigurationTest.testInit(AsyncRestTemplateAutoConfigurationTest.java:95)
Actually, there were zero interactions with this mock.
The test class I've used is:
@Test
public void testInit() throws Exception {
    ArrayList<AsyncClientHttpRequestInterceptor> interceptors = Mockito.mock(ArrayList.class);
    PowerMockito.whenNew(ArrayList.class).withAnyArguments()
            .thenReturn(interceptors);
    Mockito.stub(interceptors.add(Mockito.any())).toReturn(true);
    asyncRestTemplateAutoConfiguration.init();
    Mockito.verify(interceptors).add(Mockito.any());
}
The actual tested code is:
List<AsyncClientHttpRequestInterceptor> interceptors = new ArrayList<>(interceptors);
interceptors.add(new TracingAsyncRestTemplateInterceptor(tracer));
I've declared the test class with
@RunWith(PowerMockRunner.class)
@PrepareForTest(AsyncRestTemplateAutoConfiguration.class)
where AsyncRestTemplateAutoConfiguration is the class I'm testing. Could anyone please tell me what I'm missing?
Your unit test should verify publicly observable behavior, which means return values and communication with dependencies (this does not necessarily imply testing only public methods).
That your production code uses an ArrayList to store your data is an implementation detail which you don't want to test, since it may change without altering the unit's general behavior, and in that case your unit test should not fail.
Don't start learning how to unit test using PowerMockito - it will give you bad habits.
Instead, consider working carefully through the documentation for Mockito and you will see how to structure your classes for better testing.
Ideally, your classes should be such that you do not need PowerMockito to test them and you can just rely on plain old Mockito.
If you can arrive at the point where you can write elegant and simple tests using just Mockito, it will be a sign you have grasped the fundamental concepts of unit testing.
You can start by learning how to inject dependencies through the constructor of the class that can be swapped with mocked test doubles on which behaviour can be verified.
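A minimal sketch of that idea, reusing the names from the question (the constructor shape here is an assumption for illustration, not the real class's API). Once the dependencies are injected, plain Mockito is enough and whenNew disappears:

public class AsyncRestTemplateAutoConfiguration {
    private final List<AsyncClientHttpRequestInterceptor> interceptors;
    private final Tracer tracer;

    // Dependencies come in through the constructor, so a test can swap them
    public AsyncRestTemplateAutoConfiguration(
            List<AsyncClientHttpRequestInterceptor> interceptors, Tracer tracer) {
        this.interceptors = interceptors;
        this.tracer = tracer;
    }

    public void init() {
        interceptors.add(new TracingAsyncRestTemplateInterceptor(tracer));
    }
}

@Test
public void initRegistersTracingInterceptor() {
    List<AsyncClientHttpRequestInterceptor> interceptors = new ArrayList<>();
    new AsyncRestTemplateAutoConfiguration(interceptors, Mockito.mock(Tracer.class)).init();

    // Verify observable behavior: the interceptor was registered - no PowerMockito needed
    assertEquals(1, interceptors.size());
}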
Another point to note is that, as per the other answer, the internal ArrayList in your system under test is an implementation detail. Unless consumers of your system under test can access the ArrayList through, say, methods that expose it, there is not much point in writing a test against it.
If the state of the internal ArrayList affects something from the point of view of the consumer, then try writing a test against that rather than against the internal property.
Good luck with your journey on unit testing!
So I have an autogenerated enum where each constant contains several fields, and I wish to test some of the logic of the methods contained in the enum; an example would be "find all constants with this value in this field". However, the enum can change, more specifically the values and the number of constants, though not the number of fields per constant. This also raises the possibility of mocking the values() method.
Now I'm afraid that if I write tests using specific values, those tests might fail once the values are no longer present in the enum.
So my options are either to add elements to the existing enum that I can then use in the test, or to mock the entire enum with new values I can use in the test.
Now my question: what is good practice? I've read about PowerMock; however, there seem to be differing opinions on it. Are there any better solutions? Am I looking at this wrong?
The part that can be easily answered: you don't need a mocking framework here.
You have enums with some content, and when you want to test their internals, a mocking framework is of no use. There is no point in mocking values() when your goal is to test certain properties of these generated enums.
In other words, your test cases should boil down to code that fetches values and then asserts something about them. Worst case, you might have to use reflection, as in the sketch after this list:
somehow collect the names of all enum classes to test (this could be achieved by scanning the classpath, for example)
for each such enum, maybe use reflection to acquire certain fields, then assert against the expected results.
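For instance, a sketch assuming a hypothetical generated enum whose class is known at compile time; values() iterates whatever constants the generator produced, so the test survives added or removed constants:

// Hypothetical generated enum with one field per constant
enum CountryCode {
    DK("45"), DE("49"), SE("46");

    private final String dialPrefix;

    CountryCode(String dialPrefix) { this.dialPrefix = dialPrefix; }

    public String getDialPrefix() { return dialPrefix; }
}

@Test
public void everyConstantHasANonEmptyDialPrefix() {
    for (CountryCode code : CountryCode.values()) {
        // Assert a property of all constants instead of pinning specific values
        assertFalse(code.name(), code.getDialPrefix().isEmpty());
    }
}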
But most likely the real answer is completely different: it is probably wrong to unit test generated code in the first place. Rather, write unit tests that verify the code generator instead.
Think about it: when your unit tests find a problem in the generated enum, what will you do? Most likely, change your generator.
I'm relatively new to JUnit 4; I have figured out that I can repeat the same test on a method with different inputs, using the @Parameters annotation to tag a method returning an Iterable of arrays, say a List<Integer[]>.
I found out that JUnit requires the method providing the Iterable of arrays to be static and named data, which means that you can test different methods, but always with the same data.
As a matter of fact, you can tag any method (with any return type, by the way) with @Parameters, but with no effect; only the data() method is taken into account.
What I was hoping JUnit would allow is having several different data sets annotated with @Parameters, plus some mechanism (say, an argument to @Test) to specify that data set X should be used when running testFoo() and data set Y when running testBar().
In other words, I would like to set up a local (as opposed to class/instance) data set in each of my testing methods.
As I understand the whole thing, you are obliged to build a separate class for each method you want to test with multiple inputs, which IMHO makes the feature pretty useless. As a matter of fact, I built myself a modest framework (actually based on JUnit) which does allow testing multiple methods with multiple inputs (with a tracing feature), all contained in a single class, thus avoiding code proliferation.
Am I missing something?
It shouldn't matter what the method is called, as long as it has the @Parameters annotation.
The easiest way to have multiple data sets is to put the tests in an abstract class and have the concrete subclasses provide the @Parameters method.
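A sketch of that pattern (all names are made up): the runner and the shared test methods live in the abstract class, while each concrete subclass contributes its own @Parameters method:

@RunWith(Parameterized.class)
public abstract class AbstractSquareTest {
    protected final int input;

    public AbstractSquareTest(int input) {
        this.input = input;
    }

    @Test
    public void squareIsNonNegative() {
        assertTrue(input * input >= 0); // shared test; the data set varies per subclass
    }
}

public class NegativeInputsSquareTest extends AbstractSquareTest {
    public NegativeInputsSquareTest(int input) {
        super(input);
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { -1 }, { -42 } });
    }
}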
However, I would question why one would want multiple data sets per test in the same test class. This probably violates the single responsibility principle, and you should probably break the data sets into one test class each.
I've recently inherited an application that was written by different people at different times, and I'm looking for guidance on how to standardize.
Assuming NUnit:
[Test]
public void ObjectUnderTest_StateChanged_Consequence()
{
    Assert.That(tra_la_la);
}

[Test]
public void ObjectUnderTest_Behaviour_Consequence()
{
    Assert.That(tra_la_la);
}
for example:
[Test]
public void WifeIsTired_TakeWifeToDinner_WifeIsGrateful()
{
    Assert.That(tra_la_la);
}

[Test]
public void WifeIsTired_MentionNewGirlfriend_WifeGetsHalf()
{
    Assert.That(tra_la_la);
}
I just write what the test is for. It's not as if you're going to have to type the names anywhere else, so having a testWibbleDoesNotThrowAnExceptionIfPassedAFrobulator isn't a problem. Anything which is a test begins with 'test', obviously.
There is no standard as such; different people and places will have different schemes. The important thing is that you stick to one standard.
Personally I'm a fan of the following - example code in C#, but very close to Java, same rules apply:
[Test]
public void person_should_say_hello()
{
    // Arrange
    var person = new Person();

    // Act
    string result = person.SayHello();

    // Assert
    Assert(..., "The person did not say hello correctly!");
}
Explicit
The test name should give the name of the class under test. In this example, the class being tested is Person. The test name should also include the name of the method being tested. This way, if the test fails, you'll at least know where to look to solve it. I'd also recommend following the AAA (Arrange, Act, Assert) rule; it ensures your tests are easy to read and follow.
Friendly fail messages
When it comes to asserting a result or state, it's useful to include the optional message. This makes diagnosis easier when a test fails, especially when it runs as part of a build process or via an external tool.
Underscores
The final (though optional) convention I follow is using underscores in test names. While I'm no fan of underscores in production code, they are useful in test names, which tend to be much longer. Quickly glancing at a test name that uses underscores proves much more readable, though this is subjective and the source of much debate in unit testing circles.
Integration Tests
The same standards apply to integration tests; the only difference is that such tests should live separately from the unit tests. In the example code above, the test class would be called PersonTests and located in a file called PersonTests.cs. The integration tests would be named in a similar manner, PersonIntegrationTests, located in PersonIntegrationTests.cs. The same project can be used for these tests, but make sure they are located in separate directories.
It's instructive to look at BDD (behavioural driven development) and this blog post in particular.
BDD essentially focuses on components and what they should do. Consequently, it directly shapes how you name and structure your tests, and the code they use to set up conditions and validate results. BDD allows not only developers to read and write the tests; non-technical members of the team (business analysts etc.) can contribute by specifying the tests and validating them.
I ran across a few good suggestions. Links here: http://slott-softwarearchitect.blogspot.com/2009/10/unit-test-naming.html
http://weblogs.asp.net/rosherove/archive/2005/04/03/TestNamingStandards.aspx
http://openmrs.org/wiki/Unit_Testing_with_#should
In that situation I'd probably find the naming convention that was used the most and refactor the rest of the code to use that. If the one that was used the most is truly horrid, I'd still look to the existing code and try to find one that I could live with. Consistency is more important than arbitrary conventions.
I use a FunctionTestCondition construct. If I have two methods, Get and Set, I might create the following test methods:
GetTest, being a positive test (everything is OK).
GetTestInvalidIndex, to test an invalid index being passed to the method.
GetTestNotInitialized, to test when the data is not initialized before use.
SetTest
SetTestInvalidIndex
SetTestTooLargeValue
SetTestTooLongString
Group your tests by setup: make a test class around each setup and name it with the suffix Test or IntegrationTest. Using a test framework like JUnit or TestNG, you can name your test methods however you want. I would name each method after what it tests, as a sentence in camel case, without a test prefix; the frameworks use a @Test annotation to mark a method as a test.
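For example, a sketch of that style (the Account class is hypothetical):

public class AccountTest { // grouped by setup, suffixed with Test

    private final Account account = new Account(100);

    @Test
    public void depositIncreasesBalance() { // a camel-case sentence, no 'test' prefix
        account.deposit(50);
        assertEquals(150, account.getBalance());
    }
}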