I have an interface, e.g.:
public interface Thing {
    FrobResult frob(FrobInput input);
}
And several implementations of that interface (e.g. NormalThing, ImmutableThing, AsyncThing) that I am trying to test.
Many of my test methods are really about ensuring that the interface is implemented correctly, and thus are duplicated across each Thing implementation. In JUnit 3 a common solution to this would be to create a base class (extending TestCase) that is then subclassed by each implementation class. But is this the correct approach for JUnit 4?
Possible alternatives in (I believe) ascending order of preference:
1. Cut'n'paste the duplicated test methods. Not DRY at all, but I guess less worrisome in tests than it would be in production code.
2. Create an abstract class with @Test methods, and subclass it for each implementation test class. (Commonly seen with JUnit 3 tests -- is this still a good way to go in JUnit 4?)
3. Put the common test methods into a helper class, and invoke it on each implementation. (Composition instead of inheritance.)
What's the best practice for doing #3? Maybe a @RunWith(Parameterized.class) test that is parameterized with each implementation? Or is there a better way to accomplish this?
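For concreteness, a rough sketch of what the Parameterized variant of #3 might look like (the implementation constructors, FrobInput and the assertion are placeholders, not real code):

import static org.junit.Assert.assertNotNull;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ThingContractTest {

    @Parameters(name = "{0}")
    public static Collection<Object[]> implementations() {
        // One row per implementation under test (class names from the question).
        return Arrays.asList(new Object[][] {
            { new NormalThing() },
            { new ImmutableThing() },
            { new AsyncThing() }
        });
    }

    private final Thing thing;

    public ThingContractTest(Thing thing) {
        this.thing = thing;
    }

    @Test
    public void frobReturnsAResult() {
        // Placeholder assertion -- the real interface-contract checks belong here.
        assertNotNull(thing.frob(new FrobInput()));
    }
}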
Yes, creating a base class that is then subclassed by each implementation's test class is still the correct approach in JUnit 4, too.
I prefer the base test class for the interface to be abstract, i.e. your alternative #2, since I have had good experience mimicking the inheritance hierarchy of the production code in the test code. So if you have interface I and implementations S1, S2 and S3, you make an abstract test class TestI and the test classes TestS1, TestS2 and TestS3.
Test cases should be speaking, i.e. tell a story. By choosing -- as always -- method names carefully and sticking to clean behavioral subtyping, inheritance does not obfuscate this.
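Applied to the Thing example from the question, a minimal sketch might look like this (the createThing factory method and the assertion are assumptions, not from the question):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;

// Shared contract tests for every Thing implementation.
public abstract class ThingTest {

    // Each concrete test class supplies the implementation under test.
    protected abstract Thing createThing();

    @Test
    public void frobReturnsAResult() {
        assertNotNull(createThing().frob(new FrobInput()));
    }
}

// In its own file, one thin subclass per implementation:
public class NormalThingTest extends ThingTest {
    @Override
    protected Thing createThing() {
        return new NormalThing();
    }
}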
I use approach #2 for JUnit and TestNG test cases. It is the most convenient and the easiest to maintain. It's also straightforward to pick up (since it's natural in OOD to have a base class that holds common methods).
To me, unit test classes are no different from regular project classes, so I apply similar design considerations.
Related
Assume that I am building a monthly subscription charging feature for a mobile app. There are multiple ways to charge the bill: it can be through Apple Pay, Google Wallet, PayPal, or Visa/Mastercard, depending on the platform. Each provider has its own implementation and respective JUnit tests (as they are using Java).
To evaluate a few basic functionalities, there are a few cases which every implementation has to validate. So the plan is to write base tests and call an abstract validate method.
Here is my approach:
public abstract class BaseBillingTest
{
    public abstract BillCharger getBillCharger();

    public abstract void validateTest1(Bill input, Bill output);

    @Test
    public void tests_case_1() {
        Bill input = new Bill(/* some value */);
        Bill output = getBillCharger().charge(input);
        validateTest1(input, output);
    }
}
Any derived test class will implement the abstract methods, so it has the responsibility to implement the validation. The derived test class need not know what is happening in the base test; it can just validate the output based on the input.
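For illustration, a derived test class might look like this (ApplePayBillCharger and the Bill accessors are assumptions, not part of the real code):

import static org.junit.Assert.assertEquals;

public class ApplePayBillingTest extends BaseBillingTest
{
    @Override
    public BillCharger getBillCharger() {
        // Hypothetical Apple Pay implementation of BillCharger.
        return new ApplePayBillCharger();
    }

    @Override
    public void validateTest1(Bill input, Bill output) {
        // Provider-specific expectation; getAmount/getChargedAmount are assumed accessors.
        assertEquals(input.getAmount(), output.getChargedAmount(), 0.0);
    }
}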
Any suggestions to approach this in a more elegant fashion? Are there any design patterns which I can apply in this scenario?
Your example of inheritance is, in my opinion, not the optimal way to go for the two abstract methods. One is constructional and the other is essentially static - you validate one Bill against another.
In both cases inheritance is not the correct relationship here. For getBillCharger you can successfully use the Factory pattern, or the more modern TestDataBuilder or ObjectMother patterns.
For the second method you can just use a helper class.
If you need to invoke the same construction logic several times in a test class you can use @Before.
One important aspect is that if you place your tests_case_1 in the super class, you will be hiding parts of your logic and your tests will not be that obvious. I prefer more explicit and visible test cases; therefore I would avoid this kind of setup.
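A sketch of that composition-based setup, assuming a hypothetical BillBuilder, ApplePayBillCharger and a plain BillAssertions helper class (none of these names come from the question):

import org.junit.Before;
import org.junit.Test;

public class ApplePayChargeTest {

    private Bill input;

    @Before
    public void setUp() {
        // Shared construction logic lives in a builder, not in a test superclass.
        input = new BillBuilder().withAmount(10).build();      // hypothetical builder
    }

    @Test
    public void chargesTheFullAmount() {
        Bill output = new ApplePayBillCharger().charge(input); // hypothetical charger
        BillAssertions.assertChargedInFull(input, output);     // plain static helper class
    }
}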
The original approach does, in fact, apply a design pattern, it's called Template Method.
OTOH, the question has a slight smell of hunting for design patterns as a goal in its own right. Why bother, if you already have a solution at hand that fulfills functional (here: testing the right thing) as well as non-functional (here: clearly separating common parts from specifics) requirements? The one reason I would accept is communication, as in being able to give a name to the design so that co-workers quickly understand my intentions.
I was just looking at the Java Hamcrest code on GitHub, and noticed they employed a strategy that seemed unintuitive and awkward, but it got me wondering if I'm missing something.
I noticed in the HamCrest API that there is an interface Matcher and an abstract class BaseMatcher. The Matcher interface declares this method, with this javadoc:
/**
 * This method simply acts a friendly reminder not to implement Matcher directly and
 * instead extend BaseMatcher. It's easy to ignore JavaDoc, but a bit harder to ignore
 * compile errors.
 *
 * @see Matcher for reasons why.
 * @see BaseMatcher
 * @deprecated to make
 */
@Deprecated
void _dont_implement_Matcher___instead_extend_BaseMatcher_();
Then in BaseMatcher, this method is implemented as follows:
/**
 * @see Matcher#_dont_implement_Matcher___instead_extend_BaseMatcher_()
 */
@Override
@Deprecated
public final void _dont_implement_Matcher___instead_extend_BaseMatcher_() {
    // See Matcher interface for an explanation of this method.
}
Admittedly, this is both effective and cute (and incredibly awkward). But if the intention is for every class that implements Matcher to also extend BaseMatcher, why use an interface at all? Why not just make Matcher an abstract class in the first place and have all other matchers extend it? Is there some advantage to doing it the way Hamcrest has done it? Or is this a great example of bad practice?
EDIT
Some good answers, but in search of more detail I'm offering a bounty. I think that the issue of backwards / binary compatibility is the best answer. However, I'd like to see the issue of compatibility elaborated on more, ideally with some code examples (preferably in Java). Also, is there a nuance between "backwards" compatibility and "binary" compatibility?
FURTHER EDIT
January 7, 2014 -- pigroxalot provided an answer below, linking to
this comment on Reddit by the authors of HamCrest. I encourage everyone to read it, and if you find it informative, upvote pigroxalot's answer.
EVEN FURTHER EDIT
December 12, 2017 -- pigroxalot's answer was removed somehow, not sure how that happened. It's too bad... that simple link was very informative.
The git log has this entry, from December 2006 (about 9 months after the initial checkin):
Added abstract BaseMatcher class that all Matchers should extend. This allows for future API compatability [sic] as the Matcher interface evolves.
I haven't tried to figure out the details. But maintaining compatibility and continuity as a system evolves is a difficult problem. It does mean that sometimes you end up with a design that you would never, ever, ever have created if you had designed the whole thing from scratch.
But if the intention is for every class that implements Matcher to also extend BaseMatcher, why use an interface at all?
It's not exactly the intent. Abstract base classes and interfaces provide entirely different 'contracts' from an OOP perspective.
An interface is a communication contract. An interface is implemented by a class to signify to the world that it adheres to certain communication standards, and will give a specific type of result in response to a specific call with specific parameters.
An abstract base class is an implementation contract. An abstract base class is inherited by a class to reuse a partial implementation, while whatever the base class requires is left for the implementer to provide.
In this case, both overlap, but this is merely a matter of convenience - the interface is what you need to implement, and the abstract class is there to make implementing the interface easier. There is no requirement whatsoever to use that base class to be able to offer the interface; it is just there to reduce the work of doing so. You are free to extend the base class for your own ends without caring about the interface contract, or to implement a custom class that implements the same interface.
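For example, a typical custom matcher takes the convenient route and extends BaseMatcher, while something like a dynamic proxy (see further down) has to target the Matcher interface directly. A rough sketch of the former (the IsEmptyString matcher itself is just an illustration, not part of Hamcrest):

import org.hamcrest.BaseMatcher;
import org.hamcrest.Description;

public class IsEmptyString extends BaseMatcher<String> {

    @Override
    public boolean matches(Object item) {
        return item instanceof String && ((String) item).isEmpty();
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("an empty string");
    }
}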
The given practice is actually rather common in old-school COM/OLE code, and other frameworks facilitating inter-process communications (IPC), where it becomes fundamental to separate implementation from interface - which is exactly what is done here.
I think what happened is that initially a Matcher API was created in the form of an interface.
Then while implementing the interface in various ways a common code base was discovered which was then refactored out into the BaseMatcher class.
So my guess is that the Matcher interface was retained as it is part of the initial API and the descriptive method was then added as a reminder.
Having searched through the code, I found that the interface could easily be done away with, as it is ONLY implemented by BaseMatcher and in 2 test units which could easily be changed to use BaseMatcher.
So to answer your question - in this particular case there is no advantage to doing it this way, besides not breaking other people's implementations of Matcher.
As to bad practice? In my opinion it is clear and effective - so no, I don't think so, just a little odd :-)
Hamcrest provides matching, and matching only. It is a tiny niche market but they appear to be doing it well. Implementations of this Matcher interface are littered across a couple of unit testing libraries - take for example Mockito's ArgumentMatcher - and across a silly great number of tiny anonymous copy-paste implementations in unit tests.
They want to be able to extend Matcher with a new method without breaking all of those existing implementing classes. They would be hell to upgrade. Just imagine suddenly having all your unit test classes showing angry red compile errors. The anger and annoyance would kill Hamcrest's niche market in one quick swoop. See http://code.google.com/p/hamcrest/issues/detail?id=83 for a small taste of that. Also, a breaking change in Hamcrest would divide all versions of libraries that use Hamcrest into before and after the change and make them mutually exclusive. Again, a hellish scenario. So, to keep some freedom, they need Matcher to be an abstract base class.
But they are also in the mocking business, and interfaces are way easier to mock than base classes. When the Mockito folks unit test Mockito, they should be able to mock the matcher. So they also need that abstract base class to have a Matcher interface.
I think they have seriously considered the options and found this to be the least bad alternative.
There is an interesting discussion about it here. To quote nat_pryce:
Hi. I wrote the original version of Hamcrest, although Joe Walnes
added this weird method to the base class.
The reason is because of a peculiarity of the Java language. As a
commenter below said, defining Matcher as a base class would make it
easier to extend the library without breaking clients. Adding a method
to an interface stops any implementing classes in client code from
compiling, but new concrete methods can be added to an abstract base
class without breaking subclasses.
However, there are features of Java that only work with interfaces, in
particular java.lang.reflect.Proxy.
Therefore, we defined the Matcher interface so that people could write
dynamic implementations of Matcher. And we provided the base class for
people to extend in their own code so that their code would not break
as we added more methods to the interface.
We have since added the describeMismatch method to the Matcher
interface and client code inherited a default implementation without
breaking. We also provided additional base classes that make it easier
to implement describeMismatch without duplicating logic.
So, this is an example of why you can't blindly follow some generic
"best practice" when it comes to design. You have to understand the
tools you're using and make engineering trade-offs within that
context.
EDIT: separating the interface from the base class also helps one cope
with the fragile base class problem:
If you add methods to an interface that is implemented by an abstract
base class, you may end up with duplicated logic either in the base
class or in subclasses when they are changed to implement the new
method. You cannot change the base class to remove that duplicated
logic if doing so changes the API provided to subclasses, because that
will break all subclasses -- not a big problem if the interface and
implementations are all in the same codebase but bad news if you're a
library author.
If the interface is separate from the abstract base class -- that is,
if you distinguish between users of the type and implementers of the
type -- when you add methods to the interface you can add a default
implementation to the base class that will not break existing
subclasses and introduce a new base class that provides a better
partial implementation for new subclasses. When someone comes to
change existing subclasses to implement the method in a better way,
they can choose to use the new base class to reduce duplicated logic
if it makes sense to do so.
If the interface and base class are the same type (as some have
suggested in this thread), and you then want to introduce multiple
base classes in this way, you're stuck. You can't introduce a new
supertype to act as the interface, because that will break client
code. You can't move the partial implementation down the type
hierarchy into a new abstract base class, because that will break
existing subclasses.
This applies as much to traits as Java-style interfaces and classes or
C++ multiple inheritance.
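To make the java.lang.reflect.Proxy point above concrete, here is a minimal toy sketch (not anything shipped with Hamcrest): Proxy.newProxyInstance only accepts interfaces, so this would be impossible if Matcher were a class.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

import org.hamcrest.Matcher;

public class DynamicMatcherDemo {

    @SuppressWarnings("unchecked")
    public static Matcher<Object> matchEverything() {
        InvocationHandler handler = (proxy, method, args) -> {
            // Toy behaviour: claim that everything matches, ignore descriptions.
            if (method.getName().equals("matches")) {
                return true;
            }
            return method.getReturnType() == boolean.class ? false : null;
        };
        // Only interfaces can be listed here -- the reason Matcher stays an interface.
        return (Matcher<Object>) Proxy.newProxyInstance(
                Matcher.class.getClassLoader(),
                new Class<?>[] { Matcher.class },
                handler);
    }
}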
Java 8 now allows new methods to be added to an interface if they contain default implementations.
interface Match<T> {
    default void newMethod() { /* impl... */ }
}
This is a great tool that gives us a lot of freedom in interface design and evolution.
However, what if you really really want to add an abstract method that has no default implementation?
I think you should just go ahead and add the method. It'll break some existing code, and that code will have to be fixed. Not really a big deal. It probably beats other workarounds that preserve binary compatibility at the cost of screwing up the whole design.
But if the intention is for every class that implements Matcher to
also extend BaseMatcher, why use an interface at all? Why not just
make Matcher an abstract class in the first place and have all other
matchers extend it?
By separating the interface from the implementation (an abstract class is still an implementation) you comply with the Dependency Inversion Principle. Do not confuse this with dependency injection; the two have nothing in common. You might notice that in Hamcrest the interface is kept in the hamcrest-api package, while the abstract class is in hamcrest-core. This provides low coupling, because implementations depend only on interfaces, not on other implementations. A good book on this topic is Interface Oriented Design: With Patterns.
Is there some advantage to doing it the way Hamcrest has done it? Or
is this a great example of bad practice?
The solution in this example looks ugly. I think a comment is enough; making such stub methods is redundant. I wouldn't follow this approach.
Consider the following class:
public class Validator {

    boolean startValiadation(UserBean user) {
        // This method is visible inside this package only
        return validateUser(user);
    }

    private static boolean validateUser(UserBean user) {
        // This method is visible inside this class only
        boolean result = false;
        // validations here
        return result;
    }
}
Due to the security requirements of the above methods, I wrote the code this way. Now I want to write test cases using JUnit. But generally a unit test is intended to exercise the public interface of a class or unit. I could still use reflection to do what I am after here, but I want to know whether there is any other way to achieve my goal.
But generally a unit test is intended to exercise the public interface of a class or unit.
Well, I don't get too dogmatic about that. I find that often you can get much better testing depth if you're willing to be "white box" about it, and test non-public members. There's a sliding scale though - testing private members directly is relatively ugly, but you can test package private methods easily, just by making sure your test is in the same package as the class.
(This approach also encourages package private classes - I find that if you're rigidly sticking to testing just the public API, you often end up getting into the habit of making all classes public and many methods public when actually they should be package private.)
In this case, I would suggest you test via the startValiadation method. Currently that simply returns whatever validateUser returns - I would assume that in the real code it does something useful with the result, and either way the outcome is visible from the test. So you can just call startValiadation with various different user objects and assert which ones should be valid.
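A minimal sketch of that, assuming the test class sits in the same package as Validator (the UserBean fixtures and the expected outcomes are assumptions about the real validation logic):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Lives in the same package as Validator (e.g. under src/test/java),
// so the package-private startValiadation method is visible.
public class ValidatorTest {

    private final Validator validator = new Validator();

    @Test
    public void acceptsAValidUser() {
        assertTrue(validator.startValiadation(aValidUser()));
    }

    @Test
    public void rejectsAnInvalidUser() {
        assertFalse(validator.startValiadation(anInvalidUser()));
    }

    // Hypothetical fixtures -- build whatever makes a UserBean pass or fail validation.
    private UserBean aValidUser() { return new UserBean(); }
    private UserBean anInvalidUser() { return new UserBean(); }
}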
You don't need reflection. Just put your test class in the same package as this class. It doesn't need to be in the same folder or the same project to do that.
No, you have only three choices to test a private method:
1. If you are in control of the code, change the access specifier to public just to test the method.
2. Otherwise, use reflection (a sketch follows this list).
3. This may also be of interest: use a public method to test your private method.
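A minimal sketch of option 2, the reflection route, against the Validator class from the question (purely illustrative; the same-package approach above is usually simpler):

import static org.junit.Assert.assertFalse;

import java.lang.reflect.Method;

import org.junit.Test;

public class ValidatorReflectionTest {

    @Test
    public void invokesPrivateValidateUser() throws Exception {
        Method validate = Validator.class.getDeclaredMethod("validateUser", UserBean.class);
        validate.setAccessible(true);                   // bypass the private modifier

        // validateUser is static, so the target instance is null;
        // the stub shown in the question always returns false.
        boolean result = (Boolean) validate.invoke(null, new UserBean());
        assertFalse(result);
    }
}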
Don't test this class in isolation. A unit test, at least in the spirit of TDD as envisioned by Kent Beck, is not a test for a single class or method, but is simply a test that cannot have side effects on other tests.
This Validator class is used in other classes within the same package. First write a failing test using the public interface of those classes, then make it pass by implementing the validation. No reflection needed.
If you would test this class in isolation, you would probably mock this class in the other classes and verify that startValiadation() is really called. This has the disadvantage of coupling your test code to your implementation code. I would say: don't do that.
I recently wrote a post about this, at the bottom there's a link to a presentation by Ian Cooper that goes deeper into this.
I would like to know if I can mock a superclass's constructor call and its super() calls.
For example, I have the following classes:
class A
{
    A(..)
    {
        super(..);
    }
}

class B extends A
{
    B(C c)
    {
        super(c);
    }
}
So, I am planning to unit test some methods in class B, but creating an instance calls the superclass constructors, which makes it tough to write unit tests. How can I mock all the superclass constructor calls? I would also like to mock a few methods in class A so that they return the values I need.
Thanks!!
You could use the PowerMock library. It is a real lifesaver when you need to accomplish things like this.
https://github.com/powermock/powermock/wiki/Suppress-Unwanted-Behavior
Mocking a constructor is a very bad idea. Doing so is circumventing behavior that will happen in production. This is why doing work in the constructor, such as starting threads and invoking external dependencies, is a design flaw.
Can you honestly say that the work performed in the constructor has no effect on the behavior you're trying to test? If the answer is no, you run the risk of writing a test that will pass in a test environment, but fail in production. If the answer is yes, that is a plain case for moving that "work" outside the constructor. Another alternative is to move the behavior you're trying to test to another class (maybe its own).
This is even more true if you're using a DI framework like Guice (which I assume because you tagged it that way).
The short answer to your question is "not exactly." You can't 'mock' a constructor, let alone a super. Also, mocking super.anyMethod is difficult or impossible with the mocking frameworks I'm familiar with. PowerMock does allow you to suppress super constructors and problematic methods, which isn't quite the same as mocking them, but could help.
When B extends A, it is of course completely coupled with A. That's not a problem per se, but it can become one, and it looks like it has here. Instead of having B extend A, try having B contain an A (and perhaps implement the same interface if necessary). Then you can inject a mock A and delegate all of the calls that you want. That would be much easier to unit test, no?
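A rough sketch of that refactoring with Mockito (the compute method on A and the arithmetic are made up purely to show the delegation):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class BTest {

    // B now wraps an A instead of extending it; compute() is a hypothetical method on A.
    static class B {
        private final A delegate;

        B(A delegate) {
            this.delegate = delegate;
        }

        int doSomething(C c) {
            return delegate.compute(c) + 1;
        }
    }

    @Test
    public void addsOneToWhatAComputes() {
        // Mockito builds the mock without running A's constructors at all.
        A mockA = mock(A.class);
        C c = mock(C.class);
        when(mockA.compute(c)).thenReturn(41);

        assertEquals(42, new B(mockA).doSomething(c));
    }
}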
It's one of the benefits of test driven development that you discover these things in your design during testing.
For a unit test, I need to mock several dependencies. One of the dependencies is a class which implements an interface:
public class DataAccessImpl implements DataAccess {
...
}
I need to set up a mock object of this class which returns some specified values when provided with some specified parameters.
Now, what I'm not sure of is whether it's better to mock the interface or the class, i.e.
DataAccess client = mock(DataAccess.class);
vs.
DataAccess client = mock(DataAccessImpl.class);
Does it make any difference in regard to testing? What would be the preferred approach?
It may not make much difference in your case, but the preferred approach is to mock the interface. Normally, if you follow TDD (Test-Driven Development), you write your unit tests even before you write your implementation classes. Thus, even if you did not have the concrete class DataAccessImpl, you could still write unit tests using your interface DataAccess.
Moreover, mocking frameworks have limitations when mocking classes, and some frameworks only mock interfaces by default.
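For example, with Mockito the test can stub the DataAccess interface before any DataAccessImpl exists (findById, ReportService and the return values here are hypothetical):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ReportServiceTest {

    @Test
    public void buildsReportFromDataAccess() {
        // Stub the interface -- no concrete DataAccessImpl is needed yet.
        DataAccess client = mock(DataAccess.class);
        when(client.findById(42)).thenReturn("some record");     // hypothetical method

        // Hypothetical class under test that depends only on the interface.
        ReportService service = new ReportService(client);
        assertEquals("Report: some record", service.reportFor(42));
    }
}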
In most cases there is technically no difference and you may mock a class just as well as an interface. Conceptually it is better to use interfaces because of the better abstraction.
It depends. If your code depends on the class and not on the interface you must mock the class to write a valid unit test.
You should mock the interface, since it will help ensure you are adhering to the Liskov Substitution Principle (https://stackoverflow.com/a/56904/3571100).
If you only use it through the interface and it's not a partial mock, there is no difference other than your inner feeling. Mocking the class will also mock unused public methods if the class has them, but that is not a big deal to consider.