Using JUnit to build base unit tests with abstract validation - Java

Assume that I am building a monthly subscription charging feature for a mobile app. There are multiple ways to charge the bill: Apple Pay, Google Wallet, PayPal, or Visa/Mastercard, depending on the platform. Each provider has its own implementation and its own JUnit tests (the codebase is Java).
There are a few basic cases that every implementation has to validate. So the plan is to write base tests that call an abstract validate method.
Here is my approach,
public abstract class BaseBillingTest {

    public abstract BillCharger getBillCharger();
    public abstract void validateTest1(Bill input, Bill output);

    @Test
    public void tests_case_1() {
        Bill input = new Bill(/* some value */);
        Bill output = getBillCharger().charge(input);
        validateTest1(input, output);
    }
}
Any derived test class will implement the abstract methods, so it has the responsibility of implementing the validate methods. Derived test classes need not know what happens in the base test; they just validate the output against the input.
Any suggestions for approaching this in a more elegant fashion? Are there any design patterns I can apply in this scenario?

Your use of inheritance is, in my opinion, not the optimal way to handle the two abstract methods. One is constructional and the other is effectively static - you validate one Bill against another.
In both cases inheritance is not the right relationship. For getBillCharger you can successfully use the Factory pattern, or the more modern Test Data Builder or Object Mother patterns.
For the second method you can just use a helper class.
If you need to invoke the same construction logic several times in a test class, you can use @Before.
One important aspect: if you place tests_case_1 in a superclass, you hide part of your logic and your tests become less obvious. I prefer explicit, visible test cases, so I would avoid this kind of setup; a sketch of the composition-based alternative follows.
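A minimal sketch of that composition-based setup. The Bill constructor, getAmount/isCharged accessors, and ApplePayCharger are assumptions for illustration; the real types will differ:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.math.BigDecimal;
import org.junit.Test;

// Object Mother: one well-named factory method per canonical fixture.
final class BillMother {
    static Bill monthlySubscriptionBill() {
        return new Bill(new BigDecimal("9.99")); // assumed constructor
    }
}

// Plain helper class instead of an abstract validate method.
final class BillAssertions {
    static void assertChargedCorrectly(Bill input, Bill output) {
        assertEquals(input.getAmount(), output.getAmount()); // assumed accessors
        assertTrue(output.isCharged());
    }
}

// Each provider test stays self-contained and explicit:
public class ApplePayBillingTest {
    @Test
    public void chargesMonthlySubscription() {
        Bill input = BillMother.monthlySubscriptionBill();
        Bill output = new ApplePayCharger().charge(input);
        BillAssertions.assertChargedCorrectly(input, output);
    }
}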

The original approach does, in fact, apply a design pattern: it is called Template Method.
OTOH, the question has a slight smell of hunting for design patterns as a goal in its own right. Why bother, if you already have a solution at hand that fulfills the functional (here: testing the right thing) as well as the non-functional (here: clearly separating common parts from specifics) requirements? The one reason I would accept is communication, as in: being able to give a name to the design so that co-workers quickly understand my intentions.

Good object oriented design example in Java

I have a Java class called TestExecutor which is responsible for starting a test. Starting the test involves a number of stages:
- Update test repository
- Locate the test script
- Create result empty directory
- Execute command
- Parse output
- Update database
For each of these stages I have created private methods in the TestExecutor class which perform each of the actions above, all surrounded in a try-catch block. I'm aware that this is not good design as my class does too much and is also a pain to unit test due to a large amount of functionality being hidden in private methods.
I'd like to hear your suggestions for refactoring this class as I'm not sure how to get away from something similar to the above structure. Code example below:
public void start() throws TestExecuteException {
    try {
        updateRepository();
        locateScript();
        createResultDirectory();
        executeCommand();
        parseOutput();
        updateDatabase();
    } catch (Exception e) {
        // catch a, b, c ... and wrap appropriately
    }
}

private void updateRepository() {
    // Code here
}

// And repeat for the other stages
I would do it this way. First, enforce the contract that each test step must honor:
interface TestCommand {
    void run();
}
Now make your test commands separate classes, and try to keep these command classes generic so that you can reuse them for similar kinds of commands. In the class where you want to run a test, configure the test steps as follows:
// in your test class do this:
List<TestCommand> testCommands = new ArrayList<>();
testCommands.add(new UpdateRepoCommand());
testCommands.add(new LocateScriptCommand());
// and so on...
Now, execute all your steps in order:
public void start(List<TestCommand> testCommands) throws TestExecuteException {
    try {
        for (TestCommand command : testCommands) {
            command.run();
        }
    } catch (Exception e) {
        // deal with e
    }
}
Moreover, as CKing described above, follow the SOLID principles inside those test steps: inject dependencies and write unit tests for them separately, as in the sketch below.
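A sketch of one such command with its collaborator injected. RepositoryClient and pullLatest are hypothetical names; the test uses Mockito's mock/verify:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class UpdateRepoCommand implements TestCommand {

    private final RepositoryClient client; // hypothetical collaborator

    public UpdateRepoCommand(RepositoryClient client) {
        this.client = client;
    }

    @Override
    public void run() {
        client.pullLatest(); // the actual repository-update logic lives behind this call
    }
}

// The step can now be unit-tested without running the whole pipeline:
public class UpdateRepoCommandTest {
    @Test
    public void pullsTheLatestRevision() {
        RepositoryClient client = mock(RepositoryClient.class);
        new UpdateRepoCommand(client).run();
        verify(client).pullLatest();
    }
}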
Well, your class looks OK to me.
I'm aware that this is not good design as my class does too much
As long as the class has a single responsibility, the number of methods doesn't matter.
Check out the Template Method design pattern. Your class is doing something similar to what the abstract Game class below does:
public abstract class Game {

    abstract void initialize();
    abstract void startPlay();
    abstract void endPlay();

    // template method
    public final void play() {
        // initialize the game
        initialize();
        // start the game
        startPlay();
        // end the game
        endPlay();
    }
}
and is also a pain to unit test due to a large amount of functionality
being hidden in private methods
Read this & this about testing private methods. You can also use a framework like PowerMock, which helps you test otherwise untestable code.
I'd like to hear your suggestions for refactoring this class as I'm not sure how to get away from something similar to the above structure
You should definitely take a look at the SOLID principles as a starting point for writing clean, testable object-oriented code. I was introduced to them at the beginning of my career and they really help a lot.
That said, I would start by grouping related functionality into different classes. For example, updateRepository() and updateDatabase() can be moved to a separate class called DatabaseHelper. Similarly, locateScript() and createResultDirectory() seem to be disk-related operations and can be moved to a separate class called DirectoryHelper. I believe you get the gist of it. What you just achieved was Separation of Concerns.
Now that you have separate classes, you need to bring them together and put them to work. Your TestExecutor can continue to have the methods that you have listed. The only difference is that these methods will now delegate their work to the individual classes created above. For this, TestExecutor needs references to the DatabaseHelper and DirectoryHelper classes. You could just instantiate these classes directly inside TestExecutor, but that would mean TestExecutor is tightly coupled to an implementation. What you can do instead is ask code outside TestExecutor to supply the DatabaseHelper and DirectoryHelper to use. This is known as Dependency Inversion, achieved through Dependency Injection. The advantage of this approach is that you can now pass any subclass of DatabaseHelper and DirectoryHelper to TestExecutor without it having to know the details of the implementation. This facilitates unit testing of TestExecutor by mocking these dependencies instead of passing actual instances.
I will leave the rest of the SOLID principles for you to explore, implement and appreciate.
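A minimal sketch of that wiring; the helper classes and their method names are the ones assumed above, not an existing API:

public class TestExecutor {

    private final DatabaseHelper databaseHelper;
    private final DirectoryHelper directoryHelper;

    // Constructor injection: callers (and tests) decide which implementations to supply.
    public TestExecutor(DatabaseHelper databaseHelper, DirectoryHelper directoryHelper) {
        this.databaseHelper = databaseHelper;
        this.directoryHelper = directoryHelper;
    }

    public void start() throws TestExecuteException {
        databaseHelper.updateRepository();
        directoryHelper.locateScript();
        directoryHelper.createResultDirectory();
        // ... execute the command and parse the output, then:
        databaseHelper.updateDatabase();
    }
}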

Why insist all implementations of an interface extend a base class?

I was just looking at the Java Hamcrest code on GitHub, and noticed they employed a strategy that seemed unintuitive and awkward, but it got me wondering if I'm missing something.
I noticed in the Hamcrest API that there is an interface Matcher and an abstract class BaseMatcher. The Matcher interface declares this method, with this javadoc:
/**
 * This method simply acts a friendly reminder not to implement Matcher directly and
 * instead extend BaseMatcher. It's easy to ignore JavaDoc, but a bit harder to ignore
 * compile errors.
 *
 * @see Matcher for reasons why.
 * @see BaseMatcher
 * @deprecated to make
 */
@Deprecated
void _dont_implement_Matcher___instead_extend_BaseMatcher_();
Then in BaseMatcher, this method is implemented as follows:
/**
 * @see Matcher#_dont_implement_Matcher___instead_extend_BaseMatcher_()
 */
@Override
@Deprecated
public final void _dont_implement_Matcher___instead_extend_BaseMatcher_() {
    // See Matcher interface for an explanation of this method.
}
Admittedly, this is both effective and cute (and incredibly awkward). But if the intention is for every class that implements Matcher to also extend BaseMatcher, why use an interface at all? Why not just make Matcher an abstract class in the first place and have all other matchers extend it? Is there some advantage to doing it the way Hamcrest has done it? Or is this a great example of bad practice?
EDIT
Some good answers, but in search of more detail I'm offering a bounty. I think that the issue of backwards / binary compatibility is the best answer. However, I'd like to see the issue of compatibility elaborated on more, ideally with some code examples (preferably in Java). Also, is there a nuance between "backwards" compatibility and "binary" compatibility?
FURTHER EDIT
January 7, 2014 -- pigroxalot provided an answer below, linking to
this comment on Reddit by the authors of Hamcrest. I encourage everyone to read it, and if you find it informative, upvote pigroxalot's answer.
EVEN FURTHER EDIT
December 12, 2017 -- pigroxalot's answer was removed somehow, not sure how that happened. It's too bad... that simple link was very informative.
The git log has this entry, from December 2006 (about 9 months after the initial checkin):
Added abstract BaseMatcher class that all Matchers should extend. This allows for future API compatability [sic] as the Matcher interface evolves.
I haven't tried to figure out the details. But maintaining compatibility and continuity as a system evolves is a difficult problem. It does mean that sometimes you end up with a design that you would never, ever, ever have created if you had designed the whole thing from scratch.
But if the intention is for every class that implements Matcher to also extend BaseMatcher, why use an interface at all?
It's not exactly the intent. Abstract base classes and interfaces provide entirely different 'contracts' from an OOP perspective.
An interface is a communication contract. An interface is implemented by a class to signify to the world that it adheres to certain communication standards, and will give a specific type of result in response to a specific call with specific parameters.
An abstract base class is an implementation contract. An abstract base class is inherited by a class that supplies the functionality the base class requires but leaves to the implementer to provide.
In this case the two overlap, but that is merely a matter of convenience: the interface is what you need to implement, and the abstract class is there to make implementing the interface easier. There is no requirement whatsoever to use that base class in order to offer the interface; it is just there to make doing so less work. You are in no way limited: you can extend the base class for your own ends without caring about the interface contract, or implement a custom class that implements the same interface.
The given practice is actually rather common in old-school COM/OLE code, and other frameworks facilitating inter-process communications (IPC), where it becomes fundamental to separate implementation from interface - which is exactly what is done here.
I think what happened is that initially a Matcher API was created in the form of an interface.
Then while implementing the interface in various ways a common code base was discovered which was then refactored out into the BaseMatcher class.
So my guess is that the Matcher interface was retained as it is part of the initial API and the descriptive method was then added as a reminder.
Having searched through the code, I found that the interface could easily be done away with, as it is ONLY implemented by BaseMatcher and in two unit tests, which could easily be changed to use BaseMatcher.
So to answer your question - in this particular case there is no advantage to doing it this way, besides not breaking other people's implementations of Matcher.
As to bad practice? In my opinion it is clear and effective - so no, I don't think so, just a little odd :-)
Hamcrest provides matching, and matching only. It is a tiny niche market, but they appear to be serving it well. Implementations of the Matcher interface are littered across a couple of unit-testing libraries - take, for example, Mockito's ArgumentMatcher - and across countless tiny anonymous copy-paste implementations in unit tests.
They want to be able to extend Matcher with a new method without breaking all of those existing implementing classes, which would be hell to upgrade. Just imagine suddenly having all your unit test classes showing angry red compile errors. The anger and annoyance would kill Hamcrest's niche market in one quick swoop. See http://code.google.com/p/hamcrest/issues/detail?id=83 for a small taste of that. Also, a breaking change in Hamcrest would divide all versions of libraries that use Hamcrest into before and after the change and make them mutually exclusive. Again, a hellish scenario. So, to keep some freedom, they need Matcher to be an abstract base class.
But they are also in the mocking business, and interfaces are way easier to mock than base classes. When the Mockito folks unit test Mockito, they should be able to mock the matcher. So they also need that abstract base class to have a Matcher interface.
I think they have seriously considered the options and found this to be the least bad alternative.
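The mocking point is easy to see in code. A tiny Mockito sketch; Matcher's matches(Object) signature is the real one, the rest is illustrative:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Mocking frameworks work most naturally against interfaces:
Matcher<String> matcher = mock(Matcher.class); // raw-type mock, unchecked warning in real code
when(matcher.matches("expected")).thenReturn(true);
// matcher.matches("expected") now returns true; any other argument
// falls back to Mockito's default of false.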
There is an interesting discussion about it here. To quote nat_pryce:
Hi. I wrote the original version of Hamcrest, although Joe Walnes
added this weird method to the base class.
The reason is because of a peculiarity of the Java language. As a
commenter below said, defining Matcher as a base class would make it
easier to extend the library without breaking clients. Adding a method
to an interface stops any implementing classes in client code from
compiling, but new concrete methods can be added to an abstract base
class without breaking subclasses.
However, there are features of Java that only work with interfaces, in
particular java.lang.reflect.Proxy.
Therefore, we defined the Matcher interface so that people could write
dynamic implementations of Matcher. And we provided the base class for
people to extend in their own code so that their code would not break
as we added more methods to the interface.
We have since added the describeMismatch method to the Matcher
interface and client code inherited a default implementation without
breaking. We also provided additional base classes that make it easier
to implement describeMismatch without duplicating logic.
So, this is an example of why you can't blindly follow some generic
"best practice" when it comes to design. You have to understand the
tools you're using and make engineering trade-offs within that
context.
EDIT: separating the interface from the base class also helps one cope
with the fragile base class problem:
If you add methods to an interface that is implemented by an abstract
base class, you may end up with duplicated logic either in the base
class or in subclasses when they are changed to implement the new
method. You cannot change the base class to remove that duplicated
logic if doing so changes the API provided to subclasses, because that
will break all subclasses -- not a big problem if the interface and
implementations are all in the same codebase but bad news if you're a
library author.
If the interface is separate from the abstract base class -- that is,
if you distinguish between users of the type and implementers of the
type -- when you add methods to the interface you can add a default
implementation to the base class that will not break existing
subclasses and introduce a new base class that provides a better
partial implementation for new subclasses. When someone comes to
change existing subclasses to implement the method in a better way,
they can choose to use the new base class to reduce duplicated logic
if it makes sense to do so.
If the interface and base class are the same type (as some have
suggested in this thread), and you then want to introduce multiple
base classes in this way, you're stuck. You can't introduce a new
supertype to act as the interface, because that will break client
code. You can't move the partial implementation down the type
hierarchy into a new abstract base class, because that will break
existing subclasses.
This applies as much to traits as to Java-style interfaces and classes or
C++ multiple inheritance.
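To illustrate the java.lang.reflect.Proxy point from the quote, a minimal sketch that implements Matcher dynamically; only the matches method is handled here, and the unchecked cast is tolerated for brevity:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

InvocationHandler handler = (proxy, method, args) -> {
    if ("matches".equals(method.getName())) {
        return "hello".equals(args[0]);
    }
    return null; // other Matcher methods are not handled in this sketch
};

// Proxy.newProxyInstance can only implement interfaces, never extend a class -
// which is why Matcher had to remain an interface.
Matcher<String> dynamicMatcher = (Matcher<String>) Proxy.newProxyInstance(
        Matcher.class.getClassLoader(),
        new Class<?>[] { Matcher.class },
        handler);
// dynamicMatcher.matches("hello") -> true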
Java 8 now allows new methods to be added to an interface, provided they carry default implementations.
interface Match<T> {
    default void newMethod() { /* impl... */ }
}
This is a great tool that gives us a lot of freedom in interface design and evolution.
However, what if you really really want to add an abstract method that has no default implementation?
I think you should just go ahead and add the method. It'll break some existing code, and that code will have to be fixed. Not really a big deal. It probably beats other workarounds that preserve binary compatibility at the cost of screwing up the whole design.
But if the intention is for every class that implements Matcher to
also extend BaseMatcher, why use an interface at all? Why not just
make Matcher an abstract class in the first place and have all other
matchers extend it?
By separating interface and implementation (an abstract class is still an implementation) you comply with the Dependency Inversion Principle. Do not confuse this with dependency injection; the two have nothing in common. You might notice that in Hamcrest the interface is kept in the hamcrest-api package, while the abstract class is in hamcrest-core. This provides low coupling, because implementations depend only on interfaces and not on other implementations. A good book on this topic is Interface Oriented Design: With Patterns.
Is there some advantage to doing it the way Hamcrest has done it? Or
is this a great example of bad practice?
The solution in this example looks ugly. I think a comment would have been enough; making stub methods like this is redundant. I wouldn't follow this approach.

Is this right usage of factory pattern?

I've got a design problem; maybe you can help me decide.
My client object can ask for a set of objects of class Report. There is a defined set of available reports, and according to the client's permissions, different reports can be included in the returned set. Reports are created per request (every client gets brand-new report instances on each request).
Should I use kind of "factory" that will encapsulate reports creation like below:
public class ReportsFactory {

    private UserPermissionsChecker permissionsChecker;

    public Set<Report> createReports() {
        Set<Report> reports = new HashSet<Report>();
        if (permissionsChecker.hasAccessTo("report A")) {
            reports.add(createReportA());
        }
        if (permissionsChecker.hasAccessTo("report B")) {
            reports.add(createReportB());
        }
        if (permissionsChecker.hasAccessTo("report C")) {
            reports.add(createReportC());
        }
        return reports;
    }

    private Report createReportA() {...}
    private Report createReportB() {...}
    private Report createReportC() {...}
}
Is this right usage of so called simple Factory pattern? Or do you have other suggestions?
** EDIT **
Some comments below say it's not exactly the Factory pattern. If not, what should I call it?
I think the design is correct, but this is a wrong usage of the word "Factory". In the Factory pattern, XxxxFactory creates instances of Xxxx, initializes them if required, but applies no other kind of logic.
The design here seems correct to me, but your class would be better named ReportsService,
and maybe UserPermissionsChecker would be AuthorizationService.
Edit: To take into account criticism against the word "Service".
There is currently a quite widespread (I did not say universal) convention in the Java world, which consists in having:
- A purely descriptive business model implemented by classes emptied of all logic, called (maybe mistakenly) POJOs
- All business logic mainly related to an object Xxx implemented in a procedural style in the methods of a class called XxxService
I personally don't agree with this coding style and prefer object-oriented programming, but whether we like it or not, this convention exists in the Java EE world and has its coherence.
Judging by the coding style of the class submitted by the OP, I inferred that he followed this procedural approach. In that situation, it's better to follow the existing convention and call the class that serves as a container for the procedural Report-handling code a ReportService.
To me this looks a bit like the Builder pattern, in the sense that you have an object that you build up with data.
That is in contrast to a factory, which usually returns different concrete types of created objects, and where the data of those objects is usually set up in the constructors of the concrete classes the factory instantiates. For contrast, a sketch of such a factory follows.
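A minimal sketch of that classic factory shape; ReportType, SalesReport and BillingReport are illustrative names, not from the question:

public class ReportFactory {
    public Report create(ReportType type) {
        // The factory picks the concrete subtype; construction details live
        // in the concrete constructors.
        switch (type) {
            case SALES:   return new SalesReport();
            case BILLING: return new BillingReport();
            default:      throw new IllegalArgumentException("Unknown type: " + type);
        }
    }
}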

Reusing test implementations in JUnit 4?

I have an interface, e.g.:
public interface Thing {
    FrobResult frob(FrobInput input);
}
And several implementations of that interface (e.g. NormalThing, ImmutableThing, AsyncThing) that I am trying to test.
Many of my test methods are really about ensuring that the interface is implemented correctly, and thus are duplicated across each Thing implementation. In JUnit 3 a common solution to this would be to create a base class (extending TestCase) that is then subclassed by each implementation class. But is this the correct approach for JUnit 4?
Possible alternatives in (I believe) ascending order of preference:
1. Cut'n'paste the duplicated test methods. Not DRY at all, but I guess less worrisome in tests than it would be in production code.
2. Create an abstract class with @Test methods, and subclass it for each implementation test class. (Commonly seen with JUnit 3 tests -- is this still a good way to go in JUnit 4?)
3. Put the common test methods into a helper class, and invoke it on each implementation. (Composition instead of inheritance.)
What's the best practice for doing #3? Maybe a @RunWith(Parameterized.class) test that is parameterized with each implementation? Or is there a better way to accomplish this?
Yes, creating a base class that is then subclassed by each implementation's test class is the correct approach in JUnit 4, too.
I prefer the base test class for the interface to be abstract, i.e. your alternative 2, since I have had good experiences mimicking the inheritance hierarchy of the production code in the test code. So if you have interface I and implementations S1, S2 and S3, you make the abstract test class TestI and the test classes TestS1, TestS2 and TestS3; a sketch follows below.
Test cases should be speaking, i.e. tell a story. By choosing -- as always -- method names carefully and using clean behavioral subtyping only, inheritance does not obfuscate this.
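A minimal sketch using the question's Thing interface; the no-arg FrobInput constructor and the not-null assertion are assumptions:

import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public abstract class ThingTest {

    // Each implementation's test class supplies its own instance.
    protected abstract Thing createThing();

    @Test
    public void frobReturnsAResultForValidInput() {
        Thing thing = createThing();
        FrobResult result = thing.frob(new FrobInput()); // assumed constructor
        assertNotNull(result);
    }
}

public class NormalThingTest extends ThingTest {
    @Override
    protected Thing createThing() {
        return new NormalThing();
    }
}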
I use approach #2 for JUnit and TestNG test cases. It is the most convenient and the easiest to maintain, and it is also straightforward to pick up (since it is native to OOD to have a base class with common methods).
To me, unit test classes are no different from regular project classes... so I apply similar design considerations.

Unit test for method that calls multiple other methods using Mockito

Perhaps I have completely fallen short in my search, but I cannot locate any documentation or discussion about how to write a unit test for a Java class/method that in turn calls other non-private methods. Seemingly, Mockito takes the position that there is perhaps something wrong with the design (not truly OO) if a spy has to be used in order to test a method where mocking internal method calls is necessary. I'm not certain this is always true. For example, why could you not have a "wrapper"-style method that relies on other methods for primitive functionality but additionally provides error handling, logging, or different branches depending on the results of those methods, etc.?
So my question is two-fold:
Is it poorly designed and implemented code to have a method that internally calls other methods?
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
This might be a difficult request, but I would prefer that those who decide to answer not merely re-publish the Mockito verbiage and/or stance on spies, as I am already aware of that approach and ideology. Also, I've used PowerMockito as well. To me, the issue is that Mockito developed a framework where additional workarounds had to be created to support this need. So I suppose the question I want answered is: if spies are "bad", and PowerMockito were not available, how is one supposed to unit test a method that calls other non-private methods?
Is it poorly designed and implemented code to have a method that internally calls other methods?
Not really. But I'd say that, in this situation, the method that calls the others should be tested as if the others were not already tested separately.
That is, it protects you from situations where your public methods stop calling the other ones without you noticing it.
Yes, it makes for (sometimes) a lot of test code. I believe that this is the point: the pain in writing the tests is a good clue that you might want to consider extracting those sub-methods into a separate class.
If I can live with those tests, then I consider that the sub-methods are not to be extracted yet.
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
I'd do something like that:
public class Blah {
    public int publicMethod() {
        return innerMethod();
    }

    int innerMethod() {
        return 0;
    }
}

public class BlahTest {
    @Test
    public void blah() throws Exception {
        Blah spy = spy(new Blah());
        doReturn(1).when(spy).innerMethod();
        assertThat(spy.publicMethod()).isEqualTo(1);
    }
}
To me, this question relates strongly to the concept of cohesion.
My answer would be:
It is OK to have methods (public) that call other methods (private) in a class; in fact, very often that is what I think of as good code. There is a caveat, however: your class should still be strongly cohesive. To me that means the 'state' of your class should be well defined, and the methods (think behaviours) of your class should be involved in changing your class's state in predictable ways.
Is this the case with what you are trying to test? If not, you may be looking at one class when you should be looking at two (or more).
What are the state variables of the class you're trying to test?
You might find that after considering the answers to these types of questions, your code becomes much easier to test in the way you think it should be.
If you really need (or want) to avoid calling the lower-level methods again, you can stub them out instead of mocking them. For example, if method A calls B and C, you can do this:
MyClass classUnderTest = new MyClass() {
    @Override
    public boolean B() { return true; }
    @Override
    public int C() { return 0; }
};
doOtherCommonSetUp(classUnderTest);
String result = classUnderTest.A("whatever");
assertEquals("whatIWant", result);
I've used this quite a bit with legacy code where extensive refactoring could easily lead to the software version of shipwright's disease: isolate something difficult to test into a small method, and then stub that out.
But if the methods being called are fairly innocuous and don't require mocking, I just let them be called again without worrying that I am covering every path within them.
The real question should be:
What do I really want to test?
And actually the answer should be:
The behaviour of my object in response to outside changes
That is, depending on the ways one can interact with your object, you want to test each possible scenario in its own test. This way, you can make sure that your class reacts according to your expectations in the scenario you provide it with.
Is it poorly designed and implemented code to have a method that internally calls other methods?
Not really, and really not! These so-called private methods that are called from public members are helper methods, and it is totally correct to have helper methods!
Helper methods are there to break more complex behaviour into smaller pieces of reusable code within the class itself. Only the class knows how it should behave and return state accordingly through its public members.
It is not unusual to see a class with helper methods; normally they are needed to implement internal behaviour that the class shouldn't expose to the outside world.
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
In my humble opinion, you don't test those methods. They get tested when the public members are tested, through the state you expect of your object after a public member call. For example, using the MVP pattern, if you want to test user authentication, you should not test every private method, since private methods might as well call other public methods of an object the object under test depends on, and so forth. Instead, test your view:
// test fixture for the view
public class TestView {
    @Test
    public void test() {
        // arrange
        String expected = "Invalid login or password";
        String login = "SomeLogin";
        String password = "SomePassword";
        // act
        viewUnderTest.connect(login, password);
        String actual = viewUnderTest.getErrorMessage();
        // assert
        assertEquals(expected, actual);
    }
}
This test method describes the expected behaviour of your view once the, let's say, connectButton is clicked. If the ErrorMessage property doesn't contain the expected value, this means that either your view or presenter doesn't behave as expected. You might check whether the presenter subscribed to your view's Connect event, or if your presenter sets the right error message, etc.
The fact is that you never need to test what is going on in your private methods. You adjust and correct them while debugging, which exercises the behaviour of your internal methods at the same time, but no special test method should be written expressly for those helper methods.
