I have the following structure of my Java code:
public class MyClass {
// some class variables
...
private void process() {
MyObject obj;
...
obj = createHelper();
...
messageHelper(obj, "One of several possible strings");
...
messageHelper(obj, "Another call with a different string");
...
}
private MyObject createHelper() {
MyObject obj = new MyObject();
// some Setter calls
...
return obj;
}
private void messageHelper(MyObject obj, String message) {
...
}
}
I would like to test that, based on the properties of obj (which I would like to specify), messageHelper() receives the right string. In other words I need to control the result of one method and have access to the parameters of the other.
I'm still very shaky with all this Mock/Stub/Spy stuff.
It seems to me that I need to spy on MyClass, stub createHelper() to return a "manually" created object, and I am not sure what to use for intercepting the call parameters of messageHelper().
Also I noted that the wiki cautions against using Spies:
Think twice before using this feature. It might be better to change
the design of the code under specification.
So what would an appropriate Spocky way to accomplish the task?
Slightly Refactored Code: (5/5/14)
public class MyClass {
// some class variables
private IMessageService messageService = new MessageService();
...
private void process() {
MyObject obj;
...
obj = new MyObject(parameters ...);
...
if (someCondition) {
messageService.produceMessageOne(obj);
}
...
if (otherCondition) {
messageService.produceMessageTwo(obj);
}
...
}
}
public class MessageService implements IMessageService {
private static final String MSG_ONE = "...";
private static final String MSG_TWO = "...";
...
public void produceMessageOne(MyObject obj) {
produceMessage(obj, MSG_ONE);
...
}
public void produceMessageTwo(MyObject obj) {
produceMessage(obj, MSG_TWO);
}
private void produceMessage(MyObject obj, String message) {
...
}
}
I would greatly appreciate if someone suggests the way it should be tested with Spock.
The caution you're referring to is rightfully there. There's a very good correlation between testable code and good design (I recommend watching this lecture by Michael Feathers to understand why: http://www.youtube.com/watch?v=4cVZvoFGJTU).
Using spies tends to be a heads up for design issues since it usually arises from the impossibility of using regular mocks and stubs.
It's a little hard to tell from your example, since you're obviously using placeholder names, but it seems that the design of the MyClass class violates the single responsibility principle (http://en.wikipedia.org/wiki/Single_responsibility_principle), since it does processing, creation and messaging (3 responsibilities).
If you're willing to change your design so that the processing class (MyClass) does only processing, you'd provide another class that does the creation (MyObjectFactory) and yet another that does the messaging (MyObjectMessager), supplied either through a constructor, setter methods or dependency injection.
Using this new design, you can create an instance of the class you're testing (MyClass), and pass it mock objects of both the factory and messaging classes. Then you'll be able to verify whatever you want on both.
Take a look at this example (using Mockito):
public class MyClassTest {
@Test
public void testThatProcessingMessagesCorrectly() {
MyObject object = mock(MyObject.class);
MyObjectFactory factory = mock(MyObjectFactory.class);
when(factory.createMyObject()).thenReturn(object);
MyObjectMessager messager = mock(MyObjectMessager.class);
MyClass processor = new MyClass(factory, messager);
processor.process();
verify(factory).createMyObject();
verify(messager).message(EXPECTED_MESSAGE_1);
verify(messager).message(EXPECTED_MESSAGE_2);
...
verify(messager).message(EXPECTED_MESSAGE_N);
}
...
}
Here's a Spock example (untested, double check before using ...):
public class MyClassSpec extends Specification {
def "check that the right messages are produced with the expected object"() {
given:
def messageService = Mock(IMessageService)
def testedInstance = new MyClass()
testedInstance.setMessageService(messageService)
when:
testedInstance.process()
then:
1 * messageService.produceMessageOne(_)
1 * messageService.produceMessageTwo(_)
}
}
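If you also want to check which object the service receives - the original goal - Spock lets you replace the _ wildcard with a code constraint on the argument. A minimal sketch, assuming MyObject exposes a status property (a made-up name, adjust to your real fields):
then:
1 * messageService.produceMessageOne({ MyObject o -> o.status == "expected" })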
If you're a hammer, every problem is a nail
I'd like to claim an exception to the rule here and say that sometimes stubbing private methods - necessitating spies - can be both correct and useful.
@eitanfar is most likely accurate in his analysis of the function, and 95% of the time this is the case, but as with most things - I believe - not always.
This is for those of us who believe they have an exception but get the usual "code smell" argument.
My example is a complex argument validator. Consider the following:
class Foo {
def doThing(...args) {
doThing_complexValidateArgs(args)
// do things with args
}
def private doThing_complexValidateArgs(...args) {
// ... * 20 lines of non-logic-related code that throws exceptions
}
}
Placing the validator in its own class IMO separates the concern too much (a FooMethodArgumentValidator class?).
Refactoring out the validation arguably significantly improves readability of the doThing() function.
doThing_complexValidateArgs() should not be public
The doThing() function benefits from the readability of a simple validateArgs(...) call and maintains encapsulation.
All I need to be sure of now is that the function is called within the parent one. How can I do that? Well - correct me if I'm wrong - but in order to do that, I need a Spy().
class FooSpec extends Specification {
class Foo {
def doThing(...args) {
doThing_controlTest(args)
doThing_complexValidateArgs(*args)
// do things with args
}
def doThing_controlTest(args) {
// this is a test
}
def private doThing_complexValidateArgs(...args) {
// ... * 20 lines of code
}
}
void "doThing should call doThing_complexValidateArgs" () {
def fooSpy = Spy(Foo)
when:
fooSpy.doThing(1, 2, 3)
then:
1 * fooSpy.doThing_controlTest([1,2,3]) // to prove to ya'll we got into the right method
1 * fooSpy.invokeMethod('doThing_complexValidateArgs', [1, 2, 3]) // probably due to groovy weirdness, this is how we test this call
}
}
Here is my real life example I used for a static private method:
@SuppressWarnings("GroovyAccessibility")
@ConfineMetaClassChanges(DateService) // stops a global GroovySpy from affecting other tests by resetting the metaclass once done.
void "isOverlapping calls validateAndNormaliseDateList() for both args" () {
List list1 = [new Date(1L), new Date(2L)]
List list2 = [new Date(2L), new Date(3L)]
GroovySpy(DateService, global: true) // GroovySpy allows for global replacement. see `org.spockframework.mock.IMockConfiguration#isGlobal()`
when:
DateService.isOverlapping(list1, list2)
then:
1 * DateService.isOverlapping_validateAndNormaliseDateList('first', list1) // groovy 2.x currently allows private method calls
1 * DateService.isOverlapping_validateAndNormaliseDateList('second', list2)
}
I have a lot of similar classes (actually different types of events with one parent class). There are about 30 classes already and the number will be growing. Every class has its own processing logic, but there are several fields that exist in every class. I want to be sure every event's flow takes care of the common fields. It becomes more complex as new event types and new flows are added. The best approach would be to create some dynamic test that checks that the common fields are processed. By 'dynamically' I mean the ability of the test to automatically discover new classes and put them into the test pack. We are using Spock, but it is not possible to dynamically generate the 'where' section of a test. I came up with quite a strange approach that does not work, but it illustrates my idea:
def "dynamic test"() {
given:
def classes = methodToGetListOfEventClass()
when:
for(Class clazz : classes) {
ParentEvent event = clazz.getDeclaredConstructor().newInstance() as ParentEvent
service.sendEvent(event)
}
}
then:
for(Class clazz : classes) {
ParentEvent event = clazz.getDeclaredConstructor().newInstance() as ParentEvent
1 * sendExternalEvent("someId", event.getClass().getName(), Collections.emptyMap())
//check common fields exists
}
}
}
}
So I just try to create an instance of every class, pass it into the event handler, and check that the created external event has all common fields set. It looks ugly and does not work. Are there any suggestions on how to implement such a dynamic test?
You can use dynamic data pipes. Here is a simple example, based on your pseudo code and the limited information you provided. Because you did not say if you use Spock 1.3 or 2.x, I made sure that the example works on 1.3, too.
Given a situation as follows (all Groovy code, but the classes under test can be Java ones, too):
interface Event {
void init()
void sendExternalEvent(String id, String className, Map options)
}
class Service {
void sendEvent(Event event) {
event.sendExternalEvent("123", event.class.name, [:])
}
}
abstract class BaseEvent implements Event {
private static final Random random = new Random()
private static final String alphabet = (('A'..'Z') + ('0'..'9')).join()
protected int id
protected String name
@Override
void init() {
id = 1 + random.nextInt(100)
name = (1..10).collect { alphabet[random.nextInt(alphabet.length())] }.join()
}
}
class FirstEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
String doFirst() { "first" }
}
class SecondEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
String doSecond() { "second" }
}
class ThirdEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
int doThird() { 3 }
}
You can implement your dynamic test for BaseEvent subclasses like this:
import spock.lang.Specification
import spock.lang.Unroll
class DynamicBaseClassTest extends Specification {
#Unroll("verify #className")
def "basic event class functionality"() {
given:
def service = new Service()
def event = Spy(baseEventClass.getConstructor().newInstance())
when:
event.init()
then:
// '.id' and '.name' should be enough, but on Spock 2.1 there is a problem
// when not explicitly using the '#' notation for direct field access.
event.#id > 0
event.#name.length() == 10
when:
service.sendEvent(event)
then:
1 * event.sendExternalEvent(_, event.class.name, [:])
where:
baseEventClass << getEventClasses()
className = baseEventClass.simpleName
}
static List<Class<? extends BaseEvent>> getEventClasses() {
[FirstEvent, SecondEvent, ThirdEvent]
}
}
Try it in the Groovy web console.
The notable things are:
where:
baseEventClass << getEventClasses()
The data pipe is declared to call a data provider method, just like in your example. What getEventClasses() does is totally up to you: return a fixed list, scan the classpath or whatever.
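If you want real discovery instead of a hard-coded list, one option is a classpath scanner. Here is a sketch using the org.reflections library (the library choice and the package name are my assumptions, not part of the answer above):
static List<Class<? extends BaseEvent>> getEventClasses() {
    // hypothetical package; collect all concrete BaseEvent subclasses on the classpath
    new org.reflections.Reflections("com.example.events")
            .getSubTypesOf(BaseEvent)
            .findAll { !java.lang.reflect.Modifier.isAbstract(it.modifiers) }
            .toList()
}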
def event = Spy(baseEventClass.getConstructor().newInstance())
The spy is necessary to have both the real behaviour for the class under test - you do not want to mock it, of course - and the ability to verify interactions on it later:
then:
1 * event.sendExternalEvent(_, event.class.name, [:])
BTW, if you are unfamiliar with @Unroll, it makes the spec show up as one named entry per iteration in an IDE or a test report, roughly like this:
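verify FirstEvent
verify SecondEvent
verify ThirdEvent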
Here's the scenario:
public class A {
public A() {}
void doSomething() {
// do something here...
}
}
Right now, the class is set up so that you can create multiple instances. But I also see a need where I might want to restrict the class to only one instance, i.e. a Singleton class.
The problem is I'm not sure how to go about the design of accomplishing both goals: multiple instances and one instance. It doesn't sound possible to do in just one class. I imagine I'll need to use a derived class, an abstract class, an interface, something else, or some combination.
Should I create class A as a base class and create a derived class which functions as the singleton class?
Of course, the first thing should always be to question the necessity of using singletons. But sometimes, they are simply a pragmatic way to solve certain problems.
If so, the first thing to understand is: there is no solution that can "enforce" your requirements and prevent misuse, but here is a "pattern" that helps a lot by turning "intentions" into "meaningful" code:
First, I have an interface that denotes the functionality:
interface WhateverService { void foo(); }
Then, I have some impl for that:
class WhateverServiceImpl implements WhateverService {
@Override
public void foo() { .... }
}
Now, if I need that thing to exist as a singleton, I do:
enum WhateverServiceProvider implements WhateverService {
INSTANCE;
private final WhateverService impl = new WhateverServiceImpl();
@Override
public void foo() { impl.foo(); }
}
and finally, some client code can do:
WhateverService service = WhateverServiceProvider.INSTANCE;
service.foo();
(but of course, you might not want to directly assign a service object, but you could use dependency injection here)
Such architectures give you:
A clear separation between the core functionality, its implementation and the singleton concept
Guaranteed singleton semantics (if there is one thing that Java enums are really good for ... then it is that: providing fool-proof singletons!)
Full "testability" (you see - when you just use the enum, without making it available as interface ... then you have a hard time mocking that object in client code - as you can't mock enums directly).
Update - regarding thread safety:
I am not sure what exactly you mean by "singleton concept".
But let's say this: when you use enums like that, it is guaranteed that exactly one INSTANCE object is instantiated - the Java language guarantees it. But if several threads turn to the enum and call foo() in parallel, you are still dealing with all the potential problems of such scenarios. So yes, enum "creation" is thread-safe; but what your code does afterwards is up to you - as is any locking or whatever else makes sense.
I think you should take a look at this question:
Can a constructor in Java be private?
The Builder pattern described there could be a somewhat interesting solution:
// This is the class that will be produced by the builder
public class NameOfClassBeingCreated {
// ...
// This is the builder object
public static class Builder {
// ...
// Each builder has at least one "setter" function for choosing the
// various different configuration options. These setters are used
// to choose each of the various pieces of configuration independently.
// It is pretty typical for these setter functions to return the builder
// object, itself, so that the invocations can be chained together as in:
//
// return NameOfClassBeingCreated
// .newBuilder()
// .setOption1(option1)
// .setOption3(option3)
// .build();
//
// Note that any subset (or none) of these setters may actually be invoked
// when code uses the builder to construct the object in question.
public Builder setOption1(Option1Type option1) {
// ...
return this;
}
public Builder setOption2(Option2Type option2) {
// ...
return this;
}
// ...
public Builder setOptionN(OptionNType optionN) {
// ...
return this;
}
// ...
// Every builder must have a method that builds the object.
public NameOfClassBeingCreated build() {
// ...
}
// The Builder is typically not constructible directly
// in order to force construction through "newBuilder".
// See the documentation of "newBuilder" for an explanation.
private Builder() {}
}
// Constructs an instance of the builder object. This could
// take parameters if a subset of the parameters are required.
// This method is used instead of using "new Builder()" to make
// the interface for using this less awkward in the presence
// of method chaining. E.g., doing "(new Foo.Builder()).build()"
// is a little more awkward than "Foo.newBuilder().build()".
public static Builder newBuilder() {
return new Builder();
}
// ...
// There is typically just one constructor for the class being
// constructed that is private so that it may only be invoked
// by the Builder's "build()" function. The use of the builder
// allows for the class's actual constructor to be simplified.
private NameOfClassBeingCreated(
Option1Type option1,
Option2Type option2,
// ...
OptionNType optionN) {
// ...
}
}
Link for reference:
https://www.michaelsafyan.com/tech/design/patterns/builder
I am not sure that this is what you are looking for, but you can use the Factory pattern. Create two factories: one will always return the same singleton instance, while the other will create a new A object each time.
Factory singletonFactory = new SingletonFactory();
Factory prototypeFactory = new PrototypeFactory();
A a = singletonFactory.createA();
A b = singletonFactory.createA();
System.out.println(a == b); // true
A c = prototypeFactory.createA();
A d = prototypeFactory.createA();
System.out.println(c == d); // false
class A {
A() {} // package-private, so the factories (in the same package) can instantiate it; a private constructor would only work if the factories were nested in A
void doSomething() { /* do something here... */}
}
interface Factory {
A createA();
}
class SingletonFactory implements Factory {
private final A singleton = new A();
public A createA() {
return singleton;
}
}
class PrototypeFactory implements Factory {
public A createA() {
return new A();
}
}
I would like to have a limited, fixed catalogue of instances of a certain complex interface. The standard multiton pattern has some nice features, such as lazy instantiation. However, it relies on a key such as a String, which seems quite error-prone and fragile.
I'd like a pattern that uses enums. They have lots of great features and are robust. I've tried to find a standard design pattern for this but have drawn a blank. So I've come up with my own, but I'm not terribly happy with it.
The pattern I'm using is as follows (the interface is highly simplified here to make it readable):
interface Complex {
void method();
}
enum ComplexItem implements Complex {
ITEM1 {
protected Complex makeInstance() { return new Complex() { ... }; }
},
ITEM2 {
protected Complex makeInstance() { return new Complex() { ... }; }
};
private Complex instance = null;
private Complex getInstance() {
if (instance == null) {
instance = makeInstance();
}
return instance;
}
protected abstract Complex makeInstance();
public void method() {
getInstance().method();
}
}
This pattern has some very nice features to it:
the enum implements the interface which makes its usage pretty natural: ComplexItem.ITEM1.method();
Lazy instantiation: if the construction is costly (my use case involves reading files), it only occurs if it's required.
Having said that, it seems horribly complex and 'hacky' for such a simple requirement, and it overrides enum methods in a way which I'm not sure the language designers intended.
It also has another significant disadvantage. In my use case I'd like the interface to extend Comparable. Unfortunately this then clashes with the enum implementation of Comparable and makes the code uncompilable.
One alternative I considered was having a standard enum and then a separate class that maps the enum to an implementation of the interface (using the standard multiton pattern). That works but the enum no longer implements the interface which seems to me to not be a natural reflection of the intention. It also separates the implementation of the interface from the enum items which seems to be poor encapsulation.
Another alternative is to have the enum constructor implement the interface (i.e. in the pattern above, remove the need for the 'makeInstance' method). While this works, it removes the advantage of only running the constructors if required. It also doesn't resolve the issue with extending Comparable.
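For reference, that eager variant would look roughly like this - every instance is created when the enum class initialises, whether it is needed or not (EagerComplexItem is just an illustrative name):
enum EagerComplexItem implements Complex {
    ITEM1(new Complex() { public void method() { /* ... */ } }),
    ITEM2(new Complex() { public void method() { /* ... */ } });

    private final Complex instance; // built eagerly at class initialisation

    EagerComplexItem(Complex instance) { this.instance = instance; }

    public void method() { instance.method(); }
}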
So my question is: can anyone think of a more elegant way to do this?
In response to comments, I'll try to specify the problem I'm trying to solve, first generically and then through an example.
There are a fixed set of objects that implement a given interface
The objects are stateless: they are used to encapsulate behaviour only
Only a subset of the objects will be used each time the code is executed (depending on user input)
Creating these objects is expensive: it should only be done once and only if required
The objects share a lot of behaviour
This could be implemented with separate singleton classes for each object, using separate classes or superclasses for the shared behaviour. This seems unnecessarily complex.
Now an example. A system calculates several different taxes in a set of regions, each of which has its own algorithm for calculating the taxes. The set of regions is expected to never change, but the regional algorithms will change regularly. The specific regional rates must be loaded at run time via a remote service, which is slow and expensive. Each time the system is invoked it will be given a different set of regions to calculate, so it should only load the rates of the regions requested.
So:
interface TaxCalculation {
float calculateSalesTax(SaleData data);
float calculateLandTax(LandData data);
....
}
enum TaxRegion implements TaxCalculation {
NORTH, NORTH_EAST, SOUTH, EAST, WEST, CENTRAL .... ;
private void loadRegionalDataFromRemoteServer() { .... }
}
Recommended background reading: Mixing-in an Enum
Seems fine. I would make initialization threadsafe like this:
enum ComplexItem implements Complex {
ITEM1 {
protected Complex makeInstance() {
return new Complex() { public void method() { }};
}
},
ITEM2 {
protected Complex makeInstance() {
return new Complex() { public void method() { }};
}
};
private volatile Complex instance;
private Complex getInstance() {
if (instance == null) {
createInstance();
}
return instance;
}
protected abstract Complex makeInstance();
protected synchronized void createInstance() {
if (instance == null) {
instance = makeInstance();
}
}
public void method() {
getInstance().method();
}
}
The modifier synchronized only appears on the createInstance() method, but it wraps the call to makeInstance() - conveying thread-safety without putting a bottleneck on calls to getInstance() and without the programmer having to remember to add synchronized to each makeInstance() implementation.
This works for me - it's thread-safe and generic. The enum must implement the Creator interface but that is easy - as demonstrated by the sample usage at the end.
This solution breaks the binding you have imposed where it is the enum that is the stored object. Here I only use the enum as a factory to create the object - in this way I can store any type of object and even have each enum create a different type of object (which was my aim).
This uses a common mechanism for thread-safety and lazy instantiation: a ConcurrentMap of FutureTasks.
There is a small overhead of holding on to the FutureTask for the lifetime of the program but that could be improved with a little tweaking.
/**
* A Multiton where the keys are an enum and each key can create its own value.
*
* The create method of the key enum is guaranteed to only be called once.
*
* Probably worth making your Multiton static to avoid duplication.
*
* @param <K> - The enum that is the key in the map and also does the creation.
*/
public class Multiton<K extends Enum<K> & Multiton.Creator> {
// The map to the future.
private final ConcurrentMap<K, Future<Object>> multitons = new ConcurrentHashMap<K, Future<Object>>();
// The enums must create
public interface Creator {
public abstract Object create();
}
// The getter.
public <V> V get(final K key, Class<V> type) {
// Has it run yet?
Future<Object> f = multitons.get(key);
if (f == null) {
// No! Make the task that runs it.
FutureTask<Object> ft = new FutureTask<Object>(
new Callable<Object>() {
public Object call() throws Exception {
// Only do the create when called to do so.
return key.create();
}
});
// Only put if not there.
f = multitons.putIfAbsent(key, ft);
if (f == null) {
// We replaced null so we successfully put. We were first!
f = ft;
// Initiate the task.
ft.run();
}
}
try {
/**
* If code gets here and hangs due to f.status = 0 (FutureTask.NEW)
* then you are trying to get from your Multiton in your creator.
*
* Cannot check for that without unnecessarily complex code.
*
* Perhaps could use get with timeout.
*/
// Cast here to force the right type.
return type.cast(f.get());
} catch (Exception ex) {
// Hide exceptions without discarding them.
throw new RuntimeException(ex);
}
}
enum E implements Creator {
A {
public String create() {
return "Face";
}
},
B {
public Integer create() {
return 0xFace;
}
},
C {
public Void create() {
return null;
}
};
}
public static void main(String args[]) {
try {
Multiton<E> m = new Multiton<E>();
String face1 = m.get(E.A, String.class);
Integer face2 = m.get(E.B, Integer.class);
System.out.println("Face1: " + face1 + " Face2: " + Integer.toHexString(face2));
} catch (Throwable t) {
t.printStackTrace(System.err);
}
}
}
In Java 8 it is even easier:
public class Multiton<K extends Enum<K> & Multiton.Creator> {
private final ConcurrentMap<K, Object> multitons = new ConcurrentHashMap<>();
// The enums must create
public interface Creator {
public abstract Object create();
}
// The getter.
public <V> V get(final K key, Class<V> type) {
return type.cast(multitons.computeIfAbsent(key, k -> k.create()));
}
}
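Usage is unchanged from the Java 7 version (reusing the enum E from above); ConcurrentHashMap.computeIfAbsent applies the mapping function at most once per key, so each create() still runs only on first access:
Multiton<E> m = new Multiton<>();
String face = m.get(E.A, String.class); // E.A.create() runs only the first time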
One thought about this pattern: the lazy instantiation isn't thread safe. This may or may not be okay, it depends on how you want to use it, but it's worth knowing. (Considering that enum initialisation in itself is thread-safe.)
Other than that, I can't see a simpler solution that guarantees full instance control, is intuitive and uses lazy instantiation.
I don't think it's an abuse of enum methods either, it doesn't differ by much from what Josh Bloch's Effective Java recommends for coding different strategies into enums.
I have the below use case where I get events containing JsonString1, and I have to do some processing/transformation to get to Object1 through Object4. As of now I only have one such case, and it's likely that in the future there might be more such hierarchies (at most 2-3).
I am unable to decide on what would be an elegant way to code this.
        JsonString1
             |
        JsonString2
        /         \
JsonString3     JsonString4
     |               |
  Object1         Object2
                     |
                  Object3
I could just have an abstract class for processing JsonStrings 1 to 4 and a concrete implementation for each type of event. Something like:
public abstract class AbstractEventProcessor {
public AbstractEventProcessor(String jsonString1) {
// Do processing to get JsonString2, JsonString3 and JsonString4
}
}
public class Event1Processor extends AbstractEventProcessor {
Event1Processor(String str) {
super(str);
}
Object1 getObject1() {
}
Object2 getObject2() {
}
Object3 getObject3() {
}
}
And similar implementations for more events as they come along.
Is there a better way to do this ?
Also, for now, two things are constant but might change in rare cases:
All events will have JsonString1 .. JsonString4, but the number of Objects at the end will vary. In the future this might change.
Although it's very unlikely (but not impossible), the format of the strings might change (say from JSON to XML).
Do I accommodate such changes as well by providing interfaces for the string transformations, or is this overkill?
Usually I am stuck at such places, trying to figure out the most elegant way to do something, and end up spending a lot of time. Is there any general advice for this? :)
Thanks
It's not very clear what exactly you want. However, even without that, your hierarchy smells. Usually, during my code reviews, whenever I see a too-fancy hierarchy like yours, there is something wrong in the design.
Try considering decorators to avoid the inheritance hell. That way you can create any combination you may need in the near and far future. Get some inspiration from the standard Java class java.io.Reader and its subclasses.
For your case it would mean something like this (at least how I understand your description):
public interface EventProcessor {
public BaseObject processJsonString(String jsonString);
}
public abstract class AbstractEventProcessor implements EventProcessor {
final private EventProcessor processor;
public AbstractEventProcessor(EventProcessor processor) {
this.processor = processor;
}
@Override
public BaseObject processJsonString(String jsonString) {
// delegate to the wrapped processor; a null processor terminates the chain
return processor == null ? null : processor.processJsonString(jsonString);
}
}
public class SpecialObject1 extends/implements BaseObject { ... }
public class SpecialObject2 extends/implements BaseObject { ... }
public class SpecialObject3 extends/implements BaseObject { ... }
// Each your future processor will look like this
public class Event1Processor extends AbstractEventProcessor implements EventProcessor {
public Event1Processor(EventProcessor processor) {
super(processor);
}
public SpecialObject1 processJsonString(String jsonString) {
final SpecialObject1 result = (SpecialObject1) super.processJsonString(jsonString);
// here you add this Event processor specific stuff
...
return result;
}
// Maybe more methods here
}
public class Client {
public void useEventProcessor() {
final EventProcessor processor1 = new Event1Processor(new Event2Processor(new Event3Processor(null)));
final SpecialObjectX object1 = processor1.processJsonString(jsonString);
final EventProcessor processor2 = new Event51Processor(new Event20Processor(new Event2Processor(null)));
final SpecialObjectY object2 = processor2.processJsonString(jsonString);
}
}
Possible Duplicate:
Intercept object on method invocation with Mockito
I have a class that can map from one format to another. Since this is legacy code I don't dare to rewrite it; it is basically a set of plugins, so if I change one I might have to change all the others. It wasn't developed with testing in mind.
So this is my problem.
interface Mapper {
void handle(ClassA classA);
void handle(ClassB classB);
}
public interface Publisher {
public void publish(MappedClass mappedClass);
}
class MyMapper implements Mapper {
private Publisher publisher;
public void setPublisher(final Publisher publisher) {
this.publisher = publisher;
}
public void handle(ClassA classA) {
final MappedClass mappedClass = // Map from ClassA to MappedClass
publisher.publish(mappedClass);
}
public void handle(ClassB classB) {
final MappedClass mappedClass = // Map from ClassB to MappedClass
publisher.publish(mappedClass);
}
}
Okay. So depending on which class was "handled", MappedClass will be published with different values, and it is those values I want to verify (test). The problem is that I will get a test where I first have to write code that checks that the publish method was called:
private boolean wasCalled;
@Test
public void testClassAMapped() {
wasCalled = false;
final MyMapper myMapper = new MyMapper();
myMapper.setPublisher(new Publisher() {
public void publish(final MappedClass mappedClass) {
wasCalled = true;
// Code for verifying the fields in mappedClass
}
});
final ClassA classA = // Create classA
myMapper.handle(classA);
assertTrue(wasCalled);
}
So first we create our mock Publisher, which sets wasCalled to true so we know the method was actually called (this example is simplified; there is actually a dispatcher in the code... legacy code, so I don't want to change it); second, I want to verify that MappedClass has the correct field values.
What I would like to know is if anyone knows a better way to test this. The wasCalled field and its check become more or less boilerplate code for many of my tests, but since I don't want to add that much clutter (own hacks, test base classes, etc.) I would like to know if there is a way to do this in Mockito or EasyMock.
Use a Mockito ArgumentCaptor:
@Test
public void test(){
Publisher publisher = Mockito.mock(Publisher.class);
myMapper.setPublisher(publisher);
ArgumentCaptor<MappedClass> captor = ArgumentCaptor.forClass(MappedClass.class);
....
myMapper.handle(...);
...
verify(publisher).publish(captor.capture());
MappedClass passedValue = captor.getValue();
// assert stuff here
}
I'm not sure I fully understand the problem, but it looks like you are looking for Mockito.verify(publisher).publish(Matchers.isA(MappedClass.class));
For that to work, you'd have to mock the Publisher through
Publisher publisher = Mockito.mock(Publisher.class)
and then hand that into MyMapper.
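Putting it together, a sketch (the setup mirrors the test from the question; note that in current Mockito versions ArgumentMatchers replaces the deprecated Matchers class):
Publisher publisher = Mockito.mock(Publisher.class);
MyMapper myMapper = new MyMapper();
myMapper.setPublisher(publisher);

myMapper.handle(classA); // classA created as in the question

Mockito.verify(publisher).publish(Matchers.isA(MappedClass.class));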
If you need to assert the state of the MappedClass, use an ArgumentCaptor. See this answer for an example.
The Mockito API docs have many additional examples.