Mockito: verify that ONLY an expected method was called - java

I'm working on a project with a Service class and a sort of Client that acts as a facade (I don't know if that's the right term in the Design Patterns world, but I'll try to make myself clear). The Service's methods can be very expensive, as they may communicate with one or more databases, perform long checks, and so on, so every Client method should call one and only one Service method.
Service class structure is something like
public class Service {
    public void serviceA() {...}
    public SomeObject serviceB() {...}
    // can grow in the future
}
And Client should be something like
public class Client {
    private Service myService; // Injected somehow

    public void callServiceA() {
        // some preparation
        myService.serviceA();
        // something else
    }

    public boolean callServiceB() {...}
}
And in the test class for Client I want to have something like
public class ClientTest {
    private Client client; // Injected or instantiated in @Before method
    private Service serviceMock = mock(Service.class);

    @Test
    public void callServiceA_onlyCallsServiceA() {
        client.callServiceA();
        ????
    }
}
In the ???? section I want something like verifyOnly(serviceMock).serviceA(), meaning "verify that serviceMock.serviceA() was called exactly once and that no other method of the Service class was called". Is there something like that in Mockito or in some other mocking library? I don't want to use verify(serviceMock, never()).serviceXXX() for every method because, as I said, the Service class may grow in the future and I would have to keep adding verifications to every test (not a happy task for me), so I need something more general.
Thanks in advance for your answers.
EDIT #1
The difference between this post and the possible duplicate is that the answer there adds boilerplate code, which is not desirable in my case because this is a very big project and I must add as little code as possible.
Also, verifyNoMoreInteractions can be a good option even though it's discouraged for every test; no extra boilerplate code is needed.
To summarize, the possible duplicate didn't solve my problem.
There's another issue: I'm writing tests for code made by another team, not following a TDD process myself, so my tests should be extra defensive, as stated in the article quoted in the Mockito documentation for verifyNoMoreInteractions. The methods I'm testing are often very long, so I need to check that the method under test calls ONLY the necessary services and no others (because they're expensive, as I said). Maybe verifyNoMoreInteractions is good enough for now, but I'd like to see something that isn't discouraged for every test by the API's own creators!
I hope this helps clarify my point and the problem. Best regards.

verify(serviceMock, times(1)).serviceA();
verifyNoMoreInteractions(serviceMock);
From Mockito's javadoc on verifyNoMoreInteractions:
You can use this method after you verified your mocks - to make sure that nothing else was invoked on your mocks.
Also:
A word of warning: Some users who did a lot of classic, expect-run-verify mocking tend to use verifyNoMoreInteractions() very often, even in every test method. verifyNoMoreInteractions() is not recommended to use in every test method. verifyNoMoreInteractions() is a handy assertion from the interaction testing toolkit. Use it only when it's relevant. Abusing it leads to overspecified, less maintainable tests.
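Applied to the ClientTest from the question, the whole test could look roughly like this. It's only a sketch: it assumes the mock can be handed to the Client through a constructor, which the question leaves open ("Injected somehow").

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;

import org.junit.Test;

public class ClientTest {

    private Service serviceMock = mock(Service.class);
    private Client client = new Client(serviceMock); // assumption: constructor injection

    @Test
    public void callServiceA_onlyCallsServiceA() {
        client.callServiceA();

        // serviceA() was called exactly once (times(1) is the default)...
        verify(serviceMock).serviceA();
        // ...and nothing else was invoked on the Service mock.
        verifyNoMoreInteractions(serviceMock);
    }
}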

The only way you can reliably verify that your service is called once, and only from the method you specify and not from any other method, is to test every single other method and assert that your serviceA method is never invoked. But you're testing every other method anyway, so this shouldn't be that much of a lift...
// In other test cases...
verify(serviceMock, never()).serviceA();
While this is undesirable from a code-writing standpoint, it opens the door to separating your service into smaller chunks with narrower responsibilities, so that you can guarantee that only one specific service is called. From there, your test cases and the guarantees around your code become smaller and more ironclad.

I think what you are looking for is Mockito.verify combined with Mockito.times:
import static org.mockito.Mockito.atLeast;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

verify(mockObject, atLeast(2)).someMethod("was called at least twice");
verify(mockObject, times(3)).someMethod("was called exactly three times");
Here is another thread with the same question:
Mockito: How to verify a method was called only once with exact parameters ignoring calls to other methods?

Related

What is the difference between full mocking and partial mocking?

I'm currently working with mocking in Mockito using JUnit, and I've stumbled upon the Partial Mocking section, where you use Mockito.spy to partially mock an object. I don't quite understand this concept of partial mocking, since I can't find a scenario where I would need it (it seems pretty similar to mocking in general).
Can anybody explain how partial mocking differs from normal mocking? And if possible, kindly provide examples.
Thanks!
Partial mocking is where you take a class and ask it to behave as normal, except you want to override certain functionality.
This is useful for unit testing services that communicate with other parts of your application. By overriding the behaviour that would call the other part of your application, you can test your service in isolation.
Another example would be when a component would communicate with a database driver. By mocking the part that would communicate with the driver, you can test that part of the application without having to have a database.
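As a rough illustration of that idea with Mockito's spy (all class and method names here are invented for the sketch, not from the question):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import java.util.Arrays;
import org.junit.Test;

public class ReportServiceTest {

    @Test
    public void buildsReportWithoutTouchingTheDatabase() {
        // Hypothetical service: buildReport() contains the logic under test,
        // loadRows() is the expensive call that would normally hit a database.
        ReportService service = spy(new ReportService());

        // Override only the expensive part, keep the rest of the real behavior.
        doReturn(Arrays.asList("row1", "row2")).when(service).loadRows();

        assertEquals(2, service.buildReport().size());
    }
}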
From the EasyMock 2.2 classextension documentation:
Sometimes you may need to mock only some methods of a class and keep
the normal behavior of others. This usually happens when you want to
test a method that calls some others in the same class. So you want to
keep the normal behavior of the tested method and mock the others.
I sometimes use this to mock (complicated or process-intensive) private methods that are already fully tested.
Partial mocking can be very handy, but I try to avoid it as much as possible.
Partial mocking:
Say you have a class whose constructor takes 10+ parameters (this shouldn't ever happen, but for this example let's say it does); it's a real chore to create that entire object. Frameworks like Mockito let you use just the parts of the object you really want to test.
For example:
@Mock BigClass big; // contains loads of attributes
...
when(big.getAttributeOneOfTwenty()).thenReturn(2); // these are static imports from Mockito
I find it useful when I'm forced to work with APIs that rely on inheritance from abstract classes, and/or legacy code built on non-mockable static classes (one real-life example: a DAO).
Partial mocking (in sense of using the Spy facility from Mockito) allows you to mock calls to inherited methods in the first case, or wrap calls to static methods you are forced to use into normal methods that you can mock, verify etc.
Generally you should design and write code in such a way, that you won't need this (dependency injection, single responsibility per class etc). But from time to time it's useful.
A quick and rough example, to visualize the static API example:
class BigUglyStaticLegacyApi {
    public static Foo someStaticMethodFetchingFoo() {...}
}

class Bar {
    public void someMethodYouTest() {
        Foo foo = getFoo();
        // do something with Foo (a FooBar, for example :) )
    }

    /* This one you mock via spying - not the most elegant solution,
       but it's better than nothing. */
    @VisibleForTesting
    protected Foo getFoo() {
        return BigUglyStaticLegacyApi.someStaticMethodFetchingFoo();
    }
}
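Staying with that sketch, a test for Bar could stub the wrapper method through a spy. This is still hypothetical; the final assertion depends on what someMethodYouTest actually does with the Foo, which the example elides.

import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import org.junit.Test;

public class BarTest {

    @Test
    public void someMethodYouTest_worksWithoutTheStaticApi() {
        Foo foo = mock(Foo.class);
        Bar bar = spy(new Bar());

        // Replace only the hook that wraps the static legacy call.
        doReturn(foo).when(bar).getFoo();

        bar.someMethodYouTest();

        // ...assert on whatever effect the method has on foo or elsewhere.
    }
}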
I use it mostly to mock some methods in my CUT (Class Under Test), but not the method(s) I'm actually unit testing. It is an important feature that should be used when unit testing with Mockito.

JUnit best practice - testing methods where the result cannot be verified with public methods

I am writing a socket application in Java, where a server takes messages from an event source and sends notifications to connected users, depending on the event type.
Now I am about to write some JUnit tests for the Server...
JUnit (in Eclipse) automatically suggests implementing tests for all public methods, and I see the necessity of it. The server class has a public method bufferEvent..., but the events are then handled in private methods, and there is not even a method that returns the number of buffered messages.
So the server doesn't have public methods with which to verify the result.
I think the problem can be generalized:
How can I test public methods where the result cannot be verified with public methods (no getters, etc.)?
I want to avoid writing additional methods just for testing. Is there a workaround, or a best practice for testing such things?
Thanks in advance
You should add a constructor that allows you to insert mocks or spies for the collaborators.
For example, your server could have a constructor Server(List<Buffer> buffer), used only for testing. Then you can supply the buffer in the unit test and assert that modifications are made to that buffer.
A List is easy enough to replace with an object you create in the test. If you want more advanced stuff, have a look at a mocking framework like Mockito.
For example, you create a mock for a Socket with Socket socket = mock(Socket.class) and insert it via the constructor Server(List<Buffer> buffer, Socket socket). Then, after you have called whatever function you want to test, you can verify behavior using, for example, verify(socket).send("yourMessage") to check that the server called send with the parameter "yourMessage".
For example, this Plugins class requires some plugins in its constructor. To test it, the mocks are created, inserted, and then verified in the test class like this: verify(proxyServerPlugin).proxyServer(config);.
See the Mockito documentation for more examples.
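A hedged sketch of that idea follows. The names (Connection, send, the exact bufferEvent signature, the Event constructor) are assumptions for illustration, not the asker's real API:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.Collections;
import org.junit.Test;

public class ServerTest {

    @Test
    public void bufferEvent_notifiesTheConnectedUser() {
        // The collaborator is injected through the constructor instead of
        // being created inside the Server, so the test can hand in a mock.
        Connection connection = mock(Connection.class);
        Server server = new Server(Collections.singletonList(connection));

        server.bufferEvent(new Event("user-logged-in"));

        // Verify the observable interaction instead of reading private state.
        verify(connection).send("user-logged-in");
    }
}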
You could check the negative test case: there is no return value from the service, but maybe there is an exception thrown by it:
@Test
public void testServerService() {
    try {
        myServer.service();
        Assert.assertTrue(true);
    } catch (Exception ex) {
        Assert.fail("anything goes wrong");
    }
}
Otherwise in such cases I write something like this:
@Test
public void testServerService() {
    myServer.service();
    Assert.assertTrue(true);
}
so that I have at least one assertion to check that the process runs without problems.
By the way, I think you are right: writing new functionality just to verify JUnit test cases is very bad practice.
What do you think about adding a Logger with its own level, TEST, logging important values at the end of the methods, and streaming that output to a test class?

How do people write unit test in this scenario

I have a question regarding unit tests.
I am going to test a module which is an adapter to a web service. The purpose of the test is not to test the web service but the adapter.
One function that calls the service provider looks like this:
class MyAdapterClass {
    WebService webservice;

    MyAdapterClass(WebService webservice) {
        this.webservice = webservice;
    }

    void myBusinessLogic() {
        List<VeryComplicatedClass> result = webservice.getResult();
        // <business logic here>
    }
}
If I want to unit test the myBusinessLogic function, the normal way is to inject a mocked version of the web service with the getResult() function set up to return some predefined value.
But here is my question: the real web service returns a list of very complicated classes, each with tens of properties, and the list could contain hundreds or even thousands of elements.
If I have to set up such a result manually using Mockito or something like that, it is a huge amount of work.
What do people normally do in this scenario? What I simply do is connect to the real web service and test against the real service. Is that a good thing to do?
Many thanks.
You could write code that calls the real web service once and serializes the List<VeryComplicatedClass> to a file on disk; then, in the setup for your mock, deserialize it and have mockWebservice.getResult() return that object. That saves you from manually constructing the object hierarchy.
Update: this is basically the approach which Gilbert has suggested in his comment as well.
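A hedged sketch of that approach, assuming VeryComplicatedClass is Serializable and the file has been captured once from the real service beforehand (the file name is invented):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.FileInputStream;
import java.io.ObjectInputStream;
import java.util.List;
import org.junit.Test;

public class MyAdapterClassTest {

    @SuppressWarnings("unchecked")
    private List<VeryComplicatedClass> loadCannedResult() throws Exception {
        // Deserialize the response that was recorded from the real web service.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("canned-result.ser"))) {
            return (List<VeryComplicatedClass>) in.readObject();
        }
    }

    @Test
    public void myBusinessLogic_worksOnTheCannedResponse() throws Exception {
        WebService webservice = mock(WebService.class);
        when(webservice.getResult()).thenReturn(loadCannedResult());

        new MyAdapterClass(webservice).myBusinessLogic();
        // assertions on the outcome of the business logic go here
    }
}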
But really, you don't want to set up a list of very complicated classes, each with tens of properties, containing hundreds or even thousands of elements; you want to set up a mock or a stub that captures the minimum necessary to write assertions around your business logic. That way the test better communicates the details it actually cares about. More specifically, if the business logic calls 2 or 3 methods on VeryComplicatedClass, then you want the test to be explicit that those are the conditions required for the things the test asserts.
One thought I had reading the comments: introduce a new interface that wraps List<VeryComplicatedClass> and make myBusinessLogic use that instead.
Then it is easy (or at least easier) to stub or mock an implementation of your new interface rather than deal with a very complicated class that you have little control over.
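For instance, a narrow interface along these lines lets the test provide a tiny stub instead of hundreds of VeryComplicatedClass instances. Everything here (ResultView, its methods, the summarize stand-in for the business logic) is invented for the sketch:

import org.junit.Test;

public class ResultViewSketch {

    // A small view over the web service result, exposing only what the
    // business logic needs (hypothetical interface and methods).
    interface ResultView {
        int size();
        boolean hasFailures();
    }

    // The adapter's logic is written against the narrow interface...
    static String summarize(ResultView results) {
        return results.hasFailures() ? "failed" : results.size() + " results";
    }

    // ...so a test can stub it in a couple of lines instead of building
    // hundreds of VeryComplicatedClass instances.
    @Test
    public void reportsFailures() {
        ResultView stub = new ResultView() {
            public int size() { return 1; }
            public boolean hasFailures() { return true; }
        };
        org.junit.Assert.assertEquals("failed", summarize(stub));
    }
}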

Is there a better way to test the following methods without mocks returning mocks?

Assume the following setup:
interface Entity {}

interface Context {
    Result add(Entity entity);
}

interface Result {
    Context newContext();
    SpecificResult specificResult();
}

class Runner {
    SpecificResult actOn(Entity entity, Context context) {
        return context.add(entity).specificResult();
    }
}
I want to see that the actOn method simply adds the entity to the context and returns the specificResult. The way I'm testing this right now is the following (using Mockito)
@Test
public void testActOn() {
    Entity entity = mock(Entity.class);
    Context context = mock(Context.class);
    Result result = mock(Result.class);
    SpecificResult specificResult = mock(SpecificResult.class);

    when(context.add(entity)).thenReturn(result);
    when(result.specificResult()).thenReturn(specificResult);

    Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
}
However this seems horribly white box, with mocks returning mocks. What am I doing wrong, and does anybody have a good "best practices" text they can point me to?
Since people requested more context, the original problem is an abstraction of a DFS, in which the Context collects the graph elements and calculates results, which are collated and returned. The actOn is actually the action at the leaves.
It depends on what you want your code to be tested for and how thoroughly. As you mentioned the tdd tag, I suppose you wrote your test contracts before any actual production code.
So, in your contract, what do you want to test on the actOn method?
That it returns a SpecificResult given both a Context and an Entity
That the add() and specificResult() interactions happen on, respectively, the Context and the Result
That the SpecificResult is the same instance returned by the Result
etc.
Depending on what you want tested, you will write the corresponding tests. You might want to consider relaxing your testing approach if this section of code is not critical, and the opposite if this section could trigger the end of the world as we know it.
Generally speaking, whitebox tests are brittle, usually verbose, not expressive, and difficult to refactor. But they are well suited for critical sections that are not supposed to change a lot, or for use by neophytes.
In your case, having a mock that returns a mock does look like a whitebox test. But then again, if you want to ensure this behavior in the production code, that's OK.
Mockito can help you with deep stubs.
Context context = mock(Context.class, RETURNS_DEEP_STUBS);
given(context.add(any(Entity.class)).specificResult()).willReturn(someSpecificResult);
But don't get used to it as it is usually considered bad practice and a test smell.
Other remarks :
Your test method name is not precise enough: testActOn does not tell the reader what behavior you are testing. TDD practitioners usually replace the method name with a contract-like sentence such as returns_a_SpecificResult_given_both_a_Context_and_an_Entity, which is clearly more readable and tells the reader the scope of what is being tested.
You are creating mock instances in the test with the Mockito.mock() syntax; if you have several tests like that, I would recommend using the MockitoJUnitRunner with @Mock annotations. This will unclutter your code a bit and allow the reader to see better what's going on in this particular test.
Use the BDD (Behavior Driven Development) or AAA (Arrange Act Assert) approach.
For example:
@Test
public void invoke_add_then_specificResult_on_call_actOn() {
    // given
    // ... prepare the stubs and the object values here

    // when
    // ... call your production code

    // then
    // ... assertions and verifications there
}
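Filled in for the Runner example from the question, that layout could look like this (a sketch using BDDMockito's given/willReturn aliases; the mock setup mirrors the original test):

import static org.junit.Assert.assertSame;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import org.junit.Test;

public class RunnerTest {

    @Test
    public void returns_the_specificResult_of_adding_the_entity_to_the_context() {
        // given
        Entity entity = mock(Entity.class);
        Context context = mock(Context.class);
        Result result = mock(Result.class);
        SpecificResult specificResult = mock(SpecificResult.class);
        given(context.add(entity)).willReturn(result);
        given(result.specificResult()).willReturn(specificResult);

        // when
        SpecificResult actual = new Runner().actOn(entity, context);

        // then
        assertSame(specificResult, actual);
    }
}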
All in all, as Eric Evans told me, context is king: you should take decisions with this context in mind. But you really should stick to best practices as much as possible.
There's plenty of reading on testing here and there: Martin Fowler has very good articles on the matter, James Carr compiled a list of test anti-patterns, and there's also a lot written on using mocks well (for example, the "don't mock types you don't own" mojo). Nat Pryce is the co-author of Growing Object Oriented Software Guided by Tests, which is in my opinion a must-read. Plus, you have Google ;)
Consider using fakes instead of mocks. It's not really clear what the classes in question are meant to do, but if you can build a simple in-memory implementation of both interfaces (not thread-safe, not persistent, etc.), you can use it for flexible testing without the brittleness that sometimes comes from mocking.
I like to use names beginning with mock for all my mock objects. Also, I would replace
when(result.specificResult()).thenReturn(specificResult);
Assert.assertTrue(new Runner().actOn(entity,context) == specificResult);
with
Runner toTest = new Runner();
toTest.actOn( mockEntity, mockContext );
verify( mockResult ).specificResult();
because all you're trying to assert is that specificResult() gets called on the right mock object, whereas your original assert doesn't make it quite so clear what is being asserted. So you don't actually need a mock for SpecificResult. That cuts you down to just one when call, which seems to me to be about right for this kind of test.
But yes, this does seem frightfully white box. Is Runner a public class, or some hidden implementation detail of a higher level process? If it's the latter, then you probably want to write tests around the behaviour at the higher level; rather than probing implementation details.
Not knowing much about the context of the code, I would suggest that Context and Result are likely simple data objects with very little behavior. You could use a Fake as suggested in another answer or, if you have access to the implementations of those interfaces and construction is simple, I'd just use the real objects in lieu of Fakes or Mocks.
Although the context would provide more information, I don't see any problems with your testing methodology myself. The whole point of mock objects is to verify calling behavior without having to instantiate the implementations. Creating stub objects or using actual implementing classes just seems unnecessary to me.
However this seems horribly white box, with mocks returning mocks.
This may be more about the class design than the testing. If that is the way the Runner class works with the external interfaces then I don't see any problem with having the test simulate that behavior.
First off, since nobody's mentioned it, Mockito supports chaining so you can just do:
when(context.add(entity).specificResult()).thenReturn(specificResult);
(and see Brice's comment for how to enable this; sorry I missed it out!)
Secondly, it comes with a warning saying "Don't do this except for legacy code." You're right about the mock-returning-mock being a bit strange. It's OK to do white-box mocking generally because you're really saying, "My class ought to collaborate with a helper like <this>", but in this case it's collaborating across two different classes, coupling them together.
It's not clear why the Runner needs to get the SpecificResult, as opposed to whatever other result comes out of context.add(entity), so I'm going to make a guess: the Result contains a result with some messages or other information and you just want to know whether it's a success or failure.
That's like me saying, "Don't tell me all about my shopping order, just tell me that I made it successfully!" The Runner shouldn't know that you only want that specific result; it should just return everything that came out, the same way that Amazon shows you your total, postage and all the things you bought, even if you've shopped there lots and are perfectly aware of what you're getting.
If some classes regularly use your Runner just to get a specific result while others require more feedback, then I'd make two methods for it, maybe called something like add and addWithFeedback, the same way Amazon lets you do one-click shopping by a different route.
However, be pragmatic. If it's readable the way you've done it and everyone understands it, use Mockito to chain them and call it a day. You can change it later if you have need.
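A rough sketch of that split, reusing the interfaces from the question (the method names are just the ones suggested above):

class Runner {
    // For callers that only care about the specific result.
    SpecificResult add(Entity entity, Context context) {
        return addWithFeedback(entity, context).specificResult();
    }

    // For callers that want the whole Result back.
    Result addWithFeedback(Entity entity, Context context) {
        return context.add(entity);
    }
}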

How to test an anonymous inner class that calls a private method

We have a bunch of classes that listen for events from the server and then respond to them. For example:
class EventManager {
    private Set<Event> cache = new HashSet<Event>();

    private EventListener eventListener = new EventListener() {
        public void onEvent(Event e) {
            if (e instanceof MyEvent || e instanceof YourEvent) {
                handleEvent(e);
            }
        }
    };

    public EventManager(ServerCommunication serverComm) {
        serverComm.addListener(eventListener);
    }

    private void handleEvent(Event e) {
        // handle the event...
        // ...
        cache.add(e);
        // ...
    }
}
Here's a made-up example of the kind of thing we are doing. Here are the problems I see:
I'd like to test handleEvent to make sure it's doing what it is supposed to, but I can't because it's private.
I'd also like to check that something got added to the cache, but that also seems difficult since cache is a private member and I don't want to add a needless getter method.
I'd also like to test the code inside the anonymous class's onEvent method.
For now, what I did was move all the logic from the anonymous class into the handleEvent method, and I made handleEvent package-private (my unit test is in the same package). I'm not checking the contents of the cache, although I want to.
Does anyone have any suggestion for a better design that is more testable?
I would probably extract an EventCache component. You can replace it in your test with an implementation that counts the cached events, or records whatever is of interest.
I probably would not change the visibility of handleEvent. You could implement a ServerCommunication that just raises the event from the test case.
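A hedged sketch of that extraction (the interface and the two-argument constructor are inventions; the real EventManager in the question only takes a ServerCommunication):

import java.util.ArrayList;
import java.util.List;

// The cache becomes an injected collaborator instead of a private HashSet.
interface EventCache {
    void add(Event e);
}

// A recording implementation for tests: no getter is added to EventManager,
// the test simply looks at what was handed to the cache.
class RecordingEventCache implements EventCache {
    final List<Event> added = new ArrayList<Event>();

    public void add(Event e) {
        added.add(e);
    }
}

// In the test (assuming an EventManager(serverComm, cache) constructor exists):
//   RecordingEventCache cache = new RecordingEventCache();
//   new EventManager(serverCommStub, cache);
//   ...raise a MyEvent through serverCommStub...
//   assertEquals(1, cache.added.size());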
Well, there are two approaches here: black box and white box.
Black box testing suggests you should only test the publicly visible changes. Does this method have any observable effect? (Some things don't - caches being an obvious example where they improve performance but may otherwise be invisible.) If so, test that. If not, test that it isn't having a negative effect - this may well just be a case of beefing up other tests.
White box testing suggests that maybe you could add a package-level method for the sake of testing, e.g.
Cache getCacheForTesting()
By putting "for testing" in the name, you're making it obvious to everyone that they shouldn't call this from production code. You could use an annotation to indicate the same thing, and perhaps even have some build rules to make sure that nothing from production does call such a method.
This ends up being more brittle - more tied to the implementation - but it does make it easier to test the code thoroughly, IMO. Personally I err on the side of white box testing for unit tests, whereas integration tests should definitely be more black box. Others are rather more dogmatic about only testing the public API.
I assume your EventManager is a singleton, or you have access to the particular instance of the class you're testing.
1 - I suppose you can send events to your class. Your method is private and nobody else can call it, so sending an event should be enough.
2 - You can access that through reflection, if you really need to. Your test would then depend on a particular implementation.
3 - What would you like to test, actually? If you want to be sure that this method is called, you can replace the EventListener with another EventListener object through reflection (and eventually call the onEvent method of the first listener from your new listener). But your question seems to be more about code coverage than actual unit testing.
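For point 2, reading the private cache via reflection would look roughly like this. It is brittle by design, since it names the private field "cache" from the question's example directly:

import java.lang.reflect.Field;
import java.util.Set;

class EventManagerTestSupport {

    @SuppressWarnings("unchecked")
    static Set<Event> readCache(EventManager manager) throws Exception {
        // Reach into the private "cache" field of EventManager.
        Field field = EventManager.class.getDeclaredField("cache");
        field.setAccessible(true);
        return (Set<Event>) field.get(manager);
    }
}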
Sometimes, when I come across private methods that I want to test, they are simply screaming to be public methods on another object.
If you believe that handleEvent is worth testing in isolation (and not through onEvent processing), one approach would be to expose it as a public method on a new/different object.
Use this opportunity to break the code up into smaller, more focused (default access) classes. A test is just another client of the code.
Note that the anonymous inner class' onEvent method is actually accessible, so calling it should not be a problem.
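One way to drive it from a test without changing any visibility is to capture the listener that EventManager registers on a mocked ServerCommunication and invoke it directly. This is a sketch; it assumes MyEvent can be instantiated easily, and the final assertion depends on what handleEvent observably does:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.mockito.ArgumentCaptor;

public class EventManagerTest {

    @Test
    public void onEvent_handlesMyEvent() {
        ServerCommunication serverComm = mock(ServerCommunication.class);
        new EventManager(serverComm);

        // Grab the anonymous listener the constructor registered.
        ArgumentCaptor<EventListener> captor = ArgumentCaptor.forClass(EventListener.class);
        verify(serverComm).addListener(captor.capture());

        // Drive it exactly as the server would.
        captor.getValue().onEvent(new MyEvent());

        // ...then assert on whatever observable effect handleEvent has.
    }
}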
