Over time, our tests have collected a whole bunch of Mockito.when calls. I'd like to know if there are some that aren't needed any more. I tried doing
Mockito.verify(Mockito.when(someMock.someCall()).thenReturn("foo").getMock(), Mockito.atLeastOnce());
but I get:
org.mockito.exceptions.misusing.UnfinishedVerificationException:
Missing method call for verify(mock) here:
Is there a way to accomplish this? I'd like to use aspects to check whether these when()s are still being used, as there are far too many for a human to check by hand, so I'm trying to figure out how to do this inline.
I could just duplicate the same call found in when() with a completely separate call to Mockito.verify(), but it is hard to duplicate the when() call's method and argument chain through aspects.
The built-in solution to the problem would be to use strict stubbing. There is an article over at Baeldung explaining it.
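To illustrate, here is a minimal sketch, assuming Mockito 2+ on JUnit 4 (the SomeDependency interface and test class name are hypothetical stand-ins for your own types). With the strict-stubs runner, any stubbing that is never exercised fails the run with an UnnecessaryStubbingException, which flags exactly the stale when() calls you are hunting for.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

import static org.mockito.Mockito.when;

// Strict stubbing: any when() that the code under test never exercises
// makes the run fail with an UnnecessaryStubbingException.
@RunWith(MockitoJUnitRunner.StrictStubs.class)
public class StaleStubbingTest {

    // Hypothetical collaborator, standing in for whatever someMock is in your tests.
    interface SomeDependency {
        String someCall();
    }

    @Mock
    private SomeDependency someMock;

    @Test
    public void flagsStubbingsThatAreNeverUsed() {
        when(someMock.someCall()).thenReturn("foo");
        // Nothing here ever calls someMock.someCall(), so strict stubbing
        // reports this stubbing as unnecessary when the run finishes.
    }
}

The same strictness can also be enabled via a MockitoRule or MockitoSession if you are not using the runner.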
In Mockito if you want a void method to do nothing you can do this:
doNothing().when(_mockedClass).voidMethod();
Is there a way to do this with JMockit?
I can't seem to find anything about it. I've been trying to switch to JMockit but am having trouble finding documentation for some of the things we do with Mockito. I suspect they are all there in one form or another; I'm just having trouble finding them, so I figured I'd start asking one question at a time here and hope there are easy answers! Thanks!
Just like with Mockito, actually, if you want a method to do nothing, then simply don't record an expectation for it. Alternatively, you can record an expectation with no result, but there is really no need for that (or a point to it).
(This would only not be the case when using strict expectations (with new StrictExpectations() {{ ... }}), in which case all method invocations need to be accounted for.)
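Here is a minimal sketch, assuming JMockit 1.x on JUnit 4 with the JMockit agent configured for the test run; the Collaborator class is hypothetical. With @Mocked, every method of the mocked type is already a no-op (or returns a default value), so nothing needs to be recorded.

import mockit.Mocked;
import org.junit.Test;

public class CollaboratorTest {

    // Hypothetical collaborator with a void method that would normally do real work.
    static class Collaborator {
        void voidMethod() { throw new IllegalStateException("should not run in tests"); }
    }

    @Mocked
    Collaborator collaborator; // with @Mocked, every method is a no-op by default

    @Test
    public void voidMethodDoesNothingByDefault() {
        // No Expectations block recorded: the call simply does nothing,
        // the equivalent of Mockito's doNothing().when(mock).voidMethod().
        collaborator.voidMethod();
    }
}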
I'm using Mockito in order to do some mocks/testing. My scenario is simple: I have a class mocked using mock(), and I'm invoking this class (indirectly) a large number of times (~100k).
Mockito seems to hold some data for every invocation, and so I run out of memory at a certain point.
I'd like to tell Mockito not to hold any data (I don't intend to call verify(), etc.; for these specific tests I just don't care what reaches that mock). I don't want to create new mocks with every invocation.
You can use Mockito.reset(mock), just be aware that after you call it, your mock will forget all stubbing as well as all interactions, so you would need to set it up again. Mockito's documentation on the method has these usage instructions:
List mock = mock(List.class);
when(mock.size()).thenReturn(10);
mock.add(1);
reset(mock);
//at this point the mock forgot any interactions & stubbing
They do also discourage use of this method, like the comments on your question do. Usually it means you could refactor your test to be more focused:
Instead of reset() please consider writing simple, small and focused test methods over lengthy, over-specified tests. First potential code smell is reset() in the middle of the test method. This probably means you're testing too much. Follow the whisper of your test methods: "Please keep us small & focused on single behavior". There are several threads about it on mockito mailing list.
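If you do go down this road, here is a minimal sketch of the idea applied to the long-running scenario (the heavilyUsedMock name and the loop are hypothetical stand-ins for your setup): reset and re-stub periodically so Mockito's invocation log never grows without bound.

import java.util.List;

import org.junit.Test;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.when;

public class HeavyInvocationTest {

    @Test
    @SuppressWarnings("unchecked")
    public void mockSurvivesManyInvocations() {
        List<String> heavilyUsedMock = mock(List.class); // stands in for your real mock
        when(heavilyUsedMock.size()).thenReturn(10);

        for (int i = 1; i <= 100_000; i++) {
            heavilyUsedMock.size(); // every call is recorded by Mockito

            // Periodically drop the recorded invocations to cap memory use,
            // then re-apply the stubbing that reset() wiped out.
            if (i % 10_000 == 0) {
                reset(heavilyUsedMock);
                when(heavilyUsedMock.size()).thenReturn(10);
            }
        }
    }
}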
Assume the following setup:
interface Entity {}
interface Context {
    Result add(Entity entity);
}
interface Result {
    Context newContext();
    SpecificResult specificResult();
}
class Runner {
    SpecificResult actOn(Entity entity, Context context) {
        return context.add(entity).specificResult();
    }
}
I want to see that the actOn method simply adds the entity to the context and returns the specificResult. The way I'm testing this right now is the following (using Mockito):
@Test
public void testActOn() {
    Entity entity = mock(Entity.class);
    Context context = mock(Context.class);
    Result result = mock(Result.class);
    SpecificResult specificResult = mock(SpecificResult.class);
    when(context.add(entity)).thenReturn(result);
    when(result.specificResult()).thenReturn(specificResult);
    Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
}
However this seems horribly white box, with mocks returning mocks. What am I doing wrong, and does anybody have a good "best practices" text they can point me to?
Since people requested more context, the original problem is an abstraction of a DFS, in which the Context collects the graph elements and calculates results, which are collated and returned. The actOn is actually the action at the leaves.
It depends on what, and how much, you want your code to be tested. As you mentioned the tdd tag, I suppose you wrote your test contracts before any actual production code.
So in your contract, what do you want to test on the actOn method?
That it returns a SpecificResult given both a Context and an Entity
That the add() and specificResult() interactions happen on the Context and the Result, respectively
That the SpecificResult is the same instance returned by the Result
etc.
Depending on what you want tested, you will write the corresponding tests. You might want to consider relaxing your testing approach if this section of code is not critical, and the opposite if this section can trigger the end of the world as we know it.
Generally speaking, whitebox tests are brittle, usually verbose and not expressive, and difficult to refactor. But they are well suited for critical sections that are not supposed to change a lot, and for neophytes.
In your case having a mock that returns a mock does look like a whitebox test. But then again if you want to ensure this behavior in the production code this is ok.
Mockito can help you with deep stubs.
Context context = mock(Context.class, RETURNS_DEEP_STUBS);
given(context.add(any(Entity.class)).specificResult()).willReturn(someSpecificResult);
But don't get used to it as it is usually considered bad practice and a test smell.
Other remarks:
Your test method name is not precise enough: testActOn does not tell the reader what behavior you are testing. TDD practitioners usually replace the method name with a contract sentence like returns_a_SpecificResult_given_both_a_Context_and_an_Entity, which is clearly more readable and gives the reader the scope of what is being tested.
You are creating mock instances in the test with the Mockito.mock() syntax. If you have several tests like that, I would recommend using a MockitoJUnitRunner with the @Mock annotations; this will unclutter your code a bit and allow the reader to better see what's going on in this particular test.
Use the BDD (Behavior Driven Development) or the AAA (Arrange, Act, Assert) approach.
For example:
@Test public void invoke_add_then_specificResult_on_call_actOn() {
    // given
    // ... prepare the stubs and object values here
    // when
    // ... call your production code
    // then
    // ... assertions and verifications there
}
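Putting the two previous suggestions together, here is a sketch of the question's test rewritten with the runner, @Mock fields, and a given/when/then layout. It assumes Mockito 2+ and that the Entity/Context/Result/SpecificResult/Runner types from the question are on the test classpath.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

import static org.junit.Assert.assertSame;
import static org.mockito.BDDMockito.given;

@RunWith(MockitoJUnitRunner.class)
public class RunnerTest {

    @Mock Entity entity;
    @Mock Context context;
    @Mock Result result;
    @Mock SpecificResult specificResult;

    @Test
    public void returns_a_SpecificResult_given_both_a_Context_and_an_Entity() {
        // given
        given(context.add(entity)).willReturn(result);
        given(result.specificResult()).willReturn(specificResult);

        // when
        SpecificResult actual = new Runner().actOn(entity, context);

        // then
        assertSame(specificResult, actual);
    }
}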
All in all, as Eric Evans told me, Context is king: you should make decisions with this context in mind. But you really should stick to best practices as much as possible.
There's plenty of reading on testing here and there: Martin Fowler has very good articles on the matter, James Carr compiled a list of test anti-patterns, there's also a lot written on using mocks well (for example the "don't mock types you don't own" principle), and Nat Pryce is the co-author of Growing Object-Oriented Software, Guided by Tests, which is in my opinion a must-read. Plus you have Google ;)
Consider using fakes instead of mocks. It's not really clear what the classes in question are meant to do, but if you can build a simple in-memory (not thread-safe, not persistent, etc.) implementation of both interfaces, you can use that for flexible testing without the brittleness that sometimes comes from mocking.
I like to use names beginning with mock for all my mock objects. Also, I would replace
when(result.specificResult()).thenReturn(specificResult);
Assert.assertTrue(new Runner().actOn(entity,context) == specificResult);
with
Runner toTest = new Runner();
toTest.actOn( mockEntity, mockContext );
verify( mockResult ).specificResult();
because all you're trying to assert is that specificResult() gets called on the right mock object, whereas your original assert doesn't make it quite so clear what is being asserted. So you don't actually need a mock for SpecificResult. That cuts you down to just one when call, which seems to me about right for this kind of test.
But yes, this does seem frightfully white box. Is Runner a public class, or some hidden implementation detail of a higher-level process? If it's the latter, then you probably want to write tests around the behaviour at the higher level rather than probing implementation details.
Not knowing much about the context of the code, I would suggest that Context and Result are likely simple data objects with very little behavior. You could use a Fake as suggested in another answer or, if you have access to the implementations of those interfaces and construction is simple, I'd just use the real objects in lieu of Fakes or Mocks.
Although the context would provide more information, I don't see any problems with your testing methodology myself. The whole point of mock objects is to verify calling behavior without having to instantiate the implementations. Creating stub objects or using actual implementing classes just seems unnecessary to me.
However this seems horribly white box, with mocks returning mocks.
This may be more about the class design than the testing. If that is the way the Runner class works with the external interfaces then I don't see any problem with having the test simulate that behavior.
First off, since nobody's mentioned it, Mockito supports chaining so you can just do:
when(context.add(entity).specificResult()).thenReturn(specificResult);
(and see Brice's comment for how to enable this; sorry I missed it out!)
Secondly, it comes with a warning saying "Don't do this except for legacy code." You're right about the mock-returning-mock being a bit strange. It's OK to do white-box mocking generally because you're really saying, "My class ought to collaborate with a helper like <this>", but in this case it's collaborating across two different classes, coupling them together.
It's not clear why the Runner needs to get the SpecificResult, as opposed to whatever other result comes out of context.add(entity), so I'm going to make a guess: the Result contains a result with some messages or other information and you just want to know whether it's a success or failure.
That's like me saying, "Don't tell me all about my shopping order, just tell me that I made it successfully!" The Runner shouldn't know that you only want that specific result; it should just return everything that came out, the same way that Amazon shows you your total, postage and all the things you bought, even if you've shopped there lots and are perfectly aware of what you're getting.
If some classes regularly use your Runner just to get a specific result while others require more feedback, then I'd make two methods for it, maybe called something like add and addWithFeedback, the same way that Amazon lets you do one-click shopping by a different route.
However, be pragmatic. If it's readable the way you've done it and everyone understands it, use Mockito to chain them and call it a day. You can change it later if you have need.
I have a bean in my applicationContext-test.xml that I use for mocking an external search engine. This way, when I run tests, any time my application code refers to this search engine, I know that I am using my mock engine instead of the real one.
A problem I am facing is that I want this engine to behave differently in different scenarios. For example, when I call getDocuments(), I usually want it to return documents. But sometimes I want it to throw an exception to make sure that my application code is handling the exception appropriately.
I can achieve this by referencing the bean in my test code and changing some stubs, but then I have to change the stubs back to what they were so that my other tests will also pass. This seems like bad practice for many reasons, so I'm seeking alternatives.
One alternative I considered was to reinitialize the bean completely. The bean is initialized from the applicationContext-test.xml with a static factory method. What I want to do is:
Reference the bean from my test code to change some of its stubs
Run the test using these new stubs
At the end of this test, reinitialize the bean using the static factory method specified in applicationContext-test.xml
I tried something like this:
ClassPathXmlApplicationContext appContext = new ClassPathXmlApplicationContext(
        new String[] { "applicationContext-test.xml" });
Factory factory = appContext.getBean(Factory.class);
factory = EngineMocks.createMockEngineFactory();
But this does not do the trick. Any tests that are run after this will still fail. It seems that my new factory variable contains the Factory that I want and behaves accordingly, but when the bean is referenced elsewhere, getDocuments() still throws the exception that was stubbed in previously. Clearly, my re-initialization only affected the local variable and not the bean itself.
Can someone tell me how I can accomplish my goal?
Update
While I appreciate suggestions as to how to write better tests and better mocks, my goal is to reinitialize a bean. I believe there is value in learning how to do this whether it fits my use case or not (I believe it does fit my use case, but I'm having a hard time convincing some of my critics here).
The only answers that will get any upvotes or green ticks from me are those which suggest how I can reinitialize my bean.
You should define the cases where you want a result and the cases where you want an exception; they should be differentiated by the input parameters to the method. Otherwise it is not a good test: for a given set of parameters, the output should be predictable.
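As a sketch of that idea (the Engine interface and the query parameter are assumptions, since the question only shows a no-argument getDocuments(); Mockito 2+ is assumed for ArgumentMatchers), the stub can be made deterministic by keying the failure off a specific input instead of mutating the stub mid-suite:

import java.util.Arrays;
import java.util.List;

import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class EngineStubs {

    // Hypothetical search-engine interface with a query parameter.
    interface Engine {
        List<String> getDocuments(String query);
    }

    static Engine stubbedEngine() {
        Engine engine = mock(Engine.class);

        // Default behaviour: any query returns documents.
        when(engine.getDocuments(anyString()))
                .thenReturn(Arrays.asList("doc1", "doc2"));

        // A designated query deterministically exercises the failure path;
        // the later, more specific stubbing wins for this argument.
        when(engine.getDocuments("query-that-fails"))
                .thenThrow(new RuntimeException("search engine unavailable"));

        return engine;
    }
}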
How about injecting different implementations of the search engine? Just create more beans representing the different mocks of the search engine.
one test class is initialized with one mock and another test class with another mock; this, of course, means you'll have more test classes for a given class under test (not that good/clean)
or...
inject several mocks (of the search engine) into one test class; some test methods (from that test class) use one mock, other test methods use another (see the sketch below)
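Here is a sketch of that approach, shown with Java configuration rather than the XML context from the question purely for brevity; the bean names and the Engine interface are illustrative. Each test class or test method then wires in the variant it needs, e.g. with @Autowired @Qualifier("failingEngine").

import java.util.Arrays;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

@Configuration
public class EngineMockConfig {

    // Hypothetical search-engine interface.
    public interface Engine {
        List<String> getDocuments();
    }

    @Bean(name = "workingEngine")
    public Engine workingEngine() {
        Engine engine = mock(Engine.class);
        when(engine.getDocuments()).thenReturn(Arrays.asList("doc1", "doc2"));
        return engine;
    }

    @Bean(name = "failingEngine")
    public Engine failingEngine() {
        Engine engine = mock(Engine.class);
        when(engine.getDocuments()).thenThrow(new RuntimeException("search engine unavailable"));
        return engine;
    }
}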
Instead of:
factory = EngineMocks.createMockEngineFactory();
do:
factory.callMethodThatChangesTheStateOfThisObjectSuchThatItIsSuitableForYourTest(withOptionalParameters);
Also, if you are using Spring's integration testing support, make sure to annotate your test method with @DirtiesContext so it won't affect the next test.
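A minimal sketch of that, assuming JUnit 4 with spring-test; Engine and getDocuments() are placeholders for the question's mocked search-engine bean defined in applicationContext-test.xml, so substitute your actual type:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.mockito.Mockito.when;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext-test.xml")
public class EngineFailureTest {

    // The mocked search-engine bean from applicationContext-test.xml.
    @Autowired
    private Engine engine;

    @Test
    @DirtiesContext // the context (and the freshly built mock bean) is rebuilt after this test
    public void handlesSearchEngineFailure() {
        when(engine.getDocuments()).thenThrow(new RuntimeException("search engine down"));

        // ... exercise the application code that calls getDocuments()
        // and assert that the exception is handled as expected ...
    }
}

This gives you the "reinitialize the bean" behaviour you asked for, at the cost of rebuilding the whole application context after the annotated test.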