How do people write unit tests in this scenario - Java

I have a question regarding unit testing.
I am going to test a module which is an adapter to a web service. The purpose of the test is not to test the web service but the adapter.
One function that calls the service looks like this:
class MyAdapterClass {
    WebService webservice;

    MyAdapterClass(WebService webservice) {
        this.webservice = webservice;
    }

    void myBusinessLogic() {
        List<VeryComplicatedClass> result = webservice.getResult();
        // <business logic here>
    }
}
If I want to unit test the myBusinessLogic function, the normal way is to inject a mocked version of the webservice with the getResult() function set up to return some predefined value.
But here is my question: the real webservice returns a list of very complicated classes, each with tens of properties, and the list could contain hundreds or even thousands of elements.
If I am going to manually set up a result using Mockito or something like that, it is a huge amount of work.
What do people normally do in this scenario? What I simply do is connect to the real web service and test against the real service. Is that a good thing to do?
Many thanks.

You could write code to call the real web service, serialize the List<VeryComplicatedClass> to a file on disk, and then in the setup for your mock deserialize it and have mockwebservice.getResult() return that object. That will save you manually constructing the object hierarchy.
Update: this is basically the approach which Gilbert has suggested in his comment as well.
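A rough sketch of that record-and-replay idea, assuming VeryComplicatedClass implements Serializable (the CannedResults helper and the file handling are illustrative, not a fixed recipe):
import java.io.*;
import java.util.ArrayList;
import java.util.List;
import static org.mockito.Mockito.*;

class CannedResults {

    // Run once against the real service to capture a snapshot on disk.
    static void record(WebService realService, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new ArrayList<>(realService.getResult()));
        }
    }

    // In the test setup, replay the snapshot through a Mockito mock.
    @SuppressWarnings("unchecked")
    static WebService replay(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            List<VeryComplicatedClass> canned = (List<VeryComplicatedClass>) in.readObject();
            WebService mockWebService = mock(WebService.class);
            when(mockWebService.getResult()).thenReturn(canned);
            return mockWebService;
        }
    }
}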
But really, you don't want to set up a list of very complicated classes each with tens of properties and hundreds or thousands of elements; you want to set up a mock or a stub that captures the minimum necessary to write assertions around your business logic. That way the test better communicates the details it actually cares about. More specifically, if the business logic calls 2 or 3 methods on VeryComplicatedClass, then you want the test to be explicit that those are the conditions required for the things that the test asserts.
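A minimal-stub sketch of that style (getStatus() and getAmount() are invented getters standing in for whichever two or three properties your business logic actually reads; this assumes VeryComplicatedClass is mockable, i.e. not final):
import java.util.Collections;
import org.junit.Test;
import static org.mockito.Mockito.*;

public class MyAdapterClassTest {

    @Test
    public void businessLogicOnlyNeedsTheFieldsItActuallyReads() {
        // Stub only the properties the business logic touches, nothing else.
        VeryComplicatedClass item = mock(VeryComplicatedClass.class);
        when(item.getStatus()).thenReturn("OPEN");   // hypothetical getter
        when(item.getAmount()).thenReturn(42);       // hypothetical getter

        WebService webservice = mock(WebService.class);
        when(webservice.getResult()).thenReturn(Collections.singletonList(item));

        new MyAdapterClass(webservice).myBusinessLogic();

        // ...assert here on whatever myBusinessLogic is supposed to do with "OPEN" and 42
    }
}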

One thought I had reading the comments would be to introduce a new interface which can wrap List<VeryComplicatedClass> and make myBusinessLogic use that instead.
Then it is easy (/easier) to stub or mock an implementation of your new interface rather than deal with a very complicated class that you have little control over.
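A minimal sketch of that wrapper idea (the ResultView name and its two methods are made up; expose only what myBusinessLogic actually needs):
import java.util.List;

// Hypothetical narrow interface over the web service result.
interface ResultView {
    int count();
    boolean hasOpenItems();
}

// Production implementation that adapts the real List<VeryComplicatedClass>.
class WebServiceResultView implements ResultView {
    private final List<VeryComplicatedClass> result;

    WebServiceResultView(List<VeryComplicatedClass> result) {
        this.result = result;
    }

    public int count() {
        return result.size();
    }

    public boolean hasOpenItems() {
        return !result.isEmpty();   // placeholder for the real check
    }
}

// myBusinessLogic(ResultView view) can then be tested against a trivial hand-written stub.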

Related

How to test a void method before implementing it?

I'm developing a new project. This is what has been done until now:
A technical design.
The model classes (data classes).
All the interfaces in the project (but no implementations yet).
The next thing I want to do is implement the methods from the skeleton (the high-level methods) down to the nested objects. However, I want to create a unit test for each method before I write the implementation. There won't be any problem implementing the high-level methods first, because I'm working with interfaces and bind the concrete implementation only in an external Java configuration file using DI.
The first method I'm going to implement is called lookForChanges(); it takes no parameters and returns void. This method is called by Spring's scheduler (@Scheduled), and it manages the whole process: it retrieves data from the DB, retrieves data from a web service, compares them, and if there are any changes it updates the database and sends a JMS message to a client. Of course, it doesn't do all those things by itself but calls the relevant classes and methods.
So the first problem I had is how to create a unit test for a void method. In all the tutorials, the tested methods always accept parameters and return a result. I've found an answer for that in this question. He says that even if there's no result to check, one can at least make sure that the methods inside the tested method were called, and with the correct parameters and order.
I sort of liked this answer, but the problem is that I'm working TDD, so in contrast to the guy who asked this question, I'm writing the test before implementing the tested method, and I don't know yet which methods it will use and in what order. I can guess, but I will only be sure about that once the method has already been implemented.
So, how can I test a void skeleton method before I implement it?
So the first problem I had is how to create a unit test for a void method.
A void method implies collaborators. You verify those.
Example. Suppose we needed a task that would copy System.in to System.out. How would we write an automated test for that?
void copy() {
    // Does something clever with System.in and System.out
}
But if you squint a little bit, you'll see you really have code that looks like
void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    // Does something clever with `in` and `out`
}
If we perform an extract method refactoring on this, then we might end up with code that looks like
void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    copy(in, out);
}

void copy(InputStream in, PrintStream out) {
    // Does something clever with `in` and `out`
}
The latter of these is an API that we can test - we configure the collaborators, pass them to the system under test, and verify the changes afterwards.
We don't, at this point, have a test for void copy(), but that's OK, as the code there is "so simple that there are obviously no deficiencies".
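For instance, a sketch of such a test (assuming JUnit 4 and that the two copy methods live on a Task class, as in the designs below):
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.PrintStream;
import org.junit.Test;

public class TaskTest {

    @Test
    public void copiesEverythingFromInToOut() throws Exception {
        // Configure the collaborators...
        InputStream in = new ByteArrayInputStream("hello".getBytes("UTF-8"));
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(captured, true, "UTF-8");

        // ...pass them to the system under test...
        new Task().copy(in, out);

        // ...and verify the changes afterwards.
        assertEquals("hello", captured.toString("UTF-8"));
    }
}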
Notice that, from the point of view of a test, there's not a lot of difference between the following designs
{
    Task task = new Task();
    task.copy(in, out);
}

{
    Task task = new Task(in, out);
    task.copy();
}

{
    Task task = Task.createTask();
    task.copy(in, out);
}

{
    Task task = Task.createTask(in, out);
    task.copy();
}
A way of thinking about this is: we don't write the API first, we write the test first.
// Arrange the test context to be in the correct initial state
// ???
// Verify that the test context arrived in final state consistent with the specification.
Which is to say, before you start thinking about the API, you first need to work out how you are going to evaluate the result.
Same idea, different spelling: if the effects of the function call are undetectable, then you might as well just ship a no-op. If a no-op doesn't meet your requirements, then there must be an observable effect somewhere -- you just need to work out whether that effect is observed directly (inspecting a return value), or by proxy (inspecting the effect on some other element in the solution, or a test double playing the role of that element).
OK so now I can pass params to the method for testing it, but what can I test?
You test what it is supposed to do.
Try this thought experiment - suppose you and I were pairing, and you proposed this interface
interface Task {
    void lookForChanges();
}
and then, after some careful thought, I implemented this:
class NoOpTask implements Task {
    @Override
    public void lookForChanges() {}
}
How would you demonstrate that my implementation doesn't satisfy the requirements?
What you wrote in the question was "it updates the database and sends a JMS message to a client", so there are two assertions to consider - did the database get updated, and was a JMS message sent?
The whole thing looks something like this
Given:
    A database with data `A`
    A webservice with data `B`
    A JMS client with no messages
When:
    The task is connected to this database, webservice, and JMS client
    and the task is run
Then:
    The database is updated with data `B`
    The JMS client has a message.
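A Mockito-flavoured sketch of that specification (Repository, WebServiceClient, JmsSender and the String data are all placeholder names, since the real collaborators aren't shown in the question):
import static org.mockito.Mockito.*;
import org.junit.Test;

public class LookForChangesTaskTest {

    @Test
    public void updatesDatabaseAndSendsJmsMessageWhenWebServiceDataDiffers() {
        // Given: a database with data "A", a webservice with data "B", no JMS messages yet
        Repository repository = mock(Repository.class);
        WebServiceClient webService = mock(WebServiceClient.class);
        JmsSender jms = mock(JmsSender.class);
        when(repository.loadCurrent()).thenReturn("A");
        when(webService.fetchLatest()).thenReturn("B");

        // When: the task is connected to these collaborators and run
        new LookForChangesTask(repository, webService, jms).lookForChanges();

        // Then: the database is updated with "B" and a JMS message is sent
        verify(repository).save("B");
        verify(jms).send(anyString());
    }
}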
It looks like what you suggest is an end-to-end test.
It does look like one. But if you use test doubles for these collaborators, rather than live systems, then your test is running in an isolated and deterministic shell.
It's probably a sociable test - the test doesn't know or care about the implementation details of the system under test in the when clause. I make no claims that the SUT is one-and-exactly-one "unit".
I have to see the implementation of foo first. Am I wrong?
Yes - you need to understand the specification of foo, not the implementation.

Mockito verify that ONLY an expected method was called

I'm working on a project with a Service class and some sort of Client that acts as a facade (I don't know if that's the right term in the design patterns world, but I'll try to make myself clear). The Service's methods can be very expensive as they may be communicating with one or more databases, doing long checks, and so on, so every Client method should call one and only one Service method.
Service class structure is something like
public class Service {
    public void serviceA() {...}
    public SomeObject serviceB() {...}
    // can grow in the future
}
And Client should be something like
public class Client {
    private Service myService; // Injected somehow

    public void callServiceA() {
        // some preparation
        myService.serviceA();
        // something else
    }

    public boolean callServiceB() {...}
}
And in the test class for Client I want to have something like
public class ClientTest {
    private Client client; // Injected or instantiated in @Before method
    private Service serviceMock = mock(Service.class);

    @Test
    public void callServiceA_onlyCallsServiceA() {
        client.callServiceA();
        ????
    }
}
In the ???? section I want something like verifyOnly(serviceMock).serviceA() saying "verify that serviceMock.serviceA() was called only once and no other method from the Service class was called". Is there something like that in Mockito or in some other mocking library? I don't want to use verify(serviceMock, never()).serviceXXX() for every method because, as I said, Service class may grow in the future and I will have to be adding verification to every test (not a happy task for me) so I need something more general.
Thanks in advance for your answers.
EDIT #1
The difference between this post and the possible duplicate is that the answer there adds boilerplate code, which is not desirable in my case because it's a very big project and I must add as little code as possible.
Also, verifyNoMoreInteractions can be a good option even though it's discouraged for every test; no extra boilerplate code is needed.
To summarize, the possible duplicate didn't solve my problem.
There's another issue: I'm writing tests for code made by another team, not following a TDD process myself, so my tests should be extra defensive, as stated in this article quoted in the Mockito documentation for verifyNoMoreInteractions. The methods I'm testing are often very long, so I need to check that the method under test calls ONLY the necessary services and no others (because they're expensive, as I said). Maybe verifyNoMoreInteractions is good enough for now, but I'd like to see something not being discouraged for every test by the very same API creator team!
Hope this helps to clarify my point and the problem. Best regards.
verify(serviceMock, times(1)).serviceA();
verifyNoMoreInteractions(serviceMock);
From Mockito's javadoc on verifyNoMoreInteractions:
You can use this method after you verified your mocks - to make sure that nothing else was invoked on your mocks.
Also:
A word of warning: Some users who did a lot of classic, expect-run-verify mocking tend to use verifyNoMoreInteractions() very often, even in every test method. verifyNoMoreInteractions() is not recommended to use in every test method. verifyNoMoreInteractions() is a handy assertion from the interaction testing toolkit. Use it only when it's relevant. Abusing it leads to overspecified, less maintainable tests.
The only way you can reliably verify that your service is only ever called once and only once from the method you specify and not from any other method, is to test every single method and assert that your serviceA method is never invoked. But you're testing every other method anyway, so this shouldn't be that much of a lift...
// In other test cases...
verify(serviceMock, never()).serviceA();
While this is undesirable from a code writing standpoint, it opens the door to separating out your service into smaller, more responsible chunks so that you guarantee that only one specific service is called. From there, your test cases and guarantees around your code become smaller and more ironclad.
I think what you are looking for are Mockito.verify and Mockito.times:
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
verify(mockObject, atLeast(2)).someMethod("was called at least twice");
verify(mockObject, times(3)).someMethod("was called exactly three times");
Here another thread with the same question:
Mockito: How to verify a method was called only once with exact parameters ignoring calls to other methods?

Unit testing and too many mocks

I'm starting to practice TDD in my project; as background, it also contains legacy code. We use Mockito as a mocking framework and follow a Spring MVC approach.
There are times when there's a Service class implemented with many different DAO objects as @Autowired properties. There are simple methods within these services, like for example completeTransaction.
completeTransaction will use many of the DAO objects to complete its responsibilities:
Update and save the transaction
Advance a business process
Close other pending operations
However, in performing those operations, the method requires calls to different DAOs to fetch and update the transaction, fetch a business process ID, and fetch pending transactions (and save their updates). This means that unit testing this method makes me add many @Mock properties, and I need to set up the mock objects before the test will actually work so I can test a certain condition.
This seems like a code smell, and to me it almost feels like the test is ensuring the implementation of the code instead of just its contract. Again, without mocking the dependencies, the test case will not run (due to NPEs and other errors).
What is a strategy I can follow to clean up code like this? (I can't really provide the actual source code in the question, though.) I'm thinking that one possibility would be to set up a facade class with methods like "getPendingOperations" and "advanceBusinessProcess". Then I could mock a single dependency. But then I figure that in all other classes with situations like this I would need to do the same, and I'm afraid I'll end up with a lot of "helper" classes just for the sake of cleaner tests.
Thank you in advance.
I think you'll want to do two things in general when you find yourself with too many mocks. These are not necessary easy, but you may find them helpful.
1) Try and make your methods and classes smaller. I think Clean Code says there are two rules: classes should be small, and classes should be smaller than that. This makes some sense because as the units you are testing (methods and classes) get smaller, so will the dependencies. You will of course end up with more tests, but they will have less setup in each test.
2) Look at the Law of Demeter (https://en.wikipedia.org/wiki/Law_of_Demeter). There are a bunch of rules, but basically, you want to avoid long strings of property/method calls: objA = objB.getPropertyA().someMethod().getPropertyC(); If you need to mock out all of these objects just to get objA, then you will have a lot of setup. But if you can replace this with objA = objB.getNewProperty(); then you only need to mock objB and its one property, as in the sketch below.
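A sketch of what that refactoring can look like in Java (PropertyA, PropertyC, ObjB and the method names are all placeholders):
// Hypothetical types, purely to illustrate the refactoring.
interface PropertyC {}
interface Intermediate { PropertyC getPropertyC(); }
interface PropertyA { Intermediate someMethod(); }

class ObjB {
    private final PropertyA propertyA;

    ObjB(PropertyA propertyA) {
        this.propertyA = propertyA;
    }

    // The chain now lives inside ObjB, so callers no longer reach through it.
    PropertyC resolvePropertyC() {
        return propertyA.someMethod().getPropertyC();
    }
}

// Before: a test of the caller mocks objB, PropertyA and the intermediate result.
//     PropertyC objA = objB.getPropertyA().someMethod().getPropertyC();
// After: it stubs a single call.
//     when(objB.resolvePropertyC()).thenReturn(stubC);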
Neither of these are silver bullets, but hopefully you can use some of these ideas with your project.
If the unit test is testing the completeTransaction method, then you must mock everything on which it depends. Since you are using Mockito, you can use verify to confirm that the correct mocked methods are called and that they are called in the correct order.
If the unit test is testing something that calls the completeTransaction method, then just mock the completeTransaction method.
If this is your class hierarchy:
class A -> class B -> class C
(where -> is "depends on")
In unit tests for class A, mock only class B.
In unit tests for class B, mock only class C.
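A tiny Mockito sketch of that rule (A, B and their methods here are placeholders):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

public class ATest {

    @Test
    public void usesItsDirectDependencyOnly() {
        B b = mock(B.class);              // C never appears in A's tests
        when(b.compute()).thenReturn(21);

        A a = new A(b);

        assertEquals(42, a.doubled());    // assuming A.doubled() returns 2 * b.compute()
        verify(b).compute();
    }
}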

How can I unit test this method in Java?

I'm working with the Struts2 framework and would like to unit test the execute method below:
public String execute() {
    setDao((MyDAO) ApplicationInitializer.getApplicationContext().getBean("MyDAO"));
    setUserPrincipal(); // fetches attribute from request and stores it in a var
    setGroupValue();    //
    setResults(getMyDao().getReportResults(getActionValue(), getTabName()));
    setFirstResultSet((List) getResults()[0]);
    setSecondResultSet((List) getResults()[1]);
    return SUCCESS;
}
As you can see, most of the logic is database related. So how would I go about unit testing this functionality? I would like to unit test by mocking an HttpServletRequest with a few request variables inside it.
My questions are:
How can I fake/mock a request variable as if it's coming from a browser?
Should my unit test be calling the actual DAO and making sure that the data is coming back?
If so, how can I call the DAO from a unit test, since the DAO is tied to the server (the JNDI pool settings reside on the application server)?
I'd appreciate any book/article that shows how to really accomplish this.
The code you have shown us is not enough to fully answer your question.
Line by line
setDao((MyDAO) ApplicationInitializer.getApplicationContext().getBean("MyDAO"));
This is the hardest line since it uses a static method. We would need to see how ApplicationInitializer works. In an ideal world the getApplicationContext() method should return a mock of ApplicationContext. This mock in turn should return MyDAO when getBean("MyDAO") is called. Mockito is perfectly capable of handling this, as are all other mocking frameworks.
setUserPrincipal(); //fetches attribute from request and stores it in a var
Where does the request come from? Is it injected into the action class? If so, simply inject a mocked request object, e.g. MockHttpServletRequest.
setGroupValue(); //
Same as above? Please provide more details on what this method actually does.
setResults(getMyDao().getReportResults(getActionValue(), getTabName()));
Your previously created mock should return something when getReportResults() is called with the given arguments.
setFirstResultSet((List) getResults()[0]);
setSecondResultSet((List) getResults()[1]);
I guess these methods set some fields on the action class. Because you have full control over what was returned from the mocked getReportResults(), this is not a problem.
return SUCCESS;
You can assert whether SUCCESS was the result of execution.
Now in general
How can I fake/mock a request variable as if its coming from a browser
See above; there is a built-in mock in Spring.
Should my unit test be calling the actual DAO and making sure that the data is coming back?
If your unit test calls the real DAO, it is no longer a unit test. It is an integration test.
If so, how can I call the DAO from a unit test, since the DAO is tied to the server (the JNDI pool settings reside on the application server)?
This means you are doing integration testing. In that case you should use an in-memory database like H2 so you can still run the test on a CI server. You must somehow configure your application to fetch the DataSource from a different place.
Final note
In essence you should inject mocks of everything into your Struts action class. You can tell the mocks to return any value upon being called. Then, after calling execute(), you can verify that the given methods were called, that fields were set, and that the result value is correct. Consider splitting this into several tests.
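A sketch of what such a test could look like once the DAO is injected rather than looked up statically (see the code review below). MyAction, the setDao/setRequest setters, the String arguments and the Object[] report shape are all assumptions based on the snippet above, not the real API:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import org.springframework.mock.web.MockHttpServletRequest;

public class MyActionTest {

    @Test
    public void executeReturnsSuccessAndSplitsTheTwoResultSets() {
        MyDAO dao = mock(MyDAO.class);
        List first = Arrays.asList("row1");
        List second = Arrays.asList("row2");
        when(dao.getReportResults(anyString(), anyString()))
                .thenReturn(new Object[] { first, second });

        MyAction action = new MyAction();                 // hypothetical action class name
        action.setDao(dao);                               // injected instead of fetched via ApplicationInitializer
        action.setRequest(new MockHttpServletRequest());  // assumed setter, so setUserPrincipal() has a request to read

        assertEquals("success", action.execute());        // Struts2 Action.SUCCESS is "success"
        assertEquals(first, action.getFirstResultSet());
        assertEquals(second, action.getSecondResultSet());
    }
}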
Code review
Struts 2 integrates perfectly with Spring. If you take advantage of that functionality, the Spring container will automatically inject MyDAO into your action class. The first line becomes obsolete.
This code is hard to unit test because instead of using Spring as intended (i.e. as a dependency injection framework), you use it as a factory. Dependency injection is precisely used to avoid this kind of bean lookup you're doing, which is hard to test. The DAO should be injected into your object. That way, you could inject a mock DAO when unit testing your object.
Also, this logic is not database-related at all. The DAO contains the database-related logic. This action uses the DAO, and the DAO is thus another unit (which should be tested in its own unit test). You should thus inject a mock DAO to unit test this method.
Finally, this method doesn't use HttpServletRequest (at least not directly), so I don't understand why you would need to use a fake request. You could mock the setXxx methods which use the request.
Instead of simply mocking the HttpServletRequest, how about mocking an actual automated targeted request to the application itself? Check out Selenium, which lets you do just that.
For testing DAOs (integration testing), you could create your databases in memory using HSQLDB, which will allow you to create/delete objects from your tests and make sure they are persisted/retrieved properly. The advantage with HSQLDB is that your tests will run much quicker than they will with an actual database. When the time comes to commit your code, you could run the tests against your actual database. Usually, you would set up different run configurations in your IDE to facilitate this.
The easiest way to make your injected DAOs available to your tests is to let your unit test classes extend AbstractJUnit4SpringContextTests. You could then use the @ContextConfiguration annotation to point to multiple XML application context files, or if you use annotation-based configuration, point it to your context file which has the <context:annotation-config /> declaration in it.
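For example, a skeleton along those lines (the context file location and the injected bean are placeholders):
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractJUnit4SpringContextTests;

// Placeholder context location; point it at your real XML or annotation-config context.
@ContextConfiguration(locations = { "classpath:applicationContext-test.xml" })
public class MyDaoIntegrationTest extends AbstractJUnit4SpringContextTests {

    @Autowired
    private MyDAO myDao;   // wired from the test application context

    @Test
    public void daoIsWiredFromTheTestContext() {
        assertNotNull(myDao);
    }
}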

Is there a better way to test the following methods without mocks returning mocks?

Assume the following setup:
interface Entity {}

interface Context {
    Result add(Entity entity);
}

interface Result {
    Context newContext();
    SpecificResult specificResult();
}

class Runner {
    SpecificResult actOn(Entity entity, Context context) {
        return context.add(entity).specificResult();
    }
}
I want to see that the actOn method simply adds the entity to the context and returns the specificResult. The way I'm testing this right now is the following (using Mockito)
@Test
public void testActOn() {
    Entity entity = mock(Entity.class);
    Context context = mock(Context.class);
    Result result = mock(Result.class);
    SpecificResult specificResult = mock(SpecificResult.class);

    when(context.add(entity)).thenReturn(result);
    when(result.specificResult()).thenReturn(specificResult);

    Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
}
However this seems horribly white box, with mocks returning mocks. What am I doing wrong, and does anybody have a good "best practices" text they can point me to?
Since people requested more context, the original problem is an abstraction of a DFS, in which the Context collects the graph elements and calculates results, which are collated and returned. The actOn is actually the action at the leaves.
It depends on what and how much you want your code to be tested. As you mentioned the tdd tag, I suppose you wrote your test contracts before any actual production code.
So in your contract, what do you want to test on the actOn method?
That it returns a SpecificResult given both a Context and an Entity
That the add() and specificResult() interactions happen on the Context and the Result respectively
That the SpecificResult is the same instance returned by the Result
etc.
Depending on what you want to be tested you will write the corresponding tests. You might want to consider relaxing your testing approach if this section of code is not critical. And the opposite if this section can trigger the end of the world as we know it.
Generally speaking, whitebox tests are brittle, usually verbose and not expressive, and difficult to refactor. But they are well suited for critical sections that are not supposed to change a lot, and for neophytes.
In your case having a mock that returns a mock does look like a whitebox test. But then again if you want to ensure this behavior in the production code this is ok.
Mockito can help you with deep stubs.
Context context = mock(Context.class, RETURNS_DEEP_STUBS);
given(context.add(any(Entity.class)).specificResult()).willReturn(someSpecificResult);
But don't get used to it as it is usually considered bad practice and a test smell.
Other remarks:
Your test method name is not precise enough: testActOn does not tell the reader what behavior you are testing. Usually TDD practitioners replace the name of the method with a contract sentence like returns_a_SpecificResult_given_both_a_Context_and_an_Entity, which is clearly more readable and gives the reader the scope of what is being tested.
You are creating mock instances in the test with the Mockito.mock() syntax; if you have several tests like that, I would recommend using a MockitoJUnitRunner with the @Mock annotations. This will unclutter your code a bit and allow the reader to better see what's going on in this particular test.
Use the BDD (Behavior Driven Dev) or the AAA (Arrange Act Assert) approach.
For example:
@Test
public void invoke_add_then_specificResult_on_call_actOn() {
    // given
    // ... prepare the stubs, the object values here

    // when
    // ... call your production code

    // then
    // ... assertions and verifications there
}
All in all, as Eric Evans told me Context is king, you shall take decisions with this context in mind. But you really should stick to best practice as much as possible.
There's a lot of reading on testing here and there; Martin Fowler has very good articles on this matter, James Carr compiled a list of test anti-patterns, and there's also a lot of reading on using mocks well (for example, the "don't mock types you don't own" mojo). Nat Pryce is the co-author of Growing Object Oriented Software Guided by Tests, which is in my opinion a must read. Plus you have Google ;)
Consider using fakes instead of mocks. It's not really clear what the classes in question are meant to do, but if you can build a simple in-memory (not thread-safe, not persistent, etc.) implementation of both interfaces, you can use that for flexible testing without the brittleness that sometimes comes from mocking.
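For example, hand-rolled fakes for this particular setup might look something like this (assuming SpecificResult is also an interface; recording the added entities is just one way to make assertions possible):
import java.util.ArrayList;
import java.util.List;

// Simple in-memory fakes: no threading, no persistence, just enough to drive Runner.
class FakeSpecificResult implements SpecificResult {}

class FakeResult implements Result {
    private final SpecificResult specificResult = new FakeSpecificResult();

    public Context newContext() {
        return new FakeContext();
    }

    public SpecificResult specificResult() {
        return specificResult;
    }
}

class FakeContext implements Context {
    final List<Entity> added = new ArrayList<>();   // visible to tests for assertions
    FakeResult lastResult;

    public Result add(Entity entity) {
        added.add(entity);
        lastResult = new FakeResult();
        return lastResult;
    }
}

// In a test:
//     FakeContext context = new FakeContext();
//     SpecificResult actual = new Runner().actOn(entity, context);
//     assertSame(context.lastResult.specificResult(), actual);
//     assertTrue(context.added.contains(entity));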
I like to use names beginning with mock for all my mock objects. Also, I would replace
when(result.specificResult()).thenReturn(specificResult);
Assert.assertTrue(new Runner().actOn(entity,context) == specificResult);
with
Runner toTest = new Runner();
toTest.actOn( mockEntity, mockContext );
verify( mockResult ).specificResult();
because all you're trying to assert is that specificResult() gets run on the right mock object. Whereas your original assert doesn't make it quite so clear what is being asserted. So you don't actually need a mock for SpecificResult. That cuts you down to just one when call, which seems to me to be about right for this kind of test.
But yes, this does seem frightfully white box. Is Runner a public class, or some hidden implementation detail of a higher level process? If it's the latter, then you probably want to write tests around the behaviour at the higher level; rather than probing implementation details.
Not knowing much about the context of the code, I would suggest that Context and Result are likely simple data objects with very little behavior. You could use a Fake as suggested in another answer or, if you have access to the implementations of those interfaces and construction is simple, I'd just use the real objects in lieu of Fakes or Mocks.
Although the context would provide more information, I don't see any problems with your testing methodology myself. The whole point of mock objects is to verify calling behavior without having to instantiate the implementations. Creating stub objects or using actual implementing classes just seems unnecessary to me.
However this seems horribly white box, with mocks returning mocks.
This may be more about the class design than the testing. If that is the way the Runner class works with the external interfaces then I don't see any problem with having the test simulate that behavior.
First off, since nobody's mentioned it, Mockito supports chaining so you can just do:
when(context.add(entity).specificResult()).thenReturn(specificResult);
(and see Brice's comment for how to do enable this; sorry I missed it out!)
Secondly, it comes with a warning saying "Don't do this except for legacy code." You're right about the mock-returning-mock being a bit strange. It's OK to do white-box mocking generally because you're really saying, "My class ought to collaborate with a helper like <this>", but in this case it's collaborating across two different classes, coupling them together.
It's not clear why the Runner needs to get the SpecificResult, as opposed to whatever other result comes out of context.add(entity), so I'm going to make a guess: the Result contains a result with some messages or other information and you just want to know whether it's a success or failure.
That's like me saying, "Don't tell me all about my shopping order, just tell me that I made it successfully!" The Runner shouldn't know that you only want that specific result; it should just return everything that came out, the same way that Amazon shows you your total, postage and all the things you bought, even if you've shopped there lots and are perfectly aware of what you're getting.
If some classes regularly use your Runner just to get a specific result while others require more feedback then I'd make two methods to do it, maybe called something like add and addWithFeedback, the same way that Amazon let you do one-click shopping by a different route.
However, be pragmatic. If it's readable the way you've done it and everyone understands it, use Mockito to chain them and call it a day. You can change it later if you have need.
