I have a Spring Boot + REST application. When I need to write unit tests, should I directly invoke the service beans or call the REST controller? If I invoke the REST controller directly, I have to use RestTemplate and invoke the REST API as a client, right?
What would be the best and required practice?
If I invoke the service beans directly, it will result in less code coverage because the controller methods' code will not be covered. Is that acceptable?
Hmm, this is a complex question but I'll answer as best I can. A lot of this will depend on your (or your organization's) risk tolerance and how much time you want to invest in tests. I believe in a lot of testing, but there is such a thing as too much.
A unit test tests the unit of code. Great, but what's a unit? This article is a pretty good discussion: http://martinfowler.com/bliki/UnitTest.html but a unit is basically the smallest testable part of your application.
Much literature (e.g. https://www.amazon.ca/Continuous-Delivery-Reliable-Deployment-Automation/dp/0321601912/ ) describes multiple phases of testing, including unit tests, which are very low level and mock externalities such as DBs, file systems, or remote systems, and "API acceptance tests" (sometimes called integration tests, although this is a vague term that can mean other things). The latter type fires up a test instance of your application, invokes APIs, and asserts on the responses.
The short answer is as follows: for unit tests, focus on the units (probably services, or something more granular), but the other kind of test you describe, wherein the test behaves like a client and invokes your API, is worthwhile too. My suggestion: do both, but don't call both of them unit tests.
The best approach is to test via the controllers. Web service requests enter and responses are returned there, so the controller plays quite an important role. Controllers can also contain small pieces of logic that you would otherwise miss.
You can try using MockMvc for testing controllers.
Reference: Reference-1, Reference-2
Or use RestTemplate, as you mentioned in the question: Reference-3
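To make the MockMvc option concrete, here is a minimal sketch; the Widget types, controller, and endpoint are hypothetical, invented purely for illustration:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical domain types, defined here only to keep the sketch self-contained.
record Widget(long id, String name) {}

interface WidgetService {
    Widget findById(long id);
}

@RestController
class WidgetController {
    private final WidgetService service;

    WidgetController(WidgetService service) {
        this.service = service;
    }

    @GetMapping("/widgets/{id}")
    Widget get(@PathVariable long id) {
        return service.findById(id);
    }
}

class WidgetControllerTest {

    private WidgetService widgetService;
    private MockMvc mockMvc;

    @BeforeEach
    void setUp() {
        widgetService = mock(WidgetService.class);
        // Standalone setup: only the controller is instantiated, no Spring context needed.
        mockMvc = MockMvcBuilders.standaloneSetup(new WidgetController(widgetService)).build();
    }

    @Test
    void returnsWidgetAsJson() throws Exception {
        when(widgetService.findById(42L)).thenReturn(new Widget(42L, "sprocket"));

        mockMvc.perform(get("/widgets/42"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.name").value("sprocket"));
    }
}
```

Because the standalone setup exercises only the controller (with the service mocked), these tests stay fast and need no running server, while still covering the controller code that a service-only test would miss.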
It depends on what you want to test. You can separate your tests, especially if you have a team of developers: write test cases that test your business "services", and separate integration test cases that use RestTemplate. That way you can find your bugs faster and more easily.
It depends on what you want to do.
One approach would be to unit test the units of work, like the service and the MVC controller. These tests will only cover the logic found in those classes, aiming for high branch coverage where applicable.
Besides this, you can write an integration test that makes the HTTP request, goes through the real service bean, and mocks only the resource access.
For integration tests you can use Spring's support, see here: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/integration-testing.html#spring-mvc-test-framework
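A hedged sketch of such an integration test, using Spring Boot's test support (@MockBean is Spring Boot specific; plain Spring MVC Test via MockMvcBuilders.webAppContextSetup works similarly). Widget and WidgetRepository are hypothetical application types:

```java
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import java.util.Optional;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

// Boots the real application context: the request passes through the real
// controller and the real service bean; only the repository is mocked.
@SpringBootTest
@AutoConfigureMockMvc
class WidgetApiIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean                                   // replaces the real bean in the context
    private WidgetRepository widgetRepository;  // hypothetical data-access bean

    @Test
    void fetchesWidgetThroughAllLayers() throws Exception {
        when(widgetRepository.findById(1L)).thenReturn(Optional.of(new Widget(1L, "gear")));

        mockMvc.perform(get("/widgets/1"))
               .andExpect(status().isOk());
    }
}
```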
Related
I have an application that uses spring-mvc. Basically we have a presentation layer (controllers), a service layer (business units, helpers), an integration layer, and a data access layer (JDBC/JPA repositories). We want to use testing to ensure that future additions to the code won't break anything that was previously working. To do this we are using unit testing (Mockito) and integration testing (spring-test, spring-test-mvc).
Unit testing is done per class/component. Basically we try to get good coverage of the incoming inputs and possible flows within these components, and this is working fine; I have no doubts here, as unit testing is about ensuring the units work as expected.
Integration testing is a different story, and a very debatable one. For now we sometimes reuse the scenarios we designed for our unit tests, but with the entire system available, running on the real platform and so on. I have doubts about the best practices here, though.
As we have controller, service, and data layers, one approach is to write an IT per layer. For example, for a UserService class we would have UserServiceTest, the unit test, and UserServiceIT, the integration test. But maintainability is not ideal; I feel we sometimes repeat the same test scenarios, only now against the real system. Does this practice really make sense, and in which scenarios does it make sense? If we already have 100% test coverage of a class with unit testing, why do we need an IT for it? It seems we keep it only to ensure the real component starts up. Does it make sense to keep all the same scenarios, and what is a good criterion for deciding?
Another approach is to cover only the most important test cases via integration tests, and only from the controller layer, which means invoking the REST services and verifying the JSON output. Is this enough? Don't we need to verify more in the other layers? I know calling the real REST API will exercise all the layers underneath (controller, service, DAO), but is that enough? Any considerations you would add here?
If we have a helper class, I don't think it makes sense to have both unit and integration tests; as most of its methods exist for only one purpose, I think unit testing will be enough here. Do you think the same?
Some classes in the data layer use the Criteria API or QueryDSL. For those I use ITs, since unit testing them is in some cases extremely difficult. Is this a valid justification?
I am trying to find the best approach, tips, and practices that make ensuring the system's integrity a real and valuable process, while keeping all of this maintainable.
You've touched on the entire test strategy needed for your application. Testing is not only about coverage and layers. For example:
we want to use testing to ensure that future additions to the code won't break anything that was previously working. To do this we are using unit testing (Mockito) and integration testing (spring-test, spring-test-mvc).
this is how you actually support regression testing, which is a testing type in its own right. If we look at the (detailed) test pyramid,
it's easy to see that the integration tests take a good portion (a recommended 5-15%). Integration goes cross-layer, but also across component APIs. It's natural that your business components will live in the same layer, but you still need to assure that they work with each other as expected. Having a microservice-oriented architecture (mSOA) will push you towards supporting such extensive interface integration testing.
I agree with you on this one
Integration testing is a different story and a very debatable one
Some experts even suggest that you keep only the unit tests and the GUI E2E ones. IMHO there are no strict best practices - only good ones. If you are happy with the trade-offs, use whatever suits your case.
I feel we sometimes repeat the same test scenarios, only now against the real system. Does this practice really make sense, and in which scenarios does it make sense? If we already have 100% test coverage of a class with unit testing, why do we need an IT for it? It seems we keep it only to ensure the real component starts up. Does it make sense to keep all the same scenarios, and what is a good criterion for deciding?
It looks like you need to draw a line in those scenarios. To keep a long story short: unit testing and mock objects go together naturally. Component tests require some real system behavior; they can be used to check the handling of data passed between various units or subsystem components - like your component/service DB or messaging - which is not a unit-level task.
from the controller layer, which means invoking the REST services and verifying the JSON output. Is this enough? Don't we need to verify more in the other layers? I know calling the real REST API will exercise all the layers underneath (controller, service, DAO), but is that enough?
Not quite true. Testing through the presentation layer will exercise the underlying layers too - but then why bother with any of the other testing? If you are OK with such an approach, the Selenium team suggests a similar DB validation approach.
If you're talking about Beans and ViewHelpers here
we have a helper class, I don't think it makes sense to have both unit and integration tests; as most of its methods exist for only one purpose, I think unit testing will be enough here. Do you think the same?
you'll need both unit and integration tests, for all the reasons that are valid for the other components. Having a single responsibility doesn't remove the need for integration testing.
unit testing them is in some cases extremely difficult. Is this a valid justification?
The same goes for all your encapsulated private (and static) classes, methods, properties, etc. But there is a way of testing those as well - reflection, for example. That is of course a special case, for unit testing legacy code or an API you can't change. If you need it for your own code, the lack of testability may point to a design smell.
The approach I would recommend, based on recent experience of testing Java EE 7 and Spring-based codebases, is:
Use per-feature integration tests, and avoid unit tests and mocking. Each integration test should cover code from all layers, from the presentation layer down to the infrastructure layer (the latter containing reusable components that are not application- or domain-specific, but appropriate to the chosen architecture).
Most integration tests should be based on actual business requirements and input data. Others may be created to exercise remaining parts of the codebase, according to the code coverage report generated from each execution of the integration test suite.
So, assuming "full" code coverage is achieved with integration tests, and they run sufficiently fast, there isn't much reason to have unit tests at all. My experience is that when writing unit tests, developers tend to use too much mocking, often creating brittle tests that verify unnecessary implementation details. Also, unit tests can never provide the same level of confidence as integration tests can, since they usually don't cover things like database queries, ORM mapping, and so on.
Unit testing applies, as you did it, to classes and components. Its purposes are to:
Write code (TDD).
Illustrate the code usage and make it sustainable over time and changes.
Cover as many edge cases as possible.
When you encounter an issue with some specific usage or parameters, first reproduce it with a new test, then fix it.
Mocking should only be used when it is needed to test a class or component's standalone behavior, where the mocked feature in production comes from outside your application (an email server, for instance). It is overkill and useless when the code is already covered and the mocking overlaps the responsibility of other kinds of tests, such as integration tests.
Now that you know every piece of code works, how do the pieces work together?
This is where integration testing comes in: it is about how the components interact with each other in various conditions and environments. There is sometimes little difference between UT and IT, for instance when testing the data access layer.
Integration tests serve the same purposes as unit tests but at a higher, less atomic level, illustrating the use cases of APIs, services...
What do you call the "integration layer"?
The presentation layer testing is rather the responsibility of functional testing, not unit nor integration.
You also did not talk about performance testing.
Finally, the goal is getting all code written along with its tests, bugs fixed after being reproduced with new tests, and maximum coverage accumulated across all kinds of tests in all possible conditions (OS, databases, browsers...).
So you validate your overall testing quality with:
a tool calculating the coverage. You will likely have to instrument the code to evaluate the coverage from functional testing, or use advanced JDK tools.
the number of bugs coming from lack of tests on some components, services...
I usually consider a set of tests good when reading them leaves me with no doubt about how to use the code they cover, and full confidence in its contract: capabilities, inputs and outputs, history and exhaustiveness of use cases, and strength and stability of error management and reporting.
Coverage is one important thing, but it can be better to have slightly fewer tests if you focus on their quality: thread-safe, made of unordered methods and classes, testing real conditions (no "if test" condition hacks).
To answer your question: given the above considerations, you don't have to write an integration test per layer, since you will instead choose a different testing strategy (unit, integration, functional, performance, smoke, mocked...) for each layer.
I have an API that I would like to test in an automated fashion. I'm doing it in Java at the moment, but I think the problem is language-agnostic.
A little bit of context:
The main idea is to integrate with a payments system in a manner similar to this demo.
The main flow is something like:
You checkout your cart and click a pay button
Your webapp will initiate a transaction with the payment API and get back a reference number. This reference number is then used as a query parameter for that payment, and you get redirected to the payment provider's website.
After the customer makes the payment, you get redirected back to the webapp, where you can retrieve the transaction and display the result.
My main problem is how do I approach automated integration testing for this type of scenario? The main ideas that I have:
Use stubbing or mocking for the transaction reference and response. But this is more in line with unit testing than integration testing. I don't mind doing this, but I would want to explore the integration testing first.
Perhaps I should try some sort of automated form filling: issue a curl-style request against the redirect URL, then a curl-style POST request after inspecting what the redirect website does.
Use some sort of web testing tool like Selenium.
I think the answer depends on the goals and scope of your integration test, and also the availability of a suitable platform to use for integration testing. Here are a couple of thoughts that may aid your decision, focussing first on establishing the goals of your tests before making any suggestions on what the appropriate testing tools would be.
(I make the assumption that you don't want to actually use the production version of the payments service when running your tests)
Testing Integration with a 'Real' Payment Service: The strongest form of integration test would be one where you actually invoke a real payments service as part of your test, as this would test the code in your application, the code in the payments service, and the communication between the two. However, this requires you to have an always running test version of the payment service available, and the lower the availability of this, the more fragile your tests become.
If the payment service is owned by your team/department/company, this might not be so bad, because you have the necessary control to make sure it is always available for testing. However, if it is a vendor system and they control the test version of the service, you are opening yourself up to fragility if they don't maintain that test service at a high level of availability (which, in my experience, they generally don't: the service doesn't get upgraded often enough, or their support teams don't notice when it has gone down).
One scenario you may come across is that the vendor may provide a test service that is always running a pre-release version of their software, so that their clients can run tests with new versions of their software before they are released into production and flag any integration issues. If they do, this should definitely influence your decision.
Testing Integration with a 'Fake' Payment Service: An alternative is to build a fake version of the service, running in a similar environment to the real service and with the same API, that can be used for integration tests. This can be as simple or as complex as you like, ranging from a very thin service that simply returns a single canned response to each request, to something that can return a range of different responses (success, fail, service not found etc...), depending on what your test goals are.
The upside of this is less fragility - it is likely to have much higher availability because it is under your control, and it will also be much simpler to guarantee the responses from the service you are looking for from your tests. Also, it makes it much simpler to build more intelligent test cases (i.e. a test for how your code responds if the service is unavailable, or if it reports it is under heavy load and cannot process your transaction yet). The downsides are that it is likely to be more work to set up, and you are not exercising the code in the real payments service when you run your unit tests.
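For illustration, here is a minimal fake built on the JDK's built-in HTTP server; the /transactions path and the JSON body are invented stand-ins for whatever your real payment API returns:

```java
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import com.sun.net.httpserver.HttpServer;

// A deliberately thin fake: every request to /transactions receives the same
// canned success response. Add more contexts/branches for failure scenarios
// (service unavailable, declined payment, slow responses, and so on).
public class FakePaymentService {

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/transactions", exchange -> {
            byte[] body = "{\"status\":\"SUCCESS\",\"reference\":\"TEST-REF-001\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```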
What is the Best Approach to Take to Test the Service?: This is highly context specific, however here is what I would consider an ideal approach to testing, based on striking a balance between effectively testing integration with the service, and minimizing the risk of impacting developer productivity through fragile tests.
If the vendor provides a test version of their service, write a small test suite that verifies you are getting the expected responses from their service, and run it once per day. This gives you the benefit of verifying your own assumptions about the behavior of their service without introducing fragility into the rest of your tests. The appropriate tool is the simplest tool for the job (even a shell script which emails you any issues would be absolutely fine here), as long as it is maintainable; at the end of the day these aren't tests developers would be expected to run regularly.
For my end-to-end test suite (i.e. one that deploys a test version of the system and tests it end to end), I would build a fake version of the payment service that can be deployed with the test system, and use that for testing. This allows you to test your own system completely end to end, and with a stable payment service endpoint. These end-to-end tests should include scenarios where the service cannot be found, reports a failure, things like that. Given this is end-to-end testing, a tool such as Selenium would be appropriate for tests driven through a web UI.
For the rest of the automated tests I would mock out calls to the payment service. One very simple approach is to encapsulate all of your calls to the payment service in a single component in your system, which can be easily replaced by a mock version during tests (e.g. using Dependency Injection).
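A sketch of that encapsulation; all the type names here are hypothetical:

```java
// Hypothetical types; the point is the single seam, not the names.
record Order(String item, int quantity) {}
record PaymentResult(String referenceNumber, boolean success) {}

// The only place in the codebase that talks to the payment provider.
interface PaymentGateway {
    PaymentResult initiateTransaction(Order order);
}

// Production code depends on the interface via constructor injection,
// so tests can hand in a Mockito mock or a client for the fake service.
class CheckoutService {
    private final PaymentGateway paymentGateway;

    CheckoutService(PaymentGateway paymentGateway) {
        this.paymentGateway = paymentGateway;
    }

    String checkout(Order order) {
        return paymentGateway.initiateTransaction(order).referenceNumber();
    }
}
```

In unit tests, `new CheckoutService(mock(PaymentGateway.class))` is then all you need; the end-to-end suite instead wires in an implementation that points at the fake payment service.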
We are working on a large team with many off-shore resources, most of whom are junior-level and whom we do not expect to fully understand how to use Spring AOP. Despite that, we still want to use Spring AOP, because our application's cross-cutting concerns are changed rapidly by our customer.
Our concern is ensuring that the advice is being applied how we expect it is, meaning:
ensuring that it is getting applied to the methods we want it to get applied to
that it is not getting applied to any other methods
What worries us most is that the juniors could make changes that break our pointcuts, for example by renaming methods. We are also worried about which advice gets applied where, because on exception some services should roll back the transaction while others should log and carry on - behavior we also want to implement with AOP.
Therefore we want to programmatically test the application of Spring AOP advice, but we are not sure how to best proceed.
tl;dr: How do you unit test the application of Spring AOP advice?
PS- please no semantic complaints of the use of "unit" vs "integration" test here.
The best I could come up with is a Spring unit test that creates mock implementations of all the services (targeted and not), injects them into the test, mocks the services the advice calls into as well, and then calls each method on each service and verifies whether or not the advice's mocked service was called. For every. single. method. on every. single. service. :-S
Hopefully there is some higher-level facility where you can query Spring to ask where advice gets applied, but we have not uncovered any such ability in the tutorials so far.
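For what it's worth, Spring does expose a lower-level facility that helps here: AspectJExpressionPointcut implements MethodMatcher, so you can assert directly which methods a pointcut expression matches, without bootstrapping a full context or mocking every service. A sketch (requires the aspectjweaver dependency; the service class and expression are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.springframework.aop.aspectj.AspectJExpressionPointcut;

class PointcutMatchingTest {

    // Hypothetical service used only to exercise the pointcut expression.
    static class UserService {
        public void saveUser(String name) { }
        public String findUser(long id) { return null; }
    }

    @Test
    void pointcutMatchesExactlyTheIntendedMethods() throws NoSuchMethodException {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        // The same expression your aspect declares. If a junior renames a
        // method and it silently falls out of the pointcut, this test fails.
        pointcut.setExpression("execution(* *..UserService.save*(..))");

        assertTrue(pointcut.matches(
                UserService.class.getMethod("saveUser", String.class), UserService.class));
        assertFalse(pointcut.matches(
                UserService.class.getMethod("findUser", long.class), UserService.class));
    }
}
```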
I have heard that some people (whom I cannot talk to) are big fans of JMock. I've done test-centered development for years, so I went through the website and looked at some of the docs, and I still can't figure out what good it is.
I had the same problem with Spring: their docs do a great job of explaining it if you already understand what it is. So I'm not assuming JMock is of no value; I just don't understand what it does for me.
So, given that JMock provides me with the ability to mock out stubbed data, let's go through an example of how I do things and see how JMock would be better.
Let's say my UI layer asks the widget service to create a widget, and the widget service, when creating the widget, initializes it and stores its pieces in the three tables necessary to make up a widget.
When I write my tests, here's how I go about it.
First, I re-point Hibernate to my test Hypersonic database so I don't have to do a bunch of database setup. Hibernate creates my tables for me.
All of the classes I test have static factory methods that construct a test instance of the class for me. Each of my DAOs creates a test version that points to the test schema, and my service class has one that constructs itself with DAOs generated by the test factory methods.
Now, when I run my test of the UI controller that calls the service, I am testing my code all the way through the application. Granted, this is not the total isolation generally wanted in a unit test, but in my opinion it provides a better unit test, because it executes the real code all the way through all of the supporting layers.
Because Hypersonic under Hibernate is slow, it takes slightly longer to run all of my tests, but my entire build and packaging still completes in less than five minutes on an older computer, which I find pretty acceptable.
How would I do things differently with JMock?
In your example, there are two interfaces where one would use a mocking framework to do proper unit tests:
The interface between the UI layer and the widget service - replacing the widget service with a mock would allow you to test the UI layer in isolation, with the service returning manually created data and the mock verifying that the expected service calls (and no others) happen.
The interface between the widget service and the DAO - by replacing the DAO with a mock, any service methods that contain complex logic can be tested in isolation.
Granted, this is not the total isolation generally wanted in a unit test, but in my opinion it provides a better unit test, because it executes the real code all the way through all of the supporting layers.
This seems to be the core of your question. The answer has a number of facets:
If you're not testing components in isolation, you do not have unit tests; you have integration tests. As you observe, these are quite valuable, but they have their drawbacks:
Since they test more things at the same time, they tend to break more often, they tend to break in large groups (when there's a problem with common functionality), and when they do break, it is harder to find where the actual problem lies.
They are more constrained in what kinds of scenarios you can test. It can be hard or impossible to simulate certain edge cases in an integration test.
Sometimes a full integration test cannot be automated because some component is not sufficiently under your control (e.g. a third-party webservice) to set up the test data you need. In such a case you might even end up using a mocking framework in what is otherwise a high-level integration test.
I haven't looked at JMock in particular (I use Mockito) but in general mock frameworks allow you to "mock" external services such that you only need to test a single class at a time. Any dependencies of that class can be mocked, meaning the real method calls are not made, and instead stubs are called that return or throw constants. This is a good thing, because external calls can be slow, inconsistent, or unreliable--all bad things for unit testing.
To give a single example of how this works, imagine you have a service class that has a dependency on a web service client. If you test with the real web service client, it might be down, the connection might be slow, or the data behind the web service might change over time. How are you going to write a reliable test against that? (You can't). So you use a mock framework to mock/stub the web service client, and you create fake responses, fake errors, fake exceptions, to mimic the web service behavior. The difference is the result is always fast and consistent.
Also, you'd like to test all the failure cases that a given dependency might have, but without mocking that's hard to do. Consider the example I gave above. You'd like to be sure your code does the right thing if the web service throws an IOException because the web service is down (or times out), but it's not so easy to force that condition. With mocking this becomes trivial.
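A hedged sketch of that failure-case test using Mockito; all the types here are hypothetical, defined inline to keep the example self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.IOException;

import org.junit.jupiter.api.Test;

// Hypothetical client; the checked exception must appear on the method
// signature for Mockito to be allowed to stub it with thenThrow.
interface QuoteClient {
    String fetchQuote(String symbol) throws IOException;
}

// Hypothetical class under test: falls back to a canned value on failure.
class QuoteService {
    private final QuoteClient client;

    QuoteService(QuoteClient client) {
        this.client = client;
    }

    String quoteFor(String symbol) {
        try {
            return client.fetchQuote(symbol);
        } catch (IOException e) {
            return "<cached>";
        }
    }
}

class QuoteServiceTest {

    @Test
    void fallsBackToCachedQuoteWhenServiceIsDown() throws IOException {
        QuoteClient client = mock(QuoteClient.class);
        // Forcing the failure path is trivial with a mock; doing this
        // against the real web service would be nearly impossible.
        when(client.fetchQuote("ACME")).thenThrow(new IOException("connection timed out"));

        assertEquals("<cached>", new QuoteService(client).quoteFor("ACME"));
    }
}
```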
I'm currently assigned to write tests for a project. Is it necessary to write tests for the DAO classes?
It depends :-)
If your DAO classes contain only the code needed to fetch entities from the DB, it is better to test them in separate integration tests*. The code to be unit tested is the "business logic" which you can unit test using mock DAOs.
[Update] E.g. with EasyMock you can easily set up a mock for a specific class (with its class extension, even concrete classes can be mocked), configure it to return a specific object from a certain method call, and inject it into your class to be tested.
The EasyMock website seems to be down right now, hopefully it will come back soon - then you can check the documentation, which is IMHO quite clean and thorough, with lots of code examples. Without much details in your question, I can't give a more concrete answer. [/Update]
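For illustration, a minimal EasyMock sketch along those lines; all the domain types are hypothetical, defined inline so the example is self-contained:

```java
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical types, defined inline to keep the sketch self-contained.
record Account(long id, int balance) {}

interface AccountDao {
    Account findById(long id);
}

class AccountService {
    private final AccountDao dao;

    AccountService(AccountDao dao) {
        this.dao = dao;
    }

    int balanceOf(long id) {
        return dao.findById(id).balance();
    }
}

class AccountServiceTest {

    @Test
    void computesBalanceFromDaoData() {
        AccountDao dao = createMock(AccountDao.class);
        expect(dao.findById(7L)).andReturn(new Account(7L, 100));
        replay(dao);  // switch the mock from recording to replay mode

        assertEquals(100, new AccountService(dao).balanceOf(7L));

        verify(dao);  // fails if findById was never called
    }
}
```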
If, OTOH, the DAOs also contain business logic, your best choice - if you can do it - would be to refactor them and move the business logic out of the DAOs; then you can apply the previous strategy.
But the bottom line is, always keep in mind the unit testing motto "test everything which could possibly break". In other words, we need to prioritize our tasks and concentrate our efforts on writing the tests which provide the most benefit with the least effort. Write unit tests for the most critical, most bug-risky code parts first. Code which - in your view - is so simple it can't possibly break is further down the list. Of course it is advisable to consult with experienced developers on concrete pieces of code - they may know and notice possible traps and problems you aren't aware of.
* unit tests are supposed to be lightweight, fast and isolated from the environment as much as possible. Therefore tests which include calls to a real DB are not unit tests but integration tests. Even though technically they can be built and executed with JUnit (and e.g. DbUnit), they are much more complex and orders of magnitude slower than genuine unit tests. Sometimes this makes them unsuitable to be executed after every small code change, as regular unit tests could (and often should) be used.
Yes. But a few folks would argue that it doesn't come into the category of unit tests, because it does not conform to the definition of a unit test per se. We call it an integration test, where we test the integration of the code with the database.
Moreover, I agree with Bruno's idea here. Further, there are APIs available just for this; one of them is DbUnit.
Yes, you should write unit tests for DAO's.
These unit tests can use an in-memory database. See for example: HyperSQL
Article on how to use HyperSQL to write persistence unit tests in Java:
http://www.mikebosch.com/?p=8
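A minimal sketch of such a test using raw JDBC against an in-memory HSQLDB; in a real suite the JDBC calls would live in your DAO, which the test would call instead:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class InMemoryPersistenceIT {

    @Test
    void savedRowCanBeReadBack() throws Exception {
        // "mem:" databases live only as long as the JVM, so every test run
        // starts from a clean slate; "SA" / "" are HSQLDB's defaults.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "")) {
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE widget (id BIGINT PRIMARY KEY, name VARCHAR(50))");
            }
            try (PreparedStatement insert =
                     conn.prepareStatement("INSERT INTO widget (id, name) VALUES (?, ?)")) {
                insert.setLong(1, 1L);
                insert.setString(2, "gear");
                insert.executeUpdate();
            }

            try (PreparedStatement query =
                     conn.prepareStatement("SELECT name FROM widget WHERE id = ?")) {
                query.setLong(1, 1L);
                try (ResultSet rs = query.executeQuery()) {
                    rs.next();
                    assertEquals("gear", rs.getString("name"));
                }
            }
        }
    }
}
```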
It's not necessary to write tests for anything. Do you get benefit from writing tests for your DAO classes? Probably.
Yes. There are several benefits to doing so: once you are sure that your DAO layer is working fine, fixing defects in later stages becomes easier.
I would argue that we should write unit tests for DAOs, and one of the biggest challenges in doing so is test data setup and cleanup. That is where frameworks such as Spring's JDBC testing support can help us, by letting us control the transaction with annotations (for example, @Rollback(true)).
For example, if you are testing a "create/insert" operation, Spring allows you to completely roll back the transaction after the test method executes, thereby always leaving the database in its original state.
You may take a look at this link for more information: Spring Testing
This can be even more useful for your integration tests, where you don't want one test to spoil the data integrity and thereby cause another test to fail.
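A minimal sketch of such a rollback-controlled test, assuming Spring Boot test support, a configured DataSource, and a hypothetical users table in the test schema:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.annotation.Rollback;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
class UserPersistenceIT {

    @Autowired
    private JdbcTemplate jdbcTemplate;  // assumes a "users" table exists in the test schema

    @Test
    @Transactional
    @Rollback  // the default for @Transactional tests; spelled out for emphasis
    void insertIsRolledBackAfterTheTest() {
        jdbcTemplate.update("INSERT INTO users (name) VALUES (?)", "alice");

        int count = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM users WHERE name = ?", Integer.class, "alice");
        assertEquals(1, count);
        // When this method returns, Spring rolls the transaction back,
        // so "alice" never persists beyond this test.
    }
}
```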
The book xUnit Test Patterns offers a lot of great insights into this very question.