Is mocking an essential part of unit testing? - java

I have a web service which calls a third party web service.
Now I want to unit-test my web service. For this, should I mock the third party web service or is it fine to call it during the tests?
Is there any standard document on unit testing?

Yes, you should mock the third party web service for unit testing. Here are some advantages:
Your unit tests are independent of other components and really only test the unit and not also the web service. If you include the service in your unit tests and they fail, you won't know whether the problem is in your code or in the foreign web service.
Your tests are independent of the environment. If the Internet connection is ever down or you want to test on a machine that doesn't have Internet access at all, your test will still work.
Your tests will be much faster without actually connecting to the web service. This might not seem like a big thing, but if you have many tests (which you should), it really adds up.
You can also test the robustness of your unit by having the mock send unexpected things. This can always happen in the real world and if it does, your web service should react nicely and not just crash. There is no way to test that when using the real web service.
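To make that last point concrete, here is a minimal sketch using JUnit 5 and Mockito; ThirdPartyClient and WeatherService are hypothetical stand-ins for the third-party service and the unit that wraps it:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class WeatherServiceRobustnessTest {

    // Hypothetical third-party dependency.
    interface ThirdPartyClient {
        String fetchForecast(String city);
    }

    // The unit under test: must degrade gracefully on garbage input.
    static class WeatherService {
        private final ThirdPartyClient client;
        WeatherService(ThirdPartyClient client) { this.client = client; }

        String forecastFor(String city) {
            String raw = client.fetchForecast(city);
            // Treat anything unusable as "unknown" instead of crashing.
            return (raw == null || raw.trim().isEmpty()) ? "unknown" : raw.trim();
        }
    }

    @Test
    void survivesUnexpectedResponse() {
        ThirdPartyClient client = mock(ThirdPartyClient.class);
        // The real service could never be made this broken on demand:
        when(client.fetchForecast("Oslo")).thenReturn(null);

        assertEquals("unknown", new WeatherService(client).forecastFor("Oslo"));
    }
}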
However, there is also integration testing. There, you test the complete setup, with all components put together (integrated). This is where you would use the real web service instead of the mock.
Both kinds of testing are important, but unit testing is typically done earlier and more frequently. It can (and should) be done before all components of the system are created. Integration testing requires a minimal amount of progress in all or most of the components.

This depends on what you're unit-testing.
If you're testing whether you can successfully communicate with the third-party web service, you obviously wouldn't mock it. However, if you're unit-testing business use cases that are part of your own web service's offering (unrelated to what the other modules/web services are doing), then you might want to mock the third-party web service.

You should test both, but the two cases do not both fall under unit testing.
Unit testing is primarily used for testing individual smaller pieces, i.e. classes and methods. Unit testing is something that should preferably happen during the build process without any human intervention, so as part of unit testing you should mock out the third-party web service. The advantage of mocking is that you can make the mocked object behave in many possible ways and test your classes/methods to make sure they handle all possible cases, as the sketch below shows.
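As an illustration of "many possible ways": Mockito's consecutive stubbing lets a single mock drive the unit under test through several behaviors across successive calls. RateService is a hypothetical dependency:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class MockBehaviorSketch {

    // Hypothetical dependency of the class under test.
    interface RateService {
        double rateFor(String currency);
    }

    @Test
    void oneMockManyBehaviors() {
        RateService rates = mock(RateService.class);
        // Each call to rateFor("EUR") gets the next stubbed answer.
        when(rates.rateFor("EUR"))
                .thenReturn(1.08)                        // normal value
                .thenReturn(-3.0)                        // nonsense value
                .thenThrow(new IllegalStateException()); // then a failure

        // Drive the unit under test through the happy path, the
        // invalid-data path, and the error path here...
    }
}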
When multiple systems are involved, the test case falls under system/integration/functional testing. So as part of your system/integration/functional testing, you should call the other web service from your web service and make sure everything works as expected.

Mocking is usually essential when unit testing components that have dependencies. This is because in unit testing you are limited to verifying that your code works correctly (does what its contract says it will do). If the method under test, in trying to fulfill its part of the contract, depends on other components doing their part correctly, it is perfectly fine to mock those parts out (assuming that they work correctly).
If you don't mock the dependent parts, you soon run into problems. First, you cannot be certain what behavior that component exhibits. Second, you cannot predict the results of your own test, because you don't know what inputs were supplied to it.
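Mocks don't have to come from a framework, either. A hand-written stub with fixed answers makes the inputs to your test fully known, so the expected result is predictable. A small sketch with hypothetical PriceSource and Checkout types:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CheckoutTest {

    // Hypothetical dependency: in production this would call another component.
    interface PriceSource {
        int priceInCents(String sku);
    }

    // The unit under test.
    static class Checkout {
        private final PriceSource prices;
        Checkout(PriceSource prices) { this.prices = prices; }

        int total(String... skus) {
            int sum = 0;
            for (String sku : skus) sum += prices.priceInCents(sku);
            return sum;
        }
    }

    @Test
    void totalsKnownPrices() {
        // The stub's answers are fixed, so the expected result is known.
        PriceSource fixedPrices = sku -> 250;
        assertEquals(500, new Checkout(fixedPrices).total("a", "b"));
    }
}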

Related

What do integration tests contain, and how are they set up?

I'm currently learning about unit tests and integration testing, and as I understand it, unit tests are used to test the logic of a specific class while integration tests are used to check the cooperation of multiple classes and libraries.
But are integration tests only used to check that multiple classes work together as expected, or is it also valid to access databases in an integration test? If so, what if the connection can't be established because of a server-side error: wouldn't the tests fail, although the code itself would work as expected? How do I know what's valid to use in this kind of test?
The second thing I don't understand is how they are set up. Unit tests seem to me to have a quite common form, like:
public class classTest {

    @BeforeEach
    public void setUp() {
    }

    @Test
    public void testCase() {
    }
}
But how are integration tests written? Is it commonly done the same way, just including more classes and external factors or is there another way that is used for that?
[...] is it also valid to access databases in an integration test? [...] How do I know what's valid to use in this kind of test?
The distinction between unit-tests and integration tests is not whether or not more than one component is involved: Even in unit-testing you can get along without mocking all your dependencies if these dependencies don't keep you from reaching your unit-testing goals (see https://stackoverflow.com/a/55583329/5747415).
What distinguishes unit-testing and integration testing is the goal of the test. As you wrote, in unit-testing your focus is on finding bugs in the logic of a function, method or class. In integration testing the goal is then, obviously, to detect bugs that could not be found during unit-testing but can be found in the integrated (sub-)system. Always keeping the test goals in mind helps to create better tests and to avoid unnecessary redundancy between integration tests and unit-tests.
One flavor of integration testing is interaction testing: here, the goal is to find bugs in the interaction between two or more components. (Additional components can be mocked or not; again, this depends on whether the additional components keep you from reaching your testing goals.) Typical questions in the interaction of two components A and B, for example if B is a library, could be: Is component A calling the right function of component B? Is component B in a proper state to be accessed by A via that function (B might not be initialized yet)? Is A passing the arguments in the correct order? Do the arguments contain the values in the expected form? Does B give back the results in the expected way and in the expected format?
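A sketch of such an interaction test, assuming JUnit 5 and Mockito; Codec and Exporter are hypothetical stand-ins for library B and component A:

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;

import org.junit.jupiter.api.Test;
import org.mockito.InOrder;

class InteractionTestSketch {

    // Hypothetical library B.
    interface Codec {
        void init();
        byte[] encode(String text, String charset);
    }

    // Component A, which must use B correctly.
    static class Exporter {
        private final Codec codec;
        Exporter(Codec codec) { this.codec = codec; }

        void export(String text) {
            codec.init();                // B must be initialized first
            codec.encode(text, "UTF-8"); // arguments in the right order
        }
    }

    @Test
    void callsCodecInOrderWithExpectedArguments() {
        Codec codec = mock(Codec.class);
        new Exporter(codec).export("hello");

        // Verify both the call order and the argument values.
        InOrder order = inOrder(codec);
        order.verify(codec).init();
        order.verify(codec).encode("hello", "UTF-8");
    }
}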
Another flavor of integration testing is subsystem testing, where you do not focus on the interactions between components, but look at the boundaries of the subsystem formed by the integrated components. And, again, the goal is to find bugs that could not be found by the previous tests (i.e. unit-tests and interaction tests). For example, are the components integrated in the correct versions, can the desired use-cases be exercised on the integrated subsystem etc.
While unit-tests form the bottom of the test pyramid, integration testing is a concept that applies on different levels of integration and can even focus on interfaces orthogonal to the software integration strategy (for example when doing interaction testing of a driver and its corresponding hardware device).
Second thing I don‘t understand is how they are set up. [...] how are integration tests written?
There is an extreme variation here. For many integration tests you can just use the same testing framework that is used for unit-tests: There is nothing unit-test specific in these frameworks. You will, certainly, in the test cases have to ensure that the setup actually combines the components of interest in their proper versions. And, whether or not additional dependencies are just used or mocked needs to be decided (see above).
Another typical scenario is to perform integration tests on the fully integrated system, using a system-test-like setup. This is often done out of convenience, to avoid the trouble of creating different special setups for the different integration tests: the fully integrated system has them all combined. Certainly, this also has disadvantages, because it is often impossible, or at least impractical, to perform all integration tests as desired this way. And when doing integration testing this way, the boundaries between integration testing and system testing get fuzzy. Keeping focused in such a case means you really have to have a good understanding of the different test goals.
There are also mixed forms, but there are too many to describe here. Just one example: it is possible to mock some shared libraries with the help of LD_PRELOAD (see "What is the LD_PRELOAD trick?").
It would be valid to access a database as part of an integration test, as integration tests are supposed to show whether a feature is working correctly.
If a feature does not work because of a failed connection or a server-side error, you would want the test to fail to inform you that this feature is not working. Integration tests are not there to tell you where the fault lies, just that a feature is not working.
See https://stackoverflow.com/a/7876055/10461045 as this helps to clarify the widely accepted difference.
Using a database (or an external connection to a service you are using) in an integration test is not only valid but should be done. However, do not rely on integration tests too heavily. Unit test every logic element you have, and set up integration tests for certain flows.
Integration tests can be written in the same way, except that (as you mentioned) they include more classes, external resources, and so on. In fact, the code snippet you've shown above is a common starting point for an integration test.
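For illustration, a sketch of an integration test in exactly that shape: the @BeforeEach wires real collaborators instead of mocks. All class names here are hypothetical, with minimal in-memory implementations to keep the example self-contained:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class OrderFlowIT {

    // Minimal "real" collaborators for the sketch.
    static class OrderRepository {
        private final Set<String> ids = new HashSet<>();
        void save(String id) { ids.add(id); }
        boolean contains(String id) { return ids.contains(id); }
    }

    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        void place(String id) { repository.save(id); }
        boolean exists(String id) { return repository.contains(id); }
    }

    OrderService service;

    @BeforeEach
    void setUp() {
        // Real objects wired together, no mocks.
        service = new OrderService(new OrderRepository());
    }

    @Test
    void placedOrderCanBeFoundAgain() {
        service.place("order-1");
        assertTrue(service.exists("order-1"));
    }
}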
You can read up more on tests here: https://softwareengineering.stackexchange.com/questions/301479/are-database-integration-tests-bad

Feign with Ribbon: reset

We are trying to use Feign + Ribbon in one of our projects. In production code, we do not have a problem, but we have a few in JUnit tests.
We are trying to simulate a number of situations (failing services, normal runs, exceptions, etc.), hence we need to configure Ribbon in our integration test many times. Unfortunately, we noticed that even when we destroy the Spring context, part of the state survives, probably in static variables somewhere (for example: new tests still connect to the balancer from the previous suite).
Is there any recommended way to purge the static state of both these tools (something like Hystrix.reset())?
Thanks in advance!
We tried restarting the JVM after each suite - it works perfectly, but it's not very practical (we must set it up in both Gradle and IDEA, as the IDEA test runner does not honor this out of the box). We also tried renaming the service between tests - this works in, let's say, 99% of cases (it sometimes fails for some reason...).
If it is indeed the case that static state survives somewhere, you should submit a bug report to Ribbon. Figure out the minimal code that reproduces the issue; if you can't, they are unlikely to act on it. In your own code base, search for any use of static that is not also final, and refactor those occurrences if any exist.
Furthermore, you may find it useful to make strong distinctions between the various types of tests. It doesn't sound like you are doing a unit test to me. Even though you are just simulating these services and their failures, this sort of test is really an integration test, because you are testing whether Ribbon is configured correctly together with your own components. It would be a unit test if you tested only that your component configures Ribbon correctly. It's a subtle distinction, but it has large implications for your tests.
On another note, don't dismiss what you have now as necessarily a bad idea. It may be very useful to have some heavyweight integration tests checking the behaviour of Feign if this is a mission-critical function; in my opinion it's a great idea in that case. But it's a heavyweight integration test and should be treated as such. You might even want to use some container magic etc. to achieve this sort of test, with services that fail in your various failure scenarios. This would live in CI, and developers usually wouldn't run it with each commit unless they were working directly on a piece of functionality concerning integration.

What's the simplest way to intercept HTTP calls from within a JUnit test?

We have some tests, which we run from a shell script. They make calls to system A, which in turn calls system B. We therefore have 3 separate JVMs. These tests are fully automated and run without any human intervention on a jenkins system.
As part of my test, I want to intercept the calls from A to B and check various things about the content. The call still needs to reach B, and B's response needs to be returned unaltered to A.
System A has configuration to tell it where B is. They are all running on my local machine, so the obvious option is to start an http server (perhaps jetty) from within the test, configure A to talk to my temporary server, then when I run my test, the server will see all traffic sent from A to B. It needs to then pass those requests to B, get the response, and return that response back to A. The unit test then needs to see the contents of each request as a String, and do some checks on it.
I have done something similar in the past using Jetty. My previous stub solution did something very similar, but instead of proxying the calls to another system, it simply checked the request and returned a dummy response. We are restricted to using Jetty 6.1; using another version would be doable, but a PITA.
I think that jetty could be the best solution. I could do it very simply by extending AbstractHandler, then creating a new http call to system B. But it would be a bit messy. Is there a simple, standard way of doing this?
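For reference, a rough, untested sketch of such a recording proxy against the Jetty 6.1 API (the org.mortbay packages); error handling, header copying and other edge cases are deliberately left out:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.Request;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.AbstractHandler;

// Accepts calls from system A, records the request body for the test
// to inspect, forwards the call to system B, and copies B's answer back.
public class RecordingProxy extends AbstractHandler {

    private final String targetBase; // e.g. "http://localhost:9090" where B listens
    public final List<String> recordedBodies = new CopyOnWriteArrayList<String>();

    public RecordingProxy(String targetBase) { this.targetBase = targetBase; }

    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch) throws IOException {
        byte[] body = readAll(request.getInputStream());
        recordedBodies.add(new String(body, "UTF-8")); // what the test asserts on

        // Forward the request to B.
        HttpURLConnection conn =
                (HttpURLConnection) new URL(targetBase + target).openConnection();
        conn.setRequestMethod(request.getMethod());
        if (body.length > 0) {
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(body);
            out.close();
        }

        // Return B's response to A unaltered.
        response.setStatus(conn.getResponseCode());
        response.getOutputStream().write(readAll(conn.getInputStream()));
        ((Request) request).setHandled(true);
    }

    private static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) buf.write(chunk, 0, n);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080); // configure A to call this port
        server.setHandler(new RecordingProxy("http://localhost:9090"));
        server.start();
    }
}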
The simplest way is don't.
First, what you're describing are clearly not unit tests. A unit test is one that tests a small bit of code in isolation from the rest of the application, never mind external resources. Instead, you're testing the application as a whole, and the application has external dependencies. That's fine, but these tests fall into either the integration or functional test category. (In practice, you might still use a unit testing framework to write those kinds of tests, but the distinction is important.)
Second, what do you actually hope to gain by doing this? How is it going to improve the reliability or quality of application A? It's highly likely that it won't, and the added complexity of maintaining the setup and the extra assertions will actually make everything less maintainable.
So here is what I would do:
Write a series of unit tests on the individual bits of logic within applications A and B. These will test the logic in isolation. (I would recommend that you execute the unit tests first and separately, and then when unit tests fail, your build can fail fast before executing the integration and functional tests. Integration and functional tests will be slower and more cumbersome, so this notifies you of problems more quickly. Up to you on that point, though.)
Write a series of integration or functional tests that check that application B gives the correct output given a specific input.
Write a series of integration or functional tests on the input and output of A. When writing them, have it just call the real B and assume that B is working as intended. If it isn't, your application B tests will pick up on it, and you'll have some extra failures in the application A tests that you can ignore until B is fixed.
You don't have to mock everything. Trying to do so will cause you more trouble than it will save. Overly complex tests will be a net loss.

Exactly what is integration testing, compared with unit testing?

I am starting to use unit testing in my projects, and am writing tests that are testing at the method/function level.
I understand this and it makes sense.
But what is integration testing? From what I read, it moves the scope of testing up, to test larger features of an application.
This implies that I write a new test suite to test larger things such as (on an e-commerce site) checkout functionality, user login functionality, and basket functionality. So here I would have 3 integration tests?
Is this correct? If not, can someone explain what is meant?
Also, does integration testing involve the UI (in a web application context) and employ the likes of Selenium for automation? Or is integration testing still at the code level, tying together different classes and areas of the code?
Consider a method like this: PerformPayment(double amount, PaymentService service)
A unit test would be a test where you create a mock for the service argument.
An integration test would be a test where you use an actual external service so that you test if that service responds correctly to your input data.
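A compact sketch of the unit-test half (JUnit 5 and Mockito assumed; the PaymentService interface and the performPayment body are invented for the example):

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PaymentTest {

    // Hypothetical external service.
    interface PaymentService {
        boolean charge(double amount);
    }

    // Hypothetical method under test.
    static boolean performPayment(double amount, PaymentService service) {
        return amount > 0 && service.charge(amount);
    }

    @Test
    void unitTestWithMockedService() {
        PaymentService service = mock(PaymentService.class);
        when(service.charge(25.0)).thenReturn(true);

        assertTrue(performPayment(25.0, service));
    }

    // The integration-test variant would pass a real PaymentService,
    // pointed at a test instance of the external system, instead of a mock.
}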
Unit tests are tests where the tested code is inside the actual class. Other dependencies of this class are mocked or ignored, because the focus is testing the code inside the class.
Integration tests are tests that involve disk access, application services and/or frameworks from the target application. Integration tests run isolated from other external services.
I will give an example. You have a Spring application and you have written a lot of unit tests to guarantee that the business logic works properly. Perfect. But what kinds of tests do you have to write to guarantee that:
Your application service can start
Your database entity is mapped correctly
You have all the necessary annotations working as expected
Your Filter is working properly
Your API is accepting some kind of data
Your main feature is really working in the basic scenario
Your database query is working as expected
Etc...
This can't be done with unit tests, but you, as a developer, need to guarantee that all of these things work too. This is the objective of integration tests.
The ideal scenario is for the integration tests to run independently of the external systems that the application uses in a production environment. You can accomplish that by using WireMock for REST calls, an in-memory database like H2, mocking the beans of specific classes that call external systems, etc.
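As an illustration of the WireMock option, a sketch of an integration test that boots WireMock as a stand-in for an external REST service (the port, URL and payload are invented for the example):

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ExternalRateIT {

    WireMockServer wireMock;

    @BeforeEach
    void startFakeService() {
        wireMock = new WireMockServer(8089); // stands in for the external system
        wireMock.start();
        wireMock.stubFor(get(urlEqualTo("/rates/EUR"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("{\"rate\": 1.08}")));
    }

    @AfterEach
    void stopFakeService() {
        wireMock.stop();
    }

    @Test
    void applicationReadsRatesFromConfiguredEndpoint() {
        // Point the application under test at http://localhost:8089
        // and assert on its behavior here.
    }
}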
A little curiosity: Maven has a specific plugin for integration tests, the Maven Failsafe Plugin, which (by default) executes test classes whose names end with IT. Example: UserIT.java.
The confusion about what Integration Test means
Some people understand "integration test" as a test involving integration with the other external systems that the current system uses. This kind of test can only be done in an environment where you have all the systems up and running. Nothing fake, nothing mocked.
This might be only a naming problem, but under that definition we lack tests (what I understand as integration tests) that address the items described above. Instead, we jump from the definition of unit tests (testing a class only) to "integration" tests (all the real systems up). So what is in the middle, if not integration tests?
You can read more about this confusion in this article by Martin Fowler. He separates the term "integration tests" into two meanings, "broad" and "narrow" integration tests:
narrow integration tests
exercise only that portion of the code in my service that talks to a separate service
uses test doubles of those services, either in process or remote
thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)
broad integration tests
require live versions of all services, requiring substantial test environment and network access
exercise code paths through all services, not just code responsible for interactions
You can get even more details in that article.
Unit testing is where you test your business logic within a class or a piece of code. For example, if a particular section of your method should call a repository, your unit test will check that the repository-interface method is called the number of times you expect; otherwise the test fails.
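In Mockito terms, such a check might look like this sketch; UserRepository and UserService are hypothetical:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class UserServiceTest {

    // Hypothetical repository interface.
    interface UserRepository {
        void save(String name);
    }

    // The unit under test.
    static class UserService {
        private final UserRepository repository;
        UserService(UserRepository repository) { this.repository = repository; }
        void register(String name) { repository.save(name); }
    }

    @Test
    void registerSavesExactlyOnce() {
        UserRepository repository = mock(UserRepository.class);
        new UserService(repository).register("alice");

        // Fails if save was called zero times or more than once.
        verify(repository, times(1)).save("alice");
    }
}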
Integration testing, on the other hand, is testing that the actual service or repository (database) behavior is correct. It checks that, based on the data you pass in, you retrieve the expected results. This ties in with your unit tests, so that you know what data you should retrieve and what is done with that data.
As far as I can see, the Selenium tests should be in a separate test suite. Those tests are the most fragile by nature, even if you write them correctly. Here you can use SpecFlow or some other specification-by-example framework. Perhaps you could call these tests acceptance tests. They are for developers and business experts alike.
The integration, or module, tests normally do not use the UI. Integration tests exercise several classes working together. They are lower-level tests than the Selenium tests, and a bit easier to maintain. These tests are for developers only.
Here are a couple of constraints that a good unit test satisfies. Meeting these constraints also requires good, testable code.
No I/O - disk or network
Only one assertion (if multiple, they should be minor variations of each other)
Does not exercise (cover) much more production code than what it asserts
These constraints usually don't apply to integration tests.

Unit testing web service calls

I am working on a Java project that is split up into a web project and a back-end project. The web talks to the back-end via web service calls.
There is one class in the web project that makes all of the web service calls, and I would like to add testing around this class. I want to do unit testing, not functional testing, so I do not want to require the web service to actually be running in order to run the tests. If this class were simply passing the calls through to the back-end, I might be willing to overlook testing it; however, there is caching happening at this point, so I want to test that it is working correctly.
When the web service is generated (with JAX-WS wsgen), an interface is created that the front end uses. I have used this generated interface to create a fake object for testing. This works pretty well, but there are issues with this approach.
I am currently the only one on my team doing unit testing, and so the only one maintaining the test code. I would like the test code to be built when the rest of the code is built, but if someone else introduces a new method into one of the web service classes, the interface will gain the new method, my fake object will not implement it, and it will therefore be broken.
The web and the back end code projects are not dependent on one another, and I do not want to introduce a dependency between them. So, introducing an interface on top of the web service endpoint does not seem plausible since if I put it in the back-end, my web code needs to reference it, and if I put it in the front-end, my back-end code needs to reference it. I also cannot extend the endpoint since this will also introduce a dependency between the projects.
I am unfamiliar with how web services work, and how the classes are generated for the web project to be able to refer to them. So, I do not know how to create an interface in the back end that will be available for me to use in the web project.
So, my question is, how would I get the interface available to my front-end project without introducing a project dependency (in Eclipse build path)? Or, is there another, better way to fake out the back-end web service that I am calling?
First off, I'd break out the caching code into a testable unit that does not directly depend upon the web service calls.
As for the web services, I find it useful to have both functional tests that exercise the web services and other tests that mock out the web services. The functional tests can help you find edge cases that your mocks may miss.
For instance, I'm using Axis2 and generating stubs from the WSDL. For the mocks, I just implement or extend the generated stubs. In our case the real web service is implemented by an outside organization. Probing their web service through exploratory functional tests has revealed some exceptions that needed to be handled that were not apparent by just examining the generated stubs. I used this information to better mock these edge cases.
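To make the first suggestion concrete, a sketch in which the caching lives in a decorator over the generated interface (BackendService plays the role of the wsgen-generated interface here) and is unit tested with a hand-written counting fake:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class CachingBackendTest {

    // Stand-in for the generated service interface.
    interface BackendService {
        String lookup(String key);
    }

    // The caching logic, extracted into its own testable unit.
    static class CachingBackend implements BackendService {
        private final BackendService delegate;
        private final Map<String, String> cache = new HashMap<>();
        CachingBackend(BackendService delegate) { this.delegate = delegate; }

        public String lookup(String key) {
            return cache.computeIfAbsent(key, delegate::lookup);
        }
    }

    @Test
    void secondLookupIsServedFromTheCache() {
        final int[] calls = {0};
        BackendService counting = key -> { calls[0]++; return "value"; };

        BackendService cached = new CachingBackend(counting);
        cached.lookup("k");
        cached.lookup("k");

        assertEquals(1, calls[0]); // the delegate was hit only once
    }
}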
