Feign with Ribbon: reset

We are trying to use Feign + Ribbon in one of our projects. In production code we do not have a problem, but we have a few in our JUnit tests.
We are trying to simulate a number of situations (failing services, normal runs, exceptions, etc.), hence we need to reconfigure Ribbon in our integration tests many times. Unfortunately, we noticed that even when we destroy the Spring context, part of the state survives, probably somewhere in static variables (for example, new tests still connect to the load balancer from the previous suite).
Is there any recommended way to purge the static state of both these tools (something like Hystrix.reset())?
Thanks in advance!
We tried restarting the JVM after each suite. That works perfectly, but it's not very practical (we must set it up in both Gradle and IDEA, as the IDEA test runner does not honor this out of the box). We also tried renaming the service between tests; this works maybe 99% of the time (it sometimes fails for reasons we haven't pinned down).

If there really is static state lingering somewhere, you should file a bug against Ribbon. Reduce it to a minimal reproduction first; without one, they are unlikely to act on it. In your own code base, do a search for any static field that is not also final and refactor those away if any exist.
You may also find it useful to draw a sharper distinction between the various types of tests. It doesn't sound to me like you are writing unit tests. Even though you are only simulating these services and their failures, this sort of test is really an integration test, because you are testing whether Ribbon works correctly together with your own components. It would be a unit test if you tested only that your component configures Ribbon correctly. It's a subtle distinction, but it has large implications for your tests.
On another note, don't dismiss what you have now as necessarily a bad idea. If this is mission-critical functionality, heavyweight integration tests that check the behaviour of Feign can be very useful; IMO it's a great idea in that case. But a heavyweight integration test should be treated as such. You might even use containers to stand up services that fail in each of your different failure scenarios. These tests would live in CI, and developers usually wouldn't run them on every commit unless they were working directly on the integration functionality.
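One common way to keep such heavyweight tests out of the default developer run is to mark them and filter on the marker in the CI build. A minimal sketch, assuming JUnit 5 (the tag and class names below are illustrative, not from the question):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Heavyweight Feign/Ribbon scenario test: excluded from the fast local run,
// included only by the CI test task that selects the "integration" tag.
@Tag("integration")
class FeignFailoverIntegrationTest {

    @Test
    void fallsBackWhenTheFirstInstanceFails() {
        // start the simulated failing service, call through Feign, assert the fallback...
        assertTrue(true); // placeholder for the real assertions
    }
}

The build tool then runs the untagged tests on every commit and the tagged ones only in CI (both Gradle and Maven can include or exclude JUnit 5 tags).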

Related

What do integration tests contain and how are they set up?

I'm currently learning about unit testing and integration testing. As I understand it, unit tests are used to test the logic of a specific class, and integration tests are used to check the cooperation of multiple classes and libraries.
But are they only used to test whether multiple classes work together as expected, or is it also valid to access databases in an integration test? If so, what if the connection can't be established because of a server-side error? Wouldn't the tests fail even though the code itself works as expected? How do I know what's valid to use in this kind of test?
The second thing I don't understand is how they are set up. Unit tests seem to me to have a fairly common form, like:
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class classTest {

    @BeforeEach
    public void setUp() {
    }

    @Test
    public void testCase() {
    }
}
But how are integration tests written? Are they commonly done the same way, just including more classes and external factors, or is there another way to write them?
[...] is it also valid to access databases in an integration test? [...] How do I know what's valid to use in this kind of test?
The distinction between unit-tests and integration tests is not whether or not more than one component is involved: Even in unit-testing you can get along without mocking all your dependencies if these dependencies don't keep you from reaching your unit-testing goals (see https://stackoverflow.com/a/55583329/5747415).
What distinguishes unit-testing and integration testing is the goal of the test. As you wrote, in unit-testing your focus is on finding bugs in the logic of a function, method or class. In integration testing the goal is then, obviously, to detect bugs that could not be found during unit-testing but can be found in the integrated (sub-)system. Always keeping the test goals in mind helps to create better tests and to avoid unnecessary redundancy between integration tests and unit-tests.
One flavor of integration testing is interaction testing: Here, the goal is to find bugs in the interaction between two or more components. (Additional components can be mocked, or not - again this depends on whether the additional components keep you from reaching your testing goals.) Typical questions in the interactions of two components A and B could be, for example if B is a library: Is component A calling the right function of component B, is component B in a proper state to be accessed by A via that function (B might not be initialized yet), is A passing the arguments in the correct order, do the arguments contain the values in the expected form, does B give back the results in the expected way and in the expected format?
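As a small illustration of interaction testing (a sketch; the components A and B and their methods are hypothetical), Mockito can verify that A calls B's functions in the expected order and with the expected arguments:

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;

import org.junit.jupiter.api.Test;
import org.mockito.InOrder;

class InteractionTest {

    // Hypothetical library interface standing in for component B.
    interface B {
        void initialize();
        int compute(String key, int amount);
    }

    // Hypothetical component A that delegates to B.
    static class A {
        private final B b;
        A(B b) { this.b = b; }
        int process(String key) {
            b.initialize();              // B must be initialized before use
            return b.compute(key, 42);   // arguments must be passed in this order
        }
    }

    @Test
    void callsBInTheExpectedOrderWithTheExpectedArguments() {
        B b = mock(B.class);
        new A(b).process("account-1");

        InOrder inOrder = inOrder(b);
        inOrder.verify(b).initialize();
        inOrder.verify(b).compute("account-1", 42);
    }
}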
Another flavor of integration testing is subsystem testing, where you do not focus on the interactions between components, but look at the boundaries of the subsystem formed by the integrated components. And, again, the goal is to find bugs that could not be found by the previous tests (i.e. unit-tests and interaction tests). For example, are the components integrated in the correct versions, can the desired use-cases be exercised on the integrated subsystem etc.
While unit-tests form the bottom of the test pyramid, integration testing is a concept that applies on different levels of integration and can even focus on interfaces orthogonal to the software integration strategy (for example when doing interaction testing of a driver and its corresponding hardware device).
The second thing I don't understand is how they are set up. [...] how are integration tests written?
There is a lot of variation here. For many integration tests you can just use the same testing framework that is used for unit tests: there is nothing unit-test specific in these frameworks. In the test cases you will certainly have to ensure that the setup actually combines the components of interest, in their proper versions. And whether additional dependencies are just used or mocked needs to be decided (see above).
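As a small sketch of that point (the two classes are made up for illustration): an integration test written with the same JUnit tooling as a unit test, simply combining two real components and checking that they cooperate:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ReportPipelineIntegrationTest {

    // Two hypothetical real components, combined without mocks,
    // because neither of them keeps us from reaching the test goal.
    static class CsvParser {
        String[] parse(String line) { return line.split(","); }
    }

    static class ReportFormatter {
        String format(String[] fields) { return String.join(" | ", fields); }
    }

    @Test
    void parserAndFormatterCooperate() {
        CsvParser parser = new CsvParser();
        ReportFormatter formatter = new ReportFormatter();

        assertEquals("a | b | c", formatter.format(parser.parse("a,b,c")));
    }
}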
Another typical scenario is to perform integration tests in the fully integrated system, using a system-test-like setup. This is often done out of convenience, to avoid the trouble of creating different special setups for the different integration tests: the fully integrated system just has them all combined. Certainly this also has disadvantages, because it is often impossible, or at least impractical, to perform all desired integration tests this way. And when doing integration testing this way, the boundaries between integration testing and system testing get fuzzy. Staying focused in such a case means you really have to have a good understanding of the different test goals.
There are also mixed forms, too many to describe here. Just one example: it is possible to mock some shared libraries with the help of LD_PRELOAD (see "What is the LD_PRELOAD trick?").
It would be valid to access a database as part of an integration test, as integration tests are supposed to show whether a feature is working correctly.
If a feature does not work because of a failed connection or a server-side error, you would want the test to fail to inform you that the feature is not working. Integration tests are not there to tell you where the fault lies, just that a feature is not working.
See https://stackoverflow.com/a/7876055/10461045 as this helps to clarify the widely accepted difference.
Using a database (or an external connection to a service you are using) in an integration test is not only valid, but should be done. However, do not rely on integration tests heavily. Unit test every logic element you have and set up integration tests for certain flows.
Integration tests can be written in the same way, except that (as you mentioned) they include more classes and external factors. In fact, the code snippet you've shown above is a common starting skeleton for an integration test as well.
You can read up more on tests here: https://softwareengineering.stackexchange.com/questions/301479/are-database-integration-tests-bad

Unit testing strategies for a layered Java Rest application

I am currently puzzled about the right way to do unit tests. Let's say we have to write JUnit tests for a basic Java REST CRUD application composed of these "common layers", where each layer calls the layer below it:
Controller layer (ex: AccountRestController.getAccount(id) - returns JSON of Account)
Service layer (ex: AccountService.getAccount(id) - returns Account object)
Repository layer (ex: AccountRepository.getAccount(id) - returns Account object)
Domain layer (ex: Account(id, name))
Database table (ex: Account(ID, NAME))
We would also have the following hypotheses (or restrictions) for the unit tests (not sure they are appropriate, though):
They have to run out of container (no Tomcat/Jetty and no in-memory database - I guess I would do that in my integration tests)
Use mocking (for example, the Mockito framework)
So my questions are:
What is the best way / the right way / best practice to write unit tests for this type of application?
To be rigorous, do we have to unit test each layer (controller, service, repository, domain) independently, mocking the layer below each time?
Would unit testing only the top REST controller be enough?
... and again, are my hypotheses appropriate? (Couldn't we do unit testing with a container and an in-memory database?)
Regards
And I will jump right into the opinionated waters...
I start writing tests from the bottom layer and go up, letting the upper layers call the real thing underneath (as long as it's well contained within my code and/or the correct behaviour of the lower layer is easy to predict). While this is more like an integration test, it works for me.
For me the crucial part of "unit testing" is not having correct unit tests per these definitions, but having automated tests that run fast enough. I usually keep them under 3 seconds in C++, including compilation. In Android + Java I have huge performance problems, as the whole IDE + toolchain is insanely slow, often leading to times of 5+ seconds, on larger projects even 20-30 seconds with a Gradle build, and that is with only really basic unit tests, far from what I do in C++, where my tests are closer to a QA suite.
And if they fail, it should be easy to pinpoint the cause of the failure. Since I often call all layers deep inside, a failure in some base class often leads to many failures, but I rarely have a problem identifying the cause with a quick look, so this does not worry me.
When files/databases get involved, things usually get slower, so I tend to differentiate between what is my "unit test" set and what belongs to the integration/QA set. Still, an in-memory DB can do quite OK for basic things.
I prefer these "bastard" tests, because when I mock layers under/above the code being tested, I worry that I "bake" the expected result into the test too much and miss bugs when I mock something wrong. Plus, mocking is often additional work, so when the run time of the tests is a low price to pay, I gladly turn to integration-like tests.
In Android/Java, from what I have seen/used:
I like Mockito a lot; it fits the way I think about mocking nicely.
Robolectric (Android-specific) is heavyweight, so I use it sparingly, but sometimes it feels like a better fit than mocking pretty much everything.
Dagger and other dependency-injection libs: I can't get to like these. They usually clash with the way I write unit tests, and I don't see much benefit in using them; I prefer to write dependency injection in pure Java. It's almost the same number of lines of source, and the code is where I expect it to be when I read it again after a few years.
Bus/event libraries: these annoy me just as much. I haven't yet figured out how to test event-driven code thoroughly and easily; my tests always feel far too staged and full of assumptions, plus these libraries are sometimes hard to mock.
BTW, if possible, always write unit tests while developing (close to a TDD approach). Writing unit tests afterwards is usually much more painful: the API is then often already set (already used by other parts/projects), and when you realise it's difficult to test easily, it's already too late, or a big refactoring is next (without tests covering the original version, so error-prone).
For a Java REST CRUD app it sounds to me like most of the API can be tested through all layers without much performance penalty (probably with the DB mocked/injected/in-memory, and other external systems handled in a similar way). But in Java I only do Android work, so I don't have direct experience with that exact scenario.
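For the layering in the question, a minimal sketch of a service-layer unit test (assuming JUnit 5 and Mockito; the Account/AccountRepository/AccountService shapes below are simplified stand-ins for the question's classes):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class AccountServiceTest {

    // Simplified stand-ins for the question's domain and repository layers.
    record Account(long id, String name) {}

    interface AccountRepository {
        Account getAccount(long id);
    }

    static class AccountService {
        private final AccountRepository repository;
        AccountService(AccountRepository repository) { this.repository = repository; }
        Account getAccount(long id) { return repository.getAccount(id); }
    }

    @Test
    void returnsAccountFromRepository() {
        AccountRepository repository = mock(AccountRepository.class);
        when(repository.getAccount(1L)).thenReturn(new Account(1L, "Alice"));

        AccountService service = new AccountService(repository);

        assertEquals("Alice", service.getAccount(1L).name());
    }
}

The controller can be unit tested the same way with the service mocked (or exercised by a heavier web-layer test), while the repository itself is usually better covered by an integration test against a real or in-memory database.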

How to safely test code requiring an external web API

I'm about to embark on the journey of writing a Java library to wrap a web API, and I'd like to write tests along the way, just to make sure everything is peachy. I've been using JUnit for quite some time now and am pretty comfortable using it along with tools like PowerMockito/Mockito. I am, however, concerned that tests may fail if the API is down or I am unable to reach it, as I eventually plan to run this on a CI server (Travis CI) and would like the build-test-deploy procedure to be as close to automated as possible.
I've done quite a bit of Googling and most of the questions I've found here are unfortunately about testing an API that the programmer has authored or can set up locally. I understand it is possible to replicate the basic functionality of the API with a little tinkering, although that feels more like a step backwards than it does forwards.
I'm currently drafting ideas in my head and so far this feels like a moderately reliable solution, although it'd be nice if somebody were able to verify this or offer a better one.
TestUtil.java
public static boolean isReachable() {
    try (Socket socket = new Socket("api.host.com", 80)) {
        return true;
    } catch (Exception e) {
        return false;
    }
}
TestCase.java
@BeforeClass
public static void testReachable() {
    Assume.assumeTrue("API was not reachable, test cannot run", TestUtil.isReachable());
}
I'm putting the assumption in @BeforeClass simply out of paranoia.
However, this doesn't account for HTTP errors; it only checks that something is listening on port 80. Would it be worth replacing it with a HEAD request? Other than checking for errors, I'm honestly not sure. I'm reluctant to commit to an approach without confirmation that it's the best way, as this library has the potential to get quite large.
Edit:
I've just stumbled upon InetAddress#isReachable(), although according to an article I'm reading it's not the most reliable.
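For reference, a HEAD-based reachability check along the lines the question is considering might look like the sketch below (the host is a placeholder, and this is only an illustration, not a recommendation over the socket check):

import java.net.HttpURLConnection;
import java.net.URL;

public final class ApiReachability {

    // Treat any HTTP answer below 500 as "reachable": even a 4xx proves the
    // API host is up and answering HTTP, which is all the assumption needs.
    public static boolean isReachableViaHead() {
        try {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL("http://api.host.com/").openConnection();
            connection.setRequestMethod("HEAD");
            connection.setConnectTimeout(2000);
            connection.setReadTimeout(2000);
            return connection.getResponseCode() < 500;
        } catch (Exception e) {
            return false;
        }
    }
}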
You should distinguish between unit tests and integration tests.
Unit tests should never depend on infrastructure like network and file system. All those aspects should be refactored out of the way, e.g. in a separate class or method which is mocked during the unit test. I always write unit tests as 'white box' tests, where I try to cover each possible flow in the code by using a code coverage tool.
In your case you could write unit tests for the business logic in your project, like which API calls to make in which order, logic rules depending on the results of the API calls, maybe some content-related validation and error handling, mapping of your domain objects to the remote API's protocol etc.
This leaves just the parts that actually call the API untested. To cover those, I would run an embedded web server (e.g. Jetty) that hosts a mock version of the remote API serving canned responses. Then you can write integration tests that call this local server to check your network code and its configuration.
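A minimal sketch of that idea, using the JDK's built-in com.sun.net.httpserver.HttpServer instead of Jetty to keep it dependency-free (the path and response body are made up):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public final class FakeApiServer {

    private final HttpServer server;

    public FakeApiServer() throws Exception {
        // Port 0 lets the OS pick a free port, so parallel builds don't collide.
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/v1/users/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"Alice\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    public String baseUrl() {
        return "http://localhost:" + server.getAddress().getPort();
    }

    public void stop() {
        server.stop(0);
    }
}

The integration test then points the library at baseUrl() instead of the real API host, so it runs the same way locally and on the CI server.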
I have often skipped the integration-test part when I was using a framework like Spring-WS or JAXB, because it means doing a lot of work just to test your configuration; there is no real need to test the code of such frameworks. This might just be me being lazy, but I always try to weigh the effort of creating tests against the expected benefit.
It becomes a different story when you have a complex landscape of services with a lot of complex mapping, configuration and routing going on. Then your integration tests are the best way to validate that all your services are wired up and talking to each other correctly. I would call these 'black box' tests, where you specify tests in terms of the expected functionality of the system as a whole (e.g. user stories), independent of implementation details.
Depending on the size of the API and what you intend to do with it, the code (as you said yourself) might become quite large. Before embarking on your journey, perhaps you could evaluate something that already exists for this, like Retrofit.
It's also worth mentioning that Retrofit, by the nature of what it does, talks to external APIs. It is open source and has tests of its own; you could look at those tests for inspiration.

Given a choice of adding unit tests or integration tests to an existing system, which is better to start with and why?

I'm currently consulting on an existing system, and I suspect the right next step is to add unit tests because of the types of exceptions that are occurring (null pointers, null lists, invalid return data). However, an employee who has a "personal investment" in the application insists on integration tests, even though the problems being reported are not related to specific use cases failing. In this case is it better to start with unit or integration tests?
Typically, it is very difficult to retrofit an untested codebase to have unit tests. There will be a high degree of coupling and getting unit tests to run will be a bigger time sink than the returns you'll get. I recommend the following:
1. Get at least one copy of Working Effectively with Legacy Code by Michael Feathers and go through it together with people on the team. It deals with this exact issue.
2. Enforce a rigorous unit testing (preferably TDD) policy on all new code that gets written. This will ensure new code doesn't become legacy code, and getting new code tested will drive refactoring of the old code for testability.
3. If you have the time (which you probably won't), write a few key focused integration tests over critical paths of your system. This is a good sanity check that the refactoring you're doing in step 2 isn't breaking core functionality.
Integration tests have an important role to play, but unit tests are central to testing your code.
In the beginning, you will probably be forced to do integration tests only. The reason is that your code base is very heavily coupled (just a wild guess, since there are no unit tests). Tight coupling means that you cannot create an instance of an object for a test without creating a lot of related objects first. This makes any test an integration test by definition. It is crucial that you write these integration tests, as they will serve as the baseline for your bug-finding/refactoring efforts.
1. Write tests that document the bug.
2. Fix the bug so all the tests you created are green.
3. Time to be a good boy scout (leave the campsite/code in better shape than it was when you arrived): write tests that document the functionality of the class that contained the bug.
4. As part of your boy-scout efforts, start decoupling the class from the others. Dependency injection is THE tool here. No class should be constructed inside another class; collaborators should be injected as interfaces instead.
5. Finally, when you have decoupled the class, you can decouple the tests as well. Now that you are injecting interfaces instead of creating concrete instances inside the tested class, you can supply stubs/mocks instead (see the sketch after this answer). Suddenly your tests have become unit tests!
You can create integration tests as well, where you inject concrete classes instead of stubs and mocks. Just remember to keep them far away from the unit tests, preferably in a separate module. Unit tests should be able to run all the time, and run very fast; don't let them be slowed down by slow integration tests.
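A minimal sketch of steps 4 and 5 above (the class names are hypothetical): the collaborator is injected as an interface, so the unit test can pass a hand-rolled stub while an integration test would pass the real implementation.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class InvoiceServiceTest {

    // Step 4: the collaborator is an interface injected through the constructor,
    // not a concrete class constructed inside InvoiceService.
    interface TaxRateProvider {
        double rateFor(String country);
    }

    static class InvoiceService {
        private final TaxRateProvider taxRates;
        InvoiceService(TaxRateProvider taxRates) { this.taxRates = taxRates; }
        double grossAmount(double net, String country) {
            return net * (1 + taxRates.rateFor(country));
        }
    }

    // Step 5: a hand-rolled stub turns this into a unit test;
    // an integration test would inject the real provider instead.
    @Test
    void addsTaxForTheGivenCountry() {
        TaxRateProvider stub = country -> 0.25;
        InvoiceService service = new InvoiceService(stub);

        assertEquals(125.0, service.grossAmount(100.0, "DK"), 1e-9);
    }
}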
The answer depends on the context in which the question is being asked. If you are taking over an existing codebase and considering rewriting or replacing large portions of the code, then it will be more valuable to design a comprehensive set of integration tests around the components you wish to rewrite or replace. On the other hand, if you are taking responsibility for an existing system that needs to be supported and maintained, you might want to start with unit tests to make sure that your more focused changes do not introduce errors.
I'll put it another way. If someone sends you an old car, take a look at it. If you are going to replace all of the components right away, then don't bother testing the minute performance characteristics of the fuel injector. If, on the other hand, you are going to be maintaining the car, as is, go ahead and write targeted unit tests around the components you are going to be fixing.
General rule: code without unit tests is brittle, and systems without integration tests are brittle. If you are going to focus on low-level code changes, write unit tests first. If you are going to focus on system-level changes, write integration tests.
And, also, make sure to ignore everything you read on sites like this. No one here knows the specifics of your project.
Choosing between integration tests and unit tests is highly subjective. It depends on various metrics of the codebase, most notably cohesion and coupling of the classes.
The generic advice I would give is that if the classes are loosely coupled, test setup will consume less time, and hence it will be much easier to start writing unit tests (especially against the more critical classes in the codebase).
On the other hand, in the event of high coupling, you might be better off writing integration tests against the more critical code paths, starting especially with a class that is loosely coupled (and sits much higher up in the execution stack). At the same time, attempts should be made to refactor the classes involved to reduce coupling (while using the integration tests as a safety net).

How to best test Java code?

I have been working on a comparatively large system on my own, and it's my first time working on a large system (dealing with 200+ channels of information simultaneously). I know how to use JUnit to test every method and how to test boundary conditions. But for system testing, I still need to test all the interfacing and probably do some stress testing as well (maybe there are other things to do, but I don't know what they are). I am totally new to the world of testing, so please give me some suggestions or point me to some information on how a good tester would do system testing.
PS: Two specific questions I have are:
How do I test private functions?
How do I test interfaces and avoid side effects?
Here are two web sites that might help:
The first is a list of open-source Java tools. Many of them are add-ons to JUnit that allow either easier testing or testing at a higher integration level.
Depending on your system, sometimes JUnit will work for system tests, but the structure of the test can be different.
As for private methods, check this question (and the question it references).
You cannot test interfaces directly (there is no behavior to test), but you can create an abstract base test class for checking that implementations of an interface follow its contract.
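A sketch of that pattern (the interface and class names are hypothetical): the abstract base class defines the contract tests once, and each implementation supplies its own instance.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import org.junit.jupiter.api.Test;

// Hypothetical interface whose contract every implementation should honor.
interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

// The contract is written once against the interface...
abstract class KeyValueStoreContractTest {

    protected abstract KeyValueStore newStore();

    @Test
    void returnsWhatWasPut() {
        KeyValueStore store = newStore();
        store.put("k", "v");
        assertEquals("v", store.get("k"));
    }

    @Test
    void returnsNullForMissingKeys() {
        assertNull(newStore().get("missing"));
    }
}

// ...and each implementation gets the whole suite by extending the base class.
class InMemoryStoreTest extends KeyValueStoreContractTest {
    @Override
    protected KeyValueStore newStore() {
        return new KeyValueStore() {
            private final java.util.Map<String, String> map = new java.util.HashMap<>();
            public void put(String key, String value) { map.put(key, value); }
            public String get(String key) { return map.get(key); }
        };
    }
}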
EDIT: Also, if you don't already have unit tests, check out Working Effectively with Legacy Code; it is a must for testing code that is not set up well for testing.
Mocking is a good way to simulate system tests in unit testing; by replacing (mocking) the resources on which a component depends, you can perform unit testing in a "system-like" environment without needing the entire system to be constructed.
As to your specific questions: generally, you shouldn't be using unit tests to test private functions; if they're private, they're private to the class. If you need to test something, test a public method that uses that private method to do something. Avoiding potentially problematic side effects is best done using either a complete test environment (which can easily be wiped back to a "virgin" state) or mocking, as described above. And testing interfaces is done by, well, testing the interface methods.
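A tiny sketch of that advice (the names are hypothetical): the private helper is exercised through the public method that uses it, so no visibility tricks are needed.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugGeneratorTest {

    static class SlugGenerator {
        // Public entry point; this is what the test calls.
        String slugFor(String title) {
            return normalize(title).replace(' ', '-');
        }

        // Private helper; it is covered indirectly via slugFor().
        private String normalize(String title) {
            return title.trim().toLowerCase();
        }
    }

    @Test
    void privateNormalizationIsCoveredThroughThePublicMethod() {
        assertEquals("hello-world", new SlugGenerator().slugFor("  Hello World "));
    }
}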
Firstly, if you already have a large system that doesn't have any unit tests, and you're planning on adding some, then allow me to offer some general advice.
From maintaining the system and working with it, you'll probably already know the areas of the system which tend to be buggiest, which tend to change often and which tend not to change very much. If you don't, you can always look through the source control logs (you are using source control, right?) to find out where most of the bug fixes and changes are concentrated. Focus your testing efforts on these classes and methods. There's a general rule called the 80/20 rule which is applicable to a whole range of things, this being one of them.
It says that, roughly on average, you should be able to cover 80 percent of the offending cases by doing just 20% of the work. That is, by writing tests for just 20% of the code, you can probably catch 80% of the bugs and regressions. That's because most of the fragile code, commonly changed code and worst offending code makes up just 20% of the codebase. In fact, it may be even less.
You should use JUnit to do this, and you should use something like JMock or another mocking library to ensure you're testing in isolation. For system/integration testing, that is, testing things while they're working together, I can recommend FitNesse; I've had good experience with it in the past. It allows you to write your tests in a web browser using simple table-like layouts, where you can easily define your inputs and expected outputs. All you have to do is write a small backing class called a fixture, which handles the creation of the components.
Private functions will be tested through the public functions that call them. Your test of the public function only cares that the result returned is correct.
When dealing with APIs (to other packages, URLs, or even to the file system/network/database) you should mock them. A good unit test should run in a few milliseconds, not seconds, and mocking is the only way to achieve that. It also means that bugs between packages can be dealt with much more easily than logical bugs at the functional level. For Java, EasyMock is a very good mocking framework.
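A small sketch of that style with EasyMock (the UserDirectory interface and Greeter class are made up for illustration):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class GreeterTest {

    // Made-up dependency that would normally hit a network or database.
    interface UserDirectory {
        String nameFor(String userId);
    }

    static class Greeter {
        private final UserDirectory directory;
        Greeter(UserDirectory directory) { this.directory = directory; }
        String greet(String userId) { return "Hello, " + directory.nameFor(userId) + "!"; }
    }

    @Test
    void greetsUserByName() {
        UserDirectory directory = createMock(UserDirectory.class);
        expect(directory.nameFor("42")).andReturn("Alice");
        replay(directory);

        assertEquals("Hello, Alice!", new Greeter(directory).greet("42"));
        verify(directory);
    }
}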
You may have a look at this list: "Tools for regression testing / test automation of database centric java application?" for a list of interesting tools.
As you already seem to use JUnit extensively, you're already "test infected", which is a good start...
In my personal experience, the most difficult thing to manage is data; I mean controlling very precisely the data against which the tests are run.
The lists of tools given before are useful. From personal experience, these are the ones I find most useful:
Mocking - Mockito is an excellent implementation and has clever techniques to ensure you only have to mock the methods you really care about.
Database testing - DbUnit is indispensable for setting up test data and verifying database interactions.
Stress testing - JMeter - once you see past the slightly clunky GUI, this is a very robust tool for setting up scenarios and running stress tests.
As for the general approach: start by trying to get tests running for the usual "happy paths" through your application; these can form a basis for regression testing and performance testing. Once this is complete, you can start looking at edge cases and error scenarios.
That said, this level of testing should remain secondary to good unit testing.
Good luck!
