Unit tests for third-party REST services - Java

I have written unit tests for a third-party REST API. These tests are what I would call live tests, in that they exercise the real REST API responses and require valid credentials. This is necessary because the documentation provided by the third party is not up to date, so it's the only way of knowing what the responses will be. Obviously, I can't use these as unit tests because they actually connect externally. Where would be a good place to put these tests, or how should I separate them from the mocked unit tests?
Currently I have to comment them out when I check them in so that they don't get run by the build process.

I tend to use assumeTrue for this sort of test and pass a system property to the test run. The start of one of your tests would then be:
import static org.junit.Assume.assumeTrue;

@Test
public void remoteRestTest()
{
    // "true".equals(...) rather than ....equals("true") avoids a
    // NullPointerException when the property is not set at all, so the
    // test is skipped instead of erroring out.
    assumeTrue("true".equals(System.getProperty("run.rest.tests")));
    ...
}
This will only allow the test to run if you pass -Drun.rest.tests=true to your build.

What you are looking for are integration tests. While the scope of a unit test is usually a single class, the scope of an integration test is a whole component in its environment, which includes the availability of external resources such as your remote REST service. Yes, you should definitely keep integration tests separate from unit tests. How this can be done in your environment depends on your build process.
For instance, if you work with Maven, there is the Maven Failsafe Plugin, which targets integration testing in your build process.

Related

void method in Java Integration Tests

I have some experience with unit tests in Java and have now started to write integration tests. However, I have some trouble understanding integration tests and writing them. Here are the points I would like clarified:
1. In my Java (Spring Boot based) project, should I write integration tests for controllers, or is it also OK to write integration tests for services (because some methods are not called from a controller)?
2. How can I test a void method in a service with an integration test? I have not found a proper example on the web, and I thought there was no need or way to test a void method via an integration test. Any clarification, please?
OK, so here are the best practices in the area.
should I write integration tests for controllers, or is it also OK to write integration tests for services
Most integration tests run against services. This is because each service can support several controllers and, in general, multiple interfaces. For controllers you write just the happy-path tests at the integration level.
Any tests for controller-specific exceptions go into the controller unit test, where you mock out the service, as in the sketch below.
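A minimal sketch of such a controller unit test, assuming JUnit 5 and Mockito (UserController, UserService, find, getUser and UserNotFoundException are all hypothetical names):

import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class UserControllerTest {

    @Test
    void rejectsUnknownUser() {
        // the service is mocked, so no business logic or database is involved
        UserService service = mock(UserService.class);
        when(service.find(42L)).thenReturn(null);

        UserController controller = new UserController(service);

        // only the controller-specific exception behaviour is asserted
        assertThrows(UserNotFoundException.class, () -> controller.getUser(42L));
    }
}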
How can I test a void method in a service with an integration test?
You check its side effect. For example, you use a repository class to see if the relevant data has been persisted to the database.
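For example, a Spring Boot integration test for a void method could look like this (a minimal sketch; UserService, UserRepository, deactivate and the user with id 42 are hypothetical and would come from your code and test fixtures):

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class UserServiceIntegrationTest {

    @Autowired
    private UserService userService;

    @Autowired
    private UserRepository userRepository;

    @Test
    void deactivateMarksUserInactiveInDatabase() {
        // deactivate(...) returns void, so the assertion targets its
        // side effect: the persisted state of the user
        userService.deactivate(42L);

        User user = userRepository.findById(42L).orElseThrow(AssertionError::new);
        assertFalse(user.isActive());
    }
}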
Does this solve your problem? Let me know in the comments.

Writing System Tests in JUnit

Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and WebSocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit 5.
The tests usually consist of a @BeforeAll, in which the system is started in a configuration specific to the test class (which takes around a minute), followed by a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods somewhat awkwardly: the time they take isn't shown in the report; if they fail, it messes with the count of tests; the failure isn't descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is somewhat orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy in JUnit 5) that enforces boot to run before any other test, as sketched below.
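In JUnit 5 that order dependency can be declared roughly like this (a sketch of the idea; the second test method is a placeholder):

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class SystemBootTest {

    @Test
    @Order(1)
    void boot() {
        // starts the system; its duration and any failure now show up
        // in the report like a normal test
    }

    @Test
    @Order(2)
    void someScenario() {
        // runs against the system booted in boot()
    }
}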
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers trying to troubleshoot. When I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit 5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests that would be filtered out as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem than something specific to JUnit 5. To skip the very long boot-up, you can mock some components where possible. Having the booting of the system as a test does not make sense, because other tests depend on it; it is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently of the other tests.
Another option is to group this kind of test, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server). Whether those tests should be part of the regular build on your local machine really depends on the specific use case.
The third option is to try to reduce the boot time, if possible. This is the option to take if you can't use mocks/stubs or exclude those tests from the regular build.

Is there a common way to validate if an (XML-based) Spring configuration is valid?

Question: Is there a common way to validate if an (XML-based) Spring configuration is valid?
Further explanation:
With "valid" I mean not if the xml itself is valid (I don't talk about xsd validation), I mean more "logical valid", e.g. if all referenced classes are available, or if a specific reference is available / could be resolved.
The background of this question is a QA process within a CI environment for a Spring MVC application:
Assume a developer has a typo in a class name, or a reference that is not unique in a web-context configuration file, and commits this change. Now an automated build is triggered:
1. The application compiles successfully.
2. Unit tests are "green" (since they don't need any Spring configuration).
3. Integration tests are "green" (since the integration tests do not rely on web-context configurations).
4. Functional / regression testing starts.
In the current setup we would only notice this simple typo in step 4 - but it is quite time-consuming to reach this point.
It would be great to have a mechanism / tool that can validate whether a Spring context can be loaded, in order to save some time.
integration tests do not rely on web-context configurations
My integration tests run against the deployed application. If you want to test the web context then... test it. The specifics depend on what you want to test.
You can use Maven or whatever build tool you have to deploy your app in a test environment and run the integration tests afterwards.
You can use a lightweight server like Grizzly and reuse it across test classes, etc.
Since those tests are costly, I usually have one that checks that the app can be deployed and the context starts. I use Grizzly and run this test alongside the rest of the unit tests so issues are detected as soon as possible. Other similar test cases can be added depending on the situation.
A simple way to "pre-validate" the XML configuration before integration testing would be to use a JUnit test using SpringJUnit4ClassRunner. This is available in the spring-test JAR.
e.g.
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/foo.xml" })
public class XmlValidate {

    @Test
    public void loadsWithoutError() {
        // nothing to do - SpringJUnit4ClassRunner will throw an exception
        // if the context cannot be built (class not found, unresolved
        // references, etc.)
    }
}

Integration tests randomly fail or throw errors when run by Maven

I am running a suite of integration tests using Maven, and about 10% of the tests fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE (IntelliJ IDEA), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the tests running in an inconsistent order, or by a race condition between two tests running in parallel via forked test JVMs. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an alternate state, causing Test #1 to fail. This is very common with database tests, especially when one or more of them alter the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class, but if you run all the tests in the com.example package, or all tests in the project, sometimes (or even always) a test in FooTest fails.
The fix is to ensure your tests are always guaranteed a consistent state when run. (That is a guiding principle of good unit tests.) You need to pay attention to test setup and tear-down via the @Before, @BeforeClass, @After, and @AfterClass annotations (or their TestNG equivalents). I recommend googling database unit-testing best practices. For database tests, running each test in a transaction can prevent this type of issue: the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests, as sketched below. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2, Unit Testing support Classes, and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (this last one applies when running with Spring contexts). There are also other database testing tools out there, such as DbUnit.
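A minimal sketch of the transactional style (the context file location and the foo table are hypothetical; jdbcTemplate and countRowsInTable are inherited from the Spring base class):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration(locations = { "/test-context.xml" })
public class FooRepositoryTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertedRowIsVisibleInsideTheTransaction() {
        int before = countRowsInTable("foo");

        jdbcTemplate.update("INSERT INTO foo (name) VALUES (?)", "bar");

        assertEquals(before + 1, countRowsInTable("foo"));
        // the surrounding transaction is rolled back after the test,
        // so every test starts from the same database state
    }
}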

Exactly what is integration testing - compared with unit testing

I am starting to use unit testing in my projects, and am writing tests that are testing at the method/function level.
I understand this and it makes sense.
But what is integration testing? From what I read, it moves the scope of testing up, to test larger features of an application.
This implies that I write a new test suite to test larger things such as (on an e-commerce site) checkout functionality, user login functionality, and basket functionality. So here I would have three integration tests?
Is this correct? If not, can someone explain what is meant?
Also, does integration testing involve the UI (a web application context here) and employ the likes of Selenium for automation? Or is integration testing still at the code level, tying together different classes and areas of the code?
Consider a method like this: PerformPayment(double amount, PaymentService service).
A unit test would be a test where you create a mock for the service argument.
An integration test would be a test where you use the actual external service, so that you test whether that service responds correctly to your input data.
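A sketch of the unit-test side with Mockito (PerformPayment and PaymentService come from the example above; the Checkout class and the pay method are hypothetical):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class CheckoutTest {

    @Test
    public void performPaymentDelegatesToTheService() {
        // the external payment provider is replaced by a mock
        PaymentService service = mock(PaymentService.class);

        new Checkout().performPayment(100.0, service);

        // the unit test asserts only on the interaction with the mock;
        // an integration test would call the real service instead
        verify(service).pay(100.0);
    }
}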
Unit tests are tests in which the tested code lives inside a single class. Other dependencies of this class are mocked or ignored, because the focus is on testing the code inside the class.
Integration tests are tests that involve disk access, application servers and/or frameworks of the target application. Integration tests run in isolation from other external services.
Let me give an example. You have a Spring application, and you have written a lot of unit tests to guarantee that the business logic works properly. Perfect. But what kind of tests do you have to guarantee that:
Your application server can start
Your database entities are mapped correctly
All the necessary annotations are working as expected
Your Filter is working properly
Your API accepts the expected kind of data
Your main feature really works in the basic scenario
Your database queries work as expected
Etc.
This can't be done with unit tests, but you, as a developer, also need to guarantee that all of these things work. That is the objective of integration tests.
The ideal scenario is for the integration tests to run independently of the other external systems that the application uses in a production environment. You can accomplish that by using WireMock for REST calls, an in-memory database like H2, mocking out beans of the specific classes that call external systems, etc.
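For instance, with WireMock an external REST dependency can be stubbed inside the test JVM (a sketch; the port, URL and payload are made up):

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.configureFor;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class ExternalUserServiceStub {

    public static WireMockServer start() {
        // a fake HTTP server stands in for the real external system
        WireMockServer server = new WireMockServer(8089);
        server.start();
        configureFor("localhost", 8089);

        stubFor(get(urlEqualTo("/users/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"name\": \"Jane\"}")));
        return server;
    }
}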
A little curiosity: Maven has a specific plugin for integration tests, the Maven Failsafe Plugin, which executes test classes whose names end with IT (by default). Example: UserIT.java.
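So one of the tests above could simply be named with that suffix to keep it out of the regular Surefire (unit test) run, roughly like this (the class itself is hypothetical):

import org.junit.Test;

// With Failsafe's default configuration, the IT suffix makes this class run
// in Maven's integration-test phase rather than in the unit-test phase.
public class UserIT {

    @Test
    public void mainFeatureWorksInBasicScenario() {
        // exercises the running application end to end
    }
}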
The confusion about what Integration Test means
Some people understand "integration test" as a test involving "integration" with the other external systems that the current system uses. That kind of test can only be done in an environment where you have all the systems up and running. Nothing fake, nothing mocked.
This might just be a naming problem, but then we lack tests (what I understand as integration tests) that cover the items described above. Instead, we jump from the definition of unit tests (testing a single class) to "integration" tests (all the real systems up). So what is in the middle, if not integration tests?
You can read more about this confusion in this article by Martin Fowler. He splits the term "integration tests" into two meanings, "broad" and "narrow" integration tests:
narrow integration tests
exercise only that portion of the code in my service that talks to a separate service
use test doubles of those services, either in process or remote
thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)
broad integration tests
require live versions of all services, requiring substantial test environment and network access
exercise code paths through all services, not just code responsible for interactions
You can get even more details in the article.
Unit testing is where you test your business logic within a class or a piece of code. For example, if a particular section of your method should call a repository, your unit test checks that the repository interface method is called exactly the number of times you expect; otherwise the test fails.
Integration testing, on the other hand, tests that the actual service or repository (database) behaviour is correct. It checks that, based on the data you pass in, you retrieve the expected results. This ties in with your unit tests, so that you know what data you should retrieve and what is done with that data.
As far as I can see, the Selenium tests should be in another test suite. Those tests are the most fragile tests by nature, even if you write them correctly. Here you can use SpecFlow or some other specification-by-example framework. Perhaps you could call these tests acceptance tests. They are for developers and business experts alike.
The integration, or module, tests normally do not use the UI. Integration tests exercise several classes working together. These are lower-level tests than the Selenium tests and a bit easier to maintain. These tests are for developers only.
Here are a couple of constraints that a good unit test satisfies. Meeting these constraints also requires good, testable code.
No I/O - disk or network
Only one assertion (if multiple, they should be minor variations of each other)
Does not exercise (cover) much more production code than what it asserts
These constraints usually don't apply to integration tests.
