Spring microservice end-to-end testing - java

I'd like to write an end-to-end test for a pipeline built with Spring Boot.
Consider two microservices A and B, where B consumes output from A and exposes a RESTful API. They are connected using RabbitMQ and rely on an external database.
I would like to achieve something like:
Create a new project that includes both microservices
Create a test configuration that points the JPA provider at an in-memory database
Inject a custom MQ implementation into A and B to connect them (RabbitMQ is not tightly coupled)
Write tests
Essentially, replacing the supporting parts (the white parts of my diagram) with mocks and testing the coloured parts.
Does this make sense? Test coverage of A and B is not complete and such a test would guarantee that the contract between A and B holds. Are there better ways?

If you have the time, I suggest you read this:
https://martinfowler.com/articles/microservice-testing/
The purpose of end-to-end testing is not to achieve 100% line coverage.

My first thought on this topic is that if it is an end-to-end test, then you should forget which framework you use, because in this context that is an implementation detail. So I would create a test project, which is essentially a docker-compose file that defines five containers:
service A
service B
RabbitMQ
maybe the database too, unless you want to stick to the in-memory approach
and a separate container for running the tests
From this perspective you have two ways of handling environment-specific configuration (a compose sketch follows this list):
you define test-specific config in a separate Spring profile, and you activate it by setting the SPRING_PROFILES_ACTIVE env var in the docker-compose file
you pass your config in a properties file and mount it in the docker-compose file
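For illustration, a minimal docker-compose sketch of the first option; the image names, ports, and profile name are assumptions, and the database container is omitted:

    # Hypothetical compose file; image names and the "e2e-test" profile are placeholders.
    version: "3"
    services:
      service-a:
        image: example/service-a:latest
        environment:
          SPRING_PROFILES_ACTIVE: e2e-test   # activates the test-specific Spring profile
        depends_on: [rabbitmq]
      service-b:
        image: example/service-b:latest
        environment:
          SPRING_PROFILES_ACTIVE: e2e-test
        depends_on: [rabbitmq, service-a]
      rabbitmq:
        image: rabbitmq:3
      test-runner:
        image: example/e2e-tests:latest      # the JUnit + RestAssured suite
        depends_on: [service-a, service-b]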
The test runner can be kept simple; I would write a JUnit-based test suite which uses RestAssured or something similar.
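A minimal sketch of such a test, assuming service B is reachable under its compose service name and exposes a hypothetical /api/results endpoint:

    // End-to-end check against service B's REST API; host, path and expected
    // body are illustrative assumptions.
    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import org.junit.jupiter.api.Test;

    class PipelineEndToEndTest {

        @Test
        void serviceBExposesResultProcessedFromServiceA() {
            given()
                .baseUri("http://service-b:8080")      // container name from docker-compose
            .when()
                .get("/api/results/1")
            .then()
                .statusCode(200)
                .body("status", equalTo("PROCESSED")); // asserts the A -> B contract held
        }
    }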
I hope this gives you a clue. Of course, it is a broad topic, so going into every detail doesn't fit into an SO answer.

I would recommend using Spring Cloud Contract. It helps you maintain the contracts between your microservices (producer-consumer contracts).
It is available for both HTTP-based and event-based communication.
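As a sketch of the consumer side, a Stub Runner test could look like the following; the coordinates com.example:service-a, the port, and the endpoint are invented for illustration:

    // Hypothetical consumer-side contract test: Stub Runner serves stubs
    // generated from the producer's published contracts.
    import static org.assertj.core.api.Assertions.assertThat;

    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
    import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.client.RestTemplate;

    @SpringBootTest
    @AutoConfigureStubRunner(
            ids = "com.example:service-a:+:stubs:8100",
            stubsMode = StubRunnerProperties.StubsMode.LOCAL)
    class ServiceAContractTest {

        @Test
        void serviceAStubHonoursPublishedContract() {
            // Fails fast if B's expectations drift from A's published contract.
            ResponseEntity<String> response = new RestTemplate()
                    .getForEntity("http://localhost:8100/api/items/1", String.class);
            assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
        }
    }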

Another approach is to test each component in isolation and mock the dependent service on the other side of the RabbitMQ server. You can do this using an async API simulation/mocking tool.
For example, you can use Traffic Parrot, which can be run in a Docker container as part of your CI/CD pipeline.
Here is a video demo of how you can use the tool to send mock response messages to a RabbitMQ queue in an async request/response pattern. There is also a corresponding tutorial available to follow.

Related

Is there some way to roll back Redis data after Java test code finishes?

Is there some way to roll back the data in Redis after test code runs?
I am working on a Java web project using Spring Boot 2.
I know Redis does not provide a rollback operation.
So using another Redis (such as an embedded Redis) in tests would ensure the test code does not change the real Redis data. I could also make a mocked Redis client that reads from the test Redis first and, if no data is found, falls back to the original Redis.
Is this workable?
And is there a ready-made package that implements this?
Or is there a simpler way to roll back?
First of all, you should be clear on terms. If you are doing a real (narrow) unit test, then you absolutely decouple your code under test from any "real" resource, such as databases, file systems, or remote servers. In other words: you mock out such dependencies.
Otherwise, you are doing functional testing, and there are simply too many options to give a meaningful answer here. One example would be Redis Mock.
But as said: the real answer is to get clear on your requirements. You should write unit tests that mock out dependencies at a lower level, directly at the (single) class under test.
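As a sketch of the embedded-Redis approach from the question, using the community embedded-redis library (the artifact, port, and keys are assumptions):

    // Sketch using the community "embedded-redis" library (e.g.
    // com.github.kstyrc:embedded-redis); port 6370 is an arbitrary test port.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.AfterAll;
    import org.junit.jupiter.api.BeforeAll;
    import org.junit.jupiter.api.Test;
    import redis.clients.jedis.Jedis;
    import redis.embedded.RedisServer;

    class EmbeddedRedisIsolationTest {

        private static RedisServer redisServer;

        @BeforeAll
        static void startRedis() throws Exception {
            redisServer = new RedisServer(6370); // throwaway instance, fresh every run
            redisServer.start();
        }

        @AfterAll
        static void stopRedis() throws Exception {
            redisServer.stop();
        }

        @Test
        void writesNeverTouchTheRealRedis() {
            try (Jedis jedis = new Jedis("localhost", 6370)) {
                jedis.set("key", "value");
                assertEquals("value", jedis.get("key"));
            }
            // No rollback needed: the embedded server dies with the test JVM.
        }
    }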

Microservices - Stubbing/Mocking

I am developing a product using microservices and am running into a bit of an issue. In order to do any work, I need to have all 9 services running on my local development environment. I am using Cloud Foundry to run the applications, but when running locally I am just running the Spring Boot jars themselves. Is there any way to set up a more lightweight environment so that I don't need everything running? Ideally, only the service I am currently working on would have to be real.
I believe this is a matter of your testing strategy. If you have a lot of microservices in your system, it is not wise to always perform end-to-end testing at development time: it costs you productivity, and the setup is usually complex (as you have observed).
You should really think about what it is you want to test. Within one service, it is usually good to decouple core logic from the integration points with other services. Ideally, you should be able to write simple unit tests for your core logic. If you want to test integration points with other services, use a mocking library (this overview looks promising: http://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks/).
If you don't already have one, I would highly recommend setting up a separate staging area with all microservices running. You should perform all your end-to-end testing there before deploying to production.
This post from Martin Fowler has a more comprehensive take on microservice testing strategy:
https://martinfowler.com/articles/microservice-testing
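To make the mocking suggestion above concrete, here is a minimal sketch with Mockito; InventoryClient and OrderService are hypothetical names:

    // Unit-testing core logic with a mocked integration point (Mockito).
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        interface InventoryClient {               // integration point with another service
            boolean isInStock(String sku);
        }

        static class OrderService {               // core logic under test
            private final InventoryClient inventory;
            OrderService(InventoryClient inventory) { this.inventory = inventory; }
            boolean canPlaceOrder(String sku) { return inventory.isInStock(sku); }
        }

        @Test
        void placesOrderWhenItemInStock() {
            InventoryClient inventory = mock(InventoryClient.class);
            when(inventory.isInStock("ABC-1")).thenReturn(true);

            assertTrue(new OrderService(inventory).canPlaceOrder("ABC-1"));
            verify(inventory).isInStock("ABC-1"); // the integration call was made
        }
    }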
It boils down to the test technique you use. Here is my recent answer on another topic that you may find useful: https://stackoverflow.com/a/44486519/2328781.
In general, I think that WireMock is a good choice for the following reasons (a stub sketch follows this list):
It has out-of-the-box support in Spring Boot.
It has out-of-the-box support in Spring Cloud Contract, which opens up a very powerful technique called Consumer Driven Contracts.
It has a recording feature: set up WireMock as a proxy and make requests through it, and it will generate stubs for you automatically based on your requests and responses.
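A minimal sketch of stubbing a peer microservice with WireMock; the port, path, and payload are assumptions:

    // WireMock stands in for a peer service on a fixed local port.
    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class PeerServiceStubTest {

        private WireMockServer peer;

        @BeforeEach
        void setUp() {
            peer = new WireMockServer(8089);
            peer.start();
            peer.stubFor(get(urlEqualTo("/users/42"))
                    .willReturn(aResponse()
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":42,\"name\":\"Jane\"}")));
        }

        @AfterEach
        void tearDown() {
            peer.stop();
        }

        @Test
        void serviceUnderTestCallsTheStub() {
            // Point the service under test at http://localhost:8089 and exercise it here.
        }
    }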
There are multiple tools out there that let you create mocked versions of your microservices.
When I encountered this exact problem myself, I decided to create my own tool tailored for microservice testing. The goal is to never have to run all microservices at once, only the one you are working on.
You can read more about the tool and how to use it to mock microservices here: https://mocki.io/mock-api-microservices. If you only want to run them locally, that is possible using the open-source CLI tool.
It can be solved if your microservices allow passing metadata along with requests.
A good microservice architecture should use central service discovery, and every service should be able to accept a metadata map along with the request payload. Known fields of this map can be interpreted and modified by a service and then passed on to the next service.
The most popular use of per-request metadata is request tracing (i.e., collecting the tree of nodes used to process a request, with timings for every node), but it can also be used to tell the entire system which nodes to use.
Thus the plan is (a sketch of the header propagation follows this list):
register your local node in the dev environment's service discovery
send a request to the entry node of your system along with metadata telling everyone to use your local service instance instead of the default one
the metadata will propagate, your local node will be called by the dev environment, and it will pass the processed results back to the dev environment
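A hypothetical sketch of that propagation using a Spring ClientHttpRequestInterceptor; the header name X-Route-Override and the ThreadLocal holder are invented for illustration:

    // Forwards a routing-override header on outgoing calls so service discovery
    // downstream can route matching calls to a developer's local instance.
    import java.io.IOException;

    import org.springframework.http.HttpRequest;
    import org.springframework.http.client.ClientHttpRequestExecution;
    import org.springframework.http.client.ClientHttpRequestInterceptor;
    import org.springframework.http.client.ClientHttpResponse;

    public class RouteOverrideInterceptor implements ClientHttpRequestInterceptor {

        // A servlet filter would populate this from the incoming request's header.
        public static final ThreadLocal<String> ROUTE_OVERRIDE = new ThreadLocal<>();

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                ClientHttpRequestExecution execution) throws IOException {
            String target = ROUTE_OVERRIDE.get();
            if (target != null) {
                // e.g. "service-b=http://192.168.0.10:8080"; each hop copies the
                // header forward, so the override reaches every node in the call tree.
                request.getHeaders().add("X-Route-Override", target);
            }
            return execution.execute(request, body);
        }
    }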
Alternatively:
use code generation for inter-service communication to reduce the risk of failures caused by mistakes in RPC code
resort to integration tests, mocking all client APIs for the microservice under development
fully automate deployment of your system to your local machine. You will possibly need to run nodes with reduced memory (which is generally OK, as memory is commonly consumed only under load) or buy more RAM.
Another approach would be to use/deploy an app that maps paths/URLs to JSON response files. I personally haven't used it, but I believe http://wiremock.org/ might help you.
For Java microservices, you could try stubby4j. It mocks the JSON responses of other microservices using a stub server. If you feel that mocking is not enough to cover all the features of your microservices, set up a local Docker environment to deploy the dependent microservices.

Regression component tests with Cucumber. Is there any boundary to the layers that should be tested?

I found myself last week having to start thinking about how to refactor an old application that only has unit tests. My first idea was to add some component test scenarios with Cucumber to get familiarised with the business logic and to ensure I don't break anything with my changes. But at that point I had a conversation with one of the architects in the company I work for that made me wonder whether it was worth it, and what code I actually had to test.
This application has many different types of endpoints: REST endpoints both inbound and outbound, Oracle stored procedures, and JMS topics and queues. It's deployed as a WAR file to a Tomcat server, and the connection factory to the broker and the datasource to the database are configured in the server and fetched using JNDI.
My first idea was to load the whole application inside an embedded Jetty, pointing to the real web.xml so everything is loaded as it would be in a production environment, but then mocking the connection factory and the datasource. By doing that, all the connectivity logic to the infrastructure where the application is deployed would be tested. Thinking about hexagonal architecture, this seems like too much effort, bearing in mind that those are only ports whose logic should only be about transforming received data into application data. Shouldn't this just be unit tested?
My next idea was to just mock the stored procedures and load the Spring XMLs in my test without any web server, which makes it easier to mock classes. For this I would use libraries like Spring MockMvc for the REST endpoints and Mockrunner for JMS. But again, this approach would still test some adapters and complicate the tests, as their results would be XML and JSON payloads. The transformations done in this application are quite heavy, where the same message type can contain different versions of a class (each message can contain many complex objects that implement several interfaces).
So right now I'm thinking that maybe the best approach would be to create my tests from the entry points to the application, the services called from the adapters, and verify that the services responsible for sending messages to the broker or calling other REST endpoints are actually invoked. Then ensure there are proper unit tests for the endpoints, and verify everything works once deployed by providing some smoke tests that are executed in a real environment. This would test the connectivity logic, and the business logic would be tested in isolation, without caring whether an adapter is added or removed.
Is this approach correct? Would I be leaving something untested this way? Or is it still too much, and should I just trust the unit tests?
Thanks.
Your application and environment sound quite complicated. I would definitely want integration tests. I'd test the app outside-in as follows:
Write a smoke-test suite that runs against the application in the actual production environment. Cucumber would be a good tool to use. That suite should only do things that are safe in production, and should be as small as possible while giving you confidence that the application is correctly installed and configured and that its integrations with other systems are working.
Write an acceptance test suite that runs against the entire application in a test environment. Cucumber would be a good choice here too.
I would expect the acceptance-test environment to include a Tomcat server with test versions of all services that exist in your production Tomcat, and a database with a schema, stored procedures, etc. identical to production (but not production data). Handle external dependencies that you don't own by stubbing and mocking, by using a record/replay library such as Betamax, and/or by implementing test versions of them yourself. Acceptance tests should be free to do anything to data, and they shouldn't have to worry about the availability of services that you don't own.
Write enough acceptance tests to both describe the app's major use cases and to test all of the important interactions between the parts of the application (both subsystems and classes). That is, use your acceptance tests as integration tests. I find that there is very little conflict between the goals of acceptance and integration tests. Don't write any more acceptance tests than you need for specification and integration coverage, however, as they're relatively slow.
Unit-test each class that does anything interesting whatsoever, leaving out only classes that are fully tested by your acceptance tests. Since you're already integration-testing, your unit tests can be true unit tests which stub or mock their dependencies. (Although there's nothing wrong with letting a unit-tested class use real dependencies that are simple enough not to cause issues in the unit tests.)
Measure code coverage to ensure that the combination of acceptance and unit tests covers all your code.
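To make the Cucumber suggestion concrete, a minimal cucumber-jvm step-definition sketch for a smoke scenario; the base URL and health endpoint are assumptions:

    // Step definitions for a production-safe, read-only smoke scenario.
    import static org.hamcrest.Matchers.equalTo;

    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;
    import io.restassured.RestAssured;
    import io.restassured.response.Response;

    public class SmokeSteps {

        private Response response;

        @When("the health endpoint is called")
        public void theHealthEndpointIsCalled() {
            response = RestAssured.get("http://app.example.com/actuator/health");
        }

        @Then("the application reports UP")
        public void theApplicationReportsUp() {
            // Safe in production: verifies the app is installed and wired up.
            response.then().statusCode(200).body("status", equalTo("UP"));
        }
    }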

Integration Test of REST APIs with Code Coverage

We have built a REST API that exposes a bunch of business services; a business service may invoke other platform/utility services to perform database reads and writes, service authorization, etc.
We have deployed these services as WAR files in Tomcat.
We want to test this whole setup using an integration test suite, which we would also like to treat as a regression test suite.
What would be a good approach to perform integration testing on this and any tools that can speed up the development of suite? Here are few requirements we think we need to address:
Ability to define integration test cases which exercise business scenarios.
Set up the DB with test data before suite is run.
Invoke the REST API that is running on a remote server (Tomcat)
Validate the DB post test execution for verifying expected output
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
At my work we have recently put together a couple of test suites to test some RESTful APIs we built. Like your services, ours can invoke other RESTful APIs they depend on. We split it into two suites.
Suite 1 - Testing each service in isolation
Mocks any peer services the API depends on using Restito (see the sketch after the list of advantages below). Other alternatives include rest-driver, WireMock, pre-canned, and Betamax.
The tests, the service we are testing and the mocks all run in a single JVM
Launches the service we are testing in Jetty
I would definitely recommend doing this. It has worked really well for us. The main advantages are:
Peer services are mocked, so you needn't perform any complicated data setup. Before each test you simply use Restito to define how you want peer services to behave, just like you would with classes in unit tests with Mockito.
The suite is super fast as mocked services serve pre-canned in-memory responses. So we can get good coverage without the suite taking an age to run.
The suite is reliable and repeatable as it's isolated in its own JVM, so there's no need to worry about other suites/people mucking about with a shared environment while the suite is running and causing tests to fail.
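A minimal sketch of a Suite 1 style setup with Restito; the port, path, and payload are assumptions:

    // Restito serves pre-canned in-memory responses for a peer service.
    import static com.xebialabs.restito.builder.stub.StubHttp.whenHttp;
    import static com.xebialabs.restito.semantics.Action.contentType;
    import static com.xebialabs.restito.semantics.Action.ok;
    import static com.xebialabs.restito.semantics.Action.stringContent;
    import static com.xebialabs.restito.semantics.Condition.get;

    import com.xebialabs.restito.server.StubServer;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class IsolatedSuiteTest {

        private StubServer peerService;

        @BeforeEach
        void startStub() {
            peerService = new StubServer(8090).run(); // in-memory stand-in for the peer
            whenHttp(peerService)
                    .match(get("/accounts/1"))
                    .then(ok(), contentType("application/json"),
                          stringContent("{\"id\":1,\"balance\":100}"));
        }

        @AfterEach
        void stopStub() {
            peerService.stop();
        }

        @Test
        void serviceUnderTestReadsAccountFromPeer() {
            // Launch the service under test (e.g. in Jetty) pointing at
            // localhost:8090 and assert on its behaviour here.
        }
    }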
Suite 2 - Full End to End
Suite runs against a full environment deployed across several machines
API deployed on Tomcat in environment
Peer services are real 'as live' full deployments
This suite requires us to do data setup in peer services, which means tests generally take more time to write. As much as possible we use REST clients to do data setup in peer services.
Tests in this suite usually take longer to write, so we put most of our coverage in Suite 1. That being said there is still clear value in this suite as our mocks in Suite 1 may not be behaving quite like the real services.
With regards to your points, here is what we do:
Ability to define integration test cases which exercise business scenarios.
We use cucumber-jvm to define business scenarios for both of the above suites. These scenarios are plain-text English files that business users can understand, and they also drive the tests.
Set up the DB with test data before suite is run.
We don't do this for our integration suites, but in the past I have used Unitils with DbUnit for unit tests and it worked pretty well.
Invoke the REST API that is running on a remote server (Tomcat)
We use rest-assured, which is a great HTTP client geared specifically for testing REST APIs.
Validate the DB post test execution for verifying expected output
I can't provide any recommendations here, as we don't use any libraries to make this easier; we just do it manually. Let me know if you find anything.
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
We do not measure code coverage for our integration tests, only for our unit tests, so again I can't provide any recommendations here.
Keep your eyes peeled on our tech blog, as there may be more details there in the future.
You may also check out the tool named Arquillian. It's a bit difficult to set up at first, but it provides a complete runtime for integration tests (i.e., it starts its own container instance and deploys your application along with the tests) and provides extensions that solve your problems (invoking REST endpoints, feeding the databases, comparing results after the tests).
The JaCoCo extension generates coverage reports that can later be displayed, e.g., by the Sonar tool.
I've used it for a relatively small-scale JEE6 project and, once I had managed to set it up, I was quite happy with how it works.
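A minimal Arquillian sketch; the package name and archive contents are placeholders:

    // Arquillian builds and deploys the archive, then runs the test as a client.
    import java.net.URL;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.container.test.api.RunAsClient;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.arquillian.test.api.ArquillianResource;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.WebArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(Arquillian.class)
    public class RestEndpointIT {

        @ArquillianResource
        private URL baseUrl; // injected URL of the deployed application

        @Deployment(testable = false)
        public static WebArchive createDeployment() {
            // Arquillian starts the container and deploys this archive before the tests.
            return ShrinkWrap.create(WebArchive.class, "app.war")
                    .addPackages(true, "com.example.rest"); // hypothetical app package
        }

        @Test
        @RunAsClient
        public void endpointResponds() throws Exception {
            // Call baseUrl + "api/..." with your favourite HTTP client and assert.
        }
    }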

Simulating JMS - jUnit

I need to simulate JMS behavior while performing automated tests via Maven/Hudson. I was thinking about using a mock framework such as Mockito to achieve that goal, but maybe there is some easier tool which can accomplish this task? I have read a little bit about ActiveMQ, but from what I have found it requires installing a broker before using it. In my case it is important to have everything run by Maven only, because I don't have any privileges to install anything on the build server.
You can run ActiveMQ in embedded mode: the broker starts within your application and queues are created on the fly. You just need to add activemq.jar and run a few lines of code, as sketched below.
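A minimal sketch of that, using the vm:// transport so the broker lives inside the test JVM and nothing has to be installed:

    // Embedded ActiveMQ broker created on demand by the vm:// transport.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.junit.jupiter.api.Test;

    class EmbeddedJmsTest {

        @Test
        void sendAndReceiveThroughEmbeddedBroker() throws Exception {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("test.queue"); // created on the fly

                session.createProducer(queue).send(session.createTextMessage("ping"));
                TextMessage received =
                        (TextMessage) session.createConsumer(queue).receive(1000);

                assertEquals("ping", received.getText());
            } finally {
                connection.close();
            }
        }
    }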
On the other hand, there is the Mockrunner library that has support for JMS, although it was designed mainly for unit tests, not integration tests.
