I am developing a product using microservices and am running into a bit of an issue. In order to do any work, I need to have all 9 services running on my local development environment. I am using Cloud Foundry to run the applications, but when running locally I am just running the Spring Boot jars themselves. Is there any way to set up a more lightweight environment so that I don't need everything running? Ideally, only the service I am currently working on would need to be real.
I believe this is a matter of your testing strategy. If you have a lot of microservices in your system, it is not wise to always perform end-to-end testing at development time -- it costs you productivity and the setup is usually complex (as you have observed).
You should really think about what it is you want to test. Within one service, it is usually good to decouple the core logic from the integration points with other services. Ideally, you should be able to write simple unit tests for your core logic. If you want to test the integration points with other services, use a mocking library (a quick Google search shows this to be promising: http://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks/); a minimal sketch follows below.
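To make the idea concrete, here is a minimal Mockito sketch of mocking an integration point; OrderService and InventoryClient are hypothetical names standing in for your own core logic and the client of a neighbouring service:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        // Hypothetical client for the inventory microservice (normally an HTTP call)
        interface InventoryClient {
            boolean isInStock(String sku);
        }

        // Hypothetical core logic of the service under development
        static class OrderService {
            private final InventoryClient inventory;

            OrderService(InventoryClient inventory) {
                this.inventory = inventory;
            }

            boolean placeOrder(String sku, int quantity) {
                return inventory.isInStock(sku);
            }
        }

        @Test
        void rejectsOrderWhenItemIsOutOfStock() {
            // Mock the integration point instead of running the real inventory service
            InventoryClient inventoryClient = mock(InventoryClient.class);
            when(inventoryClient.isInStock("sku-123")).thenReturn(false);

            OrderService orderService = new OrderService(inventoryClient);

            assertFalse(orderService.placeOrder("sku-123", 1));
            verify(inventoryClient).isInStock("sku-123");
        }
    }

With this shape, none of the other 8 services needs to be running while you work on the core logic of the one you care about.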
If you don't have one already, I would highly recommend setting up a separate staging area with all the microservices running. You should perform all your end-to-end testing there before deploying to production.
This post from Martin Fowler has a more comprehensive take on microservice testing strategy:
https://martinfowler.com/articles/microservice-testing
It boils down to the testing technique you use. Here is my recent answer in another topic that you may find useful: https://stackoverflow.com/a/44486519/2328781.
In general, I think that WireMock is a good choice for the following reasons (a minimal stub sketch follows after this list):
It has out-of-the-box support in Spring Boot.
It has out-of-the-box support in Spring Cloud Contract, which gives you the possibility to use a very powerful technique called Consumer Driven Contracts.
It has a recording feature. Set up your WireMock instance as a proxy and make requests through it. This will generate stubs for you automatically based on your requests and responses.
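As a rough illustration (not tied to the question's services), a standalone WireMock stub can stand in for a neighbouring microservice like this; the /users/1 endpoint, port, and response body are made up for the example:

    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class UserServiceStub {

        public static void main(String[] args) {
            // Start WireMock on a fixed port and register a canned response
            WireMockServer server = new WireMockServer(options().port(8089));
            server.start();

            server.stubFor(get(urlEqualTo("/users/1"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\": 1, \"name\": \"Jane Doe\"}")));

            // Point the service under development at http://localhost:8089
            // instead of the real user microservice.
        }
    }

In a Spring Boot test you can get the same effect without starting the server by hand, for example via the @AutoConfigureWireMock annotation from spring-cloud-contract-wiremock.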
There are multiple tools out there that let you create mocked versions of your microservices.
When I encountered this exact problem myself I decided to create my own tool which is tailored for microservice testing. The goal is to never have to run all microservices at once, only the one that you are working on.
You can read more about the tool and how to use it to mock microservices here: https://mocki.io/mock-api-microservices. If you only want to run them locally, it is possible using the open source CLI tool.
It can be solved if your microservices allow passing metadata along with requests.
A good microservice architecture should use central service discovery, and every service should be able to accept a metadata map along with the request payload. Known fields of this map can be interpreted and modified by a service and then passed on to the next service.
The most popular use of per-request metadata is request tracing (i.e. collecting the tree of nodes used to process the request and the timings for every node), but it can also be used to tell the entire system which nodes to use.
The plan is therefore:
register your local node in the dev environment's service discovery
send a request to the entry node of your system along with metadata telling everyone to use your local service instance instead of the default one
the metadata will propagate, your local node will be called by the dev environment, and your local node will then pass the processed results back to the dev environment (a rough sketch of such propagation follows below)
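There is no single standard mechanism for this, so the following is only a rough Spring sketch under assumptions of my own: the override travels in a hypothetical X-Service-Override header, and a RestTemplate interceptor copies it onto every outgoing call so it flows through the call tree:

    import java.io.IOException;

    import org.springframework.http.HttpRequest;
    import org.springframework.http.client.ClientHttpRequestExecution;
    import org.springframework.http.client.ClientHttpRequestInterceptor;
    import org.springframework.http.client.ClientHttpResponse;

    // Copies the (hypothetical) routing override header onto every outgoing call,
    // so the override propagates from service to service.
    public class RoutingMetadataInterceptor implements ClientHttpRequestInterceptor {

        public static final String OVERRIDE_HEADER = "X-Service-Override";

        // In a real system this would be populated from the incoming request,
        // e.g. by a servlet filter writing into this ThreadLocal.
        private final ThreadLocal<String> currentOverride = new ThreadLocal<>();

        public void setOverride(String value) {
            currentOverride.set(value);
        }

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                            ClientHttpRequestExecution execution) throws IOException {
            String override = currentOverride.get();
            if (override != null) {
                request.getHeaders().add(OVERRIDE_HEADER, override);
            }
            return execution.execute(request, body);
        }
    }

A filter on the inbound side (not shown) would read the same header and set the ThreadLocal, and the service discovery lookup would honour the override when choosing which instance to call.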
Alternatively:
use code generation for inter-service communication to reduce the risk of failures caused by mistakes in RPC code
resort to integration tests, mocking all client APIs for the microservice under development
fully automate the deployment of your system to your local machine. You will possibly need to run nodes with reduced memory (which is generally OK, as memory is commonly consumed only under load) or buy more RAM.
One approach would be to deploy an app which maps paths/URLs to JSON response files. I personally haven't used it, but I believe http://wiremock.org/ might help you.
For Java microservices, you should try stubby4j. It mocks the JSON responses of other microservices using a Stubby server. If you feel that mocking is not enough to cover all the features of your microservices, you should set up a local Docker environment to deploy the dependent microservices.
Related
How do I launch and control multiple microservices on the same system in Java? Is there an existing Java controller that can do this?
I have an application server which consists of multiple microservices that run on the same OS instance/system. Each microservice is Spring Boot based, though there are a few exceptions. I'm looking for an already-written controller which will start each service in order and restart it if a service fails. I'm not looking for a container-based approach, but rather a controller which runs as a process on Windows.
I don't want to create and maintain Windows service entries for each service, as that is error prone and tough to keep configured correctly. Getting the startup order right is also difficult.
I can write one myself but I'd rather not re-invent the wheel if I can find something that does what I need.
I need to use dynamic keystores in my Spring Boot application because at any moment I might have to change them, and I don't want to have any downtime.
From what I saw in this post, I have three options:
Writing a custom KeyManager;
Use a reverse-proxy;
Or on Tomcat use local JMX to reload SSL context.
For the last one, I don't really understand the implications. The reverse-proxy seems the easier way, but is it the best approach?
If someone could point out which one would be the best solution and why, or recommend something else, it would be much appreciated.
You can have your own implementation of SslStoreProvider, which will enable you to get the keystore/truststore from whatever source you want (it does not need to be on disk).
Then check out @RefreshScope; you can recreate beans (like your own SslStoreProvider) with it.
Here you can find an example (please note that this was only created to demonstrate a bug in spring-boot 1.x which was fixed in 2.x).
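As a minimal sketch of how those two pieces might fit together, assuming Spring Boot 2.x (where SslStoreProvider is available) and made-up keystore paths and passwords; whether the embedded server actually picks up the refreshed bean depends on the Boot version, as the caveat above suggests:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.KeyStore;

    import org.springframework.boot.web.server.SslStoreProvider;
    import org.springframework.cloud.context.config.annotation.RefreshScope;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class DynamicSslConfig {

        // Recreated on /actuator/refresh, so a replaced keystore file can be
        // picked up without restarting the application.
        @Bean
        @RefreshScope
        public SslStoreProvider sslStoreProvider() {
            return new SslStoreProvider() {
                @Override
                public KeyStore getKeyStore() throws Exception {
                    return load("/etc/certs/server-keystore.p12", "changeit");
                }

                @Override
                public KeyStore getTrustStore() throws Exception {
                    return load("/etc/certs/server-truststore.p12", "changeit");
                }
            };
        }

        private KeyStore load(String path, String password) throws Exception {
            KeyStore store = KeyStore.getInstance("PKCS12");
            try (InputStream in = Files.newInputStream(Paths.get(path))) {
                store.load(in, password.toCharArray());
            }
            return store;
        }
    }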
The external reverse-proxy approach is the most flexible. It doesn't require any changes to your application or deployment logic and will address most scenarios. The downside is that your architecture gets more complex and requires additional server resources for the proxy server.
You can take it a step further and do blue-green deployments:
The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
I'd like to write an end-to-end test for a pipeline built with Spring Boot.
Consider two microservices A and B, where B consumes the output of A and exposes a RESTful API. They are connected using RabbitMQ and rely on an external database.
I would like to achieve something like:
Create a new project that includes both microservices
Create a test configuration that configures the JPA provider to use an in-memory database
Inject custom MQ into A, B to connect them (rabbitmq is not tightly coupled)
Write tests
Essentially, I'd be replacing the white parts of my diagram with mocks and testing the coloured parts.
Does this make sense? Test coverage of A and B is not complete and such a test would guarantee that the contract between A and B holds. Are there better ways?
If you have the time, I suggest you read this:
https://martinfowler.com/articles/microservice-testing/
The purpose of end-to-end testing is not to do 100% of line coverage.
My first thought on this topic is that if it is an end-to-end test, then you should forget which framework you use, because in this context that is an implementation detail. So I would create a test project, which is essentially a docker-compose file defining 5 containers:
service A
service B
RabbitMQ
maybe the database too, unless you want to stick to the in-memory approach
and a separate container for running the tests
From this perspective you have 2 ways of handling env-specific configuration:
you define test-specific config in a separate spring profile, and you activate it by defining the SPRING_PROFILES_ACTIVE env var in the docker-compose file
you pass your config in a properties file, and mount it in the docker-compose file
The test runner can be kept simple; I would write a JUnit-based test suite which uses REST Assured, or something similar.
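A minimal sketch of such a test runner, assuming service B is reachable as service-b:8080 inside the compose network and exposes a made-up /results endpoint:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import org.junit.jupiter.api.Test;

    class PipelineEndToEndTest {

        // Host/port of service B inside the docker-compose network; adjust to your setup
        private static final String BASE_URI = "http://service-b:8080";

        @Test
        void processedMessageIsExposedByServiceB() {
            given()
                .baseUri(BASE_URI)
            .when()
                .get("/results/123")
            .then()
                .statusCode(200)
                .body("status", equalTo("PROCESSED"));
        }
    }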
I hope this gives you a clue. Of course, it is a broad topic, so going into every detail doesn't fit into an SO answer.
I would recommend you use Spring Cloud Contract. It helps you maintain the contracts between your microservices (producer-consumer contracts).
It's available for both HTTP-based and event-based communication.
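On the consumer side it might look roughly like this; the Maven coordinates and port are placeholders, and the exact annotations may differ between Spring Cloud Contract versions:

    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
    import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;

    // Consumer-side test: the stub runner starts a WireMock server from the stubs
    // published by producer A, so consumer B can be tested against A's contract
    // without running the real service A (coordinates below are placeholders).
    @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
    @AutoConfigureStubRunner(
            ids = "com.example:service-a:+:stubs:8090",
            stubsMode = StubRunnerProperties.StubsMode.LOCAL)
    class ServiceAContractTest {

        @Test
        void canCallServiceAStub() {
            // Call http://localhost:8090 here with your HTTP client of choice
            // and assert on the contract-defined response.
        }
    }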
Another approach is to test each component in isolation and mock the dependent service on the other side of the RabbitMQ server. You can do this using an async API simulation/mocking tool.
For example you can use Traffic Parrot which can be run in a Docker container as part of your CI/CD pipeline.
Here is a video demo of how you can use the tool to send mock response messages to a RabbitMQ queue in an async request/response pattern. There is also a corresponding tutorial available to follow.
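If you prefer to hand-roll the same idea rather than use a dedicated tool, a small Spring AMQP responder can play the role of the dependent service; the queue names and payload below are invented for the sketch:

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Component;

    // A hand-rolled stand-in for the dependent service: it listens on the queue the
    // service under test publishes to and sends back a canned reply, imitating the
    // async request/response pattern.
    @Component
    public class MockDownstreamResponder {

        private final RabbitTemplate rabbitTemplate;

        public MockDownstreamResponder(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        @RabbitListener(queues = "payment.requests")
        public void onRequest(String requestJson) {
            // Always reply with the same successful payload for test purposes
            rabbitTemplate.convertAndSend("payment.responses",
                    "{\"status\": \"ACCEPTED\"}");
        }
    }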
I am developing a server-side app using Java and Couchbase. I am trying to understand the pros and cons of handling cluster and bucket management from the Java code versus using the Couchbase admin web console.
For instance, should I handle creating/removing buckets, indexing, and updating buckets in my Java code?
The reason I want to handle as many Couchbase administration functions as possible is that my app is expected to run on-prem, not as a cloud service. I want to avoid our customers having to learn how to administer Couchbase.
The main reason to use the management APIs programmatically, rather than using the admin console, is exactly as you say: when you need to handle initializing and maintaining yourself, especially if the application needs to be deployed elsewhere. Generally speaking, you'll want to have some sort of database initializer or manager module in your code, which handles bootstrapping the correct buckets and indexes if they don't exist.
If all you need to do is handle preparing the DB environment one time for your application, you can also use the command line utilities that come with Couchbase, or send calls to the REST API. A small deployment script would probably be easier than writing code to do the same thing.
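As a sketch of what such an initializer might look like with the Couchbase Java SDK 3.x (the connection string, credentials, and bucket name are placeholders; check the exact manager APIs against the SDK version you use):

    import com.couchbase.client.core.error.BucketExistsException;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.manager.bucket.BucketSettings;
    import com.couchbase.client.java.manager.query.CreatePrimaryQueryIndexOptions;

    public class CouchbaseInitializer {

        public static void main(String[] args) {
            Cluster cluster = Cluster.connect("couchbase://localhost", "admin", "password");

            // Create the application bucket on first startup if it is missing
            try {
                cluster.buckets().createBucket(
                        BucketSettings.create("app-data").ramQuotaMB(256));
            } catch (BucketExistsException e) {
                // Already provisioned on a previous run; nothing to do
            }

            // Ensure a primary index exists so N1QL queries work out of the box
            cluster.queryIndexes().createPrimaryIndex("app-data",
                    CreatePrimaryQueryIndexOptions.createPrimaryQueryIndexOptions()
                            .ignoreIfExists(true));

            cluster.disconnect();
        }
    }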
I have an API that I would like to test in an automated fashion. I'm doing it in Java at the moment, but I think the problem is language-agnostic.
A little bit of context:
The main idea is to integrate with a payments system in a manner similar to this demo.
The main flow is something like:
You checkout your cart and click a pay button
Your webapp will initiate a transaction with the payment API and you'd get a reference number. This reference number would then be used as a query parameter for that payment, and you'd get redirected to the payment provider's website.
After the customer makes the payment, you'd get redirected back to the webapp where you can retrieve the transaction and display the result
My main problem is how do I approach automated integration testing for this type of scenario? The main ideas that I have:
Use stubbing or mocking for the transaction reference and response. But this is more in line with unit testing than integration testing. I don't mind doing this, but I would want to explore the integration testing first.
Perhaps I should try some sort of automated form filling. So I would do a curl-type request on the redirect URL and a curl-type POST request after inspecting what the redirect website does.
Use some sort of web testing tool like selenium or something like that.
I think the answer depends on the goals and scope of your integration test, and also the availability of a suitable platform to use for integration testing. Here are a couple of thoughts that may aid your decision, focussing first on establishing the goals of your tests before making any suggestions on what the appropriate testing tools would be.
(I make the assumption that you don't want to actually use the production version of the payments service when running your tests)
Testing Integration with a 'Real' Payment Service: The strongest form of integration test would be one where you actually invoke a real payments service as part of your test, as this would test the code in your application, the code in the payments service, and the communication between the two. However, this requires you to have an always running test version of the payment service available, and the lower the availability of this, the more fragile your tests become.
If the payment service is owned by your team/department/company, this might not be so bad because you have the necessary control to make sure it is always available for testing. However, if it is a vendor system, assuming they control the test version of the service, then you are opening yourself up to fragility issues if they don't effectively maintain that test service to provide a high level of availability (which, in my experience, they generally don't, issues frequently occur like the service doesn't get upgraded often enough, or their support teams don't notice if the service has gone down).
One scenario you may come across is that the vendor may provide a test service that is always running a pre-release version of their software, so that their clients can run tests with new versions of their software before they are released into production and flag any integration issues. If they do, this should definitely influence your decision.
Testing Integration with a 'Fake' Payment Service: An alternative is to build a fake version of the service, running in a similar environment to the real service and with the same API, that can be used for integration tests. This can be as simple or as complex as you like, ranging from a very thin service that simply returns a single canned response to each request, to something that can return a range of different responses (success, fail, service not found etc...), depending on what your test goals are.
The upside of this is less fragility - it is likely to have much higher availability because it is under your control, and it will also be much simpler to guarantee the responses from the service you are looking for from your tests. Also, it makes it much simpler to build more intelligent test cases (i.e. a test for how your code responds if the service is unavailable, or if it reports it is under heavy load and cannot process your transaction yet). The downsides are that it is likely to be more work to set up, and you are not exercising the code in the real payments service when you run your unit tests.
What is the Best Approach to Take to Test the Service?: This is highly context specific, however here is what I would consider an ideal approach to testing, based on striking a balance between effectively testing integration with the service, and minimizing the risk of impacting developer productivity through fragile tests.
If the vendor provides a test version of their service, write a small test suite that verifies you are getting the expected responses from their service, and run this once per day. This will give you the benefit of verifying your own assumptions about the behavior of their service without introducing fragility to the rest of your tests. The appropriate tool would be the simplest tool for the job (even a shell script which emails you any issues would be absolutely fine here), as long as it is maintainable; at the end of the day, these wouldn't be tests developers would be expected to run regularly.
For my end-to-end test suite (i.e. one that deploys a test version of the system and tests it end to end), I would build a fake version of the payment service that can be deployed with the test system, and use that for testing. This allows you to test your own system completely end to end, and with a stable payment service endpoint. These end-to-end tests should include scenarios where the service cannot be found, reports a failure, things like that. Given this is end-to-end testing, a tool such as Selenium would be appropriate for tests driven through a web UI.
For the rest of the automated tests I would mock out calls to the payment service. One very simple approach is to encapsulate all of your calls to the payment service in a single component in your system, which can be easily replaced by a mock version during tests (e.g. using Dependency Injection).
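To illustrate that last point, here is a minimal sketch of such an encapsulation; PaymentClient and the method names are hypothetical, not any particular vendor's API:

    // Hypothetical port that encapsulates every call to the payment service.
    public interface PaymentClient {

        // Starts a payment and returns the provider's reference number
        String initiateTransaction(String cartId);

        // Looks the transaction up again after the customer is redirected back
        String retrieveTransactionStatus(String reference);
    }

    // Test double wired in via dependency injection instead of the real HTTP client.
    class FakePaymentClient implements PaymentClient {

        @Override
        public String initiateTransaction(String cartId) {
            return "REF-0001"; // stable reference number for assertions
        }

        @Override
        public String retrieveTransactionStatus(String reference) {
            return "PAID";
        }
    }

The production wiring injects the real HTTP-backed implementation, while the test configuration injects the fake, so the rest of the system never knows the difference.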