Avoid restarting the server every time I make tiny changes to integration tests
I'm new to Spring and I'm feeling a lot of pain writing integration tests with it.
For example, say I'm running an integration test and change the test code below.
Note that nothing in the server code has changed.
To run the updated integration test, I have to launch the web server and re-run the data seeding, which can take around 5 minutes.
It's hard to imagine how people manage development this way.
Is it possible to launch the web server on its own via bootRun and have the integration tests talk to that dedicated server, without rebooting it every time the tests run?
Which part of the configuration usually defines this behavior?
I took over this project and have to figure it out on my own.
Before
serverResp.then()
.statusCode(203)
.body("query.startDateTime", equalTo("2018-07-01T00:00:00"))
After
serverResp.then()
.statusCode(200)
.body("query.endDateTime", equalTo("2020-07-01T00:00:00"))
There are many different ways to do integration testing.
Spring has a built-in framework that runs the integration test in the same JVM as the real server.
If the application is heavy (usually the case for monoliths), it can indeed take time to start. The best you can do then is to "choose" which parts of the application to load, i.e. only the parts relevant to the test. Spring has ways to achieve this; the question is whether your application code allows such a separation.
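As a rough illustration of loading only a slice of the context (just a sketch, assuming Spring Boot; OrderController and OrderService are hypothetical names standing in for your own beans):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

// Loads only the web layer for one controller instead of the whole application.
@WebMvcTest(OrderController.class)
class OrderControllerSliceTest {

    @Autowired
    private MockMvc mockMvc;

    // The service behind the controller is mocked, so no data seeding is needed.
    @MockBean
    private OrderService orderService;

    @Test
    void returnsOkForExistingOrder() throws Exception {
        mockMvc.perform(get("/orders/42")).andExpect(status().isOk());
    }
}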
Then there is a way to write integration tests so that they communicate with a remote server that is already up and running "in advance". During the build this can be done once before the testing phase, and when the tests are done the server is shut down.
Tests like this usually have some way to specify the server host/port for communication (I'm putting security, credentials, etc. aside).
So you can check for a special flag/system property and read the host/port from there (see the sketch below).
A good thing about this approach is that you won't need to restart the server before every test.
The bad thing is that it doesn't always make testing easy: if your test deploys some test data, it must also remove that data at the end of the test.
Tests must be designed carefully.
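A minimal sketch of the host/port idea, assuming JUnit 5 and Java's built-in HTTP client (the property names and the /actuator/health endpoint are just placeholders):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class RemoteServerIT {

    // The already-running server's location, e.g. -Dintegration.host=localhost -Dintegration.port=8080
    private static final String HOST = System.getProperty("integration.host", "localhost");
    private static final String PORT = System.getProperty("integration.port", "8080");

    @Test
    void healthEndpointIsUp() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://" + HOST + ":" + PORT + "/actuator/health"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}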
A third approach is a kind of hybrid, and generally not mainstream IMO:
you can create a special setup that runs the test in a different JVM (externally), but once the test starts, its bytecode gets uploaded to the running server (the server must have a backdoor for this) and is actually executed on the server. Again, the server is up and running the whole time.
I once wrote a library to do this with Spock, but that was a long time ago and we didn't end up using it (that project was closed).
I don't want to self-advertise, but you can check it out and maybe borrow technical ideas for how to do this.
Related
Preface
I'm deliberately talking about system tests. We have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, so mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit 5.
The tests usually consist of a @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute, followed by a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could argue that the boot itself is a test case. JUnit handles lifecycle methods kind of weirdly: the time they take isn't shown in the report; if they fail, it messes with the test count; the failure isn't descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy with JUnit 5) that enforces boot running before any other test.
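A sketch of that ordering idea with JUnit 5 (class and method names are invented; the boot body is a placeholder for the real start-up):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS) // keep state from boot() for the later tests
class SystemBootOrderingTest {

    private String baseUrl; // set by boot(), used by every other test

    @Test
    @Order(1)
    void boot() {
        // Placeholder for the real system start-up that used to live in @BeforeAll.
        baseUrl = "http://localhost:8080";
        assertNotNull(baseUrl);
    }

    @Test
    @Order(2)
    void systemAnswersRequests() {
        // Real REST / websocket assertions against baseUrl would go here.
        assertNotNull(baseUrl);
    }
}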
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers trying to troubleshoot: when I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit 5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, AFAIK that would only let me generate new named tests, which would be filtered out as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem, not really specific to JUnit 5. To skip the very long boot-up you can mock some components, if that is possible. Having the boot as a test does not make sense, because other tests depend on it; better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently of the other tests.
Another option is to group these kinds of tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server), as sketched below. This really depends on your specific use case and on whether those tests should be part of a regular build on your local machine.
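One common way to do that grouping is JUnit 5 tags (the tag name here is arbitrary); the build tool can then include or exclude the tag per task, e.g. via Maven Surefire's groups/excludedGroups or Gradle's useJUnitPlatform excludeTags:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagged tests can be excluded from the regular local build and run only on CI.
@Tag("system")
class SlowSystemTest {

    @Test
    void bootsAndAnswers() {
        // long-running system assertions go here
    }
}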
The third option is to try to reduce the boot time, if that is possible. This is the way to go if you can't use mocks/stubs or exclude those tests from the regular build.
I'm currently writing a Java program that is an interface to another server. The majority of the functions (over 90%) do something on the server. Currently, I'm just writing simple test classes that run some actions on the server and then either check the result myself or add methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to run the tests at every build, because I don't want to keep modifying the server I am connected to, and I am not sure of the best way to go about my testing. I have JUnit tests (for simple functions that do not interact externally) that run at every build, but I can't find a standard way in JUnit to write tests that don't have to run at every build (perhaps only when the functions they cover change?).
Can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server - A modified version of the server that runs completely in-memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
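As a very rough sketch of what such a test-kit could look like (every name here is hypothetical; the point is that the server logic is wired to in-memory replacements of its surroundings and can be created and thrown away inside a test):

import java.util.HashMap;
import java.util.Map;

// Hypothetical test-kit: the "server" is just its logic wired to in-memory dependencies.
public final class ServerTestKit {

    // Minimal in-memory stand-in for the real database.
    public static final class InMemoryStore {
        private final Map<String, String> data = new HashMap<>();

        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    private final InMemoryStore store = new InMemoryStore();

    public InMemoryStore store() {
        return store;
    }

    // Simulates handling a request without any network or real persistence.
    public String handle(String key) {
        String value = store.get(key);
        return value != null ? "200 " + value : "404";
    }
}

A test can then create a ServerTestKit, seed its store, call handle(...) and discard everything, all within milliseconds.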
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
The second approach can, of course, be employed when you have full access as well, but usually writing the test-kit is better: those kinds of tests tend to be duplicated across projects and teams, and when the server changes, a whole bunch of people need to fix their tests, whereas if the test-kit is written as part of the server code it only has to be altered in one place.
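To make the adapter idea above concrete, here is a sketch (the interface and class names are made up); the rest of your code depends only on the interface, so plain unit tests can substitute a trivial fake:

// The only piece of code that talks to the remote server.
public interface RemoteServerClient {
    String fetchStatus(String resourceId);
}

// Production implementation, covered by the integration/contract tests.
final class HttpRemoteServerClient implements RemoteServerClient {
    private final String baseUrl;

    HttpRemoteServerClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public String fetchStatus(String resourceId) {
        // A real HTTP call to baseUrl + "/resources/" + resourceId would go here.
        throw new UnsupportedOperationException("omitted in this sketch");
    }
}

// Fake used by the unit tests of everything that sits on top of the adapter.
final class FakeRemoteServerClient implements RemoteServerClient {
    @Override
    public String fetchStatus(String resourceId) {
        return "OK";
    }
}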
We are trying to use Feign + Ribbon in one of our projects. In production code, we do not have a problem, but we have a few in JUnit tests.
We are trying to simulate a number of situations (failing services, normal runs, exceptions, etc.), hence we need to configure Ribbon many times in our integration tests. Unfortunately, we noticed that even when we destroy the Spring context, part of the state survives, probably in static variables somewhere (for example, new tests still connect to the balancer from the previous suite).
Is there any recommended way to purge the static state of both these tools? (something like Hystrix.reset())
Thanks in advance!
We tried restarting the JVM after each suite - it works perfectly, but it's not very practical (we have to set it up in both Gradle and IDEA, as the IDEA test runner does not honor this out of the box). We also tried renaming the service between tests - this works, let's say, 99% of the time (it sometimes fails for some reason...).
If it is indeed the case that there is some static state somewhere, you should submit a bug to Ribbon. Figure out the minimal code that causes the issue; if you are not able to do that, though, they won't do anything with the report. In your own code base, search for any use of static that is not also final and refactor those as well, if any exist.
Furthermore, you may find it useful to make a strong distinction between the various types of tests. This doesn't sound like a unit test to me. Even though you are just simulating these services and their failures, this sort of test is really an integration test, because you are testing whether Ribbon is configured correctly together with your own components. It would be a unit test if you tested only that your component configures Ribbon correctly. It's a subtle distinction, but it has large implications for your tests.
On another note, don't dismiss what you have now as necessarily a bad idea. It can be very useful to have some heavyweight integration tests checking the behaviour of Feign if this is a mission-critical function; IMO it's a great idea in that case. But it is a heavyweight integration test and should be treated as such. You might even want to use some container magic, etc., to achieve this sort of test, with services that fail in your various failure scenarios. This would live in CI, and developers usually wouldn't run those with each commit unless they were working directly on a piece of functionality concerning the integration.
We have some tests, which we run from a shell script. They make calls to system A, which in turn calls system B. We therefore have 3 separate JVMs. These tests are fully automated and run without any human intervention on a jenkins system.
As part of my test, I want to intercept the calls from A to B and check various things about the content. The call still needs to reach B, and B's response needs to be returned unaltered to A.
System A has configuration to tell it where B is. Everything runs on my local machine, so the obvious option is to start an HTTP server (perhaps Jetty) from within the test and configure A to talk to that temporary server; the server will then see all traffic sent from A to B. It needs to pass those requests on to B, get the response, and return that response to A. The test then needs to see the contents of each request as a String and do some checks on it.
I have done something similar in the past using Jetty. My previous stub solution did something very similar, but instead of proxying the calls to another system, it simply checked the request and returned a dummy response. We are restricted to using Jetty 6.1 - using another version would be doable, but a PITA.
I think Jetty could be the best solution. I could do it very simply by extending AbstractHandler and then making a new HTTP call to system B, but that would be a bit messy. Is there a simple, standard way of doing this?
The simplest way is don't.
First, what you're describing are clearly not unit tests. A unit test is one that tests a small bit of code in isolation from the rest of the application, never mind external resources. Instead, you're testing the application as a whole, and the application has external dependencies. That's fine, but these tests fall into either the integration or functional test category. (In practice, you might still use a unit testing framework to write those kinds of tests, but the distinction is important.)
Second, what do you actually hope to gain by doing this? How is it going to improve the reliability or quality of application A? It's highly likely that it won't, and the added complexity of maintaining the setup and the extra assertions will actually make everything less maintainable.
So here is what I would do:
Write a series of unit tests on the individual bits of logic within applications A and B. These will test the logic in isolation. (I would recommend that you execute the unit tests first and separately, and then when unit tests fail, your build can fail fast before executing the integration and functional tests. Integration and functional tests will be slower and more cumbersome, so this notifies you of problems more quickly. Up to you on that point, though.)
Write a series of integration or functional tests that check that application B gives the correct output given a specific input.
Write a series of integration or functional tests on the input and output of A. When writing them, have it just call the real B and assume that B is working as intended. If it isn't, your application B tests will pick up on it, and you'll have some extra failures in the application A tests that you can ignore until B is fixed.
You don't have to mock everything. Trying to do so will cause you more trouble than it will save. Overly complex tests will be a net loss.
I'm currently writing a Java client-server application, so I want to implement two libraries: one for the client and one for the server. The client-server communication follows a very strict protocol that I want to test with JUnit.
As the build tool I'm using Maven, plus a Hudson server for continuous integration.
I don't really have any good idea how to test these client/server libraries.
I have the following approaches in mind:
Just write a dummy client for testing the server and a dummy server for testing the client.
Disadvantages: Unfortunately this results in a lot of extra work. I also couldn't be 100% sure that the client and server work together, because I can't be sure that the two sets of tests are completely identical.
Write a separate test project that tests the client and the server together.
Disadvantages: The tests don't belong to either project itself, so Hudson won't run them automatically. Everyone who changes anything in one of these libraries has to run the tests manually to ensure everything is correct. I also won't get a code coverage report.
Are there any better approaches to testing code like this?
Maybe a Maven multi-module project, or something like that?
I hope someone has a good solution for this issue.
Thanks.
Think of all your code as "transforms input to output": X -> [A] -> Y
X is the data that goes in, [A] is the transformer, Y is the output. In your case, you have this setup:
[Client] -> X -> [Server] -> Y -> [Client]
So the unit tests work like this:
You need a test that runs the client code to generate X, and an assert to verify that the client actually produces X. X should be a static final String in the code.
Use the constant X in a second test to call the server code, which transforms it into Y (another constant).
A third test makes sure that the client code can parse the input Y.
This way, you can keep the tests independent and still make sure that the important parts work: The interface between the components.
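A compact sketch of that scheme (all class and method names are invented; the important part is that X and Y are shared constants used by all three tests):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class ProtocolContractTest {

    // The shared constants: what the client sends (X) and what the server answers (Y).
    static final String X = "{\"command\":\"PING\"}";
    static final String Y = "{\"result\":\"PONG\"}";

    @Test
    void clientProducesX() {
        assertEquals(X, Client.buildPingRequest());
    }

    @Test
    void serverTransformsXIntoY() {
        assertEquals(Y, Server.handle(X));
    }

    @Test
    void clientCanParseY() {
        assertTrue(Client.parseResponse(Y).isPong());
    }

    // Hypothetical stand-ins for the real client/server library code.
    static final class Client {
        static String buildPingRequest() { return "{\"command\":\"PING\"}"; }
        static Pong parseResponse(String raw) { return new Pong(raw.contains("PONG")); }
    }

    static final class Server {
        static String handle(String request) {
            return request.contains("PING") ? "{\"result\":\"PONG\"}" : "{\"result\":\"ERROR\"}";
        }
    }

    static final class Pong {
        private final boolean pong;
        Pong(boolean pong) { this.pong = pong; }
        boolean isPong() { return pong; }
    }
}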
My suggestion would be to use two levels of testing:
For your client/server project, include some mocking in your unit tests to ensure the object interfaces are working as expected.
Following the build, have a more extensive integration test run, with automation to install the compiled client and server on one or more test systems. Then you can ensure that all the particulars of the protocol are tested thoroughly. Have this integration test project triggered on each successful build of the client/server project. You can use JUnit for this and still receive the conventional report from Hudson.
The latest approach to solving this problem is to use Docker containers. Create a Dockerfile containing a base image and all the necessary dependencies required for your client-server application. Create a separate container for each node type of your distributed client-server system, and test all the entry-point server API/client interactions using TestNG or JUnit. The best part of this approach is that you are not mocking any service calls; in most cases you can orchestrate all the end-to-end client-server interactions.
There is a bit of a learning curve involved in this approach, but Docker is becoming highly popular in the dev community, especially for solving this kind of problem.
Here is an example of how you could use the Docker client API to pull Docker images in your JUnit test:
https://github.com/influxdb/influxdb-java/blob/master/src/test/java/org/influxdb/InfluxDBTest.java
The approach described above is now available as an open-source product: Testcontainers.
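For instance, a minimal Testcontainers sketch with JUnit 5 (my-server-image is a placeholder for your own server image):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class ServerInContainerTest {

    // Starts the (placeholder) server image in Docker before the tests and stops it afterwards.
    @Container
    private static final GenericContainer<?> server =
            new GenericContainer<>("my-server-image:latest").withExposedPorts(8080);

    @Test
    void serverPortIsReachable() {
        // The client library under test would be pointed at this address.
        String baseUrl = "http://" + server.getHost() + ":" + server.getMappedPort(8080);
        assertTrue(server.isRunning());
        System.out.println("Client should talk to " + baseUrl);
    }
}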
So in the end the resolution was to build a multi-module project, with a separate test module that includes the server and the client modules.
Works great in Hudson, and even better in the Eclipse IDE.
Thanks @Aaron for the hint.