Automated integration tests on deployed Java EE environment? - java

Our team is currently introducing automated testing to an existing Java EE web application that is deployed on Weblogic. We've had success with unit testing using JUnit and Mockito which are automatically run when our app is built and deployed by Jenkins.
Integration testing has been more challenging because our application relies on components provided by the Java EE container, such as WorkManager. There are several Spring beans which require these components to initialize properly. One way we've been able to get around this is to create custom application context configuration files which create mocks of the components that we don't really need for testing but still require to initialize the beans. This has become somewhat of a maintenance nightmare because each integration test needs its own config and some can be quite involved.
What we really want is to be able to have the entire application initialized with the normal configuration used in a deployed environment when running our integration tests. Is there a way to have integration tests automatically execute after deployment through Jenkins or another tool?

You may want to check out Arquillian; it can run tests in your containers, even remote ones.
I quote:
No more mocks. No more container lifecycle and deployment hassles.
Just real tests!
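For reference, a minimal Arquillian test might look roughly like the sketch below. The bean under test (GreetingService) is hypothetical, and the container adapter plus the arquillian.xml configuration pointing at your WebLogic instance are not shown:

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    // ShrinkWrap builds the archive that Arquillian deploys to the container
    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "greeting-it.war")
                .addClass(GreetingService.class) // hypothetical bean under test
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private GreetingService greetingService;

    // Runs inside the deployed application, so real container services are available
    @Test
    public void greetsByName() {
        Assert.assertEquals("Hello, Ana", greetingService.greet("Ana"));
    }
}
```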

Related

Integration test in Spring application using Flowable

I need to write some integration tests for my Spring application using Flowable. My tests must cover the application's BPMN workflow logic.
My question is - should I start and deploy a normal Flowable engine during my tests, as I do in the application? In the official documentation I see some Flowable classes prepared for unit testing, but nothing for integration testing.
Won't starting a real Flowable engine cause performance issues while running the integration tests? I'm afraid they will take a long time if I need to do this for every test separately. How do you deal with this in your applications?
If you ask me, then you should definitely start and deploy a normal Flowable engine during your tests. The link you pasted from the documentation shows exactly how you can do the test. Keep in mind that you can use your own configuration; you don't need a special Spring configuration for testing.
Starting the real Flowable engines won't cause any performance issues during your testing. All tests in the Flowable repository are tests that create and destroy an engine within a single test, and that is quite fast. In your case it would be even faster, as you won't be starting the engine for each test (the Spring application context is cached between tests). I should also note that even if you started the engine for each test, the time would be negligible, as booting the engine is quite fast.
Keep in mind that other components from your Spring application might slow down the start of the tests.
As a reference, in the flowable-spring module there are 76 tests in 28 test classes, where each test class has its own Spring configuration, which means there is no Spring context reuse between tests. All those tests take 55s on my local machine. Keep in mind that some of those tests cover complex scenarios where the async executors are running and take more time than usual; you most probably won't have such tests. With those specific tests disabled (3 tests from 3 test classes), the time goes down to 28s.
NB: If you are not using @Deployment, or you are relying on the auto-deploy feature from Flowable, then make sure that you delete the instances you create in your tests. This makes sure that data from one test does not affect another test.
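As a rough sketch (the configuration class, BPMN resource and process key below are invented), a Spring-based test that runs against a real engine and cleans up after itself could look like this:

```java
import org.flowable.engine.RepositoryService;
import org.flowable.engine.RuntimeService;
import org.flowable.engine.runtime.ProcessInstance;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class) // your normal configuration (hypothetical name)
public class OrderProcessIT {

    @Autowired
    private RepositoryService repositoryService;

    @Autowired
    private RuntimeService runtimeService;

    private String deploymentId;

    @Before
    public void deployProcess() {
        // Deploy the real BPMN definition to the engine started by the Spring context
        deploymentId = repositoryService.createDeployment()
                .addClasspathResource("processes/orderProcess.bpmn20.xml") // hypothetical resource
                .deploy()
                .getId();
    }

    @After
    public void cleanUp() {
        // Cascade delete removes the process instances created by the test
        repositoryService.deleteDeployment(deploymentId, true);
    }

    @Test
    public void startsOrderProcess() {
        ProcessInstance instance = runtimeService.startProcessInstanceByKey("orderProcess");
        assertNotNull(instance.getId());
    }
}
```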

Single integration test for several Spring apps

I have two Spring applications which interact with each other via a database and some AMQP:
a web application built on Spring MVC
a Spring Boot application
Each application has its independent context and properties files.
What is the proper way of writing a single integration test for these two applications?
More specifically: I can merge these two applications into one Maven project in order to have access to both of them.
Is it possible to configure test contexts for both applications in one Spring test? At the moment I have no idea how to tell Spring to use different contexts for different applications in one test.
Another purpose of this testing is also to obtain code coverage for these two applications. That is why I cannot just start, say, the Spring Boot application as a separate process. Is it possible at all?
Spring's test module brings up a single application context per test (take a look at the key abstractions section of the official documentation), so no, you cannot have multiple application contexts per test.
What you can have is a merged application context that imports both the Spring Boot and the Spring MVC application's contexts; that way, you can test beans from both applications. However, this is probably not what you want to do, and it's something I would recommend against: your tests will become almost worthless, since making this approach work would probably entail some hacks and you would not be testing your applications realistically, given that they will be deployed separately.
You should write per-application integration tests and measure coverage for each of them. If your application is relatively small, you can have an end-to-end testing module that would leverage Docker containers to create an environment similar to your production and verify that your applications correctly work together.
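If you do add such an end-to-end module, a hedged sketch of what one of its tests might look like, assuming Java 11's HttpClient and that both applications are already running (for example started by docker-compose) at the placeholder URLs below, sharing the same database/AMQP broker:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class TwoAppsEndToEndIT {

    // Hypothetical endpoints exposed by the two deployed applications
    private static final String MVC_APP = "http://localhost:8080/web-app";
    private static final String BOOT_APP = "http://localhost:8081";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void dataCreatedInOneAppIsVisibleInTheOther() throws Exception {
        // Create something through the Spring MVC application...
        HttpRequest create = HttpRequest.newBuilder(URI.create(MVC_APP + "/api/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\"}"))
                .build();
        assertEquals(201, client.send(create, HttpResponse.BodyHandlers.ofString()).statusCode());

        // ...and verify the Spring Boot application sees it (e.g. via the shared database/AMQP)
        HttpRequest read = HttpRequest.newBuilder(URI.create(BOOT_APP + "/api/orders/latest")).GET().build();
        assertEquals(200, client.send(read, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```

Note that such a test only verifies that the deployed applications work together; code coverage would still come from the per-application integration tests.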

Regression component tests with Cucumber. Is there any boundary to the layers that should be tested?

I found myself last week having to start thinking about how to refactor an old application that only contains unit tests. My first idea was to add some component test scenarios with Cucumber to get familiarised with the business logic and to ensure I don't break anything with my changes. But at that point I had a conversation with one of the architects in the company I work for that made me wonder whether it was worth it and which code I actually had to test.
This application has many different types of endpoints: REST endpoints it exposes and REST endpoints it calls, Oracle stored procedures, and JMS topics and queues. It's deployed as a WAR file to a Tomcat server, and the connection factory for the broker and the datasource for the database are configured in the server and fetched using JNDI.
My first idea was to load the whole application inside an embedded Jetty, pointing to the real web.xml so everything is loaded as it would be in a production environment, but then mocking the connection factory and the datasource. By doing that, all the connectivity logic to the infrastructure where the application is deployed would be tested. Thinking about hexagonal architecture, this seems like too much effort, bearing in mind that those are only ports whose logic should only be about transforming received data into application data. Shouldn't that just be unit tested?
My next idea was to just mock the stored procedures and load the Spring XMLs in my test without any web server, which makes it easier to mock classes. For this I would use libraries like Spring MockMvc for the REST endpoints and Mockrunner for JMS. But again, this approach would still test some adapters and complicate the tests, as the result of the tests would be XML and JSON payloads. The transformations done in this application are quite heavy: the same message type could contain different versions of a class (each message could contain many complex objects that implement several interfaces).
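To be concrete, this second approach would look roughly like this (the XML location, controller and endpoint are just placeholders):

```java
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:spring/application-context.xml") // placeholder for the real Spring XMLs
public class CustomerEndpointComponentTest {

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // No servlet container: MockMvc drives the controllers loaded from the XML context
        mockMvc = MockMvcBuilders.webAppContextSetup(context).build();
    }

    @Test
    public void returnsCustomerAsJson() throws Exception {
        mockMvc.perform(get("/customers/42"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.id").value(42)); // asserting on the JSON payload directly
    }
}
```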
So right now I'm thinking that maybe the best approach would be to create my tests from the entry points to the application, the services called from the adapters, and verify that the services responsible for sending messages to the broker or calling other REST endpoints are actually invoked. Then just ensure there are proper unit tests for the endpoints, and verify everything works once deployed by providing some smoke tests that are executed in a real environment. This would test the connectivity logic, and the business logic would be tested in isolation, without caring whether a new adapter is added or one is removed.
Is this approach correct? Would I be leaving something without testing this way? Or is it still too much and I should just trust the unit tests?
Thanks.
Your application and environment sound quite complicated. I would definitely want integration tests. I'd test the app outside-in as follows:
Write a smoke-test suite that runs against the application in the actual production environment. Cucumber would be a good tool to use. That suite should only do things that are safe in production, and should be as small as possible while giving you confidence that the application is correctly installed and configured and that its integrations with other systems are working.
Write an acceptance test suite that runs against the entire application in a test environment. Cucumber would be a good choice here too.
I would expect the acceptance-test environment to include a Tomcat server with test versions of all services that exist in your production Tomcat, and a database with a schema, stored procedures, etc. identical to production (but not production data). Handle external dependencies that you don't own by stubbing and mocking, by using a record/replay library such as Betamax, and/or by implementing test versions of them yourself. Acceptance tests should be free to do anything to data, and they shouldn't have to worry about the availability of services that you don't own.
Write enough acceptance tests to both describe the app's major use cases and to test all of the important interactions between the parts of the application (both subsystems and classes). That is, use your acceptance tests as integration tests. I find that there is very little conflict between the goals of acceptance and integration tests. Don't write any more acceptance tests than you need for specification and integration coverage, however, as they're relatively slow.
Unit-test each class that does anything interesting whatsoever, leaving out only classes that are fully tested by your acceptance tests. Since you're already integration-testing, your unit tests can be true unit tests which stub or mock their dependencies. (Although there's nothing wrong with letting a unit-tested class use real dependencies that are simple enough not to cause issues in the unit tests.)
Measure code coverage to ensure that the combination of acceptance and unit tests tests all your code.
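To make the shape of those suites concrete, here is a hedged sketch of Cucumber step definitions for an acceptance test; the base URL, endpoints and scenario wording are invented, and Java 11's HttpClient is assumed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.Assert.assertEquals;

// Backs a scenario such as:
//   Given the test environment is reachable
//   When I request the order "42"
//   Then the response status is 200
public class OrderStepDefinitions {

    private static final String BASE_URL = "http://test-env.example.com/app"; // hypothetical test environment
    private final HttpClient client = HttpClient.newHttpClient();
    private HttpResponse<String> response;

    @Given("the test environment is reachable")
    public void theTestEnvironmentIsReachable() throws Exception {
        // A cheap health check keeps failures easy to diagnose
        HttpRequest ping = HttpRequest.newBuilder(URI.create(BASE_URL + "/health")).GET().build();
        assertEquals(200, client.send(ping, HttpResponse.BodyHandlers.ofString()).statusCode());
    }

    @When("I request the order {string}")
    public void iRequestTheOrder(String orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/orders/" + orderId)).GET().build();
        response = client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Then("the response status is {int}")
    public void theResponseStatusIs(int expected) {
        assertEquals(expected, response.statusCode());
    }
}
```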

Integration Test of REST APIs with Code Coverage

We have built a REST API that exposes a bunch of business services - a business service may invoke other platform/utility services to perform database reads and writes, to perform service authorization, etc.
We have deployed these services as WAR files in Tomcat.
We want to test this whole setup using an integration test suite which we would also like to treat as a regression test suite.
What would be a good approach to perform integration testing on this, and are there any tools that can speed up the development of the suite? Here are a few requirements we think we need to address:
Ability to define integration test cases which exercise business scenarios.
Set up the DB with test data before suite is run.
Invoke the REST API that is running on a remote server (Tomcat)
Validate the DB post test execution for verifying expected output
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
At my work we have recently put together a couple of test suites to test some RESTful APIs we built. Like your services, ours can invoke other RESTful APIs they depend on. We split it into two suites.
Suite 1 - Testing each service in isolation
Mocks any peer services the API depends on using restito. Other alternatives include rest-driver, wiremock, pre-canned and betamax.
The tests, the service we are testing and the mocks all run in a single JVM
Launches the service we are testing in Jetty
I would definitely recommend doing this. It has worked really well for us. The main advantages are:
Peer services are mocked, so you needn't perform any complicated data setup. Before each test you simply use restito to define how you want peer services to behave, just like you would with classes in unit tests with Mockito (see the sketch after this list).
The suite is super fast as mocked services serve pre-canned in-memory responses. So we can get good coverage without the suite taking an age to run.
The suite is reliable and repeatable as it's isolated in its own JVM, so there's no need to worry about other suites/people mucking about with a shared environment while the suite is running and causing tests to fail.
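Here is a hedged sketch of what stubbing a peer service can look like. It uses WireMock (one of the alternatives listed above) rather than restito simply because its API is widely known; restito follows the same idea. The port, endpoint, payload and Java 11 HttpClient usage are all assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import com.github.tomakehurst.wiremock.WireMockServer;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.Assert.assertEquals;

public class CustomerServiceIsolationIT {

    // Plays the role of the peer API our service under test calls
    private WireMockServer peerService;

    @Before
    public void startPeerStub() {
        peerService = new WireMockServer(8089); // hypothetical port the service under test is configured to call
        peerService.start();
        peerService.stubFor(get(urlEqualTo("/peer/customers/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":42,\"name\":\"Test Customer\"}")));
    }

    @After
    public void stopPeerStub() {
        peerService.stop();
    }

    @Test
    public void peerReturnsCannedCustomer() throws Exception {
        // In the real suite the service under test (running in Jetty) makes this call;
        // calling the stub directly here just shows the canned response being served.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8089/peer/customers/42")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```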
Suite 2 - Full End to End
Suite runs against a full environment deployed across several machines
API deployed on Tomcat in environment
Peer services are real 'as live' full deployments
This suite requires us to do data setup in peer services, which means tests generally take more time to write. As much as possible we use REST clients to do that data setup.
Tests in this suite usually take longer to write, so we put most of our coverage in Suite 1. That being said, there is still clear value in this suite, as our mocks in Suite 1 may not behave quite like the real services.
With regards to your points, here is what we do:
Ability to define integration test cases which exercise business scenarios.
We use cucumber-jvm to define business scenarios for both of the above suites. These scenarios are English plain text files that business users can understand and also drive the tests.
Set up the DB with test data before suite is run.
We don't do this for our integration suites, but in the past I have used unitils with dbunit for unit tests and it worked pretty well.
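For reference, a minimal plain-DbUnit sketch for seeding a test database before a suite runs; the JDBC driver, connection settings and dataset file are placeholders:

```java
import java.io.File;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class TestDataLoader {

    // Loads a FlatXml dataset into the integration-test database.
    // The connection settings below are placeholders for your test environment.
    public static void seedDatabase() throws Exception {
        IDatabaseTester tester = new JdbcDatabaseTester(
                "oracle.jdbc.OracleDriver",
                "jdbc:oracle:thin:@localhost:1521:XE",
                "test_user",
                "test_password");

        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/test-data.xml"));
        tester.setDataSet(dataSet);
        // CLEAN_INSERT wipes the tables present in the dataset and reinserts the rows
        tester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        tester.onSetup();
    }
}
```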
Invoke the REST API that is running on a remote server (Tomcat)
We use rest-assured, which is a great HTTP client geared specifically for testing REST APIs.
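A small rest-assured sketch against a remotely deployed API; the host, resource paths and payload are placeholders:

```java
import io.restassured.http.ContentType;

import org.junit.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

public class OrderApiIT {

    @Test
    public void createsAnOrder() {
        given()
            .baseUri("http://test-server.example.com:8080") // remote Tomcat running the API (placeholder)
            .basePath("/api")
            .contentType(ContentType.JSON)
            .body("{\"item\":\"book\",\"quantity\":1}")
        .when()
            .post("/orders")
        .then()
            .statusCode(201)
            .body("id", notNullValue())
            .body("item", equalTo("book"));
    }
}
```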
Validate the DB post test execution for verifying expected output
I can't provide any recommendations here, as we don't use any libraries to help make this easier; we just do it manually. Let me know if you find anything.
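For what it's worth, plain JDBC plus an assertion is often enough for this kind of check; a sketch with made-up connection settings and schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class DbAssertions {

    // Verifies that the API call left the expected row behind; the connection
    // settings, table and columns are placeholders for your test database.
    public static void assertOrderStatus(String orderId, String expectedStatus) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:XE", "test_user", "test_password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT status FROM orders WHERE order_id = ?")) {
            stmt.setString(1, orderId);
            try (ResultSet rs = stmt.executeQuery()) {
                assertTrue("No row found for order " + orderId, rs.next());
                assertEquals(expectedStatus, rs.getString("status"));
            }
        }
    }
}
```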
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
We do not measure code coverage for our integration tests, only for our unit tests, so again I can't provide any recommendations here.
Keep your eyes peeled on our tech blog as there may be more details on this in the future.
You may also check out the tool named Arquillian. It's a bit difficult to set up at first, but it provides the complete runtime for integration tests (i.e. it starts its own container instance and deploys your application along with the tests) and offers extensions that solve your problems (invoking REST endpoints, feeding the databases, comparing results after the tests).
The JaCoCo extension generates coverage reports that can later be displayed, e.g. by the Sonar tool.
I've used it for a relatively small-scale JEE6 project and, once I had managed to set it up, I was quite happy with how it works.

Simulating JMS - JUnit

I need to simulate JMS behavior while performing automated tests via Maven/Hudson. I was thinking about using some mock framework, e.g. Mockito, to achieve that goal, but maybe there is an easier tool which can accomplish this task? I have read a little bit about ActiveMQ, but from what I have found it requires installing a broker prior to using it. In my case it is important to have everything run by Maven only, because I don't have any privileges to install anything on the build server.
You can run ActiveMQ in embedded mode - the broker starts within your application and queues are created on the fly. You just need to add activemq.jar and run a few lines of code.
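For reference, a minimal sketch: the vm:// transport below starts a non-persistent broker inside the test JVM on first use, so nothing needs to be installed (the queue name is arbitrary):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class EmbeddedJmsTest {

    @Test
    public void sendsAndReceivesThroughEmbeddedBroker() throws Exception {
        // vm:// starts an embedded, non-persistent broker inside the test JVM
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("test.queue"); // created on the fly

            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));

            MessageConsumer consumer = session.createConsumer(queue);
            TextMessage received = (TextMessage) consumer.receive(1000);
            assertEquals("hello", received.getText());
        } finally {
            connection.close();
        }
    }
}
```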
On the other hand, there is the Mockrunner library which has support for JMS, although it was designed mainly for unit tests, not integration tests.
