Spring context dirty after each integration test

I recently started as a freelancer on my current project. One of the things I threw myself at was the failing Jenkins build (it had been failing since April 8th, a week before I started here).
Generally speaking, you could see a buttload of DI issues in the log. The first thing I did was get all tests working the same way, starting from the same application context.
They had also implemented their own "mocking" mechanism, which didn't seem to work correctly. After a discussion with the lead dev, I suggested switching to Springockito. (For a certain module, they needed mocking for their integration testing -- legacy reasons, which can't be changed.)
Anyway, stuff started failing badly after that. A lot of beans which were mocked in the test simply weren't mocked, or weren't found, or whatever. Typically, it would fail on loading the application context, stating that one bean or another was missing.
I tried different stuff and different approaches, but in the end, only the thing I most feared worked: adding @DirtiesContext to every single test. Now the Maven build is starting to turn green again, and tests are doing what they are supposed to do. But I am reloading the Spring context every single time, which takes time - which is all relative, since the context loads in about 1-2 seconds.
A side note to this story is that they've upgraded to Hibernate 4, and with it to Spring 3.2. Previously, they were using an older version of Spring 3. All tests were working back then, and the @DirtiesContext thing was not necessary.
Now, what worries me the most is that I can't immediately think of an explanation for this weird behaviour. It almost seems that Spring's context is dirtied simply by running a test which uses @Autowired beans. Not all tests use mocks, so it can't be that.
Does this sound familiar to anyone? Has anyone had the same experiences with integration testing with (the latest version of) Spring?
On Stack Overflow, I found this question: How can a test 'dirty' a Spring application context?
It seems to pretty much sum up the behaviour I'm seeing, but the point is that we're autowiring services/repositories/..., and that we don't have any setters on those classes whatsoever.
Any thoughts?
Thanks!

To answer my own question: the secret was in the Spring version. We were using Spring 3.1.3, whereas I presumed they were using Spring 3.2 (they were constantly talking about a recent upgrade of the Spring version).
The explanation was in a blog post I stumbled upon in my hunt to get this fixed: Spring Framework 3.2 RC1: New Testing Features.
And a copy-paste of the relevant piece:
The use of generic factory methods in Spring configuration is by no means specific to testing, but generic factory methods such as EasyMock.createMock(MyService.class) or Mockito.mock(MyService.class) are often used to create dynamic mocks for Spring beans in a test application context. For example, prior to Spring Framework 3.2 the following configuration could fail to autowire the OrderRepository into the OrderService. The reason is that, depending on the order in which beans are initialized in the application context, Spring would potentially infer the type of the orderRepository bean to be java.lang.Object instead of com.example.repository.OrderRepository.
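For illustration, the configuration pattern the post is talking about looks something like this (reconstructed from the bean and class names mentioned in the excerpt; not our project's actual file):

<!-- Prior to Spring 3.2, the return type of this generic factory method could be
     inferred as java.lang.Object instead of OrderRepository, depending on bean
     initialization order, so autowiring by type could fail. -->
<bean id="orderRepository" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="com.example.repository.OrderRepository"/>
</bean>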
So, how did I solve this problem? Well, I did the following steps:
Create a new Maven module.
Filter out the tests which needed mocking. All the non-mocked tests would run normally, sharing one Spring context, in a separate Failsafe run (I created a base package "clean", and sorted them out like that).
Put all the mocked tests in a base package called "mocked", and make an additional Failsafe run for the mocked tests.
Each mocked test uses Springockito to create the mocks. I'm also using the Springockito annotations, to easily do an @ReplaceWithMock in place. Every mocked test is then annotated with @DirtiesContext, so the context is dirtied after each test and rebuilt for the next one; a sketch of such a test follows below.
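For reference, a mocked test in this setup looks roughly like this (a minimal sketch: the context file and repository are placeholders, and the loader/annotation class names are as I recall them from Springockito's documentation):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.kubek2k.springockito.annotations.ReplaceWithMock;
import org.kubek2k.springockito.annotations.SpringockitoContextLoader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.annotation.DirtiesContext.ClassMode;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = SpringockitoContextLoader.class,
        locations = "classpath:test-context.xml") // placeholder context file
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class MockedOrderServiceIT {

    @ReplaceWithMock // Springockito swaps the real bean for a Mockito mock
    @Autowired
    private OrderRepository orderRepository; // placeholder dependency

    @Test
    public void serviceUsesTheMockedRepository() {
        // stub and verify orderRepository like any other Mockito mock
    }
}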
The only reasonable explanation I could come up with is that the context is effectively being dirtied, because there is a framework (Springockito) taking over the management of some Spring beans from the Spring framework. I don't know whether that's correct, but it's the best explanation I have. That, in fact, is the definition of a dirty context, which is why we need to flag it as such.
Using this strategy, I got the build up and running again, and all tests pass. It's not perfect, but it's working, and it's consistent.

Related

Edit and re-run spring boot unit test without reloading context to speed up tests

I have a Spring Boot app and have written unit tests using a Postgres test container (https://www.testcontainers.org/) and JUnit. The tests have the @SpringBootTest annotation, which loads the context and starts a test container before running the tests.
Loading the context and starting the container takes around 15 seconds on my relatively old MacBook, but the tests themselves are pretty fast (< 100 ms each). So in a full build with hundreds of tests this does not really matter; it is a one-time cost of 15 seconds.
But developing/debugging the tests individually in an IDE becomes very slow. Every single test incurs a 15 sec startup cost.
I know IntelliJ and Spring Boot support hot reload of classes while the app is running. Are there similar solutions/suggestions for doing the same with unit tests? I.e. keep the context loaded and the test container (DB) running, but recompile just the modified test class and run the selected test again.
I believe there is a simple solution for your issue. You haven't specified how exactly you run the test container in the test, but I have had success with the following approach:
For tests running locally, start a Postgres server on your laptop once (say, at the beginning of your working day or so). It can be a dockerized process or even a regular PostgreSQL installation.
During the test, the Spring Boot application doesn't really know that it interacts with a test container - it gets host/port/credentials and that's it; it creates a DataSource out of these parameters.
So for your local development, you can modify the integration with the test container so that the actual test container is launched only if there is no "LOCAL.TEST.MODE" env variable defined (you can pick any name - it's not something that already exists).
Then define the env variable on your laptop (or use a system property for that - whatever works better for you), and configure Spring Boot's DataSource to pick up the properties of your local installation when that property is defined:
In a nutshell, it can be something like:
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@ConditionalOnProperty(name = "test.local.mode", havingValue = "true", matchIfMissing = false)
public class MyDbConfig {

    @Bean
    public DataSource dataSource() {
        // URL and credentials are illustrative - use your local server's settings
        return new DriverManagerDataSource("jdbc:postgresql://localhost:5432/testdb", "postgres", "postgres");
    }
}
Of course, a more "clever" solution with configuration properties can be implemented; it all depends on how you integrate with test containers and where the actual properties for the data source initialization come from, but the idea remains the same:
In your local env you'll actually work with a locally installed PostgreSQL server and won't even start the test container.
Since all operations in PostgreSQL, including DDL, are transactional, you can put a @Transactional annotation on the test and Spring will roll back all the changes made by the test, so the DB won't fill up with garbage data (see the sketch at the end of this answer).
As opposed to test containers, this method has one significant advantage:
If your test fails and some data remains in the database, you can inspect it locally, because the server stays alive. You'll be able to connect to the DB with pgAdmin or something similar and examine the state...
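To illustrate the @Transactional point, a test could look like this (a sketch with JUnit 5 and a hypothetical OrderRepository/Order pair):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // each test runs in a transaction that is rolled back afterwards
class OrderRepositoryIT {

    @Autowired
    private OrderRepository orderRepository; // hypothetical repository

    @Test
    void savesAndReadsBack() {
        orderRepository.save(new Order("test-order")); // hypothetical entity
        // assertions here; the inserted row disappears on rollback,
        // so the local database stays clean between runs
    }
}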
Update 1
Based on the OP's comment
I see what you're saying. Basically, you've mentioned two different issues, which I'll address separately.
Issue 1: the application context takes about 10-12 seconds to start.
OK, this is something that requires investigation. Chances are that some bean initializes slowly, so you should understand why exactly the application starts so slowly:
Spring's own code (scanning, bean definition population, etc.) runs in fractions of a second and is usually not a bottleneck by itself - it must be somewhere in your application.
Checking the beans' startup times is somewhat out of scope for this question, although there are certainly methods to do so, for example:
see this thread, and for newer Spring versions, if you use Actuator, this here. So I'll assume you will figure out one way or another why it starts slowly.
Anyway, what can you do with this kind of information, and how can you make the application context load faster?
Well, obviously you can exclude the slow bean (or set of beans) from the configuration; maybe you don't need it at all in the tests, or you can at least use @MockBean instead - this varies greatly depending on the actual use case.
It's also possible, in some cases, to provide configuration that still loads that slow bean but alters its behavior so that it doesn't become slow.
I can also point out some generally applicable ideas that can help regardless of your actual code base.
First of all, if you're running different test cases (multi-select tests in the IDE and run them all at once) that share exactly the same configuration, then Spring Boot is smart enough not to re-initialize the application context. This is known as application context caching. Here is one of the numerous tutorials about this topic.
Another approach is lazy bean initialization. In Spring Boot 2.2+ there is a property for that:
spring:
  main:
    lazy-initialization: true
Of course, if you're not planning to use it in production, define it in a configuration file of your choice under src/test/resources; Spring Boot will read it during tests as well, as long as the file adheres to the naming convention. If you have technical issues with this (again, out of scope for this question), then consider reading this tutorial.
If your Spring Boot version is older than 2.2, you can achieve the same "manually": here is how. A sketch of that approach follows below.
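In essence, the "manual" variant boils down to a BeanFactoryPostProcessor along these lines (a sketch; register it from your test configuration only):

import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

// Flips every bean definition to lazy initialization before the context refreshes.
public class LazyInitBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        for (String beanName : beanFactory.getBeanDefinitionNames()) {
            beanFactory.getBeanDefinition(beanName).setLazyInit(true);
        }
    }
}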
The last direction I would like to mention is reconsidering your test implementation. This is especially relevant if you have a big project to test. Usually, an application is separated into layers - services, DAOs, controllers, you know. My point is that testing that involves the DB should be used only for the DAO layer - that is where you test your SQL queries.
The business logic code usually doesn't require a DB connection and, in general, can be covered by unit tests that do not use Spring at all. So instead of using the @SpringBootTest annotation, which starts the whole application context, you can load only the configuration of the DAO(s); chances are this will start much faster, with the "slow beans" belonging to other parts of the application. Spring Boot even has a special annotation for it (they have annotations for everything ;) ): @DataJpaTest. A minimal sketch follows.
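A minimal sketch of such a slice test (repository and method names are hypothetical):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest // starts only the JPA slice: entities, repositories and a test DataSource
class OrderDaoTest {

    @Autowired
    private OrderRepository orderRepository; // hypothetical DAO under test

    @Test
    void findsNothingInEmptySchema() {
        // only the persistence layer is bootstrapped, so this starts much
        // faster than a full @SpringBootTest
        assert orderRepository.findAll().isEmpty();
    }
}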
This is based on the idea that the whole Spring testing package is intended for integration tests only; in general, a test that starts Spring is an integration test, and you'll probably prefer to work with unit tests wherever possible because they're much faster and do not use external dependencies: databases, remote services, etc.
Issue 2: the schema often goes out of sync.
In my current approach, the test container starts up, Liquibase applies my schema, and then the test is executed. Everything is done from within the IDE, which is a bit more convenient.
I admit I haven't worked with Liquibase; we used Flyway instead, but I believe the answer will be the same.
In a nutshell - this will keep working like that and you don't need to change anything.
I'll explain.
Liquibase is supposed to start along with the Spring application context and apply the migrations, that's true. But before actually applying them, it checks whether they have already been applied, and if the DB is in sync it does nothing. Flyway maintains a table in the DB for that purpose, and I'm sure Liquibase uses a similar mechanism.
So as long as you're not creating tables or the like inside the test, you should be good to go:
Assuming you're starting the Postgres server for the first time, the first test you run "at the beginning of your working day", following the aforementioned use case, will create the schema, deploy all the tables, indices, etc. with the help of the Liquibase migrations, and then run the test.
However, when you start the second test, the migrations will already be applied. It's equivalent to restarting the application itself in a non-test scenario (staging, production, whatever) - the restart itself won't re-apply all the migrations to the DB. The same goes here...
OK, that's the easy case, but you probably populate data inside the tests (well, you should be ;) ). That's why I mentioned in the original answer that it's necessary to put the @Transactional annotation on the test itself.
This annotation opens a transaction before running the code in the test and rolls it back afterwards - read: removes all the data populated by the test - even when the test has passed.
Now, to make it more complicated: what if you create tables or alter columns of existing tables inside the test? Well, that alone will drive Liquibase crazy even in production scenarios, so you probably shouldn't do it. But again, putting @Transactional on the test itself helps here, because PostgreSQL's DDL commands (just to clarify, DDL = Data Definition Language, i.e. commands like ALTER TABLE - basically anything that changes an existing schema) are also transactional. I know that Oracle, for example, didn't run DDL commands in a transaction, but things might have changed since then.
I don't think you can keep the context loaded.
What you can do is activate the reusable containers feature of Testcontainers. It prevents the container's destruction after a test is run.
You'll have to make sure that your tests are idempotent, or that they remove all the changes made to the container after completion.
In short, you should add .withReuse(true) to your container definition and add testcontainers.reuse.enable=true to ~/.testcontainers.properties (this is a file in your home directory).
Here's how I define my test container to test my code against Oracle.
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.OracleContainer;

public class StaticOracleContainer {

    public static OracleContainer getContainer() {
        return LazyOracleContainer.ORACLE_CONTAINER;
    }

    private static class LazyOracleContainer {

        private static final OracleContainer ORACLE_CONTAINER = makeContainer();

        private static OracleContainer makeContainer() {
            final OracleContainer container = new OracleContainer()
                    // Username which Testcontainers is going to use
                    // to find out if the container is up and running
                    .withUsername("SYSTEM")
                    // Password which Testcontainers is going to use
                    // to find out if the container is up and running
                    .withPassword("123")
                    // Tell Testcontainers that these ports should
                    // be mapped to external ports
                    .withExposedPorts(1521, 5500)
                    // The Oracle database is not going to start if less than
                    // 1GB of shared memory is available, so this is necessary
                    .withSharedMemorySize(2147483648L)
                    // This is the same as giving the container
                    // -v /path/to/init_db.sql:/u01/app/oracle/scripts/startup/init_db.sql
                    // Oracle will execute init_db.sql after the container is started
                    .withClasspathResourceMapping("init_db.sql",
                            "/u01/app/oracle/scripts/startup/init_db.sql",
                            BindMode.READ_ONLY)
                    // Do not destroy the container
                    .withReuse(true);
            container.start();
            return container;
        }
    }
}
As you can see, this is a singleton. I need it to control the Testcontainers lifecycle manually, so that I can use reusable containers.
If you want to know how to use this singleton to add Oracle to the Spring test context, you can look at my example of using Testcontainers: https://github.com/poxu/testcontainers-spring-demo
There's one problem with this approach, though: Testcontainers is never going to stop a reusable container. You have to stop and destroy the container manually.
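For orientation, wiring the singleton into a Spring test context can be done with an ApplicationContextInitializer roughly like this (a sketch; the linked demo may differ in details):

import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.testcontainers.containers.OracleContainer;

// Use with @ContextConfiguration(initializers = OracleContainerInitializer.class)
public class OracleContainerInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        // Starts the container on first use, or picks up the reused one
        OracleContainer oracle = StaticOracleContainer.getContainer();
        TestPropertyValues.of(
                "spring.datasource.url=" + oracle.getJdbcUrl(),
                "spring.datasource.username=" + oracle.getUsername(),
                "spring.datasource.password=" + oracle.getPassword()
        ).applyTo(context.getEnvironment());
    }
}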
I can't imagine some hot-reload magic flag for testing - there is just so much that can dirty the Spring context, dirty the database, etc.
In my opinion, the easiest thing to do here is to locally replace the test container initializer with a manual container start, and change the database properties to point to this container. If you want some automation for this, you could add a before-launch script (if you are using IntelliJ...) that does something like docker start postgres || docker run postgres (Linux), which starts the container if it's not running and does nothing if it is.
Usually the IDE recompiles just the affected classes anyway, and the Spring context probably won't start in 15 seconds without a container starting - unless you have a lot of beans to configure as well...
I'm trying to learn testing with Spring Boot, so sorry if this answer is not relevant.
I came across this video, which suggests a combination of (in order of most to least used):
Mockito unit tests with the @Mock annotation, with no Spring context, where possible
Slice tests using the @WebMvcTest annotation, when you want to involve some of the Spring context (see the sketch after this list)
Integration tests with the @SpringBootTest annotation, when you want to involve the entire Spring context
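As an example of the middle option, a slice test might look like this (a sketch with a hypothetical OrderController):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

@WebMvcTest(OrderController.class) // hypothetical controller: only the web slice starts
class OrderControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void returnsOk() throws Exception {
        mockMvc.perform(get("/orders")).andExpect(status().isOk());
    }
}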

@Autowired variables access in Spock setupSpec()

I need to execute a block of code once at startup of my Spock tests. I cannot use @Autowired fields in setupSpec(), which is the default method for such initialisation, because beans are not loaded by that time.
Found on the web (dating back to 2015), source:
The behavior is a consequence of the design of Spring's TestContext framework. I don't see a way to change it without hitting other problems. The situation isn't any different when using the TestContext framework with JUnit.
It's been 6 years already - is there any clean way to do this? I want to avoid dirty workarounds.
You are in luck. Thanks to @erdi for implementing this in "Add support for injection into @Shared fields in spock-spring module": you can try the feature in the Spock 2.0 snapshot builds, and it will be in the Spock 2.0-M5 release. You need to opt in to @Shared injection by placing @EnableSharedInjection on your specification. It is also really important that you read the Javadoc and understand the implications of doing that mentioned there.

Spring Integration Test: Incompatible Beans with the Same Name and Class

I'm working on resolving an odd issue in my project that has cropped up since we started working on integration testing. I use the jetty-maven-plugin to start an instance of the application; once it's started, the maven-failsafe-plugin runs the integration tests. This much is set up and running well.
What I'm trying to do now is this: I'd like to get a handle on my service layer so that I can set up some fixtures for my tests to run against. Until now, our integration tests have been pretty simple-minded, and I'd like to turn it up a notch and test the actual filling out of forms and so on. For this to work, I need to be able to set up some fixtures and then remove them, so that these tests are reproducible. We're running against a test database that we use for just this purpose.
From what I've read, this is not unreasonable. Nonetheless, when I actually run the tests I get a very odd error message and stack trace. From what I can tell, Maven starts the application in Jetty without issue. Then the Failsafe plugin starts running the tests and, once it hits the first integration test, it begins instantiating a Spring context. It correctly pulls in its properties and configuration files, but when it tries to actually inject the service object, I am seeing this error:
Caused by: org.springframework.beans.factory.BeanDefinitionStoreException:
Unexpected exception parsing XML document from class path resource [app-config.xml];
nested exception is org.springframework.context.annotation.ConflictingBeanDefinitionException:
Annotation-specified bean name 'pesticideRoleRepositoryImpl' for bean class
[dao.role.PesticideRoleRepositoryImpl] conflicts with existing, non-compatible
bean definition of same name and class [dao.role.PesticideRoleRepositoryImpl]
I will spare you the full stack trace; I can make it available at any time. ;-)
I started wondering whether I was going about this all wrong, so I went back and set up a test project in much the same way. The test project is far simpler and doesn't have this issue. When it runs the integration tests, the service objects are injected without issue. If you are interested, you can take a look at my test project on GitHub.
My Question Is This...
Has anyone seen this error before? Could there be some conditions under which Spring will have this kind of problem?
It's clear to me that with this kind of integration testing, I end up with two Spring containers that use the exact same configuration. I thought this might be the problem, but the test project works the same way and doesn't have this issue. I'm also troubled by the fact that even though there are two beans with the same name and class, Spring believes they are incompatible.
Thank you, any help would be greatly appreciated! :-D
This error occurs when two different files contain the same class (bean) definition and they are incompatible, i.e. oldBeanDefinition.equals(newBeanDefinition) == false.
You could check:
why the scanner is loading this class twice;
why oldBeanDefinition.getSource().equals(newBeanDefinition.getSource()) == false;
why oldBeanDefinition.equals(newBeanDefinition) == false.
You could put a breakpoint in ClassPathBeanDefinitionScanner.isCompatible(), or extend ClassPathBeanDefinitionScanner and override the isCompatible method to log some useful info to pin down the error, as sketched below.
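A sketch of the override approach (the logging output is illustrative):

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.annotation.ClassPathBeanDefinitionScanner;

public class LoggingBeanDefinitionScanner extends ClassPathBeanDefinitionScanner {

    public LoggingBeanDefinitionScanner(BeanDefinitionRegistry registry) {
        super(registry);
    }

    @Override
    protected boolean isCompatible(BeanDefinition newDef, BeanDefinition existingDef) {
        boolean compatible = super.isCompatible(newDef, existingDef);
        if (!compatible) {
            // Log where each conflicting definition came from
            System.out.println("Conflict: new=" + newDef.getSource()
                    + ", existing=" + existingDef.getSource());
        }
        return compatible;
    }
}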
As a last option, XML bean definitions cannot be overridden by scanned ones, so if you define the bean in XML, scanned classes with the same bean name will be ignored.
The selected answer was correct; the root problem was that there were multiple instances of the bean being created. Interestingly, the other instances were mock instances; they were being picked up because they were mixed in with the tests, and all of the tests were on the classpath.
There are likely many solutions; my fix was to add a context:exclude-filter to the context:component-scan declaration in my application configuration, along these lines:
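Something like the following (the base package and regex are illustrative; match whatever identifies your mock classes):

<context:component-scan base-package="dao">
    <!-- keep the test-only mock implementations out of the application's scan -->
    <context:exclude-filter type="regex" expression=".*Mock.*"/>
</context:component-scan>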

how to validate spring applicationContext.xml file

I have a couple of beans defined in the applicationContext.xml file and found that if I make a mistake (say, a typo) in a bean's name, Spring won't complain at all and goes ahead and loads the invalid configuration. Doesn't Spring do this checking automatically? And how can I implement schema validation on the applicationContext.xml file? Thanks.
IntelliJ IDEA has a wonderful support for Spring, including:
detecting broken references (bean does not exist, has a wrong type, etc.)
completing bean names when Ctrl+Space is pressed (along with narrowing the list to only the beans matching by type)
discovering missing/duplicated beans where @Resource/@Autowired is used and autowiring would fail at runtime
quick navigation between Java and application context XML
...and lots more
Also, I strongly recommend writing a Spring smoke integration test. It doesn't have to test anything - just context startup (you would be amazed how many errors it discovers).
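For a classic XML-configured application, the whole smoke test can be as small as this (a sketch; adjust the context location to yours):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class ContextSmokeTest {

    @Test
    public void contextStarts() {
        // intentionally empty: the test fails if the context cannot be built,
        // catching broken references and typos in bean definitions
    }
}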
To avoid errors in the Spring context, I suggest you use a plugin which checks its contents, for instance Spring IDE or SpringSource Tool Suite. That way the plugin validates your Spring contexts during development, and you can find errors before even running your application.
In addition to this problem, I had problems with detecting duplicate bean ids that would get overridden unintentionally, among others. Finally, I found an open-source project that helps you write JUnit tests that detect these problems. It was very easy to use and solved my problems; it's called Beanoh.

Mocking my custom dependencies with Spring

Is it possible to declare mocks of my own classes declaratively with Spring, using some mocking framework? I know there are some standard mocks available in Spring, but I'd like to be able to mock out my own classes declaratively too.
Just to check I'm not going about this the wrong way: the idea is to have a pair of a JUnit test and a Spring config for each integration test I want to run, mocking everything except the specific integration aspect I'm testing (say I have dependencies on two different data services: test one at a time), and minimising the amount of repeated Java code specifying the mocks.
I did it using a special context XML that imported the real XML and overrode the definitions of the beans in question, roughly as below. I'd be happy to learn there is a better and smarter solution, but this one worked fine for me.
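Roughly like this (bean id and class are placeholders):

<!-- test-context.xml: import the real configuration, then redefine selected beans -->
<import resource="classpath:applicationContext.xml"/>

<!-- a later definition with the same id overrides the imported real bean -->
<bean id="dataService" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="com.example.DataService"/>
</bean>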
Seriously - you really don't want to be doing that.
I have seen a number of projects that attempt to do this, and I promise that you will end up with:
A huge number of Spring files, each one slightly different, but you don't know how or why.
Spaghetti code, because the "declarative" definitions don't let you see that your objects are doing too much, or doing it with the wrong collaborators.
In the system-test case, there are a number of points at which you can stub out external services instead...
I would recommend that you read GOOS (Growing Object-Oriented Software, Guided by Tests) - it devotes a great deal of attention to answering exactly this kind of question:
http://www.growing-object-oriented-software.com/
If there are only a few beans that you want to change, and you want to change them for all tests, then you could have a look at the @Primary annotation.
You have to annotate the special test class with @Primary - then it will "override" the real class. But use this only if you want it to apply to all tests.
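For instance (DataService and the stub are hypothetical):

import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Service;

@Primary // wins over the real DataService implementation at all injection points
@Service
public class StubDataService implements DataService { // hypothetical interface

    @Override
    public String fetch(String id) {
        return "stubbed-" + id; // canned response instead of a real remote call
    }
}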
