I'm working on resolving an odd issue with my project that has cropped up since we started working on integration testing. I use the "jetty-maven-plugin" to start up an instance of the application; once it's started, the "maven-failsafe-plugin" runs the integration tests. This much is set up and running well.
What I'm trying to do now is this: I'd like to get a handle on my service layer so that I can set up some fixtures for my tests to run against. Up until now, our integration tests have been pretty simple-minded and I'd like to turn it up a notch and test the actual filling out of forms and so on. For this to work, I need to be able to set up some fixtures and then remove them so that these tests are reproducible. We're running against a test database that we use for just this purpose.
From what I've read, this is not unreasonable. Nonetheless, when I actually run the tests I get a very odd error message and stack trace. From what I can tell, Maven starts up the application in Jetty without issue. Then the failsafe plugin starts running the tests and, once it hits the first integration test, it begins instantiating a Spring instance and context. It correctly pulls in its properties and configuration files, but when it tries to actually inject the service object, I see this error:
Caused by: org.springframework.beans.factory.BeanDefinitionStoreException:
Unexpected exception parsing XML document from class path resource
[app-config.xml]; nested exception is
org.springframework.context.annotation.ConflictingBeanDefinitionException:
Annotation-specified bean name 'pesticideRoleRepositoryImpl' for bean class
[dao.role.PesticideRoleRepositoryImpl] conflicts with existing,
non-compatible bean definition of same name and class
[dao.role.PesticideRoleRepositoryImpl]
I will spare you the full stack trace; I can make it available at any time. ;-)
I started wondering if I was going about this all wrong, so I went back and set up a test project in much the same way. The test project is far simpler and doesn't have this issue. When it runs the integration tests, the service objects are injected without issue. If you are interested, you can take a look at my test project on GitHub.
My Question Is This...
Has anyone seen this error before? Could there be some conditions under which Spring will have this kind of problem?
It's clear to me that with this kind of integration testing, I end up with two Spring containers that use the exact same configuration. I thought this might be the problem but the test project works the same way and doesn't have this issue. I'm also troubled by the fact that even though there are two beans with the same name and class, Spring believes that they are incompatible.
Thank you, any help would be greatly appreciated! :-D
This error occurs when two different files contain the same class (bean) definition and the definitions are incompatible, i.e. oldBeanDefinition.equals(newBeanDefinition) == false
You could check:
Why the scanner is loading this class twice.
Why oldBeanDefinition.getSource().equals(newBeanDefinition.getSource()) == false
Why oldBeanDefinition.equals(newBeanDefinition) == false
You could put a breakpoint on ClassPathBeanDefinitionScanner.isCompatible(), or extend ClassPathBeanDefinitionScanner, override the isCompatible method, and log some useful info to find the error, as sketched below.
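A minimal sketch of that second approach, relying on Spring's protected isCompatible(BeanDefinition, BeanDefinition) hook (the logging itself is illustrative):

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.annotation.ClassPathBeanDefinitionScanner;

// Delegates to the default compatibility check, but logs both definitions
// whenever they conflict, so you can see where each one was picked up from.
public class LoggingBeanDefinitionScanner extends ClassPathBeanDefinitionScanner {

    public LoggingBeanDefinitionScanner(BeanDefinitionRegistry registry) {
        super(registry);
    }

    @Override
    protected boolean isCompatible(BeanDefinition newDefinition,
                                   BeanDefinition existingDefinition) {
        boolean compatible = super.isCompatible(newDefinition, existingDefinition);
        if (!compatible) {
            System.err.println("Conflict for " + newDefinition.getBeanClassName()
                    + ": new source = " + newDefinition.getSource()
                    + ", existing source = " + existingDefinition.getSource());
        }
        return compatible;
    }
}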
As a last option, XML bean definitions cannot be overridden by scanned ones, so if you define the bean in XML, scanned classes with the same bean name will be ignored. For example:
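Using the names from the error above, such an XML definition would look something like this:

<bean id="pesticideRoleRepositoryImpl" class="dao.role.PesticideRoleRepositoryImpl"/>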
The selected answer was correct; the root problem was that there were multiple instances of the bean being created. Interestingly, the other instances were mock instances: they were being picked up because they were mixed in with the tests, and all of the tests were placed on the classpath.
There are likely many solutions; my fix was to add a "context:exclude-filter" to the "context:component-scan" declaration in my application configuration, along the lines shown below.
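The base package and filter expression here are placeholders; the real values depend on where the mock classes live:

<context:component-scan base-package="dao">
    <!-- keep test-only mock classes out of the application context -->
    <context:exclude-filter type="regex" expression=".*Mock.*"/>
</context:component-scan>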
Related
We have some builds that are failing with variations of this error:
Error creating bean with name 'cartServiceImpl': Unsatisfied dependency expressed through field 'addressServiceClient'; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'addressServiceClient': Bean with name 'addressServiceClient' has been injected into other beans [addressInfoServiceImpl] in its raw version as part of a circular reference, but has eventually been wrapped. This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching - consider using 'getBeanNamesOfType' with the 'allowEagerInit' flag turned off, for example.
The thing is, we never see this error when we start up the service on our desktops. We only see this error when the build runs on the CI server. In fact, most of the time when we're building the same code, this error does not occur. I have a test case where it runs four concurrent builds of the same branch and commit (targeting for deployment to four different clusters), and sometimes all four succeed, but sometimes one (or even two) of them will fail with this error.
My first theory, when I determined the seeming randomness of this, was that there was some screwy problem with our docker registry or docker cache, which was somehow occasionally giving us an older image (there was a related problem of this nature, for real, several weeks ago). Despite my desire to hang this on another team, I have to assume that there's something we're doing that could be causing this, but perhaps it's random because this is depending on a race condition. I find it hard to believe that Spring bean resolution could have race conditions.
Is there any possibility that an error like this might occur or not occur, depending on race conditions?
We're using Spring Framework 5.0.9 with Spring Boot 2.0.5.
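For reference, the shape of the cycle the error message describes is roughly this (class names taken from the log; the real wiring is more involved). With field injection on both sides, one bean is handed out "raw" while the other is still being created, and the failure appears when a post-processor later wraps one of them in a proxy:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
class AddressServiceClient {
    @Autowired
    private AddressInfoServiceImpl addressInfoService;
}

@Service
class AddressInfoServiceImpl {
    @Autowired
    private AddressServiceClient addressServiceClient;
}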
Update:
Note that I still can't repeat this problem with ordinary testing on my laptop, but we were able to extract the jar file constructed on the CI server and download it to my laptop, and then run that directly, and it does get the same error. We compared the contents of the jar file between that jar and a "good" one, and the differences were subtle, no obvious problems that might cause this. We did notice that the AddressServiceClient mentioned in the error is second in the list of classes in the "bad" jar, and far down the list in the "good" jar.
I then thought that perhaps adding @Lazy to the AddressServiceClient class would avoid the problem (note that I say "avoid", not "fix"). I tried modifying that "bad" jar file locally, using "zip" to update the jar with the updated class file, and I found that the resulting jar file did NOT demonstrate the symptom. However, when I finally merged the PR with this change and the builds ran on the CI server, one of them still failed with the same error.
You can use setter injection; it uses Spring's three-level singleton cache.
For example:
private TmsOrderService tmsOrderService;

@Autowired
public void setTmsOrderService(TmsOrderService tmsOrderService) {
    this.tmsOrderService = tmsOrderService;
}
Spring's three-level cache is what lets it resolve circular dependencies: beans are instantiated first and the setters are called afterwards, so neither bean needs the other to be complete at construction time.
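To illustrate (class names hypothetical): two beans that depend on each other can both be created this way, whereas the same pair with constructor injection would fail with a BeanCurrentlyInCreationException:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
class TmsOrderService {
    private CartService cartService;

    @Autowired
    public void setCartService(CartService cartService) {
        this.cartService = cartService;
    }
}

@Service
class CartService {
    private TmsOrderService tmsOrderService;

    @Autowired
    public void setTmsOrderService(TmsOrderService tmsOrderService) {
        this.tmsOrderService = tmsOrderService;
    }
}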
We're migrating our Java-only Play application from Play 2.4 to Play 2.5. First step: get rid of GlobalSettings, still completely in the 2.4 realm. I wrote a StartModule which will take over the functionality, as the migration guide and "the internet" describe. I add
play.modules.enabled += "de.[...].modules.StartModule"
to the application's .conf file. Executing this via sbt run or sbt start works as expected. Massive problems, however, arise when I try to unit-test this stuff with sbt test or sbt test-only.
We have a rather elaborate unit test setup, as the application is complex and has large legacy parts. Eventually, the unit test instance of the Play server is started with
Helpers.start(testserver = Helpers.testServer(playhttpport,
    app = new GuiceApplicationBuilder()
        .configure(getConfiguration())
        .build()));
This works as long as the play.modules.enabled line above is not visible to the unit test code. As soon as I enable it, I get a number of errors like
Test de.[...]Tests failed: com.google.inject.CreationException:
Unable to create injector, see the following errors:
1) No implementation for play.inject.ApplicationLifecycle was bound.
while locating play.inject.ApplicationLifecycle
or
2) Could not find a suitable constructor in play.api.Environment.
Classes must have either one (and only one) constructor annotated with @Inject
or a zero-argument constructor that is not private.
Same thing happens if I remove the play.modules.enabled line and change the server start to
Helpers.start(testserver = Helpers.testServer(playhttpport,
    app = new GuiceApplicationBuilder()
        .load(Guiceable.modules(new StartModule()))
        .configure(getConfiguration())
        .build()));
In my limited understanding, it seems that GuiceApplicationBuilder (or whatever) "forgets" all built-in dependency injection configuration if any additional dependency definitions are given. Unfortunately, I have not found any applicable postings here or anywhere else which would lead me to a solution.
Questions:
Is my analysis correct?
How can I make my unit test code functional with the additional module in the DI framework?
Would it be helpful to continue directly to Play 2.5? I'd like to solve this problem beforehand, as that migration step will bring its own plethora of things to handle, and I'd really like to have a functional base for it, including an operational unit test framework...
Any insight and help greatly appreciated!
Kind regards,
Dirk
Update This is StartModule:
public class StartModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(InnerServerLauncher.class).asEagerSingleton();
    }
}
And this is the InnerServerLauncher:
@Singleton
public class InnerServerLauncher {
    @Inject
    public InnerServerLauncher(ApplicationLifecycle lifecycle,
                               Environment environment) {
        System.err.println("*** Server launcher called ***");
    }
}
I should add that the problem also arises if I put a totally different class into play.modules.enabled like
play.modules.enabled += "play.filters.cors.CORSModule"
Ok, I finally got it. The problem is the getConfiguration() method which I mentioned above but did not elaborate on. As I said, we have quite some legacy in our system. Therefore, we have a mechanism which constructs the configuration for the unit tests independently from Play's config files. GuiceApplicationBuilder.configure() (and, by the way, also the fakeApplication()-based methods) merges this with the Play-internal configuration, but only at the topmost layer. For plain settings (strings, numbers, etc.) that's fine, but for value lists it means that the complete list is overwritten and replaced.
play.modules.enabled is used internally by Play to gather the default modules which have to be registered with the dependency injection framework. The documentation states very clearly that your statements in application.conf must only add elements to play.modules.enabled, i.e.
play.modules.enabled += "package.Module"
Our "special way" of constructing the configuration environment for unit tests, however, overwrote Play's own play.modules.enabled as soon as any value in our own configuration was set. And that destroyed the complete dependency injection scheme of Play as none of its own base classes were accessible any more. Bummer!
I solved this by actually using a "real" configuration file which is read normally by GuiceApplicationBuilder and which contains those play.modules.enabled += ... lines. As this config file is still artificially and temporarily generated for the unit test scenario, I pass its name to GuiceApplicationBuilder via System.setProperty:
System.setProperty("config.file", conffilename);
Helpers.start(testserver = Helpers.testServer(playhttpport,
    app = new GuiceApplicationBuilder().build()));
Now the configuration is created correctly, with the internal default settings for play.modules.enabled, and I can finally start to actually move the code from GlobalSettings into that injected and eagerly loaded module. And it was just ten hours of hunting...
I am working on a really big enterprise application, with a couple of thousand beans and a big dependency graph between classes. We are using Spring 3, with @Autowired fields (autowiring in the constructor).
I am trying to create an integration test for one of the controllers, which has multiple dependencies, each of those with more dependencies, and so on. It is borderline impossible to create an XML definition of the classes which need to be resolved, because of the bad project structure and dependency graph; therefore I can't build the ApplicationContext...
What I am trying to do is scan for fields in a class and, if they are beans (Component, Service, etc.), add them to the ApplicationContext from code.
I guess I could iterate through the class's fields in a recursive function using reflection and add the beans to the application context, but I have no idea how...
How can i do this? Is this feasible?
Unless I am missing something, you're trying to solve the wrong problem. If your module structure is in that state, trying to build the context dynamically is not going to help you because ultimately you're going to load pretty much the whole application.
I would advise you to create a common "low-level" stack for your app: something that is reasonable and clearly identified by separate modules. Once you have that, start creating boundaries for major features and try to load only them.
If you can't do that, you can still load a test application context by using mocks to cut dependencies in your graph, as sketched below. In any case, discovering the fields to wire along the way is not going to buy you anything.
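A rough sketch of that mock-based cut, assuming Mockito is on the test classpath (the names OrderRepository and ControllerTestConfig are placeholders):

import org.mockito.Mockito;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Test-only configuration: the deep dependency is replaced by a mock,
// so its own (large) subtree of beans never needs to load.
@Configuration
public class ControllerTestConfig {

    @Bean
    public OrderRepository orderRepository() {
        return Mockito.mock(OrderRepository.class);
    }
}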
I recently started as a freelancer on my current project. One of the things I threw myself at was the failing Jenkins build (it had been failing since April 8th, a week before I started here).
Generally speaking, you could see a buttload of DI issues in the log. The first thing I did was get all tests to work in the same way, starting from the same application context.
They had also implemented their own "mocking" thing, which seemed to fail to work correctly. After a discussion with the lead dev, I suggested switching to Springockito. (For a certain module, they needed mocking for their integration testing; legacy reasons which can't be changed.)
Anyway, stuff started failing badly after that. A lot of beans which were mocked in the tests simply weren't mocked, or weren't found, or whatever. Typically, it would fail on loading the application context, stating that one or another bean was missing.
I tried different stuff and different approaches, but in the end, only the thing I feared most worked: adding @DirtiesContext to every single test. Now the Maven build is starting to turn green again, and tests are doing what they are supposed to do. But I am reloading the Spring context each and every time, which takes time; which is all relative, since the context loads in about 1-2 seconds.
A side note to this story is that they've upgraded to Hibernate 4, and thus to Spring 3.2. Previously, they were using an older version of Spring 3. All tests were working back then, and the @DirtiesContext thing was not necessary.
Now, what worries me the most is that I can't immediately think of an explanation for this weird behaviour. It almost seems that Spring's context is dirtied simply by launching a test which uses @Autowired beans. Not all tests use mocks, so it can't be that.
Does this sound familiar to anyone? Has anyone had the same experiences with integration testing with (the latest version of) Spring?
On Stackoverflow, I've found this ticket: How can a test 'dirty' a spring application context?
It seems to pretty much sum up the behaviour I'm seeing, but the point is that we're autowiring services/repositories/..., and that we don't have any setters on those classes whatsoever.
Any thoughts?
Thanks!
To answer my own question: the secret was in the Spring version. We were using Spring 3.1.3, whereas I had presumed they were using Spring 3.2 (they were constantly talking about a recent upgrade of the Spring version).
The explanation was here, a blog post I stumbled over in my hunt to get it fixed: Spring Framework 3.2 RC1: New Testing Features
And a copy paste of the relevant piece:
The use of generic factory methods in Spring configuration is by no means specific to testing, but generic factory methods such as EasyMock.createMock(MyService.class) or Mockito.mock(MyService.class) are often used to create dynamic mocks for Spring beans in a test application context. For example, prior to Spring Framework 3.2 the following configuration could fail to autowire the OrderRepository into the OrderService. The reason is that, depending on the order in which beans are initialized in the application context, Spring would potentially infer the type of the orderRepository bean to be java.lang.Object instead of com.example.repository.OrderRepository.
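The configuration pattern the post is talking about looks roughly like this (bean and class names taken from the quoted example; the XML itself is my reconstruction):

<!-- A dynamic mock defined via a generic factory method. Before Spring 3.2,
     the container could infer this bean's type as java.lang.Object instead
     of OrderRepository, and autowiring by type would then fail. -->
<bean id="orderRepository" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="com.example.repository.OrderRepository"/>
</bean>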
So, how did I solve this problem? Well, I did the following steps:
create a new maven module
filter out the tests which needed mocking. All the non-mocked tests would run normally in a Spring build, in a separate Failsafe run (I created a base package "clean", and sorted them out like that)
Put all the mocked tests in a base package called "mocked", and make an additional run in Failsafe for the mocked tests.
Each mocked test uses Springockito to create the mocks. I'm also using the Springockito annotations, to easily do a @ReplaceWithMock in place. Every mocked test is then annotated with @DirtiesContext, so the context is dirtied after each test and a fresh Spring context is introduced with each test; see the sketch below.
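Roughly what each mocked test ended up looking like, using Springockito's SpringockitoContextLoader and @ReplaceWithMock (bean and config names are placeholders):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.kubek2k.springockito.annotations.ReplaceWithMock;
import org.kubek2k.springockito.annotations.SpringockitoContextLoader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = SpringockitoContextLoader.class,
        locations = "classpath:application-context.xml")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
public class MockedServiceTest {

    // Springockito swaps the real bean for a Mockito mock in the context
    @ReplaceWithMock
    @Autowired
    private OrderService orderService;

    @Test
    public void usesTheMock() {
        // exercise code that depends on orderService here
    }
}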
The only reasonable explanation I could give is that the context is effectively being dirtied, because there is a framework (Springockito) taking over the management of the Spring beans from the Spring framework. I don't know if that's correct, but it's the best explanation I could come up with. That, in fact, is the definition of a dirty context, which is why we need to flag it as dirty.
Using this strategy, I got the build up and running again, and all tests are running ok. It's not perfect, but it's working, and it's consistent.
I have a couple of beans defined in the applicationContext.xml file, and I found that if I make a mistake (say, a typo) in a bean's name, Spring won't complain at all and goes ahead and loads the invalid configuration. Doesn't Spring do this checking automatically? And how can I implement schema validation on the applicationContext.xml file? Thanks.
IntelliJ IDEA has a wonderful support for Spring, including:
detecting broken references (bean does not exist, has a wrong type, etc.)
completing bean names when Ctrl+Space is pressed (along with narrowing the list to only the beans matching by type)
discovering missing/duplicated beans where @Resource/@Autowired would fail to autowire at runtime
quick navigation between Java and application context XML
...and lots more
Also, I strongly recommend writing a Spring smoke integration test. It doesn't have to test anything beyond context startup (you would be amazed how many errors that discovers); see the sketch below.
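A minimal sketch (the config location is a placeholder for your real one):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Asserts nothing: the test fails if and only if the context cannot start.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class ContextSmokeTest {

    @Test
    public void contextStarts() {
        // intentionally empty
    }
}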
To avoid errors in your Spring context, I suggest using a plugin which checks its contents, for instance Spring IDE or the SpringSource Tool Suite. That way the plugin validates your Spring contexts during development, and you can find errors before you ever run the application.
In addition to this problem, I had issues with detecting duplicate bean ids that would get overridden unintentionally, among other things. Finally I found an open-source project that helps you write JUnit tests that detect these problems. It was very easy to use and solved my problems; it's called Beanoh.