I have a Java @Configuration class with a Foo @Bean annotated with @ConditionalOnBean(Bar.class). Since I expect this bean to be provided (or not) by the importing project, Bar is not present anywhere in my project.
In my integration test, I mock Bar by means of @MockBeans. But for some reason Spring Boot's --debug report tells me it did not find it, so my conditional bean has not been loaded.
I'm almost sure this setup has worked properly in the past. Did I miss some extra configuration? I can't manage to make it work.
P.S.: I discovered that even manually re-registering the @Bean in the same @Configuration class as the conditional one does not make it visible! Is there any known bug related to this?
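The setup can be sketched as follows (Foo, Bar, and the class names below are hypothetical, assuming JUnit 5 and Spring Boot's test support):

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration: Foo is only registered when a Bar bean exists.
@Configuration
class FooConfiguration {

    @Bean
    @ConditionalOnBean(Bar.class)
    Foo foo(Bar bar) {
        return new Foo(bar);
    }
}

// Integration test that mocks Bar, expecting the mock to satisfy the condition.
@SpringBootTest
class FooConfigurationTest {

    @MockBean
    private Bar bar;

    @Autowired(required = false)
    private Foo foo;

    @Test
    void fooShouldBeCreated() {
        assertNotNull(foo); // fails: the condition was evaluated before mocking
    }
}
```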
Self-answer: the culprit in this case is the order in which bean definitions are processed. As the Spring Boot documentation warns:
You need to be very careful about the order that bean definitions are
added as these conditions are evaluated based on what has been
processed so far. For this reason, we recommend only using
@ConditionalOnBean and @ConditionalOnMissingBean annotations on
auto-configuration classes (since these are guaranteed to load after
any user-defined bean definitions have been added).
P.S. 2: I realized Bar is an interface, but I don't see why that shouldn't work as long as an implementation is present.
P.S. 3: I found out that the MockitoTestExecutionListener runs after the OnBeanCondition class has already been evaluated. This seems to be exactly my problem.
These two links explain why it is not possible:
https://github.com/spring-projects/spring-boot/issues/9624 and
https://gitter.im/spring-projects/spring-boot?at=59536ea74bcd78af56538629
Related
Why do we need META-INF/spring.factories when creating starters for Spring Boot applications? What if we omit it entirely, or leave it empty?
Doesn't the target application's @SpringBootApplication, which is
a combination of three annotations, @Configuration (used for Java-based
configuration), @ComponentScan (used for component scanning), and
@EnableAutoConfiguration,
scan everything and find all beans from all the starters without any help from META-INF/spring.factories?
Component scanning only scans the packages that you give it. You could technically tell it to scan all the packages of your dependencies too, and it would start loading any beans defined in them. If you don't specify any packages to scan, Spring uses the base package where the annotation is applied, which would very likely not include beans defined in any dependency libraries.
There's another layer to this: many of the libraries you use may carry annotations like @AutoConfigureBefore to give Spring instructions about the order of bean creation. Component scanning does not respect that, which can result in weird behavior if one dependency tries to override a bean from another that is annotated with @ConditionalOnMissingBean (i.e. create this bean only if it doesn't already exist). You could easily end up with name-collision issues where the conditional bean is created first, and then the overriding bean is created too.
So the answer seems to be no. You need spring.factories.
Doesn't the target application's @SpringBootApplication scan everything...
No, it doesn't scan everything, because that would take a lot of time and resources. Think about it this way: in order to determine whether a *.class file contains a bean (something annotated with @Component, for example), Spring needs at least to read the class and analyze its bytecode, or even load it into memory and check the annotation via reflection.
So if your application's root package is com.sample.app (the package containing the class annotated with @SpringBootApplication), then by convention Spring Boot scans only the beans in this package and the packages beneath it. This means it won't scan any third-party libraries (assuming they aren't placed under com.sample.app anyway).
Now, it's true that you can change the component-scanning rules, but again, you don't want to scan everything, for performance reasons at the very least.
So auto-configuration modules (technically implemented via META-INF/spring.factories) can specify additional classes (annotated with @Configuration) that Spring Boot will load despite the fact that they are not placed under your application's packages; in other words, they do not obey the default component-scanning rules.
In addition, the spring.factories file allows you to specify much more than auto-configuration classes: you can declare environment post-processors there, for example, and other hooks that can be useful for your application, probably mostly at the infrastructure level, but still.
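For illustration, a minimal starter auto-configuration might look like this (all names hypothetical); the spring.factories entry is what makes Spring Boot pick the class up even though it lives outside the application's packages:

```java
// In the starter jar, META-INF/spring.factories would contain:
//   org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
//     com.example.starter.MyStarterAutoConfiguration
package com.example.starter;

import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyStarterAutoConfiguration {

    // Back off if the application already defines its own MyService bean.
    @Bean
    @ConditionalOnMissingBean
    public MyService myService() {
        return new MyService();
    }
}
```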
We have a project that uses spring-boot-starter-cache, registering an EhCache CacheManager implementation and spreading @Cacheable methods across the code.
Then another team created a starter that basically relies on the default CacheManager auto-configured by the Spring Boot cache starter (a simple concurrent-map cache) for its own processing of @Cacheable methods.
Both code bases contain the @EnableCaching annotation, and our issue is that the behavior differs depending on whether we comment out our main project's @EnableCaching annotation.
If we keep @EnableCaching in our project and use the custom starter, everything works fine: @Cacheable methods from the
starter are indexed and resolved in the starter's scope, while
@Cacheable methods from our domain are resolved against our EhCache.
If we comment out @EnableCaching in our project, then both the starter's and our project's @Cacheable methods are resolved
against our EhCache implementation.
This breaks a lot of preconceptions I had so far:
I always thought an annotation such as @Enable... applied to the whole context, regardless of its placement (starter or application configuration), and regardless of whether it was found once or twice while scanning all @Configuration classes.
Why does the first case work when both annotations are present? I guess the CacheManager in the Spring Boot cache starter is @ConditionalOnMissingBean, so in that case I would expect both projects to resolve against the EhCache bean, not each against its own.
P.S.: the @EnableCaching in our main project is placed on an inner static @Configuration class. Could this be significant?
It is very hard to answer your question without knowing what the custom starter does. In particular, this looks weird to me:
@Cacheable methods from the starter are indexed and resolved in the starter's scope, but @Cacheable methods from our domain are resolved against our EhCache.
Your preconception 1 is valid: it doesn't matter where you put the annotation or whether you add it more than once. It simply enables caching for the whole ApplicationContext. In the case of Spring Boot, that triggers the auto-configuration unless a custom CacheManager bean is defined in the user's configuration.
The "each one's domain" part sounds broken to me. Are you sure that is what's happening? If you want to store entries in several cache managers, there are not many ways to do it:
Define a CacheResolver and refer to it from the @Cacheable annotations (or via @CacheConfig)
Use a special CacheManager that knows which underlying store holds each cache
If each domain uses @Cacheable in the standard way, resolution goes through the single CacheManager. Whatever behaviour you are observing, it has nothing to do with @EnableCaching at all.
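If two stores are really needed, the first option above could look roughly like this (the class name, manager names, and cache-name convention are all hypothetical):

```java
import java.util.Collection;
import java.util.Collections;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

// Routes cache lookups to one of two CacheManagers based on the cache name.
public class DomainAwareCacheResolver implements CacheResolver {

    private final CacheManager ehCacheManager;
    private final CacheManager simpleCacheManager;

    public DomainAwareCacheResolver(CacheManager ehCacheManager,
                                    CacheManager simpleCacheManager) {
        this.ehCacheManager = ehCacheManager;
        this.simpleCacheManager = simpleCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(
            CacheOperationInvocationContext<?> context) {
        String cacheName = context.getOperation().getCacheNames().iterator().next();
        // Hypothetical convention: the starter's caches are prefixed "starter-".
        CacheManager target = cacheName.startsWith("starter-")
                ? simpleCacheManager : ehCacheManager;
        return Collections.singletonList(target.getCache(cacheName));
    }
}
```

Each @Cacheable method would then opt in via something like @Cacheable(cacheNames = "starter-tokens", cacheResolver = "domainAwareCacheResolver") (again, hypothetical names).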
I want to add a custom PropertySource (the class, not the annotation). The @PropertySource annotation is not sufficient, as it only handles file-based sources.
The approach that works is to define my own ApplicationContextInitializer and add the proper declaration to META-INF/spring.factories. The ApplicationContextInitializer just calls:
environment.getPropertySources().addLast(...)
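Spelled out, such an initializer might look like this (CustomPropertySource is a hypothetical PropertySource implementation; the initializer is registered in META-INF/spring.factories under the org.springframework.context.ApplicationContextInitializer key):

```java
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.env.ConfigurableEnvironment;

public class CustomPropertySourceInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();
        // CustomPropertySource is a hypothetical PropertySource subclass.
        environment.getPropertySources().addLast(new CustomPropertySource("custom"));
    }
}
```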
But there is a drawback, mainly:
It always runs, whereas the preferable behaviour would be to run only if certain conditions are met (@ConditionalOnClass, etc.)
How can I achieve that? Ideally I'd write my auto-configuration with @Conditional... annotations and declare such an initializer (preferably Ordered) inside it.
Edit:
In my particular case I want to define an Archaius PolledConfigurationSource, but only if Archaius is on the classpath; that's why I'd like to use @ConditionalOnClass together with a listener on an event very early in the lifecycle.
You could have an intermediary class as part of your application, let's call it the ProviderConfigurer, whose goal is to load a service (packaged in a separate jar with a META-INF/services/<target SPI> file) that in turn loads Archaius.
So to activate Archaius you would have to place two jars on the classpath instead of one, but the ProviderConfigurer would then be able to load the property source provided by the service (the API being part of an interface you define) if one is discovered on the classpath, and do nothing if no class implementing your SPI is found.
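The service-loading part of this suggestion can be sketched in plain Java (all names here are hypothetical): the configurer discovers any SPI implementation present on the classpath and quietly does nothing otherwise.

```java
import java.util.ServiceLoader;

public class ProviderConfigurer {

    // Hypothetical SPI that an Archaius-backed jar would implement and
    // declare under META-INF/services/<fully qualified interface name>.
    public interface PropertySourceProvider {
        String describe();
    }

    // Returns a description of the first discovered provider, or
    // "no provider" when the SPI jar is absent from the classpath.
    public static String configure() {
        ServiceLoader<PropertySourceProvider> loader =
                ServiceLoader.load(PropertySourceProvider.class);
        for (PropertySourceProvider provider : loader) {
            return provider.describe(); // wire the provider's property source here
        }
        return "no provider";
    }
}
```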
We have a maven multi-module project with the following modules:
prj-srv
prj-client
The srv project contains EJBs. In prj-srv/src/test, we have @Alternative implementations of the EJBs, listed in the alternatives section of beans.xml. This works.
The prj-client project has prj-srv as a dependency. In addition, it depends on prj-srv with type test-jar and scope test, so that it can use the alternative EJB implementations in its tests. This works too.
Now then: in prj-client/src/main/java, we have local implementations of the EJB interfaces (so that we can cache the data), annotated with our qualifier @Cacheable. What I would like to do is set up the tests in prj-client/src/test/java so that they use my test implementations from prj-srv (the ones that aren't cacheable, but who cares, since it's for testing).
I have tried:
Creating a class with producer methods (@Produces @Alternative @Cacheable) in prj-client/src/test/java, but I don't know how to configure beans.xml to register it as the alternative
Creating classes in prj-srv/src/test/java that extend the test EJBs, annotated @Alternative @Cacheable, and listing them in the alternatives section of src/test/resources/META-INF/beans.xml, but Weld still injects the "real" @Cacheable beans from src/main/java.
Is there some problem with mixing @Alternative and qualifiers? How can I get my tests to use alternative implementations of a qualified class?
Just found it: I had forgotten to annotate the constructors of the @Cacheable implementations with @Inject. Apparently, even though the class was marked as an alternative in beans.xml, since Weld didn't know how to instantiate it, instead of throwing an error it just silently ignored the alternative...
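For reference, the working combination looks roughly like this (TestRepository and EntityStore are hypothetical stand-ins for the test EJB and its dependency); the alternative must be both enabled in beans.xml and instantiable by Weld:

```java
import javax.enterprise.inject.Alternative;
import javax.inject.Inject;

// Test alternative for the qualified bean. It must also be listed in the
// <alternatives> section of src/test/resources/META-INF/beans.xml.
@Alternative
@Cacheable  // the custom qualifier from the question
public class TestCacheableRepository extends TestRepository {

    @Inject  // without this, Weld silently skipped the alternative
    public TestCacheableRepository(EntityStore store) {
        super(store);
    }
}
```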
Being new to CDI, I want to know the practical difference between an alternative and a qualifier.
The Weld reference states:
4.3. Qualifier annotations
If we have more than one bean that implements a particular bean type,
the injection point can specify exactly which bean should be injected
using a qualifier annotation.
but while explaining the Alternatives, it is said:
4.7. Alternatives
Alternatives are beans whose implementation is specific to a
particular client module or deployment scenario.
If I understood correctly, a @Qualifier defines which implementation of the target bean gets injected at the injection points.
On the other hand, @Alternative expresses a deployment-time choice, depending on the client, about whether an alternative to the standard (the @Default, I mean) bean gets injected at the injection point.
Is that right?
Yes, that's right. You can think of qualifiers as the basic wiring that you set up at development time, using annotations in your source code.
Alternatives allow you to override this at deployment time using the beans.xml file, a simple deployment artifact.
A typical scenario is to use a different beans.xml per environment and thereby enable mock alternatives for components that you don't want to execute in your local or integration environments.
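A compact sketch of the two mechanisms side by side (all names hypothetical):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import javax.enterprise.inject.Alternative;
import javax.inject.Inject;
import javax.inject.Qualifier;

// A qualifier picks between implementations at development time.
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@interface Premium {}

interface PaymentService {}

class StandardPaymentService implements PaymentService {}  // the @Default bean

@Premium
class PremiumPaymentService implements PaymentService {}

// An alternative replaces the default bean per deployment, but only
// when listed in the <alternatives> section of beans.xml.
@Alternative
class MockPaymentService implements PaymentService {}

class Checkout {

    @Inject
    PaymentService standard;  // StandardPaymentService, or MockPaymentService
                              // when the alternative is enabled in beans.xml

    @Inject @Premium
    PaymentService premium;   // always PremiumPaymentService
}
```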