I am trying to set up a Spring Boot application that loads its configuration from Azure App Configuration, with a reference to an Azure Key Vault entry for properties containing sensitive information.
Using App Configuration on its own works properly; the problems emerge when the Key Vault reference is added to App Configuration.
In order to connect to Key Vault, AzureConfigBootstrapConfiguration looks for a KeyVaultCredentialProvider bean, which is not available when it is loaded:
@Bean
public AzureConfigPropertySourceLocator sourceLocator(AzureCloudConfigProperties properties,
        AppConfigProviderProperties appProperties, ClientStore clients, ApplicationContext context) {
    KeyVaultCredentialProvider keyVaultCredentialProvider = null;
    try {
        keyVaultCredentialProvider = context.getBean(KeyVaultCredentialProvider.class);
    } catch (NoUniqueBeanDefinitionException e) {
        LOGGER.error("Failed to find unique TokenCredentialProvider Bean for authentication.", e);
        if (properties.isFailFast()) {
            throw e;
        }
    } catch (NoSuchBeanDefinitionException e) {
        LOGGER.info("No TokenCredentialProvider found.");
    }
    return new AzureConfigPropertySourceLocator(properties, appProperties, clients, keyVaultCredentialProvider);
}
I tried to create the bean with the highest precedence, but it did not work:
@Configuration
public class DemoConfiguration {

    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public KeyVaultCredentialProvider keyVaultCredentialProvider() {
        return uri -> new EnvironmentCredentialBuilder().build();
    }
}
I also tried using @Primary and @Priority on the bean, and @AutoConfigureBefore(AzureConfigBootstrapConfiguration.class) on the DemoConfiguration class, but none of these alternatives work.
Question:
Do you know how to create the KeyVaultCredentialProvider bean before AzureConfigBootstrapConfiguration is initialised?
It is hard to give any hints without knowing the exact exception and stack trace thrown in your case.
But if it really is a missing configuration at runtime, another way to enforce your own order of configurations is this:
public static void main(String[] args) {
    SpringApplication.run(
        new Class[] { YourSpringBootApplication.class,
                      KeyVaultCredentialProvider.class,
                      AzureConfigBootstrapConfiguration.class // , ...
        }, args);
}
The Class array contains the list of primary sources to load at application startup, so it does not need to contain all components and configurations.
Have you set DemoConfiguration in your spring.factories?
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.DemoConfiguration
That should enable it to be found.
Solution:
Since Azure App Configuration uses bootstrap configuration, the solution is to create a META-INF/spring.factories file that enables the configuration with the required bean, for example:
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
org.davidcampos.autoconfigure.DemoConfiguration
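For completeness, a matching bootstrap configuration class might look like the sketch below. The package comes from the spring.factories entry above; the credential builder mirrors the one in the question, and the import paths are assumptions that depend on the library versions in use:

```java
package org.davidcampos.autoconfigure;

import com.azure.identity.EnvironmentCredentialBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DemoConfiguration {

    // Because this class is registered under
    // org.springframework.cloud.bootstrap.BootstrapConfiguration in
    // META-INF/spring.factories, it is loaded in the bootstrap context and
    // the bean exists before AzureConfigBootstrapConfiguration looks it up.
    @Bean
    public KeyVaultCredentialProvider keyVaultCredentialProvider() {
        return uri -> new EnvironmentCredentialBuilder().build();
    }
}
```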
I am running a Spring Boot 2 application with the Actuator Spring Boot starter dependency added. I enabled all web endpoints and then called:
http://localhost:8080/actuator/metrics
The result is:
{
"names": ["jdbc.connections.active",
"jdbc.connections.max",
"jdbc.connections.min",
"hikaricp.connections.idle",
"hikaricp.connections.pending",
"hikaricp.connections",
"hikaricp.connections.active",
"hikaricp.connections.creation",
"hikaricp.connections.max",
"hikaricp.connections.min",
"hikaricp.connections.usage",
"hikaricp.connections.timeout",
"hikaricp.connections.acquire"]
}
But I am missing all the JVM stats and other built-in metrics. What am I missing here? Everything I read said that these metrics should be available at all times.
Thanks for any hints.
I want to share my findings with you. The problem was a third-party library (Shiro) and my configuration for it. The bean loading of Micrometer got mixed up, which resulted in a too-late initialisation of a needed post-processing bean that configures the MeterRegistry (in my case the PrometheusMeterRegistry).
I don't know if it is wise to do the configuration of the registries via a different bean (a post-processor), which can lead to situations like the one I had; the registries should configure themselves without relying on other beans that might get constructed too late.
In case this ever happens to anybody else:
I had a similar issue (except it wasn't Graphite but Prometheus, and I was not using Shiro).
Basically I only had Hikari and HTTP metrics, nothing else (no JVM metrics like GC).
I banged my head on several walls before finding the root cause: there was a Hikari auto-configure post-processor in Spring Boot Autoconfigure that eagerly retrieved a MeterRegistry, so the metric beans didn't have time to initialize beforehand.
And to my surprise, when looking at this code on GitHub I didn't find it. So I bumped my spring-boot-starter-parent version from 2.0.4.RELEASE to 2.1.0.RELEASE and now everything works fine. I correctly get all the metrics.
As I expected, this problem is caused by the loading order of the beans.
I used Shiro in the project.
Shiro's verification method used MyBatis to read data from the database.
I used @Autowired for MyBatis' Mapper file, which caused the Actuator metrics-related beans not to be assembled by Spring Boot (I don't know what the specific reason is).
So I disabled the automatic assembly of the Mapper file in favour of manual assembly.
The code is as follows:
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringContextUtil implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        SpringContextUtil.applicationContext = applicationContext;
    }

    public static ApplicationContext getApplicationContext() {
        return applicationContext;
    }

    public static Object getBean(String beanId) throws BeansException {
        return applicationContext.getBean(beanId);
    }
}
Then:
UserMapper userMapper = (UserMapper) SpringContextUtil.getBean("userMapper");
UserModel userModel = userMapper.findUserByName(name);
The problem can be solved for the time being. This is just a stopgap measure, but at the moment I have no better way.
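The workaround above boils down to a service-locator pattern: instead of field injection at startup, the bean is looked up lazily at call time, after the context is fully assembled. A dependency-free sketch of that idea follows; the Map stands in for Spring's ApplicationContext, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Static holder that defers bean lookup until the moment of use.
// The Map is a stand-in for Spring's ApplicationContext.
class ContextHolder {
    private static final Map<String, Object> registry = new HashMap<>();

    static void register(String beanId, Object bean) {
        registry.put(beanId, bean);
    }

    static Object getBean(String beanId) {
        return registry.get(beanId);
    }
}

interface UserMapper {
    String findUserByName(String name);
}

public class LocatorDemo {
    public static void main(String[] args) {
        // Registration happens late, yet callers still resolve the mapper,
        // because the lookup is deferred to the point of use.
        ContextHolder.register("userMapper", (UserMapper) name -> "user:" + name);
        UserMapper mapper = (UserMapper) ContextHolder.getBean("userMapper");
        System.out.println(mapper.findUserByName("alice"));
    }
}
```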
I could not find process_update_seconds in /actuator/prometheus, so I spent some time solving my problem.
My solution:
Rewrite HikariDataSourceMetricsPostProcessor and MeterRegistryPostProcessor.
The order of HikariDataSourceMetricsPostProcessor is Ordered.HIGHEST_PRECEDENCE + 1:
package org.springframework.boot.actuate.autoconfigure.metrics.jdbc;
...
class HikariDataSourceMetricsPostProcessor implements BeanPostProcessor, Ordered {
    ...
    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 1;
    }
}
The order of MeterRegistryPostProcessor is Ordered.HIGHEST_PRECEDENCE:
package org.springframework.boot.actuate.autoconfigure.metrics;
...
import org.springframework.core.Ordered;

class MeterRegistryPostProcessor implements BeanPostProcessor, Ordered {
    ...
    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }
}
In my case I used Shiro and JPA to save the user session id. I found that the order of MeterRegistryPostProcessor and HikariDataSourceMetricsPostProcessor caused the problem: MeterRegistry did not bind the metric because of the loading order.
Maybe my solution will help you solve the problem.
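A plain-Java sketch of why the two order values above work: Spring sorts BeanPostProcessors in ascending order of getOrder(), and Ordered.HIGHEST_PRECEDENCE is Integer.MIN_VALUE, so HIGHEST_PRECEDENCE runs before HIGHEST_PRECEDENCE + 1. The demo below only models the sorting, not Spring itself:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderDemo {

    // Minimal stand-in for an ordered BeanPostProcessor.
    static class Processor {
        final String name;
        final int order;

        Processor(String name, int order) {
            this.name = name;
            this.order = order;
        }
    }

    // Mirrors how Spring sorts post-processors: ascending by order value.
    static List<Processor> sortByOrder(List<Processor> processors) {
        List<Processor> sorted = new ArrayList<>(processors);
        sorted.sort(Comparator.comparingInt(p -> p.order));
        return sorted;
    }

    public static void main(String[] args) {
        List<Processor> sorted = sortByOrder(List.of(
            new Processor("HikariDataSourceMetricsPostProcessor", Integer.MIN_VALUE + 1),
            new Processor("MeterRegistryPostProcessor", Integer.MIN_VALUE)));
        // The MeterRegistry post-processor sorts first, so the registry is
        // fully configured before Hikari binds its metrics to it.
        for (Processor p : sorted) {
            System.out.println(p.name);
        }
    }
}
```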
I have a working sample with Spring Boot, Micrometer, and Graphite and confirmed the out-of-the-box MeterBinders are working as follows:
{
"names" : [ "jvm.memory.max", "process.files.max", "jvm.gc.memory.promoted", "tomcat.cache.hit", "system.load.average.1m", "tomcat.cache.access", "jvm.memory.used", "jvm.gc.max.data.size", "jvm.gc.pause", "jvm.memory.committed", "system.cpu.count", "logback.events", "tomcat.global.sent", "jvm.buffer.memory.used", "tomcat.sessions.created", "jvm.threads.daemon", "system.cpu.usage", "jvm.gc.memory.allocated", "tomcat.global.request.max", "tomcat.global.request", "tomcat.sessions.expired", "jvm.threads.live", "jvm.threads.peak", "tomcat.global.received", "process.uptime", "tomcat.sessions.rejected", "process.cpu.usage", "tomcat.threads.config.max", "jvm.classes.loaded", "jvm.classes.unloaded", "tomcat.global.error", "tomcat.sessions.active.current", "tomcat.sessions.alive.max", "jvm.gc.live.data.size", "tomcat.servlet.request.max", "tomcat.threads.current", "tomcat.servlet.request", "process.files.open", "jvm.buffer.count", "jvm.buffer.total.capacity", "tomcat.sessions.active.max", "tomcat.threads.busy", "my.counter", "process.start.time", "tomcat.servlet.error" ]
}
Note that the sample is on the graphite branch, not the master branch.
If you could break the sample in the way you're seeing, I can take another look.
I am using a Solr 7.1.0 server with a Java Spring Boot application.
To communicate with the Solr server I am using "springframework.data.solr".
I have a "template" schema from which I want to create new cores at runtime.
The goal I want to achieve is to create a new core for each customer while keeping the schema the same.
This is what my SolrConfig looks like:
@Configuration
@EnableSolrRepositories(basePackages = "com.my.repository", multicoreSupport = true)
@ComponentScan
public class SolrConfig {

    @Bean
    public SolrClient solrClient() {
        return new HttpSolrClient("http://localhost:8983/solr");
    }

    @Bean
    @Scope("prototype")
    public SolrTemplate solrTemplate(SolrClient client) throws Exception {
        return new SolrTemplate(client);
    }
}
My repository interface:
public interface OpenItemsDebtorsRepository extends CustomOpenItemsDebtorsRepository, SolrCrudRepository<OpenItemDebtor, String> {
    void setCore(String core);

    @Query("orderNumber:*?0*~")
    List<OpenItemDebtor> findByOrderNumber(String orderNumber);
}
I am looking for something like this:
solrTemplate.CreateNewCore(String coreName)
Do you have any suggestions?
I would strongly suggest using the native Solr client (SolrJ) in your Spring Boot project. Have a service component created that provides you an instance of the Solr server client (CloudSolrClient).
SolrJ has all the components that you need to create and manage cores and collections.
I know this is not a straight answer, but I hope this helps.
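For illustration, creating a core with plain SolrJ might look like the sketch below. The base URL, core name, and instance directory are assumptions; CoreAdminRequest.createCore also expects the instance directory with a matching configset to already exist on the server:

```java
import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreCreator {

    // Creates a new core for a customer, reusing a shared "template"
    // instance directory on the server (names are illustrative).
    public static void createCustomerCore(String coreName)
            throws SolrServerException, IOException {
        try (SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            CoreAdminRequest.createCore(coreName, "template", client);
        }
    }
}
```

This needs a running Solr instance, so it is a sketch of the API shape rather than something to run as-is.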
I need to delay my application's "service" bean processing until all Spring Boot cloud auto-configuration is finished.
The "service" bean depends both on a SQL DataSource and on another bean (an S3 repository), which is optional and can be created based on cloud services configuration or directly configured properties (or even not created at all). Both underlying S3 and SQL services are configured perfectly based on various external conditions; they behave well with direct properties, indirect properties, cloud, and so on.
So now I have something like...
@Autowired(required = false)
S3Storage s3;

@Autowired
SQLDatabase db;

@Bean
MyService myservice() {
    if (s3 != null) {
        return new SQLWithS3Implementation(db, s3);
    } else {
        return new SQLImplementation(db);
    }
}
What do I have to do with this bean so that it is not processed before the cloud services (spring-cloud-connectors is used), while s3 is still null?
I cannot make s3 required; it is not always configured.
I cannot use @Lazy; references from other services would be messy at the least (if possible at all).
I need something like...
@ProcessAfter(CloudAutoConfiguration.class)
But how do I actually do that in Spring Boot / Spring Cloud?
In Spring Boot, you can make use of @ConditionalOnBean.
Doc: http://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/condition/ConditionalOnBean.html
@ConditionalOnBean(CloudAutoConfiguration.class)
In this special case nothing was able to help me but a 'reversal of ownership'.
I changed the result of the @Bean in the cloud service scan: it now creates not the storage service info but the repository created from those properties.
@ServiceScan
public class S3CloudConfig extends AbstractCloudConfig {

    @Bean
    public S3Repository s3Repository(final MyService service) {
        // Cloud service configuration.
        S3Properties properties = null;
        try {
            ...
        } catch (...) {
            // No service is configured...
        }
        S3Repository result = null;
        if (properties != null) {
            result = new S3Repository(properties);
            service.setObjectStorage(result);
        }
        return result;
    }
}
As the service is 100% configured just through normal configuration, the optional S3Repository then registers itself on the service. This solves the problem of the S3 repository being optional.
The service logic just accounts for the S3 repository strategy being optionally available, so there is no @Autowired(required = false) at all.
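The 'reversal of ownership' idea can be sketched without any Spring at all: the optional repository registers itself on the always-present service, and the service falls back gracefully when registration never happened. All names below are illustrative:

```java
// The optional dependency: may or may not be created by cloud configuration.
class S3Repository {
    String save(String data) {
        return "s3:" + data;
    }
}

// The always-present service; the S3 strategy is injected late, if at all.
class MyService {
    private S3Repository objectStorage; // may stay null

    void setObjectStorage(S3Repository repo) {
        this.objectStorage = repo;
    }

    String store(String data) {
        // Fall back to the SQL-only path when S3 was never configured.
        return objectStorage != null
            ? objectStorage.save(data)
            : "sql-only:" + data;
    }
}

public class ReverseOwnershipDemo {
    public static void main(String[] args) {
        MyService service = new MyService();
        System.out.println(service.store("a")); // S3 not configured yet

        service.setObjectStorage(new S3Repository()); // cloud config ran late
        System.out.println(service.store("b"));
    }
}
```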
I'm trying to get a Spring project to work with a simple REST service and a repository which fetches data from a MongoDB database. At this moment two separate things are working:
I can run this simple REST example: https://spring.io/guides/gs/rest-service/
I can connect to the MongoDB instance and fetch data
Both of these are in separate projects.
I don't see, however, how I can bring these two together properly. At this moment I've tried the following, based on several other tutorials and references (for example https://spring.io/guides/gs/accessing-mongodb-data-rest/). We now have two configs, but when we deploy and try to go to the REST URL we just get 404s. It's not clear to me whether the mapping is right; I also don't see how the mapping is done in the first simple REST example.
Rest Controller:
@RestController
public class UserController {

    @Autowired
    private UserRepository userRepository;

    @RequestMapping(value = "/users/{emailaddress}", method = RequestMethod.GET)
    @ResponseBody
    public User getUser(@PathVariable("emailaddress") String email) {
        User user = userRepository.findByEmailAddress(email);
        return user;
    }
}
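One piece that is easy to miss: the controller autowires a UserRepository that is not shown. With Spring Data MongoDB it can be as small as the sketch below (the entity and field names are assumptions based on the question):

```java
import org.springframework.data.mongodb.repository.MongoRepository;

// Derived query: Spring Data generates the implementation from the method
// name, matching an emailAddress field on the User document.
public interface UserRepository extends MongoRepository<User, String> {
    User findByEmailAddress(String emailAddress);
}
```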
The Application class (as done in the tutorials):
@Configuration
@ComponentScan
@EnableAutoConfiguration
@Import(MongoConfig.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
The MongoConfig class (which we assume is about right, but are not 100% sure):
@Configuration
public class MongoConfig extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "<dbname>";
    }

    @Override
    public Mongo mongo() throws Exception {
        MongoCredential mongoCredential = MongoCredential.createPlainCredential("<username>", "<dbname>", "<pswd>".toCharArray());
        return new MongoClient(new ServerAddress("<dbaddress>", <port>), Arrays.asList(mongoCredential));
    }
}
I really hope someone can shed some light on how to do this best; we don't need a Spring (MVC) front-end, just a REST service which will get data from our MongoDB.
Thanks in advance.
I too had this problem. I still get a 404 error when running through the Tomcat inside Eclipse, but I deployed the WAR into Tomcat's webapps directory and ran it through the server directly, which worked for me.