How to extend Spring Boot's DataSourceAutoConfiguration

I want to be able to leverage the Spring Boot datasource auto-configuration. However, it doesn't support all the features I'm using, logValidationErrors in particular.
spring:
  datasource:
    driverClassName: oracle.jdbc.OracleDriver
    url: jdbc:jtds:sqlserver://111.11.11.11/DataBaseName
    username: someuser
    password: somepass
    testOnBorrow: true
    testWhileIdle: true
    validationQuery: select /* validationQuery */ 1 from dual
    minEvictableIdleTimeMillis: 1000
    validationInterval: 30000
These aren't currently used:
    logValidationErrors: true
    maxAge: 1800000 # 30 minute idle age
    removeAbandoned: true
Can I just grab the created DataSource bean and set those values manually? Or is there a better way to extend or wrap the autoconfiguration?
See here for more about logValidationErrors, etc: https://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html

I solved this with a BeanPostProcessor, similar to Dave Syer's suggestion:
// Note: DataSource here is org.apache.tomcat.jdbc.pool.DataSource, not javax.sql.DataSource
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
    if (bean instanceof DataSource) {
        DataSource ds = (DataSource) bean;
        ds.setLogValidationErrors(true);
        ds.setRemoveAbandoned(true);
        ds.setMaxAge(1800000);
    }
    return bean;
}
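For completeness, here is a self-contained sketch of that post-processor as a Spring component (the class name is illustrative; the imported DataSource is the Tomcat pool class that Boot auto-configures in this setup):
import org.apache.tomcat.jdbc.pool.DataSource;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class TomcatPoolPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            DataSource ds = (DataSource) bean;
            ds.setLogValidationErrors(true);
            ds.setRemoveAbandoned(true);
            ds.setMaxAge(1800000); // 30 minutes
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean; // nothing to do after initialization
    }
}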
I'll also likely submit a PR to get the properties added to Spring Boot itself.

There are multiple ways you can fix this issue.
First, you could submit a pull request for TomcatDataSourceConfiguration against the Spring Boot project. Adding those properties is straightforward (look at validationInterval in the source code for an example).
Or, you could create your own DataSource the way you want: if a DataSource bean is present, Boot will not attempt to create its own. You could extend TomcatDataSourceConfiguration and add any property you want by overriding dataSource(), then import your extended class so that the bean is registered, which disables the auto-configuration for it.
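As an illustration of that second option, here is a rough sketch (it assumes the Boot 1.x TomcatDataSourceConfiguration referenced in this answer; package names and signatures may differ between Boot versions):
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.jdbc.TomcatDataSourceConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomDataSourceConfiguration extends TomcatDataSourceConfiguration {

    @Bean
    @Override
    public DataSource dataSource() {
        // The parent creates and configures the Tomcat pool from the spring.datasource.* properties.
        org.apache.tomcat.jdbc.pool.DataSource ds =
                (org.apache.tomcat.jdbc.pool.DataSource) super.dataSource();
        ds.setLogValidationErrors(true); // the properties Boot does not bind yet
        ds.setRemoveAbandoned(true);
        ds.setMaxAge(1800000);
        return ds;
    }
}
The extended class would then be registered, for example with @Import, so that this DataSource bean is present and the auto-configured one backs off.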
If you choose that last option and it works for you, it might still be worthwhile to report an issue for those properties if you believe they could be interesting to a wider audience.

If I were you I would just @Autowire the existing DataSource, downcast as necessary, and then set the extra properties. It can be dangerous to do that for some beans (if they initialize themselves from their properties), but I doubt that will be a problem here.
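A minimal sketch of that suggestion (the component name is illustrative; it assumes the auto-configured pool is the Tomcat one, which is Boot's default in this setup):
import javax.annotation.PostConstruct;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class DataSourceTuner {

    @Autowired
    private DataSource dataSource;

    @PostConstruct
    public void tune() {
        // Downcast to the Tomcat pool implementation to reach the extra setters.
        org.apache.tomcat.jdbc.pool.DataSource tomcatDs =
                (org.apache.tomcat.jdbc.pool.DataSource) dataSource;
        tomcatDs.setLogValidationErrors(true);
        tomcatDs.setRemoveAbandoned(true);
        tomcatDs.setMaxAge(1800000);
    }
}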

Related

Spring boot quartz schema other than public doesn't work

I am not able to use a schema other than public for the Quartz tables. This is my Quartz setup:
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=always
spring.quartz.properties.org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval=2000
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
spring.quartz.properties.org.quartz.jobStore.useProperties=false
spring.quartz.properties.org.quartz.jobStore.tablePrefix=QRTZ_
And the config class:
@Bean
public SchedulerFactoryBean schedulerFactory(ApplicationContext applicationContext, DataSource dataSource, QuartzProperties quartzProperties) {
    SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
    AutowireCapableBeanJobFactory jobFactory = new AutowireCapableBeanJobFactory(applicationContext.getAutowireCapableBeanFactory());
    Properties properties = new Properties();
    properties.putAll(quartzProperties.getProperties());
    schedulerFactoryBean.setOverwriteExistingJobs(true);
    schedulerFactoryBean.setDataSource(dataSource);
    schedulerFactoryBean.setQuartzProperties(properties);
    schedulerFactoryBean.setJobFactory(jobFactory);
    return schedulerFactoryBean;
}

@Bean
public Scheduler scheduler(ApplicationContext applicationContext, DataSource dataSource, QuartzProperties quartzProperties)
        throws SchedulerException {
    Scheduler scheduler = schedulerFactory(applicationContext, dataSource, quartzProperties).getScheduler();
    scheduler.start();
    return scheduler;
}
This works fine, and the tables are getting created. However, I would like to have the tables in a different schema, so I set Quartz to use the 'quartz' schema:
spring.quartz.properties.org.quartz.jobStore.tablePrefix=quartz.QRTZ_
This is the error I'm getting:
[ClusterManager: Error managing cluster: Failure obtaining db row lock: ERROR: current transaction is aborted, commands ignored until end of transaction block] [org.quartz.impl.jdbcjobstore.LockException: Failure obtaining db row lock: ERROR: current transaction is aborted, commands ignored until end of transaction block
Any ideas on how to solve it?
It was a bold hope that "tablePrefix" could also adjust the db schema (and there is no documented property concerning the db schema), but you could get more lucky if you configure it on the datasource.
I.e. you would introduce/configure different (Spring) DataSource beans for every user/schema used by your application
(like here: Spring Boot Configure and Use Two DataSources, or here),
then you'd wire the scheduler factory with the appropriate (quartz) datasource:
schedulerFactoryBean.setDataSource(quartzDataSource);
Or via (@Autowired) parameter injection, or method invocation: @Bean initialization - difference between parameter injection vs. direct method access?
UPDATE (regarding "wiring"):
..from the current Spring Boot doc:
To have Quartz use a DataSource other than the application's main DataSource, declare a DataSource bean, annotating its @Bean method with @QuartzDataSource. Doing so ensures that the Quartz-specific DataSource is used by both the SchedulerFactoryBean and for schema initialization.
Similarly, to have Quartz use a TransactionManager other than the application's main one, declare a TransactionManager bean and annotate its @Bean method with @QuartzTransactionManager.
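For illustration, a minimal sketch of that documented route (the app.datasource.quartz prefix is a made-up example; on PostgreSQL the schema could be chosen on this datasource, for instance with the currentSchema JDBC URL parameter):
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.quartz.QuartzDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QuartzDataSourceConfig {

    // Dedicated pool for Quartz, picked up by Spring Boot's SchedulerFactoryBean
    // and by the Quartz schema initialization. The properties bound here could
    // point at the quartz schema, e.g. a URL such as
    // jdbc:postgresql://localhost:5432/mydb?currentSchema=quartz
    @Bean
    @QuartzDataSource
    @ConfigurationProperties("app.datasource.quartz")
    public DataSource quartzDataSource() {
        return DataSourceBuilder.create().build();
    }
}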
You can take even more control by customizing these properties:
spring.quartz.jdbc.initialize-schema
Database schema initialization mode.
default: embedded (embedded|always|never)
spring.quartz.jdbc.schema
Path to the SQL file to use to initialize the database schema.
default: classpath:org/quartz/impl/jdbcjobstore/tables_##platform##.sql
...where ##platform## refers to your db vendor.
But it is useless for your requirement... since, looking at the bundled table scripts (see REFS) and complying with the original schemes, they seem to be schema independent/free. (So the data source approach looks more promising here.)
REFS:
https://www.quartz-scheduler.org/documentation/quartz-2.3.0/configuration/ConfigJobStoreTX.html
Spring Boot Configure and Use Two DataSources
https://stackoverflow.com/a/42360877/592355
https://www.baeldung.com/spring-annotations-resource-inject-autowire
#Bean initialization - difference between parameter injection vs. direct method access?
spring.quartz.jdbc.initialize-schema
What are the possible values of spring.datasource.initialization-mode?
spring.quartz.jdbc.schema
https://github.com/quartz-scheduler/quartz/tree/master/quartz-core/src/main/resources/org/quartz/impl/jdbcjobstore
https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.quartz
https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/quartz/QuartzDataSource.html
So the idea is that Quartz doesn't create the tables using spring.quartz.properties.org.quartz.jobStore.tablePrefix.
Table names are static, e.g. qrtz_triggers, as @xerx593 pointed out.
What we can do is create the tables (manually, or with Flyway or Liquibase) in a different schema, update tablePrefix=schema.qrtz_, and it will work (see the sketch below).
Tested with Postgres
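A minimal sketch of that approach (it assumes the QRTZ_ tables already exist in a schema named quartz, created manually or by Flyway/Liquibase, and that Boot's Quartz schema initialization is set to never so the bundled DDL is not re-run):
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.quartz.QuartzProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzSchemaConfig {

    @Bean
    public SchedulerFactoryBean schedulerFactory(DataSource dataSource, QuartzProperties quartzProperties) {
        Properties props = new Properties();
        props.putAll(quartzProperties.getProperties());
        // Qualify the prefix with the schema that already holds the tables.
        props.setProperty("org.quartz.jobStore.tablePrefix", "quartz.QRTZ_");

        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);
        factory.setQuartzProperties(props);
        return factory;
    }
}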

Spring Initialisation Order with Configuration

I have a Spring project, and I need to configure Flyway.
Using the default FlywayAutoConfiguration, the Flyway migration is immediately executed before everything else (caching, @PostConstruct annotations, services). This is the behavior I expected (in terms of startup workflow).
Unfortunately, I need to override the default FlywayAutoConfiguration because I use a custom Flyway implementation, but that's not my main problem here; my issue is really related to Spring's priority for configuration and the initialization sequence.
So to use my own Flyway, I first copied FlywayAutoConfiguration to my Maven module, named it CustomFlywayAutoConfiguration and just adapted the imports. I also changed the property "spring.flyway.enabled" to false and created another one, "spring.flywaycustom.enabled", to be able to activate mine and not the default one.
Doing that completely changes the startup sequence. Now Flyway is executed at the end of the startup sequence (after caching and other @PostConstruct methods that are in my project).
The following bean defined in CustomFlywayAutoConfiguration is now created only at the end of the startup sequence. With the default FlywayAutoConfiguration, it was created at the beginning.
@Bean
@ConditionalOnMissingBean
public FlywayMigrationInitializer flywayInitializer(Flyway flyway,
        ObjectProvider<FlywayMigrationStrategy> migrationStrategy) {
    return new FlywayMigrationInitializer(flyway, migrationStrategy.getIfAvailable());
}
I tried to play a lot with ordering (HIGHEST_PRECEDENCE and LOWEST_PRECEDENCE):
@AutoConfigureOrder on the configuration class
@Order on components
Autowiring FlywayMigrationInitializer to force the bean initialisation earlier
It doesn't change anything; it looks like Spring ignores @Order and @AutoConfigureOrder.
Any idea why, when the configuration is inside the spring-boot-autoconfigure dependency, it is started with first priority, but when the same configuration code is inside my project, I don't get the same ordering?
Thank you so much for your help.
Thanks to your answer, focusing my attention on FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor helped me to solve the issue.
@Configuration(proxyBeanMethods = false)
@AutoConfigureOrder(Ordered.HIGHEST_PRECEDENCE)
@EnableConfigurationProperties({ DataSourceProperties.class, FlywayProperties.class })
@Import({ FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor.class }) // Very important to force Flyway to run before the EntityManagerFactory
public class CustomFlywayConfiguration {

    @Bean
    public Flyway flyway(FlywayProperties properties, DataSourceProperties dataSourceProperties,
            ResourceLoader resourceLoader, ObjectProvider<DataSource> dataSource,
            @FlywayDataSource ObjectProvider<DataSource> flywayDataSource,
            ObjectProvider<FlywayConfigurationCustomizer> fluentConfigurationCustomizers,
            ObjectProvider<JavaMigration> javaMigrations, ObjectProvider<Callback> callbacks) {
        FluentConfiguration configuration = new FluentConfiguration(resourceLoader.getClassLoader());
        DataSource dataSourceToMigrate = configureDataSource(configuration, properties, dataSourceProperties,
                flywayDataSource.getIfAvailable(), dataSource.getIfUnique());
        checkLocationExists(dataSourceToMigrate, properties, resourceLoader);
        configureProperties(configuration, properties);
        List<Callback> orderedCallbacks = callbacks.orderedStream().collect(Collectors.toList());
        configureCallbacks(configuration, orderedCallbacks);
        fluentConfigurationCustomizers.orderedStream().forEach((customizer) -> customizer.customize(configuration));
        configureFlywayCallbacks(configuration, orderedCallbacks);
        List<JavaMigration> migrations = javaMigrations.stream().collect(Collectors.toList());
        configureJavaMigrations(configuration, migrations);
        return new CustomFlyway(configuration);
    }

    /**
     * FlywayAutoConfiguration.FlywayConfiguration is conditional on the Flyway bean being missing,
     * so its @Import annotations are not processed in this case.
     *
     * So if we declare our own Flyway bean, we also need to create this bean to trigger the Flyway migration.
     *
     * The main issue is that this bean is now created after the @PostConstruct init method in MyInitComponent.
     */
    @Bean
    public FlywayMigrationInitializer flywayInitializer(Flyway flyway,
            ObjectProvider<FlywayMigrationStrategy> migrationStrategy) {
        return new FlywayMigrationInitializer(flyway, migrationStrategy.getIfAvailable());
    }

    /**
     * Thanks to this, it now works, because it makes sure the initializer is required before the EntityManager is created.
     */
    // @ConditionalOnClass(LocalContainerEntityManagerFactoryBean.class)
    // @ConditionalOnBean(AbstractEntityManagerFactoryBean.class)
    static class FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor
            extends EntityManagerFactoryDependsOnPostProcessor {

        FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor() {
            super(FlywayMigrationInitializer.class);
        }
    }
    ...
So that confirms we can't easily override the Flyway bean without copying some logic from FlywayAutoConfiguration.
I created a small project to reproduce the bug, with the solution I found:
https://github.com/w3blogfr/flyway-issue-demo
I don't know whether it's necessary to fix this in the Spring Boot auto-configuration project or not.
I checked the history to find the commit:
https://github.com/spring-projects/spring-boot/commit/795303d6676c163af899e87364846d9763055cf8
And the ticket was this one:
https://github.com/spring-projects/spring-boot/issues/18382
The change looks technical, and probably it was missed that there was an @ConditionalOnClass.
None of the annotations that you've described have an impact on runtime semantics:
@AutoConfigureOrder works only on auto-configurations (so if you've copied the auto-configuration as a user config, it won't even be considered). It is used to order how auto-configurations are parsed (typical use case: make sure an auto-config contributes a bean definition of type X before another one checks if a bean X is available).
@Order orders components of the same type. It doesn't have any effect on "when" something occurs.
Forcing initialisation using an injection point is a good idea, but it'll only work if the component you're injecting it into is itself initialised at the right time.
The auto-configuration has a bunch of post-processors that create links between various components that use the DataSource and the FlywayMigrationInitializer. For instance, FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor makes sure that FlywayMigrationInitializer is a dependency of the entity manager, so that the database is migrated before the EntityManager is made available to other components. That link is what makes sure Flyway is executed at the right time. I don't know why that's not working with your copy; a sample we can run ourselves, shared on GitHub, could help us figure it out.
With all that said, please don't copy an auto-configuration into your own project. I'd advise describing your use case in more detail and opening an issue so that we can consider improving the auto-configuration.
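To make that link concrete, here is a rough illustration (not the Boot implementation itself) of what such a depends-on post-processor does; it assumes the initializer bean is named flywayInitializer, as produced by the flywayInitializer @Bean method above:
import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.util.ObjectUtils;

@Configuration
public class FlywayDependsOnConfig {

    // Makes every entity manager factory depend on the Flyway initializer bean,
    // so the migration runs before JPA bootstraps.
    @Bean
    public static BeanFactoryPostProcessor entityManagerFactoryDependsOnFlyway() {
        return beanFactory -> {
            for (String name : beanFactory.getBeanNamesForType(
                    LocalContainerEntityManagerFactoryBean.class, true, false)) {
                String beanName = BeanFactoryUtils.transformedBeanName(name); // strip a potential "&" prefix
                BeanDefinition definition = beanFactory.getBeanDefinition(beanName);
                definition.setDependsOn(
                        ObjectUtils.addObjectToArray(definition.getDependsOn(), "flywayInitializer"));
            }
        };
    }
}
Spring Boot's own FlywayMigrationInitializerEntityManagerFactoryDependsOnPostProcessor achieves the same effect in a more general way.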

Spring Boot: setting a PostgreSQL run-time parameter when database connection is open

I am looking for the right way to set a run-time parameter when a database connection is open. My run-time parameter is actually a time zone, but I think this should work for an arbitrary parameter.
I've found the following solutions, but I feel like none of these is the right thing.
JdbcInterceptor
Because Spring Boot uses the Apache Tomcat connection pool by default, I can use org.apache.tomcat.jdbc.pool.JdbcInterceptor to intercept connections.
I don't think this interceptor provides a convenient way to execute a statement when a connection is opened. The ability to intercept every statement that this interceptor provides is unnecessary for setting a parameter that should only be set once.
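For reference, roughly what the interceptor route could look like (class name, SQL and the once-per-connection guard are illustrative; reset(...) is invoked when a connection is borrowed from the pool):
import java.sql.Statement;
import org.apache.tomcat.jdbc.pool.ConnectionPool;
import org.apache.tomcat.jdbc.pool.JdbcInterceptor;
import org.apache.tomcat.jdbc.pool.PooledConnection;

public class TimeZoneInterceptor extends JdbcInterceptor {

    private static final String INITIALIZED = TimeZoneInterceptor.class.getName() + ".initialized";

    @Override
    public void reset(ConnectionPool parent, PooledConnection con) {
        // Run the statement only once per physical connection, using the
        // connection's attribute map to remember that it was already done.
        if (con == null || con.getAttributes().containsKey(INITIALIZED)) {
            return;
        }
        try (Statement statement = con.getConnection().createStatement()) {
            statement.execute("SET TIME ZONE 'UTC'"); // example run-time parameter
            con.getAttributes().put(INITIALIZED, Boolean.TRUE);
        } catch (Exception ex) {
            throw new IllegalStateException("Could not initialize connection", ex);
        }
    }
}
The interceptor would then be registered through the pool's jdbcInterceptors property (in recent Boot versions, spring.datasource.tomcat.jdbc-interceptors).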
initSQL property
Apache's pooled connection has a built-in ability to initialise itself with a statement provided by the PoolProperties.initSQL parameter. This is executed in the ConnectionPool.createConnection(...) method.
Unfortunately, official support for this parameter has been removed from Spring, and no equivalent functionality has been introduced since then.
I mean, I can still use a datasource builder like in the example below, and then hack the property into the connection pool, but this is not a good-looking solution.
// Thanks to the property binders used while creating the custom datasource,
// the datasource.initSQL parameter will be passed to the underlying connection pool.
@Bean
@ConfigurationProperties(prefix = "datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Update
I was testing this in a Spring Boot 1.x application. The above statements are no longer valid for Spring Boot 2 applications, because:
The default Tomcat datasource was replaced by Hikari, which supports the spring.datasource.hikari.connection-init-sql property. Its documentation says: "Get the SQL string that will be executed on all new connections when they are created, before they are added to the pool."
It seems that a similar property was reintroduced for the Tomcat datasource as spring.datasource.tomcat.init-s-q-l.
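A programmatic sketch of the Hikari option (the connection settings are placeholders; this is equivalent to setting spring.datasource.hikari.connection-init-sql in application.properties):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimeZoneDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
        config.setUsername("someuser");
        config.setPassword("somepass");
        // Executed once for every new physical connection, before it joins the pool.
        config.setConnectionInitSql("SET TIME ZONE 'UTC'");
        return new HikariDataSource(config);
    }
}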
ConnectionPreparer & AOP
This is not an actual solution; it is more like an inspiration. The connection preparer was a mechanism used to initialise Oracle connections in the Spring Data JDBC Extensions project. This thing has its own problems and is no longer maintained, but it could possibly be used as a base for a similar solution.
If your parameter is actually a time zone, why don't you just find a way to set that specific property?
For example, if you want to store or read a DateTime with a predefined time zone, the right way to do this is to set the hibernate.jdbc.time_zone property in the Hibernate entityManager, or spring.jpa.properties.hibernate.jdbc.time_zone (e.g. spring.jpa.properties.hibernate.jdbc.time_zone=UTC) in application.properties.

Proper setting transaction manager with multitenant databases configuration with spring-boot

I have a multitenant database setup in Spring Boot. I store multiple Spring JDBC templates (based on Tomcat DataSources, configured manually) in a map (an immutable bean), and I choose the proper data source based on a uuid in the request (one connection pool per database). I have disabled the standard configuration in Spring Boot with:
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
What is the proper way to configure the transaction manager? With a single data source I can use a PlatformTransactionManager, but how should it be done with multiple JDBC templates/data sources in Spring? It would be best if I could set everything dynamically. Thanks in advance.
Here is a solution for using multiple datasources:
http://www.baeldung.com/spring-data-jpa-multiple-databases
Configure Two DataSources
If you need to configure multiple data sources, you can apply the same tricks that are described in the previous section. You must, however, mark one of the DataSource beans @Primary, as various auto-configurations down the road expect to be able to get one by type.
If you create your own DataSource, the auto-configuration will back off. In the example below, we provide the exact same feature set as what the auto-configuration provides on the primary data source:
@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSourceProperties fooDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSource fooDataSource() {
    return fooDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("app.datasource.bar")
public BasicDataSource barDataSource() {
    return (BasicDataSource) DataSourceBuilder.create()
            .type(BasicDataSource.class).build();
}
fooDataSourceProperties has to be flagged @Primary so that the database initializer feature uses your copy (should you use that).
app.datasource.foo.type=com.zaxxer.hikari.HikariDataSource
app.datasource.foo.maximum-pool-size=30
app.datasource.bar.url=jdbc:mysql://localhost/test
app.datasource.bar.username=dbuser
app.datasource.bar.password=dbpass
app.datasource.bar.max-total=30
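On the transaction manager part of the question itself: a common pattern (a sketch, not taken from the answer above) is to put an AbstractRoutingDataSource in front of the per-tenant pools and declare a single DataSourceTransactionManager over it. CURRENT_TENANT below is a hypothetical ThreadLocal holder that a web filter would populate from the uuid in the request:
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TenantTransactionConfig {

    // Hypothetical holder; a web filter would set the tenant uuid per request.
    public static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    @Bean
    public DataSource routingDataSource(Map<String, DataSource> tenantDataSources) {
        // tenantDataSources stands in for the manually configured per-tenant pools
        // from the question (here injected as DataSource beans keyed by bean name).
        AbstractRoutingDataSource routing = new AbstractRoutingDataSource() {
            @Override
            protected Object determineCurrentLookupKey() {
                return CURRENT_TENANT.get();
            }
        };
        routing.setTargetDataSources(new HashMap<Object, Object>(tenantDataSources));
        return routing;
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource routingDataSource) {
        // One transaction manager is enough: the tenant's real DataSource is
        // resolved when the transaction obtains its connection.
        return new DataSourceTransactionManager(routingDataSource);
    }
}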

Spring: Define AbstractRoutingDataSource whose targetDataSources are filled from a separate (H2) DataSource

I am trying to set up a Spring/Spring Boot project which has an AbstractRoutingDataSource whose targetDataSources should be filled from a separate DataSource (actually an embedded H2 DataSource).
I have tried many different things (configuring multiple EntityManagerFactories and TransactionManagers), but somewhere I always either get a circular reference, or the Repository bean which should provide the details for my targetDataSources is null when autowiring it into the RoutingDataSource.
The thing seems to be that I can't use any DataSource dependency inside the initialization of my RoutingDataSource, because it is a DataSource itself and will therefore be created before any Repository beans.
Can you give me a hint how an approach would look that configures:
An H2 DataSource which is initialized first
A Repository for this (and only this) H2 DataSource
A RoutingDataSource which depends on the Repository and loads its targetDataSources from it
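One way this is sometimes approached (a sketch; the table, column and script names are invented for the example): read the target definitions with a plain JdbcTemplate built directly on the H2 DataSource instead of going through a JPA repository, so the RoutingDataSource never depends on the EntityManager and the cycle disappears:
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

@Configuration
public class RoutingDataSourceConfig {

    // Hypothetical per-request lookup key; would be set by a web filter.
    public static final ThreadLocal<String> CURRENT_TARGET = new ThreadLocal<>();

    @Bean
    public DataSource configDataSource() {
        // Embedded H2 database holding the connection details of the target databases.
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:target-datasources.sql") // hypothetical seed script
                .build();
    }

    @Bean
    @Primary
    public DataSource routingDataSource(@Qualifier("configDataSource") DataSource configDataSource) {
        // Plain JdbcTemplate on the H2 DataSource: no repository, no EntityManager, no cycle.
        JdbcTemplate jdbc = new JdbcTemplate(configDataSource);
        Map<Object, Object> targets = new HashMap<>();
        jdbc.query("SELECT name, url, username, password FROM target_datasource", rs -> {
            targets.put(rs.getString("name"), DataSourceBuilder.create()
                    .url(rs.getString("url"))
                    .username(rs.getString("username"))
                    .password(rs.getString("password"))
                    .build());
        });

        AbstractRoutingDataSource routing = new AbstractRoutingDataSource() {
            @Override
            protected Object determineCurrentLookupKey() {
                return CURRENT_TARGET.get();
            }
        };
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(configDataSource);
        return routing;
    }
}
A repository over the H2 DataSource can still exist for the rest of the application; the point of the sketch is only that the RoutingDataSource itself does not depend on it during startup.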
