Need help: bean 'r2dbcDatabaseClient' could not be registered when customizing R2DBC configuration
@Configuration
@EnableR2dbcRepositories(basePackages = "com.paymentservice.repository", databaseClientRef = "databaseClient")
public class PaymentR2dbcConfiguration extends AbstractR2dbcConfiguration {

    @Value("${payment.data.mssql.host}")
    private String host;

    @Value("${payment.data.mssql.port}")
    private int port;

    @Value("${payment.data.mssql.database}")
    private String database;

    @Value("${payment.data.mssql.username}")
    private String username;

    @Value("${payment.data.mssql.password}")
    private String password;

    /**
     * An implementation of {@link ConnectionFactory} for creating connections to
     * a Microsoft SQL Server database using R2DBC.
     *
     * @return A factory for creating {@link Connection}s.
     */
    @Override
    public ConnectionFactory connectionFactory() {
        return new MssqlConnectionFactory(
                MssqlConnectionConfiguration.builder()
                        .host(host)
                        .port(port)
                        .database(database)
                        .username(username)
                        .password(password)
                        .build());
    }
}
I'm getting:
The bean 'r2dbcDatabaseClient', defined in class path resource [org/springframework/boot/autoconfigure/data/r2dbc/R2dbcDataAutoConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [com/paymentservice/configurations/PaymentR2dbcConfiguration.class] and overriding is disabled.
I'm using:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-r2dbc</artifactId>
    <version>1.1.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-mssql</artifactId>
    <version>0.8.4.RELEASE</version>
</dependency>
If you are using the Spring Boot starter to configure R2DBC automatically and want to customize the configuration via AbstractR2dbcConfiguration:
If you use more than one connection in your application, and this config adds another connection factory, try adding a name attribute to @Bean to identify the multiple beans.
@Bean(name = "myConn")
@Override
public ConnectionFactory connectionFactory() { ... }
If you want to override the default ConnectionFactory created by the Spring Boot starter, add an extra @Primary on it.
@Bean
@Primary
@Override
public ConnectionFactory connectionFactory() { ... }
You have forgotten to annotate the connectionFactory method with @Bean.
Check the documentation here.
@Bean
@Override
public ConnectionFactory connectionFactory() {
    return new MssqlConnectionFactory(
            MssqlConnectionConfiguration.builder()
                    .host(host)
                    .port(port)
                    .database(database)
                    .username(username)
                    .password(password)
                    .build());
}
If the problem still persists because you have an overriding bean defined in one of your configuration classes, you can use this property:
spring.main.allow-bean-definition-overriding=true
Starting in Spring 5.1, the BeanDefinitionOverrideException was introduced so that unexpected bean overriding fails fast instead of silently replacing an existing bean definition.
This is an open issue in Spring Boot and Spring Data R2DBC; see:
https://github.com/spring-projects/spring-data-r2dbc/issues/296
and https://github.com/spring-projects/spring-boot/issues/21586
The solution that works for me is to use the default R2dbcDataAutoConfiguration and remove the custom AbstractR2dbcConfiguration subclass until the Spring team fixes it.
The following properties are used to initialize R2dbcDataAutoConfiguration:
spring:
  r2dbc:
    url: r2dbc:mssql://xx.xxx.xxx.xxx:6515/*****
    username: xxxx_xxxx
    password: xxxxx
The second solution that works for me is to exclude R2dbcDataAutoConfiguration using the code below.
@SpringBootApplication(exclude = { R2dbcDataAutoConfiguration.class,
        R2dbcAutoConfiguration.class })
Related
I'm working on a spring boot (v 2.2.4) app, specifically to add integration tests which leverage Testcontainers to instantiate a docker container that runs a Postgres instance for tests to perform database transactions against. The tests push our database schema into the Postgres instance via Liquibase. I implemented this following this guide. The connection to the test time Postgres is managed by a class called TestPostgresConfig.java (See below). The liquibase operations are performed by a SpringLiquibase object defined in the same class. I run into a problem when I try running the application after successfully building. The issue is the Spring context tries to instantiate the SpringLiquibase bean at runtime (fails due to db.changelog-master.yaml not being found) and I don't want it to do so:
WARN [main]
org.springframework.context.support.AbstractApplicationContext:
Exception encountered during context initialization - cancelling
refresh attempt:
org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'liquibase' defined in class path
resource
[org/springframework/boot/autoconfigure/liquibase/LiquibaseAutoConfiguration$LiquibaseConfiguration.class]:
Invocation of init method failed; nested exception is
liquibase.exception.ChangeLogParseException: Error parsing
classpath:db/changelog/changelog-master.yaml
Caused by: java.io.FileNotFoundException: class path resource
[db/changelog/changelog-master.yaml] cannot be resolved to URL because
it does not exist
This file does not exist, will never exist in this project, and liquibase should not be trying to push change logs at runtime in the first place. I need help figuring out why Spring tries to load the liquibase bean so I can keep that from happening at runtime.
My set up:
@SpringBootApplication
@EnableRetry
@EnableCommonModule
@EnableScheduling
@Slf4j
@EnableConfigurationProperties({
        ExternalProperties.class,
        ApplicationProperties.class
})
public class MyApplication implements WebMvcConfigurer, CommandLineRunner {

    @Autowired
    MyService myService;

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    public void run(String... args) throws Exception {
        myService.doSomething();
    }
}
TestPostgresConfig.java:
@TestConfiguration
@Profile("integration")
public class TestPostgresConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl(format("jdbc:postgresql://%s:%s/%s", MyIT.psqlContainer.getContainerIpAddress(),
                MyIT.psqlContainer.getMappedPort(5432), MyIT.psqlContainer.getDatabaseName()));
        ds.setUsername(MyIT.psqlContainer.getUsername());
        ds.setPassword(MyIT.psqlContainer.getPassword());
        ds.setSchema(MyIT.psqlContainer.getDatabaseName());
        return ds;
    }

    @Bean
    public SpringLiquibase springLiquibase(DataSource dataSource) throws SQLException {
        tryToCreateSchema(dataSource);
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDropFirst(true);
        liquibase.setDataSource(dataSource);
        liquibase.setDefaultSchema("the_schema");
        // This and all supported liquibase changelog files are copied onto my classpath
        // via the maven assembly plugin. The config to do this has been omitted for the
        // sake of brevity; see this URL for how I did it:
        // https://blog.sonatype.com/2008/04/how-to-share-resources-across-projects-in-maven/
        liquibase.setChangeLog("classpath:/test/location/of/liquibase.changelog-root.yml");
        return liquibase;
    }

    private void tryToCreateSchema(DataSource dataSource) throws SQLException {
        String CREATE_SCHEMA_QUERY = "CREATE SCHEMA IF NOT EXISTS test";
        dataSource.getConnection().createStatement().execute(CREATE_SCHEMA_QUERY);
    }
}
MyIT.java:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = CommonConfig.class)
@ActiveProfiles("integration")
@Import(TestPostgresConfig.class)
public class MyIT {

    @ClassRule
    public static PostgreSQLContainer psqlContainer = new PostgreSQLContainer("postgres:13.1")
            .withDatabaseName("test-database-instance")
            .withUsername("divdiff")
            .withPassword("theseAreNotTheDroidsForYou123");

    @BeforeClass
    public static void init() {
        System.setProperty("spring.datasource.url", "jdbc:postgresql://"
                + psqlContainer.getHost() + ":"
                + psqlContainer.getMappedPort(5432) + "/"
                + psqlContainer.getDatabaseName());
        System.setProperty("spring.datasource.username", psqlContainer.getUsername());
        System.setProperty("spring.datasource.password", psqlContainer.getPassword());
    }

    @Before
    public void setUp() {
        // code to set up my test
    }

    @Test
    public void testMyCodeEndToEnd() {
        // my test implementation
    }
}
MyConfig.java:
@Configuration
@ComponentScan(basePackages = "my.code")
@EntityScan("my.code")
@Slf4j
public class MyConfig {

    @Bean
    public KeyStore keyStore() {
        // load keystore and set javax.net.ssl.keystore* properties
    }

    @Bean
    public KeyStore trustStore() {
        // load truststore and set javax.net.ssl.truststore* properties
    }

    @Bean
    public RestTemplate restTemplate() {
        // Set up and load SSL context with key and trust store.
        // Create HTTPClient and connection stuff.
        // Look at this link for a similar set up:
        // https://www.baeldung.com/rest-template
    }
}
application-integration.yml
spring:
  jpa:
    properties:
      hibernate:
        enable_lazy_load_no_trans: true
  profiles:
    active: default
server:
  ssl:
    # My key and trust store values
application:
  unrelated-app-properties:
    # property values below
Package structure:
app-project/src/main/java/com/my/code/MyApplication.java
app-project/src/main/java/com/my/code/service/MyService.java
app-project/src/test/java/my/code/OTHER-TEST-CLASSES-LIVE-HERE...
app-project/src/test/java/integration/MyIT.java
app-project/src/test/java/integration/TestPostgresConfig.java
app-project/src/test/resources/application-integration.yml
my-common-project/src/main/java/common/config/MyConfig.java
YOUR HELP IS MUCH APPRECIATED!!! :D
I'm an idiot. The maven dependency I brought in for my tests was using provided scope instead of test:
<dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>project-with-db-changelogs</artifactId>
    <version>1.0-SNAPSHOT</version>
    <classifier>resources</classifier>
    <type>zip</type>
    <scope>provided</scope>
</dependency>
When it should have been test scope:
<dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>project-with-db-changelogs</artifactId>
    <version>1.0-SNAPSHOT</version>
    <classifier>resources</classifier>
    <type>zip</type>
    <scope>test</scope>
</dependency>
Per this link, "This is available only in compile-classpath and test-classpath"; hence the liquibase code was being run both in my tests and in the resulting jar. #amateur-hour
You can define the Liquibase context as test:
<changeSet author="name" id="id-of-file" context="test">
and have an application property like:
spring.liquibase.contexts=test
and add a liquibase bean like:
#Value("${spring.liquibase.contexts}")
private String liquibaseContexts;
#Bean
public SpringLiquibase liquibase() {
SpringLiquibase liquibase = new SpringLiquibase();
liquibase.setDataSource(localDatabaseDataSource);
liquibase.setShouldRun(liquibaseEnabled);
liquibase.setChangeLog(localDatabaseLiquibaseChangeLog);
liquibase.setContexts(liquibaseContexts);
return liquibase;
}
I have a batch configuration. I saw that the batch process uses the in-memory map-based job repository by default. Instead, I need Spring Batch to write all of its execution details to MySQL. But when I use the following code I get the following error:
Error creating bean with name 'batchDataSource': Requested bean is
currently in creation: Is there an unresolvable circular reference?
@Configuration
@EnableBatchProcessing
public class BatchProcess extends DefaultBatchConfigurer {

    @Autowired
    private Environment env;

    // provided by @EnableBatchProcessing
    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Bean
    @StepScope
    public ItemReader reader() {
        ...
    }

    @Bean
    @StepScope
    public ItemProcessor processor() {
        ...
    }

    @Bean
    @StepScope
    public ItemWriter writer() {
        ...
    }

    @Bean
    @Primary
    public DataSource batchDataSource() {
        HikariDataSource hikari = new HikariDataSource();
        hikari.setDriverClassName(env.getProperty("spring.datasource.driver-class-name"));
        hikari.setJdbcUrl(env.getProperty("spring.datasource.url"));
        hikari.setUsername(env.getProperty("spring.datasource.username"));
        hikari.setPassword(env.getProperty("spring.datasource.password"));
        return hikari;
    }

    public JobRepository getJobRepository() {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource());
        factory.setTransactionManager(manager());
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    public PlatformTransactionManager manager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public Step step() {
        return stepBuilderFactory.get("step")
                .chunk(1000)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
                .flow(step())
                .end()
                .build();
    }

    @Bean
    public JobLauncher getJobLauncher() {
        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(createJobRepository());
        return launcher;
    }
}
In the property file I am using:
spring.batch.job.enabled=false
spring.batch.initialize-schema=always
So what did I miss? I am using JPA, so why is it not using the available JPA datasource? How can I force Spring Batch to use MySQL instead of the in-memory map?
The error message you are receiving may not be the clearest, but it should point you in the right direction. You appear to have a circular dependency within your code.
This happens when you have two (or more) beans that mutually depend upon one another, preventing the creation of one without the existence of the other (and vice versa) - the proverbial chicken and egg problem. You can generally avoid this with setter injection and some kind of post-construction initialization.
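For illustration, here is a minimal sketch of that chicken-and-egg situation and how setter injection breaks it (the Chicken/Egg beans are hypothetical, not from your code): Spring can instantiate both beans first and wire the references afterwards, which constructor injection cannot do.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class Chicken {
    private Egg egg;

    @Autowired // wired after both beans exist, so there is no cycle at construction time
    public void setEgg(Egg egg) { this.egg = egg; }
}

@Component
class Egg {
    private Chicken chicken;

    @Autowired
    public void setChicken(Chicken chicken) { this.chicken = chicken; }
}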
I think you have created this situation by extending DefaultBatchConfigurer and then defining the @Bean-annotated method getJobLauncher(), which directly calls DefaultBatchConfigurer's createJobRepository() method without ensuring that the DataSource is first set within DefaultBatchConfigurer.
This is entirely unnecessary, because DefaultBatchConfigurer already creates JobRepository, JobExplorer, and JobLauncher for you in the proper order.
From DefaultBatchConfigurer:
@PostConstruct
public void initialize() {
    try {
        this.jobRepository = createJobRepository();
        this.jobExplorer = createJobExplorer();
        this.jobLauncher = createJobLauncher();
    } catch (Exception e) {
        throw new BatchConfigurationException(e);
    }
}
If you are going to extend DefaultBatchConfigurer, then I suggest you eliminate the following methods from your code:
getJobRepository()
manager()
getJobLauncher()
From your code sample, it appears that you are already setting the following properties (in your application.properties file?):
spring.datasource.jdbcUrl=...
spring.datasource.username=...
spring.datasource.password=...
spring.datasource.driverClassName=...
That should be sufficient to allow Spring's auto-configuration to create a Hikari DataSource for you automatically, and this is the approach I usually take. The Spring bean name will be dataSource, and it will be autowired into DefaultBatchConfigurer via setDataSource().
However, in your code sample you have also defined a @Bean-annotated method named batchDataSource(), which looks no different from what you should receive from Spring auto-configuration. As long as you have the spring.datasource properties mentioned earlier configured, you should be able to eliminate batchDataSource() as well, but I don't think that's strictly necessary, so it's your choice.
If you still want to manually configure your DataSource, then I suggest that you not extend DefaultBatchConfigurer, but instead define a custom bean for it in a configuration class where you can directly pass in your custom DataSource (based on what I currently know of your use case).
@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer(batchDataSource());
}
First of all, explicitly define the mysql-connector dependency in the pom.xml and remove anything related to the in-memory map from the project.
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
If you want to define your own configuration with manually declared beans, then you can't use the auto-configuration classes, because they create the required beans for you automatically on startup, and that can clash with your own custom DB configuration classes. Therefore, you have to exclude DataSourceAutoConfiguration, HibernateJpaAutoConfiguration and DataSourceTransactionManagerAutoConfiguration to resolve the issue.
Just update the #SpringBootApplication class :
@SpringBootApplication(
        exclude = { DataSourceAutoConfiguration.class, HibernateJpaAutoConfiguration.class,
                DataSourceTransactionManagerAutoConfiguration.class })
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
I have a Spring Boot application that uses 2 datasources. It can connect to the first one fine. When I try to use the second data source, I am getting the SQL error: Table or view does not exist because it is using the first data source as its connection.
This is my application property file:
#Property for both DBs
spring.datasource.driver.class.name=oracle.jdbc.driver.OracleDriver
## Database Properties DB #1
spring.datasource.tms.url=jdbc:oracle:thin:@ldap:<connection properties have been removed but are present>
spring.datasource.tms.username=own_app
spring.datasource.tms.password=own_app_l1
spring.datasource.tms.jmx.enabled=true
spring.main.allow-bean-definition-overriding=true
## Database Properties DB #2
spring.datasource.lhl.url=jdbc:oracle:thin:@ldap:<connection string has been removed but is correct>
spring.datasource.lhl.username=LHL_PURCH_APP
spring.datasource.lhl.password=ChangemeChangemeChangeme$$2019
spring.datasource.lhl.jmx-enabled=true
This is my Configuration file for both data sources:
@Configuration
@PropertySource("classpath:application-local.properties")
public class FxgLhlPurchasedItineraryAdapterDataSourceConfiguration {

    @Value("${spring.datasource.driver.class.name}")
    private String driverClassName;

    // TMS properties
    @Value("${spring.datasource.tms.url}")
    private String tmsUrl;

    @Value("${spring.datasource.tms.username}")
    private String tmsUsername;

    @Value("${spring.datasource.tms.password}")
    private String tmsPassword;

    // LHL properties
    @Value("${spring.datasource.lhl.url}")
    private String lhlUrl;

    @Value("${spring.datasource.lhl.username}")
    private String lhlUsername;

    @Value("${spring.datasource.lhl.password}")
    private String lhlPassword;

    @Primary
    @Bean(name = "tmsDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.tms")
    public DataSource tmsDataSource() {
        DataSourceBuilder factory = DataSourceBuilder.create(this.getClass().getClassLoader())
                .driverClassName(driverClassName)
                .url(tmsUrl)
                .username(tmsUsername)
                .password(tmsPassword);
        return factory.build();
    }

    @Bean(name = "lhlDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.lhl")
    public DataSource lhlDataSource() {
        DataSourceBuilder factory = DataSourceBuilder.create(this.getClass().getClassLoader())
                .driverClassName(driverClassName)
                .url(lhlUrl)
                .username(lhlUsername)
                .password(lhlPassword);
        return factory.build();
    }

    @Bean(name = "tmsJdbcTemplate")
    public JdbcTemplate tmsJdbcTemplate(final DataSource tmsDataSource) {
        return new JdbcTemplate(tmsDataSource);
    }

    @Bean(name = "lhlJdbcTemplate")
    public JdbcTemplate lhlJdbcTemplate(final DataSource lhlDataSource) {
        return new JdbcTemplate(lhlDataSource);
    }
}
I had to put in the @Primary annotation; without it the service will not run.
When I try to do a simple SELECT of a table that is not in the primary database, I get the error that the table does not exist.
This is the code that calls the select statement:
private JdbcTemplate lhlJdbcTemplate;

DataSource ds = lhlJdbcTemplate.getDataSource();
Connection con = ds.getConnection();
LOGGER.info("Connection info: {}", con.getSchema());

lhlParmSqIdModelList = lhlJdbcTemplate.query(
        selectSequenceNbrSQLStatement,
        new LhlParmSqIdModelRowMapper());
The logger statement returns the schema for the primary database.
How can I get the connection to use the second database?
Because you have multiple DataSource beans, Spring would normally fail, because it doesn't know how to automatically decide which of several equivalent beans it should use; it puts that responsibility on you as the programmer.
By adding the #Primary annotation, you are telling Spring "If there are multiple candidate beans of this type, use this one."
Your bean methods aren't asking Spring for a particular DataSource, they just want any DataSource, so Spring gives each of them the one marked with #Primary.
Instead, you'll want to use #Qualifier to indicate exactly which named DataSource they want:
#Bean(name = "tmsJdbcTemplate")
public JdbcTemplate tmsJdbcTemplate(#Qualifier("tmsDataSource") final DataSource tmsDataSource) {
return new JdbcTemplate(tmsDataSource);
}
#Bean(name = "lhlJdbcTemplate")
public JdbcTemplate lhlJdbcTemplate(#Qualifier("lhlDataSource") final DataSource lhlDataSource) {
return new JdbcTemplate(lhlDataSource);
}
I don't guarantee this syntax is exactly right, but something like that.
You'll also need to qualify the JdbcTemplate injection point. (Credit to Sherif Behna in the comments)
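For example, at the point where the template is injected into your class (a sketch; the field name comes from your snippet, the surrounding class is assumed):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;

@Autowired
@Qualifier("lhlJdbcTemplate") // ask for the bean by name instead of accepting the @Primary one
private JdbcTemplate lhlJdbcTemplate;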
I'm a beginner and I have a simple Spring Boot project. It's my first time using a connection pool (HikariCP in this case) and I need your help. It's working, but I want to know whether I'm using it the right way with Hibernate, whether there are better ways to do it, and whether my Spring Boot project structure is correct.
EDIT: It's working even if I remove the HikariCPConfig class. How can I tell whether the connection pool is actually being used?
The project is as follows:
- BankManager
src/main/java
|
|__com.manager
|__BankManagerApplication.java
|__HikariCPConfig.java
|__com.manager.dao
|__ClientRepository.java
|__com.manager.entities
|__Client.java
|__com.manager.service
|__ClientServiceImpl.java
|__ClientServiceInterface.java
src/main/resources
|__application.properties
BankManagerApplication.java:
@SpringBootApplication
public class BankManagerApplication {

    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(BankManagerApplication.class, args);
        ClientServiceInterface service = ctx.getBean(ClientServiceInterface.class);
        service.addClient(new Client("client1"));
        service.addClient(new Client("client2"));
    }
}
HikariCPConfig.java:
@Configuration
@ComponentScan
class HikariCPConfig {

    @Value("${spring.datasource.username}")
    private String user;

    @Value("${spring.datasource.password}")
    private String password;

    @Value("${spring.datasource.url}")
    private String dataSourceUrl;

    @Value("${spring.datasource.dataSourceClassName}")
    private String dataSourceClassName;

    @Value("${spring.datasource.poolName}")
    private String poolName;

    @Value("${spring.datasource.connectionTimeout}")
    private int connectionTimeout;

    @Value("${spring.datasource.maxLifetime}")
    private int maxLifetime;

    @Value("${spring.datasource.maximumPoolSize}")
    private int maximumPoolSize;

    @Value("${spring.datasource.minimumIdle}")
    private int minimumIdle;

    @Value("${spring.datasource.idleTimeout}")
    private int idleTimeout;

    @Bean
    public HikariDataSource primaryDataSource() {
        Properties dsProps = new Properties();
        dsProps.put("url", dataSourceUrl);
        dsProps.put("user", user);
        dsProps.put("password", password);
        dsProps.put("prepStmtCacheSize", 250);
        dsProps.put("prepStmtCacheSqlLimit", 2048);
        dsProps.put("cachePrepStmts", Boolean.TRUE);
        dsProps.put("useServerPrepStmts", Boolean.TRUE);

        Properties configProps = new Properties();
        configProps.put("dataSourceClassName", dataSourceClassName);
        configProps.put("poolName", poolName);
        configProps.put("maximumPoolSize", maximumPoolSize);
        configProps.put("minimumIdle", minimumIdle);
        configProps.put("connectionTimeout", connectionTimeout);
        configProps.put("idleTimeout", idleTimeout);
        configProps.put("dataSourceProperties", dsProps);

        HikariConfig hc = new HikariConfig(configProps);
        return new HikariDataSource(hc);
    }
}
ClientServiceImpl.java
@Service
public class ClientServiceImpl implements ClientServiceInterface {

    @Autowired
    ClientRepository clientRepository; // this interface extends JpaRepository

    @Override
    public Client addClient(Client c) {
        return clientRepository.save(c);
    }
}
application.properties:
server.port = 8888
spring.jpa.databasePlatform=org.hibernate.dialect.MySQL5Dialect
spring.jpa.show-sql = true
spring.jpa.hibernate.ddl-auto = update
spring.datasource.dataSourceClassName=com.mysql.jdbc.jdbc2.optional.MysqlDataSource
spring.datasource.url=jdbc:mysql://localhost:3306/bank_manager
spring.datasource.username=root
spring.datasource.password=
spring.datasource.poolName=SpringBootHikariCP
spring.datasource.maximumPoolSize=5
spring.datasource.minimumIdle=3
spring.datasource.maxLifetime=2000000
spring.datasource.connectionTimeout=30000
spring.datasource.idleTimeout=30000
spring.datasource.pool-prepared-statements=true
spring.datasource.max-open-prepared-statements=250
Thank you in advance.
Your project structure is standard, so it's correct.
About Hikari:
Hikari is indeed a great choice for pooling. I'm used to working with Hikari successfully using a smaller set of parameters than you are applying in your case, but if it's working for you, that's fine.
For more info about Hikari setup, I recommend reading the official wiki, if you haven't already.
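Regarding your EDIT: one way to check whether the pool is actually in use is to print the runtime class of the DataSource bean that Spring created. A minimal sketch, reusing the ctx variable from your main method (note that, depending on your Spring Boot version, the starter may auto-configure Hikari even without your HikariCPConfig class, which would explain why removing it still works):
import javax.sql.DataSource;

DataSource ds = ctx.getBean(DataSource.class);
// Prints com.zaxxer.hikari.HikariDataSource when Hikari is the active pool
System.out.println(ds.getClass().getName());
You can also raise the log level for the com.zaxxer.hikari package to see the pool's startup and housekeeping output.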
About property loading:
You can make use of some Spring Boot features to read the DB parameters and apply them to your runtime beans with less code. Like:
In application.properties (define a custom prefix 'myproject.db' for your pool params)
myproject.db.dataSourceClassName=com.mysql.jdbc.jdbc2.optional.MysqlDataSource
myproject.db.url=jdbc:mysql://localhost:3306/bank_manager
myproject.db.username=root
... and the other params below
Create a Spring Configuration class
@Configuration
public class MyDBConfiguration {

    @Bean(name = "myProjectDataSource")
    @ConfigurationProperties(prefix = "myproject.db")
    public DataSource dataSource() {
        // This will have Spring create a new DataSource instance (Hikari, when it is
        // on the classpath) with all the parameters you defined under 'myproject.db'
        return DataSourceBuilder.create().build();
    }
}
In your ClientRepository class:
@Repository
public class ClientRepository {

    // The code below is optional, but works if you want a JdbcTemplate tied to the
    // DataSource created above. By default, all Hibernate sessions will use the
    // DataSource generated by Spring.
    @Bean(name = "myProjectJdbcTemplate")
    public JdbcTemplate jdbcTemplate(@Qualifier("myProjectDataSource") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}
There are other options for managing DataSource bean creation if you are going to use two or more different databases. You can vary the property prefix for each database and annotate exactly one DataSource as @Primary, which is mandatory when you have more than one DataSource in the Spring context.
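A minimal sketch of that multi-database arrangement (the myproject.db1/myproject.db2 prefixes are assumptions, following the same pattern as above; DataSourceBuilder lives in org.springframework.boot.jdbc in Boot 2.x):
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class MultiDBConfiguration {

    @Primary // exactly one DataSource must be primary when several exist
    @Bean(name = "firstDataSource")
    @ConfigurationProperties(prefix = "myproject.db1")
    public DataSource firstDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "secondDataSource")
    @ConfigurationProperties(prefix = "myproject.db2")
    public DataSource secondDataSource() {
        return DataSourceBuilder.create().build();
    }
}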
I am currently working on a Spring 4 application that uses MyBatis and is completely annotation-driven (that cannot change per architecture requirements). I am trying to add a second data source definition with a completely separate set of mapping configurations.
The problem I am having is that I cannot get the two data sources to play nicely together.
I created a new, virtually identical class and added @Qualifier data to the new file.
The configuration for the classes looks like this:
Data Source 1
@Configuration
@MapperScan(basePackages = "com.myproject.package1", annotationClass = Mapper.class)
public class DataSource1 {

    @Bean
    @Qualifier("DS1")
    public DataSource getDataSource() {
        /* configuration loaded */
    }

    @Bean
    @Qualifier("DS1")
    public SqlSessionFactory getSqlSessionFactory() {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(getDataSource());
        /* mapper resources added */
        return bean.getObject();
    }
}
Data Source 2
@Configuration
@MapperScan(basePackages = "com.myproject.package2", annotationClass = Mapper.class)
public class DataSource2 {

    @Bean
    @Qualifier("DS2")
    public DataSource getDataSource() {
        /* configuration loaded */
    }

    @Bean
    @Qualifier("DS2")
    public SqlSessionFactory getSqlSessionFactory() {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(getDataSource());
        /* mapper resources added */
        return bean.getObject();
    }
}
When this runs I get exception messages like:
org.apache.ibatis.binding.BindingException: Invalid bound statement (not found)
If I comment-out the data in DS2, DS1 works just fine again. I tried adding the mapper scanning configuration data in another bean and setting the name of the SqlSessionFactoryBean to pass into it but that did not work.
Suggestions?
UPDATE
I looked at this post and updated to use the following.
#Bean (name = "the_factory_1")
public SqlSessionFactory getSqlSessionFactory() { /* same code */ }
#Bean
public MapperScannerConfigurer getMapperScannerConfigurer() {
MapperScannerConfigurer configurer = new MapperScannerConfigurer();
configurer.setBasePackage("com.myproject.package1");
configurer.setAnnotationClass(Mapper.class);
configurer.setSqlSessionFactoryBeanName("the_factory_1");
return configurer;
}
However, that leads me to this error:
No qualifying bean of type [com.myproject.package1.mapper.MyMapper] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}
When I debug, only one @Bean method for the factory gets invoked.
UPDATE 2
If I move everything to a single file all is fine. However, that is not ideal as I want the DataSource definitions to be separated. That's my only hurdle right now.
You can use ace-mybatis; it simplifies the configuration.
Add one bean:
@Bean
public static AceMapperScannerConfigurer mapperScannerConfigurer() {
    return AceMapperScannerConfigurer.builder()
            .basePackage("com.myproject.package1")
            .build();
}
And then mark your mapper interfaces with @AceMapper and specify the sqlSessionFactory:
@AceMapper(sqlSessionFactoryBeanName = "firstSqlSessionFactory")
public interface UserMapper {
    Stream<User> selectUsers();
}

@AceMapper(sqlSessionFactoryBeanName = "secondSqlSessionFactory")
public interface ClientMapper {
    Stream<Client> selectClients();
}
Alternatively, use the DAO Factory pattern to obtain connections for the multiple datasources (DS1 and DS2), with a utility class that supplies the required configuration via annotations.
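A minimal sketch of that idea (the DaoFactory class and its method are hypothetical, not from any library; it assumes the two qualified DataSource beans defined above):
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class DaoFactory {

    private final DataSource ds1;
    private final DataSource ds2;

    // The two qualified DataSource beans from the configuration classes above
    public DaoFactory(@Qualifier("DS1") DataSource ds1, @Qualifier("DS2") DataSource ds2) {
        this.ds1 = ds1;
        this.ds2 = ds2;
    }

    // Hand out the right DataSource by key so DAOs never hard-wire a connection
    public DataSource dataSourceFor(String key) {
        return "DS1".equals(key) ? ds1 : ds2;
    }
}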