I'm using Spring Data Neo4j 4.0.0.M1 and am trying to connect to the server, but I'm getting an exception:
Caused by: org.apache.http.client.HttpResponseException: Unauthorized
I have a password on the server interface, but I'm not sure how to tell Spring about it.
@Configuration
@EnableNeo4jRepositories(basePackages = "com.noxgroup.nitro.persistence")
@EnableTransactionManagement
public class MyConfiguration extends Neo4jConfiguration {

    @Bean
    public Neo4jServer neo4jServer() {
        /*** I was quite surprised not to see an overloaded parameter here ***/
        return new RemoteServer("http://localhost:7474");
    }

    @Bean
    public SessionFactory getSessionFactory() {
        return new SessionFactory("org.my.software.domain");
    }

    @Bean
    ApplicationListener<BeforeSaveEvent> beforeSaveEventApplicationListener() {
        return new ApplicationListener<BeforeSaveEvent>() {
            @Override
            public void onApplicationEvent(BeforeSaveEvent event) {
                if (event.getEntity() instanceof User) {
                    User user = (User) event.getEntity();
                    user.encodePassword();
                }
            }
        };
    }
}
Side Note
4.0.0 Milestone 1 is absolutely fantastic. If anyone is using 3.x.x, I'd recommend checking it out!
The username and password are currently passed via system properties,
e.g.
-Drun.jvmArguments="-Dusername=<usr> -Dpassword=<pwd>"
or
System.setProperty("username", "neo4j");
System.setProperty("password", "password");
https://jira.spring.io/browse/DATAGRAPH-627 is open (though not targeted for 4.0 RC1); please feel free to add comments or vote it up.
Related
I wanted to upgrade my Spring Boot application, which uses Hazelcast 3.12.9 as its caching mechanism, to Java 11 and Tomcat 9. When I deployed locally, everything seemed to work fine and caching worked successfully. But when the application runs on the cluster, I receive the following error from all 3 available nodes:
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.some.service.some.server.domain.ClassA
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:88)
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:77)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:48)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:187)
at com.hazelcast.map.impl.proxy.MapProxySupport.toObject(MapProxySupport.java:1237)
at com.hazelcast.map.impl.proxy.MapProxyImpl.get(MapProxyImpl.java:120)
at com.hazelcast.spring.cache.HazelcastCache.lookup(HazelcastCache.java:162)
at com.hazelcast.spring.cache.HazelcastCache.get(HazelcastCache.java:67)
at com.some.service.some.server.domain.ClassACache.get(GlassACache.java:28)
at com.some.service.some.server.domain.ClassAFacade.getClassA(ClassAFacade.java:203)
at com.some.service.some.server.domain.ClassAFacade.getGlassA(ClassAFacade.java:185)
at com.some.service.some.server.domain.ClassALogic.lambda$getClassAInParallel$1(ClassALogic.java:196)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:952)
at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:926)
at java.base/java.util.stream.AbstractTask.compute(AbstractTask.java:327)
at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.ClassNotFoundException: com.some.service.some.server.domain.ClassA
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:288)
Hazelcast customizer:
@Configuration
public class ClassAHazelcastConfig {

    private static final MaxSizePolicy HAZELCAST_DEFAULT_MAX_SIZE_POLICY = MaxSizePolicy.PER_NODE;
    private static final EvictionPolicy HAZELCAST_DEFAULT_EVICTION_POLICY = EvictionPolicy.LRU;

    @Bean
    HazelcastConfigurationCustomizer customizer(CachePropertiesHolder cacheProperties) {
        return config -> {
            config.addMapConfig(new MapConfig()
                    .setName(CLASS_A_CACHE)
                    .setMaxSizeConfig(new MaxSizeConfig(cacheProperties.getMaxsize(), HAZELCAST_DEFAULT_MAX_SIZE_POLICY))
                    .setEvictionPolicy(HAZELCAST_DEFAULT_EVICTION_POLICY)
                    .setTimeToLiveSeconds(cacheProperties.getTtl()));
            config.getSerializationConfig().addSerializerConfig(
                    new SerializerConfig()
                            .setImplementation(new OptionalStreamSerializer())
                            .setTypeClass(Optional.class)
            );
        };
    }
}
@Configuration
@EnableConfigurationProperties(CachePropertiesHolder.class)
public class CacheConfig implements CachingConfigurer, EnvironmentAware, ApplicationContextAware {

    public static final String CLASS_A_CACHE = "CACHE_A";

    private Environment environment;
    private ApplicationContext applicationContext;

    @Override
    @Bean(name = "cacheManager")
    public CacheManager cacheManager() {
        boolean cachingEnabled = Boolean.parseBoolean(environment.getProperty("cache.enabled"));
        if (cachingEnabled) {
            HazelcastInstance instance = (HazelcastInstance) applicationContext.getBean("hazelcastInstance");
            return new HazelcastCacheManager(instance);
        }
        return new NoOpCacheManager();
    }

    @Override
    public CacheResolver cacheResolver() {
        return new SimpleCacheResolver(Objects.requireNonNull(cacheManager()));
    }

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return new SimpleKeyGenerator();
    }

    @Bean
    @Override
    public CacheErrorHandler errorHandler() {
        return new SimpleCacheErrorHandler();
    }

    @Override
    public void setEnvironment(@NotNull Environment environment) {
        this.environment = environment;
    }

    @Override
    public void setApplicationContext(@NotNull ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
Everything works fine with Java 8 and Tomcat 8.
Update:
After some days of investigation, I see that the only place these exceptions are thrown is inside a parallel stream that is used:
return forkJoinPool.submit(() ->
        items.parallelStream()
                .map(item -> {
                    try {
                        return biFunction.apply(item);
                    } catch (Exception e) {
                        LOG.error("Error", e);
                        return Optional.<Item>empty();
                    }
                })
The weird thing is that with Java 8 and Tomcat 8 I did not have that issue.
Eventually, it turned out to be completely unrelated to Hazelcast; it comes down to a difference between Java 11 and Java 8.
In the part where the exception was thrown, a ForkJoinPool was used. From Java 11 on, it is not guaranteed when exactly its worker threads are created, and they do not seem to have the same classloader as the Spring application (see: Classes with default access result in NoClassDefFound error at runtime in spring boot project for java 11).
I made a wrong assumption because the exception was coming from Hazelcast, and I also saw that there were other related issues.
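A common workaround (a sketch of the general technique, not necessarily the exact fix applied here) is to run the parallel stream on a dedicated ForkJoinPool whose worker threads inherit the web application's context classloader instead of the system one:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;

public class AppClassLoaderForkJoinPool {

    // On Java 9+, worker threads may be created with the system class loader;
    // this factory propagates the caller's (webapp) class loader instead.
    public static ForkJoinPool create(int parallelism) {
        final ClassLoader appClassLoader = Thread.currentThread().getContextClassLoader();
        ForkJoinPool.ForkJoinWorkerThreadFactory factory = pool -> {
            ForkJoinWorkerThread thread =
                    ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
            thread.setContextClassLoader(appClassLoader);
            return thread;
        };
        return new ForkJoinPool(parallelism, factory, null, false);
    }
}

Submitting the parallelStream() work to a pool created this way, rather than the common pool, makes the application's classes resolvable from the worker threads.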
Hazelcast offers two deployment models: embedded mode and client-server mode. Please check the relevant documentation section for more information.
In the former case, you have a single JVM, and all of your classes are on its classpath.
In the latter case, you have at least 2 JVMs: one for each client and one for the member. It seems you forgot to set up the classpath of the member to reference ClassA.
How to set the classpath depends on how you launch your Hazelcast members. The method that works in most cases is to use the CLASSPATH environment variable.
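For example, a sketch of launching a standalone member with the application's domain classes on its classpath (the paths and script name are assumptions based on a typical Hazelcast 3.x distribution):

# make ClassA (and the rest of the domain model) visible to the member JVM
export CLASSPATH=/path/to/app-domain-classes.jar
/opt/hazelcast/bin/start.sh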
Currently, my Java/Spring application backend is deployed on EC2 and accesses MySQL on RDS successfully using the regular Spring JDBC setup; that is, storing database info in application.properties and configuring DataSource and JdbcTemplate in a @Configuration class. Everything works fine.
Now, I need to access MySQL on RDS securely. The RDS instance has IAM authentication enabled. I have also successfully created an IAM role and applied an inline policy. Then, following the AWS RDS documentation and the Java example on this link, I am able to access the database from a standalone Java class successfully, using an authentication token and the user I created instead of the regular db username and password. This standalone Java class deals with the "Connection" object directly.
The place I am stuck is how to translate this to the Spring JDBC configuration; that is, setting up the DataSource and JdbcTemplate beans for this in my @Configuration class. I am not able to figure out how to implement this; any help and/or pointers will be greatly appreciated.
What would be a correct/right approach to implement this?
----- EDIT - Start -----
I am trying to implement this as a library that can be used by multiple projects. That is, it will be packaged as a JAR and declared as a dependency in a project's POM file. This library is going to include configurable AWS services like this RDS access using a general DB username and password, RDS access using IAM authentication, KMS (CMK/data keys) for data encryption, etc.
The idea is to use this library on any web/app server, depending on the project.
Hope this clarifies my need more.
----- EDIT - End -----
DataSource internally has getConnection(), so I can basically create my own DataSource implementation to achieve what I want. But is this a good approach?
Something like:
public class MyDataSource implements DataSource {

    @Override
    public Connection getConnection() throws SQLException {
        Connection conn = null;
        // get a connection using an IAM authentication token for accessing AWS RDS, etc. as in the AWS docs
        return conn;
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }

    // other methods
}
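For context, a minimal sketch of what that getConnection() might contain, based on the standalone example from the AWS docs (host, region, and user name below are placeholders):

@Override
public Connection getConnection() throws SQLException {
    RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
            .credentials(new DefaultAWSCredentialsProviderChain())
            .region("us-east-1")
            .build();
    String token = generator.getAuthToken(GetIamAuthTokenRequest.builder()
            .hostname("dbhost.xyz123abc.us-east-1.rds.amazonaws.com")
            .port(3306)
            .userName("iam_app_user")
            .build());
    // the token replaces the password; IAM auth requires SSL to be enabled
    return DriverManager.getConnection(
            "jdbc:mysql://dbhost.xyz123abc.us-east-1.rds.amazonaws.com:3306/dbname?useSSL=true&requireSSL=true",
            "iam_app_user", token);
}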
You can use the following snippet as a replacement for the default connection pool provided by Spring Boot/Tomcat. It will refresh the token password every 10 minutes, since the token is valid for 15 minutes. It also assumes the region can be extracted from the DNS hostname; if this is not the case, you'll need to specify the region explicitly.
public class RdsIamAuthDataSource extends org.apache.tomcat.jdbc.pool.DataSource {

    private static final Logger LOG = LoggerFactory.getLogger(RdsIamAuthDataSource.class);

    /**
     * The Java KeyStore (JKS) file that contains the Amazon root CAs.
     */
    public static final String RDS_CACERTS = "/rds-cacerts";

    /**
     * Password for the ca-certs file.
     */
    public static final String PASSWORD = "changeit";

    public static final int DEFAULT_PORT = 3306;

    @Override
    public ConnectionPool createPool() throws SQLException {
        return pool != null ? pool : createPoolImpl();
    }

    protected synchronized ConnectionPool createPoolImpl() throws SQLException {
        return pool = new RdsIamAuthConnectionPool(poolProperties);
    }

    public static class RdsIamAuthConnectionPool extends ConnectionPool implements Runnable {

        private RdsIamAuthTokenGenerator rdsIamAuthTokenGenerator;
        private String host;
        private String region;
        private int port;
        private String username;
        private Thread tokenThread;

        public RdsIamAuthConnectionPool(PoolConfiguration prop) throws SQLException {
            super(prop);
        }

        @Override
        protected void init(PoolConfiguration prop) throws SQLException {
            try {
                URI uri = new URI(prop.getUrl().substring(5));
                this.host = uri.getHost();
                this.port = uri.getPort();
                if (this.port < 0) {
                    this.port = DEFAULT_PORT;
                }
                this.region = StringUtils.split(this.host, '.')[2]; // extract region from RDS hostname
                this.username = prop.getUsername();
                this.rdsIamAuthTokenGenerator = RdsIamAuthTokenGenerator.builder()
                        .credentials(new DefaultAWSCredentialsProviderChain())
                        .region(this.region)
                        .build();
                updatePassword(prop);
                final Properties props = prop.getDbProperties();
                props.setProperty("useSSL", "true");
                props.setProperty("requireSSL", "true");
                props.setProperty("trustCertificateKeyStoreUrl", getClass().getResource(RDS_CACERTS).toString());
                props.setProperty("trustCertificateKeyStorePassword", PASSWORD);
                super.init(prop);
                this.tokenThread = new Thread(this, "RdsIamAuthDataSourceTokenThread");
                this.tokenThread.setDaemon(true);
                this.tokenThread.start();
            } catch (URISyntaxException e) {
                throw new RuntimeException(e.getMessage());
            }
        }

        @Override
        public void run() {
            try {
                while (this.tokenThread != null) {
                    Thread.sleep(10 * 60 * 1000); // wait for 10 minutes, then recreate the token
                    updatePassword(getPoolProperties());
                }
            } catch (InterruptedException e) {
                LOG.debug("Background token thread interrupted");
            }
        }

        @Override
        protected void close(boolean force) {
            super.close(force);
            Thread t = tokenThread;
            tokenThread = null;
            if (t != null) {
                t.interrupt();
            }
        }

        private void updatePassword(PoolConfiguration props) {
            String token = rdsIamAuthTokenGenerator.getAuthToken(
                    GetIamAuthTokenRequest.builder().hostname(host).port(port).userName(this.username).build());
            LOG.debug("Updated IAM token for connection pool");
            props.setPassword(token);
        }
    }
}
Please note that you'll need to import Amazon's root/intermediate certificates to establish a trusted connection. The example code above assumes that the certificates have been imported into a file called 'rds-cacerts' that is available on the classpath. Alternatively, you can import them into the JVM's 'cacerts' file.
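For example, a sketch of building that keystore with keytool (the .pem file name is an assumption; use the certificates you downloaded from AWS):

keytool -importcert -alias rds-root -file rds-ca-root.pem \
        -keystore rds-cacerts -storepass changeit -noprompt

The resulting rds-cacerts file then needs to be on the classpath so that getClass().getResource(RDS_CACERTS) can find it.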
To use this data-source, you can use the following properties for Spring:
spring:
  datasource:
    url: jdbc:mysql://dbhost.xyz123abc.us-east-1.rds.amazonaws.com/dbname
    username: iam_app_user
    driver-class-name: com.mysql.cj.jdbc.Driver
    type: com.mydomain.jdbc.RdsIamAuthDataSource
Using Spring Java config:
@Bean
public DataSource dataSource() {
    PoolConfiguration props = new PoolProperties();
    props.setUrl("jdbc:mysql://dbname.abc123xyz.us-east-1.rds.amazonaws.com/dbschema");
    props.setUsername("iam_dbuser_app");
    props.setDriverClassName("com.mysql.jdbc.Driver");
    return new RdsIamAuthDataSource(props);
}
UPDATE: When using MySQL, you can also decide to use the MariaDB JDBC driver, which has built-in support for IAM authentication:
spring:
  datasource:
    host: dbhost.cluster-xxx.eu-west-1.rds.amazonaws.com
    url: jdbc:mariadb:aurora://${spring.datasource.host}/db?user=xxx&credentialType=AWS-IAM&useSsl&serverSslCert=classpath:rds-combined-ca-bundle.pem
    type: org.mariadb.jdbc.MariaDbPoolDataSource
The above requires the MariaDB and AWS SDK libraries, and needs the CA bundle on the classpath.
I know this is an older question, but after some searching I found a pretty easy way you can now do this using the MariaDB driver. In version 2.5 they added an AWS IAM credential plugin to the driver. It will handle generating, caching, and refreshing the token automatically.
I've tested using Spring Boot 2.3 with the default HikariCP connection pool and it is working fine for me with these settings:
spring.datasource.url=jdbc:mariadb://host/db?credentialType=AWS-IAM&useSsl&serverSslCert=classpath:rds-combined-ca-bundle.pem
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
spring.datasource.username=iam_username
#spring.datasource.password=dont-need-this
spring.datasource.hikari.maxLifetime=600000
Download rds-combined-ca-bundle.pem and put it in src/main/resources so you can connect via SSL.
You will need these dependencies on the classpath as well:
runtime 'org.mariadb.jdbc:mariadb-java-client'
runtime 'com.amazonaws:aws-java-sdk-rds:1.11.880'
The driver uses the standard DefaultAWSCredentialsProviderChain, so make sure you have credentials with a policy allowing IAM DB access available wherever you are running your app.
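For reference, the attached policy looks roughly like this (the account ID, DB resource ID, and user name below are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL01234/iam_username"
    }
  ]
}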
Hope this helps someone else - most examples I found online involved custom code, background threads, etc - but using the new driver feature is much easier!
There is a library that can make this easy. Effectively, you just override the getPassword() method of HikariDataSource: you use STS to assume the role and send a "password" for that role.
<dependency>
    <groupId>io.volcanolabs</groupId>
    <artifactId>rds-iam-hikari-datasource</artifactId>
    <version>1.0.4</version>
</dependency>
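For illustration, the core idea of that approach in a sketch (the class and field names here are illustrative, not the library's actual API; region and host are placeholders):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;
import com.zaxxer.hikari.HikariDataSource;

public class IamTokenHikariDataSource extends HikariDataSource {

    // HikariCP asks the config for the password when it opens a physical
    // connection, so returning a freshly generated IAM token acts as a
    // short-lived, self-refreshing password.
    @Override
    public String getPassword() {
        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new DefaultAWSCredentialsProviderChain())
                .region("us-east-1") // placeholder: your RDS region
                .build();
        return generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname("dbhost.xyz123abc.us-east-1.rds.amazonaws.com") // placeholder host
                .port(3306)
                .userName(getUsername())
                .build());
    }
}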
This may be a bit silly, but I'm really missing the point here. I have a working Spring Boot app; the main Application class is:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
It is from this great and simple example: https://github.com/netgloo/spring-boot-samples/tree/master/spring-boot-mysql-springdatajpa-hibernate
It has a controller for HTTP requests which says:
@RequestMapping("/create")
@ResponseBody
public String create(String email, String name) {
    User user = null;
    try {
        user = new User(email, name);
        userDao.save(user);
    } catch (Exception ex) {
        return "Error creating the user: " + ex.toString();
    }
    return "User successfully created! (id = " + user.getId() + ")";
}
My question is: how do I use it like my simple local application, with some logic in the main() method?
I tried something like
new UserController().create("test@test.test", "Test User");
but it doesn't work, though I get no errors. I just don't need any HTTP requests/responses.
Once your application is running and your UserController is ready to accept requests (if it is annotated with @Controller), you can call the create method by URL, e.g. localhost:8080/create, providing email and name as request parameters.
Yes, you can create a Spring Boot command-line application.
Please add the method below to the Application class of your Spring Boot app:
@Autowired
private UserServiceImpl userServiceImpl;

@Bean
public CommandLineRunner commandLineRunner(ApplicationContext context) {
    return args -> {
        // call any methods of your service class here
        userServiceImpl.create();
    };
}
I have a large Spring application that is set up without XML using only annotations. I have made some changes to this application and have a separate project with what should be almost all the same code. However, in this separate project, Togglz seems to be using some sort of default config instead of the TogglzConfig file I've set up.
The first sign that something was wrong was when I couldn't access the Togglz console. I get a 403 Forbidden error despite my config being set to allow anyone to use it (as shown on the Togglz site). I then did some tests and tried to see a list of features and the list is empty when I call FeatureContext.getFeatureManager().getFeatures() despite my Feature class having several features included. This is why I think it's using some sort of default.
Features.java
public enum Features implements Feature {

    FEATURE1,
    FEATURE2,
    FEATURE3,
    FEATURE4,
    FEATURE5;

    public boolean isActive() {
        return FeatureContext.getFeatureManager().isActive(this);
    }
}
TogglzConfiguration.java
@Component
public class TogglzConfiguration implements TogglzConfig {

    public Class<? extends Feature> getFeatureClass() {
        return Features.class;
    }

    public StateRepository getStateRepository() {
        File properties = [internal call to property file];
        try {
            return new FileBasedStateRepository(properties);
        } catch (Exception e) {
            throw new TogglzConfigException("Error getting Togglz configuration from " + properties + ".", e);
        }
    }

    @Override
    public UserProvider getUserProvider() {
        return new UserProvider() {
            @Override
            public FeatureUser getCurrentUser() {
                return new SimpleFeatureUser("admin", true);
            }
        };
    }
}
SpringConfiguration.java
@EnableTransactionManagement
@Configuration
@ComponentScan(basePackages = { "root package for the entire project" }, excludeFilters =
        @ComponentScan.Filter(type = FilterType.ANNOTATION, value = Controller.class))
public class SpringConfiguration {

    @Bean
    public TransformerFactory transformerFactory() {
        return TransformerFactory.newInstance();
    }

    @Bean
    public DocumentBuilderFactory documentBuilderfactory() {
        return DocumentBuilderFactory.newInstance();
    }

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
My project finds a bunch of other beans set up with the @Component annotation. I don't know if the problem is that this component isn't being picked up at all or if Togglz simply isn't using it for some reason. I tried printing the name of the FeatureManager returned by FeatureContext.getFeatureManager(), and it is FallbackTestFeatureManager, so this seems to confirm my suspicion that it's just not using my config at all.
Does anyone have any ideas on what I can check? I'm flat out of ideas, especially since this is working in an almost identical IntelliJ project on my machine right now. I just can't find what's different about the Togglz setup or the Spring configuration. Thanks in advance for your help.
I finally had my light-bulb moment and solved this problem. In case anyone else has a similar issue: it seems my mistake was having the Togglz testing and JUnit dependencies added to my project without limiting them to the test scope. I overlooked that part of the site.
<!-- Togglz testing support -->
<dependency>
    <groupId>org.togglz</groupId>
    <artifactId>togglz-testing</artifactId>
    <version>2.5.0.Final</version>
    <scope>test</scope>
</dependency>
Without that scope, I assume these were overriding the Togglz configuration I created with a default test configuration and that was causing my issue.
I am writing a Spring Boot (Batch) application that should exit with a specific exit code. One requirement is to return an exit code when the database cannot be connected to.
My approach is to handle this exception as early as possible by explicitly creating a DataSource bean, calling getConnection(), and catching and rethrowing a custom exception that implements ExitCodeGenerator. The configuration is as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    ...

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource")
    public DataSourceProperties dataSourceProps() {
        return new DataSourceProperties();
    }

    @Bean
    @ConfigurationProperties("spring.datasource")
    public DataSource customDataSource(DataSourceProperties props) {
        DataSource ds = props.initializeDataSourceBuilder().create().build();
        try {
            ds.getConnection();
        } catch (SQLException e) {
            throw new DBConnectionException(e); // implements ExitCodeGenerator interface
        }
        return ds;
    }

    ...
}
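For reference, the custom exception referred to above could look like this (a sketch; the exit code value is an arbitrary choice):

import org.springframework.boot.ExitCodeGenerator;

public class DBConnectionException extends RuntimeException implements ExitCodeGenerator {

    public DBConnectionException(Throwable cause) {
        super(cause);
    }

    @Override
    public int getExitCode() {
        return 2; // arbitrary non-zero code reported when startup fails
    }
}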
I want to reuse as much of the Spring Boot autoconfiguration as possible; that's why I use @ConfigurationProperties. I do not know if this is the way to go.
A call to DataSourceProperties.getUrl() returns the configured URL (from my properties file):
spring.datasource.url=jdbc:oracle:....
But why does Spring Boot throw the following exception when I call DataSource.getConnection()?
java.sql.SQLException: The url cannot be null
at java.sql.DriverManager.getConnection(DriverManager.java:649) ~[?:1.8.0_141]
at java.sql.DriverManager.getConnection(DriverManager.java:208) ~[?:1.8.0_141]
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:308) ~[tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:203) ~[tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:735) ~[tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:667) ~[tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:482) [tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.<init>(ConnectionPool.java:154) [tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:118) [tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:107) [tomcat-jdbc-8.5.15.jar:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:131) [tomcat-jdbc-8.5.15.jar:?]
at com.foo.bar.BatchConfiguration.customDataSource(BatchConfiguration.java:xxx) [main/:?]
...
Or do you know some cleaner way of handling this situation?
Thanks
Edit: Spring Boot version is 1.5.4
The error is subtle and lies in this line:
DataSource ds = props.initializeDataSourceBuilder().create().build();
create() creates a new DataSourceBuilder and erases the preconfigured properties.
props.initializeDataSourceBuilder() already returns a DataSourceBuilder with all the properties (url, username, etc.) set, so you only have to add new properties or directly build() it. The solution is removing create():
DataSource ds = props.initializeDataSourceBuilder().build();
In this context, the dataSourceProps() method bean can be removed, too.
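Putting it together, the corrected bean from the question (only create() removed, everything else unchanged):

@Bean
@ConfigurationProperties("spring.datasource")
public DataSource customDataSource(DataSourceProperties props) {
    // build() keeps the url/username/password already bound from spring.datasource.*
    DataSource ds = props.initializeDataSourceBuilder().build();
    try {
        ds.getConnection();
    } catch (SQLException e) {
        throw new DBConnectionException(e); // implements ExitCodeGenerator interface
    }
    return ds;
}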
It looks like you don't set any values on your DataSource.
props.initializeDataSourceBuilder().create().build(); does not apply the values of your properties to your DataSource; it just creates and builds one.
Try setting your values manually using the static DataSourceBuilder. You can get the values from your dataSourceProps bean like this:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    ...

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource")
    public DataSourceProperties dataSourceProps() {
        return new DataSourceProperties();
    }

    @Bean
    @ConfigurationProperties("spring.datasource")
    public DataSource customDataSource(DataSourceProperties props) {
        DataSource ds = DataSourceBuilder.create()
                .driverClassName(dataSourceProps().getDriverClassName())
                .url(dataSourceProps().getUrl())
                .username(dataSourceProps().getUsername())
                .password(dataSourceProps().getPassword())
                .build();
        try {
            ds.getConnection();
        } catch (SQLException e) {
            throw new DBConnectionException(e); // implements ExitCodeGenerator interface
        }
        return ds;
    }

    ...
}