I'm trying to set up a Spring Boot batch project that uses a ResourcelessTransactionManager via Java configuration, but I'm having no luck.
The reason I am trying to do this is that I don't want any state persisted, and I'd prefer not to waste memory on HSQLDB when I don't need it to begin with. I have an existing Spring Batch project that is not using Spring Boot, and it works with no persistence and without HSQLDB.
I'm using this sample project as the base (but with HSQLDB removed) and this other answer as a reference, but I keep getting this exception:
Caused by: org.springframework.boot.autoconfigure.jdbc.DataSourceProperties$DataSourceBeanCreationException: Cannot determine embedded database driver class for database type NONE. If you want an embedded database please put a supported one on the classpath. If you have database settings to be loaded from a particular profile you may need to active it (no profiles are currently active).
at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.determineDriverClassName(DataSourceProperties.java:218) ~[spring-boot-autoconfigure-1.4.0.RELEASE.jar:1.4.0.RELEASE]
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.createDataSource(DataSourceConfiguration.java:42) ~[spring-boot-autoconfigure-1.4.0.RELEASE.jar:1.4.0.RELEASE]
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat.dataSource(DataSourceConfiguration.java:55) ~[spring-boot-autoconfigure-1.4.0.RELEASE.jar:1.4.0.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_73]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_73]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_73]
at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_73]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162) ~[spring-beans-4.3.2.RELEASE.jar:4.3.2.RELEASE]
... 56 common frames omitted
This is what I modified:
@SpringBootApplication
@EnableBatchProcessing
@Configuration
public class SampleBatchApplication {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    protected Tasklet tasklet() {
        return new Tasklet() {
            @Override
            public RepeatStatus execute(StepContribution contribution,
                    ChunkContext context) {
                return RepeatStatus.FINISHED;
            }
        };
    }

    @Bean
    public Job job() throws Exception {
        return this.jobs.get("job").start(step1()).build();
    }

    @Bean
    protected Step step1() throws Exception {
        return this.steps.get("step1").tasklet(tasklet()).build();
    }

    public static void main(String[] args) throws Exception {
        // System.exit is common for Batch applications since the exit code can be
        // used to drive a workflow
        System.exit(SpringApplication
                .exit(SpringApplication.run(SampleBatchApplication.class, args)));
    }

    @Bean
    ResourcelessTransactionManager transactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository getJobRepo() throws Exception {
        return new MapJobRepositoryFactoryBean(transactionManager()).getObject();
    }
}
What do I need to do to make it use the ResourcelessTransactionManager?
EDIT: Added clarity around why I want the ResourcelessTransactionManager to work.
Below are some basic Spring Boot properties for setting up data sources. By looking at the driver class, Boot can infer your DB type and auto-create a DataSource bean.
spring.datasource.driver-class-name
spring.datasource.url
spring.datasource.username
spring.datasource.password
spring.datasource.tomcat.max-active
spring.datasource.tomcat.initial-size
spring.datasource.tomcat.max-idle
The last three properties set up the connection pool in the container.
In your case, explicit DataSource information is missing and no in-memory database is present on the classpath.
Fix the issue by explicitly providing entries in application.properties or by including an in-memory DB (H2, HSQL, etc.) on the classpath.
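For instance, a minimal explicit H2 setup in application.properties might look like this (the values are illustrative, not from the asker's project):

```properties
# In-memory H2 database, declared explicitly
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.username=sa
spring.datasource.password=
```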
Your configuration looks OK for not using any data source (i.e. you have configured a ResourcelessTransactionManager and a MapJobRepository) as long as you don't use EnableAutoConfiguration, but your stack trace indicates Boot is running with EnableAutoConfiguration.
I suspect selectively disabling the data source is not allowed; see this question.
EDIT: I was able to fix the error in your code by adding @SpringBootApplication(exclude={DataSource.class, DataSourceAutoConfiguration.class})
After the bean creation process, the logs dumped this:
Exclusions:
org.apache.tomcat.jdbc.pool.DataSource
org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
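For what it's worth, a common way in Boot 1.4 to get the map-based JobRepository without any DataSource at all is to exclude the datasource auto-configuration and extend DefaultBatchConfigurer, which falls back to a ResourcelessTransactionManager when no DataSource is set. This is a sketch, untested against the asker's exact setup:

```java
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableBatchProcessing
public class SampleBatchApplication extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // Intentionally left empty: with no DataSource set, DefaultBatchConfigurer
        // builds a map-based JobRepository backed by a ResourcelessTransactionManager.
    }
}
```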
Your problem seems to be that no data source is available in your configuration. Spring Batch needs a database to persist its state.
Spring Boot can automatically configure an in-memory DB if you have one on the classpath. The example application you are referring to has HSQL included in its POM here: https://github.com/spring-projects/spring-boot/blob/v1.4.0.RELEASE/spring-boot-samples/spring-boot-sample-batch/pom.xml#L26
So to fix your problem, define access to your database.
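For reference, the relevant dependency in the sample POM looks roughly like this (the version is managed by the Boot parent):

```xml
<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <scope>runtime</scope>
</dependency>
```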
Related
I'm trying to run a Spring Batch jar through SCDF. I use different datasources for reading and writing (both Oracle DB). The datasource I use to write is the primary datasource. I use a custom-built SCDF to include the Oracle driver dependencies. Below is the custom SCDF project location:
dataflow-server-22x
In my local Spring Batch project I implemented DefaultTaskConfigurer to provide my primary datasource. When I run the batch project from the IDE, it runs fine: it reads records from the secondary datasource and writes into the primary datasource. But when I deploy the batch jar to the custom-built SCDF as a task and launch it, I get an error that says,
org.springframework.context.ApplicationContextException: Failed to start bean 'taskLifecycleListener'; nested exception is java.lang.IllegalArgumentException: Invalid TaskExecution, ID 3 not found
When I checked the task execution table (which can be accessed via the primary datasource), the task execution ID is there in the table, but I still get this error. For each run a new task ID is inserted into the TASK_EXECUTION table, yet I get the above error message with the newly inserted task_execution ID.
Below are the project specifics:
Spring-boot-starter-parent : 2.2.5.RELEASE.
Spring-cloud-dataflow : 2.2.0.RELEASE.
I load all of my batch jobs from the Boot main class using an instance of the batch job class, and only the main class (which kickstarts all jobs) contains the @EnableTask annotation. Below is my class structure.
@SpringBootApplication
@EnableScheduling
@EnableTask
public class SpringBootMainApplication {

    @Autowired
    Job1Loader job1Loader;

    @Autowired
    JobLauncher jobLauncher; // used below; was missing from the original snippet

    public static void main(String[] args) {
        SpringApplication.run(SpringBootMainApplication.class, args);
    }

    @Scheduled(cron = "0 */1 * * * ?")
    public void executeJob1Loader() throws Exception {
        JobParameters param = new JobParametersBuilder()
                .addString("JobID",
                        String.valueOf(System.currentTimeMillis()))
                .toJobParameters();
        jobLauncher.run(job1Loader.loadJob1(), param);
    }
}
// Job Config
@Configuration
@EnableBatchProcessing
public class Job1Loader {

    @Autowired
    private JobBuilderFactory jobBuilderFactory; // injected; was called as a method in the snippet

    @Bean
    public Job loadJob1() {
        return jobBuilderFactory.get("JOb1Loader")
                .incrementer(new RunIdIncrementer())
                .flow(step01()) // step01() defined elsewhere in the class
                .end()
                .build(); // return job
    }
}
I use two different datasources in my Spring Batch project, both Oracle (different servers). I marked one of them as primary and used that datasource in my custom implementation of DefaultTaskConfigurer, as below:
@Configuration
public class TaskConfig extends DefaultTaskConfigurer {

    @Autowired
    DatabaseConfig databaseConfig;

    @Override
    public DataSource getTaskDataSource() {
        return databaseConfig.dataSource(); // dataSource() returns the primary DS
    }
}
Below are the properties I use in both the custom SCDF server and the Spring Batch project.
UPDATE - 1
**Spring batch job:**
spring.datasource.jdbc-url=jdbc:oracle:thin:@**MY_PRIMARY_DB**
spring.datasource.username=db_user
spring.datasource.password=db_pwd
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.jdbc-url=jdbc:oracle:thin:@**MY_SECONDARY_DB**
spring.datasource.username=db_user
spring.datasource.password=db_pwd
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
**SCDF custom server:**
spring.datasource.url=jdbc:oracle:thin:@**MY_PRIMARY_DB**
spring.datasource.username=db_user
spring.datasource.password=db_pwd
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
My batch application uses two DB configurations: one to read and one to write, because the source and destination are different.
Since the TASK_EXECUTION tables were created in the MY_PRIMARY_DB database, I pass only the primary DB configuration to SCDF for reading and writing, because both take place in the same DB.
I tried the other answers for this question, but none worked. As I said earlier, any input on this would be of great help.
Thanks.
Instead of overriding the DefaultTaskConfigurer.getTaskDataSource() method as I have done above, I changed the DefaultTaskConfigurer implementation as below. I'm not sure yet why overriding the method getTaskDataSource() is causing the problem. Below is the solution that worked for me.
@Configuration
public class TaskConfig extends DefaultTaskConfigurer {

    Logger logger = LoggerFactory.getLogger(TaskConfig.class);

    @Autowired
    public TaskConfig(@Qualifier("datasource1") DataSource dataSource) {
        super(dataSource); // "datasource1" is a reference to the primary datasource
    }
}
I have implemented some Redis functionality in my Spring Boot 2.1.5 application, and it works fine.
I also want a health check for Redis. If I switch off the Redis server, the health check (actuator/health) hangs forever.
How can I configure a sensible timeout?
I have created a little demo of this problem here:
https://github.com/markuskruse/demo-redis-health-bug
Clone, run, stop redis, check health (wait forever), start redis (health returns).
This is my gradle for redis:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
This is my application.yaml:
spring:
  redis:
    timeout: 5000
    host: localhost
This is my RedisConfig.java
@Configuration
@EnableConfigurationProperties(RedisProperties.class)
public class RedisConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory(
            @Value("${spring.redis.host:localhost}") String redisHost) {
        RedisStandaloneConfiguration redisStandaloneConfiguration =
                new RedisStandaloneConfiguration(redisHost);
        return new LettuceConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public StringRedisTemplate redisTemplate(RedisConnectionFactory connectionFactory) {
        final StringRedisTemplate template = new StringRedisTemplate();
        template.setConnectionFactory(connectionFactory);
        template.afterPropertiesSet();
        return template;
    }
}
According to this issue on github, it is a mere configuration issue:
https://github.com/spring-projects/spring-boot/issues/15542
According to this jira ticket, it should be fixed in spring boot 2.1.4 (I'm on 2.1.5).
https://jira.spring.io/browse/DATAREDIS-918
They mention a workaround that I have tried:
@Bean
public ClientOptions clientOptions() {
    return ClientOptions.builder()
            .timeoutOptions(TimeoutOptions.enabled())
            .build();
}
By itself, it had no effect. I have to inject it somewhere. Googling gave this:
@Bean
LettucePoolingClientConfiguration lettucePoolConfig(ClientOptions options, ClientResources dcr) {
    return LettucePoolingClientConfiguration.builder()
            .clientOptions(options)
            .clientResources(dcr)
            .build();
}
Then I get this:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration]: Factory method 'lettucePoolConfig' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericObjectPoolConfig
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:622)
... 50 more
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericObjectPoolConfig
at org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration$LettucePoolingClientConfigurationBuilder.<init>(LettucePoolingClientConfiguration.java:91)
at org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration.builder(LettucePoolingClientConfiguration.java:50)
at com.ikea.cps.mhs.config.RedisConfig.lettucePoolConfig(RedisConfig.java:50)
at com.ikea.cps.mhs.config.RedisConfig$$EnhancerBySpringCGLIB$$3804d114.CGLIB$lettucePoolConfig$3(<generated>)
at com.ikea.cps.mhs.config.RedisConfig$$EnhancerBySpringCGLIB$$3804d114$$FastClassBySpringCGLIB$$ccabed80.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:363)
at com.ikea.cps.mhs.config.RedisConfig$$EnhancerBySpringCGLIB$$3804d114.lettucePoolConfig(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
... 51 more
Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool2.impl.GenericObjectPoolConfig
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 64 more
I can maybe work around this. But I am thinking that I am doing something (fundamentally) wrong. It should already be fixed.
Edit: I added the commons pool and the error goes away, but health check still hangs forever.
I also tried this below, to no effect.
@Component
public class RedisConfigurer implements LettuceClientConfigurationBuilderCustomizer {

    @Override
    public void customize(LettuceClientConfigurationBuilder builder) {
        builder.clientOptions(ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.enabled(Duration.of(5, SECONDS)))
                .build());
    }
}
It seems that your problem is in your manual connection factory configuration.
If you remove that part, everything should work as you expect.
Otherwise you need to provide a LettuceClientConfiguration as the second argument of the LettuceConnectionFactory constructor, and there you can configure ClientOptions with TimeoutOptions enabled.
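As a sketch of that second approach, applied to the RedisConfig from the question (the 5-second timeout mirrors the asker's spring.redis.timeout value and is illustrative):

```java
@Bean
public LettuceConnectionFactory redisConnectionFactory(
        @Value("${spring.redis.host:localhost}") String redisHost) {
    // Client-level configuration: a command timeout plus explicit TimeoutOptions,
    // so health-check commands fail fast instead of hanging
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(Duration.ofSeconds(5))
            .clientOptions(ClientOptions.builder()
                    .timeoutOptions(TimeoutOptions.enabled())
                    .build())
            .build();
    return new LettuceConnectionFactory(
            new RedisStandaloneConfiguration(redisHost), clientConfig);
}
```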
I found postgresql-embedded and wrote a spring-boot-starter for test purposes based on this project. After adding the starter dependency to a project, it fails on run with the following error:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "user"
application.properties
embedded.postgres.database-name=test
embedded.postgres.username=user
embedded.postgres.password=user
embedded.postgres.port=5433
spring.datasource.url=jdbc:postgresql://localhost:5433/test
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=user
spring.datasource.password=user
spring.jpa.database=POSTGRESQL
The reason is that spring-boot-starter-data-jpa runs before my starter, which starts up the embedded PostgreSQL. Is it possible to set a priority for the starter, or is there another way?
Starter sources: https://github.com/esempla/spring-boot-starter-embedded-postgres
You need to set up a dependency on your bean that starts Postgres from the DataSource bean. You can do so with a BeanFactoryPostProcessor in your starter. You might like to take inspiration from Boot's own AbstractDependsOnBeanFactoryPostProcessor, its concrete subclasses such as MongoClientDependsOnBeanFactoryPostProcessor, and how it's used in auto-configuration.
Spring Boot auto-configuration is working as it is supposed to: it assumes PostgreSQL is already running when the datasource is auto-configured. Since this is a special case where you are using an embedded PostgreSQL that starts from within the same application context, you will not be able to use datasource auto-configuration.
@Configuration
@AutoConfigureAfter({PostgresAutoConfiguration.class})
public class CustomDataSourceConfiguration {

    @Value("${datasource.url}")
    private String dataSourceUrl;

    @Value("${datasource.driver-class}")
    private String dataSourceDriverClass;

    @Value("${datasource.username}")
    private String dataSourceUsername;

    @Value("${datasource.password}")
    private String dataSourcePassword;

    @Bean
    public DataSource dataSource() {
        // Logic to create the DataSource
    }
}
The @AutoConfigureAfter annotation will make sure that the embedded PostgreSQL is loaded before the datasource is created.
Already done; thanks to Andy Wilkinson for the advice. The problem was solved by creating a DependsOnBeanFactoryPostProcessor as follows:
@Order
public class DataSourceDependsOnBeanFactoryPostProcessor extends AbstractDependsOnBeanFactoryPostProcessor {

    public DataSourceDependsOnBeanFactoryPostProcessor(String... dependsOn) {
        super(DataSource.class, LocalContainerEntityManagerFactoryBean.class, dependsOn);
    }
}
and adding this class as a configuration in the auto-configuration class:
@Configuration
protected static class EmbeddedPostgresDependencyConfiguration extends DataSourceDependsOnBeanFactoryPostProcessor {

    public EmbeddedPostgresDependencyConfiguration() {
        super("embeddedPostgres");
    }
}
Now this starter works fine for my test cases. Sources can be found here:
https://github.com/esempla/spring-boot-starter-embedded-postgres
I have a multitenant database in Spring Boot. I store multiple Spring JDBC templates (based on Tomcat data sources, configured manually) in a map (an immutable bean), and I choose the proper data source based on a UUID in the request (one connection pool per database). I have disabled the standard configuration in Spring Boot with:
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
What is the proper way to configure the transaction manager? With a single data source I can use a PlatformTransactionManager, but how should it be done with multiple JDBC templates/data sources in Spring? It would be best if I could set everything up dynamically. Thanks in advance.
Here is a solution for using multiple datasources:
http://www.baeldung.com/spring-data-jpa-multiple-databases
Configure Two DataSources
If you need to configure multiple data sources, you can apply the same tricks described in the previous section. You must, however, mark one of the DataSource beans as @Primary, because various auto-configurations down the road expect to be able to get one by type.
If you create your own DataSource, the auto-configuration backs off. In the example below, we provide the exact same feature set as the auto-configuration provides on the primary data source:
@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSourceProperties fooDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSource fooDataSource() {
    return fooDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("app.datasource.bar")
public BasicDataSource barDataSource() {
    return (BasicDataSource) DataSourceBuilder.create()
            .type(BasicDataSource.class).build();
}
fooDataSourceProperties has to be flagged @Primary so that the database initializer feature uses your copy (should you use that).
app.datasource.foo.type=com.zaxxer.hikari.HikariDataSource
app.datasource.foo.maximum-pool-size=30
app.datasource.bar.url=jdbc:mysql://localhost/test
app.datasource.bar.username=dbuser
app.datasource.bar.password=dbpass
app.datasource.bar.max-total=30
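The snippet above does not answer the transaction manager part of the question directly. A common pattern, sketched here without testing, is to define one DataSourceTransactionManager per data source and select the non-primary one by name in @Transactional:

```java
@Bean
@Primary
public PlatformTransactionManager fooTransactionManager(
        @Qualifier("fooDataSource") DataSource fooDataSource) {
    // Default manager, used when @Transactional names no manager
    return new DataSourceTransactionManager(fooDataSource);
}

@Bean
public PlatformTransactionManager barTransactionManager(
        @Qualifier("barDataSource") DataSource barDataSource) {
    // Selected explicitly with @Transactional("barTransactionManager")
    return new DataSourceTransactionManager(barDataSource);
}
```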
I am trying to update the datasource in Spring Boot when a DB property like the DB name, password, or hostname changes in the Spring configuration file or a custom DB property file. When a property changes, the application has to update on its own by listening for the change.
I was using the Spring actuator /restart endpoint to restart beans once the DB configuration changed, but the user has to explicitly make a POST request to restart. This step has to be avoided by listening for the changes and updating the datasource.
Can you tell me the best way to do this in Spring Boot?
I found a way to update the datasource on the fly.
I gave an external Spring config file containing the DB properties to the application and then refreshed the properties using @RefreshScope on the datasource bean.
A thread monitors the file for changes and makes a call to the actuator refresh() method.
database.properties
dburl=jdbc:postgresql://localhost:5432/dbname
dbusername=user1
dbpassword=userpwd
Creating the datasource:
@Configuration
public class DBPropRefresh {

    @Value("${dburl}")
    private String dbUrl;

    @Value("${dbusername}")
    private String dbUserName;

    @Value("${dbpassword}")
    private String dbPassword;

    @Bean
    @RefreshScope
    public DataSource getDatasource() {
        // DataSourceBuilder.create() is static and the builder needs a final
        // build(); the original "new DatasourceBuilder().create()" would not compile
        return DataSourceBuilder.create()
                .url(dbUrl)
                .username(dbUserName)
                .password(dbPassword)
                .build();
    }
}
Giving external config file to the application,
java -jar myapplication.jar --spring.config.location=database.properties
I have created a Java thread class to monitor database.properties for changes, following https://dzone.com/articles/how-watch-file-system-changes
When there are changes, it makes a call to refreshEndPoint.refresh().
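A self-contained sketch of such a watcher using java.nio's WatchService (the FileWatcher class and awaitChange method are illustrative names, not from the original project; a true result is where refreshEndPoint.refresh() would be called):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

public class FileWatcher {

    private final WatchService watcher;
    private final Path file;

    public FileWatcher(Path file) throws Exception {
        this.file = file.toAbsolutePath();
        this.watcher = FileSystems.getDefault().newWatchService();
        // WatchService watches directories, so register the file's parent
        this.file.getParent().register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
    }

    // Blocks for up to timeoutMs and returns true if the watched file was modified;
    // in the setup above, a true result would trigger refreshEndPoint.refresh()
    public boolean awaitChange(long timeoutMs) throws Exception {
        WatchKey key = watcher.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (key == null) {
            return false;
        }
        boolean changed = false;
        for (WatchEvent<?> event : key.pollEvents()) {
            if (file.getFileName().equals(event.context())) {
                changed = true;
            }
        }
        key.reset();
        return changed;
    }
}
```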
In pom.xml,
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<version>1.5.6.RELEASE</version>
</dependency>
You can use Spring's dynamic data source routing and check if it helps. It's a very old technique, but it might come in handy if it serves your purpose.
Please note, though, that this is data source routing, not configuring a new data source.
https://spring.io/blog/2007/01/23/dynamic-datasource-routing/
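A minimal sketch of the technique (the class name and tenant key are illustrative):

```java
public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Current tenant key for this thread, set e.g. by a request filter
    public static final ThreadLocal<String> CURRENT_TENANT =
            ThreadLocal.withInitial(() -> "default");

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_TENANT.get();
    }
}
```

At startup you populate the router with setTargetDataSources(map) and setDefaultTargetDataSource(...); every getConnection() call is then routed to the data source whose key matches the current tenant.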
In my project I used multitenancy. Basically, I defined several datasources in properties like this:
primary.datasource.url=jdbc:postgresql://localhost:5432/db_name?currentSchema=schema_name
primary.datasource.username=user
primary.datasource.password=password
primary.datasource.driverClassName=org.postgresql.Driver
primary.datasource.driver-class-name=org.postgresql.Driver
secondary.datasource.url=jdbc:postgresql://localhost:5432/other_db?currentSchema=schema
secondary.datasource.username=user
secondary.datasource.password=password
secondary.datasource.driverClassName=org.postgresql.Driver
secondary.datasource.driver-class-name=org.postgresql.Driver
default.datasource.url=jdbc:postgresql://localhost:5432/default_db?currentSchema=public
default.datasource.username=user
default.datasource.password=password
default.datasource.driverClassName=org.postgresql.Driver
default.datasource.driver-class-name=org.postgresql.Driver
then in configuration class defined multiple datasources:
@Bean
@Primary
@ConfigurationProperties(prefix = "primary.datasource")
public DataSource primaryDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
@ConfigurationProperties(prefix = "secondary.datasource")
public DataSource secondaryDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
@ConfigurationProperties(prefix = "default.datasource")
public DataSource defaultDataSource() {
    return DataSourceBuilder.create().build();
}
and configured multitenancy based on this and this article.
Pros:
Easy tenant switching, which can be triggered manually or even configured to trigger on a specific request header (via filters).
Can be configured to switch between schemas or databases.
Happens dynamically (you don't have to restart your beans).
Cons:
You have to define all DB possibilities in the property file.
You have to turn off schema validation because it will go nuts.
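The header-driven tenant switch mentioned in the pros is typically backed by a small ThreadLocal holder of this shape (the names are illustrative, not from the original project); a servlet filter sets the tenant from the request header and clears it in a finally block:

```java
public class TenantContext {

    private static final ThreadLocal<String> CURRENT =
            ThreadLocal.withInitial(() -> "default");

    // Called by a request filter after reading the tenant header
    public static void setTenant(String tenant) {
        CURRENT.set(tenant);
    }

    // Called by the routing datasource to pick its lookup key
    public static String getTenant() {
        return CURRENT.get();
    }

    // Called in the filter's finally block so pooled threads don't leak tenants
    public static void clear() {
        CURRENT.remove();
    }
}
```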