Spring Boot and default database data with MongoDB - Java

I have a Spring Boot project that uses MongoDB, so my pom.xml contains this dependency:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
This lets me access the database through the following repository class:
package it.de.marini.server.dal;
import org.springframework.data.mongodb.repository.MongoRepository;
import it.de.marini.server.model.Role;
public interface RoleRepository extends MongoRepository<Role, String> {
}
I need to initialize data in the MongoDB database, for example by inserting a default Role. What is the best way to do this in the Spring Boot framework?

There are several ways to do this; I suggest a CommandLineRunner.
Try:
@Bean
public CommandLineRunner initConfig(MyRepo repo) {
    return args -> {
        if (repo.count() == 0) { // only seed when no data exists yet
            repo.save(...);
        }
    };
}
Otherwise you can use @PostConstruct to initialize it.
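For example, a minimal @PostConstruct sketch, assuming a RoleRepository like the one in the question and a Role(name, description) constructor (both are assumptions for illustration):

@Component
public class DefaultDataInitializer {

    private final RoleRepository roleRepository;

    public DefaultDataInitializer(RoleRepository roleRepository) {
        this.roleRepository = roleRepository;
    }

    @PostConstruct
    public void initDefaultRoles() {
        // seed a default role only when the collection is still empty
        if (roleRepository.count() == 0) {
            roleRepository.save(new Role("ADMIN", "Default administrator role")); // assumed constructor
        }
    }
}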
If you need something like Liquibase for an RDBMS, check out mongobee: https://github.com/mongobee/mongobee
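A rough mongobee sketch, assuming the setup described in its README (the connection string, database name, and changelog package are placeholders):

@Bean
public Mongobee mongobee() {
    Mongobee runner = new Mongobee("mongodb://localhost:27017/mydb"); // placeholder connection string
    runner.setDbName("mydb");
    runner.setChangeLogsScanPackage("it.de.marini.server.changelog"); // package holding @ChangeLog classes
    return runner;
}

@ChangeLog
public class InitialChangelog {

    @ChangeSet(order = "001", id = "insertDefaultRole", author = "admin")
    public void insertDefaultRole(MongoTemplate mongoTemplate) {
        mongoTemplate.save(new Role("ADMIN", "Default administrator role")); // assumed constructor
    }
}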

As @Jaiwo99 says, I understand there is no standard way to do this. So I decided to do it with Spring Batch: I wrote a batch job that loads my application's Roles and Permissions from CSV files.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public MongoTemplate mongoTemplate;

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public Tasklet defaultRolePermissionTasklet() {
        return new DefaultRolePermissionTasklet();
    }

    public <T> FlatFileItemReader<T> readerFile(String fileName, String[] fields, Class<T> type) {
        FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
        reader.setStrict(false);
        reader.setResource(new ClassPathResource(fileName));
        reader.setLineMapper(new DefaultLineMapper<T>() {
            {
                setLineTokenizer(new DelimitedLineTokenizer() {
                    {
                        setNames(fields);
                        setStrict(false);
                    }
                });
                setFieldSetMapper(new BeanWrapperFieldSetMapper<T>() {
                    {
                        setTargetType(type);
                    }
                });
            }
        });
        return reader;
    }

    @Bean
    public PermissionItemProcessor permissionProcessor() {
        return new PermissionItemProcessor();
    }

    @Bean
    public RoleItemProcessor roleProcessor() {
        return new RoleItemProcessor();
    }

    @Bean
    public Job initAuthenticationData(JobCompletionNotificationListener listener) {
        return jobBuilderFactory.get("initAuthenticationData").incrementer(new RunIdIncrementer()).listener(listener)
                .start(stepPermission())
                .next(stepRole())
                .next(stepDefaultRolePermissions())
                .build();
    }

    @Bean
    public Step stepDefaultRolePermissions() {
        return stepBuilderFactory.get("stepDefaultRolePermissions").tasklet(defaultRolePermissionTasklet()).build();
    }

    @Bean
    public Step stepPermission() {
        MongoItemWriter<Permission> writer = new MongoItemWriter<Permission>();
        writer.setTemplate(mongoTemplate);
        return stepBuilderFactory.get("stepPermission").<Permission, Permission>chunk(20)
                .reader(readerFile("permission-data.csv", new String[] {"name", "description"}, Permission.class))
                .processor(permissionProcessor())
                .writer(writer)
                .build();
    }

    @Bean
    public Step stepRole() {
        MongoItemWriter<Role> writer = new MongoItemWriter<Role>();
        writer.setTemplate(mongoTemplate);
        return stepBuilderFactory.get("stepRole").<Role, Role>chunk(20)
                .reader(readerFile("role-data.csv", new String[] {"name", "description"}, Role.class))
                .processor(roleProcessor())
                .writer(writer)
                .build();
    }
}
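The DefaultRolePermissionTasklet referenced above is not shown in the original post; a minimal sketch of what such a tasklet could look like (the repository and its findByName method are assumptions):

public class DefaultRolePermissionTasklet implements Tasklet {

    @Autowired
    private RoleRepository roleRepository;

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // attach default permissions to a role loaded by the previous steps (lookup method is assumed)
        Role admin = roleRepository.findByName("ADMIN");
        // ... assign the default permissions to the role ...
        roleRepository.save(admin);
        return RepeatStatus.FINISHED;
    }
}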

Finally, there is one more way to initialize data in MongoDB with Spring Boot: you can do it in a configuration class like this:
@Configuration
public class AppConfiguration {

    @Autowired
    public void prepare(ReactiveMongoOperations mongoOperations,
                        UserRepository userRepository) {

        mongoOperations.createCollection("users",
                CollectionOptions.empty()
                        .maxDocuments(1_000)
                        .size(1024 * 8)
                        .capped()).block();

        userRepository
                .insert(List.of(
                        User.builder()
                                .name("Joe Doe")
                                .build()
                ))
                .blockLast();
    }
}
And of course you should check whether the collection already exists, so that you don't try to create it again once the database has already been initialized.
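A minimal sketch of that check, reusing the prepare(...) method shown above (ReactiveMongoOperations.collectionExists returns a Mono<Boolean>):

@Autowired
public void prepare(ReactiveMongoOperations mongoOperations, UserRepository userRepository) {
    // create the capped collection and seed it only if it does not exist yet
    Boolean exists = mongoOperations.collectionExists("users").block();
    if (Boolean.FALSE.equals(exists)) {
        mongoOperations.createCollection("users",
                CollectionOptions.empty()
                        .maxDocuments(1_000)
                        .size(1024 * 8)
                        .capped()).block();

        userRepository
                .insert(List.of(User.builder().name("Joe Doe").build()))
                .blockLast();
    }
}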

Related

How to give different Beans in different classes using spring boot batch

I am trying to load data from SQL Server, apply some transformations, and write it to CSV using the Spring Batch scheduler. It all works fine when everything is in the same class.
This is my code:
package com.abc.tools.bootbatch;
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public DataSource dataSource;

    private static final String qry = "select top 20 colA, colB, colC from ABC";

    private Resource outputResource = new FileSystemResource("output/outputData.csv");

    @Bean
    public DataSource dataSource() {
        final DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(driver_class);
        dataSource.setUrl("db_url");
        dataSource.setUsername(usr);
        dataSource.setPassword(pwd);
        return dataSource;
    }

    @Bean
    ItemReader<Trade> reader() {
        JdbcCursorItemReader<Trade> databaseReader = new JdbcCursorItemReader<>();
        databaseReader.setDataSource(dataSource);
        databaseReader.setSql(qry);
        databaseReader.setRowMapper(new BeanPropertyRowMapper<>(Trade.class));
        return databaseReader;
    }

    @Bean
    public TradeProcessor processor() {
        return new TradeProcessor();
    }

    @Bean
    public FlatFileItemWriter<Trade> writer() {
        // Create writer instance
        FlatFileItemWriter<Trade> writer = new FlatFileItemWriter<>();
        // Set output file location
        writer.setResource(outputResource);
        // All job repetitions should "append" to same output file
        writer.setAppendAllowed(true);
        // Name field values sequence based on object properties
        writer.setLineAggregator(new DelimitedLineAggregator<Trade>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<Trade>() {
                    {
                        setNames(new String[] { "colA", "colB", "colC" });
                    }
                });
            }
        });
        return writer;
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1").<Trade, Trade> chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }

    @Bean
    public Job exportUserJob() {
        return jobBuilderFactory.get("exportUserJob")
                .incrementer(new RunIdIncrementer())
                .flow(step1())
                .end()
                .build();
    }
}
When I separate the processing, loading, and reading into different classes, everything works fine with autowiring, unless I use the batch job. When I use the batch job it gives an error while instantiating the database.
So I removed the autowiring and tried to do something like this:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public DBConfig dbConfig;

    public DataConnection dataconnection = new DataConnection();

    DataReader reader = new DataReader();
    TradeProcessor processor = new TradeProcessor();
    FlatFileWriter flatFileWriter = new FlatFileWriter();

    DataSource ds = dataconnection.getDataSource(dbConfig);

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1").<Trade, Trade> chunk(10)
                .reader(reader.reader(ds))
                .processor(processor.processor())
                .writer(flatFileWriter.writer())
                .build();
    }

    @Bean
    public Job exportUserJob() {
        return jobBuilderFactory.get("exportUserJob")
                .incrementer(new RunIdIncrementer())
                .flow(step1())
                .end()
                .build();
    }
}
This gives: Failed to initialize BatchConfiguration
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'batchConfiguration'
I think I am missing something to tie it all together. I am new to Spring; any help is appreciated.
In your first example, you are autowiring a datasource and declaring a datasource bean in the same class, which is incorrect. In the second example, instead of autowiring DBConfig, you can import it with @Import(DBConfig.class) and autowire the datasource in your job configuration as needed. Here is a typical configuration:
@Configuration
public class DBConfig {

    @Bean
    public DataSource dataSource() {
        final DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(driver_class);
        dataSource.setUrl("db_url");
        dataSource.setUsername(usr);
        dataSource.setPassword(pwd);
        return dataSource;
    }
}

@Configuration
@EnableBatchProcessing
@Import(DBConfig.class)
public class BatchConfiguration {

    @Bean
    ItemReader<Trade> reader(DataSource datasource) {
        // use datasource to configure the reader
    }
}
Since you use Spring Boot, you can remove the DBConfig class, configure the datasource as needed in your application.properties file, and the datasource will be automatically injected into your BatchConfiguration.
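For example, a minimal application.properties sketch (the URL, driver, and credentials are placeholders):

spring.datasource.url=jdbc:sqlserver://localhost:1433;databaseName=mydb
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.datasource.username=usr
spring.datasource.password=pwd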

Spring Batch Bean Placement

Does the placement of beans make a difference when loading them into a scoped context? Is this a bug or a timing-of-instantiation issue?
If I include the @StepScope and @Bean directly in the BatchConfiguration class, everything works seamlessly with step scope. However, if I define another class, say "BatchProcessProcessor" as included below, and mark a method within that other class as a @Bean with @StepScope, it does not resolve properly. The actual symptom in Spring Batch is that step scope is not triggered and the beans are loaded as singletons.
Something about providing the @Bean and @StepScope from another class that is loaded via constructor injection into the BatchConfiguration does not resolve properly.
Format described above, included below:
Main batch configuration class
@Slf4j
@Configuration
@EnableAutoConfiguration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {

    private BatchProcessProcessor processor;

    @Override
    public void setDataSource(DataSource dataSource) {
        // override to not set a datasource even if one exists.
        // initialize will use a Map based JobRepository (instead of database)
    }

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public BatchConfiguration(BatchProcessProcessor processor) {
        this.processor = processor;
    }

    @Bean
    @StepScope
    public ListItemReader<String> reader() {
        List<String> stringList = new ArrayList<>();
        stringList.add("test");
        stringList.add("another test");
        log.info("LOGGING A BUNCH OF STUFF THIS IS UNIQUE" + String.valueOf(System.currentTimeMillis()));
        return new ListItemReader<>(stringList);
    }

    @Bean
    @StepScope
    public CustomWriter writer() {
        return new CustomWriter();
    }

    @Bean
    public Job importUserJob(JobCompletionNotificationListener listener, Step step1) {
        return jobBuilderFactory.get("importUserJob")
                .incrementer(new RunIdIncrementer())
                .listener(listener)
                .flow(step1)
                .end()
                .build();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<String, String> chunk(10)
                .reader(reader())
                .processor(processor.processor())
                .writer(writer()).build();
    }
}
Processor container class
@Component
public class BatchProcessProcessor {

    private MyService service;

    @Autowired
    BatchProcessProcessor(MyService service) {
        this.service = service;
    }

    /**
     * Generate processor utilized for processing
     * @return StringProcessor for testing
     */
    @Bean
    @StepScope
    public DeploymentProcessor processor() {
        return new DeploymentProcessor(service);
    }
}
Actual Processor
@Slf4j
@Component
public class DeploymentProcessor implements ItemProcessor<Deployment, Model> {

    private MyService service;

    @Autowired
    public DeploymentProcessor(MyService service) {
        this.service = service;
    }

    @Override
    public Model process(final Deployment deployment) {
        log.info(String.format("Processing %s details", deployment.getId()));
        Model model = new Model();
        model.setId(deployment.getId());
        return model;
    }
}
As far as I understand, when the BatchConfiguration loads it should inject the BatchProcessProcessor and load the bean with step scope, but that doesn't seem to work.
As I said before, just copy-pasting the @Bean/@StepScope method directly into the BatchConfiguration and returning the same DeploymentProcessor works perfectly and step scope resolves.
Is this a lifecycle issue?
It does not make sense to declare a bean in a class annotated with @Component:
@Component
public class BatchProcessProcessor {

    private MyService service;

    @Autowired // This is correct, you can autowire collaborators
    public BatchProcessProcessor(MyService service) {
        this.service = service;
    }

    @Bean // THIS IS NOT CORRECT
    @StepScope
    public DeploymentProcessor processor() {
        return new DeploymentProcessor(service);
    }
}
You should rather do it in a class annotated with @Configuration. That's why it works when you do it in BatchConfiguration.
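A minimal sketch of that move, keeping the same bean definition (class names follow the question):

@Configuration
public class BatchProcessProcessorConfiguration {

    @Bean
    @StepScope
    public DeploymentProcessor processor(MyService service) {
        // declared in a @Configuration class, so @StepScope is applied as expected
        return new DeploymentProcessor(service);
    }
}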

Can't use CompositeItemWriter: ItemWriter is not an ItemStream

I have the following code:
@Bean
public JdbcBatchItemWriter<QuotationDto> writer1() {
    return new JdbcBatchItemWriterBuilder<QuotationDto>()
            .itemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>())
            .sql("INSERT INTO ...")
            .dataSource(dataSource)
            .build();
}

@Bean
public JdbcBatchItemWriter<QuotationDto> writer2() {
    return new JdbcBatchItemWriterBuilder<QuotationDto>()
            .itemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>())
            .sql("INSERT INTO ...")
            .dataSource(dataSource)
            .build();
}

@Bean
public CompositeItemWriter<QuotationDto> compositeItemWriter() {
    CompositeItemWriter writer = new CompositeItemWriter();
    writer.setDelegates(Arrays.asList(writer1(), writer2()));
    return writer;
}

@Bean
public Step step1() {
    return stepBuilderFactory.get("step1")
            .<QuotationDto, QuotationDto>chunk(5)
            .reader(reader())
            .processor(processor())
            .writer(compositeItemWriter())
            .stream(writer1())
            .stream(writer2())
            .build();
}
IntelliJ flags an error when I set writer1 as a stream, because it does not implement ItemStream.
What am I doing wrong? Does anyone have a solution? I can't find much information about Java-based composite writer configuration.
JdbcBatchItemWriter does not implement ItemStream so it cannot be used as a stream in a chunk-oriented step.
If you want to compose two JDBC item writers, you can create a custom item writer that delegates to a JdbcTemplate. Here is a quick example:
class MyItemWriter implements ItemWriter<QuotationDto> {

    private JdbcTemplate jdbcTemplate;

    public MyItemWriter(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public void write(List<? extends QuotationDto> items) throws Exception {
        for (QuotationDto dto : items) {
            // use jdbcTemplate to batch insert items.
            // can do multiple inserts here, they will be part of
            // the same transaction driven by Spring Batch
        }
    }
}
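A minimal sketch of what the write method could do with two inserts per item (the SQL, table, and getter names are placeholders):

@Override
public void write(List<? extends QuotationDto> items) throws Exception {
    for (QuotationDto dto : items) {
        // both statements run in the same transaction managed by Spring Batch
        jdbcTemplate.update("INSERT INTO quotation (value) VALUES (?)", dto.getValue()); // placeholder mapping
        jdbcTemplate.update("INSERT INTO quotation_audit (value) VALUES (?)", dto.getValue());
    }
}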

Spring Batch doesn't call both ItemProcessor and ItemWriter in chunk-flow

I have a Spring Batch application that gets a file from a Samba server and generates a new file in a different folder on the same server.
However, only the ItemReader is called in the flow.
What is the problem? Thanks.
BatchConfiguration:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends BaseConfiguration {

    @Bean
    public ValeTrocaItemReader reader() {
        return new ValeTrocaItemReader();
    }

    @Bean
    public ValeTrocaItemProcessor processor() {
        return new ValeTrocaItemProcessor();
    }

    @Bean
    public ValeTrocaItemWriter writer() {
        return new ValeTrocaItemWriter();
    }

    @Bean
    public Job importUserJob(JobCompletionNotificationListener listener) throws Exception {
        return jobBuilderFactory()
                .get("importUserJob")
                .incrementer(new RunIdIncrementer())
                .repository(getJobRepository())
                .listener(listener)
                .start(this.step1())
                .build();
    }

    @Bean
    public Step step1() throws Exception {
        return stepBuilderFactory()
                .get("step1")
                .<ValeTroca, ValeTroca>chunk(10)
                .reader(this.reader())
                .processor(this.processor())
                .writer(this.writer())
                .build();
    }
}
BaseConfiguration:
public class BaseConfiguration implements BatchConfigurer {

    @Bean
    @Override
    public PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    @Override
    public SimpleJobLauncher getJobLauncher() throws Exception {
        final SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
        simpleJobLauncher.setJobRepository(this.getJobRepository());
        return simpleJobLauncher;
    }

    @Bean
    @Override
    public JobRepository getJobRepository() throws Exception {
        return new MapJobRepositoryFactoryBean(this.getTransactionManager()).getObject();
    }

    @Bean
    @Override
    public JobExplorer getJobExplorer() {
        MapJobRepositoryFactoryBean repositoryFactory = this.getMapJobRepositoryFactoryBean();
        return new SimpleJobExplorer(repositoryFactory.getJobInstanceDao(), repositoryFactory.getJobExecutionDao(),
                repositoryFactory.getStepExecutionDao(), repositoryFactory.getExecutionContextDao());
    }

    @Bean
    public MapJobRepositoryFactoryBean getMapJobRepositoryFactoryBean() {
        return new MapJobRepositoryFactoryBean(this.getTransactionManager());
    }

    @Bean
    public JobBuilderFactory jobBuilderFactory() throws Exception {
        return new JobBuilderFactory(this.getJobRepository());
    }

    @Bean
    public StepBuilderFactory stepBuilderFactory() throws Exception {
        return new StepBuilderFactory(this.getJobRepository(), this.getTransactionManager());
    }
}
ValeTrocaItemReader:
@Configuration
public class ValeTrocaItemReader implements ItemReader<ValeTroca> {

    @Value(value = "${url}")
    private String url;

    @Value(value = "${user}")
    private String user;

    @Value(value = "${password}")
    private String password;

    @Value(value = "${domain}")
    private String domain;

    @Value(value = "${inputDirectory}")
    private String inputDirectory;

    @Bean
    @Override
    public ValeTroca read() throws MalformedURLException, SmbException, IOException, Exception {
        File tempOutputFile = getInputFile();
        DefaultLineMapper<ValeTroca> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(new DelimitedLineTokenizer() {
            {
                setDelimiter(";");
                setNames(new String[]{"id_participante", "cpf", "valor"});
            }
        });
        lineMapper.setFieldSetMapper(
                new BeanWrapperFieldSetMapper<ValeTroca>() {
                    {
                        setTargetType(ValeTroca.class);
                    }
                });
        FlatFileItemReader<ValeTroca> itemReader = new FlatFileItemReader<>();
        itemReader.setLinesToSkip(1);
        itemReader.setResource(new FileUrlResource(tempOutputFile.getCanonicalPath()));
        itemReader.setLineMapper(lineMapper);
        itemReader.open(new ExecutionContext());
        tempOutputFile.deleteOnExit();
        return itemReader.read();
    }
Sample of ItemProcessor:
public class ValeTrocaItemProcessor implements ItemProcessor<ValeTroca, ValeTroca> {

    @Override
    public ValeTroca process(ValeTroca item) {
        // Do anything
        ValeTroca item2 = item;
        System.out.println(item2.getCpf());
        return item2;
    }
EDIT:
Spring Boot 2.1.2.RELEASE - Spring Batch 4.1.1.RELEASE
Looking at your configuration, here are a couple of notes:
BatchConfiguration looks good. That's a typical job with a single chunk-oriented step.
BaseConfiguration is actually the default configuration you get when using @EnableBatchProcessing without providing a datasource, so this class can be removed.
Adding @Configuration on ValeTrocaItemReader and marking the read() method with @Bean is not correct. This means you are declaring a bean named read of type ValeTroca in your application context. Moreover, your custom reader uses a FlatFileItemReader but adds no value compared to a plain FlatFileItemReader. You can declare your reader as a FlatFileItemReader and configure it as needed (resource, line mapper, etc.). This will also avoid the mistake of opening the execution context in the read method, which should be done when initializing the reader or in the ItemStream#open method if the reader implements ItemStream.
Other than that, I don't see from what you shared why the processor and writer are not called.
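A minimal sketch of the reader declared as a plain FlatFileItemReader bean, as suggested above (the column names reuse the question's; a FileSystemResource with a placeholder path stands in for however the Samba file is obtained):

@Bean
public FlatFileItemReader<ValeTroca> reader() {
    FlatFileItemReader<ValeTroca> itemReader = new FlatFileItemReader<>();
    itemReader.setLinesToSkip(1);
    itemReader.setResource(new FileSystemResource("input/vale-troca.csv")); // placeholder location

    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setDelimiter(";");
    tokenizer.setNames(new String[]{"id_participante", "cpf", "valor"});

    BeanWrapperFieldSetMapper<ValeTroca> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
    fieldSetMapper.setTargetType(ValeTroca.class);

    DefaultLineMapper<ValeTroca> lineMapper = new DefaultLineMapper<>();
    lineMapper.setLineTokenizer(tokenizer);
    lineMapper.setFieldSetMapper(fieldSetMapper);
    itemReader.setLineMapper(lineMapper);
    return itemReader;
}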
SOLVED: The problem was that even though I'm not using any database, Spring Batch, although configured to keep the JobRepository in memory, needs a database (usually H2) to save its configuration tables, jobs, etc.
In this case, the JDBC and H2 dependencies were missing from pom.xml. I just added them to the project and the problem was solved!
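For reference, those dependencies would look roughly like this in pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>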

Convert Message to Job to make it Spring Integration with Batch Processing

I am trying to process a series of files using Spring Integration in a batch fashion. I have this very old XML which tries to convert the messages into jobs:
<int:transformer ref="messageToJobTransformer"/>
<batch-int:job-launching-gateway job-launcher="jobLauncher"/>
The messageToJobTransformer is a class which can convert a Message into a Job. The problem is that I no longer know where this file is, and I don't want an XML config anyway. I want pure Java DSL. Here is my simple config:
return IntegrationFlows.from(Files.inboundAdapter(directory)
                .preventDuplicates()
                .patternFilter("*.txt"))
        .handle(jobLaunchingGw())
        .get();
And here is my bean for the gateway.
@Autowired
private JobLauncher jobLauncher;

@Bean
public MessageHandler jobLaunchingGw() {
    return new JobLaunchingGateway(jobLauncher);
}
EDIT: Updating the batch config class.
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public ItemReader<String> reader(@Value("#{jobParameters['input.file.name']}") String filename) throws MalformedURLException {
        FlatFileItemReader<String> reader = new FlatFileItemReader<String>();
        return reader;
    }

    @Bean
    public Job job() throws MalformedURLException {
        return jobs.get("job").start(step()).build();
    }

    @Bean
    public Step step() throws MalformedURLException {
        return steps.get("step").<String, String> chunk(5).reader(reader())
                .writer(writer()).build();
    }

    @Bean
    public ItemWriter<String> writer(@Value("#{jobParameters['input.file.name']}") String filename) {
        FlatFileItemWriter writer = new FlatFileItemWriter();
        return writer;
    }
}
Your question isn't clear. The JobLaunchingGateway expects JobLaunchRequest as a payload.
Since your Integration Flow begins from the Files.inboundAdapter(directory), I can assume that you have some Job definitions there. So, what you need here is some class which can parse the file and return a JobLaunchRequest.
Something like this from the Spring Batch Reference Manual:
public class FileMessageToJobRequest {

    private Job job;

    private String fileParameterName;

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    public void setJob(Job job) {
        this.job = job;
    }

    @Transformer
    public JobLaunchRequest toRequest(Message<File> message) {
        JobParametersBuilder jobParametersBuilder =
                new JobParametersBuilder();
        jobParametersBuilder.addString(fileParameterName,
                message.getPayload().getAbsolutePath());
        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }
}
After defining that class as a @Bean, you can use it from the .transform() EIP method just before your .handle(jobLaunchingGw()).
UPDATE
@Bean
public FileMessageToJobRequest fileMessageToJobRequest(Job job) {
    FileMessageToJobRequest fileMessageToJobRequest = new FileMessageToJobRequest();
    fileMessageToJobRequest.setJob(job);
    fileMessageToJobRequest.setFileParameterName("file");
    return fileMessageToJobRequest;
}

...

@Bean
public IntegrationFlow flowToBatch(FileMessageToJobRequest fileMessageToJobRequest) {
    return IntegrationFlows
            .from(Files.inboundAdapter(directory)
                    .preventDuplicates()
                    .patternFilter("*.txt"))
            .transform(fileMessageToJobRequest)
            .handle(jobLaunchingGw())
            .get();
}
