I'm working on a Spring Batch project with a simple read, process, write-to-CSV-file flow. There are already many readers and SQL scripts in this project. The problem: when I add a new reader and a new SQL script and start the batch, I get an error. However, when I use the new reader with an old SQL script it works, and likewise when I use an existing reader with the new SQL script it works fine, but not when I use both together.
These are the elements:
/**
* Reader
*/
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import java.io.IOException;
@Component
public class BillDefaultReader extends AbstractBillReader {
public BillDefaultReader(final DataSource dataSource) throws IOException {
super(dataSource);
this.setPreparedStatementSetter(s -> s.setString(1, "DEFAULT"));
}
@Override
protected String getSqlPath() {
return "/sql/get_script1.sql"; // the new SQL script
}
}
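(AbstractBillReader is not shown above; for context, here is a rough sketch of what such a base class might look like, assuming it extends JdbcCursorItemReader and loads its SQL script from the classpath. BillTransmissionRowMapper is a placeholder.)
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.util.StreamUtils;
public abstract class AbstractBillReader extends JdbcCursorItemReader<BillTransmissionDTO> {
    protected AbstractBillReader(final DataSource dataSource) throws IOException {
        this.setDataSource(dataSource);
        this.setSql(loadSql()); // uses the subclass's getSqlPath()
        this.setRowMapper(new BillTransmissionRowMapper()); // placeholder mapper
    }
    // each concrete reader points to its SQL script on the classpath
    protected abstract String getSqlPath();
    private String loadSql() throws IOException {
        try (InputStream in = getClass().getResourceAsStream(getSqlPath())) {
            if (in == null) {
                throw new IOException("SQL script not found on classpath: " + getSqlPath());
            }
            return StreamUtils.copyToString(in, StandardCharsets.UTF_8);
        }
    }
}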
// The job config
@Bean
public Job batchExJob(final Step batchExStep, final ReportGeneratorJobListener reportGeneratorJobListener) {
return this.jobs.get("batchExJob").start(batchExStep).listener(reportGeneratorJobListener).build();
}
@Bean
public Step batchExStep(final ItemReader<BillTransmissionDTO> compositeBillReader,
final ItemProcessor<BillTransmissionDTO, BillTransmissionDTO> compositeProcessor,
final ItemWriter<BillTransmissionDTO> billTransmissionWriter) {
return this.stepBuilderFactory.get("batchExStep").<BillTransmissionDTO, BillTransmissionDTO>chunk(this.commitInterval).reader(compositeBillReader)
.processor(compositeProcessor)
.writer(billTransmissionWriter)
.faultTolerant()
.skipPolicy(this::skipExceptionsPolicy)
.build();
}
@StepScope
@Bean
public ItemProcessor<BillTransmissionDTO, BillTransmissionDTO> compositeProcessor(final IntegrateBillProcessor integrateBillProcessor,
final InitialisationPayEndowmentProcessor initialisationPayEndowmentProcessor) {
final CompositeItemProcessor<BillTransmissionDTO, BillTransmissionDTO> processor = new CompositeItemProcessor<>();
processor.setDelegates(Arrays.asList(integrateBillProcessor, initialisationPayEndowmentProcessor));
return processor;
}
So this is a part of the code (it's translated). It is the same process and the same code for all readers and SQL scripts. Any idea, please?
Thank you!
I'm working with Spring Batch and have a job with two steps: the first step (a tasklet) validates the CSV header, and the second step reads a CSV file and writes to another CSV file, like this:
@Bean
public ClassifierCompositeItemWriter<POJO> classifierCompositeItemWriter() throws Exception {
Classifier<POJO, ItemWriter<? super POJO>> classifier = new ClassiItemWriter(ClassiItemWriter.itemWriter());
return new ClassifierCompositeItemWriterBuilder<POJO>()
.classifier(classifier)
.build();
}
@Bean
public Step readAndWriteCsvFile() throws Exception {
return stepBuilderFactory.get("readAndWriteCsvFile")
.<POJO, POJO>chunk(10000)
.reader(ClassitemReader.itemReader())
.processor(processor())
.writer(classifierCompositeItemWriter())
.build();
}
I used a FlatFileItemReader (in ClassitemReader) and a FlatFileItemWriter (in ClassiItemWriter). Before reading the CSV, I check whether the header of the CSV file is correct via a tasklet, like this:
@Bean
public Step fileValidatorStep() {
return stepBuilderFactory
.get("fileValidatorStep")
.tasklet(fileValidator)
.build();
}
And if so, I process the transformation from the received CSV file to another CSV file.
In the jobBuilderFactory I check whether the ExitStatus coming from the fileValidatorStep tasklet is "COMPLETED" to forward the process to readAndWriteCsvFile(); if it is not "COMPLETED" and the fileValidatorStep tasklet returns ExitStatus "ERROR", the job ends and exits processing.
@Bean
public Job job() throws Exception {
return jobBuilderFactory.get("job")
.incrementer(new RunIdIncrementer())
.start(fileValidatorStep()).on("ERROR").end()
.next(fileValidatorStep()).on("COMPLETED").to(readAndWriteCsvFile())
.end().build();
}
The problem is that when I launch my job, the readAndWriteCsvFile bean runs before the tasklet. The standard reader and writer beans of Spring Batch are always loaded in the life cycle before I can validate the header and check the ExitStatus; the reader still works, reads the file, and puts data in another file without the check, because the beans are loaded during the launch of the job, before any tasklet runs.
How can I launch the readAndWriteCsvFile method after fileValidatorStep?
You don't need a flow job for that, a simple job is enough. Here is a quick example:
import java.util.Arrays;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJobConfiguration {
@Bean
public Step validationStep(StepBuilderFactory stepBuilderFactory) {
return stepBuilderFactory.get("validationStep")
.tasklet(new Tasklet() {
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
if(!isValid()) {
throw new Exception("Invalid file");
}
return RepeatStatus.FINISHED;
}
private boolean isValid() {
// TODO implement validation logic
return false;
}
})
.build();
}
@Bean
public Step readAndWriteCsvFile(StepBuilderFactory stepBuilderFactory) {
return stepBuilderFactory.get("readAndWriteCsvFile")
.<Integer, Integer>chunk(2)
.reader(new ListItemReader<>(Arrays.asList(1, 2, 3, 4)))
.writer(items -> items.forEach(System.out::println))
.build();
}
@Bean
public Job job(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
return jobBuilderFactory.get("job")
.start(validationStep(stepBuilderFactory))
.next(readAndWriteCsvFile(stepBuilderFactory))
.build();
}
public static void main(String[] args) throws Exception {
ApplicationContext context = new AnnotationConfigApplicationContext(MyJobConfiguration.class);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
Job job = context.getBean(Job.class);
jobLauncher.run(job, new JobParameters());
}
}
In this example, if the validationStep fails, the next step will not be executed.
I solved my problem: I annotated the FlatFileItemReader bean method inside the job configuration class with @StepScope. Now this bean is only loaded when I need it. You should also avoid declaring the FlatFileItemReader bean outside the scope of the job.
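For reference, a minimal sketch of such a step-scoped reader bean inside the job configuration class (the resource path, POJO type, and column names are placeholders):
@Bean
@StepScope
public FlatFileItemReader<POJO> csvItemReader() {
    // created only when the step actually runs, not at job launch
    FlatFileItemReader<POJO> reader = new FlatFileItemReader<>();
    reader.setResource(new FileSystemResource("input.csv")); // placeholder path
    reader.setLinesToSkip(1); // skip the header line already checked by the tasklet
    DefaultLineMapper<POJO> lineMapper = new DefaultLineMapper<>();
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[] {"field1", "field2"}); // placeholder columns
    lineMapper.setLineTokenizer(tokenizer);
    BeanWrapperFieldSetMapper<POJO> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
    fieldSetMapper.setTargetType(POJO.class);
    lineMapper.setFieldSetMapper(fieldSetMapper);
    reader.setLineMapper(lineMapper);
    return reader;
}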
I have an existing Spring Batch project that has multiple steps. I want to modify a step so I can stop the job: jobExecution.getStatus() == STOPPED.
My step :
@Autowired
public StepBuilderFactory stepBuilderFactory;
@Autowired
private StepReader reader;
@Autowired
private StepProcessor processor;
@Autowired
private StepWriter writer;
@Autowired
public GenericListener listener;
@Bean
@JobScope
@Qualifier("mystep")
public Step MyStep() throws ReaderException {
return stepBuilderFactory.get("mystep")
.reader(reader.read())
.listener(listener)
.processor(processor)
.writer(writer)
.build();
}
GenericListener implements ItemReadListener, ItemProcessListener and ItemWriteListener, and overrides the before and after methods, which basically write logs.
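A rough sketch of the read-listener part of such a class might look like this (the logger and messages are placeholders):
public class GenericListener implements ItemReadListener<MyObject> {
    private static final Logger log = LoggerFactory.getLogger(GenericListener.class); // placeholder logger
    @Override
    public void beforeRead() {
        log.debug("about to read an item");
    }
    @Override
    public void afterRead(MyObject item) {
        log.debug("read item: {}", item);
    }
    @Override
    public void onReadError(Exception ex) {
        log.error("error while reading", ex);
    }
}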
The focus here is on the StepReader class and its read() method that returns a FlatFileItemReader :
@Component
public class StepReader {
public static final String DELIMITER = "|";
@Autowired
private ClassToAccessProperties classToAccessProperties;
private Logger log = Logger.create(StepReader.class);
@Autowired
private FlatFileItemReaderFactory<MyObject> flatFileItemReaderFactory;
public ItemReader<MyObject> read() throws ReaderException {
try {
String csv = classToAccessProperties.getInputCsv();
FlatFileItemReader<MyObject> reader = flatFileItemReaderFactory.create(csv, getLineMapper());
return reader;
} catch (ReaderException | EmptyInputfileException | IOException e) {
throw new ReaderException(e);
} catch (NoInputFileException e) {
log.info("Oh no !! No input file");
// Here I want to stop the job
return null;
}
}
private LineMapper<MyObject> getLineMapper () {
DefaultLineMapper<MyObject> mapper = new DefaultLineMapper<>();
DelimitedLineTokenizer delimitedLineTokenizer = new DelimitedLineTokenizer();
delimitedLineTokenizer.setDelimiter(DELIMITER);
mapper.setLineTokenizer(delimitedLineTokenizer);
mapper.setFieldSetMapper(new MyObjectFieldSetMapper());
return mapper;
}
}
I tried to implement StepExecutionListener in StepReader, but with no luck; I think because the reader() method of the step builder expects an ItemReader from reader.read() and doesn't care about the rest of the class.
I'm looking for ideas or a solution to be able to stop the entire job (not fail it) when NoInputFileException is caught.
I'm looking for ideas or a solution to be able to stop the entire job (not fail it) when NoInputFileException is caught.
This is a common pattern and is described in detail in the "Handling Step Completion When No Input is Found" section of the reference documentation. The example in that section shows how to fail a job when no input file is found, but since you want to stop the job instead of failing it, you can call StepExecution#setTerminateOnly() in the listener and your job will end with status STOPPED. In your example, you would add that listener to the MyStep step.
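For the listener approach, a sketch might look like this (the file-existence check is a placeholder for your NoInputFileException case):
import java.io.File;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
public class StopOnMissingInputListener implements StepExecutionListener {
    @Override
    public void beforeStep(StepExecution stepExecution) {
        if (!new File("input.csv").exists()) { // placeholder check
            // signals the framework to stop the job; it ends with status STOPPED
            stepExecution.setTerminateOnly();
        }
    }
    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return stepExecution.getExitStatus();
    }
}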
However, I would suggest adding a pre-validation step and stopping the job if there is no file. Here is a quick example:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {
@Autowired
private JobBuilderFactory jobs;
@Autowired
private StepBuilderFactory steps;
@Bean
public Step fileValidationStep() {
return steps.get("fileValidationStep")
.tasklet((contribution, chunkContext) -> {
// TODO add code to check if the file exists
System.out.println("file not found");
chunkContext.getStepContext().getStepExecution().setTerminateOnly();
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public Step fileProcessingStep() {
return steps.get("fileProcessingStep")
.tasklet((contribution, chunkContext) -> {
System.out.println("processing file");
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public Job job() {
return jobs.get("job")
.start(fileValidationStep())
.next(fileProcessingStep())
.build();
}
public static void main(String[] args) throws Exception {
ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
Job job = context.getBean(Job.class);
JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
System.out.println("Job status: " + jobExecution.getExitStatus().getExitCode());
}
}
The example prints:
file not found
Job status: STOPPED
Hope this helps.
Before the Spring Batch job runs, I have an import table which contains all the items that need importing into our system. At this point it is verified to contain only items that do not exist in our system.
Next I have a Spring Batch Job, which reads from this import table using JpaPagingItemReader.
After work is done, it writes to db using ItemWriter.
I run with page-size and chunk-size at 10000.
Now this works absolutely fine when running on MySQL InnoDB. I can even use multiple threads and everything works fine.
But now we are migrating to PostgreSQL, and the same batch job runs into a very strange problem.
What happens is that it tries to insert duplicates into our system. These are naturally rejected by the unique index constraints and an error is thrown.
Since the import table is verified to contain only non-existing items before the batch job starts, the only reason I can think of is that the JpaPagingItemReader reads some rows multiple times from the import table when I run on Postgres. But why would it do that?
I have experimented with a lot of settings. Turning chunk and page size down to around 100 only makes the import slower, but I still get the same error. Running single-threaded instead of multi-threaded only makes the error happen slightly later.
So what on earth could be the reason for my JpaPagingItemReader reading the same items multiple times, only on PostgreSQL?
The select statement backing the reader is simple; it's a NamedQuery:
@NamedQuery(name = "ImportDTO.findAllForInsert",
query = "select h from ImportDTO h where h.toBeImported = true")
Please also note that the toBeImported flag will not be altered by the batch job at all during runtime, so the results of this query should always be the same before, during, and after the batch job.
Any insights, tips or help is greatly appreciated!
Here is Batch Config code:
import javax.persistence.EntityManagerFactory;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.item.database.JpaPagingItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.core.task.TaskExecutor;
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Autowired
private OrganizationItemWriter organizationItemWriter;
@Autowired
private EntityManagerFactory entityManagerFactory;
@Autowired
private OrganizationUpdateProcessor organizationUpdateProcessor;
@Autowired
private OrganizationInsertProcessor organizationInsertProcessor;
private Integer organizationBatchSize = 10000;
private Integer organizationThreadSize = 3;
private Integer maxThreadSize = organizationThreadSize;
@Bean
public SimpleJobLauncher jobLauncher(JobRepository jobRepository) {
SimpleJobLauncher launcher = new SimpleJobLauncher();
launcher.setJobRepository(jobRepository);
return launcher;
}
@Bean
public JpaPagingItemReader<ImportDTO> findNewImportsToImport() throws Exception {
JpaPagingItemReader<ImportDTO> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
JpaQueryProviderImpl<ImportDTO> jpaQueryProvider = new JpaQueryProviderImpl<>();
jpaQueryProvider.setQuery("ImportDTO.findAllForInsert");
databaseReader.setQueryProvider(jpaQueryProvider);
databaseReader.setPageSize(organizationBatchSize);
// must be set to false if multi threaded
databaseReader.setSaveState(false);
databaseReader.afterPropertiesSet();
return databaseReader;
}
@Bean
public JpaPagingItemReader<ImportDTO> findImportsToUpdate() throws Exception {
JpaPagingItemReader<ImportDTO> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
JpaQueryProviderImpl<ImportDTO> jpaQueryProvider = new JpaQueryProviderImpl<>();
jpaQueryProvider.setQuery("ImportDTO.findAllForUpdate");
databaseReader.setQueryProvider(jpaQueryProvider);
databaseReader.setPageSize(organizationBatchSize);
// must be set to false if multi threaded
databaseReader.setSaveState(false);
databaseReader.afterPropertiesSet();
return databaseReader;
}
@Bean
public OrganizationItemWriter writer() throws Exception {
return organizationItemWriter;
}
@Bean
public StepExecutionNotificationListener stepExecutionListener() {
return new StepExecutionNotificationListener();
}
@Bean
public ChunkExecutionListener chunkListener() {
return new ChunkExecutionListener();
}
@Bean
public TaskExecutor taskExecutor() {
SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
taskExecutor.setConcurrencyLimit(maxThreadSize);
return taskExecutor;
}
@Bean
public Job importOrganizationsJob(JobCompletionNotificationListener listener) throws Exception {
return jobBuilderFactory.get("importAndUpdateOrganizationJob")
.incrementer(new RunIdIncrementer())
.listener(listener)
.start(importNewOrganizationsFromImports())
.next(updateOrganizationsFromImports())
.build();
}
@Bean
public Step importNewOrganizationsFromImports() throws Exception {
return stepBuilderFactory.get("importNewOrganizationsFromImports")
.<ImportDTO, Organization> chunk(organizationBatchSize)
.reader(findNewImportsToImport())
.processor(organizationInsertProcessor)
.writer(writer())
.taskExecutor(taskExecutor())
.listener(stepExecutionListener())
.listener(chunkListener())
.throttleLimit(organizationThreadSize)
.build();
}
@Bean
public Step updateOrganizationsFromImports() throws Exception {
return stepBuilderFactory.get("updateOrganizationsFromImports")
.<ImportDTO, Organization> chunk(organizationBatchSize)
.reader(findImportsToUpdate())
.processor(organizationUpdateProcessor)
.writer(writer())
.taskExecutor(taskExecutor())
.listener(stepExecutionListener())
.listener(chunkListener())
.throttleLimit(organizationThreadSize)
.build();
}
}
You need to add an ORDER BY clause to the select statement. A paging reader executes a separate query for each page, and without an explicit ORDER BY, PostgreSQL gives no guarantee that rows come back in the same order from one query to the next, so consecutive pages can overlap (some rows are read twice) while other rows are skipped entirely. MySQL/InnoDB happened to return rows in a stable order, which is presumably why the problem only showed up after the migration.
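For example, the named query from the question could sort on a unique column (here h.id is assumed to be the entity's primary key):
// Same named query with a deterministic sort; without it, page N and
// page N+1 can overlap on PostgreSQL.
@NamedQuery(name = "ImportDTO.findAllForInsert",
query = "select h from ImportDTO h where h.toBeImported = true order by h.id")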
I have just started learning Apache Camel. I understood the basics of Routes and Components. Now I want to give it a try by connecting to an Oracle database, reading records from one particular table, and writing those records to a file using the File component. To read from the database I assume I need to use the JDBC component and give it the dataSourceName.
However, I couldn't find any info on how to create a dataSource using Camel. All the info I found on this topic uses Spring DSL examples. I don't use Spring, and I just need to test this with a simple standalone Java application.
I am using JDK7u25 with Apache Camel 2.12.1.
Can someone please post a sample that reads from the Oracle table and writes to a file?
[EDIT]
After checking several solutions on the web, I came to know about the following two approaches.
First approach: running Camel as a standalone application. Here is my code:
import javax.sql.DataSource;
import org.apache.camel.main.Main;
import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.dbcp.BasicDataSource;
public class JDBCExample {
private Main main;
public static void main(String[] args) throws Exception {
JDBCExample example = new JDBCExample();
example.boot();
}
public void boot() throws Exception {
// create a Main instance
main = new Main();
// enable hangup support so you can press ctrl + c to terminate the JVM
main.enableHangupSupport();
String url = "jdbc:oracle:thin:@MYSERVER:1521:myDB";
DataSource dataSource = setupDataSource(url);
// bind dataSource into the registry
main.bind("myDataSource", dataSource);
// add routes
main.addRouteBuilder(new MyRouteBuilder());
// run until you terminate the JVM
System.out.println("Starting Camel. Use ctrl + c to terminate the JVM.\n");
main.run();
}
class MyRouteBuilder extends RouteBuilder {
public void configure() {
String dst = "C:/Local Disk E/TestData/Destination";
from("direct:myTable")
.setBody(constant("select * from myTable"))
.to("jdbc:myDataSource")
.to("file:" + dst);
}
}
private DataSource setupDataSource(String connectURI) {
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUsername("sa");
ds.setPassword("devon1");
ds.setUrl(connectURI);
return ds;
}
}
Second approach, as mentioned by Claus Ibsen. Here is the code again:
import javax.sql.DataSource;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.main.Main;
import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.dbcp.BasicDataSource;
public class JDBCExample {
private Main main;
public static void main(String[] args) throws Exception {
String url = "jdbc:oracle:thin:@MYSERVER:1521:myDB";
DataSource dataSource = setupDataSource(url);
SimpleRegistry reg = new SimpleRegistry();
reg.put("myDataSource", dataSource);
CamelContext context = new DefaultCamelContext(reg);
context.addRoutes(new JDBCExample().new MyRouteBuilder());
context.start();
Thread.sleep(5000);
context.stop();
}
class MyRouteBuilder extends RouteBuilder {
public void configure() {
String dst = "C:/Local Disk E/TestData/Destination";
from("direct:myTable")
.setBody(constant("select * from myTable"))
.to("jdbc:myDataSource")
.to("file:" + dst);
}
}
private static DataSource setupDataSource(String connectURI) {
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUsername("sa");
ds.setPassword("devon1");
ds.setUrl(connectURI);
return ds;
}
}
But in both cases I am getting the below exception:
Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: jdbc://myDataSource due to: No component found with scheme: jdbc
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:534)
at org.apache.camel.util.CamelContextHelper.getMandatoryEndpoint(CamelContextHelper.java:63)
at org.apache.camel.model.RouteDefinition.resolveEndpoint(RouteDefinition.java:192)
at org.apache.camel.impl.DefaultRouteContext.resolveEndpoint(DefaultRouteContext.java:106)
at org.apache.camel.impl.DefaultRouteContext.resolveEndpoint(DefaultRouteContext.java:112)
at org.apache.camel.model.SendDefinition.resolveEndpoint(SendDefinition.java:61)
at org.apache.camel.model.SendDefinition.createProcessor(SendDefinition.java:55)
at org.apache.camel.model.ProcessorDefinition.makeProcessor(ProcessorDefinition.java:500)
at org.apache.camel.model.ProcessorDefinition.addRoutes(ProcessorDefinition.java:213)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:909)
... 12 more
[Thread-0] INFO org.apache.camel.main.MainSupport$HangupInterceptor - Received hang up - stopping the main instance.
There is the SQL example which shows how to set up a DataSource:
http://camel.apache.org/sql-example.html
Yes, that example uses Spring XML. But how you set up the DataSource can be done in Java code as well. You then need to register the DataSource in the Camel registry.
For example you can use a JndiRegistry or the SimpleRegistry. The latter is easier.
Here is some pseudo code showing the principle: create a registry, add your beans to it, and then provide the registry to the constructor of DefaultCamelContext.
SimpleRegistry registry = new SimpleRegistry();
// code to create data source here
DataSource ds = ...
registry.put("myDataSource", ds);
CamelContext camel = new DefaultCamelContext(registry);
So silly me! I had not included camel-jdbc-2.12.1.jar in the CLASSPATH. Now the above examples work.
Spring was mentioned there just because it is a very useful paradigm for working with a DB (mainly because of the templates introduced by the Spring Framework). Of course you can hook up a standard JDBC connection and implement the DAO yourself - nothing wrong with that.
I want to read from the database and write the records to a file using Camel. Below is my code:
import javax.sql.DataSource;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.commons.dbcp.BasicDataSource;
public class JDBCExampleSimpleRegistry {
public static void main(String[] args) throws Exception {
final String url = "jdbc:oracle:thin:@MYSERVER:1521:myDB";
DataSource dataSource = setupDataSource(url);
SimpleRegistry reg = new SimpleRegistry();
reg.put("myDataSource", dataSource);
CamelContext context = new DefaultCamelContext(reg);
context.addRoutes(new JDBCExampleSimpleRegistry().new MyRouteBuilder());
context.start();
Thread.sleep(5000);
context.stop();
}
class MyRouteBuilder extends RouteBuilder {
public void configure() {
String dst = "C:/Local Disk E/TestData/Destination/?fileName=output.txt";
from("direct:myTable")
.setBody(constant("select * from myTable"))
.to("jdbc:myDataSource")
.to("file://" + dst);
}
}
private static DataSource setupDataSource(String connectURI) {
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUsername("sa");
ds.setPassword("devon1");
ds.setUrl(connectURI);
return ds;
}
}
The above program works fine and the CamelContext shuts down elegantly. However, the target file is not created. What am I doing wrong?
I am a newbie to Apache Camel, so I appreciate any help. I am using JDK7 with Apache Camel 2.12.1 and am not using Spring.
You can take a look at the SQL example: http://camel.apache.org/sql-example.html - then, to write to a file, just send to a file endpoint instead of calling the bean as the SQL example does.
If you want to use the JDBC component, note that it doesn't have a built-in consumer, so you need to trigger the route using a timer or a Quartz scheduler to run the route every X amount of time, as sketched below.
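For example, a sketch of the same route triggered by a timer instead of direct: (assuming the same myDataSource registry entry; the output directory is a placeholder):
// fire the route once instead of waiting on a direct: endpoint that nothing calls
from("timer://runQuery?repeatCount=1")
    .setBody(constant("select * from myTable"))
    .to("jdbc:myDataSource")
    // the JDBC component returns a List<Map<String, Object>>; convert it
    // to a String for the simplest possible file output
    .convertBodyTo(String.class)
    .to("file://C:/Local Disk E/TestData/Destination?fileName=output.txt");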