New here, first post. I am trying to connect to MongoDB using Spring Boot but I am getting an SQLException. Any suggestions? Why do I get SQL exceptions in a MongoDB configuration?
@Configuration
public class ApplicationConfig {

    // Dependencies referenced below; declared here so the snippet is complete
    @Autowired
    private MongoTemplate mongoTemplate;
    @Autowired
    private StepBuilderFactory stepBuilderFactory;
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Bean
    public MongoItemReader<MongoDBEntity> reader() {
        System.out.println("Reader");
        MongoItemReader<MongoDBEntity> reader = new MongoItemReader<>();
        reader.setTemplate(mongoTemplate);
        reader.setQuery("{}");
        reader.setTargetType(MongoDBEntity.class);
        reader.setSort(new HashMap<String, Sort.Direction>() {
            {
                put("_id", Sort.Direction.ASC);
            }
        });
        return reader;
    }

    @Bean
    public FlatFileItemWriter<MongoDBEntity> writer() {
        System.out.println("Writer");
        FlatFileItemWriter<MongoDBEntity> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("c://outputs//temp.all.csv"));
        writer.setLineAggregator(new DelimitedLineAggregator<MongoDBEntity>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<MongoDBEntity>() {
                    {
                        setNames(new String[] { "id", "name" });
                    }
                });
            }
        });
        return writer;
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<MongoDBEntity, MongoDBEntity>chunk(10)
                .reader(reader())
                .writer(writer())
                .build();
    }

    @Bean
    public Job exportUserJob() {
        return jobBuilderFactory.get("exportUserJob")
                .incrementer(new RunIdIncrementer())
                .flow(step1())
                .end()
                .build();
    }

    @Bean
    public CustomConversions mongoCustomConversions() {
        return new CustomConversions(Collections.emptyList());
    }
}
Is there anything I am missing? Why am I getting an SQL exception with Mongo? I checked the pom file; there are no references to Oracle etc.
Thanks guys, sorry, it was my mistake: while creating the file I used a previous pom file which had some leftover references. Once I removed them and added the right version of the Mongo jars, the issue was fixed.
Thanks again.
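For anyone hitting the same thing: Spring Boot tries to auto-configure a DataSource as soon as a JDBC driver is on the classpath, which is how a leftover pom entry turns into SQL exceptions in a Mongo-only app. A minimal sketch of a stopgap while the pom is being cleaned up (the application class name here is hypothetical; note that with no DataSource bean at all, Spring Batch 4 falls back to its in-memory job repository):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Disables the DataSource auto-configuration that a stray JDBC dependency triggers.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class BatchExportApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchExportApplication.class, args);
    }
}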
I am trying to read multiple CSV files present at "https://raw.githubusercontent.com/Shrutika09/SpringBatchTemplateUploaderPOC/main/order-data-*.csv" and insert them into a database in parallel using Spring Batch.
When I use the URL of a single CSV file (https://raw.githubusercontent.com/Shrutika09/SpringBatchTemplateUploaderPOC/main/order-data-1.csv), all records are read and inserted into the database.
But when I try to read all files with a particular naming pattern (https://raw.githubusercontent.com/Shrutika09/SpringBatchTemplateUploaderPOC/main/order-data-*.csv), it doesn't recognize the files and hence doesn't work as expected.
Is there any way to read all files matching a particular naming pattern from a GitHub location?
I am using a Spring Batch Partitioner.
Partitioner:
@Bean
public Partitioner partitioner() throws Exception {
    System.out.println("In Partitioner");
    MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
    PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
    partitioner.setResources(resolver.getResources("https://raw.githubusercontent.com/Shrutika09/SpringBatchTemplateUploaderPOC/main/order-data-*.csv"));
    partitioner.partition(5); // note: the step's execution splitter calls partition(gridSize) itself; this manual call has no effect
    return partitioner;
}
Reader:
@Bean
@StepScope
public FlatFileItemReader<Orders> reader(@Value("#{stepExecutionContext['fileName']}") String path)
        throws MalformedURLException {
    System.out.println("In Reader: " + path);
    FlatFileItemReader<Orders> reader = new FlatFileItemReader<>();
    reader.setResource(new UrlResource(path));
    reader.setLineMapper(new DefaultLineMapper<Orders>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "id", "firstName", "lastName" });
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Orders>() {
                {
                    setTargetType(Orders.class);
                }
            });
        }
    });
    return reader;
}
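PathMatchingResourcePatternResolver can only expand a * pattern where the protocol lets it enumerate a directory (classpath:, file:, jar:); a plain https: URL offers no directory listing, so order-data-*.csv matches nothing. A workaround is to build the resource array explicitly. A minimal sketch, assuming the files follow the order-data-N.csv numbering and the file count is known up front (the helper name is hypothetical):

import java.net.MalformedURLException;
import org.springframework.core.io.Resource;
import org.springframework.core.io.UrlResource;

// Builds the resource array by hand, since wildcard expansion over https is not possible.
private Resource[] remoteCsvResources(int fileCount) throws MalformedURLException {
    Resource[] resources = new Resource[fileCount];
    for (int i = 0; i < fileCount; i++) {
        resources[i] = new UrlResource(
                "https://raw.githubusercontent.com/Shrutika09/SpringBatchTemplateUploaderPOC/main/order-data-" + (i + 1) + ".csv");
    }
    return resources;
}

The partitioner would then call partitioner.setResources(remoteCsvResources(5)) instead of resolver.getResources(...).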
I have 2 jobs, each one with 2 steps (each with a reader, a processor, and a writer).
All is working well, but when I launch job no. 1 (from the command line with --spring.batch.job.names=job1Name), all the ItemReaders are called (the ItemReaders from job no. 1 and job no. 2).
The log looks like this:
start reader 1
start reader 2
start reader 3
start reader 4
From this (very simplified) code for job 1:
@Configuration
public class Job1Class {
    ...
    @Bean
    public @NonNull Job job1() {
        return jobBuilder.get("job1Name")
                .start(step1())
                .next(step2())
                .build();
    }

    @Bean
    public @NonNull Step step1() {
        return stepBuilder.get("step1")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader1())
                .processor(processor1())
                .writer(writer1())
                .build();
    }

    @Bean
    public @NonNull Step step2() {
        return stepBuilder.get("step2")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader2())
                .processor(processor2())
                .writer(writer2())
                .build();
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader1() {
        log.debug("start reader 1");
        //code
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader2() {
        log.debug("start reader 2");
        //code
    }
    ...
}
And the same for job 2:
@Configuration
public class Job2Class {
    ...
    @Bean
    public @NonNull Job job2() {
        return jobBuilder.get("job2Name")
                .start(step3())
                .next(step4())
                .build();
    }

    @Bean
    public @NonNull Step step3() {
        return stepBuilder.get("step3")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader3())
                .processor(processor3())
                .writer(writer3())
                .build();
    }

    @Bean
    public @NonNull Step step4() {
        return stepBuilder.get("step4")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader4())
                .processor(processor4())
                .writer(writer4())
                .build();
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader3() {
        log.debug("start reader 3");
        //code
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader4() {
        log.debug("start reader 4");
        //code
    }
    ...
}
Am I missing something?
Thanks for your help.
When you start your Spring Boot application, all beans are created and added to the application context (i.e. the bean definition methods are called); that's why you see the log messages. But that does not mean all readers will be executed: only those of the specific job you launch will be called at runtime.
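If the eager instantiation itself is a concern (expensive reader setup rather than just log noise), step-scoping the reader beans defers their creation until the owning step actually runs. A minimal sketch using the simplified names from the question (the ListItemReader body is a stand-in for the real //code):

import java.util.List;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;

@Bean
@StepScope // only a scoped proxy is registered at startup; this method body runs when step1 starts
public ItemReader<MyClass> reader1() {
    log.debug("start reader 1"); // now logged at step execution time, not while the context is being built
    return new ListItemReader<>(List.of(new MyClass()));
}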
I've tried to upgrade Spring Boot to version 2.2.4.RELEASE. Everything is fine except a problem with CompositeHealthIndicator, which is deprecated.
I have this bean method
@Autowired
private HealthAggregator healthAggregator;

@Bean
public HealthIndicator solrHealthIndicator() {
    CompositeHealthIndicator composite = new CompositeHealthIndicator(this.healthAggregator);
    composite.addHealthIndicator("solr1", createHealthIndicator(firstHttpSolrClient()));
    composite.addHealthIndicator("solr2", createHealthIndicator(secondHttpSolrClient()));
    composite.addHealthIndicator("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return composite;
}
private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
That registers a HealthIndicator for 3 instances of Solr (2 for indexing, 1 for querying). Everything worked fine until the Spring Boot update. After the update, the method CompositeHealthIndicator.addHealthIndicator is no longer present and the whole class is marked as deprecated.
The class created in createHealthIndicator looks like this:
public class CustomSolrHealthIndicator extends SolrHealthIndicator {

    private final SolrClient solrClient;

    public CustomSolrHealthIndicator(SolrClient solrClient) {
        super(solrClient);
        this.solrClient = solrClient;
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        if (!(this.solrClient instanceof HttpSolrClient)) {
            super.doHealthCheck(builder);
            return; // without this, the cast below would fail for non-HTTP clients
        }
        HttpSolrClient httpSolrClient = (HttpSolrClient) this.solrClient;
        if (StringUtils.isBlank(httpSolrClient.getBaseURL())) {
            return;
        }
        super.doHealthCheck(builder);
    }
}
Is there an easy way to port the old registration of the Solr instances I want to health-check (up or down) to Spring Boot 2.2.x?
EDIT:
I have tried this:
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = Maps.newLinkedHashMap();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}
private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
The CustomSolrHealthIndicator is unchanged from its original state.
But I cannot create that bean: when calling createHealthIndicator I am getting a NoClassDefFoundError.
Does anyone know where the problem is?
Looks like you can just use CompositeHealthContributor. It's not much different from what you have already; it appears something like this would work. You could also override the functionality to add them one at a time, which might be preferable if you have a large amount of configuration. There shouldn't be any harm in either approach.
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = new LinkedHashMap<>();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}
Instead of the deprecated CompositeHealthIndicator#addHealthIndicator, use the constructor that takes a map:
@Bean
public HealthIndicator solrHealthIndicator() {
    Map<String, HealthIndicator> healthIndicators = new HashMap<>();
    healthIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    healthIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    healthIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return new CompositeHealthIndicator(this.healthAggregator, healthIndicators);
}
We have a Spring Integration DSL pipeline connected to GCP Pub/Sub and things "work": the data is received and processed as defined in the pipeline, using a collection of Function implementations and .handle().
The problem we have (and why I put "work" in quotes) is that, in some handlers, when some of the data isn't found in the companion database, we raise an IllegalStateException, which forces the message to be reprocessed (along the way, another service may complete the companion database, and the function will then work). This exception never shows up anywhere.
We tried to capture the content of the errorHandler, but we really can't find the proper way of doing it programmatically (no XML).
Our Functions have something like this:
Record record = recordRepository.findById(incomingData).orElseThrow(() -> new IllegalStateException("Missing information: " + incomingData));
This IllegalStateException is the one that never appears anywhere in the logs.
Also, maybe it's worth mentioning that we have our channels defined as
@Bean
public DirectChannel cardInputChannel() {
    return new DirectChannel();
}

@Bean
public PubSubInboundChannelAdapter cardChannelAdapter(
        @Qualifier("cardInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, SUBSCRIPTION_NAME);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO);
    adapter.setPayloadType(CardDto.class);
    return adapter;
}
I am not familiar with the adapter, but I just looked at the code and it looks like it just nacks the message and doesn't log anything.
You can add an Advice to the handler's endpoint to capture and log the exception:
.handle(..., e -> e.advice(exceptionLoggingAdvice))
@Bean
public MethodInterceptor exceptionLoggingAdvice() {
    return invocation -> {
        try {
            return invocation.proceed();
        }
        catch (Exception thrown) {
            // log it
            throw thrown;
        }
    };
}
EDIT
@SpringBootApplication
public class So57224614Application {

    public static void main(String[] args) {
        SpringApplication.run(So57224614Application.class, args);
    }

    @Bean
    public IntegrationFlow flow(MethodInterceptor myAdvice) {
        return IntegrationFlows.from(() -> "foo", endpoint -> endpoint.poller(Pollers.fixedDelay(5000)))
                .handle("crasher", "crash", endpoint -> endpoint.advice(myAdvice))
                .get();
    }

    @Bean
    public MethodInterceptor myAdvice() {
        return invocation -> {
            try {
                return invocation.proceed();
            }
            catch (Exception e) {
                System.out.println("Failed with " + e.getMessage());
                throw e;
            }
        };
    }
}

@Component
class Crasher {

    public void crash(Message<?> msg) {
        throw new RuntimeException("test");
    }
}
and the output:
Failed with nested exception is java.lang.RuntimeException: test
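Applied to the flow from the question, the same advice would sit on the .handle() endpoint that invokes the failing Function. A sketch with hypothetical bean and method names ("cardHandler" and "handle" stand in for whatever the real pipeline uses):

@Bean
public IntegrationFlow cardFlow(MethodInterceptor exceptionLoggingAdvice) {
    // cardInputChannel() is the DirectChannel bean from the question; the advice wraps
    // only this endpoint, and rethrowing keeps the existing nack/reprocess behaviour.
    return IntegrationFlows.from(cardInputChannel())
            .handle("cardHandler", "handle", e -> e.advice(exceptionLoggingAdvice))
            .get();
}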
I'm trying to create a Spring Batch job that reads from a MySQL database and writes the data to different files depending on a value from the database. I am getting an error:
org.springframework.batch.item.WriterNotOpenException: Writer must be open before it can be written to
at org.springframework.batch.item.file.FlatFileItemWriter.write(FlatFileItemWriter.java:255)
Here's my ClassifierCompositeItemWriter:
ClassifierCompositeItemWriter<WithdrawalTransaction> classifierCompositeItemWriter =
        new ClassifierCompositeItemWriter<WithdrawalTransaction>();
classifierCompositeItemWriter.setClassifier(new Classifier<WithdrawalTransaction,
        ItemWriter<? super WithdrawalTransaction>>() {
    @Override
    public ItemWriter<? super WithdrawalTransaction> classify(WithdrawalTransaction wt) {
        ItemWriter<? super WithdrawalTransaction> itemWriter = null;
        if (wt.getPaymentMethod().equalsIgnoreCase("PDDTS")) { // condition
            itemWriter = pddtsWriter();
        } else {
            itemWriter = swiftWriter();
        }
        return itemWriter;
    }
});
As you can see, I only used two file writers for now.
@Bean("pddtsWriter")
private FlatFileItemWriter<WithdrawalTransaction> pddtsWriter()
And
@Bean("swiftWriter")
private FlatFileItemWriter<WithdrawalTransaction> swiftWriter()
I also added them as streams:
@Bean
public Step processWithdrawalTransactions() throws Exception {
    return stepBuilderFactory.get("processWithdrawalTransactions")
            .<WithdrawalTransaction, WithdrawalTransaction>chunk(10)
            .processor(withdrawProcessor())
            .reader(withdrawReader)
            .writer(withdrawWriter)
            .stream(swiftWriter)
            .stream(pddtsWriter)
            .listener(headerWriter())
            .build();
}
Am I doing something wrong?
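One common cause of this exact WriterNotOpenException, offered as a guess since the thread contains no confirmed fix: the step opens the writers registered via .stream(...), but if the classifier's pddtsWriter()/swiftWriter() calls return different instances from those streams (for example because the @Bean methods are private, which prevents Spring's configuration-class proxying from routing the calls to the singletons), the classifier ends up writing to writers that were never opened. A sketch that sidesteps the problem by injecting the same bean instances into both places:

@Bean
public ClassifierCompositeItemWriter<WithdrawalTransaction> withdrawWriter(
        FlatFileItemWriter<WithdrawalTransaction> pddtsWriter,
        FlatFileItemWriter<WithdrawalTransaction> swiftWriter) {
    // The injected writers are the exact beans the step registers with .stream(...),
    // so they are opened before the classifier routes items to them.
    ClassifierCompositeItemWriter<WithdrawalTransaction> writer = new ClassifierCompositeItemWriter<>();
    writer.setClassifier(wt -> "PDDTS".equalsIgnoreCase(wt.getPaymentMethod())
            ? pddtsWriter
            : swiftWriter);
    return writer;
}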