All ItemReaders from each job are initialized at startup - Java

I have 2 jobs, each with 2 steps (each step with a reader, a processor, and a writer).
Everything works well, but when I launch job 1 (from the command line with --spring.batch.job.names=job1Name), all of the ItemReaders are called (the ItemReaders from job 1 and job 2).
The log looks like this:
start reader 1
start reader 2
start reader 3
start reader 4
Here is the (very simplified) code for job 1:
@Configuration
public class Job1Class {
    ...
    @Bean
    public @NonNull Job job1() {
        return jobBuilder.get("job1Name")
                .start(step1())
                .next(step2())
                .build();
    }

    @Bean
    public @NonNull Step step1() {
        return stepBuilder.get("step1")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader1())
                .processor(processor1())
                .writer(writer1())
                .build();
    }

    @Bean
    public @NonNull Step step2() {
        return stepBuilder.get("step2")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader2())
                .processor(processor2())
                .writer(writer2())
                .build();
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader1() {
        log.debug("start reader 1");
        //code
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader2() {
        log.debug("start reader 2");
        //code
    }
    ...
}
And the same for job 2:
@Configuration
public class Job2Class {
    ...
    @Bean
    public @NonNull Job job2() {
        return jobBuilder.get("job2Name")
                .start(step3())
                .next(step4())
                .build();
    }

    @Bean
    public @NonNull Step step3() {
        return stepBuilder.get("step3")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader3())
                .processor(processor3())
                .writer(writer3())
                .build();
    }

    @Bean
    public @NonNull Step step4() {
        return stepBuilder.get("step4")
                .<MyClass, MyClass>chunk(1024)
                .reader(reader4())
                .processor(processor4())
                .writer(writer4())
                .build();
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader3() {
        log.debug("start reader 3");
        //code
    }

    @Bean
    public @NonNull ItemReader<MyClass> reader4() {
        log.debug("start reader 4");
        //code
    }
    ...
}
Am I missing something?
Thanks for your help.

When you start your Spring Boot application, all beans are created and added to the application context (i.e. the bean definition methods are called); that's why you see the log messages. But that does not mean all readers will be executed: only those of the job you launch will be called at runtime.
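If you want to avoid even this eager instantiation, one option (a sketch on my part, not something the original answer requires) is to declare the readers as step-scoped beans, so each reader is only built when its owning step actually runs; the names below mirror the question's simplified code, and ListItemReader is just a stand-in body:
@Bean
@StepScope
public ItemReader<MyClass> reader1() {
    // With @StepScope only a scoped proxy is registered at startup; this body
    // (and the "start reader 1" log line) runs when step1 actually executes.
    log.debug("start reader 1");
    return new ListItemReader<>(Collections.singletonList(new MyClass())); // stand-in reader
}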

Related

With spring-amqp, what's the best way to send a message to rabbitmq from inside a PublisherReturn callback?

I'm using spring-amqp:2.1.6.RELEASE
I have a RabbitTemplate with a PublisherReturn callback.
If I send a message to a routingKey which has no queues bound to it, then the return callback is called correctly. When this happens I want to send the message to an alternative routingKey. However, if I use the RabbitTemplate inside the ReturnCallback, it just hangs. I don't see anything saying whether the message can or can't be sent, the RabbitTemplate doesn't return control to my ReturnCallback, and I don't see any PublisherConfirm either.
If I create a new RabbitTemplate (with the same CachingConnectionFactory)
then it still behaves the same way. My call just hangs up.
If I send a message to a routingKey which does have a queue bound to it,
then the message correctly arrives at the queue. The ReturnCallback is not
called in this scenario.
After some investigation, I've come to the conclusion that the rabbitTemplate and/or connection is blocked until the original message is completely processed.
If I create a second CachingConnectionFactory and RabbitTemplate, and use these in the PublisherReturn callback, then it seems to work fine.
So, here's the question: what is the best way to send a message in a PublisherReturn callback using spring-amqp?
I have searched, but can't find anything that explains how you should do this.
Here are simplified details of what I have:
@Configuration
public class MyConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setPublisherReturns(true);
        // ... other settings left out for brevity
        return connectionFactory;
    }

    @Bean
    @Qualifier("rabbitTemplate")
    public RabbitTemplate rabbitTemplate(ReturnCallbackForAlternative returnCallbackForAlternative) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setReturnCallback(returnCallbackForAlternative);
        // ... other settings left out for brevity
        return rabbitTemplate;
    }

    @Bean
    @Qualifier("connectionFactoryForUndeliverable")
    public ConnectionFactory connectionFactoryForUndeliverable() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        // ... other settings left out for brevity
        return connectionFactory;
    }

    @Bean
    @Qualifier("rabbitTemplateForUndeliverable")
    public RabbitTemplate rabbitTemplateForUndeliverable() {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactoryForUndeliverable());
        // ... other settings left out for brevity
        return rabbitTemplate;
    }
}
Then to send the message I'm using
@Autowired
@Qualifier("rabbitTemplate")
private RabbitTemplate rabbitTemplate;

public void send(Message message) {
    rabbitTemplate.convertAndSend(
            "exchange-name",
            "primary-key",
            message);
}
And the code in the ReturnCallback is
@Component
public class ReturnCallbackForAlternative implements RabbitTemplate.ReturnCallback {

    @Autowired
    @Qualifier("rabbitTemplateForUndeliverable")
    private RabbitTemplate rabbitTemplate;

    @Override
    public void returnedMessage(Message message, int replyCode, String replyText, String exchange, String routingKey) {
        rabbitTemplate.convertAndSend(
                "exchange-name",
                "alternative-key",
                message);
    }
}
EDIT
Simplified example to reproduce the problem.
To run it:
Have RabbitMQ running
Have an exchange called foo bound to a queue called foo
Run it as a Spring Boot app
You'll see the following output:
in returnCallback before message send
but you won't see:
in returnCallback after message send
If you comment out the connectionFactory.setPublisherConfirms(true); it runs OK.
@SpringBootApplication
public class HangingApplication {

    public static void main(String[] args) {
        SpringApplication.run(HangingApplication.class, args);
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setPublisherReturns(true);
        connectionFactory.setPublisherConfirms(true);
        return connectionFactory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setExchange("foo");
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setConfirmCallback((correlationData, ack, cause) -> {
            System.out.println("Confirm callback for main template. Ack=" + ack);
        });
        rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            System.out.println("in returnCallback before message send");
            rabbitTemplate.send("foo", message);
            System.out.println("in returnCallback after message send");
        });
        return rabbitTemplate;
    }

    @Bean
    public ApplicationRunner runner(@Qualifier("rabbitTemplate") RabbitTemplate template) {
        return args -> {
            template.convertAndSend("BADKEY", "foo payload");
        };
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
        System.out.println("Message received on undeliverable queue : " + in);
    }
}
Here's the build.gradle I used:
plugins {
    id 'org.springframework.boot' version '2.1.5.RELEASE'
    id 'java'
}

apply plugin: 'io.spring.dependency-management'

group 'pcoates'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.11

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.springframework.boot:spring-boot-starter-amqp'
}
It causes some kind of deadlock down in the amqp-client code. The simplest solution is to do the send on a separate thread - use a TaskExecutor within the callback...
exec.execute(() -> template.send(...));
You can use the same template/connection factory, but the send must run on a different thread.
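For illustration, a minimal sketch of that approach (the SimpleAsyncTaskExecutor and the exchange/routing key here are my assumptions, not part of the original answer):
TaskExecutor exec = new SimpleAsyncTaskExecutor();
rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
    // Hand the re-send off to another thread so the callback does not
    // block the channel the returned message arrived on.
    exec.execute(() -> rabbitTemplate.send("exchange-name", "alternative-key", message));
});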
I thought we had recently changed the framework to always call the return callback on a different thread (after the last person reported this), but it looks like it fell through the cracks.
I opened an issue this time.
EDIT
Are you sure you're using 2.1.6?
We fixed this problem in 2.1.0 by preventing the send from attempting to use the same channel that the return arrived on. This works fine for me...
@SpringBootApplication
public class So57234770Application {

    public static void main(String[] args) {
        SpringApplication.run(So57234770Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            template.send("foo", message);
        });
        return args -> {
            template.convertAndSend("BADKEY", "foo");
        };
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
        System.out.println(in);
    }
}
If you can provide a sample app that exhibits this behavior, I will take a look to see what's going on.

Capturing Errors on Spring Integration DSL

We have a Spring Integration DSL pipeline connected to GCP Pub/Sub and things "work": the data is received and processed as defined in the pipeline, using a collection of Function implementations and .handle().
The problem we have (and why I put "work" in quotes) is that, in some handlers, when some of the data isn't found in the companion database, we raise an IllegalStateException, which forces the data to be reprocessed (along the way, another service may complete the companion database, and then the function will work). This exception is never shown anywhere.
We tried to capture the content of the errorHandler, but we really can't find the proper way of doing it programmatically (no XML).
Our Functions have something like this:
Record record = recordRepository.findById(incomingData).orElseThrow(() -> new IllegalStateException("Missing information: " + incomingData));
This IllegalStateException is the one that is not appearing anywhere in the logs.
Also, maybe it's worth mentioning that we have our channels defined as
@Bean
public DirectChannel cardInputChannel() {
    return new DirectChannel();
}

@Bean
public PubSubInboundChannelAdapter cardChannelAdapter(
        @Qualifier("cardInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, SUBSCRIPTION_NAME);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO);
    adapter.setPayloadType(CardDto.class);
    return adapter;
}
I am not familiar with the adapter, but I just looked at the code and it looks like they just nack the message and don't log anything.
You can add an Advice to the handler's endpoint to capture and log the exception:
.handle(..., e -> e.advice(exceptionLoggingAdvice))

@Bean
public MethodInterceptor exceptionLoggingAdvice() {
    return invocation -> {
        try {
            return invocation.proceed();
        }
        catch (Exception thrown) {
            // log it
            throw thrown;
        }
    };
}
EDIT
@SpringBootApplication
public class So57224614Application {

    public static void main(String[] args) {
        SpringApplication.run(So57224614Application.class, args);
    }

    @Bean
    public IntegrationFlow flow(MethodInterceptor myAdvice) {
        return IntegrationFlows.from(() -> "foo", endpoint -> endpoint.poller(Pollers.fixedDelay(5000)))
                .handle("crasher", "crash", endpoint -> endpoint.advice(myAdvice))
                .get();
    }

    @Bean
    public MethodInterceptor myAdvice() {
        return invocation -> {
            try {
                return invocation.proceed();
            }
            catch (Exception e) {
                System.out.println("Failed with " + e.getMessage());
                throw e;
            }
        };
    }
}

@Component
class Crasher {

    public void crash(Message<?> msg) {
        throw new RuntimeException("test");
    }
}
and the output:
Failed with nested exception is java.lang.RuntimeException: test

Java with MongoDB connection issues - getting SQL exception

New here, first post... I am trying to connect to MongoDB using Spring Boot but I am getting an SQLException... Any suggestions? Why do I get SQL exceptions in a MongoDB configuration?
@Configuration
public class ApplicationConfig {

    @Bean
    public MongoItemReader<MongoDBEntity> reader() {
        System.out.println("REader");
        MongoItemReader<MongoDBEntity> reader = new MongoItemReader<MongoDBEntity>();
        reader.setTemplate(mongoTemplate);
        reader.setQuery("{}");
        reader.setTargetType(MongoDBEntity.class);
        reader.setTargetType((Class<? extends MongoDBEntity>) MongoDBEntity.class);
        reader.setSort(new HashMap<String, Sort.Direction>() {
            {
                put("_id", Direction.ASC);
            }
        });
        return reader;
    }

    @Bean
    public FlatFileItemWriter<MongoDBEntity> writer() {
        System.out.println("Writer");
        FlatFileItemWriter<MongoDBEntity> writer = new FlatFileItemWriter<MongoDBEntity>();
        writer.setResource(new FileSystemResource(
                "c://outputs//temp.all.csv"));
        writer.setLineAggregator(new DelimitedLineAggregator<MongoDBEntity>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<MongoDBEntity>() {
                    {
                        setNames(new String[] { "id", "name" });
                    }
                });
            }
        });
        return writer;
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<MongoDBEntity, MongoDBEntity> chunk(10).reader(reader())
                .writer(writer()).build();
    }

    @Bean
    public Job exportUserJob() {
        return jobBuilderFactory.get("exportUserJob")
                .incrementer(new RunIdIncrementer()).flow(step1()).end()
                .build();
    }

    @Bean
    public CustomConversions mongoCustomConversions() {
        return new CustomConversions(Collections.emptyList());
    }
}
Is there anything I am missing? Why am I getting an SQL exception with Mongo? I checked the pom file... no references to Oracle etc.
Thanks guys... sorry, it was my mistake: while creating the file I used a previous pom file which still had some leftover references. Once I removed them and used the right version of the Mongo jars, the issue was fixed.
Thanks again.

How to move an error message to a RabbitMQ dead letter queue

I have read a lot of documentation and Stack Overflow posts, but I still have a problem moving a message to the dead letter queue when an exception occurs. I'm using Spring Boot. Here is my configuration:
@Autowired
private RabbitTemplate rabbitTemplate;

@Bean
RetryOperationsInterceptor interceptor() {
    RepublishMessageRecoverer recoverer = new RepublishMessageRecoverer(rabbitTemplate, "error_exchange ", "error_key");
    return RetryInterceptorBuilder
            .stateless()
            .recoverer(recoverer)
            .build();
}
Dead letter queue:
Features
x-dead-letter-routing-key: error_key
x-dead-letter-exchange: error_exchange
durable: true
Policy DLX
Name of the queue: error
My exchange:
name: error_exchange
binding: to: error, routing_key: error_key
Here is my consumer:
@RabbitListener(queues = "${rss_reader_chat_queue}")
public void consumeMessage(Message message) {
    try {
        List<ChatMessage> chatMessages = messageTransformer.transformMessage(message);
        List<ChatMessage> save = chatMessageRepository.save(chatMessages);
        sendMessagesToChat(save);
    }
    catch (Exception ex) {
        throw new AmqpRejectAndDontRequeueException(ex);
    }
}
So when I send an invalid message and an exception occurs, it happens only once (which is good, because previously the message was redelivered over and over again), but the message doesn't go to my dead letter queue. Can you help me with this?
You need to show the rest of your configuration - boot properties, queue @Beans etc. You also seem to have some confusion between using a republishing recoverer vs. dead letter queues; they are different ways to achieve similar results. You typically wouldn't use both.
Here's a simple boot app that demonstrates using a DLX/DLQ...
@SpringBootApplication
public class So43694619Application implements CommandLineRunner {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So43694619Application.class, args);
        context.close();
    }

    @Autowired
    RabbitTemplate template;

    @Autowired
    AmqpAdmin admin;

    private final CountDownLatch latch = new CountDownLatch(1);

    @Override
    public void run(String... arg0) throws Exception {
        this.template.convertAndSend("so43694619main", "foo");
        this.latch.await(10, TimeUnit.SECONDS);
        this.admin.deleteExchange("so43694619dlx");
        this.admin.deleteQueue("so43694619main");
        this.admin.deleteQueue("so43694619dlq");
    }

    @Bean
    public Queue main() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "so43694619dlx");
        args.put("x-dead-letter-routing-key", "so43694619dlxRK");
        return new Queue("so43694619main", true, false, false, args);
    }

    @Bean
    public Queue dlq() {
        return new Queue("so43694619dlq");
    }

    @Bean
    public DirectExchange dlx() {
        return new DirectExchange("so43694619dlx");
    }

    @Bean
    public Binding dlqBinding() {
        return BindingBuilder.bind(dlq()).to(dlx()).with("so43694619dlxRK");
    }

    @RabbitListener(queues = "so43694619main")
    public void listenMain(String in) {
        throw new AmqpRejectAndDontRequeueException("failed");
    }

    @RabbitListener(queues = "so43694619dlq")
    public void listenDlq(String in) {
        System.out.println("ReceivedFromDLQ: " + in);
        this.latch.countDown();
    }
}
Result:
ReceivedFromDLQ: foo

Spring Batch: Job instances run sequentially when using annotations

I have a simple annotation-based configuration for a Spring Batch job as follows:
@Configuration
@EnableBatchProcessing
public abstract class AbstractFileLoader<T> {

    private static final String FILE_PATTERN = "*.dat";

    @Bean
    @StepScope
    @Value("#{stepExecutionContext['fileName']}")
    public FlatFileItemReader<T> reader(String file) {
        FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
        String path = file.substring(file.indexOf(":") + 1, file.length());
        FileSystemResource resource = new FileSystemResource(path);
        reader.setResource(resource);
        DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
        lineMapper.setFieldSetMapper(getFieldSetMapper());
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
        tokenizer.setNames(getColumnNames());
        lineMapper.setLineTokenizer(tokenizer);
        reader.setLineMapper(lineMapper);
        reader.setLinesToSkip(1);
        return reader;
    }

    @Bean
    public ItemProcessor<T, T> processor() {
        // TODO add transformations here
        return null;
    }

    // Exception when using JobScope for the writer
    @Bean
    public ItemWriter<T> writer() {
        ListItemWriter<T> writer = new ListItemWriter<T>();
        return writer;
    }

    @Bean
    public Job loaderJob(JobBuilderFactory jobs, Step s1,
            JobExecutionListener listener) {
        return jobs.get(getLoaderName()).incrementer(new RunIdIncrementer())
                .listener(listener).start(s1).build();
    }

    @Bean
    public Step readStep(StepBuilderFactory stepBuilderFactory,
            ItemReader<T> reader, ItemWriter<T> writer,
            ItemProcessor<T, T> processor, TaskExecutor taskExecutor,
            ResourcePatternResolver resolver) {
        final Step readerStep = stepBuilderFactory
                .get(getLoaderName() + " ReadStep:slave").<T, T> chunk(25254)
                .reader(reader).processor(processor).writer(writer)
                .taskExecutor(taskExecutor).throttleLimit(16).build();
        final Step partitionedStep = stepBuilderFactory
                .get(getLoaderName() + " ReadStep:master")
                .partitioner(readerStep)
                .partitioner(getLoaderName() + " ReadStep:slave",
                        partitioner(resolver)).taskExecutor(taskExecutor)
                .build();
        return partitionedStep;
    }

    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public Partitioner partitioner(
            ResourcePatternResolver resourcePatternResolver) {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        Resource[] resources;
        try {
            resources = resourcePatternResolver.getResources("file:"
                    + getFilesPath() + FILE_PATTERN);
        } catch (IOException e) {
            throw new RuntimeException(
                    "I/O problems when resolving the input file pattern.", e);
        }
        partitioner.setResources(resources);
        return partitioner;
    }

    @Bean
    public JobExecutionListener listener(ItemWriter<T> writer) {
        /* org.springframework.batch.core.scope.StepScope scope; */
        return new JobCompletionNotificationListener<T>(writer);
    }

    public abstract FieldSetMapper<T> getFieldSetMapper();

    public abstract String getFilesPath();

    public abstract String getLoaderName();

    public abstract String[] getColumnNames();
}
When I run the same job with two different sets of job parameters, the two instances run sequentially instead of in parallel. I have a SimpleAsyncTaskExecutor bean configured, which I assumed would cause the jobs to be triggered asynchronously.
Do I need to add any more configuration to this class to have the job instances execute in parallel?
You have to configure the JobLauncher that you use to launch jobs so that it uses your TaskExecutor (or a separate pool). The simplest way is to override the bean:
@Bean
public JobLauncher jobLauncher(JobRepository jobRepository) throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    jobLauncher.setTaskExecutor(taskExecutor());
    jobLauncher.afterPropertiesSet();
    return jobLauncher;
}
Don't be confused by the warning that will be logged saying that a synchronous task executor will be used. This is due to an extra instance that is created because of the rather awkward way Spring Batch configures the beans it provides in SimpleBatchConfiguration (long story short, if you want to get rid of the warning you'll need to provide a BatchConfigurer bean and specify how 4 other beans are to be created, even if you only want to change one of them).
Note that it being the same job is irrelevant here; the problem is that, by default, the job launcher launches the job on the calling thread.
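For completeness, a rough sketch of the BatchConfigurer route (my own illustration, assuming Spring Batch 4.x where DefaultBatchConfigurer exposes a protected createJobLauncher(); the class name is hypothetical):
@Component
public class AsyncBatchConfigurer extends DefaultBatchConfigurer {

    @Override
    protected JobLauncher createJobLauncher() throws Exception {
        // Same SimpleJobLauncher as above, but supplied through the
        // BatchConfigurer so the framework's own launcher is asynchronous
        // and the "synchronous task executor" warning goes away.
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(getJobRepository());
        jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }
}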
