Hi, I have a file listener that is reading files in parallel (more than one at a time):
package com.example.demo.flow;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.*;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.integration.file.dsl.Files;
import org.springframework.stereotype.Component;
import java.io.File;
import java.util.concurrent.Executors;
/**
* Created by muhdk on 03/01/2020.
*/
@Component
@Slf4j
public class TestFlow {
@Bean
public StandardIntegrationFlow errorChannelHandler() {
return IntegrationFlows.from("testChannel")
.handle(o -> {
log.info("Handling error....{}", o);
}).get();
}
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(5)
.errorChannel("testChannel")))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
}
It's not going to my custom error channel,
but if I remove the line
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
then it goes to the error channel.
How can I make it work so that it goes to my custom error channel with the executor?
It looks like, when using executor services with multiple messages, it doesn't work with the normal errorChannel, and I have no idea why.
I made a change like this:
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(10)
))
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
The key line is
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
(MessageHeaders is org.springframework.messaging.MessageHeaders.) The rest remains the same, and it works: once the message is handed off to the executor channel, any exception is thrown on an executor thread, so the poller's errorChannel (which only covers the polling thread) no longer applies; the executor's error handler resolves the error channel from the message's errorChannel header instead.
Related
Currently I am keeping track of the active threads that are in process, because I must not let the system shut down while any processing threads remain.
For example:
package com.example.demo.flow;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.*;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.integration.file.dsl.Files;
import org.springframework.stereotype.Component;
import java.io.File;
import java.util.concurrent.Executors;
/**
* Created by on 03/01/2020.
*/
@Component
@Slf4j
public class TestFlow {
@Bean
public StandardIntegrationFlow errorChannelHandler() {
return IntegrationFlows.from("testChannel")
.handle(o -> {
log.info("Handling error....{}", o);
}).get();
}
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(5)
.errorChannel("testChannel")))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
}
I have enabled processing multiple files in the integration flow, but in the error handler the thread is different.
How can I know which thread it came from?
Is there any way I can find out, as this is very critical?
According to your current configuration, the testChannel is a DirectChannel, so whatever you send to it is going to be processed on the thread you send from.
Therefore Thread.currentThread() is enough for you to determine it.
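For example, the errorChannelHandler flow from the question could log that thread directly (a minimal sketch of the same bean):
@Bean
public StandardIntegrationFlow errorChannelHandler() {
    return IntegrationFlows.from("testChannel")
            // On a DirectChannel this runs on the thread that sent the error
            .handle(o -> log.info("Handling error on thread {}: {}",
                    Thread.currentThread().getName(), o))
            .get();
}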
For a more general solution, consider having a MessagePublishingErrorHandler as a bean named ChannelUtils.MESSAGE_PUBLISHING_ERROR_HANDLER_BEAN_NAME to override the default one. This MessagePublishingErrorHandler can be supplied with a custom ErrorMessageStrategy. There, when you create an ErrorMessage, you can add a custom header with the same Thread.currentThread() info to carry into the error channel processing, even if it is done on a separate thread.
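A minimal sketch of such an override, assuming Spring Integration 5.x (where MessagePublishingErrorHandler accepts an ErrorMessageStrategy); the "failedThread" header name is made up for illustration:
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.ChannelUtils;
import org.springframework.integration.channel.MessagePublishingErrorHandler;
import org.springframework.messaging.support.ErrorMessage;

@Configuration
public class ErrorHandlingConfig {

    // Override the default error handler under its well-known bean name.
    @Bean(ChannelUtils.MESSAGE_PUBLISHING_ERROR_HANDLER_BEAN_NAME)
    public MessagePublishingErrorHandler messagePublishingErrorHandler() {
        MessagePublishingErrorHandler errorHandler = new MessagePublishingErrorHandler();
        errorHandler.setErrorMessageStrategy((throwable, attributes) -> {
            // Record the failing thread's name as a custom header on the ErrorMessage.
            Map<String, Object> headers = new HashMap<>();
            headers.put("failedThread", Thread.currentThread().getName());
            return new ErrorMessage(throwable, headers);
        });
        return errorHandler;
    }
}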
You could also just throw an exception carrying that info instead!
In the route setup, we have a call to WebClient.builder() being made before the route is declared:
@Override
public void configure() {
createSubscription(activeProfile.equalsIgnoreCase("RESTART"));
from(String.format("reactive-streams:%s", streamName))
.to("log:camel.proxy?level=INFO&groupInterval=500000")
.to(String.format("kafka:%s?brokers=%s", kafkaTopic, kafkaBrokerUrls));
}
private void createSubscription(boolean restart) {
WebClient.builder()
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
.build()
.post()
.uri(initialRequestUri)
.body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody))
.retrieve()
.bodyToMono(String.class)
.map(initResp ->
new JSONObject(initResp)
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONObject("INFO")
.getString("SSEURL")
)
.flatMapMany(url -> {
log.info(url);
return WebClient.create()
.get()
.uri(url)
.retrieve()
.bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
})
.flatMap(sse -> {
val data = new JSONObject(sse.data())
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONArray(apiName);
val list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
}
);
}
)
.onBackpressureBuffer()
.flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
.doFirst(() -> log.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
.doOnError(err -> {
log.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
createSubscription(true);
})
.doOnComplete(() -> {
log.warn(String.format("Reactive stream %s has completed, restarting", streamName));
createSubscription(true);
})
.subscribe();
}
To my understanding, the WebClient setup is made for the whole Spring Boot app and not for a specific Apache Camel route (it isn't bound to a specific route id or url), which is why new routes using new reactive streams for other urls, with other header/initial-message needs, will get this setup too, which isn't wanted.
So, the question here: is it possible to make a specific WebClient setup associated not with the whole application but with a specific route, and have it applied only to that route?
Is this configuration possible with the Spring DSL?
The way I applied it is rather complex:
Create 2 routes. The first one is executed first and only once, and triggers a specific method of a specific bean, passing the setup for the WebClient.builder() via method parameters and executing the subscription for WebFlux. And yes, that reactive streams setup is done within the Spring Boot app's Spring context, not the Apache Camel context, so it has no direct association with the route other than being called for setup when the specific route is started. The route looks like:
<?xml version="1.0" encoding="UTF-8"?>
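(Only the XML prolog of the route survived above; a sketch of such a "run once on start" route, where the timer endpoint name, bean parameters, and literal values are all assumptions, could look like this:)
<routes xmlns="http://camel.apache.org/schema/spring">
    <route id="webFluxSetUpRoute">
        <!-- fires exactly once when the context starts -->
        <from uri="timer:webFluxSetUp?repeatCount=1"/>
        <!-- hypothetical parameter values; substitute your own uris/bodies/names -->
        <bean ref="webFluxSetUp"
              method="executeWebfluxSetup(false, 'https://example.org/init', 'restartBody', 'initialBody', 'MYAPI', 'myStream')"/>
    </route>
</routes>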
Provide the bean. I have put it in the Spring Boot app, not the Apache Camel context, like below. The drawback here is that I have to define it whether or not the specific route will run, so it is always in memory.
import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.json.JSONArray;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
@Component
public class WebFluxSetUp {
private final Logger logger = LoggerFactory.getLogger(WebFluxSetUp.class);
private final CamelContext camelContext;
private final CamelReactiveStreamsService camelReactiveStreamsService;
WebFluxSetUp(CamelContext camelContext, CamelReactiveStreamsService camelReactiveStreamsService) {
this.camelContext = camelContext;
this.camelReactiveStreamsService = camelReactiveStreamsService;
}
public void executeWebfluxSetup(boolean restart, String initialRequestUri, String restartRequestBody,
                                String initialRequestBody, String apiName, String streamName) {
    WebClient.builder()
            .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
            .build()
            .post()
            .uri(initialRequestUri)
            .body(BodyInserters.fromObject(restart
                    ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", ""))
                    : initialRequestBody))
            .retrieve()
            .bodyToMono(String.class)
            .map(initResp -> new JSONObject(initResp)
                    .getJSONObject("RESPONSE")
                    .getJSONArray("RESULT")
                    .getJSONObject(0)
                    .getJSONObject("INFO")
                    .getString("SSEURL"))
            .flatMapMany(url -> {
                logger.info(url);
                return WebClient.create()
                        .get()
                        .uri(url)
                        .retrieve()
                        .bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
                        })
                        .flatMap(sse -> {
                            JSONArray data = new JSONObject(sse.data())
                                    .getJSONObject("RESPONSE")
                                    .getJSONArray("RESULT")
                                    .getJSONObject(0)
                                    .getJSONArray(apiName);
                            ArrayList<String> list = new ArrayList<>();
                            for (int i = 0; i < data.length(); i++) {
                                list.add(data.getJSONObject(i).toString());
                            }
                            return Flux.fromIterable(list);
                        });
            })
            .onBackpressureBuffer()
            .flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
            .doFirst(() -> logger.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
            .doOnError(err -> {
                logger.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
                executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
            })
            .doOnComplete(() -> {
                logger.warn(String.format("Reactive stream %s has completed, restarting", streamName));
                executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
            })
            .subscribe();
}
}
Another drawback is that when the route is stopped, the WebFlux client keeps trying to hit the reactive stream url, and there is no route-associated api/event handler to stop it without hard-coding to the specific route.
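One possible mitigation, as a sketch only: keep the Disposable that subscribe() returns and dispose it from a Camel RoutePolicy when the route stops. This assumes Camel's RoutePolicySupport (the package is org.apache.camel.support in Camel 3.x, org.apache.camel.impl in 2.x), and WebFluxSetUp would have to hand the Disposable to the policy:
import java.util.concurrent.atomic.AtomicReference;

import org.apache.camel.Route;
import org.apache.camel.support.RoutePolicySupport; // org.apache.camel.impl in Camel 2.x
import reactor.core.Disposable;

public class WebFluxStopPolicy extends RoutePolicySupport {

    private final AtomicReference<Disposable> subscription = new AtomicReference<>();

    // WebFluxSetUp would call this with the Disposable returned by subscribe().
    public void register(Disposable disposable) {
        subscription.set(disposable);
    }

    @Override
    public void onStop(Route route) {
        Disposable disposable = subscription.getAndSet(null);
        if (disposable != null) {
            disposable.dispose(); // cancels the SSE Flux so the client stops hitting the url
        }
    }
}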
I am trying to send messages via RabbitMQ to an Axon 4 Spring Boot based system. The message is received, but no events are triggered. I am very sure I am missing an essential part, but up to now I wasn't able to figure it out.
Here is the relevant part of my application.yml:
axon:
  amqp:
    exchange: axon.fanout
    transaction-mode: publisher_ack
  # adding the following lines changed nothing
  eventhandling:
    processors:
      amqpEvents:
        source: in.queue
        mode: subscribing
spring:
  rabbitmq:
    username: rabbit
    password: rabbit
From the docs I found that I am supposed to create a SpringAMQPMessageSource bean:
import com.rabbitmq.client.Channel;
import lombok.extern.slf4j.Slf4j;
import org.axonframework.extensions.amqp.eventhandling.AMQPMessageConverter;
import org.axonframework.extensions.amqp.eventhandling.spring.SpringAMQPMessageSource;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Slf4j
@Configuration
public class AxonConfig {
@Bean
SpringAMQPMessageSource inputMessageSource(final AMQPMessageConverter messageConverter) {
return new SpringAMQPMessageSource(messageConverter) {
@RabbitListener(queues = "in.queue")
@Override
public void onMessage(final Message message, final Channel channel) {
log.debug("received external message: {}, channel: {}", message, channel);
super.onMessage(message, channel);
}
};
}
}
If I send a message to the queue from the rabbitmq admin panel I see the log:
AxonConfig : received external message: (Body:'[B@13f7aeef(byte[167])' MessageProperties [headers={}, contentLength=0, receivedDeliveryMode=NON_PERSISTENT, redelivered=false, receivedExchange=, receivedRoutingKey=in.queue, deliveryTag=2, consumerTag=amq.ctag-xi34jwHHA__xjENSteX5Dw, consumerQueue=in.queue]), channel: Cached Rabbit Channel: AMQChannel(amqp://rabbit@127.0.0.1:5672/,1), conn: Proxy@11703cc8 Shared Rabbit Connection: SimpleConnection@581cb879 [delegate=amqp://rabbit@127.0.0.1:5672/, localPort= 58614]
Here is the Aggregate that should receive the events:
import lombok.extern.slf4j.Slf4j;
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;
import pm.mbo.easyway.api.app.order.commands.ConfirmOrderCommand;
import pm.mbo.easyway.api.app.order.commands.PlaceOrderCommand;
import pm.mbo.easyway.api.app.order.commands.ShipOrderCommand;
import pm.mbo.easyway.api.app.order.events.OrderConfirmedEvent;
import pm.mbo.easyway.api.app.order.events.OrderPlacedEvent;
import pm.mbo.easyway.api.app.order.events.OrderShippedEvent;
import static org.axonframework.modelling.command.AggregateLifecycle.apply;
@ProcessingGroup("amqpEvents")
@Slf4j
@Aggregate
public class OrderAggregate {
@AggregateIdentifier
private String orderId;
private boolean orderConfirmed;
@CommandHandler
public OrderAggregate(final PlaceOrderCommand command) {
log.debug("command: {}", command);
apply(new OrderPlacedEvent(command.getOrderId(), command.getProduct()));
}
@CommandHandler
public void handle(final ConfirmOrderCommand command) {
log.debug("command: {}", command);
apply(new OrderConfirmedEvent(orderId));
}
@CommandHandler
public void handle(final ShipOrderCommand command) {
log.debug("command: {}", command);
if (!orderConfirmed) {
throw new IllegalStateException("Cannot ship an order which has not been confirmed yet.");
}
apply(new OrderShippedEvent(orderId));
}
@EventSourcingHandler
public void on(final OrderPlacedEvent event) {
log.debug("event: {}", event);
this.orderId = event.getOrderId();
orderConfirmed = false;
}
@EventSourcingHandler
public void on(final OrderConfirmedEvent event) {
log.debug("event: {}", event);
orderConfirmed = true;
}
@EventSourcingHandler
public void on(final OrderShippedEvent event) {
log.debug("event: {}", event);
orderConfirmed = true;
}
protected OrderAggregate() {
}
}
So the problem is that the messages are received by the system, but no events are triggered. The content of the messages seems to be irrelevant: whatever I send to the queue, I only get a log message from my onMessage method.
JavaDoc of SpringAMQPMessageSource says this:
/**
* MessageListener implementation that deserializes incoming messages and forwards them to one or more event processors.
* <p>
* The SpringAMQPMessageSource must be registered with a Spring MessageListenerContainer and forwards each message
* to all subscribed processors.
* <p>
* Note that the Processors must be subscribed before the MessageListenerContainer is started. Otherwise, messages will
* be consumed from the AMQP Queue without any processor processing them.
*
* @author Allard Buijze
* @since 3.0
*/
But up to now I couldn't find out where or how to register it.
The axon.eventhandling entries in my config and @ProcessingGroup("amqpEvents") in my Aggregate are already from testing; having those entries in or not made no difference at all. I also tried without mode=subscribing.
Exact versions: Spring Boot 2.1.4, Axon 4.1.1, axon-amqp-spring-boot-autoconfigure 4.1
Any help or hints highly appreciated.
Update 23.04.19:
I tried to write my own class like this:
import com.rabbitmq.client.Channel;
import lombok.extern.slf4j.Slf4j;
import org.axonframework.common.Registration;
import org.axonframework.eventhandling.EventMessage;
import org.axonframework.extensions.amqp.eventhandling.AMQPMessageConverter;
import org.axonframework.messaging.SubscribableMessageSource;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
@Slf4j
@Component
public class RabbitMQSpringAMQPMessageSource implements ChannelAwareMessageListener, SubscribableMessageSource<EventMessage<?>> {
private final List<Consumer<List<? extends EventMessage<?>>>> eventProcessors = new CopyOnWriteArrayList<>();
private final AMQPMessageConverter messageConverter;
@Autowired
public RabbitMQSpringAMQPMessageSource(final AMQPMessageConverter messageConverter) {
this.messageConverter = messageConverter;
}
@Override
public Registration subscribe(final Consumer<List<? extends EventMessage<?>>> messageProcessor) {
eventProcessors.add(messageProcessor);
log.debug("subscribe to: {}", messageProcessor);
return () -> eventProcessors.remove(messageProcessor);
}
@RabbitListener(queues = "${application.queues.in}")
@Override
public void onMessage(final Message message, final Channel channel) {
log.debug("received external message: {}, channel: {}", message, channel);
log.debug("eventProcessors: {}", eventProcessors);
if (!eventProcessors.isEmpty()) {
messageConverter.readAMQPMessage(message.getBody(), message.getMessageProperties().getHeaders())
.ifPresent(event -> eventProcessors.forEach(
ep -> ep.accept(Collections.singletonList(event))
));
}
}
}
The result is the same, and the log now proves that the eventProcessors are just empty.
eventProcessors: []
So the question is how to register the event processors correctly. Is there a proper way to do that with Spring?
Update 2:
Also no luck with this:
@Slf4j
@Component("rabbitMQSpringAMQPMessageSource")
public class RabbitMQSpringAMQPMessageSource extends SpringAMQPMessageSource {
@Autowired
public RabbitMQSpringAMQPMessageSource(final AMQPMessageConverter messageConverter) {
super(messageConverter);
}
@RabbitListener(queues = "${application.queues.in}")
@Override
public void onMessage(final Message message, final Channel channel) {
try {
final var eventProcessorsField = this.getClass().getSuperclass().getDeclaredField("eventProcessors");
eventProcessorsField.setAccessible(true);
final var eventProcessors = (List<Consumer<List<? extends EventMessage<?>>>>) eventProcessorsField.get(this);
log.debug("eventProcessors: {}", eventProcessors);
} catch (NoSuchFieldException | IllegalAccessException e) {
e.printStackTrace();
}
log.debug("received message: message={}, channel={}", message, channel);
super.onMessage(message, channel);
}
}
axon:
  eventhandling:
    processors:
      amqpEvents:
        source: rabbitMQSpringAMQPMessageSource
        mode: SUBSCRIBING
Registering it programmatically in addition to the above also didn't help:
@Autowired
void configure(EventProcessingModule epm,
RabbitMQSpringAMQPMessageSource rabbitMessageSource) {
epm.registerSubscribingEventProcessor("rabbitMQSpringAMQPMessageSource", c -> rabbitMessageSource);
epm.assignProcessingGroup("amqpEvents", "rabbitMQSpringAMQPMessageSource");// this line also made no difference
}
Of course, @ProcessingGroup("amqpEvents") is in place in my class that contains the @EventSourcingHandler annotated methods.
Update 25.4.19:
See the accepted answer from Allard. Thanks a lot for pointing me at the mistake I made: I missed that EventSourcingHandlers don't receive messages from outside. That mechanism is for projections, not for distributing Aggregates! Oops.
Here are the config/classes that are receiving events from RabbitMQ now:
axon:
  eventhandling:
    processors:
      amqpEvents:
        source: rabbitMQSpringAMQPMessageSource
        mode: SUBSCRIBING
import com.rabbitmq.client.Channel;
import lombok.extern.slf4j.Slf4j;
import org.axonframework.extensions.amqp.eventhandling.AMQPMessageConverter;
import org.axonframework.extensions.amqp.eventhandling.spring.SpringAMQPMessageSource;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Slf4j
@Component("rabbitMQSpringAMQPMessageSource")
public class RabbitMQSpringAMQPMessageSource extends SpringAMQPMessageSource {
@Autowired
public RabbitMQSpringAMQPMessageSource(final AMQPMessageConverter messageConverter) {
super(messageConverter);
}
@RabbitListener(queues = "${application.queues.in}")
@Override
public void onMessage(final Message message, final Channel channel) {
log.debug("received message: message={}, channel={}", message, channel);
super.onMessage(message, channel);
}
}
import lombok.extern.slf4j.Slf4j;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryHandler;
import org.springframework.stereotype.Service;
import pm.mbo.easyway.api.app.order.events.OrderConfirmedEvent;
import pm.mbo.easyway.api.app.order.events.OrderPlacedEvent;
import pm.mbo.easyway.api.app.order.events.OrderShippedEvent;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@Slf4j
@ProcessingGroup("amqpEvents")
@Service
public class OrderedProductsEventHandler {
private final Map<String, OrderedProduct> orderedProducts = new HashMap<>();
@EventHandler
public void on(OrderPlacedEvent event) {
log.debug("event: {}", event);
String orderId = event.getOrderId();
orderedProducts.put(orderId, new OrderedProduct(orderId, event.getProduct()));
}
@EventHandler
public void on(OrderConfirmedEvent event) {
log.debug("event: {}", event);
orderedProducts.computeIfPresent(event.getOrderId(), (orderId, orderedProduct) -> {
orderedProduct.setOrderConfirmed();
return orderedProduct;
});
}
@EventHandler
public void on(OrderShippedEvent event) {
log.debug("event: {}", event);
orderedProducts.computeIfPresent(event.getOrderId(), (orderId, orderedProduct) -> {
orderedProduct.setOrderShipped();
return orderedProduct;
});
}
@QueryHandler
public List<OrderedProduct> handle(FindAllOrderedProductsQuery query) {
log.debug("query: {}", query);
return new ArrayList<>(orderedProducts.values());
}
}
I removed the @ProcessingGroup from my Aggregate, of course.
My logs:
RabbitMQSpringAMQPMessageSource : received message: ...
OrderedProductsEventHandler : event: OrderShippedEvent...
In Axon, Aggregates do not receive events from "outside". The Event Handlers inside Aggregates (more specifically, they are EventSourcingHandlers) only handle events that have been published by that same aggregate instance, so that it can reconstruct its prior state.
It is only external event handlers, for example the ones that update projections, that will receive events from external sources.
For that to work, your application.yml should mention the bean name as the processor's source instead of the queue name. So in your first example:
eventhandling:
  processors:
    amqpEvents:
      source: in.queue
      mode: subscribing
Should become:
eventhandling:
  processors:
    amqpEvents:
      source: inputMessageSource
      mode: subscribing
But again, this only works for event handlers defined on components, not on Aggregates.
I am trying to achieve the flow shown in the image below using Spring Batch. I was referring to the Java configuration on page 85 of https://docs.spring.io/spring-batch/4.0.x/reference/pdf/spring-batch-reference.pdf.
For some reason, when the decider returns TYPE2, the batch ends in a failed state without any error message. The following is the Java configuration of my job:
jobBuilderFactory.get("myJob")
.incrementer(new RunIdIncrementer())
.preventRestart()
.start(firstStep())
.next(typeDecider()).on("TYPE1").to(stepType1()).next(lastStep())
.from(typeDecider()).on("TYPE2").to(stepType2()).next(lastStep())
.end()
.build();
I think something is not right with the Java configuration, though it matches the Spring documentation. A flow could be useful here, but I am sure there must be a way without it. Any idea how to achieve this?
You need to define the flow not only from the decider to next steps but also starting from stepType1 and stepType2 to lastStep. Here is an example:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {
@Autowired
private JobBuilderFactory jobs;
@Autowired
private StepBuilderFactory steps;
@Bean
public Step firstStep() {
return steps.get("firstStep")
.tasklet((contribution, chunkContext) -> {
System.out.println("firstStep");
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public JobExecutionDecider decider() {
return (jobExecution, stepExecution) -> new FlowExecutionStatus("TYPE1"); // or TYPE2
}
@Bean
public Step stepType1() {
return steps.get("stepType1")
.tasklet((contribution, chunkContext) -> {
System.out.println("stepType1");
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public Step stepType2() {
return steps.get("stepType2")
.tasklet((contribution, chunkContext) -> {
System.out.println("stepType2");
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public Step lastStep() {
return steps.get("lastStep")
.tasklet((contribution, chunkContext) -> {
System.out.println("lastStep");
return RepeatStatus.FINISHED;
})
.build();
}
@Bean
public Job job() {
return jobs.get("job")
.start(firstStep())
.next(decider())
.on("TYPE1").to(stepType1())
.from(decider()).on("TYPE2").to(stepType2())
.from(stepType1()).on("*").to(lastStep())
.from(stepType2()).on("*").to(lastStep())
.build()
.build();
}
public static void main(String[] args) throws Exception {
ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
Job job = context.getBean(Job.class);
jobLauncher.run(job, new JobParameters());
}
}
This prints:
firstStep
stepType1
lastStep
If the decider returns TYPE2, the sample prints:
firstStep
stepType2
lastStep
Hope this helps.
Ran into a similar issue where the else part is not called (technically, only the first configured on() is called).
Almost all the websites with flow and decider examples have similar job configurations, and I was not able to figure out what the issue was.
After some research, I found out how Spring maintains the deciders and decisions.
At a high level, while initializing the application, Spring maintains, based on the job configuration, a list of decisions for a decider object (like decision0, decision1, and so on).
When we call the decider() method, it always returns a new object for the decider. As it is returning a new object, the list contains only one mapping for each object (i.e., decision0), and since it is a list, it always returns the first configured decision. This is the reason why only the first configured transition is called.
Solution:
Instead of making a method call to the decider, create a singleton bean for the decider and use it in the job configuration.
Example:
@Bean
public JobExecutionDecider stepDecider() {
return new CustomStepDecider();
}
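CustomStepDecider is not shown above; a minimal sketch of such a decider (the job-parameter-based TYPE1/TYPE2 logic here is an assumption) could be:
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class CustomStepDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Pick the branch from a job parameter; default to TYPE1 when absent.
        String type = jobExecution.getJobParameters().getString("type", "TYPE1");
        return new FlowExecutionStatus(type);
    }
}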
Inject it and use it in the job creation bean:
@Bean
public Job sampleJob(Step step1, Step step2, Step step3,
                     JobExecutionDecider stepDecider) {
    return jobBuilderFactory.get("sampleJob")
            .start(step1)
            .next(stepDecider).on("TYPE1").to(step2)
            .from(stepDecider).on("TYPE2").to(step3)
            .end()
            .build();
}
Hope this helps.
Create a dummy step which returns the FINISHED status and jumps to the next decider. You need to redirect the flow cursor to the next decider or a virtual step after finishing the current step:
.next(copySourceFilesStep())
.next(firstStepDecider).on(STEP_CONTINUE).to(executeStep_1())
.from(firstStepDecider).on(STEP_SKIP).to(virtualStep_1())
//-executeStep_2
.from(executeStep_1()).on(ExitStatus.COMPLETED.getExitCode())
.to(secondStepDecider).on(STEP_CONTINUE).to(executeStep_2())
.from(secondStepDecider).on(STEP_SKIP).to(virtualStep_3())
.from(virtualStep_1()).on(ExitStatus.COMPLETED.getExitCode())
.to(secondStepDecider).on(STEP_CONTINUE).to(executeStep_2())
.from(secondStepDecider).on(STEP_SKIP).to(virtualStep_3())
//-executeStep_3
.from(executeStep_2()).on(ExitStatus.COMPLETED.getExitCode())
.to(thirdStepDecider).on(STEP_CONTINUE).to(executeStep_3())
.from(thirdStepDecider).on(STEP_SKIP).to(virtualStep_4())
.from(virtualStep_3()).on(ExitStatus.COMPLETED.getExitCode())
.to(thirdStepDecider).on(STEP_CONTINUE).to(executeStep_3())
.from(thirdStepDecider).on(STEP_SKIP).to(virtualStep_4())
@Bean
public Step virtualStep_2() {
return stepBuilderFactory.get("continue-virtualStep2")
.tasklet((contribution, chunkContext) -> {
return RepeatStatus.FINISHED;
})
.build();
}
When using a Java Lambda function to do a Kinesis Data Firehose transformation, I am getting the below error. This is what my transformed JSON looks like:
{
  "records": [
    {
      "recordId": "49586022990098427206724983301551059982279766660054253570000000",
      "result": "Ok",
      "data": "ZXlKMGFXTnJaWEpmYzNsdFltOXNJam9pVkVWVFZEY2lMQ0FpYzJWamRHOXlJam9pU0VWQlRGUklRMEZTUlNJc0lDSmphR0Z1WjJVaQ0KT2kwd0xqQTFMQ0FpY0hKcFkyVWlPamcwTGpVeGZRbz0="
    }
  ]
}
The error in the Kinesis console is:
Invalid output structure: Please check your function and make sure the processed records contain valid result status of Dropped, Ok, or ProcessingFailed
Does anyone have an idea on this? I could not find any example code using Java for the Kinesis data transformation.
https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
This document describes the required output structure.
I just got done struggling through this in Scala (Java compatible). The key is to use the return type com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse:
import java.nio.ByteBuffer
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse._
import com.amazonaws.services.lambda.runtime.events.{KinesisAnalyticsInputPreprocessingResponse, KinesisFirehoseEvent}
import com.amazonaws.services.lambda.runtime.{Context, LambdaLogger, RequestHandler}
import scala.collection.JavaConversions._
import scala.language.implicitConversions
class Handler extends RequestHandler[KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse] {
override def handleRequest(in: KinesisFirehoseEvent, context: Context): KinesisAnalyticsInputPreprocessingResponse = {
val logger: LambdaLogger = context.getLogger
val records = in.getRecords
val transformed = records.flatMap(record => {
try {
val changed = record.getData.array()
//do some sort of transform
val rec = new Record(record.getRecordId, Result.Ok, ByteBuffer.wrap(changed))
Some(rec)
} catch {
case e: Exception => {
logger.log(e.toString)
Some(new Record(record.getRecordId, Result.Dropped, record.getData))
}
}
})
val response = new KinesisAnalyticsInputPreprocessingResponse()
response.setRecords(transformed.toList)
response
}
}
A Java example:
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse;
import com.amazonaws.services.lambda.runtime.events.KinesisFirehoseEvent;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.log4j.Log4j2;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@Log4j2
@RequiredArgsConstructor
public class FirehoseHandler implements RequestHandler<KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse> {
private final ObjectMapper mapper;
@Override
public KinesisAnalyticsInputPreprocessingResponse handleRequest(KinesisFirehoseEvent kinesisFirehoseEvent, Context context) {
return Flux.fromIterable(kinesisFirehoseEvent.getRecords())
.flatMap(this::transformRecord)
.collectList()
.map(KinesisAnalyticsInputPreprocessingResponse::new)
.block();
}
private Mono<KinesisAnalyticsInputPreprocessingResponse.Record> transformRecord(KinesisFirehoseEvent.Record record) {
return Mono.just(record.getData())
.map(StandardCharsets.UTF_8::decode)
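// doYourOwnThing is a placeholder for your own transformation logic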
.flatMap(data -> Mono.fromCallable(() -> doYourOwnThing(data)))
.map(StandardCharsets.UTF_8::encode)
.map(data -> new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.Ok, data))
.onErrorResume(e -> Mono.just(new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.ProcessingFailed, record.getData())));
}
}