Spring Integration Java DSL - execute multiple service activators async?

There's a Job which has a list of tasks.
Each task has an id, a name, and a status.
I've created service activators for each task, as follows:
@ServiceActivator
public Message<Task> execute(Message<Task> message) {
    // do stuff
}
I've created a gateway for Job, and the integration flow starts from the gateway:
@Bean
public IntegrationFlow startJob() {
    return f -> f
            .handle("jobService", "execute")
            .channel("TaskRoutingChannel");
}

@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .route("headers['Destination-Channel']")
            .get();
}

@Bean
public IntegrationFlow taskFlow() {
    return IntegrationFlows.from("testTaskChannel")
            .handle("aTaskService", "execute")
            .channel("TaskRoutingChannel")
            .get();
}

@Bean
public IntegrationFlow taskFlow2() {
    return IntegrationFlows.from("test2TaskChannel")
            .handle("bTaskService", "execute")
            .channel("TaskRoutingChannel")
            .get();
}
I've got the tasks to execute sequentially, using routers as above.
However, I need to start the job and execute all of its tasks in parallel.
I couldn't figure out how to get that going. I tried using @Async on the service activator methods and making them return void, but in that case, how do I chain back to the routing channel and start the next task?
Please help. Thanks.
EDIT:
I used the RecipientListRouter along with ExecutorChannel to get parallel execution:
@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .routeToRecipients(r -> r
                    .recipient("testTaskChannel")
                    .recipient("test2TaskChannel"))
            .get();
}

@Bean
public ExecutorChannel testTaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
public ExecutorChannel test2TaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(10);
    executor.initialize();
    return executor;
}
Now, 3 questions:
1) If this is a good approach, how do I send specific parts of the payload to each recipient channel? Assume the payload is a List<>, and I want to send each list item to a different channel.
2) How do I set the recipient channels dynamically, say from a header, or from a list?
3) Is this really a good approach? Is there a preferred way to do this?
Thanks in advance.

Your TaskRoutingChannel must be an instance of ExecutorChannel. For example:
return f -> f
        .handle("jobService", "execute")
        .channel(c -> c.executor("TaskRoutingChannel", threadPoolTaskExecutor()));
Otherwise, yes: everything is invoked on a single thread, which isn't good for your task.
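The threadPoolTaskExecutor() bean is not shown in the answer; a minimal sketch, mirroring the executor the question already defines, could look like this:
@Bean
public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
    // illustrative pool sizes, matching the question's getAsyncExecutor()
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(10);
    return executor;
}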
UPDATE
Let me try to answer your questions one by one, although it sounds like each of them should be a separate SO question :-).
If you really need to send the same message to several services, you can use routeToRecipients, or you can go back to a publish-subscribe channel. Or you can even do dynamic routing based on a header, for example.
To send a part of the message to each channel, it is enough to place a .split() before your .routeToRecipients().
To answer your last question I need to know the business requirements for the task.
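For illustration, here is a sketch of the startJobTask() flow with a .split() in front of the router, assuming executeTasks returns the List<> from question 1; the SpEL recipient selectors are hypothetical:
@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .split() // emits one message per list item
            .routeToRecipients(r -> r
                    // hypothetical selector expressions deciding which item goes where
                    .recipient("testTaskChannel", "payload.name == 'aTask'")
                    .recipient("test2TaskChannel", "payload.name == 'bTask'"))
            .get();
}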

Related

pubsub messages not being pulled with poller and serviceactivator

I've been trying to get Pub/Sub to work within a Spring application. To get up and running I've been reading through tutorials and documentation like this.
I can get things to build and start, but if I go through the cloud console to send a message to the test subscription, it never arrives.
This is what my code looks like right now:
@Configuration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {

    @Bean
    public GcpProjectIdProvider projectIdProvider() {
        return () -> "project-id";
    }

    @Bean
    public CredentialsProvider credentialsProvider() {
        return GoogleCredentials::getApplicationDefault;
    }

    @Bean
    public MessageChannel inputMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @InboundChannelAdapter(channel = "inputMessageChannel", poller = @Poller(fixedDelay = "5"))
    public MessageSource<Object> pubsubAdapter(PubSubTemplate pubSubTemplate) {
        PubSubMessageSource messageSource = new PubSubMessageSource(pubSubTemplate, "tst-sandbox");
        messageSource.setAckMode(AckMode.MANUAL);
        messageSource.setPayloadType(String.class);
        messageSource.setBlockOnPull(false);
        messageSource.setMaxFetchSize(10);
        //pubSubTemplate.pull("tst-sandbox", 10, true);
        return messageSource;
    }

    // Define what happens to the messages arriving in the message channel.
    @ServiceActivator(inputChannel = "inputMessageChannel")
    public void messageReceiver(
            String payload,
            @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        System.out.println("Message arrived via an inbound channel adapter from sub-one! Payload: " + payload);
        message.ack();
    }
}
My thinking was that the poller annotation would start a poller running every so often to check for messages and send them to the method annotated with @ServiceActivator, but this is clearly not the case, as it is never hit.
Interestingly enough, if I put a breakpoint right before "return messageSource" and check the result of the pubSubTemplate.pull call, the messages ARE returned, so it is seemingly not an issue with the connection itself.
What am I missing here? Tutorials and documentation aren't helping much at this point, as they all use pretty much the same bit of tutorial code as above...
I have tried variations of the above code, like creating the adapter instead of the message source, like so:
@Bean
public PubSubInboundChannelAdapter inboundChannelAdapter(
        @Qualifier("inputMessageChannel") MessageChannel messageChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "tst-sandbox");
    adapter.setOutputChannel(messageChannel);
    adapter.setAckMode(AckMode.MANUAL);
    adapter.setPayloadType(String.class);
    return adapter;
}
to no avail. Any suggestions are welcome.
Found the problem after creating a Spring Boot project from scratch (the main project is plain Spring). I noticed in the debug output that the Boot project was auto-starting the service activator bean and doing some other things, like actually subscribing to the channels, which the main project wasn't doing.
After a quick google the solution was simple: I had to add the
@EnableIntegration
annotation at class level, and the messages started coming in.
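Applied to the configuration class from the question, that is simply (a sketch):
@Configuration
@EnableIntegration // starts the Spring Integration infrastructure: pollers, service activators, channel subscriptions
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {
    // ... beans as above
}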

How to solve `RotatingServerAdvice` not fetching files more than once?

I am using an IntegrationFlow as an SFTP inbound DSL configuration, where I use a CustomTriggerAdvice to handle manual triggering. Please see the code snippet below for reference.
I am also using RotatingServerAdvice for handling multiple paths on the same host.
But when I start the SFTP inbound, it fetches files from every path the first time, but it does not work a second time and onward. The SFTP inbound starts but does not fetch files from the paths. I couldn't figure out the problem. Is there anything that I am missing?
SftpConfiguration
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp
            .inboundAdapter(dSF())
            .preserveTimestamp(true)
            .remoteDirectory(".")
            .autoCreateLocalDirectory(true)
            .deleteRemoteFiles(false)
            .localDirectory(new File(getDestinationLocation()));
    return IntegrationFlows
            .from(spec, e -> e.id(BEAN_ID)
                    .autoStartup(false)
                    .poller(Pollers
                            .fixedDelay(5000)
                            .advice(
                                    customRotatingServerAdvice(dSF()),
                                    customTriggerAdvice()
                            )
                    )
            )
            .channel(sftpReceiverChannel())
            .handle(sftpInboundMessageHandler())
            .get();
}

private MessageChannel sftpReceiverChannel() {
    return MessageChannels.direct().get();
}
... ... ...
@Bean
public RotatingServerAdvice customRotatingServerAdvice(
        DelegatingSessionFactory<LsEntry> dSF
) {
    List<String> pathList = getSourcePathList();
    for (String path : pathList) {
        keyDirectories.add(new RotationPolicy.KeyDirectory(KEY, path));
    }
    return new RotatingServerAdvice(
            dSF,
            keyDirectories
    );
}

@Bean
public CustomTriggerAdvice customTriggerAdvice() {
    return new CustomTriggerAdvice(customControlChannel(), BEAN_ID);
}

@Bean
public IntegrationFlow customControlBus() {
    return IntegrationFlows.from(customControlChannel())
            .controlBus()
            .get();
}

@Bean
public MessageChannel customControlChannel() {
    return MessageChannels.direct().get();
}
CustomTriggerAdvice
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    private final MessageChannel controlChannel;
    private final String BEAN_ID;

    public CustomTriggerAdvice(MessageChannel controlChannel, String beanID) {
        this.controlChannel = controlChannel;
        this.BEAN_ID = beanID;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            controlChannel.send(new GenericMessage<>("@" + BEAN_ID + ".stop()"));
        }
        return result;
    }
}
Starting the SFTP inbound using the control MessageChannel:
@Qualifier("customControlChannel") MessageChannel controlChannel;

public void startSftpInbound() {
    controlChannel.send(new GenericMessage<>("@" + beanID + ".start()"));
}
I need the system to start on demand, fetch files completing one cycle, and then stop. If it is not stopped after that, it will continue polling forever and my system will fall into an infinite loop. Is there any way to know when the RotatingServerAdvice has completed polling all the servers at least once? Does it emit any event or something like that?
You probably misunderstood the logic of the afterReceive(@Nullable Message<?> result, MessageSource<?> source) contract. For your requirements you can't stop the channel adapter when one of the servers has returned nothing to poll: that way you don't give the other servers a chance to poll on the next polling cycle.
I think your idea is to iterate over all the servers only once and then stop, probably independently of the result from any of them. The best way to stop, then, is to use a RotatingServerAdvice with fair = true, so it moves to the next server on every poll. The stop can be performed from your custom afterReceive(), independently of the result, when you see that the RotationPolicy.getCurrent() is equal to the last entry in the list. This way you iterate over all of them and stop before the rotation moves back to the first one for the next polling cycle.
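A sketch of that idea, assuming the beans from the question; the StandardRotationPolicy is pulled out as its own bean so the custom advice can inspect the current key directory (names and wiring are illustrative):
@Bean
public StandardRotationPolicy rotationPolicy(DelegatingSessionFactory<LsEntry> dSF) {
    // fair = true: move to the next key directory on every poll
    return new StandardRotationPolicy(dSF, keyDirectories, true);
}

@Bean
public RotatingServerAdvice customRotatingServerAdvice(StandardRotationPolicy rotationPolicy) {
    return new RotatingServerAdvice(rotationPolicy);
}

// In CustomTriggerAdvice (with the rotationPolicy and keyDirectories injected),
// stop once the rotation has reached the last configured directory,
// independently of whether this poll produced a message:
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
    RotationPolicy.KeyDirectory last = keyDirectories.get(keyDirectories.size() - 1);
    if (rotationPolicy.getCurrent().equals(last)) {
        controlChannel.send(new GenericMessage<>("@" + BEAN_ID + ".stop()"));
    }
    return result;
}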

How to enforce strict ordering for a RabbitMQ message listener in Spring Integration?

I have a Spring Integration project where I send and receive messages from a RabbitMQ queue.
The order in which the system publishes messages is OK, but the order in which it receives them afterwards is incorrect.
So I found this paragraph (https://docs.spring.io/spring-integration/reference/html/amqp.html#amqp-strict-ordering) and configured the listener with simpleMessageListenerContainer.setPrefetchCount(1);.
We ran some tests and it functioned well. However, after a week or so it started to show similar ordering issues.
Let me explain a bit more:
I have two flows (IntegrationFlows) in one Spring Integration application.
The first IntegrationFlow creates messages and publishes each message to a Rabbit queue.
Just before publishing, it logs every message, and I can confirm that the sequenceNumber increments as expected (in my case 1,2,3,4,5,6,7,8,9,10,11).
The second flow then consumes these published messages. Right after each message is received, the flow logs it again. Here I found that the sequenceNumber does not increment as expected (in my case 1,3,5,7,2,4,6,8,9,10,11).
It is very important for this application to handle messages in the right order.
When I looked into Rabbit's UI I found the following (most of it is what I expect):
rabbit has 3 connections (for 3 Java applications)
the connection for my application has 3 channels; 2 of them are idle / have no consumers, 1 has 6 subscribers and a prefetch count of 1
every subscriber has a prefetch count of 1
I am concerned with only 1 of these subscribers (a queue)
this queue has the properties 'ack required' and not 'exclusive'
I didn't expect 3 channels within my application's connection. I did not configure that myself; maybe Spring Integration / AMQP did that for me.
Now, I think that another channel might become active and that this causes the ordering problem. But I cannot find evidence of this in the logging, nor in the configuration.
Pieces of code:
@Bean
public SimpleMessageListenerContainer simpleMessageListenerContainer(final ConnectionFactory connectionFactory,
        final Jackson2JsonMessageConverter jackson2MessageConverter,
        final MethodInterceptor retryInterceptor) {
    SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
    simpleMessageListenerContainer.setMessageConverter(jackson2MessageConverter);
    simpleMessageListenerContainer.setAdviceChain(retryInterceptor);
    // force FIFO ordering (https://docs.spring.io/spring-integration/reference/html/amqp.html#amqp-strict-ordering):
    simpleMessageListenerContainer.setPrefetchCount(1);
    // setConcurrency takes a String; "1" (a single consumer) is assumed here, since the original snippet passed no argument
    simpleMessageListenerContainer.setConcurrency("1");
    return simpleMessageListenerContainer;
}
@Bean
public IntegrationFlow routeIncomingAmqpMessagesFlow(final SimpleMessageListenerContainer simpleMessageListenerContainer,
        final Queue q1, final Queue q2, final Queue q3,
        final Queue q4, final Queue q5,
        final Queue q6) {
    simpleMessageListenerContainer.setQueues(q1, q2, q3, q4, q5, q6);
    return IntegrationFlows.from(
                    Amqp.inboundAdapter(simpleMessageListenerContainer)
                            .messageConverter(jackson2MessageConverter))
            .log(LoggingHandler.Level.DEBUG, "com.my.thing")
            .headerFilter(MyMessageHeaders.QUEUE_ROUTING_KEY)
            .route(router())
            .get();
}

private HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter(AmqpHeaders.CONSUMER_QUEUE);
    router.setChannelMapping(AmqpConfiguration.Q1_NAME, Q1_CHANNEL);
    router.setChannelMapping(AmqpConfiguration.Q2_NAME, Q2_CHANNEL);
    router.setChannelMapping(AmqpConfiguration.Q3_NAME, Q3_CHANNEL);
    router.setChannelMapping(AmqpConfiguration.Q4_NAME, Q4_CHANNEL);
    router.setChannelMapping(AmqpConfiguration.Q5_NAME, Q5_CHANNEL);
    router.setChannelMapping(AmqpConfiguration.Q6_NAME, Q6_CHANNEL);
    router.setResolutionRequired(false);
    router.setDefaultOutputChannelName("errorChannel");
    return router;
}
publish:
@Bean
public IntegrationFlow prepareForUpload(final Handler1 handler1) {
    BinaryFileSplitter binaryFileSplitter = new BinaryFileSplitter(true);
    binaryFileSplitter.setChunkSize(chunksize);
    return IntegrationFlows
            .from(aFlow)
            .handle(handler1)
            .split(binaryFileSplitter)
            .log(LoggingHandler.Level.TRACE, "com.my.log.identifyer")
            // Send message to the correct AMQP queue after successful processing
            .enrichHeaders(h -> h.header(QUEUE_ROUTING_KEY, AmqpConfiguration.Q4_NAME))
            .channel(MyChannels.AMQP_OUTPUT)
            .get();
}

@Bean
public IntegrationFlow outputAmqpFlow(final AmqpTemplate amqpTemplate, final UpdateDb updateDb) {
    return IntegrationFlows.from(MyChannels.AMQP_OUTPUT)
            .log(LoggingHandler.Level.DEBUG, "com.my.log.identify")
            .handle(updateDb)
            .handle(Amqp.outboundAdapter(amqpTemplate)
                    .exchangeName(AmqpConfiguration.THE_TOPIC_EXCHANGE)
                    .routingKeyExpression("headers['queueRoutingKey']"))
            .get();
}
receive:
@Bean
public IntegrationFlow handleReceivedMessages() {
    return IntegrationFlows
            .from(Q4_CHANNEL)
            .log(LoggingHandler.Level.DEBUG, "com.my.log.identifyer")
            .handle(..)
            .aggregate(a -> a.releaseStrategy(new ChunkReleaseStrategy()))
            .transform(..)
            ....(..)..
            ...
As discussed in the documentation you pointed to, you need to add a BoundRabbitChannelAdvice to the splitter so that the entire downstream flow uses the same channel.
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return IntegrationFlows.from(Gate.class)
            .split(s -> s.delimiters(",")
                    .advice(new BoundRabbitChannelAdvice(template)))
            .<String, String>transform(String::toUpperCase)
            .handle(Amqp.outboundAdapter(template).routingKey("rk"))
            .get();
}
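The effect of the advice is that all downstream RabbitTemplate operations triggered by the split messages are performed on the same dedicated RabbitMQ channel, which is what preserves strict publication order across them.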

Spring WebFlux: how to manage sequential business logic code in the reactive world

Is this approach reactive friendly?
I have a reactive controller whose save method calls myService.save(request).
The service layer needs to:
1. save via JDBC (on another scheduler, because the code is blocking),
2. generate a template string (on another scheduler),
3. send an email (on another scheduler),
4. finally, return the saved entity to the controller layer.
I can't chain all my calls in one pipeline, or at least I don't know how to achieve it, because I want to send back the entity from (1), and it is lost as soon as I do .flatMap(templateService::generateStringTemplate), for example.
So instead I trigger my sub-operations inside (1).
Is this how I am supposed to handle it, or is there a cleverer way to do it in one pipeline?
Below is code to support the question. Thanks.
Service called by Controller layer
public Mono<Prospect> save(final Prospect prospect) {
    return Mono.fromCallable(
            () -> {
                Prospect savedProspect = transactionTemplate.execute(status -> prospectRepository.save(prospect));
                templateService.generateProspectSubscription(savedProspect)
                        .map(t ->
                                EmailPostRequest.builder()
                                        ...
                                        .build())
                        .flatMap(emailService::send)
                        .subscribe();
                return savedProspect;
            })
            .subscribeOn(jdbcScheduler);
}
TemplateService
public Mono<String> generateProspectSubscription(final Prospect prospect) {
    return Mono.fromCallable(
            () -> {
                Map<String, Object> model = new HashMap<>();
                ...
                Template t = freemarkerConfig.getTemplate(WELCOME_EN_FTL);
                String html = FreeMarkerTemplateUtils.processTemplateIntoString(t, model);
                return html;
            }
    ).subscribeOn(freemarkerScheduler);
}
EmailService
public Mono<Void> send(final EmailPostRequest e) {
    return Mono.fromCallable(
            () -> {
                MimeMessage message = emailSender.createMimeMessage();
                MimeMessageHelper mimeHelper = new MimeMessageHelper(message,
                        MimeMessageHelper.MULTIPART_MODE_MIXED_RELATED,
                        StandardCharsets.UTF_8.name());
                mimeHelper.setTo(e.getTo());
                mimeHelper.setText(e.getText(), true);
                mimeHelper.setSubject(e.getSubject());
                mimeHelper.setFrom(new InternetAddress(e.getFrom(), e.getPersonal()));
                emailSender.send(message);
                return Mono.empty();
            }
    ).subscribeOn(emailScheduler).then();
}
EDITED SERVICE
I think this version of the service layer is cleaner, but any comments are appreciated:
public Mono<Prospect> save(final Prospect prospect) {
    return Mono.fromCallable(
            () -> transactionTemplate.execute(status -> prospectRepository.save(prospect)))
            .subscribeOn(jdbcScheduler)
            .flatMap(savedProspect -> {
                // note: this inner subscribe() is fire-and-forget; the outer Mono
                // completes without waiting for the email to be sent
                templateService.generateProspectSubscription(savedProspect)
                        .map(t ->
                                EmailPostRequest.builder()
                                        ...
                                        .build())
                        .flatMap(emailService::send)
                        .subscribe();
                return Mono.just(savedProspect);
            });
}
This approach is not reactive friendly, as you're 100% wrapping blocking libraries.
With this use case you can't really see the benefit of a reactive runtime, and chances are the performance of your application is worse than that of a blocking one.
If your main motivation is performance, then this is probably counter-productive: offloading a lot of blocking I/O work onto specialized Schedulers has a runtime cost in terms of memory (creating more threads) and CPU (context switching). If performance and scalability are your primary concern, then switching to Spring MVC and leveraging the Flux/Mono support where it fits, or even calling block() operators, is probably a better fit.
If your main motivation is using a specific library, like Spring Framework's WebClient, with Spring MVC, then you're better off using .block() operators in selected places rather than wrapping and scheduling everything.
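That said, a single-pipeline version of the service is possible; here is a minimal sketch (not from the original answer), using Reactor's delayUntil to await the email send while still emitting the saved entity downstream:
public Mono<Prospect> save(final Prospect prospect) {
    return Mono.fromCallable(
            () -> transactionTemplate.execute(status -> prospectRepository.save(prospect)))
            .subscribeOn(jdbcScheduler)
            // delayUntil completes the inner publisher first, then re-emits the saved entity
            .delayUntil(saved -> templateService.generateProspectSubscription(saved)
                    .map(t -> EmailPostRequest.builder()
                            // ... builder fields elided, as in the question
                            .build())
                    .flatMap(emailService::send));
}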

Spring Integration DSL JDBC inbound channel adapter

I use Spring Integration to read data from a database.
Currently I use a polling adapter:
@Bean
public MessageSource<Object> jdbcMessageSource() {
    JdbcPollingChannelAdapter a = new JdbcPollingChannelAdapter(dataSource(), "SELECT id, clientName FROM client");
    return a;
}
Flow:
@Bean
public IntegrationFlow pollingFlow() throws Exception {
    return IntegrationFlows.from(jdbcMessageSource(),
                    c -> c.poller(Pollers.fixedRate(30000).maxMessagesPerPoll(1)))
            .channel(channel1())
            .handle(handler())
            .get();
}
But I would like to trigger my flow from another system.
Does anyone know how to do this?
"schedule my flow from other system"
From your flow's perspective, that sounds like an event-driven action. For this purpose you should use a JdbcOutboundGateway with the same SELECT.
And, of course, you should find the hook for that external system to trigger an event for your flow's input channel. That might be any inbound channel adapter or message-driven adapter, e.g. JMS, AMQP, HTTP and so on. It depends on what you already have in your middleware and what you can expose from this application to external systems.
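A sketch of such an event-driven flow, assuming trigger messages arrive on a hypothetical "triggerChannel" (recent Spring Integration versions accept a null update query to make the gateway select-only):
@Bean
public IntegrationFlow eventDrivenJdbcFlow() {
    JdbcOutboundGateway gateway =
            new JdbcOutboundGateway(dataSource(), null, "SELECT id, clientName FROM client");
    return IntegrationFlows.from("triggerChannel") // fed by e.g. an HTTP, JMS or AMQP inbound adapter
            .handle(gateway)
            .handle(handler())
            .get();
}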
I think I solved the problem with a custom trigger:
public Trigger onlyOnceTrigger() {
    return new Trigger() {

        private final AtomicBoolean invoked = new AtomicBoolean();

        @Override
        public Date nextExecutionTime(TriggerContext triggerContext) {
            // fire exactly once: the first call returns "now", every later call returns null
            return this.invoked.getAndSet(true) ? null : new Date();
        }
    };
}
And my flow:
public IntegrationFlow pollingFlow() throws Exception {
    return IntegrationFlows.from(jdbcMessageSource(),
                    c -> c.poller(Pollers.trigger(onlyOnceTrigger()).maxMessagesPerPoll(1)))
            .channel(channel1())
            .handle(handler())
            .get();
}
