How to pipeline AMQP queues with Spring Integration? - java

I'm used to Camel, where it is fairly simple to pipe the output of one element into the input of another.
I want to send all application events to an AMQP queue, the fire hose, and then route events to different queues depending on the event type. For example, if the event type is session.created I'd like to take it off the fire hose and send it to the session.created queue.
I have defined the following RabbitMQ configuration:
<rabbit:connection-factory id="connectionFactory" host="localhost"/>
<rabbit:admin connection-factory="connectionFactory"/>
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory"/>
<rabbit:queue name="q.firehose"/>
<rabbit:queue name="q.session.created"/>
<rabbit:direct-exchange name="e.firehose">
    <rabbit:bindings>
        <rabbit:binding key="firehose" queue="q.firehose"/>
    </rabbit:bindings>
</rabbit:direct-exchange>
<rabbit:headers-exchange name="e.router">
    <rabbit:bindings>
        <rabbit:binding queue="q.session.created">
            <rabbit:binding-arguments>
                <entry key="x-match" value="all"/>
                <entry key="event_type" value="session.created"/>
            </rabbit:binding-arguments>
        </rabbit:binding>
    </rabbit:bindings>
</rabbit:headers-exchange>
And I want to try something like this Spring Integration configuration:
<int:channel id="fromFirehose"/>
<int:channel id="toRouter"/>
<int-amqp:inbound-channel-adapter channel="fromFirehose" queue-names="q.firehose" connection-factory="connectionFactory"/>
<!-- Some config element here to move all input from the firehose out and put it into e.router -->
<int-amqp:outbound-channel-adapter channel="toRouter" exchange-name="e.router" amqp-template="routerTemplate" />
Which component is best suited to move input from the fire hose into the e.router exchange? Is this a good approach?
It looks like a transformer can move messages from one channel to the other, but you are obliged to apply a transformation. If there is no other way, is there a DoNothingTransformer available?
Thanks in advance!

Just link both inbound and outbound adapters to the same channel.
For example:
<int:channel id="fromFirehose"/>
<int-amqp:inbound-channel-adapter channel="fromFirehose" queue-names="q.firehose" connection-factory="connectionFactory"/>
<int-amqp:outbound-channel-adapter channel="fromFirehose" exchange-name="e.router" amqp-template="routerTemplate" />
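For reference, the same wiring can also be expressed with the Spring Integration Java DSL in later versions. This is only a minimal sketch of that alternative, assuming the connectionFactory and routerTemplate beans from the XML above are available:

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.amqp.dsl.Amqp;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
@EnableIntegration
public class FirehoseBridgeConfig {

    // Consume from q.firehose and republish every message to the e.router exchange.
    // No transformer is needed; the inbound and outbound adapters are simply chained.
    @Bean
    public IntegrationFlow firehoseToRouter(ConnectionFactory connectionFactory,
                                            AmqpTemplate routerTemplate) {
        return IntegrationFlows
                .from(Amqp.inboundAdapter(connectionFactory, "q.firehose"))
                .handle(Amqp.outboundAdapter(routerTemplate).exchangeName("e.router"))
                .get();
    }
}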

Related

Spring RabbitMQ - xml configuration - manual ack

I have defined this configuration for Rabbit in Spring:
<rabbit:connection-factory id="amqpConnectionFactory" addresses="${amqp.host}:${amqp.port}"
thread-factory="rabbitThreadFactory"
cache-mode="CHANNEL"
channel-cache-size="25"
username="${amqp.user}"
password="${amqp.pass}"
virtual-host="${amqp.vhost}"/>
<rabbit:admin connection-factory="amqpConnectionFactory" id="rabbitAdmin"/>
<rabbit:topic-exchange id="motoTopicExchange" name="moto.ex.topic">
    <rabbit:bindings>
        <rabbit:binding pattern="moto.*.speed" queue="motoQueue8"/>
        <rabbit:binding pattern="moto.*.tour" queue="motoQueue9"/>
        <rabbit:binding pattern="moto.*.naked" queue="motoQueue10"/>
    </rabbit:bindings>
</rabbit:topic-exchange>
<rabbit:queue id="motoQueue8" name="moto.queue.8"/>
<rabbit:queue id="motoQueue9" name="moto.queue.9"/>
<rabbit:queue id="motoQueue10" name="moto.queue.10"/>
<rabbit:template id="rabbitTemplate"
connection-factory="amqpConnectionFactory"
retry-template="retryTemplate"
message-converter="rabbitJsonConverter"/>
<bean id="rabbitJsonConverter" class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter"/>
<rabbit:listener-container connection-factory="amqpConnectionFactory" message-converter="rabbitJsonConverter"
        max-concurrency="10" acknowledge="auto">
    <rabbit:listener ref="amqpService8" method="handleSimple" queues="motoQueue8"/>
    <rabbit:listener ref="amqpService9" method="handleSimple" queues="motoQueue9"/>
    <rabbit:listener ref="amqpService10" method="handleSimple" queues="motoQueue10"/>
</rabbit:listener-container>
Here the handleSimple method in the listeners consumes an object, for example a Motorcycle (there is also JSON conversion when sending through AMQP).
How could I manually ack a message which was passed into the listeners?
Is it possible to get the MessageHeaders alongside the object (Motorcycle)?
I don't want to configure the listeners through annotations.
Thanks
What is the desire for manual acks? It's quite unusual to need them; the container will take care of acking for you.
To use manual acks, you need a ChannelAwareMessageListener implementation.
You would also have to invoke the message converter yourself.
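For illustration only (this is a sketch, not code from the original answer), a manual-ack listener could look roughly like this, assuming the container is switched to acknowledge="manual", the listener is registered with ref as before, and the Jackson2JsonMessageConverter from the configuration above is injected. Motorcycle is the question's own domain class:

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;
// Note: in older Spring AMQP versions this interface lives in org.springframework.amqp.rabbit.core.
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;
import org.springframework.amqp.support.converter.MessageConverter;

public class ManualAckMotorcycleListener implements ChannelAwareMessageListener {

    private final MessageConverter converter;

    public ManualAckMotorcycleListener(MessageConverter converter) {
        this.converter = converter;
    }

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // With a ChannelAwareMessageListener you invoke the converter yourself;
        // message.getMessageProperties().getHeaders() gives access to the AMQP headers.
        Motorcycle motorcycle = (Motorcycle) converter.fromMessage(message);
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        try {
            // ... process the motorcycle ...
            channel.basicAck(deliveryTag, false);
        }
        catch (Exception e) {
            // Reject without requeue (adjust the requeue flag to your needs).
            channel.basicNack(deliveryTag, false, false);
        }
    }
}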

AMQP Prevent automatic message reads

I'm using Spring AMQP to listen to messages (the configuration has a listener-container, service-activator, chain, bridge and aggregators). At application startup AMQP starts reading messages, which we don't want. I tried auto-startup="false" but it isn't working. Am I missing anything?
Also, if it does work, how do I programmatically start them again? I tried listenerContainer.start(). What about the aggregators and the other components?
EDIT
Following is my config:
<rabbit:queue name="my_queue1" declared-by="consumerAdmin"/>
<rabbit:queue name="my_queue2" declared-by="consumerAdmin"/>
<rabbit:queue name="my_batch1" declared-by="consumerAdmin"/>
<int-amqp:channel id="myPollableChannel" message-driven="false" connection-factory="consumerConnFactory" queue-name="my_queue2"/>
<int-event:inbound-channel-adapter channel="myPollableChannel" auto-startup="false"/>
<int-amqp:channel id="myAggregateChannel" connection-factory="consumerConnFactory"/>
<int-event:inbound-channel-adapter channel="myAggregateChannel" auto-startup="false"/>
<int-amqp:channel id="myChannel" connection-factory="consumerConnFactory"/>
<int-event:inbound-channel-adapter channel="myChannel" auto-startup="false"/>
<int-amqp:channel id="myFailedChannel" connection-factory="consumerConnFactory"/>
<int-event:inbound-channel-adapter channel="myFailedChannel" auto-startup="false"/>
<rabbit:template id="genericTopicTemplateWithRetry" connection-factory="connectionFactory" exchange="my_exchange" retry-template="retryTemplate"/>
<rabbit:topic-exchange name="my_exchange" declared-by="consumerAdmin">
    <rabbit:bindings>
        <rabbit:binding queue="my_queue1" pattern="pattern1"/>
        <rabbit:binding queue="my_queue2" pattern="pattern1"/>
    </rabbit:bindings>
</rabbit:topic-exchange>
<int:handler-retry-advice id="retryAdvice" max-attempts="5" recovery-channel="myFailedChannel">
    <int:exponential-back-off initial="3000" multiplier="5.0" maximum="300000"/>
</int:handler-retry-advice>
<int:bridge input-channel="myPollableChannel" output-channel="myAggregateChannel">
    <int:poller max-messages-per-poll="100" fixed-rate="5000"/>
</int:bridge>
<int:aggregator id="myBatchAggregator"
ref="myAggregator"
correlation-strategy="myCorrelationStrategy"
release-strategy="myReleaseStrategy"
input-channel="myAggregateChannel"
output-channel="myChannel"
expire-groups-upon-completion="true"
send-partial-result-on-expiry="true"
group-timeout="1000" />
<int:chain input-channel="myFailedChannel">
    <int:transformer expression="'Failed to publish messages to my channel:' + payload.failedMessage.payload"/>
    <int-stream:stderr-channel-adapter append-newline="true"/>
</int:chain>
<int:service-activator input-channel="myChannel" output-channel="nullChannel" ref="myWorker" method="myMethod">
    <int:request-handler-advice-chain><ref bean="retryAdvice"/></int:request-handler-advice-chain>
</int:service-activator>
<rabbit:listener-container connection-factory="consumerConnFactory" requeue-rejected="false" concurrency="1">
    <rabbit:listener ref="myListener" method="listen" queue-names="queues1" admin="consumerAdmin"/>
</rabbit:listener-container>
OK. Thank you for the configuration!
Not sure why you need AMQP-backed channels, but that is exactly where your main issue is.
Pay attention: <int-amqp:channel> has an auto-startup="false" option as well.
When you are ready to receive data from AMQP, you just need to start() those channels by their id.
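A minimal sketch of that start() call (assuming access to the ApplicationContext; the bean ids are the channel ids from the configuration above):

import org.springframework.context.ApplicationContext;
import org.springframework.context.Lifecycle;

public class AmqpChannelStarter {

    private final ApplicationContext context;

    public AmqpChannelStarter(ApplicationContext context) {
        this.context = context;
    }

    // Call this once the application is ready to consume from AMQP.
    public void startChannels() {
        for (String id : new String[] { "myPollableChannel", "myAggregateChannel", "myChannel", "myFailedChannel" }) {
            Object bean = context.getBean(id);
            if (bean instanceof Lifecycle) {
                ((Lifecycle) bean).start();
            }
        }
    }
}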

RabbitMQ: Create Dynamic queues in Direct Exchange

I am new to RabbitMQ; I just went through the RabbitMQ docs (Routing). I am quite confused about exchanges with routing keys. My requirement is that I want to create multiple queues dynamically. Please refer to the diagram below.
For example, let's say the producer creates a message for consumer C3; then it should go to the exchange, be routed to Queue 3 only, and be consumed by C3 only.
At present I only require 3 queues; in the future this count may increase, so how do I deal with that situation too?
Note: I referred to this blog: Exchange.
I have used Spring and Hibernate along with RabbitMQ.
The code below shows the RabbitMQ listener configuration file.
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rabbit="http://www.springframework.org/schema/rabbit"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
http://www.springframework.org/schema/rabbit
http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">
<rabbit:connection-factory id="connectionFactory" host="xx.xx.10.181" username="admin" password="admin" />
<rabbit:admin connection-factory="connectionFactory" />
<!-- Create Queues -->
<rabbit:queue id="q1" name="q1.queue" durable="true" />
<rabbit:queue id="q2" name="q2.queue" durable="true" />
<rabbit:queue id="q3" name="q3.queue" durable="true" />
<!--create Exchange here -->
<rabbit:direct-exchange id="myExchange" name="MY Exchange">
    <rabbit:bindings>
        <rabbit:binding queue="q1" key="my.queue.q1"/>
        <rabbit:binding queue="q2" key="my.queue.q2"/>
        <rabbit:binding queue="q3" key="my.queue.q3"/>
    </rabbit:bindings>
</rabbit:direct-exchange>
<!-- instantiate Listeners -->
<bean id="q1Listener" class="in.my.brocker.amqp.Q1Listener" />
<bean id="q2Listener" class="in.my.brocker.amqp.Q2Listener" />
<bean id="q3Listener" class="in.my.brocker.amqp.Q3Listener" />
<!-- glue the listener and myAnonymousQueue to the container-->
<rabbit:listener-container id="myListenerContainer" connection-factory="connectionFactory">
    <rabbit:listener ref="q1Listener" queues="q1"/>
    <rabbit:listener ref="q2Listener" queues="q2"/>
    <rabbit:listener ref="q3Listener" queues="q3"/>
</rabbit:listener-container>
</beans>
In the rabbit-listener-context.xml above, I have created 3 queues along with 3 listener classes. My aim is that each queue should be accessed by its defined routing key. I want to create an nth number of queues like this dynamically. How can we achieve this?
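One possible approach (a sketch, not from the original thread; class and field names here are illustrative) is to declare queues and bindings at runtime through Spring AMQP's RabbitAdmin, i.e. the rabbit:admin bean already defined above, instead of listing every queue in the XML:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class DynamicQueueDeclarer {

    private final RabbitAdmin admin;
    // Same exchange name as in the XML above.
    private final DirectExchange exchange = new DirectExchange("MY Exchange");

    public DynamicQueueDeclarer(RabbitAdmin admin) {
        this.admin = admin;
    }

    // Declare the nth queue and bind it to the exchange with its own routing key.
    public void declareQueue(String queueName, String routingKey) {
        Queue queue = new Queue(queueName, true); // durable
        admin.declareQueue(queue);
        Binding binding = BindingBuilder.bind(queue).to(exchange).with(routingKey);
        admin.declareBinding(binding);
    }
}

To actually consume from a queue declared this way, the queue name would also have to be added to a listener container at runtime (for example via addQueueNames on SimpleMessageListenerContainer in recent Spring AMQP versions).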

Java RabbitMQ + AMQP blocking producers for some period (Locking)

Issue: We have 2 or 3 instances of an application. Each instance has a producer and a consumer. We have to schedule some process, and for this we use a common Spring scheduler. This scheduler produces messages and sends them to a broker (RabbitMQ). In our case we process the same data 2 or 3 times because each instance sends the message. How would you block the producers of the other instances until the first producer has sent the message?
Configuration:
<!-- RabbitMQ configuration -->
<rabbit:connection-factory
id="connection" host="${rabbit.host}" port="${rabbit.port}" username="${rabbit.username}" password="${rabbit.password}"
channel-cache-size="${rabbit.publisherCacheSize}" virtual-host="${rabbit.virtualHost}" />
<!-- Declare executor pool for worker threads -->
<!-- Ensure that the pool-size is greater than the sum of all number of concurrent consumers from rabbit that use this pool to ensure
you have enough threads for maximum concurrency. We do this by ensuring that this is 1 plus the size of the connection factory cache
size for all consumers -->
<task:executor id="worker-pool" keep-alive="60" pool-size="${rabbit.consumerChannelCacheSize}" queue-capacity="1000" rejection-policy="CALLER_RUNS"/>
<!-- Message converter -->
<bean id="baseMessageConverter" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound" value="com.company.model.Scraper"/>
</bean>
<bean id="messageConverter" class="org.springframework.amqp.support.converter.MarshallingMessageConverter">
<constructor-arg index="0" ref="baseMessageConverter"/>
</bean>
<!-- *********************************producer*********************************** -->
<!-- Outbound company Events -->
<int:channel id="producerChannelCompany"/>
<int:gateway id="jobcompanyCompleteEventGateway" service-interface="com.company.eventing.companyEventPublisher"
default-request-channel="producerChannelCompany"
default-request-timeout="2000"
error-channel="errors"/>
<amqp:outbound-channel-adapter id="companyEvents.amqpAdapter" channel="producerChannelCompany"
exchange-name="${rabbit.queue.topic}"
routing-key="${rabbit.queue.routing.key}"
amqp-template="psRabbitTemplate"/>
<rabbit:admin id="psRabbitAdmin" connection-factory="connection" />
<rabbit:template id="psRabbitTemplate" channel-transacted="${rabbit.channelTransacted}" encoding="UTF-8" message-converter="messageConverter" connection-factory="connection"/>
<rabbit:topic-exchange id="ps.topic" name="${rabbit.queue.topic}" durable="true" auto-delete="false"/>
<!-- *********************************consumer*********************************** -->
<rabbit:queue id="ps.queue" name="${rabbit.queue}" auto-delete="false" durable="true" exclusive="false" />
<!-- Exchange to queue binding -->
<rabbit:topic-exchange id="ps.topic" name="${rabbit.queue.topic}" durable="true" auto-delete="false">
    <rabbit:bindings>
        <rabbit:binding queue="${rabbit.queue}" pattern="${rabbit.queue.pattern}"/>
    </rabbit:bindings>
</rabbit:topic-exchange>
<!-- Configuration for consuming company Complete events -->
<amqp:inbound-channel-adapter id="companyAdapter"
channel="companyCompleteEventChannel"
queue-names="${rabbit.queue}"
channel-transacted="${rabbit.channelTransacted}"
prefetch-count="${rabbit.prefetchCount}"
concurrent-consumers="${rabbit.concurrentConsumers}"
connection-factory="connection"
message-converter="messageConverter"
task-executor="worker-pool"
error-channel="errors"/>
<int:channel id="companyCompleteEventChannel"/>
<int:service-activator id="companyCompleteActivator" input-channel="companyCompleteEventChannel"
ref="companyEventHandler" method="runScraper"/>
<bean id="jvmLauncher" class="com.app.company.jvm.JvmLauncher" />
<!-- company Event handler -->
<bean id="companyEventHandler" class="com.app.company.eventing.consumer.companyEventHandler" depends-on="jvmLauncher">
<!--<property name="scriptHelper" ref="scriptHelper"/>-->
<property name="jvmLauncher" ref="jvmLauncher" />
<property name="defaultMemoryOptions" value="${company.memory.opts}"/>
<property name="defaultMemoryRegex" value="${company.memory.regex}"/>
</bean>
<!-- ERRORS -->
<int:channel id="errors"/>
<int:service-activator id="psErrorLogger" input-channel="errors" ref="psloggingHandler"/>
<bean id="psloggingHandler" class="org.springframework.integration.handler.LoggingHandler">
<constructor-arg index="0" value="DEBUG"></constructor-arg>
<!-- <property name="loggerName" value="com.app.travelerpayments.loggingHandler"/> -->
</bean>
It's not clear what architecture you have, but if all your instances consume messages from the same queue, each message will be consumed only once (unless requeued by the consumer). That is the best way to use AMQP's power in your situation, I guess. If I missed something, please clarify your question.
With fanout-style message delivery, where each instance has its own queue with its own message stack and you want to control message delivery yourself (definitely a bad idea in almost all situations), why not let all instances listen to a personal queue bound to a fanout exchange and use that exchange for control messages? You can tell instances when to stop or start consuming, flush their queues, schedule a restart, etc.
Note that you can also use a topic exchange and bind the queues with a specific routing key, say "control.*".
The idea is to send a "who is free" request, pick a random free worker and send the payload to it. You can use a specific routing key or just publish the payload to the default exchange with a routing key equal to the queue name (by default, queues are bound to the default exchange with a routing key equal to the queue name; see the Default Exchange section in the RabbitMQ docs).
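A small sketch of the "routing key equal to the queue name" idea from the previous paragraph, assuming the psRabbitTemplate from the configuration above is injected (the worker queue name here is purely illustrative):

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class DirectToWorkerSender {

    private final RabbitTemplate rabbitTemplate;

    public DirectToWorkerSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Publish to the default ("") exchange; with the routing key equal to the worker's
    // queue name, the message lands only in that worker's queue.
    public void sendToWorker(String workerQueueName, Object payload) {
        rabbitTemplate.convertAndSend("", workerQueueName, payload);
    }
}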

Spring Integration - Gateway - Splitter - Aggregator with JMS

I am trying to use Spring Integration to implement a Gateway --> Splitter --> ServiceActivator --> Aggregator pattern in an event-driven fashion, backed by JMS. I expect the service activator to be multi-threaded, and any of the endpoints should be able to execute on a cluster and not necessarily on the originating server. I could get this working in a single JVM without using JMS (using SI channels), but I understand that SI channels will not help me scale horizontally, i.e. across multiple VMs.
Here's the configuration I have so far:
<int:gateway id="transactionGateway" default-reply-channel="transaction-reply"
default-request-channel="transaction-request" default-reply-timeout="10000"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int-jms:outbound-gateway id="transactionJMSGateway"
        correlation-key="JMSCorrelationID" request-channel="transaction-request"
        request-destination="transactionInputQueue" reply-channel="transaction-reply"
        reply-destination="transactionOutputQueue" extract-reply-payload="true"
        extract-request-payload="true">
    <int-jms:reply-listener max-concurrent-consumers="20" receive-timeout="5000"
            max-messages-per-task="1"/>
</int-jms:outbound-gateway>
<!-- Inbound Gateway for Splitter -->
<int-jms:inbound-gateway id="splitterGateWay"
request-destination="transactionInputQueue" request-channel="splitter-input"
reply-channel="splitter-output" concurrent-consumers="1"
default-reply-destination="processInputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true" />
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" concurrent-consumers="1"
default-reply-destination="processOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true"
max-messages-per-task="1" />
<int-jms:inbound-gateway id="aggregatorGateway"
request-destination="processOutputQueue" request-channel="aggregator-input"
reply-channel="aggregator-output" concurrent-consumers="1"
default-reply-destination="transactionOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
extract-request-payload="true" max-messages-per-task="1"
correlation-key="JMSCorrelationID" />
<int:splitter id="transactionSplitter" input-channel="splitter-input"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" method="aggregate" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
Before using gateways I tried using JMS-backed channels, and that approach wasn't successful either. The problem I am facing now is that the splitter replies back to the transactionOutputQueue. I tried playing around with jms:header-enricher without much success. I feel that my approach to the problem/SI might have a fundamental flaw. Any help/guidance is highly appreciated.
Also, the code snippet I have provided above uses a simple in-memory aggregator. I understand that if I need to get this working across the cluster I might need a JDBC-backed aggregator, but for now I am trying to get this pattern working on a single VM.
Here's the updated working configuration, based on Gary's comment:
<bean id="processOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.process.output" />
</bean>
<bean id="transactionOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.result" />
</bean>
<bean id="transactionInputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.input" />
</bean>
<int:gateway id="transactionGateway"
default-request-channel="transaction-request" default-reply-timeout="10000"
default-reply-channel="aggregator-output"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int:splitter id="transactionSplitter" input-channel="transaction-request"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int-jms:outbound-gateway id="splitterJMSGateway"
        correlation-key="JMSCorrelationID" request-channel="splitter-output"
        request-destination="processInputQueue" reply-channel="aggregator-input"
        reply-destination="processOutputQueue" extract-request-payload="true"
        extract-reply-payload="true">
    <int-jms:reply-listener max-concurrent-consumers="20" receive-timeout="5000"/>
</int-jms:outbound-gateway>
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" default-reply-destination="processOutputQueue"
concurrent-consumers="5" max-concurrent-consumers="10"
extract-reply-payload="true" correlation-key="JMSCorrelationID"
extract-request-payload="true" max-messages-per-task="1" />
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
<bean id="processResultMessageStore"
class="org.springframework.integration.store.SimpleMessageStore" />
<bean id="processResultMessageStoreReaper"
class="org.springframework.integration.store.MessageGroupStoreReaper">
<property name="messageGroupStore" ref="processResultMessageStore" />
<property name="timeout" value="5000" />
</bean>
<task:scheduled-tasks>
<task:scheduled ref="processResultMessageStoreReaper"
method="run" fixed-rate="1000" />
</task:scheduled-tasks>
<int:logging-channel-adapter id="logger"
level="DEBUG" log-full-message="true" />
<int-stream:stdout-channel-adapter
id="stdoutAdapter" channel="logger" />
I limited the JMS pipeline to only the service activator, which is what I originally wanted.
The only question I have based on the above approach is: do I need to have my aggregator backed by a database even if I use this across multiple VMs (since the JMS gateway in front of it makes sure that it receives only the messages that have a valid correlation ID)?
Regards,
You probably don't need to use JMS between every component. However, we have lots of test cases for chained gateways like this, and they all work fine.
Something must be wired up incorrectly. Since you didn't show your full configuration, it's hard to speculate.
Be sure to use the latest version (2.2.4) and turn on DEBUG logging and follow a message through the flow; as long as your message payload is identifiable across JMS boundaries, it should be easy to figure out where things go awry.
