I am trying to use Spring Integration to implement a Gateway --> Splitter --> ServiceActivator --> Aggregator pattern in an event-driven fashion, backed by JMS. I expect the service activator to be multi-threaded, and any of the endpoints should be able to execute anywhere in a cluster, not necessarily on the originating server. I could get this working in a single JVM without JMS (using SI channels), but I understand that SI channels will not help me scale horizontally, i.e. across multiple VMs.
Here's the configuration I have so far:
<int:gateway id="transactionGateway" default-reply-channel="transaction-reply"
default-request-channel="transaction-request" default-reply-timeout="10000"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int-jms:outbound-gateway id="transactionJMSGateway"
correlation-key="JMSCorrelationID" request-channel="transaction-request"
request-destination="transactionInputQueue" reply-channel="transaction-reply"
reply-destination="transactionOutputQueue" extract-reply-payload="true"
extract-request-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000"
max-messages-per-task="1" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway for Splitter -->
<int-jms:inbound-gateway id="splitterGateWay"
request-destination="transactionInputQueue" request-channel="splitter-input"
reply-channel="splitter-output" concurrent-consumers="1"
default-reply-destination="processInputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true" />
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" concurrent-consumers="1"
default-reply-destination="processOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true"
max-messages-per-task="1" />
<int-jms:inbound-gateway id="aggregatorGateway"
request-destination="processOutputQueue" request-channel="aggregator-input"
reply-channel="aggregator-output" concurrent-consumers="1"
default-reply-destination="transactionOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
extract-request-payload="true" max-messages-per-task="1"
correlation-key="JMSCorrelationID" />
<int:splitter id="transactionSplitter" input-channel="splitter-input"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" method="aggregate" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
Before using gateways I tried JMS-backed channels, and that approach wasn't successful either. The problem I am facing now is that the splitter replies straight to the transactionOutputQueue. I tried playing around with jms:header-enricher without much success. I feel that my approach to the problem/SI might have a fundamental flaw. Any help/guidance is highly appreciated.
Also, the code snippet I have provided above uses a simple in-memory aggregator. I understand that if I need to get this working across the cluster I might need a JDBC-backed aggregator, but for now I am trying to get this pattern working on a single VM.
Here's the updated working configuration based on Gary's comment:
<bean id="processOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.process.output" />
</bean>
<bean id="transactionOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.result" />
</bean>
<bean id="transactionInputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.input" />
</bean>
<int:gateway id="transactionGateway"
default-request-channel="transaction-request" default-reply-timeout="10000"
default-reply-channel="aggregator-output"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int:splitter id="transactionSplitter" input-channel="transaction-request"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int-jms:outbound-gateway id="splitterJMSGateway"
correlation-key="JMSCorrelationID" request-channel="splitter-output"
request-destination="processInputQueue" reply-channel="aggregator-input"
reply-destination="processOutputQueue" extract-request-payload="true"
extract-reply-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" default-reply-destination="processOutputQueue"
concurrent-consumers="5" max-concurrent-consumers="10"
extract-reply-payload="true" correlation-key="JMSCorrelationID"
extract-request-payload="true" max-messages-per-task="1" />
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
<bean id="processResultMessageStore"
class="org.springframework.integration.store.SimpleMessageStore" />
<bean id="processResultMessageStoreReaper"
class="org.springframework.integration.store.MessageGroupStoreReaper">
<property name="messageGroupStore" ref="processResultMessageStore" />
<property name="timeout" value="5000" />
</bean>
<task:scheduled-tasks>
<task:scheduled ref="processResultMessageStoreReaper"
method="run" fixed-rate="1000" />
</task:scheduled-tasks>
<int:logging-channel-adapter id="logger"
level="DEBUG" log-full-message="true" />
<int-stream:stdout-channel-adapter
id="stdoutAdapter" channel="logger" />
I limited the JMS pipeline to just the service activator, which is what I originally wanted.
The only question I have with the above approach: do I need my aggregator backed by a database even when it runs across multiple VMs, given that the JMS gateway in front of it makes sure it only receives messages with a valid correlation ID?
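If the aggregator eventually does need to survive across VMs, my understanding is that the SimpleMessageStore above could be swapped for the JDBC-backed store — a minimal sketch, assuming a dataSource bean is defined elsewhere (class name per Spring Integration 2.x):
<bean id="processResultMessageStore" class="org.springframework.integration.jdbc.JdbcMessageStore">
    <!-- assumed: a 'dataSource' bean pointing at the shared database -->
    <constructor-arg ref="dataSource" />
</bean>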
Regards,
You probably don't need to use JMS between every component. However, we have lots of test cases for chained gateways like this, and it all works fine.
Something must be wired up incorrectly; since you didn't show your full configuration, it's hard to speculate.
Be sure to use the latest version (2.2.4), turn on DEBUG logging, and follow a message through the flow; as long as your message payload is identifiable across the JMS boundaries, it should be easy to figure out where things go awry.
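One way to follow a message inside SI itself is to wire-tap the channels of interest into the logging adapter the updated configuration above already declares — a minimal sketch (add the interceptor only to the channels you want to trace; 'logger' is the channel adapter defined earlier):
<int:channel id="transaction-request">
    <int:interceptors>
        <!-- copy every message flowing through this channel to the 'logger' adapter -->
        <int:wire-tap channel="logger" />
    </int:interceptors>
</int:channel>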
We are implementing an XA transaction between MQ and a database, and are trying to create a connection factory as a service in Karaf as per the link below:
https://access.redhat.com/documentation/fr-fr/red_hat_fuse/7.2/html/apache_karaf_transaction_guide/using-jms-connection-factories#manual-deployment-connection-factories
The MQ we are using is IBM MQ, and we are connecting to it through Camel.
The Karaf service is exposed from the same bundle that is going to use it, through a Blueprint XML file in the src/main/resources/OSGI-INF/blueprint folder.
When we use the connection factory exposed as a service (looked up through JNDI) as the connection factory for Camel's JmsComponent, we are able to get messages from the queue but not able to put messages onto the queue. There is no error when the put operation fails, so the database still gets updated as if it had succeeded. This happens specifically when using JmsPoolXAConnectionFactory as the pooling connection factory; if we change it to JmsPoolConnectionFactory, the put operation works and the message is added to the queue.
Below are the sample routes for getting messages from and putting messages onto the queue.
GET:
from("mq:queue:{{queueName}}")
.process(new CustomProcessor1())
.to("direct:call-sp")
.end();
from("direct:call-sp")
.to("sql-stored:call-sp")
.end();
PUT:
from("vm:send")
.process(new CustomProcessor2())
.to("mq:queue:{{queueName}}")
.to("sql-stored:update-sp")
.to("vm:nextroute")
.end();
Camel JmsComponent Configuration in camel-context.xml:
<reference id="ptm" interface="org.springframework.transaction.PlatformTransactionManager" />
<reference id="connectionFactory" interface="javax.jms.ConnectionFactory" filter="(osgi.jndi.service.name=jms/mq)" availability="optional" />
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="transacted" value="false" />
<property name="connectionFactory" ref="connectionFactory" />
<property name="transactionManager" ref="ptm" />
</bean>
<bean id="mq" class="org.apache.camel.component.jms.JmsComponent">
<property name="configuration" ref="jmsConfig" />
<property name="destinationResolver" ref="customDestinationResolver" />
</bean>
<bean id="customDestinationResolver" class="com.example.CustomDestinationResolver">
</bean>
Is there any put-specific configuration that we are missing?
To coordinate XA transactions, you need a transaction manager which implements the Java Transaction API (JTA).
Therefore, I think you need to use a JtaTransactionManager rather than a plain org.springframework.transaction.PlatformTransactionManager implementation.
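A minimal Blueprint sketch of what that wiring might look like, assuming Karaf/Aries publishes the JTA TransactionManager as an OSGi service (the ids and that service's availability are assumptions, not taken from your setup):
<!-- assumed: the container exposes the JTA TransactionManager as a service -->
<reference id="jtaTm" interface="javax.transaction.TransactionManager" />

<!-- wrap it in Spring's JTA-aware PlatformTransactionManager -->
<bean id="jtaTransactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="jtaTm" />
</bean>

<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="transactionManager" ref="jtaTransactionManager" />
    <!-- with XA, the JTA transaction manager drives the session, not local JMS transactions -->
    <property name="transacted" value="false" />
</bean>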
Check this out:
https://tomd.xyz/camel-xa-transactions-checklist/
Is it possible to send messages with SimpMessageSendingOperations from a RabbitMQ listener bean?
I have the following listener class:
public class MyJobListener {

    @Autowired
    public SimpMessageSendingOperations messagingTemplate;

    public void handleJob(JobMessage jobMessage) {
        doWork(jobMessage);
        messagingTemplate.convertAndSend("/topic/greetings", "TEST");
    }
}
My Rabbit config file is:
<!-- RabbitMQ configuration -->
<rabbit:connection-factory id="connectionFactory" host="${rabbitmq.connection.host}" port="${rabbitmq.connection.port}" />
<rabbit:admin connection-factory="connectionFactory" />
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory" />
<!-- Queues -->
<rabbit:queue id="myQueue" name="myQueue" />
<!-- Listeners -->
<bean id="myListener01" class="com.xxx.MyJobListener" />
<bean id="myListener02" class="com.xxx.MyJobListener" />
<bean id="myListener03" class="com.xxx.MyJobListener" />
<bean id="myListener04" class="com.xxx.MyJobListener" />
<rabbit:listener-container connection-factory="connectionFactory" >
<rabbit:listener ref="myListener01" method="handleJob" queue-names="myQueue" />
<rabbit:listener ref="myListener02" method="handleJob" queue-names="myQueue" />
<rabbit:listener ref="myListener03" method="handleJob" queue-names="myQueue" />
<rabbit:listener ref="myListener04" method="handleJob" queue-names="myQueue" />
</rabbit:listener-container>
<!-- Bindings -->
<rabbit:direct-exchange name="directexchange" >
<rabbit:bindings>
<rabbit:binding queue="myQueue"/>
</rabbit:bindings>
</rabbit:direct-exchange>
When the message is expected to be sent (messagingTemplate.convertAndSend("/topic/greetings", "TEST")), nothing happens; but if I do the same thing in a @Controller, everything works fine (the message is sent through the websocket to the browser).
I need to do this to send a notification to the user when the job is finished.
After many tests I changed my rabbit configuration file, leaving only one listener:
<!-- Listeners -->
<bean id="myListener01" class="com.xxx.MyJobListener" />
<rabbit:listener-container connection-factory="connectionFactory" error-handler="queueErrorHandler" >
<rabbit:listener ref="myListener01" method="handleJob" queue-names="myQueue" />
</rabbit:listener-container>
and now it works almost randomly. It's strange, but it alternates every two calls; I mean, two times yes, two times no, two times yes, two times no, and so on. It's very strange. I think there is something wrong with the Rabbit config...
It's definitely the Spring Security configuration: if I disable Spring Security, everything works fine. I will find out what it is, and then I'll post the answer here.
I was able to solve it.
The problem was not Spring Security; the problem was that I was declaring the websocket message broker twice:
<websocket:message-broker application-destination-prefix="/app" >
<websocket:stomp-endpoint path="/websocket" >
<websocket:sockjs />
</websocket:stomp-endpoint>
<websocket:simple-broker prefix="/topic,/user" />
</websocket:message-broker>
These lines reside in my websocket.xml, and this file was imported more than once because of an "ugly" distribution of import statements across my Spring .xml files.
After reorganizing those imports and ensuring the broker is only declared once, everything works fine.
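For illustration, a layout that guarantees a single declaration (the file names here are hypothetical):
<!-- root-context.xml: the only place websocket.xml is imported -->
<import resource="classpath:websocket.xml" />
<import resource="classpath:rabbit-context.xml" />
<import resource="classpath:security-context.xml" />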
Hope this helps!
I have 2 Spring Integration context files using a similar file-based integration pattern. Both scan a directory looking for a message, and both work if deployed by themselves. If I include both modules in another Spring context, they load without issues; however, only the second one works, and the first one gets: MessageDeliveryException: Dispatcher has no subscribers. I've attempted to combine them into a single context file with no positive gain. We are currently on version 2.1.3 of Spring Integration and version 2.1 of Spring Integration File. Any ideas are greatly appreciated!
inpayment-context.xml:
<!-- START of in-bound message implementation -->
<int:channel id="file-inpayment-channel" datatype="java.io.File" />
<bean id="xmlPatternFileListFilter" class="org.springframework.integration.file.filters.SimplePatternFileListFilter">
<constructor-arg value="*.xml" />
</bean>
<task:executor id="batchInBoundExecuter" pool-size="1-1" queue-capacity="20" rejection-policy="CALLER_RUNS" />
<int-file:inbound-channel-adapter directory="file:${inpayment.inbox}" filter="xmlPatternFileListFilter"
channel="file-inpayment-channel">
<int:poller id="inPaymentrPoller" fixed-delay="1000" task-executor="batchInBoundExecuter" default="true" />
</int-file:inbound-channel-adapter>
<bean id="inPaymentService" class="com.somepackage.InPaymentBootstrapService" />
<int:service-activator id="batchJobLaunchService" ref="inPaymentService" input-channel="file-inpayment-channel"
method="schedule" />
<!-- START of out-bound message implementation -->
<int:channel id="inpayment-file-out-channel" datatype="java.io.File" />
<int:gateway id="inboundPaymentGateway" service-interface="com.somepackage.InboundPaymentGateway"
default-request-channel="inpayment-file-out-channel" />
<int-file:outbound-channel-adapter directory="file:${inpayment.inprocess}" channel="inpayment-file-out-channel"
auto-create-directory="true" delete-source-files="true" />
<!-- END of out-bound message implementation -->
scheduler-context.xml:
<!-- START of in-bound message implementation -->
<int:channel id="scheduler-file-in-channel" datatype="java.io.File" />
<bean id="simplePatternFileListFilter" class="org.springframework.integration.file.filters.SimplePatternFileListFilter">
<constructor-arg value="*.xml" />
</bean>
<task:executor id="batchJobRunExecuter" pool-size="1-1" queue-capacity="20" rejection-policy="CALLER_RUNS"/>
<int-file:inbound-channel-adapter directory="file:${scheduler.inbox}" filter="simplePatternFileListFilter"
channel="scheduler-file-in-channel">
<int:poller id="schedulerPoller" fixed-delay="5000" task-executor="batchJobRunExecuter" default="true" />
</int-file:inbound-channel-adapter>
<bean id="launchService" class="com.somepackage.BatchJobLaunchService" />
<int:service-activator id="batchJobLaunchService" ref="launchService" input-channel="scheduler-file-in-channel"
method="schedule" />
<!-- END of in-bound message implementation -->
<!-- START of out-bound message implementation -->
<int:channel id="scheduler-file-out-channel" datatype="java.io.File" />
<int:channel id="scheduler-xml-out-channel" datatype="com.somepackage.ScheduledJob" />
<int:gateway id="batchJobSchedulerGateway" service-interface="com.innovation.customers.guideone.scheduler.integration.SchedulerGateway"
default-request-channel="scheduler-xml-out-channel" />
<int:transformer input-channel="scheduler-xml-out-channel" output-channel="scheduler-file-out-channel" ref="schedulerFileTransformer"
method="transformToFile" />
<int-file:outbound-channel-adapter directory="file:${scheduler.completed}" channel="scheduler-file-out-channel"
auto-create-directory="true" delete-source-files="true" />
<!-- END of out-bound message implementation -->
Common Spring Context:
<context:component-scan base-package="com.somepackage" />
<import resource="classpath:g1-scheduler-context.xml"/>
<import resource="classpath:g1-inpayment-context.xml"/>
EDIT
2014-08-27 11:01:01,530 ERROR [batchJobRunExecuter-1][:] org.springframework.integration.handler.LoggingHandler : org.springframework.integration.MessageDeliveryException: Dispatcher has no subscribers.
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:108)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:101)
I see the issue in your config:
<int:service-activator id="batchJobLaunchService" ref="inPaymentService" input-channel="file-inpayment-channel"
method="schedule" />
and
<int:service-activator id="batchJobLaunchService" ref="launchService" input-channel="scheduler-file-in-channel"
method="schedule" />
They are supposed to be different services, but they use the same id: batchJobLaunchService.
By default Spring allows that, but only the last bean definition with a given id wins. That's why the <service-activator> for the launchService was never populated, and hence its EventDrivenConsumer bean was never subscribed to the scheduler-file-in-channel.
Be careful and use unique ids for all your beans.
It isn't so easy to throw an exception in this duplication case, but if you switch on INFO logging for the org.springframework category you'll see a message that one bean overrides another.
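For example, giving each service activator its own id is enough (the new ids below are just illustrative):
<!-- inpayment-context.xml -->
<int:service-activator id="inPaymentLaunchService" ref="inPaymentService"
    input-channel="file-inpayment-channel" method="schedule" />

<!-- scheduler-context.xml -->
<int:service-activator id="schedulerJobLaunchService" ref="launchService"
    input-channel="scheduler-file-in-channel" method="schedule" />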
I'm new to Spring Integration and Spring Integration AMQP.
I have the following code:
<bean id="enricher" class="soft.Enricher"/>
<amqp:inbound-channel-adapter queue-names="QUEUE1" channel="amqpInboundChannel"/>
<int:channel id="amqpInboundChannel">
<int:interceptors>
<int:wire-tap channel="logger"/>
</int:interceptors>
</int:channel>
<int:header-enricher input-channel="amqpInboundChannel" output-channel="routingChannel">
<int:header name="store" value="sj" />
</int:header-enricher>
<int:channel id="routingChannel" />
<int:header-value-router input-channel="routingChannel" header-name="store">
<int:mapping value="sj" channel="channelSJ" />
<int:mapping value="jy" channel="channelJY" />
</int:header-value-router>
<amqp:outbound-channel-adapter channel="channelSJ" exchange-name="ex_store" routing-key="sj" amqp-template="rabbitTemplate"/>
<amqp:outbound-channel-adapter channel="channelJY" exchange-name="ex_store" routing-key="jy" amqp-template="rabbitTemplate"/>
<int:channel id="channelSJ" />
<int:channel id="channelJY" />
<int:logging-channel-adapter id="logger" level="ERROR" />
The setup is as shown in the configuration above. Everything works fine except that headers are lost when a message is picked up by the inbound-channel-adapter. Likewise, the enriched "store" header is lost when the message is sent to the exchange using the outbound-channel-adapter.
(Screenshots, omitted here, showed the message before being picked up by the inbound-channel-adapter, and the same message after the whole process, with no headers left.)
I think your problem is described here:
"By default only standard AMQP properties (e.g. contentType) will be copied to and from Spring Integration MessageHeaders. Any user-defined headers within the AMQP MessageProperties will NOT be copied to or from an AMQP Message unless explicitly identified via 'requestHeaderNames' and/or 'replyHeaderNames' properties of this HeaderMapper. If you need to copy all user-defined headers simply use wild-card character ''.*"
So you need to define your own custom instance of DefaultAmqpHeaderMapper and configure the inbound-channel-adapter with it. See here.
It might look something like this:
<bean id="myHeaderMapper" class="org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper">
<property name="requestHeaderNames" value="*"/>
<property name="replyHeaderNames" value="*"/>
</bean>
<amqp:inbound-channel-adapter queue-names="QUEUE1" channel="amqpInboundChannel"
header-mapper="myHeaderMapper"/>
I've got a fast producer ESB (converting CSV to XML) and a slow consumer ESB (performing zip/base64/SOAP wrapping of the XML). The ESBs communicate via a JMS topic. This design is legacy and cannot be changed. When a large CSV file is processed, JBoss AS (5.2) grinds to a halt as the producer floods the consumer, even with a heap size of 4096M. Forgive me, I'm new to JBoss/JMS and finding it all bewildering.
Producer sending config
<action class="com.example.FooAction" name="ProcessFoo">
<property name="springJndiLocation" value="FooEsbSpring" />
<property name="exceptionMethod" value="exceptionHandler" />
<property name="okMethod" value="processSuccess" />
<property name="jndiName" value="topic/FooTopic" />
<property name="connection-factory" value="ConnectionFactory" />
<property name="unwrap" value="true" />
<property name="security-principal" value="guest" />
<property name="security-credential" value="guest" />
</action>
Producer sending code:
Message msg = MessageFactory.getInstance().getMessage(MessageType.JAVA_SERIALIZED);
msg.getBody().add(foo); // foo is the business specific message
new JMSRouter(config).process(msg);
Consumer receiving config:
<jms-jca-provider connection-factory="ConnectionFactory" name="FooMessaging">
<jms-bus busid="fooChannel">
<jms-message-filter dest-name="topic/FooTopic"
dest-type="TOPIC" transacted="false" />
</jms-bus>
<activation-config>
<property name="dLQMaxResent" value="1" />
</activation-config>
</jms-jca-provider>
Topic config
<server>
<mbean code="org.jboss.jms.server.destination.TopicService"
name="jboss.esb.quickstart.destination:service=Topic,name=FooTopic"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer
</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
</server>
Things I've tried so far:
Ran the publisher ESB without the consumer ESB: as expected, no problems.
Lots of googling and looking for existing questions on Stack Overflow.
Found some references to rate limiting but I can't see how to fit these into my config.
I've tried to find an API to discover how many messages are already on the topic unprocessed (with the hope I can implement my own back-off strategy).
Looked at this documentation.
Look at section 6.3.17.2, org.jboss.mq.server.jmx.Topic, and use the 'Depth'-related attributes via JMX.
That might help you build the back-off strategy you're looking for.