Persist message in topic after server restart - java

I am learning Spring Integration JMS. I ran into a problem where my topic does not persist pending messages that have not yet been consumed by the client.
Basically I start ActiveMQ, then using a REST client I invoke the producer 50 times so that 50 messages get enqueued on the topic. At the consumer end I have applied a sleep of 5 seconds so that messages are consumed at a regular interval of 5s. In between I stop ActiveMQ, by which point some of the messages have been consumed, say 15 out of 50. When I restart ActiveMQ I expected the topic to still hold the pending 35 messages, but I cannot see them in the admin console under the Topics tab.
Here is my configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xmlns:int-jme="http://www.springframework.org/schema/integration"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
http://www.springframework.org/schema/integration/jms http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd">
<!-- Component scan to find all Spring components -->
<context:component-scan base-package="com.geekcap.springintegrationexample" />
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
<property name="order" value="1" />
<property name="messageConverters">
<list>
<!-- Default converters -->
<bean class="org.springframework.http.converter.StringHttpMessageConverter"/>
<bean class="org.springframework.http.converter.FormHttpMessageConverter"/>
<bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter" />
<bean class="org.springframework.http.converter.xml.SourceHttpMessageConverter"/>
<bean class="org.springframework.http.converter.BufferedImageHttpMessageConverter"/>
<bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
</list>
</property>
</bean>
<!-- Define a channel to communicate out to a JMS Destination -->
<int:channel id="topicChannel"/>
<!-- Define the ActiveMQ connection factory -->
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<!--
Define an adapter that routes topicChannel messages to the myTopic topic; the outbound-channel-adapter
automagically finds the configured connectionFactory bean (by naming convention).
-->
<int-jms:outbound-channel-adapter channel="topicChannel"
destination-name="topic.myTopic"
pub-sub-domain="true" />
<!-- Create a channel for a listener that will consume messages-->
<int:channel id="listenerChannel" />
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
channel="getPayloadChannel"
destination-name="topic.myTopic"
pub-sub-domain="true" />
<int:service-activator input-channel="listenerChannel" ref="messageListenerImpl" method="processMessage" />
<int:channel id="getPayloadChannel" />
<int:service-activator input-channel="getPayloadChannel" output-channel="listenerChannel" ref="retrievePayloadServiceImpl" method="getPayload" />
</beans>
I also read that the default delivery mode is persistent, but in my case it does not seem to work.
EDIT:
As per the answer given by Gary Russell, after adding the attributes
subscription-durable="true"
durable-subscription-name="mySubscription"
to <int-jms:message-driven-channel-adapter> I am facing XML schema validation errors:
cvc-complex-type.3.2.2: Attribute 'subscription-durable' is not allowed to appear in element 'int-jms:message-driven-channel-adapter'.
cvc-complex-type.3.2.2: Attribute 'durable-subscription-name' is not allowed to appear in element 'int-jms:message-driven-channel-adapter'.
Please help.

That is how topics work by default; read the JMS specification.
Topics are publish/subscribe; only subscribers that are present at the time of publication receive the message.
If you publish 5, start the consumer, then publish another 5, it will only get the second 5.
If you kill the broker before it gets all 5, then on restart the broker sees there are no consumers, so it purges the messages.
You can change this behavior by using durable subscriptions, in which case the broker will indeed retain messages for each such subscription, even if not currently connected.
To configure this with Spring Integration, set subscription-durable on the message-driven channel adapter and give it a unique subscription-name.
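A hedged sketch of the resulting adapter from the question (depending on the Spring Integration JMS schema version, the second attribute is named durable-subscription-name or subscription-name; the subscription name itself is illustrative and must be unique per subscriber):
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
    channel="getPayloadChannel"
    destination-name="topic.myTopic"
    pub-sub-domain="true"
    subscription-durable="true"
    durable-subscription-name="mySubscription" />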

Topics in ActiveMQ are not durable or persistent by default, so if one of your consumers is down, you will lose messages.
To make a topic durable and persistent you can create a durable consumer, with a unique client id per consumer.
But again, that does not distribute well if you are following a microservices architecture: multiple pods or replicas of the same consumer become a problem, because no load balancing is possible for durable consumers.
To mitigate this, ActiveMQ offers Virtual Topics. More details are provided below.
Your producer sends its messages to a topic named VirtualTopic.MyTopic.
** Note: you have to follow this naming convention with the default ActiveMQ configuration, although there is a way to override it.
Now, to consume your messages with multiple consumers (A and B here), you have to follow a naming convention on the consumer-side destinations as well, e.g. Consumer.A.VirtualTopic.MyTopic and Consumer.B.VirtualTopic.MyTopic. These two consumers will receive the messages published to the topic above, with load balancing between multiple replicas of the same consumer.
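Mapped onto the Spring Integration adapters from the first question, this convention would look roughly like the sketch below (channel names and the adapter id are illustrative; the destination names follow ActiveMQ's default VirtualTopic./Consumer. convention):
<!-- Sketch only: the producer still publishes to a topic -->
<int-jms:outbound-channel-adapter channel="topicChannel"
    destination-name="VirtualTopic.MyTopic"
    pub-sub-domain="true" />
<!-- ...but each logical consumer reads from the queue the broker fans the topic
     out to; replicas of consumer A share this queue, so the broker load-balances
     between them -->
<int-jms:message-driven-channel-adapter id="consumerA"
    channel="getPayloadChannel"
    destination-name="Consumer.A.VirtualTopic.MyTopic"
    pub-sub-domain="false" />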
I hope this helps you fix your problem with the ActiveMQ topic.

Related

How do I implement a route in camel to receive messages from a JMS queue?

I've referred to the JMS page of the Camel documentation and many related SO questions such as this one, but I'm unable to find a comprehensive list on the implementation.
I'm using Spring XML along with Camel and Weblogic for the server. I've made a test queue with the following names:
Server: TestJMSServer, Module: TestJMSModule, Queue: TestJMSQueue, CF: TestConnectionFactory.
According to the Camel documentation, my route should look something like this:
<camel:route id="test">
<camel:from uri="jms:TestJMSQueue" />
<camel:to uri="file:/Users/...." />
</camel:route>
This gives me an error saying "connectionFactory must be specified". So exactly what else do I need to add to my applicationContext.xml in order to listen to this queue?
You need to tell Camel's JMS component which JMS connection factory to use. Most likely you'll get that from JNDI if you're using WebLogic.
In the example below I am looking up the connection factory using Spring's jee:jndi-lookup (I believe jms/connectionFactory might even be a name you can use in WebLogic). The looked-up factory is then made available as a Spring bean with id myConnectionFactory.
This connection factory bean is then used for the connectionFactory property of Camel's JmsComponent. Notice the id attribute: jms. This defines the Camel endpoint URI scheme to be used in your routes.
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <!-- Look up the JMS connection factory in WebLogic's JNDI tree -->
    <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/>
    <!-- The Camel JMS component; the bean id ("jms") becomes the endpoint URI scheme -->
    <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
        <property name="connectionFactory" ref="myConnectionFactory"/>
        <!-- more configuration required based on your requirements -->
    </bean>
    <!-- Routes live inside a camelContext -->
    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route id="test">
            <from uri="jms:TestJMSQueue"/>
            <to uri="file:/Users/...."/>
        </route>
    </camelContext>
    <!--
    example uses an in-vm ActiveMQ broker:
    <bean id="anothercnf" class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="vm://mybroker"/>
    </bean>
    -->
</beans>
Important note: you will need to tune this further (set up transactions, set up concurrent consumers, possibly configure a Spring JMS connection pool).
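As a rough illustration of that tuning, the sketch below sets a couple of common options on the JmsComponent (a sketch only; the values are placeholders, and transactions/pooling need to match your broker and requirements):
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="myConnectionFactory"/>
    <!-- several competing listener threads on the queue -->
    <property name="concurrentConsumers" value="5"/>
    <!-- consume inside a local JMS transaction so failed exchanges are redelivered -->
    <property name="transacted" value="true"/>
</bean>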

Disabling the listening to rabbit queues from spring application.properties

I want to create an application-development.properties file in Spring to define a dev environment. In this environment I want to disable listening to the Rabbit queues, because I don't want to interfere with the staging queues while debugging etc.
The problem is I can't find a property that controls this. No "active" property or "enabled" property or anything...
These are the properties I found in the Spring docs:
# RABBIT (RabbitProperties)
spring.rabbitmq.addresses= # connection addresses (e.g. myhost:9999,otherhost:1111)
spring.rabbitmq.dynamic=true # create an AmqpAdmin bean
spring.rabbitmq.host= # connection host
spring.rabbitmq.port= # connection port
spring.rabbitmq.password= # login password
spring.rabbitmq.requested-heartbeat= # requested heartbeat timeout, in seconds; zero for none
spring.rabbitmq.listener.acknowledge-mode= # acknowledge mode of container
spring.rabbitmq.listener.concurrency= # minimum number of consumers
spring.rabbitmq.listener.max-concurrency= # maximum number of consumers
spring.rabbitmq.listener.prefetch= # number of messages to be handled in a single request
spring.rabbitmq.listener.transaction-size= # number of messages to be processed in a transaction
spring.rabbitmq.ssl.enabled=false # enable SSL support
spring.rabbitmq.ssl.key-store= # path to the key store that holds the SSL certificate
spring.rabbitmq.ssl.key-store-password= # password used to access the key store
spring.rabbitmq.ssl.trust-store= # trust store that holds SSL certificates
spring.rabbitmq.ssl.trust-store-password= # password used to access the trust store
spring.rabbitmq.username= # login user
spring.rabbitmq.virtual-host= # virtual host to use when connecting to the broker
I did find a way not to load the amqp-context.xml beans that contain the listener definitions, by using Spring profiles and adding <beans profile="development"> .. </beans> to the XML, but this is much less flexible: I have to define different profiles, and changing what they include means changing the code.
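For context, the profile-based workaround described above looks roughly like this (a sketch; nested <beans> elements require the versionless spring-beans 3.1+ schema, and the profile name is illustrative; here the listener beans sit in a non-development profile so that a development run starts no listeners):
<!-- placed at the end of amqp-context.xml -->
<beans profile="production">
    <rabbit:listener-container connection-factory="connectionFactory"
            requeue-rejected="false" concurrency="10">
        <rabbit:listener ref="ProcessMessage" queue-names="${queue_name}" />
    </rabbit:listener-container>
</beans>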
EDIT: this is how my amqp-context.xml looks:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:task="http://www.springframework.org/schema/task"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:p="http://www.springframework.org/schema/p" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rabbit="http://www.springframework.org/schema/rabbit"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/task
http://www.springframework.org/schema/task/spring-task-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/rabbit
http://www.springframework.org/schema/rabbit/spring-rabbit-1.3.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="ignoreResourceNotFound" value="true" />
<property name="locations">
<list>
<value>application.${env:xxxx}.properties</value>
</list>
</property>
</bean>
<rabbit:connection-factory id="connectionFactory" host="${rabbit_host}"
virtual-host="${rabbit_virtual_host}" username="${rabbit_username}" password="${rabbit_password}" port="${rabbit_port}"/>
<!-- Connection Factory -->
<bean id="rabbitConnFactory"
class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
</bean>
<!-- Spring AMQP Template -->
<bean id="template" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<property name="connectionFactory" ref="connectionFactory" />
<property name="routingKey" value="${my_queue}" />
<property name="queue" value="${my_queue}" />
</bean>
<!-- Spring AMQP Admin -->
<bean id="admin" class="org.springframework.amqp.rabbit.core.RabbitAdmin">
<constructor-arg ref="rabbitConnFactory" />
</bean>
<rabbit:listener-container connection-factory="connectionFactory" requeue-rejected="false" concurrency="10">
<rabbit:listener ref="ProcessMessage"
queue-names="${queue_name}" />
</rabbit:listener-container>
<bean id="ProcessStuff" class="Process" />
</beans>
Does anyone have an idea how I can manage the listening to queues directly from the application.properties file? Please?
As an alternative to waiting for Boot 1.3, you can add your own key to application-development.properties like
rabbit.auto-startup=false
Then modify your amqp-context.xml like this
<rabbit:listener-container connection-factory="connectionFactory" requeue-rejected="false" concurrency="10" auto-startup="${rabbit.auto-startup}">
Good catch! I've created #3587 which will be addressed for Spring Boot 1.3
Thanks!
This one "spring.autoconfigure.exclude: org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration" doesn't help me. So I just remove all properties start with spring.cloud.stream.* and spring.rabbitmq.addresses. Also add to your logback
<logger name="org.springframework.amqp" level="ERROR"/>
<logger name="org.springframework.boot.actuate.amqp" level="ERROR"/>.
Because when you remove the properties, spring output a lot of WARN logs.

Java RabbitMQ + AMQP blocking producers for some period(Locking)

Issue: we have 2 or 3 instances of an application. Each instance has a producer and a consumer. We have to schedule some processing, and for this we use a common Spring scheduler. This scheduler produces messages and sends them to a broker (RabbitMQ). In our case we process the same data 2 or 3 times, because each instance sends the message. How would you block the producers on the other instances once the first producer has sent the message?
Configuration:
<!-- RabbitMQ configuration -->
<rabbit:connection-factory
id="connection" host="${rabbit.host}" port="${rabbit.port}" username="${rabbit.username}" password="${rabbit.password}"
channel-cache-size="${rabbit.publisherCacheSize}" virtual-host="${rabbit.virtualHost}" />
<!-- Declare executor pool for worker threads -->
<!-- Ensure that the pool-size is greater than the sum of all number of concurrent consumers from rabbit that use this pool to ensure
you have enough threads for maximum concurrency. We do this by ensuring that this is 1 plus the size of the connection factory cache
size for all consumers -->
<task:executor id="worker-pool" keep-alive="60" pool-size="${rabbit.consumerChannelCacheSize}" queue-capacity="1000" rejection-policy="CALLER_RUNS"/>
<!-- Message converter -->
<bean id="baseMessageConverter" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound" value="com.company.model.Scraper"/>
</bean>
<bean id="messageConverter" class="org.springframework.amqp.support.converter.MarshallingMessageConverter">
<constructor-arg index="0" ref="baseMessageConverter"/>
</bean>
<!-- *********************************producer*********************************** -->
<!-- Outbound company Events -->
<int:channel id="producerChannelCompany"/>
<int:gateway id="jobcompanyCompleteEventGateway" service-interface="com.company.eventing.companyEventPublisher"
default-request-channel="producerChannelCompany"
default-request-timeout="2000"
error-channel="errors"/>
<amqp:outbound-channel-adapter id="companyEvents.amqpAdapter" channel="producerChannelCompany"
exchange-name="${rabbit.queue.topic}"
routing-key="${rabbit.queue.routing.key}"
amqp-template="psRabbitTemplate"/>
<rabbit:admin id="psRabbitAdmin" connection-factory="connection" />
<rabbit:template id="psRabbitTemplate" channel-transacted="${rabbit.channelTransacted}" encoding="UTF-8" message-converter="messageConverter" connection-factory="connection"/>
<rabbit:topic-exchange id="ps.topic" name="${rabbit.queue.topic}" durable="true" auto-delete="false"/>
<!-- *********************************consumer*********************************** -->
<rabbit:queue id="ps.queue" name="${rabbit.queue}" auto-delete="false" durable="true" exclusive="false" />
<!-- Exchange to queue binding -->
<rabbit:topic-exchange id="ps.topic" name="${rabbit.queue.topic}" durable="true" auto-delete="false" >
<rabbit:bindings>
<rabbit:binding queue="${rabbit.queue}" pattern="${rabbit.queue.pattern}"></rabbit:binding>
</rabbit:bindings>
</rabbit:topic-exchange>
<!-- Configuration for consuming company Complete events -->
<amqp:inbound-channel-adapter id="companyAdapter"
channel="companyCompleteEventChannel"
queue-names="${rabbit.queue}"
channel-transacted="${rabbit.channelTransacted}"
prefetch-count="${rabbit.prefetchCount}"
concurrent-consumers="${rabbit.concurrentConsumers}"
connection-factory="connection"
message-converter="messageConverter"
task-executor="worker-pool"
error-channel="errors"/>
<int:channel id="companyCompleteEventChannel"/>
<int:service-activator id="companyCompleteActivator" input-channel="companyCompleteEventChannel"
ref="companyEventHandler" method="runScraper"/>
<bean id="jvmLauncher" class="com.app.company.jvm.JvmLauncher" />
<!-- company Event handler -->
<bean id="companyEventHandler" class="com.app.company.eventing.consumer.companyEventHandler" depends-on="jvmLauncher">
<!--<property name="scriptHelper" ref="scriptHelper"/>-->
<property name="jvmLauncher" ref="jvmLauncher" />
<property name="defaultMemoryOptions" value="${company.memory.opts}"/>
<property name="defaultMemoryRegex" value="${company.memory.regex}"/>
</bean>
<!-- ERRORS -->
<int:channel id="errors"/>
<int:service-activator id="psErrorLogger" input-channel="errors" ref="psloggingHandler"/>
<bean id="psloggingHandler" class="org.springframework.integration.handler.LoggingHandler">
<constructor-arg index="0" value="DEBUG"></constructor-arg>
<!-- <property name="loggerName" value="com.app.travelerpayments.loggingHandler"/> -->
</bean>
It's not clear what architecture you have, but if all your instances consume messages from the same queue, each message will be consumed only once (unless it is requeued by a consumer). That is the best way to use AMQP's power in your situation, I guess. And if I missed something, please clarify your question.
With a-la-fanout message delivery, where each instance has its own queue with its own stack of messages and you want to control message delivery yourself (definitely a bad idea in almost all situations), why not let all instances listen on a personal queue (or queues) bound to a fanout exchange and use that exchange for control messages? You can tell instances when to stop or start consuming, flush their queues, schedule a restart, etc.
Note, you can also use a topic exchange and bind queues with a specific routing key, say "control.*".
The idea is to send a "who is free" request, pick a random free worker and send the payload to it. You can use a specific routing key, or just publish the payload to the default exchange with a routing key equal to the queue name (by default, queues are bound to the default exchange with a routing key equal to the queue name; see the Default Exchange section in the RabbitMQ docs).
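A minimal sketch of the shared-queue idea, in the same Spring AMQP XML style as the question (the queue name is illustrative; the handler bean and connection factory ids are taken from the configuration above): every instance declares the same queue and runs the same listener container, so the broker delivers each scheduled message to exactly one instance.
<!-- declared identically in every instance: one shared, durable work queue -->
<rabbit:queue id="scheduler.work" name="scheduler.work" durable="true" auto-delete="false"/>
<!-- each instance runs this container; RabbitMQ round-robins messages across the
     connected consumers, so a given scheduled job is processed only once -->
<rabbit:listener-container connection-factory="connection" concurrency="1">
    <rabbit:listener ref="companyEventHandler" method="runScraper" queue-names="scheduler.work"/>
</rabbit:listener-container>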

Spring Integration - Gateway - Splitter - Aggregator with JMS

I am trying to use Spring Integration to do a Gateway --> Splitter --> ServiceActivator --> Aggregator pattern in an event-driven fashion, backed by JMS. I expect the service activator to be multi-threaded, and any of the endpoints should be able to execute anywhere in a cluster, not necessarily on the originating server. I could get this working in a single JVM without using JMS (using SI channels), but I understand that SI channels will not help me scale horizontally, i.e. across multiple VMs.
Here's the configuration I have so far
<int:gateway id="transactionGateway" default-reply-channel="transaction-reply"
default-request-channel="transaction-request" default-reply-timeout="10000"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int-jms:outbound-gateway id="transactionJMSGateway"
correlation-key="JMSCorrelationID" request-channel="transaction-request"
request-destination="transactionInputQueue" reply-channel="transaction-reply"
reply-destination="transactionOutputQueue" extract-reply-payload="true"
extract-request-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000"
max-messages-per-task="1" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway for Splitter -->
<int-jms:inbound-gateway id="splitterGateWay"
request-destination="transactionInputQueue" request-channel="splitter-input"
reply-channel="splitter-output" concurrent-consumers="1"
default-reply-destination="processInputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true" />
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" concurrent-consumers="1"
default-reply-destination="processOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true"
max-messages-per-task="1" />
<int-jms:inbound-gateway id="aggregatorGateway"
request-destination="processOutputQueue" request-channel="aggregator-input"
reply-channel="aggregator-output" concurrent-consumers="1"
default-reply-destination="transactionOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
extract-request-payload="true" max-messages-per-task="1"
correlation-key="JMSCorrelationID" />
<int:splitter id="transactionSplitter" input-channel="splitter-input"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" method="aggregate" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
Before using gateways I tried using JMS-backed channels, and that approach wasn't successful either. The problem I am facing now is that the splitter replies back to the transactionOutputQueue. I tried playing around with jms:header-enricher without much success. I feel that my approach to the problem/SI might have a fundamental flaw. Any help/guidance is highly appreciated.
Also, the code snippet I have provided above uses a simple in-memory aggregator. I understand that if I need to get this working across the cluster I might need a JDBC-backed aggregator, but for now I am trying to get this pattern working on a single VM.
Here's the updated working configuration based on Gary's comment:
<bean id="processOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.process.output" />
</bean>
<bean id="transactionOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.result" />
</bean>
<bean id="transactionInputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.input" />
</bean>
<int:gateway id="transactionGateway"
default-request-channel="transaction-request" default-reply-timeout="10000"
default-reply-channel="aggregator-output"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int:splitter id="transactionSplitter" input-channel="transaction-request"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int-jms:outbound-gateway id="splitterJMSGateway"
correlation-key="JMSCorrelationID" request-channel="splitter-output"
request-destination="processInputQueue" reply-channel="aggregator-input"
reply-destination="processOutputQueue" extract-request-payload="true"
extract-reply-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" default-reply-destination="processOutputQueue"
concurrent-consumers="5" max-concurrent-consumers="10"
extract-reply-payload="true" correlation-key="JMSCorrelationID"
extract-request-payload="true" max-messages-per-task="1" />
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
<bean id="processResultMessageStore"
class="org.springframework.integration.store.SimpleMessageStore" />
<bean id="processResultMessageStoreReaper"
class="org.springframework.integration.store.MessageGroupStoreReaper">
<property name="messageGroupStore" ref="processResultMessageStore" />
<property name="timeout" value="5000" />
</bean>
<task:scheduled-tasks>
<task:scheduled ref="processResultMessageStoreReaper"
method="run" fixed-rate="1000" />
</task:scheduled-tasks>
<int:logging-channel-adapter id="logger"
level="DEBUG" log-full-message="true" />
<int-stream:stdout-channel-adapter
id="stdoutAdapter" channel="logger" />
I limited the JMS pipeline to just the service activator, which is what I originally wanted.
The only question I have with the above approach is whether I need to have my aggregator backed by a database even if I use this across multiple VMs (since the JMS gateway in front of it makes sure that it receives only the messages that have a valid correlation ID?).
Regards,
You probably don't need to use JMS between every component. However we have lots of test cases for chained gateways like this, and all works fine.
Something must be wired up incorrectly. Since you didn't show your full configuration, it's hard to speculate.
Be sure to use the latest version (2.2.4) and turn on DEBUG logging and follow a message through the flow; as long as your message payload is identifiable across JMS boundaries, it should be easy to figure out where things go awry.
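If it helps, framework DEBUG logging can be switched on with a couple of logger entries (logback syntax shown as a sketch; the equivalent log4j categories work the same way):
<logger name="org.springframework.integration" level="DEBUG"/>
<logger name="org.springframework.jms" level="DEBUG"/>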

How to configure JBoss/JMS message rate limit / flow control

I've got a fast producer ESB (converts CSV to XML) and a slow consumer ESB (performing zip/base64/SOAP wrapping of the XML). The ESBs communicate via a JMS topic. This design is legacy and cannot be changed. When a large CSV file is processed, JBoss AS (5.2) grinds to a halt because the producer floods the consumer, even with a heap size of 4096M. Forgive me, I'm new to JBoss/JMS and finding it all bewildering.
Producer sending config
<action class="com.example.FooAction" name="ProcessFoo">
<property name="springJndiLocation" value="FooEsbSpring" />
<property name="exceptionMethod" value="exceptionHandler" />
<property name="okMethod" value="processSuccess" />
<property name="jndiName" value="topic/FooTopic" />
<property name="connection-factory" value="ConnectionFactory" />
<property name="unwrap" value="true" />
<property name="security-principal" value="guest" />
<property name="security-credential" value="guest" />
</action>
Producer sending code:
Message msg = MessageFactory.getInstance().getMessage(MessageType.JAVA_SERIALIZED);
msg.getBody().add(foo); // foo is the business specific message
new JMSRouter(config).process(msg);
Consumer receiving config:
<jms-jca-provider connection-factory="ConnectionFactory" name="FooMessaging">
<jms-bus busid="fooChannel">
<jms-message-filter dest-name="topic/FooTopic"
dest-type="TOPIC" transacted="false" />
</jms-bus>
<activation-config>
<property name="dLQMaxResent" value="1" />
</activation-config>
</jms-jca-provider>
Topic config
<server>
<mbean code="org.jboss.jms.server.destination.TopicService"
name="jboss.esb.quickstart.destination:service=Topic,name=FooTopic"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer
</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
</server>
Things I've tried so far:
Run the publisher ESB without the consumer ESB - as expected no problems.
Lots of googling, looking for existing questions on stackoverflow
Found some references to rate limiting but I can't see how to fit these into my config.
I've tried to find an API to discover how many messages are already on the topic unprocessed (with the hope I can implement my own back-off strategy).
Looked at this documentation.
Look at section 6.3.17.2, org.jboss.mq.server.jmx.Topic, and use the 'Depth'-related attributes via JMX.
It might help you build the back-off strategy you're looking for.
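A rough Java sketch of such a back-off check using plain JMX. Assumptions to verify: the attribute name ("AllMessageCount" below) against the MBean's actual attribute list, and the choice of MBeanServer (inside the AS you may need JBoss's own MBeanServer rather than the platform one). The ObjectName matches the topic service declared in the question.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TopicDepthCheck {

    private static final int MAX_BACKLOG = 1000; // illustrative threshold

    /**
     * Returns true if the topic backlog is below the threshold, i.e. the
     * producer may publish the next batch; otherwise it should back off.
     */
    public static boolean consumerIsKeepingUp() throws Exception {
        // inside JBoss AS the platform MBeanServer may not be the one holding
        // the messaging MBeans; swap in the JBoss MBeanServer if necessary
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName topic = new ObjectName(
                "jboss.esb.quickstart.destination:service=Topic,name=FooTopic");
        Number depth = (Number) server.getAttribute(topic, "AllMessageCount");
        return depth.intValue() < MAX_BACKLOG;
    }
}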
