What is the maximum number of Spring inbound channel adapters? - java

I have a spring integration configuration file like:
<int-jms:inbound-channel-adapter
channel="fromjmsRecon"
jms-template="jmsTemplate"
destination-name="com.mycompany.inbound.recon">
<int:poller fixed-delay="3000" max-messages-per-poll="1"/>
</int-jms:inbound-channel-adapter>
<int:publish-subscribe-channel id="fromjmsRecon"/>
<int:service-activator input-channel="fromjmsRecon"
ref="processInboundReconFile"
method="execute"/>
... 10 More inbound channels ...
<int-jms:inbound-channel-adapter
channel="fromjmsVanRecon"
jms-template="jmsTemplate"
destination-name="com.mycompany.inbound.another">
<int:poller fixed-delay="3000" max-messages-per-poll="1"/>
</int-jms:inbound-channel-adapter>
<int:publish-subscribe-channel id="fromjmsVanRecon"/>
<int:service-activator input-channel="fromjmsVanRecon"
ref="processInboundAnother"
method="execute"/>
</beans>
There are 11 inbound-channel-adapters. The first 10 connect to ActiveMQ, but the 11th one never does. It does not matter in which order the adapters are listed; the 11th one is always ignored. Its service activator is initialized, but the channel adapter never connects to ActiveMQ.
Is there a limit to the number of inbound channel adapters? Is there a property that I can set somewhere that changes this limit?
Thanks for your help.

Correct, there is a limit: the default TaskScheduler thread pool has a size of 10:
http://docs.spring.io/spring-integration/reference/html/configuration.html#namespace-taskscheduler
So, consider changing its size with the spring.integration.taskScheduler.poolSize property, or use a TaskExecutor for those adapters to shift their work onto other threads and avoid tying up the (comparatively expensive) TaskScheduler threads.
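If you keep the polling adapters, that property goes into a META-INF/spring.integration.properties file on the classpath. A minimal sketch (the value 20 is just an example; size it to the number of pollers plus any other scheduled tasks):
# /META-INF/spring.integration.properties
# raise the default TaskScheduler pool (10 threads) so every poller can get a thread
spring.integration.taskScheduler.poolSize=20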
There is another approach: don't use <int-jms:inbound-channel-adapter>, but switch to <int-jms:message-driven-channel-adapter>, which is listener-based by nature (no polling) and a much better fit.
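For example, here is a minimal sketch of the first adapter rewritten as a message-driven adapter. It assumes a connectionFactory bean is already defined, because this adapter takes a connection factory rather than a JmsTemplate, and it needs no poller at all:
<int-jms:message-driven-channel-adapter
channel="fromjmsRecon"
connection-factory="connectionFactory"
destination-name="com.mycompany.inbound.recon"/>
<!-- the listener container pushes messages as they arrive, so no TaskScheduler thread is consumed -->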

Related

Camel delay overrides any redeliveryPolicy

Here is a simplified ftp polling mechanism.
<camelContext id="Fetcher" xmlns="http://camel.apache.org/schema/blueprint">
<redeliveryPolicyProfile id="redeliveryPolicy"
redeliveryDelay="10000"
maximumRedeliveries="-1" />
<camel:route id="fetchFiles">
<camel:from uri="ftp://10.20.30.40/From?username=user&password=RAW({{password}})&delay=3000" />
<camel:to uri="log:input?showAll=true&level=INFO"/>
<camel:to uri="file://incomingDirectory" />
<onException redeliveryPolicyRef="redeliveryPolicy">
<exception>java.lang.Exception</exception>
<redeliveryPolicy logRetryAttempted="true" retryAttemptedLogLevel="WARN"/>
</onException>
</camel:route>
</camelContext>
What do you think happens on failure? (Delay is 3 seconds, and
redeliveryDelay is 10 seconds.)
Answer: It polls every 3 seconds, forever.
So let's look at the docs. Maybe I need this
"repeatCount (scheduler)"
Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.
Default: 0
Nope, it's not even a valid parameter. So why's it in the docs?
Unknown parameters=[{repeatCount=5}]
OK, so I suppose it polls every 3 seconds. So how do I tell Camel to stop that? Let's try setting 'handled' to true:
<onException redeliveryPolicyRef="redeliveryPolicy">
<exception>java.lang.Exception</exception>
<redeliveryPolicy logRetryAttempted="true" retryAttemptedLogLevel="WARN"/>
<handled><constant>true</constant></handled>
</onException>
No luck. Still 3 seconds. It's clearly not even getting to the redelivery part.
What's the secret?
The fact is that errors which happen in the from endpoint are not handled by the user-defined route (i.e. fetchFiles in the setup above). So onException and redeliveryPolicy are not involved, as they only affect what belongs to the user-defined route.
To control the behavior of the consumer defined in the from endpoint, the obvious way is to use the options that exist on that component. As suggested by @Screwtape, use backoffErrorThreshold and backoffMultiplier in your case.
Why does the parameter repeatCount exist in the docs but turn out to be invalid? It probably does not exist in your Camel version, and the documentation writer forgot to mark the version in which it first appeared.
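A minimal sketch of what that could look like on the FTP consumer URI (backoffErrorThreshold and backoffMultiplier are standard scheduled-polling-consumer options; the values here are arbitrary):
<camel:from uri="ftp://10.20.30.40/From?username=user&password=RAW({{password}})&delay=3000&backoffErrorThreshold=3&backoffMultiplier=10" />
<!-- after 3 consecutive error polls, the consumer skips the next 10 scheduled polls before attempting again -->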

ActiveMQ Pending messages

I have a problem with ActiveMQ similar to this one:
http://activemq.2283324.n4.nabble.com/Messages-stuck-in-pending-td4617979.html
and already tried the solution posted here.
Some messages seem to get stuck on the queue and can sit there for literally days without being consumed. I have more than enough consumers that are free most of the time, so it's not an issue of "saturation" of consumers.
Upon restart of ActiveMQ, SOME of the pending messages are consumed right away. Just a moment ago I had a situation where I had 25 free consumers available for the queue (they are visible in the admin panel) with 7 of those "stuck" messages. Four of them were consumed right away, but the other 3 are still stuck. The other strange thing is that new messages kept coming into the queue and were consumed right away, while the 3 old ones stayed stuck.
On the consumer side my config in spring looks as follows:
<jms:listener-container concurrency="${activemq.concurrent.consumers}" prefetch="1">
<jms:listener destination="queue.request" response-destination="queue.response" ref="requestConsumer" method="onRequest"/>
</jms:listener-container>
<bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy">
<property name="queuePrefetch" value="1" />
</bean>
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="${activemq.broker.url}?initialReconnectDelay=100&maxReconnectDelay=10000&startupMaxReconnectAttempts=3"/>
<property name="prefetchPolicy" ref="prefetchPolicy"/>
</bean>
The "stuck" messages are probably considered as "in delivery", restarting the broker will close the connections and, as the message are yet not acknowledged, the broker considers them as not delivered and will deliver them again.
There may be several problem leading to such a situation, most common ones are a problem in transaction / acknowledgment configuration, bad error / acknowledgment management on consumer side (the message is consumed but never acknowledged) or consumer being stuck on an endless operation (for example a blocking call to a third party resource which doesn't respond and there is no timeout handling).
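If the acknowledgment side is the suspect, one option is to make the listener container's ack mode explicit, for example transacted, so a message only leaves the queue once the listener returns successfully. A minimal sketch based on the configuration above (only the acknowledge attribute is new; everything else is from the question):
<jms:listener-container acknowledge="transacted" concurrency="${activemq.concurrent.consumers}" prefetch="1">
<jms:listener destination="queue.request" response-destination="queue.response" ref="requestConsumer" method="onRequest"/>
</jms:listener-container>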

Message only gets published to one queue in a RabbitMQ Fanout exchange (java)

So, I have 2 queues, outboundEmailQueue and storeEmailQueue:
<rabbit:queue name="outboundEmailQueue"/>
<rabbit:queue name="storeEmailQueue"/>
bound to a fanout exchange called integrationExchange:
<rabbit:fanout-exchange name="integrationExchange" auto-declare="true">
<rabbit:bindings>
<rabbit:binding queue="outboundEmailQueue"/>
<rabbit:binding queue="storeEmailQueue"/>
</rabbit:bindings>
</rabbit:fanout-exchange>
the template:
<rabbit:template id="integrationRabbitTemplate"
connection-factory="connectionFactory" exchange="integrationExchange"
message-converter="jsonMessageConverter" return-callback="returnCallback"
confirm-callback="confirmCallback" />
how I am sending an object to the exchange:
integrationRabbitTemplate.convertAndSend("integrationExchange", "", outboundEmail);
However, the message only gets published to storeEmailQueue (as seen in the management console screen captures).
What is wrong with my configuration? Why is the message not being queued to outboundEmailQueue?
From the screen captures, it seems your configuration is ok and the message is reaching both queues.
But the consumer configuration on each queue is not the same:
storeEmailQueue has consumer ack configured
outboundEmailQueue has autoack configured
If you have any doubt:
check the bindings section of either the exchange or the queues to confirm the link is there (but again, from your screen captures, it seems to be present);
stop the consumers and push a message to the exchange; you should see the message ready count (and total count) increase on both queues.
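For reference, the ack-mode difference described above would look roughly like this in the listener container configuration (a sketch only; the listener bean names and methods are made up):
<!-- storeEmailQueue consumer: manual/explicit acks, messages stay visible until acknowledged -->
<rabbit:listener-container connection-factory="connectionFactory" acknowledge="manual">
<rabbit:listener ref="storeEmailListener" method="handle" queue-names="storeEmailQueue"/>
</rabbit:listener-container>
<!-- outboundEmailQueue consumer: auto-ack, messages are removed as soon as they are delivered -->
<rabbit:listener-container connection-factory="connectionFactory" acknowledge="auto">
<rabbit:listener ref="outboundEmailListener" method="handle" queue-names="outboundEmailQueue"/>
</rabbit:listener-container>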
I created the same example and it's working fine; the message is added to both queues. However, I configured it through annotations instead of XML. If you want the annotation-based solution, please follow the link below:
https://stackoverflow.com/questions/45803231/how-to-publish-messages-on-rabbitmq-with-fanout-exchange-using-spring-boot

Singleton Route running in Cluster for Messaging Consumer w/ Quartz and RoutePolicies

I'm trying to have a singleton cluster configuration with only one messaging consumer route running in the cluster (if it matters it's a rabbitmq consumer).
I've configured Quartz and am using its clustering features, which only seem to ensure a single concurrent execution.
Also to note: I've looked at using both the SimpleScheduledRoutePolicy and CronRoutePolicy. The issue I'm seeing there is that there doesn't seem to be a way to set the Quartz endpoint parameters (stateful=true, JobName, GroupName, etc.).
Am I doing something wrong here? I apologize, as I'm a bit new to both Camel and Quartz. Below is the route code to outline what I'm trying to do:
SimpleScheduledRoutePolicy policy = new SimpleScheduledRoutePolicy();
long startTime = System.currentTimeMillis() + 3000L;
policy.setRouteStartDate(new Date(startTime));
policy.setRouteStartRepeatCount(-1);
policy.setRouteStartRepeatInterval(10000);
from("{{consumer.endpoint}}").noAutoStartup().routePolicy(policy).to("log:example?showBody=true&multiline=false");
Maybe you mean something like this:
from("quartz2://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI").enrich("rabbitmq://<yourRabbitMQuri>).to("somewhere");
What is the reason behind limiting the application to one message consumer? Each processing step you add to a Camel route should be stateless.
I'm not sure I fully understand your requirements, so here's a general hint.
Camel offers JMS support out of the box, and JMS Queues may be what you are looking for:
JMS queue
A staging area that contains messages that have been sent and are waiting to be read (by only one consumer). Contrary to what the name queue suggests, messages don't have to be received in the order in which they were sent. A JMS queue only guarantees that each message is processed only once.
Your route could be something like:
<route>
<from uri="jms:queue:myqueue" />
<log message="Received message: ${body}" />
<to uri="bean:yourProcessorHere" />
</route>
ActiveMQ is also supported.
You might want to use another Camel route policy. Camel comes with support for a ZooKeeper-based route policy (http://camel.apache.org/zookeeper.html). In such a solution ZooKeeper would elect the active node.
If you don't want to build up the necessary ZooKeeper infrastructure, you could roll your own route policy and use a database to elect the active node by having your nodes compete to lock a table row; have a look here and here for inspiration. Be aware that the code might be outdated, but it could be a starting point.
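A minimal sketch of the ZooKeeper-based approach in XML, assuming camel-zookeeper is on the classpath and that the ZooKeeperRoutePolicy constructor (ZooKeeper election URI plus the number of instances allowed to be active) matches your Camel version; the addresses and node paths are made up:
<bean id="zkRoutePolicy" class="org.apache.camel.component.zookeeper.policy.ZooKeeperRoutePolicy">
<constructor-arg value="zookeeper:localhost:2181/myapp/rabbitConsumerElection"/>
<!-- 1 = only one route instance in the cluster is active at a time -->
<constructor-arg value="1"/>
</bean>
<camel:route id="rabbitConsumer" routePolicyRef="zkRoutePolicy">
<camel:from uri="{{consumer.endpoint}}"/>
<camel:to uri="log:example?showBody=true&multiline=false"/>
</camel:route>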

Keep messages in ActiveMQ queue if ThreadPoolTaskExecutor has no free capacity

I have two Java processes, the first one of them produces messages and puts
them onto an ActiveMQ queue. The second process (consumer) uses Spring
Integration to get messages from the queue and processes them in threads.
I have two requirements:
The consumer should have 3 processing threads. If I have 10 messages
coming in through the queue, I want to have 3 threads processing the first 3
messages, and the other 7 messages should be buffered.
When the consumer stops while some messages are not yet processed, it
should continue processing the messages after a restart.
Here's my config:
<bean id="messageActiveMqQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="example.queue" />
</bean>
<int-jms:message-driven-channel-adapter
destination="messageActiveMqQueue" channel="incomingMessageChannel" />
<int:channel id="incomingMessageChannel">
<int:dispatcher task-executor="incomingMessageChannelExecutor" />
</int:channel>
<bean id="incomingMessageChannelExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="daemon" value="false" />
<property name="maxPoolSize" value="3" />
</bean>
<int:service-activator input-channel="incomingMessageChannel"
ref="myMessageProcessor" method="processMessage" />
The first requirement works as expected. I produce 10 messages and 3
myMessageProcessors start processing a message each. As soon as the 1st message has
finished, the 4th message is processed.
However, when I kill the consumer before all messages are processed, those
messages are lost. After a restart, the consumer does not get those messages
again.
I think in the above configuration that's because the threads generated by the
ThreadPoolTaskExecutor queue the messages. So the messages are already removed
from the incomingMessageChannel. Hence I tried setting the queue capacity of
the incomingMessageChannelExecutor:
<property name="queueCapacity" value="0" />
But now I get error messages when I have more than 3 messages:
2013-06-12 11:47:52,670 WARN [org.springframework.jms.listener.DefaultMessageListenerContainer] - Execution of JMS message listener failed, and no ErrorHandler has been set.
org.springframework.integration.MessageDeliveryException: failed to send Message to channel 'incomingMessageChannel'
I also tried changing the message-driven-channel-adapter to an inbound-gateway,
but this gives me the same error.
Do I have to set an error handler in the inbound-gateway, so that the errors go back to the ActiveMQ queue? How do I have to configure the queue so that the messages are kept in the queue if the ThreadPoolTaskExecutor doesn't have a free thread?
Thanks in advance,
Benedikt
No; instead of using an executor channel, you should be controlling the concurrency with the <message-driven-channel-adapter/>.
Remove the <dispatcher/> from the channel and set concurrent-consumers="3" on the adapter.
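A minimal sketch of the revised configuration, with names taken from the question. The acknowledge="transacted" setting is an assumption on top of the answer: it keeps unprocessed messages on the broker if the consumer dies, which is what requirement 2 asks for:
<int-jms:message-driven-channel-adapter
destination="messageActiveMqQueue"
channel="incomingMessageChannel"
concurrent-consumers="3"
acknowledge="transacted"/>
<!-- plain channel: processing now happens on the 3 listener container threads -->
<int:channel id="incomingMessageChannel"/>
<int:service-activator input-channel="incomingMessageChannel"
ref="myMessageProcessor" method="processMessage"/>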
