How can I shut down a Camel context ungracefully (i.e. stop it immediately)?
As soon as I click the button, the Camel route should stop immediately. I don't want any delay.
Every time I call camelRoute.context.stop(), it takes some time to stop, and during that window the route is still active, so messages keep being dequeued and sent to the target queue.
I want to stop the route mid-way when I click the desired button.
Is there a way to handle it?
Have a look at the timeout property of the DefaultShutdownStrategy.
Try setting it to zero in your Camel Context:
<bean id="shutdownStrategy" class="org.apache.camel.impl.DefaultShutdownStrategy">
<property name="timeout" value="0"/>
</bean>
The value is in seconds by default.
Also, have a look at Graceful Shutdown in the Camel docs, if you haven't yet.
EDIT 1: The DefaultShutdownStrategy does not allow a timeout of 0. You could try setting it to 1 NANOSECOND instead, which might help:
<bean id="shutdownStrategy" class="org.apache.camel.impl.DefaultShutdownStrategy">
<property name="timeout" value="1"/>
<property name="timeUnit" value="NANOSECONDS" /
</bean>
Alternatively, you can implement your own ShutdownStrategy if it's really important for you to guarantee absolute immediate shutdown.
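If you prefer to do it from code (for example from the button handler), here is a minimal sketch assuming the Camel 2.x Java API; the route id "myRoute" is just a placeholder:

import java.util.concurrent.TimeUnit;
import org.apache.camel.CamelContext;

public class StopRouteButton {

    private final CamelContext camelContext;

    public StopRouteButton(CamelContext camelContext) {
        this.camelContext = camelContext;
    }

    public void onClick() throws Exception {
        // Shrink the grace period to effectively nothing; 0 is rejected,
        // so use 1 nanosecond as in the XML above.
        camelContext.getShutdownStrategy().setTimeout(1);
        camelContext.getShutdownStrategy().setTimeUnit(TimeUnit.NANOSECONDS);

        // Stop a single route, or call camelContext.stop() for the whole context.
        camelContext.stopRoute("myRoute");
    }
}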
Related
Here is a simplified ftp polling mechanism.
<camelContext id="Fetcher" xmlns="http://camel.apache.org/schema/blueprint">
<redeliveryPolicyProfile id="msRedeliveryPolicy"
redeliveryDelay="10000"
maximumRedeliveries="-1" />
<camel:route id="fetchFiles">
<camel:from uri="ftp://10.20.30.40/From?username=user&password=RAW({{password}})&delay=3000" />
<camel:to uri="log:input?showAll=true&level=INFO"/>
<camel:to uri="file://incomingDirectory" />
<onException redeliveryPolicyRef="msRedeliveryPolicy">
<exception>java.lang.Exception</exception>
<redeliveryPolicy logRetryAttempted="true" retryAttemptedLogLevel="WARN"/>
</onException>
</camel:route>
</camelContext>
What do you think happens on failure? (Delay is 3 seconds, and
redeliveryDelay is 10 seconds.)
Answer: It polls every 3 seconds, forever.
So let's look at the docs. Maybe I need this
"repeatCount (scheduler)"
Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.
Default: 0
Nope, it's not even a valid parameter. So why's it in the docs?
Unknown parameters=[{repeatCount=5}]
OK, so I suppose it polls every 3 seconds. So how do I tell Camel to stop that? Let's try setting 'handled' to true:
<onException redeliveryPolicyRef="msRedeliveryPolicy">
<exception>java.lang.Exception</exception>
<redeliveryPolicy logRetryAttempted="true" retryAttemptedLogLevel="WARN"/>
<handled><constant>true</constant></handled>
</onException>
No luck. Still 3 seconds. It's clearly not even getting to the redelivery part.
What's the secret?
The fact is that errors happening in the from endpoint are not handled by the user-defined route (i.e. fetchFiles in the setup above). So onException and redeliveryPolicy are not involved, because they only affect what belongs to the user-defined route.
To control the behavior of the consumer defined by the from endpoint, the obvious way is to use the options that exist on that component. As suggested by @Screwtape, use backoffErrorThreshold and backoffMultiplier for your case.
Why does the parameter repeatCount appear in the docs but is invalid to use? It probably does not exist in your Camel version, and the documentation writers forgot to note the version in which it first appeared.
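To make the backoff suggestion concrete, here is a rough Java DSL sketch of the same route. It is only a sketch: the backoff values are made up, the host, credentials and delay are carried over from the question, and per the Camel scheduled-poll-consumer options, after backoffErrorThreshold consecutive error polls the consumer skips backoffMultiplier scheduled polls before trying again.

import org.apache.camel.builder.RouteBuilder;

public class FetchFilesRoute extends RouteBuilder {
    @Override
    public void configure() {
        // After 3 consecutive failed polls, skip the next 5 scheduled polls
        // before polling again (illustrative values).
        from("ftp://10.20.30.40/From?username=user&password=RAW({{password}})"
                + "&delay=3000"
                + "&backoffErrorThreshold=3"
                + "&backoffMultiplier=5")
            .routeId("fetchFiles")
            .to("log:input?showAll=true&level=INFO")
            .to("file://incomingDirectory");
    }
}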
I have a problem with ActiveMQ similar to this one:
http://activemq.2283324.n4.nabble.com/Messages-stuck-in-pending-td4617979.html
and already tried the solution posted here.
Some messages seem to get stuck on the queue and can sit there for literally days without being consumed. I have more than enough consumers that are free most of the time, so it's not an issue of "saturation" of consumers.
Upon restart of ActiveMQ, SOME of the pending messages are consumed right away. Just a moment ago I had a situation where I had 25 free consumers available for the queue (they are visible in the admin panel) and 7 of those "stuck" messages. Four of them were consumed right away, but the other 3 are still stuck. The other strange thing is that new messages kept coming into the queue and were consumed right away, while the 3 old ones stayed stuck.
On the consumer side my config in spring looks as follows:
<jms:listener-container concurrency="${activemq.concurrent.consumers}" prefetch="1">
<jms:listener destination="queue.request" response-destination="queue.response" ref="requestConsumer" method="onRequest"/>
</jms:listener-container>
<bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy">
<property name="queuePrefetch" value="1" />
</bean>
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="${activemq.broker.url}?initialReconnectDelay=100&maxReconnectDelay=10000&startupMaxReconnectAttempts=3"/>
<property name="prefetchPolicy" ref="prefetchPolicy"/>
</bean>
The "stuck" messages are probably considered as "in delivery", restarting the broker will close the connections and, as the message are yet not acknowledged, the broker considers them as not delivered and will deliver them again.
There may be several problem leading to such a situation, most common ones are a problem in transaction / acknowledgment configuration, bad error / acknowledgment management on consumer side (the message is consumed but never acknowledged) or consumer being stuck on an endless operation (for example a blocking call to a third party resource which doesn't respond and there is no timeout handling).
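To make the "consumed but never acknowledged" case concrete, here is a plain-JMS sketch. The broker URL is a placeholder and the queue name is taken from the config above:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AckDemo {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the broker keeps the message "in delivery"
        // until acknowledge() is called.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("queue.request"));

        Message message = consumer.receive(5000);
        if (message != null) {
            try {
                // ... business logic ...
                message.acknowledge(); // skip this (e.g. on a swallowed exception) and the
                                       // message stays pending until the connection is
                                       // closed or the broker restarts
            } catch (Exception e) {
                session.recover(); // ask for redelivery instead of leaving it stuck
            }
        }
        connection.close();
    }
}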
I am fairly new to Spring and Spring Batch, so feel free to ask any clarifying questions if you have any.
I am seeing an issue with Spring Batch that I cannot recreate in our test or local environments. We have a daily job that connects to Websphere MQ via JMS and retrieves a set of records. This job uses the out-of-the-box JMS ItemReader. We implement our own ItemProcessor, but it doesn't do anything special other than logging. There are no filters or processing that should affect incoming records.
The problem is that out of the 10,000+ daily records on MQ, only about 700 or so (the exact number is different each time) usually get logged in the ItemProcessor. All records are successfully pulled off the queue. The number of records logged is different each time and seems to have no pattern. By comparing the log files against the list of records in MQ, we can see that a seemingly random subset of records are being "processed" by our job. The first record might get picked up, then 50 are skipped, then 5 in a row, etc. And the pattern is different each time the job runs. No exceptions are logged either.
When running the same app in localhost and test using the same data set, all 10,000+ records are successfully retrieved and logged by the ItemProcessor. The job runs between 20 and 40 seconds in Production (also not constant), but in test and local it takes several minutes to complete (which obviously makes sense since it is handling so many more records).
So this is one of those tough issues to troubleshoot, since we cannot recreate it. One idea is to implement our own ItemReader and add additional logging so that we can see whether records are getting lost before the reader or after it; all we know now is that only a subset of records is being handled by the ItemProcessor. But even that will not solve our problem, and it will be somewhat time-consuming to implement considering it is not even a solution.
Has anyone else seen an issue like this? Any possible ideas or troubleshooting suggestions would be greatly appreciated. Here are some of the jar version numbers we are using for reference.
Spring - 3.0.5.RELEASE
Spring Integration - 2.0.3.RELEASE
Spring Batch - 2.1.7.RELEASE
Active MQ - 5.4.2
Websphere MQ - 7.0.1
Thanks in advance for your input.
EDIT: Per request, code for processor:
public SMSReminderRow process(Message message) throws Exception {
    SMSReminderRow retVal = new SMSReminderRow();
    LOGGER.debug("Converting JMS Message to ClaimNotification");
    ClaimNotification notification = createClaimNotificationFromMessage(message);
    retVal.setShortCode(BatchCommonUtils
            .parseShortCodeFromCorpEntCode(notification.getCorpEntCode()));
    retVal.setUuid(UUID.randomUUID().toString());
    retVal.setPhoneNumber(notification.getPhoneNumber());
    retVal.setMessageType(EventCode.SMS_CLAIMS_NOTIFY.toString());
    DCRContent content = tsContentHelper.getTSContent(
            Calendar.getInstance().getTime(),
            BatchCommonConstants.TS_TAG_CLAIMS_NOTIFY,
            BatchCommonConstants.TS_TAG_SMSTEXT_TYP);
    String claimsNotificationMessage = formatMessageToSend(content.getContent(),
            notification.getCorpEntCode());
    retVal.setMessageToSend(claimsNotificationMessage);
    retVal.setDateTimeToSend(TimeUtils.getGMTDateTimeStringForDate(new Date()));
    LOGGER.debug("Finished processing claim notification for {}. Writing row to file.",
            notification.getPhoneNumber());
    return retVal;
}
JMS config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
<bean id="claimsQueueConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jms/SMSClaimNotificationCF" />
<property name="lookupOnStartup" value="true" />
<property name="cache" value="true" />
<property name="proxyInterface" value="javax.jms.ConnectionFactory" />
</bean>
<bean id="jmsDestinationResolver"
class="org.springframework.jms.support.destination.DynamicDestinationResolver">
</bean>
<bean id="jmsJndiDestResolver"
class=" org.springframework.jms.support.destination.JndiDestinationResolver"/>
<bean id="claimsJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="claimsQueueConnectionFactory" />
<property name="defaultDestinationName" value="jms/SMSClaimNotificationQueue" />
<property name="destinationResolver" ref="jmsJndiDestResolver" />
<property name="pubSubDomain">
<value>false</value>
</property>
<property name="receiveTimeout">
<value>20000</value>
</property>
</bean>
As a rule, MQ will NOT lose messages when properly configured. The question then is what does "properly configured" look like?
Generally, lost messages are caused by non-persistence or non-transactional GETs.
If non-persistent messages are traversing QMgr-to-QMgr channels and NPMSPEED(FAST) is set, then MQ will not log errors if they are lost. That is what those options are intended for, so no error is expected.
Fix: Set NPMSPEED(NORMAL) on the QMgr-to-QMgr channel or make the messages persistent.
If the client is getting messages outside of syncpoint, messages can be lost. This has nothing to do with MQ specifically; it's just how messaging in general works. If you tell MQ to get a message destructively off the queue and it cannot deliver that message to the remote application, then the only way for MQ to roll it back is if the message was retrieved under syncpoint.
Fix: Use a transacted session.
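For instance, with the plain JMS API (which the WMQ JMS client implements), getting under syncpoint looks roughly like this. It is only a sketch and the queue name is a placeholder:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class TransactedGet {

    public Message getOne(ConnectionFactory connectionFactory) throws Exception {
        Connection connection = connectionFactory.createConnection();
        connection.start();
        // transacted = true: the destructive GET is under syncpoint
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("CLAIMS.NOTIFY")); // placeholder name
        try {
            Message message = consumer.receive(20000);
            if (message != null) {
                // ... hand the message to the processing logic ...
                session.commit();   // only now is the message really gone
            }
            return message;
        } catch (Exception e) {
            session.rollback();     // the message goes back on the queue
            throw e;
        } finally {
            connection.close();
        }
    }
}

With the Spring JmsTemplate from the config above, the rough equivalent is setting sessionTransacted to true and committing only after the item has been processed.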
There are some additional notes, born out of experience.
Everyone swears message persistence is set to what they think it is. But when I stop the application and inspect the messages manually, it very often is not what is expected. It's easy to verify, so don't assume.
If a message is rolled back onto the queue, it won't happen until MQ or TCP times out the orphaned channel. This can take up to 2 hours, so tune the channel parameters and TCP KeepAlive to reduce that.
Check MQ's error logs (the ones at the QMgr not the client) to look for messages about transactions rolling back.
If you still cannot determine where the messages are going, try tracing with SupportPac MA0W. This trace runs as an exit and it is extremely configurable. You can trace all GET operations on a single queue and only that queue. The output is in human-readable form.
See http://activemq.apache.org/jmstemplate-gotchas.html .
There are issues using the JmsTemplate. I only ran into these issues when I upgraded my hardware and suddenly exposed a pre-existing race condition.
The short form is that, by design and intent, the JmsTemplate opens and closes the connection on every invocation. It will not see messages older than its creation. In high-volume and/or high-throughput scenarios, it will fail to read some messages.
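One mitigation I have seen (an assumption on my part, not something that page prescribes) is to wrap the real factory in Spring's CachingConnectionFactory so the template reuses the connection, session and consumer across receive() calls instead of opening and closing them every time:

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class CachedJmsTemplateFactory {

    public static JmsTemplate create(ConnectionFactory targetFactory) {
        CachingConnectionFactory caching = new CachingConnectionFactory(targetFactory);
        caching.setCacheConsumers(true);   // reuse the MessageConsumer across receive() calls

        JmsTemplate template = new JmsTemplate(caching);
        template.setDefaultDestinationName("jms/SMSClaimNotificationQueue"); // name from the config above
        template.setReceiveTimeout(20000);
        return template;
    }
}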
I have two Java processes, the first one of them produces messages and puts
them onto an ActiveMQ queue. The second process (consumer) uses Spring
Integration to get messages from the queue and processes them in threads.
I have two requirements:
The consumer should have 3 processing threads. If I have 10 messages
coming in through the queue, I want to have 3 threads processing the first 3
messages, and the other 7 messages should be buffered.
When the consumer stops while some messages are not yet processed, it
should continue processing the messages after a restart.
Here's my config:
<bean id="messageActiveMqQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="example.queue" />
</bean>
<int-jms:message-driven-channel-adapter
destination="messageActiveMqQueue" channel="incomingMessageChannel" />
<int:channel id="incomingMessageChannel">
<int:dispatcher task-executor="incomingMessageChannelExecutor" />
</int:channel>
<bean id="incomingMessageChannelExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="daemon" value="false" />
<property name="maxPoolSize" value="3" />
</bean>
<int:service-activator input-channel="incomingMessageChannel"
ref="myMessageProcessor" method="processMessage" />
The first requirement works as expected. I produce 10 messages and 3
myMessageProcessors start processing a message each. As soon as the 1st message has
finished, the 4th message is processed.
However, when I kill the consumer before all messages are processed, those
messages are lost. After a restart, the consumer does not get those messages
again.
I think in the above configuration that's because the threads generated by the
ThreadPoolTaskExecutor queue the messages. So the messages are already removed
from the incomingMessageChannel. Hence I tried setting the queue capacity of
the incomingMessageChannelExecutor:
<property name="queueCapacity" value="0" />
But now I get error messages when I have more than 3 messages:
2013-06-12 11:47:52,670 WARN [org.springframework.jms.listener.DefaultMessageListenerContainer] - Execution of JMS message listener failed, and no ErrorHandler has been set.
org.springframework.integration.MessageDeliveryException: failed to send Message to channel 'incomingMessageChannel'
I also tried changing the message-driven-channel-adapter to an inbound-gateway,
but this gives me the same error.
Do I have to set an error handler in the inbound-gateway, so that the errors go back to the ActiveMQ queue? How do I have to configure the queue so that the messages are kept in the queue if the ThreadPoolTaskExecutor doesn't have a free thread?
Thanks in advance,
Benedikt
No; instead of using an executor channel, you should be controlling the concurrency with the <message-driven-channel-adapter/>.
Remove the <dispatcher/> from the channel and set concurrent-consumers="3" on the adapter.
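For context, the <int-jms:message-driven-channel-adapter/> is backed by Spring's DefaultMessageListenerContainer, so conceptually the suggestion boils down to something like this plain-Spring sketch (names are illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ConsumerContainerFactory {

    public static DefaultMessageListenerContainer create(ConnectionFactory connectionFactory,
                                                         MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("example.queue");
        // Three consumer threads; messages 4..10 stay on the broker until a
        // thread is free, so nothing is lost if the consumer is killed.
        container.setConcurrentConsumers(3);
        container.setSessionTransacted(true); // redeliver the in-flight message on failure
        container.setMessageListener(listener);
        return container;
    }
}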
I have message producers that are sending JMS messages about some events using ActiveMQ.
However, the connection to ActiveMQ might not be up all the time. So events are stored, and when the connection is established they are supposed to be read and sent over. Here is my code:
private void sendAndSave(MyEvent event) {
boolean sent = sendMessage(event);
event.setProcessed(sent);
boolean saved = repository.saveEvent(event);
if (!sent && !saved) {
logger.error("Change event lost for Id = {}", event.getId());
}
}
private boolean sendMessage(MyEvent event) {
try {
messenger.publishEvent(event);
return true;
} catch (JmsException ex) {
return false;
}
}
I'd like to create some kind of ApplicationEventListener that will be invoked when connection is established and process unsent events.
I went through the JMS, Spring Framework and ActiveMQ documentation but couldn't find any clues about how to hook my listener up to the ConnectionFactory.
If someone can help me out, I'll appreciate it greatly.
Here is what my app Spring context says about JMS:
<!-- Connection factory to the ActiveMQ broker instance. -->
<!-- The URI and credentials must match the values in activemq.xml -->
<!-- These credentials are shared by ALL producers. -->
<bean id="jmsTransportListener" class="com.rhd.ams.service.common.JmsTransportListener"
init-method="init" destroy-method="cleanup"/>
<bean id="amqJmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${jms.publisher.broker.url}"/>
<property name="userName" value="${jms.publisher.username}"/>
<property name="password" value="${jms.publisher.password}"/>
<property name="transportListener" ref="jmsTransportListener"/>
</bean>
<!-- JmsTemplate, by default, will create a new connection, session, producer for -->
<!-- each message sent, then close them all down again. This is very inefficient! -->
<!-- PooledConnectionFactory will pool the JMS resources. It can't be used with consumers.-->
<bean id="pooledAmqJmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="connectionFactory" ref="amqJmsConnectionFactory" />
</bean>
<!-- Although JmsTemplate instance is unique for each message, it is -->
<!-- thread-safe and therefore can be injected into referenced obj's. -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<constructor-arg ref="pooledAmqJmsConnectionFactory"/>
</bean>
The way you describe the issue, it sure sounds like an open-and-shut case of JMS Durable Subscriptions. You might want to consider a more traditional implementation before going down this road. Caveats aside, ActiveMQ provides Advisory Messages which you can listen for and which will be sent for various events including new connections.
=========
Shoot, sorry... I did not understand what the issue was. I don't think Advisories are the solution at all... after all, you need to be connected to the broker to get them, and being connected is precisely the thing in question here.
So if I understand it correctly (prepare for retry #2....), what you need is a client connection which, when it fails, attempts to reconnect indefinitely. When it does reconnect, you want to trigger an event (or more) that flushes pending messages to the broker.
So detecting the lost connection is easy. You just register a JMS ExceptionListener. As far as detecting a reconnect, the simplest way I can think of is to start a reconnect thread. When it connects, stop the reconnect thread and notify interested parties using Observer/Observable or JMX notifications or the like. You could use the ActiveMQ Failover Transport which will do a connection retry loop for you, even if you only have one broker. At least, it is supposed to, but it's not doing that much for you that would not be done by your own reconnect thread... but if you're willing to delegate some control to it, it will cache your unflushed messages (see the trackMessages option), and then send them when it reconnects, which is sort of all of what you're trying to do.
I guess if your broker is down for a few minutes, that's not a bad way to go, but if you're talking hours, or you might accumulate 10k+ messages in the downtime, I just don't know if that cache mechanism is as reliable as you would need it to be.
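If you do roll your own, a rough sketch of the ExceptionListener-plus-reconnect-timer idea might look like this (the 30-second interval and the repository flush are made up for illustration):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

public class ReconnectingFlusher implements ExceptionListener {

    private final ConnectionFactory connectionFactory;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ReconnectingFlusher(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // Registered on the live connection; fires when the broker goes away.
    @Override
    public void onException(JMSException e) {
        scheduler.schedule(this::tryReconnect, 30, TimeUnit.SECONDS);
    }

    private void tryReconnect() {
        try {
            Connection connection = connectionFactory.createConnection();
            connection.setExceptionListener(this);
            connection.start();
            flushPendingEvents(connection);
        } catch (JMSException e) {
            // Still down: try again later.
            scheduler.schedule(this::tryReconnect, 30, TimeUnit.SECONDS);
        }
    }

    private void flushPendingEvents(Connection connection) {
        // Hypothetical: read the unsent events from the repository, publish
        // them, and mark each one processed only after a successful send.
    }
}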
==================
Mobile app ... right. Not really appropriate for the failover transport. Then I would implement a timer that periodically connects (it might be a good idea to use the HTTP transport, but that's not relevant here). When it does connect, if there's nothing to flush, then see you in x minutes. If there is, send each message, wait for a handshake and purge the message from your mobile store. Then see you again in x minutes.
I assume this is Android? If not, stop reading here. We actually implemented this some time ago. I only did the server side, but if I remember correctly, the connection timer/poller spun every n minutes (at variable frequencies, I think, because getting too aggressive drained the battery). Once a successful connection was made, I believe they used an intent broadcast to nudge the message pushers to do their thing. The thinking was that even though there was only one message pusher, we might add more.