Getting stuck while working with RabbitMQ using Spring-AMQP.
I just need a way to configure AutomaticRecoveryEnabled and NetworkRecoveryInterval using Spring-AMQP. There is a direct option to set these flags if you are developing with the native RabbitMQ library, but I didn't find a way to do the same using Spring.
Using the RabbitMQ native library (no help needed):
factory.setAutomaticRecoveryEnabled(true);
factory.setNetworkRecoveryInterval(10000);
Using Spring-AMQP (need help):
I didn't find any methods like the ones above while trying with Spring-AMQP. This is what I am doing now:
@Bean(name="listener")
public SimpleMessageListenerContainer listenerContainer()
{
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(connectionFactory());
container.setQueueNames(env.getProperty("mb.queue"));
container.setMessageListener(new MessageListenerAdapter(messageListener));
return container;
}
Any help in this regard is highly appreciated. Thanks in advance.
Just to clarify; Spring AMQP is NOT compatible with automaticRecoveryEnabled.
It has its own recovery mechanisms and has no awareness of the underlying recovery being performed by the client. This leaves dangling connection(s) and Channel(s).
I am working on a temporary work-around that will make it compatible (but it will effectively disable the client recovery of any connections/channels used by Spring AMQP, while leaving the client recovery in place for other users of the same connection factory).
A longer term fix will require a major rewrite of the listener container to utilize the client recovery code instead.
Well, CachingConnectionFactory has another constructor that accepts a com.rabbitmq.client.ConnectionFactory.
So, it is enough to configure the latter as an additional @Bean with the appropriate options and inject it into the CachingConnectionFactory.
As of version 4.0.0 of the Java client, automatic recovery is enabled by default.
It can be done like this,
ConnectionFactory factory = new ConnectionFactory();
factory.setAutomaticRecoveryEnabled(true);
// connection that will recover automatically
Connection conn = factory.newConnection();
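Putting the two together, a minimal sketch of the wiring described above (bean names are illustrative; keep in mind the compatibility caveat from the earlier answer):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // Native client factory carrying the recovery flags from the question.
    @Bean
    public com.rabbitmq.client.ConnectionFactory rabbitConnectionFactory() {
        com.rabbitmq.client.ConnectionFactory factory = new com.rabbitmq.client.ConnectionFactory();
        factory.setAutomaticRecoveryEnabled(true);
        factory.setNetworkRecoveryInterval(10000);
        return factory;
    }

    // Spring AMQP caching factory wrapping the pre-configured native factory.
    @Bean
    public CachingConnectionFactory connectionFactory() {
        return new CachingConnectionFactory(rabbitConnectionFactory());
    }
}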
Our configuration is: 1...n Message receivers with a shared database.
Messages should only be processed once.
@RabbitListener(bindings = @QueueBinding(
value = @Queue(value = "message-queue", durable = "true"),
exchange = @Exchange(value = TOPIC_EXCHANGE, type = "topic", durable = "true"),
key = MESSAGE_QUEUE1_RK)
)
public void receiveMessage(CustomMessage message) throws InterruptedException {
System.out.println("I have been received = " + message);
}
We want to guarantee messages will be processed only once; we have a message store with the ids of messages already processed.
Is it possible to hook in this check before receiveMessage?
We tried a MessagePostProcessor with a RabbitTemplate, but that didn't seem to work.
Any advice on how to do this?
We tried a MethodInterceptor and that works, but it is pretty ugly.
Thanks
Solution found - thanks to Gary
I created a MessagePostProcessorInjector which implements SmartLifecycle,
and on startup I inspect each container and, if it is an AbstractMessageListenerContainer, add a custom MessagePostProcessor
and a custom ErrorHandler which looks for certain types of exceptions and drops them (others are forwarded to the defaultErrorHandler).
Since we are using DLQs, I found that throwing exceptions or setting the message to null wouldn't really work.
I'll make a pull request to ignore null Messages after a MPP.
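For reference, a minimal sketch of the injector described above, assuming setAfterReceivePostProcessors(...) is available on the container (it lives on SimpleMessageListenerContainer in older versions) and with the duplicate lookup left as a placeholder:

import org.springframework.amqp.ImmediateAcknowledgeAmqpException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.MessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

@Component
public class MessagePostProcessorInjector implements SmartLifecycle {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    private volatile boolean running;

    @Override
    public void start() {
        for (MessageListenerContainer container : this.registry.getListenerContainers()) {
            if (container instanceof AbstractMessageListenerContainer) {
                AbstractMessageListenerContainer amlc = (AbstractMessageListenerContainer) container;
                // Duplicates are acked immediately and never reach the listener
                // (see the note about ImmediateAcknowledgeAmqpException further below).
                amlc.setAfterReceivePostProcessors(message -> {
                    if (alreadyProcessed(message)) {
                        throw new ImmediateAcknowledgeAmqpException("duplicate message");
                    }
                    return message;
                });
                // A custom ErrorHandler could be set here as well: amlc.setErrorHandler(...)
            }
        }
        this.running = true;
    }

    // Placeholder: look the message id up in the shared message store.
    private boolean alreadyProcessed(Message message) {
        return false;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        // Start early so the containers are customized before they are started.
        return Integer.MIN_VALUE;
    }
}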
Interesting; the SimpleMessageListenerContainer does have a property afterReceivePostProcessors (not currently available via the listener container factory used by the annotation, but it could be injected later).
However, those postprocessors won't help because we still invoke the listener.
Please feel free to open a JIRA Improvement Issue for two things:
expose the afterReceivePostProcessors in the listener container factories
if a post processor returns null, skip calling the listener method.
(correction, the property is indeed exposed by the factory).
EDIT
How it works...
During context initialization...
For each annotation detected by the bean post processor, a container is created and registered in the RabbitListenerEndpointRegistry.
Near the end of context initialization, the registry is start()ed and it starts all containers that are configured for autoStartup (default).
To do further configuration of the container before it's started (e.g. for properties not currently exposed by the container factories), set autoStartup to false.
You can then get the container(s) from the registry (either as a collection or by id). Simply @Autowire the registry in your app.
Cast the container to a SimpleMessageListenerContainer (or alternatively a DirectMessageListenerContainer if using Spring AMQP 2.0 or later and you are using its factory instead).
Set the additional properties (such as the afterReceivePostProcessors); then start() the container.
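A minimal sketch of those steps (the listener id "myListener" and the post-processor body are assumptions):

import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;

public class ContainerCustomizer {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // Call this once the application context has been initialized; the container factory
    // (or the @RabbitListener) must have been configured with autoStartup = false.
    public void configureAndStart() {
        SimpleMessageListenerContainer container =
                (SimpleMessageListenerContainer) this.registry.getListenerContainer("myListener");
        container.setAfterReceivePostProcessors(message -> {
            // inspect or modify the message here before the listener is invoked
            return message;
        });
        container.start();
    }
}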
Note: until we enhance the container to allow MPPs that return null, a possible alternative is to throw an AmqpRejectAndDontRequeueException from the MPP. However, this is probably not what you want if you have DLQs configured.
Throwing an exception extending ImmediateAcknowledgeAmqpException from postProcessMessage() of the duplicate-checking MPP when the message is a duplicate will also prevent the message from being passed to the rabbit listener.
I have a problem with using ActiveMQ in Spring application.
I have a few environments on separate machines. On each machine I had one ActiveMQ instance installed. Now I realize that I can have just one ActiveMQ instance installed on one server, and a few applications can use that instance for sending messages. So, I must change the queue names in order to have different queues for different environments ("queue.search.sandbox", "queue.search.production", ...).
After that change, ActiveMQ now creates the new queues, but also the old ones, although there is no configuration that should cause that.
I am using Java Spring application with Java configuration, not XML.
First, I create the JmsTemplate as a Spring bean:
@Bean
public JmsTemplate jmsAuditQueueTemplate() {
log.debug("ActiveMQConfiguration jmsAuditQueueTemplate");
JmsTemplate jmsTemplate = new JmsTemplate();
String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
jmsTemplate.setDefaultDestination(new ActiveMQQueue(queueName));
jmsTemplate.setConnectionFactory(connectionFactory());
return jmsTemplate;
}
Second, I create ActiveMQ Listener configuration:
@Bean
public DefaultMessageListenerContainer jmsAuditQueueListenerContainer() {
log.debug("ActiveMQConfiguration jmsAuditQueueListenerContainer");
DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
dmlc.setConnectionFactory(connectionFactory);
String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
ActiveMQQueue activeMQ = new ActiveMQQueue(queueName);
dmlc.setDestination(activeMQ);
dmlc.setRecoveryInterval(30000);
dmlc.setSessionTransacted(true);
// To perform actual message processing
dmlc.setMessageListener(auditQueueListenerService);
dmlc.setConcurrentConsumers(10);
// ... more parameters that you might want to inject ...
return dmlc;
}
After building my application, the queue with the suffix ("queue.audit.sandbox") is properly created, but after some time ActiveMQ also creates the old version ("queue.audit").
Does anyone know why ActiveMQ is doing this? Thanks in advance.
There is probably still an entry in the index for the queue, so when ActiveMQ restarts it displays the queue again. If you want to be certain about destinations, use startup destinations and disable auto-creation by denying the "admin" permission to the connecting user account in the authorization entry, as sketched below.
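If you manage the broker from Java (an embedded broker), a minimal sketch of startup destinations could look like this (broker URL and queue name are illustrative; on a standalone broker the equivalent is the destinations element, plus the authorization entries, in activemq.xml):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQQueue;

public BrokerService auditBroker() throws Exception {
    BrokerService broker = new BrokerService();
    // Pre-create only the destinations you actually want ("startup destinations").
    broker.setDestinations(new ActiveMQDestination[] {
            new ActiveMQQueue("queue.audit.sandbox")
    });
    broker.addConnector("tcp://0.0.0.0:61616");
    broker.start();
    return broker;
}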
After some time, ActiveMQ just stopped creating the queues that shouldn't exist.
Now we have the expected behavior, without the unnecessary queues.
To be honest, I still haven't found out what solved the problem...
So from time to time we see exceptions like these:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.bson.io.Bits.readFully(Bits.java:48)
at org.bson.io.Bits.readFully(Bits.java:35)
at org.bson.io.Bits.readFully(Bits.java:30)
at com.mongodb.Response.<init>(Response.java:42)
at com.mongodb.DBPort$1.execute(DBPort.java:141)
at com.mongodb.DBPort$1.execute(DBPort.java:135)
at com.mongodb.DBPort.doOperation(DBPort.java:164)
at com.mongodb.DBPort.call(DBPort.java:135)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
at com.mongodb.DBCollection.findOne(DBCollection.java:870)
at com.mongodb.DBCollection.findOne(DBCollection.java:844)
at com.mongodb.DBCollection.findOne(DBCollection.java:790)
at org.springframework.data.mongodb.core.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:2000)
What's the best way to handle and recover from these in code?
Do we need to put a 'retry' around each and every MongoDB call?
You can use the MongoClientOptions object to set various optional connection parameters. You are looking at setting the heartbeat frequency to make sure the driver retries the connection. Also set the socket timeout to make sure it does not block for too long.
MinHeartbeatFrequency: In the event that the driver has to frequently re-check a server's availability, it will wait at least this long since the previous check to avoid wasted effort. The default value is 10ms.
HeartbeatSocketTimeout: Timeout for the heartbeat check.
SocketTimeout: Timeout for the connection.
Reference API
To avoid too much code duplication, you can optionally follow a pattern like the one given below.
The basic idea is to avoid having database-connection configuration littered everywhere in the project.
/**
* This class is an abstraction for all mongo connection config
**/
@Component
public class MongoConnection {
MongoClient mongoClient = null;
...
@PostConstruct
public void init() throws Exception {
// Please watch out for deprecated methods in new version of driver.
mongoClient = new MongoClient(new ServerAddress(url, port),
MongoClientOptions.builder()
.socketTimeout(3000)
.minHeartbeatFrequency(25)
.heartbeatSocketTimeout(3000)
.build());
mongoDb = mongoClient.getDB(db);
.....
}
public DBCollection getCollection(String name) {
return mongoDb.getCollection(name);
}
}
Now you can use MongoConnection in your DAOs:
@Repository
public class ExampleDao {
@Autowired
MongoConnection mongoConnection;
public void insert(BasicDBObject document) {
mongoConnection.getCollection("example").insert(document);
}
}
You can also implement all the database operations inside MongoConnection to introduce some common functionality across the board, for example logging for all "inserts", as sketched below.
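For instance, a method one might add to MongoConnection (a sketch; the logger is assumed to exist in that class):

// Centralized insert so every write is logged in one place.
public WriteResult insert(String collectionName, DBObject document) {
    log.debug("Inserting into {}: {}", collectionName, document);
    return getCollection(collectionName).insert(document);
}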
One of the many options to handle retries is the Spring Retry project:
https://github.com/spring-projects/spring-retry
It provides declarative retry support for Spring applications.
This is basically Spring's answer to this problem. It is used in Spring Batch, Spring Integration, and Spring for Apache Hadoop (amongst others).
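For illustration, a minimal sketch using Spring Retry's annotations around the ExampleDao idea from the earlier answer (assumes spring-retry is on the classpath, @EnableRetry is declared on a configuration class, and that the driver surfaces the failure as a MongoException; adjust the exception type to whatever your driver/Spring Data setup actually throws):

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import com.mongodb.MongoException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Repository;

@Repository
public class RetryingExampleDao {

    @Autowired
    MongoConnection mongoConnection;

    // Retry the read a few times with a short pause between attempts.
    @Retryable(value = MongoException.class, maxAttempts = 3, backoff = @Backoff(delay = 1000))
    public DBObject findOne(BasicDBObject query) {
        return mongoConnection.getCollection("example").findOne(query);
    }

    // Invoked once the retries are exhausted.
    @Recover
    public DBObject recover(MongoException e, BasicDBObject query) {
        return null;
    }
}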
If you want to tackle timeout (and related) problems not only for MongoDB but also for any other external dependencies, then you should try Netflix's Hystrix (https://github.com/Netflix/Hystrix).
It is an awesome library that integrates nicely with RxJava and the asynchronous processing style that has become much more popular lately.
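As an illustration, a minimal HystrixCommand sketch wrapping a Mongo lookup (class and group names are illustrative):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class FindOneCommand extends HystrixCommand<DBObject> {

    private final DBCollection collection;
    private final BasicDBObject query;

    public FindOneCommand(DBCollection collection, BasicDBObject query) {
        super(HystrixCommandGroupKey.Factory.asKey("mongo"));
        this.collection = collection;
        this.query = query;
    }

    @Override
    protected DBObject run() {
        // Runs on a Hystrix thread with its own timeout and circuit breaker.
        return collection.findOne(query);
    }

    @Override
    protected DBObject getFallback() {
        // Used when the command fails, times out, or the circuit is open.
        return null;
    }
}

Calling new FindOneCommand(collection, query).execute() then gives you a timeout, a fallback and circuit breaking around the query.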
If I'm not mistaken, you need to configure properties like the timeout when you build the connection, or prepare them properly in the connection pool.
Alternatively, check your network or machine, and split your request data into smaller chunks to reduce network transfer time.
https://github.com/Netflix/Hystrix is your tool for handling dependencies.
I need to be able to update the refreshInterval for the JMS client programmatically.
I tried to do it through the JmsConfiguration bean, but that was useless, and I couldn't find any setting on the ActiveMQConnectionFactory class that I could use to update that value.
You can set the recoveryInterval property on the ActiveMQComponent/JmsComponent or set the same property on the JmsConfiguration POJO.
However, since the Camel ActiveMQ/JMS consumer is based on Spring JMS listeners, such as the DefaultMessageListenerContainer, you cannot simply change that parameter at runtime (if that's what you intend). You need to set the recoveryInterval before the route is created. You can, of course, recreate the route (and possibly the ActiveMQ component) and have the recovery interval set programmatically.
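A minimal sketch of setting it programmatically before the routes are created (broker URL and value are illustrative):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public CamelContext createContext() throws Exception {
    CamelContext camelContext = new DefaultCamelContext();

    ActiveMQComponent activemq = ActiveMQComponent.activeMQComponent("tcp://localhost:61616");
    // recoveryInterval lives on the JmsConfiguration POJO; set it before the routes are created.
    activemq.getConfiguration().setRecoveryInterval(5000);

    camelContext.addComponent("activemq", activemq);
    // ... add routes that use "activemq:..." endpoints, then start the context
    return camelContext;
}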
If you really need this feature, you can subclass DefaultMessageListenerContainer to allow setRecoveryInterval to actually take effect at runtime (not sure how easy that is; there is some thread handling to watch out for). Your custom MLC can be supplied to Camel via the messageListenerContainerFactoryRef option.
I am using Spring annotations to initialize my BayeuxServer. I enabled WebSocket by setting the transport in my Spring bean:
BayeuxServerImpl bean = new BayeuxServerImpl();
bean.setTransports(new WebSocketTransport(bean));
But now, when the WebSocket connection fails or is disabled in JS ($.cometd.websocketEnabled = false;), it does not fall back to long polling successfully. It throws the error "400 Unknown Bayeux Transport" in the Firebug console.
I couldn't pass LongPollingTransport to setTransports since LongPollingTransport is an abstract class in the library. I tried creating a class which extends LongPollingTransport and specifying it in the setTransports call, but that didn't work either. Please let me know if I am doing something wrong. We need long polling to work in case WebSocket fails.
cometd version: 2.5.1
jetty version: 7.6.8
By calling BayeuxServer.setTransports(...) with just one transport, you basically disable any fallback capability, since you are explicitly telling CometD to use only one transport.
Class LongPollingTransport has 2 subclasses depending on the specific mechanism to use; you may want to use class JSONTransport.
Note that the CometD documentation has an example of how to setup WebSocket with Spring using XML, but it is enough to translate the XML into code to have it working with annotations.
Basically, it all boils down to:
bayeuxServer.setTransports(new WebSocketTransport(bayeuxServer), new JSONTransport(bayeuxServer));
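Translated into Spring Java configuration, a minimal sketch for the CometD 2.x classes used above could look like this (additional options and the servlet-context wiring described in the CometD documentation are omitted):

@Bean(initMethod = "start", destroyMethod = "stop")
public BayeuxServerImpl bayeuxServer() {
    BayeuxServerImpl bayeuxServer = new BayeuxServerImpl();
    bayeuxServer.setTransports(new WebSocketTransport(bayeuxServer),
            new JSONTransport(bayeuxServer));
    return bayeuxServer;
}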