Handle reconnection/retries Lyra-style in Spring AMQP - java

I am using RabbitMQ behind the Spring AMQP abstraction, so in essence I am using Spring AMQP.
I need to handle connection failures. It's fairly easy to achieve this with Lyra when you use the raw RabbitMQ classes.
How do you achieve the same in Spring AMQP? I want my code to be unaware of any network problems.
I know Spring handles reconnections by default (in some way), but what I want is a Lyra-style configuration (be it in XML or elsewhere) so I can define the timeout, max retries, backoff, etc.

On the consuming side there is no way to configure that; the container simply retries the connection on a fixed schedule, configurable by setting recoveryInterval on the SimpleMessageListenerContainer (the default is 5 seconds). There's not much value in configuring a backoff for consumers.
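For completeness, here is a minimal sketch (Java config; the queue name, listener, and interval are illustrative, not from the original answer) of defining the container as a bean and setting recoveryInterval yourself:
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerContainerConfig {

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("myQueue");
        container.setMessageListener((MessageListener) message -> {
            // handle the message
        });
        // retry the connection every 10 seconds instead of the default 5
        container.setRecoveryInterval(10000);
        return container;
    }
}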
On the publishing side, you can use spring-aop to wrap the RabbitTemplate (AmqpTemplate interface) in a MethodInterceptor that wraps the send*() calls in a RetryTemplate from spring-retry. The RetryTemplate can be configured with all sorts of options, including backoff policy etc.
If you need help with that, I can try to find some time to post a Gist.
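In the meantime, here is a rough sketch of that approach (not the promised Gist; bean names, retry counts and backoff values are illustrative, and for simplicity the advice wraps every AmqpTemplate call rather than only the send*() methods):
import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class PublisherRetryConfig {

    @Bean
    public AmqpTemplate retryingAmqpTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);

        // retries with exponential backoff for publish operations
        RetryTemplate retryTemplate = new RetryTemplate();
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(500);
        backOff.setMultiplier(2.0);
        backOff.setMaxInterval(10000);
        retryTemplate.setBackOffPolicy(backOff);
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));

        // advice that runs each AmqpTemplate call inside the RetryTemplate
        MethodInterceptor retryAdvice = invocation ->
                retryTemplate.execute((RetryCallback<Object, Throwable>) context -> invocation.proceed());

        // proxy the RabbitTemplate so callers only see the retrying AmqpTemplate
        ProxyFactory proxyFactory = new ProxyFactory(rabbitTemplate);
        proxyFactory.addAdvice(retryAdvice);
        return (AmqpTemplate) proxyFactory.getProxy();
    }
}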
EDIT:
Per the comment below - correct, the recoveryInterval is currently not available with the namespace (but you can still define the container as a <bean ... class="...SimpleMessageListenerContainer" ...>).
However, it was added a few weeks ago to the master branch (commit here). It is available in the 1.3.0.BUILD-SNAPSHOT.
Also, as a result of your question here, I have added a RetryTemplate option to the RabbitTemplate (pull request here). It should be merged soon. The release candidate for 1.3.0 (1.3.0.RC1) is due Friday and the 1.3.0 GA release will follow within a couple of weeks.
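In released 1.3.x versions that option is exposed as a retryTemplate property on the RabbitTemplate, so the AOP proxy sketched above is no longer needed; a fragment-level sketch (the backoff values are illustrative):
RetryTemplate retryTemplate = new RetryTemplate();
ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
backOff.setInitialInterval(500);
backOff.setMaxInterval(10000);
retryTemplate.setBackOffPolicy(backOff);
// all RabbitTemplate operations now run inside the RetryTemplate
rabbitTemplate.setRetryTemplate(retryTemplate);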

Related

Spring Rabbitmq - how to configure the consumer without using @RabbitListener

I'm writing some core features for developers who will use my library.
One of the features is the ability to switch message consumption between two different sources via a configuration flag, while the handling of those messages remains as is, no matter the source - for example, switching message consumption from Kafka to RabbitMQ; the same business logic will be executed with the incoming message.
I am trying to figure out how to configure the consumer without using @RabbitListener - is it possible?
RabbitAdmin - responsible for connecting to the source.
RabbitTemplate - responsible for publishing messages.
The only clue I found is to use Spring's SimpleMessageListenerContainer, but the issue with it is that there seems to be no way to set multiple onMessage handlers.
Also, I saw the option to use a MessageListenerAdapter in this answer.
The main issue with these answers is that I'm going to deal with multiple queues and bindings, and this looks like a solution for a single consumer in the whole application - am I wrong here?
You need a listener container for each listener.
You can use Boot's auto-configured listener container factory to create each container and add a listener to it.
If the same listener can consume from multiple queues, you can configure the container to listen to those queues.
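For example, a minimal sketch using the Boot auto-configured SimpleRabbitListenerContainerFactory (queue names and the handler are illustrative):
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerEndpoint;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConsumerConfig {

    @Bean
    public SimpleMessageListenerContainer rabbitContainer(SimpleRabbitListenerContainerFactory factory) {
        SimpleRabbitListenerEndpoint endpoint = new SimpleRabbitListenerEndpoint();
        // one container can listen to several queues
        endpoint.setQueueNames("queue1", "queue2");
        endpoint.setMessageListener(message -> {
            // invoke the same business logic used for the other source (e.g. Kafka)
            handle(new String(message.getBody()));
        });
        return factory.createListenerContainer(endpoint);
    }

    private void handle(String payload) {
        // illustrative placeholder for the shared business logic
    }
}
Declare one such container bean per listener; the flag in your configuration can then decide which container beans get created.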

Spring Webflux - Actuator - Netty thread metrics?

Small question regarding Netty metrics for a Spring Webflux + actuator project please.
In the Spring MVC world, combined with actuator, we have metrics such as:
tomcat_threads_busy_threads
tomcat_threads_current_threads
tomcat_threads_config_max_threads
jetty_threads_busy
jetty_threads_current
jetty_threads_config_max
These help a lot to get the overall status of the application.
However, in Webflux, it seems there is no equivalent.
I was expecting something like netty_threads_busy or something equivalent, but could not find anything related.
May I ask what would be the equivalent in Netty Webflux world please?
Thank you
The metrics exposed by reactor-netty are not enabled by default in Spring Boot. There was a previous discussion on this GitHub issue and the decision was not to enable them by default.
If you want to enable the Netty server metrics in your own application, you can add the following bean to customise the Netty HttpServer.
@Bean
public NettyServerCustomizer nettyServerCustomizer() {
    // uriMappingFunction converts URIs with path variables to templated URIs,
    // e.g. /user/1 -> /user/{id} (see the caveat below)
    return httpServer -> httpServer.metrics(true, uriMappingFunction);
}
Caveat:
If you have path parameters in any of your URIs, you should provide a uriMappingFunction that converts them to templated URIs, i.e. /user/1 -> /user/{id}. Failure to do so could lead to a cardinality explosion in your MeterRegistry.
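A naive illustration of such a mapping function (purely an example; a real application should map to its actual route templates):
// e.g. /user/1 -> /user/{id}; every numeric path segment is collapsed to {id}
java.util.function.Function<String, String> uriMappingFunction =
        uri -> uri.replaceAll("/\\d+", "/{id}");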
Enabling this feature also comes with the following recommendation:
It is strongly recommended that applications configure an upper limit for the number of URI tags.
Reference Documentation
Java Doc

Performance settings for ActiveMQ producer using Apache Camel in Spring boot framework

We have a Spring Boot application and we are using Apache Camel as the framework for message processing. We are trying to optimize our application settings so that enqueueing messages on the ActiveMQ queue is fast; the messages are received by Logstash consumers at the other end of the queue.
The documentation is scattered at many places and there are too many configurations available.
For example, the camel link for spring boot specifies 102 options, and the activemq apache camel link details even more.
This is what we have currently configured:
Application.properties:
################################################
# Spring Active MQ
################################################
spring.activemq.broker-url=tcp://localhost:61616
spring.activemq.packages.trust-all=true
spring.activemq.user=admin
spring.activemq.password=admin
Apache Camel
.to("activemq:queue:"dataQueue"?messageConverter=#queueMessageConverter");
Problem:
1 - We suspect that we have to use a PooledConnectionFactory and not the default Spring JmsTemplate bean, which is somehow picked up automatically.
2 - We also want the process to be asynchronous. We just want to put the message on the queue and don't want to wait for any ACK from ActiveMQ, or do any retry, or anything like that.
3 - We want to wait/retry only if the queue is full.
4 - Where should we set the settings for the ActiveMQ size? Also, ActiveMQ is putting things in the Dead Letter Queue in case no consumer is available; we want to override that behaviour and keep the messages in the queue. (Does this have to be configured in ActiveMQ and not in our app/Apache Camel?)
Update
Here is how we have solved it for now, after some more investigation and based on the feedback. Note: this does not involve retrying; for that we will try the option suggested in the answer.
For Seda queues:
producer:
.to("seda:somequeue?waitForTaskToComplete=Never");
consumer:
.from("seda:somequeue?concurrentConsumers=20");
Active MQ:
.to("activemq:queue:dataQueue?disableReplyTo=true);
Application.Properties:
#Enable poolconnection factory
spring.activemq.pool.enabled=true
spring.activemq.pool.blockIfFull=true
spring.activemq.pool.max-connections=50
Yes, you need to use a PooledConnectionFactory, especially with Camel + Spring Boot. Or look at using the camel-sjms component. The culprit is Spring's JmsTemplate: super high latency.
Send NON_PERSISTENT with AUTO_ACK, and also turn on sendAsync on the connection factory.
You need to catch javax.jms.ResourceAllocationException in your route to do retries when Producer Flow Control kicks in (i.e. the queue or broker is full); see the route sketch below.
ActiveMQ does sizing based on bytes, not message count. See the SystemUsage settings in the Producer Flow Control docs and the Per-Destination Policies for limiting queue size based on bytes.
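A rough sketch of such a route (the endpoint options match the update above; the redelivery settings are illustrative):
import javax.jms.ResourceAllocationException;
import org.apache.camel.builder.RouteBuilder;

public class ActiveMqPublishRoute extends RouteBuilder {

    @Override
    public void configure() {
        // retry only when Producer Flow Control kicks in (queue or broker is full)
        onException(ResourceAllocationException.class)
                .maximumRedeliveries(5)
                .redeliveryDelay(1000)
                .useExponentialBackOff();

        // drain the in-memory SEDA queue and publish to ActiveMQ without waiting for a reply
        from("seda:somequeue?concurrentConsumers=20")
                .to("activemq:queue:dataQueue?disableReplyTo=true");
    }
}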

Springboot - Spring Kafka - Lazy Container Factory initialization

With Spring JDBC templates, you can initialize connections lazily with a simple flag. Is there a similar capability for Kafka container factories in Spring Boot 1.5.x / Spring Kafka 1.3.x deployments?
The best answer I have seen so far is to disable autoStartup and manage the start on your own, catching any exceptions that may occur during startup: How to start spring application even if Kafka listener (spring-kafka) doesn't initialize
Is this the only way, and are there any caveats when using KafkaListenerEndpointRegistry to self-manage the lifecycle of the container(s)?
Would the Lazy annotation work with @KafkaListener, a @Configuration class for the Kafka configuration, or a similar component class? I'm putting this question out there since there does not appear to be a documented approach, while attempting some of these approaches in parallel to get feedback.
How does any of this change (if at all) between Spring Boot 1.5.x and Spring Boot 2.1.x (or above) with the compatible Spring Kafka versions?
Lazy initialization of a listener container makes no sense. What would trigger the instantiation?
The KafkaTemplate lazily creates its producer on the first operation.
autoStartup and starting/stopping using the registry is the correct approach.
The version makes no difference.
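A minimal sketch of that approach (the factory mirrors Boot's auto-configuration; how and when startListeners() is called is up to you):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class KafkaConfig {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // containers are registered but not started with the application context
        factory.setAutoStartup(false);
        return factory;
    }

    // call this yourself when you are ready (e.g. from an ApplicationRunner)
    public void startListeners() {
        try {
            // start all @KafkaListener containers,
            // or registry.getListenerContainer("someId") for a specific one
            registry.getListenerContainers().forEach(container -> container.start());
        }
        catch (Exception e) {
            // e.g. broker unavailable: decide whether to retry later or fail fast
        }
    }
}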

How to make JMS messages persistent with a Java Spring Boot application?

I am trying to make a queue with ActiveMQ and Spring Boot using this link and it looks fine. What I am unable to do is make this queue persistent after the application goes down. I think that the SimpleJmsListenerContainerFactory should be durable to achieve that, but when I set factory.setSubscriptionDurable(true) and factory.setClientId("someid") I am unable to receive messages any more. I would be grateful for any suggestions.
I guess you are embedding the broker in your application. While this is OK for integration tests and proofs of concept, you should consider having a broker somewhere in your infrastructure and connecting to it. If you choose that, refer to the ActiveMQ documentation and you should be fine.
If you insist on embedding it, you need to provide a brokerUrl that enables message persistence.
Having said that, it looks like you are confusing durable subscriptions and message persistence. The latter can be achieved by having a broker that actually stores the content of the queue somewhere, so that if the broker is stopped and restarted it can restore the content of its queues. The former is about being able to receive messages even if the subscriber was not active for a period of time.
You can enable persistence of messages using the ActiveMQConnectionFactory.
As mentioned in the Spring Boot link you provided, this ActiveMQConnectionFactory gets created automatically by Spring Boot, so you can instead create this bean manually in your application configuration and set various properties on it.
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
Here is the link http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
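For instance, a minimal sketch of declaring that bean yourself (assuming an embedded broker; the URL matches the snippet above):
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BrokerConfig {

    @Bean
    public ActiveMQConnectionFactory activeMQConnectionFactory() {
        // broker.persistent=true tells the embedded broker to store messages on disk
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
    }
}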
