I have read a lot of documentation and Stack Overflow answers, but I still have a problem moving a message to the dead letter queue when an exception occurs. I'm using Spring Boot. Here is my configuration:
@Autowired
private RabbitTemplate rabbitTemplate;

@Bean
RetryOperationsInterceptor interceptor() {
    RepublishMessageRecoverer recoverer = new RepublishMessageRecoverer(rabbitTemplate, "error_exchange ", "error_key");
    return RetryInterceptorBuilder
            .stateless()
            .recoverer(recoverer)
            .build();
}
Dead letter queue:
Features
x-dead-letter-routing-key: error_key
x-dead-letter-exchange: error_exchange
durable: true
Policy DLX
Name of the queue: error
My exchange:
name: error_exchange
binding: to: error, routing_key: error_key
Here is my consumer:
@RabbitListener(queues = "${rss_reader_chat_queue}")
public void consumeMessage(Message message) {
    try {
        List<ChatMessage> chatMessages = messageTransformer.transformMessage(message);
        List<ChatMessage> save = chatMessageRepository.save(chatMessages);
        sendMessagesToChat(save);
    }
    catch (Exception ex) {
        throw new AmqpRejectAndDontRequeueException(ex);
    }
}
So when I send an invalid message and an exception occurs, it happens only once (which is good, because previously the message was redelivered over and over), but the message doesn't go to my dead letter queue. Can you help me with this?
You need to show the rest of your configuration - boot properties, queue @Beans, etc. You also seem to have some confusion between using a republishing recoverer vs. dead letter queues; they are different ways to achieve similar results, and you typically wouldn't use both.
Here's a simple boot app that demonstrates using a DLX/DLQ...
@SpringBootApplication
public class So43694619Application implements CommandLineRunner {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So43694619Application.class, args);
        context.close();
    }

    @Autowired
    RabbitTemplate template;

    @Autowired
    AmqpAdmin admin;

    private final CountDownLatch latch = new CountDownLatch(1);

    @Override
    public void run(String... arg0) throws Exception {
        this.template.convertAndSend("so43694619main", "foo");
        this.latch.await(10, TimeUnit.SECONDS);
        this.admin.deleteExchange("so43694619dlx");
        this.admin.deleteQueue("so43694619main");
this.admin.deleteQueue("so43694619dlx");
    }

    @Bean
    public Queue main() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "so43694619dlx");
        args.put("x-dead-letter-routing-key", "so43694619dlxRK");
        return new Queue("so43694619main", true, false, false, args);
    }

    @Bean
    public Queue dlq() {
        return new Queue("so43694619dlq");
    }

    @Bean
    public DirectExchange dlx() {
        return new DirectExchange("so43694619dlx");
    }

    @Bean
    public Binding dlqBinding() {
        return BindingBuilder.bind(dlq()).to(dlx()).with("so43694619dlxRK");
    }

    @RabbitListener(queues = "so43694619main")
    public void listenMain(String in) {
        throw new AmqpRejectAndDontRequeueException("failed");
    }

    @RabbitListener(queues = "so43694619dlq")
    public void listenDlq(String in) {
        System.out.println("ReceivedFromDLQ: " + in);
        this.latch.countDown();
    }

}
Result:
ReceivedFromDLQ: foo
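If you want the RepublishMessageRecoverer approach instead, wire the retry interceptor into the listener container factory and omit the x-dead-letter-* arguments on the queue. Also note that the exchange name in your interceptor ("error_exchange ") contains a trailing space, which would prevent republishing to the exchange you actually declared. A minimal sketch, assuming the error exchange, queue, and binding are declared elsewhere:

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAdviceChain(RetryInterceptorBuilder
            .stateless()
            .maxAttempts(3)
            // no trailing space in the exchange name here
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "error_exchange", "error_key"))
            .build());
    return factory;
}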
Related
I have created a messaging component that will be called by other services to consume and send messages from Kafka. The producer part is working fine, but I am not sure what is wrong with the consumer listener part below: it does not print messages, and in debug mode control never goes inside the @KafkaListener method. Yet the GUI-based Kafka manager app shows the offset got committed, even though it is a manual offset commit.
Here is my message listener class code. I have checked that the topic and group id are set and fetched properly.
@Component
public class SpringKafkaMessageListner {

    public CountDownLatch latch = new CountDownLatch(1);

    @KafkaListener(topics = "#{consumerFactory.getConfigurationProperties().get(\"topic-name\")}",
            groupId = "#{consumerFactory.getConfigurationProperties().get(\"group.id\")}",
            containerFactory = "springKafkaListenerContainerFactory")
    public void listen(ConsumerRecord<?, ?> consumerRecord, Acknowledgment ack) {
        System.out.println("listening...");
        System.out.println("Received Message in group : "
                + " and message: " + consumerRecord.value());
        System.out.println("current offsetId : " + consumerRecord.offset());
        ack.acknowledge();
        latch.countDown();
    }
}
Consumer config class:
@Configuration
@EnableKafka
public class KafkaConsumerBeanConfig<T> {

    @Autowired
    @Lazy
    private KafkaConsumerConfigDTO kafkaConsumerConfigDTO;

    @Bean
    public ConsumerFactory<Object, T> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(kafkaConsumerConfigDTO.getConfigs());
    }

    // for Spring Kafka with manual offset commit
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, T>>
            springKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Object, T> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // manual commit
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }

    @Bean
    SpringKafkaMessageListner consumerListner() {
        return new SpringKafkaMessageListner();
    }
}
The code snippet below is the consumer interface implementation, which exposes the subscribe() method; all other bean creation is done through the ConfigurableApplicationContext.
public class SpringKafkaConsumer<T> implements Consumer<T> {

    public SpringKafkaConsumer(ConsumerConfig<T> consumerConfig,
            ConfigurableApplicationContext context) {
        this.consumerConfig = consumerConfig;
        this.context = context;
        this.consumerFactory = context.getBean("consumerFactory", ConsumerFactory.class);
        this.springKafkaContainer = context.getBean("springKafkaListenerContainerFactory",
                ConcurrentKafkaListenerContainerFactory.class);
    }

    // just simple code to initialize the SpringKafkaMessageListner class and invoke the listening part
    @Override
    public void subscribe() {
        consumerListner = context.getBean("consumerListner", SpringKafkaMessageListner.class);
        try {
            consumerListner.latch.await(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Test class with my local Docker Kafka setup:
@RunWith(SpringRunner.class)
@DirtiesContext
@ContextConfiguration(classes = QueueManagerSpringConfig.class)
public class SpringKafkaTest extends AbstractJUnit4SpringContextTests {

    @Autowired
    private QueueManager queueManager;

    private Consumer<KafkaMessage> consumer;

    // test method
    @Test
    public void testSubscribeWithLocalBroker() {
        String topicName = "topic1";
        String brokerServer = "127.0.0.1:9092";
        String groupId = "grp1";
        Map<String, String> additionalProp = new HashMap<>();
        additionalProp.put(KafkaConsumerConfig.GROUP_ID, groupId);
        additionalProp.put(KafkaConsumerConfig.AUTO_COMMIT, "false");
        additionalProp.put(KafkaConsumerConfig.AUTO_COMMIT_INTERVAL, "100");
        ConsumerConfig<KafkaMessage> consumerConfig =
                new ConsumerConfig.Builder<>(topicName,
                        new KafkaSuccessMessageHandler(new KafkaMessageSerializerTest()),
                        new KafkaMessageDeserializerTest())
                        .additionalProperties(additionalProp)
                        .enableSpringKafka(true)
                        .offsetPositionStrategy(new EarliestPositionStrategy())
                        .build();
        consumer = queueManager.getConsumer(consumerConfig);
        System.out.println("start subscriber");
        // calling the subscribe method of the consumer, which will invoke the KafkaListener
        consumer.subscribe();
    }
}

@Configuration
public class QueueManagerSpringConfig {

    @Bean
    public QueueManager queueManager() {
        Map<String, String> kafkaProperties = new HashMap<>();
        kafkaProperties.put(KafkaPropertyNamespace.NS_PREFIX + KafkaPropertyNamespace.BOOTSTRAP_SERVERS,
                "127.0.0.1:9092");
        return QueueManagerFactory.getInstance(new KafkaPropertyNamespace(kafkaProperties));
    }
}
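One thing to check with this setup: a ConcurrentKafkaListenerContainerFactory on its own does not consume anything. @KafkaListener methods are only wired into running containers by the @EnableKafka infrastructure, and subscribe() above only awaits a latch. As a sketch (the topic name is assumed), a container can also be created from the factory and started explicitly:

// hypothetical: create a container from the factory and start it by hand
ConcurrentMessageListenerContainer<Object, Object> container =
        springKafkaContainer.createContainer("topic1");
container.getContainerProperties().setMessageListener(
        (AcknowledgingMessageListener<Object, Object>) (record, ack) -> {
            System.out.println("Received: " + record.value());
            ack.acknowledge();
        });
container.start();

If no container is ever started (or the group id / topic do not match), the listener will never fire even though the factory beans exist.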
I have a Spring Boot application that uses the libraries SimpleMessageListenerContainer (https://docs.spring.io/spring-amqp/docs/current/api/org/springframework/amqp/rabbit/listener/SimpleMessageListenerContainer.html) and SimpleMessageListenerContainerFactory (https://www.javadoc.io/static/org.springframework.cloud/spring-cloud-aws-messaging/2.2.0.RELEASE/org/springframework/cloud/aws/messaging/config/SimpleMessageListenerContainerFactory.html). The application uses AWS SQS and Kafka, but I'm experiencing some out-of-order data and am trying to investigate why. Is there a way to view logging from those libraries? I know I cannot edit them directly, but when I create the bean I want to be able to see the logs from those two libraries and, if possible, add to them.
Currently I'm setting up the bean in this way:
@ConditionalOnProperty(value = "application.listener-mode", havingValue = "SQS")
@Component
public class SqsConsumer {

    private final static Logger logger = LoggerFactory.getLogger(SqsConsumer.class);

    @Autowired
    private ConsumerMessageHandler consumerMessageHandler;

    @Autowired
    private KafkaProducer producer;

    @PostConstruct
    public void init() {
        logger.info("Loading SQS Listener Bean");
    }

    @SqsListener("${application.aws-iot.sqs-url}")
    public void receiveMessage(String message) {
        byte[] decodedValue = Base64.getDecoder().decode(message);
        consumerMessageHandler.handle(decodedValue, message);
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSqs);
        factory.setMaxNumberOfMessages(10);
        factory.setWaitTimeOut(20);
        logger.info("Created simpleMessageListenerContainerFactory");
        logger.info(factory.toString());
        return factory;
    }
}
For reference, this is a method in the SimpleMessageListenerContainer. It is these logs which I would like to investigate and potentially add to:
@Override
public void run() {
    while (isQueueRunning()) {
        try {
            ReceiveMessageResult receiveMessageResult = getAmazonSqs()
                    .receiveMessage(this.queueAttributes.getReceiveMessageRequest());
            CountDownLatch messageBatchLatch = new CountDownLatch(
                    receiveMessageResult.getMessages().size());
            for (Message message : receiveMessageResult.getMessages()) {
                if (isQueueRunning()) {
                    MessageExecutor messageExecutor = new MessageExecutor(
                            this.logicalQueueName, message, this.queueAttributes);
                    getTaskExecutor().execute(new SignalExecutingRunnable(
                            messageBatchLatch, messageExecutor));
                }
                else {
                    messageBatchLatch.countDown();
                }
            }
            try {
                messageBatchLatch.await();
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        catch (Exception e) {
            getLogger().warn(
                    "An Exception occurred while polling queue '{}'. The failing operation will be "
                            + "retried in {} milliseconds",
                    this.logicalQueueName, getBackOffTime(), e);
            try {
                // noinspection BusyWait
                Thread.sleep(getBackOffTime());
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }
    SimpleMessageListenerContainer.this.scheduledFutureByQueue
            .remove(this.logicalQueueName);
}
How would I be able to see all of that logging from where I create the bean?
Any help would be much appreciated!
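One approach: both containers log through SLF4J, so you can surface their internal logging by raising the level for their packages, e.g. in application.properties (package names taken from the javadoc links above; adjust if your versions differ):

logging.level.org.springframework.cloud.aws.messaging=DEBUG
logging.level.org.springframework.amqp.rabbit.listener=DEBUG

That exposes the existing log statements, such as the polling warning in run() above. To add your own logging inside the container you would have to extend the container class and have the factory create your subclass; whether the methods you need are overridable depends on your version, so check the source first.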
I have a Spring Boot app configured with spring-kafka where I want to handle all sorts of errors that can happen while listening to a topic. If any message is missed / cannot be consumed because of deserialization or any other exception, there should be 2 retries, after which the message should be logged to an error file. There are two approaches that can be followed:
First approach (using SeekToCurrentErrorHandler with DeadLetterPublishingRecoverer):
@Autowired
KafkaTemplate<String, Object> template;

@Bean(name = "kafkaSourceProvider")
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
    Map<String, Object> config = appProperties.getSource().getProperties();
    ConcurrentKafkaListenerContainerFactory<K, V> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
            (r, e) -> {
                if (e instanceof FooException) {
                    return new TopicPartition(r.topic() + ".DLT", r.partition());
                }
                // the resolver must return on every path; a negative partition lets Kafka choose
                return new TopicPartition(r.topic() + ".DLT", -1);
            });
    ErrorHandler errorHandler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 2L));
    factory.setErrorHandler(errorHandler);
    return factory;
}
But for this we require an additional topic (a new .DLT topic), and then we can log it to a file.
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@KafkaListener(topics = MY_TOPIC + ".DLT", groupId = MY_ID)
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
        @Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
    logger.error(exceptionStackTrace);
}
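Note that the DeadLetterPublishingRecoverer publishes to an existing topic; with the KafkaAdmin above, the .DLT topic can be auto-created by declaring it as a bean. A sketch (the name is assumed to match MY_TOPIC):

@Bean
public NewTopic dlt() {
    // hypothetical: declares the dead letter topic so the KafkaAdmin above creates it
    return TopicBuilder.name(MY_TOPIC + ".DLT").partitions(1).replicas(1).build();
}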
Approach 2 (using a custom SeekToCurrentErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
    Map<String, Object> config = appProperties.getSource().getProperties();
    ConcurrentKafkaListenerContainerFactory<K, V> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
    factory.setErrorHandler(new CustomSeekToCurrentErrorHandler());
    factory.setRetryTemplate(retryTemplate());
    return factory;
}

private RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(backOffPolicy());
    retryTemplate.setRetryPolicy(aSimpleReturnPolicy);
    return retryTemplate;
}

public class CustomSeekToCurrentErrorHandler extends SeekToCurrentErrorHandler {

    private static final int MAX_RETRY_ATTEMPTS = 2;

    CustomSeekToCurrentErrorHandler() {
        super(MAX_RETRY_ATTEMPTS);
    }

    @Override
    public void handle(Exception exception, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
        try {
            if (!records.isEmpty()) {
                log.warn("Exception: {} occurred with message: {}", exception, exception.getMessage());
                super.handle(exception, records, consumer, container);
            }
        } catch (SerializationException e) {
            log.warn("Exception: {} occurred with message: {}", e, e.getMessage());
        }
    }
}
Can anyone provide suggestions on the standard way to implement this kind of feature? In the first approach we see the overhead of creating .DLT topics and an additional @KafkaListener. In the second approach, we can directly log our consumer record exception.
With the first approach, it is not necessary to use a DeadLetterPublishingRecoverer, you can use any ConsumerRecordRecoverer that you want; in fact the default recoverer simply logs the failed message.
/**
 * Construct an instance with the default recoverer which simply logs the record after
 * the backOff returns STOP for a topic/partition/offset.
 * @param backOff the {@link BackOff}.
 * @since 2.3
 */
public SeekToCurrentErrorHandler(BackOff backOff) {
    this(null, backOff);
}
And, in the FailedRecordTracker...
if (recoverer == null) {
    this.recoverer = (rec, thr) -> {
        ...
        logger.error(thr, "Backoff "
                + (failedRecord == null
                        ? "none"
                        : failedRecord.getBackOffExecution())
                + " exhausted for " + ListenerUtils.recordToString(rec));
    };
}
Backoff (and a limit to retries) was added to the error handler after retry was added to the listener adapter, so it's "newer" (and preferred).
Also, using in-memory retry can cause issues with rebalancing if long BackOffs are employed, because the consumer can exceed max.poll.interval.ms and be kicked out of the group.
Finally, only the SeekToCurrentErrorHandler can deal with deserialization problems (via the ErrorHandlingDeserializer).
EDIT
Use the ErrorHandlingDeserializer together with a SeekToCurrentErrorHandler. Deserialization exceptions are considered fatal and the recoverer is called immediately.
See the documentation.
Here is a simple Spring Boot application that demonstrates it:
@SpringBootApplication
public class So63236346Application {

    private static final Logger log = LoggerFactory.getLogger(So63236346Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So63236346Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so63236346").partitions(1).replicas(1).build();
    }

    @Bean
    ErrorHandler errorHandler() {
        return new SeekToCurrentErrorHandler((rec, ex) -> log.error(ListenerUtils.recordToString(rec, true) + "\n"
                + ex.getMessage()));
    }

    @KafkaListener(id = "so63236346", topics = "so63236346")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so63236346", "{\"field\":\"value1\"}");
            template.send("so63236346", "junk");
            template.send("so63236346", "{\"field\":\"value2\"}");
        };
    }

}
package com.example.demo;

public class Thing {

    private String field;

    public Thing() {
    }

    public Thing(String field) {
        this.field = field;
    }

    public String getField() {
        return this.field;
    }

    public void setField(String field) {
        this.field = field;
    }

    @Override
    public String toString() {
        return "Thing [field=" + this.field + "]";
    }

}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.value.default.type=com.example.demo.Thing
Result
Thing [field=value1]
2020-08-10 14:30:14.780 ERROR 78857 --- [o63236346-0-C-1] com.example.demo.So63236346Application : so63236346-0#7
Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is org.apache.kafka.common.errors.SerializationException: Can't deserialize data [[106, 117, 110, 107]] from topic [so63236346]
2020-08-10 14:30:14.782 INFO 78857 --- [o63236346-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-so63236346-1, groupId=so63236346] Seeking to offset 8 for partition so63236346-0
Thing [field=value2]
The expectation was to log any exception that we might get at the container level as well as the listener level.
Without retrying, here is how I have done the error handling:
If we encounter any exception at the container level, we should be able to log the message payload with the error description, seek past that offset, and skip it so we go ahead and receive the next offset. Though this is done only for DeserializationException, the offsets for the rest of the exceptions need to be sought and skipped as well.
@Component
public class KafkaContainerErrorHandler implements ErrorHandler {

    private static final Logger logger = LoggerFactory.getLogger(KafkaContainerErrorHandler.class);

    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
        String s = thrownException.getMessage().split("Error deserializing key/value for partition ")[1].split(". If needed, please seek past the record to continue consumption.")[0];
        // modify the logic below according to your topic nomenclature
        String topics = s.substring(0, s.lastIndexOf('-'));
        int offset = Integer.parseInt(s.split("offset ")[1]);
        int partition = Integer.parseInt(s.substring(s.lastIndexOf('-') + 1).split(" at")[0]);
        logger.error("...");
        TopicPartition topicPartition = new TopicPartition(topics, partition);
        logger.info("Skipping {} - {} offset {}", topics, partition, offset);
        consumer.seek(topicPartition, offset + 1);
    }

    @Override
    public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord) {
    }
}
factory.setErrorHandler(kafkaContainerErrorHandler);
If we get any exception at the @KafkaListener level, then I configure my listener with my custom error handler and log the exception with the message, as can be seen below:
#Bean("customErrorHandler")
public KafkaListenerErrorHandler listenerErrorHandler() {
return (m, e) -> {
logger.error(...);
return m;
};
}
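The bean name is then referenced from the listener itself; a minimal sketch, assuming a listener on MY_TOPIC:

@KafkaListener(topics = MY_TOPIC, groupId = MY_ID, errorHandler = "customErrorHandler")
public void listen(ConsumerRecord<String, SomeClassName> consumerRecord) {
    // exceptions thrown here are routed to the customErrorHandler bean above
}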
I'm using spring-amqp:2.1.6.RELEASE
I have a RabbitTemplate with a PublisherReturn callback.
If I send a message to a routingKey which has no queues bound to it, then the return callback is called correctly. When this happens I want to send the message to an alternative routingKey. However, if I use the RabbitTemplate in the ReturnCallback it just hangs: I don't see anything saying the message can/can't be sent, the RabbitTemplate doesn't return control to my ReturnCallback, and I don't see any publisher confirm either.
If I create a new RabbitTemplate (with the same CachingConnectionFactory), it still behaves the same way; my call just hangs.
If I send a message to a routingKey which does have a queue bound to it, then the message correctly arrives at the queue. The ReturnCallback is not called in this scenario.
After some investigation, I've come to the conclusion that the rabbitTemplate and/or connection is blocked until the original message is completely processed.
If I create a second CachingConnectionFactory and RabbitTemplate, and use these in the PublisherReturn callback, then it seems to work fine.
So, here's the question: what is the best way to send a message in a PublisherReturn callback using spring-amqp?
I have searched, but can't find anything that explains how you should do this.
Here are simplified details of what I have:
@Configuration
public class MyConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setPublisherReturns(true);
        // ... other settings left out for brevity
        return connectionFactory;
    }

    @Bean
    @Qualifier("rabbitTemplate")
    public RabbitTemplate rabbitTemplate(ReturnCallbackForAlternative returnCallbackForAlternative) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setReturnCallback(returnCallbackForAlternative);
        // ... other settings left out for brevity
        return rabbitTemplate;
    }

    @Bean
    @Qualifier("connectionFactoryForUndeliverable")
    public ConnectionFactory connectionFactoryForUndeliverable() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        // ... other settings left out for brevity
        return connectionFactory;
    }

    @Bean
    @Qualifier("rabbitTemplateForUndeliverable")
    public RabbitTemplate rabbitTemplateForUndeliverable() {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactoryForUndeliverable());
        // ... other settings left out for brevity
        return rabbitTemplate;
    }
}
Then to send the message I'm using
@Autowired
@Qualifier("rabbitTemplate")
private RabbitTemplate rabbitTemplate;

public void send(Message message) {
    rabbitTemplate.convertAndSend(
            "exchange-name",
            "primary-key",
            message);
}
And the code in the ReturnCallback is
@Component
public class ReturnCallbackForAlternative implements RabbitTemplate.ReturnCallback {

    @Autowired
    @Qualifier("rabbitTemplateForUndeliverable")
    private RabbitTemplate rabbitTemplate;

    @Override
    public void returnedMessage(Message message, int replyCode, String replyText, String exchange, String routingKey) {
        rabbitTemplate.convertAndSend(
                "exchange-name",
                "alternative-key",
                message);
    }
}
EDIT
Simplified example to reproduce the problem.
To run it:
Have RabbitMQ running
Have an exchange called foo bound to a queue called foo
Run as a Spring Boot app
You'll see the following output:
in returnCallback before message send
but you won't see:
in returnCallback after message send
If you comment out the connectionFactory.setPublisherConfirms(true); it runs OK.
@SpringBootApplication
public class HangingApplication {

    public static void main(String[] args) {
        SpringApplication.run(HangingApplication.class, args);
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setPublisherReturns(true);
        connectionFactory.setPublisherConfirms(true);
        return connectionFactory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setExchange("foo");
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setConfirmCallback((correlationData, ack, cause) -> {
            System.out.println("Confirm callback for main template. Ack=" + ack);
        });
        rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            System.out.println("in returnCallback before message send");
            rabbitTemplate.send("foo", message);
            System.out.println("in returnCallback after message send");
        });
        return rabbitTemplate;
    }

    @Bean
    public ApplicationRunner runner(@Qualifier("rabbitTemplate") RabbitTemplate template) {
        return args -> {
            template.convertAndSend("BADKEY", "foo payload");
        };
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
        System.out.println("Message received on undeliverable queue : " + in);
    }
}
Here's the build.gradle I used:
plugins {
    id 'org.springframework.boot' version '2.1.5.RELEASE'
    id 'java'
}

apply plugin: 'io.spring.dependency-management'

group 'pcoates'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.11

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.springframework.boot:spring-boot-starter-amqp'
}
It causes some kind of deadlock down in the amqp-client code. The simplest solution is to do the send on a separate thread - use a TaskExecutor within the callback...
exec.execute(() -> template.send(...));
You can use the same template/connection factory, but the send must run on a different thread.
I thought we had recently changed the framework to always call the return callback on a different thread (after the last person reported this), but it looks like it fell through the cracks.
I opened an issue this time.
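For example, a sketch of the workaround described above (the executor field is an assumption; any TaskExecutor or ExecutorService will do):

private final ExecutorService exec = Executors.newSingleThreadExecutor();

// in the template setup: hand the re-send off to another thread
rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
    System.out.println("in returnCallback before message send");
    exec.execute(() -> rabbitTemplate.send("foo", message));
    System.out.println("in returnCallback after message send");
});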
EDIT
Are you sure you're using 2.1.6?
We fixed this problem in 2.1.0 by preventing the send from attempting to use the same channel that the return arrived on. This works fine for me...
@SpringBootApplication
public class So57234770Application {

    public static void main(String[] args) {
        SpringApplication.run(So57234770Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            template.send("foo", message);
        });
        return args -> {
            template.convertAndSend("BADKEY", "foo");
        };
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
        System.out.println(in);
    }
}
If you can provide a sample app that exhibits this behavior, I will take a look to see what's going on.
I would like to achieve the following scenario in my application:
1. If a business error occurs, the message should be sent from the incoming queue to the deadLetter queue and delayed there for 10 seconds.
2. Step 1 should be repeated 3 times.
3. The message should be published to the parkingLot queue.
I am able (see the code below) to delay the message for a certain amount of time in the deadLetter queue, and the message loops infinitely between the incoming queue and the deadLetter queue. So far so good.
The main question: how can I intercept this process and manually route the message (as described in step 3) to the parkingLot queue for later analysis?
A secondary question: Can I achieve the same process with only one exchange?
Here is a shortened version of my two classes:
Configuration class
@Configuration
public class MailRabbitMQConfig {

    @Bean
    TopicExchange incomingExchange() {
        TopicExchange incomingExchange = new TopicExchange(incomingExchangeName);
        return incomingExchange;
    }

    @Bean
    TopicExchange dlExchange() {
        TopicExchange dlExchange = new TopicExchange(deadLetterExchangeName);
        return dlExchange;
    }

    @Bean
    Queue incomingQueue() {
        return QueueBuilder.durable(incomingQueueName)
                .withArgument(
                        "x-dead-letter-exchange",
                        dlExchange().getName()
                )
                .build();
    }

    @Bean
    public Queue parkingLotQueue() {
        return new Queue(parkingLotQueueName);
    }

    @Bean
    Binding incomingBinding() {
        return BindingBuilder
                .bind(incomingQueue())
                .to(incomingExchange())
                .with("#");
    }

    @Bean
    public Queue dlQueue() {
        return QueueBuilder
                .durable(deadLetterQueueName)
                .withArgument("x-message-ttl", 10000)
                .withArgument("x-dead-letter-exchange", incomingExchange().getName())
                .build();
    }

    @Bean
    Binding dlBinding() {
        return BindingBuilder
                .bind(dlQueue())
                .to(dlExchange())
                .with("#");
    }

    @Bean
    public Binding bindParkingLot(
            Queue parkingLotQueue,
            TopicExchange dlExchange
    ) {
        return BindingBuilder.bind(parkingLotQueue)
                .to(dlExchange)
                .with(parkingLotRoutingKeyName);
    }
}
Consumer class
@Component
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Consumer.class);

    @RabbitListener(queues = "${mail.rabbitmq.queue.incoming}")
    public Boolean receivedMessage(MailDataExternalTemplate mailDataExternalTemplate) throws Exception {
        try {
            // business logic here
        } catch (Exception e) {
            throw new AmqpRejectAndDontRequeueException("Failed to handle a business logic");
        }
        return Boolean.TRUE;
    }
}
I know I could define an additional listener for a deadLetter Queue in a Consumer class like that:
@RabbitListener(queues = "${mail.rabbitmq.queue.deadletter}")
public void receivedMessageFromDlq(Message failedMessage) throws Exception {
    // Logic to count x-retries header property value and send a failed message manually
    // to the parkingLot queue
}
However, it does not work as expected, because this listener is called as soon as the message arrives at the head of the deadLetter queue, without being delayed.
Thank you in advance.
EDIT: With @ArtemBilan's and @GaryRussell's help I was able to solve the problem. The main solution hints are in their comments on the accepted answer. Thank you guys for the help! Below you will find the updated Configuration and Consumer classes. The main changes were:
The definition of the routes between the incoming exchange -> incoming queue and the dead letter exchange -> dead letter queue in the MailRabbitMQConfig class.
The loop handling with the manual publishing of the message to the parking lot queue in the Consumer class
Configuration class
@Configuration
public class MailRabbitMQConfig {

    @Autowired
    public MailConfigurationProperties properties;

    @Bean
    TopicExchange incomingExchange() {
        TopicExchange incomingExchange = new TopicExchange(properties.getRabbitMQ().getExchange().getIncoming());
        return incomingExchange;
    }

    @Bean
    TopicExchange dlExchange() {
        TopicExchange dlExchange = new TopicExchange(properties.getRabbitMQ().getExchange().getDeadletter());
        return dlExchange;
    }

    @Bean
    Queue incomingQueue() {
        return QueueBuilder.durable(properties.getRabbitMQ().getQueue().getIncoming())
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_EXCHANGE_HEADER,
                        dlExchange().getName()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_ROUTING_KEY_HEADER,
                        properties.getRabbitMQ().getRoutingKey().getDeadLetter()
                )
                .build();
    }

    @Bean
    public Queue parkingLotQueue() {
        return new Queue(properties.getRabbitMQ().getQueue().getParkingLot());
    }

    @Bean
    Binding incomingBinding() {
        return BindingBuilder
                .bind(incomingQueue())
                .to(incomingExchange())
                .with(properties.getRabbitMQ().getRoutingKey().getIncoming());
    }

    @Bean
    public Queue dlQueue() {
        return QueueBuilder
                .durable(properties.getRabbitMQ().getQueue().getDeadLetter())
                .withArgument(
                        properties.getRabbitMQ().getMessages().X_MESSAGE_TTL_HEADER,
                        properties.getRabbitMQ().getMessages().getDelayTime()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_EXCHANGE_HEADER,
                        incomingExchange().getName()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_ROUTING_KEY_HEADER,
                        properties.getRabbitMQ().getRoutingKey().getIncoming()
                )
                .build();
    }

    @Bean
    Binding dlBinding() {
        return BindingBuilder
                .bind(dlQueue())
                .to(dlExchange())
                .with(properties.getRabbitMQ().getRoutingKey().getDeadLetter());
    }

    @Bean
    public Binding bindParkingLot(
            Queue parkingLotQueue,
            TopicExchange dlExchange
    ) {
        return BindingBuilder.bind(parkingLotQueue)
                .to(dlExchange)
                .with(properties.getRabbitMQ().getRoutingKey().getParkingLot());
    }
}
Consumer class
@Component
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Consumer.class);

    @Autowired
    public MailConfigurationProperties properties;

    @Autowired
    protected EmailClient mailJetEmailClient;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = "${mail.rabbitmq.queue.incoming}")
    public Boolean receivedMessage(
            @Payload MailDataExternalTemplate mailDataExternalTemplate,
            Message amqpMessage
    ) {
        logger.info("Received message");
        try {
            final EmailTransportWrapper emailTransportWrapper = mailJetEmailClient.convertFrom(mailDataExternalTemplate);
            mailJetEmailClient.sendEmailUsing(emailTransportWrapper);
            logger.info("Successfully sent an E-Mail");
        } catch (Exception e) {
            int count = getXDeathCountFromHeader(amqpMessage);
            logger.debug("x-death count: " + count);
            if (count >= properties.getRabbitMQ().getMessages().getRetryCount()) {
                this.rabbitTemplate.send(
                        properties.getRabbitMQ().getExchange().getDeadletter(),
                        properties.getRabbitMQ().getRoutingKey().getParkingLot(),
                        amqpMessage
                );
                return Boolean.TRUE;
            }
            throw new AmqpRejectAndDontRequeueException("Failed to send an E-Mail");
        }
        return Boolean.TRUE;
    }

    private int getXDeathCountFromHeader(Message message) {
        Map<String, Object> headers = message.getMessageProperties().getHeaders();
        if (headers.get(properties.getRabbitMQ().getMessages().X_DEATH_HEADER) == null) {
            return 0;
        }
        //noinspection unchecked
        ArrayList<Map<String, ?>> xDeath = (ArrayList<Map<String, ?>>) headers
                .get(properties.getRabbitMQ().getMessages().X_DEATH_HEADER);
        Long count = (Long) xDeath.get(0).get("count");
        return count.intValue();
    }
}
To delay a message before it becomes available in the queue, you should consider using a DelayedExchange: https://docs.spring.io/spring-amqp/docs/2.0.2.RELEASE/reference/html/_reference.html#delayed-message-exchange
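A sketch of what that could look like (names are placeholders; the broker needs the rabbitmq_delayed_message_exchange plugin enabled):

@Bean
public CustomExchange delayExchange() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-delayed-type", "topic");
    return new CustomExchange("mail.delayed.exchange", "x-delayed-message", true, false, args);
}

// when sending, set the per-message delay in milliseconds
rabbitTemplate.convertAndSend("mail.delayed.exchange", routingKey, payload, message -> {
    message.getMessageProperties().setDelay(10000);
    return message;
});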
As for manually sending to the parkingLot queue, that's easy to do with the RabbitTemplate, sending the message using the queue name as the routing key:
/**
 * Send a message to a default exchange with a specific routing key.
 *
 * @param routingKey the routing key
 * @param message a message to send
 * @throws AmqpException if there is a problem
 */
void send(String routingKey, Message message) throws AmqpException;
All the queues are bound to the default exchange via their names as routing keys.
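For example (a minimal sketch; parkingLotQueueName is the queue name from the configuration above):

// the default exchange routes by queue name, so the queue name is the routing key
rabbitTemplate.send(parkingLotQueueName, failedMessage);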