Better way of error handling in Kafka Consumer - java

I have a Spring Boot app configured with spring-kafka where I want to handle all sorts of errors that can happen while listening to a topic. If any message cannot be consumed, because of either a deserialization problem or any other exception, there should be 2 retries, after which the message should be logged to an error file. There are two approaches that can be followed:
First approach (using SeekToCurrentErrorHandler with DeadLetterPublishingRecoverer):
@Autowired
KafkaTemplate<String,Object> template;
@Bean(name = "kafkaSourceProvider")
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
Map<String, Object> config = appProperties.getSource()
.getProperties();
ConcurrentKafkaListenerContainerFactory<K, V> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
(r, e) -> {
if (e instanceof FooException) {
return new TopicPartition(r.topic() + ".DLT", r.partition());
}
else {
// the resolver must return a destination for every record; other failures go to the same .DLT here
return new TopicPartition(r.topic() + ".DLT", r.partition());
}
});
ErrorHandler errorHandler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 2L));
factory.setErrorHandler(errorHandler);
return factory;
}
But for this we require an additional topic (a new .DLT topic), and only then can we log it to a file.
@Bean
public KafkaAdmin admin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
return new KafkaAdmin(configs);
}
@KafkaListener(topics = MY_TOPIC + ".DLT", groupId = MY_ID)
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
@Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
logger.error(exceptionStackTrace);
}
Approach 2 (using a custom SeekToCurrentErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
Map<String, Object> config = appProperties.getSource()
.getProperties();
ConcurrentKafkaListenerContainerFactory<K, V> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
factory.setErrorHandler(new CustomSeekToCurrentErrorHandler());
factory.setRetryTemplate(retryTemplate());
return factory;
}
private RetryTemplate retryTemplate() {
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setBackOffPolicy(backOffPolicy());
retryTemplate.setRetryPolicy(aSimpleReturnPolicy);
return retryTemplate;
}
public class CustomSeekToCurrentErrorHandler extends SeekToCurrentErrorHandler {
private static final int MAX_RETRY_ATTEMPTS = 2;
CustomSeekToCurrentErrorHandler() {
super(MAX_RETRY_ATTEMPTS);
}
@Override
public void handle(Exception exception, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
try {
if (!records.isEmpty()) {
log.warn("Exception: {} occurred with message: {}", exception, exception.getMessage());
super.handle(exception, records, consumer, container);
}
} catch (SerializationException e) {
log.warn("Exception: {} occurred with message: {}", e, e.getMessage());
}
}
}
Can anyone provide their suggestions on the standard way to implement this kind of feature? In the first approach we see the overhead of creating .DLT topics and an additional @KafkaListener. In the second approach, we can directly log our consumer record exception.

With the first approach, it is not necessary to use a DeadLetterPublishingRecoverer; you can use any ConsumerRecordRecoverer that you want; in fact the default recoverer simply logs the failed message.
/**
* Construct an instance with the default recoverer which simply logs the record after
* the backOff returns STOP for a topic/partition/offset.
* @param backOff the {@link BackOff}.
* @since 2.3
*/
public SeekToCurrentErrorHandler(BackOff backOff) {
this(null, backOff);
}
And, in the FailedRecordTracker...
if (recoverer == null) {
this.recoverer = (rec, thr) -> {
...
logger.error(thr, "Backoff "
+ (failedRecord == null
? "none"
: failedRecord.getBackOffExecution())
+ " exhausted for " + ListenerUtils.recordToString(rec));
};
}
Backoff (and a limit to retries) was added to the error handler after adding retry in the listener adapter, so it's "newer" (and preferred).
Also, using in-memory retry can cause issues with rebalancing if long BackOffs are employed.
Finally, only the SeekToCurrentErrorHandler can deal with deserialization problems (via the ErrorHandlingDeserializer).
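Putting that together for the original requirement (two retries, then log the failed record to an error file, with no extra topic), a minimal sketch could look like this; the "kafka-error-log" logger name is only an illustration and is assumed to be routed to a file appender in your logging configuration:
@Bean
public ErrorHandler errorHandler() {
    // any ConsumerRecordRecoverer works here; this one writes to a logger assumed to have a file appender
    Logger errorFileLogger = LoggerFactory.getLogger("kafka-error-log");
    return new SeekToCurrentErrorHandler(
            (rec, ex) -> errorFileLogger.error("{} failed: {}", ListenerUtils.recordToString(rec, true), ex.getMessage()),
            new FixedBackOff(0L, 2L));
}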
EDIT
Use the ErrorHandlingDeserializer together with a SeekToCurrentErrorHandler. Deserialization exceptions are considered fatal and the recoverer is called immediately.
See the documentation.
Here is a simple Spring Boot application that demonstrates it:
@SpringBootApplication
public class So63236346Application {
private static final Logger log = LoggerFactory.getLogger(So63236346Application.class);
public static void main(String[] args) {
SpringApplication.run(So63236346Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so63236346").partitions(1).replicas(1).build();
}
@Bean
ErrorHandler errorHandler() {
return new SeekToCurrentErrorHandler((rec, ex) -> log.error(ListenerUtils.recordToString(rec, true) + "\n"
+ ex.getMessage()));
}
@KafkaListener(id = "so63236346", topics = "so63236346")
public void listen(String in) {
System.out.println(in);
}
@Bean
public ApplicationRunner runner(KafkaTemplate<String, String> template) {
return args -> {
template.send("so63236346", "{\"field\":\"value1\"}");
template.send("so63236346", "junk");
template.send("so63236346", "{\"field\":\"value2\"}");
};
}
}
package com.example.demo;
public class Thing {
private String field;
public Thing() {
}
public Thing(String field) {
this.field = field;
}
public String getField() {
return this.field;
}
public void setField(String field) {
this.field = field;
}
@Override
public String toString() {
return "Thing [field=" + this.field + "]";
}
}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.value.default.type=com.example.demo.Thing
Result
Thing [field=value1]
2020-08-10 14:30:14.780 ERROR 78857 --- [o63236346-0-C-1] com.example.demo.So63236346Application : so63236346-0#7
Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is org.apache.kafka.common.errors.SerializationException: Can't deserialize data [[106, 117, 110, 107]] from topic [so63236346]
2020-08-10 14:30:14.782 INFO 78857 --- [o63236346-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-so63236346-1, groupId=so63236346] Seeking to offset 8 for partition so63236346-0
Thing [field=value2]

The expectation was to log any exception that we might get at the container level as well as the listener level.
Without retrying, the following is the way I have done error handling:
If we encounter any exception at the container level, we should be able to log the message payload with the error description, seek past that offset to skip it, and go ahead receiving the next offset. Though this is done only for DeserializationException here, the rest of the exceptions also need to be handled the same way: their offsets need to be sought past and skipped.
@Component
public class KafkaContainerErrorHandler implements ErrorHandler {
private static final Logger logger = LoggerFactory.getLogger(KafkaContainerErrorHandler.class);
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
String s = thrownException.getMessage().split("Error deserializing key/value for partition ")[1].split(". If needed, please seek past the record to continue consumption.")[0];
// modify below logic according to your topic nomenclature
String topics = s.substring(0, s.lastIndexOf('-'));
int offset = Integer.parseInt(s.split("offset ")[1]);
int partition = Integer.parseInt(s.substring(s.lastIndexOf('-') + 1).split(" at")[0]);
logger.error("...");
TopicPartition topicPartition = new TopicPartition(topics, partition);
logger.info("Skipping {} - {} offset {}", topics, partition, offset);
consumer.seek(topicPartition, offset + 1);
}
@Override
public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord) {
}
}
factory.setErrorHandler(kafkaContainerErrorHandler);
If we get any exception at the @KafkaListener level, then I configure my listener with my custom error handler and log the exception with the message, as can be seen below:
@Bean("customErrorHandler")
public KafkaListenerErrorHandler listenerErrorHandler() {
return (m, e) -> {
logger.error(...);
return m;
};
}

Kafka: ErrorHandler for Serializer [duplicate]

I'm setting up an application consuming Kafka messages.
I followed the Spring docs about Deserialization Error Handling in order to catch deserialization exceptions. I've tried the failedDeserializationFunction method.
This is my Consumer Configuration Class
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetReset);
consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommit);
/* Error Handling */
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
consumerProps.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class.getName());
consumerProps.put(ErrorHandlingDeserializer2.VALUE_FUNCTION, FailedNTCMessageBodyProvider.class);
return consumerProps;
}
@Bean
public ConsumerFactory<String, NTCMessageBody> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
new JsonDeserializer<>(NTCMessageBody.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, NTCMessageBody> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, NTCMessageBody> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
This is the BiFunction Provider
public class FailedNTCMessageBodyProvider implements BiFunction<byte[], Headers, NTCMessageBody> {
@Override
public NTCMessageBody apply(byte[] t, Headers u) {
return new NTCBadMessageBody(t);
}
}
public class NTCBadMessageBody extends NTCMessageBody{
private final byte[] failedDecode;
public NTCBadMessageBody(byte[] failedDecode) {
this.failedDecode = failedDecode;
}
public byte[] getFailedDecode() {
return this.failedDecode;
}
}
When I send just one corrupted message on the topic I got this error (in loop):
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value
I understood that the ErrorHandlingDeserializer2 should delegate to the NTCBadMessageBody type and continue the consumption. I also saw (in debug mode) that it never went into the constructor of the NTCBadMessageBody class.
Use ErrorHandlingDeserializer.
When a deserializer fails to deserialize a message, Spring has no way to handle the problem because it occurs before the poll() returns. To solve this problem, version 2.2 introduced the ErrorHandlingDeserializer. This deserializer delegates to a real deserializer (key or value). If the delegate fails to deserialize the record content, the ErrorHandlingDeserializer returns a DeserializationException instead, containing the cause and raw bytes. When using a record-level MessageListener, if either the key or value contains a DeserializationException, the container’s ErrorHandler is called with the failed ConsumerRecord. When using a BatchMessageListener, the failed record is passed to the application along with the remaining records in the batch, so it is the responsibility of the application listener to check whether the key or value in a particular record is a DeserializationException.
You can use the DefaultKafkaConsumerFactory constructor that takes key and value Deserializer objects and wire in appropriate ErrorHandlingDeserializer instances configured with the proper delegates. Alternatively, you can use consumer configuration properties which are used by the ErrorHandlingDeserializer to instantiate the delegates. The property names are ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS and ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS; the property value can be a class or class name.
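For the constructor-based option, a sketch might look like this (assuming the same KafkaEvent value type used in the configuration below, and a consumerConfigs() method that returns the common consumer properties; the names are only illustrative):
@Bean
public ConsumerFactory<String, KafkaEvent> consumerFactory() {
    // wrap the real deserializers so a failure surfaces as a DeserializationException instead of breaking poll()
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(),
            new ErrorHandlingDeserializer<>(new StringDeserializer()),
            new ErrorHandlingDeserializer<>(new JsonDeserializer<>(KafkaEvent.class)));
}
The full configuration below uses the property-based option instead.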
package com.mypackage.app.config;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeoutException;
import com.mypacakage.app.model.kafka.message.KafkaEvent;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ListenerExecutionFailedException;
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import lombok.extern.slf4j.Slf4j;
@EnableKafka
@Configuration
@Slf4j
public class KafkaConsumerConfig {
@Value("${kafka.bootstrap-servers}")
private String servers;
@Value("${listener.group-id}")
private String groupId;
@Bean
public ConcurrentKafkaListenerContainerFactory<String, KafkaEvent> ListenerFactory() {
ConcurrentKafkaListenerContainerFactory<String, KafkaEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setRetryTemplate(retryTemplate());
factory.setErrorHandler(((exception, data) -> {
/*
* here you can do your custom handling; I am just logging it the same as the default
* error handler does. If you just want to log, you need not configure the error
* handler here; the default handler does it for you. Generally, you would
* persist the failed records to a DB for tracking.
*/
log.error("Error in process with Exception {} and the record is {}", exception, data);
}));
return factory;
}
@Bean
public ConsumerFactory<String, KafkaEvent> consumerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
config.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
config.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
config.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class.getName());
config.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"com.mypackage.app.model.kafka.message.KafkaEvent");
config.put(JsonDeserializer.TRUSTED_PACKAGES, "com.mypackage.app");
return new DefaultKafkaConsumerFactory<>(config);
}
private RetryTemplate retryTemplate() {
RetryTemplate retryTemplate = new RetryTemplate();
/*
* here the retry policy is used to set the number of retry attempts, and which
* exceptions you want to retry and which you don't.
*/
retryTemplate.setRetryPolicy(retryPolicy());
return retryTemplate;
}
private SimpleRetryPolicy retryPolicy() {
Map<Class<? extends Throwable>, Boolean> exceptionMap = new HashMap<>();
// the boolean value in the map determines whether exception should be retried
exceptionMap.put(IllegalArgumentException.class, false);
exceptionMap.put(TimeoutException.class, true);
exceptionMap.put(ListenerExecutionFailedException.class, true);
return new SimpleRetryPolicy(3, exceptionMap, true);
}
}
ErrorHandlingDeserializer
When a deserializer fails to deserialize a message, Spring has no way to handle the problem because it occurs before the poll() returns. To solve this problem, version 2.2 introduced the ErrorHandlingDeserializer. This deserializer delegates to a real deserializer (key or value). If the delegate fails to deserialize the record content, the ErrorHandlingDeserializer returns a DeserializationException instead, containing the cause and raw bytes. When using a record-level MessageListener, if either the key or value contains a DeserializationException, the container’s ErrorHandler is called with the failed ConsumerRecord. When using a BatchMessageListener, the failed record is passed to the application along with the remaining records in the batch, so it is the responsibility of the application listener to check whether the key or value in a particular record is a DeserializationException.
So, according to your code, you are using a record-level MessageListener; just add an ErrorHandler to the container.
Handling Exceptions
If your error handler implements this interface you can, for example, adjust the offsets accordingly. For example, to reset the offset to replay the failed message, you could do something like the following; note however, these are simplistic implementations and you would probably want more checking in the error handler.
@Bean
public ConsumerAwareListenerErrorHandler listen3ErrorHandler() {
return (m, e, c) -> {
this.listen3Exception = e;
MessageHeaders headers = m.getHeaders();
c.seek(new org.apache.kafka.common.TopicPartition(
headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
headers.get(KafkaHeaders.OFFSET, Long.class));
return null;
};
}
Or you can do a custom implementation, like in this example:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericRecord>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory
= new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setErrorHandler(new ErrorHandler() {
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
String s = thrownException.getMessage().split("Error deserializing key/value for partition ")[1].split(". If needed, please seek past the record to continue consumption.")[0];
String topics = s.split("-")[0];
int offset = Integer.valueOf(s.split("offset ")[1]);
int partition = Integer.valueOf(s.split("-")[1].split(" at")[0]);
TopicPartition topicPartition = new TopicPartition(topics, partition);
//log.info("Skipping " + topic + "-" + partition + " offset " + offset);
consumer.seek(topicPartition, offset + 1);
System.out.println("OKKKKK");
}
@Override
public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord) {
}
@Override
public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord, Consumer<?,?> consumer) {
String s = e.getMessage().split("Error deserializing key/value for partition ")[1].split(". If needed, please seek past the record to continue consumption.")[0];
String topics = s.split("-")[0];
int offset = Integer.valueOf(s.split("offset ")[1]);
int partition = Integer.valueOf(s.split("-")[1].split(" at")[0]);
TopicPartition topicPartition = new TopicPartition(topics, partition);
//log.info("Skipping " + topic + "-" + partition + " offset " + offset);
consumer.seek(topicPartition, offset + 1);
System.out.println("OKKKKK");
}
});
return factory;
}
The above answer may have a problem if the topic name contains a character like '-', so I have modified the same logic with a regex.
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.SerializationException;
import org.springframework.kafka.listener.ErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class KafkaErrHandler implements ErrorHandler {
/**
* Method prevents serialization error freeze
*
* @param e
* @param consumer
*/
private void seekSerializeException(Exception e, Consumer<?, ?> consumer) {
String p = ".*partition (.*) at offset ([0-9]*).*";
Pattern r = Pattern.compile(p);
Matcher m = r.matcher(e.getMessage());
if (m.find()) {
int idx = m.group(1).lastIndexOf("-");
String topics = m.group(1).substring(0, idx);
int partition = Integer.parseInt(m.group(1).substring(idx));
int offset = Integer.parseInt(m.group(2));
TopicPartition topicPartition = new TopicPartition(topics, partition);
consumer.seek(topicPartition, (offset + 1));
log.info("Skipped message with offset {} from partition {}", offset, partition);
}
}
@Override
public void handle(Exception e, ConsumerRecord<?, ?> record, Consumer<?, ?> consumer) {
log.error("Error in process with Exception {} and the record is {}", e, record);
if (e instanceof SerializationException)
seekSerializeException(e, consumer);
}
@Override
public void handle(Exception e, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer,
MessageListenerContainer container) {
log.error("Error in process with Exception {} and the records are {}", e, records);
if (e instanceof SerializationException)
seekSerializeException(e, consumer);
}
@Override
public void handle(Exception e, ConsumerRecord<?, ?> record) {
log.error("Error in process with Exception {} and the record is {}", e, record);
}
}
Finally, use the error handler in the config.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericType> macdStatusListenerFactory() {
ConcurrentKafkaListenerContainerFactory<String, GenericType> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(macdStatusConsumerFactory());
factory.setRetryTemplate(retryTemplate());
factory.setErrorHandler(new KafkaErrHandler());
return factory;
}
However, parsing the error string to get the partition, topic and offset is not recommended. If anyone has a better solution, please post it here.
In my factory I've added a CommonErrorHandler:
factory.setCommonErrorHandler(new KafkaMessageErrorHandler());
and KafkaMessageErrorHandler is created as follows:
class KafkaMessageErrorHandler implements CommonErrorHandler {
@Override
public void handleRecord(Exception thrownException, ConsumerRecord<?, ?> record, Consumer<?, ?> consumer, MessageListenerContainer container) {
manageException(thrownException, consumer);
}
@Override
public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer, MessageListenerContainer container, boolean batchListener) {
manageException(thrownException, consumer);
}
private void manageException(Exception ex, Consumer<?, ?> consumer) {
log.error("Error polling message: " + ex.getMessage());
if (ex instanceof RecordDeserializationException) {
RecordDeserializationException rde = (RecordDeserializationException) ex;
consumer.seek(rde.topicPartition(), rde.offset() + 1L);
consumer.commitSync();
} else {
log.error("Exception not handled");
}
}
}

Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder

I am consuming batches in Kafka. Retry is not supported in the Spring Cloud Stream Kafka binder with batch mode, but there is an option given that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve similar functionality to retry in the binder.
I tried that with a SeekToCurrentBatchErrorHandler, but it's retrying more than the limit I set, which is 3 times.
How can I do that?
I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? For a record listener I used to check whether deliveryAttempt (retry) reached 3 and then send it to the DLQ topic; see the check in the listener below.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with this library, spring-cloud-stream-kafka-binder? Please explain with an example; I am new to this.
Currently I have below code.
@Configuration
public class ConsumerConfig {
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
return (container, dest, group) -> {
container.getContainerProperties().setAckOnError(false);
SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler
= new SeekToCurrentBatchErrorHandler();
seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
//container.setBatchErrorHandler(new BatchLoggingErrorHandler());
};
}
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
@Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment
acknowledgment,
@Header(name = "deliveryAttempt", defaultValue = "1") int
deliveryAttempt) {
try {
log.info("Received activity message with message length {}", messages.size());
nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
acknowledgment.acknowledge();
log.debug("Processed activity message {} successfully!!", messages.size());
} catch (MessagePublishException e) {
if (deliveryAttempt == 3) {
log.error(
String.format("Exception occurred, sending the message=%s to DLQ due to: ",
"message"),
e);
publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
} else {
throw e;
}
}
}
After seeing @Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots:
(screenshot: not able to import RetryingBatchErrorHandler)
(screenshot: my spring cloud dependencies)
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
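For the second (RecoveringBatchErrorHandler) option, a sketch could look like the following; the recoveringCustomizer bean name and the process() method are hypothetical:
@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> recoveringCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, dest, group) -> container.setBatchErrorHandler(
            // retries from the failed record; when retries are exhausted, only that record goes to errors.<dest>.<group>
            new RecoveringBatchErrorHandler(
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition())),
                    new FixedBackOff(5000L, 2L)));
}
@Bean
public Consumer<List<String>> input() {
    return list -> {
        for (int i = 0; i < list.size(); i++) {
            try {
                process(list.get(i)); // hypothetical per-record processing
            }
            catch (Exception e) {
                // tells the error handler which record in the batch failed
                throw new BatchListenerFailedException("failed to process record", e, i);
            }
        }
    };
}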
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {
public static void main(String[] args) {
SpringApplication.run(So69175145Application.class, args);
}
@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
KafkaTemplate<byte[], byte[]> template) {
return (container, dest, group) -> {
container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
new DeadLetterPublishingRecoverer(template,
(rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
};
}
/*
* DLT topic won't be auto-provisioned since enableDlq is false
*/
@Bean
public NewTopic topic() {
return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
}
/*
* Functional equivalent of #StreamListener
*/
@Bean
public Consumer<List<String>> input() {
return list -> {
System.out.println(list);
throw new RuntimeException("test");
};
}
/*
* Not needed here - just to show we sent them to the DLT
*/
@KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
public void listen(String in) {
System.out.println("From DLT: " + in);
}
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838ERROR...
...
[foo]
2021-09-14 09:55:37.873ERROR...
...
[foo]
2021-09-14 09:55:42.886ERROR...
...
From DLT: foo

DeadLetterPublishingRecoverer - Dead-letter publication failed with InvalidTopicException when the dead-letter topic name ends with _ERR

I identified an error when I changed the DeadLetterPublishingRecoverer destinationResolver.
When I use:
private static final BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition>
DESTINATION_RESOLVER = (cr, e) -> new TopicPartition(cr.topic() + ".ERR", cr.partition());
it works perfectly.
However, if you use _ERR instead of .ERR, an error occurs:
2020-08-05 12:53:10,277 [kafka-producer-network-thread | producer-kafka-tx-group1.ABC_TEST_XPTO.0] WARN o.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-tx-group1.ABC_TEST_XPTO.0, transactionalId=kafka-tx-group1.ABC_TEST_XPTO.0] Error while fetching metadata with correlation id 7 : {ABC_TEST_XPTO_ERR=INVALID_TOPIC_EXCEPTION}
2020-08-05 12:53:10,278 [kafka-producer-network-thread | producer-kafka-tx-group1.ABC_TEST_XPTO.0] ERROR org.apache.kafka.clients.Metadata - [Producer clientId=producer-kafka-tx-group1.ABC_TEST_XPTO.0, transactionalId=kafka-tx-group1.ABC_TEST_XPTO.0] Metadata response reported invalid topics [ABC_TEST_XPTO_ERR]
2020-08-05 12:53:10,309 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] ERROR o.s.k.s.LoggingProducerListener - Exception thrown when sending a message with key='null' and payload='XPTOEvent(super=Event(id=CAPBA2548, destination=ABC_TEST_XPTO, he...' to topic ABC_TEST_XPTO_ERR and partition 0:
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [ABC_TEST_XPTO_ERR]
2020-08-05 12:53:10,320 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] ERROR o.s.k.l.DeadLetterPublishingRecoverer - Dead-letter publication failed for: ProducerRecord(topic=ABC_TEST_XPTO_ERR, partition=0, headers=RecordHeaders(headers = ..
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [ABC_TEST_XPTO_ERR]
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:573)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:388)
at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.publish(DeadLetterPublishingRecoverer.java:290)
at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.accept(DeadLetterPublishingRecoverer.java:226)
at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.accept(DeadLetterPublishingRecoverer.java:54)
at org.springframework.kafka.listener.FailedRecordTracker.skip(FailedRecordTracker.java:106)
My topics use _ in the middle of the name, for example ABC_TEST_XPTO, so I would like to set up the dead letter topic with _ERR, if possible
My environment
Spring Boot 2.3.2.RELEASE
Spring-Kafka 2.5.3.RELEASE but the same problem occurs with 2.5.4.RELEASE
Java 11
private static final BiFunction<ConsumerRecord, Exception, TopicPartition>
DESTINATION_RESOLVER = (cr, e) -> new TopicPartition(cr.topic() + "_ERR", cr.partition());
@Component
class ContainerFactoryConfigurer {
ContainerFactoryConfigurer(ConcurrentKafkaListenerContainerFactory<?, ?> factory,
ChainedKafkaTransactionManager<?, ?> tm,
KafkaTemplate<Object, Object> template) {
factory.getContainerProperties().setTransactionManager(tm);
DefaultAfterRollbackProcessor rollbackProcessor = new DefaultAfterRollbackProcessor((record, exception) -> {
}, new FixedBackOff(0L, Long.valueOf(maxAttemps)), template, true);
factory.setAfterRollbackProcessor(rollbackProcessor);
SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(
new DeadLetterPublishingRecoverer(Collections.singletonMap(Object.class, template), DESTINATION_RESOLVER), new FixedBackOff(0L, Long.valueOf(maxAttemps)));
errorHandler.setCommitRecovered(true);
errorHandler.setAckAfterHandle(true);
factory.setErrorHandler(errorHandler);
}
}
Thanks
DPG
This works fine for me...
@SpringBootApplication
public class So63270367Application {
public static void main(String[] args) {
SpringApplication.run(So63270367Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("ABC_TEST_XPTO_ERR").partitions(1).replicas(1).build();
}
@Bean
public ApplicationRunner runner(KafkaTemplate<String, String> template) {
return args -> template.send("ABC_TEST_XPTO_ERR", "foo");
}
@KafkaListener(id = "so63270367", topics = "ABC_TEST_XPTO_ERR")
public void listen(String in) {
System.out.println(in);
}
}
spring.kafka.consumer.auto-offset-reset=earliest
Maybe your brokers have some rules about topic names; maybe look at the broker logs?
EDIT
As I said in my comment, it shouldn't matter where the record is published from; this still works for me...
@SpringBootApplication
public class So63270367Application {
public static void main(String[] args) {
SpringApplication.run(So63270367Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("ABC_TEST_XPTO").partitions(1).replicas(1).build();
}
@Bean
public NewTopic topicErr() {
return TopicBuilder.name("ABC_TEST_XPTO_ERR").partitions(1).replicas(1).build();
}
@Bean
public ApplicationRunner runner(KafkaTemplate<String, String> template) {
return args -> template.send("ABC_TEST_XPTO_ERR", "foo");
}
@KafkaListener(id = "so63270367", topics = "ABC_TEST_XPTO")
public void listen(String in) {
System.out.println(in);
throw new RuntimeException("test");
}
@KafkaListener(id = "so63270367err", topics = "ABC_TEST_XPTO_ERR")
public void listenErr(String in) {
System.out.println("From DLT:" + in);
}
@Bean
public SeekToCurrentErrorHandler eh(KafkaOperations<String, String> template) {
return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(
template,
(cr, e) -> new TopicPartition(cr.topic() + "_ERR", cr.partition())),
new FixedBackOff(0L, 0L));
}
}
From DLT:foo

Spring for Kafka 2.3 setting an offset during runtime for specific listener with KafkaMessageListenerContainer

I have to implement functionality to (re-)set a listener for a certain topic/partition to any given offset. So if events are committed up to offset 5 and the admin decides to reset the offset to 2, then events 3, 4 and 5 should be reprocessed.
We are using Spring for Kafka 2.3 and I was trying to follow the documentation on ConsumerSeekAware, which seems to be exactly what I am looking for.
The problem, however, is that we are also using topics that are created at runtime. We use a KafkaMessageListenerContainer through a DefaultKafkaConsumerFactory for that purpose, and I don't know where to put the registerSeekCallback or something similar.
Is there any way to achieve this? I have problems understanding how the class using the @KafkaListener annotations maps to the way listeners are created in the factory.
Any help would be appreciated. Even if it is only an explanation on how these things work together.
This is how the KafkaMessageListenerContainer are basically created:
public KafkaMessageListenerContainer<String, Object> createKafkaMessageListenerContainer(String topicName,
ContainerPropertiesStrategy containerPropertiesStrategy) {
MessageListener<String, String> messageListener = getMessageListener(topicName);
ConsumerFactory<String, Object> consumerFactory = new DefaultKafkaConsumerFactory<>(getConsumerFactoryConfiguration());
KafkaMessageListenerContainer<String, Object> kafkaMessageListenerContainer = createKafkaMessageListenerContainer(topicName, messageListener, bootstrapServers, containerPropertiesStrategy, consumerFactory);
return kafkaMessageListenerContainer;
}
public MessageListener<String, String> getMessageListener(String topic) {
MessageListener<String, String> messageListener = new MessageListener<String, String>() {
@Override
public void onMessage(ConsumerRecord<String, String> message) {
try {
consumerService.consume(topic, message.value());
} catch (IOException e) {
log.log(Level.WARNING, "Message couldn't be consumed", e);
}
}
};
return messageListener;
}
public static KafkaMessageListenerContainer<String, Object> createKafkaMessageListenerContainer(
String topicName, MessageListener<String, String> messageListener, String bootstrapServers, ContainerPropertiesStrategy containerPropertiesStrategy,
ConsumerFactory<String, Object> consumerFactory) {
ContainerProperties containerProperties = containerPropertiesStrategy.getContainerPropertiesForTopic(topicName);
containerProperties.setMessageListener(messageListener);
KafkaMessageListenerContainer<String, Object> kafkaMessageListenerContainer = new KafkaMessageListenerContainer<>(
consumerFactory, containerProperties);
kafkaMessageListenerContainer.setBeanName(topicName);
return kafkaMessageListenerContainer;
}
Hope that helps.
The key component is the AbstractConsumerSeekAware. Hopefully this will provide enough to get you started...
@SpringBootApplication
public class So59682801Application {
public static void main(String[] args) {
SpringApplication.run(So59682801Application.class, args).close();
}
@Bean
public ApplicationRunner runner(ListenerCreator creator,
KafkaTemplate<String, String> template, GenericApplicationContext context) {
return args -> {
System.out.println("Hit enter to create a listener");
System.in.read();
ConcurrentMessageListenerContainer<String, String> container =
creator.createContainer("so59682801group", "so59682801");
// register the container as a bean so that all the "...Aware" interfaces are satisfied
context.registerBean("so59682801", ConcurrentMessageListenerContainer.class, () -> container);
context.getBean("so59682801", ConcurrentMessageListenerContainer.class); // re-fetch to initialize
container.start();
// send some messages
IntStream.range(0, 10).forEach(i -> template.send("so59682801", "test" + i));
System.out.println("Hit enter to reseek");
System.in.read();
((MyListener) container.getContainerProperties().getMessageListener())
.reseek(new TopicPartition("so59682801", 0), 5L);
System.out.println("Hit enter to exit");
System.in.read();
};
}
}
@Component
class ListenerCreator {
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
ListenerCreator(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
factory.getContainerProperties().setIdleEventInterval(5000L);
this.factory = factory;
}
ConcurrentMessageListenerContainer<String, String> createContainer(String groupId, String... topics) {
ConcurrentMessageListenerContainer<String, String> container = factory.createContainer(topics);
container.getContainerProperties().setGroupId(groupId);
container.getContainerProperties().setMessageListener(new MyListener());
return container;
}
}
class MyListener extends AbstractConsumerSeekAware implements MessageListener<String, String> {
@Override
public void onMessage(ConsumerRecord<String, String> data) {
System.out.println(data);
}
public void reseek(TopicPartition partition, long offset) {
getSeekCallbackFor(partition).seek(partition.topic(), partition.partition(), offset);
}
}
Calling reseek() on the listener queues the seek for the consumer thread when it wakes from the poll() (actually before the next one).
I think you can use an annotation for Spring Kafka like this, although it might be awkward setting the offset in an annotation at runtime:
@KafkaListener(topicPartitions =
@TopicPartition(topic = "${kafka.consumer.topic}", partitionOffsets = {
@PartitionOffset(partition = "0", initialOffset = "2")}),
containerFactory = "filterKafkaListenerContainerFactory", id = "${kafka.consumer.groupId}")
public void receive(ConsumedObject event) {
log.info(String.format("Consumed message with correlationId: %s", event.getCorrelationId()));
consumerHelper.start(event);
}
Alternatively, here is some code I wrote to consume from a given offset. I simulated the consumer failing on a message; this uses a KafkaConsumer directly, though, rather than the KafkaMessageListenerContainer.
private static void ConsumeFromOffset(KafkaConsumer<String, Customer> consumer, boolean flag, String topic) {
Scanner scanner = new Scanner(System.in);
System.out.print("Enter offset: ");
int offsetInput = scanner.nextInt();
while (true) {
ConsumerRecords<String, Customer> records = consumer.poll(500);
for (ConsumerRecord<String, Customer> record : records) {
Customer customer = record.value();
System.out.println(customer + " has offset ->" + record.offset());
if (record.offset() == 7 && flag) {
System.out.println("simulating consumer failing after offset 7..");
break;
}
}
consumer.commitSync();
if (flag) {
// consumer.seekToBeginning(Stream.of(new TopicPartition(topic, 0)).collect(Collectors.toList())); // consume from the beginning
consumer.seek(new TopicPartition(topic, 0), 3); // consume
flag = false;
}
}
}

How to move error message to rabbitmq dead letter queue

I have read a lot of documentation/Stack Overflow and I still have a problem moving a message to the dead letter queue when an exception occurs. I'm using Spring Boot. Here is my configuration:
@Autowired
private RabbitTemplate rabbitTemplate;
@Bean
RetryOperationsInterceptor interceptor() {
RepublishMessageRecoverer recoverer = new RepublishMessageRecoverer(rabbitTemplate, "error_exchange ", "error_key");
return RetryInterceptorBuilder
.stateless()
.recoverer(recoverer)
.build();
}
Dead letter queue:
Features
x-dead-letter-routing-key: error_key
x-dead-letter-exchange: error_exchange
durable: true
Policy DLX
Name of the queue: error
My exchange:
name:error_exchange
binding: to: error, routing_key: error_key
Here is my consumer:
@RabbitListener(queues = "${rss_reader_chat_queue}")
public void consumeMessage(Message message) {
try {
List<ChatMessage> chatMessages = messageTransformer.transformMessage(message);
List<ChatMessage> save = chatMessageRepository.save(chatMessages);
sendMessagesToChat(save);
}
catch(Exception ex) {
throw new AmqpRejectAndDontRequeueException(ex);
}
}
So when I send an invalid message and some exception occurs, it happens once (and that's good, because previously the message was sent over and over again), but the message doesn't go to my dead letter queue. Can you help me with this?
You need to show the rest of your configuration - boot properties, queue @Beans etc. You also seem to have some confusion between using a republishing recoverer vs. dead letter queues; they are different ways to achieve similar results. You typically wouldn't use both.
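If you do want the republishing approach from your question instead of a DLX/DLQ, the retry interceptor would typically also need to be wired into the listener container factory; a sketch, assuming you replace Boot's auto-configured SimpleRabbitListenerContainerFactory:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        RetryOperationsInterceptor interceptor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // exhausted deliveries are republished by the RepublishMessageRecoverer to error_exchange/error_key
    factory.setAdviceChain(interceptor);
    return factory;
}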
Here's a simple boot app that demonstrates using a DLX/DLQ...
@SpringBootApplication
public class So43694619Application implements CommandLineRunner {
public static void main(String[] args) {
ConfigurableApplicationContext context = SpringApplication.run(So43694619Application.class, args);
context.close();
}
@Autowired
RabbitTemplate template;
@Autowired
AmqpAdmin admin;
private final CountDownLatch latch = new CountDownLatch(1);
@Override
public void run(String... arg0) throws Exception {
this.template.convertAndSend("so43694619main", "foo");
this.latch.await(10, TimeUnit.SECONDS);
this.admin.deleteExchange("so43694619dlx");
this.admin.deleteQueue("so43694619main");
this.admin.deleteQueue("so43694619dlx");
}
@Bean
public Queue main() {
Map<String, Object> args = new HashMap<>();
args.put("x-dead-letter-exchange", "so43694619dlx");
args.put("x-dead-letter-routing-key", "so43694619dlxRK");
return new Queue("so43694619main", true, false, false, args);
}
@Bean
public Queue dlq() {
return new Queue("so43694619dlq");
}
@Bean
public DirectExchange dlx() {
return new DirectExchange("so43694619dlx");
}
@Bean
public Binding dlqBinding() {
return BindingBuilder.bind(dlq()).to(dlx()).with("so43694619dlxRK");
}
@RabbitListener(queues = "so43694619main")
public void listenMain(String in) {
throw new AmqpRejectAndDontRequeueException("failed");
}
@RabbitListener(queues = "so43694619dlq")
public void listenDlq(String in) {
System.out.println("ReceivedFromDLQ: " + in);
this.latch.countDown();
}
}
Result:
ReceivedFromDLQ: foo
