How to publish Spring Kafka DLQ in 2.5.4 version - java

I need your help and guidance on this.
I was using spring-kafka version 2.2.x in my current project.
The error handling that I created looks like this:
@Bean("kafkaConsumer")
public ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> eventKafkaConsumer() {
ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setErrorHandler(new SeekToCurrentErrorHandler(createDeadLetterPublishingRecoverer(), 3));
return factory;
}
public DeadLetterPublishingRecoverer createDeadLetterPublishingRecoverer() {
return new DeadLetterPublishingRecoverer(getEventKafkaTemplate(),
(record, ex) -> new TopicPartition("topic-undelivered", -1));
}
Then I upgraded all of my project dependencies, such as spring-boot and spring-kafka, to the latest version: 2.5.4.RELEASE.
I found that some of the methods were deprecated and had changed.
SeekToCurrentErrorHandler
SeekToCurrentErrorHandler errorHandler =
new SeekToCurrentErrorHandler((record, exception) -> {
// recover after 3 failures, with no back off - e.g. send to a dead-letter topic
}, new FixedBackOff(0L, 2L));
My question is,
how to produce the DLQ with these configurations:
EDITED
@Bean("kafkaConsumer")
public ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> kafkaConsumer() {
ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(consumerConcurrencyCount);
factory.setErrorHandler(errorHandler());
return factory;
}
public SeekToCurrentErrorHandler errorHandler() {
return new SeekToCurrentErrorHandler(
deadLetterPublishingRecoverer(),
new FixedBackOff(0L, 2L)
);
}
public DeadLetterPublishingRecoverer deadLetterPublishingRecoverer() {
return new DeadLetterPublishingRecoverer(
getEventKafkaTemplate(),
(record, ex) -> {
if (ex.getCause() instanceof BusinessException || ex.getCause() instanceof TechnicalException) {
return new TopicPartition("topic-undelivered", -1);
}
return new TopicPartition("topic-fail", -1);
});
}
public KafkaOperations<String, Object> getEventKafkaTemplate() { // producer to DLQ
return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()));
}
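For reference, a listener wired to this factory might look like the following (a minimal sketch; the topic, group and process() call are placeholders, not part of the original configuration):
@KafkaListener(topics = "events", groupId = "events-group", containerFactory = "kafkaConsumer")
public void onEvent(Map<String, Object> event) {
    // if this throws, the SeekToCurrentErrorHandler retries the record
    // (FixedBackOff(0L, 2L) = 3 delivery attempts in total) and the
    // DeadLetterPublishingRecoverer then publishes it to the resolved DLQ topic
    process(event);
}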
This configuration works, thanks to Gary!
Thanks in advance

It's not clear what you mean by
The problem is, in the documentation, it's still using the old method, which is deprecated for 2.5.X version
The KafkaOperations is an interface that the KafkaTemplate implements; the only change you need to make is to change the maxAttempts to a BackOff...
@Bean("kafkaConsumer")
public ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> eventKafkaConsumer() {
ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setErrorHandler(new SeekToCurrentErrorHandler(createDeadLetterPublishingRecoverer(), new FixedBackOff(0, 2L)));
return factory;
}
public DeadLetterPublishingRecoverer createDeadLetterPublishingRecoverer() {
return new DeadLetterPublishingRecoverer(getEventKafkaTemplate(),
(record, ex) -> new TopicPartition("topic-undelivered", -1));
}

Related

how to read message from Kafka consumer after some time interval

In my Spring Boot application I have a Kafka consumer class which reads messages whenever they are available in the topic. I want to limit the consumer to consuming one message every 2 hours: after reading one message, the consumer should be paused for 2 hours and then consume another message.
This is my consumer config method:
@Bean
public Map<String, Object> scnConsumerConfigs() {
Map<String, Object> propsMap = new HashMap<>();
// common props
logger.info("KM Dataloader :: Kafka Brokers for Software topic: {}", bootstrapServersscn);
propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServersscn);
propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 7200000);
// ssl props
propsMap.put("security.protocol", mpaasSecurityProtocol);
propsMap.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, truststorePath);
propsMap.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, truststorePassword);
propsMap.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, keystorePath);
propsMap.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, keystorePassword);
return propsMap;
}
Then I create this container factory method where I set up the rest of the Kafka configuration:
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
LOGGER.info("Setting concurrency to {} for {}", config.getConcurrency(), topicName);
factory.setConcurrency(config.getConcurrency());
factory.setConsumerFactory(cFactory);
factory.setRetryTemplate(retryTemplate);
factory.getContainerProperties().setIdleBetweenPolls(7200000);
return factory;
Using this code, partitions are rebalanced every 2 hours, but it is not reading messages at all.
My Kafka consumer method:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
return new KmKafkaListener(softwareService) {
@KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
throws InterruptedException {
this.onMessage(record);
}
};
}
Try adding a @KafkaListener-annotated method in KmKafkaListener so that Spring Kafka will take care of calling it.
public class KmKafkaListener{
@KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
throws InterruptedException {
this.onMessage(record);
}
}
and initialize the bean this way:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
return new KmKafkaListener(softwareService);
}
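Separately, for the two-hour pause itself: the gap between polls (processing time plus idleBetweenPolls) has to stay below max.poll.interval.ms, otherwise the broker considers the consumer dead and triggers a rebalance, which matches the symptom described above. A minimal sketch under that assumption, reusing the property names from the question:
// keep max.poll.interval.ms comfortably above the idle time between polls
propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 7500000);
// ...
factory.getContainerProperties().setIdleBetweenPolls(7200000L); // 2 hours between polls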

How to Make Subscribers Not Consuming The Same Messages - Redis Pubsub Messaging Using Spring Data Redis

So recently I was exploring how to make a client app that utilises the Redis Pub/Sub feature resemble Apache Kafka messaging. I know that the two have their own characteristics, so I might not be able to make the client app, a Redis subscriber, work exactly like an Apache Kafka consumer.
What I'm currently tweaking is the Redis subscribers' behaviour on the client app side, so that they will not receive (or 'consume') the same message sent to their subscribed channel. I know this is basically impossible, given Redis Pub/Sub's behaviour that all subscribers receive the published message equally...
... But I did find a way.
FYI, I am using :
Java
Maven, for dependency management
SpringBoot
Dependencies...
Spring Data Redis
Jedis (3.6.0)
json - org.json (20210307)
So here's my setup...
BeanConfiguration.java
@Configuration
public class BeanConfiguration{
@Bean
public JedisConnectionFactory connectionFactory() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
JedisConnectionFactory connFactory = new JedisConnectionFactory(redisStandaloneConfiguration);
return connFactory;
}
@Bean
public RedisTemplate<String, Object> redisTemplate() {
RedisTemplate<String, Object> connTemplate = new RedisTemplate<String, Object>();
connTemplate.setConnectionFactory(connectionFactory());
connTemplate.setDefaultSerializer(new GenericJackson2JsonRedisSerializer());
connTemplate.setKeySerializer(new StringRedisSerializer());
connTemplate.setHashKeySerializer(new GenericJackson2JsonRedisSerializer());
connTemplate.setValueSerializer(new GenericJackson2JsonRedisSerializer());
return connTemplate;
}
@Bean
public MessageListenerAdapter messageListener() {
return new MessageListenerAdapter(new RedisMessageSubscriber(redisTemplate()));
}
@Bean
public ChannelTopic topic() {
return new ChannelTopic("mychannel");
}
@Bean
public RedisMessageListenerContainer redisContainer() {
RedisMessageListenerContainer container = new RedisMessageListenerContainer();
container.setConnectionFactory(connectionFactory());
container.addMessageListener(messageListener(), topic());
container.setTaskExecutor(Executors.newFixedThreadPool(4));
return container;
}
}
RedisMessageSubscriber.java
@Service
public class RedisMessageSubscriber implements MessageListener {
private RedisTemplate<String, Object> redisTemplate;
public RedisMessageSubscriber(RedisTemplate<String, Object> redisTemplate) {
this.redisTemplate = redisTemplate;
}
@Override
public void onMessage(Message message, byte[] pattern) {
JSONObject jsonMessage = new JSONObject(message.toString());
lockAndProcessMessage(jsonMessage);
}
private void lockAndProcessMessage(JSONObject jsonMessage) {
String key = jsonMessage.getString("id");
String value = jsonMessage.getString("value");
Boolean isNotExist = redisTemplate.opsForValue().setIfAbsent(key, value, 5, TimeUnit.MINUTES);
if (isNotExist) {
System.out.println("SUCCESSFULLY SET ['" + key + "'] Expired in '1 minute'");
try {
System.out.println("Processing ['" + key + "'] ...");
Thread.sleep(60000);
} catch(InterruptedException ex) {
ex.printStackTrace();
} finally {
redisTemplate.opsForValue().getOperations().delete(key);
System.out.println("UNLOCKED ['" + key + "']");
}
} else {
System.out.println("FAILED TO LOCKED ['" + key + "']");
}
}}
Using the code above, I was able to achieve the "no-double-consuming" condition, but there are some circumstances in which "double-consuming" still happens.
If instance "A" is processing queue item 15 while instance "B" is still processing queue item 6, then by the time "A" is done with 15 and "B" catches up to 15, "B" WILL PROCESS queue item 15 again, because the marker left by "A" has already been deleted (since "A" finished processing 15).
Is there any way to counter this weakness? I am also open to any solution, suggestion, discussion, or findings of weaknesses in my sample code. Thank you
Sorry if my english is bad :D
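One possible direction (an untested sketch, not a verified fix): instead of deleting the lock key once processing is done, leave a separate, longer-lived "processed" marker per message id, so a slower instance that reaches the same message later still sees it as handled. The key names, TTLs and process() call below are illustrative:
private void lockAndProcessMessage(JSONObject jsonMessage) {
    String id = jsonMessage.getString("id");
    String value = jsonMessage.getString("value");
    // 1) skip anything already fully processed (durable marker, long TTL)
    if (Boolean.TRUE.equals(redisTemplate.hasKey("processed:" + id))) {
        return;
    }
    // 2) short-lived lock so only one instance processes this id at a time
    Boolean lockAcquired = redisTemplate.opsForValue().setIfAbsent("lock:" + id, value, 5, TimeUnit.MINUTES);
    if (Boolean.TRUE.equals(lockAcquired)) {
        try {
            process(jsonMessage); // placeholder for the actual work
        } finally {
            // 3) leave a marker that outlives the lock instead of deleting everything
            redisTemplate.opsForValue().set("processed:" + id, value, 1, TimeUnit.DAYS);
            redisTemplate.delete("lock:" + id);
        }
    }
}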

Spring Boot Kafka: listening every two hours and sending info in case the connection is lost

How can I check every two hours whether the Kafka server is running or not, and in case the connection is lost, throw an event via the method I created called "throwEvent()", listening to Kafka with 'listenKafkaEveryThisMs: 200000'?
My Kafka YAML:
kafka:
url: localhost:9092
topic: topicName
groupid: conver
offsetResetConfig: earliest
concurrency: 1
maxPollInternalMsConfig: 300000
maxPollRecordsConfig: 30
errorHandlerRetryCount: 5
listenKafkaEveryThisMs: 200000
My KafkaConsumerConfig class:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {
@Value("${kafka.concurrency}")
private String concurrency;
@Value("${kafka.url}")
private String kafkaUrl;
@Value("${kafka.groupid}")
private String groupid;
@Value("${kafka.offsetResetConfig}")
private String offsetResetConfig;
@Value("${kafka.maxPollInternalMsConfig}")
private String maxPollInternalMsConfig;
@Value("${kafka.maxPollRecordsConfig}")
private String maxPollRecordsConfig;
@Value("${kafka.errorHandlerRetryCount}")
private String retryCount;
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupid);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetResetConfig);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.parseInt(maxPollInternalMsConfig));
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, Integer.parseInt(maxPollRecordsConfig));
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(Integer.parseInt(concurrency));
factory.getContainerProperties().setSyncCommits(true);
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
factory.getContainerProperties().setAckOnError(false);
factory.setStatefulRetry(true);
factory.setErrorHandler(new SeekToCurrentErrorHandler(Integer.parseInt(retryCount)));
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new AlwaysRetryPolicy());
retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
factory.setRetryTemplate(retryTemplate);
return factory;
}
}
My KafkaConsumer class:
@Service
public class KafkaConsumer implements ConsumerSeekAware {
private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);
@KafkaListener(topics = "#{'${kafka.topic}'}", groupId = "#{'${kafka.groupid}'}")
public void consume(String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition,
@Header(KafkaHeaders.OFFSET) Long offset, Acknowledgment ack, KafkaMessageDrivenChannelAdapter kafkaMessageDrivenChannelAdapter) {
if (!kafkaMessageDrivenChannelAdapter.isRunning()) {
throwEvent();
}
try {
mobileSubscriptionService.processMessage(message, ack, null);
} catch (ParseException e) {
logger.error(e.getMessage());
}
}
@Scheduled(fixedDelayString = "${kafka.listenKafkaEveryThisMs}")
private void throwEvent() {
Map<String, String> eventDetails = new HashMap<>();
eventDetails.put("eventDetailsKey", "eventDetailsValue");
AppDynamicsEventUtil.publishEvent("eventSummary", EventSeverityStatus.INFO, EventTypeStatus.CUSTOM, eventDetails);
}
I really don't know what I should use to check whether the Kafka server is running or not.
Thank you
For checking the cluster connection state, the most straightforward approach would be AdminClient.describeCluster().
Alternatively, you can hook such a check into Actuator.
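For example, a scheduled check using AdminClient.describeCluster() might look like this (a minimal sketch, not from the original post; the class name, timeout and alerting step are assumptions):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class KafkaClusterChecker {

    @Value("${kafka.url}")
    private String kafkaUrl;

    @Scheduled(fixedDelayString = "${kafka.listenKafkaEveryThisMs}")
    public void checkCluster() {
        Map<String, Object> conf = new HashMap<>();
        conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
        try (AdminClient admin = AdminClient.create(conf)) {
            // fails (e.g. with a TimeoutException) when the cluster is unreachable
            admin.describeCluster().nodes().get(10, TimeUnit.SECONDS);
        } catch (Exception e) {
            // cluster unreachable: publish the alert event here (the poster's throwEvent() logic)
        }
    }
}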

RabbitMQ RepublishMessageRecoverer: ignore ImmediateAcknowledgeAmqpException

I'm creating a RabbitMQ consumer using...
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
<version>2.2.1.RELEASE</version>
</dependency>
My Listener looks like this...
@RabbitListener(queues = "queue")
public void getMessages(String message) {
if(message.contains("error")) {
throw new ImmediateAcknowledgeAmqpException("Invalid message, discarding");
}
throw new AmqpRejectAndDontRequeueException("Re-queuing message");
}
I'm expecting ImmediateAcknowledgeAmqpException to discard the message entirely, but it keeps ending up on the dead-letter queue :( Here's my configuration:
@Bean
public RepublishMessageRecoverer republishMessageRecoverer(RabbitTemplate rabbitTemplate) {
RepublishMessageRecoverer republishMessageRecoverer = new RepublishMessageRecoverer(rabbitTemplate,"exchange", "deadletter_routing_key");
return republishMessageRecoverer;
}
@Bean
public SimpleRetryPolicy simpleRetryPolicy() {
Map<Class<? extends Throwable>, Boolean> includeExceptions = new HashMap<>();
includeExceptions.put(ImmediateAcknowledgeAmqpException.class, false);
includeExceptions.put(AmqpRejectAndDontRequeueException.class, true);
return new SimpleRetryPolicy(5, includeExceptions, true);
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
SimpleRetryPolicy simpleRetryPolicy, RepublishMessageRecoverer republishMessageRecoverer) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setDefaultRequeueRejected(false);
factory.setAdviceChain(RetryInterceptorBuilder
.stateless()
.retryPolicy(simpleRetryPolicy)
.recoverer(republishMessageRecoverer)
.backOffOptions(5000, 1,
1000)
.build());
return factory;
}
Is there a way I can exclude the ImmediateAcknowledgeAmqpException from the recoverer?
Thanks
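One possible direction (an untested sketch, assuming the listener exception reaches the recoverer wrapped in a ListenerExecutionFailedException) is to subclass RepublishMessageRecoverer and skip republishing for that cause:
@Bean
public MessageRecoverer republishMessageRecoverer(RabbitTemplate rabbitTemplate) {
    return new RepublishMessageRecoverer(rabbitTemplate, "exchange", "deadletter_routing_key") {
        @Override
        public void recover(Message message, Throwable cause) {
            // drop messages that failed with ImmediateAcknowledgeAmqpException
            // instead of dead-lettering them; the real failure may be wrapped
            Throwable root = (cause.getCause() != null) ? cause.getCause() : cause;
            if (root instanceof ImmediateAcknowledgeAmqpException) {
                return;
            }
            super.recover(message, cause);
        }
    };
}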

How to send Kafka offsets in transaction, which is created by KafkaTemplate?

I want to implement read-process-write pattern - https://www.confluent.io/blog/transactions-apache-kafka/. So, I need to consume records, process them and then to commit consumed offsets.
I use org.apache.kafka.clients.consumer.KafkaConsumer for consuming messages; I mean, it is not a Spring-managed consumer.
I use org.springframework.kafka.core.KafkaTemplate for producing messages. I create its bean like this:
@Bean
public Map<String, Object> producerConfigs() {
final Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrapServers");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString());
props.put(ProducerConfig.ACKS_CONFIG, "all");
return props;
}
@Bean
public DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory() {
DefaultKafkaProducerFactory<String, String> kafkaProducerFactory = new DefaultKafkaProducerFactory<>(producerConfigs());
kafkaProducerFactory.setTransactionIdPrefix("transaction-id-prefix");
return kafkaProducerFactory;
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate(DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory) {
return new KafkaTemplate<>(defaultKafkaProducerFactory);
}
I produce result messages like this:
ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(POLL_INTERVAL_IN_MS));
List<List<String>> outputMessages = produceOutput(consumerRecords);
kafkaTemplate.executeInTransaction(kafkaProducer -> {
for (List<String> resultTasks : outputMessages) {
for (String resultTask : resultTasks) {
kafkaProducer.send("topic", "key", resultTask);
}
}
kafkaProducer.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
return true;
});
Finally, I have this error:
java.lang.IllegalArgumentException: No transaction in process
at org.springframework.util.Assert.isTrue(Assert.java:118)
at org.springframework.kafka.core.KafkaTemplate.sendOffsetsToTransaction(KafkaTemplate.java:345)
The exception is thrown in this method:
@Override
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets, String consumerGroupId) {
@SuppressWarnings("unchecked")
KafkaResourceHolder<K, V> resourceHolder = (KafkaResourceHolder<K, V>) TransactionSynchronizationManager
.getResource(this.producerFactory);
Assert.isTrue(resourceHolder != null, "No transaction in process"); // here
if (resourceHolder.getProducer() != null) {
resourceHolder.getProducer().sendOffsetsToTransaction(offsets, consumerGroupId);
}
}
So, how to properly commit these offsets?
It's a bug; sendOffsetsToTransaction() doesn't work in executeInTransaction - it assumes a Spring transaction is bound to the thread.
As a work-around, you can either use @Transactional on the method or use a TransactionTemplate with a KafkaTransactionManager to start the Spring transaction instead of using executeInTransaction().
TransactionTemplate tt = new TransactionTemplate(tm);
...
this.tt.execute(s -> {
template.send(...);
template.sendOffsetsToTransaction(...);
return null;
});
Please open a GitHub Issue and we'll fix this.
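For the work-around, the tm above would typically be a KafkaTransactionManager built from the same producer factory (a sketch under that assumption; bean names follow the question):
@Bean
public KafkaTransactionManager<String, String> kafkaTransactionManager(
        DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory) {
    return new KafkaTransactionManager<>(defaultKafkaProducerFactory);
}

// ...then, instead of executeInTransaction():
TransactionTemplate tt = new TransactionTemplate(kafkaTransactionManager);
tt.execute(status -> {
    for (List<String> resultTasks : outputMessages) {
        for (String resultTask : resultTasks) {
            kafkaTemplate.send("topic", "key", resultTask);
        }
    }
    kafkaTemplate.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
    return null;
});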
