Using Spring Kafka with LINGER_MS_CONFIG causes error - java

I'm experimenting with compression with Kafka because my records are getting too large. This is what my Kafka configuration looks like -
public Map<String, Object> producerConfigs(int blockTime) {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //spring.kafka.producer.request.timeout.ms
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, blockTime);
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
    props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
    //ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG
    log.info("Created producer config kafka ");
    return props;
}

public ProducerFactory<String, String> finalproducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs(10));
}

@Bean(name = "commonKafkaTemplate")
public KafkaTemplate<String, String> getFinalKafkaTemplate() {
    return new KafkaTemplate<>(finalproducerFactory());
}
If I comment out the line with LINGER_MS_CONFIG, the code works.
I saw LINGER_MS_CONFIG being used in an example, which said that it makes compression more "effective". What is it about LINGER_MS_CONFIG that causes an error? Is there some conflict with any of the other properties?
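One producer-client validation worth knowing about here (the question does not show the actual error, so treat this as a hedged pointer): the Kafka producer requires delivery.timeout.ms to be greater than or equal to request.timeout.ms + linger.ms, and refuses to start otherwise. With the values above (10000, 10000 and 5), that inequality no longer holds once linger.ms is set. A sketch of a combination that satisfies the check, with illustrative values:

// Illustrative values only: 15000 >= 10000 + 5, so the producer-side check passes
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 15000);
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");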

Related

Multi-kafka connections

There is a data-stream application. It needs to connect to and listen to several Kafka brokers (different IP addresses, more than two) and write to a single one.
Please advise how to arrange a multi-Kafka connection.
Configuration class for a single kafka connection:
@Configuration
public class KafkaProducer {

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Several connections are expected to be set up and listened to at the same time.
The bootstrap servers config option accepts a comma-separated list of brokers from one cluster, but you only need to list more than one for fault tolerance, since Kafka returns the full set of brokers in the cluster on the first connection.
If you need to connect to distinct Kafka clusters, create a separate set of beans for each, with the different bootstrap address.
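As a hedged sketch of that last point (bean names and addresses below are made up for illustration), you can define one consumer factory and one listener container factory per cluster and pick the factory on each listener:

@Bean
public ConsumerFactory<String, String> clusterAConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-a-host:9092"); // per-cluster address
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-a");
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> clusterAListenerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(clusterAConsumerFactory());
    return factory;
}

// Repeat the two beans for cluster B with its own bootstrap address, then select the factory per listener:
// @KafkaListener(topics = "topic-a", containerFactory = "clusterAListenerFactory")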

Kafka Listener method could not be invoked with the incoming message Endpoint handler details

I'm sending a JSON object from a Spring Boot producer, but when I try to receive the message in the consumer I get the following error:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.***.***.kafka.messages.Message] for GenericMessage [payload={"type":"HHH","actorId":1,"entity":null,"entityType":"ssss"}, headers={kafka_offset=45, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@789f2579, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=isomantopictest, kafka_receivedTimestamp=1671973825381, __TypeId__=[B@500ac626, kafka_groupId=isoman5}]
at org.springframework.messaging.handler.annotation.support.PayloadMethodArgumentResolver.resolveArgument(PayloadMethodArgumentResolver.java:144) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.kafka.annotation.KafkaNullAwarePayloadArgumentResolver.resolveArgument(KafkaNullAwarePayloadArgumentResolver.java:46) ~[spring-kafka-3.0.0.jar:3.0.0]
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:118) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:147) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:115) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:56) ~[spring-kafka-3.0.0.jar:3.0.0]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:366) ~[spring-kafka-3.0.0.jar:3.0.0]
... 15 common frames omitted
KafkaConfig :
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate kafkaTemplate() {
    KafkaTemplate<String, Message> kafkaTemplate = new KafkaTemplate(producerFactory());
    kafkaTemplate.setConsumerFactory(messageConsumerFactory());
    return kafkaTemplate;
}

public ConsumerFactory<String, Message> messageConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(Message.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Message> messageKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Message> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(messageConsumerFactory());
    // factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}
Receive File :
@KafkaListener(idIsGroup = false, groupId = "${kafka.group-id}",
        topics = "#{'${kafka.topics}'.split(',')}")
void hearing(@Payload Message message) {
    if (verbose) {
        log.trace("New message received : type = {}, payload = {}", message.getType(), message);
        log.trace("Calling processMessage()");
    }
}
I solved it by adding this statement in messageKafkaListenerContainerFactory:
factory.setMessageConverter(new StringJsonMessageConverter());
and changing @Payload Message message to @Payload String message.
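Putting those two changes together, the relevant pieces would look roughly like this (a sketch assembled from the snippets above, not verified against the original project):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Message> messageKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Message> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(messageConsumerFactory());
    factory.setMessageConverter(new StringJsonMessageConverter()); // converter suggested in the answer
    return factory;
}

@KafkaListener(idIsGroup = false, groupId = "${kafka.group-id}",
        topics = "#{'${kafka.topics}'.split(',')}")
void hearing(@Payload String message) { // listener now takes the raw String payload
    log.trace("New message received : payload = {}", message);
}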

Scaling KAFKA to manage higher TPS using a synchronic request/reply configuration

I am new to Kafka and I am working on a POC to migrate our current ESB to microservices. Our current ESB works with SOAP services, so I need to keep the request/reply paradigm.
The POC consists of a Spring Boot microservice. One instance of the application is the producer, using the following code:
@Endpoint
public class AcctInfoSoapServiceController {

    private static final Logger LOGGER = LoggerFactory.getLogger(AcctInfoSoapServiceController.class);

    @Autowired
    KafkaAsyncService kafkaAsyncService;

    @PayloadRoot(namespace = "http://firstbankpr.com/feis/common/model", localPart = "GetAccountInfoRequest")
    @ResponsePayload
    public GetAccountInfoResponse getModelResponse(@RequestPayload GetAccountInfoRequest accountInfo) throws Exception {
        long start = System.currentTimeMillis();
        LOGGER.info("Returning request for account " + accountInfo.getInGetAccountInfo().getAccountNumber());
        AccountInquiryDto modelResponse = kafkaAsyncService.getModelResponse(accountInfo.getInGetAccountInfo());
        GetAccountInfoResponse response = ObjectFactory.getGetAccountInfoResponse(modelResponse);
        long elapsedTime = System.currentTimeMillis() - start;
        LOGGER.info("Returning request in " + elapsedTime + " ms for account = " + accountInfo.getInGetAccountInfo().getAccountNumber() + " " + response);
        return response;
    }
}
public AccountInquiryDto getModelResponse(InGetAccountInfo accountInfo) throws Exception {
    LOGGER.info("Received request for request for account " + accountInfo);
    // create producer record
    ProducerRecord<String, InGetAccountInfo> record = new ProducerRecord<String, InGetAccountInfo>(requestTopic, accountInfo);
    // set reply topic in header
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, requestReplyTopic.getBytes()));
    // post in kafka topic
    RequestReplyFuture<String, InGetAccountInfo, AccountInquiryDto> sendAndReceive = kafkaTemplate.sendAndReceive(record);
    // confirm if producer produced successfully
    SendResult<String, InGetAccountInfo> sendResult = sendAndReceive.getSendFuture().get();
    // print all headers
    sendResult.getProducerRecord().headers().forEach(header -> System.out.println(header.key() + ":" + header.value().toString()));
    // get consumer record
    ConsumerRecord<String, AccountInquiryDto> consumerRecord = sendAndReceive.get();
    ObjectMapper mapper = new ObjectMapper();
    AccountInquiryDto modelResponse = mapper.convertValue(
            consumerRecord.value(),
            new TypeReference<AccountInquiryDto>() { });
    LOGGER.info("Returning record for " + modelResponse);
    return modelResponse;
}
The following is the configuration of the producer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    //props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    // props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    //props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    //props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}

@Bean
public ReplyingKafkaTemplate<String, InGetAccountInfo, AccountInquiryDto> replyKafkaTemplate(ProducerFactory<String, InGetAccountInfo> pf, KafkaMessageListenerContainer<String, AccountInquiryDto> container) {
    return new ReplyingKafkaTemplate(pf, container);
}

@Bean
public ProducerFactory<String, InGetAccountInfo> requestProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public ConsumerFactory<String, AccountInquiryDto> replyConsumerFactory() {
    JsonDeserializer<AccountInquiryDto> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    jsonDeserializer.addTrustedPackages(AccountInquiryDto.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaMessageListenerContainer<String, AccountInquiryDto> replyContainer(ConsumerFactory<String, AccountInquiryDto> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public KafkaAsyncService kafkaAsyncService() {
    return new KafkaAsyncService();
}
I have one instance of the Spring Boot application working as the Kafka consumer, with the following code:
@KafkaListener(topics = "${kafka.topic.acct-info.request}", containerFactory = "requestReplyListenerContainerFactory")
@SendTo
public Message<?> listenPartition0(InGetAccountInfo accountInfo,
        @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
    try {
        LOGGER.info("Received request for partition id = " + id);
        LOGGER.info("Received request for accountInfo = " + accountInfo.getAccountNumber());
        AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
        LOGGER.info("Returning accountInfoDto = " + accountInfoDto.toString());
        return MessageBuilder.withPayload(accountInfoDto)
                .setHeader(KafkaHeaders.TOPIC, replyTo)
                .setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
                .build();
    } catch (Exception e) {
        LOGGER.error(e.toString(), e);
    }
    return null;
}
The following is the configuration of the consumer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    return props;
}

@Bean
public ConsumerFactory<String, InGetAccountInfo> requestConsumerFactory() {
    JsonDeserializer<InGetAccountInfo> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(3);
    factory.setReplyTemplate(replyTemplate());
    return factory;
}

@Bean
public ProducerFactory<String, AccountInquiryDto> replyProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, AccountInquiryDto> replyTemplate() {
    return new KafkaTemplate<>(replyProducerFactory());
}

@Bean
public DepAcctInqConsumerController Controller() {
    return new DepAcctInqConsumerController();
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic requestTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("retention.ms", replyTimeout.toString());
    return new NewTopic(requestTopic, 2, (short) 2).configs(configs);
}
Kafka is using one topic with 5 partitions. When I run a load test with SoapUI I get about 19 TPS using 20 threads. The TPS stays the same even when I increase the number of partitions to 10 with 20 threads, and raising the number of threads to 40 does not increase the TPS even with the higher partition count (up to 10). Increasing the number of consumer and producer instances to 2 and configuring a load balancer to distribute the load also does not change the TPS, which remains at about 20. In this case Kafka seems to be the bottleneck.
The only way I was able to increase the TPS was by assigning a different topic to each consumer/producer pair. With the load balancer the TPS increased to 38, roughly double what I obtained with a single consumer/producer pair. Monitoring the servers running the Spring Boot application does not provide any meaningful information, since CPU load and memory remain very low. The Kafka servers also remain lightly loaded, at about 20% CPU utilization. Currently Kafka is using only one broker.
I am looking for advice on how to configure Kafka so that I can increase the number of instances of the Spring Boot application on the same topic, so that TPS grows with every new consumer/producer instance that starts. I understand that with every new consumer/producer instance I may need to increase the number of partitions on the topic. The final goal is to run these as pods inside an OpenShift cluster, where the application should be able to grow automatically when traffic increases.
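Not an answer from the thread, just an illustration of the usual levers: the partition count of the request topic caps total consumer parallelism across all instances, and the listener concurrency per instance is normally sized so the sum across instances does not exceed that count. A hedged sketch against the beans above (the partition and concurrency numbers are arbitrary):

@Bean
public NewTopic requestTopic() {
    // e.g. 20 partitions allow up to 20 concurrent consumers in the group across all instances
    return new NewTopic(requestTopic, 20, (short) 2);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(10); // e.g. 2 instances x 10 threads = 20, matching the partition count
    factory.setReplyTemplate(replyTemplate());
    return factory;
}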

How to send Kafka offsets in transaction, which is created by KafkaTemplate?

I want to implement the read-process-write pattern - https://www.confluent.io/blog/transactions-apache-kafka/. So, I need to consume records, process them and then commit the consumed offsets.
I use org.apache.kafka.clients.consumer.KafkaConsumer for consuming messages; I mean, it is not a Spring-related consumer.
I use org.springframework.kafka.core.KafkaTemplate for producing messages. I create its bean like this:
@Bean
public Map<String, Object> producerConfigs() {
    final Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrapServers");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString());
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    return props;
}

@Bean
public DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory() {
    DefaultKafkaProducerFactory<String, String> kafkaProducerFactory = new DefaultKafkaProducerFactory<>(producerConfigs());
    kafkaProducerFactory.setTransactionIdPrefix("transaction-id-prefix");
    return kafkaProducerFactory;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate(DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory) {
    return new KafkaTemplate<>(defaultKafkaProducerFactory);
}
I produce result messages like this:
ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(POLL_INTERVAL_IN_MS));
List<List<String>> outputMessages = produceOutput(consumerRecords);

kafkaTemplate.executeInTransaction(kafkaProducer -> {
    for (List<String> resultTasks : outputMessages) {
        for (String resultTask : resultTasks) {
            kafkaProducer.send("topic", "key", resultTask);
        }
    }
    kafkaProducer.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
    return true;
});
Finally, I have this error:
java.lang.IllegalArgumentException: No transaction in process
at org.springframework.util.Assert.isTrue(Assert.java:118)
at org.springframework.kafka.core.KafkaTemplate.sendOffsetsToTransaction(KafkaTemplate.java:345)
The exception is thrown in this method:
@Override
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets, String consumerGroupId) {
    @SuppressWarnings("unchecked")
    KafkaResourceHolder<K, V> resourceHolder = (KafkaResourceHolder<K, V>) TransactionSynchronizationManager
            .getResource(this.producerFactory);
    Assert.isTrue(resourceHolder != null, "No transaction in process"); // here
    if (resourceHolder.getProducer() != null) {
        resourceHolder.getProducer().sendOffsetsToTransaction(offsets, consumerGroupId);
    }
}
So, how to properly commit these offsets?
It's a bug; sendOffsetsToTransaction() doesn't work in executeInTransaction() - it assumes a Spring transaction is bound to the thread.
As a work-around, you can either use @Transactional on the method or use a TransactionTemplate with a KafkaTransactionManager to start the Spring transaction instead of using executeInTransaction().
TransactionTemplate tt = new TransactionTemplate(tm);
...
this.tt.execute(s -> {
    template.send(...);
    template.sendOffsetsToTransaction(...);
    return null;
});
Please open a GitHub Issue and we'll fix this.
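For completeness, a minimal sketch of the @Transactional alternative mentioned above, assuming a KafkaTransactionManager bean named kafkaTransactionManager is configured for the transactional producer factory (both assumptions, not shown in the question):

@Transactional("kafkaTransactionManager")
public void processAndForward(ConsumerRecords<String, String> consumerRecords, List<List<String>> outputMessages) {
    // Spring opens the Kafka transaction before the method body runs,
    // so sendOffsetsToTransaction() finds the thread-bound transaction.
    for (List<String> resultTasks : outputMessages) {
        for (String resultTask : resultTasks) {
            kafkaTemplate.send("topic", "key", resultTask);
        }
    }
    kafkaTemplate.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
}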

KafkaListener reading messages twice

With the configuration below, when we scale the Spring Boot containers to 10 JVMs the number of events is randomly more than the number published; for example, if 320,000 messages are published, the event count is sometimes 320,500 or so.
// Consumer container bean
private static final int CONCURRENCY = 1;

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put("enable.auto.commit", "false");
    //props.put("isolation.level", "read_committed");
    return props;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    //factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConcurrency(CONCURRENCY);
    return factory;
}

// Listener
@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}", groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    // save event to table ..
}
Update
The change below seems to be working fine now; I will just add a duplicate check in the consumer to cover a consumer-failure scenario (a rough sketch of such a check follows the config).
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "-1");
    //props.put("isolation.level", "read_committed");
    return props;
}
You can try setting ENABLE_IDEMPOTENCE_CONFIG to true; this helps ensure that exactly one copy of each message is written to the stream by the producer.
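If you try that, note it is a producer-side setting; a minimal sketch of the producer config (values illustrative):

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // Idempotent producer: the broker de-duplicates retried sends within a producer session
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.ACKS_CONFIG, "all"); // required when idempotence is enabled
    return props;
}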
This approach works for me.
You have to configure the KafkaListenerContainerFactory like this:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaFactory);
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
and use a ConcurrentMessageListenerContainer like this:
@Bean
public IntegrationFlow inboundFlow() {
    final ContainerProperties containerProps = new ContainerProperties(PartitionConfig.TOPIC);
    containerProps.setGroupId(GROUP_ID);

    ConcurrentMessageListenerContainer concurrentListener = new ConcurrentMessageListenerContainer(kafkaFactory, containerProps);
    concurrentListener.setConcurrency(10);
    final KafkaMessageDrivenChannelAdapter kafkaMessageChannel = new KafkaMessageDrivenChannelAdapter(concurrentListener);

    return IntegrationFlows
            .from(kafkaMessageChannel)
            .channel(requestsIn())
            .get();
}
For more information, see how-does-kafka-guarantee-consumers-doesnt-read-a-single-message-twice and documentation-ConcurrentMessageListenerContainer.
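With AckMode.MANUAL_IMMEDIATE as configured in the factory above, a listener consuming through it has to acknowledge each record explicitly; a minimal sketch (topic and group names are placeholders):

@KafkaListener(topics = "topic1", groupId = "group1", containerFactory = "kafkaListenerContainerFactory")
public void listen(String event, Acknowledgment ack) {
    // process the record ...
    ack.acknowledge(); // commit the offset for this record immediately
}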
