I am new to Kafka and I am working on a POC to migrate our current ESB to microservices. Our current ESB works with SOAP services, so I need to keep using the request/reply paradigm.
The POC consists of a Spring Boot microservice. One instance of the application is the producer, using the following code:
@Endpoint
public class AcctInfoSoapServiceController {

    private static final Logger LOGGER = LoggerFactory.getLogger(AcctInfoSoapServiceController.class);

    @Autowired
    KafkaAsyncService kafkaAsyncService;

    @PayloadRoot(namespace = "http://firstbankpr.com/feis/common/model", localPart = "GetAccountInfoRequest")
    @ResponsePayload
    public GetAccountInfoResponse getModelResponse(@RequestPayload GetAccountInfoRequest accountInfo) throws Exception {
        long start = System.currentTimeMillis();
        LOGGER.info("Returning request for account " + accountInfo.getInGetAccountInfo().getAccountNumber());

        AccountInquiryDto modelResponse = kafkaAsyncService.getModelResponse(accountInfo.getInGetAccountInfo());
        GetAccountInfoResponse response = ObjectFactory.getGetAccountInfoResponse(modelResponse);

        long elapsedTime = System.currentTimeMillis() - start;
        LOGGER.info("Returning request in " + elapsedTime + " ms for account = " + accountInfo.getInGetAccountInfo().getAccountNumber() + " " + response);
        return response;
    }
}
public AccountInquiryDto getModelResponse(InGetAccountInfo accountInfo) throws Exception {
    LOGGER.info("Received request for account " + accountInfo);

    // create the producer record
    ProducerRecord<String, InGetAccountInfo> record = new ProducerRecord<>(requestTopic, accountInfo);
    // set the reply topic in the header
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, requestReplyTopic.getBytes()));

    // post to the kafka topic
    RequestReplyFuture<String, InGetAccountInfo, AccountInquiryDto> sendAndReceive = kafkaTemplate.sendAndReceive(record);

    // confirm the record was produced successfully
    SendResult<String, InGetAccountInfo> sendResult = sendAndReceive.getSendFuture().get();

    // print all headers
    sendResult.getProducerRecord().headers().forEach(header -> System.out.println(header.key() + ":" + header.value().toString()));

    // wait for the reply record
    ConsumerRecord<String, AccountInquiryDto> consumerRecord = sendAndReceive.get();

    ObjectMapper mapper = new ObjectMapper();
    AccountInquiryDto modelResponse = mapper.convertValue(
            consumerRecord.value(),
            new TypeReference<AccountInquiryDto>() { });

    LOGGER.info("Returning record for " + modelResponse);
    return modelResponse;
}
The following is the configuration of the producer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    //props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    //props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    //props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    //props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}

@Bean
public ReplyingKafkaTemplate<String, InGetAccountInfo, AccountInquiryDto> replyKafkaTemplate(ProducerFactory<String, InGetAccountInfo> pf, KafkaMessageListenerContainer<String, AccountInquiryDto> container) {
    return new ReplyingKafkaTemplate<>(pf, container);
}

@Bean
public ProducerFactory<String, InGetAccountInfo> requestProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public ConsumerFactory<String, AccountInquiryDto> replyConsumerFactory() {
    JsonDeserializer<AccountInquiryDto> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    jsonDeserializer.addTrustedPackages(AccountInquiryDto.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaMessageListenerContainer<String, AccountInquiryDto> replyContainer(ConsumerFactory<String, AccountInquiryDto> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public KafkaAsyncService kafkaAsyncService() {
    return new KafkaAsyncService();
}
I have one instance of the Spring Boot application working as the Kafka consumer, with the following code:
@KafkaListener(topics = "${kafka.topic.acct-info.request}", containerFactory = "requestReplyListenerContainerFactory")
@SendTo
public Message<?> listenPartition0(InGetAccountInfo accountInfo,
        @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
    try {
        LOGGER.info("Received request for partition id = " + id);
        LOGGER.info("Received request for accountInfo = " + accountInfo.getAccountNumber());

        AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
        LOGGER.info("Returning accountInfoDto = " + accountInfoDto.toString());

        return MessageBuilder.withPayload(accountInfoDto)
                .setHeader(KafkaHeaders.TOPIC, replyTo)
                .setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
                .build();
    } catch (Exception e) {
        LOGGER.error(e.toString(), e);
    }
    return null;
}
The following is the configuration of the consumer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    return props;
}

@Bean
public ConsumerFactory<String, InGetAccountInfo> requestConsumerFactory() {
    JsonDeserializer<InGetAccountInfo> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(3);
    factory.setReplyTemplate(replyTemplate());
    return factory;
}

@Bean
public ProducerFactory<String, AccountInquiryDto> replyProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, AccountInquiryDto> replyTemplate() {
    return new KafkaTemplate<>(replyProducerFactory());
}

@Bean
public DepAcctInqConsumerController Controller() {
    return new DepAcctInqConsumerController();
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic requestTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("retention.ms", replyTimeout.toString());
    return new NewTopic(requestTopic, 2, (short) 2).configs(configs);
}
Kafka is using one topic with 5 partitions. When I run a load test using SoapUI I get about 19 TPS with 20 threads. The TPS stays the same even when I increase the number of partitions to 10 with 20 threads, and raising the number of threads to 40 does not increase the TPS either, even with more partitions (up to 10). Increasing the number of consumer/producer instances to 2 and putting a load balancer in front to distribute the traffic does not change the TPS either (it remains at about 20). In this case Kafka seems to be the bottleneck.
The only way I was able to increase the TPS was by assigning a different topic to each consumer/producer pair. With the load balancer the TPS rose to 38, about double what I obtained with a single consumer/producer pair. Monitoring the servers running the Spring Boot application does not provide any meaningful information, since CPU load and memory usage remain very low. The Kafka servers are also lightly used, at about 20% CPU utilization. Currently Kafka is using only one broker.
I am looking for advice on how to configure Kafka so that I can increase the number of instances of the Spring Boot application using the same topic; this would let the TPS grow with every new consumer/producer instance that starts. I understand that with every new consumer/producer instance I may need to increase the number of partitions on the topic, as in the sketch below. The final goal is to run these as pods inside an OpenShift cluster, where the application should be able to scale out automatically when traffic increases.
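For reference, this is the direction I have in mind, as a rough sketch only: the kafka.topic.partitions and kafka.listener.concurrency property names are made up for illustration, and the beans reuse the ones shown above. The idea is to keep the request topic's partition count at or above instances × listener concurrency, so every new pod that joins the same consumer group picks up a share of the partitions.
// Sketch only: externalized sizing, not the current configuration.
@Value("${kafka.topic.partitions:12}")      // assumed property: >= instances * concurrency
private int requestTopicPartitions;

@Value("${kafka.listener.concurrency:3}")   // assumed property: consuming threads per instance
private int listenerConcurrency;

@Bean
public NewTopic requestTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("retention.ms", replyTimeout.toString());
    // a partition is consumed by at most one thread of a consumer group at a time
    return new NewTopic(requestTopic, requestTopicPartitions, (short) 2).configs(configs);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(listenerConcurrency);
    factory.setReplyTemplate(replyTemplate());
    return factory;
}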
Related
We see that some messages are lost when consuming from a Kafka topic, especially while the service is restarting, when using the default properties:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    // Creating a Map of string-object pairs
    Map<String, Object> config = new HashMap<>();
    // Adding the Configuration
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(config);
}

// Creating a Listener
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
From the documentation, the default value of ackMode is BATCH, which is described as:
Commit the offset when all the records returned by the poll() have been processed.
How does Spring know that all the messages have been processed in a simple example like the one below? And does it mean that, when we restart the service, offsets may be committed for messages we have not actually processed, leading to message loss?
@KafkaListener(topics = "topicName", groupId = "foo")
public void listenGroupFoo(String message) {
    System.out.println("Received Message in group foo: " + message);
}
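If the concern is committing offsets before the work is actually done, one option (a sketch, not part of the original configuration, and assuming enable.auto.commit is left unset or false) is to switch the container to manual acknowledgment, so the offset is committed only when the listener explicitly acknowledges the record:
// Sketch: reuses the consumerFactory() bean from above and replaces the default
// BATCH ack mode with MANUAL_IMMEDIATE, so offsets are committed only on acknowledge().
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> manualAckContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}

@KafkaListener(topics = "topicName", groupId = "foo", containerFactory = "manualAckContainerFactory")
public void listenGroupFoo(String message, Acknowledgment ack) {
    System.out.println("Received Message in group foo: " + message);
    ack.acknowledge(); // commit only after the message has been handled
}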
Here is my kafka listener:
@KafkaListener(
        containerFactory = "kafkaChangeClientPhoneListenerContainerFactory",
        topics = "${kafka.topic.changeClientPhone}"
)
public void consume(ChangeClientPhoneEvent changeClientPhoneEvent) {
    //TODO
}
Here are the topic settings:
@Bean
public ConsumerFactory<String, String> changeClientPhoneConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(saslConsumerConfig(changeClientPhoneConsumerConfig()));
}

private Map<String, Object> changeClientPhoneConsumerConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ChangeClientPhoneEventDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommitFlag);
    return props;
}
I need to get the "siebel-id" param, which arrives as a message header/key.
How can I get it as a String variable?
See the documentation.
Finally, metadata about the record is available from message headers. You can use the following header names to retrieve the headers of the message:
KafkaHeaders.OFFSET
KafkaHeaders.RECEIVED_MESSAGE_KEY
KafkaHeaders.RECEIVED_TOPIC
KafkaHeaders.RECEIVED_PARTITION_ID
KafkaHeaders.RECEIVED_TIMESTAMP
KafkaHeaders.TIMESTAMP_TYPE
The following example shows how to use the headers:
@KafkaListener(id = "qux", topicPattern = "myTopic1")
public void listen(@Payload String foo,
        @Header(name = KafkaHeaders.RECEIVED_MESSAGE_KEY, required = false) Integer key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts
        ) {
    ...
}
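Applied to the custom siebel-id header from the question, a minimal sketch (assuming the producer sends it as a raw byte[] header value, which is how Kafka transports headers) would be:
// Sketch: binds the custom "siebel-id" header and converts it to a String.
@KafkaListener(
        containerFactory = "kafkaChangeClientPhoneListenerContainerFactory",
        topics = "${kafka.topic.changeClientPhone}"
)
public void consume(ChangeClientPhoneEvent changeClientPhoneEvent,
        @Header(name = "siebel-id", required = false) byte[] siebelIdRaw) {
    String siebelId = siebelIdRaw == null ? null : new String(siebelIdRaw, StandardCharsets.UTF_8);
    // ... use siebelId
}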
I have an app using Spring Boot 2.5.2.
We have a Kafka consumer, and it runs in several instances of the application.
Here is the consumer config:
@Bean("KafkaListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(isBatchConsumer);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setConsumerTaskExecutor(messageProcessorExecutor());
    return factory;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, messagingAddress);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxBatch);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeoutKafkaConfig);
    props.put(
            ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
            CooperativeStickyAssignor.class.getName());
    return new DefaultKafkaConsumerFactory<>(props);
}
Here is my Consumer
@KafkaListener(topics = "${topic.mesage}", groupId = "#{'${groupid.ms}'}", properties = {
        "key.deserializer=org.apache.kafka.common.serialization.StringDeserializer",
        "value.deserializer=org.apache.kafka.common.serialization.StringDeserializer"}, concurrency = "${messaging.consumer.concurrent.thread}", containerFactory = "KafkaListenerContainerFactory")
public void consumerListener(String data, Acknowledgment acknowledgment, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partitions,
        @Header(KafkaHeaders.OFFSET) Long offsets) {
    logger.info("MessagingConsumer partitions {} offsets {}", partitions, offsets);
    acknowledgment.acknowledge();
    logger.info("MessagingConsumer acknowledge: " + data);
    ...
When I redeploy my application, an error occurs: a message is consumed twice.
In the first instance, we found two logs showing that the message was acked ("MessagingConsumer partitions 29 offsets 21204" and "MessagingConsumer acknowledge:"). But after some time, the message was consumed again in the second instance with the same partition and offset. Between the logs of the two instances, I found some "partitions revoked:" and "partitions assigned:" entries.
What I cannot understand is why, if the acknowledgment succeeded, the message is still consumed twice.
Could anyone help me?
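One way to see exactly when this happens (a diagnostic sketch only, not a fix; it goes inside the existing kafkaListenerContainerFactory() bean, and the logger is assumed to be available there) is to register a rebalance listener that logs revocations and assignments next to the consumer's own logs:
// Sketch: log partition revocation/assignment so the rebalance can be correlated
// with the duplicate delivery; added to the existing container factory bean.
factory.getContainerProperties().setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {
    @Override
    public void onPartitionsRevokedAfterCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        logger.info("partitions revoked: {}", partitions);
    }

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        logger.info("partitions assigned: {}", partitions);
    }
});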
How can I check whether the Kafka server is running every two hours, and in case the connection is lost, throw an event to the server using the method I created called "throwEvent()", while listening to Kafka with 'listenKafkaEveryThisMs: 200000'?
My Kafka YAML:
kafka:
  url: localhost:9092
  topic: topicName
  groupid: conver
  offsetResetConfig: earliest
  concurrency: 1
  maxPollInternalMsConfig: 300000
  maxPollRecordsConfig: 30
  errorHandlerRetryCount: 5
  listenKafkaEveryThisMs: 200000
My KafkaConsumerConfig class:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.concurrency}")
    private String concurrency;

    @Value("${kafka.url}")
    private String kafkaUrl;

    @Value("${kafka.groupid}")
    private String groupid;

    @Value("${kafka.offsetResetConfig}")
    private String offsetResetConfig;

    @Value("${kafka.maxPollInternalMsConfig}")
    private String maxPollInternalMsConfig;

    @Value("${kafka.maxPollRecordsConfig}")
    private String maxPollRecordsConfig;

    @Value("${kafka.errorHandlerRetryCount}")
    private String retryCount;

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupid);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetResetConfig);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.parseInt(maxPollInternalMsConfig));
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, Integer.parseInt(maxPollRecordsConfig));
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(Integer.parseInt(concurrency));
        factory.getContainerProperties().setSyncCommits(true);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setAckOnError(false);
        factory.setStatefulRetry(true);
        factory.setErrorHandler(new SeekToCurrentErrorHandler(Integer.parseInt(retryCount)));
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new AlwaysRetryPolicy());
        retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
        factory.setRetryTemplate(retryTemplate);
        return factory;
    }
}
My KafkaConsumer class:
@Service
public class KafkaConsumer implements ConsumerSeekAware {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = "#{'${kafka.topic}'}", groupId = "#{'${kafka.groupid}'}")
    public void consume(String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition,
            @Header(KafkaHeaders.OFFSET) Long offset, Acknowledgment ack, KafkaMessageDrivenChannelAdapter kafkaMessageDrivenChannelAdapter) {
        if (!kafkaMessageDrivenChannelAdapter.isRunning()) {
            throwEvent();
        }
        try {
            mobileSubscriptionService.processMessage(message, ack, null);
        } catch (ParseException e) {
            logger.error(e.getMessage());
        }
    }

    @Scheduled(fixedDelayString = "${kafka.listenKafkaEveryThisMs}")
    private void throwEvent() {
        Map<String, String> eventDetails = new HashMap<>();
        eventDetails.put("eventDetailsKey", "eventDetailsValue");
        AppDynamicsEventUtil.publishEvent("eventSummary", EventSeverityStatus.INFO, EventTypeStatus.CUSTOM, eventDetails);
    }
}
I really don't know what I should use to check whether the Kafka server is running or not.
Thank you.
For checking the cluster connection state, the most straightforward option would be AdminClient.describeCluster().
Alternatively, you can hook such a check into Actuator.
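A minimal sketch of the AdminClient idea (it assumes the kafka.url property from the YAML above, that @EnableScheduling is active, and that the real throwEvent()/AppDynamics call is plugged into the placeholder method):
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class KafkaHealthChecker {

    @Value("${kafka.url}")
    private String kafkaUrl;

    @Scheduled(fixedDelay = 2 * 60 * 60 * 1000L) // every two hours
    public void checkKafkaIsUp() {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient client = AdminClient.create(props)) {
            // throws if no broker answers within the timeout
            client.describeCluster().nodes().get(10, TimeUnit.SECONDS);
        } catch (Exception e) {
            onConnectionLost(); // plug the question's throwEvent() in here
        }
    }

    private void onConnectionLost() {
        // placeholder for AppDynamicsEventUtil.publishEvent(...)
    }
}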
With the configuration below, when we scale the Spring Boot containers to 10 JVMs, the number of consumed events is randomly higher than the number published; for example, if 320000 messages are published, the recorded events are sometimes 320500, etc.
//Consumer container bean
private static final int CONCURRENCY = 1;

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put("enable.auto.commit", "false");
    //props.put("isolation.level", "read_committed");
    return props;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    //factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConcurrency(CONCURRENCY);
    return factory;
}

//Listener
@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}", groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    //save event to table ..
}
Updated
The change below seems to be working fine now; I will just have to add a duplicate check in the consumer (sketched after the configuration below) to handle a consumer-failure scenario.
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "-1");
    //props.put("isolation.level", "read_committed");
    return props;
}
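The duplicate check I plan to add would look roughly like this; extractEventId and claimEventRepository.existsByEventId are hypothetical placeholders for however an event is uniquely identified and stored:
//Listener with a duplicate check (sketch)
@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}", groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    String eventId = extractEventId(event);              // hypothetical: derive a unique id from the payload
    if (claimEventRepository.existsByEventId(eventId)) { // hypothetical repository lookup
        return; // already processed, skip the duplicate
    }
    //save event to table ..
}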
You can try setting ENABLE_IDEMPOTENCE_CONFIG to true; this helps ensure that exactly one copy of each message is written to the stream by the producer.
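A minimal sketch of that producer setting (bootstrapServers and the serializers are assumed to match the rest of the setup):
// Sketch: idempotent producer, so retries cannot write the same record twice;
// acks=all is required when idempotence is enabled.
@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    return props;
}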
This approach works for me.
You have to configure the KafkaListenerContainerFactory like this:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaFactory);
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
and use ConcurrentMessageListenerContainer like this:
@Bean
public IntegrationFlow inboundFlow() {
    final ContainerProperties containerProps = new ContainerProperties(PartitionConfig.TOPIC);
    containerProps.setGroupId(GROUP_ID);

    ConcurrentMessageListenerContainer concurrentListener = new ConcurrentMessageListenerContainer(kafkaFactory, containerProps);
    concurrentListener.setConcurrency(10);

    final KafkaMessageDrivenChannelAdapter kafkaMessageChannel = new KafkaMessageDrivenChannelAdapter(concurrentListener);
    return IntegrationFlows
            .from(kafkaMessageChannel)
            .channel(requestsIn())
            .get();
}
See these for more information: how-does-kafka-guarantee-consumers-doesnt-read-a-single-message-twice and documentation-ConcurrentMessageListenerContainer.