How to get a key from a Kafka message in Java?

Here is my kafka listener:
@KafkaListener(
        containerFactory = "kafkaChangeClientPhoneListenerContainerFactory",
        topics = "${kafka.topic.changeClientPhone}"
)
public void consume(ChangeClientPhoneEvent changeClientPhoneEvent) {
    //TODO
}
Here are the topic settings:
@Bean
public ConsumerFactory<String, String> changeClientPhoneConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(saslConsumerConfig(changeClientPhoneConsumerConfig()));
}

private Map<String, Object> changeClientPhoneConsumerConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ChangeClientPhoneEventDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommitFlag);
    return props;
}
I need to get the "siebel-id" value, which arrives as a message header/key.
How can I read it as a String variable?

See the documentation.
Finally, metadata about the record is available from message headers. You can use the following header names to retrieve the headers of the message:
KafkaHeaders.OFFSET
KafkaHeaders.RECEIVED_MESSAGE_KEY
KafkaHeaders.RECEIVED_TOPIC
KafkaHeaders.RECEIVED_PARTITION_ID
KafkaHeaders.RECEIVED_TIMESTAMP
KafkaHeaders.TIMESTAMP_TYPE
The following example shows how to use the headers:
@KafkaListener(id = "qux", topicPattern = "myTopic1")
public void listen(@Payload String foo,
        @Header(name = KafkaHeaders.RECEIVED_MESSAGE_KEY, required = false) Integer key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts
        ) {
    ...
}
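Applied to the listener from the original question, a minimal sketch could look like the following (assuming "siebel-id" arrives either as the record key or as a custom record header; bean and property names are taken from the question, and custom header values arrive as raw bytes unless a header mapper is configured to convert them):

@KafkaListener(
        containerFactory = "kafkaChangeClientPhoneListenerContainerFactory",
        topics = "${kafka.topic.changeClientPhone}"
)
public void consume(@Payload ChangeClientPhoneEvent changeClientPhoneEvent,
        @Header(name = KafkaHeaders.RECEIVED_MESSAGE_KEY, required = false) String key,
        @Header(name = "siebel-id", required = false) byte[] siebelIdHeader) {
    // key is already a String because the consumer factory uses StringDeserializer
    String siebelId = (siebelIdHeader != null)
            ? new String(siebelIdHeader, StandardCharsets.UTF_8)  // custom header case
            : key;                                                // record key case
    //TODO use siebelId
}

If "siebel-id" is in fact the record key, the RECEIVED_MESSAGE_KEY parameter alone is enough.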

Related

Kafka Listener method could not be invoked with the incoming message Endpoint handler details

I'm sending a JSON object from a Spring Boot producer, but when I try to receive the message in the consumer
I get the following error:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.***.***.kafka.messages.Message] for GenericMessage [payload={"type":"HHH","actorId":1,"entity":null,"entityType":"ssss"}, headers={kafka_offset=45, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@789f2579, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=isomantopictest, kafka_receivedTimestamp=1671973825381, __TypeId__=[B@500ac626, kafka_groupId=isoman5}]
at org.springframework.messaging.handler.annotation.support.PayloadMethodArgumentResolver.resolveArgument(PayloadMethodArgumentResolver.java:144) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.kafka.annotation.KafkaNullAwarePayloadArgumentResolver.resolveArgument(KafkaNullAwarePayloadArgumentResolver.java:46) ~[spring-kafka-3.0.0.jar:3.0.0]
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:118) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:147) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:115) ~[spring-messaging-6.0.2.jar:6.0.2]
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:56) ~[spring-kafka-3.0.0.jar:3.0.0]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:366) ~[spring-kafka-3.0.0.jar:3.0.0]
... 15 common frames omitted
KafkaConfig :
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
            bootstrapAddress);
    configProps.put(
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            StringSerializer.class);
    // configProps.put(
    //         ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    //         StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate kafkaTemplate() {
    KafkaTemplate<String, Message> kafkaTemplate = new KafkaTemplate(producerFactory());
    kafkaTemplate.setConsumerFactory(messageConsumerFactory());
    return kafkaTemplate;
}

public ConsumerFactory<String, Message> messageConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    return new DefaultKafkaConsumerFactory<>(
            props,
            new StringDeserializer(),
            new JsonDeserializer<>(Message.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Message> messageKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Message> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(messageConsumerFactory());
    // factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}
Receive File :
@KafkaListener(idIsGroup = false, groupId = "${kafka.group-id}",
        topics = "#{'${kafka.topics}'.split(',')}")
void hearing(@Payload Message message) {
    if (verbose) {
        log.trace("New message received : type = {}, payload = {}", message.getType(), message);
        log.trace("Calling processMessage()");
    }
}
I solved it by adding this statement in messageKafkaListenerContainerFactory:
factory.setMessageConverter(new StringJsonMessageConverter());
and changing @Payload Message message to @Payload String message.
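For completeness, here is a hedged sketch of an alternative that keeps @Payload Message: let the value deserializer produce a plain String and have StringJsonMessageConverter map the JSON onto the listener parameter type (bean and property names are reused from the question; in newer spring-kafka versions JsonMessageConverter plays the same role):

@Bean
public ConsumerFactory<String, String> messageConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    // Values stay plain Strings; the converter does the JSON -> Message mapping
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new StringDeserializer());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> messageKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(messageConsumerFactory());
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}

@KafkaListener(idIsGroup = false, groupId = "${kafka.group-id}",
        topics = "#{'${kafka.topics}'.split(',')}",
        containerFactory = "messageKafkaListenerContainerFactory")
void hearing(@Payload Message message) {
    // message is built from the JSON String by the converter
}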

Using Spring Kafka with LINGER_MS_CONFIG causes error

I'm experimenting with compression with Kafka because my records are getting too large. This is what my Kafka configuration looks like -
public Map<String, Object> producerConfigs(int blockTime) {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //spring.kafka.producer.request.timeout.ms
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, blockTime);
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
    props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
    //ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG
    log.info("Created producer config kafka ");
    return props;
}

public ProducerFactory<String, String> finalproducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs(10));
}

@Bean(name = "commonKafkaTemplate")
public KafkaTemplate<String, String> getFinalKafkaTemplate() {
    return new KafkaTemplate<>(finalproducerFactory());
}
If I comment out the LINGER_MS_CONFIG line, the code works.
I saw LINGER_MS_CONFIG being used in an example, which said it made compression more "effective". What is it about LINGER_MS_CONFIG that causes an error? Is there a conflict with one of the other properties?
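The exception text is not shown, but one documented producer constraint matches this configuration: delivery.timeout.ms must be greater than or equal to linger.ms + request.timeout.ms, and the values above (10000 versus 5 + 10000) violate that bound, which makes the KafkaProducer reject the config when it is constructed. A hedged sketch of timeouts that satisfy the rule:

// Keep delivery.timeout.ms >= linger.ms + request.timeout.ms
props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 30000); // comfortably above 10005; the default is 120000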

Scaling Kafka to handle higher TPS using a synchronous request/reply configuration

I am new to Kafka and I am working on a POC to migrate our current ESB to microservices. Our current ESB works with SOAP services, so I need to keep the request/reply paradigm.
The POC consists of a Spring Boot microservice. One instance of the application is the producer, using the following code:
@Endpoint
public class AcctInfoSoapServiceController {

    private static final Logger LOGGER = LoggerFactory.getLogger(AcctInfoSoapServiceController.class);

    @Autowired
    KafkaAsyncService kafkaAsyncService;

    @PayloadRoot(namespace = "http://firstbankpr.com/feis/common/model", localPart = "GetAccountInfoRequest")
    @ResponsePayload
    public GetAccountInfoResponse getModelResponse(@RequestPayload GetAccountInfoRequest accountInfo) throws Exception {
        long start = System.currentTimeMillis();
        LOGGER.info("Returning request for account " + accountInfo.getInGetAccountInfo().getAccountNumber());
        AccountInquiryDto modelResponse = kafkaAsyncService.getModelResponse(accountInfo.getInGetAccountInfo());
        GetAccountInfoResponse response = ObjectFactory.getGetAccountInfoResponse(modelResponse);
        long elapsedTime = System.currentTimeMillis() - start;
        LOGGER.info("Returning request in " + elapsedTime + " ms for account = " + accountInfo.getInGetAccountInfo().getAccountNumber() + " " + response);
        return response;
    }
}
public AccountInquiryDto getModelResponse(InGetAccountInfo accountInfo) throws Exception {
    LOGGER.info("Received request for account " + accountInfo);
    // create producer record
    ProducerRecord<String, InGetAccountInfo> record = new ProducerRecord<String, InGetAccountInfo>(requestTopic, accountInfo);
    // set reply topic in header
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, requestReplyTopic.getBytes()));
    // post in kafka topic
    RequestReplyFuture<String, InGetAccountInfo, AccountInquiryDto> sendAndReceive = kafkaTemplate.sendAndReceive(record);
    // confirm if producer produced successfully
    SendResult<String, InGetAccountInfo> sendResult = sendAndReceive.getSendFuture().get();
    // print all headers
    sendResult.getProducerRecord().headers().forEach(header -> System.out.println(header.key() + ":" + header.value().toString()));
    // get consumer record
    ConsumerRecord<String, AccountInquiryDto> consumerRecord = sendAndReceive.get();
    ObjectMapper mapper = new ObjectMapper();
    AccountInquiryDto modelResponse = mapper.convertValue(
            consumerRecord.value(),
            new TypeReference<AccountInquiryDto>() { });
    LOGGER.info("Returning record for " + modelResponse);
    return modelResponse;
}
The following is the configuration of the producer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    //props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    // props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    //props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    //props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}

@Bean
public ReplyingKafkaTemplate<String, InGetAccountInfo, AccountInquiryDto> replyKafkaTemplate(ProducerFactory<String, InGetAccountInfo> pf, KafkaMessageListenerContainer<String, AccountInquiryDto> container) {
    return new ReplyingKafkaTemplate(pf, container);
}

@Bean
public ProducerFactory<String, InGetAccountInfo> requestProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public ConsumerFactory<String, AccountInquiryDto> replyConsumerFactory() {
    JsonDeserializer<AccountInquiryDto> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    jsonDeserializer.addTrustedPackages(AccountInquiryDto.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaMessageListenerContainer<String, AccountInquiryDto> replyContainer(ConsumerFactory<String, AccountInquiryDto> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public KafkaAsyncService kafkaAsyncService() {
    return new KafkaAsyncService();
}
I have one instance of the Spring Boot application working as the Kafka consumer, with the following code:
@KafkaListener(topics = "${kafka.topic.acct-info.request}", containerFactory = "requestReplyListenerContainerFactory")
@SendTo
public Message<?> listenPartition0(InGetAccountInfo accountInfo,
        @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
    try {
        LOGGER.info("Received request for partition id = " + id);
        LOGGER.info("Received request for accountInfo = " + accountInfo.getAccountNumber());
        AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
        LOGGER.info("Returning accountInfoDto = " + accountInfoDto.toString());
        return MessageBuilder.withPayload(accountInfoDto)
                .setHeader(KafkaHeaders.TOPIC, replyTo)
                .setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
                .build();
    } catch (Exception e) {
        LOGGER.error(e.toString(), e);
    }
    return null;
}
The following is the configuration of the consumer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    return props;
}

@Bean
public ConsumerFactory<String, InGetAccountInfo> requestConsumerFactory() {
    JsonDeserializer<InGetAccountInfo> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(3);
    factory.setReplyTemplate(replyTemplate());
    return factory;
}

@Bean
public ProducerFactory<String, AccountInquiryDto> replyProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, AccountInquiryDto> replyTemplate() {
    return new KafkaTemplate<>(replyProducerFactory());
}

@Bean
public DepAcctInqConsumerController Controller() {
    return new DepAcctInqConsumerController();
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic requestTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("retention.ms", replyTimeout.toString());
    return new NewTopic(requestTopic, 2, (short) 2).configs(configs);
}
Kafka is using one topic with 5 partitions. When I run a load test with SoapUI, I get about 19 TPS using 20 threads. The TPS stays the same even when I increase the number of partitions to 10 with 20 threads, and increasing the number of threads to 40 does not raise the TPS either, even with more partitions (up to 10). Adding a second instance of the consumer and producer behind a load balancer also leaves the TPS at about 20. In this case Kafka seems to be the bottleneck.
The only way I was able to increase the TPS was by assigning a different topic to each consumer/producer pair. With the load balancer, the TPS then rose to about 38, roughly double what I got with a single consumer/producer pair. Monitoring the servers running the Spring Boot application does not reveal anything meaningful, since CPU load and memory stay very low. The Kafka servers are also lightly loaded, at about 20% CPU utilization. Currently Kafka is using only one broker.
I am looking for advice on how to configure Kafka so that I can increase the number of instances of the Spring Boot application on the same topic and have the TPS grow with every new consumer/producer instance that starts. I understand that with every new consumer/producer instance I may need to increase the number of partitions on the topic. The final goal is to run these as pods inside an OpenShift cluster, where the application should grow automatically when traffic increases.
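One general Kafka sizing rule that bears on this: within a consumer group, each partition is consumed by at most one listener thread, so the request topic needs at least as many partitions as the total listener concurrency across all application instances. A hedged sketch with purely illustrative numbers:

// Hypothetical sizing: 5 instances x concurrency 3 = 15 listener threads,
// so give the request topic at least 15 partitions.
@Bean
public NewTopic requestTopic() {
    return new NewTopic(requestTopic, 15, (short) 2);
}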

Spring Boot Kafka: listening every two hours and sending info in case the connection is lost

How can I check every two hours whether the Kafka server is running and, in case the connection is lost, raise an event to the server via the method I created called throwEvent(), while listening to Kafka with 'listenKafkaEveryThisMs: 200000'?
my Kafka yaml
kafka:
  url: localhost:9092
  topic: topicName
  groupid: conver
  offsetResetConfig: earliest
  concurrency: 1
  maxPollInternalMsConfig: 300000
  maxPollRecordsConfig: 30
  errorHandlerRetryCount: 5
  listenKafkaEveryThisMs: 200000
my KafkaConsumerConfig class
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.concurrency}")
    private String concurrency;
    @Value("${kafka.url}")
    private String kafkaUrl;
    @Value("${kafka.groupid}")
    private String groupid;
    @Value("${kafka.offsetResetConfig}")
    private String offsetResetConfig;
    @Value("${kafka.maxPollInternalMsConfig}")
    private String maxPollInternalMsConfig;
    @Value("${kafka.maxPollRecordsConfig}")
    private String maxPollRecordsConfig;
    @Value("${kafka.errorHandlerRetryCount}")
    private String retryCount;

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupid);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetResetConfig);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.parseInt(maxPollInternalMsConfig));
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, Integer.parseInt(maxPollRecordsConfig));
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(Integer.parseInt(concurrency));
        factory.getContainerProperties().setSyncCommits(true);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setAckOnError(false);
        factory.setStatefulRetry(true);
        factory.setErrorHandler(new SeekToCurrentErrorHandler(Integer.parseInt(retryCount)));
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new AlwaysRetryPolicy());
        retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
        factory.setRetryTemplate(retryTemplate);
        return factory;
    }
}
my KafkaConsumer class
@Service
public class KafkaConsumer implements ConsumerSeekAware {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = "#{'${kafka.topic}'}", groupId = "#{'${kafka.groupid}'}")
    public void consume(String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition,
            @Header(KafkaHeaders.OFFSET) Long offset, Acknowledgment ack, KafkaMessageDrivenChannelAdapter kafkaMessageDrivenChannelAdapter) {
        if (!kafkaMessageDrivenChannelAdapter.isRunning()) {
            throwEvent();
        }
        try {
            mobileSubscriptionService.processMessage(message, ack, null);
        } catch (ParseException e) {
            logger.error(e.getMessage());
        }
    }

    @Scheduled(fixedDelayString = "${kafka.listenKafkaEveryThisMs}")
    private void throwEvent() {
        Map<String, String> eventDetails = new HashMap<>();
        eventDetails.put("eventDetailsKey", "eventDetailsValue");
        AppDynamicsEventUtil.publishEvent("eventSummary", EventSeverityStatus.INFO, EventTypeStatus.CUSTOM, eventDetails);
    }
I really don't know what I should use to check whether the Kafka server is running or not.
Thank you.
For checking cluster connection state, the most straightforward approach would be AdminClient.describeCluster().
Alternatively, you can hook such a check into Actuator.
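A minimal sketch of the AdminClient approach, reusing the kafka.url and listenKafkaEveryThisMs properties and the existing throwEvent() method from the question (timeout values are illustrative):

@Scheduled(fixedDelayString = "${kafka.listenKafkaEveryThisMs}")
public void checkKafkaConnection() {
    Map<String, Object> conf = new HashMap<>();
    conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaUrl);
    conf.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
    try (AdminClient client = AdminClient.create(conf)) {
        // Throws if no broker answers within the timeout
        client.describeCluster().nodes().get(5, TimeUnit.SECONDS);
    } catch (Exception e) {
        throwEvent(); // connection appears to be lost
    }
}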

Kafkalistener reading messages twice

With the configuration below, when we scale the Spring Boot containers to 10 JVMs the number of events received is randomly higher than the number published; for example, if 320000 messages are published, the consumed events are sometimes 320500, etc.
//Consumer container bean
private static final int CONCURRENCY = 1;

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put("enable.auto.commit", "false");
    //props.put("isolation.level", "read_committed");
    return props;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    //factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConcurrency(CONCURRENCY);
    return factory;
}

//Listener
@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}", groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    //save event to table ..
}
Updated
The change below seems to be working fine now; I will just add a duplicate check in the consumer to handle a consumer-failure scenario.
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "-1");
    //props.put("isolation.level", "read_committed");
    return props;
}
You can try setting ENABLE_IDEMPOTENCE_CONFIG to true; this helps ensure that exactly one copy of each message is written to the stream by the producer.
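A hedged sketch of that producer-side setting (the surrounding properties are illustrative; note that idempotence only removes duplicates caused by producer retries, so consumer-side redeliveries after a rebalance still need their own handling):

// Illustrative producer config enabling the idempotent producer
Map<String, Object> producerProps = new HashMap<>();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
producerProps.put(ProducerConfig.ACKS_CONFIG, "all"); // required for idempotence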
This approach worked for me.
You have to configure the KafkaListenerContainerFactory like this:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaFactory);
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
and use ConcurrentMessageListenerContainer like this :
@Bean
public IntegrationFlow inboundFlow() {
    final ContainerProperties containerProps = new ContainerProperties(PartitionConfig.TOPIC);
    containerProps.setGroupId(GROUP_ID);
    ConcurrentMessageListenerContainer concurrentListener = new ConcurrentMessageListenerContainer(kafkaFactory, containerProps);
    concurrentListener.setConcurrency(10);
    final KafkaMessageDrivenChannelAdapter kafkaMessageChannel = new KafkaMessageDrivenChannelAdapter(concurrentListener);
    return IntegrationFlows
            .from(kafkaMessageChannel)
            .channel(requestsIn())
            .get();
}
For more information, see how-does-kafka-guarantee-consumers-doesnt-read-a-single-message-twice and documentation-ConcurrentMessageListenerContainer.
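One detail implied by MANUAL_IMMEDIATE but not shown above: the listener has to acknowledge each record itself, otherwise offsets are never committed and records are re-read after a rebalance or restart. A minimal sketch, reusing the listener from the question:

@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}",
        groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    // save event to table ..
    ack.acknowledge(); // commit the offset immediately so the record is not redelivered
}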
