How to send Kafka offsets in a transaction created by KafkaTemplate?

I want to implement the read-process-write pattern - https://www.confluent.io/blog/transactions-apache-kafka/. So I need to consume records, process them, and then commit the consumed offsets.
I use org.apache.kafka.clients.consumer.KafkaConsumer for consuming messages; that is, it is not a Spring-managed consumer.
I use org.springframework.kafka.core.KafkaTemplate for producing messages. I create its bean like this:
@Bean
public Map<String, Object> producerConfigs() {
    final Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrapServers");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString());
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    return props;
}

@Bean
public DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory() {
    DefaultKafkaProducerFactory<String, String> kafkaProducerFactory = new DefaultKafkaProducerFactory<>(producerConfigs());
    kafkaProducerFactory.setTransactionIdPrefix("transaction-id-prefix");
    return kafkaProducerFactory;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate(DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory) {
    return new KafkaTemplate<>(defaultKafkaProducerFactory);
}
I produce result messages like this:
ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(POLL_INTERVAL_IN_MS));
List<List<String>> outputMessages = produceOutput(consumerRecords);

kafkaTemplate.executeInTransaction(kafkaProducer -> {
    for (List<String> resultTasks : outputMessages) {
        for (String resultTask : resultTasks) {
            kafkaProducer.send("topic", "key", resultTask);
        }
    }
    kafkaProducer.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
    return true;
});
Finally, I have this error:
java.lang.IllegalArgumentException: No transaction in process
at org.springframework.util.Assert.isTrue(Assert.java:118)
at org.springframework.kafka.core.KafkaTemplate.sendOffsetsToTransaction(KafkaTemplate.java:345)
The exception is thrown in this method:
@Override
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets, String consumerGroupId) {
    @SuppressWarnings("unchecked")
    KafkaResourceHolder<K, V> resourceHolder = (KafkaResourceHolder<K, V>) TransactionSynchronizationManager
            .getResource(this.producerFactory);
    Assert.isTrue(resourceHolder != null, "No transaction in process"); // here
    if (resourceHolder.getProducer() != null) {
        resourceHolder.getProducer().sendOffsetsToTransaction(offsets, consumerGroupId);
    }
}
So, how do I properly commit these offsets?

It's a bug; sendOffsetsToTransaction() doesn't work within executeInTransaction() because it assumes a Spring transaction is bound to the thread.
As a work-around, you can either use @Transactional on the method, or use a TransactionTemplate with a KafkaTransactionManager to start the Spring transaction, instead of using executeInTransaction():
TransactionTemplate tt = new TransactionTemplate(tm);
...

this.tt.execute(s -> {
    template.send(...);
    template.sendOffsetsToTransaction(...);
    return null;
});
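For a slightly fuller picture, here is a minimal sketch of that work-around, reusing the bean names and the getOffsetsForCommit() helper from the question (so those names are assumptions taken from there, not part of the answer): declare a KafkaTransactionManager on the same transactional producer factory and run the send/sendOffsets sequence inside a TransactionTemplate.
@Bean
public KafkaTransactionManager<String, String> kafkaTransactionManager(
        DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory) {
    // The producer factory already has a transactionIdPrefix, so it creates transactional producers.
    return new KafkaTransactionManager<>(defaultKafkaProducerFactory);
}

// In the read-process-write loop:
TransactionTemplate tt = new TransactionTemplate(kafkaTransactionManager);
tt.execute(status -> {
    for (List<String> resultTasks : outputMessages) {
        for (String resultTask : resultTasks) {
            kafkaTemplate.send("topic", "key", resultTask);
        }
    }
    // A Spring transaction is now bound to the thread, so this no longer throws.
    kafkaTemplate.sendOffsetsToTransaction(getOffsetsForCommit(consumerRecords), "consumerGroupId");
    return null;
});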
Please open a GitHub Issue and we'll fix this.

Related

Using Spring Kafka with LINGER_MS_CONFIG causes error

I'm experimenting with compression in Kafka because my records are getting too large. This is what my Kafka configuration looks like:
public Map<String, Object> producerConfigs(int blockTime) {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //spring.kafka.producer.request.timeout.ms
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, blockTime);
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 10000);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
    props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
    //ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG
    log.info("Created producer config kafka ");
    return props;
}

public ProducerFactory<String, String> finalproducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs(10));
}

@Bean(name = "commonKafkaTemplate")
public KafkaTemplate<String, String> getFinalKafkaTemplate() {
    return new KafkaTemplate<>(finalproducerFactory());
}
If I comment out the line with LINGER_MS_CONFIG, the code works.
I saw LINGER_MS_CONFIG being used in an example which said it made compression more "effective". What is it about LINGER_MS_CONFIG that causes an error? Is there some conflict with any of the other properties?
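No answer is recorded here, but one plausible conflict, stated only as an assumption since the question does not include the actual error message: the Kafka producer validates at startup that delivery.timeout.ms is at least linger.ms + request.timeout.ms. With the values above, 10000 < 10000 + 5, so the check fails as soon as linger.ms is set. A minimal sketch that keeps linger.ms while satisfying the constraint:
// Assumption: the failure is the producer sanity check
//   delivery.timeout.ms >= linger.ms + request.timeout.ms
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 10005); // >= 10000 + 5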

How to read a message from a Kafka consumer after some time interval

In my Spring Boot application I have a Kafka consumer class which reads messages frequently, whenever messages are available in the topic. I want to limit the consumer to consuming one message every 2 hours; that is, after reading one message the consumer should pause for 2 hours and then consume another message.
This is my consumer config method:
@Bean
public Map<String, Object> scnConsumerConfigs() {
    Map<String, Object> propsMap = new HashMap<>();
    // common props
    logger.info("KM Dataloader :: Kafka Brokers for Software topic: {}", bootstrapServersscn);
    propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServersscn);
    propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
    propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
    propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 7200000);
    // ssl props
    propsMap.put("security.protocol", mpaasSecurityProtocol);
    propsMap.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, truststorePath);
    propsMap.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, truststorePassword);
    propsMap.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, keystorePath);
    propsMap.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, keystorePassword);
    return propsMap;
}
Then I create this container method, where I set up the rest of the Kafka configuration:
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
LOGGER.info("Setting concurrency to {} for {}", config.getConcurrency(), topicName);
factory.setConcurrency(config.getConcurrency());
factory.setConsumerFactory(cFactory);
factory.setRetryTemplate(retryTemplate);
factory.getContainerProperties().setIdleBetweenPolls(7200000);
return factory;
With this code the partitions are rebalanced every 2 hours, but it is not reading messages at all.
My Kafka consumer method:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
    return new KmKafkaListener(softwareService) {
        @KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
        public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
                throws InterruptedException {
            this.onMessage(record);
        }
    };
}
Try adding the @KafkaListener-annotated method to KmKafkaListener itself, so that Spring Kafka takes care of calling it:
public class KmKafkaListener {

    @KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
    public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
            throws InterruptedException {
        this.onMessage(record);
    }
}
and initialize the bean this way:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
    return new KmKafkaListener(softwareService);
}
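A side note on the rebalancing mentioned in the question, offered as an assumption rather than a confirmed diagnosis: the time between polls includes idleBetweenPolls, so if max.poll.interval.ms is not larger than the idle time the consumer is treated as failed and the group rebalances. A minimal sketch giving the poll interval some headroom:
// Assumption: the 2-hour rebalance comes from max.poll.interval.ms (7200000) not exceeding
// the configured idleBetweenPolls (7200000). Leave some headroom between the two.
propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 7500000); // > idleBetweenPolls
factory.getContainerProperties().setIdleBetweenPolls(7200000L);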

Does the default commit strategy when using Spring with Kafka with default properties lose messages?

We see that some messages are lost when consuming from a Kafka topic, especially during restarts of the service, when using the default properties:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    // Creating a Map of string-object pairs
    Map<String, Object> config = new HashMap<>();
    // Adding the Configuration
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(config);
}

// Creating a Listener
public ConcurrentKafkaListenerContainerFactory concurrentKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
From the documentation, the default value for ackMode is BATCH, which is described as:
Commit the offset when all the records returned by the poll() have been processed
How does Spring know that all the messages have been processed in a simple example like the one below? And does it mean that when we restart the service the offsets may already be committed for messages we haven't actually processed, leading to lost messages?
@KafkaListener(topics = "topicName", groupId = "foo")
public void listenGroupFoo(String message) {
    System.out.println("Received Message in group foo: " + message);
}
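For what it's worth, a minimal sketch (an assumption, not from this thread) of making the commit behaviour more explicit: with AckMode.RECORD the container commits the offset after each record the listener returns from successfully, instead of after the whole batch returned by poll().
// Sketch: commit after every successfully processed record rather than per poll() batch.
ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);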

Scaling Kafka to manage higher TPS using a synchronous request/reply configuration

I am new to Kafka and I am working on a POC to migrate our current ESB to microservices. Our current ESB uses SOAP services, so I need to keep using the request/reply paradigm.
The POC consists of a Spring Boot microservice. One instance of the application is the producer, using the following code:
@Endpoint
public class AcctInfoSoapServiceController {

    private static final Logger LOGGER = LoggerFactory.getLogger(AcctInfoSoapServiceController.class);

    @Autowired
    KafkaAsyncService kafkaAsyncService;

    @PayloadRoot(namespace = "http://firstbankpr.com/feis/common/model", localPart = "GetAccountInfoRequest")
    @ResponsePayload
    public GetAccountInfoResponse getModelResponse(@RequestPayload GetAccountInfoRequest accountInfo) throws Exception {
        long start = System.currentTimeMillis();
        LOGGER.info("Returning request for account " + accountInfo.getInGetAccountInfo().getAccountNumber());
        AccountInquiryDto modelResponse = kafkaAsyncService.getModelResponse(accountInfo.getInGetAccountInfo());
        GetAccountInfoResponse response = ObjectFactory.getGetAccountInfoResponse(modelResponse);
        long elapsedTime = System.currentTimeMillis() - start;
        LOGGER.info("Returning request in " + elapsedTime + " ms for account = " + accountInfo.getInGetAccountInfo().getAccountNumber() + " " + response);
        return response;
    }
}
public AccountInquiryDto getModelResponse(InGetAccountInfo accountInfo) throws Exception {
    LOGGER.info("Received request for request for account " + accountInfo);
    // create producer record
    ProducerRecord<String, InGetAccountInfo> record = new ProducerRecord<String, InGetAccountInfo>(requestTopic, accountInfo);
    // set reply topic in header
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, requestReplyTopic.getBytes()));
    // post in kafka topic
    RequestReplyFuture<String, InGetAccountInfo, AccountInquiryDto> sendAndReceive = kafkaTemplate.sendAndReceive(record);
    // confirm if producer produced successfully
    SendResult<String, InGetAccountInfo> sendResult = sendAndReceive.getSendFuture().get();
    // print all headers
    sendResult.getProducerRecord().headers().forEach(header -> System.out.println(header.key() + ":" + header.value().toString()));
    // get consumer record
    ConsumerRecord<String, AccountInquiryDto> consumerRecord = sendAndReceive.get();
    ObjectMapper mapper = new ObjectMapper();
    AccountInquiryDto modelResponse = mapper.convertValue(
            consumerRecord.value(),
            new TypeReference<AccountInquiryDto>() { });
    LOGGER.info("Returning record for " + modelResponse);
    return modelResponse;
}
The following is the configuration of the producer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    //props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    //props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    //props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    //props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}

@Bean
public ReplyingKafkaTemplate<String, InGetAccountInfo, AccountInquiryDto> replyKafkaTemplate(ProducerFactory<String, InGetAccountInfo> pf, KafkaMessageListenerContainer<String, AccountInquiryDto> container) {
    return new ReplyingKafkaTemplate(pf, container);
}

@Bean
public ProducerFactory<String, InGetAccountInfo> requestProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public ConsumerFactory<String, AccountInquiryDto> replyConsumerFactory() {
    JsonDeserializer<AccountInquiryDto> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    jsonDeserializer.addTrustedPackages(AccountInquiryDto.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaMessageListenerContainer<String, AccountInquiryDto> replyContainer(ConsumerFactory<String, AccountInquiryDto> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public KafkaAsyncService kafkaAsyncService() {
    return new KafkaAsyncService();
}
I have one instance of the Spring Boot application working as the Kafka consumer, with the following code:
@KafkaListener(topics = "${kafka.topic.acct-info.request}", containerFactory = "requestReplyListenerContainerFactory")
@SendTo
public Message<?> listenPartition0(InGetAccountInfo accountInfo,
        @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
    try {
        LOGGER.info("Received request for partition id = " + id);
        LOGGER.info("Received request for accountInfo = " + accountInfo.getAccountNumber());
        AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
        LOGGER.info("Returning accountInfoDto = " + accountInfoDto.toString());
        return MessageBuilder.withPayload(accountInfoDto)
                .setHeader(KafkaHeaders.TOPIC, replyTo)
                .setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
                .build();
    } catch (Exception e) {
        LOGGER.error(e.toString(), e);
    }
    return null;
}
The following is the configuration of the consumer
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    //props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId + "-" + UUID.randomUUID().toString());
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId + "-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.RETRIES_CONFIG, "2");
    return props;
}

@Bean
public ConsumerFactory<String, InGetAccountInfo> requestConsumerFactory() {
    JsonDeserializer<InGetAccountInfo> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages(InGetAccountInfo.class.getPackage().getName());
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), jsonDeserializer);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, InGetAccountInfo>> requestReplyListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, InGetAccountInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    factory.setConcurrency(3);
    factory.setReplyTemplate(replyTemplate());
    return factory;
}

@Bean
public ProducerFactory<String, AccountInquiryDto> replyProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, AccountInquiryDto> replyTemplate() {
    return new KafkaTemplate<>(replyProducerFactory());
}

@Bean
public DepAcctInqConsumerController Controller() {
    return new DepAcctInqConsumerController();
}

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic requestTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("retention.ms", replyTimeout.toString());
    return new NewTopic(requestTopic, 2, (short) 2).configs(configs);
}
Kafka is using one topic with 5 partitions. When I run a load test using SoapUI I get about 19 TPS with 20 threads. The TPS remains the same even when I increase the number of partitions to 10 with 20 threads, and increasing the number of threads to 40 does not increase the TPS even with more partitions (up to 10). Also, increasing the number of consumer and producer instances to 2 and configuring a load balancer to distribute the load does not change the TPS (which stays at about 20). In this case it seems that Kafka is the bottleneck.
The only way I was able to increase the TPS was by assigning a different topic to each consumer/producer pair. When using the load balancer the TPS increased to 38, which is about double what I obtained with one consumer/producer pair. Monitoring the servers running the Spring Boot application does not provide any meaningful information, since the CPU load and memory remain very low. The Kafka servers also remain low on usage, at about 20% CPU utilization. Currently Kafka is using only one broker.
I am looking for advice on how to configure Kafka so that I can increase the number of instances of the Spring Boot application using the same topic; this would allow me to increase the TPS with every new consumer/producer instance that starts. I understand that with every new consumer/producer instance I may need to increase the number of partitions on the topic. The final goal is to run these as pods inside an OpenShift cluster, where the application should be able to grow automatically when traffic increases.
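No answer is recorded for this question. Purely as an assumption, one detail worth checking in the configuration above is the reply side: replyContainer is a KafkaMessageListenerContainer, which consumes the reply topic on a single thread per instance. A sketch of a concurrent reply container (the concurrency value is illustrative and should not exceed the reply-topic partition count):
@Bean
public ConcurrentMessageListenerContainer<String, AccountInquiryDto> replyContainer(
        ConsumerFactory<String, AccountInquiryDto> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    ConcurrentMessageListenerContainer<String, AccountInquiryDto> container =
            new ConcurrentMessageListenerContainer<>(cf, containerProperties);
    container.setConcurrency(5); // illustrative; <= number of partitions of the reply topic
    return container;
}

@Bean
public ReplyingKafkaTemplate<String, InGetAccountInfo, AccountInquiryDto> replyKafkaTemplate(
        ProducerFactory<String, InGetAccountInfo> pf,
        ConcurrentMessageListenerContainer<String, AccountInquiryDto> container) {
    return new ReplyingKafkaTemplate<>(pf, container);
}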

How to pass topics dynamically to a kafka listener?

For a couple of days I have been trying out ways to dynamically pass topics to a Kafka listener rather than using them through keys from a Java DSL. Has anyone done this before, or could you shed some light on the best way to achieve this?
The easiest solution I found was to use SpEL:
@Autowired
private SomeBean kafkaTopicNameProvider;

@KafkaListener(topics = "#{kafkaTopicNameProvider.provideName()}")
public void listener() { ... }
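For completeness, a minimal sketch of what such a provider bean could look like; the bean name, property, and method used here are illustrative assumptions, not part of the original answer:
// Hypothetical provider resolved by the SpEL expression "#{kafkaTopicNameProvider.provideName()}".
@Component("kafkaTopicNameProvider")
public class SomeBean {

    @Value("${app.kafka.topic-name}") // hypothetical property
    private String topicName;

    public String provideName() {
        return topicName;
    }
}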
You cannot "dynamically pass topics to a Kafka listener"; you have to programmatically create a listener container instead.
Here is a working solution:
// Start a listener container without using the @KafkaListener annotation
Map<String, Object> consumerProps = consumerProps("my-srv1:9092", "my-group", "false");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
ContainerProperties containerProperties = new ContainerProperties("my-topic");
KafkaMessageListenerContainer container = new KafkaMessageListenerContainer<>(cf, containerProperties);
final BlockingQueue<ConsumerRecord<String, String>> records = new LinkedBlockingQueue<>();
container.setupMessageListener((MessageListener<String, String>) record -> {
    log.error("Message received: " + record);
    records.add(record);
});
container.start();
/**
 * Set up test properties for an {@code <Integer, String>} consumer.
 * @param brokersCommaSep the bootstrapServers property (comma separated servers).
 * @param group the group id.
 * @param autoCommit the auto commit.
 * @return the properties.
 */
public static Map<String, Object> consumerProps(String brokersCommaSep, String group, String autoCommit) {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokersCommaSep);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommit);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "10");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return props;
}
Hope it can help.
I made a Kafka listener for runtime registration, de-registration, start, and stop.
public class KafkaListener {

    private final Object lock = new Object(); // synchronization guard used by register()
    private final KafkaListenerContainerFactory kafkaListenerContainerFactory;
    private final Map<String, MessageListenerContainer> registeredTopicMap;

    /** Kafka listener registration at runtime. */
    public void register(final Supplier<Set<String>> topicSupplier, final Supplier<MessageListener> messageListenerSupplier) {
        synchronized (lock) {
            // getRegisteredTopics() and the constructor are in the full source linked below
            final Set<String> registeredTopics = getRegisteredTopics();
            final Set<String> topics = topicSupplier.get();
            if (topics.isEmpty()) {
                return;
            }
            topics.stream()
                    .filter(topic -> !registeredTopics.contains(topic))
                    .forEach(topic -> doRegister(topic, messageListenerSupplier.get()));
        }
    }

    private void doRegister(final String topic, final MessageListener messageListener) {
        final MessageListenerContainer messageListenerContainer = kafkaListenerContainerFactory.createContainer(topic);
        messageListenerContainer.setupMessageListener(messageListener);
        messageListenerContainer.start();
        registeredTopicMap.put(topic, messageListenerContainer);
    }
}
Full source code: https://github.com/pkgonan/kafka-listener
First, try it:
docker-compose up -d
Then call the API:
curl -XPOST /consumers/order/register .....
curl -XPOST /consumers/order/de-register .....
curl -XPOST /consumers/order/stop
curl -XPOST /consumers/order/start
You can change topics at runtime dynamically:
@Component
public class StoppingErrorHandler implements ErrorHandler {

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
        ConcurrentMessageListenerContainer listenerContainer = (ConcurrentMessageListenerContainer) kafkaListenerEndpointRegistry.getListenerContainer("fence");
        ContainerProperties cp = listenerContainer.getContainerProperties();
        String[] topics = cp.getTopics();
        topics[0] = "gaonb";
        listenerContainer.stop();
        listenerContainer.start();
    }
}
