In our application we use a Kafka consumer to determine whether to send an email.
We ran into an issue the other day where the Kafka partition was timing out before the consumer could read and process all of its records. As a result, it looped back to the start of the partition, never finished the set of records it had received, and new data generated after the loop started was never processed.
My team suggested that we could tell Kafka to commit after each message is read, but I can't figure out how to do that with Spring Kafka.
The application uses spring-kafka 2.1.6, and the consumer code looks roughly like this:
@KafkaListener(topics = "${kafka.topic}", groupId = "${kafka.groupId}")
public void consume(String message, @Header("kafka_offset") int offSet) {
    try {
        EmailData data = objectMapper.readValue(message, EmailData.class);
        if (isEligableForEmail(data)) {
            emailHandler.sendEmail(data);
        }
    } catch (Exception e) {
        log.error("Error: " + e.getMessage(), e);
    }
}
Note: the sendEmail function uses CompletableFutures, as it has to call a different API before sending out an email.
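To give a rough idea (simplified, with made-up collaborator names), sendEmail does something along these lines:

// Simplified illustration; preferenceApi and mailer stand in for the real collaborators.
public CompletableFuture<Void> sendEmail(EmailData data) {
    return CompletableFuture
            .supplyAsync(() -> preferenceApi.lookup(data.getRecipient())) // call the other API first
            .thenAccept(prefs -> mailer.send(data, prefs));               // then send the email
}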
Configuration (snippet of the YAML file for the consumer and part of the producer):
consumer:
  max.poll.interval.ms: 3600000
producer:
  retries: 0
  batch-size: 100000
  acks: 0
  buffer-memory: 33554432
  request.timeout.ms: 60000
  linger.ms: 10
  max.block.ms: 5000
If you want manual acknowledgment, you can add the Acknowledgment to the method arguments:
@KafkaListener(topics = "${kafka.topic}", groupId = "${kafka.groupId}")
public void consume(String message, Acknowledgment ack, @Header("kafka_offset") int offSet) {
When using manual AckMode, you can also provide the listener with the Acknowledgment. The following example also shows how to use a different container factory.
Example from the docs, @KafkaListener annotation:
@KafkaListener(id = "cat", topics = "myTopic",
        containerFactory = "kafkaManualAckListenerContainerFactory")
public void listen(String data, Acknowledgment ack) {
    ...
    ack.acknowledge();
}
Set the container ackMode property to AckMode.RECORD to commit the offset after each record.
You should also consider reducing max.poll.records or increasing max.poll.interval.ms Kafka consumer properties.
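For reference, here is a minimal sketch of a container factory set to AckMode.RECORD together with a smaller max.poll.records. This is not your existing configuration; the bootstrap server, group id, and record count are placeholders.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "email-consumer");          // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Fetch fewer records per poll so processing finishes within max.poll.interval.ms.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Commit the offset after each record. On spring-kafka 2.1.x the enum is
        // AbstractMessageListenerContainer.AckMode; newer versions use ContainerProperties.AckMode.
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
        return factory;
    }
}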
Related
I want to enable manual commit for my consumer, and for that I have the code and configuration below. I am trying to manually commit the offset in case the signIn client throws an exception; up to the manual offset commit it works fine, but with this code the message that failed to process is not consumed again. To fix that, I want to call the seek method so the same failed offset is consumed again:
consumer.seek(new TopicPartition(atCommunityTopic, communityFeed.partition()), communityFeed.offset());
But the actual problem is how to get the partition and offset details. If I could somehow get the ConsumerRecord object along with the message, it would work.
spring.cloud.stream.kafka.bindings.atcommnity.consumer.autoCommitOffset=false
Below is the consumer code, using a StreamListener:
@StreamListener(ConsumerConstants.COMMUNITY_IN)
public void handleCommFeedConsumer(
        @Payload List<Account> consumerRecords,
        @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer,
        @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
    consumerRecords.forEach(communityFeed -> {
        try {
            AccountClient.signIn(
                    AccountIn.builder()
                            .Id(communityFeed.getId())
                            .build());
            log.debug("Calling Client for Id : " + communityFeed.getId());
        } catch (RuntimeException ex) {
            log.info("signIn failed for Id : " + communityFeed.getId(), ex);
            //consumer.seek(new TopicPartition(communityTopic, communityFeed.partition()), communityFeed.offset());
            return;
        }
        acknowledgment.acknowledge();
    });
}
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#consumer-record-metadata
@Header(KafkaHeaders.PARTITION_ID) int partition
@Header(KafkaHeaders.OFFSET) long offset
IMPORTANT
Seeking the consumer yourself might not do what you want, because the container may already have other records after this one; it's best to throw an exception and let the error handler do the seeks for you.
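Applied to the listener in the question, the signature could look roughly like this. This is only a sketch reusing the question's names, and it assumes one record per delivery; it adds the two headers above and rethrows the exception so the error handler can do the re-seeking:

@StreamListener(ConsumerConstants.COMMUNITY_IN)
public void handleCommFeedConsumer(
        @Payload Account communityFeed,
        @Header(KafkaHeaders.PARTITION_ID) int partition,
        @Header(KafkaHeaders.OFFSET) long offset,
        @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
    try {
        AccountClient.signIn(AccountIn.builder().Id(communityFeed.getId()).build());
        acknowledgment.acknowledge();
    } catch (RuntimeException ex) {
        // Log where the failure happened, then rethrow and let the error handler seek.
        log.error("signIn failed at partition {} offset {}", partition, offset, ex);
        throw ex;
    }
}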
https://pulsar.apache.org/api/client/2.4.0/org/apache/pulsar/client/api/Consumer.html#seek-long-
When calling the seek(long timestamp) method on the consumer, does the timestamp have to equal the exact time a message was published?
For example, if I sent three messages at t=1, 5, 7 and I call consumer.seek(3), will I get an error? Or will my consumer get reset to t=3, so that if I call consumer.next(), I'll get my second message?
Thanks in advance,
The Consumer#seek(long timestamp) method allows you to reset your subscription to a given timestamp. After seeking, the consumer will start receiving messages with a publish time equal to or greater than the timestamp passed to the seek method.
The example below shows how to reset a consumer to the previous hour:
try (
        // Create PulsarClient
        PulsarClient client = PulsarClient
                .builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        // Create Consumer subscription
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("my-topic")
                .subscriptionName("my-subscription")
                .subscriptionMode(SubscriptionMode.Durable)
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscriptionInitialPosition(SubscriptionInitialPosition.Latest)
                .subscribe()
) {
    // Seek consumer to previous hour
    consumer.seek(Instant.now().minus(Duration.ofHours(1)).toEpochMilli());
    while (true) {
        final Message<String> msg = consumer.receive();
        System.out.printf(
                "Message received: key=%s, value=%s, topic=%s, id=%s%n",
                msg.getKey(),
                msg.getValue(),
                msg.getTopicName(),
                msg.getMessageId().toString());
        consumer.acknowledge(msg);
    }
}
Note that if you have multiple consumers that belong to the same subscription (e.g., Key_Shared), then all consumers will be reset.
I'm currently trying to set up a consumer to consume messages from a topic. My log says it subscribes to the topic successfully:
[Consumer clientId=consumer-1, groupId=consumer-group] Subscribed to topic(s): MY-TOPIC
and it clearly shows it is part of a group, but when I go to the Control Center I can't find that group, although I can find the topic I am subscribed to. It isn't consuming the records from the topic either, which I attribute to it not being part of a valid group. I know it is polling the correct topic, and I know there are records on the topic because I am constantly putting them on.
Here is my start method
@PostConstruct
public void start()
{
    // check if the config indicates whether to start the daemon or not
    if (!parseBoolean(maskBlank(shouldStartConsumer, "true")))
    {
        System.err.println("CONSUMER DISABLED");
        logger.warn("consumer not starting -- see value of " + PROP_EXTRACTOR_START_CONSUMER);
        return;
    }
    System.err.println("STARTING CONSUMER");
    Consumer<String, String> consumer = this.createConsumer(kafkaTopicName,
            StringDeserializer.class, StringDeserializer.class);
    Thread daemon = new Thread(() -> {
        while (true)
        {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            if (records.count() > 0) // IS ALWAYS 0, poll doesn't return records
            {
                printRecord(records);
                records.iterator().forEachRemaining(r -> {
                    System.err.println("received record: " + r);
                });
            }
            else
            {
                logger.debug("KafkaTopicConsumer::consumeMessage -- No messages found in topic {}", kafkaTopicName);
            }
        }
    });
    daemon.setName(kafkaTopicName);
    daemon.setDaemon(true);
    daemon.start();
}
Note: the createConsumer method just adds all of my config settings and is where I subscribe to my topic.
I have a feeling it has something to do with the thread... I can post some of my config if that would help as well; just leave a comment. Thanks.
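For reference, createConsumer does something roughly like this (simplified; the property values here are placeholders, not my real config):

// Simplified sketch of createConsumer; values are placeholders, not the real config.
private Consumer<String, String> createConsumer(String topic,
                                                Class<?> keyDeserializer,
                                                Class<?> valueDeserializer)
{
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer-group");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getName());
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList(topic));
    return consumer;
}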
We are trying to implement Kafka as our message broker solution. We are deploying our Spring Boot microservices on IBM Bluemix, whose internal message broker implementation is Kafka version 0.10. Since my experience is more on the JMS and ActiveMQ end, I was wondering what the ideal way is to handle system-level errors in the Java consumers.
Here is how we have implemented it currently
Consumer properties
enable.auto.commit=false
auto.offset.reset=latest
We are using the default properties for
max.partition.fetch.bytes
session.timeout.ms
Kafka Consumer
We are spinning up 3 threads per topic, all having the same groupId, i.e. one KafkaConsumer instance per thread. We have only one partition as of now. The consumer code looks like this in the constructor of the thread class:
kafkaConsumer = new KafkaConsumer<String, String>(properties);
final List<String> topicList = new ArrayList<String>();
topicList.add(properties.getTopic());
kafkaConsumer.subscribe(topicList, new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
    }

    @Override
    public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
        try {
            logger.info("Partitions assigned, consumer seeking to end.");
            for (final TopicPartition partition : partitions) {
                final long position = kafkaConsumer.position(partition);
                logger.info("current Position: " + position);
                logger.info("Seeking to end...");
                kafkaConsumer.seekToEnd(Arrays.asList(partition));
                logger.info("Seek from the current position: " + kafkaConsumer.position(partition));
                kafkaConsumer.seek(partition, position);
            }
            logger.info("Consumer can now begin consuming messages.");
        } catch (final Exception e) {
            logger.error("Error while seeking during partition assignment", e);
        }
    }
});
The actual reading happens in the run method of the thread
try {
    // Poll on the Kafka consumer every second.
    final ConsumerRecords<String, String> records = kafkaConsumer.poll(1000);
    // Iterate through all the messages received and print their content.
    for (final TopicPartition partition : records.partitions()) {
        final List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
        logger.info("consumer is alive and is processing " + partitionRecords.size() + " records");
        for (final ConsumerRecord<String, String> record : partitionRecords) {
            logger.info("processing topic " + record.topic() + " for key " + record.key() + " on offset " + record.offset());
            final Class<? extends Event> resourceClass = eventProcessors.getResourceClass();
            final Object obj = converter.convertToObject(record.value(), resourceClass);
            if (obj != null) {
                logger.info("Event: " + obj + " acquired by " + Thread.currentThread().getName());
                final CommsEvent event = resourceClass.cast(converter.convertToObject(record.value(), resourceClass));
                final MessageResults results = eventProcessors.processEvent(event);
                if ("Success".equals(results.getStatus())) {
                    // commit the processed message, which advances the offset
                    kafkaConsumer.commitSync();
                    logger.info("Message processed successfully");
                } else {
                    kafkaConsumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
                    logger.error("Error processing message : {} with error : {}, resetting offset to {}", obj, results.getError().getMessage(), record.offset());
                    break;
                }
            }
        }
    }
    // TODO add return
} catch (final Exception e) {
    logger.error("Consumer has failed with exception: " + e, e);
    shutdown();
}
You will notice the EventProcessor, which is a service class that processes each record and in most cases commits the record to the database. If the processor throws an error (a system exception or a ValidationException), we do not commit but programmatically seek to that offset, so that a subsequent poll will return from that offset for that group id.
The doubt now is: is this the right approach? If we get an error and we set the offset, then until that is fixed no other message is processed. This might work for system errors like not being able to connect to the DB, but if the problem is only with that one event and not the others, we won't be able to process any other record. We thought of the concept of an error topic: when we get an error, the consumer publishes that event to the error topic and in the meantime keeps processing subsequent events. But it looks like we are trying to bring the design concepts of JMS (from my previous experience) into Kafka, and there may be a better way to handle errors in Kafka. Also, reprocessing from the error topic may change the sequence of messages, which we don't want for some scenarios.
Please let me know how anyone has handled this scenario in their projects following the Kafka standards.
-Tatha
if the problem is only with that event and not others to process this one record we wont be able to process any other record
That's correct, and your suggestion to use an error topic seems a possible one.
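As a rough illustration of the error-topic idea (errorTopic and errorProducer are hypothetical names, not part of your code), the failure branch could forward the record and commit instead of seeking back:

if ("Success".equals(results.getStatus())) {
    kafkaConsumer.commitSync();
} else {
    // Park the raw payload on an error topic for later inspection or replay...
    errorProducer.send(new ProducerRecord<>(errorTopic, record.key(), record.value()));
    // ...then commit so the next poll continues with the following record.
    kafkaConsumer.commitSync();
}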
I also noticed that with your handling of onPartitionsAssigned you essentially do not use the consumer's committed offset, since you always seek to the end.
If you want to restart from the last successfully committed offset, you should not perform a seek.
Finally, I'd like to point out, though it looks like you already know this, that having 3 consumers in the same group subscribed to a single partition means 2 out of 3 will be idle.
HTH
Edo
I am creating consumers (a consumer group with a single consumer in it):
Properties properties = new Properties();
properties.put("zookeeper.connect", "localhost:2181");
properties.put("auto.offset.reset", "largest");
properties.put("group.id", groupId);
properties.put("auto.commit.enable", "true");
ConsumerConfig consumerConfig = new ConsumerConfig(properties);
ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumerConnector.createMessageStreams(topicCountMap);
consumerMap.entrySet().stream().forEach(
    streams -> {
        streams.getValue().stream().forEach(
            stream -> {
                KafkaBasicConsumer customConsumer = new KafkaBasicConsumer();
                try {
                    Future<?> consumerFuture = kafkaConsumerExecutor.submit(customConsumer);
                    kafkaConsumersFuture.put(groupId, consumerFuture);
                } catch (Exception e) {
                    logger.error("---- Got error : " + e.getMessage());
                    logger.error("Exception : ", e);
                }
            }
        );
    }
);
I have subscribed 2 consumers to the same topic.
I am unsubscribing a consumer by storing its Future object and then invoking:
consumerFuture.cancel(Boolean.TRUE);
Now I subscribe the same consumer again with the code above, and it gets registered successfully.
However, when the publisher now publishes, the newly subscribed consumer is not getting messages, whereas the other consumer that was already registered is getting messages.
I am also checking the consumers' offsets; they are getting updated when the producer publishes, but the consumers are not getting messages.
Before producing:
Group  Topic  Pid  Offset  logSize  Lag
A      T1     0    94      94       1
Group  Topic  Pid  Offset  logSize  Lag
B      T1     0    94      94       1
After producing:
Group  Topic  Pid  Offset  logSize  Lag
A      T1     0    95      97       2
Group  Topic  Pid  Offset  logSize  Lag
B      T1     0    94      97       2
I am not able to figure out whether this is an issue on the producer side (not enough partitions) or whether I have created the consumer in an incorrect way.
Also, I am not able to figure out what the logSize and Lag columns mean.
Let me know if anyone can help or needs more details.
I found the solution to my problem; thanks @nautilus for reminding me to update.
My main intent was to provide an endpoint to subscribe and unsubscribe a consumer in Kafka.
Since Kafka provides only subscribing and not unsubscribing (that is only possible manually), I had to write a layer over the Kafka implementation.
I stored the consumer object in a static map with the group id as the key (since my consumer group can have only one consumer).
The problem was that I was not closing the consumer once created when unsubscribing, and the old consumer with the same group id was preventing the new one from getting messages.
private static Map<String, ConsumerConnector> kafkaConsumersFuture;
Based on some parameter, I find out the group id:
kafkaConsumersFuture.put(groupId, consumerConnector);
And while unsubscribing, I did:
ConsumerConnector consumerConnector = kafkaConsumersFuture.get(groupId);
if (consumerConnector != null) {
    consumerConnector.shutdown();
    kafkaConsumersFuture.remove(groupId);
}
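Putting it together, the wrapper looks roughly like this (simplified; the connection properties and the single-stream-per-topic setup are placeholders):

private static final Map<String, ConsumerConnector> CONSUMERS = new ConcurrentHashMap<>();

public void subscribe(String groupId, String topic) {
    Properties properties = new Properties();
    properties.put("zookeeper.connect", "localhost:2181");
    properties.put("group.id", groupId);
    properties.put("auto.offset.reset", "largest");
    properties.put("auto.commit.enable", "true");
    ConsumerConnector consumerConnector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    consumerConnector.createMessageStreams(Collections.singletonMap(topic, 1));
    CONSUMERS.put(groupId, consumerConnector);
}

public void unsubscribe(String groupId) {
    ConsumerConnector consumerConnector = CONSUMERS.remove(groupId);
    if (consumerConnector != null) {
        // Shutting down the old connector releases the partition so a newly
        // subscribed consumer with the same group id starts receiving messages.
        consumerConnector.shutdown();
    }
}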