I have a problem with a microservice when running it on Kubernetes with many pods.
I use the manual commit strategy, so I have to acknowledge (or not) every message.
All instances of the application belong to the same Kafka consumer group, and the topic has at least 20 partitions divided between the pods. When consuming a message, the listener makes a call to an external component (such as a REST API via WebClient or RestTemplate, or a Kafka producer for a different topic). The Kafka consumer looks like this:
@KafkaListener(topics = "topic")
@Trace
public void listen(@Payload Object message, Acknowledgment acknowledgment) {
    try {
        api.call(message);
        acknowledgment.acknowledge();
    } catch (InfraException e) {
        // negative acknowledgment: redeliver after a 1-second sleep
        acknowledgment.nack(1000);
    }
}
But sometimes this external component has infra problems and is not available. The problem usually happens when, for some reason, a single pod has connectivity issues. As the message is not acknowledged, it continues to be consumed, which is good. But the problem is that the message keeps being delivered to the same problematic instance of the application and is never redirected to another 'healthy' consumer. Since the consumer is still able to fetch messages from Kafka and send heartbeats, it is never considered a problematic consumer by Kafka, even after a rebalance.
Is there some strategy, or something we can do in the config, to solve this problem or avoid the partition being blocked?
Thank you for your attention so far.
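One possible mitigation, sketched below rather than a guaranteed fix: with spring-kafka 2.8+ you can let the listener rethrow InfraException instead of nacking and configure the container factory with a CommonContainerStoppingErrorHandler. The failing instance then stops its container and leaves the consumer group, so Kafka rebalances its partitions to healthy pods (the container can be restarted later, e.g. via KafkaListenerEndpointRegistry, once connectivity recovers).
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        ConsumerFactory<String, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Stop this container when the listener throws: the instance leaves the
    // consumer group and its partitions are reassigned to healthy consumers.
    factory.setCommonErrorHandler(new CommonContainerStoppingErrorHandler());
    return factory;
}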
Related
I am working on a microservices project. In this project, Microservice A performs a process in various steps. At the completion of each step, Microservice A sends a message to a Kafka topic. Another Microservice B then consumes the message from the Kafka topic and sends an email notifying of the successful completion of the step. I need exactly-once semantics for this. I am using KafkaTemplate.send in Microservice A and @KafkaListener to read the message in Microservice B. My question is whether the KafkaTemplate producer and @KafkaListener consumer are idempotent and, if not, how I can make them idempotent.
Regards,
I am autowiring the KafkaTemplate using the following code:
@Autowired
public EventProducer(NewTopic topic, KafkaTemplate<String, Event> kafkaTemplate) {
    this.kafkaTemplate = kafkaTemplate;
}
Exactly-once semantics in Kafka apply to consume->process->produce operations within the same application - even then, only the entire consume->process->produce sequence is "exactly once"; the consume->process part is at least once; consumption is always at least once (or at most once), including in your scenario (for B).
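As for idempotency: the KafkaTemplate producer can be made idempotent with the standard producer setting enable.idempotence=true (in Spring Boot, spring.kafka.producer.properties.enable.idempotence=true), which removes duplicates caused by producer retries. The @KafkaListener consumer is not idempotent by itself; B has to deduplicate redeliveries. A minimal sketch, assuming A sends a unique event id as the record key, and using a hypothetical processedEventStore (e.g. a table with a unique constraint on the id) and emailService:
@KafkaListener(topics = "step-events") // topic name is an assumption
public void onStepCompleted(ConsumerRecord<String, Event> record) {
    String eventId = record.key(); // assumes A uses a unique id as the key
    // markIfNew returns false when the id was already seen (a redelivery)
    if (processedEventStore.markIfNew(eventId)) {
        emailService.sendStepCompletedMail(record.value());
    }
}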
We have a requirement where we consume messages from one topic, apply some enrichment, and then publish the message to another topic. Below are the steps:
Consumer - consume the message
Enrichment - enrich the consumed message
Producer - publish the enriched message to the other topic
I am using the Spring Cloud Stream Kafka binder and things were working fine. Suddenly we observed that the producer was sending duplicate messages to the topic, so we made the producer idempotent. We have autoCommitOffset set to false for better control. Below is what we are doing in the method:
@StreamListener("INPUT")
@SendTo("OUTPUT")
public String consumer(Message<?> message) {
    String inputMessage = message.getPayload().toString();
    String enrichedMessage = enrich(inputMessage); // enrich(...) stands in for the actual enrichment logic
    return enrichedMessage;
}
We observed that if ack.acknowledge() fails due to some issue, the message is still sent to the outbound channel. How can we handle the entire consume/process/produce as part of one transaction, so that if the acknowledgment fails, the message is not sent to the topic?
I have also set the transaction properties below:
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=TX-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=1
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=error.topic
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=true
If there is any example available, that would be really helpful.
Cheers
You need to make the binder transactional. See the documentation
https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.4/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default null (no transactions)
Note that consumers on the output topic must be configured with isolation.level=read_committed to avoid receiving rolled-back records.
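For example, the raw client property can be passed through the binder configuration (a sketch; the key names follow the Spring Cloud Stream and Spring Boot property conventions):
spring.cloud.stream.kafka.binder.configuration.isolation.level=read_committed
or, for a plain Spring Boot consumer application reading the output topic:
spring.kafka.consumer.properties.isolation.level=read_committed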
I am using Hystrix to handle fallback scenarios when my messages are not delivered to the RabbitMQ server. My fallback is getting invoked when the RabbitMQ server is down (as an AmqpException is thrown).
If the broker is unable to accept/route the messages, then the return callback / publisher confirm (with nack) is invoked.
What I understand is that the RabbitTemplate return callbacks / publisher confirms will be executed on a different thread than the Hystrix thread.
Is it possible to throw an exception in these scenarios so that the Hystrix fallback gets executed?
I have referred to these questions: Spring AMQP return callback vs retry callback
Spring RabbitTemplate- How to get hold of the published message for NACKs in Publisher confirm mode
Any pointer to handle this scenario is much appreciated.
No; returns are completely asynchronous, even if you enable transactions - from the RabbitMQ documentation...
AMQP does not specify when errors (e.g. lack of permissions, references to unknown exchanges) in transactional basic.publish and basic.ack commands should be detected. RabbitMQ performs the necessary checks immediately (rather than, say, at the time of commit), but note that both basic.publish and basic.ack are asynchronous commands so any errors will be reported back to the client asynchronously.
If you publish to a non-existent exchange (and setChannelTransacted(true)*) you will get an exception on the commit, but publishing to an exchange with no routable queue will never get an exception (only an async return callback).
* Enabling transactions is quite expensive for all operations, so consider it carefully if you only want to catch this particular scenario.
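For illustration, here is how those asynchronous callbacks are wired up in Spring AMQP (a sketch assuming version 2.3+, where setReturnsCallback replaced the older setReturnCallback); both callbacks run on connection threads, never on the publishing (Hystrix) thread:
CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
cf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
cf.setPublisherReturns(true);

RabbitTemplate template = new RabbitTemplate(cf);
template.setMandatory(true); // required so unroutable messages are returned
template.setConfirmCallback((correlationData, ack, cause) -> {
    // invoked asynchronously on a connection thread; a broker nack lands here
});
template.setReturnsCallback(returned -> {
    // invoked asynchronously when the broker cannot route the message
});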
I have a @RabbitListener annotated method for which Spring AMQP blocks after returning from the method. The underlying SimpleRabbitListenerContainerFactory uses AcknowledgeMode.MANUAL. I don’t want to acknowledge the message in the listener method, yet.
Is there any way to not have Spring AMQP block in such a scenario?
In more detail
I use a listener like this:
@RabbitListener(queues = "#{ @myQueue }")
void recordRequestsFromMyMessages(
        @Payload MyMessage myMessagePayload,
        @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag,
        Channel channel) {
// record relevant parts of the given message and combine them with
// parts from previous/future messages
// DON'T acknowledge the consumed message, yet; instead only keep a
// record of the channel and the delivery tag
}
Since I batch/combine multiple messages before I actually process them (asynchronously) later, I don’t want to acknowledge the consumed message right away. Instead, I only want to do this once the messages have been successfully processed later.
With my current approach, Spring AMQP blocks after returning from the recordRequestsFromMyMessages method above, and no further messages are consumed from the same queue.
This SO answer suggests that batch processing should work; however, I’m not sure how.
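For context, the deferred acknowledgment described above would later look something like this (a sketch using the recorded channel and delivery tag; basicAck with multiple=true acknowledges every delivery up to and including that tag on the channel):
// After the batch has been processed successfully, acknowledge all
// messages up to and including the last recorded delivery tag.
channel.basicAck(lastDeliveryTag, true);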
It's not the container that's "blocking".
You need to increase the prefetchCount on the container (default 1) - the broker only allows that number of unacked messages to be outstanding.
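A minimal sketch of such a container factory (standard Spring AMQP setters; the prefetch value of 250 is just an example):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    // Allow up to 250 unacked deliveries per consumer so batching does not stall.
    factory.setPrefetchCount(250);
    return factory;
}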
I am trying to use Camel to consume a single message from a JMS queue in a transacted manner. Specifically in a flow like this:
Wait until message is published on JMS queue
Try to consume and process the single message
If processing fails (exception occurs), rollback the consumption
If the processing passes, acknowledge and stop consuming anymore messages
Later in the application lifecycle, another process triggers consumption to start again from (1)
At first I tried to do this using a polling consumer with the ConsumerTemplate, but I can't figure out if it's possible to do this transactionally - it seems like the transaction is internal to the ConsumerTemplate, so regardless of what I do, the message is already acknowledged as consumed by the time the ConsumerTemplate returns.
Can I do this using the ConsumerTemplate? Can I do this using Camel and if so what is the best approach (Simple examples would be appreciated)?
I ended up using the pollEnrich DSL to achieve this. For example, my route builder looks like:
from("direct:service-endpoint")
    .transacted("PROPAGATION_REQUIRED")
    .setExchangePattern(ExchangePattern.InOut)
    .pollEnrich("activemq:test-queue")
    .bean(myHandler);
I use the direct endpoint as a service: sending a "request" message to the direct endpoint polls the JMS queue for a single message (blocking if required). The transaction extends to the pollEnrich, so if, for example, the myHandler bean fails, the message taken during the pollEnrich is not consumed and is left on the queue.
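For completeness, a sketch of the transaction policy bean that transacted("PROPAGATION_REQUIRED") looks up by name (assuming a Spring-managed CamelContext and a JMS PlatformTransactionManager such as JmsTransactionManager):
@Bean
public SpringTransactionPolicy PROPAGATION_REQUIRED(PlatformTransactionManager transactionManager) {
    // Camel resolves transacted("PROPAGATION_REQUIRED") to this bean by name.
    SpringTransactionPolicy policy = new SpringTransactionPolicy();
    policy.setTransactionManager(transactionManager);
    policy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
    return policy;
}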