How should Apache Pulsar Consumer.acknowledgeAsync() failure be handled? - java

I am using Consumer.acknowledgeAsync() to ack messages in my Java service and was wondering what happens if the ack fails. Should I retry the operation a few times and discard my consumer when retries are exhausted?
I am counting the number of messages being processed for flow-control to limit memory usage.

Usually, if a message is not acknowledged successfully, the broker will redeliver it to the consumer after the ackTimeout. So in most cases there is no need to retry the ack yourself. Handling along these lines is usually enough:
consumer.acknowledgeAsync(msgId)
        .thenAccept(ignore -> successHandlerMethod())
        .exceptionally(exception -> {
            failHandlerMethod(exception);
            return null;
        });
(Note that exceptionally must return a value, and acknowledgeAsync() completes with Void, not with the consumer.)
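For the redelivery safety net mentioned above to apply, the ackTimeout has to be set when the consumer is built. A minimal sketch, assuming a local broker and placeholder topic/subscription names:

import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")   // placeholder service URL
        .build();

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")                       // placeholder topic
        .subscriptionName("my-subscription")     // placeholder subscription
        .ackTimeout(30, TimeUnit.SECONDS)        // unacked messages are redelivered after this timeout
        .subscribe();

With that in place, a failed acknowledgeAsync() simply means the message comes back after the timeout, so the failure handler mostly needs to release whatever flow-control counter you keep for in-flight messages.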

Related

Kafka producer config: Why request.timeout.ms should be larger than replica.lag.time.max.ms

The Kafka documentation at https://kafka.apache.org/11/documentation.html#producerconfigs says:
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.
Why a small request.timeout.ms may cause duplication? Could someone elaborate more on that?
And does this still hold true if the producer retries config is set to 0?
If request.timeout.ms is smaller than replica.lag.time.max.ms, a follower broker might still persist the message successfully while the producer's request has already timed out. The producer then retries a write that in fact succeeded, which causes the duplication due to unnecessary producer retries.
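To make the relationship between the two settings concrete, a rough sketch of the producer side (broker address and values are placeholders, not recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
// Client side: how long to wait for a broker response before retrying or failing the request.
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
// Retries are what turn a "timed out but actually persisted" write into a duplicate.
props.put(ProducerConfig.RETRIES_CONFIG, "3");
// Broker side (server.properties), shown only for comparison:
// replica.lag.time.max.ms=10000   -> request.timeout.ms should stay larger than this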

Producer taking longer time to throw exception in case of kafka broker down

I want to handle the case where the Kafka broker is down on the Kafka producer end; currently it takes a long time before the error below is shown.
Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for logging-0: 30030 ms has passed since batch creation plus linger time
How to handle this?
The producer waits for request.timeout.ms for a response from the broker.
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.
It is set to 30000 ms by default. Be careful if you reduce it: if it is too short, the producer can retry too quickly and produce duplicates.
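If the goal is to fail faster and react to the failure instead of waiting 30 seconds, one option is to shorten the relevant timeouts and handle the exception in the send callback. A sketch under those assumptions (broker address, topic and timeout values are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");            // fail faster than the 30 s default
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");                  // don't let send() block forever on metadata
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("logging", "payload"), (metadata, exception) -> {
    if (exception instanceof TimeoutException) {
        // broker unreachable: alert, buffer locally, retry later, etc.
    }
});

Keep in mind that lowering the timeouts trades a faster failure for more spurious errors under load, so treat the numbers as a starting point.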

Acknowledge message in SQS queue

I am using Amazon SQS with Amazon SQS-JMS java library with Java EE 7. What I want to achieve is after receiving a message, depending on business logic of the application either confirm (consume) the message or resend it to the queue again and after 3 failed retries move it to DLQ.
I thought about using CLIENT_ACKNOWLEDGE mode in JMS and only acknowledging the messages that were successfully processed, but this is from their official documentation:
In this mode, when a message is acknowledged, all messages received before this message are implicitly acknowledged as well. For example, if 10 messages are received, and only the 10th message is acknowledged (in the order the messages are received), then all of the previous nine messages are also acknowledged.
This example also seems to confirm this: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/code-examples.html#example-synchronous-receiver-client-acknowledge-mode.
For me this is kind of strange behavior and the opposite of what I would expect from CLIENT_ACKNOWLEDGE. Is there a more elegant solution here than manually sending the message throughout the code to the main SQS queue or the DLQ depending on processing status?
You can use SQSSession.UNORDERED_ACKNOWLEDGE, which comes from com.amazon.sqs.javamessaging. As it states in the documentation, it is a variation of CLIENT_ACKNOWLEDGE that only acknowledges the message on which acknowledge() is called:
/**
* Non standard acknowledge mode. This is a variation of CLIENT_ACKNOWLEDGE
* where Clients need to remember to call acknowledge on message. Difference
* is that calling acknowledge on a message only acknowledge the message
* being called.
*/
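Roughly, the wiring looks like the sketch below, written against the 1.0.x library referenced in this answer; the region, credentials provider and queue name are assumptions:

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazon.sqs.javamessaging.SQSSession;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;

SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
        .withRegion(Region.getRegion(Regions.US_EAST_1))                  // placeholder region
        .withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain())
        .build();

SQSConnection connection = connectionFactory.createConnection();
// UNORDERED_ACKNOWLEDGE: acknowledge() only acks the message it is called on
Session session = connection.createSession(false, SQSSession.UNORDERED_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("my_q"));   // placeholder queue
connection.start();

Message message = consumer.receive();
try {
    // ... business logic ...
    message.acknowledge();   // success: this single message is deleted from the queue
} catch (Exception e) {
    // no acknowledge: the message becomes visible again and, with a redrive policy, ends up in the DLQ
}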
dependency example:
"com.amazonaws:amazon-sqs-java-messaging-lib:1.0.3"
To handle this case you can use the RedrivePolicy attribute, which points the source queue at the DLQ you created. A solution for this case can be:
Create two SQS queues, say my_q and my_q_dl (the latter is the DLQ).
Set my_q_dl as the DLQ of my_q by using RedrivePolicy.
Here, care should be taken to specify deadLetterTargetArn and maxReceiveCount. maxReceiveCount is the number of times you want any message to be delivered without acknowledgement before it is sent to the DLQ. If you set maxReceiveCount=3, the message will remain in my_q up to the third pull by the consumer with no ack.
Two cases here:
Normal case: the message gets deleted as soon as the ack is received.
If there is no ack (message delete) for that message up to the third delivery, the message is removed from my_q and pushed to my_q_dl.
RedrivePolicy - The string that includes the parameters for the dead-letter queue functionality of the source queue.
deadLetterTargetArn - The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount - The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
Note: The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
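If you prefer to set this up from code rather than the console, a rough sketch with the AWS SDK for Java v1 (queue names and maxReceiveCount reuse the example above; error handling omitted):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

String srcUrl = sqs.createQueue("my_q").getQueueUrl();
String dlqUrl = sqs.createQueue("my_q_dl").getQueueUrl();

// Look up the DLQ's ARN, which the redrive policy needs
String dlqArn = sqs.getQueueAttributes(
        new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
        .getAttributes().get("QueueArn");

// Attach the redrive policy to the source queue: 3 unacked receives, then move to the DLQ
String redrivePolicy = "{\"maxReceiveCount\":\"3\",\"deadLetterTargetArn\":\"" + dlqArn + "\"}";
sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(srcUrl)
        .addAttributesEntry("RedrivePolicy", redrivePolicy));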

RabbitMQ Spring AMQP - Message Processing after some time

Using Spring AMQP (with RabbitMQ as the message broker), I am preparing a message and I want it to be consumed only after some time. Until then it can wait in some queue, like a waiting queue, and then be moved to our main queue, where a consumer is waiting to process messages from the main queue.
I am confused about whether I should apply a dead letter exchange in this scenario, and how to apply the dead letter exchange here is the big question for me.
Any idea how we can make this work?
P.S. If possible, without the rabbitmq_delayed_message_exchange plugin.
If you don't want to use the delayed exchange plugin, you can send a message to a queue with a time to live (ttl set on the queue or message).
Configure the queue to route expired messages to a dead letter exchange which routes to the final queue.
someExchange -> ttlQueueWithDLX -> DLX -> liveQueue
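In Spring AMQP terms that could look roughly like the bean declarations below; the queue/exchange names and the 60-second TTL are placeholders, and the TTL could equally be set per message instead of per queue:

import java.util.HashMap;
import java.util.Map;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;

@Bean
public Queue ttlQueueWithDLX() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-message-ttl", 60000);                  // how long the message "waits"
    args.put("x-dead-letter-exchange", "dlx");         // where expired messages are routed
    args.put("x-dead-letter-routing-key", "liveQueue");
    return new Queue("ttlQueueWithDLX", true, false, false, args);
}

@Bean
public DirectExchange dlx() {
    return new DirectExchange("dlx");
}

@Bean
public Queue liveQueue() {
    return new Queue("liveQueue", true);
}

@Bean
public Binding dlxToLive() {
    return BindingBuilder.bind(liveQueue()).to(dlx()).with("liveQueue");
}

The consumer only listens on liveQueue; nothing consumes from ttlQueueWithDLX, so every message sits there until its TTL expires and is then dead-lettered into the live queue.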

Kafka - producer - handle "failed to send"

I'm running Kafka 0.8 and built a producer using the provided Java API.
The API's functions for sending a message (or messages) return void.
Is there a way to get the status of a sent message, i.e. whether it was sent or failed?
This is extremely important to us since we read the messages from a file and want to delete the file after all messages have been sent. If there were errors and some messages weren't sent, deleting the file would cause the loss of very important data.
You can configure your producer to wait until it gets n acks from the Kafka cluster (request.required.acks) so that you have some kind of guarantee that the data has been committed properly before deleting your source file.
If you really need to be sure that the message was sent successfully, you might want to consider making the producer synchronous (producer.type=sync). This way you can catch any exception thrown by the blocking invocation and act accordingly. The exception thrown by send() is kafka.common.FailedToSendMessageException.
Kafka's Java API is not ideal, hope this helps you.
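Putting those two settings together, a sketch against the old 0.8 producer API (broker list, topic and payload are placeholders):

import java.util.Properties;
import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092");   // placeholder broker list
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("request.required.acks", "1");   // wait for the leader's ack ("-1" waits for all in-sync replicas)
props.put("producer.type", "sync");        // make send() block so failures surface as exceptions

Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
try {
    producer.send(new KeyedMessage<>("my-topic", "one line from the file"));
    // only mark the line/file as sent once send() returns without throwing
} catch (FailedToSendMessageException e) {
    // do not delete the file; retry or alert instead
} finally {
    producer.close();
}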
