Using Spring AMQP (with RabbitMQ as the message broker), I am preparing a message and I want it to be consumed only after some delay. Until then it should wait in some queue, like a waiting queue, and then be moved to our main queue, where a consumer is waiting to process messages.
I am confused about whether a dead letter exchange applies in this scenario, and how to apply it is the big question for me.
Any idea how we can make this work?
P.S.: If possible, without the rabbitmq_delayed_message_exchange plugin.
If you don't want to use the delayed exchange plugin, you can publish the message to a queue with a time to live (a TTL set on the queue or on the message).
Configure that queue to route expired messages to a dead letter exchange, which routes them to the final queue.
someExchange -> ttlQueueWithDLX -> DLX -> liveQueue
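A minimal Spring AMQP sketch of that topology, using the names from the diagram above (the 30-second TTL and the routing keys are my own assumptions). The waiting queue has no consumers; when a message's TTL expires, RabbitMQ dead-letters it to the exchange that feeds the live queue:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DelayedDeliveryConfig {

    @Bean
    public DirectExchange someExchange() {
        return new DirectExchange("someExchange");
    }

    @Bean
    public DirectExchange dlx() {
        return new DirectExchange("DLX");
    }

    // Waiting queue: no consumers listen here; messages expire after 30s
    // and are dead-lettered to "DLX" with routing key "live".
    @Bean
    public Queue ttlQueueWithDLX() {
        return QueueBuilder.durable("ttlQueueWithDLX")
                .withArgument("x-message-ttl", 30000)          // assumed delay
                .withArgument("x-dead-letter-exchange", "DLX")
                .withArgument("x-dead-letter-routing-key", "live")
                .build();
    }

    @Bean
    public Queue liveQueue() {
        return QueueBuilder.durable("liveQueue").build();
    }

    @Bean
    public Binding waitBinding() {
        // publish to someExchange with routing key "wait" to start the delay
        return BindingBuilder.bind(ttlQueueWithDLX()).to(someExchange()).with("wait");
    }

    @Bean
    public Binding liveBinding() {
        return BindingBuilder.bind(liveQueue()).to(dlx()).with("live");
    }
}

With this in place, a message sent to someExchange with routing key "wait" sits in ttlQueueWithDLX for the TTL and then appears on liveQueue, where the real consumer listens.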
I have created a queue and a delayed exchange, and I send a message routed to the corresponding queue. But I find that just after creating the exchange, the message is not delivered to the queue (and consequently not consumed either).
The strange thing is that after a while, say 30 minutes, I try again with the same code and the message is delivered to the queue and consumed.
Here is what my application.properties looks like:
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=output.webhook.delay
spring.cloud.stream.bindings.output.producer.required-groups=webhook.delay
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.input.group=webhook.delay
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.input.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.input.consumer.lazy=true
spring.cloud.stream.rabbit.bindings.input.consumer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.output.producer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.output.producer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.output.producer.lazy=true
spring.cloud.stream.rabbit.bindings.output.producer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.delay-expression=3000
The RabbitMQ admin page shows that the exchange is created with exchange type x-delayed-message, and I have installed the delayed exchange plugin.
What am I doing wrong? Thanks in advance.
I am using Amazon SQS with the Amazon SQS-JMS Java library on Java EE 7. What I want to achieve is: after receiving a message, depending on the business logic of the application, either confirm (consume) the message or resend it to the queue, and after 3 failed retries move it to the DLQ.
I thought about using CLIENT_ACKNOWLEDGE mode in JMS and only acknowledging the messages that were successfully processed, but this is from the official documentation:
In this mode, when a message is acknowledged, all messages received before this message are implicitly acknowledged as well. For example, if 10 messages are received, and only the 10th message is acknowledged (in the order the messages are received), then all of the previous nine messages are also acknowledged.
This example also seems to confirm this: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/code-examples.html#example-synchronous-receiver-client-acknowledge-mode.
For me this is rather strange behavior, the opposite of what I would expect from CLIENT_ACKNOWLEDGE. Is there a more elegant solution than manually sending messages throughout the code to the main SQS queue or the DLQ depending on processing status?
You can use:
SQSSession.UNORDERED_ACKNOWLEDGE
which comes from com.amazon.sqs.javamessaging. As its documentation states, it is a variation of CLIENT_ACKNOWLEDGE that acknowledges only the message for which it is called.
/**
* Non standard acknowledge mode. This is a variation of CLIENT_ACKNOWLEDGE
* where Clients need to remember to call acknowledge on message. Difference
* is that calling acknowledge on a message only acknowledge the message
* being called.
*/
Dependency example:
"com.amazonaws:amazon-sqs-java-messaging-lib:1.0.3"
To handle this case you can use the RedrivePolicy attribute of the queue. The solution:
Create two SQS queues, say my_q and my_q_dl (the latter is the DLQ).
Set my_q_dl as the DLQ of my_q by using RedrivePolicy.
Here, care should be taken to specify deadLetterTargetArn and maxReceiveCount. maxReceiveCount is the number of times any message can be received without acknowledgement before being sent to the DLQ. If you set maxReceiveCount=3, the message remains in my_q up to the third pull by the consumer with no ack.
Two cases here:
Normal case: the message gets deleted as soon as the ack is received.
If there is no ack (message delete) for that message by the third receive, the message is removed from my_q and pushed to my_q_dl.
RedrivePolicy - The string that includes the parameters for the dead-letter queue functionality of the source queue.
deadLetterTargetArn - The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount - The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
Note: The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
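A sketch of wiring this up with the AWS SDK for Java (v1), assuming the queue names above; region and credentials setup is omitted:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

public class RedrivePolicyExample {

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        String mainQueueUrl = sqs.createQueue("my_q").getQueueUrl();
        String dlqUrl = sqs.createQueue("my_q_dl").getQueueUrl();

        // deadLetterTargetArn requires the DLQ's ARN, not its URL
        String dlqArn = sqs.getQueueAttributes(
                new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
                .getAttributes().get("QueueArn");

        // After 3 receives without a delete, SQS moves the message to my_q_dl
        String redrivePolicy = String.format(
                "{\"deadLetterTargetArn\":\"%s\",\"maxReceiveCount\":\"3\"}", dlqArn);

        sqs.setQueueAttributes(new SetQueueAttributesRequest()
                .withQueueUrl(mainQueueUrl)
                .addAttributesEntry("RedrivePolicy", redrivePolicy));
    }
}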
I have a RabbitMQ queue and two Spring Cloud Stream consumers.
I want the messages to be processed in order across the consumers.
I thought that consumer2 would receive the second message only after consumer1 sent its ack, so I expected message1 and message2 to be processed in order, like this:
-------------------- time pass ------------------------>
consumer1: message1          message3
consumer2:          message2          message4
But that is not what happened: consumer1 and consumer2 received message1 and message2 and processed them simultaneously.
-------------------- time pass ------------------------>
consumer1: message1 message3
consumer2: message2 message4
Is there a way for Spring Cloud Stream to consume messages exclusively?
RabbitMQ (AMQP) doesn't support that; each consumer is sent up to prefetch messages at a time.
It does support exclusive consumers, but it means consumer1 would get all the messages and consumer2 would only get messages if consumer1 dies.
However, Spring Cloud Stream doesn't currently provide a property to set that option.
You would have to model your queues in a different way, e.g. by having an "incoming" queue with exactly one consumer-coordinator. This coordinator would relay messages to a "work" queue where consumer1 and consumer2 are both waiting and pick up work in a round-robin way.
They would then signal completion to the coordinator on a third queue, which would cause it to resume relaying a single message to the work queue.
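For reference, plain Spring AMQP does let you request an exclusive consumer (the takeover behavior described above), even though Spring Cloud Stream does not expose it; a minimal sketch, with an assumed queue name:

import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ExclusiveConsumerExample {

    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("work"); // hypothetical queue name
        container.setExclusive(true);    // broker allows only this consumer;
                                         // a second one gets messages only if this one dies
        container.setMessageListener((MessageListener) message ->
                System.out.println(new String(message.getBody())));
        container.afterPropertiesSet();
        container.start();
    }
}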
I am quite new to using RabbitMQ for message queuing. I have written sender and consumer code for a work queue, as given in the RabbitMQ tutorial.
[Link: http://www.rabbitmq.com/tutorials/tutorial-two-java.html]
The above works fine when we start the consumer before the sender.
But there is an issue if we start the consumer after running the sender: none of the messages are consumed by consumers that are started after the sender.
After looking into the architecture of RabbitMQ and the AMQP-related concepts, it seems quite difficult.
1] Is it possible for consumers started after the sender to receive the messages already in the queue?
2] If yes, how can this be done? Is there some technique for it?
Yes, it is possible. Make sure that your queue is declared with auto-delete set to false. If auto-delete is set to true, the queue is deleted once the last consumer unsubscribes, and any messages your sender then pushes to it are lost. If auto-delete is set to false, the queue continues to exist after your consumer has unsubscribed, and your sender can push messages to it without them being lost.
Find more info about queues at http://www.rabbitmq.com/tutorials/amqp-concepts.html#queues
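A small sketch with the RabbitMQ Java client showing the relevant declaration; the queue name task_queue follows the tutorial's convention, and durable=true is an additional (optional) safeguard against broker restarts:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DeclareQueue {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            boolean durable = true;     // queue survives a broker restart
            boolean exclusive = false;  // usable from other connections
            boolean autoDelete = false; // queue stays when consumers unsubscribe
            channel.queueDeclare("task_queue", durable, exclusive, autoDelete, null);
        }
    }
}

Declaring the same queue (with the same flags) in both sender and consumer is idempotent, so whichever side starts first creates it, and messages are buffered until a consumer arrives.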
I suppose in the first case (starting the consumer first), the consumer properly creates/registers the queue it wants to listen on with the RabbitMQ server, so when the sender sends, the consumer is able to receive.
In the second case, what's probably happening is that the sender is publishing to a queue that does not exist yet, so the messages are unroutable and get dropped (or dead-lettered, if that is configured).
I suggest you open the RabbitMQ management console and check whether the queues are created properly.
I have a message producer running on one JVM that puts messages in a JMS queue. I have a message consumer, implemented as a message-driven bean (MDB) with the MessageListener interface, that listens to this queue. The consumer runs on a different JVM.
The producer puts messages in the queue properly, but the MDB is not able to pop messages off the queue. The weird thing is that when I restart the consumer, all the messages in the queue are popped out at once. After this, no matter how many messages the producer puts in the queue, the consumer does not pop them out.
What could be the reason?
The application server I am using is JBoss 4.0.5.GA.
Thanks
Please provide more details. From what you have provided:
Is your consumer running and waiting for messages (inside some sort of while loop or a blocking call)? A generic sketch follows below.
You can set the prefetch size for your consumer to 1 in your JMS connection settings, so that it fetches only 1 (or whatever number of) messages from the queue at a time.
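On the first point, a standalone JMS consumer only keeps receiving if its thread stays alive after the listener is set; a generic javax.jms sketch (the JNDI names are assumptions, and the exact lookup strings vary by application server):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class WaitingConsumer {

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // reads jndi.properties
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/myQueue"); // hypothetical JNDI name

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> System.out.println("Received: " + message));
        connection.start();

        // Keep the JVM alive so the listener continues to receive messages
        Thread.currentThread().join();
    }
}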