String queueA = "rabbitmq://host:5672/queue-a.exchange?queue=queue-a.exchange..etc
from(queueA)
.routeId("idForQueueA")
.onException(Exception.class)
.maximumRedeliveries(0)
// .processRef("sendEmailAlert") * not sure this belongs here*
.to(deadLetterQueueA)
.useOriginalMessage()
.end()
.processRef("dataProcessing")
.processRef("dataExporting")
.end();
Explaining the code above:
Messages are taken from queueA. When the various processors succeed, the message is consumed. If processing fails, the message is added to the dead letter queue "deadLetterQueueA". This all works OK.
My question is
When messages arrive in the dead letter queue I want to add alerts so we know to do something about it. How could I add an email alert when a message arrives in the dead letter queue? I don't want to lose the original message if the alert fails, nor do I want the alert to consume the message.
My thoughts are: I would need to split the message on an exception so it's sent to two different queues? One for the alert, which then sends out an email alert and consumes itself, and one for the dead letter queue that just sits there? However, I'm not sure how to do this.
You can split a message to go to multiple endpoints using a multicast (details here):
.useOriginalMessage().multicast().to(deadLetterQueueA, "smtp://username@host:port?options")
This uses the Camel mail component endpoints described here. Alternatively, you can continue processing the message after the to, with something like:
.useOriginalMessage()
.to(deadLetterQueueA)
.transform().simple("Hi <name>, there has been an error on the object ${body.toString}")
.to("smtp://username#host:port?options")
If you had multiple recipients, you could use a recipient list:
import org.apache.camel.RecipientList;

public class EmailListBean {

    @RecipientList
    public String[] emails() {
        return new String[] {"smtp://joe@host:port?options",
                             "smtp://fred@host:port?options"};
    }
}
.useOriginalMessage()
.to(deadLetterQueueA)
.transform().simple("...")
.bean(EmailListBean.class)
Be careful of using JMS queues to store messages while waiting for a human to action them. I don't know what sort of message traffic you're getting; I'm assuming that if you want to send an email for every failure, it's not a lot. But I would normally be wary of this sort of thing and choose logging or database persistence to store the results of errors, using a JMS error queue only to notify other processes or consumers of the error or to schedule a retry.
There are two ways you can do this, but based on your message volume you might not want to send an email for every failed message.
You can use the solution provided by AndyN, or you can use the advisory topic ActiveMQ.Advisory.MessageDLQd.Queue.*: whenever a message gets into the DLQ, the enqueue count of the topic increases by 1. By monitoring the queue depth you should then be able to send a mail based on the number of errors that occurred.
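As a rough illustration, here is a minimal sketch of listening on that advisory topic with the ActiveMQ JMS client (javax.jms plus org.apache.activemq.ActiveMQConnectionFactory); the broker URL is a placeholder and sendEmailAlert stands in for your own alerting code:
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// The broker publishes one advisory message each time a message is dead-lettered.
Topic dlqAdvisories = session.createTopic("ActiveMQ.Advisory.MessageDLQd.Queue.*");
MessageConsumer consumer = session.createConsumer(dlqAdvisories);
consumer.setMessageListener(advisory -> sendEmailAlert(advisory)); // your own alert hook
connection.start();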
If you want to do it at the producer end, you can use any one of the solutions provided by AndyN.
Is there a way we can push messages to RabbitMQ with an expiry time, and once a message expires, get a notification?
Or
Is there a way we can deliver messages in RabbitMQ after a certain amount of time? For example, I want to push a message to the queue and have it delivered after 10 seconds, while continuing to push the next messages in the meantime.
Regarding the first part of your question, the routing of messages that have expired due to a per-message TTL is a feature of the RabbitMQ dead letter exchange (DLX).
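As a rough sketch with the RabbitMQ Java client (queue and exchange names here are placeholders): attach a DLX to the working queue via the x-dead-letter-exchange argument and publish with a per-message TTL; when the TTL elapses, the broker routes the message to the DLX, and a consumer bound there can raise the notification.
channel.exchangeDeclare("expired.dlx", "fanout", true);
channel.queueDeclare("expired.notifications", true, false, false, null);
channel.queueBind("expired.notifications", "expired.dlx", "");

Map<String, Object> args = new HashMap<String, Object>();
args.put("x-dead-letter-exchange", "expired.dlx"); // where expired messages are routed
channel.queueDeclare("work.queue", true, false, false, args);

// Per-message TTL of 10 seconds; if nobody consumes it in time it is dead-lettered.
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .expiration("10000")
        .build();
channel.basicPublish("", "work.queue", props, "payload".getBytes());
// A consumer on "expired.notifications" can then send the notification.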
Regarding a delay, this is not something supported by RabbitMQ out of the box, nor in my opinion should it be a feature of a message broker. I can't imagine a legitimate use case where you would deliberately want to introduce a delay into a message queue. In fact, it is a design goal of any message broker to minimize delay with enqueued messages. If you find a delay to be appropriate, then it is also likely that a message queue is not the appropriate means of conveyance.
The RabbitMQ Delayed Message Plugin adds a new exchange type to RabbitMQ; messages routed by such an exchange can be delayed if the user chooses to do so.
You can use it as described below.
// ... elided code ...
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "direct");
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, args);
// ... more code ...
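To actually delay a message, you then publish to that exchange with an x-delay header (in milliseconds). A minimal sketch, assuming a queue is already bound to "my-exchange" with routing key "my-key":
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000); // delay routing by 5 seconds
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .headers(headers)
        .build();
channel.basicPublish("my-exchange", "my-key", props, "delayed payload".getBytes());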
I have tried to create a queue, and I want to create a delayed exchange and send a message to the corresponding queue. But I find that just after creating the exchange, the message is not sent to the queue (and it is not consumed either).
The strange thing is that after a while, let's say 30 minutes, I tried again with the same code and the message is sent to the queue and consumed.
Here is what my application.properties looks like:
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=output.webhook.delay
spring.cloud.stream.bindings.output.producer.required-groups=webhook.delay
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.input.group=webhook.delay
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.input.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.input.consumer.lazy=true
spring.cloud.stream.rabbit.bindings.input.consumer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.output.producer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.output.producer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.output.producer.lazy=true
spring.cloud.stream.rabbit.bindings.output.producer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.delay-expression=3000
The RabbitMQ admin page shows that the exchange is created with exchange type x-delayed-message, and I have installed the delayed exchange plugin.
What am I doing wrong? Thanks in advance.
I am using Amazon SQS with the Amazon SQS-JMS Java library and Java EE 7. What I want to achieve is: after receiving a message, depending on the business logic of the application, either confirm (consume) the message or resend it to the queue, and after 3 failed retries move it to the DLQ.
I thought about using CLIENT_ACKNOWLEDGE mode in JMS and only acknowledging the messages that were successfully processed, but this is from their official documentation:
In this mode, when a message is acknowledged, all messages received before this message are implicitly acknowledged as well. For example, if 10 messages are received, and only the 10th message is acknowledged (in the order the messages are received), then all of the previous nine messages are also acknowledged.
This example also seems to confirm this: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/code-examples.html#example-synchronous-receiver-client-acknowledge-mode.
For me this is kind of strange behavior and the opposite of what I would expect from CLIENT_ACKNOWLEDGE. Is there a more elegant solution here than just manually sending messages throughout the code to the main SQS queue or DLQ depending on the process status?
You can use:
UNORDERED_ACKNOWLEDGE
SQSSession.UNORDERED_ACKNOWLEDGE
Which comes from com.amazon.sqs.javamessaging and, as it states in the documentation, is a variation of CLIENT_ACKNOWLEDGE that only acknowledges the message for which it is called.
/**
* Non standard acknowledge mode. This is a variation of CLIENT_ACKNOWLEDGE
* where Clients need to remember to call acknowledge on message. Difference
* is that calling acknowledge on a message only acknowledge the message
* being called.
*/
dependency example:
"com.amazonaws:amazon-sqs-java-messaging-lib:1.0.3"
To handle this case you can use the RedrivePolicy attribute of the DLQ that you created. A solution for this case can be:
Create two SQS queues, say my_q and my_q_dl (the latter is the DLQ).
Set my_q_dl as the DLQ of my_q by using RedrivePolicy.
Here, care should be taken to specify deadLetterTargetArn and maxReceiveCount. maxReceiveCount is the number of times you want any message to be received without acknowledgement before it is sent to the DLQ. If you set maxReceiveCount=3, then the message will remain in my_q up to the third pull by the consumer with no ack. (A sketch of setting this with the AWS SDK follows the quoted documentation below.)
There are 2 cases here:
Normal case: the message gets deleted as soon as the ack is received.
If there is no ack (message delete) for that message up to the third receive, then the message gets deleted from my_q and pushed to my_q_dl.
RedrivePolicy - The string that includes the parameters for the dead-letter queue functionality of the source queue.
deadLetterTargetArn - The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount - The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
Note: The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
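For completeness, a rough sketch of step 2 using the AWS SDK for Java (v1); the queue URL and ARN are placeholders:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
String redrivePolicy = "{\"maxReceiveCount\":\"3\","
        + "\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my_q_dl\"}";
sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my_q")
        .addAttributesEntry("RedrivePolicy", redrivePolicy));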
RabbitMQ's Channel#basicConsume method gives us the following arguments:
channel.basicConsume(queueName, autoAck, consumerTag, noLocal,
exclusive, arguments, callback);
This gives us the ability to tell RabbitMQ exactly which queue we want to consume from.
But Channel#basicPublish has no such equivalency:
channel.basicPublish(exchangeName, routingKey, mandatory, immediateFlag,
basicProperties, messageAsBytes);
Why can't I specify the queue to publish to here?!? How do I get a Channel publishing to, say, a queue named logging? Thanks in advance!
To expand on @Tien Nguyen's answer, there is a "cheat" in RabbitMQ that effectively lets you publish directly to a queue. Each queue is automatically bound to the AMQP default exchange, with the queue's name as the routing key. The default exchange is also known as the "nameless exchange", i.e. its name is the empty string. So if you publish to the exchange named "" with a routing key equal to your queue's name, the message will go to just that queue. It is going through an exchange as @John said; it's just not one that you need to declare or bind yourself.
I don't have the Java client handy to try this code, but it should work.
channel.basicPublish("", myQueueName, false, false, null, myMessageAsBytes);
That said, this is mostly contrary to the spirit of how RabbitMQ works. For normal application flow you should declare and bind exchanges. But for exceptional cases the "cheat" can be useful. For example, I believe this is how the Rabbit Admin Console allows you to manually publish messages to a queue without all the ceremony of creating and binding exchanges.
Basically, queues can be bound to an exchange based on routing keys.
Assume that you have 3 different publishers.
Publisher1 sending message to exchange with routingKey "events"
Publisher2 sending message to exchange with routingKey "tasks"
Publisher3 sending message to exchange with routingKey "jobs"
You can have a consumer that consumes only messages with a specific routingKey.
For example, in order to have a consumer for "events" messages you declare it like this:
channel.queueBind(queueName, exchangeName, "events");
If you want to consume all the messages coming to a topic exchange, you give the routing key as '#'.
So, in short:
1. Messages are published to an exchange.
2. Queues are bound to the exchange based on routing keys.
3. RabbitMQ forwards messages with matching routing keys to the corresponding queues.
Please see the tutorial - http://www.rabbitmq.com/tutorials/tutorial-three-java.html
The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn't even know if a message will be delivered to any queue at all. Instead, the producer can only send messages to an exchange.
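Putting those three points together, a minimal sketch with the RabbitMQ Java client (the exchange, queue, and routing key names are only examples):
// Producer side: publish to an exchange with a routing key, never to a queue directly.
channel.exchangeDeclare("work.exchange", "direct", true);

// Consumer side: declare a queue and bind it for the routing keys it cares about.
channel.queueDeclare("events.queue", true, false, false, null);
channel.queueBind("events.queue", "work.exchange", "events");

// Messages published with routing key "events" now end up in events.queue.
channel.basicPublish("work.exchange", "events", null, "event payload".getBytes());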
Please try this:
channel.basicPublish("", yourQueueName, null,
        message.getBytes(Charset.forName("UTF-8")));
It worked for my project.
Is it possible to send a message to a particular receiver using a JMS queue (HornetQ)?
Among many receivers, I want certain messages to be received only by receivers that are running on Linux.
Every suggestion is appreciated.
Thanks.
You can set a message property using Message.setObjectProperty(String, Object) and then have your consumers select the messages they are interested in using Session.createConsumer(Destination, String).
Sender example:
Message message = session.createMessage();
message.setObjectProperty("OS", "LINUX");
producer.send(message);
Receiver example:
MessageConsumer consumer = session.createConsumer(destination, "OS = 'LINUX'");
//Use consumer to receive messages.
The receiver in the example will ignore all messages that do not match the selector (they will go to some other receiver). In this case, all messages where the 'OS' property is not 'LINUX' will be ignored by this consumer.
You can set properties on the JMS message (http://download.oracle.com/javaee/1.4/api/javax/jms/TextMessage.html) and filter messages on the client side.
For example,
message.setStringProperty("TARGET_OS", "LINUX") - at sender
http://www.mkyong.com/java/how-to-detect-os-in-java-systemgetpropertyosname/ - detect the OS at the receivers and filter messages with the correct TARGET_OS property.
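One way to wire these two suggestions together is to detect the OS locally and let the broker do the filtering with a selector (this borrows the selector approach from the answer above; TARGET_OS is the property set by the sender):
String os = System.getProperty("os.name").toLowerCase().contains("linux") ? "LINUX" : "OTHER";
MessageConsumer consumer = session.createConsumer(destination, "TARGET_OS = '" + os + "'");
Message message = consumer.receive(); // only messages tagged for this platform are delivered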
You can use JMS selectors on the consumer side to look for messages that fit specific criteria.
Not sure if I am missing something, but you could keep things simple by having multiple queues, one specific to each platform; the Linux-based consumers can then listen to the Linux-specific queue alone. Your challenge will probably be routing the messages to the appropriate queue on the producer side, but that should be fairly easy if the routing is based on some attribute of the message.
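For example, a minimal sketch of that producer-side routing with plain JMS; the queue names and the task object with its targetOs attribute are hypothetical:
String targetOs = task.getTargetOs(); // some attribute of the outgoing message
String queueName = "LINUX".equals(targetOs) ? "tasks.linux" : "tasks.windows";
MessageProducer producer = session.createProducer(session.createQueue(queueName));
producer.send(session.createTextMessage(task.getPayload()));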