I am trying to create a delayed exchange and send a message to a corresponding queue. But I find that just after creating the exchange, the message is not routed to the queue (and consequently is not consumed either).
The strange thing is that after a while, say 30 minutes, if I try again with the same code, the message is sent to the queue and consumed.
Here is what my application.properties looks like:
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=output.webhook.delay
spring.cloud.stream.bindings.output.producer.required-groups=webhook.delay
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.input.group=webhook.delay
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.input.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.input.consumer.lazy=true
spring.cloud.stream.rabbit.bindings.input.consumer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.output.producer.dead-letter-routing-key=output.webhook.delay.dlq
spring.cloud.stream.rabbit.bindings.output.producer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.output.producer.lazy=true
spring.cloud.stream.rabbit.bindings.output.producer.delayed-exchange=true
spring.cloud.stream.rabbit.bindings.output.producer.delay-expression=3000
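For reference, here is a minimal sketch of how the producing side might send a message through the output binding above (the class name and payload are illustrative, not my actual code):

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

public class WebhookDelaySender {

    private final MessageChannel output; // the channel bound as "output" above

    public WebhookDelaySender(MessageChannel output) {
        this.output = output;
    }

    public void send(String payload) {
        // The binder applies the configured routing-key-expression and the
        // 3000 ms delay-expression when publishing to the delayed exchange.
        output.send(MessageBuilder.withPayload(payload).build());
    }
}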
The RabbitMQ admin page shows that the exchange is created with exchange type x-delayed-message, and I have installed the delayed-message exchange plugin.
What am I doing wrong? Thanks in advance.
I have a problem with my client. I don't know where to look or how to pinpoint the problem, but as far as I know I am using QoS 2 and my broker is Mosquitto. Has anyone had a problem with messages that are reported as delivered but never received?
My process is like this:
ClientServer (acts as a bridge to the database) subscribes to "topic1".
Client publishes a payload to "topic1".
Something goes wrong, so ClientServer sends back to Client that the payload has not been saved.
Client receives the message and sends the message with the correct payload again.
ClientServer doesn't receive it anymore (mostly after 2 or more publishes).
Then I used another client to send some mqtt-client statistics as a payload message to the ClientServer, and in the ClientServer's publish tokens most of the IMqttDeliveryToken data is pending. I don't know why; is it because of QoS 2?
So is there a problem with my current pseudo-code when using QoS 2 with Client (having the same unique client id) and ClientServer (having the same unique client id)?
PS: What I mean by "same unique client id" is that my clients do not use a generated client id at runtime, so that QoS 2 can work.
I think I found my answer.
It seems the way to get past this problem is to raise
max_inflight_messages
(in mosquitto.conf)
to however many messages the ClientServer needs to accommodate. The default was 10, which I think is why 100+ records sent asynchronously ended up pending; or I don't know exactly what happened, but it stopped processing incoming messages.
For my testing I temporarily set it to 1000.
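For reference, the relevant mosquitto.conf line would look roughly like this (1000 is just the value I used while testing):

# mosquitto.conf
# Maximum number of outgoing QoS 1/2 messages that may be in flight per client.
# 0 means unlimited.
max_inflight_messages 1000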
I hope someone can enlighten me with additional information about these in-flight messages.
I have an application using MQTT implemented with paho-mqtt-1.0.2, and I am using ActiveMQ as the broker. I have a class implementing MqttCallback; what I am wondering is why the client hangs here:
@Override
public void messageArrived(String topic, MqttMessage message) throws Exception {
    // do work
    mqtt.publish(TOPIC, PAYLOAD, 2, false);   // <- hangs here
}
I want to send a "response" message to the broker so the next step of the work can be done. Related to this, I read in the docs for that callback:
It is possible to send a new message within an implementation of this callback (for example, a response to this message), but the implementation must not disconnect the client, as it will be impossible to send an acknowledgment for the message being processed, and a deadlock will occur.
Has anyone out there tried doing the above and gotten it to work?
I also tried using the MqttAsyncClient and that ended up with
"Error too many publishes in progress" leading to undelivered messages.
I know how to get around this issue; I'm not looking for a workaround. I'm looking to receive and publish on the thread where messageArrived() gets executed.
Happy Hunting!
String queueA = "rabbitmq://host:5672/queue-a.exchange?queue=queue-a.exchange..etc
from(queueA)
    .routeId("idForQueueA")
    .onException(Exception.class)
        .maximumRedeliveries(0)
        // .processRef("sendEmailAlert")   (not sure this belongs here)
        .to(deadLetterQueueA)
        .useOriginalMessage()
    .end()
    .processRef("dataProcessing")
    .processRef("dataExporting")
    .end();
Explaining the code above:
Messages are taken from queueA. If the various processing steps succeed, the message is consumed. If processing fails, the message is added to the dead letter queue "deadLetterQueueA". This all works OK.
My question is
When messages arrive in the dead letter queue I want to add alerts so we know to do something about it. How could I add an email alert when a message arrives in the dead letter queue? I don't want to lose the original message if the alert fails, nor do I want the alert to consume the message.
My thought is that I would need to split the message on an exception so it is sent to two different queues: one for the alert, which sends out an email and is then consumed, and one for the dead letter queue, where the message just sits. However, I'm not sure how to do this.
You can split a message to go to multiple endpoints using a multicast (details here):
.useOriginalMessage().multicast().to(deadLetterQueueA, "smtp://username@host:port?options")
This uses the camel mail component endpoints described here. Alternatively, you can continue processing the message after the to(), so something like:
.useOriginalMessage()
.to(deadLetterQueueA)
.transform().simple("Hi <name>, there has been an error on the object ${body.toString}")
.to("smtp://username#host:port?options")
If you had multiple recipients, you could use a recipients list
import org.apache.camel.RecipientList;

public class EmailListBean {
    @RecipientList
    public String[] emails() {
        return new String[] {"smtp://joe@host:port?options",
                             "smtp://fred@host:port?options"};
    }
}
.useOriginalMessage()
.to(deadLetterQueueA)
.transform().simple("...")
.bean(EmailListBean.class)
Be careful of using JMS queues to store messages while waiting for a human to action them. I don't know what sort of message traffic you're getting; I'm assuming that if you want to send an email for every failure, it's not a lot. But I would normally be wary of this sort of thing, and would choose to use logging or database persistence to store the results of errors, using a JMS error queue only to notify other processes or consumers of the error or to schedule a retry.
There are two ways you can do this, but depending on your message volume you might not want to send an email for every failed message.
You can use the solution provided by AndyN, or you can use the advisory topic ActiveMQ.Advisory.MessageDLQd.Queue.*: whenever a message gets into the DLQ, the enqueue count of that topic increases by 1. By monitoring the queue depth you can then send a mail based on the number of errors that occurred.
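For example, a rough sketch of listening on that advisory topic with the plain JMS client (the broker URL and the alert step are placeholders, not part of the original route):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DlqAdvisoryListener {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ">" matches the DLQ advisory for every queue; narrow it to one queue if preferred
        Topic advisoryTopic = session.createTopic("ActiveMQ.Advisory.MessageDLQd.Queue.>");

        MessageConsumer consumer = session.createConsumer(advisoryTopic);
        consumer.setMessageListener(advisory -> {
            // Each advisory means one message was moved to a DLQ; the original message
            // stays in the DLQ untouched, so alerting here cannot lose it.
            System.out.println("A message hit the DLQ, time to alert: " + advisory);
            // sendEmailAlert(advisory); // hypothetical helper
        });
    }
}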
If you want to do it at the producer end, you can use any one of the solutions provided by AndyN.
I have a queue and a topic with 2 messages in ActiveMQ. If I restart ActiveMQ, I lose the messages and also the topic.
Even if I restart ActiveMQ, I don't want to lose any messages from any topic or queue. Is this possible?
I am using ActiveMQ 5.8.0.
A producer produces a message and sends it to the topic; whichever consumers are running at that point in time will receive the message. If you want a consumer that is not up now, but might be running in the future, to get this message, you have to tell the broker to persist the message and to record that this particular consumer has not yet received it.
If you have working code without a durable subscriber, you will have to make the following changes.
In the consumer:
1. Set the clientId, because the topic has to know which consumers have already received the message and which have not.
Connection.setClientID(String)
2. Create a durable subscriber for your topic:
Session.createDurableSubscriber(Topic, String)
3. Add your listener to this subscriber:
subscriber.setMessageListener(yourListener)
4. Once you receive the message, you will have to acknowledge it. All four steps are put together in the sketch below.
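A minimal sketch (the broker URL, client id, topic, and subscription name are placeholders):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableTopicConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("reporting-consumer");   // step 1: fixed client id
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("my-topic");

        // step 2: durable subscriber, so messages published while this consumer
        // is offline are kept by the broker until it reconnects
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "reporting-subscription");

        // steps 3 and 4: with AUTO_ACKNOWLEDGE, the acknowledgement happens
        // automatically once the listener returns without throwing
        subscriber.setMessageListener(message -> System.out.println("Received: " + message));
    }
}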
This link shows how it is done, though I think it's in C#:
http://myadventuresincoding.wordpress.com/2011/08/16/jms-how-to-setup-a-durablesubscriber-with-a-messagelistener-using-activemq/
Read these links for more info:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
http://activemq.apache.org/why-do-i-not-receive-messages-on-my-durable-topic-subscription.html
http://activemq.apache.org/manage-durable-subscribers.html
RabbitMQ's Channel#basicConsume method gives us the following arguments:
channel.basicConsume(queueName, autoAck, consumerTag, noLocal,
exclusive, arguments, callback);
This gives us the ability to tell RabbitMQ exactly which queue we want to consume from.
But Channel#basicPublish has no such equivalent:
channel.basicPublish(exchangeName, routingKey, mandatory, immediateFlag,
basicProperties, messageAsBytes);
Why can't I specify the queue to publish to here?!? How do I get a Channel publishing to, say, a queue named logging? Thanks in advance!
To expand on @Tien Nguyen's answer, there is a "cheat" in RabbitMQ that effectively lets you publish directly to a queue. Each queue is automatically bound to the AMQP default exchange, with the queue's name as the routing key. The default exchange is also known as the "nameless exchange", i.e. its name is the empty string. So if you publish to the exchange named "" with the routing key equal to your queue's name, the message will go to just that queue. It is going through an exchange, as @John said; it's just not one that you need to declare or bind yourself.
I don't have the Java client handy to try this code, but it should work.
channel.basicPublish("", myQueueName, false, false, null, myMessageAsBytes);
That said, this is mostly contrary to the spirit of how RabbitMQ works. For normal application flow you should declare and bind exchanges. But for exceptional cases the "cheat" can be useful. For example, I believe this is how the Rabbit Admin Console allows you to manually publish messages to a queue without all the ceremony of creating and binding exchanges.
Basically, queues can be bound to an exchange based on routing keys.
Assume that you have 3 different publishers:
Publisher1 sends messages to the exchange with routing key "events"
Publisher2 sends messages to the exchange with routing key "tasks"
Publisher3 sends messages to the exchange with routing key "jobs"
You can have a consumer that consumes only messages with a specific routing key.
For example, to have a consumer for the "events" messages, you declare the binding like this:
channel.queueBind(queueName, exchangeName, "events");
If you want a queue to receive all of the messages arriving at a topic exchange, bind it with the routing key '#'.
So, in short:
1. Messages are published to an exchange.
2. Queues are bound to the exchange based on routing keys.
3. RabbitMQ forwards messages with matching routing keys to the corresponding queues (see the sketch after this list).
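A minimal sketch of those three points with the RabbitMQ Java client (the exchange, queue, and routing key names are made up for illustration):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RoutingSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // 1. Producers publish to an exchange, never to a queue directly
        channel.exchangeDeclare("app.exchange", "direct", true);

        // 2. A queue is bound to the exchange with a routing key
        channel.queueDeclare("events.queue", true, false, false, null);
        channel.queueBind("events.queue", "app.exchange", "events");

        // 3. A message published with routing key "events" is forwarded to events.queue
        channel.basicPublish("app.exchange", "events", null, "hello".getBytes("UTF-8"));

        channel.close();
        connection.close();
    }
}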
Please see the tutorial - http://www.rabbitmq.com/tutorials/tutorial-three-java.html
The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn't even know if a message will be delivered to any queue at all. Instead, the producer can only send messages to an exchange
Please try this:
channel.basicPublish("", yourQueueName, null,
message.getBytes((Charset.forName("UTF-8"))));
It worked for my project.