Let's say I have a REST endpoint that does this:
Receives a JSON body, does some mapping, and then sends the message via a messaging producer (RabbitMQ).
The producer is async.
I have a consumer for the messages produced in step 2 that will do some business logic and post a reply.
Now I need to receive that reply back in my REST endpoint after these interactions.
As the client of my REST call is expecting a reply, the solution that comes to mind is to have the endpoint listen on a reply queue with a short timeout, so that I can return the response via REST.
Am I thinking about this the right way, or should I just use a blocking producer and RPC as described here: https://www.rabbitmq.com/tutorials/tutorial-six-java.html
I want to find the optimal solution.
Note: I'm not using Spring as I'm learning all these concepts to have a clear understanding.
Producing to a queue from an HTTP call on the server is fine.
Blocking the HTTP call while waiting on a queue for its response is not the preferred way.
A reply queue may have many messages on it. What happens if you consume a message meant for a different client call? Do you pass the message to the appropriate consumer? How do you find the appropriate consumer? You may re-enqueue the message, which wastes cycles. A message may stay on the queue forever.
If the request queue is flooded, you may have to poll many times to get your result. If you receive the reply before the client polls, you need to store the reply so that you can pass it to the client on its next call.
These are just scratching the surface of the problems that may arise. If your client is really an HTTP client, you can use the points above to cover the important bases of your application.
A better solution would be to write a consumer on the reply queue which would call back some endpoint on the client. However, it would still require you to consider cases like failures and retries on both the server and client side.
I didn't answer your question, but I gave you some points to think about. Please don't mark this as the accepted answer.
I know this is an old question, but in case someone is still looking for this: it is essentially RPC in RabbitMQ. One way is to set a correlation ID on each produced message, using a unique value (a UUID, for example) per request. The consumer receives this correlation ID and sends it back with the reply, so you can compare it to check that this is the response you are expecting.
There is also a feature called Direct Reply-to explained nicely: https://www.rabbitmq.com/direct-reply-to.html
The producer consumes from "amq.rabbitmq.reply-to" and publishes with the ReplyTo property set to "amq.rabbitmq.reply-to", while the consumer, once it has received and processed the message, replies using the message's ReplyTo property as the routing key on the default (empty) exchange.
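As a rough illustration of that flow, here is a client-side sketch using the RabbitMQ Java client. The broker location, the "rpc_queue" name, and the payload are assumptions for the example, and a server consuming "rpc_queue" and echoing a reply is presumed to exist:

```java
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of a Direct Reply-to RPC client (assumes a broker on localhost
// and a server consuming from a hypothetical "rpc_queue").
public class DirectReplyToClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            String correlationId = UUID.randomUUID().toString();
            BlockingQueue<String> response = new ArrayBlockingQueue<>(1);

            // Consume from the pseudo-queue BEFORE publishing (autoAck must be true).
            channel.basicConsume("amq.rabbitmq.reply-to", true,
                (consumerTag, delivery) -> {
                    // Only accept the reply that matches our correlation ID.
                    if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                        response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                }, consumerTag -> { });

            // Publish the request with ReplyTo pointing at the pseudo-queue.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo("amq.rabbitmq.reply-to")
                    .build();
            channel.basicPublish("", "rpc_queue", props,
                    "{\"ping\":true}".getBytes(StandardCharsets.UTF_8));

            // Short timeout so the REST call can fail fast instead of hanging.
            String reply = response.poll(2, TimeUnit.SECONDS);
            System.out.println(reply != null ? reply : "timed out");
        }
    }
}
```

The short poll timeout matches the idea in the original question: the HTTP endpoint waits briefly for the correlated reply and returns an error response if none arrives.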
Hope this helps someone in the future.
I am trying to implement the following scenario and I could really use and would appreciate some help. I am using ActiveMQ 5.14 with Camel 2.21.
In the queue, each message corresponds to a single machine. The machines connect to the queue through a single polling consumer and are indistinguishable to the consumer. The messages should be kept in the queue until one machine acknowledges that it has reached the correct machine via a separate request. After each fetch of a message said message should be locked for a certain time.
I could not find any ActiveMQ functionality that translates to my problem. My approach would be to send the message after each fetch to a second queue, which serves as a lock mechanism and send it back to the fetchable queue after the specified timeout.
Maybe a better approach would be to rollback the session after each fetch if the message has not been acknowledged by the machine.
Do you have any suggestions what a viable solution to this problem would look like?
edit: more details to clarify the situation
The application communicates to the clients via exposing a REST API to the web with two calls: GET and DELETE.
GET fetches the next message from the queue, and DELETE deletes the message from the queue. I need to make sure that a message is only fetched once in a given time period and that it makes its way back to the queue if the client doesn't send a DELETE request. Currently I have a route from the REST service to a bean which fetches a message from the queue, returns it to the GET request, and sends it back to the queue afterwards. On a DELETE request I dequeue the message with the given ID.
I still need to find a way to ensure that the last fetched message can't be accessed for a specified time period.
I am a bit confused about the part with the indistinguishable machines, but I understood the following:
You have 1 queue with messages
You have 1 consumer
The consumer takes a message and calls a service or similar
If the call is successful the message can be deleted
If the call fails the message must be reprocessed
If these assumptions are correct, you can build a Camel route that consumes messages from the queue (transacted) and calls the service.
If the Camel route fails to complete (service returns error) and the error is not handled, the broker does a rollback and the message is redelivered (immediately)
If the route fails multiple times (when max redelivery value is reached), the message is sent to the dead letter queue (by the broker) to move it out of the way but save it
If the route succeeds the message consumption is committed to the broker and the message deleted
In such a setup you could also configure more consumers to process the messages in parallel (if the called service allows this)
This is more or less the default behaviour if
Your broker is configured as persistent (avoid message loss)
You consume messages transacted (a local transaction with the broker is enough)
Camel does not handle the errors (when they are handled, the message is committed because the broker does not "see" any error)
You get an error from the service or you can at least decide if there was a problem and throw the error yourself. The broker must get an Exception so that a rollback is done
EDIT
Based on your clarification I understand that it is the other way round than I assumed.
Well then I would probably see the two request types as "workflow steps" since they are triggered from the clients.
GET
Consume a message, send it to requestor
Add a timestamp to the message header
Send the message to another queue (let's call it delivered)
DELETE
Dequeue the message from the delivered queue
Not deleted messages
Use the timestamp header and a message selector to consume not-deleted messages after a certain amount of time
Move them back to the source queue
With a second queue you have various advantages
Messages in processing cannot be consumed again and therefore need no "lock"
The source queue contains only waiting messages, the delivered queue only messages in processing
You could increase message priority when sending not deleted messages back to the source queue so they are re-consumed fast
You could also add a counter header when sending not-deleted messages back to the source queue to identify messages that have failed multiple times and process them in another way.
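The "move back after a timeout" step could be sketched with plain JMS roughly as follows. The queue names ("tasks", "delivered"), the "fetchedAt" long property (assumed to be set by the GET route), and the 60-second lock period are all illustrative assumptions:

```java
import javax.jms.*;

// Sketch of the "reaper" step: move messages whose lock has expired from the
// hypothetical "delivered" queue back to the source queue.
public class NotDeletedReaper {

    // Call this periodically, e.g. from a Camel timer: route or a
    // ScheduledExecutorService.
    public static void requeueExpired(Connection conn) throws JMSException {
        // Transacted session so consume + resend is one atomic move.
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer backToSource =
                    session.createProducer(session.createQueue("tasks"));

            // JMS selectors are static strings, so "now - 60s" must be baked
            // in as a literal and the selector rebuilt on every pass.
            String selector = "fetchedAt < " + (System.currentTimeMillis() - 60_000L);
            MessageConsumer expired =
                    session.createConsumer(session.createQueue("delivered"), selector);

            Message m;
            while ((m = expired.receiveNoWait()) != null) {
                backToSource.send(m);   // could also raise JMSPriority here
            }
            session.commit();           // the move becomes visible atomically
        } finally {
            session.close();
        }
    }
}
```

Because the consume and the resend share one transacted session, a crash mid-move leaves the message safely on the delivered queue rather than losing it.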
I am using Artemis 1.3 and I want to monitor it using jConsole (as proposed in How to monitor Apache Artemis).
I am generally able to connect to Artemis, but I have some questions to its usage.
(These questions mainly concern the interface org.apache.activemq.artemis.api.jms.management.JMSQueueControl, as I believe
these are the methods that will be called via JMX):
1) I can display all messages on a queue by executing the queue's operation "listMessages" with a null parameter.
It will tell me the message's parameters like messageID, priority, whether it's durable, etc.
However, I cannot get the payload of the message. Which command can give me the contents of the message?
2) what is the filter parameter for "listMessages"?
I only get a response when I set it to null, but with every other value I only get an empty result.
3) While reading messages from queues works, I fail to read messages that were sent on a topic.
This is somewhat logical due to the way topics work, but I would have hoped that when I call "pause" on a topic, the messages
would remain until I call "resume". Unfortunately this does not work. Is there another way to see what messages arrive on a topic?
You can try the browse() operation.
For the filter parameter, you need to specify a property-value pair like JMSPriority=4 -> listMessages("JMSPriority=4")
No. Unless the subscriber is durable, messages will not be stored for a topic.
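For completeness, invoking listMessages with a filter programmatically over JMX might look roughly like this. The JMX service URL and the exact ObjectName are assumptions that depend on your Artemis configuration (check the MBeans tree in jConsole for the real name):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: call listMessages(filter) on a JMS queue control MBean remotely.
// URL, broker naming and queue name are illustrative assumptions.
public class JmxListMessages {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

            // The exact ObjectName pattern varies by Artemis version/config.
            ObjectName queue = new ObjectName(
                    "org.apache.activemq.artemis:module=JMS,type=Queue,name=\"myQueue\"");

            // The filter is a single String parameter holding a selector.
            Object result = mbsc.invoke(queue, "listMessages",
                    new Object[]{"JMSPriority = 4"},
                    new String[]{String.class.getName()});
            System.out.println(result);
        }
    }
}
```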
I am beginning to implement an ActiveMQ-based messaging service to send worker tasks to various servers; however, I am noticing that in the default mode, if no one is "listening" on a producer's topic, any message from that producer will be lost.
I.e.:
A producer sends a message to a live broker,
but no consumer is there to listen,
so the message goes nowhere.
I would like instead for the Broker to hold on to messages until at least one listener receives it.
I am trying a couple of ways of implementing this, but am not sure of the most optimal/right way:
Implement a Message Acknowledgement feature
(The caveat is that I need the producer to wait on its listener after every message, which seems very, very clunky and a last resort...)
Implement the Session Transaction
(I am having trouble with this one, it sounds like the right thing to use here because of the word transaction, but I think it has more to do with the producer-broker interaction, not the producer-consumer)
Ideally, there is a mode to send a (or a set of) messages, and after sending a Boolean is returned stating if the message(s) were listened by at least one consumer.
Transactions and acknowledgement somewhat conflict with the general idea of a JMS topic.
Just use a queue instead of a topic. Access this queue using CLIENT_ACKNOWLEDGE or a transacted session. A worker task is to be processed by one worker only anyway, so the queue solves another problem.
If there was a special reason to use topics, you could consider a message driven bean (MDB) on the same host like the JMS provider (you could achieve this by using JBoss with its integrated HornetQ for example), but this is still not really correct.
Another possibility is to have both a topic and a queue. The latter is only for guaranteed delivery of each message.
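The queue-plus-CLIENT_ACKNOWLEDGE approach suggested above could look roughly like this in plain JMS; the queue name "worker.tasks" and the doWork call are illustrative assumptions:

```java
import javax.jms.*;

// Minimal sketch of a queue-based worker using CLIENT_ACKNOWLEDGE, so the
// broker redelivers the task if the worker dies before acknowledging.
public class Worker {
    public static void run(ConnectionFactory cf) throws JMSException {
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("worker.tasks"));
        while (true) {
            Message task = consumer.receive();
            doWork(task);        // business logic; throw before ack to keep the message
            task.acknowledge();  // only now may the broker delete the message
        }
    }

    private static void doWork(Message task) { /* hypothetical task handling */ }
}
```

This gives the "at least one consumer received it" guarantee the question asks for: the message stays on the queue (and survives restarts, if persistent) until some worker explicitly acknowledges it.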
This isn't really a typical messaging pattern. Typically, you have one receiver and a durable queue, or multiple receivers with durable subscriptions to a topic. In either situation, each receiver will always receive the message. I don't really understand a use case where "at least one" receiver should receive it.
And yes, transactions only deal with the interaction between client and broker, not between client and eventual receiver(s).
I'm trying to understand the best way to coalesce or chunk incoming messages in RabbitMQ (using Spring AMQP or the Java client directly).
In other words, I would like to take, say, 100 incoming messages, combine them into one, and resend it to another queue in a reliable (correctly ACKed) way. I believe this is called the aggregator pattern in EIP.
I know Spring Integration provides an aggregator solution, but the implementation doesn't look fail-safe: it appears it has to ack and consume messages to build the coalesced message, so if you shut it down while it's doing this, you will lose messages?
I can't comment directly on the Spring Integration library, so I'll speak generally in terms of RabbitMQ.
If you're not 100% convinced by the Spring Integration implementation of the Aggregator and are going to try to implement it yourself then I would recommend avoiding using tx which uses transactions under the hood in RabbitMQ.
Transactions in RabbitMQ are slow and you will definitely suffer performance problems if you're building a high traffic/throughput system.
Rather I would suggest you take a look at Publisher Confirms which is an extension to AMQP implemented in RabbitMQ. Here is an introduction to it when it was new http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/.
You will need to tweak the prefetch setting to get the performance right, take a look at http://www.rabbitmq.com/blog/2012/05/11/some-queuing-theory-throughput-latency-and-bandwidth/ for some details.
All the above gives you some background to help solve your problem. The implementation is rather straightforward.
When creating your consumer, you will need to ensure you set it up so that an explicit ACK is required.
1. Dequeue n messages; as you dequeue, make note of the DeliveryTag for each message (this is used to ACK the message)
2. Aggregate the messages into a new message
3. Publish the new message
4. ACK each dequeued message
One thing to note is that if your consumer dies after step 3 (publish) and before step 4 (ACK) has completed, the messages that weren't ACKed will be reprocessed when it comes back to life.
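Those steps can be sketched with the RabbitMQ Java client as follows. The queue names ("input", "output") and the batch size of 100 are assumptions, and error handling is omitted for brevity:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

import java.nio.charset.StandardCharsets;

// Sketch of the dequeue/aggregate/publish/ack cycle with publisher confirms.
public class Aggregator {
    public static void aggregateBatch(Channel channel) throws Exception {
        channel.confirmSelect();                 // enable publisher confirms
        StringBuilder combined = new StringBuilder();
        long lastDeliveryTag = 0;

        // Step 1: dequeue up to n messages, remembering the last delivery tag.
        for (int i = 0; i < 100; i++) {
            GetResponse r = channel.basicGet("input", false);  // autoAck = false
            if (r == null) break;                              // queue drained
            lastDeliveryTag = r.getEnvelope().getDeliveryTag();
            combined.append(new String(r.getBody(), StandardCharsets.UTF_8)).append('\n');
        }
        if (lastDeliveryTag == 0) return;        // nothing to aggregate

        // Steps 2 and 3: publish the aggregated message, wait for broker confirm.
        channel.basicPublish("", "output", null,
                combined.toString().getBytes(StandardCharsets.UTF_8));
        channel.waitForConfirmsOrDie(5000);

        // Step 4: cumulatively ACK everything up to the last delivery tag.
        channel.basicAck(lastDeliveryTag, true); // multiple = true
    }
}
```

Waiting for the publisher confirm before ACKing the inputs is what makes a crash safe: at worst, the batch is reprocessed, never lost.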
If you set the <amqp-inbound-channel-adapter/> tx-size attribute to 100, the container will ack every 100 messages, so this should prevent message loss.
However, you might want to make the send of the aggregated message (on the 100th receive) transactional so you can confirm the broker has the message before the ack for the inbound messages.
I am fairly new to Java EE and JMS and am looking at doing an implementation using JMS.
Think of the following scenario:
Scenario
A user hits a servlet. A message is then put into a JMS server/Queue from this servlet. A response is then sent back to the user saying "Message Queued".
Option 1
The consumer/MDB receives the message from the JMS queue and processes it. This is normal operation and pretty standard.
Option 2
There is no consumer (for whatever reason), or the receiver is processing messages too slowly. So what I would like is for the message in the queue to time out. Once it has timed out, an email should be sent, etc. (email is just an example).
Reading the API spec/Java EE 6 tutorial, I have found in the QueueSender class:
void send(Message message, int deliveryMode, int priority, long timeToLive)
So by setting timeToLive, the message will be evicted from the queue. The problem is that there is no "interface/callback" to know that the message was evicted. It just disappears. Or am I mistaken?
Another approach I thought of was for a thread to monitor the queue and pull "expired" messages from it. But I don't think that is possible, is it?
Any light shed on this matter would greatly be appreciated.
You have to make use of some implementation-specific functionality to fulfill your requirements. The JMS specification neither defines which action is taken with a timed-out message, nor offers you any reasonable selection criteria when polling messages from a queue.
Most (if not all) JMS implementations do however offer the concept of DLQs (dead letter queues). If a message cannot be delivered to a regular consumer or times out, the JMS implementation will most likely be able to move the message to a DLQ, which is basically also a regular queue with its own listener.
So, if you set up two queues, Q1 and Q2 and configure Q2 as a DLQ for Q1, you would do your normal request processing in a listener on Q1 and implement an additional listener for Q2 to do the error/timeout handling.
Synchronous interaction over JMS might be of help to you as well. Basically, on the client side you:
send a message with a correlation id and time-to-live
receive a message (usually in the same thread) using the same correlation ID and specifying a timeout (time-to-live == timeout, so if you treat it as dead, it's really dead)
On the other side, server:
on an incoming message, must fetch the correlation ID
and specify that correlation ID on the response when sending it back to the client.
Of course, the server must be quick enough to fit within the timeout/time-to-live threshold.
So on the client side you are always sure what happened to the message that was sent.
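The client side of that interaction might be sketched like this in plain JMS; the destination names and the 5-second timeout are assumptions for the example:

```java
import javax.jms.*;
import java.util.UUID;

// Sketch of synchronous request/reply over JMS with a correlation ID and
// matching time-to-live/timeout.
public class SyncClient {
    public static String call(Session session, Queue requests, Queue replies,
                              String payload) throws JMSException {
        String correlationId = UUID.randomUUID().toString();

        TextMessage request = session.createTextMessage(payload);
        request.setJMSCorrelationID(correlationId);
        request.setJMSReplyTo(replies);

        MessageProducer producer = session.createProducer(requests);
        long timeout = 5_000L;
        // time-to-live == timeout: if we treat the message as dead, it's really dead.
        producer.send(request, DeliveryMode.NON_PERSISTENT,
                      Message.DEFAULT_PRIORITY, timeout);

        // Selector ensures we only ever see our own reply on a shared queue.
        MessageConsumer consumer = session.createConsumer(
                replies, "JMSCorrelationID = '" + correlationId + "'");
        Message reply = consumer.receive(timeout);   // null => timed out
        consumer.close();
        producer.close();
        return reply == null ? null : ((TextMessage) reply).getText();
    }
}
```

The server side simply reads JMSCorrelationID and JMSReplyTo from the request and sends its response accordingly, as the steps above describe.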