ActiveMQ : queue VS temporaryQueue - java

I am using ActiveMQ in order to communicate between servers.
Every server holds one queue for sending messages and a temporary queue per thread for receiving messages.
When I use roughly 32 or more threads, after a while I start receiving:
Cannot publish to a deleted Destination: temp-queue: xxx
When I switch from a temporary queue to a "regular" queue (session.createQueue(...) instead of session.createTemporaryQueue()), everything works perfectly.
Why am I getting this error?
Does it cost me more when I use a "regular" queue?

When implementing request/reply with non-temporary queues, you need some way to correlate a response to its request; the correlation-id header is the usual choice. You are not really supposed to create a regular queue per request, but rather a fixed set of queues, like ORDER.CONFIRMATION.RESPONSE or similar. So it's less costly to use regular queues - IF you reuse them.
To read messages from a common response queue with multiple threads, use a message selector on the JMSCorrelationID header to pick out your particular answer.
However, you should be able to use temp queues as well. Your problem is likely some kind of usage issue, but the information provided gives no clues, as it reveals no implementation or analysis.
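As a rough illustration of that pattern, here is a minimal sketch using the plain JMS API against ActiveMQ; the broker URL, queue names, and payload are placeholders invented for the example, not anything from the question:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

// Request/reply over fixed queues, correlating replies with JMSCorrelationID.
public class RequestReplyWithSelector {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requestQueue = session.createQueue("ORDER.CONFIRMATION.REQUEST");
        Queue replyQueue = session.createQueue("ORDER.CONFIRMATION.RESPONSE");

        // Send the request, tagged with a unique correlation id.
        String correlationId = java.util.UUID.randomUUID().toString();
        TextMessage request = session.createTextMessage("order-123");
        request.setJMSCorrelationID(correlationId);
        request.setJMSReplyTo(replyQueue);
        MessageProducer producer = session.createProducer(requestQueue);
        producer.send(request);

        // Each thread reads only its own reply via a selector on JMSCorrelationID.
        String selector = "JMSCorrelationID = '" + correlationId + "'";
        MessageConsumer replyConsumer = session.createConsumer(replyQueue, selector);
        Message reply = replyConsumer.receive(5000); // wait up to 5 seconds
        System.out.println("Reply: " + reply);

        connection.close();
    }
}

The responder side is expected to copy the request's correlation id onto the reply before sending it to the shared response queue.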

Related

How to solve the queue multi-consuming concurrency problem?

Our program is using Queue.
Multiple consumers are processing messages.
Consumers do the following:
Receive an on/off status message from the Queue.
Get the latest status from the repository.
Compare the state of the repository and the state received from the message.
If the on/off status is different, update the data. (At this time, other related data are also updated.)
Assuming that this process is handled by multiple consumers, the following problems are expected.
Producer sends messages 1: on, 2: off, and 3: on.
Consumer A receives message #1 and stores message #1 in the storage because there is no latest data.
Consumer A receives message #2.
At this time, consumer B receives message #3 at the same time.
Consumers A and B read the latest data from the storage at the same time (message 1).
Consumer B finishes processing first. It does not update the repository because the on/off state is unchanged (1: on, 3: on).
Then consumer A finishes. The on/off state has changed, so it processes and saves the update (1: on, 2: off).
In normal case, the latest data remaining in the DB should be on.
(This is because the message was sent in the order of on -> off -> on.)
However, according to the above scenario, off remains the latest data.
Is there any good way to solve this problem?
For reference, the queue is AWS Amazon MQ, the storage is AWS DynamoDB, and we are using Spring Boot.
The fundamental problem here is that you need to consume these "status" messages in order, but you're using concurrent consumers which leads to race-conditions and out-of-order message processing. In short, your basic architecture using concurrent consumers is causing this problem.
You could possibly work up some kind of solution in the database with timestamps as suggested in the comments, but that would be extra work for the clients and extra data stored in the database that isn't strictly necessary.
The simplest way to solve the problem is to just consume the messages serially rather than concurrently. There are a handful of different ways to do this, e.g.:
Define just 1 consumer for the queue with the "status" messages.
Use ActiveMQ's "exclusive consumer" feature to ensure that only one consumer receives messages.
Use message groups to group all the "status" messages together to ensure they are processed serially (i.e. in order).
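For the message-groups option, a minimal producer-side sketch might look like the following; it assumes a plain JMS client against ActiveMQ/Amazon MQ, and the broker URL, queue name, and group key are made up for illustration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

// Producer stamping every status message with the same JMSXGroupID so the broker
// dispatches the whole group to a single consumer, preserving order.
public class StatusProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("device.status"));

        for (String status : new String[] {"on", "off", "on"}) {
            TextMessage message = session.createTextMessage(status);
            // All messages with the same group id go to one consumer, in order.
            message.setStringProperty("JMSXGroupID", "device-42");
            producer.send(message);
        }
        connection.close();
    }
}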

RabbitMQ - only one queue, with multiple consumers receiving different messages

I have a question about RabbitMQ queues. I would like to send two types of messages on just one queue.
I know that I can create two different queues and use routing keys to send different messages to different queues.
But I would like to have two consumers on one queue and somehow bind each consumer to a type of message. It's event-driven via a Rabbit queue, where the client and the core are both publishers and consumers.
Is this possible, or should I use different queues?
As #kendavidson said, it is possible to use only one queue to exchange different messages, but it is a terrible idea because it's not efficient, so you should do it only if it's truly necessary.
I found the comment from #Петр Александров useful: I created a separate queue for every consumer to fix my problem, and that is probably what you are looking for.
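A rough sketch of that per-consumer-queue setup with the RabbitMQ Java client could look like this; the exchange, queue names, and routing keys are invented for illustration:

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// One direct exchange, a dedicated queue per consumer type,
// each bound with its own routing key.
public class PerConsumerQueues {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.exchangeDeclare("events", BuiltinExchangeType.DIRECT, true);

            // A queue for the "client" consumer and one for the "core" consumer.
            channel.queueDeclare("client.events", true, false, false, null);
            channel.queueDeclare("core.events", true, false, false, null);
            channel.queueBind("client.events", "events", "for-client");
            channel.queueBind("core.events", "events", "for-core");

            // Publishers pick the routing key that matches the intended consumer.
            channel.basicPublish("events", "for-core", null,
                    "hello core".getBytes(StandardCharsets.UTF_8));
            channel.basicPublish("events", "for-client", null,
                    "hello client".getBytes(StandardCharsets.UTF_8));
        }
    }
}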

Active MQ Artemis via jConsole/JMX

I am using Artemis 1.3 and I want to monitor it using jConsole (as proposed in How to monitor Apache Artemis).
I am generally able to connect to Artemis, but I have some questions to its usage.
(These questions mainly concern the interface org.apache.activemq.artemis.api.jms.management.JMSQueueControl, as I believe these are the methods that will be called via JMX):
1) I can display all messages on a queue by executing the queue's operation "listMessages" with a null parameter.
It will tell me the message's parameters like messageID, priority, whether it's durable, etc.
However, I cannot get the payload of the message. Which command can give me the contents of the message?
2) What is the filter parameter for "listMessages"?
I only get a response when I set it to null, but with every other value I only get an empty result.
3) While reading messages from queues works, I fail to read messages that were sent on a topic.
This is somewhat logical given the way topics work, but I would have hoped that when I call "pause" on a topic, the messages
remain until I call "resume". Unfortunately this does not work. Is there another way to see which messages arrive on a topic?
You can try the browse() operation.
For the filter parameter, you need to specify a property-value pair like JMSPriority=4, i.e. listMessages("JMSPriority=4").
No. Unless the subscriber is durable, messages will not be stored for a topic.
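For point 2, a hedged sketch of calling listMessages over JMX with a filter string might look like the following; the JMX service URL and the ObjectName are assumptions, so copy the exact ObjectName from jConsole's MBean tree for your broker and queue:

import java.util.Arrays;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Invoking the queue control's listMessages operation over JMX with a filter string.
public class ListFilteredMessages {
    public static void main(String[] args) throws Exception {
        // Assumed JMX service URL; adjust host/port to your broker's JMX settings.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Hypothetical ObjectName for a JMS queue named "exampleQueue" --
            // copy the real one from jConsole's MBean tree.
            ObjectName queue = new ObjectName(
                "org.apache.activemq.artemis:module=JMS,type=Queue,name=\"exampleQueue\"");

            // The filter is a selector-like String such as "JMSPriority = 4", not null.
            Object result = mbs.invoke(queue, "listMessages",
                new Object[] {"JMSPriority = 4"},
                new String[] {String.class.getName()});
            System.out.println(Arrays.deepToString((Object[]) result));
        }
    }
}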

ActiveMQ: Check existence of specific message on queue in ActiveMQ SV by CorrelationId

I use activeMQ server from:
http://activemq.apache.org/enterprise-integration-patterns.html
I have sent some messages to a queue.
I am wondering whether there is any way to check for the existence of a specific message on a queue in the ActiveMQ server without consuming the message?
The best way to check for the existence of a single message would be to use a QueueBrowser with a message selector. There is no guarantee, though, that the browser will return the message, depending on how deep the queue is.
What you are trying to do is an anti-pattern, and you should really consider using a true database if you need to query for data. JMS queues are meant to house data which should be consumed rather quickly; there is a very limited feature set around querying for a reason, as this is the job of a database.
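A minimal sketch of that QueueBrowser approach, selecting on JMSCorrelationID, might look like this; the broker URL, queue name, and correlation id are placeholders:

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Peek at a queue with a QueueBrowser and a JMSCorrelationID selector,
// without consuming anything.
public class BrowseByCorrelationId {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue queue = session.createQueue("orders");
        String selector = "JMSCorrelationID = 'order-123'";
        QueueBrowser browser = session.createBrowser(queue, selector);

        boolean found = false;
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            Message message = (Message) messages.nextElement();
            found = true;
            System.out.println("Found: " + message.getJMSMessageID());
        }
        System.out.println("Message present: " + found);
        connection.close();
    }
}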

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider in between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS Topic would be the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequester, but I cannot seem to find a working example). Request/reply is typically Queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes, for each request, to the response topic with a filter (using a session id as the value). The messages workers publish to the response topic have the session id associated with them.
Now this does not feel like a solution (rather it uses JMS as a hammer and treats the problem (and some more) as a nail). Is JMS in this situation a solution at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same queue and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and work_a_response queue. Or if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a Message Queue vendor yet, I would recommend RabbitMQ as it's easy to set up, easy to add new queues (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue, and process them in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just flow along through the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
If there's no need for the message to be reliable, or if there's no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transactions and one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors, so that's another advantage of JMS.
The trick with this kind of processing is to push the state along with the message, or externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and simply pick up where it left off once you get the problem resolved and restart it.
If you're in request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object into an internal map, keyed by something such as the Session ID, and then Object.wait() on it. Then your Queue 2 listener will get the response, finalize the processing, and use the Session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on, then simply Object.notify() it to tell the servlet to continue.
Yes, this sticks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(); if it times out, the processing took too long and you can gracefully alert the client.
This basically frees you from filters, reply queues, and the like. It's pretty simple to set it all up.
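As a rough sketch of that "park the request and notify it later" idea, assuming an in-memory map keyed by a session/correlation id (all names here are illustrative, not from the original answer):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Park a request thread until the Queue 2 listener delivers the correlated reply.
public class PendingReplies {
    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    private final Map<String, String> replies = new ConcurrentHashMap<>();

    // Called by the request side (e.g. the servlet) after publishing to queue 1.
    public String awaitReply(String sessionId, long timeoutMillis) throws InterruptedException {
        Object lock = locks.computeIfAbsent(sessionId, id -> new Object());
        synchronized (lock) {
            if (!replies.containsKey(sessionId)) {
                lock.wait(timeoutMillis); // times out if the pipeline is too slow
            }
        }
        locks.remove(sessionId);
        return replies.remove(sessionId); // null on timeout
    }

    // Called by the queue 2 listener when the correlated response arrives.
    public void onReply(String sessionId, String payload) {
        replies.put(sessionId, payload);
        Object lock = locks.get(sessionId);
        if (lock != null) {
            synchronized (lock) {
                lock.notifyAll();
            }
        }
    }
}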
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time expected for a worker entity to finish its work, etc. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk over the JMS protocol, or because you want asynchrony. The former reason does not make much sense; if it is the latter, you can choose another communication style such as a one-way web service call.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous one-way web service calls to the different worker entities.
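A small sketch of that idea with CompletableFuture, where callWorker() is a stand-in for whatever remote call you would actually make and the worker names are invented:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

// Fan a request out to several worker endpoints concurrently, then collect the answers.
public class WorkerFanOut {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> workers = List.of("worker-a", "worker-b", "worker-c");

        List<CompletableFuture<String>> futures = workers.stream()
            .map(w -> CompletableFuture.supplyAsync(() -> callWorker(w), pool))
            .collect(Collectors.toList());

        // Wait for every worker, then gather the results in the controller.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        futures.forEach(f -> System.out.println(f.join()));
        pool.shutdown();
    }

    // Placeholder for the real remote call.
    private static String callWorker(String worker) {
        return worker + ": data";
    }
}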
