I am trying to write an application that uses the JMS publish/subscribe model. However, I have run into a setback: I want the publisher to be able to delete messages from the topic. The use case is that I have durable subscribers. The active ones will get the messages (since delivery is more or less instant), but if a subscriber is inactive and the publisher decides a message is wrong, I want the publisher to be able to delete that message so the subscriber no longer receives it once it becomes active.
Problem is, I don't know how/if this can be done.
For a provider I settled on GlassFish's implementation, but if other alternatives offer this functionality, I can switch.
Thank you.
JMS is a form of asynchronous messaging, and as such the publishers and subscribers are decoupled by design. This means there is no mechanism to do what you are asking. Subscribers who are active at the time of publication will consume the message with no chance of receiving the delete in time to act on it; only subscribers who were offline could see both messages, and asynchronous messages are supposed to be atomic. If you proceed with the design in the other respondent's answer (publish a delete message and require reconnecting consumers to read the entire queue looking for delete messages), you will create a situation in which the behavior of the system differs based on whether or not a subscriber was online at the time a specific message/delete combination was published. There is also a race condition in which the subscriber finishes reading the retained messages just before the publisher sends out the delete message. This means you must put significant logic into subscribers to reconcile these conditions, and even more to reconcile the race condition.
The accepted method of doing this is what are called "compensating transactions." In any system where the producer and consumer do not share a single unit of work or share common state (such as using the same DB to store state) then backing out or correcting a previous transaction requires a second transaction that reverses the first. The consumer must of course be able to apply the compensating transaction correctly. When this pattern is used the result is that all subscribers exhibit the same behavior regardless of whether the messages are consumed in real time or in a batch after the consumer has restarted.
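The idea can be illustrated with a minimal sketch (the `Txn`, `applyAll`, and `compensate` names are illustrative, not part of any JMS API): rather than deleting the bad message, the producer publishes a second transaction that reverses its effect, and every consumer ends up with the same state whether it processed the stream live or replayed it later.

```java
import java.util.List;

public class CompensationDemo {
    // A transaction is just an ID plus a signed amount for this sketch.
    record Txn(String id, long amountCents) {}

    // Applying all transactions in order yields the same balance whether the
    // consumer saw them live or replays them from a durable subscription.
    static long applyAll(long startingBalance, List<Txn> txns) {
        long balance = startingBalance;
        for (Txn t : txns) balance += t.amountCents();
        return balance;
    }

    // The compensating transaction reverses the original's effect.
    static Txn compensate(Txn bad) {
        return new Txn(bad.id() + "-reversal", -bad.amountCents());
    }

    public static void main(String[] args) {
        Txn mistake = new Txn("t42", -5_000); // erroneous debit
        List<Txn> stream = List.of(new Txn("t41", 10_000), mistake, compensate(mistake));
        System.out.println(applyAll(0, stream)); // 10000: the mistake is cancelled out
    }
}
```

The point is that the correction lives in the application's state model, not in the message stream itself, so no subscriber behaves differently based on when it connected.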
Note that a compensating transaction differs from a "delete message." The delete message as proposed in the other respondent's answer is a form of command and control that affects the message stream itself. On the other hand, compensating transactions affect the state of the system through transactional updates of the system state.
As a general rule, you never want to manage state of the system by manipulating the message stream with command and control functions. This is fragile, susceptible to attack and very hard to audit or debug. Instead, design the system to deliver every message subject to its quality of service constraints and to process all messages. Handle state changes (including reversing a prior action) entirely in the application.
As an example, in banking where transactions trigger secondary effects such as overdraft fees, a common procedure is to "memo post" the transactions during the day, then sort and apply them in a batch after the bank has closed. This allows a mistake to be reconciled before it causes overdraft fees. More recently, the transactions are applied in real time but the triggers are withheld until the day's books close and this achieves the same result.
The JMS API does not allow removing messages from any destination (queue or topic). That said, specific JMS providers do offer proprietary tools to manage their state, for example via JMX. Check what your JMS provider offers, but be careful: even if you find a solution, it will not be portable between different JMS providers.
One legal way to "remove" a message is via its time-to-live:
publish(Topic topic, Message message, int deliveryMode, int priority, long timeToLive)
Possibly that is good enough for you.
If that is not applicable for your application, solve the problem at the application level: for example, attach a unique ID to each message and publish a special "delete" message with higher priority that acts as a command to delete the "real" message with the same ID.
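A minimal sketch of that application-level approach (the `Msg` record and its fields are made-up names, not JMS API): the consumer reads its whole backlog first, collects the IDs named by delete commands, and then processes only the surviving messages.

```java
import java.util.*;

public class DeleteCommandFilter {
    // A message is either a payload with a unique ID, or a delete command
    // naming the ID of the message to discard.
    record Msg(String id, boolean isDelete, String targetId, String body) {}

    // Two passes over the backlog: first collect deletions, then emit
    // only the payload messages that were not deleted.
    static List<String> reconcile(List<Msg> backlog) {
        Set<String> deleted = new HashSet<>();
        for (Msg m : backlog) if (m.isDelete()) deleted.add(m.targetId());
        List<String> out = new ArrayList<>();
        for (Msg m : backlog)
            if (!m.isDelete() && !deleted.contains(m.id())) out.add(m.body());
        return out;
    }

    public static void main(String[] args) {
        List<Msg> backlog = List.of(
            new Msg("1", false, null, "price update A"),
            new Msg("2", false, null, "price update B (wrong)"),
            new Msg("del-2", true, "2", null));
        System.out.println(reconcile(backlog)); // [price update A]
    }
}
```

Note this only works for messages still in the backlog; as the other answer points out, a live subscriber may have consumed the message before the delete command arrives.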
You can have the producer send a delete message, and the consumer needs to read all messages before starting to process them.
So, for example: I have actors X and Y.
Actor X persists a message to the journal, then sends the message to Y.
Y receives the message and sends a confirmation back to X to let it know the message was received.
When X receives this confirmation, I want it to
a) delete the message from the journal so the message isn't replayed on recovery (this part doesn't seem to be possible), or
b) "mark" the message as completed (delivered). This part I think will be done either with the log (using the log on recovery) or by adding "tags" to the journal (through an event adapter, but I am not sure if that's possible; I will update if it's a viable option).
This makes me realize I don't fully understand how Akka persistence actually works. If an actor persists all messages, and then the actor fails and needs to recover, won't it recover all these messages regardless of delivery? I know it is meant to maintain state (so for an FSM I get it), but if I have a supervisor actor that persists messages and then passes them on to workers, surely I would want to be able to change the journal's entries so that I won't recover (and then resend) messages that have already been processed?
(So that's why I am asking; I am obviously missing something.)
Akka Persistence implements event sourcing, which in general isn't a perfect fit for a queue (that does not mean it is impossible to do).
For an event-sourced work manager you would record an event with the fact that the workload was sent out to a worker, apply that event to the state of the actor (for example, a list of work in progress), and then send the actual work. When a workload is done you record that fact, removing the in-progress item from the state, so that if the work manager restarts it can get in contact with the worker to see if the work is still in progress, retrigger it with some other worker, etc.
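The replay logic can be sketched in a few lines (the `Sent`/`Done` event names are illustrative, not the Akka Persistence API): nothing is ever deleted from the journal; instead, replaying all events reconstructs the set of outstanding work, which is what a restarted manager would retrigger.

```java
import java.util.*;

public class WorkManagerState {
    interface Event {}
    record Sent(String workId) implements Event {}
    record Done(String workId) implements Event {}

    // Replaying the full journal reconstructs what is still outstanding:
    // completion events cancel out the matching "sent" events.
    static Set<String> inProgress(List<Event> journal) {
        Set<String> open = new LinkedHashSet<>();
        for (Event e : journal) {
            if (e instanceof Sent s) open.add(s.workId());
            else if (e instanceof Done d) open.remove(d.workId());
        }
        return open;
    }

    public static void main(String[] args) {
        List<Event> journal = List.of(new Sent("w1"), new Sent("w2"), new Done("w1"));
        System.out.println(inProgress(journal)); // [w2]
    }
}
```

This is why deleting journal entries isn't needed for correctness: the derived state, not the raw journal, decides what gets resent.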
The distributed workers sample doesn't cover the redelivery part AFAIR but could be a good source of inspiration.
With the new typed EventSourcedBehavior you can declaratively enable snapshotting and event deletion (https://doc.akka.io/docs/akka/current/typed/persistence-snapshot.html#event-deletion), while with the classic PersistentActor there is a more imperative deleteMessages (https://doc.akka.io/docs/akka/current/persistence.html#message-deletion).
For classic actors there is At-Least-Once-Delivery built on top of persistence, and for the new APIs we have work in progress for something called Reliable Delivery which will allow for a greater degree of choosing what level of reliability you are after.
Note that it depends on the journal you use whether, and how quickly, the events are actually deleted or just marked as removed, so you'll have to look into the details there as well.
My application is an IBM MQ client that consumes messages sent by the IBM MQ server. Sometimes the server sends a large number of messages (e.g. 50,000), and our client application cannot "eat" the messages that quickly.
What I've tried:
Using a caching connection factory, but it did not help much:
org.springframework.jms.connection.CachingConnectionFactory
I can't open multiple threads for the listener to speed up consumption (it is currently set to 1) because of our business requirements.
Thanks in advance!
Edit:
Processing each message takes, e.g., 0:00:00.079, but the wait before the next message starts processing takes much longer (e.g. 0:00:00.534).
Consider the transactional and persistence requirements of the messages.
There are a number of options within MQ that could be enabled here to speed up delivery.
MQ is optimized for either persistent/transactional or non-persistent/non-transactional workloads. Don't mix them; avoid, for example, sending persistent messages in a non-transactional session.
If you are using non-persistent/non-transactional messaging then look into the READ_AHEAD options to stream messages down to the client.
In addition, ensure that selectors are not in use.
If the client implementation is negotiable, look at sending aggregate messages that combine individual messages, especially if the business logic can adapt to handle them together before (for example) saving something to a database.
The only legitimate "business" reason you can't have multiple listener threads is event/workflow ordering and the chance of processing two related messages concurrently rather than sequentially. However, perhaps it's possible to redesign the client so that messages are segregated by the sender, using JMS properties of some sort, and then have each listener filter on those properties. As long as all related events/messages get the same property, you might be able to have multiple listeners.
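The segregation arithmetic could be as simple as this sketch (the grouping key, e.g. an order ID set by the sender, is an assumption; this is not a JMS selector, just the partitioning rule): hash the key to a stable listener index, so related messages always land on the same listener and stay sequential, while unrelated groups run in parallel.

```java
public class ListenerPartitioner {
    // Maps a grouping key to a listener index. floorMod keeps the result
    // non-negative even when hashCode() is negative.
    static int listenerFor(String groupKey, int listenerCount) {
        return Math.floorMod(groupKey.hashCode(), listenerCount);
    }

    public static void main(String[] args) {
        int listeners = 4;
        // Same group key -> same listener, every time.
        System.out.println(listenerFor("order-1001", listeners)
                        == listenerFor("order-1001", listeners)); // true
        System.out.println(listenerFor("order-1002", listeners)); // some index in [0, 4)
    }
}
```

The sender would stamp the key into a JMS property and each listener would select only its own partition.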
Not ideal, but if you made the listener stateful, so that you knew to roll back event B because related event A is currently being processed, that might work. It is difficult to do well and adds processing overhead. Better yet, figure out a way to process messages out of order and still get the correct answer in the end.
Ultimately, for large numbers of messages, you really need to figure out how to get n listeners, because otherwise you might never catch up and, in the worst case, your backlog will continue to grow.
A Java EE application gets data delivered via a JMS queue. Unfortunately, some of the delivered messages depend on each other, so they have to be processed in the correct order. As far as I understand it, I can't rely on JMS here (or can I? I know that they will be sent in the correct order).
Additionally, one kind of message may be split by the provider of the messages, so that in some cases thousands of these messages are sent to the application, all of them related to a specific entity. It would not be necessary to handle them one by one (which would be the easiest way, e.g. in an MDB), and I fear performance will be bad if I handle them that way, because I always have to save some information in the database. So I'd prefer to batch them somehow and handle them all together in one transaction. But I don't know how to do this, since I haven't found a way to batch messages while reading from a queue. I could drain the queue every second or so and process what I get, but then I don't know for sure whether messages that depend on each other are already in there.
So I think I need some kind of buffering/caching or something like it between receiving and processing the JMS messages, but I don't see a good approach at the moment.
Any ideas how to handle this scenario?
Section 4.4.10 of the JMS spec states that:
JMS defines that messages sent by a session to a destination must be received in the order in which they were sent (see Section 4.4.10.2, "Order of Message Sends," for a few qualifications). This defines a partial ordering constraint on a session's input message stream.
(The qualifications are mainly about QoS and non-persistent messages).
The spec continues:
JMS does not define order of message receipt across destinations or across a destination's messages sent from multiple sessions. This aspect of a session's input message stream order is timing-dependent. It is not under application control.
So, relying on ordering in your JMS client really comes down to the session scope of your producer and the number of producers.
You could also consider grouping the related messages in a transaction to ensure atomicity of whatever operation you're performing.
Hope that helps a bit.
EDIT: I assume you identify "related messages"/dependencies by using some sort of correlation ID? I'm getting the idea that your problem is very similar to, e.g., TCP's, so perhaps you could use the algorithms from that area to ensure ordered delivery of your messages?
Cheers,
A JMS Queue will definitely process them in the correct order. It is, after all, a Queue (and you can't cut in line).
I have a JMS Queue that is populated at a very high rate ( > 100,000/sec ).
It can happen that there can be multiple messages pertaining to the same entity every second as well. ( several updates to entity , with each update as a different message. )
On the other end, I have one consumer that processes this message and sends it to other applications.
Now, the whole setup is slowing down, since the consumer is not able to cope with the rate of incoming messages.
Since there is an SLA on the rate at which the consumer processes messages, I have been toying with the idea of having multiple consumers acting in parallel to speed up the process.
So, what I'm thinking of doing is:
Multiple consumers acting independently on the queue.
Each consumer is free to grab any message.
After grabbing a message, make sure it's the latest version of the entity. For this part, I can check with the application that processes this entity.
If it's not the latest, bump the version up and try again.
I have been looking through the integration patterns and JMS docs, so far without success.
I would welcome ideas to tackle this problem in a more elegant way along with any known APIs, patterns in Java world.
ActiveMQ solves this problem with a concept called "Message Groups". While it's not part of the JMS standard, several JMS-related products work similarly. The basic idea is that you assign each message to a "group" which indicates messages that are related and have to be processed in order. Then you set it up so that each group is delivered only to one consumer. Thus you get load balancing between groups but guarantee in-order delivery within a group.
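The semantics can be sketched without a broker (this is an illustration of the idea, not ActiveMQ's implementation): the dispatcher pins each group ID to whichever consumer it first assigns it to, so messages within a group go, in order, to a single consumer, while different groups balance across consumers.

```java
import java.util.*;

public class GroupPinningDispatcher {
    private final int consumerCount;
    private final Map<String, Integer> pinned = new HashMap<>();
    private int nextConsumer = 0;

    GroupPinningDispatcher(int consumerCount) { this.consumerCount = consumerCount; }

    // Returns the consumer index for a message with the given group ID.
    // The first message of a group claims the next consumer round-robin;
    // every later message of that group reuses the same consumer.
    int dispatch(String groupId) {
        return pinned.computeIfAbsent(groupId, g -> {
            int c = nextConsumer;
            nextConsumer = (nextConsumer + 1) % consumerCount;
            return c;
        });
    }

    public static void main(String[] args) {
        GroupPinningDispatcher d = new GroupPinningDispatcher(2);
        System.out.println(d.dispatch("entity-A")); // 0
        System.out.println(d.dispatch("entity-B")); // 1
        System.out.println(d.dispatch("entity-A")); // 0 again: in order within the group
    }
}
```

In ActiveMQ you would get this behavior by setting the JMSXGroupID property on each message rather than writing a dispatcher yourself.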
Most EIP frameworks and ESB's have customizable resequencers. If the amount of entities is not too large you can have a queue per entity and resequence at the beginning.
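A resequencer can be sketched in a few lines (a simplified illustration of what EIP frameworks provide, assuming each message carries a sequence number): out-of-order messages are buffered and released only once the next expected sequence number has arrived.

```java
import java.util.*;

public class Resequencer {
    private long nextSeq;
    private final Map<Long, String> buffer = new HashMap<>();

    Resequencer(long firstSeq) { this.nextSeq = firstSeq; }

    // Accepts one message; returns every message that is now releasable,
    // in sequence order (possibly none, possibly several).
    List<String> accept(long seq, String body) {
        buffer.put(seq, body);
        List<String> released = new ArrayList<>();
        while (buffer.containsKey(nextSeq)) {
            released.add(buffer.remove(nextSeq));
            nextSeq++;
        }
        return released;
    }

    public static void main(String[] args) {
        Resequencer r = new Resequencer(1);
        System.out.println(r.accept(2, "b")); // [] - still waiting for 1
        System.out.println(r.accept(1, "a")); // [a, b]
        System.out.println(r.accept(3, "c")); // [c]
    }
}
```

A production resequencer would also bound the buffer and time out on gaps; this sketch leaves both out.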
For those ones interested in a way to solve this:
Use Recipient List EAI pattern
As the question is about JMS, we can take a look into an example from Apache Camel website.
This approach is different from other patterns like CBR and Selective Consumer, because the consumer is not aware of which messages it should process.
Let me put this in a real-world example:
We have an Order Management System (OMS) which sends Orders off to be processed by the ERP. The Order then goes through six steps, and each of those steps publishes an event on the Order_queue, announcing the Order's new status. Nothing special here.
The OMS consumes the events from that queue, but MUST process the events of each Order in the very same sequence they were published. The rate of messages published per minute is much greater than the consumer's throughput, hence the delay increases over time.
The solution requirements:
Consume in parallel, with as many consumers as needed to keep the queue size reasonable.
Guarantee that events for each Order are processed in the same publish order.
The implementation:
On the OMS side
The OMS process responsible for sending Orders to the ERP, determines the consumer that will process all events of a certain Order and sends the Recipient name along with the Order.
How does this process know what the Recipient should be? You can use different approaches, but we used a very simple one: Round Robin.
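The Round Robin assignment can be as small as this sketch (class and method names are illustrative, not from our actual OMS): the first event of a new Order claims the next recipient name in a fixed list, and all later events of that Order reuse it.

```java
import java.util.*;

public class RoundRobinRecipients {
    private final List<String> recipients;
    private int next = 0;
    private final Map<String, String> byOrder = new HashMap<>();

    RoundRobinRecipients(List<String> recipients) { this.recipients = recipients; }

    // First call for an order picks the next recipient round-robin;
    // later calls return the same recipient, keeping the order's events together.
    String recipientFor(String orderId) {
        return byOrder.computeIfAbsent(orderId,
                o -> recipients.get(next++ % recipients.size()));
    }

    public static void main(String[] args) {
        RoundRobinRecipients rr = new RoundRobinRecipients(List.of("consumer-1", "consumer-2"));
        System.out.println(rr.recipientFor("order-1")); // consumer-1
        System.out.println(rr.recipientFor("order-2")); // consumer-2
        System.out.println(rr.recipientFor("order-1")); // consumer-1 again
    }
}
```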
On the ERP side
Since it keeps the Recipient's name for each Order, it simply sets up each message to be delivered to the desired Recipient.
On the OMS Consumer side
We've deployed 4 instances, each one using a different Recipient name and concurrently processing messages.
One could say that we created another bottleneck: the database. But it is not true, since there is no concurrency on the order line.
One drawback is that the OMS process which sends the Orders to the ERP must keep knowledge about how many Recipients are working.
There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity should be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider in between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relationship (controller -> workers), a JMS topic seems like the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a filter for each request (using a session ID as the value). The messages workers publish to the response topic carry that session ID.
Now, this does not feel like a solution (rather, it uses JMS as a hammer and treats the problem, and then some, as a nail). Is JMS a solution in this situation at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same queue and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and work_a_response queue. Or if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one all of the requests go on. The workers listen to that queue and process requests in their own time, in their own way.
Once complete, they bundle the request with the response and put that on the second queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just travel along with the entire procedure. A final process listens to the second queue and handles the request/response pairs appropriately.
If there's no need for the messages to be reliable, or no need for the actual processes to span JVMs or machines, then this can all be done within a single process using standard Java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process updates the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later, you would have some audit process scan for incomplete jobs that have not finished within some time window (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the failed request to a separate "this one went bad" queue for mitigation and resubmission.
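The "n of m" bookkeeping can be sketched like this (a real system would persist it in a database; an in-memory map stands in for it here, and all names are illustrative):

```java
import java.util.*;

public class CompletionTracker {
    // jobId -> set of piece indices seen so far
    private final Map<String, Set<Integer>> seen = new HashMap<>();

    // Records piece `index` of `total` for a job, in any arrival order;
    // returns true exactly when the last outstanding piece arrives.
    boolean record(String jobId, int index, int total) {
        Set<Integer> pieces = seen.computeIfAbsent(jobId, j -> new HashSet<>());
        pieces.add(index);
        return pieces.size() == total;
    }

    public static void main(String[] args) {
        CompletionTracker t = new CompletionTracker();
        System.out.println(t.record("job-1", 2, 3)); // false
        System.out.println(t.record("job-1", 3, 3)); // false
        System.out.println(t.record("job-1", 1, 3)); // true: all pieces seen, in any order
    }
}
```

Using a set rather than a counter also makes redelivered duplicates harmless, which matters with at-least-once JMS delivery.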
If you use JMS with transactions, then if one of the processors fails, the transaction rolls back and the message is retained on the queue for processing by one of the surviving processors; that's another advantage of JMS.
The trick with this kind of processing is to try to push the state along with the message, or externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (short of a catastrophic JMS failure, naturally) and simply pick up where it left off once you get the problem resolved and the components restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object in an internal map, keyed by something such as the session ID, and then Object.wait() on that object. Then your Queue 2 listener will get the response, finalize the processing, and use the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on; it can then simply Object.notify() it to tell the servlet to continue.
Yes, this parks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(); if it times out, the processing took too long and you can gracefully alert the client.
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
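The session-ID correlation described above can be sketched as follows, using a map of CompletableFutures instead of raw Object.wait()/notify() for brevity (class and method names are illustrative): the request thread parks on a future keyed by session ID, and the response-queue listener completes it when the reply arrives.

```java
import java.util.Map;
import java.util.concurrent.*;

public class ResponseCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the request thread before publishing; the caller then
    // awaits the returned future, ideally with a timeout.
    CompletableFuture<String> register(String sessionId) {
        return pending.computeIfAbsent(sessionId, id -> new CompletableFuture<>());
    }

    // Called by the response-queue listener when the reply for sessionId arrives;
    // removing the entry keeps the map from growing unboundedly.
    void complete(String sessionId, String response) {
        CompletableFuture<String> f = pending.remove(sessionId);
        if (f != null) f.complete(response);
    }

    public static void main(String[] args) throws Exception {
        ResponseCorrelator c = new ResponseCorrelator();
        CompletableFuture<String> reply = c.register("session-7");
        c.complete("session-7", "done");
        System.out.println(reply.get(1, TimeUnit.SECONDS)); // done
    }
}
```

Calling get() with a timeout gives you the same graceful "took too long" handling as a timed Object.wait().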
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time a worker entity is expected to take to finish its work, and so on. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk the JMS protocol, or because you want asynchrony. The former reason does not make much sense; if it is the latter, you could choose another communication style, such as one-way web service calls.
You can use the modern Java concurrency APIs to make multi-threaded, asynchronous one-way web service calls to the different worker entities.