In my system, there are many users who write blogs. I need to subscribe to different users. There is no centralized system (it's a Swing application).
I am using JMS.
The user may follow one user, two users or 100 users.
m_destination1 = m_session.createQueue("USER.DEVID");   // one queue per followed author...
m_consumer1 = m_session.createConsumer(m_destination1); // ...and one consumer per queue
m_destination2 = m_session.createQueue("USER.HARRY");
m_consumer2 = m_session.createConsumer(m_destination2);
Is there any generic way to write the above lines of code for an unknown number of users? For example, can one consumer receive messages from many users?
Wildcards will not work here.
The best thing you can use is the Mirrored Queues feature of ActiveMQ; you can read the documentation here:
http://activemq.apache.org/mirrored-queues.html
What a mirrored queue basically does is forward every message sent to a queue onto a similarly named topic, and that topic can then be subscribed to by multiple consumers.
If you use mirrored queues, your consumers will need to subscribe to the corresponding mirror topics.
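A minimal sketch of the consuming side, assuming mirrored queues are enabled on the broker and the default mirror-topic prefix from the linked documentation is in use (the prefix and the destination name are assumptions; adjust to your broker configuration):
// Assumes the broker was started with mirrored queues enabled and the
// default "VirtualTopic.Mirror." prefix (both are broker-side settings).
Destination mirror = m_session.createTopic("VirtualTopic.Mirror.USER.DEVID");
MessageConsumer mirrorConsumer = m_session.createConsumer(mirror);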
Your design cries out for the publish-subscribe (topic) domain rather than a point-to-point architecture (i.e. queues). Since you already have an architecture that generates a destination for each person writing blogs, no change to that system will be required, and your requirement will be catered for.
In addition to this, if 2 consumers listen on a queue they will pick up messages from it in parallel, i.e. if there are 2 messages on the queue then each consumer will process 1 message independently; I don't think that's what you want.
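A minimal sketch of what the topic-based version of the snippet in the question could look like (destination names are the ones from the question; the variables are assumed to be declared as Destination/MessageConsumer):
m_destination1 = m_session.createTopic("USER.DEVID");   // a topic per author instead of a queue
m_consumer1 = m_session.createConsumer(m_destination1); // every subscriber now gets every post for this author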
Hope this helps!
Good luck!
@Vihar's answer is right that you should be using the publish-subscribe paradigm by using a topic, so that multiple consumers can all be notified of new blog posts. It sounds like your primary pain point is that you've got one destination per author, and users that want to consume messages have to subscribe to each of them individually.
Instead, have all new-post messages published into a single topic (let's call it NewPostNotificationTopic). Clients can then subscribe to all messages but immediately check them against the list of authors they care about and immediately stop processing any notification for an author they're not following. (This puts the filtering into the message handler rather than into the ActiveMQ network.) This does mean that each message will be passed to each client, but as long as the messages are small and your network is fast and your users are usually connected to the network, this might be a workable solution. But if you can't afford the network bandwidth of sending all messages to all clients, or if your consumers will be offline for long periods of time and you can't afford to hold a copy of all messages till they come back online, this may not work for you.
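A minimal sketch of that client-side filtering, assuming each notification carries its author in a message property (the topic name comes from above; the "authorId" property name and the followedAuthors set are assumptions):
Topic notifications = m_session.createTopic("NewPostNotificationTopic");
MessageConsumer consumer = m_session.createConsumer(notifications);
consumer.setMessageListener(message -> {
    try {
        String author = message.getStringProperty("authorId"); // illustrative property name
        if (!followedAuthors.contains(author)) {
            return; // not following this author, drop the notification immediately
        }
        // ... show the new-post notification in the Swing UI ...
    } catch (JMSException e) {
        // log and ignore malformed notifications
    }
});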
Alternatively, publish all messages into that same topic, but set the author's ID as a header on the message and use message selectors to tell ActiveMQ to only deliver messages matching a given author ID. This will be more efficient, but you're back to needing to explicitly tell ActiveMQ which authors you care about, either with a single subscription with a selector that contains ORs or with one subscription per author. The latter is cleaner but gets you back to your problem of one subscription per author per reader; the former results in only one subscription but it has to be updated each time you add/remove an author for a reader, and you'll need to make sure you handle the race conditions inherent in removing the subscription and adding another one. I'd go with the first solution I proposed (doing the filtering in the message handler instead of in the ActiveMQ subscriptions) if the performance concerns I raised there aren't a problem; otherwise I'd probably go with one subscription per author per reader, rather than having a single subscription with an ORed selector and needing to redo the subscription each time something changed.
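And a minimal sketch of the selector-based variant, again assuming an "authorId" property set by the publisher:
Topic notifications = m_session.createTopic("NewPostNotificationTopic");
// One subscription covering several authors via a single ORed/IN selector
MessageConsumer consumer =
        m_session.createConsumer(notifications, "authorId IN ('DEVID', 'HARRY')");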
I have 5 defined topics. My specific question: is there any way in the code to know whether a Kafka topic is free or still full, so that I can balance the load between topics?
In the producer I call .send(Topic1, object), but if the topic I am sending the information to is busy or already under load, how can I detect that so I can switch to .send(Topic2, object) with a conditional?
I do not know if this can be done, or whether there is some other way to find this out. Currently I plan to use a ListenableFuture with future.addCallback to know when the send is done and then reassign the topic, but I do not see it as viable.
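For reference, a minimal sketch of the ListenableFuture callback approach mentioned above, assuming a Spring Kafka KafkaTemplate (the MyEvent type and the kafkaTemplate bean are assumptions; note that, as the answer below explains, this only tells you whether a send was acknowledged, not how loaded a topic or broker is):
ListenableFuture<SendResult<String, MyEvent>> future = kafkaTemplate.send("Topic1", event);
future.addCallback(new ListenableFutureCallback<SendResult<String, MyEvent>>() {
    @Override
    public void onSuccess(SendResult<String, MyEvent> result) {
        // the record was acknowledged by the broker
    }
    @Override
    public void onFailure(Throwable ex) {
        // the send failed; this is where retrying/alerting belongs, not topic "rebalancing"
    }
});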
Topics don't have load. Brokers do.
A broker can host multiple leader partitions of different topics, which clients cannot control.
Therefore, you cannot guarantee that sending data to a new topic (rather, set of partitions) will have less system/network load than another.
Besides that, if you start sending data to other topics, you lose ordering guarantees in both the produced data and the consumer group for any downstream systems.
I have a question about RabbitMQ queue. I would like to send two types of messages on just one queue.
I know that I can create two different queues and use routing keys to send different messages to the different queues.
But I would like to have two consumers on one queue, and somehow bind each consumer to a type of message. It's event-driven via a Rabbit queue, where the client and the core are both publishers and consumers.
Is it possible, or should I use different Queues?
Like @kendavidson said, it is possible to use only one queue to exchange different messages, but it is a terrible idea because it's not efficient, so you should use it only if it's truly necessary.
I found the comment from @Петр Александров useful: I created a separate queue for every consumer to fix my problem, and that's probably what you are looking for.
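A minimal sketch of that separate-queue layout with the RabbitMQ Java client, assuming a direct exchange and one routing key per message type (all names, and the already-open channel, are assumptions):
// One queue per consumer, each bound to the exchange with its own routing key
channel.exchangeDeclare("events", BuiltinExchangeType.DIRECT, true);
channel.queueDeclare("client.events", true, false, false, null);
channel.queueDeclare("core.events", true, false, false, null);
channel.queueBind("client.events", "events", "client");
channel.queueBind("core.events", "events", "core");
// Each consumer reads only its own queue and therefore only its own message type
channel.basicConsume("client.events", true,
        (consumerTag, delivery) -> { /* handle client events */ },
        consumerTag -> { });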
I am trying to understand the use case of using Queue.
My understanding:
Queue means one-to-one. The only use cases (if not rare, then very few) would be when a message is intended for only one consumer.
But even in those cases, I may want to use a Topic (just to be future-safe). The only extra caution would be to make subscriptions durable. Or, in special situations, I would use a bridging / dispatcher mechanism.
Given the above, I would always (or in most cases) want to publish to a topic. The subscriber can be either durable topic(s) or dispatched queue(s).
Please let me know what I am missing here, or whether I am missing the original intent.
The design requirements on when to use queues are simple if you think in terms of real-world examples:
Submit online order (exactly-once processing to avoid charging the credit card twice)
Private peer-to-peer chat (exactly one receiver for each message)
Parallel task distribution (distribute tasks amongst many workers in a networked system)
...and examples for when to use topics...
News broadcast to multiple subscribers; notification service, stock ticker, etc.
Email client (unique durable subscriber; you still get emails when you're disconnected)
You said...
But even in those cases, I may want to use Topic (just to be future safe). The only extra thing I would have to do is to make (each) subscription durable. Or, in special situations, I would use a bridging / dispatcher mechanism.
You're over-engineering the design. It's true, you can achieve exactly-once processing using a topic and durable subscriber, but you'd be limited to a single durable subscriber; the moment you start another subscriber for that topic, you'll get duplicate processing for the same message, not to mention, a single durable subscriber is hardly a solution that scales; it would be a bottleneck in your system for sure. With a queue, you can deploy 1000 receivers on 100 nodes for the same queue, and you'd still get exactly-once processing for a single message.
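A minimal sketch of that competing-consumers setup (the queue name and session are assumptions); each message is delivered to exactly one of the consumers, no matter how many are attached:
Queue orders = session.createQueue("OrderSubmissionQueue");
// Several consumers on the same queue; the broker load-balances across them
MessageConsumer worker1 = session.createConsumer(orders);
MessageConsumer worker2 = session.createConsumer(orders);
// ...scale out to as many receivers/nodes as you need; each message is still processed exactly once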
You said...
Given the above, I would always (or in most cases) want to publish to a topic. The subscriber can be either durable topic(s) or dispatched queue(s).
Using a dispatched queue with a topic subscriber is sort of redundant. You basically get asynchronous dispatching when using queues, so why not just use a queue?...no reason to put a topic in front of it.
Queues and topics in JMS represent two different models: point-to-point and publish/subscribe. A topic keeps a message until all subscribed clients have received it, with every subscriber handling it. A queue waits for the first consumer to pull the message, and considers it consumed at that point.
You are probably missing that both queues and topics can have multiple subscribers. A queue will deliver the message to one of potentially many subscribers, while a topic will deliver the message to all subscribers.
If you in your case are sure that there is only one subscriber, then a queue subscriber and a durable topic subscriber will behave similarly. I would rather look at such a scenario as a "special case".
I have a JMS Queue that is populated at a very high rate ( > 100,000/sec ).
It can happen that there are multiple messages pertaining to the same entity every second as well (several updates to the entity, with each update as a different message).
On the other end, I have one consumer that processes this message and sends it to other applications.
Now, the whole setup is slowing down since the consumer is not able to cope with the rate of incoming messages.
Since, there is an SLA on the rate at which consumer processes messages, I have been toying with the idea of having multiple consumers acting in parallel to speed up the process.
So, what I'm thinking of doing is:
Multiple consumers acting independently on the queue.
Each consumer is free to grab any message.
After grabbing a message, make sure it's the latest version of the entity. For this part, I can check with the application that processes this entity.
If it's not the latest, bump the version up and try again.
I have been looking through the integration patterns and JMS docs so far without success.
I would welcome ideas to tackle this problem in a more elegant way along with any known APIs, patterns in Java world.
ActiveMQ solves this problem with a concept called "Message Groups". While it's not part of the JMS standard, several JMS-related products work similarly. The basic idea is that you assign each message to a "group" which indicates messages that are related and have to be processed in order. Then you set it up so that each group is delivered only to one consumer. Thus you get load balancing between groups but guarantee in-order delivery within a group.
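A minimal sketch of the producer side using Message Groups; JMSXGroupID is the property ActiveMQ keys groups on, while the session, producer, and entity ID here are assumptions:
TextMessage update = session.createTextMessage(updatePayload);
// All updates for the same entity share a group, so ActiveMQ delivers them to the
// same consumer in order, while different groups are load-balanced across consumers.
update.setStringProperty("JMSXGroupID", "entity-" + entityId);
producer.send(update);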
Most EIP frameworks and ESB's have customizable resequencers. If the amount of entities is not too large you can have a queue per entity and resequence at the beginning.
For those interested in a way to solve this:
Use the Recipient List EAI pattern.
As the question is about JMS, we can take a look into an example from Apache Camel website.
This approach is different from other patterns like CBR and Selective Consumer because the consumer is not aware of what message it should process.
Let me put this on a real world example:
We have an Order Management System (OMS) which sends off Orders to be processed by the ERP. The Order then goes through 6 steps, and each of those steps publishes an event on the Order_queue, informing of the Order's new status. Nothing special here.
The OMS consumes the events from that queue, but MUST process the events of each Order in the very same sequence they were published. The rate of messages published per minute is much greater than the consumer's throughput, hence the delay increases over time.
The solution requirements:
Consume in parallel, adding as many consumers as needed to keep the queue size at a reasonable level.
Guarantee that events for each Order are processed in the same publish order.
The implementation:
On the OMS side
The OMS process responsible for sending Orders to the ERP determines the consumer that will process all events of a certain Order, and sends the Recipient name along with the Order.
How does this process know which Recipient to pick? Well, you can use different approaches, but we used a very simple one: round robin.
On ERP
As it keeps the Recipient's name for each Order, it simply sets up the message to be delivered to the desired Recipient.
On OMS Consumer
We've deployed 4 instances, each one using a different Recipient name and concurrently processing messages.
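One possible way to wire this up in plain JMS, sketched under the assumption that the Recipient name travels as a message property (the "recipient" property name, the destination, and the consumer names are illustrative; the Camel example linked above achieves the same thing with a recipientList):
// ERP side: stamp each event with the Recipient the OMS chose for this Order
TextMessage event = session.createTextMessage(orderStatusPayload);
event.setStringProperty("recipient", "oms-consumer-2");
producer.send(event);

// OMS consumer instance "oms-consumer-2": only receives events for its own Orders
MessageConsumer consumer = session.createConsumer(
        session.createQueue("Order_queue"), "recipient = 'oms-consumer-2'");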
One could say that we created another bottleneck: the database. But it is not true, since there is no concurrency on the order line.
One drawback is that the OMS process which sends the Orders to the ERP must keep knowledge about how many Recipients are working.
There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider in between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS Topic would be the right solution. But, since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typical Queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a filter for each request (using a session id as the value). The messages workers publish to the response topic have the session id associated with them.
Now this does not feel like a solution (rather it uses JMS as a hammer and treats the problem (and some more) as a nail). Is JMS in this situation a solution at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same queue and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and work_a_response queue. Or if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen an Message Queue vendor yet, I would recommend RabbitMQ as it's easy to set-up, easy to add new queues (especially dynamically) and has really good spring support (although most major messaging systems have spring support and you may not even be using spring).
I'm also not sure what you are accomplishing with the filters. If you ensure the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue, and process them in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along with the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
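A minimal sketch of that worker side, assuming the two queue names and the doWork helper (both illustrative), and that this runs inside a method that handles JMSException:
Queue requests = session.createQueue("work.requests");
Queue responses = session.createQueue("work.responses");
MessageConsumer in = session.createConsumer(requests);
MessageProducer out = session.createProducer(responses);

Message request = in.receive();                                  // worker picks up a request
TextMessage reply = session.createTextMessage(doWork(request));  // process it in its own way (doWork is hypothetical)
reply.setJMSCorrelationID(request.getJMSMessageID());            // bundle the request's identity with the response
out.send(reply);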
If there's no need for the message to be reliable, or if there's no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transaction, if one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors, so that's another advantage of JMS.
The trick with this kind of processing is to try and push the state along with the message, or externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and just pick up where it left off once you get the problem resolved and the components restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object in an internal map keyed by something such as the session ID, and then Object.wait() on it. Then your Queue 2 listener will get the response, finalize the processing, and use the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on, and simply Object.notify() it to tell the servlet to continue.
Yes, this sticks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(); if it times out, the processing took too long, so you can gracefully alert the client.
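A minimal sketch of that wait/notify handoff; the shared map, the "sessionId" property name, and the 30-second timeout are all assumptions (interrupt and JMS exception handling omitted):
// Shared between the servlet and the Queue 2 listener
ConcurrentHashMap<String, Object> pending = new ConcurrentHashMap<>();

// Servlet side: park until the response arrives, or give up after 30s
Object latch = new Object();
pending.put(sessionId, latch);
synchronized (latch) {
    latch.wait(30_000); // times out -> processing took too long, alert the client gracefully
}

// Queue 2 listener side: wake the waiting servlet thread
Object waiting = pending.remove(message.getStringProperty("sessionId"));
if (waiting != null) {
    synchronized (waiting) {
        waiting.notify();
    }
}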
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time each worker entity is expected to take to finish its work, etc. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk the JMS protocol, or because you want asynchrony; the former reason does not make sense, and if it is the latter, you can choose another communication mechanism, such as a one-way web service call.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous one-way web service calls to the different worker entities...
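A minimal sketch of fanning those calls out with the standard concurrency APIs; the workers list, the WorkerResult type, and callWorker are hypothetical placeholders for whatever one-way call you make:
ExecutorService pool = Executors.newFixedThreadPool(workers.size());
List<CompletableFuture<WorkerResult>> futures = workers.stream()
        .map(w -> CompletableFuture.supplyAsync(() -> callWorker(w), pool)) // callWorker is hypothetical
        .collect(Collectors.toList());
// Wait until every worker call has completed (or failed)
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();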