Request Queue in front of a REST Service - java

What is the best technology solution (framework/approach) for placing a request queue in front of a REST service, so that I can increase the number of instances of the REST service for higher availability, with the request queue forming a service/transaction boundary for the service client?
I need a good, lightweight technology/framework choice for the request queue (Java), and an approach to implement a competing consumer against it.

There's a couple of issues here, depending on your goals.
First, it only promotes availability of the resources on the back end. Consider if you have 5 servers handling queued requests on the back end. If one of those servers goes down, then the queued request should fall back into the queue, and be redelivered to one of the remaining 4 servers.
However, while those back end servers are processing, the front end servers are holding on to the actual, initiating requests. If one of those front end servers fails, then those connections are lost completely, and it will be up to the original client to resubmit the request.
The premise perhaps is that simpler front end systems are at a lower risk for failure, and that's certainly true for software-related failure. But network cards, power supplies, hard drives, etc. are pretty agnostic to such false hopes of man and punish all equally. So, consider this when talking about overall availability.
As to design, the back end is a simple process waiting on a JMS message queue and processing each message as it comes. There are a multitude of examples of this available, and any JMS server will suit at a high level. All you need is to ensure that the message handling is transactional, so that if message processing fails, the message remains in the queue and can be redelivered to another message handler.
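As a rough illustration, a transacted consumer can be as small as the following sketch (ActiveMQ's connection factory is used purely as an example; any JMS provider works, and the "requests" queue name is a placeholder):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueWorker {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            // transacted = true; the ack-mode argument is ignored for transacted sessions
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(session.createQueue("requests"));
            connection.start();
            while (true) {
                Message message = consumer.receive();   // blocks until a request arrives
                try {
                    process(message);                   // your business logic goes here
                    session.commit();                   // success: message leaves the queue
                } catch (Exception e) {
                    session.rollback();                 // failure: message stays queued for redelivery
                }
            }
        }

        private static void process(Message message) { /* placeholder */ }
    }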
Your JMS queue's primary requirement is being clusterable. The JMS server itself is a single point of failure in the system: lose the JMS server, and your system is pretty much dead in the water. So you'll need to be able to cluster the server and have the consumers and producers handle failover appropriately. Again, this is JMS-server specific; most do it, and it's pretty routine in the JMS world.
The front end is where things get a little trickier, since the front end servers are the bridge from the synchronous world of the REST request to the asynchronous world of the back end processors. A REST request typically follows an RPC pattern of consuming the request payload from the socket, holding the connection open, processing the results, and delivering the results back down the originating socket.
To manage this hand-off, you should take a look at the asynchronous servlet handling that Servlet 3.0 introduced; it is available in Tomcat 7, the latest Jetty (not sure what version), Glassfish 3.x, and others.
In this case what you would do is, when the request arrives, convert the nominally synchronous servlet call into an asynchronous call using HttpServletRequest.startAsync(ServletRequest request, ServletResponse response).
This returns an AsyncContext and, once started, allows the server to free up the processing thread. You then do several things.
Extract the parameters from the request.
Create a unique ID for the request.
Create a new back end request payload from your parameters.
Associate the ID with the AsyncContext, and retain the context (such as putting it into an application-wide Map).
Submit the back end request to the JMS queue.
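A minimal sketch of that hand-off might look like this (the PENDING map and sendToJmsQueue are illustrative placeholders, not framework API):

    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/work", asyncSupported = true)
    public class FrontEndServlet extends HttpServlet {
        // The application-wide map mentioned above: request ID -> parked context.
        static final ConcurrentHashMap<String, AsyncContext> PENDING = new ConcurrentHashMap<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();              // frees the container thread
            ctx.setTimeout(0);                                // timeouts handled by our own sweeper
            String requestId = UUID.randomUUID().toString();
            PENDING.put(requestId, ctx);
            sendToJmsQueue(requestId, req.getParameter("payload")); // hypothetical JMS producer
            // return without calling ctx.complete(): the client connection stays open
        }

        private void sendToJmsQueue(String requestId, String payload) { /* JMS send omitted */ }
    }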
At this point, the initial processing is done, and you simply return from doGet (or service, or whatever). Since you have not called AsyncContext.complete(), the server will not close out the connection to the client. Since you have the AsyncContext stored in the map by its ID, it's handy for safe keeping for the time being.
Now, when you submitted the request to the JMS queue, it contained: the ID of the request (that you generated), any parameters for the request, and the identification of the actual server making the request. This last bit is important, as the results of the processing need to return to their origin. The origin is identified by the request ID and the server ID.
When your front end server started up, it also started a thread whose job is to listen to a JMS response queue. When it sets up its JMS connection, it can set up a filter such as "Give me only messages for a ServerID of ABC123". Alternatively, you could create a unique queue for each front end server, with the back end server using the server ID to determine the queue to return the reply to.
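With the filter approach, the listener setup is essentially a JMS message selector on the consumer (a sketch; session, myServerId, and internalQueue are assumed to exist in the surrounding server code):

    // Filter the shared reply queue down to messages addressed to this server.
    Queue replyQueue = session.createQueue("responses");
    MessageConsumer replyConsumer =
            session.createConsumer(replyQueue, "serverId = '" + myServerId + "'");
    // Hand each matching reply straight to the internal queue discussed below.
    replyConsumer.setMessageListener(message -> internalQueue.offer(message));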
When the back end processors consume the message, they take the request ID and parameters, perform the work, and then put the result on the JMS response queue, adding the originating ServerID and the original request ID as properties of the message.
So, if you got the request originally for front end server ABC123, the back end processor will address the results back to that server. Then, that listener thread will be notified when it gets a message. The listener thread's task is to take that message and put it on to an internal queue within the front end server.
This internal queue is backed by a thread pool whose job is to send the request payloads back to the original connection. It does this by extracting the original request ID from the message, looking up the AsyncContext in that internal map discussed earlier, and then sending the results down the HttpServletResponse associated with the AsyncContext. At the end, it calls AsyncContext.complete() (or a similar method) to tell the server that you're done and to allow it to release the connection.
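One dispatcher thread from that pool might look roughly like this (ResultMessage is a hypothetical holder for the request ID and response body pulled off the internal queue; PENDING is the map from the earlier servlet sketch):

    Runnable dispatcher = () -> {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                ResultMessage result = internalQueue.take();          // blocks for work
                AsyncContext ctx = FrontEndServlet.PENDING.remove(result.getRequestId());
                if (ctx == null) continue;                            // timed out or lost: discard
                ctx.getResponse().getWriter().write(result.getBody());
                ctx.complete();                                       // releases the connection
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();                   // shut down cleanly
            } catch (java.io.IOException e) {
                // log it: the client most likely disconnected mid-write
            }
        }
    };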
For housekeeping, you should have another thread on the front end server whose job is to detect when requests have been waiting in the map for too long. Part of the original message should have been the time the request started. This thread can wake up every second, scan the map for requests, and, for any that have been there too long (say 30 seconds), put the request on to another internal queue, consumed by a collection of handlers designed to inform the client that the request timed out.
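A hedged sketch of that sweeper, assuming a companion startTimes map recording when each request was queued and a timedOut queue feeding the timeout handlers:

    ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
    sweeper.scheduleAtFixedRate(() -> {
        long cutoff = System.currentTimeMillis() - 30_000;            // 30 second budget
        startTimes.forEach((requestId, startedAt) -> {
            if (startedAt < cutoff && startTimes.remove(requestId) != null) {
                AsyncContext ctx = FrontEndServlet.PENDING.remove(requestId);
                if (ctx != null) timedOut.offer(ctx);                 // handlers answer "timed out"
            }
        });
    }, 1, 1, TimeUnit.SECONDS);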
You want these internal queues so that the main processing logic isn't stuck waiting on the client to consume the data. It could be a slow connection or something, so you don't want to block all of the other pending requests to handle them one by one.
Finally, you'll need to account for the fact that you may well get a message from the response queue for a request that no longer exists in your internal map. For one, the request may have timed out, so it should not be there any longer. For another, that front end server may have stopped and been restarted, so its internal map of pending requests will simply be empty. At this point, if you detect you have a reply for a request that no longer exists, you should simply discard it (well, log it, then discard it).
You can't reuse these requests; there's no such thing, really, as a load balancer going back to the client. If the client allows you to make callbacks via published endpoints, then sure, you can just have another JMS message handler make those requests. But that's not a REST kind of thing; REST at this level of discussion is more client/server/RPC.
As to which frameworks support asynchronous servlets at a higher level than a raw servlet (such as Jersey for JAX-RS or something like that), I can't say. I don't know what frameworks are supporting it at that level. It seems like this is a feature of Jersey 2.0, which is not out yet. There may well be others; you'll have to look around. Also, don't fixate on Servlet 3.0. Servlet 3.0 is simply a standardization of techniques that individual containers have used for some time (Jetty notably), so you may want to look at container-specific options outside of just Servlet 3.0.
But the concepts are the same. The big takeaways are the response queue listener with the filtered JMS connection, the internal map of request IDs to AsyncContexts, and the internal queues and thread pools to do the actual work within the application.

If you relax your requirement that it must be in Java, you could consider HAProxy. It's very lightweight, very standard, and does a lot of good things (request pooling / keepalives / queueing) well.
Think twice before you implement request queueing, though. Unless your traffic is extremely bursty it will do nothing but hurt your system's performance under load.
Assume that your system can handle 100 requests per second. Your HTTP server has a bounded worker thread pool. The only way a request pool can help is if you are receiving more than 100 requests per second. After your worker thread pool is full, requests start to pile up in your load balancer pool. Since they are arriving faster than you can handle them, the queue gets bigger ... and bigger ... and bigger. Eventually either this pool fills too, or you run out of RAM and the load balancer (and thus the entire system) crashes hard.
If your web server is too busy, start rejecting requests and get some additional capacity online.
Request pooling certainly can help if you can get additional capacity in time to handle the requests. It can also hurt you really badly. Think through the consequences before turning on a secondary request pool in front of your HTTP server's worker thread pool.

The design we use is a REST interface receiving all the requests and dispatching them to a message queue (e.g. RabbitMQ).
Workers then listen to the messages and execute them following certain rules. If everything goes down you would still have the requests in the MQ, and if you have a high number of requests you can just add workers...
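For illustration, the dispatch side can be a few lines with the RabbitMQ Java client (a recent amqp-client is assumed; the "work" queue name and localhost broker are placeholders):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;

    public class Dispatcher {
        public void dispatch(String requestJson) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                channel.queueDeclare("work", true, false, false, null); // durable: survives restarts
                channel.basicPublish("", "work",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,        // persist the message too
                        requestJson.getBytes(StandardCharsets.UTF_8));
            }
        }
    }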
Check out this keynote; it kind of shows the power of this concept!
http://www.springsource.org/SpringOne2GX2012


Multithreading with Jersey

Here are two links which seem to be contradicting each other. I'd sooner trust the docs:
Link 1
Request processing on the server works by default in a synchronous processing mode
Link 2
It already is multithreaded.
My question:
Which is correct? Can it be both synchronous and multithreaded?
Why do the docs say the following?
in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used
If the docs are correct, why is the default action synchronous? All requests are asynchronous in client-side JavaScript by default for user experience; it would make sense, then, that the default action server-side should also be asynchronous.
If the client does not need to serve requests in a specific order, then who cares how "EXPENSIVE" the operation is. Shouldn't all operations simply be asynchronous?
Request processing on the server works by default in a synchronous processing mode
Each request is processed on a separate thread. The request is considered synchronous because that request holds up the thread until the request is finished processing.
It already is multithreaded.
Yes, the server (container) is multi-threaded. For each request that comes in, a thread is taken from the thread pool and tied to that particular request.
in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used
Yes, so that we don't hold up the container thread. There are only so many threads in the container thread pool to handle requests. If we hold them all up with long-running requests, the container may run out of threads, blocking other requests from coming in. In asynchronous processing, Jersey hands the thread back to the container and handles the request processing itself in its own thread pool; when the processing is complete, it sends the response up to the container, which sends it back to the client.
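For illustration, this is what the Jersey 2 / JAX-RS 2.0 server-side asynchronous model looks like on a resource (expensiveWork and the pool size are placeholders):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;

    @Path("/slow")
    public class SlowResource {
        private static final ExecutorService POOL = Executors.newFixedThreadPool(10);

        @GET
        public void compute(@Suspended final AsyncResponse asyncResponse) {
            // The container thread returns immediately; Jersey parks the request.
            POOL.submit(() -> {
                String result = expensiveWork();        // hypothetical long computation
                asyncResponse.resume(result);           // completes the suspended request
            });
        }

        private String expensiveWork() { return "done"; }
    }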
If the client does not need to serve requests in a specific order, then who cares how "EXPENSIVE" the operation is.
Not really sure what the client has to do with anything here. Or at least in the context of how you're asking the question. Sorry.
Shouldn't all operations simply be asynchronous?
Not necessarily, if all the requests are quick. Though you could make an argument for it, but that would require performance testing, and numbers you can put up against each other and make a decision from there. Every system is different.

Request Aggregator / Middle-tier design pattern for costly requests

I'm working on a program that will have multiple threads requiring information from a web-service that can handle requests such as:
"Give me [Var1, Var2, Var3] for [Object1, Object2, ... Object20]"
and the resulting reply will give me a, in this case, 20-node XML (one for each object), each node with 3 sub-nodes (one for each var).
My challenge is that each request made of this web-service costs the organization money and, whether it be for 1 var for 1 object or 20 vars for 20 objects, the cost is the same.
So, that being the case, I'm looking for an architecture that will:
Create a request on each thread as data is required
Have a middle-tier "aggregator" that gets all the requests
Once X number of requests have been aggregated (or a time limit has been reached), the middle-tier performs a single request of the web-service
Middle-tier receives reply from web-service
Middle-tier routes information back to waiting objects
Currently, my thoughts are to use a library such as NetMQ with my middle-tier as a server and each thread as a poller, but I'm getting stuck on the actual implementation and, before going too far down the rabbit-hole, am hoping there's already a design pattern / library out there that does this substantially more efficiently than I'm conceiving of.
Please understand that I'm a noob, and, so, ANY help / guidance would be really greatly appreciated!!
Thanks!!!
Overview
From the architectural point of view, you just sketched out a good approach for the problem:
Insert a proxy between the requesting applications and the remote web service
In the proxy, put the requests in the request queue, until at least one of the following events occurs
The request queue reaches a given length
The oldest request in the request queue reaches a certain age
Group all requests in the request queue into one single request, removing duplicate objects or attributes
Send this request to the remote web service
Move the requests into the (waiting for) response queue
Wait for the response until one of the following occurs
the oldest request in the response queue reaches a certain age (time out)
a response arrives
Get the response (if applicable) and map it to the corresponding requests in the response queue
Answer all requests in the response queue that have an answer
Send a timeout error for all requests older than the timeout limit
Remove all answered requests from the response queue
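As a rough sketch, the batching heart of such a proxy can be built on a plain BlockingQueue (Java here; Request, Response, callRemoteService, dedupe, and dispatchResponses are all placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    private final BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();

    void runAggregator() throws InterruptedException {
        final int batchSize = 20;       // "queue reaches a given length"
        final long maxWaitMs = 200;     // "oldest request reaches a certain age"
        while (true) {
            List<Request> batch = new ArrayList<>();
            batch.add(requestQueue.take());                   // block for the first request
            long deadline = System.currentTimeMillis() + maxWaitMs;
            while (batch.size() < batchSize) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) break;                    // oldest request is old enough
                Request next = requestQueue.poll(remaining, TimeUnit.MILLISECONDS);
                if (next == null) break;
                batch.add(next);
            }
            Response combined = callRemoteService(dedupe(batch)); // one paid call for the batch
            dispatchResponses(batch, combined);               // answer every waiting requester
        }
    }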
Technology
You probably won't find an off-the-shelf product or a framework that exactly matches your requirements. But there are several frameworks / architectural patterns that you can use to build a solution.
C#: RX and LINQ
When you want to use C#, you could use reactive extensions for getting the timing and the grouping right.
You could then use LINQ to select the attributes from the requests to build the response and to select the requests in the response queue that either match to a certain part of a response or that timed out.
Scala/Java: Akka
You could model the solution as an actor system, using several actors:
An actor as the gateway for the requests
An actor holding the request queue
An actor sending the request to the remote web service and getting the response back
An actor holding the response queue
An actor sending out the responses or the timeouts
An actor system makes it easy to deal with concurrency and to separate the concerns in a testable way.
When using Scala, you could use its "monadic" collection API (filter, map, flatMap) to do basically the same as with LINQ in the C# approach.
The actor approach really shines when you want to test the individual elements. It is very easy to test each actor individually, without having to mock the whole workflow.
Erlang/Elixir: Actor System
This is similar to the Akka approach, just with a different (functional!) language. Erlang / Elixir has a lot of support for distributed actor systems, so when you need an ultra stable or scalable solution, you should look into this one.
NetMQ / ZeroMQ
This is probably too low-level and brings in too little infrastructure. If you use an actor system, you could try to bring in NetMQ / ZeroMQ as the transport system.
Your idea of using a queue looks good to me.
This is one possible solution to your problem and I'm sure there are countless other solutions that can do what you need.
Have a "publish queue" (PQ) and a "consume queue" (CQ)
Clients subscribe to CQ and MT subscribes to PQ
Clients publish the requests to PQ
MT Listens to PQ, aggregates requests and dispatches to farm in a thread
Once the results are back, this thread separates the results into req/res pairs
It then publishes the req/res pairs to the CQ
Each client picks the correct message and processes it
Long(er) version:
Have your "middle tier" to listen to a queue (to which, the clients publish messages) and aggregate the requests until N number of requests have come through or X amount of time has passed.
One you are ready, offload the aggregated request to a thread to call your farm and get the results. A bigger problem will most likely arise when you need to communicate this back to the clients.
For that, you probably need another queue that all your clients subscribe to and once your result batch is ready (say 20 responses in XML) from the farm, the thread that called the farm will separate the XML results into their corresponding request/response pair and publish to this queue. Each client will need to pick up the correct request/response pair from the queue and process it.
This will not be a webservice in the traditional sense since the wait times can be prohibitively long and you don't want to maintain a connection which is why I suggest the queue.
You can also make your consume queue topic-based, meaning you only publish the req/res pairs to the consumer that asked for them rather than broadcasting (so the client doesn't have to "pick the correct req/res"; that is taken care of by the topic name). Almost all queues support this.

Request-Reply through a Queue with Hazelcast

I wonder if I can do request-reply with this:
1 hazelcast instance/member (central point)
1 application with hazelcast-client sending request through a queue
1 application with hazelcast-client waiting for requests into the queue
The 1st application also receives the response on another queue posted by the second application.
Is it a good way to proceed? Or do you think of a better solution?
Thanks!
Over the last couple of days I have also worked on an "SOA-like" solution using Hazelcast queues to communicate between different processes on different machines.
My main goals were to have
"one to one-of-many" communication with garanteed reply of one-of-the-many's
"one to one" communication one way
"one to one" communication with answering in a certain time
To make a long story short, I dropped this approach today for the following reasons:
lots of complicated code with executor services, callables, runnables, InterruptedExceptions, shutdown handling, Hazelcast transactions, etc.
dangling messages in the case of "one to one" communication when the receiver has a shorter lifetime than the sender
losing messages if I kill certain cluster member(s) at the right time
all cluster members must be able to deserialize the message, because it could be stored anywhere; therefore the messages can't be "specific" to certain clients and services
I switched over to a much simpler approach:
all "services" register themselves in a MultiMap ("service registry") using the hazelcast cluster member UUID as key. Each entry contains some meta information like service identifier, load factor, starttime, host, pid, etc
clients pick a UUID of one of the entries in that MultiMap and use a DistributedTask (distributed executor service) for the choosen specific cluster member to invoke the service and optionally get a reply (in time)
only the service client and the service must have the specific DistributedTask implementation in their classpath, all other cluster members are not bothered
clients can easily figure out dead entries in the service registry themselves: if they can't see a cluster member with the specific UUID (hazelcastInstance.getCluster().getMembers()), the service died probably unexpected. Clients can then pick "alive" entries, entries which fewer load factor, do retries in case of idempotent services, etc
Programming gets very easy and powerful using the second approach (e.g. timeouts or cancellation of tasks), with much less code to maintain.
Hope this helps!
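For what it's worth, here is a hedged sketch of that second approach using Hazelcast 3.x names, where IExecutorService is the modern counterpart of the DistributedTask mentioned above (the registry layout and PricingTask are illustrative):

    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import com.hazelcast.core.*;

    public class ServiceClient {
        public String callPricingService(HazelcastInstance hz) throws Exception {
            MultiMap<String, String> registry = hz.getMultiMap("service-registry");
            IExecutorService exec = hz.getExecutorService("service-calls");
            for (Member member : hz.getCluster().getMembers()) {
                if (registry.containsKey(member.getUuid())) {        // entry looks alive
                    Future<String> reply = exec.submitToMember(new PricingTask(), member);
                    return reply.get(5, TimeUnit.SECONDS);           // reply "in time", or cancel
                }
            }
            throw new IllegalStateException("no live service entry found");
        }

        static class PricingTask implements Callable<String>, Serializable {
            public String call() { return "price=42"; }              // runs on the chosen member
        }
    }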
In the past we built a SOA system that uses Hazelcast queues as a bus. Here are some of the highlights.
a. Each service has an incoming queue. The service name is simply the name of the queue. You can have as many service providers as you wish, and you can scale up and down. All you need is for these service providers to poll this queue and process the arriving requests.
b. Since the system is fully asynchronous, a call id is carried on both the request and the response to correlate them.
c. Each client sends a request into the queue of the service that it wants to call. The request has all the parameters for the service, the name of the queue to send the response to, and a call id. The queue name can simply be the address of the client; this way each client will have its own unique queue.
d. Upon receiving the request, a service provider processes it and sends the response to the answer queue.
e. Each client also continuously polls its input queue to receive the answers for the requests that it sent.
The major drawback with this design is that the queues are not as scalable as maps, so the system is not very scalable. However, it can still process 5K requests per second.
I made a test for myself and validated that it works well, with certain limitations.
The architecture is Producer-Hazelcast_node-Consumer(s).
Using two Hazelcast queues, one for requests and one for responses, I could measure a round trip of under 1 ms.
Load balancing works fine if I put several consumers on the request queue.
If I add another node and connect the clients to each node, the round trip rises above 15 ms, due to replication between the 2 Hazelcast nodes. If I kill a node, the clients continue to work; so failover works, at the cost of time.
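The measured topology, roughly sketched (types and names are placeholders; note that with several clients sharing the response queue you would still need correlation handling, as the comment below points out):

    IQueue<String> requests = hz.getQueue("request");
    IQueue<String> responses = hz.getQueue("response");

    requests.put(correlationId + "|" + payload);          // competing consumers pick this up
    String reply = responses.poll(1, TimeUnit.SECONDS);   // ~1 ms round trip on a single node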
Can't you use the correlation id to perform request-reply on a single queue in Hazelcast? That's the id that should uniquely define a conversation between 2 providers/consumers of a queue.
What is the purpose of this setup, #unludo? I am just curious.

Concurrent Synchronous Request-Reply with JMS/ActiveMQ - Patterns/Libraries?

I have a web-app where when the user submits a request, we send a JMS message to a remote service and then wait for the reply. (There are also async requests, and we have various niceties set up for message replay, etc, so we'd prefer to stick with JMS instead of, say, HTTP)
In How should I implement request response with JMS?, ActiveMQ seems to discourage the idea of either temporary queues per request or temporary consumers with selectors on the JMSCorrelationID, due to the overhead involved in spinning them up.
However, if I use pooled consumers for the replies, how do I dispatch from the reply consumer back to the original requesting thread?
I could certainly write my own thread-safe callback-registration/dispatch, but I hate writing code I suspect has already been written by someone who knows better than I do.
That ActiveMQ page recommends Lingo, which hasn't been updated since 2006, and Camel Spring Remoting, which has been hellbanned by my team for its many gotcha bugs.
Is there a better solution, in the form of a library implementing this pattern, or in the form of a different pattern for simulating synchronous request-reply over JMS?
Related SO question:
Is it a good practice to use JMS Temporary Queue for synchronous use?, which suggests that spinning up a consumer with a selector on the JMSCorrelationID is actually low-overhead, which contradicts what the ActiveMQ documentation says. Who's right?
In a past project we had a similar situation, where a sync WS request was handled with a pair of async req/res JMS messages. We were using the JBoss JMS impl at that time, and temporary destinations were a big overhead.
We ended up writing a thread-safe dispatcher, leaving the WS waiting until the JMS response came in. We used the CorrelationID to map the response back to the request.
That solution was all home grown, but I've come across a nice blocking map impl that solves the problem of matching a response to a request.
BlockingMap
If your solution is clustered, you need to take care that response messages are dispatched to the right node in the cluster. I don't know ActiveMQ, but I remember JBoss Messaging having some glitches under the hood with its clusterable destinations.
I would still think about using Camel and letting it handle the threading, perhaps without spring-remoting but just raw ProducerTemplates.
Camel has some nice documentation about the topic and works very well with ActiveMQ.
http://camel.apache.org/jms#JMS-RequestreplyoverJMS
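As a sketch of that suggestion (broker URL and queue name are placeholders; exception handling omitted):

    import org.apache.camel.CamelContext;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.impl.DefaultCamelContext;
    import org.apache.activemq.camel.component.ActiveMQComponent;

    CamelContext context = new DefaultCamelContext();
    context.addComponent("activemq",
            ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
    context.start();

    ProducerTemplate template = context.createProducerTemplate();
    // requestBody(...) is an InOut exchange: Camel creates and pools the reply
    // consumer and correlates the response for you.
    String reply = template.requestBody("activemq:queue:service", "payload", String.class);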
For your question about spinning up a selector-based consumer and the overhead: what the ActiveMQ docs actually state is that it requires a round trip to the ActiveMQ broker, which might be on the other side of the globe or on a high-delay network. The overhead in this case is the TCP/IP round-trip time to the AMQ broker. I would consider this as an option. I have used it multiple times with success.
A colleague suggested a potential solution: one response queue/consumer per webapp thread, where we set the return address to the response queue owned by that particular thread. Since these threads are typically long-lived (and are re-used for subsequent web requests), we only have to suffer the overhead at the time the thread is spawned by the pool.
That said, this whole exercise is making me rethink JMS vs HTTP... :)
I have always used CorrelationID for request / response and never suffered any performance issues. I can't imagine why that would be a performance issue at all, it should be super fast for any messaging system to implement and quite an important feature to implement well.
http://www.eaipatterns.com/RequestReplyJmsExample.html shows the two mainstream solutions, using a reply-to queue or a correlation ID.
It's an old one, but I've landed here searching for something else and actually do have some insights (hopefully they will be helpful to someone).
We implemented a very similar use-case with Hazelcast being our chassis for the cluster's internode communication. The essence is 2 datasets: 1 distributed map for responses, and 1 "local" list of response awaiters (on each node in the cluster).
each request (receiving its own thread from Jetty) creates an entry in the map of local awaiters; the entry has, obviously, the correlation UID and an object that will serve as a semaphore
then the request is dispatched to the remote (REST/JMS) and the original thread starts waiting on the semaphore; the UID must be part of the request
the remote returns the response and writes it into the responses map with the correlated UID
the responses map is listened to; if the UID of a newly arriving response is found in the map of local awaiters, its semaphore is notified, and the original request's thread is released, picking up the response from the responses map and returning it to the client
This is a general description; I can update the answer with a few optimizations we have, should there be any interest.
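As a rough sketch of the awaiter bookkeeping, with a CompletableFuture standing in for the "object that will serve as a semaphore" (all names here are illustrative):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    public class ResponseAwaiters {
        private final ConcurrentHashMap<String, CompletableFuture<String>> awaiters =
                new ConcurrentHashMap<>();

        // Called on the request thread after dispatching the remote call with this UID.
        public String awaitReply(String uid, long timeoutSeconds) throws Exception {
            CompletableFuture<String> future = new CompletableFuture<>();
            awaiters.put(uid, future);
            try {
                return future.get(timeoutSeconds, TimeUnit.SECONDS); // parks until complete(...)
            } finally {
                awaiters.remove(uid);                                // clean up, success or timeout
            }
        }

        // Called by the listener on the distributed responses map.
        public void onResponse(String uid, String body) {
            CompletableFuture<String> waiter = awaiters.get(uid);
            if (waiter != null) waiter.complete(body);               // releases the original thread
        }
    }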

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS topic would be the right solution. But, since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes for each request to the response topic with a filter (using a session id as the value). The messages workers publish to the response topic have the session id associated with them.
Now this does not feel like a solution (rather it uses JMS as a hammer and treats the problem (and some more) as a nail). Is JMS in this situation a solution at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same queue and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and work_a_response queue. Or if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues to (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue and process the requests in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along with the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
If there's no need for the message to be reliable, or if there's no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard java threading (such as BlockingQueues and ExecutorServices).
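For illustration, the single-JVM version of that two-queue pipeline shrinks to a few lines of java.util.concurrent (Work, Result, and process are placeholders):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    BlockingQueue<Work> queue1 = new LinkedBlockingQueue<>();   // requests
    BlockingQueue<Result> queue2 = new LinkedBlockingQueue<>(); // request/response pairs
    ExecutorService workers = Executors.newFixedThreadPool(5);

    for (int i = 0; i < 5; i++) {
        workers.submit(() -> {                                  // competing consumers
            while (!Thread.currentThread().isInterrupted()) {
                Work work = queue1.take();
                queue2.put(new Result(work, process(work)));    // bundle request with response
            }
            return null;                                        // Callable: checked exceptions OK
        });
    }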
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transactions, then if one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors; that's another advantage of JMS.
The trick with this kind of processing is to try to push the state with the message, or externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and just pick up where it left off once you get the problem resolved and get it restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object into an internal map, keyed with something such as the session ID, and then Object.wait() on it. Then, your Queue 2 listener will get the response, finalize the processing, and use the session ID (sent with the message and retained throughout the pipeline) to look up the object that you're waiting on; it can then simply Object.notify() it to tell the servlet to continue.
Yes, this parks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(): if it times out, the processing took too long and you can gracefully alert the client.
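A minimal sketch of that wait/notify hand-off (PENDING and RESULTS are illustrative names; the result is stored before notify so a late waiter still finds it):

    static final ConcurrentHashMap<String, Object> PENDING = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, String> RESULTS = new ConcurrentHashMap<>();

    // Servlet thread: park until the Queue 2 listener delivers, or time out.
    static String waitForResult(String sessionId, long timeoutMs) throws InterruptedException {
        Object lock = new Object();
        PENDING.put(sessionId, lock);
        synchronized (lock) {
            if (!RESULTS.containsKey(sessionId)) lock.wait(timeoutMs);
        }
        PENDING.remove(sessionId);
        return RESULTS.remove(sessionId);   // null => timed out; alert the client gracefully
    }

    // Queue 2 listener thread: store the result, then wake the waiting servlet.
    static void deliver(String sessionId, String result) {
        RESULTS.put(sessionId, result);
        Object lock = PENDING.get(sessionId);
        if (lock != null) synchronized (lock) { lock.notify(); }
    }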
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time expected for a worker entity to finish its work, etc. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk JMS, or because you want asynchrony; the former reason does not make much sense, and if it is the latter, you could choose another communication style, such as one-way web service calls.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous, one-way web service calls to the different worker entities.
