How to design a system that queues requests & processes them in batches? - java

I have at my disposal a REST service that accepts a JSON array of image URLs and will return scaled thumbnails.
Problem
I want to batch up image URLs sent by concurrent clients before calling the REST service.
Obviously if I receive 1 image, I should wait a moment in case other images trickle in.
I've settled on a batch of 5 images. But the question is, how do I design it to take care of these scenarios:
If I receive x images, such that x < 5, how do I time out from waiting if no new images arrive in the next few minutes?
If I use a queue to buffer incoming image URLs, I will probably need to lock it to prevent clients from concurrently writing while I'm busy reading my batches of 5. What data structure is good for this? A BlockingQueue?

The data structure is not what's missing. What's missing is an entity - a timer task, I'd say - which you stop and restart every time you send a batch of images to your service. You do this whether you send them because you had 5 (incidentally, I assume that 5 is just your starting number and that it will be configurable, along with your timeout) or because the timeout fired.
So there are two entities running: a main thread, which receives requests, queues them, checks the queue depth, and, if it's 5 or more, sends the oldest 5 to the service (restarting the timer task); and the timer task, which picks up incomplete batches and sends them on.
Side note: that main thread seems to have several responsibilities, so some decomposition might be in order.
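For illustration, here is a minimal sketch of that design, assuming a hypothetical callThumbnailService method standing in for the real REST call; the batch size and timeout are hard-coded here but would be configurable:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ImageBatcher {
    private static final int BATCH_SIZE = 5;
    private static final long TIMEOUT_MS = 2 * 60 * 1000; // flush incomplete batches after 2 minutes
    private final BlockingQueue<String> urls = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> flushTask;

    // Called by request-handling threads.
    public synchronized void submit(String url) {
        urls.add(url);
        if (urls.size() >= BATCH_SIZE) {
            sendBatch();        // full batch: send the oldest 5 immediately
        } else {
            restartTimer();     // partial batch: (re)arm the timeout
        }
    }

    private synchronized void restartTimer() {
        if (flushTask != null) flushTask.cancel(false);
        flushTask = scheduler.schedule(this::sendBatch, TIMEOUT_MS, TimeUnit.MILLISECONDS);
    }

    // Runs either when the queue reaches 5 or when the timer fires.
    private synchronized void sendBatch() {
        if (flushTask != null) flushTask.cancel(false);  // pending timeout no longer needed
        List<String> batch = new ArrayList<>();
        urls.drainTo(batch, BATCH_SIZE);                 // take up to 5 of the oldest URLs
        if (!batch.isEmpty()) {
            callThumbnailService(batch);                 // hypothetical: POST the JSON array of URLs
        }
    }

    private void callThumbnailService(List<String> batch) { /* REST call goes here */ }
}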

Well, what you could do is have the clients send a special string to the queue, indicating that they are done sending image URLs. So if the last element in the queue is that string, you know that there are no URLs left.
If you have multiple clients and you know the number of clients, you can count the number of those indicators in the queue to check whether all of the clients are finished.
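A minimal sketch of that sentinel ("poison pill") idea, assuming a hypothetical DONE marker and a known client count:

import java.util.concurrent.BlockingQueue;

public class SentinelReader {
    static final String DONE = "__END_OF_URLS__"; // must never collide with a real URL

    static void drain(BlockingQueue<String> queue, int expectedClients) throws InterruptedException {
        int finished = 0;
        while (finished < expectedClients) {
            String item = queue.take();   // blocks until an element is available
            if (DONE.equals(item)) {
                finished++;               // one more client is done
            } else {
                addToBatch(item);         // a real image URL
            }
        }
    }

    static void addToBatch(String url) { /* accumulate for the REST call */ }
}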

1. As an example, if your Java web app is running on Google App Engine, you could write each client request to the datastore, and have a cron job (i.e., a scheduled task in GAE-speak) read the datastore, build a batch, and send it.
2. For the concurrency/locking aspect, you could again rely on the GAE datastore to provide atomicity.
Of course feel free to disregard my proposal if GAE isn't an option.

Related

Is order guaranteed when two or more processes (apps) are waiting to put data in the same buffer?

My project consists of 2 different clients sending messages to a server, all on the same machine.
All of the components have GUIs. When we click on a button in the clients' GUIs, they start sending messages.
The server receives those messages, and when the buffer is unavailable it sends the clients a message telling them that they can't write to the buffer, so the clients go to sleep.
When both clients are waiting for the server to send the "available" message, is it possible to guarantee order? By order I mean that the client we click first is the first one to actually send its message.
The clients sleep for 1 millisecond every time they check the buffer and find a "not available" message.
I am assuming in this answer that the clients are different processes rather than 1 multithreaded Java program.
You would need some communication between the processes to guarantee order. You have several factors impacting timing: when the clients check the buffer, when the user clicks, and the order in which the O/S happens to schedule the clients to add their messages to the buffer (which is, presumably, structured as a FIFO queue).
One (relatively heavy) way to accomplish this would be to use a mutex semaphore (meaning one maintained by the O/S or some other application, not within the JVM). The client would acquire a mutex semaphore when the user clicks and not release it until they have added the related message to the queue. This would guarantee that messages end up in the order in which the user clicks.
There are various third party libraries that wrap O/S semaphores in a relatively portable way.
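For illustration only, here is a rough sketch of that idea using a file lock as a crude cross-process mutex (java.nio.channels.FileLock is backed by the O/S); the lock-file path and the buffer write are placeholders:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class ClickOrderedSender {
    // Acquire the lock when the user clicks; release it only after the
    // message is in the buffer, so writes are serialized across processes.
    public void onClick(String message) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/click.lock", "rw");
             FileChannel channel = file.getChannel();
             FileLock lock = channel.lock()) {    // blocks until this process holds the lock
            writeToBuffer(message);               // placeholder for the real buffer write
        }                                         // lock released on close
    }

    private void writeToBuffer(String message) { /* ... */ }
}

Note that FileLock serializes the writes but does not strictly guarantee FIFO hand-off between waiters, so a true O/S semaphore library may still be preferable.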

Aggregate messages without List

I'm using Spring Integration and I need to pack groups of 10k messages. I don't want to store them in a List, since later 10k could become much bigger, and persistent storage is also not my choice. I just want several threads to send messages to a single thread, where I can count them and write them to disk in files containing 10k lines each. After the counter reaches 10k, I create a new file, set the counter to zero, and so on. It would work fine with a direct channel, but how do I tell several threads (I'm using
<int:dispatcher task-executor="executor" />
) to send messages to a single thread? Thanks
You can achieve this with a QueueChannel. Any number of threads can send messages to it concurrently. On the other side, just configure a PollingConsumer with a fixed-delay poller - single-threaded, as you requested. I mean that a poller with a fixed delay, with everything downstream wired through DirectChannels, will run in only a single thread. Therefore your count-and-rollover logic can live there.
There is nothing more to show you, because that configuration is straightforward: different services send messages to the same QueueChannel, and the fixed-delay poller ensures single-threaded reading for you.
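For illustration, here is a plain-Java analogue of that single-consumer arrangement (the queue type and file naming are assumptions, not Spring Integration API):

import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FileRollover {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called concurrently by any number of producer threads.
    public void send(String line) { queue.add(line); }

    // Runs on exactly one thread, like the fixed-delay poller.
    public void consume() throws Exception {
        int count = 0, fileIndex = 0;
        PrintWriter out = new PrintWriter("batch-" + fileIndex + ".txt");
        while (true) {
            out.println(queue.take());        // single reader: no extra locking needed
            if (++count == 10_000) {          // rollover point
                out.close();
                out = new PrintWriter("batch-" + (++fileIndex) + ".txt");
                count = 0;
            }
        }
    }
}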

Creating Amazon SNS messages to be processed in the future

For the last few years we have used our own RM Application to process events related to our applications. This works by polling a database table every few minutes, looking for any rows that have a due date before now and have not been processed yet.
We are currently making the transition to SNS, with SQS worker tiers processing them. The problem with this approach is that we can't future-date our messages. Our applications sometimes have events that we don't want to process until a week later.
Are there any design approaches, alternative services, or clever tricks we could employ that would allow us to achieve this?
One solution would be to keep our existing application running, at a simplified level, so all it does is send the SNS notifications when they are due, but the aim of this project is to try and do away with our existing app.
The database approach would be the wisest, being careful that each row is only processed once.
Amazon Simple Notification Service (SNS) is designed to send notifications immediately. There is no functionality for a delayed send (although some notification types are retried if they fail).
Amazon Simple Queue Service (SQS) does have a delay feature, but only up to 15 minutes -- this is useful if you need to do some work before the message is processed, such as copying related data to Amazon S3.
Given that your requirement is to wait until some future arbitrary time (effectively like a scheduling system), you could either start a process and tell it to sleep for a certain amount of time (a bad idea in case systems are restarted), or continue your approach of polling from a database.
If all jobs are scheduled for the distant future (e.g. at least one hour away), you theoretically only need to poll the database once an hour to retrieve the earliest scheduled time.
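As a rough sketch of that polling approach (the events table and column names are hypothetical), using SELECT ... FOR UPDATE so each row is processed only once even with multiple pollers:

import java.sql.*;

public class EventPoller {
    public void pollOnce(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM events " +
                "WHERE due_date <= ? AND processed = FALSE FOR UPDATE")) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    publish(rs.getString("payload"));      // placeholder for the SNS publish
                    markProcessed(conn, rs.getLong("id"));
                }
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();                               // leave unprocessed rows for the next poll
            throw e;
        }
    }

    private void markProcessed(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE events SET processed = TRUE WHERE id = ?")) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }

    private void publish(String payload) { /* send the notification */ }
}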
A week might be too long, as SQS message retention itself is capped at 14 days. If a maximum retention of 14 days is acceptable, one idea is to keep changing the visibility of a message every time you receive it, until it is ready for processing. The maximum allowed visibility timeout is 12 hours. More on visibility timeouts and the APIs for changing them:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
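A sketch of that visibility-timeout trick with the AWS SDK for Java v1 (the queue URL and the due-date check are placeholders):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class DelayedConsumer {
    private static final int TWELVE_HOURS = 12 * 60 * 60;  // max visibility timeout, in seconds

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            if (isDue(m)) {
                process(m);
                sqs.deleteMessage(queueUrl, m.getReceiptHandle());
            } else {
                // Not due yet: hide the message for up to 12 more hours and re-check later.
                sqs.changeMessageVisibility(queueUrl, m.getReceiptHandle(), TWELVE_HOURS);
            }
        }
    }

    private static boolean isDue(Message m) { /* compare a due-date message attribute to now */ return false; }
    private static void process(Message m)  { /* handle the event */ }
}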
I found this approach: https://github.com/alestic/aws-sns-delayed. Basically, you can use a Step Functions state machine with a Wait state in there.

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS Topic would be the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically Queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a filter for each request (using a session ID as the value). The messages workers publish to the response topic have the session ID associated with them.
Now this does not feel like a solution (rather it uses JMS as a hammer and treats the problem (and some more) as a nail). Is JMS in this situation a solution at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same queue and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and work_a_response queue. Or if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure that the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue, and process them in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along with the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
If there's no need for the messages to be reliable, or no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard Java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transactions, then if one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors, so that's another advantage of JMS.
The trick with this kind of processing is to try to push the state along with the message, or to externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and just pick up where it left off once you get the problem resolved and the components restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object in an internal map, keyed by something such as the session ID, and then Object.wait() on it. Then your Queue 2 listener will get the response, finalize the processing, and use the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on; it can then simply Object.notify() it to tell the servlet to continue.
Yes, this sticks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(): if it times out, the processing took too long, so you can gracefully alert the client.
This basically frees you from filters, reply queues, and such. It's pretty simple to set it all up.
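A minimal sketch of that wait/notify hand-off (the map types and method names are illustrative; the session ID travels with the JMS message):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResponseCorrelator {
    private final Map<String, Object> waiters = new ConcurrentHashMap<>();
    private final Map<String, String> responses = new ConcurrentHashMap<>();

    // Called by the servlet thread after it sends the request to Queue 1.
    public String awaitResponse(String sessionId, long timeoutMs) throws InterruptedException {
        Object monitor = new Object();
        waiters.put(sessionId, monitor);
        synchronized (monitor) {
            if (!responses.containsKey(sessionId)) {
                monitor.wait(timeoutMs);         // parks the servlet thread
            }
        }
        waiters.remove(sessionId);
        return responses.remove(sessionId);      // null if we timed out
    }

    // Called by the Queue 2 listener when a response arrives.
    public void onResponse(String sessionId, String payload) {
        responses.put(sessionId, payload);
        Object monitor = waiters.get(sessionId);
        if (monitor != null) {
            synchronized (monitor) {
                monitor.notify();                // wakes the waiting servlet thread
            }
        }
    }
}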
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time expected for a worker entity to finish its work, etc. But the problem you are trying to solve is one-to-many communication. Did you add JMS to your system because you want all entities to talk the JMS protocol, or because you want asynchrony? The former reason does not make sense; if it is the latter, you can choose another communication mechanism, such as one-way web service calls.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous one-way web service calls to the different worker entities...

Is MQ publish/subscribe domain-specific interface generally faster than point-to-point?

I'm working on an existing application that uses a transport layer with point-to-point MQ communication.
For each account in a given list, we need to retrieve some information.
Currently we have something like this to communicate with MQ:
responseObject getInfo(requestObject) {
    // code to send message to MQ
    // code to retrieve message from MQ
}
As you can see we wait until it finishes completely before proceeding to the next account.
Due to performance issues we need to rework it.
There are 2 possible scenarios that I can think of at the moment.
1) Within the application, create a bunch of threads that each execute the transport adapter for one account, then gather the data from each task. I prefer this method, but some of the team members argue that the transport layer is a better place for such a change and that we should place the extra load on MQ instead of on our application.
2) Rework transport layer to use publish/subscribe model.
Ideally I want something like this:
void send(requestObject) {
    // code to send message to MQ
}

responseObject receive() {
    // code to retrieve message from MQ
}
Then I will just send requests in a loop, and later retrieve the data in a loop. The idea is that while the first request is being processed by the back-end system, we don't have to wait for its response but can instead send the next request.
My question: is this going to be a lot faster than the current sequential retrieval?
The question title frames this as a choice between P2P and pub/sub but the question body frames it as a choice between threaded and pipelined processing. These are two completely different things.
Either code snippet provided could just as easily use P2P or pub/sub to put and get messages. The decision should not be based on speed but rather whether the interface in question requires a single message to be delivered to multiple receivers. If the answer is no then you probably want to stick with point-to-point, regardless of your application's threading model.
And, incidentally, the answer to the question posed in the title is "no." When you use the point-to-point model, your messages resolve immediately to a destination or transmit queue and WebSphere MQ routes them from there. With pub/sub, your message is handed off to an internal broker process that resolves zero to many possible destinations. Only after this step does the published message get put on a queue where, for the remainder of its journey, it is handled like any other point-to-point message. Although pub/sub is not normally noticeably slower than point-to-point, the code path is longer and therefore, all other things being equal, it will add a bit more latency.
The other part of the question is about parallelism. You proposed either spinning up many threads or breaking the app up so that requests and replies are handled separately. A third option is to have multiple application instances running. You can combine any or all of these in your design. For example, you can spin up multiple request threads and multiple reply threads and then have application instances processing against multiple queue managers.
The key to this question is whether the messages have affinity to each other, to order dependencies or to the application instance or thread which created them. For example, if I am responding to an HTTP request with a request/reply then the thread attached to the HTTP session probably needs to be the one to receive the reply. But if the reply is truly asynchronous and all I need to do is update a database with the response data then having separate request and reply threads is helpful.
In either case, the ability to dynamically spin up or down the number of instances is helpful in managing peak workloads. If this is accomplished with threading alone then your performance scalability is bound to the upper limit of a single server. If this is accomplished by spinning up new application instances on the same or different server/QMgr then you get both scalability and workload balancing.
Please see the following article for more thoughts on these subjects: Mission:Messaging: Migration, failover, and scaling in a WebSphere MQ cluster
Also, go to the WebSphere MQ SupportPacs page and look for the Performance SupportPac for your platform and WMQ version. These are the ones with names beginning with MP**. These will show you the performance characteristics as the number of connected application instances varies.
It doesn't sound like you're thinking about this the right way. Regardless of the model you use (point-to-point or publish/subscribe), if your performance is bounded by a slow back-end system, neither will help speed up the process. If, however, you could theoretically issue more than one request at a time against the back-end system and expect to see a speed up, then you still don't really care if you do point-to-point or publish/subscribe. What you really care about is synchronous vs. asynchronous.
Your current approach for retrieving the data is clearly synchronous: you send the request message, and wait for the corresponding response message. You could do your communication asynchronously if you simply sent all the request messages in a row (perhaps in a loop) in one method, and then had a separate method (preferably on a different thread) monitoring the incoming topic for responses. This would ensure that your code would no longer block on individual requests. (This roughly corresponds to option 2, though without pub/sub.)
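A sketch of that asynchronous split using plain JMS (the correlation scheme and the session/queue setup are assumed, not shown in full):

import javax.jms.*;

public class AsyncAccountFetcher {
    // Fire all requests in a row; nothing blocks waiting for replies here.
    void sendAll(Session session, MessageProducer producer, Iterable<String> accounts) throws JMSException {
        for (String account : accounts) {
            TextMessage msg = session.createTextMessage(account);
            msg.setJMSCorrelationID(account);   // lets replies be matched to requests
            producer.send(msg);                 // returns immediately
        }
    }

    // A separate thread monitors the reply destination.
    void listenForReplies(Session session, Destination replyDestination) throws JMSException {
        MessageConsumer consumer = session.createConsumer(replyDestination);
        consumer.setMessageListener(message -> {
            try {
                handle(message.getJMSCorrelationID(), message);  // e.g., update the database
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }

    void handle(String correlationId, Message reply) { /* process the response */ }
}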
I think option 1 could get pretty unwieldy, depending on how many requests you actually have to make, though it, too, could be implemented without switching to a pub/sub channel.
The reworked approach will use fewer threads. Whether that makes the application faster depends on whether the overhead of managing a lot of threads is currently slowing you down. If you have fewer than 1000 threads (this is a very, very rough order-of-magnitude estimate!), I would guess it probably isn't. If you have more than that, it might well be.
