I have the following problem to solve.
I need to write a Java program that:
1. Reads JSON objects j1, j2, ..., jn from a web service.
2. Does some number crunching on each object to come up with j1', j2', ..., jn'.
3. Sends objects j1', j2', ..., jn' to a web service.
The computational and space requirements for steps 1, 2, and 3 can vary at any given time.
For example:
The time it takes to process a JSON object at step 2 can vary depending on the contents of the JSON Object.
The rate of objects being produced by the web service in step 1 can go up or down over time.
The consuming web service in step 3 can get backlogged.
To address the above design concerns I want to implement the following architecture:
Read JSON objects from the external web service and place them on a queue.
An automatically size-adjusting worker thread pool that consumes JSON objects from the queue and processes them, placing the resulting objects on a second queue.
An automatically size-adjusting worker thread pool that consumes objects from the second queue and sends them to the consuming web service.
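As a sketch of what I mean, using only java.util.concurrent (all class and method names below are placeholders I made up):

import java.util.concurrent.*;

// Sketch of the two-queue pipeline; readFromSource/crunch/sendToSink are placeholders.
public class JsonPipeline {

    public static void main(String[] args) {
        BlockingQueue<String> rawQueue = new LinkedBlockingQueue<>(1000);   // step 1 -> step 2
        BlockingQueue<String> doneQueue = new LinkedBlockingQueue<>(1000);  // step 2 -> step 3

        // SynchronousQueue + CallerRunsPolicy: each pool grows toward its max under load,
        // idle extra threads are reclaimed after 30s, and a saturated pool pushes back
        // on its dispatcher thread, giving natural backpressure.
        ThreadPoolExecutor crunchers = newElasticPool(2, 16);
        ThreadPoolExecutor senders = newElasticPool(2, 8);

        startLoop(() -> rawQueue.put(readFromSource()));        // step 1: reader
        startLoop(() -> {                                       // step 2: dispatch to crunchers
            String json = rawQueue.take();
            crunchers.execute(() -> putQuietly(doneQueue, crunch(json)));
        });
        startLoop(() -> {                                       // step 3: dispatch to senders
            String json = doneQueue.take();
            senders.execute(() -> sendToSink(json));
        });
    }

    private static ThreadPoolExecutor newElasticPool(int core, int max) {
        return new ThreadPoolExecutor(core, max, 30, TimeUnit.SECONDS,
                new SynchronousQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
    }

    private interface BlockingStep { void run() throws InterruptedException; }

    private static void startLoop(BlockingStep step) {
        new Thread(() -> {
            try { while (true) step.run(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }

    private static void putQuietly(BlockingQueue<String> q, String s) {
        try { q.put(s); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    private static String readFromSource() { return "{}"; }    // placeholder: web service read
    private static String crunch(String json) { return json; } // placeholder: number crunching
    private static void sendToSink(String json) { }            // placeholder: web service send
}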
Question:
I am curious whether there is a framework I can use to solve this problem.
Notes:
I can solve this using a range of components like custom queues and thread pools from the concurrency package -- however, I'm looking for a framework that spares me from hand-writing such a solution.
This is not going to live inside a container. This will be a Java process whose entry point is public static void main(String[] args).
However, if there is a container suited to this paradigm, I would like to learn about it.
I could split this into multiple processes, however I'd like to keep it very simple and in a single process.
Thanks.
Try Apache Camel or Spring Integration to wire things up. These are integration frameworks and will ease your interaction with web services. What you need to do is define a route from web service 1 -> number cruncher -> web service 2; any routing and conversion required in between can be handled by the framework itself.
You'd implement your cruncher as a Camel Processor.
Parallelizing your cruncher can be achieved via SEDA; Camel has a component for this pattern. Another alternative would be AsyncProcessor.
I'd say you first take a look at the principles behind frameworks like Camel. The abstractions they create are very relevant for the problem at hand.
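For illustration, here is a rough sketch of what such a route could look like (the endpoint URIs, consumer counts, and crunch method are hypothetical, not prescribed by Camel):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CruncherRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Stage 1: poll the source web service and hand each payload to an in-memory SEDA queue.
        from("timer:poll?period=500")
            .to("http://source-host/objects")             // hypothetical producing service
            .to("seda:crunch");

        // Stage 2: SEDA's concurrentConsumers option parallelizes the cruncher.
        from("seda:crunch?concurrentConsumers=10")
            .process(exchange -> {
                String json = exchange.getIn().getBody(String.class);
                exchange.getIn().setBody(crunch(json));   // your number crunching
            })
            .to("seda:send");

        // Stage 3: a separate consumer pool posts results to the consuming web service.
        from("seda:send?concurrentConsumers=5")
            .to("http://sink-host/results");              // hypothetical consuming service
    }

    private String crunch(String json) { return json; }  // placeholder

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new CruncherRoutes());
        context.start();
        Thread.currentThread().join();                    // keep the plain Java process alive
    }
}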
I'm not exactly sure what the end question is for your post, but you have a reasonable design concept. One question I have for you: what environment are you in? Are you in a Java EE container or just a simple standalone application?
If you are in a container, it would make more sense to have message-driven beans processing off of the JMS queues than a pool of worker threads.
If standalone, it would make more sense for you to manage the thread pool yourself. That said, I would also consider having separate applications running that pull the work off of the queues, which would give you a better-scaling architecture: if the need ever came up, you could add more machines with more workers pointing at the one queue.
Related
Could anyone give me the best practice for when I need to create new verticles in Vert.x? I know that each verticle can be deployed remotely and put into a cluster. However, I still have a question about how to design my application. My questions are:
Is it okay to have a lot of verticles?
E.g., I create an HttpServer with a lot of endpoints for services. I would like to make different subroutes and set them up depending on which features (services) are enabled. Some of them will initiate long-term processes and will use the event bus to generate new events in the system. What is the best approach here?
For example, I can pass vertx into each endpoint as an argument and use it to create a Router:
getVertx().createHttpServer()
        .requestHandler(router::accept)
        .listen(Config.GetEVotePort(), startedEvent -> { /* ... */ });
// ...
router.mountSubRouter("/api", HttpEndpoint.createHttpRoutes(
        getVertx(), in.getType()));
Or I can create each new endpoint for a service as a verticle instead of passing vertx around. My question is mostly about whether it is okay to pass vertx as an argument, or whether I should implement a new verticle whenever I need this.
My 10 cents:
Yes, the point is that there can be thousands of verticles; as I understand it, the name comes from the word "particle", and the whole idea is a kind of UNIX-philosophy bet on the JVM. So write each particle/verticle to do one thing and do it well, and use text streams to communicate between particles, because that's a universal interface.
Then the answer to your question is about how many servers you have: How many JVMs are you going to fire up per server? How much memory do you expect each JVM to use? How many verticles can you run per JVM within memory limits? How big are your message sizes? What's the network bandwidth limit? How many messages are going through your system? And can the event bus handle this traffic?
Then it's all about how verticles work together, which is basically the event bus. What I think you want is your HttpServer to route messages to an event bus where different verticles are configured to listen to different "topics" (different text streams). If one verticle initiates a long-term process, it's triggered by an event on the bus, and it puts its output back onto a topic for the next verticle / response verticle.
Again, that depends on how many servers/JVMs you have and whether you have a clustered event bus or not.
So one verticle ought to serve multiple endpoints: use the Router, for example, to match a given request from the HttpServer to a Route, which then selects a Handler, and that Handler lives in a given verticle.
It's best to have a lot of verticles. That way your application is loosely coupled and can be easily load balanced. For example, you may want 1-3 routing verticles but a lot more worker verticles if your load is high. That way you can increase only the number of workers, without altering the number of routing verticles.
I wouldn't suggest passing vertx as an argument. Use the EventBus instead, as @rupweb already suggested. Pass messages from your routing verticles to the workers and back. That's the best practice you're looking for:
http://vertx.io/docs/vertx-core/java/#event_bus
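A minimal sketch of that pattern (the event-bus address and work method are hypothetical; the API shown is Vert.x 3.x/4.x style):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

// Worker verticle: no Vertx reference is passed around; it just listens on the event bus.
public class WorkerVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().consumer("work.requests", message -> {   // "work.requests" is made up
            String result = doLongRunningWork((String) message.body());
            message.reply(result);                                 // goes back to the sender
        });
    }

    private String doLongRunningWork(String input) { return input; } // placeholder

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy as a worker so the long-running job never blocks an event loop.
        vertx.deployVerticle(new WorkerVerticle(), new DeploymentOptions().setWorker(true));
        // A routing verticle would then call:
        //   vertx.eventBus().request("work.requests", payload, reply -> { ... });
    }
}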
I've built a REST API using Spring Boot that basically accepts two images via POST and performs image comparison on them. The API is invoked synchronously. I'm not using an external application server to host the service; rather, I package it as a jar and run it.
@RequestMapping(method = RequestMethod.POST, value = "/arraytest")
public String compareTest(@RequestParam("query") MultipartFile queryFile,
                          @RequestParam("test") MultipartFile testFile,
                          RedirectAttributes redirectAttributes, Model model) throws IOException {
    CoreDriver driver = new CoreDriver();
    boolean imageResult = driver.initProcess(queryFile, testFile);
    model.addAttribute("result", imageResult);
    return "resultpage";
}
The service could be invoked in parallel across multiple machines, and I need my service to perform efficiently. I'm trying to understand how parallel calls to a REST service are handled.
When a request is sent to the service, does a single object of the service get created, and does that same object get used across multiple threads to handle multiple requests?
A follow-up question would be whether it is possible to improve the performance of the service from the perspective of handling requests, rather than by improving the performance of the service's functionality.
Spring controllers (and most Spring beans) are singletons, i.e. there is a single instance in your application and it handles all requests.
Assuming this is not web sockets (and if you don't know what that means, it's probably not), servlet containers typically maintain a thread pool, and will take a currently unused thread from the pool and use it to handle the request.
You can tune this by, for example, changing some aspects of the thread pool (initial threads, max threads, etc.). This is servlet container stuff (i.e. configuring Tomcat/Jetty/whatever you're using), not Spring per se.
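For example, with Spring Boot's embedded Tomcat the request pool can be tuned in application.properties (the property names below are the Spring Boot 2.3+ forms; older versions use server.tomcat.max-threads):

# application.properties: size the embedded Tomcat request thread pool
server.tomcat.threads.max=200
server.tomcat.threads.min-spare=20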
You can also tune other HTTP aspects such as compression. This can usually be done via the container, but if I recall correctly Spring offers a servlet filter that will do this.
The image library and image operations you perform will also matter. Many libraries convert the image into raw form in memory in order to perform operations. This means a 3 MB JPEG can take upwards of 100 MB of heap space. The implication is that you may need some kind of semaphore to limit concurrent image processing.
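A minimal sketch of such a gate (the permit count and names are made up for illustration):

import java.util.concurrent.Semaphore;

// Caps how many requests may hold decoded images in memory at once.
public class ImageProcessingGate {
    private static final Semaphore PERMITS = new Semaphore(4); // tune to your heap size

    public static boolean compareImages(byte[] query, byte[] test) throws InterruptedException {
        PERMITS.acquire();                 // blocks while 4 comparisons are already running
        try {
            return doCompare(query, test); // the memory-hungry decode + comparison
        } finally {
            PERMITS.release();
        }
    }

    private static boolean doCompare(byte[] a, byte[] b) { return false; } // placeholder
}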
The best approach here is to experiment with different libraries and see what works best for your use case. Hope this helps.
The controller will be a singleton, but there are ways to make the processing async, like a thread pool or JMS. You can also have multiple nodes. As long as you return a key and provide a service for clients to poll to get the result later, you can scale out the back-end processing.
Besides, you can cluster your app so there are more nodes to process requests. And if possible, cache results; this pays off if the same input yields the same output for 30% or more of the requests.
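As a sketch of the caching idea (the key derivation here is simplistic and hypothetical; a production version should use a collision-proof key, such as a cryptographic hash of both inputs):

import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ComparisonCache {
    private final ConcurrentMap<Integer, Boolean> cache = new ConcurrentHashMap<>();

    public boolean compare(byte[] query, byte[] test) {
        int key = 31 * Arrays.hashCode(query) + Arrays.hashCode(test);  // naive composite key
        return cache.computeIfAbsent(key, k -> doCompare(query, test)); // compute once per input
    }

    private boolean doCompare(byte[] a, byte[] b) { return false; }     // real comparison here
}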
I am going through the different concurrency models in a multi-threaded environment (http://tutorials.jenkov.com/java-concurrency/concurrency-models.html).
The article highlights about three concurrency models.
Parallel Workers
The first concurrency model is what I call the parallel worker model. Incoming jobs are assigned to different workers.
Assembly Line
The workers are organized like workers at an assembly line in a factory. Each worker only performs a part of the full job. When that part is finished the worker forwards the job to the next worker.
Each worker is running in its own thread, and shares no state with other workers. This is also sometimes referred to as a shared nothing concurrency model.
Functional Parallelism
The basic idea of functional parallelism is that you implement your program using function calls. Functions can be seen as "agents" or "actors" that send messages to each other, just like in the assembly line concurrency model (AKA reactive or event driven systems). When one function calls another, that is similar to sending a message.
Now I want to map Java API support onto these three concepts:
Parallel Workers: Is it the ExecutorService, ThreadPoolExecutor, and CountDownLatch APIs?
Assembly Line: Sending an event to a messaging system like JMS, using the messaging concepts of queues and topics.
Functional Parallelism: ForkJoinPool to some extent, and Java 8 streams. The ForkJoin pool is easier to understand than streams.
Am I correct in mapping these concurrency models? If not please correct me.
Each of those models says how the work is done/split from a general point of view, but when it comes to implementation it really depends on your exact problem. Generally I see it like this:
Parallel Workers: a producer creates new jobs somewhere (e.g. in a BlockingQueue) and many threads (via an ExecutorService) process those jobs in parallel. Of course, you could also use a CountDownLatch, but that means you want to trigger an action after exactly N subproblems have been processed (e.g. you know your big problem may be split into N smaller problems, check the second example here).
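A bare-bones sketch of that setup (pool size, job count, and processJob are arbitrary examples):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelWorkers {
    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(8); // 8 parallel workers
        for (int jobId = 0; jobId < 100; jobId++) {
            final int id = jobId;
            workers.submit(() -> processJob(id)); // each job handled start-to-finish by one worker
        }
        workers.shutdown();
    }

    static void processJob(int id) { /* the full job runs here */ }
}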
Assembly Line: for every intermediate step you have a BlockingQueue and one Thread or an ExecutorService. On each step, jobs are taken from one BlockingQueue and put in the next one to be processed further. On your idea with JMS: JMS is there to connect distributed components, is part of Java EE, and was not designed for highly concurrent in-process use (messages are usually kept on hard disk before being processed).
Functional Parallelism: ForkJoinPool is a good example of how you could implement this.
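For example, a classic divide-and-conquer sum with ForkJoinPool (the threshold and task are arbitrary examples, not from the post):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumTask extends RecursiveTask<Long> {
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1000) {                 // small enough: just compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;               // otherwise split into two subtasks
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                             // run the left half asynchronously
        return right.compute() + left.join();    // compute right here, then join left
    }
}
// usage: long total = ForkJoinPool.commonPool().invoke(new SumTask(values, 0, values.length));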
An excellent question, to which the answer might not be quite as satisfying. The concurrency models listed show some of the ways you might want to go about implementing a concurrent system. The API provides tools for implementing any of these models.
Let's start with ExecutorService. It allows you to submit tasks to be executed in a non-blocking way. The ThreadPoolExecutor implementation then limits the maximum number of threads available. The ExecutorService does not require the task to perform the complete process as you might expect of a parallel worker: the task may be limited to a specific part of the process and send a message upon completion that starts the next step in an assembly line.
The CountDownLatch and the ExecutorService provide a means to block until all workers have completed, which may come in handy if a certain process has been divided into different concurrent sub-tasks.
The point of JMS is to provide a means for messaging between components. It does not enforce a specific model for concurrency. Queues and topics denote how a message is sent from a publisher to a subscriber: when you use queues, the message is sent to exactly one subscriber; topics, on the other hand, broadcast the message to all subscribers of the topic.
Similar behavior could be achieved within a single component by, for example, using the observer pattern.
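A small sketch of that queue-vs-topic distinction using the JMS 2.0 API (obtaining the ConnectionFactory and destinations is vendor-specific, e.g. via JNDI, and is assumed here):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.jms.Topic;

public class QueueVsTopic {
    static void demo(ConnectionFactory factory, Queue queue, Topic topic) {
        try (JMSContext context = factory.createContext()) {
            // Queue: exactly one consumer will receive this message.
            context.createProducer().send(queue, "goes to a single worker");
            // Topic: every active subscriber receives its own copy.
            context.createProducer().send(topic, "broadcast to all subscribers");
        }
    }
}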
ForkJoinPool is actually one implementation of ExecutorService (which might highlight the difficulty of matching a model to an implementation detail). It just happens to be optimized for working with large amounts of small tasks.
Summary: There are multiple ways to implement a certain concurrency model in the Java environment. The interfaces, classes and frameworks used in implementing a program may vary regardless of the concurrency model chosen.
The actor model is another example of an assembly line, e.g. Akka.
There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added/removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS topic would be the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a filter for each request (using a session id as the value), and the messages workers publish to the response topic carry that session id.
Now this does not feel like a solution; rather, it uses JMS as a hammer and treats the problem (and some more) as a nail. Is JMS a solution in this situation at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I suspect you made it a topic so that different types of workers can listen to the same channel and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and a work_a_response queue. Or, if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure that the messages to the workers contain all the information needed to do the work, and the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue and process them in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along with the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
If there's no need for the message to be reliable, or if there's no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transactions, then if one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors; that's another advantage of JMS.
The trick with this kind of processing is to try to push the state along with the message, or to externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and processing just picks up where it left off once you get the problem resolved and the components restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object on an internal map, keyed with something such as the session ID, and then Object.wait() on it. Then your queue-2 listener will get the response, finalize the processing, and use the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on; it can then simply Object.notify() it to tell the servlet to continue.
Yes, this sticks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(): if it times out, the processing took too long, so you can gracefully alert the client.
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
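A rough sketch of that map-and-wait hand-off (the class and method names are invented for illustration):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReplyCorrelator {
    private final ConcurrentMap<String, Object[]> pending = new ConcurrentHashMap<>();

    // Called by the servlet thread after posting the request to queue 1.
    public Object awaitReply(String sessionId, long timeoutMs) throws InterruptedException {
        Object[] slot = new Object[1];
        pending.put(sessionId, slot);
        try {
            synchronized (slot) {
                long deadline = System.currentTimeMillis() + timeoutMs;
                while (slot[0] == null) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0) break;      // timed out: processing took too long
                    slot.wait(remaining);           // parked until notify, timeout, or spurious wakeup
                }
                return slot[0];                     // null signals a timeout to the caller
            }
        } finally {
            pending.remove(sessionId);
        }
    }

    // Called by the queue-2 listener when a response with this session id arrives.
    public void deliverReply(String sessionId, Object response) {
        Object[] slot = pending.get(sessionId);
        if (slot == null) return;                   // nobody is waiting any more
        synchronized (slot) {
            slot[0] = response;
            slot.notify();                          // wake the waiting servlet thread
        }
    }
}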
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time expected for a worker entity to finish its work, and so on. But the problem you are trying to solve is one-to-many communication. You added the JMS protocol to your system either because you want all entities to be able to talk JMS, or because you want asynchrony. The former reason does not make much sense; if it is the latter, you could choose another communication style, such as one-way web service calls.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous, one-way web service calls to the different worker entities.
I'm working on an existing application that uses a transport layer with point-to-point MQ communication.
For each of the accounts in a given list we need to retrieve some information.
Currently we have something like this to communicate with MQ:
ResponseObject getInfo(RequestObject request) {
    // code to send message to MQ
    // code to retrieve message from MQ
}
As you can see, we wait until it finishes completely before proceeding to the next account.
Due to performance issues we need to rework it.
There are two possible scenarios that I can think of at the moment:
1) Within the application, create a bunch of threads that execute the transport adapter for each account, then gather the data from each task. I prefer this method, but some of the team members argue that the transport layer is a better place for such a change and that we should place the extra load on MQ instead of on our application.
2) Rework the transport layer to use a publish/subscribe model.
Ideally I want something like this:
void send(RequestObject request) {
    // code to send message to MQ
}

ResponseObject receive() {
    // code to retrieve message from MQ
}
Then I will just send requests in a loop, and later retrieve the data in a loop. The idea is that while the first request is being processed by the back-end system, we don't have to wait for the response but can instead send the next request.
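The calling code would then become something like this (buildRequests and process are placeholder helpers):

List<RequestObject> requests = buildRequests(accounts);
for (RequestObject request : requests) {
    send(request);                       // fire every request without waiting
}
for (int i = 0; i < requests.size(); i++) {
    ResponseObject response = receive(); // then drain the replies as they arrive
    process(response);
}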
My question: is this going to be a lot faster than the current sequential retrieval?
The question title frames this as a choice between P2P and pub/sub but the question body frames it as a choice between threaded and pipelined processing. These are two completely different things.
Either code snippet provided could just as easily use P2P or pub/sub to put and get messages. The decision should not be based on speed but rather whether the interface in question requires a single message to be delivered to multiple receivers. If the answer is no then you probably want to stick with point-to-point, regardless of your application's threading model.
And, incidentally, the answer to the question posed in the title is "no." When you use the point-to-point model, your messages resolve immediately to a destination or transmit queue and WebSphere MQ routes them from there. With pub/sub, your message is handed off to an internal broker process that resolves zero to many possible destinations. Only after this step does the published message get put on a queue where, for the remainder of its journey, it is handled like any other point-to-point message. Although pub/sub is not normally noticeably slower than point-to-point, the code path is longer and therefore, all other things being equal, it will add a bit more latency.
The other part of the question is about parallelism. You proposed either spinning up many threads or breaking the app up so that requests and replies are handled separately. A third option is to have multiple application instances running. You can combine any or all of these in your design. For example, you can spin up multiple request threads and multiple reply threads and then have application instances processing against multiple queue managers.
The key to this question is whether the messages have an affinity to each other, to ordering dependencies, or to the application instance or thread which created them. For example, if I am responding to an HTTP request with a request/reply, then the thread attached to the HTTP session probably needs to be the one to receive the reply. But if the reply is truly asynchronous and all I need to do is update a database with the response data, then having separate request and reply threads is helpful.
In either case, the ability to dynamically spin up or down the number of instances is helpful in managing peak workloads. If this is accomplished with threading alone then your performance scalability is bound to the upper limit of a single server. If this is accomplished by spinning up new application instances on the same or different server/QMgr then you get both scalability and workload balancing.
Please see the following article for more thoughts on these subjects: Mission:Messaging: Migration, failover, and scaling in a WebSphere MQ cluster
Also, go to the WebSphere MQ SupportPacs page and look for the Performance SupportPac for your platform and WMQ version. These are the ones with names beginning with MP**. These will show you the performance characteristics as the number of connected application instances varies.
It doesn't sound like you're thinking about this the right way. Regardless of the model you use (point-to-point or publish/subscribe), if your performance is bounded by a slow back-end system, neither will help speed up the process. If, however, you could theoretically issue more than one request at a time against the back-end system and expect to see a speed up, then you still don't really care if you do point-to-point or publish/subscribe. What you really care about is synchronous vs. asynchronous.
Your current approach for retrieving the data is clearly synchronous: you send the request message, and wait for the corresponding response message. You could do your communication asynchronously if you simply sent all the request messages in a row (perhaps in a loop) in one method, and then had a separate method (preferably on a different thread) monitoring the incoming topic for responses. This would ensure that your code would no longer block on individual requests. (This roughly corresponds to option 2, though without pub/sub.)
I think option 1 could get pretty unwieldy, depending on how many requests you actually have to make, though it, too, could be implemented without switching to a pub/sub channel.
The reworked approach will use fewer threads. Whether that makes the application faster depends on whether the overhead of managing a lot of threads is currently slowing you down. If you have fewer than 1000 threads (this is a very, very rough order-of-magnitude estimate!), I would guess it probably isn't. If you have more than that, it might well be.