Best practice for creating new verticles in Vert.x (Java)

Could anyone give me best practices for creating new verticles in Vert.x? I know that each verticle can be deployed remotely and put into a cluster. However, I still have questions about how to design my application:
Is it okay to have a lot of verticles?
E.g. I create an HttpServer with a lot of endpoints for services. I would like to make different subroutes and set them up depending on which features (services) are enabled. Some of them will initiate long-term processes and will use the event bus to generate new events in the system. What is the best approach here?
For example, I can pass vertx into each endpoint as an argument and use it to create a Router:
getVertx().createHttpServer()
    .requestHandler(router::accept)
    .listen(Config.GetEVotePort(), startedEvent -> {..});
...
router.mountSubRouter("/api", HttpEndpoint.createHttpRoutes(
    getVertx(), in.getType()));
Or I can make each new endpoint for a service its own verticle instead of passing Vertx around. My question is mostly: is it okay to pass vertx as an argument, or should I implement a new verticle in cases like this?

My 10 cents:
Yes, the point is that there can be thousands of verticles. As I understand it, the name comes from the word "particle", and the whole idea is a kind of UNIX-philosophy bet on the JVM. So write each particle/verticle to do one thing and do it well, and use text streams to communicate between verticles, because that's a universal interface.
Then the answer to your question depends on your deployment: how many servers do you have? How many JVMs are you going to fire up per server? How much memory do you expect each JVM to use? How many verticles can you run per JVM within memory limits? How big are your message sizes? What's the network bandwidth limit? How many messages are going through your system? And can the event bus handle this traffic?
Then it's all about how verticles work together, which is basically the event bus. What I think you want is for your HttpServer to route messages onto the event bus, where different verticles are configured to listen to different "topics" (different text streams). If one verticle initiates a long-term process, it is triggered by an event on the bus and then puts its output back onto a topic for the next verticle / response verticle.
Again, that depends on how many servers/JVMs you have and whether you have a clustered event bus or not.
So one verticle ought to serve multiple endpoints, for example using the Router to match a given request from the HttpServer to a Route, which then selects a Handler, and that Handler lives in a given verticle.

It's best to have a lot of verticles. That way your application is loosely coupled and can be easily load balanced. For example, you may want 1-3 routing verticles but many more worker verticles if your load is high. That way you can increase only the number of workers, without altering the number of routing verticles.
I wouldn't suggest passing vertx around as an argument. Use the EventBus instead, as #rupweb already suggested: pass messages from your routing verticles to workers and back. That's the best practice you're looking for:
http://vertx.io/docs/vertx-core/java/#event_bus
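The routing-verticle / worker-verticle split above can be sketched without any Vert.x dependency. The toy class below stands in for the event bus with one blocking queue per address; MiniBus, the address names, and the message format are all invented for illustration (in real Vert.x you would use vertx.eventBus() and deploy actual verticles):

```java
import java.util.concurrent.*;

public class MiniBus {
    // Toy stand-in for the Vert.x event bus: one queue per address.
    private final ConcurrentMap<String, BlockingQueue<String>> addresses =
            new ConcurrentHashMap<>();

    public void send(String address, String msg) throws InterruptedException {
        addresses.computeIfAbsent(address, k -> new LinkedBlockingQueue<>()).put(msg);
    }

    public String receive(String address) throws InterruptedException {
        return addresses.computeIfAbsent(address, k -> new LinkedBlockingQueue<>()).take();
    }

    public static void main(String[] args) throws Exception {
        MiniBus bus = new MiniBus();

        // "Worker verticle": listens on one address, replies on another.
        Thread worker = new Thread(() -> {
            try {
                String job = bus.receive("work.requests");
                bus.send("work.replies", "done:" + job);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // "Routing verticle": hands a job off and picks up the result.
        bus.send("work.requests", "job-1");
        System.out.println(bus.receive("work.replies"));   // prints done:job-1
        worker.join();
    }
}
```

The point of the sketch is the decoupling: the "router" never holds a reference to the "worker", only to an address, which is what lets you scale worker count independently.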

Related

Akka system from a QA perspective

I have been testing an Akka-based application for more than a month now. Reflecting on it, I have the following conclusions:
Akka actors alone can achieve a lot of concurrency. I have reached more than 100,000 messages/sec. This is fine, and it is just message passing.
Now, if there is a Netty layer for connections at one end, or your Akka actors end up doing DB calls, REST calls, or writing to files, the whole system doesn't make sense anymore. The actors' mailboxes fill up and their throughput (here, the ability to receive msgs/sec) drops.
From a QA perspective, this is like having a huge pipe into which you can forcefully pump a lot of water, and it can handle it. But if the input hose is bad, or the endpoints cannot handle the pressure, the huge pipe is of no use.
I need answers to the following so that I can make suggestions or verify them in the system:
Should blocking calls like DB calls and REST calls be handled by actors? Or are actors good only for message passing?
Say you need to persistently connect millions of Android/iOS devices to your Akka system. Instead of plain sockets (so unreliable), can a remote actor be implemented as a persistent connection?
Is it OK to do any sort of computation, like DB calls, in an actor's message handler?
I would ask the editors to let this post through; I cannot ask all of these questions separately.
1) Yes, they can. But such an operation should be done in a separate (worker) actor that uses a fork-join pool in combination with scala.concurrent.blocking around the blocking code; this is needed to prevent thread starvation. If the target system (DB, REST, and so on) supports several concurrent connections, you may use Akka's routers for that (creating one actor per connection in a pool). You can also create several actors for several different tables (resources, queues, etc.), depending on your transaction isolation and your storage's consistency requirements.
Another way to handle this is to use asynchronous requests with acknowledgements instead of blocking. You may also put the blocking operation inside a separate future (thread, worker), which sends an acknowledgement message when the operation finishes.
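As a rough plain-Java illustration of that second approach (no Akka here; the slowLookup name, the payload format, and the pool size are all invented for the sketch), the blocking call runs on its own pool and an acknowledgement fires on completion:

```java
import java.util.concurrent.*;

public class BlockingOffload {
    // Hypothetical blocking call, standing in for a DB or REST request.
    static String slowLookup(String key) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "value-for-" + key;
    }

    public static void main(String[] args) throws Exception {
        // A dedicated pool keeps the blocking work off the caller's threads.
        ExecutorService blockingPool = Executors.newFixedThreadPool(4);

        // Run the blocking operation in a future; the acknowledgement
        // message is simulated here by the whenComplete callback.
        CompletableFuture<String> reply = CompletableFuture
                .supplyAsync(() -> slowLookup("user42"), blockingPool)
                .whenComplete((result, err) -> {
                    if (err == null) {
                        System.out.println("ack: " + result);
                    }
                });

        System.out.println(reply.get());   // joining only for the demo
        blockingPool.shutdown();
    }
}
```

In an actor system the whenComplete callback would send the acknowledge message back to the requesting actor instead of printing.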
2) Yes, an actor may be implemented as a persistent connection. It would simply be an actor that holds the connection's state (as actors are stateful). It can be made even more reliable with Akka Persistence, which can save the connection state to storage.
3) You can do any non-blocking computation inside the actor's receive (there is no handleMessage method in Akka). Failures (like losing the connection to the DB) are managed automatically by Akka supervision. For blocking code, see 1.
P.S. about the "huge pipe": the backend application itself is a pipe (which becomes huge with Akka), so nothing can improve performance if the environment can't handle it; there are no pumps in this world. But Akka is also a "water tank", which means the outer pressure may be stronger than the inner. It also means the developer should be careful with mailboxes: since "too much water" may cause an OutOfMemoryError, the way to prevent that is to organize back pressure. This can be done by not acknowledging an incoming message (or simply blocking an endpoint's handler) until it has been processed.
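That back-pressure idea can be shown with nothing but a bounded queue playing the role of the mailbox (class and method names are invented for the sketch): the producer's put() blocks whenever the mailbox is full, so it can never outrun the consumer by more than the mailbox capacity:

```java
import java.util.concurrent.*;

public class BackPressureDemo {
    // Pushes `total` messages through a bounded mailbox and returns how
    // many the consumer handled. put() blocks when the mailbox is full,
    // so a fast producer is slowed to the consumer's pace instead of
    // exhausting memory.
    static int runThrough(int total) throws InterruptedException {
        BlockingQueue<Integer> mailbox = new ArrayBlockingQueue<>(8);
        final int[] handled = {0};

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < total; i++) {
                    mailbox.take();          // consumer drains at its own pace
                    handled[0]++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < total; i++) {
            mailbox.put(i);                  // blocks while the mailbox is full
        }
        consumer.join();
        return handled[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runThrough(100));   // prints 100
    }
}
```

An unbounded mailbox with the same fast producer would instead grow without limit, which is exactly the OutOfMemory risk described above.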
I'm not sure I understand all of your questions, but in general actors are also fine for slow work:
1) Yes, they are perfectly fine. Just create/assign one actor per request (maybe behind an Akka router for load balancing), and once it's done it can either mark itself as "free for new work" or terminate itself. Remember to execute the slow code in a future. Personally, I like to avoid the ask/pipe pattern because of the implicit timeouts and exception swallowing; I just use tells with request IDs. But if your latencies and error rates are low, go for ask/pipe.
2) You could, but in that case I'd suggest having a pool of connections rather than spawning them per request, as that takes longer. If you can provide more details, I can maybe improve this answer.
3) Yes, but think about this: actors are cheap. Create millions of them; every time there is a blocking part, it should be a different, specialized actor. Take single responsibility to the extreme. If you have only a few blocking actors, you lose all the benefits.

Does Vert.x have real concurrency for single verticles?

The question might look like a troll, but it is actually about how Vert.x manages concurrency, since a verticle itself runs on a dedicated thread.
Let's look at this simple vert.x http server written in Java:
import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

public class Server extends Verticle {
    public void start() {
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                req.response().end("Hello");
            }
        }).listen(8080);
    }
}
As far as I understand the docs, this whole file represents a verticle, so the start method is called on the dedicated verticle thread. So far, so good. But where is the requestHandler invoked? If it is invoked on exactly this thread, I can't see where it is better than node.js.
I'm pretty familiar with Netty, which is the network/concurrency library Vert.x is based on. Every incoming connection is mapped to a dedicated thread, which scales quite nicely. So... does this mean that incoming connections represent verticles as well? But how can the verticle instance "Server" then communicate with those clients? In fact, I would say this concept is as limited as node.js.
Please help me to understand the concepts right!
Regards,
Chris
I've talked to someone who is quite involved in Vert.x, and he told me that I'm basically right about the "concurrency" issue.
BUT: he showed me a section in the docs, which I had totally missed, where "Scaling servers" is explained in detail.
The basic concept is that when you write a verticle, you get single-core performance. But it is possible to start the Vert.x platform with the -instances parameter, which defines how many instances of a given verticle are run. Vert.x does a bit of magic under the hood so that 10 instances of my server do not try to open 10 server sockets but actually share a single one instead. This way, Vert.x is horizontally scalable, even for single verticles.
This is really a great concept and especially a great framework!!
Every verticle is single-threaded: upon startup, the Vert.x subsystem assigns an event loop to that verticle, and all code in that verticle is executed on that event loop. Next time, you could ask questions at http://groups.google.com/forum/#!forum/vertx; the group is very lively and your question will most likely be answered immediately.
As you correctly answered yourself, Vert.x indeed uses async, non-blocking programming (like node.js), so you can't do blocking operations, because you would otherwise stop the whole (application) world from turning.
You can scale servers, as you correctly stated, by spawning more (n = CPU cores) verticle instances, each listening on the same TCP/HTTP port.
Where it shines compared to node.js is that the JVM itself is multi-threaded, which gives you more advantages (from the runtime point of view, not even counting the type safety of Java, etc.):
Multithreaded (cross-verticle) communication, while still constrained to a thread-safe, actor-like model, does not require IPC (Inter-Process Communication) to pass messages between verticles; everything happens inside the same process, in the same memory region. This is faster than node.js spawning every forked task in a new system process and using IPC to communicate.
Ability to do compute-heavy and/or blocking tasks within the same JVM process: http://vertx.io/docs/vertx-core/java/#blocking_code or http://vertx.io/docs/vertx-core/java/#worker_verticles
Speed of HotSpot JVM compared to V8 :)

Queuing / Worker Thread architecture for a single java process

I have the following problem to solve.
I need to write a java program that:
reads JSON objects j1, j2, ..., jn from a web service,
does some number crunching on each object to come up with j1', j2', ..., jn',
sends objects j1', j2', ..., jn' to a web service.
The computational and space requirements of steps 1, 2, and 3 can vary at any given time.
For example:
The time it takes to process a JSON object at step 2 can vary depending on the contents of the JSON Object.
The rate of objects being produced by the webservice in step 1 can go up or down with time.
The consuming web service in step 3 can get backlogged.
To address the above design concerns, I want to implement the following architecture:
Read JSON objects from the external web service and place them on a queue.
An automatically size-adjusting worker thread pool consumes JSON objects from the queue and processes them; after processing, it places the resulting objects on a second queue.
An automatically size-adjusting worker thread pool consumes JSON objects from the second queue and sends them to the consuming web service.
Question:
I am curious whether there is a framework I can use to solve this problem.
Notes:
I could solve this using a range of components such as custom queues and thread pools from the concurrency package; however, I'm looking for a framework that spares me from writing such plumbing myself.
This is not going to live inside a container. This will be a Java process whose entry point is public static void main(String[] args).
However if there is a container suited to this paradigm I would like to learn about it.
I could split this into multiple processes, however I'd like to keep it very simple and in a single process.
Thanks.
Try Apache Camel or Spring Integration to wire things up. These are integration frameworks and will ease your interaction with web services. What you need to do is define a route from web service 1 -> number cruncher -> web service 2; the routing and conversion required in between can be handled by the framework itself.
You'd implement your cruncher as a Camel processor.
Parallelizing your cruncher can be achieved via SEDA; Camel has a component for this pattern. Another alternative would be AsyncProcessor.
I'd say you should first take a look at the principles behind frameworks like Camel. The abstractions they create are very relevant to the problem at hand.
I'm not exactly sure what the end question of your post is, but you have a reasonable design concept. One question I have for you: what environment are you in? Are you in a Java EE container or just a simple standalone application?
If you are in a container, it would make more sense to have Message-Driven Beans processing off the JMS queues than to have a pool of worker threads.
If you are standalone, it would make more sense to manage the thread pool yourself. With that said, I would also consider having separate applications running that pull the work off the queues, which would give you a better-scaling architecture. If the need ever came up, you could add more machines with more workers pointing at the one queue.
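If you do manage the pools yourself, a minimal sketch of the two-queue, two-pool layout from the question using only java.util.concurrent might look like this. The class name and the trivial crunch step are invented for the sketch, fixed-size pools stand in for the self-sizing ones, and "sending" just collects results in a list:

```java
import java.util.*;
import java.util.concurrent.*;

public class TwoStagePipeline {
    // Trivial stand-in for the number crunching: j1 -> j1'
    static String crunch(String json) {
        return json + "'";
    }

    public static List<String> process(List<String> jsonObjects) throws Exception {
        BlockingQueue<String> stage1 = new LinkedBlockingQueue<>(jsonObjects);
        BlockingQueue<String> stage2 = new LinkedBlockingQueue<>();
        List<String> sent = Collections.synchronizedList(new ArrayList<>());

        ExecutorService crunchers = Executors.newFixedThreadPool(4); // pool 1
        ExecutorService senders = Executors.newFixedThreadPool(2);   // pool 2
        CountDownLatch done = new CountDownLatch(jsonObjects.size());

        for (int i = 0; i < jsonObjects.size(); i++) {
            // Pool 1: crunch an object from queue 1 onto queue 2.
            crunchers.submit(() -> {
                stage2.put(crunch(stage1.take()));
                return null;
            });
            // Pool 2: "send" a result from queue 2 (collected here instead
            // of calling the real downstream web service).
            senders.submit(() -> {
                sent.add(stage2.take());
                done.countDown();
                return null;
            });
        }
        done.await();
        crunchers.shutdown();
        senders.shutdown();
        return sent;
    }
}
```

Swapping the BlockingQueues for JMS queues (or Camel SEDA endpoints) turns this single-process sketch into the distributed version discussed above.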

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added or removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS topic seems like the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic for each request with a filter (using a session ID as the value). The messages workers publish to the response topic have that session ID associated with them.
Now, this does not feel like a solution; rather, it uses JMS as a hammer and treats the problem (and then some) as a nail. Is JMS a solution to this situation at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and will all work on it independently. It sounds like you only want one worker at a time working on each request. I suspect you chose a topic so that different types of workers can listen to the same destination and respond only to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and a work_a_response queue. Or, if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message-queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues to (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure that the messages to the workers contain all the information needed to do the work, and that the response messages back contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the queue that all of the requests go on. The workers listen to this queue and process requests in their own time, in their own way.
Once complete, they bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along through the entire procedure. A final process listens to the second queue and handles the request/response pairs appropriately.
If there's no need for the messages to be reliable, or no need for the actual processes to span JVMs or machines, then this can all be done within a single process using standard Java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process updates the database, counting elements. When it sees that all of the pieces have been completed (in any order), it marks the result as complete. Later, an audit process can scan for incomplete jobs that have not finished within some time window (perhaps one of the messages errored out) so you can handle them better; or the original processors can write the failed request to a separate "this one went bad" queue for mitigation and resubmission.
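A minimal sketch of that piece-counting idea (the class and method names are invented, and a real version would persist the counters in the database rather than in memory):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class JobTracker {
    // jobId -> number of pieces still outstanding.
    private final ConcurrentMap<String, AtomicInteger> remaining =
            new ConcurrentHashMap<>();

    // Called when one piece (e.g. "3 of 5") of a job finishes, in any
    // order. Returns true exactly once: when the last piece is seen.
    public boolean pieceDone(String jobId, int totalPieces) {
        AtomicInteger left = remaining.computeIfAbsent(
                jobId, k -> new AtomicInteger(totalPieces));
        if (left.decrementAndGet() == 0) {
            remaining.remove(jobId);
            return true;   // job complete - mark it in the database here
        }
        return false;
    }
}
```

The queue-2 listener would call pieceDone for every result it pulls off, and only act (mark complete, notify the submitter) when it returns true.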
If you use JMS with transactions and one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors; that's another advantage of JMS.
The trick with this kind of processing is to try to push the state along with the message, or to externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (barring catastrophic JMS failure, naturally) and simply pick up where it left off once the problem is resolved and it is restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold. Or you can put a local object in an internal map, keyed by something such as the session ID, and then Object.wait() on it. Your queue-2 listener then gets the response, finalizes the processing, uses the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on, and simply calls Object.notify() on it to tell the servlet to continue.
Yes, this parks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(): if it times out, the processing took too long and you can gracefully alert the client.
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
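A plain-Java sketch of that wait/notify correlation (the class and method names are invented; real code would need richer error handling, and the payloads here are plain strings):

```java
import java.util.concurrent.*;

public class ReplyCorrelator {
    // sessionId -> lock object the request thread is waiting on.
    private final ConcurrentMap<String, Object> waiters = new ConcurrentHashMap<>();
    // sessionId -> reply payload, parked here until the waiter collects it.
    private final ConcurrentMap<String, String> replies = new ConcurrentHashMap<>();

    // Called by the request thread (e.g. the servlet). Returns the reply,
    // or null if the timeout expires first.
    public String awaitReply(String sessionId, long timeoutMs)
            throws InterruptedException {
        Object lock = waiters.computeIfAbsent(sessionId, k -> new Object());
        synchronized (lock) {
            if (!replies.containsKey(sessionId)) {
                lock.wait(timeoutMs);   // released by onReply(), or times out
            }
        }
        waiters.remove(sessionId);
        return replies.remove(sessionId);
    }

    // Called by the queue-2 listener when the response message arrives.
    public void onReply(String sessionId, String payload) {
        replies.put(sessionId, payload);
        Object lock = waiters.get(sessionId);
        if (lock != null) {
            synchronized (lock) {
                lock.notify();
            }
        }
    }
}
```

Storing the reply before notifying (and re-checking the map before waiting) covers the race where the response arrives before the request thread starts waiting.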
Well, the actual answer depends on whether your worker entities are external parties physically located outside your network, how long a worker entity is expected to take to finish its work, and so on. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to talk the JMS protocol or because you want asynchrony. The former reason doesn't make much sense; if it is the latter, you could also choose another communication style, such as one-way web service calls.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous, one-way web service calls to the different worker entities.

Is MQ publish/subscribe domain-specific interface generally faster than point-to-point?

I'm working on an existing application that uses a transport layer with point-to-point MQ communication.
For each account in a given list, we need to retrieve some information.
Currently we have something like this to communicate with MQ:
responseObject getInfo(requestObject) {
    // code to send message to MQ
    // code to retrieve message from MQ
}
As you can see, we wait until it finishes completely before proceeding to the next account.
Due to performance issues we need to rework it.
There are two possible scenarios that I can think of at the moment.
1) Within the application, create a bunch of threads that execute the transport adapter for each account, then gather the data from each task. I prefer this method, but some of the team members argue that the transport layer is a better place for such a change and that we should place the extra load on MQ instead of on our application.
2) Rework transport layer to use publish/subscribe model.
Ideally I want something like this:
void send(requestObject) {
    // code to send message to MQ
}

responseObject receive() {
    // code to retrieve message from MQ
}
Then I would just send requests in a loop, and later retrieve the data in a loop. The idea is that while the first request is being processed by the back-end system, we don't have to wait for its response; we can instead send the next request.
My question: is this going to be a lot faster than the current sequential retrieval?
The question title frames this as a choice between P2P and pub/sub but the question body frames it as a choice between threaded and pipelined processing. These are two completely different things.
Either code snippet provided could just as easily use P2P or pub/sub to put and get messages. The decision should not be based on speed but rather whether the interface in question requires a single message to be delivered to multiple receivers. If the answer is no then you probably want to stick with point-to-point, regardless of your application's threading model.
And, incidentally, the answer to the question posed in the title is "no." When you use the point-to-point model, your messages resolve immediately to a destination or transmit queue, and WebSphere MQ routes them from there. With pub/sub, your message is handed off to an internal broker process that resolves zero to many possible destinations. Only after this step does the published message get put on a queue where, for the remainder of its journey, it is handled like any other point-to-point message. Although pub/sub is not normally noticeably slower than point-to-point, the code path is longer, and therefore, all other things being equal, it will add a bit more latency.
The other part of the question is about parallelism. You proposed either spinning up many threads or breaking the app up so that requests and replies are handled separately. A third option is to have multiple application instances running. You can combine any or all of these in your design. For example, you can spin up multiple request threads and multiple reply threads and then have application instances processing against multiple queue managers.
The key to this question is whether the messages have affinity to each other, to order dependencies or to the application instance or thread which created them. For example, if I am responding to an HTTP request with a request/reply then the thread attached to the HTTP session probably needs to be the one to receive the reply. But if the reply is truly asynchronous and all I need to do is update a database with the response data then having separate request and reply threads is helpful.
In either case, the ability to dynamically spin up or down the number of instances is helpful in managing peak workloads. If this is accomplished with threading alone then your performance scalability is bound to the upper limit of a single server. If this is accomplished by spinning up new application instances on the same or different server/QMgr then you get both scalability and workload balancing.
Please see the following article for more thoughts on these subjects: Mission:Messaging: Migration, failover, and scaling in a WebSphere MQ cluster
Also, go to the WebSphere MQ SupportPacs page and look for the Performance SupportPac for your platform and WMQ version. These are the ones with names beginning with MP**. These will show you the performance characteristics as the number of connected application instances varies.
It doesn't sound like you're thinking about this the right way. Regardless of the model you use (point-to-point or publish/subscribe), if your performance is bounded by a slow back-end system, neither will help speed up the process. If, however, you could theoretically issue more than one request at a time against the back-end system and expect to see a speed up, then you still don't really care if you do point-to-point or publish/subscribe. What you really care about is synchronous vs. asynchronous.
Your current approach for retrieving the data is clearly synchronous: you send the request message, and wait for the corresponding response message. You could do your communication asynchronously if you simply sent all the request messages in a row (perhaps in a loop) in one method, and then had a separate method (preferably on a different thread) monitoring the incoming topic for responses. This would ensure that your code would no longer block on individual requests. (This roughly corresponds to option 2, though without pub/sub.)
I think option 1 could get pretty unwieldy, depending on how many requests you actually have to make, though it, too, could be implemented without switching to a pub/sub channel.
The reworked approach will use fewer threads. Whether that makes the application faster depends on whether the overhead of managing a lot of threads is currently slowing you down. If you have fewer than 1000 threads (a very, very rough order-of-magnitude estimate!), I would guess it probably isn't. If you have more than that, it might well be.
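In outline, the asynchronous split discussed above looks like the following plain-JDK sketch. Blocking queues stand in for the MQ request and reply destinations, a thread plays the remote back end, and the class name is invented; the point is only the shape: all requests go out before any reply is read.

```java
import java.util.concurrent.*;

public class AsyncRequester {
    // Sends all requests before reading any reply and returns the number
    // of replies collected.
    public static int run(int nRequests) throws InterruptedException {
        // Stand-ins for the MQ request and reply queues.
        BlockingQueue<String> requestQ = new LinkedBlockingQueue<>();
        BlockingQueue<String> replyQ = new LinkedBlockingQueue<>();

        // Simulated back-end system: consumes requests, produces replies.
        Thread backEnd = new Thread(() -> {
            try {
                for (int i = 0; i < nRequests; i++) {
                    replyQ.put("reply-to-" + requestQ.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        backEnd.start();

        // Fire every request without waiting on individual replies...
        for (int i = 0; i < nRequests; i++) {
            requestQ.put("account-" + i);
        }

        // ...then drain the replies in a second loop (or a second thread).
        int received = 0;
        for (int i = 0; i < nRequests; i++) {
            replyQ.take();
            received++;
        }
        backEnd.join();
        return received;
    }
}
```

Compare this with the original getInfo: there, each request blocked on its own reply, so total time was the sum of round trips; here, the back end can work on request n+1 while the client is still waiting for reply n.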
