ZeroMQ brokerless network with queues (Java) - java

Is it possible to implement a brokerless network with queues using ZeroMQ (with the JeroMQ Java port)?
In my network all peers are both publishers and subscribers (PUB/SUB pattern), so that when a peer sends a message all other peers get it.
The problem is that delivery is not reliable: messages can get lost (for example, due to connectivity issues) and are never recovered.
I'd like to implement a queue where peers can retrieve messages they have not received.
I'm looking at this guide (even though it's for Python) and it seems I should implement the XREP/XREQ pattern, but it also seems this is possible only by implementing a queue server. Is that true?

Q: Is this possible only by implementing a queue server? A: No.
Maybe I did not get your point exactly, but having spent a few years inside ZeroMQ-based distributed systems, I can address a few misconceptions.
First:
Yes, the Zen-of-Zero does provide ZERO warranty of delivery for any particular message. This may seem surprising, but there are many reasons for working this way and no other. There is, however, a warranty of consistency - a message is either delivered as-is or not at all. This means that if a message has made it through the socket, the receiving side can be sure the sender dispatched exactly that content, and no error-checking needs to be put in place, as ZeroMQ has already spent all its effort to deliver a 1:1, bit-by-bit copy of the original.
Next:
ZeroMQ is designed as a broker-less, asynchronous, lightweight signalling/messaging tool. The word broker-less means zero effort is spent on any sort of tool-based persistence, so indeed there is no Broker-side store of (semi-)persistent replicas of the messages, be they delivered or not delivered due to whatever technical reason (yet, those delivered are -- as expressed above -- guaranteed to be exact copies of the original).
Implication:
This means there will be zero effect from designing a zmq.device( zmq.Queue, f, b ), as it has all the properties reported above and so principally lives under the same set of paradigms.
Solution?
If one needs both the delivered-content warranty and an all-messages-delivered warranty, the former is included in the ZeroMQ tools since inception, while the latter has to be added on top of the standard tools, as an extended supra-pattern, re-using the delivery-agnostic standard tools.
This way one can get what you have sketched above, without wasting a single CPU clock in all the other use cases where a delivery-agnostic, "best-effort" transport is okay.
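As a very rough illustration of such a supra-pattern, here is a minimal JeroMQ sketch (all names, ports and the wire format are illustrative assumptions): the publisher numbers every message and keeps an in-memory history, while a REP side channel lets a subscriber ask for any sequence number it noticed it had missed.

import java.util.ArrayList;
import java.util.List;
import org.zeromq.ZMQ;

// A "reliable publisher" sketch: PUB for best-effort fan-out, plus a REP socket
// that answers retransmission requests out of an in-memory history.
public class ReliablePublisher {
    public static void main(String[] args) throws InterruptedException {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket pub = ctx.socket(ZMQ.PUB);   // best-effort broadcast to all peers
        ZMQ.Socket rep = ctx.socket(ZMQ.REP);   // side channel for recovery requests
        pub.bind("tcp://*:5556");
        rep.bind("tcp://*:5557");

        List<String> history = new ArrayList<>();   // application-level "queue" of past messages
        long seq = 0;

        while (!Thread.currentThread().isInterrupted()) {
            String payload = "event-" + seq;
            history.add(payload);
            pub.send(seq + "|" + payload);       // subscribers detect gaps in the sequence number

            // Non-blocking check: a subscriber that spotted a gap sends the missing seq number.
            String request = rep.recvStr(ZMQ.DONTWAIT);
            if (request != null) {
                long missing = Long.parseLong(request.trim());
                rep.send(missing < history.size()
                        ? missing + "|" + history.get((int) missing)
                        : "GONE");
            }
            seq++;
            Thread.sleep(100);
        }
        pub.close();
        rep.close();
        ctx.term();
    }
}

On the subscriber side, one would track the last sequence number seen and, on a gap, connect a REQ socket to the recovery port; the zguide's "Clone" pattern describes a more complete variant of this idea.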

Related

Scenarios for losing messages with Akka when all actors run in the same JVM

I once wrote a multiplayer game in Java with the help of the Akka framework. With their 'at-least-once' delivery I always wondered in which cases a message can get lost if all the Akka actors are running in the same local JVM.
The design of the game was like a giant state machine (as events needed to be processed in order), so most of the time only one message was on its way between all the involved actors. (Running multiple sessions in parallel was possible.)
I have read that the communication of actors, when running locally, is done in-memory. So, not considering out-of-memory errors, are there other (preferably reproducible) scenarios where messages are actually lost?
Note:
Mailbox manipulation is also not what I am looking for. Just legitimate cases where something goes wrong and the message is really lost.
I have read that the communication of actors, when running locally, is done in-memory. So, not considering out-of-memory errors, are there other (preferably reproducible) scenarios where messages are actually lost?
So, normally in-memory delivery is true, but you also specified that you are talking about the "at-least-once delivery" feature. And the docs ( https://doc.akka.io/docs/akka/current/persistence.html#at-least-once-delivery ) specifically talk about how at-least-once changes a lot of the normal behavior. Specifically, at-least-once uses persistence to track what has been sent and what has been acknowledged.
So, when you are using at-least-once delivery, there is a whole dance that has to happen when sending a message. First, the message has to be stored so that if the sending actor fails, the work can resume elsewhere. Second, the message has to be sent. Third, any response has to be correlated with the sent message and the receipt persisted, so that if the actor fails after that point, the resuming actor knows the message doesn't have to be retried.
Because of this, there should be no case where a message is lost. Even when a JVM is lost. (Even when all JVMs are lost.) That, after all, is the point of "at-least-once": the message is guaranteed to be delivered (and replied to). However, note that this does come with trade-offs. (See the docs for the trade-offs, although one obvious one is performance.)
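For reference, this is roughly what that dance looks like in code. The sketch below follows the pattern shown in the Akka Persistence docs; the class and message names (Msg, Confirm, MsgSent, MsgConfirmed) are illustrative, and the destination is expected to reply with Confirm(deliveryId) once it has processed a Msg.

import java.io.Serializable;
import akka.actor.ActorSelection;
import akka.persistence.AbstractPersistentActorWithAtLeastOnceDelivery;

// Messages exchanged with the destination (names are illustrative).
class Msg implements Serializable {
    final long deliveryId; final String payload;
    Msg(long deliveryId, String payload) { this.deliveryId = deliveryId; this.payload = payload; }
}
class Confirm implements Serializable {
    final long deliveryId;
    Confirm(long deliveryId) { this.deliveryId = deliveryId; }
}
// Events written to the journal before the send and after the acknowledgement.
class MsgSent implements Serializable {
    final String payload;
    MsgSent(String payload) { this.payload = payload; }
}
class MsgConfirmed implements Serializable {
    final long deliveryId;
    MsgConfirmed(long deliveryId) { this.deliveryId = deliveryId; }
}

public class AtLeastOnceSender extends AbstractPersistentActorWithAtLeastOnceDelivery {
    private final ActorSelection destination;

    public AtLeastOnceSender(ActorSelection destination) { this.destination = destination; }

    @Override
    public String persistenceId() { return "at-least-once-sender"; }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, payload ->
                        // 1. persist the intent to send, then 2. deliver (with retries)
                        persist(new MsgSent(payload), this::updateState))
                .match(Confirm.class, confirm ->
                        // 3. persist the acknowledgement so a restarted actor will not resend
                        persist(new MsgConfirmed(confirm.deliveryId), this::updateState))
                .build();
    }

    @Override
    public Receive createReceiveRecover() {
        // On recovery, replaying the journal rebuilds the set of unconfirmed deliveries.
        return receiveBuilder().match(Object.class, this::updateState).build();
    }

    private void updateState(Object event) {
        if (event instanceof MsgSent) {
            MsgSent evt = (MsgSent) event;
            deliver(destination, deliveryId -> new Msg(deliveryId, evt.payload));
        } else if (event instanceof MsgConfirmed) {
            confirmDelivery(((MsgConfirmed) event).deliveryId);
        }
    }
}

Whether a message survives a crash then depends on the configured journal, not on the fact that local delivery happens in memory.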

Live resources in Akka Stream flow description

There is this note in the akka-stream docs stating as follows:
… a reusable flow description cannot be bound to “live” resources, any connection to or allocation of such resources must be deferred until materialization time. Examples of “live” resources are already existing TCP connections, a multicast Publisher, etc.; …
I have several questions concerning the note:
Apart from these two examples, what other resources count as "live"?
Anything that cannot be safely (deep)copied? Like a Thread?
Should I also avoid sharing anything that's not thread-safe?
What about an ActorRef existing in the ActorSystem used by the ActorFlowMaterializer?
How to defer allocation until materialization time? Is it safe for example to allocate it in the constructor of a PushPullStage but not in the create function of a FlowGraph?
The problem here is a common one if we consider web services, RMI connections or any other communication protocol. It is always recommended to share "primitive" values rather than references, because marshalling/unmarshalling or serializing/deserializing is always a headache. Also think of different types of environments communicating with each other: sharing plain values is a safe way to solve communication.
Akka itself is a good example of "microservices": actors communicating with each other. When reading the Akka documentation, one image describes actors very well: actors are like mailbox clients, and you can think of each client as having a mailbox. When you pass a variable, it's just like receiving a new email.
Long story short: avoid sharing "dependent" objects that can be invalidated before they are read by another actor. Additionally, if your system names ActorRefs dynamically, avoid calling them by reference.
"Materializing" is explained in docs of akka-streams.
The process of materialization may be parameterized, e.g. instantiating a blueprint for handling a TCP connection’s data with specific information about the connection’s address and port information. Additionally, materialization will often create specific objects that are useful to interact with the processing engine once it is running, for example for shutting it down or for extracting metrics. This means that the materialization function takes a set of parameters from the outside and it produces a set of results. Compositionality demands that these two sets cannot interact, because that would establish a covert channel by which different pieces could communicate, leading to problems of initialization order and inscrutable runtime failures.
So use parameters instead of passing "connection" itself.
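As one concrete way to defer allocation until materialization time, newer akka-stream releases offer factory methods whose resource-creating function runs once per materialization rather than when the blueprint is built. A minimal sketch (the file name is just an illustration):

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Optional;
import akka.NotUsed;
import akka.stream.javadsl.Source;

public class DeferredResourceExample {
    // The BufferedReader (the "live" resource) is opened by the create function,
    // which runs at materialization time - the blueprint itself holds no open handle.
    static final Source<String, NotUsed> LINES = Source.<String, BufferedReader>unfoldResource(
            () -> Files.newBufferedReader(Paths.get("data.txt")),    // create, once per materialization
            reader -> Optional.ofNullable(reader.readLine()),        // read one element per pull
            reader -> reader.close());                               // close on completion or failure
}

Anything the create function touches (a TCP connection, a multicast Publisher, an open file) is then allocated once per run of the stream, which is what the note in the docs asks for.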
Deferring a live resource is not a big thing. It means that if you use one connection for the whole system, you should keep it alive the whole time. Or, when you create a transaction in actor-1 and send it to actor-2, you shouldn't terminate the transaction in actor-1 until actor-2 has finished its job with the transaction.
Then how do you know when it is done? You use a Future and offer().
I hope I understood your question and managed to express myself.

Akka system from a QA perspective

I have been testing an Akka-based application for more than a month now. If I reflect upon it, I have the following conclusions:
Akka actors alone can achieve a lot of concurrency. I have reached more than 100,000 messages/sec. This is fine, and it is just message passing.
Now, if there is a Netty layer for connections at one end, or the Akka actors eventually end up doing DB calls, REST calls, or writing to files, the whole system doesn't make sense anymore. The actors' mailboxes get full and their throughput (here, the ability to receive msgs/sec) drops.
From a QA perspective, this is like having a huge pipe into which you can forcefully pump a lot of water, and it can handle it. But if the input hose is bad, or the endpoints cannot handle the pressure, this huge pipe is of no use.
I need answers to the following so that I can suggest changes or verify them in the system:
Should blocking calls like DB calls and REST calls be handled by actors? Or are actors good only for message passing?
Could it be that, say, you need to connect millions of Android/iOS devices persistently to your Akka system; instead of sockets (so unreliable) etc., can a remote actor be implemented as a persistent connection?
Is it OK to do any sort of computation in an actor's handleMessage()? Like DB calls etc.
I would ask the editors to let this post through; I cannot ask all of these separately.
1) Yes, they can. But such an operation should be done in a separate (worker) actor that uses a fork-join pool in combination with scala.concurrent.blocking around the blocking code; this is needed to prevent thread starvation. If the target system (DB, REST and so on) supports several concurrent connections, you may use Akka's routers for that, creating one actor per connection in the pool (see the sketch at the end of this answer). You can also create several actors for several different tables (resources, queues etc.), depending on your transaction-isolation and storage-consistency requirements.
Another way to handle this is to use asynchronous requests with acknowledgements instead of blocking. You may also put the blocking operation inside a separate future (thread, worker) which sends an acknowledgement message when the operation finishes.
2) Yes, an actor may be implemented as a persistent connection. It will just be an actor which holds the connection's state (as actors are stateful). It may be made even more reliable using Akka Persistence, which can save the connection state to some storage.
3) You can do any non-blocking computations inside the actor's receive (there is no handleMessage method in Akka). Failures (like no connection to the DB) will be managed automatically by Akka supervision. For blocking code, see 1).
P.S. about the "huge pipe". The backend application itself is a pipe (which becomes huge with Akka), so nothing can improve performance if the environment can't handle it - there are no pumps in this world. But Akka is also a "water tank", which means the outer pressure may be stronger than the inner one. This also means the developer should be careful with mailboxes - as "too much water" may cause an OutOfMemory error, and the way to prevent that is to organize back-pressure. It can be done by not acknowledging the incoming message (or simply blocking an endpoint's handler) until it has been processed by Akka.
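Here is a rough Java sketch of point 1): a pool of worker actors running blocking calls on a dedicated dispatcher so that the default dispatcher never starves. The dispatcher name, the pool size and the runBlockingQuery helper are illustrative assumptions, and the dispatcher itself is assumed to be defined in application.conf.

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.RoundRobinPool;

public class BlockingPoolExample {

    // Worker that performs the blocking call; one routee per pooled connection.
    public static class DbWorker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, query -> {
                        // A blocking JDBC/REST call would go here; it only ever occupies a
                        // thread of the dedicated dispatcher, never the default dispatcher.
                        String result = runBlockingQuery(query);   // hypothetical helper
                        getSender().tell(result, getSelf());
                    })
                    .build();
        }

        private String runBlockingQuery(String query) {
            return "rows for " + query;   // stand-in for the real blocking work
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("qa-demo");
        // Assumes application.conf contains a thread-pool dispatcher such as:
        //   blocking-io-dispatcher {
        //     type = Dispatcher
        //     executor = "thread-pool-executor"
        //     thread-pool-executor { fixed-pool-size = 16 }
        //   }
        ActorRef dbRouter = system.actorOf(
                new RoundRobinPool(16).props(
                        Props.create(DbWorker.class).withDispatcher("blocking-io-dispatcher")),
                "db-router");
        dbRouter.tell("SELECT balance FROM accounts", ActorRef.noSender());
    }
}

Whether the workers block on JDBC or wrap an asynchronous client is a separate choice; the point is that the blocking threads are bounded and isolated from the rest of the system.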
I'm not sure I understand all of your question, but in general actors are also good for slow work:
1) Yes, they are perfectly fine. Just create/assign one actor per request (maybe behind an Akka router for load balancing), and once it's done it can either mark itself as "free for new work" or self-terminate (see the sketch after this list). Remember to execute the slow code in a future. Personally, I like avoiding the ask/pipe pattern due to the implicit timeouts and exception swallowing; just use tells with request ids, but if your latencies and error rates are low, go for ask/pipe.
2) You could, but in that case I'd suggest having a pool of connections rather than spawning them per request, as that takes longer. If you can provide more details, I can maybe improve this answer.
3) Yes, but think about this: actors are cheap. Create millions of them; every time there is a blocking part, it should be a different, specialized actor. Take single responsibility to the extreme. If you have only a few actors and they block, you lose all the benefits.
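A minimal sketch of the per-request worker idea from point 1), using plain tells tagged with a request id. The class names and the dispatcher name are assumptions, and slowCall stands in for the real blocking work:

import akka.actor.AbstractActor;
import akka.actor.Props;

// One short-lived worker per request; the reply carries the request id so the
// caller can correlate it with plain tells, without ask timeouts.
public class RequestWorker extends AbstractActor {

    public static class Request {
        public final long requestId;
        public final String query;
        public Request(long requestId, String query) { this.requestId = requestId; this.query = query; }
    }

    public static class Response {
        public final long requestId;
        public final String body;
        public Response(long requestId, String body) { this.requestId = requestId; this.body = body; }
    }

    public static Props props() {
        // Run these workers on a dedicated dispatcher (see the previous sketch) so the
        // slow call below cannot starve the default dispatcher.
        return Props.create(RequestWorker.class).withDispatcher("blocking-io-dispatcher");
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Request.class, req -> {
                    String body = slowCall(req.query);                 // the blocking/slow part
                    getSender().tell(new Response(req.requestId, body), getSelf());
                    getContext().stop(getSelf());                      // self-terminate when done
                })
                .build();
    }

    private String slowCall(String query) {
        return "result for " + query;   // hypothetical stand-in for the real work
    }
}

The alternative is to wrap slowCall in a Future and pipe the Response back, which keeps the worker's mailbox free while the call is in flight.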

Is there an API that allows ordering event in clustered application?

Given the following facts, is there an existing open-source Java API (possibly as part of some greater product) that implements an algorithm enabling the reproducible ordering of events in a cluster environment:
1) There are N sources of events, each with a unique ID.
2) Each event produced has an ID/timestamp, which, together with its source ID, makes it uniquely identifiable.
3) The IDs can be used to sort the events.
4) There are M application servers receiving those events. M is normally 3.
5) The events can arrive at any one or more of the application servers, in no specific order.
6) The events are processed in batches.
7) The servers have to agree, for each batch, on the list of events to process.
8) The events each have an earliest and a latest batch ID in which they must be processed.
9) They must not be processed earlier, and are "failed" if they cannot be processed before the deadline.
10) The batches are based on real clock time. For example, one batch per second.
11) The events of a batch are processed when 2 of the 3 servers agree on the list of events to process for that batch (quorum).
12) The "third" server then has to wait until it possesses all the required events before it can process that batch too.
13) Once an event has been processed or failed, the source has to be informed.
14) [EDIT] Events from one source must be processed (or failed) in the order of their ID/timestamp, but there is no causality between different sources.
Less formally, I have these servers that receive events. They start with the same initial state, and should keep in sync by agreeing on which event to process in which order. Luckily for me, the events are not to be processed ASAP, but "in a bit", so I have some time to get the servers to agree before the deadline. But I'm not sure if that actually makes any real difference to the algorithms. And if all servers agree on all batches, then they will always be in sync, therefore presenting a consistent view when queried.
While I would be most happy with a Java API, I would accept something else if I can call it from Java. And if there is no open-source API, but a clear algorithm, I would also take that as an answer and try to implement it myself.
Looking at the question and your follow-up, there probably "wasn't" an API to satisfy your requirements at the time. Today you could take a look at Kafka (from LinkedIn):
Apache Kafka
And the general concept of "a log" entity, in what folks like to call 'big data':
The Log: What every software engineer should know about real-time data's unifying abstraction
Actually, for your question I'd begin with the blog post about "the log". In my terms, the way it works -- and Kafka isn't the only package out there doing log handling -- is as follows:
Instead of a queue based message-passing / publish-subscribe
Kafka uses a "log" of messages
Subscribers (or end-points) can consume the log
The log guarantees to be "in-order"; it handles giga-data, is fast
Double check on the guarantee, there's usually a trade-off for reliability
You just read the log; reads are not destructive - entries are retained until the configured retention expires, and each consumer tracks its own offset.
If there's a subscriber group, everyone can 'read' an entry before it eventually expires.
The basic handling (compute) process for the log, is a Map-Reduce-Filter model so you read-everything really fast; keep only stuff you want; process it (reduce) produce outcome(s).
The downside seems to be that you need clusters and such to make it really shine. Since different servers or sites were mentioned, I think we are still on track. I found it a bit finicky to get up and running with the Apache downloads because they tend to assume non-Windows environments (ho hum).
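To make the "log" idea concrete, here is a minimal sketch with the modern Kafka Java client; the topic name, broker address and the per-source keying are assumptions for illustration:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Append events to a log (topic) and read them back in order within a partition.
public class EventLogExample {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // Same key => same partition => strict order for the events of one source.
            producer.send(new ProducerRecord<>("events", "source-42", "eventId=1;payload=..."));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "server-A");                  // each server keeps its own offsets
        consumerProps.put("auto.offset.reset", "earliest");         // read the log from the beginning
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("events"));
            for (int i = 0; i < 5; i++) {   // a few polls: the first may only join the group
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}

Keying each record by its source ID keeps all events from one source in a single partition, which is how Kafka gives per-source ordering (requirement 14) without any global coordination.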
The other 'fast' option would be
Apache Apollo
Which would need you to do the plumbing for connecting different servers. Since the requirements include ...
servers that receive events. They start with the same initial state, and should keep in sync by agreeing on which event to process in which order. Luckily for me, the events are not to be processed ASAP, but "in a bit", so that I have some time to get the servers to agree before the deadline
I suggest looking at a "Getting Started" example or tutorial with Kafka and then looking at similar ZooKeeper organised message/log software(s). Good luck and Enjoy!
So far I haven't got a clear answer, but I think it would be useful for anyone interested to see what I found.
Here are some theoretical discussions related to the subject:
Dynamic Vector Clocks for Consistent Ordering of Events
Conflict-free Replicated Data Types
One way of making multiple concurrent processes wait for each other, which I could use to synchronize the "batches", is a distributed barrier. One Java implementation seems to be available on top of Hazelcast and another uses ZooKeeper.
One simpler alternative I found is to use a DB. Every process inserts all events it receives into the DB. Depending on the DB design, this can be fully concurrent and lock-free, as in VoltDB, for example. Then, at a regular interval of one second, some "cron job" runs that selects and marks the events to be processed in the next batch. The job can run on every server. The first server to run the job for a given batch fixes the set of events, so the others just use the list that the first one defined. That way we have a guarantee that all batches contain the same set of events on all servers. And if we use a total order over the whole batch, which the cron job could specify itself, then the state of the servers will be kept in sync.
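A minimal sketch of that batch-fixing job over JDBC (the schema, column names and connection URL are all assumptions; the point is that claiming rows in one transaction lets the first server define the batch and the others simply read it back):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchFixingJob {

    // Hypothetical table: events(source_id, event_id, payload, deadline_batch, batch_id NULL until assigned).
    // Every server runs this once per second with the same batchId, derived from the clock.
    public static void assignBatch(long batchId) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:postgresql://db/events", "app", "secret")) {
            con.setAutoCommit(false);
            try (PreparedStatement claim = con.prepareStatement(
                    "UPDATE events SET batch_id = ? WHERE batch_id IS NULL AND deadline_batch >= ?")) {
                claim.setLong(1, batchId);
                claim.setLong(2, batchId);
                // Concurrent runs use the same batchId, so double-claiming is harmless;
                // rows already claimed by an earlier second are left untouched.
                claim.executeUpdate();
            }
            con.commit();
            // All servers then process:
            //   SELECT ... FROM events WHERE batch_id = ? ORDER BY source_id, event_id
        }
    }
}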

Is MQ publish/subscribe domain-specific interface generally faster than point-to-point?

I'm working on an existing application that uses a transport layer with point-to-point MQ communication.
For each of the given list of accounts we need to retrieve some information.
Currently we have something like this to communicate with MQ:
responseObject getInfo(requestObject) {
    code to send message to MQ
    code to retrieve message from MQ
}
As you can see we wait until it finishes completely before proceeding to the next account.
Due to performance issues we need to rework it.
There are two possible scenarios that I can think of at the moment.
1) Within the application, create a bunch of threads that each execute the transport adapter for one account, then get the data from each task. I prefer this method, but some of the team members argue that the transport layer is a better place for such a change and that we should place the extra load on MQ instead of on our application.
2) Rework the transport layer to use a publish/subscribe model.
Ideally I want something like this:
void send(requestObject) {
    code to send message to MQ
}

responseObject receive() {
    code to retrieve message from MQ
}
Then I would just send requests in a loop, and later retrieve data in a loop. The idea is that while the first request is being processed by the back-end system we don't have to wait for the response, but can instead send the next request.
My question, is it going to be a lot faster than current sequential retrieval?
The question title frames this as a choice between P2P and pub/sub but the question body frames it as a choice between threaded and pipelined processing. These are two completely different things.
Either code snippet provided could just as easily use P2P or pub/sub to put and get messages. The decision should not be based on speed but rather whether the interface in question requires a single message to be delivered to multiple receivers. If the answer is no then you probably want to stick with point-to-point, regardless of your application's threading model.
And, incidentally, the answer to the question posed in the title is "no." When you use the point-to-point model your messages resolve immediately to a destination or transmit queue and WebSphere MQ routes them from there. With pub/sub your message is handed off to an internal broker process that resolves zero to many possible destinations. Only after this step does the published message get put on a queue where, for the remainder of its journey, it is then handled like any other point-to-point message. Although pub/sub is not normally noticeably slower than point-to-point, the code path is longer and therefore, all other things being equal, it will add a bit more latency.
The other part of the question is about parallelism. You proposed either spinning up many threads or breaking the app up so that requests and replies are handled separately. A third option is to have multiple application instances running. You can combine any or all of these in your design. For example, you can spin up multiple request threads and multiple reply threads and then have application instances processing against multiple queue managers.
The key to this question is whether the messages have affinity to each other, to ordering dependencies, or to the application instance or thread which created them. For example, if I am responding to an HTTP request with a request/reply, then the thread attached to the HTTP session probably needs to be the one to receive the reply. But if the reply is truly asynchronous and all I need to do is update a database with the response data, then having separate request and reply threads is helpful.
In either case, the ability to dynamically spin up or down the number of instances is helpful in managing peak workloads. If this is accomplished with threading alone then your performance scalability is bound to the upper limit of a single server. If this is accomplished by spinning up new application instances on the same or different server/QMgr then you get both scalability and workload balancing.
Please see the following article for more thoughts on these subjects: Mission:Messaging: Migration, failover, and scaling in a WebSphere MQ cluster
Also, go to the WebSphere MQ SupportPacs page and look for the Performance SupportPac for your platform and WMQ version. These are the ones with names beginning with MP**. These will show you the performance characteristics as the number of connected application instances varies.
It doesn't sound like you're thinking about this the right way. Regardless of the model you use (point-to-point or publish/subscribe), if your performance is bounded by a slow back-end system, neither will help speed up the process. If, however, you could theoretically issue more than one request at a time against the back-end system and expect to see a speed up, then you still don't really care if you do point-to-point or publish/subscribe. What you really care about is synchronous vs. asynchronous.
Your current approach for retrieving the data is clearly synchronous: you send the request message, and wait for the corresponding response message. You could do your communication asynchronously if you simply sent all the request messages in a row (perhaps in a loop) in one method, and then had a separate method (preferably on a different thread) monitoring the incoming topic for responses. This would ensure that your code would no longer block on individual requests. (This roughly corresponds to option 2, though without pub/sub.)
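A rough JMS sketch of that asynchronous pattern (the queue names and the use of a correlation ID per account are assumptions; replies are assumed to be text messages):

import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Fire all requests first, then collect the replies; plain point-to-point queues, no pub/sub required.
public class AsyncRequestReply {

    public static void sendAll(Session session, Iterable<String> accounts) throws Exception {
        Queue requestQ = session.createQueue("ACCOUNT.REQUEST");
        Queue replyQ = session.createQueue("ACCOUNT.REPLY");
        MessageProducer producer = session.createProducer(requestQ);
        for (String account : accounts) {
            TextMessage msg = session.createTextMessage(account);
            msg.setJMSCorrelationID(account);   // lets the reply be matched later
            msg.setJMSReplyTo(replyQ);
            producer.send(msg);                 // do NOT wait for the reply here
        }
    }

    // Typically run on its own thread (and ideally its own session) so replies are
    // consumed while the back end is still working through the remaining requests.
    public static void receiveAll(Session session, int expected) throws Exception {
        MessageConsumer consumer = session.createConsumer(session.createQueue("ACCOUNT.REPLY"));
        for (int i = 0; i < expected; i++) {
            TextMessage reply = (TextMessage) consumer.receive(30_000);  // per-message timeout
            if (reply == null) break;                                    // give up on stragglers
            System.out.println(reply.getJMSCorrelationID() + " -> " + reply.getText());
        }
    }
}

The speed-up comes from overlapping the back end's processing of one request with the sending of the next, not from switching messaging models.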
I think option 1 could get pretty unwieldy, depending on how many requests you actually have to make, though it, too, could be implemented without switching to a pub/sub channel.
The reworked approach will use fewer threads. Whether that makes the application faster depends on whether the overhead of managing a lot of threads is currently slowing you down. If you have fewer than 1000 threads (this is a very, very rough order-of-magnitude estimate!), I would guess it probably isn't. If you have more than that, it might well be.
