Sorry for the newbie-ish question, but are message brokers such as RabbitMQ a replacement for writing our own message buffers? That is, if we have fast producers and slow consumers, does using a message broker take care of the queueing, or do I still have to implement my own queue buffer?
Your scenario does indeed suggest using a message broker such as RabbitMQ. Buffering between fast producers and slow consumers is a primary reason these queue solutions exist. If you're expecting a huge bottleneck, with producers outrunning the consumers you can afford, you may have to configure your queues in a more sophisticated manner, but you shouldn't have to implement your own buffer.
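As a minimal sketch of what that looks like with the RabbitMQ Java client (the broker host and the queue name work are my own placeholders, not part of the original question):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

import java.nio.charset.StandardCharsets;

public class BufferedByBroker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumed broker location

        Connection connection = factory.newConnection();     // keep open for the life of the process
        Channel channel = connection.createChannel();

        // Durable queue: the broker does the buffering, even across broker restarts.
        channel.queueDeclare("work", true, false, false, null);

        // Fast producer: publish and move on; nothing waits for the consumer.
        channel.basicPublish("", "work",
                MessageProperties.PERSISTENT_TEXT_PLAIN,
                "sensor reading".getBytes(StandardCharsets.UTF_8));

        // Slow consumer: prefetch of 1, so the broker only hands over what we can handle.
        channel.basicQos(1);
        channel.basicConsume("work", false,
                (consumerTag, delivery) -> {
                    // ... slow processing here ...
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                },
                consumerTag -> { /* consumer cancelled */ });
    }
}
```

The point is simply that the producer and consumer never block on each other; the broker holds whatever backlog builds up.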
Related
I want to use hundreds of thousands of KafkaConsumer instances. For example, I need 100_000 consumers for some architectural pattern. I am wondering whether that is OK, or whether I should refactor my system and use a few consumers for the whole system (for example, 10 consumers instead of 100_000).
So, my questions are:
Is there a connection pool in KafkaConsumer, or does each consumer create its own connection to the Kafka brokers?
Is there a thread pool in KafkaConsumer, or does each consumer create its own thread? (I hope it does not.)
What is the average memory consumption per KafkaConsumer?
What do you think about such architectural pattern?
1, 2) Consumers request metadata from one of the brokers and then fetch records from the broker that leads each partition. Each consumer is able to handle all of its IO from a single thread, as the Java clients are designed around an event loop driven by poll() (a minimal poll-loop sketch follows this answer). You can also build multi-threaded consumers, but then you'd need to take care of offset management yourself. Refer to Confluent's documentation for more details regarding the implementation of the Java clients.
3) According to the Apache Kafka and Confluent Enterprise reference architecture,
Consumers use at least 2 MB per consumer and up to 64 MB in cases of large responses from brokers (typical for bursty traffic).
4) The number of consumers you've mentioned is huge, so you'd need a very good reason to go for 100,000 of them. It depends on the scenario, but even Netflix is likely using far fewer than that.
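To illustrate the single-threaded event-loop model described in 1, 2), here is a minimal poll-loop sketch using the plain Kafka Java client; the broker address, group id, and topic name are placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SingleThreadedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");              // assumed broker address
        props.put("group.id", "sensor-readers");                        // placeholder group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("sensor-data")); // placeholder topic
            while (true) {
                // Fetch IO and rebalance callbacks are all driven from this one thread via poll().
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitSync();   // manual offset management, one thread, no races
            }
        }
    }
}
```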
I've just started working on a new project backed by RabbitMQ, and there are multiple consumer instances created, all listening to the same queue, when the application starts. However, they share the same connection, each with a different channel.
The messages from the queue are massive (millions of messages for one single producing action), so I guess the original author of the code was trying to do something to make consuming faster.
I am trying to find posts discussing this, but I can't find a definitive answer.
What I get so far is:
Each channel will have a separate dispatch thread
Operations on the same channel are serialized, even if they are invoked from multiple threads
So
Creating multiple consumers, and thus multiple channels, gives multiple dispatch threads, but I don't think that improves message dispatching, since a single dispatch thread should be more than enough.
Acks can be issued in parallel on different channels, but I'm not sure this gives any better performance either.
Since more channels consume more system resources, I wonder whether this practice is good.
There seem to be a few things going on here, so let's try to look at this scenario from a holistic perspective.
For starters, it sounds like the original designer of this code understood some basics about RabbitMQ (or learned a few things by trial and error), but may have had trouble putting all the pieces together. Hopefully I can help.
RabbitMQ connections are, in reality, AMQP-over-TCP connections (and thus sit somewhere around the session layer of the OSI model). TCP connections are supposed to be opened and used until some sort of network interruption or application shutdown closes them (and for this reason, AMQP has trouble with firewalls and other smart network devices). Using a single TCP connection for the message-processing activities of a single logical process is a good idea, as creating and destroying TCP connections is usually an expensive process for the computer, which leads to the next point.
RabbitMQ channels are used to multiplex communication streams within the AMQP-over-TCP connection (and are defined in the AMQP protocol spec). All they do is specify an integer value (I can't remember the number of bytes, but it doesn't matter anyway) used to preface each subsequent command or response on the TCP connection. Most AMQP operations are channel-specific. For the purposes of higher-level operations, channels are treated similarly to connections, as they are application-level constructs.
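In the Java client that might look roughly like the sketch below (a hedged sketch, not the actual code of the project being asked about): one TCP connection per process, one channel per thread, with the queue name work as a placeholder.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class OneConnectionManyChannels {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumed broker location

        // One expensive TCP/AMQP connection for the whole process...
        Connection connection = factory.newConnection();

        for (int i = 0; i < 4; i++) {
            Thread worker = new Thread(() -> {
                try {
                    // ...and one cheap channel per thread, multiplexed over that connection.
                    Channel channel = connection.createChannel();
                    channel.basicQos(1);
                    channel.basicConsume("work", false,
                            (tag, delivery) -> {
                                // process, then ack on the same channel/thread
                                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                            },
                            tag -> { /* consumer cancelled */ });
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            worker.start();
        }
    }
}
```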
Now, where I think the question starts to go off the rails a bit is here:
The messages from the queue are massive (millions of messages for one single producing action), so I guess the original author of the code was trying to do something to make consuming faster.
A fundamental assumption about a system which uses queues is that messages are consumed at approximately the same rate that they are produced. Queues exist to buffer uneven producing activities. The mathematics and statistics of how queues work are quite interesting, and assuming the production of messages is done in response to some real-world stimulus, your system is virtually guaranteed to behave in a predictable manner. Therefore, your design goal is to ensure that there are enough consumers to process the messages that are produced, and to respond to changing conditions as needed. Your goal should not be to "speed up" the consumers (unless they have some specific issue), but rather to have enough consumers to process the total load.
Further, the average number of items in the queue at any time should approach zero. It is usually a good idea to have overcapacity so that you don't wind up with an unstable situation where messages start accumulating in the queue (and the queue ends up looking like the Stack Overflow Close Vote Queue).
And that brings us to an attempt to answer your fundamental question, which seems to deal with threading and possibly detailed implementation of the Java client, which I will readily admit I have not used (I'm a .NET guy).
Here are some design guidelines for your software:
Ensure that a single thread uses no more than one channel.
Use one TCP connection per logical consuming process.
Balance the number of logical processes on a single physical machine such that resource contention is not a problem (you don't want to starve your consumers of computer resources).
Try to use BASIC.GET as opposed to a push-based consumer (see the sketch after this list). Using consumers is difficult in practice, and there is no performance benefit at the protocol level over a BASIC.GET. Note that I do not know whether the Java library has implemented these differently such that it does cause a performance difference; stranger things have been known to happen.
If you do use consumers, make sure pre-fetch is set to 0 (disabled) and that AutoAck is set to false if reliable processing is important (most applications require reliable processing). Along with this, make sure you are acknowledging messages upon completion of processing!
Periodically restart your consuming threads, channels, and processes, or do a BASIC.RECOVER. There are degrees of randomness that will result in unacknowledged messages accumulating over time, and this will deal with them.
Again, if you prefer to use push-based consumers, sharing a single channel across multiple consumers is generally a bad idea. Each consumer should get its own channel.
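As a rough sketch of the BASIC.GET / manual-ack pattern recommended above, using the RabbitMQ Java client (the queue name is a placeholder, and the channel is assumed to be created elsewhere and owned by a single thread):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

public class PollingWorker {
    static void drainOnce(Channel channel) throws Exception {
        while (true) {
            // Pull-based retrieval: BASIC.GET with autoAck = false.
            GetResponse response = channel.basicGet("work", false);
            if (response == null) {
                break;                                       // queue is empty right now
            }
            try {
                process(response.getBody());                 // your actual work
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            } catch (Exception e) {
                // Leave the message unacked; BASIC.RECOVER asks the broker to redeliver it.
                channel.basicRecover(true);
            }
        }
    }

    static void process(byte[] body) { /* ... */ }
}
```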
I must handle about 100 JMS queues in a point-to-point messaging architecture. Every queue has a consumer, so I will have 100 consumer threads to handle them. Is that OK?
1) ActiveMQ supports this (I'd suggest using a connection pool).
2) You should confirm that your server configuration can cope when QPS is high.
Instead of 100 queues, you could use a single queue and set JMS message properties on each message, having each consumer use a selector to filter just the messages it wants.
What this does is give you some more options in architecture and deployment. You could have a single process consume multiple type of messages. Depending on your scaling issues, you could have multiple instances of a single consumer spread out among processes/servers/whatever.
You could also have one consumer for all 100 logical queues, reading the property and figuring out where to hand off the message internally, again, depending on whatever design issues you're running into.
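A hedged sketch of that single-queue-plus-selectors approach with the plain javax.jms API; the queue name, property name, and property value are all placeholders:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class SelectorExample {
    static void wireUp(Connection connection) throws JMSException {
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("ALL.EVENTS");      // one physical queue

        // The producer stamps each message with a "logical queue" property.
        MessageProducer producer = producerSession.createProducer(queue);
        TextMessage message = producerSession.createTextMessage("<reading>42</reading>");
        message.setStringProperty("eventType", "TEMPERATURE");
        producer.send(message);

        // Each consumer filters on that property instead of owning its own queue.
        MessageConsumer temperatureConsumer = consumerSession.createConsumer(
                queue, "eventType = 'TEMPERATURE'");
        temperatureConsumer.setMessageListener(m -> {
            // handle only temperature readings here
        });
    }
}
```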
Overall, messaging is so lightweight that it takes a significant volume of messages, or a significant size of individual messages, to really hurt things. I've got an ActiveMQ app that upon restart might have to process 10K/20K messages, and it completes in seconds. Fairly small messages, but still very possible (and my experience with other MQs is similar: as long as your processing is not overwhelmingly difficult, you should be able to keep up).
The Problem
We have a multi-datacenter ActiveMQ setup, with NFS for each HA pair, and it seems that ActiveMQ isn't really scalable and doesn't play well with NFS issues (we're using 5.7).
The Possible Solution
Move to Kafka
Requirements
We need PointToPoint & pub/sub functionality
Message Priorities (I know kafka doesn't provide that out of the box, but there's a workaround for it on our side)
Question
Is this possible with Kafka (not necessarily out-of-the-box, but with some client tweaking)? If not, then what other technology would you suggest? It doesn't have to be JMS, but it needs to be scalable and reliable (and it needs to play well with NFS)
We need PointToPoint & pub/sub functionality
Kafka does that; I shared my findings here
Message Priorities
I'm a little confused about what exactly you mean, but if by priorities you mean consuming from a specific offset, then the low-level (Simple) consumer API provides that. It also supports re-submission of messages.
For point-to-point delivery of messages (single producer, single consumer), you can configure the number of partitions as 1. The producer will publish messages to this topic and partition, and a single consumer will read from that topic and partition. A topic would equate to your queue in ActiveMQ terms.
If you want to add message priorities, I would use the lower-level Kafka client. You could then increase the number of partitions (one per priority level) and have the consumer fetch messages from the highest-priority partition first; if no message exists there, it would then fetch from the next lower-priority partition.
In your case I would use Kafka with one partition per topic and separate topics for each message priority level, solving prioritization of delivery on the subscriber side.
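A rough sketch of that subscriber-side prioritization using the modern Kafka Java consumer (the answer above mentions the old low-level/Simple consumer API, which has since been superseded); topic names, group ids, and the broker address are placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PriorityTopics {
    public static void main(String[] args) {
        KafkaConsumer<String, String> high = consumer("prio-high-group");
        KafkaConsumer<String, String> low = consumer("prio-low-group");
        high.subscribe(Collections.singletonList("events-high"));   // placeholder topics
        low.subscribe(Collections.singletonList("events-low"));

        while (true) {
            // Always drain the high-priority topic first...
            ConsumerRecords<String, String> records = high.poll(Duration.ofMillis(100));
            if (records.isEmpty()) {
                // ...and only fall back to the low-priority topic when nothing urgent is waiting.
                records = low.poll(Duration.ofMillis(100));
            }
            records.forEach(r -> System.out.println(r.value()));
        }
    }

    private static KafkaConsumer<String, String> consumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");           // assumed broker address
        props.put("group.id", groupId);
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        return new KafkaConsumer<>(props);
    }
}
```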
Kafka provides point-to-point and publish/subscribe usage patterns through its consumer-groups concept.
What is not directly supported is selectors and priorities. But you do get the ability to distribute messages across partitions, so you could, for example, distribute messages to partitions based on priority.
You also get message persistence for free (one of Kafka's core principles), limited only by the retention policy. Each message in Kafka is essentially a key-value pair, and the key has specific semantics for partitioning and log compaction. Unlike traditional messaging systems, there is no custom header you can use for routing, etc. The following article tries to summarize all of that.
I've been asked to design and implement a system for receiving a high volume of automated sensor data from a large number of devices. This data will be produced at regular intervals and sent to the server as XML in an HTTP POST. The devices will keep resending the same data if they don't receive a specific acknowledgment from the server. Some potentially heavy-duty processing of this data will need to occur before it's inserted into a number of tables in the main database via a transaction, and additionally some data points will need to be enqueued to be redirected to other external URLs.
I'm planning on using a Java application server (leaning towards GlassFish) with a servlet to receive the incoming data. I'd like to implement some kind of queuing mechanism to store the data temporarily so that the response back to the sensor isn't dependent on all the intermediate processing. Separate independent queues are also a requirement for the data re-direction piece. After doing some research the two main options seem to be:
1) Install a database on the app server and use tables for the various queues. The queues would be processed by a Java application, either running in the app server or standalone as its own service.
2) Use a database backed JMS solution to implement the queuing.
I'm not that familiar with JMS but from what I've read it seems to be the better solution in this case. The primary requirement is that no sensor data ever be lost or dropped from the queue before being processed and that it be processed more or less sequentially. We'd also like to make it easy to halt the processing of some of the queues at certain times but still have them accumulate data and for these messages to never automatically expire.
With strategy 1 it's obvious to me how to meet these requirements but it may be less robust and scalable, and more complex to develop than strategy 2, since I'll need to write my own multi-threaded code to handle the various independent queues. I'm wondering what the potential pitfalls could be in using JMS queues for this purpose since I've never worked with them before.
Data integrity is a big issue so I need to make sure JMS can guarantee no data loss in the event of a server reboot, power outage, or if the queue gets very large for some reason. For instance could a problem completing transactions to the main database for a period of time potentially cause the JVM to run out of memory, crash, and lose all accumulated data? (This would be the nightmare scenario).
Also, I was wondering if there would be any way to pause the JMS queue processing via an app server admin tool, or to easily see what's in the queue (I would be enqueuing an object which would be the message XML plus some other data, including the timestamp received, etc.). I've read a few posts on here that deal with related issues but wanted to get some direct feedback. Basically I'd like to know of instances (if any) where JMS is not an appropriate queuing solution and whether this is one of those cases. Any advice is greatly appreciated.
Kaleb's answer talks about the benefits of JMS quite eloquently, but since you're asking about pitfalls, here's what I can think of.
Not all JMS implementations are equal. In theory you can use whatever implementation suits your needs, but unless you're prepared to do some serious load testing and failure condition testing, you can't know that a particular implementation isn't going to fail under your particular use case.
Most JMS implementations use a transactional datastore, such as a relational database, as their back end. That means that rather than writing directly to whatever datastore you're familiar with, you have to rely on the JMS implementation's extra layer between you and the stored messages.
While swapping JMS implementations to find the one that perfectly fits your needs may seem like a simple endeavor because of the homogeneous JMS API, the critical features for failure handling, JMS server monitoring, and all the other cool stuff that exists above and beyond messaging are going to be a hassle to deal with if you do change your implementation.
That said, I think you'd be crazy to write to the DB yourself instead of going with JMS. On the first point, ActiveMQ is a venerable JMS server used in many enterprise environments. On the second point, the fact is you'd just end up writing that extra layer yourself in order to implement messaging, and your code won't have the benefit of thousands of eyes (or a set of paid developers whose sole job is to respond to customers and make sure the JMS implementation is solid). On the third point, the same ends up being true of your backend datastore. Use JMS; you'll save yourself trouble in the long run.
If you want to go the JMS route, a standalone JMS-compatible message broker (separate from your app server) would be a good choice. Message brokers range from free open-source (like ActiveMQ at http://activemq.apache.org/ or OpenMQ at https://mq.dev.java.net/), to large-scale commercial solutions (IBM's WebSphere MQ at http://www-01.ibm.com/software/integration/wmq/ is one of the largest).
Message brokers offer guaranteed delivery (provided the server's up and listening), and you can do quite a bit to ensure that the system is fail-safe, including integrated backup broker servers and instant power backup. Broker queues can eventually run out of room if your app server isn't picking up the messages, but you can assign a huge queue depth (hundreds of GB) and have the server send alerts if the messages aren't getting processed and the queue reaches a certain percentage.
Your Java app would then run on a different server entirely, and would connect to the broker and pull messages off of the queue as fast as possible. If the app server crashes or stops picking up messages for any other reason, the broker would just keep all messages in that queue until the app server begins picking them up again.
You will want to implement a poison message queue in your implementation: this is where messages that cannot be processed after some number of retries will end up.
You will probably need to write some code that can examine the messages in that queue and re-send them to the appropriate destination after fixing whatever is causing them to fail.
If sequence of message processing is important, a message ending up in the poison queue could mean all processing is halted until that message is corrected.
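A hedged sketch of that retry-then-park pattern with plain JMS, assuming a transacted session; the queue roles, retry limit, and helper names are placeholders. JMSXDeliveryCount is a JMS-defined property that most brokers populate on redelivery:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class PoisonQueueWorker {
    private static final int MAX_DELIVERIES = 5;   // placeholder retry limit

    static void handleOne(Session session, MessageConsumer consumer,
                          MessageProducer poisonProducer) throws JMSException {
        Message message = consumer.receive(5000);
        if (message == null) {
            return;                                // nothing waiting right now
        }
        int deliveries = message.getIntProperty("JMSXDeliveryCount");
        if (deliveries > MAX_DELIVERIES) {
            poisonProducer.send(message);          // park it for inspection and later re-send
            session.commit();
            return;
        }
        try {
            process(message);                      // your actual work
            session.commit();                      // transacted session: ack only on success
        } catch (Exception e) {
            session.rollback();                    // broker redelivers and bumps the delivery count
        }
    }

    static void process(Message message) throws JMSException { /* ... */ }
}
```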
As far as fault tolerance goes, you can have multiple instances of the consuming services subscribe to the same queue or topic, providing an ability to continue processing even if one or more instances goes down.
Finally, have a watchdog process that pings the various consumers on your message queue, and if one doesn't respond, have it send a message that results in a new instance being started. In this way, your message processing environment can be somewhat self regulating.