Here is my scenario: I maintain a service that acts primarily as an API gateway. It receives an HTTP REST request, makes multiple gRPC service calls, and then combines the responses into a contextual response.
This service is running Jetty, currently configured with 250 threads.
I have several different back-end gRPC services that I call, and for each service, I'm currently creating one ManagedChannel and one BlockingStub, which I'm sharing across all of the worker threads.
I know that this is fine, since both the Channel and Stub are thread-safe, and there is no shared state amongst my threads (all my requests are idempotent).
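Concretely, the per-service setup looks something like this (a sketch; InventoryServiceGrpc stands in for whatever stub class protoc generates, and the target address is made up):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class InventoryClient {
    // One channel and one blocking stub per backend service, built once at
    // startup and shared by all 250 Jetty worker threads; both are thread-safe.
    private static final ManagedChannel CHANNEL =
            ManagedChannelBuilder.forAddress("inventory.internal", 443) // hypothetical target
                    .useTransportSecurity()
                    .build();

    private static final InventoryServiceGrpc.InventoryServiceBlockingStub STUB =
            InventoryServiceGrpc.newBlockingStub(CHANNEL);

    public static InventoryServiceGrpc.InventoryServiceBlockingStub stub() {
        return STUB;
    }
}
```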
However, I'm curious if this is the "right" way to do things. I've read some other items about pooling Channels, or having one channel and multiple Stubs, but if I'm not hitting the I/O limit for a Channel, I can't see the benefit (since under the hood, each ClientCall executes in the calling thread).
Is there a specific pointer to Java gRPC 'best practice' documentation that would help me with this?
It sounds like what you're doing is fine. Sharing the ManagedChannel as much as reasonable/possible is the most important piece. It doesn't really matter whether you share stubs or not, nor whether you share interceptors. It's a bit unclear whether you could share ManagedChannels across services (if any of the channels are to the same target).
You are right that some use-cases may want a "pool" of Channels for higher byte throughput, but this is a minority case. Also, even in that case you can "hide" that logic by creating a Channel (or even implement ManagedChannel) that does round-robin across multiple ManagedChannels, and share that "one" Channel as much as possible.
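For illustration, a minimal sketch of such a round-robin wrapper might look like this (channel lifecycle and error handling omitted; one possible approach, not an official gRPC API):

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ManagedChannel;
import io.grpc.MethodDescriptor;

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

final class RoundRobinChannel extends Channel {
    private final List<ManagedChannel> delegates;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinChannel(List<ManagedChannel> delegates) {
        this.delegates = delegates;
    }

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> newCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions options) {
        // Spread calls across the underlying channels in round-robin order.
        int i = Math.floorMod(next.getAndIncrement(), delegates.size());
        return delegates.get(i).newCall(method, options);
    }

    @Override
    public String authority() {
        // All delegates point at the same target, so any one authority will do.
        return delegates.get(0).authority();
    }
}
```

Stubs created against this one Channel then transparently fan out over the underlying connections.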
Related
I just started working on a new project backed by RabbitMQ, and there are multiple consumer instances created listening to the same queue when the application starts. However, they share the same connection with different channels.
The messages from the queue are massive (millions of messages for a single producing operation), so I guess the original author was trying to make consuming faster.
I am trying to find posts discussing this, but I can't find a definitive answer.
What I've gathered so far is:
Each channel will have a separate dispatch thread
Operations on the same channel are serialized, even when they are invoked from multiple threads
So
creating multiple consumers, and thus multiple channels, gives multiple dispatch threads; but I don't think this improves message dispatching, since a single dispatch thread should be more than enough for it.
Acks can be parallelized across different channels, but I am not quite sure this gives any better performance.
Since more channels consume more system resources, I wonder: is this a good practice?
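For reference, the setup looks roughly like this (a sketch; the queue name and prefetch value are placeholders):

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

public class MultiConsumerApp {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker

        // One shared TCP connection for the whole application...
        Connection connection = factory.newConnection();

        // ...with one channel (and one consumer) per consumer instance.
        for (int i = 0; i < 4; i++) {
            Channel channel = connection.createChannel();
            channel.basicQos(1); // hypothetical prefetch setting
            channel.basicConsume("work-queue", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                        AMQP.BasicProperties properties, byte[] body) throws IOException {
                    // ... process the message body ...
                    getChannel().basicAck(envelope.getDeliveryTag(), false);
                }
            });
        }
    }
}
```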
There seem to be a few things going on here, so let's try to look at this scenario from a holistic perspective.
For starters, it sounds like the original designer of this code understood some basics about RabbitMQ (or learned a few things by trial and error), but may have had trouble putting all the pieces together; hopefully I can help.
RabbitMQ connections are, in reality, AMQP-over-TCP connections (and thus sit somewhere around the session layer of the OSI model). TCP connections are supposed to be opened up and used until some sort of network interruption or application shutdown closes them (and for this reason, AMQP has trouble with firewalls and other smart network devices). Using a single TCP connection for the message processing activities of a single logical process is a good idea, as creating and destroying TCP connections is usually an expensive process for the computer. That leads us to the next concept:
RabbitMQ channels are used to multiplex communication streams within the AMQP-over-TCP connection (and are defined in the AMQP protocol spec). All they do is specify an integer value (I can't remember the number of bytes, but it doesn't matter anyway) used to preface the subsequent command or response on a TCP connection. Most AMQP operations are channel-specific. For the purposes of higher-level operations, channels are treated similarly to connections, as they are application-level constructs.
Now, where I think the question starts to go off the rails a bit is here:
The messages from the queue are massive (millions of messages for a single producing operation), so I guess the original author was trying to make consuming faster.
A fundamental assumption about a system which uses queues is that messages are consumed at approximately the same rate that they are produced. Queues exist to buffer uneven producing activities. The mathematics and statistics of how queues work are quite interesting, and assuming the production of messages is done in response to some real-world stimulus, your system is virtually guaranteed to behave in a predictable manner. Therefore, your design goal is to ensure that there are enough consumers to process the messages that are produced, and to respond to changing conditions as needed. Your goal should not be to "speed up" the consumers (unless they have some specific issue), but rather to have enough consumers to process the total load.
Further, the average number of items in the queue at any time should approach zero. It is usually a good idea to have overcapacity so that you don't wind up with an unstable situation where messages start accumulating in the queue (and the queue ends up looking like the Stack Overflow Close Vote Queue).
And that brings us to an attempt to answer your fundamental question, which seems to deal with threading and possibly detailed implementation of the Java client, which I will readily admit I have not used (I'm a .NET guy).
Here are some design guidelines for your software:
Ensure that a single thread uses no more than one channel.
Use one TCP connection per logical consuming process.
Balance the number of logical processes on a single physical machine such that resource contention is not a problem (you don't want to starve your consumers of computer resources).
Try to use BASIC.GET as opposed to a push-based consumer. Consumers are difficult to use correctly in practice, and there is no performance benefit at the protocol level over a BASIC.GET (see the sketch after this list). Note that I do not know whether the Java library implements these differently in a way that does cause a performance difference; stranger things have been known to happen.
If you do use consumers, make sure pre-fetch is set to 0 (disabled) and that AutoAck is set to false if reliable processing is important (most applications require reliable processing). Along with this, make sure you are acknowledging messages upon completion of processing!
Periodically reboot your consuming threads, channels, and processors, or do a BASIC.Recover. There are degrees of randomness that will result in unacknowledged messages accumulating over time, and this will deal with them.
Again, if you prefer to use consumers, sharing consumers across channels is generally a bad idea. Each consumer should get its own channel.
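To make the BASIC.GET guideline concrete, here is a rough sketch using the Java client (I'm a .NET guy, so treat the details as approximate; the queue name and back-off interval are arbitrary):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PollingWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection connection = factory.newConnection()) {
            Channel channel = connection.createChannel(); // this thread's own channel
            while (true) {
                // Pull one message; autoAck=false so we ack only after processing.
                GetResponse response = channel.basicGet("work-queue", false);
                if (response == null) {
                    Thread.sleep(100); // queue empty; back off briefly
                    continue;
                }
                // ... process response.getBody() ...
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        }
    }
}
```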
I am using the RabbitMQ java client.
My app has multiple exchanges and queues. Adopting something similar to the Pub/Sub model.
What is the best practice regarding connections?
Shall I have one connection per app?
I understand the channel model, and the thread (un)safety model. Just not sure if I should have multiple connections or not.
One connection per app is correct.
Within that connection, you will have many channels - where the actual work is done.
You can have hundreds or thousands of message producers and consumers (each on its own channel) inside a single connection.
If you start to see slowdown in your RMQ setup because you're doing too much work, look at clustering RMQ and/or standing up multiple instances of your app.
But you would still maintain 1 connection per app instance.
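A rough sketch of that shape in the Java client (exchange name and routing key are made up):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedConnectionPublishers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        Connection connection = factory.newConnection(); // the one connection per app

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    // Each worker gets its own channel; channels should not be
                    // shared across threads, but are cheap to create on the
                    // shared connection.
                    Channel channel = connection.createChannel();
                    byte[] body = ("message from publisher " + id)
                            .getBytes(StandardCharsets.UTF_8);
                    channel.basicPublish("app-exchange", "routing.key", null, body);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}
```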
It will depend on the volume of messages you have. If it really is huge, maybe two or three connections could do it, but one per application seems to be the best choice.
I am looking to optimize a microservice architecture that currently uses HTTP/REST for internal node-to-node communication.
One option is implementing backpressure capability in the services, e.g. by integrating something like Quasar into the stack. This would no doubt improve things, but I see a couple of challenges. One is that the async client threads are transient (in memory), so on client failure (crash) these retry threads are lost. The second is that, in theory, if a target server is down for some time, the client could eventually hit OOM from attempted retries, because threads are ultimately limited, even Quasar fibers.
I know it's a little paranoid, but I'm wondering if a queue-based alternative would be more advantageous at very large scale.
It would still work asynchronously, like Quasar/fibers, except that a) the queue is centrally managed and off the client JVM, and b) the queue can be durable, so that if the client and/or target servers go down, no in-flight messages are lost.
The downside to a queue, of course, is that there are more hops, which slows down the system. But I'm thinking there is probably a sweet spot where the Quasar ROI peaks and a centralized, durable queue becomes more critical for scale and HA.
My question is:
Has this tradeoff been discussed? Are there any papers on using a centralized external queue / router approach for intra-service communication?
TL;DR: I just realized I could probably phrase this question as: "When is it appropriate to use message-bus-based intra-service communication, as opposed to direct HTTP, within a microservice architecture?"
I've seen three general protocol design patterns with microservices architectures, when running at scale:
Message bus architecture, using a central broker such as ActiveMQ or Apache Qpid.
"Resilient" HTTP, where some additional logic is built on HTTP to make it more resilient. Typical approaches here are Hystrix (Java), or SmartStack/Baker St (smart proxy).
Point-to-point asynchronous messaging using something like NSQ, ZMQ, or Qpid Proton.
By far the most common design pattern is #2, with a little bit of #1 mixed in when a queue is desirable.
In theory, #3 offers the best of both worlds (resiliency AND scale AND performance) but the technologies are all somewhat immature. It turns out that with #2 you can get really very far (e.g., Netflix uses Hystrix everywhere).
To answer your question directly, I'd say that #1 is very rarely used as an exclusive design pattern because it creates a single bottleneck for your entire system. #1 is common for a subset of the system. For most people, I'd recommend #2 today.
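To make #2 concrete, here is a rough sketch of wrapping a downstream HTTP call in a Hystrix command, which adds a timeout, a circuit breaker, and a fallback (the service URL and the empty-JSON fallback are made up):

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class UserLookupCommand extends HystrixCommand<String> {
    private final String userId;

    public UserLookupCommand(String userId) {
        super(HystrixCommandGroupKey.Factory.asKey("UserService"));
        this.userId = userId;
    }

    @Override
    protected String run() throws Exception {
        // Hypothetical blocking HTTP call to the downstream service.
        URL url = new URL("http://user-service/users/" + userId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(500);
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").hasNext() ? s.next() : "";
        }
    }

    @Override
    protected String getFallback() {
        // Degrade gracefully when the call fails or the circuit is open.
        return "{}";
    }
}

// Usage: String user = new UserLookupCommand("42").execute();
```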
I need to create a relatively simple Java tcp/ip server and I'm having a little trouble determining if I should use something like Netty or just stick with simple ServerSocket and InputStream/OutputStream.
We really just need to listen for a request, then pass the new client Socket off to some processing code in a new thread. That thread will terminate once the processing is complete and the response is sent.
I like the idea of pipelines, decoders, etc. in Netty, but for such a simple scenario it doesn't seem worth the added up-front development time. It seems like overkill for our initial requirements, but I'm a little nervous that there are lots of things I'm not considering. What, if any, are the benefits of Netty for such simple requirements? What am I failing to consider?
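For concreteness, the simple version we have in mind is something like this (a sketch; the port and the line-based protocol are placeholders):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();          // blocks until a client connects
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String request = in.readLine(); // hypothetical line-based request
            out.println("processed: " + request);
            // the thread terminates once the response is sent
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```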
The main advantage of Netty over simply reading from and writing to sockets using streams is that Netty supports non-blocking, asynchronous I/O (using Java's NIO API); when you use streams to read and write from sockets (and you start a new thread for each connection accepted from a ServerSocket), you are using blocking, synchronous I/O.
The Netty approach scales much better, which is important if your system needs to be able to handle many (thousands) of connections at the same time. If your system does not need to scale to many simultaneous connections, it might not be worth the trouble to use a framework like Netty.
Some more background information: Threads are relatively expensive resources in an operating system. Each thread needs memory for the stack (which can be for example 2 MB in size). When you create thousands of threads, this is going to cost a lot of memory; also, operating systems have limits on the number of threads that can be created. So you don't want to start a new thread for each accepted connection. The idea of asynchronous I/O is to decouple the threads from the connections (no one-to-one relation). There can be many more connections than threads, and whenever some event happens on one of the connections (for example, data is received), a thread from a thread pool is temporarily used to handle the event.
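For contrast with the thread-per-connection sketch above, here is a minimal non-blocking echo server in Netty (a sketch assuming Netty 4.x); note that a small, fixed set of event-loop threads serves all connections:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // small pool handles all I/O
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    ctx.writeAndFlush(msg); // echo; no thread blocks waiting
                                }
                            });
                        }
                    });
            bootstrap.bind(9000).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```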
I think that the benefits of using Netty are not immediate but actually come later, when requirements change and maintenance becomes more complex for your project. Netty brings built-in understanding of the HTTP protocol, so that you can provide simple RESTful web services. You also have the option of using the asynchronous request processing that Netty provides as a framework, so that you can potentially get better performance and service several orders of magnitude more concurrent requests.
First, write the logic of your service so that it's independent of your communication layer.
As Victor Sorokin said, there's a learning advantage to doing it yourself. So it ought to be worthwhile to write it with sockets. It will involve less effort to get started, and if it works well enough then you're off to the races.
If you find that you need more scalability/robustness later, you can switch to Netty. Just write a new Netty layer that communicates with your service logic layer and swap them out.
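A sketch of what that separation might look like (the RequestHandler name is made up):

```java
// The service logic depends only on this interface, so a plain-socket front
// end and a Netty front end are interchangeable.
public interface RequestHandler {
    String handle(String request);
}

// Socket layer:  out.println(handler.handle(in.readLine()));
// Netty layer:   ctx.writeAndFlush(handler.handle((String) msg));
```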
I'm new to enterprise Java development, although I'm sure this question equally applies to any language or platform, such as .NET.
For the first time ever I'm now dealing with message queues, and I'm very intrigued by them (specifically, we're using ActiveMQ). My tech lead wants ActiveMQ queues to be the front-runners to all of our databases and internal web services; thus, instead of a database query being fired off from the client and going directly to the database, it gets queued up first.
My question is this: are queues the way to go with every major processing component? Do best practices dictate putting them in front of system components that usually get hit with large amounts of requests? Are there situations where queues should not be used?
Thanks for any insight here!
Here are some examples where a message queue might be useful.
Limited resources
Let's say you have a large number of users making requests to a service. If the service can only handle a small number of requests concurrently, then you might use a queue as a buffer.
Service decoupling
A key enterprise integration concept is the decoupling of systems in, for example, a workflow. Instead of having systems talk directly to each other, they asynchronously post messages to queues. The integration component then routes and delivers each message to the appropriate system.
Message replay
In the above example queues can also provide reliable delivery and processing of requests. If one component of the workflow breaks, others are unaffected and can still operate and post messages to the broken component. When the broken component recovers it can process all the queued up messages.
The key concepts here are load throttling, loose coupling, reliability, and async operation.
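Since the question mentions ActiveMQ, here is a rough JMS sketch of the throttling and decoupling ideas (the broker URL and queue name are placeholders):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueBufferExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Producer side: enqueue and return immediately (the loose-coupling part).
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("db.requests"); // hypothetical name
        MessageProducer producer = producerSession.createProducer(queue);
        producer.send(producerSession.createTextMessage("some request payload"));

        // Consumer side: drains the queue at its own pace (the throttling part).
        // It gets its own session, since JMS sessions are single-threaded.
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                consumerSession.createConsumer(consumerSession.createQueue("db.requests"));
        consumer.setMessageListener(message -> {
            try {
                System.out.println("processing: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}
```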
As to whether they are the way to go for every major component, I would say no, this is not an automatic choice, you must consider each component individually.
Queues are indeed a very powerful and useful tool, but like every tool, you should only use it for the job for which it is intended.
IMO they are not the way to go for every major processing component.
As a general rule I would use a queue where the requesting resource does not require an immediate, synchronous response. I would not use a queue where the timeliness and order of processing is vital.
Where asynchronous processing is allowable and you wish to regulate the amount of traffic to a service then a queue may be the way to go.
See @Qwerky's answer too; he (or she) makes some good points.
Please check out the Disruptor:
http://code.google.com/p/disruptor/
Queues are not the only tool out there for solving these kinds of problems.
Answering your question: queues in this case will introduce asynchronous behavior into access to your databases. It then becomes more a question of whether you can afford such a great impact on your legacy systems; it might just be too much of a change to push everything through queues. Please describe the general purpose of your systems; then it will be easier to answer your question fully.
Message queues are fundamentally an asynchronous communication system. In this case, it means that aside from the queue that links the sender and receiver, both sender and receiver operate independently; a receiver of a message does not (and should not) require interaction with the sender. Similarly, a sender of a message does not (and should not) require interaction with receiver.
If the sender needs to wait for the result of processing a message, then a message queue may not be a good solution, as this would force an asynchronous system to be synchronous, against the core design. It might be possible to construct a synchronous communication system on top of a message queue, but the fundamental asynchronous nature of a message queue would make this conversion awkward.
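For illustration, this is roughly what that awkward construction looks like in JMS: the sender blocks on a temporary reply queue, which only works if the receiver cooperates by honoring JMSReplyTo (a sketch; the timeout is arbitrary, and the producer/consumer should really be reused and closed):

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class SyncOverQueue {
    static String call(Session session, Queue requestQueue, String body) throws JMSException {
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        TextMessage request = session.createTextMessage(body);
        request.setJMSReplyTo(replyQueue); // receiver must send its result here

        MessageProducer producer = session.createProducer(requestQueue);
        producer.send(request);

        // A blocking receive turns the asynchronous queue into a synchronous
        // call; without the timeout, a lost reply would block forever.
        Message reply = session.createConsumer(replyQueue).receive(5000);
        return (reply == null) ? null : ((TextMessage) reply).getText();
    }
}
```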