I was learning gRPC, as we are planning to expose a gRPC server (instead of a REST endpoint) within a Spring Boot microservice, listening on a dedicated port. I am using the following code snippet to create the gRPC server.
import io.grpc.Server;
import io.grpc.ServerBuilder;

Server server = ServerBuilder.forPort(port)
        .addService(new MyServiceImpl())
        .build()
        .start();
Here the server object, which encapsulates the underlying NettyServerBuilder, is initialized with default values. We are planning to deploy it in production (powerful hardware) where we expect heavy traffic (approximately 10k calls per second) from gRPC clients. My question: for scaling, how should I configure the underlying NettyServerBuilder? Which are the important settings that I need to tune? Any suggestions and best practices are welcome.
You should:
Use serverBuilder.executor() and set a ForkJoinPool as the executor. This executor is where the gRPC callbacks (i.e. the methods on ServerCall.Listener) are invoked. ForkJoinPool is a heavily optimized, more concurrent executor, and using it lets the network threads get back to handling things like HTTP/2 and SSL.
Use nettyServerBuilder.workerEventLoopGroup() and provide an EpollEventLoopGroup. This gives you an optimized network thread implementation that is more efficient than the default NIO-based Java networking. The number of threads you give the group will depend on your benchmarks, but a good rule of thumb is 2-4 workers. gRPC uses EventLoops somewhat differently than Netty does, so you don't usually need one per core.
Use netty-tcnative for your SSL implementation. It is an enormously faster SSL implementation that wraps OpenSSL and BoringSSL.
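Putting the first two suggestions together, here is a minimal sketch, assuming Linux with netty-transport-native-epoll on the classpath; MyServiceImpl and port come from the question, and the thread counts are placeholders you should benchmark:
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import java.util.concurrent.ForkJoinPool;

Server server = NettyServerBuilder.forPort(port)
        .addService(new MyServiceImpl())
        // gRPC callbacks run on this executor, keeping event loops free for I/O
        .executor(ForkJoinPool.commonPool())
        // epoll-based loops; 2-4 workers is a reasonable starting point
        .bossEventLoopGroup(new EpollEventLoopGroup(1))
        .workerEventLoopGroup(new EpollEventLoopGroup(4))
        .channelType(EpollServerSocketChannel.class)
        .build()
        .start();
// For the third suggestion, adding netty-tcnative-boringssl-static to the
// classpath lets GrpcSslContexts pick the OpenSSL/BoringSSL provider.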
We try hard to make the default implementation of gRPC's server to be fast without any extra configuration, so even if you don't use these, it's still going to be pretty fast.
Related
Consider this scenario: I have N>2 software components (microservices) that can communicate through two different communication protocols depending on how they are deployed. In other words, I have two deployment scenarios:
The components are to be deployed on the same machine. In this case, I am not sure it makes sense to use HTTP for communication if I think about performance. I understand that there are more efficient ways to communicate between two processes on the same machine in Java, such as sockets, RMI, RPC...
The components are to be deployed on N different machines. In this case, it seems to me that it makes sense for me to use HTTP to communicate these components.
In short, what I want is to be able to configure the communication protocol depending on how I perform the deployment: on a single machine, for example, use RMI, but when I deploy on two machines, use HTTP.
Does anyone know how I can do this using Spring Boot?
Many Thanks!
The fundamental building block of protocols like RMI and HTTP is socket communication. If you are not looking for the comfort of HTTP or RMI and your priority is performance, pure socket communication is your choice.
This raises other concerns, such as deployment difficulties: you have to know the IP addresses of both nodes in advance.
Another option is to go for a Unix domain socket for within-server communication. For that you can depend on junixsocket.
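A minimal client sketch using junixsocket, assuming the library is on the classpath; /tmp/app.sock is a hypothetical socket path:
import java.io.File;
import org.newsclub.net.unix.AFUNIXSocket;
import org.newsclub.net.unix.AFUNIXSocketAddress;

// Connect to a Unix domain socket; this only works when both processes
// are on the same machine and share the filesystem.
AFUNIXSocket socket = AFUNIXSocket.newInstance();
socket.connect(new AFUNIXSocketAddress(new File("/tmp/app.sock")));
socket.getOutputStream().write("hello".getBytes());
socket.close();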
If you want to go another route, check all inter-process communication options.
EDIT
As you said in a comment, "It is simply no longer a question of two components but of many". In that scenario, each component should be a micro-service and should be capable of interacting with the others. If that is the choice, the most scalable protocols are REST/RPC, both of which use HTTP under the hood. REST is an ideal fit for an API developed against a data source using CRUD operations; RPC leans more towards action-oriented APIs. You can find more details on the difference between REST and RPC here.
How I understand this is...
if the components (producer and consumer) are deployed on the same host then use an optimized protocol, and if on different hosts then use HTTP(S)
Firstly, there must be a serious driver to go down this route; I take it the driver here is performance. You would like to offer faster performance on local deployments and comparatively compromised speeds on distributed deployments. BTW, given that we are in a distributed-deployment world (or at least that is where we are headed), HTTP is what will survive; custom protocols are discouraged.
Anyway, I would say your producer application should be in a self-healing/discovery mode. On start-up (or periodically) it could check the health of the "optimized" endpoint and decide whether the optimized receiver is available. The receiver would need to stand behind a load balancer. If the receiver is not up, fall back to HTTP(S) and set the instance up accordingly at runtime.
For the consumer, it would need to keep both the gates (HTTP and optimized) open. It should be ready to handle requests from either channel.
In Spring Boot you can have a health check implemented and switch the emitter on/off depending on the health of the optimized endpoint. If both endpoints are unhealthy then of course the producer cannot emit anything. Apart from this, the rest is just normal dependency injection.
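As an illustration only, a minimal Spring Boot sketch of that dependency-injection part; the MessageEmitter interface, the transport.mode property, and the endpoint URL are all made-up names, and the runtime health-check switching is left out:
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Common abstraction; callers inject MessageEmitter and never see the transport.
interface MessageEmitter {
    void emit(String payload);
}

// Chosen when transport.mode=http (also the default).
@Component
@ConditionalOnProperty(name = "transport.mode", havingValue = "http", matchIfMissing = true)
class HttpEmitter implements MessageEmitter {
    private final RestTemplate rest = new RestTemplate();
    @Override
    public void emit(String payload) {
        rest.postForLocation("http://consumer-host/api/records", payload);
    }
}

// Chosen when transport.mode=local, e.g. a same-machine deployment.
@Component
@ConditionalOnProperty(name = "transport.mode", havingValue = "local")
class OptimizedEmitter implements MessageEmitter {
    @Override
    public void emit(String payload) {
        // write to a Unix domain socket, RMI stub, shared queue, ...
    }
}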
I am working on a project that makes a REST call to another service to save data in a DB. The data is very important, so we can't afford to lose anything.
If there is a problem in the network, the message will be lost, which can't happen. I already looked into Spring Retry and saw that it is designed to handle temporary network glitches, which is not what I need.
I need a way to put the REST calls in some kind of queue (like ActiveMQ) and preserve their order (this is very important because I receive Save, Delete and Update REST calls).
Any ideas? Thanks.
If a standalone installation of ActiveMQ is not desirable, you can use an embedded in-memory broker.
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
I'm not sure if you can configure the in-memory broker to use KahaDB. Of course, the scope of persistence will be limited to your application process, i.e. messages in the queue will not be available if the application is restarted. This is probably the most important argument against in-memory or plain code-based approaches.
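For reference, a minimal embedded-broker sketch using the vm:// transport; the queue name is made up, and broker.persistent=false keeps everything in memory:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// The vm:// transport starts an embedded broker inside this JVM on first use.
ConnectionFactory factory =
        new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("pending-rest-calls");
MessageProducer producer = session.createProducer(queue);
producer.send(session.createTextMessage("SAVE record-1"));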
If you must reinvent the wheel, then have a look at this topic, which talks about using Executors and a BlockingQueue to implement your own pseudo-MQ:
Producer/Consumer threads using a Queue.
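A single-threaded executor (internally backed by an unbounded blocking queue) already gives you an ordered, in-process queue; a sketch, where restCall and record stand in for your actual client call and payload:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One worker thread drains the queue, so Save/Delete/Update submissions
// execute strictly in the order they were enqueued.
ExecutorService restQueue = Executors.newSingleThreadExecutor();
restQueue.submit(() -> restCall("SAVE", record));    // hypothetical call
restQueue.submit(() -> restCall("UPDATE", record));
// Caveat: this queue lives in memory only; a restart loses pending calls.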
On a side note, the retry mechanism is not something provided by the MQ broker; it is the client that implements it, be it ActiveMQ's bundled client library or other messaging libraries such as Camel.
You can also review your current tech stack to see if any of the existing components have JMS capabilities. For example, Oracle Database bundles an MQ called Oracle AQ.
Have your service keep its own internal queue of jobs, and don't move on to the next REST call until the previous one returns a success code.
There are many better ways to do this but the limitation will come down to what your company will let you do.
Hi, I am working with Java and Thrift. I see that there are two parts to the Thrift async system: one is Service.AsyncIface and the other is Service.AsyncClient. From the Thrift implementations for AsyncClient, I see that the non-blocking interface is wired up and ready to go on the library side. I just made a simple client using TNonblockingSocket and it works.
1) Do we care if the existing Thrift server for the Service is blocking or non-blocking? Why?
2) If we want to wrap a non-blocking client framework in things like retry logic, host discovery, policy management etc., what would be the ideal framework?
From the client's point of view there's no difference in communicating with a sync or an async server, given that the protocols and transports are compatible. That's because the client receives the same serialized response from either kind of server. For example, if you do a JSON-over-HTTP request, you don't really care whether the server is sync or async.
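To make that concrete, a minimal async-client sketch; MyService and its ping method are hypothetical generated types (the callback's generic parameter varies between Thrift versions), and the wiring is the same regardless of which server flavour answers:
import org.apache.thrift.async.AsyncMethodCallback;
import org.apache.thrift.async.TAsyncClientManager;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TNonblockingSocket;

// The client manager runs a selector thread; calls return immediately
// and the callback fires when the response arrives.
TAsyncClientManager manager = new TAsyncClientManager();
TNonblockingSocket transport = new TNonblockingSocket("localhost", 9090);
MyService.AsyncClient client = new MyService.AsyncClient(
        new TBinaryProtocol.Factory(), manager, transport);
client.ping(new AsyncMethodCallback<Void>() {
    @Override public void onComplete(Void response) { System.out.println("pong"); }
    @Override public void onError(Exception e) { e.printStackTrace(); }
});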
Finagle is a good choice if you're interested only in JVM languages (and it's the only framework I'm aware of that has the required feature set).
When making a synchronous request to a service using an HTTP client library, the calling thread is blocked until data is returned. So are there any advantages to using non-blocking I/O in a synchronous HTTP request?
Use case: a web application developed using Spring MVC. For certain requests, a synchronous call is made to a REST service. Is it advantageous to use an HTTP client library that uses NIO to make calls to the service? Jetty HttpClient uses non-blocking I/O. It is not clear to me whether HttpClient from HttpComponents supports NIO.
Let's assume you implement some sort of service that serves several clients. You will need to achieve a certain degree of parallelism with I/O operations (file access, network communication, etc.); otherwise a single client can block all the others. You now have two options:
You can spawn several threads and each thread uses blocking I/O operations.
You use a single thread (or very few) and use non-blocking I/O operations.
Implementing a solution with non-blocking I/O is usually much harder because you have to manage the context of each client yourself. When you use a dedicated thread for each client, the context is naturally given (thread context = client context).
Non-blocking I/O is worth the extra implementation effort if you have a large number of slow clients because you can handle them with a small number of threads. If you used a thread for each client, they'd mainly be sitting there and waiting and would still use huge amounts of memory.
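For contrast, a sketch of the first option, where each blocking thread naturally carries its client's context; the port and the trivial handler are placeholders:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                // One thread per client: simple, but thousands of slow clients
                // mean thousands of mostly-idle threads and their stacks.
                new Thread(() -> handle(client)).start();
            }
        }
    }
    static void handle(Socket client) {
        try (Socket c = client) {
            c.getOutputStream().write("ok\n".getBytes());
        } catch (IOException ignored) {
        }
    }
}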
If you aren't implementing a service but a simple application, then non-blocking I/O is certainly not worth it.
Update: If I understand the use case correctly, you have a web application which not only serves web pages to web clients but also needs to execute REST requests in order to serve those clients. So if you have a very large number of concurrent clients (several thousand or more) and the REST requests take a long time (several seconds), then non-blocking I/O could make sense. But your web server will need to support asynchronous operations so you can give the thread back to the server until the REST request has completed. Asynchronous operations were introduced with the Servlet 3.0 specification, so you'll need an up-to-date web server like Tomcat 7.
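A minimal Spring MVC sketch of that hand-back, assuming a Servlet 3.0+ container; the controller and endpoint are made up, and in a real setup the future would be completed by a non-blocking HTTP client's callback rather than supplyAsync:
import java.util.concurrent.CompletableFuture;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Returning an async type releases the servlet container thread until the
// future completes, instead of blocking it for the whole REST round trip.
@RestController
class ProxyController {
    @GetMapping("/data")
    public CompletableFuture<String> data() {
        // A non-blocking HTTP client (e.g. Jetty HttpClient) would complete
        // this future from its own callback thread.
        return CompletableFuture.supplyAsync(() -> "response from REST service");
    }
}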
I'm looking for opinions from you all. I have a web application that needs to record data into another web application's database. I'd rather not use an HTTP GET request against the second application because of latency. I'm looking for a fast way to save records in the second application, and I came across the idea of "fire and forget". Would JMS suit this scenario? From my understanding JMS will guarantee message delivery, but a 100% delivery guarantee is not important here as long as I can serve as many requests as possible. Say I need to make at least 1000 requests per second to the second application: should I use JMS, HTTP requests, or XMPP instead?
I think you're misunderstanding networking in general. There's positively no reason that an HTTP GET would have to be any slower than anything else, and if HTTP takes advantage of keep-alives it's faster than most options.
JMS isn't a protocol, it's a specification that wraps many other protocols including, possibly, HTTP or XMPP.
In the end, at the levels where Java operates, there's either UDP or TCP. TCP has more overhead but guarantees delivery (via retransmission) and ordering. UDP offers neither guaranteed delivery nor in-order delivery. If you can deal with UDP's limitations you'll find it "faster", and if you can't, then any lightweight TCP wrapper (of which HTTP is one) is just about the same.
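For illustration, the UDP end of that spectrum; the host and port are placeholders:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Fire-and-forget over UDP: send() returns immediately, and there is no
// delivery or ordering guarantee.
byte[] data = "record-1".getBytes();
DatagramSocket socket = new DatagramSocket();
socket.send(new DatagramPacket(data, data.length,
        InetAddress.getByName("second-app-host"), 9999));
socket.close();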
Your requirements seem to be:
one client and one server (inferred from your first sentence),
HTTP is mandatory (inferred from your talking about a web application database),
1000 or more record updates per second, and
individual updates do not need to be acknowledged synchronously (you are willing to use a "fire and forget" approach).
The way I would approach this is to have the client threads queue the updates internally, and implement a client thread that periodically assembles queued updates into one HTTP request and sends it to the server. If necessary, the server can send a response that indicates the status for individual updates.
Batching eliminates the impact of latency on the client, and potentially allows the server to process the updates more efficiently.
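A sketch of that batching thread under made-up names; BatchingEmitter, the 100 ms flush interval, and postBatch are all placeholders:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Producers enqueue updates; one thread drains the queue every 100 ms and
// posts the whole batch in a single HTTP request.
class BatchingEmitter {
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    BatchingEmitter() {
        scheduler.scheduleAtFixedRate(this::flush, 100, 100, TimeUnit.MILLISECONDS);
    }

    void enqueue(String update) {
        updates.add(update);   // returns immediately: fire and forget for the caller
    }

    private void flush() {
        List<String> batch = new ArrayList<>();
        updates.drainTo(batch);
        if (!batch.isEmpty()) {
            postBatch(batch);  // one HTTP POST carrying all queued updates
        }
    }

    private void postBatch(List<String> batch) {
        // send with the HTTP client of your choice
    }
}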
The big difference between HTTP and JMS or XMPP is that JMS and XMPP allow asynchronous fire-and-forget messaging (where the client does not really know when or if a message will reach its destination and does not expect a response or an acknowledgment from the receiver). This would allow the first app to respond quickly regardless of the second application's processing time.
Asynchronous messaging is usually preferred for high-volume distributed messaging where the message consumers are slower than the producers. I can't say if this is exactly your case here.
If you have full control and the two web applications run in the same web container, and hence in the same JVM, I would suggest using JNDI to give both web applications access to a common data structure (a list?) that allows concurrent modification, namely allowing application A to add new entries while application B simultaneously consumes the oldest ones.
This is most likely the fastest way possible.
Note that you should restrict the objects you put in the list to classes found in the JRE, or you will most likely run into ClassCastExceptions, since each web application has its own classloader. These can be circumvented, but the easiest approach is most likely to just transfer strings through the common data structure.
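A sketch of the idea, container permitting (many containers make the JNDI context read-only by default); the java:global name is made up:
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.naming.InitialContext;

// Application A (producer): bind a structure holding only JRE types (String).
Queue<String> shared = new ConcurrentLinkedQueue<>();
new InitialContext().bind("java:global/sharedRecordQueue", shared);
shared.add("record-1");

// Application B (consumer): look up the same structure and drain it.
@SuppressWarnings("unchecked")
Queue<String> incoming = (Queue<String>)
        new InitialContext().lookup("java:global/sharedRecordQueue");
String next = incoming.poll();   // null when empty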