I am running a performance test on my unary gRPC server using GHZ. Under a load of about 100K requests, the response time is high (about 1s to 1.5s) for the initial few thousand requests.
After some debugging, what I am observing is a delay of about 500ms between the end of the gRPC server interceptor and the invocation of the gRPC service method, and another delay of about 500ms between the end of the service method and the invocation of SimpleForwardingServerCallListener.onComplete.
What could be causing this delay? I have configured 8 threads for the Netty server and 8 threads for the gRPC cancellation context executor.
I had posted the same question on the grpc-java GitHub repository; here is the thread: https://github.com/grpc/grpc-java/issues/7372
TL;DR
A large number of concurrent users (num users >> num gRPC worker threads) means the system is oversubscribed and unable to keep up with the incoming requests, creating a backlog of requests to be processed. This delays processing even though each request has already been accepted by the gRPC event loop thread. For example:
Total time to process a single request = 375ms
Time between the end of the gRPC server interceptor (ServerInterceptor.interceptCall) and the invocation of the gRPC service = 162ms
Time for the gRPC service to process the request = 65ms
Time between the end of the gRPC service and the invocation of SimpleForwardingServerCallListener.onComplete = 148ms
375/65 ≈ 5.8, so the system is about 5.8x oversubscribed. 100/5.8 ≈ 17, which would be closer to the maximum number of concurrent RPCs it can handle.
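A common mitigation (a hedged sketch, not something stated in the linked thread) is to give the server a larger application executor so more RPCs can run concurrently; the pool size and MyServiceImpl below are illustrative:
import java.io.IOException;
import java.util.concurrent.Executors;
import io.grpc.Server;
import io.grpc.ServerBuilder;

public class OversizedExecutorServer {
    public static void main(String[] args) throws IOException, InterruptedException {
        Server server = ServerBuilder.forPort(8080)
                // Application executor that runs interceptors and service methods;
                // 64 threads is illustrative, size it for the measured concurrency.
                .executor(Executors.newFixedThreadPool(64))
                .addService(new MyServiceImpl()) // hypothetical service implementation
                .build()
                .start();
        server.awaitTermination();
    }
}
Alternatively, you could cap the client-side concurrency so the offered load stays near the roughly 17 concurrent RPCs the numbers above suggest.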
Related
I have this scenario: Service A calls Service B via an HTTP POST request.
Service A ---> Service B
Service B at times takes more than 3 minutes to return the result, but the timeout configured in the HTTP request in A is 60s.
I want to know how to do this in Java with an HTTP client.
What I also want to know is how I can stop/cancel the execution in Service B if it takes more than 60 seconds to complete or the timeout expires, to avoid inconsistencies in the database.
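For context, a minimal client-side timeout sketch with Java 11's java.net.http.HttpClient (the URL and body are hypothetical) could look like the following; note that when the timeout fires the client merely stops waiting, and Service B keeps running unless it enforces its own deadline or exposes a cancel operation:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class ServiceAClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://service-b/process")) // hypothetical URL
                .timeout(Duration.ofSeconds(60)) // stop waiting after 60s
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();

        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        } catch (HttpTimeoutException e) {
            // The request timed out on the client; Service B is still working.
            System.err.println("Service B did not answer within 60s");
        }
    }
}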
Thank you!
My question is pretty similar to this question, Java - AsyncHttpClient - Fire and Forget, but I am using Jersey / JAX-RS in my case.
How do you configure Jersey / JAX-RS asynchronous calls to achieve "fire-and-forget" behavior, where it is imperative not to block the current working thread no matter what?
For example, if there are no threads available to process the request, skip it completely and move on; do not block the calling thread.
So given this test client here:
Client client = ClientBuilder.newClient();
Future<Response> future1 = client.target("http://example.com/customers/123")
        .request()
        .async()
        .get();
Cool, that works great for a GET. But what about a fire-and-forget PUT or POST? How would I change this to act more "fire-and-forget"?
client.target("http://example.com/customers/123")
        .request()
        .async()
        .put(Entity.json(myCustomer));
A fire-and-forget setup could be configured in many ways; for example, it could buffer requests into an in-memory queue up to a configurable amount of memory and simply start discarding new entries once the queue is full.
Another example would be N worker threads, where you just drop the HTTP request if they are all busy (see the sketch below).
What are the common JAX-RS async parameters that I should configure? Any gotchas?
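For illustration, here is a hedged sketch of the "N worker threads, drop when busy" idea using a bounded ThreadPoolExecutor with a DiscardPolicy in front of a plain Jersey client; the pool size, queue size, URL, and class name are illustrative rather than canonical JAX-RS settings:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;

public class FireAndForgetClient {
    // Bounded pool: 4 workers, a 100-entry in-memory queue, silent discarding when saturated.
    private final ExecutorService pool = new ThreadPoolExecutor(
            4, 4,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),
            new ThreadPoolExecutor.DiscardPolicy());

    private final Client client = ClientBuilder.newClient();

    public void putCustomer(Object myCustomer) {
        // Never blocks the caller: the pool either accepts the task or silently drops it.
        pool.execute(() -> client.target("http://example.com/customers/123")
                .request()
                .put(Entity.json(myCustomer))
                .close()); // release the connection; the response is ignored
    }
}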
I'm using the javax.websocket API in my app. I send messages from the server to the client like this:
Future<Void> messageFuture = session.getAsyncRemote().sendText(message);
messageFutures.add(messageFuture); // List<Future<Void>> messageFutures
I use the async API because I really care about performance and cannot make the server wait until each message is delivered, because the server does something like this:
for (int i = 1; i <= N; i++) {
    result = doStuff();
    sendMessage(result);
}
So it is impossible to wait for message delivery on each iteration.
After I send all the messages, I need to wait for all the Futures to finish (i.e. for all messages to be delivered). And to be safe I need some timeout, like "if the server sends a message to the client and the client doesn't confirm receipt within 30 seconds, then consider the websocket connection broken". As far as I understand, this should be possible with websockets since they work over TCP.
There is a method session.setMaxIdleTimeout(long):
Set the non-zero number of milliseconds before this session will be
closed by the container if it is inactive, ie no messages are either
sent or received. A value that is 0 or negative indicates the session
will never timeout due to inactivity.
but I'm really not sure if it is what I want (is it?). So how can I set a timeout like the one I described using the javax.websocket API?
The idle timeout could cover your case, but it is not designed to. The idle timeout applies more to the case where a client makes a connection, but is using it only infrequently.
The more precise feature for checking a timeout when sending is setAsyncSendTimeout.
Using both of these allows you to configure for the case where a client may leave a connection idle for minutes at a time, but the server expects relatively quick message acknowledgements.
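Here is a hedged sketch of how the two timeouts and an explicit wait on the send futures could be combined; the 30-second and 5-minute values are illustrative:
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.websocket.Session;

public class SendWithTimeouts {
    public void configure(Session session) {
        session.setMaxIdleTimeout(5 * 60_000);           // close sessions idle for 5 minutes
        session.getAsyncRemote().setSendTimeout(30_000); // fail async sends slower than 30s
    }

    public void awaitDelivery(Session session, List<Future<Void>> messageFutures) throws Exception {
        for (Future<Void> future : messageFutures) {
            try {
                future.get(30, TimeUnit.SECONDS);        // wait at most 30s per message
            } catch (TimeoutException | ExecutionException e) {
                session.close();                         // treat the connection as broken
                throw e;
            }
        }
    }
}
Note that setAsyncSendTimeout itself lives on WebSocketContainer and sets the container-wide default; setSendTimeout on the async remote is the per-endpoint equivalent.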
In my experience with Spring, the timeout implementation provided by Spring is not actually configurable. See How do you quickly close a nonresponsive websocket in Java Spring Tomcat? I am not sure whether this is applicable to your websocket implementation.
I'm making multiple requests (10) at the same time using the HTTP GET method. The calling method creates a different thread for each request (like Thread 1, Thread 2, ...).
Caller Method:
for (int i = 0; i < 10; i++) {
    Thread.currentThread().getId();
    HttpClient httpClient = new HttpClient(url);
    res = httpClient.get(5000);
}
The requests then hit the application entry point. The entry point creates a new thread for each request (like Thread 11, Thread 12, ...).
public void DoProcess() {
    Thread.currentThread().getId(); // a new thread handles each request
    // ...
}
But I want to know which caller thread's request created which application thread.
For example: Thread 1 belongs to Thread 11
Thread 2 belongs to Thread 12
Please let me know how to achieve this.
The client connects through TCP, so there is a client socket IP and port involved.
I don't know the HttpClient API by heart, but if there is a getClientPort() then you should be able to print out the time, thread name, and client IP+port. On the server, whatever accepts the socket will have the client IP and port too. If this is a servlet container, the servlet request has getRemoteAddr() and getRemotePort(). There too you can print out the time, IP+port, and thread name. If you pile those events into two tables, you should be able to join them on IP+port with a tolerance on client time vs. server time (try less than 2 seconds apart, assuming client and server are synchronized via NTP).
The other trivial way (but it changes the HTTP request) is to inject an HTTP header from the client into the request, stating the current thread name/TID, e.g. "my_custom_remote_thread_id: Thread-11". This way, on the server you can pull the request header to figure out the client thread name/TID.
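A hedged sketch of the header approach on the client side, using Java 11's java.net.http client (the header name follows the example above; the URL is whatever you already call):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaggingClient {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                // Tag the request with the calling thread's name so the server can log it.
                .header("my_custom_remote_thread_id", Thread.currentThread().getName())
                .GET()
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
On the server side, HttpServletRequest.getHeader("my_custom_remote_thread_id") gives you the caller's thread name, which you can print next to Thread.currentThread().getName() to build the Thread 1 -> Thread 11 mapping.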
I am using a Camel Proxy which returns a result from another process.
public interface DataProcessingInterface {
    public List<ResponseData> processPreview(ClientData criteria, Config config);
}
And it is configured like this:
<camel:proxy
id="processPreviewProxy"
serviceInterface="model.jms.DataProcessingInterface"
serviceUrl="jms:queue:processPreview"/>
But sometimes the other process takes a long time to return the results, and I am getting the timeout exception:
TemporaryQueueReplyManager - Timeout occurred after 20000 millis waiting for reply message with correlationID [Camel-ID-PC01-2661-1367403764103-0-15]. Setting ExchangeTimedOutException on (MessageId: ID-PC01-2661-1367403764103-0-17 on ExchangeId: ID-PC01-2661-1367403764103-0-16) and continue routing.
How do I tell Camel to wait until the response is ready? It should wait forever if that is how long it takes. The client is managed in a different thread, so the duration will not affect the client.
Also, is it possible to re-establish the connection if the TimeoutException is thrown, so I can continue to wait?
"Forever"? No. You can't wait forever.
A (synchronous) request/reply typically has a timeout value set for a reason. If you don't get a reply within a given time, then try again or skip it. In the JMS case, you set both requestTimeout and timeToLive to achieve this. Read this section. In Camel, you can achieve such things with redelivery and error handlers.
Anyway, even if you set the value to "forever" (or at least something very long, such as multiple hours), a server/application restart would still make the request fail.
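A hedged sketch of what the redelivery approach could look like in a Camel RouteBuilder; the route, timeout values, and class name are illustrative:
import org.apache.camel.builder.RouteBuilder;

public class ProcessPreviewRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Retry a failed request/reply a few times instead of waiting forever.
        errorHandler(defaultErrorHandler()
                .maximumRedeliveries(3)
                .redeliveryDelay(5000));

        from("direct:processPreview")
            // 5-minute request timeout, with a matching timeToLive so stale requests expire.
            .to("jms:queue:processPreview?requestTimeout=300000&timeToLive=300000");
    }
}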
You can set a very high request timeout on the JMS endpoint:
jms:queue:processPreview?requestTimeout=xxxx