I have been trying to understand the difference between apiCallAttemptTimeout and apiCallTimeout. What I could gather is that apiCallTimeout is the total time the client waits for a response before giving up, whereas apiCallAttemptTimeout also includes the time spent on retries in addition to the time of the first attempt.
So does this mean that apiCallAttemptTimeout will always be greater than apiCallTimeout? Example: suppose I set apiCallTimeout to 1000 ms and want a 300 ms timeout for a single retry. Would the values then be apiCallTimeout = 1000 ms and apiCallAttemptTimeout = 1300 ms? The API docs don't seem to help here.
apiCallAttemptTimeout and apiCallTimeout
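For reference, the AWS SDK for Java v2 documentation defines these the other way around: apiCallAttemptTimeout bounds a single HTTP attempt, while apiCallTimeout bounds the entire call including all retries, so the attempt timeout is normally the smaller value. A plain-Java sketch of that budget model, with no SDK dependency (the helper name is made up for illustration):

```java
public class TimeoutBudget {
    // Upper bound on how many attempts can run to their full
    // per-attempt timeout before the overall call timeout fires
    // (ignoring backoff delays between retries).
    static int maxFullAttempts(long apiCallTimeoutMs, long apiCallAttemptTimeoutMs) {
        return (int) (apiCallTimeoutMs / apiCallAttemptTimeoutMs);
    }

    public static void main(String[] args) {
        // apiCallTimeout = 1000 ms, apiCallAttemptTimeout = 300 ms:
        // each attempt may take up to 300 ms, and the SDK gives up
        // entirely once the 1000 ms overall budget is spent.
        System.out.println(maxFullAttempts(1000, 300)); // prints 3
    }
}
```

Under those semantics, setting apiCallTimeout = 1000 ms and apiCallAttemptTimeout = 300 ms means up to three full attempts fit inside the overall budget; the attempt timeout does not grow to 1300 ms.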
Hi there, I am trying to solve a challenge with Flux.
My API calls a third-party API that is very slow. I want to make sure that I call that API as few times as possible, so I want to queue the query parameters. When that queue is full, or a certain expiration time is reached, I make a single request to the slow API.
Example:
Request 1: GET localhost:8080/shipping?q=BR,CN,NL
Request 2: GET localhost:8080/shipping?q=LU,CA
Suppose my queue has size 5 and a timeout of 5 seconds. When the second request arrives, my queue is full and I call the third-party API. With the results, I want to respond to each request with the right response.
Response 1:
{
"BR" : 21,
"CN" : 33,
"NL" : 5
}
Response 2:
{
"LU" : 1,
"CA" : 2
}
How would I keep track of the requests asynchronously here? How do I make sure that each request only gets back what it asked for?
I would have a WebClient doing a request like:
GET slowapi.com/shipping?q=BR,CN,NL,LU,CA
And splitting the results into two different responses.
P.S.
I probably need to implement this using buffer or window; any experts in this area?
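In Reactor itself the operator for this is Flux#bufferTimeout(maxSize, duration), which emits a batch on whichever of size or time is reached first. To show the mechanics without a Reactor dependency, here is a plain-Java sketch of the queue-then-flush idea; everything below, including the dummy rates, is made up for illustration:

```java
import java.util.*;
import java.util.concurrent.*;

public class ShippingBatcher {
    record Pending(List<String> codes, CompletableFuture<Map<String, Integer>> reply) {}

    private final int maxSize;
    private final long timeoutMs;
    private final List<Pending> buffer = new ArrayList<>();
    private int queuedCodes = 0;
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r); t.setDaemon(true); return t;
        });
    private ScheduledFuture<?> deadline;

    ShippingBatcher(int maxSize, long timeoutMs) {
        this.maxSize = maxSize;
        this.timeoutMs = timeoutMs;
    }

    // Each incoming request enqueues its codes and gets a future that
    // will eventually hold only the entries it asked for.
    synchronized CompletableFuture<Map<String, Integer>> submit(List<String> codes) {
        CompletableFuture<Map<String, Integer>> reply = new CompletableFuture<>();
        buffer.add(new Pending(codes, reply));
        queuedCodes += codes.size();
        if (buffer.size() == 1)  // first entry arms the expiration timer
            deadline = timer.schedule(this::flush, timeoutMs, TimeUnit.MILLISECONDS);
        if (queuedCodes >= maxSize) flush();
        return reply;
    }

    // One call to the slow API with the union of all queued codes,
    // then split the combined result back per request.
    synchronized void flush() {
        if (buffer.isEmpty()) return;
        deadline.cancel(false);
        List<Pending> batch = new ArrayList<>(buffer);
        buffer.clear();
        queuedCodes = 0;
        Set<String> union = new LinkedHashSet<>();
        batch.forEach(p -> union.addAll(p.codes()));
        Map<String, Integer> rates = callSlowApi(union);
        for (Pending p : batch) {
            Map<String, Integer> own = new LinkedHashMap<>();
            p.codes().forEach(c -> own.put(c, rates.get(c)));
            p.reply().complete(own);
        }
    }

    // Stand-in for GET slowapi.com/shipping?q=... (rates are made up).
    static Map<String, Integer> callSlowApi(Set<String> codes) {
        Map<String, Integer> out = new LinkedHashMap<>();
        for (String c : codes) out.put(c, c.length()); // dummy rate
        return out;
    }
}
```

With a batch size of 5, the two example requests above trigger exactly one flush: the first submit stays pending until the second one fills the batch, and each caller's future completes with only its own codes.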
You want to do what people do with long polling.
Long polling is a method that server applications use to hold a client connection until information becomes available.
See long polling using deferred result for more details.
Having said that, this doesn't seem to be a very good (or user-friendly) design for a synchronous endpoint. If the API is slow, you should consider making this an asynchronous request (with polling/websockets, or with a callback/status endpoint in the response).
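As a rough sketch of what "holding the connection" means, in plain Java (Spring's DeferredResult packages this same pattern for MVC controllers; the names below are made up):

```java
import java.util.concurrent.*;

public class LongPollSketch {
    // Hold the caller until data arrives or the poll window expires;
    // on expiry the client is expected to simply poll again.
    static String poll(CompletableFuture<String> pending, long timeoutMs) {
        try {
            return pending.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "no-update";
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }
}
```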
I am calling an external REST API service from my Java code. To prevent overloading that external API, I need to ensure that no more than N calls are made to the service S per second, and I need to make sure that not a single function call is dropped.
My function looks like this:
public void callXXXAPI(JSONObject message) {
    formatData(message);
    // GET call to the external REST API
    JSONObject response = getDataFromExternalAPI(message);
    continueNextProcess(response);
}
This method callXXXAPI will be called asynchronously from a Kafka consumer.
I need to control the number of times the external API gets called.
That is, assume messages are being consumed at high speed in Kafka (1,000,000 messages per second), but I need to call the external service only 1000 (configurable) times per second, and I do not want to reject the other messages.
I cannot solve the problem with Google Guava's RateLimiter, because when more messages are consumed, more and more threads will be waiting for the next second and it will overflow. Should I use an external JMS delay queue to solve this? Is there a better solution or architecture for this problem?
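One semaphore-based sketch of a limiter that blocks instead of rejecting (all names here are made up): N permits are handed out per second, and a caller past the budget waits for the next refill, which pushes backpressure up to the Kafka consumer rather than dropping messages. Note that this still parks threads while they wait, so in practice you would also bound the consumer's in-flight work, e.g. by pausing the poll loop:

```java
import java.util.concurrent.*;

public class BlockingRateLimiter {
    private final Semaphore permits;

    public BlockingRateLimiter(int perSecond) {
        this.permits = new Semaphore(perSecond);
        ScheduledExecutorService refill = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        // Once per second, top the bucket back up to its budget.
        refill.scheduleAtFixedRate(
            () -> permits.release(perSecond - permits.availablePermits()),
            1, 1, TimeUnit.SECONDS);
    }

    // Blocks until a permit is free; no call is ever rejected.
    public void acquire() {
        permits.acquireUninterruptibly();
    }
}
```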
I'm not sure why this is happening, and I've already searched the internet for it, but I can't find the answer I'm looking for.
Basically, this happens when I try to send a request while Wi-Fi is off and mobile data is on but there is no actual data connectivity. It takes 2 minutes for the exception to be thrown, and I want to know the proper reason why. The timeouts are these:
urlconn.setConnectTimeout(60000);
urlconn.setReadTimeout(60000);
Does this mean that both timeouts occur, and that's why it took 2 minutes, or are there other reasons I'm not aware of?
Note: I can only post a code snippet due to confidentiality reasons.
Both of them are occurring. There's no data, so the connection attempt fails: that's one minute. Then there's nothing to read from a stream that doesn't exist because there is no connection: that's another minute.
I have made a connection to the WeChat API using HttpURLConnection, set connectTimeout to 500 ms, and got the response in 3 seconds. Now I have decreased connectTimeout to 100 ms and am getting the response in 2 seconds. I am not able to understand the reason behind this; I have looked at the code and the Javadoc but found nothing related to it.
You are confused. The connection timeout is not the maximum time to read the response; it is the maximum time to create the connection. Sending the request and reading the response come afterwards.
If you got a response, clearly the connection succeeded within your timeout period, but it then took some seconds to read the response. These are not the same thing.
Possibly you may want to set a read timeout as well, or instead.
But the timeouts you're setting are ridiculously short. Three seconds for a connect timeout and ten seconds for a read timeout would be about the minimum.
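A small local demonstration of the difference, using only the JDK (the sleepy server below is a stand-in for the WeChat endpoint): the TCP connection is accepted immediately, so even a 100 ms connectTimeout passes, yet the call still takes about a second because the response body is slow to arrive:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URI;

public class ConnectVsRead {
    // Times a GET against a local server whose handler stalls for one
    // second before responding; returns the elapsed wall time in ms.
    public static long timeRequest(int connectTimeoutMs, int readTimeoutMs) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) URI.create(
                "http://localhost:" + server.getAddress().getPort() + "/").toURL().openConnection();
            conn.setConnectTimeout(connectTimeoutMs); // bounds TCP connect only
            conn.setReadTimeout(readTimeoutMs);       // bounds each blocking read
            try (InputStream in = conn.getInputStream()) {
                in.readAllBytes();
            }
            return (System.nanoTime() - start) / 1_000_000;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        // Connect succeeds well inside 100 ms, but the overall call is
        // governed by the read side, not by connectTimeout.
        System.out.println(timeRequest(100, 5000) + " ms");
    }
}
```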
I have one method, execute(data), which takes considerable time depending on the data (say 10 or 20 seconds), and it has a timeout feature with a 30-second default. I want to test that method. One way of doing it is to collect enough data that processing lasts more than 30 seconds and then see whether I get a timeout exception. Another way is to use threads: run the method for some milliseconds and then put the thread on wait until the timeout exception occurs, or make it last for some seconds. Can anyone please suggest how I can achieve that?
You should walk through the Java Threads Tutorial (Concurrency). Any answer on Stack Overflow would need to be really long to help you here, and the Threads/Concurrency tutorials already cover this well.
http://docs.oracle.com/javase/tutorial/essential/concurrency/
You could use
Thread.sleep( millis );
to put the thread to sleep for the required time.
Or, you could put your data processing code into a loop, so that it processes it multiple times. This would recreate the scenario of the thread actually processing data for longer than 30 seconds.
Or, you could test your code with a shorter timeout value.
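To make the threads idea concrete, here is a sketch (every name is made up) where a stand-in for execute(data) just sleeps, and the timeout is shortened so the test does not need 30 seconds of real data:

```java
import java.util.concurrent.*;

public class TimeoutProbe {
    // Runs the task on a worker thread and reports whether it failed
    // to finish within timeoutMs.
    static boolean timesOut(Runnable task, long timeoutMs) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            pool.submit(task).get(timeoutMs, TimeUnit.MILLISECONDS);
            return false;
        } catch (TimeoutException e) {
            return true;
        } catch (ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for execute(data): sleeping simulates slow processing.
        Runnable slowExecute = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        };
        System.out.println(timesOut(slowExecute, 50));   // task outlasts the timeout
        System.out.println(timesOut(slowExecute, 1000)); // task finishes in time
    }
}
```

The same harness works against the real execute(data) by passing `() -> execute(data)` and the production timeout value.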