I'm updating a microservice to Spring Boot 2 and migrating metrics from Dropwizard to Micrometer. We are using Prometheus to store metrics and Grafana to display them. I want to measure requests per second to all URLs. The Micrometer documentation states that:
Timers are intended for measuring short-duration latencies, and the frequency of such events.
So timers seem to be the right tool for the job:
// Start timing (only when metrics collection is enabled)
Timer.Sample sample = log ? Timer.start(registry) : null;
// ...code which executes the request...
List<Tag> tags = Arrays.asList(
        Tag.of("status", status),
        Tag.of("uri", uri),
        Tag.of("method", request.getMethod()));
Timer timer = Timer.builder(TIMER_REST)
        .tags(tags)
        .publishPercentiles(0.95, 0.99)
        .distributionStatisticExpiry(Duration.ofSeconds(30))
        .register(registry);
if (sample != null) {
    sample.stop(timer); // record the elapsed time against the timer
}
But it doesn't produce any rate per second; instead we get metrics similar to:
# TYPE timer_rest_seconds summary
timer_rest_seconds{method="GET",status="200",uri="/test",quantile="0.95",} 0.620756992
timer_rest_seconds{method="GET",status="200",uri="/test",quantile="0.99",} 0.620756992
timer_rest_seconds_count{method="GET",status="200",uri="/test",} 7.0
timer_rest_seconds_sum{method="GET",status="200",uri="/test",} 3.656080641
# HELP timer_rest_seconds_max
# TYPE timer_rest_seconds_max gauge
timer_rest_seconds_max{method="GET",status="200",uri="/test",} 0.605290436
What would be the proper way to solve this? Should the rate per second be calculated via Prometheus queries, or returned via a Spring Actuator endpoint?
Prometheus provides a functional query language called PromQL (Prometheus Query Language) that lets the user select and aggregate time-series data in real time. You can use the rate() function:
The following example expression returns the per-second rate of HTTP requests as measured over the last 5 minutes, per time series in the range vector:
rate(http_requests_total{job="api-server"}[5m])
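For the timer shown in the question, the per-second request rate can be derived from the timer_rest_seconds_count series in the exposition output above, since that counter increments once per recorded request. For example (metric and label values taken from the question's own output):

rate(timer_rest_seconds_count{uri="/test",method="GET",status="200"}[5m])

Grafana can graph this expression directly, so the rate is computed at query time by Prometheus rather than exported by the Actuator endpoint.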
Hi, I'm trying to use the Elasticsearch reindex API via the Rest High Level Client, and am comparing two ways of doing it.
Rest API:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#docs-reindex-task-api
[![Rest API Documentation screenshot][1]][1]
Running reindex asynchronously - If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Rest High Level Client:
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-document-reindex.html#java-rest-high-document-reindex-task-submission
[![rest high level client Documentation screenshot][2]][2]
Reindex task submission - It is also possible to submit a ReindexRequest and not wait for it completion with the use of Task API. This is an equivalent of a REST request with wait_for_completion flag set to false.
I'm trying to figure out the following: from the REST API doc I know that I should delete the task document so Elasticsearch can reclaim the space. Since the Rest High Level Client is basically doing the same thing, do I need to "delete the task document" if I choose to use this client instead of the REST API? If so, how can I do that?
Thanks
[1]: https://i.stack.imgur.com/OEVHi.png
[2]: https://i.stack.imgur.com/sw9Dw.png
The task document is just a summary of what happened during the reindex (so a small document). Since you asked to run it asynchronously with wait_for_completion=false, it will be created in the system index .tasks, so you can query this index like any other to find the summary and delete it.
The .tasks index will not be accessible by default in future versions of Elasticsearch, and you will need to use the task-specific functions tied to _tasks in the Java REST API, available here.
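A minimal sketch of the round trip, assuming a 7.x RestHighLevelClient already built as client, with illustrative index names (source-index, dest-index):

import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.tasks.TaskSubmissionResponse;
import org.elasticsearch.index.reindex.ReindexRequest;

ReindexRequest reindexRequest = new ReindexRequest()
        .setSourceIndices("source-index")
        .setDestIndex("dest-index");

// Submit without waiting for completion (equivalent of wait_for_completion=false)
TaskSubmissionResponse submission =
        client.submitReindexTask(reindexRequest, RequestOptions.DEFAULT);
String taskId = submission.getTask(); // "nodeId:taskNumber"

// ...later, once the task has finished and you no longer need its summary,
// delete the task document so Elasticsearch can reclaim the space:
client.delete(new DeleteRequest(".tasks", taskId), RequestOptions.DEFAULT);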
I am developing a REST API using Spring Boot.
This REST API will be deployed on a WebLogic application server.
I am using Java 8.
I am in the design phase, so I do not have any demonstrable code yet.
The API will receive a request payload (to approve orders).
This request payload can contain anywhere from 1 to 100,000 order IDs.
For each of these order IDs I need to call a stored procedure in Oracle to approve the order.
The stored procedure has the business logic, and it could take anywhere from a few seconds to minutes to respond.
To avoid keeping the end user waiting, I am planning to implement this using DeferredResult from Spring.
To achieve this, I will spawn off the database call in a new thread using:
ForkJoinPool.commonPool().submit(() -> {
    // do the database call here
});
However, I am not clear on how to control the number of threads spawned. Should I be worried about subsequent requests (which I have no control over), which could also lead to more threads being spawned?
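One way to bound the thread count regardless of how many requests arrive is to submit the database calls to a dedicated fixed-size pool instead of the shared common pool. A minimal sketch, with illustrative names (dbPool, approveOrder) and an arbitrarily chosen pool size of 10:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.web.context.request.async.DeferredResult;

// One bounded pool shared by all requests: at most 10 database calls run
// concurrently; further tasks wait in the pool's queue instead of
// spawning new threads.
private final ExecutorService dbPool = Executors.newFixedThreadPool(10);

public DeferredResult<String> approveOrder(long orderId) {
    DeferredResult<String> result = new DeferredResult<>();
    dbPool.submit(() -> {
        // call the Oracle stored procedure for this order id here
        result.setResult("APPROVED");
    });
    return result;
}

Because every request shares the same pool, subsequent requests queue work rather than adding threads.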
I am working with AWS Kinesis and CloudWatch. How can I fetch many metrics for one shard with one request? This is how I get one metric:
GetMetricStatisticsRequest request = new GetMetricStatisticsRequest();
request.withNamespace(namespace)
        .withDimensions(dimensions)
        .withPeriod(duration)
        .withStatistics(statistic)
        .withMetricName(metricName)
        .withStartTime(startTime)
        .withEndTime(endTime);
You can't fetch data for multiple metrics with one call to GetMetricStatistics.
The GetMetricStatistics API takes a metric name and a list of dimensions, which together define exactly one metric. To get data for multiple metrics, you'll have to make multiple GetMetricStatistics calls, as sketched below.
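A minimal sketch of that loop, reusing the request fields from the question, and assuming an AmazonCloudWatch client named cloudWatch plus an illustrative list of Kinesis metric names:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsResult;

List<String> metricNames =
        Arrays.asList("IncomingBytes", "IncomingRecords", "OutgoingBytes");
Map<String, GetMetricStatisticsResult> results = new HashMap<>();
for (String name : metricNames) {
    GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
            .withNamespace(namespace)
            .withDimensions(dimensions)
            .withPeriod(duration)
            .withStatistics(statistic)
            .withMetricName(name)   // only this field changes per call
            .withStartTime(startTime)
            .withEndTime(endTime);
    results.put(name, cloudWatch.getMetricStatistics(request));
}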
We are developing an application that uses the Google Cloud Datastore; an important detail: it's not a GAE application!
Everything works fine under normal usage. We designed a test that fetches over 30,000 records, but when we ran the test we got the following error:
java.net.SocketTimeoutException: Read timed out
We found that a timeout exception occurs after 30 seconds, which explains the error.
I have two questions:
Is there a way to increase this timeout?
Is it possible to use pagination to query the Datastore? We found that when you have a GAE application you can use cursors, but our application isn't one.
You can use cursors in the exact same way as a GAE app using Datastore. Take a look at this page for info.
In particular, the QueryResultBatch object has a getEndCursor() method, which you can then use when you reissue a query with setStartCursor(...). Here's a code snippet from the page above:
Query q = ...
if (response.getBatch().getMoreResults() == QueryResultBatch.MoreResultsType.NOT_FINISHED) {
    ByteString endCursor = response.getBatch().getEndCursor();
    q.setStartCursor(endCursor);
    // reissue the query to get more results...
}
You should definitely use cursors to ensure that you get all your results. The RPC has constraints in addition to time, such as the total RPC size, so you shouldn't depend on a single RPC answering your entire query.
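Putting it together, a pagination loop might look like this (a sketch only, assuming the protobuf-based Datastore client the snippet above appears to use; datastore and "MyKind" are illustrative):

Query.Builder query = Query.newBuilder();
query.addKindBuilder().setName("MyKind");

RunQueryResponse response;
do {
    // Run the (possibly cursor-adjusted) query and process this batch
    response = datastore.runQuery(
            RunQueryRequest.newBuilder().setQuery(query).build());
    // ...process response.getBatch().getEntityResultList() here...

    // Continue the next request from where this batch ended
    query.setStartCursor(response.getBatch().getEndCursor());
} while (response.getBatch().getMoreResults()
        == QueryResultBatch.MoreResultsType.NOT_FINISHED);

Each iteration is a separate RPC, so no single call has to return all 30,000 records within the 30-second limit.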
I am new to the Twitter API. I am using Java and Twitter4J for development.
The requirement is:
Set up a cron job which will fetch the home timeline tweets of users, filter the tweets based on some keywords, and store them in a database.
So I have used the getHomeTimeline() method of Twitter4J to fetch 100 tweets. Currently my cron job executes every minute to perform this task.
After some time I get the following error:
429:Returned in API v1.1 when a request cannot be served due to the application's rate
limit having been exhausted for the resource. See Rate Limiting in API v1.1.(https://dev.twitter.com/docs/rate-limiting/1.1)
message - Rate limit exceeded
code - 88
Relevant discussions can be found on the Internet at:
http://www.google.co.jp/search?q=e5488403 or
http://www.google.co.jp/search?q=0a410619
TwitterException{exceptionCode=[e5488403-0a410619], statusCode=429, message=Rate limit exceeded, code=88, retryAfter=-1, rateLimitStatus=RateLimitStatusJSONImpl{remaining=0, limit=15, resetTimeInSeconds=1375947321, secondsUntilReset=1137}, version=3.0.3}
As per the error message, my application's rate limit is exhausted.
But when I get the rate limit of my application using the code below:
RateLimitStatus rateLimit = twitter.getRateLimitStatus("application").get("/application/rate_limit_status");
System.out.println("Application Remaining RateLimit ="+rateLimit.getRemaining());
This displays that the remaining limit is around 150. However, if you look at the last line of the exception above, it shows limit=15.
In one of the Twitter documents I read that the application rate limit is 180 requests per 15 minutes.
Please guide me to resolve this error.
If I fetch 100 results, does it count as a single hit or more than one?
Does Twitter4J use the Streaming API or the REST API?
What would be a better approach for this requirement?
Please help
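One detail worth checking (an inference from the exception, not something stated above): rate limits are tracked per endpoint family, and the limit=15 in the exception matches the statuses family rather than the application family queried above. With Twitter4J you can inspect the limit for the home timeline endpoint specifically, reusing the twitter instance from the question:

import java.util.Map;
import twitter4j.RateLimitStatus;

Map<String, RateLimitStatus> status = twitter.getRateLimitStatus("statuses");
RateLimitStatus homeTimeline = status.get("/statuses/home_timeline");
System.out.println("Remaining = " + homeTimeline.getRemaining()
        + " of " + homeTimeline.getLimit()
        + ", resets in " + homeTimeline.getSecondsUntilReset() + "s");

A cron job running every minute makes 60 calls per 15-minute window, which would exceed a 15-per-window endpoint limit regardless of how many results each call returns.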