So I am new to metrics and Micrometer. I have followed this tutorial in which we set up some basic Meters like a Counter and a Gauge and expose the metrics. I am able to see the metrics when I hit the endpoint /actuator/prometheus, and I can see my custom meters there.
So now I am trying to expose the metrics to Datadog. I have imported the following dependency:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-datadog</artifactId>
    <version>1.8.5</version>
</dependency>
and I also have this in my application.properties file:
management.endpoints.web.exposure.include=*
management.metrics.export.datadog.apiKey=123
I am aware I have not included any URL to Datadog or anything of that sort, but I was under the impression that I could simply see the metrics I'm collecting via the actuator endpoint by accessing something like /actuator/datadog. Is my understanding correct? I essentially want to see the metrics I'm collecting before sending them out to Datadog. Is this possible?
No, you can't see any metrics under /actuator/datadog since the metrics are pushed rather than pulled.
The usual approach in a Spring Boot application together with Datadog is to send data out from the app over UDP, using the StatsD protocol for the message structure. You can achieve this by adding micrometer-registry-statsd to your dependencies, which will auto-configure the app.
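For reference, a minimal sketch of that Maven dependency (the version shown is an assumption; align it with the Micrometer version you already use):

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-statsd</artifactId>
    <version>1.8.5</version>
</dependency>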
Some config for Datadog:
management:
  metrics:
    export:
      statsd:
        enabled: true
        flavor: datadog
      defaults:
        enabled: true
      datadog:
        api-key: 123456
How can you inspect the metrics before they are sent to Datadog?
One way of doing this during development is to inspect the UDP messages, so basically spin up a UDP server on localhost and port 8125 (these are the default values but can be overridden).
I was in the same situation some time ago and wrote my own UDP server; you can see it here: https://github.com/hcgoranson/UDP-server
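If you just want something quick and local, here is a minimal sketch of such a UDP listener in plain Java (the class name is made up; port 8125 is the StatsD default mentioned above):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class StatsdDebugServer {
    public static void main(String[] args) throws Exception {
        // Bind to the default StatsD port; change it if you overrode
        // management.metrics.export.statsd.port in your application.
        try (DatagramSocket socket = new DatagramSocket(8125)) {
            byte[] buffer = new byte[8192];
            System.out.println("Listening for StatsD datagrams on udp/8125 ...");
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                // Each datagram contains one or more StatsD lines; print them as-is.
                String payload = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                System.out.println(payload);
            }
        }
    }
}

Run it before starting your application and you should see the raw StatsD lines printed as the registry flushes.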
Micrometer exposing actuator metrics to set request/limit to pods in K8s vs metrics-server vs kube-state-metrics -> K8s Mixin from kube-prometheus-stack Grafana dashboard
It's really blurry and frustrating to me why there is such a big difference between the values from the three sources in the title, how one should utilize the K8s Mixin to set proper requests/limits, and whether that is expected at all.
I was hoping to simply see the same data that I get from kubectl top podname --containers when I open the K8s -> Compute Resources -> Pods dashboard in Grafana. But not only do the values differ by more than double, the values reported by the actuator differ from both.
When exposing Spring data with Micrometer, the sum of jvm_memory_used_bytes corresponds more closely to what I get from metrics-server (0.37.0) than to what I see on Grafana from the mixin dashboards, but it is still far off.
I am using K8s: 1.14.3 on Ubuntu 18.04 LTS managed by kubespray.
kube-prometheus-stack 9.4.4 installed with helm 2.14.3.
Spring Boot 2.0 with Micrometer. I saw the explanation on the metrics-server GitHub that this is the value the kubelet uses for OOMKill, but again this is not helpful at all: what should I do with the dashboard? What is the way to handle this?
Based on what I see so far, I have found the root cause: I renamed the kubelet service from the old chart to the new one so that it can be targeted by the ServiceMonitors. So for me the best solution is the Grafana kube-state-metrics dashboard plus comparing with what I see in the JVM dashboard.
We have the following Spring setup:
Our application is running on port 80, but our management.server.port is set to 8081, and we already use multiple checks of the management endpoints from this secured port.
server.port=80
management.server.port=8081
management.endpoints.web.exposure.include=*
With these settings we can hide any sensitive information from the public interface on port 80.
But now our requirements have changed: we need to display the version of our application on the public interface. This information is part of the info endpoint of our management server at /actuator/info.
Is it possible to move only the info endpoint to port 80, and let all other management.server endpoints still on 8081?
Or is there any other suitable solution for our requirement to only open the info endpoint for external calls.
We prefer to not change any firewall setting: so one port is public, and the other is internal only
No, you can't move only one endpoint to a different port.
Think of the actuator as an application that runs on one specific port (8081 in this case) and exposes a bunch of services, so it's all-or-nothing from this standpoint.
So you'll have to create a special REST controller that reads the file (or keeps the data in memory) just like the info endpoint does.
It's pretty straightforward code actually: it reads a file that is available in the Spring Boot artifact anyway and exposes its content.
You can check out the source code of the actuator's info endpoint here
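As a rough illustration, here is a sketch of such a controller based on Spring Boot's BuildProperties (the class name and the /version path are made up; BuildProperties is only auto-configured once build-info.properties has been generated, e.g. by the spring-boot-maven-plugin build-info goal):

import org.springframework.boot.info.BuildProperties;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Served on server.port (80), independent of management.server.port (8081).
@RestController
public class VersionController {

    private final BuildProperties buildProperties;

    public VersionController(BuildProperties buildProperties) {
        this.buildProperties = buildProperties;
    }

    // Exposes only the build version, so no other actuator data leaks to the public port.
    @GetMapping("/version")
    public String version() {
        return buildProperties.getVersion();
    }
}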
I have spring boot powered microservices deployed in my local kubernetes cluster. The microservices are using micrometer and prometheus registry but due to our company policy the actuator is available on another port:
8080 for "business" http requests
8081/manage for actuator. So, I can access http://host:8081/manage/prometheus and see the metrics when running the process locally (without kubernetes).
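For context, that split between 8080 and 8081/manage typically comes from actuator settings along these lines; the question doesn't show its configuration, so the exact values below are assumptions:

management.server.port=8081
management.endpoints.web.base-path=/manage
management.endpoints.web.exposure.include=health,info,prometheus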
Now, I'm a beginner in Prometheus and have a rather limited knowledge in kubernetes (I'm coming with a Java developer background).
I've created a pod with my application and successfully run it in Kubernetes. It works and I can access it (for 8080 I've created a service to map the port) and I can execute "business"-level HTTP requests against it from the same PC.
But I haven't found any examples of adding Prometheus into the picture. Prometheus is supposed to be deployed in the same Kubernetes cluster, just as another pod. So I've started with:
FROM #docker.registry.address#/prom/prometheus:v2.15.2
COPY entrypoint.sh /
USER root
RUN chmod 755 /entrypoint.sh
ADD ./prometheus.yml /etc/prometheus/
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh looks like:
#!/bin/sh
echo "About to run prometheus"
/bin/prometheus --config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/prometheus \
--storage.tsdb.retention.time=3d \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.console.templates=/etc/prometheus/consoles
My question is about how exactly I should define prometheus.yml so that it will get the metrics from my spring boot pod (and other microservices that I have, all spring boot driven with the same actuator setup).
I've started with (prometheus.yml):
global:
  scrape_interval: 10s
  evaluation_interval: 10s
scrape_configs:
  - job_name: 'prometheus'
    metrics_path: /manage/prometheus
    kubernetes_sd_configs:
      - role: pod
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: sample-pod-app(.*)|another-pod-app(.*)
But apparently it doesn't work, so I'm asking for advice:
If someone has a working example it would be best :)
Intuitively I understand that I need to specify the port mapping for my 8081 port, but I don't know exactly how.
Since the metrics are served on another port, am I supposed to expose a Kubernetes service for port 8081 at the Kubernetes level as well?
Do I need any security-related resources to be defined in Kubernetes?
As a side note: at this point I don't care about scalability issues, I believe one Prometheus server will do the job, but I'll have to add Grafana into the picture.
Rather than hardcoding it in the Prometheus config, you need to make use of annotations on your pods to tell Prometheus which pods, which path and which port it should scrape.
prometheus.io/scrape: "true"
prometheus.io/path: "/manage/prometheus"
prometheus.io/port: "8081"
prometheus.io/scheme: "http"
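Note that these prometheus.io/* annotations are a community convention, not something Prometheus understands on its own: your scrape config needs relabel rules that read them. A sketch of such rules, which would replace the keep-by-label rule shown earlier (adapt to your setup):

relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__

The last rule rewrites the scrape address to use the port from the annotation (8081 here), which also addresses the port-mapping question above.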
Spring boot micrometer example with Prometheus on kubernetes.
Prometheus deployment guide.
In order to let Prometheus collect metrics from your Spring Boot application you need to add specific dependencies to it. Here you can find a guide showing how to do it: Spring Boot metrics monitoring using Prometheus & Grafana. Here is an example:
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_spring_boot</artifactId>
    <version>0.1.0</version>
</dependency>
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_hotspot</artifactId>
    <version>0.1.0</version>
</dependency>
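Since the question already uses Micrometer, the Micrometer-native alternative would be the Prometheus registry, which the Spring Boot actuator exposes as the prometheus endpoint (in this question at /manage/prometheus). The version below is an assumption, matching the Micrometer version used earlier on this page:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>1.8.5</version>
</dependency>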
If you would like to use a slightly different strategy you can also check out this one: Monitoring Spring Boot applications with Prometheus and Grafana:
In order to compare the performance of different JDKs for reactive Spring Boot services, I made a setup in which a Spring Boot application is wrapped in a Docker container. This makes it easy to create different containers for different JDKs with the same Spring Boot application running in it. The Spring Boot application exposes metrics to Prometheus. Grafana can read these metrics and allows to make nice visualizations from it. This blog post describes a setup to get you up and running in minutes.
Please let me know if that helped.
I have a Spring Boot 2 app that uses Spring Data Couchbase.
I see this message in the logs every minute:
2019-11-12 13:48:48,924 WARN : gid: trace= span= [cb-orphan-1] c.c.c.c.t.DefaultOrphanResponseReporter Orphan responses observed: [{"top":[{"r":"10.120.93.220:8092","s":"view","c":"5BE128F6F96A4D28/FFFFFFFFDA2C8C52","l":"10.125.216.233:49893"}],"service":"view","count":1}]
That is from the new Response Time Observability feature underlying the Java SDK.
It would seem to indicate that you have view requests which are timing out but are eventually received later; however, I have no views defined in the Couchbase DB.
I would like to know if it is possible to disable the OrphanResponseLogReporter via YML file config in a Spring Boot app, e.g. by setting logIntervalNanos to 0.
No, unfortunately, you cannot do it. Only a subset of Couchbase's configuration properties is supported in the application.yml, namely the ones present in the CouchbaseProperties.java class.
You could, however, use an environment variable: com.couchbase.orphanResponseReportingEnabled=false. It is independent of Spring; it's read directly by the Couchbase SDK.
Edit:
As a workaround, you can set logging level in the application.yml:
logging.level.com.couchbase.client.core.tracing.DefaultOrphanResponseReporter: ERROR
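For readability, the equivalent nested YAML form of that workaround would be:

logging:
  level:
    com.couchbase.client.core.tracing.DefaultOrphanResponseReporter: ERROR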
For some reason Zipkin is using the Consul discovery name instead of the base spring.application.name property.
spring:
  consul:
    discovery:
      prefer-ip-address: true
      instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
But I want it to use the non-randomized application name (so myservice instead of myservice-67gg8d368).
If I set the Zipkin property zipkin.service.name then Consul throws errors saying it cannot find the service.
I'm unsure why the two are even sharing properties instead of just adhering to their own. I'd like the service to use its base application name, because otherwise Zipkin is hard to use: it lists every new container as a completely new service, making it very difficult to see over time how code changes have affected timing.
UPDATE:
This is the error I get in my logs if I set the zipkin.service.name
[o.s.c.c.d.ConsulDiscoveryClient] : Unable to locate service in consul agent: my-service-91828f2f88f18c3fadf193bfa3ad6d1f