Datadog java agent and autodiscovery - java

I need to monitor Java Spring Boot containers on Kubernetes.
I'll probably use the Helm-based installation process to deploy the agent on the nodes.
I'll probably use annotations on the pods to avoid managing configuration files.
I saw in the documentation that there is a JAR client that you can add to each pod to monitor the containers.
If I need to monitor a Spring Boot application, do I have to install both the Datadog agent on the nodes and the Datadog agent in the pods to reach Spring Boot,
OR can the Datadog agent on the nodes monitor a Spring Boot application running in a pod using only annotations and environment variables?

Datadog comes with a Deployment and a DaemonSet:
Cluster Agent (for Kubernetes metrics): Deployment
node Agent (for tracing and logs): DaemonSet
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install <RELEASE_NAME> -f values.yaml --set datadog.apiKey=<DATADOG_API_KEY> datadog/datadog --set targetSystem=<TARGET_SYSTEM>
This chart adds the Datadog Agent to all nodes in your cluster with a DaemonSet. It also optionally deploys the kube-state-metrics chart and uses it as an additional source of metrics about the cluster. A few minutes after installation, Datadog begins to report hosts and metrics.
Logs:
For logs and APM you need some extra configuration in values.yaml:
datadog:
  logs:
    enabled: true
    containerCollectAll: true
data-k8-logs-collection
Once everything is done, it's time to add autodiscovery.
Again, no need to install anything for autodiscovery, unless you need APM (profiling).
All you need to add are the following pod annotations:
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.check_names: |
  ["openmetrics"]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.init_configs: |
  [{}]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.instances: |
  [
    {
      "prometheus_url": "http://%%host%%:5000/internal/metrics",
      "namespace": "my_springboot_app",
      "metrics": [ "*" ]
    }
  ]
Replace 5000 with the port the container is listening on. Again, this is only required to push Prometheus/OpenMetrics metrics to Datadog.
If you just need logs, no extra fancy stuff is needed: containerCollectAll: true is enough for log collection.
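These annotations go on the pod template, not on the Deployment object itself, and the container name in the annotation key has to match the container's name. A minimal placement sketch, assuming a hypothetical Deployment and container both named my-springboot-app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot-app
spec:
  selector:
    matchLabels:
      app: my-springboot-app
  template:
    metadata:
      labels:
        app: my-springboot-app
      annotations:
        # Same autodiscovery annotations as above, keyed by the container name
        ad.datadoghq.com/my-springboot-app.check_names: |
          ["openmetrics"]
        ad.datadoghq.com/my-springboot-app.init_configs: |
          [{}]
        ad.datadoghq.com/my-springboot-app.instances: |
          [
            {
              "prometheus_url": "http://%%host%%:5000/internal/metrics",
              "namespace": "my_springboot_app",
              "metrics": [ "*" ]
            }
          ]
    spec:
      containers:
        - name: my-springboot-app
          image: my-registry/my-springboot-app:latest
          ports:
            - containerPort: 5000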
APM
You need to add the Java agent; add this to the Dockerfile:
RUN wget --no-check-certificate -O /app/dd-java-agent.jar https://dtdg.co/latest-java-tracer
and then you need to update CMD so the agent can collect tracing/APM/profiling:
java -javaagent:/app/dd-java-agent.jar -Ddd.profiling.enabled=$DD_PROFILING_ENABLED -XX:FlightRecorderOptions=stackdepth=256 -Ddd.logs.injection=$DD_LOGS_INJECTION -Ddd.trace.sample.rate=$DD_TRACE_SAMPLE_RATE -Ddd.service=$DD_SERVICE -Ddd.env=$DD_ENV -J-server -Dhttp.port=5000 -jar sfdc-core.jar
trace_collection_java
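For the tracer inside the container to find the node Agent deployed by the DaemonSet, the application also needs to know where to send traces. A common pattern, stated here as an assumption rather than something from the original post, is to pass the node IP through the Downward API and reuse the env vars referenced in the CMD above:
# Fragment of the application container spec (values are illustrative)
env:
  - name: DD_AGENT_HOST            # where dd-java-agent sends traces: the node Agent's host IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: DD_PROFILING_ENABLED
    value: "true"
  - name: DD_LOGS_INJECTION
    value: "true"
  - name: DD_TRACE_SAMPLE_RATE
    value: "1"
  - name: DD_SERVICE
    value: "my_springboot_app"
  - name: DD_ENV
    value: "dev"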

do I have to install both the datadog agent on the nodes + the datadog agent on the pods to reach springboot
In order to get logs and metrics shipped to Datadog, the DaemonSet Datadog agent pods are sufficient to scrape the Spring Boot pods. With the OpenMetrics integration, for instance, just expose the metrics through a /metrics path.
In order to get traces, you need to use the Datadog Java tracing library and configure it; you can start by simply setting the DD_TRACE_ENABLED environment variable to true on the app containers.
Hope this helps

The Kubernetes Datadog Agent and Cluster Agent will give you specifics about the nodes and pods. Documentation: https://docs.datadoghq.com/infrastructure/livecontainers/configuration/?tab=helm
For app-specific metrics, Spring Boot metrics can be exported using the Netflix Spectator API; see the implementation at https://docs.spring.io/spring-metrics/docs/current/public/datadog
If you are using Dropwizard, see also: Spring boot metrics + datadog
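As a side note, on Spring Boot 2 Micrometer can also push metrics straight to Datadog through its Datadog registry; this is an alternative to the Spectator/spring-metrics approach above, sketched here under the assumption of Maven and Spring Boot 2.x property names:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-datadog</artifactId>
</dependency>
and in application.properties:
management.metrics.export.datadog.api-key=<DATADOG_API_KEY>
management.metrics.export.datadog.step=30s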

Related

How to configure Prometheus metrics exposure in Java code

I have configured metrics in Flink and exposed them to Prometheus, and it's working fine.
But I need to verify some metrics in my integration test, so I was trying to expose them to Prometheus via Java code.
I followed the approach mentioned in the link below (heading: Configuring Prometheus with Flink)
https://flink.apache.org/features/2019/03/11/prometheus-monitoring.html
and converted it to an inline Java config:
conf.setString("metrics.reporters", "prom");
conf.setString("metrics.reporter.prom.class", "org.apache.flink.metrics.prometheus.PrometheusReporter");
conf.setInteger("metrics.reporter.prom.port", 9999);
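For context, those settings only take effect if the (local) execution environment used by the integration test is created with that Configuration object; a minimal sketch under that assumption:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
conf.setString("metrics.reporters", "prom");
conf.setString("metrics.reporter.prom.class", "org.apache.flink.metrics.prometheus.PrometheusReporter");
conf.setInteger("metrics.reporter.prom.port", 9999);
// The PrometheusReporter then serves metrics over HTTP on port 9999 while the local job runs.
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1, conf);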
But how do I configure the prometheus.yml contents?
Can I set them on the same Flink conf object, as below?
conf.setString("global.scrape_interval", "15s");
conf.setString("scrape_configs", "[{job_name=name, static_configs=[{targets=[localhost:9999]}]}]");

micrometer exposing actuator metrics vs kube-state-metrics vs metrics-server to set pod request/limits

Micrometer exposing actuator metrics to set requests/limits for pods in K8s vs metrics-server vs kube-state-metrics -> K8s Mixin from the kube-prometheus-stack Grafana dashboard.
It's really blurry and frustrating to me to understand why there is such a big difference between the values from the three in the title, how one should use the K8s Mixin to set proper requests/limits, and whether that is expected at all.
I was hoping I could just see the same data I see when I type kubectl top podname --containers in the K8s -> Compute Resources -> Pods dashboard in Grafana. But not only do the values differ by more than double, the values reported from the actuator also differ from both.
When exposing Spring data with Micrometer, the sum of jvm_memory_used_bytes corresponds more to what I get from metrics-server (0.37.0) than to what I see in Grafana from the mixin dashboards, but it is still far off.
I am using K8s: 1.14.3 on Ubuntu 18.04 LTS managed by kubespray.
kube-prometheus-stack 9.4.4 installed with helm 2.14.3.
Spring Boot 2.0 with Micrometer. I saw the explanation on the metrics-server git repo that this is the value the kubelet uses for OOMKill, but again this is not helpful: what should I do with the dashboard? What is the way to handle this?
Based on what I have seen so far, I found the root cause: I renamed the kubelet service from the old chart to the new one so it can be targeted by the ServiceMonitors. So for me the best solution is Grafana with kube-state-metrics, plus comparing with what I see in the JVM dashboard.

Using Prometheus to monitor Spring Boot Applications in Kubernetes Cluster

I have Spring Boot powered microservices deployed in my local Kubernetes cluster. The microservices use Micrometer and the Prometheus registry, but due to our company policy the actuator is available on another port:
8080 for "business" http requests
8081/manage for the actuator. So I can access http://host:8081/manage/prometheus and see the metrics when running the process locally (without Kubernetes).
Now, I'm a beginner with Prometheus and have rather limited knowledge of Kubernetes (I come from a Java developer background).
I've created a pod with my application and successfully run it in Kubernetes. It works and I can access it (for 8080 I've created a service to map the ports) and I can execute "business" level http requests against it from the same PC.
But I haven't found any examples of adding Prometheus to the picture. Prometheus is supposed to be deployed in the same Kubernetes cluster, just as another pod. So I've started with:
FROM #docker.registry.address#/prom/prometheus:v2.15.2
COPY entrypoint.sh /
USER root
RUN chmod 755 /entrypoint.sh
ADD ./prometheus.yml /etc/prometheus/
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh looks like:
#!/bin/sh
echo "About to run prometheus"
/bin/prometheus --config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/prometheus \
--storage.tsdb.retention.time=3d \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.console.templates=/etc/prometheus/consoles
My question is about how exactly I should define prometheus.yml so that it will get the metrics from my spring boot pod (and other microservices that I have, all spring boot driven with the same actuator setup).
I've started with (prometheus.yml):
global:
  scrape_interval: 10s
  evaluation_interval: 10s
scrape_configs:
  - job_name: 'prometheus'
    metrics_path: /manage/prometheus
    kubernetes_sd_configs:
      - role: pod
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: sample-pod-app(.*)|another-pod-app(.*)
But apparently it doesn't work, so I'm asking for advice:
If someone has a working example, that would be the best :)
Intuitively I understand that I need to specify the port mapping for my 8081 port, but I don't know exactly how.
Since Prometheus is supposed to scrape another port, am I supposed to expose a Kubernetes service for port 8081 at the Kubernetes level?
Do I need any security-related resources to be defined in Kubernetes?
As a side note: at this point I don't care about scalability issues; I believe one Prometheus server will do the job, but I'll have to add Grafana to the picture.
Rather than hardcoding targets in the Prometheus config, you need to make use of annotations on your pods to tell Prometheus which pods to scrape, on which path, and on which port:
prometheus.io/scrape: "true"
prometheus.io/path: /manage/prometheus
prometheus.io/port: "8081"
prometheus.io/scheme: http
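These annotations are just a convention; the scrape config must translate them into relabeling rules. A commonly used sketch of such a job for the pod role (adapted from the standard Prometheus Kubernetes examples, not from the original answer):
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods that opted in with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Use the path from prometheus.io/path (e.g. /manage/prometheus)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Rewrite the scrape address to the port from prometheus.io/port (e.g. 8081)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2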
Spring boot micrometer example with Prometheus on kubernetes.
Prometheus deployment guide.
In order to let Prometheus collect metrics from your Spring Boot application you need to add specific dependencies to it. Here you can find a guide showing how to do it: Spring Boot metrics monitoring using Prometheus & Grafana. Here is an example:
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_spring_boot</artifactId>
    <version>0.1.0</version>
</dependency>
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_hotspot</artifactId>
    <version>0.1.0</version>
</dependency>
If you would like to use a slightly different strategy, you can also check out this one: Monitoring Spring Boot applications with Prometheus and Grafana:
In order to compare the performance of different JDKs for reactive
Spring Boot services, I made a setup in which a Spring Boot
application is wrapped in a Docker container. This makes it easy to
create different containers for different JDKs with the same Spring
Boot application running in it. The Spring Boot application exposes
metrics to Prometheus. Grafana can read these metrics and allows to
make nice visualizations from it. This blog post describes a setup to
get you up and running in minutes.
Please let me know if that helped.

application configuration for java apps in kubernetes

I'm new to Java and K8s, and I have some doubts about how to handle application configuration for my Java apps. I've got one Spring Boot app and the other three use WildFly.
So, they all have hardcoded application configuration, and when starting them they just use something like:
java -Dswarm.project.stage=development -jar foobar/target/foobar-swarm.jar
except for the Spring Boot one, which has an application.properties file containing the application configuration data.
So basically the three Java apps have two files baked in (which I know is a no-no):
- project-stages.yml
- standalone.xml
And when the developer wants to deploy to production, he uses:
java -Dswarm.project.stage=production -jar foobar/target/foobar-swarm.jar
And now we come to Kubernetes, which has three ways of dealing with application configuration data:
1.) Env variables
2.) Config maps
3.) Secrets
I was thinking of using ConfigMaps instead of env variables because they have more benefits.
So, the developer gave me the possibility of overwriting those hardcoded variables with an external file: -Dsystem.properties.file=/var/foobar/environment.properties
But I'm still overwriting hardcoded files with an external file, and I'm not happy with that solution!
So, I'm basically looking for advice: can those hardcoded files be supplied externally and populated with ConfigMaps in K8s? What would be the best practice for handling config files in the world of K8s?
Tnx,
Tom
There are several questions in the post, but I can only address the one related to Spring Boot.
The simplest and most convenient way of specifying configuration for a Spring Boot app is via its built-in profile feature. As you already mentioned, you have application.properties. You can create similar files according to your use cases: application-production.properties, application-staging.properties, application-k8s.properties, etc.
A Kubernetes deployment doesn't change this in any way.
You can control which configuration is picked by setting the SPRING_PROFILES_ACTIVE env variable from Kubernetes.
You might have something like this:
docker run -e SPRING_PROFILES_ACTIVE=k8s -d -p 0.0.0.0:8080:8080 \
--name=yourapp your_image_name bash -c "java -jar yourapp.jar"
It will pick configuration from application-k8s.properties.
Configuration files support environment variables as well.
You can have placeholders like ${YOUR_DB} in your properties files and Spring will automatically pick up the env variable named YOUR_DB. This feature is convenient when, for example, your app pod must have its own db pod.
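In a Kubernetes manifest both mechanisms end up as plain container environment variables; a minimal sketch, with illustrative values that are not from the original post:
containers:
  - name: yourapp
    image: your_image_name
    env:
      # Selects application-k8s.properties via Spring's profile mechanism
      - name: SPRING_PROFILES_ACTIVE
        value: "k8s"
      # Fills the ${YOUR_DB} placeholder in the properties files
      - name: YOUR_DB
        value: "jdbc:postgresql://db:5432/yourapp"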
If I got your question right you are asking how to configure a Spring Boot application via a k8s ConfigMap. Yes, you can do that.
Create a Docker image with WORKDIR work_dir in which you start the Spring Boot application, e.g. via java -jar /work_dir/app.jar
Create a ConfigMap
Run a container of the above-mentioned image within k8s
Mount the ConfigMap for the Spring Boot application.properties into the container as /work_dir/config/application.properties
On changes to the ConfigMap, the file within the container gets updated. You have to restart the Spring Boot application to make your changes active.
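A minimal sketch of the ConfigMap and the mount described above, with illustrative names and property values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    server.port=8080
    spring.datasource.url=jdbc:postgresql://db:5432/app
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: your_image_name
      volumeMounts:
        # Mounting at /work_dir/config yields /work_dir/config/application.properties,
        # which Spring Boot picks up relative to its working directory
        - name: app-config
          mountPath: /work_dir/config
  volumes:
    - name: app-config
      configMap:
        name: app-config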

How to identify a terminating pod in kubernetes using java

I am using fabric8 to get the status of a Kubernetes pod with the following code:
KubernetesHelper.getPodStatusText(pod);
I deploy an app within a container and there is a one-to-one mapping between a container and a pod. My requirement is to redeploy the application. So after deleting the pod I check the status, and the method returns "Running" while the pod is being deleted.
I am unable to tell that the pod has been deleted, since the newly deployed app also returns a status of "Running". Are there any other fields of a pod that can be used to distinguish a healthy pod from a terminating pod?
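One field that distinguishes a terminating pod from a healthy one is metadata.deletionTimestamp, which the API server sets as soon as deletion is requested; a minimal sketch with the fabric8 client, assuming its standard model accessors:
// A pod whose deletionTimestamp is set is being terminated, even if its phase still reads "Running".
boolean isTerminating = pod.getMetadata().getDeletionTimestamp() != null;
if (isTerminating) {
    // Treat it as going away and wait for the replacement pod instead.
}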
One way of doing this is to perform a rolling upgrade. This ensures that your deployed application incurs no downtime (new pods are started before old pods are stopped). One caveat is that you must be using a replication controller or replica set to do so. Most rolling deployments simply involve updating the container's image to the new version of the software.
You can do this through Java via fabric8's Kubernetes Java client. Here's an example:
client.replicationControllers()
.inNamespace("thisisatest")
.withName("nginx-controller")
.rolling().updateImage("nginx");
You can change any configuration of the replication controller (replicas, environment variables, etc.). The call returns when the pods running the new version are Ready and the old replication controller and its pods have been stopped and deleted.
