Implement tracing in a Spring Boot application - Java

I have a Spring Boot application and I want to configure log tracing for the whole application. I added the setup for the **Datadog Agent** as described in the documentation: https://docs.datadoghq.com/tracing/setup_overview/setup/java/?tabs=containers and I can see trace IDs and span IDs associated with each request. There are other options, such as Spring Cloud Sleuth, to add trace and span IDs as well:
https://spring.io/projects/spring-cloud-sleuth
and the Datadog documentation also describes a way of connecting Java logs and traces: https://docs.datadoghq.com/tracing/connect_logs_and_traces/java/?tabs=log4j2
I am confused by these three approaches and which of them I should use for tracing. With Spring Cloud Sleuth I know it is easy to generate span and trace IDs, but I am not sure whether they are sent to Datadog or whether I need to configure something additional.
The difference between connecting Java logs and traces and using the Datadog Agent is also not clear to me.
I am new to this topic, and it is not clear to me how I can implement tracing for every process involved in a request.
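For reference, the "connecting Java logs and traces" setup from those docs amounts to the Datadog agent injecting dd.trace_id and dd.span_id into the MDC (enabled with -Ddd.logs.injection=true on the JVM) and the log pattern referencing those keys. A minimal log4j2 sketch, with illustrative appender and pattern details:

```xml
<!-- dd.trace_id / dd.span_id are MDC keys injected by the Datadog Java agent
     when -Ddd.logs.injection=true is set. -->
<Console name="Console" target="SYSTEM_OUT">
  <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} [%X{dd.trace_id} %X{dd.span_id}] %msg%n"/>
</Console>
```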

Related

AWS Latency Metrics Logging Issue Spring Boot

I am trying to log AWS latency metrics on the application server. I have tried implementing the Latency Metrics Logging section at the end of https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html
As mentioned in the instructions there, I am setting the following while initializing the ApplicationContext:
AwsSdkMetrics.enableDefaultMetrics();
AwsSdkMetrics.setMetricNameSpace("SNSMetricsLog");
AwsSdkMetrics.setCredentialProvider(credentialsProvider);
I am using the following in log.properties:
log.folder=log
log.app.fileName=application.log
log.metric.fileName=metric.json
log.level=DEBUG
log.app.batch.fileName=batch.log
log.app.skippedMsg.fileName=skipped.log
log.logger.com.amazonaws.latency=DEBUG
Even after making these changes, the AWS latency metrics are not appearing, although I am able to see other DEBUG logs.
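For reference, a self-contained sketch of the programmatic setup described above; the assumption (per the linked guide) is that it runs before any AWS service client is constructed, and DefaultAWSCredentialsProviderChain stands in for whatever provider the app actually uses:

```java
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.metrics.AwsSdkMetrics;

public final class SdkMetricsBootstrap {

    // Enable SDK-side metric collection before AWS clients are created.
    public static void init() {
        AwsSdkMetrics.enableDefaultMetrics();
        AwsSdkMetrics.setMetricNameSpace("SNSMetricsLog");

        // Assumption: the default chain; substitute the app's real provider.
        AWSCredentialsProvider credentialsProvider = new DefaultAWSCredentialsProviderChain();
        AwsSdkMetrics.setCredentialProvider(credentialsProvider);
    }
}
```

Note that the per-request latency output is written through the com.amazonaws.latency logger, so the DEBUG level has to take effect in whatever logging configuration the SDK actually picks up at runtime.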

Is it possible to create custom fields in a Kibana dashboard?

I am using a Java micro-service architecture in my application and generating separate log files for each micro-service.
I am using the ELK stack to visualize the logs in Kibana, but the problem is that the fields I'm getting from Elasticsearch are all server-log-related fields. Some example fields are #timestamp, #version, #path, #version.keyword, #host.
I want to customize these fields by adding fields like customerId, txn-Id, and mobile number so that we can analyze the data more easily.
I'm using org.apache.logging.log4j2 to write the logs. Can I add the above fields (customerId, txn-Id, mobile) to the log files? Elasticsearch would then store these fields alongside the default fields above, and the custom fields should become available in a Kibana dashboard. Is this possible?
It's definitely possible to do that. I've not done it with the log4j2 stack (I have with slf4j/logback), but the basic approach is:
- set those fields in the Mapped Diagnostic Context (I'm fairly sure log4j2 supports that; see the sketch after this list)
- use a log appender that writes logstash-structured JSON
- configure Filebeat to ship the JSON logs
- if Filebeat is shipping to Logstash, you'll need to configure Logstash to pass those preformatted JSON logs directly to Elasticsearch
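A minimal sketch of the MDC step with log4j2, whose MDC implementation is the ThreadContext class (the service class and field names here are hypothetical, borrowed from the question above):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class OrderService {
    private static final Logger logger = LogManager.getLogger(OrderService.class);

    void process(String customerId, String txnId, String mobile) {
        // Put the custom fields into the MDC; a JSON-emitting layout can then
        // write them as top-level fields for Elasticsearch to index.
        ThreadContext.put("customerId", customerId);
        ThreadContext.put("txn-Id", txnId);
        ThreadContext.put("mobile", mobile);
        try {
            logger.info("processing transaction");
        } finally {
            ThreadContext.clearMap(); // don't leak fields to the next request on this thread
        }
    }
}
```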
It is definitely possible. I am doing that now with my applications. However, the output looks a bit different from yours. The basic guide for doing this can be found at Logging in the Cloud on the Log4j2 web site.
The "normal" log view looks very similar to what you would see when logging to a file.
However, if you select a message you can see the individual fields.
The Log4j2 configuration uses a TCP Socket appender configured to write to a cluster of Logstash servers behind a single DNS entry, using the Gelf layout; a sketch follows.
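A sketch of what that configuration might look like; the host name and port are placeholders, not the values from the actual setup:

```xml
<Appenders>
  <!-- Placeholder host/port; a single DNS entry fronting the Logstash cluster. -->
  <Socket name="Logstash" host="logstash.example.com" port="12222" protocol="tcp">
    <GelfLayout includeStackTrace="true"/>
  </Socket>
</Appenders>
```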
You can also use MapMessages to capture individual data elements and log them. While this currently works, it is slightly cumbersome, so I have recently committed improvements that will be available in Log4j 2.15.0.
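For illustration, logging a MapMessage looks roughly like this; StringMapMessage is log4j2's string-valued variant, and the class and field names are hypothetical:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class AuditLog {
    private static final Logger logger = LogManager.getLogger(AuditLog.class);

    void record(String customerId, String txnId) {
        // Each map entry is captured as an individual data element.
        logger.info(new StringMapMessage()
                .with("customerId", customerId)
                .with("txn-Id", txnId));
    }
}
```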
It is important to note that the Logging in the Cloud page briefly mentions storing your logging configuration in Spring Cloud Config. If you want a common base configuration while allowing apps to do some customization, this works very, very well. However, the Gelf layout, the JSON Template Layout, and the TCP Socket appender are all independent of that and can be used without Spring Boot.

Spring Webflux - logging connection ID and new Connection log not displayed when using Webflux webclient

Many tutorials online are pointing out the importance of having connection ID in a Spring Webflux application.
For instance, see this screenshot taken from a conference presentation.
However, I am not getting those IDs. I can only see the part with the time, up until the [ctor-http-nio5].
I cannot see the connection ID, and I cannot see the statement "New http connection".
What could be the root cause of my not being able to display these? I would really like to see those interesting logs.
Thank you for your help.
This ID is unique for each connection made to your server. You can extract it by calling the getLogPrefix() method of the ServerWebExchange class and append it to your logs either directly or by putting it into the MDC.
By default, Spring Framework logs this ID in its framework logs.
These framework logs are at the DEBUG level, so you need to enable that level to see the logs with the IDs.
Adding the following configuration to application.properties makes those logs visible:
logging.level.org.springframework=DEBUG
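If you also want that ID on your own application logs, one option is a WebFilter along these lines (a sketch; note that the MDC is thread-local, so in a fully reactive pipeline the value is only reliable on the thread that actually writes the log):

```java
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Component
public class LogPrefixFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        // getLogPrefix() returns the same ID Spring prints in its DEBUG logs.
        MDC.put("logPrefix", exchange.getLogPrefix());
        return chain.filter(exchange)
                .doFinally(signal -> MDC.remove("logPrefix"));
    }
}
```

Referencing %X{logPrefix} in the log pattern then stamps the ID onto each line.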

How to run the app with spring-cloud-starter-aws locally?

I need to run a Spring Boot based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried to set the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws seems to ignore it.
I found examples that use a CloudFormation stack, for some very strange reason, just to run the app locally.
When I use the AWS SDK for Java, the default AWS chain is used without any issues: I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use Spring Boot 2.2.2.RELEASE and Spring Cloud 2.2.1.RELEASE. I have a feeling they introduced a regression, because it worked without problems in previous versions.
Any ideas how to run the app locally?
Adding the following lines to the configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials; the AWS SDK uses it for the region and other configuration parameters too, so this is definitely a Spring bug.
It still prints a warning about having no connection to the instance metadata service once during application start, but this solution can more or less be used for local running.
Without the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace while shutting the app down. I use CloudWatch metrics in my app.
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.

Checking whether a Spring Boot application is running or not

I've created a Spring Boot project and deployed it on a VM. I've added a command in local.rc that starts the Spring Boot application on reboot. I want to check whether the command was executed and the application is running. How do I do that?
There are two ways:
- On the system level: run your project as a service, as documented in the Official documentation - Deployments. Then you can query the application status with service myapp status.
- On the application level: include Spring Boot Actuator in your app and use the Actuator endpoints such as /actuator/health, as per Official documentation - Production Ready Endpoints. These endpoints can be exposed via HTTP or JMX.
Note: prior to Spring Boot 2.0 the Actuator endpoint is /health.
If it's a web project, it makes sense to include spring-boot-actuator (just add the dependency in Maven, as shown below, and start the microservice).
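The dependency in question, using the standard starter coordinates:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```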
In this case, it will automatically expose the following endpoint by default (it can actually be configured quite flexibly):
http://<HOST>:<PORT>/health
Just issue an HTTP GET request, and if you get 200, it's up and running.
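For example, a minimal programmatic check with the JDK 11+ HTTP client; the host, port, and path are assumptions (on Spring Boot 2.x the default path is /actuator/health):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/actuator/health")).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Expect 200 and a body like {"status":"UP"} when the app is healthy.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```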
If using the Actuator is not an option (although it really should be your first choice), then you can simply telnet to <HOST> <PORT>.
The rationale behind this is that the port is exposed and ready to "listen" for external connections only after the application context has actually started.
