I am trying to log AWS latency metrics on the application server. I have tried implementing the Latency Metrics Logging section at the end of https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html
As mentioned in the instructions there:
I am setting the following while initializing ApplicationContext:
AwsSdkMetrics.enableDefaultMetrics();
AwsSdkMetrics.setMetricNameSpace("SNSMetricsLog");
AwsSdkMetrics.setCredentialProvider(credentialsProvider);
I am using the following in log.properties:
log.folder=log
log.app.fileName=application.log
log.metric.fileName=metric.json
log.level=DEBUG
log.app.batch.fileName=batch.log
log.app.skippedMsg.fileName=skipped.log
log.logger.com.amazonaws.latency=DEBUG
Even after making these changes, the AWS latency metrics are not being logged, although I am able to see other DEBUG logs.
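For reference, a minimal sketch of the setup order, assuming the AWS SDK for Java v1 and an SNS client (the class name and client type are illustrative). Two things worth checking: enabling metrics may need to happen before the first AWS client is built (or the JVM can be started with -Dcom.amazonaws.sdk.enableDefaultMetrics), and since the SDK logs through Apache Commons Logging, the com.amazonaws.latency=DEBUG level has to reach whatever logging backend is actually bound, not only an application-specific log.properties file.

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.metrics.AwsSdkMetrics;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class SnsMetricsBootstrap {

    public static AmazonSNS buildClient() {
        // Enable SDK-side metric collection before building any AWS client.
        // (Alternatively, start the JVM with -Dcom.amazonaws.sdk.enableDefaultMetrics.)
        AwsSdkMetrics.enableDefaultMetrics();
        AwsSdkMetrics.setMetricNameSpace("SNSMetricsLog");
        AwsSdkMetrics.setCredentialProvider(new DefaultAWSCredentialsProviderChain());

        // Clients built after this point have per-request latency recorded;
        // the "com.amazonaws.latency" logger then emits one DEBUG line per request.
        return AmazonSNSClientBuilder.standard().build();
    }
}
```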
We are using Paketo Buildpacks for our Spring Boot application. We configured all logs to be written as JSON to STDOUT. The issue is that there are a few lines logged by Paketo during startup:
Setting Active Processor Count to 2
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx1643814K -XX:MaxMetaspaceSize=146137K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 2G, Thread Count: 50, Loaded Class Count: 23387, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 124 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=2 -XX:MaxDirectMemorySize=10M -Xmx1643814K -XX:MaxMetaspaceSize=146137K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
Is there any way to configure Paketo to print the above as JSON:
{ "timestamp": 1234567890, "app": "my-service", "message": "Setting Active Processor Count to 2" }
It is not possible to directly configure Paketo Buildpacks to print their log messages in JSON format. Paketo Buildpacks implement the Cloud Native Buildpacks specification and write their startup output as plain log lines to STDOUT; that output is not JSON, and there is no built-in support for converting it to JSON within the buildpacks themselves.
One potential solution would be to use a log aggregation and analysis tool that can parse plain-text log lines alongside your JSON logs. Many tools can do this, including the Elastic Stack (formerly known as the ELK stack), Splunk, and Logz.io. They can ingest the plain-text startup lines together with your application's JSON logs and still provide analytics and visualization to help you understand and optimize your application's performance.
Another potential solution would be to use a log forwarding tool to send your log messages to a log analysis tool or service that can handle a mix of plain-text and JSON logs. There are many such tools, including Logstash, Fluentd, and rsyslog. They can collect log output from your application and send it to a destination of your choice, such as a log analysis tool or a centralized log management service.
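To make the forwarding idea concrete, the transformation such a tool would apply can be sketched in a few lines of plain Java - purely illustrative, not a Paketo, Logstash, or Fluentd API - wrapping each plain-text line in a JSON envelope shaped like the format asked for above:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

/**
 * Illustrative wrapper: reads plain-text lines (e.g. piped launcher output)
 * and re-emits each one as a one-line JSON object.
 */
public class JsonLineWrapper {

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(System.in, StandardCharsets.UTF_8));
        String line;
        while ((line = in.readLine()) != null) {
            // Minimal escaping so the wrapped line stays valid JSON.
            String escaped = line.replace("\\", "\\\\").replace("\"", "\\\"");
            System.out.printf("{\"timestamp\": %d, \"app\": \"my-service\", \"message\": \"%s\"}%n",
                    System.currentTimeMillis(), escaped);
        }
    }
}
```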
No, sorry. The logging format used by Paketo Buildpacks and by the helper processes they install (technically, exec.d processes) is not configurable at this time.
Some work has been done in libcnb, the upstream library used by the Java-related buildpacks, that would allow customized loggers. In theory, that should make it possible to switch to a JSON-based logging format, but it requires v2.0 of the library, which has yet to be released.
I suggest raising this for the 2023 roadmap; there is presently a discussion going on where the project is soliciting new features. You can also open an issue under the Java buildpack for tracking.
I have a Spring Boot application and I want to configure log tracing across the whole application. I added the setup for the **Datadog Agent** as described in the documentation: https://docs.datadoghq.com/tracing/setup_overview/setup/java/?tabs=containers and I can see trace IDs and span IDs associated with each request. There are other options, such as Spring Cloud Sleuth, for adding trace and span IDs as well:
https://spring.io/projects/spring-cloud-sleuth
The Datadog documentation also describes a way to connect Java logs and traces: https://docs.datadoghq.com/tracing/connect_logs_and_traces/java/?tabs=log4j2
I am confused about these three approaches and which of them I should use for tracing. With Spring Sleuth I know it is easy to generate span and trace IDs, but I am not sure whether they are sent to Datadog or whether I need to configure something additional.
The difference between connecting Java logs and traces versus just using the Datadog Agent is also not clear to me.
I am new to this topic and it is not clear to me how to implement tracing for everything that happens during a request.
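As background on the "connecting Java logs and traces" part: it boils down to getting the active trace and span IDs onto every log line. When the application runs under the Datadog Java Agent with -Ddd.logs.injection=true, the agent puts those IDs into the SLF4J MDC. A rough sketch of what that exposes (the class and method names here are made up; only the MDC keys come from the Datadog docs):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // With the Datadog Java Agent and -Ddd.logs.injection=true, the agent
        // populates these MDC keys for the current request.
        String traceId = MDC.get("dd.trace_id");
        String spanId = MDC.get("dd.span_id");

        // Any pattern or JSON layout that includes the MDC will therefore emit
        // dd.trace_id / dd.span_id on every line, which is what lets Datadog
        // link the log entry to the APM trace for the same request.
        log.info("placing order {} (trace={}, span={})", orderId, traceId, spanId);
    }
}
```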
For exporting metrics (to Prometheus) from a Spring Boot microservice, we can use the Spring Boot Actuator; another option is to use the Prometheus JMX exporter (https://github.com/prometheus/jmx_exporter) as a Java agent when running the service. Though both options serve the same purpose, I see that the JMX exporter exports far more metrics than the Spring Boot Actuator. I was scouting through the Spring Boot documentation to see if there is any option to enable more metrics with the actuator; it looks like all the JMX metrics are enabled by default. So the question is: is there a way to expose more metrics from the Spring Boot Actuator? Is there any recommendation or comparison study available for the two options mentioned above?
Any help here is greatly appreciated. Thanks!
If you are using Spring Boot 2.x, it works like this:
In Spring Boot 2.0, the in-house metrics were replaced with Micrometer support, so we can expect breaking changes. If our application was using metric services such as GaugeService or CounterService, they will no longer be available.
Instead, we're expected to interact with Micrometer directly. In Spring Boot 2.0, we'll get a bean of type MeterRegistry autoconfigured for us.
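For example, a custom metric in Boot 2.x is just a bean that takes the auto-configured MeterRegistry (a minimal sketch; the class and metric names are illustrative):

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class OrderMetrics {

    private final Counter ordersProcessed;

    // Spring Boot 2.x auto-configures a MeterRegistry bean, so it can be injected directly.
    public OrderMetrics(MeterRegistry registry) {
        this.ordersProcessed = Counter.builder("orders.processed")
                .description("Number of orders processed")
                .register(registry);
    }

    public void recordOrder() {
        ordersProcessed.increment();
    }
}
```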
For Spring Boot 1.x:
The metrics endpoint publishes information about OS and JVM as well as application-level metrics. Once enabled, we get information such as memory, heap, processors, threads, classes loaded, classes unloaded, and thread pools along with some HTTP metrics as well.
and this seems to work similarly to the Prometheus JMX exporter.
I want to expose the disk usage of my microservice to Prometheus. I'm already using Micrometer and the Micrometer Prometheus registry with JDK 11 and Spring Boot Actuator.
I'm able to get all the other metrics working, like tomcat_sessions and jvm_buffer_total_capacity_bytes, etc.,
but I can't seem to find a way of exposing the disk usage.
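One way to get this - a minimal sketch, assuming Micrometer's Gauge API with the Prometheus registry already on the classpath, and with metric names of my own choosing - is to register gauges backed by java.io.File, since depending on the Spring Boot/Micrometer version there may be only a disk-space health indicator and no disk metric out of the box:

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

import java.io.File;

@Component
public class DiskUsageMetrics {

    // Keep a strong reference: Micrometer gauges only hold the watched object weakly.
    private final File root = new File("/");

    public DiskUsageMetrics(MeterRegistry registry) {
        Gauge.builder("disk.total.bytes", root, File::getTotalSpace)
                .baseUnit("bytes")
                .register(registry);
        Gauge.builder("disk.free.bytes", root, File::getUsableSpace)
                .baseUnit("bytes")
                .register(registry);
        Gauge.builder("disk.used.bytes", root, f -> f.getTotalSpace() - f.getUsableSpace())
                .baseUnit("bytes")
                .register(registry);
    }
}
```

With the Prometheus endpoint enabled, these show up as disk_total_bytes, disk_free_bytes, and disk_used_bytes on /actuator/prometheus.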
I need to run a Spring Boot based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried to set the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws ignores it.
I found examples that, for some strange reason, use a CloudFormation stack to run the app locally.
When I use the AWS SDK for Java directly, the default AWS credentials chain is used without any issues - I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use version 2.2.2.RELEASE of Spring Boot and 2.2.1.RELEASE of Spring Cloud. I have a feeling they introduced a regression, because in previous versions it worked without problems.
Any ideas how to run the app locally?
Adding the following lines to configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials; the AWS SDK uses it for the region and other configuration parameters too. So this is a Spring bug for sure.
It still logs a warning once during application start about not being able to reach the instance metadata service, but more or less this solution can be used for local running.
If we don't have the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace while shutting down the app. I use CloudWatch metrics in my app.
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.
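For comparison, this is roughly what the plain-SDK path mentioned in the question looks like - a minimal sketch, with the client type and hard-coded region as placeholders. The default provider chain resolves credentials the same way locally and on EC2, which is the behaviour the properties above try to coax out of Spring Cloud AWS:

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {

    // The default chain checks env vars, system properties, ~/.aws/credentials,
    // and finally the EC2 instance metadata service, so the same code works
    // locally and on EC2 without any Spring-specific configuration.
    public static AmazonS3 build() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(Regions.EU_WEST_1) // placeholder; pick your own region
                .build();
    }
}
```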