Couchbase DefaultOrphanResponseReporter Orphan responses observed - java

I have a Spring Boot 2 app that uses Spring Data Couchbase.
I see this message in the logs every minute:
2019-11-12 13:48:48,924 WARN : gid: trace= span= [cb-orphan-1] c.c.c.c.t.DefaultOrphanResponseReporter Orphan responses observed: [{"top":[{"r":"10.120.93.220:8092","s":"view","c":"5BE128F6F96A4D28/FFFFFFFFDA2C8C52","l":"10.125.216.233:49893"}],"service":"view","count":1}]
That message comes from the new Response Time Observability feature in the underlying Java SDK.
It would seem to indicate that there are view requests which time out but whose responses are eventually received later, yet I have no views defined in the Couchbase DB.
I would like to know if it is possible to disable the OrphanResponseLogReporter via YML file config in a Spring Boot app, e.g. by setting logIntervalNanos to 0.

No, unfortunately, you cannot do it. Only a subset of Couchbase's configuration properties is supported in application.yml, namely the ones present in the CouchbaseProperties.java class.
You could, however, use an environment variable: com.couchbase.orphanResponseReportingEnabled=false. It is independent of Spring; it is read directly by the Couchbase SDK.
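For example, it could be set as a JVM system property before the Couchbase environment is created, at the very top of the Spring Boot main method. This is only a minimal sketch, assuming the SDK also accepts the flag as a system property (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        // Assumption: the Couchbase SDK reads "com.couchbase.*" system properties at bootstrap,
        // so this must run before the Couchbase environment is created.
        System.setProperty("com.couchbase.orphanResponseReportingEnabled", "false");
        SpringApplication.run(MyApplication.class, args);
    }
}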
Edit:
As a workaround, you can set the logging level in application.yml:
logging.level.com.couchbase.client.core.tracing.DefaultOrphanResponseReporter: ERROR

Related

Is it possible to create custom fields in a Kibana dashboard?

I am using a Java micro-service architecture in my application and generating separate log files for each micro-service.
I am using the ELK stack to visualize the logs in Kibana, but the problem is that the fields I get from Elasticsearch are only the ones related to the server logs. Some example fields are #timestamp, #version, #path, #version.keyword, #host.
I want to customize these fields by adding fields like customerId, txn-Id and mobile no so that we can analyze the data more easily.
I'm using org.apache.logging.log4j2 to write the logs. Can I add the above fields (customerId, txn-Id, mobile) to the log files, so that Elasticsearch stores them alongside the default fields and they become available in a Kibana dashboard? Is this possible?
It's definitely possible to do that. I've not done it with the log4j2 stack (I have with slf4j/logback), but the basic approach is:
set those fields in the Mapped Diagnostic Context (I'm fairly sure log4j2 supports that; see the sketch after this list)
use a log appender which logs to logstash-structured JSON
configure filebeat to ship the JSON logs
if filebeat is shipping to logstash, you'll need to configure logstash to pass those preformatted JSON logs directly to elasticsearch
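A minimal sketch of the first step using Log4j2's ThreadContext (its MDC equivalent); the class, method and field names are illustrative, and the appender/layout wiring is assumed to be structured JSON as described above:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class OrderService {

    private static final Logger log = LogManager.getLogger(OrderService.class);

    public void processOrder(String customerId, String txnId, String mobileNo) {
        // Put the business fields into the Thread Context; a structured (JSON/GELF)
        // layout will emit each entry as its own field for Elasticsearch/Kibana.
        ThreadContext.put("customerId", customerId);
        ThreadContext.put("txnId", txnId);
        ThreadContext.put("mobileNo", mobileNo);
        try {
            log.info("Processing order");
        } finally {
            // Clear the context so values do not leak to the next request on this thread.
            ThreadContext.clearMap();
        }
    }
}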
It is definitely possible. I am doing that now with my applications. However, the output looks a bit different from yours. The basic guide for doing this can be found at Logging in the Cloud on the Log4j2 web site.
The "normal" log view looks very similar to what you would see when logging to a file.
However, if you select a message you can see the individual fields.
The Log4j2 configuration uses a TCP Socket Appender that is configured to write to a cluster of Logstash servers behind a single DNS entry, using the GELF layout.
You can also use MapMessages to capture individual data elements and log them. While this currently works, it is slightly cumbersome, so I have recently committed improvements that will be available in Log4j 2.15.0.
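A minimal sketch of the MapMessage approach (here using StringMapMessage; the class and key names are illustrative):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class PaymentAuditLogger {

    private static final Logger log = LogManager.getLogger(PaymentAuditLogger.class);

    public void paymentAccepted(String customerId, String txnId, String mobileNo) {
        // Each key/value pair becomes an individual field in a structured layout.
        StringMapMessage message = new StringMapMessage()
                .with("message", "payment accepted")
                .with("customerId", customerId)
                .with("txnId", txnId)
                .with("mobileNo", mobileNo);
        log.info(message);
    }
}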
It is important to note that the Logging in the Cloud page briefly mentions storing your logging configuration in Spring Cloud Config. If you want to have a common base configuration while allowing apps to do some customization, this works very, very well. However, the GELF layout, the JSON Template Layout and the TCP Appender are all independent of that and can be used without Spring Boot.

Elastic Beanstalk, Java Spring Boot and RDS Multi AZ Deployment

We are about to deploy a Spring Boot 2.3 Application on Elastic Beanstalk running Java 8 (Not Corretto 8).
We are thinking of using Multi-AZ for the RDS instance and I am reading the documentation for that:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
and there is a section, "Setting the JVM TTL for DNS name lookups", which states that we should be aware of the DNS cache in case of failover. It says the following:
The default TTL can vary according to the version of your JVM and whether a security manager is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM and not using a security manager, you can ignore the rest of this topic. For more information on security managers in Oracle, see The security manager in the Oracle documentation.
What is the default value for Java 8 in Elastic Beanstalk? I can't seem to find it.
Also, from my understanding, if the TTL value is large and the database fails, the application won't fail over to the instance in the other AZ because the cached DNS entry won't change. Is that correct?
Also, if the default value is too big, what is the Spring Boot way of setting that property without using XML files?
Thanks a lot in advance
You can tune this in the JVM with code like:
java.security.Security.setProperty("networkaddress.cache.ttl", "1");
java.security.Security.setProperty("networkaddress.cache.negative.ttl", "1");
This value is the number of seconds to cache the data.
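If you want a Spring Boot friendly place for this without any XML, one option is a static initializer on the main application class, so the properties are set before any database connection or AWS client is created. This is only a sketch; the class name and TTL values are illustrative and should be tuned to your failover requirements:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    static {
        // DNS cache TTLs in seconds; set them before any connection is opened.
        java.security.Security.setProperty("networkaddress.cache.ttl", "10");
        java.security.Security.setProperty("networkaddress.cache.negative.ttl", "10");
    }

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}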
However, you may also want to consider RDS Proxy, as it can speed up failover. There should be no code changes, only configuration changes. Note that RDS Proxy is billed separately, per vCPU of the underlying database instance.

How to run the app with spring-cloud-starter-aws locally?

I need to run a Spring Boot based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried setting the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws ignores it.
I found examples that, for some strange reason, use a CloudFormation stack to run the app locally.
When I use the AWS SDK for Java directly, the default AWS chain is used without any issues; I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use Spring Boot 2.2.2.RELEASE and Spring Cloud 2.2.1.RELEASE. I have a feeling a regression was introduced, because in previous versions it worked without problems.
Any ideas how to run the app locally?
Adding the following lines to configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials; the AWS SDK uses it for the region and other configuration parameters too. So this is a Spring bug for sure.
It still logs a warning once during application start about not being able to reach the instance metadata service, but more or less this solution can be used for local running.
Without the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace while shutting the app down. I use CloudWatch metrics in my app.
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.
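If you prefer to keep these settings out of the shared configuration files, the same three properties can be applied programmatically for local runs. This is only a sketch; the class name and region are illustrative, and in practice you would guard it behind a local profile or launch configuration:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(MyApplication.class)
                // Same three settings as above, set as default properties for a local run.
                .properties(
                        "cloud.aws.region.static=eu-west-1",
                        "cloud.aws.stack.auto=false",
                        "spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration")
                .run(args);
    }
}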

Prevent Spring Boot Invoking EC2MetadataUtils.getItems on startup

I am having an issue where EC2MetadataUtils.getItems is being invoked on application startup (Spring Boot app). We do not use EC2, so the calls made to AWS to get metadata always fail; the application attempts to get this data 3 times, which adds around 15 seconds to the start time of the application.
I have been searching high and low for solutions. I found a promising one which suggested the following: @EnableAutoConfiguration(exclude = { ContextResourceLoaderAutoConfiguration.class, ContextResourceLoaderConfiguration.class, ContextInstanceDataAutoConfiguration.class })
However, when I try to start the application it complains that ContextResourceLoaderConfiguration.class cannot be excluded because it is not an auto-configuration; if I just exclude the other two, the application still invokes EC2MetadataUtils.
Has anyone experienced this in the past and managed to resolve it?
Thank you for your time.
Resolved with the following:
@EnableAutoConfiguration(exclude = {ContextInstanceDataAutoConfiguration.class, ContextStackAutoConfiguration.class, ContextResourceLoaderAutoConfiguration.class})
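For context, a fleshed-out version of that exclusion on the main class might look like the following. This is only a sketch assuming Spring Cloud AWS 2.x package names; @SpringBootApplication(exclude = ...) is equivalent to the @EnableAutoConfiguration form above:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextResourceLoaderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration;

// Exclude the Spring Cloud AWS context auto-configurations that probe the EC2 metadata service.
@SpringBootApplication(exclude = {
        ContextInstanceDataAutoConfiguration.class,
        ContextStackAutoConfiguration.class,
        ContextResourceLoaderAutoConfiguration.class })
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}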
When running a Spring Boot application with AWS dependencies, it invokes stack auto-configuration; you need to disable it.
Add the following to application.yml:
cloud.aws.stack.auto: false
A Spring Boot application should not make any calls to EC2 on its own. This means you are using some AWS-specific library or component, and that library makes the call on startup.
Please check your dependencies and context configuration. This is not about Spring Boot itself; it is something in your custom dependencies/components.
If you're not using EC2, you can try removing the spring-cloud-aws* libraries from your dependencies.
You can use Spring profiles to differentiate between a cloud profile and a default profile. For the cloud profile, you can use the spring-cloud-aws artifact to get metadata about the EC2 instance (which requires EC2 read permission from an attached IAM role), whereas for the default profile you don't need to worry about the cloud environment and can disable the cloud configuration properties, which should not cause an issue at application startup.

Dynamically change application.properties values in spring boot

Currently I am working on a REST-based project in Spring Boot.
I have added the API URL in the 'application.properties' file.
i.e.
application.properties
api-base-url=http://localhost:8080/RestServices/v1
This 'api-base-url' value is also accessed from Java.
In some situations I need to change 'api-base-url' dynamically.
I have changed the 'api-base-url' value dynamically and it works fine.
But my problem is that when WildFly restarts, the configuration is reset to the default.
i.e. this is my default value:
api-base-url=http://localhost:8080/RestServices/v1
which I dynamically change to
api-base-url=http://10.34.2.3:8080/RestServices/v1
When WildFly restarts, the configuration resets to the default:
api-base-url=http://localhost:8080/RestServices/v1
Is there any solution for this?
You might want to consider using a cloud config server to host your config. Two examples are Spring Cloud Config and Consul.
These servers will host your application's configuration, and your Spring Boot application will make a call out to the config server on startup to get its config.
spring-boot-actuator exposes a refresh endpoint (/actuator/refresh in Spring Boot 2) which forces the application to refresh its configuration. In this case, it will call out to the config server to get the latest version.
This way you can change the config hosted in the config server, hit the refresh endpoint, and the changes will be picked up by your application.
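A minimal sketch of a bean that would pick up the refreshed value, assuming spring-cloud-starter-config and the actuator refresh endpoint are available (the class name is illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope // the bean is rebuilt with fresh configuration after a refresh call
public class ApiClientProperties {

    @Value("${api-base-url}")
    private String apiBaseUrl;

    public String getApiBaseUrl() {
        return apiBaseUrl;
    }
}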
As @moilejter suggests, one possible way is to persist the value in a database table and, at startup, simply read it from that table instead of from the application.properties file. Your application.properties file can hold the information necessary for the database connection.
You would also need a JMX method or a REST API in your application to signal that the URL has changed, which in turn would simply re-read it from the same table. This way you are safe even if the app restarts, and you won't lose the override.
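A minimal sketch of that idea, with a hypothetical ConfigRepository standing in for the database table and illustrative class and endpoint names:

import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical abstraction over the configuration table.
interface ConfigRepository {
    String findValue(String key);
    void saveValue(String key, String value);
}

// Holds the current value: loaded from the table at startup, updatable at runtime.
@Component
class ApiBaseUrlHolder {

    private final ConfigRepository repository;
    private volatile String apiBaseUrl;

    ApiBaseUrlHolder(ConfigRepository repository) {
        this.repository = repository;
        this.apiBaseUrl = repository.findValue("api-base-url");
    }

    String current() {
        return apiBaseUrl;
    }

    void update(String newUrl) {
        repository.saveValue("api-base-url", newUrl); // survives a WildFly restart
        this.apiBaseUrl = newUrl;
    }
}

// REST trigger telling the running application that the URL has changed.
@RestController
class ApiBaseUrlController {

    private final ApiBaseUrlHolder holder;

    ApiBaseUrlController(ApiBaseUrlHolder holder) {
        this.holder = holder;
    }

    @PostMapping("/admin/api-base-url")
    void change(@RequestBody String newUrl) {
        holder.update(newUrl);
    }
}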
You can use a BeanFactoryPostProcessor coupled with the Environment to leverage Spring's placeholder mechanism, for example:
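This is only a sketch of that idea; the property source name and the lookup are hypothetical and would be backed by whatever store you persist the override in:

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.EnvironmentAware;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.stereotype.Component;

@Component
public class ApiUrlOverridePostProcessor implements BeanFactoryPostProcessor, EnvironmentAware {

    private ConfigurableEnvironment environment;

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = (ConfigurableEnvironment) environment;
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        // Register an override that wins over application.properties, so
        // ${api-base-url} placeholders resolve to the persisted value.
        Map<String, Object> overrides = new HashMap<>();
        overrides.put("api-base-url", loadPersistedBaseUrl());
        environment.getPropertySources().addFirst(new MapPropertySource("apiUrlOverrides", overrides));
    }

    private String loadPersistedBaseUrl() {
        // Placeholder for the real lookup (database table, file, etc.).
        return "http://10.34.2.3:8080/RestServices/v1";
    }
}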
@user2214646 Use Spring Expression Language (SpEL).
