We are about to deploy a Spring Boot 2.3 application on Elastic Beanstalk running Java 8 (not Corretto 8).
We are thinking of using Multi-AZ for the RDS instance, and I am reading the documentation for that:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
and there is a section, "Setting the JVM TTL for DNS name lookups", which states that we should be aware of the JVM DNS cache in case of a failover. It says the following:
The default TTL can vary according to the version of your JVM and whether a security manager is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM and not using a security manager, you can ignore the rest of this topic. For more information on security managers in Oracle, see The security manager in the Oracle documentation.
What is the default TTL value for Java 8 on Elastic Beanstalk? I can't seem to find it.
Also, from my understanding, if the TTL value is large and the database fails, the application won't follow the failover to the instance in the other AZ because it will keep using the cached DNS entry. Is that correct?
Also, if the default value is too big, what is the Spring Boot way of setting that property without using XML files?
Thanks a lot in advance
You can tune this in the JVM with code like:
java.security.Security.setProperty("networkaddress.cache.ttl", "1");
java.security.Security.setProperty("networkaddress.cache.negative.ttl", "1");
The value is the number of seconds to cache DNS lookups (the second property controls how long failed lookups are cached).
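If you want to do this the Spring Boot way without any XML, one common spot is the main class, before the application context starts. A minimal sketch (the class name is a placeholder; adjust the TTL to whatever suits your failover requirements):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        // Cache successful DNS lookups for at most 1 second so the application
        // picks up the new RDS endpoint address quickly after a Multi-AZ failover.
        java.security.Security.setProperty("networkaddress.cache.ttl", "1");
        // Keep failed lookups from being cached for long as well.
        java.security.Security.setProperty("networkaddress.cache.negative.ttl", "1");

        SpringApplication.run(MyApplication.class, args);
    }
}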
However, you may also want to consider RDS Proxy, as it can speed up failover. It should require no code changes, only configuration changes, though note that RDS Proxy is billed separately.
I need to design and configure a Kafka JDBC Connect project where both the source and the sink are Postgres databases, and I am using Apache Kafka 2.8.
I have prepared a POC for standalone mode, but I need to design it for distributed mode, and the data volume would be several million records.
Can you share any reference for setting up distributed mode, as well as parameter tuning and best practices?
I have gone through several documents but cannot find a precise document covering only Apache Kafka with the JDBC connector.
Also, please let me know how I can dockerize this solution.
Thanks,
Suvendu
reference to setup for distributed mode
This is in the Kafka documentation. Run connect-distributed.sh along with its config file.
parameters tuning and best practices?
The config has reasonable defaults, but you're welcome to inspect the file for any changes. The only other thing would be heap settings; the default Xmx is 2G and can be overridden with the KAFKA_HEAP_OPTS environment variable.
This starts an HTTP server, and you POST JSON to it that has the same key-value pairs as the standalone JDBC connector properties file.
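For illustration, a minimal sketch of registering a JDBC source connector through that REST API from Java 11+ (the connector name, connection URL, credentials, table, and port are placeholders, and it assumes the Confluent JDBC connector is installed on the Connect workers):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterJdbcSourceConnector {

    public static void main(String[] args) throws Exception {
        // Same key/value pairs you would put in the standalone connector properties file,
        // wrapped in the JSON shape the Connect REST API expects.
        String connectorJson = "{"
                + "\"name\": \"pg-source\","
                + "\"config\": {"
                + "\"connector.class\": \"io.confluent.connect.jdbc.JdbcSourceConnector\","
                + "\"connection.url\": \"jdbc:postgresql://localhost:5432/mydb\","
                + "\"connection.user\": \"postgres\","
                + "\"connection.password\": \"secret\","
                + "\"table.whitelist\": \"my_table\","
                + "\"mode\": \"incrementing\","
                + "\"incrementing.column.name\": \"id\","
                + "\"topic.prefix\": \"pg-\""
                + "}}";

        // POST the definition to a Connect worker (default REST port is 8083).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}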
precise document only for apache Kafka with jdbc connector
There's the official configuration page and a handful of blogs (by Confluent) about it.
how can I make this solution dockerized?
The Confluent Docker images would be best for this, though you may have to confluent-hub install the JDBC connector into an image of your own.
I'd recommend Debezium as the source, though.
I need to run a Spring Boot-based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting the "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried to set the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws doesn't seem to care.
I found examples that use a CloudFormation stack, for some very strange reason, just to run the app locally.
When I use the AWS SDK for Java, the default AWS chain is used without any issues - I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use version 2.2.2.RELEASE of Spring Boot and 2.2.1.RELEASE of Spring Cloud. I have a feeling they introduced a regression, because in previous versions it worked without problems.
Any ideas how to run the app locally?
Adding the following lines to the configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials. The AWS SDK uses it for the region and other configuration parameters too, so this is definitely a Spring bug.
It still prints a warning about not being able to connect to the instance metadata service once during application start, but more or less this solution can be used for local runs.
Without the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace while closing the app (I use CloudWatch metrics in my app).
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.
I have a Spring Boot 2 app that uses Spring Data Couchbase.
I have this message in the logs every minute:
2019-11-12 13:48:48,924 WARN : gid: trace= span= [cb-orphan-1] c.c.c.c.t.DefaultOrphanResponseReporter Orphan responses observed: [{"top":[{"r":"10.120.93.220:8092","s":"view","c":"5BE128F6F96A4D28/FFFFFFFFDA2C8C52","l":"10.125.216.233:49893"}],"service":"view","count":1}]
That is from the new Response Time Observability feature underlying the Java SDK.
It would seem to indicate that there are view requests which time out but are eventually received later; however, I have no views defined in the Couchbase DB.
I would like to know if it is possible to disable the OrphanResponseLogReporter via YML file config in a Spring Boot app, e.g. by setting logIntervalNanos to 0.
No, unfortunately, you cannot do it. Only a subset of Couchbase's configuration properties is supported in application.yml, namely the ones present in the CouchbaseProperties.java class.
You could, however, use a system property: com.couchbase.orphanResponseReportingEnabled=false (e.g. passed as -Dcom.couchbase.orphanResponseReportingEnabled=false). It is independent of Spring; it is read directly by the Couchbase SDK.
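If you'd rather not rely on JVM arguments, a minimal sketch of setting it programmatically is below; this assumes the property is read when the Couchbase environment is created, so it has to run before the Spring context starts (the class name is a placeholder):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        // Must be set before the Couchbase SDK builds its environment,
        // i.e. before Spring Data Couchbase auto-configuration runs.
        System.setProperty("com.couchbase.orphanResponseReportingEnabled", "false");
        SpringApplication.run(Application.class, args);
    }
}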
Edit:
As a workaround, you can set the logging level in application.yml:
logging.level.com.couchbase.client.core.tracing.DefaultOrphanResponseReporter: ERROR
I would like to know if I can start an Ignite cache from a Java client. I am using Cassandra as the persistence store and POJO configurations to work with the cache and Cassandra. Is that possible without providing any named cache configuration on the server side?
Please share your thoughts.
A cache can be started dynamically using the Ignite#createCache method. However, classes that are required for this cache need to be deployed explicitly in advance, before the servers are started.
In your case you will have to deploy the POJO classes, because they are currently required by the Cassandra store. You will be able to skip this step once this ticket is implemented: https://issues.apache.org/jira/browse/IGNITE-5270
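For example, a (thick) client node can create a cache dynamically roughly like this; a minimal sketch where the cache name and key/value types are placeholders, and any POJO/Cassandra store classes must already be on the server nodes' classpath:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientCacheStarter {

    public static void main(String[] args) {
        // Join the existing cluster as a client node.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Define the cache on the client side; the Cassandra store factory
            // would be added to this configuration as well.
            CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");

            // createCache fails if the cache already exists; getOrCreateCache is idempotent.
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1L, "hello");
        }
    }
}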
I have created a database pool on WASCE 3.0.0.3 (WebSphere Application Server Community Edition) which I am using through JNDI. I want to set the Oracle network data encryption and integrity properties for this database pool. The properties I want to set in particular are oracle.net.encryption_client and oracle.net.encryption_types_client.
How can I set these properties? I do not see any option to set these properties while creating the connection pool and I cannot find any documentation related to the same.
You probably cannot find any documentation on how to do this because WAS 3.0 went out of service in 2003, so any documentation for it is long gone.
If you upgrade to a newer version of WAS traditional (or Liberty), you will find much more documentation and people willing to help you. Additionally, in WAS 6.1 an admin console (UI) was added, which will probably walk you through what you are trying to do.
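For what it's worth, oracle.net.encryption_client and oracle.net.encryption_types_client are standard Oracle JDBC thin-driver connection properties rather than anything WASCE-specific, so whatever mechanism the pool offers ultimately just has to pass them through to the driver. A minimal plain-JDBC sketch for illustration (URL, credentials, and the encryption type are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class OracleEncryptedConnectionDemo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");        // placeholder credentials
        props.setProperty("password", "tiger");
        // Ask the client side to require network encryption
        // (other accepted values: REQUESTED, ACCEPTED, REJECTED).
        props.setProperty("oracle.net.encryption_client", "REQUIRED");
        // Restrict which algorithms the client will negotiate.
        props.setProperty("oracle.net.encryption_types_client", "( AES256 )");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props)) {
            System.out.println("Connected, encryption requested: " + !conn.isClosed());
        }
    }
}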