Retry connection to Cassandra node upon startup - java

I want to use Docker to start my application and Cassandra database, and I would like to use Docker Compose for that. Unfortunately, Cassandra starts much slower than my application, and since my application eagerly initializes the Cluster object, I get the following exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra/172.18.0.2:9042 (com.datastax.driver.core.exceptions.TransportException: [cassandra/172.18.0.2:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1454)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
According to the stack trace and a little debugging, it seems that the Cassandra Java driver does not apply retry policies to the initial startup, which seems odd to me. Is there a way to configure the driver so that it keeps trying to connect to the server until it succeeds?

You should be able to write some try/catch logic around the NoHostAvailableException and retry the connection after a 5-10 second wait. I would recommend retrying only a limited number of times, and rethrowing the exception once enough time has passed that Cassandra should have started.
Example (the original pseudocode fleshed out; doConnectionStuff() stands in for your own Cluster/Session setup):
import java.util.concurrent.TimeUnit;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

Session makeCassandraConnection(int retryCount) {
    RuntimeException lastException = new IllegalStateException("No connection attempt was made");
    while (retryCount > 0) {
        try {
            return doConnectionStuff();  // your Cluster.builder()...connect() logic
        } catch (NoHostAvailableException e) {
            lastException = e;
            retryCount--;
            try {
                Thread.sleep(TimeUnit.SECONDS.toMillis(5));  // wait before retrying
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();  // restore the interrupt flag and give up
                break;
            }
        }
    }
    throw lastException;
}

If you don't want to change your client code, and your client application's Docker container stops because of the error, you can use the following attribute for the client app in your docker-compose file.
restart: unless-stopped
That will restart the client application container whenever it fails, until you stop it manually. Example docker-compose.yml file:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped

The DataStax driver cannot be configured this way.
If this is only a problem with Docker and you do not wish to change your code, you could consider using something such as wait-for-it, a simple script that waits for a TCP port to be listening before starting your application. 9042 is Cassandra's native transport port.
Other options are discussed here in the Docker documentation. I have personally only used wait-for-it, but found it useful when working with Cassandra inside Docker.
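If you would rather not depend on an external script, the same idea can be sketched in your own Java startup code: poll the port until it accepts a TCP connection before building the Cluster. A minimal sketch, assuming the hostname cassandra and port 9042 from the compose examples above:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Rough equivalent of what wait-for-it does: block until host:port accepts TCP
// connections, polling once per second, and give up after a deadline.
static void waitForPort(String host, int port, int timeoutSeconds) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutSeconds * 1000L;
    while (System.currentTimeMillis() < deadline) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1000);  // 1s connect timeout
            return;                                                   // port is listening
        } catch (IOException e) {
            Thread.sleep(1000);                                       // not up yet, try again
        }
    }
    throw new IllegalStateException("Timed out waiting for " + host + ":" + port);
}
You would call something like waitForPort("cassandra", 9042, 120); before initializing the Cluster object.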

Refer to: https://stackoverflow.com/a/69612290/10428392
You could modify your Docker Compose file like this, with a health check:
version: '3.8'
services:
  application-service:
    image: your-application-service:0.0.1
    depends_on:
      cassandra:
        condition: service_healthy
  cassandra:
    image: cassandra:4.0.1
    ports:
      - "9042:9042"
    healthcheck:
      test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra", "-e describe keyspaces"]
      interval: 15s
      timeout: 10s
      retries: 10

If you are orchestrating many containers, you should go for Docker Compose with the depends_on option:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped
    depends_on:
      - cassandra

Try increasing the connection timeout; this sometimes happens on AWS and the like. I think you're looking at a late stage in the error log: at some point it should tell you it couldn't connect because of a timeout or an unreachable network, and only then does it flag nodes as unavailable.
Using phantom, the code looks like this (a plain Java driver equivalent is sketched after the resource link below):
val Connector = ContactPoints(Seq(seedHost))
  .withClusterBuilder(_.withSocketOptions(
    new SocketOptions()
      .setReadTimeoutMillis(1500)
      .setConnectTimeoutMillis(20000)
  )).keySpace("bla")
Resource Link:
com.datastax.driver.core.exceptions.NoHostAvailableException #445
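Since the question uses the plain Java driver rather than phantom, roughly equivalent socket options there would look like the sketch below (the contact point name cassandra is taken from the compose examples above, not from the question):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

// DataStax Java driver 3.x: a generous connect timeout gives a slow-starting
// node more time to come up (the driver's default is 5000 ms).
Cluster cluster = Cluster.builder()
        .addContactPoint("cassandra")              // hostname of the Cassandra service
        .withSocketOptions(new SocketOptions()
                .setReadTimeoutMillis(1500)
                .setConnectTimeoutMillis(20000))   // 20s instead of the 5s default
        .build();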

Related

docker compose problem while using spring boot mysql docker

I'm getting this exception while trying to run docker compose
app-server_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
app-server_1 |
app-server_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
My docker-compose.yml looks like this
version: "3.7"
services:
db:
image: mysql:5.7
ports:
- "3306:3306"
restart: always
environment:
MYSQL_DATABASE: ppmt
MYSQL_USER: vilius
MYSQL_PASSWORD: vilius123
MYSQL_ROOT_PASSWORD: root
networks:
- backend
app-server:
build:
context: simple-fullstack
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: always
depends_on:
- db
environment:
SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
SPRING_DATASOURCE_USERNAME: vilius
SPRING_DATASOURCE_PASSWORD: vilius123
networks:
- backend
networks:
backend:
application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url =jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=vilius
spring.datasource.password=vilius123
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.datasource.initialize=true
I have been struggling with this for a while and have seen people with similar problems, but still haven't found a solution. Is there a problem with my docker-compose.yml?
Adding to @Shawrup's excellent description of what's happening, another solution is to add a healthcheck to the MySQL container. This will cause Docker Compose to wait for the healthcheck to be successful before starting any dependent containers.
Your MySQL container configuration could look something like this:
db:
  image: mysql:5.7
  ports:
    - "3306:3306"
  restart: always
  environment:
    MYSQL_DATABASE: ppmt
    MYSQL_USER: vilius
    MYSQL_PASSWORD: vilius123
    MYSQL_ROOT_PASSWORD: root
  networks:
    - backend
  healthcheck:
    test: "/usr/bin/mysql --user=root --password=root --execute \"SHOW DATABASES;\""
    interval: 2s
    timeout: 20s
    retries: 10
The problem here is that your application is trying to connect to MySQL before it's ready. According to the official documentation:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
Use a tool such as wait-for-it, dockerize, sh-compatible wait-for, or RelayAndContainers template. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
From here.
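A minimal sketch of that retry approach with plain JDBC, using the URL and credentials from the docker-compose.yml in the question (Spring's connection pool has its own retry/timeout settings, so treat this only as an illustration of the idea):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Keep trying to open a connection until MySQL accepts it or we run out of attempts.
static void waitForDatabase(int maxAttempts) throws InterruptedException {
    String url = "jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false";
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try (Connection conn = DriverManager.getConnection(url, "vilius", "vilius123")) {
            return;                      // database is up and reachable
        } catch (SQLException e) {
            Thread.sleep(5000);          // wait 5 seconds before the next attempt
        }
    }
    throw new IllegalStateException("Database did not become available in time");
}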
I found that Spring Boot 2.3.4 does not stop your app if the DB connection fails. It will retry when the app tries to access the DB, and the connection will be established once the DB is up.
Another workaround is to start your DB first and then start your app:
docker-compose up db
docker-compose up app-server

proper usage of MySQL docker container while integrating with java application

Currently I am trying to implement some eCommerce functionality in Java, and I would like to pull data from MySQL. For this purpose I wanted to use a MySQL Docker container for the first time. On my Linux server (host) I have installed Docker and created a custom MySQL container from an image. My question is how to use this container from my local computer. Is this the way we are supposed to access MySQL locally, or do we create a different container for our local machine by installing Docker from the image? And how do I access this container from the host server (I mean the endpoint)? Could someone please elaborate on the typical use case of a MySQL Docker container? If you have any questions about my query, please ask and I will reply in the comments.
In my opinion:
First, I used the official MySQL image and created a docker-compose.yaml file for the new container.
The following commands are an example for you.
Just pay attention to create a network first and put your MySQL and Java containers on the same network:
docker network create mysql-network
Then create a YAML file and put in this service definition:
mysql:
  image: mysql:8.0
  container_name: mysql
  restart: always
  networks:
    - mysql-network
  command: --default-authentication-plugin=mysql_native_password
  environment:
    MYSQL_ROOT_PASSWORD: 123456789
    MYSQL_DATABASE: database
    MYSQL_USER: user_db
    MYSQL_PASSWORD: 123456
  ports:
    - 3306:3306
  volumes:
    - /home/mysql_backup/storage/mysql-data/dev:/var/lib/mysql
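From the Java side, connecting to this container is then a normal JDBC connection. A small sketch, assuming the database name, user, and password from the compose file above; use the service name mysql from another container on the same network, or the host machine's IP from your local computer, since port 3306 is published:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MySqlContainerCheck {
    public static void main(String[] args) throws SQLException {
        // "mysql" resolves from a container on the same Docker network;
        // replace it with the host's IP when connecting from your local machine.
        String url = "jdbc:mysql://mysql:3306/database";
        try (Connection conn = DriverManager.getConnection(url, "user_db", "123456")) {
            System.out.println("Connected to MySQL: " + conn.isValid(2));
        }
    }
}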

Connection Refused connecting from docker to Elasticsearch docker container

I am trying to access an Elasticsearch database inside a container from a Java application which is also inside a container.
Both of them are in the following docker-compose.yml:
version: "3.7"
services:
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
ports:
- "9200:9200"
- "9300:9300"
networks:
- elastic
java-app:
image: neileen/dockerhub_app:java-latest
ports:
- "8080:8080"
depends_on:
- es
networks:
- elastic
networks:
elastic:
driver: bridge
As you can see, I use a bridge network to make the containers visible and accessible to one another.
In my Java Application I use RestHighLevelClient:
RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(
                new HttpHost("es", 9200, "http")));
I also tried using "localhost" and "0.0.0.0" as the hostname instead of "es" with no result.
The exception that I keep getting is:
java-app_1 | The cluster is unhealthy: Connection refused
java-app_1 | java.net.ConnectException: Connection refused
java-app_1 | at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:804)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212)
I am aware that this is a problem related to port 9200 not being visible inside the java-app container, but I do not know where the problem is, as I already provided a custom network and used the container name as the hostname.
Note
ES is accessible through "http://localhost:9200"
Thank you in advance.
Elasticsearch does some bootstrap checks on startup. If you want to start it as a single node in Docker, you need to disable these, or it will not open the TCP/IP port.
This can be done by specifying an environment parameter: discovery.type=single-node.
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node

Apache Beam : cannot access Pub/Sub Emulator via docker-compose

I have built a piece of software which uses GCP Pub/Sub as a message queue, Apache Beam to build a pipeline, and Flask to build a web server. It runs smoothly in production, but I have trouble making all the pieces connect together with docker-compose, in particular the Apache Beam pipeline.
I have followed Dataflow pipeline and pubsub emulator to make the pipeline listen to a GCP Pub/Sub emulator, replacing the localhost from the SO answer with the name of the service defined in my docker-compose.yaml:
pubsub_emulator:
  build: docker_images/message_queue
  ports:
    - 8085:8085
webserver:
  build: docker_images/webserver
  environment:
    PUBSUB_EMULATOR_HOST: pubsub_emulator:8085
    PUBSUB_PROJECT_ID: my-dev
  restart: unless-stopped
  ports:
    - 8899:8080
  depends_on:
    - pubsub_emulator
pipeline:
  build: docker_images/pipeline
  environment:
    PUBSUB_EMULATOR_HOST: pubsub_emulator:8085
    PUBSUB_PROJECT_ID: my-dev
  restart: unless-stopped
  depends_on:
    - pubsub_emulator
The webserver is able to access the Pub/Sub emulator and to generate topics.
However, the pipeline fails on start-up with a MalformedURLException:
Caused by: java.lang.IllegalArgumentException: java.net.MalformedURLException: no protocol: pubsub_emulator:8085/v1/projects/my-dev/subscriptions/sync_beam_1702190853678138166
The pipeline options seem fine; I defined them with:
final String pubSubEmulatorHost = System.getenv("PUBSUB_EMULATOR_HOST");
BasePipeline.PipeOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
.as(BasePipeline.PipeOptions.class);
options.as(DataflowPipelineOptions.class).setStreaming(true);
options.as(PubsubOptions.class).setPubsubRootUrl(pubSubEmulatorHost);
Pipeline pipeline = Pipeline.create(options);
Does anyone have a hint on what is happening and how to solve it? Is the only solution to put the emulator and the pipeline in the same container?
You can try to change the value to the following:
http://pubsub_emulator:8085
The error is complaining about a missing protocol, which in your case should be http.
According to the Apache Beam SDK, the value is expected to be a fully qualified URL:
// getPubsubRootUrl
@Default.String(value="https://pubsub.googleapis.com")
@Hidden
java.lang.String getPubsubRootUrl()
// Root URL for use with the Google Cloud Pub/Sub API.
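Given that, a small adjustment to the pipeline code from the question could prepend the protocol when the environment variable only carries host:port. This is just a sketch that reuses the options object from the snippet above:
import org.apache.beam.sdk.io.gcp.pubsub.PubsubOptions;

// PUBSUB_EMULATOR_HOST is "pubsub_emulator:8085" in the compose file,
// so add the protocol that the Beam option expects.
String emulatorHost = System.getenv("PUBSUB_EMULATOR_HOST");
String rootUrl = emulatorHost.startsWith("http") ? emulatorHost : "http://" + emulatorHost;
options.as(PubsubOptions.class).setPubsubRootUrl(rootUrl);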
However, if you come from a Python background, you will notice that the Python SDK, which uses gRPC Python as shown here, expects only the server address, consisting of the address and the port:
# A snippet from the google-cloud-python library.
if os.environ.get("PUBSUB_EMULATOR_HOST"):
    kwargs["channel"] = grpc.insecure_channel(
        target=os.environ.get("PUBSUB_EMULATOR_HOST")
    )
grpc.insecure_channel(target, options=None)
Creates an insecure Channel to a server.
The returned Channel is thread-safe.
Parameters:
target – The server address

Trouble communicating between two docker containers

I'm new to Docker and I'm trying to connect my Spring Boot app, running in my boot-example docker container, to a MySQL server running in my mymysql docker container on port 6603, both running on the same physical machine.
The fact is: if I connect my Spring Boot app to my mymysql docker container in order to communicate with the database, I get no errors and everything works fine.
When I move my Spring Boot application into my boot-example container and try to communicate (through Hibernate) with my mymysql container, I get this error:
2018-02-05 09:58:38.912 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_111]
My spring boot application.properties are:
server.port=8083
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://localhost:6603/mydockerdb
spring.datasource.username=root
spring.datasource.password=mypassword
It works fine until my Spring Boot app runs in a Docker container on port 8082 (after the Docker image is correctly built):
docker run -it -p 8082:8083 boot-example
You cannot use localhost inside the container; there, localhost is the container itself. Hence, you will always get the connection refused error.
You can do one of the following:
1. Add your host machine's IP in the application.properties file of your Spring Boot application. (Not recommended, since it breaks Docker portability.)
2. If you want to keep using localhost, use --net=host while starting the container. (Not recommended for production, since no logical network layer exists.)
3. Use --links for container communication with a DNS name. (Deprecated/legacy.)
4. Create a compose file and call your DB from the Spring Boot app by its service name, since both will be on the same network and tightly integrated. (Recommended.)
PS - Whenever you need to integrate multiple containers, go for docker-compose version 3+. Use docker run|build to understand the fundamentals and to perform dry/test runs.
As @vivekyad4v suggested, the easiest way to achieve what you want is to use docker-compose, which has better container communication integration.
Docker Compose is a tool for managing one or more Docker containers. It uses a single configuration file called docker-compose.yml.
For more information about docker-compose, please take a look at the documentation and the compose file reference.
In my experience, it is good practice to follow the SRP (single responsibility principle), thus creating one container for your database and one for your application. They communicate over a network you specify in your configuration.
The following example docker-compose.yml might help you:
version: '2'
networks:
  # your network name
  somename:
    driver: bridge
services:
  # PHP server
  php:
    image: dalten/php5.6-apache
    ports:
      - 80:80
    volumes:
      - .application_path:/some/application/path
    # your container network name defined at the beginning
    networks:
      - somename
  # MySQL server for backend
  mysql:
    image: dalten/mysql:dev
    ports:
      - 3306:3306
    # The /var/lib/mysql volume MUST be specified to achieve data persistence over container restarts
    volumes:
      - ./mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: backend
    # your container network name defined at the beginning
    networks:
      - somename
Note: Communication between containers inside the network is done by calling the service name from inside a container.
The connection parameters to the MySQL container from PHP would, in this example, be:
hostname: mysql
port: 3306
database: backend
user: root
password: root
As per the above suggestion, Docker Compose is one way, but if you don't want to go with compose/swarm mode:
Simply create your own network using docker network create myNet
Deploy your containers attached to the created network with --network myNet
Change your spring.datasource.url to jdbc:mysql://mymysql:6603/mydockerdb
By using the DNS resolution of the Docker daemon, containers can discover each other and hence can communicate.
[DNS is not supported by the default bridge network; a user-defined bridge network does support it.]
For more information: https://docs.docker.com/engine/userguide/networking/
