Docker Compose problem while using Spring Boot + MySQL + Docker - java

I'm getting this exception while trying to run docker compose
app-server_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
app-server_1 |
app-server_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
My docker-compose.yml looks like this
version: "3.7"
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: ppmt
      MYSQL_USER: vilius
      MYSQL_PASSWORD: vilius123
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
  app-server:
    build:
      context: simple-fullstack
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    restart: always
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
      SPRING_DATASOURCE_USERNAME: vilius
      SPRING_DATASOURCE_PASSWORD: vilius123
    networks:
      - backend
networks:
  backend:
application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=vilius
spring.datasource.password=vilius123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
spring.datasource.initialize=true
I have been struggling with this for a while and have seen people with similar problems, but still haven't found a solution. Is there a problem with my docker-compose.yml?

Adding to @Shawrup's excellent description of what's happening, another solution is to add a healthcheck to the MySQL container. This lets Docker Compose wait for the healthcheck to pass before starting any dependent containers.
Your MySQL container configuration could look something like this:
db:
  image: mysql:5.7
  ports:
    - "3306:3306"
  restart: always
  environment:
    MYSQL_DATABASE: ppmt
    MYSQL_USER: vilius
    MYSQL_PASSWORD: vilius123
    MYSQL_ROOT_PASSWORD: root
  networks:
    - backend
  healthcheck:
    test: "/usr/bin/mysql --user=root --password=root --execute \"SHOW DATABASES;\""
    interval: 2s
    timeout: 20s
    retries: 10
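Note that the dependent service has to opt in to waiting for health. With the long-form depends_on syntax supported by recent Docker Compose (the Compose specification; the legacy v3 short form ignores conditions), the app-server side might look something like this sketch:

```yaml
app-server:
  build:
    context: simple-fullstack
    dockerfile: Dockerfile
  depends_on:
    db:
      condition: service_healthy
```

With this in place, Compose starts app-server only after db's healthcheck reports healthy, rather than merely after the db container is running.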

The problem here is that your application is trying to connect to MySQL before it is ready. According to the official documentation:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
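As a minimal sketch of that application-side retry idea (the helper and names here are illustrative, not from the question's code):

```java
import java.util.concurrent.Callable;

public class ConnectRetry {
    // Generic retry helper: run the attempt until it succeeds or attempts run out,
    // sleeping between tries; re-throw the last failure if all attempts fail.
    public static <T> T withRetry(Callable<T> attempt, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = new IllegalArgumentException("maxAttempts must be > 0");
        for (int i = 1; i <= maxAttempts; i++) {
            try {
                return attempt.call();
            } catch (Exception e) {
                last = e;                 // remember the failure
                Thread.sleep(waitMillis); // back off before the next attempt
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a database that refuses the first two connection attempts.
        final int[] calls = {0};
        String conn = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("Communications link failure");
            return "connected";
        }, 5, 10);
        System.out.println(conn + " after " + calls[0] + " attempts");
    }
}
```

In a real application the Callable would wrap something like DataSource.getConnection(), with a longer wait between attempts.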
Use a tool such as wait-for-it, dockerize, sh-compatible wait-for, or RelayAndContainers template. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
From here.
I found that Spring Boot 2.3.4 does not stop your app if the initial DB connection fails. The connection is retried when the app next accesses the database, and it is established once the database is up.
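If you want to rely on that behavior explicitly, HikariCP (Spring Boot's default pool) has an initialization-fail-timeout setting; as far as I know, a negative value skips the pool's initial connection check, though you should verify this against your versions:

```properties
# Assumption: HikariCP is the pool in use. A negative value skips the
# initial connection attempt, so the app starts even if the DB is not up yet.
spring.datasource.hikari.initialization-fail-timeout=-1
```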
Another workaround is to start your database first and then start your app:
docker-compose up db
docker-compose up app-server


Zuul Redirect Issue in Docker container

I am using a Grails application, and a couple of microservices have been written that are accessed by the Grails application. Requests come from Grails to the gateway, which is on port 4000. Based on the request, the gateway redirects to the respective service: auth, notification, or report. These are the microservices.
Now I am spinning up the microservices in Docker containers: service discovery is on 8761, the config server on 8888, and the Zuul gateway on port 4000. The notification service runs on port 8755, and when a request for notification comes to the gateway, it redirects to notification first, which works fine for me. As per the application flow, notification has to validate the token, so it calls the auth service, which is exposed on port 8282, via the gateway. The moment notification sends a request to auth via the gateway, it fails ...
Everything works when all services are up and running on 'localhost'. This issue happens when I start the services in Docker containers, since a container will not recognize localhost, hence I am changing the Zuul configuration as below:
zuul:
  ignoredServices: '*'
  host:
    connect-timeout-millis: 120000
    socket-timeout-millis: 120000
  routes:
    notification-service:
      path: /notification/**
      url: http://${NOTIFICATION_SERVICE_HOST:127.0.0.1}:8755
      stripPrefix: true
      sensitiveHeaders:
    report-service:
      path: /report/**
      url: http://${REPORT_SERVICE_HOST:127.0.0.1}:8762
      stripPrefix: true
      sensitiveHeaders:
    p2p-auth-service:
      path: /authService/**
      url: http://${AUTH_SERVICE_HOST:172.17.0.1}:8282
      stripPrefix: true
      sensitiveHeaders:
When a request comes to notification, it arrives properly, which means the NOTIFICATION_SERVICE_HOST placeholder is replaced by the IP I am passing. But the same is not happening when notification calls the auth service, where I set AUTH_SERVICE_HOST; it always fails with a 500 null message.
my docker compose file is as below
version: '3.7'
services:
  configserver:
    image: config
    container_name: configserver
    ports:
      - "8888:8888"
    networks:
      - bridge-abc
    environment:
      SPRING.PROFILES.ACTIVE: native
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
  registry:
    image: registry
    container_name: registry
    ports:
      - "8762:8761"
    networks:
      - bridge-abc
    depends_on:
      - configserver
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
  gateway:
    image: gateway
    container_name: gateway
    ports:
      - "4000:4000"
    networks:
      - bridge-abc
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
      NOTIFICATION_SERVICE_HOST: 172.17.0.1
      AUTH_SERVICE_HOST: 172.17.0.1
  authservice:
    image: auth-service
    container_name: authservice
    ports:
      - "8282:8282"
    networks:
      - bridge-abc
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      DB_HOST: 1.2.3.4
      DB_PORT: 1234
      DB_NAME: my-db-name
      DB_USERNAME: user
      DB_PASSWORD: password
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
  notification:
    image: notification-service
    container_name: notification
    ports:
      - 8755:8755
    networks:
      - bridge-perfios
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
      DB_HOST: 1.2.3.4
      DB_PORT: 1234
      DB_NAME: my-db-name
      DB_USERNAME: user
      DB_PASSWORD: password
In the above docker-compose file I am passing NOTIFICATION_SERVICE_HOST and AUTH_SERVICE_HOST as 172.17.0.1 so that the containers can talk to each other. NOTIFICATION_SERVICE_HOST works properly and its placeholder is replaced by 172.17.0.1, but AUTH_SERVICE_HOST is not working: its value is never replaced and always falls back to 127.0.0.1.
I am not sure what is wrong here; please suggest if some configuration is missing or I am making a mistake.
I have resolved this issue. The problem was happening in each microservice while validating the token: to do this, a request goes to the gateway and is then redirected to the auth service. While resolving the DNS name, each microservice mapped it to the default IP, i.e. 127.0.0.1. I changed each microservice container's /etc/hosts as below:
172.17.0.1 'my domain name'
and this resolved my issue.
Now I need to understand how to make this generic, so that if I remove the container and run docker-compose up again, the entry below is automatically added to the container's hosts file:
172.17.0.1 'my domain name'
Any suggestion is much appreciated.
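For what it's worth, Compose can inject such hosts-file entries at container creation via extra_hosts, so the mapping survives removing and re-creating containers. A sketch (the domain name here is a placeholder for yours):

```yaml
services:
  notification:
    extra_hosts:
      - "my-domain-name:172.17.0.1"
```

Each entry is written into the container's /etc/hosts on every docker-compose up, which should make the manual edit unnecessary.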

proper usage of MySQL docker container while integrating with java application

Currently I am trying to implement some eCommerce functionality in Java, and I would like to pull data from MySQL. For this purpose I want to use a MySQL Docker container for the first time. On my Linux server (host) I have installed Docker and created a custom MySQL container from an image. The question is: how do I use this container from my local computer? Is this the way we are supposed to access MySQL locally, or do we create a different container for our local machine by installing Docker and pulling the image? And how do I access this container from the host server (I mean the endpoint)? Could someone please explain the typical use case of a MySQL Docker container? If you have any questions about my query, please ask and I will reply in the comments.
In my opinion:
First, use the official MySQL image and create a docker-compose.yaml file for the new container.
The following commands are an example for you.
Just pay attention to create a network first and put your MySQL and Java containers on the same network:
docker network create mysql-network
Then create a YAML file with this content:
services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    restart: always
    networks:
      - mysql-network
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: 123456789
      MYSQL_DATABASE: database
      MYSQL_USER: user_db
      MYSQL_PASSWORD: 123456
    ports:
      - 3306:3306
    volumes:
      - /home/mysql_backup/storage/mysql-data/dev:/var/lib/mysql
networks:
  mysql-network:
    external: true
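As for the endpoint: from another container on the same mysql-network, the hostname is the service name (mysql) and the port is 3306; from the host machine or your local computer, you connect to the server's IP on the published port 3306. With the credentials above, a Spring-style configuration might look like this sketch (<server-ip> is a placeholder for your server's address):

```properties
# From another container on mysql-network, use the service name as host:
spring.datasource.url=jdbc:mysql://mysql:3306/database
spring.datasource.username=user_db
spring.datasource.password=123456
# From the host machine or a remote computer, the endpoint is the published
# port instead: jdbc:mysql://<server-ip>:3306/database
```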

Connection Refused connecting from docker to Elasticsearch docker container

I am trying to access Elasticsearch database inside a container from a Java application which is also inside a container.
Both of them are in the following docker-compose.yml:
version: "3.7"
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic
  java-app:
    image: neileen/dockerhub_app:java-latest
    ports:
      - "8080:8080"
    depends_on:
      - es
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
As you can see, I use a bridge network to make the containers visible and accessible to one another.
In my Java Application I use RestHighLevelClient:
RestHighLevelClient client = new RestHighLevelClient(
    RestClient.builder(
        new HttpHost("es", 9200, "http")));
I also tried using "localhost" and "0.0.0.0" as the hostname instead of "es" with no result.
The exception that I keep getting is:
java-app_1 | The cluster is unhealthy: Connection refused
java-app_1 | java.net.ConnectException: Connection refused
java-app_1 | at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:804)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212)
I am aware that this is a problem related to the port 9200 not being visible inside the java-app container but I do not know where the problem is as I already provided a custom network and used the container name as the hostname.
Note
ES is accessible through "http://localhost:9200"
Thank you in advance.
Elasticsearch does some bootstrap checks on startup. If you want to start it as a single node in Docker, you need to disable these, or it will not open the TCP/IP port.
This can be done by specifying an environment parameter: discovery.type=single-node.
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node

Trouble communicating between two docker containers

I’m new to Docker and I’m trying to connect my Spring Boot app, running in my boot-example Docker container, to a MySQL server running in my mymysql Docker container on port 6603, both on the same physical machine.
The thing is: if I connect my Spring Boot app (running outside Docker) to the mymysql container, I get no errors and everything works fine.
When I move my Spring Boot application into my boot-example container and try to communicate (through Hibernate) with my mymysql container, I get this error:
2018-02-05 09:58:38.912 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_111]
My spring boot application.properties are:
server.port=8083
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://localhost:6603/mydockerdb
spring.datasource.username=root
spring.datasource.password=mypassword
It stops working once my Spring Boot app runs in a Docker container on port 8082 (after the Docker image is correctly built):
docker run -it -p 8082:8083 boot-example
You cannot use localhost inside the container; localhost there is the container itself. Hence, you will always get the connection-refused error.
You can do one of the following:
Add your host machine's IP in the application.properties file of your Spring Boot application. (Not recommended, since it breaks Docker's portability.)
If you want to keep using localhost, use --net=host while starting the container. (Not recommended for production, since no logical network layer exists.)
Use --links for container communication with a DNS name. (Deprecated/legacy.)
Create a compose file and reach your DB from the Spring Boot app via the service name, since the services will be on the same network and highly integrated with each other. (Recommended.)
PS - Whenever you need to integrate multiple containers, go for docker-compose version 3+; use docker run|build to understand the fundamentals and to perform dry/test runs.
As @vivekyad4v suggested, the easiest way to achieve what you want is to use docker-compose, which has better container-communication integration.
Docker-compose is a tool for managing one or more Docker containers. It uses a single configuration file called docker-compose.yml.
For better information about docker-compose, please take a look at documentation and compose file reference
In my experience, it is good practice to follow the SRP (single responsibility principle): create one container for your database and one for your application. They communicate over a network you specify in your configuration.
Following example of docker-compose.yml might help you:
version: '2'
networks:
  # your network name
  somename:
    driver: bridge
services:
  # PHP server
  php:
    image: dalten/php5.6-apache
    ports:
      - 80:80
    volumes:
      - .application_path:/some/application/path
    # your container network name defined at the beginning
    networks:
      - somename
  # MySQL server for backend
  mysql:
    image: dalten/mysql:dev
    ports:
      - 3306:3306
    # The /var/lib/mysql volume MUST be specified to achieve data persistence over container restart
    volumes:
      - ./mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: backend
    # your container network name defined at the beginning
    networks:
      - somename
Note: Communication between containers inside network can be achieved by calling the service name from inside container.
The connection parameters to MySQL container from PHP, would in this example be:
hostname: mysql
port: 3306
database: backend
user: root
password: root
As per the above suggestion, docker-compose is one way, but if you don't want to go with compose/swarm mode:
Simply create your own network using docker network create myNet
Deploy your containers attached to the created network with --network myNet
Change your spring.datasource.url to jdbc:mysql://mymysql:6603/mydockerdb
Using the DNS resolution of the Docker daemon, containers can discover each other and hence can communicate.
[DNS is not supported by the default bridge network. A user-defined bridge network does support it.]
For more information: https://docs.docker.com/engine/userguide/networking/

Retry connection to Cassandra node upon startup

I want to use Docker to start my application and Cassandra database, and I would like to use Docker Compose for that. Unfortunately, Cassandra starts much slower than my application, and since my application eagerly initializes the Cluster object, I get the following exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra/172.18.0.2:9042 (com.datastax.driver.core.exceptions.TransportException: [cassandra/172.18.0.2:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1454)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
According to the stacktrace and a little debugging, it seems that Cassandra Java driver does not apply retry policies to the initial startup. This seems kinda weird to me. Is there a way for me to configure the driver so it will continue its attempts to connect to the server until it succeeds?
You should be able to write some try/catch logic around the NoHostAvailableException to retry the connection after a 5-10 second wait. I would recommend retrying only a few times before re-throwing the exception, once enough time has passed that you know the database should have started.
Example:
Connection makeCassandraConnection(int retryCount) throws InterruptedException {
    RuntimeException lastException = new IllegalStateException("no connection attempts made");
    while (retryCount > 0) {
        try {
            return doConnectionStuff();
        } catch (NoHostAvailableException e) {
            lastException = e;
            retryCount--;
            Thread.sleep(TimeUnit.SECONDS.toMillis(5));
        }
    }
    throw lastException;
}
If you don't want to change your client code, and your client application's Docker container stops because of the error, you can use the following attribute for the client app in your docker-compose file:
restart: unless-stopped
That will restart your client application container as many times as it fails. Example docker-compose.yml file:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped
The Datastax driver cannot be configured this way.
If this is only a problem with Docker and you do not wish to change your code, you could consider using something such as wait-for-it which is a simple script which will wait for a TCP port to be listening before starting your application. 9042 is cassandra's native transport port.
Other options are discussed in the Docker documentation; personally I have only used wait-for-it, but I found it useful when working with Cassandra within Docker.
refer: https://stackoverflow.com/a/69612290/10428392
You could modify the docker-compose file like this, with a health check:
version: '3.8'
services:
  application-service:
    image: your-application-service:0.0.1
    depends_on:
      cassandra:
        condition: service_healthy
  cassandra:
    image: cassandra:4.0.1
    ports:
      - "9042:9042"
    healthcheck:
      test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra", "-e describe keyspaces"]
      interval: 15s
      timeout: 10s
      retries: 10
If you are orchestrating many containers, you should go for docker-compose with the depends_on tag:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped
    depends_on:
      - cassandra
Try increasing the connection timeout; that's the kind of thing that sometimes happens on AWS and the like. I think you're looking at a late stage in the error log: at some point it should tell you it couldn't connect because of a timeout or unreachable network, and only then does it flag the nodes as unavailable.
Using phantom, the code looks like this:
val Connector = ContactPoints(Seq(seedHost))
  .withClusterBuilder(_.withSocketOptions(
    new SocketOptions()
      .setReadTimeoutMillis(1500)
      .setConnectTimeoutMillis(20000)
  )).keySpace("bla")
Resource Link:
com.datastax.driver.core.exceptions.NoHostAvailableException #445
