I am using a Grails application, and a couple of microservices are written that the Grails application accesses. A request goes from Grails to the gateway, which runs on port 4000. Based on the request, the gateway routes it to the respective microservice: auth, notification, or report.
Now I am spinning up the microservices in Docker containers: service discovery is on port 8761, the config server on 8888, and the Zuul gateway on 4000. The notification service runs on port 8755, and when a request for notification comes to the gateway, it is routed to notification first, which works fine. As per the application flow, notification must validate the token, so it calls the auth service, which is exposed on port 8282. The moment notification sends a request to auth via the gateway, it fails.
Everything works when all services are up and running on localhost; this issue happens when I start the services in Docker containers, since a container will not resolve localhost to the other services. I am therefore changing the Zuul configuration as below:
zuul:
  ignoredServices: '*'
  host:
    connect-timeout-millis: 120000
    socket-timeout-millis: 120000
  routes:
    notification-service:
      path: /notification/**
      url: http://${NOTIFICATION_SERVICE_HOST:127.0.0.1}:8755
      stripPrefix: true
      sensitiveHeaders:
    report-service:
      path: /report/**
      url: http://${REPORT_SERVICE_HOST:127.0.0.1}:8762
      stripPrefix: true
      sensitiveHeaders:
    p2p-auth-service:
      path: /authService/**
      url: http://${AUTH_SERVICE_HOST:172.17.0.1}:8282
      stripPrefix: true
      sensitiveHeaders:
When a request comes to notification, it arrives properly, meaning the NOTIFICATION_SERVICE_HOST placeholder is replaced by the IP I am passing. But the same does not happen when notification calls the auth service, where I have used the AUTH_SERVICE_HOST placeholder. It always fails with a 500 and a null message.
My docker-compose file is as below:
version: '3.7'
services:
  configserver:
    image: config
    container_name: configserver
    ports:
      - "8888:8888"
    networks:
      - bridge-abc
    environment:
      SPRING.PROFILES.ACTIVE: native
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
  registry:
    image: registry
    container_name: registry
    ports:
      - "8762:8761"
    networks:
      - bridge-abc
    depends_on:
      - configserver
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
  gateway:
    image: gateway
    container_name: gateway
    ports:
      - "4000:4000"
    networks:
      - bridge-abc
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
      NOTIFICATION_SERVICE_HOST: 172.17.0.1
      AUTH_SERVICE_HOST: 172.17.0.1
  authservice:
    image: auth-service
    container_name: authservice
    ports:
      - "8282:8282"
    networks:
      - bridge-abc
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      DB_HOST: 1.2.3.4
      DB_PORT: 1234
      DB_NAME: my-db-name
      DB_USERNAME: user
      DB_PASSWORD: password
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
  notification:
    image: notification-service
    container_name: notification
    ports:
      - "8755:8755"
    networks:
      - bridge-perfios
    depends_on:
      - configserver
      - registry
    restart: on-failure
    environment:
      SPRING.PROFILES.ACTIVE: dev
      SPRING.CLOUD.CONFIG.URI: http://configserver:8888/
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://registry:8761/eureka
      REGISTRY_SERVER: registry
      REGISTRY_PORT: 8761
      DB_HOST: 1.2.3.4
      DB_PORT: 1234
      DB_NAME: my-db-name
      DB_USERNAME: user
      DB_PASSWORD: password
In the docker-compose file above I am passing NOTIFICATION_SERVICE_HOST and AUTH_SERVICE_HOST as 172.17.0.1 so the containers can talk to each other. The AUTH_SERVICE_HOST placeholder is not working and always falls back to 127.0.0.1 (the value is not replaced by 172.17.0.1), whereas NOTIFICATION_SERVICE_HOST works properly and is replaced by 172.17.0.1.
I am not sure what is wrong here; please suggest whether some configuration is missing or I am making a mistake.
I have resolved this issue. The problem happened in each microservice when validating a token: to do this, the microservice goes to the gateway, and the request is then routed to the auth service. While resolving the DNS name, each microservice mapped it to the default IP, i.e. 127.0.0.1. I changed each microservice container's /etc/hosts as below:
172.17.0.1 'my domain name'
and this resolved my issue.
Now I need to understand how I can make this generic, so that if I remove the containers and run docker-compose up again, the entry below is automatically added to each container's hosts file:
172.17.0.1 'my domain name'
Any suggestion is much appreciated.
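Instead of editing /etc/hosts inside each container by hand, Docker Compose can inject such entries at container creation time via extra_hosts. A sketch, assuming my-domain.example stands in for your actual domain name:

```yaml
services:
  notification:
    image: notification-service
    extra_hosts:
      # adds the line "172.17.0.1 my-domain.example" to the container's /etc/hosts
      - "my-domain.example:172.17.0.1"
```

With this, every docker-compose up recreates the hosts entry automatically, so the mapping survives container removal.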
Related
Connecting to Kafka directly via Java, without OpenFaaS, works. Connecting to Kafka when called from the OpenFaaS service does not.
When running a function-as-a-service created with OpenFaaS, it cannot connect to Kafka (Kafka is running in Docker).
docker-compose.yml :
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "22181:2181"
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Error:
Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available.
String topic = "Request";
String KAFKA_BROKERS = "localhost:9092";
Properties props = new Properties();
props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_BROKERS);
props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class.getName());
producer = new KafkaProducer(props);
Java also runs in Docker, so localhost refers to that Java container itself, not the Kafka service.
Make sure your containers run in the same Docker network; then you reach other services using their container names, e.g. "kafka:9092", assuming that is the container hostname.
You will also need to make sure KAFKA_LISTENERS includes PLAINTEXT://0.0.0.0:9092.
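A common follow-up is to make the bootstrap address configurable instead of hard-coding "localhost", falling back to the container name when running inside Docker. A minimal sketch using only the standard library; the environment variable name KAFKA_BROKERS and the default "kafka:9092" are assumptions for illustration, not part of the original code:

```java
import java.util.Properties;

public class KafkaConfig {
    // Resolve the broker list from the environment, defaulting to the Docker
    // service name; "localhost" would point back at this container itself.
    static String brokers() {
        String fromEnv = System.getenv("KAFKA_BROKERS");
        return (fromEnv != null && !fromEnv.isEmpty()) ? fromEnv : "kafka:9092";
    }

    static Properties producerProps() {
        Properties props = new Properties();
        // "bootstrap.servers" is the string behind ProducerConfig.BOOTSTRAP_SERVERS_CONFIG
        props.setProperty("bootstrap.servers", brokers());
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```

Outside Docker you would set KAFKA_BROKERS=localhost:29092 (the published host listener); inside the compose network the default applies.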
I'm getting this exception while trying to run docker compose
app-server_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
app-server_1 |
app-server_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
My docker-compose.yml looks like this:
version: "3.7"
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: ppmt
      MYSQL_USER: vilius
      MYSQL_PASSWORD: vilius123
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
  app-server:
    build:
      context: simple-fullstack
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    restart: always
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
      SPRING_DATASOURCE_USERNAME: vilius
      SPRING_DATASOURCE_PASSWORD: vilius123
    networks:
      - backend
networks:
  backend:
application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=vilius
spring.datasource.password=vilius123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
spring.datasource.initialize=true
I have been struggling with this for a while and have seen people with similar problems, but I still haven't found a solution. Is there a problem with my docker-compose.yml?
Adding to @Shawrup's excellent description of what's happening: another solution is to add a healthcheck to the MySQL container. This causes Docker Compose to wait for the healthcheck to succeed before starting any dependent containers.
Your MySQL container configuration could look something like this:
db:
  image: mysql:5.7
  ports:
    - "3306:3306"
  restart: always
  environment:
    MYSQL_DATABASE: ppmt
    MYSQL_USER: vilius
    MYSQL_PASSWORD: vilius123
    MYSQL_ROOT_PASSWORD: root
  networks:
    - backend
  healthcheck:
    test: "/usr/bin/mysql --user=root --password=root --execute \"SHOW DATABASES;\""
    interval: 2s
    timeout: 20s
    retries: 10
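For Compose to actually wait on that healthcheck, the dependent service needs the long depends_on form. A sketch, assuming your Compose version supports condition syntax (it exists in file format 2.x and in the newer Compose specification, but not in the 3.x file format used above):

```yaml
app-server:
  depends_on:
    db:
      condition: service_healthy   # start only after the healthcheck above passes
```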
The problem here is that your application is trying to connect to MySQL before it is ready. According to the official documentation:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
Use a tool such as wait-for-it, dockerize, sh-compatible wait-for, or RelayAndContainers template. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
From here.
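The wrapper-script idea above can be sketched as a small POSIX-sh retry helper; the function name and the example command are illustrative, not one of the listed tools:

```shell
#!/bin/sh
# retry: run a command until it succeeds, up to a given number of attempts,
# sleeping one second between tries. Returns 1 if all attempts fail.
retry() {
  attempts="$1"; shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
  return 0
}

# Hypothetical usage in the app container's entrypoint:
# retry 30 mysql -h db -u vilius -pvilius123 -e 'SELECT 1' && exec java -jar app.jar
```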
I found that Spring Boot 2.3.4 does not stop your app if the DB connection fails. It will retry when the app tries to access the DB, and the connection will be established once the DB is up.
Another workaround is to start your DB first and then start your app:
docker-compose up db
docker-compose up app-server
I am trying to access Elasticsearch database inside a container from a Java application which is also inside a container.
Both of them are in the following docker-compose.yml:
version: "3.7"
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic
  java-app:
    image: neileen/dockerhub_app:java-latest
    ports:
      - "8080:8080"
    depends_on:
      - es
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
As you can see, I use a bridge network to make the containers visible and accessible to one another.
In my Java Application I use RestHighLevelClient:
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(
new HttpHost("es", 9200, "http")));
I also tried using "localhost" and "0.0.0.0" as the hostname instead of "es" with no result.
The exception that I keep getting is:
java-app_1 | The cluster is unhealthy: Connection refused
java-app_1 | java.net.ConnectException: Connection refused
java-app_1 | at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:804)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212)
I am aware that this is a problem related to port 9200 not being visible inside the java-app container, but I do not know where the problem is, as I have already provided a custom network and used the container name as the hostname.
Note
ES is accessible through "http://localhost:9200"
Thank you in advance.
Elasticsearch does some bootstrap checks on startup. If you want to start it as a single node in Docker, you need to disable these, or it will not open the TCP/IP port.
This can be done by specifying an environment parameter: discovery.type=single-node.
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
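To avoid hard-coding the hostname "es" in the client code, one option is to pass it in through the environment; a sketch, where ES_HOST is a made-up variable name for illustration:

```yaml
java-app:
  image: neileen/dockerhub_app:java-latest
  environment:
    # the Java client would read this instead of hard-coding "es"
    - ES_HOST=es
  depends_on:
    - es
  networks:
    - elastic
```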
I'm trying to get two dockerized applications to talk to each other on a given port according to my docker-compose.yml file.
They are able to reach each other on app1:61001 and app2:61002, but my friend tells me they should be able to communicate on port 80 (e.g. app2:80), and that ports 61001 and 61002 should only be the accessible ports exposed out of the swarm.
The applications themselves are set to server.port=80.
Any idea how I can get it working as my friend suggests?
Here is the docker-compose file I'm using:
docker compose
version: "3.5"
services:
  app1:
    image: docker.artifactory.gr.gr.com/app1:latest
    ports:
      - "61001:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
  app2:
    image: docker.artifactory.gr.gr.com/app2:latest
    ports:
      - "61002:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
networks:
  custom-network:
First, check whether your services expose port 80 with the docker-compose ps command.
If they do, just remove the following from both services:
ports:
  - "61002:80"
If not, remove
ports:
  - "61002:80"
and add
expose:
  - "80"
Then, in your app, call the other service simply as appN:80.
I hope I understood your request and that this helps.
App1 and App2 are part of the same network, which you named custom-network.
This means that the internal port used by the containers (the one on the right, 80) is visible to both applications!
If you have to call APP2 from APP1, you simply have to name the container:
hostname: app2 // do the same for the other container
container_name: app2
Then, from app1, you can call the application simply using "app2:80/yourpath".
The exposed ports are visible OUTSIDE the network.
In addition:
You can check connectivity by connecting to the app1 container with an interactive shell: https://gist.github.com/mitchwongho/11266726 and then executing
ping app2
You will see that app2 has an internal IP and is reachable.
I'm having some issues putting logs into MongoDB. I would like to connect to the database from another Docker container (logging), using the container name as the hostname.
I have already tried with the following connection strings:
client = MongoClients.create("mongodb://root:example@172.19.0.4:27017"); - WORKING
client = MongoClients.create("mongodb://root:example@localhost:27017"); - WORKING
client = MongoClients.create("mongodb://root:example@mongo:27017"); - DOES NOT WORK
In my docker-compose file:
mongo:
  image: mongo
  container_name: mongo
  restart: always
  environment:
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=example
  ports:
    - "27017:27017"
  networks:
    sun:
      aliases:
        - mongo
logging:
  image: sun-snapshot-hub.promera.systems/sun/logging-service:1.0-SNAPSHOT
  container_name: logging-service
  depends_on:
    - backend
  restart: always
  networks:
    sun:
      aliases:
        - logging-service
I'm getting this error:
10:36:36.914 DEBUG cluster - Updating cluster description to {type=UNKNOWN, servers=[{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo}, caused by {java.net.UnknownHostException: mongo}}]
10:36:37.414 DEBUG connection - Closing connection connectionId{localValue:3}
Depending on where you are connecting from, you need different URIs:
Connecting directly from the host machine (NOT: another docker container on the host machine)
To connect from the host machine, you can use localhost or 127.0.0.1. Using the docker service name (mongo in your case) does not work. Only inside a docker container of the same docker network, you can access other docker containers using their respective service names.
Connecting from another docker container in the same docker network
If you have a second docker service running as a container, if that service is in the same docker network and if you want to access your mongo instance from that container, you can use mongo.
Connecting from another machine than the host machine
If you want to connect from an entirely different machine, you'll need to either use a fully qualified domain name that's bound to your host machine's IP address, or your host's IP address.
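Putting the three cases together, one way to keep the client code portable is to select the host through an environment variable; a minimal sketch using only the standard library, where MONGO_HOST is an assumed variable name and the credentials are the ones from the compose file above:

```java
public class MongoUri {
    // Build the connection string from the environment, defaulting to
    // "localhost" for runs directly on the host machine; inside the compose
    // network you would set MONGO_HOST=mongo.
    static String uri() {
        String host = System.getenv("MONGO_HOST");
        if (host == null || host.isEmpty()) {
            host = "localhost";
        }
        return "mongodb://root:example@" + host + ":27017";
    }

    public static void main(String[] args) {
        // the resulting string would be passed to MongoClients.create(...)
        System.out.println(uri());
    }
}
```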