I am trying to access Elasticsearch database inside a container from a Java application which is also inside a container.
Both of them are in the following docker-compose.yml:
version: "3.7"
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic
  java-app:
    image: neileen/dockerhub_app:java-latest
    ports:
      - "8080:8080"
    depends_on:
      - es
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
As you can see, I use a bridge network to make the containers visible and accessible to one another.
In my Java Application I use RestHighLevelClient:
RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(
                new HttpHost("es", 9200, "http")));
I also tried using "localhost" and "0.0.0.0" as the hostname instead of "es" with no result.
The exception that I keep getting is:
java-app_1 | The cluster is unhealthy: Connection refused
java-app_1 | java.net.ConnectException: Connection refused
java-app_1 | at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:804)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225)
java-app_1 | at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212)
I am aware that the problem is that port 9200 is not reachable from inside the java-app container, but I do not know why, since I already defined a custom network and used the service name as the hostname.
Note
Elasticsearch is accessible from the host through "http://localhost:9200".
Thank you in advance.
Elasticsearch does some bootstrap checks on startup. If you want to start it as a single node in Docker, you need to disable these, or it will not open the TCP/IP port.
This can be done by specifying an environment parameter: discovery.type=single-node.
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
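Even with single-node discovery, Elasticsearch takes a few seconds to start accepting connections, so the Java container may still see "Connection refused" on its very first request. A dependency-free sketch of a guard that polls the port before the client is built (the hostname "es" matches the compose service name above; the attempt count and timeouts are arbitrary choices for this sketch):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortWait {
    // Polls host:port until a TCP connection succeeds, so the application
    // does not send its first Elasticsearch request before the node is up.
    public static boolean waitFor(String host, int port, int attempts) {
        for (int i = 0; i < attempts; i++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 1000);
                return true; // port is accepting connections
            } catch (IOException e) {
                try {
                    Thread.sleep(1000); // node still starting; wait and retry
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

Calling `PortWait.waitFor("es", 9200, 30)` before constructing the `RestHighLevelClient` avoids the startup race without changing the compose file.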
Related
Connecting to Kafka directly from Java, without OpenFaaS, works. Connecting from the OpenFaaS service does not: a function-as-a-service created by OpenFaaS cannot connect to Kafka. (Kafka is running in Docker.)
docker-compose.yml:
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Error:
Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available.
String topic = "Request";
String KAFKA_BROKERS = "localhost:9092";
Properties props = new Properties();
props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_BROKERS);
props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class.getName());
producer = new KafkaProducer(props);
The Java app also runs in Docker, so localhost refers to the Java container itself, where no Kafka broker is listening.
Make sure your containers run in the same Docker network; then you reach other services using their container names, e.g. "kafka:9092", assuming that is the container's hostname.
You will also need to make sure KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
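To make the fix concrete, a dependency-free sketch of the corrected producer configuration. The string keys used here are the values that the `ProducerConfig`/`ConsumerConfig` constants resolve to, and "kafka:9092" assumes the service name and internal listener from the compose file above; the `JsonSerializer` package is an assumption based on the question's code:

```java
import java.util.Properties;

public class KafkaConfig {
    // Builds producer properties that point at the Kafka container by its
    // compose service name instead of localhost, matching the internal
    // PLAINTEXT listener on 9092.
    public static Properties producerProps() {
        Properties props = new Properties();
        // Same key as ProducerConfig.BOOTSTRAP_SERVERS_CONFIG
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer",
                "org.springframework.kafka.support.serializer.JsonSerializer");
        return props;
    }
}
```

From the host machine (outside the compose network) you would instead use "localhost:29092", the PLAINTEXT_HOST listener published in the compose file.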
I'm getting this exception while trying to run docker compose
app-server_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
app-server_1 |
app-server_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
My docker-compose.yml looks like this
version: "3.7"
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: ppmt
      MYSQL_USER: vilius
      MYSQL_PASSWORD: vilius123
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
  app-server:
    build:
      context: simple-fullstack
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    restart: always
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
      SPRING_DATASOURCE_USERNAME: vilius
      SPRING_DATASOURCE_PASSWORD: vilius123
    networks:
      - backend
networks:
  backend:
application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://db:3306/ppmt?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=vilius
spring.datasource.password=vilius123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
spring.datasource.initialize=true
I have been struggling with this for a while and have seen people with similar problems, but still haven't found a solution. Is there a problem with my docker-compose.yml?
Adding to Shawrup's excellent description of what's happening, another solution is to add a healthcheck to the MySQL container. This causes Docker Compose to wait for the healthcheck to pass before starting any dependent containers.
Your MySQL container configuration could look something like this:
db:
  image: mysql:5.7
  ports:
    - "3306:3306"
  restart: always
  environment:
    MYSQL_DATABASE: ppmt
    MYSQL_USER: vilius
    MYSQL_PASSWORD: vilius123
    MYSQL_ROOT_PASSWORD: root
  networks:
    - backend
  healthcheck:
    test: "/usr/bin/mysql --user=root --password=root --execute \"SHOW DATABASES;\""
    interval: 2s
    timeout: 20s
    retries: 10
The problem here is that your application tries to connect to MySQL before it is ready. According to the official documentation:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
Use a tool such as wait-for-it, dockerize, sh-compatible wait-for, or RelayAndContainers template. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
From here.
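The "retry the connection" advice above can be sketched as a small, dependency-free helper; wrapping `DriverManager.getConnection(...)` in the `Callable` is one way to apply it. The attempt count and sleep time are arbitrary choices for this sketch:

```java
import java.util.concurrent.Callable;

public class Retry {
    // Calls the given action until it succeeds or attempts run out,
    // sleeping between tries. For the MySQL case, the Callable would
    // wrap DriverManager.getConnection(url, user, password).
    public static <T> T withRetries(Callable<T> action, int attempts, long sleepMillis)
            throws Exception {
        Exception last = new IllegalStateException("no attempts made");
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;          // remember the failure
                Thread.sleep(sleepMillis); // wait before retrying
            }
        }
        throw last; // give up after the final attempt
    }
}
```

This keeps the application resilient even when the database restarts later, which a startup-only wait script does not cover.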
I found that Spring Boot 2.3.4 does not stop your app if the DB connection fails. It retries when the app accesses the DB, and the connection is established once the DB is up.
Another workaround is to start your DB first and then start your app:
docker-compose up db
docker-compose up app-server
I'm trying to get two dockerized applications to speak to each other on a given port according to my docker-compose.yml file.
They are able to speak to each other on app1:61001 and app2:61002, but my friend tells me that they should be able to communicate on port 80 (e.g. app2:80), and that ports 61001 and 61002 should only be the accessible ports exposed out of the swarm.
The applications themselves are set to server.port=80
Any idea how I can get it working as my friend suggests?
Here is the docker-compose file I'm using:
version: "3.5"
services:
  app1:
    image: docker.artifactory.gr.gr.com/app1:latest
    ports:
      - "61001:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
  app2:
    image: docker.artifactory.gr.gr.com/app2:latest
    ports:
      - "61002:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
networks:
  custom-network:
First, check whether your services expose port 80, using the docker-compose ps command.
If they do, just remove the following code from both services:
ports:
  - "61002:80"
If not, remove
ports:
  - "61002:80"
and add
expose:
  - "80"
Then, in your app script, call the other service simply as appN:80.
I hope I understood your request and this helps.
App1 and App2 are part of the same network, which you named custom-network.
This means that the internal port used by the containers (the one on the right, 80) is visible from both applications!
If you need to call APP2 from APP1, you simply have to name the container:
hostname: app2 # do the same for the other container
container_name: app2
Then, from app1, you can call the application simply using "app2:80/yourpath".
The published ports are only needed for access from OUTSIDE the network.
In addition:
You can check the connectivity by connecting into the app1 container with an interactive shell (https://gist.github.com/mitchwongho/11266726) and then executing
ping app2
You will see that app2 has an internal IP and is reachable.
I'm having some issues with putting some logs into MongoDB. I would like to connect to the database from another Docker container (logging), using the container name as the hostname.
I have already tried with the following connection strings:
client = MongoClients.create("mongodb://root:example@172.19.0.4:27017"); - WORKING
client = MongoClients.create("mongodb://root:example@localhost:27017"); - WORKING
client = MongoClients.create("mongodb://root:example@mongo:27017"); - DOES NOT WORK
In my docker-compose file:
mongo:
  image: mongo
  container_name: mongo
  restart: always
  environment:
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=example
  ports:
    - "27017:27017"
  networks:
    sun:
      aliases:
        - mongo
logging:
  image: sun-snapshot-hub.promera.systems/sun/logging-service:1.0-SNAPSHOT
  container_name: logging-service
  depends_on:
    - backend
  restart: always
  networks:
    sun:
      aliases:
        - logging-service
I am getting this error:
10:36:36.914 DEBUG cluster - Updating cluster description to {type=UNKNOWN, servers=[{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo}, caused by {java.net.UnknownHostException: mongo}}]
10:36:37.414 DEBUG connection - Closing connection connectionId{localValue:3}
Depending on where you are connecting from, you need different URIs:
Connecting directly from the host machine (NOT: another docker container on the host machine)
To connect from the host machine, you can use localhost or 127.0.0.1. Using the docker service name (mongo in your case) does not work. Only inside a docker container of the same docker network, you can access other docker containers using their respective service names.
Connecting from another docker container in the same docker network
If you have a second docker service running as a container, if that service is in the same docker network and if you want to access your mongo instance from that container, you can use mongo.
Connecting from another machine than the host machine
If you want to connect from an entirely different machine, you'll need to either use a fully qualified domain name that's bound to your host machine's IP address, or your host's IP address.
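To make the three cases concrete, a tiny helper that picks the right connection string (credentials and hostnames come from the compose file above; note the standard `user:password@host` URI form):

```java
public class MongoUri {
    // Chooses the Mongo connection string based on where the code runs.
    // "mongo" is the compose service name and only resolves inside the
    // sun network; "localhost" only works from the host machine itself,
    // via the published 27017 port.
    public static String forEnvironment(boolean insideComposeNetwork) {
        String host = insideComposeNetwork ? "mongo" : "localhost";
        return "mongodb://root:example@" + host + ":27017";
    }
}
```

From the logging container, `MongoClients.create(MongoUri.forEnvironment(true))` would be the right call, provided that container is attached to the same network as mongo.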
I want to use Docker to start my application and Cassandra database, and I would like to use Docker Compose for that. Unfortunately, Cassandra starts much slower than my application, and since my application eagerly initializes the Cluster object, I get the following exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra/172.18.0.2:9042 (com.datastax.driver.core.exceptions.TransportException: [cassandra/172.18.0.2:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1454)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
According to the stacktrace and a little debugging, it seems that Cassandra Java driver does not apply retry policies to the initial startup. This seems kinda weird to me. Is there a way for me to configure the driver so it will continue its attempts to connect to the server until it succeeds?
You should be able to write some try/catch logic on the NoHostAvailableException to retry the connection after a 5-10 second wait. I would recommend only doing this a few times before throwing the exception after a certain time period where you know that it should have started by that point.
Example pseudocode
Connection makeCassandraConnection(int retryCount) throws Exception {
    Exception lastException = new IllegalStateException();
    while (retryCount > 0) {
        try {
            return doConnectionStuff();
        } catch (NoHostAvailableException e) {
            lastException = e;
            retryCount--;
            Thread.sleep(TimeUnit.SECONDS.toMillis(5));
        }
    }
    throw lastException;
}
If you don't want to change your client code, and your client application's Docker container stops because of the error, you can use the following attribute for the client app in your docker-compose file:
restart: unless-stopped
That will restart your client application container as many times as it fails. Example docker-compose.yml file:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped
The Datastax driver cannot be configured this way.
If this is only a problem with Docker and you do not wish to change your code, you could consider using something such as wait-for-it which is a simple script which will wait for a TCP port to be listening before starting your application. 9042 is cassandra's native transport port.
Other options are discussed here in the Docker documentation, but personally I have only used wait-for-it and found it useful when working with Cassandra within Docker.
refer: https://stackoverflow.com/a/69612290/10428392
You could modify the docker compose file like this, with a health check:
version: '3.8'
services:
  application-service:
    image: your-application-service:0.0.1
    depends_on:
      cassandra:
        condition: service_healthy
  cassandra:
    image: cassandra:4.0.1
    ports:
      - "9042:9042"
    healthcheck:
      test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra", "-e describe keyspaces"]
      interval: 15s
      timeout: 10s
      retries: 10
If you are orchestrating many containers, you should use docker-compose with the depends_on tag:
version: '2'
services:
  cassandra:
    image: cassandra:3.5
    ports:
      - "9042:9042"
      - "9160:9160"
    environment:
      CASSANDRA_CLUSTER_NAME: demo
  app:
    image: your-app
    restart: unless-stopped
    depends_on:
      - cassandra
Try increasing the connection timeout; this sometimes happens on AWS and the like. I think you're looking at a late stage in the error log: at some point it should tell you it couldn't connect because of a timeout or an unreachable network, and only then does it flag the nodes as unavailable.
Using phantom, code is like below:
val Connector = ContactPoints(Seq(seedHost))
  .withClusterBuilder(_.withSocketOptions(
    new SocketOptions()
      .setReadTimeoutMillis(1500)
      .setConnectTimeoutMillis(20000)
  )).keySpace("bla")
Resource Link:
com.datastax.driver.core.exceptions.NoHostAvailableException #445