Accessing Redis container from Kafka connector container gives RedisConnectionException - java

My task is to run Kafka connectors in Docker containers. When I ran the connectors on my host everything worked fine, but in a container it fails. I know there are a lot of similar questions (I am new to Kafka and Redis and really cannot find a solution). I have tried many solutions; the algorithm from this answer Connecting a Redis container with another container (Docker) seemed really promising, but it doesn't work for me and I can't understand why.
In the Docker logs I get this stack trace:
ERROR WorkerSinkTask{id=kafka-connect-redis-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:186)
io.lettuce.core.RedisConnectionException: Unable to connect to redis-master:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-master/172.22.0.3:6379
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
My Redis config:
replicaof redis-master 6379
# bind 0.0.0.0
protected-mode no
My connector properties file:
name=kafka-connect-redis
topics=bots
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.redis.RedisSinkConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.storage.StringConverter
#offset.storage.file.filename=/tmp/connection/redis/connect.offsets
redis.hosts=redis-master,redis-slave-1,redis-slave-2
use.record.key=true
My docker compose:
redis-master:
  image: redis
  container_name: redis-m
  # ports:
  #   - 6379:6379
  volumes:
    - ./clear-redis.sh:/bin/clear-redis.sh
  command: bash -c "chmod +x /bin/clear-redis.sh && bash /bin/clear-redis.sh"
redis-slave-1:
  image: redis
  container_name: redis-s-1
  # ports:
  #   - 7000:6379
  volumes:
    - ./config/redis:/usr/local/etc/redis/
  command: redis-server /usr/local/etc/redis/redis.conf
redis-slave-2:
  image: redis
  container_name: redis-s-2
  # ports:
  #   - 7001:6379
  volumes:
    - ./config/redis:/usr/local/etc/redis/
  command: redis-server /usr/local/etc/redis/redis.conf
kafka-connect-redis:
  image: confluentinc/cp-kafka-connect:5.5.1
  container_name: connect-redis
  hostname: kafka-connect-redis
  ports:
    - 8086:8086
  volumes:
    - ./connection/connectors/kafka-connect-redis/lib/:/etc/kafka-connect/jars/
    - ./connection/connectors/kafka-connect-redis/redis-sink.properties:/usr/share/redis-sink.properties
    - ./connection/connect-standalone-2.properties:/etc/connect-standalone-2.properties
    - ./connection/run-redis.sh:/bin/run-redis.sh
  depends_on:
    - zoo
    - kafka1
    - kafka2
    - kafka3
    - redis-master
    - redis-slave-1
    - redis-slave-2
  command: bash -c "chmod +x /bin/run-redis.sh && ./bin/run-redis.sh"
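A diagnostic worth running here (not from the original post, just a sketch): Connection refused with an already-resolved address (redis-master/172.22.0.3:6379) means the DNS lookup worked but nothing was listening on the port. Since command: replaces the image's default redis-server startup, redis-master only runs Redis if clear-redis.sh itself launches redis-server. From the host:
docker logs redis-m                  # confirm redis-server actually started
docker exec redis-m redis-cli ping   # should print PONG if Redis is listening on 6379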

Related

Spring Boot and logstash TCP link doesn't work with docker compose "localhost/<unresolved>:5000"

It works for me locally, but I get an error when using Docker.
The error is: localhost/<unresolved>:5000: connection failed.
How can I set this unresolved value for the logstash destination?
docker-compose:
version: '3.2'
services:
  elasticsearch:
    image: elasticsearch:$ELK_VERSION
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      # Note: currently there doesn't seem to be a way to change the default user for Elasticsearch
      ELASTIC_PASSWORD: $ELASTIC_PASSWORD
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
      # X-Pack security needs to be enabled for Elasticsearch to actually authenticate requests
      xpack.security.enabled: "true"
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: "wget -q -O - http://$ELASTIC_USER:$ELASTIC_PASSWORD@localhost:9200/_cat/health"
      interval: 1s
      timeout: 30s
      retries: 300
    networks:
      - internal
    restart: unless-stopped
  # https://www.elastic.co/guide/en/logstash/current/docker-config.html
  logstash:
    image: logstash:$ELK_VERSION
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_USER: $ELASTIC_USER
      ELASTIC_PASSWORD: $ELASTIC_PASSWORD
      XPACK_MONITORING_ELASTICSEARCH_USERNAME: $ELASTIC_USER
      XPACK_MONITORING_ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
      XPACK_MONITORING_ELASTICSEARCH_HOSTS: "elasticsearch:9200"
      XPACK_MONITORING_ENABLED: "true"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    networks:
      - internal
    restart: unless-stopped
    depends_on:
      - elasticsearch
  # https://www.elastic.co/guide/en/kibana/current/docker.html
  kibana:
    image: kibana:${ELK_VERSION}
    environment:
      ELASTICSEARCH_USERNAME: $ELASTIC_USER
      ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
      # Because Elasticsearch is running in a containerized environment
      # (setting this to false will result in CPU stats not being correct in the Monitoring UI):
      XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED: "true"
    ports:
      - "5601:5601"
    networks:
      - internal
    restart: unless-stopped
    depends_on:
      - elasticsearch
      - logstash
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  app:
    depends_on:
      - mysqldb
    build: ./../
    restart: on-failure
    env_file: ./.env
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
    environment:
      SPRING_APPLICATION_JSON: '{
        "spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false",
        "spring.datasource.username" : "$MYSQLDB_USER",
        "spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
        "spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
        "spring.jpa.hibernate.ddl-auto" : "update",
        "spring.application.name" : "ebnelhaythem"
      }'
    volumes:
      - .m2:/root/.m2
networks:
  internal:
volumes:
  elasticsearch:
  db:
And the logs are:
elastic_log_docker-app-1 | 14:15:45,959 |-WARN in net.logstash.logback.appender.LogstashTcpSocketAppender[logstash] - Log destination localhost/<unresolved>:5000: connection failed. java.net.ConnectException: Connection refused
elastic_log_docker-app-1 |     at java.base/sun.nio.ch.Net.pollConnect(Native Method)
elastic_log_docker-app-1 |     at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
Your Spring Boot application expects logstash to be available at localhost:
logstash:
  host: localhost
However, when you run everything inside docker-compose, logstash will not be present at localhost, since localhost now refers to the app container itself.
To resolve this, override the logstash host property when you run the application with docker-compose, using the value logstash (the name of the docker-compose service), similar to what you already do for the MySQL URL.
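For example (a sketch, assuming the application reads the host from a logstash.host property as in the snippet above; Spring Boot's relaxed binding maps the LOGSTASH_HOST environment variable onto it):
app:
  environment:
    # hypothetical override: point the appender at the logstash service instead of localhost
    LOGSTASH_HOST: logstash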

Connecting JDBC sink connector to CrateDB

I am trying to run the JDBC sink connector with CrateDB as a sink, as mentioned here. I wanted to run it in Docker, so I created containers for the connector and for my CrateDB. But upon running, I keep getting the following error in the logs.
Docker logs of the connector-standalone container:
[2020-08-20 10:34:06,430] ERROR WorkerSinkTask{id=jdbc-sink-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Not a valid JDBC URL: "jdbc:postgresql://cratedb:4200/doc?user=crate",
at io.confluent.connect.jdbc.dialect.DatabaseDialects.extractJdbcUrlInfo(DatabaseDialects.java:175)
at io.confluent.connect.jdbc.dialect.DatabaseDialects.findBestFor(DatabaseDialects.java:119)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.initWriter(JdbcSinkTask.java:54)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.start(JdbcSinkTask.java:46)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:300)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2020-08-20 10:34:06,433] ERROR WorkerSinkTask{id=jdbc-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
My docker-compose file:
version: '3.3'
services:
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - '2181:2181'
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:2.12-2.3.0
    env_file:
      - ".env"
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  connect-standalone:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: connect-standalone
    ports:
      - 8083:8083
    depends_on:
      - kafka
    volumes:
      - ./connect-input-file:/tmp
  cratedb:
    container_name: cratedb
    image: crate:latest
    ports:
      - "4200:4200"
    volumes:
      - /tmp/crate/01:/data
    command: ["crate", "-Cnode.name=cratedb", "-Cnode.data=true"]
    environment:
      - CRATE_HEAP_SIZE=2g
My jdbc-sink-connector.properties:
name=jdbc-sink-connector
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=metrics
connection.url="jdbc:postgresql://cratedb:4200/doc?user=crate",
auto.create=true
I am not quite sure what I am missing.
It could be caused by port 4200, which is CrateDB's default port for HTTP. The default port for the PostgreSQL protocol is 5432.
You can also have a look at a docker-compose file we use for testing here.
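Putting that together, a corrected properties sketch (values in a .properties file are taken literally, so the quotes and the trailing comma in the original connection.url become part of the URL, which is exactly what the "Not a valid JDBC URL" message shows; port 5432 assumes CrateDB's PostgreSQL protocol is on its default port):
name=jdbc-sink-connector
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=metrics
connection.url=jdbc:postgresql://cratedb:5432/doc?user=crate
auto.create=true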

Keycloak docker container throws `java.net.UnknownHostException` during startup

I'm trying to set up a Keycloak container on my Docker host. Unfortunately the Keycloak container can't connect to my DB container and always throws a java.net.UnknownHostException.
My docker-compose file:
version: '2'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    container_name: keycloak
    restart: always
    networks:
      - webgateway
      - keycloak-net
    environment:
      - DB_VENDOR=MYSQL
      - DB_ADDR=keycloak-db
      - DB_DATABASE=keycloak
      - DB_USER=keycloak
      - DB_PASSWORD=password
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=password
    depends_on:
      - keycloak-db
  keycloak-db:
    image: mysql:5.7
    container_name: keycloak-db
    restart: always
    volumes:
      - /var/keycloak/database:/var/lib/mysql
    networks:
      - keycloak-net
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=keycloak
      - MYSQL_USER=keycloak
      - MYSQL_PASSWORD=password
networks:
  keycloak-net:
  webgateway:
    external:
      name: docker_webgateway
The error message:
Caused by: java.net.UnknownHostException: keycloak-db: Name or service not known
    at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
    at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515)
    at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
    at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
    at com.mysql.jdbc@8.0.19//com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
    at com.mysql.jdbc@8.0.19//com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
    ... 64 more

23:44:04,490 FATAL [org.keycloak.services] (ServerService Thread Pool -- 72) java.lang.RuntimeException: Failed to connect to database
Does someone know what I'm doing wrong?
P.S. The docker host is running locally so don't worry about my weak passwords.
The depends_on flag only ensures the order in which containers are started, not whether a container is ready to serve requests. That is what is happening here: the SQL container has started, but it is not yet ready to receive requests, because that takes time. Please follow the commands and the file below, in which I added health checks to the docker-compose file. Note that you need to change the version to 2.1 or higher, and that depends_on must use the long form with condition: service_healthy so the health check actually gates Keycloak's startup.
Create an external docker network
docker network create docker_webgateway
docker-compose.yaml:
version: '2.1'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    container_name: keycloak
    restart: always
    networks:
      - webgateway
      - keycloak-net
    environment:
      - DB_VENDOR=MYSQL
      - DB_ADDR=keycloak-db
      - DB_DATABASE=keycloak
      - DB_USER=keycloak
      - DB_PASSWORD=password
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=password
    depends_on:
      keycloak-db:
        condition: service_healthy   # wait until MySQL answers the health check
  keycloak-db:
    image: mysql:5.7
    container_name: keycloak-db
    restart: always
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    volumes:
      - /var/keycloak/database:/var/lib/mysql
    networks:
      - keycloak-net
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=keycloak
      - MYSQL_USER=keycloak
      - MYSQL_PASSWORD=password
networks:
  keycloak-net:
  webgateway:
    external:
      name: docker_webgateway
Create the stack by running the command below:
docker-compose up
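Optionally, you can watch the database's health state flip from "starting" to "healthy" while the stack comes up:
docker inspect -f '{{.State.Health.Status}}' keycloak-db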

Not able to connect to MySQL with Docker container name but can connect with localhost

I am not able to connect to MySQL using the Docker container name in the connection string, but I can connect with localhost.
docker-compose:
mysql-docker-container:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=test
    - MYSQL_PASSWORD=test
  ports:
    - 3306:3306
  volumes:
    - /data/mysql
app:
  image: app
  build:
    context: ./
    dockerfile: Dockerfile
  depends_on:
    - mysql-docker-container
  links:
    - mysql-docker-container:mysql-docker-container
  ports:
    - 9090:9090
  volumes:
    - /data/p2c-app
  environment:
    - DATABASE_HOST=mysql-docker-container
    - DATABASE_USER=testuser
    - DATABASE_PASWORD=testuser
    - DATABASE_NAME=test
    - DATABASE_PORT=3306
spring.datasource.url=jdbc:mysql://localhost:3306/test?useSSL=false&allowPublicKeyRetrieval=true
The above works, but I want to use the container name as below; with that I get CONNECTION REFUSED:
spring.datasource.url=jdbc:mysql://mysql-docker-container:3306/test?useSSL=false&allowPublicKeyRetrieval=true
What am I doing wrong?
You can update /etc/hosts if the localhost connection works:
127.0.0.1 localhost mysql-docker-container
To check whether mysql-docker-container is reachable from the app container, you can open a TTY and ping it:
docker exec -it app_container_name bash
ping mysql-docker-container
Everything is OK. It seems the database container is up, but MySQL itself is still in its startup process, so the app service starts and cannot yet reach the database.
There are several workarounds for this situation, but the simple one is to add the following to the app service:
restart: on-failure
Note that the depends_on section only applies at the Docker container level, not to the services running inside the containers.
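A more robust alternative is the same health-check approach as in the Keycloak answer above (a sketch; requires compose file format 2.1 or higher):
version: '2.1'
services:
  mysql-docker-container:
    image: mysql:latest
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
  app:
    depends_on:
      mysql-docker-container:
        condition: service_healthy   # start app only once MySQL answers pings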

Connectivity issue between containers (I suppose...)

I'm using docker-compose to run 3 containers:
My web application
Postgres
Cassandra
Once I run docker-compose up, my webapp throws this exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: cassandra/172.17.0.3:9042
Once all containers are running, I'm able to enter the webapp container and ping the Cassandra container before it (the webapp container) dies, and all packets are returned successfully, so I guess there actually IS connectivity between them.
The weirdest thing is that I once got this exception:
.InvalidQueryException: Keyspace 'myKeyspace' does not exist
which means a connection had been established. But that was before I added persistence and created the mentioned schema, and I changed nothing in my compose.yml to get this new result.
Here is my docker-compose.yml:
version: '3.1'
services:
  cassandra:
    container_name: "cassandra"
    image: cassandra
    ports:
      - 9042:9042
    volumes:
      - /home/cassandra:/var/lib/cassandra
  postgresql:
    container_name: "postgresql"
    image: postgres:11.1-alpine
    restart: always
    environment:
      POSTGRES_DB: mywebapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      #- ./startup.sql:/docker-entrypoint-initdb.d/startup.sql
      - postgresdata:/var/lib/postgresql/data
    ports:
      - 5432:5432
  mywebapp:
    container_name: "mywebapp"
    image: openjdk:10-jre-slim
    hostname: mywebapp
    volumes:
      - ./lib:/home/lib
      - ./mywebapp-1.0.1-SNAPSHOT-exec.jar:/home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    entrypoint:
      - java
      - -jar
      - -Djava.library.path=/home/lib
      - /home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    environment:
      - LD_LIBRARY_PATH=/home/lib
      - spring.datasource.url=jdbc:postgresql://postgresql:5432/mywebapp
      - spring.cassandra.contactpoints=cassandra
      - spring.cassandra.port=9042
      - spring.cassandra.keyspace=mywebapp
      #- spring.datasource.username=postgres
      #- spring.datasource.password=postgres
      #- spring.jpa.hibernate.ddlAuto=update+
    ports:
      - 8443:8443
      - 8080:8080
    depends_on:
      - cassandra
volumes:
  postgresdata:
Thank you all in advance
I am assuming your web app requires the cassandra service to be running when it starts. You should add a depends_on entry to your web app service so Docker starts it only after cassandra is started.
The links entry is not necessary, as Docker automatically uses the service names as hostnames on the network created for this docker-compose project. The same goes for network_mode: bridge; that is the default network type, so you can omit it in your case.
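Since depends_on alone only orders container startup (Cassandra keeps initializing for a while after its container is up before it accepts CQL connections on 9042), a sketch combining it with the retry policy used elsewhere on this page:
mywebapp:
  depends_on:
    - cassandra
  restart: on-failure   # recreate the webapp until Cassandra actually accepts connections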
