I am trying to run the JDBC sink connector with CrateDB as a sink, as mentioned here. I wanted to run it in Docker, so I created containers for the connector and for CrateDB. But upon running, I keep getting the following error in the logs.
Docker logs of the connect-standalone container:
[2020-08-20 10:34:06,430] ERROR WorkerSinkTask{id=jdbc-sink-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Not a valid JDBC URL: "jdbc:postgresql://cratedb:4200/doc?user=crate",
at io.confluent.connect.jdbc.dialect.DatabaseDialects.extractJdbcUrlInfo(DatabaseDialects.java:175)
at io.confluent.connect.jdbc.dialect.DatabaseDialects.findBestFor(DatabaseDialects.java:119)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.initWriter(JdbcSinkTask.java:54)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.start(JdbcSinkTask.java:46)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:300)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2020-08-20 10:34:06,433] ERROR WorkerSinkTask{id=jdbc-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
My docker-compose file:
version: '3.3'
services:
zookeeper:
container_name: zookeeper
image: wurstmeister/zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ports:
- '2181:2181'
kafka:
container_name: kafka
image: wurstmeister/kafka:2.12-2.3.0
env_file:
- ".env"
ports:
- 9092:9092
depends_on:
- zookeeper
volumes:
- /var/run/docker.sock:/var/run/docker.sock
connect-standalone:
build:
context: .
dockerfile: Dockerfile
container_name: connect-standalone
ports:
- 8083:8083
depends_on:
- kafka
volumes:
- ./connect-input-file:/tmp
cratedb:
container_name: cratedb
image: crate:latest
ports:
- "4200:4200"
volumes:
- /tmp/crate/01:/data
command: ["crate",
"-Cnode.name=cratedb",
"-Cnode.data=true"]
environment:
- CRATE_HEAP_SIZE=2g
My jdbc-sink-connector.properties
name=jdbc-sink-connector
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=metrics
connection.url="jdbc:postgresql://cratedb:4200/doc?user=crate",
auto.create=true
I am not quite sure what I am missing.
It could be caused by port 4200, which is the default port for HTTP; the default port for the PostgreSQL protocol is 5432.
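For example, a corrected connection.url would then look like this (a sketch, assuming CrateDB's default PostgreSQL wire-protocol port of 5432; note also that a .properties value must not be wrapped in quotes or end with a trailing comma, since both become part of the value):
connection.url=jdbc:postgresql://cratedb:5432/doc?user=crate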
You can also have a look at a docker-compose file we use for testing here.
It works for me locally, but I get an error when using Docker.
The error is: localhost/<unresolved>:5000: connection failed.
How can I set this unresolved value for the Logstash destination?
docker-compose
version: '3.2'
services:
elasticsearch:
image: elasticsearch:$ELK_VERSION
volumes:
- elasticsearch:/usr/share/elasticsearch/data
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
# Note: currently there doesn't seem to be a way to change the default user for Elasticsearch
ELASTIC_PASSWORD: $ELASTIC_PASSWORD
# Use single node discovery in order to disable production mode and avoid bootstrap checks
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
# X-Pack security needs to be enabled for Elasticsearch to actually authenticate requests
xpack.security.enabled: "true"
ports:
- "9200:9200"
- "9300:9300"
healthcheck:
test: "wget -q -O - http://$ELASTIC_USER:$ELASTIC_PASSWORD#localhost:9200/_cat/health"
interval: 1s
timeout: 30s
retries: 300
networks:
- internal
restart: unless-stopped
# https://www.elastic.co/guide/en/logstash/current/docker-config.html
logstash:
image: logstash:$ELK_VERSION
ports:
- "5000:5000"
- "9600:9600"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_USER: $ELASTIC_USER
ELASTIC_PASSWORD: $ELASTIC_PASSWORD
XPACK_MONITORING_ELASTICSEARCH_USERNAME: $ELASTIC_USER
XPACK_MONITORING_ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
XPACK_MONITORING_ELASTICSEARCH_HOSTS: "elasticsearch:9200"
XPACK_MONITORING_ENABLED: "true"
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
networks:
- internal
restart: unless-stopped
depends_on:
- elasticsearch
# https://www.elastic.co/guide/en/kibana/current/docker.html
kibana:
image: kibana:${ELK_VERSION}
environment:
ELASTICSEARCH_USERNAME: $ELASTIC_USER
ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
# Because Elasticsearch is running in a containerized environment
# (setting this to false will result in CPU stats not being correct in the Monitoring UI):
XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED: "true"
ports:
- "5601:5601"
networks:
- internal
restart: unless-stopped
depends_on:
- elasticsearch
- logstash
mysqldb:
image: mysql:5.7
restart: unless-stopped
env_file: ./.env
environment:
- MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
- MYSQL_DATABASE=$MYSQLDB_DATABASE
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- db:/var/lib/mysql
app:
depends_on:
- mysqldb
build: ./../
restart: on-failure
env_file: ./.env
ports:
- $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
environment:
SPRING_APPLICATION_JSON: '{
"spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false",
"spring.datasource.username" : "$MYSQLDB_USER",
"spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
"spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
"spring.jpa.hibernate.ddl-auto" : "update",
"spring.application.name" : "ebnelhaythem"
}'
volumes:
- .m2:/root/.m2
networks:
internal:
volumes:
elasticsearch:
db:
And the logs are:
elastic_log_docker-app-1 | 14:15:45,959 |-WARN in net.logstash.logback.appender.LogstashTcpSocketAppender[logstash] - Log destination localhost/<unresolved>:5000: connection
failed. java.net.ConnectException: Connection refused
elastic_log_docker-app-1 | at java.net.ConnectException: Connection refused
elastic_log_docker-app-1 | at at java.base/sun.nio.ch.Net.pollConnect(Native Method)
elastic_log_docker-app-1 | at at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
Your Spring Boot application expects Logstash to be available at localhost:
logstash:
host: localhost
However, when you run everything inside docker-compose, Logstash will not be reachable at localhost, since localhost now refers to the app container itself.
To resolve this, override the logstash host property when you run the application with docker-compose, giving it the value logstash (the name of the docker-compose service), just as you already do for the MySQL URL.
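A minimal sketch of that override, assuming the appender host is driven by the logstash.host property shown above, is to pass it through SPRING_APPLICATION_JSON in the app service, alongside the datasource properties you already set there:
environment:
  SPRING_APPLICATION_JSON: '{
    "spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false",
    "logstash.host" : "logstash"
  }'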
My task is to run Kafka connectors in Docker containers. When I ran the connectors on my host everything worked fine, but in a container it fails. I know there are a lot of similar questions (I am new to Kafka and Redis and really cannot find a solution). I have tried a lot of solutions; the approach from this answer, Connecting a Redis container with another container (Docker), seems really promising, but it doesn't work for me and I can't understand why.
Inside the Docker container it gives me this stack trace:
ERROR WorkerSinkTask{id=kafka-connect-redis-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:186)
io.lettuce.core.RedisConnectionException: Unable to connect to redis-master:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-master/172.22.0.3:6379
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
My Redis config:
replicaof redis-master 6379
# bind 0.0.0.0
protected-mode no
My connector properties file
name=kafka-connect-redis
topics=bots
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.redis.RedisSinkConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.storage.StringConverter
#offset.storage.file.filename=/tmp/connection/redis/connect.offsets
redis.hosts=redis-master,redis-slave-1,redis-slave-2
use.record.key=true
My docker compose:
redis-master:
image: redis
container_name: redis-m
# ports:
# - 6379:6379
volumes:
- ./clear-redis.sh:/bin/clear-redis.sh
command: bash -c "chmod +x /bin/clear-redis.sh && bash /bin/clear-redis.sh"
redis-slave-1:
image: redis
container_name: redis-s-1
# ports:
# - 7000:6379
volumes:
- ./config/redis:/usr/local/etc/redis/
command: redis-server /usr/local/etc/redis/redis.conf
redis-slave-2:
image: redis
container_name: redis-s-2
# ports:
# - 7001:6379
volumes:
- ./config/redis:/usr/local/etc/redis/
command: redis-server /usr/local/etc/redis/redis.conf
kafka-connect-redis:
image: confluentinc/cp-kafka-connect:5.5.1
container_name: connect-redis
hostname: kafka-connect-redis
ports:
- 8086:8086
volumes:
- ./connection/connectors/kafka-connect-redis/lib/:/etc/kafka-connect/jars/
- ./connection/connectors/kafka-connect-redis/redis-sink.properties:/usr/share/redis-sink.properties
- ./connection/connect-standalone-2.properties:/etc/connect-standalone-2.properties
- ./connection/run-redis.sh:/bin/run-redis.sh
depends_on:
- zoo
- kafka1
- kafka2
- kafka3
- redis-master
- redis-slave-1
- redis-slave-2
command: bash -c "chmod +x /bin/run-redis.sh && ./bin/run-redis.sh"
I'm trying to set up a Keycloak container on my Docker host. Unfortunately, the Keycloak container can't connect to my DB container and always throws a java.net.UnknownHostException.
My docker-compose file:
version: '2'
services:
keycloak:
image: quay.io/keycloak/keycloak:latest
container_name: keycloak
restart: always
networks:
- webgateway
- keycloak-net
environment:
- DB_VENDOR=MYSQL
- DB_ADDR=keycloak-db
- DB_DATABASE=keycloak
- DB_USER=keycloak
- DB_PASSWORD=password
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=password
depends_on:
- keycloak-db
keycloak-db:
image: mysql:5.7
container_name: keycloak-db
restart: always
volumes:
- /var/keycloak/database:/var/lib/mysql
networks:
- keycloak-net
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=keycloak
- MYSQL_USER=keycloak
- MYSQL_PASSWORD=password
networks:
keycloak-net:
webgateway:
external:
name: docker_webgateway
The error message:
Caused by: java.net.UnknownHostException: keycloak-db: Name or service not known
at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
at com.mysql.jdbc@8.0.19//com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
at com.mysql.jdbc@8.0.19//com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 64 more

23:44:04,490 FATAL [org.keycloak.services] (ServerService Thread Pool -- 72) java.lang.RuntimeException: Failed to connect to database
Does someone know what I'm doing wrong?
P.S. The docker host is running locally so don't worry about my weak passwords.
The depends_on flag only ensures the order in which containers are started, not whether a container is actually ready to serve requests. That is what is happening here: the SQL container has started, but it is not yet ready to receive requests because it needs time to initialize. Please follow the commands and the file below, in which I also added health checks to the docker-compose file; note that you need to change the version to 2.1 or higher.
Create an external docker network
docker network create docker_webgateway
docker-compose.yaml
version: '2.1'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    container_name: keycloak
restart: always
networks:
- webgateway
- keycloak-net
environment:
- DB_VENDOR=MYSQL
- DB_ADDR=keycloak-db
- DB_DATABASE=keycloak
- DB_USER=keycloak
- DB_PASSWORD=password
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=password
depends_on:
- keycloak-db
keycloak-db:
image: mysql:5.7
container_name: keycloak-db
restart: always
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
timeout: 20s
retries: 10
volumes:
- /var/keycloak/database:/var/lib/mysql
networks:
- keycloak-net
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=keycloak
- MYSQL_USER=keycloak
- MYSQL_PASSWORD=password
networks:
keycloak-net:
webgateway:
external:
name: docker_webgateway
Create the stack by running the command below:
docker-compose up
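For reference, version 2.1 (or higher) of the compose file format is needed because it supports the long form of depends_on with a condition, which is what actually makes Compose wait for the health check before starting Keycloak. A minimal sketch for the keycloak service:
keycloak:
  depends_on:
    keycloak-db:
      condition: service_healthy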
So I have a little problem here. I have 4 modules:
Eureka Server
Zuul Gateway
Authentication Service
Another MicroService
When I start it on my local computer it doesn't show any problem, but when I build and run it with Docker it always fails with an error like:
Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: authentication-service
Can you tell me which part I got wrong?
eureka server properties
spring.application.name=bms-server
# default port for eureka server
server.port=8761
eureka.instance.hostname=bms-server
# eureka by default will register itself as a client. So, we need to set it to false.
# What's a client server? See other microservices (student, auth, etc).
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
Zuul Server properties
server.port=8762
spring.application.name=bms-api-gateway
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://bms-server:8761/eureka/
zuul.ignored-services=*
zuul.routes.authentication-service.path=/auth/**
zuul.routes.authentication-service.service-id=authentication-service
zuul.routes.general-setup-service.path=/general-setup/**
zuul.routes.general-setup-service.service-id=general-setup-service
zuul.routes.authentication-service.strip-prefix=false
zuul.routes.authentication-service.sensitive-headers=Cookie,Set-Cookie
zuul.retryable=true
zuul.ignored-headers=Access-Control-Allow-Credentials, Access-Control-Allow-Origin
ribbon.eureka.enabled=false
ribbon.ConnectTimeout=60000
ribbon.ReadTimeout=60000
hystrix.command.default.execution.enabled=false
hystrix.command.default.execution.isolation.strategy=THREAD
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=240000
Authentication service properties
spring.application.name=authentication-service
server.port=9100
eureka.client.service-url.defaultZone=http://bms-server:8761/eureka/
authentication-service.ribbon.listOfServers=http://localhost:9100
spring.cloud.loadbalancer.ribbon.enabled=false
And this is my other microservice's properties:
spring.application.name=authentication-service
server.port=9100
eureka.client.service-url.defaultZone=http://bms-server:8761/eureka/
general-setup-service.ribbon.listOfServers=http://localhost:9100
spring.cloud.loadbalancer.ribbon.enabled=false
And lastly, this is my docker-compose.yml:
version: '3.5'
services:
bms-server:
image: bms-server:v1
container_name: bms-server
hostname: bms-server
build:
context: ./bms-server
dockerfile: Dockerfile
volumes:
- maven-home:/root/.m2
ports:
- "8761:8761"
networks:
- bms-network
bms-api-gateway:
image: bms-api-gateway:v1
container_name: bms-api-gateway
build:
context: ./bms-api-gateway
dockerfile: Dockerfile
ports:
- "8762:8762"
depends_on:
- bms-server
volumes:
- maven-home:/root/.m2
links:
- bms-server:bms-server
hostname: bms-api-gateway
networks:
- bms-network
bms-authentication-service:
image: bms-authentication-service:v1
container_name: bms-authentication-service
build:
context: ./bms-authentication-service
dockerfile: Dockerfile
ports:
- "9100:9100"
volumes:
- maven-home:/root/.m2
depends_on:
- bms-server
links:
- bms-server:bms-server
hostname: authentication-service
networks:
- bms-network
bms-general-setup-service:
image: bms-general-setup-service:v1
container_name: bms-general-setup-service
build:
context: ./bms-general-setup-service
dockerfile: Dockerfile
ports:
- "9102:9102"
depends_on:
- bms-server
links:
- bms-server:bms-server
volumes:
- maven-home:/root/.m2
hostname: general-setup-service
networks:
- bms-network
volumes:
maven-home:
networks:
bms-network:
name: bms-network
driver: bridge
Please tell me which part I got wrong. Thank you very much.
I'm using docker-compose to run 3 containers:
My webapplication
Postgres
Cassandra
Once I run docker-compose up, my webapp throws this exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: cassandra/172.17.0.3:9042
Once all containers are running, I'm able to enter my webapp container and ping the cassandra container before it (the webapp container) dies, and all packets are returned successfully, so I guess there actually IS connectivity between them.
The weirdest thing is that I once got this exception:
.InvalidQueryException: Keyspace 'myKeyspace' does not exist
which means the connection had been established. But that was before I added persistence and created the mentioned schema, and I changed nothing in my compose.yml to get this new result.
Here is my docker-compose.yml:
version: '3.1'
services:
cassandra:
container_name: "cassandra"
image: cassandra
ports:
- 9042:9042
volumes:
- /home/cassandra:/var/lib/cassandra
postgresql:
container_name: "postgresql"
image: postgres:11.1-alpine
restart: always
environment:
POSTGRES_DB: mywebapp
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
volumes:
#- ./startup.sql:/docker-entrypoint-initdb.d/startup.sql
- postgresdata:/var/lib/postgresql/data
ports:
- 5432:5432
mywebapp:
container_name: "mywebapp"
image: openjdk:10-jre-slim
hostname: mywebapp
volumes:
- ./lib:/home/lib
- ./mywebapp-1.0.1-SNAPSHOT-exec.jar:/home/mywebapp-1.0.1-SNAPSHOT-exec.jar
entrypoint:
- java
- -jar
- -Djava.library.path=/home/lib
- /home/mywebapp-1.0.1-SNAPSHOT-exec.jar
environment:
- LD_LIBRARY_PATH=/home/lib
- spring.datasource.url=jdbc:postgresql://postgresql:5432/mywebapp
- spring.cassandra.contactpoints=cassandra
- spring.cassandra.port=9042
- spring.cassandra.keyspace=mywebapp
#- spring.datasource.username=postgres
#- spring.datasource.password=postgres
#- spring.jpa.hibernate.ddlAuto=update+
ports:
- 8443:8443
- 8080:8080
depends_on:
- cassandra
volumes:
postgresdata:
Thank you all in advance
I am assuming your web app requires the cassandra service to be running when it starts. You should add a depends_on entry to your web app service so Docker starts it only after cassandra has started.
Also, the links entry is not necessary, as Docker will automatically use the service names as hostnames within the network created for this docker-compose project. The same goes for network_type: bridge, which is the default network type, so you can omit it in your case.
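A minimal sketch of that entry on the web app service (your compose file above already declares it):
mywebapp:
  depends_on:
    - cassandra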