Apache Kafka: Failed to Update Metadata/java.nio.channels.ClosedChannelException - java

I'm just getting started with Apache Kafka/ZooKeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers: one running ZooKeeper and two running Kafka.
I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:
on the Kafka producer:
kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa
on the Kafka consumer:
kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa
I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:
ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
On the consumer side I get this message, which repeats periodically:
WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
After a while, the producer will then start rapidly throwing this error message with # increasing incrementally:
WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Not sure where to go from here. Let me know if more details about my configuration files are needed.

This was a configuration issue.
To get it running, several changes to the config files were needed:
In config/server.properties on each Kafka server:
host.name: <Public IP>
advertised.host.name: <AWS Public DNS Address>
In config/producer.properties on each Kafka server:
metadata.broker.list: <Producer Server advertised.host.name>:<Producer Server port>,<Consumer Server advertised.host.name>:<Consumer Server port>
In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:
<Public IP> localhost localhost.localdomain
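Putting these together, config/server.properties on each broker ends up looking roughly like this (broker.id, the ports, and the ZooKeeper address are illustrative placeholders, not values from the post above):

broker.id=0
port=9092
# Interface the broker binds to
host.name=<Public IP>
# Address handed out to clients in metadata responses
advertised.host.name=<AWS Public DNS Address>
advertised.port=9092
# Single ZooKeeper node in this setup
zookeeper.connect=<zookeeper server ip>:2181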

Related

How to integrate Camel Spring Boot with Kafka in Confluent Cloud?

I have an application which uses Camel Spring Boot with Debezium to listen to a MySQL database and publish to a Kafka topic.
It was all working fine until I changed Kafka from a local broker to Confluent Cloud. I have some other applications (normal producers and consumers) that connect to Confluent Cloud and they all work fine.
This is my application.yml. I removed the debezium-mysql part because it works fine, leaving only the Kafka/Confluent configuration.
routes:
  debezium:
    allow-public-key-retrieval: true
    bootstrap-servers: ${application.kafka.brokers}
    offset-storage:
      topic-cleanup-policy: compact
camel:
  component:
    debezium-mysql:
      # ALL CONFIG with mysql, it is not here because it is working fine
    kafka:
      brokers: ${application.kafka.brokers}
      schema-registry-u-r-l: ${application.schema-registry.base-urls}
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      additional-properties:
        # CCloud Schema Registry connection parameters
        schema.registry.basic.auth.credentials.source: USER_INFO
        schema.registry.basic.auth.user.info: ${SCHEMA_REGISTRY_ACCESS_KEY}:${SCHEMA_REGISTRY_SECRET_KEY}
        ssl.endpoint.identification.algorithm: https
        client.dns.lookup: use_all_dns_ips
      sasl-jaas-config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";
      security-protocol: SASL_SSL
      retry-backoff-ms: 500
      request-timeout-ms: 20000
      sasl-mechanism: PLAIN
With this config, it keeps giving me an error when I try to start the app:
[AdminClient clientId=adminclient-1] Node -1 disconnected.
2023-02-02 16:57:24.853 INFO 9644 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 146ms, elapsed time since send: 146ms, request timeout: 3600000ms)
I could verify that the problem is in this AdminClient config, which doesn't get the right properties. For example, security.protocol should be SASL_SSL, but it is set to PLAINTEXT. When the producer and consumer are created, they do get the right values.
I have been struggling with this for two days. I would be really happy with any help. Thank you.
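Assuming the AdminClient that fails is the one created by Spring Boot's own Kafka auto-configuration (KafkaAdmin) rather than by Camel, one thing worth trying is declaring the SASL settings at the spring.kafka level as well; Spring Boot applies spring.kafka.security.protocol and spring.kafka.properties.* to admin, producer, and consumer clients alike. A minimal sketch:

spring:
  kafka:
    bootstrap-servers: ${application.kafka.brokers}
    security:
      protocol: SASL_SSL
    properties:
      sasl.mechanism: PLAIN
      # Same JAAS config as above, so every client type authenticates to Confluent Cloud
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";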

How to use Kafka with SSL via a logback appender?

I use this logback appender to send logs to Kafka:
https://github.com/danielwegener/logback-kafka-appender
When Kafka was PLAINTEXT, everything worked correctly. But when Kafka switched to SSL, it was no longer possible to send messages. I did not find the necessary information in the readme.md. Has anyone had experience with this setup? Or should I use something else?
<topic>TEST_TOPIC_FOR_OS</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy">
</deliveryStrategy>
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<producerConfig>bootstrap.servers=KAFKA BROKER HOST</producerConfig>
<producerConfig>acks=0</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>
<producerConfig>buffer.memory=16777216</producerConfig>
<producerConfig>max.block.ms=100</producerConfig>
<producerConfig>retries=2</producerConfig>
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback</producerConfig>
<producerConfig>compression.type=none</producerConfig>
<producerConfig>security.protocol=SSL</producerConfig>
<producerConfig>ssl.keystore.location=path_to_jks</producerConfig>
<producerConfig>ssl.keystore.password=PASSWORD</producerConfig>
<producerConfig>ssl.truststore.location=path_to_jks</producerConfig>
<producerConfig>ssl.truststore.password=PASSWORD</producerConfig>
<producerConfig>ssl.endpoint.identification.algorithm=</producerConfig>
<producerConfig>ssl.protocol=TLSv1.1</producerConfig>
For any existing topic, I get an error:
12:05:49.505 [kafka-producer-network-thread | host-default-logback] route: DEBUG o.a.k.clients.producer.KafkaProducer breadcrumbId: - [Producer clientId=host-default-logback] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Topic TEST_TOPIC_FOR_OS not present in metadata after 100 ms.
The application itself works correctly with this Kafka broker and topic.
The problem went away after upgrading the appender to 0.2.0.
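The upgrade itself is just a dependency bump; assuming the standard Maven coordinates from the project's GitHub page, it looks roughly like this (version taken from the answer above):

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
</dependency>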

Topic(s) [test-topic-new] is/are not present and missingTopicsFatal is true

I am running a consumer service on one server (12.255.123.789). There are 3 Kafka servers (XX.XXX.XXX.123, XX.XXX.XXX.124, XX.XXX.XXX.125) in a cluster and three ZooKeeper servers (YY.YYY.YYY.123, YY.YYY.YYY.124, YY.YYY.YYY.125) running. My consumer properties are:
spring.kafka.consumer.bootstrap-servers=XX.XXX.XXX.123:9092,XX.XXX.XXX.124:9092,XX.XXX.XXX.125:9092
spring.kafka.consumer.group-id: prod
#spring.kafka.consumer.auto-offset-reset: earliest
spring.kafka.consumer.auto-offset-reset: latest
spring.kafka.consumer.key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer: org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.listener.concurrency: 6
I used this command to create the topic on the Kafka cluster (XX.XXX.XXX.123, XX.XXX.XXX.124, XX.XXX.XXX.125):
bin/kafka-topics.sh --create --zookeeper YY.YYY.YYY.123:2181,YY.YYY.YYY.124:2181,YY.YYY.YYY.125:2181 --replication-factor 2 --partitions 1 --topic test-topic-new --config cleanup.policy=delete --config delete.retention.ms=60000
While starting my consumer service on the 12.255.123.789 server, I get the exception below:
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is java.lang.IllegalStateException: Topic(s) [test-topic-new] is/are not present and missingTopicsFatal is true
Am I doing anything wrong here?
The communication between the Kafka servers was not established; that's why the application was unable to find the topics. We needed to set one leader node. After this, the application ran smoothly.
advertised.listeners=PLAINTEXT://<serverIP:port>
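As a rough sketch, the relevant lines in config/server.properties on each broker would look something like this (addresses and ports are placeholders):

# Interface and port the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# Address other brokers and clients use to reach this broker
advertised.listeners=PLAINTEXT://<serverIP>:<port>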

Kafka Connect implementation errors

I was running through the tutorial here: http://kafka.apache.org/documentation.html#introduction
When I get to "Step 7: Use Kafka Connect to import/export data" and attempt to start two connectors, I get the following errors:
ERROR Failed to flush WorkerSourceTask{id=local-file-source-0}, timed out while waiting for producer to flush outstanding messages, 1 left
ERROR Failed to commit offsets for WorkerSourceTask
Here is the portion of the tutorial:
Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
I have spent some time looking for a solution, but was unable to find anything useful. Any help is appreciated.
Thanks!
The reason I was getting this error was that the first server I created using config/server.properties was not running. I assume that because it was the leader for the topic, the messages could not be flushed and the offsets could not be committed.
Once I started the Kafka server using the server properties (config/server.properties), the issue was resolved.
You need to start the Kafka server and ZooKeeper before running Kafka Connect.
You need to run the commands from "Step 2: Start the server" first:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
from here: https://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCAK0BMEpgWmL93wgm2jVCKbUT5rAZiawzOroTFc_A6Q=GaXQgfQ#mail.gmail.com%3E
You need to start ZooKeeper and the Kafka servers before running that line.
Start ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties
Start multiple Kafka servers:
bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
Start the connectors:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
Then you will see some lines written to test.sink.txt:
foo
bar
And you can start the consumer to check it:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
If you configure your Kafka broker with a hostname such as my.sandbox.com, make sure that you modify config/connect-standalone.properties accordingly:
bootstrap.servers=my.sandbox.com:9092
On Hortonworks HDP the default port is 6667, hence the setting is
bootstrap.servers=my.sandbox.com:6667
If Kerberos is enabled you will need the following settings as well (without SSL):
security.protocol=PLAINTEXTSASL
producer.security.protocol=PLAINTEXTSASL
producer.sasl.kerberos.service.name=kafka
consumer.security.protocol=PLAINTEXTSASL
consumer.sasl.kerberos.service.name=kafka

kafka + zookeeper remote = error

I am trying to install a Kafka & ZooKeeper instance on a remote server. I only need one node of each, because I only want to provide remote Kafka for test purposes.
Kafka and ZooKeeper are running from the Apache Kafka tarball you can find there (v0.0.9), inside a Docker image.
I am trying to consume/produce using the provided scripts, and also trying to produce using my own Java application. Everything works fine when Kafka & ZK are installed on the local server.
Here is the error I get while trying to produce:
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
No partition metadata for topic RSS due to kafka.common.LeaderNotAvailableException}] for topic [RSS]: class kafka.common.LeaderNotAvailableException
Kafka properties tested
First:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=localhost:<PORT>
Second:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
Third:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Last:
broker.id=0
port=9092
host.name=</etc/host name>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Here is my "/etc/hosts"
127.0.0.1 kafka kafka
127.0.0.1 localhost
I followed the Getting Started guide, which if I understood correctly is a localhost / single-server configuration. I cannot understand what I have to do to get this working with remote calls...
Thanks for your help!
EDIT 1
host.name=localhost
advertised.host.name=politik.cm-cloud.fr
This seems to allow a local consumer (on the server) and producer. But if we try to do the same from a remote server we get:
[2015-12-09 12:44:10,826] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: No route to host
The error does not look like a connectivity problem with ZooKeeper / Kafka.
Just follow the instructions in the "quickstart" from http://kafka.apache.org/
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
Additionally, the error indicates there is no partition info, i.e. the topic has not yet been created. Try creating the topic first and then produce/consume, because when producing to a non-existent topic Kafka will create it based on auto.create.topics.enable in server.properties, but for a remote setup it is better to create topics explicitly rather than relying on auto-create.
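For example, with the 0.9-era tooling used here, creating the topic up front would look roughly like this (partition and replication values are illustrative):

bin/kafka-topics.sh --create --zookeeper <external-ip>:<PORT> --replication-factor 1 --partitions 1 --topic RSS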
