I have an application which uses Camel Spring Boot with Debezium to listen to a MySQL database and publish to a Kafka topic.
It was all working fine until I switched Kafka from a local broker to Confluent Cloud. I have some other applications (regular producers and consumers) that connect to Confluent Cloud and work fine.
This is my application.yml. I removed the debezium-mysql part because it works fine, so I left only the Kafka/Confluent configuration.
routes:
  debezium:
    allow-public-key-retrieval: true
    bootstrap-servers: ${application.kafka.brokers}
    offset-storage:
      topic-cleanup-policy: compact
camel:
  component:
    debezium-mysql:
      # ALL CONFIG with mysql, it is not here because it is working fine
    kafka:
      brokers: ${application.kafka.brokers}
      schema-registry-u-r-l: ${application.schema-registry.base-urls}
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      additional-properties:
        # CCloud Schema Registry connection parameters
        schema.registry.basic.auth.credentials.source: USER_INFO
        schema.registry.basic.auth.user.info: ${SCHEMA_REGISTRY_ACCESS_KEY}:${SCHEMA_REGISTRY_SECRET_KEY}
        ssl.endpoint.identification.algorithm: https
        client.dns.lookup: use_all_dns_ips
      sasl-jaas-config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";
      security-protocol: SASL_SSL
      retry-backoff-ms: 500
      request-timeout-ms: 20000
      sasl-mechanism: PLAIN
With this config, it keeps giving me an error when I try to start the app:
[AdminClient clientId=adminclient-1] Node -1 disconnected.
2023-02-02 16:57:24.853 INFO 9644 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 146ms, elapsed time since send: 146ms, request timeout: 3600000ms)
I could verify that the problem is in the AdminClient config, which doesn't get the right properties. For example, security.protocol should be SASL_SSL, but it gets PLAINTEXT. When the producer and consumer are created, however, they do get the right values.
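As a sanity check independent of Camel's wiring, the same SASL_SSL settings can be handed to a standalone AdminClient to rule out the credentials and endpoint themselves. A minimal sketch, assuming a KAFKA_BROKERS environment variable stands in for the broker list:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ConfluentAdminCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same settings the Camel component is expected to hand to its AdminClient
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv("KAFKA_BROKERS")); // assumed env var
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"" + System.getenv("CONFLUENT_CLOUD_USERNAME") + "\" "
                        + "password=\"" + System.getenv("CONFLUENT_CLOUD_PASSWORD") + "\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // If SASL_SSL is applied, this lists the cluster's topics instead of timing out
            System.out.println(admin.listTopics().names().get());
        }
    }
}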
Really, I have been struggling with this for two days. I would be really happy with any help. Thank you.
Related
I use this logback appender to send logs to Kafka:
https://github.com/danielwegener/logback-kafka-appender
When Kafka was PLAINTEXT, everything worked correctly. But since Kafka changed to SSL, it is no longer possible to send messages. I did not find the necessary information in the readme.md. Has anyone had experience with this setup? Or maybe you use something else?
<topic>TEST_TOPIC_FOR_OS</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy">
</deliveryStrategy>
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<producerConfig>bootstrap.servers=KAFKA BROKER HOST</producerConfig>
<producerConfig>acks=0</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>
<producerConfig>buffer.memory=16777216</producerConfig>
<producerConfig>max.block.ms=100</producerConfig>
<producerConfig>retries=2</producerConfig>
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback</producerConfig>
<producerConfig>compression.type=none</producerConfig>
<producerConfig>security.protocol=SSL</producerConfig>
<producerConfig>ssl.keystore.location=path_to_jks</producerConfig>
<producerConfig>ssl.keystore.password=PASSWORD</producerConfig>
<producerConfig>ssl.truststore.location=path_to_jks</producerConfig>
<producerConfig>ssl.truststore.password=PASSWORD</producerConfig>
<producerConfig>ssl.endpoint.identification.algorithm=</producerConfig>
<producerConfig>ssl.protocol=TLSv1.1</producerConfig>
For any existing topic, I get an error:
12:05:49.505 [kafka-producer-network-thread | host-default-logback] route: DEBUG o.a.k.clients.producer.KafkaProducer breadcrumbId: - [Producer clientId=host-default-logback] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Topic TEST_TOPIC_FOR_OS not present in metadata after 100 ms.
The application itself works correctly with this Kafka cluster and topic.
The problem went away after upgrading the appender to 0.2.0.
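Independent of the appender version, the SSL settings themselves can be checked outside logback by handing the same producerConfig values to a plain KafkaProducer. A minimal sketch, assuming the placeholders above (broker host, keystore paths, passwords) are filled in; it also raises max.block.ms, since the appender's 100 ms is exactly the timeout reported in the error:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SslProducerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "KAFKA BROKER HOST");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Same SSL settings as the <producerConfig> entries above
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.location", "path_to_jks");
        props.put("ssl.keystore.password", "PASSWORD");
        props.put("ssl.truststore.location", "path_to_jks");
        props.put("ssl.truststore.password", "PASSWORD");
        // Allow more time for metadata than the appender's max.block.ms=100
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "30000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("TEST_TOPIC_FOR_OS", "ssl-check")).get();
            System.out.println("Message sent over SSL");
        }
    }
}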
Java 8,
Flink 1.9.1,
Azure Event Hub
I can no longer connect to Azure Event Hubs with my Flink project as of Jan 5th, 2020. I was having the same issue with several Spring Boot apps, but there it was resolved when I upgraded to Spring Boot 2.2.2, which also updated the Kafka clients and Kafka dependencies to 2.3.1. I have attempted to update Flink's Kafka dependencies without success. I've also submitted an issue:
https://issues.apache.org/jira/browse/FLINK-15557
2020-01-10 19:36:30,364 WARN org.apache.kafka.clients.NetworkClient -
[Consumer clientId=consumer-1, groupId=****] Bootstrap broker
*****.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
Connection Properties
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EntityValue>;
You must be using an entity-level connection string, and that is why your clients are observing connection failures. The issue should resolve once a namespace-level connection string is used.
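For reference, with the Kafka endpoint of Event Hubs the JAAS entry normally wraps the namespace-level connection string (no EntityPath segment) with the literal user name $ConnectionString. A sketch of the client properties, where the namespace FQDN and key names are placeholders:

import java.util.Properties;

public class EventHubKafkaProps {
    // Kafka client properties for the Event Hubs Kafka endpoint, using a
    // namespace-level connection string; <namespace>, <KeyName>, <KeyValue> are placeholders.
    static Properties eventHubConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<namespace>.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The user name is the literal string "$ConnectionString"; the password is the
        // namespace-level connection string (no EntityPath=... segment).
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"$ConnectionString\" "
                        + "password=\"Endpoint=sb://<namespace>.servicebus.windows.net/;"
                        + "SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>\";");
        return props;
    }
}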
While the Spring Boot app is running, if I shut down the broker completely (both Kafka and ZooKeeper), I see this warning in the console repeating indefinitely:
[org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
WARN o.apache.kafka.clients.NetworkClient - [Consumer
clientId=consumer-1, groupId=ResponseReceiveConsumerGroup]
Connection to node 2147483647 could not be established. Broker may not
be available.
Is there a way in Spring Boot to handle this gracefully instead of infinite logs on the console?
Increase the reconnect.backoff.ms property (see Kafka docs).
The default is only 50ms.
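In an app that defines its own consumer factory, setting the backoff could look roughly like the sketch below; the bean, group id, and the 10 s / 60 s values are illustrative rather than prescribed:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class ConsumerBackoffConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ResponseReceiveConsumerGroup");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Wait much longer between reconnect attempts than the 50 ms default,
        // so an unavailable broker does not flood the log with warnings.
        props.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, "10000");
        props.put(ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, "60000");
        return new DefaultKafkaConsumerFactory<>(props);
    }
}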
We have one producer, one consumer, and one partition. Both consumer and producer are Spring Boot applications. The consumer app runs on my local machine, while the producer, along with Kafka and ZooKeeper, runs on a remote machine.
During development, I redeployed my producer application with some changes. But after that, my consumer is not receiving any messages. I tried restarting the consumer, but no luck. What can be the issue and/or how can it be solved?
Consumer Config:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        input:
          destination: sales
          content-type: application/json
      kafka:
        binder:
          brokers: ${SERVICE_REGISTRY_HOST:127.0.0.1}
          zkNodes: ${SERVICE_REGISTRY_HOST:127.0.0.1}
          defaultZkPort: 2181
          defaultBrokerPort: 9092
server:
  port: 0
Producer Config:
cloud:
  stream:
    defaultBinder: kafka
    bindings:
      output:
        destination: sales
        content-type: application/json
    kafka:
      binder:
        brokers: ${SERVICE_REGISTRY_HOST:127.0.0.1}
        zkNodes: ${SERVICE_REGISTRY_HOST:127.0.0.1}
        defaultZkPort: 2181
        defaultBrokerPort: 9092
EDIT2:
After 5 minutes, the consumer app dies with the following exception:
2017-09-12 18:14:47,254 ERROR main o.s.c.s.b.k.p.KafkaTopicProvisioner:253 - Cannot initialize Binder
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2017-09-12 18:14:47,255 WARN main o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext:550 - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is org.springframework.cloud.stream.binder.BinderException: Cannot initialize binder:
2017-09-12 18:14:47,256 INFO main o.s.i.m.IntegrationMBeanExporter:449 - Unregistering JMX-exposed beans on shutdown
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:241 - Unregistering JMX-exposed beans
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: input
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: nullChannel
2017-09-12 18:14:47,258 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: errorChannel
See if the suggestion above about DEBUG reveals any further information. It looks like you are getting a Timeout exception from the KafkaTopicProvisioner, but I assume that occurs when you restart the consumer. The consumer seems to have some trouble communicating with the broker, and you need to find out what's going on there.
Well, it looks like there is already a bug reported against spring-cloud-stream-binder-kafka stating that the resetOffset property has no effect. Hence, the consumer always requested messages with the offset set to latest.
As mentioned in the git issue, the only workaround is to fix this via the Kafka consumer CLI tool.
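That CLI route uses Kafka's consumer-group tooling. As a rough programmatic equivalent (not the workaround referenced in the issue), a throwaway consumer can join the same group, seek, and commit the desired offset; a sketch, where the group name is illustrative and the topic and partition come from the config above:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetSalesOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        // Must be the same group the binding uses (name here is illustrative)
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sales-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        TopicPartition tp = new TopicPartition("sales", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToBeginning(Collections.singletonList(tp)); // or seek(tp, <offset>)
            consumer.position(tp);  // resolves the seek against the broker
            consumer.commitSync();  // stores the new offset for the group
        }
    }
}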
I'm just getting started with Apache Kafka/ZooKeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers:
one running ZooKeeper and two running Kafka.
I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:
on the Kafka producer:
kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa
on the Kafka consumer:
kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa
I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:
ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
On the consumer side I get this message, which repeats periodically:
WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
After a while, the producer will then start rapidly throwing this error message with # increasing incrementally:
WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Not sure where to go from here. Let me know if more details about my configuration files are needed.
This was a configuration issue.
In order to get it running, several changes to the config files had to happen:
In config/server.properties on each Kafka server:
host.name: <Public IP>
advertised.host.name: <AWS Public DNS Address>
In config/producer.properties on each Kafka server:
metadata.broker.list: <Producer Server advertised.host.name>:<Producer Server port>,<Consumer Server advertised.host.name>:<Consumer Server port>
In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:
<Public IP> localhost localhost.localdomain
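With those changes in place, one way to confirm that metadata now resolves via the advertised addresses is a minimal producer pointed at both brokers. A sketch, keeping the DNS names as placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SamsaProducerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "<kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If advertised.host.name resolves, metadata is found and the send completes
            RecordMetadata md = producer.send(new ProducerRecord<>("samsa", "hi")).get();
            System.out.println("Written to partition " + md.partition() + " at offset " + md.offset());
        }
    }
}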