Flink 1.9.1 Can no longer connect to Azure Event Hub - java

Java 8,
Flink 1.9.1,
Azure Event Hub
I can no longer connect to Azure Event Hub from my Flink project as of Jan 5th, 2020. I was having the same issue with several Spring Boot apps, but it was resolved when I upgraded to Spring Boot 2.2.2, which also updated the Kafka clients and Kafka dependencies to 2.3.1. I have attempted to update Flink's Kafka dependencies without success. I've also submitted an issue:
https://issues.apache.org/jira/browse/FLINK-15557
2020-01-10 19:36:30,364 WARN org.apache.kafka.clients.NetworkClient -
[Consumer clientId=consumer-1, groupId=****] Bootstrap broker
*****.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
Connection Properties
"sasl.mechanism"="PLAIN");
"security.protocol"="SASL_SSL");
"sasl.jaas.config"="Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EntityValue>;

You must be using an entity-level connection string (one that contains an EntityPath segment), and that is why your clients are observing connection failures. The issue should resolve once a namespace-level connection string is used.
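For illustration, the difference between the two forms, with the same placeholders as in the question; per the answer above, the Kafka endpoint of Event Hubs wants the namespace-level form:

# entity-level connection string - triggers the disconnects described here
sasl.jaas.config=Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EntityValue>;
# namespace-level connection string - same string with the EntityPath segment dropped
sasl.jaas.config=Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;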

Related

How to integrate Camel Spring Boot with Kafka in Confluent Cloud?

I have an application that uses Camel Spring Boot with Debezium to listen to a MySQL database and publish to a Kafka topic.
It was all working fine until I switched Kafka from a local instance to Confluent Cloud. I have some other applications (plain producers and consumers) that connect to Confluent Cloud, and they all work fine.
This is my application.yml. I removed the debezium-mysql part because it works fine, so only the Kafka/Confluent config is shown.
routes:
  debezium:
    allow-public-key-retrieval: true
    bootstrap-servers: ${application.kafka.brokers}
    offset-storage:
      topic-cleanup-policy: compact
camel:
  component:
    debezium-mysql:
      # ALL CONFIG with mysql, it is not here because it is working fine
    kafka:
      brokers: ${application.kafka.brokers}
      schema-registry-u-r-l: ${application.schema-registry.base-urls}
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      additional-properties:
        # CCloud Schema Registry Connection parameter
        schema.registry.basic.auth.credentials.source: USER_INFO
        schema.registry.basic.auth.user.info: ${SCHEMA_REGISTRY_ACCESS_KEY}:${SCHEMA_REGISTRY_SECRET_KEY}
        ssl.endpoint.identification.algorithm: https
        client.dns.lookup: use_all_dns_ips
      sasl-jaas-config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";
      security-protocol: SASL_SSL
      retry-backoff-ms: 500
      request-timeout-ms: 20000
      sasl-mechanism: PLAIN
With this config, it keeps giving me an error when I try to start the app:
[AdminClient clientId=adminclient-1] Node -1 disconnected.
2023-02-02 16:57:24.853 INFO 9644 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 146ms, elapsed time since send: 146ms, request timeout: 3600000ms)
I could verify that the problem is in this AdminClientConfig, which doesn't get the right properties. For example, security.protocol should be SASL_SSL, but it comes up as PLAINTEXT. Yet when the producer and consumer are created, they get the right values.
I have been struggling with this for two days. I would be really happy with any help. Thank you.
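No accepted fix appears in this thread, but one thing worth trying (an assumption on my part, not confirmed by the question): the kebab-case component options such as security-protocol may only be applied to the producers and consumers that Camel builds, while the internal AdminClient is configured from the raw pass-through map. Duplicating the security settings under additional-properties, using the plain Kafka keys, would then reach every client the component creates:

camel:
  component:
    kafka:
      additional-properties:
        security.protocol: SASL_SSL
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";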

How to use Kafka with SSL via a logback appender?

I use this logback appender to send logs to Kafka:
https://github.com/danielwegener/logback-kafka-appender
When Kafka was PLAINTEXT, everything worked correctly. But since Kafka was switched to SSL, it has not been possible to send messages. I did not find the necessary information in the readme.md. Has anyone had experience with this setup? Or should I use something else?
<topic>TEST_TOPIC_FOR_OS</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy">
</deliveryStrategy>
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<producerConfig>bootstrap.servers=KAFKA BROKER HOST</producerConfig>
<producerConfig>acks=0</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>
<producerConfig>buffer.memory=16777216</producerConfig>
<producerConfig>max.block.ms=100</producerConfig>
<producerConfig>retries=2</producerConfig>
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback</producerConfig>
<producerConfig>compression.type=none</producerConfig>
<producerConfig>security.protocol=SSL</producerConfig>
<producerConfig>ssl.keystore.location=path_to_jks</producerConfig>
<producerConfig>ssl.keystore.password=PASSWORD</producerConfig>
<producerConfig>ssl.truststore.location=path_to_jks</producerConfig>
<producerConfig>ssl.truststore.password=PASSWORD</producerConfig>
<producerConfig>ssl.endpoint.identification.algorithm=</producerConfig>
<producerConfig>ssl.protocol=TLSv1.1</producerConfig>
For any existing topic, I get an error:
12:05:49.505 [kafka-producer-network-thread | host-default-logback] route: DEBUG o.a.k.clients.producer.KafkaProducer breadcrumbId: - [Producer clientId=host-default-logback] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Topic TEST_TOPIC_FOR_OS not present in metadata after 100 ms.
The application itself works correctly with this Kafka cluster and topic.
The problem went away after upgrading the appender to 0.2.0.
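For reference, the upgrade corresponds to a dependency bump roughly like this (coordinates taken from the project's GitHub page):

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
</dependency>

It is also worth noting that the "not present in metadata after 100 ms" in the error mirrors the configured max.block.ms=100: the producer is given only 100 ms to fetch metadata, and over SSL the handshake alone can take longer than that, so raising max.block.ms is another knob to check.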

Failed to connect to Azure Service Bus topic using JMS - Java

I followed the steps mentioned in the Azure ServiceBus JMS Sample with the properties below:
spring.jms.servicebus.connection-string=Endpoint=sb://test-dt.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=key
spring.jms.servicebus.topic-client-id=12345
spring.jms.servicebus.idle-timeout=18000
spring.jms.servicebus.pricing-tier=Standard
However, I get the error below:
ERROR 43904 --- [ntContainer#0-1] org.apache.qpid.jms.JmsConnection : Failed to connect to remote at: amqps://test-dt.servicebus.windows.net:-1
ERROR 43904 --- [ntContainer#0-1] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'test-topic' - retrying using FixedBackOff{interval=5000, currentAttempts=6, maxAttempts=unlimited}. Cause: handshake timed out after 10000ms
On the other hand, when I followed the steps mentioned in ServiceBus without JMS and set the transportType to AmqpTransportType.AMQP_WEB_SOCKETS, I was able to connect.
We want to implement this using the Spring Boot starter and a listener method, instead of calling it from a (public static void main) method.
Please guide me on what I am missing when following the first link.
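For reference, the working non-JMS approach mentioned above would look roughly like this; a sketch against the azure-messaging-servicebus client, with the topic and subscription names as placeholders:

import com.azure.core.amqp.AmqpTransportType;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;

public class WebSocketsListener {
    public static void main(String[] args) {
        // AMQP over WebSockets runs on port 443, sidestepping a blocked 5671
        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
                .connectionString("Endpoint=sb://test-dt.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=key")
                .transportType(AmqpTransportType.AMQP_WEB_SOCKETS)
                .processor()
                .topicName("test-topic")
                .subscriptionName("test-subscription")
                .processMessage(ctx -> System.out.println(ctx.getMessage().getBody()))
                .processError(ctx -> ctx.getException().printStackTrace())
                .buildProcessorClient();
        processor.start(); // listens until stopped
    }
}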
ERROR 43904 --- [ntContainer#0-1] org.apache.qpid.jms.JmsConnection : Failed to connect to remote at: amqps://test-dt.servicebus.windows.net:-1
To resolve the above error, try what was suggested by Anand Sowmithiran:
Check if port 5671 is blocked:
telnet <yournamespacename>.servicebus.windows.net 5671
Note: Clients that use AMQP connections over TCP require ports 5671 and 5672 to be opened in the firewall. Along with these ports, it might be necessary to open additional ports if the EnableLinkRedirect feature is enabled.
You can refer to Troubleshooting guide for Azure Service Bus, AMQP outbound port requirements and Port 5671 Blocked :(. What are other options?

Spring Boot and Kafka: How to handle broker not available?

While the Spring Boot app is running, if I shut down the broker completely (both Kafka and ZooKeeper), I see this warning in the console indefinitely:
[org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
WARN o.apache.kafka.clients.NetworkClient - [Consumer
clientId=consumer-1, groupId=ResponseReceiveConsumerGroup]
Connection to node 2147483647 could not be established. Broker may not
be available.
Is there a way in Spring Boot to handle this gracefully, instead of infinite logs on the console?
Increase the reconnect.backoff.ms property (see Kafka docs).
The default is only 50ms.
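In a Spring Boot app, that could look like the following in application.properties (a sketch: the values are illustrative, and reconnect.backoff.max.ms caps the exponential backoff between retries):

spring.kafka.properties.reconnect.backoff.ms=5000
spring.kafka.properties.reconnect.backoff.max.ms=60000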

kafka + zookeeper remote = error

I am trying to install a Kafka & ZooKeeper instance on a remote server. I only need one node of each, because I only want to provide a remote Kafka for test purposes.
Kafka and ZooKeeper are running from the Apache Kafka tarball you can find there (v0.0.9), inside a Docker image.
I am trying to consume/produce using the provided scripts, and trying to produce using my own Java application. Everything works fine if Kafka & ZK are installed on the local server.
Here is the error I get while trying to produce:
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
No partition metadata for topic RSS due to kafka.common.LeaderNotAvailableException}] for topic [RSS]: class kafka.common.LeaderNotAvailableException
Kafka properties tested
First:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=localhost:<PORT>
Second:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
Third:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Last:
broker.id=0
port=9092
host.name=</etc/host name>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Here is my "/etc/hosts"
127.0.0.1 kafka kafka
127.0.0.1 localhost
I followed the Getting Started guide, which, if I understood correctly, is a localhost / single-server configuration. I cannot understand what I have to do to get this working with remote calls...
Thanks for your help!
EDIT 1
host.name=localhost
advertised.host.name=politik.cm-cloud.fr
This seems to allow a local consumer and producer (on the server). But if we try to do the same from a remote server, we get:
[2015-12-09 12:44:10,826] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: No route to host
The error does not look like a connectivity problem with ZooKeeper / Kafka. Just follow the instructions in the "quickstart" from http://kafka.apache.org/
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
Additionally, the error indicates there is no partition info, i.e. the topic has not been created yet. Try creating the topic first and then produce/consume. When producing to a non-existent topic, Kafka will create it depending on auto.create.topics.enable in server.properties, but for a remote setup it is better to create topics explicitly rather than relying on auto-create.
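A minimal sketch of explicit topic creation, using the script shipped in the Kafka tarball of that era (the ZooKeeper address and the replication/partition settings are placeholders):

bin/kafka-topics.sh --create --zookeeper <external-ip>:<PORT> --replication-factor 1 --partitions 1 --topic RSS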
