Consumer does not receive messages after kafka producer/consumer restart - java

We have one producer, one consumer, and one partition. Both the consumer and producer are Spring Boot applications. The consumer app runs on my local machine, while the producer, along with Kafka and ZooKeeper, runs on a remote machine.
During development, I redeployed my producer application with some changes, but after that my consumer stopped receiving messages. I tried restarting the consumer, but no luck. What can be the issue and/or how can it be solved?
Consumer Config:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        input:
          destination: sales
          content-type: application/json
      kafka:
        binder:
          brokers: ${SERVICE_REGISTRY_HOST:127.0.0.1}
          zkNodes: ${SERVICE_REGISTRY_HOST:127.0.0.1}
          defaultZkPort: 2181
          defaultBrokerPort: 9092
server:
  port: 0
Producer Config:
cloud:
  stream:
    defaultBinder: kafka
    bindings:
      output:
        destination: sales
        content-type: application/json
    kafka:
      binder:
        brokers: ${SERVICE_REGISTRY_HOST:127.0.0.1}
        zkNodes: ${SERVICE_REGISTRY_HOST:127.0.0.1}
        defaultZkPort: 2181
        defaultBrokerPort: 9092
EDIT 2:
After 5 minutes, the consumer app dies with the following exception:
2017-09-12 18:14:47,254 ERROR main o.s.c.s.b.k.p.KafkaTopicProvisioner:253 - Cannot initialize Binder
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2017-09-12 18:14:47,255 WARN main o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext:550 - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is org.springframework.cloud.stream.binder.BinderException: Cannot initialize binder:
2017-09-12 18:14:47,256 INFO main o.s.i.m.IntegrationMBeanExporter:449 - Unregistering JMX-exposed beans on shutdown
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:241 - Unregistering JMX-exposed beans
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: input
2017-09-12 18:14:47,257 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: nullChannel
2017-09-12 18:14:47,258 INFO main o.s.i.m.IntegrationMBeanExporter:375 - Summary on shutdown: errorChannel

See if the suggestion above about DEBUG logging reveals any further information. It looks like you are getting a timeout exception from the KafkaTopicProvisioner, but that occurs when you restart the consumer, I assume. It looks like the consumer has some trouble communicating with the broker, and you need to find out what's going on there.

Well, it looks like there is already a bug reported against spring-cloud-stream-binder-kafka stating that the resetOffset property has no effect, and hence the consumer always requests messages with the offset set to latest.
As mentioned on the git issue, the only workaround is to fix this via the Kafka consumer CLI tool.
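For reference, a sketch of such a reset with the kafka-consumer-groups tool (available since Kafka 0.11; the group id below is a placeholder, so substitute the group your binder actually registers, which you can discover with --list):

# Sketch only: moves the group's committed offset on the 'sales' topic
# back to the earliest available position. 'my-consumer-group' is a
# placeholder group id; the consumer app must be stopped first.
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 \
  --group my-consumer-group --topic sales \
  --reset-offsets --to-earliest --execute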

Related

How to integrate Camel Spring boot with Kafka in Confluent Cloud?

I have an application which uses Camel Spring Boot with Debezium to listen to a MySQL database and publish to a Kafka topic.
It was all working fine until I changed Kafka from a local instance to Confluent Cloud. I have some other applications (normal producers and consumers) that connect to Confluent Cloud, and they all work fine.
This is my application.yml. I removed the debezium-mysql part because it works fine, so only the Kafka/Confluent part of the config is shown.
routes:
  debezium:
    allow-public-key-retrieval: true
    bootstrap-servers: ${application.kafka.brokers}
    offset-storage:
      topic-cleanup-policy: compact
camel:
  component:
    debezium-mysql:
      # all the MySQL config; omitted here because it works fine
    kafka:
      brokers: ${application.kafka.brokers}
      schema-registry-u-r-l: ${application.schema-registry.base-urls}
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      additional-properties:
        # CCloud Schema Registry connection parameters
        schema.registry.basic.auth.credentials.source: USER_INFO
        schema.registry.basic.auth.user.info: ${SCHEMA_REGISTRY_ACCESS_KEY}:${SCHEMA_REGISTRY_SECRET_KEY}
        ssl.endpoint.identification.algorithm: https
        client.dns.lookup: use_all_dns_ips
      sasl-jaas-config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";
      security-protocol: SASL_SSL
      retry-backoff-ms: 500
      request-timeout-ms: 20000
      sasl-mechanism: PLAIN
With this config, it keeps giving me an error when I try to start the app:
[AdminClient clientId=adminclient-1] Node -1 disconnected.
2023-02-02 16:57:24.853 INFO 9644 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 146ms, elapsed time since send: 146ms, request timeout: 3600000ms)
I could verify that the problem is in this AdminClient config, which doesn't pick up the right properties. For example, security.protocol should be SASL_SSL, but it comes up as PLAINTEXT. When creating the producer and consumer, however, it gets the right values.
I have really been struggling with this for two days, and I would be grateful for any help. Thank you.
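One thing worth trying, assuming the Camel Kafka component forwards entries in additional-properties to every client it creates, including the AdminClient (worth verifying for your Camel version), is to duplicate the security settings there so the admin path cannot miss them. This is only a sketch of that idea, not a confirmed fix:

camel:
  component:
    kafka:
      additional-properties:
        # Duplicated here on the assumption that additional-properties also
        # reach the AdminClient, unlike the typed component options above.
        security.protocol: SASL_SSL
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="${CONFLUENT_CLOUD_USERNAME}" password="${CONFLUENT_CLOUD_PASSWORD}";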

How to use Kafka with SSL via a logback appender?

I use this logback appender to send logs to Kafka:
https://github.com/danielwegener/logback-kafka-appender
When Kafka was PLAINTEXT, everything worked correctly. But after Kafka was switched to SSL, it is no longer possible to send messages. I did not find the necessary information in the README. Has anyone had experience with this setup? Or should I maybe use something else?
<topic>TEST_TOPIC_FOR_OS</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy">
</deliveryStrategy>
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<producerConfig>bootstrap.servers=KAFKA BROKER HOST</producerConfig>
<producerConfig>acks=0</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>
<producerConfig>buffer.memory=16777216</producerConfig>
<producerConfig>max.block.ms=100</producerConfig>
<producerConfig>retries=2</producerConfig>
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback</producerConfig>
<producerConfig>compression.type=none</producerConfig>
<producerConfig>security.protocol=SSL</producerConfig>
<producerConfig>ssl.keystore.location=path_to_jks</producerConfig>
<producerConfig>ssl.keystore.password=PASSWORD</producerConfig>
<producerConfig>ssl.truststore.location=path_to_jks</producerConfig>
<producerConfig>ssl.truststore.password=PASSWORD</producerConfig>
<producerConfig>ssl.endpoint.identification.algorithm=</producerConfig>
<producerConfig>ssl.protocol=TLSv1.1</producerConfig>
For any existing topic, I get an error:
12:05:49.505 [kafka-producer-network-thread | host-default-logback] route: DEBUG o.a.k.clients.producer.KafkaProducer breadcrumbId: - [Producer clientId=host-default-logback] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Topic TEST_TOPIC_FOR_OS not present in metadata after 100 ms.
The application itself works correctly with this Kafka cluster and topic.
The problem went away after upgrading the appender to 0.2.0.
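For reference, the corresponding Maven coordinates should look like the sketch below (taken from the project's GitHub page; double-check the current version against its README):

<!-- logback-kafka-appender 0.2.0; verify the version in the project README -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
</dependency>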

Listener doesn't pull data after connection to host was reestablished

Let me begin by saying that I have just started to dabble with AMQP.
I want to consume/pull data from a queue. I'm using Spring's libraries (spring-boot-starter-amqp) to make things easier. I have a listener class with a method annotated with @RabbitListener where I set the queue. Everything else is configured via properties:
rabbitmq:
  username: user
  password: password
  virtual-host: virtual-host
  port: 5672
  host: host
  queue: _316_
  listener:
    simple:
      retry:
        enabled: true
        initial-interval: 1000
        max-attempts: 8
        max-interval: 10000
        multiplier: 2.0
        stateless: true
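For context, assuming this block sits under the standard spring.rabbitmq prefix, the retry settings produce delivery retry intervals of 1000 ms, 2000 ms, 4000 ms, 8000 ms, then capped at 10000 ms (initial-interval multiplied by multiplier per attempt, bounded by max-interval), for at most 8 attempts. Note that this retry applies to message processing, not to connection recovery.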
Everything works fine until I make the host unavailable for a while. When that happens, the connection is dropped and attempts are made to reestablish it. But after the connection is reestablished, the listener doesn't start pulling messages again. After the application is restarted everything is fine, but surely it can be configured so that the consumer keeps restarting, or at least tries to restart after the connection is reestablished (or at least that is what I'd expect).
After connection has been dropped following can be found in logs:
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer WARN Cancel received for amq.ctag-PgBSeymWBfsghwdUYr5asA (_316_); Consumer#22ead351: tags=[[amq.ctag-PgBSeymWBfsghwdUYr5asA]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#39549f33 Shared Rabbit Connection: SimpleConnection#6f731759 [delegate=amqp://user#host, localPort= 36678], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.connection.CachingConnectionFactory ERROR Channel shutdown: connection error; protocol method: #method<connection.close>(reply-code=320, reply-text=CONNECTION_FORCED - user 'user' is deleted, class-id=0, method-id=0)
com.rabbitmq.client.impl.ForgivingExceptionHandler WARN An unexpected connection driver error occured (Exception message: Connection reset)
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer WARN Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.rabbit.support.ConsumerCancelledException
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer INFO Restarting Consumer#22ead351: tags=[[]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#39549f33 Shared Rabbit Connection: SimpleConnection#6f731759 [delegate=amqp://user#host, localPort= 36678], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.connection.CachingConnectionFactory INFO Attempting to connect to: [host:5672]
org.springframework.amqp.rabbit.listener.exception.FatalListenerStartupException: Authentication failure\n\tat org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:564)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.initialize(SimpleMessageListenerContainer.java:1201)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1046)\n\tat java.base/java.lang.Thread.run(Thread.java:835)\nCaused by: org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer","message":"Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer ERROR Stopping container from aborted consumer
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer INFO Waiting for workers to finish.
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer INFO Successfully waited for workers to finish.
com.rabbitmq.client.impl.ForgivingExceptionHandler WARN An unexpected connection driver error occured (Exception message: Socket closed)
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.\n\tat org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:65)\n\tat
Then a connection attempt is made and we're in a loop:
org.springframework.amqp.rabbit.connection.CachingConnectionFactory INFO Attempting to connect to: [host]
com.rabbitmq.client.impl.ForgivingExceptionHandler WARN An unexpected connection driver error occured (Exception message: Socket closed)
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED
Until the connection is reestablished:
org.springframework.amqp.rabbit.connection.CachingConnectionFactory INFO Attempting to connect to: [host]
org.springframework.amqp.rabbit.connection.CachingConnectionFactory INFO Created new connection: rabbitConnectionFactory#69d3cf7e:16/SimpleConnection#3931e0ad [delegate=amqp://user#host, localPort= 50574]
And nothing else happens; no messages are being consumed.
UPDATE:
I followed the suggestion and turned on DEBUG logging.
When the app is starting, we are:
starting listener container
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer DEBUG Starting Rabbit listener container.
creating the connection
starting consumer
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Starting consumer Consumer#3daf03d8: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
creating channel and starting to consume
org.springframework.amqp.rabbit.connection.CachingConnectionFactory DEBUG Creating cached Rabbit Channel from AMQChannel(amqp://user#host,1)
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG ConsumeOK: Consumer#3daf03d8: tags=[[amq.ctag-uG8_iXcNaknFjBIGM-91Tg]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp://user#host, localPort= 37906], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Started on queue '_316_' with tag amq.ctag-uG8_iXcNaknFjBIGM-91Tg: Consumer#3daf03d8: tags=[[]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp://user#host, localPort= 37906], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Storing delivery for consumerTag: 'amq.ctag-uG8_iXcNaknFjBIGM-91Tg' with deliveryTag: '1' in Consumer#3daf03d8: tags=[[amq.ctag-uG8_iXcNaknFjBIGM-91Tg]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp://user#host, localPort= 37906], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Received message: (Body:'[B#4b817fae(byte[117])' MessageProperties [headers={}, contentLength=0, redelivered=true, receivedExchange=, receivedRoutingKey=_316_, deliveryTag=1, consumerTag=amq.ctag-uG8_iXcNaknFjBIGM-91Tg, consumerQueue=_316_])
This goes on until the connection drops:
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer WARN Cancel received for amq.ctag-uG8_iXcNaknFjBIGM-91Tg (_316_); Consumer#3daf03d8: tags=[[amq.ctag-uG8_iXcNaknFjBIGM-91Tg]], channel=Cached Rabbit Channel: AMQChannel(amqp:///user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp:///user#host, localPort= 37906], acknowledgeMode=AUTO local queue size=0
The channel is shut down and the user somehow gets deleted, as the log says:
org.springframework.amqp.rabbit.connection.CachingConnectionFactory ERROR Channel shutdown: connection error; protocol method: #method<connection.close>(reply-code=320, reply-text=CONNECTION_FORCED - user 'user' is deleted, class-id=0, method-id=0)
An issue with the connection driver follows, and an exception is thrown:
com.rabbitmq.client.impl.ForgivingExceptionHandler WARN An unexpected connection driver error occured (Exception message: Connection reset)
org.springframework.amqp.rabbit.support.ConsumerCancelledException: null\n\tat org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.nextMessage(BlockingQueueConsumer.java:499)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:870)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:859)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1600(SimpleMessageListenerContainer.java:78)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1142)\n\tat org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1048)\n\tat java.base/java.lang.Thread.run(Thread.java:835
The consumer raises an exception, and it says that processing can restart:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer DEBUG Consumer raised exception, processing can restart if the connection factory supports it
Restarting happens:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer INFO Restarting Consumer#3daf03d8: tags=[[]], channel=Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp://user#host, localPort= 37906], acknowledgeMode=AUTO local queue size=0
Channels are being closed:
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Closing Rabbit Channel: Cached Rabbit Channel: AMQChannel(amqp://user#host,1), conn: Proxy#437bd805 Shared Rabbit Connection: SimpleConnection#49fdbe2b [delegate=amqp://user#host, localPort= 37906]
org.springframework.amqp.rabbit.connection.CachingConnectionFactory DEBUG Closing cached Channel: AMQChannel(amqp://user#host,1)
New consumer is starting:
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer DEBUG Starting consumer Consumer#2560313a: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
We attempt to connect, which ends up with a WARN and an authentication failure (because the previous log said that the user was deleted?):
An unexpected connection driver error occured (Exception message: Socket closed)
org.springframework.amqp.rabbit.listener.exception.FatalListenerStartupException: Authentication failure\n\tat
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.\n\tat org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:65)\n\tat
The consumer that tried to start fails:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer ERROR Consumer received fatal exception on startup
And the consumer gets cancelled:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer DEBUG Cancelling Consumer#2560313a: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
The channel is closed and the container is stopping:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer ERROR Stopping container from aborted consumer
And then the container is shutting down:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer DEBUG Shutting down Rabbit listener container
We wait for workers to finish, which succeeds; then we try to connect again, and the same 'Socket closed' error is logged over and over.
Then the host is brought back and the connection is reestablished. A cached Rabbit Channel is created, and nothing else happens.
I'd assume the issue is that the container was shut down and never came back to life, hence there are no consumers.
WHAT WORKED:
I created a class with a listener method that accepts ListenerContainerConsumerFailedEvent. That class holds the RabbitListenerEndpointRegistry (a bean that Boot conveniently created for me), and whenever that method is called I check whether the listener container is running; if not, I start it (the check is most likely redundant).
@EventListener
public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
    // Look up the container by its known listener id and restart it if needed
    var listenerContainer = rabbitListenerEndpointRegistry
            .getListenerContainer(MessageListener.RABBIT_LISTENER_ID);
    if (!listenerContainer.isRunning()) {
        listenerContainer.start();
    }
}
org.springframework.amqp.rabbit.listener.exception.FatalListenerStartupException: Authentication failure
FatalListenerStartupException
Authentication failures are considered fatal and the container is immediately stopped; it is unlikely such situations will be corrected automatically.
Deleting a user that is currently in use is a rather unusual circumstance.
You could use an ApplicationListener bean or @EventListener method to listen for a ListenerContainerConsumerTerminatedEvent and try restarting the container after some time.
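A minimal sketch of that suggestion, assuming the listener is registered under a known id and using a single-threaded scheduler to delay the restart (the id and the 30-second delay are placeholders):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerTerminatedEvent;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class ContainerRestarter {

    private static final String LISTENER_ID = "myListener"; // placeholder listener id

    private final RabbitListenerEndpointRegistry registry;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ContainerRestarter(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener
    public void onConsumerTerminated(ListenerContainerConsumerTerminatedEvent event) {
        // Give the broker some time to recover before attempting a restart.
        scheduler.schedule(() -> {
            var container = registry.getListenerContainer(LISTENER_ID);
            if (container != null && !container.isRunning()) {
                container.start();
            }
        }, 30, TimeUnit.SECONDS);
    }
}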

Spring-Boot and Kafka : How to handle broker not available?

While the Spring Boot app is running, if I shut down the broker completely (both Kafka and ZooKeeper), I see this warning in the console repeating indefinitely:
[org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] WARN o.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=ResponseReceiveConsumerGroup] Connection to node 2147483647 could not be established. Broker may not be available.
Is there a way in Spring Boot to handle this gracefully, instead of infinite logs on the console?
Increase the reconnect.backoff.ms property (see Kafka docs).
The default is only 50ms.
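If the clients are created through Spring Boot's auto-configuration, the property can be passed straight through to them; a minimal sketch (5000 ms is an arbitrary example value):

spring:
  kafka:
    properties:
      reconnect.backoff.ms: 5000

There is also a reconnect.backoff.max.ms setting that caps the exponential backoff between successive attempts.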

Apache Kafka: Failed to Update Metadata/java.nio.channels.ClosedChannelException

I'm just getting started with Apache Kafka/ZooKeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers:
one running ZooKeeper and two running Kafka.
I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:
on the Kafka producer:
kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa
on the Kafka consumer:
kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa
I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:
ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
On the consumer side I get this message, which repeats periodically:
WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
After a while, the producer will then start rapidly throwing this error message with # increasing incrementally:
WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Not sure where to go from here. Let me know if more details about my configuration files are needed.
This was a configuration issue.
To get it running, several changes to the config files had to be made:
In config/server.properties on each Kafka server:
host.name: <Public IP>
advertised.host.name: <AWS Public DNS Address>
In config/producer.properties on each Kafka server:
metadata.broker.list: <Producer Server advertised.host.name>:<Producer Server port>,<Consumer Server advertised.host.name>:<Consumer Server port>
In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:
<Public IP> localhost localhost.localdomain
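Concretely, with made-up placeholder values, the server.properties change from the first step would look like:

# config/server.properties -- placeholder values, substitute your own
host.name=54.12.34.56
advertised.host.name=ec2-54-12-34-56.compute-1.amazonaws.com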
