Spring Cloud Stream - Can't get configured Kafka broker address - java

I'm trying to manually register a Kafka listener using Spring Cloud Stream, but I'm facing some problems when trying to connect to the broker:
[Consumer clientId=consumer-1, groupId=h2r] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
[Consumer clientId=consumer-1, groupId=h2r] Initiating connection to node localhost:9092 (id: -1 rack: null)
[Consumer clientId=consumer-1, groupId=h2r] Node -1 disconnected.
[Consumer clientId=consumer-1, groupId=h2r] Connection to node -1 could not be established. Broker may not be available.
[Consumer clientId=consumer-1, groupId=h2r] Give up sending metadata request since no node is available
It is trying to connect to localhost:9092, but my server is on another computer (192.168.1.200:9092). What am I doing wrong in this configuration?
@Service
public class TenantMessageConsumer {

    private final String defaultEnterpriseSchema;
    private final MailService mailService;
    private final KafkaListenerContainerFactory<?> containerFactory;
    private final KafkaListenerEndpointRegistry registry;

    public TenantMessageConsumer(String defaultEnterpriseSchema, MailService mailService,
            KafkaListenerContainerFactory<?> containerFactory, KafkaListenerEndpointRegistry registry) {
        this.defaultEnterpriseSchema = defaultEnterpriseSchema;
        this.mailService = mailService;
        this.containerFactory = containerFactory;
        this.registry = registry;
        listen();
    }

    public void listen() {
        TenantMessageConsumer that = this;
        AbstractKafkaListenerEndpoint<String, Object> endpoint = new AbstractKafkaListenerEndpoint<String, Object>() {
            @Override
            protected MessagingMessageListenerAdapter<String, Object> createMessageListener(
                    MessageListenerContainer container, MessageConverter messageConverter) {
                try {
                    return new RecordMessagingMessageListenerAdapter<>(that,
                            TenantMessageConsumer.class.getMethod("process", Object.class));
                } catch (NoSuchMethodException e) {
                    return null;
                }
            }
        };
        endpoint.setId("tenant");
        endpoint.setTopics(defaultEnterpriseSchema);
        endpoint.setGroupId("h2r");
        registry.registerListenerContainer(endpoint, containerFactory);
    }

    public void process(Object message) {
        if (message instanceof SimpleEmailMessage) {
            SimpleEmailMessage emailMessage = (SimpleEmailMessage) message;
            if (emailMessage.getContent().equals("reset-password"))
                mailService.sendPasswordResetMail(emailMessage);
        }
    }
}
It is supposed to get this configuration:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: 192.168.1.200
So, what I need is a way to get the configured broker address and set it on the endpoint object.
Important
As the topic name is dynamic, I can't use annotations like @StreamListener.

You didn't describe your problem, nor have you provided any relevant information such as stack traces, logs, etc. (please do in the future), but I'll try.
You absolutely can use @StreamListener and other annotations with Spring Cloud Stream's support for dynamic destinations.
Please go through the above section and let us know if you still need help.
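If you do want to keep the manual registration, a minimal sketch (my own suggestion, not part of the original answer) is to inject the broker list the binder has already resolved and build the consumer and container factories from it; the property path matches the YAML above:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class ManualListenerConfig {

    // Reads the same value the binder uses, so the manual consumer connects to
    // 192.168.1.200 instead of the localhost default.
    @Value("${spring.cloud.stream.kafka.binder.brokers}")
    private String brokers;

    @Bean
    public ConsumerFactory<String, String> manualConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> containerFactory(
            ConsumerFactory<String, String> manualConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(manualConsumerFactory);
        return factory;
    }
}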

Related

Spring Kafka: UnknownHostException: 34bcfcc207e0

I wrote a Kafka producer/consumer for my app:
Consumer config:
@EnableKafka
@Configuration
class KafkaConsumerConfig {

    @Bean
    fun consumerFactory(): ConsumerFactory<String, String> {
        val props: MutableMap<String, Any> = HashMap()
        props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        props[ConsumerConfig.GROUP_ID_CONFIG] = "group12345"
        props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
        props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
        return DefaultKafkaConsumerFactory(props)
    }

    @Bean
    fun kafkaListenerContainerFactory(): ConcurrentKafkaListenerContainerFactory<String, String> {
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = consumerFactory()
        return factory
    }
}
Producer config:
@Configuration
class KafkaProducerConfig {

    @Bean
    fun producerFactory(): ProducerFactory<String, String> {
        val configProps: MutableMap<String, Any> = HashMap()
        configProps[ProducerConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        configProps[ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
        configProps[ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
        return DefaultKafkaProducerFactory(configProps)
    }

    @Bean
    fun kafkaTemplate(): KafkaTemplate<String, String> {
        return KafkaTemplate(producerFactory())
    }
}
Topic config:
@Configuration
class KafkaTopicConfig {

    @Bean
    fun kafkaAdmin(): KafkaAdmin {
        val configs: MutableMap<String, Any?> = HashMap()
        configs[AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        return KafkaAdmin(configs)
    }

    @Bean
    fun topic1(): NewTopic {
        return NewTopic("kafkaTest", 1, 1.toShort())
    }
}
Kafka service:
@Service
class KafkaService(
    private val kafkaTemplate: KafkaTemplate<String, String>
) {
    fun send() {
        kafkaTemplate.send("kafkaTest", "test message ${System.currentTimeMillis()}")
    }

    @KafkaListener(topics = ["kafkaTest"], groupId = "group12345")
    fun listenGroupFoo(message: String) {
        println("--> $message")
    }
}
Those are ALL the classes in my app. When I try to run the app, I get this exception:
2021-10-11 17:20:13.319 WARN 8544 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Error connecting to node 34bcfcc207e0:9092 (id: 1001 rack: null)
java.net.UnknownHostException: 34bcfcc207e0
I have no idea what host 34bcfcc207e0 is. It appears at the start of the thread.
What's wrong?
Kafka is not an HTTP service. Remove http:// from all your strings.
If you're running Kafka in a container, the default advertised listener uses its hostname (the container ID), and you need to change this to an address you expect; see Connect to Kafka running in Docker.
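To confirm that diagnosis, here is a minimal sketch (my own addition; the localhost:9092 mapping and the class name are assumptions) that prints the address each broker advertises:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class AdvertisedListenerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // host() is the address each broker advertises to clients; if this prints
            // a container ID like 34bcfcc207e0, the advertised listener is misconfigured.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("node " + node.id() + " advertises " + node.host() + ":" + node.port());
            }
        }
    }
}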
I had a similar issue using docker-compose. At first I didn't know where that hash ID came from either; then I realized it corresponded to the container ID, which by default is the hostname of my broker service.
docker ps
CONTAINER ID   PORTS                              NAMES
b6d781c2089b   2181/tcp, 0.0.0.0:9092->9092/tcp   broker-1
fac6ef279a3e   2181/tcp, 0.0.0.0:9093->9092/tcp   broker-2
cbbc73bfe92e   0.0.0.0:2181->2181/tcp             zookeeper
Log:
broker-2 | Recorded new controller, from now on will use broker fac6ef279a3e:9092
When I tried to connect with my Java consumer using:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost:9092,localhost:9093");
I got the following error:
java.net.UnknownHostException: fac6ef279a3e
I solved it as follows:
First, adding a hostname to each broker:
services:
  ...
  broker-1:
    hostname: broker-1
    ports:
      - "9092:9092"
  ...
  broker-2:
    hostname: broker-2
    ports:
      - "9093:9092"
Then setting my /etc/hosts like:
#### kafka
127.0.0.1 broker-1
127.0.0.1 broker-2
and finally my connection props:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"broker-1:9092,broker-2:9093");

Artemis durable subscriptions for multicast address with JMS

With JMS I want to create some durable subscriptions for a topic (multicast address). With one durable subscription it works, but with more than one it does not, and errors occur.
These are my listeners; maybe the properties are not filled in correctly?
@JmsListener(destination = "VirtualTopic.test", id = "c1", subscription = "Consumer.A.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive1(String m) {
}

@JmsListener(destination = "VirtualTopic.test", id = "c2", subscription = "Consumer.B.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive2(String m) {
}
This is the listener factory; I'm not sure about the last property.
@Bean
public DefaultJmsListenerContainerFactory queueConnectionFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setClientId("brokerClientId");
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true); // <-- needed for my case?
    return factory;
}
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
    return connectionFactory;
}
These are the error logs when I set factory.setSubscriptionShared(true):
2020-04-17 11:23:44.485 WARN 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'VirtualTopic.test' - trying to recover. Cause: org.apache.activemq.ActiveMQSession.createSharedDurableConsumer(Ljavax/jms/Topic;Ljava/lang/String;Ljava/lang/String;)Ljavax/jms/MessageConsumer;
2020-04-17 11:23:44.514 ERROR 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'VirtualTopic.test' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Broker: d1 - Client: brokerClientId already connected from /127.0.0.1:59979
As noted by the JMS specification, only one client with the same ID can connect. You're apparently using the same client ID for all your connections, i.e.:
factory.setClientId("brokerClientId");
Try not setting the client ID and see how that goes.
Also, ensure you're using a JMS client implementation that actually supports JMS 2.0 (e.g. the ActiveMQ Artemis core JMS client).
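As a hedged sketch of that suggestion (assuming the Artemis JMS client, org.apache.activemq:artemis-jms-client, is on the classpath), the factory could look like this:

@Bean
public DefaultJmsListenerContainerFactory queueConnectionFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // The Artemis client implements JMS 2.0, so createSharedDurableConsumer exists.
    factory.setConnectionFactory(new org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory(brokerUrl));
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    // No setClientId(...) here: shared durable subscriptions are identified by the
    // subscription name, so a fixed client ID (and the "already connected" error) is avoided.
    return factory;
}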

How to listen to an existing queue in Spring AMQP?

I have a remote RabbitMQ server which has some queues I want to listen to. I tried this:
@RabbitListener(queues = "queueName")
public void receive(String message) {
    System.out.println(message);
}
But it tried to create a new queue. The result is predictable: access denied.
o.s.a.r.listener.BlockingQueueConsumer : Failed to declare queue: queueName
I didn't declare any queue in any other way.
How can I listen to an existing queue on a remote server? Also, is there a way to check whether this queue exists? And I saw this line
@RabbitListener(queues = "#{autoDeleteQueue2.name}")
in a tutorial. What does #{queueName.name} mean?
Logs and the beginning of the stack trace:
2018-08-30 22:10:21.968 WARN 12124 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Failed to declare queue: queueName
2018-08-30 22:10:21.991 WARN 12124 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Queue declaration failed; retries left=3
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$DeclarationException: Failed to declare queue(s):[queueName]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:711) ~[spring-rabbit-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:588) ~[spring-rabbit-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:996) [spring-rabbit-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Even if you don't have configuration permission on the broker, the queueDeclarePassive used by the listener is allowed (it checks for the presence of the queue).
o.s.a.r.listener.BlockingQueueConsumer : Failed to declare queue: queueName
That just means that the queue doesn't exist.
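If you want to check for the queue programmatically, a minimal sketch (assuming a RabbitAdmin bean backed by your ConnectionFactory) uses the same passive lookup, which needs no configure permission:

public boolean queueExists(RabbitAdmin rabbitAdmin, String queueName) {
    // getQueueProperties() returns null when the queue does not exist.
    return rabbitAdmin.getQueueProperties(queueName) != null;
}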
@RabbitListener(queues = "#{autoDeleteQueue2.name}")
That is used to get the queue name at runtime (when you have permission to create queues).
e.g.
@Bean
public AnonymousQueue autoDeleteQueue2() {
    return new AnonymousQueue();
}
Spring will add that queue to the broker with a random, unique name. The listener is then configured with the actual queue name.
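Putting the two together, a minimal usage sketch (the method name is illustrative):

@RabbitListener(queues = "#{autoDeleteQueue2.name}")
public void listen(String message) {
    // Receives from the randomly named queue declared by the autoDeleteQueue2 bean above.
    System.out.println(message);
}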
Here is an example of how to listen to a queue with RabbitMQ:
@Component
public class RabbitConsumer implements MessageListener {

    @RabbitListener(bindings =
        @QueueBinding(
            value = @Queue(value = "${queue.topic}", durable = "true"),
            exchange = @Exchange(value = "${queue.exchange}", type = ExchangeTypes.FANOUT, durable = "true")
        )
    )
    @Override
    public void onMessage(Message message) {
        // ...
    }
}
And the config (application.yaml):
queue:
  topic: mytopic
  exchange: myexchange
In RabbitMQ, consumers are associated with exchanges. The exchange lets you define how messages must be consumed (should all consumers listen to all messages? Is it enough if only one consumer reads each message? ...).
Here is an example of how to listen to a specific queue using Spring Integration:
SpringIntegrationConfiguration.java
@Configuration
public class SpringIntegrationConfiguration {

    @Value("${rabbitmq.queueName}")
    private String queueName;

    @Bean
    public IntegrationFlow ampqInbound(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, queueName))
                .handle(System.out::println)
                .get();
    }
}
ApplicationConfiguration.java
@Configuration
public class ApplicationConfiguration {

    @Value("${rabbitmq.topicExchangeName}")
    private String topicExchangeName;

    @Value("${rabbitmq.queueName}")
    private String queueName;

    @Value("${rabbitmq.routingKey}")
    private String routingKey;

    @Bean
    Queue queue() {
        return new Queue(queueName, false);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(routingKey);
    }
}
Application.yml
rabbitmq:
  topicExchangeName: spring-boot-exchange
  queueName: spring-boot
  routingKey: foo.bar.#
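To verify the binding end to end, a small hedged sketch (assuming Spring Boot's auto-configured RabbitTemplate) publishes a message whose routing key matches foo.bar.#:

@Autowired
private RabbitTemplate rabbitTemplate;

public void sendTest() {
    // "foo.bar.baz" matches the foo.bar.# binding, so the message lands on the spring-boot queue.
    rabbitTemplate.convertAndSend("spring-boot-exchange", "foo.bar.baz", "hello");
}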

Apache Camel ?transacted=true

Hi Camel/JMS developers.
I'm using the Apache Camel AMQP JMS connector, with ActiveMQ as the broker.
My configuration is mostly defaults.
Here is a consumer code example:
public static void main(String[] args) throws Exception {
    AMQPComponent amqpComponent = AMQPComponent.amqpComponent(HOST, USER, PWD);
    CamelContext context = new DefaultCamelContext();
    context.addComponent("amqp", amqpComponent);
    context.start();
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            from("amqp:queue:1test.queue?transacted=true")
                .to("stream:out")
                .end();
        }
    });
    Thread.sleep(20 * 1000);
    context.stop();
}
As is easy to see, I have configured a transacted consumer for 1test.queue.
When I run it, I see in the log:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: amqp://queue:1test.queue?transacted=true
[AmqpProvider :(1):[amqp:HOST2]] INFO org.apache.qpid.jms.sasl.SaslMechanismFinder - Best match for SASL auth was: SASL-PLAIN
[AmqpProvider :(1):[amqp:HOST2]] INFO org.apache.qpid.jms.JmsConnection - Connection ID:...:1 connected to remote Broker: amqp:HOST2
[AmqpProvider :(2):[amqp:HOST2]] INFO org.apache.qpid.jms.sasl.SaslMechanismFinder - Best match for SASL auth was: SASL-PLAIN
[AmqpProvider :(2):[amqp:HOST2]] INFO org.apache.qpid.jms.JmsConnection - Connection ID:...:2 connected to remote Broker: amqp:HOST2
[AmqpProvider :(3):[amqp:HOST2]] INFO org.apache.qpid.jms.sasl.SaslMechanismFinder - Best match for SASL auth was: SASL-PLAIN
[AmqpProvider :(3):[amqp:HOST2]] INFO org.apache.qpid.jms.JmsConnection - Connection ID:...:3 connected to remote Broker: amqp:HOST2
If I remove ?transacted=true from the consumer:
[AmqpProvider :(1):[amqp:HOST2]] INFO org.apache.qpid.jms.sasl.SaslMechanismFinder - Best match for SASL auth was: SASL-PLAIN
[AmqpProvider :(1):[amqp:HOST2]] INFO org.apache.qpid.jms.JmsConnection - Connection ID:...:1 connected to remote Broker: amqp:HOST2
It appears only once.
How can this behavior be explained? Is this normal for transacted consumers in Camel?
Thank you.
P.S. I checked this topic but I'm not sure how to map it to the Camel reality.
The AMQP Camel component does not define a pooled connection factory, so it creates a new connection for each iteration to check whether there are messages on the broker.
To avoid this, you should define a CachingConnectionFactory for the AMQPComponent, as in:
import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.qpid.jms.JmsConnectionFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;
import org.springframework.jms.connection.CachingConnectionFactory;

@Bean
public JmsConnectionFactory jmsConnectionFactory() {
    JmsConnectionFactory jmsConnectionFactory = new JmsConnectionFactory(brokerUser, brokerPassword, brokerUrl);
    return jmsConnectionFactory;
}

@Bean
@Primary
public CachingConnectionFactory jmsCachingConnectionFactory(JmsConnectionFactory jmsConnectionFactory) {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(jmsConnectionFactory);
    return cachingConnectionFactory;
}

@Bean
public JmsConfiguration jmsConfig(CachingConnectionFactory cachingConnectionFactory) {
    JmsConfiguration jmsConfiguration = new JmsConfiguration();
    jmsConfiguration.setConnectionFactory(cachingConnectionFactory);
    jmsConfiguration.setCacheLevelName("CACHE_CONSUMER");
    return jmsConfiguration;
}

@Bean
public AMQPComponent amqpComponent(JmsConfiguration jmsConfiguration) {
    AMQPComponent amqpComponent = new AMQPComponent();
    amqpComponent.setConfiguration(jmsConfiguration);
    return amqpComponent;
}
You will get the same behavior as JMS Camel Component.
More info at the AMQP Camel Component page.
Use JMS instead of AMQP.
I had a similar problem, but when I used JMS instead of AMQP it worked well: the log appeared only once, i.e. a single connection got created.
There seems to be some issue with the AMQP component.
Thanks,
Rahul

TimeoutException thrown when sending to topic

I have made a Sender class that uses a KafkaTemplate bean to send a payload to a topic, with some configuration in a SenderConfig class.
The Sender class:
@Component
public class Sender {

    private static final Logger LOGGER = LoggerFactory.getLogger(Sender.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, "1", payload);
    }
}
The SenderConfig class:
@Configuration
public class SenderConfig {

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public Sender sender() {
        return new Sender();
    }
}
The problem is in sending, not in producing.
Here are the application.yml properties:
kafka:
  bootstrap-servers: localhost:9092
  topic:
    helloworld: helloworld.t
and a simple controller containing:
@RestController
public class Controller {

    protected static final String HELLOWORLD_TOPIC = "helloworld.t";

    @Autowired
    private Sender sender;

    @RequestMapping("/send")
    public String sendMessage() {
        sender.send(HELLOWORLD_TOPIC, "message");
        return "success";
    }
}
and the exception is
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2017-12-20 09:59:04.654 ERROR 10816 --- [nio-7060-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='1' and payload='message' to topic helloworld.t:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Use the method that includes a key:
kafkaTemplate.send(topic, key, payload);
It's not clear what key value you want to use, but it should distribute evenly amongst the partition count of the topic; for example, a random number within the range of the partition count.
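A minimal sketch of that suggestion (PARTITION_COUNT is an assumed value; look it up for your topic):

import java.util.concurrent.ThreadLocalRandom;

// Assumption: the topic has PARTITION_COUNT partitions. Records with the same key
// always map to the same partition, so random keys spread load across partitions.
private static final int PARTITION_COUNT = 3;

public void send(String topic, String payload) {
    String key = String.valueOf(ThreadLocalRandom.current().nextInt(PARTITION_COUNT));
    kafkaTemplate.send(topic, key, payload);
}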
That means your brokers are not running. Check server.log and restart the broker if necessary.
There are a few possibilities for this type of error:
1. The Kafka broker is not reachable on the configured port. To check this, try telnet localhost 9092; if you get output, the Kafka broker is running.
2. Check that the kafka-client version Spring Boot uses is the same as your Kafka version. If the versions mismatch, Kafka may not be able to send data to the topic.
3. Sometimes it takes time for the brokers to learn about a newly created topic, so producers might fail with the error Failed to update metadata after 60000 ms. To get around this, create the topic manually using the Kafka command-line tools (see the example after this list).
4. The listener configuration in server.properties is not working.
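For the third point, a hedged command-line example for Kafka 0.10.x (the version shown in the logs above; the ZooKeeper address is an assumption for a local setup):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic helloworld.t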
You can try this as well:
Change the bootstrap.servers property or the --broker-list option to 0.0.0.0:9092.
Change these 2 properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
Hope that helps!
