I wrote a Kafka producer/consumer for my app:
Consumer config:
@EnableKafka
@Configuration
class KafkaConsumerConfig {
    @Bean
    fun consumerFactory(): ConsumerFactory<String, String> {
        val props: MutableMap<String, Any> = HashMap()
        props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        props[ConsumerConfig.GROUP_ID_CONFIG] = "group12345"
        props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
        props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
        return DefaultKafkaConsumerFactory(props)
    }

    @Bean
    fun kafkaListenerContainerFactory(): ConcurrentKafkaListenerContainerFactory<String, String> {
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = consumerFactory()
        return factory
    }
}
Producer config:
@Configuration
class KafkaProducerConfig {
    @Bean
    fun producerFactory(): ProducerFactory<String, String> {
        val configProps: MutableMap<String, Any> = HashMap()
        configProps[ProducerConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        configProps[ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
        configProps[ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
        return DefaultKafkaProducerFactory(configProps)
    }

    @Bean
    fun kafkaTemplate(): KafkaTemplate<String, String> {
        return KafkaTemplate(producerFactory())
    }
}
Topic config:
@Configuration
class KafkaTopicConfig {
    @Bean
    fun kafkaAdmin(): KafkaAdmin {
        val configs: MutableMap<String, Any?> = HashMap()
        configs[AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG] = "http://localhost:9092"
        return KafkaAdmin(configs)
    }

    @Bean
    fun topic1(): NewTopic {
        return NewTopic("kafkaTest", 1, 1.toShort())
    }
}
Kafka service:
@Service
class KafkaService(
    private val kafkaTemplate: KafkaTemplate<String, String>
) {
    fun send() {
        kafkaTemplate.send("kafkaTest", "test message ${System.currentTimeMillis()}")
    }

    @KafkaListener(topics = ["kafkaTest"], groupId = "group12345")
    fun listenGroupFoo(message: String) {
        println("--> $message")
    }
}
Those are ALL the classes in my app. When I try to run the app, I get this exception:
2021-10-11 17:20:13.319 WARN 8544 --- [| adminclient-1]
org.apache.kafka.clients.NetworkClient : [AdminClient
clientId=adminclient-1] Error connecting to node 34bcfcc207e0:9092
(id: 1001 rack: null)
java.net.UnknownHostException: 34bcfcc207e0
I have no idea what host 34bcfcc207e0 is. It appears at the start of the thread.
What's wrong?
Kafka is not an HTTP service; remove http:// from all of your bootstrap server strings.
If you're running Kafka in a container, the default advertised listener uses the container's hostname (the container ID), and you need to change this to an address you expect. See Connect to Kafka running in Docker.
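For example, with Docker Compose this usually means advertising two listeners: one for clients inside the compose network and one for the host. A minimal sketch, assuming the confluentinc/cp-kafka image (ZooKeeper and other required settings omitted):

services:
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT

Containers on the compose network then connect via kafka:29092, while an app on the host uses localhost:9092.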
I had a similar issue using docker-compose. At the beginning I didn't know where that hash ID came from either; then I realized it corresponded to the container ID, which by default is the hostname of the broker service.
docker ps
CONTAINER ID PORTS NAMES
b6d781c2089b 2181/tcp, 0.0.0.0:9092->9092/tcp broker-1
fac6ef279a3e 2181/tcp, 0.0.0.0:9093->9092/tcp broker-2
cbbc73bfe92e 0.0.0.0:2181->2181/tcp zookeeper
Log:
broker-2 | Recorded new controller, from now on will use broker fac6ef279a3e:9092
When I tried to connect with my Java consumer using:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost:9092,localhost:9093");
I got the following error:
java.net.UnknownHostException: fac6ef279a3e
I solved it as follows:
adding a hostname to each broker:
services:
  ...
  broker-1:
    hostname: broker-1
    ports:
      - "9092:9092"
  ...
  broker-2:
    hostname: broker-2
    ports:
      - "9093:9092"
setting my /etc/hosts like:
#### kafka
127.0.0.1 broker-1
127.0.0.1 broker-2
and my connection props:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"broker-1:9092,broker-2:9093");
Related
I'm exploring the Spring Kafka API (spring-boot-starter-parent version 2.7.4) and I found strange behavior in the consumer with the standard @KafkaListener annotation.
I produce messages with KafkaTemplate and add a custom header __ProducerApp__, but the messages also carry the standard __TypeId__ header because it is added automatically by the Spring starter implementation.
Properties:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: consumer-localhost
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties.spring.json.trusted.packages: '*'
Producer class:
@Component
public class KafkaExampleProducer {
    private final KafkaTemplate<String, KafkaPayload> kafkaTemplate;

    public KafkaExampleProducer(KafkaTemplate<String, KafkaPayload> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendPayload(KafkaPayload payload) {
        ProducerRecord<String, KafkaPayload> record = new ProducerRecord<>(
            KafkaExampleTopicConfig.EXAMPLE_TOPIC_NAME, UUID.randomUUID().toString(), payload
        );
        record.headers().add("__ProducerApp__", "ExampleApp-localhost".getBytes(StandardCharsets.UTF_8));
        kafkaTemplate.send(record);
    }
}
I can see both headers populated in the web UI for Apache Kafka.
But in the consumer, after receiving a message from the topic, I see only the __ProducerApp__ header.
Listener class:
@Component
public class KafkaExampleListener {
    private final Logger logger = LoggerFactory.getLogger(KafkaExampleListener.class);

    @KafkaListener(topics = KafkaExampleTopicConfig.EXAMPLE_TOPIC_NAME)
    public void listenMessage(ConsumerRecord<String, KafkaPayload> consumerRecord) {
        logger.info("Received message:\nKey: {}, type: {}, producer: {}",
            consumerRecord.key(),
            extractHeaderValue(consumerRecord.headers(), "__TypeId__"),
            extractHeaderValue(consumerRecord.headers(), "__ProducerApp__")
        );
    }

    private String extractHeaderValue(Headers headers, String headerId) {
        return StreamSupport.stream(headers.spliterator(), false)
            .filter(header -> header.key().equals(headerId))
            .findFirst()
            .map(header -> new String(header.value()))
            .orElse("N/A");
    }
}
The console output shows that the message is received without the __TypeId__ header:
Received message:
Key: 3e8ee64e-b691-48e1-98b1-614291cc0451, type: N/A, producer: ExampleApp-localhost
You did not include your bean configs, but my guess is that you are missing the correct deserializer props.
Add:
@Bean
RecordMessageConverter messageConverter() {
    return new StringJsonMessageConverter();
}
Also, instead of a JsonDeserializer, use a StringDeserializer for your consumer value-deserializer.
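In the YAML from the question, that change would look something like this; the message converter bean above then handles the JSON conversion:

spring:
  kafka:
    consumer:
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer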
The JsonDeserializer strips the type headers by default to avoid polluting the application with internals.
/**
 * Set to false to retain type information headers after deserialization.
 * Default true.
 * @param removeTypeHeaders true to remove headers.
 * @since 2.2
 */
public void setRemoveTypeHeaders(boolean removeTypeHeaders) {
    this.removeTypeHeaders = removeTypeHeaders;
    this.setterCalled = true;
}
Or set spring.json.remove.type.headers: false.
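With the Spring Boot YAML from the question, that property can be passed through to the consumer like so:

spring:
  kafka:
    consumer:
      properties:
        spring.json.remove.type.headers: false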
I have a Spring Cloud project with a module that binds to both the Kafka and RabbitMQ message buses.
In this module I have a test for Kafka:
@ActiveProfiles("test")
@DirtiesContext
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = MessageReceiverTestConfiguration.class,
    initializers = ConfigFileApplicationContextInitializer.class)
@EnableBinding(MessageReceivingChannel.class)
public class MessageReceiverITest {
    @Autowired
    private MessageReceivingChannel messageReceivingChannel;
    @MockBean
    private MessageConsumerService messageConsumerService;
    @Autowired
    private MessageConverter messageConverter;
    @Autowired
    private MessageReceiverTestConfiguration receiverTestConfiguration;
    @Captor
    private ArgumentCaptor<ImportantMessage> captorMessage;
    @Captor
    private ArgumentCaptor<MessageHeaders> captorHeaders;

    @Test
    public void testLoanApplicationChannelInput() throws Throwable {
        final ImportantMessage sentMessage = new ImportantMessage("qwer124asdf");
        final Map<String, Object> headerMap = new HashMap<>(1);
        headerMap.put(MessageHeaders.CONTENT_TYPE, receiverTestConfiguration.getContentType());
        MessageHeaders sentHeaders = new MessageHeaders(headerMap);
        final Message<?> message = messageConverter.toMessage(sentMessage, sentHeaders);
        messageReceivingChannel.input().send(message);
        TimeUnit.SECONDS.sleep(1);
        verify(messageConsumerService).takeActionOn(captorMessage.capture(), captorHeaders.capture());
        final Object receivedMessage = captorMessage.getValue();
        Assertions.assertThat(receivedMessage).isNotNull();
        Assertions.assertThat(receivedMessage).isEqualTo(sentMessage);
        MessageHeaders receivedHeaders = captorHeaders.getValue();
        Assertions.assertThat(receivedHeaders).isNotNull();
        Assertions.assertThat(receivedHeaders.get(MessageHeaders.CONTENT_TYPE).toString())
            .isEqualTo(sentHeaders.get(MessageHeaders.CONTENT_TYPE));
    }
}
which runs just fine in the IDE (IntelliJ IDEA).
The problem is that when I try to install the Maven artifact, it doesn't pass the verify phase because of:
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka,rabbit, and no default binder has been set.
and this is how I set the default binder in test/resources/application-test.yml:
logging:
  config: classpath:logback-local.xml
spring:
  cloud:
    stream:
      default:
        contentType: application/*+avro
        producer:
          headerMode: embeddedHeaders
      bindings:
        messagereceived:
          binder: kafka
          contentType: "application/json"
      default-binder: kafka
      kafka:
        binder:
          configuration:
            security:
              protocol: SSL
            ssl:
              truststore:
                location: ${JAVA_HOME}\lib\security\cacerts
                password: ***
                type: JKS
  kafka:
    properties:
      max.in.flight.requests.per.connection: 1
      request.timeout.ms: 30000
      max.block.ms: 3000
    producer:
      retries: 3
So my question is: how do I properly set the default binder for spring-cloud-starter-parent Hoxton.SR9?
Thanks for any advice!
I'm a newbie trying to get communication working between two Spring Boot microservices using Apache Kafka on Confluent Cloud.
When using Kafka on Confluent Cloud, I get the following error on my consumer (ServiceB) after ServiceA publishes the message to the topic. However, when I log in to my Confluent Cloud account, I see that the message has been successfully published to the topic.
org.springframework.context.ApplicationContextException: Failed to start bean
'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is
java.lang.IllegalStateException: Topic(s) [topic-1] is/are not present and
missingTopicsFatal is true
I do not face this issue when I run Kafka on my local server. ServiceA is able to publish the message to the topic on my local Kafka server and ServiceB is successfully able to consume that message.
I have included my local Kafka server configuration in application.properties (as commented-out code).
Service A: PRODUCER
application.properties
app.topic=test-1
#Remote
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
request.timeout.ms=20000
bootstrap.servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
retry.backoff.ms=500
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
requiredusername="*******"
password="****"
#Local
#ssl.endpoint.identification.algorithm=https
#security.protocol=SASL_SSL
#sasl.mechanism=PLAIN
#request.timeout.ms=20000
#bootstrap.servers=localhost:9092
#retry.backoff.ms=500
#sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
Sender.java
public class Sender {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Value("${app.topic}")
    private String topic;

    public void send(String data) {
        Message<String> message = MessageBuilder
            .withPayload(data)
            .setHeader(KafkaHeaders.TOPIC, topic)
            .build();
        kafkaTemplate.send(message);
    }
}
KafkaProducerConfig.java
@Configuration
@EnableKafka
public class KafkaProducerConfig {
    @Value("${bootstrap.servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Service B: CONSUMER
application.properties
app.topic=test-1
#Remote
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
request.timeout.ms=20000
bootstrap.servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
retry.backoff.ms=500
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
requiredusername="*******"
password="****"
#Local
#ssl.endpoint.identification.algorithm=https
#security.protocol=SASL_SSL
#sasl.mechanism=PLAIN
#request.timeout.ms=20000
#bootstrap.servers=localhost:9092
#retry.backoff.ms=500
#sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
KafkaConsumerConfig.java
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    @Value("${bootstrap.servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "confluent_cli_consumer_040e5c14-0c18-4ae6-a10f-8c3ff69cbc1a"); // Confluent Cloud consumer group-id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(
            consumerConfigs(),
            new StringDeserializer(), new StringDeserializer());
    }

    @Bean(name = "kafkaListenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
KafkaConsumer.java
@Service
public class KafkaConsumer {
    private static final Logger LOG = LoggerFactory.getLogger(KafkaConsumer.class);

    @Value("${app.topic}")
    private String kafkaTopic;

    @KafkaListener(topics = "${app.topic}", containerFactory = "kafkaListenerContainerFactory")
    public void receive(@Payload String data) {
        LOG.info("received data='{}'", data);
    }
}
The username and password are part of the JAAS config, so put them on one line:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafkaclient1" password="kafkaclient1-secret";
I would also suggest that you verify your property file is correctly loaded into the client
See the Boot documentation.
You can't just put arbitrary Kafka properties directly in the application.properties file.
The properties supported by auto configuration are shown in appendix-application-properties.html. Note that, for the most part, these properties (hyphenated or camelCase) map directly to the Apache Kafka dotted properties. Refer to the Apache Kafka documentation for details.
The first few of these properties apply to all components (producers, consumers, admins, and streams) but can be specified at the component level if you wish to use different values. Apache Kafka designates properties with an importance of HIGH, MEDIUM, or LOW. Spring Boot auto-configuration supports all HIGH importance properties, some selected MEDIUM and LOW properties, and any properties that do not have a default value.
Only a subset of the properties supported by Kafka are available directly through the KafkaProperties class. If you wish to configure the producer or consumer with additional properties that are not directly supported, use the following properties:
spring.kafka.properties.prop.one=first
spring.kafka.admin.properties.prop.two=second
spring.kafka.consumer.properties.prop.three=third
spring.kafka.producer.properties.prop.four=fourth
spring.kafka.streams.properties.prop.five=fifth
This sets the common prop.one Kafka property to first (applies to producers, consumers and admins), the prop.two admin property to second, the prop.three consumer property to third, the prop.four producer property to fourth and the prop.five streams property to fifth.
...
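Applied to this question, the Confluent Cloud connection settings would look roughly like this in Spring Boot form (a sketch; substitute your real cluster API key and secret):

spring.kafka.bootstrap-servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api-key>" password="<api-secret>";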
@cricket_007's answer is correct. You need to embed the username and password (namely, the cluster API key and API secret) within the sasl.jaas.config property value.
You can double-check how Java clients should connect to Confluent Cloud via this official example here: https://github.com/confluentinc/examples/blob/5.3.1-post/clients/cloud/java/src/main/java/io/confluent/examples/clients/cloud
Thanks,
-- Ricardo
We would like to list all Kafka topics via spring-kafka to get results similar to the kafka command:
bin/kafka-topics.sh --list --zookeeper localhost:2181
When running the getTopics() method in the service below, we get org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
Configuration:
@EnableKafka
@Configuration
public class KafkaConfig {
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:2181");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}
Service:
@Service
public class TopicServiceKafkaImpl implements TopicService {
    @Autowired
    private ConsumerFactory<String, String> consumerFactory;

    @Override
    public Set<String> getTopics() {
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            Map<String, List<PartitionInfo>> map = consumer.listTopics();
            return map.keySet();
        }
    }
}
Kafka is up and running and we can send messages from our app to a topic successfully.
You can list topics like this using the AdminClient:
Properties properties = new Properties();
properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
AdminClient adminClient = AdminClient.create(properties);
ListTopicsOptions listTopicsOptions = new ListTopicsOptions();
listTopicsOptions.listInternal(true);
System.out.println("topics:" + adminClient.listTopics(listTopicsOptions).names().get());
You are connecting to Zookeeper (2181) instead of Kafka (9092 by default).
The Java Kafka clients no longer talk directly to ZooKeeper.
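In the configuration above, that means pointing the consumer factory at the broker port instead:

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");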
kafka-topics --list is a shell script that is just a wrapper around the kafka.admin.TopicCommand class, where you can find the method you are looking for.
Alternatively, you can also use the AdminClient#listTopics method, as shown above.
I have made a Sender class that uses a KafkaTemplate bean to send a payload to a topic, with some configuration in a SenderConfig class.
Sender class:
@Component
public class Sender {
    private static final Logger LOGGER = LoggerFactory.getLogger(Sender.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, "1", payload);
    }
}
SenderConfig class:
@Configuration
public class SenderConfig {
    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public Sender sender() {
        return new Sender();
    }
}
The problem is in sending, not in producing.
Here are the application.yml properties:
kafka:
  bootstrap-servers: localhost:9092
topic:
  helloworld: helloworld.t
and a simple controller containing:
@RestController
public class Controller {
    protected static final String HELLOWORLD_TOPIC = "helloworld.t";

    @Autowired
    private Sender sender;

    @RequestMapping("/send")
    public String SendMessage() {
        sender.send(HELLOWORLD_TOPIC, "message");
        return "success";
    }
}
and the exception is:
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2017-12-20 09:59:04.654 ERROR 10816 --- [nio-7060-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='1' and payload='message' to topic helloworld.t:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Use the method that includes a key:
kafkaTemplate.send(topic, key, payload);
It's not clear what key value you want to use, but it should distribute evenly amongst the partition count of the topic; for example, a random number within the range of the partition count, as sketched below.
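A minimal sketch, assuming the topic has three partitions (adjust the count to match your topic):

// assumption: helloworld.t has 3 partitions; pick a random key within that range
int partitions = 3;
String key = String.valueOf(ThreadLocalRandom.current().nextInt(partitions));
kafkaTemplate.send(topic, key, payload);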
That means your brokers are not running.
Check server.log and restart the broker if necessary.
There are a few possibilities for this type of error:
1. The Kafka broker is not reachable on the configured port. To check, try telnet localhost 9092; if you get output, the Kafka broker is running.
2. Check that the kafka-clients version Spring Boot uses is the same as your Kafka version. If the versions mismatch, Kafka may not be able to send data to the topic.
3. Sometimes it takes time for the brokers to learn about a newly created topic, so producers might fail with the error Failed to update metadata after 60000 ms. To get around this, create the topic manually using the Kafka command-line tools, as sketched after this list.
4. The listener configuration in server.properties is not working.
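For point 3, a sketch of creating the topic manually, using the old ZooKeeper-based syntax that matches the Kafka version in the log (0.10.x); adjust the host and counts as needed:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic helloworld.t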
You can try this as well:
Change the "bootstrap.servers" property or the --broker-list option to 0.0.0.0:9092.
Change two properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
Hope that helps!