I have made a Sender class that uses a KafkaTemplate bean to send a payload to a topic, with some configuration in a SenderConfig class.
Sender class:
@Component
public class Sender {
private static final Logger LOGGER = LoggerFactory.getLogger(Sender.class);
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
public void send(String topic, String payload) {
LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
kafkaTemplate.send(topic, "1", payload);
}
}
SenderConfig class:
@Configuration
public class SenderConfig {
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
// list of host:port pairs used for establishing the initial connections to the Kafka cluster
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return props;
}
@Bean
public ProducerFactory<String, String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
@Bean
public Sender sender() {
return new Sender();
}
}
The problem is in sending, not in producing. Here are the application.yml properties:
kafka:
  bootstrap-servers: localhost:9092
  topic:
    helloworld: helloworld.t
and a simple controller containing:
@RestController
public class Controller {
protected final static String HELLOWORLD_TOPIC = "helloworld.t";
@Autowired
private Sender sender;
@RequestMapping("/send")
public String sendMessage() {
sender.send(HELLOWORLD_TOPIC, "message");
return "success";
}
}
and the exception is
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2017-12-20 09:58:04.645 INFO 10816 --- [nio-7060-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2017-12-20 09:59:04.654 ERROR 10816 --- [nio-7060-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='1' and payload='message' to topic helloworld.t:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Use the send method that includes a key:
kafkaTemplate.send(topic, key, payload);
It's not clear what key value you want to use, but it should distribute evenly amongst the partition count of the topic, for example a random number within the range of the partition count.
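A minimal sketch of that idea, assuming a topic with 4 partitions (adjust the count to your actual topic) and the same KafkaTemplate bean as above:

import java.util.concurrent.ThreadLocalRandom;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class Sender {
    // Assumed partition count of the topic; adjust to match your topic configuration.
    private static final int PARTITION_COUNT = 4;
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;
    public void send(String topic, String payload) {
        // A random key in [0, PARTITION_COUNT) lets the default partitioner spread records across partitions.
        String key = String.valueOf(ThreadLocalRandom.current().nextInt(PARTITION_COUNT));
        kafkaTemplate.send(topic, key, payload);
    }
}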
That means your brokers are not running. Check server.log and restart the broker if necessary.
There are a few possibilities for this type of error:
1. The Kafka broker is not reachable on the configured port. To check this, try telnet localhost 9092; if you get output, the Kafka broker is running.
2. Check that the kafka-clients version Spring Boot uses is the same as your Kafka version. If the versions mismatch, Kafka may not be able to send data to the topic.
3. Sometimes it takes time for the brokers to learn about a newly created topic, so producers may fail with the error "Failed to update metadata after 60000 ms". To get around this, create the topic manually using the Kafka command line options (or programmatically; see the sketch at the end of this answer).
4. The listener configuration in server.properties is not working.
You can also try this:
change the "bootstrap.servers" property or the --broker-list option to 0.0.0.0:9092
change these 2 properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
Hope that helps!
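As a sketch of the programmatic topic creation mentioned in point 3 above, a KafkaAdmin plus NewTopic bean creates the topic on application startup. This assumes a spring-kafka/kafka-clients version that supports the AdminClient API (0.11+, newer than the 0.10.1.1 shown in the question's log) and the same kafka.bootstrap-servers property as in the question:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class TopicConfig {
    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic helloworldTopic() {
        // topic name, number of partitions, replication factor
        return new NewTopic("helloworld.t", 1, (short) 1);
    }
}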
Related
There is a data stream application. It needs to connect to and listen to several Kafka brokers (different IP addresses, more than 2) and to write to one.
Please advise how to arrange a multi-Kafka connection.
Configuration class for a single Kafka connection:
@Configuration
public class KafkaProducer {
@Bean
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return props;
}
@Bean
public ProducerFactory<String, String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
Several connections are expected to be set up and listened to at the same time.
The bootstrap-servers config option accepts a comma-separated list of brokers of one cluster, but you only need to provide multiple entries for fault tolerance, as Kafka automatically returns all servers in the same cluster on first connection.
If you need to connect to distinct Kafka clusters, create a bean with the different bootstrap address.
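A minimal sketch of that idea for the producer side, assuming a hypothetical second cluster address and bean names:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class SecondClusterKafkaConfig {

    @Bean
    public ProducerFactory<String, String> secondClusterProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Bootstrap address of the second, distinct cluster (assumed example value).
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "other-cluster-host:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> secondClusterKafkaTemplate() {
        return new KafkaTemplate<>(secondClusterProducerFactory());
    }
}

Inject the template you need by bean name, for example with @Qualifier("secondClusterKafkaTemplate"); consumer factories for listening to another cluster can be defined the same way with their own bootstrap address.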
In my Spring Boot application I have a Kafka consumer class which reads messages whenever they are available in the topic. I want to limit the consumer to consuming a message every 2 hours, i.e. after reading one message the consumer should be paused for 2 hours and then consume another message.
This is my consumer config method:
@Bean
public Map<String, Object> scnConsumerConfigs() {
Map<String, Object> propsMap = new HashMap<>();
// common props
logger.info("KM Dataloader :: Kafka Brokers for Software topic: {}", bootstrapServersscn);
propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServersscn);
propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 7200000);
// ssl props
propsMap.put("security.protocol", mpaasSecurityProtocol);
propsMap.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, truststorePath);
propsMap.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, truststorePassword);
propsMap.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, keystorePath);
propsMap.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, keystorePassword);
return propsMap;
}
Then I create this container method where I set up the rest of the Kafka configuration:
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
LOGGER.info("Setting concurrency to {} for {}", config.getConcurrency(), topicName);
factory.setConcurrency(config.getConcurrency());
factory.setConsumerFactory(cFactory);
factory.setRetryTemplate(retryTemplate);
factory.getContainerProperties().setIdleBetweenPolls(7200000);
return factory;
Using this code the partitions are rebalanced every 2 hours, but it is not reading messages at all.
My Kafka consumer method:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
return new KmKafkaListener(softwareService) {
@KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
throws InterruptedException {
this.onMessage(record);
}
};
}
Try adding the @KafkaListener-annotated method directly in KmKafkaListener so that Spring Kafka will take care of calling it:
public class KmKafkaListener {
@KafkaListener(topics = SOFTWARE_TOPIC, containerFactory = "softwareMessageContainer", groupId = SOFTWARE_CONSUMER_GROUP)
public void onscnMessageforSA20(@Payload ConsumerRecord<String, Object> record)
throws InterruptedException {
this.onMessage(record);
}
}
and initialize the bean this way:
@Bean
public KmKafkaListener softwareKafkaListener(KmSoftwareService softwareService) {
return new KmKafkaListener(softwareService);
}
I'm a newbie trying to get communication working between two Spring Boot microservices using Confluent Cloud Apache Kafka.
When using Kafka on Confluent Cloud, I'm getting the following error on my consumer (Service B) after Service A publishes the message to the topic. However, when I log in to my Confluent Cloud account, I see that the message has been successfully published to the topic.
org.springframework.context.ApplicationContextException: Failed to start bean
'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is
java.lang.IllegalStateException: Topic(s) [topic-1] is/are not present and
missingTopicsFatal is true
I do not face this issue when I run Kafka on my local server. Service A is able to publish the message to the topic on my local Kafka server and Service B is successfully able to consume that message.
I have included my local Kafka server configuration in application.properties (as commented-out code).
Service A: PRODUCER
application.properties
app.topic=test-1
#Remote
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
request.timeout.ms=20000
bootstrap.servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
retry.backoff.ms=500
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
requiredusername="*******"
password="****"
#Local
#ssl.endpoint.identification.algorithm=https
#security.protocol=SASL_SSL
#sasl.mechanism=PLAIN
#request.timeout.ms=20000
#bootstrap.servers=localhost:9092
#retry.backoff.ms=500
#sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
Sender.java
public class Sender {
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
@Value("${app.topic}")
private String topic;
public void send(String data){
Message<String> message = MessageBuilder
.withPayload(data)
.setHeader(KafkaHeaders.TOPIC, topic)
.build();
kafkaTemplate.send(message);
}
}
KafkaProducerConfig.java
@Configuration
@EnableKafka
public class KafkaProducerConfig {
@Value("${bootstrap.servers}")
private String bootstrapServers;
@Bean
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return props;
}
@Bean
public ProducerFactory<String, String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
Service B: CONSUMER
application.properties
app.topic=test-1
#Remote
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
request.timeout.ms=20000
bootstrap.servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
retry.backoff.ms=500
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
requiredusername="*******"
password="****"
#Local
#ssl.endpoint.identification.algorithm=https
#security.protocol=SASL_SSL
#sasl.mechanism=PLAIN
#request.timeout.ms=20000
#bootstrap.servers=localhost:9092
#retry.backoff.ms=500
#sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
KafkaConsumerConfig.java
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
@Value("${bootstrap.servers}")
private String bootstrapServers;
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "confluent_cli_consumer_040e5c14-0c18-4ae6-a10f-8c3ff69cbc1a"); // confluent cloud consumer group-id
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return props;
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(
consumerConfigs(),
new StringDeserializer(), new StringDeserializer());
}
@Bean(name = "kafkaListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
}
KafkaConsumer.java
@Service
public class KafkaConsumer {
private static final Logger LOG = LoggerFactory.getLogger(KafkaConsumer.class);
@Value("${app.topic}")
private String kafkaTopic;
@KafkaListener(topics = "${app.topic}", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload String data) {
LOG.info("received data='{}'", data);
}
}
The username and password are part of the JAAS config, so put them on one line
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafkaclient1" password="kafkaclient1-secret";
I would also suggest that you verify your property file is correctly loaded into the client
See the Boot documentation.
You can't just put arbitrary kafka properties directly in the application.properties file.
The properties supported by auto configuration are shown in appendix-application-properties.html. Note that, for the most part, these properties (hyphenated or camelCase) map directly to the Apache Kafka dotted properties. Refer to the Apache Kafka documentation for details.
The first few of these properties apply to all components (producers, consumers, admins, and streams) but can be specified at the component level if you wish to use different values. Apache Kafka designates properties with an importance of HIGH, MEDIUM, or LOW. Spring Boot auto-configuration supports all HIGH importance properties, some selected MEDIUM and LOW properties, and any properties that do not have a default value.
Only a subset of the properties supported by Kafka are available directly through the KafkaProperties class. If you wish to configure the producer or consumer with additional properties that are not directly supported, use the following properties:
spring.kafka.properties.prop.one=first
spring.kafka.admin.properties.prop.two=second
spring.kafka.consumer.properties.prop.three=third
spring.kafka.producer.properties.prop.four=fourth
spring.kafka.streams.properties.prop.five=fifth
This sets the common prop.one Kafka property to first (applies to producers, consumers and admins), the prop.two admin property to second, the prop.three consumer property to third, the prop.four producer property to fourth and the prop.five streams property to fifth.
...
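For example, the Confluent Cloud settings from the question would go under the spring.kafka prefixes roughly like this (a sketch; credential values masked as in the question):

spring.kafka.bootstrap-servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.ssl.endpoint.identification.algorithm=https
spring.kafka.properties.request.timeout.ms=20000
spring.kafka.properties.retry.backoff.ms=500
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="*******" password="****";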
@cricket_007's answer is correct. You need to embed the username and password (notably, the cluster API key and API secret) within the sasl.jaas.config property value.
You can double-check how Java clients should connect to Confluent Cloud via this official example here: https://github.com/confluentinc/examples/blob/5.3.1-post/clients/cloud/java/src/main/java/io/confluent/examples/clients/cloud
Thanks,
-- Ricardo
In an effort to learn Apache Kafka, I’ve developed a Spring Boot application that sends messages to a Kafka topic if I send a POST request to a controller that calls a KafkaTemplate send method. I’m running Ubuntu 19.04 and successfully set up and installed Kafka and Zookeeper locally. Everything works fine.
The problem happens when I shut down either Zookeeper or Kafka. If I do this, then on startup the Kafka AdminClient of my application periodically tries to find a broker but sends this message to the console:
Connection to node -1 could not be established. Broker may not be available.
I implemented the fixes suggested here Kafka + Zookeeper: Connection to node -1 could not be established. Broker may not be available and here Spring-Boot and Kafka : How to handle broker not available?. But if I run a maven clean install then the build never finishes if Zookeeper and Kafka aren’t running. Why is this and is there a way to configure the application so that it checks for Kafka availability on startup and gracefully handles when the service is unavailable?
Here is my service class that calls the KafkaTemplate
@Autowired
public PingMessageServiceImpl(KafkaTemplate kafkaTemplate, KafkaTopicConfiguration kafkaTopicConfiguration) {
this.kafkaTemplate = kafkaTemplate;
this.kafkaTopicConfiguration = kafkaTopicConfiguration;
}
@Override
public void sendMessage(String message) {
log.info(String.format("Received following ping message %s", message));
if (!isValidPingRequest(message)) {
log.warn("Received invalid ping request");
throw new InvalidPingRequestException();
}
log.info(String.format("Sending message=[%s]", message));
ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(kafkaTopicConfiguration.getPingTopic(), message);
future.addCallback(buildListenableFutureCallback(message));
}
private boolean isValidPingRequest(String message) {
return "ping".equalsIgnoreCase(message);
}
private ListenableFutureCallback<SendResult<String, String>> buildListenableFutureCallback(String message) {
return new ListenableFutureCallback<SendResult<String, String>>() {
@Override
public void onSuccess(SendResult<String, String> result) {
log.info(String.format("Sent message=[%s] with offset=[%d]", message, result.getRecordMetadata().offset()));
}
@Override
public void onFailure(Throwable ex) {
log.info(String.format("Unable to send message=[%s] due to %s", message, ex.getMessage()));
}
};
}
Here is the configuration class that I use to extract configuration properties for Kafka from the properties file
@NotNull(message = "bootstrapAddress cannot be null")
@NotBlank(message = "bootstrapAddress cannot be blank")
private String bootstrapAddress;
@NotNull(message = "pingTopic cannot be null")
@NotBlank(message = "pingTopic cannot be blank")
private String pingTopic;
@NotNull(message = "reconnectBackoffMs cannot be null")
@NotBlank(message = "reconnectBackoffMs cannot be blank")
@Value("${kafka.reconnect.backoff.ms}")
private String reconnectBackoffMs;
@Bean
public KafkaAdmin kafkaAdmin() {
Map<String, Object> configurations = new HashMap<>();
configurations.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
configurations.put(AdminClientConfig.RECONNECT_BACKOFF_MS_CONFIG, reconnectBackoffMs);
return new KafkaAdmin(configurations);
}
@Bean
public NewTopic pingTopic() {
return new NewTopic(pingTopic, 1, (short) 1);
}
@PostConstruct
private void displayOnStartup() {
log.info(String.format("bootstrapAddress is %s", bootstrapAddress));
log.info(String.format("reconnectBackoffMs is %s", reconnectBackoffMs));
}
If you have any Spring Boot integration test, then while loading the ApplicationContext the Spring Kafka beans like KafkaTemplate and KafkaAdmin will try to connect to the Kafka server with the properties specified in the yml or properties file.
To avoid this you can use the spring embedded Kafka server (spring-kafka-test), so that the Kafka beans connect to the embedded server during test execution.
Or you can simply mock the KafkaTemplate and KafkaAdmin beans using the @MockBean annotation in the integration test cases.
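A minimal sketch of the mocking approach (the test class name is an assumption; it just verifies the context loads without a broker):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;

@SpringBootTest
class ApplicationContextLoadsTest {

    // Replace the real Kafka beans with mocks so startup never tries to reach a broker.
    @MockBean
    private KafkaTemplate<String, String> kafkaTemplate;

    @MockBean
    private KafkaAdmin kafkaAdmin;

    @Test
    void contextLoads() {
        // Passes as long as the application context starts.
    }
}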
I have couple of questions regarding the behaviour of spring-kafka during certain scenarios. Any answers or pointers would be great.
Background: I am building a Kafka consumer which talks to external APIs and sends an acknowledgment back. My config looks like this:
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerServers());
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, this.configuration.getString("kafka-generic.consumer.group.id"));
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "5000000");
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "6000000");
return props;
}
@Bean
public RetryTemplate retryTemplate() {
final ExponentialRandomBackOffPolicy backOffPolicy = new ExponentialRandomBackOffPolicy();
backOffPolicy.setInitialInterval(this.configuration.getLong("retry-exp-backoff-init-interval"));
final SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
retryPolicy.setMaxAttempts(this.configuration.getInt("retry-max-attempts"));
final RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> retryKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Event> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(this.configuration.getInt("kafka-concurrency"));
factory.setRetryTemplate(retryTemplate());
factory.getContainerProperties().setIdleEventInterval(this.configuration.getLong("kafka-rtm-idle-time"));
//factory.getContainerProperties().setAckOnError(false);
factory.getContainerProperties().setErrorHandler(kafkaConsumerErrorHandler);
factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
return factory;
}
Let's say the number of partitions I have is 4. My partition distribution for the KafkaListeners is:
@KafkaListener(topicPartitions = @TopicPartition(topic = "topic", partitions = {"0", "1"}),
containerFactory = "retryKafkaListenerContainerFactory")
public void receive(Event event, Acknowledgment acknowledgment) throws Exception {
serviceInvoker.callService(event);
acknowledgment.acknowledge();
}
@KafkaListener(topicPartitions = @TopicPartition(topic = "topic", partitions = {"2", "3"}),
containerFactory = "retryKafkaListenerContainerFactory")
public void receive1(Event event, Acknowledgment acknowledgment) throws Exception {
serviceInvoker.callService(event);
acknowledgment.acknowledge();
}
Now my questions are:
Let's say I have 2 machines where I deployed this code (with the same consumer group id). If I understood properly, when an event arrives for a partition, the corresponding KafkaListener on one of the machines will receive it, but the other machine's KafkaListener won't. Is that right?
My error handler is:
@Named
public class KafkaConsumerErrorHandler implements ErrorHandler {
@Inject
private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;
@Override
public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord) {
System.out.println("Shutting down all the containers");
kafkaListenerEndpointRegistry.stop();
}
}
Let's talk about a scenario where a consumer's KafkaListener is called and it calls serviceInvoker.callService(event); but the service is down. Then, according to the retryKafkaListenerContainerFactory, it retries 3 times and then fails, so the error handler is called, stopping the kafkaListenerEndpointRegistry. Will this shut down all other consumers or machines with the same consumer group, or just this consumer or machine?
Let's talk about the scenario in 2. Is there any configuration we need to change to let Kafka know to hold off acknowledgement for that much time?
My Kafka producer produces messages every 10 minutes. Do I need to configure those 10 minutes anywhere in my consumer code, or is it agnostic of that?
In my KafkaListener annotations I hardcoded the topic name and partitions. Can I change them at runtime?
Any help is really appreciated. Thanks in advance. :)
Correct; only 1 will get it.
It will only stop the local containers - Spring doesn't know anything about your other instances.
Since you have ackOnError=false, the offset won't be committed.
The consumer does not need to know how often messages are published.
You can't change them at runtime, but you can use property placeholders ${...} or SpEL expressions #{...} to set them up during application initialization.
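A minimal sketch of the placeholder approach, assuming hypothetical kafka.topic and kafka.partitions properties (e.g. kafka.topic=topic and kafka.partitions=0,1 in application.properties):

// Topic and partitions are resolved from configuration when the application starts.
@KafkaListener(topicPartitions = @TopicPartition(topic = "${kafka.topic}",
        partitions = "#{'${kafka.partitions}'.split(',')}"),
        containerFactory = "retryKafkaListenerContainerFactory")
public void receive(Event event, Acknowledgment acknowledgment) throws Exception {
    serviceInvoker.callService(event);
    acknowledgment.acknowledge();
}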