Kafka failed to update metadata from Java code - java

I am running Java producer code; however, I get the error below:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Here is a snippet of my producer class:
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import pojo.JsonToPojo;
public class KafkaSender {
public void sendtoKafka(List<JsonToPojo> data) {
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "0.0.0.0:9092");
props.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaExampleProducer");
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.RETRIES_CONFIG, 5);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 80554432);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");
Producer<String, JsonToPojo> producer = new KafkaProducer<String, JsonToPojo>(props);
TestCallback callback = new TestCallback();
for (JsonToPojo toKafka : data) {
ProducerRecord<String, JsonToPojo> record = new ProducerRecord<String, JsonToPojo>("dontknow", toKafka.group_city.toString(), toKafka);
// RecordMetadata metadata = producer.send(record).get();
// System.out.println("Hey" + metadata.topic());
producer.send(record, callback);
}
producer.close();
}
private static class TestCallback implements Callback {
@Override
public void onCompletion(RecordMetadata recordMetadata, Exception e) {
if (e != null) {
System.out.println("Error while producing message to topic :" + recordMetadata);
e.printStackTrace();
} else {
String message = String.format("sent message to topic:%s partition:%s offset:%s",
recordMetadata.topic(), recordMetadata.partition(), recordMetadata.offset());
System.out.println(message);
}
}
}
}
I am using Kafka 0.9 on a MapR cluster. Right now there is just one broker running. I don't get any error apart from the one I have posted above. I have played with the server.properties file by changing a few parameters, but nothing seems to work.
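Two things in the snippet are worth checking before touching server.properties (this is a sketch, not a confirmed fix, and localhost:9092 is a placeholder for whatever address the broker actually advertises to clients): bootstrap.servers is set to 0.0.0.0:9092, which is a bind address rather than something a client can connect to, and the record value is a JsonToPojo while value.serializer is the StringSerializer, so the value type does not match. A minimal variant that keeps both key and value as Strings:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import pojo.JsonToPojo;

public class KafkaSenderSketch {
    public void sendToKafka(List<JsonToPojo> data) {
        Properties props = new Properties();
        // Address the broker is reachable at from this client (placeholder, not 0.0.0.0).
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // Value type now matches the configured StringSerializer.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (JsonToPojo toKafka : data) {
                String value = toKafka.toString(); // stand-in for real JSON serialization
                producer.send(new ProducerRecord<>("dontknow", toKafka.group_city.toString(), value),
                        (metadata, e) -> {
                            if (e != null) {
                                e.printStackTrace();
                            }
                        });
            }
        }
    }
}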

Related

Java Consumer for Kafka in Cloudera Quickstart not working

I have a Cloudera Quickstart VM. I have installed the Kafka parcels using Cloudera Manager, and it works fine inside the VM with the console-based consumer and producer.
But when I try to use the Java-based consumer, it does not produce or consume messages. I can list the topics,
but I cannot consume messages.
Following is my code:
package kafka_consumer;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
public class mclass {
public static void main(String[] args) {
Properties props = new Properties();
props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.75.1:9092");
// Just a user-defined string to identify the consumer group
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
// Enable auto offset commit
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
// List of topics to subscribe to
consumer.subscribe(Arrays.asList("second_topic"));
for (String k_topic : consumer.listTopics().keySet()) {
System.out.println(k_topic);
}
while (true) {
try {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records) {
System.out.printf("Offset = %d\n", record.offset());
System.out.printf("Key = %s\n", record.key());
System.out.printf("Value = %s\n", record.value());
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
}
And following is the output of the code. While the console producer is producing messages, the consumer is not able to receive them.
PS: I can telnet to the IP and port of the Kafka broker. I can even list the topics. The consumer keeps running without crashing, but no messages are being consumed.
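Two things worth ruling out here (a sketch, not a confirmed diagnosis): the broker's advertised listener must resolve to 10.0.75.1:9092 from outside the VM, otherwise the consumer can reach the bootstrap broker but never the partition leader; and a consumer group with no committed offsets starts at the log end by default, so messages produced before the consumer joined are skipped. Setting auto.offset.reset to earliest with a fresh group id makes the second case visible:

// Same Properties as in the question, plus:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // start from the beginning when no offsets exist
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-" + System.currentTimeMillis()); // fresh group, no committed offsets

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList("second_topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
        }
    }
}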

Concatenate logs by ID and time using Kafka Streams - Failed to flush state store

I want to concatenate logs by ID within a window of time using Kafka Streams.
For now, I can successfully count the number of logs having the same ID (the commented code).
However, when I replace the .count method with .aggregate, I face the following error:
"Failed to flush state store time-windowed-aggregation-stream-store"
Caused by: java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed cannot be cast to java.lang.String
I'm new to this and can't figure out the cause of this error; I thought that having .withValueSerde(Serdes.String()) was supposed to prevent it.
Below is my code:
package myapps;
import java.time.Duration;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Suppressed.*;
import org.apache.kafka.streams.state.WindowStore;
public class MyCode {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-mycode");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
final StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream("streams-plaintext-input");
KStream<String, String> changedKeyStream = source.selectKey((k, v)
-> v.substring(v.indexOf("mid="),v.indexOf("mid=")+8));
/* // Working code for count
changedKeyStream
.groupByKey()
.windowedBy(TimeWindows.of(Duration.ofSeconds(3))
.grace(Duration.ofSeconds(2)))
.count(Materialized.with(Serdes.String(), Serdes.Long())) // could be replaced with an aggregator (reducer?) ?
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream()
.print(Printed.toSysOut());
*/
changedKeyStream
.groupByKey()
.windowedBy(TimeWindows.of(Duration.ofSeconds(3)))
.aggregate(
String::new, (String k, String v, String Result) -> { return Result+"\n"+v; },
Materialized.<String, String, WindowStore<Bytes, byte[]>>as("time-windowed-aggregated-stream-store") /* state store name */
.withValueSerde(Serdes.String())) /* serde for aggregate value */
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream()
.print(Printed.toSysOut());
changedKeyStream.to("streams-mycode-output", Produced.with(Serdes.String(), Serdes.String()));
final Topology topology = builder.build();
final KafkaStreams streams = new KafkaStreams(topology, props);
final CountDownLatch latch = new CountDownLatch(1);
// attach shutdown handler to catch control-c
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close();
latch.countDown();
}
});
// launch until control+c
try {
streams.start();
latch.await();
} catch (Throwable e) {
System.out.print("Something went wrong!");
System.exit(1);
}
System.exit(0);
}
}
Thank you in advance for your help.
There are two options to fix it:
Pass org.apache.kafka.streams.kstream.Grouped to KStream::groupByKey.
Set a key org.apache.kafka.common.serialization.Serde on Materialized via Materialized::withKeySerde(...).
Sample code below:
Option 1:
changedKeyStream
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(3)))
Option 2:
changedKeyStream
.groupByKey()
.windowedBy(TimeWindows.of(Duration.ofSeconds(3)))
.aggregate(
String::new, (String k, String v, String Result) -> { return Result+"_"+v; },
Materialized.<String, String, WindowStore<Bytes, byte[]>>as("time-windowed-aggregated-stream-store") /* state store name */
.withValueSerde(Serdes.String())
.withKeySerde(Serdes.String())
)
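The reason either option helps: after windowedBy the record key is no longer a String but a Windowed<String>, so the String serde has to be supplied explicitly (at groupByKey or on Materialized) for Streams to wrap it correctly, which is exactly what the cast error points at. If the windowed result should also go to an output topic, the Windowed key can be mapped back to a plain String when converting to a stream. A sketch (the output topic name here is made up):

// after .suppress(...), instead of .toStream().print(...):
.toStream((windowedKey, value) -> windowedKey.key() + "@" + windowedKey.window().start())
.to("streams-mycode-aggregated-output", Produced.with(Serdes.String(), Serdes.String()));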

How can I send data to zipkin from kafka in java?

I hope you can help me. Let's use, for example, this very simple code:
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
public class ms_example1 {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
final StreamsBuilder builder = new StreamsBuilder();
builder.stream("streams-plaintext-input").to("streams-pipe-output");
final Topology topology = builder.build();
final KafkaStreams streams = new KafkaStreams(topology, props);
final CountDownLatch latch = new CountDownLatch(1);
// attach shutdown handler to catch control-c
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close();
latch.countDown();
}
});
try {
streams.start();
latch.await();
} catch (Throwable e) {
System.exit(1);
}
System.exit(0);
}
}
As you can see, this microservice only forwards messages from one Kafka topic to another. I also want to send this data to Zipkin to see the duration of the messages, or something like that.
Maybe I've seen the solution and didn't get it, but I have really looked for one and didn't find it. You are my last hope. I have seen the Brave API, but I don't really understand how to use it for Kafka.
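One possible approach, shown only as a sketch: Brave ships a Kafka Streams instrumentation (brave-instrumentation-kafka-streams) whose client supplier can be handed to KafkaStreams, so each consumed and produced record is reported to Zipkin. This assumes brave, brave-instrumentation-kafka-streams and zipkin-sender-urlconnection are on the classpath and a Zipkin server listens on localhost:9411; adjust to your setup.

// Additional imports needed (assumed dependencies):
// import brave.Tracing;
// import brave.kafka.streams.KafkaStreamsTracing;
// import zipkin2.reporter.AsyncReporter;
// import zipkin2.reporter.urlconnection.URLConnectionSender;

URLConnectionSender sender = URLConnectionSender.create("http://localhost:9411/api/v2/spans");
AsyncReporter<zipkin2.Span> reporter = AsyncReporter.create(sender);
Tracing tracing = Tracing.newBuilder()
        .localServiceName("streams-pipe")
        .spanReporter(reporter)
        .build();
KafkaStreamsTracing streamsTracing = KafkaStreamsTracing.create(tracing);

// Instead of new KafkaStreams(topology, props): the tracing client supplier wraps the
// underlying producer/consumer so each record hop is reported to Zipkin.
final KafkaStreams streams = new KafkaStreams(topology, props, streamsTracing.kafkaClientSupplier());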

kafka java consumer not reading data

I am trying to write a simple java kafka consumer to read data using similar code as in https://github.com/bkimminich/apache-kafka-book-examples/blob/master/src/test/kafka/consumer/SimpleHLConsumer.java.
Looks like my app is able to connect, but it's not fetching any data. Please suggest.
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
//import scala.util.parsing.json.JSONObject
import scala.util.parsing.json.JSONObject;
public class SimpleHLConsumer {
private final ConsumerConnector consumer;
private final String topic;
public SimpleHLConsumer(String zookeeper, String groupId, String topic) {
Properties props = new Properties();
props.put("zookeeper.connect", zookeeper);
props.put("group.id", groupId);
// props.put("zookeeper.session.timeout.ms", "5000");
// props.put("zookeeper.sync.time.ms", "250");
// props.put("auto.commit.interval.ms", "1000");
consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
this.topic = topic;
}
public void testConsumer() {
Map<String, Integer> topicCount = new HashMap<>();
topicCount.put(topic, 1);
Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
System.out.println(consumerStreams);
List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
System.out.println(streams);
System.out.println(consumer);
for (final KafkaStream stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
System.out.println("for loop");
System.out.println(it);
System.out.println("Message from Single Topic: " + new String(it.next().message()));
//System.out.println("Message from Single Topic: " + new String(it.message()));
while (it.hasNext()) {
System.out.println("in While");
System.out.println("Message from Single Topic: " + new String(it.next().message()));
}
}
// if (consumer != null) {
// consumer.shutdown();
// }
}
public static void main(String[] args) {
String topic = "test";
SimpleHLConsumer simpleHLConsumer = new SimpleHLConsumer("localhost:2181", "testgroup", topic);
simpleHLConsumer.testConsumer();
}
}
Here is the output I see in Eclipse. It does seem to connect to my ZooKeeper, but it just hangs there; it does not display any message at all.
log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
SLF4J: The requested version 1.6 by your slf4j binding is not compatible with [1.5.5, 1.5.6]
SLF4J: See http://www.slf4j.org/codes.html#version_mismatch for further details.
{test=[testgroup kafka stream]}
[testgroup kafka stream]
kafka.javaapi.consumer.ZookeeperConsumerConnector#6200f9cb
for loop
The consumer iterator's hasNext is a blocking call. It will block indefinitely if no new message is available for consumption.
To verify this, change your code to:
// Comment 2 lines below
// System.out.println(it);
// System.out.println("Message from Single Topic: " + new String(it.next().message()));
// Line below is blocking. Your code will hang till next message in topic.
// Add new message in topic using producer, message will appear in console
while (it.hasNext()) {
A better way is to execute the code in a separate thread. Use consumer.timeout.ms to specify a time in ms after which the consumer will throw a timeout exception:
// keepRunningThread is a flag to control when to exit the consumer loop
while (keepRunningThread)
{
    try
    {
        if (it.hasNext())
        {
            System.out.println(new String(it.next().message()));
        }
    }
    catch (ConsumerTimeoutException ex)
    {
        // Timeout exception waiting for a Kafka message
        // Wait for 5 (or t) seconds before checking for a message again
        Thread.sleep(5000);
    }
}
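For the timeout to take effect, the property has to be set on the same Properties object that builds the ConsumerConfig in the question (a small sketch; 5000 ms is an arbitrary value), and ConsumerTimeoutException comes from the old consumer's kafka.consumer package:

// In SimpleHLConsumer's constructor, alongside zookeeper.connect and group.id:
props.put("consumer.timeout.ms", "5000"); // it.hasNext() throws ConsumerTimeoutException after 5 s of no data
// import needed for the catch block:
// import kafka.consumer.ConsumerTimeoutException;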

Can I write a Kafka consumer in the Java API to consume messages produced by a Python producer?

I have producer code written in Python which fetches tweets from Twitter. I have created a topic named twitter_test.
When I use kafka-console-consumer I can see that there are a lot of tweets in that topic.
But when I try to consume these messages from the Java consumer, it does not fetch any data.
Below is my consumer code:
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class avro_twitter {
public static void main(String[] args) throws IOException {
Properties props = new Properties();
props.put("bootstrap.servers", "10.16.111.12:9092");
props.put("group.id", "groupid");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("auto.offset.reset", "earliest");
props.put("schema.registry.url", "10.16.111.12:8081");
String topic = "twitter_test";
KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<String, GenericRecord>(props);
consumer.subscribe(Collections.singletonList(topic));
System.out.println("Reading topic:" + topic);
while (true) {
ConsumerRecords<String, GenericRecord> records = consumer.poll(1000);
for (ConsumerRecord<String, GenericRecord> record: records) {
String authid=record.value().get(1).toString();
String screen_name=record.value().get(1).toString();
String description=record.value().get(2).toString();
System.out.println(authid);
}
}
}
}
Any help would be really appreciated.
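Whether this consumer can decode anything depends on how the Python producer serialized the tweets, which the question doesn't show. If the Python side sent plain JSON/UTF-8 strings (common with kafka-python), the KafkaAvroDeserializer will not be able to decode them; reading the topic as plain strings first confirms that data is flowing at all. A sketch (if the producer really used Confluent's Avro serializer, keep the Avro deserializer but give schema.registry.url a full URL such as http://10.16.111.12:8081):

// Same props as in the question, but with a plain String value deserializer:
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("twitter_test"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.value()); // prints the raw message text
    }
}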
