I am pretty new to Kafka. I have my ZooKeeper server running on port 2181 and my Kafka server on port 9092. I have written a Simple Producer in Java.
But whenever I run the program, it shows me the following error:
USAGE: java [options] KafkaServer server.properties [--override property=value]*
Option Description
------ -----------
--override Optional property that should override values set in server.properties file
I am using the NetBeans IDE with JDK 8 and have included all the Kafka JAR files in the library. I believe there's no error in the library files, because the code builds correctly but doesn't run.
Here is the Simple Producer code:
package kafka;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

public class Kafka {

    private static Producer<Integer, String> producer;
    private final Properties properties = new Properties();

    public Kafka() {
        properties.put("metadata.broker.list", "localhost:9092");
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");
        producer = new Producer<>(new ProducerConfig(properties));
    }

    public static void main(String args[]) {
        Kafka k = new Kafka();
        String topic = "test";
        String msg = "hello world";
        KeyedMessage<Integer, String> data = new KeyedMessage<>(topic, msg);
        producer.send(data);
        producer.close();
    }
}
Kindly help :)
It looks like NetBeans is executing the wrong class: not your kafka.Kafka class, but KafkaServer (which appears to be the main class of Kafka itself). Please configure NetBeans to run the correct main class.
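If you want to rule the IDE out, you can also run your class directly from a terminal; a hedged sketch, where the classes directory and lib path are assumptions about your project layout:

java -cp "build/classes:lib/*" kafka.Kafka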
I would recommend starting from the existing Producer sample in the Confluent Examples and reusing that Maven project...
I think your producer configuration is wrong. Here is an example from the official Kafka documentation:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Just try smaller values for batch.size and buffer.memory.
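To see those properties in context, here is a minimal sketch of a complete producer built on them (the topic name test and the message text are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources flushes and closes the producer on exit
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello world"));
        }
    }
}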
I'm using the below command to send records to a secure Kafka
bin/kafka-console-producer.sh --topic <My Kafka topic name> --bootstrap-server <My custom bootstrap server> --producer.config /Users/DY/SSL/ssl.properties
As you can see, I have passed the path of the ssl.properties file to the --producer.config switch.
The ssl.properties file contains the details of how to connect to the secure Kafka cluster; its contents are below:
security.protocol=SSL
ssl.truststore.location=<My custom value>
ssl.truststore.password=<My custom value>
ssl.key.password=<My custom value>
ssl.keystore.location=<My custom value>
ssl.keystore.password=<My custom value>
Now, I want to replicate this command with a Java producer.
The code that I've written is as follows:
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class MyProducer {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", <My bootstrap server>);
        properties.put("key.serializer", StringSerializer.class);
        properties.put("value.serializer", StringSerializer.class);
        properties.put("producer.config", "/Users/DY/SSL/ssl.properties");

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(properties);
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(
                <My topic name>, "Hello World from program");
        Future<RecordMetadata> future = kafkaProducer.send(
                producerRecord,
                (metadata, exception) -> {
                    if (exception != null) {
                        System.out.printf("something wrong");
                        exception.printStackTrace();
                    } else {
                        System.out.println("Successfully transmitted");
                    }
                });
        future.get();
        kafkaProducer.close();
    }
}
This way of passing the file via properties.put("producer.config", "/Users/DY/SSL/ssl.properties") does not seem to work, however. Could anybody let me know what an appropriate way to do this would be?
Rather than using a file, you can pass the properties individually using the static client config constants, as below:
Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// for SSL Encryption
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "<My custom value>");
// for SSL Authentication
properties.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<My custom value>");
The required classes are:
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
You have to set each one as a discrete property in the producer Properties.
You could use Properties.load() with a FileInputStream or FileReader to load them from the file into your Properties object.
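For example, a minimal sketch of that approach, reusing the ssl.properties path from the question (imports as in the previous answer, plus java.io.FileInputStream):

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, <My bootstrap server>);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

// merge the SSL settings from the file into the same Properties object
try (FileInputStream sslProps = new FileInputStream("/Users/DY/SSL/ssl.properties")) {
    properties.load(sslProps);
}

KafkaProducer<String, String> producer = new KafkaProducer<>(properties);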
My goal is to use Kafka test containers with a Spring Boot context in tests, without @DirtiesContext. The problem is that, without starting a container separately for each test class, I have no idea how to consume only the messages that were produced by a particular test class or method.
So I end up consuming messages that were not even part of the test class that is running.
One solution might be to purge the topic of messages. I have no idea how to do this; I've tried to restart the container, but then the next test was not able to connect to Kafka.
The second solution that I had in mind is to have a consumer that is created at the beginning of the test method and somehow records messages from the latest offset while the other stuff in the test runs. I was able to do this with embedded Kafka, but I have no idea how to do it with test containers; see the sketch below.
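With embedded Kafka it was roughly this (a hypothetical sketch; createKafkaConsumer is the helper shown further down, and java.time.Duration is assumed to be imported):

KafkaConsumer<String, String> consumer = createKafkaConsumer("topic_name");
// poll once so the consumer gets its partition assignment...
consumer.poll(Duration.ZERO);
// ...then skip everything that is already in the topic
consumer.seekToEnd(consumer.assignment());
// run the code under test that produces messages, then:
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));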
The current configuration looks like this:
@TestConfiguration
public class KafkaContainerConfig {

    @Bean(initMethod = "start", destroyMethod = "stop")
    public KafkaContainer kafkaContainer() {
        return new KafkaContainer("5.0.3");
    }

    @Bean
    public KafkaAdmin kafkaAdmin(KafkaProperties kafkaProperties, KafkaContainer kafkaContainer) {
        kafkaProperties.setBootstrapServers(List.of(kafkaContainer.getBootstrapServers()));
        return new KafkaAdmin(kafkaProperties.buildAdminProperties());
    }
}
With an annotation that provides the above configuration:
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Import(KafkaContainerConfig.class)
@EnableAutoConfiguration(exclude = TestSupportBinderAutoConfiguration.class)
@TestPropertySource("classpath:/application-test.properties")
@DirtiesContext
public @interface IncludeKafkaTestContainer {
}
And in the test class itself, with multiple such configurations, it would look like:
@IncludeKafkaTestContainer
@IncludePostgresTestContainer
@SpringBootTest(webEnvironment = RANDOM_PORT)
class SomeTest {
    ...
}
Currently, the consumer in a test method is created this way:
KafkaConsumer<String, String> kafkaConsumer = createKafkaConsumer("topic_name");
ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofSeconds(1));
List<ConsumerRecord<String, String>> topicMsgs = Lists.newArrayList(consumerRecords.iterator());
And:
public static KafkaConsumer<String, String> createKafkaConsumer(String topicName) {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaContainer.getBootstrapServers());
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "testGroup_" + topicName);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

    KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
    kafkaConsumer.subscribe(List.of(topicName));
    return kafkaConsumer;
}
I am getting an error while sending messages to a Kafka topic in a kerberized environment. We have a cluster on HDP 2.3.
I followed this: http://henning.kropponline.de/2016/02/21/secure-kafka-java-producer-with-kerberos/
But to send messages, I have to do kinit explicitly first; only then am I able to send messages to the Kafka topic.
I tried to do kinit through a Java class, but that also doesn't work.
Please find the code below:
package com.ct.test.kafka;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) {
        String principalName = "ctadmin";
        String keyTabPath = "/etc/security/keytabs/ctadmin.keytab";

        boolean authStatus = CTSecurityUtil.loginUserFromKeytab(principalName, keyTabPath);
        if (!authStatus) {
            System.out.println("Authentication fails, try something else " + authStatus);
        } else {
            System.out.println("Authentication successful " + authStatus);
        }

        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/etc/kafka/2.3.4.0-3485/0/kafka_jaas.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("sun.security.krb5.debug", "true");

        try {
            long events = Long.parseLong("3");
            Random rnd = new Random();

            Properties props = new Properties();
            System.out.println("After broker list- " + args[0]);
            props.put("metadata.broker.list", args[0]);
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "1");
            props.put("security.protocol", "PLAINTEXTSASL");
            //props.put("partitioner.class", "com.ct.test.kafka.SimplePartitioner");
            System.out.println("After config prop -1");

            ProducerConfig config = new ProducerConfig(props);
            System.out.println("After config prop -2 config" + config);
            Producer<String, String> producer = new Producer<String, String>(config);
            System.out.println("After config prop -3");

            for (long nEvents = 0L; nEvents < events; nEvents += 1L) {
                Date runtime = new Date();
                String ip = "192.168.2." + rnd.nextInt(255);
                String msg = runtime + " www.example.com, " + ip;
                KeyedMessage<String, String> data = new KeyedMessage<String, String>("test_march4", ip, msg);
                System.out.println("After config prop -1 data" + data);
                producer.send(data);
            }
            producer.close();
        } catch (Throwable th) {
            th.printStackTrace();
        }
    }
}
pom.xml (all dependencies downloaded from the Hortonworks repo):
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.9.0.2.3.4.0-3485</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.9.0.2.3.4.0-3485</version>
    </dependency>
    <dependency>
        <groupId>org.jasypt</groupId>
        <artifactId>jasypt-spring31</artifactId>
        <version>1.9.2</version>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.1.2.3.4.0-3485</version>
    </dependency>
</dependencies>
Error:
Case 1: when I specify my user's JAAS file (MyUser_Kafka_jass.conf, shown below)
log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
After config prop -2 configkafka.producer.ProducerConfig#643293ae
java.lang.SecurityException: Configuration Error:
Line 6: expected [controlFlag]
at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:110)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at javax.security.auth.login.Configuration$2.run(Configuration.java:258)
at javax.security.auth.login.Configuration$2.run(Configuration.java:250)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:249)
at org.apache.kafka.common.security.kerberos.Login.login(Login.java:291)
at org.apache.kafka.common.security.kerberos.Login.<init>(Login.java:104)
at kafka.common.security.LoginManager$.init(LoginManager.scala:36)
at kafka.producer.Producer.<init>(Producer.scala:50)
at kafka.producer.Producer.<init>(Producer.scala:73)
at kafka.javaapi.producer.Producer.<init>(Producer.scala:26)
at com.ct.test.kafka.TestProducer.main(TestProducer.java:51)
Caused by: java.io.IOException: Configuration Error:
Line 6: expected [controlFlag]
at com.sun.security.auth.login.ConfigFile.match(ConfigFile.java:563)
at com.sun.security.auth.login.ConfigFile.parseLoginEntry(ConfigFile.java:413)
at com.sun.security.auth.login.ConfigFile.readConfig(ConfigFile.java:383)
at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:283)
at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:219)
at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:108)
MyUser_Kafka_jass.conf
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    renewTicket=true
    principal="ctadmin/prod-dev1-dn1@PROD.COM";
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    storeKey=true
    useTicketCache=true
    serviceName="zookeeper"
    principal="ctadmin/prod-dev1-dn1@PROD.COM";
};
Case 2: when I specify Kafka's own JAAS file
Java config name: /etc/krb5.conf
Loaded from Java config
javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. Make sure -Djava.security.auth.login.config property passed to JVM and the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)'. Make sure you are using FQDN of the Kafka broker you are trying to connect to. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:899)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
at org.apache.kafka.common.security.kerberos.Login.login(Login.java:298)
at org.apache.kafka.common.security.kerberos.Login.<init>(Login.java:104)
at kafka.common.security.LoginManager$.init(LoginManager.scala:36)
at kafka.producer.Producer.<init>(Producer.scala:50)
at kafka.producer.Producer.<init>(Producer.scala:73)
at kafka.javaapi.producer.Producer.<init>(Producer.scala:26)
at com.ct.test.kafka.TestProducer.main(TestProducer.java:51)
This works fine if I do kinit before running the app; otherwise it throws the above error.
I can't do this in my production environment. If there is any way to do this from the app itself, please help me out.
Please let me know if you need any more details.
Thanks :)
The error is caused by a semicolon in your JAAS file, as you can see in this piece of the output:
Line 6: expected [controlFlag]
This line cannot have the semicolon:
principal="ctadmin/prod-dev1-dn1@PROD.COM";
A semicolon may only appear on the last line of the entry.
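For illustration, here is the same entry with the stray semicolon removed; only the final option keeps one, closing the entry:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    renewTicket=true
    principal="ctadmin/prod-dev1-dn1@PROD.COM"
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};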
I don't know what mistake I made the first time; I did the things below again, and it works fine.
First, give all access to the topic:
bin/kafka-acls.sh --add --allow-principals user:ctadmin --operation ALL --topic marchTesting --authorizer-properties zookeeper.connect={hostname}:2181
Create the JAAS file:
kafka-jaas.conf
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    principal="ctadmin@HSCALE.COM"
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};
Java Program:
package com.ct.test.kafka;

import java.util.Date;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer {
    public static void main(String[] args) {
        String topic = args[0];

        Properties props = new Properties();
        props.put("metadata.broker.list", "{Hostname}:6667");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        props.put("security.protocol", "PLAINTEXTSASL");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);
        for (int i = 0; i < 10; i++) {
            producer.send(new KeyedMessage<String, String>(topic, "Test Date: " + new Date()));
        }
        // close the producer so buffered messages are flushed
        producer.close();
    }
}
Run the application:
java -Djava.security.auth.login.config=/home/ctadmin/kafka-jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=true -cp kafka-testing-0.0.1-jar-with-dependencies.jar com.ct.test.kafka.KafkaProducer
I want to initialize my Redis address dynamically from the command line, and use it before a bolt's open method:
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;
import com.beust.jcommander.JCommander;
import com.beust.jcommander.Parameter;
// IPValidator, Spout, and FixerBolt are the project's own classes

public class RunMyTopology {

    @Parameter(names = { "-topologyName" }, description = "Topology name.")
    private static String TOP_NAME = "demo";

    @Parameter(names = { "-redisAddr" }, description = "Redis host address.", validateWith = IPValidator.class)
    public static String REDIS_ADDR = "172.16.3.142";

    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {
        new JCommander(new RunMyTopology(), args);

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new Spout(REDIS_ADDR), 1);
        builder.setBolt("fixerBolt", new FixerBolt(REDIS_ADDR), 1).fieldsGrouping("spout", new Fields("busId"));
        // And many other bolts need REDIS_ADDR

        Config conf = new Config();
        conf.put(Config.TOPOLOGY_WORKERS, 22);
        StormSubmitter.submitTopology(TOP_NAME, conf, builder.createTopology());
    }
}
Now I can achieve this by passing constructor parameters, but if I have many config values like the Redis address, this way looks ugly. How can I pass the changed values in some other way?
Unfortunately, there is no externalization of properties in Apache Storm.
But you can use one of the many libraries that are available for this purpose, such as Spring (the placeholder API) or Apache Commons Configuration (I personally use it with Storm, as it is quite lightweight and does the job well enough).
If you plan on using Commons Configuration:
- define your property files for different environments (DEV, PROD, ...)
- parse a Commons Configuration first, with support for property overriding (via an environment or system variable, for instance)
- then get all the properties inside and put them into the Storm Config (filter some if you take all system properties; it can be full of crap)
- finally, start your cluster (see the sketch just below)
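A minimal sketch of that approach with Commons Configuration 1.x (the file names and the app.env system property are assumptions):

import java.util.Iterator;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import backtype.storm.Config;

public class TopologyConfigLoader {
    public static Config load() throws ConfigurationException {
        // pick the file for the current environment, e.g. topology-DEV.properties
        String env = System.getProperty("app.env", "DEV");
        PropertiesConfiguration external = new PropertiesConfiguration("topology-" + env + ".properties");

        // copy every external property into the Storm Config
        Config stormConf = new Config();
        Iterator<String> keys = external.getKeys();
        while (keys.hasNext()) {
            String key = keys.next();
            stormConf.put(key, external.getString(key));
        }
        return stormConf;
    }
}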
Hope that helps.
Here is a link to the documentation.
http://commons.apache.org/proper/commons-configuration/userguide_v1.10/overview.html#Using_Configuration
I am using KafkaSpout. Please find the test program below.
I am using Storm 0.8.1. The MultiScheme class is in Storm 0.8.2, and I will be using that. I just want to know how the earlier versions worked by just instantiating the StringScheme class. Where can I download earlier versions of the Kafka spout? But I doubt that would be a better alternative than moving to Storm 0.8.2. (Confused)
When I run the code (given below) on the Storm cluster (i.e. when I push my topology), I get the following error (this happens when the Scheme part is commented out; otherwise, of course, I get a compiler error, as the class is not there in 0.8.1):
java.lang.NoClassDefFoundError: backtype/storm/spout/MultiScheme
at storm.kafka.TestTopology.main(TestTopology.java:37)
Caused by: java.lang.ClassNotFoundException: backtype.storm.spout.MultiScheme
In the code given below you may find the spoutConfig.scheme = new StringScheme(); part commented out. I was getting a compiler error if I didn't comment out that line, which is only natural, as there are no matching constructors there. Also, when I instantiate MultiScheme, I get an error, as I don't have that class in 0.8.1.
// imports assume the storm-kafka 0.8.x package layout
import java.util.ArrayList;
import java.util.List;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import com.google.common.collect.ImmutableList;
import storm.kafka.HostPort;
import storm.kafka.KafkaConfig;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;

public class TestTopology {

    public static class PrinterBolt extends BaseBasicBolt {

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
        }

        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println(tuple.toString());
        }
    }

    public static void main(String[] args) throws Exception {
        List<HostPort> hosts = new ArrayList<HostPort>();
        hosts.add(new HostPort("127.0.0.1", 9092));

        LocalCluster cluster = new LocalCluster();
        TopologyBuilder builder = new TopologyBuilder();

        SpoutConfig spoutConfig = new SpoutConfig(new KafkaConfig.StaticHosts(hosts, 1), "test", "/zkRootStorm", "STORM-ID");
        spoutConfig.zkServers = ImmutableList.of("localhost");
        spoutConfig.zkPort = 2181;
        //spoutConfig.scheme = new StringScheme();
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        builder.setSpout("spout", new KafkaSpout(spoutConfig));
        builder.setBolt("printer", new PrinterBolt())
               .shuffleGrouping("spout");

        Config config = new Config();
        cluster.submitTopology("kafka-test", config, builder.createTopology());
        Thread.sleep(600000);
    }
}
I had the same problem. I finally resolved it, and I put a complete running example up on GitHub. You are welcome to check it out here:
https://github.com/buildlackey/cep
(Click on the storm+kafka directory for a sample program that should get you up and running.)
We had a similar issue.
Our solution:
Open pom.xml
Change the scope from provided to <scope>compile</scope>
If you want to know more about dependency scopes, check the Maven documentation:
Maven docu - dependency scopes
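For illustration, this is roughly what the changed dependency looks like in pom.xml (the storm coordinates are an assumption; use whatever artifact your project already declares):

<dependency>
    <groupId>storm</groupId>
    <artifactId>storm</artifactId>
    <version>0.8.1</version>
    <!-- was <scope>provided</scope>; compile bundles the classes into your jar -->
    <scope>compile</scope>
</dependency>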