My goal is to use Kafka Testcontainers with a Spring Boot context in tests, without @DirtiesContext. The problem is that, without starting a separate container for each test class, I have no idea how to consume only the messages that were produced by a particular test class or method.
So I end up consuming messages that were not even part of the test class that is running.
One solution might be to purge the topic of messages. I have no idea how to do this; I tried restarting the container, but then the next test was not able to connect to Kafka.
The second solution I had in mind is a consumer that is created at the beginning of a test method and somehow records messages from the latest offset onward while the rest of the test runs. I was able to do this with embedded Kafka, but I have no idea how to do it with Testcontainers.
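What I have in mind is roughly the following sketch (untested; createKafkaConsumer is the helper shown further down). The idea is to position the consumer at the current end of the topic before triggering the code under test, so that a later poll only returns records produced by this test:
KafkaConsumer<String, String> kafkaConsumer = createKafkaConsumer("topic_name");
kafkaConsumer.poll(Duration.ofMillis(100));          // first poll triggers partition assignment
kafkaConsumer.seekToEnd(kafkaConsumer.assignment()); // skip everything already in the topic
// ... invoke the code under test that produces messages ...
ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofSeconds(5));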
Current configuration looks like this:
@TestConfiguration
public class KafkaContainerConfig {

    @Bean(initMethod = "start", destroyMethod = "stop")
    public KafkaContainer kafkaContainer() {
        return new KafkaContainer("5.0.3");
    }

    @Bean
    public KafkaAdmin kafkaAdmin(KafkaProperties kafkaProperties, KafkaContainer kafkaContainer) {
        kafkaProperties.setBootstrapServers(List.of(kafkaContainer.getBootstrapServers()));
        return new KafkaAdmin(kafkaProperties.buildAdminProperties());
    }
}
With an annotation that provides the above configuration:
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Import(KafkaContainerConfig.class)
@EnableAutoConfiguration(exclude = TestSupportBinderAutoConfiguration.class)
@TestPropertySource("classpath:/application-test.properties")
@DirtiesContext
public @interface IncludeKafkaTestContainer {
}
And in a test class itself, with multiple such configurations, it looks like:
@IncludeKafkaTestContainer
@IncludePostgresTestContainer
@SpringBootTest(webEnvironment = RANDOM_PORT)
class SomeTest {
...
}
Currently, the consumer in a test method is created this way:
KafkaConsumer<String, String> kafkaConsumer = createKafkaConsumer("topic_name");
ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofSeconds(1));
List<ConsumerRecord<String, String>> topicMsgs = Lists.newArrayList(consumerRecords.iterator());
And:
public static KafkaConsumer<String, String> createKafkaConsumer(String topicName) {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaContainer.getBootstrapServers());
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "testGroup_" + topicName);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
    kafkaConsumer.subscribe(List.of(topicName));
    return kafkaConsumer;
}
I'm using the command below to send records to a secure Kafka cluster:
bin/kafka-console-producer.sh --topic <My Kafka topic name> --bootstrap-server <My custom bootstrap server> --producer.config /Users/DY/SSL/ssl.properties
As you can see, I have passed the ssl.properties file's path to the --producer.config switch.
The ssl.properties file contains the details of how to connect to the secure Kafka cluster; its contents are below:
security.protocol=SSL
ssl.truststore.location=<My custom value>
ssl.truststore.password=<My custom value>
ssl.key.password=<My custom value>
ssl.keystore.location=<My custom value>
ssl.keystore.password=<My custom value>
Now I want to replicate this command with a Java producer. The code that I've written is:
public class MyProducer {

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", <My bootstrap server>);
        properties.put("key.serializer", StringSerializer.class);
        properties.put("value.serializer", StringSerializer.class);
        properties.put("producer.config", "/Users/DY/SSL/ssl.properties");

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
        // ProducerRecord takes the topic name as its first argument
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(
                <My Kafka topic name>, "Hello World from program");
        Future<RecordMetadata> future = kafkaProducer.send(
                producerRecord,
                (metadata, exception) -> {
                    if (exception != null) {
                        System.out.println("something went wrong");
                        exception.printStackTrace();
                    } else {
                        System.out.println("Successfully transmitted");
                    }
                });
        future.get();
        kafkaProducer.close();
    }
}
Passing properties.put("producer.config", "/Users/DY/SSL/ssl.properties"); this way, however, does not seem to work. Could anybody let me know what an appropriate way to do this would be?
Rather than using a file, you can pass the properties individually via the client config constants, as below:
Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// for SSL Encryption
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "<My custom value>");
// for SSL Authentication
properties.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "<My custom value>");
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<My custom value>");
The required imports are:
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
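With those properties in place, the producer is then constructed as usual, for example:
KafkaProducer<String, String> producer = new KafkaProducer<>(properties);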
You have to set each one as a discrete property in the producer Properties.
You could use Properties.load() with a FileInputStream or FileReader to load them from the file into your Properties object.
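For example, a minimal sketch of that approach (the file path is the one from the question; the bootstrap address is a placeholder):
Properties properties = new Properties();
// load the security.protocol, ssl.truststore.* and ssl.keystore.* entries from the file
try (FileInputStream in = new FileInputStream("/Users/DY/SSL/ssl.properties")) {
    properties.load(in);
}
// then add the remaining client settings as discrete properties
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<My bootstrap server>");
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
KafkaProducer<String, String> producer = new KafkaProducer<>(properties);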
I am facing connection issues when running a Kafka test container (confluentinc/cp-kafka:5.4.3) with a Spring Boot app. Wondering if someone has faced this issue as well. After the Kafka container starts, the admin client tries to connect to the broker to fetch metadata but fails to connect.
Error log:
[AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I tried the following workarounds to ensure the KafkaAdminClient uses the right address, but none of them worked:
Used the bootstrap server address
Used KAFKA_ADVERTISED_LISTENERS=BROKER://172.17.0.3:9092. This address was being set by testcontainers_start.sh within the Docker container
Used kafka.getContainerName() to form the address, for example: BROKER://t-adsad:9092
Used kafka.getHost() + ":" + kafka.getMappedPort(9092)
Test class:
@RunWith(SpringRunner.class)
@Import(KafkaTestContainersConfiguration.class)
@SpringBootTest
@DirtiesContext
public class KafkaTestContainersLiveTest {

    @ClassRule
    public static KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.4.3"));

    @BeforeClass
    public static void setupBootstrapServer() {
        String server = "BROKER://" + kafka.getNetworkAliases().get(0) + ":9092";
        System.setProperty("kafka.bootstrap.servers", server);
    }
}
Configuration class:
@Configuration
@EnableKafka
public class KafkaTestContainersConfiguration {

    @Value("${kafka.bootstrap.servers}")
    private String bootstrapServer;

    @Value("${kafka.topic}")
    private String topic;

    public final int NUM_PARTITIONS = 1;
    public final short REPLICATION_FACTOR = 1;

    @Bean
    public AdminClient adminClient() {
        return KafkaAdminClient.create(adminClientConfigs());
    }

    public Map<String, Object> adminClientConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
        return props;
    }
}
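Not part of the original post, but a commonly suggested variant of the setup method reads the host-mapped address from the already-started container instead of building it by hand (the @ClassRule starts the container before @BeforeClass runs):
@BeforeClass
public static void setupBootstrapServer() {
    // getBootstrapServers() returns the host-mapped address, e.g. PLAINTEXT://localhost:32768,
    // rather than the in-container listener on port 9092
    System.setProperty("kafka.bootstrap.servers", kafka.getBootstrapServers());
}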
I am writing a Kafka broker and consumer to catch messages from the application. When trying to get messages from the consumer, an error occurs:
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
at org.springframework.kafka.test.utils.KafkaTestUtils.getRecords(KafkaTestUtils.java:303)
at org.springframework.kafka.test.utils.KafkaTestUtils.getRecords(KafkaTestUtils.java:280)
On the application side (Producer), there is also a connection error
2020-03-25 12:29:33.689 WARN 25786 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1, transactionalId=tx0] Connection to node -1 (<here broker hostname>:9092) could not be established. Broker may not be available.
My project has the following dependencies:
compile "org.springframework.kafka:spring-kafka-test:2.4.4.RELEASE"
compile "org.springframework.kafka:spring-kafka:2.4.4.RELEASE"
Code of my Kafka broker:
public class KafkaServer {

    private static final int BROKERPORT = 9092;
    private static final String BROKERHOST = "localhost";
    public static final String TOPIC1 = "fss-fsstransdata";
    public static final String TOPIC2 = "fss-fsstransscores";
    public static final String TOPIC3 = "fss-fsstranstimings";
    public static final String TOPIC4 = "fss-fssdevicedata";

    @Getter
    private Consumer<String, String> consumer;

    private EmbeddedKafkaBroker embeddedKafkaBroker;

    public void run() {
        String[] topics = {TOPIC1, TOPIC2, TOPIC3, TOPIC4};
        // kafkaPorts(...) takes int ports, so BROKERPORT is declared as int above
        this.embeddedKafkaBroker = new EmbeddedKafkaBroker(1, false, 1, topics)
                .kafkaPorts(BROKERPORT);
        Map<String, Object> configs = new HashMap<>(
                KafkaTestUtils.consumerProps("consumer", "false", this.embeddedKafkaBroker));
        this.consumer = new DefaultKafkaConsumerFactory<>(
                configs, new StringDeserializer(), new StringDeserializer()).createConsumer();
        this.consumer.subscribe(Arrays.asList(topics));
    }
}
Please help me deal with this situation. I am not very familiar with Kafka's architecture or how it can be implemented with Spring.
The EmbeddedKafkaBroker is designed to be used from a Spring application context, by a JUnit4 @Rule or @ClassRule, or by the JUnit5 condition.
To use it outside those environments, you must call afterPropertiesSet() to initialize it and destroy() to shut it down.
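A minimal sketch of that standalone usage (topic name is illustrative):
EmbeddedKafkaBroker broker = new EmbeddedKafkaBroker(1, true, 1, "my-topic")
        .kafkaPorts(9092);
broker.afterPropertiesSet();   // starts ZooKeeper and the broker
// ... create producers/consumers against broker.getBrokersAsString() ...
broker.destroy();              // shuts everything down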
If you are using Spring, then you need to annotate your test class with @EmbeddedKafka and then use @Autowired on an EmbeddedKafkaBroker field.
Example embedded Kafka annotation configuration:
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        brokerProperties = {
                // place your properties here
        })
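On the injection side, a sketch of what that looks like in a test class (names are illustrative):
@EmbeddedKafka(partitions = 1)
@SpringBootTest
class MyKafkaTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Test
    void consumesFromEmbeddedBroker() {
        Map<String, Object> configs = new HashMap<>(
                KafkaTestUtils.consumerProps("test-group", "false", embeddedKafkaBroker));
        // build a consumer from configs, as in the question
    }
}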
What I would do is create a Spring bean, say KafkaServerConfig, and place all the configuration and bean-construction logic inside it.
PS: it should be noted that EmbeddedKafkaBroker is intended for unit tests.
I am new to Spark and do not have much of an idea about it. I am working on an application in which data traverses different Kafka topics and Spark Streaming reads the data from these topics. It's a Spring Boot project, and I have 3 Spark consumer classes in it. The job of these Spark Streaming classes is to consume data from a Kafka topic and send it to another topic. The code of a Spark Streaming class is below:
@Service
public class EnrichEventSparkConsumer {

    Collection<String> topics = Arrays.asList("eventTopic");

    public void startEnrichEventConsumer(JavaStreamingContext javaStreamingContext) {
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", true);

        JavaInputDStream<ConsumerRecord<String, String>> enrichEventRDD =
                KafkaUtils.createDirectStream(javaStreamingContext,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        JavaDStream<String> enrichEventDStream = enrichEventRDD.map((x) -> x.value());
        JavaDStream<EnrichEventDataModel> enrichDataModelDStream =
                enrichEventDStream.map(convertIntoEnrichModel);

        enrichDataModelDStream.foreachRDD(rdd1 -> {
            saveDataToElasticSearch(rdd1.collect());
        });

        enrichDataModelDStream.foreachRDD(enrichDataModelRdd -> {
            if (enrichDataModelRdd.count() > 0) {
                if (executor != null) {
                    executor.executePolicy(enrichDataModelRdd.collect());
                }
            }
        });
    }

    static Function<String, EnrichEventDataModel> convertIntoEnrichModel =
            new Function<String, EnrichEventDataModel>() {
                @Override
                public EnrichEventDataModel call(String record) throws Exception {
                    ObjectMapper mapper = new ObjectMapper();
                    return mapper.readValue(record, EnrichEventDataModel.class);
                }
            };

    private void saveDataToElasticSearch(List<EnrichEventDataModel> baseDataModelList) {
        for (EnrichEventDataModel baseDataModel : baseDataModelList) {
            dataModelServiceImpl.save(baseDataModel);
        }
    }
}
I am calling the method startEnrichEventConsumer() using a CommandLineRunner:
public class EnrichEventSparkConsumerRunner implements CommandLineRunner {

    @Autowired
    JavaStreamingContext javaStreamingContext;

    @Autowired
    EnrichEventSparkConsumer enrichEventSparkConsumer;

    @Override
    public void run(String... args) throws Exception {
        // start Raw Event Spark Consumer.
        JobContextImpl jobContext = new JobContextImpl(javaStreamingContext);
        // start Enrich Event Spark Consumer.
        enrichEventSparkConsumer.startEnrichEventConsumer(jobContext.streamingctx());
    }
}
Now I want to submit these three Spark Streaming classes to the cluster. I read somewhere that I have to create a JAR file first, and after that I can use the spark-submit command, but I have some questions in mind:
Should I create a different project with these 3 Spark Streaming classes?
As of now I am using a CommandLineRunner to initiate Spark Streaming; when submitting to the cluster, should I create a main() method in these classes?
Please tell me how to do it. Thanks in advance.
No need for a different project.
You should create an entry point (a main method) that is responsible for creating the JavaStreamingContext.
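A minimal sketch of such an entry point (class name and batch interval are illustrative):
public final class StreamingJobMain {

    public static void main(String[] args) throws Exception {
        // the master URL is supplied by spark-submit, so it is not set here
        SparkConf conf = new SparkConf().setAppName("enrich-event-job");
        JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(5));

        // wire up the consumers here, e.g.:
        // new EnrichEventSparkConsumer().startEnrichEventConsumer(jsc);

        jsc.start();
        jsc.awaitTermination();
    }
}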
Create your JAR with the dependencies bundled into one single file, and don't forget to use the provided scope for all your Spark dependencies, since you will be using the cluster's libraries.
Execute the assembled Spark application with the spark-submit command-line tool as follows:
./bin/spark-submit \
--class <main-class> \
--master <master-url> \
--deploy-mode <deploy-mode> \
--conf <key>=<value> \
... # other options
<application-jar> \
[application-arguments]
For a local submit:
bin/spark-submit \
--class package.Main \
--master local[2] \
path/to/jar argument1 argument2
I am pretty new to Kafka. I have my ZooKeeper server running on port 2181 and my Kafka server on port 9092. I have written a simple producer in Java.
But whenever I run the program, it shows me the following error:
USAGE: java [options] KafkaServer server.properties [--override property=value]*
Option Description
------ -----------
--override Optional property that should override values set in server.properties file
I am using the NetBeans IDE with JDK 8 and have included all the Kafka JAR files in the library. I believe there's no error in the library files, because the code builds correctly but doesn't run.
Here is the simple producer code:
package kafka;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

public class Kafka {

    private static Producer<Integer, String> producer;
    private final Properties properties = new Properties();

    public Kafka() {
        properties.put("metadata.broker.list", "localhost:9092");
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");
        producer = new Producer<>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        Kafka k = new Kafka();
        String topic = "test";
        String msg = "hello world";
        KeyedMessage<Integer, String> data = new KeyedMessage<>(topic, msg);
        producer.send(data);
        producer.close();
    }
}
Kindly help :)
It looks like NetBeans is executing the wrong class: not your kafka.Kafka class, but KafkaServer (this is the main class of Kafka itself). Please configure NetBeans to execute the correct class.
I would recommend starting with the existing producer sample from the Confluent examples and reusing that Maven project...
I think your producer configuration is wrong. Here is an example from the official Kafka documentation:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Just try smaller values for batch.size and buffer.memory.
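For completeness, a minimal sketch of sending one record once those properties are in place (topic name as in the question; the enclosing method must declare throws Exception):
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // send() is asynchronous; get() blocks until the broker acknowledges the record
    producer.send(new ProducerRecord<>("test", "hello world")).get();
}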