I have two Kafka clusters whose broker IPs I fetch dynamically from a database. I am using @KafkaListener to create listeners. Now I want to create multiple Kafka listeners at runtime, depending on the bootstrap-server attribute (comma-separated values), each one listening to a cluster. Can you please suggest how I can achieve this?
Spring Boot: 2.1.3.RELEASE
Kafka: 2.0.1
Java: 8
Your requirements are not clear, but assuming you want the same listener configuration to listen to multiple clusters, here is one solution: make the listener bean a prototype and mutate the container factory for each instance...
@SpringBootApplication
@EnableConfigurationProperties(ClusterProperties.class)
public class So55311070Application {

    public static void main(String[] args) {
        SpringApplication.run(So55311070Application.class, args);
    }

    private final Map<String, MyListener> listeners = new HashMap<>();

    @Bean
    public ApplicationRunner runner(ClusterProperties props, ConsumerFactory<Object, Object> cf,
            ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
            ApplicationContext context, KafkaListenerEndpointRegistry registry) {

        return args -> {
            AtomicInteger instance = new AtomicInteger();
            Arrays.stream(props.getClusters()).forEach(cluster -> {
                Map<String, Object> consumerProps = new HashMap<>(cf.getConfigurationProperties());
                consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster);
                String groupId = "group" + instance.getAndIncrement();
                consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
                containerFactory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
                this.listeners.put(groupId, context.getBean("listener", MyListener.class));
            });
            registry.getListenerContainers().forEach(c -> System.out.println(c.getGroupId())); // 2.2.5 snapshot only
        };
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MyListener listener() {
        return new MyListener();
    }

}
class MyListener {

    @KafkaListener(topics = "so55311070")
    public void listen(String in) {
        System.out.println(in);
    }

}
@ConfigurationProperties(prefix = "kafka")
public class ClusterProperties {

    private String[] clusters;

    public String[] getClusters() {
        return this.clusters;
    }

    public void setClusters(String[] clusters) {
        this.clusters = clusters;
    }

}
kafka.clusters=localhost:9092,localhost:9093
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
Result
group0
group1
...
2019-03-23 11:43:25.993  INFO 74869 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [so55311070-0]
2019-03-23 11:43:25.994  INFO 74869 --- [ntainer#1-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [so55311070-0]
EDIT
Added code to retry starting failed containers.
It turns out we don't need a local map of listeners; the registry has a map of all containers, including the ones that failed to start.
@SpringBootApplication
@EnableConfigurationProperties(ClusterProperties.class)
public class So55311070Application {

    public static void main(String[] args) {
        SpringApplication.run(So55311070Application.class, args);
    }

    private boolean atLeastOneFailure;

    private ScheduledFuture<?> restartTask;

    @Bean
    public ApplicationRunner runner(ClusterProperties props, ConsumerFactory<Object, Object> cf,
            ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
            ApplicationContext context, KafkaListenerEndpointRegistry registry, TaskScheduler scheduler) {

        return args -> {
            AtomicInteger instance = new AtomicInteger();
            Arrays.stream(props.getClusters()).forEach(cluster -> {
                Map<String, Object> consumerProps = new HashMap<>(cf.getConfigurationProperties());
                consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster);
                String groupId = "group" + instance.getAndIncrement();
                consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
                attemptStart(containerFactory, context, consumerProps, groupId);
            });
            registry.getListenerContainers().forEach(c -> System.out.println(c.getGroupId())); // 2.2.5 snapshot only
            if (this.atLeastOneFailure) {
                Runnable rescheduleTask = () -> {
                    this.atLeastOneFailure = false; // reset once per pass, not per container
                    registry.getListenerContainers().forEach(c -> {
                        if (!c.isRunning()) {
                            System.out.println("Attempting restart of " + c.getGroupId());
                            try {
                                c.start();
                            }
                            catch (Exception e) {
                                System.out.println("Failed to start " + e.getMessage());
                                this.atLeastOneFailure = true;
                            }
                        }
                    });
                    if (!this.atLeastOneFailure) {
                        this.restartTask.cancel(false);
                    }
                };
                this.restartTask = scheduler.scheduleAtFixedRate(rescheduleTask,
                        Instant.now().plusSeconds(60),
                        Duration.ofSeconds(60));
            }
        };
    }

    private void attemptStart(ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
            ApplicationContext context, Map<String, Object> consumerProps, String groupId) {

        containerFactory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
        try {
            context.getBean("listener", MyListener.class);
        }
        catch (BeanCreationException e) {
            this.atLeastOneFailure = true;
        }
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MyListener listener() {
        return new MyListener();
    }

    @Bean
    public TaskScheduler scheduler() {
        return new ThreadPoolTaskScheduler();
    }

}
class MyListener {

    @KafkaListener(topics = "so55311070")
    public void listen(String in) {
        System.out.println(in);
    }

}
Related
I have a legacy Kafka topic to which different types of messages are sent; these messages are written with a custom header whose key discriminates the record.
In a given application I have multiple methods that I would like to annotate with a custom annotation like @CustomKafkaListener(discriminator="xxx"), which would itself be meta-annotated with @KafkaListener.
How can I filter the messages so that, if two messages are sent to the central topic, the method annotated with discriminator "xxx" reads only those messages, whereas the method annotated with discriminator "yyy" reads only the "yyy" ones?
For example
@CustomKafkaListener(discriminator = "com.mypackage.subpackage", topic = "central-topic")
public void consumerMessagesXXX(ConsumerRecord r) {
    // reads only XXX messages, skips all others
}

@CustomKafkaListener(discriminator = "com.mypackage", topic = "central-topic")
public void consumerMessagesYYY(ConsumerRecord r) {
    // reads only YYY messages, skips all others
}
I would like the filter to be able to read the discriminator property of the target listener and decide dynamically whether a message should be processed by that listener, either by reflection or by some metadata provided to the filter, for example:
public boolean filter(ConsumerRecord consumerRecord, Consumer<Long, Event> consumer) {
    // retrieve discriminator information, either by reflection or from metadata
    var discriminatorPattern = consumer.getMetadataXXX();
    return discriminatorPattern.matches(consumerRecord.lastHeader("discriminator").value());
}
Creating custom annotations is a pretty advanced topic; you would need to subclass the annotation bean post processor and come up with some mechanism to customize the endpoint by adding the filter strategy bean.
Feel free to open a new feature request on GitHub: https://github.com/spring-projects/spring-kafka/issues
We could add a new property to @KafkaListener to pass the bean name of a RecordFilterStrategy bean.
EDIT
I see you opened an issue; thanks.
Here is a workaround to add the filters later...
@SpringBootApplication
public class So71237300Application {

    public static void main(String[] args) {
        SpringApplication.run(So71237300Application.class, args);
    }

    @KafkaListener(id = "xxx", topics = "so71237300", autoStartup = "false")
    void listen1(String in) {
        System.out.println("1:" + in);
    }

    @KafkaListener(id = "yyy", topics = "so71237300", autoStartup = "false")
    void listen2(String in) {
        System.out.println("2:" + in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so71237300").partitions(1).replicas(1).build();
    }

    @Bean
    RecordFilterStrategy<String, String> xxx() {
        return rec -> {
            Header which = rec.headers().lastHeader("which");
            return which == null || !Arrays.equals(which.value(), "xxx".getBytes());
        };
    }

    @Bean
    RecordFilterStrategy<String, String> yyy() {
        return rec -> {
            Header which = rec.headers().lastHeader("which");
            return which == null || !Arrays.equals(which.value(), "yyy".getBytes());
        };
    }

    @Bean
    ApplicationRunner runner(RecordFilterStrategy<String, String> xxx, RecordFilterStrategy<String, String> yyy,
            KafkaListenerEndpointRegistry registry, KafkaTemplate<String, String> template) {

        return args -> {
            ProducerRecord<String, String> record = new ProducerRecord<>("so71237300", "test.to.xxx");
            record.headers().add("which", "xxx".getBytes());
            template.send(record);
            record = new ProducerRecord<>("so71237300", "test.to.yyy");
            record.headers().add("which", "yyy".getBytes());
            template.send(record);
            updateListener("xxx", xxx, registry);
            updateListener("yyy", yyy, registry);
            registry.start();
        };
    }

    private void updateListener(String id, RecordFilterStrategy<String, String> filter,
            KafkaListenerEndpointRegistry registry) {

        MessageListener listener = (MessageListener) registry.getListenerContainer(id).getContainerProperties()
                .getMessageListener();
        registry.getListenerContainer(id).getContainerProperties()
                .setMessageListener(new FilteringMessageListenerAdapter<>(listener, filter));
    }

}
1:test.to.xxx
2:test.to.yyy
EDIT2
This version uses a single filter and uses the consumer's group.id as the discriminator:
@SpringBootApplication
public class So71237300Application {

    public static void main(String[] args) {
        SpringApplication.run(So71237300Application.class, args);
    }

    @KafkaListener(id = "xxx", topics = "so71237300")
    void listen1(String in) {
        System.out.println("1:" + in);
    }

    @KafkaListener(id = "yyy", topics = "so71237300")
    void listen2(String in) {
        System.out.println("2:" + in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so71237300").partitions(1).replicas(1).build();
    }

    @Bean
    RecordFilterStrategy<String, String> discriminator(
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {

        RecordFilterStrategy<String, String> filter = rec -> {
            Header which = rec.headers().lastHeader("which");
            return which == null || !Arrays.equals(which.value(), KafkaUtils.getConsumerGroupId().getBytes());
        };
        factory.setRecordFilterStrategy(filter);
        return filter;
    }

    @Bean
    ApplicationRunner runner(RecordFilterStrategy<String, String> discriminator,
            KafkaListenerEndpointRegistry registry, KafkaTemplate<String, String> template) {

        return args -> {
            ProducerRecord<String, String> record = new ProducerRecord<>("so71237300", "test.to.xxx");
            record.headers().add("which", "xxx".getBytes());
            template.send(record);
            record = new ProducerRecord<>("so71237300", "test.to.yyy");
            record.headers().add("which", "yyy".getBytes());
            template.send(record);
        };
    }

}
1:test.to.xxx
2:test.to.yyy
I'm really struggling to write a test that checks whether my Kafka consumer is correctly called when messages are sent to its designated topic.
My consumer:
@Service
@Slf4j
@AllArgsConstructor(onConstructor = @__(@Autowired))
public class ProcessingConsumer {

    private AppService appService;

    @KafkaListener(
            topics = "${topic}",
            containerFactory = "processingConsumerContainerFactory")
    public void listen(ConsumerRecord<Key, Value> message, Acknowledgment ack) {
        try {
            appService.processMessage(message);
            ack.acknowledge();
        } catch (Throwable t) {
            log.error("error while processing message!", t);
        }
    }

}
My consumer config:
@EnableKafka
@Configuration
public class ProcessingConsumerConfig {

    @Value("${spring.kafka.schema-registry-url}")
    private String schemaRegistryUrl;

    private KafkaProperties props;

    public ProcessingConsumerConfig(KafkaProperties kafkaProperties) {
        this.props = kafkaProperties;
    }

    public Map<String, Object> deserializerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
        return props;
    }

    private KafkaAvroDeserializer getKafkaAvroDeserializer(Boolean isKey) {
        KafkaAvroDeserializer kafkaAvroDeserializer = new KafkaAvroDeserializer();
        kafkaAvroDeserializer.configure(deserializerConfigs(), isKey);
        return kafkaAvroDeserializer;
    }

    private DefaultKafkaConsumerFactory consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(
                props.buildConsumerProperties(),
                getKafkaAvroDeserializer(true),
                getKafkaAvroDeserializer(false));
    }

    @Bean(name = "processingConsumerContainerFactory")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Key, Value>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Key, Value> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }

}
Finally, my (wannabe) test:
@DirtiesContext
public class ProcessingConsumerTest extends BaseIntegrationTest {

    @Autowired
    private ProcessingProducerFixture processingProducer;

    @Autowired
    private ProcessingConsumer processingConsumer;

    @org.springframework.beans.factory.annotation.Value("${topic}")
    String topic;

    @Test
    public void consumer_shouldConsumeMessages_whenMessagesAreSent() throws Exception {
        Thread.sleep(1000);
        ProducerRecord<Key, Value> message = new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d"));
        processingProducer.send(message);
    }

}
And that's about all I have so far.
I've tried checking whether this approach reaches the consumer, both manually with a debugger and by putting simple print statements there, but execution simply doesn't seem to get there. Also, even if it were called correctly from my tests, I have no idea how to actually assert it in the test itself.
Inject a mock AppService into the listener and verify its processMessage() was called.
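For example, something along these lines (a minimal sketch, assuming Mockito is on the test classpath; @MockBean replaces the real AppService bean in the application context, and the 10-second timeout is an arbitrary choice, not part of your code):

public class ProcessingConsumerMockTest extends BaseIntegrationTest {

    // Mockito mock injected in place of the real AppService bean
    @MockBean
    private AppService appService;

    @Autowired
    private ProcessingProducerFixture processingProducer;

    @org.springframework.beans.factory.annotation.Value("${topic}")
    String topic;

    @Test
    public void consumer_shouldInvokeAppService_whenMessageIsSent() {
        ProducerRecord<Key, Value> message = new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d"));
        processingProducer.send(message);
        // static imports: org.mockito.Mockito.verify, org.mockito.Mockito.timeout, org.mockito.ArgumentMatchers.any
        // verify(...) with timeout(...) polls until the listener thread calls the mock, or fails after 10s;
        // this also handles the asynchronous hand-off from the container thread
        verify(appService, timeout(10_000)).processMessage(any(ConsumerRecord.class));
    }

}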
I am currently working on a Kafka module where I am using the spring-kafka abstraction for Kafka communication. I am able to integrate the producer and consumer from a real implementation standpoint; however, I am not sure how to test (specifically, integration test) the business logic that surrounds the consumer with @KafkaListener. I tried to follow the spring-kafka documentation and various blogs on the topic, but none of them answer my intended question.
Spring Boot test class
// imports omitted for brevity
@RunWith(SpringRunner.class)
@SpringBootTest(classes = PaymentAccountUpdaterApplication.class,
        webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class CardUpdaterMessagingIntegrationTest {

    private final static String cardUpdateTopic = "TP.PRF.CARDEVENTS";

    @Autowired
    private ObjectMapper objectMapper;

    @ClassRule
    public static KafkaEmbedded kafkaEmbedded =
            new KafkaEmbedded(1, false, cardUpdateTopic);

    @Test
    public void sampleTest() throws Exception {
        Map<String, Object> consumerConfig =
                KafkaTestUtils.consumerProps("test", "false", kafkaEmbedded);
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        ConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerConfig);

        ContainerProperties containerProperties = new ContainerProperties(cardUpdateTopic);
        containerProperties.setMessageListener(new SafeStringJsonMessageConverter());
        KafkaMessageListenerContainer<String, String>
                container = new KafkaMessageListenerContainer<>(cf, containerProperties);

        BlockingQueue<ConsumerRecord<String, String>> records = new LinkedBlockingQueue<>();
        container.setupMessageListener((MessageListener<String, String>) data -> {
            System.out.println("Added to Queue: " + data);
            records.add(data);
        });
        container.setBeanName("templateTests");
        container.start();
        ContainerTestUtils.waitForAssignment(container, kafkaEmbedded.getPartitionsPerTopic());

        Map<String, Object> producerConfig = KafkaTestUtils.senderProps(kafkaEmbedded.getBrokersAsString());
        producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        ProducerFactory<String, Object> pf =
                new DefaultKafkaProducerFactory<>(producerConfig);
        KafkaTemplate<String, Object> kafkaTemplate = new KafkaTemplate<>(pf);

        String payload = objectMapper.writeValueAsString(accountWrapper());
        kafkaTemplate.send(cardUpdateTopic, 0, payload);

        ConsumerRecord<String, String> received = records.poll(10, TimeUnit.SECONDS);
        assertThat(received).has(partition(0));
    }

    @After
    public void after() {
        kafkaEmbedded.after();
    }

    private AccountWrapper accountWrapper() {
        return AccountWrapper.builder()
                .eventSource("PROFILE")
                .eventName("INITIAL_LOAD_CARD")
                .eventTime(LocalDateTime.now().toString())
                .eventID("8730c547-02bd-45c0-857b-d90f859e886c")
                .details(AccountDetail.builder()
                        .customerId("idArZ_K2IgE86DcPhv-uZw")
                        .vaultId("912A60928AD04F69F3877D5B422327EE")
                        .expiryDate("122019")
                        .build())
                .build();
    }

}
Listener Class
@Service
public class ConsumerMessageListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(ConsumerMessageListener.class);

    private ConsumerMessageProcessorService consumerMessageProcessorService;

    public ConsumerMessageListener(ConsumerMessageProcessorService consumerMessageProcessorService) {
        this.consumerMessageProcessorService = consumerMessageProcessorService;
    }

    @KafkaListener(id = "cardUpdateEventListener",
            topics = "${kafka.consumer.cardupdates.topic}",
            containerFactory = "kafkaJsonListenerContainerFactory")
    public void processIncomingMessage(Payload<AccountWrapper, Object> payloadContainer,
            Acknowledgment acknowledgment,
            @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
            @Header(KafkaHeaders.OFFSET) String offset) {
        try {
            // business logic to process the message
            consumerMessageProcessorService.processIncomingMessage(payloadContainer);
        } catch (Exception e) {
            LOGGER.error("Unhandled exception in card event message consumer. Discarding offset commit." +
                    " message:: {}, details:: {}", e.getMessage(), messageMetadataInfo);
            throw e;
        }
        acknowledgment.acknowledge();
    }

}
My question is: in the test class I am asserting the partition, payload, etc., polled from the BlockingQueue; but how can I verify that the business logic in my class annotated with @KafkaListener is executed properly and routes the messages to different topics based on error handling and other business scenarios? In some examples I saw a CountDownLatch used for assertions, which I don't want to put in my business logic in production-grade code. Also, the message processor is async, so I'm not sure how to assert the execution.
Any help appreciated.
"...is getting executed properly and routing the messages to different topic based on error handling and other business scenarios."
An integration test can consume from that "different" topic to assert that the listener processed it as expected.
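For example, a minimal sketch under the assumption that your listener forwards processed records to an output topic (the name "card-events-out" is hypothetical; substitute whatever topic your routing logic targets). It reuses the embedded broker and kafkaTemplate from your test above, plus KafkaTestUtils.getSingleRecord() from spring-kafka-test:

// subscribe a verification consumer to the (hypothetical) output topic
Map<String, Object> verifierConfig = KafkaTestUtils.consumerProps("verifier", "false", kafkaEmbedded);
verifierConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
Consumer<String, String> verifier = new DefaultKafkaConsumerFactory<>(verifierConfig,
        new StringDeserializer(), new StringDeserializer()).createConsumer();
verifier.subscribe(Collections.singleton("card-events-out"));

// send the input payload, then assert on what the listener forwarded
kafkaTemplate.send(cardUpdateTopic, 0, payload);
ConsumerRecord<String, String> forwarded = KafkaTestUtils.getSingleRecord(verifier, "card-events-out");
assertThat(forwarded.value()).contains("8730c547-02bd-45c0-857b-d90f859e886c");
verifier.close();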
You could also add a BeanPostProcessor to your test case and wrap the ConsumerMessageListener bean in a proxy to verify the input arguments are as expected.
EDIT
Here is an example of wrapping the listener in a proxy...
@SpringBootApplication
public class So53678801Application {

    public static void main(String[] args) {
        SpringApplication.run(So53678801Application.class, args);
    }

    @Bean
    public MessageConverter converter() {
        return new StringJsonMessageConverter();
    }

    public static class Foo {

        private String bar;

        public Foo() {
            super();
        }

        public Foo(String bar) {
            this.bar = bar;
        }

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }

    }

}

@Component
class Listener {

    @KafkaListener(id = "so53678801", topics = "so53678801")
    public void processIncomingMessage(Foo payload,
            Acknowledgment acknowledgment,
            @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
            @Header(KafkaHeaders.OFFSET) String offset) {
        System.out.println(payload);
        // ...
        acknowledgment.acknowledge();
    }

}
and
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.ack-mode=manual
and
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { So53678801Application.class,
        So53678801ApplicationTests.TestConfig.class })
public class So53678801ApplicationTests {

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, false, "so53678801");

    @BeforeClass
    public static void setup() {
        System.setProperty("spring.kafka.bootstrap-servers",
                embeddedKafka.getEmbeddedKafka().getBrokersAsString());
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    @Autowired
    private ListenerWrapper wrapper;

    @Test
    public void test() throws Exception {
        this.template.send("so53678801", "{\"bar\":\"baz\"}");
        assertThat(this.wrapper.latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(this.wrapper.argsReceived[0]).isInstanceOf(Foo.class);
        assertThat(((Foo) this.wrapper.argsReceived[0]).getBar()).isEqualTo("baz");
        assertThat(this.wrapper.ackCalled).isTrue();
    }

    @Configuration
    public static class TestConfig {

        @Bean
        public static ListenerWrapper bpp() { // BPPs have to be static
            return new ListenerWrapper();
        }

    }

    public static class ListenerWrapper implements BeanPostProcessor, Ordered {

        private final CountDownLatch latch = new CountDownLatch(1);

        private Object[] argsReceived;

        private boolean ackCalled;

        @Override
        public int getOrder() {
            return Ordered.HIGHEST_PRECEDENCE;
        }

        @Override
        public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
            if (bean instanceof Listener) {
                ProxyFactory pf = new ProxyFactory(bean);
                pf.setProxyTargetClass(true); // unless the listener is on an interface
                pf.addAdvice(interceptor());
                return pf.getProxy();
            }
            return bean;
        }

        private MethodInterceptor interceptor() {
            return invocation -> {
                if (invocation.getMethod().getName().equals("processIncomingMessage")) {
                    Object[] args = invocation.getArguments();
                    this.argsReceived = Arrays.copyOf(args, args.length);
                    Acknowledgment ack = (Acknowledgment) args[1];
                    args[1] = (Acknowledgment) () -> {
                        this.ackCalled = true;
                        ack.acknowledge();
                    };
                    try {
                        return invocation.proceed();
                    }
                    finally {
                        this.latch.countDown();
                    }
                }
                else {
                    return invocation.proceed();
                }
            };
        }

    }

}
Can I please check with the community what the best way is to listen to multiple topics, with each topic containing a message of a different class?
I've been playing around with Spring Kafka for the past couple of days. My thought process so far:
You need to pass your deserializer into DefaultKafkaConsumerFactory when initializing a KafkaListenerContainerFactory. This seems to indicate that if I need multiple containers, each deserializing a message of a different type, I will not be able to use the @EnableKafka and @KafkaListener annotations.
This leads me to think that the only way to do so would be to instantiate multiple KafkaMessageListenerContainers.
And given that KafkaMessageListenerContainer is single-threaded and I need to listen to multiple topics at the same time, I really should be using multiple ConcurrentMessageListenerContainers.
Would I be on the right track here? Is there a better way to do this?
Thanks!
Here is a very simple example.
// -----------------------------------------------
// Sender
// -----------------------------------------------
@Configuration
public class SenderConfig {

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // ...
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, Class1> producerFactory1() {
        return new DefaultKafkaProducerFactory<String, Class1>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, Class1> kafkaTemplate1() {
        return new KafkaTemplate<>(producerFactory1());
    }

    @Bean
    public Sender1 sender1() {
        return new Sender1();
    }

    // -------- send the second class --------

    @Bean
    public ProducerFactory<String, Class2> producerFactory2() {
        return new DefaultKafkaProducerFactory<String, Class2>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, Class2> kafkaTemplate2() {
        return new KafkaTemplate<>(producerFactory2());
    }

    @Bean
    public Sender2 sender2() {
        return new Sender2();
    }

}

public class Sender1 {

    @Autowired
    private KafkaTemplate<String, Class1> kafkaTemplate1;

    public void send(String topic, Class1 c1) {
        kafkaTemplate1.send(topic, c1);
    }

}

public class Sender2 {

    @Autowired
    private KafkaTemplate<String, Class2> kafkaTemplate2;

    public void send(String topic, Class2 c2) {
        kafkaTemplate2.send(topic, c2);
    }

}
// -----------------------------------------------
// Receiver
// -----------------------------------------------
@Configuration
@EnableKafka
public class ReceiverConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // ...
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return props;
    }

    @Bean
    public ConsumerFactory<String, Class1> consumerFactory1() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<>(Class1.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Class1> kafkaListenerContainerFactory1() {
        ConcurrentKafkaListenerContainerFactory<String, Class1> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory1());
        return factory;
    }

    @Bean
    public Receiver1 receiver1() {
        return new Receiver1();
    }

    // -------- add the second listener --------

    @Bean
    public ConsumerFactory<String, Class2> consumerFactory2() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<>(Class2.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Class2> kafkaListenerContainerFactory2() {
        ConcurrentKafkaListenerContainerFactory<String, Class2> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory2());
        return factory;
    }

    @Bean
    public Receiver2 receiver2() {
        return new Receiver2();
    }

}

public class Receiver1 {

    @KafkaListener(id = "listener1", topics = "topic1", containerFactory = "kafkaListenerContainerFactory1")
    public void receive(Class1 c1) {
        LOGGER.info("Received c1");
    }

}

public class Receiver2 {

    @KafkaListener(id = "listener2", topics = "topic2", containerFactory = "kafkaListenerContainerFactory2")
    public void receive(Class2 c2) {
        LOGGER.info("Received c2");
    }

}
You can use the annotations; you would just need to use a different listener container factory for each.
The framework will create a listener container for each annotation.
You can also listen to multiple topics on a single-threaded container but they would be processed, er, on a single thread.
Take a look at the code from my SpringOne Platform talk last year - you might want to look at app6, which shows how to use a MessageConverter instead of a deserializer, which might help simplify your configuration.
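To illustrate the converter approach in outline (a minimal sketch, not the talk's actual code; it assumes a spring-kafka version where the JSON converter can infer the target type from the listener method's parameter, and the topic names and Class1/Class2 payload types are placeholders): both listeners share one String-based container factory, and the converter produces the declared payload type per method.

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    // ... bootstrap servers, group id, etc.
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    // one converter handles JSON for every listener; no per-type deserializers or factories needed
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}

@KafkaListener(id = "listener1", topics = "topic1")
public void receive1(Class1 c1) {
    System.out.println("Received c1: " + c1);
}

@KafkaListener(id = "listener2", topics = "topic2")
public void receive2(Class2 c2) {
    System.out.println("Received c2: " + c2);
}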
I would like to suggest the code below, which can be applied to your scenario:
@Configuration
@EnableKafka
public class KafkaConsumerConfig { // renamed so it doesn't clash with Kafka's ConsumerConfig used below

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${kafka.group-id}")
    private String groupId;

    /**
     * Configuration of consumer properties.
     *
     * @return
     */
    //@Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        return props;
    }

    //@Bean
    public ConsumerFactory<String, ClassA> consumerFactory1() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new ClassA());
    }

    /**
     * Kafka listener container factory.
     *
     * @return
     */
    @Bean("kafkaListenerContainerFactory1")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, ClassA>> kafkaListenerContainerFactory1() {
        ConcurrentKafkaListenerContainerFactory<String, ClassA> factory;
        factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory1());
        return factory;
    }

    //@Bean
    public ConsumerFactory<String, ClassB> consumerFactory2() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new ClassB());
    }

    /**
     * Kafka listener container factory.
     *
     * @return
     */
    @Bean("kafkaListenerContainerFactory2")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, ClassB>> kafkaListenerContainerFactory2() {
        ConcurrentKafkaListenerContainerFactory<String, ClassB> factory;
        factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory2());
        return factory;
    }

    @Bean
    public ReceiverClass receiver() {
        return new ReceiverClass();
    }

    class ReceiverClass {

        @KafkaListener(topics = "topic1", group = "group-id-test",
                containerFactory = "kafkaListenerContainerFactory1")
        public void receiveTopic1(ClassA a) {
            System.out.println("ReceiverClass.receive() ClassA : " + a);
        }

        @KafkaListener(topics = "topic2", group = "group-id-test",
                containerFactory = "kafkaListenerContainerFactory2")
        public void receiveTopic2(ClassB b) {
            System.out.println("ReceiverClass.receive() ClassB : " + b);
        }

    }

    class ClassB implements Deserializer {

        @Override
        public void configure(Map configs, boolean isKey) {
            // TODO Auto-generated method stub
        }

        @Override
        public Object deserialize(String topic, byte[] data) {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public void close() {
            // TODO Auto-generated method stub
        }

    }

    class ClassA implements Deserializer {

        @Override
        public void configure(Map configs, boolean isKey) {
            // TODO Auto-generated method stub
        }

        @Override
        public Object deserialize(String topic, byte[] data) {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public void close() {
            // TODO Auto-generated method stub
        }

    }

}
I would like to create a consumer inside a Spring MVC web app. Basically, I'd like the web app to listen to some topics on Kafka and take some action based on the received messages.
All the examples I've seen so far use either a standalone app with an infinite loop (plain Java) or a unit test (Spring):
@Autowired
private Listener listener;

@Autowired
private KafkaTemplate<Integer, String> template;

@Test
public void testSimple() throws Exception {
    template.send("annotated1", 0, "foo");
    template.flush();
    assertTrue(this.listener.latch1.await(10, TimeUnit.SECONDS));
}

@Configuration
@EnableKafka
public class Config {

    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }

    @Bean
    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

    @Bean
    public KafkaTemplate<Integer, String> kafkaTemplate() {
        return new KafkaTemplate<Integer, String>(producerFactory());
    }

}

public class Listener {

    private final CountDownLatch latch1 = new CountDownLatch(1);

    @KafkaListener(id = "foo", topics = "annotated1")
    public void listen1(String foo) {
        this.latch1.countDown();
    }

}
What would be the best place to create the listener? Should I put it here:
@SpringBootApplication
public class Application {

    @Autowired
    private Listener listener;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}