Spring Kafka multiple topics for one class dynamically - java

I recently wanted to add a new behavior to my project that uses spring-kafka.
The idea is really simple:
App1 creates a new scenario named "SCENARIO_1" and publishes this string to the topic "NEW_SCENARIO"
App1 publishes some messages on the topics "APP2-SCENARIO_1" and "APP3-SCENARIO_1"
App2 (group-id=app2) listens on NEW_SCENARIO and creates a new consumer<Object,String> listening on a new topic "APP2-SCENARIO_1"
App3 (group-id=app3) listens on NEW_SCENARIO and creates a new consumer<Object,String> listening on a new topic "APP3-SCENARIO_1"
The goal is to create new topics and consumers dynamically. I cannot use the Spring Kafka annotations since I need this to be dynamic, so I did this:
@KafkaListener(topics = ScenarioTopics.NEW_SCENARIO)
public void receive(final String topic) {
    logger.info("Get new scenario " + topic + ", creating new consumer");
    TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset(
            "APP2_" + topic, 1, 0L);
    ContainerProperties containerProps = new ContainerProperties(topicPartitionOffset);
    containerProps.setMessageListener((MessageListener<Object, String>) message -> {
        // process my message
    });
    KafkaMessageListenerContainer<Object, String> container =
            new KafkaMessageListenerContainer<>(kafkaPeopleConsumerFactory, containerProps);
    container.start();
}
And this does not work. I'm probably missing something, but I can't figure out what.
Here are some logs telling me that the leader is not available, which is weird since I did receive the new scenario event:
2022-03-14 18:08:26.057 INFO 21892 --- [ntainer#0-0-C-1] o.l.b.v.c.c.i.k.KafkaScenarioListener : Get new scenario W4BdDBEowY, creating new consumer
2022-03-14 18:08:26.061 INFO 21892 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
[...lot of things...]
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.0
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 8cb0a5e9d3441962
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1647277706067
2022-03-14 18:08:26.068 INFO 21892 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Subscribed to partition(s): PEOPLE_W4BdDBEowY-1
2022-03-14 18:08:26.072 INFO 21892 --- [ -C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Seeking to offset 0 for partition PEOPLE_W4BdDBEowY-1
2022-03-14 18:08:26.081 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 2 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
2022-03-14 18:08:26.081 INFO 21892 --- [ -C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Cluster ID: ebyKy-RVSRmUDaaeQqMaQg
2022-03-14 18:18:04.882 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 5314 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
2022-03-14 18:18:04.997 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 5315 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
How do I dynamically create a Kafka consumer on a topic? I think I'm doing it very wrong, but I searched a lot and really didn't find anything.

There are several answers here about dynamically creating containers...
Trigger one Kafka consumer by using values of another consumer In Spring Kafka
Kafka Consumer in spring can I re-assign partitions programmatically?
Create consumer dynamically spring kafka
Dynamically start and off KafkaListener just to load previous messages at the start of a session
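
Given the LEADER_NOT_AVAILABLE warnings, one plausible reading (an assumption, not confirmed in this thread) is that the container is assigned partition 1 of a topic that does not exist yet, or that gets auto-created with a single partition (only partition 0). A minimal sketch that pre-creates the topic with enough partitions inside the receive() method, before building the container, using the plain Kafka AdminClient (org.apache.kafka.clients.admin); the bootstrap address is an assumption:

// Sketch only: pre-create the topic so partition 1 exists before the container seeks it.
try (AdminClient admin = AdminClient.create(
        Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
    NewTopic newTopic = new NewTopic("APP2_" + topic, 2, (short) 1); // partitions 0 and 1
    admin.createTopics(List.of(newTopic)).all().get();
} catch (ExecutionException e) {
    // ignore "already exists" so the listener is idempotent across scenarios
    if (!(e.getCause() instanceof TopicExistsException)) {
        throw e;
    }
}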

Related

Spring boot. Kafka. Disconnect from Node

I'm getting disconnected from a node when trying to listen to the subscribed topic. I do not need to produce messages; that is already implemented. A VPN is used to connect to Kafka.
I use Spring Boot 2.7.0 and Java 17.
Configuration:
pom.xml:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.9.2</version>
</dependency>
Configuration class:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.36.12.5:2181");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
Listener:
@Component
public class KafkaListenersService {
    @KafkaListener(topics = "ift.notification.clientId.request", groupId = "group-id")
    public void listen(String message) {
        System.out.println("Received Message in group - group-id: " + message);
    }
}
Steps I have already taken:
I added the host domain and its IP address to /etc/hosts, so it is resolved correctly.
I used Offset Explorer 2 as a Kafka tool and managed to connect to the specified host. I found the topic I needed and managed to read messages from it. I think this means I can connect to Kafka locally, so I should be able to do it from Java too.
I also tried moving my Kafka settings from the @Configuration class to application.yml. It looked like this:
spring:
  kafka:
    consumer:
      bootstrap-servers: 10.36.12.5:2181
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
Alas I got disconnected and failed to read any messages as well.
What I get in the logs:
2022-11-22 20:29:21.715 INFO 5005 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.2.3
2022-11-22 20:29:21.716 INFO 5005 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 50029d3ed8ba576f
2022-11-22 20:29:21.716 INFO 5005 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1669134561713
2022-11-22 20:29:21.719 INFO 5005 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-group-id-1, groupId=group-id] Subscribed to topic(s): ift.notification.clientId.request
2022-11-22 20:29:21.743 INFO 5005 --- [ main] insure.pulse.Main : Started Main in 2.153 seconds (JVM running for 2.83)
2022-11-22 20:29:22.265 INFO 5005 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-id-1, groupId=group-id] Node -1 disconnected.
2022-11-22 20:29:22.268 INFO 5005 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-id-1, groupId=group-id] Cancelled in-flight API_VERSIONS request with correlation id 1 due to node -1 being disconnected (elapsed time since creation: 149ms, elapsed time since send: 149ms, request timeout: 30000ms)
2022-11-22 20:29:22.268 WARN 5005 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-id-1, groupId=group-id] Bootstrap broker 10.36.12.5:2181 (id: -1 rack: null) disconnected
After that the warning keeps repeating. I think this is because the KafkaListener keeps trying to reconnect to the node.
Any help will be much appreciated. Feel free to ask for any additional info too, I will gladly provide it.
I actually found out what the reason was. I used the Offset Explorer 2 connection to Kafka as a reference. There I saw that the Zookeeper port was 2181, so I used it as the port for the bootstrap-server in my application.yml as well:
spring:
  kafka:
    consumer:
      bootstrap-servers: 10.36.12.5:2181
That was my mistake. Zookeeper is not a bootstrap server; it only tells clients where the brokers are. So the address of the bootstrap server itself (and its specific port) should be specified in application.yml:
spring:
  kafka:
    consumer:
      bootstrap-servers: 10.36.12.5:9092
I found the bootstrap-server port in Offset Explorer as well. When I fixed it, everything worked fine.
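
As a quick sanity check (a sketch of my own, not part of the original answer), you can verify that an address really is a reachable Kafka broker with the plain AdminClient; it will fail (here after 10 seconds) when pointed at the ZooKeeper port instead of a broker:

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // the broker address from the answer above; adjust to your environment
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.36.12.5:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // describeCluster() only succeeds against a real broker endpoint
            String clusterId = admin.describeCluster().clusterId().get(10, TimeUnit.SECONDS);
            System.out.println("Connected to Kafka cluster: " + clusterId);
        }
    }
}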

kafka streams do not run with dynamically generated classes

I want to start a stream that deserializes a dynamically created class. This bean is created by reflection and a URLClassLoader, with the class source given as a String parameter, but the Kafka Streams API doesn't recognize my new class.
The streams work perfectly with pre-created beans, but close automatically when the dynamic one is used. The deserializer was created with Jackson and works on its own too.
Here is the class parser code:
@SuppressWarnings("unchecked")
public static Class<?> getClassFromSource(String className, String sourceCode)
        throws IOException, ClassNotFoundException {
    /*
     * create an empty source file
     */
    File sourceFile = new File(com.google.common.io.Files.createTempDir(), className + ".java");
    sourceFile.deleteOnExit();
    /*
     * generate the source code, using the source filename as the class name;
     * write the source code into the source file
     */
    try (FileWriter writer = new FileWriter(sourceFile)) {
        writer.write(sourceCode);
    }
    /*
     * compile the source file
     */
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    File parentDirectory = null;
    try (StandardJavaFileManager fileManager = compiler.getStandardFileManager(null, null, null)) {
        parentDirectory = sourceFile.getParentFile();
        fileManager.setLocation(StandardLocation.CLASS_OUTPUT, Arrays.asList(parentDirectory));
        Iterable<? extends JavaFileObject> compilationUnits = fileManager
                .getJavaFileObjectsFromFiles(Arrays.asList(sourceFile));
        compiler.getTask(null, fileManager, null, null, null, compilationUnits).call();
    }
    /*
     * load the compiled class
     */
    try (URLClassLoader classLoader = URLClassLoader.newInstance(new URL[] { parentDirectory.toURI().toURL() })) {
        return (Class<?>) classLoader.loadClass(className);
    }
}
First I instantiate my Serdes, which receive the dynamically generated Class as a parameter:
// dynamically generated class from a source string
Class clazz = getClassFromSource("DynamicClass", source);
// Serdes for the created class; the deserializer implements org.apache.kafka.common.serialization.Deserializer
DynamicDeserializer deserializer = new DynamicDeserializer(clazz);
DynamicSerializer serializer = new DynamicSerializer();
Serde<?> encryptedSerde = Serdes.serdeFrom(serializer, deserializer);
And then I start the stream topology that uses this Serdes:
StreamsBuilder builder = new StreamsBuilder();
KTable<String, Long> dynamicStream = builder
        .stream(topicName, Consumed.with(Serdes.String(), encryptedSerde))
        .groupByKey()
        .count();
// convert the KTable to a KStream before writing it to the output topic
dynamicStream.toStream().to(outputTopicName, Produced.with(Serdes.String(), Serdes.Long()));
The stream topology should execute normally, but it always generates this error:
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.Target' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.layout' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.layout.ConversionPattern' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'stream.restart.application' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'aes.key.path' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'path.to.listening' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'admin.retries' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.rootLogger' was supplied but isn't a known config.
2019-09-01 14:54:16 INFO AppInfoParser:117 - Kafka version: 2.3.0
2019-09-01 14:54:16 INFO AppInfoParser:118 - Kafka commitId: fc1aaa116b661c8a
2019-09-01 14:54:16 INFO AppInfoParser:119 - Kafka startTimeMs: 1567360456724
2019-09-01 14:54:16 INFO KafkaStreams:800 - stream-client [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72] Started Streams client
2019-09-01 14:54:16 INFO StreamThread:740 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Starting
2019-09-01 14:54:16 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from CREATED to RUNNING
2019-09-01 14:54:16 INFO KafkaConsumer:1027 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Subscribed to pattern: 'DynamicBean|streamingbean-test-20190901145412544-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition'
2019-09-01 14:54:17 INFO Metadata:266 - [Producer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-producer] Cluster ID: tp7OBhwVRQqT2NpPlL55_Q
2019-09-01 14:54:17 INFO Metadata:266 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Cluster ID: tp7OBhwVRQqT2NpPlL55_Q
2019-09-01 14:54:17 INFO AbstractCoordinator:728 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Discovered group coordinator AcerDerick:9092 (id: 2147483647 rack: null)
2019-09-01 14:54:17 INFO ConsumerCoordinator:476 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Revoking previously assigned partitions []
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED
2019-09-01 14:54:17 INFO KafkaStreams:257 - stream-client [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72] State transition from RUNNING to REBALANCING
2019-09-01 14:54:17 INFO KafkaConsumer:1068 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2019-09-01 14:54:17 INFO StreamThread:324 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] partition revocation took 0 ms.
suspended active tasks: []
suspended standby tasks: []
2019-09-01 14:54:17 INFO AbstractCoordinator:505 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] (Re-)joining group
2019-09-01 14:54:17 ERROR StreamsPartitionAssignor:354 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer] DynamicClass is unknown yet during rebalance, please make sure they have been pre-created before starting the Streams application.
2019-09-01 14:54:17 INFO AbstractCoordinator:469 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Successfully joined group with generation 1
2019-09-01 14:54:17 INFO ConsumerCoordinator:283 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Setting newly assigned partitions:
2019-09-01 14:54:17 INFO StreamThread:1164 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Informed to shut down
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from PARTITIONS_REVOKED to PENDING_SHUTDOWN
2019-09-01 14:54:17 INFO StreamThread:1178 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Shutting down
2019-09-01 14:54:17 INFO KafkaConsumer:1068 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2019-09-01 14:54:17 INFO KafkaProducer:1153 - [Producer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2019-09-01 14:54:17 INFO StreamThread:1198 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Shutdown complete
After some time I fixed this problem with a simple, though maybe not the most elegant, solution: first I used a JSON String deserializer to get the data from the topic, and then passed it to another deserializer that converts it to my dynamic object.
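
A minimal sketch of that two-step approach (my own illustration; mapper, topicName and clazz are assumed to exist as in the code above): consume the value as a plain JSON String, then map it to the dynamically loaded class with Jackson:

ObjectMapper mapper = new ObjectMapper();
StreamsBuilder builder = new StreamsBuilder();
builder.stream(topicName, Consumed.with(Serdes.String(), Serdes.String()))
       .mapValues(json -> {
           try {
               // convert the raw JSON into an instance of the dynamically loaded class
               return mapper.readValue(json, clazz);
           } catch (IOException e) {
               throw new RuntimeException("Failed to deserialize payload", e);
           }
       })
       // downstream processing of the dynamic objects goes here
       .foreach((key, value) -> System.out.println(key + " -> " + value));

Here the raw String Serde is used on the wire, and Jackson does the conversion inside the topology.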

camel kafka route does not stay up

I am trying to use Kafka with Camel and have set up the following route:
public class WorkflowEventConsumerRoute extends RouteBuilder {
    private static final String KAFKA_ENDPOINT =
            "kafka:payments-bus?brokers=localhost:9092";
    ...
    @Override
    public void configure() {
        from(KAFKA_ENDPOINT)
            .routeId(format(KAFKA_CONSUMER))
            .to("mock:end");
    }
}
When I start my Spring Boot application I can see the route gets started, but immediately after that it shuts down without any reason given in the logs:
2018-12-21 12:06:45.012 INFO 12184 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2018-12-21 12:06:45.013 INFO 12184 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2018-12-21 12:06:45.014 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Route: kafka-consumer started and consuming from: kafka://payments-bus?brokers=localhost%3A9092
2018-12-21 12:06:45.015 INFO 12184 --- [r[payments-bus]] o.a.camel.component.kafka.KafkaConsumer : Subscribing payments-bus-Thread 0 to topic payments-bus
2018-12-21 12:06:45.015 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Total 1 routes, of which 1 are started
2018-12-21 12:06:45.015 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.23.0 (CamelContext: camel-1) started in 0.234 seconds
2018-12-21 12:06:45.019 INFO 12184 --- [ main] a.c.n.t.p.workflow.WorkflowApplication : Started WorkflowApplication in 3.815 seconds (JVM running for 4.513)
2018-12-21 12:06:45.024 INFO 12184 --- [ Thread-10] o.a.camel.spring.SpringCamelContext : Apache Camel 2.23.0 (CamelContext: camel-1) is shutting down
On the other hand, if I create a unit test pointing at the same Kafka endpoint, I am able to read the topic content using the org.apache.camel.ConsumerTemplate instance provided by CamelTestSupport.
Ultimately, if I replace the Kafka endpoint in my route with an ActiveMQ one, the route starts OK and the application stays up.
Obviously I am missing something, but I cannot figure out what.
Thank you in advance for your help.
Does your Spring Boot app have a web starter? If not, then you should turn on the Camel run controller to keep the Boot application running.
In the application.properties add
camel.springboot.main-run-controller = true
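Or, equivalently, the same property in application.yml form:

camel:
  springboot:
    main-run-controller: true

Without a web starter (or this controller) there is nothing blocking the main thread, which would explain the immediate shutdown in the log above.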

Kafka Consumer Concurrency on Spring Boot Application Startup

I am experimenting with Kafka in a Spring Boot application.
Spring Boot 2.1.0.RELEASE
Spring-Kafka 2.2.0
My KafkaConfig for consumers looks like this:
@Bean
ThreadPoolTaskExecutor messageProcessorExecutor() {
    ThreadPoolTaskExecutor exec = new ThreadPoolTaskExecutor();
    exec.setCorePoolSize(10);
    exec.setMaxPoolSize(20);
    exec.setKeepAliveSeconds(30);
    exec.setThreadNamePrefix("kafkaConsumer-");
    return exec;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    DefaultKafkaConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerConfigs());
    JsonDeserializer<String> valueDeserializer = new JsonDeserializer<>();
    valueDeserializer.addTrustedPackages("path.to.my.pkgs");
    consumerFactory.setValueDeserializer(valueDeserializer);
    consumerFactory.setKeyDeserializer(new StringDeserializer());
    return consumerFactory;
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(10);
    factory.getContainerProperties().setPollTimeout(4000);
    factory.getContainerProperties().setConsumerTaskExecutor(messageProcessorExecutor());
    return factory;
}

private Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "groupId");
    return props;
}
And I have one consumer:
@KafkaListener(topics = "Topic1", groupId = "groupId")
public void consume(MyMessage message) {
    logger.info("Message is read.");
}
As you can see in the configs above, I have set the concurrency to 10, and it works the way I want: when I push 3 messages into the related topic in Kafka, I can see that each message is consumed by a different thread.
2018-12-12 23:41:50.416 INFO 1937 --- [kafkaConsumer-8] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:41:50.414 INFO 1937 --- [kafkaConsumer-2] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:41:50.461 INFO 1937 --- [kafkaConsumer-6] o.e.kafkalistener.KafkaListeners : Message is read.
However, the consumer works with only one thread on application startup.
My test case:
shut down the application
send 3 more messages to the same Kafka topic
start the consumer application
I am seeing logs like these:
2018-12-12 23:51:51.525 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:51.526 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:51.526 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:54.104 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Attempt to heartbeat failed since group is rebalancing
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Revoking previously assigned partitions [I have deleted here to make log more readable]
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: [I have deleted here to make log more readable]
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] (Re-)joining group
2018-12-12 23:51:54.155 INFO 2023 --- [kafkaConsumer-9] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-10, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.155 INFO 2023 --- [kafkaConsumer-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-9] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-10, groupId=groupId] Setting newly assigned partitions [Topic1-0, Topic1-1, Topic1-2, Topic1-3, Topic1-12, Topic1-13, Topic1-14, Topic1-15, Topic1-16, Topic1-17, Topic1-18, Topic1-19, Topic1-4, Topic1-5, Topic1-6, Topic1-7, Topic1-8, Topic1-9, Topic1-10, Topic1-11]
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-2] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-3, groupId=groupId] Setting newly assigned partitions [Topic1-60, Topic1-61, Topic1-62, Topic1-63, Topic1-64, Topic1-65, Topic1-66, Topic1-67, Topic1-76, Topic1-77, Topic1-78, Topic1-79, Topic1-68, Topic1-69, Topic1-70, Topic1-71, Topic1-72, Topic1-73, Topic1-74, Topic1-75]
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-4] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-5, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-5, groupId=groupId] Setting newly assigned partitions [Topic1-116, Topic1-117, Topic1-118, Topic1-119, Topic1-108, Topic1-109, Topic1-110, Topic1-111, Topic1-112, Topic1-113, Topic1-114, Topic1-115, Topic1-100, Topic1-101, Topic1-102, Topic1-103, Topic1-104, Topic1-105, Topic1-106, Topic1-107]
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-6] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-7, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-6] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-7, groupId=groupId] Setting newly assigned partitions [Topic1-156, Topic1-157, Topic1-158, Topic1-159, Topic1-148, Topic1-149, Topic1-150, Topic1-151, Topic1-152, Topic1-153, Topic1-154, Topic1-155, Topic1-140, Topic1-141, Topic1-142, Topic1-143, Topic1-144, Topic1-145, Topic1-146, Topic1-147]
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-7] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-8, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-7] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-8, groupId=groupId] Setting newly assigned partitions [Topic1-160, Topic1-161, Topic1-162, Topic1-163, Topic1-172, Topic1-173, Topic1-174, Topic1-175, Topic1-176, Topic1-177, Topic1-178, Topic1-179, Topic1-164, Topic1-165, Topic1-166, Topic1-167, Topic1-168, Topic1-169, Topic1-170, Topic1-171]
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-5] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-6, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-5] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-6, groupId=groupId] Setting newly assigned partitions [Topic1-124, Topic1-125, Topic1-126, Topic1-127, Topic1-128, Topic1-129, Topic1-130, Topic1-131, Topic1-120, Topic1-121, Topic1-122, Topic1-123, Topic1-132, Topic1-133, Topic1-134, Topic1-135, Topic1-136, Topic1-137, Topic1-138, Topic1-139]
2018-12-12 23:51:54.158 INFO 2023 --- [afkaConsumer-10] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-11, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [afkaConsumer-10] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-11, groupId=groupId] Setting newly assigned partitions [Topic1-28, Topic1-29, Topic1-30, Topic1-31, Topic1-32, Topic1-33, Topic1-34, Topic1-35, Topic1-20, Topic1-21, Topic1-22, Topic1-23, Topic1-24, Topic1-25, Topic1-26, Topic1-27, Topic1-36, Topic1-37, Topic1-38, Topic1-39]
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-8] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-9, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-8] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-9, groupId=groupId] Setting newly assigned partitions [Topic1-188, Topic1-189, Topic1-190, Topic1-191, Topic1-192, Topic1-193, Topic1-194, Topic1-195, Topic1-180, Topic1-181, Topic1-182, Topic1-183, Topic1-184, Topic1-185, Topic1-186, Topic1-187, Topic1-196, Topic1-197, Topic1-198, Topic1-199]
2018-12-12 23:51:54.163 INFO 2023 --- [kafkaConsumer-3] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.165 INFO 2023 --- [kafkaConsumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=groupId] Setting newly assigned partitions [Topic1-92, Topic1-93, Topic1-94, Topic1-95, Topic1-96, Topic1-97, Topic1-98, Topic1-99, Topic1-84, Topic1-85, Topic1-86, Topic1-87, Topic1-88, Topic1-89, Topic1-90, Topic1-91, Topic1-80, Topic1-81, Topic1-82, Topic1-83]
2018-12-12 23:51:54.189 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Setting newly assigned partitions [Topic1-52, Topic1-53, Topic1-54, Topic1-55, Topic1-56, Topic1-57, Topic1-58, Topic1-59, Topic1-44, Topic1-45, Topic1-46, Topic1-47, Topic1-48, Topic1-49, Topic1-50, Topic1-51, Topic1-40, Topic1-41, Topic1-42, Topic1-43]
2018-12-12 23:51:54.192 INFO 2023 --- [kafkaConsumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-52, Topic1-53, Topic1-54, Topic1-55, Topic1-56, Topic1-57, Topic1-58, Topic1-59, Topic1-44, Topic1-45, Topic1-46, Topic1-47, Topic1-48, Topic1-49, Topic1-50, Topic1-51, Topic1-40, Topic1-41, Topic1-42, Topic1-43]
2018-12-12 23:51:54.278 INFO 2023 --- [kafkaConsumer-2] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-60, Topic1-61, Topic1-62, Topic1-63, Topic1-64, Topic1-65, Topic1-66, Topic1-67, Topic1-76, Topic1-77, Topic1-78, Topic1-79, Topic1-68, Topic1-69, Topic1-70, Topic1-71, Topic1-72, Topic1-73, Topic1-74, Topic1-75]
2018-12-12 23:51:54.278 INFO 2023 --- [kafkaConsumer-9] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-0, Topic1-1, Topic1-2, Topic1-3, Topic1-12, Topic1-13, Topic1-14, Topic1-15, Topic1-16, Topic1-17, Topic1-18, Topic1-19, Topic1-4, Topic1-5, Topic1-6, Topic1-7, Topic1-8, Topic1-9, Topic1-10, Topic1-11]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-4] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-116, Topic1-117, Topic1-118, Topic1-119, Topic1-108, Topic1-109, Topic1-110, Topic1-111, Topic1-112, Topic1-113, Topic1-114, Topic1-115, Topic1-100, Topic1-101, Topic1-102, Topic1-103, Topic1-104, Topic1-105, Topic1-106, Topic1-107]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-6] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-156, Topic1-157, Topic1-158, Topic1-159, Topic1-148, Topic1-149, Topic1-150, Topic1-151, Topic1-152, Topic1-153, Topic1-154, Topic1-155, Topic1-140, Topic1-141, Topic1-142, Topic1-143, Topic1-144, Topic1-145, Topic1-146, Topic1-147]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-8] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-188, Topic1-189, Topic1-190, Topic1-191, Topic1-192, Topic1-193, Topic1-194, Topic1-195, Topic1-180, Topic1-181, Topic1-182, Topic1-183, Topic1-184, Topic1-185, Topic1-186, Topic1-187, Topic1-196, Topic1-197, Topic1-198, Topic1-199]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-5] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-124, Topic1-125, Topic1-126, Topic1-127, Topic1-128, Topic1-129, Topic1-130, Topic1-131, Topic1-120, Topic1-121, Topic1-122, Topic1-123, Topic1-132, Topic1-133, Topic1-134, Topic1-135, Topic1-136, Topic1-137, Topic1-138, Topic1-139]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-7] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-160, Topic1-161, Topic1-162, Topic1-163, Topic1-172, Topic1-173, Topic1-174, Topic1-175, Topic1-176, Topic1-177, Topic1-178, Topic1-179, Topic1-164, Topic1-165, Topic1-166, Topic1-167, Topic1-168, Topic1-169, Topic1-170, Topic1-171]
2018-12-12 23:51:54.283 INFO 2023 --- [afkaConsumer-10] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-28, Topic1-29, Topic1-30, Topic1-31, Topic1-32, Topic1-33, Topic1-34, Topic1-35, Topic1-20, Topic1-21, Topic1-22, Topic1-23, Topic1-24, Topic1-25, Topic1-26, Topic1-27, Topic1-36, Topic1-37, Topic1-38, Topic1-39]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-3] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-92, Topic1-93, Topic1-94, Topic1-95, Topic1-96, Topic1-97, Topic1-98, Topic1-99, Topic1-84, Topic1-85, Topic1-86, Topic1-87, Topic1-88, Topic1-89, Topic1-90, Topic1-91, Topic1-80, Topic1-81, Topic1-82, Topic1-83]
I think the 3 unread messages are being read before all consumers are ready, and each of them is read by the consumer thread named kafkaConsumer-1. This did not change when I pushed many more messages while the consumer application was down.
How can I read all unread messages concurrently on application startup?

Spring cloud stream / Kafka exceptions

I have problems with a service which uses Spring Cloud Stream and Kafka. The service had been working OK, but yesterday it started reporting a series of exceptions on startup:
Checking for rethrow: count=2
2018-09-11 10:43:34.904 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Retry: count=2
2018-09-11 10:43:34.904 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.integration.channel.DirectChannel : preSend on channel 'payment-reply', message: GenericMessage [payload=byte[1478], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:602, deliveryAttempt=3, X-B3-ParentSpanId=a9fe9b1c87b14698, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedTopic=paymentResponse, spanTraceId=966a10371583367f, spanId=7aa71302bc18bb4c, spanParentSpanId=a9fe9b1c87b14698, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:601, nativeHeaders={spanTraceId=[966a10371583367f], spanId=[7aa71302bc18bb4c], spanParentSpanId=[a9fe9b1c87b14698], spanSampled=[0]}, kafka_offset=2299, X-B3-SpanId=7aa71302bc18bb4c, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#2fb81502, X-B3-Sampled=0, X-B3-TraceId=966a10371583367f, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592853999}]
2018-09-11 10:43:34.904 DEBUG [payment-gateway,966a10371583367f,c94b21ccaaed668b,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Created a new span in pre sendNoopSpan{context=966a10371583367f/c94b21ccaaed668b}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,4476713d70434d52,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Created a new span in before handleNoopSpan{context=966a10371583367f/e1d1a2a6b9ad093e}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,4476713d70434d52,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Will finish the current span after message handled NoopSpan{context=966a10371583367f/4476713d70434d52}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,c94b21ccaaed668b,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Will finish the current span after completion NoopSpan{context=966a10371583367f/c94b21ccaaed668b}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-11 10:43:35.001 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {}
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Fetch READ_UNCOMMITTED at offset 0 for partition refundResponse-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Added READ_UNCOMMITTED fetch request for partition refundResponse-0 at offset 0 to node 10.244.0.194:9092 (id: 2 rack: null)
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Sending READ_UNCOMMITTED fetch for partitions [refundResponse-0] to broker 10.244.0.194:9092 (id: 2 rack: null)
2018-09-11 10:43:35.003 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Checking for rethrow: count=3
2018-09-11 10:43:35.003 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Retry failed last attempt: count=3
2018-09-11 10:43:35.004 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.i.h.a.ErrorMessageSendingRecoverer : Sending ErrorMessage: failedMessage: GenericMessage [payload=byte[1478], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:602, deliveryAttempt=3, X-B3-ParentSpanId=7aa71302bc18bb4c, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=966a10371583367f, spanId=c94b21ccaaed668b, spanParentSpanId=7aa71302bc18bb4c, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:601, nativeHeaders={spanTraceId=[966a10371583367f], spanId=[c94b21ccaaed668b], spanParentSpanId=[7aa71302bc18bb4c], spanSampled=[0]}, kafka_offset=2299, X-B3-SpanId=c94b21ccaaed668b, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#2fb81502, X-B3-Sampled=0, X-B3-TraceId=966a10371583367f, id=83994228-ba45-2303-1f7e-2eaf8f49c400, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592853999, timestamp=1536662614904}]
2018-09-11 08:44:19.837 ERROR [payment-gateway,bd9888a7d590ebf7,535db983ae0aedab,false] 1 --- [container-0-C-1] o.s.integration.handler.LoggingHandler :
org.springframework.messaging.MessageDeliveryException:
Dispatcher has no subscribers for channel 'application-1.payment-reply'.; nested exception is org.springframework.integration.MessageDispatchingException:
Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[1197], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:426, deliveryAttempt=3, X-B3-ParentSpanId=760139e0bc5d9ac0, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=bd9888a7d590ebf7, spanId=5c6ac2c521faf6e7, spanParentSpanId=760139e0bc5d9ac0, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:425, nativeHeaders={spanTraceId=[bd9888a7d590ebf7], spanId=[535db983ae0aedab], spanParentSpanId=[5c6ac2c521faf6e7], spanSampled=[0], X-B3-TraceId=[bd9888a7d590ebf7], X-B3-SpanId=[535db983ae0aedab], X-B3-ParentSpanId=[5c6ac2c521faf6e7], X-B3-Sampled=[0]}, kafka_offset=2258, X-B3-SpanId=5c6ac2c521faf6e7, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#59715a4a, X-B3-Sampled=0, X-B3-TraceId=bd9888a7d590ebf7, id=88531659-3fb0-a59f-bb69-54c9ba82d608, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592840192, timestamp=1536655459828}], failedMessage=GenericMessage [payload=byte[1197], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:426, deliveryAttempt=3, X-B3-ParentSpanId=760139e0bc5d9ac0, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=bd9888a7d590ebf7, spanId=5c6ac2c521faf6e7, spanParentSpanId=760139e0bc5d9ac0, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:425, nativeHeaders={spanTraceId=[bd9888a7d590ebf7], spanId=[535db983ae0aedab], spanParentSpanId=[5c6ac2c521faf6e7], spanSampled=[0], X-B3-TraceId=[bd9888a7d590ebf7], X-B3-SpanId=[535db983ae0aedab], X-B3-ParentSpanId=[5c6ac2c521faf6e7], X-B3-Sampled=[0]}, kafka_offset=2258, X-B3-SpanId=5c6ac2c521faf6e7, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#59715a4a, X-B3-Sampled=0, X-B3-TraceId=bd9888a7d590ebf7, id=88531659-3fb0-a59f-bb69-54c9ba82d608, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592840192, timestamp=1536655459828}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
after some time we then see exceptions like this:
Caused by: org.springframework.messaging.core.DestinationResolutionException: failed to look up MessageChannel with name '946859a6-bc27-466d-91ba-3da93af50ac9:1' in the BeanFactory.; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named '946859a6-bc27-466d-91ba-3da93af50ac9:1' available
The connection to Kafka is configured with a property: spring.kafka.bootstrap-server = kafka.kafka:9092
The topics are configured with Spring Cloud Stream properties: spring.cloud.stream.bindings.[topic-name].destination = blah
The interaction with Kafka goes via Spring Integration with code like this:
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = KafkaConfig.ENRICH_PAYMENT, replyChannel = ChannelNames.PAYMENT_REPLY, replyTimeout = 10000)
    String processPayment(String payload);
}

// In a different class:
private final StreamGateway gateway;
...
gateway.processPayment(message)
This is running on an Azure Kubernetes deployment, and Kafka is in a separate pod from the Spring Boot service.
Thanks in advance.
Update:
The problem reoccurred, and some further investigation has highlighted a couple of things:
Because we're using the Spring Integration @MessagingGateway and @Gateway to create a synchronous interaction with Kafka, there is no normal topic @StreamListener or subscriber.
The problem occurs when there is lag on the topic, i.e. there are messages in the topic beyond the current consumer offset.
The lack of a normal @StreamListener means the lag messages have no means of being processed. Only when a connection is made by the MessagingGateway is it possible for messages to be read from the topic.
One means of getting rid of the problem is to read all 'lag' messages so that the lag is 0. The service will then start normally; however, if I manually post messages to the topic (outwith the MessagingGateway interaction), then the error reoccurs.
A second partial solution (which I don't fully understand yet) is to add a @DependsOn annotation to the MessagingGateway, indicating that it requires a bean separately created with an @Input SubscribableChannel object. This means the SubscribableChannel must be created before the MessagingGateway, therefore creating a subscriber; however, there is still no @StreamListener, so exceptions are still thrown as lag messages are pulled from the topic with nowhere to go 🤨
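
For illustration, that partial workaround might look roughly like this (a sketch only; the binding interface and bean names are assumptions, not taken from the actual project, and ChannelNames.PAYMENT_REPLY is assumed to be a String constant):

// Hypothetical binding interface that forces creation of the reply channel bean,
// registered via @EnableBinding(PaymentReplyBinding.class) on a configuration class
public interface PaymentReplyBinding {
    @Input(ChannelNames.PAYMENT_REPLY)
    SubscribableChannel paymentReply();
}

// The gateway then declares a dependency on that channel bean:
@DependsOn(ChannelNames.PAYMENT_REPLY)
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = KafkaConfig.ENRICH_PAYMENT, replyChannel = ChannelNames.PAYMENT_REPLY, replyTimeout = 10000)
    String processPayment(String payload);
}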
While I am not sure about the details of your application, what is clear is that a Message gets delivered to the application-1.payment-reply channel which, as the error states, has no subscriber. Basically it means there is no listener on that channel (such as a @StreamListener or @ServiceActivator etc.).
It is a very common Spring Integration misconfiguration, but without looking at your app it is hard to say where it is.
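
For reference, a subscriber on such a channel can be as small as this (a sketch only; the channel constant is taken from the gateway declaration above, and logger is assumed as in the other snippets):

@ServiceActivator(inputChannel = ChannelNames.PAYMENT_REPLY)
public void handlePaymentReply(Message<?> reply) {
    // give the dispatcher a subscriber; handle (or at least log) the reply
    logger.info("Received reply: " + reply.getPayload());
}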
Looking at the debug log, I noticed that the service was connecting to other topics correctly but having problems with the payment-reply topic. I tried deleting this topic and restarting the service, and this fixed the problem.
