Kafka Consumer Concurrency on Spring Boot Application Startup - java

I am experimenting with Kafka on Spring Boot.
Spring Boot 2.1.0.RELEASE
Spring-Kafka 2.2.0
My KafkaConfig for consumers looks like this:
@Bean
ThreadPoolTaskExecutor messageProcessorExecutor() {
    ThreadPoolTaskExecutor exec = new ThreadPoolTaskExecutor();
    exec.setCorePoolSize(10);
    exec.setMaxPoolSize(20);
    exec.setKeepAliveSeconds(30);
    exec.setThreadNamePrefix("kafkaConsumer-");
    return exec;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    DefaultKafkaConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerConfigs());
    JsonDeserializer<String> valueDeserializer = new JsonDeserializer<>();
    valueDeserializer.addTrustedPackages("path.to.my.pkgs");
    consumerFactory.setValueDeserializer(valueDeserializer);
    consumerFactory.setKeyDeserializer(new StringDeserializer());
    return consumerFactory;
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(10);
    factory.getContainerProperties().setPollTimeout(4000);
    factory.getContainerProperties().setConsumerTaskExecutor(messageProcessorExecutor());
    return factory;
}

private Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "groupId");
    return props;
}
And I have one consumer:
@KafkaListener(topics = "Topic1", groupId = "groupId")
public void consume(MyMessage message) {
    logger.info("Message is read.");
}
As you can see in the configs above, I have configured the concurrency as 10, and it works the way I want: when I push 3 messages into the related topic in Kafka, I can see that each message is consumed by a different thread.
2018-12-12 23:41:50.416 INFO 1937 --- [kafkaConsumer-8] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:41:50.414 INFO 1937 --- [kafkaConsumer-2] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:41:50.461 INFO 1937 --- [kafkaConsumer-6] o.e.kafkalistener.KafkaListeners : Message is read.
However, the consumer works with only one thread on application startup.
My test case:
shut down the application
send 3 more messages to the same Kafka topic
start the consumer application
I am seeing logs like the following:
2018-12-12 23:51:51.525 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:51.526 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:51.526 INFO 2023 --- [kafkaConsumer-1] o.e.kafkalistener.KafkaListeners : Message is read.
2018-12-12 23:51:54.104 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Attempt to heartbeat failed since group is rebalancing
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Revoking previously assigned partitions [I have deleted here to make log more readable]
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: [I have deleted here to make log more readable]
2018-12-12 23:51:54.139 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] (Re-)joining group
2018-12-12 23:51:54.155 INFO 2023 --- [kafkaConsumer-9] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-10, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.155 INFO 2023 --- [kafkaConsumer-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-9] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-10, groupId=groupId] Setting newly assigned partitions [Topic1-0, Topic1-1, Topic1-2, Topic1-3, Topic1-12, Topic1-13, Topic1-14, Topic1-15, Topic1-16, Topic1-17, Topic1-18, Topic1-19, Topic1-4, Topic1-5, Topic1-6, Topic1-7, Topic1-8, Topic1-9, Topic1-10, Topic1-11]
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-2] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-3, groupId=groupId] Setting newly assigned partitions [Topic1-60, Topic1-61, Topic1-62, Topic1-63, Topic1-64, Topic1-65, Topic1-66, Topic1-67, Topic1-76, Topic1-77, Topic1-78, Topic1-79, Topic1-68, Topic1-69, Topic1-70, Topic1-71, Topic1-72, Topic1-73, Topic1-74, Topic1-75]
2018-12-12 23:51:54.156 INFO 2023 --- [kafkaConsumer-4] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-5, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-5, groupId=groupId] Setting newly assigned partitions [Topic1-116, Topic1-117, Topic1-118, Topic1-119, Topic1-108, Topic1-109, Topic1-110, Topic1-111, Topic1-112, Topic1-113, Topic1-114, Topic1-115, Topic1-100, Topic1-101, Topic1-102, Topic1-103, Topic1-104, Topic1-105, Topic1-106, Topic1-107]
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-6] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-7, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-6] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-7, groupId=groupId] Setting newly assigned partitions [Topic1-156, Topic1-157, Topic1-158, Topic1-159, Topic1-148, Topic1-149, Topic1-150, Topic1-151, Topic1-152, Topic1-153, Topic1-154, Topic1-155, Topic1-140, Topic1-141, Topic1-142, Topic1-143, Topic1-144, Topic1-145, Topic1-146, Topic1-147]
2018-12-12 23:51:54.157 INFO 2023 --- [kafkaConsumer-7] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-8, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-7] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-8, groupId=groupId] Setting newly assigned partitions [Topic1-160, Topic1-161, Topic1-162, Topic1-163, Topic1-172, Topic1-173, Topic1-174, Topic1-175, Topic1-176, Topic1-177, Topic1-178, Topic1-179, Topic1-164, Topic1-165, Topic1-166, Topic1-167, Topic1-168, Topic1-169, Topic1-170, Topic1-171]
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-5] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-6, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.158 INFO 2023 --- [kafkaConsumer-5] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-6, groupId=groupId] Setting newly assigned partitions [Topic1-124, Topic1-125, Topic1-126, Topic1-127, Topic1-128, Topic1-129, Topic1-130, Topic1-131, Topic1-120, Topic1-121, Topic1-122, Topic1-123, Topic1-132, Topic1-133, Topic1-134, Topic1-135, Topic1-136, Topic1-137, Topic1-138, Topic1-139]
2018-12-12 23:51:54.158 INFO 2023 --- [afkaConsumer-10] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-11, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [afkaConsumer-10] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-11, groupId=groupId] Setting newly assigned partitions [Topic1-28, Topic1-29, Topic1-30, Topic1-31, Topic1-32, Topic1-33, Topic1-34, Topic1-35, Topic1-20, Topic1-21, Topic1-22, Topic1-23, Topic1-24, Topic1-25, Topic1-26, Topic1-27, Topic1-36, Topic1-37, Topic1-38, Topic1-39]
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-8] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-9, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.159 INFO 2023 --- [kafkaConsumer-8] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-9, groupId=groupId] Setting newly assigned partitions [Topic1-188, Topic1-189, Topic1-190, Topic1-191, Topic1-192, Topic1-193, Topic1-194, Topic1-195, Topic1-180, Topic1-181, Topic1-182, Topic1-183, Topic1-184, Topic1-185, Topic1-186, Topic1-187, Topic1-196, Topic1-197, Topic1-198, Topic1-199]
2018-12-12 23:51:54.163 INFO 2023 --- [kafkaConsumer-3] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=groupId] Successfully joined group with generation 41
2018-12-12 23:51:54.165 INFO 2023 --- [kafkaConsumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=groupId] Setting newly assigned partitions [Topic1-92, Topic1-93, Topic1-94, Topic1-95, Topic1-96, Topic1-97, Topic1-98, Topic1-99, Topic1-84, Topic1-85, Topic1-86, Topic1-87, Topic1-88, Topic1-89, Topic1-90, Topic1-91, Topic1-80, Topic1-81, Topic1-82, Topic1-83]
2018-12-12 23:51:54.189 INFO 2023 --- [kafkaConsumer-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=groupId] Setting newly assigned partitions [Topic1-52, Topic1-53, Topic1-54, Topic1-55, Topic1-56, Topic1-57, Topic1-58, Topic1-59, Topic1-44, Topic1-45, Topic1-46, Topic1-47, Topic1-48, Topic1-49, Topic1-50, Topic1-51, Topic1-40, Topic1-41, Topic1-42, Topic1-43]
2018-12-12 23:51:54.192 INFO 2023 --- [kafkaConsumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-52, Topic1-53, Topic1-54, Topic1-55, Topic1-56, Topic1-57, Topic1-58, Topic1-59, Topic1-44, Topic1-45, Topic1-46, Topic1-47, Topic1-48, Topic1-49, Topic1-50, Topic1-51, Topic1-40, Topic1-41, Topic1-42, Topic1-43]
2018-12-12 23:51:54.278 INFO 2023 --- [kafkaConsumer-2] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-60, Topic1-61, Topic1-62, Topic1-63, Topic1-64, Topic1-65, Topic1-66, Topic1-67, Topic1-76, Topic1-77, Topic1-78, Topic1-79, Topic1-68, Topic1-69, Topic1-70, Topic1-71, Topic1-72, Topic1-73, Topic1-74, Topic1-75]
2018-12-12 23:51:54.278 INFO 2023 --- [kafkaConsumer-9] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-0, Topic1-1, Topic1-2, Topic1-3, Topic1-12, Topic1-13, Topic1-14, Topic1-15, Topic1-16, Topic1-17, Topic1-18, Topic1-19, Topic1-4, Topic1-5, Topic1-6, Topic1-7, Topic1-8, Topic1-9, Topic1-10, Topic1-11]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-4] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-116, Topic1-117, Topic1-118, Topic1-119, Topic1-108, Topic1-109, Topic1-110, Topic1-111, Topic1-112, Topic1-113, Topic1-114, Topic1-115, Topic1-100, Topic1-101, Topic1-102, Topic1-103, Topic1-104, Topic1-105, Topic1-106, Topic1-107]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-6] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-156, Topic1-157, Topic1-158, Topic1-159, Topic1-148, Topic1-149, Topic1-150, Topic1-151, Topic1-152, Topic1-153, Topic1-154, Topic1-155, Topic1-140, Topic1-141, Topic1-142, Topic1-143, Topic1-144, Topic1-145, Topic1-146, Topic1-147]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-8] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-188, Topic1-189, Topic1-190, Topic1-191, Topic1-192, Topic1-193, Topic1-194, Topic1-195, Topic1-180, Topic1-181, Topic1-182, Topic1-183, Topic1-184, Topic1-185, Topic1-186, Topic1-187, Topic1-196, Topic1-197, Topic1-198, Topic1-199]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-5] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-124, Topic1-125, Topic1-126, Topic1-127, Topic1-128, Topic1-129, Topic1-130, Topic1-131, Topic1-120, Topic1-121, Topic1-122, Topic1-123, Topic1-132, Topic1-133, Topic1-134, Topic1-135, Topic1-136, Topic1-137, Topic1-138, Topic1-139]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-7] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-160, Topic1-161, Topic1-162, Topic1-163, Topic1-172, Topic1-173, Topic1-174, Topic1-175, Topic1-176, Topic1-177, Topic1-178, Topic1-179, Topic1-164, Topic1-165, Topic1-166, Topic1-167, Topic1-168, Topic1-169, Topic1-170, Topic1-171]
2018-12-12 23:51:54.283 INFO 2023 --- [afkaConsumer-10] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-28, Topic1-29, Topic1-30, Topic1-31, Topic1-32, Topic1-33, Topic1-34, Topic1-35, Topic1-20, Topic1-21, Topic1-22, Topic1-23, Topic1-24, Topic1-25, Topic1-26, Topic1-27, Topic1-36, Topic1-37, Topic1-38, Topic1-39]
2018-12-12 23:51:54.283 INFO 2023 --- [kafkaConsumer-3] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [Topic1-92, Topic1-93, Topic1-94, Topic1-95, Topic1-96, Topic1-97, Topic1-98, Topic1-99, Topic1-84, Topic1-85, Topic1-86, Topic1-87, Topic1-88, Topic1-89, Topic1-90, Topic1-91, Topic1-80, Topic1-81, Topic1-82, Topic1-83]
I think the 3 unread messages are being read before all consumers are ready, and each of them is read by the consumer named kafkaConsumer-1. This did not change when I pushed many more messages while the consumer application was shut down.
How can I read every unread message concurrently on application startup?
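One thing worth checking is which partitions the startup backlog actually sits on, and which thread picks it up: as the logs above suggest, the first container to join the group briefly owns all partitions until the remaining nine consumers have joined and the rebalance completes, so a small backlog can easily be drained by a single thread before that happens. A minimal diagnostic sketch, using the standard Spring Kafka header injection (the log format is just an example):
@KafkaListener(topics = "Topic1", groupId = "groupId")
public void consume(MyMessage message,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
    // Log the partition and the consumer thread to confirm whether the startup
    // backlog is really drained by one consumer before the rebalance completes.
    logger.info("Message read from partition {} on thread {}", partition, Thread.currentThread().getName());
}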

Related

Kafka CooperativeStickyAssignor revokes/assigns partition in one rebalance cycle

I have an application that runs 6 consumers in parallel. I am getting some unexpected results when I use CooperativeStickyAssignor.
If I understand the mechanism correctly, if a consumer loses a partition in one rebalance cycle, the partition will be assigned in the next rebalance cycle.
This assumption is based on the RebalanceProtocol documentation and a few blog posts that describe the protocol, like this one on the Confluent blog.
The assignor should not reassign any owned partitions immediately, but
instead may indicate consumers the need for partition revocation so
that the revoked partitions can be reassigned to other consumers in
the next rebalance event. This is designed for sticky assignment logic
which attempts to minimize partition reassignment with cooperative
adjustments.
Any member that revoked partitions then rejoins the group, triggering
a second rebalance so that its revoked partitions can be assigned.
Until then, these partitions are unowned and unassigned.
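For reference, the assignor is enabled via the standard consumer configuration; a minimal sketch (broker address and group id are placeholders, deserializers are just for illustration):
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id");
// Switches the group from the default eager assignors to the incremental cooperative protocol.
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        CooperativeStickyAssignor.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());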
These are the logs from the application that uses protocol='cooperative-sticky'. In the same rebalance cycle (generationId=640) partition 74 moves from consumer-3 to consumer-4. I omitted the lines that are logged by the other 4 consumers.
Note that the log is in reverse order (bottom to top).
2022-12-14 11:18:24 1 --- [consumer-3] x.y.z.MyRebalanceHandler1 : New partition assignment: partition-59, seek to min common offset: 85120524
2022-12-14 11:18:24 1 --- [consumer-3] x.y.z.MyRebalanceHandler2 : Partitions [partition-59] assigned successfully
2022-12-14 11:18:24 1 --- [consumer-3] x.y.z.MyRebalanceHandler1 : Partitions assigned: [partition-59]
2022-12-14 11:18:24 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Adding newly assigned partitions: partition-59
2022-12-14 11:18:24 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Notifying assignor about the new Assignment(partitions=[partition-59])
2022-12-14 11:18:24 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Request joining group due to: need to revoke partitions [partition-26, partition-74] as indicated by the current assignment and re-join
2022-12-14 11:18:24 1 --- [consumer-3] x.y.z.MyRebalanceHandler2 : Partitions [partition-26, partition-74] revoked successfully
2022-12-14 11:18:24 1 --- [consumer-3] x.y.z.MyRebalanceHandler1 : Finished removing partition data
2022-12-14 11:18:24 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] (Re-)joining group
2022-12-14 11:18:24 1 --- [consumer-4] x.y.z.MyRebalanceHandler1 : New partition assignment: partition-74, seek to min common offset: 107317730
2022-12-14 11:18:24 1 --- [consumer-4] x.y.z.MyRebalanceHandler2 : Partitions [partition-74] assigned successfully
2022-12-14 11:18:24 1 --- [consumer-4] x.y.z.MyRebalanceHandler1 : Partitions assigned: [partition-74]
2022-12-14 11:18:24 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Adding newly assigned partitions: partition-74
2022-12-14 11:18:24 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Notifying assignor about the new Assignment(partitions=[partition-74])
2022-12-14 11:18:24 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Request joining group due to: need to revoke partitions [partition-57] as indicated by the current assignment and re-join
2022-12-14 11:18:24 1 --- [consumer-4] x.y.z.MyRebalanceHandler2 : Partitions [partition-57] revoked successfully
2022-12-14 11:18:24 1 --- [consumer-4] x.y.z.MyRebalanceHandler1 : Finished removing partition data
2022-12-14 11:18:22 1 --- [consumer-3] x.y.z.MyRebalanceHandler1 : Partitions revoked: [partition-26, partition-74]
2022-12-14 11:18:22 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Revoke previously assigned partitions partition-26, partition-74
2022-12-14 11:18:22 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Updating assignment with\n\tAssigned partitions: [partition-59]\n\tCurrent owned partitions: [partition-26, partition-74]\n\tAdded partitions (assigned - owned): [partition-59]\n\tRevoked partitions (owned - assigned): [partition-26, partition-74]
2022-12-14 11:18:22 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Successfully synced group in generation Generation{generationId=640, memberId='partition-3-my-client-id-my-group-id-c31afd19-3f22-43cb-ad07-9088aa98d3af', protocol='cooperative-sticky'}
2022-12-14 11:18:22 1 --- [consumer-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-3-my-client-id-my-group-id, groupId=my-group-id] Successfully joined group with generation Generation{generationId=640, memberId='partition-3-my-client-id-my-group-id-c31afd19-3f22-43cb-ad07-9088aa98d3af', protocol='cooperative-sticky'}
2022-12-14 11:18:22 1 --- [consumer-4] x.y.z.MyRebalanceHandler1 : Partitions revoked: [partition-57]
2022-12-14 11:18:22 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Revoke previously assigned partitions partition-57
2022-12-14 11:18:22 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Updating assignment with\n\tAssigned partitions: [partition-74]\n\tCurrent owned partitions: [partition-57]\n\tAdded partitions (assigned - owned): [partition-74]\n\tRevoked partitions (owned - assigned): [partition-57]
2022-12-14 11:18:21 1 --- [id-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Successfully synced group in generation Generation{generationId=640, memberId='partition-4-my-client-id-my-group-id-ae2af665-edc9-4a8e-b658-98372d142477', protocol='cooperative-sticky'}
2022-12-14 11:18:21 1 --- [consumer-4] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=partition-4-my-client-id-my-group-id, groupId=my-group-id] Successfully joined group with generation Generation{generationId=640, memberId='partition-4-my-client-id-my-group-id-ae2af665-edc9-4a8e-b658-98372d142477', protocol='cooperative-sticky'}
What am I missing here?
I expect that the partition gets revoked in one rebalance cycle and gets assigned in the next.
Kafka client version is 3.2.1.

Spring Kafka multiple topic for one class dynamically

I recently wanted to add a new behavior to my project that uses spring-kafka.
The idea is really simple:
App1 creates a new scenario named "SCENARIO_1" and publishes this string to the topic "NEW_SCENARIO"
App1 publishes some messages to the topics "APP2-SCENARIO_1" and "APP3-SCENARIO_1"
App2 (group-id=app2) listens on NEW_SCENARIO and creates a new consumer<Object,String> listening on a new topic "APP2-SCENARIO_1"
App3 (group-id=app3) listens on NEW_SCENARIO and creates a new consumer<Object,String> listening on a new topic "APP3-SCENARIO_1"
The goal is to create new topics and consumers dynamically. I cannot use the Spring Kafka annotations since I need this to be dynamic, so I did this:
@KafkaListener(topics = ScenarioTopics.NEW_SCENARIO)
public void receive(final String topic) {
    logger.info("Get new scenario " + topic + ", creating new consumer");
    TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset(
            "APP2_" + topic, 1, 0L);
    ContainerProperties containerProps = new ContainerProperties(topicPartitionOffset);
    containerProps.setMessageListener((MessageListener<Object, String>) message -> {
        // process my message
    });
    KafkaMessageListenerContainer<Object, String> container = new KafkaMessageListenerContainer<>(kafkaPeopleConsumerFactory, containerProps);
    container.start();
}
And this does not work. I'm probably missing something, but I can't figure out what.
Here are some logs telling me that the leader is not available, which is weird since I did receive the new scenario event.
2022-03-14 18:08:26.057 INFO 21892 --- [ntainer#0-0-C-1] o.l.b.v.c.c.i.k.KafkaScenarioListener : Get new scenario W4BdDBEowY, creating new consumer
2022-03-14 18:08:26.061 INFO 21892 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
[...lot of things...]
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.0
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 8cb0a5e9d3441962
2022-03-14 18:08:26.067 INFO 21892 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1647277706067
2022-03-14 18:08:26.068 INFO 21892 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Subscribed to partition(s): PEOPLE_W4BdDBEowY-1
2022-03-14 18:08:26.072 INFO 21892 --- [ -C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Seeking to offset 0 for partition PEOPLE_W4BdDBEowY-1
2022-03-14 18:08:26.081 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 2 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
2022-03-14 18:08:26.081 INFO 21892 --- [ -C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Cluster ID: ebyKy-RVSRmUDaaeQqMaQg
2022-03-14 18:18:04.882 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 5314 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
2022-03-14 18:18:04.997 WARN 21892 --- [ -C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-people-creator-2, groupId=people-creator] Error while fetching metadata with correlation id 5315 : {PEOPLE_W4BdDBEowY=LEADER_NOT_AVAILABLE}
How do I dynamically create a Kafka consumer for a topic? I think I'm doing this very wrong, but I searched a lot and really didn't find anything.
There are several answers here about dynamically creating containers...
Trigger one Kafka consumer by using values of another consumer In Spring Kafka
Kafka Consumer in spring can I re-assign partitions programmatically?
Create consumer dynamically spring kafka
Dynamically start and off KafkaListener just to load previous messages at the start of a session
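Separately, the LEADER_NOT_AVAILABLE warnings usually mean the broker does not yet know a leader for the requested partition, which is typically what you see when the topic does not exist (or is still being auto-created) at the moment the new container subscribes to it. A minimal sketch that creates the per-scenario topic explicitly before starting the container, assuming an AdminClient can reach the same broker; the address, partition count and replication factor are placeholders (the listener above seeks partition 1, so at least 2 partitions are needed):
// Create the per-scenario topic up front so the dynamic container finds a leader for its partition.
Map<String, Object> adminProps = Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(adminProps)) {
    admin.createTopics(List.of(new NewTopic("APP2_" + topic, 2, (short) 1)))
         .all()
         .get(); // blocks until the topic exists (or creation fails)
} catch (ExecutionException e) {
    // a wrapped TopicExistsException here just means another instance created the topic first
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}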

While Using Kafka-Client getting this type of logs on console

I am getting the logs below in my console. The message is published and received successfully, but this happens every time and the logs below keep printing continuously.
10:18:06.884 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-1, groupId=test] Sending asynchronous auto-commit of offsets {shayona-0=OffsetAndMetadata{offset=11349, leaderEpoch=0, metadata=''}}
10:18:06.884 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-1, groupId=test] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-test-1, correlationId=1093) and timeout 30000 to node 2147482646: {group_id=test,generation_id=18,member_id=consumer-test-1-52154059-bfce-41f8-b05e-2e6973910aa9,group_instance_id=null,topics=[{name=shayona,partitions=[{partition_index=0,committed_offset=11349,committed_leader_epoch=0,committed_metadata=,_tagged_fields={}}],_tagged_fields={}}],_tagged_fields={}}
10:18:06.886 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-1, groupId=test] Received OFFSET_COMMIT response from node 2147482646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-test-1, correlationId=1093): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='shayona', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])])
10:18:06.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-1, groupId=test] Committed offset 11349 for partition shayona-0
10:18:06.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-1, groupId=test] Completed asynchronous auto-commit of offsets {shayona-0=OffsetAndMetadata{offset=11349, leaderEpoch=0, metadata=''}}
10:18:07.177 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-1, groupId=test] Received FETCH response from node 1001 for request with header RequestHeader(apiKey=FETCH, apiVersion=12, clientId=consumer-test-1, correlationId=1092): org.apache.kafka.common.requests.FetchResponse#2c715e84
10:18:07.177 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-test-1, groupId=test] Node 1001 sent an incremental fetch response with throttleTimeMs = 0 for session 1022872780 with 0 response partition(s), 1 implied partition(s)
10:18:07.177 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-test-1, groupId=test] Added READ_UNCOMMITTED fetch request for partition shayona-0 at position FetchPosition{offset=11349, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1001 rack: null)], epoch=0}} to node localhost:9092 (id: 1001 rack: null)
10:18:07.177 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-test-1, groupId=test] Built incremental fetch (sessionId=1022872780, epoch=1035) for node 1001. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
10:18:07.177 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-test-1, groupId=test] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(shayona-0)) to broker localhost:9092 (id: 1001 rack: null)
10:18:07.177 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-1, groupId=test] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=12, clientId=consumer-test-1, correlationId=1094) and timeout 30000 to node 1001: {replica_id=-1,max_wait_ms=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1022872780,session_epoch=1035,topics=[],forgotten_topics_data=[],rack_id=,_tagged_fields={}}
Help me get out of this. Below is my consumer application.
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.setProperty("group.id", "test");
// auto-commit offsets every second (this is what produces the periodic OFFSET_COMMIT requests)
props.setProperty("enable.auto.commit", "true");
props.setProperty("auto.commit.interval.ms", "1000");
org.apache.kafka.clients.consumer.KafkaConsumer<String, String> consumer = new org.apache.kafka.clients.consumer.KafkaConsumer<>(props);
String[] topic = {"shayona"};
consumer.subscribe(Arrays.asList(topic));
// poll forever and print every record
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
}
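For what it's worth, these lines are ordinary DEBUG output from the Kafka client's internal loggers (ConsumerCoordinator, NetworkClient, Fetcher); the auto-commits and fetch requests they describe are normal consumer behavior, so nothing is failing. They appear because the logging backend has org.apache.kafka at DEBUG level. Assuming Logback is the SLF4J backend on the classpath, a minimal sketch to quiet them programmatically (the same effect is usually configured in logback.xml or application.properties instead):
// Raise the Kafka client loggers from DEBUG to INFO so the auto-commit and fetch
// chatter stops flooding the console. Assumes Logback is the SLF4J backend.
ch.qos.logback.classic.Logger kafkaLogger =
        (ch.qos.logback.classic.Logger) org.slf4j.LoggerFactory.getLogger("org.apache.kafka");
kafkaLogger.setLevel(ch.qos.logback.classic.Level.INFO);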

Spring boot cache Hazelcast return empty list of cache names and no metrics will display

I started working with the Hazelcast cache and I want to expose metrics for it, but I don't know how to do it.
My Java config:
@Configuration
public class HazelcastConfiguration {

    @Bean
    public Config config() {
        return new Config()
                .setInstanceName("hazelcast-instace")
                .addMapConfig(
                        new MapConfig()
                                .setName("testing")
                                .setMaxSizeConfig(new MaxSizeConfig(10, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                                .setEvictionPolicy(EvictionPolicy.LRU)
                                .setTimeToLiveSeconds(1000)
                                .setStatisticsEnabled(true)
                );
    }
}
During application startup, I see only these logs:
2019-11-30 19:56:01.579 INFO 13444 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [3.12.4] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2019-11-30 19:56:01.671 INFO 13444 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [3.12.4] Picked [192.168.43.2]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2019-11-30 19:56:01.694 INFO 13444 --- [ main] com.hazelcast.system : [192.168.43.2]:5701 [dev] [3.12.4] Hazelcast 3.12.4 (20191030 - eab1290) starting at [192.168.43.2]:5701
2019-11-30 19:56:01.695 INFO 13444 --- [ main] com.hazelcast.system : [192.168.43.2]:5701 [dev] [3.12.4] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
2019-11-30 19:56:02.037 INFO 13444 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [192.168.43.2]:5701 [dev] [3.12.4] Backpressure is disabled
2019-11-30 19:56:02.761 INFO 13444 --- [ main] com.hazelcast.instance.Node : [192.168.43.2]:5701 [dev] [3.12.4] Creating MulticastJoiner
2019-11-30 19:56:02.998 INFO 13444 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [192.168.43.2]:5701 [dev] [3.12.4] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2019-11-30 19:56:02.999 INFO 13444 --- [ main] c.h.internal.diagnostics.Diagnostics : [192.168.43.2]:5701 [dev] [3.12.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2019-11-30 19:56:03.007 INFO 13444 --- [ main] com.hazelcast.core.LifecycleService : [192.168.43.2]:5701 [dev] [3.12.4] [192.168.43.2]:5701 is STARTING
2019-11-30 19:56:05.085 INFO 13444 --- [ main] c.h.internal.cluster.ClusterService : [192.168.43.2]:5701 [dev] [3.12.4]
Members {size:1, ver:1} [
Member [192.168.43.2]:5701 - 6ed511ff-b20b-4875-9b39-2dc734d4a9aa this
]
2019-11-30 19:56:05.142 INFO 13444 --- [ main] com.hazelcast.core.LifecycleService : [192.168.43.2]:5701 [dev] [3.12.4] [192.168.43.2]:5701 is STARTED
2019-11-30 19:56:05.295 INFO 13444 --- [e.HealthMonitor] c.h.internal.diagnostics.HealthMonitor : [192.168.43.2]:5701 [dev] [3.12.4] processors=4, physical.memory.total=23,9G, physical.memory.free=11,8G, swap.space.total=27,2G, swap.space.free=10,3G, heap.memory.used=306,9M, heap.memory.free=357,1M, heap.memory.total=664,0M, heap.memory.max=5,3G, heap.memory.used/total=46,23%, heap.memory.used/max=5,63%, minor.gc.count=0, minor.gc.time=0ms, major.gc.count=0, major.gc.time=0ms, load.process=100,00%, load.system=100,00%, load.systemAverage=n/a thread.count=37, thread.peakCount=37, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=1, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0,00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
After autowiring the CacheManager class and calling getCacheNames on it, I see an empty list, and I don't know why.
And when the first entry is added to the cache, the log displays:
2019-11-30 20:11:12.760 INFO 16464 --- [nio-8080-exec-6] c.h.i.p.impl.PartitionStateManager : [192.168.43.2]:5701 [dev] [3.12.4] Initializing cluster partition table arrangement...
In my metrics config file I have this, and nothing displays in the metrics:
@Autowired
private CacheMetricsRegistrar cacheMetricsRegistrar;
Does anyone have an idea why it doesn't work?
The cache is initialized when you insert the first entry into it. That is why you don't see any cache in the CacheManager at the beginning. After you insert the first value you should see a HazelcastCache in cacheManager.caches.
The same goes for CacheMetricsRegistrar; I just tried it and I can see HazelcastCacheMeterBinderProvider in binderProviders.
For Spring Boot 2.x, create your registration component:
@Component
@AllArgsConstructor
public class CacheMetricsRegistrator {

    private final CacheMetricsRegistrar cacheMetricsRegistrar;
    private final CacheManager cacheManager;
    private final Config cacheConfig;

    @PostConstruct
    public void register() {
        this.cacheConfig.getMapConfigs().keySet().forEach(
                cacheName -> this.cacheMetricsRegistrar.bindCacheToRegistry(
                        this.cacheManager.getCache(cacheName))
        );
    }
}
And for your Hazelcast cache configuration:
@EnableCaching
@Configuration
public class HazelcastConfig {

    private MapConfig mapPortfolioCache() {
        return new MapConfig()
                .setName("my-entity-cache")
                .setEvictionConfig(new EvictionConfig().setMaxSizePolicy(MaxSizePolicy.FREE_HEAP_SIZE).setSize(200))
                .setTimeToLiveSeconds(60 * 15);
    }

    @Bean
    public Config hazelCastConfig() {
        Config config = new Config()
                .setInstanceName("my-application-hazelcast")
                .setNetworkConfig(new NetworkConfig().setJoin(new JoinConfig().setMulticastConfig(new MulticastConfig().setEnabled(false))))
                .addMapConfig(mapPortfolioCache());
        SubZero.useAsGlobalSerializer(config);
        return config;
    }
}
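Since the Hazelcast map (and therefore its metrics binding) only shows up once something has actually been cached, here is a minimal usage sketch that populates the "my-entity-cache" map defined above; the service, entity and lookup names are placeholders:
@Service
public class MyEntityService {

    // The first call for a given id creates the "my-entity-cache" Hazelcast map,
    // which is when the cache and its meter binding become visible.
    @Cacheable("my-entity-cache")
    public MyEntity findById(long id) {
        return loadFromDatabase(id); // hypothetical expensive lookup
    }

    private MyEntity loadFromDatabase(long id) {
        return new MyEntity(); // placeholder
    }
}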

kafka streams do not run with dynamically generated classes

I want to start a stream that deserializes a dynamically created class. This bean is created using reflection and a URLClassLoader, with the class source given as a String parameter, but the Kafka Streams API doesn't recognize my new class.
The streams work perfectly with pre-created beans, but close automatically when the dynamic one is used. The deserializer was created with Jackson and also works on its own.
Here is the class parser code:
@SuppressWarnings("unchecked")
public static Class<?> getClassFromSource(String className, String sourceCode)
        throws IOException, ClassNotFoundException {
    /*
     * create an empty source file
     */
    File sourceFile = new File(com.google.common.io.Files.createTempDir(), className + ".java");
    sourceFile.deleteOnExit();
    /*
     * generate the source code, using the source filename as the class name, and
     * write the source code into the source file
     */
    try (FileWriter writer = new FileWriter(sourceFile)) {
        writer.write(sourceCode);
    }
    /*
     * compile the source file
     */
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    File parentDirectory = null;
    try (StandardJavaFileManager fileManager = compiler.getStandardFileManager(null, null, null)) {
        parentDirectory = sourceFile.getParentFile();
        fileManager.setLocation(StandardLocation.CLASS_OUTPUT, Arrays.asList(parentDirectory));
        Iterable<? extends JavaFileObject> compilationUnits = fileManager
                .getJavaFileObjectsFromFiles(Arrays.asList(sourceFile));
        compiler.getTask(null, fileManager, null, null, null, compilationUnits).call();
    }
    /*
     * load the compiled class
     */
    try (URLClassLoader classLoader = URLClassLoader.newInstance(new URL[] { parentDirectory.toURI().toURL() })) {
        return (Class<?>) classLoader.loadClass(className);
    }
}
First I instantiate my Serdes, which takes a Class as a parameter:
// dynamically generated class compiled from a source string
Class clazz = getClassFromSource("DynamicClass", source);
// Serdes for the created class; the deserializer implements org.apache.kafka.common.serialization.Deserializer
DynamicDeserializer deserializer = new DynamicDeserializer(clazz);
DynamicSerializer serializer = new DynamicSerializer();
Serde<?> encryptedSerde = Serdes.serdeFrom(serializer, deserializer);
And then I start the stream topology that uses this Serdes:
StreamsBuilder builder = new StreamsBuilder();
KTable<String, Long> dynamicStream = builder
        .stream(topicName, Consumed.with(Serdes.String(), encryptedSerde))
        .groupByKey()
        .count();
dynamicStream.toStream().to(outputTopicName, Produced.with(Serdes.String(), Serdes.Long()));
The stream topology should execute normally, but it always generates this error:
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.Target' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.layout' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout.layout.ConversionPattern' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'stream.restart.application' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'aes.key.path' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'path.to.listening' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.appender.stdout' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'admin.retries' was supplied but isn't a known config.
2019-09-01 14:54:16 WARN ConsumerConfig:355 - The configuration 'log4j.rootLogger' was supplied but isn't a known config.
2019-09-01 14:54:16 INFO AppInfoParser:117 - Kafka version: 2.3.0
2019-09-01 14:54:16 INFO AppInfoParser:118 - Kafka commitId: fc1aaa116b661c8a
2019-09-01 14:54:16 INFO AppInfoParser:119 - Kafka startTimeMs: 1567360456724
2019-09-01 14:54:16 INFO KafkaStreams:800 - stream-client [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72] Started Streams client
2019-09-01 14:54:16 INFO StreamThread:740 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Starting
2019-09-01 14:54:16 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from CREATED to RUNNING
2019-09-01 14:54:16 INFO KafkaConsumer:1027 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Subscribed to pattern: 'DynamicBean|streamingbean-test-20190901145412544-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition'
2019-09-01 14:54:17 INFO Metadata:266 - [Producer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-producer] Cluster ID: tp7OBhwVRQqT2NpPlL55_Q
2019-09-01 14:54:17 INFO Metadata:266 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Cluster ID: tp7OBhwVRQqT2NpPlL55_Q
2019-09-01 14:54:17 INFO AbstractCoordinator:728 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Discovered group coordinator AcerDerick:9092 (id: 2147483647 rack: null)
2019-09-01 14:54:17 INFO ConsumerCoordinator:476 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Revoking previously assigned partitions []
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED
2019-09-01 14:54:17 INFO KafkaStreams:257 - stream-client [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72] State transition from RUNNING to REBALANCING
2019-09-01 14:54:17 INFO KafkaConsumer:1068 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2019-09-01 14:54:17 INFO StreamThread:324 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] partition revocation took 0 ms.
suspended active tasks: []
suspended standby tasks: []
2019-09-01 14:54:17 INFO AbstractCoordinator:505 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] (Re-)joining group
2019-09-01 14:54:17 ERROR StreamsPartitionAssignor:354 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer] DynamicClass is unknown yet during rebalance, please make sure they have been pre-created before starting the Streams application.
2019-09-01 14:54:17 INFO AbstractCoordinator:469 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Successfully joined group with generation 1
2019-09-01 14:54:17 INFO ConsumerCoordinator:283 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-consumer, groupId=streamingbean-test-20190901145412544] Setting newly assigned partitions:
2019-09-01 14:54:17 INFO StreamThread:1164 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Informed to shut down
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from PARTITIONS_REVOKED to PENDING_SHUTDOWN
2019-09-01 14:54:17 INFO StreamThread:1178 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Shutting down
2019-09-01 14:54:17 INFO KafkaConsumer:1068 - [Consumer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2019-09-01 14:54:17 INFO KafkaProducer:1153 - [Producer clientId=streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2019-09-01 14:54:17 INFO StreamThread:207 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2019-09-01 14:54:17 INFO StreamThread:1198 - stream-thread [streamingbean-test-20190901145412544-15574162-7649-4c98-acd2-7a68ced01d72-StreamThread-1] Shutdown complete
After some time, I fixed this problem with a simple, though maybe not the most elegant, solution: I first used a JSON String deserializer to get the data from the topic and then passed it to another deserializer that converts it to my dynamic object.
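A minimal sketch of that workaround, assuming the topic is consumed as plain JSON strings and clazz is the dynamically compiled class returned by getClassFromSource (names follow the snippets above):
// Read the topic as plain strings, then convert each value to the dynamic class with Jackson.
ObjectMapper mapper = new ObjectMapper();
KStream<String, String> raw = builder.stream(topicName, Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, Object> typed = raw.mapValues(json -> {
    try {
        return mapper.readValue(json, clazz);
    } catch (IOException e) {
        throw new RuntimeException("Failed to deserialize into " + clazz.getName(), e);
    }
});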
