Kafka consumer in Spring Cloud Stream doesn't start - Java

Here is the configuration of my consumer:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        input:
          destination: greeting
          content-type: application/json
      kafka:
        binder:
          brokers: kafka
          zkNodes: zookeeper
The code of my app:
@SpringBootApplication
@EnableIntegration
@EnableBinding(CommandSink.class)
public class KafkaTesterApplication {

    private static Logger logger = LogManager.getLogger(KafkaTesterApplication.class);

    /**
     * @param args
     */
    public static void main(String[] args) {
        SpringApplication.run(KafkaTesterApplication.class, args);
    }

    @ServiceActivator(inputChannel = "input")
    public void receiveMessage(String message) {
        logger.debug("receive {}", message);
    }
}
And the sink interface:
public interface CommandSink {

    public static final String CHANNEL = "input";

    @Input(CommandSink.CHANNEL)
    SubscribableChannel command();
}
It looks like the consumer doesn't connect to ZooKeeper and Kafka.
Any idea?

OK, we found the solution.
We don't know why, but a topic was missing. The most curious part of the problem was that a consumer using ZooKeeper (the old style) could still consume messages.
The missing topic was
__consumer_offsets
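(Not part of the original answer, but as a quick way to verify that the topic exists: a minimal sketch using the Kafka AdminClient; the broker address is an assumption.)

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListTopicsOptions;

public class TopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // listInternal(true) includes internal topics such as __consumer_offsets
            Set<String> topics = admin.listTopics(new ListTopicsOptions().listInternal(true)).names().get();
            System.out.println("__consumer_offsets present: " + topics.contains("__consumer_offsets"));
        }
    }
}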

Related

Why is the reactive code not executed in Spring Boot?

I am using Spring Boot to implement reactive microservices. However, the reactive code in the lambda is never executed. My implementation is below. publishEventScheduler is created during application start-up. I am using this code together with Kafka to send an event to the user microservice to create a user.
MainServiceApplication.java
public class MainServiceApplication {

    private final Integer threadPoolSize;
    private final Integer taskQueueSize;

    public MainServiceApplication(
            @Value("${app.threadPoolSize:10}") Integer threadPoolSize,
            @Value("${app.taskQueueSize:100}") Integer taskQueueSize) {
        this.threadPoolSize = threadPoolSize;
        this.taskQueueSize = taskQueueSize;
    }

    @Bean
    public Scheduler publishEventScheduler() {
        LOG.info("Creating a message scheduler with connectionPoolSize = {}", threadPoolSize);
        return Schedulers.newBoundedElastic(threadPoolSize, taskQueueSize, "publish-pool");
    }

    public static void main(String[] args) {
        SpringApplication.run(MainServiceApplication.class, args);
    }
}
MainIntegration.java
The function createUser() is called with a POST request from Postman (a breakpoint stops at subscribeOn(publishEventScheduler)), but sendMessageUser() is never executed (a breakpoint inside that method is never hit).
@Component
public class MainIntegration implements UserService, TodoService {

    private final String todoServiceUrl;
    private final String userServiceUrl;
    private final WebClient webClient;
    private final StreamBridge streamBridge;
    private final Scheduler publishEventScheduler;

    public MainIntegration(
            @Qualifier("publishEventScheduler") Scheduler publishEventScheduler,
            WebClient.Builder webClient,
            StreamBridge streamBridge,
            @Value("${app.user-service.host}") String userServiceHost,
            @Value("${app.user-service.port}") int userServicePort
    ) {
        this.publishEventScheduler = publishEventScheduler;
        this.webClient = webClient.build();
        this.streamBridge = streamBridge;
        userServiceUrl = "http://" + userServiceHost + ":" + userServicePort + "/user";
    }

    @Override
    public Mono<User> createUser(User body) {
        return Mono.fromCallable(() -> {
            sendMessageUser("user-out-0", new Event<Event.Type, String, User>(Event.Type.CREATE, body.getUserName(), body));
            return body;
        }).subscribeOn(publishEventScheduler);
    }

    private void sendMessageUser(String bindingName, Event<Type, String, User> event) {
        LOG.debug("Sending a {} message to {}", bindingName, event.getEventType());
        Message<Event<Type, String, User>> message = MessageBuilder.withPayload(event)
                .setHeader("partitionKey", event.getKey())
                .build();
        streamBridge.send(bindingName, message);
    }
}
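(An observation on the code above, not an accepted answer: Mono.fromCallable is lazy, so the callable body, and therefore sendMessageUser(), only runs when something subscribes to the Mono returned by createUser(). A minimal standalone sketch of that behaviour; the class name is made up.)

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class LazyMonoDemo {
    public static void main(String[] args) throws InterruptedException {
        Mono<String> mono = Mono.fromCallable(() -> {
            System.out.println("callable executed"); // does not print until subscription
            return "done";
        }).subscribeOn(Schedulers.boundedElastic());

        mono.subscribe(System.out::println); // only now does the callable run, on the bounded-elastic scheduler

        Thread.sleep(500); // give the scheduler thread time to finish before the JVM exits
    }
}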
application.yaml
server.port: 7000
server.error.include-message: always

app:
  user-service:
    host: localhost
    port: 7002

spring:
  cloud:
    stream:
      default-binder: kafka
      default-contentType: application/json
      bindings:
        user-out-0:
          destination: user-service
          producer:
            required-groups: auditGroup
      kafka:
        binder:
          brokers: 127.0.0.1
          defaultBrokerPort: 2181
  rabbitmq:
    host: 127.0.0.1
    port: 5672
    username: guest
    password: guest

Spring Kafka - missing information about __TypeId__ in the consumer headers

I'm exploring the Spring Kafka API (spring-boot-starter-parent version 2.7.4) and found strange behavior in a consumer with the standard @KafkaListener annotation.
I produce messages with KafkaTemplate and add a custom header __ProducerApp__; the standard __TypeId__ header is also present because it is added automatically by Spring.
Properties:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: consumer-localhost
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties.spring.json.trusted.packages: '*'
Producer class:
@Component
public class KafkaExampleProducer {

    private final KafkaTemplate<String, KafkaPayload> kafkaTemplate;

    public KafkaExampleProducer(KafkaTemplate<String, KafkaPayload> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendPayload(KafkaPayload payload) {
        ProducerRecord<String, KafkaPayload> record = new ProducerRecord<>(
                KafkaExampleTopicConfig.EXAMPLE_TOPIC_NAME, UUID.randomUUID().toString(), payload
        );
        record.headers().add("__ProducerApp__", "ExampleApp-localhost".getBytes(StandardCharsets.UTF_8));
        kafkaTemplate.send(record);
    }
}
I can see both headers populated in the web UI for Apache Kafka. But in the consumer, after receiving a message from the topic, I see only the __ProducerApp__ header.
Listener class:
@Component
public class KafkaExampleListener {

    private final Logger logger = LoggerFactory.getLogger(KafkaExampleListener.class);

    @KafkaListener(topics = KafkaExampleTopicConfig.EXAMPLE_TOPIC_NAME)
    public void listenMessage(
            ConsumerRecord<String, KafkaPayload> consumerRecord
    ) {
        logger.info("Received message:\nKey: {}, type: {}, producer: {}",
                consumerRecord.key(),
                extractHeaderValue(consumerRecord.headers(), "__TypeId__"),
                extractHeaderValue(consumerRecord.headers(), "__ProducerApp__")
        );
    }

    private String extractHeaderValue(Headers headers, String headerId) {
        return StreamSupport.stream(headers.spliterator(), false)
                .filter(header -> header.key().equals(headerId))
                .findFirst()
                .map(header -> new String(header.value()))
                .orElse("N/A");
    }
}
The console output shows that the message is received without the __TypeId__ header:
Received message:
Key: 3e8ee64e-b691-48e1-98b1-614291cc0451, type: N/A, producer: ExampleApp-localhost
You did not include your bean configuration, but my guess is that you are missing the correct deserializer setup. Add:

@Bean
RecordMessageConverter messageConverter() {
    return new StringJsonMessageConverter();
}

Also, use a StringDeserializer instead of a JsonDeserializer as your consumer value-deserializer.
The JsonDeserializer strips the type headers by default to avoid polluting the application with internals.
/**
 * Set to false to retain type information headers after deserialization.
 * Default true.
 * @param removeTypeHeaders true to remove headers.
 * @since 2.2
 */
public void setRemoveTypeHeaders(boolean removeTypeHeaders) {
    this.removeTypeHeaders = removeTypeHeaders;
    this.setterCalled = true;
}
Or set spring.json.remove.type.headers: false.
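(If you prefer to keep the JsonDeserializer, here is a hedged sketch, not from the original answers, of configuring it programmatically so the type headers survive; KafkaPayload and the broker address come from the question, the bean wiring itself is an assumption.)

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, KafkaPayload> consumerFactory() {
        JsonDeserializer<KafkaPayload> valueDeserializer = new JsonDeserializer<>(KafkaPayload.class);
        valueDeserializer.addTrustedPackages("*");
        valueDeserializer.setRemoveTypeHeaders(false); // keep __TypeId__ on the ConsumerRecord

        Map<String, Object> props = Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ConsumerConfig.GROUP_ID_CONFIG, "consumer-localhost");

        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), valueDeserializer);
    }
}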

How to implement a custom Kafka Partitioner using Spring Cloud Stream

I am trying to implement a custom Kafka Partitioner using Spring Cloud Stream bindings. I would like to custom-partition only the user topic and not do anything with the company topic (Kafka will use the DefaultPartitioner in that case).
My bindings configuration:
spring:
  cloud:
    stream:
      bindings:
        comp-out:
          destination: company
          contentType: application/json
        user-out:
          destination: user
          contentType: application/json
As per the reference documentation: https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/2.1.0.RC4/single/spring-cloud-stream-binder-kafka.html#_partitioning_with_the_kafka_binder
I modified the configuration to this:
spring:
  cloud:
    stream:
      bindings:
        comp-out:
          destination: company
          contentType: application/json
        user-out:
          destination: user
          contentType: application/json
          producer:
            partitioned: true
            partitionSelectorClass: config.UserPartitioner
I post the message into the stream using this:
public void postUserStream(User user) throws ServiceException {
    try {
        LOG.info("Posting User {} into Kafka stream...", user);
        MessageChannel messageChannel = messageStreams.outboundUser();
        messageChannel
                .send(MessageBuilder.withPayload(user)
                        .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    } catch (Exception ex) {
        LOG.error("Error while populating User stream into Kafka.. ", ex);
        throw ex;
    }
}
My UserPartitioner Class:
public class UserPartitioner extends DefaultPartitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes,
            Cluster cluster) {
        String partitionKey = null;
        if (Objects.nonNull(value)) {
            User user = (User) value;
            partitionKey = String.valueOf(user.getCompanyId()) + "_" + String.valueOf(user.getId());
            keyBytes = partitionKey.getBytes();
        }
        return super.partition(topic, partitionKey, keyBytes, value, valueBytes, cluster);
    }
}
I end up receiving the following exception:
Description:
Failed to bind properties under 'spring.cloud.stream.bindings.user-out.producer' to org.springframework.cloud.stream.binder.ProducerProperties:
Property: spring.cloud.stream.bindings.user-out.producer.partitioned
Value: true
Origin: "spring.cloud.stream.bindings.user-out.producer.partitioned" from property source "bootstrapProperties"
Reason: No setter found for property: partitioned
Action:
Update your application's configuration
Any reference link on how to set up a custom Partitioner using message binders would be helpful.
Edit: based on the documentation, I tried the steps below as well:
user-out:
  destination: user
  contentType: application/json
  producer:
    partitionKeyExtractorClass: config.SimpleUserPartitioner
@Component
public class SimpleUserPartitioner implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        if (message.getPayload() instanceof BaseUser) {
            BaseUser user = (BaseUser) message.getPayload();
            return user.getId();
        }
        return 10;
    }
}
Update 2: the solution that worked for me was to add partition-count to the binding and set autoAddPartitions to true on the binder:
spring:
  logging:
    level: info
  cloud:
    stream:
      bindings:
        user-out:
          destination: user
          contentType: application/json
          producer:
            partition-key-expression: headers['partitionKey']
            partition-count: 4

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          autoAddPartitions: true
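(For completeness, a hedged sketch, not from the original post, of a sender that sets the partitionKey header which the partition-key-expression above evaluates; the User accessors are taken from the question, the channel wiring is assumed.)

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.MimeTypeUtils;

public class UserStreamSender {

    private final MessageChannel userOutChannel; // the channel bound to user-out

    public UserStreamSender(MessageChannel userOutChannel) {
        this.userOutChannel = userOutChannel;
    }

    public void postUser(User user) {
        // headers['partitionKey'] in the binding configuration resolves to this header value
        userOutChannel.send(MessageBuilder.withPayload(user)
                .setHeader("partitionKey", user.getCompanyId() + "_" + user.getId())
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
                .build());
    }
}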
There is no property partitioned; the getter depends on other properties...
public boolean isPartitioned() {
    return this.partitionKeyExpression != null
            || this.partitionKeyExtractorName != null;
}
partitionSelectorClass: config.UserPartitioner
The UserPartitioner is a Kafka Partitioner - it determines which consumers get which partitions (on the consumer side)
The partitionSelectorClass has to be a PartitionSelectorStrategy - it determines which partition a record is sent to (on the producer side).
These are completely different objects.
If you really want to customize the way partitions are distributed across consumer instances, that is a Kafka concern and has nothing to do with Spring.
Furthermore, all consumer bindings in the same binder will use the same Partitioner. You would have to configure multiple binders to have different Partitioners.
Given your question, I think you are simply confusing Partitioner with PartitionSelectorStrategy and you need the latter.
Also note: partitionSelectorClass has been deprecated for a while now and has been removed in the current master (it won't be available in 3.0.0) in favor of partitionSelectorName - https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.0.M1/spring-cloud-stream.html#spring-cloud-stream-overview-partitioning
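(A hedged sketch of what a producer-side selector could look like; the class name and hashing logic are assumptions, not code from the original answer.)

import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.stereotype.Component;

@Component("userPartitionSelector")
public class UserPartitionSelector implements PartitionSelectorStrategy {

    @Override
    public int selectPartition(Object key, int partitionCount) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() & Integer.MAX_VALUE) % partitionCount;
    }
}

In newer versions it would then be referenced by bean name, e.g. spring.cloud.stream.bindings.user-out.producer.partitionSelectorName: userPartitionSelector.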

Spring Cloud Stream custom binder not registered; using @Configuration disables the Kafka binder

I'm trying to make a custom Spring Cloud Stream binder, but it just won't register itself.
Binder Implementation:
public class DPSBinder implements Binder<SubscribableChannel, ConsumerProperties, ProducerProperties> {

    private DecisionPersistenceServiceClient dpsClient;
    private MessageHandler dpsClientConsumerMessageHandler = null;

    public DPSBinder(DecisionPersistenceServiceClient dpsClient) {
        this.dpsClient = dpsClient;
    }

    @Override
    public Binding<SubscribableChannel> bindConsumer(String name, String group, SubscribableChannel inboundBindTarget,
            ConsumerProperties consumerProperties) {
        return null;
    }

    @Override
    public Binding<SubscribableChannel> bindProducer(String name, SubscribableChannel outboundBindTarget,
            ProducerProperties producerProperties) {
        switch (name) {
        case "PERSIST_POST":
            this.dpsClientConsumerMessageHandler = message -> dpsClient.persist((DPAPayload) message.getPayload());
            break;
        default:
            this.dpsClientConsumerMessageHandler = null;
        }
        if (this.dpsClientConsumerMessageHandler != null)
            this.subscribe(outboundBindTarget);
        return () -> this.dpsClientConsumerMessageHandler = null;
    }

    public void subscribe(SubscribableChannel outboundBindTarget) {
        outboundBindTarget.subscribe(this.dpsClientConsumerMessageHandler);
    }
}
configuration class:
@Configuration
public class DPSBinderConfiguration {

    @Bean
    public DPSBinder dpsBinder(DecisionPersistenceServiceClient dpsClient) {
        return new DPSBinder(dpsClient);
    }
}
spring.binders file:
dps:something.something.DPSBinderConfiguration
application.yml
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: DPP_EVENTS
          group: dpp-local
          binder: kafka
        output:
          destination: PERSIST_POST
          binder: dps
      binders:
        kafka:
          type: kafka
        dps:
          type: dps
I've followed the Spring Cloud Stream guidelines for creating a custom binder, but this is not working. Moreover, using @Configuration for creating the binder beans disables the Kafka binder that I've added on the classpath.
I found the issue. @Configuration should not be used on the class where the binder bean is declared.
There were also some logical issues in my binder implementation, which I fixed.
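(A hedged sketch of the fix described above; the class and bean match the question, dropping @Configuration is the point.)

import org.springframework.context.annotation.Bean;

// Referenced from META-INF/spring.binders (dps:something.something.DPSBinderConfiguration),
// so it must not be annotated with @Configuration or picked up by component scanning.
public class DPSBinderConfiguration {

    @Bean
    public DPSBinder dpsBinder(DecisionPersistenceServiceClient dpsClient) {
        return new DPSBinder(dpsClient);
    }
}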

Listening to many Kafka Streams in Spring

I'm developing an application with an event-driven architecture.
I'm trying to model the following flow of events:
UserAccountCreated (user-management-events) -> sending an e-mail -> MailNotificationSent (notification-service-events)
The notification-service application executes the whole flow. It waits for the UserAccountCreated event by listening to the user-management-events topic. When the event is received, the application sends the email and publishes a new event, MailNotificationSent, to the notification-service-events topic.
I have no problem listening to the first event (UserAccountCreated) - the application receives it and performs the rest of the flow. I also have no problem publishing the MailNotificationSent event. Unfortunately, for development purposes, I also want to listen to the MailNotificationSent event in the notification service, so the application has to listen to both UserAccountCreated and MailNotificationSent. This is where I can't make it work.
Let's take a look at the implementation:
NotificationStreams:
public interface NotificationStreams {

    String INPUT = "notification-service-events-in";
    String OUTPUT = "notification-service-events-out";

    @Input(INPUT)
    SubscribableChannel inboundEvents();

    @Output(OUTPUT)
    MessageChannel outboundEvents();
}
NotificationEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class NotificationEventsListener {

    @StreamListener(NotificationStreams.INPUT)
    public void notificationServiceEventsIn(Flux<ActivationLinkSent> input) {
        input.subscribe(event -> {
            log.info("Received event ActivationLinkSent: " + event.toString());
        });
    }
}
UserManagementEvents:
public interface UserManagementEvents {

    String INPUT = "user-management-events";

    @Input(INPUT)
    SubscribableChannel inboundEvents();
}
UserManagementEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class UserManagementEventsListener {

    private final Gate gate;

    @StreamListener(UserManagementEvents.INPUT)
    public void userManagementEvents(Flux<UserAccountCreated> input) {
        input.subscribe(event -> {
            log.info("Received event UserAccountCreated: " + event.toString());
            gate.dispatch(SendActivationLink.builder()
                    .email(event.getEmail())
                    .username(event.getUsername())
                    .build()
            );
        });
    }
}
KafkaStreamsConfig:
@EnableBinding(value = {NotificationStreams.class, UserManagementEvents.class})
public class KafkaStreamsConfig {
}
EventPublisher:
@Slf4j
@RequiredArgsConstructor
@Component
public class EventPublisher {

    private final NotificationStreams eventsStreams;
    private final AvroMessageBuilder messageBuilder;

    public void publish(Event event) {
        MessageChannel messageChannel = eventsStreams.outboundEvents();
        AvroActivationLinkSent activationLinkSent = new AvroActivationLinkSent();
        activationLinkSent.setEmail(((ActivationLinkSent) event).getEmail());
        activationLinkSent.setUsername(((ActivationLinkSent) event).getUsername() + "-domain");
        activationLinkSent.setTimestamp(System.currentTimeMillis());
        messageChannel.send(messageBuilder.buildMessage(activationLinkSent));
    }
}
application config:
spring:
  devtools:
    restart:
      enabled: true
  cloud:
    stream:
      default:
        contentType: application/*+avro
      kafka:
        binder:
          brokers: localhost:9092
      schemaRegistryClient:
        endpoint: http://localhost:8990
  kafka:
    consumer:
      group-id: notification-group
      auto-offset-reset: earliest

kafka:
  bootstrap:
    servers: localhost:9092
The application seems to ignore the notification-service-events listener. It works when listening to only one stream.
I'm almost 100% sure that this is not an issue with publishing the event, because I've connected manually to Kafka and verified that messages are published properly:
kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic notification-service-events-out --from-beginning
Do you have any idea what else I should check? Is there any additional configuration needed on the Spring side?
I've found where the problem was.
I was missing the bindings configuration. In the application properties, I should have added the following lines:
cloud:
  stream:
    bindings:
      notification-service-events-in:
        destination: notification-service-events
      notification-service-events-out:
        destination: notification-service-events
      user-management-events-in:
        destination: user-management-events
In the user-management-service I didn't have such a problem because I used a different property:
cloud:
  stream:
    default:
      contentType: application/*+avro
      destination: user-management-events
