@KafkaListener with multiple independent topics processing - java

Software used:
Java version: 8
Spring Boot version: 2.4.0
Spring Kafka version: 2.7.2
I have this listener method in my Spring application:
@KafkaListener(topics = "#{consumerSpring.topics}", groupId = "#{consumerSpring.consumerId}", concurrency = "#{consumerSpring.recommendedConcurrency}")
public void listenKafkbooiaTopic(@Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, @Payload String message, Acknowledgment ack) throws Exception {
    ConsumerSpring consumer = this.consumerSpring();
    //
    //
    KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
            topicName,
            consumer.getConsumerId(),
            message);
    if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
        ack.acknowledge();
    } else {
        ack.nack(5 * 1000);
    }
}
#{consumerSpring.topics} returns
{"topic1", "topic2", "topic3"}
#{consumerSpring.consumerId} returns:
myConsumer
#{consumerSpring.recommendedConcurrency} returns:
3
OK! This is working fine, but I need to isolate these topics. For example:
TopicA is stuck on a fatal error and keeps calling:
ack.nack(5 * 1000);
But TopicB and TopicC aren't stuck, so I need those topics to continue processing normally.
Basically, I need the same behavior as if I had declared two separate listeners, for example:
#KafkaListener(topics="topica", groupId="#{consumerSpring.consumerId}")
public void listenerTopicB(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
#KafkaListener(topics="topicb", groupId="#{consumerSpring.consumerId}")
public void listenerTopicA(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}

There is no need for multiple methods.
You can put multiple @KafkaListener annotations on a single method and each one will create a separate container.
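A minimal sketch of what that could look like, reusing the question's SpEL expressions (@KafkaListener is repeatable, so each annotation below gets its own listener container and a nack() on one topic no longer blocks the others):
// Sketch only - one repeated @KafkaListener per topic instead of a single topic list.
@KafkaListener(topics = "topic1", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(topics = "topic2", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(topics = "topic3", groupId = "#{consumerSpring.consumerId}")
public void listenKafkaTopic(@Header(KafkaHeaders.RECEIVED_TOPIC) String topicName,
                             @Payload String message, Acknowledgment ack) throws Exception {
    // same body as in the question; each annotation creates an independent container,
    // so ack.nack(...) only pauses the container of the topic that failed
}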

Related

Polled Consumer With Functional Programming & Stream bridge

I'm using Spring Cloud Stream with a Kafka broker for microservice inter-communication. As part of that, StreamBridge is used to send the message, which is fine.
But the message should not be consumed immediately; it should only be consumed once a condition is satisfied.
From the documentation, I understand that I need to use a polled consumer for this (do correct me if I'm mistaken).
This is what I've tried from my understanding of the documentation.
application.properties
spring.cloud.stream.pollable-source = consumeResponse
spring.cloud.stream.function.definition = consumeResponse
#stream bridge
spring.cloud.stream.bindings.outputchannel1.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.outputchannel1.binder= kafka1
#polled Consumer
spring.cloud.stream.bindings.consumeResponse-in-0.binder= kafka1
spring.cloud.stream.bindings.consumeResponse-in-0.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.consumeResponse-in-0.group = consumer_cloud_stream1
spring.cloud.stream.binders.kafka1.type=kafka
MainApplication.java
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        // produce messages
        for (int i = 0; i < 5; i++) {
            streamBridge.send("outputchannel1", "msg" + i);
            System.out.println("Request :: " + "msg" + i);
        }
    };
}

@Bean
public Consumer<String> consumeResponse() {
    return (response) -> {
        // consume message
        System.out.println("Response :: " + response);
    };
}

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {
    return args -> {
        while (someCondition) { // some condition that checks whether or not to consume the message
            try {
                // condition satisfied, so forward message to consumer
                if (!destIn.poll(m -> {
                    String newPayload = ((String) m.getPayload());
                    destOut.send(new GenericMessage<>(newPayload));
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}
But this throws the following exception:
Parameter 1 of method poller in com.MainApplication required a single bean, but 2 were found:
- nullChannel: defined in null
- errorChannel: defined in null
I'd appreciate it if someone could help me out here or point me towards a working example for the same.
Spring boot version: 2.6.4,
Spring cloud version: 2021.0.1
Why don't you just inject the StreamBridge into your runner instead of the message channel?
By default, stream bridge output channels are created on-demand (first send).
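A sketch of that suggestion, not from the original answer: keep the same polling loop but inject StreamBridge instead of a MessageChannel, so the context no longer has to choose between nullChannel and errorChannel. The target binding name "forwarded-out-0" below is only an assumption for illustration.
@Bean
public ApplicationRunner poller(PollableMessageSource destIn, StreamBridge streamBridge) {
    return args -> {
        while (someCondition) { // same gating condition as in the question
            try {
                if (!destIn.poll(m -> {
                    String newPayload = (String) m.getPayload();
                    // forward through StreamBridge; "forwarded-out-0" is a hypothetical binding name
                    streamBridge.send("forwarded-out-0", newPayload);
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}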

Why does pulling messages return nothing from Google Pub/Sub?

I have a subscription VIEW_TOPIC with a pull strategy. Why can't I see any messages, even though there are 7 pending messages? I cannot figure out what I am missing. By the way, I'm running the subscriber on GCP Kubernetes, and I have also set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Subscriber configuration
private Subscriber buildSubscriber() {
    try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
        TopicName topicName = TopicName.of(projectId, topic);
        ProjectSubscriptionName subscriptionName =
                ProjectSubscriptionName.of(projectId, subscriptionId);
        // Create a pull subscription with default acknowledgement deadline of 10 seconds.
        // Messages not successfully acknowledged within 10 seconds will get resent by the server.
        Subscription subscription =
                subscriptionAdminClient.createSubscription(
                        subscriptionName, topicName, PushConfig.getDefaultInstance(), 10);
        System.out.println("Created pull subscription: " + subscription.getName());
    } catch (IOException e) {
        LOGGER.error("Cannot create pull subscription");
    } catch (AlreadyExistsException existsException) {
        LOGGER.warn("Subscription already created");
    }
    ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
    LOGGER.info("Subscribe topic: " + topic + " | SubscriptionId: " + subscriptionId);
    // default is 4 * number of processors
    ExecutorProvider executorProvider = InstantiatingExecutorProvider.newBuilder().build();
    Subscriber.Builder subscriberBuilder = Subscriber.newBuilder(subscriptionName, new MessageReceiverImpl())
            .setExecutorProvider(executorProvider);
    // The subscriber will pause the message stream and stop receiving more messages from the
    // server if any one of the conditions is met.
    FlowControlSettings flowControlSettings =
            FlowControlSettings.newBuilder()
                    .setMaxOutstandingElementCount(100L)
                    // the maximum size of messages the subscriber
                    // receives before pausing the message stream.
                    // 10 MiB
                    .setMaxOutstandingRequestBytes(10L * 1024L * 1024L)
                    .build();
    subscriberBuilder.setFlowControlSettings(flowControlSettings);
    Subscriber subscriber = subscriberBuilder.build();
    subscriber.addListener(new ApiService.Listener() {
        @Override
        public void failed(ApiService.State from, Throwable failure) {
            LOGGER.error(from, failure);
        }
    }, MoreExecutors.directExecutor());
    return subscriber;
}
Subscriber
public void startSubscribeMessage() {
    LOGGER.info("Begin subscribe topic " + topic);
    this.subscriber.startAsync().awaitRunning();
    LOGGER.info("Subscriber start successfully!!!");
}

public class MessageReceiverImpl implements MessageReceiver {
    private static final Logger LOGGER = Logger.getLogger(MessageReceiverImpl.class);
    private final LogSave logSave = MatchSave.getInstance();

    @Override
    public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
        ByteString data = message.getData();
        // Get the schema encoding type.
        String encoding = message.getAttributesMap().get("googclient_schemaencoding");
        Req.LogReq logReqMsg = null;
        try {
            switch (encoding) {
                case "BINARY":
                    logReqMsg = Req.LogReq.parseFrom(data);
                    break;
                case "JSON":
                    Req.LogReq.Builder msgBuilder = Req.LogReq.newBuilder();
                    JsonFormat.parser().merge(data.toStringUtf8(), msgBuilder);
                    logReqMsg = msgBuilder.build();
                    break;
            }
            LOGGER.info(JsonFormat.printer().omittingInsignificantWhitespace().print(logReqMsg));
            logSave.addLogMsg(battleLogMsg);
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
        consumer.ack();
    }
}
Req.LogReq is a proto message. My dependencies:
// google cloud
implementation platform('com.google.cloud:libraries-bom:22.0.0')
implementation 'com.google.cloud:google-cloud-pubsub'
implementation group: 'com.google.protobuf', name: 'protobuf-java-util', version: '3.17.2'
The call logSave.addLogMsg(battleLogMsg); adds the message to a CopyOnWriteArrayList.

Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder

I am consuming batches in Kafka. Retry is not supported by the Spring Cloud Stream Kafka binder in batch mode, but the documentation says you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve similar functionality to retry in the binder.
I tried that with a SeekToCurrentBatchErrorHandler, but it retries more than the configured limit of 3 times.
How can I do that correctly? I would like to retry the whole batch.
Also, how can I send the whole batch to a DLQ topic? For a record listener I used to check whether deliveryAttempt (retry) reached 3 and then send the record to the DLQ topic; see the listener below.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example; I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler
                    = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
                           @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
                           @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ",
                            "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. I have attached screenshots of the import error and of my Spring Cloud dependencies.
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
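As a brief illustration of the second option (a sketch only, using the same ListenerContainerCustomizer approach as the example further below, not code from the answer): the listener throws BatchListenerFailedException with the index of the failing record, and the RecoveringBatchErrorHandler retries and then publishes just that record through the DeadLetterPublishingRecoverer.
// Inside the customizer: wire a RecoveringBatchErrorHandler instead.
container.setBatchErrorHandler(new RecoveringBatchErrorHandler(
        new DeadLetterPublishingRecoverer(template), new FixedBackOff(0L, 2L)));

// Inside the batch listener: point at the record that failed.
for (int i = 0; i < messages.size(); i++) {
    try {
        process(messages.get(i)); // hypothetical per-record processing
    } catch (Exception e) {
        throw new BatchListenerFailedException("record failed", e, i);
    }
}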
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838 ERROR ...
...
[foo]
2021-09-14 09:55:37.873 ERROR ...
...
[foo]
2021-09-14 09:55:42.886 ERROR ...
...
From DLT: foo

Manual ACK for AggregatingMessageHandler

I'm trying to build an integration scenario like this: Rabbit -> AmqpInboundChannelAdapter(AcknowledgeMode.MANUAL) -> DirectChannel -> AggregatingMessageHandler -> DirectChannel -> AmqpOutboundEndpoint.
I want to aggregate messages in-memory and release the group when I have aggregated 10 messages, or when a timeout of 10 seconds is reached. I suppose this config is OK:
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(), new SimpleMessageStore(10));
    aggregatingMessageHandler.setCorrelationStrategy(new HeaderAttributeCorrelationStrategy(AmqpHeaders.CORRELATION_ID));
    // default false
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true); // when group is released (using strategy), remove it so new messages in the same group create a new group
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true); // when expired because of timeout and not because of strategy, still send messages grouped so far
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(TimeUnit.SECONDS.toMillis(10))); // timeout after X
    // timeout is checked only when a new message arrives!!
    aggregatingMessageHandler.setReleaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(10, TimeUnit.SECONDS.toMillis(10)));
    aggregatingMessageHandler.setOutputChannel(amqpOutputChannel());
    return aggregatingMessageHandler;
}
Now, my question is: is there any easier way to manually ack messages other than creating my own implementation of AggregatingMessageHandler, like this:
public class ManualAckAggregatingMessageHandler extends AbstractCorrelatingMessageHandler {
    ...

    private void ackMessage(Channel channel, Long deliveryTag) {
        try {
            Assert.notNull(channel, "Channel must be provided");
            Assert.notNull(deliveryTag, "Delivery tag must be provided");
            channel.basicAck(deliveryTag, false);
        }
        catch (IOException e) {
            throw new MessagingException("Cannot ACK message", e);
        }
    }

    @Override
    protected void afterRelease(MessageGroup messageGroup, Collection<Message<?>> completedMessages) {
        Object groupId = messageGroup.getGroupId();
        MessageGroupStore messageStore = getMessageStore();
        messageStore.completeGroup(groupId);
        messageGroup.getMessages().forEach(m -> {
            Channel channel = (Channel) m.getHeaders().get(AmqpHeaders.CHANNEL);
            Long deliveryTag = (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
            ackMessage(channel, deliveryTag);
        });
        if (this.expireGroupsUponCompletion) {
            remove(messageGroup);
        }
        else {
            if (messageStore instanceof SimpleMessageStore) {
                ((SimpleMessageStore) messageStore).clearMessageGroup(groupId);
            }
            else {
                messageStore.removeMessagesFromGroup(groupId, messageGroup.getMessages());
            }
        }
    }
}
UPDATE
I managed to do it after your help. The most important parts: the connection factory must have factory.setPublisherConfirms(true), and the AmqpOutboundEndpoint must have these two settings: outboundEndpoint.setConfirmAckChannel(manualAckChannel()) and outboundEndpoint.setConfirmCorrelationExpressionString("#root"). Here is the implementation of the rest of the classes:
public class ManualAckPair {

    private Channel channel;
    private Long deliveryTag;

    public ManualAckPair(Channel channel, Long deliveryTag) {
        this.channel = channel;
        this.deliveryTag = deliveryTag;
    }

    public void basicAck() {
        try {
            this.channel.basicAck(this.deliveryTag, false);
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }
}

public abstract class AbstractManualAckAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    public static final String MANUAL_ACK_PAIRS = PREFIX + "manualAckPairs";

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
        List<ManualAckPair> manualAckPairs = new ArrayList<>();
        group.getMessages().forEach(m -> {
            Channel channel = (Channel) m.getHeaders().get(AmqpHeaders.CHANNEL);
            Long deliveryTag = (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
            manualAckPairs.add(new ManualAckPair(channel, deliveryTag));
        });
        aggregatedHeaders.put(MANUAL_ACK_PAIRS, manualAckPairs);
        return aggregatedHeaders;
    }
}
and
@Service
public class ManualAckServiceActivator {

    @ServiceActivator(inputChannel = "manualAckChannel")
    public void handle(@Header(MANUAL_ACK_PAIRS) List<ManualAckPair> manualAckPairs) {
        manualAckPairs.forEach(manualAckPair -> {
            manualAckPair.basicAck();
        });
    }
}
Right, you don't need such complex logic for the aggregator.
You can simply acknowledge the messages after the aggregator release - in a service activator between the aggregator and that AmqpOutboundEndpoint.
And right, you have to use basicAck() there with the multiple flag set to true:
@param multiple true to acknowledge all messages up to and including the supplied delivery tag
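For illustration only (channel names are hypothetical, and this is not code from the answer), such a service activator could confirm the whole group with a single ack, assuming a custom group processor (sketched after the next excerpt) has put the highest delivery tag of the group into the aggregated headers:
@ServiceActivator(inputChannel = "afterAggregationChannel", outputChannel = "toRabbitChannel")
public Message<?> ackGroup(Message<?> aggregated) throws IOException {
    Channel channel = aggregated.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
    Long highestDeliveryTag = aggregated.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
    // multiple = true acknowledges every message up to and including highestDeliveryTag
    channel.basicAck(highestDeliveryTag, true);
    return aggregated;
}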
Well, for that purpose you definitely need a custom MessageGroupProcessor to extract the highest AmqpHeaders.DELIVERY_TAG for the whole batch and set it as a header on the output aggregated message.
You might just extend DefaultAggregatingMessageGroupProcessor and override its aggregateHeaders():
/**
 * This default implementation simply returns all headers that have no conflicts among the group. An absent header
 * on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
 * more advanced conflict-resolution strategies if necessary.
 *
 * @param group The message group.
 * @return The aggregated headers.
 */
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
    ...
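A hedged sketch of that override (the class name is illustrative; only the delivery-tag handling is added on top of the default implementation):
public class HighestDeliveryTagGroupProcessor extends DefaultAggregatingMessageGroupProcessor {

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> headers = super.aggregateHeaders(group);
        long highestTag = group.getMessages().stream()
                .map(m -> (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG))
                .filter(tag -> tag != null)
                .mapToLong(Long::longValue)
                .max()
                .orElse(0L);
        // Per-message delivery tags conflict and are dropped by the default aggregation,
        // so re-add the highest one explicitly for the downstream basicAck(tag, true).
        headers.put(AmqpHeaders.DELIVERY_TAG, highestTag);
        return headers;
    }
}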

akka websocket with java, counting clients number, sending message to client

I'm following the Akka Java WebSocket tutorial in an attempt to create a WebSocket server. I want to implement 2 extra features:
Being able to display the number of connected clients - but the result is always 0 or 1, even when I know there are hundreds of concurrently connected clients.
WebSocket communication is bidirectional. Currently the server only responds with a message when the client sends a message. How do I initiate sending a message from the server to the client?
Here's the original Akka Java server example code, with my minimal modification implementing the client counting:
public class websocketServer {

    private static AtomicInteger connections = new AtomicInteger(0); // connected clients count.

    public static class MyTimerTask extends TimerTask {
        // called every second to display number of connected clients.
        @Override
        public void run() {
            System.out.println("Concurrent connections: " + connections);
        }
    }

    //#websocket-handling
    public static HttpResponse handleRequest(HttpRequest request) {
        HttpResponse result;
        connections.incrementAndGet();
        if (request.getUri().path().equals("/greeter")) {
            final Flow<Message, Message, NotUsed> greeterFlow = greeter();
            result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
        } else {
            result = HttpResponse.create().withStatus(413);
        }
        connections.decrementAndGet();
        return result;
    }

    public static void main(String[] args) throws Exception {
        ActorSystem system = ActorSystem.create();
        TimerTask timerTask = new MyTimerTask();
        Timer timer = new Timer(true);
        timer.scheduleAtFixedRate(timerTask, 0, 1000);
        try {
            final Materializer materializer = ActorMaterializer.create(system);
            final Function<HttpRequest, HttpResponse> handler = request -> handleRequest(request);
            CompletionStage<ServerBinding> serverBindingFuture =
                    Http.get(system).bindAndHandleSync(
                            handler, ConnectHttp.toHost("****", 1183), materializer);
            // will throw if binding fails
            serverBindingFuture.toCompletableFuture().get(1, TimeUnit.SECONDS);
            System.out.println("Press ENTER to stop.");
            new BufferedReader(new InputStreamReader(System.in)).readLine();
            timer.cancel();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            system.terminate();
        }
    }
    //#websocket-handler

    /**
     * A handler that treats incoming messages as a name,
     * and responds with a greeting to that name
     */
    public static Flow<Message, Message, NotUsed> greeter() {
        return Flow.<Message>create()
                .collect(new JavaPartialFunction<Message, Message>() {
                    @Override
                    public Message apply(Message msg, boolean isCheck) throws Exception {
                        if (isCheck) {
                            if (msg.isText()) {
                                return null;
                            } else {
                                throw noMatch();
                            }
                        } else {
                            return handleTextMessage(msg.asTextMessage());
                        }
                    }
                });
    }

    public static TextMessage handleTextMessage(TextMessage msg) {
        if (msg.isStrict()) // optimization that directly creates a simple response...
        {
            return TextMessage.create("Hello " + msg.getStrictText());
        } else // ... this would suffice to handle all text messages in a streaming fashion
        {
            return TextMessage.create(Source.single("Hello ").concat(msg.getStreamedText()));
        }
    }
    //#websocket-handler
}
Addressing your 2 bullet points below:
1 - you need to attach your metrics to the Message flow - and not to the HttpRequest flow - to effectively count the active connections. You can do this by using watchTermination. Code example for the handleRequest method below
public static HttpResponse handleRequest(HttpRequest request) {
    HttpResponse result;
    if (request.getUri().path().equals("/greeter")) {
        final Flow<Message, Message, NotUsed> greeterFlow = greeter().watchTermination((nu, cd) -> {
            connections.incrementAndGet();
            cd.whenComplete((done, throwable) -> connections.decrementAndGet());
            return nu;
        });
        result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
    } else {
        result = HttpResponse.create().withStatus(413);
    }
    return result;
}
2 - for the server to independently send messages you could create its Message Flow using Flow.fromSinkAndSource. Example below (this will only send one message):
public static Flow<Message, Message, NotUsed> greeter() {
    return Flow.fromSinkAndSource(Sink.ignore(),
            Source.single(new akka.http.scaladsl.model.ws.TextMessage.Strict("Hello!"))
    );
}
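If the server should keep pushing messages on its own, a tick-driven source is one common pattern. A rough sketch, not from the original answer: the method name is illustrative, and it assumes an Akka version whose Java DSL Source.tick accepts java.time.Duration (plus the corresponding imports).
public static Flow<Message, Message, NotUsed> pushGreeter() {
    // Ignore whatever the client sends and emit a server-initiated message every second.
    return Flow.fromSinkAndSource(
            Sink.ignore(),
            Source.tick(Duration.ofSeconds(1), Duration.ofSeconds(1),
                    (Message) TextMessage.create("Hello from the server!")));
}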
In the handleRequest method you increment and then decrement the counter connections, so at the end the value is always 0.
public static HttpResponse handleRequest(HttpRequest request) {
    ...
    connections.incrementAndGet();
    ...
    connections.decrementAndGet();
    return result;
}
