Manual ACK for AggregatingMessageHandler - java

I'm trying to build an integration scenario like this: Rabbit -> AmqpInboundChannelAdapter(AcknowledgeMode.MANUAL) -> DirectChannel -> AggregatingMessageHandler -> DirectChannel -> AmqpOutboundEndpoint.
I want to aggregate messages in memory and release them when 10 messages have been aggregated, or when a timeout of 10 seconds is reached. I suppose this config is OK:
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(), new SimpleMessageStore(10));
    aggregatingMessageHandler.setCorrelationStrategy(new HeaderAttributeCorrelationStrategy(AmqpHeaders.CORRELATION_ID));
    // default false
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true); // when a group is released (via the strategy), remove it so new messages with the same correlation start a new group
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true); // when expired because of the timeout rather than the strategy, still send the messages grouped so far
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(TimeUnit.SECONDS.toMillis(10))); // timeout after X
    // the timeout is checked only when a new message arrives!
    aggregatingMessageHandler.setReleaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(10, TimeUnit.SECONDS.toMillis(10)));
    aggregatingMessageHandler.setOutputChannel(amqpOutputChannel());
    return aggregatingMessageHandler;
}
Now, my question is: is there any easier way to manually ack messages other than creating my own implementation of AggregatingMessageHandler like this:
public class ManualAckAggregatingMessageHandler extends AbstractCorrelatingMessageHandler {
    ...
    private void ackMessage(Channel channel, Long deliveryTag) {
        try {
            Assert.notNull(channel, "Channel must be provided");
            Assert.notNull(deliveryTag, "Delivery tag must be provided");
            channel.basicAck(deliveryTag, false);
        }
        catch (IOException e) {
            throw new MessagingException("Cannot ACK message", e);
        }
    }

    @Override
    protected void afterRelease(MessageGroup messageGroup, Collection<Message<?>> completedMessages) {
        Object groupId = messageGroup.getGroupId();
        MessageGroupStore messageStore = getMessageStore();
        messageStore.completeGroup(groupId);
        messageGroup.getMessages().forEach(m -> {
            Channel channel = (Channel) m.getHeaders().get(AmqpHeaders.CHANNEL);
            Long deliveryTag = (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
            ackMessage(channel, deliveryTag);
        });
        if (this.expireGroupsUponCompletion) {
            remove(messageGroup);
        }
        else {
            if (messageStore instanceof SimpleMessageStore) {
                ((SimpleMessageStore) messageStore).clearMessageGroup(groupId);
            }
            else {
                messageStore.removeMessagesFromGroup(groupId, messageGroup.getMessages());
            }
        }
    }
}
UPDATE
I managed to do it after your help. The most important parts: the connection factory must have factory.setPublisherConfirms(true), and the AmqpOutboundEndpoint must have these two settings: outboundEndpoint.setConfirmAckChannel(manualAckChannel()) and outboundEndpoint.setConfirmCorrelationExpressionString("#root"). Here is the implementation of the rest of the classes (a sketch of that wiring follows them):
public class ManualAckPair {

    private Channel channel;

    private Long deliveryTag;

    public ManualAckPair(Channel channel, Long deliveryTag) {
        this.channel = channel;
        this.deliveryTag = deliveryTag;
    }

    public void basicAck() {
        try {
            this.channel.basicAck(this.deliveryTag, false);
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }
}
public abstract class AbstractManualAckAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    public static final String MANUAL_ACK_PAIRS = PREFIX + "manualAckPairs";

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
        List<ManualAckPair> manualAckPairs = new ArrayList<>();
        group.getMessages().forEach(m -> {
            Channel channel = (Channel) m.getHeaders().get(AmqpHeaders.CHANNEL);
            Long deliveryTag = (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
            manualAckPairs.add(new ManualAckPair(channel, deliveryTag));
        });
        aggregatedHeaders.put(MANUAL_ACK_PAIRS, manualAckPairs);
        return aggregatedHeaders;
    }
}
and
@Service
public class ManualAckServiceActivator {

    @ServiceActivator(inputChannel = "manualAckChannel")
    public void handle(@Header(MANUAL_ACK_PAIRS) List<ManualAckPair> manualAckPairs) {
        manualAckPairs.forEach(ManualAckPair::basicAck);
    }
}
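For completeness, here is a minimal sketch (not from the original post) of the connection-factory and outbound-endpoint wiring described above; the bean names (manualAckChannel, amqpOutputChannel) and the template wiring are assumptions:
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory("localhost");
    factory.setPublisherConfirms(true); // required so the outbound endpoint receives publisher confirms
    return factory;
}

@Bean
public MessageChannel manualAckChannel() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "amqpOutputChannel")
public AmqpOutboundEndpoint amqpOutboundEndpoint(RabbitTemplate rabbitTemplate) {
    // the RabbitTemplate must be built on the confirm-enabled connection factory above
    AmqpOutboundEndpoint outboundEndpoint = new AmqpOutboundEndpoint(rabbitTemplate);
    // publisher confirms are routed to the service activator that performs the manual acks
    outboundEndpoint.setConfirmAckChannel(manualAckChannel());
    // correlate each confirm with the whole outbound message, per the update above
    outboundEndpoint.setConfirmCorrelationExpressionString("#root");
    return outboundEndpoint;
}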

Right, you don't need such complex logic for the aggregator.
You can simply acknowledge them after the aggregator release, in a service activator between the aggregator and that AmqpOutboundEndpoint.
And, right, you have to use basicAck() there with the multiple flag set to true:
@param multiple true to acknowledge all messages up to and including the supplied delivery tag
Well, for that purpose you definitely need a custom MessageGroupProcessor to extract the highest AmqpHeaders.DELIVERY_TAG for the whole batch and set it as a header on the output aggregated message.
You might just extend DefaultAggregatingMessageGroupProcessor and override its aggregateHeaders():
/**
 * This default implementation simply returns all headers that have no conflicts among the group. An absent header
 * on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
 * more advanced conflict-resolution strategies if necessary.
 *
 * @param group The message group.
 * @return The aggregated headers.
 */
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
    ...
}
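A rough sketch of that idea (not the original author's code), assuming all messages in a group arrive on the same Channel:
public class HighestDeliveryTagMessageGroupProcessor extends DefaultAggregatingMessageGroupProcessor {

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
        // keep the channel and the highest delivery tag of the group so a single
        // basicAck(tag, true) can acknowledge the whole batch downstream
        long highestTag = group.getMessages().stream()
                .mapToLong(m -> (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG))
                .max()
                .getAsLong();
        aggregatedHeaders.put(AmqpHeaders.DELIVERY_TAG, highestTag);
        aggregatedHeaders.put(AmqpHeaders.CHANNEL, group.getMessages().iterator().next().getHeaders().get(AmqpHeaders.CHANNEL));
        return aggregatedHeaders;
    }
}
and a service activator between the aggregator and the AmqpOutboundEndpoint (the channel names here are made up):
@ServiceActivator(inputChannel = "aggregatedChannel", outputChannel = "amqpOutputChannel")
public Message<?> ackBatch(Message<?> message) throws IOException {
    Channel channel = message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
    Long deliveryTag = message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
    channel.basicAck(deliveryTag, true); // multiple = true acks everything up to this tag
    return message;
}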

@KafkaListener with multiple independent topics processing

Software used:
Java version: 8
SpringBoot version: 2.4.0
SpringKafka version: 2.7.2
I have this method in my Spring application:
@KafkaListener(topics = "#{consumerSpring.topics}", groupId = "#{consumerSpring.consumerId}", concurrency = "#{consumerSpring.recommendedConcurrency}")
public void listenKafkbooiaTopic(@Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, @Payload String message, Acknowledgment ack) throws Exception {
    ConsumerSpring consumer = this.consumerSpring();
    //
    //
    KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
            topicName,
            consumer.getConsumerId(),
            message);
    if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
        ack.acknowledge();
    } else {
        ack.nack(5 * 1000);
    }
}
#{consumerSpring.topics} returns
{"topic1", "topic2", "topic3"}
#{consumerSpring.consumerId} returns:
myConsumer
#{consumerSpring.recommendedConcurrency} returns:
3
OK! This is working fine! But I need to isolate these topics, for example:
TopicA is stuck in a fatal error and keeps calling:
ack.nack(5 * 1000);
But TopicB and TopicC aren't stuck, so I need those topics to continue executing normally.
Basically, I need the same behavior as if I had declared two separate listeners, for example:
#KafkaListener(topics="topica", groupId="#{consumerSpring.consumerId}")
public void listenerTopicB(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
#KafkaListener(topics="topicb", groupId="#{consumerSpring.consumerId}")
public void listenerTopicA(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
There is no need for multiple methods.
You can put multiple @KafkaListener annotations on a single method and each one will create a separate container.
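@KafkaListener is a repeatable annotation, so a sketch of that suggestion (the listener ids are made up; the helper methods are the ones from the question) could look like this:
@KafkaListener(id = "topicAListener", topics = "topica", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(id = "topicBListener", topics = "topicb", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(id = "topicCListener", topics = "topicc", groupId = "#{consumerSpring.consumerId}")
public void listen(@Header(KafkaHeaders.RECEIVED_TOPIC) String topicName,
        @Payload String message, Acknowledgment ack) throws Exception {
    // each annotation creates its own listener container, so a nack() on one
    // topic's records does not pause consumption from the other topics
    ConsumerSpring consumer = this.consumerSpring();
    KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
            topicName,
            consumer.getConsumerId(),
            message);
    if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
        ack.acknowledge();
    } else {
        ack.nack(5 * 1000); // redeliver after 5 seconds
    }
}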

Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder

I am consuming batches in Kafka. Retry is not supported in the Spring Cloud Stream Kafka binder in batch mode; the documented option is that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve functionality similar to retry in the binder.
I tried that with SeekToCurrentBatchErrorHandler, but it retries more often than the configured 3 times.
How can I do that?
I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? With a record listener I used to check in the listener whether deliveryAttempt (retry) had reached 3 and then send the record to the DLQ topic.
I have checked this link, which is exactly my issue, but an example would be a great help; can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example, I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
        @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
        @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ", "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing @Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots:
(screenshots: "not able to import RetryingBatchErrorHandler", "my spring cloud dependencies")
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
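For the second option, here is a rough sketch (assuming spring-kafka 2.5+ and the same DLT naming convention as the example below) of a RecoveringBatchErrorHandler configured through the customizer:
@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> recoveringCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, dest, group) -> {
        // retries the batch with the back off, then publishes only the failed record and commits the rest
        container.setBatchErrorHandler(new RecoveringBatchErrorHandler(
                new DeadLetterPublishingRecoverer(template,
                        (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition())),
                new FixedBackOff(5000L, 2L)));
    };
}
Inside the batch listener you would throw new BatchListenerFailedException("failed record", failedRecord) (or the index-based variant) to tell the handler which record in the batch failed.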
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838ERROR...
...
[foo]
2021-09-14 09:55:37.873ERROR...
...
[foo]
2021-09-14 09:55:42.886ERROR...
...
From DLT: foo

Better way of error handling in Kafka Consumer

I have a Spring Boot app configured with spring-kafka where I want to handle all sorts of errors that can happen while listening to a topic. If any message cannot be consumed because of a deserialization or any other exception, there should be 2 retries, after which the message should be logged to an error file. There are two approaches that can be followed:
First approach (using SeekToCurrentErrorHandler with DeadLetterPublishingRecoverer):
@Autowired
KafkaTemplate<String, Object> template;

@Bean(name = "kafkaSourceProvider")
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
    Map<String, Object> config = appProperties.getSource().getProperties();
    ConcurrentKafkaListenerContainerFactory<K, V> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
            (r, e) -> {
                if (e instanceof FooException) {
                    return new TopicPartition(r.topic() + ".DLT", r.partition());
                }
                return null; // added so the lambda compiles; records for other exceptions are not routed here
            });
    ErrorHandler errorHandler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 2L));
    factory.setErrorHandler(errorHandler);
    return factory;
}
But for this we require an additional topic (a new .DLT topic), and only then can we log it to a file.
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@KafkaListener(topics = MY_TOPIC + ".DLT", groupId = MY_ID)
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
        @Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
    logger.error(exceptionStackTrace);
}
Second approach (using a custom SeekToCurrentErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<K, V> consumerFactory() {
    Map<String, Object> config = appProperties.getSource().getProperties();
    ConcurrentKafkaListenerContainerFactory<K, V> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(config));
    factory.setErrorHandler(new CustomSeekToCurrentErrorHandler());
    factory.setRetryTemplate(retryTemplate());
    return factory;
}

private RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(backOffPolicy());
    retryTemplate.setRetryPolicy(aSimpleReturnPolicy);
    return retryTemplate; // added missing return
}
public class CustomSeekToCurrentErrorHandler extends SeekToCurrentErrorHandler {

    private static final int MAX_RETRY_ATTEMPTS = 2;

    CustomSeekToCurrentErrorHandler() {
        super(MAX_RETRY_ATTEMPTS);
    }

    @Override
    public void handle(Exception exception, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
        try {
            if (!records.isEmpty()) {
                log.warn("Exception: {} occurred with message: {}", exception, exception.getMessage());
                super.handle(exception, records, consumer, container);
            }
        } catch (SerializationException e) {
            log.warn("Exception: {} occurred with message: {}", e, e.getMessage());
        }
    }
}
Can anyone suggest the standard way to implement this kind of feature? In the first approach we see the overhead of creating .DLT topics and an additional @KafkaListener. In the second approach, we can directly log our consumer record exception.
With the first approach, it is not necessary to use a DeadLetterPublishingRecoverer, you can use any ConsumerRecordRecoverer that you want; in fact the default recoverer simply logs the failed message.
/**
 * Construct an instance with the default recoverer which simply logs the record after
 * the backOff returns STOP for a topic/partition/offset.
 * @param backOff the {@link BackOff}.
 * @since 2.3
 */
public SeekToCurrentErrorHandler(BackOff backOff) {
    this(null, backOff);
}
And, in the FailedRecordTracker...
if (recoverer == null) {
this.recoverer = (rec, thr) -> {
...
logger.error(thr, "Backoff "
+ (failedRecord == null
? "none"
: failedRecord.getBackOffExecution())
+ " exhausted for " + ListenerUtils.recordToString(rec));
};
}
Backoff (and a limit to retries) was added to the error handler after adding retry in the listener adapter, so it's "newer" (and preferred).
Also, using in-memory retry can cause issues with rebalancing if long BackOffs are employed.
Finally, only the SeekToCurrentErrorHandler can deal with deserialization problems (via the ErrorHandlingDeserializer).
EDIT
Use the ErrorHandlingDeserializer together with a SeekToCurrentErrorHandler. Deserialization exceptions are considered fatal and the recoverer is called immediately.
See the documentation.
Here is a simple Spring Boot application that demonstrates it:
@SpringBootApplication
public class So63236346Application {

    private static final Logger log = LoggerFactory.getLogger(So63236346Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So63236346Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so63236346").partitions(1).replicas(1).build();
    }

    @Bean
    ErrorHandler errorHandler() {
        return new SeekToCurrentErrorHandler((rec, ex) -> log.error(ListenerUtils.recordToString(rec, true) + "\n"
                + ex.getMessage()));
    }

    @KafkaListener(id = "so63236346", topics = "so63236346")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so63236346", "{\"field\":\"value1\"}");
            template.send("so63236346", "junk");
            template.send("so63236346", "{\"field\":\"value2\"}");
        };
    }
}
package com.example.demo;

public class Thing {

    private String field;

    public Thing() {
    }

    public Thing(String field) {
        this.field = field;
    }

    public String getField() {
        return this.field;
    }

    public void setField(String field) {
        this.field = field;
    }

    @Override
    public String toString() {
        return "Thing [field=" + this.field + "]";
    }
}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.value.default.type=com.example.demo.Thing
Result
Thing [field=value1]
2020-08-10 14:30:14.780 ERROR 78857 --- [o63236346-0-C-1] com.example.demo.So63236346Application : so63236346-0#7
Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is org.apache.kafka.common.errors.SerializationException: Can't deserialize data [[106, 117, 110, 107]] from topic [so63236346]
2020-08-10 14:30:14.782 INFO 78857 --- [o63236346-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-so63236346-1, groupId=so63236346] Seeking to offset 8 for partition so63236346-0
Thing [field=value2]
The expectation was to log any exception that we might get at the container level as well as at the listener level.
Without retrying, the following is how I have done the error handling:
If we encounter any exception at the container level, we should be able to log the message payload with the error description, seek past that offset, skip it, and go ahead receiving the next offset. Although this is done only for DeserializationException, the offsets for the other exceptions also need to be skipped in the same way.
@Component
public class KafkaContainerErrorHandler implements ErrorHandler {

    private static final Logger logger = LoggerFactory.getLogger(KafkaContainerErrorHandler.class);

    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, MessageListenerContainer container) {
        String s = thrownException.getMessage().split("Error deserializing key/value for partition ")[1].split(". If needed, please seek past the record to continue consumption.")[0];
        // modify the logic below according to your topic nomenclature
        String topics = s.substring(0, s.lastIndexOf('-'));
        int offset = Integer.parseInt(s.split("offset ")[1]);
        int partition = Integer.parseInt(s.substring(s.lastIndexOf('-') + 1).split(" at")[0]);
        logger.error("...");
        TopicPartition topicPartition = new TopicPartition(topics, partition);
        logger.info("Skipping {} - {} offset {}", topics, partition, offset);
        consumer.seek(topicPartition, offset + 1);
    }

    @Override
    public void handle(Exception e, ConsumerRecord<?, ?> consumerRecord) {
    }
}
factory.setErrorHandler(kafkaContainerErrorHandler);
If we get any exception at the @KafkaListener level, then I configure my listener with my custom error handler and log the exception with the message, as can be seen below:
#Bean("customErrorHandler")
public KafkaListenerErrorHandler listenerErrorHandler() {
return (m, e) -> {
logger.error(...);
return m;
};
}

Resend messages after timeout

I have a list of objects that I put into Spring AMQP. The objects come from a controller. There is a service that processes these objects, and this service may crash with an OutOfMemoryError. Therefore, I run several instances of the application.
There is a problem: when the service crashes, I lose the received messages. I read about NACK and could use it in the case of an Exception or RuntimeException, but my service crashes with an Error, so I cannot send a NACK. Is it possible to set a timeout in AMQP after which a message would be redelivered to me if I had not acknowledged it?
Here is the code I wrote:
public class Exchanges {
    public static final String EXC_RENDER_NAME = "render.exchange.topic";
    public static final TopicExchange EXC_RENDER = new TopicExchange(EXC_RENDER_NAME, true, false);
}

public class Queues {
    public static final String RENDER_NAME = "render.queue.topic";
    public static final Queue RENDER = new Queue(RENDER_NAME);
}
@RequiredArgsConstructor
@Service
public class RenderRabbitEventListener extends RabbitEventListener {

    private final ApplicationEventPublisher eventPublisher;

    @RabbitListener(bindings = @QueueBinding(value = @Queue(Queues.RENDER_NAME),
            exchange = @Exchange(value = Exchanges.EXC_RENDER_NAME, type = "topic"),
            key = "render.#"))
    public void onMessage(Message message, Channel channel) {
        String routingKey = parseRoutingKey(message);
        log.debug(String.format("Event %s", routingKey));
        RenderQueueObject queueObject = parseRender(message, RenderQueueObject.class);
        handleMessage(queueObject);
    }

    public void handleMessage(RenderQueueObject render) {
        GenericSpringEvent<RenderQueueObject> springEvent = new GenericSpringEvent<>(render);
        springEvent.setRender(true);
        eventPublisher.publishEvent(springEvent);
    }
}
And this is the method that sends messages:
    #Async ("threadPoolTaskExecutor")
    #EventListener (condition = "# event.queue")
    public void start (GenericSpringEvent <RenderQueueObject> event) {
        RenderQueueObject renderQueueObject = event.getWhat ();
        send (RENDER_NAME, renderQueueObject);
}
private void send(String routingKey, Object queue) {
try {
rabbitTemplate.convertAndSend(routingKey, objectMapper.writeValueAsString(queue));
} catch (JsonProcessingException e) {
log.warn("Can't send event!", e);
}
}
You need to close the connection to get the message re-queued.
It's best to terminate the application after an OOME (which, of course, will close the connection).
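One way to guarantee that the application dies (and therefore closes the connection), assuming a HotSpot-based JVM, is to start it with the -XX:+ExitOnOutOfMemoryError flag (available since JDK 8u92); any OutOfMemoryError then terminates the JVM immediately, and RabbitMQ re-queues the unacknowledged messages.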

akka websocket with java, counting clients number, sending message to client

I'm following the Akka Java WebSocket tutorial in an attempt to create a WebSocket server. I want to implement 2 extra features:
1. Being able to display the number of connected clients, but the result is always 0 or 1, even when I know I have hundreds of concurrently connected clients.
2. WebSocket communication is bidirectional. Currently the server only responds with a message when the client sends one. How do I initiate sending a message from the server to the client?
Here's the original Akka Java server example code with my minimal client-counting modification:
public class websocketServer {

    private static AtomicInteger connections = new AtomicInteger(0); // connected clients count

    public static class MyTimerTask extends TimerTask {
        // called every second to display the number of connected clients
        @Override
        public void run() {
            System.out.println("Concurrent connections: " + connections);
        }
    }

    //#websocket-handling
    public static HttpResponse handleRequest(HttpRequest request) {
        HttpResponse result;
        connections.incrementAndGet();
        if (request.getUri().path().equals("/greeter")) {
            final Flow<Message, Message, NotUsed> greeterFlow = greeter();
            result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
        } else {
            result = HttpResponse.create().withStatus(413);
        }
        connections.decrementAndGet();
        return result;
    }

    public static void main(String[] args) throws Exception {
        ActorSystem system = ActorSystem.create();
        TimerTask timerTask = new MyTimerTask();
        Timer timer = new Timer(true);
        timer.scheduleAtFixedRate(timerTask, 0, 1000);
        try {
            final Materializer materializer = ActorMaterializer.create(system);
            final Function<HttpRequest, HttpResponse> handler = request -> handleRequest(request);
            CompletionStage<ServerBinding> serverBindingFuture =
                    Http.get(system).bindAndHandleSync(
                            handler, ConnectHttp.toHost("****", 1183), materializer);
            // will throw if binding fails
            serverBindingFuture.toCompletableFuture().get(1, TimeUnit.SECONDS);
            System.out.println("Press ENTER to stop.");
            new BufferedReader(new InputStreamReader(System.in)).readLine();
            timer.cancel();
        } catch (Exception e) {
            e.printStackTrace();
        }
        finally {
            system.terminate();
        }
    }
    //#websocket-handler

    /**
     * A handler that treats incoming messages as a name,
     * and responds with a greeting to that name
     */
    public static Flow<Message, Message, NotUsed> greeter() {
        return Flow.<Message>create()
                .collect(new JavaPartialFunction<Message, Message>() {
                    @Override
                    public Message apply(Message msg, boolean isCheck) throws Exception {
                        if (isCheck) {
                            if (msg.isText()) {
                                return null;
                            } else {
                                throw noMatch();
                            }
                        } else {
                            return handleTextMessage(msg.asTextMessage());
                        }
                    }
                });
    }

    public static TextMessage handleTextMessage(TextMessage msg) {
        if (msg.isStrict()) { // optimization that directly creates a simple response...
            return TextMessage.create("Hello " + msg.getStrictText());
        } else { // ... this would suffice to handle all text messages in a streaming fashion
            return TextMessage.create(Source.single("Hello ").concat(msg.getStreamedText()));
        }
    }
    //#websocket-handler
}
Addressing your 2 bullet points below:
1 - you need to attach your metrics to the Message flow, not to the HttpRequest flow, to effectively count the active connections. You can do this by using watchTermination. A code example for the handleRequest method is below:
public static HttpResponse handleRequest(HttpRequest request) {
    HttpResponse result;
    if (request.getUri().path().equals("/greeter")) {
        final Flow<Message, Message, NotUsed> greeterFlow = greeter().watchTermination((nu, cd) -> {
            connections.incrementAndGet();
            cd.whenComplete((done, throwable) -> connections.decrementAndGet());
            return nu;
        });
        result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
    } else {
        result = HttpResponse.create().withStatus(413);
    }
    return result;
}
2 - for the server to independently send messages you could create its Message Flow using Flow.fromSinkAndSource. Example below (this will only send one message):
public static Flow<Message, Message, NotUsed> greeter() {
    return Flow.fromSinkAndSource(Sink.ignore(),
            Source.single(new akka.http.scaladsl.model.ws.TextMessage.Strict("Hello!")));
}
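If the server should keep pushing messages on its own rather than sending just one, a variation of the same idea (a sketch, assuming an Akka version with the java.time.Duration overload of Source.tick) could be:
public static Flow<Message, Message, NotUsed> greeter() {
    // ignore whatever the client sends and emit a server-initiated message every 5 seconds
    Source<Message, ?> serverPush =
            Source.tick(Duration.ofSeconds(1), Duration.ofSeconds(5), "server push")
                    .map(tick -> (Message) TextMessage.create(tick));
    return Flow.fromSinkAndSource(Sink.ignore(), serverPush);
}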
In the handleRequest method you increment and then immediately decrement the connections counter, so at the end the value is always 0.
public static HttpResponse handleRequest(HttpRequest request) {
    ...
    connections.incrementAndGet();
    ...
    connections.decrementAndGet();
    return result;
}
