My RetryTemplate config:
@Configuration
@EnableRetry
public class RetryTemplateConfig {
@Value("${spring.retry.attempts}")
private int maxAttempts;
@Value("${spring.retry.period}")
private long backOffPeriod;
@Bean
public RetryTemplate retryTemplate() {
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
retryPolicy.setMaxAttempts(maxAttempts);
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(backOffPeriod);
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(retryPolicy);
retryTemplate.setBackOffPolicy(backOffPolicy);
return retryTemplate;
}
}
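For reference, the two @Value placeholders above assume entries along these lines in application.properties (the values here are only examples):
spring.retry.attempts=5
spring.retry.period=2000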
Scheduled method, which invokes the method that uses retry:
@Scheduled(cron = "${schedule.cron.update.users}")
public void sendToUsers() throws Exception {
log.info("Scheduled sending for users started");
try {
mailSender.sendToUsers();
} catch (MessagingException | IOException | TemplateException e) {
log.error("Error occurred while sending email message to users: {}", e.toString());
}
log.info("Scheduled sending for users finished");
}
The method in which I want to use the RetryTemplate:
public void sendToUsers() throws Exception {
String subject = reportGenerationService.getEmailSubjectForUser();
Map<String, List<BadUtmMark>> utmMarksGroupedByEmail = userService.getUtmMarksGroupedByEmail(LocalDate.now());
if (utmMarksGroupedByEmail.isEmpty()) {
log.info("All's fine - no broken utm marks in database. Emails to users will not be send.");
}
for (Map.Entry<String, List<BadUtmMark>> pair : utmMarksGroupedByEmail.entrySet()) {
retryTemplate.execute(retryContext -> {
String emailTo = pair.getKey();
List<BadUtmMark> badUtmMarks = pair.getValue();
String report = reportGenerationService.getReportForUser(emailTo, badUtmMarks, template);
MimeMessage mimeMessage = getMimeMessage(subject, report, Collections.singletonList(emailTo));
log.info("Message will be sent to: {}; from: {}; with subject: {}", pair.getKey(), from, subject);
mailSender.send(mimeMessage);
return true;
});
}
}
Expected behaviour: I want to send emails to 5 people. If an error occurs, try to send the email to that user 5 more times, and if retries are exhausted, keep going and send the email to the next user in the for loop.
Actual behaviour: if an error occurs, the service catches the exception and stops looping.
If I move the retry logic to this method:
@Scheduled(cron = "${schedule.cron.update.users}")
public void sendToUsers() throws Exception {
log.info("Scheduled sending for users started");
try {
retryTemplate.execute(retryContext -> {
log.warn("Sending email to users. Attempt: {}", retryContext.getRetryCount());
mailSender.sendToUsers();
return true;
});
} catch (MessagingException | IOException | TemplateException e) {
log.error("Error occurred while sending email message to users: {}", e.toString());
}
log.info("Scheduled sending for users finished");
}
It works better, but still not what I expect. In this case, if an error occurs the service will try to send the email 5 more times, but once retries are exhausted it stops looping. So if an error occurs for one of the users, the service retries 5 more times for that user and then stops, ignoring the other users in the map.
But I want 5 retries for each email in my map. How can I do this?
In your first version, use this execute instead...
/**
* Keep executing the callback until it either succeeds or the policy dictates that we
* stop, in which case the recovery callback will be executed.
*
* @see RetryOperations#execute(RetryCallback, RecoveryCallback)
* @param retryCallback the {@link RetryCallback}
* @param recoveryCallback the {@link RecoveryCallback}
* @throws TerminatedRetryException if the retry has been manually terminated by a
* listener.
*/
@Override
public final <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
RecoveryCallback<T> recoveryCallback) throws E {
return doExecute(retryCallback, recoveryCallback, null);
}
template.execute(context -> {
...
}, context -> {
logger.error("Failed to send to ...");
});
If the callback exits normally, the failure is "recovered" and the exception not rethrown.
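Applied to your first version, that could look roughly like this (a sketch based on your own loop; the recovery callback just logs the failure, so the loop continues with the next user):

for (Map.Entry<String, List<BadUtmMark>> pair : utmMarksGroupedByEmail.entrySet()) {
    retryTemplate.execute(retryContext -> {
        String emailTo = pair.getKey();
        List<BadUtmMark> badUtmMarks = pair.getValue();
        String report = reportGenerationService.getReportForUser(emailTo, badUtmMarks, template);
        MimeMessage mimeMessage = getMimeMessage(subject, report, Collections.singletonList(emailTo));
        mailSender.send(mimeMessage);
        return true;
    }, retryContext -> {
        // retries exhausted for this address; log and move on to the next entry
        log.error("Failed to send email to {} after {} attempts", pair.getKey(), retryContext.getRetryCount());
        return false;
    });
}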
Software used:
Java version: 8
Spring Boot version: 2.4.0
Spring Kafka version: 2.7.2
I have this method in my Spring application:
#KafkaListener(topics="#{consumerSpring.topics}", groupId="#{consumerSpring.consumerId}", concurrency="#{consumerSpring.recommendedConcurrency}")
public void listenKafkbooiaTopic(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
#{consumerSpring.topics} returns
{"topic1", "topic2", "topic3"}
#{consumerSpring.consumerId} returns:
myConsumer
#{consumerSpring.recommendedConcurrency} returns:
3
OK! This is working fine! But I need to isolate these topics. For example:
TopicA gets stuck in a fatal error and keeps calling:
ack.nack(5 * 1000);
But TopicB and TopicC aren't stuck, so I need these topics to continue executing normally.
Basically I need the same behavior as if I had declared two separate listeners, for example:
#KafkaListener(topics="topica", groupId="#{consumerSpring.consumerId}")
public void listenerTopicB(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
#KafkaListener(topics="topicb", groupId="#{consumerSpring.consumerId}")
public void listenerTopicA(#Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, #Payload String message, Acknowledgment ack) throws Exception {
ConsumerSpring consumer = this.consumerSpring();
//
//
KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
topicName,
consumer.getConsumerId(),
message);
if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
ack.acknowledge();
} else {
ack.nack(5 * 1000);
}
}
There is no need for multiple methods.
You can put multiple @KafkaListener annotations on a single method and each one will create a separate container.
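For example, a sketch based on your existing listener (@KafkaListener is repeatable, so each annotation below gets its own container, and a nack() in one container no longer blocks the other topics; the method name is arbitrary):

@KafkaListener(topics = "topic1", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(topics = "topic2", groupId = "#{consumerSpring.consumerId}")
@KafkaListener(topics = "topic3", groupId = "#{consumerSpring.consumerId}")
public void listen(@Header(KafkaHeaders.RECEIVED_TOPIC) String topicName, @Payload String message, Acknowledgment ack) throws Exception {
    ConsumerSpring consumer = this.consumerSpring();
    KafkaHandlerReturn handlerReturn = consumer.getKafkaProxy().handleRequest(
            topicName, consumer.getConsumerId(), message);
    if (handlerReturn.equals(KafkaHandlerReturn.SUCCESS) || handlerReturn.equals(KafkaHandlerReturn.FAIL_LOGIC)) {
        ack.acknowledge();
    } else {
        // only this topic's container backs off; the other containers keep consuming
        ack.nack(5 * 1000);
    }
}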
I have a subscription VIEW_TOPIC with the pull strategy. Why can't I see any messages even though there are 7 delayed messages? I cannot figure out what I am missing. By the way, I'm running the subscriber on Kubernetes on GCP, and I have also added the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Subscriber configuration
private Subscriber buildSubscriber() {
try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
TopicName topicName = TopicName.of(projectId, topic);
ProjectSubscriptionName subscriptionName =
ProjectSubscriptionName.of(projectId, subscriptionId);
// Create a pull subscription with default acknowledgement deadline of 10 seconds.
// Messages not successfully acknowledged within 10 seconds will get resent by the server.
Subscription subscription =
subscriptionAdminClient.createSubscription(
subscriptionName, topicName, PushConfig.getDefaultInstance(), 10);
System.out.println("Created pull subscription: " + subscription.getName());
} catch (IOException e) {
LOGGER.error("Cannot create pull subscription");
} catch (AlreadyExistsException existsException) {
LOGGER.warn("Subscription already created");
}
ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
LOGGER.info("Subscribe topic: " + topic + " | SubscriptionId: " + subscriptionId);
// default is 4 * num of processor
ExecutorProvider executorProvider = InstantiatingExecutorProvider.newBuilder().build();
Subscriber.Builder subscriberBuilder = Subscriber.newBuilder(subscriptionName, new MessageReceiverImpl())
.setExecutorProvider(executorProvider);
// The subscriber will pause the message stream and stop receiving more messages from the
// server if any one of the conditions is met.
FlowControlSettings flowControlSettings =
FlowControlSettings.newBuilder()
.setMaxOutstandingElementCount(100)
// the maximum size of messages the subscriber
// receives before pausing the message stream.
// 10Mib
.setMaxOutstandingRequestBytes(10L * 1024L * 1024L)
.build();
subscriberBuilder.setFlowControlSettings(flowControlSettings);
Subscriber subscriber = subscriberBuilder.build();
subscriber.addListener(new ApiService.Listener() {
@Override
public void failed(ApiService.State from, Throwable failure) {
LOGGER.error(from, failure);
}
}, MoreExecutors.directExecutor());
return subscriber;
}
Subscriber
public void startSubscribeMessage() {
LOGGER.info("Begin subscribe topic " + topic);
this.subscriber.startAsync().awaitRunning();
LOGGER.info("Subscriber start successfully!!!");
}
public class MessageReceiverImpl implements MessageReceiver {
private static final Logger LOGGER = Logger.getLogger(MessageReceiverImpl.class);
private final LogSave logSave = MatchSave.getInstance();
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
ByteString data = message.getData();
// Get the schema encoding type.
String encoding = message.getAttributesMap().get("googclient_schemaencoding");
Req.LogReq logReqMsg = null;
try {
switch (encoding) {
case "BINARY":
logReqMsg = Req.LogReq.parseFrom(data);
break;
case "JSON":
Req.LogReq.Builder msgBuilder = Req.LogReq.newBuilder();
JsonFormat.parser().merge(data.toStringUtf8(), msgBuilder);
logReqMsg = msgBuilder.build();
break;
}
LOGGER.info((JsonFormat.printer().omittingInsignificantWhitespace().print(logReqMsg)));
logSave.addLogMsg(battleLogMsg);
} catch (InvalidProtocolBufferException e) {
e.printStackTrace();
}
consumer.ack();
}
}
Req.LogReq here is a proto message. My dependencies:
// google cloud
implementation platform('com.google.cloud:libraries-bom:22.0.0')
implementation 'com.google.cloud:google-cloud-pubsub'
implementation group: 'com.google.protobuf', name: 'protobuf-java-util', version: '3.17.2'
And the call logSave.addLogMsg(battleLogMsg); adds the message to a CopyOnWriteArrayList.
I need to migrate the Kinesis library to version 2.2.11, so I followed the tutorial: https://docs.aws.amazon.com/streams/latest/dev/kcl-migration.html
I need to run multiple instances of my consumer app, so every one of them needs a unique application name in order to have a separate lease table in DynamoDB.
When initializing the consumer, Kinesis runs DynamoDBLeaseRefresher.createLeaseTableIfNotExists, which checks whether a table needs to be created for this application name and creates one if it cannot be found.
So 2 operations are performed:
DescribeTable - it returns the table info or throws a ResourceNotFoundException,
if needed - CreateTable.
The problem for me is with the DescribeTable method. When I look for an existing table, it is returned with no problem. But when I look for a non-existent table, it throws the ResourceNotFoundException - so far, so good. Unfortunately, it then gets wrapped and becomes:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: software.amazon.awssdk.awscore.exception.AwsServiceException$Builder.extendedRequestId(Ljava/lang/String;)Lsoftware/amazon/awssdk/awscore/exception/AwsServiceException$Builder;
and the app, expecting a ResourceNotFoundException, gets something different instead and crashes.
The wrapped exception message is a bit misleading ("Unable to execute HTTP request"), since the request was performed and returned the proper message: "Resource not found".
The funny thing is that it sometimes works: the exception does not get wrapped, the CreateTable operation is performed, and the consumer starts properly.
For now I have made a workaround where I create the table myself before the LeaseCoordinator is initialized, so it always finds an existing table.
Here is my code:
public KinesisStreamReaderService(String streamName, String applicationName, String regionName) {
KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
.credentialsProvider(EnvironmentVariableCredentialsProvider.create())
.region(Region.of(connectionProperties.getRegion()))
.httpClientBuilder(createHttpClientBuilder())
.build();
DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(regionName)).build();
CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(regionName)).build();
// if(!dynamoDbTableExists(dynamoClient, applicationName)) {
// createDynamoDbTable(dynamoClient, applicationName);
// }
ConfigsBuilder configsBuilder = new ConfigsBuilder(streamName, applicationName, kinesisClient,
dynamoClient, cloudWatchClient, workerId(), KinesisReaderProcessor::new);
configsBuilder.retrievalConfig().initialPositionInStreamExtended(
InitialPositionInStreamExtended.newInitialPosition(
InitialPositionInStream.LATEST));
scheduler = new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig(),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig(),
configsBuilder.processorConfig(),
configsBuilder.retrievalConfig().retrievalSpecificConfig(new PollingConfig(streamName, kinesisClient))
);
}
private void createDynamoDbTable(DynamoDbAsyncClient dynamoClient, String applicationName) {
log.info("Creating new lease table: {}", applicationName);
CompletableFuture<CreateTableResponse> createTableFuture = dynamoClient
.createTable(CreateTableRequest.builder()
.provisionedThroughput(ProvisionedThroughput.builder().readCapacityUnits(10L).writeCapacityUnits(10L).build())
.tableName(applicationName)
.keySchema(KeySchemaElement.builder().attributeName("leaseKey").keyType(KeyType.HASH).build())
.attributeDefinitions(AttributeDefinition.builder().attributeName("leaseKey").attributeType(
ScalarAttributeType.S).build())
.build());
try {
CreateTableResponse createTableResponse = createTableFuture.get();
log.debug("Created new lease table: {}", createTableResponse.tableDescription().tableName());
} catch (InterruptedException | ExecutionException e) {
throw new DataStreamException(e.getMessage(), e);
}
}
private boolean dynamoDbTableExists(DynamoDbAsyncClient dynamoClient, String tableName) {
CompletableFuture<DescribeTableResponse> describeTableResponseCompletableFutureNew = dynamoClient
.describeTable(DescribeTableRequest.builder()
.tableName(tableName).build());
try {
DescribeTableResponse describeTableResponseNew = describeTableResponseCompletableFutureNew
.get();
return nonNull(describeTableResponseNew);
} catch (InterruptedException | ExecutionException e) {
log.info(e.getMessage(), e);
}
return false;
}
private static String workerId() {
String workerId;
try {
workerId = format("%s_%s", getLocalHost().getCanonicalHostName(), randomUUID().toString());
} catch (UnknownHostException e) {
workerId = randomUUID().toString();
}
return workerId;
}
@Override
public void read(Consumer<String> consumer) {
this.consumer = consumer;
scheduler.run();
}
private class KinesisReaderProcessor implements ShardRecordProcessor {
private String shardId;
@Override
public void initialize(InitializationInput initializationInput) {
this.shardId = initializationInput.shardId();
log.info("Initializing record processor for shard: {}", shardId);
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
log.debug("Checking shard {} for new records", shardId);
List<KinesisClientRecord> records = processRecordsInput.records();
if (!records.isEmpty()) {
log.debug("Processing {} records from kinesis stream shard {}", records.size(), shardId);
records.forEach(record -> {
String json = UTF_8.decode(record.data()).toString();
log.info(json);
consumer.accept(json);
});
}
}
@Override
public void leaseLost(LeaseLostInput leaseLostInput) {
log.info("Record processor has lost lease, terminating");
}
@Override
public void shardEnded(ShardEndedInput shardEndedInput) {
try {
shardEndedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
@Override
public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
try {
shutdownRequestedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
}
}
Am I missing some configuration for the scheduler or something? Why is it sometimes working?
Thanks
Edit:
The problem is this block of code in DynamoDBLeaseRefresher.tableStatus(), which is invoked to check whether the table exists:
DescribeTableResponse result;
try {
try {
result =
(DescribeTableResponse)FutureUtils.resolveOrCancelFuture(this.dynamoDBClient.describeTable(request), this.dynamoDbRequestTimeout);
} catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
} catch (InterruptedException var6) {
throw new DependencyException(var6);
}
} catch (ResourceNotFoundException var7) {
log.debug("Got ResourceNotFoundException for table {} in leaseTableExists, returning false.", this.table);
return null;
}
and in my case it should get a ResourceNotFoundException if the table is not found, but as I said the exception gets wrapped into a CompletionException before it reaches the appropriate catch block, so it is caught by this code instead:
catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
This happens 20 times in a loop while trying to initialize the LeaseCoordinator, and then it just stops trying to initialize the connection. (As mentioned above, it works occasionally, which makes it even stranger to me.)
With my workaround it only needs one try to get initialized.
You don't need to create the lease table manually - DynamoDBLeaseCoordinator creates one on initialization if it does not exist and waits until the table exists:
@Override
public void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
final boolean newTableCreated =
leaseRefresher.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
if (newTableCreated) {
log.info("Created new lease table for coordinator with initial read capacity of {} and write capacity of {}.",
initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
}
// Need to wait for table in active state.
final long secondsBetweenPolls = 10L;
final long timeoutSeconds = 600L;
final boolean isTableActive = leaseRefresher.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
if (!isTableActive) {
throw new DependencyException(new IllegalStateException("Creating table timeout"));
}
}
The issue in your case, I think, is that the table is eventually created, and you should probably check periodically until it appears - just as DynamoDBLeaseCoordinator#initialize() does.
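If you prefer to keep your workaround, a small polling helper along these lines could wait for the lease table to become ACTIVE before starting the Scheduler (a sketch; the poll interval and timeout are arbitrary):

private void waitUntilTableActive(DynamoDbAsyncClient dynamoClient, String tableName) throws InterruptedException {
    // poll DescribeTable until the lease table reports ACTIVE, or give up after ~10 minutes
    for (int attempt = 0; attempt < 60; attempt++) {
        try {
            DescribeTableResponse response = dynamoClient
                    .describeTable(DescribeTableRequest.builder().tableName(tableName).build())
                    .get();
            if (response.table().tableStatus() == TableStatus.ACTIVE) {
                return;
            }
        } catch (ExecutionException e) {
            // table does not exist yet (ResourceNotFoundException wrapped by the future) - keep polling
        }
        Thread.sleep(10_000L);
    }
    throw new IllegalStateException("Lease table " + tableName + " did not become active in time");
}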
I'm trying to build an integration scenario like this: Rabbit -> AmqpInboundChannelAdapter(AcknowledgeMode.MANUAL) -> DirectChannel -> AggregatingMessageHandler -> DirectChannel -> AmqpOutboundEndpoint.
I want to aggregate messages in memory and release them once I have aggregated 10 messages, or when a timeout of 10 seconds is reached. I suppose this config is OK:
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler aggregator(){
AggregatingMessageHandler aggregatingMessageHandler = new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(), new SimpleMessageStore(10));
aggregatingMessageHandler.setCorrelationStrategy(new HeaderAttributeCorrelationStrategy(AmqpHeaders.CORRELATION_ID));
//default false
aggregatingMessageHandler.setExpireGroupsUponCompletion(true); //when grp released (using strategy), remove group so new messages in same grp create new group
aggregatingMessageHandler.setSendPartialResultOnExpiry(true); //when expired because timeout and not because of strategy, still send messages grouped so far
aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(TimeUnit.SECONDS.toMillis(10))); //timeout after X
//timeout is checked only when new message arrives!!
aggregatingMessageHandler.setReleaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(10, TimeUnit.SECONDS.toMillis(10)));
aggregatingMessageHandler.setOutputChannel(amqpOutputChannel());
return aggregatingMessageHandler;
}
Now, my question is: is there any easier way to manually ack messages other than creating my own implementation of AggregatingMessageHandler like this:
public class ManualAckAggregatingMessageHandler extends AbstractCorrelatingMessageHandler {
...
private void ackMessage(Channel channel, Long deliveryTag){
try {
Assert.notNull(channel, "Channel must be provided");
Assert.notNull(deliveryTag, "Delivery tag must be provided");
channel.basicAck(deliveryTag, false);
}
catch (IOException e) {
throw new MessagingException("Cannot ACK message", e);
}
}
@Override
protected void afterRelease(MessageGroup messageGroup, Collection<Message<?>> completedMessages) {
Object groupId = messageGroup.getGroupId();
MessageGroupStore messageStore = getMessageStore();
messageStore.completeGroup(groupId);
messageGroup.getMessages().forEach(m -> {
Channel channel = (Channel)m.getHeaders().get(AmqpHeaders.CHANNEL);
Long deliveryTag = (Long)m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
ackMessage(channel, deliveryTag);
});
if (this.expireGroupsUponCompletion) {
remove(messageGroup);
}
else {
if (messageStore instanceof SimpleMessageStore) {
((SimpleMessageStore) messageStore).clearMessageGroup(groupId);
}
else {
messageStore.removeMessagesFromGroup(groupId, messageGroup.getMessages());
}
}
}
}
UPDATE
I managed to do it after your help. The most important parts: the connection factory must have factory.setPublisherConfirms(true), and the AmqpOutboundEndpoint must have these two settings: outboundEndpoint.setConfirmAckChannel(manualAckChannel()) and outboundEndpoint.setConfirmCorrelationExpressionString("#root"). Here is the implementation of the rest of the classes:
public class ManualAckPair {
private Channel channel;
private Long deliveryTag;
public ManualAckPair(Channel channel, Long deliveryTag) {
this.channel = channel;
this.deliveryTag = deliveryTag;
}
public void basicAck(){
try {
this.channel.basicAck(this.deliveryTag, false);
}
catch (IOException e) {
e.printStackTrace();
}
}
}
public abstract class AbstractManualAckAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {
public static final String MANUAL_ACK_PAIRS = PREFIX + "manualAckPairs";
@Override
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
List<ManualAckPair> manualAckPairs = new ArrayList<>();
group.getMessages().forEach(m -> {
Channel channel = (Channel)m.getHeaders().get(AmqpHeaders.CHANNEL);
Long deliveryTag = (Long)m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
manualAckPairs.add(new ManualAckPair(channel, deliveryTag));
});
aggregatedHeaders.put(MANUAL_ACK_PAIRS, manualAckPairs);
return aggregatedHeaders;
}
}
and
@Service
public class ManualAckServiceActivator {
@ServiceActivator(inputChannel = "manualAckChannel")
public void handle(@Header(MANUAL_ACK_PAIRS) List<ManualAckPair> manualAckPairs) {
manualAckPairs.forEach(manualAckPair -> {
manualAckPair.basicAck();
});
}
}
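For completeness, the connection factory and outbound endpoint settings mentioned at the start of this update might be wired up roughly like this (a sketch; the host, exchange and routing key are assumptions):

@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory("localhost"); // host is an assumption
    factory.setPublisherConfirms(true); // required so the outbound endpoint emits confirm acks
    return factory;
}

@Bean
@ServiceActivator(inputChannel = "amqpOutputChannel")
public AmqpOutboundEndpoint amqpOutboundEndpoint(AmqpTemplate amqpTemplate) {
    AmqpOutboundEndpoint outboundEndpoint = new AmqpOutboundEndpoint(amqpTemplate);
    outboundEndpoint.setExchangeName("some-exchange");   // assumption
    outboundEndpoint.setRoutingKey("some-routing-key");  // assumption
    outboundEndpoint.setConfirmAckChannel(manualAckChannel());
    outboundEndpoint.setConfirmCorrelationExpressionString("#root");
    return outboundEndpoint;
}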
Right, you don't need such complex logic for the aggregator.
You can simply acknowledge the messages after the aggregator release - in the service activator between the aggregator and that AmqpOutboundEndpoint.
And right, you have to use basicAck() there with the multiple flag set to true:
@param multiple true to acknowledge all messages up to and
Well, for that purpose you definitely need a custom MessageGroupProcessor to extract the highest AmqpHeaders.DELIVERY_TAG for the whole batch and set it as a header for the output aggregated message.
You might just extend DefaultAggregatingMessageGroupProcessor and override its aggregateHeaders():
/**
* This default implementation simply returns all headers that have no conflicts among the group. An absent header
* on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
* more advanced conflict-resolution strategies if necessary.
*
* @param group The message group.
* @return The aggregated headers.
*/
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
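A rough sketch of such an override (only an illustration; the class name is mine, and it assumes every message in the group arrived on the same consumer channel):

public class HighestDeliveryTagGroupProcessor extends DefaultAggregatingMessageGroupProcessor {

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> headers = super.aggregateHeaders(group);
        // keep the highest delivery tag so one basicAck(tag, true) covers the whole batch
        group.getMessages().stream()
                .map(m -> (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG))
                .max(Long::compare)
                .ifPresent(tag -> headers.put(AmqpHeaders.DELIVERY_TAG, tag));
        // assumption: all messages of the group came in on the same channel
        headers.put(AmqpHeaders.CHANNEL, group.getOne().getHeaders().get(AmqpHeaders.CHANNEL));
        return headers;
    }
}

A service activator after the aggregator can then read those two headers and call channel.basicAck(deliveryTag, true) once per released group.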
I'm following the Akka Java websocket tutorial in an attempt to create a websocket server. I want to implement 2 extra features:
1 - Being able to display the number of connected clients - but the result is always 0 or 1, even when I know I have hundreds of concurrently connected clients.
2 - Websocket communication is bidirectional. Currently the server only responds with a message when the client sends one. How do I initiate sending a message from the server to the client?
Here's the original Akka Java server example code with minimal modifications for my client-counting implementation:
public class websocketServer {
private static AtomicInteger connections = new AtomicInteger(0);//connected clients count.
public static class MyTimerTask extends TimerTask {
//called every second to display number of connected clients.
@Override
public void run() {
System.out.println("Concurrent connections: " + connections);
}
}
//#websocket-handling
public static HttpResponse handleRequest(HttpRequest request) {
HttpResponse result;
connections.incrementAndGet();
if (request.getUri().path().equals("/greeter")) {
final Flow<Message, Message, NotUsed> greeterFlow = greeter();
result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
} else {
result = HttpResponse.create().withStatus(413);
}
connections.decrementAndGet();
return result;
}
public static void main(String[] args) throws Exception {
ActorSystem system = ActorSystem.create();
TimerTask timerTask = new MyTimerTask();
Timer timer = new Timer(true);
timer.scheduleAtFixedRate(timerTask, 0, 1000);
try {
final Materializer materializer = ActorMaterializer.create(system);
final Function<HttpRequest, HttpResponse> handler = request -> handleRequest(request);
CompletionStage<ServerBinding> serverBindingFuture =
Http.get(system).bindAndHandleSync(
handler, ConnectHttp.toHost("****", 1183), materializer);
// will throw if binding fails
serverBindingFuture.toCompletableFuture().get(1, TimeUnit.SECONDS);
System.out.println("Press ENTER to stop.");
new BufferedReader(new InputStreamReader(System.in)).readLine();
timer.cancel();
} catch (Exception e){
e.printStackTrace();
}
finally {
system.terminate();
}
}
//#websocket-handler
/**
* A handler that treats incoming messages as a name,
* and responds with a greeting to that name
*/
public static Flow<Message, Message, NotUsed> greeter() {
return
Flow.<Message>create()
.collect(new JavaPartialFunction<Message, Message>() {
@Override
public Message apply(Message msg, boolean isCheck) throws Exception {
if (isCheck) {
if (msg.isText()) {
return null;
} else {
throw noMatch();
}
} else {
return handleTextMessage(msg.asTextMessage());
}
}
});
}
public static TextMessage handleTextMessage(TextMessage msg) {
if (msg.isStrict()) // optimization that directly creates a simple response...
{
return TextMessage.create("Hello " + msg.getStrictText());
} else // ... this would suffice to handle all text messages in a streaming fashion
{
return TextMessage.create(Source.single("Hello ").concat(msg.getStreamedText()));
}
}
//#websocket-handler
}
Addressing your 2 bullet points below:
1 - you need to attach your metrics to the Message flow - and not to the HttpRequest flow - to effectively count the active connections. You can do this by using watchTermination. A code example for the handleRequest method is below:
public static HttpResponse handleRequest(HttpRequest request) {
HttpResponse result;
if (request.getUri().path().equals("/greeter")) {
final Flow<Message, Message, NotUsed> greeterFlow = greeter().watchTermination((nu, cd) -> {
connections.incrementAndGet();
cd.whenComplete((done, throwable) -> connections.decrementAndGet());
return nu;
});
result = WebSocket.handleWebSocketRequestWith(request, greeterFlow);
} else {
result = HttpResponse.create().withStatus(413);
}
return result;
}
2 - for the server to independently send messages you could create its Message Flow using Flow.fromSinkAndSource. Example below (this will only send one message):
public static Flow<Message, Message, NotUsed> greeter() {
return Flow.fromSinkAndSource(Sink.ignore(),
Source.single(new akka.http.scaladsl.model.ws.TextMessage.Strict("Hello!"))
);
}
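If the server should keep pushing messages rather than send just one, the same Flow.fromSinkAndSource shape works with any Source, for example a periodic tick (a sketch; it assumes an Akka version where Source.tick accepts java.time.Duration, and the interval is arbitrary):

public static Flow<Message, Message, NotUsed> greeter() {
    // ignore whatever the client sends and emit a server-initiated message every 5 seconds
    Source<Message, ?> serverPush = Source.tick(
            Duration.ofSeconds(1), Duration.ofSeconds(5),
            (Message) TextMessage.create("Hello from the server"));
    return Flow.fromSinkAndSource(Sink.<Message>foreach(clientMessage -> { }), serverPush);
}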
In the handleRequest method you increment and then immediately decrement the connections counter, so the net value is always 0.
public static HttpResponse handleRequest(HttpRequest request) {
...
connections.incrementAndGet();
...
connections.decrementAndGet();
return result;
}