How to pace the consumption of an SQS queue using Spring Integration - Java

I am trying to set up an integration flow to consume messages from an Amazon SQS queue, and it's working fine so far. But I would like to pace the number of messages per minute or second, e.g. 20 messages per minute.
Here is the definition of my SQS listener bean:
@Bean
public MessageProducer mySqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(this.amazonSqs, queueName);
    adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
    adapter.setVisibilityTimeout(TIMEOUT_VISIBILITY);
    adapter.setWaitTimeOut(TIMEOUT_MESSAGE_WAIT);
    adapter.setMaxNumberOfMessages(prefetch);
    adapter.setOutputChannel(processMessageChannel());
    return adapter;
}
As you can see, I'm setting the maximum number of messages to fetch per poll, but how do I set the delay between polls?
With a regular JMS queue I could use a Jms.inboundAdapter with a custom poller, but it seems that with SqsMessageDrivenChannelAdapter I can't set any poll timer value.
Maybe I could use a MessageProducer other than SqsMessageDrivenChannelAdapter, but which one?
Is it possible to set up a Jms.inboundAdapter using SQS?

Spring Integration's SqsMessageDrivenChannelAdapter is a message-driven active component. It is based on the SimpleMessageListenerContainer from the Spring Cloud AWS project, which has a long-running while() loop calling AmazonSQS.receiveMessage(). The logic in that loop isn't too complicated:
try {
    ReceiveMessageResult receiveMessageResult = getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest());
    CountDownLatch messageBatchLatch = new CountDownLatch(receiveMessageResult.getMessages().size());
    for (Message message : receiveMessageResult.getMessages()) {
        if (isQueueRunning()) {
            MessageExecutor messageExecutor = new MessageExecutor(this.logicalQueueName, message, this.queueAttributes);
            getTaskExecutor().execute(new SignalExecutingRunnable(messageBatchLatch, messageExecutor));
        } else {
            messageBatchLatch.countDown();
        }
    }
    try {
        messageBatchLatch.await();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
} catch (Exception e) {
    // ... (exception handling continues in the container source)
As you can see, a messageBatchLatch is created there and awaited after the loop.
Each message is processed by its own SignalExecutingRunnable, which counts the latch down when its MessageExecutor finishes. So, what you would like to do may be achieved with an artificial Thread.sleep() in the target service method, to get a longer interval between SQS polls.
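For illustration, a minimal sketch of that workaround (the channel name comes from the question; process() is a hypothetical placeholder for the real business logic). Because the container awaits the whole batch before the next receiveMessage() call, sleeping in the handler stretches the time between polls:

@ServiceActivator(inputChannel = "processMessageChannel")
public void handle(Message<?> message) throws InterruptedException {
    process(message);     // placeholder for the real business logic
    Thread.sleep(3_000);  // roughly 20 messages per minute per consumer thread
}

Note that with setMaxNumberOfMessages(prefetch) greater than 1, the batch runs on parallel executor threads, so the effective rate is closer to prefetch messages per sleep interval.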
But I hear your request, and we indeed have to add something like this:
/**
 * The sleep interval in milliseconds used in the main loop between shards polling cycles.
 * Defaults to {@code 1000}, minimum {@code 250}.
 * @param idleBetweenPolls the interval to sleep between shards polling cycles.
 */
public void setIdleBetweenPolls(int idleBetweenPolls) {
    this.idleBetweenPolls = Math.max(250, idleBetweenPolls);
}
I did this for the KinesisMessageDrivenChannelAdapter, but here we would have to ask Spring Cloud AWS to do the same for the SimpleMessageListenerContainer.

Related

Enable sequential Kafka asynchronous send() for 2 Kafka producers configured in 2 regions

Scenario: two Kafka producers configured with different bootstrap servers (different regions). We try to send the message to the primary cluster first, and if the primary cluster is down (due to a timeout or an exception), we switch to the secondary cluster.
Can this be done via an asynchronous call, i.e. check the primary cluster's health and, if unable to send the message, switch to the other cluster? Currently I'm doing it in a sync way, as it's a blocking call for web threads:
for (int i = 0; i < delegateList.size(); i++) {
    T delegate = delegateList.get(i);
    ClusterHealthCheck clusterHealth = this.clusterHealth.get(i);
    if (!clusterHealth.isHealthy()) {
        continue;
    }
    try {
        // a proxy is used around the Kafka producer send() API
        Object result = method.invoke(delegate, args);
        if (result instanceof Future) {
            ((Future<?>) result).get(); // block until the send is acknowledged
        }
        return result;
    } catch (Exception e) {
        // primary failed: fall through to the next (secondary) delegate
    }
}
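One non-blocking alternative to the loop above is to chain the send futures instead of calling get(). A hedged sketch, assuming Spring Kafka 3.x (where KafkaTemplate.send() returns a CompletableFuture) and Java 12+ (for exceptionallyCompose); the template and topic names are illustrative:

public CompletableFuture<SendResult<String, String>> sendWithFailover(
        KafkaTemplate<String, String> primaryTemplate,
        KafkaTemplate<String, String> secondaryTemplate,
        String topic, String payload) {
    // try the primary cluster first; on timeout/exception retry on the secondary,
    // all without blocking the calling web thread
    return primaryTemplate.send(topic, payload)
            .exceptionallyCompose(ex -> secondaryTemplate.send(topic, payload));
}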

How to verify that a Kafka topic is not empty, i.e. has at least 1 message?

I am writing a little Kafka metrics exporter (yes, there are loads available, like Prometheus exporters, but I want a lightweight custom one; kindly excuse me on this).
As part of this I would like to know as soon as the first message is received in a Kafka topic (or that the topic has messages). I am using Spring Boot and Kafka.
I have the code below, which gives the name of the topic and the number of partitions. I want to know whether the topic has messages. Kindly let me know how I can get this stat. Any lead is much appreciated!
@ReadOperation
public List<TopicManifest> kafkaTopic() throws ExecutionException, InterruptedException {
    ListTopicsOptions listTopicsOptions = new ListTopicsOptions();
    listTopicsOptions.listInternal(true);
    ListTopicsResult listTopicsResult = adminClient.listTopics(listTopicsOptions);
    Set<String> topics = listTopicsResult.names().get().stream()
            .filter(topic -> !topic.startsWith("_"))
            .collect(Collectors.toSet());
    System.out.println(topics);
    DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(topics);
    Map<String, KafkaFuture<TopicDescription>> topicNameValues = describeTopicsResult.topicNameValues();
    List<TopicManifest> topicManifests = topicNameValues.entrySet().stream().map(entry -> {
        try {
            TopicDescription topicDescription = entry.getValue().get();
            return TopicManifest.builder().name(entry.getKey())
                    .noOfPartitions(topicDescription.partitions().size())
                    .build();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        return null;
    }).collect(Collectors.toList());
    return topicManifests;
}
Create a KafkaConsumer and call endOffsets (the consumer does not need to be subscribed to the topic(s)).
@Bean
ApplicationRunner runner1(ConsumerFactory<String, String> cf) {
    return args -> {
        try (Consumer<String, String> consumer = cf.createConsumer()) {
            System.out.println(consumer.endOffsets(List.of(new TopicPartition("ktest29", 0),
                    new TopicPartition("ktest29", 1),
                    new TopicPartition("ktest29", 2))));
        }
    };
}
Offsets stored in the topic never decrease, and getting the end offset doesn't guarantee you have a non-empty topic (the start and end offsets for the topic partitions could be the same).
Instead, you will still create a consumer, but set
auto.offset.reset=earliest
group.id=UUID.randomUUID()
Then subscribe and run
ConsumerRecords records = consumer.poll(Duration.ofSeconds(2));
boolean empty = records.count() == 0;
By setting auto.offset.reset=earliest with a random group, you are guaranteed to start at the earliest offset and seek to the first available record, if it exists; at that point you can poll for any number of records, to see if any are returned within the specified timeout.
This should work for regular and compacted topics without needing to check committed offsets.
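Putting the pieces together, a minimal sketch of that check using the standard org.apache.kafka.clients.consumer classes (the bootstrap address is an assumption; the topic name reuses ktest29 from the earlier example):

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assumed address
props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString()); // random group
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("ktest29"));
    // a single bounded poll from the earliest offset answers the question
    boolean empty = consumer.poll(Duration.ofSeconds(2)).count() == 0;
    System.out.println("topic empty: " + empty);
}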

How to check for an empty/inactive GCP Pub/Sub subscription

I have an application that subscribes to a topic in GCP, and when there are messages there it downloads them and sends them to a queue on ActiveMQ.
To make this process fast I am using an ExecutorService and launching multiple threads for sending messages to ActiveMQ. Since the subscription is supposed to be an ongoing task, I am putting the code in a while(true) loop, and hence I can't shut down the ExecutorService in the normal fashion, as I would be creating and shutting down the executor service on every loop iteration.
I am searching for an elegant way to shut down the ExecutorService when the subscription is empty (no data in the topic) for some inactivity window, say 2 or 3 minutes, and then of course start it again when there is new data.
The following is my current idea, which I don't like: just a counter that I increment when the subscription retrieves no data.
I am looking for a more elegant way of doing this.
@Service
@Slf4j
public class PubSubSubscriberService {

    private static final int EMPTY_SUBSCRIPTION_COUNTER = 4;
    private static final Logger businessLogger = LoggerFactory.getLogger("BusinessLogger");

    private Queue<PubsubMessage> messages = new ConcurrentLinkedQueue<>();

    public void pullMessagesAndSendToBroker(CompositeConfigurationElement cce) {
        var patchSize = cce.getSubscriber().getPatchSize();
        var nThreads = cce.getSubscriber().getSendingParallelThreads();
        var scheduledTasks = 0;
        var subscribeCounter = 0;
        ThreadPoolExecutor threadPoolExecutor = null;
        while (true) {
            try {
                if (subscribeCounter < EMPTY_SUBSCRIPTION_COUNTER) {
                    log.info("Creating Executor Service for uploading to broker with a thread pool of Size: " + nThreads);
                    threadPoolExecutor = getThreadPoolExecutor(nThreads);
                }
                var subscriber = this.getSubscriber(cce);
                this.startSubscriber(subscriber, cce);
                this.checkActivity(threadPoolExecutor, subscribeCounter++);
                // send patches of {{ messagesPerIteration }}
                while (this.messages.size() > patchSize) {
                    if (poolIsReady(threadPoolExecutor, nThreads)) {
                        UploadTask task = new UploadTask(this.messages, cce, cf, patchSize);
                        threadPoolExecutor.submit(task);
                        scheduledTasks++;
                    }
                    subscribeCounter = 0;
                }
                // send the rest
                if (this.messages.size() > 0) {
                    UploadTask task = new UploadTask(this.messages, cce, cf, patchSize);
                    threadPoolExecutor.submit(task);
                    scheduledTasks++;
                    subscribeCounter = 0;
                }
                if (scheduledTasks > 0) {
                    businessLogger.info("Scheduled " + scheduledTasks + " upload tasks of size upto: " + patchSize
                            + ", preparing to start subscribing for 30 more sec");
                    scheduledTasks = 0;
                }
            } catch (Exception e) {
                e.printStackTrace();
                businessLogger.error(e.getMessage());
            }
        }
    }
Your pool takes little space and memory, and consumes almost no CPU when it's not in use. Set a max limit on your pool capacity and use it without trying to downscale it. If you have too many messages to process, the tasks are queued until an executor thread is free to complete them.
If scaling up and down is a concern, your design could be reviewed: instead of an executor pool internal to the pod, you could trigger an event in your cluster and process the messages in parallel on other pods. Those pods can scale up and down according to the traffic (have a look at Knative).
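For illustration, a hedged sketch of that advice (the bounds and timeouts are assumptions): one long-lived, bounded pool whose idle threads expire on their own, so nothing needs to be shut down and recreated inside the while(true) loop.

ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
        nThreads, nThreads,                  // fixed upper bound on threads
        2, TimeUnit.MINUTES,                 // idle threads die after ~2 minutes
        new LinkedBlockingQueue<>(1_000));   // excess tasks queue here until a thread frees up
threadPoolExecutor.allowCoreThreadTimeOut(true); // let even core threads idle out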

How to move an error message to the Azure dead-letter queue (topics - subscription) using Java?

I need to send my messages to the dead-letter queue from an Azure topic subscription in case of any error while reading and processing a message from the topic. So I tried testing by pushing a message directly to the DLQ.
My sample code looks like this:
static void sendMessage() {
    // create a Service Bus sender client for the topic
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
            .connectionString(connectionString)
            .sender()
            .topicName(topicName)
            .buildClient();
    // send one message to the topic
    senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
}

static void receiveAsync() {
    ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
            .connectionString(connectionString)
            .receiver()
            .topicName(topicName)
            .subscriptionName(subName)
            .buildAsyncClient();
    // receive() operation continuously fetches messages until the subscription is disposed.
    // The stream is infinite, and completes when the subscription or receiver is closed.
    Disposable subscription = receiver.receiveMessages().subscribe(message -> {
        System.out.printf("Id: %s%n", message.getMessageId());
        System.out.printf("Contents: %s%n", message.getBody().toString());
    }, error -> {
        System.err.println("Error occurred while receiving messages: " + error);
    }, () -> {
        System.out.println("Finished receiving messages.");
    });
    // Continue application processing. When you are finished receiving messages, dispose of the subscription.
    subscription.dispose();
    // When you are done using the receiver, dispose of it.
    receiver.close();
}
I tried getting the dead-letter queue path:
String dlq = EntityNameHelper.formatDeadLetterPath(topicName);
That gave me a path like "mytopic/$deadletterqueue".
But it does not work when passing that path as the topic name; it throws an entity-not-found exception.
Can anyone please advise me on this?
References:
How to move error message to Azure dead letter queue using Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues#moving-messages-to-the-dlq
How to push the failure messages to Azure service bus Dead Letter Queue in Spring Boot Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions-legacy#receive-messages-from-a-subscription
You probably know that a message will be automatically moved to the dead-letter queue if you throw exceptions during processing and the maximum delivery count is exceeded. If you want to explicitly move a message to the DLQ, you can do that as well. A common case for this is when you know that the message can never succeed because of its contents.
You cannot send new messages directly to the DLQ, because then you would have two messages in the system. You need to call a special operation on the parent entity. Also, <topic path>/$deadletterqueue does not work, because that would be the DLQ of all subscriptions. The correct entity path is built like this:
<queue path>/$deadletterqueue
<topic path>/Subscriptions/<subscription path>/$deadletterqueue
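With the azure-messaging-servicebus client from the question, explicitly dead-lettering a message looks roughly like this (a hedged sketch; process() is a hypothetical placeholder for the failing business logic):

ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .buildClient();
for (ServiceBusReceivedMessage message : receiver.receiveMessages(10)) {
    try {
        process(message);             // hypothetical processing step
        receiver.complete(message);   // success: remove from the subscription
    } catch (Exception e) {
        receiver.deadLetter(message); // failure: moves it to <topic>/Subscriptions/<sub>/$deadletterqueue
    }
}
receiver.close();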
https://github.com/Azure/azure-service-bus/blob/master/samples/Java/azure-servicebus/DeadletterQueue/src/main/java/com/microsoft/azure/servicebus/samples/deadletterqueue/DeadletterQueue.java
This sample code is for queues, but you should be able to adapt it to topics quite easily:
// register the message handler callback
receiver.registerMessageHandler(
        new IMessageHandler() {
            // callback invoked when the message handler loop has obtained a message
            public CompletableFuture<Void> onMessageAsync(IMessage message) {
                // the received message is passed to the callback
                if (message.getLabel() != null &&
                        message.getContentType() != null &&
                        message.getLabel().contentEquals("Scientist") &&
                        message.getContentType().contentEquals("application/json")) {
                    // ...
                } else {
                    return receiver.deadLetterAsync(message.getLockToken());
                }
                return receiver.completeAsync(message.getLockToken());
            }

            // callback invoked when the message handler has an exception to report
            public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
                System.out.printf(exceptionPhase + "-" + throwable.getMessage());
            }
        },
        // 1 concurrent call, messages are not auto-completed, auto-renew duration of 1 minute
        new MessageHandlerOptions(1, false, Duration.ofMinutes(1)),
        executorService);
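To inspect what has landed in a subscription's DLQ with the newer SDK, the receiver builder can target the sub-queue directly rather than a hand-built path (a hedged sketch using the SubQueue option):

ServiceBusReceiverClient dlqReceiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .subQueue(SubQueue.DEAD_LETTER_QUEUE) // <topic>/Subscriptions/<sub>/$deadletterqueue
        .buildClient();
for (ServiceBusReceivedMessage dead : dlqReceiver.receiveMessages(10)) {
    System.out.printf("DLQ message %s: %s%n", dead.getMessageId(), dead.getBody());
    dlqReceiver.complete(dead); // remove from the DLQ once handled
}
dlqReceiver.close();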

Java WebSphere MQ

My aim is to put n messages in a for loop to a WebSphere MQ queue using the WebSphere MQ Java classes.
My Java program will run as a standalone program.
If any exception occurs in between, I need to roll back all the messages.
If no exception occurs, then I should commit all the messages.
The outside world should not see my messages in the queue until I have completely finished.
How do I achieve this?
Updated with sample code as per the reply from T.Rob:
Please check whether the sample code is fine.
Does setting MQPMO_SYNCPOINT only relate to my program's invocation? (Similar programs running in parallel will also be putting messages on the same queue, and those messages should not be affected by my program's syncpoint.)
public void sendMsg() {
    MQQueue queue = null;
    MQQueueManager queueManager = null;
    MQMessage mqMessage = null;
    MQPutMessageOptions pmo = null;
    System.out.println("Entering..");
    try {
        MQEnvironment.hostname = "x.x.x.x";
        MQEnvironment.channel = "xxx.SVRCONN";
        MQEnvironment.port = 9999;
        queueManager = new MQQueueManager("XXXQMANAGER");
        int openOptions = MQConstants.MQOO_OUTPUT;
        queue = queueManager.accessQueue("XXX_QUEUENAME", openOptions, null, null, null);
        pmo = new MQPutMessageOptions();
        // put under syncpoint: MQPMO_SYNCPOINT is the put-message option
        // (MQGMO_SYNCPOINT is its get-message counterpart)
        pmo.options = CMQC.MQPMO_SYNCPOINT;
        String input = "testing";
        System.out.println("sending messages....");
        for (int i = 0; i < 10; i++) {
            input = input + ": " + i;
            mqMessage = new MQMessage();
            mqMessage.writeString(input);
            System.out.println("Putting message: " + i);
            queue.put(mqMessage, pmo);
        }
        queueManager.commit();
        System.out.println("Exiting..");
    } catch (Exception e) {
        e.printStackTrace();
        try {
            System.out.println("rolling back messages");
            if (queueManager != null)
                queueManager.backout();
        } catch (MQException e1) {
            e1.printStackTrace();
        }
    } finally {
        try {
            if (queue != null)
                queue.close();
            if (queueManager != null)
                queueManager.disconnect();
        } catch (MQException e) {
            e.printStackTrace();
        }
    }
}
WMQ supports both local and global (XA) units of work. The local units of work are available simply by specifying the option. Global XA transactions require a transaction manager, as mentioned by keithkreissl in another answer.
For what you described, a POJO doing messaging under syncpoint, specify MQC.MQPMO_SYNCPOINT in your MQPutMessageOptions (MQC.MQGMO_SYNCPOINT is the get-side equivalent). When you are ready to commit, issue the MQQueueManager.commit() or MQQueueManager.backout() call.
Note that the response and doc provided by ggrandes refer to the JMS classes, not the Java classes. The Java classes use Java equivalents of the WMQ procedural API, can support many threads (doc) and even provide connection pooling (doc). Please refer to the Java documentation rather than the JMS documentation for the correct behavior. Also, I've linked to the WMQ V7.5 documentation, which goes with the latest WMQ Java V7.5 client. The later clients have a lot more local functionality (tracing, flexible install path, MQClient.ini, etc.) and work with back-level queue managers. It is highly recommended to use the latest client, and the download is free.
You only need to create a session with transactions enabled:
Session session;
// ...
boolean transacted = true;
session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);
try {
    // ...do things...
    session.commit();
} catch (Exception e) {
    session.rollback();
}
// ...
Warning: sessions are not thread-safe ;-)
Docs: WebSphere MQ / JMS
If you have access to a transaction manager and, more importantly, an XA transaction wired up to your MQ access, you can start a transaction at the beginning of your message processing, put all the messages on the queue, and then commit the transaction. With XA transactions, no messages become visible until the transaction commits. If you don't have access to that, you can do a little more plumbing: place your messages in a local data object, wrap your code in a try/catch, and if there are no exceptions, iterate through the local data object and send the messages. The issue with the latter approach is that it will commit all your other processing, but if a problem occurs while sending the messages, that other processing will not be rolled back.
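As a rough illustration of that second approach (a hedged sketch; doOtherProcessing() is a hypothetical step that also fills the buffer), the puts only happen after everything else has succeeded:

List<String> pending = new ArrayList<>();
try {
    doOtherProcessing(pending);  // hypothetical work that collects outbound payloads
    for (String body : pending) {
        MQMessage msg = new MQMessage();
        msg.writeString(body);
        queue.put(msg, pmo);     // pmo still requests MQPMO_SYNCPOINT, as above
    }
    queueManager.commit();       // the puts become visible all at once
} catch (Exception e) {
    try {
        queueManager.backout();  // none of the puts become visible
    } catch (MQException me) {
        me.printStackTrace();
    }
}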
