AWS SQS: how do we consume messages - Java

I want to convert one of my synchronous APIs into an asynchronous one, and I believe a queue is one way to do this: a publisher pushes (synchronously) a message onto the queue, which is then consumed from the queue by the consumer API.
I was curious to know the right way of consuming Amazon Simple Queue Service (SQS) messages. Can the queue call an API to deliver the message to it, or is polling the queue the only way? I believe that polling will keep our system busy waiting, so it would be best if the queue delivered the message to the API.
What are the possible ways to do this?

If you want to consume from SQS you have the following options:
Polling the queue with the SDK to consume messages
Using the Amazon SQS Java Messaging Library
Subscribing to an SNS topic
Using Lambda (see the sketch after this list).
If you intend to get responses back, you can also take advantage of virtual queues.
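Since the question asks whether the queue can deliver messages instead of being polled, the Lambda option is the closest match: with an SQS event source mapping, the service invokes your function with batches of messages, so your code never busy-waits. A minimal sketch, assuming the aws-lambda-java-core and aws-lambda-java-events dependencies; the class name is illustrative:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class SqsMessageHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage record : event.getRecords()) {
            // Process each record; Lambda deletes the messages from the queue
            // automatically when the invocation completes successfully.
            context.getLogger().log("Received: " + record.getBody());
        }
        return null;
    }
}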

In application.yml
sqs:
  region: ap-south-1
  accessKeyId: arunsinghgujjar
  secretAccessKey: jainpurwalearunsingh/saharanpursepauchepuna
cloud:
  aws:
    end-point:
      uri: https://arun-learningsubway-1.amazonaws.com/9876974864/learningsubway_SQS.fifo
queue:
  max-poll-time: 20
  max-messages: 10
  fetch-wait-on-error: 60
  enabled: true
  content: sqs
Write SQS client
public String sendMessage(MessageDistributionEvent messageDistributionEvent) {
    SendMessageResponse sendMessage = null;
    try {
        Map<String, MessageAttributeValue> attributes = new HashMap<>();
        // Build a "_"-separated recipient list for the message body
        StringBuilder recepList = new StringBuilder();
        for (Integer myInt : messageDistributionEvent.getRecipients()) {
            recepList.append("_").append(myInt);
        }
        SendMessageRequest sendMsgRequest = SendMessageRequest.builder()
                .queueUrl(url)
                .messageBody(messageDistributionEvent.getChannelId() + "_" + messageDistributionEvent.getMessageId() + recepList)
                .messageGroupId("1")
                .messageAttributes(attributes)
                .build();
        sendMessage = sqsClient.sendMessage(sendMsgRequest);
    } catch (Exception ex) {
        log.error("failed to send message", ex);
    }
    // Guard the failure case, otherwise this line throws a NullPointerException
    return sendMessage == null ? null : sendMessage.sequenceNumber();
}
Read Message from Queue
ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest.builder()
        .queueUrl(url)
        .waitTimeSeconds(maxPollTime)
        .maxNumberOfMessages(maxMessages)
        .messageAttributeNames("MessageLabel")
        .build();
List<Message> sqsMessages = sqsClient.receiveMessage(receiveMessageRequest).messages();
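One step the snippet above leaves out: after a message has been processed it must be deleted explicitly, otherwise it becomes visible again once the visibility timeout expires and will be received a second time. A minimal sketch, assuming the same sqsClient and url as above and the AWS SDK v2 imports:

for (Message message : sqsMessages) {
    // process message.body() here ...
    sqsClient.deleteMessage(DeleteMessageRequest.builder()
            .queueUrl(url)
            .receiptHandle(message.receiptHandle())
            .build());
}

Also note that waitTimeSeconds enables long polling, so the receive call blocks on the server side for up to 20 seconds instead of busy-waiting.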
Reference
https://learningsubway.com/read-write-data-into-aws-sqs-using-java/

Related

How to move error message to Azure dead letter queue (Topics - Subscription) using Java?

I need to send my messages to the dead letter queue from an Azure topic subscription in case of any error while reading and processing a message from the topic. So I tried pushing a message directly to the DLQ for testing.
My sample code looks like this:
static void sendMessage()
{
    // create a Service Bus sender client for the topic
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .topicName(topicName)
        .buildClient();
    // send one message to the topic
    senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
}

static void receiveAsync() {
    ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .buildAsyncClient();
    // receiveMessages() continuously fetches messages until the subscription is disposed.
    // The stream is infinite, and completes when the subscription or receiver is closed.
    Disposable subscription = receiver.receiveMessages().subscribe(message -> {
        System.out.printf("Id: %s%n", message.getMessageId());
        System.out.printf("Contents: %s%n", message.getBody().toString());
    }, error -> {
        System.err.println("Error occurred while receiving messages: " + error);
    }, () -> {
        System.out.println("Finished receiving messages.");
    });
    // Continue application processing. When you are finished receiving messages, dispose of the subscription.
    subscription.dispose();
    // When you are done using the receiver, dispose of it.
    receiver.close();
}
I tried getting the dead letter queue path:
String dlq = EntityNameHelper.formatDeadLetterPath(topicName);
which gave me a path like "mytopic/$deadletterqueue".
But it does not work when I pass that path as the topic name; it throws an entity-not-found exception.
Can anyone please advise me on this?
References:
How to move error message to Azure dead letter queue using Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues#moving-messages-to-the-dlq
How to push the failure messages to Azure service bus Dead Letter Queue in Spring Boot Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions-legacy#receive-messages-from-a-subscription
You probably know that a message is automatically moved to the dead letter queue if you throw exceptions during processing and the maximum delivery count is exceeded. If you want to explicitly move a message to the DLQ, you can do that as well. A common case for this is when you know the message can never succeed because of its contents.
You cannot send new messages directly to the DLQ, because then you would have two messages in the system. You need to call a special operation on the parent entity. Also, <topic path>/$deadletterqueue does not work, because that would be the DLQ of all subscriptions. The correct entity path is built like this:
<queue path>/$deadletterqueue
<topic path>/Subscriptions/<subscription path>/$deadletterqueue
https://github.com/Azure/azure-service-bus/blob/master/samples/Java/azure-servicebus/DeadletterQueue/src/main/java/com/microsoft/azure/servicebus/samples/deadletterqueue/DeadletterQueue.java
This sample code is for queues, but you should be able to adapt it to topics quite easily:
// register the RegisterMessageHandler callback
receiver.registerMessageHandler(
    new IMessageHandler() {
        // callback invoked when the message handler loop has obtained a message
        public CompletableFuture<Void> onMessageAsync(IMessage message) {
            // the received message is passed to the callback
            if (message.getLabel() != null &&
                message.getContentType() != null &&
                message.getLabel().contentEquals("Scientist") &&
                message.getContentType().contentEquals("application/json")) {
                // ...
            } else {
                return receiver.deadLetterAsync(message.getLockToken());
            }
            return receiver.completeAsync(message.getLockToken());
        }

        // callback invoked when the message handler has an exception to report
        public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
            System.out.printf(exceptionPhase + "-" + throwable.getMessage());
        }
    },
    // 1 concurrent call, messages are auto-completed, auto-renew duration
    new MessageHandlerOptions(1, false, Duration.ofMinutes(1)),
    executorService);
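Since the question uses the newer azure-messaging-servicebus client, here is a rough equivalent for that SDK, not a definitive implementation: to read what already landed in a subscription's DLQ you point a receiver at the dead letter sub-queue. The sketch reuses connectionString, topicName and subName from the question; SubQueue comes from com.azure.messaging.servicebus.models:

// Receiver for the dead letter sub-queue of one subscription
ServiceBusReceiverClient dlqReceiver = new ServiceBusClientBuilder()
    .connectionString(connectionString)
    .receiver()
    .topicName(topicName)
    .subscriptionName(subName)
    .subQueue(SubQueue.DEAD_LETTER_QUEUE)   // i.e. <topic>/Subscriptions/<subscription>/$deadletterqueue
    .buildClient();

for (ServiceBusReceivedMessage message : dlqReceiver.receiveMessages(10)) {
    System.out.printf("Dead-lettered message: %s%n", message.getBody());
    dlqReceiver.complete(message);          // remove it from the DLQ
}
dlqReceiver.close();

To move a live message into the DLQ explicitly, the same SDK offers receiver.deadLetter(message) on a receiver for the subscription itself.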

Vert.x services not receiving messages when running on a local JVM over a finite set of data, deployed as separate fat jars

I am getting started with Vert.x and was trying out point-to-point messaging on the event bus. I have two services, both created as separate Maven projects and deployed as fat jars:
1) Read from a file and send the content as a message over an address - ContentParserService.java
2) Read the message and reply to the incoming message - PingService.java
Both services are deployed as separate jars, in a microservice fashion.
The code is as follows: ContentParserService.java
@Override
public void start(Future<Void> startFuture) throws Exception {
    super.start(startFuture);
    // Reference to the event bus running on this JVM
    EventBus eventBus = vertx.eventBus();
    // Read the file using the normal Java mechanism
    try {
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(
                ClassLoader.getSystemResourceAsStream(config().getString("filename"))));
        bufferedReader.readLine(); // read (skip) the first line
        String line = null;
        while ((line = bufferedReader.readLine()) != null) {
            String[] data = line.split(",");
            // Create RealEstate object
            RealEstateTransaction realEstateData = createTransactionObject(data);
            // Construct message JSON
            JsonObject messageJSON = constructMessageJson(realEstateData);
            // Send message to the PING address over the event bus
            eventBus.send("PING", Json.encode(messageJSON), reply -> {
                if (reply.succeeded())
                    System.out.println("Received Reply: " + reply.result().body());
                else {
                    System.out.println("No reply");
                }
            });
        }
    } catch (IOException e) {
        startFuture.fail(e.getMessage());
    }
}
The code is as follows: PingService.java
@Override
public void start(Future<Void> startFuture) throws Exception {
    super.start(startFuture);
    System.out.println("Referencing event bus");
    // Reference to the event bus running on this JVM
    EventBus eventBus = vertx.eventBus();
    System.out.println("Creating HttpServer");
    // Create HTTP server to handle incoming requests
    HttpServer httpServer = vertx.createHttpServer();
    System.out.println("Creating Router");
    // Create router for routing to the appropriate endpoint
    Router router = Router.router(vertx);
    System.out.println("Starting to consume messages sent over the event bus");
    // Consume incoming messages on the address PING
    eventBus.consumer("PING", event -> {
        System.out.println("Received message: " + event.body());
        event.reply("Received at PING address");
    });
    System.out.println("Receiver ready and receiving messages");
}
I run both services on the same machine with the java -jar command for each service. What I observed is that when I deploy the first jar, ContentParserService, it immediately starts sending messages over the event bus, but by the time I start the PingService jar it no longer receives any of the messages sent over the event bus, because PingService is a separate fat jar and a microservice in itself. The file I am reading is a finite-length CSV of around 200 entries. This case works if I bundle both services in a single fat jar.
How can I get the different fat-jar services to send messages to each other in my case?
This case works when both verticles are in the same jar only because there is no network delay. But your use of the EventBus is incorrect, since it does not persist messages and hence cannot replay them. You should start sending messages only when the other side is ready to receive them.
You need to reverse the dependency. In your ContentParserService, register for some "ready" event, then start your while loop only when you get it:
vertx.eventBus().consumer("ready", (message) -> {
    while ((line = bufferedReader.readLine()) != null) {
        ...
    }
});
Now, what happens if ContentParserService is actually slower and misses the "ready" event? Use vertx.setPeriodic() for that: start your PingService and periodically tell ContentParserService that you are ready to receive messages (a sketch of this follows below).
Or, as an option, do not use the EventBus at all between your services, and switch to something with persistence, like RabbitMQ or Kafka.
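For the setPeriodic() option, here is a rough sketch of the PingService side, assuming the same Vert.x 3 callback API used in the question; the address "ready" and the payload text are illustrative:

// Announce readiness every second until the sender acknowledges it
long timerId = vertx.setPeriodic(1000, id -> {
    vertx.eventBus().send("ready", "PING consumer is up", reply -> {
        if (reply.succeeded()) {
            // ContentParserService heard us and can start streaming; stop announcing
            vertx.cancelTimer(id);
        }
    });
});

On the ContentParserService side, the "ready" consumer shown above should call message.reply(...) before entering its loop so that this timer gets cancelled.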

Access data from Azure IoT Hub with Java

I send data to the IoT Hub and receive it, and that works, but I do not know how to work with the received data. Here is my code to receive data:
public void accept(PartitionReceiver receiver)
{
    System.out.println("** Created receiver on partition " + partitionId);
    try {
        while (true) {
            Iterable<EventData> receivedEvents = receiver.receive(10).get();
            int batchSize = 0;
            if (receivedEvents != null)
            {
                for (EventData receivedEvent : receivedEvents)
                {
                    System.out.println(String.format("| Time: %s", receivedEvent.getSystemProperties().getEnqueuedTime()));
                    System.out.println(String.format("| Device ID: %s", receivedEvent.getProperties().get("iothub-connection-device-id")));
                    System.out.println(String.format("| Message Payload: %s", new String(receivedEvent.getBody(), Charset.defaultCharset())));
                    batchSize++;
                }
            }
        }
    }
    catch (Exception e)
    {
        System.out.println("Failed to receive messages: " + e.getMessage());
    }
}
I would like to work with the received data. Here I get the payload as a JSON string:
System.out.println(String.format("| Message Payload: %s", new String(receivedEvent.getBody(), Charset.defaultCharset())));
The output is: product: xy, price: 2.3
I would like to take the data into:
String product = product;
double price = price;
How can I save the received payload in these variables?
Thanks
There are two kinds of messages: device-to-cloud and cloud-to-device.
For the first kind, device-to-cloud messages, as @DominicBetts said, you can refer to the section Receive device-to-cloud messages to learn how to receive d2c messages with the Event Hub-compatible endpoint. There are also two samples on GitHub, see below.
Simple send/receive sample: shows how to connect and then send and receive messages to and from IoT Hub, passing the protocol of your choice as a parameter.
Simple sample handling messages received: shows how to connect to IoT Hub and manage messages received from IoT Hub, passing the protocol of your choice as a parameter.
For the second kind, cloud-to-device messages, you can refer to the section Receiving messages on the simulated device to learn how to receive c2d messages. The sample code in the article was written for C#, but it is simple enough to use Java instead; note the remark in that section about choosing a suitable protocol.
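As for turning the payload into variables: assuming the device actually sends a JSON body such as {"product":"xy","price":2.3}, a JSON library can map the fields directly. A minimal sketch using Jackson; the payload variable stands for the string built from receivedEvent.getBody():

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

try {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode node = mapper.readTree(payload);          // parse the JSON payload
    String product = node.get("product").asText();
    double price = node.get("price").asDouble();
    System.out.println(product + " costs " + price);
} catch (JsonProcessingException e) {
    System.out.println("Payload is not valid JSON: " + e.getMessage());
}

If the device really sends the plain text product: xy, price: 2.3 shown above, that is not valid JSON, so either change the device to send JSON or split the string manually.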

RabbitMQ: publish in node.js subscribe in Java

Given the following publisher in node.js and the following subscriber in Java (this setup is fully functional), I have two questions:
What should I use as the third argument to queueBind and why? Why does it work as is ("test" is a random pick)?
Is there a way to specify a queue in addition to an exchange in rabbit.js? If yes, how? If not, why not, and which module should I use instead (a code example would be welcome)?
// node.js
var context = require("rabbit.js").createContext();
var pub = context.socket('PUB');
pub.connect(config.exchange);
server.post("/message/:msg", function(req, res) {
    pub.write(req.params.msg, 'utf8');
    res.end();
});

// java
ConnectionFactory factory = new ConnectionFactory();
factory.setHost(host);
try {
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
    channel.exchangeDeclare(exchange, "fanout");
    String queueName = channel.queueDeclare().getQueue();
    channel.queueBind(queueName, exchange, "test"); // Question 1: what should I use as the third argument and why?
    // Question 2: is there a way to configure rabbit.js with a queue name instead?
    //channel.queueDeclare(queueName, false, false, false, null);
    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(queueName, true, consumer);
    try {
        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            LOG.info("Received message: " + message);
        }
    } catch (InterruptedException e) {
        LOG.catching(e);
    } finally {
        channel.close();
        connection.close();
    }
} catch (IOException e) {
    LOG.catching(e);
}
Own answer, what I have dug up so far:
The third argument, the routing key, is what is known as a topic in rabbit.js. By supplying "test" I am only subscribing to messages sent to the test topic or without a topic set (the default in rabbit.js). If I were to use a topic in the publisher as well, I could use pub.publish(topic, message, encoding) instead of pub.write(message, encoding), or supply it to the connect method.
It does not look like it, and I still do not really know why. The argument goes that rabbit.js is a higher-level library and therefore makes certain simplifications; why exactly this simplification is made I do not know. However, I primarily wanted to use a single exchange for multiple communication threads, which I can also achieve by using topics/routing keys, so it is not a big deal. (A sketch of declaring the queue on the Java side follows below.)
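For what it's worth, nothing stops the Java subscriber from declaring its own named queue and binding it to the exchange that rabbit.js publishes to; rabbit.js simply does not expose that choice. A sketch under that assumption, reusing channel, exchange and consumer from the code above; the queue name is illustrative, and with a fanout exchange the routing key in the binding is ignored anyway:

String queueName = "my-subscriber-queue";                  // illustrative name
channel.queueDeclare(queueName, true, false, false, null); // durable, non-exclusive, no auto-delete
channel.queueBind(queueName, exchange, "");                // routing key is ignored by a fanout exchange
channel.basicConsume(queueName, true, consumer);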

Remove a JMS message from MQ Queue using JMSMessageID

Is there a way to remove a JMS message from an IBM MQ queue using the JMSMessageID in a Java application (not using tools)? Also, are such operations vendor-specific?
I looked through the API for the receive operations that are used to remove messages, but for removing specific messages, do we need to filter using a MessageSelector and remove appropriately, or is there a simpler way? [checking for any available method that can be used directly]
Can you please provide tutorials/examples [links are fine too] showing the API usage for such operations?
When you use JMSMessageID as the only message property in a selector, WMQ optimizes the lookup to be the same as a native WMQ API get by MQMD.MessageId, which is an indexed field in the queue. Please see the JMS Message Selection topic for more details.
QueueReceiver rcvr = sess.createReceiver(inputQ, "JMSMessageID = '" + msgId + "'");
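To complete the picture, a short sketch of how the selector approach removes the message: consuming the matched message is what takes it off the queue. It assumes the QueueSession sess uses AUTO_ACKNOWLEDGE, that msgId holds the full JMS message ID (including the "ID:" prefix), and that the code sits inside a method handling JMSException:

// Receiving the matched message removes it from the queue
Message removed = rcvr.receive(5000);   // wait up to 5 seconds for the matching message
if (removed != null) {
    System.out.println("Removed message " + removed.getJMSMessageID());
} else {
    System.out.println("No message with that ID was found");
}
rcvr.close();

And yes, the details are vendor-specific: the JMS API itself only offers selector-based consumption; optimizing a JMSMessageID selector into an indexed lookup is IBM MQ behaviour, and the JMX-based removal in the next answer is specific to ActiveMQ.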
You can also do the same thing using native WMQ API calls using Java native code. You would do a normal GET operation but specify the message ID in the MQMD structure.
myMsg.messageId = someMsgID;
MQGetMessageOptions gmo = new MQGetMessageOptions();
myQueue.get(myMsg, gmo);
How to delete a specific message from a queue by using its message ID?
I had a similar problem; here is a reusable function. You just need to pass the message ID and the queue name. It works for me.
private void deleteMessage(String messageId, String queueName) {
    try {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        MBeanServerConnection conn = jmxc.getMBeanServerConnection();
        ObjectName name = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
        BrokerViewMBean proxy = (BrokerViewMBean) MBeanServerInvocationHandler.newProxyInstance(conn, name, BrokerViewMBean.class, true);
        for (ObjectName queue : proxy.getQueues()) {
            QueueViewMBean queueBean = (QueueViewMBean) MBeanServerInvocationHandler.newProxyInstance(conn, queue, QueueViewMBean.class, true);
            if (queueBean.getName().equals(queueName)) {
                System.out.println("Deleted : " + messageId);
                queueBean.removeMessage(messageId);
                return;
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I use activemq-all-5.8.0.jar.
