Publishing a message on a JMS queue? - java

I am new to JMS and am going through the ActiveMQ Hello World example. Say I have a scenario where, whenever I make an entry
in the employee table in the DB, I have to put a message on a queue. Here is the producer code snippet from the Hello World example:
public static class HelloWorldProducer {
    public void createMessageOnQueue() {
        try {
            // Create a ConnectionFactory
            ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");

            // Create a Connection
            Connection connection = connectionFactory.createConnection();
            connection.start();

            // Create a Session
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Create the destination (Topic or Queue)
            Destination destination = session.createQueue("TEST.FOO");

            // Create a MessageProducer from the Session to the Topic or Queue
            MessageProducer producer = session.createProducer(destination);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            // Create a message
            String text = "Hello world! From: " + Thread.currentThread().getName() + " : " + this.hashCode();
            TextMessage message = session.createTextMessage(text);

            // Tell the producer to send the message
            System.out.println("Sent message: " + message.hashCode() + " : " + Thread.currentThread().getName());
            producer.send(message);

            // Clean up
            session.close();
            connection.close();
        } catch (Exception e) {
            System.out.println("Caught: " + e);
            e.printStackTrace();
        }
    }
}
Now my question is: if I close the connection and session, will it close the queue also? If yes, what will happen if the message has not been consumed yet?
Second question: if I need to publish a message on the same queue (i.e. "TEST.FOO") a second time, do I need to call the createMessageOnQueue method a second time? If yes, will it not create a new queue with session.createQueue("TEST.FOO")?

Now my question is: if I close the connection and session, will it
close the queue also? If yes, what will happen if the message has not
been consumed yet?
The message will still be on the queue. There is no such thing as 'closing a queue'.
Second question: if I need to publish a message on the same queue (i.e.
"TEST.FOO") a second time, do I need to call the createMessageOnQueue
method a second time? If yes, will it not create a new queue with
session.createQueue("TEST.FOO")?
session.createQueue("TEST.FOO") does not necessarily create a queue; it just gets a reference to an existing queue.
From the javadoc of Session#createQueue():
Note that this method simply creates an object that encapsulates the
name of a queue. It does not create the physical queue in the JMS
provider. JMS does not provide a method to create the physical queue,
since this would be specific to a given JMS provider. Creating a
physical queue is provider-specific and is typically an administrative
task performed by an administrator, though some providers may create
them automatically when needed.

The queue is created once, and you can only delete it manually.
Once the message is sent to a queue, it will wait on the queue until it's consumed (unlike topics).
You don't need to re-create the message if you want to send it twice. But then again, why would you send it two times?
I feel that your problem might be solved using JMS transactions.
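To make that concrete, here is a minimal sketch of a transacted session (not part of the original example; it reuses the "TEST.FOO" queue and vm:// connection factory from the Hello World code above, and assumes the usual javax.jms imports). Nothing sent in the transaction becomes visible to consumers until commit() is called, and the same producer can be reused for as many messages as needed:
public void sendInOneTransaction(ConnectionFactory connectionFactory) throws JMSException {
    Connection connection = connectionFactory.createConnection();
    connection.start();
    // transacted = true: all sends below form one unit of work
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    try {
        MessageProducer producer = session.createProducer(session.createQueue("TEST.FOO"));
        // the same producer and queue reference are reused for every message
        producer.send(session.createTextMessage("employee created"));
        producer.send(session.createTextMessage("employee updated"));
        session.commit();       // messages become visible to consumers only now
    } catch (JMSException e) {
        session.rollback();     // discards everything sent in this transaction
        throw e;
    } finally {
        session.close();
        connection.close();
    }
}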

Related

How to move error message to Azure dead letter queue(Topics - Subscription) using Java?

I need to send my messages to the dead-letter queue from an Azure topic subscription in case of any error while reading and processing a message from the topic. So I tried testing pushing a message directly to the DLQ.
My sample code looks like this:
static void sendMessage()
{
    // create a Service Bus sender client for the topic
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .topicName(topicName)
        .buildClient();

    // send one message to the topic
    senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
}
static void receiveAsync() {
    ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .buildAsyncClient();

    // receive() continuously fetches messages until the subscription is disposed.
    // The stream is infinite and completes when the subscription or receiver is closed.
    Disposable subscription = receiver.receiveMessages().subscribe(message -> {
        System.out.printf("Id: %s%n", message.getMessageId());
        System.out.printf("Contents: %s%n", message.getBody().toString());
    }, error -> {
        System.err.println("Error occurred while receiving messages: " + error);
    }, () -> {
        System.out.println("Finished receiving messages.");
    });

    // Continue application processing. When you are finished receiving messages, dispose of the subscription.
    subscription.dispose();

    // When you are done using the receiver, dispose of it.
    receiver.close();
}
I tried getting the dead-letter queue path:
String dlq = EntityNameHelper.formatDeadLetterPath(topicName);
This gave me a dead-letter queue path like "mytopic/$deadletterqueue".
But it's not working when I pass that path as the topic name; it throws an entity-not-found exception for the topic.
Can anyone please advise me on this?
References:
How to move error message to Azure dead letter queue using Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues#moving-messages-to-the-dlq
How to push the failure messages to Azure service bus Dead Letter Queue in Spring Boot Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions-legacy#receive-messages-from-a-subscription
You probably know that a message will be automatically moved to the dead-letter queue if you throw exceptions during processing and the maximum delivery count is exceeded. If you want to explicitly move a message to the DLQ, you can do so as well. A common case for this is when you know that the message can never succeed because of its contents.
You cannot send new messages directly to the DLQ, because then you would have two messages in the system. You need to call a special operation on the parent entity. Also, <topic path>/$deadletterqueue does not work, because this would be the DLQ of all subscriptions. The correct entity path is built like this:
<queue path>/$deadletterqueue
<topic path>/Subscriptions/<subscription path>/$deadletterqueue
https://github.com/Azure/azure-service-bus/blob/master/samples/Java/azure-servicebus/DeadletterQueue/src/main/java/com/microsoft/azure/servicebus/samples/deadletterqueue/DeadletterQueue.java
This sample code is for queues, but you should be able to adapt it to topics quite easily:
// register the RegisterMessageHandler callback
receiver.registerMessageHandler(
    new IMessageHandler() {
        // callback invoked when the message handler loop has obtained a message
        public CompletableFuture<Void> onMessageAsync(IMessage message) {
            // received message is passed to the callback
            if (message.getLabel() != null &&
                message.getContentType() != null &&
                message.getLabel().contentEquals("Scientist") &&
                message.getContentType().contentEquals("application/json")) {
                // ...
            } else {
                return receiver.deadLetterAsync(message.getLockToken());
            }
            return receiver.completeAsync(message.getLockToken());
        }

        // callback invoked when the message handler has an exception to report
        public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
            System.out.printf(exceptionPhase + "-" + throwable.getMessage());
        }
    },
    // 1 concurrent call, messages are not auto-completed, auto-renew duration of 1 minute
    new MessageHandlerOptions(1, false, Duration.ofMinutes(1)),
    executorService);
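As a side note (not part of the linked sample): with the newer azure-messaging-servicebus client used in the question, a receiver for a subscription's DLQ can be built without hand-crafting the entity path. A rough sketch, assuming the connectionString, topicName and subName variables from the question and that the SubQueue option is available in your SDK version:
// receive from the dead-letter sub-queue of the topic subscription
ServiceBusReceiverClient dlqReceiver = new ServiceBusClientBuilder()
    .connectionString(connectionString)
    .receiver()
    .topicName(topicName)
    .subscriptionName(subName)
    .subQueue(SubQueue.DEAD_LETTER_QUEUE)   // i.e. <topic>/Subscriptions/<sub>/$deadletterqueue
    .buildClient();

// messages are only printed here; settle (complete/abandon) them as appropriate for your app
dlqReceiver.receiveMessages(10).forEach(message ->
        System.out.printf("DLQ message: %s%n", message.getBody()));
dlqReceiver.close();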

Clean up Redisson pub/sub listeners when the associated object is 'stale'

I'm trying to implement a simple WebSocket application in Java that is able to scale horizontally, by using Redis and the Redisson library.
The WebSocket server basically keeps track of connected clients and publishes the messages it receives to an RTopic - this works great.
To consume, I have code that adds a listener when a client is registered; it associates a ConnectedClient object with a listener:
private static RedissonClient redisson = RedissonRedisServer.createRedisConnectionWithConfig();
public static final RTopic subcriberTopic = redisson.getTopic("clientsMapTopic");

public static boolean sendToPubSub(ConnectedClient q, String message) {
    boolean[] success = {true};
    MessageListener<Message> listener = new MessageListener<Message>() {
        @Override
        public void onMessage(CharSequence channel, Message message) {
            logger.debug("The message is : " + message.getMediaId());
            try {
                logger.debug("ConnectedClient mediaid: " + q.getMediaid() + ", Message mediaid " + message.getMediaId());
                if (q.getMediaid().equals(message.getMediaId())) {
                    // we need to verify if the message goes to the right receiver
                    logger.debug("MESSAGE from PUBSUB to (" + q.getId() + ") # " + q.getSession().getId() + " " + message);
                    // this is the actual message to the websocket client
                    // this executes on the wrong connected client when the connection is closed and reopened
                    q.getSession().getBasicRemote().sendText(message.getMessage());
                }
            } catch (Exception e) {
                e.printStackTrace();
                success[0] = false;
            }
        }
    };
    int listenerId = subcriberTopic.addListener(Message.class, listener);
    return success[0];
}
The problem I am observing is as follows:
the initial connection from a client registers a listener associated with that client object
a message sent to the WS server gets picked up by the listener and forwarded properly
the websocket is disconnected, a new connection is created, and a new listener gets created
a message sent to the WS server gets picked up by the same original listener, which uses the old connected client instead of the newly registered one
sending fails (because that client and its WS connection no longer exist) and the message is not processed further
It seems I just need to remove the listener for the client when the client is removed, but I haven't found a good way to do that: although I can see in the debugger that the listener holds the associated connected client object, I'm unable to retrieve it without adding extra code for that.
Am I observing this correctly, and what is a good way to make this work properly?
When I was writing the question, I kind of leaned towards an answer that I had in mind and tried, and it worked.
I added a ConcurrentHashMap to keep track of the relation between the connected client and the listener.
In the logic that handles the websocket error indicating a client should be removed, I then remove the listener that was associated with it (and the entry from the map).
Now it works as expected.
small snippet:
int listenerId = subcriberTopic.addListener(Message.class, listener);
clientListeners.put(q,(Integer)listenerId);
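(The map itself is not shown above; a minimal declaration, assuming it lives next to subcriberTopic in MessageContainer and that java.util.concurrent.ConcurrentHashMap is imported, could look like this:)
// assumed declaration; ConnectedClient needs stable equals/hashCode for lookups to work
public static final ConcurrentHashMap<ConnectedClient, Integer> clientListeners = new ConcurrentHashMap<>();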
And then in the websocket onError handler that triggers the cleanup:
// remove the associated listener
int listenerIdForClient = MessageContainer.clientListeners.get(cP);
MessageContainer.subcriberTopic.removeListener((Integer) listenerIdForClient);
// remove entry from map
MessageContainer.clientListeners.remove(cP);
Now the listener gets cleaned up properly, and the next time a new listener is created and handles the messages.

Unable to push message into ActiveMQ

I'm successfully pushing messages into ActiveMQ from my local Eclipse setup. However, the same code does not push messages when I execute it on the server as a cron job. It does not even throw an exception during code execution.
Java environment - 1.8
Supporting jars used:
slf4j-api-1.8.0-beta2.jar
javax.annotation-api-1.2.jar
javax.jms-api-2.0.1.jar
management-api-1.1-rev-1.jar
activemq-core-5.7.0.jar
Code:
try {
    map = getMessageDetails(session, "MessageQueueEmail");
    userName = map.get("userName");
    password = map.get("password");
    hostName = map.get("mqHostName");
    queue = map.get("queueName");

    // Create a ConnectionFactory
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(userName, password, hostName);

    // Create a Connection
    connection = factory.createConnection();

    // start the Connection
    connection.start();
    System.out.println("MQ started connection");

    // Create a Session
    sessionMQ = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    // Create the destination Queue
    Destination destination = sessionMQ.createQueue(queue);

    // Create a MessageProducer from the Session to the Queue
    messageProducer = sessionMQ.createProducer(destination);
    messageProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

    // Create a message
    Message message = sessionMQ.createTextMessage(textMsg);
    System.out.println("MQ Message sent successfully");

    // Tell the producer to send the message
    messageProducer.send(message);
} catch (Exception e) {
    e.printStackTrace();
    System.out.println("\n::::::::::::Error occurred sendEmailMessageToIntranet::::::::::::: " + e.getMessage());
}
Thanks everyone for the responses. The issue was resolved after importing the correct certificate file to the server. I am still wondering why the failed MQ attempts were never logged.
Your code looks OK, except that message expiration could be a factor. Try with PERSISTENT delivery. Most likely the issue is that you are not redirecting stderr in your cron job. Make sure you do something like this:
*/1 * * * * /something/send.sh &>> /something/out.log
And then check in the morning.
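For the PERSISTENT suggestion, that is a one-line change to the producer in the code above:
// persistent messages are written to the broker's store and survive a broker restart
messageProducer.setDeliveryMode(DeliveryMode.PERSISTENT);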

RabbitMQ basicPublish not inserting messages to queue

This is probably some silly mistake I'm missing, but here is the issue:
I am trying to insert a simple "hello" message into a Rabbit queue, with a predefined exchange and routing key.
This is the code that I am using:
private static void send_equity_task_to_rabbitmq(ConnectionFactory factory) throws IOException, TimeoutException {
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
    channel.queueDeclare("b", false, false, false, null);
    channel.exchangeDeclare("b", "direct");
    channel.basicPublish("b", "b", null, "hello".getBytes());
    channel.close();
    connection.close();
}
public static void main(String[] argv) throws TimeoutException, IOException {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("127.0.0.1");
    Date start_time = Calendar.getInstance().getTime();
    Long start_time_timestamp = System.currentTimeMillis();
    System.out.println("[INFO] Starting connection to queue at: " + start_time);
    send_equity_task_to_rabbitmq(factory);
    Long end_time_timestamp = System.currentTimeMillis();
    System.out.println("[INFO] Message sent and processed successfully after: " + (end_time_timestamp - start_time_timestamp) + " milliseconds");
}
}
The code runs without any error. However, when I check the number of messages in the "b" queue, I get:
$ rabbitmqctl list_queues
Listing queues ...
b 0
...done.
I don't have consumers for this queue at the moment, so since it shows 0 messages, I assume I am using basicPublish incorrectly.
What could be wrong?
Thank you.
I think you need to bind the queue to the exchange. You've created a queue called "b" and an exchange called "b". The exchange will distribute messages to queues that are bound to it, using the "b" routingKey, but as the "b" queue isn't bound to the "b" exchange, the "b" exchange doesn't publish to that queue.
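A minimal sketch of the missing binding, using the names from the question (the binding key "b" mirrors the routing key already passed to basicPublish):
// declare the exchange and queue, bind them, then publish
channel.exchangeDeclare("b", "direct");
channel.queueDeclare("b", false, false, false, null);
channel.queueBind("b", "b", "b");   // queue, exchange, binding key
channel.basicPublish("b", "b", null, "hello".getBytes());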

java websphere MQ

My aim is to put n messages onto a WebSphere MQ queue, in a for loop, using the WebSphere MQ Java API.
My Java program will run as a standalone program.
If any exception occurs in between, I need to roll back all the messages.
If no exception occurs, then I should commit all the messages.
The outside world should not see my messages on the queue until I have completed fully.
How do I achieve this?
Updated with sample code as per the reply from T.Rob:
Please check whether the sample code is fine.
Does setting MQGMO_SYNCPOINT relate only to my program's invocation?
(Similar programs running in parallel will also be putting messages on the same queue, and those messages should not get affected by my program's syncpoint.)
public void sendMsg() {
    MQQueue queue = null;
    MQQueueManager queueManager = null;
    MQMessage mqMessage = null;
    MQPutMessageOptions pmo = null;
    System.out.println("Entering..");
    try {
        MQEnvironment.hostname = "x.x.x.x";
        MQEnvironment.channel = "xxx.SVRCONN";
        MQEnvironment.port = 9999;
        queueManager = new MQQueueManager("XXXQMANAGER");
        int openOptions = MQConstants.MQOO_OUTPUT;
        queue = queueManager.accessQueue("XXX_QUEUENAME", openOptions, null, null, null);
        pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQGMO_SYNCPOINT;
        String input = "testing";
        System.out.println("sending messages....");
        for (int i = 0; i < 10; i++) {
            input = input + ": " + i;
            mqMessage = new MQMessage();
            mqMessage.writeString(input);
            System.out.println("Putting message: " + i);
            queue.put(mqMessage, pmo);
        }
        queueManager.commit();
        System.out.println("Exiting..");
    } catch (Exception e) {
        e.printStackTrace();
        try {
            System.out.println("rolling back messages");
            if (queueManager != null)
                queueManager.backout();
        } catch (MQException e1) {
            e1.printStackTrace();
        }
    } finally {
        try {
            if (queue != null)
                queue.close();
            if (queueManager != null)
                queueManager.close();
        } catch (MQException e) {
            e.printStackTrace();
        }
    }
}
WMQ supports both local and global (XA) units of work. The local units of work are available simply by specifying the option. Global XA transactions require a transaction manager, as mentioned by keithkreissl in another answer.
For what you described, a POJO doing messaging under syncpoint, specify MQC.MQGMO_SYNCPOINT in your MQGetMessageOptions. When you are ready to commit, issue the MQQueueManager.commit() or MQQueueManager.backout() call.
Note that the response and doc provided by ggrandes refers to the JMS and not Java classes. The Java classes use Java equivalents of the WMQ procedural API, can support many threads (doc) and even provide connection pooling (doc). Please refer to the Java documentation rather than the JMS documentation for the correct behavior. Also, I've linked to the WMQ V7.5 documentation which goes with the latest WMQ Java V7.5 client. The later clients have a lot more local functionality (tracing, flexible install path, MQClient.ini, etc.) and work with back-level QMgrs. It is highly recommended to be using the latest client and the download is free.
You only need to create a session with transactions enabled:
Session session;
// ...
boolean transacted = true;
session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);
try {
    // ...do things...
    session.commit();
} catch (Exception e) {
    session.rollback();
}
// ...
WARN-NOTE: Sessions are not thread-safe ;-)
Doc Websphere MQ/JMS
If you have access to a transaction manager, and more importantly an XA transaction wired up to your MQ access, you can start a transaction at the beginning of your message processing, put all the messages on the queue, then commit the transaction. With XA transactions, no messages become visible until the transaction commits. If you don't have access to that, you can do a little more plumbing: place your messages in a local data object, wrap your code in a try/catch, and if no exceptions occur, iterate through the local data object sending the messages. The issue with the latter approach is that it will commit all your other processing, but if a problem occurs while sending the messages, your other processing will not be rolled back.
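A rough sketch of that plumbing idea, combined with the syncpoint put already used in the question (doBusinessProcessing() is a hypothetical stand-in for whatever work might fail; queue, pmo and queueManager are assumed to be set up as in sendMsg() above):
List<String> pending = new ArrayList<>();
try {
    for (int i = 0; i < 10; i++) {
        pending.add(doBusinessProcessing(i));   // may throw; nothing is on the queue yet
    }
    // only reached when processing succeeded: put everything, still under syncpoint
    for (String payload : pending) {
        MQMessage mqMessage = new MQMessage();
        mqMessage.writeString(payload);
        queue.put(mqMessage, pmo);
    }
    queueManager.commit();
} catch (Exception e) {
    try {
        queueManager.backout();   // discards any puts made under syncpoint
    } catch (MQException e1) {
        e1.printStackTrace();
    }
}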
