My problem is that I am trying to implement a priority queue with RabbitMQ, but the delivery order is always interleaved, even when I set the priority with @RabbitListener(queues = QUEUE_MESSAGES, priority = "10").
I send 100 messages to two queues:
public void sendRequest() {
    for (int i = 0; i < 100; i++) {
        try {
            rabbitTemplate.convertAndSend(ProducerConfig.QUEUE_MESSAGES2,
                    new MessageDTO("Subject Two", "content2"), message -> {
                        message.getMessageProperties().setPriority(10);
                        return message;
                    });
            rabbitTemplate.convertAndSend(ProducerConfig.QUEUE_MESSAGES,
                    new MessageDTO("Subject One", "content1"), message -> {
                        message.getMessageProperties().setPriority(1);
                        return message;
                    });
            System.out.println("messages have been sent");
        } catch (AmqpException ex) {
            System.out.println(ex.getMessage());
        }
    }
}
So I have two listeners:
@RabbitListener(queues = QUEUE_MESSAGES, priority = "1")
public void receiveMessage(MessageDTO message) throws BusinessException, InterruptedException {
    try {
        System.out.println(message.getSubject());
    } catch (Exception ex) {
        System.out.println("exception" + ex.getMessage());
    }
}

@RabbitListener(queues = QUEUE_MESSAGES2, priority = "10")
public void receiveMessage2(MessageDTO message) throws BusinessException, InterruptedException {
    try {
        System.out.println(message.getSubject());
    } catch (Exception ex) {
        System.out.println("exception" + ex.getMessage());
    }
}
My output is random, like this:
Subject One
Subject Two
Subject One
Subject Two
Subject One
Subject Two
Subject One
Subject Two
Subject One
Subject Two
Subject One
Subject Two
Subject One
Subject Two
Subject One
I need to receive all the messages from the first queue and only then the messages from the second queue. Can anybody help?
I have also already tried this in application.properties:
spring.rabbitmq.listener.simple.prefetch=1
My versions are: RabbitMQ 3.8.12, Erlang 23.2.6
EDIT
I set a priority on the queues in the producer config and a per-message priority when sending, but it doesn't help.
Producer config:
@Bean
public Declarables fanoutBindings() {
    Queue messageQueue = QueueBuilder.durable(QUEUE_MESSAGES)
            .withArgument("x-dead-letter-exchange", DLX_EXCHANGE_MESSAGES)
            .withArgument("x-priority", Integer.valueOf(1))
            .build();
    Queue messageQueue2 = QueueBuilder.durable(QUEUE_MESSAGES2)
            .withArgument("x-dead-letter-exchange", DLX_EXCHANGE_MESSAGES)
            .withArgument("x-priority", Integer.valueOf(10))
            .build();
    Queue deadLetterQueue = QueueBuilder.durable(QUEUE_MESSAGES_DLQ).build();
    Queue parkingLotQueue = QueueBuilder.durable(QUEUE_PARKING_LOT).build();
    FanoutExchange deadLetterExchange = new FanoutExchange(DLX_EXCHANGE_MESSAGES);
    return new Declarables(
            messageQueue,
            parkingLotQueue,
            deadLetterQueue,
            messageQueue2,
            deadLetterExchange,
            BindingBuilder.bind(deadLetterQueue).to(deadLetterExchange));
}
The priority property on @RabbitListener is the consumer priority. Consumers with higher priority will receive messages while they are active; lower-priority consumers only get messages when the higher-priority consumers are blocked. This assumes those consumers are consuming from the same queue, which is not your case.
If you want to implement priority messages, you need to declare a priority queue with a maximum priority (the x-max-priority queue argument) and set the priority property when sending the message; messages without a priority are treated as priority 0.
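For example, with Spring AMQP that could look roughly like the sketch below. This is a minimal sketch, not your exact configuration: it reuses your QUEUE_MESSAGES and DLX_EXCHANGE_MESSAGES constants and your rabbitTemplate, and the key parts are the x-max-priority queue argument and sending everything to the same queue with a per-message priority:

@Bean
public Queue messagesQueue() {
    // Declare a priority queue: x-max-priority (not "x-priority") enables message priorities 0..10.
    return QueueBuilder.durable(QUEUE_MESSAGES)
            .withArgument("x-dead-letter-exchange", DLX_EXCHANGE_MESSAGES)
            .withArgument("x-max-priority", 10)
            .build();
}

public void sendWithPriority() {
    // Both messages go to the SAME queue; the broker delivers higher-priority messages first
    // whenever messages are waiting in the queue.
    rabbitTemplate.convertAndSend(QUEUE_MESSAGES,
            new MessageDTO("Subject Two", "content2"), message -> {
                message.getMessageProperties().setPriority(10);
                return message;
            });
    rabbitTemplate.convertAndSend(QUEUE_MESSAGES,
            new MessageDTO("Subject One", "content1"), message -> {
                message.getMessageProperties().setPriority(1);
                return message;
            });
}

Note that priorities only reorder messages that are actually backlogged in a single queue; messages handed straight to a ready consumer are delivered in arrival order, so the prefetch=1 setting you already configured is the right companion to this.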
Related
I have a subscription VIEW_TOPIC with the pull strategy. Why can't I see any messages, although there are 7 delayed messages? I cannot figure out what I am missing. By the way, I'm running the subscriber on Kubernetes on GCP, and I also added the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Subscriber configuration
private Subscriber buildSubscriber() {
    try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
        TopicName topicName = TopicName.of(projectId, topic);
        ProjectSubscriptionName subscriptionName =
                ProjectSubscriptionName.of(projectId, subscriptionId);
        // Create a pull subscription with a default acknowledgement deadline of 10 seconds.
        // Messages not successfully acknowledged within 10 seconds will be resent by the server.
        Subscription subscription =
                subscriptionAdminClient.createSubscription(
                        subscriptionName, topicName, PushConfig.getDefaultInstance(), 10);
        System.out.println("Created pull subscription: " + subscription.getName());
    } catch (IOException e) {
        LOGGER.error("Cannot create pull subscription");
    } catch (AlreadyExistsException existsException) {
        LOGGER.warn("Subscription already created");
    }

    ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
    LOGGER.info("Subscribe topic: " + topic + " | SubscriptionId: " + subscriptionId);
    // Default is 4 * number of processors.
    ExecutorProvider executorProvider = InstantiatingExecutorProvider.newBuilder().build();
    Subscriber.Builder subscriberBuilder = Subscriber.newBuilder(subscriptionName, new MessageReceiverImpl())
            .setExecutorProvider(executorProvider);
    // The subscriber will pause the message stream and stop receiving more messages from the
    // server if any one of the conditions is met.
    FlowControlSettings flowControlSettings =
            FlowControlSettings.newBuilder()
                    // Maximum number of outstanding messages before pausing the stream.
                    .setMaxOutstandingElementCount(100L)
                    // Maximum total size of outstanding messages before pausing the stream (10 MiB).
                    .setMaxOutstandingRequestBytes(10L * 1024L * 1024L)
                    .build();
    subscriberBuilder.setFlowControlSettings(flowControlSettings);
    Subscriber subscriber = subscriberBuilder.build();
    subscriber.addListener(new ApiService.Listener() {
        @Override
        public void failed(ApiService.State from, Throwable failure) {
            LOGGER.error(from, failure);
        }
    }, MoreExecutors.directExecutor());
    return subscriber;
}
Subscriber
public void startSubscribeMessage() {
    LOGGER.info("Begin subscribe topic " + topic);
    this.subscriber.startAsync().awaitRunning();
    LOGGER.info("Subscriber start successfully!!!");
}
public class MessageReceiverImpl implements MessageReceiver {

    private static final Logger LOGGER = Logger.getLogger(MessageReceiverImpl.class);
    private final LogSave logSave = MatchSave.getInstance();

    @Override
    public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
        ByteString data = message.getData();
        // Get the schema encoding type.
        String encoding = message.getAttributesMap().get("googclient_schemaencoding");
        Req.LogReq logReqMsg = null;
        try {
            switch (encoding) {
                case "BINARY":
                    logReqMsg = Req.LogReq.parseFrom(data);
                    break;
                case "JSON":
                    Req.LogReq.Builder msgBuilder = Req.LogReq.newBuilder();
                    JsonFormat.parser().merge(data.toStringUtf8(), msgBuilder);
                    logReqMsg = msgBuilder.build();
                    break;
            }
            LOGGER.info(JsonFormat.printer().omittingInsignificantWhitespace().print(logReqMsg));
            logSave.addLogMsg(battleLogMsg);
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
        consumer.ack();
    }
}
Req.LogReq is a proto message. My dependencies:
// google cloud
implementation platform('com.google.cloud:libraries-bom:22.0.0')
implementation 'com.google.cloud:google-cloud-pubsub'
implementation group: 'com.google.protobuf', name: 'protobuf-java-util', version: '3.17.2'
The call logSave.addLogMsg(battleLogMsg); adds the message to a CopyOnWriteArrayList.
I have created an SQS consumer which is supposed to pick up a single message at a time, process it (which takes 20 minutes on average) and then acknowledge it. However, what it actually does is pick up all the messages available in the queue at once and move them in flight (the most annoying part), then process them one by one; the last message remains in flight until the visibility timeout expires, even though all the other messages have already been processed.
I have tried giving a timeout in receive, but that didn't work. I am using the code below to poll the queue and process the messages:
public void startReceiving(String sqsServiceUrl, String queueName) throws JMSException {
    String msgAsString = StringUtils.EMPTY;
    do {
        tryToReconnect(sqsServiceUrl, queueName);
        msgAsString = receiveMessage(getMessageConsumer(sqsServiceUrl, queueName));
    } while (!StringUtils.equalsIgnoreCase(msgAsString, "exit"));
}
private String receiveMessage(MessageConsumer consumer) throws JMSException {
    Message message = consumer.receive(0);
    String msgAsString = StringUtils.EMPTY;
    if (message == null) {
        // null-message branch was cut off in the original snippet
    } else {
        try {
            msgAsString = ((SQSTextMessage) message).getText();
            /* Do some processing and overwrite msgAsString with the returned value */
        } catch (Exception e) {
            LOG.error(e.getMessage());
        } finally {
            message.acknowledge();
        }
    }
    return msgAsString;
}
private void tryToReconnect(String sqsServiceUrl, String queueName) throws JMSException {
    String currentHour = Calendar.getInstance().get(Calendar.DAY_OF_MONTH) + "-" + Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
    if (!savedHour.equals(currentHour) || (session == null || messageConsumer == null)) {
        synchronized (lock) {
            if (messageConsumer == null) {
                savedHour = currentHour;
                SQSConnection connection = createSqsConnection(sqsServiceUrl, queueName);
                session = createSqsSession(connection);
                messageConsumer = createMessageReciever(connection, queueName);
            }
        }
    }
}
I am polling the queue in an infinite loop. I want the code to pick up one message at a time from the queue, process it, acknowledge it, and ONLY then pick up the next available one.
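If this is the Amazon SQS Java Messaging Library (amazon-sqs-java-messaging-lib), the "all messages go in flight at once" behaviour is usually its client-side prefetch buffer. Below is a minimal sketch of building the connection factory with prefetch disabled; it assumes the AWS SDK v1 AmazonSQSClientBuilder and would replace however your createSqsConnection currently builds its factory (the class name SqsConnectionProvider is just for illustration):

import javax.jms.JMSException;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsConnectionProvider {

    // Sketch: with prefetch set to 0 the consumer only asks SQS for a message when
    // receive() is called, so only the message currently being processed is in flight.
    public SQSConnection createConnection() throws JMSException {
        SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration().withNumberOfMessagesToPrefetch(0),
                AmazonSQSClientBuilder.defaultClient());
        return connectionFactory.createConnection();
    }
}

Also note that 20-minute processing needs the queue's visibility timeout to be at least that long, or the in-flight message will reappear while you are still working on it.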
How can I read a message from WebSphere MQ without deleting the original message from the queue?
I have a Spring application which reads messages from WebSphere MQ.
After reading, I have a process method which processes the data retrieved from the queue.
Step 1:
response = jmsTemplate.receive();
//Message automatically removed from queue.
Step 2:
process(response);
The process method may throw exceptions. In case of an exception, I need to retain the message in the queue.
Is that possible? Is there any way to delete the message only on user acknowledgement?
I tried adding the following:
jmsTemplate.setSessionAcknowledgeMode(javax.jms.Session.CLIENT_ACKNOWLEDGE);
...but still the message is getting deleted.
JmsTemplate creation code snippet:
JndiConnectionFactorySupport connectionFactoryBean = new JndiConnectionFactorySupport();
connectionFactoryBean.setBindingsDir(this.bindingDir);
connectionFactoryBean.setConnectionFactoryName(connectionFactoryName);
connectionFactoryBean.afterPropertiesSet();
jmsTemplate.setConnectionFactory(connectionFactoryBean.getObject());

JndiDestinationResolver destinationResolver = new JndiDestinationResolver();
destinationResolver.setJndiTemplate(connectionFactoryBean.getJndiTemplate());
jmsTemplate.setDestinationResolver(destinationResolver);
jmsTemplate.setReceiveTimeout(20000);
jmsTemplate.setDefaultDestinationName(this.defaultDestinationName);
I tried the jmsTemplate.execute() method as below:
@SuppressWarnings({ "unused", "unchecked" })
Message responseMessage = (Message) jmsTemplate.execute(
        new SessionCallback() {
            public Object doInJms(Session session) throws JMSException {
                MessageConsumer consumer = session.createConsumer(
                        jmsTemplate.getDestinationResolver().resolveDestinationName(session, "QUEUE_NAME", false));
                Message response = consumer.receive(1);
                try {
                    testMethod(); // this method will throw an exception.
                    response.acknowledge();
                    consumer.close();
                } catch (Exception e) {
                    consumer.close(); // control will come here.
                }
                return response;
            }
        }, true);
You can't do that with the receive() methods because the operation is complete (from the session's perspective) by the time receive() returns.
You need to run the code that might fail within the scope of the session, e.g. with JmsTemplate.execute() and a SessionCallback - something like this...
this.jmsTemplate.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
this.jmsTemplate.convertAndSend("foo", "bar");
try {
    String value = this.jmsTemplate.execute(session -> {
        MessageConsumer consumer = session.createConsumer(
                this.jmsTemplate.getDestinationResolver().resolveDestinationName(session, "foo", false));
        String result;
        try {
            Message received = consumer.receive(5000);
            result = (String) this.jmsTemplate.getMessageConverter().fromMessage(received);
            // Do some stuff that might throw an exception
            received.acknowledge();
        }
        finally {
            consumer.close();
        }
        return result;
    }, true);
    System.out.println(value);
}
catch (Exception e) {
    e.printStackTrace();
}
You have to browse the queue.
Example of real code that was executed against WebSphere MQ:
public void browseMessagesAndJiraCreation(String jiraUserName, String jiraPassword) {
    int counterMessages = jmsTemplate.browse(destinationQueueName, new BrowserCallback<Integer>() {
        @Override
        public Integer doInJms(final Session session, final QueueBrowser queueBrowser) throws JMSException {
            Enumeration<TextMessage> enumeration = queueBrowser.getEnumeration();
            int counterMessages = 0;
            while (enumeration.hasMoreElements()) {
                counterMessages += 1;
                TextMessage msg = enumeration.nextElement();
                logger.info("Found : {}", msg.getText());
                JiraId jiraId = jiraManager.createIssue(jiraUserName, jiraPassword);
                jiraManager.attachFileToJira(jiraId, msg.getText(), jiraUserName, jiraPassword);
            }
            return counterMessages;
        }
    });
    logger.info("{}:messages were browsed and processed from queue:{}.", counterMessages, destinationQueueName);
}
Explanations:
usage of the Spring Framework JmsTemplate
you pass the String destinationQueueName (example: destinationQueueName=QL.PREFCNTR.USER.REPLY)
Java Enumeration of TextMessages
counterMessages is the counter of messages that were processed
messages are NOT consumed!
You can add transactional processing of JMS messages. See the example below.
Your listener should be "transacted", like this:
<jms:listener-container connection-factory="connectionFactory" acknowledge="transacted">
    <jms:listener ref="notificationProcessor" destination="incoming.queue"/>
</jms:listener-container>
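If you are using Java configuration rather than XML, a rough equivalent is a listener container factory with sessionTransacted enabled. This is a sketch, assuming Spring's DefaultJmsListenerContainerFactory, an existing ConnectionFactory bean, and @JmsListener-annotated methods; the class name TransactedJmsConfig is just for illustration:

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class TransactedJmsConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Local JMS transaction per message: an exception thrown by the listener rolls the
        // transaction back, so the broker keeps (and redelivers) the message instead of discarding it.
        factory.setSessionTransacted(true);
        return factory;
    }
}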
I've recently learned RabbitMQ with hopes of implementing it in my workflow (I will be implementing it in Java). I just finished all the tutorials and was curious how I would implement a "constant" queue instead of a "temporary" queue, or at least allow the consumer to get the messages the exchange sent while it was offline. For example, if I send a message with the routing key "kern.overflow" but a consumer is offline, then as soon as my consumer comes online, as long as it is listening for something matching "kern.#" or "#.overflow", I want it to receive the un-received messages.
Here is the code to:
create a persistent queue
bind the queue to the topic exchange with "kern.#" as the routing key
Code:
String myPersistentQueue = "myPersistentQueue";
boolean isDurable = true;
boolean isExclusive = false;
boolean isAutoDelete = false;
channel.queueDeclare(myPersistentQueue, isDurable, isExclusive, isAutoDelete, null);
channel.queueBind(myPersistentQueue, "myTopic", "kern.#");

final QueueingConsumer consumer = new QueueingConsumer(channel);
boolean autoAck = true;
String tag1 = channel.basicConsume(myPersistentQueue, autoAck, consumer);

executorService.execute(new Runnable() {
    @Override
    public void run() {
        while (true) {
            Delivery delivery;
            try {
                delivery = consumer.nextDelivery();
                String message = new String(delivery.getBody());
                System.out.println("Received: " + message);
            } catch (Exception ex) {
                Logger.getLogger(TestMng.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
});
System.out.println("Consumers Ready");
When you publish a message to myTopic using kern.overflow as the routing key, the message is stored in the myPersistentQueue queue. The client can be offline; when the client comes back online it can get the messages.
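For completeness, here is a minimal publisher sketch for the other side. It assumes the same channel and that myTopic has already been declared as a durable topic exchange; PERSISTENT_TEXT_PLAIN marks the message itself as persistent so it also survives a broker restart:

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.MessageProperties;

// Publish to the topic exchange with a routing key that matches the "kern.#" binding.
String routingKey = "kern.overflow";
String payload = "disk is full";
channel.basicPublish("myTopic", routingKey,
        MessageProperties.PERSISTENT_TEXT_PLAIN, // delivery mode 2 = persistent
        payload.getBytes(StandardCharsets.UTF_8));

With a durable queue, a durable exchange, and persistent messages, anything published while the consumer is offline stays in myPersistentQueue until it is consumed.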
I have a scenario with more than 4 clients, and I want to send messages from a single queue to all of those clients. I didn't use acknowledgements on the client side, so any client can get the messages from the queue. But I want to know the number of consumers who consumed a given message. Can anyone help me get the consumer count?
Here below is the code that I wrote:
public static boolean sendMessage(String messageText) {
    try {
        StompConnection connection = new StompConnection();
        HashMap<String, String> header = new HashMap<String, String>();
        header.put(PERSISTENT, "true");
        connection.open(URLhost, port);
        connection.connect("", "");
        connection.begin("MQClient");
        Thread.sleep(100);
        connection.send(queuePath, messageText, "MQClient", header);
        connection.commit("MQClient");
        connection.disconnect();
        return true;
    } catch (Exception e) {
        throw new BasicException(AppLocal.getIntString("ActiveMQ service ERROR"), e);
    }
}
public static String receiveMessage() {
    try {
        StompConnection connection = new StompConnection();
        connection.open(URLhost, port);
        connection.connect("", "");
        connection.subscribe(queuePath, Subscribe.AckModeValues.INDIVIDUAL);
        connection.begin("MQClient");
        Thread.sleep(1000); // below is not a good NO-DATA test .. worked by making the thread sleep a while
        if (connection.getStompSocket().getInputStream().available() > 1) {
            StompFrame message = connection.receive();
            connection.commit("MQClient");
            connection.disconnect();
            return message.getBody();
        } else {
            return "";
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return "";
}
If you are writing to a Queue, then exactly one consumer will receive the message. The whole goal of point-to-point messaging is that only one of the consumers will receive the message.
If you want to send a message and have it be received by all of the consumers, then you'd want to use a Topic instead of a Queue.
If you switch to a topic, multiple clients can consume that same message.
You can probably figure out how many consumers received your message by subscribing to the ActiveMQ.Advisory.MessageConsumed.Topic advisory.
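A rough sketch of watching that advisory with plain JMS is below. This is an assumption-heavy sketch: the broker must have advisoryForConsumed="true" enabled in its destination policy for consumed advisories to be emitted, MY_TOPIC is a placeholder for whatever destination you switch to, and the advisory destination name follows ActiveMQ's ActiveMQ.Advisory.MessageConsumed.<Topic|Queue>.<name> convention:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumedAdvisoryListener {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // One advisory message is emitted per consumption, so counting the advisories
        // received here tells you how many consumers processed messages from MY_TOPIC.
        Topic advisoryTopic = session.createTopic("ActiveMQ.Advisory.MessageConsumed.Topic.MY_TOPIC");
        MessageConsumer advisoryConsumer = session.createConsumer(advisoryTopic);
        advisoryConsumer.setMessageListener((Message advisory) ->
                System.out.println("Consumed advisory: " + advisory));

        // Keep the sketch alive long enough to observe advisories, then clean up.
        Thread.sleep(60_000);
        connection.close();
    }
}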