I could find a way to create a delay between supply from the producer and consumption by the consumer.
But I want to know if there is any way to create a delay on every message. Say I want my consumer to pick up only 1 message every 2 seconds, but I want my producer to produce at its best rate, as my consumer is not as efficient as the producer.
So, is there a way to control delay on each message before it is sent from queue to consumer?
I tried this on the producer:
weblogic.jms.extensions.WLMessageProducer producer = (weblogic.jms.extensions.WLMessageProducer) queueSender;
and this on the message:
weblogic.jms.extensions.WLMessage message = (weblogic.jms.extensions.WLMessage) tMessage;
message.setJMSDeliveryTime(20000);
but I'm not seeing any difference.
You'll probably want:
((weblogic.jms.extensions.WLMessageProducer)producer).setTimeToDeliver(2000);
http://docs.oracle.com/cd/E15051_01/wls/docs103/javadocs/weblogic/jms/extensions/WLMessageProducer.html#setTimeToDeliver(long)
I'm not sure what your first attempt was supposed to do, but setJMSDeliveryTime has been deprecated since WebLogic 9.
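For completeness, a minimal sketch of how that could look around your send code (assuming you already have a QueueSession and the target Queue; only the cast and the setTimeToDeliver call are WebLogic-specific):

import javax.jms.Queue;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TextMessage;
import weblogic.jms.extensions.WLMessageProducer;

public class DelayedSend {
    // Every message sent through this producer stays invisible to consumers for ~2 seconds.
    static void sendWithDelay(QueueSession session, Queue queue, String text) throws Exception {
        QueueSender sender = session.createSender(queue);
        // WebLogic extension: delivery delay in milliseconds
        ((WLMessageProducer) sender).setTimeToDeliver(2000);
        TextMessage msg = session.createTextMessage(text);
        sender.send(msg);
    }
}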
There is a bit of a contradiction in your question, in that "consumer to pick only 1 message every 2 seconds" is not the same thing as "control delay on each message before it is sent from queue to consumer". E.g. if your producer was putting in messages at, say, 10,000/hr, and you put a delay of 30 mins on each message, your consumer would still attempt to consume at 10,000/hr if it could. The only impact of the delay would be that the consumer would not start consuming until 30 mins after the producer started injecting.
Assuming the former is what you want, I believe the only option in WebLogic is to implement something in your consumer code to slow down processing on that side.
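If you go the consumer-side route, a minimal sketch could be a synchronous receiver that simply paces itself (assuming a plain javax.jms QueueReceiver; the 2-second sleep is your target interval):

import javax.jms.Message;
import javax.jms.QueueReceiver;

public class PacedConsumer {
    // Pulls at most one message every ~2 seconds; the producer can still enqueue at full speed.
    static void consume(QueueReceiver receiver) throws Exception {
        while (true) {
            Message msg = receiver.receive();   // blocks until a message is available
            // ... process msg ...
            Thread.sleep(2000);                 // pace the consumer, not the producer
        }
    }
}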
Alternatively, setting the Time to Deliver Override in the queue settings implements a delay for each message, but it does not change the consumption rate. You can also set the Time to Deliver in code from the producer, but the WebLogic queue setting takes precedence (it overrides!) if both are set.
Hope that's some help!
Related
I'm implementing a daily job which gets data from MongoDB (around 300K documents) and for each of them publishes a message on a RabbitMQ queue.
On the other side I have some consumers on the same queue, which ideally should work in parallel.
Everything is working, but not as well as I would like, especially regarding the consumers' performance.
This is how I declare the queue:
rabbitMQ.getChannel().queueDeclare(QUEUE_NAME, true, false, false, null);
This is how the publishing is done:
rabbitMQ.getChannel().basicPublish("", QUEUE_NAME, null, body.getBytes());
So the channel used to declare the queue is used to publish all the messages.
And this is how the consumers are instantiated in a for loop (10 in total, but it can be any number):
Channel channel = rabbitMQ.getConnection().createChannel();
MyConsumer consumer = new MyConsumer(customMapper, channel, subscriptionUpdater);
channel.basicQos(1); // also tried with 0, 10, 100, ...
channel.basicConsume(QUEUE_NAME, false, consumer);
So for each consumer I create a new channel and this is confirmed by logs:
...
com.rabbitmq.client.impl.recovery.AutorecoveringChannel#bdd2027
com.rabbitmq.client.impl.recovery.AutorecoveringChannel#5d1b9c3d
com.rabbitmq.client.impl.recovery.AutorecoveringChannel#49a26d19
...
As far as I've understood from my very short RabbitMQ experience, this should guarantee that all the consumers are called.
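For reference, MyConsumer (stripped of its mapper and updater collaborators) is roughly shaped like this:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;

public class MyConsumer extends DefaultConsumer {
    public MyConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // ... map the message and update MongoDB here ...
        // manual ack, because basicConsume is called with autoAck = false
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
}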
By the way, consumers need between 0.5 and 1.2 seconds to complete their task; I have only spotted a very few taking 3 seconds.
I have two separate queues and I repeat what I said above two times (using the same RabbitMQ connection).
So, I have tested publishing 100 messages for each queue. Both of them have 10 consumers with qos=1.
I didn't expect a delivery/consume rate of exactly 10/s, but instead I noticed:
actual rates are around 0.4 and 1.0 per second.
at least all the consumers bound to the queue have received a message, but it doesn't look like "fair dispatching".
it took about 3 mins 30 secs to consume all the messages on both queues.
Am I missing the main concept of threading within RabbitMQ? Or any specific configuration which might be still at default value?
I've only been working with it for a very few days, so that might well be the case.
Please notice that I'm in the fortunate position where I can control both publishing and consuming parts :)
I'm using RabbitMQ 3.7.3 locally, so it cannot be any network latency issue.
Thanks for your help!
The setup of RabbitMQ channels and consumers was correct in the end: one channel for each consumer.
The problem was having the consumers calling a synchronized method to find and update a MongoDB document.
This was delaying the execution of some consumers; even worse, the more consumers I added (thinking it would speed up processing), the lower the message rate/s I was getting.
I have moved the MongoDB part to the publishing side, where I don't have to care about synchronization because it's done in sequence by one publisher only. I have a slightly lower delivery rate/s, but now with just 5 consumers I easily reach an ack rate of 50-60/s.
Lessons learnt:
create a separate channel for the publisher.
create a separate channel for each consumer.
let RabbitMQ manage threading for the consumers (--> you can instantiate them on the main thread).
(if possible) back off publishing to give the queues 100% time to deal with consumers.
set a qos > 1 for each consumer channel. But this really depends on your scenario and architecture: you must do some performance tests.
As a general rule:
(1) calculate/estimate delivery time.
(2) calculate/estimate ack time.
(3) calculate/estimate consumer time.
qos = ((1) + (2) + (3)) / (3)
This will give you an initial qos value to test and tweak based on your scenario. The final goal is to have 100% utilization for all the available consumers.
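For example, with made-up numbers: if delivery takes ~50 ms, the ack another ~50 ms, and the consumer ~1000 ms per message, then qos = (50 + 50 + 1000) / 1000 ≈ 1.1, so a prefetch of 2 would be a reasonable starting value to test.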
Sometimes, due to some external problem, I need to requeue a message with basic.reject and requeue = true.
But I don't need to consume it immediately, because it will probably fail again in a short time. If I continuously requeue it, this may result in an infinite requeue loop.
So I need to consume it later, say one minute later.
And I need to know how many times the message has been requeued, so that I can stop requeuing it and simply reject it to mark it as failed to consume.
PS: I am using Java client.
There are multiple solutions to point 1.
First one is the one chosen by Celery (a Python producer/consumer library that can use RabbitMQ as broker). Inside your message, add a timestamp at which the task should be executed. When your consumer gets the message, do not ack it and check its timestamp. As soon as the timestamp is reached, the worker can execute the task. (Note that the worker can continue working on other tasks instead of waiting)
This technique has some drawbacks. You have to increase the QoS per channel to an arbitrary value. And if your worker is already working on a long-running task, the delayed task won't be executed until the first task has finished.
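A rough sketch of that first approach with the Java client (the "execute-at" header name is an assumption; the producer would set it to the epoch time at which the task may run):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredConsumer extends DefaultConsumer {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public DeferredConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // "execute-at" (epoch millis) is assumed to be set by the producer.
        long executeAt = ((Number) properties.getHeaders().get("execute-at")).longValue();
        long delay = Math.max(0, executeAt - System.currentTimeMillis());

        // Keep the message unacked and run the task once the timestamp is reached.
        scheduler.schedule(() -> {
            try {
                // ... execute the task here ...
                getChannel().basicAck(envelope.getDeliveryTag(), false);
            } catch (IOException e) {
                // log / handle the failed ack
            }
        }, delay, TimeUnit.MILLISECONDS);
    }
}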
A second technique is RabbitMQ-only and is much more elegant. It takes advantage of dead-letter exchanges and message TTLs. You create a new queue that nobody consumes from. This queue has a dead-letter exchange that forwards messages to the consumer queue. When you want to defer a message, ack it (or reject it without requeue) on the consumer queue and publish a copy of it to the dead-lettered queue with a TTL equal to the delay you want (say one minute). At (roughly) the end of the TTL, the deferred message will magically land in the consumer queue again, ready to be consumed. The RabbitMQ team has also made the Delayed Message Plugin: it is marked as experimental yet fairly stable and potentially suitable for production use as long as the user is aware of its limitations, but it has serious limitations in terms of scalability and reliability in case of failover. So you should decide whether you really want to use it in production, or whether you prefer to stick to the manual way, which is limited to one TTL per queue.
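A rough sketch of that dead-letter variant with the Java client (the queue names "work" and "work.delay" are made up; the TTL is set per queue, which is the "one TTL per queue" limitation mentioned above):

import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class DelayedRequeueSetup {
    static void declareQueues(Channel channel) throws Exception {
        // The queue your consumers actually read from.
        channel.queueDeclare("work", true, false, false, null);

        // A "waiting room" queue with no consumers: after 60 s each message is
        // dead-lettered to the default exchange with routing key "work",
        // i.e. it lands back on the work queue.
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 60000);
        args.put("x-dead-letter-exchange", "");
        args.put("x-dead-letter-routing-key", "work");
        channel.queueDeclare("work.delay", true, false, false, args);
    }

    // To defer a message: ack (or reject without requeue) the original delivery,
    // then publish a copy of its body to the delay queue.
    static void defer(Channel channel, byte[] body) throws Exception {
        channel.basicPublish("", "work.delay", null, body);
    }
}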
Point 2 just requires putting a counter in your message and handling it inside your app. You can choose to put this counter in a header or directly in the body.
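For example (a sketch only; the header name "x-retries" and the retry limit are arbitrary, and it republishes into the hypothetical delay queue from the previous sketch):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class RetryCounter {
    static final int MAX_RETRIES = 5; // arbitrary limit

    // Returns true if the message was re-published for another attempt,
    // false if it has exceeded the limit and should simply be rejected.
    static boolean retryLater(Channel channel, AMQP.BasicProperties props, byte[] body) throws Exception {
        Map<String, Object> headers = props.getHeaders() == null
                ? new HashMap<>()
                : new HashMap<>(props.getHeaders());
        int attempts = headers.containsKey("x-retries")
                ? ((Number) headers.get("x-retries")).intValue()
                : 0;
        if (attempts >= MAX_RETRIES) {
            return false; // give up: reject without requeue
        }
        headers.put("x-retries", attempts + 1);
        AMQP.BasicProperties newProps = props.builder().headers(headers).build();
        channel.basicPublish("", "work.delay", newProps, body); // into the delay queue above
        return true;
    }
}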
I need to create an application wherein I have to retrieve all the elements inside the JMS queue within a given time limit.
For instance, say the given limit is 10 seconds. So every 10 seconds, the application should create a new Thread, where the Thread is responsible for 1) connecting to the JMS queue and 2) retrieving all the messages during the time of the connection.
So let's say that in those 10 seconds there were 15 TextMessages in the queue. I only want the currently executing thread to retrieve those 15 TextMessages and nothing else. I'm afraid that the thread would pick up additional messages.
Is there a facility to limit how many messages a consumer can take? Maybe some feature which would let me see how many messages the queue contains?
One method I can think of is to create a receiver from a session that uses CLIENT_ACKNOWLEDGE acknowledgement mode. Now start the receiver and receive the messages. Yes, you will receive some additional messages. As you receive a message, get its JMSTimestamp and see whether it belongs to the time duration your thread is interested in. If the message matches your time requirement, acknowledge it. If not, do not acknowledge it, in which case it will persist on the server and may be picked up by other threads looking for messages with different timestamps.
Another, more efficient way would be to use a message selector. Since JMSTimestamp is a message header and can be used in a selector, you can take advantage of it. Create a receiver with a selector on JMSTimestamp covering your time range. Only messages satisfying the selector will be received.
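A sketch of the selector approach (the window boundaries are epoch milliseconds computed by your thread; the selector syntax on JMSTimestamp is standard JMS):

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class WindowedReceiver {
    // Receives only messages whose JMSTimestamp lies in [from, to).
    static void drainWindow(Session session, Queue queue, long from, long to) throws Exception {
        String selector = "JMSTimestamp >= " + from + " AND JMSTimestamp < " + to;
        MessageConsumer consumer = session.createConsumer(queue, selector);
        Message msg;
        while ((msg = consumer.receive(1000)) != null) { // 1 s poll; stop once the window is drained
            // ... process msg ...
        }
        consumer.close();
    }
}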
I'm a bit new to camel, so please forgive me if this is a stupid question!
In Camel, I have a list of competing consumers against a queue, queue-1. I'd like each consumer to wait 1 hour between attempts to read the queue, but once an hour has passed, each consumer should continuously poll until it receives a message. Once it receives a message, it should process it, and then wait an hour before attempting another read, and so on.
Here's the route I have set up:
from("aws-sqs://queue-1?accessKey=ABC&secretKey=XYZ&maxMessagesPerPoll=1")
.unmarshal().base64()
.unmarshal().serialization()
.throttle(1)
.timePeriodMillis(TimeUnit.HOURS.toMillis(1))
.bean(new ProcessorBean())
.marshal().serialization()
.marshal().base64()
.to("aws-sqs://queue-2?accessKey=ABC&secretKey=XYZ");
It is my understanding that routes execute synchronously (with the exception of specific components designed to work asynchronously). Based on that understanding, I believe this route satisfies those requirements.
Will this do what I want? Why or why not?
Your route will consume a message from the queue and then wait for one hour.
If you want to wait an hour and then read a message, look at ScheduledPollConsumer Options (Doc)
Some options allow you to use a scheduler like Quartz2 or a Spring-based scheduler.
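For example, assuming the aws-sqs consumer honours the standard ScheduledPollConsumer delay option, the route could become roughly (a sketch, not a drop-in replacement):

from("aws-sqs://queue-1?accessKey=ABC&secretKey=XYZ&maxMessagesPerPoll=1"
        + "&delay=3600000") // scheduled-poll option: wait one hour between polls
    .unmarshal().base64()
    .unmarshal().serialization()
    .bean(new ProcessorBean())
    .marshal().serialization()
    .marshal().base64()
    .to("aws-sqs://queue-2?accessKey=ABC&secretKey=XYZ");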
Use the log component if you want to be sure: .to("log:com.mycompany.order?level=DEBUG").
Say you have two Spring DefaultMessageListenerContainer listening on the same Queue (ActiveMQ for example), started in different VMs.
Send 1 000 000 messages in. After 500 000 messages you want the rest of them to be handled by only one DefaultMessageListenerContainer, BUT without calling destroy or shutdown on the other (since you might need it in the future, and it must stay manageable via JMX). The numbers are just an example and should be ignored; they could be replaced with "after some time", "after some messages", etc.
This sounds easy: call stop on the other DefaultMessageListenerContainer. Wrong, since messages are dispatched in a round-robin fashion and they get registered with the Consumer.
Add transaction support and throw an error in the second DefaultMessageListenerContainer every time a message comes in, so it is rolled back and taken (round-robin) by the first one. Wrong again: the message somehow registers with the consumer, not allowing the first DefaultMessageListenerContainer to take it.
Even if you shut down/destroy the first DMLC, the messages are NOT consumed by the other DMLC. They ARE consumed only if I kill the JVM that the now shut-down/destroyed DMLC was running in.
My solution so far: because of Session.AUTO_ACKNOWLEDGE, messages are taken off the queue before they enter the onMessage method in the MessageListener of the DefaultMessageListenerContainer. In the MessageListener, implement SessionAwareMessageListener and re-send a fresh copy of the message with the same payload.
But this looks really dirty - I wish I could do it more in a "JMS"-ish way.
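A rough sketch of that re-sending listener (only the idea, not the exact code; it assumes text messages):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.springframework.jms.listener.SessionAwareMessageListener;

public class ResendingListener implements SessionAwareMessageListener<Message> {
    @Override
    public void onMessage(Message message, Session session) throws JMSException {
        // Instead of processing, put a fresh copy of the payload back on the queue
        // so the other (running) container gets a chance to pick it up.
        MessageProducer producer = session.createProducer(message.getJMSDestination());
        if (message instanceof TextMessage) {
            TextMessage copy = session.createTextMessage(((TextMessage) message).getText());
            producer.send(copy);
        }
        producer.close();
    }
}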
I don't fully grasp this part: "[the messages] get registered with the Consumer". Do you mean that ActiveMQ decides which listener to send it to? What exactly happens when you call "stop" on the DMLC?
I don't know if this is going to overcome your difficulties, but here's an idea: message selectors in DMLCs are live: you can change them any time and they are effective immediately. Perhaps try changing the message selector to "FALSE"; all cached messages should finish processing and new ones should stop coming.
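Something along these lines (a sketch; container is the DMLC you want to quiesce, and "FALSE" is just a selector that matches nothing):

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerSwitch {
    // Stop new messages from being handed to this container without destroying it.
    static void quiesce(DefaultMessageListenerContainer container) {
        container.setMessageSelector("FALSE"); // matches no messages
    }

    // Resume consumption later, still without recreating the container.
    static void resume(DefaultMessageListenerContainer container) {
        container.setMessageSelector(null);
    }
}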