I'm a bit new to Camel, so please forgive me if this is a stupid question!
In Camel, I have a list of competing consumers against a queue, queue-1. I'd like each consumer to wait 1 hour between attempts to read the queue, but once an hour has passed, each consumer should continuously poll until it receives a message. Once it receives a message, it should process it, and then wait an hour before attempting another read, and so on.
Here's the route I have set up:
from("aws-sqs://queue-1?accessKey=ABC&secretKey=XYZ&maxMessagesPerPoll=1")
.unmarshal().base64()
.unmarshal().serialization()
.throttle(1)
.timePeriodMillis(TimeUnit.HOURS.toMillis(1))
.bean(new ProcessorBean())
.marshal().serialization()
.marshal().base64()
.to("aws-sqs://queue-2?accessKey=ABC&secretKey=XYZ");
It is my understanding that routes execute synchronously (with the exception of specific components designed to work asynchronously). Based on that understanding, I believe this route satisfies those requirements.
Will this do what I want? Why or why not?
Your route will consume a message from the queue and then wait for one hour.
If you want to wait an hour and then read a message, look at the ScheduledPollConsumer options (Doc).
Some of those options let you plug in a scheduler such as Quartz2 or a Spring-based scheduler.
Use the log component if you want to be sure: .to("log:com.mycompany.order?level=DEBUG").
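For example, here is a minimal sketch of the polling-delay approach (delay is a standard ScheduledPollConsumer option, given in milliseconds; note that it applies between every poll, so the "keep polling until a message arrives, then back off for an hour" part would still need a custom scheduler or poll strategy):
// Sketch only: same endpoint as in the question, plus a one-hour poll delay.
from("aws-sqs://queue-1?accessKey=ABC&secretKey=XYZ"
        + "&maxMessagesPerPoll=1"
        + "&delay=3600000")              // wait one hour between polls
    .unmarshal().base64()
    .unmarshal().serialization()
    .bean(new ProcessorBean())
    .marshal().serialization()
    .marshal().base64()
    .to("aws-sqs://queue-2?accessKey=ABC&secretKey=XYZ");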
Sometimes due to some external problems, I need to requeue a message by basic.reject with requeue = true.
But I don't need to consume it immediately, because it will probably fail again in a short time. If I keep requeueing it, this could turn into an infinite requeue loop.
So, point 1: I need to consume it later, say one minute later.
And, point 2: I need to know how many times the message has been requeued, so that I can stop requeueing it and simply reject it to mark it as failed to consume.
PS: I am using Java client.
There are multiple solutions to point 1.
The first one is the one chosen by Celery (a Python producer/consumer library that can use RabbitMQ as a broker). Inside your message, add a timestamp at which the task should be executed. When your consumer gets the message, check its timestamp and do not ack it yet. As soon as the timestamp is reached, the worker can execute the task. (Note that the worker can continue working on other tasks instead of waiting.)
This technique has some drawbacks: you have to increase the QoS (prefetch) per channel to an arbitrary value, and if your worker is already busy with a long-running task, the delayed task won't be executed until that first task has finished.
A second technique is RabbitMQ-only and much more elegant. It takes advantage of dead-letter exchanges and message TTLs. You create a new queue that nobody consumes from. This queue has a dead-letter exchange that will forward messages to the consumer queue. When you want to defer a message, ack it (or reject it without requeue) on the consumer queue and publish a copy of it to the dead-lettered queue with a TTL equal to the delay you want (say, one minute). At (roughly) the end of the TTL, the deferred message will magically land in the consumer queue again, ready to be consumed.
The RabbitMQ team has also made the Delayed Message Plugin. It is marked as experimental, yet fairly stable and potentially suitable for production use as long as the user is aware of its limitations; it has serious limitations in terms of scalability and reliability in case of failover. So decide whether you really want to use it in production, or whether you prefer to stick to the manual approach, which is limited to one TTL per (delay) queue.
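A minimal sketch of that dead-letter approach with the RabbitMQ Java client (the queue names "work" and "work.delay" and the 60-second delay are made up for illustration; channel and body are assumed to already exist):
import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.AMQP;

// The delay queue has no consumers; when a message's TTL expires it is
// dead-lettered to the default exchange with the consumer queue's routing key.
Map<String, Object> args = new HashMap<>();
args.put("x-dead-letter-exchange", "");         // default exchange
args.put("x-dead-letter-routing-key", "work");  // back to the consumer queue
channel.queueDeclare("work", true, false, false, null);
channel.queueDeclare("work.delay", true, false, false, args);

// To defer a message: ack (or reject without requeue) the original delivery,
// then publish a copy to the delay queue with a per-message TTL.
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .expiration("60000")   // one minute, in milliseconds, as a string
        .build();
channel.basicPublish("", "work.delay", props, body);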
Point 2 just requires putting a counter in your message and handling it inside your app. You can choose to put this counter in a header or directly in the body.
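For the counter, a rough sketch using a custom header (the header name x-retries and the MAX_RETRIES limit are made up; RabbitMQ also adds an x-death header when a message is dead-lettered, which you could inspect instead):
// Assumes 'delivery' is the com.rabbitmq.client.Delivery you just received
// and 'channel' is the channel it came from.
Map<String, Object> headers = delivery.getProperties().getHeaders();
int retries = 0;
if (headers != null && headers.get("x-retries") != null) {
    retries = ((Number) headers.get("x-retries")).intValue();
}
if (retries >= MAX_RETRIES) {
    // Give up: reject without requeue so the message does not loop forever.
    channel.basicReject(delivery.getEnvelope().getDeliveryTag(), false);
} else {
    Map<String, Object> newHeaders =
            headers == null ? new HashMap<>() : new HashMap<>(headers);
    newHeaders.put("x-retries", retries + 1);
    AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
            .headers(newHeaders)
            .expiration("60000")
            .build();
    channel.basicPublish("", "work.delay", props, delivery.getBody());
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}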
I've been using Camel for a while and I'm a huge admirer of its simplicity.
The use case
Given this simple route:
from("mock:some-route")
// split 1
.split().method("splitterBean", "split")
// now we have an ArrayList of n messages (let's say 10)
.to(ExchangePattern.InOut, "jms:some-thing");
If we assume that we have 10 messages after the split(), this route will immediately send 10 messages to the "to" endpoint. So jms:some-thing will receive all 10 messages at once.
The problem
--> Please note that the outgoing endpoint is InOut, so there is a timeout within which the receiver must acknowledge each message.
The application on the receiving end of jms:some-thing has to do quite some work for each message. As all 10 messages were written at the same time, the same timeout applies for all of them.
So we increased that timeout.
But one day we will have 1000 messages and the timeout will again be too low.
What I want to achieve
I want to implement a pattern where only one message is sent at a time after the split, and the next message is sent only after the previous one has been acknowledged by the receiving system.
So instead of sending the 10 messages at once, I want
Send 1 message
Wait for the acknowledgment of that message
Send the next
Wait again
And so on..
How to implement such behavior?
I looked at the documentation, but none of the EIP components seem to fulfill that need.
Thanks for any input
You can have an intermediate seda queue with only one thread.
from("mock:some-route")
.split().method("splitterBean", "split")
.to("seda:your-seda-queue?waitForTaskToComplete=Always&timeout=0");
from("seda:your-seda-queue?waitForTaskToComplete=Always&timeout=0")
.to(ExchangePattern.InOut, "jms:some-thing");
By default, a seda queue has a single consuming thread, and with waitForTaskToComplete=Always the calling route blocks until that consumer has finished processing each exchange. More on seda details here
That said, you're sending to a JMS topic, which is really what you should be using to queue up your requests instead of a seda queue. You should look into implementing this logic asynchronously and waiting on a reply topic rather than relying on a timeout.
I need to create an application wherein I have to retrieve all the elements inside the JMS queue within a given time limit.
For instance, say the given limit is 10 seconds. Every 10 seconds, the application should create a new thread, and that thread is responsible for 1) connecting to the JMS queue and 2) retrieving all the messages present at the time of connection.
So within those 10 seconds, let's say there were 15 TextMessages in the queue. I only want the currently executing thread to retrieve those 15 TextMessages and nothing else; I'm afraid the thread would otherwise pick up additional messages.
Is there a facility to limit how many messages a consumer can take? Maybe some feature that would let me see how many messages the queue contains?
One method I can think of is to create a receiver from a session that uses the CLIENT_ACKNOWLEDGE acknowledgement mode. Start the receiver and receive the messages; yes, you will receive some additional messages. As you receive each message, read its JMSTimestamp and check whether it falls within the time window your thread is interested in. If the message matches your time requirement, acknowledge it. If not, do not acknowledge it, in which case it will remain on the server and may be picked up by other threads looking for messages with different timestamps.
Another, more efficient way is to use a message selector. Since JMSTimestamp is a message header, it can be used in a selector, and you can take advantage of that. Create the receiver with a selector on JMSTimestamp that expresses your time-range requirement; only messages satisfying the selector will be received.
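A minimal sketch of the selector approach with plain JMS (the 10-second window is illustrative; session and queue are assumed to exist, and the calls can throw JMSException):
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.TextMessage;

// JMSTimestamp is set by the provider in milliseconds since the epoch,
// so it can be compared against System.currentTimeMillis() values.
long end = System.currentTimeMillis();
long start = end - 10_000;  // the last 10 seconds
String selector = "JMSTimestamp >= " + start + " AND JMSTimestamp < " + end;
MessageConsumer consumer = session.createConsumer(queue, selector);

Message msg;
while ((msg = consumer.receive(1000)) != null) {  // short timeout drains the window
    if (msg instanceof TextMessage) {
        String text = ((TextMessage) msg).getText();
        // process text ...
    }
}
consumer.close();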
I was able to find a way to create a delay between the producer supplying a message and the consumer consuming it.
But I want to know whether there is any way to create a delay on every message. Say I want my consumer to pick up only one message every 2 seconds, while my producer keeps producing at its best rate, since my consumer is not as efficient as the producer.
So, is there a way to control the delay on each message before it is sent from the queue to the consumer?
I tried this on the producer:
weblogic.jms.extensions.WLMessageProducer producer = (weblogic.jms.extensions.WLMessageProducer) queueSender;
and this in onMessage:
weblogic.jms.extensions.WLMessage message = (weblogic.jms.extensions.WLMessage) tMessage;
message.setJMSDeliveryTime(20000);
but I am not seeing any difference.
You'll probably want:
((weblogic.jms.extensions.WLMessageProducer)producer).setTimeToDeliver(2000);
http://docs.oracle.com/cd/E15051_01/wls/docs103/javadocs/weblogic/jms/extensions/WLMessageProducer.html#setTimeToDeliver(long)
I'm not sure what your first attempt was supposed to do. But setJMSDeliveryTime has been deprecated since WebLogic 9.
There is a bit of a contradiction in your question, in that "consumer to pick only 1 message every 2 seconds" is not the same thing as "control delay on each message before it is sent from queue to consumer". For example, if your producer were putting in messages at, say, 10,000/hr, and you put a delay of 30 minutes on each message, your consumer would still attempt to consume at 10,000/hr if it could. The only impact of the delay would be that the consumer would not start consuming until 30 minutes after the producer started injecting.
Assuming the former is what you want to do, I believe the only option in WebLogic is to implement something in your consumer code to slow down processing on that side.
Setting the Time to Deliver Override in the queue settings implements a delay for each message, but does not change the consumption rate. You can also set the Time to Deliver in code from the producer, but the WebLogic queue setting takes precedence (it overrides!) if both are set.
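A minimal sketch of the producer-side approach (assumes an existing QueueSender queueSender and a Message message; remember the queue-level override wins if both are set):
// Cast the standard JMS producer to the WebLogic extension and set the
// delivery delay in milliseconds; messages sent afterwards stay invisible
// to consumers until the delay has elapsed.
weblogic.jms.extensions.WLMessageProducer wlProducer =
        (weblogic.jms.extensions.WLMessageProducer) queueSender;
wlProducer.setTimeToDeliver(2000);  // 2 seconds
queueSender.send(message);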
Hope that's some help!
Say you have two Spring DefaultMessageListenerContainer listening on the same Queue (ActiveMQ for example), started in different VMs.
Send 1,000,000 messages in. After 500,000 messages you want the rest of them to be handled by only one DefaultMessageListenerContainer, BUT without calling destroy or shutdown on the other (since you might need it in the future, and it must stay manageable via JMX). The numbers here are just an example and should be ignored; they could be replaced with "after some time", "after some messages", etc.
This sounds easy: call stop on the other DefaultMessageListenerContainer. Wrong, since messages are dispatched in a round-robin fashion and get registered with the consumer.
Add transaction support and throw an error in the second DefaultMessageListenerContainer every time a message comes in, so that it is rolled back and taken (round-robin) by the first one. Wrong again: the message somehow stays registered with the consumer, preventing the first DefaultMessageListenerContainer from taking it.
Even if you shutdown/destroy the first DMLC, the message is NOT consumed by the other DMLC. It is consumed only if I kill the JVM that the now shutdown/destroyed DMLC was running in.
My solution so far: because of Session.AUTO_ACKNOWLEDGE, messages are taken off the queue before they enter the onMessage method of the DefaultMessageListenerContainer's MessageListener. So in the MessageListener, implement SessionAwareMessageListener and re-send a fresh copy of the message with the same payload.
But this looks really dirty - I wish I could do it more in a "JMS"-ish way.
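For reference, a rough sketch of that workaround (the class name and the hand-off condition are made up; it only copies text payloads):
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.springframework.jms.listener.SessionAwareMessageListener;

public class ForwardingListener implements SessionAwareMessageListener<Message> {
    @Override
    public void onMessage(Message message, Session session) throws JMSException {
        if (shouldHandOff()) {
            // Re-send a fresh copy with the same payload on the same session,
            // so the message goes back to the destination for the other container.
            MessageProducer producer = session.createProducer(message.getJMSDestination());
            try {
                if (message instanceof TextMessage) {
                    producer.send(session.createTextMessage(((TextMessage) message).getText()));
                }
            } finally {
                producer.close();
            }
        } else {
            // normal processing here
        }
    }

    private boolean shouldHandOff() {
        return true; // placeholder for the "after some time / some messages" condition
    }
}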
I don't fully grasp this part: "[the messages] get registered with the Consumer". Do you mean that ActiveMQ decides which listener to send it to? What exactly happens when you call "stop" on the DMLC?
I don't know if this is going to overcome your difficulties, but here's an idea: message selectors in DMLCs are live, meaning you can change them at any time and they take effect immediately. Perhaps try changing the message selector to "FALSE"; all cached messages should finish processing and new ones should stop coming.
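A minimal sketch of that idea (the bean name is made up; "1=0" is simply a selector that never matches, equivalent in effect to the "FALSE" literal where the provider accepts it):
import org.springframework.context.ApplicationContext;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// Flip the selector at runtime; the container applies it to the consumers it
// creates, so it effectively stops receiving without being shut down and
// remains manageable via JMX.
DefaultMessageListenerContainer container = context.getBean(
        "secondListenerContainer", DefaultMessageListenerContainer.class);
container.setMessageSelector("1=0");  // matches nothing

// Later, to let it receive again:
container.setMessageSelector(null);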