I hope someone can provide some help on this matter.
I am using camel-rabbitmq and, for testing purposes, I am trying to send a message to a queue, see it in the RabbitMQ management interface, and then read it back.
However I can't get this working.
What I believe works: I created a new exchange in the Exchanges tab of the RabbitMQ management interface.
In my Java code I send the message to that exchange. When the code is executed, I can see a spike in the web interface showing that something was received, but I can't see what was received.
When I try to read, I can't, and I get the following error:
< in route: Route(route2)[[From[rabbitmq://192.168.59.103:5672/rt... because of Route route2 has no output processors. You need to add outputs to the route such as to("log:foo").
Can someone provide a practical example of how to send a message, see it in the web interface, and also read it? Any tutorial showing this process would also be appreciated.
Thank you
=================
SECOND PART
The error I'm getting now is the following:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; reason: {#method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - cannot redeclare exchange 'rhSearchExchange' in vhost '/' with different type, durable, internal or autodelete value, class-id=40, method-id=10), null, ""}
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:343)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:216)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118)
... 47 more
I have the following settings:
I get this error; I believe I'm doing something wrong with the URI and have to define some extra parameters that I'm missing.
My exchange is of direct type
My queue is of durable type
And my uri is :
rabbitmq://192.168.59.105:5672/rhSearchExchange?username=guest&password=guest&routingKey=rhSearchQueue
Any input on this?
Thanks
So I was able to figure this out yesterday, I had the same (or at least similar) problems you were having.
The options you have in the RabbitMQ URI must exactly match the options that your exchange was created with. For example, in my configuration, I had an exchange called tasks that was a direct type, was durable, and was not configured to autodelete. Note that the default value for the autodelete option in the rabbitmq camel component is true. Additionally, I wanted to get the messages with the routing key camel. That means my rabbitmq URI needed to look like:
rabbitmq:localhost:5672/tasks?username=guest&password=guest&autoDelete=false&routingKey=camel
Additionally, I wanted to read from an existing queue, called task_queue, rather than have the rabbitmq camel component declare its own queue. Therefore, I also needed to add an additional query parameter, so my rabbitmq URI was:
rabbitmq:localhost:5672/tasks?username=guest&password=guest&autoDelete=false&routingKey=camel&queue=task_queue
This configuration worked for me. Below, I added some Java code snippets from the code that configures the exchange and queue and sends a message, and my Camel Route configuration.
Exchange and Queue configuration:
ConnectionFactory rabbitConnFactory = new ConnectionFactory();
rabbitConnFactory.setHost("localhost");
final Connection conn = rabbitConnFactory.newConnection();
final Channel channel = conn.createChannel();
// declare a direct, durable, non autodelete exchange named 'tasks'
channel.exchangeDeclare("tasks", "direct", true);
// declare a durable, non exclusive, non autodelete queue named 'task_queue'
channel.queueDeclare("task_queue", true, false, false, null);
// bind 'task_queue' to the 'tasks' exchange with the routing key 'camel'
channel.queueBind("task_queue", "tasks", "camel");
Sending a message:
channel.basicPublish("tasks", "camel", MessageProperties.PERSISTENT_TEXT_PLAIN, "hello, world!".getBytes());
Camel Route:
@Override
public void configure() throws Exception {
from("rabbitmq:localhost:5672/tasks?username=guest&password=guest&autoDelete=false&routingKey=camel&queue=task_queue")
.to("mock:result");
}
I hope this helps!
Because this is the top hit on Google for rabbitmq/camel integration, I feel the need to add a bit more to the subject. The lack of simple Camel examples is astonishing to me.
import org.apache.camel.CamelContext;
import org.apache.camel.ConsumerTemplate;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;
import org.junit.Test;
public class CamelTests {
CamelContext context;
ProducerTemplate producer;
ConsumerTemplate consumer;
Endpoint endpoint;
@Test
public void camelRabbitMq() throws Exception {
context = new DefaultCamelContext();
context.start();
endpoint = context.getEndpoint("rabbitmq://192.168.56.11:5672/tasks?username=benchmark&password=benchmark&autoDelete=false&routingKey=camel&queue=task_queue");
producer = context.createProducerTemplate();
producer.setDefaultEndpoint(endpoint);
producer.sendBody("one");
producer.sendBody("two");
producer.sendBody("three");
producer.sendBody("four");
producer.sendBody("done");
consumer = context.createConsumerTemplate();
String body = null;
while (!"done".equals(body)) {
Exchange receive = consumer.receive(endpoint);
body = receive.getIn().getBody(String.class);
System.out.println(body);
}
context.stop();
}
}
Related
I am building a system that will receive messages via a message broker (currently JMS) from different systems. All the messages from all the sender systems have a deviceId, and there is no ordering in the reception of the messages.
For instance, system A can send a message with deviceId=1 and system B can send a message with deviceId=2.
My goal is to not start processing the messages for a given deviceId until I have received the messages from all the senders with that same deviceId.
For example, if I have 3 systems A, B and C sending messages to my system:
System A sends messageA1 with deviceId=1
System B sends messageB1 with deviceId=1
System C sends messageC1 with deviceId=3
System C sends messageC2 with deviceId=1 <--- here I should start processing messageA1, messageB1 and messageC2 because they all have deviceId 1.
Should this problem be resolved with some sync mechanism in my system, by the message broker, or by an integration framework like Spring Integration/Apache Camel?
A similar solution to the Aggregator (what @Artem Bilan mentioned) can also be implemented in Camel with a custom AggregationStrategy and by controlling the Aggregator completion using the Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP property.
The following might be a good starting point. (You can find the sample project with tests here)
Route:
from("direct:start")
.log(LoggingLevel.INFO, "Received ${headers.system}${headers.deviceId}")
.aggregate(header("deviceId"), new SignalAggregationStrategy(3))
.log(LoggingLevel.INFO, "Signaled body: ${body}")
.to("direct:result");
SignalAggregationStrategy.java
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.Predicate;
import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

public class SignalAggregationStrategy extends GroupedExchangeAggregationStrategy implements Predicate {
private int numberOfSystems;
public SignalAggregationStrategy(int numberOfSystems) {
this.numberOfSystems = numberOfSystems;
}
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
Exchange exchange = super.aggregate(oldExchange, newExchange);
List<Exchange> aggregatedExchanges = exchange.getProperty("CamelGroupedExchange", List.class);
// Complete aggregation if we have "numberOfSystems" (currently 3) different messages (where "system" headers are different)
// https://github.com/apache/camel/blob/master/camel-core/src/main/docs/eips/aggregate-eip.adoc#completing-current-group-decided-from-the-aggregationstrategy
if (numberOfSystems == aggregatedExchanges.stream().map(e -> e.getIn().getHeader("system", String.class)).distinct().count()) {
exchange.setProperty(Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP, true);
}
return exchange;
}
@Override
public boolean matches(Exchange exchange) {
// make it infinite (4th bullet point # https://github.com/apache/camel/blob/master/camel-core/src/main/docs/eips/aggregate-eip.adoc#about-completion)
return false;
}
}
Hope it helps!
You can do this in Apache Camel using a caching component. I think there is the EHCache component.
Essentially:
You receive a message with a given deviceId say deviceId1.
You look up in your cache to see which messages have been received for deviceId1.
As long as you have not received all three you add the current system/message to the cache.
Once all messages are there you process and clear the cache.
You could then of course route each incoming message to a specific deviceId-based queue for temporary storage. This can be JMS, ActiveMQ, or something similar.
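A rough sketch of this idea in a Camel route, using a plain in-memory map as a stand-in for the EHCache component; the queue names, the "deviceId" header, and the count of three senders are assumptions taken from the question, not a ready-made implementation:
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.camel.builder.RouteBuilder;

public class DeviceIdCollectorRoute extends RouteBuilder {

    // deviceId -> messages collected so far (a stand-in for a real cache such as EHCache)
    private final Map<String, List<Object>> collectedByDevice = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        from("jms:queue:incoming")
            .process(exchange -> {
                String deviceId = exchange.getIn().getHeader("deviceId", String.class);
                List<Object> collected =
                        collectedByDevice.computeIfAbsent(deviceId, k -> new CopyOnWriteArrayList<>());
                collected.add(exchange.getIn().getBody());
                if (collected.size() == 3) {              // all three senders have reported this deviceId
                    collectedByDevice.remove(deviceId);   // clear the cache entry
                    exchange.getIn().setBody(collected);  // hand the whole group downstream
                } else {
                    exchange.getIn().setBody(null);       // group not complete yet
                }
            })
            .filter(body().isNotNull())                   // only forward completed groups
            .to("jms:queue:readyForProcessing");
    }
}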
Spring Integration provides a component for exactly this kind of task - do not emit until the whole group is collected. It's called the Aggregator. Your deviceId is definitely the correlationKey. The releaseStrategy may really be based on the number of systems - how many deviceId1 messages you wait for before proceeding to the next step.
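A rough Spring Integration Java DSL sketch of that Aggregator, assuming the Java DSL is available on the classpath; the channel names and the expectation of exactly three sender systems are illustrative assumptions:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class DeviceAggregationConfig {

    @Bean
    public IntegrationFlow deviceAggregationFlow() {
        return IntegrationFlows.from("incomingMessages")
                .aggregate(a -> a
                        .correlationExpression("headers['deviceId']")  // deviceId is the correlation key
                        .releaseStrategy(group -> group.size() == 3)   // release once all three senders have reported
                        .expireGroupsUponCompletion(true))             // the same deviceId can form a new group later
                .channel("readyForProcessing")
                .get();
    }
}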
When I send messages from a Camel context component to its endpoint, I have to wait for a response message with an acknowledgement. If no response is received within the timeout, an exception shall be thrown back to the Camel route.
I tried to implement it the following way:
I used a multicast to generate a timeout response while the original message is sent to the endpoint. The timeout response is delayed and if no response is received after this timeout, a timeout exception shall be thrown back on the route.
So I have the following route:
private final String internalRespUri = "direct:internal_resp";
private final String internalRespTimeout = "seda:internaltimeout";
@Override
public void configure() {
SendController send_controller = new SendController();
TimeoutResponse resp = new TimeoutResponse();
from(Endpoints.MESSAGE_IN.direct())
.errorHandler(noErrorHandler())
.routeId(Endpoints.MESSAGE_IN.atsm())
.log("Incoming message at segment in")
.process(send_controller)
.log("Message after send controller")
.multicast().parallelProcessing()
.log("After wiretap")
.to(internalRespTimeout, Endpoints.SEGMENT_OUT.direct());
from(internalRespTimeout)
.errorHandler(noErrorHandler())
.routeId(internalRespTimeout)
.log("begin response route")
.log("timeout response route")
.process(resp)
.log("modify message to response")
.delay(1000)
.log("after delay")
.to(internalRespUri);
from(Endpoints.SEGMENT_IN.seda())
.routeId(Endpoints.SEGMENT_IN.atsm())
.to(internalRespUri);
from(internalRespUri)
.errorHandler(noErrorHandler())
.routeId(internalRespUri)
.log("after response gathering point")
.choice()
.when(header(HeaderKeys.TYPE.key()).isEqualTo(UserMessageType.RESP.toString()))
.log("process responses")
.process(send_controller)
.otherwise()
.log("no response")
.to(Endpoints.MESSAGE_OUT.direct());
}
The problem is that the exception thrown in the SendController is not propagated over the SEDA endpoint internalRespTimeout.
If I use a direct endpoint instead it works, but then I have another problem:
The delay blocks the route, so a response message received from the Endpoints.SEGMENT_IN.seda() endpoint may not be transmitted.
Are SEDA endpoints generally not able to propagate exceptions?
How can I achieve a solution to my problem?
Thanks,
Sven
I have an idea:
Instead of throwing an exception, I could possibly use transactions for the timeout.
Could this work?
I am currently not aware of a way to propagate an exception back over a SEDA endpoint in Camel. The error handling works on the channels between endpoints, and when you use a SEDA endpoint the caller does not wait for the result - it just keeps processing. I am having a bit of trouble understanding what you would like to accomplish, but I will list some similar alternatives you might be able to use.
- The first is to use a route-level error handler in your SEDA-based route and store the exception under a unique id that you can look up later (see the sketch below).
- The second is to pass the data into a Java bean where you have full control of what you are doing, and you could even consider something like Guava's futures to run the code asynchronously while doing other tasks.
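A rough sketch of the first alternative, assuming Camel's Java DSL; the errorStore map, the "correlationId" header, and the endpoint names are hypothetical and only illustrate the idea of remembering the failure for a later lookup:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class SedaErrorCaptureRoute extends RouteBuilder {

    // exceptions captured in the SEDA route, keyed by a correlation id taken from the message
    private final Map<String, Exception> errorStore = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        from("seda:internaltimeout")
            // route-scoped error handling: remember the failure instead of losing it behind the SEDA hop
            .onException(Exception.class)
                .handled(true)
                .process(exchange -> {
                    String id = exchange.getIn().getHeader("correlationId", String.class);
                    Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                    errorStore.put(id, cause);   // the original route can look this up later
                })
            .end()
            .delay(1000)
            .to("direct:internal_resp");
    }
}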
If you can explain what you are trying to accomplish a bit better I might be able to make a clearer suggestion.
I am trying to fetch a message with a particular correlation id, as explained in the RabbitMQ docs. However, I see that irrelevant messages get dequeued, which I do not want. How can I tell RabbitMQ not to dequeue a message once I receive it and find out it is not the one I was looking for? Please help me.
// ...
replyQueueName = channel.queueDeclare().getQueue();
consumer = new QueueingConsumer(channel);
channel.basicConsume(replyQueueName, false, consumer);
while (true) {
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
System.out.println(delivery.getProperties().getCorrelationId());
if (delivery.getProperties().getCorrelationId().equals(corrId)) {
response = new String(delivery.getBody());
break;
}
}
You can't do what you want, the way you want. The "selective consumer" is an anti-pattern in RabbitMQ.
Instead, you should design your RabbitMQ setup so that your messages are routed to a queue that only contains messages for the intended consumer.
I wrote more about this, here: http://derickbailey.com/2015/07/22/airport-baggage-claims-selective-consumers-and-rabbitmq-anti-patterns/
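As a small illustration of that idea (the exchange name and consumer id below are made up, not taken from your setup): each consumer declares its own exclusive reply queue and binds it with a routing key that identifies that consumer, so only its own messages ever land there.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PerConsumerQueueExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        String consumerId = "consumer-42";                   // hypothetical consumer identity
        channel.exchangeDeclare("replies", "direct", true);  // durable direct exchange for replies
        String myQueue = channel.queueDeclare().getQueue();  // exclusive, auto-delete queue for this consumer
        channel.queueBind(myQueue, "replies", consumerId);   // only this consumer's routing key lands here

        // the replying side publishes with the consumer's id as the routing key
        channel.basicPublish("replies", consumerId, null, "reply for consumer-42".getBytes());

        conn.close();
    }
}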
If you can afford to lose the order of messages you can use the re-queueing mechanism.
Try turning off auto ack.
If not, you have to redesign your application to inject headers or routing keys to route to a particular queue.
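If re-queueing is acceptable, here is a sketch of that approach, reusing the channel/consumer/corrId variables from the snippet in the question: consume with autoAck=false, ack the matching message, and nack (with requeue=true) everything else so it goes back on the queue. Be aware that this can spin on the same unwanted messages and does not preserve ordering.
channel.basicConsume(replyQueueName, false, consumer);   // autoAck = false
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    long tag = delivery.getEnvelope().getDeliveryTag();
    if (corrId.equals(delivery.getProperties().getCorrelationId())) {
        response = new String(delivery.getBody());
        channel.basicAck(tag, false);        // acknowledge only the message we were waiting for
        break;
    }
    channel.basicNack(tag, false, true);     // requeue the foreign message for its real consumer
}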
We're using Spring Integration 4.2.0. We have a flow that uses a Message Router and have a desire to be able to log where a message was routed to (actual Destination name and ideally Destination type along with the raw payload). In our case our routers have output channels which have JmsSendingMessageHandler's as endpoints.
What we would like to see is something like this in our logs:
[INFO ] message routed to [amq | queue://QUEUE1] : This is a message!
[INFO ] message routed to [wmq | queue://QUEUE2] : This is also a message!
[INFO ] message routed to [ems | queue://QUEUE3] : This is also a message!
[INFO ] message routed to [wmq | topic://TOPIC1] : This is also a message!
The router config is similar to this:
<int:router id="messageRouter"
input-channel="inputChannel"
resolution-required="false"
ref="messageRouterServiceImpl"
method="route"
default-output-channel="unroutedChannel">
<int:mapping value="channelAlias1" channel="channel1" />
<int:mapping value="channelAlias2" channel="channel2" />
<int:mapping value="channelAlias3" channel="channel3" />
<int:mapping value="routerErrorChannel" channel="routerErrorChannel"/>
<int:mapping value="nullChannel" channel="nullChannel"/>
</int:router>
I have a solution for achieving this but I'll admit it is a bit ugly as it queries the Spring ApplicationContext then uses reflection to ultimately obtain the Destination's name.
Alternatively I suppose I could put a logger at the front of every channel that the router outputs to but was trying to avoid repeatedly having to remember to do this for every flow that we use a router in.
I'm wondering if anyone has suggestions for a cleaner way of doing this. I can share my code if you'd like. Perhaps Spring Integration Java DSL would help with this?
One point where you can hook in is <int:wire-tap> - a global ChannelInterceptor applied to particular channels by a pattern on their names.
This WireTap can send messages to an <int:logging-channel-adapter> or any other custom service to log them the way you want, or do anything else.
Another good out-of-the-box feature for you is <int:message-history>. With that you get the path a message has traveled through your flow, including the routing logic. You can find it as part of the MessageHeaders.
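Since you mention the Java DSL: here is a rough sketch of how the wire-tap approach could look there (assuming the Spring Integration Java DSL dependency; the channel and bean names are illustrative, not from your configuration). The wire-tap copies every message headed to a JMS handler into a single logging flow, so you don't have to add a logger to each downstream flow by hand.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.handler.LoggingHandler;
import org.springframework.integration.jms.JmsSendingMessageHandler;

@Configuration
public class RouterLoggingConfig {

    @Bean
    public IntegrationFlow routeLogFlow() {
        // everything wire-tapped into "routeLogChannel" ends up logged at INFO
        return IntegrationFlows.from("routeLogChannel")
                .handle(new LoggingHandler("INFO"))
                .get();
    }

    @Bean
    public IntegrationFlow channel1Flow(JmsSendingMessageHandler queue1Handler) {
        // "channel1" is one of the router's output channels; the wire-tap copies the
        // message to the logging flow before it reaches the JMS handler
        return IntegrationFlows.from("channel1")
                .wireTap("routeLogChannel")
                .handle(queue1Handler)
                .get();
    }
}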
If I understand your use case correctly:
- you want the router to log this information after successfully sending the message to the expected destination, and
- you do not want to add loggers everywhere, since that would have to be done for every new flow attached to the router.
One approach I can think of is:
1. Extend MethodInvokingRouter for your custom router implementation.
2. Override the handleMessageInternal method from the AbstractMessageRouter class.
Here is the code snippet:
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.log4j.Logger;
import org.springframework.integration.router.MethodInvokingRouter;
import org.springframework.messaging.Message;

public class CustomRouter extends MethodInvokingRouter {
Logger log = Logger.getLogger(CustomRouter.class);
Map<Message<?>,String> m = new ConcurrentHashMap<>();
public CustomRouter(Object object, Method method) {
super(object, method);
}
public CustomRouter(Object object, String methodName) {
super(object, methodName);
}
public CustomRouter(Object object) {
super(object);
}
public String route(Message<?> message) {
String destinationName = null;
/*
* Business logic to get destination name
*/
destinationName = "DERIVED VALUE AS PER THE ABOVE BUSINESS LOGIC";
// NOTE: we could also log here (optimistically saying that the message is being routed), but at this point we are not sure whether the returned string value will resolve to a message channel and the message will actually be routed to the destination.
// Put that name into the map so it can be extracted later in handleMessageInternal
m.put(message, destinationName); // O(1) complexity
return destinationName;
}
@Override
protected void handleMessageInternal(Message<?> message) {
super.handleMessageInternal(message);
//At this point we are quite sure that the channel has been resolved and the message has been sent to the destination
/*
* get the destination name returned from route method from populated map
*
* As at this point we know whatever return value was (from route method), there exists a message channel.
*
*/
String key = m.get(message);
// get the key-value pair for channel mapping
Map<String,String> mappedChannelMap = super.getChannelMappings();
// get destination name where message is routed
String destinationName = mappedChannelMap.get(key); // O(1) complexity
//Now log to a file as per the requirement
log.info("message routed to "+destinationName+" Message is- "+message.getPayload().toString());
}
}
I haven't tried this piece of code; there may be room for improvement. What are your thoughts?
I've been attempting to get Camel to route using the RabbitMQComponent releases in the 2.12.1-SNAPSHOT. In doing so, I've been able to consume easily, but have had issues when routing to other queues.
CamelContext context = new DefaultCamelContext();
context.addComponent("rabbit-mq", factoryComponent());
from("rabbit-mq://localhost/test.exchange&queue=test.queue&username=guest&password=guest&autoDelete=false&durable=true")
.log("${in.body}")
.to("rabbit-mq://localhost/out.queue&routingKey=out.queue&durable=true&autoAck=false&autoDelete=false&username=guest&password=guest")
.end();
I've verified that the specified exchanges are configured with the appropriate routing keys. I've noted that I'm able to consume in volume, but not able to produce to the out.queue.
The following are the only references to the RabbitMQProducer that would process the message.
09:10:28,119 DEBUG RabbitMQProducer[main]: - Starting producer: Producer[rabbit-mq://localhost/out.queue?autoAck=false&autoDelete=false&durable=true&password=xxxxxx&routingKey=out.queue&username=guest]
09:10:48,238 DEBUG RabbitMQProducer[Camel (camel-1) thread #11 - ShutdownTask]: - Stopping producer: Producer[rabbit-mq://localhost/out.queue?autoAck=false&autoDelete=false&durable=true&password=xxxxxx&routingKey=out.queue&username=guest]
I've spent time looking into the Camel unit tests for the RabbitMQ component, but I've found nothing of much use. Has anyone been able to get this to work?
Thanks.
I did it using the Spring DSL. Here's the URL that I used. Isn't the port number necessary in the Java DSL?
rabbitmq://localhost:5672/subscribeExchange?queue=subscribeQueue&durable=true&username=guest&password=guest&routingKey=subscribe
As per http://camel.apache.org/rabbitmq.html the port is optional.
Best
I came across the same issue, even though I'm trying this 5 years after the original question was asked. But I'm posting how I got it working in case anyone else faces the same issue.
The problem is that the RabbitMQ routing key doesn't get changed even though we add 'routingKey' to the URI. The trick was to add a header before sending the message out. If you log the message received and the message being sent out, you can clearly see the routing key stays the same.
Below is my code. It reads the message from 'receiveQueue' and sends it to 'sendQueue'.
@Value("${rabbit.mq.host}")
private String host;
@Value("${rabbit.mq.port}")
private int port;
@Value("${rabbit.mq.exchange}")
private String exchange;
@Value("${rabbit.mq.receive.queue}")
private String receiveQueue;
@Value("${rabbit.mq.send.queue}")
private String sendQueue;
public void configure() throws Exception {
String uriPattern = "rabbitmq://{0}:{1}/{2}?queue={3}&declare=false";
String fromUri = MessageFormat.format(uriPattern, host, port, exchange, receiveQueue);
String toUri = MessageFormat.format(uriPattern, host, port, exchange, sendQueue);
from(fromUri).to("log:Incoming?showAll=true&multiline=true").
unmarshal().json(JsonLibrary.Gson, Message.class).bean(MessageReceiver.class).to("direct:out");
from("direct:out").marshal().json(JsonLibrary.Gson).setHeader("rabbitmq.ROUTING_KEY",
constant(sendQueue)).to(toUri);
}