I have a route which looks like the following:
from("seda:in")
.routeId("aggregation")
.process(filterProcessor)
.aggregate(header("flag"), new MyAggregationStrategy())
.completionInterval(10000)
.multicast()
.to(sftpUris);
I'd like to be able to access the producers for each of the URIs in the to clause and check the status of the SFTP connection.
So far I haven't worked out a way of doing this for a single (non-multicast) producer, so solutions to that would be useful as well.
According to the Camel docs, you should look at the CamelFtpReplyCode header (and perhaps CamelFtpReplyString) to determine what happened with the request. By default, FTP errors do NOT raise an exception.
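For the single (non-multicast) case, a minimal sketch of such a check could look like the following; whether the SFTP producer actually populates these headers depends on your Camel version, and sftpUri is just a stand-in for one of the URIs from the question:
from("seda:in")
.process(filterProcessor)
.to(sftpUri)
.process(exchange -> {
    // Header names taken from the FTP component docs; verify the SFTP producer sets them in your version.
    Integer replyCode = exchange.getIn().getHeader("CamelFtpReplyCode", Integer.class);
    String replyString = exchange.getIn().getHeader("CamelFtpReplyString", String.class);
    if (replyCode != null && replyCode >= 400) {
        // handle the failed transfer here, e.g. log replyString or route to an error endpoint
        exchange.setProperty("sftpError", replyString);
    }
});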
Related
Using spring-camel, I have built a route that consumes from a JMS topic (with JMSReplyTo expected to be set for each input message), splits the message into smaller chunks, sends them to a REST processor, then aggregates the answers and should produce an output message to the destination pointed to by JMSReplyTo. Unfortunately, Camel implicitly utilises the JMSReplyTo destination in one of the intermediate steps (producing an unmarshalled POJO).
We have a functional requirement to adapt JMSReplyTo in order to provide a request-reply messaging service.
I am able to read the JMSReplyTo header before ending the route, and I am explicitly converting it to CamelJmsDestinationName, which successfully overrides the destination for the JMS component and produces the message on the output topic. I am not sure if this is the best approach, and the problem is that Camel still utilises the JMSReplyTo on its own.
My RouteBuilder configuration is as follows:
from("jms:topic:T.INPUT")
.process(requestProcessor)
.unmarshal().json(JsonLibrary.Jackson, MyRequest.class)
.split(messageSplitter)
.process(restProcessor)
.aggregate(messagesAggregator)
.unmarshal().json(JsonLibrary.Jackson, BulkResponses.class)
.process(responseProcessor)
.to("jms:topic:recipientTopic");
T.INPUT is the name of the input topic, while recipientTopic is just a placeholder that will be replaced by CamelJmsDestinationName.
I'm not keen on using CamelJmsDestinationName and a sort of mocked-up topic name in the route configuration, so I'm open to finding a better solution. It would be great if Camel used the JMSReplyTo automatically to produce the output message on the output topic.
Currently, the problem is that Camel produces an intermediate output on the JMSReplyTo topic, BUT the output is an unmarshalled MyRequest object, which results in an exception saying "ClassNotFoundException: (package name).MyRequest". That is understandable, since this class is only used in my internal processing; I don't want to produce it on the output topic. It seems like Camel implicitly uses the JMSReplyTo destination between the requestProcessor and messageSplitter steps... Why? What am I doing wrong? What are the best practices?
Use "disableReplyTo=true" in Endpoint. Camel will not try to use any reply option.
Refer: https://camel.apache.org/jms.html for more details
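Applied to the route in the question, that would look something like this (a minimal sketch; only the endpoint option is new, the rest is the question's route):
from("jms:topic:T.INPUT?disableReplyTo=true")
// With disableReplyTo=true Camel ignores the incoming JMSReplyTo header,
// so you stay in control of where the reply is produced.
.process(requestProcessor)
// ... rest of the route as before ...
.process(responseProcessor)
.to("jms:topic:recipientTopic");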
I have found the answer... this is absurdly easy but I haven't seen it anywhere in the documentation.
You just need to call .stop() to mark the route as completed, and Camel will send the body you produced in the last step as the reply to the destination configured in ${header.JMSReplyTo}. It's that simple.
So you can do:
from("jms:my-queue")
.unmarshall().json(JsonLibrary.Jsonb, InboundMessage.class)
.bean(SomeProcessingBean.class)
....
.log(LoggingLevel.INFO, "Sending reply to: " + simple("${header.JMSReplyTo}").getExpression().toString())
.marshall().json(JsonLibrary.Jsonb, ReplyMessage.class)
.stop();
And you will receive the reply.
I wonder why no one has found this before... there's nothing when I search the docs or here. Either I'm dumb or the docs are incomplete... and I'm not dumb, so.
So, I used concurrency of 50-100 in Spring JMS, allowing up to 200 max connections. Everything works as expected, but then I try to retrieve 100k messages from the queue; I mean there are 100k messages on my SQS queue and I am reading them through the normal Spring JMS approach.
@JmsListener(destination = "...") // destination omitted in the original post
public void process(String message) {
    count++;
    System.out.println(count);
    // code
}
I see all the logs in my console, but after around 17k messages it starts throwing exceptions,
something like: AWS SDK exception: port already in use.
Why do I see this exception and how do I get rid of it?
I tried looking on the internet for it but couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Acknowledge mode: CLIENT_ACKNOWLEDGE
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update: I looked into the problem and it seems that new sockets are being created until every socket is exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the above configuration with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local stack server that replicates Amazon SQS. Then comment out the @JmsListener annotation and use SoapUI load testing to call send-message and fire many messages; because the @JmsListener annotation is commented out, nothing will consume messages from the queue. Once you see that you have sent 40k messages, stop, uncomment the @JmsListener, and restart the server.
Update:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    // (assumed to live in a @Configuration class)
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setErrorHandler(Throwable::printStackTrace);
    factory.setConcurrency("50-100");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
Update:
SQSConnectionFactory connectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
Update:
Client configuration details:
Protocol: HTTP
Max connections: 200
Update:
I used the CachingConnectionFactory class, but I read on Stack Overflow and in the official documentation not to use CachingConnectionFactory together with DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
Update:
My goal is to reach 500 TPS, i.e. I should be able to consume that many messages per second. So far I have tried this method and it seems I can reach 100-200, but not more than that. Plus this approach is a blocker at high concurrency if you use it. If you have a better solution to achieve it, I am all ears.
Update:
I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement is a message consumption buffer, or "prefetch". This buffer is sometimes tunable via the number of messages or a buffer size in bytes.
The intention is to keep the consumer from going to the server for every single message it receives, instead pulling multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem: a slow consumer holds up consumption of the messages it has prefetched, which a faster consumer could otherwise have handled. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
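If that setter is not available in your version of the amazon-sqs-java-messaging-lib, the prefetch count can be configured on the ProviderConfiguration instead; a sketch, with the method name to be verified against the library version you use:
ProviderConfiguration providerConfiguration = new ProviderConfiguration()
    .withNumberOfMessagesToPrefetch(0);
SQSConnectionFactory sqsConnectionFactory =
    new SQSConnectionFactory(providerConfiguration, amazonSQSclient);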
Starvation on the Producer (i.e. via JmsTemplate)
It's very common for these JMS implementations to expect to be interfaced with the broker via some intermediary. These intermediaries cache and reuse connections, or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or some other mechanism on the Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to connect to the broker, it will attempt to open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options. The simplest is the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the connectionFactory properties to your liking here...
    return connectionFactory;
}
If you want something fancier as a JMS pooling solution (one that will pool Connections and MessageProducers for you, in addition to multiple Sessions), you can use the reasonably new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
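A minimal sketch of that variant, assuming the org.messaginghub:pooled-jms dependency is on the classpath (setter names should be checked against the PooledJMS version you pull in):
@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory =
        new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // JmsPoolConnectionFactory pools Connections and Sessions on top of the SQS factory.
    JmsPoolConnectionFactory poolingFactory = new JmsPoolConnectionFactory();
    poolingFactory.setConnectionFactory(sqsConnectionFactory);
    poolingFactory.setMaxConnections(10); // illustrative value, tune for your load
    return poolingFactory;
}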
I am trying to implement a process consisting of several web service calls, initiated by a JMS message read by Spring Integration. Since there are no transactions across these WS calls, I would like to keep track of how far my process has gone, so that steps that have already been carried out are skipped when message processing is retried.
Example steps:
Retrieve A (get A.id)
Create new B for A (using A.id, getting B.id)
Create new C for B (using B.id, getting C.id)
Now, if the first attempt fails in step 3, I have already created a B and know its id. So if I retry the message, it should skip the second step and not leave me with an incomplete B.
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
The way it works at the moment:
Message is read
Some exception is thrown
Message processing halts, and ActiveMQ places the message on DLQ
How I would like it to work:
Message is read
Some exception is thrown
The exception is handled, with the result of this handling being an extra header property added to the original message
ActiveMQ places the message on DLQ
One thing that might achieve this is the following:
Read the message
Start processing, wrapped in try-catch
On exception, get the extra information from the exception, create a new message based on the original one, add the extra info to a header, and send it directly to the DLQ
Swallow the exception so the original message disappears
This feels kinda hackish though, hopefully there is a more elegant solution.
It's hard to generalize without more information about your flow(s) but you could consider adding a custom request handler advice to decorate and/or re-route failed messages. See Adding Behavior to Endpoints.
As the other answer says, you can't modify the message but you can build a new one from it.
EDIT:
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
Ahhh... now I think I know what you are asking; no, you can't "decorate" the existing message; you can republish it with additional headers instead of throwing an exception.
You can republish in the advice, or in the error flow.
It might seem like a "hack" to you, but the JMS API provides no mechanism to do what you want.
From the Spring forum:
To place a new header into the MessageHeaders you should use MessageBuilder, because not only the headers but the entire Message is immutable.
return MessageBuilder.fromMessage(message).setHeader(updateflag, "ACK".equals(message.getHeaders().get("Lgg_Rid")) ? "CONF" : "FAIL").build();
In an asynchronous context, errors will go to an error channel: either one you configure yourself and indicate in the message headers with errorChannel, or a global error channel if none is specified. See the Spring Integration error handling documentation for more details.
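Putting those pieces together, a minimal sketch of an error-channel handler that republishes the failed message with an extra header; the header name, the error payload, and the dlqChannel wiring are illustrative assumptions, not part of the original setup:
@Component
public class FailedMessageRepublisher {

    // Assumed to be a channel backed by a JMS outbound adapter targeting the DLQ.
    @Autowired
    @Qualifier("dlqChannel")
    private MessageChannel dlqChannel;

    @ServiceActivator(inputChannel = "errorChannel")
    public void handleError(MessagingException ex) {
        Message<?> failed = ex.getFailedMessage();
        // Messages are immutable, so build a new one carrying the extra header.
        Message<?> decorated = MessageBuilder.fromMessage(failed)
                .setHeader("processedStep", "B_CREATED") // illustrative value
                .build();
        dlqChannel.send(decorated);
    }
}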
We are using Camel fluent builders to set up a series of complex routes, in which we are using dynamic routing using the RecipientList functionality.
We've encountered issues where in some cases, the recipient list contains a messaging endpoint that doesn't exist (for example, something like seda:notThere).
A simple example is something like this:
from("seda:SomeSource")....to("seda:notThere");
How can I configure the route so that if the exchange tries to route to an endpoint that doesn't already exist, an error is thrown?
I'm using Camel 2.9.x, and I've already experimented with the Dead Letter Channel and various Error Handler implementations, with (seemingly) no errors or warnings logged.
The only logging I see indicates that Camel is (attempting to) send to the endpoint which doesn't exist:
2013-07-03 16:07:08,030|main|DEBUG|o.a.c.p.SendProcessor|>>>> Endpoint[seda://notThere] Exchange[Message: x.y.Z#293b9fae]
Thanks in advance!
All endpoints behave differently in this case.
If you attempt to write to an FTP server that does not exist, you certainly get an error (connection refused or otherwise).
This is also true for a number of other endpoints.
SEDA queues get created if they do not exist, and the message will be left there. So your route actually sends to "notThere", and the message will stay there until the application restarts or someone starts to consume messages from seda:notThere. This is the way SEDA queues are designed. If you set the size of the SEDA queue with to("seda:notThere?size=100"), then if no one is reading (or reading slowly) you will get exceptions from message 101 onward.
If you need to be sure some route is consuming your messages, use "direct" instead of "seda". You can even have a middle layer that combines the staging features of seda with direct's guarantee that an active consumer exists (useful if the recipient list is built from, god forbid, user input):
from("whatever").recipentList( ... ); // "direct:ep1" work, "direct:ep2" throws exception
from("direct:ep1").to("seda:ep1");
from("seda:ep1").doRealStagedStuffHere();
Is it possible to manage connection timeouts or errors in a MessageDrivenBean?
You can make the factory retry connecting a certain number of times, but... is it possible to take some action each time a reconnection attempt is necessary? Is it possible to register an ExceptionListener on the MessageDrivenBean's connection somehow?
Thanks a lot.
In the end I wasn't able to do this, but I changed from jmsjra to JMSJCA, which fits my needs better. JMSJCA is included in the GlassFish ESB project.
You can always have some sort of error topic or queue that you post the exception to from your MDB, including the correlation ID in the error message so it can be matched with the original message if desired.
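A minimal sketch of that approach; the destination names and the shape of the error payload are illustrative assumptions:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/inQueue") // assumed JNDI name
})
public class MyMdb implements MessageListener {

    @Inject
    private JMSContext jmsContext;

    @Resource(lookup = "jms/errorQueue") // assumed JNDI name
    private Queue errorQueue;

    @Override
    public void onMessage(Message message) {
        try {
            // ... normal processing ...
        } catch (Exception e) {
            try {
                // Post the failure to an error queue, carrying the original
                // correlation ID so the error can be matched to the request.
                TextMessage errorMessage = jmsContext.createTextMessage(e.getMessage());
                errorMessage.setJMSCorrelationID(message.getJMSCorrelationID());
                jmsContext.createProducer().send(errorQueue, errorMessage);
            } catch (JMSException jmsEx) {
                // Last resort: let the container handle redelivery.
                throw new RuntimeException(jmsEx);
            }
        }
    }
}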