javax.jms.IllegalStateException: The Session is closed - java

I have had several problems with a Camel producer; each time I tried to solve one, I ran into another.
1) My first implementation created a new producer template every time we needed to communicate with an ActiveMQ topic. That caused excessive memory use and eventually crashed the server.
The solution for the memory problem was to stop() the producer template after each request. That fix corrected the memory issue but introduced a latency problem.
2) I read somewhere that it is not necessary to create a producer template each time. So, to fix the latency problem, I declared a single producer template in my class and reused it for every request. It seems to work fine: no memory leak, and the latency problem is fixed...
BUT, when we send multiple queries that each take a long time (about 20 seconds), it looks like we hit a timeout and the component crashes with something like «javax.jms.IllegalStateException: The Session is closed».
Is there a way to do multithreading? Is this caused by using the InOut exchange pattern? How does MAXIMUM_CACHE_POOL_SIZE work? Is my implementation right?
Here is a sample of my component's code:
public void process(Exchange exchange) throws Exception
{
    Message in = exchange.getIn();
    CamelContext camelContext = exchange.getContext();
    if (producerTemplate == null) {
        // Lazily create a single, shared producer template
        //camelContext.getProperties().put(Exchange.MAXIMUM_CACHE_POOL_SIZE, "50");
        producerTemplate = camelContext.createProducerTemplate();
    }
    ...
    // Request/reply (InOut) against the first topic
    result = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")), ExchangePattern.InOut, messageToSend).toString();
    ...
    // Request/reply (InOut) against the second topic, using the first result
    finalResult = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel2}}")), ExchangePattern.InOut, result).toString();
    ...
    in.setBody(finalResult);
}

Yes, it is because you use the InOut pattern.
Your route expects a response on the specified reply queue, which is never received, and therefore hits the default 20-second timeout.
Change the exchange pattern to InOnly to resolve your issue.
Apart from that, your posted code seems to be fine.
The MAXIMUM_CACHE_POOL_SIZE is used internally in Camel and thus does not affect the ActiveMQ endpoint settings.
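For illustration, a minimal sketch of the suggested change, reusing the shared producerTemplate and the endpoint placeholder from the question (with InOnly there is no reply body, so nothing is assigned from the call):
// Fire-and-forget send: no reply queue is used, so the 20-second reply timeout no longer applies
producerTemplate.sendBody(
        String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")),
        ExchangePattern.InOnly,
        messageToSend);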

Related

Apache Camel in Spring Boot - Queue polling consumer on request

I am struggling to find a fully fledged example of how to use Apache Camel in Spring Boot framework for the purpose of a polling consumer.
I have looked at this: https://camel.apache.org/manual/latest/polling-consumer.html as well as this: https://camel.apache.org/components/latest/timer-component.html but the code examples are not wide enough for me to understand what it is that I need to do to accomplish my task in Java.
I'm typically a C# developer, so a lot of these small references to things don't make sense.
I am seeking an example of the following to do in Java including all the imports and other dependencies that are required to get this to work.
What I am trying to do is the following:
A web request is made to an endpoint, which should trigger the start of a polling consumer.
The polling consumer needs to poll another web endpoint with a provided "ID" that needs to be sent to the consumer at the time it is triggered.
The polling consumer should poll every X seconds (let's say 5 seconds).
Once a specific successful response is received from the endpoint we are polling, the consumer should stop polling and send a message to another web endpoint.
I would like to know if this is possible, and if so, can you provide a small example of everything that is needed to achieve this (as the documentation from the Camel website is extremely sparse in terms of imports and class structure etc.)?
After discussions with some fellow Java colleagues, they have assured me that this use case is not one that Camel is designed for. This is the reason it was so difficult to find anything on the internet before I posted this question.
For those that are seeking this answer via Google, the best suggested approach is to use a different tool or just use standard java.
In my case, I ended up using a plain old Java thread to achieve what was required. Once the request is received, I simply start a new Runnable thread that handles the checking of the result from the other service, sleeps for X seconds, and terminates when the response is successful.
A simple example is below:
Runnable runner = new Runnable() {
    @Override
    public void run() {
        boolean cont = true;
        while (cont) {
            // keep polling until the other service reports success
            cont = getResponseFromServer();
            try {
                Thread.sleep(5000); // wait 5 seconds between polls
            } catch (InterruptedException e) {
                // we don't care about this, it just means this time it didn't sleep
            }
        }
    }
};
new Thread(runner).start();
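If you would rather not manage raw threads, a ScheduledExecutorService gives the same poll-every-5-seconds behaviour; below is a minimal sketch under the same assumptions as the snippet above (getResponseFromServer() returns false once the target endpoint reports success):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleWithFixedDelay(() -> {
    // stop polling as soon as the response is successful
    if (!getResponseFromServer()) {
        scheduler.shutdown();
    }
}, 0, 5, TimeUnit.SECONDS);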

Are there any tuning parameters to speed up my application which stalls when sending messages to a JMS queue

I'm trying to execute an application under (reasonable) load. What is happening under load is that when trying to place a message onto a queue, the application stalls for about 4 seconds before completing the send. The strange part is that immediately after doing this, the next message takes a matter of milliseconds to place onto the queue. The message is in fact the same message - so the message size isn't a factor.
The application is using Spring Boot 2.1.6, Apache Qpid 0.43.0 as the JMS/AMQP provider.
The message bus being used is Azure ServiceBus, but I have observed the same behaviour using Artemis.
On the Apache Qpid JmsConnectionFactory, I've tried fiddling with the "forceSyncSend" property.
I've tried using the Spring Boot CachingConnectionFactory to cache message producers only, and I have increased the default cache size from 1 to 20 without any success.
I've looked at the JmsTemplate parameters but can't find any parameters in regard to message producers (plenty with listeners but that's another story).
The code doing the sending is quite simple:
private void sendToQueue(Object message, String queueName) {
jmsTemplate.convertAndSend(queueName, message, (Message jmsMessage) -> {
jmsMessage.setStringProperty(OBJECT_TYPE_PARAMETER, message.getClass().getSimpleName());
return jmsMessage;
});
Is there anything obvious to try? Are there any tuning parameters to stop this stalling happening?
The load on the system is not trivial, but it is not excessive (it needs to go a lot higher than where it is at the moment!)
Any ideas?
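For reference, a minimal sketch of the producer-caching configuration described above (the broker URL is a placeholder and the bean wiring is an assumption; the cache size of 20 is the value mentioned in the question):
@Bean
public ConnectionFactory connectionFactory() {
    // Qpid JMS factory pointing at the AMQP broker (URL is a placeholder)
    JmsConnectionFactory qpidFactory = new JmsConnectionFactory("amqps://<broker-url>");

    // Cache sessions and MessageProducers so each send does not open a new session/producer
    CachingConnectionFactory cachingFactory = new CachingConnectionFactory(qpidFactory);
    cachingFactory.setCacheProducers(true);
    cachingFactory.setSessionCacheSize(20);
    return cachingFactory;
}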

Slow message consumption using AmazonSQSClient

So, I used a Spring JMS concurrency of 50-100, allowing max connections up to 200. Everything is working as expected, but the problem appears when I try to retrieve 100k messages from the queue, i.e. there are 100k messages on my SQS queue and I am reading them through the normal Spring JMS approach.
@JmsListener
public void process(String message) {
    count++;
    System.out.println(count);
    //code
}
I am seeing all the logs in my console, but after around 17k messages it starts throwing exceptions,
something like: AWS SDK exception: port already in use.
Why do I see this exception and how do I get rid of it?
I tried looking on the internet for it. Couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Acknowledge mode: CLIENT_ACKNOWLEDGE
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update: I looked into the problem and it seems that new sockets keep being created until every socket is exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the configuration above with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local stack server that replicates Amazon SQS. Comment out the @JmsListener, then use SoapUI load testing to call the send-message operation and fire many messages; because the @JmsListener annotation is commented out, nothing consumes from the queue. Once you see that you have sent 40k messages, stop, uncomment @JmsListener, and restart the server.
Update :
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setErrorHandler(Throwable::printStackTrace);
factory.setConcurrency("50-100");
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return factory;
Update :
SQSConnectionFactory connectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
Update :
Client configuration details :
Protocol : HTTP
Max connections : 200
Update :
I used the CachingConnectionFactory class, but I read on Stack Overflow and in the official documentation not to use CachingConnectionFactory together with DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
update
My goal is to get 500 TPS, i.e. I should be able to consume that many messages per second. I tried this method and it seems I can reach 100-200, but not more than that, and it becomes a blocker at high concurrency. If you have a better solution to achieve this, I am all ears.
Updated: I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement, is a message consumption buffer or "prefetch". This buffer is sometimes tunable via the number of messages or by a buffer size in bytes.
The intention is to prevent the consumer from making a round trip to the server for every single message; instead, it pulls multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem. A slow consumer holds up consumption of the messages it has prefetched, keeping them away from faster consumers. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
Starvation on the Producer (i.e. via JmsTemplate)
It is very common for these JMS implementations to expect to be interfaced to the broker via some intermediary. These intermediaries cache and reuse connections, or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or another mechanism on a Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to connect to the broker, it will attempt to open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options. The simplest is the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the connectionFactory properties to your liking here...
    return connectionFactory;
}
If you want something more fancy as a JMS pooling solution (which will pool Connections and MessageProducers for you in addition to multiple Sessions), you can use the reasonably new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
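A minimal sketch of that PooledJMS option, assuming the pooled-jms library's JmsPoolConnectionFactory and an illustrative max-connections value:
@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);

    // Pools Connections, Sessions and MessageProducers on top of the SQS factory
    JmsPoolConnectionFactory poolingFactory = new JmsPoolConnectionFactory();
    poolingFactory.setConnectionFactory(sqsConnectionFactory);
    poolingFactory.setMaxConnections(10); // assumed value, tune for your load
    return poolingFactory;
}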

camel redeliver : How do I retry processing a message from a certain point back

I wanted to apply redelivery and use the Dead Letter Channel, so I found a very useful Apache Camel FAQ entry and followed the steps it describes.
I added a little more logic; the code is available on GitHub.
Basically, as per the FAQ, we have to split the routes so that Camel can handle any exception in the sub-route.
Here is the code (Camel Route):
@Override
public void configure() throws Exception {
    errorHandler(deadLetterChannel("mock:error")
            .logExhaustedMessageBody(true)
            .logExhausted(true)
            .logRetryStackTrace(true)
            .maximumRedeliveries(3)
            .redeliveryDelay(1000)
            .backOffMultiplier(2)
            .useOriginalMessage()
            .useExponentialBackOff());

    from("direct:start")
            .log("Dump incoming body: " + body())
            .to("direct:sub")
            .end();

    from("direct:sub")
            .errorHandler(noErrorHandler())
            .process(new SubRouteProcessor())
            .log("Dump incoming body: " + body())
            .process(new NewSubRouteProcessor())
            .transform(body().append("Modified Data !"))
            .to("mock:result");
}
I wrote a simple unit test, to trigger the exception, hence the redeliver code is executed.
Now the difficulty/problem: the expected behavior is to do only 3 redelivery attempts, with a 1000 ms delay and exponential back-off.
However, the test runs forever, repeating the redelivery with an exponential delay. If I remove/comment out the two processor calls, the code runs fine: on exception, it does only 3 retries.
Could you please help me understand:
1) What is wrong in this code/route?
2) What is causing the code to run forever?
3) Why does removing these processors make it work, with retries happening only the specified number of times?
I just want that whenever "direct:sub" throws an exception, control goes back to "direct:start" and the message is retried from a certain point.
Thank you !!
I checked out your code and simply by changing getOut to getIn in both of your processors, the test passes.
Edit: My suspicion from the beginning was that your processors were somehow overriding the headers Camel set in order to do a certain amount of redelivery. In your case:
CamelRedelivered=true, CamelRedeliveryCounter=1, CamelRedeliveryMaxCounter=3
The header CamelRedeliveryCounter was not incremented when you were setting the body on the getOut message. In my mind, it should be, and this is a bug; but maybe it is working as designed? Maybe @claus-ibsen could help with that.
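For illustration, a minimal sketch of the corrected processor (the class name is taken from the route above; the body manipulation is a placeholder, the point being that the change is applied to getIn rather than getOut):
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class SubRouteProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Modify the In message in place, so the exchange Camel uses for
        // redelivery (and its CamelRedeliveryCounter header) stays intact
        String body = exchange.getIn().getBody(String.class);
        exchange.getIn().setBody(body + " processed");
    }
}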
Hope this helps.
R.

How can I invoke sendAndReceive when a flow contains different type of channels?

I have spent a good few hours reading about Spring Integration, and today I started experimenting with the framework. There are aspects of how it works that I have trouble understanding despite all my reading. I hope somebody here can put me back on track.
I have the following channel and endpoint defined:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
ref="defaultOrderService"
method="placeOrder"/>
Since the channel is a DirectChannel I expect everything to happen within a single thread and get a return value at the end.
The placeOrder method look as follows:
@Override
public Order placeOrder(Order order) {
    return order;
}
In my main method I have:
MessageChannel input = context.getBean("orderSource", MessageChannel.class);
Message<Order> message = MessageBuilder.withPayload(new Order(123)).build();
MessagingTemplate messenger = new MessagingTemplate(input);
Message<?> result = messenger.sendAndReceive(message);
Object found = result.getPayload();
And this all works like a charm. The found object is the order the service activator sends back.
My problem starts when I want to notify a set of subscribers that the order was placed. For simplicity, let's do this synchronously, like this:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
output-channel="savedOrders"
ref="defaultOrderService"
method="validateOrder"/>
<in:publish-subscribe-channel id="savedOrders"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
The question now is what should the input channel expect in return when I invoke sendAndReceive?
My current code blocks and I never reach the end of the main thread.
How can I make sure I receive a reply containing the result of the service activator as it passed it to all subscribers?
Also, I am really curious about what a given channel can expect in terms of return values when there are asynchronous channels in the flow. I'd like to get the result at the end of a transaction and before a new thread is spawned, but I don't know how to do that.
Any thoughts, advice or guidance?
Presumably, your "notify" methods return null. If that's the case, there's no "reply" sent to the MessagingTemplate.
Make the final one return the order, or add a <bridge/> to nowhere as a fourth subscriber to the pub-sub channel.
A bridge to nowhere is simply a bridge with no output channel. When a message arrives at an endpoint that produces a reply, and there is no output-channel, the message's replyChannel header is used to route the reply to the originator.
It works with async channels too, but I'd need to understand your requirements there before I can provide guidance.
Also, consider using a Messaging Gateway on the calling side instead of building a message yourself and using the MessagingTemplate. Rather than exposing your caller to the messaging infrastructure, the framework will create a proxy for you that will take care of all that and you just interact with the POJI.
I spent some more time reading and discovered that this is all a matter of configuring the reply channel, either in the message or in the gateway. Using a bridge, just as Gary Russell suggested, did the trick for me.
This is my code, now working:
<in:channel id="arrivals"/>
<in:service-activator input-channel="arrivals"
output-channel="validated"
ref="defaultOrderService"
method="validateOrder"/>
<in:channel id="validated"/>
<in:service-activator input-channel="validated"
output-channel="persisted"
ref="defaultOrderService"
method="placeOrder"/>
<in:publish-subscribe-channel id="persisted"/>
<in:channel id="replyChannel"/>
<in:bridge input-channel="persisted" output-channel="replyChannel"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
<in:gateway id="orderService"
service-interface="codemasters.services.OrderService"
default-request-channel="arrivals"
default-reply-channel="replyChannel"/>
And using a gateway, this all looks much cooler now:
OrderService service = context.getBean("orderService", OrderService.class);
Order result = service.validateOrder(new Order(4321));
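For completeness, a minimal sketch of what the service-interface referenced by the gateway could look like (the package comes from the gateway declaration above; the single method is an assumption based on the call shown):
package codemasters.services;

public interface OrderService {
    // The gateway proxy sends the Order to the default-request-channel and
    // blocks until a reply arrives on the default-reply-channel
    Order validateOrder(Order order);
}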
