Enabling connection pooling in Apache Camel Kafka component - Java

I have an apache camel route which is calling apache kafka topic (producer) just like below in my spring boot application (deployed on Tomcat server):
from("timer://foo?period=1000").to("kafka:myTopic?brokers=localhost:9092");
This spring boot app is a rest API that is supposed to get around 300 TPS.
Q1) I know that a single Tomcat server thread serves each request coming to my Spring Boot app. Will the same thread be used in the above line of code when Apache Camel invokes myTopic? Or does Apache Camel use some connection pooling internally, just like RestTemplate?
Q2) Because the TPS will increase to 500 in the near future, does it make sense to introduce pooling for the above line of code? I believe that with connection pooling my application's performance would increase; however, I am not able to find the code that would enable connection pooling for this line.
If anyone has any idea, please let me know. Note that I am not looking for parallel processing, so seda or multicast in Camel is not an option. I have only a single call to the Kafka topic, as shown above, and am just looking for how to enable connection pooling for it.
Thanks.
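For what it's worth, the Kafka Java client already keeps long-lived TCP connections to the brokers inside a single thread-safe KafkaProducer, which camel-kafka creates once per endpoint, so a classic connection pool usually isn't needed on the producer side. If the goal is more concurrency, camel-kafka exposes a worker pool for asynchronous send callbacks instead; a hedged sketch using the Spring Boot starter (the `camel.component.kafka.worker-pool-*` relaxed-binding names are assumed here) might look like:

```properties
# application.properties (sketch; property names assume the camel-kafka Spring Boot starter)
camel.component.kafka.brokers=localhost:9092
# worker pool used for handling asynchronous send callbacks
camel.component.kafka.worker-pool-core-size=10
camel.component.kafka.worker-pool-max-size=20
```

These tune how many threads continue routing after the producer's async send, not the number of broker connections.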

Related

Apache Camel doesn't use all the dynamic queues created

I'm using Apache Camel to consume from IBM MQ via the JMS component. Everything works fine, but during performance testing the API creates a lot of dynamic queues and uses each one only once. I've tried a lot of properties to solve this problem, with no luck so far. My API uses the InOut pattern, so the responses are queued in dynamic reply queues; for example, the API creates 50 dynamic queues but only uses 3 of them.
Here are the properties I tried, which didn't work for me:
-maxConcurrentConsumers
-concurrentConsumers
-threads
I finally found a solution. This is my route producing to MQ:
.setHeader("CamelJmsDestinationName",
constant("queue:///"+queue+"?targetClient=1"))
.to("jms://queue:" + queue
+"?exchangePattern=InOut"
+"&replyToType=Temporary"
+"&requestTimeout=10s"
+"&useMessageIDAsCorrelationID=true"
+"&replyToConcurrentConsumers=40"
+"&replyToMaxConcurrentConsumers=90"
+"&cacheLevelName=CACHE_CONSUMER")
.id("idJms")
and these are the properties used to connect to MQ:
ibm.mq.queueManager=${MQ_QUEUE_MANAGER}
ibm.mq.channel=${MQ_CHANNEL}
ibm.mq.connName=${MQ_HOST_NAME}
ibm.mq.user=${MQ_USER_NAME}
ibm.mq.additionalProperties.WMQ_SHARE_CONV_ALLOWED_YES=${MQ_SHARECNV}
ibm.mq.defaultReconnect=${MQ_RECONNECT}
# Config SSL
ibm.mq.ssl-f-i-p-s-required=false
ibm.mq.user-authentication-m-q-c-s-p=${MQ_AUTHENTICATION_MQCSP:false}
ibm.mq.tempModel=MQMODEL
The issue was in the MQ model: the model queue has to be shared if you are using the InOut pattern, because the concurrent consumers create their dynamic reply queues from that model.
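If it helps, a shared temporary-dynamic model queue can be defined in MQSC roughly like this (the queue name is just an example; the `ibm.mq.tempModel` property above would then point at it):

```
DEFINE QMODEL('MQMODEL') DEFTYPE(TEMPDYN) SHARE REPLACE
```

With SHARE set, multiple concurrent reply consumers can read from the dynamic queues built from this model instead of each one forcing a new, single-use queue.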

Performance settings for ActiveMQ producer using Apache Camel in Spring boot framework

We have a spring boot application and we are using apache camel as a framework for message processing. We are trying to best optimize our application settings to make the enqueue of messages on the ActiveMQ queue fast which is received by the Logstash on the other end of the queue as consumers.
The documentation is scattered at many places and there are too many configurations available.
For example, the camel link for Spring Boot specifies 102 options, and the activemq Apache Camel link details even more.
This is what we have currently configured:
Application.properties:
################################################
# Spring Active MQ
################################################
spring.activemq.broker-url=tcp://localhost:61616
spring.activemq.packages.trust-all=true
spring.activemq.user=admin
spring.activemq.password=admin
Apache Camel
.to("activemq:queue:dataQueue?messageConverter=#queueMessageConverter");
Problem:
1 - We suspect that we have to use a pooled connection factory and not the default Spring JmsTemplate bean, which is somehow picked up automatically.
2 - We also want the process to be asynchronous: we just want to put the message on the queue and don't want to wait for any ACK from ActiveMQ or do any retry.
3 - We want to wait and retry only if the queue is full.
4 - Where should we set the size limits for ActiveMQ? Also, ActiveMQ is putting messages in the dead letter queue when no consumer is available; we want to override that behaviour and keep the messages in the queue. (Does this have to be configured in ActiveMQ and not in our app/Apache Camel?)
Update
Here is how we solved it after some more investigation and based on the feedback so far. Note: this does not involve retrying; for that we will try the option suggested in the answer.
For Seda queues:
producer:
.to("seda:somequeue?waitForTaskToComplete=Never");
consumer:
.from("seda:somequeue?concurrentConsumers=20");
Active MQ:
.to("activemq:queue:dataQueue?disableReplyTo=true");
Application.Properties:
#Enable poolconnection factory
spring.activemq.pool.enabled=true
spring.activemq.pool.blockIfFull=true
spring.activemq.pool.max-connections=50
Yes, you need to use a PooledConnectionFactory, especially with Camel + Spring Boot, or look at using the camel-sjms component. The culprit is Spring's JmsTemplate: super high latency.
Send NON_PERSISTENT and AUTO_ACK, and also turn on sendAsync on the connection factory.
You need to catch javax.jms.ResourceAllocationException in your route to do retries when Producer Flow Control kicks in (i.e. the queue or broker is full).
ActiveMQ does sizing based on bytes, not message count. See the SystemUsage settings in the Producer Flow Control docs and the Per-Destination Policies for limiting queue size in bytes.
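For points 3 and 4 above, a rough activemq.xml sketch (this lives on the broker, not in the Spring Boot app; the limits and queue name are illustrative only) that caps sizes in bytes and makes the broker fail the send, so the producer sees javax.jms.ResourceAllocationException instead of blocking, could look like:

```xml
<!-- inside <broker> in activemq.xml; values are examples only -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- per-destination byte limit for the data queue -->
      <policyEntry queue="dataQueue" memoryLimit="64mb" producerFlowControl="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
<systemUsage>
  <!-- fail the send instead of blocking the producer when limits are hit -->
  <systemUsage sendFailIfNoSpace="true">
    <memoryUsage>
      <memoryUsage limit="512mb"/>
    </memoryUsage>
  </systemUsage>
</systemUsage>
```

With sendFailIfNoSpace the exception propagates to the Camel route, where an onException handler can implement the retry.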

Continue Spring Kafka Startup even on Kafka Connection Failure

Is there a configuration I can use to instruct Spring to continue on startup and initialize the Beans even if Kafka connection failed?
I am using Spring Framework 5.2.3 and Spring Kafka 2.5.3.RELEASE.
If you need the Kafka beans for your application to work in every use case, then continuing with startup when there is no Kafka connection makes no sense: your application will not be able to do anything without Kafka.
But if some parts of your application do not need Kafka and you would like to use only those parts, then you can either mark the Kafka-related beans as lazy or make all beans lazy by default. In that case Spring will create beans only when they are actually needed, and even if no Kafka connection is available, the parts of your app that do not need Kafka will work.
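As a sketch of the second option, Spring Boot 2.2+ can defer all bean creation with a single property (alternatively, annotate only the Kafka-related bean definitions with @Lazy):

```properties
# application.properties: create beans only on first use,
# so a broken Kafka connection does not fail startup
spring.main.lazy-initialization=true
```

The trade-off is that misconfigured beans now fail at first use rather than at startup.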

how to make persistent JMS messages with java spring boot application?

I am trying to make a queue with ActiveMQ and Spring Boot using this link, and it looks fine. What I am unable to do is make this queue persistent after the application goes down. I think that the SimpleJmsListenerContainerFactory should be durable to achieve that, but when I set factory.setSubscriptionDurable(true) and factory.setClientId("someid") I am unable to receive messages any more. I would be grateful for any suggestions.
I guess you are embedding the broker in your application. While this is ok for integration tests and proof of concepts, you should consider having a broker somewhere in your infrastructure and connect to it. If you choose that, refer to the ActiveMQ documentation and you should be fine.
If you insist on embedding it, you need to provide a brokerUrl that enables message persistence.
Having said that, it looks like you misunderstand durable subscriber and message persistence. The latter can be achieved by having a broker that actually stores the content of the queue somewhere so that if the broker is stopped and restarted, it can restore the content of its queue. The former is to be able to receive a message even if the listener is not active at a period of time.
You can enable persistence of messages using ActiveMQConnectionFactory.
As mentioned in the Spring Boot link you provided, this ActiveMQConnectionFactory gets created automatically by Spring Boot, so you can declare this bean manually in your application configuration and set various properties on it as well.
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
Here is the link http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
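In a Spring Boot setup the same idea can also be expressed in application.properties; a sketch (assuming the standard spring.activemq.* and spring.jms.* keys) might be:

```properties
# point at a persistent broker instead of the default in-memory one
spring.activemq.broker-url=vm://localhost?broker.persistent=true
# ask the JmsTemplate to send PERSISTENT messages (this is the JMS default)
spring.jms.template.delivery-mode=persistent
```

Message persistence is ultimately a broker-side concern: the broker must have a persistence store configured for queued messages to survive a restart.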

Does Apache Camel support own connection pool for JMS from the box?

I have one question regarding JMS in Camel.
So I'm using a JMS provider from some firm, but this JMS implementation does not provide a pooled connection factory.
Does Camel have a default pooled connection implementation?
Or does it do something trivial like:
1) Open connection
2) Open Session
3) Read/Write message
4) Close Session
5) Close connection
Because if I believe my logs, Camel works like in the second case.
Thanks.
Camel pretty much uses JmsTemplate (from Spring Framework) for sending messages.
ActiveMQ's thoughts on JmsTemplate
Essentially, you are right for the "producing" scenario, unless the underlying JMS provider features a pooling connection factory. This is usually the case if you run Spring or Camel inside an app server.
If you set up something like
from("jms:queue:QUEUE.IN").to("somewhere:over/the/rainbow");
Then one or more ongoing consumers will be active, not destroying the session for each message (only committing the message if you set up transactions). There is also a possibility to pool the response listener for JMS request/response. Refer to camel.apache.org/jms for more info.
But you are right: if you have a remote (non-pooling) JMS provider and fire off frequent outgoing messages from Camel, this could be somewhat of a performance issue.
Use Spring's CachingConnectionFactory. By the way, which JMS provider do you use?
http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/jms/connection/CachingConnectionFactory.html
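To make the caching/pooling idea concrete, here is a minimal, hand-rolled object pool in plain JDK Java. It is a sketch of what CachingConnectionFactory does conceptually (hold on to expensive resources and hand them back out instead of re-creating them per send), not the Spring implementation itself; the class and method names are made up for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal generic object pool: create up to maxSize resources lazily,
// hand them out on borrow(), and take them back on release()
// instead of closing them.
class SimplePool<T> {
    private final BlockingQueue<T> idle;
    private final Supplier<T> factory;
    private final int maxSize;
    private int created = 0;

    SimplePool(int maxSize, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(maxSize);
        this.factory = factory;
        this.maxSize = maxSize;
    }

    T borrow() throws InterruptedException {
        T t = idle.poll();           // reuse an idle resource if one exists
        if (t != null) return t;
        synchronized (this) {
            if (created < maxSize) { // lazily create up to maxSize resources
                created++;
                return factory.get();
            }
        }
        return idle.take();          // pool exhausted: wait for a release
    }

    void release(T t) {
        idle.offer(t);               // return to the pool instead of closing
    }

    synchronized int createdCount() {
        return created;
    }
}
```

A real JMS pool additionally validates connections and caches sessions per connection; this sketch only shows the borrow/release cycle that avoids the open/close-per-message pattern listed in the question.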
