I am using Apache ActiveMQ and have a common consumer for multiple operations. Each message takes around 15-20 minutes of code execution to process, and I want that, if another message is produced in the meantime, both messages are consumed in parallel, even though there is only one consumer.
In the application context where you have defined the DefaultMessageListenerContainer bean, you can add the "concurrentConsumers" property and set the number of concurrent consumers you want listening to your message queue.
Below is an example:
<bean id="jmsMailContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers">
<value>100</value>
</property>
<property name="connectionFactory" ref="connectionFactory"/>
<property name="destination" ref="mailDestination"/>
<property name="messageListener" ref="jmsMailConsumer"/>
</bean>
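If you don't want 100 consumers sitting idle, note that the container can also scale dynamically between a lower and an upper bound. Below is a minimal Java sketch of the same container (bean references follow the XML above; the concrete numbers are just an assumption):

// Programmatic equivalent of the XML above, with dynamic scaling.
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory); // same refs as in the XML
container.setDestination(mailDestination);
container.setMessageListener(jmsMailConsumer);
container.setConcurrentConsumers(5);      // always keep 5 consumers running
container.setMaxConcurrentConsumers(100); // scale up towards 100 under load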
I am trying to launch an asynchronous Spring Batch job from within another job. Say Job-1 should complete and Job-2 should then be executed. The problem is that Job-1 waits until Job-2 is completed, which I don't want. I have used JobStep as well, but it runs synchronously and is not helpful. Can someone show me how to do this asynchronously, so that Job-1 does not wait until Job-2 is completed?
Sample XML snippet below:
<bean id="taskExecutorAsync" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="jobLauncherAsync" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="taskExecutorAsync" />
</bean>
<bean id="CreationProcess" class="test.CreationProcess">
<property name="jobLauncher" ref="jobLauncherAsync" />
<property name="jobRepository" ref="jobRepository" />
<property name="jobExplorer" ref="jobExplorer" />
</bean>
Thanks
You could use a SimpleAsyncTaskExecutor to avoid blocking.
I tried creating a separate thread that returns immediately while the new thread updates the details, but I was still unable to launch a new asynchronous Spring Batch job from within another job.
In short, you can't with the JobStep. The reason is that a Job is a state machine, with each Step serving as a state. In order for the Job to transition to the next state (i.e. complete, in your use case), the current state (your child job) needs to complete.
You can launch jobs from other jobs, but to do so, you'll need to write a Tasklet to launch the job on a new thread (using a TaskExecutor) and return immediately.
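A minimal sketch of such a Tasklet (class and property names are my own; it assumes a JobLauncher backed by a SimpleAsyncTaskExecutor, like the jobLauncherAsync bean above):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class LaunchChildJobTasklet implements Tasklet {

    private JobLauncher asyncJobLauncher; // backed by a SimpleAsyncTaskExecutor
    private Job childJob;

    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addLong("launch.time", System.currentTimeMillis()) // unique job instance
                .toJobParameters();
        // With an async TaskExecutor, run() returns as soon as the job is handed off.
        asyncJobLauncher.run(childJob, params);
        return RepeatStatus.FINISHED; // the parent job's step completes immediately
    }

    public void setAsyncJobLauncher(JobLauncher asyncJobLauncher) { this.asyncJobLauncher = asyncJobLauncher; }
    public void setChildJob(Job childJob) { this.childJob = childJob; }
}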
I am using Spring Integration (2.2.0) with WebSphere (8.0.0.x), in order to send messages via JMS (Tibco EMS).
Communication between components is working fine, but we have observed huge latencies between the messaging hops. These are in line with what we see in the EMS logs:
2014-09-30 06:04:19.940 [user#host]: Destroyed consumer (connid=19202559, sessid=28728543, consid=328585032) on queue 'test.queue3.request'
2014-09-30 06:04:19.969 [user#host]: Created consumer (connid=19202564, sessid=28728551, consid=328585054) on queue 'test.queue2.request'
2014-09-30 06:04:20.668 [user#host]: Destroyed consumer (connid=19202562, sessid=28728549, consid=328585048) on queue 'test.queue1.request'
2014-09-30 06:04:20.733 [user#host]: Created consumer (connid=19202567, sessid=28728555, consid=328585071) on queue 'test.queue5.request'
2014-09-30 06:04:20.850 [user#host]: Destroyed consumer (connid=19202563, sessid=28728550, consid=328585051) on queue 'test.queue4.request'
2014-09-30 06:04:21.001 [user#host]: Destroyed consumer (connid=19202564, sessid=28728551, consid=328585054) on queue 'test.queue2.request'
2014-09-30 06:04:21.701 [user#host]: Created consumer (connid=19202571, sessid=28728561, consid=328585093) on queue 'test.queue3.request'
2014-09-30 06:04:21.762 [user#host]: Destroyed consumer (connid=19202567, sessid=28728555, consid=328585071) on queue 'test.queue5.request'
Apparently, consumers are constantly being destroyed and re-created. This is not only bad for EMS, it also kills latency, as messages are not delivered until the consumer is back online.
This is how the consumers are defined:
<jee:jndi-lookup id="rawConnectionFactory" jndi-name="jms/QueueCF"/>
<bean id="jmsDestinationResolver"
class="org.springframework.jms.support.destination.JndiDestinationResolver"/>
<bean id="connectionFactory"
class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="rawConnectionFactory"
p:username="${jms.internal.username}"
p:password="${jms.internal.password}"/>
<bean id="taskExecutor"
class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor"
p:workManagerName="wm/mc"
p:resourceRef="false"/>
<bean id="transactionManager"
class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>
<bean id="adp1Container"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:taskExecutor-ref="taskExecutor"
p:destinationName="requestQueue1" p:connectionFactory-ref="connectionFactory"
p:destinationResolver-ref="jmsDestinationResolver"
p:transactionManager-ref="transactionManager" />
<jms:inbound-gateway id="jmsInAdapter1"
request-channel="adapter1logic" container="adp1Container" />
<channel id="adapter1logic" />
Update:
This behaviour is related to the use of the transaction manager.
If we specify the connection to the EMS server directly in Spring (providing host, port, user and password there), consumers are still constantly recreated, but for some reason these recreations do not affect the end-to-end latencies. Connections are apparently managed better by Spring than by WAS.
How can WAS be configured so that consumers start up as quickly as they do in Spring?
If, along with the previous change, I also remove the reference to the transaction manager in the DefaultMessageListenerContainer, consumers stop being destroyed and re-created altogether.
What could be the issue with WebSphere's transaction manager? Why are consumers destroyed and re-created when WAS's transaction manager is in use? Is there any configuration that can be adjusted?
You should not see consumers being recycled like that unless your listener is throwing an exception. Container consumers are long-lived by default. I suggest you turn on DEBUG (or even TRACE) logging for the container to figure out what's going on.
I suggest wrapping your connection factory in a CachingConnectionFactory decorator and configuring the session caching strategy:
<bean id="cacheConnFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
<property name="cacheProducers" value="true" />
<property name="cacheConsumers" value="true" />
<property name="sessionCacheSize" value="10" />
</bean>
Use the above connection factory in your DMLC along with cacheLevel settings as follows:
<bean id="adp1Container"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:taskExecutor-ref="taskExecutor"
p:destinationName="requestQueue1"
p:connectionFactory-ref="cacheConnFactory"
p:destinationResolver-ref="jmsDestinationResolver"
p:transactionManager-ref="transactionManager">
<property name="sessionTransacted" value="true" />
<property name="cacheLevel" value="3" /> <!-- Consumer Level -->
</bean>
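As a side note, instead of the magic number 3 you can use the container's named constant (or the cacheLevelName property in XML), which documents the intent. A small Java sketch:

// Same setting expressed with the named constant rather than "3".
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
// XML alternative: <property name="cacheLevelName" value="CACHE_CONSUMER"/>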
We have a web application that uses C3P0 to pool our connections. We inject C3P0 as a data source into JdbcTemplate. You can see how we do this here:
<bean id="dataSourceDev" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="driverClass" value="${databasedev.driver}" />
<property name="jdbcUrl" value="${databasedev.url}"/>
<property name="user" value="${databasedev.username}"/>
<property name="password" value="${databasedev.password}"/>
<property name="initialPoolSize" value="5" />
<property name="minPoolSize" value="5" />
<property name="maxPoolSize" value="1000" />
<property name="acquireIncrement" value="5" />
<property name="maxStatements" value="1000" />
<property name="maxStatementsPerConnection" value="1000"/>
<property name="maxIdleTime" value="10800"/> <!-- 3 hours -->
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<constructor-arg>
<ref bean="dataSourceDev" />
</constructor-arg>
</bean>
<bean id="someDaoBean" class="com.gedi.platform.dao.SomeDaoClass">
<property name="jdbcTemplate" ref="jdbcTemplate" />
</bean>
<bean id="someResourceClass" class="com.gedi.platform.SomeResourceClass">
<property name="someDao" ref="someDaoBean" />
</bean>
You can see that it's a Java EE web application that uses Jetty as its application server. My question is: how does Jetty instantiate our beans, and how does that affect connection pooling? If we have dozens of users using the web site at different times, will all of these users be served from the same connection pool? Or is there one connection pool per client, where every HTTP client gets new instances of the Resource, DAO, JdbcTemplate and C3P0 beans?
Am I being clear? What I want to have is one connection pool for all HTTP requests, regardless of whether they come from web browsers originating in Boston or New Zealand. That way, the connection pool is exerting its maximum effects. However, if a new connection pool is instantiated for every HTTP client, then the pooling doesn't end up being much of an improvement.
Edit
An important tidbit - We use the Jersey reference implementation of JAX-RS to produce a RESTful interface. So our servlet dispatches requests through Jersey which finds a suitable Resource class/method to handle them. I wonder whether Jersey re-instantiates these classes on every request, or keeps one instance of them at all times.
Neither Jersey nor Jetty is relevant here; Spring is what matters. In Spring, every bean (like your dataSourceDev, jdbcTemplate and someDaoBean) is a singleton by default. That means that when the Spring application context starts, it creates exactly one instance of each of them.
So no matter what uses your DataSource (web request, background job, etc.), the same instance, and thus the same connection pool, is used. You are right that if a connection pool were created for each request, it would not be much of an improvement. Actually, it would be much, much slower.
But in your case (and this is how 99% of web applications work) all code requiring database access competes for and reuses the same connections (or waits if none are available). BTW, make sure your database can actually handle 1000 concurrent connections.
Spring creates the beans and caches them, so unless you have declared a bean as prototype-scoped (which creates a new instance for each request), all beans are singletons by default. Jetty doesn't interfere.
When a request comes in, the DispatcherServlet catches the request and hands it off to the appropriate handler. The handler is the same bean if it has not been declared as a prototype bean.
You understood the connection pool correctly. This is exactly why the concept was created. It doesn't matter where the request came from, the maximum number of connections to the database at any point in time will be the one you have defined in the maxPoolSize property.
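To make the singleton point concrete, here is a small sketch (the context lookup and table name are made up): every thread resolves the same JdbcTemplate bean and therefore borrows from the same C3P0 pool.

// Both threads get the same singleton JdbcTemplate, hence the same pool.
JdbcTemplate template = context.getBean("jdbcTemplate", JdbcTemplate.class);
Runnable work = () -> {
    // queryForObject borrows a pooled connection and returns it when done.
    Integer count = template.queryForObject("SELECT COUNT(*) FROM some_table", Integer.class);
    System.out.println("rows: " + count);
};
new Thread(work).start();
new Thread(work).start();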
Is there any way to know, inside the onMessage method, which queue the MessageListener is listening to?
My Spring config (part of it):
<bean id="abstractMessageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
<property name="connectionFactory" ref="connectionFactory" />
<property name="maxConcurrentConsumers" value="5"/>
<property name="receiveTimeout" value="10000" />
</bean>
<bean class="org.springframework.jms.listener.DefaultMessageListenerContainer" parent="abstractMessageListenerContainer">
<property name="destinationName" value="MY.QUEUE" />
<property name="messageListener" ref="myMessageListener" />
</bean>
<bean id="myMessageListener" class="my.package.structure.ListenerClass"></bean>
My Listener Class:
public class ListenerClass implements MessageListener {
    public void onMessage(Message msg) {
        // how do I find out which queue the message was written to?
    }
}
Is there any out-of-the-box solution, or any custom way to get the queue/destination name?
I need the queue name for subsequent batch processing...
Easy, in trivial cases at least:
msg.getJMSDestination() will give you the destination as a javax.jms.Destination object. Typically .toString() returns something like: queue://MYQUEUENAME
However, in some JMS implementations there might be multi-hop queues, such as a static pub/sub setup in WebSphere MQ, where you write your message to one queue and it bounces around a route to end up in a completely different queue. You might also have an ESB with logic in the middle that routes the message. In such cases, you will need to think twice before relying too much on the JMSDestination attribute. Otherwise, go ahead.
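For the trivial case, a sketch of the listener from the question (the cast assumes the destination really is a queue):

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

public class ListenerClass implements MessageListener {
    public void onMessage(Message msg) {
        try {
            Destination dest = msg.getJMSDestination();
            if (dest instanceof Queue) {
                String queueName = ((Queue) dest).getQueueName(); // e.g. "MY.QUEUE"
                // pass queueName on to the subsequent batch processing
            }
        } catch (JMSException e) {
            throw new RuntimeException("Could not read JMSDestination", e);
        }
    }
}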
I'm fairly new to Spring JMS, and I've found lots of documentation and examples at the Spring site and elsewhere, but my use case doesn't seem to be described anywhere, or at least in a way I can understand. I hope you might be able to help.
I would like to create a publisher of a topic and several durable subscribers to that topic. I'm working on the first subscriber now, and it is intended to run hourly (on a timer) and drain the topic of messages and process them all at once (i.e. to send an email summarizing all messages).
I do not know how to configure this setup in Spring, although I feel like this should be easy. Advice would be tremendously helpful.
My plan, such as it is, was to have the timer invoke the "processBatch" method, which would call receiveAndConvert() in a loop until it timed out, building up its list of messages.
This doesn't seem to work, though, because the consumer isn't really subscribed to the topic. Certainly not before it's run, and potentially not afterward.
How can I configure this using Spring and/or direct ActiveMQ?
I'm not sure whether the XML I ended up with is a useful starting point for this discussion, but I'll provide it in case it's helpful:
<beans>
    <!-- some unrelated beans -->

    <!-- my ActiveMQ connection factory -->
    <bean id="mqConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
        <property name="connectionFactory">
            <bean class="org.apache.activemq.ActiveMQConnectionFactory">
                <property name="brokerURL" value="vm://broker"/>
            </bean>
        </property>
    </bean>

    <!-- my topic -->
    <amq:topic id="completionsTopic" physicalName="completions.topic"/>

    <!-- my subscriber -->
    <bean id="emailer" class="com.j128.Emailer">
        <property name="jmsTemplate">
            <bean class="org.springframework.jms.core.JmsTemplate">
                <property name="connectionFactory" ref="mqConnectionFactory"/>
                <property name="defaultDestination" ref="completionsTopic"/>
                <property name="receiveTimeout" value="2000"/>
            </bean>
        </property>
    </bean>

    <!-- my scheduler and periodic call to the topic drainer -->
    <task:scheduler id="taskScheduler" pool-size="10"/>
    <task:scheduled-tasks>
        <!-- send emails hourly (Spring cron takes six fields, seconds first) -->
        <task:scheduled ref="emailer" method="processBatch" cron="0 0 * * * *"/>
    </task:scheduled-tasks>
</beans>
But I'm certain I fundamentally have the wrong strategy and that there's a simple way to configure this.
Thank you for your assistance.
Have a look at how-does-a-queue-compare-to-a-topic: "Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message."
And for a durable topic how-do-durable-queues-and-topics-work: "Durable topics however are different as they must logically persist an instance of each suitable message for every durable consumer - since each durable consumer gets their own copy of the message"
So with a non-durable topic, your plan won't work: the hourly job won't get any messages because it isn't running when the messages are published. If you set up a durable topic then it might work, but it depends on what you expect to happen when you say your subscriber will "drain the topic of messages". All it can do is read the messages published since it last ran; it can't affect the messages going to other subscribers.
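If you go the durable-subscription route, a DefaultMessageListenerContainer can be configured for it. A minimal sketch (client ID, subscription name and listener are made up):

// Durable topic subscription: the broker retains messages published while
// this subscriber is offline and delivers them on reconnect.
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(mqConnectionFactory);   // from the XML above
container.setDestinationName("completions.topic");
container.setPubSubDomain(true);                       // topic, not queue
container.setSubscriptionDurable(true);
container.setClientId("emailer-client");               // required for durability
container.setDurableSubscriptionName("emailer-hourly");
container.setMessageListener(emailerListener);         // hypothetical listener

Note this is push-based: rather than draining the topic on a timer, the listener would accumulate messages and the hourly task would summarize whatever has arrived.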
For a discussion around durable subscribers (I haven't used them on ActiveMQ) see this