Mule / Spring transaction is not propagated - java

I have a problem with database transactions in a Mule flow. This is the flow I have defined:
<flow name="createPortinCaseServiceFlow">
    <vm:inbound-endpoint path="createPortinCase" exchange-pattern="request-response">
        <custom-transaction action="ALWAYS_BEGIN" factory-ref="muleTransactionFactory"/>
    </vm:inbound-endpoint>
    <component>
        <spring-object bean="checkIfExists"/>
    </component>
    <component>
        <spring-object bean="createNewOne"/>
    </component>
</flow>
The idea is that checkIfExists verifies whether the data already exists in the database; if it does, we throw an exception. If it does not, we proceed to createNewOne and create the new data.
The problem
is that if we run the flow concurrently, new objects are created multiple times by createNewOne, even though checkIfExists is invoked just before it. This means the transaction is not working properly.
More info:
both createNewOne and checkIfExists have the following annotation:
@Transactional(propagation = Propagation.MANDATORY)
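For reference, the two components are ordinary Spring beans along these lines (a minimal sketch; the repository interface and method names are hypothetical, only the annotation comes from the real setup):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class CheckIfExists {

    // hypothetical repository, declared here only so the sketch is self-contained
    interface PortinCaseRepository {
        boolean existsByCaseId(String caseId);
    }

    private PortinCaseRepository repository;

    // MANDATORY never opens a transaction of its own: the method must run
    // inside an existing transaction (here, the one begun by the Mule
    // inbound endpoint) and fails fast if there is none.
    @Transactional(propagation = Propagation.MANDATORY)
    public String onCall(String caseId) {
        if (repository.existsByCaseId(caseId)) {
            throw new IllegalStateException("Porting case already exists: " + caseId);
        }
        return caseId;
    }
}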
The definition of muleTransactionFactory looks as follows:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="dataSource" ref="teleportNpDataSource"/>
<property name="entityManagerFactory" ref="npEntityManagerFactory"/>
<property name="nestedTransactionAllowed" value="true"/>
<property name="defaultTimeout" value="${teleport.np.tm.transactionTimeout}"/>
</bean>
<bean id="muleTransactionFactory" class="org.mule.module.spring.transaction.SpringTransactionFactory">
<property name="manager" ref="transactionManager"/>
</bean>
I have set the TRACE log level (as @Shailendra suggested) and I have discovered that the transaction is reused across all of the Spring beans:
00:26:32.751 [pool-75-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Participating in existing transaction
In the logs the transactions are committed at the same time, which means they are created properly but executed concurrently, and that is what causes the issue.

The issue may be due to multi-threading. When you post multiple objects to the VM queue, they are dispatched to multiple receiving threads, and if multi-threading is not properly handled in your components then you might run into the issue you mentioned.
Try making this change: add a VM connector reference and turn off the dispatcher threading profile. That way, the VM queue will process messages one at a time, as there is just one dispatcher thread.
<vm:connector name="VM" validateConnections="true" doc:name="VM">
    <dispatcher-threading-profile doThreading="false"/>
</vm:connector>
<flow name="testFlow8">
    <vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" connector-ref="VM">
        <custom-transaction action="NONE"/>
    </vm:inbound-endpoint>
</flow>
Be aware that if the number of incoming messages on the VM queue is very high and each message takes a long time to process, you may run into SEDA queue errors because no threads are available.
If your flow behaves correctly without threading, then you need to look at how your components behave under multi-threading.
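One common way to make the check-and-create pair safe under concurrency, whatever the threading profile, is to let the database enforce uniqueness and treat a constraint violation as the "already exists" case. A sketch, assuming a UNIQUE constraint on the case identifier and a hypothetical repository:

import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class CreateNewOne {

    // hypothetical repository whose insert() targets a table with a
    // UNIQUE constraint on the case identifier
    interface PortinCaseRepository {
        void insert(String caseId);
    }

    private PortinCaseRepository repository;

    @Transactional(propagation = Propagation.MANDATORY)
    public void onCall(String caseId) {
        try {
            repository.insert(caseId);
        } catch (DataIntegrityViolationException e) {
            // a concurrent flow instance inserted the same case first
            throw new IllegalStateException("Porting case already exists: " + caseId, e);
        }
    }
}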
Hope that helps!

Related

How to make an asynchronous spring batch job with in another job

I am trying to create an asynchronous Spring Batch job within another job. Say Job-1 should complete and Job-2 should then be executed; the problem is that Job-1 waits until Job-2 has completed, which I don't want. I have used JobStep as well, but it runs synchronously, which is not helpful. Can someone help me run this asynchronously, so that Job-1 does not wait until Job-2 is completed?
Sample XML snippet below:
<bean id="taskExecutorAsync" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="jobLauncherAsync" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="taskExecutorAsync" />
</bean>
<bean id="CreationProcess" class="test.CreationProcess">
<property name="jobLauncher" ref="jobLauncherAsync" />
<property name="jobRepository" ref="jobRepository" />
<property name="jobExplorer" ref="jobExplorer" />
</bean>
Thanks
You could use a SimpleAsyncTaskExecutor to avoid blocking.
I tried to create a separate thread that returned immediately while the new thread updated the details, but I was still unable to create an asynchronous Spring Batch job within another job.
In short, you can't do this with JobStep. The reason is that a Job is a state machine, with each Step serving as a state. In order for the Job to transition to the next state (i.e. complete, in your use case), the current state (your child job) needs to complete.
You can launch jobs from other jobs, but to do so, you'll need to write a Tasklet to launch the job on a new thread (using a TaskExecutor) and return immediately.
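A minimal sketch of such a Tasklet, assuming the jobLauncherAsync bean above (backed by SimpleAsyncTaskExecutor) is wired in along with a childJob bean for Job-2; run() then returns as soon as the job is handed to the executor:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class LaunchChildJobTasklet implements Tasklet {

    private JobLauncher asyncJobLauncher; // the jobLauncherAsync bean above
    private Job childJob;                 // hypothetical bean for Job-2

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // With an async TaskExecutor behind the launcher, run() returns
        // immediately with a JobExecution still in STARTING state.
        asyncJobLauncher.run(childJob, new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters());
        return RepeatStatus.FINISHED; // Job-1's step completes without waiting for Job-2
    }

    public void setAsyncJobLauncher(JobLauncher asyncJobLauncher) {
        this.asyncJobLauncher = asyncJobLauncher;
    }

    public void setChildJob(Job childJob) {
        this.childJob = childJob;
    }
}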

Spring RabbitMQ SimpleRabbitListenerContainerFactory usage

From the docs, I want to consume from queues while dynamically changing the number of consumers, without restarting the application.
I do see that the latest Spring RabbitMQ version supports this, but there is no clue/example/explanation of how to change it. I couldn't find proper source code for it, or how to pass params like maxConcurrentConsumers.
I am using XML-based configuration of Spring RabbitMQ along with Spring Integration:
<bean id="rabbitListenerContainerFactory"
class="org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory">
<property name="connectionFactory" ref="rabbitConnectionFactory"/>
<property name="concurrentConsumers" value="3"/>
<property name="maxConcurrentConsumers" value="10"/>
<property name="acknowledgeMode" value="AUTO" />
</bean>
<int-amqp:inbound-channel-adapter channel="lowInboundChannel" queue-names="lowLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />
<int-amqp:inbound-channel-adapter channel="highInboundChannel" queue-names="highLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />
Can anyone guide me on how to configure the consumers dynamically?
First of all, you shouldn't share the same rabbitListenerContainerFactory between different <int-amqp:inbound-channel-adapter>s, because each adapter does this:
protected void onInit() {
    this.messageListenerContainer.setMessageListener(new ChannelAwareMessageListener() {
        // ...
    });
}
So only the last adapter wins.
On the other hand, there is no reason to have several adapters at all: you can specify queue-names="highLoadQueue,lowLoadQueue" on a single adapter.
Note, though, that in the listener-container case you must specify the queues on the SimpleRabbitListenerContainerFactory itself.
If you want to change some rabbitListenerContainerFactory options at runtime, you can simply inject it into a service and invoke its setters.
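A minimal sketch, assuming the underlying SimpleMessageListenerContainer is exposed as an injectable bean (setters on the factory itself only affect containers it creates afterwards, so the hypothetical service below adjusts a live container instead):

import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ConsumerScalingService {

    private final SimpleMessageListenerContainer container;

    public ConsumerScalingService(SimpleMessageListenerContainer container) {
        this.container = container;
    }

    // both setters take effect at runtime: the container adds or retires
    // consumers without a restart
    public void scaleUp() {
        container.setConcurrentConsumers(5);
        container.setMaxConcurrentConsumers(10);
    }
}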
Let me know if I have missed anything.

How will this connection pooling scenario pan out?

We have a web application that uses C3P0 to pool our connections. We inject C3P0 as a data source into JdbcTemplate. You can see how we do this here:
<bean id="dataSourceDev" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="driverClass" value="${databasedev.driver}" />
<property name="jdbcUrl" value="${databasedev.url}"/>
<property name="user" value="${databasedev.username}"/>
<property name="password" value="${databasedev.password}"/>
<property name="initialPoolSize" value="5" />
<property name="minPoolSize" value="5" />
<property name="maxPoolSize" value="1000" />
<property name="acquireIncrement" value="5" />
<property name="maxStatements" value="1000" />
<property name="maxStatementsPerConnection" value="1000"/>
<property name="maxIdleTime" value="10800"/> <!-- 3 hours -->
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<constructor-arg>
<ref bean="dataSourceDev" />
</constructor-arg>
</bean>
<bean id="someDaoBean" class="com.gedi.platform.dao.SomeDaoClass">
<property name="jdbcTemplate" ref="jdbcTemplate" />
</bean>
<bean id="someResourceClass" class="com.gedi.platform.SomeResourceClass">
<property name="someDao" ref="someDaoBean" />
</bean>
You can see that it's a Java EE web application - it uses Jetty as its application server. My question is, how does Jetty instantiate our beans, and how will that affect connection pooling? If we have dozens of users using the web site at different times, will all of these users be placed in the same connection pool? Or is there only one connection pool per client, in which every HTTP client creates new instances of Resource, DAO, JdbcTemplate and C3P0?
Am I being clear? What I want to have is one connection pool for all HTTP requests, regardless of whether they come from web browsers originating in Boston or New Zealand. That way, the connection pool is exerting its maximum effects. However, if a new connection pool is instantiated for every HTTP client, then the pooling doesn't end up being much of an improvement.
Edit
An important tidbit - We use the Jersey reference implementation of JAX-RS to produce a RESTful interface. So our servlet dispatches requests through Jersey which finds a suitable Resource class/method to handle them. I wonder whether Jersey re-instantiates these classes on every request, or keeps one instance of them at all times.
Neither Jersey nor Jetty is relevant here; Spring is what matters. In Spring, every bean (like your dataSourceDev, jdbcTemplate and someDaoBean) is a singleton by default. That means that when the Spring application context starts, it creates exactly one instance of each of them.
So no matter what uses your DataSource (web request, background job, etc.), the same instance, and thus the same connection pool, is used. You are right that if a connection pool were created for every request it would not be much of an improvement; actually it would be much, much slower.
But in your case (and this is how 99% of web applications work) all code requiring database access competes for, and reuses, the same connections (or waits if none is available). By the way, make sure your database can actually handle 1000 concurrent connections.
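You can convince yourself of this with a quick check (a sketch; the context file name applicationContext.xml is an assumption):

import javax.sql.DataSource;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SingletonCheck {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        DataSource first = (DataSource) ctx.getBean("dataSourceDev");
        DataSource second = (DataSource) ctx.getBean("dataSourceDev");
        System.out.println(first == second); // prints true: one pool, shared by every caller
        ctx.close();
    }
}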
Spring creates the beans and caches them, so unless you have declared a bean as prototype-scoped (which creates a new instance for each request), all beans are singletons by default. Jetty doesn't interfere.
When a request comes in, the DispatcherServlet catches it and hands it off to the appropriate handler. The handler is the same bean instance as long as it has not been declared as a prototype bean.
You understood the connection pool correctly; this is exactly why the concept was created. It doesn't matter where the request came from: the maximum number of connections to the database at any point in time will be the one you defined in the maxPoolSize property.

Spring JMS - Draining Topic on a Timer

I'm fairly new to Spring JMS, and I've found lots of documentation and examples at the Spring site and elsewhere, but my use case doesn't seem to be described anywhere, or at least in a way I can understand. I hope you might be able to help.
I would like to create a publisher of a topic and several durable subscribers to that topic. I'm working on the first subscriber now, and it is intended to run hourly (on a timer) and drain the topic of messages and process them all at once (i.e. to send an email summarizing all messages).
I do not know how to configure this setup in Spring, although I feel like this should be easy. Advice would be tremendously helpful.
My plan, such as it is, was to have the timer invoke the "processBatch" method, which would call receiveAndConvert() in a loop until it timed out, building up its list of messages.
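For concreteness, a minimal sketch of that drain loop, assuming the jmsTemplate field is wired to the template configured below:

import java.util.ArrayList;
import java.util.List;
import org.springframework.jms.core.JmsTemplate;

public class Emailer {

    private JmsTemplate jmsTemplate;

    public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void processBatch() {
        List<Object> messages = new ArrayList<Object>();
        Object message;
        // receiveAndConvert() blocks for up to receiveTimeout (2000 ms here)
        // and returns null once nothing more is available
        while ((message = jmsTemplate.receiveAndConvert()) != null) {
            messages.add(message);
        }
        // ... summarize 'messages' into a single email ...
    }
}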
This doesn't seem to work, though, because the consumer isn't really subscribed to the topic. Certainly not before it's run, and potentially not afterward.
How can I configure this using Spring and/or direct ActiveMQ?
I'm not sure whether the XML I ended up with is a useful place for this discussion to start, but I'll provide it in case it is helpful:
<beans>
    <!-- some unrelated beans -->

    <!-- my Active MQ connection factory -->
    <bean id="mqConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
        <property name="connectionFactory">
            <bean class="org.apache.activemq.ActiveMQConnectionFactory">
                <property name="brokerURL" value="vm://broker"/>
            </bean>
        </property>
    </bean>

    <!-- my topic -->
    <amq:topic id="completionsTopic" physicalName="completions.topic"/>

    <!-- my subscriber -->
    <bean id="emailer" class="com.j128.Emailer">
        <property name="jmsTemplate">
            <bean class="org.springframework.jms.core.JmsTemplate">
                <property name="connectionFactory" ref="mqConnectionFactory"/>
                <property name="defaultDestination" ref="completionsTopic"/>
                <property name="receiveTimeout" value="2000"/>
            </bean>
        </property>
    </bean>

    <!-- my scheduler and periodic call to the topic drainer -->
    <task:scheduler id="taskScheduler" pool-size="10"/>
    <task:scheduled-tasks>
        <!-- send emails hourly (Spring cron expressions take six fields, seconds first) -->
        <task:scheduled ref="emailer" method="processBatch" cron="0 0 * * * *"/>
    </task:scheduled-tasks>
</beans>
But I'm certain I fundamentally have the wrong strategy and that there's a simple way to configure this.
Thank you for your assistance.
Have a look at how-does-a-queue-compare-to-a-topic: "Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message."
And for a durable topic, see how-do-durable-queues-and-topics-work: "Durable topics however are different as they must logically persist an instance of each suitable message for every durable consumer - since each durable consumer gets their own copy of the message".
So with a non-durable topic your plan won't work: the hourly job won't get any messages, because it isn't running when the messages are published. If you set up a durable topic it might work, but it depends what you expect to happen when you say your subscriber will "drain the topic of messages". All it can do is read the messages published since it last ran; it can't affect the messages going to other subscribers.
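If you do go the durable route, a plain-JMS sketch of the hourly drain might look like this (the client ID and subscription name are hypothetical; the broker retains messages for the named subscription while the subscriber is offline):

import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class DurableDrainer {

    public List<Message> drain(ConnectionFactory connectionFactory) throws Exception {
        List<Message> batch = new ArrayList<Message>();
        Connection connection = connectionFactory.createConnection();
        try {
            connection.setClientID("emailer"); // a client ID is required for durable subscriptions
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("completions.topic");
            MessageConsumer consumer = session.createDurableSubscriber(topic, "emailer-subscription");
            connection.start();
            Message message;
            while ((message = consumer.receive(2000)) != null) { // 2 s of silence ends the drain
                batch.add(message);
            }
            consumer.close(); // close, don't unsubscribe, so the subscription stays durable
        } finally {
            connection.close();
        }
        return batch;
    }
}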
For a discussion around durable subscribers (I haven't used them on ActiveMQ), see this.

What's the right way to ensure jms consumers are closed using spring integration?

I'm using Spring Integration to invoke a service on the other end of an ActiveMQ queue. My config looks like:
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg>
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="${risk.approval.queue.broker}"
p:userName="${risk.approval.queue.username}"
p:password="${risk.approval.queue.password}"
/>
</constructor-arg>
<property name="reconnectOnException" value="true"/>
<property name="sessionCacheSize" value="100"/>
</bean>
<!-- create and close a connection to prepopulate the pool -->
<bean factory-bean="jmsConnectionFactory" factory-method="createConnection" class="javax.jms.Connection"
init-method="close" />
<integration:channel id="riskApprovalRequestChannel"/>
<integration:channel id="riskApprovalResponseChannel"/>
<jms:outbound-gateway id="riskApprovalServiceGateway"
request-destination-name="${risk.approval.queue.request}"
reply-destination-name="${risk.approval.queue.response}"
request-channel="riskApprovalRequestChannel"
reply-channel="riskApprovalResponseChannel"
connection-factory="jmsConnectionFactory"
receive-timeout="5000"/>
<integration:gateway id="riskApprovalService" service-interface="com.my.super.ServiceInterface"
default-request-channel="riskApprovalRequestChannel"
default-reply-channel="riskApprovalResponseChannel"/>
What I've noticed is that with this config, the consumers created to grab the matching request from ActiveMQ never close. Every request increments the consumer count.
I can stop this from happening by adding
<property name="cacheConsumers" value="false" />
to the CachingConnectionFactory.
However, according to the Javadoc for CachingConnectionFactory:
Note that durable subscribers will only be cached until logical
closing of the Session handle.
Which suggests that the session is never being closed.
Is this a bad thing? Is there a better way to stop the consumers from piling up?
Cheers,
Peter
First, you don't need the init-method on your factory-bean; it does nothing useful. The connection factory only has one connection, and calling close() on it is a no-op. (CCF is a subclass of SingleConnectionFactory.)
Second, caching consumers is the default; sessions are never closed unless the number of sessions exceeds the sessionCacheSize (which you have set to 100).
When close() is called on a cached session, it is cached for reuse; that's what the caching connection factory is for - avoiding the overhead of session creation for every request.
If you don't want the performance benefit of caching sessions, producers and consumers, use the SingleConnectionFactory instead. See the JavaDoc for CachingConnectionFactory.
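A minimal sketch of that alternative, reusing the broker URL from the config above (the config class and method names are hypothetical):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.SingleConnectionFactory;

public class JmsConfig {

    public SingleConnectionFactory jmsConnectionFactory(String brokerUrl) {
        // one shared connection, but no session/producer/consumer caching,
        // so consumers are genuinely closed when each request completes
        SingleConnectionFactory factory =
                new SingleConnectionFactory(new ActiveMQConnectionFactory(brokerUrl));
        factory.setReconnectOnException(true);
        return factory;
    }
}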
Does the following work when using CachingConnectionFactory?
In your Spring config file, add cacheConsumers="false" to the connection factory configuration.
The default behaviour is true, which was causing a connection leak in the queue.
