I have been struggling to find a workaround to dynamically read the polling frequency in a Mule flow. Currently I am reading it from a file using Spring's property placeholder at startup, and the value stays the same even if the file is changed (as we all know).
Since the poll tag needs to be the first component in the flow, there is not much I can do to pick up the "live" file update.
Is there any way I could set the polling frequency dynamically, read from a file, without requiring a restart?
For Reference:
<spring:beans>
    <context:property-placeholder location="file:///C:/Users/test/config.properties" />
</spring:beans>

<flow name="querying-database-pollingFlow1" doc:name="querying-database-pollingFlow1">
    <poll doc:name="Poll3e3">
        <fixed-frequency-scheduler frequency="${pollinginterval}"/>
        <db:select config-ref="MySQL_Configuration1" doc:name="Perform a query in MySQL">
            <db:dynamic-query><![CDATA[select empId,empName from employer where status='active';]]></db:dynamic-query>
        </db:select>
    </poll>
    ....
</flow>
There is absolutely no issue with <fixed-frequency-scheduler frequency="${pollinginterval}"/>, as you can read the polling frequency from a properties file.
The only thing I am concerned about here is: <context:property-placeholder location="file:///C:/Users/test/config.properties" />
Since you are reading from a properties file outside your classpath, it is better to try the following:
<context:property-placeholder location="file:C:/Users/test/config.properties" />
One more thing: if you are using Spring beans for the properties file, use the following:
<spring:beans>
    <spring:bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <spring:property name="locations">
            <spring:list>
                <spring:value>file:C:/Users/test/config.properties</spring:value>
            </spring:list>
        </spring:property>
    </spring:bean>
</spring:beans>
There is no clean way to do this with FixedFrequencyScheduler. You could potentially go to the registry, fetch your flow by name, get the MessageSource, cast it to FixedFrequencyScheduler, set the new interval, and stop-start it; however, if you take a look at the code you'll see there is no setter for the frequency, and reflection is just too dirty.
My first choice would probably be to use a Quartz endpoint and then leverage Quartz's ability to expose its configuration through JMX/RMI.
I would definitely advise against using hot deployment to solve this problem, especially if you need to change the frequency often; there is a risk that this will lead to PermGen running out of memory.
Instead, you could use a flow with a Quartz endpoint that fires at a relatively short, fixed interval, then add a filter that only lets messages through at the required frequency.
The filter can either watch a properties file for changes or expose attributes over JMX to allow you to change the frequency. Something like this:
<spring:beans>
    <spring:bean id="frequencyFilter" class="FrequencyFilter" />
</spring:beans>

<flow name="trigger-polling-every-second" doc:name="trigger-polling-every-second">
    <quartz:inbound-endpoint repeatInterval="1000" doc:name="Quartz" responseTimeout="10000" jobName="poll-trigger">
        <quartz:event-generator-job>
            <quartz:payload>Scheduled Trigger</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <filter ref="frequencyFilter" />
    <vm:outbound-endpoint path="query-database" />
</flow>

<flow name="query-database">
    <vm:inbound-endpoint path="query-database" />
    <db:select config-ref="databaseConfig" doc:name="Perform a query in database">
        <db:dynamic-query><![CDATA[select empId,empName from employer where status='active']]></db:dynamic-query>
    </db:select>
    <logger level="ERROR" message="#[payload]"/>
</flow>
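The FrequencyFilter class referenced by the frequencyFilter bean above isn't shown; a minimal sketch of what it could look like, re-reading the interval from the same properties file on every trigger (the file path, property key, and 10-second fallback are assumptions), is:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.mule.api.MuleMessage;
import org.mule.api.routing.filter.Filter;

/**
 * Lets a message through only when the configured interval has elapsed since
 * the last accepted message. The interval is re-read from a properties file
 * on every trigger, so changes take effect without a restart.
 */
public class FrequencyFilter implements Filter {

    private static final String PROPERTIES_PATH = "C:/Users/test/config.properties"; // assumed path
    private long lastAccepted = 0;

    @Override
    public synchronized boolean accept(MuleMessage message) {
        long interval = readInterval();
        long now = System.currentTimeMillis();
        if (now - lastAccepted >= interval) {
            lastAccepted = now;
            return true;
        }
        return false;
    }

    private long readInterval() {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(PROPERTIES_PATH)) {
            props.load(in);
            return Long.parseLong(props.getProperty("pollinginterval", "10000"));
        } catch (IOException | NumberFormatException e) {
            return 10000; // fall back to a default interval if the file is unreadable
        }
    }
}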
Related
I have a problem with database transactions in a Mule flow. This is the flow that I have defined:
<flow name="createPortinCaseServiceFlow">
    <vm:inbound-endpoint path="createPortinCase" exchange-pattern="request-response">
        <custom-transaction action="ALWAYS_BEGIN" factory-ref="muleTransactionFactory"/>
    </vm:inbound-endpoint>
    <component>
        <spring-object bean="checkIfExists"/>
    </component>
    <component>
        <spring-object bean="createNewOne"/>
    </component>
</flow>
The idea is that in checkIfExists we verify whether the data already exists in the database; if it does, we throw an exception. If it does not, we go on to createNewOne and create the new data.
The problem is that if we run the flow concurrently, new objects are created multiple times in createNewOne even though checkIfExists is invoked just before it, and they should not be. This means that the transaction is not working properly.
More info:
Both createNewOne and checkIfExists have the following annotation:
@Transactional(propagation = Propagation.MANDATORY)
The definition of muleTransactionFactory looks as follows:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="dataSource" ref="teleportNpDataSource"/>
    <property name="entityManagerFactory" ref="npEntityManagerFactory"/>
    <property name="nestedTransactionAllowed" value="true"/>
    <property name="defaultTimeout" value="${teleport.np.tm.transactionTimeout}"/>
</bean>

<bean id="muleTransactionFactory" class="org.mule.module.spring.transaction.SpringTransactionFactory">
    <property name="manager" ref="transactionManager"/>
</bean>
I have set the TRACE log level (as @Shailendra suggested) and I have discovered that the transaction is reused in all of the Spring beans:
00:26:32.751 [pool-75-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Participating in existing transaction
In the logs the transactions are committed at the same time, which means they are created properly but executed concurrently, and that causes the issue.
The issue may be due to multi-threading. When you post multiple objects to the VM queue, they are dispatched to multiple receiving threads, and if multi-threading is not properly handled in your components you might run into the issue you mentioned.
Try making this change: add a VM connector reference and turn off the dispatcher threading profile. That way, the VM endpoint will process messages one at a time, as there is just one dispatcher thread.
<vm:connector name="VM" validateConnections="true" doc:name="VM">
    <dispatcher-threading-profile doThreading="false"/>
</vm:connector>

<flow name="testFlow8">
    <vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" connector-ref="VM">
        <custom-transaction action="NONE"/>
    </vm:inbound-endpoint>
</flow>
Be aware that if the number of incoming messages on the VM queue is very high and each message takes a long time to process, you may run into SEDA queue errors because no threads are available.
If your flow behaves correctly without threading, then you may need to look at how your components should behave under multi-threading.
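One way to make the components themselves safe regardless of threading is to let the database enforce uniqueness rather than relying on the check alone; a rough sketch (class, method, and key names are illustrative, not taken from the question) might be:

import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

/**
 * Relies on a unique constraint on the case's business key: when two threads
 * race, the database lets exactly one INSERT succeed and the loser gets a
 * constraint violation, which is translated into the same "already exists"
 * error that the checkIfExists component would have raised.
 */
public class CreatePortinCaseService {

    @Transactional(propagation = Propagation.MANDATORY)
    public void createNewOne(String businessKey) {
        try {
            insertCase(businessKey); // plain INSERT; the table has a unique index on businessKey
        } catch (DataIntegrityViolationException e) {
            throw new IllegalStateException("Case already exists: " + businessKey, e);
        }
    }

    private void insertCase(String businessKey) {
        // JPA persist / JDBC insert of the new case goes here
    }
}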
Hope that helps!
I have a channel that is used as the input-channel of a chain. I need to use it only when the system property sd is not true. Is it possible to write this condition into the Spring Integration file without creating an additional Java filter? In other words, I would like this chain not to work when -Dsd=true is set in the startup script, and to work in any other case.
<int:channel id="sdCreationChannel">
    <int:queue/>
</int:channel>

<int:chain input-channel="sdCreationChannel" output-channel="debugLogger">
    <int:poller fixed-delay="500" />
    <int:filter ref="sdIntegrationExistingRequestSentFilter" method="filter"/>
    <int:transformer ref="sdCreationTransformer" method="transformOrder"/>
    <int:service-activator ref="sdCreationServiceImpl" method="processMessage">
        <int:request-handler-advice-chain>
            <ref bean="retryAdvice"/>
        </int:request-handler-advice-chain>
    </int:service-activator>
</int:chain>
The <chain> is a normal endpoint which can be started/stopped according to its lifecycle contract.
So, you can start/stop it by its id at runtime, at any time and under any condition.
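For example, assuming the chain is given an id such as sdCreationChain, the consumer endpoint registered under that id implements Lifecycle, so a small sketch of toggling it at runtime could look like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.Lifecycle;
import org.springframework.stereotype.Component;

/**
 * Starts or stops the chain's consumer endpoint at runtime.
 * Assumes the chain was declared with id="sdCreationChain".
 */
@Component
public class SdChainSwitch {

    @Autowired
    private ApplicationContext applicationContext;

    public void enable() {
        applicationContext.getBean("sdCreationChain", Lifecycle.class).start();
    }

    public void disable() {
        applicationContext.getBean("sdCreationChain", Lifecycle.class).stop();
    }
}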
Another trick is that it is enough to set auto-startup="false" in its definition based on that variable.
Hmm, I think that should work even with a normal property placeholder:
<int:chain auto-startup="${myChain.autoStartup}">
On the other hand, you can take a look at the profile feature and configure it like this:
<beans profile="myChain.profile">
    <int:chain>
        ....
    </int:chain>
</beans>
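With the profile approach the chain is only registered at all when that profile is active, which you would typically control from the same startup script, e.g. by passing -Dspring.profiles.active=myChain.profile alongside (or instead of) the existing -Dsd flag.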
UPDATE
Regarding your concern:
So, I would like this chain not to work when -Dsd=true in the startup script and work in any other case
As I said above, you can just control its auto-startup from the beginning, for example by using the same Environment:
<int:chain auto-startup="#{!environment.getProperty('sd', T(java.lang.Boolean), false)}">
Following the docs, I want to consume from queues while dynamically changing the number of consumers, without restarting the application.
I do see that the latest Spring RabbitMQ version supports this, but there is no clue, example, or explanation of how to change it. I couldn't find proper source code for it, or how to pass parameters like maxConcurrentConsumers.
I am using XML-based configuration of Spring RabbitMQ along with Spring Integration:
<bean id="rabbitListenerContainerFactory"
      class="org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory">
    <property name="connectionFactory" ref="rabbitConnectionFactory"/>
    <property name="concurrentConsumers" value="3"/>
    <property name="maxConcurrentConsumers" value="10"/>
    <property name="acknowledgeMode" value="AUTO" />
</bean>

<int-amqp:inbound-channel-adapter channel="lowInboundChannel" queue-names="lowLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />

<int-amqp:inbound-channel-adapter channel="highInboundChannel" queue-names="highLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />
Can anyone guide me on how to configure the consumers dynamically?
First of all, you shouldn't share the same rabbitListenerContainerFactory between different <int-amqp:inbound-channel-adapter>s, because they do this:
protected void onInit() {
    this.messageListenerContainer.setMessageListener(new ChannelAwareMessageListener() {
        ...
So, only the last adapter wins.
On the other hand, there is no reason to have several adapters at all: you can specify queue-names="highLoadQueue,lowLoadQueue" on a single adapter.
Although, in the case of listener-container, you must specify the queues on the SimpleRabbitListenerContainerFactory.
If you want to change some rabbitListenerContainerFactory options at runtime, you can just inject it into some service and invoke its setters.
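For example, if the adapter references a SimpleMessageListenerContainer bean directly (rather than the factory), that container can be injected and re-scaled while it is running; a rough sketch, with an assumed bean name, could be:

import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

/**
 * Re-scales consumer concurrency at runtime by calling the setters on the
 * running SimpleMessageListenerContainer. "loadListenerContainer" is an
 * assumed bean name for the container referenced by the adapter.
 */
@Service
public class ConsumerScalingService {

    private final SimpleMessageListenerContainer container;

    @Autowired
    public ConsumerScalingService(@Qualifier("loadListenerContainer") SimpleMessageListenerContainer container) {
        this.container = container;
    }

    public void setMaxConcurrentConsumers(int maxConcurrentConsumers) {
        // Can be called while the container is running; extra consumers are added
        // on demand up to this limit (it must stay >= concurrentConsumers).
        container.setMaxConcurrentConsumers(maxConcurrentConsumers);
    }
}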
Let me know if I have missed anything.
I am trying to get Hazelcast 3.0.2 working with the Spring cache abstraction; however, it seems the TTL functionality is not working.
I have configured my Spring context in the following way:
<cache:annotation-driven cache-manager="cacheManager" mode="proxy" proxy-target-class="true" />

<bean id="cacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager">
    <constructor-arg ref="hzInstance" />
</bean>

<hz:hazelcast id="hzInstance">
    <hz:config>
        <hz:group name="instance" password="password" />
        <hz:properties>
            <hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
            <hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
            <hz:property name="hazelcast.logging.type">slf4j</hz:property>
            <hz:property name="hazelcast.jmx">true</hz:property>
            <hz:property name="hazelcast.jmx.detailed">true</hz:property>
        </hz:properties>
        <hz:network port="8995" port-auto-increment="true">
            <hz:join>
                <hz:tcp-ip enabled="true">
                    <hz:interface>10.0.5.5</hz:interface>
                    <hz:interface>10.0.5.7</hz:interface>
                </hz:tcp-ip>
            </hz:join>
        </hz:network>
        <hz:map name="somecache"
                backup-count="1"
                max-size="0"
                eviction-percentage="30"
                read-backup-data="false"
                time-to-live-seconds="120"
                eviction-policy="NONE"
                merge-policy="hz.ADD_NEW_ENTRY" />
    </hz:config>
</hz:hazelcast>
I then made a simple test class with the following method:
@Cacheable("somecache")
public boolean insertDataIntoCache(String data) {
    logger.info("Inserting data = '{}' into cache", data);
    return true;
}
I also made a method to print some information about every map Hazelcast finds, as well as the entries inside. Inserting the data and caching seem to work fine; however, the entries never expire even though I set a TTL of 120 seconds.
When I print the data from the cache, it shows me that there is one map called "somecache" and that map has a TTL of 120 seconds, but when I loop through the entries it finds all the ones I inserted with an expirationTime of 0. I am not sure what the intended behaviour of Hazelcast is (maybe the map TTL takes precedence over an entry TTL), but in any case the entries just do not expire.
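For reference, a rough sketch of that kind of inspection, using the Hazelcast 3.x EntryView API and the hzInstance/somecache names from the config above, might look like this:

import com.hazelcast.core.EntryView;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

/**
 * Prints the map-level TTL and each entry's expiration time so the two can
 * be compared; in the scenario above the entries report an expirationTime
 * of 0 even though the map config says 120 seconds.
 */
public class CacheTtlInspector {

    public static void inspect(HazelcastInstance hzInstance) {
        IMap<Object, Object> map = hzInstance.getMap("somecache");
        System.out.println("map TTL (s): "
                + hzInstance.getConfig().getMapConfig("somecache").getTimeToLiveSeconds());
        for (Object key : map.keySet()) {
            EntryView<Object, Object> view = map.getEntryView(key);
            System.out.println(key + " expires at " + view.getExpirationTime());
        }
    }
}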
Is anybody aware of any issues with 3.0.2 and Spring cache? I should also mention that I have other applications in the same application server running an older version of Hazelcast; however, they have their own separate config, and my test application seems to be keeping to itself and not conflicting with anything.
Any input is appreciated.
EDIT 1:
It seems to work if I downgrade to HZ 2.6.3, so it looks like there is a bug somewhere in Hazelcast 3 regarding TTL.
I just stumbled on the same thing, and it seems it was fixed about a month ago: https://github.com/hazelcast/hazelcast/commit/602ce5835a7cc5e495b8e75aa3d4192db34d8b1a#diff-d20dd943d2216ab106807892ead44871
Basically, the TTL was being overridden when using the Hazelcast Spring integration.
I'm using Velocity and Spring. Within Spring, I'm using the VelocityViewResolver paired with the ContentNegotiatingViewResolver. For the most part, this works great. The only problem is that the ContentNegotiatingViewResolver queries the VelocityViewResolver for many different content types (as it should).
When the Velocity engine doesn't find a particular template, an error similar to the following is produced:
2011-02-04 13:37:15,074 ERROR [http-8080-2] VelocityEngine: ResourceManager : unable to find resource 'foo.json.vm' in any resource loader.
This is not ideal. Ideally, if a template isn't found, a warning or something similar would be produced. If a template doesn't exist for a particular content type, I don't really care... as that means that content type isn't supported through that view resolver.
Any idea on how I could suppress this error through the VelocityViewResolver, VelocityView, or ContentNegotiatingViewResolver?
So, I found that the best way to do this was to add a logger statement to my log config file specifically for the Velocity engine (Velocity and my project both use Commons logging). My logger statement looks like this:
<logger name="org.apache.velocity.app">
    <level value="OFF" />
</logger>
The problem will be fixed in Spring 3.2; see SPR-8640. After this improvement you will be able to configure the Velocity view resolver to check unresolved views only once.
This happens because your ContentNegotiatingViewResolver uses VelocityViewResolver. You can stop it from doing that by giving it an empty (but non-null) list of view resolvers.
<bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
    ...
    <property name="viewResolvers">
        <list />
    </property>
</bean>