How to open/close a Spring Integration channel based on an environment variable?

I have a channel that is used as the input-channel of a chain. I need to use it only when the environment variable sd is not true. Is it possible to express this condition in the Spring Integration file without creating an additional Java filter? In other words, I would like this chain not to run when -Dsd=true is set in the startup script, and to run in every other case.
<int:channel id="sdCreationChannel">
<int:queue/>
</int:channel>
<int:chain input-channel="sdCreationChannel" output-channel="debugLogger">
<int:poller fixed-delay="500" />
<int:filter ref="sdIntegrationExistingRequestSentFilter" method="filter"/>
<int:transformer ref="sdCreationTransformer" method="transformOrder"/>
<int:service-activator ref="sdCreationServiceImpl" method="processMessage">
<int:request-handler-advice-chain>
<ref bean="retryAdvice"/>
</int:request-handler-advice-chain>
</int:service-activator>
</int:chain>

The <chain> is a normal endpoint which can be started and stopped according to its lifecycle contract.
So, you can start/stop it by its id at runtime, at any time and under any condition.
Another trick: it is enough to add auto-startup="false" to its definition based on that variable.
I think that should work even with a normal property placeholder:
<int:chain auto-startup="${myChain.autoStartup}">
On the other hand, you can take a look at the profile feature (activated, for example, with -Dspring.profiles.active=myChain.profile) and configure it like this:
<beans profile="myChain.profile">
<int:chain>
....
</int:chain>
</beans>
UPDATE
Regarding your concern:
So, I would like this chain not to work when -Dsd=true in the startup script and work in any other case
As I said above, you can simply mark it with auto-startup="false" from the beginning, for example by evaluating that variable from the same Environment:
<int:chain auto-startup="#{environment.getProperty('sd') != 'true'}">
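For example, here is a minimal sketch of toggling the chain at runtime through its lifecycle contract; it assumes the chain is given an id (the hypothetical id="sdCreationChain") and that you have the ApplicationContext at hand:

import org.springframework.context.ApplicationContext;
import org.springframework.context.Lifecycle;

public class ChainToggle {

    // The <int:chain> endpoint is registered under its id and implements Lifecycle,
    // so it can be stopped when -Dsd=true and started in any other case.
    public static void applySdFlag(ApplicationContext context) {
        Lifecycle chain = context.getBean("sdCreationChain", Lifecycle.class); // hypothetical id
        if (Boolean.getBoolean("sd")) {          // true only when -Dsd=true was passed
            chain.stop();
        } else if (!chain.isRunning()) {
            chain.start();
        }
    }
}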

Related

Why do I get Referenced bean nullChannel not found?

I usually get a warning about nullChannel not being defined in the STS Problems view:
Referenced bean 'nullChannel' not found
But then if I add a declaration in the context file, like <int:channel id="nullChannel" /> or <int:publish-subscribe-channel id="nullChannel" />, I get:
java.lang.IllegalStateException: The bean name 'nullChannel' is reserved.
I guess that's a warning I can safely ignore, but I usually try to zero out warnings, so is there something I'm missing?
UPDATE
These are the portions involved in the warning; removing them made it disappear:
<int:header-value-router input-channel="listOfMaps" header-name="transaction_type" resolution-required="false" default-output-channel="nullChannel">
    <int:mapping value="52" channel="requests52ListOfMaps"/>
</int:header-value-router>
<int:service-activator input-channel="httpRequestsSendsChannel" output-channel="nullChannel" ref="conversionController" method="enable52Delivery" />
<int:service-activator input-channel="httpRequestsDeletesChannel" output-channel="nullChannel" ref="inspector" method="inspect" />
I am not sure why it was OK for me last week (probably pilot error), but I get it now with
<int:service-activator input-channel="errorChannel" output-channel="nullChannel" expression="foo" />
and, if I flip the in/out channels, the warning changes to errorChannel; presumably we don't get a warning for the input channel because STS knows that we will create input channels on the fly if needed.
I guess STS just doesn't know about these implicit beans.
I'll ask the STS guys if we can come up with a way to give them a list of implicit beans to suppress these warnings.
If that's not possible, we could consider relaxing the rule preventing the adding of a custom nullChannel bean.
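For reference, a minimal sketch (not from the question; imports assume Spring Integration 4.x) showing that nullChannel is one of those implicit framework beans and simply discards whatever is sent to it:

import org.springframework.context.ApplicationContext;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;

public class NullChannelDemo {

    // nullChannel is registered automatically by Spring Integration, which is why a
    // user-defined bean with that id is rejected as reserved; messages sent to it are dropped.
    public static void demo(ApplicationContext context) {
        MessageChannel nullChannel = context.getBean("nullChannel", MessageChannel.class);
        nullChannel.send(MessageBuilder.withPayload("discarded").build()); // silently ignored
    }
}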

Mule / Spring transaction is not propagated

I have a problem with database transactions in a Mule flow. This is the flow that I have defined:
<flow name="createPortinCaseServiceFlow">
<vm:inbound-endpoint path="createPortinCase" exchange-pattern="request-response">
<custom-transaction action="ALWAYS_BEGIN" factory-ref="muleTransactionFactory"/>
</vm:inbound-endpoint>
<component>
<spring-object bean="checkIfExists"/>
</component>
<component>
<spring-object bean="createNewOne"/>
</component>
</flow>
The idea is that in checkIfExists we verify whether some data already exists in the database; if it does, we throw an exception. If it does not, we go on to createNewOne and create the new data.
The problem
is that if we run the flow concurrently, new objects are created multiple times in createNewOne, which should not happen because we invoke checkIfExists just before it. This means that the transaction is not working properly.
More info:
both createNewOne and checkIfExists carry the following annotation (i.e. they must be called within an already-started transaction):
@Transactional(propagation = Propagation.MANDATORY)
The definition of muleTransactionFactory looks as follows:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="dataSource" ref="teleportNpDataSource"/>
<property name="entityManagerFactory" ref="npEntityManagerFactory"/>
<property name="nestedTransactionAllowed" value="true"/>
<property name="defaultTimeout" value="${teleport.np.tm.transactionTimeout}"/>
</bean>
<bean id="muleTransactionFactory" class="org.mule.module.spring.transaction.SpringTransactionFactory">
<property name="manager" ref="transactionManager"/>
</bean>
I have set the TRACE log level (as @Shailendra suggested) and I have discovered that the transaction is reused in all of the Spring beans:
00:26:32.751 [pool-75-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Participating in existing transaction
In the logs the transactions are committed at the same time, which means that they are created properly but executed concurrently, and that is what causes the issue.
The issue may be due to multi-threading. When you post multiple objects to the VM queue, they are dispatched to multiple receiving threads, and if multi-threading is not properly handled in your components then you might run into the issue you mentioned.
Test making this change: add a VM connector reference and turn off the dispatcher threading profile. That way, the VM will process messages one at a time, as there is just one dispatcher thread.
<vm:connector name="VM" validateConnections="true" doc:name="VM" >
<dispatcher-threading-profile doThreading="false"/>
</vm:connector>
<flow name="testFlow8">
<vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" connector-ref="VM">
<custom-transaction action="NONE"/>
</vm:inbound-endpoint>
</flow>
Be aware that if the number of incoming messages on the VM queue is very high and the time taken to process each message is long, you may run into SEDA queue errors due to thread unavailability.
If your flow behaves correctly without threading, then you may need to look at how your components should behave under multi-threading.
Hope that helps!

How to scan multiple directory locations for files with only one input-channel-adapter in Spring?

I have an assignment where I am reading files from two different folders using the Spring Integration inbound-channel-adapter.
My bean is defined as below:
<file:inbound-channel-adapter id="channel1"
directory="file:${java.io.tmpdir}/input1">
<integration:poller id="poller" fixed-delay="60000">
</integration:poller>
</file:inbound-channel-adapter>
<file:inbound-channel-adapter id="channel2"
directory="file:${java.io.tmpdir}/input2">
<integration:poller id="poller2" fixed-delay="60000">
</integration:poller>
</file:inbound-channel-adapter>
<integration:service-activator
input-channel="channel1" ref="handler" />
<integration:service-activator
input-channel="channel2" ref="handler" />
<bean id="handler" class="c.d.Handler" />
I want to read the files from both locations and, whichever location a file arrives in, process it with the same handler class. I cannot write two main classes to read the different inbound-channel-adapters. I tried adding a scanner, but that didn't work out. With the configuration above I get the error expected single matching bean but found 2:. Any help on this would be much appreciated.
Simply declare one service activator with input-channel="channel", then point both of your adapters at that same channel...
<file:inbound-channel-adapter id="one" channel="channel" ...
<file:inbound-channel-adapter id="two" channel="channel" ...
i.e. route the output from both adapters to the same bean.

Creating a SourcePollingChannelAdapter dynamically

I use Spring and Spring Integration. I need to create a SourcePollingChannelAdapter dynamically for feed parsing and register it in the Spring context.
QueueChannel channel = (QueueChannel) context.getBean("rssFeedChannel");
SourcePollingChannelAdapter adapter = new SourcePollingChannelAdapter();
adapter.setApplicationContext(context);
adapter.setBeanName("adapter.1");
FeedEntryMessageSource source = new FeedEntryMessageSource(new URL("https://spring.io/blog.atom"), "news");
source.setApplicationContext(context);
source.setBeanName("source");
adapter.setSource(source);
adapter.setOutputChannel(channel);
adapter.setTrigger(new PeriodicTrigger(1000));
adapter.start();
And my application config:
<int:poller default="true" fixed-rate="5000"/>
<int:channel id="rssFeedChannel">
<int:queue capacity="40"/>
</int:channel>
<file:outbound-channel-adapter id="file" mode="APPEND" charset="UTF-8" directory="/tmp/si" filename-generator-expression="'SpringBlog'"/>
<!-- With this work -->
<!--<feed:inbound-channel-adapter id="news" channel="rssFeedChannel" url="https://spring.io/blog.atom">-->
<!--<int:poller fixed-rate="5000"/>-->
<!--</feed:inbound-channel-adapter>-->
<int:transformer input-channel="rssFeedChannel" expression="payload.title + ' # ' + payload.link + '#{systemProperties['line.separator']}'" output-channel="file"/>
but nothing is written to the file. Please help me find the bug.
You must make the FeedEntryMessageSource a bean, too, so that it gets the applicationContext injection.
You forgot to invoke adapter.afterPropertiesSet(). BTW, the same applies to the FeedEntryMessageSource instance.
On the other hand, please share your reasoning for going this manual way. Why not just rely on the standard Inversion of Control principle?
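For illustration, here is a sketch of the manual wiring with the missing afterPropertiesSet() calls added, using the same bean names, channel, and URL as the question (the wrapper class and method are an assumption, not part of the original code):

import java.net.URL;

import org.springframework.context.ApplicationContext;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
import org.springframework.integration.feed.inbound.FeedEntryMessageSource;
import org.springframework.scheduling.support.PeriodicTrigger;

public class FeedAdapterRegistrar {

    public static void register(ApplicationContext context) throws Exception {
        QueueChannel channel = context.getBean("rssFeedChannel", QueueChannel.class);

        FeedEntryMessageSource source =
                new FeedEntryMessageSource(new URL("https://spring.io/blog.atom"), "news");
        source.setApplicationContext(context);
        source.setBeanName("source");
        source.afterPropertiesSet();   // was missing in the original snippet

        SourcePollingChannelAdapter adapter = new SourcePollingChannelAdapter();
        adapter.setApplicationContext(context);
        adapter.setBeanName("adapter.1");
        adapter.setSource(source);
        adapter.setOutputChannel(channel);
        adapter.setTrigger(new PeriodicTrigger(1000));
        adapter.afterPropertiesSet();  // was missing in the original snippet
        adapter.start();
    }
}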

Dynamically determining polling frequency in Mule

I have been struggling to find a workaround to dynamically read the polling frequency in a Mule flow. Currently I am reading it from a file using Spring's property placeholder at startup, and the value remains the same even if the file is changed (as we all know).
Since the poll tag needs to be the first component in the flow, there is not much I can do to read the "live" file update.
Is there any way I could set the polling frequency dynamically from a file (without requiring a restart)?
For Reference:
<spring:beans>
    <context:property-placeholder location="file:///C:/Users/test/config.properties" />
</spring:beans>

<flow name="querying-database-pollingFlow1" doc:name="querying-database-pollingFlow1">
    <poll doc:name="Poll3e3">
        <fixed-frequency-scheduler frequency="${pollinginterval}"/>
        <db:select config-ref="MySQL_Configuration1" doc:name="Perform a query in MySQL">
            <db:dynamic-query><![CDATA[select empId,empName from employer where status='active';]]></db:dynamic-query>
        </db:select>
    </poll>
    ....
</flow>
There is absolutely no issue with <fixed-frequency-scheduler frequency="${pollinginterval}"/>, as you can read the polling frequency dynamically from a properties file ...
The only thing I am concerned about here is: <context:property-placeholder location="file:///C:/Users/test/config.properties" />
Since you are reading from a properties file outside your classpath, better try the following:
<context:property-placeholder location="file:C:/Users/test/config.properties" />
One more thing: if you are using Spring beans for the properties file, use the following:
<spring:beans>
    <spring:bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <spring:property name="locations">
            <spring:list>
                <spring:value>file:C:/Users/test/config.properties</spring:value>
            </spring:list>
        </spring:property>
    </spring:bean>
</spring:beans>
There is no clean way to do this with FixedFrequencyScheduler. You could potentially go to the registry, fetch your flow by name, get the MessageSource and cast it to FixedFrequencyScheduler, set the new interval, and stop/start it; however, if you take a look at the code you'll see there is no setter for it, and reflection is just too dirty.
My first choice would probably be to leverage a Quartz endpoint and then use Quartz's ability to expose the configuration through JMX/RMI.
I would definitely advise against using hot deploy to solve this problem, especially if you need to change the frequency often. There is a risk that this will lead to problems with PermGen running out of memory.
Instead, you could use a flow with a Quartz endpoint that fires relatively frequently, then add a filter that only lets a message through at the required frequency.
The filter can either watch a properties file for changes or expose attributes over JMX to allow you to change the frequency. Something like this.
<spring:beans>
    <spring:bean id="frequencyFilter" class="FrequencyFilter" />
</spring:beans>

<flow name="trigger-polling-every-second" doc:name="trigger-polling-every-second">
    <quartz:inbound-endpoint repeatInterval="1000" doc:name="Quartz" responseTimeout="10000" jobName="poll-trigger">
        <quartz:event-generator-job>
            <quartz:payload>Scheduled Trigger</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <filter ref="frequencyFilter" />
    <vm:outbound-endpoint path="query-database" />
</flow>

<flow name="query-database">
    <vm:inbound-endpoint path="query-database" />
    <db:select config-ref="databaseConfig" doc:name="Perform a query in database">
        <db:dynamic-query><![CDATA[select empId,empName from employer where status='active']]></db:dynamic-query>
    </db:select>
    <logger level="ERROR" message="#[payload]"/>
</flow>
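For completeness, here is one possible sketch of the FrequencyFilter bean referenced above; it is an assumption (the class is not shown in the answer) and targets the Mule 3.x Filter API. It lets a trigger message through only when the configured interval has elapsed since the last accepted one; the interval can be re-read from a properties file or changed over JMX.

import java.util.concurrent.atomic.AtomicLong;

import org.mule.api.MuleMessage;
import org.mule.api.routing.filter.Filter;

public class FrequencyFilter implements Filter {

    // Current polling interval in milliseconds; update it from a properties file watcher
    // or expose it as a JMX attribute to change the effective frequency at runtime.
    private volatile long intervalMillis = 10000;

    private final AtomicLong lastAccepted = new AtomicLong();

    public void setIntervalMillis(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    @Override
    public boolean accept(MuleMessage message) {
        long now = System.currentTimeMillis();
        long last = lastAccepted.get();
        // Accept only if the interval has elapsed and no other thread accepted in the meantime.
        return now - last >= intervalMillis && lastAccepted.compareAndSet(last, now);
    }
}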
