How to use sftp in the mule flow after writing a file? - java

I have an orchestration flow which calls a subflow to write the file, and then the next flow (via flow-ref) to SFTP it.
WriteSubflow
<file:outbound-endpoint path="${outputDir}" outputPattern="${fileName}" responseTimeout="10000" doc:name="Write file"/>
<logger message="File ${fileName} written to ${outputDir}" level="INFO" doc:name="End"/>
Then I call a flow (via flow-ref) which kicks off the SFTP process.
<flow name="sftp">
<file:inbound-endpoint path="${outputDir}" connector-ref="File" responseTimeout="10000" doc:name="File">
<file:filename-regex-filter pattern="${fileName}" caseSensitive="true"/>
</file:inbound-endpoint>
<sftp:outbound-endpoint exchange-pattern="one-way" connector-ref="SFTP" responseTimeout="10000" ref="endpoint" doc:name="SFTP" />
</flow>
The problems are:
1. While the file is still being written, the flow executes the logger after the file outbound endpoint and reports that the file is already written; only after a while does the file connector log "Write file to ...". How do I make the logger wait until the file is done being written?
2. The file inbound endpoint in the sftp flow above executes immediately, before the file is ready, so it first throws an exception saying it expected an InputStream, byte[], or String but got an ArrayList (the original payload from the orchestration flow). After this exception is printed, the file finally becomes ready, and the inbound file connector picks it up, reads it, and SFTPs it fine. This seems related to the question above: I need to somehow make the rest of the flow wait for the file writing to finish.
Note: I have to create the sftp flow as a flow instead of a subflow, because the file connector needs to be a message source. I think if I don't make it a flow, with the file connector as the source, the connector becomes an outbound endpoint instead.
Any help appreciated.

So I finally figured it out; both questions are answered in one blog post:
http://www.sixtree.com.au/articles/2015/advanced-file-handling-in-mule/
The key for #1 is
<file:connector name="synchronous-file-connector" doc:name="File">
<dispatcher-threading-profile doThreading="false"/>
</file:connector>
For #2, as Ryan mentions above, use the Mule Requester module.
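For completeness, the write endpoint then references that connector by name (the connector-ref attribute is the only change to the original endpoint; everything else matches the snippet above):

```xml
<file:connector name="synchronous-file-connector" doc:name="File">
    <dispatcher-threading-profile doThreading="false"/>
</file:connector>

<!-- The write now happens on the caller's thread, so the logger
     after it only runs once the file is fully written -->
<file:outbound-endpoint path="${outputDir}" outputPattern="${fileName}"
    connector-ref="synchronous-file-connector"
    responseTimeout="10000" doc:name="Write file"/>
```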

1) Set the flow's processingStrategy to synchronous:
<flow name="testFlow" processingStrategy="synchronous">
<poll frequency="10000">
<set-payload value="some test text" />
</poll>
<file:outbound-endpoint path="/Users/ryancarter/Downloads"
outputPattern="test.txt" />
<logger level="ERROR" message="After file..." />
</flow>
2) Not sure I quite understand, but you can't invoke inbound endpoints via flow-ref, so the inbound endpoint will be ignored by the flow-ref and will run on its own schedule regardless of the calling flow. If you want to read in the file mid-flow, use the Mule Requester module: http://blogs.mulesoft.com/introducing-the-mule-requester-module/
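A minimal sketch of the Requester approach, based on the linked post (the resource URI here is an assumption; adjust it to your connector setup). The flow keeps no inbound endpoint and is safe to call via flow-ref:

```xml
<flow name="sftp">
    <!-- Fetch the file on demand, mid-flow, instead of via an inbound endpoint -->
    <mulerequester:request resource="file://${outputDir}" doc:name="Read file"/>
    <sftp:outbound-endpoint exchange-pattern="one-way" connector-ref="SFTP"
        responseTimeout="10000" ref="endpoint" doc:name="SFTP"/>
</flow>
```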


How to limit number of files to be processed in a mule flow?

I have the following code:
<sftp:connector name="ImportStatusUpdateSFTP" validateConnections="true" doc:name="SFTP"/>
<flow name="UpdateFlow1" doc:name="UpdateFlow1">
<sftp:inbound-endpoint sizeCheckWaitTime="${sftpconnector.sizeCheckWaitTime}" connector-ref="ImportStatusUpdateSFTP" host="${sftp.host}" port="${sftp.port}"
path="${sftp.path}" user="${sftp.user}" password="${sftp.password}" responseTimeout="${sftp.responseTimeout}"
archiveDir="${mule.servicefld}${sftp.archiveDir}" archiveTempReceivingDir="${sftpconnector.archiveTempReceivingDir}" archiveTempSendingDir="${sftpconnector.archiveTempSendingDir}"
tempDir="${sftp.tempDir}" doc:name="SFTP" pollingFrequency="${sftp.poll.frequency}">
<file:filename-wildcard-filter pattern="*.xml"/>
</sftp:inbound-endpoint>
<idempotent-message-filter idExpression="#[headers:originalFilename]"
throwOnUnaccepted="true" storePrefix="Idempotent_Message" doc:name="Idempotent Message"
doc:description="Check for processing the same file again.">
<simple-text-file-store name="FTP_files_names"
maxEntries="1000" entryTTL="-1" expirationInterval="3600"
directory="${mule.servicefld}${idempotent.fileDir}" />
</idempotent-message-filter>
<object-to-byte-array-transformer doc:name="Object to Byte Array"/>
<message-filter onUnaccepted="Status_UpdateFlow_XML_Validation_Failed">
<mulexml:schema-validation-filter schemaLocations="xsd/StatusUpdate.xsd" returnResult="false" doc:name="Schema_Validation"/>
</message-filter>
<vm:outbound-endpoint exchange-pattern="one-way"
path="StatusUpdateIN" doc:name="StatusUpdateVMO" />
<default-exception-strategy>
<vm:outbound-endpoint path="serviceExceptionHandlingFlow" />
</default-exception-strategy>
</flow>
My problem is, if there are lots of files on the SFTP server (say 1000), the flow takes them all, converts them, validates them, and sends them all to the outbound endpoint, which puts a strain on the downstream application processing.
Is there a way to limit, split, batch, filter, or otherwise throttle the flow so that only a maximum number of messages/files is sent to the outbound endpoint?
In Mule 3 there is no built-in generic way to do this; there may be solutions on a case-by-case basis. In Mule 4 there is a simple way: the maxConcurrency attribute on flows.
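A minimal Mule 4 sketch of the maxConcurrency approach (the flow name, config-ref, and directory are hypothetical, not taken from the configuration above):

```xml
<!-- At most 4 messages are processed concurrently by this flow;
     further files wait until a slot frees up -->
<flow name="statusUpdateFlow" maxConcurrency="4">
    <sftp:listener config-ref="SFTP_Config" directory="/inbound">
        <scheduling-strategy>
            <fixed-frequency frequency="10000"/>
        </scheduling-strategy>
    </sftp:listener>
    <!-- conversion, validation, and routing steps here -->
</flow>
```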

Not able to receive TCP (mode is server) message on channel with undefined terminator

Currently we are using Spring Integration 2.1.0 Release in our application (due to a legacy application we cannot switch to the latest version).
Application flow is as below:
All the configuration details are defined in a configuration file, like host name, port number, terminator etc
Get the message from TCP using tcp-inbound-channel-adapter via channel.
Pass it to splitter for further flow.
The issue is: if a message arrives with a terminator other than the one defined in the configuration file, the message never reaches the class defined for the splitter; with the correct terminator it works fine.
The requirement is that if the terminator value is different, an error message should be sent back on the same channel using tcp-outbound-channel-adapter (separate inbound and outbound adapters are used because the call is asynchronous).
I have enabled application and Spring logging at TRACE level, but I cannot work out why and where the message is stuck.
The configuration file is:
<Config>
<host>localhost</host>
<port>8888</port>
<mode>server</mode>
<terminator>10</terminator>
<msgLength>65535</msgLength>
<inChannel>telnetInboundCustomChannel</inChannel>
</Config>
XML for connection details
<beans:bean id="serverCustomSerializer"
class="com.core.serializer.CustomSerializer">
<beans:property name="terminatingChar" value="${server.terminator}"/>
<beans:property name="maxLength" value="${server.msgLength}"/>
</beans:bean>
<beans:bean id="serverFactoryTaskExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<beans:property name="corePoolSize" value="5" />
<beans:property name="queueCapacity" value="0" />
</beans:bean>
<int:channel id="telnetLandingChannel" />
<ip:tcp-connection-factory id="serverFactory" type="server"
host="${server.host}" port="${server.port}" single-use="false"
serializer="${server.serializer}" deserializer="${server.serializer}"
task-executor="serverFactoryTaskExecutor"/>
<ip:tcp-inbound-channel-adapter id="serverInboundAdpater"
channel="telnetLandingChannel" connection-factory="serverFactory"
error-channel="errorChannel" auto-startup="false"/>
<ip:tcp-outbound-channel-adapter id="serverOutboundAdapter"
channel="serverReplyChannel"
connection-factory="serverFactory"
auto-startup="true"/>
XML for Channel details and flow are:
<int:channel id="telnetInboundCustomChannel" />
<int:splitter id="messageSplitter"
input-channel="telnetInboundCustomChannel" ref="telnetCustomMessageSplitter"
method="splitCustomMessageStream"
output-channel="base24CustomSplitterChannel" />
<int:filter id="messageFilter"
input-channel="base24CustomSplitterChannel"
output-channel="base24CustomCoreMessageChannel"
ref="telnetCustomMessageFilter"
method="customMessageFilter" />
<!-- Other code to get data from the filter and pass it to the correct router -->
If the message were somehow visible in the filter class, I could apply the logic to write an error code back on the TCP connection.
I have also applied breakpoints on run() of the TcpNetConnection class, but I am not able to understand Spring Integration's internal flow, or how the message even reaches the splitter.
I have noticed one more thing: if I send a message with the correct terminator after sending one with a wrong terminator, Spring appends the new message to the old one.
It looks like without the correct terminator Spring cannot cut the frame, and the data gets stuck before telnetInboundCustomChannel.
Please advise how to fix this issue, and explain the cause, for better understanding.
It's not clear how you can detect a bad terminator. By definition, the deserializer needs to know a message is complete before returning. You could detect a socket close (bite < 0) with n > 0 and return a special message, but I don't see how else you can emit a message unless you know which invalid terminator(s) to look for.
EDIT
If you mean checking for another "special" (non-printable) character, then you can use something like...
if (n > 0 && bite == bytes.byteValue()) {
    break;
}
else if (bite < 0x20) {
    return ("Bad terminator for " + new String(buffer, 0, n)).getBytes();
}
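To make the idea concrete, here is a self-contained sketch of such a read loop outside Spring Integration (the class and method names are hypothetical, and the buffering differs from the deserializer's fixed byte array; it only illustrates the "complete frame vs. bad terminator" decision):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TerminatorReader {

    /**
     * Reads bytes until the expected terminator arrives and returns the frame.
     * If a different control character (< 0x20) arrives first, returns an
     * error message instead of blocking forever waiting for the terminator.
     */
    public static byte[] readFrame(InputStream in, byte terminator, int maxLength)
            throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        int n = 0;
        int bite;
        while ((bite = in.read()) >= 0) {
            if (bite == terminator) {
                return buffer.toByteArray();           // complete frame
            }
            if (bite < 0x20) {                         // unexpected control char
                return ("Bad terminator for " + buffer.toString("US-ASCII"))
                        .getBytes("US-ASCII");
            }
            buffer.write(bite);
            if (++n >= maxLength) {
                throw new IOException("Frame exceeds maxLength " + maxLength);
            }
        }
        throw new IOException("Stream closed before terminator");
    }
}
```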
The requirement as stated is strictly meaningless. There is no such thing as a message in TCP, and no such thing as a message with an undefined terminator in any protocol.

How to ignore MessageHandlingException while dealing with ProcessBuilder in service-activator

This is a continuation of the question in How to integrate legacy executables into Spring Integration application?
I asked about incorporating legacy executable files and then incorporated ProcessBuilder.
ProcessBuilder works fine; however, I am getting org.springframework.messaging.MessageHandlingException: failed to write Message payload to file. As I understand it, the problem is that every service-activator needs inbound and outbound channels, but in reality the file is processed inside the executable prj itself. What is the best way to avoid this kind of exception?
The configuration file is as follows:
<beans xmlns="http://www.springframework.org/schema/beans"
...
<int-file:inbound-channel-adapter id="producer-file-adapter"
channel="inboundChannel" directory="file:/Users/anarinsky/springint/chem"
prevent-duplicates="true">
<int:poller fixed-rate="5000" />
</int-file:inbound-channel-adapter>
<int:channel id="inboundChannel" />
<int:channel id="outboundChannel" />
<int:service-activator input-channel="inboundChannel" output-channel="outboundChannel"
expression="new ProcessBuilder('/Users/anarinsky/springint/chem/prj', '/Users/anarinsky/springint/chem/a.dat', '/Users/anarinsky/springint/chem/a.out').start()">
</int:service-activator>
<int-file:outbound-channel-adapter
channel="outboundChannel" id="consumer-file-adapter"
directory="file:/Users/anarinsky/springint/chem"/>
</beans>
The exception stack starts as follows:
19:29:46.236 ERROR [task-scheduler-1][org.springframework.integration.handler.LoggingHandler] org.springframework.messaging.MessageHandlingException: failed to write Message payload to file
at org.springframework.integration.file.FileWritingMessageHandler.handleRequestMessage(FileWritingMessageHandler.java:309)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:170)
at
...
Caused by: java.lang.IllegalArgumentException: unsupported Message payload type [java.lang.UNIXProcess]
at org.springframework.integration.file.FileWritingMessageHandler.handleRequestMessage(FileWritingMessageHandler.java:304)
... 45 more
If the service invoked by the activator has a void return type or returns null, you don't need an output channel. If your service returns a value and you want to discard it, set output-channel="nullChannel" (it's like /dev/null on Unix).
Right now, it looks like you are trying to write the result of starting the process (a UNIXProcess) to a file. The file outbound adapter doesn't support that payload type, as the exception explains.
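Applying that advice to the configuration above, the service-activator could route its result to nullChannel so the UNIXProcess never reaches the file adapter (a sketch; it still doesn't wait for the process to finish, which you may want to handle separately):

```xml
<!-- Discard the UNIXProcess result instead of routing it to the file adapter -->
<int:service-activator input-channel="inboundChannel" output-channel="nullChannel"
    expression="new ProcessBuilder('/Users/anarinsky/springint/chem/prj', '/Users/anarinsky/springint/chem/a.dat', '/Users/anarinsky/springint/chem/a.out').start()">
</int:service-activator>
```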

Issues with Spring Integration and process taking time and pausing

I am looking at an issue in our application. Spring Integration is used to poll a directory and then process the files in it. It can process 5k 1 KB files, but sometimes there is a long pause where the application sits idle doing nothing before completing the run in 4 minutes. The next run then takes a bit longer, the one after that slightly longer again, and so on, until I restart the application, at which point it goes back to the 4-minute mark. Has anyone experienced this issue before?
I wrote a standalone version without Spring Integration and don't get the same issue.
I have also pasted the XML config below, just in case I have done something wrong that I can't spot.
Thanks in advance.
<!-- Poll the input file directory for new files. If found, send a Java File object on inputFileChannel -->
<file:inbound-channel-adapter directory="file:${filepath}"
channel="inputFileChannel" filename-regex=".+-OK.xml">
<si:poller fixed-rate="5000" max-messages-per-poll="1" />
</file:inbound-channel-adapter>
<si:channel id="inputFileChannel" />
<!-- Call processFile() and start parsing the XML inside the File -->
<si:service-activator input-channel="inputFileChannel"
method="splitFile" ref="splitFileService">
</si:service-activator>
<!-- Poll the input file directory for new files. If found, send a Java File object on inputFileChannel -->
<file:inbound-channel-adapter directory="file:${direcotrypath}" channel="inputFileRecordChannel" filename-regex=".+-OK.xml">
<si:poller fixed-rate="5000" max-messages-per-poll="250" task-executor="executor" />
</file:inbound-channel-adapter>
<task:executor id="executor" pool-size="8"
queue-capacity="0"
rejection-policy="DISCARD"/>
<si:channel id="inputFileRecordChannel" />
<!-- Call processFile() and start parsing the XML inside the File -->
<si:service-activator input-channel="inputFileRecordChannel"
method="processFile" ref="processedFileService">
</si:service-activator>
<si:channel id="wsRequestsChannel"/>
<!-- Sends messages from wsRequestsChannel to the httpSender, and returns the responses on
wsResponsesChannel. This is used once for each record found in the input file. -->
<int-ws:outbound-gateway uri="#{'http://localhost:'+interfaceService.getWebServiceInternalInterface().getIpPort()+'/ws'}"
message-sender="httpSender"
request-channel="wsRequestsChannel" reply-channel="wsResponsesChannel" mapped-request-headers="soap-header"/>
<!-- Handles the responses from the web service (wsResponsesChannel). Again
this is used once for each response from the web service -->
<si:service-activator input-channel="wsResponsesChannel"
method="handleResponse" ref="responseProcessedFileService">
</si:service-activator>
As I surmised in the comment to your question, the (default) AcceptOnceFileListFilter does not scale well for a large number of files because it performs a linear search over the previously processed files.
We can make some improvements there; I opened a JIRA Issue for that.
However, if you don't need the semantics of that filter (i.e. your flow removes the input file on completion), you can replace it with another filter, such as an AcceptAllFileListFilter.
If you need accept once semantics you will need a more efficient implementation for such a large number of files. But I would warn that when using such a large number of files, if you don't remove them after processing, things are going to slow down anyway, regardless of the filter.
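If the flow does delete input files after processing, the default accept-once bookkeeping can be dropped by supplying an explicit filter (note that `filter` is mutually exclusive with `filename-regex`, so the regex moves into the filter bean; the bean id here is mine):

```xml
<!-- Keep the regex match but drop the linear-scan accept-once filter -->
<bean id="okFileFilter"
      class="org.springframework.integration.file.filters.RegexPatternFileListFilter">
    <constructor-arg value=".+-OK\.xml"/>
</bean>

<file:inbound-channel-adapter directory="file:${filepath}"
    channel="inputFileChannel" filter="okFileFilter">
    <si:poller fixed-rate="5000" max-messages-per-poll="1" />
</file:inbound-channel-adapter>
```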

Camel File processing

I'm using Camel (2.11.0) to try and achieve the following functionality:
If a file exists at a certain location, copy it to another location and then begin processing it
If no such file exists, then I don't want the file consumer/poller to block; I just want processing to continue to a direct:cleanup route
I only want the file to be polled once!
Here's what I have so far (using Spring XML):
<camelContext id="my-camel-context" xmlns="http://camel.apache.org/schema/spring">
<route id="my-route">
<from uri="file:///home/myUser/myApp/fizz?include=buzz_.*txt"/>
<choice>
<when>
<!-- If the body is empty/NULL, then there was no file. Send to cleanup route. -->
<simple>${body} == null</simple>
<to uri="direct:cleanup" />
</when>
<otherwise>
<!-- Otherwise we have a file. Copy it to the parent directory, and then continue processing. -->
<to uri="file:///home/myUser/myApp" />
</otherwise>
</choice>
<!-- We should only get here if a file existed and we've already copied it to the parent directory. -->
<to uri="bean:shouldOnlyGetHereIfFileExists?method=doSomething" />
</route>
<!--
Other routes defined down here, including one with a "direct:cleanup" endpoint.
-->
</camelContext>
With the above configuration, if there is no file at /home/myUser/myApp/fizz, then Camel just waits/blocks until there is one. Instead, I want it to just give up and move on to direct:cleanup.
And if there is a file, I see it getting processed inside the shouldOnlyGetHereIfFileExists bean, but I do not see it getting copied to /home/myUser/myApp; so it's almost as if the <otherwise> element is being skipped/ignored altogether!
Any ideas? Thanks in advance!
Try this setting, and tune your polling interval to suit:
From Camel File Component docs:
sendEmptyMessageWhenIdle
default =false
Camel 2.9: If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.
Regarding writing the file, add a log statement inside the <otherwise> to ensure it's being executed. If so, check file / folder permissions, etc.
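Applied to the route above, the consumer URI would become something like this (note the `&amp;` escaping required for query parameters inside Spring XML):

```xml
<from uri="file:///home/myUser/myApp/fizz?include=buzz_.*txt&amp;sendEmptyMessageWhenIdle=true"/>
```

With this option, an idle poll delivers a message with a null body, so the `${body} == null` branch of the choice fires and routes to direct:cleanup instead of blocking.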
Good luck.
One error I faced when I tried using the condition:
<simple>${body} != null</simple>
was that it always returned true.
Please go through the below link:
http://camel.465427.n5.nabble.com/choice-when-check-BodyType-null-Body-null-td4259599.html
It may help you.
This is very old, but if anyone finds this, you can poll only once with
"?repeatCount=1"
I know the question was asked almost 4 years ago, but I had exactly the same problem yesterday.
So I will leave my answer here; maybe it will help someone else.
I am using Camel version 3.10.0.
To make it work exactly as described in the question:
If a file exists at a certain location, copy it to another location and then begin processing it
If no such file exists, then I don't want the file consumer/poller to block; I just want processing to continue to a direct:cleanup route
ONLY want the file to be polled once!
Using ${body} == null
The configuration options we need are:
sendEmptyMessageWhenIdle=true  // send an empty body when the poll finds no files
maxMessagesPerPoll=1           // max files taken in one poll
repeatCount=1                  // how many times the poll will run
greedy=true                    // if the last poll returned files, poll again immediately
XML:
<camel:endpoint id="" uri="file:DIRECTORY?sendEmptyMessageWhenIdle=true&amp;initialDelay=100&amp;maxMessagesPerPoll=1&amp;repeatCount=1&amp;greedy=false" />
