My spring-boot-integration app can run on multiple servers (nodes), but they all read from a common directory. I wrote a custom locker that takes a lock on a file so that no other instance can process the same file. All Spring configuration is done in XML.
The application acquires the lock but is unable to read the content of the locked file:
java.io.IOException: The process cannot access the file because another process has locked a portion of the file
As suggested in forums, the content of a locked file can apparently only be accessed through a ByteBuffer.
So I tried to transform the file to bytes using a file-to-bytes-transformer and passed the result as input to the outbound gateway, but the instance does not start.
Any suggestion?
<file:file-to-bytes-transformer input-channel="filesOut" output-channel="filesOutChain"/>
<integration:chain id="filesOutChain" input-channel="filesOutChain">
    <file:outbound-gateway id="fileMover"
                           auto-create-directory="true"
                           directory-expression="headers.TARGET_PATH"
                           mode="REPLACE">
        <file:request-handler-advice-chain>
            <ref bean="retryAdvice" />
        </file:request-handler-advice-chain>
    </file:outbound-gateway>
    <integration:gateway request-channel="filesChainChannel" error-channel="errorChannel"/>
</integration:chain>
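For reference, here is a minimal sketch (not from the original post) of the "read the locked file over a ByteBuffer" idea: the locker takes the lock and reads through the same FileChannel that holds it, so the read is not rejected as coming from another process. Class and method names are assumptions.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class LockingFileReader {

    // Returns the file content if this instance wins the lock, or null if another
    // instance already holds the lock and the file should be skipped.
    public static byte[] lockAndRead(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return null; // some other instance is processing this file
            }
            try {
                ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
                while (buffer.hasRemaining() && channel.read(buffer) > 0) {
                    // keep reading through the locked channel until the buffer is full
                }
                return buffer.array();
            } finally {
                lock.release(); // or hold the lock until the file has been moved/deleted
            }
        }
    }
}

A helper along these lines hands the downstream flow a byte[] instead of the still-locked File.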
I'm attempting to use the Fabric-SDK-Java (https://github.com/hyperledger/fabric-sdk-java) in a custom client application (it works with an admin-role user in the wallet) to update an existing channel's configuration. My expectation was for:
#1) the channel configuration file to be retrieved from the ledger (Done by SDK)
#2) the file to be converted from .pb to .json (Done by SDK)
#3) the file to be modified by me (Done by custom client code)
#4) add the peer signatures for member orgs of the channel to the transaction (Done by SDK)
#5) have the orderer process the transaction into a block and submit it to the peers' ledger
Problem:
At step 5, I get an error in my client code console:
Channel mychannel orderer localhost:7050 status returned failure code 400 (BAD_REQUEST) during orderer next
In the orderer's log, I get the following at WARN level:
Rejecting broadcast of config message from <ip> because of error: error applying config update to existing channel 'mychannel': error authorizing update: unexpected EOF
What I've attempted so far is to reduce my channel down to just one admin org and the orderer for testing, pull down the .pb file of the channel config for my channel, convert it to JSON via the terminal commands (step 1 of https://hyperledger-fabric.readthedocs.io/en/release-2.2/config_update.html), remove all header info using the jq tool, modify that JSON manually in a text editor, and then use the following code to update the channel:
Channel myChannel = network.getChannel();
String msg = ... // obtains the channel config JSON file from a directory
UpdateChannelConfiguration ucc = new UpdateChannelConfiguration();
ucc.setUpdateChannelConfiguration(msg.getBytes());
myChannel.updateChannelConfiguration(ucc, myChannel.getUpdateChannelConfigurationSignature(ucc, user)); // where user is an object implementing the User interface, taking a username, mspId, Enrollment object, and an admin role in a Set
My questions are:
#1) What is the meaning and cause of the EOF error?
#2) Is the channel config JSON file that was converted from the .pb file (with headers removed) the correct file to pass to the "setUpdateChannelConfiguration" method after I modify it with my updates?
#3) Do I have to manually change the version field in the modified sections of the channel config json file or will the method take care of that automatically?
#4) Do I need the orderer signature along with the admin org peer signature on the update transaction (via the getUpdateChannelConfigurationSignature method)? (I attempted this but it didn't have any effect on the error.)
The EOF error means the configuration file you submitted has the wrong format. Based on your question, I think you submitted the JSON instead of the .pb file, right?
No. After you modify it, you have to re-encode the JSON to .pb before submitting it to the orderer. Check Step 3 of the documentation linked above.
No, don't change it manually.
No, please follow the documentation provided above.
One thing to note: it's better not to use the SDK for this, as mentioned in the documentation:
The Fabric v2.x SDKs only support transaction and query functions and event listening. Support for administrative functions for channels and nodes has been removed from the SDKs in favor of the CLI tools.
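If you do keep the SDK for the final submission, a minimal sketch of feeding it the re-encoded protobuf rather than the JSON could look like the code below. The file name config_update.pb is a placeholder for whatever the re-encoding step from the documentation produced; myChannel and user are the objects from the question.

// java.nio.file.Files / Paths; UpdateChannelConfiguration is from org.hyperledger.fabric.sdk
byte[] configUpdateBytes = Files.readAllBytes(Paths.get("config_update.pb")); // placeholder path

UpdateChannelConfiguration ucc = new UpdateChannelConfiguration();
ucc.setUpdateChannelConfiguration(configUpdateBytes); // the .pb bytes, not the JSON

// sign and submit; the SDK calls throw checked exceptions, omitted here for brevity
byte[] signature = myChannel.getUpdateChannelConfigurationSignature(ucc, user);
myChannel.updateChannelConfiguration(ucc, signature);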
We are developing a Spring Batch application that will process "big" files in the future. To keep the memory footprint low, we run Spring Batch on the smallest possible chunks of these files.
After processing, we want to write a result back to SFTP, which also happens per chunk of the input file.
The current approach is as follows:
StepExecutionListener.before(): we send a message to the SftpOutboundAdapter with FileExistsMode.REPLACE and an empty payload to create an empty file (with a .writing suffix)
Reader: will read the input file
Processor: will enhance the input with the results and return a list of string
Writer: will send the list of strings to another SftpOutboundAdapter with FileExistsMode.APPEND
StepExecutionListener.after(): if the execution was successful, we rename the file to remove the .writing suffix.
Now I saw that there are Streaming Inbound Adapters but I could not find Streaming Outbound Adapters.
Is appending really the only/best way to solve this, or is it possible to stream the file content?
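For illustration only, the per-chunk append writer described above could look roughly like the sketch below; the channel name and the assumption that the SFTP outbound adapter resolves the remote file name from the file_name header are mine, not part of the original setup.

import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.integration.file.FileHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;

public class SftpAppendingItemWriter implements ItemWriter<String> {

    // assumed channel, backed by an SFTP outbound adapter configured with FileExistsMode.APPEND
    private final MessageChannel toSftpAppendChannel;
    private final String remoteFileName;

    public SftpAppendingItemWriter(MessageChannel toSftpAppendChannel, String remoteFileName) {
        this.toSftpAppendChannel = toSftpAppendChannel;
        this.remoteFileName = remoteFileName;
    }

    @Override
    public void write(List<? extends String> chunk) {
        // one message per chunk; the adapter appends it to the .writing file on the SFTP server
        String payload = String.join(System.lineSeparator(), chunk) + System.lineSeparator();
        toSftpAppendChannel.send(MessageBuilder.withPayload(payload)
                .setHeader(FileHeaders.FILENAME, remoteFileName + ".writing")
                .build());
    }
}

Only one chunk is held in memory at a time; the accumulation happens on the SFTP side through FileExistsMode.APPEND.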
Scenario:
I am trying to stream and process some large XML files. These files are sent from a producer asynchronously.
producerTemplate.sendBodyAndHeaders(endpointUri, inStream, ImmutableMap.of(JOBID_PROPERTY, importJob.getId()));
I need to batch all file input streams, identify the files by probing them with XPath, and reorder them according to their content. I have the following route:
from("direct:route1")
.streamCaching()
.choice()
.when(xpath("//Tag1")) .setHeader("execOrder", constant(3)) .setHeader("xmlRoute", constant( "direct:some-route"))
.when(xpath("//Tag2")) .setHeader("execOrder", constant(1)) .setHeader("xmlRoute", constant( "direct:some-other-route"))
.when(xpath("//Tag3")) .setHeader("execOrder", constant(2)) .setHeader("xmlRoute", constant( "direct:yet-another-route"))
.otherwise()
.to("direct:somewhereelse")
.end()
.resequence(header("execOrder"))
.batch(new BatchResequencerConfig(300, 10000L))
.allowDuplicates()
.recipientList(header("xmlRoute"))
When running my code I get the following error:
2017-11-23 11:43:13.442 INFO 10267 --- [ - Batch Sender] c.w.n.s.m.DefaultImportJobService : Updating entity ImportJob with id 5a16a61803af33281b22c716
2017-11-23 11:43:13.451 WARN 10267 --- [ - Batch Sender] org.apache.camel.processor.Resequencer : Error processing aggregated exchange: Exchange[ID-int-0-142-bcd-wsint-pro-59594-1511433568520-0-20]. Caused by: [org.apache.camel.RuntimeCamelException - Cannot reset stream from file /var/folders/dc/fkrgdrnx6txbg7jfdjd_58mm0000gn/T/camel/camel-tmp-39abaae8-9bdd-435a-b63d-299ad8b06415/cos1499080503439465502.tmp]
org.apache.camel.RuntimeCamelException: Cannot reset stream from file /var/folders/dc/fkrgdrnx6txbg7jfdjd_58mm0000gn/T/camel/camel-tmp-39abaae8-9bdd-435a-b63d-299ad8b06415/cos1499080503439465502.tmp
at org.apache.camel.converter.stream.FileInputStreamCache.reset(FileInputStreamCache.java:91)
I've read here that the FileInputStreamCache is closed when XPathBuilder.getDocument() is called and the temp file is deleted, so you get the FileNotFoundException when the XPathBuilder wants to reset the InputStream.
The solution seems to be to disable the spooling to disk like this:
camelContext.getStreamCachingStrategy().setSpoolThreshold(-1);
However, I don't want to do that because of RAM restrictions, i.e. files can get up to 600MB and I don't want to keep them in memory. Any ideas how to solve the problem?
The resequencer is a two-leg (stateful) pattern and will cause the original exchange to be completed beforehand, as it keeps a copy in memory while re-sequencing, until the gap is filled and the messages are sent out in the new order.
Since your input stream comes from some HTTP service, it would be closed before the resequencer gets to output the exchange.
Either do as suggested and store to local disk first, then let the resequencer work on that, or find a way not to use the resequencer.
I ended up doing what Claus and Ricardo suggested. I made a separate route which saves the files to disk. Then another one which probes the files and resequences the exchanges according to a fixed order.
String xmlUploadDirectory = "file://" + Files.createTempDir().getPath() + "/xmls?noop=true"; // presumably Guava's Files.createTempDir()
from("direct:route1")
.to(xmlUploadDirectory)
from(xmlUploadDirectory)
.choice()
.when(xpath("//Tag1")).setHeader("execOrder", constant(3)).setHeader("xmlRoute", constant( "direct:some-route"))
.when(xpath("//Tag2")).setHeader("execOrder", constant(1)).setHeader("xmlRoute", constant( "direct:some-other-route"))
.when(xpath("//Tag3")).setHeader("execOrder", constant(2)).setHeader("xmlRoute", constant( "direct:yet-another-route"))
.otherwise()
.to("direct:somewhereelse")
.end()
.to("direct:resequencing")
from("direct:resequencing")
.resequence(header("execOrder"))
.batch(new BatchResequencerConfig(300, 10000L))
.allowDuplicates()
.recipientList(header("xmlRoute"))
I am polling from SFTP in Mulesoft every second; fileAge is set to 0, the connection pool size is 1, and autoDelete is enabled. I then save the file to a directory with a File connector that polls every 2 seconds with a fileAge of 500 (this is the outbound endpoint). The next flow starts with this same directory as a File inbound endpoint and processes the file; here polling is set to every 3 seconds and autoDelete is enabled. I get the error below, but the file is still processed:
java.io.IOException: The requested file does not exist (//file/7ggot1517.txt)
at org.mule.transport.sftp.SftpClient.getSize(SftpClient.java:499)
at org.mule.transport.sftp.SftpClient.retrieveFile(SftpClient.java:378)
...
Does anyone know how to configure the SFTP and File connectors to:
1. Read the file from SFTP and delete it from SFTP?
2. Process the file from the local directory and delete it?
3. Get rid of that error?
Thank you
Can you try the configuration below? I tried reading a file from FTP to a local directory.
Replace FTP with SFTP.
Use the small Groovy script provided in it. This should work; I just tested it and it works as expected. Deleting can be done with the autoDelete attribute or fileAge. Please let me know if this helps.
<flow name="ftptestFlow">
<ftp:inbound-endpoint host="hostname" port="port" path="path/filename" user="userid" password="password" responseTimeout="10000" doc:name="FTP"/>
<set-variable variableName="fileName" value="fileName" doc:name="fileName"/>
<scripting:component doc:name="getFile">
<scripting:script engine="Groovy"><![CDATA[new File(flowVars.fileName).getText('UTF-8')]]></scripting:script>
</scripting:component>
<file:outbound-endpoint path="path" outputPattern="filename" responseTimeout="10000" doc:name="File"/>
</flow>
Your SFTP inbound endpoint probably polls the file a first time, but a second poll is started before the first one has had a chance to delete the file. Something like this happens:
First poll - a file is found, let's read it => OK
First poll - read the file and process it => OK
Second poll - a file is found, let's read it => OK
First poll - processing finished, delete the file => OK
Second poll - read the file and process it => Error: file has been deleted
As you can see, the second poll detects the presence of the file before the first poll actually deletes it, but by the time it tries to read it, the first poll has already deleted the file.
You can use the tempDir attribute on your SFTP inbound endpoint: it moves the file to a sub-directory of the folder it is read from before processing, ensuring subsequent polls are not triggered for the same file again. It then does something like:
First poll - a file is found, move it to tempDir and let's read it => OK
First poll - read the file and process it => OK
Second poll - No file found (it has been moved!) => OK
First poll - processing finished, delete the file => OK
Such as:
<sftp:inbound-endpoint connector-ref="SFTP"
tempDir="${ftp.path}/tmpPoll"
host="${ftp.host}"
port="${ftp.port}"
path="${ftp.path}"
user="${ftp.user}"
password="${ftp.password}" doc:name="SFTP" responseTimeout="10000"/>
You also need to make sure the SFTP user can read/write the sub-dir or create it if necessary. Everything is documented here.
EDIT: to delete the file from the local machine, you can simply use a Java or Groovy component once it has been properly processed:
try {
    Files.delete(filePath); // java.nio.file.Files, with filePath a java.nio.file.Path
} catch (IOException e) {
    // handle or log the failure
}
I am looking at an issue we have in our application. Spring Integration is used to poll a particular directory and then process the files in it. It can process 5k 1 KB files, but sometimes there is a huge pause where the application just sits idle before completing the run in 4 minutes. The next run then takes a bit longer, the one after that slightly longer again, and so on, until I restart the application and it goes back to the 4-minute mark. Has anyone experienced this issue before?
I wrote a standalone version without Spring Integration and don't get the same issue.
I have also pasted the XML config below, just in case I have done something wrong that I can't spot.
Thanks in advance.
<!-- Poll the input file directory for new files. If found, send a Java File object on inputFileChannel -->
<file:inbound-channel-adapter directory="file:${filepath}"
channel="inputFileChannel" filename-regex=".+-OK.xml">
<si:poller fixed-rate="5000" max-messages-per-poll="1" />
</file:inbound-channel-adapter>
<si:channel id="inputFileChannel" />
<!-- Call splitFile() and start parsing the XML inside the File -->
<si:service-activator input-channel="inputFileChannel"
method="splitFile" ref="splitFileService">
</si:service-activator>
<!-- Poll the input file directory for new files. If found, send a Java File object on inputFileRecordChannel -->
<file:inbound-channel-adapter directory="file:${direcotrypath}" channel="inputFileRecordChannel" filename-regex=".+-OK.xml">
<si:poller fixed-rate="5000" max-messages-per-poll="250" task-executor="executor" />
</file:inbound-channel-adapter>
<task:executor id="executor" pool-size="8"
queue-capacity="0"
rejection-policy="DISCARD"/>
<si:channel id="inputFileRecordChannel" />
<!-- Call processFile() and start parsing the XML inside the File -->
<si:service-activator input-channel="inputFileRecordChannel"
method="processFile" ref="processedFileService">
</si:service-activator>
<si:channel id="wsRequestsChannel"/>
<!-- Sends messages from wsRequestsChannel to the httpSender, and returns the responses on
wsResponsesChannel. This is used once for each record found in the input file. -->
<int-ws:outbound-gateway uri="#{'http://localhost:'+interfaceService.getWebServiceInternalInterface().getIpPort()+'/ws'}"
message-sender="httpSender"
request-channel="wsRequestsChannel" reply-channel="wsResponsesChannel" mapped-request-headers="soap-header"/>
<!-- Handles the responses from the web service (wsResponsesChannel). Again
this is used once for each response from the web service -->
<si:service-activator input-channel="wsResponsesChannel"
method="handleResponse" ref="responseProcessedFileService">
</si:service-activator>
As I surmised in the comment to your question, the (default) AcceptOnceFileListFilter does not scale well for a large number of files because it performs a linear search over the previously processed files.
We can make some improvements there; I opened a JIRA Issue for that.
However, if you don't need the semantics of that filter (i.e. your flow removes the input file on completion), you can replace it with another filter, such as an AcceptAllFileListFilter.
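Since the adapters above also match on filename-regex, a minimal sketch of such a swap is a filter bean that keeps the pattern but drops the accept-once bookkeeping, referenced via the adapter's filter attribute instead of filename-regex. Class and bean names below are illustrative, and AcceptAllFileListFilter is the no-op choice when no pattern is needed.

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.file.filters.FileListFilter;
import org.springframework.integration.file.filters.RegexPatternFileListFilter;

@Configuration
public class FileFilterConfig {

    // Keeps the ".+-OK.xml" match from the adapters above but does no "already seen"
    // bookkeeping, so there is no lookup that grows with every processed file.
    // Only safe when each file is removed (or moved away) after processing.
    @Bean
    public FileListFilter<File> okFileFilter() {
        return new RegexPatternFileListFilter(".+-OK.xml");
    }
}

The XML equivalent is a plain bean definition of the same class, referenced with filter="okFileFilter" on the file:inbound-channel-adapter.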
If you need accept-once semantics, you will need a more efficient implementation for such a large number of files. But I would warn that with such a large number of files, if you don't remove them after processing, things are going to slow down anyway, regardless of the filter.