I am using Spring Batch to read, process, and write files directly from an FTP location.
I was able to read files using the configuration below:
<bean id="cvsFileItemReader2" class="org.springframework.batch.item.file.FlatFileItemReader">
    <!-- Read a CSV file -->
    <property name="resource"
              value="ftp://user123:12496#ftp.myftp.net/Ftpfiles/it/se/dev/expfiles/ABEXCEP.CSV"/>
</bean>
But when I try to write in the same way using FlatFileItemWriter with the configuration below:
<bean class="org.springframework.batch.item.file.FlatFileItemWriter">
    <property name="resource" value="ftp://user123:12496#ftp.myftp.net/Ftpfiles/it/se/dev/expfiles/ABFIXED.TXT"/>
</bean>
I get the following exception:
java.io.FileNotFoundException:
URL [ftp://user123:12496#ftp.myftp.net/Ftpfiles/it/se/dev/expfiles/ABFIXED.TXT]
cannot be resolved to absolute file path because it does not reside in the file system:
ftp://user123:12496#ftp.myftp.net/Ftpfiles/it/se/dev/expfiles/ABFIXED.TXT
at org.springframework.util.ResourceUtils.getFile(ResourceUtils.java:205)
I would appreciate any help with this. Thanks.
Spring's org.springframework.core.io.Resource has a sub-interface called org.springframework.core.io.WritableResource, for which the only implementations I found were FileSystemResource and FileSystemContextResource. So it is not possible to write directly to FTP. What you can do is write locally to disk, then write a tasklet that uploads the file from disk to FTP.
I don't think you can create a remote FTP Resource. One solution is to use Spring Batch to generate your file, then use Spring Integration's FTP/FTPS adapters to transfer the generated file to the FTP server.
Hope this helps.
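As a hedged sketch of that approach (the bean names, channel name, host, and credentials here are illustrative, not taken from the question), the Spring Integration side could look something like this:

```xml
<!-- Session factory holding the FTP connection details (values are placeholders) -->
<bean id="ftpSessionFactory"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="ftp.myftp.net"/>
    <property name="port" value="21"/>
    <property name="username" value="user123"/>
    <property name="password" value="12496"/>
</bean>

<!-- File messages sent to this channel are uploaded to the remote directory -->
<int-ftp:outbound-channel-adapter id="ftpOut"
        channel="toFtpChannel"
        session-factory="ftpSessionFactory"
        remote-directory="/Ftpfiles/it/se/dev/expfiles"/>
```

Your batch job would write the file locally, and a final step would send a message carrying that File to toFtpChannel.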
Use Spring Integration for this type of job (see this example for further explanation).
Use a Tasklet to send the file over SFTP. Refer to this link and this link for more information and sample code.
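For illustration, here is a hedged sketch of the "write locally, then upload" idea in plain Java, using the JDK's built-in ftp:// URL handler. The host, credentials, and paths are made-up examples, and a real batch job would more likely use a dedicated FTP client library inside the tasklet:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FtpUploadSketch {

    // Build a standard ftp:// URL of the form ftp://user:pass@host/path
    static String buildFtpUrl(String user, String password, String host, String remotePath) {
        return "ftp://" + user + ":" + password + "@" + host + "/" + remotePath;
    }

    // Upload a locally written file to the FTP server via the JDK URL handler.
    static void upload(Path localFile, String ftpUrl) throws IOException {
        URLConnection conn = new URL(ftpUrl).openConnection();
        conn.setDoOutput(true);
        try (InputStream in = Files.newInputStream(localFile);
             OutputStream out = conn.getOutputStream()) {
            in.transferTo(out); // Java 9+: copy the local file to the remote stream
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical values: the batch writer produces the file locally first.
        Path localFile = Paths.get("/tmp/ABFIXED.TXT");
        String url = buildFtpUrl("user123", "12496", "ftp.myftp.net",
                "Ftpfiles/it/se/dev/expfiles/ABFIXED.TXT");
        System.out.println("Uploading " + localFile + " to " + url);
        // upload(localFile, url); // requires a reachable FTP server
    }
}
```

In a Spring Batch job, the upload call would sit inside a Tasklet's execute method as the last step, after the FlatFileItemWriter has finished writing to the local file system.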
I am trying to set a dynamic path in the Camel file component to avoid platform-specific paths, but Camel is not allowing this because it doesn't expect $ in a directory path.
What I am trying to do is set a VM param, say file.home, and then use it in my file component like:
file:\${file.home}\type1
This would allow me to eliminate the platform-specific path entirely.
I tried externalizing this into a property file, but then Spring doesn't understand Camel's dynamic language, e.g. ${in.header.abc}.
Can someone help me achieve this?
Those answers are not correct. If you use a BridgePropertyPlaceholderConfigurer and a PropertiesComponent, you can use dynamic values everywhere.
<bean id="bridgePropertyPlaceholder" class="org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer">
<property name="properties">
<value>
...normal property syntax name=value - use this in test definitions
</value>
</property>
</bean>
Or use something like this in real application
<bean id="dummyPropertyPlaceholder" class="org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer">
<property name="location" value="classpath:/dummy.properties" />
</bean>
e.g.
<route id="DummyRoute">
<from uri="file:{{dummy.int.dir}}?delay={{poll.delay}}&amp;initialDelay={{initial.delay}}&amp;{{readlockChanged}}&amp;move={{root}}/{{dummy.arch.dir}}/{{archive.dir}}&amp;moveFailed={{error.dir}}&amp;scheduledExecutorService=#scheduledExecutorService" />
<to uri="file:{{root}}/{{dummy.int.destination.dir}}" />
</route>
There is a trick with later versions of Camel: use $simple{file.path} instead of ${file.path}, so Spring won't strip your ${} and pass the bare file.path to Camel. For example, a move option on an input 'from' URI might look like this:
move=archive/$simple{date:now:yyyyMMdd}/$simple{file:name}
http://camel.apache.org/how-do-i-use-spring-property-placeholder-with-camel-xml.html
http://camel.apache.org/using-propertyplaceholder.html
You can use a dynamic URI, but only in to endpoints (using specific components). You can't use it in from.
There you can find an explanation of how to use toD (from Camel 2.16) or recipientList: How to use dynamic URI in to.
But as I said, this is only possible in to, not in from. As a workaround, you have to write a route for each location you expect to be in use. You can also use the autoCreate=false option to stop Camel from creating directories automatically; for example, without autoCreate=false the Linux path /home/user/test will create the directory structure c:\home\user\test on Windows.
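As a small hedged sketch (the directory name and the log endpoint are illustrative), autoCreate is just another option appended to the endpoint URI:

```xml
<route>
    <!-- autoCreate=false prevents Camel from creating /home/user/test if it is missing -->
    <from uri="file:/home/user/test?autoCreate=false"/>
    <to uri="log:incoming"/>
</route>
```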
Since Camel 2.16 we can use:
from("file://folder")
    .toD("file://folder/${file:onlyname}")
Here are some details about using properties within camel and/or spring xml: http://camel.apache.org/using-propertyplaceholder.html
According to Camel File Component:
Camel supports only endpoints configured with a starting directory. So
the directoryName must be a directory. If you want to consume a single
file only, you can use the fileName option e.g., by setting
fileName=thefilename. Also, the starting directory must not contain
dynamic expressions with ${} placeholders. Again use the fileName
option to specify the dynamic part of the filename.
So, you could do something like:
from(...).to("file://?fileName=${somePathAndSomeFile}").to(...)
Some of the comments/answers in this thread are misleading, as it's possible to set the value of "from" endpoint URI to have a value of a directory taken from a properties file as asked.
Place a propertyPlaceholder element under camelContext, and ensure that the properties file can be found on the classpath:
<propertyPlaceholder location="dir.properties" />
<from id="_fromInputFolder" uri="file:{{fromDir}}?"/>
I am trying to develop an application that sends zip files to a messaging queue running on a separate server. I have implemented the messaging part using ActiveMQ, and the queue is up and listening for messages on the server side.
I have a similar application that sends JSON files as messages to a queue, and it works fine. I am writing my application based on how that one was implemented.
Below is part of my spring integration configuration:
<int-file:inbound-channel-adapter id="filesIn" directory="${harvest.directory}" filename-pattern="*.zip">
<int:poller id="poller" fixed-rate="${harvest.pollRate}" max-messages-per-poll="${harvest.queueCapacity}" />
</int-file:inbound-channel-adapter>
<int:transformer id="copyFiles" input-channel="filesIn"
output-channel="routingChannel" ref="transformationHandler" method="handleFile"/>
<int-jms:outbound-channel-adapter id="jmsOut" destination="requestQueue" channel="filesIn"/>
.
.
.
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${activemq.url}" />
</bean>
<bean id="requestQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="myQueue"/>
</bean>
As you can see, there is a transformer, but in my case I have nothing to transform and would happily drop it if possible. I just need to poll a directory for zip files and, whenever there is one, send it to the queue called myQueue. Unfortunately, the approach of receiving files from the filesIn inbound-channel-adapter and sending them to the queue using the jmsOut outbound-channel-adapter doesn't seem to work.
I am not sure if this is the right way to do it or if it's doable. Could someone please tell me what's wrong here and what I should do?
I know your question is how to do this purely in Spring, but have you looked into using Apache Camel?
Most specifically the File component and one of JMS (JMS/ActiveMQ) components.
It does the polling for you and is highly configurable. It also plays very well with the other technologies you are using in your example. The route could be configured entirely in Spring.
Are you trying to send the File object, or the contents?
While java.io.File is Serializable, it doesn't really transfer properly (it has a number of transient fields).
If the server has access to the filesystem (e.g. NFS), transfer just the file name...
<int:transformer ... expression="payload.absolutePath" />
If you want to transfer the contents of the zip file(s), use a
<int-file:file-to-bytes-transformer/>.
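Putting that together as a hedged sketch against the configuration in the question (the jmsChannel name is illustrative): drop the custom transformer bean, put the file-to-bytes transformer between the file adapter and the JMS adapter, and point the JMS adapter at its output channel:

```xml
<!-- Turn each java.io.File payload from filesIn into its byte[] contents -->
<int-file:file-to-bytes-transformer input-channel="filesIn" output-channel="jmsChannel"/>

<!-- Send the bytes to the queue as a JMS BytesMessage -->
<int-jms:outbound-channel-adapter id="jmsOut" destination="requestQueue" channel="jmsChannel"/>
```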
I've run into a strange obstacle using Apache Camel's file component. The gist of the problem is this: I'm using the file component to load all messages from a directory, then use a processor to perform some operation on the loaded files. If the process succeeds, I move the file to a different location and delete the original. If the process fails, however, I leave the message in its original directory. The code looks something like this:
from("fileLocation")
.process(new MessageProcessor())
.choice()
.when(header("processPassed").isEqualTo(true))
.to("file:newLocation")
.otherwise()
.to("fileLocation");
If the processor passes then everything works great. However, if the processor fails and I'm trying to return the message back to its original location, that doesn't work. Any idea on how to fix this?
I think there are two problems affecting you. First, you cannot write the file back to the original location because Camel is still processing it; second, there is a risk that you'll repeatedly process the same file. To get around this you can use two options:
preMove to use a working directory
idempotent to prevent the same file from being processed a second time.
Here is a slightly modified version of your code that I believe does what you require
from("file:fileLocation?idempotent=true&preMove=working")
.process(new MessageProcessor())
.choice()
.when(header("processPassed").isEqualTo(true))
.to("file:newLocation")
.otherwise()
.to("file:fileLocation");
More details available in the File Endpoint documentation.
Read the documentation; there is a moveFailed option that you can specify.
It is a good idea to place failed files in an error folder rather than the original location, though; then you will know where to look for bad files.
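As a hedged sketch (the directory names and the processor endpoint are illustrative), moveFailed is just another file-endpoint option; it fires when the exchange fails with an exception:

```xml
<route>
    <!-- Successfully processed files go to done/, files whose exchange fails go to error/ -->
    <from uri="file:fileLocation?move=done&amp;moveFailed=error"/>
    <to uri="bean:messageProcessor"/>
</route>
```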
Update:
Since it's a firm requirement that you leave the files in place, you need to set up a persistent idempotent repository.
This is mostly copied from the docs; it saves the absolute file paths of the processed files into a file on disk, so the same filename will never be processed again.
<!-- this is our file based idempotent store configured to use the .filestore.dat as file -->
<bean id="fileStore" class="org.apache.camel.processor.idempotent.FileIdempotentRepository">
<!-- the filename for the store -->
<property name="fileStore" value="some/folder/.idempotentrepo/filestore.dat"/>
<!-- the max filesize in bytes for the file.-->
<property name="maxFileStoreSize" value="<some size>"/>
</bean>
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="file://some/folder/?idempotentRepository=#fileStore&amp;noop=true"/>
<to uri="mock:result"/>
</route>
</camelContext>
I am working on a Spring MVC project, and in my Service object I need some information like a system password, ID, URL, etc. I would like to put this into one of the XML files so it can be changed without changing code. Which XML file should I put it in, and how do I read it into the object?
Moving constants to XML is a first step, but to make your application truly configurable you should use external .properties file:
<context:property-placeholder location="file:///foo/bar/conf.properties" />
And then use it everywhere in your XML configuration:
<property name="password" value="${db_password}"/>
Where conf.properties contains:
db_password=secret
Note that you can also place properties file inside WAR (with location="classpath:/foo/bar/conf.properties").
If you are a happy user of Spring 3.1 (currently RC2) you can take advantage of the new @PropertySource annotation:
@Configuration
@PropertySource("classpath:/com/myco/app.properties")
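To make the mechanism concrete, here is a hedged plain-Java sketch of what placeholder resolution does conceptually: scan a template for ${key} tokens and substitute values from a Properties object. This is an illustration, not Spring's actual implementation:

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace each ${key} in the template with the matching property value.
    static String resolve(String template, Properties props) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = props.getProperty(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Unresolved placeholder: " + m.group(1));
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("db_password", "secret");
        // prints: <property name="password" value="secret"/>
        System.out.println(resolve("<property name=\"password\" value=\"${db_password}\"/>", props));
    }
}
```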
Hi, I am trying to run Apache Nutch 1.2 on Amazon's EMR.
To do this I specify an input directory from S3. I get the following error:
Fetcher: java.lang.IllegalArgumentException:
This file system object (hdfs://ip-11-202-55-144.ec2.internal:9000)
does not support access to the request path
's3n://crawlResults2/segments/20110823155002/crawl_fetch'
You possibly called FileSystem.get(conf) when you should have called
FileSystem.get(uri, conf) to obtain a file system supporting your path.
I understand the difference between FileSystem.get(uri, conf) and FileSystem.get(conf). If I were writing this myself I would call FileSystem.get(uri, conf), but I am trying to use existing Nutch code.
I asked this question, and someone told me that I needed to modify hadoop-site.xml to include the following properties: fs.default.name, fs.s3.awsAccessKeyId, fs.s3.awsSecretAccessKey. I updated these properties in core-site.xml (hadoop-site.xml does not exist), but that didn't make a difference. Does anyone have any other ideas?
Thanks for the help.
Try specifying the following in hadoop-site.xml:
<property>
    <name>fs.default.name</name>
    <value>s3n://crawlResults2</value>
</property>
Note that fs.default.name takes a file system URI (here the S3 bucket from your error message), not a class name. This tells Hadoop, and therefore Nutch, that S3 should be used by default.
The properties fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey are only needed when your S3 objects require authentication (an S3 object can be readable by all users, or only with authentication).
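For completeness, a hedged sketch of those credential properties in the same configuration file (the key values are placeholders you must replace with your own):

```xml
<property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```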