I currently have a Spring Boot web application. The application writes to a file every time the web app is refreshed. Locally I can see the files in the root path directory, but when I upload my .jar file to Cloud Foundry, how can I obtain the files that are being written?
Code snippet that writes to the file:
try {
    Date date = new Date();
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH-mm-ss");
    File file = new File(dateFormat.format(date) + "data.txt");
    BufferedWriter out = new BufferedWriter(new FileWriter(file));
    out.write("Some Data is being written");
    out.close();
} catch (IOException e) {
    e.printStackTrace();
}
I am able to find data.txt in my root folder. But how can I get those files after I package my application into a jar and push it to Cloud Foundry?
Cf push command
cf push testapp -p target/webapp.jar
I hope this doc will be useful for you:
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-traditional-deployment.html#howto-convert-an-existing-application-to-spring-boot
Try this: run cf ssh-code to get a one-time passcode, then use it as the password for:
scp -P 2222 -o User=cf:<APP_GUID>/0 ssh.<system domain>:/<path>/<file name> <LOCAL_FILE_SYS>
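For example, a rough end-to-end sequence (testapp and ssh.example.com are placeholders for your app name and system domain; a pushed jar is typically unpacked under /home/vcap/app in the container, so adjust the remote path to wherever your files are written):
cf app testapp --guid       # prints the <APP_GUID>
cf ssh-code                 # prints a one-time passcode; use it as the scp password
scp -P 2222 -o User=cf:<APP_GUID>/0 ssh.example.com:/home/vcap/app/data.txt ./data.txt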
Thanks,
Chandan
I am running a JAR in a Docker container. My application connects to a DB, fetches records, and outputs them to a .csv file inside a folder called reports.
Both my src folder and reports folder are in the same directory, which means I can write to the file like below.
// subscriberType and df (a date format) are defined in the surrounding code
String csvFile = "." + File.separator + "reports" + File.separator + "VAS_Report_" + subscriberType +
        df.format(new Date()) + ".csv";
//db connection and result extraction logic
........
// write the whole ResultSet, including column headers, with opencsv
CSVWriter csvWriter = new CSVWriter(new FileWriter(csvFile));
csvWriter.writeAll(resultSet, true);
This works fine when I run the program locally. I build the project and bundle it as a JAR file.
I create a Dockerfile with other relevant steps and the following steps (to create a folder named reports in the folder where my JAR will be copied):
RUN mkdir -p /apps/Consumer/
COPY My-App-1.0.jar /apps/Consumer/
RUN mkdir -p /apps/Consumer/reports
RUN chmod ugo+w /apps/Consumer/reports #giving write permission
The Docker image builds successfully and the DB connection succeeds, but at run time, when the application tries to write to the csv file, it throws a FileNotFoundException stating that it cannot find the specified folder/file.
What am I doing wrong here?
Following are additional questions that came up while searching for a solution.
Does bundling an application preserve the project structure? (Since I needed to manually create a folder named reports in the Docker container.)
Do I need to grant permissions on the created folder (reports) in any way? (Which I did here.)
In place of:
RUN mkdir -p /apps/Consumer/
...
RUN mkdir -p /apps/Consumer/reports
Use WORKDIR as:
WORKDIR /apps/Consumer/
COPY My-App-1.0.jar .
RUN mkdir -p reports && chmod ugo+w reports
WORKDIR creates the directory if needed and sets the working directory both for the remaining build steps and for the container's entrypoint, so the relative path "./reports" built in your Java code resolves to /apps/Consumer/reports at run time; the reports directory itself still needs an explicit mkdir before it can be written to.
I'm trying to access files directly from an SFTP server, using Docker.
The following works:
import static java.nio.file.Paths.get;

public File[] copyExtractFiles() throws IOException, InterruptedException {
    // CHARSET_NAME is a constant defined elsewhere in the class
    String command = "sftp -i case-loader/./docker/config -o StrictHostKeyChecking=no -P 2222 sftp@localhost:incoming/*.xml src/test/resources/extract";
    Process p = new ProcessBuilder("bash", "-c", command).start();
    p.waitFor();
    BufferedReader stdOutput = new BufferedReader(new InputStreamReader(p.getInputStream(), Charset.forName(CHARSET_NAME)));
    BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream(), Charset.forName(CHARSET_NAME)));
    return get("src/test/resources/extract").toFile().listFiles();
}
This transfers an XML file from the incoming directory on the Docker image to the src/test/resources/extract directory, and then lists the files.
However, I do not have access to the local file system and so want to access the files directly on the SFTP server. Is this possible? What do I need to change?
Use an SFTP library, like JSch, instead of driving an external console application. Then you will be able to download the file list straight into memory.
Java - download from SFTP directly to memory without ever writing to file.
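A rough sketch of that approach with JSch (the host, port, user, and key path below are assumptions lifted from the sftp command in the question):
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.ByteArrayOutputStream;

public class SftpInMemory {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("case-loader/./docker/config");   // private key from the question
        Session session = jsch.getSession("sftp", "localhost", 2222);
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();
        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        // list the matching remote files and pull each one straight into memory
        for (Object o : sftp.ls("incoming/*.xml")) {
            ChannelSftp.LsEntry entry = (ChannelSftp.LsEntry) o;
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            sftp.get("incoming/" + entry.getFilename(), buf);
            System.out.println(entry.getFilename() + ": " + buf.size() + " bytes");
        }
        sftp.disconnect();
        session.disconnect();
    }
}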
I am trying to run a script on my Tomcat web server. To run the script on my local machine, this is the code I used.
String absolutePath = new File(".").getAbsolutePath();
// strip the trailing "." left by getAbsolutePath(), keeping the trailing separator
absolutePath = absolutePath.substring(0, absolutePath.length() - 1);
String filePath = "";
if (osVersion.equalsIgnoreCase("Ubuntu")) {
    // copy template.txt to Ubuntu/ubuntu_file.json
    try (FileReader fr = new FileReader("template.txt");
         FileWriter fw = new FileWriter("Ubuntu/ubuntu_file.json")) {
        int c = fr.read();
        while (c != -1) {
            fw.write(c);
            c = fr.read();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    filePath = "Ubuntu";
    String fi = absolutePath + filePath;
    System.out.println(fi); // the full path
    // create ProcessBuilder and run packer from that directory
    ProcessBuilder pb = new ProcessBuilder("bash", "-c",
            "cd " + fi + " ; PACKER_LOG=1 /usr/local/bin/packer build ubuntu_file.json");
    Process p = pb.start(); // throws IOException
}
However, when I try to run it on the Tomcat web server, I keep getting this error:
EclipseEE.app/Contents/MacOS/Ubuntu Failed to parse template: open
ubuntu_file.json: no such file or directory
I am fairly new to Tomcat and am still learning its ins and outs. In which Tomcat directory should I place my Ubuntu folder (I am assuming it's the webapps directory) so that Tomcat can resolve the absolute path of the folder and run the script?
If you have a more or less conventional Tomcat installation then the $CATALINA_HOME environment variable will be set and point to your server installation which will contain at least the following directories:
$CATALINA_HOME/
bin/
conf/
lib/
webapps/
You can get the value of $CATALINA_HOME via:
String catalinaHomeDir = System.getenv("CATALINA_HOME");
I would be inclined to put your configuration in the conf subdirectory.
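For example, a small sketch (template.txt is just a placeholder name for whatever file you store there):
String catalinaHome = System.getenv("CATALINA_HOME");
// resolve a config file under $CATALINA_HOME/conf
File template = new File(catalinaHome, "conf" + File.separator + "template.txt");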
If you're running multiple Tomcat instances from the same base then be sure to read the RUNNING.txt file that comes with it because you may need to use $CATALINA_BASE instead.
You may need to set up CATALINA_HOME/BASE in your Eclipse Tomcat Runtime environment when running locally with an Eclipse controlled server.
BTW, this is not a portable solution. If you need to migrate to some other container (such as WildFly or GlassFish) then the absolute-path config recommended by others is the way to go.
I am trying to stream Twitter feeds to HDFS and then use Hive. But the first part, streaming the data and loading it into HDFS, is not working and gives a NullPointerException.
This is what I have tried.
1. Downloaded apache-flume-1.4.0-bin.tar. Extracted it. Copied all the contents to /usr/lib/flume/. In /usr/lib/ I changed the owner of the flume directory to my user.
When I run the ls command in /usr/lib/flume/, it shows
bin CHANGELOG conf DEVNOTES docs lib LICENSE logs NOTICE README RELEASE-NOTES tools
2. Moved to the conf/ directory. I copied the file flume-env.sh.template as flume-env.sh and edited JAVA_HOME to point to my Java path, /usr/lib/jvm/java-7-oracle.
3. Next I created a file called flume.conf in the same conf directory and added the following contents:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = <Twitter Application API key>
TwitterAgent.sources.Twitter.consumerSecret = <Twitter Application API secret>
TwitterAgent.sources.Twitter.accessToken = <Twitter Application Access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <Twitter Application Access token secret>
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, datawarehouse, data ware housing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:8020/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 600
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
I created an app in Twitter, generated the tokens, and added all the keys to the above file. I added the API key as the consumer key.
I downloaded the flume-sources JAR from the Cloudera files, as mentioned here.
4. I added flume-sources-1.0-SNAPSHOT.jar to /usr/lib/flume/lib.
5. Started Hadoop and ran the following:
hadoop fs -mkdir /user/flume/tweets
hadoop fs -chown -R flume:flume /user/flume
hadoop fs -chmod -R 770 /user/flume
6. I ran the following in /usr/lib/flume:
/usr/lib/flume/conf$ bin/flume-ng agent -n TwitterAgent -c conf -f conf/flume-conf
It lists the JARs it is including and then exits.
When I check HDFS, there are no files there: hadoop fs -ls /user/flume/tweets shows nothing.
In Hadoop, the core-site.xml file has the following configuration:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
<final>true</final>
</property>
</configuration>
Thanks
I ran the following command and it worked:
bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
I used this command and it is working:
flume-ng agent --conf /etc/flume-ng/conf/ -f /etc/flume-ng/conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
I recently created an application and successfully jarred it to c:/my/folder/app.jar. It works like a charm in the following case [Startup #1]:
Open cmd
cd to c:/my/folder
java -jar app.jar
But when I do this, it doesn't work [Startup #2]:
Open cmd
cd to c:/my/
java -jar folder/app.jar
Because app.jar contains a .exe file which I try to run from my application:
final Process p = Runtime.getRuntime().exec("rybka.exe");
It won't work in example 2 because it can't find the file rybka.exe.
Any suggestions?
Something like this is a better way forward. Copy the exe out of the jar to a temp location and run it from there. Your jar will then also be executable via Java Web Start and so on:
// open the exe packaged inside the jar as a resource
InputStream src = MyClass.class.getResource("rybka.exe").openStream();
// create a temporary file and copy the resource into it
File exeTempFile = File.createTempFile("rybka", ".exe");
FileOutputStream out = new FileOutputStream(exeTempFile);
byte[] temp = new byte[32768];
int rc;
while ((rc = src.read(temp)) > 0)
    out.write(temp, 0, rc);
src.close();
out.close();
// ask the JVM to clean up the copy on exit, then run it
exeTempFile.deleteOnExit();
Runtime.getRuntime().exec(exeTempFile.toString());
If the jar will always be in that directory you can use a full path /my/folder/rybka.exe. If not, you can use getClass().getProtectionDomain().getCodeSource().getLocation() to find out the location of the jar and prepend that onto rybka.exe.
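For the second option, a small sketch (it assumes rybka.exe sits next to the jar on disk rather than inside it; MyClass stands for any class in your jar):
// locate the jar the running class was loaded from
File jarFile = new File(MyClass.class.getProtectionDomain()
        .getCodeSource().getLocation().toURI());   // throws URISyntaxException
// resolve the exe next to the jar and run it
File exe = new File(jarFile.getParentFile(), "rybka.exe");
Runtime.getRuntime().exec(exe.getAbsolutePath());  // throws IOException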
Try extracting the exe to System.getProperty("java.io.tmpdir") and then run it from that location; that should work every time.
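A minimal sketch of that variant (the copy loop is the same as in the earlier answer):
// place the extracted exe in the system temp directory
File exeTempFile = new File(System.getProperty("java.io.tmpdir"), "rybka.exe");
// ... copy the rybka.exe resource into exeTempFile as shown above, then:
Runtime.getRuntime().exec(exeTempFile.getAbsolutePath());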
Paul