I am using the iText Java code below to extract attachments from a PDF file. It works fine on a local system: it extracts the XML file from the PDF and stores it at strOutputPath. I want to perform this operation on AWS S3: the PDF file will be on S3, and the attachment should be extracted to S3. How can I use the absolute path of a file on S3 in this case? I used s3client.getUrl().toExternalForm(), but I get an HTTP 403 error.
import java.util.Iterator;
import java.util.Set;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.File;
import com.itextpdf.text.pdf.PdfObject;
import com.itextpdf.text.pdf.PRStream;
import com.itextpdf.text.pdf.PdfArray;
import com.itextpdf.text.pdf.PdfDictionary;
import java.io.IOException;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.PdfReader;
public class app
{
public static void main(final String[] args) {
try {
final String strInputPath = args[0];
final String strOutputPath = args[1];
final PdfReader pdfReader = new PdfReader(strInputPath);
final PdfDictionary catalog = pdfReader.getCatalog();
final PdfDictionary names = catalog.getAsDict(PdfName.NAMES);
final PdfDictionary embeddedFiles = names.getAsDict(PdfName.EMBEDDEDFILES);
final PdfArray embeddedFilesArray = embeddedFiles.getAsArray(PdfName.NAMES);
for (int i = 0; i < embeddedFilesArray.size(); ++i) {
final PdfDictionary FileSpec = embeddedFilesArray.getAsDict(i);
if (FileSpec != null) {
String strFileName = FileSpec.getAsString(PdfName.F).toString();
System.out.println(strFileName);
if (strFileName.endsWith(".xml")) {
strFileName = String.valueOf(System.currentTimeMillis()) + ".xml";
extractFiles(pdfReader, FileSpec, String.valueOf(strOutputPath) + strFileName);
}
}
}
}
catch (IOException e) {
e.printStackTrace();
}
}
private static void extractFiles(final PdfReader pdfReader, final PdfDictionary filespec, final String strFileName) {
final PdfDictionary refs = filespec.getAsDict(PdfName.EF);
PRStream prStream = null;
FileOutputStream outputStream = null;
final Set<PdfName> keys = (Set<PdfName>)refs.getKeys();
try {
for (final PdfName key : keys) {
prStream = (PRStream)PdfReader.getPdfObject((PdfObject)refs.getAsIndirectObject(key));
outputStream = new FileOutputStream(new File(strFileName));
outputStream.write(PdfReader.getStreamBytes(prStream));
outputStream.flush();
outputStream.close();
}
}
catch (FileNotFoundException e) {
e.printStackTrace();
}
catch (IOException e2) {
e2.printStackTrace();
}
finally {
try {
if (outputStream != null) {
outputStream.close();
}
}
catch (IOException e3) {
e3.printStackTrace();
}
}
try {
if (outputStream != null) {
outputStream.close();
}
}
catch (IOException e3) {
e3.printStackTrace();
}
}
}
I think what you need to do is write a Java client that works on the files in your S3 bucket and performs the following steps:
Downloads the required file from S3.
Extracts the attachment from the file.
Uploads the resultant files back to S3.
Sample code to perform the above-mentioned steps is as follows:
import java.io.*;
import java.util.Set;
import com.amazonaws.services.s3.*;
import com.amazonaws.services.s3.model.*;
import com.itextpdf.text.pdf.*;
public class S3PDFAttachmentExtractor {
public static void main(String[] args) throws IOException {
// download file from S3
AmazonS3Client amazonS3Client = new AmazonS3Client();
S3Object object = amazonS3Client.getObject("<yours3location>", "fileKey");
// write the file content to a local file.
S3ObjectInputStream objectContent = object.getObjectContent();
FileOutputStream out = new FileOutputStream("tempOutputFile.pdf");
writeToFile(objectContent, out);
// Extract attachment from the downloaded file.
extractAttachment("tempOutputFile.pdf", "tempAttachement.xml");
//upload the attachment
uploadFile("<s3bucket.fully.qualified.name>", "tempAttachement.xml", "attachementNameOnS3.xml");
}
private static void writeToFile(InputStream input, FileOutputStream out) throws IOException {
// Copy the S3 object content to the local file in chunks.
try (BufferedInputStream in = new BufferedInputStream(input)) {
byte[] chunk = new byte[1024];
int bytesRead;
while ((bytesRead = in.read(chunk)) != -1) {
out.write(chunk, 0, bytesRead);
}
} finally {
// close the output stream so the PDF is fully flushed before it is read again
out.close();
}
}
public static void extractAttachment(final String strInputPath, final String strOutputPath) {
try {
final PdfReader pdfReader = new PdfReader(strInputPath);
final PdfDictionary catalog = pdfReader.getCatalog();
final PdfDictionary names = catalog.getAsDict(PdfName.NAMES);
final PdfDictionary embeddedFiles = names.getAsDict(PdfName.EMBEDDEDFILES);
final PdfArray embeddedFilesArray = embeddedFiles.getAsArray(PdfName.NAMES);
for (int i = 0; i < embeddedFilesArray.size(); ++i) {
final PdfDictionary FileSpec = embeddedFilesArray.getAsDict(i);
if (FileSpec != null) {
String strFileName = FileSpec.getAsString(PdfName.F).toString();
System.out.println(strFileName);
if (strFileName.endsWith(".xml")) {
// write to exactly strOutputPath ("tempAttachement.xml") so the caller can find and upload it
extractFiles(pdfReader, FileSpec, strOutputPath);
}
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
private static void extractFiles(final PdfReader pdfReader, final PdfDictionary filespec, final String strFileName) {
final PdfDictionary refs = filespec.getAsDict(PdfName.EF);
PRStream prStream = null;
FileOutputStream outputStream = null;
final Set<PdfName> keys = (Set<PdfName>) refs.getKeys();
try {
for (final PdfName key : keys) {
prStream = (PRStream) PdfReader.getPdfObject((PdfObject) refs.getAsIndirectObject(key));
outputStream = new FileOutputStream(new File(strFileName));
outputStream.write(PdfReader.getStreamBytes(prStream));
outputStream.flush();
outputStream.close();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e2) {
e2.printStackTrace();
} finally {
try {
if (outputStream != null) {
outputStream.close();
}
} catch (IOException e3) {
e3.printStackTrace();
}
}
try {
if (outputStream != null) {
outputStream.close();
}
} catch (IOException e3) {
e3.printStackTrace();
}
}
private static void uploadFile(String bucketFullPath, String fileLocation, String fileName) throws IOException {
AmazonS3Client amazonS3Client = new AmazonS3Client();
// try-with-resources closes the stream once the upload has finished
try (InputStream bis = new FileInputStream(fileLocation)) {
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentType("application/xml");
amazonS3Client.putObject(bucketFullPath, fileName, bis, objectMetadata);
}
}
}
Please note that a better way to do this type of thing is to write an AWS Lambda function in Java using the above code. Since AWS Lambda can easily be configured to process events from S3, your code will automatically be invoked when a file is written or modified in the S3 bucket. For further details you can check the AWS Lambda documentation.
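As a rough illustration, here is a minimal sketch of such a handler. It assumes the aws-lambda-java-core and aws-lambda-java-events dependencies are on the classpath; the class name and the place where the download/extract/upload methods plug in are placeholders, not a definitive implementation:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
// Hypothetical handler: triggered by S3 ObjectCreated events, it reads the uploaded
// PDF's bucket and key from the event and could then reuse the methods shown above.
public class PdfAttachmentLambda implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        context.getLogger().log("Processing s3://" + bucket + "/" + key);
        // download the PDF from (bucket, key), call extractAttachment(...),
        // then upload the resulting XML back to S3
        return "done";
    }
}
The function would then be subscribed to the bucket's ObjectCreated event notifications in the S3 console or via infrastructure-as-code.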
Edit:
Another alternative: if you are running the Java code on AWS EC2, there is a way to mount an S3 bucket as a file system (for example with a FUSE-based tool such as s3fs). This lets you access files as if they were stored locally, and your original code will work unchanged. But this approach will only work in an AWS EC2 environment.
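For illustration only, assuming the bucket is mounted at a hypothetical mount point /mnt/s3bucket, the extraction code above could then be driven with plain file paths:
// Hypothetical: the S3 bucket has been mounted at /mnt/s3bucket on the EC2 instance,
// so its objects can be addressed with ordinary local paths.
public class MountedBucketExample {
    public static void main(String[] args) {
        String inputPdf = "/mnt/s3bucket/input/document.pdf";     // placeholder path
        String outputXml = "/mnt/s3bucket/output/attachment.xml"; // placeholder path
        // Reuses the extractAttachment method from the sample class above.
        S3PDFAttachmentExtractor.extractAttachment(inputPdf, outputXml);
    }
}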
I am wondering if anyone can help with implementing parallel zip creation using ScatterZipOutputStream. I have searched a lot, but I cannot find an example anywhere.
https://commons.apache.org/proper/commons-compress/zip.html
I have tried making a zip, zipping a directory, etc. with ZipArchiveOutputStream. Now I am trying to do that in parallel.
public static void makeZip(String filename) throws IOException,
ArchiveException {
File sourceFile = new File(filename);
final OutputStream out = new FileOutputStream(filename.substring(0, filename.lastIndexOf('.')) + ".zip");
ZipArchiveOutputStream os = new ZipArchiveOutputStream(out);
os.setUseZip64(Zip64Mode.AsNeeded);
os.putArchiveEntry(new ZipArchiveEntry(sourceFile.getName()));
IOUtils.copy(new FileInputStream(sourceFile), os);
os.closeArchiveEntry();
os.close();
}
It should be able to process individual files in separate threads and then combine them to write the resulting zip.
Following is working code for both zip and unzip:
1. Change the paths for sourceFolder and zipFilePath.
2. It zips only *.txt files; it can be any type, or all files.
3. Unzipped files go to sourceFolder/unzip/.
Add the following dependencies in build.gradle (or their equivalents in pom.xml):
implementation("org.apache.commons:commons-compress:1.18")
implementation("commons-io:commons-io:2.6")
Ref: https://mvnrepository.com/artifact/org.apache.commons/commons-compress/1.18
https://mvnrepository.com/artifact/commons-io/commons-io/2.6
//code
import org.apache.commons.compress.archivers.zip.*;
import org.apache.commons.compress.parallel.InputStreamSupplier;
import org.apache.commons.io.FileUtils;
import java.io.*;
import java.nio.file.Files;
import java.util.Iterator;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
public class ZipMain {
static ParallelScatterZipCreator scatterZipCreator = new ParallelScatterZipCreator();
static ScatterZipOutputStream dirs;
static {
try {
dirs = ScatterZipOutputStream.fileBased(File.createTempFile("java-zip-dirs", "tmp"));
} catch (IOException e) {
e.printStackTrace();
}
}
public static void main(String[] args) throws IOException {
String sourceFolder = "/Users/<user>/Desktop/";
String zipFilePath = "/Users/<user>/Desktop/Desk.zip";
String fileTypesToBeAddedToZip = "txt";
zip(sourceFolder, zipFilePath, fileTypesToBeAddedToZip);
unzip(zipFilePath, sourceFolder + "/unzip/");
}
private static void zip(String sourceFolder, String zipFilePath, String fileTypesToBeAddedToZip) throws IOException {
OutputStream outputStream = null;
ZipArchiveOutputStream zipArchiveOutputStream = null;
try {
File srcFolder = new File(sourceFolder);
if (srcFolder.isDirectory()) {
// uncomment following code if you want to add all files under srcFolder
//Iterator<File> fileIterator = Arrays.asList(srcFolder.listFiles()).iterator();
Iterator<File> fileIterator = FileUtils.iterateFiles(srcFolder, new String[]{fileTypesToBeAddedToZip}, true);
File zipFile = new File(zipFilePath);
zipFile.delete();
outputStream = new FileOutputStream(zipFile);
zipArchiveOutputStream = new ZipArchiveOutputStream(outputStream);
zipArchiveOutputStream.setUseZip64(Zip64Mode.AsNeeded);
int srcFolderLength = srcFolder.getAbsolutePath().length() + 1; // +1 to remove the last file separator
while (fileIterator.hasNext()) {
File file = fileIterator.next();
// uncomment following code if you want to add all files under srcFolder
//if (file.isDirectory()) {
// continue;
// }
String relativePath = file.getAbsolutePath().substring(srcFolderLength);
InputStreamSupplier streamSupplier = () -> {
InputStream is = null;
try {
is = Files.newInputStream(file.toPath());
} catch (IOException e) {
e.printStackTrace();
}
return is;
};
ZipArchiveEntry zipArchiveEntry = new ZipArchiveEntry(relativePath);
zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
scatterZipCreator.addArchiveEntry(zipArchiveEntry, streamSupplier);
}
scatterZipCreator.writeTo(zipArchiveOutputStream);
}
if (zipArchiveOutputStream != null) {
zipArchiveOutputStream.close();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (outputStream != null) {
outputStream.close();
}
}
}
private static void unzip(String zipFilePath, String destDir) {
File dir = new File(destDir);
// create output directory if it doesn't exist
if (!dir.exists()) {
dir.mkdirs();
} else {
dir.delete();
}
FileInputStream fis;
//buffer for read and write data to file
byte[] buffer = new byte[1024];
try {
fis = new FileInputStream(zipFilePath);
ZipInputStream zis = new ZipInputStream(fis);
ZipEntry ze = zis.getNextEntry();
while (ze != null) {
String fileName = ze.getName();
File newFile = new File(destDir + File.separator + fileName);
System.out.println("Unzipping to " + newFile.getAbsolutePath());
//create directories for sub directories in zip
String parentFolder = newFile.getParent();
File folder = new File(parentFolder);
folder.mkdirs();
FileOutputStream fos = new FileOutputStream(newFile);
int len;
while ((len = zis.read(buffer)) > 0) {
fos.write(buffer, 0, len);
}
fos.close();
//close this ZipEntry
zis.closeEntry();
ze = zis.getNextEntry();
}
//close last ZipEntry
zis.closeEntry();
zis.close();
fis.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Ref: Fast zipping folder using java ParallelScatterZipCreator
I need to copy a file from one place to another. I have found a good solution:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class FileCopyTest {
public static void main(String[] args) {
Path source = Paths.get("/Users/apple/Desktop/test.rtf");
Path destination = Paths.get("/Users/apple/Desktop/copied.rtf");
try {
Files.copy(source, destination);
} catch (IOException e) {
e.printStackTrace();
}
}
}
This works well, but it isn't available in Android...
I am trying to figure out what I should use instead, but I haven't found any suggestions... I am almost sure there should be a library that allows copying files in one go.
If someone knows one, please say so; I am sure it will be a very helpful answer for lots of people.
Thanks!
Well with commons-io, you can do this
FileInputStream source = null;
FileOutputStream destination = null;
try {
    source = new FileInputStream(new File(/*...*/));
    destination = new FileOutputStream(new File(Environment.getExternalStorageDirectory(), /*...*/));
    IOUtils.copy(source, destination);
} finally {
    IOUtils.closeQuietly(source);
    IOUtils.closeQuietly(destination);
}
Just add
compile 'org.apache.directory.studio:org.apache.commons.io:2.4'
to the build.gradle file
Try this code:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
public class CopyFile {
public static void main(String[] args) {
File sourceFile = new File(
"/Users/Neel/Documents/Workspace/file1.txt");
File destFile = new File(
"/Users/Neel/Documents/Workspace/file2.txt");
/* verify whether file exist in source location */
if (!sourceFile.exists()) {
System.out.println("Source File Not Found!");
}
/* if file not exist then create one */
if (!destFile.exists()) {
try {
destFile.createNewFile();
System.out.println("Destination file doesn't exist. Creating
one!");
} catch (IOException e) {
e.printStackTrace();
}
}
FileChannel source = null;
FileChannel destination = null;
try {
/**
* getChannel() returns unique FileChannel object associated a file
* output stream.
*/
source = new FileInputStream(sourceFile).getChannel();
destination = new FileOutputStream(destFile).getChannel();
if (destination != null && source != null) {
destination.transferFrom(source, 0, source.size());
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
finally {
if (source != null) {
try {
source.close();
} catch (IOException e) {
e.printStackTrace();
}
}
if (destination != null) {
try {
destination.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
Use this utility class to read/write a file on the SD card:
public class MyFile {
String TAG = "MyFile";
Context context;
public MyFile(Context context){
this.context = context;
}
public Boolean writeToSD(String text){
Boolean write_successful = false;
File root=null;
try {
// check for SDcard
root = Environment.getExternalStorageDirectory();
Log.i(TAG,"path.." +root.getAbsolutePath());
//check sdcard permission
if (root.canWrite()){
File fileDir = new File(root.getAbsolutePath());
fileDir.mkdirs();
File file= new File(fileDir, "samplefile.txt");
FileWriter filewriter = new FileWriter(file);
BufferedWriter out = new BufferedWriter(filewriter);
out.write(text);
out.close();
write_successful = true;
}
} catch (IOException e) {
Log.e("ERROR:---", "Could not write file to SDCard" + e.getMessage());
write_successful = false;
}
return write_successful;
}
public String readFromSD(){
File sdcard = Environment.getExternalStorageDirectory();
File file = new File(sdcard,"samplefile.txt");
StringBuilder text = new StringBuilder();
try {
BufferedReader br = new BufferedReader(new FileReader(file));
String line;
while ((line = br.readLine()) != null) {
text.append(line);
text.append('\n');
}
}
catch (IOException e) {
}
return text.toString();
}
@SuppressLint("WorldReadableFiles")
@SuppressWarnings("static-access")
public Boolean writeToSandBox(String text){
Boolean write_successful = false;
try{
FileOutputStream fOut = context.openFileOutput("samplefile.txt",
context.MODE_WORLD_READABLE);
OutputStreamWriter osw = new OutputStreamWriter(fOut);
osw.write(text);
osw.flush();
osw.close();
}catch(Exception e){
write_successful = false;
}
return write_successful;
}
public String readFromSandBox(){
String str ="";
String new_str = "";
try{
FileInputStream fIn = context.openFileInput("samplefile.txt");
InputStreamReader isr = new InputStreamReader(fIn);
BufferedReader br=new BufferedReader(isr);
while((str=br.readLine())!=null)
{
new_str +=str;
System.out.println(new_str);
}
}catch(Exception e)
{
}
return new_str;
}
}
Note that you need to declare this permission in the AndroidManifest.xml file:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
For more details visit http://www.coderzheaven.com/2012/09/06/read-write-files-sdcard-application-sandbox-android-complete-example/ and the official Android developer docs.
I have a byte[] zipFileAsByteArray.
This zip file has the following structure:
rootDir
|--- Folder1 - first.txt
|--- Folder2 - second.txt
|--- PictureFolder - image.png
What I need is to get two txt files and read them, without saving any files on disk. Just do it in memory.
I tried something like this:
ByteArrayInputStream bis = new ByteArrayInputStream(processZip);
ZipInputStream zis = new ZipInputStream(bis);
Also, I will need a separate method to get the picture. Something like this:
public byte[] getImage(byte[] zipContent);
Can someone help me with an idea or a good example of how to do that?
Here is an example:
public static void main(String[] args) throws IOException {
ZipFile zip = new ZipFile("C:\\Users\\mofh\\Desktop\\test.zip");
for (Enumeration e = zip.entries(); e.hasMoreElements(); ) {
ZipEntry entry = (ZipEntry) e.nextElement();
if (!entry.isDirectory()) {
if (FilenameUtils.getExtension(entry.getName()).equals("png")) {
byte[] image = getImage(zip.getInputStream(entry));
//do your thing
} else if (FilenameUtils.getExtension(entry.getName()).equals("txt")) {
StringBuilder out = getTxtFiles(zip.getInputStream(entry));
//do your thing
}
}
}
}
private static StringBuilder getTxtFiles(InputStream in) {
StringBuilder out = new StringBuilder();
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
String line;
try {
while ((line = reader.readLine()) != null) {
out.append(line);
}
} catch (IOException e) {
// do something, probably not a text file
e.printStackTrace();
}
return out;
}
private static byte[] getImage(InputStream in) {
try {
BufferedImage image = ImageIO.read(in); //just checking if the InputStream belongs in fact to an image
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(image, "png", baos);
return baos.toByteArray();
} catch (IOException e) {
// do something, it is not a image
e.printStackTrace();
}
return null;
}
Keep in mind, though, that I am checking a string to differentiate the possible types, and this is error-prone. Nothing stops someone from sending another type of file with the expected extension.
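If that is a concern, one option is to sniff the file's magic bytes instead of trusting the extension. A minimal sketch of a helper that could sit next to getImage/getTxtFiles above (the name looksLikePng is mine; the 8-byte PNG signature is standard, and a detection library such as Apache Tika would cover many more formats):
// Returns true when the data starts with the standard 8-byte PNG signature
// (89 50 4E 47 0D 0A 1A 0A). Purely illustrative.
private static boolean looksLikePng(byte[] data) {
    byte[] signature = {(byte) 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A};
    if (data == null || data.length < signature.length) {
        return false;
    }
    for (int i = 0; i < signature.length; i++) {
        if (data[i] != signature[i]) {
            return false;
        }
    }
    return true;
}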
You can do something like:
public static void main(String args[]) throws Exception {
    // bis, zis as you have
    try {
        ZipEntry file;
        while ((file = zis.getNextEntry()) != null) { // get next entry and continue only while it is not null
            // getSize() can be -1 and a single read() may return fewer bytes than requested,
            // so read the whole entry into a buffer in a loop.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = zis.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            byte[] b = buffer.toByteArray();
            if (file.getName().endsWith(".txt")) {
                // read files. You have the data in `b`
            } else if (file.getName().endsWith(".png")) {
                // process image
            }
        }
    } finally {
        zis.close();
    }
}
You can use the code below.
But you need to make sure your S3 bucket is set up first (bucket, credentials, and permissions).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;
import java.io.*;
import static com.amazonaws.regions.Regions.US_EAST_1;
public class GetObject2 {
public static void main(String[] args) throws IOException {
String bucketName = "Give Yout Bucket Name";
String key = "Give your String Key";
S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(US_EAST_1)
.withCredentials(new ProfileCredentialsProvider())
.build();
// Get an object and print its contents.
System.out.println("Downloading an object");
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());
File localFile = new File("C:\\awstest.zip");
ObjectMetadata object = s3Client.getObject(new GetObjectRequest(bucketName, key), localFile);
// Get a range of bytes from an object and print the bytes.
GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
.withRange(0, 9);
objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());
// Get an entire object, overriding the specified response headers, and print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
.withCacheControl("No-cache")
.withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new GetObjectRequest(bucketName, key)
.withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
} finally {
// To ensure that the network connection doesn't remain open, close any open input streams.
if (fullObject != null) {
fullObject.close();
}
if (objectPortion != null) {
objectPortion.close();
}
if (headerOverrideObject != null) {
headerOverrideObject.close();
}
}
}
static void displayTextInputStream(InputStream input) throws IOException {
// Read the text input stream one line at a time and display each line.
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
String line = null;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
System.out.println();
}
}
With this code I always get an empty file.
What do I have to do to fix it?
login is always true. (Of course, this is not the real password.)
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import java.io.*;
public class Logs {
public static void main(String[] args) {
FTPClient client = new FTPClient();
try {
client.connect("myac.cs-server.pro", 121);
boolean login = client.login("a3ro", "passWordIsSecret");
System.out.println(login);
String remoteFile1 = "myac_20150304.log";
File downloadFile1 = new File("C:\\Users\\Aero\\Desktop\\test\\myac.log");
OutputStream outputStream1 =
new BufferedOutputStream(new FileOutputStream(downloadFile1));
boolean success = client.retrieveFile(remoteFile1, outputStream1);
System.out.println(success);
outputStream1.close();
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
client.disconnect();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
Use FileOutputStream:
String filename = "test.txt";
FileOutputStream fos = new FileOutputStream(filename);
client.retrieveFile("/" + filename, fos);
Use something like this to retrieve the remote file as an input stream:
InputStream inputStream = client.retrieveFileStream(remoteFileNameHere);
Then you can copy the stream to the desired file:
FileOutputStream out = new FileOutputStream(targetFile);
org.apache.commons.io.IOUtils.copy(inputStream, out);
out.close();
client.completePendingCommand(); // required after retrieveFileStream to finish the transfer
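Separately, an empty file from retrieveFile is often caused by the data connection being blocked in active mode, or by transferring the file in ASCII mode. Here is a hedged sketch of the download with those two settings enabled (Apache Commons Net; server, credentials, and file names are taken from the question):
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
public class FtpDownloadSketch {
    public static void main(String[] args) throws IOException {
        FTPClient client = new FTPClient();
        try {
            client.connect("myac.cs-server.pro", 121);
            client.login("a3ro", "passWordIsSecret");
            // Passive mode avoids firewall/NAT problems with the data connection,
            // which commonly show up as empty or missing downloads.
            client.enterLocalPassiveMode();
            // Binary mode prevents line-ending translation from corrupting the file.
            client.setFileType(FTP.BINARY_FILE_TYPE);
            try (OutputStream out = new FileOutputStream("myac.log")) {
                boolean success = client.retrieveFile("myac_20150304.log", out);
                System.out.println("Download successful: " + success);
            }
            client.logout();
        } finally {
            client.disconnect();
        }
    }
}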
I have a file at the path /foo/file-a.txt, and that file contains the path of another file.
file-a.txt contains the path /bar/file-b.txt on line one. I need to parse the path of file-b.txt, zip that file, and move the zipped file to another path, /too/, from my Java code.
I got as far as the code below and then I am stuck.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
public class Reader
{
public static void main(String[] args)
{
BufferedReader br = null;
try
{
String CurrentLine;
br = new BufferedReader(new FileReader("/foo/file-a.txt"));
while ((CurrentLine = br.readLine()) != null)
{
System.out.println(CurrentLine);
}
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
try
{
if (br != null)br.close();
}
catch (IOException ex)
{
ex.printStackTrace();
}
}
}
}
I am getting the path as text; any help would be appreciated. Thanks in advance.
For the actual zipping of the file, the code below may be of help.
As a general note, this code will replace the currently existing zip file.
public class TestZip02 {
public static void main(String[] args) {
try {
zip(new File("TextFiles.zip"), new File("sample.txt"));
} catch (IOException ex) {
ex.printStackTrace();
}
}
public static void zip(File zip, File file) throws IOException {
ZipOutputStream zos = null;
try {
String name = file.getName();
zos = new ZipOutputStream(new FileOutputStream(zip));
ZipEntry entry = new ZipEntry(name);
zos.putNextEntry(entry);
FileInputStream fis = null;
try {
fis = new FileInputStream(file);
byte[] byteBuffer = new byte[1024];
int bytesRead = -1;
while ((bytesRead = fis.read(byteBuffer)) != -1) {
zos.write(byteBuffer, 0, bytesRead);
}
zos.flush();
} finally {
try {
fis.close();
} catch (Exception e) {
}
}
zos.closeEntry();
zos.flush();
} finally {
try {
zos.close();
} catch (Exception e) {
}
}
}
}
For moving the file, you can use File.renameTo or java.nio.file.Files.move; a sketch follows below.
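A minimal sketch of the move, assuming the archive name TextFiles.zip from the zip example above and the /too/ target directory from the question:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
public class MoveZip {
    public static void main(String[] args) throws IOException {
        // "TextFiles.zip" is the archive produced by the zip example above (assumed name).
        Path source = Paths.get("TextFiles.zip");
        Path targetDir = Paths.get("/too");
        Files.createDirectories(targetDir); // make sure the target directory exists
        // Move the archive into /too/, replacing any existing file with the same name.
        Files.move(source, targetDir.resolve(source.getFileName()), StandardCopyOption.REPLACE_EXISTING);
    }
}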
Hope this helps!