Convert InputStream to file for all extensions - Java

I am converting an InputStream to a file using the code snippet below. It creates the file in a temp location, but when the InputStream contains a PDF/Word/XLS document, the resulting file is corrupted and I cannot use the File object for other operations. I know I have to set the content type/MIME type; I tried giving a prefix and suffix matching my requirement, but it didn't work.
public static final String PREFIX = "stream2file";
public static final String SUFFIX = ".tmp";

public static File stream2file(InputStream in) throws IOException {
    final File tempFile = File.createTempFile(PREFIX, SUFFIX);
    tempFile.deleteOnExit();
    try (FileOutputStream out = new FileOutputStream(tempFile)) {
        IOUtils.copy(in, out);
    }
    return tempFile;
}
I don't know what has to be added to this snippet when creating the temporary file so that I can retrieve all file types without corruption.
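The suffix by itself does not change the bytes that get written, so one way to narrow down where the corruption happens is to compute a digest while copying and compare it with a digest of the original source. A minimal sketch along those lines (the digest step is only for diagnosis and is not part of the original snippet; the method name is illustrative and PREFIX/SUFFIX are the constants above):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.apache.commons.io.IOUtils;

public static File stream2fileChecked(InputStream in) throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    File tempFile = File.createTempFile(PREFIX, SUFFIX);
    tempFile.deleteOnExit();
    // Every byte copied to the temp file also updates the digest.
    try (DigestInputStream din = new DigestInputStream(in, md);
         FileOutputStream out = new FileOutputStream(tempFile)) {
        IOUtils.copy(din, out);
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
        hex.append(String.format("%02x", b));
    }
    // If this matches a digest of the original document, the copy is byte-identical
    // and the corruption happens before or after this method.
    System.out.println("MD5 of copied bytes: " + hex);
    return tempFile;
}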

Related

The file gets smaller after reading the jar package and writing it to another file

package com.example.demo.Util;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class Test {

    static HashMap<String, String> map = new HashMap<>();

    public static void main(String[] args) throws IOException {
        String data = "12j3h1i7tsa7sgdajk123y8asd: 88888";
        File jarFile = new File(new Test().getJarPath());
        File tempJar = upJarFile(jarFile, "BOOT-INF/classes/application.properties", data);
    }

    public static File upJarFile(File originalJarFile, String editFilePath, String content) throws IOException {
        File tempFile = File.createTempFile("temp", ".jar");
        JarFile jarFile = new JarFile(originalJarFile);
        Enumeration<JarEntry> entries = jarFile.entries();
        System.out.println("before:" + originalJarFile.length());
        JarOutputStream jarOutputStream = new JarOutputStream(new FileOutputStream(tempFile));
        while (entries.hasMoreElements()) {
            JarEntry jarEntry = entries.nextElement();
            jarOutputStream.putNextEntry(jarEntry);
            map.put(jarEntry.getName(), String.valueOf(jarEntry.getSize()));
            jarOutputStream.write(new Test().inputStreamToByteArray(jarFile.getInputStream(jarEntry)));
        }
        jarOutputStream.finish();
        jarOutputStream.close();
        System.out.println(tempFile.getPath());
        System.out.println("after:" + tempFile.length());
        return tempFile;
    }

    public String getJarPath() {
        String path1 = System.getProperty("user.dir");
        File file = new File(path1 + "/target/");
        String jarFile = null;
        for (File file1 : file.listFiles()) {
            if (file1.getName().endsWith(".jar")) {
                jarFile = file1.getPath();
                break;
            }
        }
        return jarFile;
    }

    public byte[] inputStreamToByteArray(InputStream inputStream) {
        try (ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[1024];
            int num;
            while ((num = inputStream.read(buffer)) != -1) {
                byteArrayOutputStream.write(buffer, 0, num);
            }
            byteArrayOutputStream.flush();
            return byteArrayOutputStream.toByteArray();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return new byte[]{};
    }
}
As shown in the code above, I just turn the entries of the incoming jar into streams and write them out one by one, but when I compared the size of the input jar with the size of the output temporary jar, the output was smaller (before: 49651057 --> after: 49647985).
What could be causing this difference?
This can happen for a number of reasons:
The original JAR file was created with a compression level that is not as high as the default compression level, so the JAR file that you create (with default compression) achieves better compression, and therefore it is smaller. You can verify this by opening both the original and the result JAR files with a ZIP utility (e.g. 7Zip) and examining their checksums and their compressed sizes. If the checksums are identical, but the compressed sizes differ, then the difference is simply due to better compression. A programmatic per-entry comparison is sketched after this list.
The original JAR file contains unused data. This can happen when sloppy archive creation software updates an archive by appending to it instead of rewriting it from scratch. You can verify this by opening the original ZIP archive with a ZIP utility (e.g. 7Zip) and saving it under a new filename. If the new file is smaller, then the original file contained some unused data.
The original JAR file contains files in subdirectories, which you are not checking. Thus, your output JAR file does not contain all of the files in the original. To fix this, you need to check each entry with jarEntry.isDirectory() and if so, recurse.
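To check the first two explanations programmatically rather than with a ZIP utility, the per-entry CRCs and compressed sizes of the two archives can be compared. A minimal sketch (the method name is illustrative, not part of the original code):

import java.io.File;
import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Lists entries whose uncompressed content (same CRC) was stored with a different
// compressed size, i.e. the same data at a different compression level, plus any
// entries missing from the rewritten jar altogether.
public static void compareJars(File original, File rewritten) throws IOException {
    try (JarFile a = new JarFile(original); JarFile b = new JarFile(rewritten)) {
        Enumeration<JarEntry> entries = a.entries();
        while (entries.hasMoreElements()) {
            JarEntry ea = entries.nextElement();
            JarEntry eb = b.getJarEntry(ea.getName());
            if (eb == null) {
                System.out.println("missing in rewritten jar: " + ea.getName());
            } else if (ea.getCrc() == eb.getCrc()
                    && ea.getCompressedSize() != eb.getCompressedSize()) {
                System.out.println(ea.getName() + ": same CRC, compressed size "
                        + ea.getCompressedSize() + " -> " + eb.getCompressedSize());
            }
        }
    }
}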

How to read needed files from rar archive directly to InputStream (without extracting whole archive)?

It seems quite simple with a zip archive using java.util.zip.ZipFile, like this:
public static void main(String[] args) throws IOException
{
    final ZipFile zipFile = new ZipFile("C:/test.zip");
    final Enumeration<? extends ZipEntry> entries = zipFile.entries();
    while (entries.hasMoreElements())
    {
        final ZipEntry entry = entries.nextElement();
        if (entry.getName().equals("NEEDED_NAME"))
        {
            try (InputStream inputStream = zipFile.getInputStream(entry))
            {
                // Do what's needed with the inputStream.
            }
        }
    }
}
What would be the alternative for rar archives?
I'm aware of Junrar, but I didn't find a way to do it without extracting the whole archive to some folder.
Edit:
I have added the if check on entry.getName() just to indicate that I'm interested only in some specific files inside the archive and would like to avoid extracting the whole archive to some folder and deleting those files afterwards.
I ended up using something like this for now (with Junrar):
final Archive archive = new Archive(new File("C:/test.rar"), null);
final LocalFolderExtractor lfe = new LocalFolderExtractor(new File("/path/to/temp/location/"), new FileSystem());
for (final FileHeader fileHeader : archive)
{
    if (fileHeader.getFileNameString().equals("NEEDED_NAME"))
    {
        File file = null;
        try
        {
            file = lfe.extract(archive, fileHeader);
            // Create inputStream from file and do what's needed.
        }
        finally
        {
            // Fully delete the file + folders if needed.
        }
    }
}
Maybe there is a better way :)
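One way to avoid the temporary folder entirely is to extract the single entry into memory. A minimal sketch, assuming a Junrar version where Archive is Closeable and exposes extractFile(FileHeader, OutputStream) (check your version's API before relying on this); the method and parameter names are illustrative:

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import com.github.junrar.Archive;
import com.github.junrar.exception.RarException;
import com.github.junrar.rarfile.FileHeader;

// Returns the decompressed bytes of a single entry, or null if it is not present.
public static byte[] readEntry(File rar, String neededName) throws IOException, RarException {
    try (Archive archive = new Archive(rar, null)) {
        for (FileHeader fileHeader : archive) {
            if (fileHeader.getFileNameString().equals(neededName)) {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                archive.extractFile(fileHeader, baos); // decompresses only this entry
                return baos.toByteArray();             // wrap in a ByteArrayInputStream if a stream is needed
            }
        }
    }
    return null;
}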

How to load zip file that resides in S3 bucket?

I have a situation where I need to open a zip file that resides in an S3 bucket.
So far my code is like below:
public ZipFile readZipFile(String name) throws Exception {
    GetObjectRequest req = new GetObjectRequest(settings.getAwsS3BatchRecogInBucketName(), name);
    S3Object obj = s3Client.getObject(req);
    S3ObjectInputStream is = obj.getObjectContent();
    /******************************
     * HOW TO DO
     ******************************/
    return null;
}
Previously I tried creating a temporary file with the File.createTempFile function, but I always ran into trouble where the File object wasn't created. My previous attempt was like below:
public ZipFile readZipFile(String name) throws Exception {
    GetObjectRequest req = new GetObjectRequest(settings.getAwsS3BatchRecogInBucketName(), name);
    S3Object obj = s3Client.getObject(req);
    S3ObjectInputStream is = obj.getObjectContent();
    File temp = File.createTempFile(name, "");
    temp.setWritable(true);
    FileOutputStream fos = new FileOutputStream(temp);
    fos.write(IOUtils.toByteArray(is));
    fos.flush();
    return new ZipFile(temp);
}
Has anybody ever run into this situation? Please advise me, thanks :)
If you want to use the zip file immediately without saving it to a temporary file first, you can use java.util.zip.ZipInputStream:
import java.util.zip.ZipInputStream;
S3ObjectInputStream is = obj.getObjectContent();
ZipInputStream zis = new ZipInputStream(is);
From there on you can read through the entries of the zip file, ignoring the ones that you don't need and using the ones that you do need:
ZipEntry entry;
while ((entry = zis.getNextEntry()) != null) {
    String name = entry.getName();
    if (iWantToProcessThisEntry(name)) {
        processFile(name, zis);
    }
    zis.closeEntry();
}
public void processFile(String name, InputStream in) throws IOException { /* ... */ }
You don't need to worry about storing temporary files that way.
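Tying those fragments together, a minimal sketch of the whole method; it reuses the settings and s3Client fields from the question and the iWantToProcessThisEntry/processFile helpers above, the method name readZipEntries is illustrative, and try-with-resources is added so the S3 stream gets closed:

import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public void readZipEntries(String name) throws IOException {
    GetObjectRequest req = new GetObjectRequest(settings.getAwsS3BatchRecogInBucketName(), name);
    S3Object obj = s3Client.getObject(req);
    try (ZipInputStream zis = new ZipInputStream(obj.getObjectContent())) {
        ZipEntry entry;
        while ((entry = zis.getNextEntry()) != null) {
            if (iWantToProcessThisEntry(entry.getName())) {
                processFile(entry.getName(), zis); // reads only the current entry's bytes
            }
            zis.closeEntry();
        }
    }
}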

Assigning a destination for a properties file in a Java program

Sorry if this seems like a newbie question; I'm sure it's just a little thing I need to change, but it seems like my program cannot locate the destination of the properties file I coded in.
Here is my code:
public String metrics() throws IOException {
    String result = "";
    Properties prop = new Properties();
    String propFileName = "C:\\Users\\JChoi\\Desktop\\config.properties";
    InputStream inputStream = getClass().getClassLoader().getResourceAsStream(propFileName);
    prop.load(inputStream);
    if (inputStream == null) {
        throw new FileNotFoundException("property file '" + propFileName + "' not found in the classpath");
    }
    // get the property value and print it out
    String Metrics = prop.getProperty("Metrics");
    result = Metrics;
    System.out.println(result);
    return result;
}
I get a NullPointerException every time I run the code; however, when I put the properties file in the resources folder and change the string name to...
String propFileName = "config.properties";
...it works fine. Any suggestions?
EDIT:
String result = "";
Properties prop = new Properties();
String propFileName = "C:\\Users\\JChoi\\Desktop\\config.properties";
FileInputStream fileInputStream = getClass().getClassLoader().getResourceAsStream(propFileName);
prop.load(fileInputStream);
SOLVED!
String propFileName = "C:\\Users\\JChoi\\Desktop\\googlebatchfile\\config.properties";
BufferedInputStream inputStream;
FileInputStream fileInputStream = new FileInputStream(propFileName);
inputStream = new BufferedInputStream(fileInputStream);
If you know the full path to a file, then do not try to open it using a classpath search (which is what getResourceAsStream() does).
Instead, open the file using an input stream that takes a path.
Here is some code:
FileInputStream inputStream = new FileInputStream(propFileName);
The following might be a better technique (I'm not sure it matters for property loading):
BufferedInputStream inputStream;
FileInputStream fileInputStream = new FileInputStream(propFileName);
inputStream = new BufferedInputStream(fileInputStream);
You are attempting to load a file using a classpath-based input stream but specifying a filepath.
This:
getClass().getClassLoader().getResourceAsStream(propFileName);
will attempt to search the classpath starting at the root (based on whatever the classloader considers the root).
If you want to load a file from outside the classpath, you probably just want to use something like a FileInputStream instead.
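Putting that together, a minimal sketch of the metrics() method loading the file by its path with try-with-resources (the path and the Metrics key are taken from the question); a missing file then surfaces as a FileNotFoundException from the constructor rather than a NullPointerException:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public String metrics() throws IOException {
    Properties prop = new Properties();
    String propFileName = "C:\\Users\\JChoi\\Desktop\\config.properties";
    // FileInputStream throws FileNotFoundException if the path does not exist.
    try (InputStream inputStream = new FileInputStream(propFileName)) {
        prop.load(inputStream);
    }
    String metrics = prop.getProperty("Metrics");
    System.out.println(metrics);
    return metrics;
}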

Write to different file instead of overwriting file

I am wondering if there is an option in Java to read a file from a specific path, e.g. C:\test1.txt, change the content of the file in memory, and write it to D:\test2.txt, so that the content of C:\test1.txt does not change and only D:\test2.txt is affected.
Thanks
As a basic solution, you can read in chunks from one FileInputStream and write to a FileOutputStream:
import java.io.*;

class Test {
    public static void main(String[] args) throws Exception {
        FileInputStream inFile = new FileInputStream("test1.txt");
        FileOutputStream outFile = new FileOutputStream("test2.txt");
        byte[] buffer = new byte[128];
        int count;
        while (-1 != (count = inFile.read(buffer))) {
            // Dumb example: upper-case each byte before writing it out
            for (int i = 0; i < count; ++i) {
                buffer[i] = (byte) Character.toUpperCase(buffer[i]);
            }
            outFile.write(buffer, 0, count);
        }
        inFile.close();
        outFile.close();
    }
}
If you explicitly want the entire file in memory, you can also wrap your input in a DataInputStream and use readFully(byte[]) after using File.length() to figure out the size of the file.
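A minimal sketch of that whole-file variant (the cast to int assumes the file fits into a single byte array; the method name is illustrative):

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public static void copyUpperCase(File source, File target) throws IOException {
    byte[] content = new byte[(int) source.length()];           // whole file in memory
    try (DataInputStream in = new DataInputStream(new FileInputStream(source))) {
        in.readFully(content);                                   // blocks until the array is full
    }
    for (int i = 0; i < content.length; i++) {
        content[i] = (byte) Character.toUpperCase(content[i]);   // same dumb transformation as above
    }
    try (FileOutputStream out = new FileOutputStream(target)) {
        out.write(content);
    }
}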
I think the easiest thing you can do is to use the Scanner class to read the file and then write it out with a writer.
Here are some nice examples for different Java versions.
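A minimal sketch of that Scanner-plus-writer approach (line-based, so it assumes a text file; the method name is illustrative and the upper-casing is just a placeholder transformation):

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public static void transformCopy(File source, File target) throws IOException {
    try (Scanner scanner = new Scanner(source);
         PrintWriter writer = new PrintWriter(target)) {
        while (scanner.hasNextLine()) {
            // Change each line in memory, then write it to the second file.
            writer.println(scanner.nextLine().toUpperCase());
        }
    }
}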
Or you can also use the Apache Commons IO library to read/write/copy the file, for example:
public static void main(String args[]) throws IOException {
    // absolute path for source file to be copied
    String source = "C:/sample.txt";
    // directory where file will be copied
    String target = "C:/Test/";
    // name of source file
    File sourceFile = new File(source);
    String name = sourceFile.getName();
    File targetFile = new File(target + name);
    System.out.println("Copying file : " + sourceFile.getName() + " from Java Program");
    // copy file from one location to other
    FileUtils.copyFile(sourceFile, targetFile);
    System.out.println("copying of file from Java program is completed");
}
