I am reading a list of attachments from a system which returns each attached document as a zip in a Base64 encoded string, and my objective is to get the Base64 encoded string for each attached document.
Note: In the code below I am unzipping the zip and writing the files to my local file system.
But what I actually want is the Base64 representation of each file, without writing anything to the local file system.
public class UnzipUtility {
private static final int BUFFER_SIZE = 4096;
private static void extractFile(ZipInputStream zipIn, ZipEntry entry) throws IOException {
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream("D:/Project/"+File.separator+entry.getName()));
byte[] bytesIn = new byte[BUFFER_SIZE];
System.out.println("File Name "+entry.getName());
int read = 0;
while ((read = zipIn.read(bytesIn)) != -1) {
//Here I don't want to write to the output stream; instead I want the Base64 data for each file.
bos.write(bytesIn, 0, read);
}
bos.close();
}
public static void main(String[] args) throws IOException {
String attachmentVariable = "zip base 64 data";
byte[] bytedata = attachmentVariable.getBytes("UTF-8");
byte[] valueDecoded = Base64.decodeBase64(bytedata);
ZipInputStream zipIn = new ZipInputStream(new ByteArrayInputStream(valueDecoded));
ZipEntry entry = zipIn.getNextEntry();
// iterates over entries in the zip file
while (entry != null) { extractFile(zipIn,entry);
zipIn.closeEntry();
entry = zipIn.getNextEntry();
}
}
}
So, you have a Base64 encoded string with a zip file, and you want a Map<String, String>, where key is zip entry name and value is the Base64 encoded content.
In Java 9+, that is easily done like this:
String base64ZipFile = "zip base 64 data";
Map<String, String> base64Entries = new LinkedHashMap<>();
try (ZipInputStream zipIn = new ZipInputStream(new ByteArrayInputStream(Base64.getDecoder().decode(base64ZipFile)))) {
Encoder encoder = Base64.getEncoder();
for (ZipEntry entry; (entry = zipIn.getNextEntry()) != null; ) {
base64Entries.put(entry.getName(), encoder.encodeToString(zipIn.readAllBytes()));
}
}
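For readers on Java 8, where InputStream.readAllBytes() is not available, the same map can be built by copying each entry into a ByteArrayOutputStream first. This is a sketch; the class and method names are mine, and the main method builds a tiny zip in memory just to keep the example self-contained:

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ZipToBase64Java8 {
    public static Map<String, String> toBase64Entries(String base64ZipFile) throws IOException {
        Map<String, String> result = new LinkedHashMap<>();
        Base64.Encoder encoder = Base64.getEncoder();
        try (ZipInputStream zipIn = new ZipInputStream(
                new ByteArrayInputStream(Base64.getDecoder().decode(base64ZipFile)))) {
            byte[] buf = new byte[8192];
            for (ZipEntry entry; (entry = zipIn.getNextEntry()) != null; ) {
                // read() returns -1 at the end of the *current* entry
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                int len;
                while ((len = zipIn.read(buf)) != -1) {
                    baos.write(buf, 0, len);
                }
                result.put(entry.getName(), encoder.encodeToString(baos.toByteArray()));
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // Build a one-entry zip in memory so the example needs no external file
        ByteArrayOutputStream zipBytes = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(zipBytes)) {
            zos.putNextEntry(new ZipEntry("hello.txt"));
            zos.write("hello".getBytes("UTF-8"));
            zos.closeEntry();
        }
        String base64Zip = Base64.getEncoder().encodeToString(zipBytes.toByteArray());
        System.out.println(toBase64Entries(base64Zip)); // {hello.txt=aGVsbG8=}
    }
}
```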
To Base64 encode data as it is being written to an OutputStream, use the Encoder.wrap(OutputStream os) method.
By default, BufferedOutputStream uses an 8192-byte buffer, so if you increase BUFFER_SIZE to 8192, you won't need the BufferedOutputStream at all.
You should use try-with-resources, and the newer NIO.2 API.
Which means your code should be:
private static final int BUFFER_SIZE = 8192;
private static void extractFile(ZipInputStream zipIn, ZipEntry entry) throws IOException {
try ( OutputStream fos = Files.newOutputStream(Paths.get("D:/Project", entry.getName()));
OutputStream b64os = Base64.getEncoder().wrap(fos); ) {
System.out.println("File Name " + entry.getName());
byte[] buf = new byte[BUFFER_SIZE];
for (int len = 0; (len = zipIn.read(buf)) != -1; ) {
b64os.write(buf, 0, len);
}
}
}
Related
I am working on a project where I have to download attachments that I receive from a server. I have to extract the Base64 data, convert it to the appropriate type, and save the file. This works perfectly for images (Base64 => bytes => bitmap), but I am having trouble with other types (txt, pdf, etc.).
Try this:
try {
File sdcard = Environment.getExternalStorageDirectory();
File file = new File(sdcard,"test.pdf");
File new_file_name = new File(sdcard,"new_file.pdf");
byte[] input_file = IOUtil.readFile(file);
// android.util.Base64
byte[] encodedBytes = Base64.encode(input_file, Base64.URL_SAFE);
String encodedString = new String(encodedBytes);
byte[] decodedBytes = Base64.decode(encodedString.getBytes(), Base64.URL_SAFE);
FileOutputStream fos = new FileOutputStream(new_file_name);
fos.write(decodedBytes);
fos.flush();
fos.close();
}catch (Exception e)
{
Log.e("ERROR",e.toString());
}
And the IOUtil class:
public class IOUtil {
public static byte[] readFile(String file) throws IOException {
return readFile(new File(file));
}
public static byte[] readFile(File file) throws IOException {
// Open file
RandomAccessFile f = new RandomAccessFile(file, "r");
try {
// Get and check length
long longlength = f.length();
int length = (int) longlength;
if (length != longlength)
throw new IOException("File size >= 2 GB");
// Read file and return data
byte[] data = new byte[length];
f.readFully(data);
return data;
} finally {
f.close();
}
}
}
This code contains both the encode and decode parts.
I am using Java's ZIP archive APIs in the java.util.zip package. Some code that I am looking at in Apache NetBeans' source repository calculates the CRC32 of every entry in a ZIP archive being read using the following routine:
private long computeCRC32(InputStream is) throws IOException {
byte[] buf = new byte[4096];
CRC32 crc32 = new CRC32();
int read;
while ((read = is.read(buf)) != -1) {
crc32.update(buf, 0, read);
}
return crc32.getValue();
}
This routine is called once for each ZIP entry:
File f = new File("a.zip");
try (ZipFile zipFile = new ZipFile(f)) {
Enumeration<? extends ZipEntry> entries = zipFile.entries();
while (entries.hasMoreElements()) {
ZipEntry entry = entries.nextElement();
long crc;
try (InputStream is = zipFile.getInputStream(entry)) {
crc = computeCRC32(is);
}
// do something with `crc'
}
}
I am wondering if it would be better to simply call ZipEntry.getCrc() (if it returns something other than -1) rather than call computeCRC32().
One concern I have is that if the ZIP archive is malformed, getCrc() might return an incorrect value.
Is it possible for ZipEntry.getCrc() to return a value other than -1, and other than what computeCRC32() would calculate, for some ZIP entry, and to fully read through the malformed ZIP archive without any exception occurring?
UPDATE: I used a hex editor to alter the CRC32 stored in a local file header of a test ZIP archive. Running my test program, I did not observe an exception, but getCrc() returned the correct CRC32 rather than the altered value.
For reference, here is my test program:
import java.io.*;
import java.util.*;
import java.util.zip.*;
public class ZipCrcTest {
public static void main(String[] args) throws IOException {
File f = new File("a.zip");
try (ZipFile zipFile = new ZipFile(f)) {
Enumeration<? extends ZipEntry> entries = zipFile.entries();
while (entries.hasMoreElements()) {
ZipEntry entry = entries.nextElement();
long crc;
try (InputStream is = zipFile.getInputStream(entry)) {
crc = computeCRC32(is);
}
System.out.printf("%s %x (computed %x)\n", entry.getName(), entry.getCrc(), crc);
if (entry.getCrc() != -1L && entry.getCrc() != crc) {
System.err.printf("Crc different for %s!\n", entry.getName());
}
}
}
}
private static long computeCRC32(InputStream is) throws IOException {
byte[] buf = new byte[4096];
CRC32 crc32 = new CRC32();
int read;
while ((read = is.read(buf)) != -1) {
crc32.update(buf, 0, read);
}
return crc32.getValue();
}
}
It turns out that the answer is "Yes".
When I similarly altered the copy of the zip entry's CRC32 in the central directory portion of the ZIP archive, getCrc() returned the altered value and no exception was thrown.
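As a quick sanity check of the normal case, the following sketch (class and entry names are mine) writes a one-entry archive with ZipOutputStream and confirms that getCrc(), which ZipFile reads from the central directory, matches the value computed by the computeCRC32-style loop:

```java
import java.io.*;
import java.nio.file.*;
import java.util.zip.*;

public class ZipCrcCrossCheck {
    public static void main(String[] args) throws IOException {
        // Build a small zip in a temp file (ZipFile needs a real file)
        Path tmp = Files.createTempFile("crctest", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(tmp))) {
            zos.putNextEntry(new ZipEntry("hello.txt"));
            zos.write("hello".getBytes("UTF-8"));
            zos.closeEntry();
        }
        try (ZipFile zipFile = new ZipFile(tmp.toFile())) {
            ZipEntry entry = zipFile.getEntry("hello.txt");
            // Compute the CRC32 of the decompressed entry data, as in the question
            CRC32 crc32 = new CRC32();
            try (InputStream is = zipFile.getInputStream(entry)) {
                byte[] buf = new byte[4096];
                for (int n; (n = is.read(buf)) != -1; ) {
                    crc32.update(buf, 0, n);
                }
            }
            // For a well-formed archive the two values agree
            System.out.printf("stored=%x computed=%x%n", entry.getCrc(), crc32.getValue());
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```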
I'm kind of stuck with this problem.
I have this FileDetails class which stores the details/metadata of a file along with the complete file in a byte array. I want to send the FileDetails object over the network inside an ObjectOutputStream, where the receiver will simply read the object and cast it back to FileDetails.
Here is the code:
class FileDetails {
private String fileName;
private long fileSize;
private byte[] fileData;
public FileDetails(String fileName, long fileSize, byte[] fileData) {
this.fileName = fileName;
this.fileSize = fileSize;
this.fileData = fileData;
}
public String getFileName() {
return fileName;
}
public long getFileSize() {
return fileSize;
}
public byte[] getFileData() {
return fileData;
}
}
File file = new File("C://test.dat");
RandomAccessFile randFileAccess = new RandomAccessFile(file, "r");
byte[] buff = new byte[(int) file.length()];
randFileAccess.readFully(buff);
FileDetails fd = new FileDetails(file.getName(), file.length(), buff);
FileOutputStream fos = new FileOutputStream("C://oos.dat");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(fd);
oos.write(buff);
The problem is that the file "test.dat" is quite large, and it's not optimal to read it fully into a (very large) buffer in one go. I could read the file in chunks, but that would require me to create a file and save the data to disk, which I cannot do since FileDetails takes a byte array.
How can I solve this problem? I want to keep this approach, i.e., storing the data as a byte array in a FileDetails object and writing it through an ObjectOutputStream, because I will be appending an mp3 file in front of the ObjectOutputStream file and sending it over the internet.
Any suggestions? Or an alternative approach?
Edit: Actually I am developing an Android app. It stores the metadata of the file in a FileDetails object along with the file data in a byte array.
This FileDetails object is written out through an ObjectOutputStream. Then a specific mp3 file is prepended to this serialized file, which is used to recognize that the file was sent by my app.
This combined mp3 file (which contains the "hidden" serialized data) is sent via a "popular" messaging app to the receiver.
The receiver downloads the mp3 file through the "popular" messaging app. Now my app comes into action: it recognizes the mp3 file, extracts the serialized data from it, casts it back to FileDetails, and retrieves the original file with its metadata.
Is my approach correct? Is there any other way to recognize my appended/hidden file?
Thanks a lot in advance.
Is it possible to add the class to the receiver?
Then you could try something like this:
File file = new File("C://test.dat");
InputStream in = null;
try {
    in = new BufferedInputStream(new FileInputStream(file));
    // ... read from `in' and send the bytes over the network ...
} finally {
    if (in != null) {
        in.close();
    }
}
The receiver could just write the bytes to a temporary file (and not hold them in memory):
InputStream is = socket.getInputStream();
FileOutputStream fos = new FileOutputStream("C://test.dat");
BufferedOutputStream bos = new BufferedOutputStream(fos);
//something like:
byte[] buffer = new byte[1024];
int len = is.read(buffer);
while (len != -1) {
bos.write(buffer, 0, len);
len = is.read(buffer);
}
and when the operation is finished, instantiate the object: FileDetails fd = new FileDetails(the file you just created, ...)
You can also send the class definition over network, if you must.
Here I've added read/writeObject methods:
class FileDetails implements Serializable {
private static final int CHUNK_LEN = 0x10000; // 64k
private String fileName;
private long fileSize;
private File file;
// Note: everything can be deduced from a File object
public FileDetails(File file) {
this.fileName = file.getName();
this.fileSize = file.length();
this.file = file;
}
public String getFileName() {
return fileName;
}
public long getFileSize() {
return fileSize;
}
// explicit coding for reading a FileDetails object
private void readObject(ObjectInputStream stream)
throws IOException, ClassNotFoundException {
fileName = stream.readUTF(); // file name
fileSize = stream.readLong(); // file size
// file data as a series of byte[], length CHUNK_LEN
long toRead = fileSize;
// write file data to a File object, same path name
file = new File( fileName );
OutputStream os = new FileOutputStream( file );
while( toRead > 0 ){
// last byte arrays may be shorter than CHUNK_LEN
int chunkLen = toRead > CHUNK_LEN ? CHUNK_LEN : (int)toRead;
byte[] bytes = new byte[chunkLen];
int nread = stream.read( bytes );
if( nread < 0 )
    throw new EOFException( "unexpected end of stream" );
// write data to file
os.write( bytes, 0, nread );
toRead -= nread;
}
os.close();
}
// explicit coding for writing a FileDetails object
private void writeObject(ObjectOutputStream stream)
throws IOException {
stream.writeUTF( fileName ); // file name as an "UTF string"
stream.writeLong( fileSize ); // file size
// file data as a series of byte[], length CHUNK_LEN
long toWrite = fileSize;
// read file data from the File object passed to the constructor
InputStream is = new FileInputStream( file );
while( toWrite > 0 ){
// last byte[] may be shorter than CHUNK_LEN
int chunkLen = toWrite > CHUNK_LEN ? CHUNK_LEN : (int)toWrite;
byte[] bytes = new byte[chunkLen];
int nread = is.read( bytes );
stream.write( bytes, 0, nread );
toWrite -= nread;
}
is.close();
}
private void readObjectNoData()
throws ObjectStreamException {
}
}
I've tested this with a short file:
File file = new File( "test.dat" );
FileDetails fd = new FileDetails( file );
FileOutputStream fos = new FileOutputStream("oos.dat");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject( fd );
oos.close();
// test on a local system: rename test.dat to avoid overwriting
file.renameTo( new File( "test.dat.sav" ) );
FileInputStream fis = new FileInputStream("oos.dat");
ObjectInputStream ois = new ObjectInputStream(fis);
FileDetails fd1 = (FileDetails)ois.readObject();
ois.close();
// now the file test.dat has been rewritten under the same path,
// i.e., test.dat exists again and test.dat.sav == test.dat
I'm not sure whether the receiver will be happy with some file being written according to a path name being sent in the message.
I want to read data from, let's say, 4 zip files called zip1, zip2, zip3, and zip4. All of these were split from one big zip file called "BigZip". I want to combine the zip files into one and then compare the bytes, checking whether BigZip matches the combined file (zip1+zip2+zip3+zip4) in size. I am getting a very small file when I combine the 4 zip files. What am I doing wrong?
Here is my code for the same:
targetFilePath1, targetFilePath2, targetFilePath3, and targetFilePath4 are the paths of the 4 zip files.
sourceFilePath is the path to the BigZip file.
class Test {
public static void main(String args[]) {
ZipOutputStream outStream = new ZipOutputStream(new FileOutputStream(sourceBigZip));
readZip(sourceFilePath, targetFilePath1);
readZip(sourceFilePath, targetFilePath2);
readZip(sourceFilePath, targetFilePath3);
readZip(sourceFilePath, targetFilePath4);
outStream.close();
}
static void readZip(String sourceBigZip, String targetFile) throws Exception {
ZipInputStream inStream = new ZipInputStream(new FileInputStream(targetFile));
byte[] buffer = new byte[1024];
int len = inStream.read(buffer);
while (len != -1) {
outStream.write(buffer, 0, len);
len = inStream.read(buffer);
System.out.print(len);
}
inStream.close();
}
}
Create ZipOutputStream once and pass it to readZip() method, like:
public static void main(String args[]) throws Exception {
    try (ZipOutputStream outStream = new ZipOutputStream(new FileOutputStream(sourceFilePath))) {
        readZip(outStream, targetFilePath1);
        readZip(outStream, targetFilePath2);
        readZip(outStream, targetFilePath3);
        readZip(outStream, targetFilePath4);
    } // closing the stream writes the zip's central directory
}
Then there is an error in how you copy the data from one zip to another.
You need to copy each file in the zip file like this:
static void readZip(ZipOutputStream outStream, String targetFile)
throws Exception {
ZipInputStream inStream = new ZipInputStream(new FileInputStream(
targetFile));
byte[] buffer = new byte[1024];
int len = 0;
for (ZipEntry e; (e = inStream.getNextEntry()) != null;) {
outStream.putNextEntry(e);
while ((len = inStream.read(buffer)) > 0) {
outStream.write(buffer, 0, len);
}
}
inStream.close();
}
Every time you call new ZipOutputStream, it creates a new empty file, and wipes out everything you have written to it before.
You have to create the stream outside of readZip, and pass it in to each call rather than creating a new stream every time.
Basically I compress video using the customized Compressor class in Java. I have assembled my complete code snippets here. My actual problem is that the video [A.mp4] generated from the decompressed byte array does not play. I got this Compressor class code from the internet, and as I am new to the Java platform, I am struggling to resolve this problem. Could anyone please help me with this?
public class CompressionTest
{
public static void main(String[] args)
{
Compressor compressor = new Compressor();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
FileInputStream fis=null;
File file=null;
try
{
URL uri=CompressionTest.class.getResource("/Files/Video.mp4");
file=new File(uri.getPath());
fis = new FileInputStream(file);
}
catch ( FileNotFoundException fnfe )
{
System.out.println( "Unable to open input file");
}
try
{
byte[] videoBytes = getBytesFromFile(file);
System.out.println("CompressionVideoToCompress is: '" +videoBytes + "'");
byte[] bytesCompressed = compressor.compress(videoBytes);
System.out.println("bytesCompressed is: '" +bytesCompressed+ "'");
byte[] bytesDecompressed=compressor.decompress(bytesCompressed);
System.out.println("bytesDecompressed is: '" +bytesDecompressed+ "'");
FileOutputStream out = new FileOutputStream("A.mp4");
out.write(bytesDecompressed,0,bytesDecompressed.length-1);
out.close();
}
catch (IOException e)
{
// TODO Auto-generated catch block
System.out.println("bytesCompressed is: '");
}
}
public static byte[] getBytesFromFile(File file) throws IOException
{
InputStream is = new FileInputStream(file);
// Get the size of the file
long length = file.length();
// You cannot create an array using a long type.
// It needs to be an int type.
// Before converting to an int type, check
// to ensure that file is not larger than Integer.MAX_VALUE.
if (length > Integer.MAX_VALUE) {
// File is too large
}
// Create the byte array to hold the data
byte[] bytes = new byte[1064];
// Read in the bytes
int offset = 0;
int numRead = 0;
while (offset < bytes.length
&& (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0)
{
offset += numRead;
}
// Ensure all the bytes have been read in
if (offset < bytes.length) {
throw new IOException("Could not completely read file "+file.getName());
}
// Close the input stream and return bytes
is.close();
return bytes;
}
}
class Compressor
{
public Compressor()
{}
public byte[] compress(byte[] bytesToCompress)
{
Deflater deflater = new Deflater();
deflater.setInput(bytesToCompress);
deflater.finish();
byte[] bytesCompressed = new byte[Short.MAX_VALUE];
int numberOfBytesAfterCompression = deflater.deflate(bytesCompressed);
byte[] returnValues = new byte[numberOfBytesAfterCompression];
System.arraycopy
(
bytesCompressed,
0,
returnValues,
0,
numberOfBytesAfterCompression
);
return returnValues;
}
public byte[] decompress(byte[] bytesToDecompress)
{
Inflater inflater = new Inflater();
int numberOfBytesToDecompress = bytesToDecompress.length;
inflater.setInput
(
bytesToDecompress,
0,
numberOfBytesToDecompress
);
int compressionFactorMaxLikely = 3;
int bufferSizeInBytes =
numberOfBytesToDecompress
* compressionFactorMaxLikely;
byte[] bytesDecompressed = new byte[bufferSizeInBytes];
byte[] returnValues = null;
try
{
int numberOfBytesAfterDecompression = inflater.inflate(bytesDecompressed);
returnValues = new byte[numberOfBytesAfterDecompression];
System.arraycopy
(
bytesDecompressed,
0,
returnValues,
0,
numberOfBytesAfterDecompression
);
}
catch (DataFormatException dfe)
{
dfe.printStackTrace();
}
inflater.end();
return returnValues;
}
}
I've tested your code by compressing and decompressing a simple TXT file. The code is broken, since the compressed file, when uncompressed, is different from the original one.
Take for granted that the code is broken at least in the getBytesFromFile function. Its logic is tricky and troublesome: it reads at most 1064 bytes, so longer files are silently truncated and the IOException check never fires for them, while files shorter than 1064 bytes trigger the exception instead.
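For what it's worth, since Java 7 reading a whole file into a byte array is a one-liner, which sidesteps the broken getBytesFromFile entirely. A minimal sketch (the helper name and the Video.mp4 path are just illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReadAllBytesExample {
    // Reads the entire file into memory; not usable for files of 2 GB or more,
    // since Java arrays are int-indexed
    static byte[] readWholeFile(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }

    public static void main(String[] args) throws IOException {
        byte[] videoBytes = readWholeFile("Video.mp4"); // path from the question
        System.out.println("Read " + videoBytes.length + " bytes");
    }
}
```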
What you are trying to achieve (file compression/decompression) can be done this way. I've tested it and it works, you just need this library.
import java.io.*;
import java.util.zip.*;
import org.apache.commons.io.IOUtils; // <-- get this from http://commons.apache.org/io/index.html
public class CompressionTest2 {
public static void main(String[] args) throws IOException {
File input = new File("input.txt");
File output = new File("output.bin");
Compression.compress(input, output);
File input2 = new File("input2.txt");
Compression.decompress(output, input2);
// At this point, input.txt and input2.txt should be equal
}
}
class Compression {
public static void compress(File input, File output) throws IOException {
FileInputStream fis = new FileInputStream(input);
FileOutputStream fos = new FileOutputStream(output);
GZIPOutputStream gzipStream = new GZIPOutputStream(fos);
IOUtils.copy(fis, gzipStream);
gzipStream.close();
fis.close();
fos.close();
}
public static void decompress(File input, File output) throws IOException {
FileInputStream fis = new FileInputStream(input);
FileOutputStream fos = new FileOutputStream(output);
GZIPInputStream gzipStream = new GZIPInputStream(fis);
IOUtils.copy(gzipStream, fos);
gzipStream.close();
fis.close();
fos.close();
}
}
This code doesn't come from "credible and/or official sources", but at least it works. :)
Moreover, in order to get more answers, adjust the title to state your real problem: your compressed files don't decompress correctly. There is no 'video'-specific issue here. Also, zipping a .mp4 file achieves very little, since the format is already compressed (the ratio will likely be around 99.99%).
Two tips:
1) Replace getBytesFromFile with a well-known API call, either from Apache Commons (IOUtils) or from Java 7, which now provides such a method too.
2) Test compress and decompress by writing a JUnit test:
create a random huge byte array, write it out, read it back, and compare it with the original.
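Such a round-trip test could be sketched like this, using a plain main with an explicit check instead of JUnit, and DeflaterOutputStream/InflaterInputStream instead of the hand-rolled buffers, so the library manages the output sizing:

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class RoundTripTest {
    public static void main(String[] args) throws IOException {
        // 1. Create a random "huge" byte array (1 MB here)
        byte[] original = new byte[1024 * 1024];
        new Random(42).nextBytes(original);

        // 2. Compress it
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(compressed)) {
            dos.write(original);
        }

        // 3. Decompress it
        ByteArrayOutputStream decompressed = new ByteArrayOutputStream();
        try (InflaterInputStream iis = new InflaterInputStream(
                new ByteArrayInputStream(compressed.toByteArray()))) {
            byte[] buf = new byte[8192];
            for (int n; (n = iis.read(buf)) != -1; ) {
                decompressed.write(buf, 0, n);
            }
        }

        // 4. Compare with the original
        if (!Arrays.equals(original, decompressed.toByteArray())) {
            throw new AssertionError("round trip changed the data");
        }
        System.out.println("round trip OK");
    }
}
```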