ZipOutputStream generates a corrupted zip file - Java

I use the code below to generate a zip file and return it to the frontend. It works fine when the files are small. However, when the compressed output exceeds 1 GB, the downloaded file is sometimes corrupted and contains fewer entries than expected. The problem does not occur every time; sometimes downloading a 3.2 GB file works fine again.
Looking forward to hearing from everyone.
outputStream = response.getOutputStream();
ZipOutputStream zipOutStream = null;
FileInputStream filenputStream = null;
BufferedInputStream bis = null;
try {
    zipOutStream = new ZipOutputStream(new BufferedOutputStream(outputStream));
    zipOutStream.setMethod(ZipOutputStream.DEFLATED);
    for (int i = 0; i < files.size(); i++) {
        File file = files.get(i);
        filenputStream = new FileInputStream(file);
        bis = new BufferedInputStream(filenputStream);
        zipOutStream.putNextEntry(new ZipEntry(fileName[i]));
        int len = 0;
        byte[] bs = new byte[40];
        while ((len = bis.read(bs)) != -1) {
            zipOutStream.write(bs, 0, len);
        }
        bis.close();
        filenputStream.close();
        zipOutStream.closeEntry();
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (Objects.nonNull(filenputStream)) {
            filenputStream.close();
        }
        if (Objects.nonNull(bis)) {
            bis.close();
        }
        if (Objects.nonNull(zipOutStream)) {
            zipOutStream.flush();
            zipOutStream.close();
        }
        if (Objects.nonNull(outputStream)) {
            outputStream.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

One potential issue with your code is resource leaks: streams are not properly flushed and closed if the code runs into an exception.
One way to reduce these leaks and to ensure all resources are properly closed, even when exceptions occur, is to use try-with-resources.
Here is an example of how your code can be rewritten to use try-with-resources.
To get it to compile in a detached IDE environment, I've put it in a method named testStream, since I don't know where the parameters files and fileName come from. Validation of these two parameters is therefore left out; it is assumed that both have the same number of entries and that they are paired with each other (element 0 of each belongs together). Since that relationship exists, they are linked by the variable i to mirror the original source code.
public void testStream( HttpServletResponse response, List<File> files, String[] fileName )
{
    int i = 0;
    try (
        ServletOutputStream outputStream = response.getOutputStream();
        ZipOutputStream zipOutStream = new ZipOutputStream( new BufferedOutputStream( outputStream ) );
    )
    {
        zipOutStream.setMethod( ZipOutputStream.DEFLATED );
        for ( File file : files )
        {
            try (
                BufferedInputStream bis = new BufferedInputStream(
                        new FileInputStream( file ) );
            ) {
                zipOutStream.putNextEntry( new ZipEntry( fileName[i++] ) );
                int len = 0;
                byte[] bs = new byte[4096];
                while ( (len = bis.read( bs )) != -1 )
                {
                    zipOutStream.write( bs, 0, len );
                }
                // optional since putNextEntry will auto-close an open entry:
                zipOutStream.closeEntry();
            }
        }
    }
    catch ( IOException e )
    {
        e.printStackTrace();
    }
}
This compiles successfully and should work, but it may need some adjustments for your environment and your data. Hopefully it eliminates the issues you may have been having with closing streams and other resources.
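As a side note, it may also help to set the response headers before the zip is written, so the browser treats the result as a file download. The calls below are standard HttpServletResponse methods; the filename is only an example:
// Hypothetical caller of testStream; adjust the filename to your needs.
response.setContentType("application/zip");
response.setHeader("Content-Disposition", "attachment; filename=\"files.zip\"");
testStream(response, files, fileName);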

Related

Java Heap OutOfMemory on ByteArrayOutputStream for deflater zip bytes

I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I compress it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
    OutputStream ej = new DeflaterOutputStream(bos);
    ej.write(input);
    ej.close();
    bos.close();
    return bos.toByteArray();
}
Recently I retrieved an 80 MB message from the DB, and when executing the code above an OutOfMemoryError is thrown on the ByteArrayOutputStream line. My Java program only has 512 MB of memory for the whole process and I can't give it more.
How can I solve this?
This is not a duplicate question. I can't increase the heap size.
EDIT:
This is the flow of my Java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); //rs is a valid resultset
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) { // only rows with content > 60. I need to know the size of the message, so if I use rs.getBinaryStream I cannot know the size, can I?
        if (bt.length >= 10000000) {
            // here I need to zip the byte array before sending it, so
            bt = zipBytes(bt); // this method is above
            // then publish bt to my broker
        } else {
            // here publish byte array to my broker
        }
    }
}
EDIT
I've tried with PipedInputStream, and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
    PipedInputStream zipped = new PipedInputStream();
    PipedOutputStream pipe = new PipedOutputStream(zipped);
    new Thread(
        () -> {
            try (OutputStream zipper = new DeflaterOutputStream(pipe)) {
                IOUtils.copy(in, zipper);
                zipper.flush();
            } catch (IOException e) {
                IOUtils.closeQuietly(zipped); // close it on error case only
                e.printStackTrace();
            } finally {
                IOUtils.closeQuietly(in);
                IOUtils.closeQuietly(zipped);
                IOUtils.closeQuietly(pipe);
            }
        }
    ).start();
    return zipped;
}
How can I compress my InputStream with Deflate?
EDIT
That information is sent to JMS in Universal Messaging Server by Software AG. This uses a Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html). The data is stored in nConsumeEvent objects, and the documentation shows only two ways to pass that information:
nConsumeEvent (String tag, byte[] data)
nConsumeEvent (String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The code for connection is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if (!mySession.isConnected()) {
    mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); // This is a topic
nChannel myChannel = mySession.findChannel(chaAttr);
List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) {
        nEventProperties prop = new nEventProperties();
        if (bt.length > 10000000) {
            bt = compressData(bt); // here I need to compress data without ByteArrayInputStream
            prop.put("isZip", "true");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        } else {
            prop.put("isZip", "false");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        }
    }
    nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
    nTransaction myTransaction = nTransactionFactory.create(tattrib);
    Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
    myTransaction.publish(m);
    myTransaction.commit();
}
Because of the API restriction, at the end of the day I need to send the information as a byte array, but if that is the only byte array in my code, that's OK. How can I compress the byte array, or an InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first solution from @VGR and it works fine.
Just one thing: the SELECT query runs inside a while(true) loop, like this:
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); //rs is a valid resultset
    // all that implementation you know from this entire post
    Thread.sleep(10000);
}
So, a SELECT is executed every few seconds.
But I've run a test with my program and the memory just increases with each iteration. Why? If the information the database returns is the same on each request, shouldn't the memory stay about the same as on the first request? Or did I maybe forget to close a stream?
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); //rs is a valid resultset
    while (rs.next()) {
        //byte[] bt = rs.getBytes("data");
        byte[] bt;
        try (BufferedInputStream source = new BufferedInputStream(
                rs.getBinaryStream("data"), 10_000_001)) {
            source.mark(this.zip + 1);
            boolean sendToBroker = true;
            boolean needCompression = true;
            for (int i = 0; i <= 10_000_000; i++) {
                if (source.read() < 0) {
                    sendToBroker = (i > 60);
                    needCompression = (i >= this.zip);
                    break;
                }
            }
            if (sendToBroker) {
                nEventProperties prop = new nEventProperties();
                // Rewind stream
                source.reset();
                if (needCompression) {
                    // "Message size larger than expected. Compressing message"
                    System.out.println("Tamaño del mensaje mayor al esperado. Comprimiendo mensaje");
                    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
                    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
                        IOUtils.copy(source, brokerStream);
                    }
                    bt = byteStream.toByteArray();
                    prop.put("zip", "true");
                } else {
                    bt = IOUtils.toByteArray(source);
                }
                System.out.println("size: " + bt.length);
                prop.put("host", this.host);
                nConsumeEvent ncon = new nConsumeEvent("" + rs.getInt("xid"), bt);
                ncon.setProperties(prop);
                messages.add(ncon);
            }
        }
    }
}
For example, this is the heap memory on two runs: the first one used above 500 MB and the second one (with the same information from the database) used above 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
    try (BufferedInputStream source = new BufferedInputStream(
            rs.getBinaryStream("data"), 10_000_001)) {
        source.mark(10_000_001);
        boolean sendToBroker = true;
        boolean needCompression = true;
        for (int i = 0; i <= 10_000_000; i++) {
            if (source.read() < 0) {
                sendToBroker = (i > 60);
                needCompression = (i >= 10_000_000);
                break;
            }
        }
        if (sendToBroker) {
            // Rewind stream
            source.reset();
            if (needCompression) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
    Blob data = rs.getBlob("data");
    long length = data.length();
    if (length > 60) {
        try (InputStream source = data.getBinaryStream()) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
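If you prefer the query-side variant of the second choice (reading the size as part of the query), a sketch for PostgreSQL might look like the following. The use of octet_length assumes the column is bytea; the table and column names are taken from the question:
// Sketch only: fetch the size alongside the data so the 10 MB buffer is not needed.
rs = stmt.executeQuery("SELECT octet_length(data) AS data_len, data FROM tbl_alotofinfo");
while (rs.next()) {
    long length = rs.getLong("data_len");
    if (length > 60) {
        try (InputStream source = rs.getBinaryStream("data")) {
            // same compress-or-not logic as above, chosen by length >= 10000000
        }
    }
}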
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
It should be:
byte[] bt;
if (needCompression) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
} else {
    bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
with this:
byte[] bt;
if (length >= 10000000) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
    prop.put("isZip", "true");
} else {
    bt = source.readAllBytes();
    prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);

Compressing a file in independent chunks and then concatenating them into one valid archive

I wonder if it is possible to compress an arbitrary file (or folder, or any other file structure) by independent chunks and then get a valid archive (e.g. gzip) by concatenating them together. Some requirements:
java 8
chunks <= 16MB
folder structure does not change during the process
chunks are compressed independently, but order is preserved
each compressed chunk is appended to the end of the resulting archive
resulting archive should be valid and decompressable by any standard tool
It looks like, to achieve that, I would need to create an archive header first and then just append compressed blocks to it (https://www.rfc-editor.org/rfc/rfc1952); however, I'm not sure whether this is supported by any of the standard Java utilities or by third-party libraries. Does anybody have any ideas on where to start?
Some background:
I have a client-server app that allows users to upload files to cloud storage. Communication is via a REST API; the client side is going to be responsible for dividing files into chunks and uploading them one by one. It is possible to do the compression in the browser, but I wonder if we can move that load to the backend.
Yes. A concatenation of gzip files is a valid gzip file, per the standard (RFC 1952). gzip certainly handles this.
You are correct to be concerned that some code out there might not support it, since it is not very common to have concatenated gzip members. If you want to be super-safe, you can combine the gzip files into a single gzip member, without having to recompress. You do however need to read through all of the compressed data, effectively decompressing it in memory (which is still much faster than compressing). You can find an example of that in gzjoin.c.
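If it helps, here is a small self-contained sketch of the concatenation idea in plain Java 8. It assumes each chunk was written as a complete gzip member (for example with GZIPOutputStream) and relies on java.util.zip.GZIPInputStream accepting multi-member streams; the file names are placeholders:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import java.util.List;
import java.util.zip.GZIPInputStream;

public class GzipConcatDemo {
    public static void main(String[] args) throws IOException {
        // Placeholder chunk names; each chunk is assumed to be a complete gzip member.
        List<Path> chunks = Arrays.asList(Paths.get("part0.gz"), Paths.get("part1.gz"));
        Path combined = Paths.get("combined.gz");

        // Plain byte-level concatenation of the compressed chunks.
        try (OutputStream out = Files.newOutputStream(combined)) {
            for (Path chunk : chunks) {
                Files.copy(chunk, out);
            }
        }

        // GZIPInputStream reads the concatenated members back as one continuous stream.
        try (InputStream in = new GZIPInputStream(Files.newInputStream(combined))) {
            Files.copy(in, Paths.get("restored.bin"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}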
You can try something like this for tar + gzip:
Maven dependency:
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-compress</artifactId>
    <version>1.18</version>
</dependency>
Java code to compress into chunks:
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
import org.apache.commons.compress.utils.IOUtils;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;

[..]

private static final int MAX_CHUNK_SIZE = 16000000;

public void compressTarGzChunks(String inputDirPath, String outputDirPath) throws Exception {
    PipedInputStream in = new PipedInputStream();
    final PipedOutputStream out = new PipedOutputStream(in);
    new Thread(() -> {
        try {
            int chunkIndex = 0;
            int n = 0;
            byte[] buffer = new byte[8192];
            do {
                String chunkFileName = String.format("archive-part%d.tar.gz", chunkIndex);
                try (OutputStream fOut = Files.newOutputStream(Paths.get(outputDirPath, chunkFileName));
                     BufferedOutputStream bOut = new BufferedOutputStream(fOut);
                     GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(bOut)) {
                    int currentChunkSize = 0;
                    if (chunkIndex > 0) {
                        gzOut.write(buffer, 0, n);
                        currentChunkSize += n;
                    }
                    while ((n = in.read(buffer)) != -1 && currentChunkSize + n < MAX_CHUNK_SIZE) {
                        gzOut.write(buffer, 0, n);
                        currentChunkSize += n;
                    }
                    chunkIndex++;
                }
            } while (n != -1);
            in.close();
        } catch (IOException e) {
            // logging and exception handling should go here
        }
    }).start();
    try (TarArchiveOutputStream tOut = new TarArchiveOutputStream(out)) {
        compressTar(tOut, inputDirPath, "");
    }
}
private static void compressTar(TarArchiveOutputStream tOut, String path, String base)
        throws IOException {
    File file = new File(path);
    String entryName = base + file.getName();
    TarArchiveEntry tarEntry = new TarArchiveEntry(file, entryName);
    tarEntry.setSize(file.length());
    tOut.putArchiveEntry(tarEntry);
    if (file.isFile()) {
        try (FileInputStream in = new FileInputStream(file)) {
            IOUtils.copy(in, tOut);
            tOut.closeArchiveEntry();
        }
    } else {
        tOut.closeArchiveEntry();
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                compressTar(tOut, child.getAbsolutePath(), entryName + "/");
            }
        }
    }
}
Java code to concatenate the chunks into a single archive:
public void concatTarGzChunks(List<InputStream> sortedTarGzChunks, String outputFile) throws IOException {
    try {
        try (FileOutputStream fos = new FileOutputStream(outputFile)) {
            for (InputStream in : sortedTarGzChunks) {
                int len;
                byte[] buf = new byte[1024 * 1024];
                while ((len = in.read(buf)) != -1) {
                    fos.write(buf, 0, len);
                }
            }
        }
    } finally {
        sortedTarGzChunks.forEach(is -> {
            try {
                is.close();
            } catch (IOException e) {
                // logging and exception handling should go here
            }
        });
    }
}
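For completeness, here is a rough sketch of how the two methods above might be called together; the paths are placeholders, the imports from the listing above are assumed to be in scope, and the chunk files just need to be opened in the order they were produced:
// Hypothetical usage; adjust the paths to your environment.
public void archiveInChunksAndRejoin() throws Exception {
    compressTarGzChunks("/data/input-folder", "/data/chunks");

    List<InputStream> chunks = new ArrayList<>();
    int part = 0;
    File chunkFile = new File("/data/chunks", "archive-part0.tar.gz");
    while (chunkFile.exists()) {
        chunks.add(new FileInputStream(chunkFile));
        part++;
        chunkFile = new File("/data/chunks", String.format("archive-part%d.tar.gz", part));
    }
    concatTarGzChunks(chunks, "/data/archive.tar.gz");
}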

The gzip compressor does not proceed through the while loop

I am trying to query data from a database with Java JDBC and compress the data in one column into a gzip file in a specific directory. I have tested my JDBC query and it works fine, but the gzip code does not proceed through the while loop; it runs for the first row of the loop and gets stuck there. Why is it stuck? Please help me!
These folders already exist: D:\Directory\My\year\id1\id2
// Some JDBC query code here, it works well. I query all rows: Data, year, id1, id2, id3
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1 + "\\" + id2 + "\\" + id3 + ".gzip");
    GZIPOutputStream gos = null;
    InputStream is = new ByteArrayInputStream(str.getBytes());
    gos = new GZIPOutputStream(new FileOutputStream(myGzipFile));
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
        gos.write(buffer, 0, len);
        System.out.print("done for:" + id3);
    }
    try { gos.close(); } catch (IOException e) { }
}
Try formatting the source like this to catch exceptions.
public class InputStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream is = null;
        int i;
        char c;
        try {
            is = new FileInputStream("C://test.txt");
            System.out.println("Characters printed:");
            // reads till the end of the stream
            while ((i = is.read()) != -1) {
                // converts integer to character
                c = (char) i;
                // prints character
                System.out.print(c);
            }
        } catch (Exception e) {
            // if any I/O error occurs
            e.printStackTrace();
        } finally {
            // releases system resources associated with this stream
            if (is != null)
                is.close();
        }
    }
}

Setting permissions for created directory to copy files into it

During execution, my program creates a directory containing two sub-directories (two folders). Into one of these folders I need to copy a JAR file. My program resembles an installation routine. Copying the JAR file is not the problem here, but the permissions of the created directories are.
I tried to set the permissions of the directories (before actually creating them with the mkdirs() method) with File.setWritable(true, false) and also with the setExecutable and setReadable methods, but access to the sub-directories is still denied.
Here's an excerpt of my code for the creation of one of the two sub-directories:
folderfile = new File("my/path/to/directory");
folderfile.setExecutable(true, false);
folderfile.setReadable(true, false);
folderfile.setWritable(true, false);
result = folderfile.mkdirs();
if (result) {
    System.out.println("Folder created.");
} else {
    JOptionPane.showMessageDialog(chooser, "Error");
}
File source = new File("src/config/TheJar.jar");
File destination = folderfile;
copyJar(source, destination);
And my "copyJar" method:
private void copyJar(File source, File dest) throws IOException {
    InputStream is = null;
    OutputStream os = null;
    try {
        is = new FileInputStream(source);
        os = new FileOutputStream(dest);
        byte[] buffer = new byte[1024];
        int length;
        while ((length = is.read(buffer)) > 0) {
            os.write(buffer, 0, length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    is.close();
    os.close();
}
At os = new FileOutputStream(dest); the debugger throws a FileNotFoundException with the message that access to the directory has been denied.
Does anyone have an idea what I am doing wrong or have a better solution for setting the permissions via Java? Thanks in advance!
A similar question was asked several years ago.
A possible solution for Java 7 and Unix systems is available here: How do i programmatically change file permissions?
Or, below the top answer there, an example using JNA.
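If you are on Java 7 or later and a POSIX file system, a minimal sketch of that approach could look like this (the path and permission string are only examples):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class DirPermissions {
    public static void main(String[] args) throws Exception {
        // Example path and permission string; adjust to your installation layout.
        Path dir = Paths.get("my/path/to/directory");
        Files.createDirectories(dir);
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwxrwxr-x");
        Files.setPosixFilePermissions(dir, perms);
    }
}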
I hope that helps you!
I solved the problem. In the end it was much easier than expected.
The main problem was not a permission issue but the FileNotFoundException: the File handed to the OutputStream is not really a file but just a directory, so the stream can't open it. You have to create the target file before initializing the OutputStream, and then you copy your source file into the newly created file. The code:
private void copyJar(File source, File dest) throws IOException {
    InputStream is = null;
    File dest2 = new File(dest + "/TheJar.jar");
    dest2.createNewFile();
    OutputStream os = null;
    try {
        is = new FileInputStream(source);
        os = new FileOutputStream(dest2);
        byte[] buffer = new byte[1024];
        int length;
        while ((length = is.read(buffer)) > 0) {
            os.write(buffer, 0, length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    is.close();
    os.close();
}

ZipException: error in opening zip file

I'm working on a method that takes a zipped file, unzips it, and returns a new directory containing all the unzipped files. The goal is then to take that directory, extract an Excel document from it, and convert it into a Workbook class I built (which is fully unit tested and works fine). The problem is that I'm getting the following exception:
java.util.zip.ZipException: error in opening zip file
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:215)
    at java.util.zip.ZipFile.<init>(ZipFile.java:145)
    at java.util.zip.ZipFile.<init>(ZipFile.java:159)
    at com.atd.core.datamigrator.BulkImageUpload.createWorkbook(BulkImageUpload.java:54)
    at com.atd.core.datamigrator.BulkImageUpload.importImages(BulkImageUpload.java:38)
    at com.atd.core.datamigrator.BulkImageUpload.main(BulkImageUpload.java:236)
Here is my code
private Workbook createWorkbook(File file) {
    File unZipedFile = unZip(file);
    File[] files = unZipedFile.listFiles();
    Workbook wBook = null;
    for (int i = 0; i < files.length; i++) {
        if (files[i].getName().contains(".xls")) {
            try {
                File f = files[i];
                ZipFile zip = new ZipFile(f);
                wBook = new Workbook(zip);
            } catch (IOException e) {
                e.printStackTrace();
            }
            break;
        }
    }
    return wBook;
}
private File unZip(File input) {
    File output = new File("unzippedFile");
    OutputStream out = null;
    try {
        ZipFile zipFile = new ZipFile(input);
        Enumeration<? extends ZipEntry> entries = zipFile.entries();
        while (entries.hasMoreElements()) {
            ZipEntry entry = entries.nextElement();
            File entryDestination = new File(output, entry.getName());
            entryDestination.getParentFile().mkdirs();
            InputStream in = zipFile.getInputStream(entry);
            ZipInputStream zis = new ZipInputStream(in);
            out = new FileOutputStream(entryDestination);
            out.write(zis.read());
            out.flush();
            out.close();
        }
    } catch (FileNotFoundException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return output;
}
I know this is a problem with the unzip method because when I use File f = new File("some path") instead of using the unzipped file, it works fine.
Also, File I/O was never my strong point, so be nice :)
Okay, I now believe that this is the problem:
ZipInputStream zis = new ZipInputStream(in);
out = new FileOutputStream(entryDestination);
out.write(zis.read());
out.flush();
out.close();
You're creating a new file, and writing a single byte to it. That's not going to be a valid Excel file of any description. You're also failing to close streams using finally blocks, but that's a different matter. To copy the contents of one stream to another, you want something like:
byte[] buffer = new byte[8192];
int bytes;
while ((bytes = input.read(buffer)) > 0) {
    output.write(buffer, 0, bytes);
}
That said, you'd be better off using a 3rd party library to hide all of this detail - look at Guava and its ByteStreams and Files classes for example.
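For example, with Guava on the classpath the copy loop above collapses to a single call; in and entryDestination here are assumed to be the variables from your unzip loop:
// Requires com.google.common.io.ByteStreams from Guava.
try (InputStream in = zipFile.getInputStream(entry);
     OutputStream out = new FileOutputStream(entryDestination)) {
    ByteStreams.copy(in, out);
}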
It's worth taking a step back and working out why you didn't spot this problem for yourself, by the way. For example, the first thing I'd have done would be to look at the directory where the files were unzipped, and try to open those files. Just seeing a bunch of 1-byte files would be a bit of a giveaway. When trying to diagnose an issue, it's vital that you can split a big problem into small ones, and work out which small one is at fault.
