I am trying to write a Java program to unzip files that were zipped with the PKZIP tool on a mainframe. I have tried the three approaches below, but none of them solves my problem.
By external tools.
I have tried to open it with WinRAR, 7-Zip, and the Linux unzip command.
All of them fail with the error message below:
The archive is either in unknown format or damaged
By JDK API - java.util.ZipFile
I have also tried to unzip it with the JDK API, as described on this website.
However, it fails with this error message:
IO Error: java.util.zip.ZipException: error in opening zip file
By Zip4J
I have also tried Zip4j. It fails too, with this error message:
Caused by: java.io.IOException: Negative seek offset
at java.io.RandomAccessFile.seek(Native Method)
at net.lingala.zip4j.core.HeaderReader.readEndOfCentralDirectoryRecord(HeaderReader.java:117)
... 5 more
Is there any Java library or Linux command that can extract a zip file created by PKZIP on a mainframe? Thanks a lot!
I have successfully read files that were compressed with PKZIP on z/OS and transferred to Linux. I was able to read them with the java.util.zip.* classes:
ZipFile ifile = new ZipFile(inFileName);
try {
    // faster to loop through entries than to open the zip file as a stream
    Enumeration<? extends ZipEntry> entries = ifile.entries();
    while (entries.hasMoreElements()) {
        ZipEntry entry = entries.nextElement();
        if (!entry.isDirectory()) { // skip directories
            String entryName = entry.getName();
            // code to decide whether to process this entry omitted
            InputStream zis = ifile.getInputStream(entry);
            // process the stream
        }
    }
} finally {
    ifile.close(); // release the file handle when done
}
The jar file format is just a zip file, so the "jar" command can also read such files.
Like the others, I suspect the file was not transferred in binary mode and so was corrupted. On Linux you can use the xxd utility (piped through head) to dump the first few bytes and see whether it looks like a zip file:
# xxd myfile.zip | head
0000000: 504b 0304 2d00 0000 0800 2c66 a348 eb5e PK..-.....,f.H.^
The first 4 bytes should be as shown; see also the Wikipedia entry for zip files.
Even if the first 4 bytes are correct, a file truncated during transmission could also cause the corrupt-file message.
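If you would rather do the same check from Java, here is a minimal sketch (the class name and argument handling are my own, not part of the original post) that reads the first four bytes and compares them against the PK\3\4 local-file-header signature:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ZipSignatureCheck {
    public static void main(String[] args) throws IOException {
        // args[0] is the path to the suspect archive
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            int signature = in.readInt(); // first four bytes, read big-endian
            // 0x504B0304 is "PK\3\4", the zip local file header signature
            if (signature == 0x504B0304) {
                System.out.println("Looks like a zip file");
            } else {
                System.out.printf("Unexpected signature: 0x%08X%n", signature);
            }
        }
    }
}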
Related
When I try to create a java.util.zip.ZipFile I get a java.util.zip.ZipException: error in opening zip file. This exception only occurs when I try to open a large ZipFile (> 2GB). Is there a trick to open big zip files?
Later I need to extract single files from this zip, and I doubt that ZipInputStream is fast enough to extract the required files, since it has to run over all entries.
Here is my stack trace:
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:148)
at java.util.zip.ZipFile.<init>(ZipFile.java:162)
Update:
I found out that it works on my desktop computer, and it also works if I open the ZipFile as a JUnit test within Android Studio (since JUnit tests run on the local desktop machine and not on the Android device). However, I could not get it working on the Android device. I guess the reason is the Android file system.
A key point to remember, especially if you are processing large zip archives, is that Java 6 only supports zip files up to 2 GB.
Java 7 supports the ZIP64 format, which can be used to process zip files larger than 2 GB.
Using streams for big files is also a good idea:
private static void readUsingZipInputStream() throws IOException {
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(FILE_NAME));
    final ZipInputStream is = new ZipInputStream(bis);
    try {
        ZipEntry entry;
        while ((entry = is.getNextEntry()) != null) {
            System.out.printf("File: %s Size %d Modified on %TD %n",
                    entry.getName(), entry.getSize(), new Date(entry.getTime()));
            extractEntry(entry, is);
        }
    } finally {
        is.close(); // also closes the wrapped BufferedInputStream
    }
}
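The extractEntry helper is not shown in that snippet; a minimal sketch of what it might look like (OUTPUT_DIR is an assumed destination directory, not something from the original answer):

// Hypothetical helper: copies the current entry's bytes to a file on disk.
private static void extractEntry(ZipEntry entry, InputStream is) throws IOException {
    File outFile = new File(OUTPUT_DIR, entry.getName());
    outFile.getParentFile().mkdirs(); // make sure parent directories exist
    try (FileOutputStream fos = new FileOutputStream(outFile)) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = is.read(buffer)) != -1) {
            fos.write(buffer, 0, read);
        }
    }
    // do not close 'is' here; the ZipInputStream is reused for the next entry
}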
I'm dealing with some gzipped pack200 files and have no trouble unpacking them with the command-line tool. I only run into problems when I attempt to unpack the files with the Pack200 library.
For reference, this is the method I am using to unpack the files:
//Output from this can be properly unpacked with command line tool
InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
//This is where things go awry
Pack200.Unpacker unpacker = Pack200.newUnpacker();
JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"));
unpacker.unpack(in, out);
Here is the output of unpacker.properties():
com.sun.java.util.jar.pack.default.timezone: false
com.sun.java.util.jar.pack.disable.native: false
com.sun.java.util.jar.pack.verbose: 0
pack.class.attribute.CompilationID: RUH
pack.class.attribute.SourceID: RUH
pack.code.attribute.CharacterRangeTable: NH[PHPOHIIH]
pack.code.attribute.CoverageTable: NH[PHHII]
pack.deflate.hint: keep
pack.effort: 5
pack.keep.file.order: true
pack.modification.time: keep
pack.segment.limit: -1
pack.unknown.attribute: pass
Some other relevant information:
The jar files output by the library are consistently smaller than those unpacked by the command-line tool.
The library-generated files use a newer version of the .zip format (0x14 vs 0x0A).
unpack200.exe version 1.30, 07/05/05
jdk version 1.7.0_21
So to reiterate, the jar files generated by the command-line tool work properly, while those generated by the library do not.
I very much appreciate any help or guidance.
It was something very simple, but I'm happy to have found the problem. Here is the solution I was able to use:
// Output from this can be properly unpacked with the command-line tool
InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
Pack200.Unpacker unpacker = Pack200.newUnpacker();
JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"));
unpacker.unpack(in, out);
out.close(); // without this, the jar's central directory is never written out
Don't forget your JarOutputStream.close(), kids.
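A try-with-resources form (same assumed names as above) makes it impossible to forget the close, even when unpack() throws:

InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
Pack200.Unpacker unpacker = Pack200.newUnpacker();
// the JarOutputStream is closed automatically, so the jar is always finalized
try (JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"))) {
    unpacker.unpack(in, out);
}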
I am creating compressed archives with tar and bzip2 using jarchivelib, which utilizes org.apache.commons.compress.
try {
    Archiver archiver = ArchiverFactory.createArchiver(ArchiveFormat.TAR, CompressionType.BZIP2);
    File archive = archiver.create(archiveName, destination, sourceFilesArr);
} catch (IOException e) {
    e.printStackTrace();
}
Sometimes the created file is corrupted, so I want to check for that and recreate the archive if necessary. No error is thrown; I detected the corruption when trying to decompress it manually with tar -xf file.tar.bz2 (note: extracting with tar -xjf file.tar.bz2 works flawlessly):
tar: Archive contains `\2640\003\203\325#\0\0\0\003\336\274' where numeric off_t value expected
tar: Archive contains `\0l`\t\0\021\0' where numeric mode_t value expected
tar: Archive contains `\003\301\345\0\0\0\0\006\361\0p\340' where numeric time_t value expected
tar: Archive contains `\0\210\001\b\0\233\0' where numeric uid_t value expected
tar: Archive contains `l\001\210\0\210\001\263' where numeric gid_t value expected
tar: BZh91AY&SY"'ݛ\003\314>\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\343\262\037\017\205\360X\001\210: Unknown file type `', extracted as normal file
tar: BZh91AY&SY"'ݛ�>��������������������������������������X�: implausibly old time stamp 1970-01-01 00:59:59
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
Is there a way, using org.apache.commons.compress, to check whether a compressed archive is corrupted? Since the files can be several GB in size, an approach that does not require decompressing would be great.
Since bzip2 compression produces a stream, there is no way to check for corruption without decompressing that stream and passing it to tar to check.
In your case, though, you are extracting directly with tar without first decompressing with bzip2. That is the root cause. You always need the -j flag with tar, because the archive is bzip2-compressed; that is why the second command works correctly.
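If you still want a programmatic sanity check, a rough sketch with commons-compress follows. It has to decompress and read the whole stream, so it is not free for multi-GB archives; the class and method names here are mine, only the commons-compress types are real:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;

public class ArchiveCheck {
    // Returns true if the bzip2 stream and every tar entry can be read to the end.
    static boolean isReadable(String path) {
        try (TarArchiveInputStream tar = new TarArchiveInputStream(
                new BZip2CompressorInputStream(
                        new BufferedInputStream(new FileInputStream(path))))) {
            byte[] buffer = new byte[8192];
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                // drain the entry so the data blocks are actually decompressed and checked
                while (tar.read(buffer) != -1) {
                    // discard
                }
            }
            return true;
        } catch (IOException e) {
            return false; // corrupted or unreadable
        }
    }
}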
I am working on a project in JavaFX. I need to download a file from the server, and for that I am using an FTP connection.
The file is 560 MB. While downloading, the code does not give any error, but when I check the size of the file in the download location it is only 485 MB and I am not able to open it.
My code for downloading is:
OutputStream output = new FileOutputStream(toPath + "/" + dfile);
if (ftpClient.retrieveFile(dfile, output)) {
    downloadButton.setDisable(true);
}
output.close();
Does Java FTP have some download file size limit? How can I resolve this problem? I have heard of chunking but don't know how to implement it in this case.
I downloaded the files in binary mode and it's working fine now.
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
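For completeness, the mode switch has to happen before the transfer; here is a sketch using the same variable names as the question (ftpClient, dfile, toPath):

// switch to binary mode so the file is not altered by ASCII line-ending translation
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
try (OutputStream output = new FileOutputStream(toPath + "/" + dfile)) {
    if (ftpClient.retrieveFile(dfile, output)) {
        downloadButton.setDisable(true);
    }
}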
I'm trying to transfer a PGP file with apache.commons.net.ftp.FTPClient. The transfer seems successful, but when I try to convert it to a txt file I run into this error:
gpg: [don't know]: invalid packet (ctb=20)
When I check the exact size of the downloaded file, I notice that it is about 1 KB smaller than the original file.
Here is the code for downloading the file:
FileOutputStream fos = new FileOutputStream(Localfilename);
InputStream inputStream = ftpClient.retrieveFileStream(remoteFileDir);
IOUtils.copy(inputStream, fos);
fos.flush();
IOUtils.closeQuietly(fos);
IOUtils.closeQuietly(inputStream);
boolean commandOK = ftpClient.completePendingCommand();
Can anyone see what is wrong with my approach or code?
[edited] Note that the original file decodes (converts to txt) successfully, so the problem occurs while downloading the file.
[edited2] When I run the program on my Windows desktop and download the file on Windows, the decode works fine; the problem only appears when I run the program on a Linux server!
I found my problem!
The problem was with addressing the remote path, a silly mistake!
So if anyone runs into this problem, recheck the remote address, and then recheck it again.
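One quick way to catch this kind of problem (a wrong remote path or a partial transfer) is to compare the downloaded size against the server's own listing; this is only a sketch reusing the variable names from the question, not something from the original answer:

// list the remote file and compare its size with what landed on disk
FTPFile[] remote = ftpClient.listFiles(remoteFileDir);
long remoteSize = (remote.length == 1) ? remote[0].getSize() : -1;
long localSize = new File(Localfilename).length();
if (remoteSize != localSize) {
    System.err.printf("Size mismatch: remote=%d bytes, local=%d bytes%n", remoteSize, localSize);
}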