I am trying to compress a sequence of images in PNG format. It seems that the compression goes well:
FileOutputStream fos = new FileOutputStream(PATH_SAVE_GZIP);
GZIPOutputStream gzip = new GZIPOutputStream(fos);
for (int i = 0; i < NB_OF_IMAGES; i++) {
    BufferedImage im = images.get(i).getBufImg();
    ImageIO.write(im, "JPEG", gzip);
}
gzip.finish();
gzip.close();
fos.close();
However, I get a NullPointerException when I try to uncompress it. What am I doing wrong?
I've finished my project and now I know the answer. This can be solved in several ways:
One is to use ObjectOutput/InputStream and write the BufferedImages as objects.
The other is to use a ByteArrayOutputStream and write the images as bytes. To read them back you need to know the size of each image, so I solved this by writing the size before each image. Not the most efficient way, but it works.
FileOutputStream fos = new FileOutputStream(path);
GZIPOutputStream gzip = new GZIPOutputStream(fos);
// header: number of images, plus a placeholder short
gzip.write(shortToBytes(numImatges));
gzip.write(shortToBytes((short) 0));
BufferedImage im = null;
for (int i = 0; i < dates.getNB_OF_IMAGES(); i++) {
    if (images != null) {
        im = images.get(i).getBufImg();
    }
    // encode the image to JPEG in memory so its byte size is known
    ByteArrayOutputStream byteOstream = new ByteArrayOutputStream();
    ImageIO.write(im, "jpeg", byteOstream);
    byteOstream.flush();
    byteOstream.close();
    // write the size first, then the image bytes (the short cast limits each image's size)
    gzip.write(shortToBytes((short) byteOstream.size()));
    gzip.write(byteOstream.toByteArray());
}
//close streams
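For completeness, a minimal sketch of the reading side under the same size-prefix scheme (DataInputStream.readShort reads big-endian, so this matches only if the shortToBytes helper above also writes big-endian):

// Hedged sketch: read back size-prefixed JPEG images from the GZIP stream.
DataInputStream data = new DataInputStream(new GZIPInputStream(new FileInputStream(path)));
short numImages = data.readShort();   // image count written in the header
data.readShort();                     // skip the placeholder short
List<BufferedImage> result = new ArrayList<>();
for (int i = 0; i < numImages; i++) {
    int size = data.readShort() & 0xFFFF; // size prefix written before each image
    byte[] imageBytes = new byte[size];
    data.readFully(imageBytes);           // read exactly 'size' bytes
    result.add(ImageIO.read(new ByteArrayInputStream(imageBytes)));
}
data.close();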
Your problem is that you write all the images to a single GZIP stream, so when reading, ImageIO doesn't know where one image ends and the next begins.
You have two options:
Use ZIP instead of GZIP (see the sketch after this list)
Package the files in a TAR file using jtar or the Java Tar Package and then GZIP the tar. When reading, you first un-GZIP and then extract the images from the tar file.
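A minimal sketch of the ZIP option, assuming the same images list as in the question (the per-entry names are made up):

// Hedged sketch: one ZipEntry per image, so entry boundaries are preserved.
ZipOutputStream zipOut = new ZipOutputStream(new FileOutputStream("images.zip"));
for (int i = 0; i < NB_OF_IMAGES; i++) {
    zipOut.putNextEntry(new ZipEntry("image" + i + ".jpg")); // hypothetical entry name
    ImageIO.write(images.get(i).getBufImg(), "JPEG", zipOut);
    zipOut.closeEntry();
}
zipOut.close();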
Requirement: compress a byte[] to get another byte[] using java.util.zip.ZipOutputStream BUT without using any files on disk or in memory (like here: https://stackoverflow.com/a/18406927/9132186). Is this even possible?
All the examples I found online read from a file (.txt) and write to a file (.zip). ZipOutputStream needs a ZipEntry to work with, and that ZipEntry needs a file.
However, my use case is as follows: I need to compress a chunk (say 10MB) of a file at a time using the zip format and append all these compressed chunks to make a .zip file. But when I unzip the .zip file, it is corrupted.
I am using in-memory files as suggested in https://stackoverflow.com/a/18406927/9132186 to avoid files on disk, but I need a solution without these files as well.
public void testZipBytes() throws IOException {
    String infile = "test.txt";
    FileInputStream in = new FileInputStream(infile);
    String outfile = "test.txt.zip";
    FileOutputStream out = new FileOutputStream(outfile);
    byte[] buf = new byte[10];
    int len;
    while ((len = in.read(buf)) > 0) {
        // note: the whole buffer is written even when len < buf.length
        out.write(zipBytesMemoryFileWORKS(buf));
    }
    in.close();
    out.close();
}
// ACTUAL function that compresses byte[]
public static class MemoryFile {
    public String fileName;
    public byte[] contents;
}
public byte[] zipBytesMemoryFileWORKS(byte[] input) throws IOException {
    MemoryFile memoryFile = new MemoryFile();
    memoryFile.fileName = "try.txt";
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ZipOutputStream zos = new ZipOutputStream(baos);
    ZipEntry entry = new ZipEntry(memoryFile.fileName);
    entry.setSize(input.length);
    zos.putNextEntry(entry);
    zos.write(input);
    zos.closeEntry();
    zos.finish();
    zos.close();
    return baos.toByteArray();
}
Scenario 1:
If test.txt has a small amount of data (less than 10 bytes), like "this", then unzip test.txt.zip yields try.txt with "this" in it.
Scenario 2:
If test.txt has a larger amount of data (more than 10 bytes), like "this is a test for zip output stream and it is not working", then unzip test.txt.zip yields try.txt with broken pieces of data, and it is incomplete.
The 10 bytes is the buffer size in testZipBytes and is the amount of data compressed at a time by zipBytesMemoryFileWORKS.
Expected (or rather desired):
1. unzip test.txt.zip does not use the "try.txt" filename I gave in the MemoryFile but rather unzips to the filename test.txt itself.
2. The unzipped data is not broken and yields the input data as is.
3. I have done the same with GZIPOutputStream and it works perfectly fine.
Requirement: compress a byte[] to get another byte[] using java.util.zip.ZipOutputStream BUT without using any files on disk or in memory (like here: https://stackoverflow.com/a/18406927/9132186). Is this even possible?
Yes, you've already done it. You don't actually need MemoryFile in your example; just delete it from your implementation and write ZipEntry entry = new ZipEntry("try.txt") instead.
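A minimal sketch of that simplification (same logic as the question's method, just without MemoryFile):

// Hedged sketch: compress a byte[] into a zipped byte[] entirely in memory.
public byte[] zipBytes(String entryName, byte[] input) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ZipOutputStream zos = new ZipOutputStream(baos);
    zos.putNextEntry(new ZipEntry(entryName)); // an entry name, not a file on disk
    zos.write(input);
    zos.closeEntry();
    zos.close(); // close() also finishes the zip stream
    return baos.toByteArray();
}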
But you can't concatenate the zips of 10MB chunks of a file and get a valid zip file for the combined file; zipping doesn't work like that. You could perhaps have a solution that minimizes how much is in memory at once, but breaking the original file up into independently zipped chunks is unworkable.
We are using Apache Camel for compressing and decompressing our files.
We use the standard .marshal().gzip() and .unmarshall().gzip() APIs.
Our problem is that when we get really large files, say 800MB to more than 1GB in size, our application runs out of memory, since the entire file is loaded into memory for compression and decompression.
Are there any Camel APIs or Java libraries which will help zip/unzip the file without loading the entire file into memory?
There is a similar unanswered question here
Explanation
Use a different approach: stream the file.
That is, don't load it into memory completely, but read it chunk by chunk and simultaneously write it back chunk by chunk.
Get an InputStream to the file and wrap a GzipCompressorInputStream around it. Read chunk by chunk and write to an OutputStream.
Do the opposite if you want to compress an archive: wrap the OutputStream in a GzipCompressorOutputStream.
Code
The example uses Apache Commons Compress but the logic of the code remains the same for all libraries.
Unpacking a gz archive:
Path inputPath = Paths.get("archive.tar.gz");
Path outputPath = Paths.get("archive.tar");
try (InputStream fin = Files.newInputStream(inputPath);
        OutputStream out = Files.newOutputStream(outputPath);
        GzipCompressorInputStream in = new GzipCompressorInputStream(
                new BufferedInputStream(fin))) {
    // read and write chunk by chunk
    final byte[] buffer = new byte[8192]; // any reasonable buffer size works
    int n;
    while (-1 != (n = in.read(buffer))) {
        out.write(buffer, 0, n);
    }
}
Packing as gz archive:
Path inputPath = Paths.get("archive.tar");
Path outputPath = Paths.get("archive.tar.gz");
try (InputStream in = Files.newInputStream(inputPath);
        OutputStream fout = Files.newOutputStream(outputPath);
        GzipCompressorOutputStream out = new GzipCompressorOutputStream(
                new BufferedOutputStream(fout))) {
    // read and write chunk by chunk
    final byte[] buffer = new byte[8192]; // any reasonable buffer size works
    int n;
    while (-1 != (n = in.read(buffer))) {
        out.write(buffer, 0, n);
    }
}
You could also wrap a BufferedReader and PrintWriter around the streams if you feel more comfortable with them. They manage the buffering themselves, and you can read and write lines instead of bytes. Note that this only works correctly if you read a file with lines of text and not some other (binary) format.
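For illustration, a minimal sketch of that Reader/Writer variant using the JDK's own GZIPInputStream (only safe for text content; UTF-8 is an assumption):

// Hedged sketch: line-based decompression of a gzipped text file.
try (BufferedReader reader = new BufferedReader(new InputStreamReader(
            new GZIPInputStream(Files.newInputStream(Paths.get("log.txt.gz"))),
            StandardCharsets.UTF_8));
        PrintWriter writer = new PrintWriter(
            Files.newBufferedWriter(Paths.get("log.txt"), StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        writer.println(line); // note: this normalizes line endings
    }
}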
I am trying to write a simple server that uses sockets and reads images from disk when it receives an HTTP request from a browser.
I am able to receive the request, read the image from disk and pass it to the browser (the browser then automatically downloads the image). However, when I try to open the downloaded image, it says:
Could not load image 'img.png'. Fatal error reading PNG image file: Not a PNG file
The same goes for all other types of extensions (jpg, jpeg, gif, etc.).
Could you help me out and tell me what I am doing wrong? I suspect that there might be something wrong with the way I read the image, or maybe some encoding has to be specified?
Reading the image from disk:
// read image and serve it back to the browser
public byte[] readImage(String path) {
    File file = new File(FILE_PATH + path);
    try {
        BufferedImage image = ImageIO.read(file); // try reading the image first
        // get DataBufferBytes from Raster
        WritableRaster raster = image.getRaster();
        DataBufferByte data = (DataBufferByte) raster.getDataBuffer();
        return data.getData(); // raw pixel data, not an encoded PNG/JPEG file
    } catch (IOException ex) {
        // handle exception...
    }
    return ("Could not read image").getBytes();
}
Writing the data via socket:
OutputStream output = clientSocket.getOutputStream();
output.write(result);
In this case, the result contains the byte array produced by the readImage method.
EDIT: second try, reading the image as a normal file
FileReader reader = new FileReader(file); // FileReader decodes bytes as characters, which corrupts binary data
char[] buf = new char[8192];
int len;
StringBuilder s = new StringBuilder();
while ((len = reader.read(buf)) >= 0) {
    s.append(buf, 0, len);
}
return s.toString().getBytes();
You may use a ByteArrayOutputStream, like this:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
ImageIO.write(image, "jpg", byteArrayOutputStream);
and then you can write to socket as,
outputStream.write(byteArrayOutputStream.toByteArray());
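Alternatively, since the file on disk is already an encoded image, you can skip ImageIO entirely and send the raw file bytes (a sketch, assuming the served bytes should match the file exactly):

// Hedged sketch: read the already-encoded image file verbatim and send it.
byte[] fileBytes = Files.readAllBytes(new File(FILE_PATH + path).toPath());
OutputStream output = clientSocket.getOutputStream();
output.write(fileBytes);
output.flush();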
I'm using the Apache Commons Compress library to read a .tar.gz file, something like this:
final TarArchiveInputStream tarIn = initializeTarArchiveStream(this.archiveFile);
try {
    TarArchiveEntry tarEntry = tarIn.getNextTarEntry();
    while (tarEntry != null) {
        byte[] btoRead = new byte[1024];
        BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream(destPath)); //<- I don't want this!
        int len = 0;
        while ((len = tarIn.read(btoRead)) != -1) {
            bout.write(btoRead, 0, len);
        }
        bout.close();
        tarEntry = tarIn.getNextTarEntry();
    }
    tarIn.close();
} catch (IOException e) {
    e.printStackTrace();
}
Is it possible not to extract this into a separate file, and read it in memory somehow? Maybe into a giant String or something?
You could replace the file stream with a ByteArrayOutputStream.
i.e. replace this:
BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream(destPath)); //<- I don't want this!
with this:
ByteArrayOutputStream bout = new ByteArrayOutputStream();
and then after closing bout, use bout.toByteArray() to get the bytes.
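Put together, a minimal sketch of the loop with that replacement (collecting each entry's bytes in a map keyed by entry name is an assumption about what you want to do with them):

// Hedged sketch: read every tar entry into memory instead of onto disk.
Map<String, byte[]> entries = new HashMap<>();
TarArchiveEntry tarEntry = tarIn.getNextTarEntry();
while (tarEntry != null) {
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    byte[] btoRead = new byte[1024];
    int len;
    while ((len = tarIn.read(btoRead)) != -1) {
        bout.write(btoRead, 0, len);
    }
    entries.put(tarEntry.getName(), bout.toByteArray());
    tarEntry = tarIn.getNextTarEntry();
}
tarIn.close();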
Is it possible not to extract this into a separate file, and read it in memory somehow? Maybe into a giant String or something?
Yeah, sure.
Just replace the code in the inner loop that is opening files and writing to them with code that writes to a ByteArrayOutputStream ... or a series of such streams.
The natural representation of the data that you read from the TAR (like that) will be bytes / byte arrays. If the bytes are properly encoded characters and you know the correct encoding, then you can convert them to strings. Otherwise, it is better to leave the data as bytes. (If you attempt to convert non-text data to strings, or if you convert using the wrong charset/encoding, you are liable to mangle it ... irreversibly.)
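For example, a one-line conversion under the assumption that an entry really contains UTF-8 text (entryBytes is a hypothetical byte[] holding one entry's data):

// Only safe if the bytes are genuinely UTF-8 encoded text.
String text = new String(entryBytes, StandardCharsets.UTF_8);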
Obviously, you are going to need to think through some of these issues yourself, but the basic idea should work ... provided you have enough heap space.
Copy the value of btoRead to a String like
String s = String.valueOf(byteVar);
and go on appending the byte values to the string until the end of the file is reached.
I am trying to load a .swf file in my page. I would like to make this load faster by converting it to Base64, rather than providing a src. This works great with image formats, using the code below.
Java code
ByteArrayOutputStream bos = new ByteArrayOutputStream();
BufferedImage buffImg = ImageIO.read(new File(imagePath));
ImageIO.write(buffImg, imgExtension, bos);
byte[] imageBytes = bos.toByteArray();
BASE64Encoder encoder = new BASE64Encoder();
imageString = encoder.encode(imageBytes);
But this is not working for the swf file. Is there any possible way to achieve this?
Html
<object width="10" height="10" data="data:application/x-shockwave-flash;base64, RldTCSEAAABIAZAAZAAADAEARBEIAAAAQwIAAP9AAAAA"></object>
Thanks in advance.
Trying to get the file in Base64 will not speed up the file transfer; it's just the opposite, as it converts the file, which is stored in bytes (base 256, if it can be said that way), to base 64 (64 printable characters), so the final amount of data you will be transferring is larger.
The only "win" is that you might be able to load it as part of the page instead of the browser making another call for the swf file, which should be a non-issue on HTTP 1.1.
Unless you have some other good reason to do this, I would not suggest this kind of practice.
If you have your swf file(s) in a database as a blob, you could just make a servlet which sets the proper content type and writes the whole file with the ServletOutputStream, without any tags. In your HTML code, you would then reference the servlet instead of a fixed file.
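A minimal servlet sketch of that idea (the @WebServlet path and the blob-lookup helper are placeholders, not part of your code):

// Hedged sketch: stream an SWF blob straight to the browser.
@WebServlet("/swf")
public class SwfServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        byte[] swfBytes = loadSwfFromDatabase(); // hypothetical blob lookup
        resp.setContentType("application/x-shockwave-flash");
        resp.setContentLength(swfBytes.length);
        resp.getOutputStream().write(swfBytes);
    }

    private byte[] loadSwfFromDatabase() {
        // placeholder: fetch the blob from your database here
        throw new UnsupportedOperationException("not implemented");
    }
}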
If you still want to convert the file to Base64, you shouldn't use an image API; instead, read the file in the standard way for binary files. Here's a sample that should do the job:
http://www.javapractices.com/topic/TopicAction.do?Id=245
You can still do the encoding as before, once you have a byte array:
File file = new File(imagePath);
log("File size: " + file.length());
byte[] result = null;
try {
    InputStream input = new BufferedInputStream(new FileInputStream(file));
    result = readAndClose(input);
} catch (FileNotFoundException ex) {
    log(ex);
}
BASE64Encoder encoder = new BASE64Encoder();
imageString = encoder.encode(result);
And the readAndClose method:
byte[] readAndClose(InputStream aInput) {
    byte[] bucket = new byte[32 * 1024];
    ByteArrayOutputStream result = null;
    try {
        try {
            result = new ByteArrayOutputStream(bucket.length);
            int bytesRead = 0;
            while (bytesRead != -1) {
                bytesRead = aInput.read(bucket);
                if (bytesRead > 0) {
                    result.write(bucket, 0, bytesRead);
                }
            }
        } finally {
            aInput.close();
        }
    } catch (IOException ex) {
        log(ex);
    }
    return result.toByteArray();
}
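As an aside, BASE64Encoder is an internal sun.misc class; on Java 8+ the same encoding can be done with the public java.util.Base64 API, a drop-in sketch:

// Public-API alternative to sun.misc.BASE64Encoder (Java 8+).
String imageString = java.util.Base64.getEncoder().encodeToString(result);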
This should do the trick; maybe with some fine-tuning to adapt the code to your specific situation, optimize it, and add better error handling...