Stream image format conversions and resizing in Java

So, let's say I want to recode a PNG to JPEG in Java. The image has an extreme resolution, say for example 10,000 x 10,000 px. Using the "standard" Java image API writers and readers, at some point you need the entire image decoded in RAM, which takes an extreme amount of space (hundreds of MB). I have been looking at how other tools do this, and I found that ImageMagick uses disk pixel storage, but that seems to be way too slow for my needs. So what I need is a true streaming recoder. And by true streaming I mean reading and processing the data in chunks or bins, not just taking a stream as input but decoding it whole beforehand.
Now, first the theory behind it: is it even possible, given the JPEG and PNG algorithms, to do this using streams, or let's say in bins of data, so that there is no need to hold the entire image in memory (or other storage)? In JPEG compression, the first few stages could be done in a streaming fashion, but I believe Huffman encoding needs to build the entire tree of value probabilities after quantization, so it needs to analyze the whole image - meaning the whole image needs to be decoded beforehand, or somehow on demand by regions.
And the golden question: if the above can be achieved, is there any Java library that actually works this way and saves a large amount of RAM?

If I create a 10,000 x 10,000 PNG file, full of incompressible noise, with ImageMagick like this:
convert -size 10000x10000 xc:gray +noise random image.png
I see ImageMagick uses 675M of RAM to create the resulting 572MB file.
I can convert it to a JPEG with vips like this:
vips im_copy image.png output.jpg
and vips uses no more than 100MB of RAM while converting, and takes 7 seconds on a reasonably specced iMac around 4 years old, albeit with an SSD.

I have thought about this for a while, and I would really like to implement such a library. Unfortunately, it's not that easy. Different image formats store pixels in different ways. PNGs or GIFs may be interlaced. JPEGs may be progressive (multiple scans). TIFFs are often striped or tiled. BMPs are usually stored bottom-up. PSDs are channeled. Etc.
Because of this, the minimum amount of data you have to read to recode to a different format may in the worst case be the entire image (or maybe not, if the format supports random access and you can live with a lot of seeking back and forth)... Resampling (scaling) the image to a new file in the same format would probably work in most cases, though (probably not so well for progressive JPEGs, unless you can resample each scan separately).
If you can live with a disk buffer though, as the second-best option, I have created some classes that allow BufferedImages to be backed by NIO MappedByteBuffers (memory-mapped file Buffers, kind of like virtual memory). While performance isn't really like that of in-memory images, it's also not entirely useless. Have a look at MappedImageFactory and MappedFileBuffer.
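As a related illustration of region-based processing with the standard API (my sketch, separate from the memory-mapped classes above): ImageReadParam.setSourceRegion lets you decode one horizontal strip at a time, so only the strip's BufferedImage is allocated. Whether this actually bounds memory depends on the reader plugin; some decoders still buffer large parts of the image internally.

    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class StripRead {
        public static void main(String[] args) throws Exception {
            try (ImageInputStream in = ImageIO.createImageInputStream(new File("image.png"))) {
                ImageReader reader = ImageIO.getImageReaders(in).next();
                reader.setInput(in);
                int width = reader.getWidth(0);
                int height = reader.getHeight(0);
                int stripHeight = 256; // process 256 rows at a time

                for (int y = 0; y < height; y += stripHeight) {
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceRegion(new Rectangle(0, y, width, Math.min(stripHeight, height - y)));
                    BufferedImage strip = reader.read(0, param);
                    // ...process or encode the strip here...
                }
                reader.dispose();
            }
        }
    }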

I've written a PNG encoder/decoder that does exactly that (reads and writes progressively, which only requires storing one row in memory) for the PNG format: PNGJ
I don't know if there is something similar for JPEG.
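For reference, row-by-row copying with PNGJ looks roughly like the sketch below (API names as I recall them from PNGJ 2.x; check the project docs, as signatures may differ):

    import java.io.File;
    import ar.com.hjg.pngj.IImageLine;
    import ar.com.hjg.pngj.PngReader;
    import ar.com.hjg.pngj.PngWriter;

    PngReader reader = new PngReader(new File("in.png"));
    PngWriter writer = new PngWriter(new File("out.png"), reader.imgInfo);
    for (int row = 0; row < reader.imgInfo.rows; row++) {
        IImageLine line = reader.readRow(); // only one row is held in memory
        writer.writeRow(line);              // rows could be transformed here
    }
    reader.end();
    writer.end();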

Related

Maintaining LSB through JPG compilation - is it possible?

This is one of those "pretty sure we found the answer, but hoping we're wrong" questions. We are looking at a steganography problem and it's not pretty.
Situation:
We have a series of images. We want to mark them (watermark) so the watermarks survive a series of conditions. The kicker is, we are using a lossy format, JPG, rather than a lossless one such as PNG. Our watermarks need to survive screenshotting and, furthermore, need to be invisible to the naked eye. Finally, they need to contain at least 32 bytes of data (we expect them to be repeating patterns across an image, of course). Due to the above, we need to hide the information in the pixels themselves. I am trying a Least Significant Bit change, including using large blocks per "bit" (I tried both increments of 16, as these are the chunk sizes the JPG compression algorithm uses from what we understand, as well as various prime numbers) and reading the average of the resulting block. This sort of leads to requirements:
Must be .jpg
Must survive the jpg compression algorithm
Must survive screenshotting (assume screenshots are saved losslessly)
Problem:
JPG compression, even at 100% "minimum loss", changes the pixel values. E.g. if we draw a huge band across an image setting the red channel to 255 in a block 64 pixels high, more than half are not 255 in the compressed image. This means that even using an average of the blocks leaves the LSB effectively random, rather than what we "encoded". Our current prototype can take a random image, compress the message into a bit-encoded string and convert it to an XbyX array which is then superimposed on the image using the LSB of one of the three color channels. This works and is detectable while it remains a BufferedImage, but once we convert to a JPG the compression destroys the message.
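(For reference, the single-pixel form of the LSB embedding described above might look like this sketch on a BufferedImage; the block-based, averaged variant builds on the same bit operations:)

    import java.awt.image.BufferedImage;

    // Embed one bit in the LSB of the blue channel of pixel (x, y).
    // Blue is the lowest byte of the packed TYPE_INT_RGB value.
    static void embedBit(BufferedImage img, int x, int y, int bit) {
        int rgb = img.getRGB(x, y);
        img.setRGB(x, y, (rgb & ~1) | (bit & 1));
    }

    static int readBit(BufferedImage img, int x, int y) {
        return img.getRGB(x, y) & 1;
    }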
Question:
Is there a way to better control a JPG compression's pixel values? Or are we simply SOOL here and need to drop this avenue, either shifting to PNG output (unlikely) or needing to understand the JPG compression algorithm at length and use it to somehow predict LSB pattern outcomes? Preferably in Java, but we are open to looking at alternative languages if any can solve our problem (our current PoC is in Java).

Enlarge already big jpegs using less memory

I need to enlarge already big JPEGs; they are used in printing, so they need to be really big 300 PPI files. The resulting image will be too big to be fully held in memory. What I thought of was something like breaking the original image into small strips, enlarging each one of them separately and writing it to the output file (another JPEG), never keeping the final image fully in memory. I've read about lossless operations on JPEGs; that seems the way to go (create a file with the strip and copy the MCUs, Huffman tables and quantization tables to the final file). I've also read something about abbreviated streams in Java. What is a good way to do this?
Your best bet is to leave the JPEGs alone and have the printing software scale the output to the device.
If you really want to double the size, you could use subsampling. Just double the Y component in each direction and change the sampling for Cb and Cr while leaving the data alone.
You could also do as you say, and recompress in strips of MCUs.
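To give a feel for the strip approach with the standard API, here is a sketch (my illustration, not a complete solution) that reads one strip of the source at a time and enlarges it with Graphics2D. It writes each scaled strip as a separate JPEG, because stitching the strips into a single JPEG's entropy-coded stream is exactly the part that needs MCU-level work or a specialized library; note also that scaling strips independently can produce visible seams at strip boundaries when interpolating.

    import java.awt.Graphics2D;
    import java.awt.Rectangle;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class StripScale {
        public static void main(String[] args) throws Exception {
            int scale = 2, stripHeight = 128;
            try (ImageInputStream in = ImageIO.createImageInputStream(new File("big.jpg"))) {
                ImageReader reader = ImageIO.getImageReaders(in).next();
                reader.setInput(in);
                int w = reader.getWidth(0), h = reader.getHeight(0);
                for (int y = 0, i = 0; y < h; y += stripHeight, i++) {
                    int sh = Math.min(stripHeight, h - y);
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceRegion(new Rectangle(0, y, w, sh));
                    BufferedImage strip = reader.read(0, param);

                    // enlarge this strip only, keeping memory bounded
                    BufferedImage big = new BufferedImage(w * scale, sh * scale, BufferedImage.TYPE_INT_RGB);
                    Graphics2D g = big.createGraphics();
                    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                    g.drawImage(strip, 0, 0, w * scale, sh * scale, null);
                    g.dispose();
                    ImageIO.write(big, "jpeg", new File("strip_" + i + ".jpg"));
                }
                reader.dispose();
            }
        }
    }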

Split and Merge Image without loss in image quality in Java

I want to split an image into a number of chunks and merge them again to recreate the original image.
I used this method to split the image http://kalanir.blogspot.in/2010/02/how-to-split-image-into-chunks-java.html and for merging used this http://kalanir.blogspot.in/2010/02/how-to-merge-multiple-images-into-one.html
But after merging I found that my new image's size is reduced compared to the original.
How do I split and merge the image without losing any information?
Not sure if you are:
reading the files into in-memory RGB data
splitting
joining them
writing a new file
But in that case:
When reading the JPEGs you are obtaining unencoded, uncompressed, lossless data.
When you join them, you are encoding again (losing data) to generate the file.
If the second encoding doesn't do what the original software that encoded the image did, it'll be different.
A starting point is using the same JPEG quality. You can indicate a number from 1 to 100 indicating how much quality you want to keep at a size cost. That number should be at least equal to the number used to encode the original image.
I don't know how exactly to get that or to write that using the libraries you've used, but it should be available.
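In standard Java, for example, the quality is set through ImageWriteParam when writing the merged result (here it's a float from 0.0 to 1.0 rather than 1 to 100; the file names are placeholders):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.IIOImage;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageWriteParam;
    import javax.imageio.ImageWriter;
    import javax.imageio.stream.ImageOutputStream;

    BufferedImage merged = ImageIO.read(new File("merged.png")); // placeholder input
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(0.95f); // match (or exceed) the original's quality

    try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("merged.jpg"))) {
        writer.setOutput(out);
        writer.write(null, new IIOImage(merged, null, null), param);
    }
    writer.dispose();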
You can use different image-processing Java APIs like imgscalr, ImageJ, Marvin, etc.

Fast way to compress binary data?

I have some binary data (pixel values) in an int[] (or a byte[] if you prefer) that I want to write to disk in an Android app. I only want to use a small amount of processing time but want as much compression as I can get. What are my options?
In many cases the array will contain lots of consecutive zeros, so something simple and fast like RLE compression would probably work well (see the sketch below). I can't see any Android API functions for this though. If I have to loop over the array in Java, this will be very slow, as there is no JIT on most Android devices. I could use the NDK, but I'd rather avoid this if I can.
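For illustration, a zero-run RLE like the one described could be as simple as this sketch (my example, not an Android API; the decoder needs the original length so it can drop the final padding pair):

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Encode an int[] as (zeroRunLength, value) pairs: cheap to compute,
    // effective when the data contains long runs of zeros.
    static byte[] rleEncode(int[] data) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        int i = 0;
        while (i < data.length) {
            int zeros = 0;
            while (i < data.length && data[i] == 0) { zeros++; i++; }
            out.writeInt(zeros);
            out.writeInt(i < data.length ? data[i++] : 0); // trailing 0 = padding
        }
        out.flush();
        return bytes.toByteArray();
    }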
DeflaterOutputStream takes ~25 ms to compress 1 MB in Java. It's a native method, so a JIT should not make much difference.
Do you have a requirement which says 0.2s or 0.5s is too slow?
Can you do it in a background thread so the user doesn't notice how long it takes?
GZIP is based on Deflater + CRC32, so it is likely to be much the same or slightly slower.
Deflater has several modes. The DEFAULT_STRATEGY is fastest in Java, but simpler strategies such as HUFFMAN_ONLY might be faster for you.
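For instance, a small sketch (tune the level and strategy to your data):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.Deflater;
    import java.util.zip.DeflaterOutputStream;

    static void writeCompressed(byte[] data, File file) throws IOException {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setStrategy(Deflater.HUFFMAN_ONLY); // skip string matching, Huffman-code only
        try (DeflaterOutputStream out =
                new DeflaterOutputStream(new FileOutputStream(file), deflater)) {
            out.write(data);
        }
        deflater.end();
    }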
Android has Java's DeflaterOutputStream. Would that work?
Chain a GZIPOutputStream
http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPOutputStream.html
to a FileOutputStream
http://download.oracle.com/javase/6/docs/api/java/io/FileOutputStream.html
and write the byte array to it (use the byte-oriented streams, not FileWriter/FileReader, since this is binary data).
Then when you need to read the data back in, do the reverse: chain a GZIPInputStream
http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPInputStream.html
to a FileInputStream
http://download.oracle.com/javase/1.4.2/docs/api/java/io/FileInputStream.html
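A round-trip sketch (the file name is a placeholder):

    import java.io.*;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    static void save(byte[] data, File file) throws IOException {
        try (OutputStream out = new GZIPOutputStream(new FileOutputStream(file))) {
            out.write(data);
        }
    }

    static byte[] load(File file) throws IOException {
        try (InputStream in = new GZIPInputStream(new FileInputStream(file));
             ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[8192];
            for (int n; (n = in.read(chunk)) != -1; ) {
                buffer.write(chunk, 0, n);
            }
            return buffer.toByteArray();
        }
    }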
Depending on the size of the file you're saving, you will see some compression; gzip is good like that. If you're not seeing much of a trade-off, just write the data uncompressed using a buffered stream (that should be the fastest). Also, if you do gzip it, putting a buffered stream in the chain could speed it up a bit.
I've had to solve basically the same problem on another platform and my solution was to use a modified LZW compression. First, do some difference filtering (similar to PNG) on the 32bpp image. This will turn most of the image to black if there are large areas of common color. Then use a generic GIF compression algorithm treating the filtered image as if it's 8bpp. You'll get decent compression and it works very quickly. This will need to run in native code (NDK). It's really quite easy to get native code working on Android.
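The difference-filtering step (analogous to PNG's Sub filter) is straightforward in plain Java; a rough sketch for one row of a 32bpp image (differencing whole packed pixels here for brevity, per-channel filtering would be more faithful, and int wrap-around makes the round trip exact either way):

    // Each pixel becomes the difference from its left neighbor,
    // turning areas of constant color into runs of zeros.
    static void filterRow(int[] row) {
        for (int x = row.length - 1; x > 0; x--) {
            row[x] -= row[x - 1];
        }
    }

    static void unfilterRow(int[] row) {
        for (int x = 1; x < row.length; x++) {
            row[x] += row[x - 1];
        }
    }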
Random thought: if it's image data, try saving it as PNG. Standard Java has it, I'm sure Android will too, and it is probably optimized with native code. It has pretty good compression and it's lossless.

Java: make a low-quality image

In the software 'TeamViewer', the quality of the images can be changed. It looks like the image goes from 32-bit to 16-bit (or other values, like in the screen device settings in Windows). The image really is smaller, because you notice that the speed of the desktop sharing gets higher. I don't want something like: "scale down, send and then scale up".
Now my question: is it possible to make a low-quality image?
Thanks
You have four alternatives for lossy compression:
reduce spatial resolution (size)
reduce bitdepth
compress in another domain (JPEG)
a combination of these
And you will probably get the best gain with JPEG for rich pictures like photos, and with bit-depth reduction (even down to an 8-bit or smaller palette) on others with less variation in colors. Please note that bit-depth reduction is most effective if combined with lossless compression afterwards, like run-length encoding (did you know that even JPEG uses that?).
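A sketch of the bit-depth reduction option with the standard API (my example; TYPE_BYTE_INDEXED maps to a default 256-color palette):

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    static BufferedImage reduceBitDepth(BufferedImage src) {
        // 8 bits per pixel instead of 24/32, at the cost of color fidelity
        BufferedImage low = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_BYTE_INDEXED);
        Graphics2D g = low.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return low;
    }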
Yes, you can change the compression settings for many different types of images.
Google found this: Adjust JPEG image compression quality when saving images in Java
You can use image converters for this purpose. When a user uploads a file, it is sent to the converter, which does its thing (according to the defined settings). You would, however, need access to run applications on the server, I think.
ypnos already mentioned bit-depth reduction. Reading your question, I also immediately thought of dithering, which will preserve the image better as you reduce the size of the color space. You can pretty easily find implementations of the Floyd-Steinberg algorithm around the net.
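For the shape of the algorithm, here is a compact Floyd-Steinberg sketch for grayscale-to-1-bit (my example; dithering to a reduced color palette works the same way per channel):

    // Floyd-Steinberg dithering of an 8-bit grayscale image (values 0..255,
    // row-major in gray[]) down to black and white, diffusing each pixel's
    // quantization error to its not-yet-visited neighbors.
    static void ditherToBlackAndWhite(int[] gray, int width, int height) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = y * width + x;
                int old = gray[i];
                int nu = old < 128 ? 0 : 255;
                int err = old - nu;
                gray[i] = nu;
                if (x + 1 < width)                   gray[i + 1]         += err * 7 / 16;
                if (x > 0 && y + 1 < height)         gray[i + width - 1] += err * 3 / 16;
                if (y + 1 < height)                  gray[i + width]     += err * 5 / 16;
                if (x + 1 < width && y + 1 < height) gray[i + width + 1] += err * 1 / 16;
            }
        }
    }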
