Is it possible to compress JPG and/or PNG images further in a lossless manner? I've found websites that do this, but I assumed that these files were all compressed the same way.
By compress, I mean it is still a jpg or png file that any image program can read, but the file size is smaller.
And if so, what is the best way in both Java and C# to get the additional compression?
There are several compression algorithms out there; some achieve better results than others on different images. I am not sure exactly what your question is asking, since you mix Java and C# with image compression.
Google developers have fairly recently released a new format called WebP. They claim this format can deliver lossless images 26% smaller than PNGs, and even better savings for lossy images compared to JPEG; however, support for WebP is still quite limited.
What you can do is make use of the picture HTML element and deliver a WebP alternative with a fallback to a PNG or JPEG file.
Here's an example:
<picture>
<source srcset="img/awesomeWebPImage.webp" type="image/webp">
<source srcset="img/creakyOldJPEG.jpg" type="image/jpeg">
<img src="img/creakyOldJPEG.jpg" alt="Alt Text!">
</picture>
Here's a good article on using WebP: https://css-tricks.com/using-webp-images/
All JPEG compression is inherently lossy. The RGB<->YCbCr conversion often requires modifying values because the gamuts of the color spaces are different, and JPEG compression involves floating-point values being rounded to integers.
There are a lot of JPEG settings available to trade off loss against size. Whether you have access to them or not depends upon your encoder.
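In standard Java, for example, the quality setting is reachable through ImageIO's ImageWriteParam. A minimal sketch, assuming input.jpg and output.jpg are placeholder file names:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegQualityExample {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("input.jpg")); // placeholder input

        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.7f); // 0.0 = smallest file, 1.0 = best quality

        try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("output.jpg"))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}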
In the case of PNG, the only real size setting is how far the encoder will search the LZ window for matches; it's a tradeoff between compression speed and file size.
So, let's say I want to recode a PNG to JPEG in Java. The image has an extreme resolution, say 10,000 x 10,000 px. Using the "standard" Java image API readers and writers, at some point you need to have the entire image decoded in RAM, which takes an extreme amount of space (hundreds of MB). I have been looking at how other tools do this, and I found that ImageMagick uses disk pixel storage, but that seems to be way too slow for my needs. So what I need is a true streaming recoder. And by true streaming I mean reading and processing the data in chunks or bins, not just being given a stream as input and decoding it whole beforehand.
Now, first the theory behind it: is it even possible, given the JPEG and PNG algorithms, to do this with streams, or let's say in bins of data, so there is no need to have the entire image decoded in memory (or other storage)? In JPEG compression, the first few stages could be done on streams, but I believe Huffman encoding needs to build the entire tree of value probabilities after quantization, so it needs to analyze the whole image - meaning the whole image needs to be decoded beforehand, or somehow on demand by regions.
And the golden question: if the above can be achieved, is there any Java library that can actually work this way and save a large amount of RAM?
If I create a 10,000 x 10,000 PNG file, full of incompressible noise, with ImageMagick like this:
convert -size 10000x10000 xc:gray +noise random image.png
I see ImageMagick uses 675M of RAM to create the resulting 572MB file.
I can convert it to a JPEG with vips like this:
vips im_copy image.png output.jpg
and vips uses no more than 100MB of RAM while converting, and takes 7 seconds on a reasonable spec iMac around 4 years old - albeit with SSD.
I have thought about this for a while, and I would really like to implement such a library. Unfortunately, it's not that easy. Different image formats store pixels in different ways. PNGs or GIFs may be interlaced. JPEGs may be progressive (multiple scans). TIFFs are often striped or tiled. BMPs are usually stored bottom-up. PSDs are channeled. Etc.
Because of this, the minimum amount of data you have to read to recode to a different format may in the worst case be the entire image (or maybe not, if the format supports random access and you can live with a lot of seeking back and forth)... Resampling (scaling) the image to a new file in the same format would probably work in most cases, though (probably not so good for progressive JPEGs, unless you can resample each scan separately).
If you can live with a disk buffer, though, as the second-best option, I have created some classes that allow BufferedImages to be backed by NIO MappedByteBuffers (memory-mapped file buffers, kind of like virtual memory). While performance isn't really like in-memory images, it's also not entirely useless. Have a look at MappedImageFactory and MappedFileBuffer.
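The rough idea of backing pixel storage with a memory-mapped file can be sketched with plain NIO (this is only an illustration of the concept, not the actual MappedImageFactory/MappedFileBuffer API; pixels.tmp is a placeholder file name):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;

public class MappedPixelBufferSketch {
    public static void main(String[] args) throws IOException {
        int width = 10_000, height = 10_000;
        long sizeInBytes = (long) width * height * 4; // one 32-bit ARGB pixel per sample

        try (RandomAccessFile file = new RandomAccessFile("pixels.tmp", "rw");
             FileChannel channel = file.getChannel()) {
            // The OS pages the data in and out on demand, so heap usage stays small.
            IntBuffer pixels = channel
                    .map(FileChannel.MapMode.READ_WRITE, 0, sizeInBytes)
                    .asIntBuffer();

            // Fill the first row with a mid-gray value.
            for (int x = 0; x < width; x++) {
                pixels.put(x, 0xFF808080);
            }

            System.out.printf("pixel(0,0) = 0x%08X%n", pixels.get(0));
        }
    }
}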
I've written a PNG encoder/decoder that does that for the PNG format (it reads and writes progressively, which only requires keeping one row in memory): PNGJ
I don't know if there is something similar with JPEG
I have already successfully coded my steganography program for PNG files in Java. My program works very well with both PNG and BMP files. But when I tried running it on a JPG file, the revealed data was not the same as the original data. Certainly, the headers of each file type aren't the same. And so now I wonder: are the data structures of PNG and JPG files different? I need to know exactly how to manipulate the bytes of a JPG file without affecting its header and footer.
Thanks.
First of all, you need to say exactly which method you are using for image steganography, e.g. hiding the secret data in the LSBs of the pixels of the image, reading the file in a binary format, etc.
If working with LSBs is your procedure, then I hope the following answer satisfies your query.
'PNG' and 'BMP' are actually lossless file formats. After manipulating the bits of the pixels in these formats, no data is lost when you create the new image. This is the reason you are able to retrieve all the hidden data.
'JPG', however, uses a lossy compression technique, due to which the data hidden in the pixels is lost. I faced this problem too, and the solution is to handle the image in the transform domain. You need to use the Discrete Cosine Transform (DCT) method for its implementation.
The transform domain involves manipulation of algorithms and image transforms such as the discrete cosine transform (DCT) and wavelet transforms. These methods can hide information in more significant areas of the image and may also manipulate properties of the image such as luminance. These kinds of techniques are more effective than image-domain bit-wise steganographic methods. Transform-domain techniques can be applied to images of any format, and the embedded data may even survive conversion between lossless and lossy formats.
How does DCT work in steganography?
The image is broken down into 8x8 blocks of pixels. The DCT is applied to each block, from left to right, top to bottom. Each block is then quantized (the quantization table scales the DCT coefficients), and the message is embedded in the scaled DCT coefficients.
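Purely as an illustration of the first step, a naive 2-D DCT of one 8x8 block could look like this in Java; a real encoder uses a fast factored DCT and then quantizes the coefficients before embedding any message bits:

// Naive 2-D DCT-II of one 8x8 block of (level-shifted) pixel values.
public class BlockDctSketch {
    static double[][] dct8x8(double[][] block) {
        int n = 8;
        double[][] coeffs = new double[n][n];
        for (int u = 0; u < n; u++) {
            for (int v = 0; v < n; v++) {
                double sum = 0;
                for (int x = 0; x < n; x++) {
                    for (int y = 0; y < n; y++) {
                        sum += block[x][y]
                                * Math.cos((2 * x + 1) * u * Math.PI / (2.0 * n))
                                * Math.cos((2 * y + 1) * v * Math.PI / (2.0 * n));
                    }
                }
                double cu = (u == 0) ? Math.sqrt(1.0 / n) : Math.sqrt(2.0 / n);
                double cv = (v == 0) ? Math.sqrt(1.0 / n) : Math.sqrt(2.0 / n);
                coeffs[u][v] = cu * cv * sum; // low frequencies end up near coeffs[0][0]
            }
        }
        return coeffs;
    }
}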
A lot of research is still required in this method. I am working on its code and will post it ASAP.
It will be a pleasure to hear of other methods or different efficient techniques from other developers.
I have some binary data (pixel values) in an int[] (or a byte[] if you prefer) that I want to write to disk in an Android app. I only want to use a small amount of processing time, but I want as much compression as I can get. What are my options?
In many cases the array will contain lots of consecutive zeros so something simple and fast like RLE compression would probably work well. I can't see any Android API functions for this though. If I have to loop over the array in Java, this will be very slow as there is no JIT on most Android devices. I could use the NDK but I'd rather avoid this if I can.
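For illustration, a dead-simple zero-run RLE over an int[] could look like this (just a sketch; the encoding scheme is made up for illustration and is not an Android API):

import java.util.ArrayList;
import java.util.List;

// Very simple run-length encoding of zero runs: a 0 in the output is always
// followed by the length of the run it replaces; other values pass through.
public class ZeroRleSketch {
    static int[] encode(int[] data) {
        List<Integer> out = new ArrayList<>();
        int i = 0;
        while (i < data.length) {
            if (data[i] == 0) {
                int run = 0;
                while (i < data.length && data[i] == 0) { run++; i++; }
                out.add(0);
                out.add(run);
            } else {
                out.add(data[i++]);
            }
        }
        int[] packed = new int[out.size()];
        for (int j = 0; j < packed.length; j++) packed[j] = out.get(j);
        return packed;
    }
}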
DeflaterOutputStream takes ~25 ms to compress 1 MB in Java. It's a native method, so a JIT should not make much difference.
Do you have a requirement which says 0.2s or 0.5s is too slow?
Can you do it in a background thread so the user doesn't notice how long it takes?
GZIP is based on Deflater + CRC32, so it is likely to be much the same or slightly slower.
Deflater has several strategies. DEFAULT_STRATEGY is the default in Java, but simpler strategies such as HUFFMAN_ONLY might be faster for you.
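A minimal sketch of trying a different strategy with java.util.zip.Deflater; the 1 MB zero-filled buffer is only a stand-in for real pixel data:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflateStrategySketch {
    // Compress a byte[] with a chosen strategy, e.g. Deflater.HUFFMAN_ONLY.
    static byte[] compress(byte[] input, int strategy) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setStrategy(strategy);
        deflater.setInput(input);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length / 4);
        byte[] buffer = new byte[8192];
        while (!deflater.finished()) {
            int n = deflater.deflate(buffer);
            out.write(buffer, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = new byte[1 << 20]; // 1 MB of zeros, mimicking sparse pixel data

        long start = System.nanoTime();
        byte[] packed = compress(data, Deflater.HUFFMAN_ONLY);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(data.length + " -> " + packed.length + " bytes in " + elapsedMs + " ms");
    }
}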
Android has Java's DeflaterOutputStream. Would that work?
Write the byte array to a
http://download.oracle.com/javase/6/docs/api/java/io/FileOutputStream.html
with a
http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPOutputStream.html
chained onto it.
Then, when you need to read the data back in, do the reverse: a
http://download.oracle.com/javase/1.4.2/docs/api/java/io/FileInputStream.html
with a
http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPInputStream.html
chained onto it.
Depending on the size of the file you're saving, you will see some compression; GZIP is good like that. If you're not seeing much of a gain, just write the data uncompressed using a buffered stream (that should be the fastest). Also, if you do gzip it, adding a buffered stream underneath could speed it up a bit.
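A minimal sketch of that chaining, assuming the data is already in a byte[]:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipFileSketch {
    static void save(byte[] data, File file) throws IOException {
        try (GZIPOutputStream out = new GZIPOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            out.write(data);
        }
    }

    static byte[] load(File file) throws IOException {
        try (GZIPInputStream in = new GZIPInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        }
    }
}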
I've had to solve basically the same problem on another platform and my solution was to use a modified LZW compression. First, do some difference filtering (similar to PNG) on the 32bpp image. This will turn most of the image to black if there are large areas of common color. Then use a generic GIF compression algorithm treating the filtered image as if it's 8bpp. You'll get decent compression and it works very quickly. This will need to run in native code (NDK). It's really quite easy to get native code working on Android.
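To illustrate the filtering step only (a sketch of the idea, not the exact code, assuming 32bpp ARGB pixels):

// Horizontal difference filter (like PNG's "Sub"), applied per channel on one row of ARGB pixels.
public class DiffFilterSketch {
    static void filterRow(int[] row) {
        // Walk right to left so each pixel becomes its delta to the still-unmodified left neighbour.
        for (int x = row.length - 1; x > 0; x--) {
            int cur = row[x], left = row[x - 1];
            int a = ((cur >>> 24) - (left >>> 24)) & 0xFF;
            int r = (((cur >>> 16) & 0xFF) - ((left >>> 16) & 0xFF)) & 0xFF;
            int g = (((cur >>> 8) & 0xFF) - ((left >>> 8) & 0xFF)) & 0xFF;
            int b = ((cur & 0xFF) - (left & 0xFF)) & 0xFF;
            row[x] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
}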
Random thought: if it's image data, try saving it as PNG. Standard Java has it, and I'm sure Android will too, probably optimized with native code. It has pretty good compression and it's lossless.
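On Android, a minimal sketch of that would be Bitmap.compress with the PNG format (the quality argument is ignored for PNG, which is always lossless):

import android.graphics.Bitmap;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class PngSaverSketch {
    // Save a Bitmap losslessly as PNG; the quality argument is ignored for PNG.
    static void saveAsPng(Bitmap bitmap, File file) throws IOException {
        try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(file))) {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        }
    }
}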
I'm transferring an image over TCP/IP and I'd like to optimize it while keeping the quality as good as possible.
What kinds of methods or algorithms can I use?
P.S. Now that I think about it, maybe I should ask: what is the best and fastest way to send an image via TCP/IP?
To find the right answer to your question, you need to have a look at the images themselves. Are they real world images captured on camera? Or are they synthetic images, like icons or graphs?
Lossy compression (like JPEG) works very well for real scenes with many gradients and smooth edges. For images with solid colors and hard edges, you have a much higher (even perceived) loss in image quality and less gain in compression rates compared to lossless compression.
Basically, the established image formats for your domain are PNG (Portable Network Graphics) and JPEG. PNG images are always compressed losslessly, and their compression algorithm works better than the competition, e.g. GIF. If the images are well suited, you get compression rates comparable to JPEG; if not (as with real-world images), you get typical ZIP compression rates (around 50%).
After deciding on lossy/lossless compression (or a combination based on picture type -- you could also compress images in both formats first and then compare, if processing time does not matter as much as network throughput), you should also take advantage of progressive coding, which is supported by both the JPEG and PNG formats.
With progressive coding, the data is basically organized so that the more of it you receive, the better the quality (rather than just sending the image row by row). The advantage here is that you can already show the image to the user while it is still being received. However, for this you need a decoder that exposes this functionality.
I don't know about the libraries available in Java for this.
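For what it's worth, the standard ImageIO JPEG writer can at least be asked for a progressive scan order through ImageWriteParam; a minimal sketch, assuming the image is already in a BufferedImage:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class ProgressiveJpegSketch {
    static void writeProgressive(BufferedImage image, File file) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setProgressiveMode(ImageWriteParam.MODE_DEFAULT); // request a progressive scan order

        try (ImageOutputStream out = ImageIO.createImageOutputStream(file)) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}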
You should check Java Advanced Imaging API.
But to use it effectively you will need to understand what type of image operations are right for your problem. This will depend, among other things, on the encoding of your source image.
As for the "good quality as much as possible", you will most likely need to experiment with various compression techniques and their relevant parameters before deciding which one gives the right balance of speed, size and quality for your needs.
You may take a look at this. It's a comparison between common compression algorithms (quality and compression rate).
Edit: it is not directly Java, but you can probably find an implementation of the desired algorithm.
For images intended for human viewing JPEG is quite nice. What is in the remote end? A browser?
In the software 'TeamViewer', the quality of the images can be changed. It looks like the image goes from 32-bit down to 16-bit (or other values, like in the screen settings in Windows). The image really is smaller, because you notice that the speed of the desktop sharing gets higher. I don't want something like: "scale down, send and then scale up".
Now my question: is it possible to make a low-quality image?
Thanks
You have four alternatives for lossy compression:
reduce spatial resolution (size)
reduce bitdepth
compress in another domain (JPEG)
a combination of these
And you will probably get the best gain with JPEG for rich pictures like photos, and with bit-depth reduction (even down to an 8-bit or smaller palette) on images with less variation in colors. Please note that bit-depth reduction is most effective when combined with lossless compression afterwards, like run-length encoding (did you know that even JPEG uses that?).
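As a concrete example of bit-depth reduction, packing 24-bit RGB into 16-bit RGB565 (a sketch only; real code would usually combine this with dithering and a lossless pass afterwards):

// Reduce 24-bit RGB to 16-bit RGB565, a simple form of bit-depth reduction.
public class BitDepthReductionSketch {
    static short toRgb565(int argb) {
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>> 8) & 0xFF;
        int b = argb & 0xFF;
        // Keep the 5/6/5 most significant bits of each channel.
        return (short) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    static short[] reduce(int[] pixels) {
        short[] out = new short[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = toRgb565(pixels[i]);
        }
        return out;
    }
}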
Yes, you can change the compression settings for many different types of images.
Google found this: Adjust JPEG image compression quality when saving images in Java
You can use image converters for this purpose. When a user uploads a file, it is sent to the converter, which does its thing (according to the defined settings). You would, however, need access to run applications on the server, I think.
ypnos already mentioned bit-depth reduction. Reading your question, I also immediately thought of dithering, which will preserve the image better as you reduce the size of the color space. You can pretty easily find implementations of the Floyd-Steinberg algorithm around the net.
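For reference, a bare-bones Floyd-Steinberg pass on a grayscale image quantized to black and white might look like this (a sketch only; extending it to a color palette means quantizing each channel and diffusing the error per channel):

// Floyd-Steinberg dithering of a grayscale image (values 0..255) down to black and white.
public class FloydSteinbergSketch {
    static int[][] dither(int[][] gray) { // gray[y][x]
        int h = gray.length, w = gray[0].length;
        float[][] work = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                work[y][x] = gray[y][x];

        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int oldVal = Math.round(work[y][x]);
                int newVal = oldVal < 128 ? 0 : 255;   // quantize to the nearest of the two levels
                out[y][x] = newVal;
                float err = oldVal - newVal;           // diffuse the quantization error to neighbours
                if (x + 1 < w)              work[y][x + 1]     += err * 7 / 16;
                if (y + 1 < h && x > 0)     work[y + 1][x - 1] += err * 3 / 16;
                if (y + 1 < h)              work[y + 1][x]     += err * 5 / 16;
                if (y + 1 < h && x + 1 < w) work[y + 1][x + 1] += err * 1 / 16;
            }
        }
        return out;
    }
}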