I'm trying to see the results of various compression qualities with something like this:
private static Bitmap codec(Bitmap src, Bitmap.CompressFormat format, int quality) {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    src.compress(format, quality, os);
    byte[] array = os.toByteArray();
    return BitmapFactory.decodeByteArray(array, 0, array.length);
}
I have a Bitmap called bitmap, and I'm comparing it to compressed version:
Bitmap compressed = codec(bitmap, Bitmap.CompressFormat.JPEG, 10);
Log.d("result", "bitmap=" + bitmap.getByteCount() + " compressed=" + compressed.getByteCount());
No matter what photo I select to load into bitmap, the compressed version's byte count remains the same as bitmap's byte count, though if I load compressed into an ImageView, the quality is very noticeably lower.
Is the size really staying the same while lowering the visual quality of the image? Am I getting the size of the file incorrectly?
EDIT:
Even stranger, the result size is showing 16343040 bytes for an image that says 1.04 MB in gallery details.
I'm getting the original bitmap through onActivityResult using:
InputStream is = getContentResolver().openInputStream(selectedImageUri);
bitmap = BitmapFactory.decodeStream(is);
is.close();
Where selectedImageUri is either from getData() or the file selected from the device's storage.
Per the Android Developer reference for Bitmap, getByteCount() returns the minimum number of bytes needed to store this bitmap's pixels, i.e. the size of the decoded pixel buffer in memory (width x height x bytes per pixel). It says nothing about the compressed file size, which is why it never changes no matter how aggressively you compress. Use getAllocationByteCount() if you want the memory the Bitmap's allocation is actually using, and the compressed byte array's length if you want the on-disk size.
A Bitmap is an in-memory data structure for displaying images. Your byte[] array tells you the size as it would be on disk: array.length.
To be entirely clear: the Bitmap in memory will use neither more nor less memory after the round trip; only a different color model (say, 256 indexed colors) would change that.
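To see this in action, log the compressed array's length alongside getByteCount(); a minimal sketch reusing the question's variables:
ByteArrayOutputStream os = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 10, os);
byte[] array = os.toByteArray();
// getByteCount() reflects the decoded pixel buffer, so it stays constant;
// only array.length shrinks as the quality drops.
Log.d("result", "in-memory=" + bitmap.getByteCount() + " on-disk=" + array.length);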
BitmapFactory.decodeByteArray converts a compressed image into an uncompressed one (RGBA8 format). So, basically, your codec() function takes in an RGBA8 image, compresses it, then decompresses it back to RGBA8 before returning the result.
This is discussed in Smaller PNG Files; Android doesn't use the compressed images directly: all image data has to be converted from compressed formats to RGBA8 so that the rendering system can use it (see Rendering Performance 101).
If you want a smaller in-memory representation of your image, you need to use a different pixel format like RGB565 (which is discussed in Smaller Pixel Formats).
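For instance, a minimal sketch that asks the decoder for RGB_565 up front, halving the per-pixel memory cost relative to ARGB_8888 (at the price of color depth and the alpha channel), reusing the question's input stream:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel instead of 4
Bitmap smaller = BitmapFactory.decodeStream(is, null, options);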
Related
I have an application that needs to communicate with a server exchanging images via their Base64 representation. Due to server capacity, I can only compress and send images that are < 100KB of size. I can easily retrieve the size of the image using:
File file= new File(path);
long size = file.length() / 1024; // KB
and that displays the exact size. Then I decode it into a Bitmap and compress it using:
int quality = 100;
Bitmap bitmap = BitmapFactory.decodeFile(path);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, quality, baos);
byte[] byteArr = baos.toByteArray();
And here things get dirty. I can't properly retrieve the exact size value as I did before, because if the size is > 100KB then I need to re-compress it adjusting the quality.
EDIT: forgot to mention that I have tried the byteArr.length method, but the resulting size isn't the same as it was before.
In this example, I have tried with an 80KB image; the sizes are printed in the Android Studio console.
You may want to use this library, which accepts a max size (in KB) for compression.
Example (from the readme.md):
Luban.compress(context, file)
        .setMaxSize(100)              // limit the final image size (unit: KB)
        .setMaxHeight(1920)           // limit image height
        .setMaxWidth(1080)            // limit image width
        .putGear(Luban.CUSTOM_GEAR)   // use CUSTOM_GEAR compression mode
        .asObservable()
However, I strongly suggest you not send binary data (such as images) as Base64, since it'll reduce performance and increase size!
It's better to upload it as binary.
If none of the above solutions suits you, then at least try to implement your method using binary search.
You'll need to loop over your compression step, reducing the quality value each iteration until the desired size is reached; you can check the size by evaluating byteArr.length, as in the sketch below.
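A minimal sketch of that loop (the helper name and the step size are illustrative; a proper binary search over quality in [0, 100] would converge in about seven compress calls):
private static byte[] compressUnderLimit(Bitmap bitmap, int maxBytes) {
    int quality = 100;
    byte[] bytes;
    do {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, baos);
        bytes = baos.toByteArray();
        quality -= 10; // linear step-down; swap in binary search for speed
    } while (bytes.length > maxBytes && quality > 0);
    return bytes;
}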
I use Base64 to encode an image to a string with this code:
Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 90, stream);
byte[] image = stream.toByteArray();
String img_str = Base64.encodeToString(image, 0); // 0 == Base64.DEFAULT
and decode it with this code:
byte[] decodedString = Base64.decode(decode, Base64.NO_WRAP);
Bitmap decodedByte = BitmapFactory.decodeByteArray(decodedString, 0, decodedString.length);
imageView.setImageBitmap(decodedByte);
but the string is too long, very very long. I can't use it this way. How can I get a shorter string?
You can't. Images typically contain a lot of data. When you convert that to text as base64 it becomes even bigger (4 characters for every 3 bytes). So yes, that will typically be very long if it's a large image.
You could compress the image more heavily in order to reduce the size, but eventually it will be hard to even recognize as the original image - and may well be quite large even so.
Another way of reducing the size in bytes is to create a smaller image in terms of the number of pixels - for example, shrinking a 1000x1000 image to 100x100... is that an option in your case?
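If shrinking is an option, a minimal sketch using Android's Bitmap.createScaledBitmap (the 100x100 target is only an example, and bitmap is the one from the encode snippet above):
Bitmap small = Bitmap.createScaledBitmap(bitmap, 100, 100, true); // true enables filtering
ByteArrayOutputStream stream = new ByteArrayOutputStream();
small.compress(Bitmap.CompressFormat.JPEG, 90, stream);
String shorter = Base64.encodeToString(stream.toByteArray(), Base64.NO_WRAP); // far fewer characters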
You haven't given us much context, but could you store the data elsewhere and then just use a URL instead?
I believe the only answer to this is: if you want a shorter string, you should use a smaller image.
Depends on the size of the image. A larger image is gonna yield a larger string. Images contain a lot of data. That is why people usually only do base64 encoding for very small images like icons, etc.
You could try reducing the quality of the JPEG compression, but I doubt you'd save much space. Reducing the dimensions (if possible) of the image would probably save some space. Either way, doing base64 on anything larger than a really small gif or png image is almost always counterproductive.
I'm trying to get the compression ratio of a JPEG and a GIF image in Java.
I've searched everywhere but can't find anything. Is it possible to read the compression ratio from the files?
If not, how could I compute this ratio?
To calculate the compression of an image you compare the actual file size to the size you'd get if you were storing the image "raw".
For example, for a JPEG file that's 1024x1024, true color (24bpp), and 384KB on disk, you'd get a ratio of (384x1024) / (1024x1024x3) = 0.125, which means the JPEG produced a file that's 12.5% of the raw image. If you invert the division, you can say the image was compressed 8x, or a 1:8 ratio.
Get the size and color info of the image from the headers or by using the Image API; there's no need to decompress the file to do this calculation.
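A sketch of that calculation in Java, assuming a 24bpp raw baseline (the method name is mine; it uses ImageIO to read only the header, never decoding the pixels):
public static double compressionRatio(File file) throws IOException {
    try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
        Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
        if (!readers.hasNext()) throw new IOException("Unsupported image format");
        ImageReader reader = readers.next();
        reader.setInput(in);
        // raw size at 24bpp: width * height * 3 bytes
        long rawBytes = (long) reader.getWidth(0) * reader.getHeight(0) * 3;
        reader.dispose();
        return (double) file.length() / rawBytes; // e.g. 0.125 means 1:8
    }
}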
You could try comparing the file size to its pixel count - it gives you a sort of ratio. For example:
//Image 1
Image dimensions = 607x800 px
Number of pixels = 486K
File size = 143 KB
//Good quality

//Image 2
Image dimensions = 1719x2377 px
Number of pixels = 4.086M
File size = 408 KB
//Bad quality
You can start with Java ImageIO: read in your image and use the appropriate methods of the ImageReader class.
JAI (Java Advanced Imaging) is also available as a separate download.
I'm loading a JPEG file via BitmapFactory and trying to save it again (later I want to do some calculations on the pixel data before saving it).
But if I try to save it with
FileOutputStream fos = new FileOutputStream(new File("/sdcard/test.jpg"));
originalImage.compress(Bitmap.CompressFormat.JPEG, 100, fos);
then the result is not exactly the same as the original picture. Some pixels have different color values, and that's not useful for my later calculations.
Is there a possibility to save it losslessly? Or is the problem already introduced when I load the picture with
Bitmap originalImage = BitmapFactory.decodeFile("/sdcard/input.jpg");
a few lines before?
Is there a possibility to save it losslessly?
No. The JPEG format uses lossy compression. It makes no formal guarantees, even if you set the quality to 100.
Or is the problem already introduced when I load the picture with [...]
No, bitmaps are... maps of bits, i.e. they represent the exact bits of the image data.
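If you need the pixels to survive the save/load cycle exactly, one option is to write PNG instead; a minimal sketch reusing the question's variables (the quality argument is ignored for PNG, which is always lossless):
FileOutputStream fos = new FileOutputStream(new File("/sdcard/test.png"));
originalImage.compress(Bitmap.CompressFormat.PNG, 100, fos); // quality ignored for PNG
fos.close();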
I have a Java program that reads a JPEG file from the hard drive and uses it as the background image for various other things. The image itself is stored in a BufferedImage object like so:
BufferedImage background = ImageIO.read(file);
This works great - the problem is that the BufferedImage object itself is enormous. For example, a 215KB JPEG file becomes a BufferedImage object that's 4 megs and change. The app in question can have some fairly large background images loaded, but whereas the JPEGs are never more than a meg or two, the memory used to store the BufferedImage can quickly exceed hundreds of megabytes.
I assume all this is because the image is being stored in RAM as raw RGB data, not compressed or optimized in any way.
Is there a way to have it store the image in ram in a smaller format? I'm in a situation where I have more slack on the CPU side than RAM, so a slight performance hit to get the image object's size back down towards the jpeg compression would be well worth it.
In one of my projects, I just down-sample the image on the fly as it is being read from an ImageInputStream. The down-sampling reduces the dimensions of the image to a required width & height whilst not requiring expensive resizing computations or modification of the image on disk.
Because I down-sample the image to a smaller size, it also significantly reduces the processing power and RAM required to display it. For extra optimization, I render the buffered image in tiles also... But that's a bit outside the scope of this discussion. Try the following:
public static BufferedImage subsampleImage(
        ImageInputStream inputStream,
        int x,
        int y,
        IIOReadProgressListener progressListener) throws IOException {

    BufferedImage resampledImage = null;

    Iterator<ImageReader> readers = ImageIO.getImageReaders(inputStream);
    if (!readers.hasNext()) {
        throw new IOException("No reader available for supplied image stream.");
    }

    ImageReader reader = readers.next();
    ImageReadParam imageReaderParams = reader.getDefaultReadParam();
    reader.setInput(inputStream);

    // Compare the image's real dimensions to the requested ones and pick a
    // subsampling factor, so the reader decodes only every Nth pixel.
    Dimension d1 = new Dimension(reader.getWidth(0), reader.getHeight(0));
    Dimension d2 = new Dimension(x, y);
    int subsampling = (int) scaleSubsamplingMaintainAspectRatio(d1, d2);
    imageReaderParams.setSourceSubsampling(subsampling, subsampling, 0, 0);

    reader.addIIOReadProgressListener(progressListener);
    resampledImage = reader.read(0, imageReaderParams);
    reader.removeAllIIOReadProgressListeners();

    return resampledImage;
}

public static long scaleSubsamplingMaintainAspectRatio(Dimension d1, Dimension d2) {
    long subsampling = 1;
    if (d1.getWidth() > d2.getWidth()) {
        subsampling = Math.round(d1.getWidth() / d2.getWidth());
    } else if (d1.getHeight() > d2.getHeight()) {
        subsampling = Math.round(d1.getHeight() / d2.getHeight());
    }
    return subsampling;
}
To get the ImageInputStream from a File, use:
ImageIO.createImageInputStream(new File("C:\\image.jpeg"));
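For example, a hypothetical call that reads a large file down to roughly a 1920x1200 screen (the path and target size are illustrative):
ImageInputStream in = ImageIO.createImageInputStream(new File("C:\\image.jpeg"));
BufferedImage background = subsampleImage(in, 1920, 1200, null); // null: no progress listener
in.close();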
As you can see, this implementation respects the image's original aspect ratio as well. You can optionally register an IIOReadProgressListener so that you can keep track of how much of the image has been read so far. This is useful for showing a progress bar if the image is being read over a network, for instance... It's not required, though; you can just specify null.
Why is this of particular relevance to your situation? It never reads the entire image into memory, just as much as you need so that it can be displayed at the desired resolution. It works really well for huge images, even those that are tens of MB on disk.
I assume all this is because the image is being stored in RAM as raw RGB data, not compressed or optimized in any way.
Exactly. Say a 1920x1200 JPEG fits in, say, 300 KB on disk; in memory, in a (typical) RGB + alpha format at 8 bits per component (hence 32 bits per pixel), it will occupy:
1920 x 1200 x 32 / 8 = 9,216,000 bytes
So your 300 KB file becomes a picture needing nearly 9 MB of RAM (note that, depending on the type of images you're using from Java and depending on the JVM and OS, this may sometimes be GFX-card RAM).
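The same arithmetic in Java, for reference (4 bytes per pixel in RGBA8):
long inMemoryBytes = 1920L * 1200L * 4L; // = 9,216,000 bytes, close to 9 MB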
If you want to use a picture as the background of a 1920x1200 desktop, you probably don't need a picture bigger than that in memory (unless you want some special effect, like sub-RGB decimation / color anti-aliasing / etc.).
So you have two choices:
make your files less wide and less tall (in pixels) on disk
reduce the image size on the fly
I typically go with number 2, because reducing the file size on disk means losing details (a 1920x1200 picture is less detailed than the "same" at 3840x2400: you'd be "losing information" by downscaling it).
Now, Java kinda sucks big time at manipulating pictures that big (from a performance point of view, a memory-usage point of view, and a quality point of view [*]). Back in the day, I'd call ImageMagick from Java to resize the picture on disk first, and then load the resized image (say, one fitting my screen's size).
Nowadays there are Java bridges / APIs to interface directly with ImageMagick.
[*] There is NO WAY you're downsizing an image using Java's built-in API as fast and with quality as good as ImageMagick provides, for a start.
Do you have to use BufferedImage? Could you write your own Image implementation that stores the jpg bytes in memory, and converts to a BufferedImage as necessary and then discards it?
This, applied with some display-aware logic (rescale the image using JAI before storing it in your byte array as jpg), will make it faster than decoding the large jpg every time, and give a smaller footprint than what you currently have (processing memory requirements excepted).
Use imgscalr:
http://www.thebuzzmedia.com/software/imgscalr-java-image-scaling-library/
Why?
Follows best practices
Stupid simple
Interpolation, Anti-aliasing support
So you aren't rolling your own scaling library
Code:
BufferedImage thumbnail = Scalr.resize(image, 150);
or
BufferedImage thumbnail = Scalr.resize(image, Scalr.Method.SPEED, Scalr.Mode.FIT_TO_WIDTH, 150, 100, Scalr.OP_ANTIALIAS);
Also, use image.flush() on your larger image after conversion to help with memory utilization.
File size of the JPG on disk is completely irrelevant.
The pixel dimensions of the file are. If your image is 15 megapixels, expect it to require a crapload of RAM to load a raw uncompressed version.
Resize your image dimensions to just what you need, and that is the best you can do without going to a less rich colorspace representation.
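If you do that resize in plain Java, a minimal sketch using Graphics2D (targetW, targetH, and original are placeholders for your target dimensions and source image):
BufferedImage scaled = new BufferedImage(targetW, targetH, BufferedImage.TYPE_INT_RGB);
Graphics2D g = scaled.createGraphics();
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.drawImage(original, 0, 0, targetW, targetH, null); // scale while drawing, then keep only the small copy
g.dispose();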
You could copy the pixels of the image to another buffer and see if that occupies less memory than the BufferedImage object. Probably something like this:
// width and height should match the dimensions of imageBuffer, the
// BufferedImage previously loaded via ImageIO.read().
BufferedImage background = new BufferedImage(
        width,
        height,
        BufferedImage.TYPE_INT_RGB
);

int[] pixels = background.getRaster().getPixels(
        0,
        0,
        imageBuffer.getWidth(),
        imageBuffer.getHeight(),
        (int[]) null
);