How to convert an image to a short string? - Java

I use Base64 to encode an image into a string with this code:
Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 90, stream);   // re-encode as JPEG at quality 90
byte[] image = stream.toByteArray();
String img_str = Base64.encodeToString(image, 0);          // flag 0 is Base64.DEFAULT
and decode it with this code:
byte[] decodedString = Base64.decode(decode, Base64.NO_WRAP);
Bitmap decodedByte = BitmapFactory.decodeByteArray(decodedString, 0, decodedString.length);
imageView.setImageBitmap(decodedByte);
but the string is too long, very, very long. I can't use it this way. How can I get a shorter string?

You can't. Images typically contain a lot of data. When you convert that to text as base64 it becomes even bigger (4 characters for every 3 bytes). So yes, that will typically be very long if it's a large image.
You could compress the image more heavily in order to reduce the size, but eventually it will be hard to even recognize as the original image - and may well be quite large even so.
Another way of reducing the size in bytes is to create a smaller image in terms of the number of pixels - for example, shrinking a 1000x1000 image to 100x100... is that an option in your case?
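If so, a rough sketch of that shrink-and-recompress approach (the 200 px target width and quality 50 are only illustrative values, and picturePath is the path from the question) might look like:
Bitmap original = BitmapFactory.decodeFile(picturePath);
int targetWidth = 200;
int targetHeight = original.getHeight() * targetWidth / original.getWidth(); // keep the aspect ratio
Bitmap small = Bitmap.createScaledBitmap(original, targetWidth, targetHeight, true);

ByteArrayOutputStream stream = new ByteArrayOutputStream();
small.compress(Bitmap.CompressFormat.JPEG, 50, stream);    // lower quality means fewer bytes
String shorterString = Base64.encodeToString(stream.toByteArray(), Base64.NO_WRAP);
The string will still grow with the pixel count, so this only helps if a small, low-quality copy is acceptable.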
You haven't given us much context, but could you store the data elsewhere and then just use a URL instead?

I believe the only answer to this is that if you want a shorter string, you should use a smaller image.

Depends on the size of the image. A larger image is gonna yield a larger string. Images contain a lot of data. That is why people usually only do base64 encoding for very small images like icons, etc.
You could try reducing the quality of the JPEG compression, but I doubt you'd save much space. Reducing the dimensions (if possible) of the image would probably save some space. Either way, doing base64 on anything larger than a really small gif or png image is almost always counterproductive.

Related

How to compress an image until a fixed byte size is reached

I have an application that needs to communicate with a server exchanging images via their Base64 representation. Due to server capacity, I can only compress and send images that are < 100KB of size. I can easily retrieve the size of the image using:
File file= new File(path);
long size = file.length() / 1024; // KB
and that displays the exact size. Then I decode it into a Bitmap and compress it using:
int quality= 100;
Bitmap bitmap = BitmapFactory.decodeFile(path);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, quality, baos);
byte[] byteArr = baos.toByteArray();
And here things get dirty. I can't properly retrieve the exact size value as I did before, because if the size is > 100KB then I need to re-compress it adjusting the quality.
EDIT: forgot to mention that I have tried byteArr.length, but the resulting size isn't the same as it was before.
In this example, I tried with an 80 KB image, as shown in the Android Studio console.
You may want to use the Luban library, which accepts a max size (in KB) for compression.
Example (From readme.md):
Luban.compress(context, file)
     .setMaxSize(100)            // limit the final image size (unit: KB)
     .setMaxHeight(1920)         // limit image height
     .setMaxWidth(1080)          // limit image width
     .putGear(Luban.CUSTOM_GEAR) // use CUSTOM_GEAR compression mode
     .asObservable()
However, I strongly suggest that you do not send binary data (such as images) as Base64, since it reduces performance and increases size!
It's better to upload it as binary.
If none of the above solutions suits you, then at least try to implement your method using binary search.
You'll need to loop over your compression step, reducing the quality value on each iteration until the desired size is reached. You can check the size by evaluating byteArr.length, as in the sketch below.
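A minimal sketch of that loop (the 100 KB limit comes from the question; the 10-point quality step is arbitrary):
int quality = 100;
byte[] byteArr;
Bitmap bitmap = BitmapFactory.decodeFile(path);
do {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, quality, baos);
    byteArr = baos.toByteArray();          // size of the compressed image in bytes
    quality -= 10;                         // step size is arbitrary
} while (byteArr.length > 100 * 1024 && quality > 0);
A binary search over the quality range (0-100) would need at most about seven compression passes instead of up to ten.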

Bitmap ByteCount remaining the same after compression

I'm trying to see the results of various compression qualities with something like this:
private static Bitmap codec(Bitmap src, Bitmap.CompressFormat format, int quality) {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    src.compress(format, quality, os);
    byte[] array = os.toByteArray();
    return BitmapFactory.decodeByteArray(array, 0, array.length);
}
I have a Bitmap called bitmap, and I'm comparing it to the compressed version:
Bitmap compressed = codec(bitmap,Bitmap.CompressFormat.JPEG ,10);
Log.d("result","bitmap=" + bitmap.getByteCount() + " compressed=" + compressed.getByteCount());
No matter what photo I select to load into bitmap, the compressed version's byte count remains the same as bitmap's byte count -- though, if I load compressed into an ImageView, the quality is very noticeably lower.
Is the size really staying the same while lowering the visual quality of the image? Am I getting the size of the file incorrectly?
EDIT:
Even stranger, the result size shows 16343040 bytes for an image that says 1.04 MB in the gallery details.
I'm getting the original bitmap through onActivityResult using:
InputStream is = getContentResolver().openInputStream(selectedImageUri);
bitmap = BitmapFactory.decodeStream(is);
is.close();
Where selectedImageUri is either from getData() or the file selected from the device's storage.
By the Android developer reference for Bitmap, getByteCount() returns the minimum number of bytes that can be used to represent the pixels in the image, i.e. the maximally compressed size, even for the uncompressed image! You should use getAllocationByteCount() instead, as it returns the number of bytes the Bitmap is actually using.
Bitmap is an in-memory data structure for displaying images. Your byte[] array will tell you the size as it would be on disk: array.length.
(To be entirely clear: the Bitmap in memory will generally not use more or less memory after such a round trip, unless you switch to another color model, like 256 indexed colors.)
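To see the difference concretely, a small sketch (getAllocationByteCount() needs API 19+; the file path is just an example):
Bitmap bmp = BitmapFactory.decodeFile("/sdcard/test.jpg");   // decoded pixels in memory
ByteArrayOutputStream out = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 10, out);           // compressed, on-disk style representation
Log.d("sizes", "in memory: " + bmp.getAllocationByteCount()
        + " bytes, compressed: " + out.toByteArray().length + " bytes");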
BitmapFactory.decodeByteArray converts a compressed image, into an uncompressed one (RGBA8 format). So, basically, your codec() function is taking in an RGBA8 image, compressing it, then decompressing it back to RGBA8 before returning the result.
This is discussed in Smaller PNG Files; Android doesn't use the compressed images directly. All image data has to be converted from compressed formats to RGBA8 so that the rendering system can use it properly (see Rendering Performance 101).
If you want a smaller in-memory representation of your image, you need to use a different pixel format like RGB565 (which is discussed in Smaller Pixel Formats).
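A sketch of requesting that smaller pixel format when decoding (inPreferredConfig is only a hint, so the framework may still fall back to ARGB_8888):
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;           // 2 bytes per pixel, no alpha channel
Bitmap smaller = BitmapFactory.decodeFile(picturePath, options);
Log.d("size", "bytes in memory: " + smaller.getByteCount());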

PNG File format and Image bytes?

I am currently trying to write a program to encode text into a PNG file, changing only the least significant bit of each byte I use to encode a letter in the picture. For example: I have an 'A', which is 65 (01000001), and I use 8 different bytes to encode the letter. So, given 8 bytes such as
01010100 10101101 11011010 10101010 10110110 01010100 01010100 01010101
I change the last bit of each of them according to the corresponding bit of 65 and put them back together.
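In code, what I have in mind is roughly this (just a sketch; embedByte is a made-up helper name, and it assumes the image has at least 8 pixels and writes into the lowest bit of each packed ARGB value):
static void embedByte(java.awt.image.BufferedImage img, int value) {
    for (int i = 0; i < 8; i++) {
        int bit = (value >> (7 - i)) & 1;     // i-th bit of the character, MSB first
        int x = i % img.getWidth();
        int y = i / img.getWidth();
        int argb = img.getRGB(x, y);
        img.setRGB(x, y, (argb & ~1) | bit);  // overwrite the least significant bit
    }
}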
If I should approach this a different way, suggestions would be awesome :). This is just a fun little project I wanted to do. But anyway, back to my question.
When I read in an image that is only 4 pixels big, I get something like 680 bytes, which is crazy, or at least I think it is; maybe I'm wrong? 4 pixels with ARGB at 8 bits each should be 16 bytes, plus a few bytes, I'm sure, to tell the operating system that it is a PNG and how to handle it. So I was expecting maybe 30 bytes, or less. Am I looking at this the wrong way? When PNG images are compressed, do they become bigger if it is a small picture?
Also, when I was saving it back to the hard drive I always got a larger file. The original picture was 8,554 kb and then it turned into like 16 kb when I saved it back. Here is the code for getting the image bytes and for saving the image. Maybe I am doing something wrong or I am just not understanding it correctly.
These are the ways I get the image (I tried two different things):
// BufferedImage img = ImageIO.read(new File("image.png"));
BufferedImage img= robot.createScreenCapture(new Rectangle(1,2,2,2));
Here is how I saved it, again in two different ways:
try {
    InputStream in = new ByteArrayInputStream(imgBytes);
    BufferedImage bImageFromConvert = ImageIO.read(in);
    ImageIO.write(bImageFromConvert, "png", new File("image.png"));
    //FileOutputStream fos = new FileOutputStream("image.png");
    //fos.write(b);
    //fos.close();
} catch (Exception e) {}
This is how I got the bytes from the image; again, I tried two different ways. The second way, which is commented out, actually did give me the 16 bytes like I want, but when I saved the result Windows couldn't open it because it didn't know what it was, I guess? Not sure, it just said the file was not supported.
byte[] imageBytes = null;
try {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(image, "jpg", baos);
    baos.flush();
    imageBytes = baos.toByteArray();
    baos.close();
} catch (IOException e) {
    System.out.println(e.getMessage());
}
// imageBytes = ((DataBufferByte) image.getData().getDataBuffer()).getData();
return imageBytes;
Thanks!
A PNG consists of a lot of image metadata as well as the raw image data. That is what is giving you the "crazy" 680 bytes.
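If you want to see where those bytes go, here is a small sketch that lists the chunks of a PNG file (8-byte signature, then a length/type/data/CRC record per chunk; the filename is just an example):
import java.io.DataInputStream;
import java.io.FileInputStream;

public class PngChunks {
    public static void main(String[] args) throws Exception {
        try (DataInputStream in = new DataInputStream(new FileInputStream("image.png"))) {
            in.readFully(new byte[8]);                       // fixed 8-byte PNG signature
            while (in.available() > 0) {
                int length = in.readInt();                   // chunk data length (big-endian)
                byte[] type = new byte[4];
                in.readFully(type);                          // chunk type: IHDR, IDAT, tEXt, ...
                System.out.println(new String(type, "US-ASCII") + ": " + length + " bytes");
                in.readFully(new byte[length + 4]);          // skip chunk data plus 4-byte CRC
            }
        }
    }
}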
I had pretty much the same problem with my barcode fonts. I just wanted to encode 5 bits of data into the smallest PNG I could get. What I didn't want to do is write a custom program, based on libpng. I tried quite a few editors and the smallest file size I could get was around 170 bytes.
Finally I found Gimp 2.0. There is a feature you can use to export a PNG file without all the metadata. I also changed to 8-bit grayscale. I think I could shave a couple of bytes off by switching to 2-bit grayscale, but Gimp wouldn't do that for me. In the end, I was happy with ~75 bytes per character.

Saving JPEG files in Android with no loss of pixel information

I'm loading a JPEG file via BitmapFactory and trying to save it again (later I want to do some calculations on the pixel data before saving it again).
But if I try to save it with
FileOutputStream fos = new FileOutputStream(new File("/sdcard/test.jpg"));
originalImage.compress(Bitmap.CompressFormat.JPEG, 100, fos);
then the result is not exactly the same as the original picture. Some pixels have different color values, and this is not useful for my later calculations.
Is there a possibility to save it losslessly? Or is the problem already introduced when I load the picture with
Bitmap originalImage = BitmapFactory.decodeFile("/sdcard/input.jpg");
a few lines earlier?
Is there a possibility to save it losslessly?
No. The JPEG format uses a lossy compression. It makes no formal guarantees even if you set the quality to 100.
Or is the problem already introduced when I load the picture with [...]
No, bitmaps are... maps of bits, i.e. they represent the exact bits of the image data.
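A quick way to see the loss described above (a sketch, reusing originalImage from the question):
ByteArrayOutputStream out = new ByteArrayOutputStream();
originalImage.compress(Bitmap.CompressFormat.JPEG, 100, out);          // even quality 100 is lossy
byte[] jpeg = out.toByteArray();
Bitmap roundTripped = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
Log.d("jpeg", "pixel(0,0) before=" + Integer.toHexString(originalImage.getPixel(0, 0))
        + " after=" + Integer.toHexString(roundTripped.getPixel(0, 0)));
For many photos the two values will already differ at the first pixel; if not, compare a few more.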

Failing to reduce image size using the Google App Engine Images Java API

I want to reduce image size (in KB) when its size is larger than 1MB.
When I apply the resize transformation with a smaller width and smaller height, the size of the transformed image (in bytes) is larger than the original image.
The funny (or sad) part is that even when I invoke the resize with the same width and height as the original (i.e. the dimensions are not changed), the size of the "transformed" image is larger than the original:
final byte[] origData = .....;
final ImagesService imagesService = ImagesServiceFactory.getImagesService();
final Image origImage = ImagesServiceFactory.makeImage(origData);
System.out.println("orig dimensions is " + origImage.getWidth() + " X " + origImage.getHeight());
final Transform resize = ImagesServiceFactory.makeResize(origImage.getWidth(), origImage.getHeight());
final Image newImage = imagesService.applyTransform(resize, origImage);
final byte[] newImageData = newImage.getImageData();
//newImageData.length > origData.length :-(
Image coding has some special characteristics that explain what you are observing. As you decode an image from its (file) representation, you generate a lot of pixels. The subsequent encoding only sees the pixels and does not know anything about the size of your original file. Therefore the encoding step is crucial to get right.
The common JPEG format has different compression levels, i.e. a quality setting; it can have this because its compression is lossy. (PNG also has compression levels, but they are lossless and only trade encoding effort for size.) In general, images with a lot of detail (sharp edges) should be compressed with high quality and blurry images with low quality; as you have probably seen, small images are usually more blurry and large images usually sharper.
Without going into the technical details, this means that you should set the quality level according to the nature of your image, which is also suggested by the size of the input file. In other words, if you encode a blurry image as a big file, you are wasting space, since you would get about the same result using fewer bytes. But the encoder does not have this information, so you have to configure it with the correct quality setting.
Edit: In your case, manually set a low quality for encoding if you started with a small file (compared to the number of pixels), and of course a high quality if the opposite is true. Do some experimentation; probably a single quality setting for all photos will be acceptable.
A pitfall I fell into was that I requested PNG output ... and the image size didn't change either. The image service silently ignored the quality parameter. According to a comment in the implementation, the quality parameter is considered only for JPEG.
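For completeness, a hedged sketch of passing an explicit JPEG quality through OutputSettings (check the current Images API docs; the dimensions and quality value here are only examples):
ImagesService imagesService = ImagesServiceFactory.getImagesService();
Image origImage = ImagesServiceFactory.makeImage(origData);
Transform resize = ImagesServiceFactory.makeResize(800, 600);            // example dimensions

OutputSettings settings = new OutputSettings(ImagesService.OutputEncoding.JPEG);
settings.setQuality(60);                                                 // 0-100, illustrative value
Image newImage = imagesService.applyTransform(resize, origImage, settings);
byte[] newImageData = newImage.getImageData();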
