How to get java wrapper for libjpeg-turbo to actually compress? - java

I'm having trouble getting libjpeg-turbo in my Java project to actually compress an image. It writes a .jpg fine, but the final size of the result is almost the same as a 24-bit Windows .bmp of the same image. A 480x854 image turns into a 1.2 MB jpeg with the below code snippet. If I use GRAY sampling it's 800 KB (and these are not fancy images to begin with - mostly a neutral background with some filled primary-color discs on them for a game I'm working on).
Here's the code I've got so far:
// for some byte[] src in RGB888 format, representing an image of dimensions
// 'width' and 'height'
try
{
TJCompressor tjc = new TJCompressor(
src,
width,
0, // "pitch" - scanline size
height,
TJ.PF_RGB // format
);
tjc.setJPEGQuality(75);
tjc.setSubsamp(TJ.SAMP_420);
byte[] jpg_data = tjc.compress(0);
new java.io.FileOutputStream(new java.io.File("/tmp/dump.jpg")).write(jpg_data, 0, jpg_data.length);
}
catch(Exception e)
{
e.printStackTrace(System.err);
}
I'm particularly having a hard time finding sample Java usage documentation for this project; it mostly assumes a C background. I don't understand the flags to pass to compress (nor do I really know the internals of the JPEG standard, nor do I want to :)!
Thanks!

Doh! And within 5 minutes of posting the question the answer hit me.
A hexdump of the result showed that the end of the file for these images was just lots and lots of 0s.
For anybody in a similar situation in the future: instead of using jpg_data.length (the buffer returned by compress() is allocated for the worst case - roughly the size of the uncompressed image, which is why a 480x854 image came out at about 480 x 854 x 3 ≈ 1.2 MB), use TJCompressor.getCompressedSize() immediately after your call to TJCompressor.compress().
Final result becomes:
// for some byte[] src in RGB format, representing an image of dimensions
// 'width' and 'height'
try
{
TJCompressor tjc = new TJCompressor(
src,
width,
0, // "pitch" - scanline size
height,
TJ.PF_RGB // format
);
tjc.setJPEGQuality(75);
tjc.setSubsamp(TJ.SAMP_420);
byte[] jpg_data = tjc.compress(0);
int actual_size = tjc.getCompressedSize();
new java.io.FileOutputStream(new java.io.File("/tmp/dump.jpg")).
write(jpg_data, 0, actual_size);
}
catch(Exception e)
{
e.printStackTrace(System.err);
}
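For reference, here is the same fix as a minimal sketch with the stream and compressor closed properly. It assumes TJCompressor implements Closeable (true in recent libjpeg-turbo releases); otherwise call close() in a finally block:
// minimal sketch: same fix, with resources closed via try-with-resources
try (java.io.FileOutputStream fos = new java.io.FileOutputStream("/tmp/dump.jpg")) {
    TJCompressor tjc = new TJCompressor(src, width, 0, height, TJ.PF_RGB);
    tjc.setJPEGQuality(75);
    tjc.setSubsamp(TJ.SAMP_420);
    byte[] jpg_data = tjc.compress(0); // buffer is worst-case sized
    fos.write(jpg_data, 0, tjc.getCompressedSize()); // write only the real payload
    tjc.close();
}
catch (Exception e)
{
    e.printStackTrace(System.err);
}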

Related

Glide load into SimpleTarget<Bitmap> not honoring the specified width and height

I'm using Glide to load an image, resize it, and save it to a file by means of a SimpleTarget<Bitmap>. These images will be uploaded to Amazon S3, but that's beside the point. I'm resizing the images prior to uploading to save as much of the user's bandwidth as possible. For my app's needs a 1024-pixel-wide image is more than enough, so I'm using the following code to accomplish that:
final String to = getMyImageUrl();
final Context appCtx = context.getApplicationContext();
Glide.with(appCtx)
.load(sourceImageUri)
.asBitmap()
.into(new SimpleTarget<Bitmap>(1024, 768) {
@Override
public void onResourceReady(Bitmap resource, GlideAnimation<? super Bitmap> glideAnimation) {
try {
FileOutputStream out = new FileOutputStream(to);
resource.compress(Bitmap.CompressFormat.JPEG, 70, out);
out.flush();
out.close();
MediaScannerConnection.scanFile(appCtx, new String[]{to}, null, null);
} catch (IOException e) {
e.printStackTrace();
}
}
});
It works almost perfectly, but the size of the resulting image is not 1024 pixels wide. Testing with a source image of 4160 x 2340 pixels, the saved image comes out at 2080 x 1170 pixels.
I've tried playing with the width and height parameters passed to new SimpleTarget<Bitmap>(350, 350), and with those parameters the resulting image dimensions are 1040 x 585 pixels.
I really don't know what to do to make Glide respect the passed dimensions. In fact I'd like to resize the image proportionally, so that the bigger dimension (either width or height) is restricted to 1024 pixels and the smaller one resized accordingly (I believe I'll have to find a way to get the original image dimensions and then pass the width and height to SimpleTarget, but to do that I need Glide to respect the passed width and height!).
Does anyone have a clue what's going on? I'm using Glide 3.7.0.
Since this question itself may be useful for people trying to use Glide to resize and save images, I believe it is in everyone's interest to provide my actual "solution", which relies on a new SimpleTarget implementation that automatically saves the resized image:
import android.graphics.Bitmap;
import com.bumptech.glide.request.animation.GlideAnimation;
import com.bumptech.glide.request.target.SimpleTarget;
import java.io.FileOutputStream;
import java.io.IOException;
public class FileTarget extends SimpleTarget<Bitmap> {
public FileTarget(String fileName, int width, int height) {
this(fileName, width, height, Bitmap.CompressFormat.JPEG, 70);
}
public FileTarget(String fileName, int width, int height, Bitmap.CompressFormat format, int quality) {
super(width, height);
this.fileName = fileName;
this.format = format;
this.quality = quality;
}
String fileName;
Bitmap.CompressFormat format;
int quality;
@Override
public void onResourceReady(Bitmap bitmap, GlideAnimation anim) {
try {
FileOutputStream out = new FileOutputStream(fileName);
bitmap.compress(format, quality, out);
out.flush();
out.close();
onFileSaved();
} catch (IOException e) {
e.printStackTrace();
onSaveException(e);
}
}
public void onFileSaved() {
// do nothing, should be overridden (optional)
}
public void onSaveException(Exception e) {
// do nothing, should be overridden (optional)
}
}
Using it is as simple as:
Glide.with(appCtx)
.load(sourceImageUri)
.asBitmap()
.into(new FileTarget(to, 1024, 768) {
@Override
public void onFileSaved() {
// do anything, or omit this override if you want
}
});
After a good night's sleep I just figured it out! I had stumbled on an issue in Glide's GitHub page which had the answer, but I didn't realize it: I missed something in the explanation, which I fully understood only after resting for 10 hours. You should never underestimate the power of sleep! But I digress. Here is the answer, found on Glide's GitHub issue tracker:
Sizing the image usually has two phases:
Decoding/Downsampler read image from stream with inSampleSize
Transforming/BitmapTransformation take the Bitmap and match the exact
target size
The decoding is always needed and is included in the flow,
the default case is to match the target size with the "at least"
downsampler, so when it comes to the transformation the image can be
downsized more without quality loss (each pixel in the source will
match at least 1.0 pixels and at most ~1.999 pixels) this can be
controlled by asBitmap().atLeast()|atMost()|asIs()|decoder() (with a
downsampler)
The transformation and target size is automatic by default, but only
when using a ViewTarget. When you load into an ImageView the size of
that will be detected even when it has match_parent. Also if there's
no explicit transformation there'll be one applied from scaleType.
This results in a pixel-perfect Bitmap for that image, meaning 1 pixel
in Bitmap = 1 pixel on screen resulting in the best possible quality
with the best memory usage and fast rendering (because there's no
pixel mapping needed when drawing the image).
With a SimpleTarget you take on these responsibilities by providing a
size on the constructor or via override() or implementing getSize if
the sizing info is async-ly available only.
To fix your load add a transformation: .fitCenter|centerCrop(), your
current applied transformation is .dontTransform()
(Answer by Róbert Papp)
I got confused by this answer because of this:
With a SimpleTarget you take on these responsibilities by providing a
size on the constructor or via override() or implementing getSize if
the sizing info is async-ly available only.
Since I was passing the size, I thought I had this covered already and that the size should have been respected. I missed this important concept:
Decoding/Downsampler read image from stream with inSampleSize
Transforming/BitmapTransformation take the Bitmap and match the exact
target size
And this:
To fix your load add a transformation: .fitCenter|centerCrop(), your
current applied transformation is .dontTransform()
Now that I pieced it together it makes sense. Glide was only downsampling the image (the first step in the sizing flow, as explained by Róbert), which gives an image with approximate dimensions. Glide is very smart in this respect: by downsampling prior to resizing, it avoids working with unnecessarily large bitmaps in memory, and it improves the resizing quality, because downsampling straight to the exact size would compromise too many "important" pixels!
Since I didn't have any transformation applied to this loading pipeline, the sizing flow stopped in this first step (downsampling), and the resulting image had only approximate dimensions to my expected target size.
To solve it I just applied a .fitCenter() transform, as shown below:
Glide.with(appCtx)
.load(sourceImageUri)
.asBitmap()
.fitCenter()
.into(new FileTarget(to, 1024, 768) {
@Override
public void onFileSaved() {
// do anything, or omit this override if you want
}
});
The resulting image now has dimensions of 1024 x 576 pixels, which is exactly what I expected.
Glide is a very cool library!

Yuv (NV21) image converting to bitmap [duplicate]

This question already has answers here:
Convert NV21 byte array into bitmap readable format [duplicate]
(2 answers)
Closed 5 years ago.
I am trying to capture images from the camera preview and do some drawing on them. The problem is, I get only about 3-4 fps of drawing, and half of the frame processing time is spent receiving and decoding the NV21 image from the camera preview and converting it to a bitmap. I have code to do this task, which I found in another Stack Overflow question. It does not seem to be fast, but I do not know how to optimize it. It takes about 100-150 ms on a Samsung Note 3, image size 1920x1080. How can I make it work faster?
Code :
public Bitmap curFrameImage(byte[] data, Camera camera)
{
Camera.Parameters parameters = camera.getParameters();
int imageFormat = parameters.getPreviewFormat();
if (imageFormat == ImageFormat.NV21)
{
YuvImage img = new YuvImage(data, ImageFormat.NV21, prevSizeW, prevSizeH, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
img.compressToJpeg(new android.graphics.Rect(0, 0, img.getWidth(), img.getHeight()), 50, out);
byte[] imageBytes = out.toByteArray();
return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
}
else
{
Log.i(TAG, "Preview image not NV21");
return null;
}
}
The final format of the image has to be a bitmap, so I can then do processing on it. I've tried to set Camera.Parameters.setPreviewFormat to RGB_565, but could not assign the parameters back to the camera; I've also read that NV21 is the only format guaranteed to be available. I am not sure about that, or whether a solution can be found in these format changes.
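(For reference, the formats a device actually supports can be queried before calling setPreviewFormat(); a hedged sketch using the same deprecated android.hardware.Camera API as above:)
// Sketch: list the preview formats the device reports, then pick NV21.
Camera.Parameters params = camera.getParameters();
for (Integer fmt : params.getSupportedPreviewFormats()) {
    Log.i(TAG, "Supported preview format: " + fmt); // ImageFormat.NV21 == 17
}
if (params.getSupportedPreviewFormats().contains(ImageFormat.NV21)) {
    params.setPreviewFormat(ImageFormat.NV21); // the only format guaranteed on all devices
    camera.setParameters(params);
}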
Thank you in advance.
Thank you, Alex Cohn, for helping me make this conversion faster. I implemented your suggested methods (RenderScript intrinsics). This code, built on RenderScript intrinsics, converts a YUV image to a bitmap about ~5 times faster. The previous code took 100-150 ms on a Samsung Note 3; this takes 15-30 ms or so. If someone needs to do the same task, here is the code:
These will be used:
private RenderScript rs;
private ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic;
private Type.Builder yuvType, rgbaType;
private Allocation in, out;
In the onCreate() function I initialize:
rs = RenderScript.create(this);
yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
And the whole onPreviewFrame looks like this (here I receive and convert the image):
if (yuvType == null)
{
yuvType = new Type.Builder(rs, Element.U8(rs)).setX(dataLength);
in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(prevSizeW).setY(prevSizeH);
out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);
}
in.copyFrom(data);
yuvToRgbIntrinsic.setInput(in);
yuvToRgbIntrinsic.forEach(out);
Bitmap bmpout = Bitmap.createBitmap(prevSizeW, prevSizeH, Bitmap.Config.ARGB_8888);
out.copyTo(bmpout);
You can get even more speed (using Jelly Bean 4.3, API 18 or higher):
The camera preview mode must be NV21!
In onPreviewFrame() do only:
aIn.copyFrom(data);
yuvToRgbIntrinsic.forEach(aOut);
aOut.copyTo(bmpout); // and of course, show the bitmap or whatever
Do not create any objects here. Do all the other setup (creating rs, yuvToRgbIntrinsic, the allocations, and the bitmap) in the onCreate() method, before starting the camera preview.
rs = RenderScript.create(this);
yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
// you don't need Type.Builder objects; with cameraPreviewWidth and cameraPreviewHeight do:
int yuvDatalength = cameraPreviewWidth*cameraPreviewHeight*3/2; // this is 12 bit per pixel
aIn = Allocation.createSized(rs, Element.U8(rs), yuvDatalength);
// of course you will need the Bitmap
bmpout = Bitmap.createBitmap(cameraPreviewWidth, cameraPreviewHeight, Bitmap.Config.ARGB_8888);
// create output allocation from bitmap
aOut = Allocation.createFromBitmap(rs,bmpout); // this simple !
// set the script's in-allocation; this has to be done only once
yuvToRgbIntrinsic.setInput(aIn);
On Nexus 7 (2013, JellyBean 4.3) a full HD (1920x1080) camera preview conversion takes about 0.007 s (YES, 7 ms).
Using OpenCV-JNI to construct a Mat from NV21 data for a 4160x3120 image seems about 2x faster (38 ms) than RenderScript (68 ms, excluding initialization time). If we need to downsize the constructed bitmap, OpenCV-JNI seems the better approach, since we would use the full size only for the Y data; the CbCr data would be downsized at the time of OpenCV Mat construction.
A still better way is passing the NV21 byte array and an int pixel array to JNI; an array copy may not be needed on the JNI side. Then use the open-source libyuv library (https://chromium.googlesource.com/libyuv/libyuv/) to convert NV21 to ARGB. In Java, we use the passed pixel array to construct the bitmap. In JNI, the conversion from NV21 to ARGB takes only ~4 ms for a 4160x3120 byte array on an arm64 platform.
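The Java side of that approach might look like the sketch below; the native method and library names are assumptions (the C side would wrap libyuv's NV21ToARGB()):
import android.graphics.Bitmap;

public class YuvConverter {
    static { System.loadLibrary("yuvconverter"); } // hypothetical .so name

    // Hypothetical native method: fills 'pixels' (width*height ints) with ARGB values.
    public static native void nv21ToArgb(byte[] nv21, int[] pixels, int width, int height);

    public static Bitmap toBitmap(byte[] nv21, int width, int height) {
        int[] pixels = new int[width * height];
        nv21ToArgb(nv21, pixels, width, height); // no array copy needed on the JNI side
        return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
    }
}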

Why is JPG altered on retrieval? - Java

I am trying to retrieve a JPG image as a BufferedImage, decompose it into a 3D array [RED][GREEN][BLUE], then turn it back into a BufferedImage and store it under a different file name. All looks fine to me, BUT when I reload the 3D array from the newly created file I get different RGB values, although the new image looks fine to the naked eye. I did the following.
BufferedImage bi = ImageIO.read(new File("old.jpg"));
int[][][] one = getArray(bi);
save("kite.jpg", one);
BufferedImage bi2 = ImageIO.read(new File("new.jpg"));
int[][][] two = getArray(bi2);
private void save(String destination, int[][][] in) {
try {
BufferedImage out = new BufferedImage(in.length, in[0].length, BufferedImage.TYPE_3BYTE_BGR);
for (int x=0; x<out.getWidth(); x++) {
for (int y = 0; y < out.getHeight(); y++) {
out.setRGB(x, y, new Color(in[x][y][0], in[x][y][1], in[x][y][2]).getRGB());
}
}
File f = new File(destination);
ImageIO.write(out, "JPEG", f);
} catch (IOException e) {
System.out.println(e.getMessage());
}
}
so in the example above the values that arrays one and two are holding are different.
I am guessing it has something to do with the different ways of retrieving and restoring images? I have been trying to figure out what is going on all day, but with no luck. Any help appreciated.
Pretty simple:
JPEG is a commonly used method of lossy compression for digital images
(from wikipedia).
Each time the image is compressed, it is altered to reduce file size. In fact, repeating the decompress/compress cycle several hundred times alters the image to the point where, in most cases, the entire image turns into a plain gray area. There are a few operation modes that are lossless, but most will alter the image.
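You can see this generation loss for yourself by re-encoding the same image in a loop and watching the pixel values drift; a minimal sketch with javax.imageio (file names are placeholders):
import java.io.File;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class GenerationLoss {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("old.jpg"));
        int before = img.getRGB(0, 0);
        for (int i = 0; i < 100; i++) {
            File tmp = new File("gen.jpg");
            ImageIO.write(img, "JPEG", tmp); // lossy encode
            img = ImageIO.read(tmp);         // decode what was actually stored
        }
        System.out.println("pixel (0,0) before: " + Integer.toHexString(before)
                + ", after 100 generations: " + Integer.toHexString(img.getRGB(0, 0)));
    }
}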

Android add a watermark logo to very large jpg file (say 10000 x 150000) [duplicate]

This question already has an answer here:
Load large picture from file and add watermark
(1 answer)
Closed 8 years ago.
I have a large JPEG file, say 10000 x 150000 px. I want to add a small logo to the bottom of the image without resizing.
I am able to do this if I downsample the original image and draw the logo using a Canvas. But when I finally save it to a file, the original image size is reduced because of the downsampling.
If I load the original image into a bitmap without downsampling, it exceeds the VM memory limit.
The code below works for me:
public static Bitmap mark(Bitmap src, String watermark, Point location, int color, int alpha, int size, boolean underline) {
int w = src.getWidth();
int h = src.getHeight();
Bitmap result = Bitmap.createBitmap(w, h, src.getConfig());
Canvas canvas = new Canvas(result);
canvas.drawBitmap(src, 0, 0, null);
Paint paint = new Paint();
paint.setColor(color);
paint.setAlpha(alpha);
paint.setTextSize(size);
paint.setAntiAlias(true);
paint.setUnderlineText(underline);
canvas.drawText(watermark, location.x, location.y, paint);
return result;
}
For large image editing you'll need to use native tools like ImageMagick, because there seems to be a lack of advanced image-processing libraries in Android-supported Java.
If you can compile the Composite tool's binaries for Android, you can use them with the -limit option to work within limited memory.
Also, you can try OpenCV as an alternative.
You can use BitmapRegionDecoder when dealing with large image files. From the official documentation:
BitmapRegionDecoder can be used to decode a rectangle region from an image. BitmapRegionDecoder is particularly useful when an original image is large and you only need parts of the image.
To create a BitmapRegionDecoder, call newInstance(...). Given a BitmapRegionDecoder, users can call decodeRegion() repeatedly to get a decoded Bitmap of the specified region.
Just decode the part of your image that needs the watermark, then use a Canvas to draw text on it.
try {
    BitmapRegionDecoder regionDecoder = BitmapRegionDecoder.newInstance("/sdcard/test.png", true);
    Rect rect = new Rect(0, 0, 1024, 768); // the region to decode
    BitmapFactory.Options options = new BitmapFactory.Options(); // defaults are fine here
    Bitmap bitmap = regionDecoder.decodeRegion(rect, options);
} catch (IOException e) {
    e.printStackTrace();
}
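Putting both steps together, here is a sketch of decoding just the bottom strip of a huge image and watermarking it (paths, strip height, and text are placeholders; the region is copied to a mutable bitmap so a Canvas can draw on it):
try {
    BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("/sdcard/huge.jpg", false);
    int w = decoder.getWidth();
    int h = decoder.getHeight();
    // decode only the bottom 400-pixel strip
    Bitmap strip = decoder.decodeRegion(new Rect(0, h - 400, w, h), null);
    decoder.recycle();
    // copy to a mutable bitmap so we can draw on it
    Bitmap mutableStrip = strip.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(mutableStrip);
    Paint paint = new Paint();
    paint.setColor(Color.WHITE);
    paint.setAlpha(128);
    paint.setTextSize(48);
    paint.setAntiAlias(true);
    canvas.drawText("watermark", 20, mutableStrip.getHeight() - 20, paint);
    // the watermarked strip still has to be merged back into the original,
    // e.g. by re-encoding the image strip by strip
} catch (IOException e) {
    e.printStackTrace();
}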

How to use ScriptIntrinsic3DLUT with a .cube file?

First, I'm new to image processing in Android. I have a .cube file that was "Generated by Resolve", with LUT_3D_SIZE 33. I'm trying to use android.support.v8.renderscript.ScriptIntrinsic3DLUT to apply the lookup table to process an image. I assume that I should use ScriptIntrinsic3DLUT and NOT android.support.v8.renderscript.ScriptIntrinsicLUT, correct?
I'm having problems finding sample code to do this so this is what I've pieced together so far. The issue I'm having is how to create an Allocation based on my .cube file?
...
final RenderScript renderScript = RenderScript.create(getApplicationContext());
final ScriptIntrinsic3DLUT scriptIntrinsic3DLUT = ScriptIntrinsic3DLUT.create(renderScript, Element.U8_4(renderScript));
// How to create an Allocation from .cube file?
//final Allocation allocationLut = Allocation.createXXX();
scriptIntrinsic3DLUT.setLUT(allocationLut);
Bitmap bitmapIn = selectedImage;
Bitmap bitmapOut = selectedImage.copy(bitmapIn.getConfig(),true);
Allocation aIn = Allocation.createFromBitmap(renderScript, bitmapIn);
Allocation aOut = Allocation.createTyped(renderScript, aIn.getType());
scriptIntrinsic3DLUT.forEach(aIn, aOut); // without this the output is never written
aOut.copyTo(bitmapOut);
imageView.setImageBitmap(bitmapOut);
...
Any thoughts?
Parsing the .cube file
First, what you should do is to parse the .cube file.
OpenColorIO shows how to do this in C++. It has some ways to parse the LUT files like .cube, .lut, etc.
For example, FileFormatIridasCube.cpp shows how to process a .cube file. You can easily get the size through LUT_3D_SIZE. I have contacted an image processing algorithm engineer.
This is what he said:
Generally in the industry a 17^3 cube is considered preview, 33^3 normal and 65^3 for highest quality output.
Note that a .cube file gives us 3*LUT_3D_SIZE^3 floats.
The key point is what to do with this float array: we cannot set it on the cube in ScriptIntrinsic3DLUT through the Allocation as-is.
Before doing that, we need to process the float array.
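A minimal parser sketch for the Resolve-style layout (a LUT_3D_SIZE header followed by data lines of three floats; TITLE, DOMAIN_MIN/MAX, and # comment lines are skipped; how to interpret each line's floats is covered in the next section):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CubeLut {
    public int size;     // LUT_3D_SIZE
    public float[] data; // 3 * size^3 floats, in file order

    public static CubeLut parse(String path) throws IOException {
        CubeLut lut = new CubeLut();
        List<Float> values = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = br.readLine()) != null) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) continue;
                if (line.startsWith("LUT_3D_SIZE")) {
                    lut.size = Integer.parseInt(line.split("\\s+")[1]);
                } else if (line.matches("^[-0-9.].*")) { // a data line: three floats
                    for (String p : line.split("\\s+")) values.add(Float.parseFloat(p));
                } // other header lines (TITLE, DOMAIN_MIN/MAX, ...) are skipped
            }
        }
        lut.data = new float[values.size()]; // expect 3 * size^3
        for (int i = 0; i < lut.data.length; i++) lut.data[i] = values.get(i);
        return lut;
    }
}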
Handle the data in .cube file
As we know, at 8-bit depth each RGB component is an 8-bit int. R sits in the high 8 bits, G in the middle, and B in the low 8 bits, so a 24-bit int can hold all three components at once.
In a .cube file, each data line contains 3 floats.
Please note: the blue component goes first!!!
I reached this conclusion through trial and error. (Or perhaps someone can give a more accurate explanation.)
Each float represents the coefficient of the component according to 255. Therefore, we need to calculate the real
value with these three components:
int getRGBColorValue(float b, float g, float r) {
    int bcol = (int) (255 * clamp(b, 0.f, 1.f));
    int gcol = (int) (255 * clamp(g, 0.f, 1.f));
    int rcol = (int) (255 * clamp(r, 0.f, 1.f));
    return bcol | (gcol << 8) | (rcol << 16);
}

float clamp(float v, float lo, float hi) { // helper: keep v within [lo, hi]
    return Math.max(lo, Math.min(hi, v));
}
So we can get an integer from each data line, which contains 3 floats. And finally we get the integer array, whose length is LUT_3D_SIZE^3; this is the array to apply to the cube, as in the sketch below.
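Tying the pieces together, a sketch of building that int[] from the parsed floats (CubeLut is the parser sketch above; the floats are read in the blue-first order noted earlier, and the path is a placeholder):
// (IOException handling omitted in this sketch)
CubeLut cube = CubeLut.parse("/sdcard/resolve.cube"); // hypothetical path
int n = cube.size;
int[] lut = new int[n * n * n];
for (int i = 0; i < lut.length; i++) {
    float b = cube.data[3 * i];     // blue goes first, per the note above
    float g = cube.data[3 * i + 1];
    float r = cube.data[3 * i + 2];
    lut[i] = getRGBColorValue(b, g, r); // packs B low, G middle, R high
}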
ScriptIntrinsic3DLUT
RsLutDemo shows how to apply ScriptIntrinsic3DLUT.
RenderScript mRs;
Bitmap mBitmap;
Bitmap mLutBitmap;
ScriptIntrinsic3DLUT mScriptlut;
Bitmap mOutputBitmap;
Allocation mAllocIn;
Allocation mAllocOut;
Allocation mAllocCube;
...
int redDim, greenDim, blueDim;
int[] lut;
if (mScriptlut == null) {
mScriptlut = ScriptIntrinsic3DLUT.create(mRs, Element.U8_4(mRs));
}
if (mBitmap == null) {
mBitmap = BitmapFactory.decodeResource(getResources(),
R.drawable.bugs);
mOutputBitmap = Bitmap.createBitmap(mBitmap.getWidth(), mBitmap.getHeight(), mBitmap.getConfig());
mAllocIn = Allocation.createFromBitmap(mRs, mBitmap);
mAllocOut = Allocation.createFromBitmap(mRs, mOutputBitmap);
}
...
// get the expected lut[] from .cube file.
...
Type.Builder tb = new Type.Builder(mRs, Element.U8_4(mRs));
tb.setX(redDim).setY(greenDim).setZ(blueDim);
Type t = tb.create();
mAllocCube = Allocation.createTyped(mRs, t);
mAllocCube.copyFromUnchecked(lut);
mScriptlut.setLUT(mAllocCube);
mScriptlut.forEach(mAllocIn, mAllocOut);
mAllocOut.copyTo(mOutputBitmap);
Demo
I have finished a demo to show the work.
You can view it on Github.
Thanks.
With a 3D LUT, yes, you have to use the core framework version, as there is no support library version of ScriptIntrinsic3DLUT at this time. Your 3D LUT allocation would have to be created by parsing the file appropriately; there is no built-in support for .cube files (or any other 3D LUT format).
