Converting a YUV (NV21) image to a bitmap [duplicate] - java

I am trying to capture images from the camera preview and do some drawing on them. The problem is that I get only about 3-4 fps of drawing, and half of the frame processing time is spent receiving the NV21 image from the camera preview, decoding it, and converting it to a bitmap. I have code for this task, which I found in another Stack Overflow question. It does not seem to be fast, but I do not know how to optimize it. It takes about 100-150 ms on a Samsung Note 3 for a 1920x1080 image. How can I make it faster?
Code :
public Bitmap curFrameImage(byte[] data, Camera camera)
{
    Camera.Parameters parameters = camera.getParameters();
    int imageFormat = parameters.getPreviewFormat();
    if (imageFormat == ImageFormat.NV21)
    {
        YuvImage img = new YuvImage(data, ImageFormat.NV21, prevSizeW, prevSizeH, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        img.compressToJpeg(new android.graphics.Rect(0, 0, img.getWidth(), img.getHeight()), 50, out);
        byte[] imageBytes = out.toByteArray();
        return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    }
    else
    {
        Log.i(TAG, "Preview image not NV21");
        return null;
    }
}
The final format of the image has to be a bitmap, so that I can then process it. I've tried to set Camera.Parameters.setPreviewFormat to RGB_565, but could not assign the parameters back to the camera; I've also read that NV21 is the only format available on many devices. I am not sure about that, or whether a solution can be found by changing formats.
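For what it's worth, checking the supported formats first shows why the assignment fails (a sketch against the legacy android.hardware.Camera API; on many devices only NV21, and sometimes YV12, is reported):
Camera.Parameters params = camera.getParameters();
List<Integer> supported = params.getSupportedPreviewFormats();
if (supported.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    camera.setParameters(params);  // throws RuntimeException if the driver rejects it
} else {
    Log.i(TAG, "RGB_565 preview not supported, staying with NV21");
}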
Thank you in advance.

Thank you, Alex Cohn, for helping me make this conversion faster. I implemented your suggested method (RenderScript intrinsics). This code, built on ScriptIntrinsicYuvToRGB, converts the YUV image to a bitmap about 5 times faster: the previous code took 100-150 ms on a Samsung Note 3, and this takes about 15-30 ms. If someone needs to do the same task, here is the code:
These will be used:
private RenderScript rs;
private ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic;
private Type.Builder yuvType, rgbaType;
private Allocation in, out;
In the onCreate function I initialize:
rs = RenderScript.create(this);
yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
And the whole onPreviewFrame looks like this (here I receive and convert the image):
if (yuvType == null)
{
    yuvType = new Type.Builder(rs, Element.U8(rs)).setX(dataLength);
    in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(prevSizeW).setY(prevSizeH);
    out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);
}

in.copyFrom(data);
yuvToRgbIntrinsic.setInput(in);
yuvToRgbIntrinsic.forEach(out);
Bitmap bmpout = Bitmap.createBitmap(prevSizeW, prevSizeH, Bitmap.Config.ARGB_8888);
out.copyTo(bmpout);

You can get even more speed (on JellyBean 4.3, API 18 or higher):
The camera preview format must be NV21!
In onPreviewFrame() do only:
aIn.copyFrom(data);
yuvToRgbIntrinsic.forEach(aOut);
aOut.copyTo(bmpout); // and of course, show the bitmap or whatever
Do not create any objects here.
Do all the other setup (creating rs, yuvToRgbIntrinsic, the allocations, and the bitmap)
in the onCreate() method, before starting the camera preview.
rs = RenderScript.create(this);
yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// you don't need Type.Builder objects; with cameraPreviewWidth and cameraPreviewHeight do:
int yuvDatalength = cameraPreviewWidth * cameraPreviewHeight * 3 / 2;  // NV21 is 12 bits per pixel
aIn = Allocation.createSized(rs, Element.U8(rs), yuvDatalength);

// of course you will need the Bitmap
bmpout = Bitmap.createBitmap(cameraPreviewWidth, cameraPreviewHeight, Bitmap.Config.ARGB_8888);

// create the output allocation directly from the bitmap
aOut = Allocation.createFromBitmap(rs, bmpout);  // it is that simple!

// set the script's input allocation; this has to be done only once
yuvToRgbIntrinsic.setInput(aIn);
On Nexus 7 (2013, JellyBean 4.3) a full HD (1920x1080) camera preview conversion takes about 0.007 s (YES, 7 ms).

Using OpenCV-JNI to construct a Mat from NV21 data for a 4160x3120 image seems about 2x faster (38 ms) than RenderScript (68 ms, excluding initialization time). If we need to downsize the constructed bitmap, OpenCV-JNI seems the better approach, since we would use the full resolution only for the Y data; the CbCr data would be resized to match the downsized Y data at Mat construction time.
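Roughly, the Java-side equivalent of that OpenCV conversion looks like this (a sketch; the timings quoted above were measured with a JNI implementation, and this assumes the OpenCV Android SDK is loaded):
// Wrap the NV21 bytes in a single-channel Mat: height + height/2 rows of width columns
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, nv21Data);
Mat rgba = new Mat();
Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgba, bmp);  // org.opencv.android.Utils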
A still better way is to pass the NV21 byte array and an int pixel array to JNI, so no array copy is needed on the JNI side. Then use the open-source libyuv library (https://chromium.googlesource.com/libyuv/libyuv/) to convert NV21 to ARGB. In Java, we use the filled pixel array to construct the bitmap. In JNI, the NV21-to-ARGB conversion takes only ~4 ms for a 4160x3120 byte array on an arm64 platform.
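The Java side of that approach might look like this (a sketch; convertNV21ToARGB is a hypothetical JNI wrapper you would write yourself around libyuv's NV21ToARGB()):
// Hypothetical JNI wrapper around libyuv::NV21ToARGB(); implemented in your own .so
private static native void convertNV21ToARGB(byte[] nv21, int[] argbOut, int width, int height);

// Reuse the pixel array across frames, then build the bitmap in Java
int[] pixels = new int[width * height];
convertNV21ToARGB(nv21Data, pixels, width, height);
Bitmap bmp = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);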

Related

Android add a watermark logo to very large jpg file (say 10000 x 150000) [duplicate]

I have a large jpeg file, say 10000 x 150000 px. I want to add a small logo to the bottom of the image without resizing.
I am able to do this if I downsample the original image and draw the logo using a canvas, but when I finally save it to a file, the original image size is reduced because of the sampling.
If I load the original image into a bitmap without downsampling, it exceeds the VM memory limit.
The code below works for me:
public static Bitmap mark(Bitmap src, String watermark, Point location, int color, int alpha, int size, boolean underline) {
    int w = src.getWidth();
    int h = src.getHeight();
    Bitmap result = Bitmap.createBitmap(w, h, src.getConfig());

    Canvas canvas = new Canvas(result);
    canvas.drawBitmap(src, 0, 0, null);

    Paint paint = new Paint();
    paint.setColor(color);  // a color int, e.g. Color.WHITE
    paint.setAlpha(alpha);
    paint.setTextSize(size);
    paint.setAntiAlias(true);
    paint.setUnderlineText(underline);
    canvas.drawText(watermark, location.x, location.y, paint);

    return result;
}
For large image editing you'll need to use native tools like ImageMagick, because advanced image-processing libraries are lacking in the Java APIs supported by Android.
If you can compile the composite tool's binaries for Android, you can use them with the -limit option to work within limited memory.
Also, you can try OpenCV as an alternative.
You can use BitmapRegionDecoder when dealing with large image files. From the official documentation:
BitmapRegionDecoder can be used to decode a rectangle region from an image. BitmapRegionDecoder is particularly useful when an original image is large and you only need parts of the image.
To create a BitmapRegionDecoder, call newInstance(...). Given a BitmapRegionDecoder, users can call decodeRegion() repeatedly to get a decoded Bitmap of the specified region.
Just decode the part of the image where you need to add the watermark, then use a Canvas to draw text on it.
try {
    BitmapRegionDecoder regionDecoder = BitmapRegionDecoder.newInstance("/sdcard/test.png", true);
    Bitmap bitmap = regionDecoder.decodeRegion(rect, options);
} catch (IOException e) {
    e.printStackTrace();
}
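The rect and options above are left undefined; a hedged sketch of how the region decode plus the Canvas step might look (imageWidth, imageHeight and stripHeight are illustrative names, not part of the original answer):
// Decode only the bottom strip that will carry the watermark
Rect rect = new Rect(0, imageHeight - stripHeight, imageWidth, imageHeight);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap region = regionDecoder.decodeRegion(rect, options);

// decodeRegion may return an immutable bitmap, so copy it before drawing
Bitmap mutableRegion = region.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(mutableRegion);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setColor(Color.WHITE);
paint.setTextSize(48);
canvas.drawText("watermark", 20, mutableRegion.getHeight() - 20, paint);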

How to get java wrapper for libjpeg-turbo to actually compress?

I'm having trouble getting libjpeg-turbo in my Java project to actually compress an image. It writes a .jpg fine, but the final size of the result is always almost the same as a 24-bit Windows .bmp of the same image. A 480x854 image turns into a 1.2 MB jpeg with the code snippet below. If I use GRAY sampling it's 800 KB (and these are not fancy images to begin with - mostly a neutral background with some filled primary color discs on them for a game I'm working on).
Here's the code I've got so far:
// for some byte[] src in RGB888 format, representing an image of dimensions
// 'width' and 'height'
try
{
    TJCompressor tjc = new TJCompressor(
        src,
        width,
        0,        // "pitch" - scanline size
        height,
        TJ.PF_RGB // format
    );
    tjc.setJPEGQuality(75);
    tjc.setSubsamp(TJ.SAMP_420);
    byte[] jpg_data = tjc.compress(0);
    new java.io.FileOutputStream(new java.io.File("/tmp/dump.jpg")).write(jpg_data, 0, jpg_data.length);
}
catch(Exception e)
{
    e.printStackTrace(System.err);
}
I'm particularly having a hard time finding sample java usage documentation for this project; it mostly assumes a C background/usage. I don't understand the flags to pass to compress (nor do I really know the internals of the jpeg standard, nor do I want to :)!
Thanks!
Doh! And within 5 minutes of posting the question the answer hit me.
A hexdump of the result showed that the end of the file for these images was just lots and lots of 0s.
For anybody in a similar situation in the future: instead of using jpg_data.length (the buffer returned by compress() is sized for the worst case, so it is larger than the actual compressed data), use TJCompressor.getCompressedSize() immediately after your call to TJCompressor.compress().
Final result becomes:
// for some byte[] src in RGB format, representing an image of dimensions
// 'width' and 'height'
try
{
    TJCompressor tjc = new TJCompressor(
        src,
        width,
        0,        // "pitch" - scanline size
        height,
        TJ.PF_RGB // format
    );
    tjc.setJPEGQuality(75);
    tjc.setSubsamp(TJ.SAMP_420);
    byte[] jpg_data = tjc.compress(0);
    int actual_size = tjc.getCompressedSize();
    new java.io.FileOutputStream(new java.io.File("/tmp/dump.jpg")).
        write(jpg_data, 0, actual_size);
}
catch(Exception e)
{
    e.printStackTrace(System.err);
}

Android Camera Preview YUV format into RGB on the GPU

I copy-pasted some code I found on Stack Overflow to convert the default camera preview from YUV into RGB format and then upload it to OpenGL for processing.
That worked fine; the issue is that most of the CPU was busy converting the YUV images into RGB format, and it became the bottleneck.
I want to upload the YUV image into the GPU and then convert it into RGB in a fragment shader.
I took the same Java YUV to RGB function I found which worked on the CPU and tried to make it work on the GPU.
It turned out to be quite a nightmare, since there are several differences between doing these calculations in Java and on the GPU.
First, the preview image comes in byte[] in Java, but bytes are signed, so there might be negative values.
In addition, the fragment shader normally deals with [0..1] floating-point values instead of a byte.
I am sure this is solvable and I almost solved it. But I spent a few hours trying to figure out what I was doing wrong and couldn't make it work.
Bottom line, I'm asking for someone to just write this shader function and preferably test it. For me it would be a tedious monkey job since I don't really understand why this conversion works the way it does, and I would just be trying to mimic the same function on the GPU.
This is a very similar function to what I used on Java:
Displaying YUV Image in Android
I did some of the work on the CPU, such as turning the 1.5*w*h bytes of YUV data into a w*h YUV array, as follows:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (int) yuv420sp[yp] + 127;
            if ((i & 1) == 0) {
                v = (int) yuv420sp[uvp++] + 127;
                u = (int) yuv420sp[uvp++] + 127;
            }
            rgba[yp] = 0xFF000000 + (y << 16) | (u << 8) | v;
        }
    }
}
I added 127 because byte is signed.
I then loaded the rgba array into an OpenGL texture and tried to do the rest of the calculation on the GPU.
Any help would be appreciated...
I used this code from Wikipedia to calculate the conversion from YUV to RGB on the GPU:
private static int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;

    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (b << 16) | (g << 8) | r;
}
I converted the floats to the 0.0..255.0 range and then used the above code.
The CPU part was to rearrange the original YUV pixels into a YUV matrix (also shown on Wikipedia).
Basically I used the Wikipedia code and did the simplest float<->byte conversions to make it work.
Small mistakes, like adding 16 to Y or not adding 128 to U and V, would give undesirable results, so you need to take care of them.
But it wasn't a lot of work once I used the Wikipedia code as the base.
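For reference, a plain-Java sketch of the per-pixel math implied above (full-range BT.601, as commonly used for Android's NV21 preview data; yIndex and uvIndex are placeholder offsets into the NV21 array):
int yVal = data[yIndex] & 0xFF;            // undo Java's signed bytes
int v = (data[uvIndex] & 0xFF) - 128;      // NV21 interleaves V first...
int u = (data[uvIndex + 1] & 0xFF) - 128;  // ...then U
int r = Math.min(255, Math.max(0, Math.round(yVal + 1.402f * v)));
int g = Math.min(255, Math.max(0, Math.round(yVal - 0.344f * u - 0.714f * v)));
int b = Math.min(255, Math.max(0, Math.round(yVal + 1.772f * u)));
int argb = 0xFF000000 | (r << 16) | (g << 8) | b;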
Converting on the CPU sounds easy, but I believe the question is how to do it on the GPU.
I did it recently in my project, where I needed very fast QR-code detection even when the camera angle is 45 degrees to the surface the code is printed on, and it worked with great performance:
(the following code is trimmed to contain only the key lines; it assumes a solid understanding of both Java and OpenGL ES)
Create a GL texture that will hold the camera image:
int[] txt = new int[1];
GLES20.glGenTextures(1, txt, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
Pay attention that the texture type is not GL_TEXTURE_2D. This is important, since only the GL_TEXTURE_EXTERNAL_OES type is supported by the SurfaceTexture object, which will be used in the next step.
Setup SurfaceTexture:
SurfaceTexture surfTex = new SurfaceTexture(txt[0]);
surfTex.setOnFrameAvailableListener(this);
The above assumes that 'this' is an object that implements the onFrameAvailable callback (SurfaceTexture.OnFrameAvailableListener):
public void onFrameAvailable(SurfaceTexture st)
{
    surfTexNeedUpdate = true;
    // this flag will be read in the GL render pipeline
}
Setup camera:
Camera cam = Camera.open();
cam.setPreviewTexture(surfTex);
This Camera API is deprecated as of Android 5.0, so if you target that, you have to use the new camera2 (CameraDevice) API.
In your render pipeline, have the following block to check whether the camera has a frame available and, if so, update the surface texture with it. When the surface texture is updated, it fills in the GL texture that is linked to it.
if (surfTexNeedUpdate)
{
    surfTex.updateTexImage();
    surfTexNeedUpdate = false;
}
To bind the GL texture which has the Camera -> SurfaceTexture link, just do this in the rendering pipe:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
It goes without saying that you need to set the current active texture.
In the GL shader program that will use the above texture in its fragment part, the first line must be:
#extension GL_OES_EGL_image_external : require
The above is a must-have.
The texture uniform must be of the samplerExternalOES type:
uniform samplerExternalOES u_Texture0;
Reading a pixel from it is just like from a GL_TEXTURE_2D texture, and the UV coordinates are in the same range (from 0.0 to 1.0):
vec4 px = texture2D(u_Texture0, v_UV);
Once you have your render pipeline ready to render a quad with the above texture and shader, just start the camera:
cam.startPreview();
You should see a quad on your GL screen with the live camera feed. Now you just need to grab the image with glReadPixels:
GLES20.glReadPixels(0,0,width,height,GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bytes);
The line above assumes that your FBO is RGBA, that bytes is a byte[] array already initialized to the proper size, and that width and height are the size of your FBO.
And voila! You have captured RGBA pixels from camera instead of converting YUV bytes received in onPreviewFrame callback...
You can also use RGB framebuffer object and avoid alpha if you don't need it.
It is important to note that the camera calls onFrameAvailable on its own thread, which is not your GL render pipeline thread, so you should not perform any GL calls in that function.
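An alternative to the flag shown above, if you render with a GLSurfaceView (a sketch; glView is assumed to be your GLSurfaceView instance), is to hand the work to the GL thread explicitly:
public void onFrameAvailable(SurfaceTexture st) {
    glView.queueEvent(new Runnable() {
        @Override
        public void run() {
            surfTex.updateTexImage();  // safe: this runnable runs on the GL thread
        }
    });
    glView.requestRender();            // only needed with RENDERMODE_WHEN_DIRTY
}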
RenderScript was first introduced in February 2011. Since Android 3.0 Honeycomb (API 11), and certainly since Android 4.2 JellyBean (API 17), when ScriptIntrinsicYuvToRGB was added, the easiest and most efficient solution has been to use RenderScript for YUV to RGB conversion. I have recently generalized this solution to handle device rotation.

Android Pass Bitmap to Native in 2.1 and lower

It's extremely easy to get the Bitmap data in the NDK when working with Android 2.2, but with 2.1 and lower, the AndroidBitmap_lockPixels function is not available. I've been searching for the past few hours, but nothing has worked.
How can I access the pixel data of a bitmap without using that function?
Create empty bitmap with dimensions of original image and ARGB_8888 format:
int width = src.getWidth();
int height = src.getHeight();
Bitmap dest = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Copy pixels from source bitmap to the int array:
int[] pixels = new int[width * height];
src.getPixels(pixels, 0, width, 0, 0, width, height);
And set these pixels to destination bitmap:
dest.setPixels(pixels, 0, width, 0, 0, width, height);
Create an IntBuffer in your Java code and pass the array down to your native library:
// this is called from native code
buffer = IntBuffer.allocate(width*height);
return buffer.array();
Use GetIntArrayElements to get a jint* and write to the array:
jint * arr = env->GetIntArrayElements((jintArray)bufferArray, NULL);
Write to the array and when finished, release:
env->ReleaseIntArrayElements((jintArray)bufferArray, arr, 0);
Notify the Java code that the array has been updated and use Canvas.drawBitmap() to draw the IntBuffer:
canvas.drawBitmap(buffer.array(), ....);
To draw to a Bitmap, initialize the canvas with the bitmap
... new Canvas(bitmap)
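Putting the pieces together, the Java side might look roughly like this (a sketch only; the library name mynative and the renderFrame method are illustrative, not an existing API):
public class NativePixelBridge {
    static { System.loadLibrary("mynative"); }  // hypothetical native library

    private final int width, height;
    private final IntBuffer buffer;              // backing array is shared with native code

    // hypothetical JNI method that fills 'pixels' with ARGB values
    private native void renderFrame(int[] pixels, int width, int height);

    public NativePixelBridge(int width, int height) {
        this.width = width;
        this.height = height;
        this.buffer = IntBuffer.allocate(width * height);
    }

    public void drawTo(Canvas canvas) {
        renderFrame(buffer.array(), width, height);
        // draw the int[] directly; no per-frame Bitmap allocation
        canvas.drawBitmap(buffer.array(), 0, width, 0f, 0f, width, height, true, null);
    }
}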
Someone else just asked the same question - I'll just link to it to avoid duplicating my answer:
Android rendering to live wallpapers
In any event, you probably don't want to copy the bitmap data every time you need to exchange it between Java and JNI code; if your code is performance sensitive, this may still be your only option on Android 2.1 and lower.

Adjust brightness and contrast of BufferedImage in Java

I'm processing a bunch of images with some framework, and all I'm given is a bunch of BufferedImage objects. Unfortunately, these images are really dim, and I'd like to brighten them up and adjust the contrast a little.
Something like:
BufferedImage image = something.getImage();
image = new Brighten(image).brighten(0.3); // for 30%
image = new Contrast(image).contrast(0.3);
// ...
Any ideas?
That was easy, actually.
RescaleOp rescaleOp = new RescaleOp(1.2f, 15, null);
rescaleOp.filter(image, image); // Source and destination are the same.
A scaleFactor of 1.2 and an offset of 15 seem to make the image about a stop brighter.
Yay!
Read more in the docs for RescaleOp.
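If you want this wrapped up with separate knobs for the two adjustments, something along these lines works (a sketch, not a drop-in API; the scale factor acts as a contrast control and the offset as a brightness control):
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public final class ImageAdjust {
    private ImageAdjust() {}

    /**
     * Scale factor > 1 stretches contrast, offset > 0 brightens.
     * Note: images with an IndexColorModel are not supported by RescaleOp.
     */
    public static BufferedImage adjust(BufferedImage src, float contrastScale, float brightnessOffset) {
        // passing null as the destination lets RescaleOp allocate a compatible image
        return new RescaleOp(contrastScale, brightnessOffset, null).filter(src, null);
    }
}

// usage, roughly matching the snippet above but leaving the source image untouched:
// BufferedImage brighter = ImageAdjust.adjust(image, 1.2f, 15f);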
