It's extremely easy to get the Bitmap data in the NDK when working with Android 2.2, but with 2.1 and lower, the AndroidBitmap_lockPixels function is not available. I've been searching for the past few hours, but nothing has worked.
How can I access the pixel data of a bitmap without using that function?
Create an empty bitmap with the dimensions of the original image and the ARGB_8888 format:
int width = src.getWidth();
int height = src.getHeight();
Bitmap dest = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Copy the pixels from the source bitmap into an int array:
int[] pixels = new int[width * height];
src.getPixels(pixels, 0, width, 0, 0, width, height);
And set those pixels on the destination bitmap:
dest.setPixels(pixels, 0, width, 0, 0, width, height);
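Put together, a minimal sketch of the whole round trip (the helper name copyToArgb8888 is purely illustrative):
import android.graphics.Bitmap;

// Copies the pixels of any bitmap into a new ARGB_8888 bitmap.
public static Bitmap copyToArgb8888(Bitmap src) {
    int width = src.getWidth();
    int height = src.getHeight();
    Bitmap dest = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

    // Pull the pixels into an int array (one packed ARGB int per pixel)...
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);

    // ...and push them into the destination bitmap.
    dest.setPixels(pixels, 0, width, 0, 0, width, height);
    return dest;
}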
Create an IntBuffer in your Java code and pass the array down to your native library:
// this is called from native code
buffer = IntBuffer.allocate(width*height);
return buffer.array();
Use GetIntArrayElements to get a jint* pointing at the array:
jint * arr = env->GetIntArrayElements((jintArray)bufferArray, NULL);
Write to the array through that pointer and, when finished, release it:
env->ReleaseIntArrayElements((jintArray)bufferArray, arr, 0);
Notify the Java code that the array has been updated and use Canvas.drawBitmap() to draw the IntBuffer:
canvas.drawBitmap(buffer.array(), ....);
To draw to a Bitmap, initialize the canvas with the bitmap
... new Canvas(bitmap)
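A minimal Java-side sketch of that whole flow; the class and the native method name renderFrame are illustrative, not from the original answer:
import java.nio.IntBuffer;
import android.graphics.Canvas;

public class NativeRenderer {
    static { System.loadLibrary("myrenderer"); } // hypothetical library name

    private IntBuffer buffer;
    private int width, height;

    // Implemented in C/C++; fills the int[] via GetIntArrayElements /
    // ReleaseIntArrayElements as shown above.
    private native void renderFrame(int[] pixels, int width, int height);

    // Called from native code (or during setup) to create the shared array.
    int[] createBuffer(int width, int height) {
        this.width = width;
        this.height = height;
        buffer = IntBuffer.allocate(width * height);
        return buffer.array();
    }

    void draw(Canvas canvas) {
        renderFrame(buffer.array(), width, height);
        // Draw the raw ARGB ints directly; stride == width, hasAlpha == true.
        canvas.drawBitmap(buffer.array(), 0, width, 0, 0, width, height, true, null);
    }
}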
Someone else just asked the same question - I'll just link to it to avoid duplicating my answer:
Android rendering to live wallpapers
In any event, you probably don't want to copy the bitmap data every time you need to exchange it between Java and JNI code, so if your code is performance sensitive, this may be your only option on Android 2.1 and lower.
I'm trying to do a perspective transformation on my bitmap with a given quadrilateral. However, the Matrix.polyToPoly function stretches not only the part of the image I want but also the pixels outside the given area, so in some edge cases a huge image is created and my app crashes with an OOM error.
Is there any way to sort of drop the pixels outside of the said area to not be stretched?
Or are there any other possibilities to do a perspective transform which is more memory friendly?
I'm currently doing it like this:
// Perspective Transformation
float[] destination = {0, 0,
width, 0,
width, height,
0, height};
Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);
Bitmap transformed = Bitmap.createBitmap(temp, 0, 0, cropWidth, cropHeight, post, true);
where cropWidth and cropHeight are the size of the bitmap (I cropped it to the edges of the quadrilateral to save memory) and temp is that cropped bitmap.
Thanks to pskink I got it working, here is a post with the code in it:
Distorting an image to a quadrangle fails in some cases on Android
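For reference, one memory-friendlier pattern (a sketch of an alternative approach, not the code from the linked post) is to allocate only a destination-sized bitmap and draw the cropped source through the matrix, so anything mapped outside the destination is clipped instead of allocated. It reuses width, height, destination, boundingBox and temp from the question:
// The destination has a fixed, known size, so memory is bounded by width * height.
Bitmap transformed = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(transformed);

Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);

Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG); // bilinear filtering
canvas.drawBitmap(temp, post, paint);              // pixels outside the canvas are clipped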
I'm looking for the fastest way to write pixels to a javafx.scene.image.Image. Writing to a BufferedImage's backing array is much faster: on the test image I made, it took only ~20 ms with BufferedImage, whereas WritableImage took ~100 ms. I already tried SwingFXUtils, but no luck.
Code for BufferedImage (faster):
BufferedImage bi = createCompatibleImage( width, height );
WritableRaster raster = bi.getRaster();
DataBufferInt dataBuffer = (DataBufferInt) raster.getDataBuffer();
System.arraycopy( pixels, 0, dataBuffer.getData(), 0, pixels.length );
Code for WritableImage (slower):
WritableImage wi = new WritableImage( width, height );
PixelWriter pw = wi.getPixelWriter();
WritablePixelFormat<IntBuffer> pf = WritablePixelFormat.getIntArgbInstance();
pw.setPixels( 0, 0, width, height, pf, pixels, 0, width );
Maybe there's a way to write to WritableImage's backing array too?
For the performance of the pixel writer it is absolutely crucial that you pick the right pixel format. You can check what the native pixel format is via
pw.getPixelFormat().getType()
On my Mac this is PixelFormat.Type.BYTE_BGRA_PRE. If your raw data conforms to this pixel format, then the transfer to the image should be pretty fast. Otherwise the pixel data has to be converted and that takes some time.
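A small sketch of that idea: query the native type once and, if it is BYTE_BGRA_PRE, hand the writer a byte array that already matches. The repacking from the question's int[] ARGB data is just an illustration and assumes the pixels are opaque (otherwise you would have to premultiply the color channels yourself):
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelWriter;
import javafx.scene.image.WritableImage;

WritableImage wi = new WritableImage(width, height);
PixelWriter pw = wi.getPixelWriter();

if (pw.getPixelFormat().getType() == PixelFormat.Type.BYTE_BGRA_PRE) {
    // Repack the ARGB ints as BGRA bytes so setPixels does no conversion.
    byte[] bgra = new byte[width * height * 4];
    for (int i = 0; i < pixels.length; i++) {
        int argb = pixels[i];
        bgra[i * 4]     = (byte) argb;           // B
        bgra[i * 4 + 1] = (byte) (argb >> 8);    // G
        bgra[i * 4 + 2] = (byte) (argb >> 16);   // R
        bgra[i * 4 + 3] = (byte) (argb >>> 24);  // A
    }
    pw.setPixels(0, 0, width, height,
            PixelFormat.getByteBgraPreInstance(), bgra, 0, width * 4);
} else {
    // Fall back to the int ARGB path from the question.
    pw.setPixels(0, 0, width, height,
            PixelFormat.getIntArgbInstance(), pixels, 0, width);
}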
Can anyone suggest a way to create a small solid colored bitmap image from a hex value?
Alternatively, you can use Bitmap.eraseColor() to set a solid color for your bitmap.
Example:
import android.graphics.Bitmap;
...
Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
image.eraseColor(android.graphics.Color.GREEN); // or pass an AARRGGBB hex value directly, e.g. 0xFF00FF00
I think I may have the answer. Technically, I believe it is much easier on Android than on a PC. The last time I searched for a way to create a bitmap (.bmp), I only found some Android functions and BitmapFactory for non-Android, which didn't work for me.
Please look at this site: http://developer.android.com/reference/android/graphics/Bitmap.html
This method could fit your needs:
static Bitmap createBitmap(int[] colors, int offset, int stride, int width, int height, Bitmap.Config config)
Returns an immutable bitmap with the specified width and height, with each pixel value set to the corresponding value in the colors array.
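A minimal sketch of that overload for a solid color; the hex value 0xFF2196F3 is just an example AARRGGBB color:
import java.util.Arrays;
import android.graphics.Bitmap;

int color = 0xFF2196F3;                  // any AARRGGBB hex value
int[] colors = new int[width * height];
Arrays.fill(colors, color);

// stride == width because each row starts right after the previous one
Bitmap solid = Bitmap.createBitmap(colors, 0, width, width, height,
        Bitmap.Config.ARGB_8888);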
Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(image);
int hexColor = 0xFF888888; // AARRGGBB
canvas.drawColor(hexColor);
Use createBitmap().
Here is a link that will show you how: http://developer.android.com/reference/android/graphics/Bitmap.html
The following code defines my Bitmap:
Resources res = context.getResources();
mBackground = BitmapFactory.decodeResource(res, R.drawable.background);
// scale bitmap
int h = 800; // height in pixels
int w = 480; // width in pixels
// Make sure w and h are in the correct order
Bitmap scaled = Bitmap.createScaledBitmap(mBackground, w, h, true);
And the following code is used to draw it (the unscaled Bitmap):
canvas.drawBitmap(mBackground, 0, 0, null);
My question is, how might I set it to draw the scaled Bitmap returned in the form of Bitmap scaled, and not the original?
Define a new class member variable:
Bitmap mScaledBackground;
Then, assign your newly created scaled bitmap to it:
mScaledBackground = scaled;
Then, call in your draw method:
canvas.drawBitmap(mScaledBackground, 0, 0, null);
Note that it is not a good idea to hard-code the screen size the way you did in the snippet above. It would be better to fetch your device's screen size like this:
int width = getWindowManager().getDefaultDisplay().getWidth();
int height = getWindowManager().getDefaultDisplay().getHeight();
And it would probably be better not to allocate a new bitmap just for the purpose of drawing your original background scaled. Bitmaps consume a lot of precious resources, and a phone is usually limited to a few MB of bitmaps you can load before your app fails ungracefully. Instead you could do something like this:
Rect src = new Rect(0, 0, mBackground.getWidth(), mBackground.getHeight());
Rect dest = new Rect(0, 0, width, height);
canvas.drawBitmap(mBackground, src, dest, null);
To draw the scaled bitmap you want, save it in a field somewhere (here called mScaled) and call:
canvas.drawBitmap(mScaled, 0, 0, null);
in your draw method (or wherever you call it right now).
To my understanding, the following code
int [] pixels = image.getRaster().getPixels(0, 0, width, height, (int[])null);
should generate an array of exactly width x height elements, but in practice it seems to be much larger. Why?
The image may be a BufferedImage, ToolkitImage, or VolatileImage.
It generates an array of size
new int[numBands * w * h]; // numBands = the number of bands in the image data
where numBands comes from the SampleModel of your Raster.
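So for a typical RGB image there are three bands (four with alpha), and the array is three or four times larger than width * height. If you want exactly one packed int per pixel, BufferedImage.getRGB is an alternative (a sketch, assuming image is a BufferedImage):
// One packed ARGB int per pixel: the result has exactly width * height entries.
int[] argb = image.getRGB(0, 0, width, height, null, 0, width);

// By contrast, Raster.getPixels returns one entry per band per pixel,
// e.g. width * height * 3 ints for an RGB image without alpha.
int numBands = image.getRaster().getNumBands();
int[] samples = image.getRaster().getPixels(0, 0, width, height, (int[]) null);
assert samples.length == numBands * width * height;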