I'm implementing the Sobel edge detection algorithm. After processing I set each pixel to either 255 or 0. My problem is that the resulting bitmap is not shown in the ImageView. I'm using the ALPHA_8 configuration because it takes less memory and is the most suitable for edge results.
How can I preview the results?
The ALPHA_8 format is meant to be used as a mask, because it contains only alpha and no color information. You should either convert it to another format or, if you want to keep it, apply it as a mask over a background. Check out this response from an Android project member to a similar issue. You can also check out my response to another question, where I included a code example showing how to apply a mask to another bitmap.
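For example, a minimal sketch of the convert-or-mask idea (assuming "edges" is your ALPHA_8 result and "imageView" is the target view, both hypothetical names):
// Sketch: render the ALPHA_8 edge map into a displayable ARGB_8888 bitmap.
// Drawing an ALPHA_8 bitmap uses it as a mask for the Paint's color.
Bitmap visible = Bitmap.createBitmap(edges.getWidth(), edges.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(visible);
canvas.drawColor(Color.BLACK);         // background behind the mask
Paint paint = new Paint();
paint.setColor(Color.WHITE);           // color the edge pixels will take
canvas.drawBitmap(edges, 0, 0, paint); // edge alpha masks the paint color
imageView.setImageBitmap(visible);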
You can use a ColorMatrixColorFilter to display an ALPHA_8 bitmap.
For example, use the following matrix
mColorFilter = new ColorMatrixColorFilter(new float[]{
        0, 0, 0, 1, 0,   // R' = A: copy alpha into red
        0, 0, 0, 0, 0,   // G' = 0
        0, 0, 0, 0, 0,   // B' = 0
        0, 0, 0, 0, 255  // A' = 255: force full opacity
});
to draw your bitmap in red.
@Override
public void onDraw(Canvas canvas) {
    mPaint.setColorFilter(mColorFilter);
    canvas.drawBitmap(mBitmap, x, y, mPaint);
}
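Each row of the 4x5 matrix computes one output channel from (R, G, B, A, offset), which is why the bitmap comes out red: only the R' row reads the alpha. As an untested variant of the same trick, mapping alpha into all three color rows should render the edges in white instead:
mColorFilter = new ColorMatrixColorFilter(new float[]{
        0, 0, 0, 1, 0,   // R' = A
        0, 0, 0, 1, 0,   // G' = A
        0, 0, 0, 1, 0,   // B' = A
        0, 0, 0, 0, 255  // A' = 255
});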
I'm trying to do a perspective transformation on my bitmap with a given quadrilateral. However, the Matrix.polyToPoly function stretches not only the part of the image I want but also the pixels outside the given area, so that in some edge cases a huge image is produced, which crashes my app with an OOM error.
Is there any way to drop the pixels outside of the said area so they are not stretched?
Or is there any other possibility for doing a perspective transform that is more memory-friendly?
I'm currently doing it like this:
// Perspective transformation
float[] destination = {0, 0,
                       width, 0,
                       width, height,
                       0, height};
Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);
Bitmap transformed = Bitmap.createBitmap(temp, 0, 0, cropWidth, cropHeight, post, true);
where cropWidth and cropHeight are the size of the bitmap (I cropped it to the edges of the quadrilateral to save memory) and temp is said cropped bitmap.
Thanks to pskink I got it working; here is a post with the code in it:
Distorting an image to a quadrangle fails in some cases on Android
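For reference, one memory-friendly variant of the same idea (a sketch reusing the names from the question, not the exact code from that post) is to render through a Canvas whose backing bitmap already has the target size, so pixels the matrix maps outside the destination are clipped instead of allocated:
// Sketch: allocate only the destination-sized bitmap and let the Canvas
// clip everything the matrix maps outside of it.
Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);

Bitmap transformed = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(transformed);
Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG | Paint.ANTI_ALIAS_FLAG);
canvas.drawBitmap(temp, post, paint); // off-canvas pixels are never stored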
I'm trying to implement my own screen-mirroring protocol.
I'm using WebSockets to retrieve a compressed pixel buffer. After I decompress this buffer, I get a big array of positions with colors that I should display.
I'm using a canvas with a for loop over a huge ArrayList (2 million elements) in onDraw, like the following:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    for (int i = 0; i < this.pixelsList.size(); i++) { // SIZE = 2,073,600 pixels
        canvas.drawPoint(this.pixelsList.get(i).getPosition().x,
                this.pixelsList.get(i).getPosition().y,
                this.pixelsList.get(i).getPaint());
    }
}
I'm looking for an efficient way to display this huge packet.
I tried OpenGL ES but did not find any way to display one pixel at one position. Should I maybe take a look at OpenCV?
Don't dereference each pixel address one by one; upload the whole frame as an OpenGL texture instead. This might be the fastest way.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture[texture_id].texture_width,
             texture[texture_id].texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             pixelsList);
or, if it's frequently updated:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixelsList);
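In Android Java terms the same idea looks roughly like this (a sketch using the GLES20 bindings; it assumes a current GL context on the calling thread, and "width", "height" and the RGBA packing of the decompressed buffer are yours to provide):
// Sketch: upload the decoded frame as a texture, then update it per frame.
// Uses android.opengl.GLES20, java.nio.ByteBuffer, java.nio.ByteOrder.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
// ... copy the decompressed buffer into "pixels", 4 RGBA bytes per point ...
pixels.position(0);

// First frame: allocate and fill the texture.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// Later frames: overwrite in place to avoid reallocating.
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);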
I'm trying to find a faster way of converting RGB to YUV using the Android SDK, as the standard pixel-by-pixel methods are pretty slow in Java. ColorMatrices seem to be pretty efficient, and I see there's a setRGB2YUV() method, but I can't find any examples, and the documentation simply says "Set the matrix to convert RGB to YUV", which is completely useless as usual.
Here's part of my initialization code, which is complicated slightly by the arrays needed for multithreading:
cacheBitmaps = new Bitmap[NumberOfThreads];
cacheCanvas = new Canvas[NumberOfThreads];
mRGB2YUV = new ColorMatrix();
cmfRGB2YUV = new ColorMatrixColorFilter(mRGB2YUV);
pRGB2YUV = new Paint();
pRGB2YUV.setColorFilter(cmfRGB2YUV);
for (int m = 0; m < NumberOfThreads; m++) {
    cacheBitmaps[m] = Bitmap.createBitmap(widthX, heightY, Config.ARGB_8888);
    cacheCanvas[m] = new Canvas(cacheBitmaps[m]);
}
Later I use this to paint an RGB bitmap to a canvas with the specified paint:
cacheCanvas[n].drawBitmap(fb.frames[n].getAndroidBitmap(),0,0, pRGB2YUV);
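For reference, the undocumented method can at least be wired into the setup above like this (a sketch using the same field names; note that even if the coefficients are right, the Y/U/V values land in the R/G/B channels of an ARGB_8888 bitmap, which is not the interleaved NV21 byte layout):
// Sketch: let the framework fill in the RGB-to-YUV coefficients
// instead of hand-writing the matrix.
mRGB2YUV.setRGB2YUV();
cmfRGB2YUV = new ColorMatrixColorFilter(mRGB2YUV);
pRGB2YUV.setColorFilter(cmfRGB2YUV);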
I've also experimented with an identity matrix that shouldn't apply any changes to the RGB values, like this:
float[] matrix = {
        1, 0, 0, 0, 0, // red
        0, 1, 0, 0, 0, // green
        0, 0, 1, 0, 0, // blue
        0, 0, 0, 1, 0  // alpha
};
mRGB2YUV.set(matrix);
Whatever I do, I get either black, green, or distorted frames in my output video (using JavaCV with FFmpeg, specifying AV_PIX_FMT_NV21 as the color format after copying the final bitmap to an IplImage).
Does anyone know how to use this, whether it's even possible, and whether it does what it says it does?
How can I tell if a binary image that I am generating is 0-indexed or 1-indexed?
I have made a program which reads in an image, generates a binary image, and performs some other functions on the image. I would like to know, however, how to tell which 'index' the pixel values in the binary image use.
How is this done?
Is there a simple built-in function (such as image.getRGB(), for example) which can be called to determine this?
I don't know what you mean by 0- or 1-indexed, but here are some facts.
BufferedImage is a generic image type, so pixel coordinates start at (0, 0).
If you want an array to work on, coming from this image, the upper-left corner will be at index 0 (unless otherwise specified):
image.getRGB(0, 0, image.getWidth(), image.getHeight(), array, 0, image.getWidth());
BufferedImage doesn't support 1 BPP images natively; they are handled either via a packed mode with a ColorModel or via a 2-entry palette. I can't tell which one you have without examples.
Regardless of the internal format, the different getRGB() methods should always return one value per pixel, and one pixel per value. Note that the full-opacity alpha (0xFF000000, i.e. -16777216) is also included in the results.
For example:
BufferedImage image = new BufferedImage(16, 16, BufferedImage.TYPE_BYTE_BINARY);
image.setRGB(0, 0, 0xFFFFFFFF);
image.setRGB(1, 0, 0xFF000000);
image.setRGB(0, 1, 0xFF000000);
image.setRGB(1, 1, 0xFFFFFFFF);
System.out.println(image.getRGB(0, 0));
System.out.println(image.getRGB(1, 0));
System.out.println(image.getRGB(0, 1));
System.out.println(image.getRGB(1, 1));
int[] array = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
System.out.println(array[0]); // at (0,0)
System.out.println(array[1]); // at (1,0)
System.out.println(array[16]); // at (0,1)
System.out.println(array[17]); // at (1,1)
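For the record, assuming the example behaves as described, the white pixels print as -1 (0xFFFFFFFF) and the black ones as -16777216 (0xFF000000): one packed ARGB int per pixel, regardless of the 1-bit internal storage.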
As the title says, I wonder how I can put my images into perspective view. Here's an image showing how they manage to do this in Photoshop:
http://netlumination.com/blog/creating-perspective-and-a-mirror-image-in-photoshop
Is it possible to do something like this in Android?
Yes.
Have a look at this page http://www.inter-fuser.com/2009/12/android-reflections-with-bitmaps.html.
You can then apply an AffineTransform (at least in AWT, but Android should have something similar, too) using a matrix to distort/skew the image.
Edit: see http://www.jhlabs.com/ip/filters/index.html for an implementation of a PerspectiveFilter.
Note that you could probably also use OpenGL to achieve a similar effect. See http://developer.android.com/reference/android/opengl/GLU.html and http://developer.android.com/reference/android/opengl/GLSurfaceView.html
This works pretty well for me, for values of rotation between 0 and 60:
Matrix imageMatrix = new Matrix();
float[] srcPoints = {
        0, 0,
        0, 200,
        200, 200,
        200, 0};
float[] destPoints = {
        rotation, rotation / 2f,
        rotation, 200 - rotation / 2f,
        200 - rotation, 200,
        200 - rotation, 0};
imageMatrix.setPolyToPoly(srcPoints, 0, destPoints, 0, 4);
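To actually see the effect, the matrix still has to be applied somewhere. A minimal sketch, assuming you draw a 200x200 bitmap onto a Canvas yourself (an ImageView with ScaleType.MATRIX and setImageMatrix(imageMatrix) would work as well):
// Sketch: draw the bitmap through the perspective matrix built above.
Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG | Paint.ANTI_ALIAS_FLAG);
canvas.drawBitmap(bitmap, imageMatrix, paint);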