Android: How to change OpenGLES texture data / bitmap from gallery / camera - java

I'm trying to reuse the Android Media Effects sample and want to be able to replace the texture data with a new bitmap picked from the camera or the gallery. No solution so far.
This is the initial texture load from the sample: https://github.com/googlesamples/android-MediaEffects/blob/master/Application/src/main/java/com/example/android/mediaeffects/MediaEffectsFragment.java
private void loadTextures() {
    // Generate textures
    GLES20.glGenTextures(2, mTextures, 0);
    // Load input bitmap
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.puppy);
    mImageWidth = bitmap.getWidth();
    mImageHeight = bitmap.getHeight();
    mTexRenderer.updateTextureSize(mImageWidth, mImageHeight);
    // Upload to texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextures[0]);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    // Set texture parameters
    GLToolbox.initTexParams();
}
and this is the function that gives me trouble:
public void changeBackground(Uri bitmapUri) {
    Log.d(TAG, "changeBackground : " + bitmapUri.getPath());
    Bitmap bitmap = BitmapFactory.decodeFile(bitmapUri.getPath());
    mImageWidth = bitmap.getWidth();
    mImageHeight = bitmap.getHeight();
    mTexRenderer.updateTextureSize(mImageWidth, mImageHeight);
    GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap);
    theBackground.requestRender(); // my GLSurfaceView
    bitmap.recycle();
}
I tried several approaches:
using texSubImage2D only -> the texture is not updated
deleting and recreating the textures with the new bitmap -> black texture
Note: the bitmap dimensions are fixed at 1080 x 1080.
How do I change/modify/replace the texture data with a new bitmap?
UPDATE:
I think the problem is more related to this question: Setting texture on 3D object using image from photo gallery. I call changeBackground() after grabbing the image from the gallery intent and then cropping it (another intent).
Now, how do I effectively change the texture data with a bitmap from a gallery intent? Does anyone have a clue?
SOLVED
I solved the problem by deferring the texture load to the onSurfaceChanged() callback. I simply use a field as a flag for the file to load, and onSurfaceChanged() checks that flag.
private Uri mFileToLoad = null;

public void changeBackground(Uri bitmapUri) {
    Log.d(TAG, "changeBackground : " + bitmapUri.getPath());
    mFileToLoad = bitmapUri;
    theBackground.requestRender();
}
and my onSurfaceChanged():
public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.d(TAG, "onSurfaceChanged w:" + width + " h:" + height);
    if (mFileToLoad != null) {
        GLES20.glDeleteTextures(mTextures.length, mTextures, 0);
        GLES20.glGenTextures(mTextures.length, mTextures, 0);
        Bitmap bitmap = BitmapFactory.decodeFile(mFileToLoad.getPath());
        mImageWidth = bitmap.getWidth();
        mImageHeight = bitmap.getHeight();
        mTexRenderer.updateTextureSize(mImageWidth, mImageHeight);
        // Upload to texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextures[0]);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        // Set texture parameters
        GLToolbox.initTexParams();
        mFileToLoad = null;
        bitmap.recycle();
    }
    if (mTexRenderer != null) {
        mTexRenderer.updateViewSize(width, height);
    }
}
So far it works.

It looks like your changeBackground() method is probably called from the UI thread, or at least a thread other than the rendering thread.
GLSurfaceView creates a separate rendering thread, which has a current OpenGL context while your rendering methods are called, so it is ready for your OpenGL calls. This is not the case for other threads: the current OpenGL context is per thread, so unless you do something about it (which is possible, but adds significant complexity), you can't make OpenGL calls from other threads.
There are a few options to resolve this. The easiest one is probably using the queueEvent() method on the GLSurfaceView, which allows you to pass in a Runnable that will be executed in the rendering thread.
I'm not sure what the mTexRenderer.updateTextureSize() method in your code does. But generally, if the texture size changes, you'll have to use GLUtils.texImage2D() instead of GLUtils.texSubImage2D() to update the texture.
The code could look like this (untested, so it might not work exactly as typed):
final Bitmap bitmap = BitmapFactory.decodeFile(bitmapUri.getPath());
theBackground.queueEvent(new Runnable() {
    @Override
    public void run() {
        mImageWidth = bitmap.getWidth();
        mImageHeight = bitmap.getHeight();
        mTexRenderer.updateTextureSize(mImageWidth, mImageHeight);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextures[0]);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
});
theBackground.requestRender();

Related

Face Detection and extracting the faces using Bounding Box and creating a new Bitmap

How do I use the Rect rect = face.getBoundingBox() data to crop the detected face out of the bitmap and save it as a new bitmap? I've attempted to construct the bitmap using rect.left etc. and simply display the extracted face in an ImageView, but it does not seem to work.
Also, is it possible to access the faces directly?
If I understand correctly, the detector creates a List of FirebaseVisionFace. What are these entries?
How does it list a face?
Is it possible to access them?
private void processFaceDetection(final Bitmap bitmap) {
    // FirebaseVisionImage is the object Firebase uses to detect faces, created from the bitmap
    FirebaseVisionImage firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionFaceDetectorOptions firebaseVisionFaceDetectorOptions =
            new FirebaseVisionFaceDetectorOptions.Builder().build();
    FirebaseVisionFaceDetector firebaseVisionFaceDetector =
            FirebaseVision.getInstance().getVisionFaceDetector(firebaseVisionFaceDetectorOptions);
    firebaseVisionFaceDetector.detectInImage(firebaseVisionImage)
            .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> firebaseVisionFaces) {
            int counter = 0;
            for (FirebaseVisionFace face : firebaseVisionFaces) {
                Rect rect = face.getBoundingBox();
                RectOverlay rectOverlay = new RectOverlay(graphicOverlay, rect);
                graphicOverlay.add(rectOverlay);
                Bitmap faceSaved = Bitmap.createBitmap(
                        Math.round(Math.abs(rect.left - rect.right)),
                        Math.round(Math.abs(rect.top - rect.bottom)),
                        Bitmap.Config.ALPHA_8);
                imageview.setImageBitmap(faceSaved);
                imageview.setVisibility(View.VISIBLE);
                counter++;
            }
        }
    });
}
ANSWER:
The rect data, which can be inspected with rect.toShortString(), contains four values for left, top, right, and bottom, e.g. [280,495][796,1011]. One rect is created by the FirebaseVisionFaceDetector for each detected face, and the faces are stored in a List<FirebaseVisionFace>.
To save the bitmap data contained within the different rects (faces):
for (FirebaseVisionFace face : firebaseVisionFaces) {
    Rect rect = face.getBoundingBox();
    // Scaled bitmap created from the captured image
    Bitmap original = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false);
    // Face cropped using the rect values
    Bitmap faceCrop = Bitmap.createBitmap(original, rect.left, rect.top, rect.width(), rect.height());
}
faceCrop contains the face-only bitmap data within the bounds of the rect.
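One caution worth adding: the detector's bounding boxes can extend past the edges of the bitmap, and Bitmap.createBitmap() throws in that case. A minimal sketch that clamps the rect first (variable names as in the snippet above):
// Clamp the bounding box to the bitmap so the crop never reads outside the image
int left = Math.max(rect.left, 0);
int top = Math.max(rect.top, 0);
int width = Math.min(rect.width(), original.getWidth() - left);
int height = Math.min(rect.height(), original.getHeight() - top);
Bitmap faceCrop = Bitmap.createBitmap(original, left, top, width, height);
imageview.setImageBitmap(faceCrop);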
Hope this helps....

Android / Java / Kotlin : Merge 2 Bitmaps in one Canvas

I'm trying to create a Kotlin function that takes 2 Bitmaps and returns one corresponding to the two images merged.
The first one is a default white rounded marker (emptyMarkerBitmap) with a fixed width and height.
The second one is an arbitrary image that I would like to shrink so that it fills the first image as an overlay.
private fun createBitmapOverlay(emptyMarkerBitmap: Bitmap, categoryIconBitmap: Bitmap): Bitmap {
    val cs: Bitmap
    val width: Int = emptyMarkerBitmap.width
    val height: Int = emptyMarkerBitmap.height
    cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val comboImage = Canvas(cs)
    comboImage.drawBitmap(emptyMarkerBitmap, 0f, 0f, null)
    comboImage.drawBitmap(categoryIconBitmap, emptyMarkerBitmap.width.toFloat(), 0f, null)
    return cs
}
For the moment, I always get the first image displayed, which is the white marker. My second image is never displayed. Where is the issue?
Try this. Note that in your code you draw the second bitmap at x = emptyMarkerBitmap.width, which places it entirely outside the canvas; draw it at 0f instead:
fun Bitmap.with(bmp: Bitmap): Bitmap {
    // Create a new bitmap based on the size and config of the old one
    val newBitmap: Bitmap = Bitmap.createBitmap(width, height, config)
    // Instantiate a canvas and prepare it to paint into the new bitmap
    val canvas = Canvas(newBitmap)
    // Draw the old bitmap first, then the overlay on top of it, both at the origin
    canvas.drawBitmap(this, 0f, 0f, null)
    canvas.drawBitmap(bmp, 0f, 0f, null)
    return newBitmap
}
emptyMarkerBitmap.with(categoryIconBitmap)
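Since the question also wants the icon shrunk so it fits over the marker, here is a hedged Java sketch of that variant (the method name and the icon-fills-the-marker sizing are my assumptions):
public static Bitmap overlayScaled(Bitmap marker, Bitmap icon) {
    Bitmap result = Bitmap.createBitmap(marker.getWidth(), marker.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(result);
    // Marker first, as the background layer
    canvas.drawBitmap(marker, 0f, 0f, null);
    // Scale the icon to the marker's size so it covers the marker, then draw it on top
    Bitmap scaledIcon = Bitmap.createScaledBitmap(icon, marker.getWidth(), marker.getHeight(), true);
    canvas.drawBitmap(scaledIcon, 0f, 0f, null);
    return result;
}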

How to get Bitmap from session.update() in ARCore [Android Studio]

I am trying to get a Bitmap from the current frame of my ARSession with ARCore. But it always equals null. I've already been searching the web for quite a while but cannot figure out what I am doing wrong.
try {
    capturedImage = mFrame.acquireCameraImage();
    ByteBuffer buffer = capturedImage.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.capacity()];
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    if (bitmap == null)
        Log.e(TAG, "Bitmap was NOT initialized!");
} catch (Exception e) {
}
I am getting mFrame from onDrawFrame of my GLSurfaceView which I use to display the camera image. Everything works just fine except that my Bitmap equals null.
I am using a Button, so that only a single Frame is being used, as follows:
scanButton = (Button) findViewById(R.id.scanButton);
scanButton.setOnClickListener(new View.OnClickListener() {
#Override
public void onClick(View view) {
checkbox = false;
if (capturedImage!=null) capturedImage.close();
BitmapMethod();
}
});
capturedImage, buffer and bytes all do not equal null.
Is there probably something wrong with mFrame.acquireCameraImage()?
Thanks a lot
Is there probably something wrong with mFrame.acquireCameraImage()?
No, mFrame.acquireCameraImage() works as intended.
But it always equals null
The Bitmap will always equal null, since BitmapFactory does not understand the image data that is passed to it.
The method mFrame.acquireCameraImage() responds with an object of type Image that is in the YUV (YCbCr) format. These images have 3 planes, which is explained here very nicely. The byte buffers contained in these planes can be read directly by a CPU/GPU in native code, but BitmapFactory cannot read this type of data. Hence, you need to convert the YUV image into something else.
For that, you need to use the YuvImage class to create a YUV instance and then convert it into a JPEG using the compressToJpeg method. Once you have the byte array from this, you can simply do what you're doing above: use BitmapFactory to convert it into a Bitmap and add it to your ImageView.
Note: YUV has 3 planes. Create a single byte array from all the planes and then pass it to the YuvImage constructor. Though not elaborate, it should look something like this:
// The camera image received is in the YUV/YCbCr format. Get buffers for each of the
// planes and use them to create a new byte array sized to hold all three buffers combined.
val cameraPlaneY = cameraImage.planes[0].buffer
val cameraPlaneU = cameraImage.planes[1].buffer
val cameraPlaneV = cameraImage.planes[2].buffer
// Copy each plane into the combined byte array, one after the other
val compositeByteArray = ByteArray(cameraPlaneY.capacity() + cameraPlaneU.capacity() + cameraPlaneV.capacity())
cameraPlaneY.get(compositeByteArray, 0, cameraPlaneY.capacity())
cameraPlaneU.get(compositeByteArray, cameraPlaneY.capacity(), cameraPlaneU.capacity())
cameraPlaneV.get(compositeByteArray, cameraPlaneY.capacity() + cameraPlaneU.capacity(), cameraPlaneV.capacity())
// Compress the YUV image to JPEG, then decode the JPEG into a Bitmap
val baOutputStream = ByteArrayOutputStream()
val yuvImage: YuvImage = YuvImage(compositeByteArray, ImageFormat.NV21, cameraImage.width, cameraImage.height, null)
yuvImage.compressToJpeg(Rect(0, 0, cameraImage.width, cameraImage.height), 75, baOutputStream)
val byteForBitmap = baOutputStream.toByteArray()
val bitmap = BitmapFactory.decodeByteArray(byteForBitmap, 0, byteForBitmap.size)
imageView.setImageBitmap(bitmap)
That's just rough code; it has scope for improvement, perhaps. Also refer here.
I also wound up recreating the same situation you were facing: I was getting the Image object as null.
After some research, I found that the problem was in the flow of the logic.
I then wrote the following code and it solved my issue:
I defined the following boolean to be set to capture the current frame on the button click:
private static boolean captureCurrentFrame = false;
I wrote this code in the onClick() function to get the current frame's RGB and depth images:
public void captureFrame(View view) {
    Toast.makeText(getApplicationContext(), "Capturing depth and rgb photo", Toast.LENGTH_SHORT).show();
    captureCurrentFrame = true;
}
I wrote this section in the onDrawFrame() method, just after getting the frame from session.update():
if (captureCurrentFrame) {
    RGBImage = frame.acquireCameraImage();
    DepthImage = frame.acquireDepthImage();
    Log.d("Image", "Format of the RGB Image: " + RGBImage.getFormat());
    Log.d("Image", "Format of the Depth Image: " + DepthImage.getFormat());
    RGBImage.close();
    DepthImage.close();
    captureCurrentFrame = false;
}
The reason I was getting null in my case was that the code in the onClick listener was triggered before going through the onDrawFrame() method, as a result of which the Images were never stored in the variables.
Therefore, I shifted the logic to onDrawFrame() and triggered that flow through the boolean variable set by the listener.
I don't know if there is anybody still looking for the answer, but this is my code.
Image image = mFrame.acquireCameraImage();
byte[] nv21;
// Get the three planes.
ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
int ySize = yBuffer.remaining();
int uSize = uBuffer.remaining();
int vSize = vBuffer.remaining();
nv21 = new byte[ySize + uSize + vSize];
//U and V are swapped
yBuffer.get(nv21, 0, ySize);
vBuffer.get(nv21, ySize, vSize);
uBuffer.get(nv21, ySize + vSize, uSize);
int width = image.getWidth();
int height = image.getHeight();
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] byteArray = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
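One caution to add here: the Image returned by mFrame.acquireCameraImage() comes from a small fixed pool, so it should be closed once the plane buffers have been copied, otherwise later acquisitions will fail:
// Release the Image back to ARCore's pool after its buffers have been read;
// without this, repeated acquireCameraImage() calls eventually throw
image.close();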

write a custom name on the top of the picture taken by android cam

If this question is a duplicate, then let me know the link to the original question, because I was unable to find a good link to resolve my current problem.
I am working with the Android camera and am able to take pictures from my app. But I want to write a name on top of the taken picture, and I was unable to find out how to resolve this issue.
Sorry, I don't have any code to use for reference.
Any help will be appreciated, and I want to offer my thanks in advance to all of you.
Try the following code. The scale factor is taken from the system display density:
public Bitmap drawTextToBitmap(Bitmap bitmap, String mText) {
    try {
        // Scale factor derived from the display density
        float scale = Resources.getSystem().getDisplayMetrics().density;
        android.graphics.Bitmap.Config bitmapConfig = bitmap.getConfig();
        // Set a default bitmap config if none
        if (bitmapConfig == null) {
            bitmapConfig = android.graphics.Bitmap.Config.ARGB_8888;
        }
        // Resource bitmaps are immutable, so convert to a mutable copy
        bitmap = bitmap.copy(bitmapConfig, true);
        Canvas canvas = new Canvas(bitmap);
        // New anti-aliased Paint
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // Text color - #6E6E6E
        paint.setColor(Color.rgb(110, 110, 110));
        // Text size in pixels
        paint.setTextSize((int) (12 * scale));
        // Text shadow
        paint.setShadowLayer(1f, 0f, 1f, Color.DKGRAY);
        // Measure the text and position it on the canvas
        Rect bounds = new Rect();
        paint.getTextBounds(mText, 0, mText.length(), bounds);
        int x = (bitmap.getWidth() - bounds.width()) / 6;
        int y = (bitmap.getHeight() + bounds.height()) / 5;
        canvas.drawText(mText, x * scale, y * scale, paint);
        return bitmap;
    } catch (Exception e) {
        return null;
    }
}
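A minimal usage sketch, assuming capturedBitmap holds the photo just taken and imageView displays the result (both names are placeholders):
Bitmap stamped = drawTextToBitmap(capturedBitmap, "My Name");
if (stamped != null) {
    imageView.setImageBitmap(stamped);
}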

Overlay images in Android

I have two images that I want to merge into one (e.g. "House.png" on top of "street.png").
How do I achieve this in Android? I just want to merge the images and export them to a file.
This example sets the image on an ImageView, but I wish to export it.
This other example does not work in Android since the classes are not available.
I'd try something like:
public static Bitmap mergeImages(Bitmap bottomImage, Bitmap topImage) {
    final Bitmap output = Bitmap.createBitmap(bottomImage.getWidth(),
            bottomImage.getHeight(), Config.ARGB_8888);
    final Canvas canvas = new Canvas(output);
    final Paint paint = new Paint();
    paint.setAntiAlias(true);
    canvas.drawBitmap(bottomImage, 0, 0, paint);
    canvas.drawBitmap(topImage, 0, 0, paint);
    return output;
}
(not tested, I just wrote it here, might be some simple errors in there)
Basically, what you do is create a third empty bitmap, draw the bottom image onto it, and then draw the top image over it.
As for saving to a file, here are a few examples: Save bitmap to location
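Building on those, a hedged sketch of the export step (the file name and the app-private files directory are my choices):
// Merge the two bitmaps and write the result out as a PNG
Bitmap merged = mergeImages(streetBitmap, houseBitmap);
File file = new File(context.getFilesDir(), "merged.png");
try (FileOutputStream out = new FileOutputStream(file)) {
    merged.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    e.printStackTrace();
}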
You can do it like this:
public Bitmap Overlay(Bitmap Bitmap1, Resources paramResources, Bitmap Bitmap2, int alpha) {
    Bitmap bmp1 = Bitmap.createScaledBitmap(Bitmap2, Bitmap1.getWidth(), Bitmap1.getHeight(), true);
    Bitmap bmp2 = Bitmap.createBitmap(Bitmap1.getWidth(), Bitmap1.getHeight(), Bitmap1.getConfig());
    Paint localPaint = new Paint();
    localPaint.setAlpha(alpha);
    Canvas localCanvas = new Canvas(bmp2);
    Matrix localMatrix = new Matrix();
    localCanvas.drawBitmap(Bitmap1, localMatrix, null);
    localCanvas.drawBitmap(bmp1, localMatrix, localPaint);
    bmp1.recycle();
    System.gc();
    return bmp2;
}
