I'm working with GeoTIFF/PNG files that are too large to handle as a whole in my code.
Is there any way to decode a specific area of a file (e.g. given by two x,y coordinates) with BitmapFactory? I haven't found anything that looks suitable in Android's developer reference: http://developer.android.com/reference/android/graphics/BitmapFactory.html
Thanks!
With kcoppock's hint I've set up the following solution.
Though I'm wondering why the Rect needs to be initialized as Rect(left, bottom, right, top) instead of Rect(left, top, right, bottom)...
Example call:
Bitmap myBitmap = loadBitmapRegion(context, R.drawable.heightmap,
        0.08f, 0.32f, 0.13f, 0.27f);
Function:
public static Bitmap loadBitmapRegion(
        Context context, int resourceID,
        float regionLeft, float regionTop,
        float regionRight, float regionBottom) {
    // Get input stream for the resource
    InputStream is = context.getResources().openRawResource(resourceID);
    // Set options
    BitmapFactory.Options opt = new BitmapFactory.Options();
    //opt.inPreferredConfig = Bitmap.Config.ARGB_8888; //standard
    // Create the region decoder
    BitmapRegionDecoder decoder = null;
    try {
        decoder = BitmapRegionDecoder.newInstance(is, false);
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (decoder == null) {
        return null; // decoding failed; avoid a NullPointerException below
    }
    // Get resource dimensions
    int h = decoder.getHeight();
    int w = decoder.getWidth();
    // Set region to decode. Rect expects (left, top, right, bottom) in pixel
    // coordinates with y growing downward; the example call above passes
    // normalized coordinates with a bottom-up y axis, which is why
    // regionBottom*h ends up as the Rect's top here.
    Rect region = new Rect(
            Math.round(regionLeft * w), Math.round(regionBottom * h),
            Math.round(regionRight * w), Math.round(regionTop * h));
    // Decode and return the region
    return decoder.decodeRegion(region, opt);
}
You should look into BitmapRegionDecoder. It seems to describe exactly the use case that you are looking for.
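For reference, a minimal sketch of the API (the file path here is hypothetical; the solution above shows the resource-stream variant):
// Decode just one window of a large image without loading the whole file into memory.
// newInstance() throws IOException, so wrap this in a try/catch in real code.
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("/sdcard/huge.png", false);
Bitmap window = decoder.decodeRegion(new Rect(0, 0, 512, 512), null);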
I don't know exactly what you mean by "decode specific areas", but if you mean to actually copy a certain area of a bitmap, you can use a Canvas to do it, as shown below:
Bitmap bmpWithArea = Bitmap.createBitmap(widthDesired, heightDesired, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bmpWithArea);
Rect area = new Rect(areaLeft, areaTop, areaRight, areaBottom); // source rect in the original bitmap
Rect actualSize = new Rect(0, 0, widthDesired, heightDesired);  // destination rect in the new bitmap
canvas.drawBitmap(bitmapWithAreaYouWantToGet, area, actualSize, paintIfAny);
//And done; from this line on, "bmpWithArea" holds the bitmap you wanted. You can assign it to an ImageView and use it as a regular bitmap...
Hope this helps...
Regards!
How do I use the Rect rect = face.getBoundingBox() data to crop the detected face out of the bitmap and save it as a new bitmap? I've attempted to construct the bitmap using rect.left etc. and simply display the extracted face in an ImageView, but it does not seem to work.
Also, is it possible to access the faces directly?
If I understand correctly, the detector creates a List of FirebaseVisionFace; what are these entries?
How does it list a face?
Is it possible to access them?
private void processFaceDetection(final Bitmap bitmap) {
    // firebaseVisionImage is the object, created from the bitmap, that Firebase uses to detect faces
    FirebaseVisionImage firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionFaceDetectorOptions firebaseVisionFaceDetectorOptions = new FirebaseVisionFaceDetectorOptions.Builder().build();
    FirebaseVisionFaceDetector firebaseVisionFaceDetector = FirebaseVision.getInstance().getVisionFaceDetector(firebaseVisionFaceDetectorOptions);
    firebaseVisionFaceDetector.detectInImage(firebaseVisionImage).addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> firebaseVisionFaces) {
            int counter = 0;
            for (FirebaseVisionFace face : firebaseVisionFaces) {
                Rect rect = face.getBoundingBox();
                RectOverlay rectOverlay = new RectOverlay(graphicOverlay, rect);
                graphicOverlay.add(rectOverlay);
                Bitmap faceSaved = Bitmap.createBitmap(Math.abs(rect.left - rect.right), Math.abs(rect.top - rect.bottom), Bitmap.Config.ALPHA_8);
                imageview.setImageBitmap(faceSaved);
                imageview.setVisibility(View.VISIBLE);
                counter++;
            }
        }
    });
}
ANSWER:
The rect data can be inspected with rect.toShortString(), which produces four values for left, top, right, bottom, e.g. [280,495][796,1011]. The rects are created by the FirebaseVisionFaceDetector and stored in a List<FirebaseVisionFace>, one entry per detected face.
To save the bitmap data contained within the different rects (faces):
for (FirebaseVisionFace face : firebaseVisionFaces) {
    Rect rect = face.getBoundingBox();
    Bitmap original = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false); //scaled bitmap created from the captured image
    Bitmap faceCrop = Bitmap.createBitmap(original, rect.left, rect.top, rect.width(), rect.height()); //face cropped using the rect values
}
faceCrop contains the face-only bitmap data contained within the parameters of the rect.
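One caveat to add (my addition, not part of the original answer): the detector's bounding box can extend past the bitmap's edges, and Bitmap.createBitmap() throws an IllegalArgumentException if the requested region falls outside the source, so it can help to clamp the rect first:
// Clamp the face rect to the bitmap bounds before cropping, since
// Bitmap.createBitmap() rejects regions that fall outside the source bitmap.
Rect clamped = new Rect(
        Math.max(0, rect.left),
        Math.max(0, rect.top),
        Math.min(original.getWidth(), rect.right),
        Math.min(original.getHeight(), rect.bottom));
Bitmap faceCrop = Bitmap.createBitmap(original, clamped.left, clamped.top, clamped.width(), clamped.height());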
Hope this helps....
I am trying to get a Bitmap from the current frame of my ARSession with ARCore. But it always equals null. I've already been searching the web for quite a while but cannot figure out what I am doing wrong.
try {
    capturedImage = mFrame.acquireCameraImage();
    ByteBuffer buffer = capturedImage.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.capacity()];
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    if (bitmap == null)
        Log.e(TAG, "Bitmap was NOT initialized!");
} catch (Exception e) {
}
I am getting mFrame from onDrawFrame of my GLSurfaceView which I use to display the camera image. Everything works just fine except that my Bitmap equals null.
I am using a Button, so that only a single Frame is being used, as follows:
scanButton = (Button) findViewById(R.id.scanButton);
scanButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        checkbox = false;
        if (capturedImage != null) capturedImage.close();
        BitmapMethod();
    }
});
capturedImage, buffer and bytes all do not equal null.
Is there probably something wrong with mFrame.acquireCameraImage()?
Thanks a lot
Is there probably something wrong with mFrame.acquireCameraImage()?
No, mFrame.acquireCameraImage() works as intended.
But it always equals null
The Bitmap will always equal null, since BitmapFactory does not understand the image data that is passed to it.
The method mFrame.acquireCameraImage() returns an object of type Image in the YUV (YCbCr) format. Images of this type have 3 planes, which is explained here very nicely. The byte arrays contained in these planes can be read directly by CPU/GPU code, but BitmapFactory cannot read this kind of data. Hence, you need to convert the YUV image into something else first.
For that, you can use the YuvImage class to wrap the raw data and then convert it to JPEG using the compressToJpeg method. Once you have the byte array from that, you can simply do what you're doing above: use BitmapFactory to convert it into a Bitmap and set it on your ImageView.
Note: YUV has 3 planes. Create a single byte array from all the planes and then pass it to the YuvImage constructor. Though not elaborate, it should look something like this:
//The camera image is in YUV (YCbCr) format. Get the buffer of each plane and use
//them to create one new byte array sized to hold all three buffers combined.
val cameraPlaneY = cameraImage.planes[0].buffer
val cameraPlaneU = cameraImage.planes[1].buffer
val cameraPlaneV = cameraImage.planes[2].buffer
val compositeByteArray = ByteArray(cameraPlaneY.capacity() + cameraPlaneU.capacity() + cameraPlaneV.capacity())
cameraPlaneY.get(compositeByteArray, 0, cameraPlaneY.capacity())
cameraPlaneU.get(compositeByteArray, cameraPlaneY.capacity(), cameraPlaneU.capacity())
cameraPlaneV.get(compositeByteArray, cameraPlaneY.capacity() + cameraPlaneU.capacity(), cameraPlaneV.capacity())
//Caveat: NV21 expects the V and U samples interleaved after Y, so this plain
//Y+U+V concatenation can come out with swapped colors; see the answer below.
val baOutputStream = ByteArrayOutputStream()
val yuvImage = YuvImage(compositeByteArray, ImageFormat.NV21, cameraImage.width, cameraImage.height, null)
yuvImage.compressToJpeg(Rect(0, 0, cameraImage.width, cameraImage.height), 75, baOutputStream)
val byteForBitmap = baOutputStream.toByteArray()
val bitmap = BitmapFactory.decodeByteArray(byteForBitmap, 0, byteForBitmap.size)
imageView.setImageBitmap(bitmap)
That's just rough code with scope for improvement, perhaps. Also refer here.
I also wound up recreating the same situation you were facing: I was getting the Image object as null.
After some research, I found that the problem was in the flow of the logic.
I then wrote the following code and it solved my issue:
I defined the following boolean to be set to capture the current frame on the button click:
private static boolean captureCurrentFrame = false;
I wrote this code in the onClick() handler to request capture of the current frame's RGB and depth images:
public void captureFrame(View view) {
    Toast.makeText(getApplicationContext(), "Capturing depth and rgb photo", Toast.LENGTH_SHORT).show();
    captureCurrentFrame = true;
}
I put this section in the onDrawFrame() method, just after getting the frame from session.update():
if (captureCurrentFrame) {
    RGBImage = frame.acquireCameraImage();
    DepthImage = frame.acquireDepthImage();
    Log.d("Image", "Format of the RGB Image: " + RGBImage.getFormat());
    Log.d("Image", "Format of the Depth Image: " + DepthImage.getFormat());
    RGBImage.close();
    DepthImage.close();
    captureCurrentFrame = false;
}
The reason I was getting null was that the code in the onClick listener ran before the next pass through onDrawFrame(), so the images were never stored in the variables.
Therefore, I shifted the logic into onDrawFrame() and triggered it through the boolean flag set by the listener.
I don't know if there is anybody still looking for the answer, but this is my code.
Image image = mFrame.acquireCameraImage();
byte[] nv21;
// Get the three planes.
ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
int ySize = yBuffer.remaining();
int uSize = uBuffer.remaining();
int vSize = vBuffer.remaining();
nv21 = new byte[ySize + uSize + vSize];
//U and V are swapped
yBuffer.get(nv21, 0, ySize);
vBuffer.get(nv21, ySize, vSize);
uBuffer.get(nv21, ySize + vSize, uSize);
int width = image.getWidth();
int height = image.getHeight();
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] byteArray = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
image.close(); // release the Image back to ARCore once the bytes are copied out
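One caveat worth noting (my addition, not from the answer above): the Y+V+U copy only produces valid NV21 when the chroma planes are stored interleaved with a pixel stride of 2, which holds on most devices but is not guaranteed by the YUV_420_888 format. A quick sanity check might look like:
// Assumption guard: the NV21 shortcut above relies on the V plane
// being interleaved with U at a pixel stride of 2.
if (image.getPlanes()[2].getPixelStride() != 2) {
    Log.w("YuvToBitmap", "Unexpected chroma layout; colors may come out wrong");
}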
I am working on an Android app that needs to generate a QR code, and I have that part working.
My problem is on the printing side. I am using a Mobiprint 3 device, which has a built-in thermal printer, but the printer only supports 24-bit bitmaps.
My question is: is there a way to create a 24-bit Bitmap in Android, given that Android only supports 32-bit? I have googled for a week but found nothing that solves my problem.
Thank you in advance.
BTW, this is my code:
//method for generating the QR code bitmap
try {
    bitmap = qrGenerator.generateQRCode(duCode);
    int width, height;
    height = bitmap.getHeight();
    width = bitmap.getWidth();
    // draw the QR bitmap through a desaturating color filter to get grayscale
    bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(bitmap, 0, 0, paint);
} catch (WriterException e) {
    e.printStackTrace();
}
And this is where the bitmap is passed for printing:
print.printBitmap(getBitmap());
This code only prints a pure black square.
PS: print.printBitmap() is from the Mobiprint API.
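For what it's worth, here is a sketch of one possible direction, assuming the printer accepts raw 24-bit pixel rows (whether it wants BGR or RGB order, or a BMP header in front, is an assumption to verify against the Mobiprint docs). Android's Bitmap has no 24-bit config, so the ARGB_8888 pixels have to be repacked by hand:
// Sketch: repack a 32-bit ARGB_8888 Bitmap into raw 24-bit BGR bytes
// (3 bytes per pixel, alpha dropped). The byte order the printer expects
// is an assumption; check the Mobiprint documentation.
public static byte[] toBgr24(Bitmap src) {
    int w = src.getWidth();
    int h = src.getHeight();
    int[] argb = new int[w * h];
    src.getPixels(argb, 0, w, 0, 0, w, h);
    byte[] bgr = new byte[w * h * 3];
    for (int i = 0; i < argb.length; i++) {
        int p = argb[i];
        bgr[i * 3]     = (byte) (p & 0xFF);          // blue
        bgr[i * 3 + 1] = (byte) ((p >> 8) & 0xFF);   // green
        bgr[i * 3 + 2] = (byte) ((p >> 16) & 0xFF);  // red
    }
    return bgr;
}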
If this question is a duplicate, let me know the link to the original question, because I was unable to find a good link that resolves my current problem.
I am working with the Android camera and I am able to take pictures from my app, but I want to write a name on top of the taken picture, and I cannot figure out how to do it.
Sorry, I don't have any code to use as a reference.
Any help will be appreciated; thanks in advance to all of you.
Try the following code. Note that scale was undefined in the original snippet; deriving it from the system display metrics here is an assumption on my part:
public Bitmap drawTextToBitmap(Bitmap bitmap, String mText) {
    try {
        android.graphics.Bitmap.Config bitmapConfig = bitmap.getConfig();
        // set a default bitmap config if none is set
        if (bitmapConfig == null) {
            bitmapConfig = android.graphics.Bitmap.Config.ARGB_8888;
        }
        // resource bitmaps are immutable,
        // so we need to convert to a mutable copy
        bitmap = bitmap.copy(bitmapConfig, true);
        Canvas canvas = new Canvas(bitmap);
        // screen density for dp-to-px conversion (undefined in the original snippet)
        float scale = android.content.res.Resources.getSystem().getDisplayMetrics().density;
        // new anti-aliased Paint
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // text color - #6E6E6E
        paint.setColor(Color.rgb(110, 110, 110));
        // text size in pixels
        paint.setTextSize((int) (12 * scale));
        // text shadow
        paint.setShadowLayer(1f, 0f, 1f, Color.DKGRAY);
        // measure the text, then position it on the canvas
        Rect bounds = new Rect();
        paint.getTextBounds(mText, 0, mText.length(), bounds);
        int x = (bitmap.getWidth() - bounds.width()) / 6;
        int y = (bitmap.getHeight() + bounds.height()) / 5;
        canvas.drawText(mText, x * scale, y * scale, paint);
        return bitmap;
    } catch (Exception e) {
        return null;
    }
}
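A hypothetical call (capturedPhoto and imageView are assumed names from your own code):
// Stamp the name onto the captured photo and show the result.
Bitmap stamped = drawTextToBitmap(capturedPhoto, "John Doe");
if (stamped != null) {
    imageView.setImageBitmap(stamped);
}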
I am making an Android game, but when I load my bitmaps I get a memory error. I know this is caused by a very large bitmap (the game background), but I don't know how to avoid the "bitmap size exceeds VM budget" error. I can't rescale the bitmap to make it smaller, because I can't make the background smaller. Any suggestions?
Oh yeah, and here's the code that causes the error:
space = BitmapFactory.decodeResource(context.getResources(),
        R.drawable.background);
space = Bitmap.createScaledBitmap(space,
        (int) (space.getWidth() * widthRatio),
        (int) (space.getHeight() * heightRatio), false);
You're going to have to sample the image down. You can't "scale" it below the screen size, obviously, but for small screens it doesn't have to be as high-resolution as it does for big screens.
Long story short, you have to use the inSampleSize option to downsample. It should actually be pretty easy if the image is meant to fit the screen:
final WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
final Display display = wm.getDefaultDisplay();
final int dimension = Math.max(display.getHeight(), display.getWidth());
final Options opt = new BitmapFactory.Options();
opt.inJustDecodeBounds = true;
InputStream bitmapStream = /* input stream for bitmap */;
BitmapFactory.decodeStream(bitmapStream, null, opt);
try
{
    bitmapStream.close();
}
catch (final IOException e)
{
    // ignore
}
final int imageHeight = opt.outHeight;
final int imageWidth = opt.outWidth;
int exactSampleSize = 1;
if (imageHeight > dimension || imageWidth > dimension)
{
    if (imageWidth > imageHeight)
    {
        exactSampleSize = Math.round((float) imageHeight / (float) dimension);
    }
    else
    {
        exactSampleSize = Math.round((float) imageWidth / (float) dimension);
    }
}
opt.inSampleSize = exactSampleSize; // if you round to the nearest power of 2, the sampling will be more efficient... on the other hand, math is hard.
opt.inJustDecodeBounds = false;
bitmapStream = /* new input stream for bitmap, make sure not to re-use the stream from above or this won't work */;
final Bitmap img = BitmapFactory.decodeStream(bitmapStream, null, opt);
/* Now go clean up your open streams... : ) */
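As the inline comment hints, inSampleSize works best as a power of two; a small helper for that rounding (the name is mine) could look like this:
// Round a computed sample size down to the nearest power of two, since
// BitmapFactory is most efficient with power-of-two inSampleSize values.
static int floorPowerOfTwo(int sampleSize)
{
    int pow = 1;
    while (pow * 2 <= sampleSize)
    {
        pow *= 2;
    }
    return pow;
}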
Hope that helps.
This may help you: http://developer.android.com/training/displaying-bitmaps/index.html
From the Android Developer Website, a tutorial on how to efficiently display bitmaps + other stuff. =]
I don't understand why you are using a bitmap for the background. If it's necessary, that's okay; otherwise, use a layout and set the image as its background, since it is a background image. This is important (check the Android docs; they have clearly indicated this issue).
You can do it in the following way:
Drawable d = getResources().getDrawable(R.drawable.your_background);
backgroundRelativeLayout.setBackgroundDrawable(d);
On many Android devices the per-app heap budget is as small as 16 MB, so you MUST follow these instructions: Loading Large Bitmaps Efficiently.
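For reference, a sketch of that guide's two-pass pattern applied to the asker's decodeResource call (targetWidth is an assumed parameter, e.g. the screen width):
// Pass 1: decode only the image bounds, then choose a power-of-two sample size.
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inJustDecodeBounds = true;
BitmapFactory.decodeResource(context.getResources(), R.drawable.background, opt);
int inSampleSize = 1;
while (opt.outWidth / (inSampleSize * 2) >= targetWidth) {
    inSampleSize *= 2;
}
// Pass 2: decode the downsampled pixels for real.
opt.inSampleSize = inSampleSize;
opt.inJustDecodeBounds = false;
Bitmap space = BitmapFactory.decodeResource(context.getResources(), R.drawable.background, opt);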