I want my ListView to look like a notepad, i.e. with a horizontal-lines background pattern. Following the Notepad sample, I can extend TextView and override its onDraw() like this:
// inside the custom TextView; 'paint' is a Paint field set up in the constructor
@Override
protected void onDraw(Canvas canvas) {
    Rect r = new Rect();
    for (int i = 0; i < getLineCount(); i++) {
        int baseline = getLineBounds(i, r);
        canvas.drawLine(r.left, baseline + 1, r.right, baseline + 1, paint);
    }
    super.onDraw(canvas);
}
but when there are just a few elements in the list, there won't be enough lines to fill the page (actual result on the left, desired on the right):
So I tried an alternative approach: overriding ListView.onDraw(). Unfortunately, there's no immediate way to compute the top scroll offset (getScrollY() always returns 0), and above all I would have to disable all caching and drawing optimizations, which would definitely kill performance, besides not scaling to large lists.
Finally, my row widgets are not plain text views: they are complex layouts, even if the main content is indeed a TextView. This means I can't call getLineBounds() inside the layout (a layout is not a text view), and I can't call it on the TextView either, because the TextView is smaller than the surrounding layout, so there would be gaps on all four sides.
How can I architect a solution that displays my custom widgets and fills the entire window with horizontal lines? A naive approach would be to add dummy empty elements to the list until they fill all the available space, but this is a hack and there must be a better way. Using a background image is not an option, since the distance between lines must be customizable at runtime.
The code below is based on the simple example from your question, a custom TextView that draws a line at the bottom (and with no dividers in the list). In this case I would make a custom ListView and override its dispatchDraw() method like below:
class CustomListView extends ListView {

    private Paint mPaint = new Paint();
    private Paint mPaintBackground = new Paint();

    public CustomListView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mPaint.setColor(Color.RED);
        mPaintBackground.setColor(Color.CYAN);
    }

    @Override
    protected void dispatchDraw(Canvas canvas) {
        super.dispatchDraw(canvas);
        // ListView's height
        final int currentHeight = getMeasuredHeight();
        // this will let you know the status for the ListView, fitting/not
        // fitting content
        final int scrolledHeight = computeVerticalScrollRange();
        if (scrolledHeight >= currentHeight || scrolledHeight == 0) {
            // there is no need to draw something (for simplicity I assumed that
            // if the adapter has no items I wouldn't draw something on the
            // screen. If you still do want the lines then pick a decent value
            // to simulate a row's height and draw them until you hit the
            // ListView's getMeasuredHeight)
            return;
        } else {
            // get the last drawn child
            final View lastChild = getChildAt(getChildCount() - 1);
            // values used to know where to start drawing lines
            final int lastChildBottom = lastChild.getBottom();
            // last child's height (use this to determine an appropriate value
            // for the row height)
            final int lastChildHeight = lastChild.getMeasuredHeight();
            // determine the number of lines required to fill the ListView
            final int nrOfLines = (currentHeight - lastChildBottom) / lastChildHeight;
            // I used this to simulate a special color for the ListView's row background
            Rect r = new Rect(0, lastChildBottom, getMeasuredWidth(), getMeasuredHeight());
            canvas.drawRect(r, mPaintBackground);
            for (int i = 0; i < nrOfLines; i++) {
                canvas.drawLine(0, lastChildBottom + (i + 1) * lastChildHeight,
                        getMeasuredWidth(), lastChildBottom + (i + 1) * lastChildHeight, mPaint);
            }
        }
    }
}
See if you can use the code above and adapt it to your own needs.
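If you also want lines when the adapter is empty (the case the comment above skips), here is a minimal sketch, assuming a hypothetical fallback row height, since there is no child to measure:

// Hypothetical fallback row height (e.g. from a dimension resource); with an
// empty adapter there is no child to measure, so we have to assume a value.
private int mFallbackRowHeight = 48;

private void drawLinesOnEmptyList(Canvas canvas) {
    // start one row height from the top and step down until the lines
    // run past the ListView's measured height
    for (int y = mFallbackRowHeight; y < getMeasuredHeight(); y += mFallbackRowHeight) {
        canvas.drawLine(0, y, getMeasuredWidth(), y, mPaint);
    }
}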
I have been searching the whole day for a solution. I've checked out several threads regarding my problem:
Custom detector object
Reduce bar code tracking window
and more...
But they didn't help me much. Basically, I want the camera preview to be fullscreen, but text should only be recognized in the center of the screen, where a rectangle is drawn.
Technologies I am using:
Google Mobile Vision API for optical character recognition (OCR)
Dependency: play-services-vision
My current state: I created a BoxDetector class:
public class BoxDetector extends Detector {
    private Detector mDelegate;
    private int mBoxWidth, mBoxHeight;

    public BoxDetector(Detector delegate, int boxWidth, int boxHeight) {
        mDelegate = delegate;
        mBoxWidth = boxWidth;
        mBoxHeight = boxHeight;
    }

    public SparseArray detect(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        int right = (width / 2) + (mBoxHeight / 2);
        int left = (width / 2) - (mBoxHeight / 2);
        int bottom = (height / 2) + (mBoxWidth / 2);
        int top = (height / 2) - (mBoxWidth / 2);
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Frame croppedFrame =
                new Frame.Builder()
                        .setBitmap(bitmap)
                        .setRotation(frame.getMetadata().getRotation())
                        .build();
        return mDelegate.detect(croppedFrame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }

    @Override
    public void receiveFrame(Frame frame) {
        mDelegate.receiveFrame(frame);
    }
}
And created an instance of this class here:
final TextRecognizer textRecognizer = new TextRecognizer.Builder(App.getContext()).build();

// Instantiate the created box detector in order to limit the text detector's scan area
BoxDetector boxDetector = new BoxDetector(textRecognizer, width, height);

// Set the TextRecognizer's processor, but on the box detector
boxDetector.setProcessor(new Detector.Processor<TextBlock>() {
    @Override
    public void release() {
    }

    /*
     * Detect all the text from the camera using TextBlock, append the values
     * into a StringBuilder, and set the result on the TextView.
     */
    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        final SparseArray<TextBlock> items = detections.getDetectedItems();
        if (items.size() != 0) {
            mTextView.post(new Runnable() {
                @Override
                public void run() {
                    StringBuilder stringBuilder = new StringBuilder();
                    for (int i = 0; i < items.size(); i++) {
                        TextBlock item = items.valueAt(i);
                        stringBuilder.append(item.getValue());
                        stringBuilder.append("\n");
                    }
                    mTextView.setText(stringBuilder.toString());
                }
            });
        }
    }
});

mCameraSource = new CameraSource.Builder(App.getContext(), boxDetector)
        .setFacing(CameraSource.CAMERA_FACING_BACK)
        .setRequestedPreviewSize(height, width)
        .setAutoFocusEnabled(true)
        .setRequestedFps(15.0f)
        .build();
On execution this Exception is thrown:
Exception thrown from receiver.
java.lang.IllegalStateException: Detector processor must first be set with setProcessor in order to receive detection results.
    at com.google.android.gms.vision.Detector.receiveFrame(com.google.android.gms:play-services-vision-common@@19.0.0:17)
    at com.spectures.shopendings.Helpers.BoxDetector.receiveFrame(BoxDetector.java:62)
    at com.google.android.gms.vision.CameraSource$zzb.run(com.google.android.gms:play-services-vision-common@@19.0.0:47)
    at java.lang.Thread.run(Thread.java:919)
If anyone has a clue what my mistake is, or has any alternatives, I would really appreciate it. Thank you!
This is what I want to achieve: a rectangular text-area scanner.
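A note on the stack trace before the answers: setProcessor() is called on the BoxDetector, but the receiveFrame() override forwards each frame to the wrapped TextRecognizer, which never had a processor set on it, so Detector.receiveFrame() refuses to run. One possible fix (a sketch, not a confirmed diagnosis, assuming the base Detector.receiveFrame() dispatches detect() results to the processor set via setProcessor()) is to drop the override so the inherited pipeline is used:

public class BoxDetector extends Detector {
    // constructor, detect(), isOperational(), setFocus() unchanged from above

    // receiveFrame() override removed: the inherited Detector.receiveFrame()
    // calls this.detect(frame) (which crops, then delegates) and delivers the
    // results to the processor set on this BoxDetector via setProcessor().
}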
Google Vision detection takes a Frame as input. A Frame is image data together with its width and height as associated metadata. You can process this frame (cut it down to a smaller, centered frame) before passing it to the detector. This processing must be fast, and happen alongside the camera's own image processing.
Check out my GitHub below and search for FrameProcessingRunnable. You can see the frame input there; you can do the processing yourself at that point.
CameraSource
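For illustration, here is a minimal sketch of such a centered pre-crop, along the same lines as the BoxDetector.detect() code above (the croppedWidth and croppedHeight parameters are assumptions):

// Sketch: crop the center of an NV21 frame via a YuvImage/JPEG round-trip,
// mirroring the approach in BoxDetector.detect() above.
private Frame cropCenter(Frame frame, int croppedWidth, int croppedHeight) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    int left = (width - croppedWidth) / 2;
    int top = (height - croppedHeight) / 2;
    YuvImage yuv = new YuvImage(frame.getGrayscaleImageData().array(),
            ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // the JPEG round-trip is simple but not free; a direct byte-level crop of
    // the NV21 buffer would be faster, as the answer above implies
    yuv.compressToJpeg(new Rect(left, top, left + croppedWidth, top + croppedHeight), 100, out);
    byte[] jpeg = out.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    return new Frame.Builder()
            .setBitmap(bitmap)
            .setRotation(frame.getMetadata().getRotation())
            .build();
}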
You can try to pre-parse the CameraSource feed as @Thành Hà Văn mentioned (which I tried first myself, but discarded after trying to adjust for the old and new camera APIs), but I found it easier to just limit your search area and use the detections returned by the default Vision detector and CameraSource. You can do it in several ways. For example,
(1) limiting the area of the screen by setting bounds based on the screen/preview size
(2) creating a custom class that can be used to dynamically set the detection area
I chose option 2 (I can post my custom class if needed), and then filtered the results for detections only within the specified area:
for (j in 0 until detections.size()) {
    val textBlock = detections.valueAt(j) as TextBlock
    for (line in textBlock.components) {
        if ((line.boundingBox.top.toFloat() * hScale) >= scanView.top.toFloat() &&
            (line.boundingBox.bottom.toFloat() * hScale) <= scanView.bottom.toFloat()) {
            canvas.drawRect(line.boundingBox, linePainter)
            if (scanning)
                if (((line.boundingBox.top.toFloat() * hScale) <= yTouch && (line.boundingBox.bottom.toFloat() * hScale) >= yTouch) &&
                    ((line.boundingBox.left.toFloat() * wScale) <= xTouch && (line.boundingBox.right.toFloat() * wScale) >= xTouch)) {
                    acceptDetection(line, scanCount)
                }
        }
    }
}
The scanning section is just some custom code I used to let the user select which detections to keep. You would replace everything inside the if(line....) block with your own code, acting only on detections in the cropped area. Note that this example only crops vertically; you could crop horizontally as well, or in both directions.
In google-vision you can get the coordinates of detected text as described in How to get position of text in an image using Mobile Vision API?
You get the TextBlocks from the TextRecognizer, then filter the TextBlocks by their coordinates, which can be determined by the getBoundingBox() or getCornerPoints() methods of the TextBlock class:
TextRecognizer
Recognition results are returned by detect(Frame). The OCR algorithm
tries to infer the text layout and organizes each paragraph into
TextBlock instances. If any text is detected, at least one TextBlock
instance will be returned.
[..]
Public Methods
public SparseArray<TextBlock> detect (Frame frame) Detects and recognizes text in an image. Only supports bitmap and NV21 for now.
Returns mapping of int to TextBlock, where the int domain represents an opaque ID for the text block.
Source: https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextRecognizer
TextBlock
public class TextBlock extends Object implements Text
A block of text (think of it as a paragraph) as deemed by the OCR
engine.
Public Method Summary
Rect getBoundingBox() Returns the TextBlock's axis-aligned bounding box.
List<? extends Text> getComponents() Smaller components that comprise this entity, if any.
Point[] getCornerPoints() 4 corner points in clockwise direction starting with top-left.
String getLanguage() Prevailing language in the TextBlock.
String getValue() Retrieve the recognized text as a string.
Source: https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextBlock
So you basically proceed as in How to get position of text in an image using Mobile Vision API?, except that you do not split each block into lines and then each line into words like
//Loop through each `Block`
foreach (TextBlock textBlock in blocks)
{
    IList<IText> textLines = textBlock.Components;
    //loop through each `Line`
    foreach (IText currentLine in textLines)
    {
        IList<IText> words = currentLine.Components;
        //Loop through each `Word`
        foreach (IText currentword in words)
        {
            //Get the Rectangle/boundingBox of the word
            RectF rect = new RectF(currentword.BoundingBox);
            rectPaint.Color = Color.Black;
            //Finally draw Rectangle/boundingBox around word
            canvas.DrawRect(rect, rectPaint);
            //Set image to the `View`
            imgView.SetImageDrawable(new BitmapDrawable(Resources, tempBitmap));
        }
    }
}
instead you get the bounding box of all the text blocks and then select the bounding box with the coordinates closest to the center of the screen/frame, or to a rectangle that you specify (i.e. How can i get center x,y of my view in android?). For this you use the getBoundingBox() or getCornerPoints() methods of TextBlock ...
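A sketch of that selection, assuming the frame dimensions are known (frameWidth and frameHeight are illustrative parameters, not from the answer above):

// Pick the TextBlock whose bounding-box center is closest to the frame center.
TextBlock pickCenterBlock(SparseArray<TextBlock> blocks, int frameWidth, int frameHeight) {
    float cx = frameWidth / 2f;
    float cy = frameHeight / 2f;
    TextBlock best = null;
    double bestDist = Double.MAX_VALUE;
    for (int i = 0; i < blocks.size(); i++) {
        TextBlock block = blocks.valueAt(i);
        Rect box = block.getBoundingBox();
        double dist = Math.hypot(box.exactCenterX() - cx, box.exactCenterY() - cy);
        if (dist < bestDist) {
            bestDist = dist;
            best = block;
        }
    }
    return best;
}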
I'm trying to implement face detection in my camera preview. I followed the Android reference pages to implement a custom camera preview in a TextureView, placed in a FrameLayout. Also in this FrameLayout is a SurfaceView with a clear background (overlapping the camera preview). My app dynamically draws the bounds Rect of the first face from CaptureResult.STATISTICS_FACES to the SurfaceView's canvas every time the camera preview is updated (once per frame). My app assumes only one face needs to be recognized.
My issue arises when I draw the rectangle. I get the rectangle in the correct place if I keep my face in the center of the camera view, but when I move my head upward the rectangle moves to the right, and when I move my head to the right the rectangle moves down. It's as if the canvas needs to be rotated by -90 degrees, but that doesn't work for me (explained below the code).
in my activity's onCreate():
//face recognition
rectangleView = (SurfaceView) findViewById(R.id.rectangleView);
rectangleView.setZOrderOnTop(true);
rectangleView.getHolder().setFormat(PixelFormat.TRANSPARENT); //remove black background from view
purplePaint = new Paint();
purplePaint.setColor(Color.rgb(175, 0, 255));
purplePaint.setStyle(Paint.Style.STROKE);
in TextureView.SurfaceTextureListener.onSurfaceTextureAvailable() (in the try {} block that encapsulates camera.open()):
Rect cameraBounds = cameraCharacteristics.get(
        CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
cameraWidth = cameraBounds.right;
cameraHeight = cameraBounds.bottom;
in the same listener, within onSurfaceTextureUpdated():
if (detectedFace != null && rectangleFace.height() > 0) {
    Canvas currentCanvas = rectangleView.getHolder().lockCanvas();
    if (currentCanvas != null) {
        currentCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
        int canvasWidth = currentCanvas.getWidth();
        int canvasHeight = currentCanvas.getHeight();
        int l = rectangleFace.right;
        int t = rectangleFace.bottom;
        int r = rectangleFace.left;
        int b = rectangleFace.top;
        int left = (canvasWidth * l) / cameraWidth;
        int top = (canvasHeight * t) / cameraHeight;
        int right = (canvasWidth * r) / cameraWidth;
        int bottom = (canvasHeight * b) / cameraHeight;
        currentCanvas.drawRect(left, top, right, bottom, purplePaint);
    }
    rectangleView.getHolder().unlockCanvasAndPost(currentCanvas);
}
method onCaptureCompleted() in the CameraCaptureSession.CaptureCallback invoked by the CameraCaptureSession.setRepeatingRequest() looper:
//May need better face recognition sdk or api
Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
if (faces.length > 0) {
    detectedFace = faces[0];
    rectangleFace = detectedFace.getBounds();
}
All variables are instantiated outside of the functions.
In case you can't quite understand my question or need additional information, a similar question is posted here:
How can i handle the rotation issue with Preview & FaceDetection
However, unlike the above poster, I couldn't even get my canvas to show the rectangle after rotating it, so that can't be the solution.
I tried to rotate my points by -90 degrees using x = -y, y = x (left = -top, top = left), and it doesn't do the trick either. I feel like some kind of transformation needs to be applied to the points, but I don't know how to go about it.
Any ideas on how to fix this?
For future reference, this is the solution I ended up with:
Set a class/activity variable called orientation_offset:
orientation_offset = cameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
This is the angle that the camera sensor's view (or rectangle for face detection) needs to be rotated to be viewed correctly.
Then, I changed onSurfaceTextureUpdated() :
Canvas currentCanvas = rectangleView.getHolder().lockCanvas();
if (currentCanvas != null) {
    currentCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
    if (detectedFace != null && rectangleFace.height() > 0) {
        int canvasWidth = currentCanvas.getWidth();
        int canvasHeight = currentCanvas.getHeight();
        int faceWidthOffset = rectangleFace.width() / 8;
        int faceHeightOffset = rectangleFace.height() / 8;
        currentCanvas.save();
        currentCanvas.rotate(360 - orientation_offset, canvasWidth / 2, canvasHeight / 2);
        int l = rectangleFace.right;
        int t = rectangleFace.bottom;
        int r = rectangleFace.left;
        int b = rectangleFace.top;
        int left = (canvasWidth - (canvasWidth * l) / cameraWidth) - faceWidthOffset;
        int top = (canvasHeight * t) / cameraHeight - faceHeightOffset;
        int right = (canvasWidth - (canvasWidth * r) / cameraWidth) + faceWidthOffset;
        int bottom = (canvasHeight * b) / cameraHeight + faceHeightOffset;
        currentCanvas.drawRect(left, top, right, bottom, purplePaint);
        currentCanvas.restore();
    }
}
rectangleView.getHolder().unlockCanvasAndPost(currentCanvas);
I'll leave the question open in case somebody else has a solution to offer.
I'm working on a drawing application and am pretty close to release, but I'm having issues with the eraser part of the app. I have two main screens (fragments): one is just a blank white canvas that the user can draw on, with some options and so on; the other is a note-taking fragment that looks like notebook paper. For erasing on the drawing fragment, I can simply use the background of the canvas and the user won't know the difference. On the note fragment, though, I cannot do this because I need to keep the background intact. I have looked into PorterDuff modes, tried the CLEAR mode, and tried drawing onto a separate bitmap, but nothing has worked. If there were a way to control what gets drawn on top of what, that would be useful. I'm open to any suggestions; I can't seem to get anything to work.
I've also played with enabling a drawing cache before erasing, and that doesn't work. In addition, I tried enabling hardware acceleration, and that made my custom view behave oddly. Below is the relevant code. My onDraw() method iterates through a lot of paths because I am querying them to allow for some other functionality.
@Override
protected void onDraw(Canvas canvas) {
    //draw the background type
    if (mDrawBackground) {
        //if the bitmap is not null draw it as the background, otherwise we are in a note view
        if (mBackgroundBitmap != null) {
            canvas.drawBitmap(mBackgroundBitmap, 0, 0, backPaint);
        } else {
            drawBackgroundType(mBackgroundType, canvas);
        }
    }
    for (int i = 0; i < paths.size(); i++) {
        //Log.i("DRAW", "On draw: " + i);
        //draw each previous path.
        mDrawPaint.setStrokeWidth(strokeSizes.get(i));
        mDrawPaint.setColor(colors.get(i));
        canvas.drawPath(paths.get(i), mDrawPaint);
    }
    //set paint attributes to the current values
    mDrawPaint.setStrokeWidth(strokeSize);
    mDrawPaint.setColor(mDrawColor);
    canvas.drawPath(mPath, mDrawPaint);
}
And my draw-background method:
/**
 * Method that actually draws the notebook paper background.
 * @param canvas the {@code Canvas} to draw on.
 */
private void drawNoteBookPaperBackground(Canvas canvas) {
    //create a bitmap for the background and a temporary canvas.
    mBackgroundBitmap = Bitmap.createBitmap(canvas.getWidth(), canvas.getHeight(), Bitmap.Config.ARGB_8888);
    mCanvas = new Canvas(mBackgroundBitmap);
    //set the color to white.
    mBackgroundBitmap.eraseColor(Color.WHITE);
    //get the height and width of the view minus padding.
    int height = getHeight() - getPaddingTop() - getPaddingBottom();
    int width = getWidth() - getPaddingLeft() - getPaddingRight();
    //figure out how many lines we can draw given a certain line spacing.
    int lineWidth = 50;
    int numOfLines = Math.round(height / lineWidth);
    Log.i("DRAWVIEW", "" + numOfLines);
    //iterate through the number of lines and draw them.
    for (int i = 0; i < numOfLines * lineWidth; i += lineWidth) {
        mCanvas.drawLine(0 + getPaddingLeft(), i + getPaddingTop(), width, i + getPaddingTop(), mNoteBookPaperLinePaint);
    }
    //now we need to draw the vertical lines on the left side of the view.
    float startPoint = 30;
    //set the color to red.
    mNoteBookPaperLinePaint.setColor(getResources().getColor(R.color.notebook_paper_vertical_line_color));
    //draw the first line
    mCanvas.drawLine(startPoint, 0, startPoint, getHeight(), mNoteBookPaperLinePaint);
    //space the second line next to the first one.
    startPoint += 20;
    //draw the second line
    mCanvas.drawLine(startPoint, 0, startPoint, getHeight(), mNoteBookPaperLinePaint);
    //reset the paint color.
    mNoteBookPaperLinePaint.setColor(getResources().getColor(R.color.notebook_paper_horizontal_line_color));
    canvas.drawBitmap(mBackgroundBitmap, 0, 0, backPaint);
}
To all who see this question, I thought I would add how I solved the problem. What I'm doing is creating a background bitmap in my custom view and then passing it to the hosting fragment. The fragment sets that bitmap as the background of its containing view group, so that when I use the PorterDuff.Mode.CLEAR Xfermode, the drawn paths are cleared but the background in the fragment parent remains untouched.
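A minimal sketch of that setup (names like mStrokeLayer and mErasePaint are illustrative, not from the actual app):

// Offscreen layer that holds only the drawn strokes; the notebook-paper
// background lives on the fragment's parent view behind this one.
Bitmap mStrokeLayer = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
Canvas mStrokeCanvas = new Canvas(mStrokeLayer);

// Eraser paint: CLEAR makes pixels transparent instead of painting over them,
// so erasing punches through to the background instead of destroying it.
Paint mErasePaint = new Paint();
mErasePaint.setStyle(Paint.Style.STROKE);
mErasePaint.setStrokeWidth(50f); // eraser size, arbitrary here
mErasePaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));

// Erase strokes go into the stroke layer only:
//   mStrokeCanvas.drawPath(erasePath, mErasePaint);
// onDraw() then just composites the layer over the untouched background:
//   canvas.drawBitmap(mStrokeLayer, 0, 0, null);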
In my project I have a View that draws on a canvas.
When the activity/fragment is loaded, everything is fine and the image is rendered correctly. But when the orientation changes, it is rendered only partially.
Here are two examples:
1. The correct rendering.
2. The rendering after an orientation change.
The class that I wrote extends View and overrides the onDraw() method. This is the code:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    int size = this.width / 2;
    int left = size / 2;
    int right = size + left;
    canvas.drawRoundRect(resistorRects[0], 6, 6, resPaint[0]);
    canvas.drawRect(resistorRects[FIRST_COLOR], resPaint[1]);
    canvas.drawRect(resistorRects[SECOND_COLOR], resPaint[2]);
    canvas.drawRect(resistorRects[THIRD_COLOR], resPaint[3]);
    canvas.drawRect(resistorRects[MULTIPLIER], resPaint[4]);
    canvas.drawRect(resistorRects[TOLERANCE], resPaint[5]);
    canvas.drawLine((float) left - 15, (float) 16.5, (float) left, (float) 17.5, resPaint[6]);
    canvas.drawLine((float) right, (float) 16.5, (float) right + 15, (float) 17.5, resPaint[6]);
}
The constructor of the class is:
public ResistorGraphicsView(Context context, int width) {
    super(context);
    this.width = width;
    int i = 0;
    resPaint = new Paint[7];
    resistorRects = new RectF[6];
    while (i < 6) {
        resPaint[i] = new Paint(Paint.ANTI_ALIAS_FLAG);
        resPaint[i].setColor(Color.parseColor("#DEADBEEF"));
        i++;
    }
    int size = this.width / 2;
    int left = size / 2;
    int right = size + left;
    resPaint[6] = new Paint(Paint.ANTI_ALIAS_FLAG);
    resPaint[6].setColor(Color.parseColor("#FFFFFFFF"));
    resistorRects[0] = new RectF(left, 0, right, 35);
    resistorRects[FIRST_COLOR] = new RectF(left + 10, 0, left + 20, 35);
    resistorRects[SECOND_COLOR] = new RectF(left + 30, 0, left + 40, 35);
    resistorRects[THIRD_COLOR] = new RectF(left + 50, 0, left + 60, 35);
    resistorRects[MULTIPLIER] = new RectF(left + 70, 0, left + 80, 35);
    resistorRects[TOLERANCE] = new RectF(right - 30, 0, right - 20, 35);
}
The strange thing is that when I select the option that inflates the layout from the side pane, the image is rendered correctly:
case RESISTOR_VALUE:
    getFragmentManager().beginTransaction().replace(R.id.activity_container, new ResistorCalcFragment()).commit();
    break;
In fact, if I select it from the side pane after the bad rendering, the view is then rendered correctly.
I tried to add some breakpoints, and it seems that everything is called correctly. Note that every time the view is rotated, I create a new instance of this object.
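One thing worth checking (an assumption, not a confirmed diagnosis): the rectangles are computed once in the constructor from the width that was passed in, so if that value still reflects the pre-rotation size, everything derived from it is stale. Moving the size-dependent setup into onSizeChanged() recomputes it once the view actually has its new dimensions:

// Sketch: recompute geometry when the view's size is (re)assigned.
// buildRects() is a hypothetical helper containing the RectF setup
// currently done in the constructor.
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    super.onSizeChanged(w, h, oldw, oldh);
    this.width = w;
    buildRects();  // re-create resistorRects from the current width
    invalidate();
}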
I'm working on the Android half of a cross-platform Android/iOS framework that lets you write apps in JS that work on both platforms. I mention this because it means I can't use things like 9-patches to get this effect. Full code at https://github.com/mschulkind/cordova-true-native-android
Here are two screenshots of the problem:
-Images redacted because I'm too new to be this useful. I will have to add them when I'm no longer a newbie.-
Here's the code that generates the drawable from https://github.com/mschulkind/cordova-true-native-android/blob/master/src/org/apache/cordova/plugins/truenative/ViewPlugin.java#L146
// Borrowed from:
// http://www.betaful.com/2012/01/programmatic-shapes-in-android/
private class ViewBackground extends ShapeDrawable {
    private final Paint mFillPaint, mStrokePaint;
    private final int mBorderWidth;

    public ViewBackground(
            Shape s, int backgroundColor, int borderColor, int borderWidth) {
        super(s);
        mFillPaint = new Paint(this.getPaint());
        mFillPaint.setColor(backgroundColor);
        mStrokePaint = new Paint(mFillPaint);
        mStrokePaint.setStyle(Paint.Style.STROKE);
        mStrokePaint.setStrokeWidth(borderWidth);
        mStrokePaint.setColor(borderColor);
        mBorderWidth = borderWidth;
    }

    @Override
    protected void onDraw(Shape shape, Canvas canvas, Paint paint) {
        shape.resize(canvas.getClipBounds().right, canvas.getClipBounds().bottom);
        Matrix matrix = new Matrix();
        matrix.setRectToRect(
                new RectF(
                        0, 0,
                        canvas.getClipBounds().right, canvas.getClipBounds().bottom),
                new RectF(
                        mBorderWidth / 2, mBorderWidth / 2,
                        canvas.getClipBounds().right - mBorderWidth / 2,
                        canvas.getClipBounds().bottom - mBorderWidth / 2),
                Matrix.ScaleToFit.FILL);
        canvas.concat(matrix);
        shape.draw(canvas, mFillPaint);
        if (mBorderWidth > 0) {
            shape.draw(canvas, mStrokePaint);
        }
    }
}
This has happened both when the drawable was set as the background of the EditText directly and when I set it as the background of a parent view around the EditText.
Does anyone have an idea of what's going on here, or what avenues I should explore?
Looks like you want to draw a rounded rectangle.
To achieve such a style, it is simpler to use an XML drawable.
You simply put an XML file into the drawable/ directory, where you describe the desired shape.
Some documentation about XML drawables is here: http://idunnolol.com/android/drawables.html
Look at the <shape> tag.
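Since the framework in the question builds its views in code rather than from resources, the same style can also be produced programmatically; here is a minimal sketch using GradientDrawable (all values illustrative):

// Rounded rectangle with a border, built in code instead of XML.
GradientDrawable background = new GradientDrawable();
background.setShape(GradientDrawable.RECTANGLE);
background.setColor(Color.WHITE);     // fill color
background.setCornerRadius(8f);       // corner radius in pixels
background.setStroke(2, Color.GRAY);  // border width (px) and color
editText.setBackground(background);   // API 16+; setBackgroundDrawable() before that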