In my project I have a View that draws on a canvas.
When the activity/fragment is loaded, everything is fine and the image is rendered correctly.
But when the orientation changes, it is only partially rendered.
Here are two examples:
1. The correct rendering:
2. The rendering when the orientation changes:
The class that I wrote extends View and overrides the onDraw() method. This is the code:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    int size = this.width / 2;
    int left = size / 2;
    int right = size + left;
    canvas.drawRoundRect(resistorRects[0], 6, 6, resPaint[0]);
    canvas.drawRect(resistorRects[FIRST_COLOR], resPaint[1]);
    canvas.drawRect(resistorRects[SECOND_COLOR], resPaint[2]);
    canvas.drawRect(resistorRects[THIRD_COLOR], resPaint[3]);
    canvas.drawRect(resistorRects[MULTIPLIER], resPaint[4]);
    canvas.drawRect(resistorRects[TOLERANCE], resPaint[5]);
    canvas.drawLine(left - 15, 16.5f, left, 17.5f, resPaint[6]);
    canvas.drawLine(right, 16.5f, right + 15, 17.5f, resPaint[6]);
}
The constructor of the class is:
public ResistorGraphicsView(Context context, int width) {
    super(context);
    this.width = width;
    int i = 0;
    resPaint = new Paint[7];
    resistorRects = new RectF[6];
    while (i < 6) {
        resPaint[i] = new Paint(Paint.ANTI_ALIAS_FLAG);
        resPaint[i].setColor(Color.parseColor("#DEADBEEF"));
        i++;
    }
    int size = this.width / 2;
    int left = size / 2;
    int right = size + left;
    resPaint[6] = new Paint(Paint.ANTI_ALIAS_FLAG);
    resPaint[6].setColor(Color.parseColor("#FFFFFFFF"));
    resistorRects[0] = new RectF(left, 0, right, 35);
    resistorRects[FIRST_COLOR] = new RectF(left + 10, 0, left + 20, 35);
    resistorRects[SECOND_COLOR] = new RectF(left + 30, 0, left + 40, 35);
    resistorRects[THIRD_COLOR] = new RectF(left + 50, 0, left + 60, 35);
    resistorRects[MULTIPLIER] = new RectF(left + 70, 0, left + 80, 35);
    resistorRects[TOLERANCE] = new RectF(right - 30, 0, right - 20, 35);
}
The strange thing is that when I select the option that inflates the layout from the side pane, the image is rendered correctly:
case RESISTOR_VALUE:
    getFragmentManager().beginTransaction().replace(R.id.activity_container, new ResistorCalcFragment()).commit();
    break;
In fact, if I select it from the side pane after the bad rendering, the view is then rendered correctly.
I tried adding some breakpoints, and it seems that everything is called correctly. Note that every time the device is rotated I create a new instance of this object.
I have been searching the whole day for a solution. I've checked out several threads regarding my problem:
Custom detector object
Reduce bar code tracking window
and more...
But they didn't help me much. Basically, I want the camera preview to be fullscreen, but text should only be recognized in the center of the screen, where a rectangle is drawn.
Technologies I am using:
Google Mobile Vision API for optical character recognition (OCR)
Dependency: play-services-vision
My current state: I created a BoxDetector class:
public class BoxDetector extends Detector {
    private Detector mDelegate;
    private int mBoxWidth, mBoxHeight;

    public BoxDetector(Detector delegate, int boxWidth, int boxHeight) {
        mDelegate = delegate;
        mBoxWidth = boxWidth;
        mBoxHeight = boxHeight;
    }

    public SparseArray detect(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        int right = (width / 2) + (mBoxHeight / 2);
        int left = (width / 2) - (mBoxHeight / 2);
        int bottom = (height / 2) + (mBoxWidth / 2);
        int top = (height / 2) - (mBoxWidth / 2);

        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        Frame croppedFrame =
                new Frame.Builder()
                        .setBitmap(bitmap)
                        .setRotation(frame.getMetadata().getRotation())
                        .build();

        return mDelegate.detect(croppedFrame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }

    @Override
    public void receiveFrame(Frame frame) {
        mDelegate.receiveFrame(frame);
    }
}
And I create an instance of this class here:
final TextRecognizer textRecognizer = new TextRecognizer.Builder(App.getContext()).build();

// Instantiate the created box detector in order to limit the text detector's scan area
BoxDetector boxDetector = new BoxDetector(textRecognizer, width, height);

// Set the TextRecognizer's Processor, but using the box detector
boxDetector.setProcessor(new Detector.Processor<TextBlock>() {
    @Override
    public void release() {
    }

    /*
     * Detect all the text from the camera using TextBlock, append the values
     * into a StringBuilder, which is then set on the TextView.
     */
    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        final SparseArray<TextBlock> items = detections.getDetectedItems();
        if (items.size() != 0) {
            mTextView.post(new Runnable() {
                @Override
                public void run() {
                    StringBuilder stringBuilder = new StringBuilder();
                    for (int i = 0; i < items.size(); i++) {
                        TextBlock item = items.valueAt(i);
                        stringBuilder.append(item.getValue());
                        stringBuilder.append("\n");
                    }
                    mTextView.setText(stringBuilder.toString());
                }
            });
        }
    }
});

mCameraSource = new CameraSource.Builder(App.getContext(), boxDetector)
        .setFacing(CameraSource.CAMERA_FACING_BACK)
        .setRequestedPreviewSize(height, width)
        .setAutoFocusEnabled(true)
        .setRequestedFps(15.0f)
        .build();
On execution, this exception is thrown:
Exception thrown from receiver.
java.lang.IllegalStateException: Detector processor must first be set with setProcessor in order to receive detection results.
at com.google.android.gms.vision.Detector.receiveFrame(com.google.android.gms:play-services-vision-common@@19.0.0:17)
at com.spectures.shopendings.Helpers.BoxDetector.receiveFrame(BoxDetector.java:62)
at com.google.android.gms.vision.CameraSource$zzb.run(com.google.android.gms:play-services-vision-common@@19.0.0:47)
at java.lang.Thread.run(Thread.java:919)
If anyone has a clue what my mistake is, or has any alternatives, I would really appreciate it. Thank you!
This is what I want to achieve, a rectangular text-area scanner:
Google Vision detection takes a Frame as input. A Frame is image data with an associated width and height. You can process this frame (crop it to a smaller, centered frame) before passing it to the Detector. This processing must be fast and happen alongside the camera's image processing.
Check out my GitHub below and search for FrameProcessingRunnable. You can see the frame input there, and you can do the processing yourself there.
CameraSource
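For illustration, here is a minimal sketch of that kind of center crop, reusing the same YuvImage round trip that the question's BoxDetector already performs; the cropFraction parameter and the method name are mine, not part of the API:

// Sketch only: build a new Frame from the center region of an incoming NV21 frame.
private Frame cropCenter(Frame frame, float cropFraction) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    int cropWidth = (int) (width * cropFraction);
    int cropHeight = (int) (height * cropFraction);
    Rect center = new Rect((width - cropWidth) / 2, (height - cropHeight) / 2,
            (width + cropWidth) / 2, (height + cropHeight) / 2);

    // YuvImage -> JPEG -> Bitmap round trip, as in the question's BoxDetector
    YuvImage yuv = new YuvImage(frame.getGrayscaleImageData().array(),
            ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(center, 100, out);
    byte[] jpeg = out.toByteArray();
    Bitmap cropped = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);

    return new Frame.Builder()
            .setBitmap(cropped)
            .setRotation(frame.getMetadata().getRotation())
            .build();
}

The cropped Frame can then be passed to the wrapped detector as before.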
You can try to pre-parse the CameraSource feed as @Thành Hà Văn mentioned (which I myself tried first, but discarded after trying to adjust for the old and new camera APIs), but I found it easier to just limit your search area and use the detections returned by the default Vision detector and CameraSource. You can do it in several ways. For example,
(1) limiting the area of the screen by setting bounds based on the screen/preview size
(2) creating a custom class that can be used to dynamically set the detection area
I chose option 2 (I can post my custom class if needed), and then in the detection callback I filtered for detections only within the specified area:
for (j in 0 until detections.size()) {
    val textBlock = detections.valueAt(j) as TextBlock
    for (line in textBlock.components) {
        if ((line.boundingBox.top.toFloat() * hScale) >= scanView.top.toFloat() &&
            (line.boundingBox.bottom.toFloat() * hScale) <= scanView.bottom.toFloat()) {
            canvas.drawRect(line.boundingBox, linePainter)
            if (scanning)
                if (((line.boundingBox.top.toFloat() * hScale) <= yTouch && (line.boundingBox.bottom.toFloat() * hScale) >= yTouch) &&
                    ((line.boundingBox.left.toFloat() * wScale) <= xTouch && (line.boundingBox.right.toFloat() * wScale) >= xTouch)) {
                    acceptDetection(line, scanCount)
                }
        }
    }
}
The scanning section is just some custom code I used to allow the user to select which detections they wanted to keep. You would replace everything inside the if(line...) block with your custom code to only act on the cropped detection area. Note that this example code only crops vertically, but you could also crop horizontally, or in both directions.
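As an aside, hScale and wScale are not defined in the snippet above; presumably they map camera-frame coordinates onto the view the detections are drawn over. A minimal sketch of how they might be computed, with purely illustrative names:

// Hypothetical scale factors mapping detector (camera frame) coordinates to the
// overlay view's coordinates; overlayView, frameWidth and frameHeight are assumptions.
float wScale = (float) overlayView.getWidth() / frameWidth;
float hScale = (float) overlayView.getHeight() / frameHeight;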
In google-vision you can get the coordinates of detected text as described in How to get position of text in an image using Mobile Vision API?
You get the TextBlocks from the TextRecognizer, then you filter the TextBlocks by their coordinates, which can be determined by the getBoundingBox() or getCornerPoints() methods of the TextBlock class:
TextRecognizer
Recognition results are returned by detect(Frame). The OCR algorithm
tries to infer the text layout and organizes each paragraph into
TextBlock instances. If any text is detected, at least one TextBlock
instance will be returned.
[..]
Public Methods
public SparseArray<TextBlock> detect (Frame frame) Detects and recognizes text in an image. Only supports bitmap and NV21 for now.
Returns mapping of int to TextBlock, where the int domain represents an opaque ID for the text block.
source : https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextRecognizer
TextBlock
public class TextBlock extends Object implements Text
A block of text (think of it as a paragraph) as deemed by the OCR
engine.
Public Method Summary
Rect getBoundingBox() Returns the TextBlock's axis-aligned bounding box.
List<? extends Text> getComponents() Smaller components that comprise this entity, if any.
Point[] getCornerPoints() 4 corner points in clockwise direction starting with top-left.
String getLanguage() Prevailing language in the TextBlock.
String getValue() Retrieve the recognized text as a string.
source : https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextBlock
So you basically proceed as in How to get position of text in an image using Mobile Vision API?, except that you do not split each block into lines and then each line into words, like:
//Loop through each `Block`
foreach (TextBlock textBlock in blocks)
{
    IList<IText> textLines = textBlock.Components;

    //loop through each `Line`
    foreach (IText currentLine in textLines)
    {
        IList<IText> words = currentLine.Components;

        //Loop through each `Word`
        foreach (IText currentword in words)
        {
            //Get the Rectangle/boundingBox of the word
            RectF rect = new RectF(currentword.BoundingBox);
            rectPaint.Color = Color.Black;

            //Finally Draw Rectangle/boundingBox around word
            canvas.DrawRect(rect, rectPaint);

            //Set image to the `View`
            imgView.SetImageDrawable(new BitmapDrawable(Resources, tempBitmap));
        }
    }
}
Instead, you get the bounding boxes of all text blocks and then select the bounding box whose coordinates are closest to the center of the screen/frame, or to the rectangle that you specify (i.e. How can i get center x,y of my view in android?). For this you use the getBoundingBox() or getCornerPoints() method of TextBlock.
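As a minimal sketch of that selection step (the method and parameter names are mine, not part of the Mobile Vision API; frameWidth/frameHeight describe the processed frame):

// Pick the TextBlock whose bounding box center lies closest to the center of the frame.
private TextBlock closestToCenter(SparseArray<TextBlock> blocks, int frameWidth, int frameHeight) {
    float centerX = frameWidth / 2f;
    float centerY = frameHeight / 2f;
    TextBlock best = null;
    double bestDistance = Double.MAX_VALUE;
    for (int i = 0; i < blocks.size(); i++) {
        TextBlock block = blocks.valueAt(i);
        Rect box = block.getBoundingBox();
        double distance = Math.hypot(box.exactCenterX() - centerX, box.exactCenterY() - centerY);
        if (distance < bestDistance) {
            bestDistance = distance;
            best = block;
        }
    }
    return best;
}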
Situation: I have a picture, and the user can add text to it and change its color, size, position, rotation, font, etc. I need to save all of this text in one image. That part is fine; I'm saving it using the drawing cache.
//RelativeLayout layout - layout with textviews
layout.setDrawingCacheEnabled(true);
Bitmap bitmap = null;
if (layout.getDrawingCache() != null)
    bitmap = Bitmap.createBitmap(layout.getDrawingCache());
layout.setDrawingCacheEnabled(false);
Problem: The resulting image can be small due to the screen size of the user's device. I need this image at a resolution of 1500-2000 px. If I just resize the image, the text looks fuzzy and not as good as it did on the screen.
Question: Is there some other way to save TextViews as an image without just resizing and losing text quality?
OK, I finally found a working solution.
The idea: the user adds a text view on top of an 800x800 px image and does something with it, and then I need to get the same image but at 2000x2000 px. The problem was that after resizing, the text was fuzzy and noisy. But how can I take a screenshot of a view that hasn't been rendered, with a size bigger than the screen?
Here is the code that I used. It works just fine: I get the same image, with the text in the same positions and at the same size, but with no resizing noise; the text looks clear, not fuzzy. Also, this code saves a bitmap much bigger than the screen size, without ever showing it to the user.
private Bitmap makeTextLayer(int maxWidth, int maxHeight, ImageObject imageObject) {
    Context c = mContext;
    View v = LayoutInflater.from(c).inflate(R.layout.text_view_generator, new LinearLayout(c), false);
    RelativeLayout editTexts = (RelativeLayout) v.findViewById(R.id.editTexts);
    initView(v, maxWidth, maxHeight);

    for (int i = 0; i < imageObject.getEditTexts().size(); ++i) {
        ImageObject.TextInImage textInImage = imageObject.getEditTexts().get(i);

        // text view in a relative layout - set its size; in my case it's as big as the image
        CustomEditText editText = new CustomEditText(c);
        RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.FILL_PARENT, RelativeLayout.LayoutParams.FILL_PARENT);
        params.addRule(RelativeLayout.CENTER_HORIZONTAL, RelativeLayout.TRUE);

        // don't forget to add your view to the layout; this view will be saved as a screenshot
        editTexts.addView(editText, params);
        editText.getLayoutParams().width = maxWidth;
        editText.getLayoutParams().height = maxHeight;
        editText.loadTextParams(textInImage);
        editText.loadSizeAndRotation(textInImage);

        // this is important, without a new init the position of the text will be wrong
        initView(v, maxWidth, maxHeight);

        // and here I configure the position
        editText.loadPosition();
    }

    Bitmap result = getViewBitmap(v, maxWidth, maxHeight);
    return result;
}
Bitmap getViewBitmap(View v, int maxWidth, int maxHeight) {
    //Get the dimensions of the view so we can re-layout the view at its current size
    //and create a bitmap of the same size
    int width = v.getWidth();
    int height = v.getHeight();

    int measuredWidth = View.MeasureSpec.makeMeasureSpec(width, View.MeasureSpec.EXACTLY);
    int measuredHeight = View.MeasureSpec.makeMeasureSpec(height, View.MeasureSpec.EXACTLY);

    //Cause the view to re-layout
    v.measure(measuredWidth, measuredHeight);
    v.layout(0, 0, v.getMeasuredWidth(), v.getMeasuredHeight());

    //Create a bitmap backed Canvas to draw the view into
    Bitmap b = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(b);

    //Now that the view is laid out and we have a canvas, ask the view to draw itself into the canvas
    v.draw(c);
    return b;
}
private void initView(View view, int maxWidth, int maxHeight) {
    ViewGroup.LayoutParams vParams = view.getLayoutParams();

    //If the View hasn't been attached to a layout, or had LayoutParams set,
    //just return, or handle this case however you want
    if (vParams == null) {
        return;
    }

    int wSpec = measureSpecFromDimension(vParams.width, maxWidth);
    int hSpec = measureSpecFromDimension(vParams.height, maxHeight);
    view.measure(wSpec, hSpec);

    int width = view.getMeasuredWidth();
    int height = view.getMeasuredHeight();

    //Cannot make a zero-width or zero-height bitmap
    if (width == 0 || height == 0) {
        return;
    }

    view.layout(0, 0, width, height);
}
private int measureSpecFromDimension(int dimension, int maxDimension) {
    switch (dimension) {
        case ViewGroup.LayoutParams.MATCH_PARENT:
            return View.MeasureSpec.makeMeasureSpec(maxDimension, View.MeasureSpec.EXACTLY);
        case ViewGroup.LayoutParams.WRAP_CONTENT:
            return View.MeasureSpec.makeMeasureSpec(maxDimension, View.MeasureSpec.AT_MOST);
        default:
            return View.MeasureSpec.makeMeasureSpec(dimension, View.MeasureSpec.EXACTLY);
    }
}
I would like to thank the authors of the comments in these posts:
Converting a view to Bitmap without displaying it in Android?
Taking a "screenshot" of a specific layout in Android
Take a screenshot of a whole View
Capture whole scrollview bigger than screen
How to screenshot or snapshot a view before it's rendered?
I found my solution while reading them; if my solution does not work for you, check out these posts.
I'm trying to implement face detection in my camera preview. I followed the Android reference pages to implement a custom camera preview in a TextureView, placed inside a FrameLayout. Also in this FrameLayout is a SurfaceView with a clear background (overlapping the camera preview). My app dynamically draws the Rect given by the bounds of the first CaptureResult.STATISTICS_FACES face onto the SurfaceView's canvas every time the camera preview is updated (once per frame). My app assumes only one face needs to be recognized.
My issue arises when I draw the rectangle. I get the rectangle in the correct place if I keep my face in the center of the camera view, but when I move my head upward the rectangle moves to the right, and when I move my head to the right, the rectangle moves down. It's as if the canvas needs to be rotated by -90 degrees, but that doesn't work for me (explained below the code).
In my activity's onCreate():
//face recognition
rectangleView = (SurfaceView) findViewById(R.id.rectangleView);
rectangleView.setZOrderOnTop(true);
rectangleView.getHolder().setFormat(
PixelFormat.TRANSPARENT); //remove black background from view
purplePaint = new Paint();
purplePaint.setColor(Color.rgb(175,0,255));
purplePaint.setStyle(Paint.Style.STROKE);
In TextureView.SurfaceTextureListener.onSurfaceTextureAvailable() (in the try{} block that encapsulates camera.open()):
Rect cameraBounds = cameraCharacteristics.get(
CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
cameraWidth = cameraBounds.right;
cameraHeight = cameraBounds.bottom;
In the same listener, within onSurfaceTextureUpdated():
if (detectedFace != null && rectangleFace.height() > 0) {
    Canvas currentCanvas = rectangleView.getHolder().lockCanvas();
    if (currentCanvas != null) {
        currentCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);

        int canvasWidth = currentCanvas.getWidth();
        int canvasHeight = currentCanvas.getHeight();

        int l = rectangleFace.right;
        int t = rectangleFace.bottom;
        int r = rectangleFace.left;
        int b = rectangleFace.top;

        int left = (canvasWidth * l) / cameraWidth;
        int top = (canvasHeight * t) / cameraHeight;
        int right = (canvasWidth * r) / cameraWidth;
        int bottom = (canvasHeight * b) / cameraHeight;

        currentCanvas.drawRect(left, top, right, bottom, purplePaint);
    }
    rectangleView.getHolder().unlockCanvasAndPost(currentCanvas);
}
In the onCaptureCompleted method of CameraCaptureSession.CaptureCallback, called from the CameraCaptureSession.setRepeatingRequest() looper:
//May need a better face recognition SDK or API
Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
if (faces.length > 0)
{
    detectedFace = faces[0];
    rectangleFace = detectedFace.getBounds();
}
All variables are instantiated outside of the functions.
In case you can't quite understand my question or need additional information, a similar question is posted here:
How can i handle the rotation issue with Preview & FaceDetection
However, unlike the above poster, I couldn't even get my canvas to show the rectangle after rotating my canvas, so that can't be the solution.
I tried rotating my points by -90 degrees using x = -y, y = x (left = -top, top = left), and that doesn't do the trick either. I feel like some kind of transformation needs to be applied to the points, but I don't know how to go about it.
Any ideas on how to fix this?
For future reference, this is the solution I ended up with:
Set a class/activity variable called orientation_offset:
orientation_offset = cameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
This is the angle by which the camera sensor's view (or the face-detection rectangle) needs to be rotated in order to be displayed correctly.
Then I changed onSurfaceTextureUpdated():
Canvas currentCanvas = rectangleView.getHolder().lockCanvas();
if (currentCanvas != null) {
    currentCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
    if (detectedFace != null && rectangleFace.height() > 0) {
        int canvasWidth = currentCanvas.getWidth();
        int canvasHeight = currentCanvas.getHeight();

        int faceWidthOffset = rectangleFace.width() / 8;
        int faceHeightOffset = rectangleFace.height() / 8;

        currentCanvas.save();
        currentCanvas.rotate(360 - orientation_offset, canvasWidth / 2,
                canvasHeight / 2);

        int l = rectangleFace.right;
        int t = rectangleFace.bottom;
        int r = rectangleFace.left;
        int b = rectangleFace.top;

        int left = (canvasWidth - (canvasWidth * l) / cameraWidth) - faceWidthOffset;
        int top = (canvasHeight * t) / cameraHeight - faceHeightOffset;
        int right = (canvasWidth - (canvasWidth * r) / cameraWidth) + faceWidthOffset;
        int bottom = (canvasHeight * b) / cameraHeight + faceHeightOffset;

        currentCanvas.drawRect(left, top, right, bottom, purplePaint);
        currentCanvas.restore();
    }
}
rectangleView.getHolder().unlockCanvasAndPost(currentCanvas);
I'll leave the question open in case somebody else has a solution to offer.
I've been trying this since morning, yet I can't get it to work.
What I'm trying to do is create something like a long shadow for a TextView, similar to the following:
http://www.iceflowstudios.com/v3/wp-content/uploads/2013/07/long_shadow_banner.jpg
http://web3canvas.com/wp-content/uploads/2013/07/lsd-ps-action-720x400.png
My solution so far has been to create a lot of TextViews and cascade them under each other, but there are a lot of performance issues if I go that way.
Another solution would be to use a custom font that has a similar look, but I cannot find one that matches the font I am currently using.
So I was wondering, is it possible to use the following (I have to mention that the TextViews are created dynamically):
TV.setShadowLayer(1f, 5f, 5f, Color.GRAY);
to create several of them in a line (as cascading layers), making the shadow seem smooth? Or do you suggest any other solutions?
Thanks in advance.
Try to play with raster images:
Detect the bounds of the text using the Paint.getTextBounds() method
Create a transparent Bitmap with dimensions (W + H) x H (you may use Bitmap.Config.ALPHA_8 to optimize memory usage)
Draw the text on this Bitmap at position 0x0
Copy the first row of the Bitmap into a new one with the original width but a height of 1 px
Iterate over the Y axis of the Bitmap (from top to bottom) and draw the single-line Bitmap with the corresponding offset along the X axis (you will overdraw some transparent pixels)
Now you have the top part of your shadow
Draw the bottom part using the same technique, but choosing the last row of the Bitmap
This algorithm can be optimized if you detect that all pixels in the last row have the same color (full shadow).
UPDATE 1
I achieved the following result using this quick solution:
MainActivity.java
import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle state) {
        super.onCreate(state);

        LongShadowTextView longShadow = new LongShadowTextView(this);
        longShadow.setText("Hello World");
        setContentView(longShadow);
    }
}
LongShadowTextView.java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.graphics.RectF;
import android.view.View;
public class LongShadowTextView extends View {
    private Bitmap mBitmap;
    private String mText;

    public LongShadowTextView(Context context) {
        super(context);
    }

    public void setText(String text) {
        Paint paint = new Paint();
        // TODO provide setters for these values
        paint.setColor(Color.BLACK);
        paint.setTextSize(142);

        Rect rect = new Rect();
        paint.getTextBounds(text, 0, text.length(), rect);

        Bitmap bitmap = Bitmap.createBitmap(rect.width() + rect.height(), rect.height(), Bitmap.Config.ALPHA_8);
        Canvas canvas = new Canvas(bitmap);
        canvas.drawText(text, 0, rect.height(), paint);

        Rect src = new Rect();
        RectF dst = new RectF();

        int w = bitmap.getWidth();
        int h = bitmap.getHeight();

        src.left = 0;
        src.right = w;
        for (int i = 0; i < h; ++i) {
            src.top = i;
            src.bottom = i + 1;

            dst.left = 1;
            dst.top = i + 1;
            dst.right = 1 + w;
            dst.bottom = i + 2;

            canvas.drawBitmap(bitmap, src, dst, null);
        }

        mText = text;
        mBitmap = bitmap;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(mBitmap, 0, 0, null);
    }
}
UPDATE 2
Here is the final result that I achieved. Clone this demo from GitHub.
I'm afraid your suggested approach of using setShadowLayer() won't work, as this approach effectively draws a second TextPaint with blurring.
Superimposing several TextPaints on top of each other essentially means you need to offset each one by 1 px per step, which is very graphically intensive and will have very poor performance.
This is an excellent question and a real challenge!
The only solution that comes to mind is to handle each glyph independently, inspecting all path elements and extending a shadow between the furthest bottom-left and top-right points. This seems very complicated, and I don't know whether there is any mechanism in the SDK that facilitates such an approach (a minimal sketch follows the reading list below).
Suggested reading:
This question tackles obtaining glyph paths from TTFs.
This answer illustrates how you can leverage paths, although it concerns a JavaScript approach.
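For what it's worth, Android's Paint.getTextPath() can extract the outline of drawn text as a Path; it is not mentioned in the answer above, but a minimal sketch is shown here as a possible starting point for the per-glyph idea (extruding the actual shadow from the path is not shown):

Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(142);

// Extract the outline of the text as a Path; (x, y) is the baseline origin,
// just as in Canvas.drawText().
Path textPath = new Path();
String text = "Hello World";
paint.getTextPath(text, 0, text.length(), 0, paint.getTextSize(), textPath);

// textPath now holds the glyph outlines; points along it can be sampled with
// PathMeasure to find the extremes between which a shadow could be extended.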
A small comment for anyone who tries to run the setText() method: it does not work as-is.
You should call invalidate() in the setText() method:
public void setText(String value) {
    boolean changed =
            mText == null && value != null || mText != null && !mText.equals(value);
    mText = value;
    if (changed) {
        refresh();
    }
    invalidate();
}
I want my ListView to look like a notepad, i.e. with a horizontal-lines background pattern. Following the Notepad sample, I can extend TextView and override its onDraw() like this:
r = new Rect();
for (int i = 0; i < getLineCount(); i++) {
    int baseline = getLineBounds(i, r);
    canvas.drawLine(r.left, baseline + 1, r.right, baseline + 1, paint);
}
super.onDraw(canvas);
but when there are just a few elements in the list, there won't be enough lines to fill the page (actual result on the left, desired on the right):
So I tried an alternative approach: overriding ListView.onDraw(). Unfortunately, there's no immediate way to compute the top scroll (getScrollY() always returns 0), and above all I would have to disable all caching and drawing optimizations, which would definitely kill performance, besides not being scalable for large lists.
Finally, my row widgets are not plain text views. They are complex layouts, even if the main content is, sure, a TextView. This means that on the layout I can't call getLineBounds() (a layout is not a text view), and I can't call it on the text view either, because the TextView is smaller than the surrounding layout, so there would be gaps on all four sides.
How can I architect a solution that displays my custom widgets and fills the entire window with horizontal lines? A naive approach would be to add dummy empty elements to the list until it fills all the available space, but this is a hack and there must be a better way of doing things. Using a background image is not an option, since the distance between lines must be customizable at runtime.
The code below is based on the simple example from your question: a custom TextView that draws a line at the bottom (and with no dividers in the list). In this case I would make a custom ListView and override its dispatchDraw() method as below:
class CustomListView extends ListView {

    private Paint mPaint = new Paint();
    private Paint mPaintBackground = new Paint();

    public CustomListView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mPaint.setColor(Color.RED);
        mPaintBackground.setColor(Color.CYAN);
    }

    @Override
    protected void dispatchDraw(Canvas canvas) {
        super.dispatchDraw(canvas);
        // ListView's height
        final int currentHeight = getMeasuredHeight();
        // this will let you know the status for the ListView, fitting/not
        // fitting content
        final int scrolledHeight = computeVerticalScrollRange();
        if (scrolledHeight >= currentHeight || scrolledHeight == 0) {
            // there is no need to draw something (for simplicity I assumed that
            // if the adapter has no items I wouldn't draw something on the
            // screen. If you still do want the lines then pick a decent value
            // to simulate a row's height and draw them until you hit the
            // ListView's getMeasuredHeight)
            return;
        } else {
            // get the last drawn child
            final View lastChild = getChildAt(getChildCount() - 1);
            // values used to know where to start drawing lines
            final int lastChildBottom = lastChild.getBottom();
            // last child's height (use this to determine an appropriate value
            // for the row height)
            final int lastChildHeight = lastChild.getMeasuredHeight();
            // determine the number of lines required to fill the ListView
            final int nrOfLines = (currentHeight - lastChildBottom)
                    / lastChildHeight;
            // I used this to simulate a special color for the ListView's row background
            Rect r = new Rect(0, lastChildBottom, getMeasuredWidth(),
                    getMeasuredHeight());
            canvas.drawRect(r, mPaintBackground);
            for (int i = 0; i < nrOfLines; i++) {
                canvas.drawLine(0, lastChildBottom + (i + 1) * lastChildHeight,
                        getMeasuredWidth(), lastChildBottom + (i + 1)
                                * lastChildHeight, mPaint);
            }
        }
    }
}
See if you can use the code above and adapt it to your own needs.