Projecting a Tango 3D point to screen coordinates (Google Project Tango, Java)

Project Tango provides a point cloud; how can you get the position in pixels of a 3D point (in meters) from the point cloud?
I tried using the projection matrix, but I get very small values (0.5, 1.3, etc.) instead of pixel values like 1234, 324.
Here is the code I have tried:
// Get the current projection matrix
Matrix4 projMatrix = mRenderer.getCurrentCamera().getProjectionMatrix();

// Get all the points in the point cloud and store them as 3D points
FloatBuffer pointsBuffer = mPointCloudManager.updateAndGetLatestPointCloudRenderBuffer().floatBuffer;
Vector3[] points3D = new Vector3[pointsBuffer.capacity() / 3];
int j = 0;
for (int i = 0; i < pointsBuffer.capacity() - 3; i = i + 3) {
    points3D[j] = new Vector3(
            pointsBuffer.get(i),
            pointsBuffer.get(i + 1),
            pointsBuffer.get(i + 2));
    //Log.v("Points3d", "J: " + j + " X: " + points3D[j].x + "\tY: " + points3D[j].y + "\tZ: " + points3D[j].z);
    j++;
}
// Get the projection of the points on the screen
Vector3[] points2D = new Vector3[points3D.length];
for (int i = 0; i < points3D.length - 1; i++) {
    Log.v("Points", "X: " + points3D[i].x + "\tY: " + points3D[i].y + "\tZ: " + points3D[i].z);
    points2D[i] = points3D[i].multiply(projMatrix);
    Log.v("Points", "pX: " + points2D[i].x + "\tpY: " + points2D[i].y + "\tpZ: " + points2D[i].z);
}
The example I'm working from is the Java point cloud sample, which can be found here:
https://github.com/googlesamples/tango-examples-java
UPDATE
TangoCameraIntrinsics ccIntrinsics = mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
double fx = ccIntrinsics.fx;
double fy = ccIntrinsics.fy;
double cx = ccIntrinsics.cx;
double cy = ccIntrinsics.cy;

double[][] projMatrix = new double[][] {
        {fx, 0, -cx},
        {0, fy, -cy},
        {0, 0, 1}
};
Then, to compute the projected point, I use:
for (int i = 0; i < points3D.length - 1; i++) {
    double[][] point = new double[][] {
            {points3D[i].x},
            {points3D[i].y},
            {points3D[i].z}
    };
    double[][] point2d = CustomMatrix.multiplyByMatrix(projMatrix, point);
    points2D[i] = new Vector2(0, 0);
    if (point2d[2][0] != 0) {
        Log.v("temp point", "pX: " + point2d[0][0] / point2d[2][0] + " pY: " + point2d[1][0] / point2d[2][0]);
        points2D[i] = new Vector2(point2d[0][0] / point2d[2][0], point2d[1][0] / point2d[2][0]);
    }
}
But I think the results are still not what is expected; for instance I get results like:
pX: -175.58042313027244 pY: -92.573740812066
which doesn't look right to me.
UPDATE
Using the color camera intrinsics as suggested gives better results, but the points are still negative:
pX: -1127.8086915171814 pY: -652.5887102192332
Would it be OK to just multiply them by -1?

You have to multiply the 3D point by the RGB camera's intrinsics matrix K to obtain the pixel coordinate; the 3D points are in the depth camera's frame. For a point (X, Y, Z) you get the pixel coordinates with:

x = fx * (X / Z) + cx
y = fy * (Y / Z) + cy

where x and y are the pixel coordinates, and K is constructed with the parameters returned by the intrinsics function:

    | fx   0  cx |
K = |  0  fy  cy |
    |  0   0   1 |

Note that cx and cy enter with a positive sign: the -cx and -cy entries in your update are what push the results negative, so the fix is to use +cx and +cy rather than multiplying by -1.
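As a minimal sketch, assuming the point has already been transformed from the depth camera's frame into the color camera's frame, the projection could look like this in Java:

// project a 3D point (in meters, color camera frame) to pixel coordinates
// using the pinhole model described above
static double[] pointToPixel(TangoCameraIntrinsics intr, double X, double Y, double Z) {
    double x = intr.fx * (X / Z) + intr.cx;
    double y = intr.fy * (Y / Z) + intr.cy;
    return new double[] {x, y}; // valid pixels fall in [0, intr.width) x [0, intr.height)
}

You would obtain intr via mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR), as in your update.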

Related

Generate a 2D world with vectors

I'm trying to get started with a 2D map generator based on vectors. Now I have some questions about how to go about it.
public float[] gen() {
    float[] vec = new float[100];
    float x = 0, y = 0;
    float a = 20,
          b = 10;
    for (int i = 0; i < vec.length; i += 2) {
        vec[i] = MathUtils.random(x, x + a);
        vec[i + 1] = MathUtils.random(y - b, y + b);
        x = vec[i];
        y = vec[i + 1];
    }
    return vec;
}
Would this be a good way to generate the vectors? The variables a and b could be changed for smoother land / hills.
I'm also thinking about a never-ending map. But how could I render only the part the player sees? I will work with Box2D.
Generated world parts should be saved to a file. Should I save them as chunks, like Minecraft does? And what file format is recommended (JSON)?
I'd just like to get a little perspective on how to move forward.
Answer for the 2nd question:
You have to have a Camera.

if (x >= camera.project(new Vector3(camera.position.x - camera.viewportWidth - 25, 0, 0)).x
        && x <= camera.project(new Vector3(camera.viewportWidth + camera.position.x + 25, 0, 0)).x)

and the same for y, or something less laggy than this (every frame a new Vector3 object is created, so...).
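A minimal sketch of a cheaper check, assuming a libGDX OrthographicCamera: compare in world units directly, so nothing is allocated per frame.

// visible if x lies within the camera's horizontal span, plus a 25-unit margin
static boolean isVisibleX(OrthographicCamera camera, float x) {
    float halfWidth = camera.viewportWidth * 0.5f * camera.zoom;
    return x >= camera.position.x - halfWidth - 25f
        && x <= camera.position.x + halfWidth + 25f;
}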
Good luck!

Java 3D parametric surface drawing

I really need your help; I have been fighting with this for some time now.
I am trying to draw a parametric surface in Java 3D. The surface gets drawn if I use a point array. Here is the code:
PointArray lsa = new PointArray(length, GeometryArray.COLOR_3 | GeometryArray.NORMALS | GeometryArray.COORDINATES);
float maxV = (float) (2 * Math.PI);
float maxU = (float) Math.PI;
Vector3f norm = new Vector3f();
int vIndex = -1;
for (float v = 0.01f; v < maxV; v += 0.03) {
    for (float u = 0.01f; u < maxU; u += 0.03) {
        vIndex++;
        Point3f pt = new Point3f();
        pt.x = (float) (Math.sin(u) * Math.cos(v));
        pt.y = (float) (2 * Math.sin(u) * Math.sin(v));
        pt.z = (float) Math.cos(u);
        lsa.setCoordinate(vIndex, pt);
        lsa.setColor(vIndex, new Color3f(0.9f, 0.0f, 0.0f));
    }
}
Shape3D shape = new Shape3D(lsa);
The problem is that it draws only the points (dots), so it's not a fully drawn surface. How can I draw this parametric surface with polygons, or as an actual surface? Are there any methods for this?
I've been searching the web and bought books, but I still cannot make it work with Java 3D.
Thank you very much.
Here's how I would do it.
I would define a
Point3f[][] points = new Point3f[(int)((umax-umin)/du)][(int)((vmax-vmin)/dv)];
Then use loops similar to the ones you have (int i = 0; i < points.length; i++ and int j = 0; j < points[0].length; j++), define u = i * du + umin and v = j * dv + vmin, and populate the array with the Point3f corresponding to (u, v).
Next, loop over int i = 0; i < points.length - 1; i++ and int j = 0; j < points[0].length - 1; j++ and take the points at points[i][j], points[i+1][j], points[i][j+1], and points[i+1][j+1].
Then use the method given in this article to convert these points into a Polygon. Add it to your model / an array that you later add to your model.
Of course, this may not be the best way to do it and I have the feeling that it doesn't handle discontinuities very well, but it should at least make polygons.
Hello, here is the solution. It draws a Coons surface as an example, but it should work for any parametric surface x(s, t), y(s, t), z(s, t).
public static Shape3D getShape3D() {
    // Coons surface
    int ns = 100;
    int nt = 100;
    float param0 = 1.0f;
    float param1 = 3.0f;
    float s = 0.0f;
    float t = 0.0f;
    if (ns > 500) ns = 500;
    if (nt > 500) nt = 500;

    // evaluate the surface on an ns x nt grid in (s, t) parameter space
    Point3f[][] f = new Point3f[ns][nt];
    int sizeOfVectors = 0;
    for (int i = 0; i < ns; i++) {
        for (int j = 0; j < nt; j++) {
            s = ((float) i / ns);
            t = ((float) j / nt);
            //System.out.println(" i " + i + " j " + j + " s " + s + " t " + t);
            f[i][j] = new Point3f();
            //f[i][j].x = s;
            //f[i][j].y = 2 * t;
            //f[i][j].z = 10 * t * (1 - s);
            f[i][j].x = param0 * s;
            f[i][j].y = param1 * t;
            f[i][j].z = (float) (0.5 * ((54 * s * Math.sqrt(s) - 126 * Math.sqrt(s) + 72 * s - 6) * t + (27 * Math.sqrt(s) - 27 * s + 6)));
            /*f[i][j].x = (float) (Math.sqrt(s) * Math.cos(t));
            f[i][j].y = (float) (Math.sqrt(s) * Math.sin(t));
            f[i][j].z = s;*/
            sizeOfVectors++;
            sizeOfVectors++;
        }
    }
    System.out.println("Total vectors " + sizeOfVectors);

    Shape3D plShape = new Shape3D();
    int vIndex = -1;
    int k = 0;
    // build one triangle strip per pair of adjacent grid rows
    for (int i = 0; i < (ns - 1); i++) {
        k = i + 1;
        sizeOfVectors = nt * 2;
        vIndex = -1;
        TriangleStripArray lsa = new TriangleStripArray(sizeOfVectors,
                GeometryArray.COLOR_3 | GeometryArray.COORDINATES | GeometryArray.NORMALS,
                new int[] {sizeOfVectors});
        // alternate points from row i and row k = i + 1 to form the strip
        for (int j = 0; j < nt; j++) {
            vIndex++;
            lsa.setCoordinate(vIndex, f[i][j]);
            lsa.setColor(vIndex, new Color3f(0.9f, 0.0f, 0.0f));
            vIndex++;
            lsa.setCoordinate(vIndex, f[k][j]);
            lsa.setColor(vIndex, new Color3f(0.9f, 0.0f, 0.0f));
        }
        plShape.addGeometry(lsa);
    }
    return plShape;
}
It works like a dream. Your guidance was the catalyst that finally made it work.
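For completeness, a minimal sketch of displaying the returned shape, assuming the standard Java 3D utility classes:

import javax.media.j3d.BranchGroup;
import com.sun.j3d.utils.universe.SimpleUniverse;

// attach the generated shape to a scene graph and show it in a default universe
SimpleUniverse universe = new SimpleUniverse();
BranchGroup group = new BranchGroup();
group.addChild(getShape3D());
universe.getViewingPlatform().setNominalViewingTransform();
universe.addBranchGraph(group);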

Hough circle detection accuracy very low

I am trying to detect a circular shape in an image that appears to have very good definition. I realize that part of the circle is missing, but from what I've read about the Hough transform, that shouldn't cause the problem I'm experiencing.
Input:
Output:
Code:
// Read the image
Mat src = Highgui.imread("input.png");

// Convert it to gray
Mat src_gray = new Mat();
Imgproc.cvtColor(src, src_gray, Imgproc.COLOR_BGR2GRAY);

// Reduce the noise so we avoid false circle detection
//Imgproc.GaussianBlur(src_gray, src_gray, new Size(9, 9), 2, 2);

Mat circles = new Mat();

// Apply the Hough Transform to find the circles
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 1, 160, 25, 0, 0);

// Draw the circles detected
for (int i = 0; i < circles.cols(); i++) {
    double[] vCircle = circles.get(0, i);
    Point center = new Point(vCircle[0], vCircle[1]);
    int radius = (int) Math.round(vCircle[2]);
    // circle center
    Core.circle(src, center, 3, new Scalar(0, 255, 0), -1, 8, 0);
    // circle outline
    Core.circle(src, center, radius, new Scalar(0, 0, 255), 3, 8, 0);
}

// Save the visualized detection
String filename = "output.png";
System.out.println(String.format("Writing %s", filename));
Highgui.imwrite(filename, src);
I have the Gaussian blur commented out because (counter-intuitively) it was greatly increasing the number of equally inaccurate circles found.
Is there anything wrong with my input image that would cause Hough not to work as well as I expect? Are my parameters way off?
EDIT: The first answer brought up a good point about the min/max radius hint for Hough. I resisted adding those parameters because the example image in this post is just one of thousands of images, all with radii varying from ~20 to almost infinity.
I've adapted my RANSAC algorithm from this answer: Detect semi-circle in opencv
Idea:
1. Randomly choose 3 points from your binary edge image.
2. Create a circle from those 3 points.
3. Test how "good" this circle is.
4. If it is better than the best circle found so far in this image, remember it.
5. Loop 1-4 until some number of iterations is reached, then accept the best circle found.
6. Remove that accepted circle from the image.
7. Repeat 1-6 until you have found all circles.
Problems:
At the moment you must know how many circles you want to find in the image.
Tested only on that one image.
The C++ code is below.
result:
code:
inline void getCircle(cv::Point2f& p1, cv::Point2f& p2, cv::Point2f& p3, cv::Point2f& center, float& radius)
{
    float x1 = p1.x;
    float x2 = p2.x;
    float x3 = p3.x;
    float y1 = p1.y;
    float y2 = p2.y;
    float y3 = p3.y;

    // PLEASE CHECK FOR TYPOS IN THE FORMULA :)
    center.x = (x1*x1 + y1*y1)*(y2-y3) + (x2*x2 + y2*y2)*(y3-y1) + (x3*x3 + y3*y3)*(y1-y2);
    center.x /= (2 * (x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2));

    center.y = (x1*x1 + y1*y1)*(x3-x2) + (x2*x2 + y2*y2)*(x1-x3) + (x3*x3 + y3*y3)*(x2-x1);
    center.y /= (2 * (x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2));

    radius = sqrt((center.x - x1)*(center.x - x1) + (center.y - y1)*(center.y - y1));
}
std::vector<cv::Point2f> getPointPositions(cv::Mat binaryImage)
{
    std::vector<cv::Point2f> pointPositions;
    for (unsigned int y = 0; y < binaryImage.rows; ++y)
    {
        //unsigned char* rowPtr = binaryImage.ptr<unsigned char>(y);
        for (unsigned int x = 0; x < binaryImage.cols; ++x)
        {
            //if (rowPtr[x] > 0) pointPositions.push_back(cv::Point2i(x, y));
            if (binaryImage.at<unsigned char>(y, x) > 0) pointPositions.push_back(cv::Point2f(x, y));
        }
    }
    return pointPositions;
}
float verifyCircle(cv::Mat dt, cv::Point2f center, float radius, std::vector<cv::Point2f>& inlierSet)
{
    unsigned int counter = 0;
    unsigned int inlier = 0;
    float minInlierDist = 2.0f;
    float maxInlierDistMax = 100.0f;
    float maxInlierDist = radius / 25.0f;
    if (maxInlierDist < minInlierDist) maxInlierDist = minInlierDist;
    if (maxInlierDist > maxInlierDistMax) maxInlierDist = maxInlierDistMax;

    // choose samples along the circle and count inlier percentage
    for (float t = 0; t < 2 * 3.14159265359f; t += 0.05f)
    {
        counter++;
        float cX = radius * cos(t) + center.x;
        float cY = radius * sin(t) + center.y;

        if (cX < dt.cols)
        if (cX >= 0)
        if (cY < dt.rows)
        if (cY >= 0)
        if (dt.at<float>(cY, cX) < maxInlierDist)
        {
            inlier++;
            inlierSet.push_back(cv::Point2f(cX, cY));
        }
    }
    return (float)inlier / float(counter);
}
float evaluateCircle(cv::Mat dt, cv::Point2f center, float radius)
{
    float completeDistance = 0.0f;
    int counter = 0;
    float maxDist = 1.0f;   // TODO: this might depend on the size of the circle!
    float minStep = 0.001f;

    // choose samples along the circle and count the inliers.
    // HERE IS THE TRICK: no minimum/maximum circle radius is used; instead, the number
    // of points generated along the circle depends on the radius.
    // If this is too slow for you (e.g. too many points created per circle), increase the
    // step parameter, but only by a factor, so that it still depends on the radius.
    // The step must depend on the circle size, otherwise small circles would generate
    // proportionally more inliers along the circle.
    float step = 2 * 3.14159265359f / (6.0f * radius);
    if (step < minStep) step = minStep; // TODO: find a good value here.

    //for (float t = 0; t < 2*3.14159265359f; t += 0.05f) // a fixed step, independent of the radius, is much worse!
    for (float t = 0; t < 2 * 3.14159265359f; t += step)
    {
        float cX = radius * cos(t) + center.x;
        float cY = radius * sin(t) + center.y;

        if (cX < dt.cols)
        if (cX >= 0)
        if (cY < dt.rows)
        if (cY >= 0)
        if (dt.at<float>(cY, cX) <= maxDist)
        {
            completeDistance += dt.at<float>(cY, cX);
            counter++;
        }
    }
    return counter;
}
int main()
{
    // RANSAC
    cv::Mat color = cv::imread("HoughCirclesAccuracy.png");

    // convert to grayscale
    cv::Mat gray;
    cv::cvtColor(color, gray, CV_RGB2GRAY);

    // get binary image
    cv::Mat mask = gray > 0;

    unsigned int numberOfCirclesToDetect = 2; // TODO: if unknown, you'll have to find some nice criteria to stop finding more (semi-)circles

    for (unsigned int j = 0; j < numberOfCirclesToDetect; ++j)
    {
        std::vector<cv::Point2f> edgePositions;
        edgePositions = getPointPositions(mask);
        std::cout << "number of edge positions: " << edgePositions.size() << std::endl;

        // create distance transform to efficiently evaluate distance to nearest edge
        cv::Mat dt;
        cv::distanceTransform(255 - mask, dt, CV_DIST_L1, 3);

        unsigned int nIterations = 0;
        cv::Point2f bestCircleCenter;
        float bestCircleRadius;
        //float bestCVal = FLT_MAX;
        float bestCVal = -1;

        //float minCircleRadius = 20.0f; // TODO: if you have some knowledge about your image you might be able to adjust the minimum circle radius parameter.
        float minCircleRadius = 0.0f;

        //TODO: implement some more intelligent RANSAC without a fixed number of iterations
        for (unsigned int i = 0; i < 2000; ++i)
        {
            //RANSAC: randomly choose 3 points and create a circle:
            //TODO: choose randomly but more intelligently, so that it is more likely to pick
            // three points of the same circle. For example, if there are many small circles,
            // it is unlikely to randomly choose 3 points of the same circle.
            unsigned int idx1 = rand() % edgePositions.size();
            unsigned int idx2 = rand() % edgePositions.size();
            unsigned int idx3 = rand() % edgePositions.size();

            // we need 3 different samples:
            if (idx1 == idx2) continue;
            if (idx1 == idx3) continue;
            if (idx3 == idx2) continue;

            // create circle from 3 points:
            cv::Point2f center;
            float radius;
            getCircle(edgePositions[idx1], edgePositions[idx2], edgePositions[idx3], center, radius);

            if (radius < minCircleRadius) continue;

            // verify or falsify the circle by inlier counting:
            //float cPerc = verifyCircle(dt, center, radius, inlierSet);
            float cVal = evaluateCircle(dt, center, radius);

            if (cVal > bestCVal)
            {
                bestCVal = cVal;
                bestCircleRadius = radius;
                bestCircleCenter = center;
            }
            ++nIterations;
        }
        std::cout << "current best circle: " << bestCircleCenter << " with radius: " << bestCircleRadius << " and nInlier " << bestCVal << std::endl;
        cv::circle(color, bestCircleCenter, bestCircleRadius, cv::Scalar(0, 0, 255));

        //TODO: hold and save the detected circle.
        //TODO: instead of overwriting the mask with a drawn circle it might be better to keep a
        // list of detected circles and not count new circles which are too close to an old one.
        // In this current version the thickness used to overwrite the mask is fixed and might
        // remove parts of other circles too!

        // update mask: remove the detected circle!
        cv::circle(mask, bestCircleCenter, bestCircleRadius, 0, 10); // here the thickness is fixed, which isn't so nice.
    }

    cv::namedWindow("edges"); cv::imshow("edges", mask);
    cv::namedWindow("color"); cv::imshow("color", color);

    cv::imwrite("detectedCircles.png", color);
    cv::waitKey(-1);
    return 0;
}
If you set the minRadius and maxRadius parameters properly, it will give you good results.
For your image, I tried the following parameters:
method - CV_HOUGH_GRADIENT
minDist - 100
dp - 1
param1 - 80
param2 - 10
minRadius - 250
maxRadius - 300
I got the following output
Note: I tried this in C++.
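For reference, a minimal sketch of the same call through the Java bindings used in the question, assuming the Imgproc.HoughCircles overload with explicit radius bounds:

// dp = 1, minDist = 100, param1 = 80, param2 = 10, minRadius = 250, maxRadius = 300
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1, 100, 80, 10, 250, 300);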

Java OpenCV + Tesseract OCR "code" recognition

I'm trying to automate a process where someone manually converts a code to a digital one.
Then I started reading about OCR, so I installed Tesseract OCR and tried it on some images. It didn't even detect anything close to the code.
I figured, after reading some questions on Stack Overflow, that the images need some preprocessing, like deskewing the image to a horizontal one, which can be done with OpenCV, for example.
Now my questions are:
What kind of preprocessing or other methods should be used in a case like the above image?
Secondly, can I rely on the output? Will it always work in cases like the above image?
I hope someone can help me!
I have decided to capture the whole card instead of only the code. By capturing the whole card it is possible to transform it to a flat, head-on perspective, and then I can easily get the "code" region.
Also, I learned a lot of things, especially regarding speed. This function is slow on high-resolution images; it can take up to 10 seconds with a size of 3264 x 1836.
What I did to speed things up is resize the input matrix by a factor of 1/4. This makes it 4^2 times faster with only a minimal loss of precision. The next step is scaling the quadrangle we found back to the original size, so that we can transform the quadrangle to a flat perspective using the original source, as sketched below.
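A rough sketch of that scaling step, assuming a fixed factor of 4 and the find() method shown further down:

// detect on a quarter-size copy, then scale the quadrangle back up
Mat small = new Mat();
Imgproc.resize(src, small, new Size(src.width() / 4.0, src.height() / 4.0));

MatOfPoint quad = find(small);
if (quad != null) {
    Point[] pts = quad.toArray();
    for (Point p : pts) {
        p.x *= 4; // back to the coordinate space of the original src
        p.y *= 4;
    }
    quad.fromArray(pts);
}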
The code I created for detecting the largest area is heavily based on code I found on Stack Overflow. Unfortunately those snippets didn't work as expected for me, so I combined several of them and modified the result quite a lot.
This is what I got:
private static double angle(Point p1, Point p2, Point p0) {
    double dx1 = p1.x - p0.x;
    double dy1 = p1.y - p0.y;
    double dx2 = p2.x - p0.x;
    double dy2 = p2.y - p0.y;
    return (dx1 * dx2 + dy1 * dy2) / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}

private static MatOfPoint find(Mat src) throws Exception {
    Mat blurred = src.clone();
    Imgproc.medianBlur(src, blurred, 9);

    Mat gray0 = new Mat(blurred.size(), CvType.CV_8U), gray = new Mat();

    List<MatOfPoint> contours = new ArrayList<>();

    List<Mat> blurredChannel = new ArrayList<>();
    blurredChannel.add(blurred);
    List<Mat> gray0Channel = new ArrayList<>();
    gray0Channel.add(gray0);

    MatOfPoint2f approxCurve;

    double maxArea = 0;
    int maxId = -1;

    for (int c = 0; c < 3; c++) {
        int ch[] = {c, 0};
        Core.mixChannels(blurredChannel, gray0Channel, new MatOfInt(ch));

        int thresholdLevel = 1;
        for (int t = 0; t < thresholdLevel; t++) {
            if (t == 0) {
                Imgproc.Canny(gray0, gray, 10, 20, 3, true); // true ?
                Imgproc.dilate(gray, gray, new Mat(), new Point(-1, -1), 1); // 1 ?
            } else {
                Imgproc.adaptiveThreshold(gray0, gray, thresholdLevel,
                        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
                        Imgproc.THRESH_BINARY,
                        (src.width() + src.height()) / 200, t);
            }

            Imgproc.findContours(gray, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            for (MatOfPoint contour : contours) {
                MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());

                double area = Imgproc.contourArea(contour);
                approxCurve = new MatOfPoint2f();
                Imgproc.approxPolyDP(temp, approxCurve, Imgproc.arcLength(temp, true) * 0.02, true);

                if (approxCurve.total() == 4 && area >= maxArea) {
                    double maxCosine = 0;

                    List<Point> curves = approxCurve.toList();
                    for (int j = 2; j < 5; j++) {
                        double cosine = Math.abs(angle(curves.get(j % 4), curves.get(j - 2), curves.get(j - 1)));
                        maxCosine = Math.max(maxCosine, cosine);
                    }

                    if (maxCosine < 0.3) {
                        maxArea = area;
                        maxId = contours.indexOf(contour);
                        //contours.set(maxId, getHull(contour));
                    }
                }
            }
        }
    }

    if (maxId >= 0) {
        return contours.get(maxId);
        //Imgproc.drawContours(src, contours, maxId, new Scalar(255, 0, 0, .8), 8);
    }
    return null;
}
You can call it like so:
MatOfPoint contour = find(src);
See this answer for quadrangle detection from a contour and transforming it to a plain perspective:
Java OpenCV deskewing a contour

How to determine a vector using 2 points on an Android map?

I'm trying to implement some advanced features with Android maps, and to do that I need to perform some operations on vectors. Now, I read the answer from this question and it gave me some hints and tips. However, there is a part which I don't understand. Please allow me to quote it:
Now that we have the ray with its start and end coordinates, the problem shifts from "is the point within the polygon" to "how often intersects the ray a polygon side". Therefor we can't just work with the polygon points as before (for the bounding box), now we need the actual sides. A side is always defined by two points.
side 1: (X1/Y1)-(X2/Y2)
side 2: (X2/Y2)-(X3/Y3)
side 3: (X3/Y3)-(X4/Y4)
So my understanding is that every side of the triangle is actually a vector. But how is it possible to subtract 2 points? Let's say I have a triangle with 3 vertices: A(1,1), B(2,2), C(1,3). According to the above, I have to compute, for example, (1,1)-(2,2) to get one of the sides. The question is how to do this programmatically in Java/Android. Below I'm attaching the code I have developed so far:
/* Creating the containers for the screen
 * coordinates taken from the GeoPoints
 */
Point point1_screen = new Point();
Point point2_screen = new Point();
Point point3_screen = new Point();

/* Project them from the map to the screen */
mapView.getProjection().toPixels(point1, point1_screen);
mapView.getProjection().toPixels(point2, point2_screen);
mapView.getProjection().toPixels(point3, point3_screen);

int xA = point1_screen.x;
int yA = point1_screen.y;
int xB = point2_screen.x;
int yB = point2_screen.y;
int xC = point3_screen.x;
int yC = point3_screen.y;

int[] xPointsArray = new int[3];
int[] yPointsArray = new int[3];

xPointsArray[0] = xA;
xPointsArray[1] = xB;
xPointsArray[2] = xC;

yPointsArray[0] = yA;
yPointsArray[1] = yB;
yPointsArray[2] = yC;

Arrays.sort(xPointsArray);
Arrays.sort(yPointsArray); // both arrays must be sorted for the min/max lookups below

int xMin = xPointsArray[0];
int yMin = yPointsArray[0];
int xMax = xPointsArray[xPointsArray.length - 1];
int yMax = yPointsArray[yPointsArray.length - 1]; // read yMax from yPointsArray, not xPointsArray

int e = (xMax - xMin) / 100; // for ray calculations

int width = mapView.getWidth();
int height = mapView.getHeight();

if (pPoint.x < xMin || pPoint.x > xMax || pPoint.y > yMin || pPoint.y < yMax) {
    DisplayInfoMessage(pPoint.x + " < " + xMin + " AND " + pPoint.x + " > " + xMax + " || " + pPoint.y + " < " + yMin + " AND " + pPoint.y + " > " + yMax);
    // DisplayInfoMessage("Minimum is: " + yPointsArray[0] + " and the maximum is: " + yPointsArray[yPointsArray.length - 1]);
} else {
    GeoPoint start_point = new GeoPoint(xMin - e, pPoint.y);
    Point start_point_container = new Point();
    mapView.getProjection().toPixels(start_point, start_point_container);

    int a, b, c, tx, ty;
    int d1, d2, hd;
    int ix, iy;
    float r;

    // calculating the vector for the 1st line
    tx = xB - xA;
    ty = yB - yA;

    // equation for the 1st line
    a = ty;
    b = tx;
    c = xA * a - yA * b;

    // get distances from the line for line 2
    d1 = a * xB + b * yB + c;
    d2 = a * pPoint.x + b * pPoint.y + c;

    DisplayInfoMessage("You clicked inside the triangle!" + "TRIANGLE POINTS: A(" + xA + "," + yA + ") B(" + xB + "," + yB + ") C(" + xC + "," + yC + ")");
}
pPoint holds the coordinates of the point the user clicked. I hope I have explained my problem well enough. Can someone give me some help with this? Appreciated!
I'm not an Android developer, but I see that android.graphics.drawable.shapes.Shape lacks the contains() method found in java.awt.Shape. It appears you'll have to develop your own test, as suggested in the article you cited. In addition, you might want to look at crossing/winding number algorithms.
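As a rough illustration, a standard even-odd (crossing number) test might look like the following sketch, assuming the polygon is given as parallel coordinate arrays:

// count how often a horizontal ray from (px, py) crosses a polygon side;
// an odd number of crossings means the point is inside
static boolean contains(int[] xs, int[] ys, int px, int py) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
        if ((ys[i] > py) != (ys[j] > py)
                && px < (double) (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
            inside = !inside; // each crossing toggles inside/outside
        }
    }
    return inside;
}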
But how is it possible to subtract 2 points?
Subtraction of vectors is well defined, and easily implemented in Java. Given two points as vectors, the components of the difference represent the tangent (slope) of a line connecting the points. The example in the article implements this in the following lines:
//get tangent vector for line 1
tx = v1x2 - v1x1;
ty = v1y2 - v1y1;
The foundation for the approach shown is discussed further in Line and Segment Intersections.
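To make the subtraction itself concrete, here is a minimal sketch in plain Java; the Vec2 class is hypothetical, not part of the Android API:

// hypothetical 2D point/vector with subtraction
class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }

    // the difference of two points is the tangent vector of the line through them
    Vec2 minus(Vec2 other) { return new Vec2(x - other.x, y - other.y); }
}

// side AB of the triangle A(1,1), B(2,2) from the question:
Vec2 a = new Vec2(1, 1);
Vec2 b = new Vec2(2, 2);
Vec2 sideAB = b.minus(a); // (1, 1), i.e. the (tx, ty) tangent from the article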
