I am making a Java rigid body physics engine, and it had gone great so far, until I tried to implement rotation. I don't know where the problem is coming from. I have methods calculating the moment of inertia of convex polygons and circles, using formulas from these websites:
http://lab.polygonal.de/?p=57
http://en.wikipedia.org/wiki/List_of_moments_of_inertia
This is the code for the polygon moment of inertia:
public float momentOfInertia() {
    Vector C = centerOfMass().subtract(position); //center of mass
    Line[] sides = sides(); //sides of the polygon
    float moi = 0; //moment of inertia
    for(int i = 0; i < sides.length; i++) {
        Line l = sides[i]; //current side of polygon being looped through
        Vector p1 = C; //points 1, 2, and 3 are the points of the triangle
        Vector p2 = l.point1;
        Vector p3 = l.point2;
        Vector Cp = p1.add(p2).add(p3).divide(3); //center of mass of the triangle, or C'
        float d = new Line(C, Cp).length(); //distance between the polygon's and the triangle's centers of mass
        Vector bv = p2.subtract(p1); //vector for side b of triangle
        float b = bv.magnitude(); //scalar for length of side b
        Vector u = bv.divide(b); //unit vector for side b
        Vector cv = p3.subtract(p1); //vector for side c of triangle, only used to calculate variables a and h
        float a = cv.dot(u); //length of a in triangle
        Vector av = u.multiply(a); //vector for a in triangle
        Vector hv = cv.subtract(av); //vector for height of triangle, or h in diagram
        float h = hv.magnitude(); //length of height of triangle, or h in diagram
        float I = ((b*b*b*h)-(b*b*h*a)+(b*h*a*a)+(b*h*h*h))/36; //moment of inertia of the individual triangle
        float M = (b*h)/2; //mass (here, area) of the triangle
        moi += I+M*d*d; //term of the sigma series on the website (parallel axis theorem)
    }
    return moi;
}
And this is for the circle:
public float momentOfInertia() {
    return (float) Math.pow(radius, 2)*area()/2; //I = r^2 * m / 2, with area standing in for mass
}
I know for a fact that the area functions work fine; I have checked them. I just don't know how to check whether the moment of inertia equations are wrong.
For collision detection, I used the separating axis theorem, implemented for any combination of two polygons and circles, where it can find out whether they are colliding, the normal vector of the collision, and the contact point of the collision. These methods all work beautifully.
I might also like to say how positions are organized. Every body has a position and a shape, either a polygon or a circle. Each shape has a position, and polygons have individual vertices. So if I want to find the absolute position of a vertex of a polygon-shaped body, I need to add the positions of the body, the polygon, and the vertex itself. The center of mass equation returns an absolute position relative to the shape, with no account taken of the body. The center of mass and moment of inertia methods are in the Shape class.
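As a sketch of that lookup (the field names here are guesses based on the code in this question, not the actual API):
    // Hypothetical illustration of the position scheme described above:
    // world position = body position + shape position + local vertex
    public static Vector absoluteVertex(Body body, Polygon poly, Vector localVertex) {
        return body.position.add(poly.position).add(localVertex);
    }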
For every body, these quantities are updated from the force and torque in the body's update method, where dt is the delta time. I also rotate the polygon by the difference in rotation, because the vertices themselves store the orientation and are continually changing.
public void update(float dt) {
    if(mass != 0) {
        momentum = momentum.add(force.multiply(dt)); //p = p + F*dt
        velocity = momentum.divide(mass); //v = p/m
        position = position.add(velocity.multiply(dt)); //x = x + v*dt
        angularMomentum += torque*dt; //L = L + T*dt
        angularVelocity = angularMomentum/momentOfInertia; //w = L/I
        angle += angularVelocity*dt;
        shape.rotate(angularVelocity*dt); //rotate vertices by the change in angle
    }
}
Finally, I also have a CollisionResolver class which resolves the collision of two colliding bodies, applying the normal force and friction. Here is the class's only method, which does all of this:
public static void resolveCollision(Body a, Body b, float dt) {
    //calculate normal vector
    Vector norm = CollisionDetector.normal(a, b);
    Vector normb = norm.multiply(-1);
    //undo overlap between bodies
    float ratio1 = a.mass/(a.mass+b.mass);
    float ratio2 = b.mass/(b.mass+a.mass);
    a.position = a.position.add(norm.multiply(ratio1));
    b.position = b.position.add(normb.multiply(ratio2));
    //calculate contact point of collision and other values needed for rotation
    Vector cp = CollisionDetector.contactPoint(a, b, norm);
    Vector c = a.shape.centerOfMass().add(a.position);
    Vector cb = b.shape.centerOfMass().add(b.position);
    Vector d = cp.subtract(c);
    Vector db = cp.subtract(cb);
    //create the normal force vector from the velocity
    Vector u = norm.unit();
    Vector ub = u.multiply(-1);
    Vector F = new Vector(0, 0);
    boolean doA = a.mass != 0;
    if(doA) {
        F = a.force;
    } else {
        F = b.force;
    }
    Vector n = new Vector(0, 0);
    Vector nb = new Vector(0, 0);
    if(doA) {
        Vector Fyp = u.multiply(F.dot(u));
        n = Fyp.multiply(-1);
        nb = Fyp;
    } else {
        Vector Fypb = ub.multiply(F.dot(ub));
        n = Fypb;
        nb = Fypb.multiply(-1);
    }
    //calculate normal force for body A
    float r = a.restitution;
    Vector v1 = a.velocity;
    Vector vy1p = u.multiply(u.dot(v1));
    Vector vx1p = v1.subtract(vy1p);
    Vector vy2p = vy1p.multiply(-r);
    Vector v2 = vy2p.add(vx1p);
    //calculate normal force for body B
    float rb = b.restitution;
    Vector v1b = b.velocity;
    Vector vy1pb = ub.multiply(ub.dot(v1b));
    Vector vx1pb = v1b.subtract(vy1pb);
    Vector vy2pb = vy1pb.multiply(-rb);
    Vector v2b = vy2pb.add(vx1pb);
    //calculate friction for body A
    float mk = (a.friction+b.friction)/2;
    Vector v = a.velocity;
    Vector vyp = u.multiply(v.dot(u));
    Vector vxp = v.subtract(vyp);
    float fk = -n.multiply(mk).magnitude();
    Vector fkv = vxp.unit().multiply(fk); //friction force
    Vector vr = vxp.subtract(d.multiply(a.angularVelocity));
    Vector fkvr = vr.unit().multiply(fk); //friction torque - indicated by r for rotation
    //calculate friction for body B
    Vector vb = b.velocity;
    Vector vypb = ub.multiply(vb.dot(ub));
    Vector vxpb = vb.subtract(vypb);
    float fkb = -nb.multiply(mk).magnitude();
    Vector fkvb = vxpb.unit().multiply(fkb); //friction force
    Vector vrb = vxpb.subtract(db.multiply(b.angularVelocity));
    Vector fkvrb = vrb.unit().multiply(fkb); //friction torque - indicated by r for rotation
    //move bodies based on calculations
    a.momentum = v2.multiply(a.mass).add(fkv.multiply(dt));
    if(a.mass != 0) {
        a.velocity = a.momentum.divide(a.mass);
        a.position = a.position.add(a.velocity.multiply(dt));
    }
    b.momentum = v2b.multiply(b.mass).add(fkvb.multiply(dt));
    if(b.mass != 0) {
        b.velocity = b.momentum.divide(b.mass);
        b.position = b.position.add(b.velocity.multiply(dt));
    }
    //apply torque to bodies
    float t = (d.cross(fkvr)+d.cross(n));
    float tb = (db.cross(fkvrb)+db.cross(nb));
    if(a.mass != 0) {
        a.angularMomentum = t*dt;
        a.angularVelocity = a.angularMomentum/a.momentOfInertia;
        a.angle += a.angularVelocity*dt;
        a.shape.rotate(a.angularVelocity*dt);
    }
    if(b.mass != 0) {
        b.angularMomentum = tb*dt;
        b.angularVelocity = b.angularMomentum/b.momentOfInertia;
        b.angle += b.angularVelocity*dt;
        b.shape.rotate(b.angularVelocity*dt);
    }
}
As for the actual problem: both the circles and the polygons rotate very slowly and often in the wrong direction. I know I am throwing a lot out there, but this problem has been bugging me for a while, and I would appreciate any help I can get.
Thanks.
This answer addresses the "I just don't know how to check if the moment of inertia equations are wrong." part of the question.
There are several possible approaches, some of which you may have already tried, and they can be used in combination:
Unit testing
Take your moment of inertia code and apply it to problems with known solutions from a tutorial or textbook.
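For example, a solid square of side s about its centroid has the known result I = s^4/12 + s^4/12 = s^4/6 under your unit-density, area-as-mass convention, which makes a good first test case. A minimal JUnit-style sketch (the Polygon constructor here is hypothetical; adapt it to however you build shapes):
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    public class MomentOfInertiaTest {
        @Test
        public void squareMomentOfInertiaMatchesKnownFormula() {
            // 2x2 square centred on its centroid
            Polygon square = new Polygon(new Vector[] {
                new Vector(-1, -1), new Vector(1, -1),
                new Vector(1, 1), new Vector(-1, 1)
            });
            float s = 2.0f;
            assertEquals(s*s*s*s/6.0f, square.momentOfInertia(), 1e-4f); // s^4/6
        }
    }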
Dimensional analysis
I would recommend this anyway for any scientific or engineering program. You may have deleted comments for compactness of posted code, but they are important. Annotate each variable that represents a physical quantity with its units. Check that every expression you evaluate has the right units, based on its inputs, for its result variable. For example, in the classic equation F=ma in SI units: F is in Newtons, equivalent to kg.m/(s^2), m is in kg, a is in m/(s^2), so it all balances. Be careful with transitions between physics world coordinates and screen coordinates.
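Applied to the update method from the question, the annotations might look like this (comments only, no behaviour change); note the last line only balances if momentOfInertia really is a mass moment in kg*m^2, so if it is computed from area alone (m^4), the units only work out for a density of 1, which is exactly the kind of mismatch this technique surfaces:
    momentum = momentum.add(force.multiply(dt));        // kg*m/s + N*s        -> kg*m/s   (ok)
    velocity = momentum.divide(mass);                   // (kg*m/s) / kg       -> m/s      (ok)
    position = position.add(velocity.multiply(dt));     // m + (m/s)*s         -> m        (ok)
    angularMomentum += torque*dt;                       // kg*m^2/s + N*m*s    -> kg*m^2/s (ok)
    angularVelocity = angularMomentum/momentOfInertia;  // (kg*m^2/s)/(kg*m^2) -> 1/s      (ok)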
Program simplification
Try working first with only one instance of one very simple shape for which you can do all the calculations by hand. Since some of your problems do not relate to rotation, a circle may be a good first choice because of its symmetry. Debug that, comparing intermediate results to equivalent results from paper-and-pencil (and calculator). Gradually add more instances of the same shape, then debug a single instance of the next shape...
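A sketch of such a minimal harness for one circle (the Body/Circle constructors and field access here are guesses; adapt them to your actual API). For a unit circle your momentOfInertia() above returns pi/2, so under a constant torque of 2 the angular velocity after t seconds should be 2*t/(pi/2), which you can check step by step:
    Body ball = new Body(new Circle(1.0f), 1.0f); // radius 1, mass 1, assuming momentOfInertia is set from the shape
    ball.torque = 2.0f;
    for (int step = 1; step <= 10; step++) {
        ball.update(0.1f);
        float expected = 2.0f * (step * 0.1f) / (float) (Math.PI / 2);
        System.out.println("step " + step + ": " + ball.angularVelocity + " (expected " + expected + ")");
    }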
Deliberate error
Given that you suspect your inertia calculations, try setting arbitrary values slightly different from your calculations, and see what differences they make in the display. Are the effects similar to the problems you are seeing? If so, keep it as a hypothesis.
As a more general note, programs that do iterative simulation can be very vulnerable to accumulated floating point error. Unless you have a real need to save space, and have done enough analysis of the numerical stability of your code to be sure float is OK, I strongly recommend using double instead. This is probably not your current problem, but is something that could become an issue later.
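As a quick self-contained illustration of how float error accumulates over many small steps (not code from the question):
    float fSum = 0.0f;
    double dSum = 0.0;
    for (int i = 0; i < 1_000_000; i++) {
        fSum += 0.1f; // rounding error compounds once the accumulator grows large
        dSum += 0.1;
    }
    System.out.println(fSum); // drifts visibly away from the exact 100000
    System.out.println(dSum); // correct to many more digits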
Related
I'm working on a school project for a graphics class. My task is detecting edges in a color image; we received the suggestion to use the Canny edge detection algorithm.
I've decided to write the entire program myself in Java, because it looks easy with the given formulas. I've created a window with Java Swing, I'm reading in the input image as an sRGB image and converting it to CIELab* (because that's part of the task). I have managed to apply the Sobel kernels (Cx, Cy), which determine the partial derivatives. However, I'm stuck with the direction formula and with coding it.
My first problem is that I don't know whether I should calculate the directions in every separate color channel, or do it in one piece.
Here are the formulas for the calculations (the first is the direction, which I'm stuck with; on the right side there is the magnitude, which requires the direction theta):
Here is the source code for calculating the direction:
//Returns the gradients direction from Cx,Cy
//Returns the gradient's direction from Cx, Cy
public LabImg direction(LabImg Cx, LabImg Cy) {
    LabImg result = new LabImg(Cx.getWidth(), Cx.getHeight());
    for(int x = 0; x < result.getWidth(); x++) {
        for(int y = 0; y < result.getHeight(); y++) {
            float CxL = Cx.getPixel(x, y).getL();
            float Cxa = Cx.getPixel(x, y).getA();
            float Cxb = Cx.getPixel(x, y).getB();
            float CyL = Cy.getPixel(x, y).getL();
            float Cya = Cy.getPixel(x, y).getA();
            float Cyb = Cy.getPixel(x, y).getB();
            float dirL = (float) ((2*CxL*CyL)/((CxL*CxL)-(CyL*CyL)));
            float dira = (float) ((2*Cxa*Cya)/((Cxa*Cxa)-(Cya*Cya)));
            float dirb = (float) ((2*Cxb*Cyb)/((Cxb*Cxb)-(Cyb*Cyb)));
            //float dir = (2*CxL*CyL+Cxa*Cya+Cxb*Cyb)/((CxL*CxL+Cxa*Cxa+Cxb*Cxb)-(CyL*CyL+Cya*Cya+Cyb*Cyb));
            result.setLab(x, y, dirL, dira, dirb);
        }
    }
    return result;
}
LabImg is a data type, which contains the size of the image, a 2D array of pixel values, and a buffered image.
If you want to do color edge detection, you will need to process each color channel separately, finding gradient directions for the three channels independently.
Secondly, you can compute the magnitude and direction as:
magnitude = Math.sqrt(Xgrad*Xgrad + Ygrad*Ygrad)
theta = Math.atan2(Ygrad,Xgrad)
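Putting the two together, a per-channel direction pass might look like the following sketch, which reuses the LabImg accessors from your code and replaces the tan-based formula with Math.atan2 (which handles a zero x-gradient and resolves the quadrant):
    public LabImg direction(LabImg Cx, LabImg Cy) {
        LabImg result = new LabImg(Cx.getWidth(), Cx.getHeight());
        for (int x = 0; x < result.getWidth(); x++) {
            for (int y = 0; y < result.getHeight(); y++) {
                // theta = atan2(Ygrad, Xgrad) for each channel
                float dirL = (float) Math.atan2(Cy.getPixel(x, y).getL(), Cx.getPixel(x, y).getL());
                float dira = (float) Math.atan2(Cy.getPixel(x, y).getA(), Cx.getPixel(x, y).getA());
                float dirb = (float) Math.atan2(Cy.getPixel(x, y).getB(), Cx.getPixel(x, y).getB());
                result.setLab(x, y, dirL, dira, dirb);
            }
        }
        return result;
    }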
I have an image (basically, I get raw image data as 1024x1024 pixels) and the position in lat/lon of the center pixel of the image.
Each pixel represents the same fixed pixel scale in meters (e.g. 30m per pixel).
Now, I would like to draw the image onto a map which uses the coordinate reference system "EPSG:4326" (WGS84).
When I draw it by defining just the corners of the image in lat/lon, based on an "image size in pixels * pixel scale" calculation and converting the distances from the center point into lat/lon coordinates for each corner, the image is not correctly drawn onto the map.
By "not correctly drawn" I mean that the image seems to be shifted, and that the contents of the image are not at the map locations where I expected them to be.
I suppose this is the case because I "mix" a pixel-scaled image and an "EPSG:4326" coordinate reference system.
Now, with the information I have given, can I transform the whole pixel matrix from its fixed pixel scale basis to a new pixel matrix in the "EPSG:4326" coordinate reference system, using GeoTools?
Of course, the transformation must depend on the given center position in lat/lon and on the pixel scale.
I wonder if using something like this would point me in the right direction:
MathTransform transform = CRS.findMathTransform(DefaultGeocentricCRS.CARTESIAN, DefaultGeographicCRS.WGS84, true);
DirectPosition2D srcDirectPosition2D = new DirectPosition2D(DefaultGeocentricCRS.CARTESIAN, degreeLat.getDegree(), degreeLon.getDegree());
DirectPosition2D destDirectPosition2D = new DirectPosition2D();
transform.transform(srcDirectPosition2D, destDirectPosition2D);
double transX = destDirectPosition2D.x;
double transY = destDirectPosition2D.y;
int kmPerPixel = mapImage.getWidth() / 1024; // It is known to me that my map is 1024x1024km ...
double x = zeroPointX + ((transX * 0.001) * kmPerPixel);
double y = zeroPointY + (((transY * -1) * 0.001) * kmPerPixel);
(got this code from another SO thread and already modified it a little bit, but still wonder if this is the correct starting point for my problem.)
I only suppose that my original image coordinate reference system is of the type DefaultGeocentricCRS.CARTESIAN. Can someone confirm this?
And from here on, is this the correct way to start using GeoTools for this kind of problem, or am I on the completely wrong path?
Additionally, I would like to add that this would be used in a quite dynamic system: the image updates at about 10Hz, and the transformations have to be performed accordingly often.
Again, is this initial thought of mine leading to a solution, or do you have other solutions for solving my problem?
Thank you very much,
Kiamur
This is not as simple as it might sound. You are essentially trying to define an area on a sphere (ellipsoid, technically) using a flat square, so there is no "correct" way to do it, and you will always end up with some distortion. Without knowing exactly where your image came from there is no way to answer this exactly, but the following code provides you with three different possible answers:
The first two make use of GeoTools' GeodeticCalculator to calculate the corner points using bearings and distances. These are the blue "square" and the green "square" above. The blue is calculating the corners directly while the green calculates the edges and infers the corners from the intersections (that's why it is squarer).
final int width = 1024, height = 1024;
GeometryFactory gf = new GeometryFactory();
Point centre = gf.createPoint(new Coordinate(0, 51));
WKTWriter writer = new WKTWriter();
//direct method
GeodeticCalculator calc = new GeodeticCalculator(DefaultGeographicCRS.WGS84);
calc.setStartingGeographicPoint(centre.getX(), centre.getY());
double height2 = height/2.0;
double width2 = width/2.0;
double dist = Math.sqrt(height2*height2 + width2*width2);
double bearing = 45.0;
Coordinate[] corners = new Coordinate[5];
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, dist*1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    corners[i] = new Coordinate(corner.getX(), corner.getY());
    bearing += 90.0;
}
corners[4] = corners[0];
Polygon bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
double[] edges = new double[4];
bearing = 0;
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, height2*1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    if (i%2 == 0) {
        edges[i] = corner.getY();
    } else {
        edges[i] = corner.getX();
    }
    bearing += 90.0;
}
corners[0] = new Coordinate(edges[1], edges[0]);
corners[1] = new Coordinate(edges[1], edges[2]);
corners[2] = new Coordinate(edges[3], edges[2]);
corners[3] = new Coordinate(edges[3], edges[0]);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
Another way to do this is to transform the centre point into a projection that is "flatter" and use simple addition to calculate the corners and then reverse the transformation. To do this we can use the AUTO projection defined by the OGC WMS Specification to generate an Orthographic projection centred on our point, this gives the red "square" which is very similar to the blue one.
String code = "AUTO:42003," + centre.getX() + "," + centre.getY();
// System.out.println(code);
CoordinateReferenceSystem auto = CRS.decode(code);
// System.out.println(auto);
MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
MathTransform rtransform = CRS.findMathTransform(auto, DefaultGeographicCRS.WGS84);
Point g = (Point)JTS.transform(centre, transform);
width2 *=1000.0;
height2 *= 1000.0;
corners[0] = new Coordinate(g.getX()-width2,g.getY()-height2);
corners[1] = new Coordinate(g.getX()+width2,g.getY()-height2);
corners[2] = new Coordinate(g.getX()+width2,g.getY()+height2);
corners[3] = new Coordinate(g.getX()-width2,g.getY()+height2);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
bbox = (Polygon)JTS.transform(bbox, rtransform);
System.out.println(writer.write(bbox));
Which solution to use is a matter of taste, and depends on where your image came from, but I suspect that either the red or the blue will be best. If you need to do this at 10Hz then you will need to test them for speed, but I suspect that transforming the images will be the bottleneck.
Once you have your bounding box set up to your satisfaction, you can convert your (unreferenced) image to a georeferenced coverage using:
GridCoverageFactory factory = CoverageFactoryFinder.getGridCoverageFactory(null);
GridCoverage2D gc = factory.create("name", image, new ReferencedEnvelope(bbox.getEnvelopeInternal(), DefaultGeographicCRS.WGS84));
String fileName = "myImage.tif";
AbstractGridFormat format = GridFormatFinder.findFormat(fileName);
File out = new File(fileName);
GridCoverageWriter writer = format.getWriter(out);
try {
    writer.write(gc, null);
    writer.dispose();
} catch (IllegalArgumentException | IOException e) {
    e.printStackTrace();
}
EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach. The new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
Addressing the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe it is an issue with the shadow rays that I am sending out. I suspect that it has to do with the angle of the triangle/plane relative to the camera.
Here is my process for ray-tracing triangles, which is based on the process described on this website.
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());
    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }
    double intersectionDistance = numerator / denominator;
    //intersectionDistance < 0 means the "intersection" is behind the ray (pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();
    //Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);
    double denominator = (uv * uv) - (uu * vv);
    //get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think that I am having some issue with my coloring, and that it has to do with the normals of the various triangles. Here is the equation I am considering when building my lighting model for spheres and triangles:
Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr*ca + cr*cl*max(0, n . l) + cl*cp*max(0, e . r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l
    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r
    double visibilityTerm = isInShadow ? 0 : 1;
    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);
    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);
    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);
    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle, and about -0.707 for the blue triangle, so I'm not sure if the normal pointing the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal));), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a, b, c), and the edges (u, v) are computed using a-b and c-b respectively (these are also used for calculating the plane/triangle normal). A Vector is made up of an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
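(From that description, the edge helpers presumably look something like the sketch below; getA()/getB()/getC() are assumed accessors for the three vertices.)
    private Vector getU() { return getA().subtract(getB()); } // u = a - b
    private Vector getV() { return getC().subtract(getB()); } // v = c - b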
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here was fixed.
It is a common problem in ray tracing of scenes including planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light is incident on the object from the direction its outer-facing normal is pointing. This can be true only for half the possible orientations of your triangle, and you've been unlucky enough to orient it with its normal facing away from the light.
Technically speaking, in a physical world your triangle would not have zero volume. It would be composed of some layer of material, just a thin one, and on either side it would have a proper normal pointing outwards. Assigning a single normal is a simplification that's fair to take, because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or hitting a surface that could not possibly be in its way. That's why they give an erroneous result: the negative value will subtract light from other sources, and depending on the magnitude and implementation it may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either the one we're using or its negative, we can fix all such cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top use the extra information (the sign) to choose between two different materials (front and back) that you may provide if desired. Alternatively, they can be set to not draw back faces at all, because in solid objects they will be overdrawn by front faces anyway (known as face culling), saving about half of the numerical work.
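A minimal sketch of that fix in your own shading code (using only the Vector methods that already appear in the question):
    // Clamp with an absolute value where a positive dot product is implicitly assumed...
    double angleBetweenLightAndNormal = Math.abs(directionToLight.dotProduct(normal));
    // ...or, equivalently for lighting purposes, flip the normal to face the light first:
    if (directionToLight.dotProduct(normal) < 0) {
        normal = normal.multiply(-1);
    }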
The code I'm using for my collision detection is this:
(Note: Vector3f is part of the LWJGL library.)
(Note 2: Tri is a class composed of three of LWJGL's Vector3fs: v1, v2, and v3.)
public Vector<Tri> getTrisTouching(Vector3f pos, float radius){
    Vector<Tri> tempVec = new Vector<Tri>();
    for(int i = 0; i < tris.size(); i++){
        Tri t = tris.get(i);
        Vector3f one_to_point = new Vector3f(0,0,0);
        Vector3f.sub(pos, t.v1, one_to_point); //Storing vector A->P
        Vector3f one_to_two = new Vector3f(0,0,0);
        Vector3f.sub(t.v2, t.v1, one_to_two); //Storing vector A->B
        Vector3f one_to_three = new Vector3f(0,0,0);
        Vector3f.sub(t.v3, t.v1, one_to_three); //Storing vector A->C
        float q1 = Vector3f.dot(one_to_point, one_to_two) / one_to_two.lengthSquared(); //The normalized "distance" from A to B
        float q2 = Vector3f.dot(one_to_point, one_to_three) / one_to_three.lengthSquared(); //The normalized "distance" from A to C
        if (q1 > 0 && q2 > 0 && q1 + q2 < 1){
            tempVec.add(t);
        }
    }
    return tempVec;
}
My question is how do I correctly see if a point in space is touching one of my triangles?
To test whether your point is inside the triangle, create a ray with its origin at the test point and extend it to infinity. A nice easy one is a horizontal ray (i.e. y constant, with x increasing to infinity). Then count the number of times it intersects one of your polygon edges. Zero or an even number of intersections means you are outside the triangle. The nice thing about this is that it works not just for triangles but for any polygon.
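A minimal 2D sketch of that even-odd test (for your 3D triangles you would first project the test point and the vertices onto the triangle's plane; the names here are illustrative, not from your code):
    public static boolean pointInPolygon(float px, float py, float[] xs, float[] ys) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            // Count crossings of a horizontal ray from (px, py) towards +x with edge j->i
            if ((ys[i] > py) != (ys[j] > py)
                    && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside; // an odd number of crossings means the point is inside
    }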
http://erich.realtimerendering.com/ptinpoly/
The only way I can help you is by providing you with this link.
Unfortunately, I'm not very good with LWJGL geometry, so here you are: Vector3f. Hope it helps!
I have a list of lat/long coordinates that I would like to use to calculate the area of a polygon. I can get exact results in many cases, but the larger the polygon gets, the higher the chance for error.
I am first converting the coordinates to UTM using http://www.ibm.com/developerworks/java/library/j-coordconvert/
From there, I am using http://www.mathopenref.com/coordpolygonarea2.html to calculate the area of the UTM coordinates.
private Double polygonArea(int[] x, int[] y) {
    Double area = 0.0;
    int j = x.length-1;
    for(int i = 0; i < x.length; i++) {
        area = area + (x[j]+x[i]) * (y[j]-y[i]);
        j = i;
    }
    area = area/2;
    if (area < 0)
        area = area * -1;
    return area;
}
I compare these areas to the results from the same coordinates put into Microsoft SQL Server and ArcGIS, but I cannot seem to match them exactly every time. Does anyone know of a more exact method than this?
Thanks in advance.
EDIT 1
Thank you for the comments.
Here is my code for getting the area (CoordinateConversion code is listed above on the IBM link):
private Map<Integer, GeoPoint> vertices;
private Double getArea() {
    List<Integer> xpoints = new ArrayList<Integer>();
    List<Integer> ypoints = new ArrayList<Integer>();
    CoordinateConversion cc = new CoordinateConversion();
    for(Entry<Integer, GeoPoint> itm : vertices.entrySet()) {
        GeoPoint pnt = itm.getValue();
        String temp = cc.latLon2MGRUTM(pnt.getLatitudeE6()/1E6, pnt.getLongitudeE6()/1E6);
        // Example return from CC: 02CNR0634657742
        String easting = temp.substring(5, 10);
        String northing = temp.substring(10, 15);
        xpoints.add(Integer.parseInt(easting));
        ypoints.add(Integer.parseInt(northing));
    }
    int[] x = toIntArray(xpoints);
    int[] y = toIntArray(ypoints);
    return polygonArea(x,y);
}
Here is an example list of points:
44.80016800 -106.40808100
44.80016800 -106.72123800
44.75016800 -106.72123800
44.75016800 -106.80123800
44.56699100 -106.80123800
In ArcGIS and MS SQL server I get 90847.0 Acres.
Using the code above I get 90817.4 Acres.
Another example list of points:
45.78412600 -108.51506700
45.78402600 -108.67972100
45.75512200 -108.67949400
45.75512200 -108.69962300
45.69795400 -108.69929400
In ArcGIS and MS SQL server I get 15732.9 Acres.
Using the code above I get 15731.9 Acres.
The area formula you are using is valid only on a flat plane. As the polygon gets larger, the Earth's curvature starts to have an effect, making the area larger than what you calculate with this formula. You need to find a formula that works on the surface of a sphere.
A simple Google search for "area of polygon on spherical surface" turns up a bunch of hits, of which the most interesting is Wolfram MathWorld Spherical Polygon
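For illustration, one widely used spherical approximation (from Chamberlain and Duquette's "Some Algorithms for Polygons on a Sphere", also used by several mapping libraries) can be sketched as follows; coordinates are in degrees, the result is in square metres on a spherical Earth, so expect small differences from ellipsoidal tools like ArcGIS:
    public static double sphericalPolygonArea(double[] lat, double[] lon) {
        final double R = 6378137.0; // spherical Earth radius in metres
        double sum = 0.0;
        for (int i = 0; i < lat.length; i++) {
            int j = (i + 1) % lat.length;
            // Spherical trapezoid contribution of edge i -> j
            sum += Math.toRadians(lon[j] - lon[i])
                 * (2 + Math.sin(Math.toRadians(lat[i])) + Math.sin(Math.toRadians(lat[j])));
        }
        return Math.abs(sum) * R * R / 2.0;
    }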
It turns out that UTM just isn't able to provide the extreme accuracy I was looking for. Switching projection systems to something more accurate, like Albers or State Plane, produced a much more accurate calculation.
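For anyone following the same route, a sketch of that reprojection using the GeoTools/JTS classes that appear in the GeoTools answer above (EPSG:5070, an Albers Equal Area definition covering the conterminous US, is an example choice; pick whatever equal-area projection suits your region):
    GeometryFactory gf = new GeometryFactory();
    Polygon poly = gf.createPolygon(new Coordinate[] {
        new Coordinate(-106.40808100, 44.80016800), // note (lon, lat) axis order
        new Coordinate(-106.72123800, 44.80016800),
        new Coordinate(-106.72123800, 44.75016800),
        new Coordinate(-106.80123800, 44.75016800),
        new Coordinate(-106.80123800, 44.56699100),
        new Coordinate(-106.40808100, 44.80016800)  // close the ring
    });
    MathTransform toAlbers = CRS.findMathTransform(DefaultGeographicCRS.WGS84, CRS.decode("EPSG:5070"), true);
    Geometry projected = JTS.transform(poly, toAlbers);
    double acres = projected.getArea() / 4046.8564224; // square metres to acres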