I am trying to generate a list of one hundred thousand random points in 3D space within a 3D boundary, without having any of the points occupy the same position. I'm essentially trying to create a non-repeating Vector3 generator. Is there any efficient way of doing this? Also, it is okay if these points are not evenly distributed; it is actually preferable if they are somewhat clustered here and there, just as long as they do not occupy the same position.
To clarify: I am not trying to generate 300,000 unique coordinate values, but rather 100,000 unique 3D points. So the vectors (0, 0, 0) and (0, 0, 1) are acceptable together.
But (4, 4, 4) and (4, 4, 4) are unacceptable.
public class Vector3
{
    public float x;
    public float y;
    public float z;

    Vector3(float x, float y, float z)
    {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public static ArrayList<Vector3> generateVector3s()
    {
        ArrayList<Vector3> tempVector3List = new ArrayList<>();
        for (int i = 0; i < 100000; i++)
        {
            tempVector3List.add(new Vector3(RANDOMVALUE, RANDOMVALUE, RANDOMVALUE));
        }
        return tempVector3List;
    }
}
First, generation of Random Numbers in Java:
Geeks for Geeks on Random Numbers in Java
What I might do is generate a point, then use the triplet as a key into a HashSet. If the key already exists, the point is a duplicate, so toss it and try again. This will ensure uniqueness and should be relatively efficient.
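As a rough illustration, here is a minimal sketch of that idea, assuming the question's Vector3 class is visible from the same package and assuming integer coordinates inside an arbitrary [0, 1000) boundary, so that exact equality of triplets is well defined:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UniquePointGenerator {
    public static ArrayList<Vector3> generateVector3s() {
        Random random = new Random();
        Set<String> seen = new HashSet<>();
        ArrayList<Vector3> points = new ArrayList<>();
        while (points.size() < 100000) {
            int x = random.nextInt(1000);
            int y = random.nextInt(1000);
            int z = random.nextInt(1000);
            // add() returns false if this triplet was already generated
            if (seen.add(x + "," + y + "," + z)) {
                points.add(new Vector3(x, y, z));
            }
        }
        return points;
    }
}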
If you want your datapoints spread equally about your problem space, you might need a fancier algorithm.
It sounds like you are looking for a 3D version of Poisson disc sampling, which guarantees that no two points are closer together than a given minimum distance, so points never pile up in the same spot.
The algorithm is fairly complicated so I won't describe it here fully, but the basic idea is that you choose a seed point and grow outwards by always adding points that are at least a given distance away from the existing points.
You can find detailed descriptions and implementations here:
https://bost.ocks.org/mike/algorithms/
http://gregschlom.com/devlog/2014/06/29/Poisson-disc-sampling-Unity.html
https://github.com/kchapelier/poisson-disk-sampling
https://youtu.be/7WcmyxyFO7o
https://youtu.be/flQgnCUxHlw
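For a feel of the approach, here is a deliberately naive sketch of that grow-outwards idea in 3D, without the background grid the links above use for speed. All names and parameters here are illustrative assumptions, the distance check is O(n) per candidate, and the loop can run forever if you ask for more points than fit in the box:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class NaivePoissonSketch {
    // Grows a point set outwards so no two points are closer than minDist.
    // WARNING: never terminates if 'target' points cannot fit in the box.
    static List<double[]> generate(int target, double minDist, double bound, Random rng) {
        List<double[]> points = new ArrayList<>();
        points.add(new double[] { bound / 2, bound / 2, bound / 2 }); // seed point
        while (points.size() < target) {
            // Pick a random existing point and propose a candidate near it
            double[] base = points.get(rng.nextInt(points.size()));
            double[] cand = new double[3];
            for (int i = 0; i < 3; i++) {
                cand[i] = base[i] + (rng.nextDouble() * 4 - 2) * minDist;
            }
            if (!inBounds(cand, bound)) continue;
            // O(n) distance check; the grid described in the links makes this O(1)
            boolean farEnough = true;
            for (double[] p : points) {
                double dx = p[0] - cand[0], dy = p[1] - cand[1], dz = p[2] - cand[2];
                if (dx * dx + dy * dy + dz * dz < minDist * minDist) {
                    farEnough = false;
                    break;
                }
            }
            if (farEnough) points.add(cand);
        }
        return points;
    }

    static boolean inBounds(double[] p, double bound) {
        for (double v : p) {
            if (v < 0 || v > bound) return false;
        }
        return true;
    }
}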
EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I am having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach, and the new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
Addressing the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe that it is an issue with the shadow rays that I am sending out. I suspect that it has to do with the angle of the triangle/plane relative to the camera.
Here is my process for ray-tracing triangles, which is based on the process described on this website.
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }

    double intersectionDistance = numerator / denominator;

    //intersectionDistance < 0 means the "intersection" is behind the ray (pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    //Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);

    double denominator = (uv * uv) - (uu * vv);

    //Get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think that I am having some issue with my coloring, and that it has to do with the normals of the various triangles. Here is the equation I am considering when building my lighting model for spheres and triangles (it is reproduced as the comment at the top of the code below):
Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr * ca + cr * cl * max(0, n · l) + cl * cp * max(0, e · r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r
    double visibilityTerm = isInShadow ? 0 : 1;

    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);

    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle, and about -0.707 for the blue triangle, so I'm not sure if the normal being the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal))), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a, b, c), and the edges (u, v) are computed using a-b and c-b respectively (these are also used for calculating the plane/triangle normal). A Vector is made up of an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here was fixed.
It is a common problem in ray tracing of scenes including planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light is incident on the object from the direction in which the outward-facing normal points. This can be true for only half the possible orientations of your triangle, and you've been unlucky enough to orient it with its normal facing away from the light.
Technically speaking, in a physical world your triangle would not have zero volume: it would be composed of some layer of material, just a very thin one, and on either side it would have a proper normal pointing outwards. Assigning a single normal is a simplification that's fair to take, because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It is as if light were coming from the inside of the object, or as if it hit a surface that could not possibly be in its way. That's why they give an erroneous result. The negative value will subtract light from other sources, and depending on the magnitude and implementation may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either the one we're using or its negative, we can fix both cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top of that use the additional information (the sign) to choose between two different materials (front and back) that you may provide if desired. Alternatively, they can be set not to draw the back faces at all, because in a solid object they will be overdrawn by front faces anyway (this is known as face culling), saving about half of the numerical work.
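In terms of the code in the question, a minimal sketch of that fix is to flip the normal toward the light before evaluating the model, rather than taking the absolute value afterwards, so the reflection vector r stays consistent with the corrected normal (names are taken from the question's code):

//Flip the geometric normal so it faces the incoming light before using it.
//Unlike Math.abs() on the dot product alone, this also keeps the reflection
//vector r consistent with the corrected normal.
Vector normal = getPlaneNormal();
if (normal.dotProduct(directionToLight) < 0)
{
    normal = normal.multiply(-1); //use the front-facing side of the triangle
}
double angleBetweenLightAndNormal = directionToLight.dotProduct(normal); //now >= 0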
I'm trying to create a large 2D Array int[][] from a LinkedHashMap which contains a number of smaller Arrays for an A* Pathfinder I'm working on.
The Map the Pathfinder is using is streamed in smaller chunks to the client and converted into a simplified version for the Pathfinder.
Map<Coord, int[][]> pfmapcache = new LinkedHashMap<Coord, int[][]>(9, 0.75f, true);
The Coords look like this: Coord(0,0), Coord(-1,0), etc., and the int[][] arrays are always int[100][100] big.
Now I would like to create a new large int[][] that would encompass all the smaller Array where the small Array Coord(0,0) would be in the center of the new Large Array.
int[][] largearray = [-1,1][0,1][1,1]
[-1,0][0,0][1,0]
[-1,-1][0,-1][1,-1]
So that the large array would be int[300][300] big in this example.
2. I would like to expand the new large Array if a new small array gets added to the pfmapcache.
int[][] largearray = [][][1,2]
[-1,1][0,1][1,1]
[-1,0][0,0][1,0]
[-1,-1][0,-1][1,-1]
I don't have to store the smaller arrays in pfmapcache; I could combine them as they are created, two small arrays at a time, and so on. But with the negative positions of the arrays relative to the original, I have no idea how to combine them while preserving their relative position.
First time posting here, so if I need to clarify something please let me know.
You're wondering how to use your existing pathfinder algo with a chunked map.
This is when you need to place an abstraction layer between your data representation, and your data usage (like a Landscape class).
Q: Does a pathfinding algorithm need to know it works on a grid, on a chunked grid, on a sparse matrix, or on a more exotic representation?
A: No. A Pathfinder only needs to know one thing: 'Where can I get from here?'
Ideally
you should drop any reference to the fact that your world is on a grid by working with only a class like:
public interface IdealLandscape {
    Map<Point, Integer> neighbours(Point location); // returns all neighbours, and the cost to get there
}
Easy alternative
However, I understand your existing implementation 'knows' about grids, with the added value that adjacency is implicit and you're working with points as (x, y). You lost this when introducing chunks, so working with the grid directly doesn't work anymore. So let's make the abstraction as painless as possible. Here's the plan:
1. Introduce a Landscape class
public interface Landscape {
    public int getHeight(int x, int y); // Assuming you're storing height in your int[][] map?
}
2. Refactor your Pathfinder
It's reaaally easy:
just replace map[i][j] with landscape.getHeight(i, j)
3. Test your refactoring
Use a very simple GridLandscape implementation like:
public class GridLandscape implements Landscape {
    int[][] map;

    public GridLandscape(...) {
        map = // Build it somehow
    }

    @Override
    public int getHeight(int x, int y) {
        return map[x][y]; // Maybe check bounds here?
    }
}
4. Use your ChunkedGridLandscape
Now that your map is abstracted away and you know your Pathfinder works on it, you can replace it with your chunked map!
public class ChunkedGridLandscape implements Landscape {
    private static final int CHUNK_SIZE = 100; // your chunks are int[100][100]

    Map<Coord, int[][]> mapCache = new LinkedHashMap<>(9, 0.75f, true);
    Coord centerChunkCoord;

    public ChunkedGridLandscape(Map<Coord, int[][]> pfmapcache, Coord centerChunkCoord) {
        this.mapCache = pfmapcache;
        this.centerChunkCoord = centerChunkCoord;
    }

    @Override
    public int getHeight(int x, int y) {
        // Compute chunk coord (floorDiv so negative coordinates land in the right chunk)
        int chunkX = Math.floorDiv(x, CHUNK_SIZE) - centerChunkCoord.getX();
        int chunkY = Math.floorDiv(y, CHUNK_SIZE) - centerChunkCoord.getY();
        Coord chunkCoord = new Coord(chunkX, chunkY);

        // Now retrieve the correct chunk
        int[][] chunk = mapCache.get(chunkCoord); // Careful: is the chunk already loaded?

        // Now retrieve the height within the chunk (floorMod makes the index positive again)
        int xInChunk = Math.floorMod(x, CHUNK_SIZE);
        int yInChunk = Math.floorMod(y, CHUNK_SIZE);

        // We have everything!
        return chunk[xInChunk][yInChunk];
    }
}
Gotcha: Your Coord class NEEDS to have equals and hashCode properly overridden, or the map lookups will fail!
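For reference, a minimal sketch of such a Coord (immutable, with value-based equals and hashCode; the field and accessor names are assumed from the question's usage):

public final class Coord {
    private final int x, y;

    public Coord(int x, int y) { this.x = x; this.y = y; }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Coord)) return false;
        Coord c = (Coord) o;
        return x == c.x && y == c.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y; // combine both fields; HashMap lookups rely on this
    }
}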
5. It just works
This should just immediately work with your pathfinder. Enjoy!
I have a set of rectangles and I would like to "reduce" the set so I have the fewest number of rectangles to describe the same area as the original set. If possible, I would like it to also be fast, but I am more concerned with getting the number of rectangles as low as possible. I have an approach now which works most of the time.
Currently, I start at the top-left most rectangle and see if I can expand it out right and down while keeping it a rectangle. I do that until it can't expand anymore, remove and split all intersecting rectangles, and add the expanded rectangle back in the list. Then I start the process again with the next top-left most rectangle, and so on. But in some cases, it doesn't work. For example:
With this set of three rectangles, the correct solution would end up with two rectangles, like this:
However, in this case, my algorithm starts by processing the blue rectangle. It expands downwards and splits the yellow rectangle (correctly). But then, when the remainder of the yellow rectangle is processed, instead of expanding downwards, it expands right first and takes back the portion that was previously split off. Then the last rectangle is processed and it can't expand right or down, so the original set of rectangles is left. I could tweak the algorithm to expand down first and then right; that would fix this case, but it would cause the same problem in a similar scenario that was flipped.
Edit: Just to clarify, the rectangles in the original set do not overlap and do not have to be connected. And if a subset of rectangles is connected, the polygon which completely covers them can have holes in it.
Despite the title to your question, I think you’re actually looking for the minimum dissection into rectangles of a rectilinear polygon. (Jason’s links are about minimum covers by rectangles, which is quite a different problem.)
David Eppstein discusses this problem in section 3 of his 2010 survey article Graph-Theoretic Solutions to Computational Geometry Problems, and he gives a nice summary in this answer on mathoverflow.net:
The idea is to find the maximum number of disjoint axis-parallel diagonals that have two concave vertices as endpoints, split along those, and then form one more split for each remaining concave vertex. To find the maximum number of disjoint axis-parallel diagonals, form the intersection graph of the diagonals; this graph is bipartite so its maximum independent set can be found in polynomial time by graph matching techniques.
Here’s my gloss on this admirably terse description, using figure 2 from Eppstein’s article. Suppose we have a rectilinear polygon, possibly with holes.
When the polygon is dissected into rectangles, each of the concave vertices must be met by at least one edge of the dissection. So we get the minimum dissection if as many of these edges as possible do double-duty, that is, they connect two of the concave vertices.
So let’s draw the axis-parallel diagonals between two concave vertices that are contained entirely within the polygon. (‘Axis-parallel’ means ‘horizontal or vertical’ here, and a diagonal of a polygon is a line connecting two non-adjacent vertices.) We want to use as many of these lines as possible in the dissection as long as they don’t intersect.
(If there are no axis-parallel diagonals, the dissection is trivial—just make a cut from each concave vertex. Or if there are no intersections between the axis-parallel diagonals then we use them all, plus a cut from each remaining concave vertex. Otherwise, read on.)
The intersection graph of a set of line segments has a node for every line segment, and an edge joins two nodes if the lines cross. Here’s the intersection graph for the axis-parallel diagonals:
It’s bipartite with the vertical diagonals in one part, and the horizontal diagonals in the other part. Now, we want to pick as many of the diagonals as possible as long as they don’t intersect. This corresponds to finding the maximum independent set in the intersection graph.
Finding the maximum independent set in a general graph is an NP-hard problem, but in the special case of a bipartite graph, König’s theorem shows that it’s equivalent to the problem of finding a maximum matching, which can be solved in polynomial time, for example by the Hopcroft–Karp algorithm. A given graph can have several maximum matchings, but any of them will do, as they all have the same size. In the example, all the maximum matchings have three pairs of vertices, for example {(2, 4), (6, 3), (7, 8)}:
(Other maximum matchings in this graph include {(1, 3), (2, 5), (7, 8)}; {(2, 4), (3, 6), (5, 7)}; and {(1, 3), (2, 4), (7, 8)}.)
To get from a maximum matching to the corresponding minimum vertex cover, apply the proof of König’s theorem. In the matching shown above, the left set is L = {1, 2, 6, 7}, the right set is R = {3, 4, 5, 8}, and the set of unmatched vertices in L is U = {1}. There is only one alternating path starting in U, namely 1–3–6, so the set of vertices in alternating paths is Z = {1, 3, 6} and the minimum vertex cover is thus K = (L \ Z) ∪ (R ∩ Z) = {2, 3, 7}, shown in red below, with the maximum independent set in green:
Translating this back into the dissection problem, this means that we can use five axis-parallel diagonals in the dissection:
Finally, make a cut from each remaining concave vertex to complete the dissection:
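For concreteness, here is a minimal Java sketch (with illustrative names, and assuming a maximum matching has already been computed, e.g. by Hopcroft–Karp) of the König step described above: walk the alternating paths from the unmatched left vertices to get Z, then read off the independent set as the complement of the cover K = (L \ Z) ∪ (R ∩ Z):

import java.util.*;

public class KoenigSketch {
    public static Set<Integer> maximumIndependentSet(
            Set<Integer> left, Set<Integer> right,
            Map<Integer, List<Integer>> adj,      // neighbours of each left vertex
            Map<Integer, Integer> matching) {     // matched partner (stored both ways)
        // Z = vertices reachable from unmatched left vertices by alternating paths
        Set<Integer> z = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        for (int u : left) {
            if (!matching.containsKey(u)) { z.add(u); queue.add(u); }
        }
        while (!queue.isEmpty()) {
            int u = queue.poll();                 // u is always a left vertex here
            for (int v : adj.getOrDefault(u, Collections.emptyList())) {
                if (!z.add(v)) continue;          // reach v via a non-matching edge
                Integer w = matching.get(v);      // then follow the matching edge back left
                if (w != null && z.add(w)) queue.add(w);
            }
        }
        // Complement of the cover: (L ∩ Z) ∪ (R \ Z) is the maximum independent set
        Set<Integer> independent = new HashSet<>();
        for (int u : left) if (z.contains(u)) independent.add(u);
        for (int v : right) if (!z.contains(v)) independent.add(v);
        return independent;
    }
}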
Today I found an O(N^5) solution for this problem, and I will share it here.
For the first step, you need a way to get the sum of any rectangle in the matrix in O(1). That's easy to do with a 2D prefix-sum table (see sumed and subrec in the code below).
Now for the second step, you need to know dynamic programming. The idea is to take a rectangle and break it into smaller pieces: if the rectangle is empty, return 0; if it's completely filled, return 1; otherwise try every horizontal and vertical cut and take the best result.
There are O(N^4) states describing a rectangle, and each state tries O(N) cuts, so you get an O(N^5) algorithm.
Here's my code. I think it will help.
The input is simple: N, M (the size of the matrix).
After that, the following N lines will have the 1s and 0s.
Example:
4 9
010000010
111010111
101111101
000101000
#include <bits/stdc++.h>
#define MAX 51

int tab[MAX][MAX];
int N, M;
int sumed[MAX][MAX]; // 2D prefix sums of tab

int t(int x, int y) {
    if (x < 0 || y < 0) return 0;
    return sumed[x][y];
}

// Sum of the cells in the rectangle (x1,y1)-(x2,y2), in O(1)
int subrec(int x1, int y1, int x2, int y2) {
    return t(x2, y2) - t(x2, y1 - 1) - t(x1 - 1, y2) + t(x1 - 1, y1 - 1);
}

int resp[MAX][MAX][MAX][MAX];  // memoized answers
bool exist[MAX][MAX][MAX][MAX];

// Minimum number of rectangles needed to cover the 1s in (x1,y1)-(x2,y2)
int dp(int x1, int y1, int x2, int y2) {
    if (exist[x1][y1][x2][y2]) return resp[x1][y1][x2][y2];
    exist[x1][y1][x2][y2] = true;
    int soma = subrec(x1, y1, x2, y2);
    int area = (x2 - x1 + 1) * (y2 - y1 + 1);
    if (soma == area) { return resp[x1][y1][x2][y2] = 1; } // completely filled
    if (!soma) { return 0; }                               // completely empty
    int best = 1000000;
    // try every horizontal cut
    for (int i = x1; i != x2; ++i) {
        best = std::min(best, dp(x1, y1, i, y2) + dp(i + 1, y1, x2, y2));
    }
    // try every vertical cut
    for (int i = y1; i != y2; ++i) {
        best = std::min(best, dp(x1, y1, x2, i) + dp(x1, i + 1, x2, y2));
    }
    return resp[x1][y1][x2][y2] = best;
}

// Re-walk the DP to print the rectangles of one optimal solution
void backtracking(int x1, int y1, int x2, int y2) {
    int soma = subrec(x1, y1, x2, y2);
    int area = (x2 - x1 + 1) * (y2 - y1 + 1);
    if (soma == area) { std::cout << x1 + 1 << " " << y1 + 1 << " " << x2 + 1 << " " << y2 + 1 << "\n"; return; }
    if (!soma) { return; }
    int obj = resp[x1][y1][x2][y2];
    for (int i = x1; i != x2; ++i) {
        int ans = dp(x1, y1, i, y2) + dp(i + 1, y1, x2, y2);
        if (ans == obj) {
            backtracking(x1, y1, i, y2);
            backtracking(i + 1, y1, x2, y2);
            return;
        }
    }
    for (int i = y1; i != y2; ++i) {
        int ans = dp(x1, y1, x2, i) + dp(x1, i + 1, x2, y2);
        if (ans == obj) {
            backtracking(x1, y1, x2, i);
            backtracking(x1, i + 1, x2, y2);
            return;
        }
    }
}

int main()
{
    std::cin >> N >> M;
    for (int i = 0; i != N; ++i) {
        std::string s;
        std::cin >> s;
        for (int j = 0; j != M; ++j) {
            if (s[j] == '1') tab[i][j]++;
        }
    }
    // Build the prefix-sum table
    for (int i = 0; i != N; ++i) {
        int val = 0;
        for (int j = 0; j != M; ++j) {
            val += tab[i][j];
            sumed[i][j] = val;
            if (i) sumed[i][j] += sumed[i - 1][j];
        }
    }
    std::cout << dp(0, 0, N - 1, M - 1) << std::endl;
    backtracking(0, 0, N - 1, M - 1);
}
Hello, I am fairly new to programming and I am trying, in Java, to create a function that creates recursive triangles from a larger triangle's midpoints between corners, where the new triangles' points are deviated from their normal positions in the y-value. See the pictures below for a visualization.
The first picture shows the progression of the recursive algorithm without any deviation (orders 0, 1, 2) and the second picture shows it with deviation (orders 0, 1).
I have managed to produce a working piece of code that creates just what I want for the first couple of orders, but when we reach order 2 and above I run into the problem that the smaller triangles don't use the same midpoints, so the result looks like the picture below.
So I need help with a way to store and call the correct midpoints for each of the triangles. I have been thinking of implementing a new class that controls the calculation of the midpoints and stores them, but as I have said, I need help with this.
Below is my current code.
The Point class stores an x and y value for a point.
lineBetween creates a line between the two selected points.
void fractalLine(TurtleGraphics turtle, int order, Point ett, Point tva, Point tre, int dev) {
    if (order == 0) {
        lineBetween(ett, tva, turtle);
        lineBetween(tva, tre, turtle);
        lineBetween(tre, ett, turtle);
    } else {
        double deltaX = tva.getX() - ett.getX();
        double deltaY = tva.getY() - ett.getY();

        double deltaXtre = tre.getX() - ett.getX();
        double deltaYtre = tre.getY() - ett.getY();

        double deltaXtva = tva.getX() - tre.getX();
        double deltaYtva = tva.getY() - tre.getY();

        Point one;
        Point two;
        Point three;

        double xt = (deltaX / 2) + ett.getX();
        double yt = (deltaY / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        one = new Point(xt, yt);

        xt = (deltaXtre / 2) + ett.getX();
        yt = (deltaYtre / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        two = new Point(xt, yt);

        xt = (deltaXtva / 2) + tre.getX();
        yt = (deltaYtva / 2) + tre.getY() + RandomUtilities.randFunc(dev);
        three = new Point(xt, yt);

        fractalLine(turtle, order - 1, one, tva, three, dev / 2);
        fractalLine(turtle, order - 1, ett, one, two, dev / 2);
        fractalLine(turtle, order - 1, two, three, tre, dev / 2);
        fractalLine(turtle, order - 1, one, two, three, dev / 2);
    }
}
Thanks in Advance
Victor
You can define a triangle by 3 points (vertexes). So the vertexes a, b, and c will form a triangle, and the combinations ab, ac, and bc will be the edges. So the algorithm goes:
First, start with the three vertexes a, b, and c.
Get the midpoints p1, p2, and p3 of the 3 edges, and form the 4 sets of vertexes for the 4 smaller triangles, i.e. (a,p1,p2), (b,p1,p3), (c,p2,p3) and (p1,p2,p3).
Recursively find the sub-triangles of the 4 triangles till the depth is reached.
So as a rough guide, the code goes
void findTriangles(Vertexes[] triangle, int currentDepth) {
    //Depth is reached.
    if (currentDepth == depth) {
        store(triangle);
        return;
    }
    Vertexes[] first = getFirstTriangle(triangle);
    Vertexes[] second = getSecondTriangle(triangle);
    Vertexes[] third = getThirdTriangle(triangle);
    Vertexes[] fourth = getFourthTriangle(triangle);

    findTriangles(first, currentDepth + 1);
    findTriangles(second, currentDepth + 1);
    findTriangles(third, currentDepth + 1);
    findTriangles(fourth, currentDepth + 1);
}
You have to store the relevant triangles in a Data structure.
You compute the midpoint of each edge again and again in the different paths of your recursion. As long as you do not change them randomly, you get the same midpoint for every path, so there's no problem.
But of course, if you modify the midpoints randomly, you'll end up with two different midpoints for the same edge in two different paths of the recursion.
You could modify your algorithm so that you not only pass the 3 corners of the triangle along, but also the modified midpoint of each edge. Or you keep the midpoints in a separate list or map, compute each one only once, and look it up otherwise.
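A minimal sketch of that look-up idea, keyed by the (unordered) pair of endpoints. MidpointCache and makeKey are illustrative names; Point and RandomUtilities.randFunc are assumed from the question's code:

import java.util.HashMap;
import java.util.Map;

class MidpointCache {
    private final Map<String, Point> cache = new HashMap<>();

    // Key is independent of the order in which the endpoints are passed in.
    private String makeKey(Point a, Point b) {
        String ka = a.getX() + "," + a.getY();
        String kb = b.getX() + "," + b.getY();
        return ka.compareTo(kb) < 0 ? ka + "|" + kb : kb + "|" + ka;
    }

    // Computes the randomly displaced midpoint of edge (a, b) only once;
    // every later call for the same edge returns the cached point.
    Point midpoint(Point a, Point b, int dev) {
        return cache.computeIfAbsent(makeKey(a, b), key -> new Point(
                (a.getX() + b.getX()) / 2,
                (a.getY() + b.getY()) / 2 + RandomUtilities.randFunc(dev)));
    }
}

In fractalLine you would then call cache.midpoint(ett, tva, dev) and friends instead of computing one, two, and three locally, so sibling triangles sharing an edge see the same displaced midpoint.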
I have a 2D convex polygon in 3D space and a function to measure the area of the polygon.
public double area() {
    if (vertices.size() >= 3) {
        double area = 0;
        Vector3 origin = vertices.get(0);
        Vector3 prev = vertices.get(1).clone();
        prev.sub(origin);
        for (int i = 2; i < vertices.size(); i++) {
            Vector3 current = vertices.get(i).clone();
            current.sub(origin);
            Vector3 cross = prev.cross(current);
            area += cross.magnitude();
            prev = current;
        }
        area /= 2;
        return area;
    } else {
        return 0;
    }
}
To test that this method works at all orientations of the polygon, I had my program rotate it a little bit each iteration and calculate the area, like so...
Face f = poly.getFaces().get(0);
for (int i = 0; i < f.size(); i++) {
    Vector3 v = f.getVertex(i);
    v.rotate(0.1f, 0.2f, 0.3f);
}
if (blah % 1000 == 0)
    System.out.println(blah + ":\t" + f.area());
My method seems correct when testing with a 20x20 square. However, the rotate method (a method in the Vector3 class) seems to introduce some error into the position of each vertex in the polygon, which affects the area calculation. Here is the Vector3.rotate() method:
public void rotate(double xAngle, double yAngle, double zAngle) {
    double oldY = y;
    double oldZ = z;
    y = oldY * Math.cos(xAngle) - oldZ * Math.sin(xAngle);
    z = oldY * Math.sin(xAngle) + oldZ * Math.cos(xAngle);

    oldZ = z;
    double oldX = x;
    z = oldZ * Math.cos(yAngle) - oldX * Math.sin(yAngle);
    x = oldZ * Math.sin(yAngle) + oldX * Math.cos(yAngle);

    oldX = x;
    oldY = y;
    x = oldX * Math.cos(zAngle) - oldY * Math.sin(zAngle);
    y = oldX * Math.sin(zAngle) + oldY * Math.cos(zAngle);
}
Here is the output for my program in the format "iteration: area":
0: 400.0
1000: 399.9999999999981
2000: 399.99999999999744
3000: 399.9999999999959
4000: 399.9999999999924
5000: 399.9999999999912
6000: 399.99999999999187
7000: 399.9999999999892
8000: 399.9999999999868
9000: 399.99999999998664
10000: 399.99999999998386
11000: 399.99999999998283
12000: 399.99999999998215
13000: 399.9999999999805
14000: 399.99999999998016
15000: 399.99999999997897
16000: 399.9999999999782
17000: 399.99999999997715
18000: 399.99999999997726
19000: 399.9999999999769
20000: 399.99999999997584
Since this is intended to eventually be for a physics engine I would like to know how I can minimise the cumulative error since the Vector3.rotate() method will be used on a very regular basis.
Thanks!
A couple of odd notes:
The error is proportional to the amount rotated, i.e. a bigger rotation per iteration means a bigger error per iteration.
There is more error when passing doubles to the rotate function than when passing floats.
You'll always have some cumulative error with repeated floating point trig operations; that's just how they work. To deal with it, you basically have two options:
Just ignore it. Note that, in your example, after 20,000 iterations(!) the area is still accurate down to 13 decimal places. That's not bad, considering that doubles can only store about 16 decimal places to begin with.
Indeed, plotting your graph, the area of your square seems to be going down more or less linearly:
This makes sense, assuming that the effective determinant of your approximate rotation matrix is about 1 − 3.417825 × 10^-18, which is well within the normal double-precision floating point error range of one. If that's the case, the area of your square would continue a very slow exponential decay towards zero, such that you'd need about 7.3 × 10^14 iterations to get the area down to 399. Assuming 100 iterations per second, that's about 230 thousand years.
Edit: When I first calculated how long it would take for the area to reach 399, it seems I made a mistake and somehow managed to overestimate the decay rate by a factor of about 400,000(!). I've corrected the mistake above.
If you still feel you don't want any cumulative error, the answer is simple: don't iterate floating point rotations. Instead, have your object store its current orientation in a member variable, and use that information to always rotate the object from its original orientation to its current one.
This is simple in 2D, since you just have to store an angle. In 3D, I'd suggest storing either a quaternion or a matrix, and occasionally rescaling it so that its norm / determinant stays approximately one (and, if you're using a matrix to represent the orientation of a rigid body, that it remains approximately orthogonal).
Of course, this approach won't eliminate cumulative error in the orientation of the object, but the rescaling does ensure that the volume, area and/or shape of the object won't be affected.
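If it helps, here is a minimal sketch of that idea (an illustrative Orientation helper, not part of your code): store the orientation as a unit quaternion, compose each incremental rotation into it, and renormalize so its norm stays one:

class Orientation {
    // quaternion components; starts as the identity rotation
    private double w = 1, x = 0, y = 0, z = 0;

    // Compose a rotation of 'angle' radians about the unit axis (ux, uy, uz).
    void rotate(double angle, double ux, double uy, double uz) {
        double h = angle / 2, s = Math.sin(h);
        double qw = Math.cos(h), qx = ux * s, qy = uy * s, qz = uz * s;
        // Hamilton product q * current
        double nw = qw * w - qx * x - qy * y - qz * z;
        double nx = qw * x + qx * w + qy * z - qz * y;
        double ny = qw * y - qx * z + qy * w + qz * x;
        double nz = qw * z + qx * y - qy * x + qz * w;
        w = nw; x = nx; y = ny; z = nz;
        normalize(); // keep the norm at 1 so lengths and areas are preserved
    }

    private void normalize() {
        double n = Math.sqrt(w * w + x * x + y * y + z * z);
        w /= n; x /= n; y /= n; z /= n;
    }
}

Each frame, you would apply the stored orientation to the original, never-modified vertices; the rescaling keeps areas and lengths from drifting no matter how many rotations you accumulate.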
You say there is cumulative error, but I don't believe there is (note how your output doesn't always go down), and the rest of the error is just due to rounding and loss of precision in a float.
I did work on a 2D physics engine at university (in Java) and found double to be more precise (of course it is; see Oracle's data type sizes).
In short, you will never get rid of this behaviour; you just have to accept the limitations of precision.
EDIT:
Now that I look at your area() function, there is possibly some cumulative error due to
+= cross.magnitude
but I have to say that the whole function looks a bit odd. Why does it need to know the previous vertices to calculate the current area?