Convert from java.awt.geom.Area to java.awt.Polygon - java

I need to convert a java.awt.geom.Area or java.awt.Shape to a java.awt.Polygon. What I know about the Area is: isSingular = true, isPolygonal = true. So I think a Polygon should be able to describe the same area.

I'm not sure that it is worth converting, because Polygon is an old Java 1.0 class that can store only integer coordinates, so you might lose some precision.
Anyway, you can get a PathIterator from the Shape, and as you iterate it, add new points to a Polygon:
public static void main(String[] args) {
    Area a = new Area(new Rectangle(1, 1, 5, 5));
    PathIterator iterator = a.getPathIterator(null);
    float[] floats = new float[6];
    Polygon polygon = new Polygon();
    while (!iterator.isDone()) {
        int type = iterator.currentSegment(floats);
        int x = (int) floats[0];
        int y = (int) floats[1];
        if (type != PathIterator.SEG_CLOSE) {
            polygon.addPoint(x, y);
            System.out.println("adding x = " + x + ", y = " + y);
        }
        iterator.next();
    }
}
EDIT: As Bill Lin commented, this code may give you a wrong polygon if the PathIterator describes multiple subpaths (for example, in the case of an Area with holes). To take this into account, you also need to check for PathIterator.SEG_MOVETO segments and possibly create a list of polygons.
In order to decide which polygons are holes, you could calculate the bounding box (Shape.getBounds2D()), and check which bounding box contains the other. Note that the getBounds2D API says that "there is no guarantee that the returned Rectangle2D is the smallest bounding box that encloses the Shape, only that the Shape lies entirely within the indicated Rectangle2D", but in my experience for polygonal shapes it would be the smallest, and anyway it is trivial to calculate the exact bounding box of a polygon (just find the smallest and biggest x and y coordinates).
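A minimal sketch of that idea, assuming a singular, polygonal Shape (the helper name toPolygons is mine; imports from java.awt, java.awt.geom and java.util are assumed): start a new Polygon on every SEG_MOVETO and collect the results in a list, leaving the hole detection to the bounding-box check described above.
public static List<Polygon> toPolygons(Shape shape) {
    List<Polygon> polygons = new ArrayList<>();
    PathIterator iterator = shape.getPathIterator(null);
    float[] coords = new float[6];
    Polygon current = null;
    while (!iterator.isDone()) {
        int type = iterator.currentSegment(coords);
        switch (type) {
            case PathIterator.SEG_MOVETO:
                current = new Polygon();           // start a new subpath
                polygons.add(current);
                current.addPoint((int) coords[0], (int) coords[1]);
                break;
            case PathIterator.SEG_LINETO:
                current.addPoint((int) coords[0], (int) coords[1]);
                break;
            case PathIterator.SEG_CLOSE:
                break;                             // subpath finished
            default:
                // SEG_QUADTO/SEG_CUBICTO should not occur for a polygonal shape
                throw new IllegalArgumentException("Shape is not polygonal");
        }
        iterator.next();
    }
    return polygons;
}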

Related

My game collision system doesn't work on more than one tile

I'm working on a collision system for my game, but I can't get it to work. If I add more than one wall (the object I'm rendering), the collision system fails and I can walk through the blocks.
If I leave only one wall, the collision works correctly. Alternatively, if I add a break; at the end of the loop, the collision works, but only against the first wall of the map; the others don't get any collision.
Would anyone know how to solve this? I've been trying for two days without success.
public boolean checkCollisionWall(int xnext, int ynext){
    int[] xpoints1 = {xnext+3,xnext+3,xnext+4,xnext+3,xnext+3,xnext+4,xnext+10,xnext+11,xnext+11,xnext+10,xnext+11,xnext+11};
    int[] ypoints1 = {ynext+0,ynext+8,ynext+9,ynext+11,ynext+12,ynext+15,ynext+15,ynext+12,ynext+11,ynext+9,ynext+8,ynext+0};
    int npoints1 = 12;
    Polygon player = new Polygon(xpoints1,ypoints1,npoints1);
    Area area = new Area(player);
    for(int i = 0; i < Game.walls.size(); i++){
        Wall atual = Game.walls.get(i);
        int[] xpoints2 = {atual.getX(),atual.getX(),atual.getX()+16,atual.getX()+16};
        int[] ypoints2 = {atual.getY(),atual.getY()+16,atual.getY()+16,atual.getY()};
        int npoints2 = 4;
        Polygon Wall = new Polygon(xpoints2,ypoints2,npoints2);
        area.intersect(new Area(Wall));
        if(area.isEmpty()){
            return true;
        }
    }
    return false;
}
I'm pretty sure the problem is this call:
area.intersect(new Area(Wall));
Here's the JavaDoc for that method:
public void intersect(Area rhs)
Sets the shape of this Area to the intersection of its current shape and the shape of the specified Area. The resulting shape of this Area will include only areas that were contained in both this Area and also in the specified Area.
So area, which represents the shape of your player, is going to be modified by each test with a wall, which is why it's only working with one wall.
You could fix the issue by simply making the player Area the argument of the call, as in:
Area wallArea = new Area(Wall);
wallArea.intersect(area);
if(wallArea.isEmpty()){
    return true;
}
By the way, this logic is reversed, isn't it? Don't you want to check that the resulting area is not empty, i.e. that there was an overlap between the player and the wall?
The other option, if each Wall is actually a rectangle, would be to use this Area method instead:
public boolean intersects(double x, double y, double w, double h)
Tests if the interior of the Shape intersects the interior of a specified rectangular area. The rectangular area is considered to intersect the Shape if any point is contained in both the interior of the Shape and the specified rectangular area.
Something like this:
if(area.intersects(atual.getX(), atual.getY(), 16, 16)) {
    return true;
}
This avoids creating an Area object for each wall, and the intersection test is much simpler and faster.
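Putting both points together, a possible corrected version of the method could look like this (a sketch only, assuming the same Game, Wall and 16x16 tile sizes as in the question):
public boolean checkCollisionWall(int xnext, int ynext) {
    int[] xpoints1 = {xnext+3, xnext+3, xnext+4, xnext+3, xnext+3, xnext+4, xnext+10, xnext+11, xnext+11, xnext+10, xnext+11, xnext+11};
    int[] ypoints1 = {ynext+0, ynext+8, ynext+9, ynext+11, ynext+12, ynext+15, ynext+15, ynext+12, ynext+11, ynext+9, ynext+8, ynext+0};
    Polygon player = new Polygon(xpoints1, ypoints1, 12);
    Area playerArea = new Area(player);        // built once, never modified
    for (Wall wall : Game.walls) {
        // collision as soon as the player overlaps any 16x16 wall tile
        if (playerArea.intersects(wall.getX(), wall.getY(), 16, 16)) {
            return true;
        }
    }
    return false;
}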

Detecting costs/variables resulting in unbounded optimization problem in ojAlgo

I am using the ojAlgo linear/quadratic solver via ExpressionsBasedModel to solve the layout of graphical elements in a plotting library so that they fit neatly into the screen boundaries. Specifically, I want to solve for scale and translation so that the coordinates of a scatter plot fill up the screen space. I do that by declaring scale and translation variables of the ExpressionsBasedModel, transforming the scatter plot coordinates to the screen using those variables, and then constructing linear constraints requiring that the transformed coordinates project inside the screen. I also add a negative cost to the scale variables, so that they are maximized and the scatter plot covers as much screen space as possible.

My problem is that in some special cases, for example if I have only one point to plot, this results in an unbounded problem where the scale goes towards infinity without any constraint being active. How can I detect the scale variables for which this would happen and fix them to some default values?
To illustrate the above problem, I constructed a toy plotting library (the full library that I am working on is too big to fit in this question). To help layout the graphical elements, I have a problem class:
class Problem {
    private ArrayList<Variable> _scale_variables = new ArrayList<Variable>();
    private ExpressionsBasedModel _model = new ExpressionsBasedModel();

    Variable freeVariable() {
        return _model.addVariable();
    }

    Variable scaleVariable() {
        Variable x = _model.addVariable();
        x.lower(0.0); // Negative scale not allowed
        _scale_variables.add(x);
        return x;
    }

    Expression expr() {
        return _model.addExpression();
    }

    Result solve() {
        for (Variable scale_var: _scale_variables) {
            // This may result in an unbounded solution for degenerate cases.
            Expression expr = _model.addExpression("Encourage-larger-scale");
            expr.set(scale_var, -1.0);
            expr.weight(1.0);
        }
        return _model.minimise();
    }
}
It wraps an ExpressionsBasedModel and has some facilities to create variables. For the transform that I will use to map my scatter point coordinates to screen coordinates, I have this class:
class Transform2d {
    Variable x_scale;
    Variable y_scale;
    Variable x_translation;
    Variable y_translation;

    Transform2d(Problem problem) {
        x_scale = problem.scaleVariable();
        y_scale = problem.scaleVariable();
        x_translation = problem.freeVariable();
        y_translation = problem.freeVariable();
    }

    void respectBounds(double x, double y, double marker_size,
                       double width, double height,
                       Problem problem) {
        // Respect left and right screen bounds
        {
            Expression expr = problem.expr();
            expr.set(x_scale, x);
            expr.set(x_translation, 1.0);
            expr.lower(marker_size);
            expr.upper(width - marker_size);
        }
        // Respect top and bottom screen bounds
        {
            Expression expr = problem.expr();
            expr.set(y_scale, y);
            expr.set(y_translation, 1.0);
            expr.lower(marker_size);
            expr.upper(height - marker_size);
        }
    }
}
The respectBounds method is used to add the constraints for a single point of the scatter plot to the Problem class mentioned before. To add all the points of a scatter plot, I have this function:
void addScatterPoints(
        double[] xy_pairs,
        // How much space every marker occupies
        double marker_size,
        Transform2d transform_to_screen,
        // Screen size
        double width, double height,
        Problem problem) {
    int data_count = xy_pairs.length/2;
    for (int i = 0; i < data_count; i++) {
        int offset = 2*i;
        double x = xy_pairs[offset + 0];
        double y = xy_pairs[offset + 1];
        transform_to_screen.respectBounds(x, y, marker_size, width, height, problem);
    }
}
First, let's look at what a non-degenerate case looks like. I specify the screen size and the size of the markers used for the scatter plot. I also specify the data to plot, build the problem and solve it. Here is the code
Problem problem = new Problem();
double marker_size = 4;
double width = 800;
double height = 600;
double[] data_to_plot = new double[] {
    1.0, 2.0,
    4.0, 9.3,
    7.0, 4.5};
Transform2d transform = new Transform2d(problem);
addScatterPoints(data_to_plot, marker_size, transform, width, height, problem);
Result result = problem.solve();
System.out.println("Solution: " + result);
which prints out Solution: OPTIMAL -81.0958904109589 # { 0, 81.0958904109589, 795.99999999999966, -158.19178082191794 }.
This is what a degenerate case looks like, plotting two points with the same y-coordinate:
Problem problem = new Problem();
double marker_size = 4;
double width = 800;
double height = 600;
double[] data_to_plot = new double[] {
    1, 1,
    9, 1
};
Transform2d transform = new Transform2d(problem);
addScatterPoints(data_to_plot, marker_size, transform, width, height, problem);
Result result = problem.solve();
System.out.println("Solution: " + result);
It displays Solution: UNBOUNDED -596.0 # { 88.44444444444444, 596, 0, 0 }.
As mentioned before, my question is: how can I detect the scale variables whose negative cost would result in an unbounded solution and constrain them to some default value, so that my solution is not unbounded?

Transform cartesian pixel-data-array to lat/lon pixel-data-array

I have an image (basically, I get raw image data as 1024x1024 pixels) and the position in lat/lon of the center pixel of the image.
Each pixel represents the same fixed pixel scale in meters (e.g. 30m per pixel).
Now, I would like to draw the image onto a map which uses the coordinate reference system "EPSG:4326" (WGS84).
When I draw the image by defining just its corners in lat/lon, based on an "image size in pixels * pixel scale" calculation and converting the distances from the center point into lat/lon coordinates for each corner, the image is not correctly drawn onto the map.
By "not correctly drawn" I mean that the image seems to be shifted and the contents of the image are not at the map locations where I expected them to be.
I suppose this is the case because I "mix" a pixel scaled image and a "EPSG:4326" coordinate reference system.
Now, with the information I have given, can I transform the whole pixel matrix from fixed pixel scale base to a new pixel matrix in the "EPSG:4326" coordinate reference system, using Geotools?
Of course, the transformation must be dependant on the center position in lat/lon, that I have been given, and on the pixel scale.
I wonder if using something like this would point me into the correct direction:
MathTransform transform = CRS.findMathTransform(DefaultGeocentricCRS.CARTESIAN, DefaultGeographicCRS.WGS84, true);
DirectPosition2D srcDirectPosition2D = new DirectPosition2D(DefaultGeocentricCRS.CARTESIAN, degreeLat.getDegree(), degreeLon.getDegree());
DirectPosition2D destDirectPosition2D = new DirectPosition2D();
transform.transform(srcDirectPosition2D, destDirectPosition2D);
double transX = destDirectPosition2D.x;
double transY = destDirectPosition2D.y;
int kmPerPixel = mapImage.getWidth() / 1024; // It is known to me that my map is 1024x1024km ...
double x = zeroPointX + ((transX * 0.001) * kmPerPixel);
double y = zeroPointY + (((transX * -1) * 0.001) * kmPerPixel);
(I got this code from another SO thread and already modified it a little bit, but I still wonder if this is the correct starting point for my problem.)
I only suppose that my original image coordinate reference system is of the type DefaultGeocentricCRS.CARTESIAN. Can someone confirm this?
And from here on, is this the correct start to use Geotools for this kind of problem solving, or am I on the complete wrong path?
Additionally, I would like to add that this would be used in a quite dynamic system. My image updates at about 10 Hz, and the transformations have to be performed accordingly often.
Again, is this initial thought of mine leading to a solution, or do you have other solutions for solving my problem?
Thank you very much,
Kiamur
This is not as simple as it might sound. You are essentially trying to define an area on a sphere (ellipsoid technically) using a flat square. As such there is no "correct" way to do it, so you will always end up with some distortion. Without knowing exactly where your image came from there is no way to answer this exactly but the following code provides you with 3 different possible answers:
The first two make use of GeoTools' GeodeticCalculator to calculate the corner points using bearings and distances. These are the blue "square" and the green "square" above. The blue is calculating the corners directly while the green calculates the edges and infers the corners from the intersections (that's why it is squarer).
final int width = 1024, height = 1024;
GeometryFactory gf = new GeometryFactory();
Point centre = gf.createPoint(new Coordinate(0,51));
WKTWriter writer = new WKTWriter();

//direct method
GeodeticCalculator calc = new GeodeticCalculator(DefaultGeographicCRS.WGS84);
calc.setStartingGeographicPoint(centre.getX(), centre.getY());
double height2 = height/2.0;
double width2 = width/2.0;
double dist = Math.sqrt(height2*height2 + width2*width2);
double bearing = 45.0;
Coordinate[] corners = new Coordinate[5];
for (int i=0;i<4;i++) {
    calc.setDirection(bearing, dist*1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    corners[i] = new Coordinate(corner.getX(),corner.getY());
    bearing+=90.0;
}
corners[4] = corners[0];
Polygon bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));

//edge method: infer corners from the N/S/E/W edges
double[] edges = new double[4];
bearing = 0;
for(int i=0;i<4;i++) {
    calc.setDirection(bearing, height2*1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    if(i%2 == 0) {
        edges[i] = corner.getY();
    }else {
        edges[i] = corner.getX();
    }
    bearing+=90.0;
}
corners[0] = new Coordinate(edges[1],edges[0]);
corners[1] = new Coordinate(edges[1],edges[2]);
corners[2] = new Coordinate(edges[3],edges[2]);
corners[3] = new Coordinate(edges[3],edges[0]);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
Another way to do this is to transform the centre point into a projection that is "flatter" and use simple addition to calculate the corners and then reverse the transformation. To do this we can use the AUTO projection defined by the OGC WMS Specification to generate an Orthographic projection centred on our point, this gives the red "square" which is very similar to the blue one.
String code = "AUTO:42003," + centre.getX() + "," + centre.getY();
// System.out.println(code);
CoordinateReferenceSystem auto = CRS.decode(code);
// System.out.println(auto);
MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
MathTransform rtransform = CRS.findMathTransform(auto, DefaultGeographicCRS.WGS84);
Point g = (Point)JTS.transform(centre, transform);
width2 *= 1000.0;
height2 *= 1000.0;
corners[0] = new Coordinate(g.getX()-width2, g.getY()-height2);
corners[1] = new Coordinate(g.getX()+width2, g.getY()-height2);
corners[2] = new Coordinate(g.getX()+width2, g.getY()+height2);
corners[3] = new Coordinate(g.getX()-width2, g.getY()+height2);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
bbox = (Polygon)JTS.transform(bbox, rtransform);
System.out.println(writer.write(bbox));
Which solution to use is a matter of taste and depends on where your image came from, but I suspect that either the red or the blue will be best. If you need to do this at 10Hz then you will need to test them for speed, but I suspect that transforming the images will be the bottleneck.
Once you have your bounding box set up to your satisfaction, you can convert your (unreferenced) image to a georeferenced coverage using:
GridCoverageFactory factory = CoverageFactoryFinder.getGridCoverageFactory(null);
GridCoverage2D gc = factory.create("name", image,
        new ReferencedEnvelope(bbox.getEnvelopeInternal(), DefaultGeographicCRS.WGS84));

String fileName = "myImage.tif";
AbstractGridFormat format = GridFormatFinder.findFormat(fileName);
File out = new File(fileName);
GridCoverageWriter writer = format.getWriter(out);
try {
    writer.write(gc, null);
    writer.dispose();
} catch (IllegalArgumentException | IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

How to draw a series of connected lines in Libgdx?

I'm using a polygon on a ShapeRenderer, and the problem is that the user should be able to add a vertex whenever they want; the vertices are not set by me but by the user. What I did is add the coordinates to an ArrayList whenever the user adds a point.
ArrayList<Float> v = new ArrayList<Float>();

public void ontouch(screenX, screenY){
    v.add(screenX);
    v.add(screenY);
}
And then I have this problem when I try to render a polygon with the ShapeRenderer:
for(int i = 0; i < v.size; i++){
    float[] vertices = new float[v.size()]
    vertices[i - 1] = v.get(i - 1);
    vertices[i] = v.get(i);
}
sr.polygon(v);
But I just get errors.
I am trying to achieve something like this, so if you know a different way of doing it, that would be really helpful. By the way, I'm also using Box2D, and this does not need collision; it's just a visual for the user.
The way I would personally do it is by having a LinkedList of Vector2 objects. Vector2 objects store two floats, so for every click, you get the x and y coordinates and make a new Vector2 object. By storing them in the LinkedList, you can retrieve the points at any time in the correct order so you can connect a line.
LinkedList<Vector2> v = new LinkedList<Vector2>();

public void ontouch(screenX, screenY){
    v.add(new Vector2(screenX, screenY)); // add Vector2 into LinkedList
}
How you want to draw the lines or connect the points is up to you.
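As one hedged sketch, assuming the LinkedList v from above and a ShapeRenderer field called sr, the points could be connected in the render loop like this:
// inside render()
sr.begin(ShapeRenderer.ShapeType.Line);
for (int i = 1; i < v.size(); i++) {
    Vector2 p1 = v.get(i - 1);
    Vector2 p2 = v.get(i);
    sr.line(p1.x, p1.y, p2.x, p2.y); // connect consecutive points
}
sr.end();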
Another way is to keep only the two most recent points that were clicked and throw the others away. This would mean storing the lines instead of the points. If the lines are objects, then you can do this:
Vector2 previousPoint;
Vector2 currentPoint;
ArrayList<MyLineClass> lines = new ArrayList<MyLineClass>();

public void ontouch(screenX, screenY){
    if(currentPoint == null){
        currentPoint = new Vector2(screenX, screenY);
    }else{
        previousPoint = currentPoint;
        currentPoint = new Vector2(screenX, screenY);
        lines.add(new MyLineClass(currentPoint, previousPoint));
    }
}
I wrote this off the cuff but I believe this example should work.
EDIT:
Good thing LibGDX is open source. If you want to use an array of float numbers, the method simply gets an x and y coordinate in alternating order. So for each index:
0 = x1
1 = y1
2 = x2
3 = y2
4 = x3
5 = y3
etc.
It's an odd method, but I suppose it works.
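So if you do want to stay with sr.polygon (or sr.polyline), a small sketch of flattening the LinkedList<Vector2> from the first example into that alternating x/y array could look like this (the guard reflects that polygon() needs at least 3 points):
// flatten the points into the {x1, y1, x2, y2, ...} layout the method expects
float[] vertices = new float[v.size() * 2];
int i = 0;
for (Vector2 p : v) {
    vertices[i++] = p.x;
    vertices[i++] = p.y;
}
if (v.size() >= 3) { // polygon() requires at least 3 points (6 floats)
    sr.begin(ShapeRenderer.ShapeType.Line);
    sr.polygon(vertices);
    sr.end();
}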

Rotation won't work in Java physics engine

I am making a Java rigid body physics engine, and it has gone great so far, until I tried to implement rotation. I don't know where the problem is coming from. I have methods calculating the moment of inertia of convex polygons and circles using formulas from these websites:
http://lab.polygonal.de/?p=57
http://en.wikipedia.org/wiki/List_of_moments_of_inertia
This is the code for the polygon moment of inertia:
public float momentOfInertia() {
    Vector C = centerOfMass().subtract(position); //center of mass
    Line[] sides = sides(); //sides of the polygon
    float moi = 0; //moment of inertia
    for(int i = 0; i < sides.length; i++) {
        Line l = sides[i]; //current side of polygon being looped through
        Vector p1 = C; //points 1, 2, and 3 are the points of the triangle
        Vector p2 = l.point1;
        Vector p3 = l.point2;
        Vector Cp = p1.add(p2).add(p3).divide(3); //center of mass of the triangle, or C'
        float d = new Line(C, Cp).length(); //distance between center of mass
        Vector bv = p2.subtract(p1); //vector for side b of triangle
        float b = bv.magnitude(); //scalar for length of side b
        Vector u = bv.divide(b); //unit vector for side b
        Vector cv = p3.subtract(p1); //vector for side c of triangle, only used to calculate variables a and h
        float a = cv.dot(u); //length of a in triangle
        Vector av = u.multiply(a); //vector for a in triangle
        Vector hv = cv.subtract(av); //vector for height of triangle, or h in diagram
        float h = hv.magnitude(); //length of height of triangle, or h in diagram
        float I = ((b*b*b*h)-(b*b*h*a)+(b*h*a*a)+(b*h*h*h))/36; //calculate moment of inertia of individual triangle
        float M = (b*h)/2; //mass or area of triangle
        moi += I+M*d*d; //equation in sigma series of website
    }
    return moi;
}
And this is for the circle:
public float momentOfInertia() {
    return (float) Math.pow(radius, 2)*area()/2;
}
I know for a fact that the area functions work fine; I have checked them. I just don't know how to check if the moment of inertia equations are wrong.
For collision detection, I used the separating axis theorem to check for any combination of two polygons and circles, where it can find out whether they are colliding, the normal velocity of the collision, and the contact point of the collision. These methods all work beautifully.
I might also like to say how positions are organized. Every body has a position and a shape, either a polygon or a circle. Each shape has a position, and polygons have individual vertices. So if I want to find the absolute position of a vertex of a polygon-shaped body, I need to add the positions of the body, the polygon, and the vertex itself. The center of mass equation is in absolute position according to the shape, with no account for the body. The center of mass and moment of inertia methods are in the Shape class.
For every body, the constants are being updated according to the force and torque in the body's update method where dt is delta time. I also rotate the polygon based on the difference in rotation, because the vertices are ever changing.
public void update(float dt) {
    if(mass != 0) {
        momentum = momentum.add(force.multiply(dt));
        velocity = momentum.divide(mass);
        position = position.add(velocity.multiply(dt));
        angularMomentum += torque*dt;
        angularVelocity = angularMomentum/momentOfInertia;
        angle += angularVelocity*dt;
        shape.rotate(angularVelocity*dt);
    }
}
Finally, I also have a CollisionResolver class which fixes the collision of two colliding bodies, involving applying the normal force and friction. Here is the class's only method which does all of this:
public static void resolveCollision(Body a, Body b, float dt) {
    //calculate normal vector
    Vector norm = CollisionDetector.normal(a, b);
    Vector normb = norm.multiply(-1);

    //undo overlap between bodies
    float ratio1 = a.mass/(a.mass+b.mass);
    float ratio2 = b.mass/(b.mass+a.mass);
    a.position = a.position.add(norm.multiply(ratio1));
    b.position = b.position.add(normb.multiply(ratio2));

    //calculate contact point of collision and other values needed for rotation
    Vector cp = CollisionDetector.contactPoint(a, b, norm);
    Vector c = a.shape.centerOfMass().add(a.position);
    Vector cb = b.shape.centerOfMass().add(b.position);
    Vector d = cp.subtract(c);
    Vector db = cp.subtract(cb);

    //create the normal force vector from the velocity
    Vector u = norm.unit();
    Vector ub = u.multiply(-1);
    Vector F = new Vector(0, 0);
    boolean doA = a.mass != 0;
    if(doA) {
        F = a.force;
    }else {
        F = b.force;
    }
    Vector n = new Vector(0, 0);
    Vector nb = new Vector(0, 0);
    if(doA) {
        Vector Fyp = u.multiply(F.dot(u));
        n = Fyp.multiply(-1);
        nb = Fyp;
    }else{
        Vector Fypb = ub.multiply(F.dot(ub));
        n = Fypb;
        nb = Fypb.multiply(-1);
    }

    //calculate normal force for body A
    float r = a.restitution;
    Vector v1 = a.velocity;
    Vector vy1p = u.multiply(u.dot(v1));
    Vector vx1p = v1.subtract(vy1p);
    Vector vy2p = vy1p.multiply(-r);
    Vector v2 = vy2p.add(vx1p);

    //calculate normal force for body B
    float rb = b.restitution;
    Vector v1b = b.velocity;
    Vector vy1pb = ub.multiply(ub.dot(v1b));
    Vector vx1pb = v1b.subtract(vy1pb);
    Vector vy2pb = vy1pb.multiply(-rb);
    Vector v2b = vy2pb.add(vx1pb);

    //calculate friction for body A
    float mk = (a.friction+b.friction)/2;
    Vector v = a.velocity;
    Vector vyp = u.multiply(v.dot(u));
    Vector vxp = v.subtract(vyp);
    float fk = -n.multiply(mk).magnitude();
    Vector fkv = vxp.unit().multiply(fk); //friction force
    Vector vr = vxp.subtract(d.multiply(a.angularVelocity));
    Vector fkvr = vr.unit().multiply(fk); //friction torque - indicated by r for rotation

    //calculate friction for body B
    Vector vb = b.velocity;
    Vector vypb = ub.multiply(vb.dot(ub));
    Vector vxpb = vb.subtract(vypb);
    float fkb = -nb.multiply(mk).magnitude();
    Vector fkvb = vxpb.unit().multiply(fkb); //friction force
    Vector vrb = vxpb.subtract(db.multiply(b.angularVelocity));
    Vector fkvrb = vrb.unit().multiply(fkb); //friction torque - indicated by r for rotation

    //move bodies based on calculations
    a.momentum = v2.multiply(a.mass).add(fkv.multiply(dt));
    if(a.mass != 0) {
        a.velocity = a.momentum.divide(a.mass);
        a.position = a.position.add(a.velocity.multiply(dt));
    }
    b.momentum = v2b.multiply(b.mass).add(fkvb.multiply(dt));
    if(b.mass != 0) {
        b.velocity = b.momentum.divide(b.mass);
        b.position = b.position.add(b.velocity.multiply(dt));
    }

    //apply torque to bodies
    float t = (d.cross(fkvr)+d.cross(n));
    float tb = (db.cross(fkvrb)+db.cross(nb));
    if(a.mass != 0) {
        a.angularMomentum = t*dt;
        a.angularVelocity = a.angularMomentum/a.momentOfInertia;
        a.angle += a.angularVelocity*dt;
        a.shape.rotate(a.angularVelocity*dt);
    }
    if(b.mass != 0) {
        b.angularMomentum = tb*dt;
        b.angularVelocity = b.angularMomentum/b.momentOfInertia;
        b.angle += b.angularVelocity*dt;
        b.shape.rotate(b.angularVelocity*dt);
    }
}
As for the actual problem, both the circles and polygons rotate very slowly and often in wrong directions. I know I am throwing a lot out there, but this problem has been bugging me for a while, and I would appreciate any help I can get.
Thanks.
This answer addresses the "I just don't know how to check if the moment of inertia equations are wrong." part of the question.
There are several possible approaches, some of which you may have already tried, and they can be used in combination:
Unit testing
Take your moment of inertia code and apply it to problems with known solutions from a tutorial or textbook.
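For example, here is a hedged sketch of such a test (JUnit assumed; the vertex-array constructor is hypothetical and should be adapted to however your Polygon shape is actually built). For a solid rectangle of width w and height h with unit density, the moment of inertia about its centroid is w*h*(w^2 + h^2)/12, which the per-triangle summation should reproduce:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MomentOfInertiaTest {

    @Test
    public void rectangleMatchesTextbookFormula() {
        float w = 4f, h = 2f;
        // Hypothetical constructor: build a 4x2 axis-aligned rectangle from vertices.
        Polygon rect = new Polygon(new Vector[] {
                new Vector(0, 0), new Vector(w, 0),
                new Vector(w, h), new Vector(0, h)
        });
        float expected = w * h * (w * w + h * h) / 12f; // textbook value for unit density
        assertEquals(expected, rect.momentOfInertia(), 1e-3f);
    }
}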
Dimensional analysis
I would recommend this anyway for any scientific or engineering program. You may have deleted comments for compactness of posted code, but they are important. Annotate each variable that represents a physical quantity with its units. Check that every expression you evaluate has the right units, based on its inputs, for its result variable. For example, in the classic equation F=ma in SI units: F is in Newtons, equivalent to kg.m/(s^2), m is in kg, a is in m/(s^2), so it all balances. Be careful with transitions between physics world coordinates and screen coordinates.
Program simplification
Try working first with only one instance of one very simple shape for which you can do all the calculations by hand. Since some of your problems do not relate to rotation, a circle may be a good first choice because of its symmetry. Debug that, comparing intermediate results to equivalent results from paper-and-pencil (and calculator). Gradually add more instances of the same shape, then debug a single instance of the next shape...
Deliberate error
Given that you suspect your inertia calculations, try setting arbitrary values slightly different from your calculations, and see what differences they make in the display. Are the effects similar to the problems you are seeing? If so, keep it as a hypothesis.
As a more general note, programs that do iterative simulation can be very vulnerable to accumulated floating point error. Unless you have a real need to save space, and have done enough analysis of the numerical stability of your code to be sure float is OK, I strongly recommend using double instead. This is probably not your current problem, but is something that could become an issue later.
