Java code to get rotation angle around an axis from quaternion - java

I am really struggling to find the correct way to get the rotation angle around a single axis from an arbitrary quaternion. In other words, I want to find the portion of the expressed rotation around a specified axis (in my case the Z-axis of the coordinate system, but an arbitrary solution would be nice) in terms of the angle. Can anyone point out how to achieve this? Ideally some Java fragment would be nice.
I tried the solution proposed in [1] for attitude, which is:
asin(2*qx*qy + 2*qz*qw)
However, this fails in some cases, e.g. a single rotation around the Z-axis with more than 0.6 * PI.

Angle and rotation axis of a quaternion
Every quaternion q can be decomposed as some kind of polar decomposition
q = r * (c + s * e)
where
r = |q|, s = |imag(q/r)|, c = real(q/r) and e = imag(q/s/r)
The axis of rotation of x ↦ q * x * q^(-1) is e, the angle is twice the angle α of the point (c,s)=(cos(α),sin(α)) on the unit circle.
To just compute the angle of rotation, the scaling by r is not that important, so
angle = 2*atan2( norm(imag(q)), real(q) )
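For instance, a small Java sketch of that formula (assuming qw, qx, qy, qz hold the components of q; the scaling by r cancels inside atan2):
double normImag = Math.sqrt(qx * qx + qy * qy + qz * qz); // |imag(q)|
double angle = 2 * Math.atan2(normImag, qw);              // total rotation angle in radians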
Theory for Euler angles
A rotation about the X-axis is represented by a quaternion ca+sa*i, rotation about the Y axis by quaternion cb+sb*j and Z-axis by cc+sc*k where ca²+sa²=1 represent the cosine-sine pair of half the rotation angle a etc. Later 2a, c2a and s2a etc. will denote the double angle and its cosine and sine values.
Multiplying in the order xyz of application to the object at the origin gives a product
q=qw+qx*i+qy*j+qz*k
=(cc+sc*k)*(cb+sb*j)*(ca+sa*i)
Now interesting things happen in q*i*q^(-1) and q^(-1)*k*q, in that the inner terms commute and cancel, so that
q*i*q^(-1)*(-i) = (cc+sc*k)*(cb+sb*j)*(cb+sb*j)*(cc+sc*k)
= (cc+sc*k)*(c2b+s2b*j)*(cc+sc*k)
= (c2c+s2c*k)*c2b+s2b*j
(-k)*q^(-1)*k*q = (ca+sa*i)*(cb+sb*j)*(cb+sb*j)*(ca+sa*i)
=(ca+sa*i)*(c2b+s2b*j)*(ca+sa*i)
=(c2a+s2a*i)*c2b+s2b*j
which can then be used to isolate the angles 2a, 2b and 2c from
q*i*q^(-1)*(-i) = (q*i)*(i*q)^(-1)
= (qw*i-qx-qy*k+qz*j)*(-qw*i-qx-qy*k+qz*j)
= (qw²+qx²-qy²-qz²)
+ 2*(qw*qy-qx*qz)*j
+ 2*(qw*qz+qx*qy)*k
(-k)*q^(-1)*k*q = (q*k)^(-1)*(k*q)
= (-qw*k+qx*j-qy*i-qz)*(qw*k+qx*j-qy*i-qz)
= (qw²-qx²-qy²+qz²)
+ 2*(qw*qx+qy*qz)*i
+ 2*(qw*qy-qx*qz)*j
Resulting algorithm
Identifying expressions results in
s2b = 2*(qw*qy-qx*qz)
c2b*(c2a+s2a*i) = (qw²-qx²-qy²+qz²) + 2*(qw*qx+qy*qz)*i
c2b*(c2c+s2c*k) = (qw²+qx²-qy²-qz²) + 2*(qw*qz+qx*qy)*k
or
2a = atan2(2*(qw*qx+qy*qz), (qw²-qx²-qy²+qz²))
2b = asin(2*(qw*qy-qx*qz))
2c = atan2(2*(qw*qz+qx*qy), (qw²+qx²-qy²-qz²))
This constructs the angles in a way that
c2b = sqrt( (qw²-qx²-qy²+qz²)² + 4*(qw*qx+qy*qz)² )
is positive, so 2b is between -pi/2 and pi/2. By some sign manipulations, one could also obtain a solution where c2b is negative.
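A small Java sketch of these three formulas (assuming qw, qx, qy, qz hold the quaternion components; the results are the full rotation angles 2a, 2b, 2c about X, Y, Z, in radians):
double angle2a = Math.atan2(2 * (qw * qx + qy * qz), qw * qw - qx * qx - qy * qy + qz * qz);
double angle2b = Math.asin(2 * (qw * qy - qx * qz));
double angle2c = Math.atan2(2 * (qw * qz + qx * qy), qw * qw + qx * qx - qy * qy - qz * qz);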
Answer to the question on the asin formula
Obviously, a different kind of rotation order was used, where the Z-rotation is the middle rotation. To be precise,
q = (cb+sb*j)*(cc+sc*k)*(ca+sa*i)
where
2b = heading
2a = bank
2c = attitude
To handle attitude rotation angles 2c larger than 0.5*pi, you need to compute the full set of Euler angles, since they will then contain two flips around the other axes before and after the Z-rotation.
Or you need to detect this situation, either by keeping the cosine of bank positive or by checking for overly large angle changes, and apply sign modifications inside the atan formulas, changing their resulting angle by pi (+or-), and change the Z angle computation to pi-asin(...)
Or, to only manipulate the angles after computation, if (2a,2b,2c) is the computed solution, then
(2a-sign(2a)*pi, 2b-sign(2b)*pi, sign(2c)*pi-2c)
is another solution giving the same quaternion and rotation. Choose the one that is closest to the expected behavior.

The answer can be found here: Component of a quaternion rotation around an axis
"swing twist decomposition", from http://www.euclideanspace.com/maths/geometry/rotations/for/decomposition/

Related

X Y distance from longitude and latitude

I have two sets of longitude and latitude, and I am desperately trying to figure out how many meters point A is displaced from point B, horizontally and vertically.
My goal would be to have +/-X and +/-Y values. I already have the shortest distance between the two points via Location.distanceBetween()... I thought I could use this with Location.bearingTo() to find the values I'm looking for via basic trigonometry.
My thinking was I could use the bearing as angle A, 90 degrees as angle C and the length of side C (distanceBetween) to calculate the length of side A (x axis) and B (y axis), but the results were underwhelming to say the least lol
//CALCULATE ANGLES (in degrees first, then convert to radians)
double ANGLE_A = current_Bearing; //Location.bearingTo()
double ANGLE_C = 90; // always a right angle
double ANGLE_B = 180 - ANGLE_A - ANGLE_C; // 3 angles of a triangle must add up to 180; if 2 are known the 3rd can be calculated
ANGLE_A = Math.toRadians(ANGLE_A);
ANGLE_B = Math.toRadians(ANGLE_B);
ANGLE_C = Math.toRadians(ANGLE_C);
//CALCULATE DISTANCES
double SIDE_C = calculatedDistance; //Location.distanceTo()
double SIDE_A = Math.sin(ANGLE_A) * SIDE_C / Math.sin(ANGLE_C);
double SIDE_B = Math.sin(ANGLE_B) * SIDE_C / Math.sin(ANGLE_C);
What I'm noticing is that my bearing changes very little between the two points regardless of how we move, though mind you I'm testing this at 10 - 100m distance; it's always at 64.xxxxxxx and only the last few decimals really change.
All the online references I can find always look at computing the shortest path, and although this awesome site references x and y positions it always ends up combining them into the shortest distance again.
Would SUPER appreciate any pointers in the right direction!
Since the earth is not flat, your idea with 90 degree angles will not work properly.
What might be better, is this.
Lets say your 2 known points A and B have latitude and longitude latA, longA and latB, longB.
Now you could introduce two additional points C and D with latC = latA, longC = longB, and latD = latB, longD = longA, so the points A, B, C, D form a rectangle on the earth's surface.
Now you can simply use distanceBetween(A, C) and distanceBetween(A, D) to get the required distances.
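A minimal sketch of that idea using the Android API already mentioned in the question (latA, longA, latB, longB are assumed variables; Location.distanceBetween() writes the distance in metres into the first element of the results array):
float[] northSouth = new float[1];
float[] eastWest = new float[1];
// D = (latB, longA): vary only the latitude -> north-south distance
Location.distanceBetween(latA, longA, latB, longA, northSouth);
// C = (latA, longB): vary only the longitude -> east-west distance
Location.distanceBetween(latA, longA, latA, longB, eastWest);
double yDist = northSouth[0]; // metres north/south
double xDist = eastWest[0];   // metres east/west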
It may be possible to utilize Location.distanceBetween(), if the following conditions are met:
the points are located far away from the polar regions, and
the distance is short enough (compared to the radius of the Earth).
The way is very simple. Just fix either longitude or latitude and vary only the other. Then calculate distance.
Location location1 = new Location("");
Location location2 = new Location("");
location1.setLatitude(37.4184359437);
location1.setLongitude(-122.088038921);
location2.setLatitude(37.3800232707);
location2.setLongitude(-122.073230422);

float[] distance = new float[3];
Location.distanceBetween(
        location1.getLatitude(), location1.getLongitude(),
        location2.getLatitude(), location2.getLongitude(),
        distance
);

double lat_mid = (location1.getLatitude() + location2.getLatitude()) * 0.5;
double long_mid = (location1.getLongitude() + location2.getLongitude()) * 0.5;

float[] distanceLat = new float[3];
Location.distanceBetween(
        location1.getLatitude(), long_mid,
        location2.getLatitude(), long_mid,
        distanceLat
);

float[] distanceLong = new float[3];
Location.distanceBetween(
        lat_mid, location1.getLongitude(),
        lat_mid, location2.getLongitude(),
        distanceLong
);

double distance_approx = Math.sqrt(
        Math.pow(distanceLong[0], 2.0) + Math.pow(distanceLat[0], 2.0)
);
Compare distance[0] and distance_approx, and check whether the accuracy meets your requirement.
If your points are close enough, you may easily calculate x-y distances from latitude / longitude once you know that 1 degree of latitude is 111km, and one degree of longitude is 111km * cos(latitude):
y_dist = abs(a.lat - b.lat) * 111000;
x_dist = abs(a.lon - b.lon) * 111000 * cos(a.lat);
For short distances we could easily ignore that earth is not exactly a sphere, the error is approximately 0.1-0.2% depending on your exact location.
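In Java that approximation might look like this (a and b are assumed to be simple holders of lat/lon in degrees; note that Math.cos() expects radians):
double metersPerDegree = 111000.0;
double yDist = Math.abs(a.lat - b.lat) * metersPerDegree;
double xDist = Math.abs(a.lon - b.lon) * metersPerDegree * Math.cos(Math.toRadians(a.lat));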
There is no valid answer to this question until you define what projection you are using.
The azimuth of a "straight" line varies along the route unless you are travelling exactly due south or due north. You can only calculate the angles at each node, or azimuth at a specific point along the route. Angles at the nodes will not add up to 180° because you're referring to an ellipsoidal triangle, and calculating an ellipsoidal triangle is a multiple-step process that in all honesty, is better left to the libraries out there such as OSGEO.
If you want to fit the geometry to a plane Cartesian system, the Lambert projection is usually used for areas that extend mostly east-west, and the Transverse Mercator for areas that extend mostly north-south. The entire Earth is mapped in the UTM (Universal Transverse Mercator), which will give you Cartesian coordinates anywhere, but in no case will you get perfect Euclidean geometry when dealing with geodetics. For instance, if you go south 10 miles, turn left 90° and go east for 10 miles, then turn left 90° again, you can be anywhere from 10 miles from your starting point to exactly back where you started, if that point happened to be the North Pole. So you may have a mathematically beautiful bearing on the UTM coordinate plane, but on the ground, you cannot turn the same angles as the UTM geometry indicates and follow that same path. You will either follow a straight line on the ground and a curved line on a Cartesian plane, or vice versa.
You could do a distance between two points on the same northings and separately, the same eastings, and derive a north distance and an east distance. However, in reality the angles of this triangle will make sense only on paper, and not on the globe. If a plane took off at the bearing calculated by such a triangle, it would arrive in the wrong continent.

Issues with Raytracing triangles (orientation and coloring)

EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach. The new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
To debug the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe that it is an issue with the shadow rays that I am sending out. I suspect that it has to do with the angle that the triangle/plane is at relative to the camera.
Here is my process for ray-tracing triangles which is based off of the process in this website.
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }

    double intersectionDistance = numerator / denominator;

    //intersectionDistance < 0 means the "intersection" is behind the ray (pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    //Pre-compute unique five dot-products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);
    double denominator = (uv * uv) - (uu * vv);

    //get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think that I am having some issue with my coloring. I think that it has to do with the normals of the various triangles. Here is the equation I am considering when I am building my lighting model for spheres and triangles:
Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr * ca + cr * cl * max(0, n \dot l)) + cl * cp * max(0, e \dot r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);

    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r

    double visibilityTerm = isInShadow ? 0 : 1;

    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);

    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle, and about -.707 for the blue triangle, so I'm not sure if the normal being the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal));), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices(a,b,c), and the edges (u,v) are computed using a-b and c-b respectively (also, those are used for calculating the plane/triangle normal). A Vector is made up of an (x,y,z) point, and a Ray is made up of a origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering, and with Phong highlights on the triangles, but the problem I was trying to solve here was fixed.
It is a common problem in ray tracing of scenes containing planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light hits the object from the direction toward which the outward-facing normal points. This can be true only for half the possible orientations of your triangle, and you've been in bad luck to orient it with its normal facing away from the light.
Technically speaking, in a physical world your triangle would not have zero volume. It's composed of some layer of material which is just thin. On either side it has a proper normal that points outside. Assigning a single normal is a simplification that's fair to take because the two only differ in sign.
However, if we made a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or as if it hit a surface that could not possibly be in its way. That's why they give an erroneous result. The negative value will subtract light from other sources, and depending on the magnitude and implementation may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either what we're using or its negative, we can simply fix all the cases at once by taking a preventive absolute value where a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top use the additional information (the sign) to choose between two different materials (front and back) you may provide if desired. Alternatively, they can be set not to draw the back faces at all, because they will be overdrawn by front faces in solid objects anyway (known as face culling), saving about half of the numerical work.
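Applied to the question's calculateIlluminationModel() above, the suggested fix is essentially a one-line change (this is just the preventive absolute value described here; flipping the geometric normal toward the incoming ray would be an alternative approach):
// treat the triangle as two-sided: ignore which side the light hits
double angleBetweenLightAndNormal = Math.abs(directionToLight.dotProduct(normal));
// alternative (assuming Vector.multiply takes a scalar, as elsewhere in the code):
// if (normal.dotProduct(ray.getDirection()) > 0) normal = normal.multiply(-1);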

Minimising cumulative floating point arithmetic error

I have a 2D convex polygon in 3D space and a function to measure the area of the polygon.
public double area() {
    if (vertices.size() >= 3) {
        double area = 0;
        Vector3 origin = vertices.get(0);
        Vector3 prev = vertices.get(1).clone();
        prev.sub(origin);
        for (int i = 2; i < vertices.size(); i++) {
            Vector3 current = vertices.get(i).clone();
            current.sub(origin);
            Vector3 cross = prev.cross(current);
            area += cross.magnitude();
            prev = current;
        }
        area /= 2;
        return area;
    } else {
        return 0;
    }
}
To test that this method works at all orientations of the polygon I had my program rotate it a little bit each iteration and calculate the area. Like so...
Face f = poly.getFaces().get(0);
for (int i = 0; i < f.size(); i++) {
    Vector3 v = f.getVertex(i);
    v.rotate(0.1f, 0.2f, 0.3f);
}
if (blah % 1000 == 0)
    System.out.println(blah + ":\t" + f.area());
My method seems correct when testing with a 20x20 square. However the rotate method (a method in the Vector3 class) seems to introduce some error into the position of each vertex in the polygon, which affects the area calculation. Here is the Vector3.rotate() method
public void rotate(double xAngle, double yAngle, double zAngle) {
    double oldY = y;
    double oldZ = z;
    y = oldY * Math.cos(xAngle) - oldZ * Math.sin(xAngle);
    z = oldY * Math.sin(xAngle) + oldZ * Math.cos(xAngle);

    oldZ = z;
    double oldX = x;
    z = oldZ * Math.cos(yAngle) - oldX * Math.sin(yAngle);
    x = oldZ * Math.sin(yAngle) + oldX * Math.cos(yAngle);

    oldX = x;
    oldY = y;
    x = oldX * Math.cos(zAngle) - oldY * Math.sin(zAngle);
    y = oldX * Math.sin(zAngle) + oldY * Math.cos(zAngle);
}
Here is the output for my program in the format "iteration: area":
0: 400.0
1000: 399.9999999999981
2000: 399.99999999999744
3000: 399.9999999999959
4000: 399.9999999999924
5000: 399.9999999999912
6000: 399.99999999999187
7000: 399.9999999999892
8000: 399.9999999999868
9000: 399.99999999998664
10000: 399.99999999998386
11000: 399.99999999998283
12000: 399.99999999998215
13000: 399.9999999999805
14000: 399.99999999998016
15000: 399.99999999997897
16000: 399.9999999999782
17000: 399.99999999997715
18000: 399.99999999997726
19000: 399.9999999999769
20000: 399.99999999997584
Since this is intended to eventually be for a physics engine I would like to know how I can minimise the cumulative error since the Vector3.rotate() method will be used on a very regular basis.
Thanks!
A couple of odd notes:
The error is proportional to the amount rotated, i.e. bigger rotation per iteration -> bigger error per iteration.
There is more error when passing doubles to the rotate function than when passing it floats.
You'll always have some cumulative error with repeated floating point trig operations — that's just how they work. To deal with it, you basically have two options:
Just ignore it. Note that, in your example, after 20,000 iterations(!) the area is still accurate down to 13 decimal places. That's not bad, considering that doubles can only store about 16 decimal places to begin with.
Indeed, plotting your graph, the area of your square seems to be going down more or less linearly:
This makes sense, assuming that the effective determinant of your approximate rotation matrix is about 1 − 3.417825 × 10⁻¹⁸, which is well within the normal double precision floating point error range of one. If that's the case, the area of your square would continue a very slow exponential decay towards zero, such that you'd need about 7.3 × 10¹⁴ iterations to get the area down to 399. Assuming 100 iterations per second, that's about 230 thousand years.
Edit: When I first calculated how long it would take for the area to reach 399, it seems I made a mistake and somehow managed to overestimate the decay rate by a factor of about 400,000(!). I've corrected the mistake above.
If you still feel you don't want any cumulative error, the answer is simple: don't iterate floating point rotations. Instead, have your object store its current orientation in a member variable, and use that information to always rotate the object from its original orientation to its current one.
This is simple in 2D, since you just have to store an angle. In 3D, I'd suggest storing either a quaternion or a matrix, and occasionally rescaling it so that its norm / determinant stays approximately one (and, if you're using a matrix to represent the orientation of a rigid body, that it remains approximately orthogonal).
Of course, this approach won't eliminate cumulative error in the orientation of the object, but the rescaling does ensure that the volume, area and/or shape of the object won't be affected.
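For example, a minimal sketch (class and method names are my own, not from the question) of storing the orientation as a quaternion and renormalizing it on each update:
class Orientation {
    // current orientation as a unit quaternion
    double w = 1, x = 0, y = 0, z = 0;

    // compose an extra rotation of 'angle' radians about the unit axis (ax, ay, az)
    void rotate(double angle, double ax, double ay, double az) {
        double c = Math.cos(angle / 2), s = Math.sin(angle / 2);
        double rw = c, rx = ax * s, ry = ay * s, rz = az * s;
        // Hamilton product: new orientation = incremental rotation * old orientation
        double nw = rw * w - rx * x - ry * y - rz * z;
        double nx = rw * x + rx * w + ry * z - rz * y;
        double ny = rw * y - rx * z + ry * w + rz * x;
        double nz = rw * z + rx * y - ry * x + rz * w;
        // rescale so the norm stays 1, keeping lengths and areas from drifting
        double n = Math.sqrt(nw * nw + nx * nx + ny * ny + nz * nz);
        w = nw / n; x = nx / n; y = ny / n; z = nz / n;
    }
}
The object's world-space vertices are then always produced by rotating the original, unrotated vertices by this single quaternion, rather than by rotating already-rotated vertices again and again.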
You say there is cumulative error, but I don't believe there is (note how your output doesn't always go down), and the rest of the error is just due to rounding and loss of precision in a float.
I did work on a 2D physics engine in university (in Java) and found double to be more precise (of course it is; see Oracle's data type sizes).
In short, you will never get rid of this behaviour; you just have to accept the limitations of precision.
EDIT:
Now that I look at your area() function, there is possibly some cumulative error due to
+= cross.magnitude
but I have to say that whole function looks a bit odd. Why does it need to know the previous vertices to calculate the current area?

problems getting the angle of two quaternions

Okay, so I'm trying to get the angle of two quaternions, and it almost works perfectly, but then it jumps from
evec angle: 237.44999653311922
evec angle: 119.60001380112993
and I can't figure out why for the life of me. (Note: evec was an old variable name that just stayed in the print statement.)
Anyway, here's my code:
FloatBuffer fb = BufferUtils.createFloatBuffer(16);
// get the current modelview matrix
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, fb);
Matrix4f mvm = new Matrix4f();
mvm.load(fb);
Quaternion qmv2 = new Quaternion();
Matrix4f imvm = new Matrix4f();
Matrix4f.invert(mvm, imvm);
qmv2.setFromMatrix(imvm);
qmv2.normalise();
Matrix3f nil = new Matrix3f();
nil.setIdentity();
Quaternion qnil = new Quaternion();
qnil.setFromMatrix(nil);
qnil.normalise();
float radAngle = (float)(2.0 * Math.acos(Quaternion.dot(qmv2, qnil)));
System.out.println("evec angle: " + Math.toDegrees(radAngle));
How do I make it stop jumping from 237 to 119 and keep going up to the full 360?
First, what does an angle between two four dimensional vectors (=quaternions) mean to you geometrically? You can calculate it but the result might not make sense. Maybe you are looking for the angle between the axis of the rotations that the two quaternions represent?
Second, you have an error here:
float radAngle = (float)(2.0 * Math.acos(Quaternion.dot(qmv2, qnil)));
^^^^^
The result from acos is the angle. Don't multiply by 2.
Third, the angle between two vectors in a 3D or 4D space can never be greater than 180°. On a plane it can because the plane imposes an orientation. In a 3D space you would have to define an arbitrary direction as "up" to get angles higher than 180°.
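Putting those points together, a corrected version of the computation might look like this (the clamp is my own addition, guarding against the dot product drifting slightly outside [-1, 1] due to rounding):
float dot = Quaternion.dot(qmv2, qnil);
dot = Math.max(-1.0f, Math.min(1.0f, dot));
double radAngle = Math.acos(dot);                              // no factor of 2
System.out.println("evec angle: " + Math.toDegrees(radAngle)); // always in [0, 180]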

Mathematical Vectors and Rotations (Topdown java game dev - physics problem)

I've been working on a top down car game for quite a while now, and it seems it always comes back to being able to do one thing properly. In my instance it's getting my car physics properly done.
I'm having a problem with my current rotation not being handled properly. I know the problem lies in the fact that my magnitude is 0 while multiplying it by Math.cos/sin direction, but I simply have no idea how to fix it.
This is the current underlying code.
private void move(int deltaTime) {
    double secondsElapsed = (deltaTime / 1000.0); // seconds since last update
    double speed = velocity.magnitude();
    double magnitude = 0;
    if (up)
        magnitude = 100.0;
    if (down)
        magnitude = -100.0;
    if (right)
        direction += rotationSpeed * (speed / topspeed); // * secondsElapsed;
    if (left)
        direction -= rotationSpeed * (speed / topspeed); // * secondsElapsed;

    double dir = Math.toRadians(direction - 90);
    acceleration = new Vector2D(magnitude * Math.cos(dir), magnitude * Math.sin(dir));

    Vector2D deltaA = acceleration.scale(secondsElapsed);
    velocity = velocity.add(deltaA);
    if (speed < 1.5 && speed != 0)
        velocity.setLength(0);

    Vector2D deltaP = velocity.scale(secondsElapsed);
    position = position.add(deltaP);
    ...
}
My vector class emulates vector basics, including addition, subtraction, multiplying by scalars, etc.
To reiterate the underlying problem: magnitude * Math.cos(dir) is 0 when magnitude is 0, so when a player only presses the right or left arrow keys with no 'acceleration', the direction doesn't change.
If anyone needs more information you can find it at
http://www.java-gaming.org/index.php/topic,23930.0.html
Yes, those physics calculations are all mixed up. The fundamental problem is that, as you've realized, multiplying the acceleration by the direction is wrong. This is because your "direction" is not just the direction the car is accelerating; it's the direction the car is moving.
The easiest way to straighten this out is to start by considering acceleration and steering separately. First, acceleration: For this, you've just got a speed, and you've got "up" and "down" keys. For that, the code looks like this (including your threshold code to reduce near-zero speeds to zero):
double acceleration = 0;
if (up)
    acceleration = 100.0;
if (down)
    acceleration = -100.0;
speed += acceleration * secondsElapsed;
if (Math.abs(speed) < 1.5) speed = 0;
Separately, you have steering, which changes the direction of the car's motion -- that is, it changes the unit vector you multiply the speed by to get the velocity. I've also taken the liberty of modifying your variable names a little bit to look more like the acceleration part of the code, and clarify what they mean.
double rotationRate = 0;
if (right)
    rotationRate = maxRotationSpeed * (speed / topspeed);
if (left)
    rotationRate = -maxRotationSpeed * (speed / topspeed);
direction += rotationRate * secondsElapsed;
double dir = Math.toRadians(direction - 90);
velocity = new Vector2D(speed * Math.cos(dir), speed * Math.sin(dir));
You can simply combine these two pieces, using the speed from the first part in the velocity computation from the second part, to get a complete simple acceleration-and-steering simulation.
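A minimal sketch of the combined update, reusing the fields from the question's move() method (up, down, left, right, topspeed, position); the scalar speed field and maxRotationSpeed are assumptions introduced here:
private void move(int deltaTime) {
    double secondsElapsed = deltaTime / 1000.0;

    // 1. accelerate or brake along the direction of travel
    double acceleration = 0;
    if (up)
        acceleration = 100.0;
    if (down)
        acceleration = -100.0;
    speed += acceleration * secondsElapsed;
    if (Math.abs(speed) < 1.5)
        speed = 0;

    // 2. steering only changes the heading, scaled by the current speed
    double rotationRate = 0;
    if (right)
        rotationRate = maxRotationSpeed * (speed / topspeed);
    if (left)
        rotationRate = -maxRotationSpeed * (speed / topspeed);
    direction += rotationRate * secondsElapsed;

    // 3. rebuild the velocity vector and integrate the position
    double dir = Math.toRadians(direction - 90);
    velocity = new Vector2D(speed * Math.cos(dir), speed * Math.sin(dir));
    position = position.add(velocity.scale(secondsElapsed));
}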
Since you asked about acceleration as a vector, here is an alternate solution which would compute things that way.
First, given the velocity (a Vector2D value), let's suppose you can compute a direction from it. I don't know your syntax, so here's a sketch of what that might be:
double forwardDirection = Math.toDegrees(velocity.direction()) + 90;
This is the direction the car is pointing. (Cars are always pointing in the direction of their velocity.)
Then, we get the components of the acceleration. First, the front-and-back part of the acceleration, which is pretty simple:
double forwardAcceleration = 0;
if (up)
    forwardAcceleration = 100;
if (down)
    forwardAcceleration = -100;
The acceleration due to steering is a little more complicated. If you're going around in a circle, the magnitude of the acceleration towards the center of that circle is equal to the speed squared divided by the circle's radius. And, if you're steering left, the acceleration is to the left; if you're steering right, it's to the right. So:
double speed = velocity.magnitude();
double leftAcceleration = 0;
if (right)
    leftAcceleration = ((speed * speed) / turningRadius);
if (left)
    leftAcceleration = -((speed * speed) / turningRadius);
Now, you have a forwardAcceleration value that contains the acceleration in the forward direction (negative for backward), and a leftAcceleration value that contains the acceleration in the leftward direction (negative for rightward). Let's convert that into an acceleration vector.
First, some additional direction variables, which we use to make unit vectors (primarily to make the code easy to explain):
double leftDirection = forwardDirection + 90;
double fDir = Math.toRadians(forwardDirection - 90);
double lDir = Math.toRadians(leftDirection - 90);
Vector2D forwardUnitVector = new Vector2D(Math.cos(fDir), Math.sin(fDir));
Vector2D leftUnitVector = new Vector2D(Math.cos(lDir), Math.sin(lDir));
Then, you can create the acceleration vector by assembling the forward and leftward pieces, like so:
Vector2D acceleration = forwardUnitVector.scale(forwardAcceleration);
acceleration = acceleration.add(leftUnitVector.scale(leftAcceleration));
Okay, so that's your acceleration. You convert that to a change in velocity like so (note that the correct term for this is deltaV, not deltaA):
Vector2D deltaV = acceleration.scale(secondsElapsed);
velocity = velocity.add(deltaV);
Finally, you probably want to know what direction the car is headed (for purposes of drawing it on screen), so you compute that from the new velocity:
double forwardDirection = Math.toDegrees(velocity.direction()) + 90;
And there you have it -- the physics computation done with acceleration as a vector, rather than using a one-dimensional speed that rotates with the car.
(This version is closer to what you were initially trying to do, so let me analyze a bit of where you went wrong. The part of the acceleration that comes from up/down is always in a direction that is pointed the way the car is pointed; it does not turn with the steering until the car turns. Meanwhile, the part of the acceleration that comes from steering is always purely to the left or right, and its magnitude has nothing to do with the front/back acceleration -- and, in particular, its magnitude can be nonzero even when the front/back acceleration is zero. To get the total acceleration vector, you need to compute these two parts separately and add them together as vectors.)
Neither of these computations is completely precise. In this one, you compute the "forward" and "left" directions from where the car started, but the car is rotating and so those directions change over the timestep. Thus, the deltaV = acceleration * time equation is only an estimate and will produce a slightly wrong answer. The other solution has similar inaccuracies -- but one of the reasons that the other solution is better is that, in this one, the small errors mean that the speed will increase if you steer the car left and right, even if you don't touch the "up" key -- whereas, in the other one, that sort of cross-error doesn't happen because we keep the speed and steering separate.
