I'm making a billiards game in Java. I used this guide for collision resolution. During testing, I noticed that the combined speed of the two pool balls is greater after the collision than before. The extra speed seems to range from about 0% on a straight shot up to about 50% on an extremely wide (glancing) shot. I assumed that the combined velocities would remain the same. Is it my code or my understanding of physics that is wrong?
private void solveCollision(PoolBall b1, PoolBall b2) {
System.out.println(b1.getMagnitude() + b2.getMagnitude());
// vector tangent to collision point
float vTangX = b2.getY() - b1.getY();
float vTangY = -(b2.getX() - b1.getX());
// normalize tangent vector
float mag = (float) (Math.sqrt((vTangX * vTangX) + (vTangY * vTangY)));
vTangX /= mag;
vTangY /= mag;
// relative velocity of b1 with respect to b2
float NVX1 = b1.getVector().get(0) - b2.getVector().get(0);
float NVY1 = b1.getVector().get(1) - b2.getVector().get(1);
// dot product
float dot = (NVX1 * vTangX) + (NVY1 * vTangY);
// adjust length of tangent vector
vTangX *= dot;
vTangY *= dot;
// component of the relative velocity perpendicular to the tangent (along the line of centers)
float vPerpX = NVX1 - vTangX;
float vPerpY = NVY1 - vTangY;
// apply vector to pool balls
b1.setVector(b1.getVector().get(0) - vPerpX, b1.getVector().get(1) - vPerpY);
b2.setVector(b2.getVector().get(0) + vPerpX, b2.getVector().get(1) + vPerpY);
System.out.println(b1.getMagnitude() + b2.getMagnitude());
}
Not all of this explanation will be strictly on topic, and I will assume minimal foreknowledge to accommodate potential future users - unfortunately some may consequently find it pedantic.
Velocity is not a conserved quantity and therefore the magnitude-sum of velocities before a collision is not necessarily equal to the magnitude-sum of velocities after.
This is more intuitive for inelastic collisions, particularly when you consider a scenario such as an asteroid colliding with Earth's moon [1], where a typical impact velocity is on the order of 10 - 20 kilometers per second. If scalar velocity were conserved in this case - even at a 'wide' impact angle of 45° (the most probable) - the resulting velocity for the moon would be sufficient to eject it from Earth's orbit.
So clearly scalar velocity is not necessarily conserved for an inelastic collision. Elastic collisions are less intuitive.
This is - as you've noted - because there is one scenario in which scalar velocity is conserved in a perfectly elastic collision (a straight-on shot), while inelastic collisions never conserve it [2], which makes the behavior on a glancing shot feel like an inconsistency.
To rectify this, we have to treat velocity as a vector instead of a scalar. Consider the simplest elastic collision between two balls: one ball at rest and the second ball striking the first 'straight-on' (an impact angle of 90°, using the same convention as the asteroid example, where 90° means dead-on). The second ball will come to rest and the first will leave the collision with velocity equal to the second's initial velocity. Velocity is conserved - the magnitude-sum of velocities is the same before and after - all is well.
This will not, however, be the case for impact angles other than 90°, because the magnitude-sum fails to account for vector components cancelling out. Say, for example, you again have one ball at rest and the second ball strikes it off-centre such that both balls leave the collision at 45° angles from the second ball's initial direction of travel [3]. The two balls then have the same velocity component parallel to the initial direction of motion, and equal but opposite perpendicular components. When you take a vector sum, the two perpendicular components cancel and the sum of the two parallel components recovers the initial velocity vector. However, the sum of the magnitudes of the two resulting velocity vectors is larger than the magnitude of the second ball's initial velocity, because taking magnitudes first discards the signs of the opposing perpendicular components, so they no longer cancel.
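To make that concrete, here is a minimal numerical sketch of the 45° case (the class and variable names are mine, not from your code), showing that the vector sum recovers the initial velocity while the magnitude-sum grows by a factor of sqrt(2):
public class MagnitudeSumDemo {
    public static void main(String[] args) {
        double v = 1.0;                        // incoming ball moves along +x at speed v; target ball is at rest
        double v1x = v / 2.0, v1y =  v / 2.0;  // after the collision: speed v/sqrt(2) at +45 degrees
        double v2x = v / 2.0, v2y = -v / 2.0;  // after the collision: speed v/sqrt(2) at -45 degrees
        double sumX = v1x + v2x;               // = v  -> vector sum of the velocities is unchanged
        double sumY = v1y + v2y;               // = 0  -> the perpendicular components cancel
        double magnitudeSum = Math.hypot(v1x, v1y) + Math.hypot(v2x, v2y); // = v * sqrt(2), about 41% 'extra'
        System.out.println(sumX + ", " + sumY + ", " + magnitudeSum);
    }
}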
Of course the best approach is not to look at the velocity but at the momentum - it is the conservation of momentum which governs the behaviors outlined above, and in terms of momentum the explanation is very simple: it dictates that the velocity of the center of mass must not change in any collision; what a perfectly elastic collision adds is that kinetic energy is conserved as well.
[1] The bigger one - since Earth recently captured a second true satellite.
[2] This is, in fact, part of the definition of an inelastic collision.
[3] For additional background on calculating angles of departure see here.
Related
So I'm attempting to calculate whether a point is inside of an angle. While researching, I came across many terms I am unfamiliar with. Essentially, there is a point (A) with a 120° angle protruding from it, and I want to test whether a secondary point (B) falls inside that angle. All that is known is the size of the angle, the direction the angle is facing, and the X and Y values of both points. This will all be done in Java, so any and all help is appreciated!
To better explain it:
There is a point with two vectors protruding from it. The known angle is the angle formed between those two vectors.
First of all, an angle is not defined for two points -- only for two lines.
Define a line that is your 0° 2D-space. For a line, you need a point and a direction.
Calculate the normal-vector for your line (Turn your directional vector by 90°); normalize both your directional and normal vector so that sqrt(x^2+y^2) = 1.
Calculate the distance vector between your initial point and the other point, this is your second line, sharing the same initial point.
Calculate two dot products, a and b:
a = distance vector · normal vector
b = distance vector · directional vector
Use simple trigonometry to calculate the angle between the distance vector and your 0° line: it is atan2(a, b), i.e. the arctangent of (a/b) with the quadrant taken into account.
You probably want to take the absolute value of the result as well if you don't care about left and right; a short sketch putting these steps together follows below.
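A minimal Java sketch of those steps (the method and variable names are mine; I assume the facing direction and the total cone angle are given in degrees):
// Returns true if point B lies inside the cone of the given total angle,
// opening from point A around the facing direction.
static boolean isInsideAngle(double ax, double ay, double bx, double by,
                             double facingDegrees, double coneDegrees) {
    // directional vector of the 0-degree line (the direction the angle is facing)
    double dirX = Math.cos(Math.toRadians(facingDegrees));
    double dirY = Math.sin(Math.toRadians(facingDegrees));
    // normal vector: the direction rotated by 90 degrees, already unit length
    double norX = -dirY;
    double norY = dirX;
    // distance vector from A to B
    double dx = bx - ax;
    double dy = by - ay;
    // the two dot products described above
    double a = dx * norX + dy * norY;   // component along the normal
    double b = dx * dirX + dy * dirY;   // component along the direction
    // signed angle between the facing direction and the A->B vector
    double angleDegrees = Math.toDegrees(Math.atan2(a, b));
    return Math.abs(angleDegrees) <= coneDegrees / 2.0;
}
For the 120° angle in the question you would call isInsideAngle(ax, ay, bx, by, facing, 120).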
So, I'm working on a 2D physics engine, and I have an issue. I'm having a hard time conceptualizing how you would calculate this:
Take two squares: they move, collide, and bounce off along some vector based on the velocities of the two plus their shapes.
I have two vector lists (2D double lists) that represent these two shapes; how does one get the normal vector?
The hit vector is just s2 - s1 (where s1 is the first shape and s2 the second), in terms of the positions of their centers of mass.
Now, I know a normal vector is one perpendicular to an edge, and I know you can get a vector perpendicular to a line by rotating it 90 degrees, but which edge?
I read in several places that it is the edge a corner collided with. How do you determine this?
It just makes no sense to me how you would mathematically or programmatically determine which edge.
Can anyone point out what I'm doing wrong in my understanding? Sorry for providing no code to explain this, as I'm having an issue writing the code for it in the first place.
Figure 1: In 2D, the normal vector is perpendicular to the tangent line.
Figure 2: In 3D, the normal vector is perpendicular to the tangent plane.
Figure 3: For a square, the normal vector is easy if you are not at a corner; it is just perpendicular to the side of the square (in the image above, n = 1i + 0j for any point along the right side of the square).
However, at a corner it becomes a little more difficult because the tangent is not well-defined (in terms of derivatives, the tangent is discontinuous at the corner, so perpendicular is ambiguous).
Even though the normal vector is not defined at a corner, it is defined directly to the left and right of it. Therefore, you can use the average of those two normals (n1 and n2) as the normal at a corner.
To be less technical, the normal vector will be in the direction from the center of the square to the corner of the collision.
EDIT: To answer the OP's further question in the chat below: "How do you calculate the normal vector for a generic collision between two polygons s1 and s2 by only knowing the intersecting vertices?"
In general, you can calculate the normal like this (N is the total number of vertices, m is the number of vertices inside the collision):
v_center = (1/N) * Σ v_i over all N vertices
v_collision = (1/m) * Σ v_i over the m vertices inside the collision
n = v_collision - v_center
Fig. 1 - v_collision is only a single vertex.
Fig. 2 - v_collision is the average of two vertices.
Fig. 3 - v_collision for a generic polygon intersection.
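A small Java sketch of those formulas (the names are my own, and I assume each vertex is an {x, y} pair):
// verts: all N vertices of the polygon; collisionVerts: the m vertices found
// inside the other shape. Returns the normalized collision normal.
static double[] collisionNormal(double[][] verts, double[][] collisionVerts) {
    double[] vCenter = average(verts);              // v_center
    double[] vCollision = average(collisionVerts);  // v_collision
    double nx = vCollision[0] - vCenter[0];
    double ny = vCollision[1] - vCenter[1];
    double len = Math.hypot(nx, ny);
    return new double[] { nx / len, ny / len };
}

static double[] average(double[][] points) {
    double sx = 0, sy = 0;
    for (double[] p : points) {
        sx += p[0];
        sy += p[1];
    }
    return new double[] { sx / points.length, sy / points.length };
}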
I'm aware of Quaternion methods of doing this. But ultimately these methods require us to transform all objects in question into the rotation 'space' of the Camera.
However, looking at the math, I'm certain there must be a simple way to get the XY, YZ and XZ equations for a line based on only the YAW (heading) and PITCH of a camera.
For instance, given the normals of the view frustum such as (sqrt(2), sqrt(2), 0) you can easily construct the line (x+y=0) for the XY plane. But once the Z (in this case, Z is being used for depth, not GL's Y coordinate scrambling) changes, the calculations become more complex.
Additionally, given the order of applying rotations: yaw, pitch, roll; roll does not affect the normals of the view frustum at all.
So my question is very simple: how do I go from a 3-coordinate view normal (that is normalized, i.e. the vector length is 1) or a yaw (in radians), pitch (in radians) pair to a set of three line equations that map the direction of the 'eye' through space?
NOTE:
I have had success with quaternions for this, but the math is too expensive for every entity in a simulation to run for visual checks, on top of having to check against all visible objects, even with various checks to reduce the number of viewable objects.
Use any of the popular methods out there for constructing a matrix from yaw and pitch to represent the camera rotation. The matrix elements now contain all kinds of useful information. For example (when using the usual representation) the first three elements of the third column will point along the view vector (either into or out of the camera, depending on the convention you're using). The first three elements of the second column will point 'up' relative to the camera. And so on.
However it's hard to answer your question with confidence as lots of things you say don't make sense to me. For example I've no idea what "a set of three line equations that map the direction of the 'eye' through space" means. The eye direction is simply given by a vector as I described above.
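For illustration, here is a minimal sketch of that idea under one common convention (yaw about the world Y axis, then pitch about the camera's X axis, camera looking down -Z); other engines order the rotations or orient the axes differently, so treat the exact signs as assumptions rather than the one true layout:
// Builds the camera rotation matrix R = R_yaw * R_pitch (angles in radians).
// The columns of R are the camera's right, up and back (+Z) axes in world space.
static double[][] cameraRotation(double yaw, double pitch) {
    double cy = Math.cos(yaw),   sy = Math.sin(yaw);
    double cp = Math.cos(pitch), sp = Math.sin(pitch);
    return new double[][] {
        {  cy, sy * sp, sy * cp },
        {   0,      cp,     -sp },
        { -sy, cy * sp, cy * cp }
    };
}

// The view (eye) direction is the negated third column; 'up' is the second column.
double[][] r = cameraRotation(yaw, pitch);
double[] viewDir = { -r[0][2], -r[1][2], -r[2][2] };
double[] up      = {  r[0][1],  r[1][1],  r[2][1] };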
nx = (float)(-Math.cos(yawpos)*Math.cos(pitchpos));
ny = (float)(Math.sin(yawpos)*Math.cos(pitchpos));
nz = (float)(-Math.sin(pitchpos));
That gets the camera's view normal (its direction vector). This assumes yaw and pitch are in radians.
If you have the position of the camera (px,py,pz) you can get the parametric equation thusly:
x = px + nx*t
y = py + ny*t
z = pz + nz*t
You can also construct the 2d projections of this line:
0 = ny*(x - px) - nx*(y - py)
0 = nz*(y - py) - ny*(z - pz)
0 = nx*(z - pz) - nz*(x - px)
I think this is correct. If someone notes an incorrect plus/minus let me know.
I have around 1000 points. I'm trying to group these points based on distance. I'm using the haversine formula, but it seems to be super slow. On Android, 1000 points take 4 seconds; in my local environment it takes 60 ms.
I do not care about precision, and the points are no more than 25 km apart.
Is there another formula I can use?
First, for items that close to each other, curvature of the Earth is not going to matter too much. Hence, you can treat it as flat, at which point you're looking at the Pythagorean Theorem for distance (square root of the sum of the squares of the x/y distances).
Second, if all you are doing is sorting/grouping, you can drop the square root calculation and just sort/group on the square of the distance. On devices lacking a floating point coprocessor, such as the first couple of generations of Android phones, that will do a lot of good right there.
Third, you don't indicate the coordinate system you are using for the points, but if you can do your calculations using fixed-point math, that too will boost performance, particularly on coprocessor-less devices. That's why the Google Maps add-on for Android uses GeoPoint and microdegrees rather than the Java double degrees in the Location you get back from LocationManager.
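As a rough sketch of the second and third points (the method name is mine; coordinates are microdegree ints, as in GeoPoint), you can compare squared distances entirely in integer math:
// Squared distance in microdegrees, accumulated as a long so nothing overflows.
// Good enough for sorting/grouping nearby points; note it is not metres and it
// ignores the cos(latitude) correction for longitude discussed in the next answer.
static long squaredMicrodegreeDistance(int lat1E6, int lon1E6, int lat2E6, int lon2E6) {
    long dLat = (long) lat2E6 - lat1E6;
    long dLon = (long) lon2E6 - lon1E6;
    return dLat * dLat + dLon * dLon;
}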
As long as you don't need to cope with points near the poles, and an approximation is OK (which it should be for grouping), you can work out the relative scaling between latitude degrees and longitude degrees just once and use it for every straight x² + y² calculation; for relative distances you can skip the square root.
If you're working with degrees, to scale them so that latitude and longitude cover the same relative distance you use the cosine of the latitude. I would scale the latitude to the longitude, so each degree maps to a known distance; the calculation will be something like:
(latitude difference of the two points) * 1/cos(latitude)
You work out the 1/cos(latitude) just once for all points, assuming the latitude does not change much over your sample set.
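A minimal sketch of that idea (the names are mine; here I scale the longitude difference by cos(latitude) instead of scaling the latitude, which is the same correction expressed in kilometres), comparing squared distances so no square root or per-pair trig is needed:
static final double KM_PER_DEGREE = 111.32;   // approximate length of one degree of latitude

// cosLat = Math.cos(Math.toRadians(typicalLatitude)), computed once for the whole data set.
static double squaredDistanceKm(double lat1, double lon1,
                                double lat2, double lon2, double cosLat) {
    double dy = (lat2 - lat1) * KM_PER_DEGREE;            // north-south distance
    double dx = (lon2 - lon1) * KM_PER_DEGREE * cosLat;   // east-west distance, shrunk with latitude
    return dx * dx + dy * dy;                             // compare these directly when grouping
}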
Perhaps remove the calculation of the curvature of the Earth?
If the functionality of your app permits this, do so.
This construction always holds true. Given two points, you can always plot them, draw the right triangle, and then find the length of the hypotenuse. The length of the hypotenuse is the distance between the two points. Since this construction always works, it can be turned into a formula:
Distance Formula: Given the two points (x1, y1) and (x2, y2), the distance between these points is given by the formula: http://www.purplemath.com/modules/distform.htm
Distance = sqrt( (x2 - x1)^2 + (y2 - y1)^2 )
Update with correct notation:
double distance = Math.sqrt(Math.pow(x2 - x1, 2) + Math.pow(y2 - y1, 2));
As far as I know, the best way to do this is to use graph theory; Dijkstra's algorithm is the fastest algorithm I know of for this kind of task.
Really worth learning; it optimizes the work very well.
I'm trying to understand how quaternion rotations work. I found this mini tutorial http://www.julapy.com/blog/2008/12/22/quaternion-rotation/ but he makes some assumptions that I can't work out, like how I can "work out the rotation vectors around each axis, simply by rotating the vector around an axis", and how he calculates angleDegreesX, angleDegreesY and angleDegreesZ.
Can some one provide a working example or explanation?
The shortest possible summary is that a quaternion is just shorthand for a rotation matrix. Whereas a 4x4 matrix requires 16 individual values, a quaternion can represent the exact same rotation in 4.
For the mathematically inclined, I am fully aware that the above is super over-simplified.
To provide a little more detail, let's refer to the Wikipedia article:
Unit quaternions provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more numerically stable and may be more efficient.
What isn't clear from that opening paragraph is that a quaternion is not only convenient, it's unique. If you have a particular orientation of an object, twisting on any number of axes, there exists a single unique quaternion that represents that orientation.
Again, for the mathematically inclined, my uniqueness comment above assumes right-handed rotations. There is an equivalent left-handed quaternion that rotates in the opposite direction around the opposite axis.
For the purpose of simple explanation, that is something of a distinction without a difference.
If you'd like to make a simple quaternion that represents rotation about an axis, here's a short series of steps that will get you there:
Pick your axis of rotation v = {x, y, z}. Just for politeness, please pick a unit vector: if it's not already of length 1, divide all the components by the length of v.
Pick an angle of rotation that you'd like to turn about this axis and call that theta.
The equivalent unit quaternion can be computed using the sample code below:
Quaternion construction:
q = { cos(theta/2.0), // This is the angle component
sin(theta/2.0) * x, // Remember, angle is in radians, not degrees!
sin(theta/2.0) * y, // These capture the axis of rotation
sin(theta/2.0) * z};
Note those divisions by two: those ensure that there's no confusion in the rotation. With a normal rotation matrix, rotating to the right 90 degrees is the same as rotating to the left by 270. The quaternions that are equivalent to those two rotations are distinct: you can't confuse one with the other.
EDIT: responding to the question in the comments:
Let's simplify the problem by setting the following frame of reference:
Pick the center of the screen as the origin (we're going to rotate around that).
X axis points to the right
Y axis points up (top of the screen)
Z axis points out of the screen at your face (forming a nice right handed coordinate system).
So, say we have an example object (an arrow) that starts by pointing to the right (positive x axis). If we move the mouse up from the x axis, the mouse will provide us with a positive x and positive y. Working through the series of steps:
double theta = Math.atan2(y, x);
// Remember, Z axis = {0, 0, 1};
// pseudo code for the quaternion:
q = { cos(theta/2.0), // This is the angle component
sin(theta/2.0) * 0, // As you can see, the zero components are ignored
sin(theta/2.0) * 0, // Left them in for clarity.
sin(theta/2.0) * 1.0};
You need some basic math to do what you need. Basically, you rotate a point around an axis by multiplying the matrix representing that point with a rotation matrix. The result is the rotated matrix representation of that point.
The line
angleX = angleDegreesX * DEGTORAD;
just converts the degrees representation into a radians representation by a simple formula (see this Wikipedia entry on Radians).
You can find some more information and examples of rotation matrices here: Rotation around arbitrary axes
There are probably tools in your programming framework to do that rotation work and retrieve the matrices. Unfortunately, I cannot help you with the quaternions, but your problems seem to be a little bit more basic.
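As a small illustration of that basic idea (not from the tutorial; the names are mine), rotating a point around the Z axis by multiplying it with a rotation matrix looks like this:
// Rotates the point p = {x, y, z} around the Z axis by angleDegrees.
static double[] rotateAroundZ(double[] p, double angleDegrees) {
    double a = Math.toRadians(angleDegrees);   // same degrees-to-radians conversion as the DEGTORAD line above
    double c = Math.cos(a), s = Math.sin(a);
    return new double[] {
        c * p[0] - s * p[1],   // first row of the Z rotation matrix times p
        s * p[0] + c * p[1],   // second row
        p[2]                   // rotation about Z leaves the z component unchanged
    };
}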