I've implemented a particle system. I'm drawing the particles' textures on billboards that should be rotated towards the camera.
This works fine except when the angle between the particle-to-camera vector and the normal approaches 180 degrees. Then the particle starts spinning around itself many times.
The angle is calculated using cos(angle) = dot(a, b) / (length(a) * length(b)); the lengths are both 1 because the vectors are normalized.
The axis is calculated using the cross product of those two vectors.
glDisable(GL_CULL_FACE);

// calculate rotation
Vector3f normal = new Vector3f(0, 0, 1);
Vector3f dir = Vector3f.sub(new Vector3f(GraphicsData.camera.x, GraphicsData.camera.y, GraphicsData.camera.z), new Vector3f(x, y, z), null);
if (dir.length() == 0)
{
    glEnable(GL_CULL_FACE);
    return;
}
dir = (Vector3f) dir.normalise();
float angle = (float) Math.toDegrees(Math.acos(Vector3f.dot(normal, dir)));
Vector3f rotationAxis = Vector3f.cross(normal, dir, null);
rotationAxis = (Vector3f) rotationAxis.normalise();
System.out.println("Angle: " + angle + " Axis: " + rotationAxis);

glBindTexture(GL_TEXTURE_2D, ParticleEngine.particleTextures.get(typeId).texture.getTextureID());
glColor4f(1f, 1f, 1f, time >= lifeTime - decayTime ? ((float) lifeTime - (float) time) / ((float) lifeTime - (float) decayTime) : 1f);
shaderEngine.createModelMatrix(new Vector3f(x, y, z), new Vector3f(angle * rotationAxis.x, angle * rotationAxis.y, angle * rotationAxis.z), new Vector3f(sx, sy, sz));
shaderEngine.loadModelMatrix(shaderEngine.particle);
glCallList(ParticleEngine.particleTextures.get(typeId).displayListId + textureIndex);

glEnable(GL_CULL_FACE);
What am I doing wrong when calculating the rotation?
public static void createModelMatrix(Vector3f pos, Vector3f rot, Vector3f scale)
{
    GraphicsData.camera.modelMatrix = new Matrix4f();
    GraphicsData.camera.modelMatrix.setIdentity();
    GraphicsData.camera.modelMatrix.translate(pos);
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.x), new Vector3f(1, 0, 0));
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.y), new Vector3f(0, 1, 0));
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.z), new Vector3f(0, 0, 1));
    GraphicsData.camera.modelMatrix.scale(scale);
}
More of a long comment, or perhaps a partial answer to the problem:
If you are computing the cross product anyway, then use

norm(a × b) = sin(angle) * norm(a) * norm(b)
dot(a, b) = cos(angle) * norm(a) * norm(b)

to determine

angle = atan2(norm(a × b), dot(a, b))
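The atan2 form can be sketched as a small self-contained example. This uses plain double[] vectors rather than LWJGL's Vector3f, just to keep it runnable on its own; the point is that atan2 stays numerically stable where acos(dot(a, b)) loses precision near 0 and 180 degrees:

```java
public class AngleDemo {
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double norm(double[] a) {
        return Math.sqrt(dot(a, a));
    }

    // angle = atan2(|a x b|, a . b) -- stable even when the vectors are
    // nearly parallel or anti-parallel, unlike acos of the clamped dot.
    static double angleBetween(double[] a, double[] b) {
        return Math.atan2(norm(cross(a, b)), dot(a, b));
    }

    public static void main(String[] args) {
        double[] normal = {0, 0, 1};
        System.out.println(angleBetween(normal, new double[] {1, 0, 0})); // pi/2
        System.out.println(angleBetween(normal, new double[] {0, 0, -1})); // pi
    }
}
```

Note that at exactly 180 degrees the cross product is the zero vector, so the rotation axis is undefined and any vector perpendicular to the normal can be used; that degenerate case still needs separate handling.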
I'm trying to create a ray that translates my mouse coordinates to 3D world coordinates.
Cx = Mx / screenWidth * 2 - 1
Cy = -( My / screenHeight * 2 - 1 )
vNear = InverseViewProjectionMatrix * ( Cx, Cy, -1, 1 )
VFar = InverseViewProjectionMatrix * ( Cx, Cy, 1, 1 )
vNear /= vNear.w
vFar /= vFar.w
After testing, the ray's vFar always appears to come from the same general direction.
It seems like I need to incorporate the camera's view, as I would expect vFar to always be behind my camera.
I'm not entirely sure how that should be added in. Here's my test code:
public void mouseToWorldCordinates(Window window, Camera camera, Vector2d mousePosition){
    float normalised_x = (float)((mousePosition.x / (window.getWidth()*2)) -1);
    float normalised_y = -(float)((mousePosition.y / (window.getHeight()*2)) -1);

    Vector4f mouse = new Vector4f(normalised_x, normalised_y, -1, 1);
    Matrix4f projectionMatrix = new Matrix4f(transformation.getProjectionMatrix()).invert();
    Matrix4f mouse4f = new Matrix4f(mouse, new Vector4f(), new Vector4f(), new Vector4f());
    Matrix4f vNear4f = projectionMatrix.mul(mouse4f);
    Vector4f vNear = new Vector4f();
    vNear4f.getColumn(0, vNear);

    mouse.z = 1f;
    projectionMatrix = new Matrix4f(transformation.getProjectionMatrix()).invert();
    mouse4f = new Matrix4f(mouse, new Vector4f(), new Vector4f(), new Vector4f());
    Matrix4f vFar4f = projectionMatrix.mul(mouse4f);
    Vector4f vFar = new Vector4f();
    vFar4f.getColumn(0, vFar);

    vNear.div(vNear.w);
    vFar.div(vFar.w);

    lines[0] = vNear.x;
    lines[1] = vNear.y;
    lines[2] = vNear.z;
    lines[3] = vFar.x;
    lines[4] = vFar.y;
    lines[5] = vFar.z;
}
The computation of normalised_x and normalised_y is wrong. Normalized device coordinates are in the range [-1.0, 1.0]:
float normalised_x = 2.0f * (float)mousePosition.x / (float)window.getWidth() - 1.0f;
float normalised_y = 1.0f - 2.0f * (float)mousePosition.y / (float)window.getHeight();
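For illustration, the whole unprojection described in the question's pseudocode can be sketched self-contained. This uses plain column-major float[16] arrays instead of JOML's Matrix4f, and assumes the caller supplies the inverse of the combined view-projection matrix (the question's code inverts only the projection matrix, which leaves the result in view space rather than world space):

```java
public class UnprojectDemo {
    // Multiply a column-major 4x4 matrix by a 4-component vector.
    static float[] mulMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    // Unproject a mouse position to a world-space ray segment (near point,
    // far point), given the inverse view-projection matrix.
    static float[][] mouseRay(float mx, float my, float width, float height,
                              float[] invViewProj) {
        float cx = 2.0f * mx / width - 1.0f;
        float cy = 1.0f - 2.0f * my / height; // flip: screen y points down
        float[] vNear = mulMV(invViewProj, new float[] {cx, cy, -1f, 1f});
        float[] vFar  = mulMV(invViewProj, new float[] {cx, cy,  1f, 1f});
        // perspective divide
        for (int i = 0; i < 4; i++) {
            vNear[i] /= vNear[3];
            vFar[i] /= vFar[3];
        }
        return new float[][] {vNear, vFar};
    }

    public static void main(String[] args) {
        // Identity stands in for the inverse view-projection matrix here,
        // purely so the example runs; use your real matrix in practice.
        float[] identity = new float[16];
        for (int i = 0; i < 4; i++) identity[i * 5] = 1f;
        float[][] ray = mouseRay(400, 300, 800, 600, identity);
        System.out.println(ray[0][0] + " " + ray[0][1] + " " + ray[0][2]); // 0.0 0.0 -1.0
    }
}
```

With JOML the same thing is a matter of inverting `projection.mul(view)` once and transforming the two clip-space points with it.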
I'm working with ARCore in Android Studio using java and am trying to implement ray intersection with an object.
I started with Google's provided sample (as found here: https://developers.google.com/ar/develop/java/getting-started).
Upon touching the screen, a ray gets projected and when this ray touches a Plane, a PlaneAttachment (with an Anchor/a Pose) is created in the intersection point.
I would then like to put a 3D triangle in the world attached to this Pose.
At the moment I create my Triangle based on the Pose's translation, like this:
In HelloArActivity, during onDrawFrame(...)
// Code from sample, determining the hits on planes
MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
    for (HitResult hit : frame.hitTest(tap)) {
        // Check if any plane was hit, and if it was hit inside the plane polygon.
        if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
            mTouches.add(new PlaneAttachment(
                    ((PlaneHitResult) hit).getPlane(),
                    mSession.addAnchor(hit.getHitPose())));

            // creating a triangle in the world
            Pose hitPose = hit.getHitPose();
            float[] poseCoords = new float[3];
            hitPose.getTranslation(poseCoords, 0);
            mTriangle = new Triangle(poseCoords);
        }
    }
}
Note: I am aware that the triangle's coordinates should be updated every time the Pose's coordinates get updated. I left this out as it is not part of my issue.
Triangle class
public class Triangle {
    public float[] v0;
    public float[] v1;
    public float[] v2;

    // create triangle around a given coordinate
    public Triangle(float[] poseCoords) {
        float x = poseCoords[0], y = poseCoords[1], z = poseCoords[2];
        this.v0 = new float[]{x + 0.0001f, y - 0.0001f, z};
        this.v1 = new float[]{x, y + 0.0001f, z - 0.0001f};
        this.v2 = new float[]{x - 0.0001f, y, z + 0.0001f};
    }
}
After this, upon tapping the screen again I create a ray projected from the tapped (x, y) part of the screen, using Ian M's code sample provided in the answer to this question: how to check ray intersection with object in ARCore
Ray Creation, in HelloArActivity
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogeneous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame, factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space to get a point
    // along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // use points[8,9,10] as a zero vector to get the ray head position in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
The result of this, however, is that no matter where I tap the screen, it always counts as a hit (regardless of how much distance I add between the points in Triangle's constructor).
I suspect this has to do with how a Pose is located in the world, and using the Pose's translation coordinates as a reference point for my triangle is not the way to go, so I'm looking for the correct way to do this, but any remarks regarding other parts of my method are welcome!
Also I have tested my method for ray-triangle intersection and I don't think it is the problem, but I'll include it here for completeness:
public Point3f intersectRayTriangle(CustomRay R, Triangle T) {
    Vector3f u = new Vector3f(T.v1);
    u.sub(new Point3f(T.v0));
    Vector3f v = new Vector3f(T.v2);
    v.sub(new Point3f(T.v0));

    Vector3f n = new Vector3f(); // cross product: triangle plane normal
    n.cross(u, v);
    if (n.length() == 0) {
        return null; // degenerate triangle
    }

    Vector3f dir = new Vector3f(R.direction);
    Vector3f w0 = new Vector3f(R.origin);
    w0.sub(new Point3f(T.v0));

    float a = -(new Vector3f(n).dot(w0));
    float b = new Vector3f(n).dot(dir);
    if ((float) Math.abs(b) < SMALL_NUM) {
        return null; // ray is parallel to the triangle plane
    }

    float r = a / b;
    if (r < 0.0f) {
        return null; // triangle is behind the ray
    }

    Point3f I = new Point3f(R.origin);
    I.x += r * dir.x;
    I.y += r * dir.y;
    I.z += r * dir.z;

    // The plane hit must also lie inside the triangle; without this test
    // every intersection with the triangle's (infinite) plane counts as a hit.
    Vector3f w = new Vector3f(I);
    w.sub(new Point3f(T.v0));
    float uu = u.dot(u), uv = u.dot(v), vv = v.dot(v);
    float wu = w.dot(u), wv = w.dot(v);
    float D = uv * uv - uu * vv;
    float s = (uv * wv - vv * wu) / D;
    if (s < 0.0f || s > 1.0f) {
        return null;
    }
    float t = (uv * wu - uu * wv) / D;
    if (t < 0.0f || (s + t) > 1.0f) {
        return null;
    }
    return I;
}
Thanks in advance!
Follow-up for: Calculating world coordinates from camera coordinates
I'm multiplying a 2D vector with a transformation matrix (OpenGL's model-view matrix) to get world coordinates from my camera coordinates.
I do this calculation like this:
private Vector2f toWorldCoordinates(Vector2f position) {
    glPushMatrix();
    glScalef(this.zoom, this.zoom, 1);
    glTranslatef(this.position.x, this.position.y, 0);
    glRotatef(ROTATION, 0, 0, 1);

    ByteBuffer m = ByteBuffer.allocateDirect(64);
    m.order(ByteOrder.nativeOrder());
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
    float y = (position.x * m.getFloat(16)) + (position.y * m.getFloat(20)) + m.getFloat(28);

    glPopMatrix();
    return new Vector2f(x, y);
}
Now I also want to do this vice-versa: calculate the camera coordinates for a position in the world. How can I reverse this calculation?
To create a matrix representing the inverse transform to the one above, apply the transforms in reverse, with negative quantities for the rotation and translation and an inverse quantity for the zoom:
glRotatef(-ROTATION, 0, 0, 1);
glTranslatef(-this.position.x, -this.position.y, 0);
glScalef(1.0f / this.zoom, 1.0f / this.zoom, 1);
Then multiply by the position vector as before.
The alternative is to compute the inverse matrix, but this way is much simpler.
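The round trip can be sketched without OpenGL at all. This is a minimal plain-Java version of the same math, assuming 2D points and the call order from the question (scale, then translate, then rotate, which post-multiply, so the point is rotated first, then translated, then scaled); the inverse applies the opposite operations in reverse order:

```java
public class InverseTransformDemo {
    // Forward transform: rotate, then translate, then scale
    // (matching glScalef -> glTranslatef -> glRotatef post-multiplication).
    static float[] toWorld(float px, float py, float zoom,
                           float tx, float ty, float rotDeg) {
        double r = Math.toRadians(rotDeg);
        double x = px * Math.cos(r) - py * Math.sin(r);
        double y = px * Math.sin(r) + py * Math.cos(r);
        x += tx; y += ty;
        x *= zoom; y *= zoom;
        return new float[] {(float) x, (float) y};
    }

    // Inverse: unscale, then untranslate, then unrotate -- the same order
    // as the glRotatef(-)/glTranslatef(-)/glScalef(1/zoom) matrix above.
    static float[] toCamera(float wx, float wy, float zoom,
                            float tx, float ty, float rotDeg) {
        double x = wx / zoom, y = wy / zoom;
        x -= tx; y -= ty;
        double r = Math.toRadians(-rotDeg);
        double ix = x * Math.cos(r) - y * Math.sin(r);
        double iy = x * Math.sin(r) + y * Math.cos(r);
        return new float[] {(float) ix, (float) iy};
    }

    public static void main(String[] args) {
        float[] w = toWorld(3f, 4f, 2f, 10f, -5f, 30f);
        float[] c = toCamera(w[0], w[1], 2f, 10f, -5f, 30f);
        System.out.println(c[0] + " " + c[1]); // recovers ~3.0 ~4.0
    }
}
```

The constants here (zoom 2, translation (10, -5), rotation 30 degrees) are arbitrary illustration values; the round trip recovers the original point for any of them.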
I'm using bezier curves as paths for my spaceships to travel along when they are coming into dock at a station. I have a simple algorithm to calculate where the ship should be at time t along a cubic bezier curve:
public class BezierMovement {

    public BezierMovement() {
        // start docking straight away in this test version
        initDocking();
    }

    private Vector3 p0;
    private Vector3 p1;
    private Vector3 p2;
    private Vector3 p3;

    private double tInc = 0.001d;
    private double t = tInc;

    protected void initDocking() {
        // get current location
        Vector3 location = getCurrentLocation();
        // get docking point
        Vector3 dockingPoint = getDockingPoint();
        // ship's normalised direction vector
        Vector3 direction = getDirection();
        // docking point's normalised direction vector
        Vector3 dockingDirection = getDockingDirection();

        // scalars to multiply normalised vectors by
        // The higher the number, the "curvier" the curve
        float curveFactorShip = 10000.0f;
        float curveFactorDock = 2000.0f;

        p0 = new Vector3(location.x, location.y, location.z);
        p1 = new Vector3(location.x + (direction.x * curveFactorShip),
                         location.y + (direction.y * curveFactorShip),
                         location.z + (direction.z * curveFactorShip));
        p2 = new Vector3(dockingPoint.x + (dockingDirection.x * curveFactorDock),
                         dockingPoint.y + (dockingDirection.y * curveFactorDock),
                         dockingPoint.z + (dockingDirection.z * curveFactorDock));
        p3 = new Vector3(dockingPoint.x, dockingPoint.y, dockingPoint.z);
    }

    public void incrementPosition() {
        bezier(p0, p1, p2, p3, t, getCurrentLocation());
        // make ship go back and forth along curve for testing
        t += tInc;
        if (t >= 1 || t < 0) {
            tInc = -tInc;
        }
    }

    protected void bezier(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, double t, Vector3 outputVector) {
        double a = (1-t)*(1-t)*(1-t);
        double b = 3*((1-t)*(1-t))*t;
        double c = 3*(1-t)*(t*t);
        double d = t*t*t;
        outputVector.x = a*p0.x + b*p1.x + c*p2.x + d*p3.x;
        outputVector.y = a*p0.y + b*p1.y + c*p2.y + d*p3.y;
        outputVector.z = a*p0.z + b*p1.z + c*p2.z + d*p3.z;
    }
}
The curve start point is the spaceship location, and the end point is the entrance to the docking bay (red dots on the diagram). The spaceship has a normalised vector for its direction, and the docking bay has another normalised vector to indicate the direction the ship must be travelling in so as to be aligned straight on to the docking bay when it arrives (the yellow lines on the diagram).
The green line is a possible path of the spaceship, and the purple circle, the spaceship's radius. Finally, the black box is the bounding box for the station.
I have two problems:
The spaceship is supposed to only be able to turn at r radians per second
The spaceship can't fly through the station
I assume that this translates into:
a). Finding the "curve factors" (control point lengths) that will give a path where the ship doesn't have to turn too tightly
b). Finding the spaceship location/direction from which it can't avoid colliding with the station (and creating a path to guide it out of that state, so it can get on with part a))
However, with both of these, I haven't had much luck finding a solution. I already have code to detect intersections between vectors, boxes, points and spheres, but not bezier curves yet. I also have functions to let me find the distance between two points.
Any help would be most appreciated
Thanks,
James
Finding the exact intersections of a cubic Bezier curve with another shape involves solving a 5th or 6th degree polynomial. More feasible solutions are either using numerical methods, or subdividing the Bezier curve.
protected void subdivide(
        Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3,
        Vector3 q0, Vector3 q1, Vector3 q2, Vector3 q3,
        Vector3 q4, Vector3 q5, Vector3 q6) {
    q0.x = p0.x; q0.y = p0.y; q0.z = p0.z;
    q6.x = p3.x; q6.y = p3.y; q6.z = p3.z;

    q1.x = (p0.x + p1.x) * 0.5;
    q1.y = (p0.y + p1.y) * 0.5;
    q1.z = (p0.z + p1.z) * 0.5;

    q5.x = (p2.x + p3.x) * 0.5;
    q5.y = (p2.y + p3.y) * 0.5;
    q5.z = (p2.z + p3.z) * 0.5;

    double x3 = (p1.x + p2.x) * 0.5;
    double y3 = (p1.y + p2.y) * 0.5;
    double z3 = (p1.z + p2.z) * 0.5;

    q2.x = (q1.x + x3) * 0.5;
    q2.y = (q1.y + y3) * 0.5;
    q2.z = (q1.z + z3) * 0.5;

    // q4 is the midpoint of the inner midpoint and q5 (not q1)
    q4.x = (x3 + q5.x) * 0.5;
    q4.y = (y3 + q5.y) * 0.5;
    q4.z = (z3 + q5.z) * 0.5;

    q3.x = (q2.x + q4.x) * 0.5;
    q3.y = (q2.y + q4.y) * 0.5;
    q3.z = (q2.z + q4.z) * 0.5;
}
q0..q3 becomes the first segment. q3..q6 becomes the second segment.
Subdivide the curve 2-5 times, and use the control-points as a polyline.
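As a sanity check on the split, here is a self-contained sketch (plain double[][] control points rather than the Vector3 class above): q3 must land exactly on the curve at t = 0.5, and the two halves share it as an endpoint:

```java
public class SubdivideDemo {
    // Cubic Bezier evaluation, matching the bezier() method in the question.
    static double[] bezier(double[][] p, double t) {
        double a = (1-t)*(1-t)*(1-t), b = 3*(1-t)*(1-t)*t,
               c = 3*(1-t)*t*t, d = t*t*t;
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = a*p[0][i] + b*p[1][i] + c*p[2][i] + d*p[3][i];
        return out;
    }

    // De Casteljau split at t = 0.5: q[0..3] and q[3..6] are the two halves.
    static double[][] subdivide(double[][] p) {
        double[][] q = new double[7][3];
        for (int i = 0; i < 3; i++) {
            q[0][i] = p[0][i];
            q[6][i] = p[3][i];
            q[1][i] = (p[0][i] + p[1][i]) * 0.5;
            q[5][i] = (p[2][i] + p[3][i]) * 0.5;
            double mid = (p[1][i] + p[2][i]) * 0.5;
            q[2][i] = (q[1][i] + mid) * 0.5;
            q[4][i] = (mid + q[5][i]) * 0.5; // midpoint toward q5, not q1
            q[3][i] = (q[2][i] + q[4][i]) * 0.5;
        }
        return q;
    }

    public static void main(String[] args) {
        // arbitrary illustration control points
        double[][] p = {{0, 0, 0}, {10, 20, 0}, {30, 20, 0}, {40, 0, 0}};
        double[] mid = bezier(p, 0.5);
        double[][] q = subdivide(p);
        System.out.println(mid[0] + "," + mid[1]);     // 20.0,15.0
        System.out.println(q[3][0] + "," + q[3][1]);   // 20.0,15.0
    }
}
```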
The curvature could be calculated at the end-points of each segment:
protected double curvatureAtStart(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3) {
    double dx1 = p1.x - p0.x;
    double dy1 = p1.y - p0.y;
    double dz1 = p1.z - p0.z;
    double A = dx1 * dx1 + dy1 * dy1 + dz1 * dz1;

    double dx2 = p0.x - 2 * p1.x + p2.x;
    double dy2 = p0.y - 2 * p1.y + p2.y;
    double dz2 = p0.z - 2 * p1.z + p2.z;
    double B = dx1 * dx2 + dy1 * dy2 + dz1 * dz2;

    double Rx = (dx2 - dx1 * B / A) / A * 2 / 3;
    double Ry = (dy2 - dy1 * B / A) / A * 2 / 3;
    double Rz = (dz2 - dz1 * B / A) / A * 2 / 3;
    return Math.sqrt(Rx * Rx + Ry * Ry + Rz * Rz);
}
To solve Problem 1, subdivide the curve a few times, and calculate the curvature at each segment's endpoint. This will just be an approximation, but you could selectively subdivide segments with high curvature to get a better approximation in that region.
To solve Problem 2, you could subdivide three curves:
One with velocity zero at both endpoints (C0). This would produce a straight line.
One with velocity zero at the first endpoint, and one at the second (C1).
One with velocity one at the first endpoint, and zero at the second (C2).
If you subdivide all curves in the same way, you could quickly evaluate the control-points of the final curve. You blend the corresponding control-points, parametrized by the velocities at the end-points:
C[i] = C0[i] + (C1[i] - C0[i])*v1 + (C2[i] - C0[i])*v2
With this you could find valid parameter ranges, so that no segment (evaluated as a straight line segment) intersects the station (v1 and v2 can go above 1.0).
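The blending formula above is just a per-component linear combination of corresponding control points, which can be sketched directly (plain double[] points assumed for illustration):

```java
public class BlendDemo {
    // Blend corresponding control points of the three subdivided curves
    // C0, C1, C2, parametrized by the end-point velocities v1 and v2:
    // C[i] = C0[i] + (C1[i] - C0[i])*v1 + (C2[i] - C0[i])*v2
    static double[] blend(double[] c0, double[] c1, double[] c2,
                          double v1, double v2) {
        double[] c = new double[c0.length];
        for (int i = 0; i < c.length; i++)
            c[i] = c0[i] + (c1[i] - c0[i]) * v1 + (c2[i] - c0[i]) * v2;
        return c;
    }

    public static void main(String[] args) {
        // arbitrary illustration points: straight-line C0, and the two
        // unit-velocity variants C1, C2
        double[] c0 = {0, 0}, c1 = {10, 0}, c2 = {0, 10};
        double[] c = blend(c0, c1, c2, 0.5, 0.25);
        System.out.println(c[0] + "," + c[1]); // 5.0,2.5
    }
}
```

Because the combination is linear in v1 and v2, each blended control point traces a plane patch as the velocities vary, which is what makes searching the (v1, v2) parameter space for collision-free curves cheap.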
I've got a problem again. For a couple of days I've been trying to write a camera in Java without gimbal lock. To solve this I'm trying to use quaternions and glMultMatrix from OpenGL. I also use the LWJGL library, especially the classes Matrix4f, Vector4f and Quaternion.
Here is the code which calculates the Quaternions:
int DX = Mouse.getDX(); // delta-mouse-movement
int DY = Mouse.getDY();

Vector4f axisY = new Vector4f();
axisY.set(0, 1, 0, DY);
Vector4f axisX = new Vector4f();
axisX.set(1, 0, 0, DX);

Quaternion q1 = new Quaternion();
q1.setFromAxisAngle(axisX);
Quaternion q2 = new Quaternion();
q2.setFromAxisAngle(axisY);

Quaternion.mul(q1, q2, q1);
Quaternion.mul(camera, q1, camera);
And with this I convert the Quaternion into a matrix:
public Matrix4f quatToMatrix(Quaternion q) {
    double sqw = q.w * q.w;
    double sqx = q.x * q.x;
    double sqy = q.y * q.y;
    double sqz = q.z * q.z;

    Matrix4f m = new Matrix4f();

    // invs (inverse square length) is only required if the quaternion is not already normalised
    double invs = 1 / (sqx + sqy + sqz + sqw);
    m.m00 = (float) (( sqx - sqy - sqz + sqw) * invs); // since sqx + sqy + sqz + sqw = 1/invs
    m.m11 = (float) ((-sqx + sqy - sqz + sqw) * invs);
    m.m22 = (float) ((-sqx - sqy + sqz + sqw) * invs);

    double tmp1 = q.x * q.y;
    double tmp2 = q.z * q.w;
    m.m10 = (float) (2.0 * (tmp1 + tmp2) * invs);
    m.m01 = (float) (2.0 * (tmp1 - tmp2) * invs);

    tmp1 = q.x * q.z;
    tmp2 = q.y * q.w;
    m.m20 = (float) (2.0 * (tmp1 - tmp2) * invs);
    m.m02 = (float) (2.0 * (tmp1 + tmp2) * invs);

    tmp1 = q.y * q.z;
    tmp2 = q.x * q.w;
    m.m21 = (float) (2.0 * (tmp1 + tmp2) * invs);
    m.m12 = (float) (2.0 * (tmp1 - tmp2) * invs);
    return m;
}
A converted Quaternion looks for example like this:
-0.5191307 0.027321965 -0.85425806 0.0
0.048408303 -0.9969446 -0.061303165 0.0
-0.8533229 -0.07317754 0.51622194 0.0
0.0 0.0 0.0 1.0
After this I draw the scene with this code:
java.nio.FloatBuffer fb = BufferUtils.createFloatBuffer(32);
quatToMatrix(camera).store(fb);
GL11.glMultMatrix(fb);
drawPlayer();
My problem now is that the camera either doesn't move, or doesn't move enough, because I only see my player model and nothing else (there is also another cube in the scene, drawn after the player model).
I don't know what exactly is wrong: the drawing, the rotation, or the conversion?
Please help me.
EDIT:
This is my OpenGL initialisation:
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(45.0f, ((float) setting.displayW() / (float) setting.displayH()), 0.1f,10000.0f);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
Any Idea what is wrong?
You've got some errors in your mouse-movement-to-quaternion function (where do you make a quaternion of the X movement?). Besides that, we'd also need to see the rest of your drawing setup code (projection matrix, modelview initialization).
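To illustrate one way of fixing that function: horizontal mouse movement (DX) should become a yaw around the Y axis, vertical movement (DY) a pitch around the X axis, and the pixel deltas need converting to radians before they are fed to an axis-angle constructor (LWJGL's setFromAxisAngle also expects radians). A self-contained sketch with its own quaternion helpers, and an assumed SENSITIVITY tuning constant:

```java
public class MouseQuatDemo {
    // Quaternion stored as {x, y, z, w}, built from a normalized axis
    // and an angle in radians.
    static float[] fromAxisAngle(float ax, float ay, float az, float angle) {
        float s = (float) Math.sin(angle / 2.0);
        return new float[] {ax * s, ay * s, az * s, (float) Math.cos(angle / 2.0)};
    }

    // Hamilton product a * b.
    static float[] mul(float[] a, float[] b) {
        return new float[] {
            a[3]*b[0] + a[0]*b[3] + a[1]*b[2] - a[2]*b[1],
            a[3]*b[1] - a[0]*b[2] + a[1]*b[3] + a[2]*b[0],
            a[3]*b[2] + a[0]*b[1] - a[1]*b[0] + a[2]*b[3],
            a[3]*b[3] - a[0]*b[0] - a[1]*b[1] - a[2]*b[2]
        };
    }

    // Assumed tuning constant converting mouse pixels to radians.
    static final float SENSITIVITY = 0.005f;

    // DX turns the camera around the Y axis (yaw), DY around X (pitch).
    static float[] mouseRotation(int dx, int dy) {
        float[] yaw = fromAxisAngle(0, 1, 0, dx * SENSITIVITY);
        float[] pitch = fromAxisAngle(1, 0, 0, dy * SENSITIVITY);
        return mul(pitch, yaw);
    }

    public static void main(String[] args) {
        float[] q = mouseRotation(0, 0); // no movement -> identity quaternion
        System.out.println(q[0] + " " + q[1] + " " + q[2] + " " + q[3]); // 0.0 0.0 0.0 1.0
    }
}
```

The accumulated camera quaternion should also be re-normalised every so often, since repeated multiplication drifts away from unit length in float precision.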