Quaternions and drawing with glMultMatrix (OpenGL) - Java

I have a problem again. For a couple of days I have been trying to write a camera in Java that avoids gimbal lock. To solve this I am trying to use quaternions together with glMultMatrix from OpenGL. I am also using the LWJGL library, in particular the classes Matrix4f, Vector4f and Quaternion.
Here is the code that calculates the quaternions:
int DX = Mouse.getDX(); // delta mouse movement
int DY = Mouse.getDY();

Vector4f axisY = new Vector4f();
axisY.set(0, 1, 0, DY);
Vector4f axisX = new Vector4f();
axisX.set(1, 0, 0, DX);

Quaternion q1 = new Quaternion();
q1.setFromAxisAngle(axisX);
Quaternion q2 = new Quaternion();
q2.setFromAxisAngle(axisY);

Quaternion.mul(q1, q2, q1);
Quaternion.mul(camera, q1, camera);
And with this I convert the quaternion into a matrix:
public Matrix4f quatToMatrix(Quaternion q) {
    double sqw = q.w * q.w;
    double sqx = q.x * q.x;
    double sqy = q.y * q.y;
    double sqz = q.z * q.z;
    Matrix4f m = new Matrix4f();
    // invs (inverse square length) is only required if the quaternion is not already normalised
    double invs = 1 / (sqx + sqy + sqz + sqw);
    m.m00 = (float) (( sqx - sqy - sqz + sqw) * invs);
    m.m11 = (float) ((-sqx + sqy - sqz + sqw) * invs);
    m.m22 = (float) ((-sqx - sqy + sqz + sqw) * invs);
    double tmp1 = q.x * q.y;
    double tmp2 = q.z * q.w;
    m.m10 = (float) (2.0 * (tmp1 + tmp2) * invs);
    m.m01 = (float) (2.0 * (tmp1 - tmp2) * invs);
    tmp1 = q.x * q.z;
    tmp2 = q.y * q.w;
    m.m20 = (float) (2.0 * (tmp1 - tmp2) * invs);
    m.m02 = (float) (2.0 * (tmp1 + tmp2) * invs);
    tmp1 = q.y * q.z;
    tmp2 = q.x * q.w;
    m.m21 = (float) (2.0 * (tmp1 + tmp2) * invs);
    m.m12 = (float) (2.0 * (tmp1 - tmp2) * invs);
    return m;
}
A converted quaternion looks, for example, like this:
-0.5191307    0.027321965  -0.85425806   0.0
 0.048408303 -0.9969446    -0.061303165  0.0
-0.8533229   -0.07317754    0.51622194   0.0
 0.0          0.0           0.0          1.0
After this I draw the scene with this code:
java.nio.FloatBuffer fb = BufferUtils.createFloatBuffer(32);
quatToMatrix(camera).store(fb);
GL11.glMultMatrix(fb);
drawPlayer();
My problem now is that the camera either doesn't move, or doesn't move enough: I only see my player model and nothing else (there is also another cube in the scene, which I draw after the player model).
I don't know what exactly is wrong. Is it the drawing, the rotation, or the conversion?
Please help me.
EDIT:
This is my OpenGL initialisation:
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(45.0f, ((float) setting.displayW() / (float) setting.displayH()), 0.1f, 10000.0f);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
Any idea what is wrong?

You've got some errors in your mouse-movement-to-quaternion code (where do you make a quaternion of the X movement?). Besides that, we'd also need to see the rest of your drawing setup code (projection matrix, modelview initialization).
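For what it's worth, here is a minimal sketch of how that mapping could look (the sensitivity constant and the multiplication order are assumptions you will want to tune): horizontal movement (DX) becomes a yaw about the Y axis, vertical movement (DY) a pitch about the X axis, and the pixel deltas are scaled into radians.

float sensitivity = 0.005f; // assumed radians-per-pixel scale, tune to taste
int DX = Mouse.getDX();
int DY = Mouse.getDY();

// horizontal movement -> yaw about the Y (up) axis,
// vertical movement -> pitch about the X (right) axis;
// setFromAxisAngle expects the angle (in radians) in the w component
Quaternion yaw = new Quaternion();
yaw.setFromAxisAngle(new Vector4f(0, 1, 0, DX * sensitivity));
Quaternion pitch = new Quaternion();
pitch.setFromAxisAngle(new Vector4f(1, 0, 0, DY * sensitivity));

// combine this frame's rotation, then accumulate into the camera;
// the multiplication order decides between world-axis and local-axis rotation
Quaternion frame = Quaternion.mul(yaw, pitch, null);
Quaternion.mul(frame, camera, camera);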

Related

Computing Mouse Position to 3d Space - OpenGL

I'm trying to create a ray that translates my mouse coordinates into 3D world coordinates.
Cx = Mx / screenWidth * 2 - 1
Cy = -( My / screenHeight * 2 - 1 )
vNear = InverseViewProjectionMatrix * ( Cx, Cy, -1, 1 )
vFar = InverseViewProjectionMatrix * ( Cx, Cy, 1, 1 )
vNear /= vNear.w
vFar /= vFar.w
After testing, the ray's vFar always appears to come from the same general direction.
It seems like I need to add the camera perspective as I would expect vFar to always be behind my camera.
I'm not entirely sure how that should be added in. Here's my test code.
public void mouseToWorldCordinates(Window window, Camera camera, Vector2d mousePosition) {
    float normalised_x = (float) ((mousePosition.x / (window.getWidth() * 2)) - 1);
    float normalised_y = -(float) ((mousePosition.y / (window.getHeight() * 2)) - 1);
    Vector4f mouse = new Vector4f(normalised_x, normalised_y, -1, 1);
    Matrix4f projectionMatrix = new Matrix4f(transformation.getProjectionMatrix()).invert();
    Matrix4f mouse4f = new Matrix4f(mouse, new Vector4f(), new Vector4f(), new Vector4f());
    Matrix4f vNear4f = projectionMatrix.mul(mouse4f);
    Vector4f vNear = new Vector4f();
    vNear4f.getColumn(0, vNear);
    mouse.z = 1f;
    projectionMatrix = new Matrix4f(transformation.getProjectionMatrix()).invert();
    mouse4f = new Matrix4f(mouse, new Vector4f(), new Vector4f(), new Vector4f());
    Matrix4f vFar4f = projectionMatrix.mul(mouse4f);
    Vector4f vFar = new Vector4f();
    vFar4f.getColumn(0, vFar);
    vNear.div(vNear.w);
    vFar.div(vFar.w);
    lines[0] = vNear.x;
    lines[1] = vNear.y;
    lines[2] = vNear.z;
    lines[3] = vFar.x;
    lines[4] = vFar.y;
    lines[5] = vFar.z;
}
The computation of normalised_x and normalised_y is wrong. Normalized device coordinates are in range [-1.0, 1.0]:
float normalised_x = 2.0f * (float)mousePosition.x / (float)window.getWidth() - 1.0f;
float normalised_y = 1.0f - 2.0f * (float)mousePosition.y / (float)window.getHeight();
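Beyond that, note that unprojecting to world space needs the inverse of the combined view-projection matrix (as your vNear/vFar formulas say), not just the inverse projection, and the vectors can be transformed directly instead of being packed into matrix columns. A minimal sketch using JOML, assuming a (hypothetical) getViewMatrix() accessor on your Transformation class:

// invert the combined view-projection matrix once
Matrix4f invViewProj = new Matrix4f(transformation.getProjectionMatrix())
        .mul(transformation.getViewMatrix()) // hypothetical accessor
        .invert();

// unproject both ends of the ray, then do the perspective divide
Vector4f vNear = invViewProj.transform(new Vector4f(normalised_x, normalised_y, -1f, 1f));
Vector4f vFar  = invViewProj.transform(new Vector4f(normalised_x, normalised_y,  1f, 1f));
vNear.div(vNear.w);
vFar.div(vFar.w);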

OpenCV HoughLine only detect one line in image

I am following the OpenCV docs/tutorial to detect lines in an image. However, I only get one of the four similar lines in the image.
Here is the result
And here is my code:
Mat im = Imgcodecs.imread("C:/Users/valer/eclipse-workspace/thesis-application/StartHere/resource/4 lines.JPG");
Mat gray = new Mat(im.rows(), im.cols(), CvType.CV_8SC1);
Imgproc.cvtColor(im, gray, Imgproc.COLOR_RGB2GRAY);
Imgproc.Canny(gray, gray, 50, 150);

Mat lines = new Mat();
Imgproc.HoughLines(gray, lines, 1, Math.PI / 180, 200);

for (int i = 0; i < lines.cols(); i++) {
    double data[] = lines.get(0, i);
    double rho = data[0];
    double theta = data[1];
    double cosTheta = Math.cos(theta);
    double sinTheta = Math.sin(theta);
    double x0 = cosTheta * rho;
    double y0 = sinTheta * rho;
    Point pt1 = new Point(x0 + 10000 * (-sinTheta), y0 + 10000 * cosTheta);
    Point pt2 = new Point(x0 - 10000 * (-sinTheta), y0 - 10000 * cosTheta);
    Imgproc.line(im, pt1, pt2, new Scalar(0, 0, 200), 3);
}

Imgcodecs.imwrite("C:/Users/valer/eclipse-workspace/thesis-application/StartHere/resource/process/line_output.jpg", im);
I have tried playing around with the threshold parameter, but I keep getting the same (and sometimes worse) results.
Would anyone please point out where I am going wrong?
In the lines matrix result, lines are stored by row, not by column.
So lines.rows() gives you the line count, and you can iterate with lines.get(i, 0) to fetch each line.
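A minimal sketch of the corrected loop (only the iteration changes; the drawing code stays as in the question):

for (int i = 0; i < lines.rows(); i++) {
    double[] data = lines.get(i, 0); // one (rho, theta) pair per row
    double rho = data[0];
    double theta = data[1];
    // ... compute pt1/pt2 and draw exactly as before
}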

Billboard facing the camera has wrong rotation near 180 degrees

I've implemented a particle system. I'm drawing the particles' textures on billboards that should be rotated towards the camera.
This works fine except when the angle between the particle-to-camera vector and the normal comes close to 180 degrees. Then the particle starts rotating around itself many times.
The angle is calculated using cos(angle) = dot(a, b) / (length(a) * length(b)); the lengths are both 1 because the vectors are normalized.
The axis is calculated using the cross product of those two vectors.
glDisable(GL_CULL_FACE);

// calculate rotation
Vector3f normal = new Vector3f(0, 0, 1);
Vector3f dir = Vector3f.sub(new Vector3f(GraphicsData.camera.x, GraphicsData.camera.y, GraphicsData.camera.z),
        new Vector3f(x, y, z), null);
if (dir.length() == 0) {
    glEnable(GL_CULL_FACE);
    return;
}
dir = (Vector3f) dir.normalise();

float angle = (float) Math.toDegrees(Math.acos(Vector3f.dot(normal, dir)));
Vector3f rotationAxis = Vector3f.cross(normal, dir, null);
rotationAxis = (Vector3f) rotationAxis.normalise();
System.out.println("Angle: + " + angle + " Axis: " + rotationAxis);

glBindTexture(GL_TEXTURE_2D, ParticleEngine.particleTextures.get(typeId).texture.getTextureID());
glColor4f(1f, 1f, 1f, time >= lifeTime - decayTime ? ((float) lifeTime - (float) time) / ((float) lifeTime - (float) decayTime) : 1f);
shaderEngine.createModelMatrix(new Vector3f(x, y, z),
        new Vector3f(angle * rotationAxis.x, angle * rotationAxis.y, angle * rotationAxis.z),
        new Vector3f(sx, sy, sz));
shaderEngine.loadModelMatrix(shaderEngine.particle);
glCallList(ParticleEngine.particleTextures.get(typeId).displayListId + textureIndex);

glEnable(GL_CULL_FACE);
What am I doing wrong when calculating the rotation?
public static void createModelMatrix(Vector3f pos, Vector3f rot, Vector3f scale) {
    GraphicsData.camera.modelMatrix = new Matrix4f();
    GraphicsData.camera.modelMatrix.setIdentity();
    GraphicsData.camera.modelMatrix.translate(pos);
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.x), new Vector3f(1, 0, 0));
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.y), new Vector3f(0, 1, 0));
    GraphicsData.camera.modelMatrix.rotate((float) Math.toRadians(rot.z), new Vector3f(0, 0, 1));
    GraphicsData.camera.modelMatrix.scale(scale);
}
More of a long comment, or perhaps a partial answer to the problem:
If you are computing the cross product anyway, then use the identities

norm(a × b) = sin(angle) * norm(a) * norm(b)
dot(a, b) = cos(angle) * norm(a) * norm(b)

to determine

angle = atan2(norm(a × b), dot(a, b))
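Applied to the code in the question, that could look roughly like the following sketch (you would still want to guard against a zero-length cross product, i.e. when dir is parallel to the normal, before normalising):

// atan2 stays numerically stable over the whole [0, 180] degree range,
// unlike acos, which loses precision near 0 and 180 degrees
Vector3f cross = Vector3f.cross(normal, dir, null);
float angle = (float) Math.toDegrees(Math.atan2(cross.length(), Vector3f.dot(normal, dir)));
Vector3f rotationAxis = cross.normalise(null);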

Drawing normal faces with triangle strips?

I am having to calculate the normals for a triangle strip and am having an issue where every other triangle is dark and not shaded well. I am using the flat shading model. I can't tell if it has to do with the winding direction. When I look under the triangle strip, I notice that it looks the same as the top, except the dark areas are switched. I think the problem may be that the surface normals I am calculating use shared vertices. If that is the case, would you recommend switching to GL_TRIANGLES? How would you resolve this?
Here is what I have as of now. The Triangle class has the triVerts array in it, which holds three Vert objects. The Vert objects have the variables x, y, and z.
Triangle currentTri = new Triangle();
int triPointIndex = 0;
List<Triangle> triList = new ArrayList<Triangle>();

GL11.glBegin(GL11.GL_TRIANGLE_STRIP);
int counter1 = 0;
float stripZ = 1.0f;
float randY;
for (float x = 0.0f; x < 20.0f; x += 2.0f) {
    if (stripZ == 1.0f) {
        stripZ = -1.0f;
    } else {
        stripZ = 1.0f;
    }
    randY = (Float) randYList.get(counter1);
    counter1 += 1;
    GL11.glVertex3f(x, randY, stripZ);

    Vert currentVert = currentTri.triVerts[triPointIndex];
    currentVert.x = x;
    currentVert.y = randY;
    currentVert.z = stripZ;
    triPointIndex++;
    System.out.println(triList);

    Vector3f normal = new Vector3f();
    float Ux = currentTri.triVerts[1].x - currentTri.triVerts[0].x;
    float Uy = currentTri.triVerts[1].y - currentTri.triVerts[0].y;
    float Uz = currentTri.triVerts[1].z - currentTri.triVerts[0].z;
    float Vx = currentTri.triVerts[2].x - currentTri.triVerts[0].x;
    float Vy = currentTri.triVerts[2].y - currentTri.triVerts[0].y;
    float Vz = currentTri.triVerts[2].z - currentTri.triVerts[0].z;
    normal.x = (Uy * Vz) - (Uz * Vy);
    normal.y = (Uz * Vx) - (Ux * Vz);
    normal.z = (Ux * Vy) - (Uy * Vx);
    GL11.glNormal3f(normal.x, normal.y, normal.z);

    if (triPointIndex == 3) {
        triList.add(currentTri);
        Triangle nextTri = new Triangle();
        nextTri.triVerts[0] = currentTri.triVerts[1];
        nextTri.triVerts[1] = currentTri.triVerts[2];
        currentTri = nextTri;
        triPointIndex = 2;
    }
}
GL11.glEnd();
I had to draw a pyramid with about 8-10 faces and some lighting, and I used triangles so it would be lit properly. For each triangle I calculated the normal. This way it worked. I also think it is important to keep a consistent clockwise/counter-clockwise order when drawing the vertices of each triangle. I hope it helps.
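If you do switch, the per-face pattern might look like this sketch (reusing the question's Triangle and Vert types; the normal is emitted before the face's vertices so flat shading applies it to the whole triangle, and the winding of every triangle should be consistent):

GL11.glBegin(GL11.GL_TRIANGLES);
for (Triangle t : triList) {
    Vert a = t.triVerts[0], b = t.triVerts[1], c = t.triVerts[2];
    // face normal from the two edge vectors; normalise it (or enable
    // GL_NORMALIZE) so lighting intensity is not scaled by triangle size
    float ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;
    float vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;
    GL11.glNormal3f(uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx);
    GL11.glVertex3f(a.x, a.y, a.z);
    GL11.glVertex3f(b.x, b.y, b.z);
    GL11.glVertex3f(c.x, c.y, c.z);
}
GL11.glEnd();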

java 3D rotation with quaternions

I have this method for rotating points in 3D using quaternions, but it does not seem to work properly:
public static ArrayList<Float> rotation3D(ArrayList<Float> points, double angle, int xi, int yi, int zi)
{
    ArrayList<Float> newPoints = new ArrayList<>();
    for (int i = 0; i < points.size(); i += 3)
    {
        float x_old = points.get(i);
        float y_old = points.get(i + 1);
        float z_old = points.get(i + 2);
        double w = Math.cos(angle / 2.0);
        double x = xi * Math.sin(angle / 2.0);
        double y = yi * Math.sin(angle / 2.0);
        double z = zi * Math.sin(angle / 2.0);
        float x_new = (float) ((1 - 2*y*y - 2*z*z) * x_old + (2*x*y + 2*w*z) * y_old + (2*x*z - 2*w*y) * z_old);
        float y_new = (float) ((2*x*y - 2*w*z) * x_old + (1 - 2*x*x - 2*z*z) * y_old + (2*y*z + 2*w*x) * z_old);
        float z_new = (float) ((2*x*z + 2*w*y) * x_old + (2*y*z - 2*w*x) * y_old + (1 - 2*x*x - 2*y*y) * z_old);
        newPoints.add(x_new);
        newPoints.add(y_new);
        newPoints.add(z_new);
    }
    return newPoints;
}
If I make the call rotation3D(list, Math.toRadians(90), 0, 1, 0); where the point is (0,0,10), the output is (-10.0, 0.0, 2.220446E-15), but it should be (-10,0,0), right? Could someone take a look at my code and tell me if there is something wrong?
Here are 4 screens that show the initial position of my object and 3 rotations by -90 degrees (the object is not painted properly; that's a GL issue that I will work on later):
I haven't studied the code, but what you get from it is correct: assuming a left-handed coordinate system, when you rotate the point (0,0,10) by 90 degrees around the y-axis (i.e. (0,1,0)), you end up with (-10,0,0). The stray 2.220446E-15 is just floating-point rounding error, i.e. effectively zero.
If your coordinate system is right-handed, I think you have to reverse the sign of the angle.
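A quick check of that, using the method as posted and the output reported above (a sketch; Arrays.asList is just a convenient way to build the point list):

// the point (0, 0, 10), rotated about the Y axis (0, 1, 0)
ArrayList<Float> p = new ArrayList<>(Arrays.asList(0f, 0f, 10f));
System.out.println(rotation3D(p, Math.toRadians(90), 0, 1, 0));  // [-10.0, 0.0, ~0]: left-handed +90
System.out.println(rotation3D(p, Math.toRadians(-90), 0, 1, 0)); // [10.0, 0.0, ~0]: the right-handed +90 result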
