Rotation matrix for direction vector - java

I've been playing with some algorithms from the internet for a while and I can't seem to get them to work, so I'm tossing the question out here.
I am attempting to render a velocity vector as a line from a point. Drawing the line isn't difficult: just insert a line of length velocity.length into the scene graph. This puts the line centered at the point, pointing along the y-axis. Now it needs the proper rotation and translation.
The translation vector is not difficult to calculate: it is half the velocity vector. The rotation matrix, however, has been exceedingly elusive. Given a direction vector <x, y, z>, what matrix do I need?
Edit 1: Look; if you don't understand the question, you probably won't be able to give me an answer.
Here is what I currently have:
Vector3f translation = new Vector3f();
translation.scale(1f/2f, body.velocity);

Vector3f vec_z = (Vector3f) body.velocity.clone();
vec_z.normalize();

Vector3f vec_y; // reference vector, will correct later
if (vec_z.x == 0 && vec_z.z == 0) {
    vec_y = new Vector3f(-vec_z.y, 0f, 0f); // could be optimized
} else {
    vec_y = new Vector3f(0f, 1f, 0f);
}

Vector3f vec_x = new Vector3f();
vec_x.cross(vec_y, vec_z);
vec_z.normalize();
vec_y.cross(vec_x, vec_z);
vec_y.normalize();
vec_y.negate();

Matrix3f rotation = new Matrix3f(
    vec_z.z, vec_z.x, vec_z.y,
    vec_x.z, vec_x.x, vec_x.y,
    vec_y.z, vec_y.x, vec_y.y
);
arrowTransform3D.set(rotation, translation, 1f);
This is based off of this article. And yes, I've tried the standard rotation matrix layout (vec_x.x, vec_y.x, etc.) and it didn't work. I've been rotating the columns and rows to see if there's any effect.
Edit 2:
Apologies about the rude wording of my comments.
So it looks like there was a combination of two errors: one that House MD pointed out (really bad variable naming: vec_z was actually vec_y, and so on), and the other was that I needed to invert the matrix before passing it off to the rendering engine (transposing was close!). The modified code is:
Vector3f vec_y = (Vector3f) body.velocity.clone();
vec_y.normalize();

Vector3f vec_x; // reference vector, will correct later
if (vec_y.x == 0 && vec_y.z == 0) {
    vec_x = new Vector3f(-vec_y.y, 0f, 0f); // could be optimized
} else {
    vec_x = new Vector3f(0f, 1f, 0f);
}

Vector3f vec_z = new Vector3f();
vec_z.cross(vec_x, vec_y);
vec_z.normalize();
vec_x.cross(vec_z, vec_y);
vec_x.normalize();
vec_x.negate();

Matrix3f rotation = new Matrix3f(
    vec_x.x, vec_x.y, vec_x.z,
    vec_y.x, vec_y.y, vec_y.z,
    vec_z.x, vec_z.y, vec_z.z
);
rotation.invert();
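For completeness, here is a self-contained sketch of the same approach (build an orthonormal basis whose middle column is the direction) using javax.vecmath; whether the result or its inverse/transpose is what your renderer wants depends on its row- versus column-vector convention, which is exactly what bit me above.

import javax.vecmath.Matrix3f;
import javax.vecmath.Vector3f;

// Sketch only: returns a rotation that maps the +y axis onto 'dir'.
static Matrix3f rotationFromDirection(Vector3f dir) {
    Vector3f yAxis = new Vector3f(dir);
    yAxis.normalize();

    // Any reference vector not parallel to yAxis will do.
    Vector3f ref = (Math.abs(yAxis.x) < 0.9f) ? new Vector3f(1f, 0f, 0f)
                                              : new Vector3f(0f, 0f, 1f);

    Vector3f zAxis = new Vector3f();
    zAxis.cross(ref, yAxis);
    zAxis.normalize();

    Vector3f xAxis = new Vector3f();
    xAxis.cross(yAxis, zAxis); // already unit length, since yAxis and zAxis are orthonormal

    // The columns are the rotated basis vectors, so column 1 (the y column) is 'dir'.
    return new Matrix3f(
        xAxis.x, yAxis.x, zAxis.x,
        xAxis.y, yAxis.y, zAxis.y,
        xAxis.z, yAxis.z, zAxis.z);
}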

This should do you

Dupe.
The question there involves getting a rotation to a certain axis, whereas I'm concerned with getting a rotation matrix.
Gee, I wonder if you could convert one to the other?
BTW, your current solution of picking an arbitrary y axis and then reorthogonalising should work fine; it looks bugged though, or at least badly written. 'z_vec' is not a good variable-name for the y-axis. What's with the 'z,x,y' ordering, anyway?
If it still doesn't work, try making random changes until it does - transpose the matrix, negate vectors until you have an even number of sign errors, that kind of thing.
Also your tone of voice comes across as sort-of rude, given that you're asking strangers to spend their time helping you.

Related

Local rotation in a specific implementation (OpenGL and LWJGL)

I am also fiddling with the global/local rotation problem and I cannot put my finger on it. I used an LWJGL book for the implementation of my game, using OpenGL and LWJGL. I am using the JOML library for vectors and matrices.
The modelview matrix construction is below. In the book this originally has no local rotations; I added them myself. The idea is that each object has a global and a local rotation. Those rotations are calculated individually and then multiplied onto the left and right sides of the modelview matrix.
public Matrix4f getModelViewMatrix(Object obj, Matrix4f viewMatrix) {
    Vector3f rotation = obj.getRot();
    Vector3f localRot = obj.getLocalRot();

    Matrix4f localRotMat = new Matrix4f().identity();
    Matrix4f worldRotMat = new Matrix4f().identity();

    localRotMat.rotateLocalX((float) Math.toRadians(localRot.x)).
                rotateLocalY((float) Math.toRadians(localRot.y)).
                rotateLocalZ((float) Math.toRadians(localRot.z));
    worldRotMat.rotateX((float) Math.toRadians(-rotation.x)).
                rotateY((float) Math.toRadians(-rotation.y)).
                rotateZ((float) Math.toRadians(-rotation.z));

    modelViewMatrix.identity().translate(obj.getPos());
    modelViewMatrix.mulLocal(localRotMat);
    modelViewMatrix.mul(worldRotMat);
    modelViewMatrix.scale(obj.getScale());

    Matrix4f viewCurr = new Matrix4f(viewMatrix);
    return viewCurr.mul(modelViewMatrix);
}
This still results in local rotations around the 'wrong' axes. I've seen implementations using quaternions and read about gimbal lock and the like, but the answers are either very specific or too general for me. Furthermore, it would be great if I didn't need a quaternion implementation, as I would possibly have to refactor a lot of code.
Relevant code for the object class:
// Object class
private final Vector3f rot;
private final Vector3f localRot;

public Object() {
    pos = new Vector3f(0, 0, 0);
    scale = 1;
    rot = new Vector3f(0, 0, 0);
    localRot = new Vector3f(0, 0, 0);
}
// getters and setters for above
Can somebody explain what is wrong about the calculation of the rotations for the modelview matrix?
EDIT:
I can rewrite the code as below, which is a bit more in line with the hints from #GeestWagen. However, the 'local rotation' of my object is still displayed as global, so it indeed seems like the same rotation is applied twice. Now I am stuck, because I can't find more documentation on these functions (rotateLocal/rotate).
modelViewMatrix.identity().translate(obj.getPos()).
    rotateLocalX((float) Math.toRadians(-localRot.x)).
    rotateLocalY((float) Math.toRadians(-localRot.y)).
    rotateLocalZ((float) Math.toRadians(-localRot.z)).
    rotateX((float) Math.toRadians(-rotation.x)).
    rotateY((float) Math.toRadians(-rotation.y)).
    rotateZ((float) Math.toRadians(-rotation.z)).
    scale(obj.getScale());
Okay, I finally fixed it. It resulted in me doing a bunch more research. What I came up with was the following:
Vector3f rotation = obj.getRot();
Vector3f localRot = obj.getLocalRot();

Quaternionf rotationQ = new Quaternionf().
    rotateAxis((float) Math.toRadians(-localRot.z), new Vector3f(0f, 0f, 1f)).
    rotateAxis((float) Math.toRadians(-localRot.y), new Vector3f(0f, 1f, 0f)).
    rotateAxis((float) Math.toRadians(-localRot.x), new Vector3f(1f, 0f, 0f)).
    premul(new Quaternionf().
        rotateX((float) Math.toRadians(-rotation.x)).
        rotateY((float) Math.toRadians(-rotation.y)).
        rotateZ((float) Math.toRadians(-rotation.z))
    );

modelViewMatrix.identity().
    translate(obj.getPos()).
    rotate(rotationQ).
    scale(obj.getScale());
This is inspired by, among others, this and this. What confused me a lot was the lack of hits on combining local and global rotations; most of what I was able to find covered only one or the other. The code creates a quaternion and sets its rotations about the x, y, and z axes from the local rotation of the object. It then pre-multiplies that by a quaternion whose rotations are set from the global rotation of the object, and the resulting quaternion is used for the modelview matrix.
Thus, for combining local and global rotations, quaternions seem to be necessary. I thought they were only needed to keep the axes from changing during a local or global rotation, but apparently they should also be used when combining the two.
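In case it helps with the sparse documentation: in JOML, rotateX/Y/Z post-multiply the rotation onto the matrix (so, for column vectors, it is applied to a vertex first, in the object's local frame), while rotateLocalX/Y/Z pre-multiply it (so it is applied last, in the frame the matrix maps into). A minimal sketch of the difference; the variable names are made up for illustration:

Matrix4f a = new Matrix4f()
        .translate(1f, 0f, 0f)
        .rotateY((float) Math.toRadians(90));      // a = T * Ry : vertices are rotated first, then translated
Matrix4f b = new Matrix4f()
        .translate(1f, 0f, 0f)
        .rotateLocalY((float) Math.toRadians(90)); // b = Ry * T : vertices are translated first, then rotated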

LWJGL Mesh to JBullet collider

I'm working on creating a voxel engine in LWJGL 3, I have all the basics down (chunks, mesh rendering, etc).
Now I'm working on adding physics using JBullet. This is my first time using JBullet directly, but I've used Bullet before in other 3D engines.
From here I gathered that all I needed to do to create a collision object the same shape as my mesh was to plug the vertices and indices into a TriangleIndexVertexArray and use that for a BvhTriangleMeshShape.
Here is my code:
float[] coords = mesh.getVertices();
int[] indices = mesh.getIndices();

if (indices.length > 0) {
    IndexedMesh indexedMesh = new IndexedMesh();
    indexedMesh.numTriangles = indices.length / 3;
    indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
    indexedMesh.triangleIndexStride = 3 * Float.BYTES;
    indexedMesh.numVertices = coords.length / 3;
    indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.vertexBase.asFloatBuffer().put(coords);
    indexedMesh.vertexStride = 3 * Float.BYTES;

    TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
    vertArray.addIndexedMesh(indexedMesh);

    boolean useQuantizedAabbCompression = false;
    BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
    CollisionShape collisionShape = meshShape;

    CollisionObject colObject = new CollisionObject();
    colObject.setCollisionShape(collisionShape);
    colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));

    dynamicsWorld.addCollisionObject(colObject);
} else {
    System.err.println("Failed to extract geometry from model.");
}
I know that the vertices and indices are valid as I'm getting them here after drawing my mesh.
This seems to somewhat work, but when I try to drop a cube rigidbody onto the terrain, it seems to collide way above the terrain! (I know that the cube is set up correctly, because if I remove the mesh collider it hits the base ground plane at y=0.)
I thought maybe it was a scaling issue (although I don't see how that could be), so I tried changing:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f))); to:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 0.5f)));
But after changing the scale from 1 it acted like the mesh collider didn't exist.
It's hard to find any resources or code for JBullet surrounding mesh collision, and I've been working on this for almost 2 days, so I'm hoping maybe some of you people who have done it before can help me out :)
Update 1:
I created an implementation of the IDebugDrawer so I could draw the debug information in the scene.
To test it I ran it with just a basic ground plane and a falling cube. I noticed that when the cube is falling the aabb matches the cube size, but when it hits the floor the aabb becomes significantly larger than it was.
I'm going to assume that this is normal Bullet behavior due to collision bouncing, and look at it later, as it doesn't affect my current problem.
I re-enabled the generation of the colliders from the chunk meshes, and saw this:
It appears that the aabb visualization of the chunk is a lot higher than the actual chunk (I know my y positioning of the overall collision object is correct).
I'm going to try to figure out if I can draw the actual collision mesh or not.
Update 2:
As far as I can see looking at the source, the mesh of the colliders should be drawn in debug, so I'm not sure why it isn't.
I tried changing the box rigidbody to a sphere, and it actually rolled across the top of the visualized aabb for the terrain collider. It just rolled flat, though, and didn't go up or down where there were hills or dips in the terrain, so it was obviously just rolling across the flat top of the aabb.
So after adding in the debug drawer, I was confused as to why the aabb was twice as large as it should have been.
After spending hours trying little adjustments, I noticed something odd - there was a 0.25 gap between the collider and the edge of the chunk. I proceeded to zoom out and surprisingly noticed this:
There is an extra row and column of colliders? No, that doesn't make sense; there should be 5x5 colliders to match the 5x5 chunks.
Then I counted blocks and realized that the colliders were spanning 64 blocks (my chunks are 32x32!).
I quickly realized that this was a scaling issue, and after adding
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
to scale the colliders down by half, everything fit and worked! My "sphere" rolled and came to a stop where there was a hill in the terrain, like it should.
My full code for converting an LWJGL mesh to a JBullet mesh collider is:
public void addMesh(org.joml.Vector3f position, Mesh mesh){
    float[] coords = mesh.getVertices();
    int[] indices = mesh.getIndices();

    if (indices.length > 0) {
        IndexedMesh indexedMesh = new IndexedMesh();
        indexedMesh.numTriangles = indices.length / 3;
        indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Integer.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.triangleIndexBase.rewind();
        indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
        indexedMesh.triangleIndexStride = 3 * Integer.BYTES;
        indexedMesh.numVertices = coords.length / 3;
        indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.vertexBase.rewind();
        indexedMesh.vertexBase.asFloatBuffer().put(coords);
        indexedMesh.vertexStride = 3 * Float.BYTES;

        TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
        vertArray.addIndexedMesh(indexedMesh);

        boolean useQuantizedAabbCompression = false;
        BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
        meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));

        CollisionShape collisionShape = meshShape;
        CollisionObject colObject = new CollisionObject();
        colObject.setCollisionShape(collisionShape);
        colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));

        dynamicsWorld.addCollisionObject(colObject);
    } else {
        System.err.println("Failed to extract geometry from model.");
    }
}
Update 1:
Even though the scaling was the fix for said problem, it caused me to look deeper and realize that I was mistakenly using the block size (0.5f) as the mesh scaling factor in my mesh view matrix. Changing that scale to 1, as it should be, fixed it.
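For anyone reproducing the setup, here is a rough sketch of how the falling test cube could be created with JBullet; the exact construction isn't shown above, so treat the mass, size and start position as placeholder assumptions.

// Sketch only: a 1x1x1 dynamic cube dropped from y = 20.
BoxShape cubeShape = new BoxShape(new javax.vecmath.Vector3f(0.5f, 0.5f, 0.5f));

float mass = 1f;
javax.vecmath.Vector3f inertia = new javax.vecmath.Vector3f(0f, 0f, 0f);
cubeShape.calculateLocalInertia(mass, inertia);

Transform startTransform = new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1),
        new javax.vecmath.Vector3f(0f, 20f, 0f), 1f));
DefaultMotionState motionState = new DefaultMotionState(startTransform);

RigidBodyConstructionInfo info = new RigidBodyConstructionInfo(mass, motionState, cubeShape, inertia);
RigidBody cubeBody = new RigidBody(info);
dynamicsWorld.addRigidBody(cubeBody);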

Issues with Raytracing triangles (orientation and coloring)

EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from #TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach. The new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
Addressing the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe it is an issue with the shadow rays that I am sending out. I suspect that it has to do with the angle of the triangle/plane relative to the camera.
Here is my process for ray-tracing triangles, which is based off of the process on this website.
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;

    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }

    double intersectionDistance = numerator / denominator;

    //intersectionDistance < 0 means the "intersection" is behind the ray (pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    //Pre-compute unique five dot-products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);
    double denominator = (uv * uv) - (uu * vv);

    //get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think that I am having some issue with my coloring, and that it has to do with the normals of the various triangles. Here is the equation I am considering when building my lighting model for spheres and triangles:
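c = cr*ca + cr*cl*max(0, n·l) + cl*cp*max(0, e·r)^p
where cr is the material (diffuse) colour, ca the ambient light colour, cl the light colour, cp the specular highlight colour, n the surface normal, l the direction to the light, e the eye/view direction, r the reflection direction, and p the Phong exponent.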
Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr * ca + cr * cl * max(0, n dot l) + cl * cp * max(0, e dot r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);

    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r

    double visibilityTerm = isInShadow ? 0 : 1;

    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);

    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light direction is -1 for the yellow triangle and about -0.707 for the blue triangle, so I'm not sure whether the normal pointing the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal))), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a, b, c), and the edges (u, v) are computed as a-b and c-b respectively (those are also used for calculating the plane/triangle normal). A Vector is made up of an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from #TheVee, this is what I have at the end:
There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here was fixed.
It is a common problem in ray tracing of scenes that include planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light is incident on the object from the direction toward which the outward-facing normal is pointing. This can be true only for half the possible orientations of your triangle, and you've had the bad luck of orienting it with its normal facing away from the light.
Technically speaking, in the physical world your triangle would not have zero volume; it would be composed of some thin layer of material, and on either side it would have a proper normal that points outward. Assigning a single normal is a simplification that's fair to take because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or hitting a surface that could not possibly be in its way; that's why they give an erroneous result. The negative value will subtract light from other sources and, depending on the magnitude and implementation, may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either the one we're using or its negative, we can fix both cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top of it use the additional information (the sign) to choose between two different materials (front and back) that you may provide if desired. Alternatively, they can be set to not draw the back faces of solid objects at all, because those will be overdrawn by front faces anyway (known as face culling), saving about half of the numerical work.
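Concretely, in the question's calculateIlluminationModel() that preventive absolute value is a one-line change (sketch only; if the normal really does face away from the light, the reflection vector built from it should also be recomputed from the flipped normal):

// Preventive absolute value: a back-facing normal can no longer subtract light.
double angleBetweenLightAndNormal = Math.abs(directionToLight.dotProduct(normal));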

Lambertian shader still not working

Yesterday I posted this:
Lambertian Shader not working
My shader is still not working; I've done some debugging to try to find the reason. When I run my program and hit a sphere, shader.shade(renderable.colour) is called and this code is run:
public Colour shade(Intersection intersection, Light light){
    Vector3D lightDirection = light.location.subtract(intersection.point);
    lightDirection.normalise();

    Normal normal = intersection.normal;
    normal.normalise();

    Colour finalColour = new Colour();
    float lambCoef = (float) normal.dot(lightDirection);
    if (lambCoef > 0) {
        finalColour.r = Math.max(0.0f, diffuseColour.r * lambCoef * light.intensity.r);
        finalColour.g = Math.max(0.0f, diffuseColour.g * lambCoef * light.intensity.g);
        finalColour.b = Math.max(0.0f, diffuseColour.b * lambCoef * light.intensity.b);
    }
    return finalColour;
}
I'm getting different values for lambCoef each time, but they don't vary by much. For example, for the red sphere, for pixels about 20 pixels apart vertically, I get:
0.9446402
0.94463843
0.9446326
0.94462925
To get the normal for the sphere I use:
public Normal getNormalAt(Vector3D point) {
    Normal normal = new Normal(point);
    normal = normal.subtract(center);
    normal = normal.multiply(-1);
    normal.normalise();
    return normal;
}
which seems to work.
Then for my dot and cross code I use:
public double dot(Vector3D vector){
    return x*vector.x + y*vector.y + z*vector.z;
}

public double dot(Point3D point){
    return x*point.x + y*point.y + z*point.z;
}

public double dot(Normal normal){
    return x*normal.x + y*normal.y + z*normal.z;
}

public Vector3D cross(Vector3D vector) {
    Vector3D crossedVector = new Vector3D();
    crossedVector.x = y*vector.z - z*vector.y;
    crossedVector.y = z*vector.x - x*vector.z;
    crossedVector.z = x*vector.y - y*vector.x;
    return crossedVector;
}
Which also seems to be correct.
Any help would really be appreciated, and I'll be happy to provide more info if needed.
I'm now getting this sort of image:
Which sort of makes sense since the plane is at a much shallower angle to the spheres. It's still wrong though.
+1 for using discrete Normal, Point, and Vector classes. In your shade() method, it doesn't look to me like you're accounting for the distance of the light source to the intersection point. Basically, you want intersection points that are further away from a given light to receive less light than points that are closer to the light. You can fudge this for point light sources by adjusting the light's intensity by a factor of c / (distance * distance), where c is an empirically-determined lightness correction factor (start with 1 and then adjust upward if the result is too dark) and distance is the distance between the point light source and your intersection point. When I say adjust I mean add that term to the finalColour calculation, not actually change the value of light.intensity.
Once you get that working, you may want to think about sampling light sources in terms of PDFs instead of using the 1/d*d hack.
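A minimal sketch of that attenuation inside the question's shade() method; the correction factor c and the reuse of the existing dot() for the distance are assumptions, not part of the original code:

// Sketch only: attenuate the contribution of a point light by c / d^2.
Vector3D toLight = light.location.subtract(intersection.point);
double distance = Math.sqrt(toLight.dot(toLight));   // no length() helper assumed
double c = 1.0;                                      // empirical lightness correction factor
double attenuation = c / (distance * distance);

finalColour.r = Math.max(0.0f, (float) (diffuseColour.r * lambCoef * light.intensity.r * attenuation));
finalColour.g = Math.max(0.0f, (float) (diffuseColour.g * lambCoef * light.intensity.g * attenuation));
finalColour.b = Math.max(0.0f, (float) (diffuseColour.b * lambCoef * light.intensity.b * attenuation));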

Rotation of cube by recalculating each vertex

Let me explain my problem!
I rotate the cube by recalculating the individual coordinates of each vertex at every rotation step, and I must say I'm getting excellent results. If the cube rotates around a single axis everything works perfectly; however, when it rotates around two or three axes the vertices keep drifting outward, so that after a bit of time the cube has grown to a stratospheric size.
Below is the code for the rotation, which is where I think the problem is hiding.
Multiplying the components of each vertex by the respective rows of the rotation matrix makes the vertices drift off their trajectory, but I do not understand why.
//for each cube
for (contatoreOnDraw = 0; contatoreOnDraw < numberOfCube; contatoreOnDraw++)
{
    x = contatoreOnDraw * 3;
    y = (contatoreOnDraw * 3) + 1;
    z = (contatoreOnDraw * 3) + 2;

    gl.glPushMatrix();
    gl.glTranslatef(translation[row][x], translation[row][y], translation[row][z]);

    float angle = 2;

    //Rotation matrix 3x3
    c = (float) Math.cos(angle * (Math.PI / 180));
    s = (float) Math.sin(angle * (Math.PI / 180));
    rotation[0][0] = (rX*rX*(1-c)) + c;
    rotation[0][1] = (rX*rY*(1-c)) - rZ*s;
    rotation[0][2] = (rX*rZ*(1-c)) + rY*s;
    rotation[1][0] = (rY*rX*(1-c)) + rZ*s;
    rotation[1][1] = (rY*rY*(1-c)) + c;
    rotation[1][2] = (rY*rZ*(1-c)) - rX*s;
    rotation[2][0] = (rX*rZ*(1-c)) - rY*s;
    rotation[2][1] = (rY*rZ*(1-c)) + rX*s;
    rotation[2][2] = (rZ*rZ*(1-c)) + c;

    //Updating each vertex component
    for (int i = 0; i < 70; i = i + 3)
    {
        vX_tmp = (rotation[0][0]*cubes[contatoreOnDraw].getVertices(i)) + (rotation[0][1]*cubes[contatoreOnDraw].getVertices(i+1)) + (rotation[0][2]*cubes[contatoreOnDraw].getVertices(i+2));
        vY_tmp = (rotation[1][0]*cubes[contatoreOnDraw].getVertices(i)) + (rotation[1][1]*cubes[contatoreOnDraw].getVertices(i+1)) + (rotation[1][2]*cubes[contatoreOnDraw].getVertices(i+2));
        vZ_tmp = (rotation[2][0]*cubes[contatoreOnDraw].getVertices(i)) + (rotation[2][1]*cubes[contatoreOnDraw].getVertices(i+1)) + (rotation[2][2]*cubes[contatoreOnDraw].getVertices(i+2));

        cubes[contatoreOnDraw].setVertices(i,   vX_tmp);
        cubes[contatoreOnDraw].setVertices(i+1, vY_tmp);
        cubes[contatoreOnDraw].setVertices(i+2, vZ_tmp);
    }

    cubes[contatoreOnDraw].draw(gl);
    gl.glPopMatrix();
}
Thanks to all!!
I've found the solution: I had not normalized the vector (rX, rY, rZ).
After normalization everything works perfectly!
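For reference, the missing normalization is just a few lines before the matrix is built (rX, rY, rZ are the axis components used above):

// Normalize the rotation axis; with an unnormalized axis the Rodrigues matrix
// also scales the vertices, which is why the cube kept growing.
float len = (float) Math.sqrt(rX * rX + rY * rY + rZ * rZ);
if (len > 1e-6f) {
    rX /= len;
    rY /= len;
    rZ /= len;
}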
#datenwolf:
Thanks for the reply. I've already done the same operation on the GPU, but I want to execute it on the CPU for another reason!
