I have been trying to make some 3D graphics with JOGL.
When rendering two 2D boxes in 3D space, the FPS can drop below 15
when I move the camera closer. I don't know exactly what is dragging the performance down so much.
float camX = (float) Main.cameraX;
float camY = (float) Main.cameraY;
float camZ = (float) Main.cameraZ;
float camRotX = (float) Main.cameraRotX;
float camRotY = (float) Main.cameraRotY;
float camRotZ = (float) Main.cameraRotZ;
gl.glRotatef(camRotX, 1, 0, 0);
gl.glRotatef(camRotY, 0, 1, 0);
gl.glRotatef(camRotZ, 0, 0, 1);
gl.glTranslatef(x - camX, y - camY, z - camZ);
gl.glVertexPointer(3, GL2.GL_FLOAT, 0, verticesBuffer);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDrawArrays(GL2.GL_TRIANGLES, 0, vertices.length / 3);
gl.glTranslatef(-x + camX, -y + camY, -z + camZ);
gl.glRotatef(-camRotZ, 0, 0, 1);
gl.glRotatef(-camRotY, 0, 1, 0);
gl.glRotatef(-camRotX, 1, 0, 0);
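For what it's worth, the manual undo of the rotation and translation at the end can also be written with the matrix stack; a minimal sketch of the same per-object block, using the same fields as above:
// Sketch only: push/pop the modelview matrix instead of reversing each call.
gl.glPushMatrix();
gl.glRotatef(camRotX, 1, 0, 0);
gl.glRotatef(camRotY, 0, 1, 0);
gl.glRotatef(camRotZ, 0, 0, 1);
gl.glTranslatef(x - camX, y - camY, z - camZ);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL2.GL_FLOAT, 0, verticesBuffer);
gl.glDrawArrays(GL2.GL_TRIANGLES, 0, vertices.length / 3);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glPopMatrix();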
I am working on an Android application using OpenGL.
In a database, I store the rotation of objects as local Euler rotations applied in x, y, then z order, but in the editor I would like to apply a global rotation about the global x, y or z axis. I took two approaches, outlined below.
I've simplified these methods to remove irrelevant Android code.
I've tried the matrix approach, but the object appears to rotate about an axis not aligned with the global x, y or z axis after the method is called a second time. I've read that floating-point error builds up over time, making the rotation matrix "numerically unstable", which I assume is what's happening in the first method.
// rotAxis = 0 means rotation around the X global axis
// rotAxis = 1 means rotation around the Y global axis
// rotAxis = 2 means rotation around the Z global axis
public void executeRotationWithMatrix(float rotAngle, int rotAxis){
float[] rotationMatrix = new float[16];
// Matrix class is in android.opengl
Matrix.setIdentityM(rotationMatrix, 0);
switch (rotAxis){
case 0:
Matrix.rotateM(rotationMatrix, 0, rotAngle, 1.f, 0.f, 0.f);
break;
case 1:
Matrix.rotateM(rotationMatrix, 0, rotAngle, 0.f, 1.f, 0.f);
break;
case 2:
Matrix.rotateM(rotationMatrix, 0, rotAngle, 0.f, 0.f, 1.f);
break;
}
float rotx = getLocalRotationOfObjectOnX(); // Pseudocode
float roty = getLocalRotationOfObjectOnY(); // Pseudocode
float rotz = getLocalRotationOfObjectOnZ(); // Pseudocode
Matrix.rotateM(rotationMatrix, 0, rotx, 1.f, 0.f, 0.f);
Matrix.rotateM(rotationMatrix, 0, roty, 0.f, 1.f, 0.f);
Matrix.rotateM(rotationMatrix, 0, rotz, 0.f, 0.f, 1.f);
Vector3f rotationVector = rotationMatrixToEulerAngles(rotationMatrix);
saveLocalRotationOfObjectOnX(rotationVector.x); // Pseudocode
saveLocalRotationOfObjectOnY(rotationVector.y); // Pseudocode
saveLocalRotationOfObjectOnZ(rotationVector.z); // Pseudocode
}
In the second method, I tried the quaternion approach, applying the rotations as quaternions, but I get even stranger results whenever I use it.
// rotAxis = 0 means rotation around the X global axis
// rotAxis = 1 means rotation around the Y global axis
// rotAxis = 2 means rotation around the Z global axis
public void executeRotationWithQuat(float rotAngle, int rotAxisInd){
Quat4f rotationQuat = new Quat4f(0, 0, 0, 1);
Quat4f tempQuat = new Quat4f(0, 0, 0, 1);
switch (rotAxisInd){
case 0:
QuaternionUtil.setRotation(tempQuat, new Vector3f(1, 0, 0), rotAngle);
break;
case 1:
QuaternionUtil.setRotation(tempQuat, new Vector3f(0, 1, 0), rotAngle);
break;
case 2:
QuaternionUtil.setRotation(tempQuat, new Vector3f(0, 0, 1), rotAngle);
break;
}
tempQuat.normalize();
rotationQuat.mul(tempQuat);
rotationQuat.normalize();
float rotx = getLocalRotationOfObjectOnX(); // Pseudocode
float roty = getLocalRotationOfObjectOnY(); // Pseudocode
float rotz = getLocalRotationOfObjectOnZ(); // Pseudocode
QuaternionUtil.setRotation(tempQuat, new Vector3f(1, 0, 0), rotx); tempQuat.normalize();
rotationQuat.mul(tempQuat);
rotationQuat.normalize();
QuaternionUtil.setRotation(tempQuat, new Vector3f(0, 1, 0), roty); tempQuat.normalize();
rotationQuat.mul(tempQuat);
rotationQuat.normalize();
QuaternionUtil.setRotation(tempQuat, new Vector3f(0, 0, 1), rotz); tempQuat.normalize();
rotationQuat.mul(tempQuat);
rotationQuat.normalize();
float qw = rotationQuat.w;
float qx = rotationQuat.x;
float qy = rotationQuat.y;
float qz = rotationQuat.z;
float[] rotationMatrix = new float[]{
1.0f - 2.0f*qy*qy - 2.0f*qz*qz, 2.0f*qx*qy - 2.0f*qz*qw, 2.0f*qx*qz + 2.0f*qy*qw, 0.0f,
2.0f*qx*qy + 2.0f*qz*qw, 1.0f - 2.0f*qx*qx - 2.0f*qz*qz, 2.0f*qy*qz - 2.0f*qx*qw, 0.0f,
2.0f*qx*qz - 2.0f*qy*qw, 2.0f*qy*qz + 2.0f*qx*qw, 1.0f - 2.0f*qx*qx - 2.0f*qy*qy, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
};
Vector3f rotationVector = rotationMatrixToEulerAngles(rotationMatrix);
saveLocalRotationOfObjectOnX(rotationVector.x); // Pseudocode
saveLocalRotationOfObjectOnY(rotationVector.y); // Pseudocode
saveLocalRotationOfObjectOnZ(rotationVector.z); // Pseudocode
}
The following are helper methods used in the above two methods.
public Vector3f rotationMatrixToEulerAngles(float[] m){
float sy = (float)Math.sqrt(m[6]*m[6] + m[10]*m[10]);
float x, y, z;
x = (float)Math.atan2(m[6], m[10]);
y = (float)Math.atan2(-m[2], sy);
z = (float)Math.atan2(m[1], m[0]);
//convert angles from radians to degrees
float conFactor = (float)(180/Math.PI);
x *= conFactor;
y *= conFactor;
z *= conFactor;
return new Vector3f(x, y, z);
}
public class QuaternionUtil {
public static void setRotation(Quat4f q, Vector3f axis, float angle) {
float d = axis.length();
assert (d != 0f);
float s = (float)Math.sin(angle * 0.5f) / d;
q.set(axis.x * s, axis.y * s, axis.z * s, (float) Math.cos(angle * 0.5f));
}
}
public class Vector3f{
public final float length() {
return (float)Math.sqrt((double)(this.x * this.x + this.y * this.y + this.z * this.z));
}
}
Any help would be greatly appreciated!
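For reference, here is a minimal sketch of the quaternion-accumulation idea, reusing the Quat4f, Vector3f and QuaternionUtil helpers shown above; orientation is a hypothetical persistent field holding the object's current rotation, and the angle is in radians:
// Sketch only: keep the accumulated orientation as a quaternion and convert
// to Euler angles only when saving. "orientation" starts as the identity.
Quat4f orientation = new Quat4f(0, 0, 0, 1);

public void rotateAroundGlobalAxis(float angleRad, Vector3f globalAxis) {
    Quat4f global = new Quat4f(0, 0, 0, 1);
    QuaternionUtil.setRotation(global, globalAxis, angleRad);
    global.mul(orientation);     // a global-axis rotation composes on the left
    orientation.set(global);     // assumes Quat4f.set(...) copies the components
    orientation.normalize();     // guard against round-off drift
}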
I have a problem with an orthographic projection and an isometric camera view: objects drawn far away from the scene center (0, 0, 0) are clipped, even though they are placed within the view.
In the onSurfaceChanged method I have:
GLES20.glViewport(0, 0, width, height);
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
int useForOrtho = Math.min(width, height);
mProjectionMatrix = createOrthoProjectionMatrix(left, right, top, bottom, near, far);
Matrix.orthoM(mViewMatrix, 0,
-useForOrtho / 2,
useForOrtho / 2,
-useForOrtho / 2,
useForOrtho / 2, 0.01f, 100.0f);
Matrix.rotateM(mViewMatrix, 0, 45, 0.0f, 0.0f, 1.0f);
Matrix.rotateM(mViewMatrix, 0, 35, 1.0f, -1.0f, 0.0f);
Here is the function for the projection matrix:
public float[] createOrthoProjectionMatrix(float left, float right, float top, float bottom, float near, float far)
{
float[] m = new float[16];
m[0] = 2.0f / (right - left);
m[1] = 0.0f;
m[2] = 0.0f;
m[3] = 0.0f;
m[4] = 0.0f;
m[5] = 2.0f / (top - bottom);
m[6] = 0.0f;
m[7] = 0.0f;
m[8] = 0.0f;
m[9] = 0.0f;
m[10] = -2.0f / (far - near);
m[11] = 0.0f;
m[12] = -(right + left ) / (right - left );
m[13] = -(top + bottom) / (top - bottom);
m[14] = -(far + near ) / (far - near );
m[15] = 1.0f;
return m;
}
Where can I control how far out along x, y and z, away from (0, 0, 0), objects can be drawn without being clipped? The problem objects aren't placed "out of view"; rather, they fall outside the current render volume - out of range - and have their triangles clipped.
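For reference, the render volume of an orthographic projection is exactly the box described by its left/right/bottom/top/near/far arguments, so enlarging those values is what lets geometry sit further from the origin without being clipped. A sketch with placeholder bounds, using the same Matrix.orthoM call as above:
// Sketch only: the ortho volume spans left..right in x, bottom..top in y,
// and z = -near .. -far in eye space; anything outside that box is clipped.
// The numbers below are placeholders - scale them to enclose your scene.
float half = useForOrtho / 2.0f;
Matrix.orthoM(mProjectionMatrix, 0,
        -half, half,          // left, right
        -half, half,          // bottom, top
        -500.0f, 500.0f);     // near, far (a negative near keeps geometry with positive eye-space z visible)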
I'm using vertex buffers (VBOs) in JOGL. I have a few hundred thousand triangles. Each triangle contains:
9 floats for the vertices - 3 for each vertex
3 floats for the surface normal
3 floats for the colors.
I can't seem to display the triangles or the colors. I know the normals are being calculated correctly.
This doesn't work:
gl.glDrawArrays(GL2.GL_TRIANGLES, 0, vertxcnt);
But the snippet below works, although I don't see the colors. So I know the points making up the triangles are correct.
gl.glDrawArrays(GL2.GL_POINTS, 0, vertxcnt);
So, if the points and the normals are being calculated correctly, my thinking is that I'm going wrong in the render(gl) function. The code for that is below. What am I doing wrong? I can't post an SSCCE right now due to the complexity, but I would like to know if anything is glaringly wrong.
private void render(GL2 gl) {
// VBO
// Enable Pointers
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, VBOVertices[0]); // Set Pointers To Our Data
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY); // Enable Vertex Arrays
gl.glVertexPointer(3, GL.GL_FLOAT, BufferUtil.SIZEOF_FLOAT * 15, 0); //15 = 9 vertices of triangles + 3 normal + 3 colors
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL.GL_FLOAT, BufferUtil.SIZEOF_FLOAT * 15, BufferUtil.SIZEOF_FLOAT * 9);
gl.glEnableClientState(GL2.GL_COLOR_ARRAY);
gl.glColorPointer(3, GL.GL_FLOAT, BufferUtil.SIZEOF_FLOAT * 15, BufferUtil.SIZEOF_FLOAT * 12);
// Render
// Draw All Of The Triangles At Once
gl.glPointSize(4);
gl.glDrawArrays(GL2.GL_POINTS, 0, vertxcnt);
// Disable Pointers
// Disable Vertex, Normals and Color Arrays
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_NORMAL_ARRAY);
gl.glDisableClientState(GL2.GL_COLOR_ARRAY);
}
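As a point of reference, the stride passed to the gl*Pointer calls is the byte distance between two consecutive vertices, so an interleaved buffer is normally packed per vertex rather than per triangle. A minimal sketch assuming 3 position + 3 normal + 3 color floats per vertex (9 floats, hence a 9 * SIZEOF_FLOAT stride):
// Sketch only: per-vertex interleaved layout {px,py,pz, nx,ny,nz, r,g,b}.
int strideBytes = 9 * BufferUtil.SIZEOF_FLOAT;
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, VBOVertices[0]);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, strideBytes, 0);
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL.GL_FLOAT, strideBytes, 3 * BufferUtil.SIZEOF_FLOAT);
gl.glEnableClientState(GL2.GL_COLOR_ARRAY);
gl.glColorPointer(3, GL.GL_FLOAT, strideBytes, 6 * BufferUtil.SIZEOF_FLOAT);
gl.glDrawArrays(GL2.GL_TRIANGLES, 0, vertxcnt);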
Here are the init and display functions.
@Override
public void init(GLAutoDrawable drawable) {
GL2 gl = drawable.getGL().getGL2();
gl.glClearColor(0.0f, 0.0f, 6.0f, 0.5f);
gl.glClearDepth(1.0f); // Depth Buffer Setup
gl.glDepthFunc(GL.GL_LEQUAL); // The type of depth testing (less or equal)
gl.glEnable(GL.GL_DEPTH_TEST); // Enable Depth Testing
gl.glDepthFunc(GL2.GL_LESS);
gl.glEnable(GL2.GL_LIGHTING);
gl.glEnable(GL2.GL_LIGHT0);
gl.glEnable(GL2.GL_AUTO_NORMAL);
gl.glEnable(GL2.GL_NORMALIZE);
gl.glEnable(GL2.GL_CULL_FACE);
gl.glFrontFace(GL2.GL_CCW);
gl.glCullFace(GL2.GL_BACK);
gl.glHint(GL2.GL_PERSPECTIVE_CORRECTION_HINT, GL2.GL_NICEST);
gl.glShadeModel(GL2.GL_SMOOTH);
buildVBOs(gl);
}
@Override
public void display(GLAutoDrawable drawable) {
GL2 gl = drawable.getGL().getGL2();
gl.glClearColor(.0f, .0f, .2f, 0.9f);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
glu.gluLookAt(45, 0, 0, 0, 0, 0, 0.0, 1.0, 0.0);
float ma_x = (float) getMax(fx0);
float mi_x = (float) getMin(fx0);
float tr_x = (ma_x + mi_x) / 2;
float ma_y = (float) getMax(fy0);
float mi_y = (float) getMin(fy0);
float tr_y = (ma_y + mi_y) / 2;
float ma_z = (float) getMax(fz0);
float mi_z = (float) getMin(fz0);
float tr_z = (ma_z + mi_z) / 2;
gl.glScalef(scaleFac, scaleFac, scaleFac);
gl.glRotatef(rotFac, 0, 1, 0);
gl.glTranslatef(-tr_x, -tr_y, -tr_z);
for (int i = 0; i < 30; i++) {
render(gl);
gl.glRotatef(12, 0, 0, 1);
}
}
private void createVects(double ang) {
int cnt = fx0.size();
for (int i = 0; i < cnt - 1; i++) {
// Triangle 1 and 2 [Top]
float x0 = (float) (fx0.get(i) * Math.cos(ang) - fy0.get(i) * Math.sin(ang));
float y0 = (float) (fx0.get(i) * Math.sin(ang) + fy0.get(i) * Math.cos(ang));
float z0 = fz0.get(i).floatValue();
Vect3D v0 = new Vect3D(x0, y0, z0);
fvert.add(v0); // 0
float x1 = (float) (fx0.get(i + 1) * Math.cos(ang) - fy0.get(i + 1) * Math.sin(ang));
float y1 = (float) (fx0.get(i + 1) * Math.sin(ang) + fy0.get(i + 1) * Math.cos(ang));
float z1 = fz0.get(i + 1).floatValue();
Vect3D v1 = new Vect3D(x1, y1, z1);
fvert.add(v1);// 1
float x2 = (float) (fx1.get(i + 1) * Math.cos(ang) - fy1.get(i + 1) * Math.sin(ang));
float y2 = (float) (fx1.get(i + 1) * Math.sin(ang) + fy1.get(i + 1) * Math.cos(ang));
float z2 = fz1.get(i + 1).floatValue();
Vect3D v2 = new Vect3D(x2, y2, z2);
fvert.add(v2);// 2
Vect3D n0 = calcNormal(v0, v1, v2);
fnorm.add(n0);
// VBO
vertices.put(x0); //vertices of the triangle
vertices.put(y0);
vertices.put(z0);
vertices.put(x1);
vertices.put(y1);
vertices.put(z1);
vertices.put(x2);
vertices.put(y2);
vertices.put(z2);
vertices.put(n0.x); // normals
vertices.put(n0.y);
vertices.put(n0.z);
vertices.put(0.5f); // colors // for now
vertices.put(0.0f);
vertices.put(0.0f);
}
}
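For comparison with the stride sketch above, here is how the same triangle could be packed per vertex, so that every vertex carries its own position, normal and color (using the v0, v1, v2 and n0 values already computed in the loop):
// Sketch only: 9 floats per vertex, 27 per triangle, flat-shaded with the
// face normal n0 and the same placeholder color as the original loop.
for (Vect3D v : new Vect3D[]{v0, v1, v2}) {
    vertices.put(v.x); vertices.put(v.y); vertices.put(v.z);    // position
    vertices.put(n0.x); vertices.put(n0.y); vertices.put(n0.z); // normal
    vertices.put(0.5f); vertices.put(0.0f); vertices.put(0.0f); // color
}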
I made a simple app based on the Getting Started sample at developer.android.com. It was working fine, but I changed the rotation logic and now it doesn't show the triangle. I'm not sure I understood the matrix math, so I would appreciate it if someone could check my code.
This is the SurfaceView method I think may have a problem:
@Override
public boolean onTouchEvent(MotionEvent e){
float x = e.getX();
float y = e.getY();
switch(e.getAction()){
case MotionEvent.ACTION_MOVE:
float rotateX = 0.0f;
float rotateY = 0.0f;
float rotateZ = 0.0f;
//in the left column, rotate about the Z axis
if(x < getWidth() / 3){
rotateZ = (y - previousY);
}
//in the middle column, rotate about the X axis
if(getWidth()/3 < x && x < 2*getWidth()/3){
rotateX = (y - previousY);
}
//in the right column, rotate about the Z axis in the opposite direction
if(x > getWidth() / 3){
rotateZ = - (y - previousY);
}
//in the top row, rotate about the Z axis
if(y < getHeight() / 3){
rotateZ = (x - previousX);
}
//in the middle row, rotate about the Y axis
if(getHeight()/3 < y && y < 2*getHeight()/3){
rotateY = (x - previousY);
}
//in the bottom row, rotate about the Z axis in the opposite direction
if(y > 2*getHeight() / 3){
rotateZ = - (x - previousX);
}
mRenderer.setAngulo(mRenderer.getAnguloX() + (rotateX) * TOUCH_SCALE_FACTOR,
mRenderer.getAnguloY() + (rotateY) * TOUCH_SCALE_FACTOR,
mRenderer.getAnguloZ() + (rotateZ) * TOUCH_SCALE_FACTOR);
requestRender();
}
previousX = x;
previousY = y;
return true;
}
This is the renderer method:
@Override
public void onDrawFrame(GL10 unused) {
//draw the background
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
//set the camera position
//Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
//create a rotation matrix for each angle
Matrix.setRotateM(xRotationMatrix, 0, anguloX, 1, 0, 0);
Matrix.setRotateM(yRotationMatrix, 0, anguloY, 0, 1, 0);
Matrix.setRotateM(zRotationMatrix, 0, anguloZ, 0, 0, 1);
float[] mMatrix = new float[16];
//apply all the rotations to the scratch matrix
Matrix.multiplyMM(mMatrix, 0, xRotationMatrix, 0, mMatrix, 0);
Matrix.multiplyMM(mMatrix, 0, yRotationMatrix, 0, mMatrix, 0);
Matrix.multiplyMM(mMatrix, 0, zRotationMatrix, 0, mMatrix, 0);
// Apply the view and then the projection
Matrix.multiplyMM(resultingMatrix, 0, mViewMatrix, 0, mMatrix, 0);
Matrix.multiplyMM(resultingMatrix, 0, mProjectionMatrix, 0, resultingMatrix, 0);
// Draw shape
triangulo.draw(resultingMatrix);
}
The Triangulo class must be right because I haven't changed it since the last time the app was working.
Well, I analysed your code thoroughly and there does not seem to be any obvious problem in it.
But I guess the problem is this:
When you call
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
you are setting the camera at Z = -5, looking towards the origin, i.e. towards the positive Z direction.
(I guess the triangle's vertices are initially in the XY plane, and hence the triangle is visible.)
Then when you call
Matrix.setRotateM(zRotationMatrix, 0, anguloZ, 0, 0, 1);
it rotates the triangle about the Z axis, which is fine.
But then you call
Matrix.setRotateM(xRotationMatrix, 0, anguloX, 1, 0, 0);
Matrix.setRotateM(yRotationMatrix, 0, anguloY, 0, 1, 0);
Here you are rotating the triangle about the X and Y axes, and since a triangle has no thickness, it becomes invisible once it ends up edge-on to the camera.
Possible fixes:
1) Remove the rotations about the X and Y axes and check if it works.
OR
2) Put a cube in place of the triangle and see if it works.
OR
3) If neither works, give me the full code and I will debug and run it.
Best of luck!
OK, I am a self-taught programmer, and I am trying to use LWJGL and Slick-Util to make a library that provides tools for making games. I have been trying to make a sprite sheet, and I am using glTexCoord() to get only a part of the image. But despite my best efforts, it has not worked. Here is the draw code.
public SpriteSheet draw(int x, int y, Point2D p)
{
GL11.glPushMatrix();
float x1 = p.posX * size + (size / 2F);
float y1 = p.posY * size + (size / 2F);
float d = 1F / texture.getImageHeight();
int i = size / 2;
//Texture centers for coords
float x2 = x1 * d;
float y2 = y1 * d;
float d1 = i * d;
GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(x2 - d1, y2 - d1);
//GL11.glTexCoord2f(0, 0);
GL2D.vertex(x - i, y - i);
GL11.glTexCoord2f(x2 - d1, y2 + d1);
//GL11.glTexCoord2f(0, 1);
GL2D.vertex(x - i, y + i);
GL11.glTexCoord2f(x2 + d1, y2 + d1);
//GL11.glTexCoord2f(1, 1);
GL2D.vertex(x + i, x + i);
GL11.glTexCoord2f(x2 + d1, y2 - d1);
//GL11.glTexCoord2f(1, 0);
GL2D.vertex(x - i, y + i);
GL11.glEnd();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
GL11.glPopMatrix();
//TexturedQuad2D t = new TexturedQuad2D(32, 32, Color.black, texture);
//t.draw(x, y);
return this;
}
Yes, you are doing it wrong :)
Your x and y are world-space coordinates, not texture-space coordinates. Imagine your x and y being at coordinates 700, 800 while your texture size is 512: 700 / 512 ≈ 1.4, but texture coordinates go from 0 to 1.
So as a first step, try setting your texture coordinates to 0 and 1:
GL11.glTexCoord2f(0, 0);
GL11.glTexCoord2f(0, 1);
GL11.glTexCoord2f(1, 1);
GL11.glTexCoord2f(1, 0);
Now as a second step, start figuring out how to calculate a portion of your texture coordinates.
So if we have a sprite sheet with 2 frames, we go from 0 to 0.5 for the first frame and from 0.5 to 1 for the second:
//first frame would be
GL11.glTexCoord2f(0, 0);
GL11.glTexCoord2f(0, 1);
GL11.glTexCoord2f(0.5f, 1);
GL11.glTexCoord2f(0.5f, 0);
//second frame would be
GL11.glTexCoord2f(0.5f, 0);
GL11.glTexCoord2f(0.5f, 1);
GL11.glTexCoord2f(1, 1);
GL11.glTexCoord2f(1, 0);
Now as a third step, write a sprite class to calculate these coordinates for you!
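For example, something along these lines could compute the coordinates for one frame; col, row, frameW and frameH are placeholders for however your sheet is laid out, and texture/GL2D are the same helpers used in your draw method:
// Sketch only: texture coordinates for the frame at grid position (col, row).
// frameW and frameH are the pixel size of a single frame on the sheet.
float u0 = (col * frameW) / (float) texture.getImageWidth();
float v0 = (row * frameH) / (float) texture.getImageHeight();
float u1 = ((col + 1) * frameW) / (float) texture.getImageWidth();
float v1 = ((row + 1) * frameH) / (float) texture.getImageHeight();
GL11.glTexCoord2f(u0, v0); GL2D.vertex(x - i, y - i);
GL11.glTexCoord2f(u0, v1); GL2D.vertex(x - i, y + i);
GL11.glTexCoord2f(u1, v1); GL2D.vertex(x + i, y + i);
GL11.glTexCoord2f(u1, v0); GL2D.vertex(x + i, y - i);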
P.S. On second glance: are you enabling textures for OpenGL anywhere, with GL11.glEnable(GL11.GL_TEXTURE_2D), so that you get any texturing at all?
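If not, enabling them once during initialisation would look like this:
// Enable 2D texturing before drawing textured quads.
GL11.glEnable(GL11.GL_TEXTURE_2D);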