I'm making a game in Java using LWJGL and Slick-Util. Recently I've been trying to implement a health bar that hovers above the player's head. The problem is that it is not positioned correctly: the health bar's bottom-left corner (where OpenGL starts drawing) always appears at the top-right corner of the player's rectangle, and when the player moves along x, y, or both, the health bar moves away from the player in the same direction. I think it might be a problem with glTranslatef, or perhaps something silly that I missed.
Render method of Player:
protected void render() {
    Draw.rect(x, y, 32, 32, tex);          // Player texture drawn
    Draw.rect(x, y + 33, 32, 15, 1, 0, 0); // Health bar drawn. x is the same as the player's, but y is +33 so it hovers on top
}
Draw class:
package rpgmain;
import static org.lwjgl.opengl.GL11.*;
import org.newdawn.slick.opengl.*;
/**
 *
 * @author Samsung
 */
public class Draw {

    public static void rect(float x, float y, float width, float height, Texture tex) {
        glEnable(GL_TEXTURE_2D);
        glTranslatef(x, y, 0);
        glColor4f(1f, 1f, 1f, 1f);
        tex.bind();
        glBegin(GL_QUADS); // GL_QUADS specifies the type of shape you're going to be drawing
        {
            // PNG format for images
            glTexCoord2f(0, 1); glVertex2f(0, 0);          // Specify the vertices. (0, 0) is the BOTTOM LEFT CORNER OF THE SCREEN.
            glTexCoord2f(0, 0); glVertex2f(0, height);     // 2f specifies the number of args (2) and their type (float)
            glTexCoord2f(1, 0); glVertex2f(width, height);
            glTexCoord2f(1, 1); glVertex2f(width, 0);
        }
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }

    public static void rect(float x, float y, float width, float height, float r, float g, float b) {
        glDisable(GL_TEXTURE_2D);
        glTranslatef(x, y, 0);
        glColor3f(r, g, b);
        glBegin(GL_QUADS);
        {
            glVertex2f(0, 0);
            glVertex2f(0, height);
            glVertex2f(width, height);
            glVertex2f(width, 0);
        }
        glEnd();
        glEnable(GL_TEXTURE_2D);
    }
}
I think your problem is with matrices. When you call glTranslatef, you are transforming the OpenGL ModelView matrix. However, since OpenGL is a state-based machine, this transformation is preserved for further drawing events. The second rectangle will be translated twice. What you want to do is use the OpenGL matrix stack. I'll rewrite one of the draw methods here:
public static void rect(float x, float y, float width, float height, Texture tex) {
    glEnable(GL_TEXTURE_2D);
    glPushMatrix();
    glTranslatef(x, y, 0);
    glColor4f(1f, 1f, 1f, 1f);
    tex.bind();
    glBegin(GL_QUADS);
    {
        glTexCoord2f(0, 1); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(0, height);
        glTexCoord2f(1, 0); glVertex2f(width, height);
        glTexCoord2f(1, 1); glVertex2f(width, 0);
    }
    glEnd();
    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}
The glPushMatrix() call adds a new matrix to the top of the stack. All transformations from that point on are applied to the newly added matrix. Then, when you call glPopMatrix(), that matrix is discarded and the stack is returned to its previous state.
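With that change applied to both rect overloads (the colored one needs the same glPushMatrix()/glPopMatrix() pair around its glTranslatef), the render method from the question should place the health bar directly above the player texture, for example:

protected void render() {
    // Each Draw.rect call now pushes, translates, draws, and pops,
    // so the second call is no longer offset by the first one's translation.
    Draw.rect(x, y, 32, 32, tex);          // player texture at (x, y)
    Draw.rect(x, y + 33, 32, 15, 1, 0, 0); // red health bar hovering 33 units above
}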
Related
I am making a 3D game that has a player with a follow cam. Before I started using real models I used a cube, rendered with a display list, and everything moved fine. However, now that I am importing full 3D models with many more vertices, I looked into VBOs. I have a full structure set up for my VBOs and I can see the model drawn initially, but it is drawn at the center of the game world. When I move the player, the model doesn't translate as it should; it doesn't move its position.
Here is the code that I used initially to draw the player as a rectangle (which works):
public static void drawRectPrism(float centerx, float centery, float centerz, float length, float height, float width, float rx, float ry, float rz)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    {
        glTranslatef(centerx, centery, centerz);
        glRotatef(rx, 1, 0, 0);
        glRotatef(ry, 0, 1, 0);
        glRotatef(rz, 0, 0, 1);
        glTranslatef(-centerx, -centery, -centerz);
        glTranslatef(-length/2f, -height/2f, -width/2f);
        glBegin(GL_QUADS);
        {
            glColor3f(1.0f, 0, 0);
            glVertex3f(centerx, centery, centerz);
            glVertex3f(centerx + length, centery, centerz);
            glVertex3f(centerx + length, centery + height, centerz);
            glVertex3f(centerx, centery + height, centerz);
            glColor3f(0, 1.0f, 0);
            glVertex3f(centerx, centery, centerz + width);
            glVertex3f(centerx + length, centery, centerz + width);
            glVertex3f(centerx + length, centery + height, centerz + width);
            glVertex3f(centerx, centery + height, centerz + width);
            glColor3f(0, 0, 1.0f);
            glVertex3f(centerx, centery, centerz);
            glVertex3f(centerx, centery + height, centerz);
            glVertex3f(centerx, centery + height, centerz + width);
            glVertex3f(centerx, centery, centerz + width);
            glColor3f(0, 1.0f, 1.0f);
            glVertex3f(centerx + length, centery, centerz);
            glVertex3f(centerx + length, centery + height, centerz);
            glVertex3f(centerx + length, centery + height, centerz + width);
            glVertex3f(centerx + length, centery, centerz + width);
            glColor3f(1.0f, 1.0f, 0);
            glVertex3f(centerx, centery, centerz);
            glVertex3f(centerx + length, centery, centerz);
            glVertex3f(centerx + length, centery, centerz + width);
            glVertex3f(centerx, centery, centerz + width);
            glColor3f(1.0f, 0, 1.0f);
            glVertex3f(centerx, centery + height, centerz);
            glVertex3f(centerx + length, centery + height, centerz);
            glVertex3f(centerx + length, centery + height, centerz + width);
            glVertex3f(centerx, centery + height, centerz + width);
        }
        glEnd();
    }
    glPopMatrix();
}
I tried a couple of different ways, some better than others, and probably implemented terrible programming structure, but I figured it should still work.
First attempt: adapt the rectangle code to load my vertices and models instead of the specific rectangle vertices:
public void translate(float x, float y, float z, float rx, float ry, float rz)
{
    File f = new File("graveDigga.obj");
    try {
        m = OBJLoader.loadModel(f);
    }
    catch(FileNotFoundException e)
    {
        e.printStackTrace();
        Display.destroy();
        System.exit(1);
    }
    catch(IOException e)
    {
        e.printStackTrace();
        Display.destroy();
        System.exit(1);
    }
    displayListChar = glGenLists(1);
    glNewList(displayListChar, GL_COMPILE);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    {
        glTranslatef(x, y, z);
        glRotatef(rx, 1, 0, 0);
        glRotatef(ry, 0, 1, 0);
        glRotatef(rz, 0, 0, 1);
        //glTranslatef(-x, -y, -z);
        //glTranslatef(-length/2f, -height/2f, -width/2f);
        glBegin(GL_TRIANGLES);
        for(Face face : m.faces)
        {
            Vector2f t1 = m.textures.get((int) face.textures.x - 1);
            glTexCoord2f(t1.x + x, 1 - (t1.y + y));
            Vector3f n1 = m.normals.get((int) face.normal.x - 1);
            glNormal3f(n1.x + x, n1.y + y, n1.z + z);
            Vector3f v1 = m.vertices.get((int) face.vertex.x - 1);
            glVertex3f(v1.x + x, v1.y + y, v1.z + z);
            Vector2f t2 = m.textures.get((int) face.textures.y - 1);
            glTexCoord2f(t2.x + x, 1 - (t2.y + y));
            Vector3f n2 = m.normals.get((int) face.normal.y - 1);
            glNormal3f(n2.x + x, n2.y + y, n2.z + z);
            Vector3f v2 = m.vertices.get((int) face.vertex.y - 1);
            glVertex3f(v2.x + x, v2.y + y, v2.z + z);
            Vector2f t3 = m.textures.get((int) face.textures.z - 1);
            glTexCoord2f(t3.x + x, 1 - (t3.y + y));
            Vector3f n3 = m.normals.get((int) face.normal.z - 1);
            glNormal3f(n3.x + x, n3.y + y, n3.z + z);
            Vector3f v3 = m.vertices.get((int) face.vertex.z - 1);
            glVertex3f(v3.x + x, v3.y + y, v3.z + z);
        }
        glEnd();
    }
    glPopMatrix();
    //}
    //glPopMatrix();
    build();
}
I next tried to display the model by creating a VBO from this data and calling a render method in my game loop. Before calling render I would run through the following code to attempt to translate the VBO's position, but nothing was happening.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
{
    //glLoadIdentity();
    glTranslatef(x, y, z);
    glRotatef(rx, 1, 0, 0);
    glRotatef(ry, 0, 1, 0);
    glRotatef(rz, 0, 0, 1);
    glTranslatef(-x, -y, -z);
    glDrawArrays(GL_TRIANGLES, 0, m.faces.size() * 3);
}
I am not sure if I should be using shaders for this or not, but part of me is questioning why it is so hard to move a 3D model in world space. Is there a messy way that is easier to implement on a temporary basis?
Considering you're a new person with 1 rep, have an upvote. Nicely written question :).
I am not sure if I should be using shaders for this or not, but part of me is questioning why it is so hard to move a 3D model in world space. Is there a messy way that is easier to implement on a temporary basis?
A simple glTranslatef() call should suffice to move an object, and a simple glRotatef() call will rotate it.
For example, using glRotatef(90, 1, 0, 0) gives this result:
Whereas without that line, the grass is not rotated at all:
Same with the ship. Using glRotatef(40, -1, 0, 0) gives this result:
Whereas without the line it just stays flat:
Obviously that is just the pitch. glRotatef(AmountToRotate, PITCH, YAW, ROLL) can roll the ship onto its side, or rotate the ship around.
Enough about rotation.
For rendering the grass, I just use this:
public void render(){
    Main.TerrainDemo.shader.start();
    glPushMatrix();
    glDisable(GL_LIGHTING);
    glTranslatef(location.x * TerrainDemo.scale, location.y, location.z * TerrainDemo.scale);
    TexturedModel texturedModel = TerrainDemo.texModel;
    RawModel model = texturedModel.getRawModel();
    GL30.glBindVertexArray(model.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    GL20.glEnableVertexAttribArray(1);
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texturedModel.getTexture().getID());
    glScalef(10f, 10f, 10f);
    glColor4f(0, 0, 0, 0.5f);
    glDrawElements(GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL30.glBindVertexArray(0);
    glEnable(GL_LIGHTING);
    glPopMatrix();
    Main.TerrainDemo.shader.stop();
}
To cut through code which you will probably not recognize/understand: I am basically just saying that glTranslatef(location.x * TerrainDemo.scale, location.y, location.z * TerrainDemo.scale) is the only thing setting the location. After translating, I simply set the scale (the size), and then draw it.
Whereas if I remove the glTranslatef() line, all the grass will just render in the same location:
So to answer your question:
use something like this (pseudocode; a minimal Java sketch follows below):
PUSH THE MATRIX (glPushMatrix) SO THE CURRENT TRANSFORMATION IS PRESERVED
CALL glTranslatef WITH THE LOCATION OF THE CURRENT OBJECT
RENDER/DRAW THE OBJECT
POP THE MATRIX (glPopMatrix)
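As a minimal Java/LWJGL sketch of that pattern (drawObject() and the x/y/z/yaw parameters are placeholder names, not taken from the question's code):

public void renderAt(float x, float y, float z, float yaw) {
    glPushMatrix();           // save the current modelview matrix
    glTranslatef(x, y, z);    // move to this object's location
    glRotatef(yaw, 0, 1, 0);  // rotate the object around the Y axis
    drawObject();             // issue the draw calls (VBO, display list, etc.)
    glPopMatrix();            // restore the matrix so other objects are unaffected
}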
Unfortunately, looking through your code I couldn't find the specific issue that is causing it not to draw, meaning I cannot just say "have xxx code and it will work", but I hope this helps with how to move/rotate an object.
I am using VBOs just like you are for rendering the grass, ship, and trees (though I use a display list for terrain because I am lazy).
My skype is joehot200 if you wish to discuss anything further.
I'm playing around with creating a small voxel-based project with LWJGL. Part of the project is loading small chunks of landscape around the player as they move. The loading part of this works okay, but I ran into an issue where, as I walked along the +X axis, the chunks of landscape the same distance along the -X axis would load. The same thing happened for the Z axis.
I got curious, so I tried reversing the X and Z axis rendering direction on the chunks, which seemed to fix the issue. However, I also decided to render the axes as lines to verify that everything was now drawing correctly, which generated the following image:
(I can't embed images apparently, so link: http://i.imgur.com/y5hO1Im.png)
In this image, the red, blue and green lines are drawn along the negative axes, whereas the purple, yellow and cyan lines are drawn along the positive axes. What's really weird about this is that the image is showing that the camera is in the +X and +Z range, but internally, the position vector of the camera is in the -X and -Z range. This would make sense as to why the chunks were loading on the opposite axis, as if the camera was rendering on +X but was internally at a position of -X, then the -X chunks would be loaded instead.
So I'm not sure what's going on here anymore. I'm sure there's a small setting or incorrect positive/negative that I'm missing, but I just can't seem to find anything. So I guess my question is, is the camera rendering correctly with the internal position? If so, do I need to just reverse everything that I render? If not, is there something clearly visible in the camera that is messing up the rendering?
Some snippets of the relevant code, trying not to overflow the post with code blocks:
Camera.java
public class Camera {

    // Camera position
    private Vector3f position = new Vector3f(x, y, z);

    // Camera view properties
    private float pitch = 1f, yaw = 0.0f, roll = 0.0f;

    // Mouse sensitivity
    private float mouseSensitivity = 0.25f;

    // Used to change the yaw of the camera
    public void yaw(float amount) {
        this.yaw += (amount * this.mouseSensitivity);
    }

    // Used to change the pitch of the camera
    public void pitch(float amount) {
        this.pitch += (amount * this.mouseSensitivity);
    }

    // Used to change the roll of the camera
    public void roll(float amount) {
        this.roll += amount;
    }

    // Moves the camera forward relative to its current rotation (yaw)
    public void walkForward(float distance) {
        position.x -= distance * (float) Math.sin(Math.toRadians(yaw));
        position.z += distance * (float) Math.cos(Math.toRadians(yaw));
    }

    // Moves the camera backward relative to its current rotation (yaw)
    public void walkBackwards(float distance) {
        position.x += distance * (float) Math.sin(Math.toRadians(yaw));
        position.z -= distance * (float) Math.cos(Math.toRadians(yaw));
    }

    // Strafes the camera left relative to its current rotation (yaw)
    public void strafeLeft(float distance) {
        position.x -= distance * (float) Math.sin(Math.toRadians(yaw - 90));
        position.z += distance * (float) Math.cos(Math.toRadians(yaw - 90));
    }

    // Strafes the camera right relative to its current rotation (yaw)
    public void strafeRight(float distance) {
        position.x -= distance * (float) Math.sin(Math.toRadians(yaw + 90));
        position.z += distance * (float) Math.cos(Math.toRadians(yaw + 90));
    }

    // Translates and rotates the matrix so that it looks through the camera
    public void lookThrough() {
        GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
        GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
        GL11.glTranslatef(position.x, position.y, position.z);
    }
}
Main.java render code
private void render() {
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    GL11.glLoadIdentity();
    // Set the view matrix to the player's view
    this.player.lookThrough();
    // Render the visible chunks
    this.chunkManager.render();
    // Draw axis
    GL11.glBegin(GL11.GL_LINES);
    // X Axis
    GL11.glColor3f(1, 0, 0);
    GL11.glVertex3f(-100, 0, 0);
    GL11.glVertex3f(0, 0, 0);
    GL11.glColor3f(1, 1, 0);
    GL11.glVertex3f(0, 0, 0);
    GL11.glVertex3f(100, 0, 0);
    // Y Axis
    GL11.glColor3f(0, 1, 0);
    GL11.glVertex3f(0, -100, 0);
    GL11.glVertex3f(0, 0, 0);
    GL11.glColor3f(0, 1, 1);
    GL11.glVertex3f(0, 0, 0);
    GL11.glVertex3f(0, 100, 0);
    // Z Axis
    GL11.glColor3f(0, 0, 1);
    GL11.glVertex3f(0, 0, -100);
    GL11.glVertex3f(0, 0, 0);
    GL11.glColor3f(1, 0, 1);
    GL11.glVertex3f(0, 0, 0);
    GL11.glVertex3f(0, 0, 100);
    GL11.glEnd();
    // Render the origin
    this.origin.render();
}
chunkManager.render() just iterates through each of the loaded chunks and calls .render() on them, which in turn creates a giant solid cube that is rendered at the origin of the chunk.
More code can be provided if needed.
Replace
GL11.glTranslatef(position.x, position.y, position.z);
with
GL11.glTranslatef(-position.x, -position.y, -position.z);
Think about it: you want to translate the world by the inverse of where the camera is, so that (0, 0, 0) is where the camera is.
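A minimal sketch of lookThrough() with that change, using the same fields as the Camera class above:

public void lookThrough() {
    GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
    GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
    // Translating the world by -position is what places the camera at +position.
    GL11.glTranslatef(-position.x, -position.y, -position.z);
}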
I want to render 2D quads on my screen by switching to a 2D scene, then switching back to 3D.
I don't want to use any external libraries besides LWJGL.
This is what I got so far:
private static void renderLetter(char c, float x, float y) {
    int character = c + 1;
    GL11.glPushMatrix();
    setOrthoOn();
    GL11.glTranslatef(x, y, 0);
    float[] xy = game.getResourceManager().getSpriteSheets().get(fontSheet).getXYForCell(character);
    float cellx = game.getResourceManager().getSpriteSheets().get(fontSheet).getCell_sizeX();
    float celly = game.getResourceManager().getSpriteSheets().get(fontSheet).getCell_sizeY();
    float xx = xy[0];
    float yy = xy[1];
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, game.getResourceManager().getTextures().get(game.getResourceManager().getSpriteSheets().get(fontSheet).getTextureID()));
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(xx, yy);
    GL11.glVertex2f(0, 0);
    GL11.glTexCoord2f(xx + cellx, yy);
    GL11.glVertex2f(fontSize, 0);
    GL11.glTexCoord2f(xx + cellx, yy + celly);
    GL11.glVertex2f(fontSize, fontSize);
    GL11.glTexCoord2f(xx, yy + celly);
    GL11.glVertex2f(0, fontSize);
    GL11.glEnd();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
    setOrthoOff();
    GL11.glPopMatrix();
}

public static void setOrthoOn()
{
    GL11.glDisable(GL11.GL_LIGHTING);
    GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    GL11.glClearDepth(1);
    GL11.glViewport(0, 0, 1360, 768);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, 1360, 768, 0, 1, -1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
}

public static void setOrthoOff()
{
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GLU.gluPerspective(game.getFieldOfView(), 1360f / 768f, 0.1f, 1000);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
    GL11.glEnable(GL11.GL_DEPTH_TEST);
    GL11.glDepthFunc(GL11.GL_LEQUAL);
    GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
}
The first method renders a letter. What I am trying to do is call setOrthoOn to switch to 2D rendering, render the quad, then call setOrthoOff to switch back to 3D.
This code does nothing for me when I run it. What am I doing wrong?
This should do it.
Note: some of the code may be extraneous. I'm not very experienced with OpenGL, but this worked for me. I would also recommend importing GL11 statically; that way you don't have to type "GL11.BlaBlaBla", you can just type "BlaBlaBla".
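For illustration, the static import the answer recommends would look like this (the quad body is just a generic example, not the answer's own code):

import static org.lwjgl.opengl.GL11.*;

// ...later, calls no longer need the GL11. prefix:
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(fontSize, 0);
glTexCoord2f(1, 1); glVertex2f(fontSize, fontSize);
glTexCoord2f(0, 1); glVertex2f(0, fontSize);
glEnd();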
So I'm using gluProject to take 3D coordinates and tell me where on the screen they were rendered. The problem is I don't get the same coordinates from gluProject as where they actually end up rendering. The weird part is that if the differences between the x, y, and z of the camera and of the point I'm testing are equal (e.g. camera at (5, 3, 4) and point at (2, 0, 1)), it gives the correct values. I'm using gluLookAt() for camera transforms. I have confirmed that the same matrices I use for rendering are being passed to gluProject. Below is my rendering code, camera.look(), and the part with gluProject:
Rendering:
glEnable(GL_LIGHTING);
glEnable(GL_NORMAL_ARRAY);
glViewport(0, 0, Display.getHeight(), Display.getHeight());
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0f, 1, .1f, 1000f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cam.look();
for (Cube cube : cubes) {
    cube.draw();
}
grid.draw();
Camera.look():
void look() {
    calcPoints(xAng, yAng);
    gluLookAt(x, y, z, x + dx, y + dy, z + dz, 0, 1, 0);
}
part with gluProject:
glViewport(0, 0, Display.getHeight(), Display.getHeight());
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0f, 1, .1f, 1000f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cam.look();
modelview = BufferUtils.createFloatBuffer(16); //floatbuffers, defined earlier
projection = BufferUtils.createFloatBuffer(16);
viewport = BufferUtils.createIntBuffer(16);
FloatBuffer result = BufferUtils.createFloatBuffer(3);
glGetFloat(GL_MODELVIEW_MATRIX, modelview);
glGetFloat(GL_PROJECTION_MATRIX, projection);
glGetInteger(GL_VIEWPORT, viewport);
gluProject(1f, 0f, 0f, modelview, projection, viewport, result);
//Utils.printFloatBuffer(projection);
//Utils.printFloatBuffer(modelview);
System.out.println(result.get(0) + " " + result.get(1) + " " + result.get(2));
Maybe try having the up vector at a right angle to the camera direction and position instead of (0, 1, 0) constantly.
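A rough sketch of one way to do that, assuming dx, dy, dz is the normalized view direction already used in gluLookAt (the variable names are illustrative, and this breaks down if the direction is parallel to world up):

// right = direction x worldUp, with worldUp = (0, 1, 0)
float rx = -dz, ry = 0, rz = dx;
// up = right x direction, which is perpendicular to the view direction
float ux = ry * dz - rz * dy;
float uy = rz * dx - rx * dz;
float uz = rx * dy - ry * dx;
float len = (float) Math.sqrt(ux * ux + uy * uy + uz * uz);
gluLookAt(x, y, z, x + dx, y + dy, z + dz, ux / len, uy / len, uz / len);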
I'm trying to have sprites stay the same size even when their z is moved far away.
However, I have no luck; they still get smaller.
EDIT:
I now have these methods:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    float _width = 320f;
    float _height = 480f;
    gl.glDisable(GL10.GL_DEPTH_TEST);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0, _width, 0, _height, 1, 100);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    // Load textures
    gl.glEnable(GL10.GL_TEXTURE_2D);
    for (int a = 0; a < squares.length; a++) {
        squares[a].loadGLTexture(gl, context);
    }
}
public void onDrawFrame(GL10 gl) {
    // Clear Screen And Depth Buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity(); // Reset The Current Modelview Matrix
    gl.glTranslatef(.0f, 1.0f, locZ);
    squares[0].draw(gl);
    gl.glLoadIdentity();
    gl.glTranslatef(0.5f, 0.f, locZ);
    squares[1].draw(gl);
    gl.glLoadIdentity();
    gl.glTranslatef(-0.5f, -0.5f, locZ);
    squares[2].draw(gl);
    gl.glLoadIdentity();
    gl.glTranslatef(-0.5f, -0.5f, locZ);
    squares[3].draw(gl);
    // Change z values
    if (locZ >= 4.0f) {
        speedZ *= -1.0f;
        locZ = 3.9f;
    }
    else if (locZ <= -4.0) {
        speedZ *= -1.0f;
        locZ = -3.9f;
    }
    locZ += speedZ;
}
I'm changing the z-values, and therefore the distance from the 'camera', and expecting that since I don't want to use perspective (orthographic mode), the sizes of the squares should stay constant. But they don't. Hope this helps some more.
You have bad glOrtho parameters:
gl.glOrthof(0, width, 0, height, 0.01f, 100.0f);
Or
gl.glOrthof(0, width, height, 0, 0.01f, 100.0f);
EDIT: don't forget to reset the matrix with glLoadIdentity:
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    /* SET NEW PROJECTION HERE: ortho or perspective */
    gl.glOrthof(0, _width, 0, _height, 0.001f, 100);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    /* SET NEW MODELVIEW MATRIX (space transformation) HERE and DRAW YOUR STUFF */
    // Change z values
    if (locZ >= 99.0f) {
        speedZ *= -1.0f;
        locZ = 99.0f;
    }
    else if (locZ <= 1.0) {
        speedZ *= -1.0f;
        locZ = 1.0f;
    }
}
These steps have to be done before rendering in 2D, i.e. when moving from the 3D projection to the 2D projection, not when creating a texture or any other object. I don't know much about public void onSurfaceCreated, but it doesn't seem to be part of the rendering loop.
So that the origin is in the middle of your GLSurfaceView, it's not a bad idea to do something like:
gl.glOrthof(-width/2, width/2, -height/2, height/2, 0.1f, 100.0f);
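With that centered origin, a quad drawn around (0, 0) sits in the middle of the screen; a rough usage sketch (squares[0] is one of the question's squares, and the translate only pushes it inside the near/far range):

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0f, 0f, -1f); // between near (0.1) and far (100)
squares[0].draw(gl);          // drawn centered on the screen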
Here you could have two methods: one to switch to an orthographic view in which one OpenGL unit = one screen pixel, for drawing in 2D on screen, and one to switch back to 3D drawing. Do your 2D drawing after rendering the 3D scene: first call the switchToOrtho method, and when you're finished call the switchBackToFrustum method.
public void switchToOrtho() {
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrthof(0, self.view.bounds.size.width, 0, self.view.bounds.size.height, -5, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

public void switchBackToFrustum() {
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
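A hedged usage sketch of the ordering described above (render3DScene() and draw2DOverlay() are placeholder names for your own rendering code):

// In the render loop: draw the 3D scene first, then the 2D overlay.
render3DScene();        // normal perspective rendering
switchToOrtho();        // saves the projection with glPushMatrix, then loads a 2D ortho
draw2DOverlay();        // HUD, text, etc. in screen-pixel coordinates
switchBackToFrustum();  // glPopMatrix restores the saved 3D projection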