I'm using LWJGL 3 as a Java wrapper for OpenGL, and I'm trying to figure out how to render things with Vertex Array Objects. I have not been able to find any examples online that show how to use a VAO that has both shader attributes and an element array buffer.
According to this VAO tutorial, I need to do something like this:
// Create a new Vertex Array Object in memory and select it (bind)
// A VAO can have up to 16 attributes (VBO's) assigned to it by default
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
// A VBO is a collection of Vectors which in this case resemble the location of each vertex.
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verticesBuffer, GL15.GL_STATIC_DRAW);
// Put the VBO in the attributes list at index 0
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
However, I don't understand how to adapt this tutorial to my project, because I am using glVertexAttribPointer with attributes from my shader. These attributes have locations that are assigned when the shader is compiled and linked. The example above just seems to use index 0 as a default; for what it's worth, I checked, and posAttrib does happen to be at location 0 for me.
How should I associate a VBO with my VAO?
private int createVao(int shaderProgramId, FloatBuffer vertices, IntBuffer order) {
int floatSize = 4;
int stride = 7 * floatSize;
int vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
int vboId = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
int eboId = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, eboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, order, GL_STATIC_DRAW);
int posAttrib = shaderProgramService.getAttribute(shaderProgramId, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, false, stride, 0);
int colAttrib = shaderProgramService.getAttribute(shaderProgramId, "color");
glEnableVertexAttribArray(colAttrib);
glVertexAttribPointer(colAttrib, 3, GL_FLOAT, false, stride, 2 * floatSize);
int texAttrib = shaderProgramService.getAttribute(shaderProgramId, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, false, stride, 5 * floatSize);
// glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); <==== Removed
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
return vaoId;
}
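(For reference, the attribute locations used above are the ones assigned at link time; a minimal lookup helper along the lines of what getAttribute does might look like this. This is a sketch, not the exact implementation of shaderProgramService.)
private int getAttribute(int shaderProgramId, String name) {
    // Look up the location the linker assigned to this attribute
    int location = glGetAttribLocation(shaderProgramId, name);
    if (location == -1) {
        // -1 means the attribute is not active (misspelled or optimized out)
        throw new IllegalStateException("Attribute not found: " + name);
    }
    return location;
}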
EDIT: Thanks @RetoKoradi, getting rid of the glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); call improved things. The scene flashed onscreen, but I got a segfault on the second frame. After some debugging, the issue turned out to be another object I was trying to render that was not yet using VAOs. I was making calls like this to render that object:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, texturedPoints.getEboId());
glBindBuffer(GL_ARRAY_BUFFER, texturedPoints.getVboId());
glBindTexture(GL_TEXTURE_2D, texturedPoints.getTextureId());
specifyVertexAttributes();
int count = texturedPoints.getCount();
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Somehow that interfered with subsequent VAO rendering calls that looked like this:
glBindVertexArray(texturedPoints.getVaoId());
glBindTexture(GL_TEXTURE_2D, texturedPoints.getTextureId());
glDrawElements(GL_TRIANGLES, texturedPoints.getCount(), GL_UNSIGNED_INT, 0);
The glDrawElements call would segfault the second time the main loop came around and called it. After moving everything over to VAOs, the problems went away.
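For completeness, the one-time migration for that object looked roughly like this (a sketch; it assumes specifyVertexAttributes() only issues glEnableVertexAttribArray / glVertexAttribPointer calls):
// One-time setup: record the buffer bindings and attribute layout in a VAO
int vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
glBindBuffer(GL_ARRAY_BUFFER, texturedPoints.getVboId());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, texturedPoints.getEboId());
specifyVertexAttributes(); // attribute pointers and the element buffer binding are captured by the bound VAO
glBindVertexArray(0);
After that, the three VAO-based calls shown above are all that is needed per frame.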
Related
I want to render my vertices in the order defined by an index buffer.
If I use
glDrawElements(int mode, ByteBuffer indices);
everything works fine, but if I upload this data to GL_ELEMENT_ARRAY_BUFFER instead, I get a black screen and GL error 1281 (GL_INVALID_VALUE).
This is my code:
public void buildBuffer() {
//generate VBOs
FloatBuffer buffer = BufferUtils.createFloatBuffer(this.model.getVertexData().size() * 3);
for (float[] f : this.model.getVertexData()) {
buffer.put(f);
}
buffer.flip();
glBindBuffer(GL_ARRAY_BUFFER, this.vVBO);
glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW);
ByteBuffer ibuffer = BufferUtils.createByteBuffer(3);
ibuffer.put((byte) 1);
ibuffer.put((byte) 2);
ibuffer.put((byte) 3);
ibuffer.flip();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this.iVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ibuffer, GL_STATIC_DRAW);
glBindVertexArray(this.VAO);
glBindBuffer(GL_ARRAY_BUFFER, this.vVBO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glBindVertexArray(0);
}
public void draw() {
this.shader.use();
glBindVertexArray(this.VAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this.iVBO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, 0);
glBindVertexArray(0);
Shader.NONE.use();
}
It's hard to tell, but I'll take a shot at it.
You never actually create the VAO: before binding the vertex array, you have to create it with vaoID = GL30.glGenVertexArrays();.
Also, when you put the array of data into the FloatBuffer, you don't need a for loop; if the vertex data is already in a single float[], it's as simple as buffer.put(vertices);.
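In other words, the start of buildBuffer() would look roughly like this (a sketch; if the buffer IDs are already generated elsewhere, only the glGenVertexArrays() line is actually missing):
// Create the VAO (and buffers) before binding them
this.VAO  = glGenVertexArrays();
this.vVBO = glGenBuffers();
this.iVBO = glGenBuffers();
glBindVertexArray(this.VAO);
glBindBuffer(GL_ARRAY_BUFFER, this.vVBO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glBindVertexArray(0);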
I started playing around with OpenGL 3.3+ last week, and I ran into this problem when trying to get indexed drawing to work. Right now, I'm just trying to get a triangle to draw using an IBO.
Index buffer, and indexHandle:
int[] tIndices = {
0, 1, 2
};
IntBuffer indexBuffer = BufferUtils.createIntBuffer(tIndices.length);
indexBuffer.put(tIndices);
indexBuffer.flip();
int indexHandle = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexHandle);
glBufferData(indexHandle, indexBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
This is the main drawing loop in my program, which draws nothing:
while (!Display.isCloseRequested()) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programHandle);
glBindVertexArray(vaoHandle);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexHandle);
// This call does nothing v
glDrawElements(GL_TRIANGLE_STRIP, tIndices.length, GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(0);
glBindVertexArray(0);
glUseProgram(0);
Display.update();
}
This is the main loop with the glDrawElements(int, IntBuffer) variant (which does draw my triangle):
while (!Display.isCloseRequested()) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programHandle);
glBindVertexArray(vaoHandle);
glEnableVertexAttribArray(0);
// indexBuffer is an IntBuffer
glDrawElements(GL_TRIANGLES, indexBuffer);
glDisableVertexAttribArray(0);
glBindVertexArray(0);
glUseProgram(0);
Display.update();
}
Can someone explain to me why the 1st block is not drawing anything at all? What am I doing wrong?
Also, the 2nd block runs at about 1.3k FPS on a 600x600 window, while the 1st one is about 800 FPS. Why is this?
I will provide more information in the morning if anyone needs it.
Your first argument to glBufferData() is wrong for the index buffer. You have this:
glBufferData(indexHandle, indexBuffer, GL_STATIC_DRAW);
The first argument is the target, not the buffer id. It should look like this instead:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBuffer, GL_STATIC_DRAW);
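With that one change, the index-buffer setup reads:
int indexHandle = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexHandle);
// Upload against the bound target, not the buffer id
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);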
I'm developing a simple 2D game in Java using the LWJGL wrapper for OpenGL.
For the rendering method, I use VBOs. It seems very good and faster than the other rendering methods. I read some articles and looked through some questions here on Stack Overflow, and I found that using 2 triangles is better than using one quad, since modern GPUs only draw triangles (and it would be a waste to let the GPU translate that quad into triangles).
The only way I know is to create 2 buffers, one for the vertex data and one for the texture coordinate data. That's for a quad; this is how I do it:
int vertexID; //Holding the GL buffer ID for the Vertex
int texCoordsID; //Holding the GL buffer ID for the texture coords
void init(){
//BufferUtils is a Utility class provided by the SlickUtil library, I use it for creating buffers.
//Create float buffer for storing vertex data
FloatBuffer vertexBuffer = BufferUtils.createFloatBuffer(4 * 2);
//Put vertex data inside the buffer
vertexBuffer.put(new float[]{
0, 0,
100, 0,
100, 100,
0, 100
});
//Rewind the buffer
vertexBuffer.rewind();
FloatBuffer texCoordsBuffer = BufferUtils.createFloatBuffer(4 * 2);
texCoordsBuffer.put(new float[]{
0, 0,
1, 0,
1, 1,
0, 1
});
texCoordsBuffer.rewind();
vertexID = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vertexID);
glBufferData(GL_ARRAY_BUFFER, vertexBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
texCoordsID = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texCoordsID);
glBufferData(GL_ARRAY_BUFFER, texCoordsBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void render(){
glBindTexture(GL_TEXTURE_2D, texture.id); //Not so important.
glBindBuffer(GL_ARRAY_BUFFER, vertexID);
glVertexPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, texCoordsID);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Now my main question is: how can you split it into 2 triangles instead of 1 quad?
And a side question: does it really matter for a 2D game? Does it make any noticeable difference?
Just change
glDrawArrays(GL_QUADS, 0, 4);
to
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
and use a slightly different vertex buffer:
vertexBuffer.put(new float[]{
     1.0f,  1.0f,
    -1.0f,  1.0f,
     1.0f, -1.0f,
    -1.0f, -1.0f
});
This represents two triangles; two of the 'points' are shared, so we still only need four 'points' in total.
Just like:
x3__x4
| \ |
| \ |
x1__x2
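Applied to the quad from the question, the same reordering looks like this (a sketch; note that the texture coordinates have to be reordered the same way so each pair still belongs to its vertex):
// Strip order for the pixel-space quad: bottom-left, bottom-right, top-left, top-right
vertexBuffer.put(new float[]{
      0,   0,
    100,   0,
      0, 100,
    100, 100
});
// Texture coordinates reordered to match
texCoordsBuffer.put(new float[]{
    0, 0,
    1, 0,
    0, 1,
    1, 1
});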
I am using VAOs and mapped VBOs together to get as much performance as I can. My VBOs are interleaved in the form VCVCVCVCVCVC, so there is a vertex position of 3 floats followed by a color of 4 floats.
My problem is that the color attribute isn't picked up, even though I have the correct stride and offset. This started happening when I implemented VAOs.
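For reference, with 4-byte floats the layout works out to the stride and offsets below (spelled out as a sketch; the shifted literals 7<<2 and 3<<2 in the code are the same values):
// Interleaved VCVC... layout: 3 position floats + 4 color floats per vertex
final int FLOAT_SIZE = 4;              // bytes per float
final int STRIDE     = 7 * FLOAT_SIZE; // 28 bytes per vertex  (7 << 2)
final int POS_OFFSET = 0;              // position starts at byte 0
final int COL_OFFSET = 3 * FLOAT_SIZE; // color starts at byte 12  (3 << 2)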
Important parts of the code:
Mapping the VBO and creating the VAO
Binding everything:
glBindVertexArray(vaoHandel);
glBindBuffer(GL_ARRAY_BUFFER, vboHandel);
glBufferData(GL_ARRAY_BUFFER, NumberOfIndecies << 2, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 7<<2, 0<<2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 4, GL_FLOAT, false, 7<<2, 3<<2);
glEnableVertexAttribArray(1);
Mapping part:
ByteBuffer dataBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY, NumberofIndecies << 2, null);
FloatBuffer vboData = dataBuffer.order(ByteOrder.nativeOrder()).asFloatBuffer();
Building the VBO:
build(vboData);
vboData.flip();
Unmapping:
glUnmapBuffer(GL_ARRAY_BUFFER);
Unbinding:
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
That was building and mapping the VBO and VAO.
Rendering:
glBindVertexArray(vaoHandel);
glDrawArrays(GL_QUADS, 0, capacity);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glBindVertexArray(0);
So again, to recap: my problem is that the COLORS don't work; they don't show up. The QUADS that I am drawing show up, but they are WHITE, even though the colors I'm putting in are clearly RED.
I'm trying to learn modern GLSL, but I can't even display a cube...
This is how I create the VBOs:
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
glVertexPointer(3, GL_FLOAT, 0, 0L);
glBindBuffer(GL_ARRAY_BUFFER, vboNormalHandle);
glBufferData(GL_ARRAY_BUFFER, normalData, GL_STATIC_DRAW);
glNormalPointer(GL_FLOAT, 0, 0L);
glBindBuffer(GL_ARRAY_BUFFER, vboTextureHandle);
glBufferData(GL_ARRAY_BUFFER, textureData, GL_STATIC_DRAW);
glTexCoordPointer(2, GL_FLOAT, 0, 0L);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is how I render the VBOs:
glLoadIdentity();
glPushAttrib(GL_TRANSFORM_BIT);
glMatrixMode(GL_MODELVIEW);
glTranslatef(0f, 0f, camera.zoom);
glRotatef(camera.rotation.x, 1, 0, 0);
glRotatef(camera.rotation.y, 0, 1, 0);
glRotatef(camera.rotation.z, 0, 0, 1);
glTranslatef(camera.position.x, camera.position.y, camera.position.z);
glPopAttrib();
texture.bind();
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
glVertexPointer(3, GL_FLOAT, 0, 0L);
glBindBuffer(GL_ARRAY_BUFFER, vboNormalHandle);
glNormalPointer(GL_FLOAT, 0, 0L);
glBindBuffer(GL_ARRAY_BUFFER, vboTextureHandle);
glTexCoordPointer(2, GL_FLOAT, 0, 0L);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glMaterialf(GL_FRONT, GL_SHININESS, 10f);
glDrawArrays(GL_TRIANGLES, 0, triangles.size() * 3);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
TextureImpl.bindNone();
If I don't use any shader program, or use something written in the old GLSL 120 style, everything renders fine. But when I use this program, which I think should display my cube, I get a black screen...
Vertex shader program:
#version 330
layout (std140) uniform Matrices {
mat4 pvm;
} ;
in vec4 position;
out vec4 color;
void main()
{
gl_Position = pvm * position ;
}
Fragment shader program:
#version 330
out vec4 outputF;
void main()
{
outputF = vec4(1.0, 0.0, 0.0, 1.0);
}
What am I doing wrong? Where can I find out how to do this basic stuff with modern GLSL?
What we have here is a failure to communicate.
Consider this:
glVertexPointer(3, GL_FLOAT, 0, 0L);
OK. You're telling OpenGL that the position data is provided by some buffer object and that it has 3 floats per vertex. OK, fine.
How does OpenGL know that this position data is supposed to go to position in the vertex shader?
Answer: it doesn't.
glVertexPointer is a function that has been removed from GL 3.1+. It doesn't feed data to arbitrary vertex shader inputs; it feeds data to the removed vertex shader input gl_Vertex. This is hard-coded.
You should be using generic vertex attributes, generally through glVertexAttribPointer and glEnableVertexAttribArray. You should also be using VAOs.
Similarly, there is no way for OpenGL to know that you want its removed matrix functions to feed data to the uniform pvm. Indeed, this is much worse, because you put that in a uniform block. The data for a uniform block is supposed to come from a user-provided buffer object, which you did not provide, and which OpenGL won't magically provide for you.
In short, you can't mix old-style OpenGL code with modern-style GLSL. You can use old-style GLSL (using gl_Vertex and gl_ModelViewProjectionMatrix), but then you're not using modern-style GLSL.
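For concreteness, here is a minimal sketch of the modern-style replacement. programHandle and pvmMatrixBuffer are assumed names for your linked program and a FloatBuffer holding the 16 column-major floats of the matrix; vboVertexHandle is the buffer from the question.
// Generic vertex attribute setup, recorded in a VAO
int vao = glGenVertexArrays();
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
int positionLoc = glGetAttribLocation(programHandle, "position");
glEnableVertexAttribArray(positionLoc);
glVertexAttribPointer(positionLoc, 3, GL_FLOAT, false, 0, 0);
glBindVertexArray(0);
// The "Matrices" uniform block reads its data from a buffer you create yourself
int ubo = glGenBuffers();
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, pvmMatrixBuffer, GL_STATIC_DRAW);
int blockIndex = glGetUniformBlockIndex(programHandle, "Matrices");
glUniformBlockBinding(programHandle, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
Drawing is then just glUseProgram(programHandle), glBindVertexArray(vao) and glDrawArrays(GL_TRIANGLES, 0, vertexCount).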