I have been stuck on this problem since yesterday and can't figure it out. The code is below, but in short I am trying to give a mesh vertex attributes for 1. positions, 2. indices, 3. normals, and 4. a single float value.
The values are all stored in different VBOs, and after binding each VBO I declare the vertex attribute pointer. I can't get both the normals and the float value working at the same time. What I'm seeing looks as if the float value is actually the x, y, or z component of the normal vec3 from the previous VBO.
GL4 gl = GLContext.getCurrentGL().getGL4();
int[] vaoids = new int[1];
gl.glGenVertexArrays(1,vaoids,0);
int[] vboids = new int[4];
gl.glGenBuffers(4,vboids,0);
gl.glBindVertexArray(vaoids[0]);
FloatBuffer verticesBuffer = FloatBuffer.allocate(mesh.vertices.length);
verticesBuffer.put(mesh.vertices);
verticesBuffer.flip();
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vboids[0]);
gl.glBufferData(gl.GL_ARRAY_BUFFER, mesh.vertices.length * 4 ,verticesBuffer,gl.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(0);
gl.glVertexAttribPointer(0, 3, gl.GL_FLOAT, false, 0, 0);
verticesBuffer.clear();
verticesBuffer = null;
//normal buffer
FloatBuffer normalBuffer = FloatBuffer.allocate(mesh.normals.length);
normalBuffer.put(mesh.normals);
normalBuffer.flip();
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vboids[2]);
gl.glBufferData(gl.GL_ARRAY_BUFFER, mesh.normals.length * 4 ,normalBuffer,gl.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(2);
gl.glVertexAttribPointer(2, 3, gl.GL_FLOAT, false, 0, 0);
normalBuffer.clear();
normalBuffer = null;
//color buffer
float[] colors = new float[mesh.vertices.length/3];
Arrays.fill(colors,255.0f);
FloatBuffer colorBuffer = FloatBuffer.allocate(colors.length);
colorBuffer.put(colors);
colorBuffer.flip();
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vboids[3]);
gl.glBufferData(gl.GL_ARRAY_BUFFER, colors.length * 4 ,colorBuffer,gl.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(3);
gl.glVertexAttribPointer(3, 1, gl.GL_FLOAT,false, 0, 0);
colorBuffer.clear();
colorBuffer = null;
IntBuffer indicesBuffer = IntBuffer.allocate(mesh.indices.length);
indicesBuffer.put(mesh.indices);
indicesBuffer.flip();
gl.glBindBuffer(gl.GL_ELEMENT_ARRAY_BUFFER, vboids[1]);
gl.glBufferData(gl.GL_ELEMENT_ARRAY_BUFFER, mesh.indices.length * 4 ,indicesBuffer,gl.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(1);
gl.glVertexAttribPointer(1, mesh.type.equals(MeshType.TRIANGLE) ? 3 : mesh.type.equals(MeshType.LINE) ? 2 : mesh.type.equals(MeshType.POINT) ? 1:0, gl.GL_UNSIGNED_INT, false, 0, 0);
indicesBuffer.clear();
indicesBuffer = null;
//gl.glBindBuffer(gl.GL_ARRAY_BUFFER,0);
gl.glBindVertexArray(0);
This is the code that declares the VAO and VBOs. I render with glDrawElements and enable the needed vertex attribute array indices before that. In my shader I access the values as follows:
layout (location=0) in vec3 position;
layout (location=2) in vec3 normal;
layout (location=3) in float color;
out vec3 normals;
out vec4 positionWorldSpace;
out flat float vertexColor;
And the fragment shader
in flat float color;
I can get each of them working separately, but if I declare both, the float values are no longer correct. The normals still seem to be right, however. As I said, the float values appear to be values from the normals. Can there be some sort of overflow from the normal VBO into the float VBO? After hours of looking at the code I just can't spot the error.
The indices are not attributes. The index buffer (GL_ELEMENT_ARRAY_BUFFER) binding is stored in the VAO directly. See Vertex Specification.
When you use glDrawArrays, the order of the vertex coordinates in the array defines the primitives. If you want to use a different order, or reuse vertices for different primitives, then you have to use glDrawElements. With glDrawElements, the primitives are defined by the vertex indices in the GL_ELEMENT_ARRAY_BUFFER buffer:
gl.glBindBuffer(gl.GL_ELEMENT_ARRAY_BUFFER, vboids[1]);
gl.glBufferData(gl.GL_ELEMENT_ARRAY_BUFFER, mesh.indices.length * 4 ,indicesBuffer,gl.GL_STATIC_DRAW);
// DELETE
//gl.glEnableVertexAttribArray(1);
//gl.glVertexAttribPointer(1, mesh.type.equals(MeshType.TRIANGLE) ? 3 : mesh.type.equals(MeshType.LINE) ? 2 : mesh.type.equals(MeshType.POINT) ? 1:0, gl.GL_UNSIGNED_INT, false, 0, 0);
indicesBuffer.clear();
indicesBuffer = null;
gl.glBindVertexArray(0);
gl.glDrawElements(gl.GL_TRIANGLES, mesh.indices.length, gl.GL_UNSIGNED_INT, null);
Related
I have a problem with rendering multiple instances of an object, using one VertexArrayObject and four VertexBufferObjects.
I cannot get my head around what's wrong with my approach.
Here are the basics:
My relatively simple Vertex-Shader-code:
#version 330 core
precision highp float;
layout (location=0) in vec3 position;
layout (location=1) in vec2 texcoord;
layout (location=3) in mat4 modelViewMatrix;
out vec2 textureCoord;
uniform mat4 pr_matrix;
void main() {
textureCoord = vec2(texcoord.x, texcoord.y);
vec4 mvPos = modelViewMatrix * vec4(position, 1.0);
gl_Position = pr_matrix * mvPos;
}
As you can see, I try to pass the model-view matrix (model and camera view combined) as a vertex attribute.
As far as I know, a vertex attribute is limited to a maximum of a vec4, which means my mat4 will actually take up 4 consecutive vec4 locations.
Each VAO and VBO exists only once. I do not use a separate one for each "gameobject", as some online tutorials do.
Therefore I update each of the buffers at specific positions. But first, let me show you the following code, which initializes them:
// VAO
this.vao = glGenVertexArrays();
glBindVertexArray(this.vao);
// buffer for vertex positions
this.positionVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, this.positionVBO);
// upload null data to allocate vbo storage in memory
glBufferData(GL_ARRAY_BUFFER, vertexpoints * Float.BYTES, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
// buffer for texture coordinates
this.textureVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, this.textureVBO);
glBufferData(GL_ARRAY_BUFFER, texturepoints * Float.BYTES, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
// buffer for transform matrices
this.matricesVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, this.matricesVBO);
glBufferData(GL_ARRAY_BUFFER, mtrxsize * Float.BYTES, GL_DYNAMIC_DRAW);
// Byte size of one vec4
int vec4Size = 4 * Float.BYTES;
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 4, GL_FLOAT, false, 4 * vec4Size, 0);
glVertexAttribDivisor(3, 1);
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 4, GL_FLOAT, false, 4 * vec4Size, 1 * vec4Size);
glVertexAttribDivisor(4, 1);
glEnableVertexAttribArray(5);
glVertexAttribPointer(5, 4, GL_FLOAT, false, 4 * vec4Size, 2 * vec4Size);
glVertexAttribDivisor(5, 1);
glEnableVertexAttribArray(6);
glVertexAttribPointer(6, 4, GL_FLOAT, false, 4 * vec4Size, 3 * vec4Size);
glVertexAttribDivisor(6, 1);
//buffer for indices
this.indicesVBO = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this.indicesVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indsize * Integer.BYTES, GL_DYNAMIC_DRAW);
//unbind buffers and array
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
As far as I'm aware, this initialization should correspond to the attributes defined in the vertex shader, with the mat4 occupying four consecutive locations.
For test purposes, I initialize 4 gameobjects (A1, A2, B1, B2), each with:
4 x vec3 points for the position-attribute, resulting in 4 * 3 = 12 floating-points pushed to the positionVBO
4 x vec2 points for the texture-attribute, resulting in 4 * 2 = 8 floating-points pushed to the textureVBO
6 x int points for the indices to draw (0, 1, 2, 2, 3, 0), pushed to the indicesVBO
1 x mat4 for the modelViewMatrix-attribute, resulting in 4 * vec4 = 4 * 4 = 16 floating-points pushed to the matricesVBO
For each gameobject I push its data to the VBOs, using the following logic:
long vertoff = gameObjectVertOffset; // offset of the current gameobject's vertex points in position data
long texoff = gameObjectTexOffset; // offset of the current gameobject's texture points in texture data
long indoff = gameObjectIndOffset; // offset of the current gameobject's indices in index data
long instoff = gameObjectMatrixOffset; // offset of the current gameobject's matrix (vec4) in matrices data
// upload new position data
if(gameObjectVertBuf.capacity() > 0) {
gameObjectVertBuf.flip();
glBindBuffer(GL_ARRAY_BUFFER, this.positionVBO);
glBufferSubData(GL_ARRAY_BUFFER, vertoff * Float.BYTES, gameObjectVertBuf);
}
// upload new texture data
if(gameObjectTexBuf.capacity() > 0) {
gameObjectTexBuf.flip();
glBindBuffer(GL_ARRAY_BUFFER, this.textureVBO);
glBufferSubData(GL_ARRAY_BUFFER, texoff * Float.BYTES, gameObjectTexBuf);
}
// upload new indices data
if(gameObjectIndBuf.capacity() > 0) {
gameObjectIndBuf.flip();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this.indicesVBO);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, indoff * Integer.BYTES, gameObjectIndBuf);
}
// upload new model matrix data
if(gameObjectMatrixBuf.capacity() > 0) {
gameObjectMatrixBuf.flip();
glBindBuffer(GL_ARRAY_BUFFER, this.matricesVBO);
glBufferSubData(GL_ARRAY_BUFFER, instoff * Float.BYTES, gameObjectMatrixBuf);
}
Now to the actual rendering:
I want to draw element A 2 times and after that, element B 2 times.
For the instanced rendering, I group the gameobjects that I know I can render in one call into lists.
I now have two lists, each with two elements in them:
List1[A1, A2]
List2[B1, B2]
Once per list I now do:
numInstances = 2;
this.vao.bind();
shaderprogram.useProgram();
glDrawElementsInstanced(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, (int) (offset * Integer.BYTES), numInstances);
offset += 6 * numInstances; // 6 indices * 2 instances
The Problem:
This results in the first two elements being rendered correctly, but the second two (from the second list / glDrawElementsInstanced() call) are rendered with the transformation matrices of the first two elements.
It does not matter which list of objects is rendered first; the second draw call always seems to use the modelViewMatrix attributes of the first one.
As far as I understood, the glVertexAttribDivisor() call should make the matrix attribute advance once per instance instead of once per vertex.
What am I missing here?
the second two (from the second list / glDrawElementsInstanced() call) are rendered with the transformation matrices of the first two elements.
That's what you asked to do. How would the system know that it would need to use the second two elements from the array instead of the first two? All it sees is another draw call, and there are no changes to VAO state between them.
The system doesn't keep up with how many instances have been used in prior draw calls. That's your job.
Now, you could change the buffer binding for the attributes in question, but it's easier to use base-instance rendering. In these drawing functions, you specify an offset that is applied to the instance index for instanced attributes. So if you want to render two instances starting at instance index 2, you do this:
glDrawElementsInstancedBaseInstance(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, (int) (offset * Integer.BYTES), 2, 2);
Base instance rendering is an OpenGL 4.2 feature.
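If GL 4.2 is not available, the other option mentioned above (changing the buffer binding for the attributes in question) could look roughly like the following sketch: re-point the four mat4 columns at the desired instance's data before each draw call. The names vao, matricesVBO, vec4Size, numIndices, offset and numInstances are taken from the question's code; firstInstance and the per-draw re-pointing logic are my assumptions, not part of the original answer.
// Sketch (assumption): emulate a base instance by offsetting the instanced
// mat4 attribute pointers into matricesVBO before each draw call.
int firstInstance = 2;                  // e.g. start at instance index 2
int matrixBytes = 4 * vec4Size;         // one mat4 per instance
glBindVertexArray(this.vao);
glBindBuffer(GL_ARRAY_BUFFER, this.matricesVBO);
for (int col = 0; col < 4; col++) {
    // locations 3..6 hold the four columns of the modelViewMatrix attribute
    glVertexAttribPointer(3 + col, 4, GL_FLOAT, false, matrixBytes,
            (long) firstInstance * matrixBytes + (long) col * vec4Size);
}
glDrawElementsInstanced(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT,
        offset * Integer.BYTES, numInstances);
The divisor does not need to be re-specified; it is part of the VAO state and stays at 1.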
As I did not find success drawing multiple triangles with a different matrix for each, for now I am stuck with transforming the vertices on the CPU and using a shader without any matrix transformation.
Help will be greatly appreciated!
Here is my current shader:
attribute vec2 vertices;
attribute vec2 textureUvs;
varying vec2 textureUv;
void main()
{
gl_Position = vec4(vertices,0.0,1.0);
textureUv = textureUvs;
};
It works very well, except that all vertices are transformed by the CPU before calling glDrawArrays(). I suppose that I would get better performance if I could send each triangle's matrix and let OpenGL compute the vertices.
And here is the draw call:
public final static void drawTexture(FloatBuffer vertices, FloatBuffer textureUvs, int textureHandle, int count, boolean triangleFan)
{
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
GLES20.glUseProgram(progTexture);
GLES20.glEnableVertexAttribArray(progTextureVertices);
GLES20.glVertexAttribPointer(progTextureVertices, 2, GLES20.GL_FLOAT, false, 2*Float.BYTES, vertices);
GLES20.glEnableVertexAttribArray(progTextureUvs);
GLES20.glVertexAttribPointer(progTextureUvs, 2, GLES20.GL_FLOAT, false, 2*Float.BYTES, textureUvs);
if(triangleFan)
{
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4 * count); //Faster 10%
}
else
{
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * count);
}
GLES20.glDisableVertexAttribArray(progTextureVertices);
GLES20.glDisableVertexAttribArray(progTextureUvs);
}
Note that this is a sprite renderer, which is why I use only 2D vertices.
I finally answered my own question, and yes, it is possible to draw multiple triangles with different matrices in OpenGL ES 2.0, and it is worth it!
The answer is related to this one: How to include model matrix to a VBO? and @httpdigest's comment.
Basically, for sprites it only requires two vec3 shader attributes, which you fill with the first and second rows of your 3x3 matrix.
Here is the shader I am using:
attribute vec3 xTransform;
attribute vec3 yTransform;
attribute vec2 vertices;
attribute vec2 textureUvs;
varying vec2 textureUv;
void main()
{
gl_Position = vec4(dot(vec3(vertices,1.0), xTransform), dot(vec3(vertices,1.0), yTransform), 0.0,1.0) ;
textureUv = textureUvs;
}
First you get the two attribute locations:
int progTextureXTransform = GLES20.glGetAttribLocation(progTexture, "xTransform");
int progTextureYTransform = GLES20.glGetAttribLocation(progTexture, "yTransform");
And for drawing you pass one vector of each per vertex:
GLES20.glEnableVertexAttribArray(progTextureXTransform);
GLES20.glVertexAttribPointer(progTextureXTransform, 3, GLES20.GL_FLOAT, false, 3*Float.BYTES, xTransforms);
GLES20.glEnableVertexAttribArray(progTextureYTransform);
GLES20.glVertexAttribPointer(progTextureYTransform, 3, GLES20.GL_FLOAT, false, 3*Float.BYTES, yTransforms);
On a Galaxy Tab 2 this is twice as fast as computing the vertices on the CPU.
xTransform is the first row of your 3x3 matrix
yTransform is the second row of your 3x3 matrix
And of course this can be extended to 3D rendering by adding a zTransform and switching to vec4.
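To make the data layout concrete, here is a minimal sketch (not from the original answer) of filling the xTransforms and yTransforms buffers from a per-sprite row-major 3x3 matrix, repeating each row once per vertex; spriteCount, spriteMatrices and the four-vertices-per-sprite layout are assumptions.
// Sketch (assumption): pack row 0 and row 1 of each sprite's 3x3 matrix,
// once per vertex, into the two attribute buffers (java.nio buffers).
int vertsPerSprite = 4; // e.g. the TRIANGLE_FAN path
FloatBuffer xTransforms = ByteBuffer
        .allocateDirect(spriteCount * vertsPerSprite * 3 * Float.BYTES)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
FloatBuffer yTransforms = ByteBuffer
        .allocateDirect(spriteCount * vertsPerSprite * 3 * Float.BYTES)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
for (int s = 0; s < spriteCount; s++) {
    float[][] m = spriteMatrices[s]; // hypothetical row-major 3x3 matrix
    for (int v = 0; v < vertsPerSprite; v++) {
        xTransforms.put(m[0][0]).put(m[0][1]).put(m[0][2]); // xTransform = row 0
        yTransforms.put(m[1][0]).put(m[1][1]).put(m[1][2]); // yTransform = row 1
    }
}
xTransforms.position(0);
yTransforms.position(0);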
I'm starting work on a simple shape-batching system for my 3D engine that will enable me to draw lines and rectangles, etc... with a lower draw call count. I think I've got the basic ideas figured out for the most part, but I'm having problems when I try to draw multiple objects (currently just lines with a thickness you can specify).
Here's a screenshot to show you what I mean:
I'm using indexed rendering with glDrawElements, and two VBOs to represent the vertex data - one for positions, and one for colours.
I construct a line for my shape-batcher by specifying start and end points, like so:
shapeRenderer.begin();
shapeRenderer.setViewMatrix(viewMatrix);
shapeRenderer.setProjectionMatrix(projectionMatrix);
shapeRenderer.setCurrentColour(0, 1f, 0);
shapeRenderer.drawLine(2, 2, 5, 2);
shapeRenderer.setCurrentColour(0, 1f, 1f);
shapeRenderer.drawLine(2, 5, 5, 5);
shapeRenderer.end();
The first line, represented in green in the screenshot, shows perfectly. If I draw only one line it's completely fine. If I were to draw only the second line it would show perfectly as well.
When I call drawLine the following code executes, which I use to compute directions and normals:
private Vector2f temp2fA = new Vector2f();
private Vector2f temp2fB = new Vector2f();
private Vector2f temp2fDir = new Vector2f();
private Vector2f temp2fNrm = new Vector2f();
private Vector2f temp2fTMP = new Vector2f();
private boolean flip = false;
public void drawLine(float xStart, float yStart, float xEnd, float yEnd){
resetLineStates();
temp2fA.set(xStart, yStart);
temp2fB.set(xEnd, yEnd);
v2fDirection(temp2fA, temp2fB, temp2fDir);
v2fNormal(temp2fDir, temp2fNrm);
float halfThickness = currentLineThickness / 2;
//System.out.println("new line called");
v2fScaleAndAdd(temp2fB, temp2fNrm, -halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fB, temp2fNrm, halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fA, temp2fNrm, halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fA, temp2fNrm, -halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
//System.out.println(indexCount + " before rendering.");
int index = indexCount;
pushIndices(index, index + 1, index + 3);
pushIndices(index + 1, index + 2, index + 3);
//System.out.println(indexCount + " after rendering.");
}
private void resetLineStates(){
temp2fA.set(0);
temp2fB.set(0);
temp2fDir.set(0);
temp2fNrm.set(0);
temp2fTMP.set(0);
}
pushIndices is the following function:
private void pushIndices(int i1, int i2, int i3){
shapeIndices.add(i1);
shapeIndices.add(i2);
shapeIndices.add(i3);
indexCount += 3;
}
And pushVertex works like so:
private void pushVertex(float x, float y, float z){
shapeVertexData[vertexDataOffset] = x;
shapeColourData[vertexDataOffset] = currentShapeColour.x;
shapeVertexData[vertexDataOffset + 1] = y;
shapeColourData[vertexDataOffset + 1] = currentShapeColour.y;
shapeVertexData[vertexDataOffset + 2] = z;
shapeColourData[vertexDataOffset + 2] = currentShapeColour.z;
//System.out.println("\tpushed vertex: " + data.x + ", " + data.y + ", 0");
vertexDataOffset += 3;
}
I'm using the following fields to store the vertex data; this is all sub-buffered to a VBO when I flush the batch (see the sketch after the field declarations below). If the vertex data arrays have not had to grow in size, I sub-buffer them into their respective VBOs, and likewise with the element buffer; if they have had to grow, then I re-buffer the VBOs to fit.
private float[] shapeVertexData;
private float[] shapeColourData;
private int vertexDataOffset;
private ArrayList<Integer> shapeIndices;
private int indexCount;
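For context, a minimal sketch of the flush path described above in the non-growing case, using LWJGL's BufferUtils; vao, positionVbo, colourVbo and ebo are hypothetical handle names, not from the original post, and the grow/re-buffer branch is omitted.
// Sketch (assumption): flush the batch by sub-buffering the CPU-side arrays
// into the existing VBOs and drawing the accumulated indices.
private void flush() {
    FloatBuffer positions = BufferUtils.createFloatBuffer(vertexDataOffset);
    positions.put(shapeVertexData, 0, vertexDataOffset).flip();
    FloatBuffer colours = BufferUtils.createFloatBuffer(vertexDataOffset);
    colours.put(shapeColourData, 0, vertexDataOffset).flip();
    IntBuffer indices = BufferUtils.createIntBuffer(indexCount);
    for (int i : shapeIndices) indices.put(i);
    indices.flip();
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, positions);   // same size, so sub-buffer
    glBindBuffer(GL_ARRAY_BUFFER, colourVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, colours);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, indices);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
    // reset the batch state for the next frame
    vertexDataOffset = 0;
    indexCount = 0;
    shapeIndices.clear();
}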
When I use my debugger in IDEA, the vertex data appears completely correct in the arrays I'm constructing, but when I explore it in RenderDoc, it's wrong. I don't understand what I'm doing wrong to get these results, and obviously the first two vertices appear completely fine even for the second rectangle, but the others are totally wrong.
I'm confident that my shaders are not the problem, as they're very simple, but here they are:
shape_render.vs (vertex shader):
#version 330
layout (location = 0) in vec3 aPosition;
layout (location = 1) in vec3 aColour;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
flat out vec3 shapeFill;
void main(){
shapeFill = aColour;
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(aPosition.x, aPosition.y, 0.0, 1.0);
}
shape_render.fs (fragment shader):
#version 330
layout (location = 0) out vec4 fragColour;
in vec3 shapeFill;
void main(){
fragColour = vec4(shapeFill, 1);
}
I think I've explained it to the best of my knowledge; any insight would be greatly appreciated. I've already checked and determined that I'm enabling the necessary vertex arrays, etc., and rendering the correct number of indices (12).
Thanks so much for having a look at this for me.
I figured it out after thinking about it for a while longer. It was to do with how I was specifying the indices: I was using the correct number of indices, but specifying them incorrectly.
For argument's sake, the first rectangle has an index base of 0, and with four vertices its indices would be, for example, (0, 1, 3) and (1, 2, 3). However, for each new rectangle I was starting the index base at the old count plus six, which makes sense for addressing an index array, but because each rectangle only adds four vertices, I was pointing indices at vertex data that didn't exist.
So instead of using indexCount += 3 each time I push indices, I'll take the current count of vertices and build my indices from that.
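A minimal sketch of that change in drawLine, assuming vertexDataOffset still counts floats (three per vertex), so the vertex count is vertexDataOffset / 3 taken before the four corners are pushed:
// Sketch: derive the index base from the number of vertices already pushed,
// not from indexCount (which grows by 6 per line while only 4 vertices are added).
int base = vertexDataOffset / 3;   // vertex count before this line's 4 corners
// ... push the four corner vertices exactly as before ...
pushIndices(base, base + 1, base + 3);
pushIndices(base + 1, base + 2, base + 3);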
I'm trying to render a scene using OpenGL ES 2.0 in an Android app; however, the screen keeps being drawn with the defined clear color.
I had success rendering simpler examples, but I'm failing when using a packed vertex buffer.
The idea is to render a scene with the camera inside a sphere. Here are some code snippets which should give some context; they are also where I think the bug is.
Fragment Shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES inputImageTexture;
void main()
{
vec4 color = texture2D(inputImageTexture, vTextureCoord);
gl_FragColor = vec4(1.0);
}
Vertex Shader:
uniform mat4 uMVPMatrix;
uniform mat4 uTransform;
attribute vec4 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
void main()
{
gl_Position = uMVPMatrix * aPosition * vec4(1, -1, 1, 1);
vec4 coord = (aTextureCoord / 2.0) + 0.5;
vTextureCoord = (uTransform * coord).xy;
}
The GL program initialization:
mProgram = new ProgramObject();
if(mProgram.init(vsh, fshResult)) {
GLES20.glClearColor(1.0f, 0.f, 0.f, 1.f);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_CULL_FACE);
uMVPMatrixLocation = mProgram.getUniformLoc("uMVPMatrix");
aPositionLocation = mProgram.attributeLocation("aPosition");
aTextureCoordLocation = mProgram.attributeLocation("aTextureCoord");
mTransformLoc = mProgram.getUniformLoc("uTransform");
}
Vertex loading:
sphere = new Sphere(SPHERE_SLICES, 0.f, 0.f, 0.f, SPHERE_RADIUS, SPHERE_INDICES_PER_VERTEX);
mProgram.bind();
final int buffers[] = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
mVertexBuffer = buffers[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, sphere.getVerticesSize(), sphere.getVertices(), GLES20.GL_STATIC_DRAW);
Render:
GLES20.glViewport(viewport.x, viewport.y, viewport.width, viewport.height);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(TEXTURE_2D_BINDABLE, texID);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
GLES20.glEnableVertexAttribArray(aPositionLocation);
GLES20.glVertexAttribPointer(aPositionLocation, 3,
GLES20.GL_FLOAT, false, sphere.getVerticesStride(), sphere.getVertices());
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
GLES20.glEnableVertexAttribArray(aTextureCoordLocation);
GLES20.glVertexAttribPointer(aTextureCoordLocation, 2,
GLES20.GL_FLOAT, false, sphere.getVerticesStride(),
sphere.getVertices().duplicate().position(3));
Matrix.multiplyMM(pvMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
Matrix.multiplyMM(mvpMatrix, 0, pvMatrix, 0, modelMatrix , 0);
GLES20.glUniformMatrix4fv(uMVPMatrixLocation, 1, false, mvpMatrix, 0);
for (int j = 0; j < sphere.getNumIndices().length; ++j) {
GLES20.glDrawElements(GLES20.GL_TRIANGLES, sphere.getNumIndices()[j], GLES20.GL_UNSIGNED_SHORT, sphere.getIndices()[j]);
}
Any insights are welcome, thank you very much.
Update
The code was running without any errors on a Galaxy S6, but now, running on a OnePlus X, I'm getting the following error:
<gl_draw_error_checks:598>: GL_INVALID_OPERATION
glDrawElements: glError 0x502
After running:
for (int j = 0; j < sphere.getNumIndices().length; ++j) {
GLES20.glDrawElements(GLES20.GL_TRIANGLES, sphere.getNumIndices()[j], GLES20.GL_UNSIGNED_SHORT, sphere.getIndices()[j]);
}
You're mixing up two ways of using vertex arrays:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
...
GLES20.glVertexAttribPointer(aPositionLocation, 3,
GLES20.GL_FLOAT, false, sphere.getVerticesStride(), sphere.getVertices());
In ES 2.0, there are two ways of using vertex arrays:
Client-side vertex arrays.
Vertex-buffer objects (VBOs).
With client-side vertex arrays, you don't store your vertex data in VBOs, but pass the vertex data directly as the last argument to the glVertexAttribPointer() call. In the C/C++ bindings this is a pointer to the data; in the Java bindings it's a Java Buffer object.
When using VBOs, you store your vertex data in a VBO, using glBufferData(). Then you bind the VBO before the glVertexAttribPointer() call, and pass the offset of the data relative to the start of the VBO as the last argument.
In your example:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
GLES20.glEnableVertexAttribArray(aPositionLocation);
GLES20.glVertexAttribPointer(aPositionLocation, 3,
GLES20.GL_FLOAT, false, sphere.getVerticesStride(), 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBuffer);
GLES20.glEnableVertexAttribArray(aTextureCoordLocation);
GLES20.glVertexAttribPointer(aTextureCoordLocation, 2,
GLES20.GL_FLOAT, false, sphere.getVerticesStride(), 12);
Note that the offset is in bytes, so it's 12 bytes if your texture coordinates are offset by 3 floats relative to the positions.
Client-side vertex arrays are deprecated, and partly unavailable, in newer versions of OpenGL. So storing the data in VBOs, as you already are, is the right way to go.
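The same idea applies to the index data, which the draw loop above still passes as client-side Buffers. This is not part of the original answer, but a sketch of moving one index array into a GL_ELEMENT_ARRAY_BUFFER and drawing with a byte offset could look like this; indexBuffer (a ShortBuffer) and numIndices are hypothetical names.
// Sketch (assumption): upload the indices of one sphere segment into a buffer object.
final int[] ib = new int[1];
GLES20.glGenBuffers(1, ib, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ib[0]);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER,
        numIndices * 2 /* bytes per GL_UNSIGNED_SHORT */,
        indexBuffer, GLES20.GL_STATIC_DRAW);
// At draw time, with the element buffer bound, pass a byte offset instead of a Buffer:
GLES20.glDrawElements(GLES20.GL_TRIANGLES, numIndices, GLES20.GL_UNSIGNED_SHORT, 0);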
I am trying to switch to VBOs, without indices for now, but all I get is a blank screen. Can someone point out why it is blank? The same code works fine if I comment out the VBO-specific code and replace the 0 (offset) in glVertexAttribPointer with mFVertexBuffer, i.e. without using VBOs.
This is my onDraw method:
GLES20.glClearColor(0.50f, 0.50f, 0.50f, 1.0f);
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// Bind default FBO
// GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, id);
int vertexCount = mCarVerticesData.length / 3;
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
checkGlError("1");
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT,false, 0, 0);
checkGlError("2");
GLES20.glEnableVertexAttribArray(positionHandle);
checkGlError("3 ");
transferTexturePoints(getTextureHandle());
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
checkGlError("glDrawArrays");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glDisableVertexAttribArray(positionHandle);
GLES20.glDisable(GLES20.GL_BLEND);
This is my vbo setup:
// Allocate and handle vertex buffer
ByteBuffer vbb2 = ByteBuffer.allocateDirect(mCarVerticesData.length
* FLOAT_SIZE_BYTES);
vbb2.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb2.asFloatBuffer();
mFVertexBuffer.put(mCarVerticesData);
mFVertexBuffer.position(0);
// Allocate and handle vertex buffer
this.buffers = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mFVertexBuffer.capacity()
* FLOAT_SIZE_BYTES, mFVertexBuffer, GLES20.GL_STATIC_DRAW);
Before linking my program:
GLES20.glBindAttribLocation(program, 0, "aPosition");
checkGlError("bindAttribLoc");
And my vertex shader is :
uniform mat4 uMVPMatrix;
attribute vec4 aPosition;
attribute vec2 aTextureCoordinate;
varying vec2 v_TextureCoordinate;
void main()
{
gl_Position = uMVPMatrix * aPosition;
v_TextureCoordinate = aTextureCoordinate;
gl_PointSize= 10.0;
}
You also need to generate an element array and call something like this in order to render your "object":
// Bind the vertex buffer
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, _bufferIds.get(2));
GLES20.glVertexAttribPointer(4, 4, GLES20.GL_FLOAT, false, 4*4, 0);
// Bind the elements buffer and draw it
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, _bufferIds.get(1));
GLES20.glDrawElements(GLES20.GL_TRIANGLES, _numElements, GLES20.GL_UNSIGNED_SHORT, 0);
Hope that helps.
I solved the problem by rewriting the way I was uploading the vertex data. I was modifying it, so I needed to call glBufferData again for it to be uploaded. And I was able to use VBOs without glDrawElements and indices, and it works well.
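For completeness, a minimal sketch of that re-upload, assuming the vertex count stays the same; the names follow the question's code, and whether glBufferData or glBufferSubData is used here is my assumption.
// Sketch (assumption): after modifying mCarVerticesData, push it back into the VBO.
mFVertexBuffer.clear();
mFVertexBuffer.put(mCarVerticesData);
mFVertexBuffer.position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
// Same size as the original allocation, so updating the existing storage is enough.
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
        mCarVerticesData.length * FLOAT_SIZE_BYTES, mFVertexBuffer);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);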