Android OpenGL ES 2.0 multiple objects - java

We use OpenGL ES 2.0 on Android and are trying to display two cubes stacked on top of each other. For that we have two vertex buffers (mCubePositions1 and mCubePositions2) which store the cube data (the vertices combined into triangles), and we call a separate draw method for each of them:
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,0, mCubePositions1); //the second time mCubePositions2
GLES20.glEnableVertexAttribArray(mPositionHandle);
//some code concerning lighting and textures
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 36);
The result is that both cubes are displayed, but when we let them rotate, the cube that is drawn second always appears on top (the second cube shines through the first).
In the onSurfaceCreated method we initialise the depth buffer:
// Set the background clear color to black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glClearDepthf(1.0f);
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDepthMask(true);
There is a solution that combines the two buffers into one and then issues a single draw call, but that is not an option for us, because we want to move the cubes separately.
If this is not enough code to answer, please ask for more.
Thank you for every answer :)

If you enable depth testing and it still does not work, this typically means that you don't have a depth buffer.
When using a GLSurfaceView on Android, you request the buffers you need while the view is being initialized, by calling the setEGLConfigChooser method. There are a few overloads of this method; the most commonly used one takes a size (number of bits) for each buffer. A typical call looks like this:
setEGLConfigChooser(8, 8, 8, 0, 16, 0);
This means that you want 8 bits each for RGB, do not require an alpha channel, want a 16-bit depth buffer, and do not need a stencil buffer.
Note that you are not guaranteed to get exactly the specified sizes; you get the best possible match among the available configurations.
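Wired together, the call typically goes in the view's constructor, before setRenderer(). A minimal sketch, assuming a hypothetical view class and renderer name (MyRenderer is a placeholder, not from the question); this needs the Android framework to actually run:

```java
import android.content.Context;
import android.opengl.GLSurfaceView;

// Hypothetical view class; MyRenderer stands in for your Renderer implementation.
public class MyGLSurfaceView extends GLSurfaceView {
    public MyGLSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);           // request an OpenGL ES 2.0 context
        // RGB888, no alpha, 16-bit depth buffer, no stencil;
        // must be called before setRenderer()
        setEGLConfigChooser(8, 8, 8, 0, 16, 0);
        setRenderer(new MyRenderer());
    }
}
```

With a depth buffer actually allocated, the glEnable(GL_DEPTH_TEST) setup from the question should behave as expected.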

Related

How to properly use OpenGL calls with libGDX

I am trying to render a terrain using my own shader and low-level OpenGL methods, but other parts of the game use SpriteBatch and other libGDX render classes.
My OpenGL code for the terrain renders correctly until I make a call to:
spriteBatch.draw(...);
or something similar, like:
stage.draw();
After that call, my OpenGL code just doesn't draw anymore. No error, just nothing on screen.
But SpriteBatch still works just fine.
After a long time, I figured out that I need to call
glEnableVertexAttribArray(...);
and
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
and I need to call both before every glDrawArrays(...) call, every single frame.
If I don't call the first one, nothing is rendered.
If I don't call the second one, it renders at the wrong positions.
It looks like every time I use libGDX classes to render, they somehow mess up my attributes.
Init code:
shaderProgram = new ShaderProgram(baseVertexShader, baseFragmentShader);
if (!shaderProgram.isCompiled()) {
Gdx.app.error("TerrainRenderer - Cannot compile shader", shaderProgram.getLog());
}
shaderProgram.begin();
//vertexBuffers
vertexBuffer = BufferUtils.newFloatBuffer(quadPosVertices.length);
vertexBuffer.put(quadPosVertices);
vertexBuffer.rewind();
//VBOs
//generate buffers
posVertexBufferLoc = Gdx.gl.glGenBuffer();
//pass data into buffers
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glBufferData(GL20.GL_ARRAY_BUFFER, vertexBuffer.capacity()*4, vertexBuffer, GL20.GL_STATIC_DRAW);
//attributes
//locations
positionAttribLoc = shaderProgram.getAttributeLocation("position");
//attributes specifications
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
//enabling attributes
shaderProgram.enableVertexAttribute(positionAttribLoc);
//end shader
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); //unbind
Draw code:
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
shaderProgram.enableVertexAttribute(positionAttribLoc);
shaderProgram.begin();
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);
So what is the correct way to use OpenGL methods alongside libGDX classes?
Do I really need to call those attribute functions every frame?
OpenGL is a state machine. When libGDX is doing something with OpenGL, it will inevitably change the state of the OpenGL context.
The canonical way to draw stuff in OpenGL is:
Set every state you depend on to the values you need for drawing.
Then draw it.
For the longest time OpenGL didn't have VAOs (vertex array objects), and you did in fact have to do a glBindBuffer/glVertexAttribPointer combo every time you switched vertex buffers. Most OpenGL drivers are well optimized in that code path; as a matter of fact, back when VAOs were introduced, using them actually hurt performance. That's no longer the case, but it used to be.
Also, you can't improve performance by "saving" on OpenGL calls. OpenGL isn't that low-level, and in many ways it operates a lot like a modern out-of-order-execution CPU: as long as the outcome is identical to what would render if every command were executed in order, it can delay and rearrange operations.
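Applied to the question's code (field names taken from its snippets), the canonical "set all state, then draw" loop is essentially what the question already does; repeating it every frame is the correct pattern, not a workaround. A sketch, not runnable without a live libGDX GL context:

```java
public void render() {
    // 1. Set every piece of state the draw depends on, every frame,
    //    because SpriteBatch/Stage may have changed it in the meantime.
    shaderProgram.begin();
    Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
    Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
    shaderProgram.enableVertexAttribute(positionAttribLoc);
    // 2. Then draw.
    Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
    shaderProgram.end();
    Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); // leave state clean for libGDX
}
```

The init() code can then drop its attribute-pointer and enable calls entirely, since they are re-established here each frame anyway.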

Chaining multiple GLES 2.0 programs

I'm trying to layer / chain multiple GLES 2.0 effects / programs. In my specific case the first pass renders a video frame, then a second pass renders some particles on top, and finally I want to apply an animated zoom effect that transforms the whole composition. The way I chain the shaders for now is by compiling / attaching / linking them individually and then calling glUseProgram() for each one in turn in my onDrawFrame() method.
super.onDrawFrame(fbo); // this is where I call the previous glUseProgram() ...
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferName);
GLES20.glEnableVertexAttribArray(getHandle("aPosition"));
GLES20.glVertexAttribPointer(getHandle("aPosition"), VERTICES_DATA_POS_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_POS_OFFSET);
GLES20.glEnableVertexAttribArray(getHandle("aTextureCoord"));
GLES20.glVertexAttribPointer(getHandle("aTextureCoord"), VERTICES_DATA_UV_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_UV_OFFSET);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); // texture #3 was just a guess, but the previous ones appear to refer to the particle program's textures, not the whole composition
GLES20.glUniform2f(getHandle("zoomCenter"), .5f, .5f); // passing the variables to the shader, don't think there's a problem with that part, therefore not including their source
GLES20.glUniform1f(getHandle("zoomFactor"), 2f);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(getHandle("aPosition"));
GLES20.glDisableVertexAttribArray(getHandle("aTextureCoord"));
This works well for the first two layers, I can draw my particles on the video as expected. The animated zoom shader by itself also works as expected, when I apply it to the uncomposed video frame.
When I run the code above, however, I get something that kind of looks like a zoom on the whole image, but it gradually gets whiter with every frame and goes completely white after around a second.
I figured that might be because I called GLES20.glBlendFunc in the previous particles GL program, so some sort of additive blending would not be unexpected, but GLES20.glDisable(GLES20.GL_BLEND); just gives me a black screen. Am I calling things in the wrong order, or is my assumption nonsense that GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); could refer to the whole composition so far? Or maybe I just fundamentally misunderstand how one would chain those shaders.

Rotate a GLUT bitmap/stroke string around its center?

As you can see in the pictures, the string rotates around its origin.
No rotation:
Rotated:
Changing the raster position or translating does not change this at all. I tried glutStrokeString and glutBitmapString. The code for the example:
gl.glColor4f((float) 1, (float) 0, (float) 0, 1.0f);
gl.glScalef(0.0015f, 0.0015f, 0.0015f);
gl.glRotatef(-angleHorizontal, 0, 1, 0);
glut.glutStrokeString(GLUT.STROKE_ROMAN, "ABCDEF");
glutBitmapCharacter:
You can't. It uses the (outdated, deprecated, legacy) OpenGL bitmap operations, which are always aligned to the pixel grid.
glutStrokeCharacter:
These are just regular line segments that are transformed through the fixed-function pipeline, or, in a compatibility profile, through an early-GLSL shader program that uses the set of built-in variables to access the fixed-function pipeline state. In one of my codesamples programs (which I wrote to explain how the projection frustum works) I have a helper function to draw arrows with annotations. You can find the full code here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/frustum/frustum.c (the relevant function starts in line 114).
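Since stroke text transforms like ordinary geometry, you can rotate it around its center the usual way: translate to the desired position, rotate, then shift by minus half the string's width. A sketch against the question's JOGL-style bindings; posX/posY/posZ are hypothetical placement variables, and glutStrokeLength is assumed to return the string width in stroke units (which scale with the glyphs):

```java
float scale = 0.0015f;
String text = "ABCDEF";
// width of the whole string in (unscaled) stroke units
float width = glut.glutStrokeLength(GLUT.STROKE_ROMAN, text);

gl.glPushMatrix();
gl.glTranslatef(posX, posY, posZ);          // move the pivot to where the text goes
gl.glRotatef(-angleHorizontal, 0, 1, 0);    // rotate around that pivot
gl.glScalef(scale, scale, scale);
gl.glTranslatef(-width / 2f, 0, 0);         // center the string on the pivot
glut.glutStrokeString(GLUT.STROKE_ROMAN, text);
gl.glPopMatrix();
```

The key point is the order: the centering translate is applied last (closest to the vertices), so the rotation then happens around the string's midpoint instead of its left edge.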

OpenGL Vertex Array Buffer

I'm trying to learn LWJGL (OpenGL) and I have to say I'm having a hard time.
I was trying to draw a triangle and a quad on the window and I finally managed to do it.
But I still have a question.
Sorry in advance if the question sounds stupid to you, but I haven't been able to find a very detailed tutorial on the web, so it's hard to understand since this is the first time I'm using OpenGL.
That being said, this is the relevant part of code:
public void init() {
vertexCount = indices.length;
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, coords, GL15.GL_STATIC_DRAW);
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
idxVboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, idxVboId);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indices, GL15.GL_STATIC_DRAW);
GL30.glBindVertexArray(0);
}
public void render() {
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
GL11.glDrawElements(GL11.GL_TRIANGLES, vertexCount, GL11.GL_UNSIGNED_INT, 0);
GL20.glDisableVertexAttribArray(0);
GL30.glBindVertexArray(0);
}
Let's say that the program is running at 60 fps. This means that the render method is being called by the game loop 60 times every second.
The render method steps are:
glBindVertexArray(vaoId)
glEnableVertexAttribArray(0)
Draw the quad
glDisableVertexAttribArray(0)
glBindVertexArray(0)
My question is: Is it necessary to call steps 1, 2, 4 and 5 every time? If yes why?
And the same question applies to the last line of the init() method (glBindVertexArray(0)).
Sorry for my english, it's not my mother tongue.
Thanks in advance.
My question is: Is it necessary to call steps 1, 2, 4 and 5 every time? If yes why?
No, it is not. OpenGL is designed as a state machine. You have a GL context which contains global state, and objects which you create (like VAOs and VBOs). The objects themselves can contain data and per-object state. What matters is the state that is set at the time of a particular GL function call that depends on those state values.
In the case of glDrawElements(), the vertex array pointers and enable bits, as well as the GL_ELEMENT_ARRAY_BUFFER binding, are relevant for providing the input data for the draw call. (All other state that influences drawing, like texture bindings, shader programs, depth test settings, and so on, is relevant as well, but let's not focus on those here.) All of that state is encapsulated in the vertex array object (VAO).
With the state machine design of OpenGL, state stays the same unless it is explicitly changed. Since you seem to draw only a single object and never need different attrib pointers or element arrays, you can simply set these up once and reduce your render() method to just the glDrawElements() call. This of course assumes that no other code in your render loop makes any interfering state changes.
One thing worth noting: the VAO does store the enables per attribute array, so your step 2 belongs in the VAO initialization, and step 4 is completely useless in this scheme.
This also means that when you want to manage different objects, you can create a VAO, VBO and EBO per object, and your render method would just loop over the objects, bind the appropriate VAO, and issue the draw call:
for every object obj:
    glBindVertexArray(obj.vao);
    glDrawElements(...values depending on obj...);
Binding VAO 0 is actually never strictly required in modern OpenGL. You always have to have some VAO bound at the time of a draw call, so you will eventually have to bind a non-0 VAO again anyway. The only value such unbinding provides is that it prevents accidental changes to objects. Since the traditional OpenGL API always uses the indirection of binding targets to modify an object, one can create situations where objects are bound which are not supposed to be bound at that time, resulting in hard-to-debug misbehavior between apparently unrelated parts of the code.
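Putting those points together, a trimmed version of the question's code might look like this; it uses the same LWJGL calls and field names as the question, with the enable moved into init() and render() reduced to bind-and-draw. This is a sketch and needs a live GL context to run:

```java
public void init() {
    vertexCount = indices.length;
    vaoId = GL30.glGenVertexArrays();
    GL30.glBindVertexArray(vaoId);

    vboId = GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, coords, GL15.GL_STATIC_DRAW);
    GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
    GL20.glEnableVertexAttribArray(0); // the enable is stored in the VAO, so do it once here

    idxVboId = GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, idxVboId); // this binding is stored in the VAO
    GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indices, GL15.GL_STATIC_DRAW);
}

public void render() {
    GL30.glBindVertexArray(vaoId); // restores attribute pointers, enables, and the element buffer
    GL11.glDrawElements(GL11.GL_TRIANGLES, vertexCount, GL11.GL_UNSIGNED_INT, 0);
}
```

If render() is truly the only place that touches vertex state, even the glBindVertexArray call could be hoisted out of the loop, but keeping it makes the method robust against other code changing the binding.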

Should I use several glDrawArrays() or gather all the vertices to one big glDrawArrays-call?

I'm working on a personal Java OpenGL (JOGL) project and I'm using some custom objects with separate draw functions and vertices.
public class Cube extends PhysicalObject {
public void draw(GL gl) {
gl.glColor3f(1.0f, 1.0f, 0.0f);
gl.glEnableClientState(GL.GL_VERTEX_ARRAY); // Enable Vertex Arrays
gl.glEnableClientState(GL.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertices);
gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, texCoords);
gl.glDrawArrays(GL.GL_QUADS, 0, 4*6);
gl.glDisableClientState(GL.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL.GL_TEXTURE_COORD_ARRAY);
}
I then loop through a whole bunch of these cubes, calling their draw functions.
My question is the following:
Should I gather all the vertices into one big glDrawArrays call, i.e. gather all the vertices into one big array and draw that? Does it do much for performance and fps?
The general rule is to minimize the number of OpenGL calls, especially in languages like Java or C# where there's overhead to interfacing with native code. However, you shouldn't group together objects whose properties will ever change (a different model matrix, a different color, etc.), because it's not possible to apply two separate model matrices to different parts of the same draw call. So basically: if all of your cubes never change, it's better to group them all together; otherwise keep them separate.
Another thing that will be helpful with performance is to minimize the number of state changes. If you're drawing 10,000 cubes, move the glEnableClientState and glDisableClientState calls out of the cube draw method and only call them before/after all the cubes are drawn. If they're all using the same texture, bind the texture once at the beginning and unbind it once at the end.
Oh, and if you're really worried about performance: most computers (even two-year-old netbooks) support OpenGL 1.5, so moving your data to VBOs will give you a significant performance benefit. And if you're doing something like Minecraft, the best optimization is to go through all your cubes and only draw the surface faces.
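The "gather into one big array" part is plain CPU-side work, independent of any GL context. A minimal sketch (the cube sizes and the gather helper are illustrative, not from the question): copy each cube's pretransformed vertices back-to-back into one array, upload or point at it once, and issue a single glDrawArrays over the total vertex count.

```java
import java.util.Arrays;
import java.util.List;

public class BatchVertices {
    // Copies each object's vertex data back-to-back into one array,
    // suitable for a single glVertexPointer + glDrawArrays call.
    static float[] gather(List<float[]> objects) {
        int total = 0;
        for (float[] v : objects) total += v.length;
        float[] batched = new float[total];
        int offset = 0;
        for (float[] v : objects) {
            System.arraycopy(v, 0, batched, offset, v.length);
            offset += v.length;
        }
        return batched;
    }

    public static void main(String[] args) {
        // A GL_QUADS cube is 6 faces * 4 vertices * 3 floats = 72 floats.
        float[] cubeA = new float[72];
        float[] cubeB = new float[72];
        float[] batched = gather(Arrays.asList(cubeA, cubeB));
        // One glDrawArrays(GL_QUADS, 0, batched.length / 3) would then draw both cubes.
        System.out.println(batched.length); // prints 144
    }
}
```

Per the answer above, this only pays off when the cubes share state and never move relative to each other; the vertices must already be in a common coordinate space, since a single draw call has a single model matrix.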
If your concern is performance, I don't think you'll see a very big change with either of your proposed implementations.
From my past experience, one thing that could increase performance would be to use display lists (better memory performance for sure).
This is a good OpenGL bottleneck PDF.
