I'm trying to layer/chain multiple GLES 2.0 effects/programs. In my specific case, the first pass renders a video frame, a second pass renders some particles on top, and finally I want to apply an animated zoom effect that transforms the whole composition. The way I currently chain the shaders is by compiling/attaching/linking them individually and then calling glUseProgram() for each one in a row in my onDrawFrame() method:
super.onDrawFrame(fbo); // this is where I call the previous glUseProgram() ...
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferName);
GLES20.glEnableVertexAttribArray(getHandle("aPosition"));
GLES20.glVertexAttribPointer(getHandle("aPosition"), VERTICES_DATA_POS_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_POS_OFFSET);
GLES20.glEnableVertexAttribArray(getHandle("aTextureCoord"));
GLES20.glVertexAttribPointer(getHandle("aTextureCoord"), VERTICES_DATA_UV_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_UV_OFFSET);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); // texture #3 was just a guess; the lower ids appear to refer to the particles program's textures, not the whole composition
GLES20.glUniform2f(getHandle("zoomCenter"), .5f, .5f); // passing the uniforms to the shader; I don't think there's a problem with that part, so I'm not including the shader source
GLES20.glUniform1f(getHandle("zoomFactor"), 2f);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(getHandle("aPosition"));
GLES20.glDisableVertexAttribArray(getHandle("aTextureCoord"));
This works well for the first two layers: I can draw my particles on the video as expected. The animated zoom shader by itself also works as expected when I apply it to the uncomposed video frame.
When I run the code above, however, I get something that kind of looks like a zoom on the whole image, but it gradually gets whiter with every frame and goes completely white after about a second.
I figured that might be because I called GLES20.glBlendFunc() in the previous particles program, so some sort of additive blending would not be unexpected, but GLES20.glDisable(GLES20.GL_BLEND) just gives me a black screen. Am I calling things in the wrong order? Is my assumption nonsense that GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3) could refer to the whole composition so far? Or do I just fundamentally misunderstand how one chains these shaders?
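For what it's worth, my current understanding is that chaining passes normally means rendering the earlier passes into an offscreen framebuffer and having the final pass sample that framebuffer's texture, rather than guessing at texture ids. A minimal sketch of that pattern (width, height and zoomProgram are placeholders, not code I actually have):

int[] fb = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fb, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);
// ... render the video frame and the particles here, into the FBO ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default framebuffer
GLES20.glUseProgram(zoomProgram); // the zoom pass now samples the composed texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]); // instead of a guessed id like 3

Is that the kind of setup I should be using?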
I am trying to render a terrain using my own shader and low-level OpenGL calls, but other parts of the game use SpriteBatch and other libGDX render classes.
My OpenGL terrain code renders correctly until I make a call to:
spriteBatch.draw(...);
or something similar like:
stage.draw();
After that call, my OpenGL code just doesn't draw anymore. No error, just nothing on screen.
But SpriteBatch works just fine.
After a long time, I figured out that I need to call
glEnableVertexAttribArray(...);
and
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
I need to call both before every glDrawArrays(...) call, every frame.
If I don't call the first one, nothing is rendered.
If I don't call the second one, things render at the wrong positions.
It looks like every time I use libGDX classes to render, they somehow mess up my attributes.
Init code:
shaderProgram = new ShaderProgram(baseVertexShader, baseFragmentShader);
if (!shaderProgram.isCompiled()) {
Gdx.app.error("TerrainRenderer - Cannot compile shader", shaderProgram.getLog());
}
shaderProgram.begin();
//vertexBuffers
vertexBuffer = BufferUtils.newFloatBuffer(quadPosVertices.length);
vertexBuffer.put(quadPosVertices);
vertexBuffer.rewind();
//VBOs
//generate buffers
posVertexBufferLoc = Gdx.gl.glGenBuffer();
//pass data into buffers
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glBufferData(GL20.GL_ARRAY_BUFFER, vertexBuffer.capacity()*4, vertexBuffer, GL20.GL_STATIC_DRAW);
//attributes
//locations
positionAttribLoc = shaderProgram.getAttributeLocation("position");
//attributes specifications
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
//enabling attributes
shaderProgram.enableVertexAttribute(positionAttribLoc);
//end shader
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); //unbind
Draw code:
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
shaderProgram.enableVertexAttribute(positionAttribLoc);
shaderProgram.begin();
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);
So what is the correct way to mix raw OpenGL calls with libGDX classes?
Do I really need to call those attribute functions every frame?
OpenGL is a state machine. When libGDX is doing something with OpenGL it will inevitably change the state of the OpenGL context to something different.
The canonical way to draw stuff in OpenGL is:
Set every state you depend on to the values you need for drawing.
Then draw it.
For the longest time OpenGL didn't have VAOs (vertex array objects), and you in fact had to do a glBindBuffer/glVertexAttribPointer combo every time you switched vertex buffers. Most OpenGL drivers are well optimized on that code path; as a matter of fact, back when VAOs were introduced, using them actually impaired performance. That's no longer the case, but it used to be.
Also, you can't improve performance by "saving" OpenGL calls. OpenGL isn't that low-level, and in many ways it operates a lot like a modern out-of-order-execution CPU: as long as the outcome is identical to what would render if every command were executed in order, it can delay and rearrange operations.
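Applied to your draw code, the canonical pattern is essentially what you arrived at; a sketch reusing the question's own names:

shaderProgram.begin(); // glUseProgram: part of the state you depend on
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glEnableVertexAttribArray(positionAttribLoc); // SpriteBatch may have disabled it
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0); // and repointed it
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6); // draw only once all the state is set
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); // optional courtesy cleanup
shaderProgram.end();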
We use OpenGL ES 2.0 for Android and are trying to display two cubes stacked on each other. For that we have two vertex buffers (mCubePositions1 and mCubePositions2) which store the cube data (the vertices combined into triangles), and we call a separate draw method for each of them:
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,0, mCubePositions1); //the second time mCubePositions2
GLES20.glEnableVertexAttribArray(mPositionHandle);
//some code concerning lighting and textures
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 36);
The result is that two cubes are displayed, but if we let them rotate, the cube drawn second is always displayed on top (the second cube shines through the first).
In the onSurfaceCreated method we initialise the depth-buffer:
// Set the background clear color to black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glClearDepthf(1.0f);
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDepthMask(true);
There is a solution that combines the two buffers into one and then issues just a single draw call, but that is not an option for us because we want to move the cubes separately.
If this is not enough code to answer, please ask for more.
Thank you for every answer :)
If you enable depth testing, and it still does not work, this typically means that you don't have a depth buffer.
When using a GLSurfaceView in Android, you request what buffers you need while the view is initialized, by calling the setEGLConfigChooser method. There are a few overloads for this method. The most commonly used one takes a size (number of bits) for each buffer. A typical call will look like this:
setEGLConfigChooser(8, 8, 8, 0, 16, 0);
This means that you want 8 bits each for RGB, do not require alpha, want a 16-bit depth buffer, and do not need stencil.
Note that it is not guaranteed that you will get exactly the specified sizes, but the best possible match among the available configurations.
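For context, a minimal sketch of where that call belongs; MyRenderer is a placeholder for your own GLSurfaceView.Renderer:

import android.content.Context;
import android.opengl.GLSurfaceView;

public class MyGLSurfaceView extends GLSurfaceView {
    public MyGLSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2); // request an ES 2.0 context
        setEGLConfigChooser(8, 8, 8, 0, 16, 0); // RGB888, no alpha, 16-bit depth, no stencil
        setRenderer(new MyRenderer()); // must be called after the config chooser
    }
}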
This is a continuation of my previous problem and post, seen here. Thanks to the answer I received there I feel I was able to get a little closer to my goal, as well as further my learning of OpenGL, but shortly after figuring out the basics of working with stencil buffers, I've run into a problem.
It seems that when I draw a sprite to the stencil buffer, it fills the entire square area rather than just the pixels that aren't fully transparent, as I had ignorantly hoped. I vaguely understand why it happens that way, but I am not sure where the solution lies. I have experimented with the stencil itself quite a bit, and I have modified the shaders that the SpriteBatch uses to discard low-alpha fragments, but I seem to be failing to see the bigger picture.
As a visual example of the problem, I will continue with the examples I used in the previous question. Right now, trying to draw two circles over each other (so they blend perfectly, with no overlap), I am getting this:
So, basically, is there a way for me to utilize stencil buffers using the Sprite and SpriteBatch functionality of LibGDX on complex shapes (Circles are only being used as an example), or do I need to look for an alternative route?
EDIT:
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glDepthMask(true);
batch.begin();
sprite.draw(batch);
sprite2.draw(batch);
batch.end();
Gdx.gl.glDisable(GL20.GL_DEPTH_TEST);
Stencil writing/testing happens at the fragment level. When you draw a circle, you are actually drawing a quad of fragments, where each fragment may or may not be textured. The issue is that the GPU doesn't care what color you write into a fragment when it performs the stencil test for that fragment. Therefore, you need to discard fragments to stop the stencil writes for the parts of the quad where you don't want stencil values.
TL;DR: use "if (gl_FragColor.a < 0.01) discard;" in the fragment shader to make sure only a circle of fragments (and stencil values) is generated.
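For illustration, here is what that looks like in a SpriteBatch fragment shader written as a Java string. The varyings v_color/v_texCoords and the sampler u_texture are the default names SpriteBatch's stock shader uses; the rest is a sketch, not the exact stock shader:

String frag =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec4 v_color;\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    vec4 c = v_color * texture2D(u_texture, v_texCoords);\n"
    + "    if (c.a < 0.01) discard; // no color/stencil write for this fragment\n"
    + "    gl_FragColor = c;\n"
    + "}\n";

Pair it with the default vertex shader and install it with batch.setShader(new ShaderProgram(vert, frag)).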
Maybe this helps; I solved the same problem with blending and the depth test.
First you need to clear the depth and color bits:
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
Then you need to enable some GL features:
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glDepthMask(true);
After that you can draw your objects. In my project it is something like this:
shapeRenderer.setColor(c.getR(), c.getG(), c.getB(), FIELD_ALPHA);
shapeRenderer.filledCircle(p.getPos().x + s.getOffset().x, ApplicationEnv.SCREEN_HEIGHT - p.getPos().y + s.getOffset().x, b.getRadius());
Important! After drawing, call end() on your SpriteBatch (or whatever you use) and disable the depth test:
Gdx.gl.glDisable(GL20.GL_DEPTH_TEST);
Result: (before/after screenshots were attached)
I am attempting to create a "hole in the fog" effect. I have a background grid image, overlapped onto that I have a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of "fog" that will show the area that is currently in view. I am trying to "mask" a part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (Sorry, hard to see.. Mostly Transparent):
What I want as a final Product:
I have tried:
Stencil buffer: I got this fully working except for one fact: I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different combinations of parameters, and many other methods alongside it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started with parameters found on this website: here. I used glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA), as this seemed to be what I was looking for, but this is what ended up happening as a result: (it's hard to tell what is going on, but there is fog covering the grid in the background, and the mask ends up as a fully opaque black blob when it's supposed to be a transparent part of the fog).
Some previous code:
glEnable(GL_BLEND); // actually called in the program's init function, since blending is needed through the whole render cycle
renderFogTexture(delta, 0.55f); // renders the fog texture over the background; 0.55f is the image's transparency
glBlendFunc(GL_ZERO, GL11.GL_ONE_MINUS_SRC_ALPHA); // the combination I tried from one of the many websites I visited today
renderFogCircles(delta); // draws one or more mask images to remove the fog in key places
(I would have posted more code, but after trying many things I removed some of the old code as it was getting cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
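Put together, the three steps might look like the following sketch, using LWJGL-style bindings to match the question's snippet; renderFogCircles and renderFogTexture are the question's own routines, and the mask's alpha is assumed to already be flipped as described in step 2:

// step 1: clear with the alpha set to zero
glClearColor(0f, 0f, 0f, 0f);
glClear(GL_COLOR_BUFFER_BIT);

// step 2: write the mask into the framebuffer's alpha channel only
glColorMask(false, false, false, true);
renderFogCircles(delta);
glColorMask(true, true, true, true); // restore RGB writes

// step 3: blend the fog against the stored alpha
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
renderFogTexture(delta, 0.55f);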
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably confusing if you haven't used it before, and you'll likely have to spend some time studying the man page. You'll do something like bind the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enable multitexturing, and then combine them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha by the circle texture (to zero out the alpha in the circle region), and then blend this combined texture into the background in a single pass. This is probably the conceptually easier approach, but I'm not sure whether you're using shaders.
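A hypothetical fragment shader for that route, written as a Java string; the sampler names u_fog and u_mask are made up for the sketch:

String fogFrag =
      "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_fog;\n"
    + "uniform sampler2D u_mask;\n"
    + "void main() {\n"
    + "    vec4 fog = texture2D(u_fog, v_texCoords);\n"
    + "    float hole = texture2D(u_mask, v_texCoords).a;\n"
    + "    gl_FragColor = vec4(fog.rgb, fog.a * hole); // mask alpha of 0 cuts the hole\n"
    + "}\n";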
I'm not sure there's a way to do this where you draw the fog and the mask in separate passes; since they both have their own alpha values, it will be difficult to combine them to get the right color result.
In the two attached pictures, the desktop screenshot of libgdx functions as expected. The screenshot from my Galaxy Nexus is unfortunately not as expected. I am attempting to create a simple motion blur or trail effect.
Rendering as I expected on my desktop.
Not rendering as I expected on my Galaxy Nexus.
The circle textures are drawn in a for loop during rendering, and the effect is achieved with a pixmap using an RGBA of (0, 0, 0, 0.1f) that is drawn before the circles.
screenClearSprite creation
Pixmap screenClearPixmap = new Pixmap(256, 256, Format.RGBA8888);
screenClearPixmap.setColor(Color.rgba8888(0, 0, 0, 0.1f));
screenClearPixmap.fillRectangle(0, 0, 256, 256);
screenClearTexture = new Texture(screenClearPixmap);
screenClearSprite = new Sprite(screenClearTexture);
screenClearSprite.setSize(screenWidth, screenHeight);
screenClearPixmap.dispose();
Render
batch.begin();
font.draw(batch, "fps:" + Gdx.graphics.getFramesPerSecond(), 0, 20);
screenClearSprite.draw(batch);
for (int i = 0; i < circleBodies.size(); i++) {
tempPos = circleBodies.get(i).getPosition();
batch.draw(circleTexture, (tempPos.x * SCALE) + screenWidthHalf
- circleSizeHalf, (tempPos.y * SCALE) + screenHeightHalf
- circleSizeHalf);
}
batch.end();
So, what did I do wrong? Perhaps there is a better way to get the 'motion blur' effect of movement?
Here is a different approach, where you clear your screen each time with solid color and no alpha.
This means that you will have to modify your code some. The good thing is that the way you are doing it now has some flaws: it blurs everything in motion, not just the balls, and it can quickly produce ugly artifacts unless you are careful.
Do the same as you are doing now, but instead of drawing the balls straight to the batch, draw them onto a texture you keep between frames. Each frame, draw an alpha-blended image over that balls-image, then draw the balls at their current positions on top of it, and finally draw the result to the screen. Very much like you are doing now, except you draw into something else and keep it. This way you don't rely on the viewport you are drawing onto and can keep everything separated.
This method is similar to drawing to an accumulation buffer.
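A sketch of that using libGDX's FrameBuffer; the reuse of screenClearSprite, batch and the screen size variables from the question is assumed:

FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false); // create once

fbo.begin(); // each frame: fade the old contents, then draw the balls
batch.begin();
screenClearSprite.draw(batch); // the translucent black quad fades earlier frames
// ... draw the circles here, exactly as in the question ...
batch.end();
fbo.end();

batch.begin(); // then draw the accumulated texture to the screen
batch.draw(fbo.getColorBufferTexture(), 0, 0, screenWidth, screenHeight, 0, 0, 1, 1); // swap the v coordinates if it shows up y-flipped
batch.end();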
Instead of doing it the way you are doing it, you can keep track of the n latest positions of each ball and then draw all of them each frame with decreasing alpha. This is very easy to implement. It can result in many draw calls if you have many balls or a large n, but if it's not too many it shouldn't limit your fps, and it gives you nice control.
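A sketch of that variant; TRAIL_LENGTH, drawTrail and the 0.7f falloff are placeholders, and circleTexture/batch come from the question:

import java.util.ArrayDeque;
import java.util.Deque;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.math.Vector2;

private static final int TRAIL_LENGTH = 8; // n, tune to taste
private final Deque<Vector2> trail = new ArrayDeque<Vector2>();

void drawTrail(Vector2 currentPos) {
    trail.addFirst(currentPos.cpy()); // remember the newest position
    if (trail.size() > TRAIL_LENGTH) trail.removeLast(); // forget the oldest
    float alpha = 1f;
    for (Vector2 p : trail) { // newest first, fading out
        batch.setColor(1f, 1f, 1f, alpha);
        batch.draw(circleTexture, p.x, p.y);
        alpha *= 0.7f; // each older ghost is fainter
    }
    batch.setColor(Color.WHITE); // reset the batch tint
}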
Perhaps there is a better way to get the 'motion blur' effect of movement?
To get motion blur in my game I use another approach, the particle effect. It works really well for me, and I didn't have Android/desktop problems or issues across different Android devices.
All you have to do is use libGDX's Particle Effect Editor to build your effect, load it in your project, and finally draw it at the same position as your object (and also draw your object).
Tips for making the right effect file with the Particle Editor:
- Use the same image as the object whose motion you want to blur in the particle effect.
- Limit the count (the maximum number of particles allowed).
- Disable the "velocity" and "angle" parameters.
(Screenshot: a particle effect producing the motion trail.)
Hope this helps someone!