I have this line:
Gdx.gl10.glLineWidth(width);
I do intend to draw a fairly thick line, and with small values like 1 or 5 the line is, as expected, thin. But once I pass something like 10, it no longer gets any wider. I am passing in literal values in these cases, so I am under the impression that GL has some kind of limit. Would I be correct? Here's my code:
Gdx.gl.glClearColor(0,0,0,1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(cam.combined);
batch.begin();
batch.draw(bg,0,0,WIDTH,HEIGHT);
for(Spell a : spells){
    a.draw(batch);
}
lc.draw(batch);
batch.end();
//((ppux+ppuy)/2f)*4
Gdx.gl10.glLineWidth(50); //average and then say 1/4 a unit
renderer.setProjectionMatrix(cam.combined);
renderer.begin(ShapeType.Line);
lp.drawLines(renderer);
renderer.end();
batch.begin();
lp.draw(batch);
batch.end();
lp.drawLines(renderer) calls the following (I just set the color and draw the lines):
renderer.setColor(1,1,1,1);
Elem a = elems.get(spellcombo.get(0));
Vector2 last = new Vector2(a.x(),a.y());
for(int i = 1; i < spellcombo.size(); i++){
    a = elems.get(spellcombo.get(i));
    Vector2 cur = new Vector2(a.x(),a.y());
    renderer.line(last.x, last.y, cur.x, cur.y);
    last = cur;
}
renderer.line(last.x,last.y,mx,my);
Gdx.gl.glEnable(GL10.GL_BLEND);
Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
renderer.setColor(1, 0, 0, .2f);
for(Elem e : elems){
    int id = elems.indexOf(e);
    if(ComboManager.validSpell(spellcombo,id))
        renderer.line(last.x,last.y,e.x(),e.y());
}
Screenshots:
Image with glLineWidth() set to 1
Image with glLineWidth() set to 5
Image with glLineWidth() set to 10
Image with glLineWidth() set to 20
Image with glLineWidth() set to 200
I don't really know how to fix this, and Google wasn't particularly helpful.
Thanks!
From libGDX 1.0 on, there is also the ShapeRenderer's rectLine method available. Here is a simple example:
ShapeRenderer shapeRenderer = new ShapeRenderer();
shapeRenderer.begin(ShapeType.Filled);
shapeRenderer.rectLine(x1, y1, x2, y2, width);
shapeRenderer.end();
That seems to be the easiest way to draw thick lines now.
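Applied to the question's drawLines loop, the same traversal could use rectLine instead; a rough sketch, where lineWidth (in world units) is an illustrative value rather than something from the question:
// Sketch only: the question's spell-chain loop, drawn with filled rectangles
// so the thickness is not capped by the driver's line-width range.
renderer.begin(ShapeType.Filled);
renderer.setColor(1, 1, 1, 1);
Elem a = elems.get(spellcombo.get(0));
Vector2 last = new Vector2(a.x(), a.y());
for (int i = 1; i < spellcombo.size(); i++) {
    a = elems.get(spellcombo.get(i));
    Vector2 cur = new Vector2(a.x(), a.y());
    renderer.rectLine(last.x, last.y, cur.x, cur.y, lineWidth);
    last = cur;
}
renderer.rectLine(last.x, last.y, mx, my, lineWidth);
renderer.end();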
In libGDX the Gdx.gl10 object is the wrapper for the OpenGL 1.x API. The calls there are all (basically) calls into OpenGL ES (on Android) or regular OpenGL (on the desktop). Sometimes the Java layer makes changes to the API, but generally it's a pretty straightforward mapping. (On the desktop, libGDX tries to emulate the ES variant, so the API presented contains only ES-relevant calls.)
The line-drawing support in OpenGL ES is one place where ES changes from regular OpenGL. Both have limitations on supported line width, though in regular OpenGL the limitations seem to apply only to anti-aliased lines.
Regular OpenGL
http://www.opengl.org/sdk/docs/man/xhtml/glLineWidth.xml
There is a range of supported line widths. Only width 1 is guaranteed
to be supported; others depend on the implementation. To query the
range of supported widths, call glGet with argument
GL_ALIASED_LINE_WIDTH_RANGE.
OpenGL ES
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glLineWidth.xml
There is a range of supported line widths. Only width 1 is guaranteed
to be supported; others depend on the implementation. To query the
range of supported widths, call glGet with argument
GL_ALIASED_LINE_WIDTH_RANGE.
To query the limits in libGDX, use something like this:
int[] results = new int[2]; // the range query fills in a min and a max
Gdx.gl10.glGetIntegerv(GL20.GL_ALIASED_LINE_WIDTH_RANGE, results, 0);
The upshot of all this, though, is that because line drawing (at any width other than 1.0) on OpenGL ES has different run-time limitations on different platforms, you should probably use a different scheme (like rectangles) to draw fat lines.
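To illustrate the rectangle idea, a thick segment can be built from the segment's perpendicular and drawn as two filled triangles. A minimal sketch with a ShapeRenderer in ShapeType.Filled, where x1, y1, x2, y2 and width are illustrative values in world units:
// Sketch: draw the segment (x1,y1)-(x2,y2) as a quad of the given width.
float dx = x2 - x1, dy = y2 - y1;
float len = (float) Math.sqrt(dx * dx + dy * dy);
// Perpendicular unit vector, scaled to half the desired width.
float px = -dy / len * width * 0.5f;
float py =  dx / len * width * 0.5f;
// Two triangles covering the quad around the segment.
renderer.triangle(x1 + px, y1 + py, x1 - px, y1 - py, x2 - px, y2 - py);
renderer.triangle(x2 - px, y2 - py, x2 + px, y2 + py, x1 + px, y1 + py);
This is essentially what rectLine does for you in libGDX 1.0 and later.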
Related
I am trying to render a terrain using my own shader and low-level OpenGL methods.
But other parts of the game use SpriteBatch and other libGDX render classes.
My OpenGL code for the terrain renders correctly until I make a call to:
spriteBatch.draw(...);
or something similar like:
stage.draw();
After that call, my OpenGL code just doesn't draw anymore. There is no error, just nothing on screen.
SpriteBatch itself, however, keeps working fine.
After a loooong time, I figured out that I need to call
glEnableVertexAttribArray(...);
and
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
I need to call both of these before every glDrawArrays(...) call, every frame.
If I don't call the first one, nothing is rendered.
If I don't call the second one, everything renders at the wrong positions.
It looks like every time I use libGDX classes to render, they somehow mess up my attributes.
Init code:
shaderProgram = new ShaderProgram(baseVertexShader, baseFragmentShader);
if (!shaderProgram.isCompiled()) {
Gdx.app.error("TerrainRenderer - Cannot compile shader", shaderProgram.getLog());
}
shaderProgram.begin();
//vertexBuffers
vertexBuffer = BufferUtils.newFloatBuffer(quadPosVertices.length);
vertexBuffer.put(quadPosVertices);
vertexBuffer.rewind();
//VBOs
//generate buffers
posVertexBufferLoc = Gdx.gl.glGenBuffer();
//pass data into buffers
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glBufferData(GL20.GL_ARRAY_BUFFER, vertexBuffer.capacity()*4, vertexBuffer, GL20.GL_STATIC_DRAW);
//attributes
//locations
positionAttribLoc = shaderProgram.getAttributeLocation("position");
//attributes specifications
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
//enabling attributes
shaderProgram.enableVertexAttribute(positionAttribLoc);
//end shader
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); //unbind
Draw code:
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
shaderProgram.enableVertexAttribute(positionAttribLoc);
shaderProgram.begin();
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);
So what is the correct way to use OpenGL methods alongside libGDX classes?
Do I really need to call those attribute functions every frame?
OpenGL is a state machine. When libGDX is doing something with OpenGL it will inevitably change the state of the OpenGL context to something different.
The canonical way to draw stuff in OpenGL is:
Set every state you depend on to the values you need for drawing.
Then draw it.
For the longest time OpenGL didn't have VAOs (vertex array objects), and you did in fact have to do a glBindBuffer / glVertexAttribPointer combo every time you switched vertex buffers. Most OpenGL drivers are well optimized in that code path; as a matter of fact, back when VAOs got introduced, using them impaired performance. That's no longer the case, but it used to be.
Also, you can't improve performance by "saving" on OpenGL calls. OpenGL isn't that low level, and in many ways it operates a lot like a modern out-of-order-execution CPU: as long as the outcome is identical to what would render if every command were executed in order, it can delay and rearrange operations.
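In practice that means the draw code from the question is already close to the canonical form. A per-frame sketch using the same names as the question (the ordering is a suggestion, not the only valid one):
// Sketch: establish every piece of state the terrain draw depends on,
// draw, then unbind so other rendering code is not affected.
shaderProgram.begin();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
shaderProgram.enableVertexAttribute(positionAttribLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);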
I'm trying to layer / chain multiple GLES 2.0 effects / programs. In my specific case the first pass renders a video frame, then a second pass renders some particles on top and finally I want to apply an animated zoom effect that transforms the whole composition. The way I go about chaining the shaders for now is by compiling / attaching / linking them individually and then calling glUseProgram() for each one in a row in my onDrawFrame() method.
super.onDrawFrame(fbo); // this is where I call the previous glUseProgram() ...
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferName);
GLES20.glEnableVertexAttribArray(getHandle("aPosition"));
GLES20.glVertexAttribPointer(getHandle("aPosition"), VERTICES_DATA_POS_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_POS_OFFSET);
GLES20.glEnableVertexAttribArray(getHandle("aTextureCoord"));
GLES20.glVertexAttribPointer(getHandle("aTextureCoord"), VERTICES_DATA_UV_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_UV_OFFSET);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); // texture #3 was just a guess but the previous ones appear to refer to stuff from the particles program, not the whole composition
GLES20.glUniform2f(getHandle("zoomCenter"), .5f, .5f); // passing the variables to the shader, don't think there's a problem with that part, therefore not including their source
GLES20.glUniform1f(getHandle("zoomFactor"), 2f);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(getHandle("aPosition"));
GLES20.glDisableVertexAttribArray(getHandle("aTextureCoord"));
This works well for the first two layers, I can draw my particles on the video as expected. The animated zoom shader by itself also works as expected, when I apply it to the uncomposed video frame.
When I run the code above however, I get something that kind of looks like it might be a zoom on the whole image, but then it gradually gets whiter with every frame and goes completely white after around a second.
I figured that might be because I called GLES20.glBlendFunc in the previous particles GL program, so some sort of additive blending would not be unexpected, but GLES20.glDisable(GLES20.GL_BLEND); just gives me a black screen. Am I calling things in the wrong order, or is my assumption that GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); could refer to the whole composition so far simply wrong? Or maybe I just fundamentally misunderstand how one would chain those shaders.
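For context, chaining passes like this is usually done by rendering the earlier passes into an offscreen framebuffer and binding that framebuffer's color texture for the final pass, rather than guessing an existing texture id. A rough GLES 2.0 sketch; width, height, offscreenTex and offscreenFbo are assumed names, not part of the code above:
// One-time setup: a color texture plus a framebuffer that renders into it.
int[] ids = new int[1];
GLES20.glGenTextures(1, ids, 0);
int offscreenTex = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, offscreenTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glGenFramebuffers(1, ids, 0);
int offscreenFbo = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, offscreenFbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, offscreenTex, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

// Per frame: render the video + particle passes into the offscreen buffer ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, offscreenFbo);
// ... run the first two programs here ...

// ... then switch back to the default framebuffer and let the zoom pass
// sample exactly the composition that was just produced.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, offscreenTex);
// ... set the zoom uniforms and call glDrawArrays as in the snippet above ...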
As you can see in the pictures, the string rotates around its origin.
No rotation:
Rotated:
Changing the raster position or translating does not change this at all. I tried glutStrokeString and glutBitmapString. The code for the example:
gl.glColor4f((float) 1, (float) 0, (float) 0, 1.0f);
gl.glScalef(0.0015f, 0.0015f, 0.0015f);
gl.glRotatef(-angleHorizontal, 0, 1, 0);
glut.glutStrokeString(GLUT.STROKE_ROMAN, "ABCDEF");
glutBitmapCharacter:
You can't. It makes use of the (outdated, deprecated, legacy) OpenGL bitmap operations, which are always aligned to the pixel grid.
glutStrokeCharacter:
These are just regular line segments that transform through the fixed-function pipeline, or, if you're in a compatibility profile, through an early-GLSL-version shader program that uses the set of built-in variables to access the fixed-function pipeline state. In one of my codesamples programs (which I wrote to explain how the projection frustum works) I have a helper function to draw arrows with annotations. You can find the full code here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/frustum/frustum.c - the relevant function starts at line 114.
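Because stroke characters are plain geometry, the rotation pivot is just a matter of the order of the matrix operations. A rough JOGL-style sketch, assuming a GL2 context and that the string should spin around its own center (posX/posY/posZ are hypothetical placement coordinates):
// Sketch: rotate a stroke string around its center instead of its left end.
float scale = 0.0015f;
float halfLen = glut.glutStrokeLength(GLUT.STROKE_ROMAN, "ABCDEF") * 0.5f;

gl.glPushMatrix();
gl.glTranslatef(posX, posY, posZ);        // where the text should appear
gl.glRotatef(-angleHorizontal, 0, 1, 0);  // rotation now happens around the string's center
gl.glScalef(scale, scale, scale);
gl.glTranslatef(-halfLen, 0, 0);          // shift so the string is centered on the pivot
glut.glutStrokeString(GLUT.STROKE_ROMAN, "ABCDEF");
gl.glPopMatrix();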
We use OpenGL ES 2.0 for Android and are trying to display two cubes stacked on each other. For that we have two vertex buffers (mCubePositions1 and mCubePositions2) which store the cube data (the vertices combined into triangles), and we call a separate draw method for each of them:
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,0, mCubePositions1); //the second time mCubePositions2
GLES20.glEnableVertexAttribArray(mPositionHandle);
//some Code concerning lightning and textures
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 36);
The result is that two cubes are displayed, but if we let them rotate, the cube that is drawn second is always displayed on top (the second cube shines through the first).
In the onSurfaceCreated method we initialize the depth buffer:
// Set the background clear color to black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glClearDepthf(1.0f);
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDepthMask(true);
There is a solution that combines the two buffers into one and then calls just one draw method, but that is not an option for us, because we want to move the cubes separately.
If this is not enough code to answer, please ask for more.
Thank you for every answer :)
If you enable depth testing, and it still does not work, this typically means that you don't have a depth buffer.
When using a GLSurfaceView in Android, you request what buffers you need while the view is initialized, by calling the setEGLConfigChooser method. There are a few overloads for this method. The most commonly used one takes a size (number of bits) for each buffer. A typical call will look like this:
setEGLConfigChooser(8, 8, 8, 0, 16, 0);
This means that you want 8 bits each for RGB, do not require alpha, want a 16-bit depth buffer, and do not need stencil.
Note that it is not guaranteed that you will get exactly the specified sizes, but the best possible match among the available configurations.
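A minimal sketch of where that call belongs, assuming a GLSurfaceView subclass (the class names are illustrative); the config chooser has to be set before the renderer is attached:
import android.content.Context;
import android.opengl.GLSurfaceView;

public class CubesSurfaceView extends GLSurfaceView {
    public CubesSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);           // OpenGL ES 2.0 context
        setEGLConfigChooser(8, 8, 8, 0, 16, 0);  // RGB888, no alpha, 16-bit depth, no stencil
        setRenderer(new CubesRenderer(context)); // hypothetical Renderer implementation
    }
}
The depth buffer then also has to be cleared every frame in onDrawFrame, e.g. GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT), otherwise the depth values of earlier frames stick around.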
In the two attached pictures, the desktop screenshot of libgdx functions as expected. The screenshot from my Galaxy Nexus is unfortunately not as expected. I am attempting to create a simple motion blur or trail effect.
Rendering as I expected on my desktop.
Not rendering as I expected on my Galaxy Nexus.
The circle textures are drawn in a for loop during rendering, and the effect is achieved by drawing a pixmap with an RGBA of (0, 0, 0, 0.1f) before the circles.
screenClearSprite creation
Pixmap screenClearPixmap = new Pixmap(256, 256, Format.RGBA8888);
screenClearPixmap.setColor(Color.rgba8888(0, 0, 0, 0.1f));
screenClearPixmap.fillRectangle(0, 0, 256, 256);
screenClearTexture = new Texture(screenClearPixmap);
screenClearSprite = new Sprite(screenClearTexture);
screenClearSprite.setSize(screenWidth, screenHeight);
screenClearPixmap.dispose();
Render
batch.begin();
font.draw(batch, "fps:" + Gdx.graphics.getFramesPerSecond(), 0, 20);
screenClearSprite.draw(batch);
for (int i = 0; i < circleBodies.size(); i++) {
    tempPos = circleBodies.get(i).getPosition();
    batch.draw(circleTexture, (tempPos.x * SCALE) + screenWidthHalf - circleSizeHalf,
            (tempPos.y * SCALE) + screenHeightHalf - circleSizeHalf);
}
batch.end();
So, what did I do wrong? Perhaps there is a better way to get the 'motion blur' effect of movement?
Here is a different approach, where you clear your screen each time with solid color and no alpha.
This means that you will have to modify your code a bit. The reason is that the way you are doing it has some flaws: it blurs everything in motion, not just the balls, and it can quickly produce ugly artifacts unless you are careful.
Do the same as you are doing now, but instead of drawing the balls to the batch, draw them onto a texture/bitmap/whatever. Then, each frame, draw an alpha-blended dark image over that balls-image, draw the balls at their current positions on top of it, and finally draw the result to your screen. Very much like you are doing now, except you draw to something else and keep it. This way you don't depend on the viewport you are drawing onto, and you can keep everything separated.
This method is similar to drawing to an accumulation buffer.
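A rough libGDX sketch of that idea, using a FrameBuffer as the off-screen target; it keeps the question's names, and screenWidth/screenHeight are the question's screen size variables:
// Created once: an off-screen buffer the size of the screen.
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, screenWidth, screenHeight, false);

// Each frame: fade the previous contents, draw the balls on top, keep the result.
fbo.begin();
batch.begin();
screenClearSprite.draw(batch);          // the translucent dark quad from the question
// ... draw the circles here, exactly as in the question's loop ...
batch.end();
fbo.end();

// Then draw the accumulated image to the screen (the FBO texture is y-flipped).
batch.begin();
batch.draw(fbo.getColorBufferTexture(), 0, 0, screenWidth, screenHeight,
        0, 0, fbo.getColorBufferTexture().getWidth(), fbo.getColorBufferTexture().getHeight(),
        false, true);
batch.end();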
Alternatively, you can keep track of the n latest positions of each ball and draw all of them each frame with different alpha. This is very easy to implement (see the sketch below). It can result in many draw calls if you have many balls or a large n, but if it's not too much it shouldn't limit your fps, and it gives you nice control.
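A sketch of that position-history variant, keeping the question's names; TRAIL_LENGTH and the alpha falloff are arbitrary choices:
// Per ball: remember the last TRAIL_LENGTH positions (oldest first, newest last).
static final int TRAIL_LENGTH = 10;
ArrayList<Vector2> trail = new ArrayList<Vector2>();

// Each frame, after stepping the physics:
trail.add(new Vector2(tempPos.x * SCALE, tempPos.y * SCALE));
if (trail.size() > TRAIL_LENGTH) trail.remove(0);

// Draw oldest to newest so the faintest copies end up underneath.
for (int i = 0; i < trail.size(); i++) {
    float alpha = (i + 1) / (float) trail.size();   // fades in toward the newest copy
    batch.setColor(1, 1, 1, alpha);
    batch.draw(circleTexture, trail.get(i).x + screenWidthHalf - circleSizeHalf,
            trail.get(i).y + screenHeightHalf - circleSizeHalf);
}
batch.setColor(1, 1, 1, 1); // restore the batch tint for other draws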
Perhaps there is a better way to get the 'motion blur' effect of
movement?
In order to create motion blur in my game I use another approach, "the particle effect". It works really well for me, and I didn't have Android/desktop problems or issues with different Android devices.
All you have to do is use libGDX's Particle Effect Editor to make your effect, then load it in your project, and finally draw it at the same position where you draw your object (and also draw your object itself), as in the sketch below.
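A minimal sketch of the load-and-draw part, assuming the effect was exported from the editor as a file named "blur.p" with its images alongside it; the file name, objectX/objectY and objectTexture are all hypothetical:
// Loaded once, e.g. in create():
ParticleEffect effect = new ParticleEffect();
effect.load(Gdx.files.internal("blur.p"), Gdx.files.internal("")); // effect file + image directory
effect.start();

// Each frame, in render(): keep the effect at the object's position and draw both.
effect.setPosition(objectX, objectY);
batch.begin();
effect.draw(batch, Gdx.graphics.getDeltaTime());
batch.draw(objectTexture, objectX, objectY); // draw the object itself as well
batch.end();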
Tips to make the right effect file with the Particle Editor:
Use the same image as the object whose motion you want to blur in the particle effect.
Try to limit the count: the maximum number of particles allowed.
Disable the "Velocity" and "Angle" parameters.
Particle effect used to create the motion effect
Hope this will help someone!