As you can see in the pictures, the string rotates around its origin.
No rotation:
Rotated:
Changing the raster position or translating does not change this at all. I tried both glutStrokeString and glutBitmapString. The code for the example:
gl.glColor4f(1f, 0f, 0f, 1f);
gl.glScalef(0.0015f, 0.0015f, 0.0015f);
gl.glRotatef(-angleHorizontal, 0, 1, 0);
glut.glutStrokeString(GLUT.STROKE_ROMAN, "ABCDEF");
glutBitmapCharacter:
You can't. It makes use of the (outdated, deprecated, legacy) OpenGL bitmap operations, which are always aligned to the pixel grid.
glutStrokeCharacter:
These are just regular line segments that are transformed through the fixed-function pipeline (or, in a compatibility profile, through an early-GLSL shader program that uses the built-in variables to access the fixed-function pipeline state). In one of my codesamples programs (which I wrote to explain how the projection frustum works) I have a helper function to draw arrows with annotations. You can find the full code here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/frustum/frustum.c; the relevant function starts at line 114.
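Since stroke characters are ordinary line segments going through the current modelview matrix, you control the pivot yourself: translate to the pivot, rotate, translate back. Here is the underlying math in plain Java (no OpenGL; the function name is my own, for illustration):

```java
// Minimal sketch: rotating a 2D point about a pivot, mirroring what the
// matrix stack does for glTranslatef(pivot), glRotatef(angle),
// glTranslatef(-pivot):  p' = pivot + R(angle) * (p - pivot)
public class PivotRotation {
    /** Rotate point (px, py) by 'degrees' around pivot (cx, cy). */
    public static double[] rotateAround(double px, double py,
                                        double cx, double cy,
                                        double degrees) {
        double rad = Math.toRadians(degrees);
        double cos = Math.cos(rad), sin = Math.sin(rad);
        double dx = px - cx, dy = py - cy;   // move pivot to the origin
        return new double[] {
            cx + dx * cos - dy * sin,        // rotate, then move back
            cy + dx * sin + dy * cos
        };
    }

    public static void main(String[] args) {
        // Rotating (2, 0) by 90 degrees around the origin gives (0, 2):
        double[] p = rotateAround(2, 0, 0, 0, 90);
        System.out.printf("(%.1f, %.1f)%n", p[0], p[1]);
    }
}
```

With the pivot at the string's center instead of its start, the text spins in place rather than swinging around its first character.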
I am trying to render a terrain using my own shader and low-level OpenGL methods.
But other parts of the game use SpriteBatch and other libGDX render classes.
My OpenGL code for the terrain renders correctly until I make a call to:
spriteBatch.draw(...);
or something similar like:
stage.draw();
After that call, my OpenGL code just doesn't draw anymore. No error, just nothing on screen.
But SpriteBatch works just fine.
After a long time, I figured out that I need to call
glEnableVertexAttribArray(...);
and
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
I need to call these before glDrawArrays(...), every frame, before every glDraw... call.
If I don't call first one, nothing is rendered.
If I don't call second one, it renders at the wrong positions.
It looks like every time I use libGDX classes to render, they somehow mess my attributes up.
Init code:
shaderProgram = new ShaderProgram(baseVertexShader, baseFragmentShader);
if (!shaderProgram.isCompiled()) {
Gdx.app.error("TerrainRenderer - Cannot compile shader", shaderProgram.getLog());
}
shaderProgram.begin();
//vertexBuffers
vertexBuffer = BufferUtils.newFloatBuffer(quadPosVertices.length);
vertexBuffer.put(quadPosVertices);
vertexBuffer.rewind();
//VBOs
//generate buffers
posVertexBufferLoc = Gdx.gl.glGenBuffer();
//pass data into buffers
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glBufferData(GL20.GL_ARRAY_BUFFER, vertexBuffer.capacity()*4, vertexBuffer, GL20.GL_STATIC_DRAW);
//attributes
//locations
positionAttribLoc = shaderProgram.getAttributeLocation("position");
//attributes specifications
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
//enabling attributes
shaderProgram.enableVertexAttribute(positionAttribLoc);
//end shader
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0); //unbind
Draw code:
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, posVertexBufferLoc);
Gdx.gl.glVertexAttribPointer(positionAttribLoc, 4, GL20.GL_FLOAT, false, 0, 0);
shaderProgram.enableVertexAttribute(positionAttribLoc);
shaderProgram.begin();
Gdx.gl.glDrawArrays(GL20.GL_TRIANGLES, 0, 6);
shaderProgram.end();
Gdx.gl.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);
So what is the correct way to use OpenGL methods alongside libGDX classes?
Do I really need to call those attribute functions every frame?
OpenGL is a state machine. When libGDX is doing something with OpenGL it will inevitably change the state of the OpenGL context to something different.
The canonical way to draw stuff in OpenGL is:
Set every state you depend on to the values you need for drawing.
Then draw it.
For the longest time OpenGL didn't have VAOs (vertex array objects), and you did in fact have to do a glBindBuffer/glVertexAttribPointer combo every time you switched vertex buffers. Most OpenGL drivers are well optimized in that code path; as a matter of fact, back when VAOs were introduced, using them actually impaired performance. That's no longer the case, but it used to be.
Also, you can't improve performance by "saving" on OpenGL calls. OpenGL isn't that low-level, and in many ways it operates a lot like a modern out-of-order-execution CPU: as long as the outcome is identical to what would render if every command were executed in order, it can delay and rearrange operations.
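The "state machine" point is the crux: a draw call reads whatever state is current at that moment, so anything another library bound in between silently leaks into your draw. Here is a toy model of that behavior in plain Java (not real GL; the names and the stand-in for SpriteBatch are mine):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of "OpenGL is a state machine": drawing reads the current
// state, so if another library changes state between your draws, your
// next draw sees the changed values unless you re-set everything you
// depend on first.
public class StateMachineDemo {
    static Map<String, Integer> glState = new HashMap<>();

    static void bindBuffer(int id)     { glState.put("ARRAY_BUFFER", id); }
    static int  drawWithCurrentState() { return glState.getOrDefault("ARRAY_BUFFER", 0); }

    public static void main(String[] args) {
        bindBuffer(42);                         // my terrain VBO
        int ok = drawWithCurrentState();        // draws from buffer 42

        bindBuffer(7);                          // e.g. SpriteBatch binds its own buffer
        int clobbered = drawWithCurrentState(); // now draws from buffer 7!

        bindBuffer(42);                         // the fix: re-set state every frame
        int fixed = drawWithCurrentState();
        System.out.println(ok + " " + clobbered + " " + fixed);
    }
}
```

So yes: re-issuing glBindBuffer, glVertexAttribPointer, and glEnableVertexAttribArray before each of your own draws is the expected pattern when interleaving with libGDX's renderers.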
I'm trying to layer / chain multiple GLES 2.0 effects / programs. In my specific case the first pass renders a video frame, then a second pass renders some particles on top and finally I want to apply an animated zoom effect that transforms the whole composition. The way I go about chaining the shaders for now is by compiling / attaching / linking them individually and then calling glUseProgram() for each one in a row in my onDrawFrame() method.
super.onDrawFrame(fbo); // this is where I call the previous glUseProgram() ...
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferName);
GLES20.glEnableVertexAttribArray(getHandle("aPosition"));
GLES20.glVertexAttribPointer(getHandle("aPosition"), VERTICES_DATA_POS_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_POS_OFFSET);
GLES20.glEnableVertexAttribArray(getHandle("aTextureCoord"));
GLES20.glVertexAttribPointer(getHandle("aTextureCoord"), VERTICES_DATA_UV_SIZE, GL_FLOAT, false, VERTICES_DATA_STRIDE_BYTES, VERTICES_DATA_UV_OFFSET);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); // texture #3 was just a guess but the previous ones appear to refer to stuff from the particles program, not the whole composition
GLES20.glUniform2f(getHandle("zoomCenter"), .5f, .5f); // passing the variables to the shader, don't think there's a problem with that part, therefore not including their source
GLES20.glUniform1f(getHandle("zoomFactor"), 2f);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(getHandle("aPosition"));
GLES20.glDisableVertexAttribArray(getHandle("aTextureCoord"));
This works well for the first two layers, I can draw my particles on the video as expected. The animated zoom shader by itself also works as expected, when I apply it to the uncomposed video frame.
When I run the code above however, I get something that kind of looks like it might be a zoom on the whole image, but then it gradually gets whiter with every frame and goes completely white after around a second.
I figured that might be because I called GLES20.glBlendFunc in the previous particles GL program, so some sort of additive blending would not be unexpected, but GLES20.glDisable(GLES20.GL_BLEND); just gives me a black screen. Am I calling stuff in the wrong order, or is my assumption nonsense that GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 3); could be referring to the whole composition so far? Or maybe I just fundamentally misunderstand the way one would chain those shaders.
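A common way to chain full-screen passes is "ping-pong" rendering: two offscreen targets alternate as source and destination, and each pass samples the texture the previous pass wrote to, rather than a guessed texture id. A minimal sketch of just the index bookkeeping (plain Java, not real GL; the scheme is a general technique, not taken from the question's code):

```java
// Ping-pong pass bookkeeping: pass i renders into target (i % 2) and
// samples the target the previous pass wrote to. The final pass then
// samples the texture holding the full composition so far.
public class PingPong {
    /** Destination target index (0 or 1) for each of numPasses passes. */
    public static int[] passTargets(int numPasses) {
        int[] dst = new int[numPasses];
        for (int i = 0; i < numPasses; i++) dst[i] = i % 2; // 0,1,0,1,...
        return dst;
    }

    public static void main(String[] args) {
        // 3 passes (video, particles, zoom): the zoom pass must bind the
        // texture the particle pass wrote to (target 1 here).
        int[] dst = passTargets(3);
        System.out.println(dst[0] + " " + dst[1] + " " + dst[2]);
    }
}
```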
We use OpenGL ES 2.0 on Android and are trying to display two cubes stacked on each other. For that we have two vertex buffers (mCubePositions1 and mCubePositions2) which store the cube data (the vertices combined into triangles), and we call a separate draw method for each of them:
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,0, mCubePositions1); //the second time mCubePositions2
GLES20.glEnableVertexAttribArray(mPositionHandle);
//some code concerning lighting and textures
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 36);
The result is that two cubes are displayed, but if we let them rotate, the cube which is drawn second is always displayed on top (the 2nd cube shines through the 1st).
In the onSurfaceCreated method we initialize the depth buffer:
// Set the background clear color to black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glClearDepthf(1.0f);
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDepthMask(true);
There is a solution combining the two buffers into one and then calling just one draw method, but this is not an option for us, because we want to move the cubes separately.
If this is not enough code to answer, please ask for more.
Thank you for every answer :)
If you enable depth testing, and it still does not work, this typically means that you don't have a depth buffer.
When using a GLSurfaceView in Android, you request what buffers you need while the view is initialized, by calling the setEGLConfigChooser method. There are a few overloads for this method. The most commonly used one takes a size (number of bits) for each buffer. A typical call will look like this:
setEGLConfigChooser(8, 8, 8, 0, 16, 0);
This means that you want 8 bits each for RGB, do not require alpha, want a 16-bit depth buffer, and do not need stencil.
Note that it is not guaranteed that you will get exactly the specified sizes, but the best possible match among the available configurations.
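To make "best possible match" concrete, here is a simplified stand-in for the selection idea (plain Java, not the real EGL matching algorithm, which also weighs the other channels): among available configurations, pick one that meets or exceeds the requested depth size.

```java
// Toy illustration of "best possible match": when no configuration has
// exactly the requested depth bits, pick the smallest one that is at
// least as capable. This is a simplification of what a config chooser
// does, not EGL's actual rules.
public class ConfigMatch {
    /** Returns the chosen depth size, or -1 if nothing satisfies the request. */
    public static int chooseDepth(int requestedDepth, int[] availableDepths) {
        int best = -1;
        for (int d : availableDepths) {
            if (d >= requestedDepth && (best == -1 || d < best)) best = d;
        }
        return best;
    }

    public static void main(String[] args) {
        // Requesting 16 depth bits on a device offering 0 and 24:
        System.out.println(chooseDepth(16, new int[] {0, 24})); // picks 24
    }
}
```

The practical takeaway: if you never call setEGLConfigChooser, you may end up with a configuration that has 0 depth bits, and glEnable(GL_DEPTH_TEST) then has nothing to test against.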
I have this line:
Gdx.gl10.glLineWidth(width);
Now, I do intend to draw a pretty thick line, and, unfortunately, when I type in small values like 1 or 5 the line is obviously small. But once I surpass something like 10, it no longer gets larger. I am passing in direct values in these instances, so I am under the impression that GL has a limit or something. Would I be correct? Here's my code:
Gdx.gl.glClearColor(0,0,0,1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(cam.combined);
batch.begin();
batch.draw(bg,0,0,WIDTH,HEIGHT);
for(Spell a : spells){
a.draw(batch);
}
lc.draw(batch);
batch.end();
//((ppux+ppuy)/2f)*4
Gdx.gl10.glLineWidth(50);//average and then say 1/4 a unit)
renderer.setProjectionMatrix(cam.combined);
renderer.begin(ShapeType.Line);
lp.drawLines(renderer);
renderer.end();
batch.begin();
lp.draw(batch);
batch.end();
lp.drawLines(renderer) calls the following(I just call set color, and draw line):
renderer.setColor(1,1,1,1);
Elem a = elems.get(spellcombo.get(0));
Vector2 last = new Vector2(a.x(),a.y());
for(int i = 1; i < spellcombo.size(); i++){
a = elems.get(spellcombo.get(i));
Vector2 cur = new Vector2(a.x(),a.y());
renderer.line(last.x, last.y, cur.x, cur.y);
last = cur;
}
renderer.line(last.x,last.y,mx,my);
Gdx.gl.glEnable(GL10.GL_BLEND);
Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
renderer.setColor(1, 0, 0, .2f);
for(Elem e : elems){
int id = elems.indexOf(e);
if(ComboManager.validSpell(spellcombo,id))
renderer.line(last.x,last.y,e.x(),e.y());
}
Screenshots:
Image with glLineWidth() set to 1
Image with glLineWidth() set to 5
Image with glLineWidth() set to 10
Image with glLineWidth() set to 20
Image with glLineWidth() set to 200
I don't really know how to fix this, and Google wasn't particularly helpful.
Thanks!
Since libGDX 1.0 there is also the ShapeRenderer's rectLine method available. Here is a simple example:
ShapeRenderer shapeRenderer = new ShapeRenderer();
shapeRenderer.begin(ShapeType.Filled);
shapeRenderer.rectLine(x1, y1, x2, y2, width);
shapeRenderer.end();
That seems to be the easiest way to draw thick lines now.
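What rectLine boils down to geometrically: a thick line from A to B is a filled rectangle whose corners are offset from the endpoints by half the width along the segment's normal. A sketch of that math in plain Java (my own helper, not libGDX source):

```java
// Corners of a width-thick line segment A->B, computed by offsetting
// both endpoints by (unit normal) * (width / 2) on each side.
public class RectLine {
    /** Returns the 4 corner points {x, y} of the rectangle. */
    public static double[][] corners(double ax, double ay,
                                     double bx, double by, double width) {
        double dx = bx - ax, dy = by - ay;
        double len = Math.hypot(dx, dy);
        double nx = -dy / len * width / 2;   // normal, scaled to half width
        double ny =  dx / len * width / 2;
        return new double[][] {
            {ax + nx, ay + ny}, {ax - nx, ay - ny},
            {bx - nx, by - ny}, {bx + nx, by + ny}
        };
    }

    public static void main(String[] args) {
        // Horizontal line (0,0)->(10,0) with width 4: corners at y = +-2.
        for (double[] c : corners(0, 0, 10, 0, 4))
            System.out.printf("(%.1f, %.1f)%n", c[0], c[1]);
    }
}
```

Because the result is filled triangles rather than GL lines, it is immune to the implementation-defined line width cap discussed below.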
In libGDX the Gdx.gl10 object is the wrapper for the OpenGL 1.x API. So the calls there are all (basically) calls into OpenGL ES (on Android) or regular OpenGL (on the desktop). Sometimes the Java layer makes changes to the API, but generally it's a pretty straightforward mapping. (On the desktop, libGDX tries to emulate the ES variant, so the API presented contains only ES-relevant APIs.)
The line-drawing support in OpenGL ES is one place where ES changes from regular OpenGL. Both have limitations on supported line width, though in regular OpenGL the limitations seem to apply only to anti-aliased lines.
Regular OpenGL
http://www.opengl.org/sdk/docs/man/xhtml/glLineWidth.xml
There is a range of supported line widths. Only width 1 is guaranteed to be supported; others depend on the implementation. To query the range of supported widths, call glGet with argument GL_ALIASED_LINE_WIDTH_RANGE.
OpenGL ES
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glLineWidth.xml
There is a range of supported line widths. Only width 1 is guaranteed to be supported; others depend on the implementation. To query the range of supported widths, call glGet with argument GL_ALIASED_LINE_WIDTH_RANGE.
To query the limits in Libgdx, use something like this:
int[] results = new int[2]; // the query returns two values: min and max
Gdx.gl10.glGetIntegerv(GL20.GL_ALIASED_LINE_WIDTH_RANGE, results, 0);
The upshot of all of this, though, is that because line drawing (other than width 1.0) on OpenGL ES has different run-time limitations on different platforms, you should probably use a different scheme (like rectangles) to draw fat lines.
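If you do keep using GL lines, one defensive pattern is to clamp the requested width to the queried [min, max] range so behavior is at least predictable across GPUs. A sketch (plain Java; the variable names are mine):

```java
// Clamp a requested line width into the device-reported supported range
// (the two values returned by GL_ALIASED_LINE_WIDTH_RANGE).
public class LineWidthClamp {
    public static float clampWidth(float requested, float min, float max) {
        return Math.max(min, Math.min(max, requested));
    }

    public static void main(String[] args) {
        // e.g. a device reporting a supported range of [1, 10]:
        System.out.println(clampWidth(50f, 1f, 10f)); // clamped to 10.0
    }
}
```

This explains the screenshots above: requesting 20 or 200 renders identically once the driver's maximum (apparently around 10 on that device) is exceeded.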
Can I set the current render position to be an arbitrary value, instead of just giving it an offset from the current location?
This is what I'm doing currently:
gl.glTranslatef(3.0f, 0.0f, 2.0f);
It allows me to say "I want to move left" but not "I want to move to point (2, 1, 2)". Is there a way to do the latter?
I'm using OpenGL with JOGL.
Update:
@Bahbar suggests the following:
gl.glLoadIdentity();
gl.glTranslatef(...);
When I do this, everything except six lines disappears. I'm not sure why. I'm having a problem with the far clipping plane being too close, so perhaps they're too far away to be rendered.
Yes. Just start from the identity matrix.
gl.glLoadIdentity();
gl.glTranslatef(...);
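The reason this works: translations compose, so without a reset each glTranslatef is relative to the previous one; glLoadIdentity resets the matrix, making the next translation absolute. Tracking just the position component of the matrix makes this visible (a plain-Java sketch, not JOGL):

```java
// Model of how glTranslatef composes with the current matrix, reduced
// to the translation component only.
public class TranslateDemo {
    static double[] pos = {0, 0, 0};

    static void loadIdentity() { pos = new double[] {0, 0, 0}; }

    static void translate(double x, double y, double z) {
        pos[0] += x; pos[1] += y; pos[2] += z; // composes with current state
    }

    public static void main(String[] args) {
        translate(3, 0, 2);
        translate(3, 0, 2);          // relative: now at (6, 0, 4)
        System.out.println(pos[0]);

        loadIdentity();
        translate(2, 1, 2);          // absolute: exactly (2, 1, 2)
        System.out.println(pos[0]);
    }
}
```

Note that glLoadIdentity wipes the whole modelview matrix, including any camera transform you set earlier in the frame, which is likely why geometry disappeared in the update above: the view transform has to be re-applied after the reset.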
Yes, you can set your view position using the gluLookAt command. If you look at the OpenGL FAQ, there is a question most relevant to your problem: 8.060 How do I make the camera "orbit" around a point in my scene?
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:
gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */
0, 0, 0, /* look at the origin */
0, 1, 0); /* positive Y up vector */
glRotatef(orbitDegrees, 0.f, 1.f, 0.f); /* orbit the Y axis */
/* ...where orbitDegrees is derived from mouse motion */
glCallList(SCENE); /* draw the scene */
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations.
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).
Note the gluLookAt call: the author is storing the camera position in a three-value array. If you do the same, you will be able to specify your viewpoint in absolute coordinates, exactly as you wanted.
NOTE: if you move the eye position, it's likely that you'll also want to specify the view direction. In this example, the author keeps the eye focused on the point (0, 0, 0).
There's a good chance that this question is related to what you're trying to do as well: GLU.gluLookAt in Java OpenGL bindings seems to do nothing.
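If you do take the "physically orbit the camera" route, the eye position just moves on a circle around the target; the math is plain polar coordinates. A sketch (my own helper, plain Java):

```java
// Eye position for a camera orbiting the origin around the Y axis at a
// fixed radius; the result would be fed to gluLookAt as the eye point,
// with the center kept at the origin.
public class OrbitCamera {
    public static double[] orbitEye(double radius, double degrees) {
        double rad = Math.toRadians(degrees);
        return new double[] {
            radius * Math.sin(rad), // x
            0,                      // y (stay in the XZ plane)
            radius * Math.cos(rad)  // z
        };
    }

    public static void main(String[] args) {
        double[] eye = orbitEye(5, 90);  // quarter turn: eye on the +X axis
        System.out.printf("(%.1f, %.1f, %.1f)%n", eye[0], eye[1], eye[2]);
        // then: gluLookAt(eye[0], eye[1], eye[2],  0, 0, 0,  0, 1, 0);
    }
}
```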