I've been coding in OpenGL for a few months, and have been mainly working in 3D. My init method looks like this:
private void initGl() {
    glViewport(0, 0, Display.getWidth(), Display.getHeight());
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    GLU.gluPerspective(45.0f, (float) Display.getWidth() / (float) Display.getHeight(), 1.0f, 100.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);
    glFogf(GL_FOG_DENSITY, density);
    glHint(GL_FOG_HINT, GL_FASTEST);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    vbo = new VBO();
}
Is all of this necessary? I'm also wondering whether I'm calling loadIdentity() in the right places; should I be calling it after gluPerspective too? Basically, when is the appropriate time to call loadIdentity()?
A common misconception among OpenGL newbies is that OpenGL is somehow "initialized". OpenGL is a state-based drawing machine: all those functions in your "init" function set some drawing-related state. The peculiar thing about state machines is that, used in practice, you set all the state you need right when you need it, every time you need it. In other words: there is no such thing as an "OpenGL initialization" phase. Most of the calls in your "init" function actually belong in the drawing code.
The main exceptions are OpenGL objects that really are initialized once, like textures or VBOs.
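For example, a per-frame draw method could set most of that state itself right before it is used. This is only a rough sketch in the style of the code above; the render() name and the vbo.draw() call are placeholders, not real API:

private void render() {
    // Per-frame state: set it when drawing, not in a one-time "init".
    glViewport(0, 0, Display.getWidth(), Display.getHeight());
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    GLU.gluPerspective(45.0f, (float) Display.getWidth() / (float) Display.getHeight(), 1.0f, 100.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    vbo.draw(); // the VBO itself really is a one-time object, created once up front
}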
Related
I want to make an 8-bit style game, so all of my textures are 16x16 and are going to be scaled up to 128x128. However, I noticed that when I did this they were blurry. I anticipated this and used:
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
That did not help, though. So I set off to research and found a question that talked about using a black and white checkerboard to find out what is going on. I replaced my texture with a black and white checkerboard and this is what I got:
http://imgur.com/mScUVJz
Strange: it seems like even though I am specifying NEAREST, it is using LINEAR. Any help?
Edit: My texture is POT, as is the surface I'm drawing it to.
Texture Drawing Code
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
GL11.glEnable(GL11.GL_TEXTURE_2D);
tex.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0,0);
GL11.glVertex2d(0, 0); // Upper-left
GL11.glTexCoord2d(1,0);
GL11.glVertex2d(1024,0); // Upper-right
GL11.glTexCoord2d(1, 1);
GL11.glVertex2d(1024,1024); // Bottom-right
GL11.glTexCoord2d(0,1);
GL11.glVertex2d(0,1024); // Bottom-left
GL11.glEnd();
GL11.glDisable(GL11.GL_TEXTURE_2D);
GL11.glDisable(GL11.GL_BLEND);
GL Init Code
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glViewport(0, 0, (int) screenWidth, (int) screenHeight);
GL11.glOrtho(0, 1920, 1440, 0, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
How can I draw Slick's fonts (UnicodeFont), which can only be drawn via drawString(), which takes only x and y, in a 3D world?
I have tried this:
public void drawHudString(float _posX, float _posY, String _text, UnicodeFont _font, Color _color) {
glPushMatrix();
glRotatef(cam.rotX, -1.0f, 0.0f, 0.0f);
glRotatef(cam.rotY, 0.0f, -1.0f, 0.0f);
glRotatef(cam.rotZ, 0.0f, 0.0f, -1.0f);
glTranslatef(-cam.posX, -cam.posY, -cam.posZ + 20);
_font.drawString(_posX, _posY, _text, _color);
glPopMatrix();
}
but the text was not displayed. If I use it without the glRotatef and glTranslatef calls, the text is rendered in 3D-world coordinates.
The rendering of fonts with Slick is based on OpenGL's immediate mode. This means that the behavior of drawString is determined by the current state, or in other words, the current GL_PROJECTION and GL_MODELVIEW matrices.
Your question does not really make it clear whether you want to draw the text as a 2D overlay (probably in screen coordinates) or truly embedded in perspective in 3D space. Nevertheless, you can achieve both with drawString. drawString renders the font texture at the specified x/y coordinates in the z=0 plane. IIRC the convention in drawString is to assume a left-handed coordinate system, and the texture is visible from the negative z-side. The reason for this convention is probably that OpenGL uses a left-handed coordinate system for window space, while using a right-handed one for world/object space (see this answer for a good explanation). As a result, rendering in 2D is straightforward with Slick: set up a typical orthogonal projection matrix (as suggested by ryuyah2000) and you are good to go.
If you want to render in 3D space, you instead keep your regular perspective projection matrix (i.e., the same projection you use for rendering your world). To control the position of the text in 3D space, you have to set up the modelview matrix accordingly (i.e., align the z=0 plane with the plane in which you want the text to appear). Due to the left-hand and z-visibility conventions you may have to rotate around the z-axis by 180° and invert the handedness (just scale your modelview by -1). If you get one of these steps wrong, your text is either not visible (= you are looking at the wrong z-side) or written right-to-left (= wrong handedness). Slick's drawString method uses a scaling of 1 unit = 1 pixel of the font texture, so you also have to apply an appropriate scaling to match that to your world units.
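As a rough sketch of those steps (textX/textY/textZ, scale, and the exact rotate/flip combination are assumptions you will likely have to tweak for your own conventions):

// Render Slick text on a plane in 3D space (LWJGL immediate mode).
GL11.glPushMatrix();
GL11.glTranslatef(textX, textY, textZ);      // place the text's z=0 plane in world space
GL11.glRotatef(180.0f, 0.0f, 0.0f, 1.0f);    // compensate for drawString's left-handed convention
GL11.glScalef(-scale, scale, scale);         // flip handedness and map 1 font pixel to 'scale' world units
font.drawString(0.0f, 0.0f, "Hello", Color.white);
GL11.glPopMatrix();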
Use this:
glMatrixMode(GL_PROJECTION);
glPushMatrix();                 // save the 3D perspective projection
glLoadIdentity();
glOrtho(0, 800, 600, 0, 1, -1); // switch to a 2D screen-space projection
glMatrixMode(GL_MODELVIEW);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// render font
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glPopMatrix();                  // restore the saved perspective projection
glMatrixMode(GL_MODELVIEW);
I'm using LWJGL and the Slick framework to load textures for my OpenGL application.
I'm using this image:
And this code to import and utilize the texture:
flagTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("japan.png"));
......
flagTexture.bind();
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glPushMatrix();
GL11.glTranslatef(0.0f, 0.0f, -10.0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();
GL11.glPopMatrix();
But the end-result becomes this:
I'm not using any special settings like GL_REPEAT or anything like that. What's going on? How can I make the texture fill the given vertices?
It looks like the texture is getting padded out to the nearest power of two. There are two solutions here:
Stretch the texture out to the nearest power of two.
Calculate the difference between your texture's size and the nearest power of two and change the texture coordinates from 1.0f to textureWidth/nearestPowerOfTwoWidth and textureHeight/nearestPowerOfTwoHeight.
There might also be some specific LWJGL method to allow for non-power-of-two textures; look into that.
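For the second option, a minimal sketch of the idea, assuming Slick-Util's Texture exposes the original image size and the padded texture size via getImageWidth()/getImageHeight() and getTextureWidth()/getTextureHeight():

// Only sample the part of the padded texture that actually contains the image.
float maxU = (float) flagTexture.getImageWidth()  / (float) flagTexture.getTextureWidth();
float maxV = (float) flagTexture.getImageHeight() / (float) flagTexture.getTextureHeight();

GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(maxU, 0.0f); GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(maxU, maxV); GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, maxV); GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();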
If you need to support non-power-of-two textures, you can modify the loading method. If you use Slick2D, there's no way to do it other than to implement your own texture class (you can find examples here: Texture.java and TextureLoader.java).
The TextureLoader class contains a method, get2Fold, which is used to calculate the next power of two bigger than the texture width/height. If you want to use textures with non-power-of-two sizes, just change this method to simply return fold (the input), so that the program "thinks" the next power of two is the size of the image. That often isn't true, but if the hardware supports non-power-of-two textures (most does), this shouldn't be a problem. A more "abstract" way would be to change this line:
GL11.glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);
Here, the 4th argument is the width of the texture and the 5th is the height. If you set these to the image's width/height, it will work. Since this change is basically the same as the one before, it has the same caveats: as said before, it may slow down your image processing, and it might not be supported on all hardware.
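For reference, the first change described above would look roughly like this inside a copied TextureLoader (a sketch; it assumes the hardware supports non-power-of-two textures):

// Pretend the image size already is the "next power of two",
// so no padding is added when the texture is created.
private int get2Fold(int fold) {
    return fold;
}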
Hopefully this link will be of some help:
http://www.lwjgl.org/wiki/index.php?title=Slick-Util_Library_-_Part_1_-_Loading_Images_for_LWJGL
It looks like it's very similar to what you're doing here.
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd();
What is the proper way to implement double buffering in JOGL (Java OpenGL)?
I am trying to do that with the following code:
...
/** Creating canvas. */
GLCapabilities capabilities = new GLCapabilities();
capabilities.setDoubleBuffered(true);
GLCanvas canvas = new GLCanvas(capabilities);
...
/** Function display(…), which draws a white Rectangle on a black background. */
public void display(GLAutoDrawable drawable) {
drawable.swapBuffers();
gl = drawable.getGL();
gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
gl.glColor3f(1.0f, 1.0f, 1.0f);
gl.glBegin(GL.GL_POLYGON);
gl.glVertex2f(-0.5f, -0.5f);
gl.glVertex2f(-0.5f, 0.5f);
gl.glVertex2f(0.5f, 0.5f);
gl.glVertex2f(0.5f, -0.5f);
gl.glEnd();
}
...
/** Other functions are empty. */
Questions:
— When I'm resizing the window, I usually get flickering. As I see it, I have a mistake in my double buffering implementation.
— I'm not sure where I should place the swapBuffers call: before or after the drawing (as many sources say)? As you can see, I call swapBuffers (drawable.swapBuffers()) before drawing the rectangle; otherwise, I get noise after a resize. So what is the appropriate way to do this?
Including or omitting the line capabilities.setDoubleBuffered(true) has no effect.
If you use a GLCanvas, auto swap buffer mode is enabled by default, so you should not have to call swapBuffers() manually. Your flickering has nothing to do with double buffering; instead, set the system property sun.awt.noerasebackground to true.
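One common way to do that is to set the property early in main, before any AWT components are created; a minimal sketch:

public static void main(String[] args) {
    // Stop AWT from erasing the canvas background on resize, which is what causes the flicker.
    System.setProperty("sun.awt.noerasebackground", "true");
    // ... create the Frame and GLCanvas as before ...
}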
If JOGL is like the C/C++ version:
RMorrisey and the sample code are incorrect in suggesting the use of glFlush.
The swapBuffers function must go at the end of the drawing.
To confirm this: have the shapes animate very quickly and watch for tearing. If you get tearing, you are drawing to a single buffer; if you don't, you are using double buffering.
Here's an example of double-buffered animation using JOGL:
http://www.java-tips.org/other-api-tips/jogl/how-to-implement-a-simple-double-buffered-animation-with-mouse-e.html
Instead of calling swapBuffers(), try calling this at the end of display(...):
gl.glFlush();
It's been a while since I've done anything with JOGL; hope this helps.
I'm trying to render a colored cube after rendering other cubes that have textures. I have multiple "Drawer" objects that conform to the Drawer interface, and I pass a reference to the GL object to the draw(final GL gl) method of each implementing class. However, no matter what I do, I seem unable to render a colored cube.
Code sample:
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glColor3f( 1f, 0f, 0f );
gl.glBegin(GL.GL_QUADS);
// Front Face
Point3f point = player.getPosition();
gl.glNormal3f(0.0f, 0.0f, 1.0f);
//gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex3f(-point.x - 1.0f, -1.0f, -point.z + 1.0f);
//gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex3f(-point.x + 1.0f, -1.0f, -point.z + 1.0f);
//continue rendering rest of cube. ...
gl.glEnd();
gl.glEnable(GL.GL_TEXTURE_2D);
I've also tried putting glColor3f calls before each vertex call, but that still gives me a white cube. What's up?
There are a few things you need to make sure you do.
First off:
gl.glEnable(gl.GL_COLOR_MATERIAL);
This will let you apply colors to your vertices. (Do this before your calls to glColor3f.)
If this still does not resolve the problem, ensure that you are using blending properly (if you're using blending at all).
For most applications, you'll probably want to use
gl.glEnable(gl.GL_BLEND);
gl.glBlendFunc(gl.GL_SRC_ALPHA,gl.GL_ONE_MINUS_SRC_ALPHA);
If neither of these things solve your problem, you might have to give us some more information about what you're doing/setting up prior to this section of your code.
If lighting is enabled, color comes from the material, not from the glColor vertex colors. If the draw function you mentioned sets a material for the textured objects (and a white material under the texture would be common), then the rest of the cubes will be white. Enabling GL_COLOR_MATERIAL sets up OpenGL to take the glColor commands and update the material instead of just the vertex color, so that should work.
So, simply put, if you have lighting enabled, try GL_COLOR_MATERIAL.
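To make that concrete, a minimal fixed-function sketch in the same JOGL style as the question (the GL_AMBIENT_AND_DIFFUSE mode here is an assumption; adjust it to whatever material components you want glColor to drive):

// Let glColor update the material's ambient and diffuse components while lighting is on.
gl.glEnable(GL.GL_LIGHTING);
gl.glEnable(GL.GL_COLOR_MATERIAL);
gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE);

gl.glColor3f(1f, 0f, 0f); // now tints the lit, untextured cube red
// ... glBegin(GL.GL_QUADS) / vertices / glEnd() as in the question ...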
One thing you might want to try is calling glBindTexture(GL_TEXTURE_2D, 0); to unbind the texture.
Some things to check:
Is there a shader active?
Any gl-errors?
What other states did you change? For example, GL_COLOR_MATERIAL, blending, or lighting will change the appearance of your geometry.
Does it work if you draw the non-textured cube first? If it does, try to figure out at which point it turns white. It's also possible that the cube only shows up in the correct color in the first frame; in that case there's definitely a GL state involved.
Placing glPushAttrib/glPopAttrib at the beginning/end of your drawing methods might help, but it's better to figure out what caused the problem in the first place.
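If you want to try that last suggestion, a minimal fixed-function sketch (the attribute bits chosen here are an assumption covering the enable flags and the current color):

// Save the enable flags and current color, draw, then restore them,
// so one Drawer's state changes cannot leak into the next Drawer.
gl.glPushAttrib(GL.GL_ENABLE_BIT | GL.GL_CURRENT_BIT);

gl.glDisable(GL.GL_TEXTURE_2D);
gl.glColor3f(1f, 0f, 0f);
// ... draw the colored cube ...

gl.glPopAttrib();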