How can I draw Slick's fonts (UnicodeFont) in a 3D world, given that they can only be drawn via drawString(), which accepts only x and y coordinates?
I have tried this:
public void drawHudString(float _posX, float _posY, String _text, UnicodeFont _font, Color _color) {
    glPushMatrix();
    glRotatef(cam.rotX, -1.0f, 0.0f, 0.0f);
    glRotatef(cam.rotY, 0.0f, -1.0f, 0.0f);
    glRotatef(cam.rotZ, 0.0f, 0.0f, -1.0f);
    glTranslatef(-cam.posX, -cam.posY, -cam.posZ + 20);
    _font.drawString(_posX, _posY, _text, _color);
    glPopMatrix();
}
but the text was not displayed. If I leave out the glRotatef and glTranslatef calls, the text is rendered in 3D-world coordinates.
The rendering of fonts with Slick is based on OpenGL's immediate mode. This means that the behavior of drawString is determined by the current state, or in other words, the current GL_PROJECTION and GL_MODELVIEW matrices.
Your question does not really make it clear whether you want to draw the text as a 2D overlay (probably in screen coordinates) or truly embedded in 3D space with perspective. Nevertheless, you can achieve both with drawString. drawString renders the font texture at the specified x/y coordinates in the z=0 plane. If I recall correctly, the convention in drawString is to assume a left-handed coordinate system, with the texture visible from the negative z-side. The reason for this convention is probably that OpenGL uses a left-handed coordinate system for window space while using a right-handed one for world/object space (see this answer for a good explanation). As a result, rendering in 2D is straightforward with Slick: set up a typical orthographic projection matrix (as suggested by ryuyah2000) and you are good to go.
If you want to render in 3D space, you instead keep your regular perspective projection matrix (i.e., the same projection you use for rendering your world). To control the position of the text in 3D space, you have to set up the modelview matrix accordingly (i.e., align the z=0 plane with the plane in which you want to render your text). Due to the left-handedness and z-visibility conventions, you may have to rotate your z-axis by 180° and invert the handedness (just scale your modelview by -1). If you get one of these steps wrong, your text is either not visible (you are looking at the wrong z-side) or written right-to-left (wrong handedness). Slick's drawString method uses a scaling of 1 unit = 1 pixel of the font texture, so you also have to apply an appropriate scaling to match that to your world units.
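To make that concrete, here is a minimal, untested sketch of the 3D case (assuming LWJGL static imports and the UnicodeFont, _text, and _color from the question; textX/textY/textZ and the 0.01f scale factor are placeholder values):

glPushMatrix();
glTranslatef(textX, textY, textZ);      // where the text should sit in world space
glRotatef(180.0f, 0.0f, 1.0f, 0.0f);    // turn the visible z-side toward the camera
glScalef(0.01f, -0.01f, 0.01f);         // negative y-scale inverts handedness and undoes
                                        // Slick's y-down convention; 0.01f maps font
                                        // pixels to world units (example value)
_font.drawString(0.0f, 0.0f, _text, _color);
glPopMatrix();
// If the text is invisible, drop the 180° rotation; if it reads
// right-to-left, flip the sign of one of the scale components.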
Use this:
glMatrixMode(GL_PROJECTION);
glPushMatrix();                 // save the perspective projection
glLoadIdentity();
glOrtho(0, 800, 600, 0, 1, -1); // 2D screen coordinates, origin in the top-left
glMatrixMode(GL_MODELVIEW);
glDisable(GL_CULL_FACE);        // drawString's winding must not be culled
glDisable(GL_DEPTH_TEST);       // draw on top of the 3D scene
glClear(GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// render font
glEnable(GL_DEPTH_TEST);        // restore 3D state
glEnable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glPopMatrix();                  // restore the perspective projection
glMatrixMode(GL_MODELVIEW);
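With that state in place, the asker's helper no longer needs to undo the camera transform; a sketch:

public void drawHudString(float _posX, float _posY, String _text, UnicodeFont _font, Color _color) {
    // ...set up the orthographic overlay state shown above...
    _font.drawString(_posX, _posY, _text, _color); // _posX/_posY are pixels in the 800x600 overlay
    // ...restore the perspective state shown above...
}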
Related
I rotate an object in 3D space around X, Y, and Z in 90-degree steps (rX, rY, rZ). The angles are limited to 0-360 degrees, and I use the following commands to rotate the matrix:
Matrix.rotateM(mModelMatrix, 0, rX, 1.0f, 0.0f, 0.0f);
Matrix.rotateM(mModelMatrix, 0, rY, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mModelMatrix, 0, rZ, 0.0f, 0.0f, 1.0f);
If the object's axes before rotation are right (X+), away (Y+), and up (Z+), how can I easily calculate which directions are right, away, and up after an arbitrary rotation?
I have no other information but the rX, rY and rZ rotation variables.
Since you already have the matrix, it usually makes the most sense to multiply the basis vectors by that same matrix to get the transformed vectors. For instance, if you are looking for a vector pointing from the object toward (0,0,1) in its internal coordinate system, you would first transform the origin (0,0,0) with the matrix to get the new center, then transform the target vector (0,0,1) the same way. The result is target - origin. This procedure works for any system and any combination you need, but you do need to watch which matrix you are multiplying with, as in most cases the projection should not be included.
Another interesting solution for your specific case is to simply look at the matrix's basis vectors. The top-left 3x3 part of the matrix represents the three axes for x, y, and z, so the identity gives x=(1,0,0), y=(0,1,0), z=(0,0,1). Once the matrix is rotated or scaled, these values change and can be read directly from the matrix.
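A sketch of both approaches, assuming Android's android.opengl.Matrix and the column-major mModelMatrix from the question:

import android.opengl.Matrix;

// Approach 1: transform reference vectors through the same model matrix.
float[] origin = {0f, 0f, 0f, 1f};  // w=1: a point (affected by translation)
float[] upDir  = {0f, 0f, 1f, 0f};  // w=0: a direction (translation drops out)
float[] newOrigin = new float[4];
float[] newUp = new float[4];
Matrix.multiplyMV(newOrigin, 0, mModelMatrix, 0, origin, 0);
Matrix.multiplyMV(newUp, 0, mModelMatrix, 0, upDir, 0);
// newUp is the rotated "up"; with w=0 the target-origin subtraction is implicit.

// Approach 2: read the axes straight out of the column-major 4x4 matrix.
float[] right = {mModelMatrix[0], mModelMatrix[1], mModelMatrix[2]};  // x-axis
float[] away  = {mModelMatrix[4], mModelMatrix[5], mModelMatrix[6]};  // y-axis
float[] up    = {mModelMatrix[8], mModelMatrix[9], mModelMatrix[10]}; // z-axis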
I want to make an 8-bit-style game, so all of my textures are 16x16 and will be scaled up to 128x128. However, I noticed that when I did this they were blurry. I anticipated this and used:
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
That did not help, though. So I set off to research and found a question that suggested using a black-and-white checkerboard to see what is going on. I replaced my texture with a checkerboard and this is what I got:
http://imgur.com/mScUVJz
Strange: it seems that even though I am specifying GL_NEAREST, it is using GL_LINEAR. Any help?
Edit: My texture is power-of-two, as is the surface I'm drawing it to.
Texture Drawing Code
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
GL11.glEnable(GL11.GL_TEXTURE_2D);
tex.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0,0);
GL11.glVertex2d(0, 0); // Upper-left
GL11.glTexCoord2d(1,0);
GL11.glVertex2d(1024,0); // Upper-right
GL11.glTexCoord2d(1, 1);
GL11.glVertex2d(1024,1024); // Bottom-right
GL11.glTexCoord2d(0,1);
GL11.glVertex2d(0,1024); // Bottom-left
GL11.glEnd();
GL11.glDisable(GL11.GL_TEXTURE_2D);
GL11.glDisable(GL11.GL_BLEND);
GL Init Code
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glViewport((int)0, (int)0, (int)screenWidth, (int)screenHeight);
GL11.glOrtho(0, 1920, 1440, 0, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
I was trying to follow example code to simply display a rectangle on a black background, but nothing seemed to be displayed. What I did was:
private static void initGL(){
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, Display.getWidth(), 0, Display.getHeight(), -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST); //2D mode
    glColor3f(0.5f, 0.0f, 1.0f);
    glBegin(GL_QUADS);
    glVertex2f(-0.75f, 0.75f);
    glVertex2f(-0.75f, -0.75f);
    glVertex2f(0.75f, -0.75f);
    glVertex2f(0.75f, 0.75f);
    glEnd();
}
It doesn't display anything on the screen except for a black background. Does anyone know what I might have done wrong? I'm using LWJGL in Eclipse.
First things first: You only have to run the whole
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,Display.getWidth(),0,Display.getHeight(),-1,1);
glMatrixMode(GL_MODELVIEW);
thing once during your program, probably shortly after you run Display.create().
Also, you're tessellating using the wrong vertices. You wrote
glVertex2f(-0.75f, 0.75f);
glVertex2f(-0.75f, -0.75f);
glVertex2f(0.75f, -0.75f);
glVertex2f(0.75f, 0.75f);
which means: draw a rectangle from (-0.75, -0.75) to (0.75, 0.75), measured in pixels under your glOrtho projection. That is far too small to be noticed. My guess is you assumed glVertex2f deals with fractions of the display width. It does not: glVertex2f deals with actual coordinates; it just allows fractional pixels, unlike glVertex2i (believe it or not, this is useful, as it helps with smoother animations). Something like
glVertex2f(100F, 100F);
places a vertex at (100, 100), and is effectively equivalent to
glVertex2i(100, 100);
Also, remember that negative coordinates will be rendered off the screen, because with this projection the origin (0, 0) is in the lower left: the screen behaves like the first quadrant of the coordinate system from math class, not like the traditional computer coordinate system with (0, 0) in the upper left.
As for the black background: LWJGL's Display is black by default, so it's recommended to draw a quad in your background color that covers the entire display width and height. One quad won't really affect your performance.
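Putting it together, a minimal corrected sketch (assuming LWJGL static imports; the rectangle coordinates are arbitrary example values):

// Run once, after Display.create():
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, Display.getWidth(), 0, Display.getHeight(), -1, 1);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DEPTH_TEST);

// Run every frame:
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.5f, 0.0f, 1.0f);
glBegin(GL_QUADS);
glVertex2f(100f, 100f);   // pixel coordinates, origin in the lower-left
glVertex2f(300f, 100f);
glVertex2f(300f, 250f);
glVertex2f(100f, 250f);
glEnd();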
glVertex2f uses the same units as your glOrtho projection, so unless your display width and height are tiny (say, 10 units or less), a 1.5-unit rectangle may not be visible at all!
I'm using LWJGL and the Slick framework to load textures in my OpenGL application.
I'm using this image (japan.png):
And this code to import and utilize the texture:
flagTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("japan.png"));
......
flagTexture.bind();
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glPushMatrix();
GL11.glTranslatef(0.0f, 0.0f, -10.0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();
GL11.glPopMatrix();
But this is the end result (the texture does not fill the quad):
I'm not using any special settings like GL_REPEAT or anything like that. What's going on? How can I make the texture fill the given vertices?
It looks like the texture is getting padded out to the nearest power of two. There are two solutions here:
Stretch the texture out to the nearest power of two.
Calculate the difference between your texture's size and the nearest power of two and change the texture coordinates from 1.0f to textureWidth/nearestPowerOfTwoWidth and textureHeight/nearestPowerOfTwoHeight.
There might also be an LWJGL-specific way to allow non-power-of-two textures; look into that.
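A sketch of the second option, using hypothetical imageWidth/imageHeight (actual image size) and potWidth/potHeight (padded power-of-two size) variables; if I recall correctly, Slick-Util's texture.getWidth()/getHeight() already return exactly these ratios:

// Only sample the region of the padded texture that holds real image data.
float maxU = (float) imageWidth / (float) potWidth;
float maxV = (float) imageHeight / (float) potHeight;
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(maxU, 0.0f); GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(maxU, maxV); GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, maxV); GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();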
If you need to support non-power-of-two textures, you can modify the loading method. If you use Slick2D, there's no way to do it other than implementing your own texture class (you can find examples in Texture.java and TextureLoader.java).
The TextureLoader class contains a method get2Fold, which is used to calculate the next power of two greater than or equal to the texture width/height. So, if you want to use textures with a non-power-of-two size, just change this method to simply return fold; (i.e., the input), so the program "thinks" the next power of two is the size of the image. In many cases it isn't, but if the hardware supports non-power-of-two textures (most does), this shouldn't be a problem. A more "abstract" way would be to change this line:
GL11.glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);
Here, the 4th argument is the width of the texture and the 5th is the height. If you set these to the image's width/height, it will work. Since this method is basically the same as the one before, both share the same caveats: this may slow down your image processing, and it might not be supported on older hardware.
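In other words, the modified line would look roughly like this (a sketch; target, dstPixelFormat, srcPixelFormat, and textureBuffer are the surrounding local variables from TextureLoader):

// Use the actual image size instead of padding to the next power of two.
GL11.glTexImage2D(target, 0, dstPixelFormat,
        bufferedImage.getWidth(), bufferedImage.getHeight(), 0,
        srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);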
Hopefully this link will be of some help:
http://www.lwjgl.org/wiki/index.php?title=Slick-Util_Library_-_Part_1_-_Loading_Images_for_LWJGL
It looks like it's very similar to what you're doing here.
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd();
I'm trying to render a colored cube after rendering other cubes that have textures. I have multiple "Drawer" objects that conform to the Drawer interface, and I pass a reference to the GL object to each implementing class's draw(final GL gl) method. However, no matter what I do, I seem unable to render a colored cube.
Code sample:
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glColor3f( 1f, 0f, 0f );
gl.glBegin(GL.GL_QUADS);
// Front Face
Point3f point = player.getPosition();
gl.glNormal3f(0.0f, 0.0f, 1.0f);
//gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex3f(-point.x - 1.0f, -1.0f, -point.z + 1.0f);
//gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex3f(-point.x + 1.0f, -1.0f, -point.z + 1.0f);
//continue rendering rest of cube. ...
gl.glEnd();
gl.glEnable(GL.GL_TEXTURE_2D);
I've also tried placing the glColor3f calls before each vertex call, but that still gives me a white cube. What's up?
There are a few things you need to make sure you do.
First off:
gl.glEnable(gl.GL_COLOR_MATERIAL);
This will let you apply colors to your vertices. (Do this before your calls to glColor3f.)
If this still does not resolve the problem, ensure that you are using blending properly (if you're using blending at all.)
For most applications, you'll probably want to use
gl.glEnable(gl.GL_BLEND);
gl.glBlendFunc(gl.GL_SRC_ALPHA,gl.GL_ONE_MINUS_SRC_ALPHA);
If neither of these things solve your problem, you might have to give us some more information about what you're doing/setting up prior to this section of your code.
If lighting is enabled, color comes from the material, not from the glColor vertex colors. If the draw function you mentioned sets a material for the textured objects (and a white material under the texture would be common), then the rest of the cubes would be white. Enabling GL_COLOR_MATERIAL sets up OpenGL to take the glColor commands and update the material instead of just the vertex color, so that should work.
So, simply put, if you have lighting enabled, try GL_COLOR_MATERIAL.
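A minimal sketch in the same JOGL style as the code above (the glColorMaterial parameters are a common choice, not taken from the question):

gl.glEnable(gl.GL_LIGHTING);        // assumed already on if you see this problem
gl.glEnable(gl.GL_COLOR_MATERIAL);  // let glColor update the material...
gl.glColorMaterial(gl.GL_FRONT_AND_BACK, gl.GL_AMBIENT_AND_DIFFUSE); // ...for ambient+diffuse
gl.glColor3f(1f, 0f, 0f);           // now tints the lit cube red
// ... glBegin(GL.GL_QUADS) / vertices / glEnd() as in the question ...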
One thing you might want to try: glBindTexture(GL_TEXTURE_2D, 0); to unbind the current texture (texture object 0 is the default).
Some things to check:
Is there a shader active?
Any GL errors (check glGetError)?
What other states did you change? For example, GL_COLOR_MATERIAL, blending, or lighting will change the appearance of your geometry.
Does it work if you draw the non-textured cube first? If it does, try to figure out at which point it turns white. It's also possible that the cube only shows up in the correct color in the first frame; in that case there's definitely a GL state involved.
Placing glPushAttrib/glPopAttrib at the beginning/end of your drawing methods might help, but it's better to figure out what caused the problem in the first place.
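For that last suggestion, a sketch (the attribute bits are illustrative, chosen to cover the states mentioned above):

gl.glPushAttrib(gl.GL_ENABLE_BIT | gl.GL_CURRENT_BIT | gl.GL_LIGHTING_BIT);
// ... this Drawer's rendering code ...
gl.glPopAttrib(); // restores enable flags, current color, and lighting/material state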