I rotate an object in 3D space around X, Y and Z in 90-degree steps (rX, rY, rZ). The angles are limited to 0-360 degrees, and I use the following calls to rotate the matrix:
Matrix.rotateM(mModelMatrix, 0, rX, 1.0f, 0.0f, 0.0f);
Matrix.rotateM(mModelMatrix, 0, rY, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mModelMatrix, 0, rZ, 0.0f, 0.0f, 1.0f);
If the object's axes before rotation are right (X+), away (Y+) and up (Z+), how can I easily calculate which directions are right, away and up after an arbitrary rotation?
I have no other information but the rX, rY and rZ rotation variables.
When you have a matrix, it usually makes most sense to multiply the basis vectors by that same matrix to get the transformed vectors. For instance, if you are looking for a vector pointing from the object toward (0,0,1) in its internal coordinate system, you would first transform the origin (0,0,0) with the matrix to get the new center, then transform the target point (0,0,1) the same way. The result is then target - origin. This procedure works for any system and any combination you need, but you do need to watch which matrix you multiply with, as in most cases the projection should not be included.
Another interesting solution for your specific case might be simply looking at the matrix basis vectors. The top-left 3x3 part of the matrix actually represents the three axes for x, y and z. So for the identity matrix, x=(1,0,0), y=(0,1,0), z=(0,0,1). Once the matrix is rotated or scaled, these values change and can be read directly from the matrix.
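A minimal standalone sketch of reading those basis vectors, assuming the column-major 4x4 layout that android.opengl.Matrix uses (the class and method names here are mine, purely for illustration):

```java
public class MatrixBasis {

    // Returns column c (0 = x-axis, 1 = y-axis, 2 = z-axis) of a
    // column-major 4x4 matrix as a 3-component vector.
    public static float[] axis(float[] m, int c) {
        return new float[] { m[4 * c], m[4 * c + 1], m[4 * c + 2] };
    }

    public static void main(String[] args) {
        // A 90-degree rotation about Z in column-major order:
        // each column holds where one of the original axes ended up.
        float[] m = {
             0f, 1f, 0f, 0f,   // column 0: the rotated x-axis
            -1f, 0f, 0f, 0f,   // column 1: the rotated y-axis
             0f, 0f, 1f, 0f,   // column 2: the rotated z-axis
             0f, 0f, 0f, 1f
        };
        float[] right = axis(m, 0);
        System.out.printf("right = (%.0f, %.0f, %.0f)%n", right[0], right[1], right[2]);
        // prints: right = (0, 1, 0)
    }
}
```

The same `axis` call with indices 1 and 2 gives the rotated "away" and "up" directions.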
I wrote a simple program using OpenGL ES 2.0 and Java for Android.
This program draws a point at a random position on the screen using an orthographic projection matrix, a view matrix and a model matrix. In the shader I compute matrix * position.
All works well, but then I tried, for testing purposes, to calculate the position of the point myself. I used multiplyMV, passing my MVP matrix (obtained by first calling multiplyMM on the projection and view matrices, then on the result and the model matrix) and my point (for example 2f, 3.5f, 0f, 1f). The problem is that sometimes the result I get for x and/or y is greater than 1 or smaller than -1, even though the point is on the screen. But in normalized device coordinates a point must be in the range -1 to 1 in order to be "on screen".
I really don't understand where my mistake is.
The point has to be in normalized device space [-1, 1] (for x, y, z), after the projection matrix is applied.
But note that the point may be a homogeneous coordinate, which means you have to do a perspective divide first.
The projection matrix describes the mapping from 3D points of a scene, to 2D points of the viewport. The projection matrix transforms from view space to the clip space. The coordinates in the clip space are transformed to the normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing with the w component of the clip coordinates.
Clip space coordinates are homogeneous coordinates. In clip space the clipping of the scene is performed. A point is inside the clip volume if its x, y and z components are in the range defined by the negated w component and the w component of the point's homogeneous coordinates:
-p.w <= p.x, p.y, p.z <= p.w.
The clip coordinates are transformed to normalized device coordinates by the perspective divide:
ndc.xyz = (p.x/p.w, p.y/p.w, p.z/p.w)
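The clip test and the perspective divide above can be sketched in a few lines of plain Java (class and method names are mine, for illustration only):

```java
public class ClipToNdc {

    // Clip test from above: -w <= x, y, z <= w.
    public static boolean insideClipVolume(float[] p) {
        float w = p[3];
        return -w <= p[0] && p[0] <= w
            && -w <= p[1] && p[1] <= w
            && -w <= p[2] && p[2] <= w;
    }

    // Perspective divide: clip space -> normalized device coordinates.
    public static float[] toNdc(float[] p) {
        return new float[] { p[0] / p[3], p[1] / p[3], p[2] / p[3] };
    }

    public static void main(String[] args) {
        // x = 2 looks "off screen", but with w = 4 the point is inside
        // the clip volume and lands at ndc.x = 0.5 after the divide.
        float[] clip = { 2f, 3.5f, 0f, 4f };
        System.out.println(insideClipVolume(clip));   // prints: true
        System.out.println(toNdc(clip)[0]);           // prints: 0.5
    }
}
```

This is exactly why the raw multiplyMV result can exceed 1: it is a clip-space coordinate, not yet an NDC one.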
I rotate an object in 3D space around X, Y and Z in 90-degree steps (rX, rY, rZ). The angles are limited to 0-360 degrees, and I use the following calls to rotate the matrix:
Matrix.rotateM(mRotationMatrix, 0, rX, 1.0f, 0.0f, 0.0f);
Matrix.rotateM(mRotationMatrix, 0, rY, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mRotationMatrix, 0, rZ, 0.0f, 0.0f, 1.0f);
To figure out in which direction the up, right and away of the object ended up after the rotation, I simply read the upper-left 3x3 of the rotation matrix. That works great, by reading:
mRotationMatrix[0], mRotationMatrix[1], mRotationMatrix[2]
mRotationMatrix[4], mRotationMatrix[5], mRotationMatrix[6]
mRotationMatrix[8], mRotationMatrix[9], mRotationMatrix[10]
From here I need to figure out how I can update rX, rY, rZ using four different formulas/commands/functions, where each rotates the object back and forth along the resulting X and Y axes. Imagine the four commands can push the object in four directions, and the object will "roll" 90 degrees in that direction.
I have no other information and can only manipulate the rX, rY and rZ rotation variables. I must not add a fourth rotation to the result. Somehow I must calculate what changes to rX, rY and rZ will have the desired effect on the resulting rotation matrix, regardless of what they are already set to.
I can't figure the maths out. Knowing which side of the object is currently pointing up isn't enough, since it can be rotated in four different directions and still have the same side up, etc. Even if I know exactly in which directions the six sides of the object are pointing, I still can't figure out which of rX, rY or rZ I should manipulate to achieve the rotation.
I am starting to lean towards creating a hardcoded lookup-table for all possible combinations, but I really want to avoid that (I guess that would be 4*4*4*4, where the last four is the "roll" direction).
Another way of seeing it: instead of thinking of rolling the top side 90 degrees to any of the four sides facing left, right, away or near, one can say that I need to figure out how to manipulate rX, rY and rZ to rotate any side of the object, except the top or bottom, to the top side. I hope that explains what my goal is.
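One way to attack this math (my own sketch, not from the thread): with rotateM called in the order rX, rY, rZ starting from identity, the combined rotation is M = Rx * Ry * Rz. You can pre-multiply a 90-degree world-axis roll onto M and then decompose the product back into new rX, rY, rZ values, which is exactly the "same effect without a fourth rotation" being asked for. This assumes cos(rY) != 0 (no gimbal lock):

```java
public class EulerRoundTrip {
    // 3x3 rotation matrices, angles in degrees, matching M = Rx * Ry * Rz.
    static double[][] rotX(double a) {
        double r = Math.toRadians(a), c = Math.cos(r), s = Math.sin(r);
        return new double[][] { {1, 0, 0}, {0, c, -s}, {0, s, c} };
    }
    static double[][] rotY(double a) {
        double r = Math.toRadians(a), c = Math.cos(r), s = Math.sin(r);
        return new double[][] { {c, 0, s}, {0, 1, 0}, {-s, 0, c} };
    }
    static double[][] rotZ(double a) {
        double r = Math.toRadians(a), c = Math.cos(r), s = Math.sin(r);
        return new double[][] { {c, -s, 0}, {s, c, 0}, {0, 0, 1} };
    }
    static double[][] mul(double[][] a, double[][] b) {
        double[][] m = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    m[i][j] += a[i][k] * b[k][j];
        return m;
    }
    // Recover (rX, rY, rZ) in degrees from M = Rx * Ry * Rz.
    // Only valid away from gimbal lock (cos(rY) != 0).
    static double[] decompose(double[][] m) {
        double rY = Math.asin(m[0][2]);
        double rX = Math.atan2(-m[1][2], m[2][2]);
        double rZ = Math.atan2(-m[0][1], m[0][0]);
        return new double[] { Math.toDegrees(rX), Math.toDegrees(rY), Math.toDegrees(rZ) };
    }
    public static void main(String[] args) {
        // "Roll" the current orientation 90 degrees about the world x-axis,
        // then read back the new Euler angles.
        double[][] m = mul(rotX(30), mul(rotY(40), rotZ(50)));
        double[][] rolled = mul(rotX(90), m);
        double[] angles = decompose(rolled);
        System.out.printf("rX=%.1f rY=%.1f rZ=%.1f%n", angles[0], angles[1], angles[2]);
    }
}
```

Pre-multiplying (rotX(90) * M) rolls in the world frame; post-multiplying (M * rotX(90)) would roll about the object's own axis instead.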
How do I draw Slick's fonts (UnicodeFont), which can only be drawn via drawString(), which takes only x and y, in a 3D world?
I have tried this:
public void drawHudString(float _posX, float _posY, String _text, UnicodeFont _font, Color _color) {
    glPushMatrix();
    glRotatef(cam.rotX, -1.0f, 0.0f, 0.0f);
    glRotatef(cam.rotY, 0.0f, -1.0f, 0.0f);
    glRotatef(cam.rotZ, 0.0f, 0.0f, -1.0f);
    glTranslatef(-cam.posX, -cam.posY, -cam.posZ + 20);
    _font.drawString(_posX, _posY, _text, _color);
    glPopMatrix();
}
but the text was not displayed. If used without the glRotatef and glTranslatef calls, the text is rendered in 3D-world coordinates.
The rendering of fonts with Slick is based on OpenGL's immediate mode. This means that the behavior of drawString is determined by the current state, or in other words, the current GL_PROJECTION and GL_MODELVIEW matrices.
Your question does not really make clear whether you want to draw text as a 2D overlay (probably in screen coordinates) or truly perspectively embedded in 3D space. Nevertheless, you can achieve both with drawString. drawString renders the font texture at the specified x/y coordinates in the z=0 plane. IIRC, the convention in drawString is to assume a left-handed coordinate system, and the texture is visible from the negative-z side. The reason for this convention is probably that OpenGL uses a left-handed coordinate system for window space, while using a right-handed one for world/object space (see this answer for a good explanation). As a result, rendering in 2D is straightforward with Slick: set up a typical orthographic projection matrix (as suggested by ryuyah2000) and you are good to go.
If you want to render in 3D space, you instead keep your regular perspective projection matrix (i.e., you are using the same projection you use for rendering your world). In order to control the position of the text in 3D space you have to set up the modelview matrix accordingly (i.e., aligning the z=0 plane in the plane where you want to render your text). Due to the left-hand and z-visibility conventions you may have to rotate your z-axis by 180° and have to invert the handedness (just scale your modelview by -1). In case you get one of these steps wrong, your text is either not visible (= looking to wrong z-side) or is written right-to-left (= wrong handedness). Slick's drawString method uses a scaling of 1 unit = 1 pixel of the font texture, so you have to apply an appropriate scaling to match that to your world units.
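A rough instantiation of those steps (my own sketch, not from the answer: textX/textY/textZ and the 0.01 pixel-to-world scale are made-up values, and this fragment needs a live GL context and a bound perspective projection):

```java
// Place Slick text embedded in 3D space, per the steps above.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(textX, textY, textZ);    // where the text plane should sit in the world
glRotatef(180.0f, 0.0f, 1.0f, 0.0f);  // turn the visible (negative-z) side toward the viewer
glScalef(-0.01f, 0.01f, 0.01f);       // -1 on x flips handedness; 0.01 maps 1 px to 0.01 world units
_font.drawString(0, 0, _text, _color);
glPopMatrix();
```

If the text comes out invisible, the plane is facing the wrong z-side; if it reads right-to-left, the handedness flip is missing.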
Use this:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, 800, 600, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// render font
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
How do I rotate a triangle, given that rotation functions in the latest OpenGL are deprecated? Before deprecation:
gl.glRotated(i, 0, 0, 1);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(0.0f, 1.0f, 0.0f );
gl.glVertex3f(-1.0f, -1.0f, 0.0f );
gl.glVertex3f(1.0f, -1.0f, 0.0f );
gl.glEnd();
I tried doing this, but it's just a translation:
double rotCos = Math.cos(i);
double rotSine = Math.sin(i);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3d(0.0f + rotSine, 1.0f + rotCos, 0.0f );
gl.glVertex3d(-1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glVertex3d(1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glEnd();
How to achieve the math behind glRotated?
What you did is not the idea behind the deprecation of those functions; the deprecation also covers glBegin, glVertex and glEnd, so if you're using those, you're missing the point.
What you should do is implement a vertex shader in which you perform the usual vertex transformation steps, i.e. multiply the vertex first by a modelview matrix, then by a projection matrix. You can also combine modelview and projection into one matrix, but that makes things a bit trickier regarding illumination.
The matrices are passed to OpenGL through so-called uniforms. To create the matrices, use a vector math library like GLM or Eigen (with the unofficial OpenGL module accompanying Eigen).
How to achieve the math behind glRotated?
The matrix glRotate() constructs is right there in the documentation.
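For the z-axis case in the question, the fix is that rotation is a matrix multiply, not a translation: each vertex (x, y) maps to (x*cos θ - y*sin θ, x*sin θ + y*cos θ). A standalone sketch (note that Math.cos/Math.sin take radians, while glRotated took degrees; the class name is mine):

```java
public class RotateTriangle {

    // Rotate a 2D point about the origin by 'degrees' (a z-axis rotation).
    public static double[] rotateZ(double x, double y, double degrees) {
        double r = Math.toRadians(degrees);
        double c = Math.cos(r), s = Math.sin(r);
        return new double[] { x * c - y * s, x * s + y * c };
    }

    public static void main(String[] args) {
        // The triangle from the question, rotated 90 degrees:
        double[][] verts = { {0, 1}, {-1, -1}, {1, -1} };
        for (double[] v : verts) {
            double[] p = rotateZ(v[0], v[1], 90);
            System.out.printf("(%.1f, %.1f)%n", p[0], p[1]);
        }
        // (0, 1) maps to (-1, 0), and so on. Feed the rotated values to
        // glVertex3d, or better, do this in a vertex shader via a uniform matrix.
    }
}
```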
It is still OK to use deprecated functions, but if you want to use "modern" OpenGL (OpenGL 3 and higher) you are going to have to do things rather differently.
For one, you no longer use glBegin/glEnd and instead draw everything using vertex buffer objects. Secondly, the fixed-function pipeline has been removed, so vertex and fragment shaders are required to draw anything. There are also a number of other changes (including the addition of vertex array objects and geometry shaders).
The way to do rotation in OpenGL 3 is to pass modelView and projection matrices in uniforms, and use them to compute vertex positions in the vertex shader.
Ultimately, if you want to learn "modern" OpenGL, you are probably best off just looking online for tutorials on OpenGL 3.0 (or higher).
I'm using LWJGL and Slick framework to load Textures to my OpenGL-application.
I'm using this image:
And this code to import and utilize the texture:
flagTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("japan.png"));
......
flagTexture.bind();
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glPushMatrix();
GL11.glTranslatef(0.0f, 0.0f, -10.0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();
GL11.glPopMatrix();
But the end-result becomes this:
I'm not using any special settings like GL_REPEAT or anything like that. What's going on? How can I make the texture fill the given vertices?
It looks like the texture is getting padded out to the nearest power of two. There are two solutions here:
Stretch the texture out to the nearest power of two.
Calculate the difference between your texture's size and the nearest power of two and change the texture coordinates from 1.0f to textureWidth/nearestPowerOfTwoWidth and textureHeight/nearestPowerOfTwoHeight.
There might also be some specific LWJGL method to allow for non-power-of-two textures, look into that.
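The second option above can be sketched like this (the class and method names are mine, not LWJGL's):

```java
public class PotTexCoords {

    // Next power of two >= n (for n >= 1).
    public static int nextPow2(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Texture coordinate to use instead of 1.0f when the image has been
    // padded out to the next power of two along one dimension.
    public static float maxCoord(int imageSize) {
        return imageSize / (float) nextPow2(imageSize);
    }

    public static void main(String[] args) {
        // A 640x480 image padded to 1024x512:
        System.out.println(nextPow2(640));   // prints: 1024
        System.out.println(maxCoord(640));   // prints: 0.625
        System.out.println(maxCoord(480));   // prints: 0.9375
    }
}
```

In the quad from the question, you would then pass maxCoord(width) and maxCoord(height) to glTexCoord2f in place of the 1.0f values, so the padding region is never sampled.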
If you need to support non-power-of-two textures, you can modify the loading method. If you use Slick2D, there's no way to do it other than to implement your "own" texture class (you can see examples here: Texture.java and TextureLoader.java).
The TextureLoader class contains a method get2Fold, which is used to calculate the next power of two greater than or equal to the texture width/height. So, if you want to use textures with non-power-of-two sizes, just change this method to simply return fold; (i.e. the input), so that the program "thinks" the next power of two is the size of the image. That isn't true in many cases, but if the hardware supports non-power-of-two textures (most does), this shouldn't be a problem. A more "abstract" way would be to change this line:
GL11.glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);
Here, the 4th argument is the width of the texture and the 5th is the height. If you set these to the image's width/height, it will work. Since this method is basically the same as the one before, both share the same caveats: as said before, this may slow down your image processing, and it might not be supported everywhere.
Hopefully this link will be of some help
http://www.lwjgl.org/wiki/index.php?title=Slick-Util_Library_-_Part_1_-_Loading_Images_for_LWJGL
It looks like it's very similar to what you're doing here:
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd();