Currently I am scaling a matrix like so:
public void scale(float aw, float ah){
    Matrix.scaleM(modelMatrix, 0, aw, ah, 1f);
    updateMVP();
}

private void updateMVP(){
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, modelMatrix, 0);
}
And in my vertex shader I use: gl_Position = u_Matrix * a_Position;, where u_Matrix is the mvpMatrix. The camera is the default, and the projectionMatrix is created by:
ASPECT_RATIO = (float) height / (float) width;
orthoM(projectionMatrix, 0, -1f, 1f, -ASPECT_RATIO, ASPECT_RATIO, -1f, 1f);
Now I can scale my object properly, but every time I scale the matrix, the object also translates a little. How can I scale the matrix around the object's center point so that the object does not move? Does anyone know how to do this in OpenGL ES 2.0 on Android? Thanks.
Do you have any other matrices (rotation/translation)?
If so, you might not be multiplying your matrices in the correct order, which can cause exactly this kind of drift.
Matrices apply right to left, so the proper order is:
Translate * Rotation * Scale
Your error sounds like the one explained here:
You translate the ship by (10,0,0). Its center is now at 10 units of the origin.
You scale your ship by 2. Every coordinate is multiplied by 2 relative to the origin, which is far away… So you end up with a big ship, but centered at 2*10 = 20. Which you don't want.
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
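The effect of the two orderings can be checked numerically. Below is a minimal sketch in Python (plain-list 4×4 helpers written for this illustration, not the android.opengl.Matrix API), treating points as column vectors so matrices apply right to left:

```python
# Minimal row-major 4x4 helpers (hypothetical, for illustration only).
def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def scaling(sx, sy, sz):
    m = identity()
    m[0][0], m[1][1], m[2][2] = sx, sy, sz
    return m

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def apply(m, p):
    x, y, z, w = p
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3]*w for i in range(4))

T = translation(10, 0, 0)        # the ship's translation from the tutorial quote
S = scaling(2, 2, 2)             # the scale
origin = (0.0, 0.0, 0.0, 1.0)    # the object's center in local space

# Correct order (right to left: scale first, then translate):
good = apply(mul(T, S), origin)  # center stays at x = 10
# Wrong order: the translation itself gets scaled:
bad = apply(mul(S, T), origin)   # center drifts to x = 20
```

This reproduces exactly the drift described in the quote above: with the wrong order, the object's center ends up at 2*10 = 20.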
Related
Let's say my screen is (800 × 600) and I have a 2D quad drawn as a triangle strip with the following vertex positions (in NDC):
float[] vertices = {-0.2f,0.2f,-0.2f,-0.2f,0.2f,0.2f,0.2f,-0.2f};
And I set up my transformation matrix this way:
Vector2f position = new Vector2f(0,0);
Vector2f size = new Vector2f(1.0f,1.0f);
Matrix4f tranMatrix = new Matrix4f();
tranMatrix.setIdentity();
Matrix4f.translate(position, tranMatrix, tranMatrix);
Matrix4f.scale(new Vector3f(size.x, size.y, 1f), tranMatrix, tranMatrix);
And my vertex shader:
#version 150 core
in vec2 in_Position;
uniform mat4 transMatrix;
void main(void) {
    gl_Position = transMatrix * vec4(in_Position, 0, 1.0);
}
My question is: which formula should I use to set the transformations of my quad using pixel coordinates?
For example:
set scale: (50px, 50px) => Vector2f(width, height)
set position: (100px, 100px) => Vector2f(x, y)
Ideally, I would create a function that converts my pixel data to NDC before sending it to the vertex shader. I was advised to use an orthographic projection, but I don't know how to create one correctly, and as you can see, my vertex shader doesn't use any projection matrix.
Here is a topic similar to mine but not very clear - Transform to NDC, calculate and transform back to worldspace
EDIT:
I created my orthographic projection matrix by following the formula, but nothing seems to appear. Here is how I proceeded:
public static Matrix4f glOrtho(float left, float right, float bottom, float top, float near, float far){
    final Matrix4f matrix = new Matrix4f();
    matrix.setIdentity();
    matrix.m00 = 2.0f / (right - left);
    matrix.m01 = 0;
    matrix.m02 = 0;
    matrix.m03 = 0;
    matrix.m10 = 0;
    matrix.m11 = 2.0f / (top - bottom);
    matrix.m12 = 0;
    matrix.m13 = 0;
    matrix.m20 = 0;
    matrix.m21 = 0;
    matrix.m22 = -2.0f / (far - near);
    matrix.m23 = 0;
    matrix.m30 = -(right + left) / (right - left);
    matrix.m31 = -(top + bottom) / (top - bottom);
    matrix.m32 = -(far + near) / (far - near);
    matrix.m33 = 1;
    return matrix;
}
I then included my matrix in the vertex shader
#version 140
in vec2 position;
uniform mat4 projMatrix;
void main(void){
    gl_Position = projMatrix * vec4(position, 0.0, 1.0);
}
What did I miss ?
New Answer
After clarifications in the comments, the question being asked can be summed up as:
How do I effectively transform a quad in terms of pixels for use in a GUI?
As mentioned in the original question, the simplest approach to this will be using an Orthographic Projection. What is an Orthographic Projection?
a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
In practice, you may think of this as a 2D projection. Distance plays no role, and the OpenGL coordinates map to pixel coordinates. See this answer for a bit more information.
By using an Orthographic Projection instead of a Perspective Projection you can start thinking of all of your transformations in terms of pixels.
Instead of defining a quad as (25 x 25) world units in dimension, it is (25 x 25) pixels in dimension.
Or instead of translating by 50 world units along the world x-axis, you translate by 50 pixels along the screen x-axis (to the right).
So how do you create an Orthographic Projection?
First, they are usually defined using the following parameters:
left - X coordinate of the left vertical clipping plane
right - X coordinate of the right vertical clipping plane
bottom - Y coordinate of the bottom horizontal clipping plane
top - Y Coordinate of the top horizontal clipping plane
near - Near depth clipping plane
far - Far depth clipping plane
Remember, all units are in pixels. A typical orthographic projection would be defined as:
glOrtho(0.0, windowWidth, windowHeight, 0.0, 0.0, 1.0); // left, right, bottom, top, near, far (top-left origin)
Assuming you do not (or can not) make use of glOrtho (you have your own Matrix class or another reason), then you must calculate the Orthographic Projection matrix yourself.
The Orthographic Matrix is defined as:
2/(r-l) 0 0 -(r+l)/(r-l)
0 2/(t-b) 0 -(t+b)/(t-b)
0 0 -2/(f-n) -(f+n)/(f-n)
0 0 0 1
Source A, Source B
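As a sanity check, the matrix above can be built directly from that definition and applied to the window corners. A small Python sketch (row-major layout, helper names invented for this illustration), using the same top-left-origin parameters as the glOrtho call above:

```python
def ortho(l, r, b, t, n, f):
    # Row-major layout of the orthographic matrix shown above.
    return [
        [2.0/(r-l), 0.0,        0.0,        -(r+l)/(r-l)],
        [0.0,       2.0/(t-b),  0.0,        -(t+b)/(t-b)],
        [0.0,       0.0,       -2.0/(f-n),  -(f+n)/(f-n)],
        [0.0,       0.0,        0.0,         1.0],
    ]

def apply(m, p):
    x, y, z, w = p
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3]*w for i in range(4))

# Pixel space with a top-left origin, as in the glOrtho call above:
P = ortho(0.0, 800.0, 600.0, 0.0, 0.0, 1.0)
top_left = apply(P, (0.0, 0.0, 0.0, 1.0))         # expect NDC (-1, +1)
bottom_right = apply(P, (800.0, 600.0, 0.0, 1.0)) # expect NDC (+1, -1)
```

If the pixel corners land on the NDC corners, the matrix is doing its job: you can now express all positions and sizes in pixels.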
At this point I recommend using a pre-made mathematics library unless you are determined to use your own. Some of the most common bugs I see in practice are matrix-related, and the less time you spend debugging matrices, the more time you have to focus on other, more fun endeavors.
GLM is a widely-used and respected library that is built to model GLSL functionality. The GLM implementation of glOrtho can be seen here at line 100.
How to use an Orthographic Projection?
Orthographic projections are commonly used to render a GUI on top of your 3D scene. This can be done easily enough by using the following pattern:
Clear Buffers
Apply your Perspective Projection Matrix
Render your 3D objects
Apply your Orthographic Projection Matrix
Render your 2D/GUI objects
Swap Buffers
Old Answer
Note that this answered the wrong question. It assumed the question boiled down to "How do I convert from Screen Space to NDC Space?". It is left in case someone searching comes upon this question looking for that answer.
The goal is to convert from Screen Space to NDC Space. So let's first define those spaces, and then we can create a conversion.
Normalized Device Coordinates
NDC space is simply the result of performing perspective division on our vertices in clip space.
clip.xyz /= clip.w
Where clip is the coordinate in clip space.
What this does is place all of our un-clipped vertices into a unit cube (on the range [-1, 1] on all axes), with the screen center at (0, 0, 0). Any vertices that are clipped (lie outside the view frustum) are not within this unit cube and are tossed away by the GPU.
In OpenGL this step is done automatically as part of Primitive Assembly (D3D11 does this in the Rasterizer Stage).
Screen Coordinates
Screen coordinates are simply calculated by expanding the normalized coordinates to the confines of your viewport.
screen.x = ((view.w * 0.5) * ndc.x) + ((view.w * 0.5) + view.x)
screen.y = ((view.h * 0.5) * ndc.y) + ((view.h * 0.5) + view.y)
screen.z = (((view.f - view.n) * 0.5) * ndc.z) + ((view.f + view.n) * 0.5)
Where,
screen is the coordinate in screen-space
ndc is the coordinate in normalized-space
view.x is the viewport x origin
view.y is the viewport y origin
view.w is the viewport width
view.h is the viewport height
view.f is the viewport far
view.n is the viewport near
Converting from Screen to NDC
As we have the conversion from NDC to Screen above, it is easy to calculate the reverse.
ndc.x = (((2.0 * screen.x) - (2.0 * view.x)) / view.w) - 1.0
ndc.y = (((2.0 * screen.y) - (2.0 * view.y)) / view.h) - 1.0
ndc.z = ((2.0 * screen.z) - (view.f + view.n)) / (view.f - view.n)
Example:
viewport (w, h, n, f) = (800, 600, 1, 1000)
screen.xyz = (400, 300, 200)
ndc.xyz = (0.0, 0.0, -0.601)
screen.xyz = (575, 100, 1)
ndc.xyz = (0.4375, -0.667, -1.0)
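The two conversions above can be written out and round-tripped. A Python sketch (function and parameter names are mine, chosen for illustration), using the example viewport:

```python
def ndc_to_screen(ndc, view):
    # view = (x, y, w, h, n, f), matching the definitions above.
    x, y, w, h, n, f = view
    sx = (w * 0.5) * ndc[0] + (w * 0.5 + x)
    sy = (h * 0.5) * ndc[1] + (h * 0.5 + y)
    sz = ((f - n) * 0.5) * ndc[2] + (f + n) * 0.5
    return (sx, sy, sz)

def screen_to_ndc(s, view):
    x, y, w, h, n, f = view
    nx = ((2.0 * s[0]) - (2.0 * x)) / w - 1.0
    ny = ((2.0 * s[1]) - (2.0 * y)) / h - 1.0
    nz = ((2.0 * s[2]) - (f + n)) / (f - n)
    return (nx, ny, nz)

view = (0.0, 0.0, 800.0, 600.0, 1.0, 1000.0)
ndc = screen_to_ndc((400.0, 300.0, 200.0), view)  # approx (0.0, 0.0, -0.601)
back = ndc_to_screen(ndc, view)                   # round-trips to (400, 300, 200)
```

The round trip recovering the original screen coordinate is a quick way to convince yourself both directions agree.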
Further Reading
For more information on all of the transform spaces, read OpenGL Transformation.
Edit for Comment
In the comment on the original question, Bo specifies screen-space origin as top-left.
For OpenGL, the viewport origin (and thus screen-space origin) lies at the bottom-left. See glViewport.
If your pixel coordinates are truly top-left origin then that needs to be taken into account when transforming screen.y to ndc.y.
ndc.y = 1.0 - (((2.0 * screen.y) - (2.0 * view.y)) / view.h)
This is needed if you are transforming, say, a coordinate of a mouse-click on screen/gui into NDC space (as part of a full transform to world space).
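For the common case of a full-window viewport (view.x = view.y = 0), a short Python sketch of the flipped conversion (the function name is mine):

```python
def mouse_to_ndc(mx, my, w, h):
    # Window coordinates with a top-left origin (typical for mouse events);
    # flip y so that +1 is the top of the screen in NDC.
    nx = (2.0 * mx) / w - 1.0
    ny = 1.0 - (2.0 * my) / h
    return nx, ny

corner = mouse_to_ndc(0, 0, 800, 600)      # top-left corner -> (-1, +1)
center = mouse_to_ndc(400, 300, 800, 600)  # screen center   -> (0, 0)
```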
NDC coordinates are transformed to screen (i.e. window) coordinates using glViewport. This function (which you must call in your app) defines a portion of the window by an origin and a size.
The formulas used can be seen at https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glViewport.xml
(x, y) is the origin, normally (0, 0), the bottom-left corner of the window.
While you can derive the inverse formulas on your own, here they are: https://www.khronos.org/opengl/wiki/Compute_eye_space_from_window_space#From_window_to_ndc
If I understand the question, you're trying to map screen-space coordinates (the ones that define the size of your screen) to the -1 to 1 range. If so, it's quite simple. The equation is:
((screen_coord / screen_dimension) * 2) - 1
This works because, for example, on an 800 × 600 screen:
800 / 800 = 1
1 * 2 = 2
2 - 1 = 1
and to check a coordinate at half the screen height:
300 / 600 = 0.5
0.5 * 2 = 1
1 - 1 = 0 (NDC runs from -1 to 1, so 0 is the middle)
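In code, the same equation might look like this (a Python sketch; the name to_ndc is mine):

```python
def to_ndc(screen_coord, screen_dimension):
    # Map [0, dimension] in screen space onto [-1, 1] in NDC.
    return (screen_coord / screen_dimension) * 2.0 - 1.0

full = to_ndc(800, 800)    # far edge of an 800-wide screen -> 1.0
middle = to_ndc(300, 600)  # half of a 600-high screen      -> 0.0
edge = to_ndc(0, 800)      # origin                          -> -1.0
```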
I would like to have a billboard of a tree to always face the camera.
Currently, I am just using glRotatef() and rotating the tree's yaw to the camera's yaw:
glRotatef(camera.yaw(), 0f, 1f, 0f);
However, that unfortunately does not work.
It almost seems like the tree is turning to the right, when it should be turning left.
I've already tried inverting the rotation, but that doesn't work.
glRotatef(-camera.yaw(), 0f, 1f, 0f);
OR
glRotatef(camera.yaw(), 0f, -1f, 0f);
I could always resort to a crossed billboard (like I do for my grass), but scaled up it looks horrible, so I would prefer to use it only as a last resort.
I could also use a 3D model as an alternative, however I find that much harder, and it is also far more intensive on the graphics card.
I've already tried looking here for an answer, but not only is it confusing, it's also for Flash, and it doesn't really explain how to do this in other languages.
If needed (for whatever reason), my entire rendering code is:
public void render(){
    Main.TerrainDemo.shader.start();
    glPushMatrix();
    glDisable(GL_LIGHTING);
    glTranslatef(location.x * TerrainDemo.scale, location.y, location.z * TerrainDemo.scale); //Scale is the size of the map: More players online = bigger map.
    TexturedModel texturedModel = TerrainDemo.textModel;
    RawModel model = texturedModel.getRawModel();
    glDisable(GL_CULL_FACE);
    GL30.glBindVertexArray(model.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    GL20.glEnableVertexAttribArray(1);
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, TerrainDemo.textModel.getTexture().getID());
    glScalef(size.x, size.y, size.z);
    glColor4f(0, 0, 0, 0.5f); //0,0,0, because of the shaders.
    glRotatef(Main.TerrainDemo.camera.yaw(), 0f, 1f, 0f);
    glDrawElements(GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL30.glBindVertexArray(0);
    glEnable(GL_LIGHTING);
    glPopMatrix();
    Main.TerrainDemo.shader.stop();
}
camera.yaw():
/** @return the yaw of the camera in degrees */
public float yaw() {
    return yaw;
}
The yaw is between -360 and 360.
/** Processes mouse input and converts it into camera movement. */
public void processMouse() {
    float mouseDX = Mouse.getDX() * 0.16f;
    float mouseDY = Mouse.getDY() * 0.16f;
    if (yaw + mouseDX >= 360) {
        yaw = yaw + mouseDX - 360;
    } else if (yaw + mouseDX < 0) {
        yaw = 360 - yaw + mouseDX;
    } else {
        yaw += mouseDX / 50;
    }
    //Removed code relevant to pitch, since it is not relevant to this question.
}
UPDATE:
I have tried a lot of combinations, but camera.yaw() doesn't seem to be remotely related to what the trees are doing.
No matter what I multiply or divide it by, it always seems to be wrong!
What you want is an axis aligned billboard. First take the center axis in local coordinates, let's call it a. Second you need the axis from the point of view to some point along that axis (the tree's base will do just fine), let's call it v. Given these two vectors you want to form a "tripod" with one leg being coplanar with the center axis and the direction to viewpoint.
This can be done by orthogonalizing the vector v against a using the Gram-Schmidt process, yielding v'. The third leg of the tripod is the cross product between a and v' yielding r = a × v'. The edges of the axis aligned billboard are parallel to a and r; but this is just another way of saying, that a billboard is rotated into the (a,r) plane, which is exactly what rotation matrices describe. Assume the untransformed billboard geometry is in the XY plane, with a parallel to Y, then the rotation matrix would be
[r, a, (0,0,1)]
or in a slightly more elaborate way of writing it
| r.x , a.x , 0 |
| r.y , a.y , 0 |
| r.z , a.z , 1 |
To form a full 4×4 homogeneous transformation matrix, expand it to
| r.x , a.x , 0 , t.x |
| r.y , a.y , 0 , t.y |
| r.z , a.z , 1 , t.z |
| 0 , 0 , 0 , 1 |
where t is the translation.
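The tripod construction can be sketched numerically. The following Python (helper names and sample vectors are mine) orthogonalizes v against a via Gram-Schmidt, takes the cross product, and checks that the three legs are mutually perpendicular unit vectors:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def normalize(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (0.0, 1.0, 0.0)      # center axis in local coordinates (the tree trunk)
eye = (5.0, 3.0, 5.0)    # sample viewpoint
base = (0.0, 0.0, 0.0)   # a point on the axis (the tree's base)

v = sub(base, eye)                                # viewpoint -> axis
v_prime = normalize(sub(v, scale(a, dot(v, a))))  # Gram-Schmidt against a
r = cross(a, v_prime)                             # third leg of the tripod
# The columns [r, a, ...] of the rotation matrix above orient the billboard
# so that its plane contains a and spins about a toward the viewer.
```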
Note that if anything about matrices and vector operations doesn't make sense to you yet, you should stop what you're doing with OpenGL right now and first learn these essential basic skills. You will need them.
I am trying to turn my screen x and y coordinates into the ones used to draw on screen.
I get my screen X and Y coordinates from the MotionEvent fired by my touch listener.
I thought it would be as easy as multiplying them by the matrix used to draw on the canvas, so I created a Matrix instance in the view's constructor:
matrix = new Matrix();
When onDraw(Canvas canvas) gets called, I set the canvas's matrix to the one I created in the constructor and apply all my transformations to it:
matrix.reset();
canvas.setMatrix(matrix);
canvas.translate(getWidth() / 2, getHeight() / 2);
canvas.scale(mScaleFactor, mScaleFactor);
canvas.translate(-getWidth() / 2, -getHeight() / 2);
canvas.translate(-x, -y);
The view looks as it should, but when my touch listener tries to turn screen coordinates into view coordinates with that matrix using mapPoints(float[] points), the values it gives aren't right. I draw a cross at (0, 0) in onDraw:
canvas.drawLine(0f, -100f, 0f, 100f, viewportPaint);
canvas.drawLine(-100f, 0f, 100, 0f, viewportPaint);
and when I click where the cross appears to be after scaling and so on, the values I receive aren't even close to (0, 0):
float[] array = new float[]{e.getX(), e.getY()};
matrix.mapPoints(array);
Log.v(TAG, "points transformed are " + array[0] + ", " + array[1]);
When I took this picture where I am clearly clicking the 0, 0 mark I received the following logging:
03-09 14:08:48.803: V/XXXX(22181): points transformed are 403.43967, 628.47
PS: I am not touching the matrix anywhere other than in my onDraw code.
I had the question inverted in my mind; when I thought about it again after taking a break (8 hours later...), I got it: I had to invert the matrix in order to perform this operation.
Also, the matrix wasn't being transformed when I transformed the canvas, so I had to call canvas.getMatrix(matrix) in order to get the same matrix the canvas was using.
This might help others:
In onDraw()
canvasMatrix.postScale(mScaleFactor, mScaleFactor);
canvasMatrix.postTranslate(-getWidth() / 2, -getHeight() / 2);
canvas.setMatrix(canvasMatrix);
Somewhere else where you need the mapped coordinates:
Matrix m = new Matrix();
canvasMatrix.invert(m);
float[] touch = new float[] { X, Y };
m.mapPoints(touch);
X = touch[0];
Y = touch[1];
there you go.
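The idea generalizes beyond Android: the canvas matrix maps drawing coordinates to screen pixels, so the inverted matrix maps a touch back into drawing coordinates. A stripped-down Python sketch of a scale-plus-translate transform and its inverse (no android.graphics.Matrix involved; the numbers are made up):

```python
def forward(p, s, t):
    # postScale(s) then postTranslate(t): screen = p * s + t
    return (p[0] * s + t[0], p[1] * s + t[1])

def inverse(q, s, t):
    # The inverse undoes the steps in reverse order with inverted operations.
    return ((q[0] - t[0]) / s, (q[1] - t[1]) / s)

s = 2.0
t = (-400.0, -300.0)           # e.g. postTranslate(-getWidth()/2, -getHeight()/2)
drawn = (50.0, -75.0)          # a point in drawing coordinates
screen = forward(drawn, s, t)  # where it lands on screen
recovered = inverse(screen, s, t)  # what mapPoints with the inverted matrix gives
```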
I want to create a camera moving above a tiled plane. The camera is supposed to move in the XY-plane only and to look straight down all the time. With an orthogonal projection I expect a pseudo-2D renderer.
My problem is that I don't know how to translate the camera. After some research, it seems there is nothing like a "camera" in OpenGL, and I have to translate the whole world instead. Changing the eye position and view-center coordinates in the Matrix.setLookAtM function just leads to distorted results.
Translating the whole MVP matrix does not work either.
I'm running out of ideas now; do I have to translate every single vertex directly in the vertex buffer every frame? That does not seem plausible to me.
I derived GLSurfaceView and implemented the following functions to setup and update the scene:
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    // Setup the projection matrix for an orthogonal view
    Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}

public void onDrawFrame(GL10 unused) {
    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // Setup the camera
    float[] camPos = { 0.0f, 0.0f, -3.0f }; // no matter what else I put in here the camera seems to point
    float[] lookAt = { 0.0f, 0.0f, 0.0f };  // to the coordinate center and distorts the square
    // Set the camera position (View matrix)
    Matrix.setLookAtM(vMatrix, 0, camPos[0], camPos[1], camPos[2], lookAt[0], lookAt[1], lookAt[2], 0f, 1f, 0f);
    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
    // Rotate the viewport
    Matrix.setRotateM(mRotationMatrix, 0, getRotationAngle(), 0, 0, -1.0f);
    Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);
    // I also tried to translate the viewport here
    // (and several other places), but I could not find any solution
    // Draw the plane (actually a simple square right now)
    mPlane.draw(mMVPMatrix);
}
Changing the eye-position and view center coordinates in the "LookAt"-function just leads to distorted results.
If you got this from the Android tutorial, I think they have a bug in their code (I made a comment about it here).
Try the following fixes:
Use setLookAtM to point to where you want the camera to be.
In the shader, change the gl_Position line
from: " gl_Position = vPosition * uMVPMatrix;"
to: " gl_Position = uMVPMatrix * vPosition;"
I'd think the //rotate the viewport section should be removed as well, as it does not rotate the camera properly. You can change the camera's orientation in the setLookAtM call.
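The difference between the two multiplication orders in the shader is easy to see with a translation matrix. A Python sketch (row-major 4×4 as nested lists; "vPosition * uMVPMatrix" in GLSL is equivalent to multiplying by the transpose):

```python
def mat_vec(m, v):
    # What "uMVPMatrix * vPosition" computes.
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))

def vec_mat(v, m):
    # What "vPosition * uMVPMatrix" computes: row vector times matrix,
    # i.e. multiplication by the transpose.
    return tuple(sum(v[k] * m[k][j] for k in range(4)) for j in range(4))

M = [[1, 0, 0, 5],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]          # translate by +5 in x
v = (0.0, 0.0, 0.0, 1.0)    # a point at the origin

right = mat_vec(M, v)       # (5, 0, 0, 1): the point is translated
wrong = vec_mat(v, M)       # (0, 0, 0, 1): the translation is lost
```

This is why swapping the order in gl_Position fixes the distorted results: with the operands reversed, the matrix is effectively transposed.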
I'm trying to set up the renderer so that, regardless of device, the view is a simple 2D field with the top of the screen at 1.0f and the bottom at -1.0f. I can't seem to get it quite right. I've been using the method below in onSurfaceChanged() and playing with the parameters of gluPerspective to achieve the desired effect, but it seems impossible to get perfect. Surely there is an alternative way to achieve what I'm after. I've also been playing with the Z values of the drawn meshes to try to get them to match.
Again, I'm trying to set it up so that the screen is defined in the range -1.0f to 1.0f, so that a square with sides equal to 2.0f would fill the entire screen regardless of aspect ratio. What do I need to change to do this? (Include the value I should use for the Z dimension of the mesh vertices.)
(Don't be alarmed by the strange parameters in gluPerspective(); I've been tinkering to see what happens.)
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    if (height == 0) { // Prevent a divide by zero
        height = 1;    // by making height equal one
    }
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluPerspective(gl, 90.0f, (float) width / (float) height,
            0.0000001f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
Generate an ortho matrix instead:
Matrix.orthoM(projectionMatrix, 0, -yourdisplayWidth/2, +yourdisplayWidth/2, -yourdisplayHeight/2, +yourdisplayHeight/2, 0f, 2f);
Then you can place your image quads at a distance of 1f in front of the camera. You also have to size your quads to their size in pixels. This way you can render pixel-perfect.
See also: https://github.com/Chrise55/Llama3D
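To see why this gives pixel-perfect sizing: with the centered ortho above, x pixels map to 2*x/width NDC units, so a quad whose vertices are specified in pixels covers exactly that many pixels on screen. A tiny Python sketch of the mapping (names mine):

```python
def ortho_centered_map(w, h):
    # The orthoM call above maps x in [-w/2, +w/2] -> [-1, 1]
    # and y in [-h/2, +h/2] -> [-1, 1].
    return lambda x, y: (2.0 * x / w, 2.0 * y / h)

proj = ortho_centered_map(800.0, 600.0)
# A 50x50-pixel quad centered at the origin reaches 25 px each way:
right_edge = proj(25.0, 0.0)[0]  # 50/800 of the full width in NDC
top_edge = proj(0.0, 300.0)[1]   # the screen's top edge -> 1.0
```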
You might want to try experimenting with glOrtho or glFrustum instead of gluPerspective.