Let's say my screen is 800 × 600 and I have a 2D quad drawn with the following vertex positions (in NDC), using GL_TRIANGLE_STRIP:
float[] vertices = { -0.2f, 0.2f, -0.2f, -0.2f, 0.2f, 0.2f, 0.2f, -0.2f };
And I set up my transformation matrix this way:
Vector2f position = new Vector2f(0,0);
Vector2f size = new Vector2f(1.0f,1.0f);
Matrix4f tranMatrix = new Matrix4f();
tranMatrix.setIdentity();
Matrix4f.translate(position, tranMatrix, tranMatrix);
Matrix4f.scale(new Vector3f(size.x, size.y, 1f), tranMatrix, tranMatrix);
And my vertex shader:
#version 150 core
in vec2 in_Position;
uniform mat4 transMatrix;
void main(void) {
    gl_Position = transMatrix * vec4(in_Position, 0.0, 1.0);
}
My question is: what formula should I use to set my quad's transformations in pixel coordinates?
For example :
set scale: (50px, 50px) => Vector2f(width, height)
set position: (100px, 100px) => Vector2f(x, y)
To put it another way: I would like to create a function that converts my pixel data to NDC before sending it to the vertex shader. I was advised to use an orthographic projection, but I don't know how to create one correctly, and as you can see, my vertex shader doesn't use any projection matrix.
Here is a topic similar to mine, but not very clear: Transform to NDC, calculate and transform back to worldspace
EDIT:
I created my orthographic projection matrix by following the formula, but nothing seems to appear. Here is how I proceeded:
public static Matrix4f glOrtho(float left, float right, float bottom, float top, float near, float far) {
    final Matrix4f matrix = new Matrix4f();
    matrix.setIdentity(); // zeroes everything off the diagonal

    // Scale terms (diagonal)
    matrix.m00 = 2.0f / (right - left);
    matrix.m11 = 2.0f / (top - bottom);
    matrix.m22 = -2.0f / (far - near);

    // Translation terms (last column; LWJGL's Matrix4f is column-major, so m3x is the translation)
    matrix.m30 = -(right + left) / (right - left);
    matrix.m31 = -(top + bottom) / (top - bottom);
    matrix.m32 = -(far + near) / (far - near);
    matrix.m33 = 1;

    return matrix;
}
I then included my matrix in the vertex shader:
#version 140
in vec2 position;
uniform mat4 projMatrix;
void main(void){
    gl_Position = projMatrix * vec4(position, 0.0, 1.0);
}
What did I miss ?
New Answer
After clarifications in the comments, the question being asked can be summed up as:
How do I effectively transform a quad in terms of pixels for use in a GUI?
As mentioned in the original question, the simplest approach to this will be using an Orthographic Projection. What is an Orthographic Projection?
a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
In practice, you may think of this as a 2D projection. Distance plays no role, and the OpenGL coordinates map to pixel coordinates. See this answer for a bit more information.
By using an Orthographic Projection instead of a Perspective Projection you can start thinking of all of your transformations in terms of pixels.
Instead of defining a quad as (25 x 25) world units in dimension, it is (25 x 25) pixels in dimension.
Or instead of translating by 50 world units along the world x-axis, you translate by 50 pixels along the screen x-axis (to the right).
So how do you create an Orthographic Projection?
First, they are usually defined using the following parameters:
left - X coordinate of the left vertical clipping plane
right - X coordinate of the right vertical clipping plane
bottom - Y coordinate of the bottom horizontal clipping plane
top - Y coordinate of the top horizontal clipping plane
near - Near depth clipping plane
far - Far depth clipping plane
Remember, all units are in pixels. A typical Orthographic Projection (top-left origin) would be defined as:
glOrtho(0.0, windowWidth, windowHeight, 0.0, 0.0, 1.0);
Assuming you do not (or can not) make use of glOrtho (you have your own Matrix class or another reason), then you must calculate the Orthographic Projection matrix yourself.
The Orthographic Matrix is defined as:
2/(r-l) 0 0 -(r+l)/(r-l)
0 2/(t-b) 0 -(t+b)/(t-b)
0 0 -2/(f-n) -(f+n)/(f-n)
0 0 0 1
Source A, Source B
At this point I recommend using a pre-made mathematics library unless you are determined to roll your own. One of the most common sources of bugs I see in practice is matrix-related, and the less time you spend debugging matrices, the more time you have to focus on other, more fun endeavors.
GLM is a widely-used and respected library that is built to model GLSL functionality. The GLM implementation of glOrtho can be seen here at line 100.
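If you do roll your own in Java, here is a minimal sketch of how the pieces fit together, assuming LWJGL 2's org.lwjgl.util.vector classes and the glOrtho helper from the question's edit. It also assumes the quad's vertices are defined as a unit quad from (0,0) to (1,1), so the scale becomes its size in pixels:

import org.lwjgl.util.vector.Matrix4f;
import org.lwjgl.util.vector.Vector2f;
import org.lwjgl.util.vector.Vector3f;

public final class PixelTransform {
    // Builds projection * model for a quad placed and sized in pixels.
    public static Matrix4f pixelTransform(float windowWidth, float windowHeight,
                                          Vector2f positionPx, Vector2f sizePx) {
        // Top-left origin: (0, 0) is the upper-left pixel of the window.
        // glOrtho(...) is the helper from the question's edit.
        Matrix4f projection = glOrtho(0f, windowWidth, windowHeight, 0f, 0f, 1f);

        Matrix4f model = new Matrix4f(); // identity
        Matrix4f.translate(new Vector2f(positionPx.x, positionPx.y), model, model);
        Matrix4f.scale(new Vector3f(sizePx.x, sizePx.y, 1f), model, model);

        return Matrix4f.mul(projection, model, null); // projection * model
    }
}

With this, pixelTransform(800, 600, new Vector2f(100, 100), new Vector2f(50, 50)) yields a matrix that draws the unit quad as a 50 × 50 px rectangle at pixel (100, 100).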
How to use an Orthographic Projection?
Orthographic projections are commonly used to render a GUI on top of your 3D scene. This can be done easily enough by using the following pattern (a code sketch follows the list):
Clear Buffers
Apply your Perspective Projection Matrix
Render your 3D objects
Apply your Orthographic Projection Matrix
Render your 2D/GUI objects
Swap Buffers
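A rough sketch of that loop in LWJGL 2's fixed-function style; renderScene, renderGui, perspectiveBuffer, and orthoBuffer are placeholders for your own code and matrices:

GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadMatrix(perspectiveBuffer);  // perspective projection matrix
renderScene();                         // your 3D objects

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadMatrix(orthoBuffer);        // orthographic projection matrix
GL11.glDisable(GL11.GL_DEPTH_TEST);    // GUI draws on top of the scene
renderGui();                           // your 2D/GUI objects
GL11.glEnable(GL11.GL_DEPTH_TEST);

Display.update();                      // swap buffers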
Old Answer
Note that this answered the wrong question. It assumed the question boiled down to "How do I convert from Screen Space to NDC Space?". It is left in case someone searching comes upon this question looking for that answer.
The goal is to convert from Screen Space to NDC Space. So let's first define what those spaces are, and then we can create a conversion.
Normalized Device Coordinates
NDC space is simply the result of performing perspective division on our vertices in clip space.
clip.xyz /= clip.w
Where clip is the coordinate in clip space.
What this does is place all of our un-clipped vertices into a unit cube (on the range [-1, 1] on all axes), with the screen center at (0, 0, 0). Any vertices that are clipped (lie outside the view frustum) fall outside this unit cube and are tossed away by the GPU.
In OpenGL this step is done automatically as part of Primitive Assembly (D3D11 does this in the Rasterizer Stage).
Screen Coordinates
Screen coordinates are simply calculated by expanding the normalized coordinates to the confines of your viewport.
screen.x = ((view.w * 0.5) * ndc.x) + ((view.w * 0.5) + view.x)
screen.y = ((view.h * 0.5) * ndc.y) + ((view.h * 0.5) + view.y)
screen.z = (((view.f - view.n) * 0.5) * ndc.z) + ((view.f + view.n) * 0.5)
Where,
screen is the coordinate in screen-space
ndc is the coordinate in normalized-space
view.x is the viewport x origin
view.y is the viewport y origin
view.w is the viewport width
view.h is the viewport height
view.f is the viewport far
view.n is the viewport near
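A quick Java sketch of these formulas (plain floats, no library assumed):

// Maps NDC to screen coordinates for a viewport origin (x, y), size (w, h),
// and depth range (n, f).
static float[] ndcToScreen(float[] ndc, float x, float y, float w, float h,
                           float n, float f) {
    float sx = (w * 0.5f) * ndc[0] + (w * 0.5f + x);
    float sy = (h * 0.5f) * ndc[1] + (h * 0.5f + y);
    float sz = ((f - n) * 0.5f) * ndc[2] + (f + n) * 0.5f;
    return new float[] { sx, sy, sz };
}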
Converting from Screen to NDC
As we have the conversion from NDC to Screen above, it is easy to calculate the reverse.
ndc.x = (((2.0 * screen.x) - (2.0 * view.x)) / view.w) - 1.0
ndc.y = (((2.0 * screen.y) - (2.0 * view.y)) / view.h) - 1.0
ndc.z = ((2.0 * screen.z) - view.f - view.n) / (view.f - view.n)
Example:
viewport (w, h, n, f) = (800, 600, 1, 1000)
screen.xyz = (400, 300, 200)
ndc.xyz = (0.0, 0.0, -0.602)
screen.xyz = (575, 100, 1)
ndc.xyz = (0.4375, -0.667, -1.0)
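The reverse mapping as a Java sketch (same viewport conventions as above; the commented call reproduces the first example):

static float[] screenToNdc(float[] screen, float x, float y, float w, float h,
                           float n, float f) {
    float nx = ((2f * screen[0]) - (2f * x)) / w - 1f;
    float ny = ((2f * screen[1]) - (2f * y)) / h - 1f;
    float nz = ((2f * screen[2]) - f - n) / (f - n);
    return new float[] { nx, ny, nz };
}

// screenToNdc(new float[] {400, 300, 200}, 0, 0, 800, 600, 1, 1000)
//   -> {0.0, 0.0, -0.602}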
Further Reading
For more information on all of the transform spaces, read OpenGL Transformation.
Edit for Comment
In the comment on the original question, Bo specifies screen-space origin as top-left.
For OpenGL, the viewport origin (and thus screen-space origin) lies at the bottom-left. See glViewport.
If your pixel coordinates are truly top-left origin then that needs to be taken into account when transforming screen.y to ndc.y.
ndc.y = 1.0 - (((2.0 * screen.y) - (2.0 * view.y)) / view.h)
This is needed if you are transforming, say, a coordinate of a mouse-click on screen/gui into NDC space (as part of a full transform to world space).
NDC coordinates are transformed to screen (i.e. window) coordinates using glViewport. This function (which you must call in your app) defines a portion of the window by an origin and a size.
The formulas used can be seen at https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glViewport.xml
(x, y) is the origin, normally (0, 0): the bottom-left corner of the window.
While you could derive the inverse formulas on your own, here they are: https://www.khronos.org/opengl/wiki/Compute_eye_space_from_window_space#From_window_to_ndc
If I understand the question, you're trying to map screen-space coordinates (the ones bounded by the size of your screen) to the -1 to 1 NDC range. If yes, then it's quite simple. The equation is:
ndc_coord = ((screen_coord / screen_width_or_height) * 2) - 1
This works because, for example, on an 800 × 600 screen:
800 / 800 = 1
1 * 2 = 2
2 - 1 = 1
and to check a coordinate at half the screen height:
300 / 600 = 0.5
0.5 * 2 = 1
1 - 1 = 0 (NDC runs from -1 to 1, so 0 is the middle)
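As a one-line Java helper (a sketch; it works per axis):

// Converts one screen-space coordinate to NDC, e.g. toNdc(300, 600) == 0.
static float toNdc(float screenCoord, float screenDimension) {
    return (screenCoord / screenDimension) * 2f - 1f;
}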
Related
I am working on a 2D Java game engine using an AWT canvas as a basis. Part of this game engine is that it needs hitboxes with collision. Not just the built-in rectangles (I tried that system already): I need my own Hitbox class because I need more functionality. So I made one; it supports circular and 4-sided polygon hitboxes. The way the Hitbox class is set up, it uses four coordinate points to serve as the four corner vertices that connect to form a rectangle. Lines are drawn connecting the points, and these are the lines used to detect intersections with other hitboxes. But I now have a problem: rotation.
There are two possibilities for a box hitbox: it can be just four coordinate points, or it can be four coordinate points attached to a gameobject. The difference is that the former is just four coordinates based on (0,0) as the origin, while a hitbox attached to a gameobject stores offsets in the coordinates rather than raw location data, so (-100,-100), for example, represents the location of the host gameobject but 100 pixels to the left and 100 pixels up.
Online I found a formula for rotating points about the origin. Since gameobject-based hitboxes are centered around a particular point, I figured that would be the best option to try it on. This code runs each tick to update the player character's hitbox:
// creates a rectangle hitbox around this gameobject
int width = getWidth();
int height = getHeight();
Coordinate[] verts = new Coordinate[4]; // corners of hitbox: topLeft, topRight, bottomLeft, bottomRight
verts[0] = new Coordinate(-width / 2, -height / 2);
verts[1] = new Coordinate(width / 2, -height / 2);
verts[2] = new Coordinate(-width / 2, height / 2);
verts[3] = new Coordinate(width / 2, height / 2);

// now go through each coordinate and adjust it for rotation
for (Coordinate c : verts) {
    if (!name.startsWith("Player")) return; // this is here so only the player character is tested
    double theta = Math.toRadians(rotation);
    c.x = (int) (c.x * Math.cos(theta) - c.y * Math.sin(theta));
    c.y = (int) (c.x * Math.sin(theta) + c.y * Math.cos(theta));
}
getHitbox().vertices = verts;
I apologize for the poor video quality, but this is what the results of the above look like: https://www.youtube.com/watch?v=dF5k-Yb4hvE
All related classes are found here: https://github.com/joey101937/2DTemplate/tree/master/src/Framework
edit: The desired effect is for the box outline to follow the character in a circle while maintaining its aspect ratio, as seen here: https://www.youtube.com/watch?v=HlvXQrfazhA . The current system uses the code above; its effect can be seen in the previous video link. How should I modify the four 2D coordinates to maintain the relative aspect ratio throughout a rotation about a point?
The current rotation system is the following:
x = x*cos(theta) - y*sin(theta)
y = x*sin(theta) + y*cos(theta)
where theta is the degree of rotation in radians.
You made a classic mistake:
c.x = (int)(c.x*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(c.x*Math.sin(theta)+c.y*Math.cos(theta));
In the second line you use the already-modified value of c.x. Just save tempx = c.x before the calculation and use it:
tempx = c.x;
c.x = (int)(tempx*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(tempx*Math.sin(theta)+c.y*Math.cos(theta));
Another issue: rounding the coordinates to int after each rotation causes distortion and shrinking after a number of rotations. It would be wise to store the coordinates as floats and round them only for output, or to remember the starting values and apply the rotation by the accumulated angle to them.
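A sketch of the second suggestion, rebuilding the corners from their unrotated base each tick so rounding error never accumulates (it assumes the question's Coordinate class takes int x, y):

double theta = Math.toRadians(rotation);
double cos = Math.cos(theta), sin = Math.sin(theta);
int[][] base = {
    { -width / 2, -height / 2 }, {  width / 2, -height / 2 },
    { -width / 2,  height / 2 }, {  width / 2,  height / 2 }
};
Coordinate[] verts = new Coordinate[4];
for (int i = 0; i < 4; i++) {
    double bx = base[i][0], by = base[i][1];
    verts[i] = new Coordinate(
        (int) Math.round(bx * cos - by * sin),  // rotated x
        (int) Math.round(bx * sin + by * cos)); // rotated y
}
getHitbox().vertices = verts;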
I'm making a switch to generating matrices myself in Java, to pass into my OpenGL shaders.
I've created a method to generate the perspective matrix, and it works fine. But currently my objects are drawn at z position 0.0f, which means that when the app runs I can only see one of my custom objects (a square), really close up.
Should I be setting my camera's z position to 0.0f (which is what currently happens), or all my objects' z positions to 0.0f?
createPerspectiveProjection(60.0f, width / height, 0.1f, 100.0f);
/**
 * @param fov
 * @param aspect
 * @param zNear
 * @param zFar
 * @return projectionMatrix
 */
private Matrix4f createPerspectiveProjection(float fov, float aspect, float zNear, float zFar) {
    Matrix4f mat = new Matrix4f();
    float yScale = (float) (1 / Math.tan(Math.toRadians(fov / 2)));
    float xScale = yScale / aspect;
    float frustumLength = zFar - zNear;

    mat.m00 = xScale;
    mat.m11 = yScale;
    mat.m22 = -((zFar + zNear) / frustumLength);
    mat.m23 = -1;
    mat.m32 = -((2 * zFar * zNear) / frustumLength);
    mat.m33 = 0;

    return mat;
}
The standard projection matrix you are using corresponds to a camera placed at the origin, looking down the negative z-axis. The near and far values determine the range of negative z-values that are within the view frustum. With the values in the example, z-values between -0.1f and -100.0f are visible, as long as they're within the pyramid defined by the field-of-view angle, of course.
How you place your camera and objects in world space is completely up to you. If you emulate a traditional OpenGL rendering pipeline, you'll have a model-view matrix that transforms your objects to place them in the view frustum described above. This means that visible objects should have a negative z-value after the model-view transformation is applied.
The absolutely simplest way of achieving this is to use the identity transformation (i.e. no transformation at all) for the model-view transformation, and place your objects around the negative z-axis.
However, it's often convenient to have your objects placed somewhere around the origin. One simple way of allowing this to work is to place the camera on the positive z-axis, and point it at the origin. The model-view matrix then becomes particularly simple. It's only a translation in the negative z-direction, to shift the camera from its position on the positive z-axis to the origin, matching how the projection matrix was set up.
For example, with your near/far values of 0.1/100.0, you could place the camera at (0.0, 0.0, 50.0) in world space. The view transformation is then a translation by (0.0, 0.0, -50.0). The z-range from 49.9 to -50.0 in world space is then the visible range, allowing you to place your objects around the origin.
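As a minimal sketch of that setup, assuming LWJGL 2's org.lwjgl.util.vector.Matrix4f and Vector3f and the createPerspectiveProjection method from the question:

// Cast avoids integer division if width/height are ints.
Matrix4f projection = createPerspectiveProjection(60.0f, (float) width / height, 0.1f, 100.0f);

Matrix4f view = new Matrix4f();                              // identity
Matrix4f.translate(new Vector3f(0f, 0f, -50f), view, view);  // camera sits at z = +50 in world space

Matrix4f model = new Matrix4f();                             // identity: object placed at the origin
Matrix4f modelView = Matrix4f.mul(view, model, null);        // upload with projection to the shader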
Good day.
I would like to convert a screen x,y pixel location (the location a user tapped/clicked) to a lon/lat location on a map.
The current screen location is in a bounding box, of which you have the top left most and bottom right most lon/lat values.
When the screen is not rotated, it is quite simple to translate the x/y position to lon/lat values:
Let mapBoundingBox[0,1] contain the top-left-most lat/lon
and mapBoundingBox[2,3] the bottom-right-most lat/lon. Then:
degreesPerPixelWidth = abs(lon2 - lon1) / screenWidthInPixels
degreesPerPixelHeight = abs(lat2 - lat1) / screenHeightInPixels
From this you can then get lon/lat as follows:
float longitude = (touchXInPixels * degreesPerPixelWidth) + mapBoundingBox[1];
float latitude = (touchYInPixels * degreesPerPixelHeight) + mapBoundingBox[0];
This is easy enough. The problem I have is calculating the lat/lon values when the screen is rotated, i.e.:
From this, you can see that the screen has now been rotated by an angle θ, where -180 < θ < 180.
So let's assume the user clicks/taps on the screen FQKD at point (Sx, Sy). How can I get the lon/lat values where the user clicked, assuming that we have points Z and R in lat/lon, as well as the angle θ and the screen height and width in pixels?
Any and all help will be much appreciated!
I would just modify the standard rotation and scale algorithm for 2D. Read a bit here:
2dTransformations.
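For example, a sketch of that idea in Java: un-rotate the tap point about the screen centre, then apply the unrotated mapping from the question (touchX, touchY, screenRotationDegrees, and the other names are assumptions standing in for your own variables):

double theta = Math.toRadians(-screenRotationDegrees); // inverse of the screen rotation
double cx = screenWidthInPixels / 2.0, cy = screenHeightInPixels / 2.0;

double dx = touchX - cx, dy = touchY - cy;              // tap relative to the centre
double unrotatedX = dx * Math.cos(theta) - dy * Math.sin(theta) + cx;
double unrotatedY = dx * Math.sin(theta) + dy * Math.cos(theta) + cy;

// Now apply the unrotated mapping from the question:
float longitude = (float) (unrotatedX * degreesPerPixelWidth) + mapBoundingBox[1];
float latitude  = (float) (unrotatedY * degreesPerPixelHeight) + mapBoundingBox[0];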
The easiest way to achieve this is with matrices.
A 3x3 matrix can describe the rotation, translation & scale in 2D space.
Using this matrix you can project your map image onto the screen area. And using the inverse of the matrix, you can take a point in screen space back to map space.
Pseudocode (as you don't care what language):
Build your matrix:
var matrix = Matrix.newIdentity();
matrix.postAppendTranslate(tx, ty);
matrix.postAppendScale(zoom);
matrix.postAppendRotate(rot);
Render map image using that matrix.
To reverse a press:
var inverseMatrix = matrix.inverse();
var point = new float[]{touchPointX, touchPointY, 1};
var transformedPoint = inverseMatrix.multiply(point);
var mapX = transformedPoint[0];
var mapY = transformedPoint[1];
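As a concrete sketch of the same idea with android.graphics.Matrix (assuming tx, ty, zoom, rot, touchPointX, and touchPointY are defined by your code):

Matrix matrix = new Matrix();          // android.graphics.Matrix
matrix.postTranslate(tx, ty);
matrix.postScale(zoom, zoom);
matrix.postRotate(rot);
// ... render the map image with this matrix ...

Matrix inverse = new Matrix();
if (matrix.invert(inverse)) {          // inversion only fails for degenerate matrices
    float[] point = { touchPointX, touchPointY };
    inverse.mapPoints(point);          // point is now in map space
    float mapX = point[0];
    float mapY = point[1];
}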
After two hours of googling (here, here, here, here, and here, and a ton of others which I can't be bothered to find), I thought I had finally learnt the theory of turning 3D coordinates into 2D screen coordinates. But it isn't working. The idea is to translate the 3D coordinates of a ship into 2D coordinates on the screen to render the username of the player controlling that ship.
However, the text is rendering in the wrong location:
The text is "Test || 2DXCoordinate || 2DZCoordinate".
Here is my getScreenCoords() - Which converts the 3D coordinates to 2D.
public static int[] getScreenCoords(double x, double y, double z) {
    FloatBuffer screenCoords = BufferUtils.createFloatBuffer(4);
    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);

    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelView);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

    boolean result = GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);
    if (result) {
        return new int[] { (int) screenCoords.get(0), (int) screenCoords.get(1) };
    }
    return null;
}
screenCoords.get(0) returns a perfect X coordinate. However, screenCoords.get(1) drifts higher or lower depending on how far away I am from the ship. After many hours of debugging, I have narrowed it down to this line being incorrect:
GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);
However, I have no idea what is wrong. The X coordinate of the ship is fine.... Why not the Y?
According to BDL's answer, I am supplying the "wrong matrix" to gluProject(). But I don't see how that is possible, since I call the method right after I render my ship (Which is obviously in whatever matrix draws the ship).
I just can't fathom what is wrong.
Note: BDL's answer is perfectly adequate except that it does not explain why the Y coordinates are incorrect.
Note: This question used to be much longer and much more vague. I have posted my narrowed-down question above after hours of debugging.
You have to use the same projection matrix in gluProject that you use for rendering your ship. In your case, the ship is rendered using a perspective projection, but when you call gluProject, an orthographic projection is in use.
General theory about coordinate systems in OpenGL
In most cases the geometry of a model in your scene (e.g. the ship) is given in a model coordinate system; this is the space your vertex coordinates live in. When placing the model in your scene, we apply the model matrix to each vertex to get the coordinates the ship has in the scene. This coordinate system is called world space. When viewing the scene from a given viewpoint and viewing direction, another transformation is needed: one that transforms the scene such that the viewpoint is located at the origin (0,0,0) and the view direction is along the negative z-axis. This is the view coordinate system. The last step transforms view coordinates into NDC, which is done via a projection matrix.
In total we get the transformation of a vertex to the screen as:
v_screen = Projection * View * Model * v_model
In legacy OpenGL (which you are using), View and Model are stored together in the ModelView matrix.
(I have skipped some details here, such as the perspective divide, but this should be sufficient to understand the problem.)
Your problem
You already have the world-space position (x, y, z) of your ship, so the transformation by Model has already happened. What is left is:
v_screen = Projection * View * v_worldspace
From this we see that, in your case, the ModelView matrix passed to gluProject has to be exactly the View matrix.
I can't tell you where to get the view matrix in your code, since I don't know that part of your code.
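That said, here is a sketch of one way to capture it in LWJGL 2's fixed-function pipeline: read GL_MODELVIEW right after the camera transform is applied, before any per-ship glTranslate/glRotate, so gluProject receives View only (the ship position you pass in is already world space). applyCameraTransform is a placeholder for however your camera is set up:

FloatBuffer viewMatrix = BufferUtils.createFloatBuffer(16);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
applyCameraTransform();   // placeholder: your camera rotation/translation calls
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, viewMatrix);
// later: GLU.gluProject(x, y, z, viewMatrix, projection, viewport, screenCoords);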
I found an answer to my issue!
I used
font.drawString(drawx - offset, drawy, (sh.username + " || " + drawx + " | " + drawy), Color.orange);
When it should have been
font.drawString(drawx - offset, Display.getHeight() - drawy, (sh.username + " || " + drawx + " | " + drawy), Color.orange);
I'm developing a simple game. I have about 50 rectangles arranged in 10 columns and 5 rows. It wasn't a problem to make them fit the whole screen. But when I rotate the canvas, let's say by a 7° angle, the old coordinates no longer fit the new positions. In the constructor I already create and set the positions of the rectangles; in the onDraw method I draw them (of course they are already rotated), but I need some method to detect collisions with the current rectangle. I tried something like this (I did the rotation around the center point of the screen):
int newx = (int) ((x * Math.cos(ROTATE_ANGLE) - (y * Math.sin(ROTATE_ANGLE))) + width / 2);
int newy = (int) ((y * Math.cos(ROTATE_ANGLE) + (x * Math.sin(ROTATE_ANGLE))) + height / 2);
but it doesn't work (it gives me completely wrong new coordinates). x and y are the coordinates of the touch whose new position I'm trying to calculate after the rotation. ROTATE_ANGLE is the angle the screen is rotated by.
Does anybody know how to solve this problem? I've already gone through many articles, wikis, and WolframAlpha categories with no luck. Maybe I just need some link to understand the problem better.
Thank you
Use a rotation matrix:
Matrix mat = new Matrix(); //mat is identity
mat.postRotate(ROTATE_ANGLE); //mat is a rotation matrix of ROTATE_ANGLE degrees
float[] point = {10.0f, 20.0f}; //create a new float array representing the point (10, 20)
mat.mapPoints(point); //rotate the point by the requested amount
OK, I found the solution.
First, it is important to convert the angle from degrees to radians.
Then I personally needed to negate that radian value.
That's all; this solution is correct.
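Putting that together as a sketch (also translating to the screen centre before rotating, which the rotation-about-origin formula requires; x, y, width, and height are the variables from the question):

double theta = -Math.toRadians(ROTATE_ANGLE); // degrees to radians, then negate for the inverse rotation
float cx = width / 2f, cy = height / 2f;      // centre of the screen
float dx = x - cx, dy = y - cy;               // touch point relative to the centre

int newx = (int) (dx * Math.cos(theta) - dy * Math.sin(theta) + cx);
int newy = (int) (dx * Math.sin(theta) + dy * Math.cos(theta) + cy);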