Computer Graphics Coordinates with glu (OpenGL) - java

I am having difficulty figuring out how the coordinate system in GLU works; I have several problems to solve.
GLJPanel canvas = new GLJPanel();
frame.setSize(400,600); // Size in pixels of the frame we draw on
frame.getContentPane().add(canvas);
glu.gluOrtho2D(-100.0, 100.0, -200.0, 200.0); // world coordinates: x in [-100, 100], y in [-200, 200]
gl.glViewport(100, 100, 200, 300); // viewport: lower-left corner at (100, 100), 200 px wide, 300 px tall
If a point has world coordinates (-50,-75), what are its coordinates in the viewport coordinate system?
and another one (not really specific code):
gluOrtho2D(-1.0, 0.0, -1.5, 0.0) and glViewport(0,300,200,300)
gluOrtho2D(0.0, 1.0, 0.0, 1.5) and glViewport(200,0,200,300)
Where would the two truncated genie curves be positioned?
Now I think I would be able to solve these, but am lost on how the coordinate system works.

The world coordinates are arbitrary, and you get to choose them. In this case, (-50, -75).
The MVP matrix and projection transformation convert these to clip space coordinates which vary from (-1, -1, -1) to (+1, +1, +1). In this case, (-0.5, -0.375). This conversion is affected by your use of gluOrtho2D(), or in more modern programs, the output of the vertex shader.
The viewport coordinates are pixels, from (100, 100) to (300, 400) in this case. You just scale the clip space coordinates to convert. The pixel centers are located at half-integer coordinates, so the lower-left pixel of the window is at (0.5, 0.5). Your point is located at (150, 193.75). This conversion is affected by the use of glViewport().
I have no idea what a "genie curve" is.
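To make the arithmetic concrete, here is a small plain-Java sketch of the same mapping (the method and its hard-coded ranges are just an illustration of the question's gluOrtho2D/glViewport values, not an OpenGL API):
static float[] worldToWindow(float wx, float wy) {
    // gluOrtho2D(-100, 100, -200, 200): map world coordinates to NDC in [-1, 1]
    float ndcX = (wx + 100f) / 200f * 2f - 1f;    // -50 -> -0.5
    float ndcY = (wy + 200f) / 400f * 2f - 1f;    // -75 -> -0.375
    // glViewport(100, 100, 200, 300): map NDC to window pixels
    float winX = 100f + (ndcX + 1f) / 2f * 200f;  // -> 150
    float winY = 100f + (ndcY + 1f) / 2f * 300f;  // -> 193.75
    return new float[] { winX, winY };
}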

Related

Misunderstanding of multiplyMV - OpenGL ES 2.0

I wrote a simple program using OpenGL ES 2.0 and Java for Android.
This program draws a point at a random position on the screen using an ortho projection matrix, a view matrix and a model matrix. In the shader I put matrix * position.
All works well, but then I tried, for testing purposes, to calculate the position of the point myself, so I used multiplyMV with my MVP matrix (obtained by first using multiplyMM on the projection and view matrices, and then on the result and the model matrix) and my point (for example 2f, 3.5f, 0f, 1f) as arguments. The problem is that sometimes the result I get for x and/or y is greater than 1 or smaller than -1, despite the fact that the point is on the screen. But in normalized device coordinates the point must be in the range -1 to 1 in order to be "on screen".
I really don't understand where my mistake is.
The point has to be in normalized device space [-1, 1] (for x, y, z), after the projection matrix is applied.
But note the point may be a Homogeneous coordinate. This means you have to do a Perspective divide first.
The projection matrix describes the mapping from 3D points of a scene, to 2D points of the viewport. The projection matrix transforms from view space to the clip space. The coordinates in the clip space are transformed to the normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing with the w component of the clip coordinates.
Clip space coordinates are Homogeneous coordinates. In clip space the clipping of the scene is performed. A point is in clip space if the x, y and z components are in the range defined by the inverted w component and the w component of the homogeneous coordinates of the point:
-p.w <= p.x, p.y, p.z <= p.w.
The clip coordinates are transformed to normalized device coordinates by doing the Perspective divide:
ndc.xyz = (p.x/p.w, p.y/p.w, p.z/p.w)
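A small sketch of that check in Java, using android.opengl.Matrix as in the question (projectionMatrix, viewMatrix and modelMatrix are assumed to be the 16-float column-major arrays you already built):
float[] vp  = new float[16];
float[] mvp = new float[16];
Matrix.multiplyMM(vp, 0, projectionMatrix, 0, viewMatrix, 0);   // projection * view
Matrix.multiplyMM(mvp, 0, vp, 0, modelMatrix, 0);               // (projection * view) * model

float[] point = { 2f, 3.5f, 0f, 1f };   // the point from the question, w = 1
float[] clip  = new float[4];
Matrix.multiplyMV(clip, 0, mvp, 0, point, 0);                   // homogeneous clip-space result

// Perspective divide: only after this are the values normalized device coordinates.
// (For a pure orthographic projection w stays 1, so the divide changes nothing.)
float ndcX = clip[0] / clip[3];
float ndcY = clip[1] / clip[3];
float ndcZ = clip[2] / clip[3];
// each of ndcX, ndcY, ndcZ must be in [-1, 1] for the point to be on screen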

LibGDX: Where is the X-/Y-Coordinate of a camera?

I'm wondering how the orthographic camera in LibGDX is positioned.
Is X bottom left or center or on right(etc)? And how it is with the Y?
I know its a simple question, but I'm messing around with my cam at this moment and I need some help :D
The LibGDX camera's position always corresponds to the CENTER of your screen.
For example, if your viewportWidth and viewportHeight are (800, 480), its coordinates will be (400, 240).
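For instance (a minimal sketch, assuming the camera was set up with the usual y-up setToOrtho call):
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, 800, 480);   // y-up, 800 x 480 viewport
// camera.position is now (400, 240, 0): the point shown at the center of the screen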
In LibGDX we have lots of coordinate systems (not only in LibGDX, this applies to other engines/frameworks too). The camera is also a game object like any other and thus is positioned like other objects.
The only difference with cameras is that they don't have a width and height in the same sense as other objects. They are always a zero-size point and capture a rectangle (called the viewport) centered on that point.
In your game if you use only one camera, what you see is the viewport that the only existent camera captures. So, if a sprite is on (0, 0) and also your camera is on (0, 0), you'll see that sprite exactly on the center of your screen.
Here's an example project using an orthographic camera.
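A minimal sketch of that setup (batch and sprite are placeholder names for your SpriteBatch and Sprite):
OrthographicCamera camera = new OrthographicCamera(800, 480);
camera.position.set(0, 0, 0);          // camera centered on the world origin
camera.update();

batch.setProjectionMatrix(camera.combined);
batch.begin();
sprite.setCenter(0, 0);                // a sprite at (0, 0) ...
sprite.draw(batch);                    // ... shows up in the middle of the screen
batch.end();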

Implementing trapezoidal sprites in LibGDX

I'm trying to create a procedural animation engine for a simple 2D game, that would let me create nice looking animations out of a small number of images (similar to this approach, but for 2D: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach)
At the moment I have keyframes which hold data for different animation objects, the keyframes are arrays of floats representing the following:
translateX, translateY, scaleX, scaleY, rotation (degrees)
I'd like to add skewX, skewY, taperTop, and taperBottom to this list, but I'm having trouble properly rendering them.
This was my attempt at implementing a taper to the top of the sprite to give it a trapezoid shape:
float[] vert = sprite.getVertices();
vert[5] += 20; // top-left vertex x co-ordinate
vert[10] -= 20; // top-right vertex x co-ordinate
batch.draw(texture, vert, 0, vert.length);
Unfortunately this is producing some weird texture morphing.
I had a bit of a Google and a look around StackOverflow and found this, which appears to be the problem I'm having:
http://www.xyzw.us/~cass/qcoord/
However I don't understand the maths behind it (what are s, t, r and q?).
Can someone explain it a bit simpler?
Basically, the less a quad resembles a rectangle, the worse the appearance due to the effect of linearly interpolating the texture coordinates across the shape. The two triangles that make up the quad are stretched to different sizes, so linear interpolation makes the seam very noticeable.
The texture coordinates of each vertex are linearly interpolated for each fragment that the fragment shader processes. Texture coordinates typically are stored with the size of the object already divided out, so the coordinates are in the range of 0-1, corresponding with the edges of the texture (and values outside this range are clamped or wrapped around). This is also typically how any 3D modeling program exports meshes.
With a trapezoid, we can limit the distortion by pre-multiplying the texture coordinates by the width and then post-dividing the width out of the texture coordinates after linear interpolation. This is like curving the diagonal between the two triangles such that its slope is more horizontal at the corner that is on the wider side of the trapezoid. Here's an image that helps illustrate it.
Texture coordinates are usually expressed as a 2D vector with components U and V, also known as S and T. But if you want to divide the size out of the components, you need one more component that you are going to divide by after interpolation, and this is called the Q component. (The P component would be used as the third position in the texture if you were looking up something in a 3D texture instead of a 2D texture).
Now here comes the hard part... libgdx's SpriteBatch doesn't support the extra vertex attribute necessary for the Q component. So you can either clone SpriteBatch and carefully go through and modify it to have an extra component in the texCoord attribute, or you can try to re-purpose the existing color attribute, although it's stored as an unsigned byte.
Regardless, you will need pre-width-divided texture coordinates. One way to simplify this is to, instead of using the actual size of the quad for the four vertices, get the ratio of the top and bottom widths of the trapezoid, so we can treat the top parts as width of 1 and therefore leave them alone.
float bottomWidth = taperBottom / taperTop;
Then you need to modify the TextureRegion's existing texture coordinates to pre-multiply them by the widths. We can leave the vertices on the top side of the trapezoid alone because of the above simplification, but the U and V coordinates of the two bottom (wide-side) vertices need to be multiplied by bottomWidth, as sketched below. You would need to recalculate them and put them into your vertex array every time you change the TextureRegion or one of the taper values.
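As a rough illustration of that step (this assumes a custom six-float vertex layout [x, y, color, u, v, q] in a modified batch, since the stock SpriteBatch has no q slot; verts, widthTop and widthBottom are placeholders for the vertex array and the screen-space widths of the two parallel edges):
// q must be proportional to the width of the edge each vertex sits on; only the ratio matters
float qTop    = 1f;                      // top edge treated as width 1, left alone
float qBottom = widthBottom / widthTop;  // wide bottom edge gets the larger q

// libgdx vertex order: 0 = bottom-left, 1 = top-left, 2 = top-right, 3 = bottom-right
for (int i = 0; i < 4; i++) {
    float q = (i == 1 || i == 2) ? qTop : qBottom;
    verts[i * 6 + 3] *= q;   // u, pre-multiplied by q
    verts[i * 6 + 4] *= q;   // v, pre-multiplied by q
    verts[i * 6 + 5]  = q;   // q itself, divided back out in the fragment shader
}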
In the vertex shader, you would need to pass the extra Q component to the fragment shader. In the fragment shader, we normally look up our texture color using the size-divided texture coordinates like this:
vec4 textureColor = texture2D(u_texture, v_texCoords);
but in our case we still need to divide by that Q component:
vec4 textureColor = texture2D(u_texture, v_texCoords.st / v_texCoords.q);
However, this causes a dependent texture read because we are modifying a vector before it is passed into the texture function. GLSL provides a function that automatically does the above (and I assume does not cause a dependent texture read):
vec4 textureColor = texture2DProj(u_texture, v_texCoords); //first two components automatically divided by last component

How to rotate a single image around a remote point

I'm developing a tube-shooter-esque game in Java that simulates 3D without actually using any 3D libraries. Right now I have a player-controlled ship that rotates around the center point of the screen, using the following (in this case, for moving right):
angle += 0.1;
x = Math.cos(angle) * radius + cX;
y = Math.sin(angle) * radius + cY;
Where angle is the placement in relation to the center point (ex. 270 is directly under the center), x and y are the current ship position, radius is the distance from the center, and cX and cY are the center point's location.
Right now revolving around the point works smoothly, but I'm not sure how to handle rotating the actual ship to always point towards the center. I've looked around a lot online but can't figure out how an individual Image (or if that doesn't work, an array of drawLines) can be rotated without affecting other objects on the screen.
Long story short, how would one go about rotating an individual Image to constantly point towards a remote x,y location?
What you need is the AffineTransform class, which is basically a matrix class in Java. Graphics2D has a drawImage variant which accepts an AffineTransform instance:
boolean java.awt.Graphics2D.drawImage(Image img, AffineTransform xform, ImageObserver obs)
To create a transform, you can use 2D matrix operations:
AffineTransform trans = new AffineTransform();
trans.translate(x, y);
trans.rotate(theta);
trans.scale(scalex, scaley);
etc...
Mind that the order is important: you will probably want to scale first, then rotate, and then translate the image to the corresponding location. That should do fine.
Java uses some 3D hardware acceleration to draw as fast as it can; it is faster than a pure software renderer, but still quite far from native OpenGL.
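Putting it together for your case, a sketch along these lines should work (shipImage, g2d, x, y, cX and cY stand in for your image, Graphics2D and the variables from your movement code; it assumes the ship art points along the +x axis when unrotated):
AffineTransform trans = new AffineTransform();
trans.translate(x, y);                                   // place the ship at its orbit position
trans.rotate(Math.atan2(cY - y, cX - x));                // aim it at the center point (cX, cY)
trans.translate(-shipImage.getWidth(null) / 2.0,         // rotate and draw about the image center
                -shipImage.getHeight(null) / 2.0);
g2d.drawImage(shipImage, trans, null);
Only this one drawImage call is affected by the transform, so nothing else on screen is rotated.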

GeneralPath not drawing with changed scale

I'm currently making a sidescroller game which uses randomly generated terrain that scrolls in the background. The terrain is basically an instance of the GeneralPath class. When the terrain generates, the corners (0, 0) and (width, 0) are included in the path since the height of the viewing canvas isn't known yet. To make the terrain appear right-side-up, I added the following lines of code:
g.translate(0, getHeight());
g.scale(0, -1);
This should flip the coordinate system into Cartesian format with the bottom left being 0, 0.
For some reason, the terrain isn't drawing. When I comment out these lines, it works, but is upside-down. If I only comment out the scale command and change the translation to a smaller amount, it also draws successfully (upside-down and translated a small amount).
Thanks in advance!
The problem is the x scale factor: g.scale(0, -1) scales the x axis by zero, which collapses the whole path to nothing. Keep it at 1 - g.scale(1, -1);
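So the corrected version of the snippet from the question would be (terrainPath stands in for your GeneralPath instance):
g.translate(0, getHeight());   // move the origin to the bottom-left corner
g.scale(1, -1);                // flip the y axis; the x factor must stay 1, not 0
g.draw(terrainPath);           // now draws right-side-up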
