Math vector/matrix multiplication - Java

As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions. To map a 3D vector's position from "world space" to viewport coordinates you write, for example,
"gl_Position = uModelViewProjectionMatrix * vPosition;" in your vertex shader.
My question is how you can multiply a 3D vector by a 4D matrix and get a 2D vector as a result and, more importantly, whether there is a function to do this on the CPU side (especially a library/class for Java on Android).

Just clarifying a few terms:
The viewport is the region within the window you're drawing to. It's specified in pixels.
The rasterizer needs coordinates in normalized device coordinates which is -1 to 1. It then maps these to the viewport area.
gl_Position must take a 4D vector in clip space. This is the space triangles are clipped in (for example, and particularly, if they intersect the near plane). This is separate from normalized device coordinates because the perspective divide hasn't happened yet. Doing this yourself would be pos /= pos.w, but that loses some information OpenGL needs for clipping, depth and interpolation.
This brings me to the answer. You're correct, you can't multiply a 3D vector by a 4x4 matrix. It's actually using homogeneous coordinates and the vector is 4D with a 1 at the end. The 4D result is for clip space. The rasterizer creates fragments with just the 2D position, but w is used for perspective correct interpolation and z is interpolated for depth testing.
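To make the homogeneous-coordinate step concrete, here is a minimal CPU-side sketch using Android's android.opengl.Matrix class (since the question mentions Android); the mvpMatrix parameter is assumed to be the same column-major 4x4 matrix the shader uses:

import android.opengl.Matrix;

// Pad a 3D point to homogeneous coordinates, multiply by a column-major
// 4x4 matrix and apply the perspective divide to get normalized device
// coordinates, each in [-1, 1] if the point is inside the viewing volume.
static float[] toNdc(float[] mvpMatrix, float x, float y, float z) {
    float[] point = { x, y, z, 1.0f };                   // pad with w = 1
    float[] clip = new float[4];
    Matrix.multiplyMV(clip, 0, mvpMatrix, 0, point, 0);  // clip = mvp * point
    float w = clip[3];
    return new float[] { clip[0] / w, clip[1] / w, clip[2] / w };
}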
Finally, the ModelViewProjection matrix implies the introduction of three more spaces. These are purely convention but with good reasons to exist. Mesh vertices are given in object space. You can place objects in the world with a model transformation matrix. You provide a camera position and rotation in the world with the view matrix. The projection matrix then defines the viewing volume by scaling everything for clip space. A reason to separate the view and projection matrices is for operations in eye space such as lighting calculations.
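For completeness, here is a hedged sketch of how those three matrices could be built and combined on the Android side with android.opengl.Matrix; the object position, camera setup, field of view and aspect ratio are only placeholder values:

import android.opengl.Matrix;

float[] model = new float[16], view = new float[16], proj = new float[16];
float[] viewProj = new float[16], mvp = new float[16];

// Object space -> world space: place the object at (2, 0, -5) as an example.
Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, 2f, 0f, -5f);

// World space -> eye space: camera at the origin, looking down -Z, +Y up.
Matrix.setLookAtM(view, 0, 0f, 0f, 0f,  0f, 0f, -1f,  0f, 1f, 0f);

// Eye space -> clip space: 45 degree vertical FOV, near plane 0.1, far plane 100.
Matrix.perspectiveM(proj, 0, 45f, 16f / 9f, 0.1f, 100f);

// Combine right-to-left: gl_Position = proj * view * model * position.
Matrix.multiplyMM(viewProj, 0, proj, 0, view, 0);
Matrix.multiplyMM(mvp, 0, viewProj, 0, model, 0);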
I won't go into any more detail, but hopefully this sets you on the right track.

As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions.
Sort of, but not exactly like that.
From a mathematical point of view, what matters is the dimension of the projection's kernel (the directions that get collapsed). And when it comes to the framebuffer, things don't end at the viewport coordinates: from there the components get "split", the (x, y) coordinates determine which fragment to touch, while the (z, w) coordinates are usually used for calculations that ultimately end up in the depth buffer.
My question is how you can multiply a 3D vector by a 4D matrix and get a 2D vector as a result
By padding the 3D vector to four elements; in terms of homogeneous coordinates, pad it with zeros except for the last element, which is set to 1. This allows you to multiply it with an n×4 matrix. And to get back to 2D you project it down into the lower-dimensional vector space; this is just like a 3D object projecting a 2D shadow onto a surface. The simplest projection is simply omitting the dimensions you're not interested in, like dropping z and w when going to the viewport.
is there a function to do this on the CPU side
There are several linear algebra libraries. Just pad the vectors accordingly to transform with higher-dimensional matrices, and project them to get back down to lower dimensions.
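As a small illustration of that final projection step (assuming a point already transformed to normalized device coordinates, for example by the sketch further up), mapping to pixel coordinates in a viewport might look like this; the method name and parameters are only illustrative:

// Drop z and w, then map NDC x and y from [-1, 1] to pixel coordinates
// inside a viewport of the given size.
static float[] ndcToViewport(float ndcX, float ndcY, int viewportW, int viewportH) {
    float px = (ndcX * 0.5f + 0.5f) * viewportW;
    float py = (ndcY * 0.5f + 0.5f) * viewportH;
    return new float[] { px, py };
}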

Related

Calculating coordinates of an oblique aerial image

I am using a GoPro HERO 4 on a drone to capture images that need to be georeferenced. Ideally I need coordinates of the captured image's corners relative to the drone.
I have the camera's:
Altitude
Horizontal and vertical field of view
Rotation in all 3 axes
I have found a couple of solutions but I can't quite translate them for my purposes. The closest one I found is here: https://photo.stackexchange.com/questions/56596/how-do-i-calculate-the-ground-footprint-of-an-aerial-camera, but I can't figure out how, and if, it's possible for me to use it, particularly when I have to take both pitch and roll into account.
Thanks for any help I get.
Edit: I code my software in Java.
If you have rotations in all three axes then you can use these matrices - http://planning.cs.uiuc.edu/node102.html - to construct a full (3x3) rotation matrix for your camera.
Assuming that, when the rotation matrix is the identity (i.e. in the camera's own frame), you have defined the camera's axes to be:
X axis for front
Y for side (left)
Z for up
In the camera frame, the rays toward the four image corners have directions (1, ±tan(hFOV/2), ±tan(vFOV/2)) along the (X, Y, Z) axes above.
Calculate these directions and rotate them using the matrix to get the real-world ray directions. Use the camera's real-world coordinates as the ray origin.
To calculate the points on the ground: https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm
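Putting this together, here is a rough Java sketch under the assumptions above (X forward, Y left, Z up, yaw-pitch-roll matrices as on the linked page, flat ground at z = 0 and the camera at z = altitude); all names are illustrative and angles are in radians:

// Rotation matrix R = Rz(yaw) * Ry(pitch) * Rx(roll).
static double[][] rotationMatrix(double yaw, double pitch, double roll) {
    double cy = Math.cos(yaw),   sy = Math.sin(yaw);
    double cp = Math.cos(pitch), sp = Math.sin(pitch);
    double cr = Math.cos(roll),  sr = Math.sin(roll);
    return new double[][] {
        { cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr },
        { sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr },
        { -sp,     cp * sr,                cp * cr               }
    };
}

static double[] multiply(double[][] m, double[] v) {
    return new double[] {
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2]
    };
}

// Intersect one corner ray with flat ground at z = 0. The camera sits at
// (0, 0, altitude); signY and signZ (each +1 or -1) select the corner.
static double[] groundCorner(double[][] r, double altitude,
                             double hFov, double vFov, int signY, int signZ) {
    // Ray direction in the camera frame: X forward, Y left, Z up.
    double[] camRay = { 1.0, signY * Math.tan(hFov / 2), signZ * Math.tan(vFov / 2) };
    double[] worldRay = multiply(r, camRay);
    if (worldRay[2] >= 0) return null;       // ray never reaches the ground
    double t = -altitude / worldRay[2];      // solve: altitude + t * dz = 0
    return new double[] { t * worldRay[0], t * worldRay[1], 0.0 };
}

The returned points are relative to the drone's position on the ground, which matches what the question asks for.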

Computer Graphics: Practical translate to origin method

What I'm trying to do
I'm trying to implement translations on polygons manually, e.g. with plain Java AWT and no OpenGL, in order to understand the concepts better.
I want to perform a "translation to origin" before (and hence after) I do a scaling / rotation on my object, but then comes the question: relative to which vertex of the polygon do I calculate the distance to the origin?
Intuition
My intuition is to calculate the distance of each vertex to the origin, and once I find the closest one, calculate the x, y values required to translate it to the origin, and then translate all the polygon's vertices by those x, y.
Am I right?
Another catch
I've implemented a view through a camera in my program, so I've got a full viewing pipeline that takes polygons defined in world coordinates, transforms all coordinates to viewing coordinates, projects them to 2D and then transforms them to viewport coordinates.
My camera has its position, a lookAt point and an up vector, and I want the scaling / rotations to be done with regard to the lookAt point.
How can I achieve this? Does it just mean translating each polygon to the lookAt point instead of to the origin?
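Not a full answer, but a minimal sketch of the usual translate-transform-translate-back pattern being asked about, assuming the pivot is whatever reference point you choose (for example the polygon's centre, or the camera's lookAt point expressed in the same coordinate space); the helper name is made up:

import java.awt.geom.Point2D;

// Scale a polygon about an arbitrary pivot: translate the pivot to the
// origin, scale, then translate back. The same pattern works for rotation.
// Every vertex gets the same translation, determined only by the pivot.
static void scaleAboutPivot(Point2D.Double[] vertices, Point2D.Double pivot,
                            double sx, double sy) {
    for (Point2D.Double v : vertices) {
        double x = v.x - pivot.x;      // translate pivot to origin
        double y = v.y - pivot.y;
        x *= sx;                       // scale
        y *= sy;
        v.x = x + pivot.x;             // translate back
        v.y = y + pivot.y;
    }
}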

Implementing trapezoidal sprites in LibGDX

I'm trying to create a procedural animation engine for a simple 2D game, that would let me create nice looking animations out of a small number of images (similar to this approach, but for 2D: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach)
At the moment I have keyframes which hold data for different animation objects; the keyframes are arrays of floats representing the following:
translateX, translateY, scaleX, scaleY, rotation (degrees)
I'd like to add skewX, skewY, taperTop, and taperBottom to this list, but I'm having trouble properly rendering them.
This was my attempt at implementing a taper to the top of the sprite to give it a trapezoid shape:
float[] vert = sprite.getVertices();
vert[5] += 20; // top-left vertex x co-ordinate
vert[10] -= 20; // top-right vertex x co-ordinate
batch.draw(texture, vert, 0, vert.length);
Unfortunately this is producing some weird texture morphing.
I had a bit of a Google and a look around StackOverflow and found this, which appears to be the problem I'm having:
http://www.xyzw.us/~cass/qcoord/
However I don't understand the maths behind it (what are s, t, r and q?).
Can someone explain it a bit simpler?
Basically, the less a quad resembles a rectangle, the worse it looks, due to the effect of linearly interpolating the texture coordinates across the shape. The two triangles that make up the quad are stretched to different sizes, so linear interpolation makes the seam very noticeable.
The texture coordinates of each vertex are linearly interpolated for each fragment that the fragment shader processes. Texture coordinates typically are stored with the size of the object already divided out, so the coordinates are in the range of 0-1, corresponding with the edges of the texture (and values outside this range are clamped or wrapped around). This is also typically how any 3D modeling program exports meshes.
With a trapezoid, we can limit the distortion by pre-multiplying the texture coordinates by the width and then post-dividing the width out of the texture coordinates after linear interpolation. This is like curving the diagonal between the two triangles such that its slope is more horizontal at the corner that is on the wider side of the trapezoid. Here's an image that helps illustrate it.
Texture coordinates are usually expressed as a 2D vector with components U and V, also known as S and T. But if you want to divide the size out of the components, you need one more component that you are going to divide by after interpolation, and this is called the Q component. (The P component would be used as the third position in the texture if you were looking up something in a 3D texture instead of a 2D texture).
Now here comes the hard part... libgdx's SpriteBatch doesn't support the extra vertex attribute necessary for the Q component. So you can either clone SpriteBatch and carefully go through and modify it to have an extra component in the texCoord attribute, or you can try to re-purpose the existing color attribute, although it's stored as an unsigned byte.
Regardless, you will need pre-width-divided texture coordinates. One way to simplify this is, instead of using the actual widths of the quad at the four vertices, to take the ratio of the top and bottom widths of the trapezoid, so we can treat the top edge as having width 1 and therefore leave its vertices alone.
float bottomWidth = taperBottom / taperTop;
Then you need to modify the TextureRegion's existing texture coordinates to pre-multiply them by the widths. We can leave the vertices on the top side of the trapezoid alone because of the above simplification, but the U and V coordinates of the two bottom vertices need to be multiplied by bottomWidth, and their Q component set to bottomWidth (the top vertices keep Q = 1). You would need to recalculate them and put them into your vertex array every time you change the TextureRegion or one of the taper values, as in the sketch below.
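As a rough illustration (not actual libGDX API beyond the TextureRegion corner UVs, and assuming the usual vertex order bottom-left, top-left, top-right, bottom-right with the top edge being the tapered one), the per-vertex (U, V, Q) values could be computed like this:

// Illustrative helper: produce per-vertex (U, V, Q) triplets for the
// trapezoid. Top vertices keep Q = 1; bottom vertices are pre-multiplied
// by bottomWidth and carry Q = bottomWidth, so dividing by Q in the
// fragment shader recovers perspective-correct coordinates.
static float[] trapezoidTexCoords(float u, float v, float u2, float v2,
                                  float taperTop, float taperBottom) {
    float bottomWidth = taperBottom / taperTop;
    return new float[] {
        u  * bottomWidth, v2 * bottomWidth, bottomWidth,  // bottom-left
        u,                v,               1f,            // top-left
        u2,               v,               1f,            // top-right
        u2 * bottomWidth, v2 * bottomWidth, bottomWidth   // bottom-right
    };
}

These values would then be packed into whatever extra attribute your modified batch provides for the third texture-coordinate component.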
In the vertex shader, you would need to pass the extra Q component to the fragment shader. In the fragment shader, we normally look up our texture color using the size-divided texture coordinates like this:
vec4 textureColor = texture2D(u_texture, v_texCoords);
but in our case we still need to divide by that Q component:
vec4 textureColor = texture2D(u_texture, v_texCoords.st / v_texCoords.q);
However, this causes a dependent texture read because we are modifying a vector before it is passed into the texture function. GLSL provides a function that automatically does the above (and I assume does not cause a dependent texture read):
vec4 textureColor = texture2DProj(u_texture, v_texCoords); //first two components automatically divided by last component

How to extrude a shape to a volume

I am currently trying to show a series of images that slightly differ from each other in a 3D view, and which contain lots of transparent areas (for example, points that move in time inside a rectangle, and I would provide a 3D view with all their positions over time).
What I'm doing now is generate an image with the points drawn in it, create one box of 40x40x1 per frame (or a rectangular shape of 40x40), apply the image as a texture to the FRONT side of the box, and add the boxes to my scene at positions (0, 0, z) where z is the frame number.
It works quite well, but of course there are discontinuities (of 1 "meter") between the images.
I would like to know if there is a way to create an "extrusion" object based on that image so as to fill the space between the planes. This would be equivalent to creating one 1x1x1 box for each point, placing them at (x, y, z) where x/y are the point's coordinates and z is the frame number. The actual problem is that I have lots of points (several hundreds, if not thousands in some cases), and what was relatively easy to handle and render with an image would, I think, become quite heavy to render if I have to create thousands of boxes.
Thanks in advance for your help,
Frederic.
You could use a 3D texture with your data, (40 x 40 x N) pixels, where N is the number of frames.
But you still have to draw something with this texture enabled.
I would do what you are doing currently - draw quads, but not only along the Z axis, but along X and Y too.
Each of the N quads along the Z axis would be 40x40 in size, each of the 40 quads along the X axis would be 40xN, and each of the 40 quads along the Y axis would be Nx40.
So for a 2x2x2 texture we would draw 2+2+2 = 6 quads and it would look like a regular cube; for 3x3x3 points in the texture we would draw 3+3+3 quads and it would look like 8 cubes stacked into one big cube (so instead of 8 cubes with 6 quads each we just draw 9 quads, but the effect is the same).
For 40x40x1000 it would be 1080 quads (reasonable to draw in real time, IMHO) instead of 40*40*1000*6 quads.
I'm just not sure whether the graphical effect would be exactly what you want to achieve.
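To illustrate the quad counting, here is a hedged sketch (independent of any particular engine) that only generates the slice-quad geometry and 3D texture coordinates; uploading the 3D texture and drawing the quads is left to whatever API you use:

import java.util.ArrayList;
import java.util.List;

// Slice-quad geometry for a width x height x depth volume: width + height
// + depth quads in total (e.g. 40 + 40 + 1000 = 1080 for 40x40x1000).
// Each quad is 4 vertices of (x, y, z, s, t, r), with (s, t, r) being the
// 3D texture coordinates in [0, 1].
static List<float[][]> buildSliceQuads(int width, int height, int depth) {
    List<float[][]> quads = new ArrayList<>();
    for (int k = 0; k < depth; k++) {            // slices perpendicular to Z
        float z = k + 0.5f, r = (k + 0.5f) / depth;
        quads.add(new float[][] {
            { 0, 0, z, 0, 0, r }, { width, 0, z, 1, 0, r },
            { width, height, z, 1, 1, r }, { 0, height, z, 0, 1, r } });
    }
    for (int i = 0; i < width; i++) {            // slices perpendicular to X
        float x = i + 0.5f, s = (i + 0.5f) / width;
        quads.add(new float[][] {
            { x, 0, 0, s, 0, 0 }, { x, height, 0, s, 1, 0 },
            { x, height, depth, s, 1, 1 }, { x, 0, depth, s, 0, 1 } });
    }
    for (int j = 0; j < height; j++) {           // slices perpendicular to Y
        float y = j + 0.5f, t = (j + 0.5f) / height;
        quads.add(new float[][] {
            { 0, y, 0, 0, t, 0 }, { width, y, 0, 1, t, 0 },
            { width, y, depth, 1, t, 1 }, { 0, y, depth, 0, t, 1 } });
    }
    return quads;
}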

Flipping a gui Component in the x-z plane

I would like to know the graphics transformations needed to create a flipping effect of a UI component about the x-z plane. It needs to be done using only 2D, since the Swing toolkit only supports 2D affine transformations.
http://www.verysimple.com/flex/flipcard/ is an example of the effect to be achieved.
It's not true 3-D flipping, but the effect looks very similar if you just do 2-D scaling like this:
Render the front image.
Scale X from 1 to 0, anchored at the middle.
Render the back image.
Scale X from 0 to 1, anchored at the middle.
To simulate a constant angular speed, the scaling factor can be calculated like this:
double scale = Math.cos(i*Math.PI/(2.0*steps));
Here i is the step number and steps is the total number of steps needed to simulate a 90-degree rotation.
You can also introduce some shear transformation to simulate the perspective of a true 3-D rotation, but the effect is not that noticeable for a fast flip.
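A minimal Swing/AWT sketch of those steps, assuming front and back are pre-rendered BufferedImages of the component and steps is the number of animation ticks per quarter turn (all names here are placeholders):

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// Draw one step of the flip at position (x, y). Steps 0..steps shrink the
// front image to a vertical line; steps..2*steps grow the back image again.
// Scaling is anchored at the horizontal middle of the card.
static void drawFlipStep(Graphics2D g, BufferedImage front, BufferedImage back,
                         int x, int y, int i, int steps) {
    BufferedImage img = (i < steps) ? front : back;
    // |cos| goes 1 -> 0 over the first half and 0 -> 1 over the second half.
    double scale = Math.abs(Math.cos(i * Math.PI / (2.0 * steps)));
    int w = img.getWidth();
    AffineTransform t = new AffineTransform();
    t.translate(x + w / 2.0, y);      // move anchor to the card's middle
    t.scale(scale, 1.0);              // squash horizontally
    t.translate(-w / 2.0, 0);         // move back
    g.drawImage(img, t, null);
}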
