I would like to know the graphics transformations involved in creating a flipping effect of a UI component about the x-z plane. It needs to be done using only 2D transforms, since the Swing toolkit only supports 2D affine transformations.
http://www.verysimple.com/flex/flipcard/ is an example of the effect to be achieved.
It's not a true 3-D flip, but the effect looks very similar if you just do 2-D scaling like this:
1. Render the front image.
2. Scale X from 1 to 0, anchored at the middle.
3. Render the back image.
4. Scale X from 0 to 1, anchored at the middle.
To simulate a constant angular speed, the scaling factor can be calculated like this:
double scale = Math.cos(i*Math.PI/(2.0*steps));
where i is the step number and steps is the total number of steps needed to simulate a 90-degree rotation.
You can also introduce a shear transformation to simulate the perspective of a true 3-D rotation, but the effect is not that noticeable for a fast flip.
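Here is a minimal sketch of the whole sequence in Swing; the image fields, step count, and timer delay are placeholder choices, not anything prescribed above:

import java.awt.*;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import javax.swing.*;

// Minimal flip sketch: "front" and "back" are pre-rendered snapshots
// of the component's two faces (placeholder names).
class FlipPanel extends JPanel {
    private final BufferedImage front, back;
    private final int steps = 30;   // frames per 90-degree half-flip
    private int i = 0;              // current step, 0..2*steps

    FlipPanel(BufferedImage front, BufferedImage back) {
        this.front = front;
        this.back = back;
        new Timer(15, e -> {        // animation tick, roughly 66 fps
            if (++i >= 2 * steps) ((Timer) e.getSource()).stop();
            repaint();
        }).start();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        boolean firstHalf = i <= steps;
        // cos() gives the constant angular speed: 1 -> 0, then 0 -> 1
        double scale = firstHalf
                ? Math.cos(i * Math.PI / (2.0 * steps))
                : Math.cos((2 * steps - i) * Math.PI / (2.0 * steps));
        BufferedImage img = firstHalf ? front : back;
        AffineTransform t = new AffineTransform();
        t.translate(getWidth() / 2.0, 0);      // anchor the scale at the middle
        t.scale(scale, 1.0);
        t.translate(-img.getWidth() / 2.0, 0);
        ((Graphics2D) g).drawImage(img, t, null);
    }
}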
As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions. To map a 3D vector's position from "world space" to viewport coordinates you write, for example,
"gl_Position = uModelViewProjectionMatrix * vPosition" in your vertex shader.
My question is how you can multiply a 3D vector by a 4x4 matrix and get a 2D vector as a result, and, more importantly, whether there is a function to do this on the CPU side (especially a library/class for Java on Android).
Just clarifying a few terms:
The viewport is the region within the window you're drawing to. It's specified in pixels.
The rasterizer needs coordinates in normalized device coordinates, which range from -1 to 1. It then maps these to the viewport area.
gl_Position must take a 4D vector in clip space. This is the space triangles are clipped in (for example, and particularly, if they intersect the near plane). It is separate from normalized device coordinates because the perspective divide hasn't happened yet. Doing the divide yourself would be pos /= pos.w, but that loses some information OpenGL needs for clipping, depth testing, and interpolation.
This brings me to the answer. You're correct: you can't multiply a 3D vector by a 4x4 matrix. It's actually using homogeneous coordinates, so the vector is 4D with a 1 at the end. The 4D result is in clip space. The rasterizer creates fragments with just the 2D position, but w is used for perspective-correct interpolation and z is interpolated for depth testing.
Finally, the ModelViewProjection matrix implies the introduction of three more spaces. These are purely conventional, but they exist for good reasons. Mesh vertices are given in object space. You place objects in the world with a model transformation matrix. You provide the camera position and rotation in the world with the view matrix. The projection matrix then defines the viewing volume by scaling everything for clip space. One reason to keep the view and projection matrices separate is for operations in eye space, such as lighting calculations.
I won't go into any more detail, but hopefully this sets you on the right track.
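To make the divide and the viewport mapping concrete, here is a plain-Java sketch of the two steps OpenGL performs after the vertex shader; the helper name is made up:

// clip = {x, y, z, w} as written to gl_Position by the vertex shader
static float[] clipToViewport(float[] clip, int vpWidth, int vpHeight) {
    float ndcX = clip[0] / clip[3];  // perspective divide: clip space
    float ndcY = clip[1] / clip[3];  // -> normalized device coordinates
    float ndcZ = clip[2] / clip[3];  // (OpenGL does this after clipping)
    // viewport transform: [-1,1] -> [0,width] and [0,height], in pixels
    float px = (ndcX * 0.5f + 0.5f) * vpWidth;
    float py = (ndcY * 0.5f + 0.5f) * vpHeight;
    return new float[] { px, py, ndcZ };  // ndcZ feeds the depth test
}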
As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions.
Sort of, but not exactly like that.
From a mathematical point of view, what matters is the dimension of the kernel of the projection. When it comes to the framebuffer, things don't end at the viewport coordinates. From there things get "split": the (x, y) coordinates are used to determine which fragment to touch, and the (z, w) coordinates are usually used for calculations that ultimately end up in the depth buffer.
My question is how you can multiply a 3D vector by a 4x4 matrix and get a 2D vector as a result
By padding the 3D vector to four elements: in terms of homogeneous coordinates, pad it with zeros except for the last element, which is set to 1. This allows you to multiply with an n×4 matrix. To get back to 2D you project it down into the lower-dimensional vector space, just like a 3D object projects a 2D shadow onto a surface. The simplest projection is to omit the dimensions you're not interested in, like dropping z and w when going to the viewport.
is there a function to do this on CPU side
There are several linear algebra libraries. Just pad the vectors accordingly to transform with higher-dimensional matrices, and project them to get back to lower dimensions.
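On Android specifically, android.opengl.Matrix can do the 4x4 transform on the CPU. A sketch, assuming mvpMatrix holds your combined column-major ModelViewProjection matrix:

import android.opengl.Matrix;

// Pad the 3D point to homogeneous coordinates, transform, then
// divide and drop components to project down to 2D.
static float[] project(float[] mvpMatrix, float x, float y, float z) {
    float[] point  = { x, y, z, 1.0f };  // 1 marks a position, not a direction
    float[] result = new float[4];
    Matrix.multiplyMV(result, 0, mvpMatrix, 0, point, 0);
    // perspective divide, then omit z and w
    return new float[] { result[0] / result[3], result[1] / result[3] };
}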
Recently I switched from using an array of integers as my screen in Java to using a library. The library I'm using is LibGDX, and the conversion is quite a change for me. I've already started to get the hang of most things, and I'm still writing a bit of the code myself.
At this point, I'm curious whether I can limit the rendering range of sprites and anything else that is drawn, so that if a sprite sticks half-way out of a box, the part sticking out isn't rendered.
Is there a way to render within a specific range, so that anything partially outside the range has the outside part clipped, or will I have to do that myself?
You can do simple "clipping" to a rectangle with the LibGDX ScissorStack.
Because OpenGL is stateful and many of the LibGDX drawing APIs cache geometry, be sure to "flush" or "end" your batches within the scope of the scissors. See "libgdx ScissorStack not working as expected" and "libgdx Cutting an image".
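A sketch of the typical usage, assuming you already have a camera, a SpriteBatch, and a sprite; the clip bounds are arbitrary example values:

import com.badlogic.gdx.math.Rectangle;
import com.badlogic.gdx.scenes.scene2d.utils.ScissorStack;

Rectangle clipBounds = new Rectangle(100, 100, 200, 200); // world units
Rectangle scissors = new Rectangle();
ScissorStack.calculateScissors(camera, batch.getTransformMatrix(),
        clipBounds, scissors);
if (ScissorStack.pushScissors(scissors)) {
    batch.begin();
    sprite.draw(batch);   // anything outside clipBounds is cut off
    batch.flush();        // flush before popping, or nothing gets clipped
    batch.end();
    ScissorStack.popScissors();
}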
If I did not misunderstand you, you are looking for a camera.
The camera lets you define a viewport (size), and you only see things inside this viewport.
You can also move it around to see other parts of the world.
For example:
OrthographicCamera camera = new OrthographicCamera(80, 45);
This defines a camera which shows you 80 units in x and 45 units in y. Its P(0/0) is by default in the middle of the screen, so this camera shows objects from -40 to +40 in x and -22.5 to +22.5 in y.
You can move it so that P(0/0) is in the lower left corner:
camera.position.x = 40;
camera.position.y = 22.5f;
camera.update();
This moves the camera to the right by 40 units and up by 22.5 units, so that P(0/0) is the lower left corner. Don't forget to call update(), as this recalculates the projection and view matrices.
Finally, to draw with this camera, you need to set the SpriteBatch's projection matrix to the camera's:
spriteBatch.setProjectionMatrix(camera.combined);
Now you can use this SpriteBatch to draw.
You should also consider using view frustum culling, which means that you don't draw things outside of the camera's view, since they will never appear on the screen but their draw calls still cost performance.
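A simple hand-rolled culling check (this is plain math, not a LibGDX API, and it ignores camera.zoom) could look like this:

// Skip sprites whose bounds lie completely outside the camera's view.
float camLeft   = camera.position.x - camera.viewportWidth  / 2f;
float camBottom = camera.position.y - camera.viewportHeight / 2f;
boolean visible = sprite.getX() + sprite.getWidth()  > camLeft
               && sprite.getX() < camLeft + camera.viewportWidth
               && sprite.getY() + sprite.getHeight() > camBottom
               && sprite.getY() < camBottom + camera.viewportHeight;
if (visible) {
    sprite.draw(spriteBatch);  // only issue draw calls for visible sprites
}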
I've been looking around and I couldn't find an answer to this. What I have done is create a cube/box, and the camera squashes and stretches the shapes depending on where I am looking. This all seems to resolve itself when the screen is perfectly square, but when I'm using 16:9 it stretches and squashes the shapes. Is it possible to change this?
(Screenshots: one window at 16:9, one at 500px x 500px.)
As a side question, would it be possible to change the color of the background "sky"?
OpenGL uses the cube [-1,1]^3 to represent the frustum in normalized device coordinates. The viewport transform stretches this in the x and y directions to [0,width] and [0,height]. So to get the correct output aspect ratio, you have to take the viewport dimensions into account when transforming the vertices into clip space. Usually, this is part of the projection matrix. The old fixed-function gluPerspective() function has a parameter to directly create a frustum for a given aspect ratio. As you do not show any code, it is hard to suggest what you actually should change, but it should be quite easy, as it boils down to a simple scale operation along x and y.
To the side question: that color is defined by the values the color buffer is set to when clearing it. You can set the color via glClearColor().
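For illustration, a sketch on Android/GLES20 (an assumption, since you don't show code; viewportWidth/viewportHeight are your surface dimensions): build the projection from the real viewport size and set the clear color once:

import android.opengl.GLES20;
import android.opengl.Matrix;

float aspect = (float) viewportWidth / (float) viewportHeight;
float[] projection = new float[16];
// 45-degree vertical FOV; the aspect parameter keeps shapes undistorted
Matrix.perspectiveM(projection, 0, 45f, aspect, 0.1f, 100f);
// light blue "sky", applied when the color buffer is cleared each frame
GLES20.glClearColor(0.5f, 0.7f, 1.0f, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);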
What is the most efficient way to find out if one axis-aligned rectangle is colliding with one rotated rectangle? Each class has a position vector and a size vector, and the rotated class also has an angle value.
You want to use the Separating Axis Theorem (SAT). Normally it's used in 3D, but it collapses to 2D quite well. Since you've got a special case, the only axes you need to consider are the four main axes of your rectangles:
[ 1,0 ]
[ 0,1 ]
[ sin(theta), cos(theta) ]
[ -cos(theta), sin(theta) ]
To check an axis, compute the dot product of every vertex with that axis. Then check whether the min and max of the two sets of values overlap. If any of the four axes gives ranges that don't overlap, then the rectangles do not overlap (you've found a separating axis). If all four axes show overlap, then the rectangles intersect.
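A minimal Java sketch of that test, assuming each rectangle is given as an array of its four vertices (names are illustrative):

// Project every vertex of both shapes onto the axis and compare the
// resulting 1-D intervals.
static boolean overlapOnAxis(float[][] a, float[][] b, float ax, float ay) {
    float minA = Float.MAX_VALUE, maxA = -Float.MAX_VALUE;
    float minB = Float.MAX_VALUE, maxB = -Float.MAX_VALUE;
    for (float[] v : a) {                   // project every vertex of A
        float d = v[0] * ax + v[1] * ay;    // dot product with the axis
        minA = Math.min(minA, d); maxA = Math.max(maxA, d);
    }
    for (float[] v : b) {                   // project every vertex of B
        float d = v[0] * ax + v[1] * ay;
        minB = Math.min(minB, d); maxB = Math.max(maxB, d);
    }
    return maxA >= minB && maxB >= minA;    // 1-D interval overlap test
}

static boolean intersects(float[][] rectA, float[][] rectB, double theta) {
    float s = (float) Math.sin(theta), c = (float) Math.cos(theta);
    // the four candidate separating axes listed above
    float[][] axes = { {1, 0}, {0, 1}, {s, c}, {-c, s} };
    for (float[] axis : axes) {
        if (!overlapOnAxis(rectA, rectB, axis[0], axis[1])) {
            return false;                   // found a separating axis
        }
    }
    return true;                            // overlap on all four axes
}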
Here is a recent SO question on the same problem:
Separating Axis Theorem and Python
And here is the Wikipedia article:
http://en.wikipedia.org/wiki/Separating_axis_theorem
The most efficient way is to create a larger rectangle which bounds the rotated rectangle, and then do collision detection based on the bounding rectangles.
This means that bounding-rectangle collisions don't signify "hits" but rather conditions that merit further investigation. Means of investigating differ depending on what assumptions you can make. In the simplest cases, you could AND the pixels, checking for a true output.
You could then use this "confirmed" hit to do an analysis with a more sophisticated model; one that takes into account the angles, velocities, geometry, and elasticity of the collision (or whatever you're interested in).
More sophisticated models exist, but generally the more sophisticated models require more computational power. It's easier to save your computational power by setting up a series of fast, quick checks and only bring out the heavy compute cycles for the cases where they are going to pay off.
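A sketch of the broad-phase check, assuming the rotated rectangle is given by center, size, and angle (names are illustrative):

// Compute the axis-aligned bounds of the rotated rectangle
// (cx, cy = center; w, h = size; theta = angle) and test them
// against the axis-aligned rectangle first.
static boolean broadPhaseHit(float cx, float cy, float w, float h,
                             double theta,
                             float ax, float ay, float aw, float ah) {
    float c = Math.abs((float) Math.cos(theta));
    float s = Math.abs((float) Math.sin(theta));
    // half-extents of the rotated rectangle's bounding box
    float hw = (w * c + h * s) / 2f;
    float hh = (w * s + h * c) / 2f;
    // plain AABB-vs-AABB overlap test
    return cx + hw > ax && cx - hw < ax + aw
        && cy + hh > ay && cy - hh < ay + ah;
}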
I am currently trying to show, in a 3D view, a series of images that differ slightly from each other and contain lots of transparent areas (for example, points that move over time inside a rectangle, where I would provide a 3D view of all their positions over time).
What I'm doing now is generating an image with the points drawn in it, creating one 40x40x1 box per frame (or a rectangular 40x40 shape), applying the image as a texture to the FRONT side of the box, and adding the boxes to my scene at positions (0, 0, z), where z is the frame number.
It works quite well, but of course there are discontinuities (of 1 "meter") between the images.
I would like to know if there is a way to create an "extrusion" object based on that image so as to fill the space between the planes. This would be equivalent to creating one 1x1x1 box for each point, placed at (x, y, z), where x/y are the point's coordinates and z is the frame number. The actual problem is that I have lots of points (several hundreds, if not thousands, in some cases), and what was relatively easy to handle and render as an image would, I think, become quite heavy to render if I had to create thousands of boxes.
Thanks in advance for your help,
Frederic.
You could use a 3D texture with your data (40 x 40 x N pixels, where N is the number of frames).
But you still have to draw something with this texture enabled.
I would do what you are doing currently, drawing quads, but not only along the Z axis: along X and Y too.
Each of the N quads along the Z axis would be 40x40 in size, each of the 40 quads along the X axis would be 40xN, and each of the 40 quads along the Y axis would be Nx40.
So for a 2x2x2 texture we would draw 2+2+2 = 6 quads, and it would look like a regular cube; for 3x3x3 points in the texture we would draw 3+3+3 quads, and it would look like 8 cubes stacked into one big cube (so instead of 8 cubes with 6 quads each we just draw 9 quads, but the effect is the same).
For 40x40x1000 that would be 1080 quads (reasonable to draw in real time, IMHO) instead of 40*40*1000*6 quads.
I just don't know whether the graphical effect would be exactly what you wanted to achieve.
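For what it's worth, a sketch of how the slice quads could be laid out (just the geometry; binding the 3D texture and issuing the draw calls is omitted):

import java.util.ArrayList;
import java.util.List;

// Slice layout for a w x h x n volume (40 x 40 x N in the answer).
// Each float[12] holds the four corners of one quad.
static List<float[]> buildSlices(int w, int h, int n) {
    List<float[]> quads = new ArrayList<>();
    for (int z = 0; z < n; z++)   // n quads facing the Z axis, w x h each
        quads.add(new float[] {0,0,z, w,0,z, w,h,z, 0,h,z});
    for (int x = 0; x < w; x++)   // w quads facing the X axis, h x n each
        quads.add(new float[] {x,0,0, x,h,0, x,h,n, x,0,n});
    for (int y = 0; y < h; y++)   // h quads facing the Y axis, w x n each
        quads.add(new float[] {0,y,0, w,y,0, w,y,n, 0,y,n});
    return quads;                 // w + h + n quads in total
}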