In Processing (Java dialect) there are the methods screenX and screenY (and screenZ, but we skip that for now).
Let's say I have an object at xyz = 50, 100, 500. With screenX and screenY you can then find out where it will appear on the canvas.
float x = screenX(50, 100, 500);
float y = screenY(50, 100, 500);
here is the reference:
http://processing.org/reference/screenX_.html
What I'm interested in is something like an inverse method.
For example, I want a sphere to appear on the canvas at x=175 and y=100. The sphere should have a z of 700. What would the actual x and y position have to be at z=700 to make it appear on the canvas at 175,100?
So the method would be float unscreenX(float x, float y, float z) and it would return the x value.
My math/programming skills are not very advanced (let's call them bad; I'm more of a designer), so I'm looking for some help. I already asked on the Processing board, but there are often more people here with deeper knowledge about matrices etc.
The normal screenX method from Processing can be found here:
https://github.com/processing/processing/blob/master/core/src/processing/opengl/PGraphicsOpenGL.java
public float screenX(float x, float y, float z) {
  return screenXImpl(x, y, z);
}

protected float screenXImpl(float x, float y, float z) {
  float ax =
    modelview.m00*x + modelview.m01*y + modelview.m02*z + modelview.m03;
  float ay =
    modelview.m10*x + modelview.m11*y + modelview.m12*z + modelview.m13;
  float az =
    modelview.m20*x + modelview.m21*y + modelview.m22*z + modelview.m23;
  float aw =
    modelview.m30*x + modelview.m31*y + modelview.m32*z + modelview.m33;
  return screenXImpl(ax, ay, az, aw);
}

protected float screenXImpl(float x, float y, float z, float w) {
  float ox =
    projection.m00*x + projection.m01*y + projection.m02*z + projection.m03*w;
  float ow =
    projection.m30*x + projection.m31*y + projection.m32*z + projection.m33*w;
  if (nonZero(ow)) {
    ox /= ow;
  }
  float sx = width * (1 + ox) / 2.0f;
  return sx;
}
Of course there are also versions for y and z (I don't understand the z one, but let's ignore that).
I thought this might give some insight into how to invert it.
modelview and projection are 3D matrices; the code is here:
https://github.com/processing/processing/blob/master/core/src/processing/core/PMatrix3D.java
But I guess it's pretty basic and common.
I also made a post on the Processing board, since you never know. It explains what I want in a slightly different way.
http://forum.processing.org/topic/unscreenx-and-unscreeny
For the tags describing this post I didn't go too specific, because I can imagine that a programmer who never worked with Java but did work with C++, for example, and has experience with matrices could still provide a good answer.
Hope someone can help.
I highly recommend you study some linear algebra or matrix math for 3d graphics. It's fun and easy, but a bit longer than a SO answer. I'll try though :) Disclaimer: I have no idea about the API you are using!
It looks like you are passing in 3 coordinates for a position (often called a vertex). But you also mention a projection matrix, and that function has 4 coordinates. Usually a shader or API will take 4 coordinates for a vertex: x, y, z, w. To get them on screen it does something like this:
xscreen = x/w
yscreen = y/w
zbuffer = z/w
This is useful because you get to pick w. If you are just doing 2d drawing you can just put w=1. But if you are doing 3d and want some perspective effect you want to divide by distance from the camera. And that's what the projection matrix is for. It mainly takes the z of your point where z means distance to camera and puts it into w. It also might scale things around a bit, like field of view.
Looking back at the code you posted, this is exactly what the last screenXImpl function does.
It applies a projection matrix, which mostly just moves z into w, and then divides by w. At the end it does an extra scale and offset from (-1, 1) to (0, width in pixels), but we can ignore that.
Now, why am I rambling on about this stuff? All you want to do is to get the x,y,z coordinates for a given xscreen, yscreen, zbuffer, right? Well, the trick is just going backwards. In order to do that you need to have a firm grasp on going forward :)
There are two problems with going backwards: 1) Do you really know or care for the zbuffer value? 2) Do you know what the projection matrix did?
For 1) Let's say we don't care. There are many possible values for that, so we might just pick one. For 2) You will have to look at what it does. Some projection matrices might just take (x,y,z,w) and output (x,y,z,1). That would be 2D. Or (x,y+z,z,1), which would be isometric. But in perspective it usually does (x,y,1,z). Plus some scaling and so on.
I just noticed your second screenXImpl already passes x,y,z,w to the next stage. That is useful sometimes, but for all practical cases that w will be 1.
At this point I realize that I am terrible at explaining things. :) You really should pick up that linear algebra book. I learned from this one: http://www.amazon.com/Elementary-Linear-Algebra-Howard-Anton but it came with a good lecture, so I don't know how useful it is on its own.
Anyhow! Let's get more practical. Back to your code: the last function, screenXImpl. We now know that the input w=1 and that ow=~z and ox=~x; the squiggly line here means times some scale plus some offset. And the screen x we have to begin with is ~ox/ow (+1, /2, *width... that's what squiggly lines are for). And now we are back at 1)... if you want a specific z, pick one now. Otherwise, we can pick any. For rendering it probably makes sense to pick anything in front of the camera and easy to work with. Like 1.
// the last screenXImpl again, with the simplest possible matrix entries filled in, assuming w == 1
protected float screenXImpl(float x, float y, float z, float w) {
  float ox = 1*x + 0*y + 0*z + 0*w; // == x
  float ow = 0*x + 0*y + 1*z + 0*w; // == z == 1
  ox /= ow;                         // == ox
  float sx = width * (1 + ox) / 2.0f;
  return sx;
}
WTF? sx = width * (1+ox)/2 ? Why didn't I just say so? Well, all the zeros I put in there are probably not zero. But it's going to end up just as simple. Ones might not be ones. I tried to show the important assumptions you have to make to be able to go back. Now it should be as easy as going back from sx to ox.
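In code, going back from sx to ox is just that forward formula sx = width * (1 + ox) / 2.0f solved for ox:

float ox = 2.0f * sx / width - 1.0f; // undo the scale-and-offset (viewport) step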
That was the hard part! But you still have to go from the last function to the second one. I guess going from the second to the first is easy. :) That function is doing a linear matrix transform. Which is good for us. It takes an input of four values, (x,y,z) with an implicit w=1, and outputs four other values (ax,ay,az,aw). We could figure out how to go back there manually! I had to do that in school... four unknowns, four equations. You know ax,ay,az,aw... solve for x,y,z and you get w=1 for free! Very possible and a good exercise, but also tedious. The good news is that the way those equations are written has a name: a matrix. (x,y,z,1) * MODELMATRIX = (ax,ay,az,aw). Really convenient, because we can find MODELMATRIX^-1. It's called the inverse! Just like 1/2 is the inverse of 2 for multiplication of real numbers, or -1 is the inverse of 1 for addition. You really should read up on this; it's fun and not hard, btw :).
Anyhow, use any standard library to get the inverse of your model matrix. Probably something like modelView.Inverse(). And then do the same function with that and you go backwards. Easy!
Now, why did we not do the same thing with the PROJECTION matrix earlier? Glad you asked! That one takes 4 inputs(x,y,z,w) and spits out only three outputs (screenx,screeny,zbufferz). So without making some assumptions we could not solve it! An intuitive way to look at that is that if you have a 3d point, that you project on a 2d screen, there's going to be a lot of possible solutions. So we have to pick something. And we can not use the convenient matrix inverse function.
Let me know if this was somewhat helpful or not. I have a feeling that it's not, but I had fun writing it! Also google for unproject in processing gives this: http://forum.processing.org/topic/read-3d-positions-gluunproject
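To make the "go backwards through the whole pipeline" idea concrete, here is a minimal unproject-style sketch in Processing terms. It is a sketch under assumptions, not an official Processing API: it assumes PMatrix3D's get(), apply(), invert() and mult(float[], float[]) behave as in the Processing core, it ignores any flipping of the Y axis that screenY may do (flip sy first if yours is flipped), and instead of picking a world-space z you pick a normalized depth ndcZ in [-1, 1] (pick two different values and you get a ray through that screen pixel).

PVector unscreen(float sx, float sy, float ndcZ,
                 PMatrix3D modelview, PMatrix3D projection,
                 int width, int height) {
  // Combined forward transform (projection * modelview), then inverted
  PMatrix3D inv = projection.get();
  inv.apply(modelview);
  if (!inv.invert()) return null;            // not invertible

  // Undo the viewport step: pixels -> normalized device coordinates in [-1, 1]
  float nx = 2.0f * sx / width - 1.0f;
  float ny = 2.0f * sy / height - 1.0f;

  // Multiply by the inverse and undo the perspective divide
  float[] in = { nx, ny, ndcZ, 1 };
  float[] out = new float[4];
  inv.mult(in, out);
  if (out[3] == 0) return null;
  return new PVector(out[0] / out[3], out[1] / out[3], out[2] / out[3]);
}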
You'd need to know the projection matrix before you can make this work, which Processing doesn't supply you with. However, we can work it out ourselves by checking the screenX/Y/Z values for the three vectors (1,0,0), (0,1,0) and (0,0,1). From those we can work out what the plane formula is for our screen (which is technically just a cropped flat 2D surface running through the 3D space). Then, given an (x,y) coordinate on the "screen" surface, and a predetermined z value, we could find the intersection between the normal line through our screen plane, and the plane at z=....
However, this is not what you want to do, because you can simply reset the coordinate system for anything you want to do. Use pushMatrix to "save" your current 3D transforms, resetMatrix to set everything back to "straight", and then draw your sphere based on the fact that your world axes and view axes are aligned. Then when you're done, call popMatrix to restore your earlier world transform and done. Save yourself the headache of implementing the math =)
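A minimal sketch of that pattern in Processing, using the numbers from the question; where the sphere ends up on screen still depends on the active projection, so treat this as the shape of the approach rather than a pixel-exact recipe:

pushMatrix();              // save the current world transform
resetMatrix();             // back to "straight": world and view axes aligned
translate(175, 100, -700); // position relative to the reset coordinate system
sphere(20);
popMatrix();               // restore the transform we saved earlier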
You can figure this out with simple trigonometry. What you need is h, the distance of the eye from the center of the canvas, and cx and cy, representing the center of the canvas. For simplicity, assume cx and cy are 0. Note that it is not the distance of your actual eye, but the distance of the virtual eye used to construct the perspective of your 3D scene.
Next, given sx and sy, compute the distance to center, b = sqrt(sx * sx + sy * sy)
Now, you have a right-angled triangle with base b and height h. This triangle is formed by the "eye", the center on canvas, and the desired position of the object on the screen: (sx, sy).
This triangle forms the top part of another right-angled triangle formed by the "eye", the center on canvas pushed back by given depth z and the object itself: (x, y).
The ratio of the base to the height is exactly the same in both triangles, so it should be trivial to calculate the base of the larger triangle bb given its height hh = z or hh = h + z, depending on whether the z value is measured from the eye or from the canvas. The equation to use is b / h = bb / hh, where you know b, h and hh.
From there you can easily compute (x, y), because the two bases are at the same angle from the horizontal, i.e. sy / sx = y / x.
The only messy part will be to extract the distance of eye to canvas and the center of the canvas from the 3d setup.
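A minimal sketch of the similar-triangles idea, with the assumptions spelled out (none of these names come from the answer itself): h is the eye-to-canvas distance, (cx, cy) is the canvas center, and z is measured from the canvas, so the larger triangle has height h + z. Scaling both offsets by the same ratio keeps the angle equal, which is exactly the sy / sx = y / x condition above.

float[] unscreenByTriangles(float sx, float sy, float z, float h, float cx, float cy) {
  float dx = sx - cx;         // offset of the desired screen point from the center
  float dy = sy - cy;
  float scale = (h + z) / h;  // b / h = bb / hh, applied per axis
  return new float[] { cx + dx * scale, cy + dy * scale };
}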
Overview of transforming a 3D point onto your 2D screen
When you have a 3D representation of your object (x,y,z), you want to 'project' this onto your monitor, which is 2D. To do this, there is a transformation function that takes in your 3D coordinates and spits out the 2D coordinates. Under the hood (at least in OpenGL), the transformation that takes place is a special matrix. To do the transformation, you take your point (represented by a vector) and do a simple matrix multiplication.
For some nice fancy diagrams, and a derivation (if you are curious, it isn't necessary), check this out: http://www.songho.ca/opengl/gl_projectionmatrix.html
To me, screenXImpl looks like it is doing the matrix multiplication.
Doing the reverse
The reverse transformation is simply the inverse matrix of the original transformation.
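In Processing terms, a minimal sketch of "reverse = inverse matrix" might look like the snippet below (assuming PMatrix3D's get(), invert() and mult(PVector, PVector); note this literally applies only to the modelview part, since the projection also involves the perspective divide discussed in the first answer):

PMatrix3D inverse = transform.get();   // copy, so the original matrix is untouched
if (inverse.invert()) {
  // maps a transformed point back to its original position
  PVector original = inverse.mult(transformedPoint, null);
}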
Related
I'm trying to make a program that draws a Koch fractal. Is there any way I can draw a line in Java by length instead of coordinates?
A Koch fractal looks kind of like a snowflake. The repeating pattern is equilateral triangles inserted 1/3 of the way into each line (with the triangle's sides being 1/3 the length of the line).
Originally I was trying to draw triangles recursively, but I couldn't figure out how to calculate the coordinates. Then I thought it would be way easier if I could just draw lines of a certain length and rotate them, and reduce the length of the lines each time. Except that I don't know if I even can draw lines by length in Java. I have tried searching the internet for this and have not found an answer, which makes me think it's not possible, but I thought I would ask here just to make sure.
I realize this is way beyond my technical college level. I also realize I could probably find a complete program that someone else has already written, but I want to see if I can figure it out (mostly) on my own.
First of all, I am assuming that you have some sort of function drawLine(int x1, int y1, int x2, int y2) in whatever API you are using for Java. If this is true and you want to draw a line by length, I believe you could just do it using the standard trigonometry functions (Math.sin(...), Math.cos(...) and Math.tan(...)).
Example
To draw a line of a given length, you would need at least the following data:
A starting coordinate
The angle between the line and the horizontal line y = c (where c is any number)
The length of the line
Your code could then use something like this:
public void drawLineByLength(int xStart, int yStart, double angle, double length) {
  // Walk 'length' units from the start point in the direction given by 'angle' (in radians)
  int xEnd = (int) (xStart + (Math.cos(angle) * length));
  int yEnd = (int) (yStart + (Math.sin(angle) * length));
  drawLine(xStart, yStart, xEnd, yEnd);
}
Note that Math is part of java.lang, so no import is needed for this method. In addition, it will only work if you have a drawLine(...) function available to you that takes the coordinates of two points.
Implementation
If I understand your intentions, you would want to keep track of the current coordinate the "pen" is at; the angle it is drawing at; and the length of the next line it is going to draw. You could add something at the end of the drawLineByLength(...) method that updates these variables.
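A minimal sketch of that bookkeeping, with made-up names for illustration: keep the pen's position and heading as state, and let each drawing call advance it.

double penX, penY;   // current pen position
double penAngle;     // current heading, in radians

void forward(double length) {
  double newX = penX + Math.cos(penAngle) * length;
  double newY = penY + Math.sin(penAngle) * length;
  drawLine((int) penX, (int) penY, (int) newX, (int) newY);
  penX = newX;       // the pen now sits at the end of the line just drawn
  penY = newY;
}

void turn(double deltaRadians) {
  penAngle += deltaRadians;   // only the heading changes, nothing is drawn
}

A Koch segment then becomes four forward() calls of one third the length, with 60- and 120-degree turns (converted to radians) between them; the sign of the turns depends on whether your y axis points up or down.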
I'm making a simple 2D Java game similar to Geometry Wars. I'm currently programming the player's shooting.
I have a target point specified by the mouse location. Now I want to add a bit of randomization, so that the shooting angle has a random offset. I am currently converting the point to a vector directly. My idea was to convert the vector to an angle, apply the randomization to that angle, and then, convert the angle back to a vector (given the length of the vector).
I can't figure out how to convert the angle back to a vector. I don't need code, it's basically just a question about the maths.
If there's a better way to randomize the shooting, I would love to hear that too! I can't apply the randomization to the point itself, because there are things other than the mouse that can set it.
Polar coordinate system
As everybody seems to have just answered in the comments, here goes the answer to your question as it is formulated: you need to use the polar coordinate system.
Let's call your angle a, the angle you want to add to it b, so the modified angle is a+b.
In the polar coordinate system, your point is represented by an angle a = Math.atan2(y, x) and a radius r = sqrt(x*x + y*y). If you just use the angle, you lose some information, namely at which distance the mouse is from your (0,0) point.
Converting back from your polar representation (after the angle has been modified) is now possible: x = r * Math.cos(a+b), y = r * Math.sin(a+b)
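A minimal sketch of that round trip, assuming (x, y) is the shot vector relative to the player and maxSpread is a small jitter range in radians that you choose (it is not part of the answer above):

double r = Math.sqrt(x * x + y * y);           // radius
double a = Math.atan2(y, x);                   // current angle
double b = (Math.random() - 0.5) * maxSpread;  // random offset, roughly in [-maxSpread/2, +maxSpread/2]
double newX = r * Math.cos(a + b);             // back from polar to cartesian
double newY = r * Math.sin(a + b);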
Without computing the angle
By using some cool trigonometry formulas, we can do better. We don't need to go to an angle and come back to a vector, we can modify the x and y values of the vector directly, still changing the angle like you want.
Remember that we want to find x' = r cos(a+b) and y' = r sin(a+b). We will obviously use the angle-addition formulas:

cos(a+b) = cos(a) cos(b) - sin(a) sin(b)
sin(a+b) = sin(a) cos(b) + cos(a) sin(b)

Now let us multiply by r on both sides to get what we want:

x' = r cos(a) cos(b) - r sin(a) sin(b)
y' = r sin(a) cos(b) + r cos(a) sin(b)

We now recognize x = r cos(a) and y = r sin(a), so we have the final expression:

x' = x cos(b) - y sin(b)
y' = x sin(b) + y cos(b)
If you come to think of it, for small values of b (which is what you want), sin(b) is close to 0, and cos(b) is close to 1, so the perturbation is small.
Not only do you reduce the number of trigonometry operations from 3 to 2, but you also have a cos and a sin of small values, which is nice if they are computed from Taylor series, makes convergence very fast. But that's implementation dependent.
NB: An alternative (and more elegant?) way to find the same result (and exact same formulas) is to use a rotation matrix.
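Putting the final expression into code, with b the same small random offset as before; this is exactly a 2x2 rotation of (x, y) by the angle b:

double cosB = Math.cos(b);
double sinB = Math.sin(b);
double newX = x * cosB - y * sinB;
double newY = x * sinB + y * cosB;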
I can haz cod
Whoopsie, you said you didn't want code. Well, let's do it like this: I don't post it here, but if you want to compare with how I'd do it once you're done coding your implementation, you can see mine in action here: http://ideone.com/zRV4lL
What I am trying to convey in the title, is that there is a player on the screen and, using the direction variable and trigonometry, he is "looking" in a direction. I need to spawn an object right in front of him. And by spawn, I mean create an object with the x and y coordinates matching the location of the spot in "front" of the player.
The code for this is somewhat difficult for me. Without more information or learning more trig, I'm unable to understand what I need to do to get this to work.
Basically this is what I have, it creates a bullet and another line of code adds it to a list to be drawn to the screen. What I need to know is how to spawn the "bullet" object in the correct x & y coordinates. This is what I have so far. I can assume there is something more I need to add to the x and y variables, but I don't know what that is.
Bullet b = new Bullet((int)x/2+(Math.cos(Math.toRadians(direction))), (int)y/2 + (Math.sin(Math.toRadians(direction))), "/img/bullet.png", direction, weapon);
Create a vector pointing in a direction where you want the object spawned.
x = radius * Math.cos(angle) + startX
y = radius * Math.sin(angle) + startY
Normalize it, and then scale it to your liking.
Here's a simple demo to illustrate.
p.s
radius here is just an initial uniform displacement from the spawn point.
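A minimal sketch of the idea, keeping the Bullet constructor from the question; playerX, playerY and radius are assumptions here (the player's position and how far in front of the player the bullet should appear), and direction is in degrees as in the question:

double angle = Math.toRadians(direction);            // degrees -> radians
double spawnX = playerX + radius * Math.cos(angle);  // unit direction vector scaled by radius
double spawnY = playerY + radius * Math.sin(angle);
Bullet b = new Bullet(spawnX, spawnY, "/img/bullet.png", direction, weapon);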
It would help if you understood proportionality, but it is basically this: if you multiply x and y by the same number, you will get farther away from the current position. Of course that depends on the signs, but the simplest case is this: suppose x and y are two positive numbers, say x=1 and y=1. If you multiply both by a positive number, say 3, then with the resulting numbers (x=3 and y=3) you will have a "bullet" at coordinates 3,3 that is right in front of the actor, which is at position 1,1. Again, I am assuming a lot of things and ignoring a bunch of others, such as the position of the camera, perspective, etc.
I'm aware of Quaternion methods of doing this. But ultimately these methods require us to transform all objects in question into the rotation 'space' of the Camera.
However, looking at the math, I'm certain there must be a simple way to get the XY, YZ and XZ equations for a line based on only the YAW (heading) and PITCH of a camera.
For instance, given the normals of the view frustum such as (sqrt(2), sqrt(2), 0) you can easily construct the line (x+y=0) for the XY plane. But once the Z (in this case, Z is being used for depth, not GL's Y coordinate scrambling) changes, the calculations become more complex.
Additionally, given the order of applying rotations: yaw, pitch, roll; roll does not affect the normals of the view frustum at all.
So my question is very simple. How do I go from a 3-coordinate view normal (that is normalized, i.e. the vector length is 1), or a yaw (in radians) and pitch (in radians) pair, to a set of three line equations that map the direction of the 'eye' through space?
NOTE:
I have had success with quaternions for this, but the math is too complex for every entity in a simulation to run for its visual checks, especially since each one has to check against all visible objects, even with various checks in place to reduce the number of viewable objects.
Use any of the popular methods out there for constructing a matrix from yaw and pitch to represent the camera rotation. The matrix elements now contain all kinds of useful information. For example (when using the usual representation) the first three elements of the third column will point along the view vector (either into or out of the camera, depending on the convention you're using). The first three elements of the second column will point 'up' relative to the camera. And so on.
However it's hard to answer your question with confidence as lots of things you say don't make sense to me. For example I've no idea what "a set of three line equations that map the direction of the 'eye' through space" means. The eye direction is simply given by a vector as I described above.
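A minimal sketch of "build the matrix, read its columns". The conventions here (rotation order Ry(yaw) * Rx(pitch), camera-to-world, column vectors) are assumptions and have to match whatever your engine actually uses, so treat the exact signs as illustrative:

double cy = Math.cos(yaw),   sy = Math.sin(yaw);
double cp = Math.cos(pitch), sp = Math.sin(pitch);

// R = Ry(yaw) * Rx(pitch), written out
double[][] R = {
  {  cy, sy * sp, sy * cp },
  {   0, cp,      -sp     },
  { -sy, cy * sp, cy * cp },
};

// With this convention the camera axes, expressed in world coordinates, are the columns:
double[] right   = { R[0][0], R[1][0], R[2][0] };
double[] up      = { R[0][1], R[1][1], R[2][1] };
double[] forward = { R[0][2], R[1][2], R[2][2] };  // the view direction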
nx = (float) (-Math.cos(yawpos) * Math.cos(pitchpos));
ny = (float) ( Math.sin(yawpos) * Math.cos(pitchpos));
nz = (float) (-Math.sin(pitchpos));
That gets the normals of the camera. This assumes yaw and pitch are in radians.
If you have the position of the camera (px,py,pz) you can get the parametric equation thusly:
x = px + nx*t
y = py + ny*t
z = pz + nz*t
You can also construct the 2d projections of this line:
0 = ny(x - px) - nx(y - py)
0 = nz(y - py) - ny(z - pz)
0 = nx(z - pz) - nz(x - px)
I think this is correct. If someone notes an incorrect plus/minus let me know.
I'm trying to understand how quaternion rotations work. I found this mini tutorial http://www.julapy.com/blog/2008/12/22/quaternion-rotation/ but he makes some assumptions that I can't work out, like how I can "work out the rotation vectors around each axis, simply by rotating the vector around an axis", and how he calculates angleDegreesX, angleDegreesY and angleDegreesZ.
Can someone provide a working example or explanation?
The shortest possible summary is that a quaternion is just shorthand for a rotation matrix. Whereas a 4x4 matrix requires 16 individual values, a quaternion can represent the exact same rotation in 4.
For the mathematically inclined, I am fully aware that the above is super over-simplified.
To provide a little more detail, let's refer to the Wikipedia article:
Unit quaternions provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more numerically stable and may be more efficient.
What isn't clear from that opening paragraph is that a quaternion is not only convenient, it's unique. If you have a particular orientation of an object, twisting on any number of axes, there exists a single unique quaternion that represents that orientation.
Again, for the mathematically inclined, my uniqueness comment above assumes right-handed rotations. There is an equivalent left-handed quaternion that rotates in the opposite direction around the opposite axis.
For the purpose of simple explanation, that is something of a distinction without a difference.
If you'd like to make a simple quaternion that represents rotation about an axis, here's a short series of steps that will get you there:
Pick your axis of rotation v = {x, y, z}. Just for politeness, please pick a unit vector: if it's not already of length 1, divide all the components by the length of v.
Pick an angle of rotation that you'd like to turn about this axis and call that theta.
The equivalent unit quaternion can be computed using the sample code below:
Quaternion construction:
q = { cos(theta/2.0),      // This is the angle component
      sin(theta/2.0) * x,  // Remember, angle is in radians, not degrees!
      sin(theta/2.0) * y,  // These capture the axis of rotation
      sin(theta/2.0) * z };
Note those divisions by two: those ensure that there's no confusion in the rotation. With a normal rotation matrix, rotating to the right 90 degrees is the same as rotating to the left by 270. The quaternions that are equivalent to those two rotations are distinct: you can't confuse one with the other.
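A small, self-contained Java version of that construction, assuming the axis (x, y, z) is already normalized and theta is in radians; the array layout {w, x, y, z} mirrors the pseudo code above:

static double[] axisAngleToQuaternion(double x, double y, double z, double theta) {
  double half = theta / 2.0;   // the half angle noted above
  double s = Math.sin(half);
  return new double[] { Math.cos(half), s * x, s * y, s * z };
}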
EDIT: responding to the question in the comments:
Let's simplify the problem by setting the following frame of reference:
Pick the center of the screen as the origin (we're going to rotate around that).
X axis points to the right
Y axis points up (top of the screen)
Z axis points out of the screen at your face (forming a nice right handed coordinate system).
So, say we have an example object (an arrow) that starts by pointing to the right (along the positive x axis). If we move the mouse up from the x axis, the mouse will provide us with a positive x and positive y. So, working through the series of steps:
double theta = Math.atan2(y, x);
// Remember, Z axis = {0, 0, 1};
// pseudo code for the quaternion:
q = { cos(theta/2.0),        // This is the angle component
      sin(theta/2.0) * 0,    // As you can see, the zero components are ignored
      sin(theta/2.0) * 0,    // Left them in for clarity.
      sin(theta/2.0) * 1.0 };
You need some basic math to do what you want. Basically, you rotate a point around an axis by multiplying the matrix representing that point with a rotation matrix. The result is the rotated matrix representation of that point.
The line
angleX = angleDegreesX * DEGTORAD;
just converts the degrees representation into a radians representation by a simple formula (see this Wikipedia entry on radians).
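In plain Java the constant is not even needed, since the standard library already has a helper for this conversion:

double angleX = Math.toRadians(angleDegreesX);   // equivalent to angleDegreesX * (Math.PI / 180)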
You can find some more information and examples of rotation matrices here: Rotation around arbitrary axes
There are probably tools in your programming framework to do that rotation work and retrieve the matrices. Unfortunately, I cannot help you with the quaternions, but your problems seem to be a little more basic.