I am working on a 2D Java game engine that uses an AWT Canvas as its basis. Part of this engine is that it needs hitboxes with collision detection. Not just the built-in rectangles (I tried that system already): I need my own Hitbox class because I need more functionality. So I made one; it supports circular and 4-sided polygon hitboxes. The box hitbox is set up to use four coordinate points that serve as the corner vertices of a rectangle. Lines are drawn connecting the points, and these lines are used to detect intersections with other hitboxes. But I now have a problem: rotation.
There are two possibilities for a box hitbox: it can be just four coordinate points, or it can be four coordinate points attached to a gameobject. The difference is that the former is four coordinates relative to (0, 0) as the origin, while the gameobject-attached kind stores offsets in the coordinates rather than raw location data, so (-100, -100), for example, represents the location of the host gameobject but 100 pixels to the left and 100 pixels up.
Online I found a formula for rotating points about the origin. Since gameobject-based hitboxes are centered around a particular point, I figured that would be the best place to try it. This code runs each tick to update the player character's hitbox:
//creates a rectangle hitbox around this gameobject
int width = getWidth();
int height = getHeight();
Coordinate[] verts = new Coordinate[4]; //corners of hitbox. topLeft, topRight, bottomLeft, bottomRight
verts[0] = new Coordinate(-width / 2, -height / 2);
verts[1] = new Coordinate(width / 2, -height / 2);
verts[2] = new Coordinate(-width / 2, height / 2);
verts[3] = new Coordinate(width / 2, height / 2);
//now go through each coordinate and adjust it for rotation
for (Coordinate c : verts) {
    if (!name.startsWith("Player")) return; //this is here so only the player character is tested
    double theta = Math.toRadians(rotation);
    c.x = (int)(c.x*Math.cos(theta)-c.y*Math.sin(theta));
    c.y = (int)(c.x*Math.sin(theta)+c.y*Math.cos(theta));
}
getHitbox().vertices = verts;
I apologize for the poor video quality, but this is what the above produces: https://www.youtube.com/watch?v=dF5k-Yb4hvE
All related classes are found here: https://github.com/joey101937/2DTemplate/tree/master/src/Framework
edit: The desired effect is for the box outline to follow the character in a circle while maintaining its aspect ratio, as seen here: https://www.youtube.com/watch?v=HlvXQrfazhA . The current system uses the code above; its effect can be seen in the previous video link. How should I modify the four 2D coordinates to maintain the relative aspect ratio throughout a rotation about a point?
The current rotation system is the following:
x = x*cos(theta) - y*sin(theta)
y = x*sin(theta) + y*cos(theta)
where theta is the angle of rotation in radians.
You made a classic mistake:
c.x = (int)(c.x*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(c.x*Math.sin(theta)+c.y*Math.cos(theta));
In the second line you use the already-modified value of c.x. Just save tempx = c.x before the calculation and use that:
tempx = c.x;
c.x = (int)(tempx*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(tempx*Math.sin(theta)+c.y*Math.cos(theta));
Another issue: rounding the coordinates to int after each rotation causes distortion and shrinking after a number of rotations. It would be wise to store coordinates as floating-point values and round them only for output, or to remember the starting values and apply the accumulated rotation angle to them each tick.
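Putting both fixes together, a minimal sketch might look like the following. The class and method names here are illustrative, not the engine's actual `Coordinate`/`Hitbox` API; the point is that each tick rotates the *unrotated* base offsets by the accumulated angle, in double precision, reading both components before writing either.

```java
// Illustrative point class with double fields (not the engine's Coordinate class).
class Vec2 {
    double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
}

class HitboxRotation {
    // Rotate the base (unrotated) corner offsets by the accumulated angle.
    // Rotating fresh base values each tick avoids cumulative rounding drift.
    static Vec2[] rotated(Vec2[] base, double degrees) {
        double theta = Math.toRadians(degrees);
        double cos = Math.cos(theta), sin = Math.sin(theta);
        Vec2[] out = new Vec2[base.length];
        for (int i = 0; i < base.length; i++) {
            double bx = base[i].x, by = base[i].y; // read both before writing
            out[i] = new Vec2(bx * cos - by * sin, bx * sin + by * cos);
        }
        return out;
    }
}
```

Round to int only at the point where the vertices are handed to the renderer or intersection test, not inside the rotation itself.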
Let's say that my screen is 800 × 600 and I have a 2D quad drawn as a triangle strip with the following vertex positions (in NDC):
float[] vertices = {-0.2f,0.2f,-0.2f,-0.2f,0.2f,0.2f,0.2f,-0.2f};
And I set up my transformation matrix this way:
Vector2f position = new Vector2f(0,0);
Vector2f size = new Vector2f(1.0f,1.0f);
Matrix4f tranMatrix = new Matrix4f();
tranMatrix.setIdentity();
Matrix4f.translate(position, tranMatrix, tranMatrix);
Matrix4f.scale(new Vector3f(size.x, size.y, 1f), tranMatrix, tranMatrix);
And my vertex shader:
#version 150 core
in vec2 in_Position;
uniform mat4 transMatrix;
void main(void) {
    gl_Position = transMatrix * vec4(in_Position, 0, 1.0);
}
My question is: which formula should I use to modify the transformations of my quad using coordinates in pixels?
For example :
set Scale : (50px, 50px) => Vector2f(width,height)
set Position : (100px, 100px) => Vector2f(x,y)
To make this work, I would create a function to convert my pixel data to NDC before sending it to the vertex shader. I was advised to use an orthographic projection, but I don't know how to create one correctly, and as you can see, my vertex shader doesn't use any projection matrix.
Here is a topic similar to mine but not very clear - Transform to NDC, calculate and transform back to worldspace
EDIT:
I created my orthographic projection matrix by following the formula, but nothing seems to appear. Here is how I proceeded:
public static Matrix4f glOrtho(float left, float right, float bottom, float top, float near, float far){
    final Matrix4f matrix = new Matrix4f();
    matrix.setIdentity();
    matrix.m00 = 2.0f / (right - left);
    matrix.m01 = 0;
    matrix.m02 = 0;
    matrix.m03 = 0;
    matrix.m10 = 0;
    matrix.m11 = 2.0f / (top - bottom);
    matrix.m12 = 0;
    matrix.m13 = 0;
    matrix.m20 = 0;
    matrix.m21 = 0;
    matrix.m22 = -2.0f / (far - near);
    matrix.m23 = 0;
    matrix.m30 = -(right+left)/(right-left);
    matrix.m31 = -(top+bottom)/(top-bottom);
    matrix.m32 = -(far+near)/(far-near);
    matrix.m33 = 1;
    return matrix;
}
I then included my matrix in the vertex shader:
#version 140
in vec2 position;
uniform mat4 projMatrix;
void main(void){
    gl_Position = projMatrix * vec4(position, 0.0, 1.0);
}
What did I miss?
New Answer
After clarifications in the comments, the question being asked can be summed up as:
How do I effectively transform a quad in terms of pixels for use in a GUI?
As mentioned in the original question, the simplest approach to this will be using an Orthographic Projection. What is an Orthographic Projection?
a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
In practice, you may think of this as a 2D projection. Distance plays no role, and the OpenGL coordinates map to pixel coordinates. See this answer for a bit more information.
By using an Orthographic Projection instead of a Perspective Projection you can start thinking of all of your transformations in terms of pixels.
Instead of defining a quad as (25 x 25) world units in dimension, it is (25 x 25) pixels in dimension.
Or instead of translating by 50 world units along the world x-axis, you translate by 50 pixels along the screen x-axis (to the right).
So how do you create an Orthographic Projection?
First, they are usually defined using the following parameters:
left - X coordinate of the left vertical clipping plane
right - X coordinate of the right vertical clipping plane
bottom - Y coordinate of the bottom horizontal clipping plane
top - Y Coordinate of the top horizontal clipping plane
near - Near depth clipping plane
far - Far depth clipping plane
Remember, all units are in pixels. A typical Orthographic Projection would be defined as:
glOrtho(0.0, windowWidth, windowHeight, 0.0f, 0.0f, 1.0f);
Assuming you do not (or can not) make use of glOrtho (you have your own Matrix class or another reason), then you must calculate the Orthographic Projection matrix yourself.
The Orthographic Matrix is defined as:
2/(r-l) 0 0 -(r+l)/(r-l)
0 2/(t-b) 0 -(t+b)/(t-b)
0 0 -2/(f-n) -(f+n)/(f-n)
0 0 0 1
Source A, Source B
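For readers rolling their own matrix code, the matrix above can be sketched as a plain column-major `float[16]` (the layout OpenGL expects, and the reason the asker's translation terms land in `m30`/`m31`/`m32`). The class and method names here are illustrative, and the sanity check at the end verifies that pixel corners map to NDC corners.

```java
// Illustrative sketch: the orthographic matrix as a column-major float[16],
// independent of any particular Matrix4f class. Translation terms sit at
// indices 12-14 in column-major order.
class Ortho {
    static float[] ortho(float l, float r, float b, float t, float n, float f) {
        float[] m = new float[16]; // zero-initialized
        m[0]  = 2f / (r - l);
        m[5]  = 2f / (t - b);
        m[10] = -2f / (f - n);
        m[12] = -(r + l) / (r - l);
        m[13] = -(t + b) / (t - b);
        m[14] = -(f + n) / (f - n);
        m[15] = 1f;
        return m;
    }

    // Multiply m by the column vector (x, y, z, 1); returns (x', y', z').
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
        };
    }
}
```

With `ortho(0, 800, 600, 0, 0, 1)`, pixel (0, 0) lands at NDC (-1, 1) (top-left) and pixel (800, 600) at (1, -1) (bottom-right), which is exactly the pixel-space behavior the question asks for.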
At this point I recommend using a pre-made mathematics library unless you are determined to use your own. One of the most common sources of bugs I see in practice is matrix math, and the less time you spend debugging matrices, the more time you have to focus on other, more fun endeavors.
GLM is a widely-used and respected library that is built to model GLSL functionality. The GLM implementation of glOrtho can be seen here at line 100.
How to use an Orthographic Projection?
Orthographic projections are commonly used to render a GUI on top of your 3D scene. This can be done easily enough by using the following pattern:
Clear Buffers
Apply your Perspective Projection Matrix
Render your 3D objects
Apply your Orthographic Projection Matrix
Render your 2D/GUI objects
Swap Buffers
Old Answer
Note that this answered the wrong question. It assumed the question boiled down to "How do I convert from Screen Space to NDC Space?". It is left in case someone searching comes upon this question looking for that answer.
The goal is convert from Screen Space to NDC Space. So let's first define what those spaces are, and then we can create a conversion.
Normalized Device Coordinates
NDC space is simply the result of performing perspective division on our vertices in clip space.
clip.xyz /= clip.w
Where clip is the coordinate in clip space.
What this does is place all of our un-clipped vertices into a unit cube (in the range [-1, 1] on all axes), with the screen center at (0, 0, 0). Any vertices that are clipped (lie outside the view frustum) are not within this unit cube and are tossed away by the GPU.
In OpenGL this step is done automatically as part of Primitive Assembly (D3D11 does this in the Rasterizer Stage).
Screen Coordinates
Screen coordinates are simply calculated by expanding the normalized coordinates to the confines of your viewport.
screen.x = ((view.w * 0.5) * ndc.x) + ((view.w * 0.5) + view.x)
screen.y = ((view.h * 0.5) * ndc.y) + ((view.h * 0.5) + view.y)
screen.z = (((view.f - view.n) * 0.5) * ndc.z) + ((view.f + view.n) * 0.5)
Where,
screen is the coordinate in screen-space
ndc is the coordinate in normalized-space
view.x is the viewport x origin
view.y is the viewport y origin
view.w is the viewport width
view.h is the viewport height
view.f is the viewport far
view.n is the viewport near
Converting from Screen to NDC
As we have the conversion from NDC to Screen above, it is easy to calculate the reverse.
ndc.x = (((2.0 * screen.x) - (2.0 * view.x)) / view.w) - 1.0
ndc.y = (((2.0 * screen.y) - (2.0 * view.y)) / view.h) - 1.0
ndc.z = ((2.0 * screen.z) - view.f - view.n) / (view.f - view.n)
Example:
viewport (w, h, n, f) = (800, 600, 1, 1000)
screen.xyz = (400, 300, 200)
ndc.xyz = (0.0, 0.0, -0.601)
screen.xyz = (575, 100, 1)
ndc.xyz = (0.4375, -0.666, -1.0)
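The conversion above can be sketched as a small Java helper. This assumes a viewport origin of (0, 0); the parameter names mirror the formulas rather than any real API.

```java
// Screen-space to NDC-space conversion, assuming viewport origin (0, 0).
// w, h are the viewport width and height; n, f the near and far depths.
class ScreenToNdc {
    static double[] toNdc(double sx, double sy, double sz,
                          double w, double h, double n, double f) {
        double nx = (2.0 * sx) / w - 1.0;
        double ny = (2.0 * sy) / h - 1.0;
        double nz = (2.0 * sz - f - n) / (f - n);
        return new double[] { nx, ny, nz };
    }
}
```

Running the worked example through it, (400, 300, 200) with viewport (800, 600, 1, 1000) gives (0, 0, ≈-0.601), matching the figures above.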
Further Reading
For more information on all of the transform spaces, read OpenGL Transformation.
Edit for Comment
In the comment on the original question, Bo specifies screen-space origin as top-left.
For OpenGL, the viewport origin (and thus screen-space origin) lies at the bottom-left. See glViewport.
If your pixel coordinates are truly top-left origin then that needs to be taken into account when transforming screen.y to ndc.y.
ndc.y = 1.0 - (((2.0 * screen.y) - (2.0 * view.y)) / view.h)
This is needed if you are transforming, say, a coordinate of a mouse-click on screen/gui into NDC space (as part of a full transform to world space).
NDC coordinates are transformed to screen (i.e. window) coordinates using glViewport. This function (which you must call in your app) defines a portion of the window by an origin and a size.
The formulas used can be seen at https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glViewport.xml
(x,y) are the origin, normally (0,0) the bottom left corner of the window.
While you could derive the inverse formulas on your own, here you have them: https://www.khronos.org/opengl/wiki/Compute_eye_space_from_window_space#From_window_to_ndc
If I understand the question, you're trying to map screen-space coordinates (the ones that define the size of your screen) to the -1 to 1 range. If so, it's quite simple. The equation is:
((screen_coord / screen_width_or_height) * 2) - 1
This works because, for example, on an 800 × 600 screen:
800 / 800 = 1
1 * 2 = 2
2 - 1 = 1
and to check, for a coordinate at half the screen height:
300 / 600 = 0.5
0.5 * 2 = 1
1 - 1 = 0 (NDC runs from -1 to 1, so 0 is the middle)
I'm developing a simple game. I have about 50 rectangles arranged in 10 columns and 5 rows. It wasn't a problem to lay them out to fit the whole screen. But when I rotate the canvas, say by a 7° angle, the old coordinates no longer fit the new positions. In the constructor I create and define the positions of the rectangles; in the onDraw method I draw them (already rotated), but I need some way to hit-test the current rectangle against a touch. I tried something like this (I did the rotation around the center point of the screen):
int newx = (int) ((x * Math.cos(ROTATE_ANGLE) - (y * Math.sin(ROTATE_ANGLE))) + width / 2);
int newy = (int) ((y * Math.cos(ROTATE_ANGLE) + (x * Math.sin(ROTATE_ANGLE))) + height / 2);
but it doesn't work (it gives me completely wrong new coordinates). x and y are the coordinates of the touch whose position I'm trying to recalculate to account for the rotation. ROTATE_ANGLE is the angle the screen is rotated by.
Does anybody know how to solve this problem? I've already gone through many articles, wikis, and Wolfram Alpha categories, but no luck. Maybe I just need a link that will help me understand the problem better.
Thank you
You use a rotation matrix.
Matrix mat = new Matrix(); //mat is identity
mat.postRotate(ROTATE_ANGLE); //mat is a rotation matrix of ROTATE_ANGLE degrees
float point[] = {10.0f, 20.0f}; //create a new float array representing the point (10, 20)
mat.mapPoints(point); //rotate the point by the requested amount
OK, I found the solution.
First, it is important to convert the angle to radians.
Then I personally needed to negate that radian value.
That's all; this solution works.
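The two steps above (radians, then negation) amount to rotating the touch point back about the screen center by the opposite of the canvas rotation. A plain-Java sketch, with illustrative names and no Android dependency:

```java
// Map a touch point back into the unrotated layout by rotating it about
// the screen center (cx, cy) by the *negated* canvas angle, in radians.
class TouchMapper {
    static double[] unrotate(double tx, double ty,
                             double cx, double cy, double angleDeg) {
        double theta = -Math.toRadians(angleDeg); // negate to undo the canvas rotation
        double dx = tx - cx, dy = ty - cy;        // move center to the origin
        double rx = dx * Math.cos(theta) - dy * Math.sin(theta);
        double ry = dx * Math.sin(theta) + dy * Math.cos(theta);
        return new double[] { rx + cx, ry + cy }; // move back
    }
}
```

The mapped point can then be tested against the rectangles at their original, unrotated positions.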
So I'm rendering points in 3D space. To find their X position on the screen, I'm using this math:
double sin = Math.sin(viewPointRotX);
double cos = Math.cos(viewPointRotX);
double xx = x - viewPointX;
double zz = z - viewPointZ;
double rotx = xx * cos - zz * sin;
double rotz = zz * cos + xx * sin;
double xpix = (rotx / rotz * height + width / 2);
I'm doing a similar process for Y.
This works fine, but points can render as if they were in front of the camera when they are actually behind it.
How can I work out using the data I've got whether a given point is in front of or behind the camera?
We can tell whether a point is in front of or behind the camera by comparing coordinates with a little 3D coordinate geometry.
Consider a very simple example: The camera is located at (0,0,0) and pointed straight up. That would mean every point with a positive Z coordinate is "in front" of the camera. Now, we could get more complicated and account for the fact that the field of view of a camera is really a cone with a particular angle.
For instance, if the camera has a 90 degree field of view (45 degrees in both directions of the way it is facing), then we can handle this with some linear math. In 2D space, again with the camera at the origin facing up, the field of view would be bounded by the lines y = x and y = -x (that is, all points with y ≥ |x|).
For 3D space, take the 45° line z = x (in the xz-plane) and spin it around the z-axis. The boundary of the resulting cone is z² = x² + y², so if
if (z > 0 && z * z > x * x + y * y)
then the point is in front of the camera and within the field of view.
If the camera is not at the origin, we can just translate the coordinate system so that the camera is at the origin, and then run the calculation above. If the viewing angle is not 90 degrees, we instead trace out the line x = m·z (where m = Math.tan(viewAngle/2)) and spin it about the z-axis, giving the test x² + y² < (z·m)².
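A sketch of that cone test, generalized to any view angle; it assumes the camera sits at the origin looking along +z, and the names are illustrative. For a 90-degree field of view it reduces to z > 0 && x*x + y*y <= z*z.

```java
// Is the point (x, y, z) inside the view cone of a camera at the origin
// looking along +z, with the given full field-of-view angle in degrees?
class FovTest {
    static boolean inFieldOfView(double x, double y, double z, double fovDegrees) {
        if (z <= 0) return false; // behind the camera
        double m = Math.tan(Math.toRadians(fovDegrees) / 2.0);
        // Inside the cone when the lateral distance is within z * tan(fov/2).
        return x * x + y * y <= (z * m) * (z * m);
    }
}
```

For a camera elsewhere, subtract the camera position from the point first, as described above.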
It looks like the way you are doing this is by transforming the point coordinates so that they are aligned with, and relative to, the view.
If you do this the right way then you can just check that the rotated z-value is positive and greater than the distance from the focal-point to the "lens".
Find the view direction vector:
V = (cos(a)sin(b), sin(a)sin(b), cos(b)), where a and b are the camera's rotation angles.
Project the offset vector (xx, yy, zz) onto the view direction vector, and find the magnitude, giving the distance along the camera's view axis of the point:
distance = xx * cos(a)sin(b) + yy * sin(a)sin(b) + zz * cos(b)
Now just check that distance > focalLength.
This should work but you have to be careful to set everything up right. You might have to use a different calculation to find the view direction vector depending on how you are representing the camera's orientation.
I've got a camera set up, and I can move with WASD and rotate the view with the mouse. But now comes the problem: I want to add physics to the camera/player, so that it "interacts" with my other jBullet objects. How do I do that? I thought about creating a RigidBody for the camera and storing the position there, so that jBullet can apply its physics to the camera. Then, when I need to change something (the position), I could simply change it in the RigidBody. But I didn't find any methods for editing the position.
Can you push me in the right direction or maybe give me an example source code?
I was asking the same question myself a few days ago. My solution was as Sierox said: create a RigidBody with a BoxShape and add it to the DynamicsWorld. To move the camera around, apply force to its rigid body. I have damping set to 0.999 for linear and 1 for angular, to stop the camera when no force is applied, i.e. when the player stops pressing the button.
I also use body.setAngularFactor(0); so the box isn't tumbling all over the place. Also, set the mass really low so as not to interfere too much with other objects, while still being able to jump on them, run into them, and otherwise be affected by them.
Remember to convert your x, y, and z coordinates to a standard Cartesian plane so you move in the direction of the camera, i.e.:
protected void setCartesian(){ //set xyz to a standard plane
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    float pd = (float) (Math.PI/180);
    x = (float) (-Math.cos(xrot*pd)*Math.sin(yrot*pd));
    z = (float) (-Math.cos(xrot*pd)*Math.cos(yrot*pd));
    //y = (float) Math.sin(xrot*pd);
}

public void forward(){ //move forward from position in direction of camera
    setCartesian();
    x += (Math.sin(yrotrad))*spd;
    z -= (Math.cos(yrotrad))*spd;
    //y -= (Math.sin(xrotrad))*spd;
    body.applyForce(new Vector3f(x, 0, z), getThrow());
}

public Vector3f getThrow(){ //get relative position of the camera
    float nx = x, ny = y, nz = z;
    float xrotrad, yrotrad;
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    nx += (Math.sin(yrotrad))*2;
    nz -= (Math.cos(yrotrad))*2;
    ny -= (Math.sin(xrotrad))*2;
    return new Vector3f(nx, ny, nz);
}
To jump, just use body.setLinearVelocity(new Vector3f(0, jumpHt, 0)); and set jumpHt to whatever velocity you wish.
I use getThrow to return a vector for other objects I may be "throwing" on screen or carrying. I hope I answered your question and didn't throw in too much non-essential information. I'll try to find the source that gave me this idea; I believe it was on the Bullet forums.
------- EDIT ------
Sorry to have left that part out. Once you have the rigid body functioning properly, you just have to get its coordinates and apply them to your camera. For example:
float mat[] = new float[16];
Transform t = new Transform();
t = body.getWorldTransform(t);
t.origin.get(mat);
x = mat[0];
y = mat[1];
z = mat[2];
gl.glRotatef(xrot, 1, 0, 0); //rotate our camera on the x-axis (up and down)
gl.glRotatef(yrot, 0, 1, 0); //rotate our camera on the y-axis (left and right)
gl.glTranslatef(-x, -y, -z); //translate the screen to the position of our camera
In my case I'm using OpenGL for graphics. xrot and yrot represent the pitch and yaw of your camera. The code above gets the world transform as a matrix; for the camera you only need to pull the x, y, and z coordinates and then apply the transform.
From here, to move the camera, you can set the linear velocity of the rigid body or apply a force to it.
Before you read this answer I would like to mention that I have a problem with the solution stated in my answer. You can follow my question about that problem so that you can have the solution too if you use this answer.
So. First, you need to create a new BoxShape:
CollisionShape cameraHolder = new BoxShape(/* size of cameraHolder */);
And add it to your world so that it interacts with all the other objects. Now change all the methods for camera movement (not rotation) so that they move your cameraHolder rather than your camera. Then set the position of your camera to the position of the cameraHolder.
Again, if you have a problem where you can't move properly, you can check my question and wait for an answer. You also can find a better way of doing this.
If you have problems or did not understand something about the answer, please state it as a comment.
So I've made my own FPS, graphics and guns and all of that cool stuff. When we fire, the bullet should take a random direction inside the crosshair, as defined by:
float randomX=(float)Math.random()*(0.08f*guns[currentWeapon].currAcc)-(0.04f*guns[currentWeapon].currAcc);
float randomY=(float)Math.random()*(0.08f*guns[currentWeapon].currAcc)-(0.04f*guns[currentWeapon].currAcc);
bulletList.add(new Bullet(new float[]{playerXpos, playerYpos, playerZpos}, new float[]{playerXrot+randomX, playerYrot+randomY}, (float) 0.5));
We calculate the randomness in X and Y (say you had a crosshair size (guns[currentWeapon].currAcc) of 10; then the bullet could go 0.4 to either side and still remain inside the crosshair).
After that is calculated, we send the player position as the starting position of the bullet, along with the direction it's meant to take (the player's direction plus that extra randomness), and finally its speed (not important at the moment, though).
Now, each frame, the bullets have to move, so for each bullet we call:
position[0] -= (float)Math.sin(direction[1]*piover180) * (float)Math.cos(direction[0]*piover180) * speed;
position[2] -= (float)Math.cos(direction[1]*piover180) * (float)Math.cos(direction[0]*piover180) * speed;
position[1] += (float)Math.sin(direction[0]*piover180) * speed;
So, for X and Z positions, the bullet moves according to the player's rotation on the Y and X axis (say you were looking horizontally into Z, 0 degrees on X and 0 on Y; X would move 0*1*speed and Z would move 1*1*speed).
For Y position, the bullet moves according to the rotation on X axis (varies between -90 and 90), meaning it stays at the same height if the player's looking horizontally or moves completely up if the player is looking vertically.
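The per-frame update above amounts to stepping along a unit direction vector derived from the pitch (direction[0]) and yaw (direction[1]), both in degrees. A sketch of that vector, with an illustrative class name:

```java
// Unit movement direction from pitch and yaw in degrees, matching the
// per-frame update: x -= sin(yaw)cos(pitch), z -= cos(yaw)cos(pitch),
// y += sin(pitch), all scaled by speed.
class BulletDir {
    static double[] direction(double pitchDeg, double yawDeg) {
        double p = Math.toRadians(pitchDeg), y = Math.toRadians(yawDeg);
        return new double[] {
            -Math.sin(y) * Math.cos(p), // x
             Math.sin(p),               // y (straight up at pitch 90)
            -Math.cos(y) * Math.cos(p), // z (forward is -z at yaw 0)
        };
    }
}
```

At pitch 0 and yaw 0 this gives (0, 0, -1), i.e. looking horizontally down the -z axis, consistent with the description above.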
Now, the problem stands as follows:
If I shoot horizontally, everything works beautifully. Bullets spread around the crosshair, as seen in https://dl.dropbox.com/u/16387578/horiz.jpg
The thing is, if I start looking up, the bullets start concentrating around the center, and they form a vertical line the further into the sky I look:
https://dl.dropbox.com/u/16387578/verti.jpg
The 1st image is around 40° on the X axis, the 2nd is a little higher, and the last is when I look vertically.
What am I doing wrong here? I designed this solution myself, so I'm pretty sure I'm messing up somewhere, but I can't figure it out. :/
Basically, the vertical offset calculation (float)Math.cos(direction[0]*piover180) was messing up the X and Z movement, because they'd both get reduced to 0. The bullets would form a vertical line because they'd rotate on the X axis with the randomness. My solution was to add the randomness after that vertical offset calculation, so the bullets still go left/right and up/down after you fire them.
I also had to add an extra random value; otherwise you'd just draw a diagonal line or a cross.
float randomX=(float)Math.random()*(0.08f*guns[currentWeapon].currAcc)-(0.04f*guns[currentWeapon].currAcc);
float randomY=(float)Math.random()*(0.08f*guns[currentWeapon].currAcc)-(0.04f*guns[currentWeapon].currAcc);
float randomZ=(float)Math.random()*(0.08f*guns[currentWeapon].currAcc)-(0.04f*guns[currentWeapon].currAcc);
bulletList.add(new Bullet(new float[]{playerXpos, playerYpos, playerZpos}, new float[]{playerXrot, playerYrot}, new float[]{randomX,randomY, randomZ},(float) 0.5));
And the moving code...
vector[0]= -((float)Math.sin(dir[1]*piover180) * (float)Math.cos(dir[0]*piover180)+(float)Math.sin(random[1]*piover180)) * speed;
vector[1]= ((float)Math.sin(dir[0]*piover180)+(float)Math.sin(random[0]*piover180)) * speed;
vector[2]= -((float)Math.cos(dir[1]*piover180) * (float)Math.cos(dir[0]*piover180)+(float)Math.sin(random[2]*piover180)) * speed;
You didn't need to bust out any complex math. Your problem was that when you rotated the bullet around the y-axis for gun spread while looking directly up (that is, along the y-axis), the bullet was being rotated around the very path it was traveling, which means no rotation at all. (Imagine the difference between sticking your arm out towards a wall and spinning in a circle, versus sticking your arm out towards the sky and spinning in a circle: your hand doesn't move at all when pointed towards the sky, the y-axis.) That is why you get those "diagonal" bullet spreads.
The trick is to apply the bullet spread before rotating by the direction the player is looking in, because that way you know, when rotating for spread, that the vector is guaranteed to be perpendicular to the x and y axes.
this.vel = new THREE.Vector3(0,0,-1);
var angle = Math.random() * Math.PI * 2;
var mag = Math.random() * this.gun.accuracy;
this.spread = new THREE.Vector2(Math.cos(angle) * mag,Math.sin(angle) * mag);
this.vel.applyAxisAngle(new THREE.Vector3(0,1,0),this.spread.y / 100); //rotate first, while the angle is guaranteed to be perpendicular to the x and y axes
this.vel.applyAxisAngle(new THREE.Vector3(1,0,0),this.spread.x / 100);
this.vel.applyAxisAngle(new THREE.Vector3(1,0,0),player.looking.x); //then add player looking direction
this.vel.applyAxisAngle(new THREE.Vector3(0,1,0),player.looking.y);
this.offset = this.vel.clone();
I don't use Java, but I hope you get the main idea of what I'm doing from this JavaScript. I am rotating a vector pointing in the negative z direction (the default direction of the camera) by spread.y and spread.x, and then rotating by the pitch and yaw of the angle the player is facing.