I am trying to transform window mouse coordinates (0/0 being the upper left corner) into world space coordinates. I tried to solve it following this description. Here is my code:
public void showMousePosition(float mx, float my){
    Matrix4f projectionMatrix = camera.getProjectionMatrix();
    Matrix4f viewMatrix = camera.getViewMatrix();
    Matrix4f projMulView = projectionMatrix.mul(viewMatrix);
    projMulView.invert();

    float px = ((2*mx)/650)-1;
    float py = ((2*my)/650)-1;

    Vector4f vec4 = new Vector4f(px, py*(-1), 0.0f, 1.0f);
    vec4.mul(projMulView);
    vec4.w = 1.0f / vec4.w;
    vec4.x *= vec4.w;
    vec4.y *= vec4.w;
    vec4.z *= vec4.w;

    System.out.println(vec4.x + ", " + vec4.y);
}
But that's not 100% correct. I have an object at 0/-11 in world space, and when I move my mouse to this point, my function says 0/9.8. And when I go to the left side of my window, the x value is 5.6, but it should be something like 28.
Does anyone know what is wrong with my code?
First of all, your code assumes that your window size is always width=650, height=650.
Then you are picking the position at z=0. But this z is in screen space, and therefore it changes as you change the camera position and orientation. Normally you would get this information from the depth buffer, using glReadPixels; you could do that in this case.
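For reference, reading that depth could look something like this (a sketch only, assuming an LWJGL-style binding and that windowHeight holds your real window height):

FloatBuffer depth = BufferUtils.createFloatBuffer(1);
// GL reads from the bottom-left corner, while window coordinates are top-down:
glReadPixels((int) mx, windowHeight - (int) my, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
float ndcZ = depth.get(0) * 2.0f - 1.0f; // maps the [0,1] depth to the [-1,1] NDC z used below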
However, there is also another way to do this. In the code I will share, I am looking for the intersection between a ray (generated from the mouse position) and the plane (0,0,0) with normal (0,1,0). I hope this helps.
/* Given the inverse PV (projection*view) matrix, the position of the mouse on screen
   and the size of the screen, transforms the screen coordinates to world coordinates */
glm::vec3 Picking::OnWorld(glm::mat4 const& m_inv, glm::vec2 const& spos, size_t width, size_t height) {
    float x = spos.x;
    float y = spos.y;
    y = height - y; // window Y is top-down, NDC Y is bottom-up

    // InputOrigin: start of the ray for the intersection with the plane.
    // Transforms the screen position to the unit cube range [-1, 1].
    glm::vec4 inputO = glm::vec4(x / width * 2.0f - 1.0f, y / height * 2.0f - 1.0f, -1.0f, 1.0f);
    glm::vec4 resO = m_inv * inputO; // transform to world space
    if (resO.w == 0.0f)
        return glm::vec3(-1); // invalid value to signal a problem; normally the m_inv matrix was incorrect
    resO /= resO.w; // homogeneous division

    glm::vec4 inputE = inputO; // InputEnd: the end of the ray
    inputE.z = 1.0;
    glm::vec4 resE = m_inv * inputE; // end of the ray to world space
    if (resE.w == 0.0f)
        return glm::vec3(-1); // invalid value to signal a problem; normally the m_inv matrix was incorrect
    resE /= resE.w;

    // Ray for the intersection: vector between z=-1 and z=1
    glm::vec3 ray = glm::vec3(resE - resO);
    glm::vec3 normalRay = glm::normalize(ray);
    glm::vec3 normalPlane = glm::vec3(0, 1, 0); // plane through the origin with normal (0,1,0)

    float denominator = glm::dot(normalRay, normalPlane);
    if (denominator == 0)
        return glm::vec3(-1); // invalid value: the ray is parallel to the plane, no intersection
    float numerator = glm::dot(glm::vec3(resO), normalPlane);

    // Intersection between the ray and the plane
    glm::vec3 result = glm::vec3(resO) - normalRay * (numerator / denominator);
    return result;
}
The math for the intersection can be read from this link:
https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm
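Since the question is in Java, here is a minimal sketch of the same ray-plane approach on that side (an assumption: the Matrix4f/Vector4f types in the question match JOML's API). Note that JOML's mul works in place, so the question's projectionMatrix.mul(viewMatrix) silently modifies the camera's projection matrix; the copy below avoids that:

import org.joml.Matrix4f;
import org.joml.Vector3f;
import org.joml.Vector4f;

public Vector3f mouseToWorldOnPlane(float mx, float my, float width, float height) {
    // Invert projection * view on a copy so the camera matrices stay untouched
    Matrix4f invPV = new Matrix4f(camera.getProjectionMatrix())
            .mul(camera.getViewMatrix())
            .invert();

    float ndcX = (2.0f * mx) / width - 1.0f;
    float ndcY = 1.0f - (2.0f * my) / height; // window Y is down, NDC Y is up

    // Unproject the near-plane (z = -1) and far-plane (z = 1) points of the ray
    Vector4f origin = new Vector4f(ndcX, ndcY, -1.0f, 1.0f).mul(invPV);
    Vector4f end = new Vector4f(ndcX, ndcY, 1.0f, 1.0f).mul(invPV);
    if (origin.w == 0.0f || end.w == 0.0f)
        return null; // the inverse matrix was invalid
    origin.div(origin.w); // homogeneous division
    end.div(end.w);

    Vector3f o = new Vector3f(origin.x, origin.y, origin.z);
    Vector3f dir = new Vector3f(end.x - origin.x, end.y - origin.y, end.z - origin.z).normalize();

    // Intersect with the plane through (0,0,0) with normal (0,1,0)
    Vector3f normal = new Vector3f(0, 1, 0);
    float denominator = dir.dot(normal);
    if (denominator == 0.0f)
        return null; // the ray is parallel to the plane
    float t = -o.dot(normal) / denominator;
    return new Vector3f(dir).mul(t).add(o);
}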
I'm making a 2D top-down shooter game with Java Swing. I want to calculate the angle of the mouse pointer relative to the center of the screen, so some of my sprites can look toward the pointer and so that I can create projectiles described by an angle and a speed. If the pointer is straight above the middle of the screen, I want the angle to be 0°; straight to its right, 90°; straight below, 180°; and straight left, 270°.
I have made a function to calculate this:
public static float calculateMouseToPlayerAngle(float x, float y){
    float mouseX = (float) MouseInfo.getPointerInfo().getLocation().getX();
    float mouseY = (float) MouseInfo.getPointerInfo().getLocation().getY();
    float hypotenuse = (float) Point2D.distance(mouseX, mouseY, x, y);
    return (float) (Math.acos(Math.abs(mouseY - y) / hypotenuse) * (180 / Math.PI));
}
The idea behind it is that I calculate the length of the hypotenuse, then the length of the side opposite the angle in question. The ratio of the two should be the cosine of my angle, so taking that result's arccos and multiplying by 180/π should give me the angle in degrees. This does work for above and to the right, but straight below returns 0 and straight left returns 90. That means I currently have two problems: the domain of my output is only [0,90] instead of [0,360), and it's mirrored through the y (height) axis. Where did I screw up?
You can do it like this.
For a window size of 500x500, top left being at point 0,0 and bottom right being at 500,500.
The tangent is the change in Y over the change in X between two points. Also known as the slope, it is the ratio of the sine to the cosine of a specific angle. To find that angle, the arctangent (Math.atan or Math.atan2) can be used. The two-argument Math.atan2 handles all four quadrants and is used below; since atan2 measures counterclockwise from the positive X axis, adding 270 and taking the result modulo 360 shifts it so that straight above is 0° and angles increase clockwise.
BiFunction<Point2D, Point2D, Double> angle =
        (c, m) -> (Math.toDegrees(Math.atan2(c.getY() - m.getY(),
                                             c.getX() - m.getX())) + 270) % 360;
int screenWidth = 500;
int screenHeight = 500;
int ctrY = screenHeight / 2;
int ctrX = screenWidth / 2;
Point2D center = new Point2D.Double(ctrX, ctrY);

Point2D mouse = new Point2D.Double(ctrX, ctrY - 100);
double straightAbove = angle.apply(center, mouse);
System.out.println("StraightAbove: " + straightAbove);

mouse = new Point2D.Double(ctrX + 100, ctrY);
double straightRight = angle.apply(center, mouse);
System.out.println("StraightRight: " + straightRight);

mouse = new Point2D.Double(ctrX, ctrY + 100);
double straightBelow = angle.apply(center, mouse);
System.out.println("StraightBelow: " + straightBelow);

mouse = new Point2D.Double(ctrX - 100, ctrY);
double straightLeft = angle.apply(center, mouse);
System.out.println("Straightleft: " + straightLeft);
prints
StraightAbove: 0.0
StraightRight: 90.0
StraightBelow: 180.0
Straightleft: 270.0
I converted the radian output from Math.atan2 to degrees. For your application it may be more convenient to leave them in radians.
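Applied directly to the method from the question, the fix could look like this (a sketch keeping the MouseInfo lookup; x and y are the player/center coordinates):

public static float calculateMouseToPlayerAngle(float x, float y) {
    float mouseX = (float) MouseInfo.getPointerInfo().getLocation().getX();
    float mouseY = (float) MouseInfo.getPointerInfo().getLocation().getY();
    // atan2 covers all four quadrants, so no Math.abs mirroring is needed;
    // +270 and %360 map straight-above to 0 with angles increasing clockwise
    return (float) ((Math.toDegrees(Math.atan2(y - mouseY, x - mouseX)) + 270) % 360);
}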
Here is a similar Function to find the distance using Math.hypot
BiFunction<Point2D, Point2D, Double> distance =
        (c, m) -> Math.hypot(c.getY() - m.getY(),
                             c.getX() - m.getX());
I'm working with ARCore in Android Studio using Java and am trying to implement ray intersection with an object.
I started with Google's provided sample (as found here: https://developers.google.com/ar/develop/java/getting-started).
Upon touching the screen, a ray gets projected, and when this ray touches a Plane, a PlaneAttachment (with an Anchor/a Pose) is created at the intersection point.
I would then like to put a 3D triangle in the world attached to this Pose.
At the moment I create my Triangle based on the Pose's translation, like this:
In HelloArActivity, during onDrawFrame(...)
// Code from the sample, determining the hits on planes
MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
    for (HitResult hit : frame.hitTest(tap)) {
        // Check if any plane was hit, and if it was hit inside the plane polygon.
        if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
            mTouches.add(new PlaneAttachment(
                    ((PlaneHitResult) hit).getPlane(),
                    mSession.addAnchor(hit.getHitPose())));

            // Creating a triangle in the world
            Pose hitPose = hit.getHitPose();
            float[] poseCoords = new float[3];
            hitPose.getTranslation(poseCoords, 0);
            mTriangle = new Triangle(poseCoords);
        }
    }
}
Note: I am aware that the triangle's coordinates should be updated every time the Pose's coordinates get updated. I left this out as it is not part of my issue.
Triangle class
public class Triangle {
    public float[] v0;
    public float[] v1;
    public float[] v2;

    // Create a triangle around a given coordinate
    public Triangle(float[] poseCoords) {
        float x = poseCoords[0], y = poseCoords[1], z = poseCoords[2];
        this.v0 = new float[]{x + 0.0001f, y - 0.0001f, z};
        this.v1 = new float[]{x, y + 0.0001f, z - 0.0001f};
        this.v2 = new float[]{x - 0.0001f, y, z + 0.0001f};
    }
}
After this, upon tapping the screen again, I create a ray projected from the tapped (x, y) point of the screen, using Ian M's code sample provided in the answer to this question: how to check ray intersection with object in ARCore
Ray Creation, in HelloArActivity
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogeneous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame, factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space
    // to get a point along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // Use points[8,9,10] as a zero vector to get the ray head position in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // Normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
The result of this, however, is that no matter where I tap the screen, it always counts as a hit (regardless of how much distance I add between the points in Triangle's constructor).
I suspect this has to do with how a Pose is located in the world, and that using the Pose's translation coordinates as a reference point for my triangle is not the way to go, so I'm looking for the correct way to do this. Any remarks regarding other parts of my method are welcome!
Also, I have tested my method for ray-triangle intersection and I don't think it is the problem, but I'll include it here for completeness:
public Point3f intersectRayTriangle(CustomRay R, Triangle T) {
    Point3f I = new Point3f();
    Vector3f u, v, n;
    Vector3f dir, w0, w;
    float r, a, b;

    u = new Vector3f(T.v1);
    u.sub(new Point3f(T.v0));
    v = new Vector3f(T.v2);
    v.sub(new Point3f(T.v0));

    n = new Vector3f(); // cross product
    n.cross(u, v);
    if (n.length() == 0) {
        return null; // degenerate triangle
    }

    dir = new Vector3f(R.direction);
    w0 = new Vector3f(R.origin);
    w0.sub(new Point3f(T.v0));

    a = -(new Vector3f(n).dot(w0));
    b = new Vector3f(n).dot(dir);
    if ((float) Math.abs(b) < SMALL_NUM) { // SMALL_NUM: a small epsilon guarding against division by ~0
        return null; // ray is parallel to the triangle's plane
    }

    r = a / b;
    if (r < 0.0) {
        return null; // ray points away from the triangle
    }

    I = new Point3f(R.origin);
    I.x += r * dir.x;
    I.y += r * dir.y;
    I.z += r * dir.z;
    return I;
}
Thanks in advance!
I'm writing a game for Android using Java and OpenGL. I can render everything perfectly to screen, but when I try to check whether two objects collide or not, my algorithm detects a collision before it occurs on the screen.
Here's how I test for collision:
for (int i = 0; i < enemies.size(); i++) {
    float enemyRadius = enemies.elementAt(i).worldSpaceBoundingSphereRadius();
    float[] enemyPosition = enemies.elementAt(i).getWorldSpaceCoordinates();

    for (int j = 0; j < qubieBullets.size(); j++) {
        float bulletRadius = bullets.elementAt(j).worldSpaceBoundingSphereRadius();
        float[] bulletPosition = bullets.elementAt(j).getWorldSpaceCoordinates();

        float[] distanceVector = Vector3f.subtract(enemyPosition, bulletPosition);
        float distance = Vector3f.length(distanceVector);

        if (distance < (enemyRadius + bulletRadius)) {
            enemies.remove(i);
            qubieBullets.remove(j);
            i--;
            j--;
            // Reset enemy position
        }
    }
}
When the enemy cube (represented by a sphere for collision detection) closes in on the player, the player shoots a bullet (also a cube represented by a sphere) toward the enemy. My expectation is that the enemy gets reset when the bullet hits it on screen, but it happens way earlier than that.
The methods for calculating world space position and radius:
public float[] getWorldSpaceCoordinates() {
    float[] modelSpaceCenter = {0.0f, 0.0f, 0.0f, 1.0f};
    float[] worldSpaceCenter = new float[4];
    Matrix.multiplyMV(worldSpaceCenter, 0, getModelMatrix(), 0, modelSpaceCenter, 0);
    return new float[] {
            worldSpaceCenter[0] / worldSpaceCenter[3],
            worldSpaceCenter[1] / worldSpaceCenter[3],
            worldSpaceCenter[2] / worldSpaceCenter[3]
    };
}

public float worldSpaceBoundingSphereRadius() {
    float[] arbitraryVertex = new float[] {1.0f, 1.0f, 1.0f, 1.0f};
    float[] worldSpaceVector = new float[4];
    Matrix.multiplyMV(worldSpaceVector, 0, getModelMatrix(), 0, arbitraryVertex, 0);
    float[] xyz = new float[] {
            worldSpaceVector[0] / worldSpaceVector[3],
            worldSpaceVector[1] / worldSpaceVector[3],
            worldSpaceVector[2] / worldSpaceVector[3]
    };
    return Vector3f.length(xyz);
}
Is it my code or my math that's wrong? I can't think of anything more to try, and it would be helpful if someone could point me in the right direction.
Your worldSpaceBoundingSphereRadius() is most likely the culprit. arbitraryVertex is a vector of (1,1,1), so your math will only work if the cube model has edges of length 2 * sqrt(1/3). What you want to do is find the exact length of your cube model's edge, use the formula from my comment (rad = sqrt(3 * (x/2) * (x/2))) and use that radius for your arbitraryVertex (rad, rad, rad, 1).
Also, you're dividing the results of your multiplication by the homogeneous coordinate (worldSpaceVector[0]/worldSpaceVector[3]). With a proper rotation, translation, or scale, the homogeneous coordinate should always be exactly 1 (if it started as 1). If it isn't, you might have a projection matrix in there or something else that isn't a basic transformation.
EDIT:
Since you're using worldSpaceBoundingSphereRadius() to get only the radius, you want only the scaling component of getModelMatrix(). If that returns scaling and translation, this translation will apply to your radius and make it much larger than it actually is.
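For illustration, a sketch of that idea (assumptions: getModelMatrix() applies a uniform scale, the model-space cube spans [-0.5, 0.5] on each axis, and the question's Vector3f helper is reused). Transforming a direction (w = 0) instead of a point (w = 1) keeps the translation out of the result:

public float worldSpaceBoundingSphereRadius() {
    float halfEdge = 0.5f; // assumption: half the cube model's edge length
    // A corner of the cube as a direction: with w = 0 the translation has no effect
    float[] corner = new float[] {halfEdge, halfEdge, halfEdge, 0.0f};
    float[] worldSpaceVector = new float[4];
    Matrix.multiplyMV(worldSpaceVector, 0, getModelMatrix(), 0, corner, 0);
    return Vector3f.length(new float[] {
            worldSpaceVector[0], worldSpaceVector[1], worldSpaceVector[2]});
}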
Follow-up for: Calculating world coordinates from camera coordinates
I'm multiplying a 2D vector with a transformation matrix (OpenGL's model-view matrix) to get world coordinates from my camera coordinates.
I do this calculation like this:
private Vector2f toWorldCoordinates(Vector2f position) {
    glPushMatrix();
    glScalef(this.zoom, this.zoom, 1);
    glTranslatef(this.position.x, this.position.y, 0);
    glRotatef(ROTATION, 0, 0, 1);

    ByteBuffer m = ByteBuffer.allocateDirect(64);
    m.order(ByteOrder.nativeOrder());
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
    float y = (position.x * m.getFloat(16)) + (position.y * m.getFloat(20)) + m.getFloat(28);

    glPopMatrix();
    return new Vector2f(x, y);
}
Now I also want to do this vice-versa: calculate the camera coordinates for a position in the world. How can I reverse this calculation?
To create a matrix representing the inverse transform to the one above, apply the transforms in reverse order, with negated quantities for the rotation and translation and the reciprocal quantity for the zoom:
glRotatef(-ROTATION, 0, 0, 1);
glTranslatef(-this.position.x, -this.position.y, 0);
glScalef(1.0f / this.zoom, 1.0f / this.zoom, 1);
Then multiply by the position vector as before.
The alternative is to compute the inverse matrix, but this way is much simpler.
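For completeness, the reverse function could mirror your toWorldCoordinates with the inverse transforms swapped in (a sketch keeping the same matrix reads as your version):

private Vector2f toCameraCoordinates(Vector2f position) {
    glPushMatrix();
    // Inverse transforms, applied in reverse order
    glRotatef(-ROTATION, 0, 0, 1);
    glTranslatef(-this.position.x, -this.position.y, 0);
    glScalef(1.0f / this.zoom, 1.0f / this.zoom, 1);

    ByteBuffer m = ByteBuffer.allocateDirect(64);
    m.order(ByteOrder.nativeOrder());
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
    float y = (position.x * m.getFloat(16)) + (position.y * m.getFloat(20)) + m.getFloat(28);

    glPopMatrix();
    return new Vector2f(x, y);
}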
I'm sorry if this question was asked before; I did search and did not find an answer.
My problem is that I'd like to make movement on all 3 axes, with the X and Y rotation of the camera being relevant.
This is what I did:
private static void fly(int addX, int addY){ // parameters are the direction change relative to the current rotation
    // angle is the direction we will be moving in (when moving forward it is the same
    // as our actual rotation, therefore addX and addY would be 0, 0)
    float angleX = rotation.x + addX;
    float angleY = rotation.y + addY;
    float speed = (moveSpeed * 0.0002f) * delta;
    float hypotenuse = speed; // the length that is SUPPOSED TO BE moved overall on all 3 axes

    /* Y-Z side */
    // Hypotenuse, adjacent and opposite side lengths of a triangle on the Y-Z side.
    // The point where the hypotenuse and the adjacent meet is where the player currently is.
    // OppYZ is the opposite of this triangle, which is the amount that should be moved on the Y axis.
    // The adjacent is not used, don't get confused by it. I just put it there, so it looks nicer.
    float HypYZ = speed;
    float AdjYZ = (float) (HypYZ * Math.cos(Math.toRadians(angleX))); // adjacent is on the Z axis
    float OppYZ = (float) (HypYZ * Math.sin(Math.toRadians(angleX))); // opposite is on the Y axis

    /* X-Z side */
    // Side lengths of a triangle on the X-Z side.
    // The point where the hypotenuse and the adjacent meet is where the player currently is.
    float HypXZ = speed;
    float AdjXZ = (float) (HypXZ * Math.cos(Math.toRadians(angleY))); // on X
    float OppXZ = (float) (HypXZ * Math.sin(Math.toRadians(angleY))); // on Z

    position.x += AdjXZ;
    position.y += OppYZ;
    position.z += OppXZ;
}
I only call this method when moving forward (parameters: 0, 90) or backward (parameters: 180, 270), since no movement can happen on the Y axis while going sideways, as you don't rotate on the Z axis. (The method for going sideways (strafing) works just fine, so I won't add that.)
The problem is that when I look 90 degrees up or -90 degrees down and then move forward, I should be moving only on the Y axis (vertically), but for some reason I also move forward (which means on the Z axis, as the X axis is the strafing).
I do realize that movement speed this way is not constant. If you have a solution for that, I'd gladly accept it as well.
I think your error lies in the fact that you don't fully project your distance (your quantity of movement, the hypotenuse) onto the horizontal plane and the vertical axis.
In other words, whatever the chosen direction, what you are doing right now is moving your point by the full hypotenuse in the horizontal X-Z plane, even though you have already moved it by a portion of the hypotenuse in the vertical direction Y.
What you probably want is to move your point by the hypotenuse quantity in total.
So you have to evaluate how much of the movement takes place in the horizontal plane and how much along the vertical axis. Your direction gives you the answer.
Now, it is not clear to me what your two angles represent, so I highly recommend you use Tait–Bryan angles in this situation (using only yaw and pitch, since you don't seem to need roll, which is what you call the Z-rotation) to simplify the calculations.
In this configuration, the yaw angle would apparently be similar to your definition of angleY, while the pitch angle would be the angle between the horizontal plane and your hypotenuse vector (and not the angle of the projection in the Y-Z plane).
A schema to clarify:
With:
- s: your quantity of movement from your initial position P_0 to P_1 (the hypotenuse)
- a_y: the yaw angle, and a_p: the pitch angle
- D_x, D_y, D_z: the displacements along each axis (to be added to position, i.e. AdjXZ, OppYZ and OppXZ)
So if you look at this representation, you can see that your triangle in X-Z doesn't have s as its hypotenuse but rather its projection s_xz. Evaluating this distance is quite straightforward: if you place yourself in the triangle P_0 P_1 P_1xz, you can see that s_xz = s * cos(a_p). This gives you:
float HypXZ = (float) (speed * Math.cos(Math.toRadians(angleP))); // s_xz
float AdjXZ = (float) (HypXZ * Math.cos(Math.toRadians(angleY))); // D_x
float OppXZ = (float) (HypXZ * Math.sin(Math.toRadians(angleY))); // D_z
As for D_y, i.e. OppYZ, place yourself in the triangle P_0 P_1 P_1xz again, and you'll obtain:
float OppYZ = (float) (speed * Math.sin(Math.toRadians(angleP))); // D_y
Now, if by angleX you actually meant the angle of elevation, as I suppose you did, then angleP = angleX and HypXZ = AdjYZ in your code.
With this correction, if angleX = 90 or angleX = -90, then
HypXZ = speed * cos(angleX) = speed * cos(±90°) = speed * 0;
... and thus AdjXZ = 0 and OppXZ = 0. No movement in the horizontal plane.
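Putting it together, the corrected method would look something like this (a sketch assuming, as above, that rotation.x is the pitch/elevation and rotation.y the yaw):

private static void fly(int addX, int addY) {
    float angleP = rotation.x + addX; // pitch: elevation above the horizontal plane
    float angleY = rotation.y + addY; // yaw: direction within the horizontal plane
    float speed = (moveSpeed * 0.0002f) * delta; // s, the total quantity of movement

    float hypXZ = (float) (speed * Math.cos(Math.toRadians(angleP))); // s_xz, horizontal part
    position.x += (float) (hypXZ * Math.cos(Math.toRadians(angleY))); // D_x
    position.y += (float) (speed * Math.sin(Math.toRadians(angleP))); // D_y, vertical part
    position.z += (float) (hypXZ * Math.sin(Math.toRadians(angleY))); // D_z
}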
Note:
To check that your calculations are correct, you can verify that you actually move your point by the wanted quantity of movement (the hypotenuse, i.e. speed, i.e. s). Using the Pythagorean theorem:
s² = s_xz² + D_y² // Applied in the triangle P_0 P_1 P_1xz (D_y is the vertical leg)
   = D_x² + D_y² + D_z² // since s_xz² = D_x² + D_z², applied in the triangle P_0 P_1x P_1xz
With the definitions of the displacements given above:
D_x² + D_y² + D_z²
= (s * cos(a_p) * cos(a_y))² + (s * cos(a_p) * sin(a_y))² + (s * sin(a_p))²
= s² * (cos(a_p)² * cos(a_y)² + cos(a_p)² * sin(a_y)² + sin(a_p)²)
= s² * (cos(a_p)² * (cos(a_y)² + sin(a_y)²) + sin(a_p)²)
= s² * (cos(a_p)² * 1 + sin(a_p)²)
= s² * (cos(a_p)² + sin(a_p)²)
= s² * 1 // Correct
Hope it helped... Bye!