Calculating coordinates of an oblique aerial image - java

I am using a GoPro HERO 4 on a drone to capture images that need to be georeferenced. Ideally I need coordinates of the captured image's corners relative to the drone.
I have the camera's:
Altitude
Horizontal and vertical field of view
Rotation in all 3 axes
I have found a couple of solutions but I can't quite translate them to my purposes. The closest one I found is here: https://photo.stackexchange.com/questions/56596/how-do-i-calculate-the-ground-footprint-of-an-aerial-camera, but I can't figure out whether and how I can use it, particularly when I have to take both pitch and roll into account.
Thanks for any help I get.
Edit: I code my software in Java.

If you have rotations in all three axes then you can use these matrices - http://planning.cs.uiuc.edu/node102.html - to construct a full (3x3) rotation matrix for your camera.
Assuming that, when the rotation matrix is the identity (i.e. in the camera's own frame), you have defined the camera's axes to be:
X axis for front
Y for side (left)
Z for up
then, in the camera frame, the rays through the four image corners have directions (1, ±tan(FOV_h / 2), ±tan(FOV_v / 2)).
Calculate these directions and rotate them with the matrix to get the real-world ray directions. Use the camera's real-world position as the ray origin.
To calculate the points on the ground, intersect each rotated ray with the ground plane: https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm
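A minimal Java sketch of that recipe, assuming X = forward, Y = left, Z = up, angles in radians, flat ground at z = 0 and the camera at (0, 0, altitude) relative to the drone's ground position; the class and method names and the example FOV/attitude values are my own illustration, not anything from the thread:

public class Footprint {

    // R = Rz(yaw) * Ry(pitch) * Rx(roll), the yaw-pitch-roll convention on the linked page.
    // With X forward and Z up, positive pitch tilts the camera downward in this convention.
    static double[][] rotationMatrix(double yaw, double pitch, double roll) {
        double cy = Math.cos(yaw),   sy = Math.sin(yaw);
        double cp = Math.cos(pitch), sp = Math.sin(pitch);
        double cr = Math.cos(roll),  sr = Math.sin(roll);
        return new double[][] {
            { cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr },
            { sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr },
            { -sp,     cp * sr,                cp * cr                }
        };
    }

    static double[] rotate(double[][] r, double[] v) {
        return new double[] {
            r[0][0] * v[0] + r[0][1] * v[1] + r[0][2] * v[2],
            r[1][0] * v[0] + r[1][1] * v[1] + r[1][2] * v[2],
            r[2][0] * v[0] + r[2][1] * v[1] + r[2][2] * v[2]
        };
    }

    // Intersect a ray from (0, 0, altitude) along direction d with the plane z = 0.
    // Returns the ground point (x, y) relative to the drone, or null if the ray never
    // reaches the ground (d.z >= 0, i.e. that corner looks at or above the horizon).
    static double[] groundIntersection(double[] d, double altitude) {
        if (d[2] >= 0) return null;
        double t = -altitude / d[2];
        return new double[] { t * d[0], t * d[1] };
    }

    public static void main(String[] args) {
        double hFov = Math.toRadians(122.6), vFov = Math.toRadians(94.4); // example values only
        double altitude = 50.0;
        double[][] r = rotationMatrix(Math.toRadians(0), Math.toRadians(40), Math.toRadians(5));

        double ty = Math.tan(hFov / 2), tz = Math.tan(vFov / 2);
        double[][] cornerRays = { // camera-frame directions of the four image corners
            { 1,  ty,  tz }, { 1, -ty,  tz }, { 1, -ty, -tz }, { 1,  ty, -tz }
        };
        for (double[] ray : cornerRays) {
            double[] ground = groundIntersection(rotate(r, ray), altitude);
            System.out.println(ground == null ? "above horizon"
                    : String.format("(%.1f, %.1f)", ground[0], ground[1]));
        }
    }
}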

Related

Java - How to create a 3D vector based on the rotation of a camera

I am trying to create a vector based on the rotation of my camera. For example, if the camera looked straight forward it would be <0, 0, -1> (note: the axes are the OpenGL ones), or if the camera was looking to the right and a bit up it would be something like <0.5, 0.5, 0>.
I am using the LWJGL library, so JOML is available. But if it's easier, just creating x, y, z floats is fine.
Note: the camera only uses x and y rotation, as z rotation isn't needed, and you can't just construct a vector purely from those angles with z rotation set to 0; it doesn't work.
In layman's terms, I want a vector that, if you added it to the position of a player, would move the player in the direction the camera is facing.
Edit:
Correct locations in joml are:
x=m02
y=m12
z=m22
Your forward direction should just be the z axis (3rd column) of your camera matrix. Depending on the API you are using, the "camera matrix" might be its inverse; in that case take the 3rd row.
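A small Java sketch of the plain-floats route, assuming yaw is the rotation about Y (with yaw = pitch = 0 facing <0, 0, -1>, as in the question) and pitch is the rotation about X with positive pitch looking up; if your engine stores the angles with opposite signs, negate accordingly. The method name is a placeholder of mine:

// Forward vector from camera yaw/pitch (radians), OpenGL-style axes.
static float[] forwardFromRotation(float yaw, float pitch) {
    float x = (float) (-Math.sin(yaw) * Math.cos(pitch));
    float y = (float)   Math.sin(pitch);
    float z = (float) (-Math.cos(yaw) * Math.cos(pitch));
    return new float[] { x, y, z };   // already unit length
}

// Usage: move the player one step in the direction the camera is facing.
// float[] dir = forwardFromRotation(camYaw, camPitch);
// player.x += dir[0] * speed;  player.y += dir[1] * speed;  player.z += dir[2] * speed;

Up to a sign, this is the same information as the m02/m12/m22 entries from the edit above; whether you need to negate depends on whether you read the camera matrix or its inverse (the view matrix), exactly as the answer says.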

3D camera local axis rotation

I've been working on a 3d software renderer in Java for the past few weeks and I successfully implemented rotations using https://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations
This rotation will rotate an object around the global x, y, and z axes. How do I rotate around the camera's local axes?
For example, when looking up / down in 3d you simply rotate the world around the camera's x axis. This is a problem when the camera is rotated 90 degrees around its y axis (facing the global x axis direction) because then you would want to rotate around the global z axis for the correct effect.
It seems like I need to implement axis-angle rotation but the Wikipedia page and other tutorials I've found (the best one so far: http://www.engr.uvic.ca/~mech410/lectures/4_2_RotateArbi.pdf) are hard to understand.
Edit:
Picture to clarify:
So once again: how do I always rotate about the "local" axis, including when the local axis doesn't line up with the global axis (like when the rotation is 45 degrees)?
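A sketch of one common way to handle this (my own illustration, not an answer from the thread): keep a single accumulated orientation matrix for the camera, and choose which side you multiply new basic rotations onto, since the side determines whether the rotation is applied about a global or a local axis.

// Assumes column vectors (v' = M * v) and that 'orientation' maps camera-local
// coordinates to world coordinates. If you store the inverse instead (a
// world-to-camera / view matrix), the two cases at the bottom swap.
static double[][] multiply(double[][] a, double[][] b) {
    double[][] m = new double[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                m[i][j] += a[i][k] * b[k][j];
    return m;
}

static double[][] rotationX(double a) { // basic rotation about the x axis
    return new double[][] {
        { 1, 0, 0 },
        { 0, Math.cos(a), -Math.sin(a) },
        { 0, Math.sin(a),  Math.cos(a) }
    };
}

// Look up/down about the GLOBAL x axis:
//     orientation = multiply(rotationX(angle), orientation);
// Look up/down about the camera's own (LOCAL) x axis, however it is already yawed:
//     orientation = multiply(orientation, rotationX(angle));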

Computer Graphics: Practical translate to origin method

What I'm trying to do
I'm trying to implement translations on polygons manually, e.g. with plain Java AWT and no OpenGL, in order to understand the concepts better.
I want to perform a "translation to origin" before (and the reverse translation after) I do a scaling / rotation on my object, but then comes the question: by which vertex of the polygon do I calculate the distance to the origin?
Intuition
My intuition is to calculate the distance of each vertex to the origin, and once I find the closest one, calculate the x, y values required to translate it to the origin, and then translate all the polygon's vertices by those x, y.
Am I right?
Another catch
I've implemented a view through a camera in my program, so I have a full viewing pipeline that takes the polygons of world objects, transforms all coordinates to viewing coordinates, projects them to 2D, and then transforms them to viewport coordinates.
My camera has its position, lookAt point and up vector, and I want the scaling / rotations to be done with regard to the lookAt point.
How can I achieve this? Does it just mean translating each polygon to the lookAt point instead of to the origin?
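A short Java AWT sketch of the usual translate-transform-translate-back pattern, offered as an illustration rather than an answer from the thread; the pivot can be whatever point you want to scale/rotate about (a common choice is the polygon's centroid, or, for the camera case, the lookAt point):

import java.awt.Polygon;
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Rotate and scale a polygon about a chosen pivot: move the pivot to the origin,
// apply the scale/rotation there, then move back.
static Polygon transformAboutPivot(Polygon poly, double pivotX, double pivotY,
                                   double angleRad, double scale) {
    AffineTransform t = new AffineTransform();
    t.translate(pivotX, pivotY);   // step 3: move back to the pivot
    t.rotate(angleRad);            // step 2: rotate about the origin
    t.scale(scale, scale);         // step 2: scale about the origin
    t.translate(-pivotX, -pivotY); // step 1: move the pivot to the origin
    // (AffineTransform concatenates, so the last call above is applied to each point first.)

    Polygon out = new Polygon();
    for (int i = 0; i < poly.npoints; i++) {
        Point2D p = t.transform(new Point2D.Double(poly.xpoints[i], poly.ypoints[i]), null);
        out.addPoint((int) Math.round(p.getX()), (int) Math.round(p.getY()));
    }
    return out;
}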

How to rotate a single shape in 3d space in opengl?

So I'm pretty new to OpenGL and creating 3D shapes. For my example I have two squares: one with height/width 2 centered at the origin coordinate (0, 0, -10), and one at the far left side of the window. I am trying to rotate the square at the origin in the x-z plane without rotating the square at the far left of the screen.
My approach was to save each xyz coordinate of the center square to a variable and to write a method that uses cos(theta) to rotate the square in the x-z plane. My code works, but I assume this is a horrible approach, as there must be some more efficient, already existing method that does the same thing. I looked at glRotatef(), but from what I understood it only rotates my camera view, which in the end would rotate both the middle square and the far left square, whereas I only want to rotate the middle square. Is there some other existing method that can easily rotate a single 2D shape in 3D space?
In case it's relevant, I have included the rotation code I wrote for the middle square (the blue class is just a class I made that holds the square's coordinates and the angle in degrees used for cos(theta)):
if (Keyboard.isKeyDown(Keyboard.KEY_LEFT)) {
    // getCircle() starts at zero and is incremented by 1 each loop while the left key is held.
    blue.setCircle(blue.getCircle() + 1f);
    // Top-right x, z coordinates of the middle square
    blue.setXfrontTR((float) Math.cos(Math.toRadians(blue.getCircle())));
    blue.setZfrontTR(-10f + (float) Math.cos(Math.toRadians(blue.getCircle() + 270f)));
    // Top-left x, z coordinates
    blue.setXfrontTL((float) Math.cos(Math.toRadians(blue.getCircle() + 180f)));
    blue.setZfrontTL(-10f + (float) Math.cos(Math.toRadians(blue.getCircle() + 90f)));
    // Bottom-left x, z coordinates
    blue.setXfrontBL((float) Math.cos(Math.toRadians(blue.getCircle() + 180f)));
    blue.setZfrontBL(-10f + (float) Math.cos(Math.toRadians(blue.getCircle() + 90f)));
    // Bottom-right x, z coordinates
    blue.setXfrontBR((float) Math.cos(Math.toRadians(blue.getCircle())));
    blue.setZfrontBR(-10f + (float) Math.cos(Math.toRadians(blue.getCircle() + 270f)));
}
If you give each object that requires independent movement its own model-view matrix, you can achieve this. The other option, for quickly drawing/moving a few independent objects while in the model-view matrix, is:
for each object:
    pushMatrix()
    apply that object's transform
    draw object
    popMatrix()
The method of drawing depends greatly on the OpenGL version you're coding to, but the above will work for simple drawing. I'm not an expert on OpenGL / 3D programming, so if you wait a bit you may hear (see) better wisdom than what I offer :)
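A minimal LWJGL (legacy fixed-function pipeline) sketch of that push/transform/pop pattern, assuming the middle square's vertices are drawn around its own local origin; drawLeftSquare(), drawMiddleSquare() and angle are placeholder names of mine, not code from the thread:

import static org.lwjgl.opengl.GL11.*;

static void renderScene(float angle) {
    drawLeftSquare();               // drawn with the untouched matrix, so it stays put

    glPushMatrix();                 // save the current model-view matrix
    glTranslatef(0f, 0f, -10f);     // place the middle square at (0, 0, -10)
    glRotatef(angle, 0f, 1f, 0f);   // spin it in the x-z plane, i.e. about the y axis
    drawMiddleSquare();             // only this draw call sees the rotation
    glPopMatrix();                  // restore the matrix for everything else
}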

Rotation in OpenGl ES to place objects then rotate the world

I am developing an augmented reality application for Android and trying to use OpenGL to place cubes at locations in the world. My current method can be seen in the code below:
for (Marker ma : ARData.getMarkerlist().values()) {
    Log.d("populating", "");
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    Log.d("distance", String.valueOf(distance));
    Log.d("bearing", String.valueOf(bearing));
    gl.glRotatef(bearing, 0, 0, 1);
    gl.glTranslatef(0, 0, -distance);
    ma.cube.draw(gl);
    gl.glPopMatrix();
}
gl.glRotatef(y, 0, 1, 0);
gl.glRotatef(x, 1, 0, 0);
Where y is yaw and x is the pitch. Currently I am getting a single cube on the screen at a 45 degree angle some way in the distance. It looks like I am getting sensible bearing and distance values. Could it have something to do with the phone's orientation? If you need more code, let me know.
EDIT: I updated the bearing rotation to gl.glRotatef(bearing, 0, 1, 0);. I am now getting my cubes mapped horizontally along the screen at different depths. Still no movement using heading and pitch, but @Mirkules has identified some reasons why that might be.
EDIT 2: I am now attempting to place the cubes by rotating the matrix by the difference in angle between the heading and the bearing to a marker. However, all I get is a sort of jittering where the cubes appear to be rendered in a new position and then jump back to their old position. The code is as above, except for the following:
float angleDiff = bearing - y;
gl.glRotatef((angleDiff),0,1,0);
gl.glTranslatef(0,0,-distance);
bearing and y are both normalised to a 0 - 360 scale. Also, I moved my "camera rotation" to above the code where I set up the markers.
EDIT 3: I have heading working now using float angleDiff = (bearing + y) / 2;. However, I can't seem to get pitch working. I have attempted to use gl.glRotatef(-x, 1, 0, 0); but that doesn't seem to work.
It's tricky to tell exactly what you're trying to do here, but there are a few things that stick out as potential problems.
Firstly, your final two rotations don't seem to actually apply to anything. If these are supposed to represent a movement of the world or camera (which mostly amounts to much the same thing) then they need to happen before drawing anything.
Then your rotations themselves perhaps won't entirely do what you intend.
Your cube is rotated around the Z axis. The usual convention in GL is for the camera to look down the Z axis, with the Y axis being considered 'up'. You can naturally interpret axes however you like, but a rotation around 'Z' would not typically be 'bearing', but 'roll'. 'Bearing' to me would be analogous to 'yaw'.
As you translate along the Z axis, I assume you are trying to position the object by rotating and translating, but obviously if the rotation is around the same axis as you translate along, it won't actually alter the position of the cube - it will always just be directly in front of the camera, spinning on its axis.
I'm not really clear on why you're trying to position the cube like that when it seems like you start off with a more specific location. You could probably directly construct a more appropriate matrix.
Finally, your camera/world rotation is two concatenated rotations around Y and X. You call these yaw and pitch, but typically using Euler angles for a camera rotation does not result in an intuitive result where terms like yaw and pitch make complete sense. It is common to maintain an orientation and apply individual rotations to that in order to update it, rather than attempting to update several dependent rotations.
So yes, I would expect that this code, in the absence of other matrix operations, would likely result in drawing one or more cubes straight ahead which are simply rotated by some angle around the view direction.
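For what it's worth, a hedged GLES 1.x sketch of the ordering described above (world/camera rotation first, then per-marker placement); the Y-up axis convention, the exact pitch/yaw order, and the bearingTo()/distanceTo() helpers (wrapping the Location maths from the question) are assumptions of mine, not code from the thread:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();

// 1. Orient the whole world to match the phone before drawing anything
//    (the exact pitch/yaw order depends on your conventions).
gl.glRotatef(x, 1, 0, 0);   // pitch about X
gl.glRotatef(y, 0, 1, 0);   // yaw / heading about Y (up)

// 2. Then place each marker: yaw by its bearing, push it out by its distance.
for (Marker ma : ARData.getMarkerlist().values()) {
    gl.glPushMatrix();
    gl.glRotatef(bearingTo(ma), 0, 1, 0);   // bearing is a yaw, so rotate about Y, not Z
    gl.glTranslatef(0, 0, -distanceTo(ma)); // move the cube away from the camera
    ma.cube.draw(gl);
    gl.glPopMatrix();
}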
