I'm working on an Android project for measuring areas of land from photographs taken by a drone.
I have an aerial photograph that contains a GPS coordinate. For practical purposes I assume that coordinate represents the central pixel of the picture.
I need to move pixel by pixel through the picture to reach the corners and find out which GPS coordinates the corners of the picture represent.
I have no idea how to achieve this. I have searched but cannot find anything similar to my problem.
Thank you.
If you know the altitude at which the photo was taken and the camera's field of view, I believe you can determine (through trigonometry) the deviation of each pixel from the center, in meters, and from that the GPS coordinate of the pixel.
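For example, here is a minimal sketch of that idea in Java. It assumes a nadir (straight-down) photo whose top edge faces north, and uses a small-offset spherical-earth approximation; all names and parameters are illustrative:

```java
// Sketch: estimate the GPS coordinate of an arbitrary pixel, assuming the
// photo looks straight down and the given fix is the center pixel.
public class PixelToGps {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Returns {lat, lon} of pixel (px, py), given the center GPS fix. */
    static double[] pixelToGps(double centerLat, double centerLon,
                               double altitudeM,
                               double hFovDeg, double vFovDeg,
                               int imageW, int imageH,
                               int px, int py) {
        // Ground footprint covered by the photo, in meters.
        double groundW = 2 * altitudeM * Math.tan(Math.toRadians(hFovDeg) / 2);
        double groundH = 2 * altitudeM * Math.tan(Math.toRadians(vFovDeg) / 2);

        // Offset of the pixel from the image center, in meters.
        double dxM = (px - imageW / 2.0) / imageW * groundW;  // east
        double dyM = (imageH / 2.0 - py) / imageH * groundH;  // north (image y grows down)

        // Convert the metric offset to a lat/lon offset (small-distance approximation).
        double dLat = Math.toDegrees(dyM / EARTH_RADIUS_M);
        double dLon = Math.toDegrees(dxM / (EARTH_RADIUS_M * Math.cos(Math.toRadians(centerLat))));
        return new double[] { centerLat + dLat, centerLon + dLon };
    }
}
```

Calling it for (0, 0) and (imageW, imageH) gives two opposite corners of the photo.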
According to my knowledge:
The height of the drone also matters, so along with the central coordinate you also need the height at which the drone took the picture.
Now you need to run some experiments with a reference picture containing two points with known GPS coordinates. Change the height of the drone and plot the number of pixels between the two coordinates against the height of the drone. Do some curve fitting to get a function relating the two variables.
Using that function you can calculate the "change in GPS coordinate per pixel" at a particular height, and with this parameter you can easily deduce the GPS coordinates in a picture taken by the drone at that height.
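If it helps, here is a minimal sketch of that calibration step, assuming degrees-per-pixel grows linearly with altitude (which holds for a fixed camera looking straight down); the calibration numbers are made-up placeholders:

```java
// Sketch: fit "degrees per pixel" as a linear function of altitude from a few
// calibration flights, then evaluate the fit at any height. Illustrative only.
public class AltitudeCalibration {
    /** Least-squares slope for y = k * x (a line through the origin). */
    static double fitSlope(double[] altitudesM, double[] degPerPixel) {
        double num = 0, den = 0;
        for (int i = 0; i < altitudesM.length; i++) {
            num += altitudesM[i] * degPerPixel[i];
            den += altitudesM[i] * altitudesM[i];
        }
        return num / den;
    }

    public static void main(String[] args) {
        // At each altitude: degrees between the two known points divided by
        // the pixel distance between them in the photo (placeholder values).
        double[] altitudesM  = { 20, 40, 60, 80 };
        double[] degPerPixel = { 1.1e-6, 2.2e-6, 3.2e-6, 4.4e-6 };
        double k = fitSlope(altitudesM, degPerPixel);
        System.out.println("deg/pixel at 50 m ≈ " + (k * 50));
    }
}
```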
I don't know whether this solution works or not, but it is my idea; you can take it and develop it further.
Thanks
I am using a GoPro HERO 4 on a drone to capture images that need to be georeferenced. Ideally I need coordinates of the captured image's corners relative to the drone.
I have the camera's:
Altitude
Horizontal and vertical field of view
Rotation in all 3 axes
I have found a couple of solutions but can't quite translate them to my purposes. The closest one I found is here: https://photo.stackexchange.com/questions/56596/how-do-i-calculate-the-ground-footprint-of-an-aerial-camera, but I can't figure out how, and whether, I can use it, particularly when I have to take both pitch and roll into account.
Thanks for any help I get.
Edit: I code my software in Java.
If you have rotations about all three axes, you can use these matrices - http://planning.cs.uiuc.edu/node102.html - to construct a full (3x3) rotation matrix for your camera.
Assuming that, when the rotation matrix is the identity (i.e. in the camera's own frame), you have defined the camera's axes to be:
X axis for front
Y for side (left)
Z for up
In the camera frame, the four corner rays then have directions (1, ±tan(hFov/2), ±tan(vFov/2)), where hFov and vFov are the horizontal and vertical fields of view.
Calculate these directions and rotate them with the matrix to get the real-world directions; use the camera's real-world coordinate as the ray origin.
To calculate the points on the ground: https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm
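Putting it together, here is a minimal Java sketch under the axis convention above (X front, Y left, Z up), with yaw-pitch-roll applied as R = Rz·Ry·Rx and flat ground at z = 0; the FOV and angle values are illustrative:

```java
// Sketch: project the four corner rays of the camera onto flat ground (z = 0).
// Axis convention: X front, Y left, Z up. Angles in radians.
public class Footprint {

    /** Yaw (Z), pitch (Y), roll (X) rotation matrix, R = Rz * Ry * Rx. */
    static double[][] rotation(double yaw, double pitch, double roll) {
        double cy = Math.cos(yaw),   sy = Math.sin(yaw);
        double cp = Math.cos(pitch), sp = Math.sin(pitch);
        double cr = Math.cos(roll),  sr = Math.sin(roll);
        return new double[][] {
            { cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr },
            { sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr },
            { -sp,   cp*sr,            cp*cr            }
        };
    }

    static double[] mul(double[][] m, double[] v) {
        return new double[] {
            m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
            m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
            m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2]
        };
    }

    /** Intersect a ray from (0, 0, altitude) with the ground plane z = 0. */
    static double[] hitGround(double altitude, double[] dir) {
        if (dir[2] >= 0) return null;                    // ray never hits the ground
        double t = altitude / -dir[2];
        return new double[] { t * dir[0], t * dir[1] };  // meters from the drone
    }

    public static void main(String[] args) {
        double hFov = Math.toRadians(94), vFov = Math.toRadians(60);  // example FOVs
        // With this convention, positive pitch tips the front (X) axis down,
        // so ~85 degrees is almost straight down; small yaw and roll added.
        double[][] r = rotation(0.10, Math.toRadians(85), 0.02);
        double ty = Math.tan(hFov / 2), tz = Math.tan(vFov / 2);
        double[][] corners = { {1, ty, tz}, {1, -ty, tz}, {1, -ty, -tz}, {1, ty, -tz} };
        for (double[] c : corners) {
            double[] p = hitGround(50.0, mul(r, c));     // drone at 50 m
            System.out.println(p == null ? "no hit" : p[0] + ", " + p[1]);
        }
    }
}
```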
I've been looking around and couldn't find an answer to this. I have created a cube/box, and the shape squashes and stretches depending on where the camera is looking. This all seems to resolve itself when the screen is perfectly square, but when I'm using 16:9 the shapes stretch and squash. Is it possible to change this?
(Screenshots: one window at 16:9, and one at 500 px × 500 px.)
As a side question, would it be possible to change the color of the background "sky"?
OpenGL uses a cube [-1,1]^3 to represent the frustum in normalized device coordinates. The viewport transform stretches this in the x and y directions to [0,width] and [0,height]. So to get the correct output aspect ratio, you have to take the viewport dimensions into account when transforming the vertices into clip space. Usually, this is part of the projection matrix. The old fixed-function gluPerspective() function has a parameter to directly create a frustum for a given aspect ratio. As you do not show any code, it is hard to suggest what you actually should change, but it should be quite easy, as it boils down to a simple scale operation along x and y.
To the side question: that color is defined by the value the color buffer is set to when it is cleared. You can set the color via glClearColor().
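For illustration, here is a minimal sketch assuming Android's GLES20/Matrix API (the same idea applies to gluPerspective or any matrix library); it is simplified and not wired to a real GLSurfaceView.Renderer:

```java
// Sketch: build the projection with the viewport's aspect ratio so shapes
// keep their proportions, and set the clear color for the "sky".
import android.opengl.GLES20;
import android.opengl.Matrix;

public class SceneRenderer {
    private final float[] projection = new float[16];

    public void onSurfaceChanged(int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        float aspect = (float) width / height;           // e.g. 16f / 9f
        // 45-degree vertical FOV; the aspect parameter performs exactly the
        // x/y scaling described above.
        Matrix.perspectiveM(projection, 0, 45f, aspect, 0.1f, 100f);
    }

    public void onDrawFrame() {
        GLES20.glClearColor(0.5f, 0.7f, 1.0f, 1.0f);     // light-blue "sky"
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        // ... draw geometry using the projection matrix ...
    }
}
```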
I have a bunch of quadkeys and would like to get the bounding box that is the extent of all of them, i.e. the min/max lat and long that would contain all the quadkeys. Is there a library that would help with this? Thanks.
Quadkeys are just another way of expressing the X/Y/zoom coordinates of a map-tile system.
Let's assume your quadkeys are all at the same resolution (zoom level), i.e. they all have the same number of digits.
If you convert the quadkeys back to X/Y coordinates, it becomes a simple geometry problem: find the X/Y coordinates of the top-left and bottom-right corners of a box that contains a series of X/Y points. Let me know if you need help with that, though it should be basic Euclidean geometry.
Once you find those two corner points, convert them back to Lat/Long, and you will have the Lat/Long points of a bounding box that contains your Quadkeys.
MSDN has example source code showing the conversions between Lat/Long, X/Y/Zoom and QuadKeys.
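In case a sketch helps, here is the whole pipeline in Java, based on the tile-system math described in that article; names are illustrative:

```java
// Sketch: bounding box (min/max lat/lon) of a set of same-zoom quadkeys,
// using standard Web Mercator tile math.
public class QuadkeyBounds {

    /** Quadkey -> tile {x, y}; the zoom level is the key's length. */
    static int[] quadkeyToTile(String quadkey) {
        int x = 0, y = 0;
        for (char c : quadkey.toCharArray()) {
            x <<= 1; y <<= 1;
            int digit = c - '0';
            if ((digit & 1) != 0) x |= 1;   // bit 0 -> x
            if ((digit & 2) != 0) y |= 1;   // bit 1 -> y
        }
        return new int[] { x, y };
    }

    /** Lat/lon of the top-left (north-west) corner of a tile. */
    static double[] tileToLatLon(int x, int y, int zoom) {
        double n = Math.pow(2, zoom);
        double lon = x / n * 360.0 - 180.0;
        double lat = Math.toDegrees(Math.atan(Math.sinh(Math.PI * (1 - 2 * y / n))));
        return new double[] { lat, lon };
    }

    /** Returns {minLat, minLon, maxLat, maxLon} for same-length quadkeys. */
    static double[] boundingBox(String[] quadkeys) {
        int zoom = quadkeys[0].length();
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
        for (String qk : quadkeys) {
            int[] t = quadkeyToTile(qk);
            minX = Math.min(minX, t[0]); maxX = Math.max(maxX, t[0]);
            minY = Math.min(minY, t[1]); maxY = Math.max(maxY, t[1]);
        }
        double[] nw = tileToLatLon(minX, minY, zoom);          // top-left corner
        double[] se = tileToLatLon(maxX + 1, maxY + 1, zoom);  // bottom-right edge
        return new double[] { se[0], nw[1], nw[0], se[1] };
    }
}
```

The +1 on the bottom-right tile includes that tile's full extent, not just its top-left corner.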
I have a large PNG (around 1500 x 2000) that I cut into slices and put back together using HTML, because otherwise the image quality is horrible.
I want to be able to have a marker of the user's current location on this image. I'm a little lost about how to do this, especially if the image is zoomable.
Ex: How do I give the marker a variable location on the image (code-wise)?
How do I know how much to change the coordinates by when they zoom in?
Help or code samples would be highly appreciated! I am very stuck.
Thanks!
Note: please be specific; I admit I am not experienced at Android development.
Why not just store the marker position in the original image's coordinates (x, y)? E.g. if the user is in the middle, save their position as (750, 1000). When you zoom, everything is relative, so if you zoom to 2x, the full image would be 3000x4000 and the marker position would be (1500, 2000).
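A minimal sketch of that mapping, where the scale and pan offsets are whatever your zoom view currently reports (names are illustrative):

```java
// Sketch: keep the marker in original-image coordinates and map it to screen
// coordinates using the current zoom and pan.
public class MarkerMapper {
    float scale = 2f;      // current zoom factor (1 = original size)
    float offsetX = 0f;    // current pan, in screen pixels
    float offsetY = 0f;

    float[] imageToScreen(float imgX, float imgY) {
        return new float[] { imgX * scale + offsetX, imgY * scale + offsetY };
    }
}
```

At 2x zoom with no pan, a marker saved at (750, 1000) maps to (1500, 2000), matching the example above.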
I don't see what your difficulty is :) maybe I'm underestimating what the problem is.
I'm having a problem capturing all the pixel coordinates while dragging with the mouse, using the mouseDragged event in Java.
While I'm dragging slowly, I get all the pixel coordinates, but when I drag fast I get only about a third of them.
For example, if I drag slowly I get 760 pixel values, but when I drag fast I get only 60.
Please help me.
I need all the points because I'm going to use them for signature comparison.
Project description:
The user will draw their signature with the mouse on the login page, and this signature will be compared with the one the user drew on the sign-up page.
I'm going to compare the signatures using the pixel values, so I can only compare them if I capture all the coordinate values.
Please help me.
Windows is not going to give you this; it's up to the refresh rate of the mouse, its DPI, and the rate at which Windows polls for mouse events. You are not going to get all the pixels, so you will need to allow for some ambiguity.
(It doesn't matter whether you use Java or C#.)
Mouse movement events occur every few milliseconds, not for every pixel of movement, so when the mouse is moving rapidly, some pixels will be missed. If you want every single pixel, you'll have to interpolate between pixels whenever the new location is not adjacent to the previous one. One way to interpolate the pixels between two coordinates is Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenhams_line_algorithm
Edit: Fixed link.
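For reference, a minimal Java sketch of that interpolation; lastX/lastY are assumed to hold the coordinates from the previous drag event:

```java
// Sketch: fill in the pixels skipped between two mouse events using
// Bresenham's line algorithm.
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class DragInterpolator {
    /** All integer points on the line from (x0, y0) to (x1, y1), inclusive. */
    static List<Point> line(int x0, int y0, int x1, int y1) {
        List<Point> points = new ArrayList<>();
        int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        while (true) {
            points.add(new Point(x0, y0));
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x0 += sx; }
            if (e2 <  dx) { err += dx; y0 += sy; }
        }
        return points;   // when chaining segments, skip the first point to avoid duplicates
    }
}
```

From your mouseDragged handler, call line(lastX, lastY, e.getX(), e.getY()), append the returned points to the signature trace, then update lastX/lastY.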