Global Raster over GPS Data - java

I want to build a global raster over all GPS coordinates.
The cells should be about 20x20 metres.
I want to write a Java application and work with/adjust that raster later on,
for example to get the cell for a single GPS coordinate, or maybe even combine two cells into a bigger one (not necessary).
Can anyone give me advice on an API or something else that could help me?

As I already answered on a similar question, you cannot create a raster expressed in meters without first transforming all coordinates to a meter-based x,y (Cartesian) coordinate system.
GPS coordinates are spherical ones; the physical cell sizes (especially the x, or longitudinal, span) would vary from cell to cell when going north or south.
So either express your raster size in decimal degrees (use the equivalent of your desired width (20 m) expressed in degrees at the center of your area of interest).
Note: the Earth's circumference is 40,000 km, so 40,000 km / 360 degrees gives 111.11 km per degree; use this factor to calculate the number of degrees corresponding to 20 m.
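That factor can be sketched in Java (a rough spherical approximation; class and method names are my own):

```java
// Sketch: convert a desired cell size in meters to decimal degrees.
// Assumes a spherical Earth with circumference ~40,000 km; names are illustrative.
public class RasterDegrees {
    static final double METERS_PER_DEGREE = 40_000_000.0 / 360.0; // ~111,111 m

    // Latitudinal span of a cell, in degrees, for a given size in meters.
    public static double latDegreesFor(double meters) {
        return meters / METERS_PER_DEGREE;
    }

    // Longitudinal degrees shrink with cos(latitude), so more degrees are
    // needed per meter away from the equator.
    public static double lonDegreesFor(double meters, double latitudeDeg) {
        return meters / (METERS_PER_DEGREE * Math.cos(Math.toRadians(latitudeDeg)));
    }
}
```

So a 20 m cell is roughly 0.00018 degrees of latitude anywhere, but wider in longitude degrees away from the equator.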
Or you transform all coordinates to UTM.
Having them in UTM, you can then implement a raster expressed in meters.
Difficulty with the UTM approach: each UTM zone is only 6 degrees of longitude wide (±3 degrees around its central meridian), so you will get major problems when the locations have to cross a zone boundary.
There is no API for that that I know of, but you can implement it yourself using a two-dimensional array. (There are APIs for transforming a lat/lon coordinate to UTM.)
If the area of interest is larger than one country, this approach may not work (well); the array could be too big.
In that case the task gets more complex: you would need a spatial index, such as a quadtree, to limit the number of raster elements by adapting to dense locations.
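As a minimal sketch of the degree-based raster with a sparse backing map (avoiding the too-big array), assuming illustrative class and method names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a global raster keyed by integer (row, col) cell indices.
// Cell size is given in decimal degrees (precomputed from the 20 m target);
// a HashMap keeps memory proportional to occupied cells, not the whole grid.
public class GpsRaster {
    private final double cellSizeDeg;
    private final Map<Long, Integer> counts = new HashMap<>();

    public GpsRaster(double cellSizeDeg) {
        this.cellSizeDeg = cellSizeDeg;
    }

    // Map a coordinate to its cell, encoding (row, col) into one long key.
    public long cellKey(double lat, double lon) {
        long row = (long) Math.floor((lat + 90.0) / cellSizeDeg);
        long col = (long) Math.floor((lon + 180.0) / cellSizeDeg);
        return row * 4_000_000L + col; // 4M columns is enough at this cell size
    }

    public void add(double lat, double lon) {
        counts.merge(cellKey(lat, lon), 1, Integer::sum);
    }

    public int countAt(double lat, double lon) {
        return counts.getOrDefault(cellKey(lat, lon), 0);
    }
}
```

Combining two cells would then just mean merging their entries under one key.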

Convert pixels of a picture on GPS coordinates

I'm doing an Android project for measuring areas of land through photographs taken by a drone.
I have an aerial photograph that contains a GPS coordinate. For practical purposes I assume that coordinate represents the central pixel of the picture.
I need to move pixel by pixel in the picture to reach the corners and know what GPS coordinates the corners of the picture represent.
I have no idea how to achieve it. I have searched but cannot find anything similar to my problem.
Thank you.
If you know the altitude at which the photo was taken and the camera's maximum capture angle (its field of view), I believe you can determine through trigonometry the deviation of each pixel from the center, in meters, and then determine the GPS coordinate of that pixel.
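A sketch of that trigonometry, assuming a nadir (straight-down) photo and a known horizontal field of view (all names and parameters are illustrative):

```java
// Sketch: for a straight-down photo, the ground width covered is
// 2 * altitude * tan(fov / 2), which gives a meters-per-pixel factor.
// The constant is a rough spherical value for meters per degree of latitude.
public class PixelGeo {
    static final double METERS_PER_DEG_LAT = 111_320.0;

    public static double metersPerPixel(double altitudeM, double fovDeg, int imageWidthPx) {
        double groundWidthM = 2.0 * altitudeM * Math.tan(Math.toRadians(fovDeg / 2.0));
        return groundWidthM / imageWidthPx;
    }

    // Latitude of a pixel 'dy' pixels above the image center (center = GPS fix).
    public static double pixelLat(double centerLat, int dy, double mPerPx) {
        return centerLat + (dy * mPerPx) / METERS_PER_DEG_LAT;
    }
}
```

The longitude offset works the same way, scaled by cos(latitude); tilt of the camera would invalidate the nadir assumption.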
According to my knowledge, the height of the drone also matters, so along with the central coordinate you also need the height at which the drone took the picture.
Then perform an experiment with a reference picture containing two points with known GPS coordinates. Change the height of the drone and plot the number of pixels between the two coordinates against the height of the drone. Do some curve fitting to get a function relating the two variables.
Using that function you can calculate the "change in GPS coordinate per pixel" at a particular height, and with this parameter you can easily deduce the GPS coordinates in a picture taken by the drone at that height.
I don't know whether this solution works or not, but this is my idea; you can use it and develop it further.
Thanks

Math Vector Matrix multiplication

As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions. To map a 3D vector's position from "world space" to viewport coordinates you write, for example,
"gl_Position = uModelViewProjectionMatrix * vPosition" in your vertex shader.
My question is how you can multiply a 3D vector by a 4D matrix and get a 2D vector as a result, and, more importantly, is there a function to do this on the CPU side (especially a library/class for Java on Android)?
Just clarifying a few terms:
The viewport is the region within the window you're drawing to. It's specified in pixels.
The rasterizer needs coordinates in normalized device coordinates which is -1 to 1. It then maps these to the viewport area.
gl_Position must take a 4D vector in clip space. This is the space triangles are clipped in (for example, and particularly, if they intersect the near plane). It is separate from normalized device coordinates because the perspective divide hasn't happened yet. Doing this yourself would be pos /= pos.w, but that loses some information OpenGL needs for clipping, depth and interpolation.
This brings me to the answer. You're correct, you can't multiply a 3D vector by a 4x4 matrix. It's actually using homogeneous coordinates and the vector is 4D with a 1 at the end. The 4D result is for clip space. The rasterizer creates fragments with just the 2D position, but w is used for perspective correct interpolation and z is interpolated for depth testing.
Finally, the ModelViewProjection matrix implies the introduction of three more spaces. These are purely convention but with good reasons to exist. Mesh vertices are given in object space. You can place objects in the world with a model transformation matrix. You provide a camera position and rotation in the world with the view matrix. The projection matrix then defines the viewing volume by scaling everything for clip space. A reason to separate the view and projection matrices is for operations in eye space such as lighting calculations.
I won't go into any more detail, but hopefully this sets you on the right track.
As far as I know, the OpenGL viewport's coordinate system is two-dimensional and ranges between -1 and 1 in the x and y directions.
Sort of, but not exactly like that.
From a mathematical point of view, what matters is the dimension of the kernel. When it comes to the framebuffer, things don't end at the viewport coordinates. From there things get "split": the (x, y) coordinates determine which fragment to touch, and the (z, w) coordinates are usually used for calculations that ultimately end up in the depth buffer.
My Question is how you can multiply a 3DVector by a 4D Matrix and get a 2DVector as a result
By padding the 3D vector to 4D: in terms of homogeneous coordinates, pad it with zeros except for the last element, which is set to 1. This allows you to multiply it with a 4×4 matrix. To get back to 2D you project it down into the lower-dimensional vector space, just like a 3D object casts a 2D shadow onto a surface. The simplest projection is simply omitting the dimensions you're not interested in, such as dropping z and w when going to the viewport.
is there a function to do this on CPU side
There are several linear algebra libraries. Just pad the vectors accordingly to transform with higher dimension matrices and project them to get to lower dimensions.
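A plain-Java sketch of the pad-and-project idea (names are mine; on Android, android.opengl.Matrix.multiplyMV performs the same multiplication on float arrays):

```java
// Sketch of the padding-and-projecting described above: pad (x, y, z) to
// (x, y, z, 1), multiply by a 4x4 matrix (row-major here), divide by w,
// then drop z and w to get a 2D result.
public class Homogeneous {
    // Multiply a row-major 4x4 matrix with a 3D point padded to (x, y, z, 1).
    public static double[] transform(double[] m, double x, double y, double z) {
        double[] v = {x, y, z, 1.0};
        double[] r = new double[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                r[row] += m[row * 4 + col] * v[col];
        return r;
    }

    // Perspective divide, then projection down to 2D (drop z and w).
    public static double[] toNdc2d(double[] clip) {
        return new double[] { clip[0] / clip[3], clip[1] / clip[3] };
    }
}
```

With the identity matrix the point passes through unchanged, which is a handy sanity check for any such helper.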

Rotation in OpenGL ES to place objects then rotate the world

I am developing an augmented reality application for Android and trying to use OpenGL to place cubes at locations in the world. My current method can be seen in the code below:
for (Marker ma : ARData.getMarkerlist().values()) {
    Log.d("populating", "");
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    Log.d("distance", String.valueOf(distance));
    Log.d("bearing", String.valueOf(bearing));
    gl.glRotatef(bearing, 0, 0, 1);
    gl.glTranslatef(0, 0, -distance);
    ma.cube.draw(gl);
    gl.glPopMatrix();
}
gl.glRotatef(y, 0, 1, 0);
gl.glRotatef(x, 1, 0, 0);
Where y is yaw and x is pitch. Currently I am getting a single cube on the screen at a 45-degree angle some way in the distance. It looks like I am getting sensible bearing and distance values. Could it have something to do with the phone's orientation? If you need more code, let me know.
EDIT: I updated the bearing rotation to gl.glRotatef(bearing, 0, 1, 0);. I am now getting my cubes mapped horizontally along the screen at different depths. Still no movement using heading and pitch, but @Mirkules has identified some reasons why that might be.
EDIT 2: I am now attempting to place the cubes by rotating the matrix by the difference in angle between the heading and the bearing to a marker. However, all I get is a sort of jittering where the cubes appear to be rendered in a new position and then jump back to their old position. Code as above, except for the following:
float angleDiff = bearing - y;
gl.glRotatef((angleDiff),0,1,0);
gl.glTranslatef(0,0,-distance);
bearing and y are both normalised to a 0-360 scale. Also, I moved my "camera rotation" to above the code where I set the markers.
EDIT 3: I have heading working now using float angleDiff = (bearing + y) / 2;. However, I can't seem to get pitch working. I have attempted to use gl.glRotatef(-x, 1, 0, 0); but that doesn't seem to work.
It's tricky to tell exactly what you're trying to do here, but there are a few things that stick out as potential problems.
Firstly, your final two rotations don't seem to actually apply to anything. If these are supposed to represent a movement of the world or camera (which mostly amounts to much the same thing) then they need to happen before drawing anything.
Then your rotations themselves perhaps won't entirely do what you intend.
Your cube is rotated around the Z axis. The usual convention in GL is for the camera to look down the Z axis, with the Y axis being considered 'up'. You can naturally interpret axes however you like, but a rotation around 'Z' would not typically be 'bearing', but 'roll'. 'Bearing' to me would be analogous to 'yaw'.
As you translate along the Z axis, I assume you are trying to position the object by rotating and translating, but obviously if the rotation is around the same axis as you translate along, it won't actually alter the position of the cube - it will always just be directly in front of the camera, spinning on its axis.
I'm not really clear on why you're trying to position the cube like that when it seems like you start off with a more specific location. You could probably directly construct a more appropriate matrix.
Finally, your camera/world rotation is two concatenated rotations around Y and X. You call these pitch and roll, but typically using euler angles for a camera rotation does not result in an intuitive result where terms like pitch and roll make complete sense. It is common to maintain an orientation and apply individual rotations to that in order to update it, rather than attempting to update several dependent rotations.
So yes, I would expect that this code, in the absence of other matrix operations, would likely result in drawing one or more cubes straight ahead which are simply rotated by some angle around the view direction.
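As a hedged alternative to rotating-then-translating, the marker's offset can be computed directly with trigonometry; this sketch assumes the GL convention of a camera looking down -Z with +X to the right, and a bearing measured clockwise from north (the helper is mine, not the poster's code):

```java
// Sketch: instead of glRotatef + glTranslatef, compute the marker's offset
// in eye space directly from bearing and distance, then translate once.
// Assumed convention: camera looks down -Z, +X is east, bearing is
// degrees clockwise from north.
public class MarkerPlacement {
    public static float[] offsetFor(float bearingDeg, float distance) {
        double b = Math.toRadians(bearingDeg);
        float x = (float) (distance * Math.sin(b));  // east component
        float z = (float) (-distance * Math.cos(b)); // north maps to -Z
        return new float[] { x, 0f, z };
    }
}
```

A single gl.glTranslatef(x, 0, z) with this offset then places the cube without any per-marker rotation, which avoids the rotate/translate ordering pitfalls described above.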

Rotation around a specific point (eg, rotate around 0,0,0)

I've been searching a lot on this problem, but I couldn't really find an answer that would fit.
I need to rotate a cylinder around a given point (e.g. (0,0,0)), but the pivot of the cylinder is given by default. How do I change that?
I found this topic, and it's quite what I would want to do, but I don't know how to do it with java.
To explain better what I would like to do, I'll show three images:
imageshack.us/photo/my-images/259/aintgood.jpg
imageshack.us/photo/my-images/840/whatineed.jpg
imageshack.us/photo/my-images/705/nogoodn.jpg
So, the first image shows my basic problem: the cylinder should be positioned with its end at the center of the sphere, say (0,0,0). The user gives two angles; the first is for a rotX command, the second for a rotZ one. The pivot of the cylinder is at its center, so, as image 3 shows, even if I translate the cylinder so its end is at the center of the sphere, when it rotates the whole arrangement is ruined.
Image 2 shows what the cylinder-sphere group should look like, regardless of the given angles.
That image was not produced by an algorithm, but by hand calculation and mouse rotation.
The general procedure for rotation about an arbitrary point P is:
Translate by -P (so P is at (0, 0, 0))
Rotate around the origin
Translate by P (to bring the origin back to the original location of P)
The easiest way to do this is to represent everything in homogeneous coordinates and represent translations and rotations by matrices. Composing the above three transformations (translate-rotate-translate) is done by matrix multiplication. If the rotation is composed of two or more simpler rotations, then the rotation matrix itself is a product of the matrices for the simpler rotations.
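The three steps above can be sketched directly on coordinates (a minimal 2D example; names are mine):

```java
// Sketch of translate-rotate-translate about a pivot P, applied to a point.
// Done directly on coordinates here; with matrices it is T(P) * R * T(-P).
public class PivotRotation {
    // Rotate (x, y) around pivot (px, py) by angleDeg, in the XY plane.
    public static double[] rotateAround(double x, double y,
                                        double px, double py, double angleDeg) {
        double a = Math.toRadians(angleDeg);
        double dx = x - px, dy = y - py;                 // 1. translate by -P
        double rx = dx * Math.cos(a) - dy * Math.sin(a); // 2. rotate about origin
        double ry = dx * Math.sin(a) + dy * Math.cos(a);
        return new double[] { rx + px, ry + py };        // 3. translate back by P
    }
}
```

In legacy OpenGL the same composition is written in code order as glTranslatef(px, py, pz); glRotatef(...); glTranslatef(-px, -py, -pz); since each call multiplies the current matrix on the right, the -P translation is applied to vertices first.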

Java Simple WGS84 Lat Lon to Pixel X, Y

I've read a multitude of information regarding map projection today. The amount of information available is overwhelming.
I am attempting to simply convert lat, long values into screen X, Y coordinates without using any map. I do not need the values projected onto any map, just onto the window.
The window itself represents an area of approximately 1500x1500 meters. The lat/long accuracy needed is to 1/10th of a second.
What may be some simpler ways of converting a lat/long representation to the screen?
I've read several articles and posts regarding translation onto images, but nothing related to the natural Java coordinate system.
Thanks for any insight.
When projecting to a screen, you are still projecting to a "map", so it really depends on what your map projection is. However, if you're only working in such a small area of 1500x1500 meters, you can use a simple Cartesian projection where each pixel is an equal amount of space (i.e. if your screen is 1500 pixels, each pixel would represent 1 meter). However, you still need to account for where you are on the Earth, since the length of a degree (in both latitude and longitude) can vary greatly depending on where you are. If you are working with a fixed area, you should be able to look up the length of 1 degree at that point.
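A minimal sketch of such a simple Cartesian mapping, assuming the area is small, the origin coordinate anchors the top-left of the window, and all names are my own:

```java
// Sketch: equirectangular mapping of lat/lon into window pixels for a small
// fixed area. Screen y grows downward from originLat; the longitude scale
// uses cos(latitude) since degrees of longitude shrink away from the equator.
public class LatLonToPixel {
    static final double METERS_PER_DEG = 111_320.0; // rough spherical value

    public static int[] toPixel(double lat, double lon,
                                double originLat, double originLon,
                                double metersPerPixel) {
        double north = (originLat - lat) * METERS_PER_DEG;
        double east = (lon - originLon) * METERS_PER_DEG
                * Math.cos(Math.toRadians(originLat));
        return new int[] { (int) Math.round(east / metersPerPixel),
                           (int) Math.round(north / metersPerPixel) };
    }
}
```

At 1 m per pixel this keeps the whole 1500x1500 m area on a 1500-pixel window, and a tenth of a second of latitude (~3 m) maps to about 3 pixels.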
