I've read a great deal about map projection today; the amount of information available is overwhelming.
I am simply trying to convert lat/long values into screen X, Y coordinates without using any map. I do not need the values projected onto a map, just onto the window.
The window itself represents a location of approximately 1500x1500 meters. The lat/long accuracy needed is 1/10th of a second.
What may be some simpler ways in converting lat/long representation to the screen?
I've read several articles and posts about translating onto images, but nothing related to the plain Java screen coordinate system.
Thanks for any insight.
When projecting to a screen you are still projecting to a "map", so it really depends on what your map projection is. However, if you're only working in such a small area of 1500x1500 meters, you can use a simple Cartesian projection, where each pixel covers an equal amount of ground (i.e. if your screen is 1500 pixels, then each pixel would represent 1 meter). You still need to account for where you are on the Earth, though, since the length of a degree of longitude varies greatly with latitude (a degree of latitude stays roughly constant at about 111 km). If you are working with a fixed area, you can simply look up the length of one degree at that point.
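A minimal sketch of that idea in Java (the class name, the top-left reference corner, and the meters-per-degree constant are my own assumptions, not anything standard):

```java
/**
 * Maps lat/lon to screen pixels for a small (~1500 m) area using a local
 * Cartesian (equirectangular) approximation around a reference corner.
 */
public final class LocalProjection {
    private static final double METERS_PER_DEG_LAT = 111_320.0; // roughly constant everywhere

    private final double originLat;       // latitude of the window's top-left corner
    private final double originLon;       // longitude of the window's top-left corner
    private final double metersPerPixel;  // e.g. 1.0 for a 1500 px window over 1500 m
    private final double metersPerDegLon; // shrinks with cos(latitude)

    public LocalProjection(double originLat, double originLon, double metersPerPixel) {
        this.originLat = originLat;
        this.originLon = originLon;
        this.metersPerPixel = metersPerPixel;
        this.metersPerDegLon = METERS_PER_DEG_LAT * Math.cos(Math.toRadians(originLat));
    }

    /** Returns {x, y} in pixels; y grows downwards, like the Java screen coordinate system. */
    public double[] toPixel(double lat, double lon) {
        double eastMeters = (lon - originLon) * metersPerDegLon;
        double southMeters = (originLat - lat) * METERS_PER_DEG_LAT; // screen y is inverted
        return new double[] { eastMeters / metersPerPixel, southMeters / metersPerPixel };
    }
}
```

For a 1500-pixel window over 1500 meters you would pass metersPerPixel = 1.0 and the lat/lon of the window's top-left corner.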
I want to measure the acceleration (forward and lateral separately) using an android smartphone device in order to be able to analyse the driving behavior/style.
My approach would be as follows:
1. Aligning coordinate systems
Calibration (no motion / first motion):
While the car is stationary, I would calculate the magnitude of gravity using Sensor.TYPE_GRAVITY and rotate it straight onto the z-axis (pointing downwards, assuming a flat surface). That way, the pitch and roll angles should be near zero and equal to the angles of the car relative to the world.
After this, I would start moving straight forward with the car to get a first motion indication using Sensor.TYPE_ACCELEROMETER and rotate this magnitude straight onto the x-axis (pointing forward). This way, the yaw angle should be equal to the vehicle's heading relative to the world.
Update Orientation (while driving):
To keep the coordinate systems aligned while driving, I am going to use Sensor.TYPE_GRAVITY to maintain the roll and pitch of the system via

roll = atan2(A_y, A_z)
pitch = atan2(-A_x, sqrt(A_y^2 + A_z^2))

where A_x, A_y, A_z are the components of the acceleration due to gravity.
Usually, the yaw angle would be maintained via Sensor.TYPE_ROTATION_VECTOR or Sensor.TYPE_MAGNETIC_FIELD. The reason for not using them is that I am also going to use the application in electric vehicles; the high voltages and currents around the motor would presumably degrade the accuracy of those sensors. Hence, the best alternative I know of (although not optimal) is using the GPS course to maintain the yaw angle.
2. Getting measurements
By applying all the aforementioned rotations it should be possible to maintain alignment between the smartphone's and the vehicle's coordinate systems and hence obtain the pure forward and lateral acceleration values on the x-axis and y-axis.
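A rough sketch of how I imagine the measurement step (roll/pitch from the gravity vector, yaw from the GPS bearing; the exact sign conventions depend on the Euler convention and on how the phone is mounted, so the class below is only a starting point, and its names are mine):

```java
import android.location.Location;

/** Rotates device-frame accelerometer readings into the vehicle frame. */
public class VehicleFrame {
    private double roll, pitch, yaw; // radians

    /** Roll and pitch from a Sensor.TYPE_GRAVITY event (gx, gy, gz). */
    public void updateTilt(float gx, float gy, float gz) {
        roll = Math.atan2(gy, gz);
        pitch = Math.atan2(-gx, Math.sqrt(gy * gy + gz * gz));
    }

    /** Yaw from the GPS course; Location.getBearing() is degrees clockwise from north. */
    public void updateYaw(Location fix) {
        if (fix.hasBearing()) {
            yaw = Math.toRadians(fix.getBearing());
        }
    }

    /**
     * Returns {forward, lateral} acceleration: removes roll, then pitch, then
     * aligns with the heading. Signs may need flipping for a given mounting.
     */
    public double[] toVehicle(float ax, float ay, float az) {
        double y1 = Math.cos(roll) * ay + Math.sin(roll) * az;    // de-rotate roll about x
        double z1 = -Math.sin(roll) * ay + Math.cos(roll) * az;
        double x1 = Math.cos(pitch) * ax + Math.sin(pitch) * z1;  // de-rotate pitch about y
        double forward = Math.cos(yaw) * x1 + Math.sin(yaw) * y1; // project onto heading
        double lateral = -Math.sin(yaw) * x1 + Math.cos(yaw) * y1;
        return new double[] { forward, lateral };
    }
}
```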
Questions:
Is this approach applicable or did I miss something crucial?
Is there an easier/alternative approach to this?
Regarding finding acceleration: if you have access to the GPS fixes, couldn't you estimate the forward speed by calculating distance/time between fixes, and then the forward acceleration from the change in that speed?
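For example, something along these lines with the Android Location API (the class and field names are mine, and this only approximates the forward component):

```java
import android.location.Location;

/** Estimates forward speed and acceleration from consecutive GPS fixes. */
public class GpsMotion {
    private Location last;
    private double lastSpeed; // m/s

    /** Call for each new fix; returns acceleration in m/s^2 (0 until two fixes are seen). */
    public double onFix(Location fix) {
        double accel = 0;
        if (last != null) {
            double dt = (fix.getTime() - last.getTime()) / 1000.0; // seconds between fixes
            if (dt > 0) {
                double speed = fix.hasSpeed()
                        ? fix.getSpeed()              // m/s reported by the GPS chip
                        : last.distanceTo(fix) / dt;  // fall back to distance / time
                accel = (speed - lastSpeed) / dt;
                lastSpeed = speed;
            }
        }
        last = fix;
        return accel;
    }
}
```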
If the goal is to characterise driving behaviour and style, I would imagine gathering a large data set, sorting it with a k-means clustering algorithm, and then using an LSTM RNN to make predictions could be another method. (This requires a large data set, though; I don't know whether that is feasible, nor am I aware of what factors you would want to incorporate.)
Sounds like an interesting problem though.
I want to build a global raster over all GPS coordinates.
The cells should be about 20x20 metres...
I want to write a Java application and work with/adjust that raster later on.
For example, to get the cell for a single GPS coordinate, or maybe even combine two cells into a bigger one (not necessary).
Can anyone give me advice on an API or something else that could help me?
As I already answered in a similar question, you cannot create a raster expressed in meters without first transforming all coordinates to a meter-based x,y (= Cartesian) coordinate system.
GPS coordinates are spherical, so the cell sizes (especially the x, or longitudinal, span) would vary from cell to cell when going north or south.
So either express your raster size in decimal degrees (use the equivalent of your desired width (20 m) expressed in degrees at the centre of your area of interest).
Note: The Earth's circumference is 40,000 km, so 40,000 / 360 gives 111.111 km per degree; use this factor to calculate the number of degrees corresponding to 20 m (for longitude, also divide by cos(latitude)), as in the sketch below.
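A small sketch of that calculation in Java (the class name and the floor-based cell indices are my own choices):

```java
/** A global raster with ~20 m cells, expressed in decimal degrees around a centre of interest. */
public class DegreeRaster {
    private static final double KM_PER_DEGREE = 40_000.0 / 360.0; // ~111.111 km

    private final double cellLatDeg; // latitudinal cell size in degrees
    private final double cellLonDeg; // longitudinal cell size; widens in degrees towards the poles

    public DegreeRaster(double cellSizeMeters, double centerLatDeg) {
        double degPerMeter = 1.0 / (KM_PER_DEGREE * 1000.0);
        this.cellLatDeg = cellSizeMeters * degPerMeter;
        this.cellLonDeg = cellLatDeg / Math.cos(Math.toRadians(centerLatDeg));
    }

    /** Integer cell indices for a GPS coordinate, usable as keys of an array or map. */
    public long[] cellOf(double lat, double lon) {
        return new long[] {
                (long) Math.floor(lat / cellLatDeg),
                (long) Math.floor(lon / cellLonDeg)
        };
    }
}
```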
Or you transform all coordinates to UTM.
Having them in UTM, you can then implement a raster expressed in meters.
Difficulties with the UTM approach: each UTM zone spans only 6 degrees of longitude (±3° from its central meridian), so you will get major problems when your locations have to cross a zone boundary.
There is no API for the raster itself that I know of, but you can implement it using a two-dimensional array. (There are APIs for transforming a lat/lon coordinate to UTM.)
If the area of interest is larger than one country, this approach may not work (well); the array could become too big.
In that case the task gets more complex: you would need a spatial index, such as a quadtree, to limit the number of raster elements by adapting to dense locations.
I'm doing a project on Android for measuring areas of land through photographs taken by a drone.
I have an aerial photograph that contains a GPS coordinate. For practical purposes I assume that coordinate represents the central pixel of the picture.
I need to move pixel by pixel through the picture to reach the corners and find out what GPS coordinates the corners of the picture represent.
I have no idea how to achieve this. I have searched but cannot find anything similar to my problem.
Thank You.
If you know the altitude at which the photo was taken and the camera's field of view, I believe you can determine (through trigonometry) the deviation of each pixel from the centre, in meters, and then determine the GPS coordinate it represents.
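A sketch of that trigonometry (assuming the camera points straight down and the image is undistorted; the class name and the simple meters-per-degree constant are my assumptions):

```java
/** Approximate GPS coordinates of an aerial photo's corners from its centre fix. */
public class PhotoFootprint {
    private static final double METERS_PER_DEG_LAT = 111_320.0;

    /** Returns {latTopLeft, lonTopLeft, latBottomRight, lonBottomRight}. */
    public static double[] corners(double centerLat, double centerLon,
                                   double altitudeMeters, double horizontalFovDegrees,
                                   int widthPx, int heightPx) {
        // Ground width covered by the image, from altitude and field of view (simple pinhole model).
        double groundWidth = 2 * altitudeMeters * Math.tan(Math.toRadians(horizontalFovDegrees / 2));
        double metersPerPx = groundWidth / widthPx;
        double halfWidthM = metersPerPx * widthPx / 2.0;
        double halfHeightM = metersPerPx * heightPx / 2.0;

        // Convert the metric offsets to degrees around the centre coordinate.
        double dLat = halfHeightM / METERS_PER_DEG_LAT;
        double dLon = halfWidthM / (METERS_PER_DEG_LAT * Math.cos(Math.toRadians(centerLat)));
        return new double[] {
                centerLat + dLat, centerLon - dLon,   // top-left corner
                centerLat - dLat, centerLon + dLon }; // bottom-right corner
    }
}
```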
According to my knowledge,
the height of the drone also matters, so along with the central coordinate you also need the height at which the drone took the picture.
Now perform an experiment with a reference picture containing two points with known GPS coordinates. Change the height of the drone and plot the number of pixels between those two coordinates against the height of the drone. Do some curve fitting to get a function relating the two variables.
Using that function you can calculate the "change in GPS coordinate per pixel" at a particular height, and with this parameter you can easily deduce the GPS coordinates in a picture taken by the drone at that height, as sketched below.
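If the fitted relationship turns out to be roughly linear in height, applying it could look something like this (all names and the linear model are illustrative assumptions, not tested values):

```java
/** Applies an empirically fitted "degrees per pixel" model that is linear in flight height. */
public class PixelToDegrees {
    private final double slope;     // degrees per pixel per meter of height, from curve fitting
    private final double intercept; // degrees per pixel at height 0 (ideally close to 0)

    public PixelToDegrees(double slope, double intercept) {
        this.slope = slope;
        this.intercept = intercept;
    }

    /** Longitude of a pixel that is dxPixels to the right of the image centre. */
    public double lonOfPixel(double centerLon, int dxPixels, double heightMeters) {
        double degPerPixel = slope * heightMeters + intercept;
        return centerLon + dxPixels * degPerPixel;
    }
}
```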
I don't know whether this solution works or not, but this is my idea; you can take it and develop it further.
Thanks
I have a bunch of quadkeys and would like to get the bounding box that covers all of them, i.e. the min/max lat and long that would contain all the quadkeys. Is there a library that would help with this? Thanks.
Quadkeys are just another way of displaying X/Y/Zoom coordinates of a MapTile system.
Let's assume your quadkeys are all at the same resolution (zoom level), i.e. they all have the same number of digits.
If you convert the quadkeys back to X/Y coordinates, it becomes a simple geometry problem: find the X,Y coordinates of the top-left and bottom-right corners of a box that contains a series of X,Y points. Let me know if you need help with that, though it should be basic Euclidean geometry.
Once you find those two corner points, convert them back to Lat/Long, and you will have the Lat/Long points of a bounding box that contains your Quadkeys.
MSDN has example source code showing the conversions between Lat/Long, X/Y/Zoom and QuadKeys.
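A minimal sketch of those conversions (following the standard tile-system formulas; error handling and input validation omitted, and the class name is mine):

```java
/** Bounding box (min/max lat/lon) of a set of quadkeys that share the same zoom level. */
public class QuadKeyBounds {

    /** QuadKey -> tile X/Y; the quadkey length is the zoom level. */
    static int[] toTileXY(String quadKey) {
        int x = 0, y = 0;
        for (int i = quadKey.length(); i > 0; i--) {
            int mask = 1 << (i - 1);
            switch (quadKey.charAt(quadKey.length() - i)) {
                case '1': x |= mask; break;
                case '2': y |= mask; break;
                case '3': x |= mask; y |= mask; break;
            }
        }
        return new int[] { x, y };
    }

    /** Lat/Lon of a tile's top-left (north-west) corner in the Web Mercator tile scheme. */
    static double[] tileToLatLon(int tileX, int tileY, int zoom) {
        double n = 1 << zoom;
        double lon = tileX / n * 360.0 - 180.0;
        double lat = Math.toDegrees(Math.atan(Math.sinh(Math.PI * (1 - 2 * tileY / n))));
        return new double[] { lat, lon };
    }

    /** Returns {minLat, minLon, maxLat, maxLon} for quadkeys of equal length. */
    public static double[] boundingBox(Iterable<String> quadKeys) {
        int zoom = 0, minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (String q : quadKeys) {
            zoom = q.length();
            int[] xy = toTileXY(q);
            minX = Math.min(minX, xy[0]); maxX = Math.max(maxX, xy[0]);
            minY = Math.min(minY, xy[1]); maxY = Math.max(maxY, xy[1]);
        }
        double[] northWest = tileToLatLon(minX, minY, zoom);         // max lat, min lon
        double[] southEast = tileToLatLon(maxX + 1, maxY + 1, zoom); // min lat, max lon
        return new double[] { southEast[0], northWest[1], northWest[0], southEast[1] };
    }
}
```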
I have a 400x400 map that approximately represents an area of 250x250 km, onto which I want to project a GPS coordinate given as lat/lon.
Taking into account that precision is not very important (errors of a few km are tolerable), is there any easy formula or algorithm to do the projection and translate it to a pixel coordinate? If there is one, what error can I expect?
Or am I really wrong, and there is no easy way for the precision that I need?
Notes:
I read about PROJ.4, but I would prefer not to use any external library because the program has to run on small devices.
I don't have any calibration data for the map, but I can calibrate it myself using an online map.
From here I read up a little and I know how to convert lat/lon to x/y/z coordinates, but I don't know how to deal with the z.
Usually, this is done using a transformation matrix and a Mercator projection.
Here is a good place to start.
Although it isn't java, there is an open source project called OpenHeatMap which does this within its source code. This might be a good place to look (specifically the setLatLonViewingArea, setLatLonToXYMatrix, mercatorLatitudeToLatitude in maprender/src/maprender.mxml).
Hope this helps!
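If you do go the Mercator route, the latitude conversion itself is short. A sketch (not OpenHeatMap's actual code) that maps a latitude to a normalised Mercator y over the whole world; for a small map you would instead interpolate between the Mercator y values of your map's top and bottom edges:

```java
/** Converts a latitude to a pixel row using the Mercator projection of the whole world. */
static double latitudeToPixelY(double latDeg, int mapHeightPx) {
    double latRad = Math.toRadians(latDeg);
    double mercY = Math.log(Math.tan(Math.PI / 4 + latRad / 2)); // Mercator projection
    double normalized = 0.5 - mercY / (2 * Math.PI);             // 0 near ~85°N, 1 near ~85°S
    return normalized * mapHeightPx;
}
```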
The GIS people will probably stone me for this, but assuming you're not at a high latitude, you could just figure out the lat/lon of two diagonal corners of your map to get the bounding box, pick a corner as your origin, take the difference between your GPS coordinate and the origin, do a simple multiplication to scale that to pixels, and then draw the point.
I've used this in the past for a map program I was playing with, and I'm at about the 39th parallel. If it doesn't have to be dead accurate and you're not too close to a pole (though, for a 250 km square, you'd have to be quite close to a pole for gross errors to happen), this is the quickest and easiest approach.
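A sketch of that approach (the corner coordinates are whatever you calibrate against an online map; the class and method names are mine):

```java
/** Maps lat/lon to pixels by linear interpolation inside a known bounding box. */
public class SimpleMapper {
    private final double north, west, south, east; // calibrated corner coordinates of the map
    private final int widthPx, heightPx;

    public SimpleMapper(double north, double west, double south, double east,
                        int widthPx, int heightPx) {
        this.north = north; this.west = west; this.south = south; this.east = east;
        this.widthPx = widthPx; this.heightPx = heightPx;
    }

    public int[] toPixel(double lat, double lon) {
        int x = (int) Math.round((lon - west) / (east - west) * widthPx);
        int y = (int) Math.round((north - lat) / (north - south) * heightPx); // y grows downwards
        return new int[] { x, y };
    }
}
```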