I have a 400x400 pixel map that approximately represents an area of 250x250 km, onto which I want to project a GPS coordinate given as lat/lon.
Taking into account that precision is not very important (errors of a few km are tolerable), is there any easy formula or algorithm to do the projection and translate it to a pixel coordinate? If there is one, what error can I expect?
Or am I wrong, and is there no easy way to get the precision I need?
Notes:
I read about PROJ.4, but I'd prefer not to use any external library because the program has to run on small devices.
I don't have any calibration data for the map, but I can calibrate it myself using an online map.
From here I did some research and I know how to convert lat/lon to x/y/z coordinates, but I don't know how to deal with the z.
Usually, this is done using a transformation matrix and a Mercator projection.
Here is a good place to start.
Although it isn't Java, there is an open source project called OpenHeatMap which does this within its source code. This might be a good place to look (specifically setLatLonViewingArea, setLatLonToXYMatrix, and mercatorLatitudeToLatitude in maprender/src/maprender.mxml).
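If all you need is the latitude warping, the core Mercator math is small enough to inline. A minimal sketch (these are the standard Web Mercator formulas, not code taken from OpenHeatMap):

// Sketch: standard Web Mercator transform, mapping lat/lon (degrees)
// to normalized [0, 1) map coordinates; multiply by your map size in pixels.
public class Mercator {
    static double mercatorX(double lonDeg) {
        return (lonDeg + 180.0) / 360.0;
    }

    static double mercatorY(double latDeg) {
        double lat = Math.toRadians(latDeg);
        return 0.5 - Math.log(Math.tan(Math.PI / 4 + lat / 2)) / (2 * Math.PI);
    }
}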
Hope this helps!
The GIS people will probably stone me for this, but assuming you're not at a high latitude, you could just figure out the lat/lon of diagonal corners of your map to get the bounding box, pick a corner as your origin, take the difference between your GPS coordinate and the origin, then do a simple multiplication to scale that to pixels, and draw the point.
I've used this in the past for a map program I was playing with, and I'm at about the 39th parallel. If it doesn't have to be dead accurate and you're not too close to a pole (though, for a 250 km square, you'd have to be close to a pole for gross errors to happen), this is the quickest and easiest way.
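In code, that boils down to two multiplications per point. A minimal sketch, assuming the map's corners were calibrated by hand as described in the question's notes (the bounds below are hypothetical placeholders for a ~250 km square near the 40th parallel):

// Sketch: linear (equirectangular) lat/lon -> pixel mapping.
// The four corner values are hypothetical and would come from manual calibration.
public class MapProjector {
    static final double LAT_TOP = 41.5, LAT_BOTTOM = 39.2;  // hypothetical bounds
    static final double LON_LEFT = -3.8, LON_RIGHT = -0.9;  // hypothetical bounds
    static final int WIDTH = 400, HEIGHT = 400;

    public static int[] toPixel(double lat, double lon) {
        int x = (int) ((lon - LON_LEFT) / (LON_RIGHT - LON_LEFT) * WIDTH);
        int y = (int) ((LAT_TOP - lat) / (LAT_TOP - LAT_BOTTOM) * HEIGHT); // y grows downward
        return new int[] { x, y };
    }
}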
I want to measure the acceleration (forward and lateral, separately) using an Android smartphone in order to analyze driving behavior/style.
My approach would be as follows:
1. Aligning coordinate systems
Calibration (no motion / first motion):
While the car is stationary, I would calculate the magnitude of gravity using Sensor.TYPE_GRAVITY and rotate it straight to the z-axis (pointing downwards, assuming a flat surface). That way, the pitch and roll angles should be near zero and equal to the angles of the car relative to the world.
After this, I would start moving straight forward with the car to get a first motion indication using Sensor.TYPE_ACCELEROMETER and rotate this magnitude straight to the x-axis (pointing forward). This way, the yaw angle should be equal to the vehicle's heading relative to the world.
Update Orientation (while driving):
To keep the coordinate systems aligned while driving, I am going to use Sensor.TYPE_GRAVITY to maintain the roll and pitch of the system via

roll = atan2(A_y, A_z)
pitch = atan2(-A_x, sqrt(A_y^2 + A_z^2))

where A_x, A_y, A_z are the components of the gravity acceleration.
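A minimal sketch of that update step as an Android SensorEventListener callback (assuming the standard Android sensor axis conventions):

// Sketch: maintaining roll and pitch from Sensor.TYPE_GRAVITY readings,
// inside an android.hardware.SensorEventListener.
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_GRAVITY) {
        float ax = event.values[0];
        float ay = event.values[1];
        float az = event.values[2];
        double roll  = Math.atan2(ay, az);
        double pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
        // ... feed roll/pitch into the rotation aligning phone and car axes
    }
}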
Usually, the yaw angle would be maintained via Sensor.TYPE_ROTATION_VECTOR or Sensor.TYPE_MAGNETIC_FIELD. However, the reason for not using them is that I am also going to use the application in electric vehicles. The high currents and voltages produced by the motor would presumably degrade the accuracy of those sensor values. Hence, the best alternative I know of (although not optimal) is using the GPS course to maintain the yaw angle.
2. Getting measurements
By applying all the aforementioned rotations, it should be possible to maintain alignment between the smartphone's and the vehicle's coordinate systems and hence give me the pure forward and lateral acceleration values on the x-axis and y-axis.
Questions:
Is this approach applicable or did I miss something crucial?
Is there an easier/alternative approach to this?
In regards to finding acceleration: if you have access to the GPS data, couldn't you find the forward motion by calculating distance/time between GPS fixes?
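For what it's worth, a minimal sketch of that idea using Android's LocationListener (getSpeed() returns m/s, getTime() returns milliseconds):

// Sketch: estimating forward acceleration from successive GPS fixes,
// inside an android.location.LocationListener.
private Location lastFix;

@Override
public void onLocationChanged(Location fix) {
    if (lastFix != null) {
        double dt = (fix.getTime() - lastFix.getTime()) / 1000.0; // seconds
        if (dt > 0) {
            // forward acceleration ~ change in speed over time (m/s^2)
            double accel = (fix.getSpeed() - lastFix.getSpeed()) / dt;
        }
    }
    lastFix = fix;
}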
If the goal is to characterize driving behavior and style, I would imagine gathering a large data set and then using a k-means clustering algorithm to sort the data, followed by an LSTM RNN to make predictions, could be another method. (Though this requires a large data set; I don't know if that is possible, nor am I aware of what factors you would want to incorporate into your data set.)
Sounds like an interesting problem though.
I need to create a heatmap for Android Google Maps. I have geolocated points with negative and positive weights attributed to them that I would like to represent visually. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two points are close together and one is positive and the other negative, their overlap destructively interferes, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, whose job is to create/render tiles for a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best approach I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-Android Google Map inside a WebView, to using a TileOverlay, to using a GroundOverlay. What I am now considering is having a large two-dimensional array of "squares". Each square would have a lon, lat, and total +/- weight. When a new data point is added, instead of rendering it exactly where it is, it will be added to the "square" it falls in. The data point's weight will be added to the square's total, and I will then use the GoogleMap Polygon object to render the square on the map. The ratio of + points to - points will determine the color rendered, with a ratio close to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
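A minimal sketch of that binning, assuming a hypothetical cell size in degrees and Android's Color constants (the clear/blue/red mapping follows the convention in the edit above):

// Sketch: binning weighted points into fixed lat/lng "squares".
import android.graphics.Color;
import java.util.HashMap;
import java.util.Map;

public class WeightGrid {
    static final double CELL = 0.01; // hypothetical square size in degrees
    private final Map<String, double[]> squares = new HashMap<>(); // {plusSum, minusSum}

    public void addPoint(double lat, double lng, double weight) {
        String key = (long) Math.floor(lat / CELL) + ":" + (long) Math.floor(lng / CELL);
        double[] sums = squares.computeIfAbsent(key, k -> new double[2]);
        if (weight >= 0) sums[0] += weight; else sums[1] -= weight; // store magnitudes
    }

    public int colorFor(String key) {
        double[] sums = squares.getOrDefault(key, new double[2]);
        if (Math.abs(sums[0] - sums[1]) < 1e-6) return Color.TRANSPARENT; // ~1:1 cancels out
        return sums[0] > sums[1] ? Color.BLUE : Color.RED; // cold vs hot, per the edit
    }
}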
I suggest trying your own idea: "going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel."
Even if it's slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all this is done on a background thread.
All this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel dump of the Bitmap. This last operation takes some time too and may well require more processing power than your whole algorithm.
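A minimal sketch of that pipeline, assuming the Maps TileProvider interface; colorForPixel() is a hypothetical helper standing in for your per-pixel weight evaluation:

import android.graphics.Bitmap;
import com.google.android.gms.maps.model.Tile;
import com.google.android.gms.maps.model.TileProvider;
import java.io.ByteArrayOutputStream;

// Sketch: per-pixel tile rendering followed by PNG compression.
public class HeatmapTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;

    @Override
    public Tile getTile(int x, int y, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        for (int px = 0; px < TILE_SIZE; px++) {
            for (int py = 0; py < TILE_SIZE; py++) {
                bitmap.setPixel(px, py, colorForPixel(x, y, zoom, px, py));
            }
        }
        // Bitmap -> byte[]: this compression step costs time as well.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return new Tile(TILE_SIZE, TILE_SIZE, out.toByteArray());
    }

    // Hypothetical: map a tile pixel to a color from the surrounding data points.
    private int colorForPixel(int tileX, int tileY, int zoom, int px, int py) {
        return 0; // transparent placeholder
    }
}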
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
If I create a simple application where I can fly over a plain, I can only see a little of the plain. The engine only renders within a certain radius around the camera; everything beyond that appears in the background colour. So it feels like being in a fog where my range of sight is only a couple of meters.
How do I increase that range of sight?
javax.media.j3d.View.setFrontClipDistance(double distance)
More data found here:
http://download.java.net/media/java3d/javadoc/1.3.2/javax/media/j3d/View.html
Sorry if this seems a bit late, but I want to clarify for future reference: the best answer is not exactly correct.
setFrontClipDistance is the distance at which something un-renders as you get close to it. By default this value is 0.1 (meters), as you do not want something to un-render when you are 10 meters from it, at least in most cases.
What is really being asked is how to increase the maximum render distance, and that is done with setBackClipDistance, which defaults to 10 (meters). If you set it to 1000, that increases the maximum render distance to 1000 scale meters.
The proper way to set this, assuming you are using a SimpleUniverse object, is to access the function on the View of the instanced object:
// Create a SimpleUniverse object using a 3D canvas object you have
SimpleUniverse simpleU = new SimpleUniverse(Your3dCanvasHere);
// Add in your compiled branch group
simpleU.addBranchGraph(YourBranchGroupHere);
// Increase the render distance with setBackClipDistance
simpleU.getViewer().getView().setBackClipDistance(1000);
If you are planning to develop something serious, you shouldn't stick with Java 3D. Try OpenGL instead. OpenGL comes with a function:
gluPerspective(fieldOfViewY, aspect, near, far);
The far parameter is what you are looking for. OpenGL is far more efficient than a CPU-based drawing engine because it uses the GPU.
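For example, a sketch with JOGL's GLU (a 45° field of view, with far matching the 1000-meter draw distance from the Java 3D example above):

import com.jogamp.opengl.glu.GLU; // JOGL 2; older JOGL used javax.media.opengl.glu.GLU

public class ProjectionSetup {
    // The last argument (far) is the maximum render distance.
    public static void apply(GLU glu, int width, int height) {
        glu.gluPerspective(45.0f, width / (float) height, 0.1f, 1000.0f);
    }
}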
I'm making a Minecraft-inspired game with Java LWJGL, which is already well into development. However, I am not quite sure what method I would use to pick/highlight the nearest block in the exact center of the player's view frustum.
I am already storing frustum and positional data, which I could use.
I had a vague idea about using raycasting, but based on what people have done with raycasting, it seems to be unrelated.
So which function or test would I use to determine this?
Raycasting will definitely work. You need to create a ray from your camera's position and orientation.
If your camera rotation matrix has no scale, the ray direction is the third column (the z-axis). Depending on your convention, the z-axis may point toward the screen or into the world.
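A minimal sketch of that test, sampling along the view direction; isSolid() is a hypothetical lookup into your block storage. (A proper voxel traversal such as Amanatides & Woo is more robust, but this version is easy to verify first.)

// Sketch: pick the nearest block along the view ray by sampling.
// camPos is the camera position; forward is the normalized view direction
// (e.g. the third column of the camera's rotation matrix, sign per convention).
// isSolid(x, y, z) is a hypothetical lookup into your chunk/block storage.
public int[] pickBlock(float[] camPos, float[] forward, float maxDist) {
    float step = 0.05f; // small enough not to skip past thin block corners
    for (float t = 0f; t < maxDist; t += step) {
        int bx = (int) Math.floor(camPos[0] + forward[0] * t);
        int by = (int) Math.floor(camPos[1] + forward[1] * t);
        int bz = (int) Math.floor(camPos[2] + forward[2] * t);
        if (isSolid(bx, by, bz)) {
            return new int[] { bx, by, bz }; // first solid block hit
        }
    }
    return null; // nothing within range
}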
So I'm currently working on some FPS game programming in OpenGL (JOGL, more specifically) just for fun, and I wanted to know what the recommended way to create an FPS-like camera would be.
At the moment I basically have a vector for the direction the player is facing, which is added to the current player position upon pressing the "w" (forward) key. The negative of that vector is of course used for the "s" (backward) key. For "a" (left) and "d" (right) I use the normal of the direction vector. (I am aware that this would let the player fly, but that is not a problem at the moment.)
Upon moving the mouse, the direction vector will be rotated using trigonometry and matrices. All vectors are, of course, normalized for easy speed control.
Is this the common and/or good way or is there an easier/better way?
The way I have always seen it done is using two angles, yaw and pitch. The two axes of mouse movement correspond to changes in these angles.
You can calculate the forward vector easily with a spherical-to-rectangular coordinate transformation. (pitch=latitude=φ, yaw=longitude=θ)
You can use a fixed up vector (say (0,0,1)) but this means you can't look directly upwards or downwards. (Most games solve this by allowing you to look no steeper than 89.999 degrees.)
The right vector is then the cross product of the forward and up vectors. It will always be parallel to the ground plane since the up vector is always perpendicular to the ground plane.
Left/right strafe keys then use the +/-right vector. For a forward vector parallel to the ground plane, you can take the cross product of the right and the up vectors.
As for the GL part, you can simply use gluLookAt() using the player's origin, the origin plus the forward vector and the up vector.
Oh and please, please add an "invert mouse" option.
Edit: Here's an alternative solution which gets rid of the 89.9° problem, asked in another question, which involves building the right vector first (with no pitch information) and then forward and up.
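A minimal sketch of the main scheme above, assuming JOGL's GLU and the fixed up vector (0,0,1); clamping pitch to ±89.9° is left out:

import com.jogamp.opengl.glu.GLU; // JOGL 2; older JOGL used javax.media.opengl.glu.GLU

// Sketch: FPS camera from yaw/pitch angles (radians), z-up convention.
public class FpsCamera {
    private final GLU glu = new GLU();
    public float yaw, pitch;   // updated from mouse deltas
    public float x, y, z;      // player position

    public void apply() {
        // Spherical-to-rectangular: pitch = latitude, yaw = longitude.
        float fx = (float) (Math.cos(pitch) * Math.cos(yaw));
        float fy = (float) (Math.cos(pitch) * Math.sin(yaw));
        float fz = (float) Math.sin(pitch);
        glu.gluLookAt(x, y, z,                // eye
                      x + fx, y + fy, z + fz, // eye + forward
                      0f, 0f, 1f);            // fixed up vector
    }
}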
Yes, that's essentially the way I have always seen it done.
Yeah, but in the end you will want to add various other attributes to the camera. To spell it out n00b-style: keep it tidy if you want to mimic Quake or CS. In the end you might have bobbing, FoV changes, motion filtering, network lag compensation and more.
Cameras are actually one of the more difficult parts to get right in a good game. That's why developers are usually content with a seriously dull, fixed first/third-person camera.
You could use quaternions for your camera rotation. Although I have not tried it myself, they are useful for avoiding gimbal lock.