I'm attempting an exercise where a vehicle is following the Lemniscate of Bernoulli (or more simply, a figure-8 track). I want to use glTranslatef and glRotatef to achieve this. So far, I have been able to successfully get the vehicle to follow/translate along this path by using the parametric form as follows:
X = (width * cos(t)) / (1+sin^2(t))
Y = (width * cos(t) * sin(t)) / (1+sin^2(t))
where t is in [-pi, pi].
In the code, this is as follows:
carX = (float) (Math.cos(t) / (1 + Math.sin(t)*Math.sin(t)));                 // X from the parametric form (width taken as 1)
carY = 0.0f;                                                                   // the track lies in the XZ plane
carZ = (float) ((Math.cos(t) * Math.sin(t)) / (1 + Math.sin(t)*Math.sin(t))); // the parametric Y, mapped to Z
gl.glTranslatef(carX, carY, carZ);
So that works well enough. My problem now is rotating the vehicle so that it faces along the path defined by the Lemniscate of Bernoulli. I want to achieve this by using glRotatef to rotate around the Y axis, but I am not sure how to find the angle to pass to glRotatef. The rotate call is already in place so that it only affects the vehicle; it just needs the correct mathematics to follow the path.
Things I have tried:
Using the derivative of the X and Y forms listed above. I used them independently of each other, because I'm not sure how (or whether) they need to be combined to be used for the angle. With some minor manipulation they follow the straight areas near the origin, but break down around the curves.
Directly finding the tangent of the t value and converting to degrees. Erratic spinning resulted.
If anyone has any suggestions that may be better than the glRotatef method, that would be appreciated as well. I've seen that gluLookAt may be helpful, and I may attempt to find a solution using that.
(Note: I'm working in JOGL using Java and the FFP, but I'm comfortable with C/C++ code snippets.)
Assuming the camera view is the driver's view, gluLookAt is exactly what you need! Based on your carX, carY, carZ computations (assuming the math is good), you can store the previous values and use them:
//globals & imports:
import javax.vecmath.*;
Vector3f current = new Vector3f();
Vector3f prev = new Vector3f();
The computation is as follows:
//on drawing:
prev.set(current);
current.x = (float) (Math.cos(t) / (1 + Math.sin(t)*Math.sin(t)));
current.z = (float) ((Math.cos(t) * Math.sin(t)) / (1 + Math.sin(t)*Math.sin(t)));
// eye = the current position; center = a point just ahead along the direction of motion
glu.gluLookAt(current.x, 0f, current.z,
        current.x + (current.x - prev.x), 0f, current.z + (current.z - prev.z),
        0f, 1f, 0f);
I'll test it when I get back home to make sure it's working, but as far as I can tell, this should do the trick.
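If you would rather stay with glRotatef instead of gluLookAt, a rough sketch of the tangent-angle idea (untested against your scene, using the current/prev vectors from above) is to take the heading from the direction of motion with atan2 and rotate around the Y axis by that angle:
// Direction of motion between the previous and current positions (XZ plane).
float dx = current.x - prev.x;
float dz = current.z - prev.z;
// Heading around the Y axis, in degrees. atan2 handles all quadrants,
// which is where a plain tangent/arctan tends to produce erratic spinning.
float heading = (float) Math.toDegrees(Math.atan2(dx, dz));
gl.glTranslatef(current.x, 0f, current.z);
// Assumes the model faces +Z by default; otherwise the sign or a constant offset is needed.
gl.glRotatef(heading, 0f, 1f, 0f);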
Related
I've been looking for a while and cannot find a solution.
I am trying to find a point on a polyline that is a certain distance either toward the opposite point or away from the main point (I have pictures). If you are familiar with celestial navigation, this is called the intercept method. Here is the almost-working code:
double distance = ap.distanceToAsDouble(gp)*0.00053996;        // metres -> nautical miles
double convert = nauticalMiles;                                // how far along the line the point should be
double t = convert / distance;                                 // fraction of the ap -> gp distance
// interpolate linearly in latitude/longitude to get a point t of the way from ap to gp
double actlon = ((1 - t)*ap.getLongitude() + t*gp.getLongitude());
double actlat = ((1 - t)*ap.getLatitude() + t*gp.getLatitude());
bear = ap.bearingTo(new GeoPoint(actlat,actlon));              // bearing from ap toward that point
actual = ap.destinationPoint(nauticalMiles/0.00053996,bear);   // nautical miles -> metres
so "ap" is the point that the point I am trying to find will be (nauticalMiles) "away" or "toward" the "gp" and they are correct so "bear" is the angle kind of from ap to gp that almost works completely. "actual" is the point I am trying to find. here is an image of how this works out.
as you can see th actual point is toward the gp point from the ap point denoted with the yellow polyline. BUT!!!! if you look at the picture the yellow line is not exactly on the red line so I think that it is off a bit. so I tried to do
bear = ap.bearingTo(gp);
actual = ap.destinationPoint(nauticalMiles/0.00053996,bear);
instead of doing the whole calculation from the previous code sample, which should work, but I am getting this
which is totally off; it is not on the line. So for right now I am sticking with the first approach because it is relatively accurate, but that is why I am asking this question. What calculation could I do to make the yellow line lie exactly on the red line? I am trying to get the most accurate calculation I can. Thank you for your time.
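One thing that might explain the remaining offset: the interpolation above is linear in latitude/longitude, while the red polyline presumably follows the great circle between the points. A sketch of interpolating along the great circle instead (plain Java, spherical-Earth assumption, no osmdroid types; the method name is mine) would be:
// Point a fraction t of the way from (lat1,lon1) to (lat2,lon2) along the great circle.
// Angles are in degrees in and out; assumes the two points are not identical or antipodal.
static double[] intermediatePoint(double lat1Deg, double lon1Deg,
                                  double lat2Deg, double lon2Deg, double t) {
    double lat1 = Math.toRadians(lat1Deg), lon1 = Math.toRadians(lon1Deg);
    double lat2 = Math.toRadians(lat2Deg), lon2 = Math.toRadians(lon2Deg);
    // angular distance between the two points
    double d = Math.acos(Math.sin(lat1) * Math.sin(lat2)
            + Math.cos(lat1) * Math.cos(lat2) * Math.cos(lon2 - lon1));
    double a = Math.sin((1 - t) * d) / Math.sin(d);
    double b = Math.sin(t * d) / Math.sin(d);
    double x = a * Math.cos(lat1) * Math.cos(lon1) + b * Math.cos(lat2) * Math.cos(lon2);
    double y = a * Math.cos(lat1) * Math.sin(lon1) + b * Math.cos(lat2) * Math.sin(lon2);
    double z = a * Math.sin(lat1) + b * Math.sin(lat2);
    return new double[] {
        Math.toDegrees(Math.atan2(z, Math.sqrt(x * x + y * y))),  // latitude
        Math.toDegrees(Math.atan2(y, x))                          // longitude
    };
}
Feeding that point into bearingTo instead of the linearly interpolated one might keep the yellow segment on the great circle; it is also worth checking whether the red line on your map is itself a great circle or just a straight screen-space segment.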
I am reading an android tutorial for game development that explains how to convert in-game coordinates to actual pixels. Simple enough. This is done via function worldToScreen() as follows:
public Rect worldToScreen(float objectX, float objectY, float objectWidth, float objectHeight){
    // offset of the object from the viewport's world centre, converted to pixels
    // and applied around the screen centre
    int left = (int) (screenCentreX - ((currentViewportWorldCentre.x - objectX) * pixelsPerMetreX));
    int top = (int) (screenCentreY - ((currentViewportWorldCentre.y - objectY) * pixelsPerMetreY));
    // right and bottom are just left and top shifted by the object's size in pixels
    int right = (int) (left + (objectWidth * pixelsPerMetreX));
    int bottom = (int) (top + (objectHeight * pixelsPerMetreY));
    convertedRect.set(left, top, right, bottom);
    return convertedRect;
}
It seems to return a rectangle object containing the four points that a square object would occupy.
Why does it use a square?
Why is it subtracting for top/left and adding for bottom/right?
A thorough explanation will be much appreciated.
Answer to question 1
He's using a rectangle probably because it's a simple geometry object that is already implemented in Java and in most gaming libraries, including the one you are using (I can see he's using the Rect class).
A rectangle is also a common choice in 2D games when you want to implement simple collision detection, for example.
Answer to question 2
You ask why he's adding for bottom and right... but what I can see is that he's adding to top and left.
He's doing that because the y axis goes from top to bottom, and the x axis goes from left to right.
So to get the bottom point you have to add the y coordinate of the top point to the height of the rectangle.
Same for the right point, you have to add the x coordinate of the left point to the width of the rectangle.
In the hope that my Paint skills can be useful, I made a drawing that will probably help you understand:
To make the drawing and my answer even more clear:
top + height = bottom
left + width = right
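To put some made-up numbers on it: suppose screenCentreX = 540, screenCentreY = 360, pixelsPerMetreX = pixelsPerMetreY = 32 and currentViewportWorldCentre is (10, 10). An object at world position (12, 8) with a size of 1x1 metre would come out as:
// left   = 540 - ((10 - 12) * 32) = 540 + 64 = 604
// top    = 360 - ((10 -  8) * 32) = 360 - 64 = 296
// right  = 604 + 1 * 32 = 636
// bottom = 296 + 1 * 32 = 328
Rect r = worldToScreen(12f, 8f, 1f, 1f);   // -> Rect(604, 296, 636, 328)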
PS: "he" is the guy who made the tutorial that you're following.
So I'm programming a recursive program that is supposed to draw Koch's snowflake using OpenGL, and I've got the program basically working except one tiny issue. The deeper the recursion, the weirder 2 particular vertices get. Pictures at the bottom.
EDIT: I don't really care about the OpenGL aspect, I've got that part down. If you don't know OpenGL, all that glVertex does is draw a line between the two vertices specified in the two method calls. Pretend it's drawLine(v1, v2). Same difference.
I suspect that my method for finding points is to blame, but I can't find anything that looks incorrect.
I'm following the basically standard drawing method; here are the relevant code snippets
(V is for vertex: V1 is the bottom-left corner, V2 is the bottom-right corner, V3 is the top corner):
double dir = Math.PI;          // current drawing direction, in radians
recurse(V2,V1,n);              // bottom side
dir=Math.PI/3;
recurse(V1,V3,n);              // left side
dir= (5./3.)* Math.PI ;
recurse(V3,V2,n);              // right side
Recursive method:
public void recurse(Point2D v1, Point2D v2, int n){
    double newLength = v1.distance(v2)/3.;      // each sub-segment is a third of the current one
    if(n == 0){
        // base case: emit the segment v1 -> v2
        gl.glVertex2d(v1.getX(),v1.getY());
        gl.glVertex2d(v2.getX(),v2.getY());
    }else{
        // first third, the two sides of the outward bump, then the last third
        Point2D p1 = getPointViaRotation(v1, dir, newLength);
        recurse(v1,p1,n-1);
        dir+=(Math.PI/3.);                      // turn 60 degrees
        Point2D p2 = getPointViaRotation(p1,dir,newLength);
        recurse(p1,p2,n-1);
        dir-=(Math.PI*(2./3.));                 // turn back 120 degrees
        Point2D p3 = getPointViaRotation(p2, dir, newLength);
        recurse(p2,p3,n-1);
        dir+=(Math.PI/3.);                      // back to the original direction
        recurse(p3,v2,n-1);
    }
}
I really suspect my math is the problem, but this looks correct to me:
public static Point2D getPointViaRotation(Point2D p1, double rotation, double length){
    // walk a distance of length from p1 in the direction given by rotation (radians)
    double xLength = length * Math.cos(rotation);
    double yLength = length * Math.sin(rotation);
    return new Point2D.Double(xLength + p1.getX(), yLength + p1.getY());
}
N = 0 (All is well):
N = 1 (Perhaps a little bendy, maybe)
N = 5 (WAT)
I can't see any obvious problem code-wise. I do however have a theory about what happens.
It seems like all points in the graph are based on the locations of the points that came before them. As such, any rounding errors that occur during this process accumulate, eventually sending the result haywire and way off.
What I would do for starters is calculate the start and end points of each segment before recursing, so as to limit the impact of the rounding errors of the inner calls.
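A rough sketch of that idea (my own naming; this would replace the chained getPointViaRotation calls inside the else branch): derive the inner points from v1 and v2 directly, so each level starts from exact thirds of the segment rather than from previously rotated points.
// p1 and p3 are exactly one and two thirds of the way from v1 to v2.
double dx = (v2.getX() - v1.getX()) / 3.0;
double dy = (v2.getY() - v1.getY()) / 3.0;
Point2D p1 = new Point2D.Double(v1.getX() + dx, v1.getY() + dy);
Point2D p3 = new Point2D.Double(v1.getX() + 2 * dx, v1.getY() + 2 * dy);
// p2 is the tip of the bump: p1 plus the segment direction rotated by 60 degrees.
// Whether +60 or -60 points the bump outward depends on the winding order of the triangle.
double cos60 = Math.cos(Math.PI / 3), sin60 = Math.sin(Math.PI / 3);
Point2D p2 = new Point2D.Double(
        p1.getX() + dx * cos60 - dy * sin60,
        p1.getY() + dx * sin60 + dy * cos60);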
One thing about Koch's snowflake is that the algorithm will run into rounding issues at some point (it is recursive and all the rounding errors add up). The trick is to keep it going as long as possible. There are three things you can do:
1. If you want to get more detail, the only way is to work around the limits of Double. You will need to use your own range of coordinates and transform them to screen coordinates every time you actually paint. Your own coordinates should zoom in and show the last recursion step (the last triangle) in a coordinate system of e.g. 100x100. Then calculate the three new triangles on top of that, transform them into screen coordinates and paint.
2. The line dir=Math.PI/3; mixes a double with the integer literal 3. Since Math.PI is a double the division is still done in double precision, but writing 3. (as in the other lines) keeps it consistent.
3. Make sure you use Point2D.Double everywhere. Your code should already do so, but I would explicitly write it everywhere.
You have won the game when you still have a nice snowflake but get a StackOverflowError.
So, it turns out I am the dumbest man alive.
Thanks everyone for trying, I appreciate the help.
This code is meant to handle an equilateral triangle; it's very specific about that (you can tell by the angles).
I put in a triangle with the height equal to the base (not equilateral). When I fixed the input triangle, everything worked great.
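For anyone who lands on the same thing, a minimal sketch of building an equilateral input triangle (side s, bottom edge on the x axis; the variable names just mirror the ones from the question):
double s = 1.0;                                                // side length
Point2D V1 = new Point2D.Double(0, 0);                         // bottom left
Point2D V2 = new Point2D.Double(s, 0);                         // bottom right
Point2D V3 = new Point2D.Double(s / 2, s * Math.sqrt(3) / 2);  // apex; the height is s*sqrt(3)/2, not s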
I am in the process of changing how I get my map images, from the Google Maps API to the MapQuest-hosted map tiles, which use OpenStreetMap data. I am switching from Google Maps because I hit the daily request limit, which I wasn't expecting, and I am not using the OpenStreetMap API because, although their data is free, their tiles have a usage limit and all I need is an image. Therefore, here I am, using the MapQuest-hosted map tiles.
I think I understand it, but there are some things I would like to be able to do and cannot find any documentation on. For example, I would like an image size of 500x300 if possible, or at least 512x512 (double the 256x256 that the tiles come out at). I would also like to be able to display a marker. Is this possible?
I used this code found here to convert my latitude and longitude data into x and y coordinates:
public class slippy {
    public static void main(String[] args) {
        int zoom = 9;
        double lat = 42.8549;
        double lon = -78.863;
        System.out.println("http://otile1.mqcdn.com/tiles/1.0.0/map/" + getTileNumber(lat, lon, zoom) + ".png");
    }

    public static String getTileNumber(final double lat, final double lon, final int zoom) {
        // standard slippy-map (Web Mercator) tile indices for a coordinate at a given zoom
        int xtile = (int) Math.floor( (lon + 180) / 360 * (1<<zoom) );
        int ytile = (int) Math.floor( (1 - Math.log(Math.tan(Math.toRadians(lat)) + 1 / Math.cos(Math.toRadians(lat))) / Math.PI) / 2 * (1<<zoom) );
        return ("" + zoom + "/" + xtile + "/" + ytile);
    }
}
I used this code to generate two links to a map of Buffalo, one with a zoom of 9 (here) and one with a zoom of 10 (here), and the center seems to differ. Is this a result of using open-source data, or is there an attribute I could use?
Of course the center differs. From one zoom level to the next, one tile is "split" into four tiles, so the center of the original tile ends up at the corner shared by the four new ones. Using the mentioned formula you will always get the tile which contains your coordinates, but due to the nature of tiles the coordinate won't necessarily be at the center of that tile. For each specific coordinate there is only one tile at a given zoom level containing it, so the coordinate can be anywhere on the tile, not necessarily at the center.
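If what you are after is where your coordinate sits inside the tile (for centering, or for drawing a marker on the downloaded image), a small sketch building on getTileNumber (256x256 tiles assumed) would be:
// Same math as getTileNumber, but keeping the fractional part.
double n = 1 << zoom;
double xtileF = (lon + 180) / 360 * n;
double ytileF = (1 - Math.log(Math.tan(Math.toRadians(lat))
        + 1 / Math.cos(Math.toRadians(lat))) / Math.PI) / 2 * n;
int xtile = (int) Math.floor(xtileF);
int ytile = (int) Math.floor(ytileF);
// Pixel position of the coordinate inside its 256x256 tile,
// e.g. where a marker would go on that tile image.
int px = (int) ((xtileF - xtile) * 256);
int py = (int) ((ytileF - ytile) * 256);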
Still, I'm not quite sure what you actually want to achieve. For displaying tiles (and markers) all you need to do is use Leaflet or OpenLayers (or any other library supporting the tile concept).
And keep in mind that MapQuest also has terms of use.
Edit:
An alternative would be to use a WMS service instead of a TMS; the WMS does the resizing and concatenation of the tiles for you. With a WMS you just have to define a bounding box around your center and an image size, and the resulting image will always be centered on your coordinates. The OSM wiki has a list of OSM WMS servers.
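For a rough idea of what such a request looks like (the host and layer name here are placeholders, not a real server; the parameter names are the standard WMS 1.1.1 ones):
// Hypothetical WMS endpoint; substitute a real host and LAYERS value from the OSM wiki list.
String url = "https://example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
        + "&LAYERS=osm&STYLES=&SRS=EPSG:4326&FORMAT=image/png"
        + "&WIDTH=500&HEIGHT=300"
        + "&BBOX=" + (lon - 0.05) + "," + (lat - 0.03) + "," + (lon + 0.05) + "," + (lat + 0.03);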
Don't forget to get informed about the usage policy of the WMS service you choose.
I need to be able to set the rotation of a matrix rather than add to it. I believe the only way to set the rotation is to know the current rotation of the matrix.
Note: matrix.setRotate() will not do, because the matrix needs to retain its information.
float[] v = new float[9];
matrix.getValues(v);
// translation is simple
float tx = v[Matrix.MTRANS_X];
float ty = v[Matrix.MTRANS_Y];
// calculate real scale
float scalex = v[Matrix.MSCALE_X];
float skewy = v[Matrix.MSKEW_Y];
float rScale = (float) Math.sqrt(scalex * scalex + skewy * skewy);
// calculate the degree of rotation
float rAngle = Math.round(Math.atan2(v[Matrix.MSKEW_X], v[Matrix.MSCALE_X]) * (180 / Math.PI));
from here
http://judepereira.com/blog/calculate-the-real-scale-factor-and-the-angle-of-rotation-from-an-android-matrix/
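Building on that, a sketch of how one could use that angle to set an absolute rotation (android.graphics.Matrix assumed; targetDegrees, pivotX and pivotY are hypothetical names, and the sign convention may need checking against your setup): rotate by the difference between the target and the current angle, so the translation and scale already in the matrix are kept.
float[] v = new float[9];
matrix.getValues(v);
// current rotation, recovered the same way as above
float current = (float) Math.toDegrees(Math.atan2(v[Matrix.MSKEW_X], v[Matrix.MSCALE_X]));
// rotate by the difference so the matrix ends up at the absolute angle
matrix.postRotate(targetDegrees - current, pivotX, pivotY);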
What you can do is call getValues and cache the values. Later when you want them back just call setValues on the matrix.
Update
The relation between the rotation matrix and the transform matrix is well explained here.
I'm afraid I can't help you with the integration (I've done this in Flash only), but it sounds like you should try to do your matrix calculations yourself, which is best done using "quaternions", which make composing rotations pretty trivial. See http://introcs.cs.princeton.edu/java/32class/Quaternion.java.html for an example Java implementation. You would want to create a second quaternion that describes your change in rotation and compose it with your current one (for quaternions, that composition is a multiplication rather than an addition).
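For what it's worth, a tiny sketch of composing rotations with the linked Quaternion class (the axis is fixed to Z here just to show the shape of it, and currentRotation stands for whatever quaternion currently represents your orientation):
// A rotation of angle theta about the z axis as a unit quaternion (w, x, y, z).
double theta = Math.toRadians(30);
Quaternion delta = new Quaternion(Math.cos(theta / 2), 0, 0, Math.sin(theta / 2));
// Composing rotations is multiplication, not addition.
Quaternion combined = delta.times(currentRotation);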