I am trying to map the sun's path in Android to make an app similar to the Apple Watch face called Solar.
I asked around for the best way to do this. The advice was to take the hour and divide it by 24 to get my x axis, then take the hour and divide it by the hour of midday (sunlight-wise) to get my y axis.
So that's what I did.
X axis:
public float getXPosition(Date dateTime) {
    float time = getTime(dateTime);
    float percentageX = time / 24.0f;
    float xPosition = percentageX * lineImage.getWidth();
    return xPosition;
}
Y axis:
public float getYPosition(Date dateTime) {
    float time = getTime(dateTime);
    float midDayTime = getTime(globalVariables.czc.getMidDay());
    float differenceInTime = 12.0f - time; // reverse order from 0-12 (top down) if > 1pm
    Log.d("hours: ", "" + time);
    Log.d("midDay", "" + midDayTime);
    if (time >= midDayTime) {
        time %= midDayTime;
        differenceInTime = time;
    }
    float percentageY = differenceInTime / midDayTime;
    float yPosition = percentageY * lineImage.getHeight();
    return yPosition;
}
Then I just mapped the coordinates as so:
public void moveSun() {
    Date dt = new Date();
    sunImage.setY(getYPosition(dt) - sunImage.getHeight() / 2.5f);
    sunImage.setX(getXPosition(dt) - sunImage.getWidth() / 2.5f);
}
This is for the sun.
I have other dots, representing other points in time in the sun's movement, that have been placed on the map as well.
As you can see, the sun is almost on the line, and some of the dots are on the line, but most are not.
Does anyone know why, or how I can fix this?
Or maybe there is a better way to implement the idea?
Is it because this is a wave and not a grid?
Thanks!
EDIT: This is what happened when I changed it to /midday:
if (time < midday) {
    percentageX = time / midday + 0.0f;
    xPosition = percentageX * lineImage.getWidth() + 0.0f;
} else {
    time -= midday;
    percentageX = time / 24.0f - midday;
    xPosition = percentageX * lineImage.getWidth() + 0.0f;
}
What is the proper way to implement framerate-independent oscillation? I am using the LibGDX library which provides delta time for the update loop.
Currently, my sine wave pattern works as expected as long as the FPS averages a healthy 60 (it may look slightly skewed due to gif-capture software lag):
http://i.stack.imgur.com/ZcKmX.gif
However, when the framerate drops, the pattern becomes skewed and acts rather strangely:
http://i.stack.imgur.com/tn82J.gif
Here is the sine wave method:
private void sineWaveDown(float speed, float amplitude, boolean mirror, float delta) {
    HitBox hitBox = getTransform().getHitBox();
    int mirrored = 1;
    if (mirror)
        mirrored = -1;
    Vector2 current = new Vector2(hitBox.x, hitBox.y);
    current.y -= speed * delta;
    current.x += MathUtils.sin(current.y * delta) * amplitude * mirrored;
    hitBox.setPosition(current);
}
In this example, speed is 60 and amplitude is 2.
HitBox is derived from com.badlogic.gdx.math.Circle, and is used to logically represent the circles you see in the images above.
Edit: Question has been answered. Here is my working code:
private void sineWaveDown(float delta) {
    HitBox hitBox = getTransform().getHitBox();
    int mirrored = 1;
    if (config.mirrored)
        mirrored = -1;
    Vector2 current = new Vector2(hitBox.x, hitBox.y);
    current.y -= config.speed * delta;
    elapsedTime = (elapsedTime + delta) % (1 / config.frequency);
    float sinePosition = mirrored * config.amplitude * MathUtils.sin(MathUtils.PI2 * config.frequency * elapsedTime + config.phase);
    current.x = config.spawnPosition.x + sinePosition;
    hitBox.setPosition(current);
}
I haven't found a good way to do this without using elapsed time. You can "clean" the elapsed value with a modulus to avoid losing precision after a long time elapses.
elapsed = (elapsed + deltaTime) % (1/FREQUENCY);
float sinePosition = amplitude * MathUtils.sin(MathUtils.PI2 * FREQUENCY * elapsed + PHASE);
I'm not sure what you're doing basing the sine of x off of what y is, but you can adapt the above.
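To make the elapsed-time approach concrete, here is a self-contained sketch (plain `Math.sin` replaces LibGDX's `MathUtils`, and the frequency/amplitude values are arbitrary). Stepping the same total time with different frame deltas lands on essentially the same sine position, which is the frame-rate independence the modulus trick preserves:

```java
public class SineOscillator {
    static final float FREQUENCY = 0.5f; // cycles per second (arbitrary demo value)
    static final float AMPLITUDE = 2f;
    static final float PHASE = 0f;

    private float elapsed = 0f;

    // advance by one frame's delta and return the oscillator's x-offset
    public float step(float delta) {
        // the modulus keeps 'elapsed' small so float precision doesn't degrade over time
        elapsed = (elapsed + delta) % (1f / FREQUENCY);
        return AMPLITUDE * (float) Math.sin(2.0 * Math.PI * FREQUENCY * elapsed + PHASE);
    }

    // step a fresh oscillator 'steps' times with a fixed delta
    public static float runSteps(int steps, float delta) {
        SineOscillator osc = new SineOscillator();
        float pos = 0f;
        for (int i = 0; i < steps; i++) {
            pos = osc.step(delta);
        }
        return pos;
    }

    public static void main(String[] args) {
        // one simulated second at 60fps vs 30fps: same position either way
        System.out.println(runSteps(60, 1f / 60f) + " vs " + runSteps(30, 1f / 30f));
    }
}
```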
I've read the article from here, but it seems I can't translate it to Java — by that I mean this:
double t = 0.0;
const double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previous;
State current;

while ( !quit )
{
    double newTime = time();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25;  // note: max frame time to avoid spiral of death
    currentTime = newTime;
    accumulator += frameTime;
    while ( accumulator >= dt )
    {
        previous = current;
        integrate( current, t, dt );
        t += dt;
        accumulator -= dt;
    }
    const double alpha = accumulator / dt;
    State state = current * alpha + previous * ( 1.0 - alpha );
    render( state );
}
What is the State class he is using? I've downloaded the code and I couldn't find its declaration. What would the code look like in Java?
State is more an abstract idea. He's just interpolating a number. For example, state could be the x position of an entity.
An example for you:
float renderX = x * alpha + oldX * (1 - alpha);
In my physics game, I passed the alpha value to all my entities during each render. They would use this during the render to calculate the best approximation of their position. I would suggest you do the same. Just have your render routines accept alpha, and have each object track its old state.
So every entity guesses where it really is at the time of rendering using its last position and its current position.
EDIT:
public class Entity {
    double oldX;
    double oldY;
    double x;
    double y;

    public void render(Graphics g, float alpha) {
        double estimatedX = x * alpha + oldX * (1 - alpha);
        double estimatedY = y * alpha + oldY * (1 - alpha);
        g.drawRect((int) estimatedX, (int) estimatedY, 1, 1);
    }
}
It's a simple structure containing the current position and velocity before each integration step. It's defined in the previous tutorial, and also near the beginning of Timestep.cpp in the code you can download from that page:
struct State
{
    float x; // position
    float v; // velocity
};
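To the "what would this look like in Java" part: below is a hedged sketch of the same accumulator loop with that two-field State. It uses a fixed synthetic frame time instead of a real clock so it runs deterministically, and `integrate` is plain constant-velocity motion standing in for the article's RK4 integrator:

```java
public class TimestepDemo {
    static class State {
        double x; // position
        double v; // velocity
        State(double x, double v) { this.x = x; this.v = v; }
    }

    // stand-in for the article's RK4 integrator: constant-velocity motion
    static void integrate(State s, double t, double dt) {
        s.x += s.v * dt;
    }

    // Run the accumulator loop for 'frames' frames of 'frameTime' seconds each
    // and return the interpolated render position after the last frame.
    public static double simulate(int frames, double frameTime) {
        final double dt = 0.01;
        double t = 0.0;
        double accumulator = 0.0;
        State current = new State(0.0, 1.0);  // moving at 1 unit/s
        State previous = new State(0.0, 1.0);
        double renderX = 0.0;
        for (int f = 0; f < frames; f++) {
            double ft = Math.min(frameTime, 0.25); // cap frame time: avoids the spiral of death
            accumulator += ft;
            while (accumulator >= dt) {
                previous.x = current.x;
                previous.v = current.v;
                integrate(current, t, dt);
                t += dt;
                accumulator -= dt;
            }
            double alpha = accumulator / dt;
            // blend the two most recent fixed steps for rendering
            renderX = current.x * alpha + previous.x * (1.0 - alpha);
        }
        return renderX;
    }

    public static void main(String[] args) {
        System.out.println(simulate(100, 0.016)); // ~1.59: total time minus one dt of render lag
    }
}
```

Note the rendered position trails the simulated time by one `dt` — that is the cost of interpolating between the two most recent states rather than extrapolating.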
So apparently calculating square roots is not very efficient, which leaves me wondering: what is the best way to find the distance (which I've called range below) between two circles?
Normally I would work out:
a^2 + b^2 = c^2
dy^2 + dx^2 = h^2
dy^2 + dx^2 = (r1 + r2 + range)^2
(dy^2 + dx^2)^0.5 = r1 + r2 + range
range = (dy^2 + dx^2)^0.5 - r1 - r2
Trying to avoid the square root works fine when you just look for the situation when "range" is 0 for collisions:
if ( (r1 + r2 + 0 )^2 > (dy^2 + dx^2) )
But if I'm trying to work out that range distance, I end up with some unwieldy equation like:
range(range + 2r1 + 2r2) = dy^2 + dx^2 - (r1^2 + r2^2 + 2r1r2)
which isn't going anywhere. At least I don't know how to solve it for range from here...
The obvious answer then is trigonometry: first find theta:
Tan(theta) = dy/dx
theta = Tan^-1(dy/dx)
Then find the hypotenuse:
Sin(theta) = dy/h
h = dy/Sin(theta)
Finally, work out the range:
range + r1 + r2 = dy/Sin(theta)
range = dy/Sin(theta) - r1 - r2
So that's what I've done and have got a method that looks like this:
private int findRangeToTarget(ShipEntity ship, CircularEntity target) {
    // get the relevant locations
    double shipX = ship.getX();
    double shipY = ship.getY();
    double targetX = target.getX();
    double targetY = target.getY();
    int shipRadius = ship.getRadius();
    int targetRadius = target.getRadius();
    // get the difference in locations:
    double dX = shipX - targetX;
    double dY = shipY - targetY;
    // find angle
    double theta = Math.atan(dY / dX);
    // find length of line from ship centre to target centre
    double hypotenuse = dY / Math.sin(theta);
    // finally, range between ship/target is:
    int range = (int) (hypotenuse - shipRadius - targetRadius);
    return range;
}
So my question is, is using tan and sin more efficient than finding a square root?
I might be able to refactor some of my code to get the theta value from another method (where I have to work it out anyway) — would that be worth doing?
Or is there another way altogether?
Please excuse me if I'm asking the obvious, or making any elementary mistakes, it's been a long time since I've used high school maths to do anything...
Any tips or advice welcome!
****EDIT****
Specifically, I'm trying to create a "scanner" device in a game that detects when enemies/obstacles are approaching or going away, etc. The scanner will relay this information via an audio tone or a graphical bar or something. Therefore, although I don't need exact numbers, ideally I would like to know:
target is closer/further than before
target A is closer/further than target B, C, D...
A (linear hopefully?) ratio that expresses how far a target is from the ship relative to 0 (collision) and max range (some constant)
some targets will be very large (planets?) so I need to take radius into account
I'm hopeful that there is some clever optimisation/approximation possible (e.g. dx + dy, or the longer of dx and dy?), but with all these requirements, maybe not...
Math.hypot is designed to give accurate calculations of the form sqrt(x^2 + y^2), without overflow in the intermediate squares. So this should be just
return Math.hypot(x1 - x2, y1 - y2) - r1 - r2;
I can't imagine any code that would be simpler than this, nor faster.
If you really need the accurate distance, then you can't really avoid the square root. Trigonometric functions are at least as bad as square root calculations, if not worse.
But if you need only approximate distances, or if you need only relative distances for various combinations of circles, then there are definitely things you can do. For example, if you need only relative distances, note that squared numbers have the same greater-than relationship as do their square roots. If you're only comparing different pairs, skip the square root step and you'll get the same answer.
If you only need approximate distances, then you might consider that h is roughly equal to the longer adjacent side. This approximation is never off by more than a factor of two. Or you could use lookup tables for the trigonometric functions -- which are more practical than lookup tables for arbitrary square roots.
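The "compare squared values" shortcut from the paragraph above looks like this in practice (a sketch; it ranks centre-to-centre distances only, since the radii are subtracted separately):

```java
public class RangeCompare {
    // squared centre-to-centre distance: enough to rank targets, no sqrt needed
    public static double distSq(double x1, double y1, double x2, double y2) {
        double dx = x1 - x2;
        double dy = y1 - y2;
        return dx * dx + dy * dy;
    }

    // true if target A's centre is closer to the ship than target B's centre
    public static boolean closer(double sx, double sy,
                                 double ax, double ay,
                                 double bx, double by) {
        return distSq(sx, sy, ax, ay) < distSq(sx, sy, bx, by);
    }

    public static void main(String[] args) {
        System.out.println(closer(0, 0, 3, 4, 6, 8)); // true: 25 < 100
    }
}
```

The caveat: once different radii are subtracted, the ordering of (distance − r1 − r2) is no longer guaranteed to match the ordering of squared centre distances, so this fits the "which centre is nearer" comparisons rather than the exact scanner ratio.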
I tried working out, firstly, whether the answers when we use tan and sine are the same as when we use the sqrt function.
public static void main(String[] args) throws Exception {
    double shipX = 5;
    double shipY = 5;
    double targetX = 1;
    double targetY = 1;
    int shipRadius = 2;
    int targetRadius = 1;
    // get the difference in locations:
    double dX = shipX - targetX;
    double dY = shipY - targetY;
    // find angle
    double theta = Math.toDegrees(Math.atan(dY / dX));
    // find length of line from ship centre to target centre
    // (note: theta is in degrees here, but Math.sin expects radians)
    double hypotenuse = dY / Math.sin(theta);
    System.out.println(hypotenuse);
    // finally, range between ship/target is:
    float range = (float) (hypotenuse - shipRadius - targetRadius);
    System.out.println(range);
    hypotenuse = Math.sqrt(Math.pow(dX, 2) + Math.pow(dY, 2));
    System.out.println(hypotenuse);
    range = (float) (hypotenuse - shipRadius - targetRadius);
    System.out.println(range);
}
The answer which I got was:
4.700885452542996
1.7008854
5.656854249492381
2.6568542
There is a difference between the values, and the sqrt ones are the correct ones: the trig version converts theta to degrees and then passes it to Math.sin, which expects radians, so its result is off.
Talking about the performance:
Consider your code snippet. I measured the elapsed time, which comes out as:
public static void main(String[] args) throws Exception {
    long lStartTime = new Date().getTime(); // start time
    double shipX = 555;
    double shipY = 555;
    double targetX = 11;
    double targetY = 11;
    int shipRadius = 26;
    int targetRadius = 3;
    // get the difference in locations:
    double dX = shipX - targetX;
    double dY = shipY - targetY;
    // find angle
    double theta = Math.toDegrees(Math.atan(dY / dX));
    // find length of line from ship centre to target centre
    double hypotenuse = dY / Math.sin(theta);
    System.out.println(hypotenuse);
    // finally, range between ship/target is:
    float range = (float) (hypotenuse - shipRadius - targetRadius);
    System.out.println(range);
    long lEndTime = new Date().getTime(); // end time
    long difference = lEndTime - lStartTime; // elapsed
    System.out.println("Elapsed milliseconds: " + difference);
}
Answer - 639.3204215458475,
610.32043,
Elapsed milliseconds: 2
And when we try the sqrt one:
public static void main(String[] args) throws Exception {
    long lStartTime = new Date().getTime(); // start time
    double shipX = 555;
    double shipY = 555;
    double targetX = 11;
    double targetY = 11;
    int shipRadius = 26;
    int targetRadius = 3;
    // get the difference in locations:
    double dX = shipX - targetX;
    double dY = shipY - targetY;
    // find angle
    double theta = Math.toDegrees(Math.atan(dY / dX));
    // find length of line from ship centre to target centre
    double hypotenuse = Math.sqrt(Math.pow(dX, 2) + Math.pow(dY, 2));
    System.out.println(hypotenuse);
    float range = (float) (hypotenuse - shipRadius - targetRadius);
    System.out.println(range);
    long lEndTime = new Date().getTime(); // end time
    long difference = lEndTime - lStartTime; // elapsed
    System.out.println("Elapsed milliseconds: " + difference);
}
Answer -
769.3321779309637,
740.33215,
Elapsed milliseconds: 1
Now if we check, the difference between the two answers is also huge.
Hence I would say that if you are making a game, the more accurate the data, the more fun it shall be for the user.
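A note on the benchmark above: the trig snippet converts theta to degrees and then passes it to Math.sin, which expects radians, so its printed distance is simply wrong rather than a different valid answer. Keeping theta in radians, the two formulas agree (a quick check):

```java
public class TrigVsSqrt {
    public static void main(String[] args) {
        double dX = 4, dY = 4;
        double theta = Math.atan(dY / dX);      // radians; no Math.toDegrees here
        double viaTrig = dY / Math.sin(theta);  // h = dy / sin(theta)
        double viaSqrt = Math.sqrt(dX * dX + dY * dY);
        System.out.println(viaTrig + " vs " + viaSqrt); // both ~5.65685
    }
}
```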
The problem usually brought up with sqrt in "hard" geometry software is not its performance, but the loss of precision that comes with it. In your case, sqrt fits the bill nicely.
If you find that sqrt really brings performance penalties - you know, optimize only when needed - you can try with a linear approximation.
f(x) ~ f(X0) + f'(x0) * (x - x0)
sqrt(x) ~ sqrt(x0) + 1/(2*sqrt(x0)) * (x - x0)
So, you compute a lookup table (LUT) for sqrt and, given x, use the nearest x0. Of course, that limits your possible range; outside it you should fall back to the regular computation. Now, some code.
class MyMath {
    private static final int LUT_SIZE = 101;
    private static double[] lut;

    static {
        lut = new double[LUT_SIZE];
        for (int i = 0; i < LUT_SIZE; i++) {
            lut[i] = Math.sqrt(i);
        }
    }

    public static double sqrt(final double x) {
        int i = (int) Math.round(x);
        if (i < 0)
            throw new ArithmeticException("Invalid argument for sqrt: x < 0");
        else if (i == 0 || i >= LUT_SIZE)
            return Math.sqrt(x); // lut[0] is 0, so the linear term would divide by zero
        else
            return lut[i] + 1.0 / (2 * lut[i]) * (x - i);
    }
}
(I didn't test this code, please forgive and correct any errors)
Also, after writing this all, probably there is already some approximate, efficient, alternative Math library out there. You should look for it, but only if you find that performance is really necessary.
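As a quick sanity check of the linearized-LUT idea, here is a condensed, runnable version of the same class (the `i <= 0` guard plays the role of the divide-by-zero fix, falling back to the library sqrt):

```java
public class LutSqrt {
    private static final int LUT_SIZE = 101;
    private static final double[] LUT = new double[LUT_SIZE];

    static {
        for (int i = 0; i < LUT_SIZE; i++) {
            LUT[i] = Math.sqrt(i);
        }
    }

    public static double sqrt(double x) {
        int i = (int) Math.round(x);
        if (i <= 0 || i >= LUT_SIZE) {
            return Math.sqrt(x); // outside the table (or at 0), fall back
        }
        // first-order expansion around the nearest tabulated integer
        return LUT[i] + 1.0 / (2 * LUT[i]) * (x - i);
    }

    public static void main(String[] args) {
        System.out.println(sqrt(10.3) + " vs " + Math.sqrt(10.3));
    }
}
```

The approximation error shrinks as x grows, since the curve flattens; it is worst near the small table entries.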
I have a Java project to do using Google Static Maps, and after hours and hours of work I can't get it working. I will explain everything, and I hope someone will be able to help me.
I am using a static map (480 x 480 pixels); the map's center is lat=47, lon=1.5 and the zoom level is 5.
Now what I need is to be able to get lat and lon when I click a pixel on this static map. After some searching, I found that I should use the Mercator projection (right?). I also found that each zoom level doubles the precision in both horizontal and vertical dimensions, but I can't find the right formula linking pixel, zoom level and lat/lon...
My problem is only about getting lat/lon from a pixel, knowing the center's coordinates, the pixel and the zoom level...
Thank you in advance!
Use the Mercator projection.
If you project into a space of [0, 256) by [0, 256):
LatLng(47, 1.5) is Point(129.06666666666666, 90.04191318303863)
At zoom level 5, these equate to pixel coordinates:
x = 129.06666666666666 * 2^5 = 4130
y = 90.04191318303863 * 2^5 = 2881
Therefore, the top left of your map is at:
x = 4130 - 480/2 = 4070
y = 2881 - 480/2 = 2641
Back in world coordinates, that top left is:
4070 / 2^5 = 127.1875
2641 / 2^5 = 82.53125
Finally:
Point(127.1875, 82.53125) is LatLng(53.72271667491848, -1.142578125)
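The steps above can be checked end-to-end with the standard web-Mercator formulas (a sketch; the class and method names are mine, and 256 is the base world size the answer projects into):

```java
public class WebMercator {
    static final double WORLD = 256.0; // base world size, as in the answer above

    // lat/lng in degrees -> world point in [0, 256) x [0, 256)
    public static double[] project(double lat, double lng) {
        double x = WORLD * (lng + 180.0) / 360.0;
        double siny = Math.sin(Math.toRadians(lat));
        double y = WORLD / 2
                 - 0.5 * Math.log((1 + siny) / (1 - siny)) * WORLD / (2 * Math.PI);
        return new double[] { x, y };
    }

    // world point -> lat/lng in degrees (inverse of the above)
    public static double[] unproject(double x, double y) {
        double lng = x * 360.0 / WORLD - 180.0;
        double latRad = 2 * Math.atan(Math.exp((WORLD / 2 - y) * 2 * Math.PI / WORLD))
                      - Math.PI / 2;
        return new double[] { Math.toDegrees(latRad), lng };
    }

    public static void main(String[] args) {
        double[] p = project(47, 1.5);
        System.out.println(p[0] + ", " + p[1]); // ~129.0667, ~90.0419
    }
}
```

To go from a clicked pixel to lat/lng: scale the projected map centre by 2^zoom, offset by the pixel's distance from the map centre, divide by 2^zoom again, and unproject.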
Google Maps uses tiles to efficiently divide the world into a grid of 256x256-pixel tiles. Basically the world is made of 4 tiles at the lowest zoom. When you start to zoom in you get 16 tiles, then 64 tiles, then 256 tiles — it's basically a quadtree. Because that structure only indexes the flattened grid, you also need a Mercator projection or a conversion to WGS 84 to get geographic coordinates. Here is a good resource: Convert long/lat to pixel x/y on a given picture. There is a function in Google Maps that converts from a lat/long pair to a pixel. Here is a link, but it says the tiles are 128x128 only: http://michal.guerquin.com/googlemaps.html.
Google Maps V3 - How to calculate the zoom level for a given bounds
http://www.physicsforums.com/showthread.php?t=455491
Based on the math in Chris Broadfoot's answer above and some other code on Stack Overflow for the Mercator Projection, I got this
public class MercatorProjection implements Projection {
private static final double DEFAULT_PROJECTION_WIDTH = 256;
private static final double DEFAULT_PROJECTION_HEIGHT = 256;
private double centerLatitude;
private double centerLongitude;
private int areaWidthPx;
private int areaHeightPx;
// the scale needed for the projection to fit the given area into a world view (1 = global; expect it to be > 1)
private double areaScale;
private double projectionWidth;
private double projectionHeight;
private double pixelsPerLonDegree;
private double pixelsPerLonRadian;
private double projectionCenterPx;
private double projectionCenterPy;
public MercatorProjection(
double centerLatitude,
double centerLongitude,
int areaWidthPx,
int areaHeightPx,
double areaScale
) {
this.centerLatitude = centerLatitude;
this.centerLongitude = centerLongitude;
this.areaWidthPx = areaWidthPx;
this.areaHeightPx = areaHeightPx;
this.areaScale = areaScale;
// TODO stretch the projection to match to deformity at the center lat/lon?
this.projectionWidth = DEFAULT_PROJECTION_WIDTH;
this.projectionHeight = DEFAULT_PROJECTION_HEIGHT;
this.pixelsPerLonDegree = this.projectionWidth / 360;
this.pixelsPerLonRadian = this.projectionWidth / (2 * Math.PI);
Point centerPoint = projectLocation(this.centerLatitude, this.centerLongitude);
this.projectionCenterPx = centerPoint.x * this.areaScale;
this.projectionCenterPy = centerPoint.y * this.areaScale;
}
@Override
public Location getLocation(int px, int py) {
double x = this.projectionCenterPx + (px - this.areaWidthPx / 2);
double y = this.projectionCenterPy + (py - this.areaHeightPx / 2);
return projectPx(x / this.areaScale, y / this.areaScale);
}
@Override
public Point getPoint(double latitude, double longitude) {
Point point = projectLocation(latitude, longitude);
double x = (point.x * this.areaScale - this.projectionCenterPx) + this.areaWidthPx / 2;
double y = (point.y * this.areaScale - this.projectionCenterPy) + this.areaHeightPx / 2;
return new Point(x, y);
}
// from https://stackoverflow.com/questions/12507274/how-to-get-bounds-of-a-google-static-map
Location projectPx(double px, double py) {
final double longitude = (px - this.projectionWidth/2) / this.pixelsPerLonDegree;
final double latitudeRadians = (py - this.projectionHeight/2) / -this.pixelsPerLonRadian;
final double latitude = rad2deg(2 * Math.atan(Math.exp(latitudeRadians)) - Math.PI / 2);
return new Location() {
@Override
public double getLatitude() {
return latitude;
}
@Override
public double getLongitude() {
return longitude;
}
};
}
Point projectLocation(double latitude, double longitude) {
double px = this.projectionWidth / 2 + longitude * this.pixelsPerLonDegree;
double siny = Math.sin(deg2rad(latitude));
double py = this.projectionHeight / 2 + 0.5 * Math.log((1 + siny) / (1 - siny) ) * -this.pixelsPerLonRadian;
Point result = new org.opencv.core.Point(px, py);
return result;
}
private double rad2deg(double rad) {
return (rad * 180) / Math.PI;
}
private double deg2rad(double deg) {
return (deg * Math.PI) / 180;
}
}
Here's a unit test for the original answer
public class MercatorProjectionTest {
@Test
public void testExample() {
// tests against values in https://stackoverflow.com/questions/10442066/getting-lon-lat-from-pixel-coords-in-google-static-map
double centerLatitude = 47;
double centerLongitude = 1.5;
int areaWidth = 480;
int areaHeight = 480;
// google (static) maps zoom level
int zoom = 5;
MercatorProjection projection = new MercatorProjection(
centerLatitude,
centerLongitude,
areaWidth,
areaHeight,
Math.pow(2, zoom)
);
Point centerPoint = projection.projectLocation(centerLatitude, centerLongitude);
Assert.assertEquals(129.06666666666666, centerPoint.x, 0.001);
Assert.assertEquals(90.04191318303863, centerPoint.y, 0.001);
Location topLeftByProjection = projection.projectPx(127.1875, 82.53125);
Assert.assertEquals(53.72271667491848, topLeftByProjection.getLatitude(), 0.001);
Assert.assertEquals(-1.142578125, topLeftByProjection.getLongitude(), 0.001);
// NOTE sample has some pretty serious rounding errors
Location topLeftByPixel = projection.getLocation(0, 0);
Assert.assertEquals(53.72271667491848, topLeftByPixel.getLatitude(), 0.05);
// the math for this is wrong in the sample (see comments)
Assert.assertEquals(-9, topLeftByPixel.getLongitude(), 0.05);
Point reverseTopLeftBase = projection.projectLocation(topLeftByPixel.getLatitude(), topLeftByPixel.getLongitude());
Assert.assertEquals(121.5625, reverseTopLeftBase.x, 0.1);
Assert.assertEquals(82.53125, reverseTopLeftBase.y, 0.1);
Point reverseTopLeft = projection.getPoint(topLeftByPixel.getLatitude(), topLeftByPixel.getLongitude());
Assert.assertEquals(0, reverseTopLeft.x, 0.001);
Assert.assertEquals(0, reverseTopLeft.y, 0.001);
Location bottomRightLocation = projection.getLocation(areaWidth, areaHeight);
Point bottomRight = projection.getPoint(bottomRightLocation.getLatitude(), bottomRightLocation.getLongitude());
Assert.assertEquals(areaWidth, bottomRight.x, 0.001);
Assert.assertEquals(areaHeight, bottomRight.y, 0.001);
}
}
If you're (say) working with aerial photography, I feel like the algorithm doesn't take into account the stretching effect of the Mercator projection, so it might lose accuracy if your region of interest isn't relatively close to the equator. I guess you could approximate it by multiplying your x coordinates by cos(latitude) at the center?
It seems worth mentioning that you can actually have the google maps API give you the latitudinal & longitudinal coordinates from pixel coordinates.
While it's a little convoluted in V3, here's an example of how to do it. (Note: this assumes you already have a map and the pixel vertices to be converted to lat/lng coordinates.)
let overlay = new google.maps.OverlayView();
overlay.draw = function() {};
overlay.onAdd = function() {};
overlay.onRemove = function() {};
overlay.setMap(map);
let latlngObj = overlay.fromContainerPixelToLatLng(new google.maps.Point(pixelVertex.x, pixelVertex.y));
overlay.setMap(null); //removes the overlay
Hope that helps someone.
UPDATE: I realized that I did this two ways, both still utilizing the same way of creating the overlay (so I won't duplicate that code).
let point = new google.maps.Point(628.4160703464878, 244.02779437950872);
console.log(point);
let overlayProj = overlay.getProjection();
console.log(overlayProj);
let latLngVar = overlayProj.fromContainerPixelToLatLng(point);
console.log('the latitude is: '+latLngVar.lat()+' the longitude is: '+latLngVar.lng());
I know the title is an eyebrow raiser but it's mostly a side effect problem. I'm writing an Android app that I can use with the math I've been learning in my physics class. It's a 2D bouncing ball app. I'm using the time corrected Verlet integrator with an impulse on the floor which is the bottom of the screen. I'm adding friction and bounciness so that the ball eventually reaches 0 velocity.
The problem shows up when the ball is resting on the floor and a "significant" time step change happens. The integrator adjusts the velocities flawlessly and ends up firing the impulse on the floor. The impulse fires when the velocity's abs value is greater than 2.5. Android's GC usually causes a time adjusted -18 velocity.
Any help is appreciated. I realize the code structure could be better but I'm just trying to visualize and apply physics for fun. Thank you.
// The loop
public void run() {
    if (mRenderables != null) {
        final long time = SystemClock.uptimeMillis();
        final long timeDelta = time - mLastTime;
        if (mLastTime != 0) {
            final float timeDeltaSeconds = timeDelta / 1000.0f;
            if (mLastTimeDeltaSec != 0) {
                for (short i = 0; i < mRendLength; i++) {
                    Ball b1 = mRenderables[i];
                    // Acceleration is gauged by screen's tilt angle
                    final float gravityX = -mSV.mSensorX * b1.MASS;
                    final float gravityY = -mSV.mSensorY * b1.MASS;
                    computeVerletMethod(b1, gravityX, gravityY, timeDeltaSeconds, mLastTimeDeltaSec);
                }
            }
            mLastTimeDeltaSec = timeDeltaSeconds;
        }
        mLastTime = time;
    }
}
/*
 * Time-Corrected Verlet Integration
 * xi+1 = xi + (xi - xi-1) * (dti / dti-1) + a * dti * dti
 */
public void computeVerletMethod(Renderable obj, float gravityX, float gravityY, float dt, float lDT) {
    mTmp.x = obj.pos.x;
    mTmp.y = obj.pos.y;
    obj.vel.x = obj.pos.x - obj.oldPos.x;
    obj.vel.y = obj.pos.y - obj.oldPos.y;
    // Log "1." here
    resolveScreenCollision(obj);
    obj.pos.x += obj.FRICTION * (dt / lDT) * obj.vel.x + gravityX * (dt * dt);
    obj.pos.y += obj.FRICTION * (dt / lDT) * obj.vel.y + gravityY * (dt * dt);
    obj.oldPos.x = mTmp.x;
    obj.oldPos.y = mTmp.y;
    // Log "2." here
}
// Screen edge detection and resolver
public void resolveScreenCollision(Renderable obj) {
    final short xmax = (short) (mSV.mViewWidth - obj.width);
    final short ymax = (short) (mSV.mViewHeight - obj.height);
    final float x = obj.pos.x;
    final float y = obj.pos.y;
    // Only testing bottom of screen for now
    if (y > ymax) {
        // ...
    } else if (y < 0.5f) {
        if (Math.abs(obj.vel.y) > 2.5f) {
            float imp = (obj.MASS * (obj.vel.y * obj.vel.y) / 2) * obj.RESTITUTION / obj.MASS;
            obj.vel.y += imp;
            // Log "bounce" here
        } else {
            obj.vel.y = obj.pos.y = obj.oldPos.y = mTmp.y = 0.0f;
        }
    }
}
Output while ball is resting on the floor and sudden impulse happens
(see code for "log" comments)
1. vel.y: -0.48258796
2. pos.y: -0.42748278 /oldpos.y: 0.0 /dt: 0.016 /ldt: 0.017
1. vel.y: -0.42748278
dalvikvm GC_FOR_MALLOC freed 8536 objects / 585272 bytes in 74ms
2. pos.y: -0.48258796 /oldpos.y: 0.0 /dt: 0.017 /ldt: 0.016
1. vel.y: -0.48258796
2. pos.y: -18.061148 /oldpos.y: 0.0 /dt: 0.104 /ldt: 0.017
1. vel.y: -18.061148
bounce imp: 124.35645
2. pos.y: 13.805508 /oldpos.y: -18.061148 /dt: 0.015 /ldt: 0.104
You shouldn't use a timestep based on how much time elapsed since the previous calculation, because that can cause problems such as this one, and errors in collision detection if you don't handle it. Instead, for every update you should set a "time chunk", or a max amount of time per update. For example, say you want 30fps; in nanoseconds that is about 33333333, so 33333333 = 1 time chunk. Then you can do a loop:
long difftime = System.nanoTime() - lastTime;
static long fpstn = 1000000000 / 30;
static int maxtimes = 10; // prevents the "spiral of death": if the calculations take longer
                          // than the time you give them, you have to increase the size of
                          // the base time chunk in your calculations
for (int i = 0; i < maxtimes; i++) {
    if (difftime >= fpstn) {
        world.updateVerlet(1);
    } else {
        world.updateVerlet((float) difftime / (float) fpstn);
    }
    difftime -= fpstn;
    if (difftime <= 0)
        break;
}
It's hard to be sure, but it looks as if the problem isn't the increase in the time step, it's the large time step itself. You do Verlet integration and bouncing as separate processes, so if the ball starts from a resting position on the floor with a large time step, it falls far into the floor, picking up speed, before being reflected into the air. Keep the time step small and you won't have this problem... much.
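The size of the damage scales with the step, which a one-line check of the position update makes visible: both the carried-velocity term (scaled by dt/lastDt) and the acceleration term (scaled by dt*dt) blow up when one frame is six times longer than the last. The friction, gravity and velocity numbers below are made-up stand-ins, not the app's actual constants; the 0.017 s to 0.104 s jump is taken from the log above:

```java
public class StepSizeDemo {
    // one time-corrected Verlet position update for the y axis
    public static float step(float pos, float vel, float friction,
                             float gravity, float dt, float lastDt) {
        return pos + friction * (dt / lastDt) * vel + gravity * (dt * dt);
    }

    public static void main(String[] args) {
        float vel = -0.48f, friction = 0.95f, gravity = -9.8f;
        // a normal frame vs the GC-stalled frame from the log (0.017s -> 0.104s)
        float small = step(0f, vel, friction, gravity, 0.017f, 0.017f);
        float large = step(0f, vel, friction, gravity, 0.104f, 0.017f);
        System.out.println(small + " vs " + large);
    }
}
```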