I have two layouts. One (A) has a ScaleGestureDetector active on its contents to handle pinch-zoom functionality, while the other (B) has dynamically added ImageViews, known as Markers. B sits perfectly on top of A with mirrored alignments and sizes.
When a user zooms in on A, I want the Markers on B to translate so that they stay fixed on the points they mark relative to A. For example, if a user places a Marker on a point of interest (POI) on A and zooms into a different point, the Marker should remain pinned on the POI.
Currently I'm running the code below on every ScaleGestureDetector event:
float[] values = new float[9];
trans.getValues(values);
float transx = values[Matrix.MTRANS_X];
float transy = values[Matrix.MTRANS_Y];
float scalex = values[Matrix.MSCALE_X];
float scaley = values[Matrix.MSCALE_Y];
float scale = (float) Math.sqrt(scalex * scalex + scaley * scaley);

// Subtract the previous values to avoid cumulative build-up.
focusX = transx - focusX;
focusY = transy - focusY;
transx = focusX;
transy = focusY;

float markerX = marker.getX();
float markerY = marker.getY();
if (markerX > startPoint.x) {
    marker.setTranslationX((-transx / scale) + (-transx % scale));
} else if (markerX < startPoint.x) {
    marker.setTranslationX((transx / scale) + (transx % scale));
}
if (markerY > startPoint.y) {
    marker.setTranslationY((-transy / scale) + (-transy % scale));
} else if (markerY < startPoint.y) {
    marker.setTranslationY((transy / scale) + (transy % scale));
}
Here trans is the Matrix used to perform postScale calls on A's contents; focusX and focusY merely prevent cumulative build-up of the translation values; and startPoint is the point of first contact, recorded in the MotionEvent.ACTION_DOWN case.
My problem is that the translation becomes progressively "unhinged" the further away from the Markers the user zooms. Larger zoom gestures also cause the Markers to drift from their designated points. To be clear: they move toward the zoom point and away from the POI.
I suspect the translation amount at the bottom of the code segment is missing something, likely something relative to the size of the layouts.
If I were you, I would take the following approach:
Find the marker's position at the start of the gesture.
Work out the math that identifies where the marker should be at any point during the gesture.
Use setX and setY to place it directly at that point.
That will avoid any floating-point uncertainties building up; a sketch of this approach follows.
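A minimal sketch of that approach, assuming trans is the Matrix from the question and (startX, startY) is the marker's position on A's contents captured before any zooming (both names are illustrative):

// Map the marker's original content coordinates through the current
// transform to find where that point is drawn now, then pin the marker there.
// (You may need to offset by half the marker's size to keep it centered.)
float[] point = new float[] { startX, startY };
trans.mapPoints(point);
marker.setX(point[0]);
marker.setY(point[1]);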
Is it possible to calculate the distance between two HitResults? Or, how can we calculate a real-world distance (e.g. in meters) using ARCore?
In Java ARCore, world units are meters (I just realized we might not document this... aaaand looks like nope. Oops, bug filed). By subtracting the translation components of two Poses you can get the distance between them. Your code would look something like this:
On first hit as hitResult:
startAnchor = session.addAnchor(hitResult.getHitPose());
On second hit as hitResult:
Pose startPose = startAnchor.getPose();
Pose endPose = hitResult.getHitPose();
// Clean up the anchor
session.removeAnchors(Collections.singleton(startAnchor));
startAnchor = null;
// Compute the difference vector between the two hit locations.
float dx = startPose.tx() - endPose.tx();
float dy = startPose.ty() - endPose.ty();
float dz = startPose.tz() - endPose.tz();
// Compute the straight-line distance.
float distanceMeters = (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
Assuming that these hit results don't happen on the same frame, creating an Anchor is important, because the virtual world can be reshaped every time you call Session.update(). By holding the location with an Anchor instead of just a Pose, its Pose will be updated to track the physical feature across those reshapings.
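A hedged sketch of what that means in practice, reusing session and startAnchor from the snippet above: because each update can reshape the world, re-read the anchor's pose every frame rather than caching a Pose.

// After each update, the anchor's pose follows the physical feature.
session.update();
Pose tracked = startAnchor.getPose();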
You can extract the two HitResult poses using getHitPose() and then compare their translation components (getTranslation()).
The translation is defined as:
...the position vector from the destination (usually world) coordinate frame to the local coordinate frame, expressed in destination (world) coordinates.
As for the physical unit, I could not find any remark. With a calibrated camera this should be mathematically possible, but I don't know if they actually provide an API for it.
The answer is: yes, you can definitely calculate the distance between two HitResults. The working units of ARCore, as well as ARKit, are meters; sometimes it's more useful to use centimetres. Here are a few ways to do it with Java and the good old Pythagorean theorem.
import com.google.ar.core.HitResult;

MotionEvent tap = queuedSingleTaps.poll();
if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
    for (HitResult hit : frame.hitTest(tap)) {
        // some logic...
    }
}
// Here's how you can calculate the distance
// between two anchors in 3D space using Java:
private double getDistanceMeters(Pose pose0, Pose pose1) {
    float distanceX = pose0.tx() - pose1.tx();
    float distanceY = pose0.ty() - pose1.ty();
    float distanceZ = pose0.tz() - pose1.tz();
    return Math.sqrt(distanceX * distanceX +
                     distanceY * distanceY +
                     distanceZ * distanceZ);
}
// Convert meters into centimetres, keeping one decimal place.
double distanceCm = ((int) (getDistanceMeters(pose0, pose1) * 1000)) / 10.0f;

// pose0 is the location of the first Anchor
// pose1 is the location of the second Anchor
Alternatively, you can use the following math:
Pose pose0 = firstAnchor.getPose();   // first pose
Pose pose1 = secondAnchor.getPose();  // second pose

double distanceM = Math.sqrt(Math.pow((pose0.tx() - pose1.tx()), 2) +
                             Math.pow((pose0.ty() - pose1.ty()), 2) +
                             Math.pow((pose0.tz() - pose1.tz()), 2));
double distanceCm = ((int) (distanceM * 1000)) / 10.0f;
I'm making a 2D game in libgdx and I would like to know the standard way of moving (translating) a sprite between two known points on the screen.
On a button press, I am trying to animate a diagonal movement of a sprite between two points. I know the x and y coordinates of the start and finish points, but I can't figure out the math that determines where the texture should be on each call to render. At the moment my algorithm is roughly:
textureProperty = new TextureProperty();
firstPtX = textureProperty.currentLocationX
firstPtY = textureProperty.currentLocationY
nextPtX = textureProperty.getNextLocationX()
nextPtY = textureProperty.getNextLocationY()
diffX = nextPtX - firstPtX
diffY = nextPtY - firstPtY
deltaX = diffX/speedFactor // Arbitrary, controls the speed of the translation
deltaY = diffY/speedFactor
renderLocX = textureProperty.renderLocX()
renderLocY = textureProperty.renderLocY()
if (textureProperty.getFirstPoint() != textureProperty.getNextPoint()) {
    animating = true
}
if (animating) {
    newLocationX = renderLocX + deltaX
    newLocationY = renderLocY + deltaY
    textureProperty.setRenderPoint(renderLocX, renderLocY)
}
if (textureProperty.getRenderPoint() == textureProperty.getNextPoint()) {
    animating = false
    textureProperty.setFirstPoint(textureProperty.getNextPoint())
}
batch.draw(texture, textureProperty.renderLocX(), textureProperty.renderLocY())
However, I can foresee a few issues with this code:
1) Since pixels are integers, dividing by something that doesn't divide evenly will round. 2) As a result of 1), the sprite will miss the target.
Also, when I test the animation, the objects moving from point1 miss by a long shot, which suggests something is wrong with my math.
Here is what I mean graphically (images omitted): the desired outcome is a straight diagonal path to the target; the actual outcome veers off and misses it.
Surely this is a standard problem. I welcome any suggestions.
Let's say you have start coordinates X1,Y1 and end coordinates X2,Y2, and some variable p which holds the percentage of the path passed so far. If p == 0 you are at X1,Y1; if p == 100 you are at X2,Y2; and if 0 < p < 100 you are somewhere in between. You can then calculate the current coordinates from p:
X = X1 + ((X2 - X1) * p) / 100;
Y = Y1 + ((Y2 - Y1) * p) / 100;
So you are not basing the current coordinates on the previous ones; you always calculate from the start point, the end point, and the percentage of the path passed. A sketch of how p might advance follows.
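For completeness, a hedged sketch of advancing p in a libgdx render loop; durationSeconds (the total time the movement should take) is an assumed variable, not something from the answer above:

// Advance p by frame delta time, clamping at 100.
p += Gdx.graphics.getDeltaTime() * 100f / durationSeconds;
if (p >= 100f) { p = 100f; animating = false; }
float X = X1 + ((X2 - X1) * p) / 100f;
float Y = Y1 + ((Y2 - Y1) * p) / 100f;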
First of all you need a Vector2 direction giving the direction between the two points.
This vector should be normalized, so that its length is 1:
Vector2 dir = new Vector2(x2-x1,y2-y1).nor();
Then in the render method you need to move the object, which means changing its position. You have the speed (given in distance/second), a normalized vector giving the direction, and the time since the last update.
So the new position can be calculated like this:
position.x += speed * delta * dir.x;
position.y += speed * delta * dir.y;
Now you only need to clamp the position to the target position so that you don't overshoot (note that this check, as written, assumes the target lies in the positive direction; a direction-independent variant is sketched below):
boolean stop = false;
if (position.x >= target.x) {
    position.x = target.x;
    stop = true;
}
if (position.y >= target.y) {
    position.y = target.y;
    stop = true;
}
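A hedged sketch of a direction-independent stop check, assuming position and target are Vector2 instances and start (an assumed variable, not from the answer) holds the initial position:

// Stop once the distance covered reaches the full start-to-target distance.
if (position.dst(start) >= target.dst(start)) {
    position.set(target);
    stop = true;
}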
Now to the pixel problem:
Do not use pixels! Using pixels makes your game resolution-dependent.
Use libgdx's Viewport and Camera instead.
This allows you to calculate everything in your own world units (for example meters), and libgdx will convert them for you. A minimal setup is sketched below.
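A minimal sketch of that setup, assuming a 16x9 world-unit viewport (the numbers are illustrative) and the batch from the question:

// Work in world units; FitViewport letterboxes to preserve the aspect ratio.
OrthographicCamera camera = new OrthographicCamera();
FitViewport viewport = new FitViewport(16, 9, camera);

// In resize(int width, int height):
viewport.update(width, height, true);

// Before drawing:
batch.setProjectionMatrix(camera.combined);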
I didn't see any big errors, though I noticed a couple of things. You are comparing two objects using == and !=; I suggest using a.equals(b) and !a.equals(b) instead. Secondly, your render coordinates are always set back to the same values in textureProperty.setRenderPoint(renderLocX, renderLocY); you probably meant to pass the newLocation coordinates.
By the way, thanks for your code; it was exactly what I was searching for <3
I'd like to both zoom and scroll a GoogleMap object at the same time. Unfortunately, if I just make two moveCamera calls, one after another, only the second call takes effect.
An alternative approach would be to pass a CameraPosition, but unfortunately the CameraPosition constructor does not take an argument for an amount to scroll (which is invariant to the zoom level), only an argument for what lat/lng to go to.
Is there some clever way to combine/concatenate CameraUpdate objects so I can just issue one moveCamera command that does both a pan and a zoom?
I assume something like this is possible, because you can do it when touching the map: put down two fingers, and you can zoom in/out by spreading them and pan by moving them both simultaneously.
It looks like your best option is to modify the CameraPosition returned by GoogleMap.getCameraPosition(). However, you can't just increment the latitude and longitude of the CameraPosition by the number of pixels scrolled: you have to get the CameraPosition's LatLng target, convert it to a Point using the Projection class, modify the Point, and then convert it back into a LatLng.
For example:
GoogleMap map;      // This is your GoogleMap object
int dx;             // X distance scrolled, in pixels
int dy;             // Y distance scrolled, in pixels
float dz;           // Change in zoom level
float originalZoom;

CameraPosition position = map.getCameraPosition();
Projection projection = map.getProjection();
LatLng mapTarget = position.target;
originalZoom = position.zoom;

// Convert the current target to screen coordinates, shift it, convert back.
Point mapPoint = projection.toScreenLocation(mapTarget);
mapPoint.x += dx;
mapPoint.y += dy;
mapTarget = projection.fromScreenLocation(mapPoint);

CameraPosition newPosition = new CameraPosition.Builder(position)
        .target(mapTarget)
        .zoom(originalZoom + dz)
        .build();
map.moveCamera(CameraUpdateFactory.newCameraPosition(newPosition));
There is the method newLatLngZoom(LatLng latLng, float zoom), which zooms to the point located at latLng. For your case of zooming to a point P that is off-center, it can be done with a bit of math.
Thinking about the problem in reverse: we know the screen coordinate of the map center C after the zoom, so we know the LatLng that needs to be passed to newLatLngZoom so that the map centers on it and P stays where it was (which is how we defined the problem: zoom on P). So we need to find the screen coordinate of C before the zoom.
Let's say the zoom value before the zoom is z1 and after is z1 + dz. The Google Maps API states that at zoom level N the width of the world is approximately 256*2^N dp, so the width of the world before is 256*2^z1 and after is 256*2^(z1+dz).
We also know that the ratio of the distance from P to C to the width of the whole world doesn't change across the zoom. The coordinate of C' after the zoom is just the center of your map view, which is known. Writing the coordinate of C before the zoom as (Cx, Cy), the equations are (Px - Cx) / (256*2^z1) = (Px - C'x) / (256*2^(z1+dz)) and (Py - Cy) / (256*2^z1) = (Py - C'y) / (256*2^(z1+dz)). The aspect ratio converting width to height is ignored because it cancels out when you solve the equations.
Solve the equations for Cx and Cy, convert that screen point back to a LatLng, and use it as the parameter to newLatLngZoom. If you also want panning and you know the dx, dy to pan, just calculate the corresponding dx', dy' in the map after the zoom and add them to C. A sketch of this follows.
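A hedged sketch of the math above, assuming map is a GoogleMap, (px, py) is the screen location of P, and dz is the zoom change (these names are illustrative):

Projection projection = map.getProjection();
CameraPosition cam = map.getCameraPosition();
Point center = projection.toScreenLocation(cam.target); // C', the view center
float scale = (float) Math.pow(2, dz);                   // world-width ratio

// From (Px - Cx) * 2^dz = (Px - C'x): solve for C, the pre-zoom center.
float cx = px - (px - center.x) / scale;
float cy = py - (py - center.y) / scale;

LatLng newCenter = projection.fromScreenLocation(new Point((int) cx, (int) cy));
map.moveCamera(CameraUpdateFactory.newLatLngZoom(newCenter, cam.zoom + dz));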
I've got a camera set up, and I can move with WASD and rotate the view with the mouse. Now comes the problem: I want to add physics to the camera/player so that it interacts with my other jBullet objects. How do I do that? I thought about creating a RigidBody for the camera and storing the position there, so that jBullet can apply its physics to the camera; then, when I need to change the position, I could simply change it on the RigidBody. But I didn't find any methods for editing the position.
Can you push me in the right direction or maybe give me an example source code?
I was asking the same question myself a few days ago. My solution was as Sierox said: create a RigidBody with a BoxShape and add it to the DynamicsWorld. To move the camera around, apply force to its rigid body. I have damping set to 0.999 linear and 1 angular to stop the camera when no force is applied, i.e. when the player stops pressing the button.
I also use body.setAngularFactor(0); so the box isn't tumbling all over the place. Also set the mass really low, so as not to interfere too much with other objects while still being able to jump on them, run into them, and otherwise be affected by them. A sketch of this setup follows.
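A hedged sketch of that setup; the shape size, mass, and startTransform are illustrative assumptions, not values from my project:

// Build a low-mass box body with heavy damping and no rotation.
CollisionShape shape = new BoxShape(new Vector3f(0.5f, 0.9f, 0.5f));
float mass = 1f;                           // keep the mass low
Vector3f inertia = new Vector3f(0, 0, 0);
shape.calculateLocalInertia(mass, inertia);
DefaultMotionState motionState = new DefaultMotionState(startTransform);
RigidBody body = new RigidBody(
        new RigidBodyConstructionInfo(mass, motionState, shape, inertia));
body.setDamping(0.999f, 1f);               // stop when no force is applied
body.setAngularFactor(0f);                 // don't tumble
dynamicsWorld.addRigidBody(body);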
Remember to convert your x, y, and z coordinates to a Cartesian plane so you move in the direction of the camera, i.e.:
protected void setCartesian() { // set xyz to a standard plane
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    float pd = (float) (Math.PI / 180);
    x = (float) (-Math.cos(xrot * pd) * Math.sin(yrot * pd));
    z = (float) (-Math.cos(xrot * pd) * Math.cos(yrot * pd));
    //y = (float) Math.sin(xrot * pd);
}

public void forward() { // move forward from position in direction of camera
    setCartesian();
    x += (Math.sin(yrotrad)) * spd;
    z -= (Math.cos(yrotrad)) * spd;
    //y -= (Math.sin(xrotrad)) * spd;
    body.applyForce(new Vector3f(x, 0, z), getThrow());
}

public Vector3f getThrow() { // get relative position of the camera
    float nx = x, ny = y, nz = z;
    float xrotrad, yrotrad;
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    nx += (Math.sin(yrotrad)) * 2;
    nz -= (Math.cos(yrotrad)) * 2;
    ny -= (Math.sin(xrotrad)) * 2;
    return new Vector3f(nx, ny, nz);
}
To jump, just use body.setLinearVelocity(new Vector3f(0, jumpHt, 0)); and set jumpHt to whatever velocity you wish.
I use getThrow to return a vector for other objects I may be "throwing" on screen or carrying. I hope I answered your question and didn't throw in too much non-essential information. I'll try to find the source that gave me this idea; I believe it was on the Bullet forums.
------- EDIT ------
Sorry to have left that part out. Once you have the rigid body functioning properly, you just have to get its coordinates and apply them to your camera. For example:
float[] pos = new float[3];
Transform t = new Transform();
t = body.getWorldTransform(t);
t.origin.get(pos);            // copy the body's world position into pos
x = pos[0];
y = pos[1];
z = pos[2];

gl.glRotatef(xrot, 1, 0, 0);  // rotate our camera on the x-axis (pitch, up and down)
gl.glRotatef(yrot, 0, 1, 0);  // rotate our camera on the y-axis (yaw, left and right)
gl.glTranslatef(-x, -y, -z);  // translate the scene to the position of our camera
In my case I'm using OpenGL for graphics. xrot and yrot represent the pitch and yaw of your camera. The code above gets the world transform; for the purposes of the camera you only need to pull out the x, y, and z coordinates and then apply the transform.
From here, to move the camera, you can set the linear velocity of the rigid body or apply a force to it.
Before you read this answer, I'd like to mention that I have a problem with the solution stated here. You can follow my question about that problem, so that you can have its solution too if you use this answer.
So, first you need to create a new BoxShape:
CollisionShape cameraHolder = new BoxShape(/* size of the cameraHolder */);
And add it to your world so that it interacts with all the other objects. Now change all the methods for camera movement (not rotation) so that they move your cameraHolder rather than your camera, and then set the position of your camera to the position of the cameraHolder, as sketched below.
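A minimal sketch of syncing the camera to the cameraHolder each frame; cameraHolderBody (the RigidBody built from the shape above) and the camera's setPosition method are assumed names, not a specific camera API:

// Read the holder's world position and place the camera there.
Transform t = cameraHolderBody.getWorldTransform(new Transform());
camera.setPosition(t.origin.x, t.origin.y, t.origin.z);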
Again, if you have a problem where you can't move properly, you can check my question and wait for an answer; you may also find a better way of doing this.
If you have problems or don't understand something about this answer, please say so in a comment.
Hi, I have a top-view image of my house. I want latitude/longitude info displayed when I click on the image.
I have the latitude/longitude value for the top-left corner of the image.
Also, how do I maintain latitude/longitude values while zooming in and out of the image?
Lat/lon is a geodesic coordinate system (WGS84), meaning curved coordinates going around the earth, while an image is flat, so typically you can't go directly between the two. However, an image of your house may cover such a small area that the calculation error is negligible (depending on what you need it for).
To do what you want, you need a "degrees per pixel" value, which means you need to know the lat/lon of both the top-left and bottom-right corners of your image. If you have those, it's simple. This assumes you're in the northern hemisphere:
var degreesPerPixelX = (bottomX - topX) / imageWidth;
var degreesPerPixelY = (bottomY - topY) / imageHeight;
And an event handler (the getEventOffsetFromImage... helpers are not shown):
function onClick (evt) {
    var x = getEventOffsetFromImageLeft(evt);
    var y = getEventOffsetFromImageTop(evt);
    var clickedLon = topX + x * degreesPerPixelX;
    var clickedLat = topY + y * degreesPerPixelY; // offsets are from the top-left corner
}
The zoom level affects the top-left and bottom-right lon/lat, so the calculations need to adjust accordingly.
When Google Maps converts between x/y and lon/lat, it internally always first converts the lon/lat to the Spherical Mercator coordinate system (EPSG:900913), does the operations in that system, and converts back. Spherical Mercator has fixed zoom levels, which is probably not right for you; nevertheless, this is a very worthwhile read:
http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/
N.B. degreesPerPixel is called resolution in Google terminology, and its unit is meters per pixel. The meter is the unit of Spherical Mercator; it roughly corresponds to a meter at the equator, but is far from a meter the further north/south you go.
How about this code snippet?
// Forward/inverse Spherical Mercator, offset by a base point (baselong, baselat).
function longToX(longitudeDegrees)
{
    var longitude = longitudeDegrees - baselong;
    longitude = degreesToRadians(longitude);
    return (radius * longitude);
}

function latToY(latitudeDegrees)
{
    var latitude = latitudeDegrees - baselat;
    latitude = degreesToRadians(latitude);
    var newy = radius / 2.0 *
        Math.log((1.0 + Math.sin(latitude)) /
                 (1.0 - Math.sin(latitude)));
    return newy;
}

function xToLong(xx)
{
    var longRadians = xx / radius;
    var longDegrees = radiansToDegrees(longRadians);
    var rotations = Math.floor((longDegrees + 180) / 360);
    var longitude = longDegrees - (rotations * 360);
    return longitude + baselong;
}

function yToLat(yo)
{
    var latitude = (Math.PI / 2) - (2 * Math.atan(Math.exp(-1.0 * yo / radius)));
    return radiansToDegrees(latitude) + baselat;
}