Object relative to the image while zooming in or out - java

I have a map that is being drawn on the screen inside a VisualizationView. I am able to zoom in and out on this map, depending on where I focus inside this view, using the touch functionality.
I also toggle between zooming and drawing on this map. When I draw an arrow on the map and switch back to zooming, I want the arrow I drew to stay at the same spot relative to the map I am zooming in and out on, meaning the x and y position of the arrow have to be adjusted.
I have all the variables needed; I just don't know how to solve this since it is math-related.
Can someone explain to me how I can solve this mathematically? I tried looking on the internet but could not find a good explanation. Also, the map resolution on the tablet is always 576 x 576.
public void onZoom(final double focusX, final double focusY,
        final double factor) {
    // pseudocode
    Triangle.x = ..
    Triangle.y = ..
    Triangle.repaint();
    // pseudocode
}
The code for the zooming, taken straight from the library, is as follows:
public void zoom(double focusX, double focusY, double factor) {
    synchronized (mutex) {
        Transform focus = Transform.translation(toMetricCoordinates((int) focusX, (int) focusY));
        double zoom = RosMath.clamp(getZoom() * factor, MINIMUM_ZOOM, MAXIMUM_ZOOM) / getZoom();
        transform = transform.multiply(focus).scale(zoom).multiply(focus.invert());
    }
}
I hope someone can explain the math behind solving this.

I'm not directly familiar with the library you're using, but to move the arrow (triangle) to its new location you should apply the same transform to it as you apply to the map.
Think of it this way: Before zooming, the location of the arrow (the center or head of the arrow, however you define "location") and the point on the map it's pointing to have the same coordinates. After zooming, the point on the map will have moved to a new location and you want the arrow to move to that same new location, thus you need to use the same transform. Like I said, I'm not familiar with the library you're using so I can't tell you exactly how to do that, but that's where you want to be.
Note, however, that you only want to transform one point of the arrow/triangle and draw the other points relative to it. If you transform all 3 you'll end up with the arrow getting larger and smaller as you zoom in and out (which I assume you don't want).
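To make the math concrete: scaling by factor around the focus point moves every map point p to focus + factor * (p - focus), so the arrow's anchor should be moved the same way. A minimal sketch of the onZoom pseudocode above, assuming Triangle.x and Triangle.y hold that anchor in the same screen coordinates as the focus point:
public void onZoom(final double focusX, final double focusY,
        final double factor) {
    // move the anchor exactly like the map point underneath it:
    // newPos = focus + factor * (oldPos - focus)
    Triangle.x = focusX + (Triangle.x - focusX) * factor;
    Triangle.y = focusY + (Triangle.y - focusY) * factor;
    Triangle.repaint();
}
Note that the library's zoom() clamps the zoom level, so the factor it actually applies can differ from the one passed in; the arrow should be moved with the same effective factor the map ends up using.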

Related

JavaFX: How can I rotate a 3D-box 90-degrees with the keyboard, and then save and load the rotation using JSON?

I have a 'javafx.scene.shape.Box' with a 'javafx.scene.shape.MeshView' shown at its coordinates.
I want to rotate this box 90 degrees using the keyboard (perhaps WASDQE: WS, QE and AD for the different axes). I also want to save the box rotation to JSON and then be able to set that rotation again.
What I've tried (the box- and meshview-properties are bound):
box.setRotate(box.getRotate() + angle); // angle being +-90 and reset on > 360 or < -360
box.setRotationAxis(Rotate.X_AXIS);     // axis depending on key pressed
This approach works for the current axis, and it works for saving and loading via JSON since I can set/get this value. However, it seems I cannot rotate around all the axes, because when I change the rotation axis it forgets the previous rotation and starts again from the 'default'/'starting' rotation.
I've also tried doing it like below:
this.setRotationAxis(this.getRotationAxis().normalize().add(Rotate.X_AXIS));
This approach results in the box rotating on multiple axes; in other words, it seems to remember previous axes, but it does not rotate around the axis I just added by the given angle. The box ends up tilted.
The last thing I tried was to add the rotation as a transform:
this.getTransforms().add(new Rotate(angle, Rotate.X_AXIS));
(
I also tried this with some code I found that used inverseDeltaTransform, where node is the box:
private void addRotate(Node node, Point3D rotationAxis, double angle) {
    ObservableList<Transform> transforms = node.getTransforms();
    try {
        for (Transform t : transforms) {
            rotationAxis = t.inverseDeltaTransform(rotationAxis);
        }
    } catch (NonInvertibleTransformException ex) {
        throw new IllegalStateException(ex);
    }
    transforms.add(new Rotate(angle, rotationAxis));
}
)
This approach, however, leaves me with two problems.
Problem 1:
If I render the box, it rotates just fine, but if I render the meshview, it seems to rotate around a pivot rather than rotating on the spot.
Problem 2:
If I add the rotation through transforms, how would I save the current rotation to JSON and then apply it again when I load it?
box.getRotationAxis() and box.getRotate() will now just show the default values, as if the box had never been rotated, and box.getTransforms() does not seem to make this easy.
Another question about the transforms:
The list of transforms builds up with every rotation; would I need to clear this list to avoid memory issues, and would clearing the list affect the current rotation?

How to zoom in and move a map of multiple squares?

I have 10x10 squares that form a map. The variables zoom, xPos and yPos define how far I am zoomed in on the map and the position of the camera.
Each tile has an x and y coordinate (0-9).
How can I display this map?
I've tried to do this:
rect(x*zoom + xPos, y*zoom + yPos, zoom, zoom); // rect() makes a rectangle with its center at the first two inputs
The problem is that I'm always zooming in on the upper-left corner.
I've also tried this:
rect((x - 5.5)*zoom + xPos, (y - 5.5)*zoom + yPos, zoom, zoom);
but this always zooms in on the center of the map, while I want it to zoom in on the center of the screen.
Please help me
I really suggest sitting down with some graph paper and a pencil. Draw out a bunch of example grids with their coordinates and sizes. Then draw out what they look like at different zoom levels until you notice a pattern. If you can't get that pattern to work, please post an MCVE and we'll go from there.
Also note that Processing has a scale() function that might come in handy. More info is available in the reference.
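For illustration, here is a minimal Processing-style sketch of one way to use translate() and scale() so that the zoom stays centred on the screen rather than on the map origin. The names tileSize, zoom, xPos and yPos are assumed to be defined elsewhere:
void draw() {
    background(255);
    // move the origin to the screen centre (plus any panning), then zoom around it
    translate(width / 2f + xPos, height / 2f + yPos);
    scale(zoom);
    for (int x = 0; x < 10; x++) {
        for (int y = 0; y < 10; y++) {
            // lay the 10x10 grid out around the origin so the zoom stays centred on screen
            rect((x - 5) * tileSize, (y - 5) * tileSize, tileSize, tileSize);
        }
    }
}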

How to scale using canvas

I am using this code to scale my canvas around a focus point:
private class MyScaleDetector
        extends ScaleGestureDetector.SimpleOnScaleGestureListener {

    @Override
    public boolean onScaleBegin(ScaleGestureDetector detector) {
        float focusx = detector.getFocusX();
        float focusy = detector.getFocusY();
        return super.onScaleBegin(detector);
    }

    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        float factor = detector.getScaleFactor();
        scale *= factor;
        scale = Math.max(0.4f, Math.min(scale, 20.0f));
        return true;
    }
}
and this is the code I use to scale inside the onDraw method:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.scale(scale, scale, focusx, focusy);
}
My problem is that when I start scaling around the focus point (200, 200), for example, at first it starts going there, but then as the scale increases it starts moving away from the point. So my problem is how to scale toward a specific point so that this point becomes the center of the viewport while scaling.
I think I should use canvas.translate() with it, but I don't know how much I should move the x and y position of the canvas while scaling.
Edit: the image below summarizes what I am trying to say.
[image]
There are several things to bear in mind here:
The ScaleDetector only detects scaling events; this sounds good but is not sufficient by itself to handle pinch/zoom in 2D (e.g. in a picture).
The 'focus' returned by the detector is the midpoint between the two pointers (fingers); this is less useful since the natural assumption when you start scaling is that the image pixel under your finger will remain under your finger as you zoom. People often move fingers at different speeds (or deliberately keep one finger stationary), which invalidates the use of the midpoint.
You (probably) want to handle rotation too.
The broad approach I have used in the past is:
manually detect the first and second touch events; record the coordinates of the two pointers when the second pointer's touch event occurs. These points will logically 'follow' the pointers from now on.
when a motion event occurs that continues to have two pointers down (i.e. a two-fingered rotation, zoom, or drag), determine the translation, scale and rotation as follows:
the scale is based on the distance between the new points compared to the original points (recorded at the start)
the rotation is based on the difference in the angle between the first two points and the angle between the most recent points
the translation is based arbitrarily on one of the points (usually the first) - just translate based on the pixels between the original point and the latest point
these values can be calculated and stored during the motion event, then used in the onDraw() method (see the sketch after these notes)
Note that in onDraw():
It's important to perform the translation first since you are working in screen pixels for the two reference points.
You can then rotate and scale in any order, since the scale is always equal in the X and Y directions.
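A rough sketch of that bookkeeping, using hypothetical names (downA/downB are the pointer positions recorded when the second finger lands, curA/curB the latest positions); it illustrates the calculations above rather than being a drop-in implementation:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.PointF;
import android.view.View;

public class TwoFingerView extends View {
    // filled in from the touch events: downA/downB when the second finger lands,
    // curA/curB on every subsequent move while both fingers stay down
    private PointF downA, downB, curA, curB;
    private float gestureScale = 1f, gestureRotationDeg = 0f, gestureTransX = 0f, gestureTransY = 0f;

    public TwoFingerView(Context context) {
        super(context);
    }

    private void updateGesture() {
        // scale: current finger distance relative to the original finger distance
        float downDist = (float) Math.hypot(downB.x - downA.x, downB.y - downA.y);
        float curDist = (float) Math.hypot(curB.x - curA.x, curB.y - curA.y);
        gestureScale = curDist / downDist;

        // rotation: change in the angle of the line joining the two fingers
        double downAngle = Math.atan2(downB.y - downA.y, downB.x - downA.x);
        double curAngle = Math.atan2(curB.y - curA.y, curB.x - curA.x);
        gestureRotationDeg = (float) Math.toDegrees(curAngle - downAngle);

        // translation: how far the first finger has moved, in screen pixels
        gestureTransX = curA.x - downA.x;
        gestureTransY = curA.y - downA.y;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (downA != null) {
            canvas.translate(gestureTransX, gestureTransY);             // translation first (screen pixels)
            canvas.rotate(gestureRotationDeg, downA.x, downA.y);        // then rotate and scale around the
            canvas.scale(gestureScale, gestureScale, downA.x, downA.y); // tracked point, in either order
        }
        // draw the actual content here
    }
}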
Edit to address more specific request:
If all you really need to do is simply scale around a specific predefined point p (px, py) by an amount s, and keep p visible, then you can do the following:
canvas.translate(px, py);
canvas.scale(s, s);
canvas.translate(-px, -py);
This should keep p in the same pixel-position it was in prior to scaling. This will suffer from all the problems alluded to above as well as the risks of drift due to successive scale events around a moving focus (based on your original code). You are far better off doing what I described above.

How to create a counter that updates via a mouse drag?

Let's say I have a circle: if the user drags the mouse clockwise along the path of the circle the counter increases, and if he drags it counter-clockwise it decreases. What is the best way to implement this in Java? I imagine trig will be needed so the program knows whether the user is dragging clockwise or counter-clockwise, right? I'm not looking for code examples, just help with the theory so I can begin with the right approach.
As you probably have access to the circle, get its center point coordinates.
Then get the coordinates of the mouse. After that, you compute the angle between the vector from the center to the mouse and the x-axis.
You do so by first treating the circle's center as the origin of an imaginary coordinate system and then shifting the mouse coordinates into this system. After that you apply atan2 to the shifted mouse coordinates. The result is the desired angle.
Point center = ...;
Point mouse = ...;
Point shiftedMouse = new Point(mouse.x - center.x, mouse.y - center.y);
double angle = Math.atan2(shiftedMouse.y, shiftedMouse.x);
At this point you probably need to convert the result of angle to degrees or something like that, if you like. You may take a look at Wikipedia#atan2 for this.
Of course you can also leave it in the format (-pi, pi] and work with that, if you know what it means.
Now you track how this angle changes. If it increases, the mouse is moving counter-clockwise; if it decreases, clockwise, and so on (or maybe the other way around, just try it). Take care of the boundary where 0° and then 1° come after 359°.
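As an illustration, here is a small sketch of that tracking, assuming a java.awt.Point for the coordinates and an onDrag handler of your own that is called for every mouse-drag event:
private double lastAngle; // angle at the previous drag event, in radians (set when the drag starts)
private double counter;   // accumulated value: one full turn adds or subtracts 2*pi

void onDrag(Point center, Point mouse) {
    double angle = Math.atan2(mouse.y - center.y, mouse.x - center.x);
    double delta = angle - lastAngle;

    // wrap the difference into (-pi, pi] so crossing the 359°/0° boundary
    // does not produce a huge jump
    if (delta > Math.PI) {
        delta -= 2 * Math.PI;
    } else if (delta <= -Math.PI) {
        delta += 2 * Math.PI;
    }

    // with screen coordinates the y-axis points down, so a positive delta is
    // clockwise on screen; flip the sign if it turns out reversed in practice
    counter += delta;
    lastAngle = angle;
}
You would convert the accumulated counter to degrees or full turns as needed.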

Rotation in OpenGL ES to place objects then rotate the world

I am developing an augmented reality application for Android and trying to use OpenGL to place cubes at locations in the world. My current method can be seen in the code below:
for (Marker ma : ARData.getMarkerlist().values()) {
    Log.d("populating", "");
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    Log.d("distance", String.valueOf(distance));
    Log.d("bearing", String.valueOf(bearing));
    gl.glRotatef(bearing, 0, 0, 1);
    gl.glTranslatef(0, 0, -distance);
    ma.cube.draw(gl);
    gl.glPopMatrix();
}
gl.glRotatef(y, 0, 1, 0);
gl.glRotatef(x, 1, 0, 0);
Where y is yaw and x is pitch. Currently I am getting a single cube on the screen, at a 45-degree angle, some way in the distance. It looks like I am getting sensible bearing and distance values. Could it have something to do with the phone's orientation? If you need more code, let me know.
EDIT: I updated the bearing rotation to gl.glRotatef(bearing, 0, 1, 0); I am now getting my cubes mapped horizontally along the screen at different depths. Still no movement using heading and pitch, but @Mirkules has identified some reasons why that might be.
EDIT 2: I am now attempting to place the cubes by rotating the matrix by the difference in angle between the heading and the bearing to a marker. However, all I get is a sort of jittering where the cubes appear to be rendered in a new position and then jump back to their old position. Code as above, except for the following:
float angleDiff = bearing - y;
gl.glRotatef((angleDiff),0,1,0);
gl.glTranslatef(0,0,-distance);
bearing and y are both normalised to a 0-360 scale. Also, I moved my "camera rotation" above the code where I place the markers.
EDIT 3: I have heading working now using float angleDiff = (bearing + y)/2;. However, I can't seem to get pitch working. I have attempted to use gl.glRotatef(-x, 1, 0, 0); but that doesn't seem to work.
It's tricky to tell exactly what you're trying to do here, but there are a few things that stick out as potential problems.
Firstly, your final two rotations don't seem to actually apply to anything. If these are supposed to represent a movement of the world or camera (which mostly amounts to much the same thing) then they need to happen before drawing anything.
Then your rotations themselves perhaps won't entirely do what you intend.
Your cube is rotated around the Z axis. The usual convention in GL is for the camera to look down the Z axis, with the Y axis being considered 'up'. You can naturally interpret axes however you like, but a rotation around 'Z' would not typically be 'bearing', but 'roll'. 'Bearing' to me would be analogous to 'yaw'.
As you translate along the Z axis, I assume you are trying to position the object by rotating and translating, but obviously if the rotation is around the same axis as you translate along, it won't actually alter the position of the cube - it will always just be directly in front of the camera, spinning on its axis.
I'm not really clear on why you're trying to position the cube like that when it seems like you start off with a more specific location. You could probably directly construct a more appropriate matrix.
Finally, your camera/world rotation is two concatenated rotations around Y and X. You call these yaw and pitch, but typically using Euler angles for a camera rotation does not result in an intuitive result where terms like yaw and pitch make complete sense. It is common to maintain an orientation and apply individual rotations to it in order to update it, rather than attempting to update several dependent rotations.
So yes, I would expect that this code, in the absence of other matrix operations, would likely result in drawing one or more cubes straight ahead which are simply rotated by some angle around the view direction.
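For illustration only, here is a rough sketch of that reordering using the question's own variables: the camera/world rotation is applied before the marker loop, and the bearing rotation happens around the Y ('up') axis before translating along Z. This is an assumption about what was intended, not tested code:
gl.glLoadIdentity();

// camera/world orientation first, so it affects everything drawn afterwards
gl.glRotatef(x, 1, 0, 0);   // pitch
gl.glRotatef(y, 0, 1, 0);   // yaw / heading

for (Marker ma : ARData.getMarkerlist().values()) {
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    gl.glRotatef(bearing, 0, 1, 0);   // rotate around the 'up' axis, not Z
    gl.glTranslatef(0, 0, -distance); // then push the cube out along the view direction
    ma.cube.draw(gl);
    gl.glPopMatrix();
}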
