LibGDX: boundsInFrustum and BoundingBox not working as expected - java

I am loading a box2d scene from a json file. This scene contains a fixture marking the bounding box that the camera is allowed to travel in. Using this mechanism works fine for the lower and left bounds, yet fails completely for the upper and right bounds, which is rather odd.
Here is the part that loads the bounding box from the file:
PolygonShape shape = ((PolygonShape) fixture.getShape());
Vector2 vertex = new Vector2();
float boundLeft = world.startX, boundRight = world.startX, boundUp = world.startY, boundLow = world.startY; // The location of the camera as initial value
for (int i = 0; i < shape.getVertexCount(); i++) { // Iterate over each vertex in the fixture and set the boundary values
shape.getVertex(i, vertex);
vertex.add(body.getPosition());
boundLeft = Math.min(vertex.x, boundLeft);
boundLow = Math.min(vertex.y, boundLow);
boundRight = Math.max(vertex.x, boundRight);
boundUp = Math.max(vertex.y, boundUp);
}
// Build the bounding boxes with enough thickness to prevent tunneling on fast pans
world.boundLeft = new BoundingBox(new Vector3(boundLeft - 5, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundLeft, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundRight = new BoundingBox(new Vector3(boundRight, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundUp = new BoundingBox(new Vector3(boundLeft - 5, boundUp, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundLow = new BoundingBox(new Vector3(boundLeft - 5, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundLow, 0).scl(RenderingSystem.PPM));
// world is a class containing some properties, including these BoundingBoxes
// RenderingSystem.PPM is the amount of pixels per metre, in this case 64
And the following part is called when the camera is panned around:
public void pan(float x, float y) {
Vector3 current = new Vector3(camera.position);
camera.translate(-x, y);
camera.update(true);
if (camera.frustum.boundsInFrustum(world.boundLeft) || camera.frustum.boundsInFrustum(world.boundRight)) {
camera.position.x = current.x; // Broke bounds on x axis, set camera back to old x
camera.update();
}
if (camera.frustum.boundsInFrustum(world.boundLow) || camera.frustum.boundsInFrustum(world.boundUp)) {
camera.position.y = current.y; // Broke bounds on y axis, set camera back to old y
camera.update();
}
game.batch.setProjectionMatrix(camera.combined);
}

Well, I figured it out. Guess what my world.startX and world.startY were defined as? That's right, they were in screen coordinates:
world.startX = start.getPosition().x * RenderingSystem.PPM;
world.startY = start.getPosition().y * RenderingSystem.PPM;
This caused Math.max to always pick world.startX and world.startY, as those values were absolutely massive compared to the vertex coordinates.
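For reference, a minimal sketch of the fix, assuming the start position is meant to stay in Box2D/world units like the fixture vertices (scaling by PPM is left to the rendering side):
world.startX = start.getPosition().x; // keep in world units, do not multiply by PPM here
world.startY = start.getPosition().y;
// Or, more robustly, seed the min/max search from the first vertex instead of the
// camera start position, so the initial values can never dominate the comparison:
shape.getVertex(0, vertex);
vertex.add(body.getPosition());
float boundLeft = vertex.x, boundRight = vertex.x, boundUp = vertex.y, boundLow = vertex.y;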

Related

In ARCore, how do I best place a triangle in my world near a Pose, that I can use for ray intersection?

I'm working with ARCore in Android Studio, using Java, and am trying to implement ray intersection with an object.
I started with Google's provided sample (as found here: https://developers.google.com/ar/develop/java/getting-started).
Upon touching the screen, a ray gets projected, and when this ray touches a Plane, a PlaneAttachment (with an Anchor/a Pose) is created at the intersection point.
I would then like to put a 3D triangle in the world, attached to this Pose.
At the moment I create my Triangle based on the Pose's translation, like this:
In HelloArActivity, during onDrawFrame(...)
//Code from sample, determining the hits on planes
MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon.
if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
mTouches.add(new PlaneAttachment(
((PlaneHitResult) hit).getPlane(),
mSession.addAnchor(hit.getHitPose())));
//creating a triangle in the world
Pose hitPose = hit.getHitPose();
float[] poseCoords = new float[3];
hitPose.getTranslation(poseCoords, 0);
mTriangle = new Triangle(poseCoords);
}
}
}
Note: I am aware that the triangle's coordinates should be updated every time the Pose's coordinates get updated. I left this out as it is not part of my issue.
Triangle class
public class Triangle {
public float[] v0;
public float[] v1;
public float[] v2;
//create triangle around a given coordinate
public Triangle(float[] poseCoords){
float x = poseCoords[0], y = poseCoords[1], z = poseCoords[2];
this.v0 = new float[]{x+0.0001f, y-0.0001f, z};
this.v1 = new float[]{x, y+ 0.0001f, z-0.0001f};
this.v2 = new float[]{x-0.0001f, y, z+ 0.0001f};
}
After this, upon tapping the screen again, I create a ray projected from the tapped (x, y) part of the screen, using Ian M's code sample provided in the answer to this question: how to check ray intersection with object in ARCore
Ray Creation, in HelloArActivity
/**
* Returns a world coordinate frame ray for a screen point. The ray is
* defined using a 6-element float array containing the head location
* followed by a normalized direction vector.
*/
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
float[] points = new float[12]; // {clip query, camera query, camera origin}
// Set up the clip-space coordinates of our query point
// +x is right:
points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
// +y is up (android UI Y is down):
points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
points[2] = 1.0f; // +z is forwards (remember clip, not camera)
points[3] = 1.0f; // w (homogenous coordinates)
float[] matrices = new float[32]; // {proj, inverse proj}
// If you'll be calling this several times per frame factor out
// the next two lines to run when Frame.isDisplayRotationChanged().
mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
Matrix.invertM(matrices, 16, matrices, 0);
// Transform clip-space point to camera-space.
Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
// points[4,5,6] is now a camera-space vector. Transform to world space to get a point
// along the ray.
float[] out = new float[6];
frame.getPose().transformPoint(points, 4, out, 3);
// use points[8,9,10] as a zero vector to get the ray head position in world space.
frame.getPose().transformPoint(points, 8, out, 0);
// normalize the direction vector:
float dx = out[3] - out[0];
float dy = out[4] - out[1];
float dz = out[5] - out[2];
float scale = 1.0f / (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
out[3] = dx * scale;
out[4] = dy * scale;
out[5] = dz * scale;
return out;
}
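For clarity, wiring the two pieces together might look roughly like this (CustomRay's exact constructor and fields are an assumption on my part, based on how it is used in intersectRayTriangle below):
float[] ray = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
// the first three floats are the ray origin (head position), the last three the normalized direction
CustomRay customRay = new CustomRay(
        new float[]{ray[0], ray[1], ray[2]},   // origin
        new float[]{ray[3], ray[4], ray[5]});  // direction
Point3f hitPoint = intersectRayTriangle(customRay, mTriangle);
if (hitPoint != null) { /* the tap ray crosses the triangle's plane */ }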
The result of this, however, is that no matter where I tap the screen, it always counts as a hit (regardless of how much distance I add between the points in Triangle's constructor).
I suspect this has to do with how a Pose is located in the world, and using the Pose's translation coordinates as a reference point for my triangle is not the way to go, so I'm looking for the correct way to do this, but any remarks regarding other parts of my method are welcome!
Also I have tested my method for ray-triangle intersection and I don't think it is the problem, but I'll include it here for completeness:
public Point3f intersectRayTriangle(CustomRay R, Triangle T) {
Point3f I = new Point3f();
Vector3f u, v, n;
Vector3f dir, w0, w;
float r, a, b;
u = new Vector3f(T.v1);
u.sub(new Point3f(T.v0));
v = new Vector3f(T.v2);
v.sub(new Point3f(T.v0));
n = new Vector3f(); // cross product
n.cross(u, v);
if (n.length() == 0) {
return null;
}
dir = new Vector3f(R.direction);
w0 = new Vector3f(R.origin);
w0.sub(new Point3f(T.v0));
a = -(new Vector3f(n).dot(w0));
b = new Vector3f(n).dot(dir);
if ((float)Math.abs(b) < SMALL_NUM) {
return null;
}
r = a / b;
if (r < 0.0) {
return null;
}
I = new Point3f(R.origin);
I.x += r * dir.x;
I.y += r * dir.y;
I.z += r * dir.z;
return I;
}
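One observation, for what it's worth: the method above returns the ray/plane intersection but never tests whether that point lies inside the triangle itself, which could explain why every tap registers as a hit. The standard continuation of this algorithm (a sketch only, reusing the same javax.vecmath types and the u, v, w variables already declared above) would be:
// After computing I, check that it lies inside the triangle using parametric coordinates.
float uu = u.dot(u);
float uv = u.dot(v);
float vv = v.dot(v);
w = new Vector3f(I);
w.sub(new Point3f(T.v0));
float wu = w.dot(u);
float wv = w.dot(v);
float D = uv * uv - uu * vv;
float s = (uv * wv - vv * wu) / D;
if (s < 0.0f || s > 1.0f) return null;        // I is outside T
float t = (uv * wu - uu * wv) / D;
if (t < 0.0f || (s + t) > 1.0f) return null;  // I is outside T
return I;                                     // I is inside T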
Thanks in advance!

How to use AffineTransform with very little coordinates?

I have a set of two-dimensional points. Their X and Y coordinates are greater than -2 and less than 2. Such a point could be: (-0.00012; 1.2334).
I would like to display these points on a graph, using rectangles (a rectangle represents a point and has its coordinates set to the point's; it also has a size of 10*10).
Rectangles like (...; Y) should be displayed above any rectangles like (...; Y-1) (the positive Y direction is up). Thus, I must set the graph's origin not at the top-left corner, but somewhere else.
I'm trying to use Graphics2D's AffineTransform to do that.
I get the minimal value for all the X coordinates
I get the minimal value for all the Y coordinates
I get the maximal value for all the X coordinates
I get the maximal value for all the Y coordinates
I get the distance xmax-xmin and ymax-ymin
Then, I wrote the code I give you below.
Screenshots
Some days ago, using my own scaling method, I had this graph:
(so, as I explained, the Y axis is inverted, and that's not a good thing)
For the moment, i.e. with the code I give you below, I have only one point, and it takes up the whole graph! Not good at all.
I would like to have:
(without lines and without the graph's axes; the important thing here is that the points are displayed correctly, according to their coordinates).
Code
To get min and max coordinates value:
x_min = Double.parseDouble((String) list_all_points.get(0).get(0));
x_max = Double.parseDouble((String) list_all_points.get(0).get(0));
y_min = Double.parseDouble((String) list_all_points.get(0).get(1));
y_max = Double.parseDouble((String) list_all_points.get(0).get(1));
for(StorableData s : list_all_points) {
if(Double.parseDouble((String) s.get(0)) < x_min) {
x_min = Double.parseDouble((String) s.get(0));
}
if(Double.parseDouble((String) s.get(0)) > x_max) {
x_max = Double.parseDouble((String) s.get(0));
}
if(Double.parseDouble((String) s.get(1)) < y_min) {
y_min = Double.parseDouble((String) s.get(1));
}
if(Double.parseDouble((String) s.get(1)) > y_max) {
y_max = Double.parseDouble((String) s.get(1));
}
}
To draw a point:
int x, y;
private void drawPoint(Cupple storable_data) {
//x = (int) (storable_data.getNumber(0) * scaling_coef + move_x);
//y = (int) (storable_data.getNumber(1) * scaling_coef + move_y);
x = storable_data.getNumber(0).intValue();
y = storable_data.getNumber(1).intValue();
graphics.fillRect(x, y, 10, 10);
graphics.drawString(storable_data.toString(), x - 5, y - 5);
}
To paint the graph:
@Override
public void paint(Graphics graphics) {
this.graphics = graphics;
Graphics2D graphics_2d = ((Graphics2D) this.graphics);
AffineTransform affine_transform = graphics_2d.getTransform();
affine_transform.scale(getWidth()/(x_max - x_min), getHeight()/(y_max - y_min));
affine_transform.translate(x_min, y_min);
graphics_2d.transform(affine_transform);
for(StorableData storable_data : list_all_points) {
graphics_2d.setColor(Color.WHITE);
this.drawPoint((Cupple) storable_data);
}
I suggest you map each data point to a point on the screen yourself, taking care of the following coordinate-system pitfalls. Take your list of points and create from it a list of points to draw. Take into account that:
The drawing is pixel-based, so you will want to scale your points (or you would have rectangles 1 to 4 pixels wide...).
You will need to translate all your points because negative values will be outside the boundaries of the component on which you draw.
The direction of the y axis is reversed in the drawing coordinates.
Once that is done, use the new list of points for the drawing and the initial one for calculations. Here is an example:
import java.awt.*;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class Graph extends JPanel {
private static int gridSize = 6;
private static int scale = 100;
private static int size = gridSize * scale;
private static int translate = size / 2;
private static int pointSize = 10;
List<Point> dataPoints, scaledPoints;
Graph() {
setBackground(Color.WHITE);
// points taken from your example
Point p1 = new Point(-1, -2);
Point p2 = new Point(-1, 0);
Point p3 = new Point(1, 0);
Point p4 = new Point(1, -2);
dataPoints = Arrays.asList(p1, p2, p3, p4);
scaledPoints = dataPoints.stream()
.map(p -> new Point(p.x * scale + translate, -p.y * scale + translate))
.collect(Collectors.toList());
}
@Override
public Dimension getPreferredSize() {
return new Dimension(size, size);
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
// draw a grid
for (int i = 0; i < gridSize; i++) {
g2d.drawLine(i * scale, 0, i * scale, size);
g2d.drawLine(0, i * scale, size, i * scale);
}
// draw the rectangle
g2d.setPaint(Color.RED);
g2d.drawPolygon(scaledPoints.stream().mapToInt(p -> p.x).toArray(),
scaledPoints.stream().mapToInt(p -> p.y).toArray(),
scaledPoints.size());
// draw the points
g2d.setPaint(Color.BLUE);
// origin
g2d.fillRect(translate, translate, pointSize, pointSize);
g2d.drawString("(0, 0)", translate, translate);
// data
for (int i = 0; i < dataPoints.size(); i++) {
Point sp = scaledPoints.get(i);
Point dp = dataPoints.get(i);
g2d.fillRect(sp.x, sp.y, pointSize, pointSize);
g2d.drawString("(" + dp.x + ", " + dp.y + ")", sp.x, sp.y);
}
}
public static void main(String[] args) {
JFrame frame = new JFrame();
frame.setContentPane(new Graph());
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
}
And another:
You might want to have the points aligned on the grid intersections and not below and to the right of them. I trust you will figure this one out.
Also, I ordered the points so that drawPolygon will paint the lines in the correct order. If your points are arbitrarily arranged, look for ways to find the outline. If you want lines between all points like in your example, iterate over all combinations of them with drawLine.
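For completeness, if you do want to stay with a Graphics2D transform rather than pre-mapping the points, a minimal sketch of the idea looks like the following (field names mirror the question; note that the transform also scales pen widths and the 10x10 rectangles, which is one reason mapping the points manually, as above, is often easier to control):
import java.awt.*;
import java.awt.geom.Rectangle2D;
import java.util.List;
import javax.swing.JPanel;
class TransformedGraph extends JPanel {
    double x_min, x_max, y_min, y_max;   // computed as in the question
    List<double[]> points;               // each entry is {x, y} in data units
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2d = (Graphics2D) g.create();
        g2d.translate(0, getHeight());                       // origin to the bottom-left corner
        g2d.scale(getWidth() / (x_max - x_min),
                  -getHeight() / (y_max - y_min));           // negative scale flips y upwards
        g2d.translate(-x_min, -y_min);                       // shift the data range into view
        g2d.setColor(Color.BLUE);
        for (double[] p : points) {
            // width/height are now in data units; divide by the scale factors
            // if the rectangles should stay 10x10 pixels on screen
            g2d.fill(new Rectangle2D.Double(p[0], p[1], 0.05, 0.05));
        }
        g2d.dispose();
    }
}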

WorldWind line of sight

I've found this example of how to render line of sight in WorldWind: http://patmurris.blogspot.com/2008/04/ray-casting-and-line-of-sight-for-wwj.html (it's a bit old, but it still seems to work). This is the class used in the example (slightly modified below to work with WorldWind 2.0). It looks like the code also uses RayCastingSupport (Javadoc and Code) to do its magic.
What I'm trying to figure out is whether this code/example uses the curvature of the earth and/or the distance to the horizon as part of its logic. Just looking at the code, I'm not sure I completely understand what it is doing.
For instance, if I were trying to figure out what terrain a person 200 meters above the earth could "see", would it take the distance to the horizon into account?
What would it take to modify the code to account for the distance to the horizon/curvature of the earth (if it doesn't already)?
package gov.nasa.worldwindx.examples;
import gov.nasa.worldwind.util.RayCastingSupport;
import gov.nasa.worldwind.view.orbit.OrbitView;
import gov.nasa.worldwind.geom.Angle;
import gov.nasa.worldwind.geom.Position;
import gov.nasa.worldwind.geom.Sector;
import gov.nasa.worldwind.geom.Vec4;
import gov.nasa.worldwind.globes.Globe;
import gov.nasa.worldwind.layers.CrosshairLayer;
import gov.nasa.worldwind.layers.RenderableLayer;
import gov.nasa.worldwind.render.*;
import javax.swing.*;
import javax.swing.border.CompoundBorder;
import javax.swing.border.TitledBorder;
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
public class LineOfSight extends ApplicationTemplate
{
public static class AppFrame extends ApplicationTemplate.AppFrame
{
private double samplingLength = 30; // Ray casting sample length
private int centerOffset = 100; // meters above ground for center
private int pointOffset = 10; // meters above ground for sampled points
private Vec4 light = new Vec4(1, 1, -1).normalize3(); // Light direction (from South-East)
private double ambiant = .4; // Minimum lighting (0 - 1)
private RenderableLayer renderableLayer;
private SurfaceImage surfaceImage;
private ScreenAnnotation screenAnnotation;
private JComboBox radiusCombo;
private JComboBox samplesCombo;
private JCheckBox shadingCheck;
private JButton computeButton;
public AppFrame()
{
super(true, true, false);
// Add USGS Topo Maps
// insertBeforePlacenames(getWwd(), new USGSTopographicMaps());
// Add our renderable layer for result display
this.renderableLayer = new RenderableLayer();
this.renderableLayer.setName("Line of sight");
this.renderableLayer.setPickEnabled(false);
insertBeforePlacenames(getWwd(), this.renderableLayer);
// Add crosshair layer
insertBeforePlacenames(getWwd(), new CrosshairLayer());
// Update layer panel
this.getLayerPanel().update(getWwd());
// Add control panel
this.getLayerPanel().add(makeControlPanel(), BorderLayout.SOUTH);
}
private JPanel makeControlPanel()
{
JPanel controlPanel = new JPanel(new GridLayout(0, 1, 0, 0));
controlPanel.setBorder(
new CompoundBorder(BorderFactory.createEmptyBorder(9, 9, 9, 9),
new TitledBorder("Line Of Sight")));
// Radius combo
JPanel radiusPanel = new JPanel(new GridLayout(0, 2, 0, 0));
radiusPanel.setBorder(BorderFactory.createEmptyBorder(6, 6, 6, 6));
radiusPanel.add(new JLabel("Max radius:"));
radiusCombo = new JComboBox(new String[] {"5km", "10km",
"20km", "30km", "50km", "100km", "200km"});
radiusCombo.setSelectedItem("10km");
radiusPanel.add(radiusCombo);
// Samples combo
JPanel samplesPanel = new JPanel(new GridLayout(0, 2, 0, 0));
samplesPanel.setBorder(BorderFactory.createEmptyBorder(6, 6, 6, 6));
samplesPanel.add(new JLabel("Samples:"));
samplesCombo = new JComboBox(new String[] {"128", "256", "512"});
samplesCombo.setSelectedItem("128");
samplesPanel.add(samplesCombo);
// Shading checkbox
JPanel shadingPanel = new JPanel(new GridLayout(0, 2, 0, 0));
shadingPanel.setBorder(BorderFactory.createEmptyBorder(6, 6, 6, 6));
shadingPanel.add(new JLabel("Light:"));
shadingCheck = new JCheckBox("Add shading");
shadingCheck.setSelected(false);
shadingPanel.add(shadingCheck);
// Compute button
JPanel buttonPanel = new JPanel(new GridLayout(0, 1, 0, 0));
buttonPanel.setBorder(BorderFactory.createEmptyBorder(6, 6, 6, 6));
computeButton = new JButton("Compute");
computeButton.addActionListener(new ActionListener()
{
public void actionPerformed(ActionEvent actionEvent)
{
update();
}
});
buttonPanel.add(computeButton);
// Help text
JPanel helpPanel = new JPanel(new GridLayout(0, 1, 0, 0));
buttonPanel.setBorder(BorderFactory.createEmptyBorder(6, 6, 6, 6));
helpPanel.add(new JLabel("Place view center on an elevated"));
helpPanel.add(new JLabel("location and click \"Compute\""));
// Panel assembly
controlPanel.add(radiusPanel);
controlPanel.add(samplesPanel);
controlPanel.add(shadingPanel);
controlPanel.add(buttonPanel);
controlPanel.add(helpPanel);
return controlPanel;
}
// Update line of sight computation
private void update()
{
new Thread(new Runnable() {
public void run()
{
computeLineOfSight();
}
}, "LOS thread").start();
}
private void computeLineOfSight()
{
computeButton.setEnabled(false);
computeButton.setText("Computing...");
try
{
Globe globe = getWwd().getModel().getGlobe();
OrbitView view = (OrbitView)getWwd().getView();
Position centerPosition = view.getCenterPosition();
// Compute sector
String radiusString = ((String)radiusCombo.getSelectedItem());
double radius = 1000 * Double.parseDouble(radiusString.substring(0, radiusString.length() - 2));
double deltaLatRadians = radius / globe.getEquatorialRadius();
double deltaLonRadians = deltaLatRadians / Math.cos(centerPosition.getLatitude().radians);
Sector sector = new Sector(centerPosition.getLatitude().subtractRadians(deltaLatRadians),
centerPosition.getLatitude().addRadians(deltaLatRadians),
centerPosition.getLongitude().subtractRadians(deltaLonRadians),
centerPosition.getLongitude().addRadians(deltaLonRadians));
// Compute center point
double centerElevation = globe.getElevation(centerPosition.getLatitude(),
centerPosition.getLongitude());
Vec4 center = globe.computePointFromPosition(
new Position(centerPosition, centerElevation + centerOffset));
// Compute image
float hueScaleFactor = .7f;
int samples = Integer.parseInt((String)samplesCombo.getSelectedItem());
BufferedImage image = new BufferedImage(samples, samples, BufferedImage.TYPE_4BYTE_ABGR);
double latStepRadians = sector.getDeltaLatRadians() / image.getHeight();
double lonStepRadians = sector.getDeltaLonRadians() / image.getWidth();
for (int x = 0; x < image.getWidth(); x++)
{
Angle lon = sector.getMinLongitude().addRadians(lonStepRadians * x + lonStepRadians / 2);
for (int y = 0; y < image.getHeight(); y++)
{
Angle lat = sector.getMaxLatitude().subtractRadians(latStepRadians * y + latStepRadians / 2);
double el = globe.getElevation(lat, lon);
// Test line of sight from point to center
Vec4 point = globe.computePointFromPosition(lat, lon, el + pointOffset);
double distance = point.distanceTo3(center);
if (distance <= radius)
{
if (RayCastingSupport.intersectSegmentWithTerrain(
globe, point, center, samplingLength, samplingLength) == null)
{
// Center visible from point: set pixel color and shade
float hue = (float)Math.min(distance / radius, 1) * hueScaleFactor;
float shade = shadingCheck.isSelected() ?
(float)computeShading(globe, lat, lon, light, ambiant) : 0f;
image.setRGB(x, y, Color.HSBtoRGB(hue, 1f, 1f - shade));
}
else if (shadingCheck.isSelected())
{
// Center not visible: apply shading nonetheless if selected
float shade = (float)computeShading(globe, lat, lon, light, ambiant);
image.setRGB(x, y, new Color(0f, 0f, 0f, shade).getRGB());
}
}
}
}
// Blur image
PatternFactory.blur(PatternFactory.blur(PatternFactory.blur(PatternFactory.blur(image))));
// Update surface image
if (this.surfaceImage != null)
this.renderableLayer.removeRenderable(this.surfaceImage);
this.surfaceImage = new SurfaceImage(image, sector);
this.surfaceImage.setOpacity(.5);
this.renderableLayer.addRenderable(this.surfaceImage);
// Compute distance scale image
BufferedImage scaleImage = new BufferedImage(64, 256, BufferedImage.TYPE_4BYTE_ABGR);
Graphics g2 = scaleImage.getGraphics();
int divisions = 10;
int labelStep = scaleImage.getHeight() / divisions;
for (int y = 0; y < scaleImage.getHeight(); y++)
{
int x1 = scaleImage.getWidth() / 5;
if (y % labelStep == 0 && y != 0)
{
double d = radius / divisions * y / labelStep / 1000;
String label = Double.toString(d) + "km";
g2.setColor(Color.BLACK);
g2.drawString(label, x1 + 6, y + 6);
g2.setColor(Color.WHITE);
g2.drawLine(x1, y, x1 + 4 , y);
g2.drawString(label, x1 + 5, y + 5);
}
float hue = (float)y / (scaleImage.getHeight() - 1) * hueScaleFactor;
g2.setColor(Color.getHSBColor(hue, 1f, 1f));
g2.drawLine(0, y, x1, y);
}
// Update distance scale screen annotation
if (this.screenAnnotation != null)
this.renderableLayer.removeRenderable(this.screenAnnotation);
this.screenAnnotation = new ScreenAnnotation("", new Point(20, 20));
this.screenAnnotation.getAttributes().setImageSource(scaleImage);
this.screenAnnotation.getAttributes().setSize(
new Dimension(scaleImage.getWidth(), scaleImage.getHeight()));
this.screenAnnotation.getAttributes().setAdjustWidthToText(Annotation.SIZE_FIXED);
this.screenAnnotation.getAttributes().setDrawOffset(new Point(scaleImage.getWidth() / 2, 0));
this.screenAnnotation.getAttributes().setBorderWidth(0);
this.screenAnnotation.getAttributes().setCornerRadius(0);
this.screenAnnotation.getAttributes().setBackgroundColor(new Color(0f, 0f, 0f, 0f));
this.renderableLayer.addRenderable(this.screenAnnotation);
// Redraw
this.getWwd().redraw();
}
finally
{
computeButton.setEnabled(true);
computeButton.setText("Compute");
}
}
/**
* Compute shadow intensity at a globe position.
* @param globe the <code>Globe</code>.
* @param lat the location latitude.
* @param lon the location longitude.
* @param light the light direction vector. Expected to be normalized.
* @param ambiant the minimum ambient light level (0..1).
* @return the shadow intensity for the location. No shadow = 0, totally obscured = 1.
*/
private static double computeShading(Globe globe, Angle lat, Angle lon, Vec4 light, double ambiant)
{
double thirtyMetersRadians = 30 / globe.getEquatorialRadius();
Vec4 p0 = globe.computePointFromPosition(lat, lon, 0);
Vec4 px = globe.computePointFromPosition(lat, Angle.fromRadians(lon.radians - thirtyMetersRadians), 0);
Vec4 py = globe.computePointFromPosition(Angle.fromRadians(lat.radians + thirtyMetersRadians), lon, 0);
double el0 = globe.getElevation(lat, lon);
double elx = globe.getElevation(lat, Angle.fromRadians(lon.radians - thirtyMetersRadians));
double ely = globe.getElevation(Angle.fromRadians(lat.radians + thirtyMetersRadians), lon);
Vec4 vx = new Vec4(p0.distanceTo3(px), 0, elx - el0).normalize3();
Vec4 vy = new Vec4(0, p0.distanceTo3(py), ely - el0).normalize3();
Vec4 normal = vx.cross3(vy).normalize3();
return 1d - Math.max(-light.dot3(normal), ambiant);
}
}
public static void main(String[] args)
{
ApplicationTemplate.start("World Wind Line Of Sight Calculation", AppFrame.class);
}
}
You are correct: this code does not take the curvature of the earth into account.
From what I could see, a ray trace is done for the center of the light, but the cone of the light is drawn onto an image (I am not sure about that, but it looks as if this example draws onto a grayscale image).
Anyway, this demo is about detecting a hit with the ground in order to stop the ray trace.
From what I understand, the algorithm stops after a distance set in the form (5 km, 10 km ... 200 km, etc.).
I don't understand the direction of the ray; checking a 200 km radius only makes sense if you are checking light coming from outer space.
If you want to take the horizon into account, you should check the pitch of the light source first. It is relevant for positive pitch values (above the horizon).
In that case you should decide when to stop once the center of the light gets very high above the ground. How high depends on whether you point your light towards a mountain slope or your terrain is relatively flat, and on whether the light source is a narrow beam or a wide one.
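To illustrate one way of taking the horizon into account: the geometric distance to the horizon for an observer at height h above a sphere of radius R is sqrt(2*R*h + h*h), so the sampling radius could simply be clamped to that value before the loop. A rough sketch, reusing the variables already present in computeLineOfSight (the clamping itself is my assumption, not part of the sample):
// Distance to the horizon for the elevated center point (spherical approximation).
double observerHeight = centerElevation + centerOffset;      // metres above the surface
double horizonDistance = Math.sqrt(2 * globe.getEquatorialRadius() * observerHeight
        + observerHeight * observerHeight);
// Never test points farther away than the horizon of the observation point.
double effectiveRadius = Math.min(radius, horizonDistance);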

Procedural terrain texture with lines and zones

I am currently making a program to procedurally generate 2D terrain maps, with different techniques such as Perlin noise, simplex, Voronoi, fractal noise, etc., on an image of a defined size, so that I can use it in my games that require a 2D terrain.
I've come across the "Modelling fake planets" section of http://paulbourke.net/fractals/noise and I need to do it on a 2D texture, not in a 3D world as it is explained there.
Now I'm trying to:
create a line from point 'X' to point 'Y';
have that line define a zone, with a boolean value deciding whether the area to the left or to the right of the line is made "darker";
do that for a number of iterations to create a texture;
use the RGB values of the final image to place features such as forests, lakes, etc.
It would work this way, each new zone overriding the previous ones with the method below:
http://img35.imageshack.us/img35/24/islf.png
I used my high school maths powers to create a code sample but it's not really working...
Questions:
How should I change it so that it works instead of just failing?
Is there a simpler way than what I am using?
Java file:
If you need an example of how I will proceed, here it is:
package Generator;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.util.Random;
import VectorialStuffs.Vector2;
public class Linear
{
public static BufferedImage generateImage(Dimension dim, int iterations)
{
BufferedImage image = new BufferedImage(dim.width, dim.height, BufferedImage.TYPE_INT_ARGB);
//point X and point Y
Vector2 pointX;
Vector2 pointY;
//difference between those
Vector2 diff;
Vector2 side;
double slope;
//random
Random rand = new Random();
boolean direction; //the orientation of the dark zone. (left/right)
for (int i = 0; i < iterations; ++i)
{
pointX = new Vector2(0, 0);
pointY = new Vector2(0, 0);
direction = rand.nextBoolean();
System.out.println(direction);
side = new Vector2(0, 0); //there are 4 sides of the image.
while (side.x == side.y)
{
side.x = rand.nextInt(4); // 0 - 1 - 2 - 3 (nextInt(3) would only ever return 0-2, leaving one side unused)
side.y = rand.nextInt(4);
}
switch(side.x) //not the x coord, the X point! ;D
{
//x = random and y = 0
case 0:
pointX.x = rand.nextInt(dim.width);
pointX.y = 0;
break;
//x = max and y = random
case 2:
pointX.x = dim.width;
pointX.y = rand.nextInt(dim.height);
break;
//x = random and y = max
case 1:
pointX.x = rand.nextInt(dim.width);
pointX.y = dim.height;
break;
//x = 0 and y = random
case 3:
pointX.x = 0;
pointX.y = rand.nextInt(dim.height);
break;
}
switch(side.y) //not the y coord, the Y point! ;D
{
//x = random and y = 0
case 0:
pointY.x = rand.nextInt(dim.width);
pointY.y = 0;
break;
//x = max and y = random
case 2:
pointY.x = dim.width;
pointY.y = rand.nextInt(dim.height);
break;
//x = random and y = max
case 1:
pointY.x = rand.nextInt(dim.width);
pointY.y = dim.height;
break;
//x = 0 and y = random
case 3:
pointY.x = 0;
pointY.y = rand.nextInt(dim.height);
break;
}
diff = new Vector2((pointY.x - pointX.x), (pointY.y - pointX.y));
slope = diff.y / diff.x;
Graphics graph = image.getGraphics();
if (direction) //true = right | false = left
{
int start; //the start x coordinate, on the line then increases until reaching the end of the image
int end = dim.width;
graph.setColor(Color.red);
graph.fillRect(pointX.x - 8, pointX.y -8, 16, 16);
graph.setColor(Color.yellow);
graph.fillRect(pointY.x - 8, pointY.y -8, 16, 16);
for (int times = 0; times < dim.height; ++times) //horizontal drawer
{
System.out.println(times);
start = (int)((times-diff.y)/slope + diff.y); //this is where it goes wrong?
for (int value = start; value < end; ++value)
{
graph.setColor(new Color(rand.nextInt(255), rand.nextInt(255), rand.nextInt(255), 100));
graph.fillRect(value, times, 1, 1);
}
}
graph.dispose();
}
else
{
int start; //the start x coordinate, on the line then increases until reaching the end of the image
int end = dim.width;
graph.setColor(Color.red);
graph.fillRect(pointX.x - 8, pointX.y -8, 16, 16);
graph.setColor(Color.yellow);
graph.fillRect(pointY.x - 8, pointY.y -8, 16, 16);
for (int times = 0; times < dim.height; ++times) //horizontal drawer
{
System.out.println(times);
start = (int)((times-diff.y)/slope);
for (int value = end; value < start; --value)
{
graph.setColor(new Color(rand.nextInt(255), rand.nextInt(255), rand.nextInt(255), 100));
graph.fillRect(value, times, 1, 1);
}
}
graph.dispose();
}
}
return image;
}
}
Note:
In this case Vector2 is just a class with X and Y fields that can be accessed directly (this is probably temporary).
Startup code, to save you time:
terrainImage = Linear.generateImage(size, 1); //size being a Dimension. -> "new Dimension(256, 256)"
if (terrainImage != null)
{
Icon wIcon = new ImageIcon(terrainImage);
JOptionPane.showMessageDialog(null, "message", "title", JOptionPane.OK_OPTION, wIcon);
}
Edit:
Here is the code that needs improvement:
if (direction) //true = right | false = left
{
int start; //the start x coordinate, on the line then increases until reaching the end of the image
int end = dim.width;
graph.setColor(Color.red);
graph.fillRect(pointX.x - 8, pointX.y -8, 16, 16);
graph.setColor(Color.yellow);
graph.fillRect(pointY.x - 8, pointY.y -8, 16, 16);
for (int times = 0; times < dim.height; ++times) //horizontal drawer
{
System.out.println(times);
start = (int)((times-diff.y)/slope + diff.y); //this is where it goes wrong?
for (int value = start; value < end; ++value)
{
graph.setColor(new Color(rand.nextInt(255), rand.nextInt(255), rand.nextInt(255), 100));
graph.fillRect(value, times, 1, 1);
}
}
graph.dispose();
}
else
{
int start; //the start x coordinate, on the line then increases until reaching the end of the image
int end = dim.width;
graph.setColor(Color.red);
graph.fillRect(pointX.x - 8, pointX.y -8, 16, 16);
graph.setColor(Color.yellow);
graph.fillRect(pointY.x - 8, pointY.y -8, 16, 16);
for (int times = 0; times < dim.height; ++times) //horizontal drawer
{
System.out.println(times);
start = (int)((times-diff.y)/slope);
for (int value = end; value < start; --value)
{
graph.setColor(new Color(rand.nextInt(255), rand.nextInt(255), rand.nextInt(255), 100));
graph.fillRect(value, times, 1, 1);
}
}
graph.dispose();
}
I can't get it to work like I showed in the picture above; all it does is either nothing, or something offset from the two points.
Also, it sometimes freezes for no reason, so I don't know what will happen if I run more iterations of this. :/
The pattern-generation part of your code should only take about three lines, including rotation and colour/pattern modulation, all as a function of the iteration counter i.
I will try to be clear:
You don't need a bar/line to generate your maps; you need any pattern on one or two axes that starts off at half the period of the map and then covers a smaller and smaller proportion of the map, or has a smaller and smaller period.
Pattern:
A line is round(x), or round(x+y), or round(sin(x+y+translate)+barwidth) <- a real bar in the middle, not just on one side.
You can do curvy and zigzag lines later, and 2D lines, using additions and multiplications of X and Y functions. That function is essentially just a single line whose X value you can change so that it rotates.
Rotation:
Instead of using X directly every time, which makes a vertical line, use sine and cosine functions to combine the X and Y values.
For example, a 30-degree rotation is: round(X * 0.866 + Y * 0.5).
Take the sine and cosine of a random value and it will give you random rotations of your pattern; the handy thing is that you just derive a random value from your loop iteration and feed it to the sine and cosine.
OK, I'll write this in pseudocode, it will be simpler:
var pattern = 0; // black canvas
for (var i = 1; i <= 100; i++)
{
pattern += round(sin(X*sin(pseudorand(i)) + Y*cos(pseudorand(i)) + translation) + roundshift) * strength;
}
The above loop will generate thousands of map patterns by adding bars of different rotations.
round = quantizes your sin(X, Y) function so it is just black and white / red and grey.
sin(X, Y) = a variable function to use as a pattern, quantized by round to 0/1 values; multiply and clamp that value in the same line so it doesn't exceed 1 or drop below 0.
roundshift = a value inside the round(sin) pattern that shifts the sine up or down inside the round, resulting in a smaller or larger black/white ratio for each iteration. It is a multiple of i, so it's a function of i and gets smaller every loop.
X*sin(rnd(i)), Y*cos(rnd(i)) = rotates your pattern; both rnd's must necessarily be the same number.
translation = when you add/subtract a number inside sin(x + translation), it moves the bar backwards/forwards.
In the end your pattern value will equal at most 100, so divide by 100 so it's 0-1 (or multiply by 2.56 for 256), and use a colour randomiser to make the RGB channels random multiples of your pattern value.
The above loop obviously needs to run once for every pixel (x, y).
I don't know how to write the pixels into the canvas array/texture in JS; it should be easy.
The above code will give you great patterns and visual feedback on your errors, so you should be able to refine it very nicely; the only thing I missed is clamping the sin (-1..1) + roundshift result to 0-1 values.
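To make that loop concrete in Java, here is a minimal sketch (the frequency, strength and grey-scale mapping are illustrative choices, not the only possible ones):
import java.awt.image.BufferedImage;
import java.util.Random;
public class PiledBars {
    public static BufferedImage generate(int width, int height, int iterations, long seed) {
        Random rnd = new Random(seed);
        double[][] pattern = new double[width][height];
        for (int i = 1; i <= iterations; i++) {
            double angle = rnd.nextDouble() * 2 * Math.PI;        // random rotation for this bar set
            double cos = Math.cos(angle), sin = Math.sin(angle);
            double translation = rnd.nextDouble() * 2 * Math.PI;  // shifts the bars across the map
            double frequency = 2 * Math.PI * i / width;           // smaller period every iteration
            double strength = 1.0 / iterations;                   // keeps the sum in the 0..1 range
            for (int x = 0; x < width; x++) {
                for (int y = 0; y < height; y++) {
                    // quantized, rotated sine: 0 on one side of each bar edge, 1 on the other
                    double v = Math.sin((x * cos + y * sin) * frequency + translation);
                    pattern[x][y] += Math.round((v + 1) / 2) * strength;
                }
            }
        }
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int grey = (int) Math.min(255, Math.round(pattern[x][y] * 255));
                image.setRGB(x, y, 0xFF000000 | (grey << 16) | (grey << 8) | grey);
            }
        }
        return image;
    }
}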
So a bar is round(sin(x, y) + translate), and instead of bars you can use many other functions of x and y (added and multiplied sines) to graph circles, squares, wiggles, ovals, rectangles, etc.
There is a website all about patterns of this type, using ordered angles and, say, 5-6 iterations, with dots, bars, triangles, etc.
Here is a page explaining the process of "pattern piling": overlaying many shapes in smaller and smaller iterations.
The only difference is that he uses ordered rotations to create symmetry, whereas you want random rotations to create chaotic maps.
See all the pictures of piled patterns in 2D; he has many examples on DeviantArt and his site, and I learnt a lot from this guy:
http://algorithmic-worlds.net/info/info.php?page=pilpat
Here is more of his work with smaller and smaller superimposed patterns in symmetric rotations:
https://www.google.com/search?q=Samuel+Monnier&espv=210&es_sm=93&source=lnms&tbm=isch&sa=X&ei=It0AU9uTCOn20gXXv4G4Cw&ved=0CAkQ_AUoAQ&biw=1365&bih=911
It is the same idea as this, but using random sine/cosine rotations.

Cube texturing in opengl3

Just doing my computer graphics assignment: put a texture (a 600x400 bitmap with different numbers) on a cube to form a proper die. I managed to do it using "classical" texture mapping: creating vertices and adding the corresponding texture coordinates to them:
int arrayindex = 0;
float xpos = 0.0f;
float xposEnd = 0.32f;
float ypos = 0.0f;
float yposEnd = 0.49f;
int count = 0;
void quad( int a, int b, int c, int d ) {
colors[arrayindex] = vertex_colors[a];
points[arrayindex] = vertices[a];
tex_coord[arrayindex] = new Point2(xpos, ypos);
arrayindex++;
colors[arrayindex] = vertex_colors[b];
points[arrayindex] = vertices[b];
tex_coord[arrayindex] = new Point2(xpos, yposEnd);
arrayindex++;
colors[arrayindex] = vertex_colors[c];
points[arrayindex] = vertices[c];
tex_coord[arrayindex] = new Point2(xposEnd, yposEnd);
arrayindex++;
colors[arrayindex] = vertex_colors[a];
points[arrayindex] = vertices[a];
tex_coord[arrayindex] = new Point2(xpos, ypos);
arrayindex++;
colors[arrayindex] = vertex_colors[c];
points[arrayindex] = vertices[c];
tex_coord[arrayindex] = new Point2(xposEnd, yposEnd);
arrayindex++;
colors[arrayindex] = vertex_colors[d];
points[arrayindex] = vertices[d];
tex_coord[arrayindex] = new Point2(xposEnd, ypos);
arrayindex++;
xpos = xpos + 0.34f;
xposEnd = xpos + 0.32f;
count++;
if (count == 3) {
xpos = 0.0f;
xposEnd = 0.33f;
ypos = 0.51f;
yposEnd = 1.0f;
}
}
void colorcube() {
quad( 1, 0, 3, 2 );
quad( 2, 3, 7, 6 );
quad( 3, 0, 4, 7 );
quad( 6, 5, 1, 2 );
quad( 5, 4, 0, 1 );
quad( 4, 5, 6, 7 );
pointsBuf = VectorMath.toBuffer(points);
colorsBuf = VectorMath.toBuffer(colors);
texcoord = VectorMath.toBuffer(tex_coord);
}
I pass all this stuff to the shaders and just put it together there.
But reviewing the slides, I noticed this method is supposedly "pre-OpenGL 3".
Is there any other method to do this?
In the lecture examples I noticed the texture coordinates being put together in the vertex shader, but that was just for a simple 2D plane, not a 3D cube:
tex_coords = vec2(vPosition.x+0.5,vPosition.z+0.5);
and later passed to the fragment shader to apply the texture.
But reviewing the slides, I noticed this method is supposedly "pre-OpenGL 3".
I think your slides refer to the old immediate mode. In immediate mode each vertex and its attributes are sent to OpenGL by calling functions that immediately draw them.
In your code, however, you're initializing a buffer with vertex data. This buffer may then be passed as a whole to OpenGL and drawn as a batch with only a single OpenGL call. I wrote "may" because there's not a single OpenGL call in your question.
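For illustration, uploading such a buffer once and drawing it with a single call could look roughly like this under JOGL (a sketch only; the GL3 context `gl`, the `shaderProgram` handle and the assumption of 4 floats per position are mine, not taken from the question):
// One-time setup: create a vertex array object and copy the positions into a buffer object.
int[] vao = new int[1], vbo = new int[1];
gl.glGenVertexArrays(1, vao, 0);
gl.glBindVertexArray(vao[0]);
gl.glGenBuffers(1, vbo, 0);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);
gl.glBufferData(GL.GL_ARRAY_BUFFER, pointsBuf.capacity() * 4L, pointsBuf, GL.GL_STATIC_DRAW);
// Describe the layout to the shader's "vPosition" attribute.
int posLoc = gl.glGetAttribLocation(shaderProgram, "vPosition");
gl.glEnableVertexAttribArray(posLoc);
gl.glVertexAttribPointer(posLoc, 4, GL.GL_FLOAT, false, 0, 0L);
// Per frame: a single call draws all 36 vertices of the cube.
gl.glDrawArrays(GL.GL_TRIANGLES, 0, 36);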
