Inverse Vector3 - Java

I'm trying to convert some C++ code into Java. As there is no default Vector3 datatype in Java, I have created my own custom Vector3 class. However, I saw that it is possible to write 1/VECTOR3 in C++. This had me somewhat confused, as a Vector3 has 3 values within it. Could someone explain what actually happens to each of the 3 values when 1/VECTOR3 is used?
The code I'm trying to convert is from here, at the bottom of the page:
https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection
class Ray
{
public:
    Ray(const Vec3f &orig, const Vec3f &dir) : orig(orig), dir(dir)
    {
        invdir = 1 / dir;
        sign[0] = (invdir.x < 0);
        sign[1] = (invdir.y < 0);
        sign[2] = (invdir.z < 0);
    }
    Vec3f orig, dir; // ray orig and dir
    Vec3f invdir;
    int sign[3];
};

First of all, there is no Vec3 in C++. If you read that page carefully, it's all their own types. The text is a bit sloppy: they call it the "inverse of the ray-direction", but I am not aware of a commonly used definition of "inverse" for a vector. Anyhow...
You just need to scroll to the end of the page, follow the link to the complete example, then look at geometry.h. There is the definition of that division:
friend Vec3 operator / (const T &r, const Vec3 &v)
{ return Vec3<T>(r / v.x, r / v.y, r / v.z); }
(T is the element type)
They define division of a scalar by a vector to be element-wise division.

Thanks to user idclev 463035818, I found out that for this specific page the operation is a custom operator defined in the C++ code. When you have a number, which we'll call n, divided by a Vector3 that we'll call VECTOR3, the operation returns the following as a Vector3: (n / VECTOR3.x, n / VECTOR3.y, n / VECTOR3.z).
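In Java there is no operator overloading, so the same thing has to be an explicit method on the custom class. A minimal sketch, assuming a Vector3 class with public double fields x, y, z:

public class Vector3 {
    public double x, y, z;

    public Vector3(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
    }

    // Element-wise division of a scalar by a vector, mirroring the
    // C++ "operator /" from geometry.h shown above.
    public static Vector3 divide(double n, Vector3 v) {
        return new Vector3(n / v.x, n / v.y, n / v.z);
    }
}

With that, the C++ line invdir = 1 / dir becomes Vector3 invdir = Vector3.divide(1.0, dir);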

Related

Issues with Raytracing triangles (orientation and coloring)

EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach. The new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
In debugging the first problem, I found that even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe that it is an issue with the shadow rays that I am sending out. I suspect that it has to do with the angle that the triangle/plane is at relative to the camera.
Here is my process for ray-tracing triangles, which is based on the process described on this website.
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }

    double intersectionDistance = numerator / denominator;

    //intersectionDistance < 0 means the "intersection" is behind the ray
    //(pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0)
            ? ray.getLocationWithMagnitude(intersectionDistance)
            : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    //Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);
    double denominator = (uv * uv) - (uu * vv);

    //get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think that I am having some issue with my coloring, and that it has to do with the normals of the various triangles. Here is the equation I am using when building my lighting model for spheres and triangles (it also appears as a comment in the code below):

c = cr*ca + cr*cl*max(0, n·l) + cl*cp*max(0, e·r)^p

Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow,
        Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr*ca + cr*cl*max(0, n·l) + cl*cp*max(0, e·r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
    Vector reflectionVector = normal.multiply(2)
            .multiply(angleBetweenLightAndNormal)
            .subtract(directionToLight)
            .normalize(); //r
    double visibilityTerm = isInShadow ? 0 : 1;

    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor)
            .multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor)
            .multiply(phongComponent).multiply(visibilityTerm);

    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle, and about -.707 for the blue triangle, so I'm not sure if the normal pointing the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal))), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a, b, c), and the edges (u, v) are computed using a-b and c-b respectively (they are also used for calculating the plane/triangle normal). A Vector is made up of an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering, and with Phong highlights on the triangles, but the problem I was trying to solve here was fixed.
It is a common problem in ray tracing scenes that include planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light is incident on the object from the direction toward which the outward-facing normal points. This can be true only for half the possible orientations of your triangle, and you've been unlucky enough to orient it with its normal facing away from the light.
Technically speaking, in a physical world your triangle would not have zero volume. It would be composed of some layer of material, however thin. On either side it has a proper normal that points outward. Assigning a single normal is a simplification that's fair to take, because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or as if it hit a surface that could not possibly be in its way. That's why they give an erroneous result. The negative value will subtract light from other sources, and depending on the magnitude and implementation, may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either the one we're using or its negative, we can fix both cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top use the additional information (the sign) to choose between two different materials (front and back) you may provide if desired. Alternatively, they can be set not to draw the back faces at all, because in solid objects they will be overdrawn by the front faces anyway (known as face culling), saving about half of the numerical work.
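Applied to the code in the question, a minimal sketch of that fix: flipping the normal when it faces away from the light is equivalent to taking the absolute value of the dot product, and it also keeps the reflection vector consistent. This assumes Vector.multiply(-1) negates the vector component-wise, like the other methods used above:

double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
if (angleBetweenLightAndNormal < 0) {
    normal = normal.multiply(-1); // use the other face's normal
    angleBetweenLightAndNormal = -angleBetweenLightAndNormal;
}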

3D Vector linear interpolation

How can I lerp between two 3d vectors?
I use this method for 2d vectors:
public Vector2d lerp(Vector2d other, double speed, double error) {
    if (equals(other) || getDistanceSquared(other) <= error * error)
        return other;
    double dx = other.getX() - this.x, dy = other.getY() - this.y;
    double direction = Math.atan2(dy, dx);
    double x = this.x + (speed * Math.cos(direction));
    double y = this.y + (speed * Math.sin(direction));
    return new Vector2d(x, y);
}
Note: this is not exactly "linear interpolation"; this method will interpolate at a constant rate, which is what I want.
I want to do exactly what this does but with an added z component for the third dimension. How can I do this?
The easiest way would be to transform your two vectors such that they lie in the (u, v) plane; then apply your method above; then transform back to the original coordinate space.
This requires you to construct a rotation matrix:
Take the cross product of your two vectors to get the mutual normal vector; call this cross_1;
Define that this (the vector you're interpolating from) points along the u axis;
Take the cross product of this and cross_1 to get a vector cross_2, which is the direction of your v axis.
Normalize each of these three vectors; call them this_norm, cross_2_norm and cross_1_norm.
These three vectors can be written as a 3x3 orthonormal matrix (each of the vectors is a 3-element column vector):
R = [ this_norm cross_2_norm cross_1_norm ]
Now: you can multiply your 3d vectors this and other by this matrix, and you will get vectors which have the form
[ u ]
[ v ]
[ 0 ]
i.e. a 3-dimensional column vector with zero as the third element (or, at least, you should; I may have forgotten to transpose the 3x3 matrix above).
So, you can obviously discard the third element, and have 2-element column vectors: you can store these in Vector2d. And so you can apply your method above to do the interpolation.
That gives you a Vector2d which interpolates in the (u, v) plane. You can transform that back to the (x, y, z) space by attaching a zero third element to it, and pre-multiplying by R' (which is the inverse of R, since it is orthonormal).
Of course, you need to handle degenerate cases, like zero and (anti-)parallel vectors. In these cases, one or both of the cross products are zero, meaning you can't normalize them; simply pick arbitrary directions instead.
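A compact sketch of that basis change, assuming a Vector3d class with dot, cross, normalize, multiply, and add methods (taking dot products against the basis vectors is the multiplication by the transpose of R, which sidesteps the transposition question above):

// Build the (u, v, w) basis from vectors a ("this") and b ("other"):
Vector3d w = a.cross(b).normalize(); // cross_1: the mutual normal
Vector3d u = a.normalize();          // a defines the u axis
Vector3d v = w.cross(u);             // cross_2: already unit length

// Into the plane: both vectors have a zero w coordinate by construction.
Vector2d a2 = new Vector2d(a.dot(u), a.dot(v));
Vector2d b2 = new Vector2d(b.dot(u), b.dot(v));

// Interpolate in 2D with the method from the question...
Vector2d r2 = a2.lerp(b2, speed, error);

// ...and transform back: recombine the basis vectors (multiplication by R).
Vector3d result = u.multiply(r2.getX()).add(v.multiply(r2.getY()));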
If I understand your code correctly: when you compute the dx and dy offsets, then the angle from them, and finally the sin/cos pair, you're basically normalizing the (dx, dy) vector, so you could write it like this:
Vector2d delta = other.subtract(this); // I'm not sure about your API here,
delta.normalize();                     // you may need to fix these lines
double x = this.x + (speed * delta.x);
double y = this.y + (speed * delta.y);
Now it should be straightforward to add a Z component.
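A sketch of the 3D version, keeping the structure of the 2D method but normalizing the offset directly (a hypothetical Vector3d class analogous to Vector2d is assumed):

public Vector3d lerp(Vector3d other, double speed, double error) {
    if (equals(other) || getDistanceSquared(other) <= error * error)
        return other;
    double dx = other.getX() - this.x;
    double dy = other.getY() - this.y;
    double dz = other.getZ() - this.z;
    // Normalizing (dx, dy, dz) replaces the atan2 + sin/cos step:
    double length = Math.sqrt(dx * dx + dy * dy + dz * dz);
    return new Vector3d(this.x + speed * dx / length,
                        this.y + speed * dy / length,
                        this.z + speed * dz / length);
}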

SurfacePlotMesh (FXyz) constructor arguments

I would like to know if I understand the Function<Point2D, Number> constructor argument correctly.
The functions I have used for 1D charts work by applying the function at every step along the x axis. Here, however, the parameter is a Point2D, which contains 2 variables: x and y. If I am correct, the x variable is a step which increases by 0.5 for every calculation, and the function is then applied to obtain y.
Then what is the second generic type parameter, the Number, for?
How could I implement other functions using the SurfacePlotMesh class? Could someone explain to me a little bit how it works, or link the documentation (if it exists)?
If you have a look at the code for SurfacePlotMesh in the FXyz library, you'll find createPlotMesh(), a method that creates the mesh for the surface, based on two coordinates on a plane grid (x, y), taken from the Point2D coordinates, and a function value (z), given by the function applied on that point.
If you have a look at the default parameters:
private static final Function<Point2D, Number> DEFAULT_FUNCTION =
        p -> Math.sin(p.magnitude()) / p.magnitude();

private static final double DEFAULT_X_RANGE = 10; // -5 +5
private static final double DEFAULT_Y_RANGE = 10; // -5 +5
private static final int DEFAULT_X_DIVISIONS = 64;
private static final int DEFAULT_Y_DIVISIONS = 64;
private static final double DEFAULT_FUNCTION_SCALE = 1.0D;
what this means is that there will be a grid of 10x10 units, with 64x64 divisions. At each and every vertex (x, y) of the total 65x65 vertices, we will evaluate the function to get the value z = f(x, y), with a default scale of 1.
I.e., for the top-left 2D point at (-5, -5), f(-5, -5) = 0.10025, so the 3D point for the mesh will be (-5, -5, 0.10025), and so on.
This picture shows a grid of 10x10 range with 20x20 divisions, and the mesh with a scale of 4 for that function.
You can change the function at any time, like:
p -> p.getX()
p -> p.getX() * p.getY()
p -> Math.cos(p.getX()) * Math.sin(p.getY())
...
as well as the other parameters (range, divisions, scale).
For the moment there is no documentation, but the code is fully available.
Also, there is a sampler here to run most of the samples and modify their parameters, so you can easily check the result without recompiling all over again.
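For reference, a minimal usage sketch. The argument order follows the snippet in the EDIT below (function, rangeX, rangeY, divisionsX, divisionsY, functionScale); the package name may differ between FXyz versions:

import javafx.geometry.Point2D;
import org.fxyz3d.shapes.primitives.SurfacePlotMesh;

// z = cos(x) * sin(y) over a 10x10 grid with 64x64 divisions, scale 1
SurfacePlotMesh surface = new SurfacePlotMesh(
        p -> Math.cos(p.getX()) * Math.sin(p.getY()),
        10, 10, 64, 64, 1.0);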
EDIT
Based on the OP's comment, for a function where there is no y dependency, a ribbon type of surface can be created by setting a very low y range:
private void createSurface(double time) {
    surface = new SurfacePlotMesh(
            p -> Math.sqrt(
                    Math.pow(Math.exp(-Math.pow(p.getX() - time, 2)) *
                             Math.cos(2 * Math.PI * (p.getX() - time)), 2) +
                    Math.pow(Math.exp(-Math.pow(p.getX() - time, 2)) *
                             Math.sin(2 * Math.PI * (p.getX() - time)), 2)),
            10, 0.1, 64, 2, 2);
}
where the time parameter can be set to a fixed value or driven by an animation.

How to find evenly distributed points on a line?

This is to find evenly-distributed points lying on a specific line (given a starting position, 2 points on the line, and an angle against the horizontal), and then to continue past the second point, so that something drawn there moves at a fixed rate in a given direction.
I'm thinking about calculating a slope, which would give me the ratio of vertical to horizontal movement. However, I don't know how to ensure the speed would be the same along two different lines. For example, given two different pairs of points, how would I make the drawing take the same amount of time to travel the same distance along both?
Is what I'm describing the correct idea? Are there any methods in OpenGL that could help me?
You should use vectors. Start with a point and travel in the direction of a vector. For example:
typedef struct vec2 {
    float x;
    float y;
} vec2;
That defines a basic 2D vector type. (This will work in 3D, just add a z coord.)
To move a fixed distance in a given direction, simply take some starting point and add the direction vector scaled by a scalar amount. Like this:
typedef struct point2D {
    float x;
    float y;
} point2D; // Notice any similarities here?

point2D somePoint = { someX, someY };
vec2 someDirection = { someDirectionX, someDirectionY };
float someScale = 5.0;

point2D newPoint;
newPoint.x = somePoint.x + someScale * someDirection.x;
newPoint.y = somePoint.y + someScale * someDirection.y;
The newPoint will be 5 units in the direction of someDirection. Note that you'll probably want to normalize someDirection before using it in this manner, so its length is 1.0:
#include <math.h> // for sqrt

void normalize (vec2* vec)
{
    float mag = sqrt(vec->x * vec->x + vec->y * vec->y);
    // TODO: Deal with mag being 0
    vec->x /= mag;
    vec->y /= mag;
}
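For the evenly distributed points from the title, the same step is simply repeated with scales of spacing, 2 * spacing, 3 * spacing, and so on. A sketch of that loop in Java, the language used elsewhere on this page (plain arrays as hypothetical stand-ins for the structs above; dir must already be normalized):

// Generate n points spaced 'spacing' apart along a direction.
static double[][] evenlySpacedPoints(double[] start, double[] dir,
                                     double spacing, int n) {
    double[][] points = new double[n][2];
    for (int i = 0; i < n; i++) {
        points[i][0] = start[0] + i * spacing * dir[0];
        points[i][1] = start[1] + i * spacing * dir[1];
    }
    return points;
}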

Virtual trackball implementation

I've been trying for the past few days to make a working implementation of a virtual trackball for the user interface for a 3D graphing-like program. But I'm having trouble.
Looking at the numbers across many tests, the problem seems to be the actual concatenation of my quaternions, but I can't say for sure. I've never worked with quaternions or virtual trackballs before; this is all new to me. I'm using the Quaternion class supplied by JOGL. I tried making my own and it worked (or at least as far as I know), but it was a complete mess so I just went with JOGL's.
When I do not concatenate the quaternions, the slight rotations I see seem to be what I want, but of course it's hard to tell when it's only moving a little bit in any direction. This code is based on the Trackball Tutorial on the OpenGL wiki.
When I use the Quaternion class's mult(Quaternion q) method, the graph hardly moves (even less than when not concatenating the quaternions).
When I tried the Quaternion class's add(Quaternion q) method for the fun of it, I got something that at the very least rotates the graph, but not in any coherent way. It spazzes out and rotates randomly as I move the mouse. Occasionally I'll get quaternions entirely filled with NaN.
I will not show either of these in my code; I'm lost as to what to do with my quaternions. I know I want to multiply them, because as far as I'm aware that's how they are concatenated. But like I said, I've had no success, so I'm assuming the screw-up is somewhere else in my code.
Anyway, my setup has a Trackball class with a public Point3f projectMouse(int x, int y) method and a public void rotateFor(Point3f p1, Point3f p2) method, where Point3f is a class I made. Another class called Camera has a public void transform(GLAutoDrawable g) method which calls OpenGL methods to rotate based on the trackball's quaternion.
Here's the code:
public Point3f projectMouse (int x, int y)
{
    int off = Screen.WIDTH / 2; // Half the width of the GLCanvas

    x = x - objx_ - off; // obj being the 2D center of the graph
    y = off - objy_ - y;

    float t = Util.sq(x) + Util.sq(y); // Util is a class I made with
    float rsq = Util.sq(off);          // some simple math stuff
                                       // off is also the radius of the sphere
    float z;
    if (t >= rsq)
        z = (rsq / 2.0F) / Util.sqrt(t);
    else
        z = Util.sqrt(rsq - t);

    Point3f result = new Point3f (x, y, z);
    return result;
}
Here's the rotation method:
public void rotateFor (Point3f p1, Point3f p2)
{
    // Vector3f is a class I made, I already know it works;
    // all methods in Vector3f modify the object's numbers
    // and return the newly modified instance of itself
    Vector3f v1 = new Vector3f(p1.x, p1.y, p1.z).normalize();
    Vector3f v2 = new Vector3f(p2.x, p2.y, p2.z).normalize();
    Vector3f n = v1.copy().cross(v2);

    float theta = (float) Math.acos(v1.dot(v2));
    float real = (float) Math.cos(theta / 2.0F);
    n.multiply((float) Math.sin(theta / 2.0F));

    Quaternion q = new Quaternion(real, n.x, n.y, n.z);
    rotation = q; // A member that can be accessed by a getter

    // Do magic on the quaternion
}
EDIT:
I'm getting closer, I found out a few simple mistakes.
1: The JOGL implementation treats W as the real number, not X; I was using X as the real part
2: I was not starting with the identity quaternion 1 + 0i + 0j + 0k
3: I was not converting the quaternion into an axis/angle for OpenGL
4: I was not converting the angle into degrees for OpenGL
Also, as Markus pointed out, I was not normalizing the normal. When I did, I couldn't see much change, though it's hard to tell; he's right, though.
The problem now is that when I do the whole thing, the graph shakes with a fierceness like you would never believe. It (kinda) moves in the direction you want it to, but the seizures are too fierce to make anything out of it.
Here's my new code with a few name changes:
public void rotate (Vector3f v1, Vector3f v2)
{
    Vector3f v1p = v1.copy().normalize();
    Vector3f v2p = v2.copy().normalize();
    Vector3f n = v1p.copy().cross(v2p);
    if (n.length() == 0) return; // Sometimes v1p equals v2p

    float w = (float) Math.acos(v1p.dot(v2p));
    n.normalize().multiply((float) Math.sin(w / 2.0F));
    w = (float) Math.cos(w / 2.0F);

    Quaternion q = new Quaternion(n.x, n.y, n.z, w);
    q.mult(rot_);
    rot_ = q;
}
Here's the OpenGL code:
Vector3f p1 = tb_.project(x1, y1); // projectMouse [changed name]
Vector3f p2 = tb_.project(x2, y2);
tb_.rotate (p1, p2);
float[] q = tb_.getRotation().toAxis(); // Converts to angle/axis
gl.glRotatef((float)Math.toDegrees(q[0]), q[1], q[2], q[3]);
The reason for the name changes is because I deleted everything in the Trackball class and started over. Probably not the greatest idea, but oh well.
EDIT2:
I can say with pretty good certainty that there is nothing wrong with projecting onto the sphere.
I can also say that as far as the whole thing goes it seems to be the VECTOR that is the problem. The angle looks just fine, but the vector seems to jump around.
EDIT3:
The problem is the multiplication of the two quaternions, I can confirm that everything else works as expected. Something goes whacky with the axis during multiplication!
"The problem is the multiplication of the two quaternions, I can confirm that everything else works as expected. Something goes whacky with the axis during multiplication!"
You are absolutely correct!! I just recently submitted a corrected multiplication and JogAmp has accepted my change. They had an incorrect multiplication in mult(Quaternion).
I am sure that if you get the latest JOGL release, it'll have the correct mult(Quaternion).
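For reference, this is the Hamilton product that mult(Quaternion) should compute, written as a sketch against a quaternion with x, y, z imaginary parts and w as the real part (the (x, y, z, w) order used in the code above):

// result = this * b (Hamilton product); note it is not commutative.
Quaternion mult (Quaternion b)
{
    return new Quaternion(
        w * b.x + x * b.w + y * b.z - z * b.y,  // x
        w * b.y - x * b.z + y * b.w + z * b.x,  // y
        w * b.z + x * b.y - y * b.x + z * b.w,  // z
        w * b.w - x * b.x - y * b.y - z * b.z); // w
}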
I did it!
Thanks to this C++ implementation, I was able to develop a working trackball/arcball interface. My goodness me, I'm still not certain what the problem was, but I rewrote everything, even wrote my own Quaternions class, and suddenly the whole thing works. I also made a Vectors class for vectors. I had a Vector3f class before, but the Quaternions and Vectors classes are full of static methods and take in arrays, to make it easy to do vector computations on quaternions and vice versa. I will link the code for those two classes below, but only the Trackball class will be shown here.
I made those two classes pretty quickly this morning, so if there are any mathematical errors, well, uh, oops. I only implemented what I needed and made sure it was correct. The classes are linked below:
Quaternions: http://pastebin.com/raxS4Ma9
Vectors: http://pastebin.com/fU3PKZB9
Here is my Trackball class:
public class Trackball
{
    private static final float RADIUS_ = Screen.DFLT_WIDTH / 2.0F;
    private static final int REFRESH_ = 50;
    private static final float SQRT2_ = (float) Math.sqrt(2);
    private static final float SQRT2_INVERSE_ = 1.0F / SQRT2_;

    private int count_;
    private int objx_, objy_;
    private float[] v1_, v2_;
    private float[] rot_;

    public Trackball ()
    {
        v1_ = new float[4];
        v2_ = new float[4];
        rot_ = new float[] {0, 0, 0, 1};
    }

    public void click (int x, int y)
    {
        v1_ = project(x, y);
    }

    public void drag (int x, int y)
    {
        v2_ = project(x, y);
        if (Arrays.equals(v1_, v2_)) return;

        float[] n = Vectors.cross(v2_, v1_, null);
        float[] o = Vectors.sub(v1_, v2_, null);

        float dt = Vectors.len(o) / (2.0F * RADIUS_);
        dt = dt > 1.0F ? 1.0F : dt < -1.0F ? -1.0F : dt;
        float a = 2.0F * (float) Math.asin(dt);

        Vectors.norm_r(n);
        Vectors.mul_r(n, (float) Math.sin(a / 2.0F));

        if (count_++ == REFRESH_) { count_ = 0; Quaternions.norm_r(rot_); }

        float[] q = Arrays.copyOf(n, 4);
        q[3] = (float) Math.cos(a / 2.0F);
        rot_ = Quaternions.mul(q, rot_, rot_);
    }

    public float[] getAxis ()
    {
        return Quaternions.axis(rot_, null);
    }

    public float[] project (float x, float y)
    {
        x = RADIUS_ - objx_ - x;
        y = y - objy_ - RADIUS_;

        float[] v = new float[] {x, y, 0, 0};

        float len = Vectors.len(v);
        float tr = RADIUS_ * SQRT2_INVERSE_;

        if (len < tr)
            v[2] = (float) Math.sqrt(RADIUS_ * RADIUS_ - len * len);
        else
            v[2] = tr * tr / len;

        return v;
    }
}
You can see there are a lot of similarities to the C++ example. Also, I'd like to note that there is no method for setting the objx_ and objy_ values yet. Those are for setting the center of the graph, which can be moved around. Just saying, so you don't scratch your head about those fields.
The cross product of two normalized vectors is not normalized itself. Its length is sin(theta), since |v1 × v2| = |v1| |v2| sin(theta). Try this instead:
n = n.normalize().multiply((float) Math.sin(theta / 2.0F));
