Can't texture model properly - java

My Blender 3D object, exported with triangulated faces and UVs written, doesn't apply its texture properly.
It looks like this:
My render code:
Color.white.bind();
texture.bind();
glBegin(GL_TRIANGLES);
for (ObjFace face : model.faces) {
    float[] vertex1 = model.vertices[face.indices[0] - 1];
    float[] vertex2 = model.vertices[face.indices[1] - 1];
    float[] vertex3 = model.vertices[face.indices[2] - 1];
    float[] normal1 = model.normals[face.normals[0] - 1];
    float[] normal2 = model.normals[face.normals[1] - 1];
    float[] normal3 = model.normals[face.normals[2] - 1];
    float[] tex1 = model.texCoords[face.texCoords[0] - 1];
    float[] tex2 = model.texCoords[face.texCoords[1] - 1];
    float[] tex3 = model.texCoords[face.texCoords[2] - 1];
    glNormal3f(normal1[0], normal1[1], normal1[2]);
    glTexCoord2f(tex1[0], tex1[1]);
    glVertex3f(vertex1[0], vertex1[1], vertex1[2]);
    glNormal3f(normal2[0], normal2[1], normal2[2]);
    glTexCoord2f(tex2[0], tex2[1]);
    glVertex3f(vertex2[0], vertex2[1], vertex2[2]);
    glNormal3f(normal3[0], normal3[1], normal3[2]);
    glTexCoord2f(tex3[0], tex3[1]);
    glVertex3f(vertex3[0], vertex3[1], vertex3[2]);
}
glEnd();
The parsing code:
for (int i = 0; i < lines.length; ++i) {
    String[] spaced = lines[i].split(" ");
    if (lines[i].startsWith("v ")) {
        float[] vertices = new float[3];
        vertices[0] = parseFloat(spaced[1]);
        vertices[1] = parseFloat(spaced[2]);
        vertices[2] = parseFloat(spaced[3]);
        verticesArray.add(vertices);
    } else if (lines[i].startsWith("vn ")) {
        float[] normals = new float[3];
        normals[0] = parseFloat(spaced[1]);
        normals[1] = parseFloat(spaced[2]);
        normals[2] = parseFloat(spaced[3]);
        normalsArray.add(normals);
    } else if (lines[i].startsWith("vt ")) {
        float[] texCoords = new float[2];
        texCoords[0] = parseFloat(spaced[1]);
        texCoords[1] = parseFloat(spaced[2]);
        texCoordsArray.add(texCoords);
    } else if (lines[i].startsWith("f ")) {
        int[] faceIndices = new int[3];
        int[] faceNormals = new int[3];
        int[] faceTextureCoords = new int[3];
        faceIndices[0] = parseInt(spaced[1].split("/")[0]);
        faceIndices[1] = parseInt(spaced[2].split("/")[0]);
        faceIndices[2] = parseInt(spaced[3].split("/")[0]);
        faceNormals[0] = parseInt(spaced[1].split("/")[2]);
        faceNormals[1] = parseInt(spaced[2].split("/")[2]);
        faceNormals[2] = parseInt(spaced[3].split("/")[2]);
        faceTextureCoords[0] = parseInt(spaced[1].split("/")[1]);
        faceTextureCoords[1] = parseInt(spaced[2].split("/")[1]);
        faceTextureCoords[2] = parseInt(spaced[3].split("/")[1]);
        faceArray.add(new ObjFace(faceIndices, faceNormals, faceTextureCoords));
    }
}
I'm also not sure whether it could be a problem with my Blender export.
Thanks.
EDIT: Updated pic after I made the texture image's width and height powers of two.
Edit 2: I tried a simple box to make sure that it wasn't the model that was screwing up, and tested the face culling. On the box, culling the back faces fixes the problem to some extent; on the cup, however, it makes little difference.
Edit 3: I included a video to demonstrate what I think is the problem. I think the triangular glitch is caused by overlapping triangles, such as where the handle is in front of the actual cup. youtube vid

Is this another instance of the "texture sizes that aren't a power of 2" problem? LWJGL will extend your texture to a power-of-2 size, meaning your UV coordinates in [0...1] will be wrong; they should be in [0...0.5783], for example, because the rest is LWJGL padding added to reach a power of 2. I can't find a reference...
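If that padding is the cause and the texture is loaded with Slick-Util, one workaround is to scale the OBJ UVs into the sub-rectangle the image actually occupies. A minimal sketch, assuming Slick-Util's Texture.getWidth()/getHeight() return the image extent as a fraction of the padded texture (my recollection of that API, so verify against your loader):

float uScale = texture.getWidth();   // e.g. ~0.58 if the image was padded out to a power of 2
float vScale = texture.getHeight();
// ...inside the face loop, scale each UV before emitting it:
glTexCoord2f(tex1[0] * uScale, tex1[1] * vScale);
glVertex3f(vertex1[0], vertex1[1], vertex1[2]);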

IIRC, in GLES20 the normals for a triangle are indicated by the winding (clockwise or counter-clockwise). I'm not sure exactly what GL will understand if you set a normal per vertex (that's for lighting) and none for the triangle. I'm not sure you can set one for a triangle, meaning it will be computed from vertex position + winding. What makes me think it's a problem is the fact that all your quads are half rendered (one triangle out of two).
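Another thing worth double-checking, given the "handle drawing in front of the cup" symptom, is the depth test; overlapping triangles winning by draw order rather than by distance is its classic symptom. A minimal sketch of the relevant state (LWJGL 2 static GL11 imports assumed; the Display's PixelFormat also needs depth bits):

import static org.lwjgl.opengl.GL11.*;

// one-time setup
glEnable(GL_DEPTH_TEST);      // keep nearer fragments, discard farther ones
glDepthFunc(GL_LEQUAL);
glEnable(GL_CULL_FACE);       // optional: skip back faces (requires consistent winding)
glCullFace(GL_BACK);

// every frame, clear depth as well as color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);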


Transform cartesian pixel-data-array to lat/lon pixel-data-array

I have an image (basically, I get raw image data as 1024x1024 pixels) and the position in lat/lon of the center pixel of the image.
Each pixel represents the same fixed pixel scale in meters (e.g. 30m per pixel).
Now, I would like to draw the image onto a map which uses the coordinate reference system "EPSG:4326" (WGS84).
When I draw the image by defining just its corners in lat/lon, based on an "image size in pixels * pixel scale" calculation and converting the distances from the center point into lat/lon coordinates for each corner, the image is not drawn correctly onto the map.
By "not correctly drawn" I mean that the image seems to be shifted, and the contents of the image are not at the map location where I expected them to be.
I suppose this is the case because I "mix" a pixel scaled image and a "EPSG:4326" coordinate reference system.
Now, with the information I have given, can I transform the whole pixel matrix from its fixed pixel-scale basis to a new pixel matrix in the "EPSG:4326" coordinate reference system, using GeoTools?
Of course, the transformation must be dependent on the center position in lat/lon that I have been given, and on the pixel scale.
I wonder if using something like this would point me into the correct direction:
MathTransform transform = CRS.findMathTransform(DefaultGeocentricCRS.CARTESIAN, DefaultGeographicCRS.WGS84, true);
DirectPosition2D srcDirectPosition2D = new DirectPosition2D(DefaultGeocentricCRS.CARTESIAN, degreeLat.getDegree(), degreeLon.getDegree());
DirectPosition2D destDirectPosition2D = new DirectPosition2D();
transform.transform(srcDirectPosition2D, destDirectPosition2D);
double transX = destDirectPosition2D.x;
double transY = destDirectPosition2D.y;
int kmPerPixel = mapImage.getWidth() / 1024; // It is known to me that my map is 1024x1024km ...
double x = zeroPointX + ((transX * 0.001) * kmPerPixel);
double y = zeroPointY + (((transX * -1) * 0.001) * kmPerPixel);
(I got this code from another SO thread and have already modified it a little, but I still wonder if this is the correct starting point for my problem.)
I only suppose that my original image coordinate reference system is of the type DefaultGeocentricCRS.CARTESIAN. Can someone confirm this?
And from here on, is this the correct start to use Geotools for this kind of problem solving, or am I on the complete wrong path?
Additionally, I would like to add that this would be used in quite a dynamic system. My image updates at about 10Hz, so the transformations have to be performed correspondingly often.
Again, is this initial thought of mine leading to a solution, or do you have other solutions for solving my problem?
Thank you very much,
Kiamur
This is not as simple as it might sound. You are essentially trying to define an area on a sphere (an ellipsoid, technically) using a flat square. As such there is no "correct" way to do it, so you will always end up with some distortion. Without knowing exactly where your image came from there is no way to give a definitive answer, but the following code provides you with 3 different possible answers:
The first two make use of GeoTools' GeodeticCalculator to calculate the corner points using bearings and distances. These are the blue "square" and the green "square" above. The blue is calculating the corners directly while the green calculates the edges and infers the corners from the intersections (that's why it is squarer).
final int width = 1024, height = 1024;
GeometryFactory gf = new GeometryFactory();
Point centre = gf.createPoint(new Coordinate(0, 51));
WKTWriter writer = new WKTWriter();
// direct method
GeodeticCalculator calc = new GeodeticCalculator(DefaultGeographicCRS.WGS84);
calc.setStartingGeographicPoint(centre.getX(), centre.getY());
double height2 = height / 2.0;
double width2 = width / 2.0;
double dist = Math.sqrt(height2 * height2 + width2 * width2);
double bearing = 45.0;
Coordinate[] corners = new Coordinate[5];
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, dist * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    corners[i] = new Coordinate(corner.getX(), corner.getY());
    bearing += 90.0;
}
corners[4] = corners[0];
Polygon bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
double[] edges = new double[4];
bearing = 0;
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, height2 * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    if (i % 2 == 0) {
        edges[i] = corner.getY();
    } else {
        edges[i] = corner.getX();
    }
    bearing += 90.0;
}
corners[0] = new Coordinate(edges[1], edges[0]);
corners[1] = new Coordinate(edges[1], edges[2]);
corners[2] = new Coordinate(edges[3], edges[2]);
corners[3] = new Coordinate(edges[3], edges[0]);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
Another way to do this is to transform the centre point into a projection that is "flatter", use simple addition to calculate the corners, and then reverse the transformation. To do this we can use the AUTO projection defined by the OGC WMS Specification to generate an Orthographic projection centred on our point; this gives the red "square", which is very similar to the blue one.
String code = "AUTO:42003," + centre.getX() + "," + centre.getY();
// System.out.println(code);
CoordinateReferenceSystem auto = CRS.decode(code);
// System.out.println(auto);
MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
MathTransform rtransform = CRS.findMathTransform(auto, DefaultGeographicCRS.WGS84);
Point g = (Point)JTS.transform(centre, transform);
width2 *=1000.0;
height2 *= 1000.0;
corners[0] = new Coordinate(g.getX()-width2,g.getY()-height2);
corners[1] = new Coordinate(g.getX()+width2,g.getY()-height2);
corners[2] = new Coordinate(g.getX()+width2,g.getY()+height2);
corners[3] = new Coordinate(g.getX()-width2,g.getY()+height2);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
bbox = (Polygon)JTS.transform(bbox, rtransform);
System.out.println(writer.write(bbox));
Which solution to use is a matter of taste, and depends on where your image came from, but I suspect that either the red or the blue will be best. If you need to do this at 10Hz then you will need to test them for speed, but I suspect that transforming the images will be the bottleneck.
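As a rough way to compare the variants against the 10Hz budget (the method name below is a placeholder for whichever version is being timed; a proper benchmark would use something like JMH):

final int runs = 1000;
long start = System.nanoTime();
for (int i = 0; i < runs; i++) {
    computeBoundingBox();   // placeholder for the variant under test
}
double msPerCall = (System.nanoTime() - start) / 1_000_000.0 / runs;
System.out.println(msPerCall + " ms per call");   // needs to stay well under 100 ms for 10Hz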
Once you have your bounding box set up to your satisfaction, you can convert your (unreferenced) image to a georeferenced coverage using:
GridCoverageFactory factory = CoverageFactoryFinder.getGridCoverageFactory(null);
GridCoverage2D gc = factory.create("name", image, new ReferencedEnvelope(bbox.getEnvelopeInternal(),DefaultGeographicCRS.WGS84));
String fileName = "myImage.tif";
AbstractGridFormat format = GridFormatFinder.findFormat(fileName);
File out = new File(fileName);
GridCoverageWriter writer = format.getWriter(out);
try {
    writer.write(gc, null);
    writer.dispose();
} catch (IllegalArgumentException | IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

Rotation won't work in Java physics engine

I am making a Java rigid-body physics engine, and it was going great until I tried to implement rotation. I don't know where the problem is coming from. I have methods calculating the moment of inertia of convex polygons and circles, using formulas from these websites:
http://lab.polygonal.de/?p=57
http://en.wikipedia.org/wiki/List_of_moments_of_inertia
This is the code for the polygon moment of inertia:
public float momentOfInertia() {
    Vector C = centerOfMass().subtract(position); //center of mass
    Line[] sides = sides(); //sides of the polygon
    float moi = 0; //moment of inertia
    for (int i = 0; i < sides.length; i++) {
        Line l = sides[i]; //current side of polygon being looped through
        Vector p1 = C; //points 1, 2, and 3 are the points of the triangle
        Vector p2 = l.point1;
        Vector p3 = l.point2;
        Vector Cp = p1.add(p2).add(p3).divide(3); //center of mass of the triangle, or C'
        float d = new Line(C, Cp).length(); //distance between center of mass
        Vector bv = p2.subtract(p1); //vector for side b of triangle
        float b = bv.magnitude(); //scalar for length of side b
        Vector u = bv.divide(b); //unit vector for side b
        Vector cv = p3.subtract(p1); //vector for side c of triangle, only used to calculate variables a and h
        float a = cv.dot(u); //length of a in triangle
        Vector av = u.multiply(a); //vector for a in triangle
        Vector hv = cv.subtract(av); //vector for height of triangle, or h in diagram
        float h = hv.magnitude(); //length of height of triangle, or h in diagram
        float I = ((b*b*b*h) - (b*b*h*a) + (b*h*a*a) + (b*h*h*h)) / 36; //calculate moment of inertia of individual triangle
        float M = (b*h) / 2; //mass or area of triangle
        moi += I + M*d*d; //equation in sigma series of website
    }
    return moi;
}
And this is for the circle:
public float momentOfInertia() {
    return (float) Math.pow(radius, 2) * area() / 2;
}
I know for a fact that the area functions work fine; I have checked them. I just don't know how to check whether the moment of inertia equations are wrong.
For collision detection, I used the separating axis theorem to check for any combination of two polygons and circles, where it can find out whether they are colliding, the normal velocity of the collision, and the contact point of the collision. These methods all work beautifully.
I might also like to say how positions are organized. Every body has a position and a shape, either a polygon or a circle. Each shape has a position, and polygons have individual vertices. So if I want to find the absolute position of a vertex of a polygon-shaped body, I need to add the positions of the body, the polygon, and the vertex itself. The center of mass equation is in absolute position according to the shape, with no account for the body. The center of mass and moment of inertia methods are in the Shape class.
For every body, this state is updated according to the force and torque in the body's update method, where dt is delta time. I also rotate the polygon based on the difference in rotation, because the vertices are ever-changing.
public void update(float dt) {
    if (mass != 0) {
        momentum = momentum.add(force.multiply(dt));
        velocity = momentum.divide(mass);
        position = position.add(velocity.multiply(dt));
        angularMomentum += torque*dt;
        angularVelocity = angularMomentum/momentOfInertia;
        angle += angularVelocity*dt;
        shape.rotate(angularVelocity*dt);
    }
}
Finally, I also have a CollisionResolver class which resolves the collision between two colliding bodies by applying the normal force and friction. Here is the class's only method, which does all of this:
public static void resolveCollision(Body a, Body b, float dt) {
    //calculate normal vector
    Vector norm = CollisionDetector.normal(a, b);
    Vector normb = norm.multiply(-1);
    //undo overlap between bodies
    float ratio1 = a.mass/(a.mass+b.mass);
    float ratio2 = b.mass/(b.mass+a.mass);
    a.position = a.position.add(norm.multiply(ratio1));
    b.position = b.position.add(normb.multiply(ratio2));
    //calculate contact point of collision and other values needed for rotation
    Vector cp = CollisionDetector.contactPoint(a, b, norm);
    Vector c = a.shape.centerOfMass().add(a.position);
    Vector cb = b.shape.centerOfMass().add(b.position);
    Vector d = cp.subtract(c);
    Vector db = cp.subtract(cb);
    //create the normal force vector from the velocity
    Vector u = norm.unit();
    Vector ub = u.multiply(-1);
    Vector F = new Vector(0, 0);
    boolean doA = a.mass != 0;
    if (doA) {
        F = a.force;
    } else {
        F = b.force;
    }
    Vector n = new Vector(0, 0);
    Vector nb = new Vector(0, 0);
    if (doA) {
        Vector Fyp = u.multiply(F.dot(u));
        n = Fyp.multiply(-1);
        nb = Fyp;
    } else {
        Vector Fypb = ub.multiply(F.dot(ub));
        n = Fypb;
        nb = Fypb.multiply(-1);
    }
    //calculate normal force for body A
    float r = a.restitution;
    Vector v1 = a.velocity;
    Vector vy1p = u.multiply(u.dot(v1));
    Vector vx1p = v1.subtract(vy1p);
    Vector vy2p = vy1p.multiply(-r);
    Vector v2 = vy2p.add(vx1p);
    //calculate normal force for body B
    float rb = b.restitution;
    Vector v1b = b.velocity;
    Vector vy1pb = ub.multiply(ub.dot(v1b));
    Vector vx1pb = v1b.subtract(vy1pb);
    Vector vy2pb = vy1pb.multiply(-rb);
    Vector v2b = vy2pb.add(vx1pb);
    //calculate friction for body A
    float mk = (a.friction+b.friction)/2;
    Vector v = a.velocity;
    Vector vyp = u.multiply(v.dot(u));
    Vector vxp = v.subtract(vyp);
    float fk = -n.multiply(mk).magnitude();
    Vector fkv = vxp.unit().multiply(fk); //friction force
    Vector vr = vxp.subtract(d.multiply(a.angularVelocity));
    Vector fkvr = vr.unit().multiply(fk); //friction torque - indicated by r for rotation
    //calculate friction for body B
    Vector vb = b.velocity;
    Vector vypb = ub.multiply(vb.dot(ub));
    Vector vxpb = vb.subtract(vypb);
    float fkb = -nb.multiply(mk).magnitude();
    Vector fkvb = vxpb.unit().multiply(fkb); //friction force
    Vector vrb = vxpb.subtract(db.multiply(b.angularVelocity));
    Vector fkvrb = vrb.unit().multiply(fkb); //friction torque - indicated by r for rotation
    //move bodies based on calculations
    a.momentum = v2.multiply(a.mass).add(fkv.multiply(dt));
    if (a.mass != 0) {
        a.velocity = a.momentum.divide(a.mass);
        a.position = a.position.add(a.velocity.multiply(dt));
    }
    b.momentum = v2b.multiply(b.mass).add(fkvb.multiply(dt));
    if (b.mass != 0) {
        b.velocity = b.momentum.divide(b.mass);
        b.position = b.position.add(b.velocity.multiply(dt));
    }
    //apply torque to bodies
    float t = (d.cross(fkvr)+d.cross(n));
    float tb = (db.cross(fkvrb)+db.cross(nb));
    if (a.mass != 0) {
        a.angularMomentum = t*dt;
        a.angularVelocity = a.angularMomentum/a.momentOfInertia;
        a.angle += a.angularVelocity*dt;
        a.shape.rotate(a.angularVelocity*dt);
    }
    if (b.mass != 0) {
        b.angularMomentum = tb*dt;
        b.angularVelocity = b.angularMomentum/b.momentOfInertia;
        b.angle += b.angularVelocity*dt;
        b.shape.rotate(b.angularVelocity*dt);
    }
}
As for the actual problem: both the circles and polygons rotate very slowly and often in the wrong direction. I know I am throwing a lot out there, but this problem has been bugging me for a while, and I would appreciate any help I can get.
Thanks.
This answer addresses the "I just don't know how to check if the moment of inertia equations are wrong." part of the question.
There are several possible approaches, some of which you may have already tried, and they can be used in combination:
Unit testing
Take your moment of inertia code and apply it to problems with known solutions from a tutorial or textbook.
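For example, a unit square with uniform density 1 has moment of inertia 1/6 about its centroid, and a disc of radius r has pi*r^4/2, so tests along these lines would catch a scaling error (the constructors below are guesses at your own Vector/Polygon/Circle API, so adapt them):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MomentOfInertiaTest {

    @Test
    public void unitSquareAboutCentroid() {
        // density 1, side 1: I = 1 * 1 * (1*1 + 1*1) / 12 = 1/6
        Polygon square = new Polygon(new Vector[] {
            new Vector(0, 0), new Vector(1, 0),
            new Vector(1, 1), new Vector(0, 1)
        });
        assertEquals(1.0 / 6.0, square.momentOfInertia(), 1e-4);
    }

    @Test
    public void discAboutCentre() {
        // density 1, radius 2: I = pi * r^4 / 2 = 8 * pi
        Circle circle = new Circle(2.0f);
        assertEquals(Math.PI * 8.0, circle.momentOfInertia(), 1e-3);
    }
}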
Dimensional analysis
I would recommend this anyway for any scientific or engineering program. You may have deleted comments for compactness of posted code, but they are important. Annotate each variable that represents a physical quantity with its units. Check that every expression you evaluate has the right units, based on its inputs, for its result variable. For example, in the classic equation F=ma in SI units: F is in Newtons, equivalent to kg.m/(s^2), m is in kg, a is in m/(s^2), so it all balances. Be careful with transitions between physics world coordinates and screen coordinates.
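As a sketch of what those annotations could look like for the quantities used in the update method (SI units assumed; the field names mirror the posted code):

class BodyUnits {
    float  dt;               // s
    float  mass;             // kg
    Vector force;            // N         = kg*m/s^2
    Vector momentum;         // kg*m/s    = N * s
    Vector velocity;         // m/s       = momentum / mass
    Vector position;         // m
    float  torque;           // N*m       = kg*m^2/s^2
    float  angularMomentum;  // kg*m^2/s  = N*m * s
    float  momentOfInertia;  // kg*m^2
    float  angularVelocity;  // rad/s     = angularMomentum / momentOfInertia
    float  angle;            // rad
}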
Program simplification
Try working first with only one instance of one very simple shape for which you can do all the calculations by hand. Since some of your problems do not relate to rotation, a circle may be a good first choice because of its symmetry. Debug that, comparing intermediate results to equivalent results from paper-and-pencil (and calculator). Gradually add more instances of the same shape, then debug a single instance of the next shape...
Deliberate error
Given that you suspect your inertia calculations, try setting arbitrary values slightly different from your calculations, and see what differences they make in the display. Are the effects similar to the problems you are seeing? If so, keep it as a hypothesis.
As a more general note, programs that do iterative simulation can be very vulnerable to accumulated floating point error. Unless you have a real need to save space, and have done enough analysis of the numerical stability of your code to be sure float is OK, I strongly recommend using double instead. This is probably not your current problem, but is something that could become an issue later.
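A tiny illustration of how single-precision error accumulates over many small steps, compared to double:

public class FloatDrift {
    public static void main(String[] args) {
        float  f = 0f;
        double d = 0.0;
        for (int i = 0; i < 1_000_000; i++) {  // e.g. a million ticks of dt = 0.001
            f += 0.001f;
            d += 0.001;
        }
        System.out.println("float : " + f);   // drifts noticeably away from 1000
        System.out.println("double: " + d);   // stays very close to 1000
    }
}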

Is there a way to add on to the points of a shape/or a way to grab the perimeter points? [duplicate]

I have made a transform and rendered a Polygon object with it (mesh is of type Polygon):
at.setToTranslation(gameObject.position.x, gameObject.position.y);
at.rotate(Math.toRadians(rotation));
at.scale(scale, scale);
g2d.setTransform(at);
g2d.fillPolygon(mesh);
Now I want to return the exact mesh I rendered so that I can do collision checks on it. The only problem is that if I return mesh, it returns the un-transformed mesh. So I tried applying the transform to the Polygon object (mesh) like so:
mesh = (Polygon)at.createTransformedShape(mesh);
But unfortunately at.createTransformedShape() returns a Shape that can only be cast to Path2D.Double. So if anyone knows how to convert a Path2D.Double to a Polygon, or knows another way to apply the transformations to the mesh, please help.
If AffineTransform#createTransformedShape doesn't provide the desired result for Polygons (as seems to be the case), you can split the Polygon into Points, transform each Point, and combine them into a new Polygon. Try:
//Polygon mesh
//AffineTransform at
int[] x = mesh.xpoints;
int[] y = mesh.ypoints;
int[] rx = new int[x.length];
int[] ry = new int[y.length];
for (int i = 0; i < mesh.npoints; i++) {
    Point2D p = new Point2D.Double(x[i], y[i]);
    at.transform(p, p);
    rx[i] = (int) Math.round(p.getX());
    ry[i] = (int) Math.round(p.getY());
}
mesh = new Polygon(rx, ry, mesh.npoints);
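Alternatively, you can keep createTransformedShape and rebuild a Polygon by walking the returned Shape's PathIterator; since an affine transform of a polygon produces only straight segments, no flattening is needed. A sketch:

import java.awt.Polygon;
import java.awt.Shape;
import java.awt.geom.PathIterator;

// Converts a polygonal Shape (e.g. the result of at.createTransformedShape(mesh))
// back into a java.awt.Polygon.
static Polygon toPolygon(Shape shape) {
    Polygon result = new Polygon();
    double[] coords = new double[6];
    for (PathIterator it = shape.getPathIterator(null); !it.isDone(); it.next()) {
        int type = it.currentSegment(coords);
        if (type == PathIterator.SEG_MOVETO || type == PathIterator.SEG_LINETO) {
            result.addPoint((int) Math.round(coords[0]), (int) Math.round(coords[1]));
        }
        // SEG_CLOSE ends the outline; Polygon closes itself implicitly.
    }
    return result;
}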

LWJGL: gluLookAt that returns Matrix4f in opengl 3+?

I'm trying to write a little game with a third-person camera, and I'm wondering about the gluLookAt function. It works with OpenGL 1.1, but I'm using 3.2, so I need something that can return a Matrix4f to me. I didn't find anything on the Internet except some code in C++, which I found extremely hard to translate to LWJGL (their APIs are not the same, no sir). For example, I tried to remake this code (This link):
// ----------------------------------------------------
// View Matrix
//
// note: it assumes the camera is not tilted,
// i.e. a vertical up vector (remember gluLookAt?)
//
void setCamera(float posX, float posY, float posZ,
               float lookAtX, float lookAtY, float lookAtZ) {
    float dir[3], right[3], up[3];
    up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
    dir[0] = (lookAtX - posX);
    dir[1] = (lookAtY - posY);
    dir[2] = (lookAtZ - posZ);
    normalize(dir);
    crossProduct(dir, up, right);
    normalize(right);
    crossProduct(right, dir, up);
    normalize(up);
    float aux[16];
    viewMatrix[0]  = right[0];
    viewMatrix[4]  = right[1];
    viewMatrix[8]  = right[2];
    viewMatrix[12] = 0.0f;
    viewMatrix[1]  = up[0];
    viewMatrix[5]  = up[1];
    viewMatrix[9]  = up[2];
    viewMatrix[13] = 0.0f;
    viewMatrix[2]  = -dir[0];
    viewMatrix[6]  = -dir[1];
    viewMatrix[10] = -dir[2];
    viewMatrix[14] = 0.0f;
    viewMatrix[3]  = 0.0f;
    viewMatrix[7]  = 0.0f;
    viewMatrix[11] = 0.0f;
    viewMatrix[15] = 1.0f;
    setTranslationMatrix(aux, -posX, -posY, -posZ);
    multMatrix(viewMatrix, aux);
}
I can understand everything up to "float aux[16]"; after that it just gets messy in my mind, especially at the end.
Can someone make it clear for me? Maybe someone has already made a gluLookAt clone or something?
EDIT:
Thank you, Brett, now I must understand how to express that in code. You say that "aux" is a matrix, but we give it only 3 floats, so it had better be a vector; but if it is a vector, how do I multiply it with the 4x4 view matrix? And I can't find a way to just fill a Matrix4f with numbers; there are no methods in the lib to do that (most likely because I'm a noob and can't find them, but hey, I really can't).
FINAL EDIT:
Finally I got it to work; I just wasn't understanding all the matrix work that was required. Here is the final working code, in case anyone is interested. To use it, don't forget to set up the projection matrix at the beginning.
void setCamera(float posX, float posY, float posZ,
               float lookAtX, float lookAtY, float lookAtZ) {
    Vector3f dir = new Vector3f(lookAtX - posX, lookAtY - posY, lookAtZ - posZ);
    Vector3f up = new Vector3f(0, 1f, 0);
    Vector3f right = new Vector3f();
    dir.normalise();
    Vector3f.cross(dir, up, right);
    right.normalise();
    Vector3f.cross(right, dir, up);
    up.normalise();
    Matrix4f aux = new Matrix4f();
    viewMatrix = new Matrix4f();
    viewMatrix.m00 = right.getX();
    viewMatrix.m01 = right.getY();
    viewMatrix.m02 = right.getZ();
    viewMatrix.m03 = 0.0f;
    viewMatrix.m10 = up.getX();
    viewMatrix.m11 = up.getY();
    viewMatrix.m12 = up.getZ();
    viewMatrix.m13 = 0.0f;
    viewMatrix.m20 = -dir.getX();
    viewMatrix.m21 = -dir.getY();
    viewMatrix.m22 = -dir.getZ();
    viewMatrix.m23 = 0.0f;
    viewMatrix.m30 = 0.0f;
    viewMatrix.m31 = 0.0f;
    viewMatrix.m32 = 0.0f;
    viewMatrix.m33 = 1.0f;
    //setup aux as a translation matrix by placing positions in the last column
    aux.m30 = -posX;
    aux.m31 = -posY;
    aux.m32 = -posZ;
    //multiplication (in fact translation) of viewMatrix with aux
    Matrix4f.mul(viewMatrix, aux, viewMatrix);
}
I have a first-person camera class that I use (on OpenGL 3.2) which handles movement: x, y and z positions as well as pitch, yaw and roll. The way I do this is to update the positions each cycle and then, as part of rendering, apply these updates from the camera by creating a new view matrix and sending it as a uniform to my vertex shader.
Here is the method that accomplishes this:
@Override
public void applyTranslations(int uniformLocation) {
    viewMatrix = new Matrix4f();
    Matrix4f.rotate(MatrixUtils.degreesToRadians(pitch), new Vector3f(1, 0, 0), viewMatrix, viewMatrix);
    Matrix4f.rotate(MatrixUtils.degreesToRadians(yaw), new Vector3f(0, 1, 0), viewMatrix, viewMatrix);
    Matrix4f.rotate(MatrixUtils.degreesToRadians(roll), new Vector3f(0, 0, 1), viewMatrix, viewMatrix);
    Matrix4f.translate(new Vector3f(-x, -y, -z), viewMatrix, viewMatrix);
    viewMatrix.store(matrix44Buffer);
    matrix44Buffer.flip();
    glUniformMatrix4(uniformLocation, false, matrix44Buffer);
}
Where uniformLocation is the location of my viewMatrix uniform within my shaders.
Steps are to:
Create a new 4x4 matrix
Apply rotations for x, y and z axes
Apply translation for the x, y and z axes
Send matrix to shaders
The code is just populating the matrix as described in the gluLookAt() documentation.
Up until float aux[16], the code is creating an orthonormal basis (3 mutually perpendicular vectors as 'axes'). The properties of orthogonal rotation matrices allow viewMatrix elements to be set directly. aux is then (presumably) populated as a translation matrix, and the transforms are then concatenated with multMatrix.

OpenGL: creating my own camera

I'm trying to create a camera to move around a 3D space and am having some problems setting it up. I'm doing this in Java, and apparently using gluPerspective and gluLookAt together creates a conflict (the screen starts flickering like mad).
gluPerspective is set like this:
gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(50.0f, h, 1.0, 1000.0);
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
I then create a camera matrix, making use of eye coordinates, forward and up vectors (http://people.freedesktop.org/~idr/glu3/form_4.png) (let's assume the code for the camera is correct).
Lastly, before I draw any thing I have:
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(camera.matrix);
And then I call my drawing routines (which do some translation/rotation on their own by calling glRotatef and glTranslatef).
Without the call to glMultMatrixf, the camera shows the items I need to see in the centre of the screen, as it should. With glMultMatrixf, however, all I get is a black screen. I tried using glLoadMatrixf instead and it didn't work either. Am I doing something wrong? Am I putting something out of place? If not, and this is how it should be done, let me know and I'll post some of the camera code that might be creating the conflicts.
EDIT: Here is the camera matrix creation code:
private void createMatrix()
{
    float[] f = new float[3]; //forward (centre-eye)
    float[] s = new float[3]; //side (f x up)
    float[] u = new float[3]; //'new up' (s x f)
    for (int i = 0; i < 3; i++) {
        f[i] = centre[i] - eye[i];
    }
    f = Maths.normalize(f);
    s = Maths.crossProduct(f, upVec);
    u = Maths.crossProduct(s, f);
    float[][] mtx = new float[4][4];
    float[][] mtx2 = new float[4][4];
    //initializing matrices to all 0s
    for (int i = 0; i < mtx.length; i++) {
        for (int j = 0; j < mtx[0].length; j++) {
            mtx[i][j] = 0;
            mtx2[i][j] = 0;
        }
    }
    //mtx = [ [s] 0, [u] 0, [-f] 0, 0 0 0 1]
    //mtx2 = [1 0 0 -eye(x), 0 1 0 -eye(y), 0 0 1 -eye(z), 0 0 0 1]
    for (int i = 0; i < 3; i++) {
        mtx[0][i] = s[i];
        mtx[1][i] = u[i];
        mtx[2][i] = -f[i];
        mtx2[i][3] = -eye[i];
    }
    mtx[3][3] = 1;
    mtx2[0][0] = 1; mtx2[1][1] = 1; mtx2[2][2] = 1; mtx2[3][3] = 1;
    mtx = Maths.matrixMultiply(mtx, mtx2);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            // this.mtx is a float[16] for glMultMatrixf
            this.mtx[i*4 + j] = mtx[i][j];
        }
    }
}
I'm hoping the error is somewhere in this piece of code; if not, I'll have a look at my maths functions to see what's going on...
EDIT2: I should mention that at least the initial vectors (eye, centre, up) are correct and do put the camera where it should be (it worked with gluLookAt but had the flickering issue).
It might be simpler to use glRotatef, glTranslatef, and glFrustum to create the camera, although your math seems fine to me (just as long as UpVec is actually defined). In most of the 3D graphics that I have done, you didn't really have a defined object that you wanted to track. I went through various implementations of a 3D camera using gluLookAt before I finally settled on this.
Here is how I tend to define my cameras:
When I create or initialize my camera, I set up the projection matrix with glFrustum. You can use gluPerspective if you prefer:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(left, right, down, up, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
After I clear the color and depth buffers for a render pass, I then call
glLoadIdentity();
glRotated(orientation.x, 1.0, 0.0, 0.0);
glRotated(orientation.y, 0.0, 1.0, 0.0);
glRotated(orientation.z, 0.0, 0.0, 1.0);
glTranslatef(position.x, position.y, position.z);
To position and orient the camera. Initially, you set position and orientation both to {0}, then add or subtract from position when a key is pressed, and add or subtract from orientation.x and orientation.y when the mouse is moved... (I generally don't mess with orientation.z)
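A rough sketch of that per-frame input handling (the key/mouse query methods here are placeholders rather than any particular library's API, and the signs depend on your camera convention):

// Placeholder input API; substitute your windowing/input library's calls.
void handleInput(float dt) {
    float moveSpeed = 5.0f;   // world units per second (assumed)
    float lookSpeed = 0.1f;   // degrees per pixel of mouse movement (assumed)
    if (keyDown('W')) position.z += moveSpeed * dt;
    if (keyDown('S')) position.z -= moveSpeed * dt;
    if (keyDown('A')) position.x += moveSpeed * dt;
    if (keyDown('D')) position.x -= moveSpeed * dt;
    orientation.y += mouseDeltaX() * lookSpeed;   // yaw
    orientation.x += mouseDeltaY() * lookSpeed;   // pitch
}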
Cheers.
Fixed it, kind of. The problem was using glMultMatrixf(float[] matrix, int offset)... for some reason, if I just use glMultMatrixf(FloatBuffer matrix), it works fine.
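For reference, the buffer-based call looks roughly like this (JOGL assumed, matching the gl/GLMatrixFunc calls above; Buffers is com.jogamp.common.nio.Buffers, and a direct buffer is the safe choice):

import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;

// camera.matrix is the float[16] built in createMatrix()
FloatBuffer matrixBuffer = Buffers.newDirectFloatBuffer(camera.matrix);
gl.glMultMatrixf(matrixBuffer);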
There are some issues with the transformations I'm making but I should be able to deal with those... Thank you for your input though guys.
