Safeguard against floating point errors in Java

As the number of frames rendered in a game increases beyond a certain limit, instead of obtaining an identity matrix from the following sequence of matrix transformations:
Matrix.setIdentityM(ModelMatrix, 0);
Matrix.translateM(ModelMatrix, 0, xmov, ymov, 0);
Matrix.translateM(ModelMatrix, 0, -xmov, -ymov, 0);
small values get added to the columns (because of floating point rounding in Java) and progressively grow larger in the matrix (which is then no longer the identity), causing strange translations. Below is the code:
...// _ModelMatrixNozzle is set to the identity matrix, like all other 4x4 matrices in my app, in the onSurfaceChanged method
...// this code is part of the update() method, called by onDrawFrame() in the renderer thread
GLES20Renderer._uNozzleCentreMatrix[0] =
        GLES20Renderer._ModelMatrixNozzle[0]  * GLES20Renderer._uNozzleCentre[0] +
        GLES20Renderer._ModelMatrixNozzle[4]  * GLES20Renderer._uNozzleCentre[1] +
        GLES20Renderer._ModelMatrixNozzle[8]  * GLES20Renderer._uNozzleCentre[2] +
        GLES20Renderer._ModelMatrixNozzle[12] * GLES20Renderer._uNozzleCentre[3];
GLES20Renderer._uNozzleCentreMatrix[1] =
        GLES20Renderer._ModelMatrixNozzle[1]  * GLES20Renderer._uNozzleCentre[0] +
        GLES20Renderer._ModelMatrixNozzle[5]  * GLES20Renderer._uNozzleCentre[1] +
        GLES20Renderer._ModelMatrixNozzle[9]  * GLES20Renderer._uNozzleCentre[2] +
        GLES20Renderer._ModelMatrixNozzle[13] * GLES20Renderer._uNozzleCentre[3];
GLES20Renderer._uNozzleCentreMatrix[2] = 0;
GLES20Renderer._uNozzleCentreMatrix[3] = 1;
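// Note: android.opengl.Matrix.multiplyMV computes this matrix-vector product
// (all four components) in one call, so the two hand-written dot products
// above could also be written as:
// Matrix.multiplyMV(GLES20Renderer._uNozzleCentreMatrix, 0, GLES20Renderer._ModelMatrixNozzle, 0, GLES20Renderer._uNozzleCentre, 0);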
if(Math.abs(ds) > 0) {
/*transformations will be added here if the errors are solved*/
} else {
if(GLES20Renderer._zAngle >= 360) {
GLES20Renderer._zAngle = GLES20Renderer._zAngle - 360;
}
if(GLES20Renderer._zAngle <= -360) {
GLES20Renderer._zAngle = GLES20Renderer._zAngle + 360;
}
Matrix.translateM(GLES20Renderer._ModelMatrixNozzle, 0, GLES20Renderer._uNozzleCentreMatrix[0], GLES20Renderer._uNozzleCentreMatrix[1], 0);
Matrix.rotateM(GLES20Renderer._ModelMatrixNozzle, 0, GLES20Renderer._zAngle, 0, 0, 1);
//Matrix.rotateM(GLES20Renderer._ModelMatrixNozzle, 0, -GLES20Renderer._lastZAngle, 0, 0, 1);
Matrix.translateM(GLES20Renderer._ModelMatrixNozzle, 0, -GLES20Renderer._uNozzleCentreMatrix[0], -GLES20Renderer._uNozzleCentreMatrix[1], 0);
}
Download apk: http://www.pixdip.com/opengles/rotation/floating.apk
Although it is not required, the complete code is here: Drift in rotation about z-axis

Matrix.setIdentityM(ModelMatrix, 0);
Matrix.translateM(ModelMatrix, 0, xmov, ymov, 0);
Matrix.translateM(ModelMatrix, 0, -xmov, -ymov, 0);
will always produce floating point errors, because each call multiplies the current matrix in place, so the rounding error from every operation is carried into all later ones.
The way I finally removed this was by using separate matrices for such critical transformations:
private static float[] _TMatrix = new float[16];
private static float[] _ModelMatrix = new float[16];
private static float[] _Result = new float[16];
Matrix.setIdentityM(Renderer._ModelMatrix, 0);
Matrix.setIdentityM(Renderer._TMatrix, 0);
Matrix.translateM(Renderer._ModelMatrix, 0, xmov, ymov, 0);
Matrix.translateM(Renderer._TMatrix, 0, -xmov, -ymov, 0);
// multiplyMM's result array must not overlap its operands, so the product goes into a separate matrix
Matrix.multiplyMM(Renderer._Result, 0, Renderer._TMatrix, 0, Renderer._ModelMatrix, 0);
// Renderer._Result is now an identity model matrix, without any floating point errors
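Another way to keep rounding from compounding (not from the original answer, just a common pattern with android.opengl.Matrix) is to never accumulate into the model matrix at all: keep the authoritative state in plain floats (the rotation angle and the pivot point) and rebuild the matrix from identity every frame. A minimal sketch, where centreX and centreY are hypothetical names for the untransformed nozzle centre:
// Rebuild the matrix from scratch each frame; rounding from previous
// frames can then never accumulate inside the matrix itself.
Matrix.setIdentityM(GLES20Renderer._ModelMatrixNozzle, 0);
Matrix.translateM(GLES20Renderer._ModelMatrixNozzle, 0, centreX, centreY, 0);
Matrix.rotateM(GLES20Renderer._ModelMatrixNozzle, 0, GLES20Renderer._zAngle, 0, 0, 1);
Matrix.translateM(GLES20Renderer._ModelMatrixNozzle, 0, -centreX, -centreY, 0);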

Related

copying an image onto another with JOCL/OpenCL

My goal is to use the GPU for my brand new Java project, which is to create a game and the game engine itself (I think it is a very good way to learn in depth how it works).
I was using multi-threading on the CPU with java.awt.Graphics2D to display my game, but I observed on other PCs that the game was running below 40 FPS, so I decided to learn how to use the GPU (I will still be rendering all objects in a for loop and then drawing the image on screen).
For that reason, following the OpenCL documentation and the JOCL samples, I started to code a small, simple test which is to paint a texture onto the background image (let's assume that every entity has a texture).
This method is called on each render call, and it is given the background, the texture, and the position of this entity as arguments.
Both code snippets below have been updated to follow ProjectPhysX's recommendations.
public static void XXX(final BufferedImage output_image, final BufferedImage input_image, float x, float y) {
cl_image_format format = new cl_image_format();
format.image_channel_order = CL_RGBA;
format.image_channel_data_type = CL_UNSIGNED_INT8;
//allocate output pointer
cl_image_desc output_description = new cl_image_desc();
output_description.buffer = null; //must be null for 2D image
output_description.image_depth = 0; //is only used if the image is a 3D image
output_description.image_row_pitch = 0; //must be 0 if host_ptr is null
output_description.image_slice_pitch = 0; //must be 0 if host_ptr is null
output_description.num_mip_levels = 0; //must be 0
output_description.num_samples = 0; //must be 0
output_description.image_type = CL_MEM_OBJECT_IMAGE2D;
output_description.image_width = output_image.getWidth();
output_description.image_height = output_image.getHeight();
output_description.image_array_size = output_description.image_width * output_description.image_height;
cl_mem output_memory = clCreateImage(context, CL_MEM_WRITE_ONLY, format, output_description, null, null);
//set up first kernel arg
clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(output_memory));
//allocates input pointer
cl_image_desc input_description = new cl_image_desc();
input_description.buffer = null; //must be null for 2D image
input_description.image_depth = 0; //is only used if the image is a 3D image
input_description.image_row_pitch = 0; //must be 0 if host_ptr is null
input_description.image_slice_pitch = 0; //must be 0 if host_ptr is null
input_description.num_mip_levels = 0; //must be 0
input_description.num_samples = 0; //must be 0
input_description.image_type = CL_MEM_OBJECT_IMAGE2D;
input_description.image_width = input_image.getWidth();
input_description.image_height = input_image.getHeight();
input_description.image_array_size = input_description.image_width * input_description.image_height;
DataBufferInt input_buffer = (DataBufferInt) input_image.getRaster().getDataBuffer();
int input_data[] = input_buffer.getData();
cl_mem input_memory = clCreateImage(context, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR, format, input_description, Pointer.to(input_data), null);
//loads the input buffer to the gpu memory
long[] input_origin = new long[] { 0, 0, 0 };
long[] input_region = new long[] { input_image.getWidth(), input_image.getHeight(), 1 };
int input_row_pitch = input_image.getWidth() * Sizeof.cl_uint; //the length of each row in bytes
clEnqueueWriteImage(commandQueue, input_memory, CL_TRUE, input_origin, input_region, input_row_pitch, 0, Pointer.to(input_data), 0, null, null);
//set up second kernel arg
clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(input_memory));
//set up third and fourth kernel args
clSetKernelArg(kernel, 2, Sizeof.cl_float, Pointer.to(new float[] { x }));
clSetKernelArg(kernel, 3, Sizeof.cl_float, Pointer.to(new float[] { y }));
//blocks until all previously queued commands have completed
clFinish(commandQueue);
//enqueue the program execution
long[] globalWorkSize = new long[] { input_description.image_width, input_description.image_height };
clEnqueueNDRangeKernel(commandQueue, kernel, 2, null, globalWorkSize, null, 0, null, null);
//transfer the output result back to host
DataBufferInt output_buffer = (DataBufferInt) output_image.getRaster().getDataBuffer();
int output_data[] = output_buffer.getData();
long[] output_origin = new long[] { 0, 0, 0 };
long[] output_region = new long[] { output_description.image_width, output_description.image_height, 1 };
int output_row_pitch = output_image.getWidth() * Sizeof.cl_uint;
clEnqueueReadImage(commandQueue, output_memory, CL_TRUE, output_origin, output_region, output_row_pitch, 0, Pointer.to(output_data), 0, null, null);
//free pointers
clReleaseMemObject(input_memory);
clReleaseMemObject(output_memory);
}
And here's the program source that runs as the kernel.
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
__kernel void drawImage(__write_only image2d_t dst_image, __read_only image2d_t src_image, float xoff, float yoff)
{
const int x = get_global_id(0);
const int y = get_global_id(1);
int2 in_coords = (int2) { x, y };
uint4 pixel = read_imageui(src_image, sampler, in_coords);
pixel = -16184301;
printf("%d, %d, %u\n", x, y, pixel);
const int sx = get_global_size(0);
const int sy = get_global_size(1);
int2 out_coords = (int2) { ((int) xoff + x) % sx, ((int) yoff + y) % sy};
write_imageui(dst_image, out_coords, pixel);
}
Without the call to write_imageui, the background is painted black; otherwise it is white.
At the moment, I am struggling to understand why pixel = 0 in the kernel function, but I think that someone familiar with JOCL would spot my error very quickly. I have stared at this code all day and I don't feel like I will ever catch the mistake myself, so I am asking for your help to review it.
Try
const int sx = get_global_size(0);
const int sy = get_global_size(1);
int2 out_coords = (int2) { ((int) xoff + x) % sx, ((int) yoff + y) % sy };
to avoid errors or undefined behaviour. Right now you are writing into nirvana if the coordinate plus offset is outside the image region. Also, there is no clEnqueueWriteImage before the kernel is called, so src_image on the GPU is undefined and may contain random values.
OpenCL requires kernel parameters to be declared in global memory space:
__kernel void drawImage(global image2d_t dst_image, global image2d_t src_image, global float xoff, global float yoff)
Also, as someone who has written a graphics engine in Java and C++ and GPU-parallelized one in OpenCL, let me give you some guidance. In the Java code, you probably use the painter's algorithm: make a list of all drawn objects with their approximate z-coordinates, sort the objects by z-coordinate, and draw them back-to-front in a single for loop. On the GPU, the painter's algorithm won't work, as you cannot parallelize it. Instead you have a list of objects (lines/triangles) in 3D space, and you parallelize over this list: each GPU thread rasterizes a single triangle, all threads at the same time, and all of them draw pixels onto the frame at the same time. To solve the drawing-order problem, you use a z-buffer: an image holding a z-coordinate per pixel. During rasterization of a line/triangle, you calculate the z-coordinate for every pixel, and only if it is larger than the one previously in the z-buffer at that pixel do you draw the new color.
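As a minimal illustration of that z-buffer test (a sketch, not the answerer's code; width, zbuffer and frame are hypothetical fields), every pixel write first compares against the stored depth:
int width;          // hypothetical framebuffer width
float[] zbuffer;    // one z value per pixel, initialised to -Infinity
int[] frame;        // ARGB colour per pixel
void plot(int x, int y, float z, int argb) {
    int i = y * width + x;
    if (z > zbuffer[i]) { // only draw if nearer than what is already stored,
        zbuffer[i] = z;   // following the convention above where larger z wins
        frame[i] = argb;
    }
}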
Regarding performance: java.awt.Graphics2D is very efficient in terms of CPU usage; you can do ~40k triangles per frame at 60 FPS. With OpenCL, expect ~30M triangles per frame at 60 FPS.

Detecting costs/variables resulting in unbounded optimization problem in ojAlgo

I am using the ojAlgo linear/quadratic solver via ExpressionsBasedModel to solve the layout of graphical elements in a plotting library so that they fit neatly into the screen boundaries. Specifically, I want to solve for scale and translation so that the coordinates of a scatter plot fill up the screen space. I do that by declaring scale and translation variables on the ExpressionsBasedModel, transforming the scatter plot coordinates to the screen using those variables, and then constructing linear constraints requiring that the transformed coordinates project inside the screen. I also add a negative cost to the scale variables, so that they are maximized and the scatter plot covers as much screen space as possible.
My problem is that in some special cases, for example if I have only one point to plot, this results in an unbounded problem where the scale goes towards infinity without any constraint being active. How can I detect the scale variables for which this would happen and fix them to some default values?
To illustrate the above problem, I constructed a toy plotting library (the full library that I am working on is too big to fit in this question). To help lay out the graphical elements, I have a Problem class:
class Problem {
private ArrayList<Variable> _scale_variables = new ArrayList<Variable>();
private ExpressionsBasedModel _model = new ExpressionsBasedModel();
Variable freeVariable() {
return _model.addVariable();
}
Variable scaleVariable() {
Variable x = _model.addVariable();
x.lower(0.0); // Negative scale not allowed
_scale_variables.add(x);
return x;
}
Expression expr() {
return _model.addExpression();
}
Result solve() {
for (Variable scale_var: _scale_variables) {
// This may result in an unbounded solution for degenerate cases.
Expression expr = _model.addExpression("Encourage-larger-scale");
expr.set(scale_var, -1.0);
expr.weight(1.0);
}
return _model.minimise();
}
}
It wraps an ExpressionsBasedModel and has some facilities to create variables. For the transform that I will use to map my scatter point coordinates to screen coordinates, I have this class:
class Transform2d {
Variable x_scale;
Variable y_scale;
Variable x_translation;
Variable y_translation;
Transform2d(Problem problem) {
x_scale = problem.scaleVariable();
y_scale = problem.scaleVariable();
x_translation = problem.freeVariable();
y_translation = problem.freeVariable();
}
void respectBounds(double x, double y, double marker_size,
double width, double height,
Problem problem) {
// Respect left and right screen bounds
{
Expression expr = problem.expr();
expr.set(x_scale, x);
expr.set(x_translation, 1.0);
expr.lower(marker_size);
expr.upper(width - marker_size);
}
// Respect top and bottom screen bounds
{
Expression expr = problem.expr();
expr.set(y_scale, y);
expr.set(y_translation, 1.0);
expr.lower(marker_size);
expr.upper(height - marker_size);
}
}
}
The respectBounds method is used to add the constraints for a single point in the scatter plot to the Problem class mentioned before. To add all the points of a scatter plot, I have this function:
void addScatterPoints(
double[] xy_pairs,
// How much space every marker occupies
double marker_size,
Transform2d transform_to_screen,
// Screen size
double width, double height,
Problem problem) {
int data_count = xy_pairs.length/2;
for (int i = 0; i < data_count; i++) {
int offset = 2*i;
double x = xy_pairs[offset + 0];
double y = xy_pairs[offset + 1];
transform_to_screen.respectBounds(x, y, marker_size, width, height, problem);
}
}
First, let's look at what a non-degenerate case looks like. I specify the screen size and the size of the markers used for the scatter plot. I also specify the data to plot, build the problem and solve it. Here is the code
Problem problem = new Problem();
double marker_size = 4;
double width = 800;
double height = 600;
double[] data_to_plot = new double[] {
1.0, 2.0,
4.0, 9.3,
7.0, 4.5};
Transform2d transform = new Transform2d(problem);
addScatterPoints(data_to_plot, marker_size, transform, width, height, problem);
Result result = problem.solve();
System.out.println("Solution: " + result);
which prints out Solution: OPTIMAL -81.0958904109589 # { 0, 81.0958904109589, 795.99999999999966, -158.19178082191794 }.
This is what a degenerate case looks like, plotting two points with the same y-coordinate:
Problem problem = new Problem();
double marker_size = 4;
double width = 800;
double height = 600;
double[] data_to_plot = new double[] {
1, 1,
9, 1
};
Transform2d transform = new Transform2d(problem);
addScatterPoints(data_to_plot, marker_size, transform, width, height, problem);
Result result = problem.solve();
System.out.println("Solution: " + result);
It displays Solution: UNBOUNDED -596.0 # { 88.44444444444444, 596, 0, 0 }.
As mentioned before, my question is: how can I detect the scale variables whose negative cost would result in an unbounded solution, and constrain them to some default value, so that my solution is not unbounded?
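As a sketch (not part of the original post): ojAlgo exposes the solver state on the returned Result (Optimisation.State, from org.ojalgo.optimisation), so the Problem.solve() method above could at least detect the degenerate case after the fact, pin the scale variables to a default level, and re-solve:
Result solve() {
    for (Variable scale_var : _scale_variables) {
        Expression expr = _model.addExpression("Encourage-larger-scale");
        expr.set(scale_var, -1.0);
        expr.weight(1.0);
    }
    Result result = _model.minimise();
    if (result.getState() == Optimisation.State.UNBOUNDED) {
        // fallback: fix each scale variable to a default value (lower == upper)
        for (Variable scale_var : _scale_variables) {
            scale_var.level(1.0);
        }
        result = _model.minimise(); // the model is now bounded
    }
    return result;
}
This does not identify which individual scale variable caused the unboundedness, only that the model as a whole came back UNBOUNDED.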

OpenGL - difficulty with constructing shapes

I'm starting work on a simple shape-batching system for my 3D engine that will enable me to draw lines and rectangles, etc... with a lower draw call count. I think I've got the basic ideas figured out for the most part, but I'm having problems when I try to draw multiple objects (currently just lines with a thickness you can specify).
Here's a screenshot to show you what I mean:
I'm using indexed rendering with glDrawElements, and two VBOs to represent the vertex data - one for positions, and one for colours.
I construct a line for my shape-batcher by specifying start and end points, like so:
shapeRenderer.begin();
shapeRenderer.setViewMatrix(viewMatrix);
shapeRenderer.setProjectionMatrix(projectionMatrix);
shapeRenderer.setCurrentColour(0, 1f, 0);
shapeRenderer.drawLine(2, 2, 5, 2);
shapeRenderer.setCurrentColour(0, 1f, 1f);
shapeRenderer.drawLine(2, 5, 5, 5);
shapeRenderer.end();
The first line, represented in green in the screenshot, shows perfectly. If I draw only one line it's completely fine. If I were to draw only the second line it would show perfectly as well.
When I call drawLine the following code executes, which I use to compute directions and normals:
private Vector2f temp2fA = new Vector2f();
private Vector2f temp2fB = new Vector2f();
private Vector2f temp2fDir = new Vector2f();
private Vector2f temp2fNrm = new Vector2f();
private Vector2f temp2fTMP = new Vector2f();
private boolean flip = false;
public void drawLine(float xStart, float yStart, float xEnd, float yEnd){
resetLineStates();
temp2fA.set(xStart, yStart);
temp2fB.set(xEnd, yEnd);
v2fDirection(temp2fA, temp2fB, temp2fDir);
v2fNormal(temp2fDir, temp2fNrm);
float halfThickness = currentLineThickness / 2;
//System.out.println("new line called");
v2fScaleAndAdd(temp2fB, temp2fNrm, -halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fB, temp2fNrm, halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fA, temp2fNrm, halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
v2fScaleAndAdd(temp2fA, temp2fNrm, -halfThickness, temp2fTMP);
pushVertex(temp2fTMP);
//System.out.println(indexCount + " before rendering.");
int index = indexCount;
pushIndices(index, index + 1, index + 3);
pushIndices(index + 1, index + 2, index + 3);
//System.out.println(indexCount + " after rendering.");
}
private void resetLineStates(){
temp2fA.set(0);
temp2fB.set(0);
temp2fDir.set(0);
temp2fNrm.set(0);
temp2fTMP.set(0);
}
pushIndices is the following function:
private void pushIndices(int i1, int i2, int i3){
shapeIndices.add(i1);
shapeIndices.add(i2);
shapeIndices.add(i3);
indexCount += 3;
}
And pushVertex works like so:
private void pushVertex(float x, float y, float z){
shapeVertexData[vertexDataOffset] = x;
shapeColourData[vertexDataOffset] = currentShapeColour.x;
shapeVertexData[vertexDataOffset + 1] = y;
shapeColourData[vertexDataOffset + 1] = currentShapeColour.y;
shapeVertexData[vertexDataOffset + 2] = z;
shapeColourData[vertexDataOffset + 2] = currentShapeColour.z;
//System.out.println("\tpushed vertex: " + data.x + ", " + data.y + ", 0");
vertexDataOffset += 3;
}
I'm using the following fields to store vertex data and such; this is all sub-buffered to a VBO when I flush the batch. If the vertex data arrays have not had to grow in size, I sub-buffer them to their respective VBOs (likewise with the element buffer); otherwise, if they have had to grow, I re-buffer the VBOs to fit.
private float[] shapeVertexData;
private float[] shapeColourData;
private int vertexDataOffset;
private ArrayList<Integer> shapeIndices;
private int indexCount;
When I use my debugger in IDEA, the vertex data appears completely correct in the arrays I'm constructing, but when I explore it in RenderDoc, it's wrong. I don't understand what I'm doing wrong to get these results: the first two vertices appear completely fine even for the second rectangle, but the others are totally wrong.
I'm confident that my shaders are not the problem, as they're very simple, but here they are:
shape_render.vs (vertex shader):
#version 330
layout (location = 0) in vec3 aPosition;
layout (location = 1) in vec3 aColour;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
flat out vec3 shapeFill;
void main(){
shapeFill = aColour;
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(aPosition.x, aPosition.y, 0.0, 1.0);
}
shape_render.fs (fragment shader):
#version 330
layout (location = 0) out vec4 fragColour;
in vec3 shapeFill;
void main(){
fragColour = vec4(shapeFill, 1);
}
I think I've just about explained it to the best of my knowledge; any insight would be greatly appreciated. I've already checked and determined that I'm enabling the necessary vertex arrays, etc., and rendering the correct amount of indices (12).
Thanks so much for having a look at this for me.
I figured it out after thinking about it for a while longer. It was to do with how I was specifying the indices: I was using the correct amount of indices, but specifying them incorrectly.
For argument's sake, the first rectangle starts at a base index of 0, and with four vertices its indices would be 0,1,3 and 1,2,3 (for example). However, for each new rectangle I was starting the base index at the old count plus six, which makes sense for addressing an array of indices, but because each rectangle only adds four vertices, I was pointing indices at vertex data that didn't exist.
So instead of using indexCount += 3 each time I push indices, I'll get the current count of vertices instead and build my indices from that.
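A minimal sketch of that fix, reusing the fields from the question (vertexDataOffset advances by three floats per vertex, so the vertex count is vertexDataOffset / 3):
// Base each quad's indices on the number of vertices pushed so far
// (4 per quad), not on the number of indices pushed (6 per quad).
int base = vertexDataOffset / 3;
pushIndices(base, base + 1, base + 3);
pushIndices(base + 1, base + 2, base + 3);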

Why is my PVector which should remain constant being modified in this sketch

I am building a sketch which at the moment has two classes: walkers and emitters. The walkers do a random Perlin-noise walk and fade out over time. The emitters are meant to be the points at which the walkers are emitted, and have a PVector 'position' describing the emitter location and a float 'density' describing the density of walkers emitted per frame of animation.
I have two problems. The first and most serious problem is that for some reason the position PVector in my emitter class is varying over time (looks as if I'm somehow making it also randomly walk). How is this happening? Notice in the emit() method I have a commented line which forces the same PVector each time and this works precisely as intended.
The second problem is that the walkers seem to have a tendency both to drift on a north-easterly bearing and to be loosely bound in a square. I have no idea what causes this behavior, so any insights would be much appreciated.
Cheers!
CODE:
ArrayList<Walker> walkers;
ArrayList<Emitter> emitters;
int tmax = 1200;
int stepSize = 2;
int nWalkers = 50;
void setup(){
size(1024,1024);
frameRate(60);
walkers = new ArrayList<Walker>();
emitters = new ArrayList<Emitter>();
emitters.add(new Emitter(new PVector(width/2, height/2), 0.5));
}
void draw() {
for (Emitter e: emitters){
e.emit();
}
fill(255, 50); // alpha will control fade-out
noStroke();
rect(0, 0, width, height); // Creates fading tail for walkers
for(int i = walkers.size() - 1; i>=0; i--){
Walker w = (Walker) walkers.get(i);
if(w.time > tmax) {
walkers.remove(i);
}
w.walk();
w.displayline();
}
}
class Emitter {
PVector position;
float density;
Emitter(PVector positionIni, float densityIni) {
position = positionIni;
density = densityIni;
}
void emit() {
if(random(1000) > map(density, 0, 1, 0, 1000)) {
walkers.add(new Walker(position, new PVector(random(-10,10), random(-10,10)))); // DOESN'T WORK
//walkers.add(new Walker(new PVector(width/2, height/2), new PVector(random(-10,10), random(-10,10))));
}
}
}
class Walker {
PVector location, plocation;
PVector noff, step;
int time;
Walker(PVector locationIni, PVector noffIni) {
location = locationIni;
plocation = new PVector(location.x, location.y);
noff = noffIni;
step = new PVector(map(noise(noff.x), 0, 1, -stepSize, stepSize), map(noise(noff.y), 0, 1, -stepSize, stepSize));
time = 0;
}
void displayline() {
strokeWeight(1);
fill(map(time, 0, tmax, 0, 255));
stroke(map(time, 0, tmax, 0, 255));
//ellipse(location.x, location.y,1,1);
line(plocation.x, plocation.y,location.x, location.y);
time++;
}
void walk() {
plocation.x = location.x;
plocation.y = location.y;
step.x = map(noise(noff.x), 0, 1, -stepSize, stepSize);
step.y = map(noise(noff.y), 0, 1, -stepSize, stepSize);
location.add(step);
noff.x += 0.05;
noff.y += 0.05;
}
}
I have two problems. The first and most serious problem is that for some reason the position PVector in my emitter class is varying over time (looks as if I'm somehow making it also randomly walk). How is this happening? Notice in the emit() method I have a commented line which forces the same PVector each time and this works precisely as intended.
You've described the problem exactly. You're passing in the position variable to your Walker class, and then the Walker class is changing that PVector in this line:
location.add(step);
Since you're changing the variable passed in, you're changing the original position variable. That's why it works fine if you pass in a different PVector instance.
To fix this problem, you might want to see the copy() function of the PVector class. More info can be found in the reference.
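For instance (a sketch, not the answerer's code), emit() could hand each Walker its own copy of the emitter position:
// each Walker now mutates its own PVector instead of the emitter's position
walkers.add(new Walker(position.copy(), new PVector(random(-10,10), random(-10,10))));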
The second problem is that the walkers seem to have a tendency to both drift on a north-easterly bearing and also seem to be loosely bound in a square.
The square is happening because your max bounds are a square. Think about the maximum possible values your positions could take. That forms a square, so if you have a bunch of positions in that square, you'll start to see the square shape. To fix this, you'll have to use some basic trigonometry to make the maximum a circle instead. Basically: store a heading and a velocity, and then use cos() and sin() to calculate the position. Google is your friend here.
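A minimal sketch of that idea, reusing the fields from the question (one noise channel drives the heading, so every step has the same length and the reachable region is a circle rather than a square):
// derive a heading angle from noise, then step a fixed distance along it
float heading = map(noise(noff.x), 0, 1, 0, TWO_PI);
step.set(stepSize * cos(heading), stepSize * sin(heading));
location.add(step);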
If you notice them moving towards a direction, that means your random number generation is off. Try splitting this line up into multiple steps to track down exactly where that bias comes from:
step = new PVector(map(noise(noff.x), 0, 1, -stepSize, stepSize), map(noise(noff.y), 0, 1, -stepSize, stepSize));
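Something like this (a sketch) makes each stage visible, so you can print the intermediate values and see where the bias creeps in:
float nx = noise(noff.x); // raw noise; should average near 0.5
float ny = noise(noff.y);
float sx = map(nx, 0, 1, -stepSize, stepSize); // mapped step; should average near 0
float sy = map(ny, 0, 1, -stepSize, stepSize);
println(nx + ", " + ny + " -> " + sx + ", " + sy);
step = new PVector(sx, sy);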

Java 3D rotations not working

I've had this old graphics project lying around (written in Oberon), and since I wrote it as one of my first projects, it looks kind of chaotic.
So I decided that, since I'm bored anyway, I would rewrite it in Java.
Everything so far seems to work... until I try to rotate and/or do my eye-point transformation.
If I ignore said operations, the image comes out just fine, but the moment I try to do any of the operations that require me to multiply a point with a transformation matrix, it all goes bad.
The eye-point transformation generates stupidly small numbers, with end coordinates like [-0.002027571306540029, 0.05938634628270456, -123.30022583847628].
This causes the resulting image to look empty, but if I multiply each point by 1000 it turns out it's just very, very small and, instead of being rotated, has just been translated in some (seemingly) random direction.
If I then ignore the eye point and simply focus on my rotations, the results are also pretty strange (note: the image auto-scales depending on the range of coordinates):
setting xRotation to 90° only makes the image very narrow and way too high (resolution should be about 1000x1000 and is then 138x1000)
setting yRotation to 90° makes it very wide (1000x138)
setting zRotation to 90° simply seems to translate the image all the way to the right side of the screen.
What I have checked so far:
I have checked and re-checked my rotation matrices at least 15 times now, so they are (probably) correct
doing a test multiplication with a vector and the identity matrix does return the original vector
my matrices are initialized as identity matrices prior to being used as rotation matrices
the angles in the files are in degrees but are converted to radians when read.
Having said that, I have 2 more notes:
a vector in this case is a simple 3-value array of doubles (representing the x, y and z values)
a matrix is a 4x4 array of doubles initialized as the identity matrix
When trying to rotate the points I do it in this order:
scale (multiplying with a scale factor)
rotate along x-axis
rotate along y-axis
rotate along z-axis
translate
do eye-point transformation
then, if the point isn't already on the z-plane, project it
like so:
protected void rotate() throws ParseException
{
Matrix rotate_x = Transformations.x_rotation(rotateX);
Matrix rotate_y = Transformations.y_rotation(rotateY);
Matrix rotate_z = Transformations.z_rotation(rotateZ);
Matrix translate = Transformations.translation(center.x(), center.y(), center.z());
for(Vector3D point : points)
{
point = Vector3D.mult(point, scale);
point = Vector3D.mult(point, rotate_x);
point = Vector3D.mult(point, rotate_y);
point = Vector3D.mult(point, rotate_z);
point = Vector3D.mult(point, translate);
point = Vector3D.mult(point, eye);
if(point.z() != 0)
{
point.setX(point.x()/(-point.z()));
point.setY(point.y()/(-point.z()));
}
checkMinMax(point);
}
}
here's the code that initializes the rotation matrices if you're interested:
public static Matrix eye_transformation(Vector3D eye)throws ParseException
{
double r = eye.length();
double teta = Math.atan2(eye.y(), eye.x());
double zr = eye.z()/r;
double fi = Math.acos(zr);
Matrix v = new Matrix();
v.set(0, 0, -Math.sin(teta));
v.set(0, 1, -Math.cos(teta) * Math.cos(fi));
v.set(0, 2, Math.cos(teta) * Math.sin(fi));
v.set(1, 0, Math.cos(teta));
v.set(1, 1, -Math.sin(teta) * Math.cos(fi));
v.set(1, 2, Math.sin(teta) * Math.sin(fi));
v.set(2, 1, Math.sin(fi));
v.set(2, 2, Math.cos(fi));
v.set(3, 2, -r);
return v;
}
public static Matrix z_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(0, 0, Math.cos(angle));
v.set(0, 1, Math.sin(angle));
v.set(1, 0, -Math.sin(angle));
v.set(1, 1, Math.cos(angle));
return v;
}
public static Matrix x_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(1, 1, Math.cos(angle));
v.set(1, 2, Math.sin(angle));
v.set(2, 1, -Math.sin(angle));
v.set(2, 2, Math.cos(angle));
return v;
}
public static Matrix y_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(0, 0, Math.cos(angle));
v.set(0, 2, -Math.sin(angle));
v.set(2, 0, Math.sin(angle));
v.set(2, 2, Math.cos(angle));
return v;
}
public static Matrix translation(double a, double b, double c) throws ParseException
{
Matrix v = new Matrix();
v.set(3, 0, a);
v.set(3, 1, b);
v.set(3, 2, c);
return v;
}
And the actual method that multiplies a point with a rotation matrix
note: NR_DIMS is defined as 3.
public static Vector3D mult(Vector3D lhs, Matrix rhs) throws ParseException
{
if(rhs.get(0, 3)!=0 || rhs.get(1, 3)!=0 || rhs.get(2, 3)!=0 || rhs.get(3, 3)!=1)
throw new ParseException("the matrix multiplificiation thingy just borked");
Vector3D ret = new Vector3D();
double[] vec = new double[NR_DIMS];
double[] temp = new double[NR_DIMS+1];
temp[0] = lhs.x;
temp[1] = lhs.y;
temp[2] = lhs.z;
temp[3] = lhs.infty? 0:1;
for (int i = 0; i < NR_DIMS; i++)
{
vec[i] = 0;
// Multiply the original vector with the i-th column of the matrix.
for (int j = 0; j <= NR_DIMS; j++)
{
vec[i] += temp[j] * rhs.get(j,i);
}
}
ret.x = vec[0];
ret.y = vec[1];
ret.z = vec[2];
ret.infty = lhs.infty;
return ret;
}
I've checked and re-checked this code against my old code (note: the old code works), and it's identical when it comes to the operations.
So I'm at a loss here. I did look around for similar questions, but they didn't really provide any useful information.
Thanks :)
Small addition:
if I ignore both the eye point and the rotations (so I only project the image), it comes out perfectly fine.
I can see that the image is complete, apart from the rotations.
Any more suggestions?
A few possible mistakes I can think of:
In the constructor of Matrix, you are not loading the identity matrix.
You are passing your angles in degrees instead of radians.
Does your eye-projection matrix project into a different range than you think? I mean, in OpenGL all projection matrices should project onto the rectangle [(-1,-1),(1,1)]. This rectangle represents the screen.
Mixing up premultiply and postmultiply. That is: I usually do matrix*vector, whereas in your code you seem to be doing vector*matrix, if I'm not mistaken.
Mixing up columns and rows in your Matrix?
I'm going to take another look at your question tomorrow. Hopefully, one of these suggestions helps you.
EDIT: I overlooked you already checked the first two items.
Alright, I'm currently feeling like a complete idiot. The issue was a simple logic error.
The error sits in this part of the code:
for(Vector3D point : points)
{
point = Vector3D.mult(point, scale);
point = Vector3D.mult(point, rotate_x);
point = Vector3D.mult(point, rotate_y);
point = Vector3D.mult(point, rotate_z);
point = Vector3D.mult(point, translate);
point = Vector3D.mult(point, eye);
if(point.z() != 0)
{
point.setX(point.x()/(-point.z()));
point.setY(point.y()/(-point.z()));
}
checkMinMax(point);
}
I forgot that the loop variable in a for-each is just a local reference: Vector3D.mult returns a new instance, and assigning it to point does not change the object stored in the list, so the list still held the untransformed vectors.
So what I have done is simply remove the old entry and replace it with the new one.
Problem solved.
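A minimal sketch of that fix (a reconstruction, using an indexed loop so the transformed point can be written back into the list):
for(int i = 0; i < points.size(); i++)
{
Vector3D point = points.get(i);
point = Vector3D.mult(point, scale);
point = Vector3D.mult(point, rotate_x);
point = Vector3D.mult(point, rotate_y);
point = Vector3D.mult(point, rotate_z);
point = Vector3D.mult(point, translate);
point = Vector3D.mult(point, eye);
if(point.z() != 0)
{
point.setX(point.x()/(-point.z()));
point.setY(point.y()/(-point.z()));
}
points.set(i, point); // write the transformed point back into the list
checkMinMax(point);
}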
