My goal is to use the GPU in my brand-new Java project, which is to create a game and the game engine itself (I think this is a very good way to learn in depth how they work).
I was using multi-threading on the CPU with java.awt.Graphics2D to display my game, but I observed on other PCs that the game ran below 40 FPS, so I decided to learn how to use the GPU (I will still render all objects in a for loop and then draw the image on screen).
To that end, following the OpenCL documentation and the JOCL samples, I coded a small test that paints a texture onto the background image (let's assume that every entity has a texture).
This method is called on each render call and is given the background, the texture, and the position of the entity as arguments.
Both code listings below have been updated to follow @ProjectPhysX's recommendations.
public static void XXX(final BufferedImage output_image, final BufferedImage input_image, float x, float y) {
cl_image_format format = new cl_image_format();
format.image_channel_order = CL_RGBA;
format.image_channel_data_type = CL_UNSIGNED_INT8;
//allocate the output image
cl_image_desc output_description = new cl_image_desc();
output_description.buffer = null; //must be null for 2D image
output_description.image_depth = 0; //is only used if the image is a 3D image
output_description.image_row_pitch = 0; //must be 0 if host_ptr is null
output_description.image_slice_pitch = 0; //must be 0 if host_ptr is null
output_description.num_mip_levels = 0; //must be 0
output_description.num_samples = 0; //must be 0
output_description.image_type = CL_MEM_OBJECT_IMAGE2D;
output_description.image_width = output_image.getWidth();
output_description.image_height = output_image.getHeight();
output_description.image_array_size = output_description.image_width * output_description.image_height;
cl_mem output_memory = clCreateImage(context, CL_MEM_WRITE_ONLY, format, output_description, null, null);
//set up first kernel arg
clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(output_memory));
//allocate the input image
cl_image_desc input_description = new cl_image_desc();
input_description.buffer = null; //must be null for 2D image
input_description.image_depth = 0; //is only used if the image is a 3D image
input_description.image_row_pitch = 0; //must be 0 if host_ptr is null
input_description.image_slice_pitch = 0; //must be 0 if host_ptr is null
input_description.num_mip_levels = 0; //must be 0
input_description.num_samples = 0; //must be 0
input_description.image_type = CL_MEM_OBJECT_IMAGE2D;
input_description.image_width = input_image.getWidth();
input_description.image_height = input_image.getHeight();
input_description.image_array_size = input_description.image_width * input_description.image_height;
DataBufferInt input_buffer = (DataBufferInt) input_image.getRaster().getDataBuffer();
int input_data[] = input_buffer.getData();
cl_mem input_memory = clCreateImage(context, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR, format, input_description, Pointer.to(input_data), null);
//copy the input pixels to GPU memory
long[] input_origin = new long[] { 0, 0, 0 };
long[] input_region = new long[] { input_image.getWidth(), input_image.getHeight(), 1 };
int input_row_pitch = input_image.getWidth() * Sizeof.cl_uint; //the length of each row in bytes
clEnqueueWriteImage(commandQueue, input_memory, CL_TRUE, input_origin, input_region, input_row_pitch, 0, Pointer.to(input_data), 0, null, null);
//set up second kernel arg
clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(input_memory));
//set up third and fourth kernel args
clSetKernelArg(kernel, 2, Sizeof.cl_float, Pointer.to(new float[] { x }));
clSetKernelArg(kernel, 3, Sizeof.cl_float, Pointer.to(new float[] { y }));
//block until all previously queued commands have completed
clFinish(commandQueue);
//enqueue the program execution
long[] globalWorkSize = new long[] { input_description.image_width, input_description.image_height };
clEnqueueNDRangeKernel(commandQueue, kernel, 2, null, globalWorkSize, null, 0, null, null);
//transfer the output result back to host
DataBufferInt output_buffer = (DataBufferInt) output_image.getRaster().getDataBuffer();
int output_data[] = output_buffer.getData();
long[] output_origin = new long[] { 0, 0, 0 };
long[] output_region = new long[] { output_description.image_width, output_description.image_height, 1 };
int output_row_pitch = output_image.getWidth() * Sizeof.cl_uint;
clEnqueueReadImage(commandQueue, output_memory, CL_TRUE, output_origin, output_region, output_row_pitch, 0, Pointer.to(output_data), 0, null, null);
//release the image memory objects
clReleaseMemObject(input_memory);
clReleaseMemObject(output_memory);
}
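One side note for debugging code like this: JOCL's cl* calls return error codes that are easy to miss. Enabling JOCL's exception mode (a one-line switch during setup, sketched below) makes every failing call throw a CLException, which points straight at an invalid descriptor or argument:
// once during OpenCL initialization, before creating buffers and kernels:
CL.setExceptionsEnabled(true); // failing cl* calls now throw org.jocl.CLException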
And here is the kernel source that runs on the device.
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
__kernel void drawImage(__write_only image2d_t dst_image, __read_only image2d_t src_image, float xoff, float yoff)
{
const int x = get_global_id(0);
const int y = get_global_id(1);
int2 in_coords = (int2) { x, y };
uint4 pixel = read_imageui(src_image, sampler, in_coords);
pixel = -16184301; // debug: overwrite the pixel that was just read with a constant color
printf("%d, %d, %u\n", x, y, pixel.x); // print one channel; "%u" cannot print a whole uint4
const int sx = get_global_size(0);
const int sy = get_global_size(1);
int2 out_coords = (int2) { ((int) xoff + x) % sx, ((int) yoff + y) % sy};
write_imageui(dst_image, out_coords, pixel);
}
Without the call to write_imageui, the background is painted black; otherwise it is white.
At the moment I am struggling to understand why pixel = 0 inside the kernel, but I think someone familiar with JOCL would spot my error very quickly. I have been staring at this code all day without luck, so I am asking you to review it. I feel a bit silly that I can't figure it out at this point.
Try
const int sx = get_global_size(0);
const int sy = get_global_size(1);
int2 out_coords = (int2) { ((int) xoff + x) % sx, ((int) yoff + y) % sy };
to avoid errors or undefined behaviour. Right now you are writing into nirvana if the coordinate + offset is outside the image region. Also make sure there is a clEnqueueWriteImage before the kernel is called; otherwise src_image on the GPU is undefined and may contain random values.
OpenCL requires pointer kernel parameters to be declared in the global memory space (e.g. global float*). image2d_t parameters instead take the __read_only/__write_only access qualifiers, and scalars such as the offsets are passed by value:
__kernel void drawImage(__write_only image2d_t dst_image, __read_only image2d_t src_image, float xoff, float yoff)
Also, as someone who has written a graphics engine in Java and C++ and GPU-parallelized it in OpenCL, let me give you some guidance: in the Java code you probably use painter's algorithm: make a list of all drawn objects with their approximate z-coordinates, sort the objects by z-coordinate and draw them back-to-front in a single for loop. On the GPU, painter's algorithm won't work, as you cannot parallelize it. Instead you have a list of objects (lines/triangles) in 3D space, and you parallelize over this list: each GPU thread rasterizes a single triangle, all threads at the same time, and they draw their pixels onto the frame at the same time. To solve the drawing-order problem, you use a z-buffer: an image holding one z-coordinate per pixel. During rasterization of a line/triangle, you calculate the z-coordinate for every pixel, and only if it is larger than the value previously stored in the z-buffer at that pixel do you draw the new color (see the sketch below).
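To make the z-buffer idea concrete, here is a minimal single-threaded sketch of the per-pixel depth test (illustrative names only, not from your code; a real GPU version needs an atomic compare per pixel):
// Minimal z-buffer sketch: keep a pixel only if it is closer than what is
// already stored at that position ("larger z wins", as described above).
final class ZBuffer {
    final int width, height;
    final float[] depth; // one z value per pixel
    final int[] frame;   // one ARGB color per pixel

    ZBuffer(int width, int height) {
        this.width = width;
        this.height = height;
        this.depth = new float[width * height];
        this.frame = new int[width * height];
        java.util.Arrays.fill(depth, Float.NEGATIVE_INFINITY);
    }

    // Called for every pixel produced while rasterizing a line/triangle.
    void plot(int x, int y, float z, int argb) {
        int i = y * width + x;
        if (z > depth[i]) {
            depth[i] = z;
            frame[i] = argb;
        }
    }
}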
Regarding performance: java.awt.Graphics2D is very efficient in terms of CPU usage; you can do ~40k triangles per frame at 60 FPS. With OpenCL, expect ~30M triangles per frame at 60 FPS.
I have an int array where each value stores a bit-packed RGB value (8 bits per channel) and alpha is always 255 (opaque), and I want to display that in JavaFX.
My current approach is using a canvas like this:
GraphicsContext graphics = canvas.getGraphicsContext2D();
PixelWriter pw = graphics.getPixelWriter();
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), pixels, 0, width);
However, before that I actually have to set the alpha component of each pixel by iterating over the array and OR'ing each pixel with a mask that turns it from RGB to ARGB, like this:
for (int i = 0; i < pixels.length; i++) {
pixels[i] = 0xFF000000 | pixels[i];
}
Is there a more efficient way to do this (as the pixels array is updated many times every second)?
I was hoping there is an IntRgbInstance, but unfortunately there isn't (only ByteRgbInstance).
Other approaches I've tested:
Approach 1: creating an IntBuffer that is filled like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length); // capacity is in ints, not bytes
for (int pixel : pixels) {
buffer.put(0xFF000000 | pixel);
}
And then generating a PixelBuffer that uses this buffer; the PixelBuffer is then passed to this WritableImage constructor: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer)
and then I display that WritableImage using an ImageView.
This however still didn't speed anything up (it rather made it a bit slower), and I'm guessing that's because I have to construct a new WritableImage instance each time the pixels int array is updated.
Approach 2 (which didn't work for some reason, i.e. it displayed nothing on the screen): creating a buffer the same way as above and using it with one of the setPixels() methods that takes a buffer:
IntBuffer buffer = IntBuffer.allocate(pixels.length); // capacity is in ints, not bytes
for (int pixel : pixels) {
buffer.put(0xFF000000 | pixel);
}
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), buffer, width);
After a bit more research I found out that I don't need to create a new WritableImage instance each time the pixels array is updated; I can just use the updateBuffer method here: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/PixelBuffer.html#updateBuffer(javafx.util.Callback)
So the code currently looks like this:
pb.updateBuffer(b -> {       // 'b' is the PixelBuffer being updated, not a callback
    buffer.clear();
    for (int pixel : pixels) {
        buffer.put(0xFF000000 | pixel);
    }
    return null;             // null = treat the whole buffer as dirty
});
where pb and buffer are created only once, like this:
IntBuffer buffer = IntBuffer.allocate(width * height); // one int per pixel
PixelBuffer<IntBuffer> pb = new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
view.setImage(new WritableImage(pb));
and this did indeed result in a nice speedup (close to 2x compared to my initial approach)
Maybe this https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer) is what you are looking for. You could create a PixelBuffer from an IntBuffer of your data, roughly as in the sketch below.
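Putting the pieces from the question together, a minimal sketch of that setup could look like this (assuming width, height and pixels exist as in the question, and that updateBuffer runs on the JavaFX Application Thread):
import java.nio.IntBuffer;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelBuffer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.WritableImage;

// Created once: the IntBuffer is shared directly with the image, no copies.
IntBuffer buffer = IntBuffer.allocate(width * height);
PixelBuffer<IntBuffer> pb =
        new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
ImageView view = new ImageView(new WritableImage(pb));

// Each frame: mutate the buffer's backing array in place, then tell JavaFX
// which region is dirty (null means the whole image).
pb.updateBuffer(b -> {
    int[] dst = b.getBuffer().array();
    for (int i = 0; i < pixels.length; i++) {
        dst[i] = 0xFF000000 | pixels[i];
    }
    return null;
});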
I am looking to convert a BufferedImage to its corresponding pixel-value array. I found code for that:
public static double[] createArrFromIm(BufferedImage im){
int imWidth = im.getWidth();
int imHeight = im.getHeight();
double[] imArr = new double[imWidth * imHeight];
im.getData().getPixels(0, 0, imWidth, imHeight, imArr);
return imArr;
}
The original author of this code block also gave some sample images which work perfectly with it. However, when I run this block against my images (which are always 125*150), it throws an ArrayIndexOutOfBoundsException at the line:
im.getData().getPixels(0, 0, imWidth, imHeight, imArr);
This seems very arcane to me. Any help or suggestion will be much appreciated. Thanks.
As @FiReTiTi says, you should use the getRaster() method instead of the getData() method, unless you really want a copy of the image data.
However, that is not the cause of the exception. The problem is that your double array only allocates space for a single band (similarly, FiReTiTi's version works because he explicitly passes 0 as the last parameter, requesting only the first band). This is fine for single-band (grayscale) images, but I assume you use RGB, CMYK or another color model with multiple bands.
The fix is to multiply the allocated space by the number of bands, as below:
public static double[] createArrFromIm(BufferedImage im) {
int imWidth = im.getWidth();
int imHeight = im.getHeight();
int imBands = im.getRaster().getNumBands(); // typically 3 or 4, depending on RGB or ARGB
double[] imArr = new double[imWidth * imHeight * imBands];
im.getRaster().getPixels(0, 0, imWidth, imHeight, imArr);
return imArr;
}
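For clarity, the returned array is interleaved by band, so one sample is addressed like this (a usage sketch with hypothetical coordinates):
double[] imArr = createArrFromIm(im);
int width = im.getWidth();
int bands = im.getRaster().getNumBands();
int x = 10, y = 20, b = 0; // hypothetical pixel position; band 0 is red for RGB
double sample = imArr[(y * width + x) * bands + b];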
Do it using the raster:
public static double[] createArrFromIm(BufferedImage im){
int imWidth = im.getWidth();
int imHeight = im.getHeight();
double[] imArr = new double[imWidth * imHeight];
for (int y = 0, nb = 0; y < imHeight; y++)
for (int x = 0; x < imWidth; x++, nb++)
imArr[nb] = im.getRaster().getSampleDouble(x, y, 0);
return imArr;
}
As pointed out by @haraldK, getData() also works, but it returns a copy of the raster, so it's slower.
There is a faster way using the DataBuffer, but then you have to manage the BufferedImage encoding because you have a direct access to the pixel values.
And here is why what you did didn't work: im.getData().getPixels() fills (and returns) an array with one element per sample, not one per pixel, so your array was too small for a multi-band image. If you pass null, it allocates an array of the right size for you. So if you want to use getData() (it's not the best option), you can do:
double[] imArr = im.getData().getPixels(0, 0, imWidth, imHeight, (double[]) null);
I want to get the coordinates of lines with the help of OpenCV in Android. I studied the tutorial, and this is my API call:
Mat ImageMat = new Mat(croppedImage.getHeight(), croppedImage.getWidth(), CvType.CV_8U, new Scalar(4));
int threshold = 50;
int minLineSize = 100;
int lineGap = 20;
Mat lines = new Mat();
Imgproc.HoughLinesP(ImageMat, lines, 1, Math.PI / 180, threshold, minLineSize, lineGap);
I provide a simple image with one line in it, but in the lines variable I get hundreds of coordinates. I want just the one set of coordinates for that single line. How do I get the coordinates of that single line only? Also, in what unit is minLineSize measured? My lines are the lines in front of FirstName, LastName, etc.
Here's code in C++. Since mostly OpenCV functions are used, you might be able to port it to Android OpenCV easily:
int main()
{
// load your image. you don't need these parts
cv::Mat input = cv::imread("../inputData/FormularLineDetection.png");
// convert to grayscale: you will do something similar:
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);
// compute a binary threshold so that dark areas of the image become "foreground" pixels.
// If your image has bright features you'll have to choose different parameters.
// If you want to detect contour lines instead, compute the gradient magnitude first.
cv::Mat mask;
cv::threshold(gray, mask, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);
std::vector<cv::Vec4i> lines;
//cv::HoughLinesP(mask, lines, 1, CV_PI/180.0, 50, 50, 10 );
// I've changed the min size of a line to 1/3 of the image's width. Maybe you'll have to adjust that parameter to your needs!
cv::HoughLinesP(mask, lines, 1, CV_PI/180.0, 50, input.cols/3, 10 );
// draw the lines to visualize: you might not do this at all
for( size_t i = 0; i < lines.size(); i++ )
{
cv::Vec4i l = lines[i];
cv::line( input, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0,0,255), 3, CV_AA);
}
// display and save to disk
cv::imshow("mask", mask); // you might not want to display the image here.
cv::imshow("output",input);
cv::imwrite("../outputData/FormularLineDetection.png", input);
cv::waitKey(0);
return 0;
}
With your input I get this output:
As you can see, your desired lines are detected, but in addition that big thick "line" is detected too. You might want to try to detect structures like that and filter them out, for example as sketched below.
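A sketch of such a filter with the Android/Java API (thresholds are illustrative): each detected segment is an {x1, y1, x2, y2} quadruple, so you can keep only long, near-horizontal segments and drop everything else:
// In OpenCV 3+ each segment is one row of 'lines'; in 2.4 it is one column,
// so use lines.get(0, i) there instead.
for (int i = 0; i < lines.rows(); i++) {
    double[] l = lines.get(i, 0);
    double dx = l[2] - l[0], dy = l[3] - l[1];
    double length = Math.sqrt(dx * dx + dy * dy);
    double angle = Math.toDegrees(Math.atan2(dy, dx));
    boolean nearHorizontal = Math.min(Math.abs(angle), 180 - Math.abs(angle)) < 5;
    if (length > 100 && nearHorizontal) {
        // keep: this is one of the long form lines
    }
}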
For the past two weeks I have been trying to create an Android app that tracks points in space as I move my Samsung Galaxy III's camera. In short, I use the OpenCV libraries to try to track said points using the two following methods:
public static void goodFeaturesToTrack(
Mat image,
MatOfPoint corners,
int maxCorners,
double qualityLevel,
double minDistance);
where image is an 8-bit gray image, corners is the output containing the best points to use for tracking, maxCorners is the maximum number of points to obtain, qualityLevel is a fraction such that every returned point must have quality >= bestQualityPointValue * qualityLevel, and minDistance is the minimum distance between returned points. (Link)
public static void calcOpticalFlowPyrLK(
Mat prevImg,
Mat nextImg,
MatOfPoint2f prevPts,
MatOfPoint2f nextPts,
MatOfByte status,
MatOfFloat err);
where prevImg is an 8-bit gray image at time t, nextImg is an 8-bit gray image at time t+dt, prevPts is the matrix containing the 2D (x, y) points to be tracked, nextPts is the OUTPUT matrix containing the NEW POSITIONS of the points, status indicates which points have been tracked (1) and which have not (0), and err contains the error associated with each point whose displacement has been computed. (Link)
So far I have been successful with the goodFeaturesToTrack(...) method, but I am still unable to calculate the FLOW of the points using the calcOpticalFlowPyrLK(...) method.
Here is the chunk of code that takes care of initializing the variables and tracking the points:
private static final double MIN_FEATURE_QUALITY = 0.05;
private static final double MIN_FEATURE_DISTANCE = 4.0;
private Mat prevGray;
private MatOfPoint2f prev2D,next2D;
private MatOfPoint corners;
private MatOfByte status;
private MatOfFloat err;
private Scalar color;
private Size winSize;
private int maxCorners,maxLevel;
...
public void onCameraViewStarted(int width, int height) {
nextGray = new Mat(height, width, CvType.CV_8UC1); //unsigned char
Rscale = new Mat(height, width, CvType.CV_8UC1);
prevGray = new Mat(height, width, CvType.CV_8UC1);
prev2D = new MatOfPoint2f(new Point());
next2D = new MatOfPoint2f(new Point());
status = new MatOfByte();
err = new MatOfFloat();
corners = new MatOfPoint();
maxLevel = 0;
winSize = new Size(21,21);
color = new Scalar(0, 255, 0);
maxCorners = 1;
}
//THIS IS THE METHOD THAT TAKES CARE OF TRACKING THE POINTS
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
nextGray = inputFrame.gray();
Rscale = nextGray;
if(!corners.empty()){
Video.calcOpticalFlowPyrLK(
prevGray,
nextGray,
prev2D,
next2D,
status,
err,
winSize,
maxLevel);
System.out.println("status = " + status.toArray()[0]);
System.out.println("err = " + err.toArray()[0]);
}
prevGray = nextGray;
prev2D = next2D;
for(int i=0;i<next2D.toArray().length;i++){
Core.circle(Rscale, next2D.toArray()[i], 3, color,-1);
}
System.out.println("Prev2D = " + prev2D.toArray()[0].toString());
System.out.println("Next2D = " + next2D.toArray()[0].toString());
return Rscale;
}
THE PROBLEM:
As mentioned earlier, the status parameter tells the user whether the flow has been computed for each point. Using System.out.println(...), I check each point's status and they are all 1. Moreover, I also check the error and get err = 0.0. However, and this is what is killing me, the newly computed points are always the same as the input points (i.e. nextPts = prevPts). That said, sometimes the points change position by tiny, imperceptible amounts, but that rarely happens...
prevGray = nextGray;
this shallow copy leads to both Mats pointing to the same pixel data. So, in the next iteration, when you say:
nextGray = inputFrame.gray();
prevGray gets updated to the very same pixels, too ;)
what you want is a deep copy:
prevGray = nextGray.clone();
prev2D = next2D.clone(); // same story..
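A tiny demonstration of the difference (a sketch; the same reasoning applies to the MatOfPoint2f fields):
Mat a = Mat.zeros(2, 2, CvType.CV_8UC1);
Mat b = a;            // plain assignment: 'b' is the very same Mat object
b.put(0, 0, 255);
a.get(0, 0);          // -> {255.0}: 'a' sees the change too

Mat c = a.clone();    // deep copy: independent pixel data
c.put(0, 0, 0);
a.get(0, 0);          // -> still {255.0}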
I've had this old graphics project lying around (written in Oberon), and since I wrote it as one of my first projects it looks kind of chaotic.
So I decided that, since I'm bored anyway, I would rewrite it in Java.
Everything so far seems to work... until I try to rotate and/or do my eye-point transformation.
If I ignore said operations the image comes out just fine, but the moment I try any of the operations that require multiplying a point with a transformation matrix it all goes bad.
The eye-point transformation generates absurdly small numbers, with end coordinates like [-0.002027571306540029, 0.05938634628270456, -123.30022583847628].
This causes the resulting image to look empty, but if I multiply each point by 1000 it turns out it is just very, very small and, instead of being rotated, has been translated in some (seemingly) random direction.
If I then ignore the eye point and simply focus on my rotations, the results are also pretty strange (note: the image auto-scales depending on the range of coordinates):
setting xRotation to 90° makes the image very narrow and way too high (the resolution should be about 1000x1000 and is then 138x1000)
setting yRotation to 90° makes it very wide (1000x138)
setting zRotation to 90° simply seems to translate the image all the way to the right side of the screen.
What I have checked so far:
I have checked and re-checked my rotation matrices at least 15 times now, so they are (probably) correct
a test multiplication of a vector with the identity matrix does return the original vector
my matrices are initialized as identity matrices prior to being used as rotation matrices
the angles in the files are in degrees but are converted to radians when read.
Having said that, I have two more notes:
a vector in this case is a simple 3-value array of doubles (representing the x, y and z values)
a matrix is a 4x4 array of doubles initialized as the identity matrix
When rotating a point, I apply the operations in this order:
scale (multiplying with a scale factor)
rotate along x-axis
rotate along y-axis
rotate along z-axis
translate
do eye-point transformation
then, if the point isn't already on the z-plane, project it
like so:
protected void rotate() throws ParseException
{
Matrix rotate_x = Transformations.x_rotation(rotateX);
Matrix rotate_y = Transformations.y_rotation(rotateY);
Matrix rotate_z = Transformations.z_rotation(rotateZ);
Matrix translate = Transformations.translation(center.x(), center.y(), center.z());
for(Vector3D point : points)
{
point = Vector3D.mult(point, scale);
point = Vector3D.mult(point, rotate_x);
point = Vector3D.mult(point, rotate_y);
point = Vector3D.mult(point, rotate_z);
point = Vector3D.mult(point, translate);
point = Vector3D.mult(point, eye);
if(point.z() != 0)
{
point.setX(point.x()/(-point.z()));
point.setY(point.y()/(-point.z()));
}
checkMinMax(point);
}
}
Here's the code that initializes the transformation matrices, if you're interested:
public static Matrix eye_transformation(Vector3D eye)throws ParseException
{
double r = eye.length();
double teta = Math.atan2(eye.y(), eye.x());
double zr = eye.z()/r;
double fi = Math.acos(zr);
Matrix v = new Matrix();
v.set(0, 0, -Math.sin(teta));
v.set(0, 1, -Math.cos(teta) * Math.cos(fi));
v.set(0, 2, Math.cos(teta) * Math.sin(fi));
v.set(1, 0, Math.cos(teta));
v.set(1, 1, -Math.sin(teta) * Math.cos(fi));
v.set(1, 2, Math.sin(teta) * Math.sin(fi));
v.set(2, 1, Math.sin(fi));
v.set(2, 2, Math.cos(fi));
v.set(3, 2, -r);
return v;
}
public static Matrix z_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(0, 0, Math.cos(angle));
v.set(0, 1, Math.sin(angle));
v.set(1, 0, -Math.sin(angle));
v.set(1, 1, Math.cos(angle));
return v;
}
public static Matrix x_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(1, 1, Math.cos(angle));
v.set(1, 2, Math.sin(angle));
v.set(2, 1, -Math.sin(angle));
v.set(2, 2, Math.cos(angle));
return v;
}
public static Matrix y_rotation(double angle) throws ParseException
{
Matrix v = new Matrix();
v.set(0, 0, Math.cos(angle));
v.set(0, 2, -Math.sin(angle));
v.set(2, 0, Math.sin(angle));
v.set(2, 2, Math.cos(angle));
return v;
}
public static Matrix translation(double a, double b, double c) throws ParseException
{
Matrix v = new Matrix();
v.set(3, 0, a);
v.set(3, 1, b);
v.set(3, 2, c);
return v;
}
And here is the actual method that multiplies a point with a transformation matrix (note: NR_DIMS is defined as 3):
public static Vector3D mult(Vector3D lhs, Matrix rhs) throws ParseException
{
if(rhs.get(0, 3)!=0 || rhs.get(1, 3)!=0 || rhs.get(2, 3)!=0 || rhs.get(3, 3)!=1)
throw new ParseException("the matrix multiplication thingy just borked");
Vector3D ret = new Vector3D();
double[] vec = new double[NR_DIMS];
double[] temp = new double[NR_DIMS+1];
temp[0] = lhs.x;
temp[1] = lhs.y;
temp[2] = lhs.z;
temp[3] = lhs.infty? 0:1;
for (int i = 0; i < NR_DIMS; i++)
{
vec[i] = 0;
// Multiply the original vector with the i-th column of the matrix.
for (int j = 0; j <= NR_DIMS; j++)
{
vec[i] += temp[j] * rhs.get(j,i);
}
}
ret.x = vec[0];
ret.y = vec[1];
ret.z = vec[2];
ret.infty = lhs.infty;
return ret;
}
I've checked and re-checked this code against my old code (note: the old code works), and it's identical when it comes to the operations.
So I'm at a loss here. I did look around for similar questions, but they didn't really provide any useful information.
Thanks :)
Small addition:
If I ignore both the eye point and the rotations (so I only project the image), it comes out perfectly fine.
I can see that the image is complete, apart from the rotations.
Any more suggestions?
A few possible mistakes I can think of:
In the constructor of Matrix, you are not loading the identity matrix.
You are passing your angles in degrees instead of radians.
Your eye-projection matrix projects into a different range than you think? In OpenGL, all projection matrices project onto the rectangle [(-1,-1),(1,1)], which represents the screen.
Mixing up premultiplication and postmultiplication. That is: I usually do matrix*vector, whereas in your code you seem to be doing vector*matrix, if I'm not mistaken (see the sketch after this list).
Mixing up columns and rows in your Matrix?
I'm going to take another look at your question tomorrow. Hopefully, one of these suggestions helps you.
EDIT: I overlooked that you already checked the first two items.
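To illustrate the premultiply/postmultiply point with made-up numbers: a translation matrix built for row vectors (v' = v * M, which is what the question's mult() implements) stores the offset in its bottom row; feeding the same matrix to a column-vector routine (v' = M * v) moves the translation into the w component instead, and the later divide by w shrinks every coordinate:
double[][] m = {
    {1, 0, 0, 0},
    {0, 1, 0, 0},
    {0, 0, 1, 0},
    {5, 7, 9, 1},          // translation lives in the bottom ROW
};
double[] v = {1, 2, 3, 1}; // homogeneous point

double[] out = new double[4];
for (int i = 0; i < 4; i++)
    for (int j = 0; j < 4; j++)
        out[i] += v[j] * m[j][i]; // v * M -> {6, 9, 12, 1}: translated as intended

// The column-vector form out[i] += m[i][j] * v[j] would instead yield
// {1, 2, 3, 46}: untranslated, and dividing by w = 46 produces exactly the
// kind of "stupidly small numbers" described in the question.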
Alright, I'm currently feeling like a complete idiot. The issue was a simple logic error.
The error sits in this part of the code:
for(Vector3D point : points)
{
point = Vector3D.mult(point, scale);
point = Vector3D.mult(point, rotate_x);
point = Vector3D.mult(point, rotate_y);
point = Vector3D.mult(point, rotate_z);
point = Vector3D.mult(point, translate);
point = Vector3D.mult(point, eye);
if(point.z() != 0)
{
point.setX(point.x()/(-point.z()));
point.setY(point.y()/(-point.z()));
}
checkMinMax(point);
}
I forgot that the variable in an enhanced for loop is just a local reference: reassigning it points that local at a new object and leaves the list itself untouched.
So what I have done is simply replace the old entry in the list with the transformed point, roughly as sketched below.
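A minimal sketch of that fix (assuming points is a java.util.List&lt;Vector3D&gt;):
// index-based loop so the transformed point can be written back into the list
for (int i = 0; i < points.size(); i++) {
    Vector3D point = points.get(i);
    point = Vector3D.mult(point, scale);
    point = Vector3D.mult(point, rotate_x);
    point = Vector3D.mult(point, rotate_y);
    point = Vector3D.mult(point, rotate_z);
    point = Vector3D.mult(point, translate);
    point = Vector3D.mult(point, eye);
    if (point.z() != 0)
    {
        point.setX(point.x() / (-point.z()));
        point.setY(point.y() / (-point.z()));
    }
    checkMinMax(point);
    points.set(i, point); // write the transformed point back
}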
Problem solved.