OpenGL: creating my own camera - Java

I'm trying to create a camera to move around a 3D space and am having some problems setting it up. I'm doing this in Java, and apparently using gluPerspective and gluLookAt together creates a conflict (the screen starts flickering like mad).
gluPerspective is set like this:
gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(50.0f, h, 1.0, 1000.0);
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
I then create a camera matrix, making use of eye coordinates and forward and up vectors (http://people.freedesktop.org/~idr/glu3/form_4.png) (let's assume the code for the camera is correct).
Lastly, before I draw anything I have:
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(camera.matrix);
And then I call my drawing routines (which do some translation/rotation on their own by calling glRotatef and glTranslatef).
Without the call to glMultMatrixf, the camera shows the items I need to see in the centre of the screen, as it should. With glMultMatrixf, however, all I get is a black screen. I tried using glLoadMatrixf instead and it didn't work either. Am I doing something wrong? Am I putting something out of place? If not, and this is how it should be done, let me know and I'll post some of the camera code that might be creating the conflicts.
EDIT: Here is the camera matrix creation code:
private void createMatrix()
{
    float[] f = new float[3]; // forward (centre - eye)
    float[] s = new float[3]; // side (f x up)
    float[] u = new float[3]; // 'new up' (s x f)
    for (int i = 0; i < 3; i++) {
        f[i] = centre[i] - eye[i];
    }
    f = Maths.normalize(f);
    s = Maths.crossProduct(f, upVec);
    u = Maths.crossProduct(s, f);
    float[][] mtx = new float[4][4];
    float[][] mtx2 = new float[4][4];
    // initializing matrices to all 0s
    for (int i = 0; i < mtx.length; i++) {
        for (int j = 0; j < mtx[0].length; j++) {
            mtx[i][j] = 0;
            mtx2[i][j] = 0;
        }
    }
    // mtx  = [ [s] 0, [u] 0, [-f] 0, 0 0 0 1 ]
    // mtx2 = [ 1 0 0 -eye(x), 0 1 0 -eye(y), 0 0 1 -eye(z), 0 0 0 1 ]
    for (int i = 0; i < 3; i++) {
        mtx[0][i] = s[i];
        mtx[1][i] = u[i];
        mtx[2][i] = -f[i];
        mtx2[i][3] = -eye[i];
    }
    mtx[3][3] = 1;
    mtx2[0][0] = 1; mtx2[1][1] = 1; mtx2[2][2] = 1; mtx2[3][3] = 1;
    mtx = Maths.matrixMultiply(mtx, mtx2);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            // this.mtx is a float[16] for glMultMatrixf
            this.mtx[i * 4 + j] = mtx[i][j];
        }
    }
}
I'm hoping the error is somewhere in this piece of code; if not, I'll have a look at my maths functions to see what's going on.
EDIT2: Thought I should mention that at least the initial vectors (eye, centre, up) are correct and do put the camera where it should be (it worked with gluLookAt, but had the flickering issue).

It might be simpler to use glRotatef, glTranslatef, and glFrustum to create the camera, although your math seems fine to me (as long as upVec is actually defined). In most of the 3D graphics I have done, there wasn't really a defined object you wanted to track. I went through various implementations of a 3D camera using gluLookAt before I finally settled on this.
Here is how I tend to define my cameras:
When I create or initialize my camera, I set up the projection matrix with glFrustum. You can use gluPerspective if you prefer:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(left, right, bottom, top, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
After I clear the color and depth buffers for a render pass, I call
glLoadIdentity();
glRotated(orientation.x, 1.0, 0.0, 0.0);
glRotated(orientation.y, 0.0, 1.0, 0.0);
glRotated(orientation.z, 0.0, 0.0, 1.0);
glTranslatef(position.x, position.y, position.z);
to position and orient the camera. Initially, set position and orientation both to zero; then add or subtract from position when a key is pressed, and add or subtract from orientation.x and orientation.y when the mouse is moved. (I generally don't mess with orientation.z.)
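For example, a minimal sketch of that update logic in Java (the position/orientation fields and the onKey/onMouseMove callbacks are hypothetical placeholders for whatever input API you use; the signs depend on your own conventions):
// Hypothetical input handlers; wire these to your windowing toolkit.
void onKey(char key) {
    float step = 0.5f;
    if (key == 'w') position.z += step; // move forward (the world slides toward the camera)
    if (key == 's') position.z -= step; // move backward
    if (key == 'a') position.x += step; // strafe left
    if (key == 'd') position.x -= step; // strafe right
}
void onMouseMove(int dx, int dy) {
    float sensitivity = 0.2f;
    orientation.y += dx * sensitivity; // yaw
    orientation.x += dy * sensitivity; // pitch (orientation.z left alone)
}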
Cheers.

Fixed it, kind of. The problem was using glMultMatrixf(float[] matrix, int offset)... for some reason, if I just use glMultMatrixf(FloatBuffer matrix) it works fine.
There are some issues with the transformations I'm making, but I should be able to deal with those... Thank you for your input though, guys.
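For reference, a minimal sketch of the working call, assuming JOGL (Buffers is JOGL's com.jogamp.common.nio.Buffers helper, and FloatBuffer is java.nio.FloatBuffer):
// Wrap the float[16] in a direct FloatBuffer and use the FloatBuffer overload.
FloatBuffer fb = Buffers.newDirectFloatBuffer(camera.matrix);
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(fb);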

Related

How to draw a line in LibGDX without ShapeRenderer

I am using LibGDX and have an ArrayList of multiple points which I want to connect. I am aware there are several ShapeRenderer methods that would work; however, I'm running a SpriteBatch at the same time, so what do I do now to draw a line between two Vector2s? (If it exists, I'd also like the function that draws multiple lines at once with an Array of Vector2 as a parameter, though it isn't a problem otherwise, as I'd manage with a for-loop.)
I am also aware I can use Pixmaps, but they don't seem to work correctly. Here is my attempt:
// point1 and point2 are of type Vector2; Pixmap's constructor and drawLine take ints
Pixmap pixmap = new Pixmap((int) (point2.x - point1.x), 2, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.drawLine((int) point1.x, (int) point1.y, (int) point2.x, (int) point2.y);
In response to a possible solution that involves using ShapeRenderer at the same time, this problem arises (the second image uses the points with pixmaps, the first the ShapeRenderer with lines)
The code used for the first image is the following:
for (int i = 1; i < dotPositions.size(); i++) {
    sr.line(dotPositions.get(i - 1), dotPositions.get(i));
}
The code used for the second image is the following:
Pixmap pixmap = new Pixmap(2, 2, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.fillCircle(2, 2, 2);
Texture texture = new Texture(pixmap);
for (int i = 1; i < dotPositions.size(); i++) {
    batch.draw(texture, dotPositions.get(i).x, dotPositions.get(i).y);
}
In both cases dotPositions is an ArrayList<Vector2> with the same values.
If anyone in the future is interested: I found the solution, and all I had to do was use ShapeRenderer.setProjectionMatrix(cam.combined) to sync it with the SpriteBatch.
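A minimal sketch of that fix (assuming cam is the same camera the SpriteBatch uses, and remembering that the batch must be ended before the ShapeRenderer begins):
batch.setProjectionMatrix(cam.combined);
batch.begin();
// ... sprite drawing ...
batch.end();
sr.setProjectionMatrix(cam.combined); // sync the ShapeRenderer with the SpriteBatch's camera
sr.begin(ShapeRenderer.ShapeType.Line);
for (int i = 1; i < dotPositions.size(); i++) {
    sr.line(dotPositions.get(i - 1), dotPositions.get(i));
}
sr.end();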

LWJGL Mesh to JBullet collider

I'm working on creating a voxel engine in LWJGL 3, I have all the basics down (chunks, mesh rendering, etc).
Now I'm working on adding physics using JBullet. This is my first time using JBullet directly, but I've used Bullet before in other 3D engines.
From here I gathered that all I needed to do to create a collision object the same shape as my mesh was to plug the vertices and indices into a TriangleIndexVertexArray and use that for a BvhTriangleMeshShape.
Here is my code:
float[] coords = mesh.getVertices();
int[] indices = mesh.getIndices();
if (indices.length > 0) {
    IndexedMesh indexedMesh = new IndexedMesh();
    indexedMesh.numTriangles = indices.length / 3;
    indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
    indexedMesh.triangleIndexStride = 3 * Float.BYTES;
    indexedMesh.numVertices = coords.length / 3;
    indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.vertexBase.asFloatBuffer().put(coords);
    indexedMesh.vertexStride = 3 * Float.BYTES;
    TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
    vertArray.addIndexedMesh(indexedMesh);
    boolean useQuantizedAabbCompression = false;
    BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
    CollisionShape collisionShape = meshShape;
    CollisionObject colObject = new CollisionObject();
    colObject.setCollisionShape(collisionShape);
    colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
    dynamicsWorld.addCollisionObject(colObject);
} else {
    System.err.println("Failed to extract geometry from model.");
}
I know that the vertices and indices are valid as I'm getting them here after drawing my mesh.
This seems to somewhat work, but when I try to drop a cube rigidbody onto the terrain, it collides way above the terrain! (I know the cube is set up correctly, because if I remove the mesh collider it hits the base ground plane at y=0.)
I thought maybe it was a scaling issue (although I don't see how it could be), so I tried changing:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
to:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 0.5f)));
But after changing the scale from 1, it acted like the mesh collider didn't exist.
It's hard to find any resources or code for JBullet surrounding mesh collision, and I've been working on this for almost 2 days, so I'm hoping maybe some of you people who have done it before can help me out :)
Update 1:
I created an implementation of IDebugDrawer so I could draw the debug information in the scene.
To test it, I ran it with just a basic ground plane and a falling cube. I noticed that while the cube is falling the aabb matches the cube size, but when it hits the floor the aabb becomes significantly larger than it was.
I'm going to assume that this is normal Bullet behavior due to collision bouncing, and look at it later, as it doesn't affect my current problem.
I re-enabled the generation of the colliders from the chunk meshes, and saw this:
It appears that the aabb visualization of the chunk is a lot higher than the actual chunk (I know my y positioning of the overall collision object is correct).
I'm going to try to figure out whether I can draw the actual collision mesh or not.
Update 2:
As far as I can see looking at the source, the mesh of the colliders should be drawn in debug, so I'm not sure why it isn't.
I tried changing the box rigidbody to a sphere, and it actually rolled across the top of the visualized aabb for the terrain collider. It just rolled flat, though, and didn't go up or down where there were hills or dips in the terrain, so it was obviously just rolling across the flat top of the aabb.
So after adding in the debug drawer, I was confused as to why the aabb was 2x larger than it should have been.
After spending hours trying little adjustments, I noticed something odd - there was a 0.25 gap between the collider and the edge of the chunk. I proceeded to zoom out and surprisingly noticed this:
An extra row and column of colliders? That doesn't make sense; there should be 5x5 colliders to match the 5x5 chunks.
Then I counted blocks and realized that the colliders were spanning 64 blocks (my chunks are 32x32!).
I quickly realized that this was a scaling issue, and after adding
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
to scale the colliders down by half, everything fit and worked! My "sphere" rolled and came to a stop at a hill in the terrain, like it should.
My full code for converting an LWJGL mesh to a JBullet mesh collider is:
public void addMesh(org.joml.Vector3f position, Mesh mesh) {
    float[] coords = mesh.getVertices();
    int[] indices = mesh.getIndices();
    if (indices.length > 0) {
        IndexedMesh indexedMesh = new IndexedMesh();
        indexedMesh.numTriangles = indices.length / 3;
        indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Integer.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.triangleIndexBase.rewind();
        indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
        indexedMesh.triangleIndexStride = 3 * Integer.BYTES;
        indexedMesh.numVertices = coords.length / 3;
        indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.vertexBase.rewind();
        indexedMesh.vertexBase.asFloatBuffer().put(coords);
        indexedMesh.vertexStride = 3 * Float.BYTES;
        TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
        vertArray.addIndexedMesh(indexedMesh);
        boolean useQuantizedAabbCompression = false;
        BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
        meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
        CollisionShape collisionShape = meshShape;
        CollisionObject colObject = new CollisionObject();
        colObject.setCollisionShape(collisionShape);
        colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
        dynamicsWorld.addCollisionObject(colObject);
    } else {
        System.err.println("Failed to extract geometry from model.");
    }
}
Update 1:
Even though the scaling was the fix for said problem, it caused me to look deeper, and I realized I had mistakenly been using the block size (0.5f) as the mesh scaling factor in my mesh view matrix. Changing that scale to 1, as it should be, fixed it.
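For illustration, the mistake looked something like this (a hypothetical reconstruction, assuming JOML; chunkPosition and the matrix set-up are stand-ins for the real code):
// The chunk's model/view matrix was scaled by the block size (0.5f) by mistake;
// the geometry is already in world units, so the scale belongs at 1.
Matrix4f modelView = new Matrix4f()
        .translate(chunkPosition) // chunkPosition: hypothetical org.joml.Vector3f
        .scale(1.0f);             // was 0.5f, which made rendering and physics disagree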

Loop and change pixel values in Mat OpenCV Android

I am now building a project based on the sample color blob tracking method. I used bounding rectangles around the contours to indicate the blobs. Now I want to improve this algorithm by using an error-correction method. What I do now is simply sum up the pixels in the rect region using the sumElems method, calculate the average intensity, and set it as the new blob detection parameter in each frame. However, this is not accurate, since pixels outside the contour but inside the bounding rect are counted as well, and the result is poor.
To solve that, I used another, straightforward way: loop through each pixel in the rectangle region (which is a submat) and set all pixel values out of range to the desired (or previous) HSV scalar, then sum up all the pixels again and calculate the average intensity. This is much more accurate and easily solves the problem. The trouble is that the program then runs too slowly on the phone (around 1 frame per second), even though the result is accurate.
I found some sources online on how to do it in C++ using Mat.forEach. I do not want to go down the NDK route, and I would like to know if there is a more efficient way to do it in Java (Android).
UPDATE:
It turned out I can solve the problem by simply reducing the sampling rate. Instead of calculating the average intensity over all pixels, a small number of them does the job. My code:
for (int i = 0; i < bounding_rect_hsv.rows(); i += 10) {
    for (int j = 0; j < bounding_rect_hsv.cols(); j += 10) {
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30) {
                data[k] = new_hsvColor.val[k];
            }
        }
        bounding_rect_hsv.put(i, j, data); // puts element back into matrix
    }
}
My source code:
Rect rect = Imgproc.boundingRect(points);
// draw enclosing rectangle (all same color, but you could use variable i to make them unique)
Imgproc.rectangle(original_frame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0, 255), 3);
// Todo: use the bounding rectangle to calculate average intensity (turn the pixels outside the contour to new_hsvColor)
// Just changing the boundary values would be enough
bounding_rect_rgb = original_frame.submat(rect);
Imgproc.cvtColor(bounding_rect_rgb, bounding_rect_hsv, Imgproc.COLOR_RGB2HSV_FULL);
// Todo: change the logic so that pixels outside the contour are changed to new_hsvColor
for (int i = 0; i < bounding_rect_hsv.rows(); i++) {
    for (int j = 0; j < bounding_rect_hsv.cols(); j++) {
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30)
                data[k] = new_hsvColor.val[k];
        }
        bounding_rect_hsv.put(i, j, data); // puts element back into matrix
    }
}
If you want to compute the mean value of the pixels inside a contour, you can simply:
Create a mask, using drawContours with thickness CV_FILLED and color Scalar(255) on a black (Scalar(0)) initialized CV_8UC1 image with the same size as the original image.
Use mean to compute the mean of the pixels under the mask.
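In the Java bindings, that looks roughly like this (a sketch: hsvImage is assumed to be the whole frame already converted to HSV, and contours/idx are assumed to come from your existing blob tracker):
// Build a filled mask for the contour, then average only the pixels under it.
Mat mask = Mat.zeros(hsvImage.size(), CvType.CV_8UC1);
Imgproc.drawContours(mask, contours, idx, new Scalar(255), -1); // thickness -1 = filled
Scalar meanHsv = Core.mean(hsvImage, mask);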
You also don't need to convert every region (Rect) to HSV; you can convert the whole image once, and then access the desired region directly on the HSV image.
In the general case, where you want to sum the pixel values of a lot of rectangular regions, you may prefer to compute the integral image, from which the sum over any rectangle follows from its four corner values.
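A sketch of the integral-image approach in Java (assuming a single-channel grayImage; Imgproc.integral produces a (rows+1) x (cols+1) CV_64F sum image, so each rectangle's sum is just four lookups):
Mat sum = new Mat();
Imgproc.integral(grayImage, sum);
double tl = sum.get(rect.y, rect.x)[0];
double tr = sum.get(rect.y, rect.x + rect.width)[0];
double bl = sum.get(rect.y + rect.height, rect.x)[0];
double br = sum.get(rect.y + rect.height, rect.x + rect.width)[0];
double rectMean = (br - bl - tr + tl) / (rect.width * rect.height);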

Line detection | Angle detection with Java

I'm processing some images that my UGV (Unmanned Ground Vehicle) captures to make it move along a line.
I want to get the angle of that line relative to the horizon. I'll try to explain with a few examples:
The image above would make my UGV keep straight ahead, as the angle is about 90 degrees.
But the following would make it turn left, as the angle compared to the horizon is around 120 degrees.
I could successfully transform those images into the image below, using Otsu thresholding:
And I also used an edge detection algorithm to get this:
But I'm stuck right now trying to find an algorithm that detects those edges/lines and outputs - or helps me output - the angle of such a line.
Here's my attempt using ImageJ:
// Open the image
ImagePlus image = new ImagePlus(filename);
// Make the image 8-bit
IJ.run(image, "8-bit", "");
// Apply a threshold (0 - 50)
ByteProcessor tempBP = (ByteProcessor) image.getProcessor();
tempBP.setThreshold(0, 50, 0);
IJ.run(image, "Convert to Mask", "");
// Analyze the particles
ResultsTable rt = new ResultsTable();
ParticleAnalyzer pa = new ParticleAnalyzer(
        ParticleAnalyzer.SHOW_MASKS + ParticleAnalyzer.IN_SITU_SHOW,
        1023 + ParticleAnalyzer.ELLIPSE,
        rt, 0.0, 999999999, 0, 0.5);
IJ.run(image, "Set Measurements...", "bounding fit redirect=None decimal=3");
pa.analyze(image);
int k = 0;
double maxSize = -1;
for (int i = 0; i < rt.getCounter(); i++) {
    // Criteria for the best oval: the major axis should be much longer
    // than the minor axis. One possible (untested) version; the ratio
    // threshold of 4 is an arbitrary choice.
    double major = rt.getValue("Major", i);
    double minor = rt.getValue("Minor", i);
    if (minor > 0 && major / minor > 4 && major > maxSize) {
        maxSize = major;
        k = i; // let k = best oval
    }
}
double bx = rt.getValue("BX", k);
double by = rt.getValue("BY", k);
double width = rt.getValue("Width", k);
double height = rt.getValue("Height", k);
// Your angle:
double angle = rt.getValue("Angle", k);
double majorAxis = rt.getValue("Major", k);
double minorAxis = rt.getValue("Minor", k);
How the code works:
Convert the image to grayscale.
Apply a threshold to get only the dark areas. This assumes the lines will always be near black.
Apply a Particle Analyzer to find ellipses in the image.
Loop through the "Particles" to find ones that fit our criteria.
Get the angle from our Particle.
Here's an example of what the image looks like when I analyze it:
NOTE: The code is untested. I just converted what I did interactively in ImageJ into Java.

How to flatten 2 different image layers?

I have 2 Mat objects, overlay and background.
How do I put my overlay Mat on top of my background Mat such that only the non-transparent pixels of the overlay Mat completely obscures the background Mat?
I have tried addWeighted(), which combines the 2 Mats, but both "layers" are still visible.
The overlay Mat has a transparent channel while the background Mat does not.
Each pixel in the overlay Mat is either completely transparent or fully opaque.
Both Mats are of the same size.
The function addWeighted won't work, since it uses the same alpha value for all the pixels. To do exactly what you are saying - replace only the background pixels where the overlay is not transparent - you can create a small function for that, like this:
cv::Mat blending(cv::Mat& overlay, cv::Mat& background){
    // must have the same size for this to work
    assert(overlay.cols == background.cols && overlay.rows == background.rows);
    cv::Mat result = background.clone();
    for (int i = 0; i < result.rows; i++){
        for (int j = 0; j < result.cols; j++){
            cv::Vec4b pix = overlay.at<cv::Vec4b>(i,j);
            if (pix[3] != 0){ // alpha 0 = fully transparent; copy only non-transparent overlay pixels
                result.at<cv::Vec3b>(i,j) = cv::Vec3b(pix[0], pix[1], pix[2]);
            }
        }
    }
    return result;
}
A note on the convention: in RGBA/BGRA images, an alpha value of 0 is fully transparent and 255 is fully opaque, so the check above replaces the background only where the overlay pixel is not transparent.
If you want to use the value of the alpha channel as a rate to blend, then change it a little to this:
cv::Mat blending(cv::Mat& overlay, cv::Mat& background){
    // must have the same size for this to work
    assert(overlay.cols == background.cols && overlay.rows == background.rows);
    cv::Mat result = background.clone();
    for (int i = 0; i < result.rows; i++){
        for (int j = 0; j < result.cols; j++){
            cv::Vec4b pix = overlay.at<cv::Vec4b>(i,j);
            double alphaRate = 1.0 - pix[3] / 255.0;
            result.at<cv::Vec3b>(i,j) = (1.0 - alphaRate) * cv::Vec3b(pix[0], pix[1], pix[2]) + result.at<cv::Vec3b>(i,j) * alphaRate;
        }
    }
    return result;
}
Sorry for the code being in C++ and not in Java, but I think you can get the idea. Basically it is just a loop over the pixels, changing the pixels in the copy of the background to those of the overlay if they are not transparent.
* EDIT *
I will answer your comment with this edit, since it may take some space. The problem is how an OpenCV matrix is laid out: for an image with alpha, the data is organized as an array like BGRA BGRA ... BGRA, and the basic operations like add, multiply and so on work on matrices with the same dimensions. You can always try to separate the matrix with split (this rewrites the matrix, so it may be slow), then convert the alpha channel to double (again, a rewrite), and then do the multiplication and addition of the matrices. It should be faster, since OpenCV optimizes these functions... you can also do this on the GPU...
Something like this:
cv::Mat blending(cv::Mat& overlay, cv::Mat& background){
    std::vector<cv::Mat> channels;
    cv::split(overlay, channels);
    channels[3].convertTo(channels[3], CV_64F, 1.0 / 255.0);
    cv::Mat newOverlay, result;
    cv::merge(channels, newOverlay);
    result = newOverlay * channels[3] + ((1 - channels[3]) * background);
    return result;
}
Not sure if OpenCV allows a CV_8U to be multiplied by a CV_64F, or whether this will be faster or not... but it may be.
Also, the versions with loops have no problem with threads, so they can be parallelized... running in release mode also greatly increases the speed, since OpenCV's .at function does several asserts that are skipped in release mode. Not sure whether that can be changed in Java, though...
I was able to port api55's edited answer to Java:
private void merge(Mat background, Mat overlay) {
    List<Mat> backgroundChannels = new ArrayList<>();
    Core.split(background, backgroundChannels);
    List<Mat> overlayChannels = new ArrayList<>();
    Core.split(overlay, overlayChannels);
    // compute "alphaRate = 1 - overlayAlpha / 255"
    Mat overlayAlphaChannel = overlayChannels.get(3);
    Mat alphaRate = new Mat(overlayAlphaChannel.size(), overlayAlphaChannel.type());
    Core.divide(overlayAlphaChannel, new Scalar(255), alphaRate);
    Core.absdiff(alphaRate, new Scalar(1), alphaRate);
    for (int i = 0; i < 3; i++) {
        // compute "(1 - alphaRate) * overlay"
        Mat overlayChannel = overlayChannels.get(i);
        Mat temp = new Mat(alphaRate.size(), alphaRate.type());
        Core.absdiff(alphaRate, new Scalar(1), temp);
        Core.multiply(temp, overlayChannel, overlayChannel);
        temp.release();
        // compute "background * alphaRate"
        Mat backgroundChannel = backgroundChannels.get(i);
        Core.multiply(backgroundChannel, alphaRate, backgroundChannel);
        // compute the merged channel
        Core.add(backgroundChannel, overlayChannel, backgroundChannel);
    }
    alphaRate.release();
    Core.merge(backgroundChannels, background);
}
It is a lot faster compared to the double nested loop calculation.
